Data Acquisition Idea Exchange

Hello,

 

Some applications need voltage levels of 3.3 V, 2.5 V, or 1.8 V (low-voltage IC families).

In these cases we need to operate the digital/counter outputs at the desired voltage level.

It would be good if NI provided this.
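To make the request concrete: DAQmx already exposes a logic-family property on a few digital devices, and this idea is essentially to extend it to more devices, to counter outputs, and to 1.8 V. A minimal C sketch, assuming a device that supports the existing DO Logic Family property (I believe some NI 653x devices do); "Dev1" is a placeholder:

```c
#include <NIDAQmx.h>

int main(void)
{
    TaskHandle task = 0;

    DAQmxCreateTask("", &task);
    DAQmxCreateDOChan(task, "Dev1/port0", "", DAQmx_Val_ChanForAllLines);

    /* Select the output logic level. Only 2.5 V, 3.3 V and 5 V values
       are defined today -- there is no DAQmx_Val_1point8V, and counter
       outputs have no equivalent property, which is what this idea
       asks for. */
    DAQmxSetDOLogicFamily(task, "Dev1/port0", DAQmx_Val_3point3V);

    DAQmxStartTask(task);
    /* ... write samples ... */
    DAQmxClearTask(task);
    return 0;
}
```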

Also, with reference to the following link,

https://forums.ni.com/t5/Data-Acquisition-Idea-Exchange/0-PWM-duty-cycle-with-DAQmx/idi-p/3819545

it would also be useful if a PWM output could be set to 0% duty cycle (idle low) and 100% duty cycle (idle high).

 

BR & Stay Healthy

It is a frequent requirement to make measurements on production lines. Position on these is often tracked with rotary encoders (https://en.wikipedia.org/wiki/Rotary_encoder). Many NI devices can accept the quadrature pulse train from such a device and correctly produce a current position count. The information in the two-phase pulse train allows the counter to correctly track forward and reverse motion.

 

What would be very useful is a callback in NI-DAQmx that is called after every n pulses, ideally with a flag to indicate whether the counter is higher or lower than the previous value, i.e. the direction.

 

This has recently been discussed on the Multifunction DAQ board here: http://forums.ni.com/t5/Multifunction-DAQ/quadrature-encoder-based-triggering/td-p/3125468 . So I am not alone in requesting something more programmer-friendly than the workaround offered there.
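In the meantime, the closest I can get is a buffered position-measurement task clocked off one encoder phase, with an Every N Samples event whose callback compares consecutive counts to recover the direction. A rough C sketch (device, counter, and PFI names are placeholders, and wrap-around of the 32-bit count is ignored for brevity):

```c
#include <stdio.h>
#include <NIDAQmx.h>

static uInt32 lastCount = 0;

/* Called after every N position samples; comparing with the previous
   count recovers the direction flag this idea asks for directly. */
int32 CVICALLBACK EveryN(TaskHandle task, int32 eventType,
                         uInt32 nSamples, void *data)
{
    uInt32 buf[10];
    int32  read = 0;

    DAQmxReadCounterU32(task, (int32)nSamples, 1.0, buf, 10, &read, NULL);
    if (read > 0) {
        printf("count=%lu direction=%s\n", (unsigned long)buf[read - 1],
               buf[read - 1] >= lastCount ? "forward" : "reverse");
        lastCount = buf[read - 1];
    }
    return 0;
}

int main(void)
{
    TaskHandle task = 0;

    DAQmxCreateTask("", &task);
    /* X4-decoded quadrature position on ctr0, counted in ticks. */
    DAQmxCreateCIAngEncoderChan(task, "Dev1/ctr0", "", DAQmx_Val_X4,
                                0, 0.0, DAQmx_Val_AHighBHigh,
                                DAQmx_Val_Ticks, 1024, 0.0, "");
    /* Latch the count on every rising edge of phase A (wired to PFI8). */
    DAQmxCfgSampClkTiming(task, "/Dev1/PFI8", 100000.0, DAQmx_Val_Rising,
                          DAQmx_Val_ContSamps, 1000);
    /* Fire the callback after every 10 encoder pulses. */
    DAQmxRegisterEveryNSamplesEvent(task, DAQmx_Val_Acquired_Into_Buffer,
                                    10, 0, EveryN, NULL);
    DAQmxStartTask(task);
    getchar();
    DAQmxClearTask(task);
    return 0;
}
```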

 

 

I bought an NI USB-6251 BNC, but support explained to me that it has no Linux support out of the box. I will now have to find out how to use it on Linux systems myself (perhaps with the help of the forum). It would be a nice feature if it shipped with Linux support.

The term "Incomplete Sample Detection" comes from DAQmx Help.  It affects buffered time measurement tasks on X-series boards, the 661x counter/timers, and many 91xx series cDAQ chassis.  It is meant to be a feature, but it can also be a real obstacle.

 

How the feature works ideally: Suppose you want to configure a counter task to measure buffered periods of a 1-channel encoder.  You use implicit timing because the signal being measured *is* the sample clock.  The 1st "sample clock" occurs on the 1st encoder edge after task start, but the time period it measures won't represent a complete encoder interval.  Reporting this 1st sample could be misleading as it measures the arbitrary time from the software call to start the task until the next encoder edge.

   On newer hardware with the "Incomplete Sample Detection" feature, this meaningless 1st sample is discarded by DAQmx.  On older hardware, this 1st sample was returned to the app, and it was up to the app programmer to deal with it.

 

Problem 1: Now suppose I'm also using this same encoder signal as an external sample clock for an AI task that I want to sync with my period measurement task.  Since DAQmx is going to discard the counter sample that came from the 1st edge, my first 5 samples will correspond to edges 2-6.  Over on the AI task, my first 5 samples will correspond to edges 1-5.

   My efforts to sync my tasks are now thwarted because their data streams start out misaligned.  The problem and workaround I'm left with are at least as troublesome as the one that was "solved" by this feature.

 

Problem 2:  Suppose I had a system where my period measurement task also had an arm-start trigger, and I depended on a cumulative sum of periods to be my master time for the entire system.  In this case, the 1st sample is the time from the arm-start trigger to the 1st encoder edge, and it is *entirely* meaningful.  On newer hardware, DAQmx will discard it and I'll have *no way* to know my timing relative to this trigger. 

   Older boards (M-series, 660x counter/timers) could handle this situation just fine. On newer boards, I'm stuck with a much bigger problem than the one that the feature was meant to solve.

 

So can we please have a DAQmx property that allows us to turn this "feature" OFF?  I understand that it'd have to be ON by default so as not to break existing code.
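For concreteness, here's the kind of task this affects, with the property I'm asking for sketched in as a comment (it does not exist in today's API; the name is made up):

```c
#include <NIDAQmx.h>

int main(void)
{
    TaskHandle task = 0;

    DAQmxCreateTask("", &task);
    /* Buffered period measurement: the measured signal *is* the
       implicit sample clock. */
    DAQmxCreateCIPeriodChan(task, "Dev1/ctr0", "", 1e-6, 1.0,
                            DAQmx_Val_Seconds, DAQmx_Val_Rising,
                            DAQmx_Val_LowFreq1Ctr, 10.0, 4, "");
    DAQmxCfgImplicitTiming(task, DAQmx_Val_ContSamps, 10000);

    /* Proposed property (hypothetical, not in the current API):
       keep the first sample instead of silently discarding it.
       DAQmxSetCIIncompleteSampleDetection(task, "", 0);            */

    DAQmxStartTask(task);
    /* ... DAQmxReadCounterF64 ... */
    DAQmxClearTask(task);
    return 0;
}
```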

 

 

-Kevin P

When a DI change detection task runs, the first sample shows the DI state *after* the first detected change.  There's no clear way to know what the DI state was just *before* the first detected change, i.e. its *initial* state.

 

This idea has some overlap with one found here, but this one isn't restricted to usage via DAQmx Events and an Event Structure.  Forum discussions that prompted this suggestion can be seen here and here.

 

The proposal is to provide an addition to the API such that an app programmer can determine both initial state just before the first detected change and final state resulting from each detected change.  The present API provides only the latter.

 

Full state knowledge before and after each change can be used to identify the changed lines.  (Similarly, initial state and change knowledge could be used to identify post-change states.)

 

My preferred approach in the linked discussions is to expose the initial state through a queryable property node.  The original poster preferred to have a distinct task type in which initial state would be the first returned sample.  A couple good workarounds were proposed in those threads by a contributor from NI, but I continue to think direct API support would be appropriate.
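A sketch of where that would sit, with the queryable initial-state call marked as hypothetical (the function name is made up; everything else is the standard change detection setup):

```c
#include <NIDAQmx.h>

int main(void)
{
    TaskHandle task = 0;
    uInt32 initialState = 0;

    DAQmxCreateTask("", &task);
    DAQmxCreateDIChan(task, "Dev1/port0", "", DAQmx_Val_ChanForAllLines);
    /* Sample on rising and falling edges of any line on port0. */
    DAQmxCfgChangeDetectionTiming(task, "Dev1/port0", "Dev1/port0",
                                  DAQmx_Val_ContSamps, 1000);
    DAQmxStartTask(task);

    /* Proposed property (hypothetical): the port state latched at task
       start, i.e. just *before* the first detected change.
       DAQmxGetDIInitialState(task, "Dev1/port0", &initialState);   */

    /* ... read one sample per detected change with DAQmxReadDigitalU32;
       XOR the initial state with the first sample (and consecutive
       samples with each other) to identify the changed lines ... */
    DAQmxClearTask(task);
    return 0;
}
```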

 

 

-Kevin P

I have an NI-DAQmx/C++ data acquisition program where I am continuously acquiring 5 channels of data at 40 kHz/channel and reading them in 0.1-second chunks.  This works perfectly for over 14 hours of continuous acquisition, but at 14 hours, 54 minutes and 47 seconds the program hangs due to an overflow in the int32 DAQmxInternalAIbuffer_Offset value sent to the DAQmxSetReadOffset() function.  With the DAQmxSetReadRelativeTo() function, I set the offset relative to the first sample using DAQmx_Val_FirstSample.  (See "32-bit limitation of the NI-DAQmx int32 DAQmxSetReadOffset() function?")

 

It would be very helpful for the DAQmxSetReadOffset() offset value to be 64 bits rather than the current int32.  This would make the function analogous to DAQmxGetReadTotalSampPerChanAcquired(), which returns a 64-bit value.  I understand that the offset is maintained internally as a 64-bit value, so perhaps this would not be too difficult to do.
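The arithmetic checks out: 2^31 samples at 40,000 samples/s is about 53,687 s, i.e. exactly 14 hr 54 min 47 s. A sketch of the current call next to the proposed 64-bit variant (the 64-bit function is hypothetical):

```c
#include <NIDAQmx.h>

/* At 40 kHz/channel a signed 32-bit sample offset overflows after
   2^31 / 40000 = 53687 s = 14 hr 54 min 47 s -- exactly where the
   acquisition hangs. */
void seek_to_sample(TaskHandle task, int64 sampleIndex)
{
    DAQmxSetReadRelativeTo(task, DAQmx_Val_FirstSample);

    /* Today's API: the offset is an int32 and wraps past 2^31 - 1. */
    DAQmxSetReadOffset(task, (int32)sampleIndex);

    /* Proposed (hypothetical) 64-bit variant:
       DAQmxSetReadOffset64(task, sampleIndex);                     */
}
```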

 

I hope that National Instruments fixes this limitation in its API, not just for 64-bit Windows but also for 32-bit Windows, because a lot of us are still using 32-bit compilers and our users are using Windows XP.  Perhaps it could be implemented as a separate DAQmxSetReadOffset64() function for 32-bit Windows.

 

Thank you,
Bill Anderson

 

Every time I have to work with an NI DAQ device, the first thing I need to know is which pins can or can't do something.

Currently this involves looking through something like 7 different documents to find little bits of information and bring them back to your application.

 

A block diagram could easily be a reference point for the rest of the documentation (you want to know about pin I/O for your device? Look at this document).

Plus, a good block diagram can tell you what you need to know quickly and clearly. A picture is worth 1000 words?

 

Some might find the current documentation adequate, but personally I would really like to have a block diagram that represents the internals and capabilities of the pins and the device in general. Most microcontrollers have this, and it is an extremely useful tool. So why not have one for DAQ devices as well?

When using a buffered counter output task, the initial delay value is not used at all.  Instead, the user specifies an array of high and low times and the first low time is used as the initial delay.

 

If the output pulse train is repeated multiple times (or continuously), the first low time represents both the time from the trigger until the first pulse as well as the time between the last pulse and the first pulse.

 

It would be desirable to decouple these parameters by allowing for the option to use Initial Delay on buffered counter output tasks (e.g. with a channel property).  Here are a couple of use cases off the top of my head where Initial Delay would be very helpful (if not required); a code sketch follows the list:

 

1.  This is the case I ran into (posted here): if you want to repeat a pulse train continuously every N seconds, you have to either have that N-second delay at the start of the task or use another counter as a trigger source.  Depending on the high and low times, you might be able to get away with writing new values to the counter on the fly, but this isn't a universal solution.

 

2.  If you wanted to synchronize multiple continuous buffered counter output tasks (each sharing a fixed desired period) to a common trigger source but with different initial delays, you would be unable to do so, since the requirement of having a different initial delay would affect the period of your actual signal.  You would have to compensate by tweaking the other high/low times in your waveform (giving you something that you don't really want).
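Here is the sketch, in C, showing both the ignored initialDelay argument and the double duty pulled by the first low time (placeholder names and times):

```c
#include <NIDAQmx.h>

int main(void)
{
    TaskHandle task = 0;
    /* High/low times in seconds for a 3-pulse buffered train. */
    float64 high[3] = {0.001, 0.001, 0.001};
    float64 low[3]  = {5.000, 0.001, 0.001};  /* low[0] serves as BOTH the
                                                 initial delay AND the gap
                                                 between repetitions */

    DAQmxCreateTask("", &task);
    /* The initialDelay argument (0.0 below) is ignored for buffered
       output -- a channel property to honor it is what this idea
       proposes. */
    DAQmxCreateCOPulseChanTime(task, "Dev1/ctr0", "", DAQmx_Val_Seconds,
                               DAQmx_Val_Low, 0.0, 0.001, 0.001);
    DAQmxCfgImplicitTiming(task, DAQmx_Val_ContSamps, 3);
    DAQmxWriteCtrTime(task, 3, 0, 10.0, DAQmx_Val_GroupByChannel,
                      high, low, NULL, NULL);
    DAQmxStartTask(task);
    /* ... */
    DAQmxClearTask(task);
    return 0;
}
```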

Hello,

 

DAQmx VIs only work when the NI Device Loader service is running.

 

If this Windows service is not running, DAQmx functions generate errors like "Device not found...", "undefined board", "undefined hardware", and so on.

 

For some weeks I was getting such an error, and it took me a long time to pinpoint the real cause!

A Windows or software update had changed the service startup sequence, and the NI Device Loader service was no longer starting.

The solution to this problem was to configure the NI Device Loader service to force a restart on start failure.

 

It would be nice if DAQmx functions could generate the "right" error.

 

An error like: "NI services are not running; please check their current state... DAQmx devices cannot be accessed when the NI Device Loader service is not running."

 

This problem also causes problems in MAX! (The device tree view takes a long time to expand, and the device self-test fails.)

 

 

Thanks for giving kudos to this idea, which could help with understanding Windows services problems.

Is there any technical reason why this cannot be added to DAQmx?  M series boards still have features that cannot be found on the X or S series, such as analog current input.

Ideally, it would be best to be able to have multidevice tasks spanning both M and X series at the same time.
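For reference, a multidevice task today is just a channel list spanning devices of the same family (e.g. two X series cards in one PXI chassis); the sketch below marks where a mixed M/X list currently fails ("PXI1Slot..." names are placeholders):

```c
#include <NIDAQmx.h>

int main(void)
{
    TaskHandle task = 0;

    DAQmxCreateTask("", &task);
    /* Works today: two X series devices in one AI task. Adding an
       M series device to this list (say a PXI-6251 in Slot4) is
       rejected, which is what this idea would change. */
    DAQmxCreateAIVoltageChan(task, "PXI1Slot2/ai0, PXI1Slot3/ai0", "",
                             DAQmx_Val_Cfg_Default, -10.0, 10.0,
                             DAQmx_Val_Volts, NULL);
    DAQmxCfgSampClkTiming(task, NULL, 1000.0, DAQmx_Val_Rising,
                          DAQmx_Val_ContSamps, 1000);
    DAQmxStartTask(task);
    /* ... */
    DAQmxClearTask(task);
    return 0;
}
```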

The title pretty much says it all. I would like the ability either to configure a full hardware complement as simulated devices and then switch them over to real devices when the hardware arrives, or to go from real devices to simulated devices, without the need to add new, discrete simulated devices to MAX.

 

This would make for much easier offline development and ultimate deployment to real hardware.

I continually come to your site looking for the DAQmx Base API manuals and have yet to find them.  I eventually have to dig out an old CD to find my copy.

 

How 'bout posting these online so that we can help ourselves out of jams?

 

Thanks,

Jeff

Hi,

 

It would be really nice to be able to set user-defined scales in MAX and DAQmx tasks.

It's possible to do this with common analogue input modules like the NI-9203, NI-9205 and so on, but I miss this option on the NI-9212, NI-9217 and so on.

 

The goal is to apply a calibrated scaling to several sensors, to get the best results and accuracy.
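The API-level equivalent already exists, which is why this feels like a MAX limitation rather than a driver one. A minimal C sketch of a calibrated linear scale (placeholder names and coefficients, shown on a voltage channel for simplicity):

```c
#include <NIDAQmx.h>

int main(void)
{
    TaskHandle task = 0;

    /* Two-point calibration expressed as a linear scale:
       scaled = 1.0023 * prescaled - 0.15  (placeholder numbers). */
    DAQmxCreateLinScale("SensorCal", 1.0023, -0.15,
                        DAQmx_Val_Volts, "degC");

    DAQmxCreateTask("", &task);
    DAQmxCreateAIVoltageChan(task, "cDAQ1Mod1/ai0", "",
                             DAQmx_Val_Cfg_Default, -100.0, 100.0,
                             DAQmx_Val_FromCustomScale, "SensorCal");
    /* ... */
    DAQmxClearTask(task);
    return 0;
}
```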

 

Thanks for upvoting my idea!

Yves

Currently, when streaming analog or digital samples to a DAQ board, the output stays at the level of the last sample received when a buffer underflow occurs. This behavior can be observed on USB X Series multifunction DAQ boards; I have the USB-6363 model. The exact mode is hardware-timed, buffered, continuous, and non-regenerating. The buffer underflow error code is -200290: “The generation has stopped to prevent the regeneration of old samples. Your application was unable to write samples to the background buffer fast enough to prevent old samples from being regenerated.”

 

I would like to have an option to configure the DAQ hardware to immediately set the analog and digital outputs to a predefined state if a buffer underrun occurs. Also, I would like an option to immediately assert one of the PFI pins on buffer underrun.

 

I believe this could be accomplished by modifying the X Series firmware and providing configuration of this feature in the DAQmx API. If no more samples are available in the buffer, the DAQ board should immediately write the predefined digital states/analog levels to the outputs and indicate the buffer-underrun state on the PFI line. Then it should report the error to the PC.

 

Doing this in firmware has certain advantages:

  1. It can be done quickly (possibly within the time of the next missing sample – at 2 MS/s that’s 0.5 µs).
  2. It handles all situations (software lockups, excessive CPU loading by other processes, loss of communication due to bus traffic, interface disconnection, ...).
  3. It does not require any additional hardware (to turn off outputs externally).
  4. Buffer-underrun indication on a PFI line could provide an additional safety measure (it could be used, for example, to immediately disable an external power amplifier connected to the DAQ AO).

Doing this using other methods is just too slow, does not handle all situations, or requires additional external circuitry.

 

Setting the outputs from software, once the error occurs, is slow (~25 ms, the time of 50,000 samples at 2 MS/s) and does not handle physical disconnection of the interface. The analog output does eventually go to 0 V on the USB-6363 when the USB cable is disconnected, but it takes about half a second.
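For illustration, that software fallback amounts to something like the sketch below (C API; the safe-state task is a second, on-demand AO task prepared in advance). It works, but with the latency and disconnection blind spots described above:

```c
#include <NIDAQmx.h>

/* Software fallback: catch the underflow on write and drive a safe
   state from a pre-created on-demand AO task. Reaction time is tens
   of milliseconds -- too slow at 2 MS/s, hence this idea. */
void stream_chunk(TaskHandle streamTask, TaskHandle safeTask,
                  const float64 *chunk, int32 samplesPerChan)
{
    float64 safe[2] = {0.0, 0.0};   /* one safe level per AO channel */
    int32 err = DAQmxWriteAnalogF64(streamTask, samplesPerChan, 0, 10.0,
                                    DAQmx_Val_GroupByChannel,
                                    chunk, NULL, NULL);
    if (err < 0) {                  /* e.g. -200290: buffer underflow */
        DAQmxStopTask(streamTask);
        DAQmxWriteAnalogF64(safeTask, 1, 1, 10.0,   /* autostart */
                            DAQmx_Val_GroupByChannel, safe, NULL, NULL);
    }
}
```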

 

Using the watchdog timer would also be too slow. The timer can be set to quite a short time, but from software I would not be able to reset it faster than every 10 ms. It would also require switching off the analog channels externally with additional circuitry, because the watchdog timer is not available for analog channels.

 

The only viable solution right now is to route the task sample clock to a PFI line and detect when it stops toggling; it actually does stop after the last sample is programmed. Once that occurs, the outputs can be switched off externally. This requires a whole lot of external circuitry and major development time. If you need the reaction time to be within one or two samples, the pulse detector needs to be customized for every possible sampling rate you might want to use. To make this work right for analog output, it would take a RISC microcontroller and analog electronic switches. If you wanted to use an external trigger to start the waveform, the microcontroller would have to turn on the analog switch, look for the beginning of the waveform sample clock, record the initial clock interval as a reference, and finally turn off the switch if no pulse is received within the reference time.

 

I’m actually quite impressed by how well the USB-6363 handles streaming to outputs. It allows me to output waveforms with a complexity that regular arbitrary generators with fixed memory and sequencing simply cannot handle. Buffer underflow, even at the highest sampling rate, is quite rare. However, to make my system robust and safe, I need a fast, simple, and reliable method of quickly shutting down the outputs, and only a hardware/firmware solution can provide that.

 

Thanks,

Sebastian

Since version 19, NI has stopped distributing the "runtime only" installer for DAQmx. This is a terrible idea.

Here is the thread where no good solution was given.

https://forums.ni.com/t5/Multifunction-DAQ/DaqMX-19-0-runtime/m-p/3994378#M98483

 

Please, NI, if you find it easy enough for the user to build an installer, do it yourself, like you did before.

It has come up a few times from customers, and I wanted to gauge interest and solicit ideas on how this should work.

 

Currently, with the built-in TDMS logging support, if you want to change to a new file in the middle of logging, you need to stop the task and start again.  For some use cases, this isn't practical (for example, http://forums.ni.com/t5/LabVIEW/Why-the-TDMS-file-is-larger-than-it-should-be/m-p/1176139#M511099).

 

The question is: How would you like to specify the "new file" behavior and what are your use cases?

 

For instance, a couple ideas to get the ball rolling:

  1. Add an interval attribute like "Change file after n samples".   We would then auto-increment the file name and change to that file when we have logged "n" samples.
  2. Make the file path attribute changeable at runtime.  We have a file path attribute for logging; the idea here would be to support changing the file path "on the fly" without stopping and starting the task.  The problem is that it would not suit a use case where you want a specific file size very well.  Additionally, it wouldn't be as easy to use as #1; it would be more flexible, though.
  3. (Any additional ideas/use cases?)
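To make idea #2 concrete, this is roughly where it would surface in the C API; DAQmxConfigureLogging and the logging file-path property exist today, but changing the path currently requires a stopped task:

```c
#include <NIDAQmx.h>

int main(void)
{
    TaskHandle task = 0;

    DAQmxCreateTask("", &task);
    DAQmxCreateAIVoltageChan(task, "Dev1/ai0", "", DAQmx_Val_Cfg_Default,
                             -10.0, 10.0, DAQmx_Val_Volts, NULL);
    DAQmxCfgSampClkTiming(task, NULL, 10000.0, DAQmx_Val_Rising,
                          DAQmx_Val_ContSamps, 10000);
    /* Built-in TDMS logging to the first file. */
    DAQmxConfigureLogging(task, "C:\\data\\log_000.tdms",
                          DAQmx_Val_LogAndRead, "run",
                          DAQmx_Val_OpenOrCreate);
    DAQmxStartTask(task);

    /* Idea #2: allow this while the task is running, switching files
       "on the fly" (today it requires stopping the task first). */
    DAQmxSetLoggingFilePath(task, "C:\\data\\log_001.tdms");

    /* ... */
    DAQmxClearTask(task);
    return 0;
}
```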

Thank you for your input!

 

Andy McRorie

NI R&D

What I'm currently thinking (because it seems the most flexible for different criteria and situations) is to simply allow you to set the file path property (on the DAQmx Read property node) while the task is running.  The only downside I can think of with this approach is that you wouldn't know exactly when we change to the new file.  We could guarantee it within (for example) 1 second, but you wouldn't be able to specify the exact size.

 

Would this be a good solution for you?  Can you think of a better way to specify this behavior?

 

I use DAQmx a lot for writing .NET-based measurement software.

 

Whereas the API itself is quite decent, the docs are horrible. Accessing them is convoluted at best, requiring the VS help viewer. Almost nothing is available online, and decent examples are quite scarce, which will definitely be an issue for absolute beginners...

 

This definitely deserves some attention!

 

Cheers,

 

Kris

Would it be possible to update the export wizard in MAX so that the NI-DAQmx Tasks list under Data Neighborhood is listed in alphabetical order?  In the main MAX application the list is in order, so finding tasks that are named with a common prefix is easy.  However, in the export wizard you have to scroll and hope you clicked them all.

 

Thanks,

-Brian

Certified LabVIEW Developer

Lead Engineer - LabVIEW

Advanced Development

GE Appliances

 

T 502-452-3831

F 502-452-0467

D *334-3831

E brian.schork@ge.com

In NI MAX, mV/(m/s^2) should be added to the available choices in the dropdown menu for accelerometer sensitivity units.

 

(Currently the only available choices are mV/g and V/g. For reference, a 100 mV/g sensor is about 10.2 mV/(m/s^2), since 1 g = 9.80665 m/s^2.)

It gets a bit annoying that PXI1Slot2 is listed after PXI1Slot14 when doing an ASCII sort. I (OK, admittedly, my coworker) propose naming conventions that allow for a better ASCII sort, for instance PXI1Slot002 and PXI1Slot014.
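A tiny C illustration of why the padding matters: ASCII comparison is character by character, so the '1' in "14" loses to the '2' before the lengths are ever considered.

```c
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* < 0: "PXI1Slot14" sorts before "PXI1Slot2" ('1' < '2'). */
    printf("%d\n", strcmp("PXI1Slot14", "PXI1Slot2"));

    /* < 0: zero-padded names sort in the expected order. */
    printf("%d\n", strcmp("PXI1Slot002", "PXI1Slot014"));
    return 0;
}
```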