Data Acquisition Idea Exchange


I've been in many threads, and seen many, many more, where the root issue stems from confusion about the way DAQmx Timing and DAQmx Read interpret the meaning of "# samples" very differently in Finite Sampling vs. Continuous Sampling mode. (For example, here's just one of the times I tried to address that confusion.)

 

First, here's what causes the confusion:

 

  • The 'samples per channel' input to DAQmx Timing is *crucial* for Finite Sampling tasks and usually *ignored* for Continuous Sampling tasks.
  • The 'number of samples per channel' input to DAQmx Read has a default value of -1 when left unwired. However, the *meaning* of this default value is *VERY* different, resulting in very different behavior depending on whether the task is configured for Finite or Continuous sampling. (See the first link I referenced, and the sketch just below.)
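
To make the difference concrete, here's a minimal sketch using the nidaqmx Python API (the LabVIEW VIs behave the same way; the device name "Dev1" is an assumption):

    import nidaqmx
    from nidaqmx.constants import AcquisitionType, READ_ALL_AVAILABLE

    with nidaqmx.Task() as task:
        task.ai_channels.add_ai_voltage_chan("Dev1/ai0")
        # Finite: samps_per_chan=1000 defines the total acquisition size...
        task.timing.cfg_samp_clk_timing(
            rate=1000.0, sample_mode=AcquisitionType.FINITE, samps_per_chan=1000)
        # ...and the -1 ("all available") default waits for all 1000 samples.
        data = task.read(number_of_samples_per_channel=READ_ALL_AVAILABLE)

    with nidaqmx.Task() as task:
        task.ai_channels.add_ai_voltage_chan("Dev1/ai0")
        # Continuous: samps_per_chan is only a buffer-size hint, not a total.
        task.timing.cfg_samp_clk_timing(
            rate=1000.0, sample_mode=AcquisitionType.CONTINUOUS, samps_per_chan=1000)
        task.start()
        # The same -1 default now returns whatever happens to be in the buffer
        # at that instant -- possibly zero samples -- which is what surprises
        # rookie users.
        data = task.read(number_of_samples_per_channel=READ_ALL_AVAILABLE)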

While the relevant info is findable in the help, it clearly often remains unfound. I got to wondering whether some changes in the DAQmx API could help.

 

I'll describe one approach, but I'd definitely be open to better solutions. The goal is simply to find *some* way to reduce the likelihood of confusion for rookie DAQmx users.

 

I picture adding more polymorphic instances to both the DAQmx Timing and DAQmx Read VIs, so there can be distinct instances for Finite vs. Continuous sampling.

 

Further, I picture that the task refnum would carry sufficient type info related to this timing config, such that downstream DAQmx functions can "know" what kind of Timing was set up -- Finite, Continuous, on-demand (the default if DAQmx Timing was never called at all), etc.

 

Then when that task refnum is wired into DAQmx Read, the most appropriate instance of DAQmx Read would show up. And the corresponding input parameter names, help, default values, and default value *behavior* could all be tailored to that particular instance of DAQmx Read. For example, perhaps the "# samples" input should become a *required* input for Continuous Sampling tasks, to force a decision and encourage further inspection of the advanced help.
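
A purely hypothetical sketch of what that tailoring might look like in API terms (these function names are invented for illustration only): the continuous variant *requires* a sample count, forcing the decision the current -1 default hides.

    def read_finite(task):
        """Reads the full acquisition defined by DAQmx Timing; no count needed."""
        ...

    def read_continuous(task, samples_per_channel):
        """samples_per_channel is a required input: no silent -1 default."""
        ...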

 

I don't know how feasible something like this is, but it's definitely something that regularly trips up newcomers to DAQmx.

 

 

-Kevin P

It would be great to be able to develop software on 64-bit Linux systems using DAQmx.

Since we're developing software for 64-bit Linux, this is a must for us - this means a 64-bit kernel module as well as 64-bit libraries.

Many CAN protocols require a byte in a cyclic message to be incremented each time the message is sent (this is often byte 0). I might have read somewhere that this is possible with VeriStand, but I am not using it. So when using only LabVIEW and the NI-XNET API, the only way to achieve this is to call the XNET Write function to manually set the value of this byte. But having to call the API each time the message should be sent removes all the benefits of cyclic messages... Moreover, LabVIEW can't guarantee the same level of speed and determinism (if the message is to be sent every 5 ms, for example).

Being able to configure a signal to be an auto-incremented counter would be a huge improvement. To me, this is a must-have, not a nice-to-have...
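
For illustration, here's what the manual workaround boils down to; xnet_write_frame is a hypothetical stub standing in for the real NI-XNET single-frame write, since the point is the per-cycle API call and the software timing, not the exact signature:

    import time

    def xnet_write_frame(payload):
        """Hypothetical stub standing in for the NI-XNET single-frame write."""
        pass

    CYCLE_S = 0.005          # 5 ms cycle time the hardware should really own
    payload = bytearray(8)   # 8-byte CAN payload
    counter = 0

    while True:
        payload[0] = counter             # rolling counter in byte 0
        xnet_write_frame(payload)        # one API call per cycle: the problem
        counter = (counter + 1) % 256    # wrap at one byte
        time.sleep(CYCLE_S)              # software timing: no determinism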

Based on this question, I would like to add a new category of events to LabVIEW: MAX events.

 

This category could contain the following events:

- Hardware added
- Hardware removed
- Configuration changed
    - Scales
    - Channels
    - Tasks

 

If you know of other events, please post them.

It would be great if the full DAQmx library supported all NI data acquisition products on Windows, Mac OS X and Linux. The situation right now is too much of a hodge-podge of diverse drivers with too many limitations. There's an old, full DAQmx library that supports older devices on older Linux systems, but it doesn't look like it's been updated for years. DAQmx Base is available for more current Linux and Mac OS systems, but doesn't support all NI devices (especially newer products). DAQmx Base is also quite limited, and can't do a number of things the full DAQmx library can. It's also fairly bloated and slow compared to DAQmx. While I got my own application working under both Linux and Windows, there are a number of things about the Linux version that just aren't as nice as the Windows version right now. I've seen complaints in the forums from others who have abandoned their efforts to port their applications from Windows to Mac OS or Linux because they don't see DAQmx Base as solid or "commercial-grade" enough.

 

I'd really like to be able to develop my application and easily port it to any current Windows, Mac or Linux system, and have it support any current NI multifunction DAQ device, with a fast, capable and consistent C/C++ API.

 

Anyone else see this as a priority for NI R&D?

I'd like to know if a DAQmx Task has been triggered. Perhaps via a task property?

 

[Image: triggered.png]
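
In the meantime, one workaround sketch with the nidaqmx Python API (device "Dev1" and trigger line PFI0 are assumptions): a nonzero acquired-sample count implies the start trigger has fired.

    import nidaqmx
    from nidaqmx.constants import AcquisitionType

    with nidaqmx.Task() as task:
        task.ai_channels.add_ai_voltage_chan("Dev1/ai0")
        task.timing.cfg_samp_clk_timing(
            rate=1000.0, sample_mode=AcquisitionType.CONTINUOUS)
        task.triggers.start_trigger.cfg_dig_edge_start_trig("/Dev1/PFI0")
        task.start()

        # Proxy for "has the task been triggered?": samples can only be
        # acquired after the start trigger has been received.
        triggered = task.in_stream.total_samp_per_chan_acquired > 0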

 

Thanks,

 

Steve K

As documented in a previous post, it is currently impossible to install the nidaqmx-python library into a Docker container. Enabling this functionality would create an opportunity for software teams to build cutting-edge big data pipelines for measurement instruments using container technology. It could also save development time by including custom DAQ code in continuous integration pipelines.

 

I would like to be able to have multiple, widely separated LabVIEW development environments installed (e.g. LabVIEW 7, 8.0.1, 8.2 and 2010). As I understand it, this is mainly limited by the DAQmx drivers.

 

The problem I run into is that I need to support many applications for well beyond 5 years. We have some test equipment on our production line that has been running software since the 6.0 days. Management will come along and ask me to add one little feature to the software.

 

As it is now, I have to drag out my old computer with LabVIEW 6.0 installed on it, develop on that, and then go back to my new development in LV 2010. I cannot just upgrade the application to 2010, for several reasons:

  1) I can't have all the versions co-exist on one computer, so it needs to move from one machine to the next, upgrading along the way.

  2) Different versions can change things in dramatic ways and break other pieces of code (e.g. Traditional DAQ vs. DAQmx).

  3) Because of #2, I need to do a full revalidation of the code, for what should be a minor change.

 

One thing that the NI architects do not seem to understand is that revalidation is not a trivial activity. This can interrupt the production schedule since I often cannot test on anything but the production equipment. This interruption can take days to weeks, even if no problems are uncovered, and much longer if we find that upgrading caused an issue. If I keep my old development environment, all I need to test is the changes to the code. If I change the compiler, I need to test ALL the code to be sure that the compiler change did not introduce any more bugs.

 

This is especially challenging in tightly controlled environments such as medical device manufacturing, where any change to the process requires a great deal of scrutiny.

 

Please make an effort to consider this in the future. Until then, I will be stuck with 4 computers under my desk, all running different versions of LabVIEW.

[Image: AutoStart Property.PNG]

Currently AutoStart is a feature of XNET that is not intuitively obvious. One of my co-workers was trying to test out the idea of having several queued sessions start with slight delays between them (using the Start Time Offset CAN frame property). The code wasn't working properly because preloading the array of frames to transmit using XNET Write caused some frames to transmit early. Note that an NI Applications Engineer on a support request was not able to figure out that AutoStart was the issue before the co-worker noticed the AutoStart property a few hours later.

Note that no documentation on XNET Write mentions the AutoStart feature. The XNET Start VI context help doesn't mention it either, but its full documentation does indirectly mention that calling it is optional, and points you to the AutoStart property's help for additional details.

I would recommend an optional input to XNET Write (defaulting to true) that activates AutoStart. This would match the DAQmx API's Write VI. It would also be a good idea to note somewhere in the XNET Read documentation that input sessions always start automatically.
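
For comparison, a minimal nidaqmx-python sketch of the DAQmx behavior being proposed (device "Dev1" is an assumption): Write exposes auto-start as an explicit input, so preloading and starting are cleanly separated.

    import nidaqmx
    from nidaqmx.constants import AcquisitionType

    with nidaqmx.Task() as task:
        task.ao_channels.add_ao_voltage_chan("Dev1/ao0")
        task.timing.cfg_samp_clk_timing(
            rate=1000.0, sample_mode=AcquisitionType.FINITE, samps_per_chan=100)
        # auto_start=False: this only preloads the buffer; nothing is generated
        # until the explicit start below. This is exactly the control the XNET
        # user needed when staggering several session starts.
        task.write([0.0] * 100, auto_start=False)
        task.start()
        task.wait_until_done()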

Counter tasks can only take 1 channel, due to the nature of timed signals, obviously. When setting up a system with 16 DUTs with counter outputs, this requires 16 tasks, every single one of which has to be painstakingly created and configured. (As an aside: defining a tab sequence still seems a mystery to NI's programmers, even though LabVIEW supports this.)

Wouldn't it be nice to have a Ctrl+C, Ctrl+V sequence for tasks, after which you only modify the physical channel? IMHO: yes. (A programmatic workaround is sketched below.)
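
In the meantime, a sketch of a programmatic workaround with the nidaqmx Python API, assuming counters ctr0..ctr15 are available on a device "Dev1": create the 16 pulse-train tasks in a loop instead of hand-configuring each one in MAX.

    import nidaqmx
    from nidaqmx.constants import AcquisitionType

    tasks = []
    for i in range(16):
        task = nidaqmx.Task(f"DUT{i}_pulse")
        task.co_channels.add_co_pulse_chan_freq(
            f"Dev1/ctr{i}", freq=1000.0, duty_cycle=0.5)
        task.timing.cfg_implicit_timing(sample_mode=AcquisitionType.CONTINUOUS)
        tasks.append(task)

    for task in tasks:
        task.start()
    # ... later: for task in tasks: task.stop(); task.close()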

 

KR

nimic

"Without needing to clear "all" associated events, or EVEN opening MAX, I would like the ability to replace NI-USB Device "Doohickey123" serial number "junkgarbagestuff" with another NI-USB device of the same type-  perhaps a pop-up option like.... ""Replace no longer installed NI-53xx alias "gizmo"  with new NI-53xx?""  

 

Sure would help when I swap NI-xxxx devices amongst systems - especially the USB devices!

If you set up a change detection event like so:

[Image: change detection.png]

 

There isn't anything in the event data node to tell you which line triggered the interrupt. I'm proposing we add something to the event data node for this event (like a bit field or a reference to the channel) so the programmer would know which line fired the event.

 

The workaround is to do a DAQmx read at this point and mask the data against the previous data, but I would prefer not to do this.
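
For reference, a sketch of that masking workaround with the nidaqmx Python API (lines Dev1/port0/line0:7 are an assumption): XOR the port state after the event against the previous state to see exactly which lines changed.

    import nidaqmx
    from nidaqmx.constants import AcquisitionType, Signal

    with nidaqmx.Task() as task:
        task.di_channels.add_di_chan("Dev1/port0/line0:7")
        task.timing.cfg_change_detection_timing(
            rising_edge_chan="Dev1/port0/line0:7",
            falling_edge_chan="Dev1/port0/line0:7",
            sample_mode=AcquisitionType.CONTINUOUS)

        prev = [0]   # previous port state (mutable so the callback can update it)

        def on_change(task_handle, signal_type, callback_data):
            state = int(task.read())         # port state *after* the change
            changed = state ^ prev[0]        # bit i set => line i fired the event
            prev[0] = state
            print(f"lines changed: {changed:08b}")
            return 0

        task.register_signal_event(Signal.CHANGE_DETECTION_EVENT, on_change)
        task.start()
        input("waiting for changes; press Enter to stop\n")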

We need a way to query an output task to determine its most recently output value.  Or alternately, a general ability to read back data from an output task's buffer.

 

This one's been discussed lots of times over the years in the forums but I didn't see a related Idea Exchange entry.  Most of the discussion I've seen has related to AO but I see no reason not to support this feature for DO as well.

 

There are many apps where normal behavior is to generate an AO waveform for a long period of time.  Some apps can be interrupted unexpectedly by users or process limit monitoring or safety range checking, etc.  When this happens, the output task will be in a more-or-less random phase of its waveform.  The problem is: how do we *gently* guide that waveform back to a safe default value like 0.0 V?  A pure step function is often not desirable.  We'd like to know where the waveform left off so we can generate a rampdown to 0.  In some apps, the waveform shape isn't directly defined or known by the data acq code.  So how can we ramp down to 0 if we don't know where to start from?  This is just one example of the many cases where it'd be very valuable to be able to determine the most recently updated output value.

 

Approach 1:

  Create a DAQmx property that will report back the current output value(s).  I don't know if/how this fits the architecture of the driver and various hw boards.  If it can be done, I'd ideally want to take an instantaneous snapshot of whatever value(s) is currently held in the DAC.  It would be good to be able to polymorph this function to respond to either an active task or a channel list.

 

Approach 2 (active buffered tasks only):

   We can currently query the property TotalSampPerChanGenerated as long as the task is still active. But we can't query the task to read back the values stored in the buffer in order to figure out where that last sample put us. It could be handy to be able to query/read the *output* buffer in a way analogous to what we can specify for input buffers. I could picture asking DAQmx Read for 1 sample from the output buffer after setting RelativeTo = MostRecentSample, Offset = 0 or -1 (haven't thought through which is the more appropriate choice). In general, why *not* offer the ability to read back data from our task's output buffers?
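
Until something like this exists, here's a sketch of the closest workaround under the current API (nidaqmx Python shown; device "Dev1" is an assumption): keep an app-side copy of the regenerated waveform and use TotalSampPerChanGenerated to estimate the last value out of the DAC. Note this is only approximate, since the onboard FIFO means samples "generated" may slightly lead the physical output.

    import numpy as np
    import nidaqmx
    from nidaqmx.constants import AcquisitionType

    waveform = np.sin(2 * np.pi * np.arange(1000) / 1000)   # app-side copy

    with nidaqmx.Task() as task:
        task.ao_channels.add_ao_voltage_chan("Dev1/ao0")
        task.timing.cfg_samp_clk_timing(
            rate=10000.0, sample_mode=AcquisitionType.CONTINUOUS)
        task.write(waveform)    # regenerated continuously
        task.start()

        # ... later, the app decides to interrupt the generation ...
        n = task.out_stream.total_samp_per_chan_generated
        last_value = waveform[(n - 1) % len(waveform)]   # approximate last DAC value
        # ramp from last_value back down to 0.0 from here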

 

-Kevin P

Dear NI Idea Exchange,

 

I had a service request recently where the customer wished to use a mass flow meter, using the HART protocol (a massive industrial protocol used worldwide, with over 30 million devices), to communicate updated values to a cRIO-9074 chassis using an NI 9208 module.

 

They did not know how they could use this protocol with our products. There was only one example online regarding use of this protocol, built on low-level VISA functions. There is currently no real support for this protocol.

 

I suggested that if they wished to use this sensor, they would need to convert it to a protocol we do support, for example Modbus or RS-232 (using a gateway/converter).

 

Then they could use low-level VISA functions to handle the data communication.

 

They were fine with doing this but they felt that NI should probably support this protocol with an off-the-shelf adaptor or module. This is the main point of this idea exchange.

 

There is clearly a reason why we do not currently provide support for this protocol, as we do for PROFIBUS or Modbus.

 

I thought I would pass on this customer feedback as it seemed relevant to NI and our vision. 

 

Regards,

 

Dominic Clarke

 

Applications Engineer

National Instruments UK

Hi,

 

It was suggested that I post the issue of nidaqmx not supporting the upcoming version 3.9 of Python in this group. Here is my original post: https://forums.ni.com/t5/Multifunction-DAQ/Deprecation-warning-for-nidaqmx-using-Python/m-p/4051990/highlight/true#M99324

 

In short, this is the warning currently seen:

    DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.9 it will stop working

 

... it doesn't seem like an issue that would require a large effort to solve, so I hope it will be prioritized accordingly.
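
For context, the change involved (inside the nidaqmx package itself, not user code) is the standard one-line import fix for this deprecation:

    # Deprecated alias, removed in Python 3.9:
    from collections import Sequence

    # Correct location since Python 3.3:
    from collections.abc import Sequence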

 

BR Jesper

 

 

It is a frequent requirement to make measurements on production lines. Position on these is often tracked with rotary encoders ( https://en.wikipedia.org/wiki/Rotary_encoder ). Many NI devices can accept the quadrature pulse train from such a device and correctly produce a current position count. The information in the two-phase pulse train allows the counter to correctly track forward and reverse motion.

 

What would be very useful would be a callback in NI-DAQmx that is called after every n pulses, ideally with a flag to indicate whether the counter is higher or lower than the previous value, i.e. the direction.

 

This has recently been discussed on the Multifunction DAQ board here: http://forums.ni.com/t5/Multifunction-DAQ/quadrature-encoder-based-triggering/td-p/3125468 . So I am not alone in requesting something more programmer-friendly than the workaround offered there.
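
To show roughly what's achievable today, here's a sketch with the nidaqmx Python API, assuming an X4 encoder on Dev1/ctr0 whose A channel is also wired to PFI8: sample the position counter on every encoder pulse and register an every-N-samples callback, inferring direction by comparing successive positions.

    import nidaqmx
    from nidaqmx.constants import AcquisitionType, EncoderType

    N = 100   # fire the callback every N encoder pulses

    task = nidaqmx.Task()
    task.ci_channels.add_ci_ang_encoder_chan(
        "Dev1/ctr0", decoding_type=EncoderType.X_4, pulses_per_rev=1024)
    # Use the encoder's own A-channel pulses as the sample clock.
    task.timing.cfg_samp_clk_timing(
        rate=100000.0, source="/Dev1/PFI8",
        sample_mode=AcquisitionType.CONTINUOUS)

    last_pos = [0.0]

    def every_n(task_handle, event_type, num_samples, callback_data):
        pos = task.read(number_of_samples_per_channel=num_samples)[-1]
        direction = "forward" if pos >= last_pos[0] else "reverse"
        last_pos[0] = pos
        print(f"{N} pulses elapsed, position={pos:.3f}, {direction}")
        return 0

    task.register_every_n_samples_acquired_into_buffer_event(N, every_n)
    task.start()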

 

 

When a DI change detection task runs, the first sample shows the DI state *after* the first detected change. There's not a clear way to know what the DI state was just *before* the first detected change, i.e. its *initial* state.

 

This idea has some overlap with one found here, but this one isn't restricted to usage via DAQmx Events and an Event Structure.  Forum discussions that prompted this suggestion can be seen here and here.

 

The proposal is to provide an addition to the API such that an app programmer can determine both initial state just before the first detected change and final state resulting from each detected change.  The present API provides only the latter.

 

Full state knowledge before and after each change can be used to identify the changed lines.  (Similarly, initial state and change knowledge could be used to identify post-change states.)

 

My preferred approach in the linked discussions is to expose the initial state through a queryable property node. The original poster preferred a distinct task type in which the initial state would be the first returned sample. A couple of good workarounds were proposed in those threads by a contributor from NI, but I continue to think direct API support would be appropriate.
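
For illustration (not necessarily one of the workarounds from the linked threads), a sketch with the nidaqmx Python API, lines Dev1/port0/line0:7 assumed: take a software-timed snapshot just before starting the change-detection task. The race window between the snapshot and the task start is exactly why direct API support would be better.

    import nidaqmx
    from nidaqmx.constants import AcquisitionType

    # 1) On-demand snapshot of the initial DI state.
    with nidaqmx.Task() as snap:
        snap.di_channels.add_di_chan("Dev1/port0/line0:7")
        initial_state = int(snap.read())

    # 2) Change-detection task; its first sample is the state *after* change #1.
    #    (A change occurring between the snapshot and start() would be missed.)
    with nidaqmx.Task() as task:
        task.di_channels.add_di_chan("Dev1/port0/line0:7")
        task.timing.cfg_change_detection_timing(
            rising_edge_chan="Dev1/port0/line0:7",
            falling_edge_chan="Dev1/port0/line0:7",
            sample_mode=AcquisitionType.CONTINUOUS)
        task.start()

        prev = initial_state
        while True:
            state = int(task.read())
            print(f"changed lines: {state ^ prev:08b}")   # XOR isolates changes
            prev = state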

 

 

-Kevin P

To save a bunch of typing, the following is copied verbatim from a post I made years ago: 

While the thread is fresh and FWIW, I'd like to add my own additions to a counter/timer wishlist:

1.  Hardware-reload of count register based on signal edge.  Currently, the only feature that's fairly close is the "Z-index reload" feature for encoder position measurement.  There are many limitations and at least one quirk as presently implemented.

 

A. It only works in "position measurement" (a.k.a. "encoder") mode.   At minimum, it should also be supported in edge-counting mode provided the other limitations/quirks are addressed.  I've done a lot of measurements with an encoder mounted to a step-and-dir stepper motor.  The step-and-dir motor must be measured as an edge-counting task with hw-controlled direction.  The encoder's z-index pulse CAN'T be used to hw-reload the count of the edge-counting task in sync with the encoder task.  It'd be GREAT if it could.  Hw-reload of count could also be useful in other counter tasks, especially pulse(train) generation.  I can imagine some clever tricks in the other modes (such as period measurement) as well.

 

B. It must be programmed to be "active" only during a specific one of the four possible states of encoder channels A & B -- LL, LH, HL, HH. This works out fine for real-life encoders that supply their own Z-index signal. However, I've had numerous occasions where I would have logically preferred to reset the count value based on some other system pulse signal (can you say "limit switch"?). I'd have liked to say, "perform hw-reload on rising edge of Z-index signal regardless of A & B state". But no such designation exists. I'd rather have the choices {Low, High, Either} for both the A & B config.
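
For reference, here's how that constraint surfaces in the nidaqmx Python API (encoder on Dev1/ctr0 assumed; the enum spellings may differ slightly by driver version): zidx_phase must name exactly one of the four A/B states, with no "any state" or pure-edge option.

    import nidaqmx
    from nidaqmx.constants import EncoderType, EncoderZIndexPhase

    task = nidaqmx.Task()
    task.ci_channels.add_ci_ang_encoder_chan(
        "Dev1/ctr0",
        decoding_type=EncoderType.X_4,
        zidx_enable=True,
        zidx_val=0.0,
        # Reload fires only while A and B sit in this one state and Z is high.
        zidx_phase=EncoderZIndexPhase.A_HIGH_B_HIGH,
        pulses_per_rev=1024)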

 

C. The Z-index signal must be hard-wired to the counter's default GATE pin on the 6602 board.  I *think* but haven't verified that it's user-selectable on the M-series.  Dunno if it supports just PFI inputs or also RTSI signals.  I would like to see a next-generation Counter/Timer allow user-programmable inputs for Z-index as well as encoder A & B channels.

 

D. At least on the 6602, the Z-index behavior is STATE-driven rather than EDGE-driven.  Z-index reload happens whenever A&B are in the programmed state and Z is High.  I tested by hard-wiring the Z-index signal to +5V and my X4 quadrature task counted 0,1,2,3,0,1,2,3,0,1...   I don't recall this being spelled out clearly in the documentation -- I remember expecting it to be sensitive to a rising edge rather than a high state.  I would very much like the option of making the hw-reload sensitive to an EDGE -- ideally {Rising, Falling, Either}.

Note: wishlist item "1C" has been fulfilled in M-series counters and probably also in X-series which I haven't yet tried.

 

-Kevin P

 

Every time I have to work with an NI DAQ device, the first thing I need to know is what pins can or can't do something.

Currently this involves looking through something like 7 different documents to find little bits of information and bringing them back to your application.

 

A block diagram could easily be a reference point for the rest of the documentation (you want to know about pin I/O for your device? Look at this document).

Plus, a good block diagram can tell you what you need to know quickly and clearly. A picture is worth 1000 words?

 

Some might find the current documentation adequate, but personally I would really like to have a block diagram that represents the internals and capabilities of the pins and the device in general. Most microcontrollers have this, and it is an extremely useful tool. So why not have one for DAQ devices as well?

The title pretty much says it all. I would like the ability either to configure a full hardware complement as simulated devices and then switch them over to real devices when the hardware arrives, or to go from real devices to simulated devices, without the need to add new, discrete simulated devices in MAX.

 

This would make for much easier offline development and ultimate deployment to real hardware.
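
As a side note, the nidaqmx Python API can already distinguish simulated from real hardware at run time, which helps keep a single code base across offline development and deployment (this only detects the device type; the MAX-level switching proposed above still doesn't exist):

    import nidaqmx.system

    system = nidaqmx.system.System.local()
    for dev in system.devices:
        kind = "simulated" if dev.is_simulated else "real"
        print(f"{dev.name}: {dev.product_type} ({kind})")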