
Data Acquisition Idea Exchange


I've been in many threads, and seen many, many more, where the root issue stems from confusion about the very different ways DAQmx Timing and DAQmx Read interpret "# samples" in Finite Sampling vs. Continuous Sampling mode.  (For example, here's just one of the times I tried to address that confusion.)

 

First, here's what causes the confusion:

 

  • The 'samples per channel' input to DAQmx Timing is *crucial* for Finite Sampling tasks and usually *ignored* for Continuous Sampling tasks.
  • The 'number of samples per channel' input to DAQmx Read has a default value of -1 when left unwired.  However, the *meaning* of this default value is *VERY* different, resulting in very different behavior depending on whether the task is configured for Finite or Continuous sampling.  (See the first link I referenced, and the sketch just after this list.)
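For anyone who prefers the C API, here's a minimal sketch of that asymmetry (device and channel names are placeholders; the LabVIEW VIs behave the same way):

    #include <NIDAQmx.h>

    TaskHandle task;
    float64    data[10000];
    int32      read;

    DAQmxCreateTask("", &task);
    DAQmxCreateAIVoltageChan(task, "Dev1/ai0", "", DAQmx_Val_Cfg_Default,
                             -10.0, 10.0, DAQmx_Val_Volts, NULL);

    // Finite mode: the 1000 below sets the *total* acquisition size -- crucial.
    DAQmxCfgSampClkTiming(task, NULL, 1000.0, DAQmx_Val_Rising,
                          DAQmx_Val_FiniteSamps, 1000);
    DAQmxStartTask(task);

    // numSampsPerChan = -1 (DAQmx_Val_Auto): in a *finite* task this means
    // "wait for, then return, ALL the samples configured in DAQmx Timing".
    DAQmxReadAnalogF64(task, DAQmx_Val_Auto, 10.0, DAQmx_Val_GroupByChannel,
                       data, 10000, &read, NULL);

    // Continuous mode (DAQmx_Val_ContSamps): the same 1000 becomes a mere
    // buffer-size suggestion -- usually ignored -- and the same -1 now means
    // "return whatever samples happen to be available right now", possibly zero.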

While the relevant info is findable in the help, it clearly often goes unfound.  I got to wondering whether some changes in the DAQmx API could help.

 

I'll describe one approach, but would definitely be open to better solutions.  The goal is simply to find *some* way to reduce the likelihood of confusion for rookie DAQmx users.

 

I picture adding more polymorphic instances to both the DAQmx Timing and DAQmx Read VIs, so there can be distinct instances for Finite vs. Continuous sampling.

 

Further, I picture that the task refnum would carry sufficient type info related to this timing config, such that downstream DAQmx functions can "know" what kind of Timing was set up -- Finite, Continuous, on-demand (the default if DAQmx Timing was never called at all), etc.

 

Then when that task refnum is wired into DAQmx Read, the most appropriate instance of DAQmx Read would show up.  And the corresponding input parameter names, help, default values, and default value *behavior* can all be tailored to that particular instance of DAQmx Read.  For example, perhaps the "# samples" input should become a *required* input for Continuous Sampling tasks, to force a decision and encourage further inspection of the advanced help.

 

Don't know how feasible something like this is, but the confusion it targets definitely trips up newcomers to DAQmx on a regular basis.

 

 

-Kevin P

I'd like to know if a DAQmx Task has been triggered.  Perhaps via a task property?

 

triggered.png
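To be concrete, here's a hypothetical sketch of how it might look in the C API (DAQmxGetTriggered is an invented name for the proposed property, not an existing call):

    // HYPOTHETICAL -- sketching the requested property, not an existing DAQmx call.
    bool32 triggered;
    DAQmxGetTriggered(task, &triggered);   // invented name for the proposed property
    if (triggered)
        printf("The task's start trigger has been received.\n");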

 

Thanks,

 

Steve K

Hi!


The PCI Express bus has become more and more popular.
PCIe has a new type of interrupt: MSI/MSI-X.
Today the VISA driver is able to work only with old-style (legacy) interrupts.

 

Let me explain the main difference:

Legacy interrupts use the INTA/B/C/D signals (these signals existed on the PCI bus, but do not exist on the PCIe bus).
These signals (on PCIe, actually message packets asserting INTA/B/C/D) are shared between all PCIe devices.
So the VISA driver spends a lot of time figuring out which device produced the interrupt.

 

An MSI interrupt is actually a Memory Write packet to a preallocated address.
In this case VISA already knows which device produced the interrupt.
An MSI interrupt can also carry different interrupt vectors inside the Memory Write packet, so it would be very helpful to get access to the vector value too.
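For context, here's roughly what handling a legacy PXI interrupt looks like with today's VISA C API (the resource name is a placeholder), with a comment showing where MSI support could slot in; the vector attribute named below is invented, not an existing VISA attribute:

    #include <visa.h>

    ViSession   rm, vi;
    ViEventType etype;
    ViEvent     event;

    viOpenDefaultRM(&rm);
    viOpen(rm, "PXI0::1::INSTR", VI_NULL, VI_NULL, &vi);

    /* Today: enable legacy (INTx-style) PXI interrupts and block until one
       arrives. Behind the scenes, VISA must work out which of the devices
       sharing the line actually produced it. */
    viEnableEvent(vi, VI_EVENT_PXI_INTR, VI_QUEUE, VI_NULL);
    viWaitOnEvent(vi, VI_EVENT_PXI_INTR, 5000, &etype, &event);

    /* The idea: with MSI/MSI-X, VISA would already know the source device, and
       a (hypothetical) attribute such as VI_ATTR_PXI_MSI_VECTOR could expose
       the vector value carried in the Memory Write packet. */

    viClose(event);
    viClose(vi);
    viClose(rm);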

 

Requesting MSI/MSI-X interrupt support in the VISA driver.

 

Thanks a lot!

ExpressCard slots are getting rare on laptops these days, so this idea requests a replacement on a more modern bus, e.g. Thunderbolt 3 / USB-C...

 

Yes, my day-to-day development computer is a laptop. In production I deploy to a dedicated PC.

 

http://forums.ni.com/t5/PXI/Thunderbolt-ExpressCard-Adaptor-for-ExpressCard-8360/m-p/3332184#M16392

 

http://forums.ni.com/t5/PXI/Thunderbolt-3-PXI-chassis-interface/td-p/3314556

 

 

I would like to see a version of the 4-slot cDAQ-9185, but where the 4 slots are arranged the long way, and the whole thing fits in a 1U slot on a standard 19" rack.

Dear NI, please consider a future hardware feature addition:

 

Add a "Power Up Delay DIP Switch" to the back of the PXI Power Supply Shuttle.

 

It would allow end users to reliably sequence the powering-up of multi PXI chassis solutions. It could also be handy to sequence any other boot-order sensitive equipment in the rack or subsystem. This would also be a world-voltage solution since this capability already exists in the power shuttle. We are seeing the need for more input-voltage-agnostic solutions. I'm sure you are too.

 

It might offer time-delay choices of 1, 2, 4, 8, 16 seconds, etc.

 

We run into this problem on every multi-chassis integration. We have solved it several ways in the past: human procedure (error prone), sequencing power strips ($$$, and not world-voltage capable), custom time-delay relays ($$$).

 

Imagine never having your downstream chassis disappear. Or worse yet, having them show up in MAX but act strangely because there wasn't enough delay time between boots.

 

Thanks for reading this, and consider tossing me a Kudos!

"Without needing to clear "all" associated events, or EVEN opening MAX, I would like the ability to replace NI-USB Device "Doohickey123" serial number "junkgarbagestuff" with another NI-USB device of the same type-  perhaps a pop-up option like.... ""Replace no longer installed NI-53xx alias "gizmo"  with new NI-53xx?""  

 

Sure would help when I swap NI-xxxx devices amongst systems, especially the USB devices!

When using TDMS on cRIO systems, there are a couple of considerations that don't normally come into play much when storing data as TDMS files. They are:

 

* The current version of the file system used on cRIO controllers degrades significantly in performance if *any* folder on the cRIO contains more than ~100 files. (Workaround: more elaborate folder structures, but a lot of that structuring would exist only to work around this shortcoming of the (old) version of the file system.)

* The drives are SSDs with limited lifetime, wear-leveling, etc. Writing and re-writing these index files adds unnecessary overhead and wear on the disks.

* They use up space, which is (very) limited on some cRIOs (even if the indexes themselves are small). (People may be quick to point out that you can add a thumb drive, but a downside is that thumb drives, as far as I know, need to be FAT-formatted. The cRIO file system is atomic and fail-safe, so you pretty much don't have to worry about sudden power outages or interruptions mid-write; on a thumb drive you have all of those issues, which could in the worst case corrupt the whole drive.)

 

I propose adding a boolean (defaulting to false) on TDMS Open called "suppress writing TDMS index to disk", or some smart name along those lines. This would still allow the TDMS index to be created, but it would remain in memory only and never be written to disk. When TDMS Close is called, the memory is released and the .tdms file ends up on disk without an index file. If the same file is opened again, extra time is needed because the index has to be re-created (again in memory only, if the boolean says so), but I think for the most part this overhead would be more than acceptable.
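A C-style rendering of the proposal might look like the sketch below. Every name here is invented purely to illustrate the idea (the real change would be an extra input on the LabVIEW TDMS Open VI):

    /* HYPOTHETICAL API -- invented names illustrating the proposed option. */
    TDMSFileHandle file;

    /* Last argument: suppressIndexOnDisk. TRUE = build the index in memory
       only, so no *.tdms_index file ever touches the cRIO's flash. */
    TDMS_OpenFile("/c/data/log.tdms", TDMS_OPEN_OR_CREATE, TRUE, &file);

    /* ... write data; the index lives only in RAM ... */

    TDMS_CloseFile(file);  /* frees the in-memory index; only the .tdms remains */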

 

I'm not sure how "simple" modifying the TDMS open and close functions would be, but I do know that there are many cases where this flag would make sense. 

If you set up a change detection event like so:

change detection.png

 

There isn't anything in the event data node to tell you which line triggered the interrupt. I'm proposing we add something to the event data node for this event (like a bit field or a reference to the channel) so the programmer knows which line fired the event.

 

The workaround is to do a DAQmx Read at this point and mask the data against the previous data, but I would prefer not to do this.
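For reference, here's roughly what that workaround looks like in the C API (the line range is a placeholder):

    // Workaround sketch: diff the current line states against the previous
    // read to recover which line changed.
    uInt8 prev[8] = {0}, curr[8];
    int32 sampsRead, bytesPerSamp;

    DAQmxCfgChangeDetectionTiming(task, "Dev1/port0/line0:7", "Dev1/port0/line0:7",
                                  DAQmx_Val_ContSamps, 1);
    // ... start the task; then, inside the change-detection event handler:
    DAQmxReadDigitalLines(task, 1, 10.0, DAQmx_Val_GroupByChannel,
                          curr, sizeof(curr), &sampsRead, &bytesPerSamp, NULL);
    for (int i = 0; i < 8; i++) {
        if (curr[i] != prev[i])
            printf("line %d fired the event\n", i);  // the info the event should carry
        prev[i] = curr[i];
    }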

We need a way to query an output task to determine its most recently output value.  Or alternately, a general ability to read back data from an output task's buffer.

 

This one's been discussed lots of times over the years in the forums but I didn't see a related Idea Exchange entry.  Most of the discussion I've seen has related to AO but I see no reason not to support this feature for DO as well.

 

There are many apps where normal behavior is to generate an AO waveform for a long period of time.  Some apps can be interrupted unexpectedly by users or process limit monitoring or safety range checking, etc.  When this happens, the output task will be in a more-or-less random phase of its waveform.  The problem is: how do we *gently* guide that waveform back to a safe default value like 0.0 V?  A pure step function is often not desirable.  We'd like to know where the waveform left off so we can generate a rampdown to 0.  In some apps, the waveform shape isn't directly defined or known by the data acq code.  So how can we ramp down to 0 if we don't know where to start from?  This is just one example of the many cases where it'd be very valuable to be able to determine the most recently updated output value.

 

Approach 1:

  Create a DAQmx property that will report back the current output value(s).  I don't know if/how this fits the architecture of the driver and various hw boards.  If it can be done, I'd ideally want to take an instantaneous snapshot of whatever value(s) is currently held in the DAC.  It would be good to be able to polymorph this function to respond to either an active task or a channel list.

 

Approach 2 (active buffered tasks only):

   We can currently query the property TotalSampPerChanGenerated as long as the task is still active, but we can't query the task to read back the values stored in the buffer to figure out where that last sample left us.  It could be handy to query/read the *output* buffer in a way analogous to what we can specify for input buffers.  I could picture asking DAQmx Read for 1 sample from the output buffer after setting RelativeTo = MostRecentSample, Offset = 0 or -1 (haven't thought through which is the more appropriate choice).  In general, why *not* offer the ability to read back data from our task's output buffers?
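To make Approach 2 concrete, here's a hypothetical C API sketch. The first call exists today; pointing the Read properties at an output task's buffer, as below, is exactly what's being requested and does not work currently:

    // Approach 2, HYPOTHETICAL past the first call: read back the last sample
    // written out of an AO task's buffer, to use as the start of a ramp-down.
    uInt64  total;
    float64 lastVal;
    int32   read;

    DAQmxGetWriteTotalSampPerChanGenerated(aoTask, &total);   // exists today

    DAQmxSetReadRelativeTo(aoTask, DAQmx_Val_MostRecentSamp); // would need to
    DAQmxSetReadOffset(aoTask, -1);                           // apply to output tasks
    DAQmxReadAnalogF64(aoTask, 1, 1.0, DAQmx_Val_GroupByChannel,
                       &lastVal, 1, &read, NULL);             // read the *output* buffer
    // lastVal would be the starting point for a gentle ramp back to 0.0 V.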

 

-Kevin P

At the new client... no shock to many of you, I "get around".

 

I explained to some of my new compadres that DAQmx "Tasks" need to be created only once... preferably during development!

I even created a new task in MAX using the DAQmx wizard, dragged it into the LabVIEW project explorer, and all of that!

 

I even went so far as to name the "AUX" temperature channel "armpit". Trust me, after 5 minutes delivering a .lvproj based on the "Continuous Measurement and Logging (DAQmx)" project template, it was impressive to the client that the plot "armpit" showed 37 C on the chart.  Guess where the thermocouple was.

 

So, because I am that amazing, I showed them that they could drag-and-drop the task into MAX and use MAX to monitor my armpit temperature.  I even showed them that MAX could show them the wiring diagram!

 

"HOLD IT"! they said, The wiring diagram is right there! On SCREEN! per channel! 

That is where I just about lost my mind!  They wanted to see the connection diagram for another channel, and that worked! BUT there was no way to output that wonderful data!

 

"Can I create a Wiring Diagram for this channel, device or task?" were the next words out of their mouths.  I WAS STUNNED!  "Not today" I said, "I'll post that excellent idea!"

 

How feasible is the idea of setting up an open-source driver project? This would be something that NI could host, but anyone could participate in, to build a driver that fits different platforms. It could build on the DDK, but be centralized for collaborative effort. I like the name Open-DAQ driver. This would be a good way to address Linux users, who are accustomed to source code. I know we have DAQmx Base as a separate driver for different platforms, but an open-source project would allow the Linux community to build a solution for novel or unusual Linux versions.

When doing PWM with DAQmx, error -200684 is thrown if a 0% duty cycle is attempted.  For situations where a true 0% is needed, this is problematic. There are a few workarounds available, but some are less than ideal.  

 

The suggestion here is to pause the output if a 0% duty cycle is attempted.
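For context, a common workaround today is to clamp the duty cycle just above zero rather than writing a true 0% (variable names below are placeholders):

    // Workaround sketch: DAQmx rejects dutyCycle = 0.0 with error -200684,
    // so clamp to a tiny non-zero value instead of a true 0%.
    float64 duty = requestedDuty;    // requestedDuty: placeholder for your setpoint
    if (duty <= 0.0)
        duty = 0.001;                // ~0.1% -- close to, but not truly, 0%
    DAQmxWriteCtrFreqScalar(pwmTask, TRUE, 10.0, 1000.0, duty, NULL);
    // The idea: a true 0.0 would instead pause the output at its idle state.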

Absolute encoders have been around for some time, but NI's motion hardware still supports only incremental encoders.  I would like to see support for absolute encoders in NI Motion or NI Soft Motion.

 

NI supports almost any bus.  Why not SSI (synchronous serial interface)?

 

Of course, there is always the option to use an R Series card and build an interface.  But why not have a low-cost PCI or USB card? Or perhaps a C Series module, so that we don't have to take up FPGA space?

A DAQmx channel control allows the available channels to be filtered by IO type (Analogue Input, Analogue Output, etc.) using the "IO Name Filtering..." function.

 

channel.jpg

 

A DAQmx task control, on the other hand, doesn't.

task.jpg

 

I'd find the option to filter listed tasks by IO type pretty useful.

 

cheers.

 

Hello everybody!

 

Currently the AI.PhysicalChans property returns the physical channels available for measurement (e.g. 16 of them). However, to programmatically calculate the maximum sampling rate for a device (e.g. the 9213), we need the total number of channels, including internal ones such as cold-junction and autozero (for the 9213 that's 18).

 

Therefore I would like to suggest including a property node with the number of internal channels, or the total number of physical channels, or something similar.

 

Use case: programmatically calculate the maximum sampling rate in a program that should work for multiple types of devices without being aware of their type.
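To illustrate the gap with today's C API (the device name is a placeholder; both calls below exist): the aggregate maximum rate must be divided by the *total* channel count, but the physical-channel list omits the internal channels, so a 9213 gets divided by 16 instead of 18:

    // Sketch of the calculation as it stands today.
    char    chans[2048];
    float64 aggregateRate;
    int     nChans = 1;

    DAQmxGetDevAIPhysicalChans("cDAQ1Mod1", chans, sizeof(chans));
    for (const char *p = chans; *p; p++)   // names come back comma-separated
        if (*p == ',') nChans++;

    DAQmxGetDevAIMaxMultiChanRate("cDAQ1Mod1", &aggregateRate);
    // For a 9213 this divides by 16, not the true 18 (internal channels are
    // missing), so the per-channel rate is overestimated. The proposed
    // property would supply the correct total.
    printf("per-channel max: %.1f S/s\n", aggregateRate / nChans);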

 

Thanks for consideration and have a great day!

 

[actually a customer's idea]

I have an NI-DAQmx/C++ data acquisition program where I am continuously acquiring 5 channels of data at 40 kHz/channel and reading them in 0.1-second chunks.  This works perfectly for over 14 hours of continuous acquisition, but at 14 hrs, 54 min and 47 seconds the program hangs due to an overflow in the int32 DAQmxInternalAIbuffer_Offset value sent to the DAQmxSetReadOffset() function.  With the DAQmxSetReadRelativeTo() function, I set the offset relative to the first sample using DAQmx_Val_FirstSample.  (See "32-bit limitation of the NI-DAQmx int32 DAQmxSetReadOffset() function?")
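The arithmetic checks out; a quick sketch of why the hang lands at exactly that time:

    // Why 14 h 54 min 47 s: the read offset is an int32, and at 40 kS/s per
    // channel the sample count crosses INT32_MAX right at that moment.
    int64 elapsed = 14*3600 + 54*60 + 47;     // = 53,687 seconds
    int64 samples = 40000LL * elapsed;        // = 2,147,480,000 samples
    // INT32_MAX is 2,147,483,647 -- the very next 0.1 s chunk (4,000 samples)
    // pushes the offset past it, and the int32 wraps:
    DAQmxSetReadRelativeTo(task, DAQmx_Val_FirstSample);
    DAQmxSetReadOffset(task, (int32)samples); // fine now; wraps 4,000 samples later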

 

It would be very helpful for the DAQmxSetReadOffset() offset value to be 64 bits rather than the current int32.  This would make the function analogous to DAQmxGetReadTotalSampPerChanAcquired(), which returns a 64-bit value.  I understand that the offset is maintained internally as a 64-bit value, so perhaps this would not be too difficult to do.

 

I hope that National Instruments fixes this limitation in their API, not just for 64-bit Windows but also for 32-bit Windows, because a lot of us are still using 32-bit compilers and our users are running Windows XP.  Perhaps it could be implemented as a separate 64-bit DAQmxSetReadOffset64() function for 32-bit Windows.

 

Thank you,
Bill Anderson

 

After consulting the community and raising a ticket with NI’s support team, we have determined that there is no good way of programmatically removing old or disconnected remote systems.

DeleteDisconnectedSystems.png

I propose that the NI System Configuration palette be expanded to include a "Delete Disconnected Systems" function. This would clear MAX's cache of disconnected remote systems, just like the manual method available through MAX.
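A hypothetical sketch of the equivalent in the System Configuration C API (the session calls exist today; the delete call is the invented, proposed addition):

    #include <nisyscfg.h>

    NISysCfgSessionHandle session;

    NISysCfgInitializeSession("localhost", NULL, NULL, NISysCfgLocaleDefault,
                              NISysCfgBoolTrue, 10000, NULL, &session);

    /* PROPOSED (invented name): purge MAX's cache of remote systems that are
       no longer reachable, exactly like the manual action in MAX. */
    NISysCfgDeleteDisconnectedSystems(session);

    NISysCfgCloseHandle(session);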

As documented in a previous post, it is currently impossible to install the nidaqmx-python library into a Docker container. Enabling this would let software teams build cutting-edge big-data pipelines for measurement instruments using container technology. It could also shorten development time by allowing custom DAQ code to run in continuous integration pipelines.