
Data Acquisition Idea Exchange


Hello,

 

How often have you built LabVIEW applications using simulated DAQmx boards?

And how often were you limited by the default behaviour of simulated boards (a sine wave on analog inputs, a counter square signal on digital inputs, and so on)?

 

It would be nice to integrate into DAQmx simulated boards the ability to modify the default behaviour of simulated inputs through dedicated pop-ups.

 

It would be nice, for each task linked to a simulated DAQmx board, to be able to launch a pop-up window that would:

 

  • For digital inputs, give the ability to modify the current binary value of each configured channel.
  • For analog inputs, give the ability to choose between a fixed value, a sine wave, a square signal, white noise, and so on.
  • For digital outputs, give the ability to view the currently set values.
  • For analog outputs, give the ability to view the current simulated output value on a waveform chart.

 

A more powerful tool could also integrate a simulated channel-switching mechanism: a simulated output could be linked to a simulated input.

 

This feature could be a good way to create an application that simulates a complete process; such an application could then be used to validate a complete system (a kind of SIL architecture).

 

Another idea: a complete DAQmx simulation API.

 

  • Creation of an API that could instantiate a simulated DAQmx board (which could be seen in MAX)
    • It would take the place of the current, limited simulated DAQmx boards
  • This device could then be accessed by other applications through DAQmx
  • The API could have access to all channels of the simulated device
  • The API could force, programmatically, the values of the simulated input channels according to a realistic process model

 

Something like this ...

 

 

 DaqMxSimulatedAPI.PNG
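
To make the idea concrete, here is a rough sketch in C of what such a simulation API might look like. None of the DAQmxSim* names below exist in the current NI-DAQmx driver; they are purely hypothetical and only illustrate the proposal above.

```c
#include <NIDAQmx.h>

/* Hypothetical sketch only: none of the DAQmxSim* names below exist in the
 * current NI-DAQmx driver.  They illustrate the proposed simulation API. */
typedef void* SimDeviceHandle;

/* Proposed: create a simulated device that would also be visible in MAX,
 * replacing today's fixed-behaviour simulated boards. */
int32 DAQmxSimCreateDevice(const char productType[], const char deviceName[],
                           SimDeviceHandle *simDevice);

/* Proposed: force a simulated analog-input channel to a value computed by a
 * realistic process model. */
int32 DAQmxSimSetAIValue(SimDeviceHandle simDevice, const char channel[],
                         float64 value);

/* Proposed: read back what a client application wrote to a simulated output. */
int32 DAQmxSimGetAOValue(SimDeviceHandle simDevice, const char channel[],
                         float64 *value);

/* Proposed: channel-switching mechanism that loops a simulated output back
 * into a simulated input. */
int32 DAQmxSimConnectChannels(SimDeviceHandle simDevice,
                              const char sourceChannel[],
                              const char destinationChannel[]);

/* Example process-model step: the simulated sensor reading is derived from
 * whatever the application under test last wrote to the heater output. */
static void processModelStep(SimDeviceHandle simDev)
{
    float64 heaterDrive = 0.0;
    DAQmxSimGetAOValue(simDev, "SimDev1/ao0", &heaterDrive);
    DAQmxSimSetAIValue(simDev, "SimDev1/ai0", 20.0 + 5.0 * heaterDrive);
}
```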

 

Hello,

 

For those of us who develop using DAQmx all the time, this might seem silly.  Nonetheless, I'm finding that users of my software are repeatedly having a tough time figuring out how to select multiple physical channels for applications that use DAQmx.  Here's what I'm talking about:

DAQmxChannels.png

 

Typically a user of my universal logger application wishes to acquire from ai0:7, for example.  They attempt to hold down shift and select multiple channels, only to conclude that one channel at a time may be acquired.  For some odd reason, nearly everyone fears the "Browse" option because they don't know what it does.
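
For reference, the underlying physical-channel string already supports ranges and lists; the Browse dialog is just a way to build that string. A minimal DAQmx C sketch (device name "Dev1" is assumed):

```c
#include <NIDAQmx.h>

int32 createLoggerTask(TaskHandle *task)
{
    int32 err = DAQmxCreateTask("loggerTask", task);
    if (err < 0) return err;

    /* A single physical-channel string can name a range ("Dev1/ai0:7") or a
     * comma-separated list ("Dev1/ai0,Dev1/ai3") -- exactly what the Browse
     * dialog generates when the user multi-selects channels. */
    return DAQmxCreateAIVoltageChan(*task, "Dev1/ai0:7", "",
                                    DAQmx_Val_Cfg_Default, -10.0, 10.0,
                                    DAQmx_Val_Volts, NULL);
}
```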

 

 

While, as a developer, I have no problem whatsoever knowing to "Browse" in order to accomplish this, I was just asked how to do this for literally the fifth time by a user.  Thus, I'm faced with three choices: Keep answering the same question repeatedly, develop my own channel selection interface, or ask if the stock NI interface may be improved.

 

I'm not sure of the best way to improve the interface, but the least painful way to do so might be simply to display the "Browse" dialog on first click rather than the drop-down menu.

 

Please, everyone, by all means feel free to offer better ideas.  What I do know for certain, though, is that average users around here continually have a tough time with this.

 

Thanks very much,

 

Jim

 

It would be great to develop software on 64-bit Linux systems using DAQmx.

Since we're developing software for 64-bit Linux, this is a must for us; it means both a 64-bit kernel module and 64-bit libraries.

Just ran into a situation where I need to stream a lot of data to TDMS.  The only problem is that I need to store additional metadata with the channels.  I could go through all of the generated TDMS files and insert it after the fact, but this is kind of tedious.  I propose a way to add metadata to the channel.  My first thought was to use a variant input on the Create DAQmx Channel VI, but some of the polymorphic instances already have really full connector panes.  So I am now thinking to just add a variant property to the Channel Property Node.  When logging to TDMS, the variant attributes can be put in the metadata of the channel.  Do something similar for the group so that we can have additional group metadata.

 

Metadata I'm currently thinking about includes sensor serial numbers and calibration data.  I'm sure there is plenty of other information we would like to store with the TDMS file.
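
For context, a hedged sketch of how DAQmx TDMS logging is configured today via the C API: DAQmxConfigureLogging takes a file path, logging mode, and group name, but no per-channel metadata input, which is the gap this idea addresses. The file path and group name below are placeholders.

```c
#include <NIDAQmx.h>

/* Today's TDMS logging path: no way to attach per-channel metadata such as a
 * sensor serial number here -- it has to be patched into the .tdms file
 * after the fact. */
int32 enableTdmsLogging(TaskHandle task)
{
    return DAQmxConfigureLogging(task, "C:\\data\\run001.tdms",
                                 DAQmx_Val_LogAndRead,       /* stream + read */
                                 "Run 001",                   /* TDMS group    */
                                 DAQmx_Val_CreateOrReplace);
}
```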

Hello,

 

I recently discovered that the SCXI-1600 is not supported in 64-bit Windows.  From what NI has told me, it is possible for the hardware to be supported, but NI has chosen not to create a device driver for it.

 

I'm a bit perplexed by this position, since I have become accustomed to my NI hardware just working.  It's not like NI to just abandon support for a piece of hardware like this -- especially one that is still for sale on their website.

 

Please vote if you have an SCXI-1600 and might want to use it in a 64-bit OS at some time in the future.

 

Thanks,

Doug

 

 

It would be great if the full DAQmx library supported all NI data acquisition products on Windows, Mac OS X and Linux. The situation right now is too much of a hodge-podge of diverse drivers with too many limitations. There's an old, full DAQmx library that supports older devices on older Linux systems, but it doesn't look like it's been updated for years.  DAQmx Base is available for more current Linux and Mac OS systems, but doesn't support all NI devices (especially newer products).  DAQmx Base is also quite limited, and can't do a number of things the full DAQmx library can.  It's also fairly bloated and slow compared to DAQmx.  While I got my own application working under both Linux and Windows, there are a number of things about the Linux version that just aren't as nice as the Windows version right now.  I've seen complaints in the forums from others who have abandoned their efforts to port their applications from Windows to Mac OS or Linux because they don't see DAQmx Base as solid or "commercial-grade" enough.

 

I'd really like to be able to develop my application and be able to easily port it to any current Windows, Mac or Linux system, and have it support any current NI multi-function DAQ device, with a fast, capable and consistent C/C++ API.

 

Anyone else see this as a priority for NI R&D?

I find myself quite often needing to modify the DAQmx tasks of chassis that aren't currently plugged into my system.  I develop on a laptop and then transfer the compiled programs to other machines.  When the other machines are running the code and thus using the hardware, I have to export my tasks and chassis, delete the live-but-unplugged chassis from my machine, then import the tasks and chassis back in, which generates simulated chassis.  When I'm finished with the task change and code update, to test it I have to export the tasks and chassis, plug in the chassis, and re-import to get a live chassis back.

 

Could it be made as simple as right-clicking on a chassis and selecting 'Simulated' from the menu, so I can configure tasks without the hardware present?

 

Thanks,

Brian

Certified LabVIEW Developer

GE Appliances

By default, DAQmx terminal constants/controls only show a subset of what is really available.  To see everything, you have to right-click the terminal and select "I/O Name Filtering", then check "Include Advanced Terminals":

 

Untitled 1 Block Diagram _2013-06-04_16-16-29.png

AdvancedTerminals.png

 

I guess this is intended to prevent new users from being overwhelmed.  However, what it really does is create a hurdle that prevents them from configuring their device in a more "advanced" manner, since they have no idea that the name filtering box exists.

 

I am putting "advanced" in quotes because I find the distinction rather arbitrary.

 

 

As a more experienced DAQmx user, I change the I/O name filtering literally every time I put down a terminal, without thinking about it (who can keep track of which subset of DAQmx applications is considered "advanced"?).  The worst part is trying to explain how to do something to newer users and having to tell them to change the I/O name filtering every single time (or, if you don't, you'll almost certainly get a response back like this).

 

 

 

Why not make the so-called "advanced" terminals show in the drop-down list by default?

I've been in many threads and seen many, many more where the root issue stems from confusion about the way DAQmx Timing and DAQmx Read interpret the meaning of "# samples" very differently for Finite Sampling vs. Continuous Sampling mode.   (For example, here's just one of the times I tried to address that confusion.)

 

First, here's what causes the confusion:

 

  • The 'samples per channel' input to DAQmx Timing is *crucial* for Finite Sampling tasks and usually *ignored* for Continuous Sampling tasks.
  • The 'number of samples per channel' input to DAQmx Read has a default value of -1 when left unwired.  However, the *meaning* of this default value is *VERY* different, resulting in very different behavior depending on whether the task is configured for Finite or Continuous sampling.  (See the first link I referenced, and the sketch after this list.)
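
Here is a minimal C-API sketch of the two behaviors side by side (both task handles are assumed to already contain AI channels; no error handling for brevity):

```c
#include <NIDAQmx.h>

/* Same "-1 samples" read (DAQmx_Val_Auto), very different behavior. */
void illustrateReadDefaults(TaskHandle finiteTask, TaskHandle contTask)
{
    float64 data[10000];
    int32   read;

    /* Finite: the 1000 passed to Timing is crucial -- it sizes the whole
     * acquisition.  A read of DAQmx_Val_Auto (-1) waits until all 1000
     * samples per channel have been acquired, then returns them. */
    DAQmxCfgSampClkTiming(finiteTask, "", 1000.0, DAQmx_Val_Rising,
                          DAQmx_Val_FiniteSamps, 1000);
    DAQmxReadAnalogF64(finiteTask, DAQmx_Val_Auto, 10.0,
                       DAQmx_Val_GroupByChannel, data, 10000, &read, NULL);

    /* Continuous: the same "samples per channel" input is only a buffer-size
     * hint.  A read of DAQmx_Val_Auto (-1) returns immediately with whatever
     * happens to be in the buffer -- possibly zero samples. */
    DAQmxCfgSampClkTiming(contTask, "", 1000.0, DAQmx_Val_Rising,
                          DAQmx_Val_ContSamps, 1000);
    DAQmxReadAnalogF64(contTask, DAQmx_Val_Auto, 10.0,
                       DAQmx_Val_GroupByChannel, data, 10000, &read, NULL);
}
```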

While the relevant info is findable in the help, it also often clearly remains unfound.  I got to wondering whether some changes in the DAQmx API could help.

 

I'll describe one approach, but would definitely be open to better solutions.  The goal is simply to find *some* way to reduce the likelihood of confusion for rookie DAQmx users.

 

I picture adding more polymorphic instances to both the DAQmx Timing and DAQmx Read VIs, so there can be distinct instances for Finite vs. Continuous sampling.

 

Further, I picture that the task refnum would carry sufficient type info related to this timing config, such that downstream DAQmx functions can "know" what kind of Timing was set up -- Finite, Continuous, on-demand (the default if DAQmx Timing was never called at all), etc.

 

Then when that task refnum is wired into DAQmx Read, the most appropriate instance of DAQmx Read would show up.  And the corresponding input parameter names, help, default values, and default value *behavior* can all be tailored to that particular instance of DAQmx Read.  For example, perhaps the "# samples" input should become a *required* input for Continuous Sampling tasks, to force a decision and encourage further inspection of the advanced help.

 

Don't know how feasible something like this is, but it's definitely something that regularly trips up newcomers to DAQmx.

 

 

-Kevin P

"Without needing to clear "all" associated events, or EVEN opening MAX, I would like the ability to replace NI-USB Device "Doohickey123" serial number "junkgarbagestuff" with another NI-USB device of the same type-  perhaps a pop-up option like.... ""Replace no longer installed NI-53xx alias "gizmo"  with new NI-53xx?""  

 

Sure would help when I swap NI-xxxx devices amongst systems- especially the USB devices!

If you set up a change detection event like so:

change detection.png

 

There isn't anything in the event data node to tell you which line triggered the interrupt. I'm proposing we add something in the event data node for this event (like a bit field or a reference to the channel) so the programmer would know which line fired the event.

 

The workaround is to do a DAQmx read at this point and mask the data against the previous data, but I would prefer not to do this.
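
For reference, a hedged C-API sketch of that workaround: the change-detection callback re-reads the lines and diffs them against the previous state, because the event itself carries no line information. Device name "Dev1" and port0/line0:7 are assumptions.

```c
#include <stdio.h>
#include <NIDAQmx.h>

static uInt8 previousState[8];   /* last known state of Dev1/port0/line0:7 */

/* The event only says *that* something changed, not *which* line, so we read
 * the port again and compare against the previous state ourselves. */
static int32 CVICALLBACK onChangeDetected(TaskHandle task, int32 signalID,
                                          void *callbackData)
{
    uInt8 current[8];
    int32 sampsRead, bytesPerSamp;

    DAQmxReadDigitalLines(task, 1, 1.0, DAQmx_Val_GroupByChannel,
                          current, sizeof(current), &sampsRead,
                          &bytesPerSamp, NULL);

    for (int i = 0; i < 8; i++)
        if (current[i] != previousState[i])
            printf("line %d triggered the event\n", i);

    for (int i = 0; i < 8; i++)
        previousState[i] = current[i];
    return 0;
}

/* Task setup: detect rising and falling edges on all eight lines. */
static int32 setupChangeDetection(TaskHandle *task)
{
    DAQmxCreateTask("changeDetect", task);
    DAQmxCreateDIChan(*task, "Dev1/port0/line0:7", "",
                      DAQmx_Val_ChanForAllLines);
    DAQmxCfgChangeDetectionTiming(*task, "Dev1/port0/line0:7",
                                  "Dev1/port0/line0:7",
                                  DAQmx_Val_ContSamps, 1);
    DAQmxRegisterSignalEvent(*task, DAQmx_Val_ChangeDetectionEvent, 0,
                             onChangeDetected, NULL);
    return DAQmxStartTask(*task);
}
```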

We need a way to query an output task to determine its most recently output value.  Or alternately, a general ability to read back data from an output task's buffer.

 

This one's been discussed lots of times over the years in the forums but I didn't see a related Idea Exchange entry.  Most of the discussion I've seen has related to AO but I see no reason not to support this feature for DO as well.

 

There are many apps where normal behavior is to generate an AO waveform for a long period of time.  Some apps can be interrupted unexpectedly by users or process limit monitoring or safety range checking, etc.  When this happens, the output task will be in a more-or-less random phase of its waveform.  The problem is: how do we *gently* guide that waveform back to a safe default value like 0.0 V?  A pure step function is often not desirable.  We'd like to know where the waveform left off so we can generate a rampdown to 0.  In some apps, the waveform shape isn't directly defined or known by the data acq code.  So how can we ramp down to 0 if we don't know where to start from?  This is just one example of the many cases where it'd be very valuable to be able to determine the most recently updated output value.
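
To illustrate the gap, here is a hedged sketch of the only workaround available today, and only when the shutdown code also owns the waveform buffer: combine the TotalSampPerChanGenerated property (mentioned under Approach 2 below) with the known buffer length to recover roughly where a regenerated waveform left off. When the waveform isn't known to the data-acq code, there is nothing to index into, which is exactly why a readback property would help.

```c
#include <NIDAQmx.h>

/* Approximate the last value written to the DAC for a continuously
 * regenerated AO buffer that this code happens to know about. */
float64 lastGeneratedValue(TaskHandle aoTask,
                           const float64 waveform[], uInt32 waveformLen)
{
    uInt64 totalGenerated = 0;

    /* Total samples per channel generated so far (task must still be active). */
    DAQmxGetWriteTotalSampPerChanGenerated(aoTask, &totalGenerated);
    if (totalGenerated == 0)
        return waveform[0];

    /* For a regenerated buffer, the most recent sample is the total count
     * modulo the buffer length (approximate: counts samples transferred). */
    return waveform[(totalGenerated - 1) % waveformLen];
}
```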

 

Approach 1:

  Create a DAQmx property that will report back the current output value(s).  I don't know if/how this fits the architecture of the driver and various hw boards.  If it can be done, I'd ideally want to take an instantaneous snapshot of whatever value(s) is currently held in the DAC.  It would be good to be able to polymorph this function to respond to either an active task or a channel list.

 

Approach 2 (active buffered tasks only):

   We can currently query the property TotalSampPerChanGenerated as long as the task is still active.  But we can't query the task to read back the values stored in the buffer in order to figure out where that last sample put us.  It could be handy to be able to query/read the *output* buffer in a way analogous to what we can specify for input buffers.  I could picture asking to DAQmx Read 1 sample from the output buffer after setting RelativeTo = MostRecentSample , Offset = 0 or -1 (haven't thought through which is the more appropriate choice).  In general, why *not* offer the ability to read back data from our task's output buffers?

 

-Kevin P

At the new client... no shock to many of you, I get around.

 

I explained to some of my new compadres that DAQmx "tasks" only need to be created once, preferably during development!

I even created a new task in MAX using the DAQmx wizard, dragged it into the LabVIEW project explorer, and all of that!

 

I even went so far as to name the "AUX" temperature channel "armpit". Trust me, after five minutes delivering a .lvproj based on the "Continuous Measurement and Logging (DAQmx)" project template, it was impressive to the client that the plot "armpit" showed 37 °C on the chart.  Guess where the thermocouple was.

 

So, because I am that amazing, I showed them that they could drag and drop the task into MAX and use MAX to monitor my armpit temperature.  I even showed them that MAX could show them the wiring diagram!

 

"HOLD IT"! they said, The wiring diagram is right there! On SCREEN! per channel! 

That is where I just about lost my mind!  They wanted to see this connection diagram for another channel, and that worked! But there was no way to output that wonderful data!

 

"Can I create a Wiring Diagram for this channel, device or task?" were the next words out of their mouths.  I WAS STUNNED!  "Not today" I said, "I'll post that excellent idea!"

 

When doing PWM with DAQmx, error -200684 is thrown if a 0% duty cycle is attempted.  For situations where a true 0% is needed, this is problematic. There are a few workarounds available, but some are less than ideal.  

 

The suggestion here is to pause the output if a 0% duty cycle is attempted.
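
A hedged C-API sketch of the situation for an on-the-fly duty-cycle update (counter task and the 1 kHz frequency are assumptions): writing 0.0 raises error -200684, so callers end up clamping to a tiny non-zero value or stopping the task, rather than getting a true 0% output.

```c
#include <NIDAQmx.h>

/* Update the duty cycle of a running PWM (counter frequency-output) task. */
int32 updateDutyCycle(TaskHandle pwmTask, float64 dutyCycle)
{
    /* Workaround: a 0.0 duty cycle is rejected (-200684), so clamp to a small
     * minimum.  A true 0% (paused output) is what this idea asks for. */
    if (dutyCycle < 0.001)
        dutyCycle = 0.001;

    return DAQmxWriteCtrFreqScalar(pwmTask, 1 /* autoStart */, 10.0 /* timeout */,
                                   1000.0 /* Hz */, dutyCycle, NULL);
}
```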

I have a data acquisition NI-DAQmx/C++ program where I am continuously acquiring 5 channels of data at 40 kHz/channel and reading them in 0.1-second chunks.  This works perfectly for over 14 hours of continuous acquisition, but at 14 hours, 54 minutes and 47 seconds the program hangs due to an overflow in the int32 DAQmxInternalAIbuffer_Offset value sent to the DAQmxSetReadOffset() function.  In the DAQmxSetReadRelativeTo() function, I set the offset relative to the first sample using DAQmx_Val_FirstSample.  (See "32-bit limitation of the NI-DAQmx int32 DAQmxSetReadOffset() function?")

 

It would be very helpful for the DAQmxSetReadOffset() offset value to be 64 bits rather than the current int32.  This would make the function analogous to DAQmxGetReadTotalSampPerChanAcquired(), which returns a 64-bit value.  I understand that the offset is maintained internally as a 64-bit value, so perhaps this would not be too difficult to do.
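
For context, a sketch of the calls involved, using the 5-channel, 40 kHz, 0.1 s figures above (the hypothetical DAQmxSetReadOffset64 does not exist today). A signed 32-bit per-channel offset wraps after 2^31 / 40,000 ≈ 53,687 s, i.e. 14 h 54 min 47 s, which matches the hang described above.

```c
#include <NIDAQmx.h>

/* Read one 0.1 s chunk (4000 samples/channel at 40 kHz) starting at an
 * absolute per-channel sample index.  The int32 cast is where precision is
 * lost once firstSampleIndex exceeds 2^31 - 1 (about 14.9 hours at 40 kS/s);
 * a 64-bit setter such as a hypothetical DAQmxSetReadOffset64() would remove
 * the limit. */
int32 readChunkAtAbsolutePosition(TaskHandle aiTask, uInt64 firstSampleIndex,
                                  float64 *data, uInt32 arraySize)
{
    int32 read;

    DAQmxSetReadRelativeTo(aiTask, DAQmx_Val_FirstSample);
    DAQmxSetReadOffset(aiTask, (int32)firstSampleIndex);

    return DAQmxReadAnalogF64(aiTask, 4000, 10.0, DAQmx_Val_GroupByChannel,
                              data, arraySize, &read, NULL);
}
```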

 

I hope that National Instruments fixes this limitation in their API, not just for 64-bit Windows, but also for 32-bit Windows because a lot of us are still using 32-bit compilers and our users are using Windows XP.  Perhaps it could be implemented as a separate DAQmxSetReadOffset64() 64-bit function for the 32-bit Windows.

 

Thank you,
Bill Anderson

 

As documented in a previous post, it is currently impossible to install the nidaqmx-python library into a Docker container. Enabling this functionality would create an opportunity for software teams to build cutting edge big data pipelines for measurement instruments using container technology. This could also optimize development time by including custom DAQ code in continuous integration pipelines.

I bought an NI USB-6251 BNC, but support explained to me that it has no Linux support out of the box. I will now have to find out how to use it on Linux systems myself (perhaps with help from the forum). It would be a nice feature if it shipped with Linux support.

When using a buffered counter output task, the initial delay value is not used at all.  Instead, the user specifies an array of high and low times and the first low time is used as the initial delay.

 

If the output pulse train is repeated multiple times (or continuously), the first low time represents both the time from the trigger until the first pulse as well as the time between the last pulse and the first pulse.
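
A hedged C-API sketch of today's behavior (counter name "Dev1/ctr0" and the pulse times are assumptions): once the task is buffered, the initialDelay argument to DAQmxCreateCOPulseChanTime is ignored and lowTimes[0] does double duty as both the trigger-to-first-pulse delay and the gap between repetitions, which is exactly the coupling this idea wants to break.

```c
#include <NIDAQmx.h>

/* Continuous buffered counter output: the caller starts the task afterwards. */
int32 setupBufferedPulseTrain(TaskHandle *coTask)
{
    float64 highTimes[] = { 0.001, 0.001, 0.001 };
    float64 lowTimes[]  = { 0.500, 0.001, 0.001 };   /* [0] doubles as the delay */
    int32   written;

    DAQmxCreateTask("pulseTrain", coTask);
    /* The 0.25 s initialDelay below is ignored once the task is buffered. */
    DAQmxCreateCOPulseChanTime(*coTask, "Dev1/ctr0", "", DAQmx_Val_Seconds,
                               DAQmx_Val_Low, 0.25, 0.001, 0.001);
    DAQmxCfgImplicitTiming(*coTask, DAQmx_Val_ContSamps, 3);
    return DAQmxWriteCtrTime(*coTask, 3, 0, 10.0, DAQmx_Val_GroupByChannel,
                             highTimes, lowTimes, &written, NULL);
}
```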

 

It would be desirable to decouple these parameters by allowing the option to use Initial Delay on buffered counter output tasks (e.g. with a channel property).  Here are a couple of use cases off the top of my head where Initial Delay would be very helpful (if not required):

 

1.  This is the case I ran into (posted here): if you want to repeat a pulse train continuously every N seconds, you have to either have that N-second delay at the start of the task or use another counter as a trigger source.  Depending on the high and low times, you might be able to get away with writing new values to the counter on the fly, but this isn't a universal solution.

 

2.  If you wanted to synchronize multiple continuous buffered counter output tasks (each sharing a fixed desired period) to a common trigger source but with different initial delays, you would be unable to do so, since the requirement of a different initial delay would affect the period of your actual signal.  You would have to compensate by tweaking the other high/low times in your waveform (giving you something you don't really want).

Hello,

 

DAQmx VIs only work when the NI Device Loader service is running.

 

If this Windows service is not running, DAQmx functions generate errors like "Device not found", "undefined board", "undefined hardware", and so on.

 

A few weeks ago I got such an error, and it took me a long time to pinpoint the real cause!

A Windows or software update had changed the service startup sequence, and the NI Device Loader service was no longer starting.

The solution to this problem was to configure the NI Device Loader service to force a restart on start failure.

 

It would be nice if DAQmx functions could generate the "right" error.

 

An error like: "NI services are not running; please check their current state. DAQmx devices cannot be accessed while the NI Device Loader service is not running."

 

This problem also causes problems in MAX! (The device tree view takes a long time to expand, and the device self-test fails.)

 

 

Thanks for giving kudos to this idea, which could help with understanding Windows services problems.

Is there any technical reason why this cannot be added to DAQmx?  M Series boards still have features that cannot be found on X or S Series, such as analog current input.

Ideally, it would be best to be able to have multidevice tasks for both M and X Series at the same time.