Data Acquisition Idea Exchange


Hello,

 

How often have you built LabVIEW applications using simulated DAQmx devices?

And how often were you limited by the default behaviour of the simulated devices (a sine wave on analog inputs, a counter square signal on digital inputs, and so on)?

 

It would be nice if DAQmx simulated devices offered the ability to modify the default behaviour of the simulated inputs through dedicated popups.

 

It would be nice, for each task linked to a simulated DAQmx device, to be able to launch a popup window that would:

 

  • For digital inputs, give the ability to modify the current binary value of each configured channel.
  • For analog inputs, give the ability to choose between a fixed value, a sine wave, a square signal, white noise, and so on.
  • For digital outputs, give the ability to view the currently set values.
  • For analog outputs, give the ability to view the current simulated output value on a waveform chart.

 

A more powerful tool could also integrate a simulated channel-switching mechanism, so that a simulated output could be linked to a simulated input.

 

This feature could be a good way to create an application that simulates a complete process, and that application could then be used to validate a complete system (a kind of SIL architecture).

 

Another idea: a complete DAQmx simulation API.

 

  • Creation of an API that could instantiate a simulated DAQmx device (which would be visible in MAX)
    • Takes the place of the current, limited simulated DAQmx device
  • This device could then be accessed by other applications through DAQmx
  • The API would have access to all channels of this simulated device
  • The API could force, programmatically, the values of the simulated input channels according to a realistic process model

 

Something like this:

 

 

 DaqMxSimulatedAPI.PNG
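
To be clear about what I mean, here is a purely hypothetical Python sketch. Nothing below exists in any NI API today: SimulatedDevice, force_input and read_output are invented names, shown only to illustrate how a process model could drive a simulated device.

```python
# Purely hypothetical sketch -- no such simulation API exists in NI-DAQmx today.
# SimulatedDevice, force_input and read_output are invented names.
import math
import time


class SimulatedDevice:
    """Stand-in for a simulated DAQmx device that would be visible in MAX."""

    def __init__(self, name):
        self.name = name      # e.g. "SimDev1"
        self._inputs = {}     # channel name -> forced value

    def force_input(self, channel, value):
        """Force the value a DAQmx task will read on a simulated input."""
        self._inputs[channel] = value

    def read_output(self, channel):
        """Read back the value a DAQmx task last wrote to a simulated output."""
        raise NotImplementedError("illustration only")


# A realistic "process model" driving the simulated inputs, here a slow
# temperature oscillation on ai0:
dev = SimulatedDevice("SimDev1")
start = time.time()
for _ in range(100):
    t = time.time() - start
    dev.force_input("ai0", 25.0 + 5.0 * math.sin(0.1 * t))
    time.sleep(0.1)
```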

 

For me it is very hard to spot which channel is connected in the MAX schematic below:

 

image.png

 

By changing the color of the connected channel, it is much easier to spot:

 

image.png

Just ran into a situation where I need to stream a lot of data to TDMS.  The only problem is that I need to store additional metadata with the channels.  I could go through all of the generated TDMS files and insert it after the fact, but this is kind of tedious.  I propose a way to add metadata to the channel.  My first thought was to use a variant input on Create DAQmx Channel, but some of the polymorphic instances already have really full connector panes.  So I am now thinking to just add a variant property to the Channel Property Node.  When logging to TDMS, the variant attributes could be put in the metadata of the channel.  Something similar could be done for the group, so that we can have additional group metadata.

 

Metadata I'm currently thinking about includes the sensor serial number and calibration data.  I'm sure there is plenty of other information we would like to store with the TDMS file.
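
Until DAQmx itself supports this, the after-the-fact insertion can at least be scripted rather than done by hand. A minimal sketch using the third-party nptdms Python library, with made-up group, channel and property names:

```python
# Sketch: writing per-channel and per-group metadata into a TDMS file with the
# third-party nptdms library. Names and property values here are examples.
import numpy as np
from nptdms import TdmsWriter, GroupObject, ChannelObject

data = np.sin(np.linspace(0, 10, 1000))

group = GroupObject("Measurements", properties={"operator": "example"})
channel = ChannelObject(
    "Measurements", "ai0", data,
    properties={
        "sensor_serial_number": "SN-12345",   # example metadata
        "calibration_gain": 1.0023,
        "calibration_offset": -0.004,
    },
)

with TdmsWriter("logged_data.tdms") as writer:
    writer.write_segment([group, channel])
```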

Based on this question, I would like to add a new category of events to LabVIEW: MAX events.

 

This category could contain the following events:

- Hardware added

- Hardware removed

- Configuration changed

    - Scales

    - Channels

    - Tasks

 

If you know of other useful events, please post them.
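
Until events like these exist, hardware added/removed can at least be approximated by polling the device list. A rough sketch with nidaqmx-python (the poll interval and duration are arbitrary):

```python
# Sketch: polling workaround for the missing "hardware added/removed" events,
# using nidaqmx-python. Detects changes by diffing the device list.
import time
import nidaqmx.system

system = nidaqmx.system.System.local()
known = {dev.name for dev in system.devices}

for _ in range(60):                      # poll for one minute as a demo
    time.sleep(1.0)
    current = {dev.name for dev in system.devices}
    for name in current - known:
        print(f"Hardware added: {name}")
    for name in known - current:
        print(f"Hardware removed: {name}")
    known = current
```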

It would be great if the full DAQmx library supported all NI data acquisition products on Windows, Mac OS X and Linux. The situation right now is too much of a hodge-podge of diverse drivers with too many limitations. There's an old, full DAQmx library that supports older devices on older Linux systems, but it doesn't look like it's been updated in years.  DAQmx Base is available for more current Linux and Mac OS systems, but doesn't support all NI devices (especially newer products).  DAQmx Base is also quite limited and can't do a number of things the full DAQmx library can.  It's also fairly bloated and slow compared to DAQmx.  While I got my own application working under both Linux and Windows, there are a number of things about the Linux version that just aren't as nice as the Windows version right now.  I've seen complaints in the forums from others who have abandoned their efforts to port their applications from Windows to Mac OS or Linux because they don't see DAQmx Base as solid or "commercial-grade" enough.

 

I'd really like to be able to develop my application, easily port it to any current Windows, Mac or Linux system, and have it support any current NI multifunction DAQ device with a fast, capable and consistent C/C++ API.

 

Anyone else see this as a priority for NI R&D?

DAQmx allows you to register dynamic events, but how about NI-SCOPE, NI-FGEN, and so on? If you have an event that you can route to something in hardware, it should also be possible to register it as a dynamic event in LabVIEW.

 

Hi!


The PCI Express bus has become more and more popular.
PCIe introduces a new type of interrupt: MSI/MSI-X.
Today the VISA driver can work only with old-style (legacy) interrupts.

 

Let me explain the main difference:

Legacy interrupts use the INTA/B/C/D signals (these signals existed on the PCI bus but do not exist on the PCIe bus).
These signals (on PCIe, actually Assert_INTA/B/C/D message packets) are shared between all PCIe devices,
so the VISA driver spends a lot of time working out which device produced the interrupt.

 

An MSI interrupt is actually a Memory Write packet to a preallocated address.
In this case VISA already knows which device produced the interrupt.
An MSI interrupt can also carry different interrupt vectors inside the Memory Write packet, so it would be very helpful to get access to the vector value too.

 

I'm requesting MSI/MSI-X interrupt support in the VISA driver.

 

Thanks a lot!

When doing PWM with DAQmx, error -200684 is thrown if a 0% duty cycle is attempted.  For situations where a true 0% is needed, this is problematic. There are a few workarounds available, but some are less than ideal.  

 

The suggestion here is to pause the output when a 0% duty cycle is requested.
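
A software-side sketch of that behaviour with nidaqmx-python: stop the task when 0% is requested and restart it otherwise. The device and counter names are examples, and the idle state is assumed to be the default (low):

```python
# Sketch: emulating a true 0% duty cycle by stopping the pulse train, since
# DAQmx raises error -200684 for duty_cycle = 0. "Dev1/ctr0" is an example.
import nidaqmx
from nidaqmx.constants import AcquisitionType
from nidaqmx.types import CtrFreq

task = nidaqmx.Task()
task.co_channels.add_co_pulse_chan_freq("Dev1/ctr0", freq=1000.0, duty_cycle=0.5)
task.timing.cfg_implicit_timing(sample_mode=AcquisitionType.CONTINUOUS)
task.start()
running = True


def set_duty_cycle(duty):
    """Update the PWM duty cycle, treating 0% as 'output paused'."""
    global running
    if duty <= 0.0:
        if running:
            task.stop()   # counter output idles (low by default): effective 0%
            running = False
    else:
        task.write(CtrFreq(freq=1000.0, duty_cycle=duty))
        if not running:
            task.start()
            running = True
```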

By default, DAQmx terminal constants/controls only show a subset of what is really available.  To see everything, you have to right-click the terminal and select "I/O Name Filtering", then check "Include Advanced Terminals":

 

Untitled 1 Block Diagram _2013-06-04_16-16-29.png

AdvancedTerminals.png

 

I guess this is intended to prevent new users from being overwhelmed.  However, what it really does is create a hurdle that prevents them from configuring their device in a more "advanced" manner, since they have no idea that the name filtering box exists.

 

I am putting "advanced" in quotes because I find the distinction rather arbitrary.

 

 

As a more experienced DAQmx user, I change the I/O name filtering literally every time I put down a terminal, without thinking about it (who can keep track of which subset of DAQmx applications is considered "advanced"?).  The worst part is trying to explain how to do something to newer users and having to tell them to change the I/O name filtering every single time (or, if you don't, you'll almost certainly get a response back like this).

 

 

 

Why not make the so-called "advanced" terminals show in the drop-down list by default?

As documented in a previous post, it is currently impossible to install the nidaqmx-python library into a Docker container. Enabling this functionality would create an opportunity for software teams to build cutting edge big data pipelines for measurement instruments using container technology. This could also optimize development time by including custom DAQ code in continuous integration pipelines.

Often I need to create a simulated version of a real device in order to debug something. It would be nice if I could right-click on a real device in MAX and select "Create simulated version of this device", and have it create a simulated copy of that device. This isn't so bad with standalone devices, but it could make simulating modular devices easier and reduce the likelihood of a mis-click creating an incorrect simulated device.

Reasoning: Those of us who work with multiple systems need to import/export configurations often to switch from system A to B to C and so on, since names often conflict between projects.

Idea: Provide selectable profiles in MAX. This way you could choose which system's configuration you want to work with, without exporting and importing configurations all the time.

When using TDMS on cRIO systems, there are a couple of considerations that don't normally come into play when storing data as TDMS files:

 

* The current version of the file system used on cRIO controllers degrades significantly in performance if -any- folder on the cRIO contains more than ~100 files. (The workaround is a more elaborate folder structure, but a lot of that structuring would exist only to work around this shortcoming of the (old) file system version.)

* The drives are SSDs with limited lifetime despite wear-leveling etc. Writing and re-writing these index files adds unnecessary overhead and wear on the disks.

* They use up space, which is (very) limited on some cRIOs (even if not by much). (People may be quick to point out that you can add a thumb drive, but the downside is that thumb drives, as far as I know, need to be FAT-formatted. The cRIO file system is atomic and fail-safe, so you pretty much don't have to worry about sudden power outages and interruptions mid-write; on a thumb drive you would have all of these issues, which could in the worst case corrupt the whole drive.)

 

I propose adding a boolean (defaulting to false) on TDMS Open called "suppress writing TDMS index to disk" or some smarter name along those lines. This would still allow the TDMS index to be created, but it would remain in memory only and never be written to disk. When TDMS Close is called, the memory is released and the TDMS file is written to disk without the index file. If the same file is opened again, extra time would be needed since the index would have to be re-created (again in memory only, if the boolean says so), but I think for the most part this overhead would be more than acceptable.

 

I'm not sure how "simple" modifying the TDMS Open and Close functions would be, but I do know there are many cases where this flag would make sense.
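
For what it's worth, the index file really is redundant: third-party readers such as the nptdms Python library rebuild the metadata index from the main file on their own, for example:

```python
# Sketch: reading a .tdms file without its .tdms_index companion.
# nptdms reconstructs the metadata index itself from the main file.
from nptdms import TdmsFile

tdms_file = TdmsFile.read("logged_data.tdms")   # no .tdms_index required
for group in tdms_file.groups():
    for channel in group.channels():
        print(group.name, channel.name, len(channel))
```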

If you set up a change detection event like so:

change detection.png

 

There isn't anything in the event data node to tell you which line triggered the interrupt. I'm proposing we add something to the event data node for this event (like a bit field or a reference to the channel) so the programmer knows which line fired the event.

 

The workaround is to do a DAQmx read at that point and mask the data against the previous data, but I would prefer not to do this.
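
For reference, here is roughly what that workaround looks like in nidaqmx-python (device and port names are examples; the callback signature and integer return value are what the driver requires):

```python
# Sketch of the read-and-mask workaround: on each change detection event, read
# the port and XOR against the previous value to find which lines toggled.
import nidaqmx
from nidaqmx.constants import AcquisitionType, LineGrouping, Signal

task = nidaqmx.Task()
task.di_channels.add_di_chan(
    "Dev1/port0/line0:7", line_grouping=LineGrouping.CHAN_FOR_ALL_LINES)
task.timing.cfg_change_detection_timing(
    rising_edge_chan="Dev1/port0/line0:7",
    falling_edge_chan="Dev1/port0/line0:7",
    sample_mode=AcquisitionType.CONTINUOUS)

previous = {"value": 0}


def on_change(task_handle, signal_type, callback_data):
    value = task.read()                     # current port state
    changed = value ^ previous["value"]     # bit field of lines that toggled
    previous["value"] = value
    print(f"lines changed: {changed:08b}")
    return 0                                # required by the DAQmx callback


task.register_signal_event(Signal.CHANGE_DETECTION_EVENT, on_change)
task.start()
```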

Absolute encoders have been around for some time, but NI's motion hardware still supports only incremental encoders.  I would like to see support for absolute encoders in NI Motion or NI SoftMotion.

 

We need a way to query an output task to determine its most recently output value.  Or alternately, a general ability to read back data from an output task's buffer.

 

This one's been discussed lots of times over the years in the forums but I didn't see a related Idea Exchange entry.  Most of the discussion I've seen has related to AO but I see no reason not to support this feature for DO as well.

 

There are many apps where normal behavior is to generate an AO waveform for a long period of time.  Some apps can be interrupted unexpectedly by users or process limit monitoring or safety range checking, etc.  When this happens, the output task will be in a more-or-less random phase of its waveform.  The problem is: how do we *gently* guide that waveform back to a safe default value like 0.0 V?  A pure step function is often not desirable.  We'd like to know where the waveform left off so we can generate a rampdown to 0.  In some apps, the waveform shape isn't directly defined or known by the data acq code.  So how can we ramp down to 0 if we don't know where to start from?  This is just one example of the many cases where it'd be very valuable to be able to determine the most recently updated output value.

 

Approach 1:

  Create a DAQmx property that will report back the current output value(s).  I don't know if/how this fits the architecture of the driver and various hw boards.  If it can be done, I'd ideally want to take an instantaneous snapshot of whatever value(s) is currently held in the DAC.  It would be good to be able to polymorph this function to respond to either an active task or a channel list.

 

Approach 2 (active buffered tasks only):

   We can currently query the property TotalSampPerChanGenerated as long as the task is still active.  But we can't query the task to read back the values stored in the buffer in order to figure out where that last sample put us.  It could be handy to be able to query/read the *output* buffer in a way analogous to what we can specify for input buffers.  I could picture asking DAQmx Read for 1 sample from the output buffer after setting RelativeTo = MostRecentSample, Offset = 0 or -1 (I haven't thought through which is the more appropriate choice).  In general, why *not* offer the ability to read back data from our task's output buffers?
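
Until something like this exists, the closest user-side approximation I know of is a shadow copy of the waveform combined with TotalSampPerChanGenerated. A sketch in nidaqmx-python, which of course only helps when the data acq code knows the waveform (names, rates and lengths are examples):

```python
# Sketch: approximating "last output value" in software by combining a shadow
# copy of the regenerated buffer with TotalSampPerChanGenerated. Only works
# when the waveform is known, which is exactly the limitation described above.
import numpy as np
import nidaqmx
from nidaqmx.constants import AcquisitionType

waveform = np.sin(np.linspace(0, 2 * np.pi, 1000))   # shadow copy we keep

task = nidaqmx.Task()
task.ao_channels.add_ao_voltage_chan("Dev1/ao0")
task.timing.cfg_samp_clk_timing(10000.0, sample_mode=AcquisitionType.CONTINUOUS)
task.write(waveform)
task.start()

# ... later, on an unexpected interruption:
generated = task.out_stream.total_samp_per_chan_generated
last_value = waveform[(generated - 1) % len(waveform)]
task.stop()

# Gentle ramp from the (approximate) last value back to 0 V instead of a step.
ramp = np.linspace(last_value, 0.0, 500)
task.timing.cfg_samp_clk_timing(
    10000.0, sample_mode=AcquisitionType.FINITE, samps_per_chan=len(ramp))
task.write(ramp)
task.start()
task.wait_until_done()
```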

 

-Kevin P

It is a frequent requirement to make measurements on production lines. Position on these is often tracked with rotary encoders (https://en.wikipedia.org/wiki/Rotary_encoder). Many NI devices can accept the quadrature pulse train from such a device and correctly produce a current position count. The information in the two-phase pulse train allows the counter to correctly track forward and reverse motion.

 

What would be very useful is a callback in NI-DAQmx that is called after every n pulses, ideally with a flag to indicate whether the counter is higher or lower than the previous value, i.e. the direction.

 

This has recently been discussed on the Multifunction DAQ board here: http://forums.ni.com/t5/Multifunction-DAQ/quadrature-encoder-based-triggering/td-p/3125468 . So I am not alone in requesting something more programmer-friendly than the workaround offered there.
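
In the meantime, a software-timed approximation is possible, even though it isn't the hardware-clocked callback being requested. A rough sketch with nidaqmx-python (counter name, pulses per revolution and threshold are examples; positions are in the channel's default units, degrees):

```python
# Sketch: software-timed approximation of an "every n counts" callback with a
# direction flag, by polling a CI angular encoder channel. Not hardware-timed.
import time
import nidaqmx

DEGREES_PER_EVENT = 10.0   # fire the callback every 10 degrees of travel


def position_callback(position, direction):
    print(f"position={position:.1f} deg, direction={direction:+d}")


task = nidaqmx.Task()
task.ci_channels.add_ci_ang_encoder_chan("Dev1/ctr0", pulses_per_rev=1024)
task.start()

last_reported = task.read()
for _ in range(1000):
    position = task.read()                 # on-demand read of the position
    delta = position - last_reported
    if abs(delta) >= DEGREES_PER_EVENT:
        position_callback(position, 1 if delta > 0 else -1)
        last_reported = position
    time.sleep(0.001)
```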

 

 

I bought an NI USB-6251 BNC, but support explained to me that it has no Linux support out of the box. I will now have to find out how to use it on Linux systems myself (perhaps with help from the forum). It would be a nice feature if it shipped with Linux support.

Every time I have to work with an NI DAQ device, the first thing I need to know is which pins can or can't do something.

Currently this involves looking through something like 7 different documents to find little bits of information and bringing them back to your application.

 

A block diagram could easily be a reference point for the rest of the documentation (you want to know about pin I/O for your device? Look at this document).

Plus, a good block diagram can tell you what you need to know quickly and clearly. A picture is worth a thousand words.

 

Some might find the current documentation adequate, but personally I would really like to have a block diagram that represents the internals and capabilities of the pins and the device in general. Most microcontrollers have this, and it is an extremely useful tool. So why not have one for DAQ devices as well?

Hi all,

 

Any series card should have a feature listing the different parameters it supports, such as voltage or temperature (perhaps through a property node), so that the user can configure only the supported parameters.

Ex: An SCXI-1520 module can be configured for strain, pressure or voltage, but this information is only available from its manual or when a task is created in MAX. In LabVIEW I can't get this information directly: the software lets me configure the 1520 for temperature as well, and I only find out that the module doesn't support temperature parameters once I try to acquire.
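
For what it's worth, the modern DAQmx API can report supported measurement types per device, at least for hardware the current driver supports. A minimal sketch with nidaqmx-python (the device name is an example):

```python
# Sketch: querying which AI measurement types a device actually supports,
# using nidaqmx-python. "SC1Mod1" is an example device name.
import nidaqmx.system

device = nidaqmx.system.Device("SC1Mod1")
print(device.ai_meas_types)   # e.g. [UsageTypeAI.VOLTAGE, UsageTypeAI.STRAIN_STRAIN_GAGE, ...]
```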

 

What do you all think? Please share your ideas on this.

 

Regards,

geeta