AutoStart is an XNET feature that is not intuitively obvious. One of my co-workers was trying to have several queued sessions start with slight delays between them (using the Start Time Offset CAN frame property). The code wasn't working properly because preloading the array of frames to transmit with XNET Write implicitly started the sessions, which caused some frames to transmit early. An NI Applications Engineer on a support request was unable to identify AutoStart as the issue; my co-worker only spotted the AutoStart property a few hours later.
Note that no XNET Write documentation mentions the AutoStart feature. The XNET Start VI's context help doesn't mention it either; the full documentation only indirectly notes that calling XNET Start is optional and points you to the AutoStart property's help for additional details.
I would recommend adding an optional AutoStart input to XNET Write (defaulting to true), which would match the DAQmx API's Write VI. It would also be a good idea for the XNET Read documentation to note somewhere that input sessions always start automatically.
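For illustration, here is roughly what the fix looked like, sketched against the NI-XNET C API. This is a minimal sketch, not a definitive implementation: the database, cluster, frame, and interface names are placeholders, and the exact property constant and type should be checked against nixnet.h.

```c
#include "nixnet.h"

void preload_then_start(u8 *frameBuffer, u32 bufferBytes)
{
    nxSessionRef_t session;
    u8 autoStart = 0;   /* boolean property; check exact type in nixnet.h */

    nxCreateSession("MyDatabase", "MyCluster", "FrameA", "CAN1",
                    nxMode_FrameOutQueued, &session);

    /* Disable AutoStart BEFORE the first write; otherwise nxWriteFrame
     * implicitly starts the session and preloaded frames go out early. */
    nxSetProperty(session, nxPropSession_AutoStart,
                  sizeof(autoStart), &autoStart);

    /* Preload the transmit queue; nothing is sent yet. */
    nxWriteFrame(session, frameBuffer, bufferBytes, 0.0);

    /* Start explicitly; Start Time Offset delays now apply as intended. */
    nxStart(session, nxStartStop_Normal);
}
```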
When doing PWM with DAQmx, error -200684 is thrown if a 0% duty cycle is attempted. For situations where a true 0% output is needed, this is problematic. There are a few workarounds available, but some are less than ideal.
The suggestion here is to pause the output if a 0% duty cycle is attempted.
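As a stopgap, here is a hedged sketch of the stop-the-task workaround using the NI-DAQmx C API. "Dev1/ctr0" and the 1 kHz base frequency are placeholders, and on-the-fly duty updates require hardware that supports them.

```c
#include <NIDAQmx.h>

static TaskHandle pwmTask;

void init_pwm(void)
{
    DAQmxCreateTask("", &pwmTask);
    /* Idle state Low: the line rests at 0 whenever the task is stopped. */
    DAQmxCreateCOPulseChanFreq(pwmTask, "Dev1/ctr0", "", DAQmx_Val_Hz,
                               DAQmx_Val_Low, 0.0, 1000.0, 0.5);
    DAQmxCfgImplicitTiming(pwmTask, DAQmx_Val_ContSamps, 1000);
    DAQmxStartTask(pwmTask);
}

void set_duty(float64 duty)
{
    if (duty <= 0.0) {
        /* Writing 0.0 would raise -200684; stop instead and let the
         * idle state hold a true 0% output. */
        DAQmxStopTask(pwmTask);
    } else {
        /* autoStart = 1 restarts the task if it was stopped; on hardware
         * that supports on-the-fly updates this also works while running. */
        DAQmxWriteCtrFreqScalar(pwmTask, 1, 10.0, 1000.0, duty, NULL);
    }
}
```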
I can't say I've investigated this at all, and maybe things are already moving this way: all hardware should store its own parameters on-board, available to be read back by software, for example on startup. Things like valid voltage, current, and frequency ranges, available channels, etc.
It would always be better for software to read those parameters from the hardware itself instead of relying on model numbers, manuals, and drivers. Software could then easily adapt to different models of the same basic type: plug in new hardware and it would announce what its main capabilities are.
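In fairness, DAQmx already exposes a slice of this through device property queries in its C API. A minimal sketch of the kind of self-description I mean, assuming a DAQmx device named "Dev1":

```c
#include <NIDAQmx.h>
#include <stdio.h>

void dump_capabilities(void)
{
    char    product[256], chans[2048];
    float64 ranges[32] = {0};   /* flattened (min, max) pairs in volts */
    float64 maxRate = 0.0;

    DAQmxGetDevProductType("Dev1", product, sizeof(product));
    DAQmxGetDevAIPhysicalChans("Dev1", chans, sizeof(chans));
    DAQmxGetDevAIVoltageRngs("Dev1", ranges, 32);
    DAQmxGetDevAIMaxSingleChanRate("Dev1", &maxRate);

    printf("%s: AI channels %s\nfirst range [%g, %g] V, max rate %g S/s\n",
           product, chans, ranges[0], ranges[1], maxRate);
}
```

The idea would extend this model to every product line, so software never has to hard-code capabilities per model number.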
Counter tasks can only take one channel, due to the nature of timed signals, obviously. When setting up a system with 16 DUTs with counter outputs, this requires 16 tasks, every single one of which has to be painstakingly created and configured. (As an aside: defining a tab order still seems to be a mystery to NI's programmers, even though LabVIEW supports it.)
Wouldn't it be nice to have a Ctrl+C/Ctrl+V sequence for tasks, after which you only modify the physical channel? IMHO: yes.
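Until then, one workaround is to script the task creation instead of clicking through MAX. A minimal DAQmx C sketch, assuming a placeholder "Dev1/ctr<n>" naming scheme (the counters may well live across several devices in practice) and identical pulse specs:

```c
#include <NIDAQmx.h>
#include <stdio.h>

#define NUM_DUTS 16

void create_counter_tasks(TaskHandle tasks[NUM_DUTS])
{
    char ctr[64];
    for (int i = 0; i < NUM_DUTS; i++) {
        /* Identical configuration for every DUT; only the physical
         * channel string changes. */
        snprintf(ctr, sizeof(ctr), "Dev1/ctr%d", i);
        DAQmxCreateTask("", &tasks[i]);
        DAQmxCreateCOPulseChanFreq(tasks[i], ctr, "", DAQmx_Val_Hz,
                                   DAQmx_Val_Low, 0.0, 1000.0, 0.5);
        DAQmxCfgImplicitTiming(tasks[i], DAQmx_Val_ContSamps, 1000);
    }
}
```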
RIO Scan Interface (RSI) support for the NI 9361 counter module would allow it to be used in scan mode within NI 9144/9145 EtherCAT chassis. I have several use cases for this, mostly ones that would benefit from distributed acquisition and end-user-configurable I/O.
We really need a hard drive cRIO module for long-term monitoring and for reliably storing large amounts of data remotely. Two variants come to mind:
1. Solid-state drive: fast, reliable, and durable, with extremely high data rates. It would be a very high-priced module, but it could be made to handle extreme temperatures and harsh conditions. It should be available in different capacities, varying in price.
2. Conventional hard drive: this would give any user the ability to store large amounts of data, on the order of hundreds of gigabytes. This type should also come in varying storage capacities.
For this to be usable:
1. It would need to support a file system other than FATxx. The risk of data corruption from power loss or power cycling during recording makes anything using that file system completely unreliable and utterly useless for long-term monitoring: you can record for two months straight, then something goes wrong and you have nothing but a dead USB drive. Any file system that is not so susceptible to corruption on power loss would be fine: Reliance, NTFS, etc.
2. You should be able to plug in multiple modules and RAID them together for redundancy. This would ensure data security and increase the usability of the cRIO for long-term remote monitoring in almost any situation.
Current cRIO storage issues:
We use NI products primarily in our lab and LabVIEW is awesome. I hope that by being very forward about our issues we will not upset anyone or turn anyone away from NI products. However, attempting to use a cRIO for long-term remote monitoring has brought the current storage shortfalls to the forefront, and data loss has cost us dearly. These new hard drive modules would solve all the shortfalls of the cRIO's current storage options. The biggest limitation of the cRIO for long-term monitoring right now is that it does not support a reliable file system on any external storage. The SD card module has extremely fast data transfer rates, but if power is lost while the SD card is mounted, not only is all the data lost, the card also has to be physically removed from the device and reformatted with a PC. Even with the best UPS, this module is not suitable for long-term monitoring. USB drives have a much slower data transfer rate and are susceptible to the same corruption on power loss.
When we have brought up these issues in the past, the solution offered was to set up a reliable power backup system. It seems that those suggesting this have never tried to run a large application on a device with no physical access, say 500 miles away. Unfortunately, the cRIO is susceptible to freezing or hanging and becoming so unresponsive over the network that it cannot be rebooted remotely at all (yes, even with the setting that halts all processes if TCP becomes unresponsive). We would have to send someone all the way out to the device to hit the reset button or cycle power. Programs freeze, operating systems freeze or crash, drivers crash; stuff happens. None of that should put the stored data at risk.
I would put money on something like this already being developed by NI. I hope you think the module is a good idea, even if you don't agree with all the problems I brought up. I searched around for an idea like this; my apologies if this is a re-post.
Just ran into a situation where I need to stream a lot of data to TDMS. The only problem is that I need to store additional metadata with the channels. I could go through all of the generated TDMS files and insert it after the fact, but that is tedious. I propose a way to add metadata to the channel. My first thought was a variant input on DAQmx Create Channel, but some of the polymorphic instances already have really full connector panes. So now I'm thinking of just adding a variant property to the channel property node. When logging to TDMS, the variant attributes would be written into the channel's metadata. Something similar could be done at the group level so we can have additional group metadata as well.
Metadata that I'm currently thinking about could include sensor serial number and calibration data. I'm sure there is plenty of other information we would like to store with the TDMS file.
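To make the proposal concrete, here is how it might look in the DAQmx C API. DAQmxConfigureLogging is the real TDMS-logging entry point; DAQmxSetChanMetadata is hypothetical, invented purely to illustrate the requested behavior.

```c
#include <NIDAQmx.h>

void log_with_metadata(TaskHandle task)
{
    /* Real API: stream the task's samples to a TDMS file. */
    DAQmxConfigureLogging(task, "C:\\data\\run1.tdms",
                          DAQmx_Val_LogAndRead, "Run 1",
                          DAQmx_Val_OpenOrCreate);

    /* HYPOTHETICAL calls, not in today's driver: attach arbitrary
     * metadata that DAQmx would write into the channel's TDMS
     * properties alongside the samples. */
    DAQmxSetChanMetadata(task, "Dev1/ai0", "SensorSerial", "SN-001234");
    DAQmxSetChanMetadata(task, "Dev1/ai0", "CalDate", "2024-01-15");
}
```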
Often I need to create a simulated version of a real device in order to debug something. It would be nice if I could right-click a real device in MAX and select "Create simulated version of this device". This isn't so bad with standalone devices, but it could make simulating modular devices easier and reduce the likelihood of a mis-click creating the wrong simulated device.
When a DI change detection task runs, the first sample shows the DI state *after* the first detected change. There's no clear way to know what the DI state was just *before* the first detected change, i.e. its *initial* state.
This idea has some overlap with one found here, but this one isn't restricted to usage via DAQmx Events and an Event Structure. Forum discussions that prompted this suggestion can be seen here and here.
The proposal is to provide an addition to the API such that an app programmer can determine both initial state just before the first detected change and final state resulting from each detected change. The present API provides only the latter.
Full state knowledge before and after each change can be used to identify the changed lines. (Similarly, initial state and change knowledge could be used to identify post-change states.)
My preferred approach in the linked discussions is to expose the initial state through a queryable property node. The original poster preferred a distinct task type in which the initial state would be the first returned sample. A couple of good workarounds were proposed in those threads by a contributor from NI, but I continue to think direct API support would be appropriate.
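For reference, a sketch of the snapshot-style workaround in the DAQmx C API, assuming lines "Dev1/port0/line0:7". Note the race window between the snapshot and the task start, which is exactly why native API support would be cleaner.

```c
#include <NIDAQmx.h>

void start_with_initial_state(TaskHandle cdTask, uInt8 initial[8])
{
    TaskHandle snap;
    int32 read = 0, bytesPerSamp = 0;

    /* Software-timed snapshot of the current line states. A change
     * arriving between this read and DAQmxStartTask is still missed. */
    DAQmxCreateTask("", &snap);
    DAQmxCreateDIChan(snap, "Dev1/port0/line0:7", "", DAQmx_Val_ChanPerLine);
    DAQmxReadDigitalLines(snap, 1, 10.0, DAQmx_Val_GroupByChannel,
                          initial, 8, &read, &bytesPerSamp, NULL);
    DAQmxClearTask(snap);

    /* Change detection on both edges; every sample read from cdTask
     * hereafter is a post-change state. */
    DAQmxCfgChangeDetectionTiming(cdTask, "Dev1/port0/line0:7",
                                  "Dev1/port0/line0:7",
                                  DAQmx_Val_ContSamps, 1);
    DAQmxStartTask(cdTask);
}
```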
For those of us who develop using DAQmx all the time, this might seem silly. Nonetheless, I'm finding that users of my software are repeatedly having a tough time figuring out how to select multiple physical channels for applications that use DAQmx. Here's what I'm talking about:
Typically a user of my universal logger application wishes to acquire from ai0:7, for example. They attempt to hold down Shift and select multiple channels, and then conclude that only one channel at a time may be acquired. For some odd reason, nearly everyone fears the "Browse" option because they don't know what it does.
While, as a developer, I have no problem whatsoever knowing to "Browse" in order to accomplish this, I was just asked how to do this by a user for literally the fifth time. Thus, I'm faced with three choices: keep answering the same question repeatedly, develop my own channel-selection interface, or ask whether the stock NI interface may be improved.
I'm not sure of the best way to improve the interface, but the least painful change might be to simply display the "Browse" dialog on the first click rather than showing the drop-down menu.
Please, everyone, by all means feel free to offer better ideas. What I do know for certain, though, is that average users around here continually have a tough time with this.
NI terminal blocks should be laid out so that wiring can run straight from each terminal into the wire trunking.
For example, the TBX-68 has 68 wire terminals aligned toward the inside of the terminal block, which forces each wire to make a tight curve to reach the trunking. Another problem with the TBX-68 is that the wires overlap heavily because of the terminal alignment.
The cables from the terminal block to the DAQ device should also be routed directly toward the wire trunking rather than straight up.
Many CAN protocols require a byte in a cyclic message to be incremented each time the message is sent (this is often byte 0). I might have read somewhere that this is possible with VeriStand, but I am not using it. When using only LabVIEW and the NI-XNET API, the only way to achieve this is to call XNET Write to manually set the value of this byte for every transmission. Having to call the API each time the message is sent removes all the benefits of cyclic messages. Moreover, LabVIEW can't guarantee the same level of speed and determinism (if the message is to be sent every 5 ms, for example).
Being able to configure a signal as an auto-incrementing counter would be a huge improvement. To me, this is a must-have, not a nice-to-have.
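To show why the current situation hurts, here is roughly what the software-loop workaround looks like in the NI-XNET C API. Session setup is omitted; the frame ID, payload layout, and 5 ms period are placeholders, and the fixed-frame struct name should be checked against nixnet.h.

```c
#include "nixnet.h"

void send_with_rolling_counter(nxSessionRef_t session)
{
    nxFrameFixed_t frame = {0};   /* fixed 8-byte-payload CAN frame */
    u8 counter = 0;

    frame.Identifier    = 0x123;
    frame.Type          = nxFrameType_CAN_Data;
    frame.PayloadLength = 8;

    for (;;) {
        frame.Payload[0] = counter++;   /* rolling message counter */
        nxWriteFrame(session, &frame, sizeof(frame), 10.0);
        /* Sleep ~5 ms here (platform-specific). OS jitter means the bus
         * period is never guaranteed, which is exactly what a hardware
         * auto-increment feature would fix. */
    }
}
```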
The PCI Express bus has become more and more popular, and it brings a new type of interrupt: MSI/MSI-X. Today the VISA driver can only work with old-style (legacy) interrupts.
Let me explain the main difference:
Legacy interrupts use the INTA/B/C/D signals (these physical signals existed on the PCI bus but do not exist on PCIe, where they are emulated with Assert_INTA/B/C/D message packets). These interrupts are shared among all PCIe devices, so the VISA driver spends a lot of time working out which device actually produced the interrupt.
An MSI interrupt is actually a Memory Write packet to a preallocated address, so VISA already knows which device produced it. An MSI interrupt can also carry different interrupt vectors inside the Memory Write packet, so it would be very helpful to get access to the vector value too.
I am requesting MSI/MSI-X interrupt support in the VISA driver.
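For context, this is roughly how legacy PXI interrupt handling looks through the VISA C API today. The resource string is a placeholder, and the VI_ATTR_MSI_VECTOR attribute at the end is hypothetical, shown only to suggest where MSI vector access could surface.

```c
#include <visa.h>

void wait_for_interrupt(void)
{
    ViSession   rm, dev;
    ViEventType etype;
    ViEvent     event;

    viOpenDefaultRM(&rm);
    viOpen(rm, "PXI0::15::INSTR", VI_NULL, VI_NULL, &dev);

    /* Legacy INTx: the line is shared, so VISA must work out which
     * device actually asserted the interrupt. */
    viEnableEvent(dev, VI_EVENT_PXI_INTR, VI_QUEUE, VI_NULL);
    viWaitOnEvent(dev, VI_EVENT_PXI_INTR, 5000, &etype, &event);

    /* HYPOTHETICAL: with MSI/MSI-X support, something like
     *   viGetAttribute(event, VI_ATTR_MSI_VECTOR, &vector);
     * could expose the vector carried in the Memory Write packet. */
    viClose(event);
    viClose(dev);
    viClose(rm);
}
```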
I often use one DAQ device to test the basic functionality of another device and like to be able to quickly do this through test panels. Unfortunately, MAX does not allow the user to open more than a single test panel at once. The current workaround for this is to launch the test panels outside of MAX (see this KB).
It would be nice to have the same functionality when opening test panels in MAX. Specifically, I would like to be able to do the following with a Test Panel open:
1. Be able to navigate through MAX to do things like check device pinouts, calibration date, etc.
2. Be able to move and/or resize the original MAX Window (it always seems to be blocking other applications that I want to view alongside the Test Panel)
3. Open a test panel for a second (or third...) device.
It is nice that a workaround already exists, but MAX really should have this behavior to begin with.