The term "Incomplete Sample Detection" comes from DAQmx Help. It affects buffered time measurement tasks on X-series boards, the 661x counter/timers, and many 91xx series cDAQ chassis. It is meant to be a feature, but it can also be a real obstacle.
How the feature works ideally: Suppose you want to configure a counter task to measure buffered periods of a 1-channel encoder. You use implicit timing because the signal being measured *is* the sample clock. The 1st "sample clock" occurs on the 1st encoder edge after task start, but the time period it measures won't represent a complete encoder interval. Reporting this 1st sample could be misleading as it measures the arbitrary time from the software call to start the task until the next encoder edge.
On newer hardware with the "Incomplete Sample Detection" feature, this meaningless 1st sample is discarded by DAQmx. On older hardware, this 1st sample was returned to the app, and it was up to the app programmer to deal with it.
Problem 1: Now suppose I'm also using this same encoder signal as an external sample clock for an AI task that I want to sync with my period measurement task. Since DAQmx is going to discard the counter sample that came from the 1st edge, my first 5 samples will correspond to edges 2-6. Over on the AI task, my first 5 samples will correspond to edges 1-5.
My efforts to sync my tasks are now thwarted because their data streams start out misaligned. The problem and workaround I'm left with are at least as troublesome as the one that was "solved" by this feature.
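To make the misalignment concrete, here's a quick sketch in plain Python (no DAQmx calls; the edge times are hypothetical) of the off-by-one between the two tasks:

```python
# Simulate the misalignment between a buffered period measurement task
# (with Incomplete Sample Detection) and an AI task clocked by the same
# encoder signal. Edge times below are hypothetical.

edge_times = [0.013, 0.113, 0.213, 0.313, 0.413, 0.513, 0.613]  # s after task start

# AI task: one sample per encoder edge, starting with edge 1.
ai_sample_edges = list(range(1, len(edge_times) + 1))

# Counter task: each period spans two consecutive edges. With Incomplete
# Sample Detection, the partial interval from task start to edge 1 is
# discarded, so returned sample k is the period ending on edge k+1.
periods = [t1 - t0 for t0, t1 in zip(edge_times, edge_times[1:])]
ctr_sample_edges = list(range(2, len(edge_times) + 1))

# The first five samples of each task come from different edges:
print(ai_sample_edges[:5])   # [1, 2, 3, 4, 5]
print(ctr_sample_edges[:5])  # [2, 3, 4, 5, 6]
```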
Problem 2: Suppose I had a system where my period measurement task also had an arm-start trigger, and I depended on a cumulative sum of periods to be my master time for the entire system. In this case, the 1st sample is the time from the arm-start trigger to the 1st encoder edge, and it is *entirely* meaningful. On newer hardware, DAQmx will discard it and I'll have *no way* to know my timing relative to this trigger.
Older boards (M-series, 660x counter/timers) could handle this situation just fine. On newer boards, I'm stuck with a much bigger problem than the one that the feature was meant to solve.
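Here's a short Python sketch (hypothetical numbers) of exactly what gets lost. Only when the first period is kept does a cumulative sum give edge times relative to the arm-start trigger:

```python
# First buffered period = time from arm-start trigger to edge 1.
# Cumulative sums of periods are trigger-relative edge times only
# if that first sample is kept. Periods below are hypothetical.
from itertools import accumulate

periods_old = [0.013, 0.100, 0.100, 0.100]  # older hw: sample 1 = trigger-to-edge-1
periods_new = periods_old[1:]               # newer hw: sample 1 discarded

print(list(accumulate(periods_old)))  # edge times relative to the trigger
print(list(accumulate(periods_new)))  # offset by an unknowable 13 ms -- timing lost
```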
So can we please have a DAQmx property that allows us to turn this "feature" OFF? I understand that it'd have to be ON by default so as not to break existing code.
NI Terminal block layout should be designed so that wiring can be done straight from terminal to wire trunking.
For example, the TBX-68 has 68 wire terminals aligned toward the inside of the terminal block, so each wire must make a tight curve to reach the wire trunking. Another problem with the TBX-68 is that the wires overlap heavily because of the terminal alignment.
Also, the cables from the terminal block to the DAQ device should be routed to exit directly toward the wire trunking (not straight up).
I recently discovered that the SCXI-1600 is not supported in 64-bit Windows. From what NI has told me, it is possible for the hardware to be supported, but NI has chosen not to create a device driver for it.
I'm a bit perplexed by this position, since I have become accustomed to my NI hardware just working. It's not like NI to just abandon support for a piece of hardware like this -- especially one that is still for sale on their website.
Please vote if you have an SCXI-1600 and might want to use it in a 64-bit OS at some time in the future.
I find myself quite often needing to modify the DAQmx tasks of chassis that aren't currently plugged into my system. I develop on a laptop and then transfer the compiled programs to other machines. When the other machines are running the code (and thus using the hardware), I have to export my tasks and chassis, delete the live-but-unplugged chassis from my machine, then import the tasks and chassis back in, generating simulated chassis. When I'm finished with the task change and code update, to test it I have to export the tasks and chassis, plug in the chassis, and re-import to get a live chassis back.
Can it be made as simple as right clicking on a chassis and selecting 'simulated' from the menu to allow me to configure tasks without the hardware present?
Currently there are only two options for acquiring +/-60V input signals:
NI 9221: 8-Channel, ±60 V, 12-Bit Analog Input Module ($582)
NI 9229: 4-Channel, ±60 V, 24-Bit Simultaneous, Channel-to-Channel Isolated Analog Input Module ($1427)
I would like to see a new module that is identical to the NI 9205 (32-Channel Single-Ended / 16-Channel Differential, ±200 mV to ±10 V, 16-Bit Analog Input Module, $881) but with an input signal range of ±60 V.
As someone who migrated entire product lines from PLCs to the cFieldPoint platform, and is now in the process of migrating further to the cRIO platform, I am finding some cRIO module selection limitations. One big gap I see in the selection is with analog in/out modules. A set of 2-in / 2-out analog modules would be very welcome, offering the standardized ±10 V or 0-20 mA ranges. There are many times in our products when we need to process just a single analog signal, which with cRIO now requires two slots, with many unused inputs and outputs (which just feels like a waste of money and space).
It would be great if NI offered a simple 4 Counter bus-powered USB device, like a USB-6601, but with the counter capabilities of the new X Series DAQ devices. This would give people who only need to perform counter operations a low-cost alternative to the bus-powered M Series, with double the counters.
Dear NI, please consider a future hardware feature addition:
Add a "Power Up Delay DIP Switch" to the back of the PXI Power Supply Shuttle.
It would allow end users to reliably sequence the power-up of multi-chassis PXI solutions. It could also be handy for sequencing any other boot-order-sensitive equipment in the rack or subsystem. This would also be a world-voltage solution, since this capability already exists in the power shuttle. We are seeing the need for more input-voltage-agnostic solutions; I'm sure you are too.
It might offer time-delay choices of 1, 2, 4, 8, 16 seconds, etc.
We run into this problem on every multi-chassis integration. We have solved it several ways in the past: human procedure (error-prone), sequencing power strips ($$$, and not world-voltage capable), custom time-delay relays ($$$).
Imagine never having your downstream chassis disappear. Or worse yet, having them show up in MAX but act strangely because there wasn't enough delay between boots.
Thanks for reading this, and consider tossing me a Kudos!
It is a frequent requirement to make measurements on production lines. Position on these is often tracked with rotary encoders (https://en.wikipedia.org/wiki/Rotary_encoder). Many NI devices can accept the quadrature pulse train from such a device and correctly produce a current position count. The information in the two-phase pulse train allows the counter to correctly track forward and reverse motion.
What would be very useful would be a callback in NI-DAQmx that is called after every n pulses, ideally with a flag to indicate whether the counter is higher or lower than the previous value, i.e. the direction.
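To make the request concrete, here is a rough sketch in plain Python (no DAQmx calls; all names are hypothetical) of the callback semantics I have in mind:

```python
# Hypothetical semantics for the proposed feature: invoke a callback
# every time the position has moved by at least n counts, with a
# direction flag (+1 forward, -1 reverse).
def make_position_watcher(n, callback):
    """Return a feed(count) function for raw counter values; it calls
    callback(count, direction) whenever the position has changed by
    at least n counts since the last callback."""
    state = {"ref": None}
    def feed(count):
        if state["ref"] is None:
            state["ref"] = count
            return
        delta = count - state["ref"]
        if abs(delta) >= n:
            callback(count, 1 if delta > 0 else -1)
            state["ref"] = count
    return feed

events = []
feed = make_position_watcher(4, lambda c, d: events.append((c, d)))
for c in [0, 1, 3, 4, 6, 5, 2, 0]:  # simulated quadrature counts
    feed(c)
print(events)  # [(4, 1), (0, -1)]
```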
I had a service request recently where the customer wished to use a mass flow meter speaking the HART protocol (a massive industrial protocol used worldwide, with over 30 million devices) to communicate updated values to a cRIO-9074 chassis using an NI 9208 module.
They did not know how they could use this protocol with our products. There was only one example online regarding the use of this protocol, and it used low-level VISA functions. There is currently no real support for this protocol.
I suggested that if they wished to use this sensor, they would need to convert it to a protocol we do support, for example Modbus or RS-232, using a gateway/converter.
Then they could use low-level VISA functions to handle the data communication.
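For what it's worth, many HART-to-Modbus gateways expose the primary variable (e.g. mass flow) as a pair of 16-bit holding registers containing an IEEE-754 float. A small Python sketch of that decode (the register word order is gateway-specific, so treat it as an assumption to check against the gateway manual):

```python
import struct

def registers_to_float(hi, lo, word_swap=False):
    """Combine two 16-bit Modbus holding registers into an IEEE-754
    32-bit float. Word order varies between gateways; flip word_swap
    if the decoded values look wrong."""
    if word_swap:
        hi, lo = lo, hi
    return struct.unpack(">f", struct.pack(">HH", hi, lo))[0]

# 0x4120 0x0000 is 10.0 in big-endian IEEE-754
print(registers_to_float(0x4120, 0x0000))  # 10.0
```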
They were fine with doing this but they felt that NI should probably support this protocol with an off-the-shelf adaptor or module. This is the main point of this idea exchange.
There is clearly a reason why we do not currently provide support for this protocol in comparison to PROFIBUS or Modbus.
I thought I would pass on this customer feedback as it seemed relevant to NI and our vision.
The vast majority of my working life is spent with RIO devices or midrange X series cards, but I often come across applications where an inexpensive, reliable DAQ would be handy for low level tasks - monitoring presence sensors, measuring voltages at moderate precision and slow speed, providing interlocks for material storage bins etc.
Traditionally, you'll see a lot of USB 600X units being used for applications like these. However, running on USB has a few associated problems: unreliability of the Windows bus, cable strain relief on USB connectors, mounting of USB 600X units, connection type. Don't get me wrong, you can do a lot with these units but they're not an ideal, inexpensive solution for production processes.
There's a jump between the functionality of these USB units and X (or even M or E for the vintage crowd) series cards. The only thing that's really in that range anymore is the B series PCI-6010 card, which has the fantastic benefit of using a 37-way D-SUB connector too, but is a little limited in terms of channel offerings and the like.
I'd like to see the B series range revived to provide products that fit between the PCIe-6320 and the USB 600X devices, providing non-USB connection and preferably with a DSUB backplane connector for cost and ease of use. This would provide a more reliable offering for simple acquisition tasks in the industrial environment at a cost-effective price point.
NI supports almost any bus. Why not SSI (Synchronous Serial Interface)?
Of course, there is always the option to use an R series card and then build an interface. Why not have a low-cost PCI or USB card? Also, perhaps a C-series module, so that we don't have to take up FPGA space?
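For anyone rolling their own interface on an R series card in the meantime: SSI absolute encoders commonly transmit their position in Gray code, so any interface (NI-built or homemade) needs this decode step. A small Python sketch using the standard Gray-to-binary algorithm:

```python
def gray_to_binary(gray):
    """Decode a Gray-coded SSI position word into a plain binary count
    by XOR-folding the word down bit by bit."""
    mask = gray >> 1
    while mask:
        gray ^= mask
        mask >>= 1
    return gray

# Round-trip check against the standard binary-to-Gray encoding n ^ (n >> 1)
assert all(gray_to_binary(n ^ (n >> 1)) == n for n in range(4096))
print(gray_to_binary(0b0110))  # 4
```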
Every time I have to work with an NI DAQ device, the first thing I need to know is what the pins can or can't do.
Currently this involves looking through something like 7 different documents to find little bits of information and bringing them back to your application.
A block diagram could easily be a reference point for the rest of the documentation (you want to know about pin I/O for your device? Look at this document).
Plus, a good block diagram can tell you what you need to know quickly and clearly. A picture is worth 1000 words.
Some might find the current documentation adequate, but personally I would really like to have a block diagram that represents the internals and capabilities of the pins and the device in general. Most microcontrollers have this, and it is an extremely useful tool. So why not have one for DAQ devices as well?
I may be speaking for myself here, but the M Series DAQ devices in USB form have a mass-termination option (to connect via VHDCI connectors) and the X Series do not. Why?
We have hardware that is already set up for the 68-pin cables, and I would like to take advantage of the portability of USB and the extended performance of the X Series vs. the M Series. Specifically, I was comparing the USB-6361 (X Series) and the USB-6251. The price difference is minimal for the added sample rates and extra counters. But without the mass-termination option, I am forced to settle for lesser hardware. This should be fixed.