Have an idea for new DAQ hardware or DAQ software features?
Browse by label or search in the Data Acquisition Idea Exchange to see if your idea has previously been submitted. If your idea already exists, be sure to vote for it by giving it kudos to indicate your approval!
If your idea has not been submitted, click Post New Idea to submit a product idea. Be sure to submit a separate post for each idea.
Watch as the community gives your idea kudos and adds their input.
As NI R&D considers the idea, they will change the idea status.
Give kudos to other ideas that you would like to see implemented!
I know it is possible to listen for multiple DI lines when performing Change Detection (with a PXI-6511, for example); however, there appears to be no built-in way to know which channel caused the event trigger without building your own comparison logic around the DIO reads.
I would like to see an additional property, either under 'Timing' or under 'Channel', that would report the specific channel/line in the task that caused the change-detection event, rather than having to search for it manually in a 1D Boolean array.
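Until such a property exists, the comparison-logic workaround described above can be sketched in a few lines. This is an illustrative Python sketch only; `prev_state` and `curr_state` stand in for two successive line reads from your change-detection task (for example via the nidaqmx Python API), and the function simply diffs them:

```python
def changed_lines(prev_state, curr_state):
    """Return the indices of DI lines whose level differs between two reads.

    prev_state, curr_state: equal-length sequences of booleans, one per line,
    as returned by a digital read on a change-detection task.
    """
    if len(prev_state) != len(curr_state):
        raise ValueError("reads must cover the same lines")
    return [i for i, (p, c) in enumerate(zip(prev_state, curr_state)) if p != c]

# Example: line 2 went high and line 5 went low between reads.
prev = [False, False, False, True, True, True]
curr = [False, False, True, True, True, False]
print(changed_lines(prev, curr))  # -> [2, 5]
```

Note that this only identifies lines that differ between the two reads; if a line toggles and returns to its old level between reads, the diff misses it, which is exactly why a hardware-reported property would be better.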
I am not an electrical engineer, so I have no idea whether there is some reason this has not been implemented in existing versions of the cDAQ chassis. But there is a whole host of applications where a user wants to do hardware-timed digital output to different channels using DIFFERENT time bases. It would be nice to have more than one DO timing engine available. I would love to see that in future versions of the cDAQ chassis.
A customer of mine was trying to read 8 thermocouple readings simultaneously over the course of a week and then store the data. She quickly found out that there are memory limitations with Signal Express. Eventually, no matter how she saved or logged the data, she would run out of memory.

My product suggestion is to write code that will determine the RAM on a customer's computer as well as the available hard drive space, and show customers (at the start of a task) exactly how long they can acquire signals without filling up their hard drive or seriously draining their resources. That way, when they start a process, they are aware of what they're working with in terms of space at that instant, rather than when an error pops up three hours or so later. Most customers would figure out these needs in advance of any acquisition, but for the ease of the customer, this would be a welcome additional feature.

It would also be nice to have a document that explains how this time frame is calculated. I suggest that when a task is run, a pop-up explains the maximum number of samples that can be acquired and the time that can pass. The document I mentioned could be available via hyperlink, and the hyperlink text could read "How did we calculate this number?", which could explain the process in more depth.
I have submitted this idea for Signal Express at the product suggestion page found at www.ni.com/contact ("feedback" link in bottom left of window). I figured this would be a good idea for DAQ in general for any extended signal acquisition.
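The calculation such a pop-up would perform is straightforward. Here is a rough sketch, under the assumption of a simple binary log with a fixed number of bytes per sample (real Signal Express/TDMS files carry additional format overhead, so an actual estimate would be somewhat shorter):

```python
def max_acquisition_seconds(free_bytes, num_channels, sample_rate_hz,
                            bytes_per_sample=8):
    """Estimate how long an acquisition can run before filling the disk.

    free_bytes: available disk space in bytes
    num_channels: number of channels logged by the task
    sample_rate_hz: samples per second per channel
    bytes_per_sample: storage per sample (8 for float64; format-dependent)
    """
    bytes_per_second = num_channels * sample_rate_hz * bytes_per_sample
    return free_bytes / bytes_per_second

# 8 thermocouple channels at 10 S/s into 1 GB of free space:
secs = max_acquisition_seconds(1_000_000_000, 8, 10)
print(secs / 86400)  # days of logging headroom (about 18 days here)
```

In Python, the free-space input could come from `shutil.disk_usage(path).free`; the pop-up text the idea describes would just be this number formatted as samples and hours.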
If NI already has such a product, please let me know; I need it for my project. If not, I would like to suggest that NI develop one based on its NI-9234 and WLS-9163. It could be readily developed by just removing three channels from the NI-9234 and making a smaller, single-channel version of the WLS-9163. I would like the product to be a single unit to achieve a small size. A one-channel wireless data acquisition module would have two important advantages: 1) much smaller size, so it can fit into difficult locations; 2) much lower power consumption, so it can record for much longer. These two advantages are essential for wireless measurement, so the product would have good market potential.
What I would like is a continuous-sampling DAQ task that resets the count to zero every time DAQmx Read.vi is called. This way you can see in which direction, and by how much, the encoder has moved between samples. If it also provided the delta time, that would be ideal.
There are ways to do things like this currently, but I have run up against two applications that would benefit from something like this, and I cannot be the only one.
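One of those current workarounds can be sketched in software: keep the previous count and timestamp from each read and report the differences. This is an illustrative Python sketch, not DAQmx code; the count/time pairs stand in for the values a counter-input read would return:

```python
class EncoderDelta:
    """Software workaround: turn absolute encoder counts into per-read deltas.

    Feed it successive (count, timestamp) pairs, e.g. from a DAQmx counter
    read, and it reports how far, in which direction, and over what interval
    the encoder moved since the previous read.
    """
    def __init__(self):
        self._last = None  # (count, time) of the previous read

    def update(self, count, t):
        if self._last is None:
            self._last = (count, t)
            return 0, 0.0
        d_count = count - self._last[0]
        d_time = t - self._last[1]
        self._last = (count, t)
        return d_count, d_time

enc = EncoderDelta()
enc.update(100, 0.0)         # first read establishes the baseline
print(enc.update(108, 0.1))  # moved 8 counts forward over 0.1 s
print(enc.update(105, 0.2))  # reversed by 3 counts over 0.1 s
```

The drawback, as the idea implies, is that this runs on the host side: a hardware reset-on-read would avoid timing jitter between the read call and the actual sample instant.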
While in the development environment it is easy to get a reference to a FieldPoint IO channel - just drag and drop it from the target in the project file - things are much more cumbersome if you need to create such a refnum dynamically. (If you are making software that should be possible to just copy onto different controllers (without involving LabVIEW) and then configure to match the available/required IO, static refnums are not usable.)
To dynamically get access to an IO point in a built general-purpose application, you currently have to save and download an iak file to the controller from Measurement & Automation Explorer (MAX) and then open that iak file in your code (using FP Open) to get the server reference you need as an input to the FP Create Tag function. The need for FP Open and the iak file to create a server refnum is the main problem.
If FP Create Tag could do its job without any iak file (e.g. with just the IP of the controller as an additional input)... OR, even better, if there were a Create FieldPoint IO Refnum VI available (no such thing today!?) that took the controller IP (defaulting to local if not wired), device name (or ID number), and channel name (or ID number)... then things would be much more flexible and intuitive.
There is one function that allows you to address channels just by slot and channel numbers - the Input Range function (which, by the way, should not be called "input range" either, as it could apply to an output). So another good solution would be to offer that same addressing for the IO read and write functions.
Problem: When creating large numbers of virtual channels, the calibration of these channels can be very time-consuming and cumbersome. In large applications there can be literally hundreds of channels, requiring days to calibrate.
Calibration equipment these days can be purchased with programmable capability or remote communication. Having low-level functions available in LabVIEW to calibrate MAX virtual channels programmatically would greatly reduce the time needed for calibration and reduce the potential errors of calibrating each point individually. It would also be useful to view the calibration data and scaling data from MAX in the LabVIEW environment.
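As an illustration of the kind of low-level helper the idea envisions, the core arithmetic is a least-squares fit of gain and offset from points taken against a programmable calibrator. This is a hypothetical sketch in Python; how the resulting scale would be written back to a MAX virtual channel is exactly the missing API the idea asks for:

```python
def fit_linear_scale(applied, measured):
    """Least-squares fit of gain/offset so that applied ~ gain*measured + offset.

    applied: known values sourced by the programmable calibrator
    measured: raw values read back on the virtual channel
    Returns (gain, offset) suitable for a DAQmx-style linear custom scale.
    """
    n = len(applied)
    mx = sum(measured) / n
    my = sum(applied) / n
    sxx = sum((m - mx) ** 2 for m in measured)
    sxy = sum((m - mx) * (a - my) for m, a in zip(measured, applied))
    gain = sxy / sxx
    offset = my - gain * mx
    return gain, offset

# Channel that reads 0.05 V low of true across a 0-10 V sweep:
gain, offset = fit_linear_scale([0.0, 5.0, 10.0], [-0.05, 4.95, 9.95])
# gain ~ 1.0, offset ~ 0.05
```

Looping this over hundreds of channels, with the calibrator stepped remotely between points, is what would collapse days of manual calibration into an automated run.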
I'm not sure who reads this, but I'm not seeing any feedback in the forum, so I thought I'd post up here. This may seem like a simple thing, but hopefully my pain will be someone else's gain. I messed with this off and on for two months before I finally figured it out. It's probably obvious to those who work with this equipment every day and have EE degrees, but not so much to those of us who do not.
Some will tell me to "search"... well, I did. And everywhere I looked, all I found was that I needed to measure from the power leg to ground. All of the examples were either measuring only 120 VAC, which is fine since what you have is one power leg and neutral... which is basically ground! So those numbers come out fine. Or they were for 3-phase, which I don't currently use. BUT when you try to measure anything with two power legs (basically anything between 208 VAC and 240 VAC), the numbers don't work out right if you measure each power leg to ground and then try to recombine them and plug them into the Power VIs. Not one place I looked told me that I need to measure across both power legs on a single channel. I only finally tried it because I'd done pretty much everything else. I know this is pretty basic stuff, but when everything you give me says one thing... that's what we do. Just trying to help others.
I would really like to see a user-configurable hardware front panel that integrates with LabVIEW. The panel would connect to a computer via USB. National Instruments could sell different hardware controls such as pushbuttons, LEDs, switches, LCD panels for graphs, rotary encoder switches, slide switches and potentiometers, LCD indicators for text and numbers, etc.
LabVIEW would discover which panels are connected and which hardware controls are available in those panels. The user could then drop them as terminals on the block diagram and use them as normal.
Another option for the panel could be an embedded computer running LabVIEW, so that the whole thing can be stand-alone.
As a user of the SCXI-1000 with a 1302 board, I had a lot of difficulty finding information on the pinout of my board. The 1302 is a direct link to the 6220 DAQ card in my computer. When I first looked for the 1302, I could find very little information on it. By adding a few windows and commands to Measurement & Automation Explorer (MAX), NI probably would have enabled me to solve my problem in minutes instead of days.
A. Being able to add boards that link to the pinouts of the computer's DAQ device, and treat them as a board.
B. Have a list of pinout boards that can link to the DAQ device. Since I was dealing with a 1302, which has 34 pins instead of 64 (or something close to that), I had difficulty finding the information on the pinouts. It was far easier to find the pinout of the 6220.
It could be I was a little annoyed at the amount of time that I spent on finding information on the 1302, but it took me far longer to find the information than it should have. This data should be built in.
As far as I know, DAQmx has provision to generate the COUNTER output or to read digital input triggered by the change-detection event.
It would be nice if one were capable of generating the DIGITAL output on occurrence of a change-detection event (i.e. it could be triggered).
Also, digital output should be allowed to generate waveforms that are triggered by any of the digital input lines.
If possible, the COUNTER counts should be available to the user in LabVIEW after triggering, to synchronise the various digital inputs/outputs to the counter state/count, since some of the actions are COUNT-dependent.