Every time I have to work with an NI DAQ device, the first thing I need to know is what the pins can or can't do.
Currently this involves looking through something like seven different documents to find little bits of information and bring them back to your application.
A block diagram could easily be a reference point for the rest of the documentation (you want to know about pin I/O for your device? Look at this document).
Plus, a good block diagram can tell you what you need to know quickly and clearly. A picture is worth a thousand words.
Some might find the current documentation adequate, but personally I would really like to have a block diagram that represents the internals and capabilities of the pins and the device in general. Most microcontrollers have this, and it is an extremely useful tool. So why not have one for DAQ devices as well?
By default, DAQmx terminal constants/controls only show a subset of what is really available. To see everything, you have to right-click the terminal and select "I/O Name Filtering", then check "Include Advanced Terminals":
I guess this is intended to prevent new users from being overwhelmed. However, what it really does is create a hurdle that prevents them from configuring their device in a more "advanced" manner, since they have no idea that the name filtering box exists.
I am putting "advanced" in quotes because I find the distinction very much arbitrary.
As a more experienced DAQmx user, I change the I/O name filtering literally every time I put down a terminal, without thinking about it (who can keep track of which subset of DAQmx applications is considered "advanced"?). The worst part is trying to explain how to do something to newer users and having to tell them to change the I/O name filtering every single time (and if you don't, you'll almost certainly get a response back like this).
Why not make the so-called "advanced" terminals show in the drop-down list by default?
NI terminal blocks should be laid out so that wiring can run straight from the terminals to the wire trunking.
For example, the TBX-68 has 68 wire terminals aligned toward the inside of the terminal block. This forces each wire to make a tight curve to reach the wire trunking. Another problem with the TBX-68 is that the wires overlap heavily because of the terminal alignment.
Also, the cables from the terminal block to the DAQ device should be aligned to go directly to the wire trunking (not straight up).
How often have you built LabVIEW applications using simulated DAQmx boards?
And how often were you limited by the default behaviour of the simulated boards (sine wave for analog inputs, square counter signal for digital inputs, ...)?
It would be nice to integrate into DAQmx simulated boards the ability to modify the default behaviour of simulated inputs, through dedicated popups.
It would be nice, for each task linked to a simulated DAQmx board, to be able to launch a popup window.
A more powerful tool could also integrate a simulated channel-switching mechanism: a simulated output could be linked to a simulated input.
This feature could be a good way to create an application that simulates a complete process; such an application could be used to validate a complete system
(a kind of SIL architecture).
Another idea: a complete DAQmx simulation API.
Something like this ...
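To make the idea concrete, here is a minimal Python sketch of what a configurable simulated-channel layer could look like. Everything here, including the `SimulatedDevice` class and its method names, is a hypothetical illustration, not an existing DAQmx API:

```python
import math

class SimulatedDevice:
    """Sketch of a simulated DAQ device with user-replaceable input behaviour."""

    def __init__(self):
        self._generators = {}  # channel -> user-supplied function of time
        self._links = {}       # simulated input -> simulated output (loopback)

    def set_generator(self, channel, func):
        """Replace the default waveform with a user-supplied function of t."""
        self._generators[channel] = func

    def link(self, out_channel, in_channel):
        """Route a simulated output back into a simulated input."""
        self._links[in_channel] = out_channel

    def read(self, channel, t):
        # Follow a loopback link if one was configured for this input.
        channel = self._links.get(channel, channel)
        gen = self._generators.get(channel)
        if gen is not None:
            return gen(t)
        # Default behaviour, like today's simulated boards: a 1 Hz unit sine.
        return math.sin(2 * math.pi * t)

dev = SimulatedDevice()
dev.set_generator("ao0", lambda t: 5.0)  # constant 5 V simulated output
dev.link("ao0", "ai0")                   # loopback: ao0 feeds ai0
print(dev.read("ai0", 0.25))             # -> 5.0 (value comes from the link)
print(dev.read("ai1", 0.25))             # -> default sine at t = 0.25 s
```

The same registry idea would let a popup window swap the generator function at run time, and the loopback table is the "simulated channel switching" mechanism described above.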
We really need a hard drive crio module for long term monitoring and reliably storing large amounts of data remotely.
1. Solid State Drive: Fast, reliable, and durable, with extremely high data rates. It would be a very high-priced module, but it could be made to handle extreme temperatures and harsh conditions. It should be available in different capacities, varying in price.
2. Conventional Hard Drive: This would give any user the ability to store large amounts of data, on the order of hundreds of gigabytes. This type should also come in varying storage capacities.
For this to be usable:
1. It would need to support a file system other than FATxx. The risk of data corruption due to power loss/cycling during recording makes anything that uses this file system completely unreliable and utterly useless for long-term monitoring. You can record for two months straight, then something goes wrong and you have nothing but a dead USB drive. Any other file system that is not so susceptible to corruption/damage due to power loss would be fine: Reliance, NTFS, etc.
2. You should be able to plug in multiple modules and RAID them together for redundancy. This would ensure data security and increase the usability of the cRIO for long-term remote monitoring in almost any situation.
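Until such modules exist, the RAID-1 idea can at least be approximated in software. Here is a minimal sketch of application-level mirroring across two storage targets; the temporary directories stand in for two removable drives, and the function name is illustrative:

```python
import os
import tempfile

def mirrored_write(roots, filename, data):
    """Write the same record to every target, RAID-1 style, at the
    application level. Any surviving copy can be used for recovery."""
    written = []
    for root in roots:
        path = os.path.join(root, filename)
        with open(path, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # push through OS caches before a power loss
        written.append(path)
    return written

drive_a = tempfile.mkdtemp()
drive_b = tempfile.mkdtemp()
copies = mirrored_write([drive_a, drive_b], "log.bin", b"\x01\x02\x03")
print(copies)  # two independent paths holding identical data
```

This only mitigates media failure, of course; it does nothing for the underlying file-system corruption problem, which is why a journaling or transactional file system remains point 1.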
Current cRIO storage issues:
We use NI products primarily in our lab and LabVIEW is awesome. I hope that while being very forward about our issues, we will not upset anyone or turn anyone away from any NI products. However, attempting to use a cRIO device for long-term remote monitoring has brought current storage shortfalls to the forefront, and data loss has cost us dearly. These new hard drive modules would solve all the shortfalls of the current storage solutions for the cRIO. The biggest limitation of the cRIO for long-term monitoring at the moment is the fact that it does not support a reliable file system on any external storage. The SD Card module has extremely fast data transfer rates, but if power is lost while the SD card is mounted, not only is all the data lost, but the card needs to be physically removed from the device and reformatted with a PC. Even with the best UPS, this module is not suitable for long-term monitoring. USB drives have a much slower data transfer rate and are susceptible to the same corruption due to power loss.
When we have brought up these issues in the past, the solution offered is to set up a reliable power backup system. It seems that those suggesting this have never tried to use the device with a large application in a situation where they have no physical access to it, like 500 miles away. Unfortunately, the cRIO is susceptible to freezing or hanging and becoming completely unresponsive over the network, to the point that it cannot be rebooted over the network at all (yes, even with the setting to halt all processes if TCP becomes unresponsive). We would have to send someone all the way out to the device to hit the reset button or cycle power. Programs freeze, OSes freeze or crash, drivers crash, stuff happens. This should not put the data being stored at risk.
I would put money on something like this being already developed by NI. I hope you guys think the module is a good idea, even if you don't agree with all the problems I brought up. I searched around for an idea like this and my apologies if this is a re-post.
For those of us who develop using DAQmx all the time, this might seem silly. Nonetheless, I'm finding that users of my software are repeatedly having a tough time figuring out how to select multiple physical channels for applications that use DAQmx. Here's what I'm talking about:
Typically a user of my universal logger application wishes to acquire from ai0:7, for example. They attempt to hold down Shift and select multiple channels, only to conclude that just one channel at a time may be acquired. For some odd reason, nearly everyone fears the "Browse" option because they don't know what it does.
While, as a developer, I have no problem whatsoever knowing to "Browse" in order to accomplish this, I was just asked how to do this for literally the fifth time by a user. Thus, I'm faced with three choices: Keep answering the same question repeatedly, develop my own channel selection interface, or ask if the stock NI interface may be improved.
I'm not sure of the best way to improve the interface, but the least painful way to do so might be to simply display the "Browse" dialog on first click rather than displaying the drop-down menu.
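For anyone building their own channel-selection interface in the meantime, the range syntax users are trying to produce is easy to expand programmatically. A small Python sketch (DAQmx itself accepts the compact string directly, so this is only useful for display or custom UIs):

```python
import re

def expand_channels(spec):
    """Expand DAQmx-style physical channel syntax like 'Dev1/ai0:7'
    into a list of individual channel names."""
    channels = []
    for part in spec.split(","):
        part = part.strip()
        m = re.fullmatch(r"(.*?)(\d+):(\d+)", part)
        if m:
            prefix, lo, hi = m.group(1), int(m.group(2)), int(m.group(3))
            channels.extend(f"{prefix}{i}" for i in range(lo, hi + 1))
        else:
            channels.append(part)  # single channel, no range to expand
    return channels

print(expand_channels("Dev1/ai0:3"))
# -> ['Dev1/ai0', 'Dev1/ai1', 'Dev1/ai2', 'Dev1/ai3']
```

A custom multi-select listbox could use this in reverse: let the user Shift-click channels, then collapse the selection back into the compact string before handing it to the driver.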
Please, everyone, by all means feel free to offer better ideas. What I do know for certain, though, is that average users around here continually have a tough time with this.
Thanks very much,
I recently discovered that the SCXI-1600 is not supported in 64-bit Windows. From what NI has told me, it is possible for the hardware to be supported, but NI has chosen not to create a device driver for it.
I'm a bit perplexed by this position, since I have become accustomed to my NI hardware just working. It's not like NI to just abandon support for a piece of hardware like this -- especially one that is still for sale on their website.
Please vote if you have an SCXI-1600 and might want to use it in a 64-bit OS at some time in the future.
NI supports almost any bus. Why not SSI (Synchronous Serial Interface)?
Of course, there is always the option to use an R series card and then build an interface. Why not have a low-cost PCI or USB card? Also, perhaps a C-series module, so that we don't have to take up FPGA space?
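Until a native SSI product exists, anyone going the R series route also has to decode the position word themselves, and on most SSI absolute encoders that word is Gray-coded. A small sketch of that conversion step (the bit widths below are just examples, not tied to any particular encoder):

```python
def gray_to_binary(gray: int, bits: int) -> int:
    """Convert a Gray-coded SSI position word to plain binary
    using the standard log-time XOR-fold."""
    value = gray
    shift = 1
    while shift < bits:
        value ^= value >> shift
        shift <<= 1
    return value

# Example: the 4-bit Gray code 0b1101 decodes to position 9.
print(gray_to_binary(0b1101, 4))  # -> 9
```

On an R series card the FPGA would clock the bits in; this decode could then live either on the FPGA or on the host side.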
As someone who migrated entire product lines from PLCs to cFieldPoint platforms, and is now in the process of migrating further to cRIO platforms, I am finding some cRIO module selection limitations. One big gap I see is with analog in/out modules. A set of 2-in / 2-out analog modules would be very welcome, offering standardized +/- 10 V or 0-20 mA ranges. There are many times in our products when we need to process just a single analog signal, which with cRIO now requires two slots, with many unused inputs and outputs (which just feels like a waste of money and space).
Based on this question, I would like to add a new category of events to LabVIEW: MAX events.
This category could contain the following events:
If you know other events, please post them.
I use DAQmx a lot for writing .NET-based measurement software.
Whereas the API itself is quite decent, the docs are horrible. Accessing them is convoluted at best, requiring the VS help viewer. Almost nothing is available online and decent examples are quite scarce, which will definitely be an issue for absolute beginners...
This definitely deserves some attention!
I often use one DAQ device to test the basic functionality of another device and like to be able to quickly do this through test panels. Unfortunately, MAX does not allow the user to open more than a single test panel at once. The current workaround for this is to launch the test panels outside of MAX (see this KB).
It would be nice to have the same functionality when opening test panels in MAX. Specifically, I would like to be able to do the following with a Test Panel open:
1. Be able to navigate through MAX to do things like check device pinouts, calibration date, etc.
2. Be able to move and/or resize the original MAX Window (it always seems to be blocking other applications that I want to view alongside the Test Panel)
3. Open a test panel for a second (or third...) device.
It is good that a workaround already exists, but I think MAX should have this behavior to begin with.
It would be great if the full DAQmx library supported all NI data acquisition products on Windows, Mac OS X and Linux. The situation right now is too much of a hodge-podge of diverse drivers with too many limitations. There's an old, full DAQmx library that supports older devices on older Linux systems, but it doesn't look like it's been updated for years. DAQmx Base is available for more current Linux and Mac OS systems, but doesn't support all NI devices (especially newer products). DAQmx Base is also quite limited, and can't do a number of things the full DAQmx library can. It's also fairly bloated and slow compared to DAQmx.

While I got my own application working under both Linux and Windows, there are a number of things about the Linux version that just aren't as nice as the Windows version right now. I've seen complaints in the forums from others who have abandoned their efforts to port their applications from Windows to Mac OS or Linux because they don't see DAQmx Base as solid or "commercial-grade" enough.
I'd really like to be able to develop my application and be able to easily port it to any current Windows, Mac or Linux system, and have it support any current NI multi-function DAQ device, with a fast, capable and consistent C/C++ API.
Anyone else see this as a priority for NI R&D?
I find myself quite often needing to modify the DAQmx tasks of chassis that aren't currently plugged into my system. I develop on a laptop and then transfer the compiled programs to other machines. When the other machines are running the code (and thus using the hardware), I have to export my tasks and chassis, delete the live-but-unplugged chassis from my machine, then import the tasks and chassis back in, generating simulated chassis. When I'm finished with the task change and code update, to test it I have to export the tasks and chassis, plug in the chassis, and re-import to get a live chassis back.
Can it be made as simple as right clicking on a chassis and selecting 'simulated' from the menu to allow me to configure tasks without the hardware present?
Certified LabVIEW Developer
With the NI 9234 board you can use 4 IEPE sensors, but you don't have IEPE open/short detection capability.
The NI 9232 board has IEPE open/short detection capability but only 3 channels.
I think that a board with 4 channels (as 9234) and an IEPE open/short detection capability would be great!
Dear NI, please consider a future hardware feature addition:
Add a "Power Up Delay DIP Switch" to the back of the PXI Power Supply Shuttle.
It would allow end users to reliably sequence the powering-up of multi PXI chassis solutions. It could also be handy to sequence any other boot-order sensitive equipment in the rack or subsystem. This would also be a world-voltage solution since this capability already exists in the power shuttle. We are seeing the need for more input-voltage-agnostic solutions. I'm sure you are too.
It might offer time delay choices of 1,2,4,8,16 seconds, etc.
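For illustration, binary-weighted switch positions would compose into any delay from 0 to 31 seconds in 1-second steps. The weighting below is my assumption for the sketch, not an existing NI specification:

```python
# Hypothetical 5-position binary-weighted power-up delay DIP switch.
WEIGHTS = [1, 2, 4, 8, 16]  # seconds contributed by each switch position

def power_up_delay(switches):
    """switches: list of 0/1 per DIP position, least-significant first.
    Returns the total power-up delay in seconds."""
    return sum(w for w, on in zip(WEIGHTS, switches) if on)

print(power_up_delay([0, 1, 0, 0, 1]))  # positions 2 + 16 -> 18 seconds
```

Five switches would cover the 1/2/4/8/16 s choices above and everything in between, which is handy when a rack mixes chassis with very different boot times.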
We run into this problem on every multi-chassis integration. We have solved it several ways in the past like: human procedure (error prone), sequencing power strips ($$$ and not world-voltage capable), custom time-delay relays ($$$).
Imagine never having your downstream chassis disappear. Or worse yet, having them show up in MAX but act strangely because there was not enough delay time between boots.
Thanks for reading this, and consider tossing me a Kudos!
NI should make sure that the measurement uncertainty specifications for its DAQ hardware are aligned with uncertainty analyses performed according to the ISO "Guide to the Expression of Uncertainty in Measurement" (GUM). See http://www.bipm.org/en/publications/guides/gum.htm
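For reference, the GUM's core rule for uncorrelated inputs is that the combined standard uncertainty is the root sum of squares of the sensitivity-weighted input uncertainties. A quick sketch with made-up contribution values (the numbers are purely illustrative, not any device's spec):

```python
import math

def combined_uncertainty(terms):
    """GUM-style combined standard uncertainty for uncorrelated inputs.
    terms: iterable of (sensitivity_coefficient, standard_uncertainty)."""
    return math.sqrt(sum((c * u) ** 2 for c, u in terms))

# e.g. a gain contribution of 3e-4 and an offset contribution of 4e-4
# (both already expressed as standard uncertainties in volts):
u_c = combined_uncertainty([(1.0, 3.0e-4), (1.0, 4.0e-4)])
print(u_c)  # -> 5.0e-4
```

The friction today is that DAQ spec sheets often mix absolute-accuracy figures with unstated coverage factors and distributions, so converting them into GUM-compatible standard uncertainties takes guesswork.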
I would like to be able to have multiple, widely separated LabVIEW development environments installed (e.g. LabVIEW 7, 8.0.1, 8.2 and 2010). As I understand it, this is mainly limited by the DAQmx drivers.
The problem I run into is that I need to support many applications beyond 5 years. We have some test equipment on our production line that has been running software since the 6.0 days. Management will come along and ask me to add one little feature to the software.
As it is now, I have to drag out my old computer with LabVIEW 6.0 installed on it, develop on that, and then go back to my new development in LV 2010. I cannot just upgrade the application to 2010, for several reasons:
1) I can't have all the versions co-exist on one computer, so the application needs to move from one machine to the next, upgrading along the way.
2) Different versions can change things in dramatic ways and break other pieces of code (e.g. Traditional DAQ vs DAQmx)
3) Because of #2, I need to do a full revalidation of the code, for what should be a minor change.
One thing that the NI architects do not seem to understand is that revalidation is not a trivial activity. This can interrupt the production schedule since I often cannot test on anything but the production equipment. This interruption can take days to weeks, even if no problems are uncovered, and much longer if we find that upgrading caused an issue. If I keep my old development environment, all I need to test is the changes to the code. If I change the compiler, I need to test ALL the code to be sure that the compiler change did not introduce any more bugs.
This is especially challenging in tightly controlled environments such as medical device manufacturing, where any change to the process requires a great deal of scrutiny.
Please make an effort to consider this in the future. Until then, I will be stuck with four computers under my desk, all running different versions of LabVIEW.
I bought an NI USB-6251 BNC, but support explained to me that it has no Linux support out of the box. I will now have to figure out how to use it on Linux systems myself (perhaps with help from the forum). It would be a nice feature if it shipped with Linux support.
Dear NI Idea Exchange,
I had a service request recently where the customer wished to use a mass flow meter speaking the HART protocol (a massive industrial protocol used worldwide, with over 30 million devices) to communicate updated values to a cRIO-9074 chassis using an NI 9208 module.
They did not know how they could use this protocol with our products. There was only one example online regarding use of this protocol using low level VISA functions. There is currently no real support for this protocol.
I suggested that if they wished to use this sensor, they would need to convert it to a protocol we do support, for example Modbus or RS-232 (using a gateway/converter).
Then they could use low-level VISA functions to handle the data communication.
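For anyone in the same situation, the gateway route means the cRIO side only has to speak Modbus. Here is a minimal Python sketch of building a Modbus RTU request frame, i.e. the bytes those low-level VISA writes would carry; the slave address and register numbers are hypothetical, since the actual mapping of HART variables to registers is set by the gateway's configuration:

```python
def crc16_modbus(data: bytes) -> int:
    """CRC-16/MODBUS: reflected polynomial 0xA001, initial value 0xFFFF."""
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0xA001 if crc & 1 else crc >> 1
    return crc

def read_holding_registers(slave: int, addr: int, count: int) -> bytes:
    """Build a Modbus RTU 'Read Holding Registers' (function 0x03) frame."""
    pdu = bytes([slave, 0x03, addr >> 8, addr & 0xFF, count >> 8, count & 0xFF])
    crc = crc16_modbus(pdu)
    return pdu + bytes([crc & 0xFF, crc >> 8])  # CRC appended low byte first

# Hypothetical: poll two registers (e.g. a float32 flow value) from slave 1.
frame = read_holding_registers(slave=1, addr=0, count=2)
print(frame.hex())
```

The frame would be written with VISA Write and the 7-plus-CRC-byte response read back and checked with the same CRC routine.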
They were fine with doing this but they felt that NI should probably support this protocol with an off-the-shelf adaptor or module. This is the main point of this idea exchange.
There is presumably a reason why we do not currently support this protocol the way we do PROFIBUS or Modbus.
I thought I would pass on this customer feedback as it seemed relevant to NI and our vision.
National Instruments UK