We really need a hard drive cRIO module for long-term monitoring and for reliably storing large amounts of data remotely.
1. Solid State Drive: fast, reliable, and durable, with extremely high data rates. It would be a high-priced module, but it could be made to handle extreme temperatures and harsh conditions. It should be available in different capacities, varying in price.
2. Conventional Hard Drive: this would give any user the ability to store large amounts of data, on the order of hundreds of gigabytes. This type should also come in varying storage capacities.
For this to be usable:
1. It would need to support a file system other than FATxx. The risk of data corruption due to power loss or power cycling during recording makes anything that uses this file system completely unreliable, and therefore useless for long-term monitoring: you can record for two months straight, then something goes wrong and you have nothing but a dead USB drive. Any file system that is not so susceptible to corruption from power loss would be fine: Reliance, NTFS, etc.
2. You should be able to plug in multiple modules and RAID them together for redundancy. This would ensure data security and increase the usability of the cRIO for long-term remote monitoring in almost any situation.
Current cRIO storage issues:
We use NI products primarily in our lab and LabVIEW is awesome; I hope that by being very forward about our issues we will not upset anyone or turn anyone away from NI products. However, attempting to use a cRIO device for long-term remote monitoring has brought the current storage shortfalls to the forefront, and data loss has cost us dearly. These new hard drive modules would solve all the shortfalls of the current storage solutions for the cRIO. The biggest limitation of the cRIO for long-term monitoring at the moment is that it does not support a reliable file system on any external storage. The SD Card module has extremely fast data transfer rates, but if power is lost while the SD card is mounted, not only is all the data lost, the card also needs to be physically removed from the device and reformatted with a PC. Even with the best UPS, this module is not suitable for long-term monitoring. USB drives have a much slower data transfer rate and are susceptible to the same corruption from power loss.
When we have brought up these issues in the past, the solution offered is to set up a reliable power backup system. It seems that those suggesting this have never tried to use the device with a large application in a situation with no physical access to the device, say 500 miles away. Unfortunately, the cRIO is susceptible to freezing or hanging and becoming completely unresponsive over the network, to the point that it cannot be rebooted over the network at all (yes, even with the setting about halting all processes if TCP becomes unresponsive). We would have to send someone all the way out to the device to hit the reset button or cycle power. Programs freeze, OSes freeze or crash, drivers crash; stuff happens. This should not put the stored data at risk.
I would put money on something like this already being in development at NI. I hope you think the module is a good idea, even if you don't agree with all the problems I brought up. I searched around for an idea like this; my apologies if this is a repost.
Dear NI, please consider a future hardware feature addition:
Add a "Power Up Delay DIP Switch" to the back of the PXI Power Supply Shuttle.
It would allow end users to reliably sequence the power-up of multi-chassis PXI solutions. It could also be handy for sequencing any other boot-order-sensitive equipment in the rack or subsystem. It would also be a world-voltage solution, since this capability already exists in the power shuttle. We are seeing the need for more input-voltage-agnostic solutions; I'm sure you are too.
It might offer time-delay choices of 1, 2, 4, 8, 16 seconds, etc.
We run into this problem on every multi-chassis integration. We have solved it several ways in the past: human procedure (error-prone), sequencing power strips ($$$ and not world-voltage capable), custom time-delay relays ($$$).
Imagine never having your downstream chassis disappear, or worse yet, having them show up in MAX but act strangely because there was not enough delay time between boots.
Thanks for reading this, and consider tossing me a Kudos!
I love simulated devices, but one major drawback is the static nature of the simulated data. It would be awesome to have the ability to import real-world data for playback in the simulated devices. Analog input channels could take in a waveform or a waveform file, and digital input could take a digital file. Even a pop-up probe for the input, where the user can toggle digital lines on/off or turn a knob on an analog line, would be very nice to have. This would allow the programmer to capture real data that a system might expect to receive and then run the DAQ application in simulation, using DAQmx simulated devices with exactly the real-world data.
Currently the AI.PhysicalChans property returns the number of physical measurement channels (e.g. 16). However, to calculate the maximum sampling rate for a device programmatically (e.g. the NI 9213), we need the total number of channels including the internal ones such as cold-junction compensation and autozero (for the 9213 that total is 18).
Therefore I would like to suggest including a property node that returns the number of internal channels, or the total number of physical channels, or something similar.
Use case: programmatically calculating the maximum sampling rate in a program that should work with multiple types of devices without being aware of their type.
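The arithmetic behind the use case is simple once the internal-channel count is available. A minimal Python sketch of the calculation (the aggregate rate and channel counts below are hypothetical illustrations, not taken from any device's specifications):

```python
def max_per_channel_rate(aggregate_rate_hz, n_measurement_chans, n_internal_chans):
    """Per-channel sample rate of a multiplexed device whose converter is
    shared by the measurement channels plus internal channels
    (e.g. cold-junction compensation and autozero)."""
    total_chans = n_measurement_chans + n_internal_chans
    return aggregate_rate_hz / total_chans

# Hypothetical example: 16 measurement channels plus 2 internal channels
# sharing a 1200 S/s aggregate converter rate.
rate = max_per_channel_rate(1200.0, 16, 2)  # 1200 / 18
```

With the proposed property, `n_internal_chans` (or the total) could come straight from a property node instead of a per-device lookup.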
Thanks for consideration and have a great day!
[actually a customer's idea]
It would be nice to have a hardware request forum as well so NI could see the demand for producing hardware for us.
Just a few things I could use or want:
cDAQ counter modules, like a 6602 in cDAQ form, so I can add more counters to my CompactDAQ system; a lower-end cRIO "brick", think a 6009 morphed with a Single-Board RIO, something under $1k for simple embedded data-logging prototypes. Maybe support for PIC micros (FPGA is so empowering; more microcontroller programming native to LabVIEW would be a very nice addition to our toolbox).
Would a separate HW forum be helpful?
It would be beneficial to have a DAQmx property "Default Signal Type" for each device. It would return the primary measurement type for a given device: for example, the NI 9201 would return Analog Input Voltage, the 9203 Analog Input Current, the 9211 Thermocouple, the NI 9263 AO Voltage...
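Until such a property exists, one workaround is a lookup table keyed on the device's product-type string. A minimal Python sketch (the table entries mirror the examples above; the function and its fallback value are assumptions for illustration):

```python
# Hypothetical lookup table: device product type -> primary measurement type.
DEFAULT_SIGNAL_TYPE = {
    "NI 9201": "Analog Input: Voltage",
    "NI 9203": "Analog Input: Current",
    "NI 9211": "Analog Input: Thermocouple",
    "NI 9263": "Analog Output: Voltage",
}

def default_signal_type(product_type):
    """product_type: the device's product-type string, as reported by
    the driver (e.g. via a device ProductType query)."""
    return DEFAULT_SIGNAL_TYPE.get(product_type, "Unknown")

print(default_signal_type("NI 9211"))  # Analog Input: Thermocouple
```

The obvious drawback, and the reason a built-in property would be better, is that the table must be maintained by hand for every module you might plug in.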
Original discussion here
When I want to do an installation build of my project, I have to include the whole NI-DAQmx driver in my build in order to include my USB-6251 driver (in this case). Thus my install package expands from about 150 MB to more than 1 GB. This is way too much!
It would be nice to be able to choose only the wanted drivers (in my example, the NI 6251 and the SCC-68) in the "additional drivers" menu and add just those to the distribution instead of adding them all.
When configuring DAQ such as the M-series etc. as I configure the PFI connections I always read the "I" as a "1" e.g. PFI4 as connection 14.
Please change it from "PFI4" to "PFI 4" or "PFI_4" or anything better!
I know this is really a DAQ problem, but LabVIEW could present the terminals better in the purple DAQ box.
Possible solution. Configuration: suppose you had five Ethernet cDAQ 8-slot chassis. Start off by making a simple configuration work first, then extrapolate to more complicated network configurations. So put all the chassis on the same subnet, maybe a dedicated subnet, with each cDAQ hundreds of feet from the others. You want to sample data at 1000 samples per second on all chassis and either lock all the sample clocks, adjust the clocks on the fly, or know how much the sample clocks differ and adjust the timestamps after the data is transferred.
This continues (in the background) to detect clock drift. Obviously, after the data acquisition starts, the network traffic will increase, and that will reduce the number of minimum packet loop times to fewer than 127 out of 1000. But we should be able to determine a minimum number of minimums that gives reliable results, especially since we know beforehand how much traffic we will be adding to the network once the acquisition starts. We may find that an Ethernet bus utilization of less than 25% gives reliable results; reliable because we consistently get loop times that never fall below some minimum value (within 10 us).
Remember, a histogram of the loop times should show a second peak at this minimum loop time. This 2nd peak indicates whether this idea will work or not, and its "tightness" indicates the timestamp accuracy that is possible. The 2nd peak should have a much smaller standard deviation than the loop times overall (or even a WALL at the minimum; see image). Since this standard deviation is so much smaller than the overall average loop time, it is a far better value to use to sync the cDAQ chassis. If it is "tight" enough, then maybe additional time syncing can be done (more accuracy, tied to sample clocks, triggers, etc.).
For example, now that the clocks are synced to within 10 us, the software-initiated start trigger could be sent as a start time, not as a start trigger. In other words, the master cDAQ Ethernet driver would get a start trigger from LabVIEW and send all the slaves a packet telling them to start sampling at a specific time (computed as, say, 75 milliseconds from now).
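The histogram idea above can be sketched in software-only form. The simulation below is purely an assumption for illustration: loop times are modeled as a fixed network floor plus random queuing delay, and the "wall" at the minimum of the histogram is estimated from the samples that land within one bin width of the smallest observed loop time:

```python
import random

def histogram_wall(loop_times_us, bin_us=10.0):
    """Estimate the 'wall' (2nd peak) at the minimum of the loop-time
    histogram: average the samples that fall within one bin of the
    smallest observed loop time, and report how many landed there."""
    floor = min(loop_times_us)
    cluster = [t for t in loop_times_us if t < floor + bin_us]
    return sum(cluster) / len(cluster), len(cluster)

# Simulated loop times: a 500 us network floor plus exponential queuing
# delay with a 200 us mean (hypothetical values, for illustration only).
random.seed(1)
samples = [500.0 + random.expovariate(1.0 / 200.0) for _ in range(1000)]

wall_us, n_at_wall = histogram_wall(samples)
# A slave could then correct its timestamps using the measured offset
# minus half the wall value (the assumed one-way trip time).
```

The count of samples at the wall is what makes the estimate trustworthy: if traffic rises and too few loop times reach the floor, the wall estimate should be discarded for that interval.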
I mentioned parts of this idea to Chris when I called in today for support (1613269) on the cDAQ Ethernet and LabVIEW 2010 TimeSync. Chris indicated that this idea is not being done on any of your hardware platforms yet. By the way, Chris was very knowledgeable and I am impressed (as usual) with your level of support and talent of your team members.
I have a USB-6509 and a USB-8951 connected to a computer through an external USB hub. MAX locks up hard when I disconnect the external hub from the computer, always requiring a reboot. When I connect/disconnect the individual devices on the other side of the hub, leaving the hub connected and powered, I do not get any problem.
Should there be problems when the external hub is connected/disconnected, rather than the individual devices? I'm thinking this is a "USB problem", but Windows Device Manager seems to roll smoothly with all the connection/disconnection variations. It is MAX (and any subsequent LabVIEW program) that hangs.
Any ideas? Thanks!
I configure DAQmx channels, tasks, and scales, as well as CAN messages and channels, in MAX. It would be nice if I could change the ordering of these elements after creation. It would also be nice to have an option to remove all configured channels (and tasks and scales), as well as CAN messages and channels, when I want to load another project's configuration. Right now I have to go to every section and delete the configuration by hand.
It would also be cool if I could configure DAQmx variables in MAX that I can use (write and read) in LabVIEW, too. E.g., I have a lot of tasks that all use the same acquisition rate. If I have to change the rate, I have to change every task by hand; if I could use a variable, I would just change the variable. This would save a LOT OF WORK with huge DAQmx configurations.
Despite dedicated RTD cards existing on other platforms, none exists for PXI. The existing bridge-input cards have significantly higher sample rates than typical temperature measurements need, and it would be nice to have a more economical drop-in option for RTDs that supports more channels.
Some NI boards have different properties for different channels. For example, the NI 4432 has IEPE (ICP) power available on channels 0 through 3 but not on channel 4. Please add IEPE/ICP power support and AC/DC coupling support information to the DAQmx Physical Channel property node.
I would really like to see a user-configurable hardware front panel that integrates with LabVIEW. The panel would connect to a computer via USB. National Instruments could sell different hardware controls such as pushbuttons, LEDs, switches, LCD panels for graphs, rotary encoder switches, slide switches and potentiometers, LCD indicators for text and numbers, etc.
LabVIEW would discover which panels are connected and which hardware controls are available in those panels. The user could then drop them as terminals on the block diagram and use them as normal.
Another option would be for the panel to contain an embedded computer running LabVIEW, so that the whole thing could be standalone.
Proposal: if left unwired, the DAQmx Create Channel Minimum and Maximum Value inputs would default to the full range of whatever physical channel was wired. Currently, for example, AO Voltage defaults to +/-10 V, yet some devices such as the USB-6008 are limited to 0 to +5 V. For programs that allow different DAQ devices to be used and changed at run time, one does not know ahead of time what the allowable limits will be, so they cannot be assigned in advance. It is possible to use the DAQmx Device property I/O Type:Analog Output:Voltage:Ranges to obtain the limits and wire that result, but this involves extra coding to make use of them. If DAQmx Create Channel is used in many places in a VI, it becomes cumbersome to extract the property for every instance of use; even if it is done once, another call to DAQmx Create Channel does not retain the previously set values. A solution would be for DAQmx Create Channel to simply obtain the minimum and maximum values from whatever physical channel was wired to it and use them.
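In the meantime, the extra coding can at least be centralized in one helper that turns the Ranges query into limits. A Python-flavored sketch of that helper (it assumes the Ranges property comes back as a flat list of [min, max] pairs, which is an assumption made for illustration):

```python
def widest_range(flat_ranges):
    """Pick the widest limits from a flat list of range pairs
    [min0, max0, min1, max1, ...], as a device Ranges property might
    return them (layout assumed for illustration)."""
    pairs = list(zip(flat_ranges[0::2], flat_ranges[1::2]))
    return min(lo for lo, _ in pairs), max(hi for _, hi in pairs)

# Hypothetical device reporting two AO ranges: 0..5 V and -10..10 V.
print(widest_range([0.0, 5.0, -10.0, 10.0]))  # (-10.0, 10.0)
```

In LabVIEW this would be a small subVI wired between the device property node and DAQmx Create Channel, so the lookup is written once rather than at every call site.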
NI support for or something similar from NI's ecosystem, partners, a consortium...