Data Acquisition Idea Exchange


We really need a hard-drive cRIO module for long-term monitoring and reliably storing large amounts of data remotely.

 

Hard-Drive-Module-Concept.png

 

Options:

 

1. Solid-state drive: fast, reliable, and durable, with extremely high data rates. It would be an expensive module, but it could be made to handle extreme temperatures and harsh conditions. It should be available in different capacities at different price points.

 

2. Conventional hard drive: this would give any user the ability to store large amounts of data, on the order of hundreds of gigabytes. This type should also come in varying storage capacities.

 

For this to be useable:

 

1. It would need to support a file system other than FATxx. The risk of data corruption from power loss or power cycling during recording makes anything that uses this file system completely unreliable and effectively useless for long-term monitoring. You can record for two months straight, then something goes wrong and you have nothing but a dead USB drive. Any file system that is not so susceptible to corruption from power loss would be fine: Reliance, NTFS, etc.

 

2. You should be able to plug in multiple modules and RAID them together for redundancy. This would ensure data security and increase the usability of the cRIO for long-term remote monitoring in almost any situation.

 

 

Current cRIO storage issues:

We use NI products primarily in our lab, and LabVIEW is awesome. I hope that in being very forward about our issues we will not upset anyone or turn anyone away from any NI products. However, attempting to use a cRIO device for long-term remote monitoring has brought the current storage shortfalls to the forefront, and data loss has cost us dearly. These new hard-drive modules would solve all the shortfalls of the current storage solutions for the cRIO. The biggest limitation of the cRIO for long-term monitoring at the moment is that it does not support a reliable file system on any external storage. The SD card module has extremely fast data transfer rates, but if power is lost while the SD card is mounted, not only is all the data lost, the card also needs to be physically removed from the device and reformatted with a PC. Even with the best UPS, this module is not suitable for long-term monitoring. USB drives have a much slower data transfer rate and are susceptible to the same corruption due to power loss.

 

When we have brought up these issues in the past, the solution offered has been to set up a reliable power backup system. It seems that those suggesting this have never tried to use the device with a large application in a situation where they have no physical access to it, like 500 miles away. Unfortunately, the cRIO is susceptible to freezing or hanging and becoming completely unresponsive over the network, to the point that it cannot be rebooted over the network at all (yes, even with the setting that halts all processes if TCP becomes unresponsive). We would have to send someone all the way out to the device to hit the reset button or cycle power. Programs freeze, operating systems freeze or crash, drivers crash, stuff happens. This should not put the stored data at risk.

 

I would put money on something like this already being in development at NI. I hope you think the module is a good idea, even if you don't agree with all the problems I brought up. I searched around for an idea like this; my apologies if this is a repost.

 

 

After consulting the community and raising a ticket with NI’s support team, we have determined that there is no good way of programmatically removing old or disconnected remote systems.

DeleteDisconnectedSystems.png

I propose that the NI System Configuration palette be expanded to include a "Delete Disconnected Systems" function. This would clear MAX's cache of disconnected remote systems, just like the manual method available through MAX.

I love simulated devices, but one major drawback is the static nature of the simulated data. It would be awesome to have the ability to import real-world data for playback in the simulated devices. Essentially, analog input channels could take a waveform or a waveform file, and digital inputs could take a digital file; even a pop-up probe on the input, where the user can turn digital lines on/off or turn a knob on an analog line, would be very nice to have. This would allow the programmer to capture real data that a system might expect to receive and then run the DAQ application in simulation using DAQmx simulated devices with that exact real-world data.
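To make the proposal concrete, here is a purely hypothetical sketch of what it might look like in the nidaqmx Python API; the load_simulated_waveform call, device name, and file name are invented for illustration and do not exist today.

```python
import numpy as np
import nidaqmx

# Hypothetical: data captured earlier from real hardware
recorded = np.load("bench_run.npy")

# Hypothetical API (this is the idea itself): attach the recorded waveform
# to a DAQmx simulated device so later reads replay it
sim_dev = nidaqmx.system.Device("SimDev1")
sim_dev.load_simulated_waveform("ai0", recorded)   # does not exist today

# Existing API: reads from the simulated channel would then return the real data
with nidaqmx.Task() as task:
    task.ai_channels.add_ai_voltage_chan("SimDev1/ai0")
    data = task.read(number_of_samples_per_channel=len(recorded))
```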

 

 

The vi "DAQmx Tristate Output Terminal (VI)" lacks an input to toggle the drive state of the output.

The current version can only tristate the output, not re-enable it. It is rare that you don't need both options.

 

Tri-state DAQmx

 

Without the enable option, it is a superfluous VI which only does half the job.

 

A workaround is straightforward using the technique here https://forums.ni.com/t5/Example-Programs/DAQmx-Digital-I-O-Toggling-Tristate-on-Individual-Lines-in-the/ta-p/3528571?profile.language=en 

tri.png
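For reference, a minimal sketch of that workaround as it might look in the nidaqmx Python API, assuming the DO.Tristate channel property is writable on the lines in question (device and line names are placeholders):

```python
import nidaqmx

with nidaqmx.Task() as task:
    chan = task.do_channels.add_do_chan("Dev1/port0/line0")

    chan.do_tristate = True    # release the line (high impedance / tristated)
    # ... some other device can drive the line here ...

    chan.do_tristate = False   # re-enable the output driver
    task.write(True)           # the line is now actively driven high
```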

Hello everybody!

 

Currently the AI.PhysicalChans property returns the physical channels available for measurement (e.g. 16 on an NI 9213). However, to calculate the device's maximum sampling rate programmatically, we need the total number of channels including internal ones such as the cold-junction and autozero channels (for the 9213 that's 18).

 

Therefore, I would like to suggest adding a property node that returns the number of internal channels, or the total number of physical channels, or something similar.

 

Use case: programmatically calculate the maximum sampling rate in a program that should work for multiple types of devices without being aware of their type.
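A rough sketch of the gap, using the nidaqmx Python property names as an assumption (the module name is a placeholder); the per-channel math is the part that currently needs a hard-coded internal-channel count per device type:

```python
import nidaqmx.system

dev = nidaqmx.system.Device("cDAQ1Mod1")                 # e.g. an NI 9213

n_external = len(dev.ai_physical_chans.channel_names)    # 16 on a 9213
max_aggregate = dev.ai_max_multi_chan_rate               # aggregate S/s across channels

# What this idea would make device-independent; today n_internal must be hard-coded
n_internal = 2                                           # cold-junction + autozero on a 9213
per_channel_rate = max_aggregate / (n_external + n_internal)
```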

 

Thanks for consideration and have a great day!

 

[actually a customer's idea]

Dear NI, please consider a future hardware feature addition:

 

Add a "Power Up Delay DIP Switch" to the back of the PXI Power Supply Shuttle.

 

It would allow end users to reliably sequence the power-up of multi-PXI-chassis solutions. It could also be handy for sequencing any other boot-order-sensitive equipment in the rack or subsystem. This would also be a world-voltage solution, since this capability already exists in the power shuttle. We are seeing the need for more input-voltage-agnostic solutions; I'm sure you are too.

 

It might offer time-delay choices of 1, 2, 4, 8, 16 seconds, etc.

 

We run into this problem on every multi-chassis integration. We have solved it several ways in the past: human procedure (error prone), sequencing power strips ($$$ and not world-voltage capable), and custom time-delay relays ($$$).

 

Imagine never having your downstream chassis disappear, or, worse yet, having them show up in MAX but act strangely because there wasn't enough delay between boots.

 

Thanks for reading this, and consider tossing me a Kudos!

 

 

While running a test I developed, I noticed an odd DAQmx behavior. After a USB-6212 connection was momentarily interrupted, all read and write tasks associated with the device hung until the USB cable was disconnected. It would have been easy to code around if there were a device-connection event and a device-disconnection event that I could use to pause operations and trigger a device reset. Since the devices are plug-and-play, couldn't the DAQmx API simply make the system's hardware connect/disconnect events visible?
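In the meantime, one hedged workaround sketch in the nidaqmx Python API (device name is a placeholder): approximate the missing disconnect event by polling a device self-test, then reset the device once it responds again.

```python
import time
import nidaqmx
from nidaqmx.errors import DaqError

def wait_for_device(name="Dev1", poll_s=1.0):
    """Block until the device answers a self-test, then reset it."""
    while True:
        try:
            nidaqmx.system.Device(name).self_test_device()
            nidaqmx.system.Device(name).reset_device()
            return
        except DaqError:
            time.sleep(poll_s)   # device missing or not responding yet
```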

Hello All,

 

I would like to know why XNET sessions cannot be configured in NI-MAX.

It would be great if developers could create sessions in MAX and use them in a VI the way DAQmx tasks are used.

 

BR,

Aniket Gadekar. 

It would be nice to have a hardware request forum as well so NI could see the demand for producing hardware for us. 

Just a few things I could use or want:

cDAQ counter modules, like a 6602 in cDAQ form, so I can add more counters to my CompactDAQ system; a lower-end cRIO "brick", think a 6009 morphed with a Single-Board RIO, something under $1k for simple embedded data-logging prototypes. Maybe support for PIC micros (FPGA is so empowering; more microcontroller programming native to LabVIEW would be very nice to add to our toolbox).

 

Would a separate HW forum be helpful?

When I want to do an installer build of my project, I have to include the whole of NI-DAQmx in my build in order to include my (in this case) USB-6251 driver. Thus my install package grows from about 150 MB to more than 1 GB. This is way too much!

 

It would be nice to be able to choose only the drivers I need (in my example, the NI 6251 and the SCC-68) in the "add additional drivers" menu and add just those to the distribution instead of adding them all.

When configuring DAQ hardware such as the M Series, as I set up the PFI connections I always read the "I" as a "1", e.g. PFI4 as connection 14.

Please change it from "PFI4" to "PFI 4" or "PFI_4" or anything better!

I know this is really a DAQ problem, but LabVIEW could present the terminals better in the purple DAQ box.

daq.jpg

Al

It would be beneficial to have a DAQmx property, "Default Signal Type", for each device. It would return the primary measurement type for a given device: e.g. an NI 9201 would return Analog Input Voltage, a 9203 Analog Input Current, a 9211 Thermocouple, an NI 9263 AO Voltage, and so on.
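Today, the closest thing available (shown here with the nidaqmx Python property names as an assumption, placeholder module name) is the list of supported measurement/output types, which tells you everything a module can do but not which single type it is primarily meant for, which is exactly why this property is being requested:

```python
import nidaqmx.system

dev = nidaqmx.system.Device("cDAQ1Mod3")   # placeholder module name

print(dev.ai_meas_types)    # e.g. contains VOLTAGE on an NI 9201
print(dev.ao_output_types)  # e.g. contains VOLTAGE on an NI 9263
```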

 

Original discussion here

I configure DAQmx channels, tasks, and scales, as well as CAN messages and channels, in MAX. It would be nice if I could change the ordering of these elements after creation. It would also be nice to have an option to remove all configured channels (and tasks and scales) as well as CAN messages and channels when I want to load the configuration of another project. Right now I have to go to every section and delete the configuration by hand.

 

It would also be cool if I could configure DAQmx variables in MAX that I could read and write in LabVIEW, too. For example, I have a lot of tasks that all use the same acquisition rate. If I have to change the rate, I have to change every task by hand. If I could use a variable, I would just change the variable. This would save a LOT OF WORK with huge DAQmx configurations.
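Until something like that exists in MAX, one workaround is to keep the shared value in code and push it onto every MAX-defined task at run time. A sketch using the nidaqmx Python API (the persisted-task collection and the rate value are assumptions about this particular setup):

```python
from nidaqmx.system import System

SHARED_RATE = 10_000   # S/s, the single value that would otherwise be edited task by task

for persisted in System.local().tasks:   # tasks saved in MAX
    task = persisted.load()
    try:
        task.timing.cfg_samp_clk_timing(rate=SHARED_RATE)
        # ... run or inspect the task here ...
    finally:
        task.close()
```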

Problem:

  1. Many applications need multiple DAQ chassis synced across hundreds of meters. Ethernet is not used because of its indeterminism.
  2. While NI-TimeSync uses special hardware (IEEE 1588), it seems like NI could build into the Ethernet drivers a way to do time syncing without any other hardware modules (cDAQ, cRIO, PXIe, etc.). The customized NI Ethernet layer would do the master-slave timing for you; it would be built into the platform. The key may be to use the lower boundary of the histogram distribution of loop times: not the average loop time, but the bounded minimum as a special loop-time statistic. See the image at the bottom.
  3. The Ethernet time to send a message and receive an answer is not deterministic. But if all the Ethernet chassis are on a dedicated subnet with no other traffic, then there should be some deterministic MINIMUM time for one chassis to send a packet to another chassis.

Possible solution. Configuration: suppose you had five Ethernet cDAQ eight-slot chassis. Start by making a simple configuration work first, then extrapolate to more complicated network configurations. So put all the chassis on the same subnet, perhaps a dedicated one. Each cDAQ is hundreds of feet from the others. You want to sample data at 1000 samples per second on all chassis and either lock all the sample clocks, adjust the clocks on the fly, or know how much the sample clocks differ and adjust the times after the data is transferred.

  1. LabVIEW tells each slave chassis that it is to be a slave to a particular master cDAQ chassis (gives the IP address and MAC address).
  2. LabVIEW tells one of the cDAQ chassis to be the master and it gives the IP address (and MAC address) of all the slaves to that master.
  3. The local Ethernet driver on the chassis then handles all further syncing without any more intervention from LabVIEW. Avoids Windows’ lack of determinism.
  4. The master chassis sends an Ethernet packet to each slave (one at a time, not broadcast). The slave's Ethernet driver stores the small packet (with a time stamp of when received) and immediately sends a response packet that includes an index to the packet received (and the timestamp when the slave received it). The master stores the response packet and immediately sends a response to the slave response. This last message back to the slave may not be necessary.
  5. The local Ethernet driver for each cDAQ has stored all 1000 loop times and their associated timestamps.
  6. Now each master slave combination has a timestamp of the other's clock with a time offset due to the Ethernet delay. But this Ethernet delay is not a constant (it is indeterminate). If it were constant, then syncing would be easy. BUT
  7. One characteristic of the loop time should be deterministic (very repeatable). On a local subnet the minimum loop time should be very consistent. After these loop messages and timestamps are sent 1000 times, the minimum time should be very repeatable. Example: suppose we only want 10 us timing (one tenth of a sample period). After sending 1000 time-stamped looped messages, we find that the minimum loop time falls between 875 us and 885 us. We have 127 loop times that fall into this minimum range (like the bottom "bucket" in a histogram plot). If we were to plot the time distribution, we would notice an obvious WALL at the minimum times; we would not have a Gaussian distribution. This second peak in the distribution at the minimum would be another good indication that this lower value is deterministic.
  8. Now the master and slave chassis communicate to make sure they have the same minimum loop times on the same message packet loops. The ones that agree are the ones used to determine the timestamp differences between the master and the slaves. The master then sends to each slave the offsets to use to get the clocks synchronized to one tenth of a sample time.

This continues to run (in the background) to detect clock drift. Obviously, after the data acquisition starts, network traffic will increase, and that will cause the number of minimum packet loop times to be fewer than 127 out of 1000. But we should be able to determine a minimum number of minimums that gives reliable results, especially since we know beforehand how much traffic we will be adding to the network once the data acquisition starts. We may find that an Ethernet bus utilization of less than 25% gives reliable results: reliable because we consistently get loop times that are never less than some minimum value (within 10 us).
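To make steps 4 through 8 concrete, here is a small sketch in plain Python (ordinary timestamps, not an NI driver API): it keeps only the exchanges whose round-trip time sits in that minimum "wall" and computes the clock offset from them, assuming the network path delay is symmetric.

```python
def estimate_offset(samples, bucket=10e-6):
    """samples: (t0, t1, t3) tuples, one per message loop.
    t0 = master send time and t3 = master receive time (master clock);
    t1 = slave receive time (slave clock)."""
    loops = [(t3 - t0, t0, t1, t3) for (t0, t1, t3) in samples]
    t_min = min(rtt for rtt, *_ in loops)

    # keep only loops within one timing bucket of the minimum round trip (the "wall")
    wall = [(t0, t1, t3) for rtt, t0, t1, t3 in loops if rtt <= t_min + bucket]

    # for a symmetric path: slave_clock - master_clock ~= t1 - (t0 + t3) / 2
    offsets = [t1 - (t0 + t3) / 2 for (t0, t1, t3) in wall]
    return sum(offsets) / len(offsets)
```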

 

Remember, a histogram of the loop times should show a second peak at this minimum loop time. This second peak is an indication of whether this idea will work or not. The "tightness" of this second peak is an indication of the timestamp accuracy that is possible. This second peak should have a much smaller standard deviation (or even a WALL at the minimum; see the image). Since this standard deviation is so much smaller than that of the overall loop time, it is a far better value to use to sync the cDAQ chassis. If it is "tight" enough, then maybe additional time syncing can be done (more accuracy, timed to sample clocks, triggers, etc.).

 

For example, now that the clocks are synced to within 10 us, the software-initiated start trigger could be sent as a start time, not as a start trigger. In other words, the master cDAQ Ethernet driver would get a start trigger from LabVIEW and send all the slaves a packet telling them to start sampling at a specific time (computed as, say, 75 milliseconds from now).

 

Ethernet_Loop_Time_Distribution_Shows_Determinism_at_the-Wall.jpg

 

I mentioned parts of this idea to Chris when I called in today for support (1613269) on the cDAQ Ethernet and LabVIEW 2010 TimeSync. Chris indicated that this idea is not being done on any of your hardware platforms yet. By the way, Chris was very knowledgeable and I am impressed (as usual) with your level of support and talent of your team members.

I have a USB-6509 and a USB-8951 connected to a computer through an external USB hub. MAX locks up hard when I disconnect the external USB hub from the computer, always requiring a reboot. When I connect/disconnect the individual devices on the other side of the external hub, leaving the hub connected and powered, I do not get any problem.


Should there be problems when the external hub is connected/disconnected, rather than the individual devices? I'm thinking this is a "USB problem", but Windows Device Manager seems to handle all the connection/disconnection variations smoothly. It is MAX (and any subsequent LabVIEW program) that hangs.

 

Any ideas? Thanks!

Hello,

 

NI should provide an input on DAQmx Create Virtual Channel.vi to set a default value for an output channel after the task starts.

It should also be possible to set the default value in the NI-MAX task configuration panel.
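For completeness, the workaround today is to write the desired level yourself immediately after starting the task; a minimal sketch in the nidaqmx Python API (device/channel names and the 2.5 V level are assumptions):

```python
import nidaqmx

with nidaqmx.Task() as task:
    task.ao_channels.add_ao_voltage_chan("Dev1/ao0")
    task.start()
    task.write(2.5)   # the "default" output level the channel should hold
```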

 

BR & Stay Healthy, 

1. On page 2-4 of the manual (http://www.ni.com/pdf/manuals/370489g.pdf):

A sketch or a picture here would be helpful for understanding the text.

 

2. A terminal diagram in the manual for each card (PXI, PCI) would also be very helpful.

Alternatively, a sheet with the terminal diagram could be included with the devices.

 

Although NI has dedicated RTD cards on other platforms, none exists on PXI. The existing bridge-input cards have significantly higher sample rates than needed for typical temperature measurements, and it would be nice to have a more economical drop-in option for RTDs that supported more channels.

Some NI boards have different properties for different channels. For example, the NI 4432 has ICP power available for channels 0 through 3 but not for channel 4. Please add IEPE/ICP power support and AC/DC coupling support information to the DAQmx Physical Channel property node.

I would really like to see a user-configurable hardware front panel that integrates with LabVIEW. The panel would connect to a computer via USB. National Instruments could sell different hardware controls such as pushbuttons, LEDs, switches, LCD panels for graphs, rotary encoder switches, slide switches and potentiometers, LCD indicators for text and numbers, etc.

 

LabVIEW would discover which panels are connected and which hardware controls are available in those panels. The user could then drop them as terminals on the block diagram and use them as normal.

 

Another option for the panel could be an embedded computer running LabVIEW, so that the whole thing could be standalone.