03-04-2020 03:29 PM
Hello all. I'm new to LabVIEW and have been assigned to a project that requires modifying a previously created program. For the most part, it works as intended. We are receiving data from two DAQs and a Futek sensor: one DAQ reports only temperatures, the other reports voltages from three devices (each arriving at a different rate), and the Futek reports RPM and torque.
I am trying to record each piece of data in 1-second intervals. The goal is to sample all data as fast as the fastest device reports, then average the samples from each device into 1-second chunks and write the results to a .csv file, roughly like the sketch below.
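For concreteness, here is a rough Python sketch of the per-channel averaging I'm after (the rates, lengths, and channel names are placeholders, not our actual setup):

```python
# Hypothetical sketch: averaging channels sampled at different rates
# into aligned one-second values. Rates and data are made up.
import numpy as np

def one_second_means(samples, rate_hz):
    """Average a 1-D array of samples into 1-second chunks."""
    n = (len(samples) // rate_hz) * rate_hz        # drop any partial tail
    return samples[:n].reshape(-1, rate_hz).mean(axis=1)

temps    = np.random.rand(10 * 10)     # e.g. a 10 Hz temperature channel
voltages = np.random.rand(10 * 1000)   # e.g. a 1 kHz voltage channel

temp_rows    = one_second_means(temps, 10)       # 10 one-second averages
voltage_rows = one_second_means(voltages, 1000)  # 10 one-second averages
```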
03-05-2020 02:25 AM
You haven't included all of the VIs in this project, so I'm not sure how you have set up your DAQmx channels. It also means I can't tell how fast your loop is running, because you haven't wired anything to the 'samples per channel' terminal.
Your approach depends on how accurate you need to be. A simple (though not the most robust) way is to use the Mean PtByPt function to keep a moving average of the previous x samples, where x is the number of samples on that channel.
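Mean PtByPt is a LabVIEW point-by-point VI, so this is only a rough Python analogue of what it computes, one sample in, one running mean out (the window length here is just an example):

```python
# Rough Python analogue of LabVIEW's Mean PtByPt VI: a running mean
# over the last `length` points, updated one sample at a time.
from collections import deque

class MeanPtByPt:
    def __init__(self, length):
        self.buf = deque(maxlen=length)   # holds the last `length` samples

    def update(self, sample):
        self.buf.append(sample)           # oldest sample falls off automatically
        return sum(self.buf) / len(self.buf)

avg = MeanPtByPt(1000)        # e.g. a 1000-sample window on a 1 kHz channel
latest_mean = 0.0
for s in (0.1, 0.2, 0.3):     # stand-in for samples arriving in the DAQ loop
    latest_mean = avg.update(s)
```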
The output of this could be stored in an FGV (functional global variable) and read by another loop that runs once per second and writes the FGV's value to file.
This is by no means the best approach, as there are likely to be overlapping or missed samples between the 1-second recordings.
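In text form, the FGV-plus-logging-loop pattern looks roughly like this Python sketch; the lock-protected dictionary plays the FGV's role, and the channel names and file name are made up. Note the software-timed one-second wait, which is exactly where the overlap and missed-sample problem comes from:

```python
# Hedged sketch of the FGV + logging-loop idea: the acquisition loop
# updates a shared value; a second loop wakes every second and writes
# whatever is current to the CSV.
import csv, threading, time, random

latest = {"torque": 0.0, "rpm": 0.0}      # plays the role of the FGV
lock = threading.Lock()

def acquisition_loop():
    while True:
        with lock:                         # FGV: one protected writer
            latest["torque"] = random.random()   # stand-in for a DAQ read
            latest["rpm"] = random.random()
        time.sleep(0.001)

def logging_loop(num_seconds):
    with open("log.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["time", "torque", "rpm"])
        for _ in range(num_seconds):
            time.sleep(1.0)                # software-timed, hence the drift
            with lock:
                snapshot = dict(latest)
            writer.writerow([time.time(), snapshot["torque"], snapshot["rpm"]])

threading.Thread(target=acquisition_loop, daemon=True).start()
logging_loop(10)
```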
A better way would be to restructure your code so that everything logs at the same rate (let's say 1 kHz) and you read 1000 samples at a time. Each read would then trigger a file write, either in the same loop or, preferably, in a separate loop via a queue. That way your file writing is tied to the hardware timing of your cDAQ and you will never miss any samples.
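As a sketch of that producer/consumer structure (in Python rather than LabVIEW; read_block() is a stand-in for a hardware-timed DAQmx Read, and the file name is illustrative):

```python
# Producer/consumer sketch: the acquisition loop reads fixed
# 1000-sample blocks (so pacing comes from the hardware clock),
# averages each block to one value, and queues it; the consumer
# loop only writes to disk.
import csv, queue, threading, random

SAMPLE_RATE = 1000                 # 1 kHz, as suggested above
blocks = queue.Queue()

def read_block(n):
    """Placeholder for a hardware-timed DAQmx read of n samples."""
    return [random.random() for _ in range(n)]

def producer(num_seconds):
    for _ in range(num_seconds):
        data = read_block(SAMPLE_RATE)       # blocks until 1 s of data exists
        blocks.put(sum(data) / len(data))    # one averaged value per second
    blocks.put(None)                         # sentinel: acquisition finished

def consumer():
    with open("log.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["second", "mean"])
        second = 0
        while (mean := blocks.get()) is not None:
            writer.writerow([second, mean])
            second += 1

threading.Thread(target=producer, args=(10,), daemon=True).start()
consumer()
```

Because the producer blocks on the hardware read, the one-value-per-second pacing comes from the device clock rather than a software timer, which is the whole point of this structure.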
This approach is difficult if you are using multiple pieces of data acquisition hardware, though.