It's not you, it's me. I needed a quick schooling in the different LabVIEW data types, typecasting, how LabVIEW handles array manipulations, and how LabVIEW writes to a file. Your added info in the last post helped pull the pieces together.
When I tried to rebuild the typecast-and-reshape-array section of the 2D Scaled SGL VI, I used the wrong data type in the Type Cast function and hence got erroneous values on the output. Once I got the data types correct, I was able to duplicate your results and move on.
1D Scaled SGL case: n rows of FXP -- SGL U32 (IEEE 754) -- DMA pipe -- binary file /// To read, look for n rows of SGL for m samples.
Ultimately I need to process the recorded data in MATLAB, and understanding how the data was written in LabVIEW was key to reading it back. I have attached a simple MATLAB function for any others that try to use this application. Another application was written in Python to handle larger files, but this MATLAB one works most of the time. The only thing I added to the reference application was a write-to-binary-file case structure. Because I am writing to a USB hard drive, which the cRIO requires to be formatted as FAT32, I have also written some logic that creates a new folder every x files and only writes files that are y minutes in duration.
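For anyone reading these files outside of MATLAB, here is a rough Python sketch of the same read logic. The function name and the exact layout (each DMA read written as a 2D block of n-channel rows by m-sample columns, big-endian SGL, no header) are my assumptions based on the format description above; check them against your own writer before trusting the output.

```python
import struct

def read_sgl_2d_blocks(path, n_channels, samples_per_read):
    """Read a headerless binary file of big-endian 32-bit floats
    (LabVIEW SGL; big-endian is LabVIEW's default byte order).

    Assumes each DMA read was written as a 2D block of
    n_channels rows x samples_per_read columns, flattened row by
    row -- adjust the slicing if your file is sample-interleaved
    instead."""
    with open(path, "rb") as f:
        raw = f.read()
    n_values = len(raw) // 4
    values = struct.unpack(">%df" % n_values, raw[: n_values * 4])

    # Split the flat stream back into per-channel sample lists,
    # one read block (n_channels * samples_per_read values) at a time.
    channels = [[] for _ in range(n_channels)]
    block_len = n_channels * samples_per_read
    for b in range(n_values // block_len):
        block = values[b * block_len : (b + 1) * block_len]
        for ch in range(n_channels):
            channels[ch].extend(
                block[ch * samples_per_read : (ch + 1) * samples_per_read]
            )
    return channels
```

If your writer flattens the other way (all channels for sample 0, then sample 1, ...), the inner slicing becomes a stride (`values[ch::n_channels]`) instead.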
To date I have been able to stream 8 channels @ 100 kS/s/ch to disk via binary files, and 50 kS/s/ch to TDMS files. The next challenge is 16 channels @ 50 kS/s/ch via binary files.
Again, this is an excellent reference example. Thanks for your assistance and hard work!!
Just tried to install this after installing LabVIEW 2011. The installer defaulted to my old 2010 directory, and when I tried to point it to the new 2011 directory it did not install anything. I was able to remedy this by manually copying the old folder to the new location.
I will take a look at the installer. Maybe I don't know how to tell the installer to default to the latest version of LabVIEW.
FYI, these installers only work for one version of LabVIEW, so if you had first uninstalled the cRIO Wfm Library from the Control Panel, you could then have installed it to LV 2011.
Thanks for the feedback,
S&V Systems Engineer
I'm using the cRIO Waveform Reference VIs to collect some accelerometer data, and the accuracy of the timestamp seems to be drifting. The cRIO is synced with a time server, and I am attempting to compare my data with data collected from a PLC, where the common reference is time. However, my data seems to be delayed by 10-20 minutes. I started digging into this and noticed that there was also a delay between the file name's time/date and the actual values in the TDMS files being generated. I have attached a data file that shows a 1 hour 13 second difference (I had to rename it .txt because apparently tdms is not a valid file extension; the file itself is a binary TDMS file). I'm sure the 1 hour part is a daylight saving or time zone issue, but where do the 13 seconds come from? I understand it might take a second or so to create the file and start writing, but 13 seconds seems like a long time. The bigger issue, however, is that my data is delayed 10-20 minutes, and since I am attempting to use the waveform's timestamp feature as a common metric, this makes things challenging. The details of my setup are:
-cRIO with a 9205 card
-collecting 12 channels of data @ 200 Hz with 800 samples per read (800 / 200 Hz = 4 sec per read; 20 reads per file gives 80 sec of data per file)
Anybody experience similar issues or have any thoughts?
Regarding timestamp drifting: yes, I've experienced something similar.
I have a cRIO application logging to TDMS files via USB, and when I switch logging to a new file (to keep the file size down), I get a big gap in the starting timestamp of the new file (I don't remember exactly how big, but I think it was around 40 seconds).
The program wasn't built around the Waveform Reference Application, so I'm now rebuilding it using this WF-Ref-App in the hope of getting rid of the timestamp delay. It doesn't sound promising that you see similar behavior using the WF-Ref-App.
I'll post an update here when my re-coding is finished to let you know my results...
I have two other questions about implementing the WF-Ref-App:
While the delay associated with writing new files is not ideal, the real problem is that the data with its timestamp seems to lag the "real" time. In my process I am monitoring vibration, and I see a big time lag in the data collected by the cRIO vs. the data collected by a PLC. For example, when certain motors are started/stopped they vibrate the system, and when I overlay the cRIO data with the PLC data, there is a significant delay in the vibration signature: 10-20 minutes. So my question is: if the cRIO application runs for a very long time (several days), can it accumulate an error in the timestamp that would explain this time lag?
In regards to your IRQ question, the reference application does use IRQs. Specifically, look at the file [FPGA] SAR Acq Main.vi.
I guess we're not experiencing the exact same problem regarding the time lag. However, I noticed that the WF Ref App reads the system time only once to get a t0 (assuming Continuous Waveform Acq here) and then calculates its own timestamps thereafter as #Samples * dt. Have you tried substituting your "GetSystemTime VI" with this calculation instead? It feels safer to do it like this.
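For what it's worth, that t0 + #Samples * dt scheme can be sketched in a few lines of Python (the function name and the numbers are mine, purely illustrative, not from the library):

```python
def block_start_time(acq_start, samples_read_so_far, dt):
    """Timestamp of the first sample in the next block, derived
    from one system-time read (acq_start) plus the sample clock
    (dt = 1 / sample_rate). Consecutive blocks come out exactly
    contiguous no matter how late the host read loop runs."""
    return acq_start + samples_read_so_far * dt

# Illustrative numbers: 200 Hz acquisition, 800-sample reads.
dt = 1.0 / 200.0
t0 = 0.0  # system time, read once at acquisition start
stamps = [block_start_time(t0, i * 800, dt) for i in range(3)]
# blocks land on the sample-clock grid: t0, t0 + 4 s, t0 + 8 s
```

The point is that any jitter in *when* the host loop runs never leaks into the timestamps; only the sample clock's own drift does.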
Regarding IRQs: yes, there is IRQ usage, but only to sync host and FPGA at startup. What I mean is to use it later in the acquisition loop as well: fill the DMA FIFO with a predetermined number of samples before firing an IRQ to the host, which waits until this IRQ is received and then reads all available samples from the DMA FIFO.
I cannot see any IRQs used this way.
I think we're having the same problem with the latency between files and acquisition; I'm just seeing additional problems too.
I understand what you're saying about #Samples * dt, and I'll probably make some modifications to see what happens. However, if the hardware is working correctly and the sampling rates are correct, this should work.
In regards to the IRQ, the application is synchronizing each acquisition, not just on startup, which is kind of what you have described. Here's the comment from inside the FPGA VI:
"The host application waits on this interrupt to synchronize itself with the start of the FPGA's acquisition. This synchronization prevents the host application from polling the DMA FIFO before the FPGA is sending its data.
It also prevents the FPGA from sending its data before the host application is ready to receive it."
Thanks for your comments.
I don't have an easy answer for the offsets, but I do have some comments on time in general. As you have already seen, I calculate waveform time based on the sample clock. I do this (as opposed to calculating a new timestamp for every block) because many of NI's processing functions will reset if there is a discontinuity between the timestamps of two blocks. There are some S&V functions, for instance, that will reset if the gap between the timestamps of two adjacent blocks is more than 140% of the expected dt.
So the timestamps being returned are affected by DAQ hardware clock drift. The cRIO system time has clock drift as well. As a result, the time on your Windows host, the cRIO time, and the data time will all be different as they drift away from each other. There are some things you can do to mitigate this in different scenarios. Some cRIO controllers have NTP capability that will periodically adjust their time to an NTP server. You can also reset the timestamps of each block by setting the read/write control called "first read" to true after every read (or just do it yourself by bundling a new timestamp into each block). The determinism of the read loop will affect timestamp discontinuity, but perhaps you don't care about that as much.
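To make that 140% figure concrete, here is a small Python sketch of the kind of contiguity check involved. The function name and the exact threshold handling are illustrative; NI's actual S&V internals may differ.

```python
def is_contiguous(prev_t0, prev_n_samples, dt, next_t0, tol=1.4):
    """Return True if the next block starts close enough to where
    the previous block ended. A gap larger than tol * dt (e.g. the
    140% threshold mentioned above) is the kind of discontinuity
    that makes some S&V processing functions reset their state."""
    expected = prev_t0 + prev_n_samples * dt
    return abs(next_t0 - expected) <= tol * dt

dt = 1.0 / 200.0
# previous block: 800 samples starting at t = 100.0 s, so the
# next block is expected at t = 104.0 s
print(is_contiguous(100.0, 800, dt, 104.0))   # on the grid: contiguous
print(is_contiguous(100.0, 800, dt, 104.05))  # 10 * dt late: a gap
```

Stamping each block from the system clock makes this check fragile (loop jitter shows up as gaps), which is exactly why deriving time from the sample clock is attractive here.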
The real solution to this problem is to discipline the DAQ clock to a global time (like GPS or NTP). We support this technique on PXI but not cRIO unfortunately.
I did experiment with the IRQ method you spoke of and actually saw a slight INCREASE in CPU performance. I had some plausible explanations from some smart people at the time but ultimately went with what worked.
As far as putting the read into a timed loop, I'm not sure what that buys us. Since the DMA channel has a large buffer, we are protected against jitter causing buffer overflows. In MOST applications I say it is adding unnecessary complexity. In the scenario above where the loop is assigning timestamps then perhaps there is some value in calling the read deterministically. I have not used timed loops in any of my wfm monitoring applications so I don't have many recommendations to make. They are invaluable for control apps but DMA channels add latency to the data which is problematic in those applications.
S&V Systems Engineer