Hello everyone! I am very excited about this reference design and would love to hear about your experiences, ideas, questions, etc. I will be monitoring this forum regularly; any feedback is both welcome and appreciated.
Systems Engineer - Sound and Vibration
I have downloaded this reference and modified it for my SHM system. It works well and was very helpful to have this available.
My last remaining difficulty is system time... I managed to get the cRIO system time right, and the files saved have the correct time in the filename. However, when I open the waveform, the time labels in the file and on the plot say something different. In fact, they give a time/date that was still in the future at the time of recording.
I have been looking at rwfm_AcqRead(Wfm).vi to see how the timestamp is applied. I don't see any reason for it to be wrong, or any settings to change. I also don't see how the timestamp is related to the time the actual block data was recorded to the DataU23.
Bottom line, I would like the timestamp on the waveform to match the time the data was produced. Any help?
I'm glad you found it useful; I would love to hear your thoughts on how the content and/or features could be improved to make your development easier.
The waveform and file timestamps are indeed generated from two separate sources. The file timestamp is created immediately after the trigger condition occurs. The data timestamp is generated immediately after the first Read is called (as designated by the FPGA control "First Read") and the first data block is available from the FPGA. rwfm_AcqRead(Wfm).vi generates a timestamp from the current time and subtracts the amount of time it took to acquire the data. I hope that is clear; here is a picture of the relevant points on the block diagram.
One way I can see the data timestamp being way off is if the first Read VI was called significantly later than when the acquisition began (when the Start VI was called). That wouldn't be the case with the original project, however, since the "Wait for File Trig" state occurs immediately after rwfm_Start.vi is called. I would be happy to look at your project if you want.
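The back-dating logic described above (current time minus acquisition duration) can be sketched like this. This is an illustrative Python sketch, not the actual VI internals, and the date values are made up for the example:

```python
from datetime import datetime, timedelta

def block_start_time(read_time: datetime, n_samples: int, sample_rate_hz: float) -> datetime:
    """Back-date the waveform t0: the first sample was acquired
    n_samples / sample_rate_hz seconds before the Read returned."""
    return read_time - timedelta(seconds=n_samples / sample_rate_hz)

# Example: a 3072-sample block at 1024 Hz started 3 s before the Read returned.
read_time = datetime(2011, 3, 1, 12, 0, 3)
t0 = block_start_time(read_time, 3072, 1024.0)
```

Note the subtraction only covers the one block being read, which is why a long delay between Start and the first Read would leave t0 later than the true acquisition start.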
NI Systems Engineer
Looking at it, I realized my program is running too fast: it records what is supposed to be (and the waveform data claims actually is) 3 minutes of data in just over 1 minute. 15 seconds of real time becomes 40 seconds in the file. I expect this is causing the problem with the waveform timestamps, but I can't find the source of the problem.
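As a quick sanity check on those numbers: if the file claims 180 s of data but only about 60 s of wall time elapsed, the module must be sampling roughly 3x faster than the configured 1024 Hz. A small sketch of that arithmetic (the figures are the ones from this post):

```python
def implied_sample_rate(nominal_hz: float, claimed_s: float, wall_s: float) -> float:
    # The file claims claimed_s seconds of samples were produced in
    # wall_s seconds of real time, so the true rate is scaled up by the ratio.
    return nominal_hz * claimed_s / wall_s

rate = implied_sample_rate(1024.0, 180.0, 60.0)  # ~3072 Hz, about 3x the configured rate
```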
If you are willing to look at my program that would be fantastic. How should I send it to you?
I just put it in. I also included my configuration file. I am running at 1024 Hz, 3 minutes per file, recorded once an hour or as other (added) triggers occur.
The main changes I have made to the reference program are:
1) Removing the strain gage module and replacing it with a second accelerometer module. I also removed the strain gage tab on the front panel.
2) Adding more temperature sensors to the scan.
3) Adding a trigger to record if the RMS of a channel is above a certain level.
4) Adding a "Record" button to trigger manually.
I don't see how any of these would cause it to run too fast but it definitely is.
Thank you for your help.
I think I figured it out. I agree the system is acquiring at ~3 kHz when you asked for 1.024 kHz. This behavior is actually caused by NI-RIO, because we are asking for a data rate that is not supported by the 9234.
You will see on page 19 of 36 that the valid data rates are for n = 1..31. Your configuration file asks for 49, so the RIO node on the FPGA is actually rolling over to 16 and giving you a sample rate of 3.173 kHz. The control in the SHM reference design goes that low because the cRIO waveform VIs may be used with modules other than the 9234.
I will share this experience with the RIO team; perhaps there is a better way of handling unsupported rates than just rolling over. Can you increase your rate to 1.652 kHz?
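The 9234's valid rates come from dividing down its master timebase. A sketch of that divisor table, assuming the usual 13.1072 MHz timebase and the n = 1..31 range cited above (the helper names are illustrative):

```python
FM_HZ = 13.1072e6        # assumed 9234 master timebase
BASE_HZ = FM_HZ / 256    # 51.2 kHz maximum data rate

def valid_rates():
    # fs = (fM / 256) / n for n = 1..31
    return [BASE_HZ / n for n in range(1, 32)]

def nearest_valid(requested_hz):
    return min(valid_rates(), key=lambda r: abs(r - requested_hz))

# The slowest supported rate is 51200/31 ~= 1651.6 Hz, which is why a
# requested 1024 Hz cannot be honored and the divisor wraps instead.
slowest = min(valid_rates())
```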
That was it. I increased my data rate and everything is working right.
An aspect of our particular SHM application is that all our frequencies of interest are below 60 Hz - we could even get away with 128 Hz sampling. The cRIO is capable of much more but we don't really need it. We will record the data at the higher rate and then downsample or decimate later on during processing. Maybe I can even decimate onboard the cRIO before the file is saved.
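Onboard decimation like that could be as simple as block-averaging before the file write. A minimal sketch (a hypothetical helper, not part of the reference design) for a factor-of-8 decimation, e.g. 1024 Hz down to 128 Hz; note a boxcar average is only a crude anti-aliasing filter, and a proper FIR low-pass would be better for a real deployment:

```python
def decimate_boxcar(samples, factor):
    """Average each block of `factor` samples and keep one value per block.
    The boxcar average acts as a crude anti-aliasing filter."""
    n = len(samples) - len(samples) % factor  # drop any ragged tail
    return [sum(samples[i:i + factor]) / factor for i in range(0, n, factor)]

# 1024 Hz -> 128 Hz is a factor-of-8 decimation.
out = decimate_boxcar(list(range(16)), 8)  # -> [3.5, 11.5]
```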
As far as feedback for the SHM reference design goes, it was great. I am brand-new to LabVIEW, and the reference enabled me to get the system up and running and to learn my way around LabVIEW in a few weeks. Other than the effort of learning a new language, it was easy to use. One comment: it took me a week or so before I found the link to your application. A link to your page from the cRIO Waveform Acquisition reference app might be useful to people who are looking for automated data acquisition.
Thanks for your help today.
I finished my lab testing last week - the device is installed on the bridge and I ran it over the weekend. I have problems in the field that I did not encounter in the lab.
The main problem is that it frequently overflows the DMA buffer and restarts itself. This occurs both during file saving and routine monitoring.
Other symptoms of the problem:
- Slow connection and operation of remote panel on cRIO
- Very slow FTP interface, transfer of larger files often fails
- Only some of the cRIO's items can be seen in the Distributed System Manager. CPU and memory usage are sometimes not visible.
- Sometimes can't find the cRIO in MAX.
The problem disappeared briefly this afternoon after I:
- Increased the block size
- Did a cold reboot (unplugged)
However the trouble reappeared later and I will have to wait until tomorrow to retry the cold reboot.
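One thing worth checking when changing the block size is how much stall time the DMA FIFO can absorb before overflowing. A back-of-the-envelope sketch with assumed numbers (the FIFO depth and channel count here are illustrative, not from the project):

```python
def overflow_headroom_s(fifo_depth: int, n_channels: int, rate_hz: float) -> float:
    # With channels interleaved into one FIFO, the host can stall
    # this long before the buffer fills and the acquisition overflows.
    return fifo_depth / (n_channels * rate_hz)

# e.g. a 32767-element host buffer, 8 channels at 1651.6 Hz:
headroom = overflow_headroom_s(32767, 8, 1651.6)  # ~2.5 s of slack
```

If the measured host loop can stall longer than this (slow network callbacks, file I/O blocking the read loop), overflows like the ones above would be expected.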
My first thought was that the network is slower out on the bridge (we are connected via a wireless access point). However, it doesn't seem that a slow network should be able to cause a DMA overflow. Also, the system usually responds to pings in 1-2 ms.
So it seems the cRIO may be running slower on the bridge than in the lab. One thought is the cold temperature: it's hovering around the freezing point, but the RIO specs say it operates down to -20 °C. Another possibility is a problem with the error handling or message sending.
I will be trying to repeat my brief success of this afternoon tomorrow. Have you seen this kind of thing before? Any ideas of obvious things to check that I might not know?