09-07-2007 10:26 AM
07-18-2008 12:15 PM
10-28-2008 09:11 PM
First comment is: Thanks a lot Christian for this design, very useful...
I love the XML config system...
I'm using this design to record: 3 channels at 24 bits up to 50 kS/s, plus 32 channels at 16 bits at 1 kS/s, plus 8 × 32-bit values per second, plus 1 K of 32-bit values per second.
I have a big RT FIFO, which I then read every second.
I save TDMS files to a USB memory stick.
I control the application (data + config) through the Web Server, not from the host...
For the moment that's working at 10 kS/s, but I would like to improve the TDMS logging to have more CPU available at 50 kS/s.
The idea is to buffer the TDMS data and run the logging loop every second instead of every 10 ms...
Has anybody done that, or run into the same issue?
Is the Web Server a good solution, or will it use too many resources? (I display only a little information, like the FIFO size, to check that it's working properly, and no charts.)
I've got LabVIEW 8.5 and would like to see the CPU usage via the web server (without using any LabVIEW on the control PC).
What kind of change did you do on this design?
10-30-2008 02:55 PM - edited 10-30-2008 02:57 PM
First, which architecture are you using? The CompactRIO Embedded Datalogger (CRED) or the CompactRIO In-Vehicle Data Logger (CRED-IV)?
For the file writes, the performance will definitely be improved if you can write larger blocks of data with each operation. You'll need sufficient memory (FIFO or Queue space) to buffer the data until it is written. In the CRED architecture, you'll need an extra loop to separate the data acquisition and the logging. You would then use the RT FIFOs to transfer data between them. The CRED-IV architecture already has separate loops for acquisition and logging, so you should be able to modify it by changing the loop rates. In either case, you'll need to read more data out of the RT FIFO each iteration of the logging loop than you put into it each iteration of the data acquisition loop to avoid overflow.
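The buffering scheme described above can be sketched in a few lines of Python. A fast producer loop pushes small blocks into a FIFO, and a slower consumer loop drains everything that has accumulated, so each file write handles one large block instead of many small ones. This is only an illustration of the pattern; the real design uses LabVIEW RT FIFOs and TDMS writes, and the names and sample counts here are made up.

```python
# Illustrative producer/consumer buffering: a 10 ms acquisition loop feeds
# a FIFO; a 1 s logging loop drains *all* pending blocks per iteration,
# so it always removes more data than one acquisition iteration adds.
from collections import deque

fifo = deque()

def acquire_iteration(samples_per_iteration=500):
    """Producer: one 10 ms acquisition iteration (50 kS/s -> 500 samples)."""
    fifo.append([0] * samples_per_iteration)   # placeholder data block

def log_iteration():
    """Consumer: drain everything accumulated since the last write so the
    FIFO cannot overflow and the disk sees one large write."""
    big_block = []
    while fifo:
        big_block.extend(fifo.popleft())
    # write_tdms(big_block)  # hypothetical: one large write, not 100 small ones
    return big_block

# 100 acquisition iterations (1 s at 10 ms each), then one logging write
for _ in range(100):
    acquire_iteration()
chunk = log_iteration()
print(len(chunk))  # 50000 samples written in a single operation
```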
As for the web server, it does use a fairly significant chunk of resources on the target. If you want the fastest streaming rate possible, it's best to disable it. On the other hand, if a web interface is an important requirement of your data logger, then you should keep it and find other ways to improve the performance. If you do keep the web server, be sure to remove the TCP communication interface from whichever architecture you are using; this should buy you a little extra processor bandwidth.
If you could give me some more specific information about which architecture you are using, how you have modified it, and what problems you are having, I can offer better assistance in fixing them. I'll be out of town until next Tuesday, but will respond to anything you post then.
NI Systems Engineering
11-03-2008 07:47 PM
I show only 8 rows of a 1D array of information to be sure that the system is running. I display the XML config and can save it (reboot if changed). Question: is there a way to remove the STOP button of the VI in the web browser?

LOG: I changed the loop by adding a queue. I made another loop which concatenates the data to a flat string and saves it to the USB memory. I get data from the FPGA every second. I used to convert U32 to fixed point and then to double. I changed it to record only the U32, which is a little more efficient (32 bits instead of 64 bits), and convert it when I read the TDMS file, depending on the cRIO module. This is still not fast enough to save at 50 kS/s. The USB speed looks to be the only limitation.
Questions: Can we find a way to improve the USB speed?
Or should I try to optimise the TDMS streaming (headers?)?
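The raw-U32 trick described in the post above (write 4-byte raw samples, convert to doubles only at read time) can be sketched with Python's `struct` module. The gain and offset values here are purely illustrative; the real conversion depends on the cRIO module's calibration.

```python
# Sketch of logging raw U32 samples (4 bytes each) instead of doubles
# (8 bytes), halving the write load, and converting on readback.
import struct

def log_raw(samples_u32):
    """Pack raw unsigned 32-bit samples for writing: 4 bytes per sample."""
    return struct.pack("<%dI" % len(samples_u32), *samples_u32)

def read_and_convert(raw_bytes, gain=1.0e-6, offset=0.0):
    """Unpack and scale to double-precision only at read time.
    gain/offset are illustrative stand-ins for module calibration."""
    n = len(raw_bytes) // 4
    raw = struct.unpack("<%dI" % n, raw_bytes)
    return [r * gain + offset for r in raw]

blob = log_raw([1_000_000, 2_000_000])
print(len(blob))              # 8 bytes on disk instead of 16 for doubles
print(read_and_convert(blob))
```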
11-04-2008 09:58 AM
If you are using the CRED-IV, there shouldn't really be a need to add an extra logging loop since it already has one; you'd only need that if you were using the standard CRED.
For the abort button, you should be able to remove it in the VI settings under window appearance.
One thing you might consider to improve the performance is replacing the queues with RT FIFOs. I use queues because I need to keep the data packets very flexible to handle CAN communication, but if you aren't using CAN and have fixed-size data packets, RT FIFOs should offer better performance since they don't need to allocate memory. However, I'm still not sure this will get you up to 50 kS/s TDMS logging to a USB key.
I haven't specifically benchmarked speed for logging to USB, and there will definitely be some variation between devices, so my suggestion would be to develop a very simple VI that just streams data to your USB storage. Try it in both TDMS and raw binary. If you can't get the speeds you want with a simple program, then you won't be able to get them with the architecture either. If you can get them, then you can start experimenting with things like replacing the queues or using binary instead of TDMS to squeeze a little extra performance out of the architecture.
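The simple streaming test suggested above can be approximated with a short script: write a fixed amount of data to a file on the target drive and time it. Point the path at the USB stick to see what the device can sustain. This is an illustrative stand-in for the "very simple VI" (the path, block size, and total size are examples, not recommendations).

```python
# Quick sequential-write throughput test for a storage device.
import os
import time

def benchmark_write(path, block_kb=64, total_mb=16):
    block = b"\0" * (block_kb * 1024)
    n_blocks = (total_mb * 1024) // block_kb
    start = time.time()
    with open(path, "wb") as f:
        for _ in range(n_blocks):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())   # ensure the data actually reached the device
    elapsed = time.time() - start
    return (total_mb * 1024) / elapsed   # KB/s

rate = benchmark_write("throughput_test.bin")
print("%.0f KB/s" % rate)
```

Trying the same test with both raw binary and TDMS writes, as suggested above, shows how much of the limit is the device itself versus the file format.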
I don't know of any specific way to improve USB logging speed, although as mentioned above, I believe there is some variation between devices. Another similar option which should be faster is CompactFlash.
11-10-2008 02:13 AM
On the CRED-IV, I changed the logging loop to write a bigger file block every second, giving as much time as possible to saving the file.
I will try the RT FIFO....
I made some tests with a simple VI and I'm close to 700 KB/s, which could be enough (I have 1 GB per hour, and 1 GB / 3600 s ≈ 278 KB/s).
I'm changing the TDMS file viewer and have run into a problem: I cannot read the length of the data recorded each time (every second). I can get the total length from the properties, but I cannot decode which samples were written in each one-second block. Any clues on how to read it?
11-13-2008 10:40 AM
I don't think TDMS differentiates data between writes, so you'll need to add your own tags to differentiate between the values written during one iteration and the values written during the next. One way to do this might be to keep track of the points written using a shift register and then write that value to a separate channel each time you start a block of data; you'd then be able to use it as an index when reading the data back out.
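The index-channel idea above can be sketched as follows: alongside the data channel, write the running sample count at the start of each block to a second channel; on readback, consecutive index values delimit the blocks written per iteration. Plain Python lists stand in for TDMS channels here, and the running count plays the role of the shift register.

```python
# Illustrative index channel marking per-iteration block boundaries.
data_channel = []
index_channel = []
total_written = 0   # would live in a shift register in LabVIEW

def write_block(samples):
    """Append one iteration's samples, recording where the block starts."""
    global total_written
    index_channel.append(total_written)
    data_channel.extend(samples)
    total_written += len(samples)

def read_block(i):
    """Recover exactly the samples written during iteration i."""
    start = index_channel[i]
    end = index_channel[i + 1] if i + 1 < len(index_channel) else len(data_channel)
    return data_channel[start:end]

write_block([1, 2, 3])
write_block([4, 5])
print(read_block(0))   # [1, 2, 3]
print(read_block(1))   # [4, 5]
```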
01-13-2010 04:53 AM
Hi, I am trying to use this code. I'm using a cRIO-9074 with four NI-9234 modules. I have already configured all the hardware but still don't know which routine to compile, since there are two: csi_FPGA Get Cal (9215-9201).vi and csi_FPGA Acquisition (9215-9201).vi.
How should I manage this issue?
Thanks in advance,
01-13-2010 09:12 AM
You need to compile both. The Get Cal VI returns calibration constants that the RT side uses to convert raw binary representations of floating-point values into double-precision format. It runs once during program startup to get the calibration constants; the RT side then closes the Get Cal VI and runs the Acquisition VI.
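The startup sequence described above amounts to: fetch the calibration constants once, then use them to scale every raw reading in the acquisition loop. A minimal sketch, assuming a simple linear gain/offset conversion (the function names, constant values, and linear form are illustrative; the actual scaling depends on the module):

```python
# Illustrative two-step startup: get calibration once, then scale raw data.
def get_cal():
    """Stand-in for csi_FPGA Get Cal: runs once at startup."""
    return {"gain": 7.8e-6, "offset": 0.0}   # example constants only

def to_volts(raw_codes, cal):
    """Convert raw binary codes to double-precision engineering units."""
    return [code * cal["gain"] + cal["offset"] for code in raw_codes]

cal = get_cal()                          # step 1: fetch constants once
volts = to_volts([0, 1000, -1000], cal)  # step 2: acquisition loop reuses them
print(volts)
```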
Note that this architecture pre-dates some of the newer features in LV FPGA such as the fixed-point data type. While the method of handling floating-point numbers used for this architecture still works in current LV versions, if we were designing it today we would most likely use fixed-point acquisition, avoiding the need to retrieve the calibration constants and therefore the need for a second LV FPGA VI.
NI Systems Engineering