Multifunction DAQ

Dropping Data In Continuous Sampling?

I'm working on an application to use 3 PCI-6143 cards to simultaneously and continuously log data from a shockwave test. Since I don't know exactly when the shockwave will occur over the hours-long test and my client specifically requested no triggering, I'm left with developing a high speed data logging app.

I came across an NI DevZone solution that showed the basics of how to use the lower-level DAQmx VIs to synchronize the three data cards, but the problem I'm experiencing is that unless I instruct the cards to acquire X number of samples, LabVIEW reads a different number of data points per card. For example: 23 data points on card 1, 1100 on card 2, and 3000 on card 3; i.e., the cards are dropping data and not reproducing the waveform. This behavior disappears if I replace the "-1" (acquire all available samples) in the DAQmx Read VIs with a finite number of samples X and a timeout Y.

I don't want to have to set a number of samples to read and a timeout for the shockwave tests - if I'm wrong on either, I could miss critical data from a non-repeatable test. Is there any way to get the cards to continuously acquire data synchronously without losing data?

Thanks,
Chris
--
Chris Coughlin
Message 1 of 9
crcoughlin,
 
I am going to assume that you are running a continuous task in DAQmx (i.e., reading repeatedly in a loop and logging). The default behavior for DAQmx is to read all samples available if the number of samples parameter is set to -1. I'm not sure how your VI is set up, but since you mentioned you have three PCI-6143s, I will assume that you have three analog input tasks defined, and that either in the same loop or in three separate loops you are reading from those three tasks. In either scenario, your read functions will not all execute simultaneously. As such, I would expect that the task which gets read first will have fewer samples available, the second task will have more, and the third will have the most data available at the time of the read. On top of this, your three 6143s probably all share the same PCI bus, so they compete for bandwidth, and at any one time any one device could have transferred more data across the bus and into DAQmx's buffer than the others. This also affects the number of samples available when the DAQmx Read VI is called. One other drawback to the -1 default is that it may require LabVIEW to re-allocate memory if the number of samples available increases.
 
When you set a number of samples per channel to read, the DAQmx Read VI behaves as follows. When called, it checks the number of samples which have been transferred into the buffer. If the requested amount is not yet available, the read yields to other processes; when it wakes, it checks the number of samples available again, and once the requested amount is available it returns them to you. This does not stop a continuous acquisition. The device will not stop sending data into the buffer unless one of two things happens: you stop the task, or an error occurs (remember to check for errors). So as long as you repeatedly read in a loop, you should be able to specify a number of samples to read and get the same number back from all of your devices without losing any critical data. This is the route I would suggest you take. If this doesn't answer your question, please feel free to ask for clarification.
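To make the blocking-read semantics concrete, here is a small Python simulation (not LabVIEW, and not the real DAQmx API - `FakeDaqBuffer` is a made-up toy class). A "device" thread streams samples into a buffer continuously while the consumer requests fixed-size chunks; each read blocks until enough samples have arrived, and no sample is ever skipped:

```python
import threading
import time
from collections import deque

class FakeDaqBuffer:
    """Toy stand-in for the DAQmx input buffer: a device thread appends
    samples continuously; read(n) blocks until n samples are available,
    then returns exactly n, in order."""
    def __init__(self):
        self.buf = deque()
        self.cond = threading.Condition()

    def device_write(self, samples):
        with self.cond:
            self.buf.extend(samples)
            self.cond.notify_all()

    def read(self, n):
        with self.cond:
            while len(self.buf) < n:
                self.cond.wait()          # yield until enough samples arrive
            return [self.buf.popleft() for _ in range(n)]

daq = FakeDaqBuffer()

def device():
    # "hardware" streams 10 samples at a time, never pausing for the reader
    for i in range(0, 1000, 10):
        daq.device_write(range(i, i + 10))
        time.sleep(0.001)

t = threading.Thread(target=device)
t.start()

# fixed-size reads in a loop, as suggested above
chunks = [daq.read(100) for _ in range(10)]
t.join()

total = [s for c in chunks for s in c]
print(len(total), total == list(range(1000)))   # every read returns exactly 100; nothing lost
```

Each of the 10 reads returns exactly 100 samples regardless of how the arrival of data interleaves with the reads, which is the same guarantee a fixed samples-per-channel DAQmx read gives in a continuous task.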
 
Hope this helps,
Dan
Message 2 of 9
Dan,

Thanks very much for your reply. I've taken your advice and switched to an acquire X datapoints approach, but it looks (to me, anyway) as though I'm losing data.

In my VI, I have the DAQmx Read VIs in the same loop. To test the system, I'm feeding each DAQ card a 20kHz sine wave. Since I'm sampling at 250,000 Sa/s/channel, I assumed that I should see 250,000 data points/s/channel in my data files, but this isn't the case.

If I use the default input buffer size (100 kSa, I think) and adjust the number of samples to acquire, I get different sizes of output files. For example: acquiring 500 samples per read, I average 19 MB/card for 60 seconds of operation; 2500 samples, 31.57 MB/card; 10,000 samples, 35 MB/card.
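For scale, a quick back-of-envelope check of those numbers (my assumptions: 8 channels per 6143 and 2-byte samples on disk; the actual on-disk format of the Express VIs may differ, so treat the percentages as rough):

```python
# Expected bytes per card for 60 s of full-rate acquisition
RATE = 250_000          # Sa/s per channel
CHANNELS = 8            # per PCI-6143 (assumption: all 8 channels in use)
BYTES_PER_SAMPLE = 2    # I16 raw; would be 8 for scaled DBL
SECONDS = 60

expected = RATE * CHANNELS * BYTES_PER_SAMPLE * SECONDS   # bytes per card

for label, measured_mb in [("500-sample reads", 19),
                           ("2500-sample reads", 31.57),
                           ("10000-sample reads", 35)]:
    frac = measured_mb * 1e6 / expected
    print(f"{label}: {frac:.1%} of the expected {expected / 1e6:.0f} MB captured")
```

Even the best case (35 MB) is well under the ~240 MB a card should produce in a minute, which is consistent with large gaps of dropped data rather than a small shortfall.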

I'm wondering if there's some magic combination of input buffer size and number of samples to acquire that will fit my application.

Thanks again,
Chris
Message 3 of 9
crcoughlin,
 
There are definitely some combinations of inputs that work better than others. However, the 'ideal' numbers vary from system to system and will also tend to depend on what is being done with the data you are reading.
 
I am a bit curious as to why you seem to be losing data without receiving an error.  Would it be possible for you to post your VI here (or a simplified version of it which demonstrates this behavior)?  If you can't do that, a few things to check:
 
1) Ensure that you are setting up your tasks in continuous mode (input to the DAQmx Timing VI)
    - each task should be configured/started once and then read in a loop (my guess is this is your setup, but without seeing your VI it's tough to know)
2) Ensure that you monitor the error out condition from each of the DAQmx Read VIs (I would have a condition which breaks the loop if an error is detected).
3) There are some combinations of properties in the DAQmx Read Property Node which will allow the DAQmx driver to overwrite samples in the buffer which have not been read.  Are you setting properties in the DAQmx Read Property node?
 
Those are just a few things which came to mind.  However if I could see the VI, it's possible that I may get a much clearer idea of what's going on.
 
Dan
 
Message 4 of 9
Dan,

Thanks for the quick followup.  I'll try to post the VI I'm test-driving in a subsequent post; for whatever reason my previous attempts at posting failed.

Anyway, I've been experimenting with the number of samples and the input buffer size, with no improvement. I've gone back over the earlier data, and there appear to be updates every 4 microseconds as predicted; however, there are 5-second gaps sprinkled throughout the data file on a fairly regular basis. I think this is the cause of the missing data, although I'm open to suggestions as to the underlying cause.

Thanks,
Chris
Message 5 of 9

crcoughlin,

I took a quick look at your VI. Your problem is that you are constantly starting and stopping your task inside the loop, which is not what you want. Configure and start your tasks before your main while loop; then, in the loop, read from the tasks and write to files; and after the while loop, stop your tasks. If I have time today, I will see if I can modify your VI for you, although I've been on a pretty tight schedule.
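The structure being described (configure/start once, read many times, stop once) can be sketched in Python using a hypothetical `MockTask` class - this is a made-up stand-in just to show the control flow, not a real driver API:

```python
class MockTask:
    """Hypothetical stand-in for a DAQmx task, used only to illustrate the
    loop structure: configure/start once, read many times, stop once."""
    def __init__(self, name):
        self.name = name
        self.starts = 0
        self.reads = 0
        self.sample = 0

    def start(self):
        self.starts += 1

    def read(self, n):
        # returns the next n samples of a continuous stream
        self.reads += 1
        out = list(range(self.sample, self.sample + n))
        self.sample += n
        return out

    def stop(self):
        pass

tasks = [MockTask(f"Dev{i}") for i in range(1, 4)]   # one task per 6143

for t in tasks:            # configure & start BEFORE the loop
    t.start()

log = {t.name: [] for t in tasks}
for _ in range(5):         # main acquisition loop: only read + log
    for t in tasks:
        log[t.name].extend(t.read(1000))

for t in tasks:            # stop AFTER the loop
    t.stop()

# each task was started exactly once and its stream is gap-free
assert all(t.starts == 1 for t in tasks)
assert log["Dev1"] == list(range(5000))
```

Restarting a task inside the loop, by contrast, tears down and rebuilds the acquisition on every iteration, and any samples arriving during the restart are lost - which matches the regular 5-second gaps reported above.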

Couple of other things to keep in mind:

1) You are using Express VIs to write to your files. I believe these continually open/close the files, which is not the most efficient method of writing to disk. You are acquiring a lot of data, so efficiency may end up being very important.

2) Hmm... I said a couple of things, but that one thing is actually the only thing that came to mind 🙂

If you make the modifications I mentioned, let me know if that does the trick for you.  Also, if I have time today, I may try to modify your VI for you, but I'm in the middle of doing some testing, so I don't know that I'll have time.

Hope this helps,

Dan

 

Message 6 of 9
Whoops!  You're right - I didn't even notice I had the start/stop going in the loop.  It's been a long project.  🙂

I'll do a quick and dirty mod to see if it helps the situation at all.  In my original VI I was using lower-level VIs to record data, but I thought I'd try the Express VIs because I was more or less reproducing their functionality anyway.  I might have to rethink the switch.

Our problem from the beginning has been the amount of data we need to record.  My client's conducting an underwater implosion test and you only get one shot at recording that kind of data.  Historically, the data have been recorded as audio; coming from this perspective the client's uneasy with the thought of triggering to acquire the implosion event and wants to just record everything.

--chris
Message 7 of 9
crcoughlin,
 
I don't use the Express VIs much, but I would assume that every time you call one, it opens and closes a file. I would think it possible to implement something more efficient than that (I could be wrong... not a LabVIEW expert, more of a DAQ expert). If I were you, I'd try it as implemented and see if it works. Once you get your task start/stops out of the read loop, you should get a continuous stream of data from your 6143s. If the rest of your application is not fast enough to keep up, I would expect you to see either a buffer-overwrite or a device-FIFO-overflow error; that error means you are not processing your data fast enough to keep up with your hardware. Some suggestions if this happens:
 
1) Non-scaled or raw versions of the DAQmx Read VI will be faster (no need to convert 2-byte binary values to 8-byte floating-point numbers), and will also save space on disk. The drawback is that if you display the data as you acquire, the raw binary values will not be as intuitive to interpret; however, if most of the examination will happen post-acquisition, this may be feasible.
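The 4x storage difference is easy to see with Python's `struct` module (a toy illustration of the two on-disk representations, not the DAQmx driver itself):

```python
import struct

samples = list(range(-5, 5))   # 10 toy I16 readings

# raw I16, 2 bytes per sample, little-endian - as the 6143's ADC delivers them
raw = struct.pack(f"<{len(samples)}h", *samples)

# scaled 8-byte doubles - what a scaled DAQmx read would hand back
scaled = struct.pack(f"<{len(samples)}d", *map(float, samples))

print(len(raw), len(scaled))   # 20 vs 80: the raw stream is 4x smaller
```

At 250 kSa/s on 8 channels, that factor of four is the difference between roughly 4 MB/s and 16 MB/s of disk traffic per card.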
 
2) Don't update the graphs on the front panel with every read, as this causes LabVIEW to re-draw the graphs on each iteration of the loop. If they need to be present, perhaps update them only every X iterations through your loop.
 
3) Look at file management, and ensure that files are only opened/closed when they need to be. (Without spending a lot of time looking through the Express VIs: if you right-click one and select Show Front Panel, you can see what happens inside it. However, I recommend you copy the Express VI to a blank VI first, because opening the front panel makes the Express VI in your block diagram non-configurable - although if you close the front panel and undo the action on your block diagram, you can revert.)
 
Those are the major points for now.  I'll let you know if I think of anything else.
Hope this helps,
Dan
Message 8 of 9
Dan,

Thanks for the added ideas; they meshed with what I was thinking as well.  I was eventually able to stream straight to disk without losing data. In case anyone else coming across this thread is in a similar situation, here's what worked for me...

I removed the Express I/O VIs from the program - just as Dan mentioned, they're not able to keep up at these data rates. I went instead with the lower-level I16 write-to-disk VIs, which worked out well. Originally I thought I would just keep 3 data files (one per 6143 card) to avoid the overhead of opening/closing files, but this quickly (~3 minutes of data) bumped up against the 2 GB LabVIEW data file size limit. So I wrote subVIs to build data filenames based on the date and time and included them in my 60-second loop, so that a new data file is started every 60 seconds. The stream-to-disk now does everything the Express VIs were originally supposed to do, but without losing any data.
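A minimal sketch of that rollover scheme in Python (the filename pattern and `data_filename` helper are my own guesses at the approach - Chris's actual subVIs are not shown in the thread):

```python
from datetime import datetime, timedelta

def data_filename(card, t):
    """Hypothetical filename builder: one file per card, stamped with the
    date and time, so a fresh file starts every minute."""
    return f"card{card}_{t:%Y%m%d_%H%M%S}.dat"

# At 2 B/sample * 250 kSa/s * 8 channels, each 60 s file is ~240 MB,
# comfortably below LabVIEW's 2 GB file size limit.
bytes_per_file = 2 * 250_000 * 8 * 60

start = datetime(2006, 5, 1, 12, 0, 0)   # arbitrary example timestamp
names = [data_filename(1, start + timedelta(seconds=60 * k)) for k in range(3)]
print(bytes_per_file, names)
```

Keeping each file to one minute of data also bounds how much is at risk if a write fails mid-test.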

So far, in tests of up to 90+ minutes, I haven't received any error messages about lost data, and the file sizes work out to exactly what you'd expect from a 6143 acquiring at full tilt (2 bytes/sample * 250,000 samples/s * 8 channels ~ 3.8 MB/s per card, or around 11.4 MB/s total for three 6143s).

Thanks to Dan for all of his assistance in helping me get this straightened out!
Message 9 of 9