LabVIEW


1) buffer overflow, 2) input terminal configuration for multiple channels, 3) waveform fluctuation

Hello there,

I have run into a few problems and have been pulling my hair out.  I'm getting bald, so I am hoping to get some assistance here and save the rest of my precious hair.

1.  I am encountering a buffer overrun problem.  Though it has been discussed in several threads, such as
http://forums.ni.com/ni/board/message?board.id=40&message.id=3035&view=by_date_ascending&page=1, I still have a few more questions.

I am using a PCI-6259, whose sampling rate should go up to 1.25 MS/s.  Yet I am sampling at only 33.33 kS/s and it already seems to be maxing out because of the buffer overrun problem.  My objective for this VI is to continuously monitor the sensor output for up to 48 hours.  I have already trimmed my VI down to doing nothing but saving data to a file.  At this time I am sampling at 33.33 kHz, continuous samples, 3333 samples to read, but I get a buffer overrun error after about 0.5 sec (~3000 samples).  I don't mind slowing the sampling down a bit as long as I can sample my sensor for 48 hours.  What else should I do?  Any advice?

"Attempted to read samples that are no longer available. The requested sample was previously available, but has since been overwritten."
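For context, the timing constraint behind that error can be sketched with the numbers from this post (a Python illustration, not LabVIEW; the per-iteration overhead is whatever the loop actually does, so the budget below is the key figure):

```python
# Rough timing budget for a continuous DAQ read loop, using the numbers
# from this thread (33.33 kS/s, 3333 samples per read).
sample_rate = 33_330        # samples per second, per channel
samples_per_read = 3_333    # samples requested per loop iteration

# The hardware fills the buffer continuously, so each loop iteration must
# finish within the time it takes to acquire one read's worth of samples:
time_budget = samples_per_read / sample_rate   # ~0.1 s per iteration

print(f"Each loop iteration must complete within {time_budget:.3f} s")
# If chart updates, array growth, and file open/close push the loop past
# this budget, unread samples get overwritten and DAQmx raises the
# "samples are no longer available" error.
```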

1.5  Is there any way I can view the entire chart using the x scrollbar?  Though I am not certain, I think there is a limit to how much history I can review.

2.  I understand that I can do multiple inputs and outputs using this wonderful card.  How should I change the input terminal configuration per channel (RSE, NRSE, differential, etc.)?

3.  Sometimes the waveform chart will flicker, and the square pulse train changes to a slowly moving DC wave and then changes right back.  This happens more frequently as the buffer gets close to its limit.  Any idea what is causing this?

I attached the VI I am working with...please advise.

Thank you and I look forward to hearing your advice,

CK
Message 1 of 9
CK,

The largest issue in your application is that you are continuously increasing the size of your array as you log more and more data. At some point, as this array uses more and more memory, it will begin to page to disk and your application will slow down drastically. Additionally, you may want to specify a larger "Number of Samples per Channel" input to the DAQmx Timing VI; this will allocate a larger PC memory buffer. When you run the VI, what do you specify as the "samples to read" on the front panel? This value determines how many samples you read per loop iteration, and reading more samples less often is generally much more efficient. What is the value of the "iteration" counter when you get your error? Please let me know if these suggestions do not help.
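To illustrate the "don't grow an array" point in pseudocode terms (a Python sketch, not LabVIEW; `read_chunk` is a made-up stand-in for DAQmx Read, not a real API): hand each chunk straight to the file instead of concatenating every read into one ever-growing in-memory array.

```python
import io

def log_chunks(read_chunk, n_iterations, f):
    """Write each acquired chunk immediately; memory use stays constant
    no matter how long the acquisition runs."""
    total = 0
    for _ in range(n_iterations):
        chunk = read_chunk()   # stands in for one DAQmx Read call
        f.write(chunk)         # stream to disk, don't accumulate in memory
        total += len(chunk)
    return total

# Demo with fake 3333-sample chunks of bytes:
buf = io.BytesIO()
n = log_chunks(lambda: b"\x00" * 3333, 10, buf)
```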

Regards,
Ryan Verret
Product Marketing Engineer
Signal Generators
National Instruments
Message 2 of 9
In addition to Ryan's comments:
  1. You don't initialize the timestamp array, meaning it starts out way oversized the second time you run the program.
  2. DAQmx Clear Task is hidden.  There are many backwards wires and "bird's nests".  Check the LabVIEW style guide.
  3. After you generate the filename, you branch it for writing the header and the data.  Since there is no data dependency, you have no guarantee that the header gets written before the data.  There is a reason the file I/O functions have a duplicate path output: it's to ensure sequencing.
  4. You should write the data using low-level file I/O.  Currently the "Write Characters to File" in the FOR loop opens the file, writes a line, and closes the file over and over.  It is sufficient to open the file once, write everything, then close it.

I would recommend writing the header at the beginning, then streaming the data to disk in the main while loop. There is no reason to build up all that data in memory (1/sec for 48 hours!). If you stream to disk, the already acquired data is safe even if the computer crashes or there is a power failure; in your version you would lose it all. You could even add some code to browse (scroll) through the acquired disk data as desired, even during the run.
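That layout, sketched in Python (illustrative only; in LabVIEW the equivalent is low-level Open/Write/Close File with the refnum chained through, which is exactly what enforces the header-before-data ordering):

```python
import os, tempfile

# Open the file once, write the header exactly once, then append one row
# per loop iteration, and close at the end. The filename and column names
# are placeholders for illustration.
path = os.path.join(tempfile.mkdtemp(), "sensor.txt")
with open(path, "w") as f:
    f.write("time_s\tvoltage_V\n")               # header, written first
    for i in range(5):                            # stands in for the main while loop
        f.write(f"{i * 0.1:.1f}\t{0.0:.3f}\n")    # one acquired row per iteration

with open(path) as f:
    lines = f.read().splitlines()                 # header + 5 data rows
```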

Message 3 of 9
Thank you, Ryan and Christian, for your generous feedback.  Point well taken for those comments I understand (though I still don't know what "bird's nest" means).  Based on the feedback I received, I wrote a different VI using the producer/consumer design pattern, and the buffer overrun problem seems to have gone away.  I tested it at 33.33 kHz with 5000 samples as well as 333.33 kHz with 100K samples, and it ran well without crashing.

However, though I seem to be able to retain the data points in either binary or text format, I am drawing a blank on how to add the timestamp.

The best case scenario is to record the timestamp when the data point is read and write it to the file.
The worst case scenario is to record the timestamp when the data point is written to the file.

So, since I am filtering out the points I don't need (i.e., not within the max and min limits), how do I attach the timestamp data to my data?

Any advice?
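One possible shape for this, sketched in Python rather than LabVIEW (`t0`, `dt`, the sample values, and the filter limits are all placeholders): tag every sample with its index before filtering, so each surviving point can carry the time it was read even though its neighbours were discarded.

```python
t0 = 0.0            # acquisition start time in seconds (placeholder)
dt = 1 / 333_000    # sample period at 333 kHz

data = [0.2, 5.1, 0.4, -3.0, 0.9]   # fake samples
lo, hi = 0.0, 1.0                    # keep only points within [lo, hi]

# Pair each value with its read time BEFORE filtering, so the timestamp
# survives even when neighbouring points are dropped.
kept = [(t0 + i * dt, y) for i, y in enumerate(data) if lo <= y <= hi]
```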

Message 4 of 9
To clarify what I want to accomplish...

A regular timestamp keeps track of the date/time down to the 2nd decimal place of seconds.

What I'd like to keep track of is the time when the signal is read, AND, since I am reading the data at 333 kHz, it'd be nice to have the timestamp down to the microsecond range (or approximately so).

So what I am thinking of doing is creating a 2D array where the data resides in one column, the elapsed time index*dt is in another column (passing the index to the next loop iteration so it continues counting forward), and one more column holds the timestamp data.

Then wire the 2D array to the enqueue function.

The questions are: 1) how do I convert timestamp data to a number, and 2) how do I pass a value out of a loop and back to the beginning of the loop continuously?
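A sketch of both pieces in Python terms (not LabVIEW): a timestamp converted "to number" is just seconds since an epoch (what LabVIEW's To Double Precision Float does to a timestamp), and a value passed back to the beginning of the loop is what a shift register does; here a plain variable carried across iterations plays that role.

```python
import time

t0 = time.time()    # start-of-acquisition timestamp as a plain float (seconds)
dt = 1 / 333_000    # sample period at 333 kHz

index = 0           # the "shift register": persists across loop iterations
rows = []
for _ in range(3):                       # stands in for the consumer loop
    chunk = [0.1, 0.2]                   # fake 2-sample read
    for y in chunk:
        rows.append((t0 + index * dt, y))  # microsecond-scale time per point
        index += 1                         # fed back into the next iteration
```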
Message 5 of 9
Hello there,

OK.  There are fewer 200279 errors (if there isn't a lot of background activity) when the sampling frequency is 333 kHz with 100K samples.  We are able to have both the dt data and the y data entered into an array; the data is then saved to a binary file.

Any comments?  Is there a way I can shrink the file further?  For 33 seconds of data, the txt file size is ~100 MB.

Do you have any pointers for graphing the binary file after the data acquisition?  Any pointers for post processing would be much appreciated.

CK
Message 6 of 9


@uclabme wrote:
We are able to have both the dt data and y data entered into an array.  the data is then saved to a binary file.

Any comments?  Is there a way I can shrink the file further?  For a 33-sec data...the txt file size is ~100MB.

Do you have any pointers for graphing the binary file after the data acquisition?  Any pointers for post processing would be much appreciated.

First of all, something is blowing out the margins of your upper loop, and to me it looks like maybe a LabVIEW bug or corruption (?). For some reason, the label of the waveform chart is way off to the left and seemingly cannot be moved independently of its terminal. Deleting the chart and creating a new one fixes it. (That's why your upper loop contains so much hot air: it is autosizing when you move things around, and e.g. trying to move the label pushes the right margin way out.)

Secondly, your data is formatted and saved as an ASCII text file, not binary. A binary file would be much more efficient! It would eliminate all the expensive formatting operations and would make the file MUCH smaller.

It seems foolish to artificially create all the x data and save it to file, since it is so redundant. Since the intermediate timestamps can be calculated at any time, they don't need to be saved; you can regenerate them when reading the file.
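Both points can be made concrete with a quick Python sketch (illustrative numbers, not measurements from the actual VI): binary storage of raw doubles is much smaller than formatted text, and the time axis can be rebuilt from dt on read instead of being stored at all.

```python
import struct

samples = [0.001 * i for i in range(1000)]   # fake y data
dt = 1 / 333_000                              # sample period

# Text: every point costs ~18 bytes of formatted timestamp + value.
text_bytes = len("".join(f"{i * dt:.6f}\t{y:.6f}\n"
                         for i, y in enumerate(samples)))

# Binary: store only the y values, 8 bytes per double, no formatting.
packed = struct.pack(f"<{len(samples)}d", *samples)
bin_bytes = len(packed)

# Reading back: unpack the y values and regenerate the time axis on the fly.
ys = struct.unpack(f"<{len(samples)}d", packed)
ts = [i * dt for i in range(len(ys))]
```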

Obviously, you are relatively new to LabVIEW, and there are a lot of odd constructs that can be simplified, for example:

  1. After reading the data, you extract the first point from the array. Taking a length=1 "array subset" followed by a dynamic data conversion is not the right way. The entire thing can be replaced with a simple "index array". 🙂
  2. The lower FOR loop is autoindexing; you don't need to wire the loop count. Remember, you can resize the "index array" node to get both slices at once.
  3. For code readability, it is important to keep things organized. Align the DAQ thread and the file I/O thread horizontally. Avoid all these wire kinks, backward wires and hidden wires.

Sorry, I don't have any DAQ installed, so I cannot test.

Message 7 of 9
CK,

In addition to Christian's comments, a very fast way to write to disk with small file size is the pair of DAQmx shipping examples, "Cont Acq&Graph Voltage-To File(Binary).vi" and "Graph Acquired Binary Data.vi." The first creates a file with a header which contains sampling rate, scaling coefficients, calibration coefficients, etc., and then logs the data to file in a binary format. The second example then reads this header information and applies the coefficients to the binary data to produce actual samples.
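The general shape of that file layout can be sketched with Python's struct module (the field layout below is invented for illustration; the real shipping examples define their own header format):

```python
import io
import struct

# Header: sampling rate plus scaling coefficients, written once at the front.
rate, offset, gain = 33330.0, 0.0, 1.0
raw = [100, 200, 300]                # unscaled 16-bit samples (fake data)

f = io.BytesIO()                     # stands in for the file on disk
f.write(struct.pack("<3d", rate, offset, gain))   # fixed-size header first
f.write(struct.pack(f"<{len(raw)}h", *raw))       # then the binary samples

# Reading back: parse the header, then apply the coefficients to the data.
f.seek(0)
r_rate, r_off, r_gain = struct.unpack("<3d", f.read(24))
counts = struct.unpack(f"<{len(raw)}h", f.read())
volts = [r_off + r_gain * c for c in counts]      # scaled samples
```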

Hope this helps,
Ryan Verret
Product Marketing Engineer
Signal Generators
National Instruments
Message 8 of 9
Thank you, Ryan.  Yes, the current file I am working with is a modification of the two examples you gave me.  The reason I didn't use a binary file is that I couldn't figure out how to set the different coefficients, and as a result the amplitudes of the waveform were all screwy (well, based on the Graph Acquired Binary Data example anyway).  If I can find out how the binary data is put back together, I'll use that format for sure.

And yes, Christian, I am definitely new to this.  The most frustrating part for me is that I know what I am doing is routine and very doable within minutes (IF I knew what the heck I was doing), but I can't find the examples to adapt. 🙂  Regarding your observation: constructing an index array using a FOR loop seems redundant and slow, BUT it is necessary because in the bottom loop we are filtering out points.  If I don't keep the original indices, the waveform will assume the data points are contiguous, which would be erroneous.  If you have some other ideas for obtaining the original indexes into the array, that would be super.
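One alternative to a separate index-building FOR loop (a Python sketch of the idea; in LabVIEW terms this corresponds to keeping index/value pairs and plotting them on an XY graph so gaps stay visible): capture the original index at the moment of filtering.

```python
data = [1.0, 9.0, 2.0, 8.0, 3.0]    # fake samples
lo, hi = 0.0, 5.0                    # keep only points within [lo, hi]

# Filter and index in one pass; no separate index array is ever built.
pairs = [(i, y) for i, y in enumerate(data) if lo <= y <= hi]
xs = [i for i, _ in pairs]           # original sample positions survive
ys = [y for _, y in pairs]
```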

Thank you both for your continued contributions.  (I actually check the thread for your feedback on an hourly basis. 😃)
Message 9 of 9