03-04-2012 11:07 PM
In my project I need to log critical data: if a fault occurs, I need to save the data from 10 seconds before and 10 seconds after the fault, then keep collecting data and hold it for the next fault. For this purpose I have created a queue that holds 20,000 1D arrays and starts overwriting after 20 seconds, so I am always holding 20,000 * 100 samples in my queue. Will that create any memory issues? If yes, is there any other way to do it?
03-05-2012 01:03 AM
20,000 * 100 = 2,000,000 samples. If they're 4-byte integers, that's 8 MB of memory. You should be fine. 🙂
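The arithmetic behind this estimate can be sketched out (the 8 MB figure assumes 4-byte I32 samples; the original poster later says the data is DBL, i.e. 8 bytes per sample, which doubles it):

```python
# Rough memory estimate for the described buffer.
elements = 20_000           # queue elements (1D arrays)
samples_per_element = 100   # samples per array
bytes_per_sample = 8        # DBL (float64); use 4 for I32

total_bytes = elements * samples_per_element * bytes_per_sample
print(total_bytes)  # 16000000 -> ~16 MB for DBL, ~8 MB for I32
```

Either way, a buffer in the 8-16 MB range is small on a modern machine.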
/Y
03-05-2012 02:00 AM
What you should look at is preallocating the memory; see the Obtain Queue help,
and then have a look at the Lossy Enqueue Element.
However, from what I understood of your task, you need a ring buffer and want to store the data in case of an event. What will happen if two events arrive within 10 s? Are you streaming 100 kSamples/s to disk?
Maybe a classical ring buffer (a chunk of memory and some pointers) with In Place Element structures is more flexible.
And your buffer needs to be longer, since you have to store 10 s + X s of data while still acquiring new data.
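Since LabVIEW is graphical, here is only a rough Python analogy of the ring-buffer-plus-event-capture idea. A `deque` with `maxlen` behaves like a lossy queue (when full, the oldest element is dropped); the block sizes below are assumptions based on the numbers in the thread (100-sample blocks at 100 kS/s, i.e. 1,000 blocks per second):

```python
from collections import deque

BLOCKS_PER_SECOND = 1000   # assumption: 100-sample blocks at 100 kS/s
PRE_SECONDS = 10
POST_SECONDS = 10

# Holds the most recent 10 s of blocks; appending to a full deque
# silently drops the oldest block, like Lossy Enqueue Element.
ring = deque(maxlen=PRE_SECONDS * BLOCKS_PER_SECOND)

def capture_fault(data_stream):
    """Return the 10 s before the fault plus the 10 s after it."""
    snapshot = list(ring)                            # pre-fault history
    for _ in range(POST_SECONDS * BLOCKS_PER_SECOND):
        block = next(data_stream)                    # keep acquiring post-fault
        snapshot.append(block)
        ring.append(block)                           # ring stays current for the next fault
    return snapshot
```

Note the point made above: while the post-fault 10 s are being collected, new data keeps arriving, so the real buffer has to be sized for that overlap.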
03-05-2012 02:17 AM - edited 03-05-2012 02:18 AM
Thanks Yamaeda, I am using DBL data here.
Henrik,
Yes, I am using the Lossy Enqueue Element. We don't handle the case of two errors occurring: once one error has occurred my state goes to Fault, and any fault that happens after that is logged but not acted on until I clear the first fault.
Before I proceed I will take a look at the ring buffer you suggested.
Thanks
03-05-2012 02:23 AM
I agree. This amount of data should not be a problem.
When using large queues, I suggest allocating the needed memory up front by filling the queue with dummy data and then flushing it right when the queue is created.
Otherwise the memory must be allocated in multiple steps the first time you fill it, because LabVIEW doesn't know in advance how much memory you will need.
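The fill-then-flush idea can be sketched in Python as a loose analogy (whether this actually keeps memory warm depends on the language's allocator; in LabVIEW the queue retains its allocation after a flush, which is the point of the trick):

```python
import queue

MAX_ELEMENTS = 20_000

# Create the bounded queue, fill it once with dummy data shaped like the
# real payload, then empty it before real use so the first real fill
# doesn't pay for incremental growth.
q = queue.Queue(maxsize=MAX_ELEMENTS)
dummy = [0.0] * 100                     # same shape as a real 100-sample block
for _ in range(MAX_ELEMENTS):
    q.put(dummy)                        # grow the backing storage up front
while not q.empty():
    q.get()                             # flush: queue is empty again
```

In LabVIEW the equivalent would be Obtain Queue with a max size, a loop of Enqueue Element with a dummy array, then Flush Queue.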
How you handle the data once you dequeue it can also affect your program performance.
Here are some tips: Large Data Sets
steve