
Reference Example for Streaming Data from Dual PCIe 6537s to Disk

Solved!

Hi everyone,

 

I have successfully modified the reference example for streaming ( http://zone.ni.com/devzone/cda/epd/p/id/5315 ) for a SINGLE PCIe-6537 card and am able to write at a 20 MHz clock rate (for initial testing purposes) without any errors.

 

However, I am not sure how to view the *.bin file to check whether I have written the right data. I saw an example VI called "Read Binary File", but it says "File Already Open" and gives a GPIB controller error (very weird!). Should I try using MATLAB? I doubt it will help, since it seems the file write VI is not closing the file properly.
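For a quick sanity check outside LabVIEW, I may also try dumping the first few samples with a short script. Here is a Python sketch of what I have in mind (assuming the file is nothing but raw little-endian U16 samples, which matches my acquisition; the file name and counts are placeholders):

```python
import numpy as np

# Assumption: the file contains only raw little-endian unsigned 16-bit
# samples; adjust dtype and path to match the actual write format.
samples = np.fromfile("capture.bin", dtype="<u2", count=32)
print(samples)                                  # first 32 samples as integers
print([f"{int(s):016b}" for s in samples[:8]])  # first 8 as bit patterns
```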

 

I attached the updated SINGLE PCIe-6537 source code and the file error with this post. There are supporting files as well for the Windows file write VIs.

 

I am kind of new to streaming and high-speed acquisition. Please let me know if I am missing something. Thanks in advance.

 

Cheers,

YK

Message 1 of 7

Hi again,

 

I think I know what the problem is. The first time I ran the VI, I stopped it with "Abort Execution" rather than the Stop button. The binary file had already been created by then and was left incomplete/corrupted, and since I didn't delete that previously created file before trying to read it, Read Binary File kept saying "File Already Open". So the trick is to stop a running VI properly, NOT with Abort Execution.

 

So that problem is solved, but the new problem is that the Read Binary File VI cannot read a large file. I tried a 150 MB file and it said "not enough memory to complete the operation". I have about 3 GB of RAM, so it should be able to open it...

 

Any pointers will be helpful. Thanks!

 

Cheers,

YK

Message 2 of 7

Hi YK,

 

While searching for information, I came across a couple of resources that may help you resolve this issue. Look at "Why do I get a 'Memory is Full' error in LabVIEW?", which also explains how to use the Profile and Performance window to determine which parts of LabVIEW are using memory. Also check out extending virtual memory in Windows.

Kyle A.
National Instruments
Senior Applications Engineer
Message 3 of 7

One other option you can try is to break the file being read into smaller chunks, so that it is more manageable in memory. Part of the theory behind this is that the file data needs a contiguous block of RAM, so the smaller the chunk you read, the more likely an open block of memory of that size is available. Also, if you are using multiple channels, looking into data compression options may be beneficial.
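In text form, the chunked-read idea looks roughly like this. This is a Python sketch rather than a LabVIEW VI, and the file name, chunk size, and "analysis" step are placeholders:

```python
import numpy as np

CHUNK_SAMPLES = 1_000_000  # ~2 MB of U16 data per read; tune as needed

with open("capture.bin", "rb") as f:
    total = 0
    while True:
        # Each read needs only a small contiguous buffer, never the whole file.
        chunk = np.fromfile(f, dtype="<u2", count=CHUNK_SAMPLES)
        if chunk.size == 0:
            break  # end of file
        total += chunk.size  # stand-in for whatever per-chunk analysis you need
    print(f"read {total} samples in chunks")
```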

Kyle A.
National Instruments
Senior Applications Engineer
Message 4 of 7

Hi Kyle:

 

Yes, I have been experimenting with this a little. I have tried a number of things you and the tutorial suggested:

 

1. Increased virtual memory to the maximum of 16 GB. It didn't help.

2. Since for now I am using raw 1D U16 data, I changed the Win32 file write from U32 to U16, which halved the file size. This helped!

 

This is progress, but it won't solve all the problems. I will be streaming data to a hard drive (I don't have a RAID yet but will be getting one soon), so we are talking about gigabytes of data to analyze. I was also thinking about your last suggestion: breaking it up into smaller files/chunks. Do you know a quick way to do it, or an example VI made for it? I was thinking I could run a "for" loop that reads the file up to a certain number of samples and copies that to a new, smaller file. However, that way I would still have to read the whole big file no matter what.

 

I also saw something like NI DIAdem. I am not sure how helpful that might be, and we would have to purchase it.

You also suggested data compression; I can try that, and yes, we are using multi-channel data acquisition. My concern is: would data compression slow down the system? Just so you know, I am using a PCIe-6537.

 

I did a little searching and found "Managing Large Data Sets" at http://zone.ni.com/devzone/cda/tut/p/id/3625 and "LabVIEW Windows Routines for Data Compression" at http://zone.ni.com/devzone/cda/epd/p/id/3662. I haven't played with them yet, but I will be looking at them shortly.

 

Breaking the file up into chunks might be a good approach... I'm not sure how yet; I have to think about it. If you have a suggestion/link, please let me know.

 

Thanks a bunch!

 

Regards,

YK

Message 5 of 7
Solution
Accepted by topic author ykhan

I have a few links you can go to for more streaming resources here and here.

 

Your original question was about the Read from Binary File VI failing because of memory. You can read smaller chunks of the file and still keep the file intact: there is a count input that indicates how many data elements the function returns. If you want to save multiple files, you can use any looping structure to make it work; just set the conditions for how many times it runs and when it needs to stop. I don't know of a quick example that does this. You will still read all the data, but since the reads are in smaller chunks, it helps the issue because the data does not need a large contiguous block of memory to reside in during the read.
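As a rough illustration of that looping approach (a Python sketch standing in for the LabVIEW loop; the file names and chunk size are arbitrary):

```python
CHUNK_BYTES = 50 * 1024 * 1024  # 50 MB per output file (arbitrary choice)

# Split one large capture into numbered smaller files by reading a bounded
# number of bytes per iteration, mirroring the "count" input of the
# Read from Binary File VI.
with open("capture.bin", "rb") as src:
    part = 0
    while True:
        data = src.read(CHUNK_BYTES)
        if not data:
            break  # nothing left to read
        with open(f"capture_part{part:03d}.bin", "wb") as dst:
            dst.write(data)
        part += 1
```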

 

Data compression would be more of a post-processing step than a streaming step, and it may slow down your stream if you were to compress and write immediately after each read in the same loop. In a producer/consumer architecture you could read in data and compress it before saving to a file, which would not affect your read performance; it would format the data and create a file that is smaller and takes less space.
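Sketched in Python, with a bounded queue and zlib standing in for the LabVIEW queue and compression VIs (all names and sizes here are illustrative):

```python
import queue
import threading
import zlib

q = queue.Queue(maxsize=64)  # bounded queue between the two loops
SENTINEL = None              # tells the consumer to shut down

def producer(blocks):
    # Stand-in for the DAQ read loop: just enqueue each acquired block.
    for block in blocks:
        q.put(block)
    q.put(SENTINEL)

def consumer(path):
    # Compress each block before writing, so compression cost never
    # stalls the acquisition loop.
    with open(path, "wb") as f:
        while (block := q.get()) is not SENTINEL:
            f.write(zlib.compress(block))

data = [bytes(1024) for _ in range(100)]  # dummy acquisition data
t_prod = threading.Thread(target=producer, args=(data,))
t_cons = threading.Thread(target=consumer, args=("capture.zbin",))
t_prod.start(); t_cons.start()
t_prod.join(); t_cons.join()
```

A real file format would also record each compressed block's length so a reader can split the blocks apart again.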

 

 

Kyle A.
National Instruments
Senior Applications Engineer
Message 6 of 7

Hi Kyle:

 

I am using the producer/consumer architecture (i.e., two separate threads/loops, one for reading and one for writing, connected by a queue). So, theoretically, in the consumer loop I can compress the data before I write it to a binary file, right? I will definitely try that. I am also going to try breaking the file into smaller chunks using the count input in a loop. Thanks a lot for pointing me in the right direction. I really appreciate it. :)

 

-YK 

Message 7 of 7