LabVIEW


Saving AVI w/ High Camera FPS and Resolution

I'm trying to capture 60 frames per second from a GigE HD camera at a resolution of 1400x1024. I'm using a queue to hold the image data from the continuous grab, then writing the frames to the AVI separately. Saving an uncompressed AVI results in a 1 GB file after 13 seconds and drops about half the frames while recording, which I assume is due to the AVI write time. Changing the compression filter to "Microsoft Video 1" lets me significantly reduce the file size, but writing the AVI still drops about half the frames (the average write time with the Microsoft Video 1 compressor is about the same as for an uncompressed AVI). The only way I was able to get around this was to reduce the IMAQ image I'm saving to the AVI to half size (700x512).

 

Saving the HD camera data at half resolution makes the AVI pixelated and kind of defeats the purpose of having an HD camera in the first place.  My project requires the 60 fps, so I can't change that.  Are there any other options for recording the data at a better resolution w/o dropping frames?  Could a different compression filter/codec reduce the write time and allow the full-size data to be recorded w/o dropping frames?  Would writing the camera frames to binary data instead of AVI be an option?

 

Thank you, 

Stephen McClanahan

Message 1 of 19

Hey Stephen,

 

Rather than worrying about image resolution, file size, etc., I would just use a producer/consumer VI architecture. Here is a way that you can do that.

 

Basically, you will be acquiring at full speed and writing at a slower speed, but since it uses a queue, you shouldn't lose frames.
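If it helps to see the idea outside of LabVIEW, here's a minimal text sketch of the two loops (Python stand-in; camera.grab() and writer.write() are just placeholders for the IMAQdx grab and the AVI write, not actual NI calls):

import queue
import threading

frame_queue = queue.Queue()                  # queue shared by the two loops

def producer(camera, num_frames):
    # Acquisition loop: grab as fast as the camera delivers and enqueue each frame.
    for _ in range(num_frames):
        frame_queue.put(camera.grab())       # placeholder for the IMAQdx grab
    frame_queue.put(None)                    # sentinel: acquisition is done

def consumer(writer):
    # Write loop: dequeue and write at whatever rate the disk allows.
    while True:
        frame = frame_queue.get()
        if frame is None:
            break
        writer.write(frame)                  # placeholder for the AVI write

# threading.Thread(target=producer, args=(camera, 780)).start()
# threading.Thread(target=consumer, args=(avi_writer,)).start()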

 

Hope this helps!

 

Chris

Chris Van Horn
Applications Engineer
Message 2 of 19

Thank you Chris for the suggestion.  I did implement the producer/consumer (aka client/server) architecture, but since I can't write the AVI as fast as the data is coming in, the queue quickly fills up and I end up dropping frames anyway (LabVIEW can't allocate enough memory to hold all the frames to infinity and beyond while the consumer loop catches up).  Writing to AVI at this resolution had a write speed that was maxing out the hard drive's specs.  Using different codecs reduced the file size, but the average write time was relatively unchanged, which I can only attribute to shifting the bottleneck from writing to disk to image processing.  The only way I have been able to reduce the write time was by reducing the quality of the image, specifically by resizing it.  I might be running into that glass ceiling of either having high quality but lost frames, or reduced quality and all the frames.  A professor of mine called this "the conservation of misery" because something always has to give.  Any other ideas/suggestions for reducing the write time of the camera frames without sacrificing image quality?

 

Thanks,

Stephen McClanahan

Message 3 of 19

It's been a while since I messed with this, and I don't have the image VIs in my office, but I think you don't want to write to AVI. When I was doing this, someone at NI gave me an unbuffered WIN32 File I/O protocol that allowed me to save the image data to a binary file. I think it is simpler to do this now in 8.6 or 2009. I was then able to run a VI after the fact that converted the binary to AVI, which worked out well because I could chop the file down right there in LV before saving as AVI.
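In rough text form, that convert-after-the-fact step amounts to something like this (Python sketch; the frame size, the trim range, and the OpenCV writer are illustrative assumptions, not the actual VI):

import numpy as np
import cv2

WIDTH, HEIGHT, FPS = 1400, 1024, 60          # assumed geometry; adjust to the capture
FRAME_BYTES = WIDTH * HEIGHT                 # assumes 8-bit mono; use x3 for RGB

writer = cv2.VideoWriter("trimmed.avi", cv2.VideoWriter_fourcc(*"MJPG"),
                         FPS, (WIDTH, HEIGHT))

with open("capture.bin", "rb") as f:
    f.seek(100 * FRAME_BYTES)                # "chop the file down": skip the first 100 frames
    for _ in range(500):                     # keep only the next 500 frames
        raw = f.read(FRAME_BYTES)
        if len(raw) < FRAME_BYTES:
            break
        frame = np.frombuffer(raw, np.uint8).reshape(HEIGHT, WIDTH)
        writer.write(cv2.cvtColor(frame, cv2.COLOR_GRAY2BGR))
writer.release()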

 

I was able to get pretty close to the limit of GigE by doing it this way: 400x300 at 200 fps with two cameras. Something else I ran across, and I'm a little fuzzy on this so bear with me, was that images were getting overwritten inside of the queue. That is, I only set up so many buffers (empty images) before the run, and if these sat in the Q unprocessed, they would actually get overwritten.

 

 

Edit: If my math is right, which would be unusual, you are exceeding GigE? 1400x1024x(3 bytes/pixel)x60 fps = 258 MB/s. GigE = 1000 Mb/s = 125 MB/s. You are trying to push 258 MB/s and the link is only capable of about 125. Again, my math could be off, maybe somebody else can weigh in.
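Spelling the same arithmetic out (the bytes/pixel figure is the assumption in question later in the thread):

width, height, fps = 1400, 1024, 60
bytes_per_pixel = 3                          # 3 for RGB; 1 for 8-bit mono
rate_mb_s = width * height * bytes_per_pixel * fps / 1e6
print(f"{rate_mb_s:.0f} MB/s vs. ~125 MB/s usable on Gigabit Ethernet")
# 3 bytes/pixel -> ~258 MB/s; 1 byte/pixel (8-bit mono) -> ~86 MB/s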

Message Edited by deskpilot on 05-03-2010 02:07 PM
Message 4 of 19

I have a very similar issue, except I am using an EVS-1464 with very limited RAM and drive space. I am trying to write to a CompactFlash card and can't seem to do better than 90 fps at 640x480 resolution before the queue starts to fill. I am flattening the image to a string in JPEG format and then saving it to a binary file, appending the file for each image instead of opening a new file each time. The actual save rate is only about 0.72 MB per second.
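In rough text form, the append-to-one-file pattern I'm using looks something like this (Python sketch; the camera call and the JPEG encode are placeholders for the RT grab and the flatten-to-string step):

import struct
import cv2

def record_jpeg_stream(camera, path, num_frames):
    # Open once in append mode and write one length-prefixed JPEG blob per frame,
    # rather than opening a new file for every image.
    with open(path, "ab") as f:
        for _ in range(num_frames):
            frame = camera.grab()                  # placeholder for the RT acquisition call
            ok, jpeg = cv2.imencode(".jpg", frame) # stand-in for flattening the image to a JPEG string
            if not ok:
                continue
            f.write(struct.pack("<I", len(jpeg)))  # length prefix so frames can be split apart later
            f.write(jpeg.tobytes())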

 

I assume the Win32 write file I/O stuff won't work on a real time operating system!

 

Any tricks that anyone knows for saving data to disk faster would be appreciated, I don't care what the format is, I can fix it later when I go to process the files.

Message 5 of 19

The important thing that is not obvious is that you DO NOT want to flatten or compress the image before writing it to file. That takes time. You can write a big file faster than you can compress and then write a small file. Just look at a TDMS file. This is true in Windows; I couldn't attest to it in RT, but I would guess it still holds true. Write it to a binary file just as it comes into LV, and then wait until after the test to convert to JPEG, AVI, whatever. Also, make sure you are not viewing the image anywhere; that slows the processor way down.

 

During setup, I view mine in real time, but only display every 5th image; then when I hit record to start saving to file, I turn off the display altogether. This way I don't have to change settings after setup and risk changing something by accident and not knowing about it.

 

Finally, the WIN32 protocol was just the way of not buffering the image data in the processor pre-8.6. In 8.6 and 2009, you just use Write to Binary File.vi and set buffering to false when you open/create the file. Again, I'm fairly unfamiliar with RT, but I'm guessing this is still available? Part of the stuff NI gave me with the WIN32 Unbuffered Write to Binary File.vi was an example that appended a header to the file. This allowed the image to be pulled back out easily. I've used it as-is, so I'm not sure how important it is, but it stored things like number of frames, image size... It was only appended after all of the images were written to file.
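Sketched out in text, the pattern is roughly this (Python stand-in; the trailer fields and the buffering flag are illustrative, not the exact layout of the NI example):

import struct

def record_raw(camera, path, width, height, num_frames):
    # buffering=0 disables Python's own buffering; the LabVIEW equivalent is the
    # "disable buffering" input on Open/Create/Replace File (or the old WIN32 VIs).
    frames_written = 0
    with open(path, "wb", buffering=0) as f:
        for _ in range(num_frames):
            f.write(camera.grab())           # raw pixel bytes, no compression, no display
            frames_written += 1
        # Trailer appended only after all images are written, so a reader knows
        # how to slice the file back into frames (fields here are illustrative).
        f.write(struct.pack("<III", frames_written, width, height))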

 

Also, something I've thought of but not implemented would be multiple files. Not sure if this would help or not. With the WIN32 protocol I was limited to one file because it would error out if I had two instances of it in memory. With 8.6, you should be able to use multiple instances of Write to Binary File and so could spread the images over two or more files. Not sure if this would help write speed or not. In my case, I was using two cameras and so had to interleave the images into the same file and then separate them back out afterwards. Easy to do, because I knew the order I used to put them into the file.
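Separating them back out afterwards is easy if the frame size is fixed and the order strictly alternates; roughly (Python sketch, with both of those as assumptions about the file layout):

FRAME_BYTES = 400 * 300                      # assumed 8-bit mono frames from the 400x300 capture

def deinterleave(path_in, path_a, path_b):
    # Frames were written alternately from camera A and camera B, so even-numbered
    # frames go to one output file and odd-numbered frames to the other.
    with open(path_in, "rb") as src, open(path_a, "wb") as a, open(path_b, "wb") as b:
        index = 0
        while True:
            frame = src.read(FRAME_BYTES)
            if len(frame) < FRAME_BYTES:
                break
            (a if index % 2 == 0 else b).write(frame)
            index += 1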

 

Finally, 640x480 at 90 fps is 83 MB/s (at 3 bytes/pixel). It doesn't matter how small it is after you flatten it; that is still the amount of data coming in. If you are flattening it, that means you are bringing it into the processor at a rate of 83 MB/s, which is not all that far from the limit of PCI/PXI/PCIe x1. You would have a much better chance of taking it straight from whatever frame-grabber-type thing you are using to file, which is only possible if you use the unbuffered open on a binary file and do not alter it or view it.

 

Now, I'll admit, there is a little bit of interpolation in what I've stated above, because NI support doesn't seem to know much about high-speed image capture (or high-speed data transfer, for that matter; they still say things that don't make sense sometimes). There are only a couple of guys in the whole company who really seem to have their finger on the pulse, and one of them was out of the country most of the time I was setting up my system.

Message 6 of 19

While I'm at it...

 

"As opposed to worrying about image resolution, filesize, etc, I would just use a producer/consumer VI architecture.Here is a way that you can do that."

 

This sort of alludes to another problem with this whole high-speed data transfer thing, and I've brought it up when talking about waveforms. Queues do not assimilate. That means every iteration of the deQ loop is only going to pull one image out. So, on a high-speed continuous system, the Q really isn't doing much good (besides splitting the work up between processors). I've built my own Q that can assimilate waveforms, so the deQ loop can run slower than the enQ loop but still process all of the data in time. But I'm not sure how you can do that with images. I would think there would be a way, because with LV there usually is, but I think NI has to do a little something with the code before that can really be implemented.
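Roughly what an "assimilating" dequeue would look like in text form (Python sketch; in LabVIEW, Flush Queue or a small inner dequeue loop would play the same role):

import queue

def drain(q):
    # Pull everything currently waiting so one consumer iteration handles a
    # whole batch of images instead of a single dequeue per loop.
    batch = []
    while True:
        try:
            batch.append(q.get_nowait())
        except queue.Empty:
            return batch

# In the consumer loop:
# for frame in drain(frame_queue):
#     write_frame(frame)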

 

Message Edited by deskpilot on 05-06-2010 06:49 AM
Message 7 of 19

Hmmm, if I don't flatten the image, I get about 6 fps instead of 90. Processor load isn't a problem (not yet, at least until I have to add three more cameras); the write-to-disk speed is. I tried to turn off buffering for the write-to-binary VI and it just errors out; must be some RT thing. RT seems to have a lot of "things".

 

Perhaps it just has something to do with the fact that I'm trying to write to a CompactFlash card. I can't write to the hard drive on an RT system because the hard drives are barely large enough for the executable file. I'm going to attempt to write to a USB drive next to see if there are any differences.

 

I've tried many different ways of writing to disk, and the bottom line is: the smaller the file, the faster it writes. I did have some success writing the image data to a spreadsheet file, but the data in the file seemed like it would be difficult to post-process. Perhaps I'll try that again later and see if I can make heads or tails of the resulting file.

Message 8 of 19

Buffering in the processor won't necessarily consume processor, but it will slow down acquisition. It looks like USB could be a problem no matter which way you go, since it's limited to about 40 MB/s. I'd be interested in seeing a bandwidth calculation for your system, as I just had a conversation with someone who was calling into question my 3 bytes/pixel number. I found this list of CompactFlash speed ratings; do you know what kind of card you are using?

 

Rating    Speed (MB/s)
6x        0.9
32x       4.8
40x       6.0
66x       10.0
100x      15.0
133x      20.0
150x      22.5
200x      30.0
266x      40.0
280x      42.0
300x      45.0
333x      50.0
400x      60.0
433x      65.0
600x      90.0
667x      100.0
Message 9 of 19

The CF card I have is a SanDisk Extreme IV; from what I read about it, to get the good read and write speeds you have to use its own reader. I'm just using the reader on the EVS system.

 

Did some tests with different media; this is what I can get in fps...

 

Hard disk - 160, USB - 120, CF - 90.

 

Disabling buffering only works when writing to a hard disk, not USB or CF. I might look into getting an actual SanDisk reader and see if that makes any difference.

Message 10 of 19