LabVIEW


Save JPG images faster

Good evening,

 

I have developed a LabVIEW program to take and save images as fast as the camera allows.

The camera runs at 35 Hz and is definitely acquiring images at 35 Hz; however, LabVIEW is not saving them at 35 images per second, it is only managing about 20-22 images per second. Is there a way to help LabVIEW save the images faster (35 ips)?

I am also new to producer/consumer loops and may have messed up the wiring.

 

USB-3 connection, camera: Basler acA2440-35um

 

Thank you for your time and help!

Message 1 of 17

Hi Johnny,

Channels and queues serve very similar functions; you do not want to use both of them.

The image data type is a bit strange: it is actually a reference. For this reason, you cannot simply pass the image to the consumer loop, because the data the image refers to is constantly being overwritten by the acquisition.

 

I happen to have a 5 MP USB camera at my desk that streams at ~35 FPS. If I save images in the same loop as the acquisition, it adds a little jitter, but then I know I am saving every image. I do have an SSD, though, so your results may vary.

 

You can also convert the image to a data array and then pass the data array to the consumer loop. I'll attach an example of that.
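The idea in text form (a Python sketch just for illustration, since G code doesn't paste into a post; all the names here are made up):

```python
import queue
import threading

# Stand-in for an IMAQ image reference: one shared buffer that the
# "driver" keeps overwriting as new frames arrive.
acq_buffer = bytearray(4)
frames = queue.Queue()

def producer():
    for i in range(3):
        acq_buffer[:] = bytes([i] * 4)   # the driver overwrites the buffer
        frames.put(bytes(acq_buffer))    # enqueue a COPY of the pixel data
    frames.put(None)                     # sentinel: acquisition done

def consumer(saved):
    while (frame := frames.get()) is not None:
        saved.append(frame)              # "save" each frame as it arrives

saved = []
t = threading.Thread(target=producer)
t.start()
consumer(saved)
t.join()
print(saved)  # three distinct frames, each as it was at acquisition time
```

If the producer enqueued `acq_buffer` itself instead of a copy, every saved frame would show the last acquired data, which is exactly what happens if you wire the raw image reference into the queue.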



CLA // LabVIEW 2016 // BALUG // Unofficial Forum Rules and Guidelines
Message 2 of 17

@JohnnyDoe771 wrote:

The camera runs at 35 Hz and is definitely taking images at 35 Hz, however, LabVIEW is actually saving images at about 20-22 images per second. Is there a way to help LabVIEW save the images faster (35 ips)? [...]


Your case is the main reason for using a Producer/Consumer loop: the producer is faster than the consumer. If you have enough memory in your computer, the queue acts as a buffer, and the consumer slowly works through it, saving images as fast as it can. If the consumer could save data as fast as the producer acquires it, everything could be combined in a single loop.
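A quick back-of-envelope with the rates from this thread (the frame size is my assumption, taken from the camera's 2440x2048 sensor at 8 bits):

```python
# How fast the backlog grows when the producer outruns the consumer.
# Rates are from this thread; the frame size is an assumption.
produce_fps = 35                    # camera acquisition rate
consume_fps = 20                    # observed save rate
frame_mb = 2440 * 2048 / 1e6        # ~5 MB per 8-bit greyscale frame

backlog_fps = produce_fps - consume_fps   # frames queued per second
backlog_mb_s = backlog_fps * frame_mb     # RAM consumed per second
print(f"{backlog_fps} frames/s backlog -> {backlog_mb_s:.0f} MB/s of RAM")
```

So a lossless buffer only postpones the problem: at roughly 75 MB/s of backlog, 8 GB of free RAM buys a bit under two minutes of acquisition before you hit a memory threshold.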

 

I am not a channel expert, but I am sure there is a lossless channel that you could use that would store N images until you hit a memory threshold.

 

mcduff

Message 3 of 17

@mcduff wrote:

I am not a channel expert, but I am sure there is a lossless channel that you could use that would store N images until you hit a memory threshold.

Well, I've been using Asynchronous Channel Wires for almost four years and enjoy them immensely.  As it turns out, the Channel Wire isn't the problem -- rather it is the number of buffers you have allocated in the Camera.  As Gregory pointed out, an IMAQ "Image" is small and cheap, being essentially a pointer to an area in the IMAQdx driver where the pixels are (temporarily) stored as images are acquired.

 

What gets time-consuming is accessing those Images, as this entails moving a lot of data.  You don't say what the size of your Image is, nor whether it is RGB or 8-bit Grey Scale, but if we go "upper limit" and say it's a 2048 x 2048 RGB image (stored as 4 bytes per pixel), 35 fps works out to just about 600 MB/sec, which is close to USB 3.0 speed (and is probably what sets the upper limit of your Camera).  Incidentally, this is also close to the I/O speed of disks, so no wonder you might be having trouble.
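Spelling out my rough calculation (figures assumed, not measured; IMAQ stores an RGB pixel in a U32, i.e. 4 bytes):

```python
# Rough data-rate check for the "upper limit" case described above.
width, height, bytes_per_pixel, fps = 2048, 2048, 4, 35
rate_mb_s = width * height * bytes_per_pixel * fps / 1e6
print(f"{rate_mb_s:.0f} MB/s")   # close to practical USB 3.0 throughput
```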

 

So how do we optimize this?  If your camera is acquiring at 35 fps, let's try to get those data out and onto the disk at that speed.  We'll take all the help we can get, and try to avoid anything that might slow things down.

 

Setting up the camera:  Don't use IMAQdx Configure Grab, instead use IMAQdx Configure Acquisition (found on the Low-Level palette).  This allows you to specify more than the default 5 buffers for your images.  These Buffers live in the IMAQdx driver -- as the Camera acquires images, it stores them in a circular ring in these buffers.  You need enough to prevent the latest-acquired Images from overwriting an earlier one that hasn't "made it out of the Driver" yet.  Since memory is cheap (and fast), think about "more" (like maybe 20).
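To see why the buffer count matters, here is a toy model (Python, purely illustrative; this is not how IMAQdx is implemented internally):

```python
def frames_lost(n_buffers, n_frames, consumer_period):
    """Toy model of the driver's circular buffer ring. The producer
    writes frame i at time i; the consumer reads frame i at time
    i * consumer_period (a period > 1 means the consumer is slower).
    Frame i is lost if its buffer is reused, at time i + n_buffers,
    before that read happens."""
    lost = 0
    for i in range(n_frames):
        overwrite_time = i + n_buffers
        read_time = i * consumer_period
        if overwrite_time < n_frames and overwrite_time <= read_time:
            lost += 1
    return lost

# Producer at 35 fps, consumer at 20 fps -> period = 35 / 20 = 1.75
print(frames_lost(5, 200, 1.75))    # default 5 buffers: most frames lost
print(frames_lost(100, 200, 1.75))  # a deep ring absorbs this 200-frame run
```

Note the caveat: with a consumer that is permanently slower, extra buffers only delay the overwrites. They guarantee losslessness for a bounded burst, which is why making the Consumer fast enough still matters.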

 

Setting up the Producer Loop:  The Producer Loop is where you "Grab" the Image (which basically means the Camera acquires it and puts it into a Buffer, giving you a pointer to the latest Image buffer, calling it "Image Out").  You don't want to do any processing in this loop -- just export the Buffer/Image using either a Queue or a Stream Channel Wire (the Stream is the lossless point-to-point Channel designed for the Producer/Consumer design).

 

Consumer:  You have two things you might want to do.  One is to "see" the Image, which involves getting those pixels (more than 1 MB for even a 640x480 RGB image) and rendering them on the screen.  The other is to save them to disk.  Here's where Buffers come into their own: if you open, say, an AVI file, there is a little time spent opening the file (maybe 0.1 s, or about 3 frames), a little time doing whatever processing the Motion JPEG codec involves (maybe a frame or two), and then the write to disk (Size Matters!!).

 

I recently had a Project that involved using video (30 fps, 640x480 RGB, as I recall) from multiple cameras, saving AVI "snippets" of 3-6 seconds whenever "something interesting" was happening.  As it turned out, the file I/O wasn't the hold-up; the bottleneck turned out to be the TCP/IP traffic of all those pixels coming in from all those cameras (at 100Base-T, we could handle the traffic of about 10 cameras).

 

Because an IMAQdx "Image" is really a pointer to a buffer, there is virtually no time lost transferring the "Image" from Producer to Consumer.  But when you want to "Consume" the Image (by either displaying or saving it), you need to use the Image Pointer to move those Pixels from within IMAQdx to your Display, or to your disk.  So give yourself enough room to store those Images in the Driver by allocating all of the Buffers you think you'll need (more than the default 5 that Configure Grab gives you).

 

Move code out of the Producer Loop, make it "lean and mean", doing only the Grab and the "Image Transfer" (Queue or Stream Channel is fine; remember that you are not moving data here, only pointers).  Consider writing a Consumer that does only one thing, such as "Write to Disk" or "Display on Screen", just to get a sense of whether you can function at 35 fps.

 

Keep us posted on your progress, and come back if you have more questions.

 

Bob Schor 

Message 4 of 17

@Bob_Schor wrote:

@mcduff wrote:

I am not a channel expert, but I am sure there is a lossless channel that you could use that would store N images until you hit a memory threshold.

So give yourself enough room to store those Images in the Driver by allocating all of the Buffers you think you'll need (more than the default 5 that Configure Grab gives you).


Do what @Bob_Schor recommended; I've never used IMAQdx before and have no idea how it stores data.

 

mcduff

Message 5 of 17

@Gregory wrote:

Hi Johnny,

... I do have an SSD though, so your results may vary.

 

...


I am betting on a disk I/O bottleneck.

 

If you are stuck with a normal (spinning) drive, then pre-writing files of the correct size ahead of time and then overwriting those files will eliminate the overhead of allocating disk space and creating the file, which often incurs seek latency that slows down the transfer.
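In text form the idea looks like this (a Python sketch with placeholder names and sizes; in LabVIEW, Set File Size on the Advanced File Functions palette serves the same purpose):

```python
import os
import tempfile

# Placeholder sizes: ~715 KB per saved frame (from this thread), 100 frames.
frame_size = 715 * 1024
n_frames = 100
path = os.path.join(tempfile.gettempdir(), "frames.bin")

# Before acquisition: create the file at its full final size, so the
# filesystem does its allocation outside the timing-critical loop.
with open(path, "wb") as f:
    f.truncate(frame_size * n_frames)

# During acquisition: seek to the frame's slot and overwrite in place.
with open(path, "r+b") as f:
    f.seek(3 * frame_size)              # slot for frame index 3
    f.write(b"\x80" * frame_size)       # placeholder pixel data

print(os.path.getsize(path))            # size was fixed at allocation time
```

One caveat: on some filesystems, extending with truncate creates a sparse file, so blocks may still be allocated lazily; writing real data through the whole file once ahead of time is the surer way to force allocation.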

 

Another check is to do the math: figure out how big the files are, work out the resulting transfer rate, and compare that with the disk spec.

 

Ben

Message 6 of 17

@Bob_Schor wrote:

What gets time-consuming is accessing those Images, as this entails moving a lot of data. ... 35 fps (by my rough calculations) is just about 600 MB/sec, which is close to USB 3.0 speed ...


I am a total noob at imaging applications, but is that imaging rate for bitmaps or for JPEGs? If he doesn't care about loss, he might be able to compress the raw images to JPEGs and get some massive gains. I don't know how long it takes to compress a JPEG, but I'd guess it is at least worth investigating. PNGs may be a good way to go as well.
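The trade-off is encode time versus bytes written. As a rough stand-in (zlib here, since it is in the standard library; real JPEG encoding would go through an imaging library), a smooth greyscale frame compresses heavily, so disk traffic drops even though CPU time goes up:

```python
import zlib

# A synthetic 8-bit greyscale frame with smooth structure (a gradient),
# standing in for a real camera image; zlib stands in for a real codec.
width, height = 640, 480
frame = bytes((x + y) % 256 for y in range(height) for x in range(width))

compressed = zlib.compress(frame, level=1)   # fast setting, favoring speed
ratio = len(frame) / len(compressed)
print(f"{len(frame)} -> {len(compressed)} bytes, {ratio:.0f}x smaller")
```

Real camera images with noise won't compress this well, but the direction of the trade is the same: fewer bytes to disk, more CPU per frame.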

Message 7 of 17

Thank you, that was a heck of a lot of information, and I moved the code around a little. I am working with 8-bit greyscale, 2440x2048, taking up 715 KB when saved. I am also leaning out the producer loop. How should I change the channel writer and the queue functions to be more efficient? The producer loop is operating at the camera's 35 Hz, but the consumer loop is only saving at 20 frames per second.
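For reference, the data rates these numbers imply (a rough check, not a measurement):

```python
# Data rates implied by the numbers above (8-bit 2440x2048 frames at 35 fps,
# ~715 KB per saved file).
raw_mb_s = 2440 * 2048 * 35 / 1e6      # raw 8-bit pixel stream off the camera
saved_mb_s = 715 * 1024 * 35 / 1e6     # what actually reaches the disk
print(f"raw: {raw_mb_s:.0f} MB/s, saved: {saved_mb_s:.0f} MB/s")
```

The saved rate is modest even for a spinning disk, which makes me suspect per-frame overhead (encoding and opening files) rather than raw disk bandwidth.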

 

I am looking into getting a new camera that operates at 150 fps and an SSD to help with data as well.

 

I would like to hear more about your project. Ultimately, I want the program to ONLY take pictures while a particle is passing in front of the camera, for maybe 5 seconds, then stop.

I'd really like the images to save as JPGs.

 

Thank you again for your help.

Message 8 of 17

Hi Gregory,

 

I changed the file directory to one on my computer and ran your program. It reports acquiring at 33.42 ips but saving at only 11.606 ips. The JPGs seem to be running faster, so I am going to switch the code over to saving JPGs and see how fast it runs.

 

Thank you

Message 9 of 17

Put a Diagram Disable structure around the code that writes the file, to determine whether the file write is the slowdown or something else.

 

Ben

Message 10 of 17