Thank you for the new 2012 VI.
I found several problems that make your code slower and also fill up your buffer:
IMAQ Dispose is in the wrong position and wired incorrectly.
Call it inside a flat sequence immediately after you extract the buffer and save it (or convert it to an array). Do not wire any image to IMAQ Dispose; instead set its Boolean input to TRUE so that it disposes of all images at once.
Also look under Programming » Application Control » Memory Control; there are some useful VIs there if you want more professional memory handling.
Use two synchronized loops: one loop for the pulse and stop timing, and a second loop for image acquisition. This makes your code better without losing time.
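Since LabVIEW block diagrams cannot be shown in text, here is a rough sketch of that two-loop idea using Python threads in place of LabVIEW loops. The timing loop generates pulses and signals a stop; the acquisition loop keeps grabbing frames until it sees the stop flag. All names here are illustrative, not the IMAQ API.

```python
import queue
import threading
import time

stop = threading.Event()
frames = queue.Queue()

def timing_loop(n_pulses, period_s):
    """Loop 1: generate pulses and decide when to stop
    (stand-in for the LabVIEW pulse/stop-timing loop)."""
    for _ in range(n_pulses):
        time.sleep(period_s)   # one "pulse" per period
    stop.set()                 # tell the acquisition loop to finish

def acquisition_loop():
    """Loop 2: acquire images until told to stop
    (stand-in for the IMAQ acquisition loop)."""
    i = 0
    while not stop.is_set():
        frames.put(f"frame-{i}")   # placeholder for an IMAQ extract/save step
        i += 1
        time.sleep(0.01)

t1 = threading.Thread(target=timing_loop, args=(5, 0.05))
t2 = threading.Thread(target=acquisition_loop)
t1.start(); t2.start()
t1.join(); t2.join()
print(frames.qsize())   # frames acquired while the timing loop ran
```

The key design point is that neither loop waits on the other per-iteration: acquisition runs at its own rate and only checks a shared stop flag, which is what keeps the timing loop from stealing acquisition time.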
Using a local variable can be up to 20 times slower than a wire, so avoid them in time-critical code.
Also, I don't think you need to configure the buffers inside the loop; just set the number of buffers once with a single IMAQ call. (I am a bit doubtful about this one; could you test it and tell me the result?)
It turns out your suggestion really helped us find the issue.
Setting the camera to "continuous" is the right way to go if you want the buffer to "wrap around" and act like a true ring. Actually, we found that we only needed to allocate around 10 more buffers than images, since we know exactly how many images we are taking at once.
Our mistakes were that:
1. We only allocated as many buffers as images taken at once (e.g. 50 buffers for 50 images). IMAQ Extract Buffer trips up at this for some reason. The buffer needs extra room so that there is always some free space. This was fixed by allocating 10 more buffers than images (e.g. 60 buffers for 50 images).
2. When extracting acquired images (IMAQ Extract Buffer), we wrongly assumed that we needed to manually wrap the value wired to "Buffer to extract" (i.e., extract buffer 48->49->0->1->2 in a 50-buffer list) in LabVIEW. Instead, we should increment the "buffer to extract" without worrying about the buffer size (i.e., extract buffer 48->49->50->51->52 in a 50-buffer list). The "wrapping" occurs internally so we don't have to deal with it in software. Of course, this is assuming the camera acquisition stays "ahead" of the extraction on the buffer list.
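The fix in point 2 boils down to simple index arithmetic: wire a monotonically increasing cumulative index to "Buffer to extract" and let the driver map it onto the ring internally. A small sketch of the numbers from the example above (the modulo here only shows which physical slot the driver would land on; it is not something you wire up yourself):

```python
NUM_BUFFERS = 50

# Cumulative extraction indices for images 48..52 -- just keep counting up,
# ignoring the buffer-list size (the "right" approach from point 2).
cumulative = [48, 49, 50, 51, 52]

# The driver wraps internally; the physical ring slot it uses is
# equivalent to cumulative index mod buffer count.
physical = [i % NUM_BUFFERS for i in cumulative]
print(physical)  # -> [48, 49, 0, 1, 2]
```

This also shows why the extra headroom from point 1 matters: the wrap only works cleanly while acquisition stays ahead of extraction on the ring.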
We are using TCP/IP-enabled video cameras (Axis), 640 x 480 RGB, 30 fps. With a 100-baseT network, we can support 9-10 cameras at this data rate. We use a Producer/Consumer process for each Camera, passing each Frame to a Consumer for saving to disk (and we only display a single selected Camera's images).
We previously were acquiring at 10 fps and often used 24 cameras simultaneously. When I re-designed the routine and changed the frame rate to 30 fps, I forgot to compute how many bytes/second we were trying to stream -- fortunately, the initial studies used no more than 6 cameras ...
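The bytes-per-second check mentioned above is worth writing out. A rough back-of-the-envelope version follows; the 20:1 compression ratio is my assumption (Axis cameras typically ship compressed video, e.g. MJPEG), chosen only to make the arithmetic line up with the ~9-10 camera figure quoted for 100BaseT.

```python
# Per-camera raw data rate for 640 x 480 RGB at 30 fps.
width, height, bytes_per_pixel = 640, 480, 3
fps = 30
raw_per_camera = width * height * bytes_per_pixel * fps   # bytes/second
print(raw_per_camera)   # 27,648,000 B/s, roughly 26 MB/s per camera

# 100BaseT link capacity in bytes/second (ignoring protocol overhead).
link_capacity = 100e6 / 8   # 12.5 MB/s

# Uncompressed, even one camera exceeds the link, so the cameras must be
# compressing. Assuming roughly 20:1 compression (an assumed figure):
compressed_per_camera = raw_per_camera / 20
print(link_capacity // compressed_per_camera)   # about 9 cameras fit
```

Running the same numbers at 10 fps and 24 cameras shows why the old configuration worked, and why tripling the frame rate without redoing this arithmetic caused trouble.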