I'm trying to flip-flop between two buffers and wondering what the best solution is. I'd like to acquire an image into one buffer, send that off to be processed, and then acquire a second image while the first is being processed. Right now I have an "IMAQ Create" VI in a for loop creating 10 image locations. I'm using a non-NI framegrabber (shame, I know), which makes things a bit more difficult to replicate. I have two while loops. One while loop grabs images from the framegrabber and places them in the 10 different locations. The other while loop holds a case structure that does the processing. I have created a local variable that holds all of the image locations and reads those to be processed. I don't know if this is actually making things faster or if it's better to just make one image location.
I have two images attached. One image is the Grab While loop. Since I have an array of locations, I have to use a for loop and index each one out to my display. I then have a shift register to carry the image location info over to the next iteration of the while loop.
The second attachment is the bulk of the main while loop. It shows what happens to the image while it's being fully processed in the left case structure. I know it does not look like much but one of the cases (which is called by Boolean Image FFT) is a subVI that does most of the processing. I believe that is what really slows it down because of how that program is written.
The right case structure shows my saving mechanism. I have two file paths. One to save the image and the other to save the processed image. I have a sequence to make sure they save at the same time once it gets to that point.
The problem though is the following:
In the grabwhileloop.png, you can see that I have a timer to check how fast the images are being acquired. This value is approximately 60 fps (which is the rate of the Basler camera). There is a similar setup in the main loop's case structure, and that one processes very slowly, at approximately 1.04 fps. That means the image I turn into an array in the left case structure of the main while loop is more than likely different from the image I'm trying to save off in the right case structure, since the grab is occurring at the same time. I'd like the processed image to be saved alongside the very image it was processed from.
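What I want, in rough pseudocode terms, is something like this (a Python sketch just to show the intent; none of these names come from my actual LabVIEW code): tag each grabbed frame and carry the raw data with it all the way to the save step, instead of re-reading a shared variable that the grab loop may have already overwritten.

```python
import queue

work = queue.Queue()

# Grab side: enqueue (frame_id, raw_image) together as one unit.
for frame_id in range(3):
    raw = [frame_id] * 4                # stand-in pixel data
    work.put((frame_id, raw))

# Processing side: process, then save raw and result under the same ID,
# so the saved pair always comes from the SAME frame.
saved = {}
while not work.empty():
    frame_id, raw = work.get()
    processed = [p + 1 for p in raw]    # stand-in for the real processing
    saved[frame_id] = (raw, processed)

print(saved[2])   # -> ([2, 2, 2, 2], [3, 3, 3, 3])
```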
Sorry for the big bulk of text. This code has come a long way as it is. If you have any suggestions on making it faster or more efficient please feel free to chime in.
By the way, I'm running LabVIEW 11.0 and the latest version of the Vision package. My OS is 64-bit Windows Vista.
You should be worried about a race condition between your Image indicator and the local variables you have of it. In your Grab While loop, you are doing something with the Image local variable, probably before an image gets assigned inside the For Loop. That local variable will only be read once per iteration of the while loop, while the For Loop gets an image reference assigned N times.
Likewise, the Image local variable in your main while loop may be a fresh image reference or an older image reference depending on how the loops run.
I'd recommend using a producer/consumer architecture to pass the image references to the other while loop.
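LabVIEW code is graphical, but the pattern maps directly onto a queue between two parallel loops. Here is a minimal, hypothetical Python sketch of the idea (all names invented): the grab loop enqueues each image reference as soon as it is acquired, and the processing loop dequeues at its own pace, so neither blocks the other.

```python
import queue
import threading

frame_queue = queue.Queue()   # analogous to a LabVIEW queue refnum
results = []

def producer():
    """Grab loop: acquire frames and enqueue references immediately."""
    for frame_id in range(10):       # stand-in for the camera grab
        frame_queue.put(frame_id)    # enqueue the image reference
    frame_queue.put(None)            # sentinel: acquisition finished

def consumer():
    """Processing loop: dequeue and process at its own (slower) pace."""
    while True:
        frame = frame_queue.get()
        if frame is None:
            break
        results.append(frame * frame)   # stand-in for the FFT/processing

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
print(results)   # -> [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

In LabVIEW terms, the `put`/`get` calls correspond to Enqueue Element and Dequeue Element in the two while loops.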
Thanks for the comment. I have looked into the producer/consumer architecture, but to be honest, I'm not quite sure how everything will work under it. I have seen the example code, and I have thought about implementing it (or at least attempting to), but I'm still unconvinced that it will run that much more efficiently. There is other setup, beyond the images, that has to be done outside of either while loop. Also, I don't know where I would put both loops.
Last time, I attempted to put the grab while loop inside of the state machine. Things got choppy because it took so long to go through the main while loop's "acquisition" state (which is really the processing state). I needed both to run simultaneously. The grab reset every time it went to the "grab" state, which is not what I wanted. The only way I could think of to combat this was to combine the grab and acquisition in the same state. If I did that, I'd take out the for loop and grab one image at a time. However, that would probably make things even slower than they already are.
As for doing something before an image is assigned in the for loop: I don't need that pixel-sum value to refresh that quickly. The main while loop is already slow enough as it is, so I am more afraid that everything will run too slowly the more I add. I know where the bottleneck is in my code, but I can't really see a way to "even out the flow". Even if I moved to the other architecture, I feel it would take the same amount of time that it does already.
From my debugging, the Image local variable in the main while loop seems to refresh as quickly as the grab while loop spits it out. Granted, by the time the main while loop finally completes an iteration, many images have gone by. That's just how it has to be, though, because it takes so much processing power to run through the main while loop's state.
As a side note, does LabVIEW have a known issue with acquiring images in real time? I ask because when I run the code, there is a solid white line that I'm supposed to see in my display. Every time something times out, the line moves, which is not supposed to happen. The line also moves every time I place my mouse cursor in the display or spin the mouse wheel to scroll. Even if I do neither of those things, it will eventually move on its own.
The Vision software is compatible with real-time. If you are running this on your own computer, though, then it is not a real-time target. You're trying to acquire an image and process it, but while the image is being processed you want to acquire another image. For this I would have to agree with Ravens Fan in the recommendation to use a producer/consumer architecture. Where is the bottleneck you see in your code?
Sorry Greg. Not quite sure what you mean by "not a real-time target". Can you explain this for me, please? It is definitely real time in my opinion: if I put my hand in the system, I can see the disturbance from my hand in real time through the LabVIEW display.
Also what are the benefits of having a producer/consumer architecture? To me, it's just a queuing state machine. How will things run smoothly via this architecture? If I want to grab an image, wouldn't that architecture cause the next grab to "wait its turn" in the queue? Then it really will not be real-time.
Well the bottleneck is handling the array. The array is 2048x601 which is pretty large to me. In the most "accurate" case, this large array goes through another VI that does a lot of spline interpolation. I think that's what causes the biggest issue and slows this part down tremendously.
We have a special version of LabVIEW, LabVIEW Real-Time, that runs on real-time targets. A Windows PC is not a real-time target because the OS controls the resources and will cause your code to execute at different speeds depending on what else the computer is doing (i.e., Windows is not deterministic). See this link:
The producer/consumer architecture is two loops running in parallel: data is collected in the first loop and processed in the second. The advantage is that if the processing takes a long time, you don't have to wait for it to finish before you acquire the next image. Basically, you can continue to acquire images in the first loop and process them in the second without slowing down the acquisition due to a long processing time.
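One way to see that the grab loop need not wait on the slow processing, even at 60 fps against ~1 fps, is a lossy bounded buffer: the grab side always succeeds immediately, and the processing side simply takes the newest frame available. A hedged Python sketch of that variant (all names invented; not LabVIEW code):

```python
import collections
import threading

# A bounded buffer that discards the oldest frame when full, so the
# fast grab side never waits on the slow processing side.
buf = collections.deque(maxlen=3)     # keep only the 3 newest frames
lock = threading.Lock()

def grab(frame):
    """Grab loop: always returns immediately; old frames get dropped."""
    with lock:
        buf.append(frame)

def take_latest():
    """Processing loop: work on the freshest frame available."""
    with lock:
        return buf[-1] if buf else None

# Simulate 100 grabs arriving while processing is busy elsewhere.
for f in range(100):
    grab(f)

print(take_latest())    # -> 99, the freshest frame
print(list(buf))        # -> [97, 98, 99]; only the newest few survive
```

Whether you want a lossless queue (process every frame, memory grows while processing lags) or a lossy buffer like this (always process the newest frame) depends on whether you need every image or just the current one.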
Thanks again for the response. I understand what you mean now by real-time target.
When I originally researched the producer/consumer architecture, I used this link: http://www.ni.com/white-paper/3023/en. It said the pattern is a good fit if you want to acquire and process data, but it was not quite clear to me how this would be much better. I can understand, in terms of my Image local variable, that it queues what goes in to be processed. However, I really think things will run slower than they do now because of it.
Also, I already grab images at the rate of the camera. I have a timer in my for loop to calculate this. The acquisition only slows down the production of a graph that I have which comes out as a result of the processing.
Do you have any insight though as to why when I place the cursor into the display, that my image shifts?
One last thing. I have attached another image to show the processing. Can you assist me with making the code as efficient as possible? I'm not really sure whether it's just that the array is so large or something else going on.
I wanted to try and upload the VI, but the file is way too large. Are there ways to make it smaller?
The capture of the image takes time, and the processing takes time. If you make these two things happen at the same time, you will definitely speed up your program. That is what the producer/consumer architecture does.
The only reason I could see the mouse causing an issue on the front panel is that clicking something forces the UI thread to update at that point, when it might not have otherwise.
What is the ultimate goal of the code you attached the image of?