Here is an outline of what I want to accomplish:
-LabView program starts running and waits for GigE camera to output frames
-Hardware trigger leads to GigE camera outputting frames
-Some simple arithmetic is done on each frame to generate the average pixel value; this average value is plotted for each frame
-Repeat the above three steps
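Outside LabVIEW, the per-frame arithmetic in the outline above is very small; this Python sketch (with fabricated 2-D lists standing in for grabbed frames) shows the one value that would be plotted per frame:

```python
def mean_pixel_value(frame):
    """Average pixel value of one frame (a 2-D list of intensities)."""
    total = sum(sum(row) for row in frame)
    count = sum(len(row) for row in frame)
    return total / count

# Fabricated stand-ins for three triggered frames.
frames = [[[10, 10], [10, 10]],
          [[20, 20], [20, 20]],
          [[25, 35], [15, 45]]]
averages = [mean_pixel_value(f) for f in frames]
print(averages)  # → [10.0, 20.0, 30.0]  (one plotted point per frame)
```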
Please see the attached VI. I have successfully set my camera's settings in MAX to make it wait for an external hardware trigger. However, the output of IMAQdx Grab2.vi inside the While Loop is only a single frame (even though in MAX I have set the Acquisition Mode to MultiFrame - 255 Frames).
Any help would be appreciated!
What camera are you using? I'm also curious -- I've got LabVIEW 2014 installed, with the Vision Toolkit, but I don't have IMAQdx Grab2, only IMAQdx Grab. What is Grab2?
I am using a FLIR SC325 Thermal Camera. I also was curious about why my 'Grab' was showing up as 'Grab2'. I did have to uninstall and reinstall the Vision Toolkit several times before things went through as desired, so maybe that has something to do with it.
My frame rate is 60 Hz.
The first thing to figure out is if you are getting images (and why or why not). I just threw this (very trivial) "Grabber" together and it works fine.
Does the analogous code work for you? Do you see video until you push the Stop button? If not, then that's the first thing to get working.
Once that's going, you need to think about what you want to do with the images and how much processing time it will take. If you are taking images at 60 frames/second, you have about 16 milliseconds to process each image. You should consider using LabVIEW's parallelism to move some of the processing tasks (and their time) out of the data acquisition loop. We've used a Producer/Consumer design successfully, but you need to be careful about managing your buffers (and having enough of them!).
Grab2 is the updated version of the IMAQ Grab function. The IMAQ functions are periodically updated, and the version number is appended to the name. You might have seen other Vision functions with 2, 3, or 4 appended to their names.
Your code looks good. Does the FLIR SC325 support multi-frame triggered acquisition? Did you save the settings in MAX?
I am able to do multi-frame triggered acquisition using FLIR's ResearchIR software with no problem. However, the in-line and post-processing capabilities are very limited in FLIR's software, and that's why I'm moving to LabVIEW. And yes, I did save the settings in MAX.
If things were working correctly in an ideal world, would the output of the grab function be the 255 frames? Or would I need some sort of a looping structure to get all 255 frames? Would it be more effective to use the IMAQ Sequence.vi block?
In the meantime, since I was getting frustrated with my current method, I decided to take another approach...
-Trigger signal goes into USB-6009 DAQ board
-When the trigger signal is greater than 3 V, the camera continuously collects data
-When the trigger signal is less than 3 V, no data is collected
The above approach works really well for my application, BUT I am not able to achieve the frame rates I would like because the loop and case structure take too much time to execute. The maximum frame rate I am able to achieve is around 20 Hz, and I would like to achieve 120 Hz. Please see the attached VI. Any suggestions on how to achieve higher frame rates? My first thought is to collect data in the loop and case structure while the trigger signal is above 3 V, and to use the down time (trigger signal below 3 V) to process the data.
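For what it's worth, the gating logic described above boils down to this Python sketch; the voltage/frame pairs are fabricated, and in the real VI the reads would be DAQmx and IMAQdx calls inside the While Loop and Case Structure:

```python
THRESHOLD_V = 3.0

def gated_collect(samples):
    """samples: (trigger_volts, frame) pairs, one per loop iteration.
    Keep a frame only while the trigger exceeds 3 V (the True case);
    otherwise collect nothing (the False case)."""
    collected = []
    for volts, frame in samples:
        if volts > THRESHOLD_V:
            collected.append(frame)
    return collected

# Toy run: trigger high only for the middle two iterations.
print(gated_collect([(0.1, "f0"), (4.9, "f1"), (5.0, "f2"), (0.2, "f3")]))
# → ['f1', 'f2']
```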
The "problem" that you are having is the frame rate of video acquisition, which you think is about 20 Hz. Take the very simple VI I posted and run it with your camera -- all it does is continuously take frames (and display them) -- does this have an acceptable rate? I suspect it will.
If so, then "start with what works and add to it", rather than trying to "fix what is broken". First, let's consider how to (better) control the Start and Stop of frame acquisition. I like your idea of using the 6009, but I recommend (if you are using a 3V signal as the trigger) that you wire the trigger to one of the Digital I/O ports (as 0 and 3V are acceptable TTL levels for False and True).
Your Video loop will be "clocked" by the Camera at its frame rate, so you might consider using the same loop to clock the DIO. Take a Digital sample of the line to which you've wired your TTL signal, and as long as it is True, run the loop. [You'll need to think about how to get the loop started ...]. It should, I think, be possible to read from your USB 6009 within a 60th of a second -- if it is too slow, there are other ways of handling this with a separate parallel loop, but let's not go there until we see it is a problem.
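In sketch form (Python pseudocode; `grab_frame` and `read_dio_line` are invented stand-ins for the IMAQdx grab and the 6009 digital read, not real API names), the camera-clocked loop looks like:

```python
def triggered_video_loop(grab_frame, read_dio_line, max_frames=1000):
    """Each iteration is clocked by the frame grab (~1/60 s at 60 fps);
    one cheap digital read per iteration keeps the loop running only
    while the TTL line is True."""
    frames = []
    while len(frames) < max_frames and read_dio_line():
        frames.append(grab_frame())
    return frames

# Toy stand-ins: the line reads True three times, then goes low.
line_states = iter([True, True, True, False])
result = triggered_video_loop(lambda: "frame", lambda: next(line_states))
print(len(result))  # → 3
```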
So now, in principle, you've gone from a simple loop showing frames at 60 Hz to a loop controlled by a TTL signal showing frames at 60 Hz. All we need now is to process those frames.
Here is where you want to use a Producer/Consumer pattern -- you don't want to do processing inside this loop (because the loop cannot run faster than all of its parts, taken together, and if you are processing incoming data, you have to get the data, then do the processing). Instead, you have two loops running in parallel -- the Producer loop acquiring the video frames and "exporting" them to a Consumer loop that processes them.
Are you familiar with this pattern? There are numerous examples around (look in File/New/From Template/Framework/Design Patterns, and at some of the Sample Projects). It uses a Queue, with data put onto the Queue by the Producer and removed by the Consumer. You might need to increase the number of buffers for your camera, but you should be able to do quite a bit of processing in 1/60 of a second.
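Since a block diagram won't paste into a post, here is the same Producer/Consumer idea sketched in Python, with `queue.Queue` standing in for the LabVIEW Queue and fabricated lists standing in for grabbed frames:

```python
import queue
import threading

def producer(q, frames):
    """Acquisition loop: enqueue each frame as it is grabbed."""
    for f in frames:
        q.put(f)           # blocks if the Consumer falls behind
    q.put(None)            # sentinel: acquisition finished

def consumer(q, results):
    """Processing loop: dequeue frames and do the slow work here."""
    while True:
        f = q.get()
        if f is None:
            break
        results.append(sum(f) / len(f))   # e.g. the per-frame average

fake_frames = [[10, 20], [30, 50]]        # stand-ins for grabbed images
q = queue.Queue(maxsize=8)                # ~ the camera buffer count
results = []
worker = threading.Thread(target=consumer, args=(q, results))
worker.start()
producer(q, fake_frames)
worker.join()
print(results)  # → [15.0, 40.0]
```

The bounded `maxsize` plays the role of the camera buffers: if processing falls behind, the Producer blocks instead of silently dropping frames.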
Thanks a lot for your detailed response. The only reason I went to the 6009 approach was because I was having challenges triggering my GigE camera using the DIO ports (see my opening post for the thread).
I am definitely achieving desirable frame rates when I run a simple VI that just continuously collects data. I am still achieving desirable frame rates even with a little processing done on each frame as it is acquired (I may have to go to a Producer/Consumer setup as I add more processing, but I can worry about that later). I don't understand why the 6009 approach limits me to an "effective" frame rate of 20 Hz... my 6009 is sampling at 1000 Hz. Is it simply the case structure that is causing the slowdown?
I was about to venture an opinion, but I realize I have a 6009 right behind me, so let me do a little test, and I'll get right back to you ...