Splitting one big loop into parallel loops

Hi,

 

I'm currently working on a project where I would like to capture real-time footage from a camera and perform some operations on the image based on user input. At the moment, I have placed all the components in a single while loop and use case structures to decide which operations to perform.

 

However, I have a feeling that some performance issues might arise later, when I start adding more functionality. I have attached an example of what I'm talking about, where the Grab function and the Count Objects function are in the same while loop.

 

I'm wondering whether it is possible to split this into two loops: one performing the image acquisition, and the other running the Count Objects function when the user clicks the "Detect objects" button on the front panel (and stopping when the user clicks the button again).

 

Thanks in advance.

Message 1 of 5

Hi rivlin,

 

Please use direct links instead of hiding them behind suspicious third-party websites! (They may be blocked by company firewalls!)

 

Link1: https://knowledge.ni.com/KnowledgeArticleDetails?id=kA03q000000YGxRCAW&l=de-DE

Link2: https://zone.ni.com/reference/en-XX/help/371361R-01/lvconcepts/labview_threading_model/

Best regards,
GerdW


using LV2016/2019/2021 on Win10/11+cRIO, TestStand2016/2019
Message 3 of 5

IMAQdx, which you are using, is already grabbing images in parallel; you're just using it in a way that blocks your loop. Rather than adding a second loop, learn techniques that avoid blocking while waiting on an image. I use the "Frame Done" event to have IMAQdx tell me when an image is ready, but I'll bet there are even simpler ways. From your code, you don't even seem to be starting your acquisition as free-running.
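
(LabVIEW code is graphical, so here is only a rough, language-agnostic sketch of that idea in Python -- the names are made up and this is not the real IMAQdx API. The point is simply that the driver keeps acquiring on its own and tells you when a frame is done, so your loop never sits blocked inside the grab.)

```python
# Conceptual sketch only (Python, not LabVIEW / not the IMAQdx API):
# a free-running acquisition signals "frame done"; the processing loop
# waits on that signal instead of blocking inside the grab call.
import queue
import threading
import time

frame_done = queue.Queue(maxsize=8)        # stands in for the driver's buffer ring

def driver_acquisition(fps=30.0, n_frames=100):
    """Hypothetical stand-in for a free-running camera acquisition."""
    for frame_number in range(n_frames):
        time.sleep(1.0 / fps)              # camera exposure/readout time
        try:
            frame_done.put_nowait(frame_number)   # "frame done" notification
        except queue.Full:
            pass                           # oldest, unread frames are simply dropped

threading.Thread(target=driver_acquisition, daemon=True).start()

detect_objects = True                      # models the "Detect objects" button
while True:
    try:
        frame_number = frame_done.get(timeout=1.0)   # wait briefly, don't spin
    except queue.Empty:
        break                              # acquisition finished (or stalled)
    if detect_objects:
        pass                               # run Count Objects / other processing here
```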

Message 4 of 5

I echo the comments of @drjdpowell. I was introduced to LabVIEW Vision (and IMAQdx) while helping a colleague who was trying to record the behavior of a dozen mice (using a dozen cameras). Trust me, handling a dozen image streams is more "interesting" (i.e., a pain in the rear) than handling a single stream.

 

The key concept that neither of us really understood, but figured out by reading the Help, was that of "Buffers" and how to use them. I just looked through the examples that ship with LabVIEW 2019 and couldn't find anything that really seems to address this, but I recall finding stuff on the Web (sorry, it was too many years ago to remember the search terms).

 

But here's the idea: When you Configure Grab, you configure a number of Buffers and set up IMAQdx for "Continuous" Acquisition. What this does is allocate enough space somewhere in PC memory to save "# Buffers" images, and set up pointers to this Storage Area. IMAQdx also lets you address these Buffers by Buffer Number Mode, including "Last", "Next", "Last New", and "Every", which you can specify if you use IMAQdx Get Image2.

 

Suppose you do a default Configure Grab, which in LabVIEW 2019 configures 5 Buffers. The first 5 Grabs will fill the allocated Buffer space and can be retrieved by calling Get Image2 with Buffer Number Mode set to "Buffer Number" and "Buffer Number In" ranging from 0 through 4. What happens with the sixth Image? It overwrites the Image in the first (index 0) Buffer, and if you ask for Buffer 5, you'll get it. The Buffer Number always increases as Frames are acquired -- note that this allows you to look "backward in time" and view images acquired earlier (in the present example, up to 4 buffers earlier than the Current Buffer).
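
If it helps, here is a rough model of that numbering scheme as a few lines of Python (an illustration only, not the actual driver): Buffer Numbers increase forever, but they map onto a fixed pool of slots, so only the most recent N of them can still be read back.

```python
# Rough Python model of the Buffer numbering described above
# (illustration only, not the IMAQdx driver itself).
NUM_BUFFERS = 5                      # the default Configure Grab in this example

def slot_for(buffer_number):
    """Cumulative Buffer Numbers map onto a fixed pool of slots."""
    return buffer_number % NUM_BUFFERS

def is_still_available(requested, newest):
    """A Buffer can be read back only until the pool wraps past it."""
    return newest - NUM_BUFFERS < requested <= newest

# The 6th image (Buffer Number 5) lands in the slot that held Buffer 0:
assert slot_for(5) == slot_for(0) == 0

# With Buffer 7 as the newest frame, Buffers 3..7 can still be retrieved,
# but Buffer 2 has already been overwritten:
assert is_still_available(3, newest=7)
assert not is_still_available(2, newest=7)
```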

 

What you want to do is probably something like the following:

  1. Configure the Camera for the number of Frames/sec you need, and for the size of the Frame. Recall that a larger Frame Size means more Pixels to process, and a higher Frame Rate means less time (on average) to process each Frame.
  2. Configure Grab, specifying the Number of Buffers you want. You will want to test the number of Buffers to be sure you have specified enough for your processing tasks. Note that this actually starts the Grab and begins filling the allocated N Buffers at the Frame Rate you specified, overwriting the first Buffer (index 0) with Image N+1 and keeping the (finite) Buffer pool filled.
  3. You've now gotten rid of the loop that gets the Images from the Camera to the PC (one way to think of this is that IMAQdx stores the N buffered Images in some region of memory that it knows about and manages for you). So you can write a Processing Loop that uses IMAQdx Get Image2 with Buffer Number Mode set to "Next". Note that this returns an "Image", which is really a "pointer" into the Buffer where the data from the Camera are stored. You need to process and save the Image Data somewhere before the Camera takes N more Frames and overwrites the Image data you are processing with new data.
  4. For "extra credit", here's how you look "backwards in time". Suppose you are taking images at 30 FPS and you want to look at (and potentially save) what the camera was recording starting 1 second ago. Configure a pool of 60 Buffers, which holds the last two seconds of data. When you get a signal saying "What happened one second ago?", you can ask IMAQdx for the current Buffer Number (let's say it is 100), subtract one second's worth of Buffers (30), and start processing/saving Images starting with Buffer 70. [The reason for a "Buffer" buffer, namely the extra 30 Buffers in this example, is that it may take you a fraction of a second to start processing, and you don't want the Camera to "catch up" with your data processing and overwrite data you need.] The arithmetic is sketched just after this list.
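
Here is the step-4 arithmetic written out as a small Python sketch (illustration only; the numbers are the ones from the example above, and the actual image retrieval stays inside IMAQdx via Get Image2):

```python
# Arithmetic for the "look backwards in time" idea in step 4
# (plain Python illustration; buffer handling itself stays inside IMAQdx).
FPS = 30
NUM_BUFFERS = 60                 # retains the last 2 seconds of frames
LOOK_BACK_SECONDS = 1

current_buffer = 100             # value you would query from the driver
start_buffer = current_buffer - FPS * LOOK_BACK_SECONDS   # 100 - 30 = 70

# Sanity check: the requested frame must still be inside the retained window,
# with some margin so the camera can't overwrite it before you read it.
oldest_retained = current_buffer - NUM_BUFFERS + 1        # Buffer 41
margin = start_buffer - oldest_retained                   # ~1 second of slack (the "Buffer" buffer)
assert margin >= 0, "look-back exceeds the buffer pool"

# Process/save Buffers start_buffer, start_buffer + 1, ... while staying
# ahead of the camera, which keeps writing new frames into the pool.
for buffer_number in range(start_buffer, current_buffer + 1):
    pass                          # retrieve this Buffer Number and process it here
```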

Bob Schor

Message 5 of 5