Machine Vision

parallel processing of vision algorithms

This kind of echoes Shane's post from about 6 months back, but here goes:
 
What are the best strategies for vision processing on multicore computers?  Are the vision routines internally parallel or serial (or some of each)?  Is it okay to make vision routines reentrant so multiple copies can run simultaneously?  Would there be an advantage to doing this?
 
I am writing an application that will be using half a dozen different test algorithms, and each one will be used between one and one hundred times per cycle.  I am using multiple copies of the analysis engine to parallelize the analysis.  Right now, I feed the entire list of tests into a queue and each analysis engine reads the next test off the queue when it completes the previous test.  The entire analysis sequence needs to be completed within 20 msec.  I am using a quad processor PC, so I should have plenty of processing power.  The problem I am having right now is that if all of the analysis engines are working on the same type of analysis, they stop at the non-reentrant vision routines and execute one at a time.  When I only test one algorithm, the maximum processing rate occurs when the CPU usage is fairly low - about 35 percent.  If I could run the same vision algorithms in parallel, I could process much faster.
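
To make the setup concrete in text form (LabVIEW is graphical, so this is just a rough Python-style sketch with made-up names, not my actual code or any real IMAQ call), the single-queue approach is basically a worker pool where a non-reentrant vision routine acts like one shared lock:

import queue
import threading

test_queue = queue.Queue()        # single shared list of tests for this cycle

# A non-reentrant routine behaves as if it held one global lock: only one
# engine can be inside it at a time, so identical tests execute serially
# no matter how many engines are running.
vision_lock = threading.Lock()

def run_vision_routine(test):
    with vision_lock:             # models the non-reentrant VI / DLL call
        pass                      # the actual measurement would run here

def analysis_engine():
    while True:
        test = test_queue.get()   # each engine pulls the next test when free
        if test is None:          # sentinel value shuts the engine down
            break
        run_vision_routine(test)

engines = [threading.Thread(target=analysis_engine) for _ in range(4)]
for e in engines:
    e.start()

If the lock goes away (i.e. the routine is made reentrant), the engines can all execute the same test type at the same time.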
 
The other strategy I am considering is splitting up the analyses and assigning each test type to a specific engine.  The problem I see with this strategy is the possibility of unbalanced engine usage - the engines with a large number of tests could take a lot longer than the ones with only one or two.  Figuring out how to balance the load could be challenging, especially since the quantity of each test is determined by the user and changes frequently.  I would prefer to keep the single queue if I can increase the analysis rate.
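
To illustrate the imbalance with made-up numbers (these are not my actual test counts), static assignment ties the cycle time to the busiest engine, while the single queue spreads the same work over however many engines are running:

# Hypothetical counts of each test type for one cycle, set by the user:
tests_per_type = {"type_A": 100, "type_B": 3, "type_C": 1}

# Static assignment: one engine per test type.
engine_load = list(tests_per_type.values())
print(engine_load)                      # [100, 3, 1] -- one engine does nearly everything

# Dynamic single queue: total work shared by all engines.
num_engines = 3
print(sum(tests_per_type.values()) / num_engines)   # about 35 tests per engine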
 
I would appreciate some feedback from the NI Vision developers here, since they are the only ones who can explain the internal workings of the vision routines.
 
Thanks,
 
Bruce
Bruce Ammons
Ammons Engineering
Message 1 of 11

Another question:  Does threading affect this as well?  Do I have to put each instance of the analysis engine in a different thread, or is making them reentrant sufficient?

Thanks,

Bruce

Bruce Ammons
Ammons Engineering
Message 2 of 11
Hi Bruce,
 
I'm assuming you are using LabVIEW to solve the application.  If I'm wrong in this assumption, let me know.
 
The Vision algorithms are internally single threaded, but designed to work in parallel with each other.  With no changes, any two different Vision VIs will run concurrently.  If you want to run the same VI in parallel with itself, you'll need to switch it to reentrant.  Internally, each VI will lock images to ensure, for example, you don't try to resize an image in one VI while another is trying to read from it.
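
Purely as an illustration (the real locking lives inside the Vision DLL, and none of these names are actual NI APIs), the per-image locking works roughly like this in Python-style pseudocode:

import threading

class Image:
    """Conceptual stand-in for an image reference passed between VIs."""
    def __init__(self, width, height):
        self.width = width
        self.height = height
        self.lock = threading.RLock()    # each image carries its own lock

def resize(img, width, height):
    # A routine that modifies the image holds its lock, so a concurrent read
    # in another (reentrant) instance never sees a half-resized buffer.
    with img.lock:
        img.width, img.height = width, height

def measure(img):
    with img.lock:                       # readers take the same per-image lock
        return img.width * img.height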
 
You don't need to explicitly put each instance in a separate execution context; however, you can get finer-grained control if you choose to do so.
 
I gave an NI Week presentation about seven years ago showing how to effectively pipeline Vision algorithms to exploit multicore processors.  I honestly don't think I have it anymore, but I will look, and if I do have it, I will forward it on to you.
 
If this doesn't answer your question, or if there are any more questions you have, please post and I'll try to address them.
 
-Jeff
 
 
Message 3 of 11
Jeff,
 
Thanks for the quick answer!!!
 
I am using LabVIEW.  I guess I forgot to mention it.
 
It sounds like changing the Vision routines to reentrant will do the trick.  Since I am not changing the original image, the routines should all be able to operate on the same image at the same time.  I just wanted to make sure the DLL wouldn't lock up when it is called twice.
 
Since you didn't mention it, I will assume threads are not a significant issue.
 
My test processes are mostly short, one-step measurements or analyses, so pipelining doesn't do me any good.
 
I will try changing the routines to reentrant and post my results.
 
Thanks,
 
Bruce
Bruce Ammons
Ammons Engineering
Message 4 of 11

Jeff,

Changing the NI routines to reentrant solved the bottleneck for all but one of my test routines.  Now each time I add another analysis engine, the number of possible tests increases proportionally until I hit 100% CPU usage, which is exactly what I wanted.  I found that for most of the routines the ideal number of engines is 3.  For one of them, the total number of possible tests actually decreased significantly when I went to 4.  I think it is best to reserve one core for other processes, such as screen updates, file I/O, etc., so I will stick with 3 analysis engines.

For one test, the number of possible tests decreased each time I added an analysis engine.  I think one or more of the NI routines can't be made reentrant, but I haven't figured out all the details yet.  It looks like IMAQ Extract and IMAQ Particle Analysis may have conflicts.  When I make them both non-reentrant, the number of possible tests remains constant as I increase the number of analysis engines.  I also noticed that Extract may be conflicting with another test, since I use it in both.  The strange thing is that one test uses grayscale images and the other uses color images, so it doesn't seem like they should conflict anyway.

Any thoughts on either of these routines and if they can be made reentrant?

Thanks,

Bruce

Bruce Ammons
Ammons Engineering
Message 5 of 11
Try looking over at LAVA (Here) for some more info.

"TiT" mentioned that only some of the VIs are safe to make reentrant.  He recommends contacting your local NI engineer to find out which ones are OK to make reentrant.

I came to the same conclusion myself after quite a lot of testing.

Shane.

PS I'm being a bit Schizo recently.  Intaris is my new user name......
Message 6 of 11
Hi Bruce,
 
You are right about Particle Analysis - there is a known issue that internally prevents it from running in parallel with other Vision calls.  There is an engineer looking at the problem.  We did not know about a similar problem with Extract;  we will have that one investigated as well.
 
To be clear - it is completely safe to set any Vision VI to reentrant.  While doing so might not improve performance (due to internal reasons), we have made sure that there will be no data corruption, hanging, crashing, etc. if you do so.  Do not, however, change any Call Library Function nodes from "Run in UI Thread" to "Run in any Thread".  There are a few Vision functions that must run in the UI thread to maintain coherency with LabVIEW.  As you might imagine given the description, these tend to be the VIs that are in the "External Display" palette.
 
As an aside, the decision to make the VIs not reentrant by default was made when we first shipped the Vision toolkit, back in the LV 5 days.  At that time, there were very few multicore/multiprocessor machines, and the downsides of marking a VI as reentrant were much more onerous.  As LabVIEW has matured and computers have gained more memory and computing power, that decision has become less beneficial.  With each release, we evaluate whether we should change the default, but we have to weigh the consequences for backwards compatibility.
 
-Jeff
Message 7 of 11


Jeff wrote: "To be clear - it is completely safe to set any Vision VI to reentrant."


I don't mean to be a PITA (tm) but I don't think this is correct for every version of the Vision tools.

I had tried something like this a while back (about 8 months or so) and I found problems when setting certain Vision VIs to reentrant.  I think I was using the 1st Quarter 2007 Developer Suite tools at the time (and still am, actually).

I'll try to track down the post later, but IIRC, not all VIs are "re-entrant safe".  Or at least, they weren't always so.

Shane.
Message 8 of 11

Hi Intaris,

I can assure you that since IMAQ Vision 5.0 all of the IMAQ Vision VIs have been designed to be set as reentrant.  When I led the redesign for that release, we put a special emphasis on making sure the algorithms were multithread safe.  There is, of course, the possibility of a bug that caused an issue.  I am unaware of any such bug, but I haven't worked in the IMAQ group for quite a few years now, so one may have surfaced that I didn't hear about.

-Jeff

Message 9 of 11
Well I suppose I'll have to look at it again.

I was pretty certain that I was seeing problems with certain VIs being set to re-entrant.

I'll see if I can dig up my old examples.

Funny thing is, I don't seem to be the only one who has noticed this.

Shane.
Message 10 of 11