
LabVIEW


Efficient lookup tables to provide bit error correction to images

Which version(s) among the ones posted did you try? Can you show us the code that reads from the camera? Do you read the image as a 1D or 2D array? Converting between the two using reshape array might slow your code. Also, have you disabled debugging in the VI that does this operation? That will likely speed up your code too.

If possible, please post more of your code, since it's possible there's a change elsewhere that would speed things up.
Message 11 of 18

You should also set the execution priority to "time critical". Finally, if the image size is known up front and the lookup tables are constant, you could probably decrease execution time by using an In Place Element structure. This prevents LabVIEW from reallocating variable-size arrays as you go. On a similar note, it's best if the arrays in question are initialized with 0's at their final size to start with. Again, as was mentioned, these vague suggestions could be made more concrete if you share your code. If you can, do Save As > Duplicate File Hierarchy, then zip the folder and upload it.
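[Editor's note] The preallocation advice above is about LabVIEW's In Place Element structure, but the underlying cost model is language-independent. A minimal Python/NumPy sketch of the same idea (the function names here are illustrative, not from the thread): growing an array inside a loop reallocates on every iteration, while a zero-initialized buffer of the final size is written in place.

```python
import numpy as np

def grow(n):
    # Anti-pattern: np.append copies the whole array on every call,
    # analogous to building an array with repeated Build Array in LabVIEW.
    out = np.empty((0,), dtype=np.uint16)
    for i in range(n):
        out = np.append(out, np.uint16(i))
    return out

def prealloc(n):
    # Preferred: allocate once at the final size, then fill in place,
    # analogous to Initialize Array + Replace Array Subset / In Place Element.
    out = np.zeros(n, dtype=np.uint16)
    for i in range(n):
        out[i] = i
    return out
```

Both functions produce the same result; only the allocation behavior differs, and the gap widens as `n` grows.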

Message 12 of 18

Our data package consists of modulating a hardware state at 135 Hz between 4 states; we then sum each of those states 512 times for each package.

 

The lookup table correction for each ADC is 12 bits (4096 values). Right now I am just focused on getting the lookup correction fast. Currently it takes ~16 seconds to do 2048 corrections on a test array of 1024x1024 random numbers. The whole data processing task (including a lot of other steps) for 2048 images needs to be under 15.15 seconds, so I think I need to find a factor of ~2 increase in speed.
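[Editor's note] For readers outside LabVIEW, the operation described here (a 12-bit, 4096-entry table applied to every pixel of a 1024x1024 image) can be sketched in NumPy. The table contents below are random placeholders, not the poster's actual calibration data; the point is that the whole correction is a single indexing operation per image.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 12-bit correction table: one output value per possible ADC code.
lut = rng.integers(0, 4096, size=4096).astype(np.uint16)

# Test image matching the post: 1024x1024 random 12-bit values.
img = rng.integers(0, 4096, size=(1024, 1024)).astype(np.uint16)

# Every pixel is replaced by its table entry in one fancy-indexing pass.
corrected = lut[img]
```

In LabVIEW terms this corresponds to indexing the LUT array with each pixel value inside the image loops; the key property is that the work per pixel is one memory lookup, so the time should scale linearly with pixel count.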

Thanks

 

apply_lut3d.png

Message 13 of 18

What exactly is the point of the outer For loop? It runs 2048 times, but each time it runs the data within is discarded.

Message 14 of 18
Try it the way I suggested in an earlier reply (without the array transpose and reshape array). It will be faster.
Message 15 of 18

Parallelize the outermost loop (and only that loop); it's generally the most efficient. The Array Subset and transform can also be done outside the loops.

/Y

Message 16 of 18

If you can re-use the output data from the last iteration as the basis for the next iteration then you can shave some time off the memory allocations.
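[Editor's note] A NumPy analogue of the buffer-reuse idea above, under the assumption that the correction is a LUT indexing pass: `np.take` accepts an `out` argument, so one output buffer allocated before the loop can receive every image's result, rather than allocating a fresh array per iteration.

```python
import numpy as np

rng = np.random.default_rng(1)
lut = rng.integers(0, 4096, size=4096).astype(np.uint16)

# One output buffer, allocated once and reused for every image.
buf = np.empty((1024, 1024), dtype=np.uint16)

for _ in range(3):  # stand-in for the 2048-image loop
    img = rng.integers(0, 4096, size=(1024, 1024)).astype(np.uint16)
    np.take(lut, img, out=buf)  # writes into buf; no new allocation per image
```

This mirrors what the In Place Element structure buys you in LabVIEW: the per-iteration cost becomes the lookup itself, with no allocator traffic.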

 

Without enabling parallel loops I have a version which does each 1024x1024 image in approximately 6.2 ms on my machine (just over 12 s for 2048 images).

 

I tried enabling parallel loops but the memory arbitration which then occurs significantly degrades performance.  It's much faster to work through the array from 0..N due to how RAM works.

Would it be possible to process, say, 4 images in parallel, with each image processed serially? That should get you down to around 4-5 seconds per 2048 images.
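[Editor's note] The scheme proposed here (parallelism across images, serial work within each image) can be sketched outside LabVIEW as well. A minimal Python version, with illustrative names (`correct`, `correct_batch`) not taken from the thread: each worker takes one whole image start to finish, so memory access within an image stays sequential.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def correct(img, lut):
    # One whole image processed serially by a single worker.
    return lut[img]

def correct_batch(images, lut, workers=4):
    # Up to `workers` images in flight at once; each image is handled
    # start-to-finish by one worker, keeping its memory walk 0..N.
    with ThreadPoolExecutor(max_workers=workers) as ex:
        return list(ex.map(lambda im: correct(im, lut), images))

rng = np.random.default_rng(2)
lut = rng.integers(0, 4096, size=4096).astype(np.uint16)
images = [rng.integers(0, 4096, size=(1024, 1024)).astype(np.uint16)
          for _ in range(8)]
results = correct_batch(images, lut)
```

NumPy indexing releases the GIL for much of the work, so threads are a reasonable stand-in here; the LabVIEW equivalent is running multiple reentrant copies of the per-image VI, as described in the next reply.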

 

Shane.

 

Hmm, I notice something strange when performing calculations on 4 cores simultaneously (4 images concurrently). The VI Profiler shows 26 s for one iteration, but the software is clearly running faster than that: it updates in approximately 7 seconds (for 2048 images). It seems the VI Profiler has trouble benchmarking code spread over more than one core. Maybe this is not news, but I did not know the profiler behaved like that.

Message 17 of 18

I reckon NathanD's method is the best for a single image, but I would still recommend parallelising subsequent images rather than parallelising the operations within a single image. By splitting the images over four copies of the VI NathanD has written, I get the 2048 images processed in 3.3 s.

 

Shane

Message 18 of 18