What is the best way to switch between multiple image buffers? AND How to synchronize saves?

I work in an optics lab.  The ultimate goal is to take images of the eye and build a 3D volume.  The way we do this is by taking multiple 1D arrays and stitching them together to form a 2D area, then stacking those 2D areas to form a 3D volume.  Right now the code takes a 2D area and transforms it into the appropriate 2D area.  I feel it would be quicker to take each 1D array, transform it into another 1D array, and then stitch everything together at the end.
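To make the idea concrete, here is a rough NumPy sketch of the stitching I have in mind (Python is just for illustration; our real code is LabVIEW, and the sizes and the per-line transform here are placeholders):

import numpy as np

n_pixels = 2048   # samples per 1D line (placeholder)
n_lines  = 601    # lines per 2D area (placeholder)
n_frames = 100    # areas per 3D volume (placeholder)

def process_line(raw_line):
    # Stand-in for whatever per-line transform we would actually run.
    return raw_line.astype(np.float32)

# Per-line approach: transform each 1D line, stitching into a preallocated volume.
volume = np.empty((n_frames, n_lines, n_pixels), dtype=np.float32)
for f in range(n_frames):
    for l in range(n_lines):
        raw_line = np.random.rand(n_pixels)        # placeholder for one acquired line
        volume[f, l, :] = process_line(raw_line)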

 

Therefore, the problem is that we would like to see what's happening in order to debug the hardware before putting actual subjects into the system.

 

Robert

Message 11 of 18

Hi,

 

If you are using an area scan camera, then the images are already coming in as 2D arrays, so you should just work with them in that form.  If you are using a line scan camera, then the images could come in as 1D arrays.

 

What do you mean when you say you "want to see what's happening"?

 

Regards,

 

Greg H.

Applications Engineer
National Instruments
Message 12 of 18

I believe we are using an area scan camera, but we can set the DCF to give us a line scan.  Plus, the scanners produce a line scan image (i.e., they only give data in one line of pixels).

 

When taking in raw spectral data, we cannot tell whether what we are trying to image is actually being imaged.  This could be due to the hardware setup or many other reasons.  Therefore we process that spectral data into something meaningful, and that is what we would like to see as close to real time as possible.
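As a rough illustration of the per-line processing I mean (a Python/NumPy sketch only; I'm using an FFT-style magnitude profile as a stand-in for our actual processing chain, which is more involved):

import numpy as np

def spectral_line_to_profile(raw_line):
    line = raw_line - raw_line.mean()          # remove the DC offset
    profile = np.abs(np.fft.rfft(line))        # assumed stand-in for the real reconstruction
    return 20 * np.log10(profile + 1e-12)      # log scale for display

raw = np.random.rand(2048)                     # placeholder for one camera line
preview = spectral_line_to_profile(raw)        # this is what we'd want to plot "live"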

 

Rob

Message 13 of 18

Hi,

 

How fast are you trying to take images?  Also, the less image manipulation you do, the faster your code will execute, so the more pixels you get from your camera at one time the better.  If you have to take several arrays and piece them back together, your code is going to run slower.
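As a rough illustration of the overhead I mean (a quick Python/NumPy timing sketch with made-up processing, not LabVIEW):

import numpy as np, time

frame = np.random.rand(601, 2048)

t0 = time.perf_counter()
whole = np.sqrt(frame)                                   # one pass over the full frame
t1 = time.perf_counter()
lines = [np.sqrt(frame[i]) for i in range(frame.shape[0])]
stitched = np.vstack(lines)                              # 601 per-line passes plus a stitch
t2 = time.perf_counter()

print(f"whole frame: {t1 - t0:.6f} s, per line + stitch: {t2 - t1:.6f} s")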

 

Regards,

 

Greg H.

Applications Engineer
National Instruments
Message 14 of 18

We are taking images at approximately 60 fps.  We are currently getting a 2048x601 array from the camera.

 

Is that last part really true?  Wouldn't it be faster to process multiple 2048x1 arrays than one 2048x601 array?  I didn't think stitching them together would be that memory intensive.

Message 15 of 18

Hi,

 

What I meant is that it would be faster to acquire one 2048x601 image than 601 2048x1 images and then put together 601 arrays of 2048 elements each.
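If you do end up assembling lines into a frame, preallocating the output array and writing each line into it (in LabVIEW terms, Replace Array Subset on a preallocated array rather than Build Array in a loop) avoids repeated copying.  A Python sketch of the difference, just to illustrate:

import numpy as np

n_lines, n_pixels = 601, 2048
lines = [np.random.rand(n_pixels) for _ in range(n_lines)]

# Growing: every concatenate reallocates and copies everything accumulated so far.
grown = np.empty((0, n_pixels))
for line in lines:
    grown = np.concatenate([grown, line[None, :]], axis=0)

# Preallocated: each line is written exactly once into an existing buffer.
frame = np.empty((n_lines, n_pixels))
for i, line in enumerate(lines):
    frame[i, :] = line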

 

Greg H.

Applications Engineer
National Instruments
Message 16 of 18

Hello,

 

OK, I agree it's faster to acquire the image.  The question is about the manipulation of that huge array versus the manipulation and stitching of smaller arrays.

 

I do believe, though, that I have to start completely from scratch with my code.  At this point, I'm just trying to plan what I would have to implement.

 

Rob

Message 17 of 18

Rob,

 

I still think it is going to be faster to process the entire picture as opposed to one line at a time, unless you are able to acquire and process at the same time; but even then you can be acquiring while processing the entire image.  Adding the arrays back together is just an extra step.
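If you do overlap acquisition and processing, the usual pattern is a producer/consumer pair with a queue between them (in LabVIEW, two parallel loops sharing a queue).  A minimal Python sketch of the idea, with placeholder acquisition and processing:

import queue, threading
import numpy as np

frames = queue.Queue(maxsize=4)       # bounded buffer between the two loops

def producer(n_frames=10):
    for _ in range(n_frames):
        frames.put(np.random.rand(601, 2048))   # placeholder for one camera grab
    frames.put(None)                            # sentinel: acquisition finished

def consumer():
    while True:
        frame = frames.get()
        if frame is None:
            break
        _ = np.sqrt(frame)                      # placeholder for the processing step

threading.Thread(target=producer).start()
consumer()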

 

Greg H.

Applications Engineer
National Instruments
Message 18 of 18