
Post-processing data from the module "IMAQdx Get Image Data"



We are developing a LabVIEW program to integrate two GigE cameras, one using the Mono12Packed pixel format and one using a 16-bit format. To improve the frame rate, we save only the raw data online via "IMAQdx Get Image Data" and then post-process the data into images using the following algorithm.


The current algorithm:


For the 12-bit camera, we combine value (0,0) and four bits of value (1,0) into pixel (0,0), while pixel (1,0) takes the remaining four bits of value (1,0) together with value (2,0). In other words, two pixels are generated from every three bytes of raw data.


For the 16-bit camera, we combine two adjacent 8-bit values into one pixel, i.e. pixel (0,0) is built from value (0,0) and value (1,0).


12 bit camera:  value (0,0)+value(1,0)+value(2,0) =  pixel (0,0) +pixel(1,0)

16 bit camera:  value (0,0)+value(1,0) =  pixel (0,0)
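Since LabVIEW is graphical, the current algorithm can only be illustrated here in textual form. The sketch below is written in Python purely for illustration; it assumes a common Mono12Packed byte layout (byte 0 carries the high 8 bits of the first pixel, the low nibble of byte 1 carries its low 4 bits), but the nibble order can differ between cameras, so check your camera's documentation.

```python
def unpack_mono12packed(raw):
    """Turn every 3 raw bytes into two 12-bit pixel values.

    Assumed layout (verify against your camera's manual): byte0 = high
    8 bits of pixel0, low nibble of byte1 = low 4 bits of pixel0, high
    nibble of byte1 = low 4 bits of pixel1, byte2 = high 8 bits of pixel1.
    """
    pixels = []
    for i in range(0, len(raw) - 2, 3):
        b0, b1, b2 = raw[i], raw[i + 1], raw[i + 2]
        pixels.append((b0 << 4) | (b1 & 0x0F))   # pixel (0,0)
        pixels.append((b2 << 4) | (b1 >> 4))     # pixel (1,0)
    return pixels


def unpack_mono16(raw):
    """Combine every two adjacent bytes into one 16-bit pixel.
    Little-endian byte order is assumed; your camera may differ."""
    return [raw[i] | (raw[i + 1] << 8) for i in range(0, len(raw) - 1, 2)]
```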


My question is: what's the difference between the algorithm above and the following algorithm?


12 bit camera:  value (1,0)+value(2,0)+value(3,0) =  pixel (0,0) +pixel(1,0)

16 bit camera:  value (1,0)+value(2,0) =  pixel (0,0)


Thank you very much in advance.

Message 1 of 6

@morningmaple wrote:

12 bit camera:  value (0,0)+value(1,0)+value(2,0) =  pixel (0,0) +pixel(1,0)

16 bit camera:  value (0,0)+value(1,0) =  pixel (0,0)


12 bit camera:  value (1,0)+value(2,0)+value(3,0) =  pixel (0,0) +pixel(1,0)

16 bit camera:  value (1,0)+value(2,0) =  pixel (0,0)


You are kidding, right?  Forget cameras, 12-bit, 16-bit.  I have 16 bit data saved in memory.  I choose to access it as an Array of U16.  The data saved (in hex) are 0100, 0302, 0504, ..., i.e. if considered as bytes, they are 00, 01, 02, 03, ....


Now I ask what's the difference between grouping the Bytes into Words starting at Byte 0 or Byte 1.  In other words, does 0100, 0302, etc. = 0201, 0403, ...?  


Answer -- probably not!
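Bob's thought experiment can be played out in a few lines (Python used purely as a stand-in for the LabVIEW diagram): group the same byte stream into little-endian words starting at byte 0 and at byte 1, and compare.

```python
data = bytes([0x00, 0x01, 0x02, 0x03, 0x04, 0x05])

# Little-endian words starting at byte 0: 0x0100, 0x0302, 0x0504
start0 = [data[i] | (data[i + 1] << 8) for i in range(0, len(data) - 1, 2)]

# Little-endian words starting at byte 1: 0x0201, 0x0403
start1 = [data[i] | (data[i + 1] << 8) for i in range(1, len(data) - 1, 2)]
```

The two groupings give completely different word values, which is exactly the point: shifting the starting byte scrambles every pixel.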


Bob Schor

Message 2 of 6

Thanks for your comments. I agree with you that there is an obvious difference from a mathematical viewpoint; I am sorry my question was not expressed clearly. What I am actually asking is how an array of raw data is generated. For example, we are converting one array of U8 raw data into one U16 matrix for one U16 image. Is there any possibility that this array of raw data comes from two images, such as the second half of the previous image and the first half of the current image? Do the first two U8 values in an array of raw data always correspond to the first pixel of one complete U16 image?


Thanks for your attention again.

Message 3 of 6

It may depend a bit on what you mean by "raw" (Cameras can have a "raw" mode that might be camera-specific, i.e. Pixel data + a small amount of "configuration info" to interpret the bits properly).


When you specify an Image Buffer as 8-bit Grayscale or 16-bit Grayscale, you are saying how many bits are used to represent "shades of gray".  Suppose you have a 100x100 pixel Image.  Black is usually 0, and white will be 255 (for 8-bit) or 65535 (for 16-bit).  So a byte representation of alternating black/white pixels will be 0, 255, 0, 255 (for an 8-bit Gray) or 0, 0, 255, 255, 0, 0, 255, 255 (for a 16-bit Gray).  I don't remember if you can specify the "endian" ordering of the Pixels in IMAQ, but that could introduce another (minor) complication in the 16-bit representation ...
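The byte patterns above, and the endianness caveat, can be made concrete with Python's `struct` module (again, just an illustration; nothing here is IMAQ-specific). Pure black/white hides the endianness issue because 0x0000 and 0xFFFF read the same in either byte order, so a mid-gray value is used to show the difference:

```python
import struct

# 8-bit grayscale: one byte per pixel, so alternating black/white is simply:
px8 = bytes([0, 255, 0, 255])

# 16-bit grayscale: two bytes per pixel.  Black/white (0 and 65535) give the
# same bytes in either byte order, but a mid-gray like 4095 (0x0FFF) does not:
little = struct.pack('<H', 4095)   # low byte first
big    = struct.pack('>H', 4095)   # high byte first
```

If the byte order the camera sends differs from the one IMAQ assumes, every pixel value is scrambled in exactly this way.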


Bob Schor

Message 4 of 6

Many thanks for your help. I know very little about the GigE protocol our thermal and hyperspectral cameras use. Our program receives 224*1024*1.5 = 344064 U8 raw values per loop from the 12-bit camera, which are then post-processed into a 224*1024 pixel image, and 480*640*2 = 614400 U8 raw values per loop from the 16-bit camera, which are then post-processed into a 480*640 = 307200 pixel image. What I am concerned about is:

1) How does "IMAQdx Get Image Data" determine the starting point of an array of raw data?

2) If the VI is aborted before the program reaches its normal end (e.g. before the communication ports are closed), a buffer overflow can probably occur. What happens when we restart the program?


Thanks for your attention again.

Message 5 of 6

The 16-bit camera situation is simpler to understand.  Your camera outputs 16-bit grayscale images, so you can do an IMAQ Create, specify Grayscale U16, do a Grab with that camera, and if you display the Image on an Image Display, you should be able to "see" your Image (right-click on the Image and choose "Zoom to Fit").  You should now be able to manipulate this Image as a 2D U16 array of 480 x 640.


Your 12-bit camera uses a byte-and-a-half to store the 12 bits.  What I don't know is if it "packs" two pixels in 3 bytes, or leaves "4-bit holes" in its Image representation.  I'm thinking (from your description) that it "packs" the pixels, so you only have to understand the packing and then undo it.  This isn't as hard as you might think, but you'll have to do some "experimentation" to figure out what is happening.
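A quick arithmetic check, using the byte count reported earlier in the thread, already settles the packed-versus-padded question (Python used only as a calculator here):

```python
width, height = 1024, 224

# Packed: two 12-bit pixels share three bytes (1.5 bytes/pixel).
packed_bytes = width * height * 3 // 2

# Padded: each 12-bit pixel stored in a full 16-bit word with a 4-bit hole.
padded_bytes = width * height * 2
```

`packed_bytes` comes out to 344064, matching the per-frame byte count quoted above, while the padded layout would need 458752 bytes, so the camera is indeed packing.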


Here are some tests.  Rig up the 12-bit camera and have it look at a white card against a black background (or anything with a large, contrasty "edge" to make it easy to tell whether we did the pixel-shuffling right).  Do a Snap into a U16 Grayscale Image and see what resolution is reported.  Also look at the Image.  If, as you say, the images come in packed two Pixels in three Bytes, I'd expect the resulting visual image to look "messed up", and the reported resolution to be not 224 x 1024, but probably 224 x 768.  If this is the case, then we should be able to "expand" the Image by transforming the pixels appropriately.  Here's a Snippet -- if you don't have LabVIEW 2019, you should still be able to find these functions and code this up.

[Snippet: Get 12-bit Image Resolution.png]

To unpack/repack the bits, you can use (from the Pixel Manipulation sub-Palette) IMAQ Image to Array and IMAQ Array to Image.  Image to Array lets you view the Array in various Formats.  I don't know if it uses the setting (U16) that you specified when you built the Image, or if it lets you choose the version at run time.  Since we want to work with bytes, U8 would be more convenient, but let's stick with U16.

So what do we want to do?  If a row is (as I'm guessing) reported as 768 U16 pixels, we want to convert it to 1024 U16 (or I16, which will be arithmetically easier) by the following algorithm:

  1. Take the first three Words into an Array of U16.  Add a fourth word of 0 and then reverse the Array.  Have you ever used TypeCast (found on the Numeric Palette, Conversion sub-Palette)?  TypeCast this 4-element Array of U16 into a U64, which should give you the (Hex) number 0000333322221111 (you won't see the leading zeros, of course).  So you now want to pull 12 bits off from the low end 4 times and put them into an I16 number, giving you (again, in Hex) 111, 221, 322, 333, and leaving 0000 (the "padding" we added) untouched.
  2. Continue doing this for the rest of the row.  You should end up with (768/3) x 4 = 1024 I16 numbers.
  3. Repeat for the other rows of the image.
  4. You now have 224 x 1024 I16 Pixel representations.  You can use IMAQ Array to Image, wiring this Array into the I16 input, and get an Image you should be able to see in IMAQ (and save as a PNG).
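The steps above can be sketched in text form (Python as a stand-in for the LabVIEW diagram). Combining the three words into one 48-bit integer is equivalent to the pad-reverse-TypeCast trick: LabVIEW's TypeCast is big-endian, so the padded, reversed array 0000, 3333, 2222, 1111 becomes the U64 0x0000333322221111, from which 12 bits are peeled off the low end four times.

```python
def unpack_three_words(w0, w1, w2):
    """Expand three 16-bit words into four 12-bit values.

    Equivalent to the pad/reverse/TypeCast approach: stitch the 48 data
    bits into one integer, then take the low 12 bits four times.
    """
    v = w0 | (w1 << 16) | (w2 << 32)     # 0x333322221111 in the example
    out = []
    for _ in range(4):
        out.append(v & 0xFFF)            # low 12 bits: 111, 221, 322, 333
        v >>= 12
    return out


def unpack_row(words):
    """Convert one 768-word row into 1024 12-bit pixel values."""
    out = []
    for i in range(0, len(words), 3):
        out.extend(unpack_three_words(words[i], words[i + 1], words[i + 2]))
    return out
```

Running `unpack_three_words(0x1111, 0x2222, 0x3333)` reproduces Bob's worked example, and a 768-word row expands to (768/3) x 4 = 1024 values, as in step 2.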

Why I16 and not U16?  The most-significant bit in an Integer is the "sign" bit.  A Uxx representation says "this is just another bit", but the Ixx representation uses this to distinguish between +1 and -1 (there is no -1 in a U16, instead it is 65535, one less than 2^16).
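The U16-versus-I16 distinction is easy to see by reinterpreting the same two bytes both ways (Python again, for illustration):

```python
import struct

raw = b'\xff\xff'                  # the same two bytes, read two ways:
(u,) = struct.unpack('<H', raw)    # as unsigned 16-bit -> 65535
(s,) = struct.unpack('<h', raw)    # as signed 16-bit   -> -1
```

A 12-bit pixel value (0 to 4095) never touches the sign bit of an I16, which is why the signed representation is safe here and arithmetically more convenient.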


Go ahead and see if you can convert images from your 12-bit camera into something IMAQ can display.  If it works, you can mark one (or more) of my posts as the Solution.  If it doesn't, please reply, attaching a high-contrast "Snap" from your 12-bit camera (try to align the edges of the image with the edges of the camera -- we want to "capture" edges), as well as the code that you tried (in case I gave you some very wrong suggestions).


Bob Schor


Message 6 of 6