Why is my 16-bit monochrome image displayed/saved distorted by LabVIEW?

Hi, I'm using LabVIEW 2010 with the latest IMAQ (not the full Vision suite). I'm trying to display and save 16-bit monochrome images from my x-ray detector, but the images displayed in LabVIEW look distorted. With the testing tool from the frame grabber vendor, the same frames display and save fine as 16-bit unsigned (0-65535).

 

I know that older versions of LabVIEW and Vision had problems handling 16-bit images, but I was told that full support for 16-bit unsigned data was added in the 2010 version.

 

I've attached a comparison between a LabVIEW-saved TIFF and a normal TIFF of the same object. I already wired "grayscale 16 unsigned" into "IMAQ Create" and chose to display the data before using "IMAQ Write File 2" to write it to TIFF.

 

The LabVIEW-saved frame can still be displayed normally if I open it in ImageJ like this: ImageJ -> Open -> Raw, choosing 16-bit signed and little-endian.
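In case it helps to see what those ImageJ import settings are actually doing, here is a quick numpy sketch (Python, not LabVIEW) that reinterprets the same raw bytes two different ways; the file name and dimensions are placeholders for your actual frame.

```python
import numpy as np

# Placeholder file name and frame size -- substitute your own.
RAW_FILE = "frame.raw"
WIDTH, HEIGHT = 1024, 1024

raw = np.fromfile(RAW_FILE, dtype=np.uint8)

# Interpretation 1: 16-bit unsigned, big-endian (">u2").
as_be_u16 = raw.view(">u2").reshape(HEIGHT, WIDTH)

# Interpretation 2: 16-bit signed, little-endian ("<i2") -- the same
# settings as the ImageJ "Open -> Raw" dialog described above.
as_le_i16 = raw.view("<i2").reshape(HEIGHT, WIDTH)

# If one interpretation looks right and the other looks distorted, the
# pixel data itself is fine and the problem is only how the two bytes
# of each pixel are being interpreted.
print(as_be_u16.min(), as_be_u16.max())
print(as_le_i16.min(), as_le_i16.max())
```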

 

I know there was a messy workaround where you manually convert the grayscale values of the frame to normalize them into the 16-bit unsigned range, but is there a built-in solution in LabVIEW 2010? Thanks a lot.

Message 1 of 4

It looks strictly like byte order.  If you cut your histogram in half and swapped the halves, one would look like the other.

 

You even said so yourself: when you choose little-endian, it looks okay. Your frame grabber must be using the opposite endianness from LabVIEW's native byte order.

 

If you need to swap the byte order of data within LabVIEW, look at the Numeric >> Data Manipulation palette.
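To make that concrete, here is a minimal sketch (in Python/numpy rather than LabVIEW) of what a byte-swap step does to 16-bit pixels; the pixel values are arbitrary and only there to show the effect.

```python
import numpy as np

# Stand-in for a few 16-bit pixels as they arrive from the grabber.
frame = np.array([0x0102, 0xFF00, 0x1234], dtype=np.uint16)

# Swap the two bytes of every 16-bit value -- the same effect as the
# byte-swapping functions on the Data Manipulation palette.
swapped = frame.byteswap()

print([hex(v) for v in frame])    # ['0x102', '0xff00', '0x1234']
print([hex(v) for v in swapped])  # ['0x201', '0xff', '0x3412']
```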

Message 2 of 4

Thanks Ravens Fan, it is indeed a byte-order problem.

 

Just to clarify: what exactly does LabVIEW do when handling a 16-bit image? What kind of data does it use, 16-bit signed or unsigned?

 

For example, my camera sends out a 16-bit data stream; should I wire "16-bit unsigned" or "16-bit signed" into IMAQ Create? And to display it correctly on the front panel, should I do the conversion discussed in this knowledge base article?

 

I guess my confusion starts at the very beginning: which part causes this byte-order problem, my frame grabber or LabVIEW? I'm assuming my camera outputs 16-bit unsigned data, so I guess I need a conversion to 16-bit signed to display it correctly in LabVIEW, but then when I save the image in the correct range I need to convert back to 16-bit unsigned?
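To illustrate what that signed/unsigned round trip would amount to, here is a minimal numpy sketch of a half-scale offset mapping; this is a generic technique for fitting unsigned data into a signed 16-bit container, not necessarily the exact method in the knowledge base article.

```python
import numpy as np

def u16_to_i16(pixels_u16):
    # Map 0..65535 unsigned values into a signed 16-bit container by
    # subtracting a half-scale offset (an assumed convention for this sketch).
    return (pixels_u16.astype(np.int32) - 32768).astype(np.int16)

def i16_to_u16(pixels_i16):
    # Inverse mapping: add the offset back before saving so the file
    # is in the camera's original 0..65535 range.
    return (pixels_i16.astype(np.int32) + 32768).astype(np.uint16)

# The round trip is lossless, so displaying in one representation and
# saving in the other would not change the pixel values.
camera = np.array([0, 1000, 32768, 65535], dtype=np.uint16)
assert np.array_equal(i16_to_u16(u16_to_i16(camera)), camera)
```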

 

Message 3 of 4

I wasn't thinking that it was a signed vs. unsigned issue, just a byte order problem.  But that link you attached seems to match the problem you are seeing pretty well.

 

I'm not familiar with the IMAQ functions, so I can't add any more information than what that article already states.  But it does seem like a mismatch between how your frame grabber creates the data and how LabVIEW interprets it.

 

You might just need to create a set of VIs that does the conversion each way, so that LabVIEW can display the data correctly while you still acquire the data in the way the frame grabber wants to deliver it.
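As a sketch of the "conversion each way" idea (in Python/numpy rather than as a pair of VIs), the helpers below normalize the byte order on the way in and restore it on the way out; the big-endian assumption about the grabber and the function names are made up for the example.

```python
import numpy as np

def grabber_to_native(frame_bytes, width, height):
    # Reinterpret the grabber's raw buffer (assumed big-endian here;
    # check what your hardware actually sends) as a native-order uint16
    # image suitable for display.
    return np.frombuffer(frame_bytes, dtype=">u2").reshape(height, width).astype(np.uint16)

def native_to_grabber(img):
    # Convert a native-order uint16 image back to the grabber's byte
    # order, e.g. before handing it to a tool that expects that order.
    return img.astype(">u2").tobytes()

# The round trip leaves the raw bytes untouched.
raw = np.arange(16, dtype=np.uint16).astype(">u2").tobytes()
assert native_to_grabber(grabber_to_native(raw, 4, 4)) == raw
```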

Message 4 of 4