08-19-2019 08:41 AM
Ok, so I use the IMAQ ImageToArray VI, which returns a 2D array of pixel values as 32-bit floating-point numbers.
My problem is that I'm having difficulty understanding how the pixel colour values are represented as 32-bit floating-point values, and how I can convert them back to a colour.
Thanks,
Jithesh Srinivas
08-19-2019 09:37 AM
The first question to ask, no, the second question to ask is why you converted an IMAQ Image into a 2D array of 32-bit floats. The first question is whether you know anything about image representation. Do you understand that images come as Color or Gray Scale (sometimes called Black and White, or B&W, though it really means "shades of grey")? You could, of course, use only Black, represented by 0, and White, represented by having all the bits set: either 255 for a U8 or 65535 for a U16.
The most important observation is that you (like so many new members of the LabVIEW Forum) asked a question but failed to post your VI, so we could not see what you are trying to do and make a better guess at how to help you.
Are you starting with an RGB (color) Image? If so, why convert it to a Float? Why not break it apart into its three R, G, and B color components (as three U8 arrays)? What does the Image represent? Most "ordinary" cameras only capture 256 levels of intensity (a U8), although some can render a U16 B&W image.
If you've got a B&W Image and you want to do some computation on it (like blurring, un-blurring, histogramming, etc.), you may want to use a Float so you can let the extra bits of precision help you with rounding before converting back to B&W.
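As a sketch of that idea (in NumPy, outside LabVIEW, purely for illustration; the halve-and-offset "filter" below is a made-up stand-in for a real blur kernel):

```python
import numpy as np

# Hypothetical 8-bit greyscale image (values 0..255)
img_u8 = np.array([[0, 64], [128, 255]], dtype=np.uint8)

# Do the computation in float so intermediate results keep precision...
img_f = img_u8.astype(np.float32)
filtered = img_f / 2.0 + 10.0   # stand-in for a real blur kernel

# ...then round and clip back into the 0..255 range of a U8 image
result = np.clip(np.rint(filtered), 0, 255).astype(np.uint8)
```

Working in float means rounding happens once, at the end, instead of accumulating at every intermediate step.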
On the other hand, subtle changes in Image Intensity can show up a lot clearer if you "expand the mapping" by displaying the B&W image "in color", as there are 2^24 colors (in principle) and only 2^8 (or 2^16) shades of grey. But this is a very fraught process -- I've tried it on occasion, and have rarely been satisfied with the outcome.
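To make the "expand the mapping" idea concrete, here is a toy false-color ramp (a NumPy sketch, not how IMAQ's own color palettes are implemented; real tools use more carefully designed colormaps):

```python
import numpy as np

def grey_to_false_color(img_u8):
    """Map an 8-bit grey image onto a simple blue-to-red ramp (toy colormap)."""
    t = img_u8.astype(np.float32) / 255.0          # normalise to 0..1
    r = (255 * t).astype(np.uint8)                  # dark pixels -> blue
    g = np.zeros_like(img_u8)
    b = (255 * (1.0 - t)).astype(np.uint8)          # bright pixels -> red
    return np.stack([r, g, b], axis=-1)             # H x W x 3 RGB array
```

Even this crude ramp shows why the mapping is fraught: the eye perceives the blue and red ends with very different brightness, so a poorly chosen colormap can hide or exaggerate intensity differences.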
So learn a lot more about Images than you already know, tell us a lot more about the images you are starting with (color? B&W? how many bits of intensity?), exactly what you are hoping to do with the image, and (most important!) attach your VI.
Bob Schor
08-20-2019 03:57 AM
Sorry, I should specify I am working with the "Compute Depth Image" Example VI which has two intensity charts which show the depth of the image.
This example uses IMAQ ImageToArray to display this chart, and I created an indicator to read out what I believe are the pixel values of the chart. I am trying to understand the values I am seeing; I believe it is an RGB graph.
08-20-2019 04:47 AM - edited 08-20-2019 04:53 AM
The IMAQ Float format is definitely not meant for RGB images. While it is theoretically possible to represent each color plane (R, G, B) in float format, that really wouldn't add much except in very esoteric applications.
If you have an RGB input format, it is most likely a U32 RGB format of some sort, which really consists of a U8 value per color plane and sometimes an optional alpha channel (transparency).
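For illustration, unpacking such a packed U32 pixel looks like this (the 0xAARRGGBB byte order shown is an assumption; check the documentation of your actual image format):

```python
# Hypothetical packed pixel in 0xAARRGGBB order (byte order is an
# assumption; the top byte would hold the optional alpha channel)
pix = 0x00A0B0C0

alpha = (pix >> 24) & 0xFF   # 0x00
red   = (pix >> 16) & 0xFF   # 0xA0
green = (pix >> 8)  & 0xFF   # 0xB0
blue  =  pix        & 0xFF   # 0xC0
```

Each plane is just a U8 in the 0..255 range, which is why converting the whole thing to a single float per pixel destroys the color information.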
IMAQ Vision assumes that you know a little more about image processing than basic school level. The example you used is meant to work with any type of input image, but at the point where the float array is extracted, the information is definitely a greyscale floating-point image: the VI IMAQ Get Depth Image from Stereo has stored the resulting depth map into the input image that was created earlier in the VI as "Depth Image", with the greyscale 32-bit float format.
So while that depth map is stored in an IMAQ image data element and can be displayed as a greyscale image, it is not really an image but simply a computed map of depth values, with one value for each input image pixel. No color information is present at this point in the program: you don't have red, green, or blue image depths, just a number indicating the computed depth of the two stereo-vision input images at that pixel position.
08-20-2019 05:34 AM
Ohhh... I just realised the values from ImageToArray were the depth values themselves... thank you for clearing that up.
Do you know how I would then be able to create a 3D representation using these z values, instead of the depth intensity graph I already have?
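For reference, the transformation being asked about can be sketched as follows (a NumPy sketch with a made-up 2x2 depth map; inside LabVIEW itself a 3D Surface graph typically accepts the 2D z array directly):

```python
import numpy as np

# Made-up 2 x 2 depth map standing in for the ImageToArray output
depth = np.array([[1.0, 2.0],
                  [3.0, 4.0]], dtype=np.float32)

rows, cols = depth.shape
x, y = np.meshgrid(np.arange(cols), np.arange(rows))

# One (x, y, z) point per pixel, ready for a 3D scatter or surface plot
points = np.column_stack([x.ravel(), y.ravel(), depth.ravel()])
```

The pixel row/column indices become the x and y coordinates, and the depth value at each pixel becomes z.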