I'm acquiring grayscale images with a Basler acA2000-340km camera with 12 bits of pixel depth.
I'm using LabVIEW 2012 and the latest IMAQ drivers.
I store them in 16-bit PNG files with the least possible compression (value of 1000) with the Write PNG File 2 VI.
When I check the maximum and minimum pixel values of an image with its histogram, I find that sometimes both the first and last 4 bits are used (whether I open the image as Grayscale I16 or Grayscale U16).
max value = 54842 = 1101011000111010
Moreover, if I open the same image with MATLAB, the maximum value in this case is 65455 = 1111111110101111.
I was expecting the image to be saturated, and the MATLAB value is closer to what I expected. But what I was really expecting is that either the first or last 4 bits (I don't know on which side the padding is done) wouldn't be used at all.
Could you shed some light on it?
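To make the bit patterns easier to compare, here is a quick sketch in Python (just as a neutral illustration language; the thread itself is about LabVIEW and MATLAB) that prints the two reported maxima as 16-bit binary strings and checks them against the two obvious padding schemes:

```python
# Sketch: inspecting which bits of a 16-bit pixel value are in use.
# The two values below are the maxima reported in the question.
def bits16(v):
    """Return the 16-bit binary string of an unsigned value."""
    return format(v & 0xFFFF, '016b')

labview_max = 54842   # maximum seen in the LabVIEW histogram
matlab_max = 65455    # maximum seen opening the same file in MATLAB

print(bits16(labview_max))   # 1101011000111010
print(bits16(matlab_max))    # 1111111110101111

# If the 12-bit camera data were padded on the left (value-preserving),
# every pixel would be <= 4095 and the top 4 bits would always be 0.
print(labview_max <= 4095)   # False: not left-padded 12-bit data

# If it were padded on the right (shifted left by 4), every pixel would
# be a multiple of 16 and the bottom 4 bits would always be 0.
print(matlab_max % 16 == 0)  # False: not a plain right-pad either
```

Neither check passes for these values, which is exactly the puzzle described above.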
I think it has to do with how you acquire and how you save. If you acquire as an I16 image, you must remain consistent with this. A negative number is stored as two's complement. Doing it for 8 bits, as that is easier to read and type: an I8 goes from -128 to 127, so -1 is represented as 11111111 (all ones) and -2 as 11111110 (all ones except the last bit). Putting your values into a VI and showing two indicators, one with I16 representation and one with U16, will show the difference.

The values you have concern me slightly, in that your camera's greyscale should be 0 to 4095 for 12 bits. The next thing to check is that the byte order in MATLAB is the right way round. I have seen this with TIFF files before, where MATLAB uses the opposite byte order to LabVIEW.
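Both effects mentioned above (signed vs. unsigned interpretation, and byte order) can be sketched with Python's `struct` module, again just as an illustration of what the same 16 bits look like under each reading:

```python
# Sketch: the same 16 bits interpreted as U16, as I16 (two's complement),
# and with the byte order swapped, as can happen between different readers.
import struct

raw = 0xFFFE  # all ones except the last bit

unsigned = struct.unpack('>H', struct.pack('>H', raw))[0]  # U16 view
signed = struct.unpack('>h', struct.pack('>H', raw))[0]    # I16 view
swapped = struct.unpack('<H', struct.pack('>H', raw))[0]   # bytes reversed

print(unsigned)  # 65534
print(signed)    # -2 (two's complement)
print(swapped)   # 65279 (0xFEFF)
```

The same bit pattern yields three very different numbers, which is why consistency in image type and byte order matters when comparing LabVIEW and MATLAB results.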
Thanks for the quick answer.
I believe I'm being consistent. I get the image type from the properties of the IMAQ session, and that's the value I use for the Grab Acquire VI and the Extract Tetragon VI. I think the type is Grayscale U16.
I was doing something wrong with the way I checked the values, though. The maximum value of the histogram is the number of times a pixel value is repeated, so that ~54k value wasn't the maximum pixel value after all.
Now, if I use the ImageToArray VI, I see the maximum value within the 12-bit range: 4090.
So in light of this, should I take it that LabVIEW is doing the padding on the left?
That is, a 111111111111 pixel value is stored as 0000111111111111, thus preserving the actual value of the pixel?
I'd think the image would then be dimmer, because the maximum value would be 4090 out of 65535 levels. But the actual image is bright.
So, in order to preserve the fidelity of the image, I would expect the padding to be done on the right, storing a 111111111111 pixel value as 1111111111110000.
As for how MATLAB deals with it, my guess is that MATLAB just recognises it is a uint16 image and stores the values. But to be certain I need to know how LabVIEW does the padding, because in MATLAB I'm getting values close to the maximum (2^16 - 1), which suggests padding on the right; yet they are odd, which would suggest padding on the left.
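For reference, here is a sketch (in Python, as a neutral illustration; this is an assumption about possible schemes, not confirmed from the IMAQ documentation) of the common ways a 12-bit sample can be placed into a 16-bit container, and what each implies for the observed values:

```python
# Sketch: three ways to place a 12-bit sample into 16 bits.
v12 = 0b111111111111  # 4095, full-scale 12-bit value

left_pad = v12                         # value preserved, image looks dim
right_pad = v12 << 4                   # full brightness, low 4 bits always 0
replicated = (v12 << 4) | (v12 >> 8)   # high bits copied into the low bits

print(format(left_pad, '016b'))    # 0000111111111111
print(format(right_pad, '016b'))   # 1111111111110000
print(format(replicated, '016b'))  # 1111111111111111

# A plain right-pad makes every pixel a multiple of 16 (low nibble zero),
# so the odd maximum reported by MATLAB rules it out:
print(65455 % 16)  # 15, so not a simple shift left by 4
```

Bit replication (or some scaling step elsewhere in the pipeline) is one way to get odd values near 2^16 - 1, which might reconcile the two observations; but which scheme LabVIEW or MATLAB actually applies here would need to be confirmed from their documentation.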