11-02-2010 05:16 PM
I was using my FireWire camera to take unsigned 16-bit grayscale pictures and processing the data in MATLAB. However, the data become incorrect when the camera's integration time is large. I found two reasons:
1. IMAQdx decodes the 16-bit monochrome data as signed 16-bit, so any pixel whose value exceeds 32767 (the signed 16-bit maximum) comes back as a negative number.
2. IMAQdx then automatically adds a constant, equal to the magnitude of the largest negative pixel value, to every pixel so that the smallest value becomes zero. This shifts the rest of my data and makes the original values almost impossible to recover.
I think the only way to solve this is to make LabVIEW read the image as unsigned data, but I don't know whether that is possible, since it does not appear as an option in the IMAQdx manual.
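For anyone hitting only the first problem: if the driver did nothing more than misinterpret the sign bit, the wrap-around can in principle be undone on the MATLAB side by reinterpreting the bits rather than converting the values. This is a minimal sketch of my own (the array name and sample values are made up for illustration); it does not undo the constant offset described in point 2, which is why fixing the interpretation at the driver level is still the better approach.

% Recover unsigned 16-bit pixels that were merely read back as signed 16-bit.
signedImg   = int16([1000, 20000, -25536, -1]);  % hypothetical pixel values as returned
unsignedImg = typecast(signedImg, 'uint16');     % bit-for-bit reinterpretation, no value conversion
disp(unsignedImg)                                % prints: 1000  20000  40000  65535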
Any suggestions would be appreciated.
11-02-2010 06:03 PM
Hi JYang,
FireWire (especially early on) had some ambiguities about the endianness, signedness, and bit depth of the pixel data returned. IMAQdx makes its best guess for 16-bit data using specific registers defined by the IIDC specification, along with other information it can deduce about the camera. Sometimes, however, these guesses are incorrect. If you go to the Acquisition Attributes tab for your camera in MAX, you should be able to change these settings to match how the camera's data is meant to be interpreted.
Eric
11-05-2010 04:41 PM
Problem solved. Thanks, Eric.