I have a problem that I just cannot figure out myself.
I am working on a LabVIEW analysis program for measuring lymphatic flow with a CCD camera. The output from this camera (and its software) is 16-bit TIFF pictures.
When I started the programming a while ago I imported the TIFF files as signed (I16) with "IMAQ ReadFile". The pictures seemed to look fine, and all my picture sequences looked ok, so I assumed the pictures were I16. The subsequent analysis with the I16 pixel values also worked fine.
Now I have some other sequences, and these pictures behave strangely. Half of the pictures look ok, and the other half have a strange pixel distribution. I then wondered whether the pictures were actually U16 instead of I16. When I open the files in Windows, Photoshop, or ImageJ, they all report U16. I found this post (http://forums.ni.com/t5/Machine-Vision/open-greyscale-U16-image-problem/td-p/2414022), and it seems that LV misinterprets U16 TIFF files as I16, so now I am "convinced" that my pictures are U16.
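One way to check this suspicion directly is to read the SampleFormat tag (339) out of the TIFF header yourself. Here is a minimal sketch in Python, assuming a single-IFD TIFF; `tiff_is_unsigned` is a name I made up, not part of any library:

```python
import struct

def tiff_is_unsigned(data: bytes) -> bool:
    """Return True if the first IFD declares unsigned pixel data.

    SampleFormat (tag 339): 1 = unsigned integer, 2 = signed integer.
    If the tag is absent, the TIFF 6.0 default is unsigned (1).
    """
    # Byte order mark: b"II" = little-endian, b"MM" = big-endian.
    endian = {b"II": "<", b"MM": ">"}[data[:2]]
    magic, ifd_offset = struct.unpack(endian + "HI", data[2:8])
    assert magic == 42, "not a TIFF file"
    (n_entries,) = struct.unpack(endian + "H", data[ifd_offset:ifd_offset + 2])
    for i in range(n_entries):
        off = ifd_offset + 2 + 12 * i
        tag, typ, count, value = struct.unpack(endian + "HHI4s", data[off:off + 12])
        if tag == 339:  # SampleFormat
            # Type 3 (SHORT) values are stored inline in the first 2 bytes.
            (fmt,) = struct.unpack(endian + "H", value[:2])
            return fmt == 1
    return True  # tag absent -> unsigned by default
```

Since the default is unsigned when the tag is missing, this may be exactly the situation where viewers like ImageJ report U16 while a reader that assumes signed data sees I16.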
I then tried to open the files as I16 and convert to U16 with this code:
This seems to work ok for some files, but not all (same sequence, recorded with the same settings). When I run the code for "Picture 1" it looks like this:
When I run the code for "Picture 2" it looks ok:
And just to mess everything up, when I open the two TIFF files in ImageJ and draw a histogram I get this:
"Picture 1" (on the left) looks perfectly normal, and notice that the mean pixel values are almost the same (as expected for pictures recorded at 1-second intervals).
I have also attached the two pictures just for reference.
I really hope some of you can help with this.
What springs to my eye are the minimum and maximum values in the two images you have chosen.
Picture 1 actually has negative values while Picture 2 does not. The conversion is not working; you only see the effect of the failed conversion on the first picture, because that is the only one containing negative values.
Notice that for Picture 1 the maximum value changes from 1736 to 62156 (it is scaled to the new data type), while for the second image that does not happen: the value range stays the same. So it looks to me like LabVIEW sees image 2 as U16 while image 1 is seen as I16. I think it has to do with what is in the header of the TIFF file. I believe there must be something in the headers of these two images that LabVIEW uses but the Windows file viewer and ImageJ do not.
If I were the one working around the issue of images saved in the wrong format, I would do the following:
1) In a loop browse through all the images acquired.
2) Pull out the histogram and look at the minimum value
3) If the minimum value is below 0, go to 3.a; otherwise go to 4
3.a) Pull out the array values from the image
3.b) Upconvert array to I32
3.c) Add a constant of 32767 to all elements of the array
3.d) Convert the array back to a U16 image
4) Save image as U16
5) Loop back to 1
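A note on step 3.c: the bit-exact way to recover U16 values that were read as I16 is to add 65536 to the negative values (equivalently, mask with 0xFFFF), rather than adding a fixed 32767, which shifts all pixel values and changes the absolute intensities. A minimal Python sketch of the bit-exact version (the function name is my own):

```python
def i16_to_u16_exact(pixels):
    """Recover U16 pixel values that were mis-read as I16.

    Reading U16 data as I16 wraps every value >= 32768 down to
    value - 65536 (i.e. negative). Masking with 0xFFFF undoes the
    wrap bit-exactly; values that were already non-negative pass
    through unchanged.
    """
    return [p & 0xFFFF for p in pixels]
```

The masking approach reproduces the original U16 data exactly, so no offset artifacts are introduced; a fixed additive shift preserves relative contrast but not absolute pixel values.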
I tried to do the conversion manually like this and it seems to work pretty well.
Let me know how this works for you.
National Instruments Denmark
The solution you gave me seems to work just fine!
I have a small difference in pixel mean values between the ImageJ histogram and the converted (I16-I32-U16) TIFF files, but I guess it is due to the conversion(s).
I also tried to find differences in the TIFF headers, without success, but I agree that the problem must be connected to the header.
Thanks for the support!
I had the same issue too, and Anders' code worked only partially. In many cases I got pretty weird conversions, especially when the images had areas close to saturation (e.g. below).
What I did as a workaround was to read the images as raw files, since I know what I acquired.
Note the offset to be used. I think it is file dependent.
Hope this helps.