I have a camera that saves 12-bit TIFF files. A recorded TIFF file can be opened in ImageJ with all pixel values preserved. If I display it in LabVIEW using IMAQ it is shown correctly, and if I use IMAQ ImageToArray all values are correct too. However, if I save it as a new file using IMAQ Write File 2, the saved file loads back into LabVIEW with correct pixel values, but if I open that saved file in ImageJ, all values are converted to 16-bit incorrectly. For example, the value 62 (00111110) becomes 1000000000111110: when converting to 16-bit, the most significant bit is apparently set to 1 instead of 0. I'm confused because the same saved image, when loaded in a LabVIEW VI, still reads 62.
Is there a way to fix this? Could it be a problem with ImageJ rather than LabVIEW? I attached the original file and the saved file here.
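For what it's worth, the reported bit pattern is consistent with a signed-versus-unsigned mismatch: 1000000000111110 is 32830, which is exactly 62 + 32768. When ImageJ treats 16-bit data as signed, it shifts it by +32768 to fit its unsigned display range. A minimal Python sketch of that mapping (the helper name is mine, not part of any API):

```python
# Sketch of the suspected mismatch: ImageJ maps signed 16-bit pixel data onto
# an unsigned display range by adding a +32768 offset. This is an assumption
# inferred from the bit pattern reported above, not from the attached files.

def imagej_display_value(pixel: int) -> int:
    """Apply the +32768 offset, wrapping at 16 bits."""
    return (pixel + 32768) & 0xFFFF

pixel = 62
shown = imagej_display_value(pixel)
print(f"{pixel} -> {shown} ({shown:016b})")  # 62 -> 32830 (1000000000111110)

# The offset is its own inverse modulo 2**16, so applying it again
# recovers the original value:
print(imagej_display_value(shown))  # 62
```

This matches the example in the post exactly, which suggests the saved TIFF is tagged as a signed pixel format that ImageJ then re-offsets for display.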
Since the images are saved correctly and then displayed correctly in LabVIEW even after writing the file, I would guess that ImageJ simply processes images differently and that there is most likely not a problem in LabVIEW itself.
How exactly are you saving the image in LabVIEW?
I have previously run into a problem when using ImageJ: I was saving I16 images from LabVIEW, but when opening them and saving them in ImageJ it seemed as if the image was being saved as U16 by default.
Thank you so much for your reply. I attached the portion of the code for saving the image. Basically, I wired the image to the image terminal of the IMAQ Write File 2 VI from the Vision and Motion palette and ran into the problem described above. I then created another IMAQ image, converted the original image to an array and back to an image, and saved that image the same way. I even verified the pixel values with a third ImageToArray call, and they are all correct. The values are also correct when I read the saved image back in a different VI.
I didn't perform any operation on the image or re-save it in ImageJ. I simply opened the saved image in ImageJ, and all pixel values were scrambled in the way I described above. I think it could be the sign issue you mentioned, since in the 16-bit binary value ImageJ for some reason sets the most significant bit to 1. But I do set the data type to I16 in ImageJ as well.
Any more information would be very much appreciated. Thank you very much!
After looking at the attached screenshot, I have a question for you: what is being wired into the IMAQ Create for Source2? I would assume it is an Enum with I16 selected, correct?
Does the same thing happen if you use U16 in both LabVIEW and ImageJ, or any lower bit depth? Finally, if you save the image in a different file format, do you get similar behaviour?
Yeah, I solved my problem. I needed to convert the image pixels to U32, and then the file is read by ImageJ correctly. Thank you so much for your help!
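For later readers, a hedged guess at why this works: saving as U32 presumably makes IMAQ write an unsigned pixel format, so ImageJ no longer applies its signed-data display offset, and 12-bit camera data (0..4095) fits in the positive range with plenty of headroom. The sketch below (helper name is mine, not an IMAQ function) just shows the values survive the widening unchanged:

```python
# Why widening 12-bit data to U32 sidesteps the sign issue: the maximum
# 12-bit value (4095) only sets bit 11, so in a 32-bit unsigned word there
# is no sign bit for a reader to misinterpret.

MAX_12BIT = 4095

def widen_to_u32(pixels):
    """Hypothetical helper: 12-bit values are non-negative, so they map
    to 32-bit unsigned storage unchanged."""
    return [p & 0xFFFFFFFF for p in pixels]

row = [62, 1023, MAX_12BIT]
print(widen_to_u32(row))  # [62, 1023, 4095]
```

The trade-off is that each pixel now takes 4 bytes instead of 2, so the saved files roughly double in size.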
It has been my pleasure; glad your application is now working correctly!
I also have this problem: I was saving I16 images from LabVIEW, but when opening them and saving them in ImageJ it seemed as if the image was being saved as U16 by default.
How can I solve this? Thank you very much.
Have you tried the steps outlined above, or the suggestion of using U32 made by soljiang?
This is a much older post, and per our community guidelines I would recommend creating a new thread instead of posting on this one. You'll likely get more input on a newer thread as well.