08-16-2021 10:55 AM - edited 08-16-2021 11:01 AM
I am trying to understand why LabView shows one set of values for an image, while OpenCV shows another set of values.
I have two U16 Grayscale PNG images that I am trying to combine vertically to create one continuous image. The majority of the pixels are near zero or low-valued, with the ROI having pixel values in the middle of the U16 range. In Python, this is achieved by reading the files using OpenCV, combining the images using NumPy, and then using Matplotlib to display the values:
import cv2
import numpy
import matplotlib.pyplot as plt

image_one = cv2.imread(r"..\filename_one.png", cv2.IMREAD_UNCHANGED)
image_two = cv2.imread(r"..\filename_two.png", cv2.IMREAD_UNCHANGED)
combined_image = numpy.concatenate((image_one, image_two), axis=0)

plt.figure(figsize=(15, 15), dpi=18)
plt.imshow(combined_image, cmap="gray", vmin=0, vmax=65535)  # Sliced to show the ROI
As seen above, this shows the two halves with different dynamic ranges, resulting in different apparent exposures. To normalize the images, we can try rescaling each one to take advantage of the full dynamic range.
rescaled_one = ((image_one - image_one.min()) / (image_one.max() - image_one.min())) * 65535
rescaled_two = ((image_two - image_two.min()) / (image_two.max() - image_two.min())) * 65535
combined_rescaled = numpy.concatenate((rescaled_one, rescaled_two), axis=0)

plt.figure(figsize=(15, 15), dpi=18)
plt.imshow(combined_rescaled, cmap="gray", vmin=0, vmax=65535)  # Sliced to show the ROI
Rescaled Image - Dual Exposure
This still shows the same issue with the images.
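Per-image rescaling keeps the two exposures different because each half is stretched against its own extremes. One way to avoid that on the Python side would be to rescale both images against a shared minimum and maximum — a sketch, assuming `image_one` and `image_two` are the U16 arrays read above (`rescale_jointly` is a hypothetical helper, not part of the original code):

```python
import numpy as np

def rescale_jointly(a, b, out_max=65535):
    """Rescale two arrays with a shared min/max so their gray levels stay comparable."""
    lo = min(a.min(), b.min())
    hi = max(a.max(), b.max())
    scale = out_max / (hi - lo)
    a16 = ((a.astype(np.float64) - lo) * scale).astype(np.uint16)
    b16 = ((b.astype(np.float64) - lo) * scale).astype(np.uint16)
    # Stack vertically, same as numpy.concatenate(..., axis=0) above.
    return np.concatenate((a16, b16), axis=0)
```

Because one scale factor is applied to both halves, relative brightness between the images is preserved instead of being equalized away.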
In LabView, to combine the images vertically, I adapted a VI that was published to stitch images horizontally: https://forums.ni.com/t5/Example-Code/Stitch-Images-Together-in-LabVIEW-with-Vision-Development-Modu...
The Final VI Block Diagram looks as follows:
VI Block Diagram - Vertically combine images using IMAQ
The Output you see on the Front Panel:
Front Panel - Singular continuous image
The dual exposure issue appears to have disappeared and the image now appears as a single continuous image. This didn't make any sense to me, so I plotted the results using Plotly as follows:
import plotly as plty
import plotly.subplots  # ensure the subplots submodule is loaded
import plotly.graph_objects as go

fig = plty.subplots.make_subplots(1, 1, horizontal_spacing=0.05)
fig.append_trace(go.Histogram(x=image_one.ravel(), name="cv2_top", showlegend=True, nbinsx=13107), 1, 1)
fig.append_trace(go.Histogram(x=image_two.ravel(), name="cv2_bottom", showlegend=True, nbinsx=13107), 1, 1)
fig.append_trace(go.Histogram(x=lv_joined[:1024, :].ravel(), name="LabView_joined_top", showlegend=True, nbinsx=13107), 1, 1)  # First Image
fig.append_trace(go.Histogram(x=lv_joined[1024:, :].ravel(), name="LabView_joined_bottom", showlegend=True, nbinsx=13107), 1, 1)  # Second Image
fig.update_layout(height=800)
fig.show()
Histogram - Python vs Labview respective halves - Focus on Low pixels
Here it shows that the Second Image's pixel values have been "compressed" to match the distribution of the First Image. I don't understand why this is the case. Have I configured something wrong in LabView, or have I not considered something when reading in a file with OpenCV?
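To quantify that "compression", the two bottom halves could be compared directly — a sketch, assuming `image_two` and `lv_joined` (the array exported from LabView) are available as above; if LabView applied a linear rescale, the ratio over the nonzero pixels should be roughly constant (`estimate_scale` is a hypothetical helper):

```python
import numpy as np

def estimate_scale(reference, candidate):
    """Estimate a linear scale factor k such that candidate ≈ k * reference."""
    ref = reference.astype(np.float64).ravel()
    cand = candidate.astype(np.float64).ravel()
    mask = ref > 0  # skip the dark background to avoid division by zero
    ratios = cand[mask] / ref[mask]
    # Median resists outliers; a small spread suggests a single linear rescale.
    return np.median(ratios), np.std(ratios)

# Hypothetical usage against the LabView output:
# k, spread = estimate_scale(image_two, lv_joined[1024:, :])
```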
Original Images:
Solved! Go to Solution.
08-16-2021 12:05 PM - edited 08-16-2021 12:05 PM
It looks like your .PNGs are 8-bit, but have a palette with 256 grayscales in the range {0; 65793; ...; 16777215}.
The two palettes are identical.
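This can also be checked from Python — a sketch using Pillow (not used elsewhere in this thread). For a grayscale palette, entry i packs into the 24-bit value i * 0x010101 = i * 65793, which reproduces the quoted range:

```python
from PIL import Image

def palette_as_24bit(img):
    """Return the palette of a mode-'P' image packed as 24-bit RGB integers."""
    pal = img.getpalette()  # flat list [R0, G0, B0, R1, G1, B1, ...]
    return [(pal[3 * i] << 16) | (pal[3 * i + 1] << 8) | pal[3 * i + 2]
            for i in range(len(pal) // 3)]

# Demo on a synthetic 8-bit image with a 256-entry grayscale palette;
# for the real files one would use Image.open(...) on the PNG instead.
demo = Image.new("P", (4, 4))
demo.putpalette([v for i in range(256) for v in (i, i, i)])
values = palette_as_24bit(demo)
print(values[0], values[1], values[255])  # 0 65793 16777215
```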
08-16-2021 01:36 PM
I would normalize like this
08-16-2021 01:49 PM
Okay, I'm confused as to how they are 8-bit with the grayscale palette as mentioned, since in Python:
Moreover, if it is an 8-bit image, wouldn't the values be between 0 and 255? How could the values be any larger? Wouldn't that suggest that the image is U32 instead?
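One possible reconciliation (an assumption based on the palette description above, not on the original files): the image itself stores 8-bit indices 0-255, but each index maps through the palette to a gray RGB triple, and packing that triple into one integer multiplies the index by 0x010101 = 65793 — so values up to 16777215 can appear even though only 256 distinct levels exist. A quick check of the arithmetic:

```python
# Packing a gray RGB triple (i, i, i) into one 24-bit integer is the same
# as multiplying the 8-bit index i by 0x010101 (= 65536 + 256 + 1 = 65793).
for i in (0, 1, 128, 255):
    packed = (i << 16) | (i << 8) | i
    assert packed == i * 65793
print(255 * 65793)  # 16777215, the top of the reported range
```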
Separately, I was able to obtain this same dual exposure using non-IMAQ modules:
Here it shows it as having a bit depth of 24, which fits into the range that you mentioned. However, now I am confused: when using the IMAQ modules, why does it read the file type as U16 and then rescale the values? How do I preserve the 24-bit image using IMAQ modules?
08-16-2021 03:06 PM - edited 08-16-2021 03:23 PM
nevermind
08-17-2021 04:25 AM
@kirkland77 wrote:
Separately, I was able to obtain this same dual exposure using non-IMAQ modules:
are those the same files which you attached here, or are these different files?
@kirkland77 wrote:
that looks like an OpenG sub-VI, probably from the image tools -
when you use "Picture to Pixmap", the output is 24-bit
I'd rather do something like this
but moreover, I would try to get rid of the PNG files and read the image data as binary files
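Exchanging the data as headerless binary could be sketched like this on the Python side (file name and dimensions are assumptions; LabView would then read the raw U16 data with its binary file functions, given the same width, height, and byte order):

```python
import os
import tempfile
import numpy as np

height, width = 1024, 1280  # assumed dimensions of one frame

# Write uint16 pixel data as raw little-endian binary with no header.
image = (np.arange(height * width, dtype=np.uint32) % 65536).astype("<u2").reshape(height, width)
raw_path = os.path.join(tempfile.gettempdir(), "image_one.raw")  # hypothetical location
image.tofile(raw_path)

# Read it back; shape and dtype must be supplied, since a raw file carries no metadata.
restored = np.fromfile(raw_path, dtype="<u2").reshape(height, width)
```

Because there is no container format or palette involved, both sides see the exact same pixel values, which removes one variable from the comparison.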
08-17-2021 06:47 AM
or let's use the python node to use opencv-python directly to extract the image data as numpy array and pass this array to LabView
08-17-2021 11:39 AM
Yes they are the same images as far as I know:
but moreover, I would try to get rid of the PNG files and read the image data as binary files
I will try to read them in as a binary file as well.
08-17-2021 11:42 AM
I was able to observe the same dual exposure using the Python Script as well:
So, to clarify, I am trying to use the IMAQ functions to read in the image. So, if the image is U16 and the values are not being altered in Python, why does IMAQ rescale these values? How does it know to rescale?
08-17-2021 01:28 PM
@kirkland77 wrote:
So, to clarify, I am trying to use the IMAQ functions to read in the image. So, if the image is U16 and the values are not being altered in Python, why does IMAQ rescale these values? How does it know to rescale?
can you narrow it down:
does IMAQ already load an altered image, or does this "rescale process" happen when "the smaller image is copied into the bigger one" at step 7 in
@kirkland77 wrote:
Histogram - Python vs Labview respective halves - Focus on Low pixels
Here it shows that the Second Image's pixel values have been "compressed" to match the distribution of the First Image.
are you worried that this is rather accidental than intentional ?