I'm getting unexpected results when plotting a 2D array as a flattened pixmap. The problem occurs in two self-made subVIs and a test VI I made to do some basic image processing on a video signal I get from an IDS camera. In detail, I run an averaging function over several frames, and I also want to be able to downsample each frame.
I store the luminance values of each frame in a shift register and sum over N frames. After that, I divide by N to calculate the averaged luminance values.
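The averaging step described above can be sketched in Python (this is a hypothetical illustration of the sum-then-divide approach, not the poster's actual VI; the function name and use of NumPy are my own assumptions):

```python
import numpy as np

def average_frames(frames):
    """Average a list of equally sized 2D luminance arrays.

    Summing in a wider dtype (float64) before dividing avoids integer
    overflow in the accumulator, one classic source of corrupted
    pixel values when averaging 8-bit frames.
    """
    acc = np.zeros_like(frames[0], dtype=np.float64)
    for f in frames:  # analogous to accumulating in a shift register
        acc += f
    return acc / len(frames)

# Three constant frames with values 10, 20, 30 average to 20.
frames = [np.full((2, 2), v, dtype=np.uint8) for v in (10, 20, 30)]
print(average_frames(frames))  # every pixel is 20.0
```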
My main goal is to implement a Gaussian pyramid, but as a first step I have to smooth the image before downsampling it. I therefore convolve the 2D array of the image with a low-pass filter.
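For reference, one level of such a pyramid (smooth, then keep every second row and column) might look like the sketch below. The binomial kernel and the factor-of-2 step are my assumptions for illustration; they are not taken from the poster's VIs:

```python
import numpy as np

def pyramid_down(img):
    """One Gaussian-pyramid level: separable blur, then 2x downsample."""
    # A [1, 2, 1]/4 binomial kernel is a cheap Gaussian approximation.
    k = np.array([1.0, 2.0, 1.0]) / 4.0
    # Separable filtering: convolve every row, then every column,
    # keeping the output the same size as the input ('same').
    blurred = np.apply_along_axis(
        lambda r: np.convolve(r, k, mode='same'), 1, img.astype(float))
    blurred = np.apply_along_axis(
        lambda c: np.convolve(c, k, mode='same'), 0, blurred)
    return blurred[::2, ::2]  # keep every second row and column

img = np.ones((4, 4))
print(pyramid_down(img).shape)  # (2, 2)
```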
In both cases, the image looks something like the attached file "pinhole_after.bmp". Note: "pinhole_before.bmp" is the blank output image I get from the camera; "pinhole_after_sharpening.bmp" is the output when I convolve with a high-pass filter instead of a low-pass filter.
As far as I understand it, the filtering behaves strangely: the low-pass filter causes trouble, but the high-pass filter does not, and I don't understand why.
So far, I have looked into the IDS VI and gained a better understanding of the color depth: the camera has a ~10-bit output (it may be something else), but the IDS VI converts it and gives me a 24/32-bit output (the pixel values are stored in a 1D picture array). With further code from IDS I get a 2D array, which should have 24-bit color depth.
Then I looked into "flatten pixmap.vi" and examined its code. It looks like the VI splits the luminance value of the 2D array into RGB, then does something **magical** and gives me values that are not identical. (Note: in a 24-bit image, the R, G, and B values have to be equal to produce a shade of grey, but this is not the case. The output looks something like 24, 201, 231, but it should look like 24, 24, 24, or 201, 201, 201, or 231, 231, 231. If I probe the same values in the original image from the camera, I really do get the expected identical values, but not after I do some math.)
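The symptom described above (equal channels before the math, unequal after) is exactly what happens when arithmetic is done on the packed 24-bit value instead of on each channel. A small hypothetical sketch (the 0xRRGGBB layout is an assumption) shows how carries bleed between the 8-bit fields:

```python
def unpack(rgb):
    """Split a packed 24-bit 0xRRGGBB value into (R, G, B)."""
    return (rgb >> 16) & 0xFF, (rgb >> 8) & 0xFF, rgb & 0xFF

gray = 200
packed = (gray << 16) | (gray << 8) | gray   # 0xC8C8C8, a true gray

print(unpack(packed))  # (200, 200, 200) -- channels match

# Multiplying the packed integer, as a filter coefficient might,
# lets bits carry from one 8-bit channel into the next:
wrong = (packed * 2) & 0xFFFFFF
print(unpack(wrong))   # channels are no longer all equal
```

This is why filtering the raw camera luminance plane works, while filtering data that is already packed into a 24-bit pixel format can produce mismatched R, G, B values.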
So, my question is: where is my mistake? If someone is interested in helping, I can provide more screenshots of probes and also the VIs, but for now I've described everything I know. Thanks for reading. 🙂
Why would you need to read the same file, repeat the same calculation, and set the same zoom factors millions of times per second, as fast as the computer allows?
To keep the image the same size, I would change the convolution output to "size x".
Do you have an example PNG?
You probably need to convolve each color plane separately. I doubt that operating on RGB U32 integers works as expected.
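The advice above can be sketched as follows: split the image into R, G, and B planes, filter each plane with the same kernel, and recombine. Identical planes in give identical planes out, so greys stay grey (this is a hypothetical NumPy illustration, not LabVIEW code; function name and kernel are my own):

```python
import numpy as np

def smooth_planes(img):
    """img: H x W x 3 uint8; box-blur each color plane separately."""
    k = np.ones(3) / 3.0
    out = np.empty(img.shape)
    for c in range(3):  # process R, G, B independently
        p = img[:, :, c].astype(float)
        p = np.apply_along_axis(
            lambda r: np.convolve(r, k, mode='same'), 1, p)
        out[:, :, c] = np.apply_along_axis(
            lambda col: np.convolve(col, k, mode='same'), 0, p)
    return out

# A grey image: all three planes identical before and after filtering.
gray = np.full((5, 5, 3), 100, dtype=np.uint8)
res = smooth_planes(gray)
print(np.array_equal(res[:, :, 0], res[:, :, 1]))  # True
```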
... and please, attach the images here. Don't host them on an outside site peppered with ads and popups. That's very annoying!