12-06-2024 10:55 AM - edited 12-06-2024 11:36 AM
Hi,
I am looking for information about the type of border conditions that the IMAQ Convolute VI applies by default. Let's say I have a 100x100 data array and I would like to do an averaging (smoothing) with a simple 3x1 kernel of ones. The results I get for the border pixels are strange and do not resemble any known padding method. I cannot find this information in the IMAQ Convolute VI documentation. I would like to add that the problem only occurs when the kernel is not square. Please help 😉
12-07-2024 01:59 PM
As the Help for IMAQ Convolute explains, if you want to do a 3x3 convolution on a 100x100 image, you need a border of 1 pixel around the image. A 5x5 convolution requires a border of 2, and in general a (2n+1) x (2n+1) convolution needs a border of n.
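Not LabVIEW, but the rule is trivial to write down; here is a small Python sketch of it (purely illustrative, nothing IMAQ-specific):

```python
def required_border(kernel_size: int) -> int:
    """Border width (in pixels) needed for an odd NxN kernel: n for 2n+1."""
    return kernel_size // 2

for k in (3, 5, 7, 9):
    print(f"{k}x{k} kernel -> border of {required_border(k)}")
# 3x3 -> 1, 5x5 -> 2, 7x7 -> 3, 9x9 -> 4
```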
Bob Schor
12-08-2024 01:55 PM
@mroz1 wrote:
The results I get for the border pixels are strange and do not resemble any known padding method. I cannot find this information in the IMAQ Convolute VI documentation. I would like to add that the problem only occurs when the kernel is not square. Please help 😉
The word "strange" is very vague. can you be a bit more specific? Strange in what way, exactly?
I currently don't have any imaging tools installed, but if you look at the get kernel, only square kernels are generated there.
Note that the plain LabVIEW 2D convolution supports an output size configuration and I often use it with kernels that are asymmetric. Of course it will be more expensive because the operations are in floating point, but also probably more accurate because of that. You would also need to do the RGB components separately.
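I don't have a LabVIEW snippet handy, but the same idea expressed in Python/SciPy (just my sketch to show the shape handling, not what IMAQ does internally) looks roughly like this:

```python
import numpy as np
from scipy.signal import convolve2d

# Hypothetical 100x100 floating-point image and an asymmetric 3x1 kernel
# of ones (1 row, 3 columns), normalized so it averages.
image = np.random.rand(100, 100)
kernel = np.ones((1, 3)) / 3

# mode='same' keeps the output the same size as the input, comparable to
# the output-size configuration mentioned above; boundary='symm' mirrors
# the edges. Everything is computed in floating point.
smoothed = convolve2d(image, kernel, mode='same', boundary='symm')

# For an RGB image, run the convolution once per colour plane, e.g.:
# planes = [convolve2d(rgb[..., c], kernel, mode='same', boundary='symm')
#           for c in range(3)]
```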
12-08-2024 04:02 PM - edited 12-08-2024 04:12 PM
Interestingly, until now I was sure that you couldn't use non-square kernels (the only workaround being to pad the unused elements with zeros), but it seems to work fine.
Anyway, for this test I will fill a 10x10 image with 160 and place a grid of dots with I = 70:
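If you want to reproduce the test image in code, here is a NumPy sketch (I am assuming the dots sit on every other row and column, which matches the numbers below):

```python
import numpy as np

# 10x10 background of 160 with a grid of dots at I = 70
img = np.full((10, 10), 160.0)
img[::2, ::2] = 70
print(img[0])   # [ 70. 160.  70. 160.  70. 160.  70. 160.  70. 160.]
```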
When you perform IMAQ Convolute, NI internally uses pixels outside of the image (the so-called border). For a 3x1 kernel you need one pixel to the left and one to the right, and they will have the same intensities as the neighbouring edge pixels:
Now, for the first pixel we take one value from the border (I = 70), one from the image (also I = 70), and the next one, which is 160. As a result, (70 + 70 + 160)/3 = 300/3 = 100, which is what you see. The next pixel (moving to the right) is computed from (70 + 160 + 70)/3 and is again 100, but the one after that is (160 + 70 + 160)/3 = 390/3 = 130. On the right side the border pixel is 160, so you get two identical pixels in dst with I = 130. Your image after the convolution is therefore:
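You can reproduce exactly these numbers outside of IMAQ, e.g. with NumPy, if you assume the one-pixel border mirrors/replicates the edge pixel (my assumption about what IMAQ does internally, based on the numbers):

```python
import numpy as np

row = np.array([70, 160] * 5, dtype=float)   # one row of the test image

# One-pixel border on each side; for a width-1 border, mirroring and
# replicating the edge pixel give the same values.
padded = np.pad(row, 1, mode='symmetric')

# 3x1 averaging kernel of ones
out = np.convolve(padded, np.ones(3) / 3, mode='valid')
print(out)   # [100. 100. 130. 100. 130. 100. 130. 100. 130. 130.]
```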
It's a little more complicated with a 5x1 kernel. Here you need two pixels outside of the image, and they will be "mirrored".
(I do not set the border explicitly, because by default every IMAQ image is created with a border of 3 pixels, which is sufficient for convolutions up to 7x7. From 9x9 onward you will get an error and the border needs to be increased.)
There is an IMAQ Border function, but it has no effect on the convolution.
The IMAQ convolution always fills the border with mirrored pixels, and in our case:
Now, if you carefully compute the moving convolution window, you will understand how this dst image was obtained:
because for the first pixel (160 + 70, these two from the border, + 70 + 160 + 70)/5 = 106, and so on. The results are not strange; it is pure math. However, if you still have questions, don't hesitate to share your code snippet and the result, along with an explanation of what you expected to get and what you saw instead.
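Again, just to double-check the math with NumPy (assuming the two-pixel border is mirrored about the edge, which is what the value 106 implies):

```python
import numpy as np

row = np.array([70, 160] * 5, dtype=float)

# Two-pixel mirrored border: [70, 160] is reflected back on the left,
# so the padded row starts 160, 70, 70, 160, ...
padded = np.pad(row, 2, mode='symmetric')

out = np.convolve(padded, np.ones(5) / 5, mode='valid')
print(out[0])   # 106.0 = (160 + 70 + 70 + 160 + 70) / 5
```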
12-16-2024 02:38 AM - edited 12-16-2024 02:40 AM
Thanks for the help and explanation 😁. Everything is clear now.