I'm fairly sure this comes down to my misunderstanding of how a convolution works, or of LabVIEW more generally, but I seem to be running into a strange issue with the Convolution function.
Attached is a VI that convolves two functions. The resulting function LOOKS correct, but I'm more curious about what happens when I change the step size of the two initial functions: as the step size decreases, the convolved function seems to grow.
Can anyone tell me if this SHOULD be happening? And if so, WHY is it happening?
Or am I just doing everything wrong?
Welcome to the forum!
For others struggling to figure out the parameter to change: "step size" = "delta" (default: 1)
I believe your answer is in the definition of the discrete convolution: y[n] = Σ_k x[k] · h[n−k].
First of all, no time axis is involved in the discrete convolution; only the sample values are summed. When you decrease the step size, you increase the number of points while keeping the same maximum amplitude of the function, so your sum naturally increases.
You may want to consider some kind of normalisation https://www.google.de/search?client=opera&q=normalise+dicrete+convolution&sourceid=opera&ie=UTF-8&oe...
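To see the effect outside of LabVIEW (which is graphical, so a text sketch has to stand in for the VI), here is a minimal NumPy example. The rectangular pulse and the step sizes are my own illustrative choices, not from the original VI; NumPy's `np.convolve` behaves like the Convolution VI in the relevant way, i.e. it only sees sample values, never a time axis.

```python
import numpy as np

def sampled_pulse(dt):
    # Unit-height rectangular pulse of width 1.0, sampled at spacing dt.
    # Halving dt doubles the number of samples but not their amplitude.
    return np.ones(int(round(1.0 / dt)))

peaks = []
for dt in (1.0, 0.5, 0.1):
    a = sampled_pulse(dt)
    y = np.convolve(a, a)        # y[n] = sum_k a[k] * a[n-k]; dt appears nowhere
    peaks.append(float(y.max()))

print(peaks)  # [1.0, 2.0, 10.0] -- the peak grows as 1/dt
```

Note that multiplying each output by dt would give 1.0 in every case, which is the peak of the true continuous convolution (a triangle): the bare discrete sum approximates the integral only up to that missing factor of dt.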
I'm still somewhat unclear as to why the sum increases. I think it stems from my misunderstanding of what a convolution is. Reading those definitions didn't help all that much; most of the information went way over my head.
Even if I did increase the number of points defining the function, the point values adjust accordingly: smaller changes in x result in smaller changes in y. The resulting sum therefore shouldn't increase with a finer time step, because the function itself is still the same. Unless, of course, the convolution is calculated not from what the function looks like, but from the raw sample values themselves.
That brings me to my second point: how is the convolution being calculated without time values? By my meager understanding, a convolution is literally the sum of the overlapping area of two functions as the second, time-reversed function is slid across the first. Does the convolution VI assume that the bounds of the functions are the same to make this calculation, or does it assume that each point in the array is something like f(x), f(x+1), etc.?
Speaking of normalization, how would I calculate something like that? I sort of understand WHAT normalization is, but I'm unsure how to even approach it.
The convolution does not know about the step size; it only sees the values in the 1D arrays. The sums of those values change as a function of step size, so the convolution changes too.
If you multiply the array sums of the two input arrays, you get the sum of the convolution output.
If you divide each input array by its sum, the sum of all elements in the convolution will also be one.
I typically normalize only the convolution kernel, to keep the integral of the convolved array constant, whatever it is.
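Both points above can be checked in a few lines; here is a NumPy sketch (again as a stand-in for the VI, with an arbitrary example signal and kernel of my own choosing):

```python
import numpy as np

signal = np.array([0.0, 1.0, 3.0, 2.0, 0.0])   # example signal, sum = 6
kernel = np.array([1.0, 2.0, 1.0])             # example kernel, sum = 4

# The sum of the convolution equals the product of the input sums:
y = np.convolve(signal, kernel)
print(y.sum(), signal.sum() * kernel.sum())    # 24.0 24.0

# Normalizing only the kernel (dividing it by its own sum) preserves
# the "integral" (array sum) of the signal through the convolution:
y_norm = np.convolve(signal, kernel / kernel.sum())
print(y_norm.sum(), signal.sum())              # 6.0 6.0
```

Kernel-only normalization is usually what you want for smoothing: the output keeps the same overall scale as the input regardless of the kernel's width or step size.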
Ah, so my assumption was correct.
How would I go about normalizing the functions, then? Is that dependent on the original functions?