I am filtering some data using the Point-by-Point Butterworth filter. Every so often I change the input stream to a different sensor. At that point I set the "initialize" boolean on the filter assuming it would reset the filter to the incoming stream.
In this case the data is at 1Hz with a cutoff of 0.02 Hz (50 seconds).
However, "initialize" effectively sets the past data to zero. It clears the array of stored values, but it does not adjust the scaling of the output while there is less stored history than the order of the filter. It would be advantageous for the filter to seed the stored data as if several time constants of data had preceded the next point. As it stands, it assumes the history back to -infinity was zero and treats the new data as a step function.
I can cheat and just call it with a whole bunch of the same value, but this seems to be a bug in the filter logic itself, and a brute-force way around it seems odd.
In the simple test attached, wait for the filter to stabilize and then hit clear data. This should reset the filter and start filling the pipe, but the value drops from 1 to 0.099 and then recovers.
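For anyone without LabVIEW handy, here is a minimal Python sketch of the same effect. It uses a one-pole lowpass as a stand-in for the pt-by-pt Butterworth (the function and its `init` input are my own illustration, not NI's API), so the exact size of the dip differs from the 0.099 I see, but the shape is the same:

```python
import math

def one_pole_lowpass(x, fc, fs, init=None):
    """One-pole IIR lowpass, a stand-in for the pt-by-pt Butterworth.

    init=None mimics the current "initialize to zeros" behavior; passing a
    value pre-seeds the filter memory so the output starts there instead.
    """
    alpha = math.exp(-2.0 * math.pi * fc / fs)  # pole; time constant ~ fs/(2*pi*fc) samples
    y = 0.0 if init is None else init
    out = []
    for s in x:
        y = alpha * y + (1.0 - alpha) * s
        out.append(y)
    return out

fs, fc = 1.0, 0.02                # 1 Hz data, 0.02 Hz cutoff, as above
x = [1.0] * 200                   # constant input right after the "reset"

y_reset = one_pole_lowpass(x, fc, fs)             # dips toward zero, then recovers
y_seeded = one_pole_lowpass(x, fc, fs, init=1.0)  # stays at 1.0 from the first sample
```

With `init=None` the first output is only (1 − alpha) ≈ 0.12 of the input, exactly the step-response transient described above; with `init=1.0` there is no transient at all.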
1. Can't open the code as I'm not using LV2019
2. I've encountered the same kind of repeated step-response behavior of the filtering functions. My app did its data collection and processing in discontinuous chunks, giving me a similar step-function-like discontinuity between the current value fed into the filter and the prior one.
My conclusion is that the filter functions are doing exactly what they should, considering that they're meant to be general-purpose functions. The present behavior is well-defined and predictable, even if you don't happen to like its implications in your present app. The step-function changes to the input values that your app wants to ignore would be real and relevant in many other circumstances.
3. So the burden's back to you or me. The brute force method that seems odd to you is probably gonna be your best path forward. It lets *you* decide how to pre-seed the filter's memory in the exact way *you* choose for *your* app. You can always make this pre-seeding process part of your own simple wrapper function around the original filter vi to reduce block diagram clutter in your main app.
I can backport it to any version you wish. It is a simple system that feeds a number to the Butterworth and plots the output. The user can step-change that number.
BUT, I disagree completely that it is doing what it should. The filtering is working fine; the "initialize" function is not. In this case the VI should fill the filter taps with the correct values to initialize the filter to the supplied value, not the arbitrary value of zero. As far as I can tell, you need to use the coefficients to invert the filter function and back-calculate that state. It takes a bit more work than NI put into this function.
As a general-purpose function, initialize-with-a-value should initialize to that value. This isn't about ignoring step changes; you seem to be confusing the filtering behavior with the initialization I was having issues with. Feed it a constant value and the output will reach that value. Now initialize with that same value: the filter will jump to about 9% of that value and then recover. Initialize should not have a recovery transient like that.
The brute-force path is an extra CPU load. Inverting the filter function is probably better. Asking NI to have their general-purpose VIs "do the right thing" is better still. This isn't something "my choice" or "my app" should have to handle. I can program my own filter and set it up so initialize works as one would expect, BUT the point of LV is that they provide these features/functions and they should work. I will probably write a fixed filter function that does a back calculation with the coefficients from the filter, but NI should know that their function is broken (and should take ownership and fix it).
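For the record, that back calculation is simple for a direct-form II realization: with a constant input x0, every delay element settles to w0 = x0 / sum(a) (taking a[0] = 1), so pre-loading the taps with w0 gives a bumpless start. A hedged Python sketch of the idea, with 2nd-order Butterworth coefficients I precomputed offline for 0.02 Hz at 1 Hz sampling (the helper names are mine, and NI's internal state layout may differ):

```python
def seed_taps(b, a, x0):
    """Back-calculate direct-form II delay values for a constant input x0."""
    w0 = x0 / sum(a)                        # steady-state delay-line value
    return [w0] * (max(len(a), len(b)) - 1)

def df2_filter(b, a, x, taps):
    """Minimal direct-form II IIR filter; `taps` holds [w[n-1], w[n-2], ...]."""
    out = []
    for s in x:
        w = s - sum(ak * wk for ak, wk in zip(a[1:], taps))
        y = b[0] * w + sum(bk * wk for bk, wk in zip(b[1:], taps))
        taps = [w] + taps[:-1]              # shift the delay line
        out.append(y)
    return out

# 2nd-order Butterworth, cutoff 0.02 Hz at fs = 1 Hz (coefficients precomputed)
b = [0.0036216815, 0.0072433630, 0.0036216815]
a = [1.0, -1.8226949252, 0.8371816513]

x = [1.0] * 100
y_zeroed = df2_filter(b, a, x, seed_taps(b, a, 0.0))  # current behavior: big transient
y_seeded = df2_filter(b, a, x, seed_taps(b, a, 1.0))  # pre-seeded: flat at ~1.0
```

Seeding this way is essentially the wrapper idea mentioned earlier, but done with math on the coefficients instead of feeding in a pile of dummy samples.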
I think we come at it with different notions of what "initialize" should mean.
I've typically used the waveform or array-based IIR filter functions because I'm most often doing filtering during analog acquisition at a rate that has me processing data in chunks. As I looked into things a bit more, I see that some of these chunk-at-a-time functions refer to the relevant boolean input as "reset filter", and the advanced help explicitly declares that it will set internal filter states to 0. This is what I learned on, and it's what I'm used to.
Meanwhile, the Pt-by-Pt functions seem to use the terminology "initialize" and the help only declares that it will "initialize the internal state", which leaves plenty of room for interpretation and varying expectations, including yours.
Long long ago when I was learning LabVIEW, only the array-based filter functions were in regular LabVIEW. Distinct waveform-based functions were added at a certain point and the Pt-by-Pt family of functions were also added at a (different?) particular point. It kinda appears that each individual generation of filter functions is pretty consistent in how it names the reset/init boolean and also consistent in how much detail the help reveals. I've been accustomed to the "reset" terminology and behavior for so long, I automatically expected the zeroing behavior even when the input terminal was renamed to say "initialize". But now I see the rationale for your different expectation.
I guess you could try to write something up about this for the Idea Exchange, but I suspect it'll be a tough sell due to the long history of present behavior and the importance of back-compatibility. The kind of "initialize" you want could definitely be useful. But the present "reset to 0s" behavior would need to be the default. Supporting both those modes plus normal continuous behavior would take more than a single boolean input to designate. Adding another input and a new behavior to all the filter functions would be a pretty big implementation task.
If you *do* post to the Idea Exchange, might I suggest / request that you post your "fixed" filter function along with enough description that others could make their own corresponding mods for other filter topologies besides Butterworth?
It will take a while to post to the exchange, but I will try to post a complete solution. I agree that passing in an array is significantly different from passing in point-by-point data, but even so, you have end effects which can be dealt with in a number of ways. Most (such as bounce and feed-forward) are not available to a pt-by-pt function.
But "zero" is just a special case of initializing the filter to some value. Initialize with zero at the input and you get the current behavior; initialize with x at the input and you get a more "bumpless" behavior.
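To make that concrete with a sketch (assuming a direct-form II state, not necessarily NI's internal layout): the steady-state tap value for a constant input x0 is x0 / sum(a), so passing x0 = 0 yields exactly the all-zero state the current behavior uses:

```python
def steady_state_taps(b, a, x0):
    """Hypothetical 'initialize to value': direct-form II taps for constant input x0."""
    w0 = x0 / sum(a)                        # every delay element holds w0 at steady state
    return [w0] * (max(len(a), len(b)) - 1)

# 2nd-order Butterworth, cutoff 0.02 Hz at fs = 1 Hz (coefficients precomputed)
b = [0.0036216815, 0.0072433630, 0.0036216815]
a = [1.0, -1.8226949252, 0.8371816513]

zero_taps = steady_state_taps(b, a, 0.0)    # [0.0, 0.0] -- today's behavior falls out
one_taps = steady_state_taps(b, a, 1.0)     # nonzero taps -- bumpless start at 1.0
```

So a single "initialize value" input could cover both behaviors without breaking anyone who relies on the zero case.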
Getting NI to adopt anything suggested outside the bubble can be a many year project. A very strong case of NIH syndrome. But tilting at windmills is my normal mode.