03-11-2011 05:23 PM
Sounds good. I will also research a bit on that.
Thank you for all the help!!
03-12-2011 09:51 AM
I put together a little simulation this morning to show how this might work. It will process two signals of ~ 1e6 samples in 61 segments of 16384 samples per segment in about 150 ms. This also includes the time required to generate four sinusoidal signals of that length. (This is a different computer, so speed comparisons to my earlier posts are not valid).
The filter is based on the trigonometric identities for multiplication of two sine waves.
The low pass filter is implemented as a sum. Summing is analogous to the mathematical operation of integration. Integration has a low pass filter-like spectral response. So the filtering effect is produced by summing all the elements of the array which is the product of the reference sine (or cosine) and the signal and dividing by the number of elements. This is not a sharp roll-off filter but is very easy to implement.
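In LabVIEW this is just a multiply and a sum, but the same idea is easy to sketch in NumPy. A minimal sketch (the 1 MHz sample rate is an assumption for illustration; the thread does not state it):

```python
import numpy as np

fs = 1_000_000            # sample rate (Hz) -- assumed for illustration
f_ref = 50_000            # reference frequency from the demo VI
n = 16_384 * 61           # ~1e6 samples, as in the simulation

t = np.arange(n) / fs
rng = np.random.default_rng(0)

signal = np.sin(2 * np.pi * f_ref * t + rng.uniform(0, 2 * np.pi))
interf = np.sin(2 * np.pi * (f_ref + 5) * t + rng.uniform(0, 2 * np.pi))

# The "low pass filter as a sum": multiply by the reference sine (or
# cosine) and average.  On-frequency content survives; off-frequency
# content averages toward zero.
def lock_in(x, f, t):
    i = np.mean(x * np.sin(2 * np.pi * f * t))   # in-phase component
    q = np.mean(x * np.cos(2 * np.pi * f * t))   # quadrature component
    return i, q

i_part, q_part = lock_in(signal + interf, f_ref, t)
amplitude = 2 * np.hypot(i_part, q_part)   # close to 1.0 despite the interferer
```

The factor of 2 recovers the amplitude because multiplying two unit sines leaves a DC term of 1/2.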
The attached VI (Sync Filter.vi) processes two signals. One is at the reference frequency (50000 Hz) but with random phase. The other is 5 Hz higher (50005 Hz) with a different random phase. You can see that after just a few iterations the interfering signal is attenuated with respect to the on frequency signal.
To use this in your system you would need to create a subVI incorporating the product and sum processes on both the I and Q signals and the I-Q to X-Theta.vi. I am not sure whether the shift registers for accumulating the I and Q signals should be in the subVI or in the main outer loop as in this demo. You need to generate the reference signals for each frequency of interest. Setting all this up as arrays of frequencies fed to a for loop seems like a good approach, but I do not know all of your requirements.
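A minimal sketch of that segmented structure, with plain running sums standing in for the shift registers and arctan2 standing in for I-Q to X-Theta.vi (again assuming a 1 MHz sample rate):

```python
import numpy as np

fs = 1_000_000     # sample rate (Hz) -- assumed
f_ref = 50_000
seg = 16_384       # samples per DAQ Read
n_segs = 61

rng = np.random.default_rng(1)
phase = rng.uniform(0, 2 * np.pi)

# Running sums play the role of the shift registers in the LabVIEW loop.
i_acc = q_acc = 0.0
total = 0
for k in range(n_segs):
    t = (np.arange(seg) + k * seg) / fs        # time base continues across segments
    x = np.sin(2 * np.pi * f_ref * t + phase)  # one acquired chunk
    i_acc += np.sum(x * np.sin(2 * np.pi * f_ref * t))
    q_acc += np.sum(x * np.cos(2 * np.pi * f_ref * t))
    total += seg

# Equivalent of I-Q to X-Theta.vi applied to the accumulated sums:
x_mag = 2 * np.hypot(i_acc, q_acc) / total     # amplitude estimate
theta = np.arctan2(q_acc, i_acc)               # phase estimate
```

Because the time base continues across segments, the reference phase stays consistent from one chunk to the next, so no per-segment phase correction is needed.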
Lynn
03-12-2011 02:08 PM
Thank you Lynn. This is similar to the lock-in-like procedure I was using, but I had not implemented the iterations. At first, though, I didn't really see the advantage of doing the calculation in iterations instead of processing all the samples together. If the phases of the signal and the interference are both set to be the same, I have checked that the segment length is irrelevant and the only contributing parameter is the number of samples. In contrast, if they have different phases, then the segment length also seems to contribute (small segment lengths seem to be best). Could you explain this a little further? I guess the best condition occurs when the signal and interference are out of phase by 90º.
To deal with the measured phases properly, I think some correction before the sum would be required, since the phase of the reference signal is different for each segment.
Thank you again for your help, this has been a great help!
03-13-2011 09:27 AM
The segments and iterations represent the data acquisition process. You indicated that you were acquiring 16384 samples on each DAQ Read, so I used that size segments to simulate getting the data in chunks. If you can wait until you have all the data, it can be processed faster. I thought you indicated that you wanted fast and frequent updates.
Since the signal and the interference are at different frequencies, they may start with a similar phase relationship, but after even one cycle they will no longer be in phase. How were you checking this?
The total number of samples should be the significant factor. Of course the nulls in the interference signal may occur at values of total samples which may not be reached for certain segment sizes, although I do not observe this.
I modified the VI to add the signal and the interference and then looked at the plot of the combination compared to the analysis of the desired signal alone. The initial phase difference makes the size of the error vary substantially. To get the error below 20% seems to require summing more than 300000 samples, regardless of the segment size.
Can you post your VI or at least simple VI with an array of data like that shown in your original post? Also a list of the nominal signal frequencies.
Lynn
03-13-2011 01:43 PM
I have to wait until I have all the data, since the only thing that matters is the final value, which has the highest accuracy. I take no advantage of having "intermediate" values from previous iterations: it is a sequential experiment that waits for the "final" value, records it, and moves on to another measurement. This does not mean that I am not concerned about speed... acquiring 1000000 samples is still too much, and I cannot spend that time on each measurement.
In the previous post I was confused about the iterations because with your code it seems to me that the ratio samples/segment basically needs to be an integer for it to work well. I have played a little with your program (see the attached modification) to check this:
(taking the x value of the last iteration and averaging 10 times, 1000000 samples)
In any case, I do not need the iterations, and I can definitely process all the data together. The problem is that 1000000 samples is too many (my DAQ can probably handle it, but I don't have that much time!), so ideally (probably impossible) I would need to keep the same accuracy with no more than 1/5 of the samples (200000 samples).
When I am at work I will post an array of my data.
Thank you!
03-13-2011 02:19 PM
OK. Forget the iterations and segments. They were just there to simulate taking data in smaller chunks.
With non-integer total samples/segment values not all the data gets used, so the analysis ends in the middle of a cycle and that results in the errors. With larger segments the number of unused samples tends to be larger.
I cannot open your VI at home. I am using LV 8.5 here.
What you need for good "filtering" with this method is that the number of cycles of your reference frequency is an integer. The way I set up the simulation, this condition is always met if all the data is used. In your real system you would need to adjust this, probably separately for each frequency.
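The effect of the integer-cycle condition is easy to check numerically. A sketch (frequencies 50000 Hz and 50005 Hz as in the demo; the 1 MHz sample rate is an assumption): 200000 samples hold whole cycles of both the reference and the 5 Hz beat between the two tones, while 150000 samples cut the beat off mid-cycle.

```python
import numpy as np

fs = 1_000_000   # sample rate (Hz) -- assumed

def amp_error(n, phase_i=1.0):
    """Error measuring a unit tone at 50 kHz next to a 50.005 kHz interferer."""
    t = np.arange(n) / fs
    x = (np.sin(2 * np.pi * 50_000 * t)
         + np.sin(2 * np.pi * 50_005 * t + phase_i))
    i = np.mean(x * np.sin(2 * np.pi * 50_000 * t))
    q = np.mean(x * np.cos(2 * np.pi * 50_000 * t))
    return abs(2 * np.hypot(i, q) - 1.0)     # true amplitude is 1.0

err_good = amp_error(200_000)   # whole cycles of both tones: error ~ 0
err_bad = amp_error(150_000)    # partial beat cycle: error near 30%
```

The leftover fraction of a cycle does not average out, which is exactly the error seen with non-integer samples/segment ratios.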
Lynn
03-13-2011 08:15 PM - edited 03-13-2011 08:16 PM
Yes, with this method having an integer number of cycles is essential. As I understand it, I cannot adjust the number of samples separately for each frequency, since each frequency has interference from all the others, and that interference would not be filtered.
In my case, all the frequencies (desired and undesired) are combinations of four basic frequencies. Then the approach should be to calculate the number of samples contained in one cycle of each frequency and then take the least common multiple of these four numbers as the number of samples, shouldn't it? This number may end up being too high, so instead of an exact solution I will have to pick a number of samples that is close to a common multiple, even if it is not an exact one.
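That least-common-multiple calculation is straightforward to sketch. The four base frequencies below are hypothetical placeholders (the thread does not list the real ones), and the 1 MHz sample rate is assumed; for integer frequencies, fs // gcd(fs, f) is the smallest sample count spanning whole cycles of f:

```python
from math import gcd
from functools import reduce

fs = 1_000_000                             # sample rate (Hz) -- assumed
freqs = [50_000, 40_000, 62_500, 50_005]   # hypothetical base frequencies

def whole_cycle_block(f):
    """Smallest number of samples containing an integer number of cycles of f."""
    return fs // gcd(fs, f)

def lcm(a, b):
    return a * b // gcd(a, b)

# Number of samples meeting the integer-cycle criterion for ALL frequencies:
n_required = reduce(lcm, (whole_cycle_block(f) for f in freqs))
```

For these example numbers n_required comes out to 200000 (0.2 s at 1 MHz); with less convenient frequencies it can explode, which is where the approximate-multiple compromise comes in.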
03-13-2011 08:46 PM
Since you need to do a separate multiply and sum for each frequency, extract the largest subset of the total acquired samples which meets the integer cycle criterion for that frequency. You can calculate the numbers of samples in advance. That does not need to be done for each acquisition. Array Subset is pretty fast, so this should not slow things down much. If you do all the filters in parallel for speed, you may need to be careful not to make any unnecessary data copies.
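A sketch of that per-frequency subset logic (hypothetical frequencies and an assumed 1 MHz rate; in NumPy the subset is a view rather than a copy, analogous to using Array Subset without duplicating data):

```python
import numpy as np
from math import gcd

fs = 1_000_000           # sample rate (Hz) -- assumed
n_total = 16_384 * 61    # samples acquired (999424, as in the demo)
freqs = [50_000, 40_000, 62_500]   # hypothetical frequencies of interest

def usable_samples(f):
    """Largest n <= n_total containing a whole number of cycles of f."""
    block = fs // gcd(fs, f)             # smallest whole-cycle block for f
    return (n_total // block) * block

# Precompute once; this does not need to be redone for each acquisition.
lengths = {f: usable_samples(f) for f in freqs}

def detect(x, f):
    n = lengths[f]
    t = np.arange(n) / fs
    seg = x[:n]                          # a view, not a copy
    i = np.mean(seg * np.sin(2 * np.pi * f * t))
    q = np.mean(seg * np.cos(2 * np.pi * f * t))
    return 2 * np.hypot(i, q)            # amplitude estimate at f

x = np.sin(2 * np.pi * 50_000 * np.arange(n_total) / fs)   # test tone
amp = detect(x, 50_000)                  # recovers ~1.0
```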
Lynn
03-13-2011 09:13 PM
@johnsold wrote:
Since you need to do a separate multiply and sum for each frequency, extract the largest subset of the total acquired samples which meets the integer cycle criterion for that frequency.
But extracting the largest subset of the total acquired samples which meets the integer cycle criterion for a particular frequency does not ensure good filtering of the other interfering frequencies. My point was that the best result should be achieved with arrays that meet the integer cycle criterion for ALL frequencies.