11-25-2012 10:33 AM
I am measuring the propagation of an acceleration signal (ACC) between 2 measuring points (sensors). With a 51.2 [kS/s] sampling rate the resolution is 19.5 [us]. I can count the delay in samples "manually", but can this be automated with sufficient certainty?
Thanks in advance,
11-25-2012 11:15 AM
11-25-2012 11:26 AM
Yes I had, just as well I had heard of Ferrari but never got a chance ... Have you seen the spectrum of a real acceleration signal? It has noise beyond the boundaries of the DAQ system's dynamic range - especially toward DC. I can't use a filter because it introduces phase distortion. What exactly are you suggesting?
Thanks in advance,
11-25-2012 11:36 AM
You just asked how to determine signal delays, and that's what correlation functions are used for.
Now you have introduced a new item into this discussion...
When there is noise in the signal you will have to filter that noise, or you have to live with the noise and still try to detect the signal delays...
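For what it's worth, the basic correlation approach can be sketched outside LabVIEW. A minimal NumPy illustration (the white-noise signal and the 37-sample delay are made up for the example; only the 51.2 kS/s rate comes from the question):

```python
import numpy as np

fs = 51200.0                          # sampling rate [S/s], as in the question
true_delay_samples = 37               # assumed delay for this illustration

rng = np.random.default_rng(0)
x = rng.standard_normal(2048)         # sensor 1: white-noise stand-in for ACC
y = np.roll(x, true_delay_samples)    # sensor 2: same signal, delayed

# Full cross-correlation; lags run from -(N-1) to +(N-1)
xcorr = np.correlate(y, x, mode="full")
lags = np.arange(-len(x) + 1, len(x))
est_delay_samples = lags[np.argmax(xcorr)]
est_delay_s = est_delay_samples / fs  # convert the lag (in samples) to seconds
```

The peak of the cross-correlation lands at the lag where the two records line up best, which is exactly the "manual" sample count automated.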
11-25-2012 11:50 AM
I did mention that the signal is ACC before asking the question... Never mind - do you have experience with how filtering impacts delay resolution, given the numbers I provided? I have no idea what I am allowed to do and what I am not.
11-25-2012 11:58 AM
I had to deal with acceleration signals before and they weren't too noisy to detect propagation times with correlation...
I can't tell you what's allowed and what's not allowed. You have to test your algorithm with known signals before you use it on a real machine...
11-25-2012 05:24 PM
You said your system has "noise beyond the DAQ system's dynamic range." If this is true, you need to do some analog signal conditioning before the signal reaches the DAQ device. Otherwise you have no way of knowing what is valid signal and what has been corrupted by the out-of-range noise.
Please post some data.
11-26-2012 05:24 AM
It seems that [TSA Cross-correlation Function.VI] gives the delay ("lag") as the highest peak in its "cross correlation" output which is an array. If I can use this directly (without having to recall the math theory behind) I'd have the following couple of questions:
1. Is the delay always associated with the absolute MAX of the "cross-correlation" array? - Can I read it as the Xmax of the [Xmax,Ymax] point?
How do I calculate the delay/lag - is it the relative offset from the central index of the "cross-correlation" array (Xmax - Xcenter)?
Will biased weighting "ensure" that my lag is the peak in the array?
2. The "cross-correlation" array has no time dimension - is its spacing the original sampling interval (dt)?
If these assumptions are not correct, what would be the proper procedure?
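For what it's worth, assumptions like these can be checked numerically. A small NumPy sketch, not the TSA VI itself - the signal and the 12-sample delay are synthetic, and only the sample interval comes from the question:

```python
import numpy as np

N = 1024
dt = 1.0 / 51200.0                    # sample interval, the 19.5 us grid

rng = np.random.default_rng(1)
x = rng.standard_normal(N)            # sensor 1 (synthetic stand-in)
d = 12                                # assumed delay in samples (illustration)
y = np.concatenate([np.zeros(d), x[:-d]])   # sensor 2: x delayed by d samples

c = np.correlate(y, x, mode="full")   # length 2N-1, lags -(N-1)..+(N-1)
center = N - 1                        # index of zero lag (the "Xcenter")
lag = int(np.argmax(c)) - center      # Xmax - Xcenter, in samples
delay_seconds = lag * dt              # the array has no time axis: scale by dt
```

With NumPy's conventions, at least, the answers come out as: the delay is the offset of the absolute maximum from the central (zero-lag) index, and the lag axis is in samples, so multiplying by dt gives time.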
Thanks in advance,
11-27-2012 04:20 PM
Information on the operation of the function can be found in the LabVIEW Help Documentation:
Note the function details at the bottom of the page.
Information on what to expect from the Cross-Correlation array can be found here:
Hopefully the details found in the documentation can answer your questions.
11-28-2012 02:33 AM
With the few bits of information you provided, Gerd, johnsold and craig have already given a lot of information 😉
Here are some more hints:
Of course you can filter the signals! If you apply the same (linear) filter to both signals, the phase delay will be the same in each, so the delay you measure between the two signals is unchanged.
And there are linear-phase filters (symmetric FIR) whose constant group delay cancels out between the two channels, or you can apply a filter with phase shift, reverse the data array, and apply the same filter again (forward-backward filtering) -> no net phase shift 🙂
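The forward-backward trick is easy to demonstrate with an impulse. A minimal NumPy sketch (the 9-tap symmetric FIR and the impulse position are arbitrary choices for illustration):

```python
import numpy as np

def fir_filter(b, x):
    """Causal FIR filter: out[n] = sum_k b[k] * x[n-k]."""
    return np.convolve(x, b, mode="full")[: len(x)]

def forward_backward(b, x):
    """Filter, reverse, filter again, reverse: zero net phase shift."""
    return fir_filter(b, fir_filter(b, x)[::-1])[::-1]

b = np.array([1., 2., 3., 4., 5., 4., 3., 2., 1.]) / 25.0  # symmetric FIR
x = np.zeros(256)
x[100] = 1.0                           # unit impulse as a probe

one_pass = fir_filter(b, x)            # peak moves to 100 + 4 (group delay)
two_pass = forward_backward(b, x)      # peak stays at 100 (zero net phase)
```

A single pass shifts the impulse by the filter's group delay of (9 - 1) / 2 = 4 samples; filtering forward and backward leaves it in place, which is why the measured delay between two channels survives the filtering.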
You can increase the resolution to the sub-sample range by fitting a peak to the cross-correlation, or
do a phase-delay measurement in the frequency domain... I just did that to measure the dispersion of sound in a titanium rod ...
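The peak-fitting idea can be sketched with a three-point parabolic fit on the cross-correlation peak (NumPy; the fractional delay of 25.4 samples is made up, only the 51.2 kS/s rate comes from the question):

```python
import numpy as np

def parabolic_refine(c, i):
    """Refine integer peak index i with a 3-point parabolic fit."""
    num = c[i - 1] - c[i + 1]
    den = c[i - 1] - 2.0 * c[i] + c[i + 1]
    return i + 0.5 * num / den

fs = 51200.0
N = 4096
true_delay = 25.4                      # fractional delay in samples (made up)

rng = np.random.default_rng(3)
x = rng.standard_normal(N)
# Delay x by a fractional number of samples in the frequency domain (circular)
X = np.fft.rfft(x)
f = np.fft.rfftfreq(N)
y = np.fft.irfft(X * np.exp(-2j * np.pi * f * true_delay), n=N)

c = np.correlate(y, x, mode="full")
i = int(np.argmax(c))                   # integer-sample peak
lag = parabolic_refine(c, i) - (N - 1)  # fractional lag in samples
delay_us = lag / fs * 1e6               # finer than the 19.5 us sample grid
```

The parabolic fit is slightly biased for broadband signals (the correlation peak is sinc-like, not parabolic), but it still resolves the delay well below one sample, i.e. below the 19.5 us grid.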
BUT all of that requires a basic understanding of signal theory - or, to stay with your example: all the parts for your (software) Ferrari are there and some hints on how to put them together have been given. However, without understanding what you are doing, you will crash.