The usual answer is a difference in the signal processing.
So you need to analyse both systems (including a lot of RTFM on both) to find the differences in detail.
And it doesn't help to look at (or blame) only one system if you want to compare two systems 😉
There are a lot of parameters involved, and they can produce many different kinds of spectra.
And Matlab and LabVIEW might use different default parameters when you don't set them explicitly.
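For example, the window and the amplitude scaling alone can change the peak height of the very same signal. Here is a minimal Matlab sketch (the signal parameters are my own assumptions, just to illustrate the point):

% The same 100 Hz sine gives different spectral peaks depending on
% window and scaling: exactly the kind of default that can differ
% between Matlab and LabVIEW.
fs = 1000; N = 1000;
t = (0:N-1)/fs;
x = sin(2*pi*100*t);
X_rect = 2*abs(fft(x))/N;                 % rectangular window, amplitude scaling
w = 0.5*(1 - cos(2*pi*(0:N-1)/(N-1)));    % Hanning window, built without any toolbox
X_hann = 2*abs(fft(x.*w))/sum(w);         % corrected for the window's coherent gain
[max(X_rect(2:N/2)), max(X_hann(2:N/2))]  % both ~1 only because each uses matching scaling

If one tool applies a window (or a different power/amplitude scaling) by default and the other doesn't, the peak values won't match even for identical input data.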
conversion worked this time…
The "spectral measurements" expects a waveform as input, so why do you convert your DDT data to a plain 1D array?
Why do you apply a moving average to the resulting spectrum?
What is the expected result after that Select function? One input gets a DDT (containing an array of signals) while the other gets just a 0 (scalar DBL)?
Why do you display just some samples from the filtered spectrum? How did you come up with these numbers (50-200)?
It's great that the conversion works this time.
In a previous post I attached a folder which contains the Matlab code and two reference input data files.
The purpose of my code is to detect whether the input is a true signal. In my case, I have to look at the spectrum of the signal and check its peak between samples 50 and 200.
For a true signal this peak is always greater than 20 in amplitude, while for an invalid input signal it is less than 20.
The Matlab file always gives accurate results, but the same approach does not work in LabVIEW. That's where I need your help.
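Expressed as a minimal Matlab sketch, the decision rule looks roughly like this (the file name, spectrum scaling, and smoothing length are placeholders, not the exact parameters of my script):

% Sketch of the decision rule only; 'test_3.txt', the spectrum scaling
% and the 5-point smoothing length are assumptions for illustration.
data = load('test_3.txt');        % two columns; the first one is not used
x = data(:,2);
P = abs(fft(x)).^2 / numel(x);    % power spectrum (one possible scaling)
P = conv(P, ones(5,1)/5, 'same'); % moving average over the spectrum
if max(P(50:200)) > 20            % peak between samples 50 and 200
    disp('true input signal');
else
    disp('invalid input signal');
end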
This is what I get for your "test_3" data:
I just changed the FromDDT properties after that subtract node…
I still don't understand why you try to pick exactly 200 elements, starting at index 50, from the filtered power spectrum.
What kind of data do those files contain? What does the (unused) first column contain?
Unfortunately I don't have the needed toolboxes installed in Matlab to run your m script…