I have a PXI-5122 digitiser (14-bit acquisition, NI-Scope 2.5) and I make the following measurement:
- sample rate = 50 MS/s
- record length = 4M samples (stored as 16-bit)
-> acquisition time = 4M / 50 MS/s = 0.08 s
and I want to sort these 4 million samples.
My algorithm is this one:
Assume that I need 2^N divisions (usually N = 7 to 9).
I apply a shift-right of N bits to each of my 4M samples, which gives an index into a counter array that I increment for each occurrence:
for each acquisition
    for all the samples do
        index = data_16bit >> N
        hist[index]++
    end
end
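
For reference, here is the same loop written as a C sketch, just to make the intended operations explicit (the function and variable names are mine, not from my VI):

    /* C sketch of the binning loop above. hist must hold 2^(16-N) counters,
       since shifting a 16-bit sample right by N bits leaves a (16-N)-bit index. */
    #include <stdint.h>
    #include <string.h>

    #define NUM_SAMPLES (4L * 1024 * 1024)   /* 4M record length */

    void bin_samples(const int16_t *data, uint32_t *hist, int N)
    {
        memset(hist, 0, sizeof(uint32_t) << (16 - N));  /* clear 2^(16-N) bins */
        for (long i = 0; i < NUM_SAMPLES; i++)
            /* cast to unsigned so negative codes map to the upper half of
               the histogram instead of producing a negative index */
            hist[(uint16_t)data[i] >> N]++;
    }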
My PXI controller is a Pentium III at 1.2 GHz, so even at, say, 10 CPU cycles per sample, 4M shift-and-increment operations should take on the order of 30-40 ms, i.e. well under 1 second even in the worst case. But in fact it takes around 26 s to sort an acquisition of 0.08 s!
So I tried to compare with a sorting method that uses not only the output wfm of niScope Fetch Binary 16 but also the gain and offset of the channel. That gives me voltages, which are floats, so handling floats should be slower than handling plain binary numbers. But that is not the case: the float version takes 26.1 s to sort the same amount of data (4M). Does LabVIEW treat every number as a float? Because it is very strange that a basic low-level operation like bit shifting + counter incrementing is not faster than the same thing done with floats.
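
To be clear about what I mean by the float version, it is essentially the following C sketch (again, all names are illustrative, and the vmin/vmax range binning is my assumption about how the voltage histogram would be computed):

    /* C sketch of the float variant: samples already scaled to volts,
       binned linearly into nbins = 2^N divisions between vmin and vmax.
       hist is assumed zero-initialised by the caller. */
    #include <stdint.h>

    void bin_voltages(const double *volts, long nsamples,
                      uint32_t *hist, long nbins,
                      double vmin, double vmax)
    {
        double scale = (double)nbins / (vmax - vmin);  /* volts-to-bin factor */
        for (long i = 0; i < nsamples; i++) {
            long idx = (long)((volts[i] - vmin) * scale);
            if (idx >= 0 && idx < nbins)               /* skip out-of-range samples */
                hist[idx]++;
        }
    }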
I assume my algorithm is not perfect, so I would like to know if you have some ideas that would be better suited to LabVIEW.
Thanks for your support.