
LabVIEW


logarithmic binning

Hi,

 

I have a host of temperature and capacitance data saved against timestamps in .csv files, with one data point per second for both temperature and capacitance at each timestamp. Now I want to bin and average the data points so that I get one data point every 0.1 decade. I think the approach should be: read the .csv files, fix the lower and upper limits of the timestamps, then average the temperature and capacitance values in that range by fitting separate trendlines through the points. Finally, I would like to take the value of temperature and capacitance from the fitted line at an intermediate timestamp (somewhere in the middle of the prefixed bin range) as the representative value for that bin, then repeat this process for the next decades to get the representative values as a function of the corresponding timestamps.

 

Could you people help me with some guidelines about how to code this?

 

I have attached one sample dataset for your convenience.

 

Thanks

Sakib

Message 1 of 14
What you want to do is not trivial. You should bone up on the online tutorials and then have a go at creating something. Post questions as they arise.

Mike...

Certified Professional Instructor
Certified LabVIEW Architect
LabVIEW Champion

"... after all, He's not a tame lion..."

For help with grief and grieving.
Message 2 of 14

"Now, I want to do the binning and averaging of the data points so that i get one data point every 0.1 decade. "

 

Could you define what you mean by "0.1 decade"? As I see it, you just want to plot your data with decimation (in time), including averaging. Right? There are built-in VIs for this task.

 

Logarithmic binning means something totally different to me: in statistical physics, "logarithmic binning" is usually an algorithm used when you want to calculate, for example, an earthquake amplitude distribution (you could call it a kind of histogram). This distribution yields a power-law fit. Since the occurrence of very large earthquakes is very low, we can improve the quality of the power-law fit if we increase the bin sizes in a logarithmic way. In the end, you plot your distribution (P(magnitude)) on a log-log scaled Graph. Such a distribution does not contain time information; it only shows the scaling behaviour of your physical quantity.

In your case I do not understand why you would want such binning.

 

Edit: I would use the built-in VI called "Decimate (single shot).vi"; you can find it under the Signal Processing --> Signal Operation palette.

Message 3 of 14

For example:

decimation_example_BD.png

 

edit: actually the last "double to time stamp" conversion is not necessary... and you can simplify other parts too...

edit2: and you can just do the same very easily for example in Excel also...

Message 4 of 14

Hi Blokk,

 

Your code is so awesome. It perfectly serves my purpose when I need to do the binning and averaging in linear mode (one point out of every pre-set number of points). But apart from that, I also need to do the binning and averaging in logarithmic mode. I am so sorry for the confusion with the heading ("Logarithmic Binning") that I used. Please have a look at the attached Excel sheet; it will give you the idea of what I am trying to explain.

 

I think I somehow need to control the start and end points of all the decimation operations (and also the decimation factor) to achieve such a binning and averaging process.

 

Please share your views on these.

Sakib

Message 5 of 14

Sorry, I still do not understand what you want 🙂 What is your goal with all of this, actually? Is there some kind of exponential or power-law function relating time and capacitance (temperature)? Why don't you just plot your data on a semi-log or a log-log Graph? The sample data you attached does not show any interesting feature when I plot it on log scales...

Message 6 of 14

I am extremely sorry for not making my motivation clear. Actually my research is with volume relaxation of polymers. Some of my experiments last for several days (sometimes months!!!). So, I do need to have the time axis in log scale.

 

As far as I have understood from your code, on a linear scale, depending on the decimation factor, a specific number of points collapses into one single point. But if I take a semi-log time axis, these points will not be equally spaced. Whereas, if I can somehow divide the whole data set into several segments and define a different decimation factor for each segment, then when plotted on a semi-log scale I will get equally spaced points throughout the plot.

 

I have attached a modified version of the Excel file for your understanding. Here, I plotted Temperature (°C) vs Time (s) with the time axis in log scale. A varying decimation factor will give me equally spaced data, as plotted here.

Hope you understand the motivation this time 🙂

 

I have made some progress working with your code. Please please have a look at these and share your views ...

 

Thanks

Sakib

Message 7 of 14

"I am extremely sorry for not making my motivation clear. Actually my research is with volume relaxation of polymers. Some of my experiments last for several days (sometimes months!!!). So, I do need to have the time axis in log scale."

 

Oh, I see! Now the whole story makes sense!  🙂 

I can have a look at your data and code tomorrow; right now I need a little rest, I think (just 4 hours ago, coming home from work by bike, an old gentleman hit me with his car right on the bicycle road; thank God, no fracture, just back from the ambulance... people, always wear a bike helmet! It may save your life... 😉 🙂)

 

Hmm, what kind of experimental setup do you use? You vary the temperature and measure the volume change of the polymer via some kind of capacitance sensor? I like such interesting, long-lasting experiments 🙂 But still, the worst is experimental earthquake research: you have to wait several thousands of years to collect good statistics 😄

Do you make computer simulation models too, besides the experimental work? Some ex-colleagues of mine are working in the field of fracture mechanics, doing fiber-bundle computer simulations... Things get really interesting when you need real computational power, or parallel programming 🙂

 

Message 8 of 14

I got really scared hearing about the accident you had. Hope you are doing fine now.

 

You are almost right about the setup. I change the temperature and use a capacitance sensor to capture the volume change of the polymers. But I only do experiments, not simulations. Previously, I used to collect raw data from LabVIEW and did the binning and averaging in Excel. But now I am trying to accommodate the binning and averaging part in the same code.

 

Hope to hear back from you soon. 🙂

 

Thanks

Sakib

Message 9 of 14

Yep, some ibuprofen did the trick 🙂

 

I dug into some of my old ANSI C code to find something similar. If you do not mind, I put here only C-like pseudo code; I hope it will help you (if it works the way you need, you can implement it in LabVIEW). I do not have access to LV 2014 right now, so I could not check the VI you posted.

 

So let's imagine you have time values from 1 second to 1000 seconds, so you have 1000 data points.

 

P = 1000;
delta = 0.1;                // this is how you want to plot on your log scale

logmin = log10(t_0);        // in our case this value is zero, since t_0 = 1 sec
logmax = log10(t_max);      // log10(1000) = 3

N = (int)((logmax - logmin) / delta);   // here you calculate your actual bin
                                        // number, now it is 30!

temperature_old_array[P];   // the raw data
temperature_new_array[N];   // the binned averages, initialized to zero
counter[N];                 // samples per bin, initialized to zero

for (i = 0; i < P; i++)
{
  k = (int)((log10(t_i) - logmin) / delta);  // calc the bin location for the
                                             // actual time point, using an
                                             // (int) typecast
  if (k >= N) k = N - 1;    // so that t_i = t_max falls into the last bin
  temperature_new_array[k] += temperature_old_array[i];
  counter[k]++;
}

for (k = 0; k < N; k++)
  if (counter[k] > 0)       // skip empty bins to avoid dividing by zero
    temperature_new_array[k] /= counter[k];   // calc average!

oh, I just forgot about the X-point locations; this is the way to calculate the "middles" of the bins:

for (k = 0; k < N; k++) time_new_array[k] = t_0 * pow(10, k * delta + delta / 2);

edit2: note that in C, log() is the natural logarithm, so log10() is needed here, not to mix up the natural and the 10-based logarithms; and since ^ is bitwise XOR in C, the powers of 10 need pow().

 

Message 10 of 14