LabVIEW


How can I optimize (or redo) this graph filter?

Solved!
Go to solution

I am building a datalogger application. I want to be able to view maybe up to an hour of data sampled at 1 kHz so I can check for voltage spikes. The problems I am trying to solve are as follows:

1) When trying to graph around 100,000 data points or more, my waveform graph really starts to bog down, but I need the graph to continue running in real time.

2) Reasonably, I expect to be able to make the graph area 1000 pixels across at most. In other words, it is impossible to accurately display more than 1000 data points at once, but I need to be able to catch voltage spikes that may be only one or two data points wide.

My attempt at solving this problem is attached, along with a unit test which, while also open to criticism, isn't as important as the VI it is made to test. My idea was to split the input waveform into chunks the size of a pixel and return only a max and a min for each chunk. Unfortunately, that means I need a loop in there, which I assume is the main thing slowing the VI down.
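In text form, the chunked min/max idea looks roughly like this (a plain Python sketch I wrote for illustration; the function name and details are mine, not the attached VI):

```python
def minmax_decimate(samples, n_chunks):
    """Reduce a long sample list to per-chunk (min, max) pairs,
    roughly one chunk per horizontal pixel, so that narrow spikes
    survive the decimation instead of falling between plot points."""
    chunk = max(1, len(samples) // n_chunks)
    out = []
    for i in range(0, len(samples), chunk):
        block = samples[i:i + chunk]
        out.append(min(block))
        out.append(max(block))
    return out

# 100,000 points reduced to ~1000 chunks' worth of min/max pairs
data = [0.0] * 100_000
data[54_321] = 5.0                 # a one-sample spike
plot = minmax_decimate(data, 1000)
assert max(plot) == 5.0            # the spike survives decimation
```

The per-chunk loop here is the same structural bottleneck described above: it runs once per output pixel, which is cheap, but any per-iteration array appending inside it adds up.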

 

If there is a better way to do this, I am all ears.

 

P.S. I will be away from work for the next couple of weeks, so don't take my lack of responsiveness as a sign that I am ignoring replies. I will respond as soon as I get back.

Message 1 of 4

Graphing is passive and leaves detecting spikes to the human observer. I am sure there is an algorithmic way to accurately detect voltage spikes without any graphing at all.

 

So, what criterion distinguishes "data" from "spikes"?

Message 2 of 4
Solution
Accepted by duwaar

Take a look at this whitepaper.

 

There is also an example included. Basically, they bin the graph into regions and oversample by a factor of three; that is, they allow 3000 points in a 1000-pixel plot. In each region, they determine the minimum and maximum and plot those, so you can always detect spikes in your data. What I believe is slowing down your approach: you are constantly appending to an array, and waveform operations are slow. Can you just use a double array and then simple array operations?
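The same min/max binning can be done with array operations instead of per-element appends. A NumPy sketch of that idea (my own illustration, not the whitepaper's code or the attached VI; names are mine):

```python
import numpy as np

def minmax_bins(samples, n_bins):
    """Vectorized min/max binning: no per-sample Python loop and no
    repeated appends. Trailing samples that don't fill a whole bin
    are dropped for simplicity."""
    samples = np.asarray(samples, dtype=np.float64)
    width = len(samples) // n_bins
    trimmed = samples[:n_bins * width].reshape(n_bins, width)
    lo = trimmed.min(axis=1)
    hi = trimmed.max(axis=1)
    # Interleave min and max so the plotted line sweeps each bin's
    # full range, which keeps one-sample spikes visible.
    return np.column_stack((lo, hi)).ravel()
```

Reshaping into a 2D array and reducing along one axis replaces the build-array-in-a-loop pattern with two bulk operations, which is the same trade the advice above suggests: work on a preallocated double array rather than appending to a waveform.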

 

You could also use a wavelet method to look for spikes.

 

Attached is a method to decimate a double array for a plot that still includes the spikes. (Based on the link above, but faster.) For 100k points it is effectively instantaneous, although I bet @altenbach can speed it up and shrink it to the size of a postage stamp. I'll let you figure out what dt should be. (Hint: it will depend inversely on the number of points in the output array over your time period.)

 

mcduff

 

Message 3 of 4

altenbach,

There definitely are ways to detect spikes without graphing the data, but detecting spikes is not the main purpose of this VI, though it is an important one. The main problem I'm trying to solve is performance. I have a lot of data to display, and if I just spew it all onto a graph I get two problems:

1) The program bogs down and starts to fall behind the real-time update speed that I want.

2) Plotting hundreds of thousands of data points in a space only a few hundred pixels wide produces a blob of color rather than a graph.

I was hoping to solve both problems with the VI I attached, but I only managed to solve the second. In fact, using the filter actually slows the program down even more, so that's what I mean by asking for help with optimization: I need a VI that does to the data exactly what this one does, but more efficiently.

 

To answer your question directly: nothing distinguishes "data" from "spikes". I just want to make sure I keep the spikes when I thin the data out. The previously attached VI does that, but (apparently) not very efficiently.

Message 4 of 4