
Growing String Slowing VI


I have a program where all data processing is based on one ASCII string. The program writes the measurement data to a .txt file, performs its calculations by parsing the string, and draws the graph by reading values from the same string. The problem is that the string grows so long that it slows down the state machine.

 

Attached is a simplified sample program where the phenomenon occurs. In the program, I append to the string on each iteration and plot it on a graph at the same time.

 

Do you think the problem is that LabVIEW dynamically allocates memory for the string variable, so that at some point more memory has to be allocated on each iteration and the iteration time no longer stays constant?

 

In the real environment, I collect data from two measuring devices and combine the results into one graph. I have managed to eliminate all other possible causes, and now it seems that the long string is what is slowing down the state machine's loop rate.

 

If I have to give up using strings, the entire program architecture will have to be redesigned, and I want to avoid that as long as possible.

 

Is it possible to preallocate a large enough block of memory for a string variable so that I can get rid of dynamic memory allocation, or is the problem somewhere else?

 

In the example program, the problems start at around 3000 iterations, at which point the loop delay becomes much larger than the constant I set.

 

LabVIEW 2014 Developer Suite, 2.6 GHz i5, 8 GB RAM

Message 1 of 9
A string is not an efficient format for storing numeric data. It is best to store the data as a numeric array and convert it to a string only when needed, such as when writing to file.
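To make that concrete, here is a minimal Python sketch of the idea (the actual fix would be done on the LabVIEW diagram with an array in a shift register; the file name and sample generator below are made-up stand-ins): keep the samples numeric, and format them as text exactly once, at write time.

```python
samples = []  # numeric data stays a numeric array in memory

def acquire_sample(i):
    return 0.5 * i  # stand-in for a real measurement

for i in range(10_000):
    samples.append(acquire_sample(i))  # no string conversion per iteration

# Convert to text exactly once, when writing to file.
with open("measurement.txt", "w") as f:
    f.write("\n".join(f"{x:.6f}" for x in samples))
```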
Santhosh
Soliton Technologies

Message 2 of 9

The conversion to a string and then back to a number is also quite slow, and the more conversions you do, the worse your performance gets. As previously stated, keeping an array of your data would be far more efficient, because those conversions would not be required.



Message 3 of 9

Did you know that LabVIEW is an acronym?  It stands for Laboratory Virtual Instrument Engineering Workbench.  You have to learn to "think like an Engineer".  Suppose you have a signal you want to acquire and plot, and say you want to sample this signal at 100 Hz, or every 10 msec.  You have a loop (in your State Machine) that generates a data point every 10 msec.  What does it do with this data point?  If you try to save it, or plot it, or keep it in memory, you'll be accumulating data pretty quickly, and dealing with 3000 points (and growing) after acquiring 30 seconds of data is going to slow you down!

 

But this is LabVIEW, which lets you run several tasks ("loops") at the same time: one generating the data at 100 Hz (often called the "Producer"), and another processing the data (often called the "Consumer").  This "dual-loop" technique is called, naturally, the "Producer/Consumer Design Pattern" -- if you open LabVIEW, click on "File", then click the second entry on the drop-down called "New...", expand "From Template" (click the "+" sign) and choose "Producer/Consumer Design Pattern (Data)", you'll see a pair of While Loops: the Producer at the top with a Timing function to control its speed, and the Consumer down below (which doesn't need a timer, as it "waits" for the Producer to send it data).
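For anyone who wants the pattern in text form, here is a minimal Python sketch of the same Producer/Consumer idea (in LabVIEW the Obtain Queue, Enqueue Element, and Dequeue Element functions play the role that queue.Queue plays here; the rates and point counts are illustrative only):

```python
import queue
import threading
import time

data_q = queue.Queue()

def producer():
    """Acquire a point every 10 ms (100 Hz) and hand it off immediately."""
    for i in range(1000):
        data_q.put(0.5 * i)   # stand-in for a real measurement
        time.sleep(0.010)
    data_q.put(None)          # sentinel: tell the consumer to stop

def consumer():
    """Process points as they arrive; the producer's timing is unaffected."""
    while True:
        point = data_q.get()  # blocks ("waits") until data arrives
        if point is None:
            break
        # log/plot/file the point here, at whatever pace this loop manages

threading.Thread(target=producer).start()
consumer()
```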

 

So now all you have to do is speed up the Consumer.  You do that by not keeping the data in memory -- write it to a disk file and also plot it on a graph or chart (I recommend a Chart, which you can set to "scroll", as though it were a pen drawing on a moving strip of paper, or an oscilloscope, with the beam showing time by moving left-to-right and the data values by moving vertically).

 

Bob Schor

Message 4 of 9

I understand your wish not to redesign your program.
But how difficult is it to change the data type which holds your data (currently a string)? If you change it in all sub-VIs to a cluster of X and Y data (make this cluster a typedef), you will get far faster access to the data (as crossrulz stated already). That would handle the first bottleneck.
The second problem as the amount of data rises is the performance of the XY Graph. You are trying to display XY data in a graph 50 times per second. I assume the X data is not equidistant (otherwise you would use a normal waveform graph, which performs much better), so it is a lot of work for the XY Graph control. You could therefore lower the refresh rate of the XY Graph.
The third problem is the amount of data fed into the XY Graph. It is probably not necessary to display 3000 values across 5 cm of physical monitor length. Consider decimating the data before it is displayed; don't do it on every display cycle, but keep a separate decimated array (cluster of X and Y values), as sketched below.
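Here is what that decimation could look like, sketched in Python since a LabVIEW diagram can't be pasted as text (in LabVIEW you might use Decimate 1D Array or an indexed For Loop instead; max_points is an arbitrary illustrative limit):

```python
def decimate(xs, ys, max_points=500):
    """Return at most roughly max_points (x, y) pairs for display."""
    step = max(1, len(xs) // max_points)
    return xs[::step], ys[::step]

xs = list(range(20_000))            # full X data stays untouched
ys = [0.1 * x for x in xs]          # full Y data stays untouched
disp_x, disp_y = decimate(xs, ys)   # the graph only redraws ~500 points
```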
As a side effect of changing the data type to a typedef'd cluster, you get the opportunity to transport additional information along with your measurement values. For example, you could also include the decimated arrays or the index of the active data set.

Greets, Dave
Message 5 of 9

Fundamentally, you need to find a way not to retain data in memory indefinitely, or you are guaranteed to run into problems as more data accumulates over time. If you want to keep using one big string to hold all the data, you need to discard old data when the string reaches a certain size. The String Subset function may help with that.
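As a minimal sketch of that trimming idea (String Subset in LabVIEW; this Python stand-in uses plain slicing, and MAX_LEN is an arbitrary illustrative limit):

```python
MAX_LEN = 100_000  # cap on how many characters of history to keep

def append_and_trim(log: str, new_entry: str) -> str:
    """Append new data, then drop the oldest data once the cap is hit."""
    log += new_entry
    if len(log) > MAX_LEN:
        log = log[-MAX_LEN:]             # keep only the newest characters
        log = log[log.find("\n") + 1:]   # re-align to a full line
    return log
```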

 

Here are some other ways to expire old data:

Message 6 of 9

Keep it as an array of numbers as long as you can and only convert to text when you write it to file (actually, you can wire the numbers straight to that function as well).

A simple change to a complex array, and it easily keeps 5 ms timing after 20k numbers:

[Attached image: Yamaeda_0-1768490974339.png]

That way you don't need any conversion before the plot:

[Attached image: Yamaeda_1-1768491022359.png]

[Attached image: Yamaeda_0-1768491123798.png]

 

 

Message 7 of 9

The "right way to do this" is to stop using strings. But if you're set on using them, then quit converting the ENTIRE array to a String each time. By loop 10,000, you've converted the same Double to string 10,000 times- and the second one 9,999 times, etc.

 

If I had to guess, it's your conversion that's the problem, not the string itself. Just convert the NEW data to a string. You can keep sending strings between modules; just have each module convert only the newly arrived data instead of ALL of it, as in the sketch below.
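A minimal Python sketch of that incremental conversion (in LabVIEW you would keep the parsed array and a character offset in shift registers; the names are illustrative, and it assumes the log always ends on a complete line):

```python
parsed = []   # numbers converted so far
offset = 0    # how many characters of the log have already been parsed

def parse_new(log: str) -> None:
    """Convert only the freshly appended tail, never the whole string."""
    global offset
    for line in log[offset:].splitlines():
        if line:
            parsed.append(float(line))
    offset = len(log)   # everything up to here is now done
```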

Message 8 of 9

Just looked a little deeper at your code and gave it about 2 minutes (I've got a meeting coming up I need to prep for). So, please note this is A BAD WAY TO DO THIS, but we've all been there... sometimes you have to kludge something together and know that you'll have to fix it some other time. But before my code, some notes:

 

1- Your string should, at LEAST, be properly delimited. You're using a CRLF after every entry. Change that to a tab separating the entries within a row and a CRLF after every row. Then, when you do Spreadsheet String to Array, wire in a 2D array and you no longer need the Decimate trick. A sketch of that layout follows below.
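Purely to show that layout, here is a small Python sketch (tab between the columns of a row, CRLF between rows; parsing it back yields the 2D array directly, which is what wiring a 2D array into Spreadsheet String to Array achieves in LabVIEW):

```python
# Build the string: X and Y separated by a tab, rows separated by CRLF.
log = "".join(f"{x:.6f}\t{0.1 * x:.6f}\r\n" for x in range(5))

# Parse it back: each line splits cleanly into one row of a 2D array.
table = [list(map(float, line.split("\t")))
         for line in log.splitlines() if line]
print(table)   # [[0.0, 0.0], [1.0, 0.1], ...] -- no Decimate trick needed
```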

 

2- The method I'm showing below keeps the X and Y arrays separate; you should probably combine them into a 2D array, but maybe you need them separate in your real app, and like I said, I'm super crunched for time.

 

3- It's not string storage that's slowing you down; it's converting the ENTIRE string to a new array over and over again. An ever-growing string isn't ideal, but it's not THAT big of a problem. Converting strings, on the other hand, is slow. Doing it to a 20-character string? Very fast. Doing it to a 100,000-character string? Very slow. Since you've already converted everything in a previous loop, you can always convert just the newest bit.

 

I just ran it to 10,000 iterations and the loop iteration time stayed at 20 ms through the whole test.

 

 

[Attached image: BertMcMahan_0-1768503112641.png]

 

Message 9 of 9