Need Speedup


And turn debugging off!

 

wiebeCARYA_0-1712580988184.png

 

Message 11 of 21

@Yamaeda wrote:

Making a preallocated string and replacing content might be a good (and old school) alternative.


Wait wait wait I'm one of the younger ones here! *shrivels up*

 

Still the de facto way any time you do embedded C 😉
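
For what it's worth, here's a minimal C sketch of that preallocated-buffer idea; the buffer layout, field width, and values are only illustrative, not taken from the original VI:

#include <stdio.h>
#include <string.h>

int main(void)
{
    /* Allocate one fixed-size line buffer up front; every iteration
     * overwrites the variable field in place instead of building a new string. */
    char line[16];
    memcpy(line, "value = 000000\n", 16);   /* template incl. trailing '\0' */

    for (int i = 0; i < 5; i++) {
        /* Overwrite only the 6-digit field at offset 8; no reallocation. */
        snprintf(line + 8, 7, "%06d", i * 100);
        line[14] = '\n';                    /* snprintf put a '\0' where the newline was */
        fwrite(line, 1, 15, stdout);
    }
    return 0;
}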

~ Self-professed LabVIEW wizard ~
Helping pave the path to long-term living and thriving in space.
Message 12 of 21

wiebe@CARYA wrote:

The Format Into String might also be slow.

 

Format Into String probably parses the format string on each iteration. Even though the format string can't change in this example, I don't think the function gets replaced by a more specialized one just because the format string is a constant.

 

There's a chance, though, that concatenating a string from constants and more primitive functions is slower than parsing the format string.

 

This could be benchmarked, but for me, refactoring away Format Into String usually made things faster...


wiebeCARYA_1-1712583488098.png
wiebeCARYA_2-1712583496545.png

 

Of course, Format Into String is easier to read...
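
In C terms the trade-off reads roughly like this (a hedged analogy, not a claim about how LabVIEW implements Format Into String): the formatter re-scans its format string on every call, while copying the constant fragments yourself skips that scan at the cost of readability:

#include <stdio.h>
#include <string.h>

/* Variant A: the formatter has to re-scan "x = %d, y = %d\n" on every call. */
static int fmt_formatted(char *out, int x, int y)
{
    return sprintf(out, "x = %d, y = %d\n", x, y);
}

/* Variant B: the constant fragments are copied directly and only the numbers
 * go through a conversion; more code, less parsing per call. */
static int fmt_manual(char *out, int x, int y)
{
    char *p = out;
    memcpy(p, "x = ", 4);        p += 4;
    p += sprintf(p, "%d", x);    /* number only */
    memcpy(p, ", y = ", 6);      p += 6;
    p += sprintf(p, "%d", y);
    *p++ = '\n';
    *p   = '\0';
    return (int)(p - out);
}

int main(void)
{
    char buf[64];
    fmt_formatted(buf, 3, -7);
    fputs(buf, stdout);          /* x = 3, y = -7 */
    fmt_manual(buf, 3, -7);
    fputs(buf, stdout);          /* same text, built piecewise */
    return 0;
}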

Message 13 of 21

@IlluminatedG wrote:

@Yamaeda wrote:

Making a preallocated string and replacing content might be a good (and old school) alternative.


Wait wait wait I'm one of the younger ones here! *shrivels up*

 

Still the de facto way any time you do embedded C 😉


Sadly, pre-allocating the entire string requires a shift register, and you'll lose the parallel execution of the for loop.

 

If you can pre-allocate all lines, you might gain a little, but that seems like a corner case to me.

 

With the pre-allocation, you lose flexibility. The numbers aren't a fixed size, as a negative number has an extra '-'. That doesn't have to be a problem, but the result won't be 100% the same.
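
Put in printf terms, just to illustrate the width point (not the original code): a free-running format grows by one character when the sign appears, while a fixed-width field keeps the preallocated length constant but pads differently, so the output isn't byte-for-byte identical:

#include <stdio.h>

int main(void)
{
    /* Free-running width: 1 character for 5, 2 characters for -5. */
    printf("[%d]\n", 5);     /* prints [5]  */
    printf("[%d]\n", -5);    /* prints [-5] */

    /* Fixed width keeps the preallocated slot the same size, but the
     * padding makes the text differ from the free-running version above. */
    printf("[%4d]\n", 5);    /* prints [   5] */
    printf("[%4d]\n", -5);   /* prints [  -5] */
    return 0;
}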

Message 14 of 21

wiebe@CARYA wrote:


Sadly, pre-allocating the entire string requires a shift register, and you'll lose the parallel execution of the for loop.

 

If you can pre-allocate all lines, you might gain a little, but that seems like a corner case to me.

 

With the pre-allocation, you lose flexibility. The numbers aren't a fixed size, as a negative number has an extra '-'. That doesn't have to be a problem, but the result won't be 100% the same.


True, I haven't used it; I was just spouting ideas. In this case (always with strings?) it seemed to benefit a good amount from parallelisation.

 

In a similar problem I remember having a VI that looked for some string and sent the rest back through a shift register until everything was parsed, and it was _slow_. By simply switching to sending the found index through the shift register and using it as the offset for future lookups, it became 10x faster. No memory management and no string copies.
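
The same pattern sketched in C with made-up data (not the original VI): keep one offset into the source string and resume the search from there, instead of copying the tail after every match:

#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *text   = "a=1;a=2;a=3;a=4;";
    const char *needle = "a=";

    /* Slow pattern (not shown): after each hit, copy the remaining text into
     * a new buffer and search that copy, which allocates and copies every time.
     * Fast pattern: keep a single offset into the original string and resume
     * the search from there; nothing is ever copied. */
    size_t offset = 0;
    const char *hit;
    int count = 0;

    while ((hit = strstr(text + offset, needle)) != NULL) {
        count++;
        printf("match %d at index %zu\n", count, (size_t)(hit - text));
        offset = (size_t)(hit - text) + strlen(needle);   /* continue after the match */
    }
    return 0;
}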

 

Yamaeda_1-1712589172098.png

 

G# - Award winning reference based OOP for LV, for free! - Qestit VIPM GitHub

Qestit Systems
Certified-LabVIEW-Developer
Message 15 of 21

As a side note: Starting with LV 2021, the Concatenating tunnel mode works with strings.

paul_cardinale_1-1712936194283.png

"If you weren't supposed to push it, it wouldn't be a button."
Message 16 of 21

@paul_cardinale wrote:

As a side note: Starting with LV 2021, the Concatenating tunnel mode works with strings.


WHY O WHY don't they tell us stuff like that??

Message 17 of 21

wiebe@CARYA wrote:

@paul_cardinale wrote:

As a side note: Starting with LV 2021, the Concatenating tunnel mode works with strings.


WHY O WHY don't they tell us stuff like that??


I assumed that, because the down-converted code was broken and a concatenating string tunnel would have fixed it. 😄

(Apparently, they did not bother implementing a down-conversion substitution, as they did with the IPE, for example.)

Message 18 of 21

@altenbach wrote:

wiebe@CARYA wrote:

@paul_cardinale wrote:

As a side note: Starting with LV 2021, the Concatenating tunnel mode works with strings.


WHY O WHY don't they tell us stuff like that??


I assumed that, because the down-converted code was broken and a concatenating string tunnel would have fixed it. 😄

(Apparently, they did not bother implementing a down-conversion substitution, as they did with the IPE, for example.)


I suppose concatenating the string in the tunnel is faster? I'd guess that building an array and then concatenating it takes an extra step.
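
Roughly the two strategies, sketched in C under the assumption that the tunnels behave like these patterns (how the compiler actually lowers either tunnel isn't documented here): appending each piece into one growing buffer as it is produced, versus collecting the pieces and doing one sized concatenation at the end:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    const char *pieces[] = { "line 1\n", "line 2\n", "line 3\n" };
    const size_t n = sizeof pieces / sizeof pieces[0];

    /* Concatenating-tunnel analogue: append each piece into one growing
     * buffer as soon as it is produced (occasional reallocation). */
    size_t len = 0, cap = 16;
    char *direct = malloc(cap);
    direct[0] = '\0';
    for (size_t i = 0; i < n; i++) {
        size_t add = strlen(pieces[i]);
        if (len + add + 1 > cap) {
            cap = (len + add + 1) * 2;              /* grow geometrically */
            direct = realloc(direct, cap);
        }
        memcpy(direct + len, pieces[i], add + 1);   /* copy incl. '\0' */
        len += add;
    }

    /* Indexing tunnel + Concatenate Strings analogue: keep the pieces,
     * measure them, then do one sized copy at the end (one extra pass,
     * but a single allocation for the result). */
    size_t total = 0;
    for (size_t i = 0; i < n; i++) total += strlen(pieces[i]);
    char *joined = malloc(total + 1);
    char *p = joined;
    for (size_t i = 0; i < n; i++) {
        size_t add = strlen(pieces[i]);
        memcpy(p, pieces[i], add);
        p += add;
    }
    *p = '\0';

    fputs(direct, stdout);
    fputs(joined, stdout);
    free(direct);
    free(joined);
    return 0;
}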

Message 19 of 21

wiebe@CARYA wrote:

@altenbach wrote:

wiebe@CARYA wrote:

@paul_cardinale wrote:

As a side note: Starting with LV 2021, the Concatenating tunnel mode works with strings.


WHY O WHY don't they tell us stuff like that??


I assumed that, because the down-converted code was broken and a concatenating string tunnel would have fixed it. 😄

(Apparently, they did not bother implementing a down-conversion substitution, as they did with the IPE, for example.)


I suppose concatenating the string in the tunnel is faster? I'd guess that building an array and then concatenating it takes an extra step.


My (wild) guess is that the compiler will probably create nearly identical machine code after all optimization. 😄

Message 20 of 21