
BreakPoint


Rube Goldberg Code


@JÞB wrote:

 

That being said, if the optimization is known to exist at a lower level, there is no need to duplicate it at a higher one. Or is there? It would depend on the return in performance for each specific implementation, I THINK.


Philosophical questions here:

 

- If the compiler will optimise without you seeing, do you care?

- If the gain in self-optimisation is small relative to the cost of refining your method, do you care?

- Does the optimisation preclude expansion, or at least make it difficult? Do you care?

- Does optimisation result in obfuscation? Do you care?

 

The first one's fairly specific to coding here, but the last three are the same for any system you work on: mechanical, electrical, etc.

 

I'm a process developer - I can improve my process specifically for one variant at the cost of a negative effect somewhere else. I can improve my process unilaterally, but for such a small marginal gain that it's not worth it. I can change my process so that it's more efficient, but harder to maintain.

 

That last point is again relevant to software - we specifically avoid LVOOP in many situations where it would be perfect for the implementation, purely because we have to make sure the code is supportable by a team that is less well versed in its use, and the 'cost' of maintenance is higher than for other implementations.

 

I'm not saying that we purposefully implement RGs, but there are times when I've chosen more long-winded solutions so that the workings are more obvious, or because the most efficient method for my current problem is to ignore adaptation for future ones.

---
CLA
Message 1751 of 2,571
(11,198 Views)

@thoult wrote:

[...]

I'm not saying that we purposefully implement RGs, but there are times when I've chosen more long-winded solutions so that the workings are more obvious, or because the most efficient method for my current problem is to ignore adaptation for future ones.


Probably worth a dedicated thread for discussion. A bit OT for this mega thread. 🙂

 

I guess the point of my last post was: "It probably doesn't matter to modern processors, but since I'm blissfully ignorant of those low-level details that LabVIEW and other layers abstract away, I'll fall back on my early training and avoid operating on the value of data that I'm going to throw out anyway, because it's a good habit."
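As a rough illustration of that habit (a hypothetical C sketch, since the discussion here is about LabVIEW diagrams): a computation whose result is thrown away is exactly the kind of thing an optimizing compiler will usually delete on its own, so removing it by hand is mostly about clarity and habit rather than measurable speed.

#include <stdio.h>

/* Hypothetical example, not from the thread: 'scaled' is computed and
   then never used. With optimization enabled (e.g. gcc -O2), dead-code
   elimination typically removes the multiplication entirely, so the tidy
   and the wasteful versions compile to much the same machine code. */
double average(const double *data, int n)
{
    double sum = 0.0;
    for (int i = 0; i < n; i++) {
        double scaled = data[i] * 1000.0;  /* value is thrown out anyway */
        (void)scaled;                      /* only silences the unused-variable warning */
        sum += data[i];
    }
    return (n > 0) ? sum / (double)n : 0.0;
}

int main(void)
{
    double d[] = { 1.0, 2.0, 3.0, 4.0 };
    printf("%f\n", average(d, 4));
    return 0;
}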


"Should be" isn't "Is" -Jay
0 Kudos
Message 1752 of 2,571
(11,186 Views)

Slicing and dicing an array of waveforms (seen here).

 

Message 1753 of 2,571
(11,109 Views)

@altenbach wrote:

Slicing and dicing an array of waveforms (seen here).



 


Hacking out a bunch more useless code in the following post. The VI actually cleans up nicely if you shave off the unneeded stuff.

Rube Like This.png


"Should be" isn't "Is" -Jay
Message 1754 of 2,571
(11,103 Views)

No code shown, but VI Analyzer seems to complain that:

 

"This diagram contains 410 write global variables, which is more than the user-specified maximum number(5)"

 

I would say it is a good candidate for this thread. Just guessing! 😄

 

(seen here)
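Purely as a guess at why a limit like that exists (a hypothetical C analogy, nothing to do with the posted VI): when shared state is written from hundreds of scattered places through globals, the data flow becomes invisible; returning a value or passing an explicit structure keeps it traceable.

#include <stdio.h>

/* Anti-pattern: any number of call sites can write these globals,
   so tracing who last changed them means reading the whole program. */
double g_gain;
double g_offset;

void calibrate_into_globals(double measured, double reference)
{
    g_gain   = reference / measured;    /* hidden side effect */
    g_offset = reference - measured;    /* hidden side effect */
}

/* Same computation with the data flow made explicit. */
typedef struct { double gain; double offset; } Calibration;

Calibration calibrate(double measured, double reference)
{
    Calibration c = { reference / measured, reference - measured };
    return c;    /* the caller decides where the result goes */
}

int main(void)
{
    Calibration c = calibrate(9.9, 10.0);
    printf("gain=%f offset=%f\n", c.gain, c.offset);
    return 0;
}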

Message 1755 of 2,571
(11,024 Views)

Well, THERE'S the problem!

Capture.PNG

And I cannot even read the poster's native language. 🙂


"Should be" isn't "Is" -Jay
0 Kudos
Message 1756 of 2,571
(11,023 Views)

@JÞB wrote:

Well, THERE'S the problem!


Next thing we'll see an "idea" to raise the default limit to 500. 😄

 


@JÞB wrote:

 

And I cannot even read the poster's native language:)


Here's what I use. Works great! 😉

 

 

0 Kudos
Message 1757 of 2,571
(11,015 Views)

This one is a good catch

A single for loop does the job
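For what it's worth, here is a text-language analogy of that point (hypothetical C, since the original is a LabVIEW diagram): rather than building intermediate copies of every slice, a single loop can index straight into the source array and produce the per-segment results.

#include <stdio.h>

#define N_SAMPLES 12
#define SEG_LEN    4

int main(void)
{
    double wave[N_SAMPLES] = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12 };

    /* One pass over the data: each segment's mean is computed directly
       from the source array, with no sliced copies in between. */
    for (int seg = 0; seg < N_SAMPLES / SEG_LEN; seg++) {
        double sum = 0.0;
        for (int i = 0; i < SEG_LEN; i++)
            sum += wave[seg * SEG_LEN + i];
        printf("segment %d mean = %f\n", seg, sum / SEG_LEN);
    }
    return 0;
}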


0 Kudos
Message 1758 of 2,571
(10,948 Views)

@MathieuSteiner wrote:

A single for loop does the job


This is a very long thread. Which post are you referring to?

0 Kudos
Message 1759 of 2,571
(10,921 Views)

You're absolutely right!

I had hoped that clicking "Reply" would somehow link to the post I was trying to answer (or maybe I made a mistake). I myself am currently unable to find the post I was answering. 🙂


0 Kudos
Message 1760 of 2,571
(10,915 Views)