
An End to Brainless LabVIEW Programming

This is the "An End to Brainless LabVIEW Programming" presentation from Darren.

 

The slides were last refreshed in November 2022.

 

You can watch a recording of the presentation here.

 

(bit.ly/brainlesslabview redirects here)

Comments
Gary_MavSysInc
Member
on

Thank you for the insights. Really good to hear someone at National Instruments encouraging developers to use Globals. I had begun to think that was akin to slaughtering the sacred cow. I would also like to add a few insights that you might consider for improving development. I think development can be greatly improved by doing the background research (otherwise known as RTFM) and having good requirements that can be used for testing the result. Smart programming is more about good design than fast keystrokes. As LabVIEW is more intuitive than other languages, it is also too easily hacked. Developers need to resist the temptation to rush into the programming step of the development cycle. A little time spent planning your attack saves a great deal of time.

wiebe@CARYA
Knight of NI
on

I'm really not too sure about selectively removing the class method outputs. I can see how it would be more readable and easier to diagnose, but are there downsides?

 

Can we be sure LV doesn't make copies of the objects when the wires are split? Parallelism is nice, but not at the cost of too many memory copies. Is there a balance to be considered?

 

I would hate to remove all the outputs, only to find out everything slows down because my class happens to hold a large amount of data. Or the class grows to hold large data later, and at some point I have to add the outputs back and re-synchronize all the callers, just to prevent the copies that might be made.

 

These kinds of answers are really hard to get. Show Buffer Allocations just isn't useful here; it only shows potential copies, not the costs of the copies nor the benefits...

 

Please shed some light on it (anyone).

 

wiebe@CARYA
Knight of NI
on

@Gary_MavSysInc wrote:
Developers need to resist the temptation to rush into the programming step of the development cycle. A little time spent planning your attack saves a great deal of time.

With OO development it seems to be quite normal to spend 25%-75% of the time on planning. I haven't seen that with LabVIEW OO development (mostly <25%; I don't want to scare people off using it). Bottom line: doing a project OO takes less time than doing it with "normal programming", at least for me. It is much more important to make a plan, though. The results I've seen are usually better, probably because of that required planning.


nathand
Proven Zealot
on

Yes, we can be sure that LabVIEW does not make copies of an object when wires are split, because LabVIEW never makes a copy when a wire branches (or anywhere on a wire, for that matter). Copies occur only at nodes that need to modify data, when the original data is also still needed. So, if you aren't modifying the data inside a class within a VI - and you probably aren't, if you don't have the class as an output - then you aren't making copies of your class by running those VIs in parallel. You might end up making copies of particular elements of the class, depending on the work going on inside the VI, but branching the class wires doesn't make copies.
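
Since LabVIEW has no text syntax, here is a rough Python sketch of the same copy-on-modify idea; it is purely illustrative (the Measurement class and its data are made up), and the analogy is loose because Python names are references while LabVIEW wires carry values. It shows the core point: branching is free, and a copy is only paid for where data is modified while the original is still needed.

    import copy

    class Measurement:
        """Stand-in for a LabVIEW class that holds a large data array."""
        def __init__(self, data):
            self.data = data

    original = Measurement(list(range(100_000)))

    # "Branching the wire": both names now refer to the same data.
    # Nothing is copied here, just as a LabVIEW wire branch copies nothing.
    branch_a = original
    branch_b = original

    # A read-only "node": inspecting the data never forces a copy.
    total = sum(branch_a.data)

    # A modifying "node" while the original is still needed elsewhere:
    # only here must a copy be made, so the other branch keeps its value.
    modified = copy.deepcopy(branch_b)
    modified.data[0] = -1

    assert original.data[0] == 0   # the untouched branch is preserved
    assert modified.data[0] == -1  # only the modifying path paid for a copy

The difference is that LabVIEW's compiler inserts the equivalent of that deepcopy for you, and only when its analysis determines another branch still needs the original; in the Python sketch you have to request the copy explicitly.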
