This is Darren's An End to Brainless LabVIEW Programming presentation.
The slides were last refreshed in July 2020.
You can watch a recording of the presentation here.
(bit.ly/brainlesslabview redirects here)
Thank you for the insights. It's really good to hear someone at National Instruments encouraging developers to use globals; I had begun to think that was akin to slaughtering the sacred cow. I would also like to add a few insights that you might consider for improving development. Development can be greatly improved by doing the background research (otherwise known as RTFM) and having good requirements that can be used for testing the result. Smart programming is more about good design than fast keystrokes. While LabVIEW is more intuitive than other languages, it is also too easily hacked. Developers need to resist the temptation to rush into the programming step of the development cycle. A little time in preparation for planning your attack saves a great deal of time.
I'm really not too sure about selectively removing the class method outputs. I can see how it would be more readable and easier to diagnose, but are there downsides?
Can we be sure LV doesn't make copies of the objects when the wires are split? Parallelism is nice, but not at the cost of too many memory copies. Is there a balance to be considered?
I would hate to remove all outputs, just to find out everything has slowed down because my class happens to hold a large amount of data. Or the class grows to hold large data later, and at some point I have to add outputs and re-synchronize the callers, just to prevent the copies that might be made.
These kinds of answers are really hard to get. Show Buffer Allocations just isn't useful here: it only shows potential copies, not the costs of the copies nor the benefits...
Please shed some light on it (anyone).
LabVIEW Programming (make LV more popular, read this)
@Gary_MavSysInc wrote: Developers need to resist the temptation to rush into the programming step of the development cycle. A little time in preparation for planning your attack saves a great deal of time.
With OO development it seems quite normal to spend 25%-75% of the time on planning. I haven't seen that with LabVIEW OO development (mostly <25%; I don't want to scare people away from using it). The bottom line is that doing a project OO takes less time than doing it with "normal" programming, at least for me. It is much more important to make a plan, though. The results I've seen are usually better, probably because of that required planning.
Yes, we can be sure that LabVIEW does not make copies of an object when wires are split, because LabVIEW never makes a copy when a wire branches (or anywhere on a wire, for that matter). Copies occur only at nodes that need to modify data, when the original data is also still needed. So, if you aren't modifying the data inside a class within a VI - and you probably aren't, if you don't have the class as an output - then you aren't making copies of your class by running those VIs in parallel. You might end up making copies of particular elements of the class, depending on the work going on inside the VI, but branching the class wires doesn't make copies.
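Since LabVIEW is graphical, the rule above can't be shown in G here, but by way of analogy, here is a minimal sketch in a textual language of the same copy discipline: branching shares the data (no copy), and a copy is made only at the point where one branch modifies the data while the original is still needed. The variable names are invented for illustration; this is not how LabVIEW is implemented, just the same idea expressed textually.

```python
# Analogy for LabVIEW's copy rules: a "wire branch" is just a shared
# reference, and a copy happens only at a modifying "node".
data = list(range(1000))  # a large payload carried by the class "wire"

branch_a = data  # wire branch: same buffer, no copy made
branch_b = data  # another branch: still no copy

# A read-only node (like a VI that doesn't output the class)
# never forces a copy of the payload.
total = sum(branch_a)

# A node that modifies the data while the original is still needed
# must copy first, then modify the copy.
modified = branch_b.copy()
modified[0] = -1

print(branch_a is data)  # True: branching made no copy
print(modified is data)  # False: modification forced a copy
print(data[0])           # 0: the original is untouched
```

The point of the sketch is the same as Darren's answer: the cost is tied to modification, not to branching, so removing class outputs from read-only methods doesn't by itself introduce copies.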