Recently, I gave a new presentation at the LabVIEW user group LUGE: "Darwin applied to LabVIEW: the evolution of data management", subtitled "The survival of the fittest applied to the LabVIEW dataflow model".
With a good dose of humor, this presentation draws a parallel between the evolution of LabVIEW data communication and Darwin's theory of evolution.
The presentation deals with the evolution of the concept of "data communication" in the LabVIEW dataflow model:
- From dataflow elements to variable interfaces to buffer interfaces
- How to store a value in memory, and how to access it
- How to prevent race conditions and buffer copies
The presentation answers the following questions:
- Why avoid locals, globals, and Property Nodes? How do you avoid race conditions? Does an FGV really prevent race conditions?
- Which method should you use, and why?
- Which one is the survival of the fittest? (The survival of the most scalable.)
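Since LabVIEW block diagrams cannot be shown as text, here is a rough Python analogy of the kind of race condition the talk addresses: parallel code paths doing an unprotected read-modify-write on a shared global. The function and variable names are invented for illustration; this is a sketch of the concept, not LabVIEW code.

```python
import threading

counter = 0  # a shared "global variable"

def unsafe_increment(n):
    # Unprotected read-modify-write: two parallel callers can read the
    # same old value and overwrite each other's update (a race condition).
    global counter
    for _ in range(n):
        counter += 1

lock = threading.Lock()
safe_counter = 0

def safe_increment(n):
    # Serializing the whole read-modify-write removes the race; this is
    # the role a well-designed FGV / Action Engine plays in LabVIEW.
    global safe_counter
    for _ in range(n):
        with lock:
            safe_counter += 1

threads = [threading.Thread(target=safe_increment, args=(100_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(safe_counter)  # 400000: no updates are lost with the lock
```

The unsafe variant may lose updates under parallel execution, which is exactly why a bare read-then-write on a local or global variable is dangerous in LabVIEW as well.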
LabVIEW follows a dataflow model.
In a literal implementation of this model, dataflow programming generally does not use variables at all.
Dataflow models describe nodes as:
- consuming data inputs,
- producing data outputs.
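In text form, a dataflow node can be sketched as a pure function: it fires once all of its inputs are available, and its outputs travel only along the "wires" that connect nodes. A minimal Python analogy (the node names here are illustrative, not LabVIEW APIs):

```python
# Dataflow analogy: each "node" is a pure function that consumes its
# inputs and produces outputs. Data travels only along the "wires"
# (arguments and return values), never through shared state.

def add_node(a, b):
    # Fires once both inputs have arrived on its wires.
    return a + b

def scale_node(x, gain):
    return x * gain

# Wiring the diagram: the output wire of one node feeds the next node.
wire1 = add_node(2, 3)
result = scale_node(wire1, 10)
print(result)  # 50
```

Because each node depends only on its input wires, independent nodes can execute in parallel, which is what LabVIEW's execution model exploits.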
Advanced concepts: LabVIEW contains many "data communication" methods, each suited to a certain use case. For example, local and global variables are not inherently part of the LabVIEW dataflow execution model.
They can be ordered into three types:
- Dataflow elements, the primary methods
- Variable interfaces
- Buffer interfaces
For dinner tonight: Functional Global Variable (FGV, LV2-style global), Action Engine (AE), DVR, OOP, State Machine, and QMH; plus speed demonstrations, the UI thread, memory management (buffer copies), and race conditions, all with simple, downloadable examples.
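An FGV is a non-reentrant VI whose state lives in an uninitialized shift register; callers invoke it with an action such as Set, Add, or Get. In textual terms this is close to a closure with internal, serialized state. A hedged Python analogy (the action names and helper are invented for illustration):

```python
import threading

def make_fgv(initial=0):
    # 'state' plays the role of the uninitialized shift register; the
    # lock plays the role of LabVIEW's non-reentrant execution, which
    # serializes every caller of the VI.
    state = {"value": initial}
    lock = threading.Lock()

    def fgv(action, data=None):
        with lock:
            if action == "set":
                state["value"] = data
            elif action == "add":
                # An "Action Engine" action: the whole read-modify-write
                # happens inside the protected VI, so it cannot race.
                state["value"] += data
            # "get" falls through and just returns the current value.
            return state["value"]

    return fgv

counter = make_fgv()
counter("set", 10)
counter("add", 5)
print(counter("get"))  # 15
```

Note the distinction the talk hints at: a plain Get/Set FGV does not prevent races, because a caller doing Get, modify, Set performs the read-modify-write outside the protected VI. Only when the operation itself is an action inside the FGV (an Action Engine) is the critical section actually protected.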