LabVIEW Development Best Practices Discussions


A New Paradigm for Large Scale LabVIEW Development

As someone who has spent the last 13 years doing extremely large scale LabVIEW development, I've learned quite a lot about how to push the envelope, and I know what works and what doesn't. I've managed to solve a great many of the difficulties associated with large scale development through the careful application of queueing, dynamic VI launching, and state-machine-oriented design.

It is time to move beyond the simple LabVIEW flow paradigm of the 1990's and into a world where you create a series of small, dynamically launched VI's that communicate with each other NOT through front panel controls but RATHER through LabVIEW queueing and/or TCP/IP connections.

The concept, then, is to make sure that each of these individually launched VI's has its own queue or TCP socket through which it can be reached by the other VI's.  In most cases it would be composed of two main loops: a command-queue reading loop and a processing loop that acquires data and dispatches it to other dynamically launched VI's that might need it.  In short, you can think of these as the "input" and "output" loops.  Everything happens inside these two loops, and you can apply normal "state machine design" to each of them.  Within a given VI, the input and output loops can themselves use queues to communicate with each other.
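
LabVIEW diagrams can't be pasted as text, but in rough Python-style pseudo-code the two-loop idea looks something like this (the module name, commands, and data values here are invented purely for illustration):

import queue
import threading

def command_loop(cmd_q, work_q, stop_event):
    """The "input" loop: wait on the command queue and drive a simple state machine."""
    while not stop_event.is_set():
        try:
            cmd = cmd_q.get(timeout=0.5)      # asynchronous wait, negligible CPU
        except queue.Empty:
            continue                          # timed out: do nothing, wait again
        if cmd == "EXIT":
            stop_event.set()                  # tell both loops to shut down
        else:
            work_q.put(cmd)                   # hand the command to the output loop

def processing_loop(work_q, data_q, stop_event):
    """The "output" loop: acquire data and dispatch it to whoever needs it."""
    while not stop_event.is_set():
        try:
            cmd = work_q.get(timeout=0.5)
        except queue.Empty:
            continue
        if cmd == "ACQUIRE":
            reading = 42.0                    # stand-in for a real acquisition
            data_q.put(("dmm", reading))      # send the data out to another module

if __name__ == "__main__":
    cmd_q, work_q, data_q = queue.Queue(), queue.Queue(), queue.Queue()
    stop = threading.Event()
    threading.Thread(target=command_loop, args=(cmd_q, work_q, stop)).start()
    threading.Thread(target=processing_loop, args=(work_q, data_q, stop)).start()
    cmd_q.put("ACQUIRE")
    print(data_q.get())                       # -> ('dmm', 42.0)
    cmd_q.put("EXIT")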

To build applications this way, you will need to take advantage of "remote queueing", something pioneered on openg.org to allow communication between a LabVIEW executable and a dynamically launched VI.  Otherwise queueing will not work properly, because the queue references will point to different memory spaces.
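
In text terms the "remote queue" trick amounts to carrying commands over a TCP socket whenever two pieces of code don't share a memory space.  A very rough Python sketch of the idea (the port number and command are arbitrary; the actual OpenG implementation for LabVIEW is of course different):

import queue
import socket
import threading
import time

def remote_queue_server(port, cmd_q):
    """Accept one connection and push each received line onto a local queue."""
    with socket.socket() as srv:
        srv.bind(("127.0.0.1", port))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn, conn.makefile() as lines:
            for line in lines:
                cmd_q.put(line.strip())

def remote_enqueue(port, command):
    """The other memory space: send a command across the socket instead of a queue reference."""
    with socket.socket() as s:
        s.connect(("127.0.0.1", port))
        s.sendall((command + "\n").encode())

if __name__ == "__main__":
    q = queue.Queue()
    threading.Thread(target=remote_queue_server, args=(5555, q), daemon=True).start()
    time.sleep(0.2)                           # crude: give the listener time to start
    remote_enqueue(5555, "ACQUIRE")
    print(q.get(timeout=2))                   # -> ACQUIRE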

Many of these VI's will need to run invisibly, although some may need to run in a "pop up" style when they are launched.  If a VI runs invisibly, reference management becomes somewhat tricky: whatever launches the VI's must not have to wait for their execution to complete before launching additional VI's or performing other processing, yet it must also be careful not to dispose of references prematurely and close a dynamically launched VI as soon as it is launched - that would defeat the purpose.  Ideally, the dynamically launched VI disposes of its own reference when it completes execution, after receiving some sort of "EXIT" command on its command queue from the command dispatcher.
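
A loose Python sketch of that lifetime rule (the "resource" and module names are invented): the launcher fires off a worker and never waits on it, keeping only the worker's command queue, and the worker releases its own resources when it is told to EXIT rather than being torn down by whoever launched it.

import queue
import threading

def open_resource(name):
    """Stand-in for opening a driver or instrument session."""
    class Session:
        def close(self):
            print(f"{name}: resource released")
    return Session()

def launch_module(name):
    """Launch a worker and return only its command queue; never wait on the worker here."""
    cmd_q = queue.Queue()

    def worker():
        session = open_resource(name)
        try:
            while cmd_q.get() != "EXIT":      # the dispatcher decides when it should quit
                pass                          # real commands would be handled here
        finally:
            session.close()                   # the worker disposes of itself

    threading.Thread(target=worker).start()
    return cmd_q

if __name__ == "__main__":
    dmm = launch_module("dmm")                # returns immediately
    scope = launch_module("scope")            # launcher is free to keep working
    dmm.put("EXIT")
    scope.put("EXIT")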

The goal is to make each individual dynamically launched VI perform a "cohesive" function in your application.  For example, your DMM driver would be one VI, your scope driver another, a data logging VI a third, an alarm monitoring VI a fourth, etc.  Cohesiveness is an important property of objects in object-oriented coding: it is highly desirable that an object be cohesive (i.e. it performs one set of organically related functions) rather than "compound" (it performs two or more unrelated sets of functions).

The advantage of the architecture I am suggesting here is that it allows for cohesiveness and scalability, whereas the traditional 1990's-style LabVIEW "flow" paradigm does the opposite: it promotes compound objects and results in hopelessly entangled spaghetti code when applied to large scale projects.

The beauty is that by using a series of dynamically launched objects that communicate with each other via queues, you isolate the interactions between component objects to a small, well defined set of commands sent across those queues and processed by state machines, rather than a large number of wires on a complicated diagram.

This makes it easy to scale your project: changes are added one command at a time to the state machine of a particular VI, while back-propagation of code changes into all the other VI's is limited to the point where regression testing is largely unnecessary.

If new functionality unrelated to any of the existing dynamically launched VI's becomes necessary, it can likewise be added to the design without significant impact on the existing code - it is encapsulated in a new dynamically launched VI.

At the core of this design you will need some configuration management file to determine which VI's to dynamically launch.  I originally did this with flat text files, but my vision for the future is to use XML for this function.
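
One possible shape for such an XML launch configuration, and a minimal Python sketch of reading it (the file name, element names, and attributes are all just an illustration):

<modules>
  <module name="datalogger" type="service"    autostart="true"/>
  <module name="alarms"     type="service"    autostart="true"/>
  <module name="dmm"        type="instrument" autostart="false"/>
</modules>

import xml.etree.ElementTree as ET

def read_launch_config(path):
    """Return (name, type, autostart) for every module listed in the config file."""
    root = ET.parse(path).getroot()
    return [(m.get("name"), m.get("type"), m.get("autostart") == "true")
            for m in root.findall("module")]

# for name, kind, autostart in read_launch_config("launch.xml"):
#     if autostart:
#         launch_module(name)                 # launch_module() as sketched earlier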

You will also need something like a "dispatcher" to process commands from a user interface or from a remote (TCP or UDP) interface and send them to the appropriate dynamically launched VI.  If a command is received for a VI that has not been launched, the dispatcher must decide whether to launch that VI and then send the command, or simply reject the command.  The dispatcher also needs to be able to start and stop VI's as required, or to stop all VI's in the case of a system shutdown.
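
A bare-bones sketch of such a dispatcher in Python terms (the class and method names are hypothetical): route each command to the target module's command queue, launch the module on demand if it is in the allowed list, reject the command otherwise, and broadcast EXIT on shutdown.

class Dispatcher:
    def __init__(self, launchable, launcher):
        self.launchable = set(launchable)     # modules we are allowed to start
        self.launcher = launcher              # e.g. launch_module() from the sketch above
        self.running = {}                     # module name -> its command queue

    def dispatch(self, target, command):
        if target not in self.running:
            if target not in self.launchable:
                return f"REJECTED: unknown module '{target}'"
            self.running[target] = self.launcher(target)
        self.running[target].put(command)
        return "OK"

    def shutdown(self):
        for cmd_q in self.running.values():
            cmd_q.put("EXIT")                 # every module cleans itself up and stops
        self.running.clear()

# d = Dispatcher(["dmm", "scope", "datalogger"], launch_module)
# d.dispatch("dmm", "ACQUIRE")                # launches the DMM module, then sends the command
# d.shutdown()                                # system shutdown: stop everything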

You could classify the launched VI's into two general classes:

Services - which are launched at system start-up and remain running until system shutdown.  These perform generic support tasks required by the system you have designed, such as data logging, error logging, command dispatching, and alarm monitoring.

Instruments - these could be launched on an "as required" basis to acquire data, control hardware, or perform specific functionality as required by the operator or the test sequencer.  They would subsequently be shut down when no longer needed.

This leads to another observation about such a design - it is "lightweight" and "flexible"; indeed, it is highly configurable.

If, for instance, you have three hardware configurations that are very similar except that each uses a different manufacturer's DMM card, you could reuse most of your code, dynamically launching a different DMM driver VI for each hardware configuration.  You won't have to carry the metaphorical "dead elephant" of code around for hardware you don't have installed, and you won't have to carry around three entirely separate builds of code either.

If your system contains a great deal of different functionality, you won't have to carry it all around in memory or have it all on the same block diagram either.  You can launch each driver as required and close it when it is no longer needed, greatly decreasing the memory and processor load.  The code is also far less complex, since each VI is dedicated to one particular purpose.

One difficult issue that does arise in this process is how to add VI's to a project after the original version is built without having to rebuild the entire kit.

While it is possible to rebuild the entire kit, this may not be desirable.

LabVIEW's application builder however doesn't understand how to package VI's that are being dynamically launched into different directories - at least it didn't as of version 8.x.

The solution seems to be to use a third-party installer system, for instance Inno Setup, to create your own installation program.  The catch is that you have to identify all the dependencies and include them in this third-party build.  I have written a LabVIEW VI that traverses the LabVIEW hierarchy tree and identifies all the VI's a project needs, which lets me do this.  You may need to find a similar solution - or maybe you just rebuild it all every time.
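
The real traversal has to be done with LabVIEW's own VI Server; purely to illustrate the idea, a generic dependency walk looks something like this in Python (the hierarchy data here is made up):

def collect_dependencies(top, callees):
    """Walk a caller -> callees map and return every VI reachable from 'top'."""
    seen, to_visit = {top}, [top]
    while to_visit:
        vi = to_visit.pop()
        for dep in callees.get(vi, []):
            if dep not in seen:
                seen.add(dep)
                to_visit.append(dep)
    return sorted(seen)

# hierarchy = {"main.vi": ["dispatcher.vi", "logger.vi"],
#              "dispatcher.vi": ["dmm_driver.vi"]}
# collect_dependencies("main.vi", hierarchy)
# -> ['dispatcher.vi', 'dmm_driver.vi', 'logger.vi', 'main.vi']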

Douglas J. De Clue

ddeclue@bellsouth.net

Message 1 of 30

Nice post Doug.

I have been forcing myself to write my own (just to get the opportunity to learn more), but Dr. D's Tale of Two Servers may be of interest.

http://forums.ni.com/ni/board/message?board.id=170&thread.id=462967

Thanks,

Ben

Retired Senior Automation Systems Architect with Data Science Automation, LabVIEW Champion, Knight of NI, and Prepper
Message 2 of 30

As the author points out, it is essential that large applications incorporate an architecture much like the one described here in order to be scalable.

I want to mention applicable terms from general software engineering parlance, with the suggestion that we learn from what others have done (and use a shared vocabulary):

Model-View-Controller

Observer Pattern

Components.

(I highly recommend the classic work Design Patterns: Elements of Reusable Object-Oriented Software by Erich Gamma, Richard Helm, Ralph Johnson, and John M. Vlissides in this context.)

Also, as one particular implementation, I suggest that networked shared variables serve the described role well (and, with the DSC Module, include logging and alarming features).  It is possible to avoid developing (and maintaining!) your own middleware.

Paul

Message 3 of 30

Thanks.  I have also been writing multithreaded C++ OO code since about 1996, and lately I have moved into the world of .NET coding with C++, C#, and VB as well.

Another point I failed to bring up: if the software is designed as described, with each dynamically launched VI containing two loops and being "coherent" or "cohesive" in nature rather than "compound", then you end up with a well tuned system where each loop runs at its own natural speed when acquiring and sending data.  Moreover, the command or "input" loops wait asynchronously, using little or no CPU; when they time out, they do nothing and return to waiting.  Finally, because the command consumer waits asynchronously for an item to arrive in its queue, commands pass more or less instantaneously from the command producer to the command consumer for handling.
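
In text-language terms the point about asynchronous waiting is simply that a blocking queue read with a timeout sleeps in the operating system until a command arrives, instead of spinning.  A minimal Python sketch (the handler here is a stand-in):

import queue

def handle(cmd):
    print("handling", cmd)                    # stand-in for the real command state machine

def command_loop(cmd_q):
    while True:
        try:
            cmd = cmd_q.get(timeout=1.0)      # thread sleeps here: essentially zero CPU
        except queue.Empty:
            continue                          # timed out: do nothing, go back to waiting
        if cmd == "EXIT":
            break
        handle(cmd)                           # handled as soon as it arrives, no polling lag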

The result is a much more efficient use of CPU bandwidth and a more robust multithreaded architecture that can take better advantage of multiple processors or cores on the host system and is much more responsive to user or remote-control inputs.

Doug De Clue

ddeclue@bellsouth.net

Message 4 of 30

I have been building these kinds of architectures over the last couple of years, too.

Most of all, I keep larger applications readable by building "active objects."  This way, I can combine "service oriented" / "event oriented" / "asynchronous processes" with simple dataflow.

I had almost gone so far that I could program in LabVIEW simply by writing a text script!  I created a script interpreter that creates objects, sets properties, and invokes methods.  The main application was reduced to a case structure with a case for each method VI.
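
A very loose Python sketch of that "drive it from a text script" idea (the verbs, object names, and properties are all invented for illustration):

def run_script(text, objects):
    """Each line is 'create <obj>', 'set <obj> <prop> <value>' or 'call <obj> <method>'."""
    for line in text.strip().splitlines():
        verb, obj, *rest = line.split()
        if verb == "create":
            objects[obj] = {}                 # in the real thing: launch the module
        elif verb == "set":
            prop, value = rest
            objects[obj][prop] = value        # set a property on the object
        elif verb == "call":
            print(f"{obj}.{rest[0]}() with {objects[obj]}")   # invoke the method

script = """
create scope
set scope rate 1000
call scope start
"""
run_script(script, {})                        # -> scope.start() with {'rate': '1000'}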

For my by-reference VI's I, too, use a double loop, but nowadays I use an event structure instead of a queue.  It has its advantages, but so do queues.

The main challenge is to keep flow and hierarchy in check, since asynchronous processes run in parallel all the time.

------------------------------------------------------------------------------------
Seriously concerned about the LabVIEW subscription model
Message 5 of 30

And what exactly is new about this paradigm?  Check out the EDQSM pattern discussion on LAVA and Google "LabHSM" (or go to LabHSM.com).  The latter (circa 2003) implemented everything you are talking about, plus a hierarchical state machine for each dynamically launched module, long before NI's Statechart kit was released.

Message 6 of 30

This is also similar to the Asynchronous Message Communication (AMC) reference library and associated Queued Message Handler template posted on NI Developer Zone.

http://zone.ni.com/devzone/cda/epd/p/id/6091

authored by
Christian L, CLA
Systems Engineering Manager - Automotive and Transportation
NI - Austin, TX


  
Message 7 of 30

For the benefit of Styrum, who has his undies in a bunch... no, I suppose it is not "new" to ME.

I have personally been doing this style of coding since 2000/2001, when I basically figured out for myself how to implement the various pieces through experimentation.

What IS new, though I haven't described it in any detail here, is how to create this style of architecture so that you can add more dynamically launched VI's after the initial development is complete.  LabVIEW's app builder simply doesn't support this "cafeteria" style of dynamic VI launching.

Based on my extensive experience in LabVIEW development it is definitely "NEW" to a lot of people who don't have a clue about how to go about creating large scale code.

Sorry if you are offended but the point is to get people to start thinking about this way of coding.  Most of them don't even know where to start.

Sigh..

Message 8 of 30

This is a worthy post Doug, and Kudos for spending the time to post it.

However, I would have to say there's little that's actually new in here.  I and many others I know have been using dynamically launched VI's for many years to modularise large-scale code.  Also, I tend not to use queues for message and information communication, but instead use an event-based messaging system.  With each threaded daemon able to create and subscribe to dynamic events, VI's can pass message information around with ease.  Because each event is broadcast to all listeners, it is particularly easy to inform all running VI's of a particular message, or to direct a message to just one specific daemon.  The LabVIEW event case handles these very efficiently, particularly in terms of CPU usage.
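
For readers who haven't used this approach, a minimal Python sketch of the broadcast-or-directed messaging idea (the class and daemon names are invented; LabVIEW user events are richer than this):

import queue

class MessageBus:
    def __init__(self):
        self.mailboxes = {}                   # daemon name -> its message queue

    def subscribe(self, name):
        self.mailboxes[name] = queue.Queue()
        return self.mailboxes[name]

    def broadcast(self, message):
        for box in self.mailboxes.values():   # every listener receives a copy
            box.put(message)

    def send(self, name, message):
        self.mailboxes[name].put(message)     # directed at one specific daemon

bus = MessageBus()
logger_box = bus.subscribe("logger")
dmm_box = bus.subscribe("dmm")
bus.broadcast("SHUTDOWN")                     # everyone hears this
bus.send("dmm", "ACQUIRE")                    # only the DMM daemon hears this
print(logger_box.get(), dmm_box.get(), dmm_box.get())   # -> SHUTDOWN SHUTDOWN ACQUIRE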

Nevertheless, I did find your views on third-party application builders interesting.

Thoric (CLA, CLED, CTD and LabVIEW Champion)


Message 9 of 30

As a developer who has mostly worked on relatively small LabVIEW applications, and one without much formal training in general programming (I started out wiring panels and assembling the machines I would eventually learn to program), I really appreciate posts such as this one.

I've still got a lot to learn about developing truly scalable applications in LabVIEW.  Communities like this are a big part of how I'm hoping to get there.

---------------------
Patrick Allen: FunctionalityUnlimited.ca
Message 10 of 30