LabVIEW Development Best Practices Discussions


Should everything **really** fit on one screen?

This issue has come up for me recently as I moved into the FPGA world.  There is only one VI that runs on the FPGA, and it is not always beneficial to make every parallel while loop into a subVI, especially if it contains control or indicator values that are needed outside of the FPGA.  You would then have to pass control references for each control/indicator into the subVI, which to me adds a lot of unnecessary wiring.

What I would really like to see is NI start thinking about putting these parallel loops in a 3rd dimension (maybe they already are).  I can imagine NI developing something like a parallel-executed stacked structure, where while loops can be stacked on top of each other and executed in parallel.  It would look much like the stacked sequence structure but act in parallel instead of serially (sequentially).  This way, each window view could be labeled (self-documented) and you could easily jump to the necessary loop.  It would really clean things up nicely and allow everything to fit within a single screen.  Let me know what you think of this idea.

Guy

Message 21 of 26

In short the answer is YES - they really SHOULD all fit on one screen no matter how large the overall project is.

As someone who has spent the last 13 years doing extremely large-scale LabVIEW development, I've learned quite a lot about how to push the envelope, and I know what works and what doesn't.  I've managed to solve a great many of the difficulties associated with large-scale development through the careful application of queueing, dynamic VI launching, and state-machine-oriented design.

It is time for you to move beyond the simple LabVIEW flow paradigm of the 1990s and into a world where you create a series of small, dynamically launched VI's that communicate with each other NOT through front panel controls but RATHER through LabVIEW queueing and/or TCP/IP connections.

The concept then is to make sure that each of these individually launched LabVIEW VI's has an individual queue or TCP socket through which it can be reached by the other VI's.  Each one would be composed of two main loops in most cases: a command-queue reading loop, and a processing loop that acquires data and dispatches that data to other dynamically launched VI's that might need it.  In short, you can think of these as the "input" and "output" loops.  Everything can happen inside these two loops, and you can apply normal state-machine design to each.  The input and output loops can themselves use queues to communicate with each other within the particular VI in question.
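Since a LabVIEW diagram can't be shown in text, here is a rough textual sketch of the two-loop idea, with Python's `queue` and `threading` standing in for LabVIEW queues and parallel loops (the command names and the "reading" value are invented for illustration):

```python
import queue
import threading

# Each "VI" owns a command-queue input loop and pushes its results to the
# queues of other components (the output side).  Names here are
# illustrative stand-ins, not LabVIEW or driver APIs.

def instrument(cmd_q, data_q):
    """Input loop: a simple command-driven state machine."""
    while True:
        cmd = cmd_q.get()             # block until a command arrives
        if cmd == "EXIT":             # shutdown command from the dispatcher
            break
        if cmd == "ACQUIRE":
            data_q.put("reading=42")  # stand-in for real acquisition

cmd_q = queue.Queue()                 # this component's command queue
data_q = queue.Queue()                # queue of a downstream consumer
worker = threading.Thread(target=instrument, args=(cmd_q, data_q))
worker.start()

cmd_q.put("ACQUIRE")
cmd_q.put("EXIT")
worker.join()
result = data_q.get()                 # "reading=42"
```

The point is the shape: commands go in through one queue, data comes out through another, and nothing is wired through front panel controls.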

In order to build applications it will be necessary to take advantage of "remote queueing", something pioneered on openg.org to allow for communications between a LabVIEW executable and a dynamically launched VI - otherwise queueing will not work properly because queue references will point to different memory spaces.
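As a minimal textual sketch of what a "remote queue" has to do (this is Python sockets standing in for the OpenG implementation, not the OpenG API itself): items "enqueued" on one side are serialized over a connection and land in an ordinary local queue on the other side, so neither side ever shares a raw queue reference across memory spaces.

```python
import queue
import socket
import threading

# A local queue reference is meaningless in another process, so a "remote
# queue" forwards enqueued items over a socket instead.  socketpair()
# stands in here for a real TCP connection between an executable and a
# dynamically launched VI.

def receiver(sock, q):
    """Read newline-delimited commands off the socket into a local queue."""
    buf = b""
    while True:
        chunk = sock.recv(1024)
        if not chunk:                  # remote side closed the connection
            break
        buf += chunk
        while b"\n" in buf:
            line, buf = buf.split(b"\n", 1)
            q.put(line.decode())

a, b = socket.socketpair()
remote_q = queue.Queue()
t = threading.Thread(target=receiver, args=(b, remote_q))
t.start()

a.sendall(b"START\nACQUIRE\n")         # "enqueue" from the remote side
a.close()
t.join()
```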

Many of these VI's will need to be set to run invisibly, although some may need to run in a "pop up" style when they are launched.  If the VI is running invisibly, reference management becomes somewhat tricky, as you need to make sure that whatever launches the VI's doesn't have to wait for their execution to complete before launching additional VI's or performing other processing.  On the other hand, it also has to be careful not to prematurely dispose of references and close a dynamically launched VI as soon as it is launched - this would defeat the purpose.  Ideally, the dynamically launched VI would dispose of its own reference when it completes execution, after receiving some sort of "EXIT" command on its command queue from the command dispatcher.

The goal is to make each individual dynamically launched VI perform a "cohesive" function in your application.  For example, your DMM driver would be one VI, your scope driver another, a data logging VI could be a third, an alarms monitoring VI could be a fourth, etc.  Cohesiveness is an important property of objects in object oriented coding.  It is highly desirable that any object is cohesive (i.e. it performs one set of organically related functions) and not "compound" (it performs two or more unrelated sets of functions).

The advantage to the architecture that I am suggesting here is that it allows for cohesiveness and scalability whereas the traditional 1990's style LabVIEW "flow" paradigm actually does the opposite by promoting compound objects and results in hopelessly entangled spaghetti code when applied to large scale projects. 

The beauty is that in using a series of dynamically launched objects that communicate with each other via queues, you get to isolate the interactions between component objects to a small, well-defined set of commands sent across those queues and processed by state machines, rather than through a large number of wires on a complicated diagram.

This makes it easy to scale your project: changes are added one command at a time to the state machine of a particular VI, which limits the back-propagation of code changes into all the other VI's to the point where regression testing is largely unnecessary.

If new functionality unrelated to any of the existing dynamically launched VI's becomes necessary it can likewise be added to the design without significant impact on the existing code - it is encapsulated in the new dynamically launched VI.

At the core of this design you will need some configuration management file to determine which VI's to dynamically launch - I originally did this with flat text files but my vision for the future in this is to use XML to perform this function.
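For illustration, such an XML launch configuration might look like the following (a hypothetical schema - every element and attribute name here is invented):

```xml
<!-- Hypothetical launch configuration; names are illustrative only -->
<application>
  <service    vi="DataLogger.vi"    autostart="true"/>
  <service    vi="AlarmMonitor.vi"  autostart="true"/>
  <instrument vi="DMM_Driver.vi"    launch="on-demand"/>
  <instrument vi="Scope_Driver.vi"  launch="on-demand"/>
</application>
```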

You will also need something like a "dispatcher" to process commands from a user interface or from a remote (TCP or UDP) interface and then send them to the appropriate dynamically launched VI.  If a command is received for a VI that has not been launched, this dispatcher will need to decide whether to launch that VI and then send the command or whether to simply reject the command.   This dispatcher will need to be able to start and stop VI's as required or to stop all VI's in the case of system shutdown.
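A rough sketch of that dispatcher logic, again in Python rather than LabVIEW (component names, the `"target:command"` message format, and the launch mechanism are all illustrative stand-ins for dynamically launched VI's and their queues):

```python
import queue

# Components the dispatcher knows how to launch (illustrative names).
KNOWN = {"dmm", "scope", "logger"}

class Dispatcher:
    def __init__(self):
        self.running = {}                 # name -> that component's queue

    def launch(self, name):
        q = queue.Queue()                 # stands in for Run VI + its queue
        self.running[name] = q
        return q

    def dispatch(self, msg):
        """Route a 'target:command' message, launching the target if needed."""
        target, _, command = msg.partition(":")
        if target not in KNOWN:
            return False                  # reject commands for unknown VI's
        q = self.running.get(target) or self.launch(target)
        q.put(command)
        return True

    def shutdown(self):
        for q in self.running.values():
            q.put("EXIT")                 # every component honors EXIT

d = Dispatcher()
d.dispatch("dmm:MEASURE")                 # launched on demand, then commanded
d.dispatch("bogus:GO")                    # rejected: not a known component
d.shutdown()
```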

You could classify the launched VI's into two general classes:

Services - which are launched at system start up and remain running until system shutdown.   These would perform generic support tasks required by the system of software you have designed such as data logging, error logging, dispatching commands, alarms monitoring, etc.

Instruments - these could be launched on an "as required" basis to acquire data, control hardware, or perform specific functionality as required by the operator or the test sequencer.  They would subsequently be shut down when no longer needed.

This leads to another observation about such a design - it is "lightweight" and "flexible", indeed it is highly configurable.

If, for instance, you have three very similar hardware configurations that differ only in having a different manufacturer's DMM card in each one, then you could reuse most of your code, dynamically launching a different DMM driver VI for each hardware configuration.  You won't have to carry the metaphorical "dead elephant" of code around for hardware you don't have installed, and you won't have to carry around three entirely separate builds of code either.

If your system contains a great deal of different functionality, you won't have to carry it all around in memory or have it all on the same block diagram either.  You can launch each driver as required and close it as required, greatly decreasing the memory and processor load.  The code is similarly far less complex, since each VI is dedicated to one particular purpose.

One difficult issue that does arise in this process is how to add VI's to a project after the original version is built without having to rebuild the entire kit.

While it is possible to rebuild the entire kit, this may not be desirable.

LabVIEW's application builder however doesn't understand how to package VI's that are being dynamically launched into different directories - at least it didn't as of version 8.x.

The solution seems to be to use a third-party installer system such as Inno Setup to create your own installation program.   The catch is that you have to identify all the dependencies and include them in this third-party build.  I have written a LabVIEW VI which traverses the LabVIEW hierarchy tree and identifies all the VI's required by a project, to allow me to perform this task.  You may need to find a similar solution - or maybe you just rebuild it all every time.

Douglas J. De Clue

ddeclue@bellsouth.net

Message 22 of 26

Agreed.


Certified LabVIEW Architect
TestScript: Free Python/LabVIEW Connector

One global to rule them all,
One double-click to find them,
One interface to bring them all
and in the panel bind them.
Message 23 of 26

ExpressionFlow gave fairly good treatment to the topic here.

Regarding their implementation of enqueueing on the opposite end (the front of the queue): when you enqueue multiple items at the opposite end for another loop that is waiting to dequeue elements, and execution order is critical (e.g. State 1, State 2, State 3, State 4), the waiting loop will dequeue and run State 4 first.  Here's why: before enqueueing the four states at the opposite end, you must first reverse their order.  When another loop is waiting to dequeue, it grabs the first item you enqueue (e.g. State 4).  It won't grab the subsequent items out of order, regardless of execution speed, since all loops share the same non-reentrant Queue Manager, and the only instance of the Queue Manager is busy enqueueing the next three states (in this example).  Therefore, you could end up with an execution order of State 4, State 1, State 2, State 3.  So I would suggest enqueueing a dummy state before enqueueing your states, in reverse order, at the opposite end.
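The order-reversal part of this is easy to see with any double-ended queue; here Python's `collections.deque` stands in for LabVIEW's Enqueue Element At Opposite End (the state names are from the example above):

```python
from collections import deque

# To land States 1-4 at the front of a queue, you must front-enqueue them
# in REVERSE order, one at a time:
q = deque(["Old State"])
for state in ["State 4", "State 3", "State 2", "State 1"]:
    q.appendleft(state)
# q is now: State 1, State 2, State 3, State 4, Old State

# But each appendleft is a separate operation.  If a waiting consumer
# dequeues the instant the first item lands, it sees State 4 first --
# hence the suggestion to front-enqueue a dummy state before the real ones:
q2 = deque(["Old State"])
q2.appendleft("State 4")      # first front-enqueue...
first_seen = q2.popleft()     # ...a racing consumer would grab it here
```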

Else, odd behaviour can occur (e.g. Stop state occurs before Power Supply Off state *ouch*).  If I can get a login on ExpressionFlow, I'll let them know...


Message 24 of 26

First off, I'm with AristosQueue: nothing wrong with scrolling in one direction only (for me, that is horizontal).

Secondly, Hi Evan,

- The Command Cluster technique has been in use since at least 1993. Back then we used flattened strings, since we did not yet have variants or classes. As I recall, we were using command ENUMs at first, for the reasons you state, but we abandoned ENUMs in favor of strings in order to implement plugin architectures and Pseudo Infinite State Machines (PISM), which are much more powerful and flexible.

- To implement a Basic PISM, you provide cases for all your "factory installed" commands, plus a Default case for commands that are not in the main states. Then in the Default case you look up an unknown command in a list of plugin names that have been dynamically loaded earlier. If the name is found, you run the VI using VI Server. If not found, only then do you call it an error, log it, and post to the user GUI, etc. (so much for typo's).
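In textual form (Python standing in for the LabVIEW case structure, with invented command and plugin names), the Basic PISM's Default-case fallthrough looks like this:

```python
log = []

# "Factory installed" cases and dynamically loaded plugins; all names here
# are illustrative stand-ins for VI's launched via VI Server.
def cmd_start():      log.append("started")
def cmd_stop():       log.append("stopped")
def plugin_selftest():log.append("selftest ran")

FACTORY = {"START": cmd_start, "STOP": cmd_stop}
PLUGINS = {"SELFTEST": plugin_selftest}

def run_command(cmd):
    if cmd in FACTORY:
        FACTORY[cmd]()
    elif cmd in PLUGINS:                 # the Default case: plugin lookup
        PLUGINS[cmd]()
    else:                                # only now is it an error
        log.append(f"ERROR: unknown command {cmd!r}")

for c in ["START", "SELFTEST", "STPO", "STOP"]:   # note the typo'd command
    run_command(c)
```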

- To implement an Advanced PISM, you abstract all the state transition decisions out of the cases and put them in a State Transition Table (STT) VI that is part of the input queue.  The current command coming out of the queue or shift register is input to the STT as one of the state variables, and the STT VI then decides the State/CASE to run based on its current table (either a straight lookup-table type, or a more advanced trigger-VI (also plugins) type).
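Reduced to text (Python standing in for the STT VI, with invented states and commands), the lookup-table form of the STT is just a replaceable mapping from (state, command) to the next state:

```python
# The transition table lives in data, not in the case structure, so it can
# be rewired at run time.  All state and command names are illustrative.
stt = {
    ("Idle",      "GO"):   "Measuring",
    ("Measuring", "DONE"): "Logging",
    ("Logging",   "OK"):   "Idle",
}

def next_state(state, command):
    return stt.get((state, command), state)   # unknown pair: stay put

state = "Idle"
for cmd in ["GO", "DONE", "OK"]:              # Idle -> Measuring -> Logging -> Idle
    state = next_state(state, cmd)

# Reprogram the machine at run time by editing the table, not the code:
stt[("Idle", "GO")] = "SelfTest"
```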

- The Pseudo Infinite part of the name comes from the fact that if you implement this carefully, you can add both Command/Step/State types and reprogram your state machine's transition behavior dynamically, at run time, without shutting down, ever. You can go from a factory-defined 10 Commands/States to one with a hundred or more, then drop back to a configuration of 25, all while running your processes/equipment. This architecture is invaluable for those long-duration tests that must run for weeks or months unstopped in order to meet customer test specs. It's also great for embedded applications that are remote (robots, cell phone towers, etc.) where you cannot just run up to it with a laptop and redevelop for a while.

- Yes, this can be done with executables too. Every version or so, someone decries various changes to the installer, project, etc., and says you cannot do dynamics like this. Don't believe it. It's been done since LV 3.1 and in every version since.

Go therefore, and do thou likewise ...

Message 25 of 26

For a more general description of the problems, the object-oriented solutions, and the advantages and disadvantages of particular implementations, see the Command pattern and State pattern in Design Patterns: Elements of Reusable Object-Oriented Software by Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides.

Message 26 of 26