Please don't take this as any type of attack on your comments. I'd like to
offer another point of view, though. Over the years I've seen several LV
projects go through this exact same pattern. Since LV is a language, there
are pretty much an unlimited number of ways to write an application, and
some will prove much more satisfactory than others.
I'll try to shed light on some of these issues, not to convince you that you
are wrong. I wasn't involved in your project, and I may be totally incorrect
in my assumptions, but I have been involved in "fixing" other projects that
wound up in your situation.
For clarity, I'll mark my comments with ***.
> Well, we're working now for a long time with LabVIEW and NIs IO Hardware. Our
> LV-application (let's call it A*) is build out of 700 different VIs to realize a
> powerful, flexible and secure automation software. The software requires some what
> about 150 MB of RAM, a Dual Pentium 500 PC equipment and load both CPUs with 80%
> each. We optimized a lot of things in LabVIEW, but for professional programming,
> LabVIEW does not fit our requirements any more. Let me give you some examples:
>
> I) Data Management: In one of the first Versions of A* we build a global variable
> as a database. In this database all necessary configuration information has been
> stored. Approx. 300 Array elements which are clusters again of some arrays and sub
> clusters. To access one element out of this array for reading, LV creates a
> complete copy of this array (3ms hard work), picks the element and frees the
> memory again. This happends with any access to a global variable. Why? Now this is
> realized using a DLL and we've access times to the same information, which are
> incredible fast. We unloaded the CPUs by approx 15%.
*** This is unfortunately a common problem when people first start using LV
and have lots of data to store. The obvious solution: a few global variables.
Unfortunately, globals cause problems in a parallel language, whether it's LV
or another. This is why LV didn't originally have global variables built in;
they weren't added until 1993 or so. Why? Because they provide a convenient
mechanism for storing and sharing data in small to medium-sized projects, and
customers just kept requesting that they be added.
The problem with globals in a parallel environment is that more than one piece
of code may want to access the data at once. The solution is to limit access
to the data, using a mutex or another type of semaphore. This restricts
access to one piece of code at a time, so that reading the data isn't
affected by writing the data, and so on.
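In textual terms, the scheme works roughly like the following sketch. This is
not LV's actual source, just an illustration in modern C++ (the names
ReadGlobal and WriteGlobal are invented for the example): the mutex is held
only long enough to copy the shared data to or from a private buffer, which
is exactly why a large global costs a full copy on every access.

```cpp
#include <mutex>
#include <vector>

// The shared "global" and the mutex that guards it.
static std::mutex g_lock;
static std::vector<double> g_data;

// Reading copies the ENTIRE array to the caller's private buffer while the
// lock is held; the caller then works on the copy without blocking writers.
std::vector<double> ReadGlobal() {
    std::lock_guard<std::mutex> guard(g_lock);
    return g_data;  // whole-array copy under the mutex
}

// Writing replaces the shared data under the same lock, so a reader can
// never observe a half-finished write.
void WriteGlobal(const std::vector<double>& v) {
    std::lock_guard<std::mutex> guard(g_lock);
    g_data = v;
}
```

The copy is harmless for a few scalars, but for 300 array elements of nested
clusters it is exactly the 3 ms per access described above.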
LV currently uses mutexes immediately around the global read and write nodes.
The read or write holds the data only long enough to transfer it to a private
buffer. Once it is in the private buffer, the parallel diagram threads can
then manipulate the data in any way they like. This works great for small
amounts of data, but the obvious problem is with large and complex globals
that must be copied to a private buffer, taking extra time and memory.
Analyzing the user's diagram to automatically move the mutexes out to include
other diagram nodes gets complicated very quickly. It is something we are
still looking into, and when completed, it will mean that the read, the array
index, and the cluster unbundle could all happen within the mutex, so that
less data needs to be copied to a private buffer. In the meantime, what other
solutions are there?
The obvious solution, outlined in the performance chapter of the user manual
as well as several memory application notes, is to protect access to the data
manipulations so that the entire global doesn't need to be copied. This is
easily done by wrapping a functional interface around the global, making it
no longer a global but private data accessed by a global function or set of
functions. At this point, the data is often stored in a shift register,
though a control or global will also work, and many of the most common data
manipulations are moved into this function. The function can be given array
indices, tags to search for, or whatever is needed; it then accesses its
private data and returns the result. All non-reentrant VIs are protected, so
you have now eliminated the expensive copy and probably rid yourself of code
duplicated throughout the project for indexing and searching the global data.
This is pretty much identical to what is done in C++, where instead of just
publishing a global property that anyone can access, you keep the property
private and publish functions/methods for accessing the data. These methods
can be as simple as ReadAll and WriteAll, but more commonly they are
indexing, searching, and sorting functions that are abstracted so that the
code is kept with the data and can change without affecting lots of callers.
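The C++ analogue reads roughly like this sketch (ConfigStore, ConfigEntry,
and FindByTag are invented names for illustration). The data is private, and
the search happens inside the lock, so only the single requested entry is
copied out, never the whole array:

```cpp
#include <mutex>
#include <string>
#include <vector>

// Hypothetical record, standing in for one cluster element of the
// configuration array.
struct ConfigEntry {
    std::string tag;
    double value;
};

// Private data plus accessor methods, replacing a public global array.
class ConfigStore {
public:
    void Write(const ConfigEntry& e) {
        std::lock_guard<std::mutex> guard(lock_);
        entries_.push_back(e);
    }
    // Search by tag INSIDE the mutex and copy out one entry; the expensive
    // whole-array copy of the naive global never happens.
    bool FindByTag(const std::string& tag, ConfigEntry* out) const {
        std::lock_guard<std::mutex> guard(lock_);
        for (const ConfigEntry& e : entries_) {
            if (e.tag == tag) { *out = e; return true; }
        }
        return false;
    }
private:
    mutable std::mutex lock_;
    std::vector<ConfigEntry> entries_;  // the formerly "global" data
};
```

In the LV version of this pattern, entries_ corresponds to the shift
register holding the data, and lock_ corresponds to the implicit protection
of a non-reentrant VI.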
I have retrofitted this approach onto similar systems, some with 100 VIs,
others with 500, and with just a few changes you can cut the memory and CPU
usage by a large amount, probably similar to what you achieved with the DLL.
If you have a very large amount of data that needs very quick access, then
perhaps your DLL was the right approach after all. I do know of several
customers in the industrial automation area who attach LV diagrams to their
in-memory tag database. They then write some wrapper functions for tag
lookup and writing, just as they would in any language, and they are now
able to manage very impressive amounts of data very quickly, with easy
access from LV.
>
> II) CPU load: For about 150 analog signal channels we had to provide a four
> boundary range check. In cycles of 100ms A* is checking these channels to lower
> and upper warning and alarm limits. Removing this functionality from A* we got
> about 20% of DualCPU load free, reimplementing the same (even extended)
> functionality as DLL function in the third version of A* was not visible in the
> CPU load diagram of NT.
*** I suspect that much of this is due to global access. Another possible
explanation is data conversion: the built-in LV functions and operators
compile code specifically for each datatype, but libraries often require
data conversions.
>
> III) Timing: For such a hard loaded Application it is not possible to keep the
> timing of control loops in 100% exact cycles. We changed many parameters of LV and
> the VIs of A* but without success.
>
*** As you allude to, once the CPU isn't loaded down with memory management,
I suspect that the timing would be easier to control. Of course this is also
heavily affected by the OS, which is why LV is also being moved to realtime
OSes. The VIs written for one platform could then be moved to another,
perhaps realtime, platform when the need arises.
> IV) Bugs: There're a lot of known Bugs in LabVIEW wich are not removed from NI
> developers since versions. E.g. it is still not possible to access all attributes
> of diagrams with multiple y-axes. If you're working with dynamical assignment of
> signals --- Peng!
*** I'd characterize this not as a bug but as a missing feature; the
documentation never implies that this is possible. Regardless, it would be a
very nice feature, and it will appear in LV sometime soon. Since it isn't
possible to build every feature into LV, we attempt to make it a very open
environment: it is possible to call DLLs, EXEs, and CINs, and to access
ActiveX controls and servers. While we continue to improve the built-in
controls, it has been possible to get this graphing functionality using
external ActiveX controls for several years now.
>
....
>
> VII) Editing G: As you probably know, G is this graphical programming language
> inside LabVIEW. We've a lot of Diagrams (= SourceCode) of VI's which are much
> larger than one 1024x768 screen. If we've to ad some functionality, it needs more
> time to make some space in that diagramm than to program the new code. There's no
> way to print out these Diagrams in a readable way. All hidden frames (Cases,
> Sequences, etc.) are stacked hierarchical on several pages of paper. If stacking
> is much more than one level (and there're probably some empty frames), it is not
> possible to find bugs by looking on your printout. You simply loose the overview.
> Any ASCII written code is more readable and easier to be extended.
*** Anytime code isn't written to fit on a screen, it becomes more difficult
to debug. Personally, I haven't printed out a piece of code in about four
years. I find the online tools for balancing scope, searching, accessing
online documentation, and looking up constants to be much, much more
effective. I do use paper documents when I'm learning about a library or
looking for the right set of functions to carry out some task, but once the
code is written, it's never printed. And I'm not talking about LV diagrams
here; I'm talking about C and C++ code.
>
> VIII) It could happen, that you build your GUI, you call "save"every 5 minutes,
> you insert some images on Buttons, you add code to the GUI, you test it, you close
> LabVIEW, you go home to sleep, you restart LabVIEW on the next morning, you open
> the last days work and --- BOOOOOOOOM! No way back. All you did is lost.
> GRRRRRRRR! Will never happen with ASCII files!!!!
*** Unfortunately, there are many ways to lose work: unstable power, file
system or disk problems, bugs in the editor (LV or a text editor), and
sometimes even bugs in the SCC tools. Over the years I've lost some code and
data to most of these. I often use development, alpha, and beta versions of
LV, so I've definitely crashed my fair share of times. I've also lost work
when Visual Studio or other textual environments crashed, and even more on
the rarer occasions when a hard drive crashed. And while it hasn't happened
to me, other groups have suffered a corrupted source code control database,
losing some work and requiring lots of time to retrieve what they could.
I'm not trying to make excuses. I wish LV would never crash, and we go to
great lengths to make it a stable editing environment. Unfortunately, the
use of graphics alone means that it is more susceptible to crashes,
particularly on Windows, where video drivers are so poorly written. Having
looked into lots of bug reports concerning LV, a fair share of them are our
fault; quite a lot, though, are actually triggered by problems with the
video driver, the printer, or the on-disk file format. Many times when I've
been sent a VI "that was fine one day, but cannot be loaded the next", I
have looked into the file to see what went wrong. At least half of the time
the file had been corrupted by cross-links in the disk's file system.
Sometimes the file was simply truncated: 10K was written out, but only 8K
exists in the file to read back. Other times the binary file would suddenly
contain ASCII text from some other file. These are difficult situations to
protect against. Fortunately, the FAT file system is dying out, and NTFS
seems like quite a good one, so these types of corruption happen far less
often, provided that disks are checked and repaired after crashes and power
outages.
>
> You want to know more? All this is realy true and we wasted a lot of money due to
> the disadvantages of LabVIEW.
>
> If you use LabVIEW to measure the temperature decrease of your cup of tea, it's
> ok.
> If you've a fixed, not flexible environment, that will never change, and an
> application with only one window, it may be ok.
> But for the professional programming of a big project, you'll run very quickly
> into the limitations of LabVIEW. That's why we're kicking LabVIEW out. (Probably
> LabWindows/CVI benefits)
>
Again, I'm not trying to attack your conclusion, but I suspect that your new
system will benefit quite a bit from the experience of the one written in
LV. In other words, projects are easier the second time around. Changing
tools may indeed be the right thing to do based on your level of experience
with LV versus other tools, but the tool change may not matter nearly as
much as simply rewriting the project and improving things based on your new
knowledge.
I've seen projects attempted with LV that probably should have been written
with a different language, a different design, or different staffing. On the
other hand, I've also seen a couple of engineers or scientists replace a
dozen programmers and produce a product that was more successful, more
flexible, and way cheaper to maintain.
LV is simply another tool for making computers do work. I don't believe that
it is always better than every other tool, especially when it isn't used in
the right way. I think that computers are for everyone to use, and while
some tasks should be relegated to an expensive process of SW development
based on highly flexible designs and languages, many projects can better be
addressed with a different language or tool, and a different approach, by a
professional from a different background.
Now if we could just write an expert system that helped us to select tools
and evaluate our designs before we implement them.
Greg McKaskle