
LabVIEW


Labview w/ TCL ??

I have a customer that wants to control my test equipment with LabVIEW
for some custom applications. My company has a C function library in
the form of ANSI C functions a user can compile. We also have a TCL
"package" in the form of a .dll file for Windows or a .so file for UNIX.

I would prefer to figure out a good way to link LabVIEW and TCL, as TCL
is much easier to program. Has anyone faced this problem?
Message 1 of 14
Hi Mike

LabVIEW can easily access DLLs, but I don't believe it can access .so libraries on UNIX.
An option for the LV programmer would be to compile the C library into a .CVI file using a 32-bit compiler. The CVI file will be
platform independent, so you can port the code to UNIX.

You will have to be more specific about what the TCL package is before I can comment further on it.

Tim

Mike Gretzinger wrote:

> ...
Message 2 of 14
Timothy John Streeter wrote:

> Hi Mike
>
> LabVIEW can easily access DLL's but I don't believe it can access .so drivers for UNIX.
> An option for the LV programmer would be compile the C library into a .CVI file, using a 32bit compiler. The CVI file will be
> platform independent so you can port the code to UNIX.
>
> You will have to be more specific on what the TCL package is before I can comment more on it.

Just a small correction here. LabVIEW for UNIX will connect to a .so in the same way that
Win-LV connects to a DLL (using the CLF). I am not sure what you mean by a .CVI file
(LabWindows/CVI?). If so, then you end up with a .so again and are back to the CLF function.

Depending on what your deliverables are, you can go either way.
If the customer has the LV development environment, they can choose.
If the customer has only the LV runtime, then the .dll / .so path is best.
You would link them by using the Call Library Function (CLF)
in the Advanced functions palette.
Let me know if you need more help. (I have seen LV - TCL but never done it.)
Kevin Kent

> ...
Message 3 of 14
Whoops!!!!

I stand corrected.

I meant CIN, not CVI.

I also have never used LV for UNIX so have never used .so drivers.

Next time I will keep my mouth shut when dealing with unfamiliar areas; it keeps egg off the face.

Tim

"Kevin B. Kent" wrote:

> ...
> Just a small correction here. LabVIEW for UNIX will connect to a .so in the same way that
> Win-LV connects to a DLL (using the CLF). I am not sure what you mean by a .CVI file
> (Labwindows /CVI ?)
> ...
Message 4 of 14
Fogedaboutit 😉

Happens to me all the time (guess I like the taste of shoe leather !!)

Kevin

Timothy John Streeter wrote:

> ...
Message 5 of 14
Hi,

we have already linked TCL 8.3 and LabVIEW 5.1 in our automation system for
dynamic combustion-engine test stands. With this combination we achieved a
lot of flexibility in configuration, extended functionality for
test sequences (which are done completely in TCL), and a great increase in
performance, since more and more parts of the system are moving into DLLs.
(Eventually we'll kick out LabVIEW...). The complete handling and
configuration of I/O subsystems is also realized using TCL. It works great.
This environment also includes a complete redesign of named-variable
management as a DLL library. Handling of unit systems, display
configuration, etc. is included. This library is linked into TCL (as new
Tcl-Obj commands) and LabVIEW (as VIs) to access dynamically created
variables, controllers, lookup tables, etc. from LabVIEW and TCL. There are
only a few VIs in LabVIEW doing the startup process for TCL and providing a
TCL interpreter interface for LabVIEW.
Since all these extensions are part of a commercial product, I'm not allowed
to release source code for these packages today --- sorry. But if you contact
our sales department (tgs.reitz@gmx.de, FaTGS@aol.com) and place requests
for this software, we could force the internal discussion about providing
some of these packages for free... ;-). (We're using unmodified libraries of
TCL/TK.)

Jens-Achim Kessel

Mike Gretzinger schrieb:

> ...

--
======================================
TECHNOGERMA Systems GmbH
Dipl.-Ing. Jens-Achim Kessel
Departement for Automation and Control
D-64291 Darmstadt, Roentgenstrasse 10a
phone: <++49> (6151) 99 58 7 - 74
fax: <++49> (6151) 99 58 7 - 62
e-mail: tgs.kessel@gmx.de
======================================
Message 6 of 14
Hi,

Can I ask a couple of questions on this? I am not using LabVIEW, but our
company needs some ATE work and I am thinking about using it. I would
love to use Tcl/Tk or Perl bindings on Linux.

As you are talking about DLLs, I assume the link is with Win Tcl, not
Unix/Linux TCL?

You say "(At last we'll kick out LabVIEW...)"; as someone thinking about
starting with LabVIEW, that slightly worries me. Can you make any comment on that statement?

regards

Danny Thomson

Jens-Achim Kessel wrote:

> ...
Message 7 of 14
Danny Thomson schrieb:

> Hi,
>
> CAn a ask a couple of questions on this, I am not using Labview but our
> company needs some ATE work and I am thinking about using it. I would
> love to use Tcl/Tk or perl bindings on Linux.
>
> as you are talking about DLL's I assume the link is with Win Tcl not Unix /
> linux TCL ?

Today it is. But since all DLLs are written in plain C, are multithreading-safe, and
use TCL's interface to mutexes and signals, there's no reason why they should
not be portable to Linux without problems.

> you say "(At last we'll kick out LabVIEW...)" as someone thinking about
> starting with
> Labview it slightly worries me. Can you make any comment on that statement

Well, we have been working with LabVIEW and NI's I/O hardware for a long time now. Our
LV application (let's call it A*) is built out of 700 different VIs to realize
powerful, flexible, and secure automation software. The software requires somewhere
around 150 MB of RAM and a dual Pentium 500 PC, and loads both CPUs at 80%
each. We optimized a lot of things in LabVIEW, but for professional programming,
LabVIEW does not fit our requirements any more. Let me give you some examples:

I) Data management: In one of the first versions of A* we built a global variable
as a database. All necessary configuration information was stored in this
database: approx. 300 array elements, which are in turn clusters of arrays and
sub-clusters. To access one element of this array for reading, LV creates a
complete copy of the array (3 ms of hard work), picks the element, and frees the
memory again. This happens with every access to a global variable. Why? Now this is
realized using a DLL and we have access times to the same information that are
incredibly fast. We unloaded the CPUs by approx. 15%.

II) CPU load: For about 150 analog signal channels we had to provide a
four-boundary range check. In cycles of 100 ms, A* checks these channels against
lower and upper warning and alarm limits. Removing this functionality from A* freed
about 20% of the dual-CPU load; reimplementing the same (even extended)
functionality as a DLL function in the third version of A* was not even visible in the
CPU load diagram of NT.

III) Timing: For such a heavily loaded application it is not possible to keep the
timing of control loops in 100% exact cycles. We changed many parameters of LV and
of the VIs of A*, but without success.

IV) Bugs: There are a lot of known bugs in LabVIEW which NI's developers have not
removed for several versions. E.g. it is still not possible to access all attributes
of graphs with multiple y-axes. If you're working with dynamic assignment of
signals --- bang!

V) Some NI hardware seems not to have been tested in a real industrial
environment.
E.g. there are some PC I/O boards that wire the pins of the interface IC directly
(without buffering, opto-couplers, etc.) to the connector port. If there are
disturbances on the external lines, the IC blows up, and this IC is not even
replaceable.
E.g. with their SCXI chassis there are big problems with power-supply
interference on the analog inputs. The I/O driver software is still buggy. If you
accidentally configure the hardware setup with a different order of your modules,
it can happen that all 32 digital outputs are activated when the software starts.
That causes a lot of damage in industrial processes if DOs are driving
220V/380V blowers and other equipment drawing some kW of power...

VI) We don't call the hotline (in Munich) anymore, since it takes a lot of time
to explain our problems, and in almost every case they were not able
to give us support.

VII) Editing G: As you probably know, G is the graphical programming language
inside LabVIEW. We have a lot of diagrams (= source code) of VIs which are much
larger than one 1024x768 screen. If we have to add some functionality, it takes more
time to make some space in the diagram than to program the new code. There's no
way to print out these diagrams in a readable way. All hidden frames (cases,
sequences, etc.) are stacked hierarchically on several pages of paper. If the stacking
goes much more than one level deep (and there are probably some empty frames), it is not
possible to find bugs by looking at your printout. You simply lose the overview.
Any code written as ASCII text is more readable and easier to extend.

VIII) It can happen that you build your GUI, hit "save" every 5 minutes,
insert some images on buttons, add code to the GUI, test it, close
LabVIEW, go home to sleep, restart LabVIEW the next morning, open
the last day's work and --- BOOOOOOOOM! No way back. All you did is lost.
GRRRRRRRR! That will never happen with ASCII files!!!!

You want to know more? All this is really true, and we wasted a lot of money due to
the disadvantages of LabVIEW.

If you use LabVIEW to measure the temperature decrease of your cup of tea, it's
OK.
If you have a fixed, inflexible environment that will never change, and an
application with only one window, it may be OK.
But for the professional programming of a big project, you'll run very quickly
into the limitations of LabVIEW. That's why we're kicking LabVIEW out. (Probably
LabWindows/CVI will benefit.)

> ...

Jens-Achim
Message 8 of 14
Please don't take this as any type of attack on your comments. I'd like
to offer
another point of view though. Over the years I've seen several LV
projects go
through this same exact pattern. Since LV is a language, there are
pretty much
an unlimited number of ways to write an application -- some will prove
much more
satisfactory than others.

I'll try to shed light on some of these issues. Not to convince you that
you are wrong. I wasn't involved in your project, and I may be totally incorrect
in my assumptions, but I have been involved in "fixing" other projects
that wound
up in your situation.

For clarity, I'll mark my comments with ***.


> Well, we're working now for a long time with LabVIEW and NIs IO Hardware. Our
> LV-application (let's call it A*) is build out of 700 different VIs to realize a
> powerful, flexible and secure automation software. The software requires some what
> about 150 MB of RAM, a Dual Pentium 500 PC equipment and load both CPUs with 80%
> each. We optimized a lot of things in LabVIEW, but for professional programming,
> LabVIEW does not fit our requirements any more. Let me give you some examples:
>
> I) Data Management: In one of the first Versions of A* we build a global variable
> as a database. In this database all necessary configuration information has been
> stored. Approx. 300 Array elements which are clusters again of some arrays and sub
> clusters. To access one element out of this array for reading, LV creates a
> complete copy of this array (3ms hard work), picks the element and frees the
> memory again. This happends with any access to a global variable. Why? Now this is
> realized using a DLL and we've access times to the same information, which are
> incredible fast. We unloaded the CPUs by approx 15%.

*** This is unfortunately a common problem when people first start using LV
and have lots of data to store. The obvious solution -- a few global variables.
Unfortunately, globals cause problems in a parallel language, whether
its LV
or others. This is why LV didn't originally have global variables built in.
It wasn't until 1993 or so that global variables were added to LV. Why?
Because they provide a convenient mechanism for storing and sharing data in
small to medium-sized projects, and customers just kept requesting that they be added.

The problem with globals in a parallel environment is that more than one piece
of code may want to access the data at once. The solution is to limit access
to the data. The approach uses mutexes or another type of semaphore. This
restricts access to one piece of code at a time, meaning that reading
the data
isn't affected by writing the data, and so on.

LV currently uses mutexes immediately around the global read and write
nodes. The read or write accesses the data just long enough to transfer it to a private buffer.
Once in the private buffer, the parallel diagram threads can then manipulate the
data in any way they like. This works great for small amounts of data, but the
obvious problem here is with large and complex globals that need to be copied to
a private buffer, taking extra time and memory. Attempting to analyze
the user's diagram and automatically move the mutexes out to include
other diagram nodes very quickly gets complicated. It is something we are still
looking into, and when completed, it will mean that the read, the array index, and the cluster
unbundle could all happen within the mutex, meaning that less data needs to be
copied to a private buffer. In the meantime, what other solutions are there?

The obvious solution, outlined in the performance chapter of the user manual as
well as several memory application notes, is to protect access to the data
manipulations so that the entire global doesn't need to be copied. This is
easily done by making a functional interface to wrap around the global, making
it no longer a global, but private data accessed by a global function or set of
functions. At this point, the data is often stored in a shift register, though a
control or global will work, and many of the most common data manipulations are
moved into this function. The function can be given array indices, tags to search
for, or whatever is needed. Then the global function accesses its private data
and returns the result. All non-reentrant VIs are protected, so you have now
eliminated the expensive copy and probably rid yourself of code duplicated
throughout the project that was indexing and searching the global data. This is
pretty much identical to what is done in C++, where instead of just publishing
a global property that anyone can access, you instead keep the property private
and publish functions/methods for accessing the data. The methods can be as
simple as ReadAll and WriteAll, but more commonly they are indexing, searching,
and sorting functions that are abstracted, and the code is kept with the data so
that it can change without affecting lots of users.

I have retrofitted this approach onto similar systems, some with 100
VIs, others
with 500 VIs, and with just a few changes, you can cut the memory and
CPU usage
by a large amount, probably similar to what you achieved with the DLL.
If you
have a very large amount of data that needs very quick access, then perhaps
your DLL was the right approach after all. I do know of several customers
in the Industrial Automation area that attach LV diagrams to their in-memory
tag database. They then write some wrapper functions for doing tag lookup
and writing, same as they would in any language, and they are now able to
manage very impressive amounts of data very quickly with easy access
from LV.

>
> II) CPU load: For about 150 analog signal channels we had to provide a four
> boundary range check. In cycles of 100ms A* is checking these channels to lower
> and upper warning and alarm limits. Removing this functionality from A* we got
> about 20% of DualCPU load free, reimplementing the same (even extended)
> functionality as DLL function in the third version of A* was not visible in the
> CPU load diagram of NT.

*** I suspect that much of this is due to global access. Another possible
explanation is data conversions. The built-in LV functions/operators compile
code specifically for each datatype, but libraries often require data conversions.

>
> III) Timing: For such a hard loaded Application it is not possible to keep the
> timing of control loops in 100% exact cycles. We changed many parameters of LV and
> the VIs of A* but without success.
>

*** As you allude to, once the CPU isn't loaded down with memory management, I
suspect that the timing would be easier to control. Of course this is also heavily
affected by the OS, which is why LV is also being moved to realtime OSes. The
VIs written for one platform could then be moved to another, perhaps realtime,
platform when the need arises.

> IV) Bugs: There're a lot of known Bugs in LabVIEW wich are not removed from NI
> developers since versions. E.g. it is still not possible to access all attributes
> of diagrams with multiple y-axes. If you're working with dynamical assignment of
> signals --- Peng!

*** I'd characterize this not as a bug, but as a missing feature. The documentation
never implies that this is possible. Regardless, it would be a very nice feature,
and it will appear in LV sometime soon. Since it isn't possible to have every
feature built into LV, we attempt to make it a very open environment. It is
possible to call DLLs, EXEs, and CINs, and to access ActiveX controls and servers.
While we continue to improve the built-in controls, it has been possible to get
this graphing functionality using external ActiveX controls for several years now.

>
....

>
> VII) Editing G: As you probably know, G is this graphical programming language
> inside LabVIEW. We've a lot of Diagrams (= SourceCode) of VI's which are much
> larger than one 1024x768 screen. If we've to ad some functionality, it needs more
> time to make some space in that diagramm than to program the new code. There's no
> way to print out these Diagrams in a readable way. All hidden frames (Cases,
> Sequences, etc.) are stacked hierarchical on several pages of paper. If stacking
> is much more than one level (and there're probably some empty frames), it is not
> possible to find bugs by looking on your printout. You simply loose the overview.
> Any ASCII written code is more readable and easier to be extended.

*** Any time code isn't written to fit on a screen it becomes more difficult to
debug. Personally, I haven't printed out a piece of code in about four years. I
find the online tools for balancing scope, searching, accessing online
documentation, and looking up constants to be much, much more effective. I do use
paper documents when I'm learning about a library or looking for the right set of
functions to carry out some task, but once the code is written, it's never
printed. I'm not talking about LV diagrams here. I'm talking about C and C++ code.

>
> VIII) It could happen, that you build your GUI, you call "save"every 5 minutes,
> you insert some images on Buttons, you add code to the GUI, you test it, you close
> LabVIEW, you go home to sleep, you restart LabVIEW on the next morning, you open
> the last days work and --- BOOOOOOOOM! No way back. All you did is lost.
> GRRRRRRRR! Will never happen with ASCII files!!!!

*** Unfortunately, there are many ways to lose work. Unstable power, file system
or disk problems, bugs in the editor (LV or a text editor), and sometimes even
bugs in the SCC tools. Over the years I've lost some code and data to most of
these. I often use development, alpha, and beta versions of LV, so I've definitely
crashed my fair share of times. I've also lost work when Visual Studio or other
textual environments crashed. I've lost even more work on the less common
occasions when a hard drive crashed. And while it hasn't happened to me, other
groups have suffered from a corrupted source code control database, losing some
work and requiring lots of time to retrieve what they could.

I'm not trying to make excuses. I wish LV would never crash, and we go
to great
lengths to make it a stable editing environment. Unfortunately, the use
of graphics
alone means that it is more susceptible to crashes, particularly on
Windows, where
video drivers are so poorly written. Having looked into lots of bug reports
concerning LV, a fair share of them are our fault. Quite a lot, though, are
actually spurred by problems with the video driver, the printer, or the disk
file format. Many times when I've been sent a VI "that was fine one
day, but
cannot be loaded the next", I have looked into the file to see what went wrong.
At least half of the time the file had been corrupted because of cross-links
on the disk's file system. Sometimes the file was simply truncated -- 10K
was written out, but only 8K exists in the file to read back. Other times
the binary of the file would all of a sudden contain ASCII text from
some other
file. These are difficult situations to protect against. Fortunately,
the FAT
file system is dying out, and NTFS seems like quite a good one; so these
types of
corruptions happen way less often provided that disks are checked and repaired
after crashes and power outages.

>
> You want to know more? All this is realy true and we wasted a lot of money due to
> the disadvantages of LabVIEW.
>
> If you use LabVIEW to measure the temperature decrease of your cup of tea, it's
> ok.
> If you've a fixed, not flexible environment, that will never change, and an
> application with only one window, it may be ok.
> But for the professional programming of a big project, you'll run very quickly
> into the limitations of LabVIEW. That's why we're kicking LabVIEW out. (Probably
> LabWindows/CVI benefits)
>

Again, I'm not trying to attack your conclusion, but I suspect that your new
system will benefit quite a bit from the experience of the one written in LV.
In other words, projects are easier the second time around, and changing tools
may indeed be the right thing to do based on your level of experience with LV
versus other tools, but the tool change may not matter nearly as much as simply
rewriting the project and improving things based on your new knowledge.

I've seen projects attempted with LV that probably should have been written
using a different language, a different design, or different staffing.
On the other hand, I've also seen a couple of engineers or scientists replace a
dozen programmers and produce a product that was more successful, more flexible,
and far cheaper to maintain.

LV is simply another tool for making computers do work. I don't believe that it
is always better than every other tool, especially if it isn't used in the right
way. I think that computers are for everyone to use, and while some tasks should
be relegated to an expensive process of SW development based on highly flexible
designs and languages, many projects can better be addressed using a different
language/tool and a different approach by a professional from a different
background.

Now if we could just write an expert system that helped us to select tools
and evaluate our designs before we implement them.

Greg McKaskle
Message 9 of 14
Let me give some short answers to some of your statements. I don't want to start
a fight about tools and software, since I agree that "LV is simply another tool
for making computers do work". But I also wanted to outline some negative aspects
of using LV, to make it easier to decide between products. If you look into
product flyers and read the descriptions that come along with demo versions, no
company will put in any remarks about things that are not implemented yet or are
not working properly.
(Some parts of the original text are cut out (...) to reduce length.)

Greg McKaskle schrieb:

> ...
> *** This is unfortunately a common problem when people first start using LV
> and have lots of data to store. The obvious solution -- a few global variables.

Just for your information: before we started the development of this application,
we attended professional LV courses, to get the required information in a compact
and efficient way.

>
> Unfortunately, globals cause problems in a parallel language, whether it's LV
> or another. This is why LV didn't originally have global variables built in.
> It wasn't until 1993 or so that global variables were added to LV. Why? Because
> they provide a convenient mechanism for storing and sharing data in small to
> medium sized projects, and customers just kept requesting that they be added.
>
> The problem with globals in a parallel environment is that more than one piece
> of code may want to access the data at once. The solution is to limit access
> to the data. The approach uses mutexes or another type of semaphore. This
> restricts access to one piece of code at a time, meaning that reading the data
> isn't affected by writing the data, and so on.

>
> LV currently uses mutexes immediately around the global read and write nodes.
> The read or write accesses the data long enough to transfer it to a private
> buffer. ... In the meantime, what other solutions are there?

Mutexing the data access is a common way to make it multithreading-safe. In the
past I implemented access to a global store in a multiprocessor VME-bus environment
running LynxOS. For that implementation I developed a protection mechanism allowing
multiple reads in parallel but only one write access at a time. This was very useful,
since about 98% of the accesses were just reads...

> The obvious solution, outlined in the performance chapter of the user manual as
> well as several memory application notes, is to protect access to the data
> manipulations so that the entire global doesn't need to be copied. This is
> easily done by making a functional interface to wrap around the global, making
> it no longer global, but private data accessed by a global function or set of
> functions. At this point, the data is often stored in a shift register,
> ...

We also implemented this nice approach, with terrible side effects. Since we're working
in a multiprocessor environment, it could happen that both CPUs were running in a
critical section while calling the VI that holds the data in its shift register. When we
enabled distribution over two processors, it took some time until LV crashed with an
illegal access to address 0x00000000; forcing it onto one CPU, no crash occurred...
(The VIs were called from several execution systems; we also ran tests with only one
execution system...)
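The LabVIEW "functional global" Greg describes has a rough C analogue: the data lives as private static state inside a single access function, and a mutex serializes all operations so a read-modify-write never copies the whole data set out and back. A sketch under that assumption; the `fg_op` interface is invented for illustration:

```c
#include <pthread.h>

/* Rough C analogue of a LabVIEW functional global: private static
 * state inside one access function, with a mutex making each whole
 * operation (including updates) atomic. */
typedef enum { FG_GET, FG_SET, FG_ADD } fg_op;

double functional_global(fg_op op, double arg) {
    static double value = 0.0;   /* plays the role of the shift register */
    static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
    pthread_mutex_lock(&m);
    switch (op) {
        case FG_SET: value = arg;  break;
        case FG_ADD: value += arg; break;   /* update in place, no copy-out */
        case FG_GET:               break;
    }
    double result = value;
    pthread_mutex_unlock(&m);
    return result;
}
```

The point of the pattern is that `FG_ADD` happens entirely inside the lock, so no caller can observe (or clobber) a half-finished update, which is exactly what a bare global read-modify-write cannot guarantee.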

>
> >
> > II) CPU load: For about 150 analog signal channels we had to provide a four-
> > boundary range check. In cycles of 100 ms A* checks these channels against lower
> > and upper warning and alarm limits. Removing this functionality from A* freed
> > about 20% of the dual-CPU load; reimplementing the same (even extended)
> > functionality as a DLL function in the third version of A* was not visible in the
> > CPU load diagram of NT.
>
> *** I suspect that much of this is due to global access. Another possible
> explanation is data conversions. The built-in LV function/operators compile
> code specifically for each datatype, but libraries often require data conversions.
>

No globals inside this code (just shift registers 😉) and no time-consuming data
conversions for library calls. Due to the organisation of the 150 channels (and even
more), we had to apply the same type of checking to 4 parallel groups of data, and there
is also a mechanism to report detected errors. So the complete check was realized
using a lot of VIs.
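To make the "four-boundary range check" concrete: each channel value is compared against an ordered set of low-alarm, low-warning, high-warning, and high-alarm limits. A minimal sketch of what the DLL version might look like; the limit structure and status codes are assumptions, not A*'s actual interface:

```c
/* Four-boundary check: limits are assumed ordered
 * alarm_lo < warn_lo < warn_hi < alarm_hi. */
typedef enum { ST_OK, ST_WARN_LOW, ST_WARN_HIGH,
               ST_ALARM_LOW, ST_ALARM_HIGH } status_t;

typedef struct { double alarm_lo, warn_lo, warn_hi, alarm_hi; } limits_t;

status_t check_channel(double v, const limits_t *lim) {
    if (v < lim->alarm_lo) return ST_ALARM_LOW;    /* worst cases first */
    if (v > lim->alarm_hi) return ST_ALARM_HIGH;
    if (v < lim->warn_lo)  return ST_WARN_LOW;
    if (v > lim->warn_hi)  return ST_WARN_HIGH;
    return ST_OK;
}

/* Check a whole group in one tight loop -- the kind of code that costs
 * almost nothing as a compiled DLL function, but becomes many wired
 * VIs (with per-call overhead) in a diagram. */
void check_group(const double *vals, const limits_t *lims,
                 status_t *out, int n) {
    for (int i = 0; i < n; i++)
        out[i] = check_channel(vals[i], &lims[i]);
}
```

A few comparisons per channel every 100 ms is negligible for compiled code, which is consistent with the DLL version not registering on NT's CPU load diagram.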

>
> >
> > III) Timing: For such a heavily loaded application it is not possible to keep the
> > timing of control loops in 100% exact cycles. We changed many parameters of LV and
> > the VIs of A*, but without success.
> >
>
> *** As you allude to, once the CPU isn't loaded down with memory management, I
> suspect that the timing would be easier to control. Of course this is also heavily
> affected by the OS, which is why LV is also being moved to realtime OSes. The
> VIs written for one platform could be moved to another, perhaps realtime, platform
> when the need arises.

OK. (But this touches another historical and philosophical question about choosing the
OS. I would always prefer some other RT-OS to the B.G.-OS 😉 It's a problem of convincing
the customers...)

> ...call DLLs, EXEs, CINs, and to access ActiveX controls and servers. While we
> continue to improve the built-in controls, it has been possible to get this
> graphing functionality using external ActiveX controls for several years now.

What about portability if you're using ActiveX controls? I also tried to use an
ActiveX text control to implement a comfortable macro editor for our application, but
once the ActiveX control got the focus it did not release it any more. All keyboard
input went to the ActiveX control and not to the selected textbox. (Our VIs have no
window decoration, since they occupy the complete screen.) Usable? No.

>
> >
> ...
>
> >
> > VII) Editing G: As you probably know, G is the graphical programming language
> > inside LabVIEW. We have a lot of diagrams (= source code) of VIs which are much
> > larger than one 1024x768 screen. If we have to add some functionality, it takes more
> > time to make some space in the diagram than to program the new code. There's no
> > way to print out these diagrams in a readable way. All hidden frames (Cases,
> > Sequences, etc.) are stacked hierarchically on several pages of paper. If stacking
> > goes much deeper than one level (and there are probably some empty frames), it is not
> > possible to find bugs by looking at your printout. You simply lose the overview.
> > Any ASCII-written code is more readable and easier to extend.
>
> *** Anytime code isn't written to fit on a screen it becomes more difficult to
> debug. Personally, I haven't printed out a piece of code in about four years. I find
> the online tools for balancing scope, searching, accessing online documentation,
> and looking up constants to be much, much more effective. I do use paper documents
> when I'm learning about a library or looking for the right set of functions to
> carry out some task, but once the code is written, it's never printed. I'm not
> talking about LV diagrams here. I'm talking about C and C++ code.
>

Then you probably have a multi-screen desktop with 21" monitors 😉 ? OK. Some people
print their code, others just work with the on-screen display. I prefer my 2 x 1 meter
desk, where I keep several printouts of header files, pieces of source code and
documentation available. That way I have the overview without clicking, moving windows,
etc., just with a little turn of my head.

>
> >
> > VIII) It can happen that you build your GUI, you call "save" every 5 minutes,
> > you insert some images on buttons, you add code to the GUI, you test it, you close
> > LabVIEW, you go home to sleep, you restart LabVIEW the next morning, you open
> > the last day's work and --- BOOOOOOOOM! No way back. All you did is lost.
> > GRRRRRRRR! Will never happen with ASCII files!!!!
>
> *** Unfortunately, there are many ways to lose work. ...

This effect was reproducible with a completely newly designed VI, on my PC and also on
that of the hotline service... ;-). (I have had this problem also with WordPerfect and
Word documents, but never with documents in ASCII format, never with Matlab/SIMULINK
m and mod files, never with C/C++ source code and makefiles...)

> ...
> Again, I'm not trying to attack your conclusion, but I suspect that your new system
> will benefit quite a bit from the experience of the one written in LV. In other
> words, projects are easier the second time around, and changing tools may indeed
> be the right thing to do based on your level of experience with LV versus other
> tools, but the tool change may not matter nearly as much as simply rewriting the
> project and improving things based on your new knowledge.

For me it is not only the second time doing this job. (The first implementation of
this software that I realized was a redesign of an existing system, when I was a PhD
student at the University of Darmstadt three years ago.) And when I started at TGS,
everybody in this company told me about the nice features and the great performance of
LV, its easy way of programming, etc., and I doubted the suitability of LV for this
project. As a newbie I had no vote on that project. Any software design tool that
places a lot of uncontrollable overhead in the background of the application could lead
to the same situation.

> I've seen projects attempted with LV that probably should have been written
> using a different language, different design, different staffing.
> On the otherhand, I've also seen a couple engineers or scientists
> replace a dozen
> programmers, and produce product that was more successful, more
> flexible, and
> way cheaper to maintain.

Right. As I also mentioned, it depends on the type of application.

> LV is simply another tool for making computers do work.
> ...
> Now if we could just write an expert system that helped us to select tools
> and evaluate our designs before we implement them.
>
> Greg McKaskle

Wonderful. Who'll do this job? 😉

JAK
0 Kudos
Message 10 of 14
(7,739 Views)