
Designing a LabVIEW application for complex telemetry monitoring

Hello all,
I am in the initial stages of designing an automated test system for which I plan to use LabVIEW and probably TestStand. (I'm just getting started with LabVIEW, having run some demo code and tutorials.) My application has a few interesting characteristics, and I'd like some suggestions on the most effective way to structure the LabVIEW application.

Background:

The test system is intended to test an electronic instrument that collects a large number of different physical measurements and reports them to a master controller in a binary telemetry stream through a specialized high-speed serial interface. The test system will include programmable power supplies and custom digital interfaces to talk to the unit under test.  The test system will also need to display and archive the instrument data, issue instrument commands, and execute scripted sequences of test operations; that's the easy part. (Assume a mid-range VXI chassis and controller to run the show.)

Where things get interesting is in interpreting the instrument's rather complex binary data stream (which is a fixed project requirement for the purposes of my test system). The instrument reports a hundred or so different data items at various repetition rates, and reports this data to the test system as binary data packets some tens of times per second. Different packets will contain different sets of data items, identified by keying information within each packet and by a predefined "data dictionary" that describes the format of each telemetry item.

Each data item has a unique set of conversion factors associated with it that scale it to engineering units (volts, amps, degrees C); some data items will also be individually compressed to reduce their data size for transmission. Most data items also have sets of limits associated with them, which may depend on the states of other telemetry items.

The test system needs to unpack the binary data packets, figure out which data items it has received, decompress the data items that are compressed, scale the binary data to engineering units, test their values against the predefined limits, and possibly take some action if limits are exceeded. It needs to do this in real time, of course.
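
To give a sense of the processing chain I have in mind, here is a rough C++-style skeleton of the per-packet flow (every type and function name below is made up for illustration, the function bodies are omitted, and the real behavior would be driven by the data dictionary):

#include <cstddef>
#include <cstdint>
#include <vector>

// Illustrative stand-ins; the real definitions come from the data dictionary.
struct RawItem { uint16_t id; uint16_t count; };               // keyed raw value from a packet
struct Reading { uint16_t id; double value; bool inLimit; };   // fully processed channel reading

std::vector<RawItem> unpackPacket(const uint8_t* pkt, size_t len); // identify items via keying info
uint16_t decompress(uint16_t id, uint16_t count);                  // no-op for uncompressed items
double   toEngineeringUnits(uint16_t id, uint16_t count);          // per-channel scaling
bool     checkLimits(uint16_t id, double value);                   // may depend on other items' states
void     onLimitViolation(const Reading& r);                       // "possibly take some action"

void processPacket(const uint8_t* pkt, size_t len, std::vector<Reading>& out)
{
    for (const RawItem& item : unpackPacket(pkt, len)) {
        uint16_t count = decompress(item.id, item.count);
        double   value = toEngineeringUnits(item.id, count);
        bool     ok    = checkLimits(item.id, value);
        Reading  r{item.id, value, ok};
        if (!ok) onLimitViolation(r);
        out.push_back(r);
    }
}

Whether that chain lives in C++ or in a VI is exactly the question below.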

The Hard Part:

The challenge in designing this application is how best to handle the processes of unpacking, format-converting, and limit-checking several hundred different data items per second. One could:

a) Write a virtual instrument driver in C++ that handles all comms with the unit under test, does all the telemetry unpacking, data conversion, and limit checking, and outputs fully-processed data channel readings to LabVIEW; or

b) Write code in LabVIEW that does all of this and communicates with a simple low-level virtual instrument driver that just handles packet comms; or

c) Something in between these extremes (see the sketch below).
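
To make option (c) a bit more concrete: I'm imagining a thin C++ parsing library called from LabVIEW through a Call Library Function Node, with the heavy unpacking and conversion done in C++ and LabVIEW receiving flat arrays to display, archive, and limit-check. A hypothetical interface, with names of my own invention, might look like this:

#include <cstdint>

// Hypothetical C-callable export for LabVIEW's Call Library Function Node.
// LabVIEW preallocates the output arrays; the library fills them with the
// channels found in one packet and their scaled engineering values.
extern "C" int32_t UnpackPacket(const uint8_t* packet,      // raw packet bytes
                                int32_t        packetLen,
                                int32_t*       channelIds,  // out: channels present in this packet
                                double*        values,      // out: scaled readings, same order
                                int32_t        maxChannels, // capacity of the output arrays
                                int32_t*       numChannels);// out: how many entries were filled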

The Question:

What's the most effective way to extract data from complex telemetry packets and get it into LabVIEW in real time? On the one hand, I've seen LabVIEW code that was written in-house to unpack and reformat binary instrument data for processing, and I have my doubts that this method scales well to a hundred or two hundred different "channels"; I fear that I'll encounter speed, memory, and maintainability problems in trying to graphically program the parsing of a complex telemetry data format. On the other hand, writing the telemetry extractor in C++ and building in an interface to a data dictionary isn't quite trivial either (Been there, done that, someone else owns that code).

Comments from folks more knowledgeable about LabVIEW's strengths and limitations are most welcome.

-- Steve
Message 1 of 7

Hello Steve,

If I'm reading correctly, the basic question here is whether LabVIEW can manipulate and process packets of binary data quickly enough to keep the system running at the speeds you desire.

It should be no surprise that one of the biggest advantages of LabVIEW is its ease of programming and its graphical interface. As you can probably guess, this does come at a slight cost in efficiency (both speed and memory) in most cases when compared to a comparable text-based program (such as what you could write in C or C++). However, with most carefully written LabVIEW programs, the limitations are your host computer's processor, memory, and device communication method, not the LabVIEW code itself.

Others may have additional thoughts on the subject, but here are a few links I found that may be of some help:

Estimating Code Complexity: http://zone.ni.com/devzone/conceptd.nsf/webmain/344ee77586e7569f8625700d00709f1c
Memory Management in LabVIEW RT: http://zone.ni.com/devzone/conceptd.nsf/webmain/4db571a3513c859c862568eb0075a635
Considerations in Implementing LabVIEW Real-Time Drivers: http://zone.ni.com/devzone/conceptd.nsf/webmain/87b0c23bf81d2d9786256d2400696c87

Thanks for posting, and have a great afternoon.

Regards,

Travis M
Applications Engineer
National Instruments


Message 2 of 7
Thanks, Travis. I'll check out those links.

To refine my question slightly, my most critical concern is unpacking telemetry packets with a very complex structure and a large number of possible "data channels", and converting those items from raw (possibly compressed) integer ADC counts into floating-point values for the physical quantities measured.

Most measurement and control applications deal with a relatively small number of distinct "data channels"; this application has more than most, and I'm unsure whether the graphical-programming model is the best option for that specific subtask. (Think of a VI that decodes a binary data packet with an arbitrary set of data from up to a hundred different data channels.) LabVIEW and TestStand can obviously handle the rest of the system -- that doesn't worry me.

Thanks again, Steve
Message 3 of 7
It is hard to give you a "best guess" of the "doability" of this in LabVIEW without more specific information than "fairly complex data and a large number of data channels." Of course, even with a complete spec of the worst case, short of writing the code we might not be able to guesstimate it. That having been said, if you know the task can be accomplished in C or C++ and you are planning on using TestStand, you could "mix and match" the parts whose strengths are best suited to the tasks.

LabVIEW, as an earlier poster commented, does a pretty good job of producing optimized machine code, but using LabVIEW _can_ carry with it the overhead of some of the graphical aspects. The bigger issue is optimizing the initial algorithm, and then the LabVIEW code, before the LabVIEW compiler even starts to produce optimized machine code. As demonstrated in another thread, something as simple as putting a 1 ms Wait in a while loop (vs. no wait) can mean the difference between CPU usage of less than 1% (with the wait) and 50% (without it), as LabVIEW optimizes thread utilization.

The more information you are able to give, the better an answer we can probably return, but without actually writing the code it will still be theoretical.
 
Sorry for a less than definitive answer!
 
P.M.
Putnam
Certified LabVIEW Developer
Senior Test Engineer, North Shore Technology, Inc.
Currently using LV 2012-LabVIEW 2018, RT 8.5
LabVIEW Champion

Message 4 of 7

Steve;

Could you perhaps give a few specifics of how the data channel operates? Is it a synchronous or async connection? How is the data packaged? Is it a standard protocol or application specific?

Mike...


Certified Professional Instructor
Certified LabVIEW Architect
LabVIEW Champion

"... after all, He's not a tame lion..."

Message 5 of 7

Hi Steve,

The details of your system come into play early in a discussion of this type.

I am developing an application now that sounds similar on the surface.

I suggest you look into the details of the "OSI 7-Layer Model" for an example of how to approach this type of project.

In my case I am using a set of VIs to handle incoming updates, which queue to a "packet director" that in turn passes the data to entities that are familiar with the specifics of the data. The "Entities" do the limit checking, etc.

 

You should also locate Jim Kring's discussion of design patterns for inspiration on how to move the data between sub-systems.

 

Ben

Retired Senior Automation Systems Architect with Data Science Automation
LabVIEW Champion, Knight of NI, and Prepper
Message 6 of 7
Thanks, folks. You raise several good points with respect to the difficulty of estimating project complexity and the best approach from meager data. (Of course, I find myself in something of the same situation, as the data formats and channel count are not fully fleshed out yet.) Here's the strawman I'm presently working with.

As a baseline for understanding the system, assume that the instrument under test delivers its data in binary packets over a high-speed serial bus (we're still not certain which one). For our purposes here, assume that the bottom OSI protocol layers up to the transport layer are already built, and that our job begins with receiving a binary data packet in a buffer, ready to parse into its component data items.

Assume that there are 128 different primary data channels that measure different physical quantities; each data channel is received as an integer count (either a 16-bit integer, or a 16-bit integer compressed to an 8-bit integer using a lossy log-compression method, which is then decompressed to a 16-bit integer). Each channel has a unique set of conversion factors that converts it to a floating-point number in one of two ways: 1) Polynomial conversion, Y = AX + BX^2 + CX^3... or 2) piecewise-linear approximation to a curve defined by a set of up to 16 X,Y pairs. The size, data compression, and conversion factors for a given data channel are constant at run time, but will vary for different serial numbers of the same instrument and may be updated from time to time during development (so a loadable config file would be useful).
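
For illustration, here is roughly what I picture the per-channel conversion looking like if written in C++ (the coefficients, breakpoint tables, and flags would come from the loadable config file; the 8-bit log decompression is omitted since that law isn't pinned down yet; and all names here are placeholders):

#include <cstddef>
#include <cstdint>
#include <utility>
#include <vector>

// Illustrative per-channel conversion record, loaded from a config file.
struct ChannelConfig {
    bool compressed;                              // 8-bit log-compressed vs. raw 16-bit
    bool usePolynomial;                           // polynomial vs. piecewise-linear conversion
    std::vector<double> coeff;                    // A, B, C, ... in Y = AX + BX^2 + CX^3 + ...
    std::vector<std::pair<double, double>> table; // up to 16 (X, Y) breakpoints
};

// Convert one (already decompressed) 16-bit count to engineering units.
double convertCount(uint16_t count, const ChannelConfig& cfg)
{
    double x = static_cast<double>(count);
    if (cfg.usePolynomial) {
        double y = 0.0, xn = x;                   // first term is A*X (no constant term)
        for (double c : cfg.coeff) { y += c * xn; xn *= x; }
        return y;
    }
    // Piecewise-linear: interpolate between the bracketing breakpoints
    // (assumes the table is populated and sorted by X).
    const auto& t = cfg.table;
    if (x <= t.front().first) return t.front().second;
    for (size_t i = 1; i < t.size(); ++i) {
        if (x <= t[i].first) {
            double f = (x - t[i - 1].first) / (t[i].first - t[i - 1].first);
            return t[i - 1].second + f * (t[i].second - t[i - 1].second);
        }
    }
    return t.back().second;
}

An array of 128 such records, indexed by channel number and loaded at startup, is how I would manage it in C++; the open question is what the equivalent maintainable structure looks like in LabVIEW.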

Add to this a set of 32 single-bit flags that represent various instrument states and are converted to TRUE/FALSE or ON/OFF states.

Assume that the instrument sends 5 packets per second, every second, on a regular schedule, for a week. Each packet contains a 128-bit block of channel flags that identifies which primary data channels appear in the packet (channel flag bit = 1 means data for that channel appears in this packet), followed by the 32 status bits, followed by channel data for each channel whose associated channel flag is "1", appearing in the same order as the channel bit flags. (This setup lends itself well to a stream-based packet decoding scheme.)

The read rates for the data channels are individually configured, so a given packet can contain any combination (not predictable in advance by the test system) of up to 128 data channel values.
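
Sketching the unpack step itself in C++ terms, I picture something like the following (the bit ordering within the flag block, the byte order, and the compressed-channel handling are all assumptions on my part, not settled format details):

#include <cstddef>
#include <cstdint>
#include <vector>

struct ChannelSample { uint8_t channel; uint16_t rawCount; };

// Walk the 128-bit channel-flag block, then pull one value for each channel
// whose flag is set, in flag order. channelIsCompressed comes from the
// per-instrument configuration described above.
std::vector<ChannelSample> unpack(const uint8_t* pkt, size_t len,
                                  const bool channelIsCompressed[128])
{
    std::vector<ChannelSample> out;
    const uint8_t* flags = pkt;            // first 16 bytes: 128 channel flags
    const uint8_t* data  = pkt + 16 + 4;   // channel data follows the flags and 32 status bits
    size_t pos = 0;

    for (int ch = 0; ch < 128; ++ch) {
        bool present = (flags[ch / 8] >> (ch % 8)) & 1;    // bit ordering assumed LSB-first
        if (!present) continue;

        size_t need = channelIsCompressed[ch] ? 1 : 2;
        if (16 + 4 + pos + need > len) break;              // short/malformed packet

        uint16_t raw;
        if (channelIsCompressed[ch]) {
            raw = data[pos];                               // 8-bit log-compressed; expand later
        } else {
            raw = static_cast<uint16_t>(data[pos] | (data[pos + 1] << 8)); // byte order assumed
        }
        pos += need;
        out.push_back(ChannelSample{static_cast<uint8_t>(ch), raw});
    }
    return out;
}

Doing this in C++ is straightforward enough; what I can't yet judge is how cleanly the equivalent logic expresses itself in a block diagram and how maintainable it stays as the data dictionary grows.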
 
The issue at hand for me is whether to do this unpack-convert-and-scale in a C++ routine that ingests a packet and feeds converted data to the VI, or to write the unpack-convert-and-scale code graphically as part of the LabVIEW VI. If the latter route is chosen, how does one manage the 128 different sets of conversion factors in a maintainable fashion? I have a good idea of what the C++ approach requires, but not so good an idea of how doing this task in LabVIEW would compare.

Thanks again, Steve
Message 7 of 7