Best way to share data on RT Linux

Hello,

 

Sharing data on an NI PXI RT Linux target without causing undue CPU usage or blocking is a major concern for many of us. Here is the general system design:

 

Multiple loops that read or write data at 10 to 1000 Hz (10 to 5000+ Hz would be better)

  • 50 to 100 VIs running in parallel, performing various tasks
  • Tasks can be reading/writing I/O, PIDs, calculations, alarms, communication with external devices, etc.
  • Each task writes an array of values (array size varies from 10 to 500 elements) to a module global variable (this might change to a shared variable with an RT single-element FIFO to reduce blocking on writes)
  • Any one of the 50 to 100 VIs might need to read any other VI's data, so blocking could occur from multiple reads of the module global variable (or FIFO)

 

Considering the speed at which I want some of the loops to iterate, what might be the best architecture for this monster?
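As a text-language sketch of the design above (Python threads standing in for parallel LabVIEW loops; all class and task names are illustrative, not any LabVIEW API), each task publishes its latest array to a per-task single-element register that any other task can read:

```python
import threading

class LatestValueRegister:
    """Single-element store: each write overwrites the previous array,
    analogous to a single-element RT FIFO used as a current-value table."""
    def __init__(self):
        self._lock = threading.Lock()
        self._data = ()

    def write(self, values):
        snapshot = tuple(values)      # copy made outside the lock
        with self._lock:
            self._data = snapshot     # critical section is one assignment

    def read(self):
        with self._lock:
            return self._data         # non-destructive read of the latest value

# One register per task; any task may read any other task's register.
registers = {name: LatestValueRegister() for name in ("pid_loop", "alarm_loop")}
registers["pid_loop"].write([1.0, 2.0, 3.0])
print(registers["pid_loop"].read())   # (1.0, 2.0, 3.0)
```

The point of the sketch is that writes and reads each hold the lock only long enough to swap or grab one reference, which is the same reason a single-element FIFO keeps blocking short.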

Message 1 of 7

Looks like VeriStand is exactly what you need. The VeriStand Engine uses the non-blocking RT-FIFO.

-------------------------------------------------------
Control Lead | Intelline Inc
Message 2 of 7

When you say "share data on NI PXI RT Linux without causing undue CPU usage", do you mean within the RT Target (such as a CompactRIO) or between the Host (PC) and the RT Target?

 

For the former ("within"), I've used Asynchronous Channel Wires (you need at least LabVIEW 2019 to do this "properly" on an RT Target), and RT FIFOs if going between the Target and its FPGA.  For the latter ("between"), I've used Network Streams. 

 

I've achieved (burst) speeds of 16 channels of A/D from 16 custom-made, SPI-managed circuit boards, sampling at (I believe -- I haven't looked at it recently) at least 10 kHz.  These data go from the FPGA to a Timed Loop on the RT Target, and the bursts of data are transmitted to the Host (and streamed to the PC's disk) via Network Streams.  Transmission within the Host and within the Target is accomplished with Channel Wires.

 

Bob Schor

Message 3 of 7

@Bob_Schor wrote:

[...] and RT FIFOs if going between the Target and its FPGA.  For the latter ("between"), I've used Network Streams. 


I guess you meant DMA FIFOs for communication between RT (which you call "Target") and FPGA.

RT FIFOs are for communication within an RT application.

Message 4 of 7

@raphschru wrote:

@Bob_Schor wrote:

[...] and RT FIFOs if going between the Target and its FPGA.  For the latter ("between"), I've used Network Streams. 


I guess you meant DMA FIFOs for communication between RT (which you call "Target") and FPGA.

RT FIFOs are for communication within an RT application.


Right you are.  I think of LabVIEW Real-Time as Host (PC), Target (something running a Real-Time OS and communicating with the Host), and (optionally) FPGA, which communicates with the (RT) Target (via a DMA FIFO, thanks for the correction).

Message 5 of 7

Thanks for the reply, Bob.

 

  • My application runs on an NI PXI platform under NI Linux Real-Time. I am not using an FPGA (unless I need to).
  • Each VI runs asynchronously, producing an array of data, and will write to a single-element FIFO to prevent writes from blocking on reads.
  • Each VI needs data as input because its calculation is partially based on another VI's output data.

Here's the problem:

  • Since each VI must read from another VI's single-element FIFO, there could be read blocking, because many VIs may need to read the same FIFO at the same time to get the data they need. How do I prevent this blocking on reads?
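One pattern that sidesteps read contention entirely is to publish each new array as an immutable snapshot and have readers grab the current reference without taking a lock at all (an RCU-style atomic pointer swap). This is a hedged Python sketch of the idea, not a LabVIEW API; in CPython, rebinding an attribute is atomic, which plays the role of the atomic swap:

```python
import threading

class SnapshotRegister:
    """Writer builds a new immutable tuple, then swaps it in with a single
    reference assignment. Readers just grab the current reference, so many
    readers never serialize against each other or against the writer."""
    def __init__(self):
        self._snapshot = ()

    def publish(self, values):
        self._snapshot = tuple(values)   # build a new copy, then swap in

    def read(self):
        return self._snapshot            # lock-free: a reference grab

reg = SnapshotRegister()
reg.publish([10.0, 20.0])

def reader():
    data = reg.read()                    # may see the old or new snapshot,
    assert isinstance(data, tuple)       # but never a half-written one

threads = [threading.Thread(target=reader) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The trade-off is that a reader may see a slightly stale snapshot, which is usually acceptable for a current-value table driving PIDs and alarms.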
Message 6 of 7

So far, you've given some (shall we say) vague descriptions of what you want to do.  We know you are running a LabVIEW Real-Time Project with the Target being a PXI system running RT Linux.  I assume (because you didn't say one way or another) that this is a true LabVIEW-RT Project, with the Host PC running all of the "human-interaction" stuff, including keyboard and display interaction, and connection to file systems, and the PXI concerned with deterministic, real-time interaction with DAQ (and other) hardware.  Furthermore, the Host and Target machines are physically connected via a TCP/IP Ethernet connection (perhaps direct 6'-12' Cat 6 cables) through a simple switch.  

 

The other thing it would be interesting/useful to know is what version of LabVIEW you are running, both Version number and "bittedness" (there has to be a proper way to say "32-or-64 bit" -- hmm, maybe that's what I should have said ...).

 

I'm working on a moderately complex LabVIEW RT system (currently developing in LabVIEW 2019, contemplating moving to LabVIEW 2021).  I'm taking A/D samples from 32 (I think) A/D converters, sampling current and voltage readings during 20-120 Hz Pulse Trains (lasting several seconds) with pulse widths of 0.25-1 ms.  The RT code handles all of the timing of the Pulse generation and Pulse sampling, using a Channel Message Handler model (think Queued Message Handler, but with Messenger Channel Wires replacing Queues).  I also use Stream Channel Wires to implement Producer/Consumer "passing of the data" from the Timed-Loop "acquisition" loop to a "spool the data to disk" loop.

 

But I don't do the disk I/O on the RT Target!  Instead, I create Data Communication paths to the Host PC via Network Streams, and as fast as the sampled data comes into my "Consumer" loop on the RT side, it is sent (via a Network Stream) to the Host PC, where it gets spooled to a disk file.
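The shape of that pattern can be sketched in Python (a `queue.Queue` standing in for the Stream Channel / Network Stream; names and data are illustrative): the time-critical producer only enqueues, and all slow I/O happens on the consumer side.

```python
import queue
import threading

stream = queue.Queue()   # stands in for a Stream Channel / Network Stream
SENTINEL = None

def producer():
    """Time-critical side: acquire bursts and enqueue them; no disk I/O here."""
    for burst in ([1, 2, 3], [4, 5, 6]):
        stream.put(burst)
    stream.put(SENTINEL)  # signal end of acquisition

received = []

def consumer():
    """Non-deterministic side: drain the stream and 'spool' it (here, a list;
    on a real system, a Network Stream write to the host, then disk)."""
    while (burst := stream.get()) is not SENTINEL:
        received.append(burst)

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
print(received)           # [[1, 2, 3], [4, 5, 6]]
```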

 

I first learned this technique (and started using Network Streams for getting data from my PXI system to the PC Host) in code written in LabVIEW 2010-2012 (it's long enough ago I don't quite remember which version of LabVIEW I was using -- somehow, LabVIEW 2011 sounds right, but I'm giving myself a little wiggle room).  As I mentioned, my latest LabVIEW RT system uses LabVIEW 2019 (32-bit), and works quite nicely.

 

My recommendation would be to use a stable "pattern" such as the QMH, the DQMH, or the CMH, and ship the data off to the Host PC for streaming to disk files (the PC has a lot of free time, and oodles of disk space, for this purpose).
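All of the patterns named above (QMH, DQMH, CMH) reduce to the same skeleton: a loop that dequeues (message, data) pairs and dispatches on the message until told to exit. A minimal, hedged Python sketch of that skeleton (message names are made up for illustration):

```python
import queue

def message_handler(inbox):
    """Minimal queued-message-handler loop: dequeue (message, data) pairs
    and dispatch on the message until an 'exit' message arrives."""
    log = []
    while True:
        msg, data = inbox.get()
        if msg == "exit":
            break
        elif msg == "acquire":
            log.append(("acquired", data))   # e.g. trigger a DAQ read
        elif msg == "spool":
            log.append(("spooled", data))    # e.g. forward to the Host PC
    return log

inbox = queue.Queue()
for item in [("acquire", [1.5, 2.5]), ("spool", "burst-001"), ("exit", None)]:
    inbox.put(item)
print(message_handler(inbox))
# [('acquired', [1.5, 2.5]), ('spooled', 'burst-001')]
```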

 

Bob Schor

 

 

Message 7 of 7