What is the Fastest producer consumer method. Queue, RT-FIFO, Event


I am assuming your code is running on a PC which does not have an RTOS. That raises the question: if the point of the RT FIFO is determinism, and it is being used on a non-deterministic OS, is there any reason to use it instead of a queue? My initial inclination is to say no. The speed will probably depend more on your processor, and any time difference between a queue and a FIFO will probably not be noticeable to the user, especially if the queue is only used for an event-driven queued state machine. What you need to be more careful about is whether the loop dequeuing elements can keep up with the loop enqueuing them; otherwise you have bigger issues!
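To make that last point concrete, here is a minimal producer/consumer sketch. It is Python rather than LabVIEW (a block diagram does not translate to text), and the names and loop bodies are placeholders, but it shows the idea of watching the queue depth to check that the consumer keeps pace with the producer.

import queue
import threading
import time

def producer(q, n):
    # The enqueuing loop: pushes n items as fast as it can.
    for i in range(n):
        q.put(i)
    q.put(None)                      # sentinel so the consumer knows to stop

def consumer(q):
    # The dequeuing loop: if this is slower than the producer, the queue depth
    # grows (or, with a bounded queue, the producer starts blocking).
    while True:
        item = q.get()
        if item is None:
            break
        # ... real processing would go here ...

q = queue.Queue()                    # unbounded, like a LabVIEW queue obtained with size -1
threading.Thread(target=producer, args=(q, 100_000), daemon=True).start()
worker = threading.Thread(target=consumer, args=(q,))
worker.start()
while worker.is_alive():
    print("queue depth:", q.qsize()) # a steadily growing depth means the consumer is falling behind
    time.sleep(0.1)
worker.join()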

Message 11 of 22

@Ryan,

 

Thanks for the link to the FAQ; that was the info I was looking for on RT-FIFOs.

The key information I was looking for was that queues use blocking calls and that an RT FIFO preallocates its memory.

Execution determinism is not really a concern in my application, but it is good to know for future work.
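For readers who have not met the two primitives, a rough text analogy of that distinction (Python standing in for LabVIEW, so this is only a sketch of the behaviour, not the actual implementation): a queue read blocks the caller until data arrives, while an RT-FIFO-style buffer is sized once up front and is read by polling.

import queue
from collections import deque

# Blocking behaviour (queue-like): the reader sleeps until an element arrives.
q = queue.Queue()
q.put("sample")
item = q.get()                       # would block here if the queue were empty

# Preallocated, bounded behaviour (RT-FIFO-like): capacity is fixed at creation,
# reads return immediately whether or not data is present, and a full buffer
# does not grow. Overwriting the oldest element is one possible full-buffer
# policy; the real RT FIFO behaviour depends on its configuration.
class FixedFifo:
    def __init__(self, size):
        self.buf = deque(maxlen=size)                      # no growth after this point
    def write(self, element):
        self.buf.append(element)                           # oldest element is dropped when full
    def read(self):
        return self.buf.popleft() if self.buf else None    # never blocks

fifo = FixedFifo(1024)
fifo.write(42)
print(fifo.read())   # 42
print(fifo.read())   # None -> caller polls again later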

 

 

Here is some (hopefully constructive) feedback to NI/LabVIEW.

 

I have posted on this forum many times seeking wisdom and knowledge, and more often than not an NI Applications Engineer or LV guru replies with a link to a helpful KnowledgeBase page.

 

I do my best to search for these nuggets, but I just can't seem to find them.

 

Is there a special "Pro" search page that I should use? :)

iTm - Senior Systems Engineer
uses: LABVIEW 2012 SP1 x86 on Windows 7 x64. cFP, cRIO, PXI-RT
Message 12 of 22

@for(imstuck),

 

I use Compact FieldPoint, RT PXI, and Windows, and, when the postman arrives, hopefully CompactRIO.

 

Our code is used on all of these platforms and, in the case of cFP, needs to be written in a way that is respectful of execution time.

 

It is not such an issue when using the brute force of VxWorks on a dual-core PXI-8108, but on a cFP-2100 every execution cycle counts.

I recently learned how to overload a cFP processor, making it "forget" to service the UART driver and lose important comms data.

 

@Nathan,

Thanks for the suggestion.

I am reluctant to base my architectural choices on experimentation and would prefer to use theory before empirical methods.

I have done my share of benchmarking but tend only to use it when I can't find information/documentation.

I use four different operating systems and three versions of LabVIEW; benchmarking can consume a lot of time and still not give a definitive answer.

 

iTm - Senior Systems Engineer
uses: LABVIEW 2012 SP1 x86 on Windows 7 x64. cFP, cRIO, PXI-RT
Message 13 of 22

I have actually benchmarked all these methods.  Unfortunately, I cannot find my actual benchmark.  Results were about as expected.

 

  1. Queues are fastest if they do not have to allocate memory (and they do allocate memory the first time through, and in subsequent times when needed.  This is done in an intelligent fashion).  But they have a lot of jitter due to said possible memory allocation.
  2. RT-FIFOs are next, and have the lowest jitter.
  3. User events are last, slower by at least an order of magnitude.

Note that all this benchmarking was done on a desktop machine, and I know that things change from platform to platform.

 

In general, I use queues for point-to-point communication and user events for broadcast communications.  I would use RT FIFOs on an RT platform if I needed the determinism.
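As a side note on how the jitter mentioned in point 1 shows up in a benchmark, here is a hypothetical sketch (Python, not the LabVIEW benchmark described above): timing every individual enqueue and looking at the worst case rather than the average is what exposes occasional allocation spikes.

import time
import queue

q = queue.Queue()
latencies = []
for i in range(100_000):
    t0 = time.perf_counter()
    q.put(i)                         # the operation under test
    latencies.append(time.perf_counter() - t0)

latencies.sort()
print("median enqueue time    :", latencies[len(latencies) // 2])
print("worst-case enqueue time:", latencies[-1])   # jitter lives here, not in the median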

Message 14 of 22

DFGray wrote:
  1. Queues are fastest if they do not have to allocate memory (and they do allocate memory the first time through, and in subsequent times when needed.  This is done in an intelligent fashion).  But they have a lot of jitter due to said possible memory allocation.


Is that still true if you specify the queue size?

=====================
LabVIEW 2012


Message 15 of 22

If you specify the queue size, it removes most of the jitter from using a queue.  You take a one-time hit when the queue is created, but nothing is allocated when you actually use the queue.

Message 16 of 22

@Timmar

 

Unfortunately there is no "pro" search page.  We search the same database you have access to on ni.com.  My "pro" tip would be to search for KnowledgeBase articles only and scan through the results.  To find this page, search for "rt fifo" and then on the left-hand side of the page select show me "KnowledgeBase".  The article I linked should be about the 4th one down.  From my experience the best three searches for these types of things are "KnowledgeBase", "Tutorials", and "Examples."

Applications Engineer
National Instruments
Message 17 of 22

@DFGray wrote:

If you specify the queue size, it removes most of the jitter from using a queue.  You take a one-time hit when the queue is created, but nothing is allocated when you actually use the queue.


Does it actually do the allocation at the time of queue creation?  I was under the perhaps-mistaken impression that queues grow as elements are added, and that setting the queue size simply sets an upper bound on how much it can grow.  I think I'd seen examples where it was suggested to pre-fill and then flush a queue when a program starts in order to make sure the queue was fully allocated prior to using it for any real work, although I couldn't find such an example in a quick search.

Message 18 of 22

I do not know for sure, and it could have changed since the last time I looked at it.  I would suggest you benchmark it.  Use a flat sequence with every other frame populated with <vi.lib>\Utility\High Resolution Relative Seconds.vi.
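For anyone not on LabVIEW, the flat-sequence pattern described above amounts to sandwiching the operation under test between two high-resolution timestamps; a rough text equivalent (Python's time.perf_counter() standing in for High Resolution Relative Seconds.vi, and an enqueue as an arbitrary example operation) would be:

import time
import queue

q = queue.Queue(maxsize=1000)

# frame 1: timestamp | frame 2: operation under test | frame 3: timestamp
t0 = time.perf_counter()
q.put(123)                           # the operation being benchmarked
t1 = time.perf_counter()
print("elapsed seconds:", t1 - t0)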

Message 19 of 22

I don't have that VI, but with the code below, I'm getting consistent results that suggest that setting the queue size does not cause that number of elements to be allocated.  Enqueueing 1 million elements initially takes about 140ms; after flushing the queue, enqueuing the same elements takes only about 115ms.  To me, this looks like space in the queue is not allocated until it is needed.

[attached image: queue allocation.png]
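The attachment itself cannot be reproduced in text, but the experiment it performs can be sketched. This is a hypothetical Python analogue (queue.Queue in place of a LabVIEW queue, so the absolute times will not match the 140 ms / 115 ms figures above): fill a size-limited queue, time it, flush, then fill and time it again.

import time
import queue

N = 1_000_000
q = queue.Queue(maxsize=N)           # "queue size" specified at creation

def timed_fill(q, n):
    # Enqueue n elements and return how long it took.
    start = time.perf_counter()
    for i in range(n):
        q.put(i)
    return time.perf_counter() - start

first = timed_fill(q, N)             # first pass: any on-demand allocation happens here
while not q.empty():                 # flush the queue
    q.get_nowait()
second = timed_fill(q, N)            # second pass: storage is already in place if it was kept

print(f"first fill:  {first:.3f} s")
print(f"second fill: {second:.3f} s")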

Message 20 of 22