Actor Framework Discussions


Can AF handle fast data sampling rate measurement system?

That is my confusion. I am new to the AF structure, but it fascinates me: it is OOP, very convenient to code, and very capable for control systems. However, I worry about whether it is suitable for a data acquisition measurement system when the data rate is very fast. A pipeline is also usually needed.

Message 1 of 9

I recently did a project that sampled two DAQ cards at 1 MS/s each for days on end, and it was fine. I didn't store all of that data, but I had 16 channels on each card and needed to sample each at a fairly high rate to detect component failures at high temporal resolution.

Message 2 of 9

Actual data sampling rates aren't the issue with AF. It's the message rate. Analog data is nearly always read in large chunks, say 100 or 1000 samples. It's the "chunk rate", how fast you want to send those chunks of data to another process, that is the challenge. I send large sample sets at a 100 Hz chunk rate (my sample rates vary from 5 to 100,000 Hz). Hence direct Q's. I scale, tare, and perform redlines on every point of data on hundreds of channels. Each of those is a unique actor, and I can get the data down the pipeline quickly because of the direct Q's. I have other processes that I can inject in the pipeline if needed (hence my OO design with AF, giving each process a single responsibility).
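LabVIEW diagrams can't be shown inline here, so as a rough illustration of the sample-rate vs. chunk-rate distinction described above, here is a minimal Python sketch. The rates, chunk size, and the averaging step are invented for the example, not taken from the post:

```python
import queue
import threading

# Hypothetical numbers: 100 kS/s sample rate, 1000-sample chunks,
# so the messaging layer only sees 100 messages per second.
SAMPLE_RATE = 100_000
CHUNK_SIZE = 1_000
N_CHUNKS = 50

def producer(q):
    # Stand-in for a DAQ read: each message carries a whole chunk,
    # so the queue sees the chunk rate, not the sample rate.
    for i in range(N_CHUNKS):
        chunk = [float(s) for s in range(i * CHUNK_SIZE, (i + 1) * CHUNK_SIZE)]
        q.put(chunk)
    q.put(None)  # sentinel: no more data

def consumer(q, out):
    while True:
        chunk = q.get()
        if chunk is None:
            break
        # Per-point work (scale/tare/redline in the post) would happen here;
        # a simple mean stands in for it.
        out.append(sum(chunk) / len(chunk))

q = queue.Queue()
results = []
t = threading.Thread(target=producer, args=(q,))
t.start()
consumer(q, results)
t.join()

print(f"message rate: {SAMPLE_RATE / CHUNK_SIZE:.0f} msgs/s for {SAMPLE_RATE} S/s")
print(f"chunks processed: {len(results)}")
```

The point of the sketch is only that batching turns a 100,000 S/s acquisition into a 100 msg/s load on the messaging layer, which is well within what an actor queue handles comfortably.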

Message 3 of 9

@BertMcMahan wrote:

I recently did a project that sampled two DAQ cards at 1 MS/s each for days on end, and it was fine. I didn't store all of that data, but I had 16 channels on each card and needed to sample each at a fairly high rate to detect component failures at high temporal resolution.

That is nice. It seems AF works for fast-data-rate measurement systems. Still, compared with a simple queued message handler structure, the AF enqueue takes a few more steps, so I wonder whether AF will decrease the rate a little.

Message 4 of 9

@wrkcrw00 wrote:

Actual data sampling rates aren't the issue with AF. It's the message rate. Analog data is nearly always read in large chunks, say 100 or 1000 samples. It's the "chunk rate", how fast you want to send those chunks of data to another process, that is the challenge. I send large sample sets at a 100 Hz chunk rate (my sample rates vary from 5 to 100,000 Hz). Hence direct Q's. I scale, tare, and perform redlines on every point of data on hundreds of channels. Each of those is a unique actor, and I can get the data down the pipeline quickly because of the direct Q's. I have other processes that I can inject in the pipeline if needed (hence my OO design with AF, giving each process a single responsibility).


Yeah, if you use direct Q's, that will be no problem.

Message 5 of 9

There is one gotcha with AF: the AF library allocates queues with unlimited size. This is a known memory leaker, and NI recommends queues be allocated with fixed size to reduce dynamic memory allocation, which reduces memory leaks and memory fragmentation. Both these issues cause LV code to terminate unexpectedly; they manifest themselves when code is run for a long period of time.

Message 6 of 9

DO NOT ATTEMPT TO USE A BOUNDED QUEUE IN ANY MESSAGING ARCHITECTURE. This is very bad advice. Details below.

David, I'm not faulting you for trying to be helpful, but I do need to correct the record. Your information is almost right, but that "almost" causes problems in messaging architectures. These rumors circulate in the community from time to time, and I need to quash them when I see them. 🙂 Bear with me, please.

I created the AF with help from multiple CLAs because so many people, including those CLAs, attempted to build their own messaging architectures and hit serious roadblocks, one of which is queue bounding. Bounded queues are a serious flaw that can deadlock your applications, and the AF omits the option for a reason. There are lots of discussions in the forum about throttling messages and the right way to handle it. Bounded queues are not the answer.

@David_Fanelli wrote:

the AF library allocates queues with unlimited size. This is a known memory leaker,


It is a known memory allocation, not a memory leak. Leaks are serious bugs that mean the program has lost track of a pointer. Please do not panic people by using the wrong term. The allocation is done at the request of your program, and even if the producer gets ahead by a lot, the consumer will drain the queue when it catches up. It is not a leak.

@David_Fanelli wrote:

and NI recommends queues be allocated with fixed size to reduce dynamic memory allocation,


I do not. I am Aristos Queue, the developer and maintainer of all the queue primitives. I don't think I've ever given that recommendation, at least not with such a blanket statement. I also wrote the AF.

I'll deal with the issues of unbounded queues generally first, then talk about the open-ended queues in the AF specifically second.


@David_Fanelli wrote:

memory fragmentation.


First, unbounded queues cannot cause fragmentation on desktop systems. Second, fragmentation (where an allocation fails because free memory is spaced out) is only an issue on some RT targets. Linux RT uses virtual memory, so you won't get a failure to allocate on Linux RT, because it can do memory compaction. You may cause a determinism hiccup when the compaction happens, and with virtual page swapping, but it is not going to bring down your system with an allocation failure when you should have plenty of available memory.

Modern hardware targets have lots more memory space, enough such that fragmentation is significantly less of an issue. Most users who use an unbounded queue aren't going to have any problems.

So, yes, fragmentation is a potential issue, but it is not common, and advice not to use unbounded queues should be narrowly tailored to the relatively specialized set of customers who might be affected.


@David_Fanelli wrote:

memory ~~leaks~~ allocation


If your application needs a large queue, then it needs a large queue. Allocating a large block of memory is not a sign of a poorly designed application. There are plenty of systems where producers can burst ahead of consumers, and the consumers catch up eventually. That's pretty much the definition of most science apps: a giant amount of data comes in quickly, then it gets churned over slowly by analysis.

You should use a bounded queue to throttle a producer process when that makes sense for the particular process. It is not something that can be decided in general. For most applications, the unbounded queue is the better choice, as it provides the highest performance and allows for burstiness in the producer. That's why unbounded is the default for the queue primitives. If bounded were the better choice, we would have made "size" a required terminal so that people would have to opt in to unboundedness.
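The burst-then-catch-up pattern described above can be sketched in a few lines of Python (this is an illustration of the concept, not AF code; the burst size is made up). The queue's depth peaks at the burst size and then comes back down to zero once the consumer drains it, which is exactly why this is an allocation, not a leak:

```python
import queue

# Unbounded by default, like the LabVIEW queue primitives.
q = queue.Queue()

# Burst: the producer enqueues 10,000 items before the consumer runs at all.
for i in range(10_000):
    q.put(i)
peak_depth = q.qsize()  # memory use peaks here...

# ...then the consumer catches up and drains everything.
consumed = 0
while not q.empty():
    q.get()
    consumed += 1

print(peak_depth, consumed, q.qsize())
```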

 

Now, let's talk about the queues in the AF specifically...


@David_Fanelli wrote:

There is one ~~gotcha~~ feature with AF: the AF library allocates queues with unlimited size.


Fixed that for you. The unlimited queues are critical to avoiding deadlocks. IF YOU USE FIXED-SIZE QUEUES IN A MESSAGING ARCHITECTURE, YOU RAISE THE PROBABILITY OF DEADLOCK. DO NOT DO THAT.

Consider a caller actor that tries to send to a nested actor at the same time the nested actor is sending to the caller, a very common scenario (possibly the most common scenario). If both of them have bounded queues and those queues fill up, then both enqueues will block, each waiting for the other to dequeue. But since neither can proceed past its enqueue, you hang. That's a deadlock. And in a command pattern where every command from caller to nested produces a set of message responses, you get exactly this: the caller floods the nested actor with a stream of requests, and each dequeue in the nested actor sends a flood of responses back up to the caller. Eventually, both queues saturate.
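The deadlock condition above can be demonstrated in a hedged Python sketch (hypothetical actors, not AF code). A real system would hang on the blocking puts, so this demo uses put_nowait() to report the condition instead of actually deadlocking:

```python
import queue

# Caller -> nested and nested -> caller each use a BOUNDED queue of size 2.
# Once both are full, each side's blocking enqueue would wait forever on the
# other side to dequeue, and neither side can, so the system hangs.
to_nested = queue.Queue(maxsize=2)
to_caller = queue.Queue(maxsize=2)

# Both sides flood each other until both queues saturate.
for msg in ("cmd1", "cmd2"):
    to_nested.put(msg)
for msg in ("reply1", "reply2"):
    to_caller.put(msg)

def would_block(q, msg):
    # In a real app this would be a blocking put(); here we just test it.
    try:
        q.put_nowait(msg)
        return False
    except queue.Full:
        return True

caller_stuck = would_block(to_nested, "cmd3")    # caller's next send blocks
nested_stuck = would_block(to_caller, "reply3")  # nested's next send blocks
print("deadlock condition:", caller_stuck and nested_stuck)
```

With unbounded queues, both puts always succeed immediately, so the cycle never forms; that is the property the AF relies on.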

 

It can happen in other ways, too. Suppose you have a caller that uses a Reply message, and you think you're safe because only one side of the communication uses Reply messages. But if the queues are bounded, you can have a nested actor trying to send a message to the caller (e.g., "I detected an error") while the caller is trying to send a Reply message to the nested actor. If the caller's queue is full, both will block.

Are these scenarios common? No. Are they rare? Also no. For the architecture to protect against deadlocks, you need unbounded queues.

My advice has been that if the AF queue blows up your memory, the problem is most likely not in the queue but in your actor design. And I don't mean, "Oh, you need to work around this design flaw in the AF." I mean that whatever the actor is doing is causing more problems than just filling a queue. It's probably stalling your UI or something. Or you really do need that much space in the queue because of how much data you're producing. Maybe you're like CERN, producing a petabyte of data in a millisecond. In that case, move to 64-bit and buy more RAM. But don't try to bound the queue.

I hope this clarifies the rules about unbounded queues and why AF uses them.

Message 7 of 9

@champion2019 wrote:
That is nice. It seems AF works for fast-data-rate measurement systems. Still, compared with a simple queued message handler structure, the AF enqueue takes a few more steps, so I wonder whether AF will decrease the rate a little.

There's a minor hiccup on enqueue, but it should be pretty small. It's hard to benchmark, but I did a lot of tuning on those VIs for exactly that reason. The biggest overhead is resolving the data value refnum in "vi.lib\ActorFramework\Message Priority Queue\Priority Enqueue.vi". Other than that, enqueue speeds should be pretty much the same as the raw queue primitives. The dequeue has all the overhead of prioritization. That design avoids interfering with high-speed bursts of data acquisition: if we used a priority heap or similar data structure, the data would have to be sorted during enqueue, and that takes time.
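The trade-off described above (keep enqueue as cheap as a raw queue, pay the prioritization cost on the dequeue side) can be sketched in Python. This is an invented illustration of the general idea, not the AF's actual implementation:

```python
from collections import deque

class LazyPriorityQueue:
    """Cheap O(1) enqueue; prioritization is resolved only at dequeue."""

    def __init__(self):
        self._items = deque()  # (priority, message); lower number = higher

    def enqueue(self, priority, message):
        # No sorting here: a plain append, so bursty producers aren't slowed.
        self._items.append((priority, message))

    def dequeue(self):
        # The consumer pays for prioritization: scan for the best item.
        best = min(range(len(self._items)), key=lambda i: self._items[i][0])
        self._items.rotate(-best)
        item = self._items.popleft()
        self._items.rotate(best)  # restore the order of the remainder
        return item[1]

q = LazyPriorityQueue()
q.enqueue(2, "normal work")
q.enqueue(0, "emergency stop")
q.enqueue(1, "high priority")
print(q.dequeue())  # -> "emergency stop": highest priority first
```

Note the nice property mentioned below: when the consumer runs faster than the producer, the queue usually holds zero or one item, so the dequeue-side scan costs essentially nothing.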

 

Benchmarks are hard to write well, and this was particularly painful to benchmark reliably, but I believe it is pretty darn close to the raw primitives for enqueue speed, and as close as possible on dequeue, with dequeue adding true zero overhead when the consumer is running faster than the producer.

Message 8 of 9

@champion2019 wrote:

I also worried about whether it is suitable for a data acquisition measurement system if the data rate is very fast? Also, a pipeline is also usually needed.


Hello, champion2019. Now that I've responded to the people who responded to you, I'm finally getting back to your original question. I guess I'm running a stack and not a queue today. 🙂

The latency in the AF priority queue is as small as I can make it. It has been used for lots of data acquisition systems. Whether it is sufficient for *your* system is something only you can answer. First, ask yourself, "Do I write LabVIEW code for CERN on the Large Hadron Collider?" If the answer is "yes," then nothing is ever fast enough. 🙂 If the answer is "no," then AF probably works for you.

Pipelining is an interesting case. AF is very good for your overall software architecture, like separating file writing from the UI from the actual data collection. But for an analysis pipeline, you generally need a chain of actors where the last actor sends the final result back to the first. That contradicts the recommended tree structure of AF. It can be done (and has been done), but I'd recommend looking at channel wires for pipelining specifically. Please look at the shipping examples for the Stream channel, specifically this one: examples\Channels\Stream Rate Conversion\Channel - Rate Conversion.lvproj. It implements a very efficient pipeline with a Stream channel that is easy to read, and stages can be rearranged or inserted as needed.
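Channel wires are graphical, so as a rough textual analogy of a Stream-style pipeline, here is a hedged Python sketch: each stage runs concurrently and hands chunks to the next stage through its own queue, so stages can be inserted or rearranged just by rewiring the connections. The stage functions (scale, tare) are invented placeholders:

```python
import queue
import threading

def stage(fn, inq, outq):
    """Run fn on every item from inq, passing results to outq, in a thread."""
    def run():
        while True:
            x = inq.get()
            if x is None:        # sentinel flows down the pipeline
                outq.put(None)
                break
            outq.put(fn(x))
    t = threading.Thread(target=run)
    t.start()
    return t

q1, q2, q3 = queue.Queue(), queue.Queue(), queue.Queue()
threads = [
    stage(lambda x: x * 2.0, q1, q2),   # stage 1, e.g. scale
    stage(lambda x: x - 1.0, q2, q3),   # stage 2, e.g. tare
]

for sample in [1.0, 2.0, 3.0]:
    q1.put(sample)
q1.put(None)

out = []
while (v := q3.get()) is not None:
    out.append(v)
for t in threads:
    t.join()
print(out)  # -> [1.0, 3.0, 5.0]
```

Adding a third processing step means writing one more function and one more queue; nothing else in the pipeline changes, which mirrors the "easy to rearrange or insert stages" property of the Stream channel example.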

If you are unfamiliar with channel wires, this will get you started:

examples\Channels\Basics\Channel Basics.lvproj

Channels are in LV 2015 and later.

Message 9 of 9