06-23-2022 08:53 AM
A recent thread here mentioned using several parallel dequeue loops in order to keep up with enqueuing (I think the original thread had an image enqueued every 100 ms, and processing it would sometimes take longer than that).
This interested me and I started experimenting with it, but I began wondering: at what point do you have too many dequeue loops?
I'm up to 12 now and it still works fine, but it seems excessive.
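To make the pattern concrete, here is a rough text-language sketch of what I mean (Python rather than G, since I can't easily paste a block diagram; all the names are just for illustration): one producer enqueues an item every 100 ms, and several consumer loops dequeue from the same queue.

# Rough analogy of the producer/consumer setup described above: one producer
# enqueues an item every 100 ms, several consumer loops dequeue and process.
import queue
import threading
import time

NUM_CONSUMERS = 12            # the "how many dequeue loops?" knob
work_queue = queue.Queue()

def producer(n_items):
    for i in range(n_items):
        work_queue.put(i)     # stands in for Enqueue Element
        time.sleep(0.1)       # a new item every 100 ms
    for _ in range(NUM_CONSUMERS):
        work_queue.put(None)  # one shutdown sentinel per consumer

def consumer(worker_id):
    while True:
        item = work_queue.get()   # stands in for Dequeue Element
        if item is None:
            break
        time.sleep(0.15)          # pretend processing sometimes takes > 100 ms
        print(f"worker {worker_id} finished item {item}")

threads = [threading.Thread(target=consumer, args=(i,)) for i in range(NUM_CONSUMERS)]
for t in threads:
    t.start()
producer(20)
for t in threads:
    t.join()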
06-23-2022 09:15 AM
I guess the answer would be "as many as you need". This can depend greatly on the application and the amount of processing that needs to be done. If you use shared clones, LabVIEW will only create as many as are actually needed and will share the clone instances in memory. So even though you spawned 12 instances, it may not actually have that many in memory.
06-23-2022 09:36 AM
I'll just state that the most I have ever needed was 1 loop for processing data. For anything I have ever needed a Producer/Consumer for, gathering the data was always slower than the processing I had to perform. Even then, I would usually have another bottleneck on the output of the processing loop that would be an issue (I'm specifically thinking about File IO here).
If you go beyond 1 processing loop, you need to do something to manage the order of the processed data coming out of the parallel loops; one approach is sketched below.
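One way to manage that (sketched here in Python purely as an illustration, not LabVIEW code) is to tag every item with a sequence number on the producer side and let a single downstream loop release results only when they are contiguous:

# Illustrative sketch: reorder (seq, data) results that arrive out of order
# from parallel workers, releasing them strictly in sequence.
import heapq

def reorder(results):
    pending = []      # min-heap of (seq, data) pairs not yet releasable
    next_seq = 0
    for seq, data in results:
        heapq.heappush(pending, (seq, data))
        # release everything that is now contiguous with what was already sent on
        while pending and pending[0][0] == next_seq:
            yield heapq.heappop(pending)[1]
            next_seq += 1

# results from parallel workers coming back out of order
out_of_order = [(1, "b"), (0, "a"), (3, "d"), (2, "c")]
print(list(reorder(out_of_order)))   # prints ['a', 'b', 'c', 'd']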
06-23-2022 09:37 AM
You should ask yourself how many real CPU cores you have. Even if you spawn 100 parallel workers, they still need to share the handful of CPU cores, and all you get is more overhead, possibly even interfering with the producer. If disk IO is involved, that part is also serialized.
As a first step, try to streamline the processing. Are you sure it is optimized? Often better code architecture and inplaceness can give you orders of magnitude more breathing room than just duplicating inefficient code. You'd be surprised how much slack is left in typical code. 😄
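If you do parallelize, a sensible starting point is to derive the worker count from the machine rather than picking a number like 12; roughly like this (a Python-flavored sketch, illustrative only, not LabVIEW code):

# Illustrative only: size the number of parallel processing loops from the
# CPU count the OS reports instead of guessing. For CPU-bound work, more
# workers than cores mostly adds scheduling overhead.
import os

cores = os.cpu_count() or 1
num_workers = max(1, cores - 1)   # leave a core free for the producer / UI
print(f"{cores} logical CPUs detected, using {num_workers} processing loops")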
06-24-2022 01:50 AM
@Mark_Yedinak wrote:
I guess the answer would be "as many as you need". This can depend greatly on the application and the amount of processing that needs to be done. If you use shared clones, LabVIEW will only create as many as are actually needed and will share the clone instances in memory. So even though you spawned 12 instances, it may not actually have that many in memory.
Ah, so just having the enqueue and several dequeue loops in the same VI is not that great of an idea.
Could you provide me with a short example of the shared clones for queues? I'm having trouble finding examples online.
06-24-2022 11:13 AM
@AeroSoul wrote:
@Mark_Yedinak wrote:
I guess the answer would be "as many as you need". This can depend greatly on the application and the amount of processing that needs to be done. If you use shared clones, LabVIEW will only create as many as are actually needed and will share the clone instances in memory. So even though you spawned 12 instances, it may not actually have that many in memory.
Ah, so just having the enqueue and several dequeue loops in the same VI is not that great of an idea.
Could you provide me with a short example of the shared clones for queues? I'm having trouble finding examples online.
Here is a basic example. It didn't do exactly what I thought it would, but if you run it at a slow rate, you can see how LabVIEW will reuse the first few shared clones prior to spawning new copies. I had expected LabVIEW to be smart enough not to create the total number of clones, since it could reuse ones previously launched, but over time it did end up spawning the total number that I had specified. I haven't tried a very large number of clones, but it could be interesting to see how it handles that.
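If it helps to reason about the reuse behavior in text-language terms (a loose analogy only, not how LabVIEW actually manages clones), it is similar to a pool that hands work to an idle worker when one is available and only spawns another when every existing worker is busy, growing toward a fixed maximum as load demands:

# Loose analogy (not LabVIEW's clone management): reuse an idle worker when
# one exists, spawn a new one only when all existing workers are busy,
# up to MAX_WORKERS.
import queue
import threading
import time

MAX_WORKERS = 12
tasks = queue.Queue()
workers = []
idle = threading.Semaphore(0)   # one token per worker currently waiting for work

def worker(worker_id):
    while True:
        item = tasks.get()
        if item is None:
            return
        print(f"worker {worker_id} handling item {item}")
        time.sleep(0.2)     # simulated processing
        idle.release()      # done, available again

def submit(item):
    # no idle worker and room to grow -> spawn another one
    if not idle.acquire(blocking=False) and len(workers) < MAX_WORKERS:
        t = threading.Thread(target=worker, args=(len(workers),))
        workers.append(t)
        t.start()
    tasks.put(item)

for i in range(30):
    submit(i)
    time.sleep(0.05)        # items arrive faster than they finish, so the pool grows

for _ in workers:
    tasks.put(None)         # one shutdown sentinel per spawned worker
for t in workers:
    t.join()
print(f"ended up spawning {len(workers)} workers")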