02-28-2018 02:36 PM - edited 02-28-2018 02:37 PM
wiebe@CARYA wrote:
Sadly, ParallelLoop.MaxNumLoopInstances is read once, and then stored internally. Using LV Config Write Numeric (I32).vi changes the ini file, but sadly the old value is used in for loops... So a restart is needed.
Sadly!??? You would be very unhappy if LabVIEW read each configuration token from the ini file every time it needs its value! Even with a super ultra SSD you would be complaining about the long delays between every edit operation!
03-01-2018 02:25 AM
@rolfk wrote:
wiebe@CARYA wrote:
Sadly, ParallelLoop.MaxNumLoopInstances is read once, and then stored internally. Using LV Config Write Numeric (I32).vi changes the ini file, but sadly the old value is used in for loops... So a restart is needed.
Sadly!??? You would be very unhappy if LabVIEW read each configuration token from the ini file every time it needs its value! Even with a super ultra SSD you would be complaining about the long delays between every edit operation!
Those values are not read from disk every time; the ini file is kept in memory, so no disk access would be involved. LV Config Write Numeric (I32).vi changes the value in memory (and then saves the file). It would still be too slow to read the in-memory ini data on every execution, but at the moment it is never re-read at all.
The value is obviously read at some point anyway. So when writing it, the variable that is actually used could be written as well. That would only make things slower when the value is written, not when it is used.
06-23-2020 05:42 PM
I realize this thread is two years old... however, I have a code set that I inherited, and the for loop parallelism was thrown into this application like candy.
In one instance, there are a number of queues being read from and the for loop is set for parallel execution.
My question is this: does this behave like a re-entrant clone? Does the memory manager always use the same clump of memory to run the for loop each time it is run, or does it skip around and just use whatever is available?
The check looking for shift registers and folding would seem to indicate that it is NOT stateful and you cannot always guarantee that the values will "line up". In my instance, there is a large index of channels on these queues, and I suspect that the for loop parallelism is potentially the cause of some of the values on the front panel indicators "jumping around" from time to time.
06-24-2020 02:44 AM
@SHowell-Atec wrote:
I realize this thread is two years old... however, I have a code set that I inherited, and the for loop parallelism was thrown into this application like candy.
In one instance, there are a number of queues being read from and the for loop is set for parallel execution.
My question is this: does this behave like a re-entrant clone? Does the memory manager always use the same clump of memory to run the for loop each time it is run, or does it skip around and just use whatever is available?
The check looking for shift registers and folding would seem to indicate that it is NOT stateful and you cannot always guarantee that the values will "line up". In my instance, there is a large index of channels on these queues, and I suspect that the for loop parallelism is potentially the cause of some of the values on the front panel indicators "jumping around" from time to time.
It seems more likely to me that the bug is a mistake in the program...
Does the bug disappear if you turn the parallel execution off?
Perhaps if you posted the (part of the) code, we could see what's going on.
Whether it uses the same clump of memory or not, the data should still be correct, shouldn't it? Or are you using preallocated clone VIs in the for loop or something? That could get messy, but it should still work when done correctly (by definition 😉)
06-24-2020 04:38 AM - edited 06-24-2020 04:46 AM
It is hard to say in any detail what the problem might be without seeing any code, but if I understand what your code does, there certainly is potential for a race condition.
The ordering of elements in a queue is inherent to the order they are put in; a queue is very much like a FIFO. If you then parallelize a loop in which you dequeue elements from the same queue, the order in which elements get dequeued is definitely not in order of the loop index. The Dequeue function knows nothing about the loop index and simply grabs the next element available, but due to the parallel execution of the loop, the order in which the Dequeue function is called is not in loop iteration order. This is simply a fact and nothing LabVIEW could change for you. It also has nothing to do with reentrant execution of code clumps. A queue is a refnum (like a pointer), and when LabVIEW parallelizes the loop it segments the loop into several chunks that all run in parallel and each do part of the work. While each Dequeue is protected so it does not stomp on the queue while another queue function is accessing it, that doesn't prevent the other code chunks from accessing that queue at the (quasi) same time.
This code shows that the retrieved elements are not in the order they were put in the queue. The last check loop will abort prematurely and give you the first index where the order does not match. You certainly need to go above 1000 elements to get a fairly sure mismatch, but it can easily happen after 10 or so already.
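The LabVIEW snippet referred to above is not reproduced here, so as a rough analogy in a text language (Python, not LabVIEW; names such as demo_parallel_dequeue and NUM_WORKERS are made up for illustration), the sketch below shows the same effect: each dequeue is thread-safe on its own, but nothing couples the element that comes out to the iteration index of the worker that asked for it, so the results end up out of order.

import queue
import threading

NUM_WORKERS = 4        # comparable to the number of parallel loop instances
NUM_ELEMENTS = 10000   # enough elements to make a mismatch very likely

def demo_parallel_dequeue():
    q = queue.Queue()
    for i in range(NUM_ELEMENTS):
        q.put(i)                      # enqueue in order 0, 1, 2, ...

    results = [None] * NUM_ELEMENTS   # one slot per "loop iteration"
    next_slot = {"value": 0}
    slot_lock = threading.Lock()

    def worker():
        while True:
            with slot_lock:
                if next_slot["value"] >= NUM_ELEMENTS:
                    return
                slot = next_slot["value"]   # the iteration this worker handles
                next_slot["value"] += 1
            # The dequeue itself is safe, but nothing ties the element it
            # returns to the iteration index "slot" -- that is the race.
            results[slot] = q.get()

    threads = [threading.Thread(target=worker) for _ in range(NUM_WORKERS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # Report the first slot whose element does not match the enqueue order.
    for slot, value in enumerate(results):
        if value != slot:
            print(f"first out-of-order element at index {slot}: got {value}")
            return
    print("all elements happened to come out in order on this run")

if __name__ == "__main__":
    demo_parallel_dequeue()

Whether and where the first mismatch shows up varies from run to run, which is exactly the non-deterministic behaviour described above.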
LabVIEW could disable loop parallelization as soon as any read or write operation on any refnum object is inside the loop, but that would be pretty restrictive. Order in a queue (or even in a log file, for instance) is not always important, but for you it seems to be, which could be considered a design flaw in this application.
You have two solutions for this:
1) disable parallelism
2) add an explicit element to the queue data type that specifies the order you need (the channel number, for instance) and sort accordingly in the dequeue loop (see the sketch after this list).
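As a hedged sketch of option 2 (again a Python analogy rather than LabVIEW code; produce and consume_in_order are made-up names for illustration), the idea is to let the ordering information travel with each element and restore the order on the consumer side:

import queue

def produce(q, values):
    # Enqueue (index, value) pairs so the original position travels with the data.
    for index, value in enumerate(values):
        q.put((index, value))

def consume_in_order(q, count):
    # Dequeue in whatever order the (parallel) consumers deliver, then place
    # each value into a preallocated array using the transported index.
    results = [None] * count
    for _ in range(count):
        index, value = q.get()
        results[index] = value
    return results

if __name__ == "__main__":
    q = queue.Queue()
    produce(q, ["ch0", "ch1", "ch2", "ch3"])
    print(consume_in_order(q, 4))   # ['ch0', 'ch1', 'ch2', 'ch3']

In LabVIEW terms this corresponds to bundling the channel number into the queue element and using it to index or sort the results after the parallel dequeue loop.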
06-24-2020 05:41 AM
If you use Named Queues, the order itself is of no importance.
If you add some index to the queued data, the order can be handled easily.
A parallel loop of e.g. 4 instances is (basically) the same as 4 copies of the code; how would you handle that?