LabVIEW

parallel instances using ONE Action Engine

Solved!

Hi Everyone.

 

I just wanted to see what ideas people might have about this issue.

 

I have an AE that holds all the pointers (or references) that 20 instances of a reentrant VI use in parallel. Obviously the issue is execution speed, as all the instances are waiting to read the AE (and probably a heap of other issues I haven't encountered yet).

 

I have placed an example of the AE that I am using, where I also show HOW I'm using it in the bigger picture.

 

Please DO NOT RUN the example; just open it and look at the BD. That is where I tried to put all the information, and it was the best way I could think of to explain the issue.

 

Kas

Message 1 of 10

I thought it may be better to upload an image instead.

Message 2 of 10

Frankly, even after reading it several times and looking at the code, I still don't understand the actual requirements and behavior of the system, nor what your actual question is. It's unclear whether you have an actual problem with the implementation and want to solve it, or whether you just think you might have problems.

Also, I'm not sure, but I think your analogy might be broken: the bank in your case should be not the AE, but the whole combo of the VIs and the AE, where the VIs are clerks and the AE is some shared resource. But that's hard to say (for example, clerks in a bank don't need to wait for each other before going home). Perhaps more actual details would help in understanding.

 

In general I can say that an appropriate use for an AE is when you have a shared resource whose data needs protecting, and in general you would want the AE spending as little time as possible doing its thing, so that it can be available for the next caller. If you have lengthy operations, I would definitely consider other options. For instance, instead of having an array of segments, you could have an array of DVRs to segments, and have the AE just manage these DVRs. Or you could have actors which manage this. Without more details it's hard to say.
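The DVR idea can be illustrated with a minimal text-language sketch (Python here, since LabVIEW code is graphical; the `SegmentManager` name and its methods are hypothetical). The manager's critical section only hands out and takes back references, so the lengthy work on a segment happens outside the lock and never copies the payload:

```python
import threading

class SegmentManager:
    """AE analogue that stores references to segments (the DVR idea),
    not the segment data itself, so callers never copy the payload."""
    def __init__(self, segments):
        self._lock = threading.Lock()
        self._refs = list(segments)   # references to mutable segment objects

    def checkout(self):
        # Keep the critical section tiny: just pop a reference.
        with self._lock:
            return self._refs.pop() if self._refs else None

    def checkin(self, seg):
        # Return a reference to the pool; again, no data copy.
        with self._lock:
            self._refs.append(seg)
```

Because the lock is held only for the pop/append, other callers are not serialized behind whatever lengthy processing is done on a checked-out segment.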

 

 


___________________
Try to take over the world!
Message 3 of 10

I am not sure exactly what you need to do either. My thought on reading your post was that there is likely a better way.

 

Questions and comments:

 

1. Are items 1 and 2 in Example.vi or the image guaranteed to run before 3 and 4 start? Or can segments be added while the processing is occurring?

2. How do the changes made by item 4 affect the data not yet processed by item 3?  How do they affect segments already processed?

3. The only value of FGV.vi's Segment: Out that is meaningful is the last one. The compiler is probably smart enough not to update it on earlier iterations, but there might be a small speed improvement from moving the terminal outside the for loop.

4. When are the Get Segments and Get Items commands used?

5. Could a parallelized for loop do what you want rather than multiple reentrant VIs?

 

Please tell us more about the overall requirement for the program.

 

Lynn

Message 4 of 10

While I try not to be dismissive of anything Yair or Lynn post...

 

Did you preallocate the clone pool?


"Should be" isn't "Is" -Jay
Message 5 of 10

@tst wrote:

In general I can say that an appropriate use for an AE is when you have a shared resource that needs to protect the shared data and in general, you would want the AE spending as little time as possible doing its thing, so that it can be available for the next caller. If you have lengthy operations, I would definitely consider other options. for instance, instead of having an array of segments, you could have an array of DVRs to segments, and have the AE just manage these DVRs. Or you could have actors which manage this. Without more details it's hard to say. 


Thanks tst, didn't think of that. I have to do some benchmarks on the two methods and see.

Message 6 of 10

@johnsold wrote:

1. Are items 1 and 2 in Example.vi or the image guaranteed to run before 3 and 4 start? Or can segments be added while the processing is occurring?

2. How do the changes made by item 4 affect the data not yet processed by item 3?  How do they affect segments already processed?

3. The only value on FGV.vi Segment: Out which is meaningful is the last one. The compiler is probably smart enough to not update it on earlier iterations but there might be a small speed improvement to moving the terminal outside the for loop.

4. When are the Get Segments and Get Items commands used?

5. Could a parallelized for loop do what you want rather than multiple reentrant VIs?


Ok, sorry for not properly explaining things. My explanation of my problem made sense in my head, so I thought everyone else would get it. :P

 

1. Items 1 and 2 are sequential, so they are guaranteed to run before 3 and 4. However, when 3 (the reentrant VI) executes and suddenly gets an error (i.e. TCP timeout 56), then the segment function that was initially changed to "in process" when taken out (before the timeout happened) is set back to "waiting", so that until the error gets sorted, another reentrant can get the segment and do something with it. Also, if the reentrant in step 3 executes with no errors, then the output data is sent to item 4, which sets the function of the segment in the AE to either "Damaged" or "Missing". This happens while all the other reentrant VIs (in item 3) are still running.

 

2. Item 4 will ONLY make changes to the function of a segment AFTER item 3 has done something to it, and not before. And NO, after step 2 in my description, no segments can be ADDED or DELETED; only the segment function changes. Based on this function, the segment is read. So, if the function says "waiting" then the data is sent out; otherwise, the reentrant VI thinks that all the segments are processed and so it exits (I do this through a "5003" error inside the AE).

 

3. Ummm... maybe the compiler does what you say it does (no idea here), but I doubt it would give me an improvement that makes a difference.

 

4. They are used in my step 3: "Get Segment" is used ONCE (if no error) or TWICE (if error), and "Get Items" is used ONCE (if no error).

 

5. It won't make a difference, since the "choke point" is the AE; how the VIs are launched doesn't change that.
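The segment lifecycle described in points 1 and 2 can be sketched as a small state tracker (Python as a stand-in for the graphical AE; `SegmentStore` and its method names are hypothetical, and returning the index or the 5003 sentinel from one function is a simplification):

```python
import threading

WAITING, IN_PROCESS, DAMAGED, MISSING = "waiting", "in process", "damaged", "missing"
NO_SEGMENTS = 5003   # the error code Kas uses to tell a worker to exit

class SegmentStore:
    """Tracks only each segment's 'function' (status), as described above."""
    def __init__(self, n):
        self._lock = threading.Lock()
        self._status = [WAITING] * n

    def get_segment(self):
        """Hand out the index of the next waiting segment, marking it
        in-process; return NO_SEGMENTS when nothing is waiting."""
        with self._lock:
            for i, s in enumerate(self._status):
                if s == WAITING:
                    self._status[i] = IN_PROCESS
                    return i
            return NO_SEGMENTS

    def release_on_error(self, i):
        """After e.g. TCP timeout 56: back to waiting so another worker retries."""
        with self._lock:
            self._status[i] = WAITING

    def set_result(self, i, damaged):
        """Item 4's job: mark a processed segment damaged or missing."""
        with self._lock:
            self._status[i] = DAMAGED if damaged else MISSING
```

Note that `get_segment` scans the whole status array under the lock; with many segments that scan itself becomes part of the choke point.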

 

Kas

Message 7 of 10

@JÞB wrote:

Did you preallocate the clone pool?


Yes, all the VIs within the reentrant VI in step 3 are preallocated, APART FROM the AE, for the obvious reason that it isn't reentrant. But I had initially thought the execution speed of the AE would be very fast, so I wouldn't really see a difference.

 

I'm not really into OO (well, that's due to my lack of experience with it), but I was wondering whether it would make a difference to use the basic form of classes and methods to somehow do what the AE is doing. I'm not sure that would improve anything, since now I'm worried that the class would become the choke point instead of the AE. I don't really know how the data within classes behaves with reentrant VIs executed in parallel, constantly reading and writing to it.

 

Kas

Message 8 of 10

OK, so let's see if now I understand it properly.

 

  1. You have N segments, which are pieces of data which need to be processed in multiple steps. You want to do this as fast as possible.
  2. You can have M workers (apparently 20 and less than N) to do this.
  3. A worker gets a segment, works on it and succeeds or fails.
  4. If it fails, it returns the segment, so that another worker can do something else on it and the action it wanted to do can be retried later (?)
  5. If it succeeds, the results are passed to 4, which is supposed to approve the results and set the next function that needs to be done on the segment.
  6. Once all the segments have had all their functions performed successfully, the system is finished.

If this description is correct (it doesn't seem to match everything you wrote exactly, but there are some things which are still unclear), then I would suggest that what you need is a more intelligent segment manager and maybe a system which relies on messaging, so that things mostly work on their own (I'm still not sure if you actually have a performance problem and where it is). One option, off the top of my head:

 

  1. Each segment is an object, and it includes its data, as well as its status (i.e. which functions it already did). This could also be a cluster, or the manager can hold the status info separately.
  2. The manager gives each worker a DVR to the segment, which includes the current function, and remembers who has what.
  3. When a worker is done, it reports to the manager (queue, event, whatever).
  4. The manager sends the DVR to 4 for analysis (again, using whichever messaging option you prefer).
  5. 4 returns it to the manager and the manager decides what to do next.
  6. When the manager decides that everything is done, it tells everyone to stop.

Another alternative to step 2 here is that all the workers take tasks off a single queue, but then it might be more difficult for the manager to track who's doing what.
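The single-queue variant of this manager/worker design can be sketched in a few lines (Python standing in for LabVIEW queues and clones; `worker`, `run_manager`, and the doubling step are all hypothetical stand-ins for the real per-segment work):

```python
import queue
import threading

def worker(tasks, results):
    # Each worker takes segment references off one shared queue
    # and reports back to the manager when done.
    while True:
        seg = tasks.get()
        if seg is None:        # manager's stop signal (step 6)
            break
        seg["value"] *= 2      # stand-in for the real per-segment work
        results.put(seg)       # report back; same object, no data copy

def run_manager(segments, n_workers=4):
    tasks, results = queue.Queue(), queue.Queue()
    workers = [threading.Thread(target=worker, args=(tasks, results))
               for _ in range(n_workers)]
    for w in workers:
        w.start()
    for seg in segments:       # hand out work (references, not copies)
        tasks.put(seg)
    done = [results.get() for _ in segments]   # collect completion reports
    for _ in workers:
        tasks.put(None)        # tell everyone to stop
    for w in workers:
        w.join()
    return done

segs = [{"id": i, "value": i} for i in range(8)]
processed = run_manager(segs)
```

As noted, with one shared queue the manager no longer knows which worker holds which segment; tracking that would need the per-worker hand-off described in step 2.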


___________________
Try to take over the world!
Message 9 of 10
Solution accepted by zerotolerance

A couple of thoughts:

- You may be making copies of the segment each time you add or start processing one. If the segments are large, this could be time consuming. As others have suggested, a queue or array of DVRs might let you hand off the data without making a copy. Also, storing the segments as a large array is not necessarily efficient. How many segments are there? If you have to scan through a large array to find the next available one, that could be slow too.

- How did you decide on 20 parallel instances? That could be too many: the computer may be spending too much time swapping between them, or you might find that, because they're blocked on the action engine, only a few are ever actually running at a time.

- Consider inverting your model. Instead of having many parallel instances each request a chunk of data, have the action engine call the parallel workers, handing each one the next available segment of data.
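The "don't scan a large array" point in the first bullet can be made concrete with a tiny sketch (Python as a stand-in; `WaitingPool` is a hypothetical name): instead of searching the status array for the next "waiting" segment on every call, keep the waiting indices in a queue so hand-out is O(1) and a failed segment is simply pushed back.

```python
from collections import deque

class WaitingPool:
    """Keep waiting segment indices in a deque so handing out the next
    segment is O(1), instead of scanning a large status array each time."""
    def __init__(self, n):
        self._waiting = deque(range(n))

    def checkout(self):
        # Next available segment index, or None when the pool is drained.
        return self._waiting.popleft() if self._waiting else None

    def requeue(self, i):
        # On error, make the segment available to another worker again.
        self._waiting.append(i)
```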

Message 10 of 10