LabVIEW array performance: Two loops operating on the same massive array


AnthonV wrote:

OK so after reading many threads on the topic (thanks for the references to all who contributed) I'm slowly warming up to the idea of splitting my large array into several smaller parts to overcome the concurrent access issues that you get when using single element Qs or AEs.  As suggested above, as long as different concurrent accessors request different parts of the array (this I can enforce) there should be no blocking calls.

 

Now I need to know from those who know - are single-element queues or AEs preferred?  I will be doing a lot of processing on the data, so it might be easier to control memory copies in an AE, but in some threads queues are said to be many times faster than AEs. An array of queue references also seems simpler, mainly because I have no experience with dynamically spawning multiple instances of an AE.

 

Any thoughts? 

 

 

 

 


Rule of thumb

 

If a Queue will meet the requirements, use the Queue. It is capable of transferring its data "in-place".

 

 

Spawning multiple AEs is generally done using VI Server Call By Reference, and this mandates a buffer copy to get the data in and out.
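Ben's rule of thumb can be sketched outside LabVIEW. The following Python analogy (my own construction, not from this thread) contrasts handing data over by reference, as a single-element queue does, with copying it across a call boundary, as a Call By Reference into a spawned clone must:

```python
import copy
import queue

big = [0.0] * 100_000            # stand-in for a large DBL array

# Single-element queue: enqueue/dequeue hands over the same object,
# so the data is transferred "in place" with no buffer copy.
q = queue.Queue(maxsize=1)
q.put(big)
same = q.get()                   # the identical buffer comes back out

# Call into a separately spawned instance: the data has to cross the
# call boundary, modelled here as an explicit copy on the way in.
def clone_call(data):
    local = copy.deepcopy(data)  # buffer copy at the call boundary
    local[0] = -1.0
    return local                 # and a distinct buffer on the way out

result = clone_call(big)
```

The queue round trip returns the very same buffer, while the clone call leaves the caller's data untouched and produces a separate copy.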

 

Ben

Retired Senior Automation Systems Architect with Data Science Automation LabVIEW Champion Knight of NI and Prepper LinkedIn Profile YouTube Channel
Message 21 of 30
Great - thanks for all the help so far.
Message 22 of 30

Anthon, 

 

I just ran across this thread and found it particularly interesting.  From digesting the posts, I believe your question can be summed up as: "How can two (or more) functions get access to, and operate on, different portions of a large array while allowing dynamic operations on the array?"

 

The best solution I can think of for this (so far) is to use an AE to store the array of DBL plus a slave array of U8.  (Why the U8, you ask? ;) )

 

Use the U8 array as a bitmap of what is happening to each data point:

xxxx xxx1 = data valid (0 = invalid)

xxxx nnnx = data checked out to fn(n), n = 0-7 (0 = not checked out)

xnnn xxxx = processing by fn(n), n = 0-7, complete (0 = not complete)

1xxx xxxx = data obsolete, all processes done (0 = not done)

 

Yes, this creates another "large" array but, (flourish of trumpets here) it seems highly likely that the size of the DBL array is related to how much you need to buffer to guarantee that all processes needing access to the data complete before it is flushed.  By tracking process completion by array index you should be able to get rid of all the obsolete data points and reduce the average array size to only what is needed at any time (and you can track that size).

 

Some of the implementation may get tricky (such as exactly where to test for validity/obsolescence of the data), but this depends entirely on your application.
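As a rough illustration of the bitmap idea (in Python rather than LabVIEW; the helper names and exact bit assignments are my own reading of the layout above, not something from the thread):

```python
# One status byte per element of the big DBL array:
#   bit 0      = valid flag
#   bits 1-3   = id of the function the point is checked out to
#   bits 4-6   = id of the function whose processing is complete
#   bit 7      = obsolete (all processes done, eligible for flush)

VALID    = 0x01          # xxxx xxx1
OBSOLETE = 0x80          # 1xxx xxxx

def check_out(status: int, fn: int) -> int:
    """Mark a point as checked out to fn (0-7)."""
    return (status & ~0x0E) | ((fn & 0x07) << 1)

def mark_complete(status: int, fn: int) -> int:
    """Record that fn (0-7) has finished with this point."""
    return (status & ~0x70) | ((fn & 0x07) << 4)

status = VALID                    # fresh, valid data point
status = check_out(status, 3)     # fn(3) takes it
status = mark_complete(status, 3) # fn(3) is done with it
status |= OBSOLETE                # all processes done; can be flushed
```

Scanning such a status array lets the AE drop every element whose obsolete bit is set, which is what keeps the average array size down.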

 

 

 


"Should be" isn't "Is" -Jay
Message 23 of 30

Thanks for the idea.  If I understand your suggestion correctly, the only snag is that I would prefer not to 'check out' any data, as this would require copying it - the goal is to do everything in place.  That would be possible if all the functionality (as well as the data) were encapsulated in the AE as additional cases beyond the traditional 'init', 'get' and 'set' - for instance 'segment', 'classify', 'transform', 'calibrate', etc.  The hurdle is that while the AE is busy with one of these algorithms, no other process can get access to the data (one of the very reasons AEs are used in many other scenarios) and is blocked from running.

 

So by splitting the array into several (n) chunks and storing each section in its own AE or queue, n processes can access the array concurrently, albeit in different locations.

 

Logically the array is still seen as one large array, not n individual chunks, so it takes a little bit of effort to keep track of it all.
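The chunking scheme can be sketched in Python, with one lock per chunk standing in for the per-section AE or single-element queue; all names here are illustrative, not from the thread:

```python
import threading

N_CHUNKS = 4
CHUNK = 250
data  = [0.0] * (N_CHUNKS * CHUNK)            # the "one large array"
locks = [threading.Lock() for _ in range(N_CHUNKS)]  # one guard per chunk

def write_range(start, count, value):
    """Translate global indices to chunk indices and lock only the
    chunk that owns each element, so workers on different chunks
    never block each other."""
    for i in range(start, start + count):
        c = i // CHUNK                        # which chunk owns index i
        with locks[c]:
            data[i] = value

# Two workers touching disjoint chunks run without contention.
t1 = threading.Thread(target=write_range, args=(0, CHUNK, 1.0))
t2 = threading.Thread(target=write_range, args=(2 * CHUNK, CHUNK, 2.0))
t1.start(); t2.start(); t1.join(); t2.join()
```

The index arithmetic in `write_range` is the "little bit of effort" mentioned above: callers still think in global indices, and the mapping to chunks stays internal.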

 

 

Message 24 of 30

Hi AnthonV,

 

I think I've managed to use the same wire data in two loops without any data copy.

 

The first loop generates updated data in an array (a sine wave), while the second loop concurrently uses the same data to perform some computation (a 5-element mean, which it displays).

 

Both loops run at separate speeds, and all elements in the array are accessed and modified at some stage.

 

Try it out

Run the multirate.vi with the "Show Front Panels" checkbox on and the "Size" set to 100.

This will display the panels of the two loops in the sub panel containers.

 

Interact with the "Amplitude" and "milliseconds to wait" sliders to change the data and the access rate of the two loops.

 

If all has worked, you will see that the two loops are indeed operating on the same set of data, as changes made in the update loop are visible in the process loop.

 

 

Now, to verify there are no copies of the large array, do the following:

 

Close all the VIs after running with the front panels open (even restart LabVIEW).

Open multirate.vi again.

Open Activity Monitor or Task Manager and note LabVIEW's memory usage.

 

This time, set the dimension size to 100,000,000 and leave "Show front panels" UNCHECKED. This will generate just under 800 MB of data (doubles).

 

Now run the VI.

Check LabVIEW's memory usage again. You should now be using only about 800 MB more than before.

Note that it is important to press the stop button so the memory is released cleanly.
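The view-versus-copy distinction this procedure tests has a rough Python analogue using a memoryview, which shares the underlying buffer instead of duplicating it. This is an analogy only, not LabVIEW's actual in-place mechanism:

```python
buf  = bytearray(8 * 1_000_000)      # ~8 MB stand-in for the DBL array
view = memoryview(buf)               # a second "loop" looking at the data

view[0] = 42                         # a write through the view...
shared = (buf[0] == 42)              # ...lands in the original buffer:
                                     # one allocation, two accessors

copy_ = bytes(buf)                   # an explicit copy: memory doubles,
                                     # which is what the Task Manager
                                     # check above would reveal
```

If the two loops were really copying, the task-manager test above would show roughly twice the 800 MB growth; seeing only one array's worth of memory is the evidence that the buffer is shared.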

 

Attached is an LLB file.

 

I haven't managed to get two updating loops to run without a buffer, but maybe this will help. 


Message 25 of 30

heh..

 

The success of that previous code actually depends on the order of placement and wiring!

 

I replaced a wire and a buffer was created... I removed the buffer by deleting the wire, saving the broken VI, then replacing the wire.

 

Sounds like a possible loophole in the compiler?

 

Some more experiments then... 

 

Message 26 of 30

drclaw, I ran your code and also saw that the updated data is indeed seen by the reading VI - even though the outside loop only iterates once (i.e., it's not that the reading VI is seeing data from a previous iteration of the main while loop).

 

As there is a fork in the wire going to these two VIs, the compiler *should* make a copy of the data unless it is sure that neither VI modifies the data, OR you enforce an order on the accesses by, say, placing the two VIs in a sequence to show the compiler that the read happens before the write.  So I do not think we can depend on this behaviour always being the same...
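The hazard described here is plain aliasing. In Python terms (an analogy, not the LabVIEW compiler's actual bookkeeping), a fork without a copy means two names bound to one buffer:

```python
# The fork: two branches, one buffer. A write through either branch
# is visible through the other, because no copy was inserted.
reader_branch = writer_branch = [0.0] * 10

writer_branch[3] = 1.25          # the "update" loop writes in place
seen = reader_branch[3]          # the "process" loop observes 1.25

# An explicit copy at the fork restores value semantics on one branch:
safe_copy = list(writer_branch)
writer_branch[3] = 2.5           # later writes no longer reach the copy
```

This is why the behaviour drclaw observed is fragile: whether the reader sees old or new data hinges entirely on whether a copy was scheduled at the fork, which the programmer did not explicitly control.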

 

Maybe somebody with more insight will comment on this.

  

Message 27 of 30

Ben, tst - in RT, is it better to use single-element queues or single-element RT FIFOs?  Is the behaviour the same, and are there any performance benefits to using one over the other?

 

Message 28 of 30

drclaw wrote:

 

Sounds like a possible loophole in the compiler?


I would say bug. A buffer is being reused when it should not be. Here's a much simplified example. I will report it.

 

As for RT, I can't comment since I haven't used it in a while. My understanding is that NI currently recommends FIFO shared variables, but if you're using a single element one, it probably doesn't make much difference. I assume the queue has less overhead, but I have no experience with shared variables.


___________________
Try to take over the world!
Message 29 of 30
The example I posted doesn't have this issue when run in the LV 2009 beta, which most likely means that NI found this issue, considered it a bug and fixed it.

___________________
Try to take over the world!
Message 30 of 30