LabVIEW

Clusters as data structures

Hmmm, the queues are definitely fast. I didn't expect them to put shift registers in the shade.

I had a look at your test code and added our new GOOP kernel (GOOP2) for reference. These are the numbers I got (10,000 iterations, P4 2 GHz, 512 MB RAM).

One comment on the test: you are comparing GOOP-style reads/writes with access to globals. In GOOP (both NI-GOOP and GOOP2), when you want to modify data, we lock the data on the read action and unlock it again on the write. The queue example is safe from race conditions in the sense that data is not corrupted on individual reads or writes, but you do not protect the combined read-write scenario. Just for comparison, I added some semaphore calls to see the overhead that protection adds. In each loop you make two calls to modify a double; protecting that with semaphores means 2 calls to Acquire Semaphore and 2 calls to Release Semaphore (see the sketch below the numbers). The result is listed below as Semaphore overhead. Using semaphores for GOOP-style data protection, the queue example would take 500 ms (150 + 350).

A comment on the Shift Register version: you call Release Semaphore on every data write, which adds some overhead, but Acquire is never called, so I couldn't really figure out why the Semaphore was there. The VI is not reentrant, so the Semaphore is not needed to prevent race conditions. Even with the Semaphore call removed, though, the Queue would still be faster.

Datasocket - didn't run this
NI-GOOP - 2794ms
Control References - 3155ms
Configuration VIs - 1532ms
Shift Registers - 601ms
Queue - 150ms
GOOP2 - 991ms
Semaphore overhead - 350ms
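
Just to make that read-modify-write protection concrete in text-language terms, here is a minimal sketch (Python standing in for the block diagram; the class and names are mine, not from the benchmark VIs):

import threading

class ProtectedDouble:
    # GOOP-style protection: the value is locked on the read-for-write
    # and released again on the write.
    def __init__(self, value=0.0):
        self._value = value
        self._sem = threading.Semaphore(1)   # stands in for the LabVIEW semaphore

    def read_for_write(self):
        self._sem.acquire()                  # "Acquire Semaphore" before the read
        return self._value

    def write(self, new_value):
        self._value = new_value
        self._sem.release()                  # "Release Semaphore" after the write

d = ProtectedDouble()
for _ in range(10000):
    # Two modifications per iteration, i.e. 2 Acquires and 2 Releases,
    # which is what the Semaphore overhead number measures.
    x = d.read_for_write()
    d.write(x + 1.0)
    x = d.read_for_write()
    d.write(x * 2.0)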

Thanks for posting the benchmark, Gray! It is very valuable; I liked that you included the Control Refs too.

Again, GOOP is a way to avoid global variables, not a technique for providing them.

Jan
Message 11 of 20
(4,680 Views)
Oops. When I upgraded my test, I added my standard shift-register database, but forgot to add the read-for-write VI. It is separate from the data VI due to deadlock problems. You can't wait in the data VI, since the data VI cannot be reentrant. The attached file contains the fixed VIs. Unzip into the Shift Register directory. Strangely enough, using the same conditions, I got faster speeds when I actually acquired the semaphore like I should have. Once again, Pentium 4, 2.4GHz HT, 512MBytes RAM, 10,000 iterations.

Shift Register - 462ms

This is probably because the semaphore no longer has to run through a lot of error-handling code, as it did before when it attempted to release a flag that was never acquired.

I agree that locking data when reading preparatory to writing is very important to prevent race conditions. The queue methodology does this for you if you follow two simple guidelines. First, the queue size has to be set to one element. Second, every operation on the data starts with a dequeue. An isolated read-for-write is then a simple dequeue, which leaves the queue empty. If another part of the program wants to access the data, it also starts with a dequeue; since the queue is empty, it waits until the write enqueues the data again. I find this approach simpler and more elegant than using semaphores. The fact that it is much faster and its memory performance is superior is even better.
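
In text-language terms the pattern looks roughly like this (a Python sketch using a size-1 queue; the names are mine, not taken from the posted VIs):

import queue

# The shared data lives in a single-element queue; putting the initial
# value in corresponds to creating the queue and enqueueing the data once.
data_q = queue.Queue(maxsize=1)
data_q.put(0.0)

def modify(update):
    # Read-for-write: dequeue the single element.  The queue is now empty,
    # so any other caller blocks on its own get() until we put the data back.
    value = data_q.get()
    try:
        value = update(value)
    finally:
        # Write: enqueue the (possibly modified) data, which releases the "lock".
        data_q.put(value)
    return value

modify(lambda v: v + 1.0)   # a read-modify-write that cannot be interleaved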

I would also like to second your comment that GOOP does not replace globals, it allows you to avoid them. Many of the problems LabVIEW developers face would never occur if they used object-oriented design techniques. The databases in the test can all be used as the core for this development. Unfortunately (or fortunately for Endevo), only GOOP2 has inheritance...
Message 12 of 20
(4,670 Views)
Yes, that is a very elegant way to synchronize on the queue! No need for Semaphores (they also add a lot of overhead as you saw).

Jan
Message 13 of 20
(4,666 Views)
It was when queues became polymorphic that they picked up all of that speed. Previously, I had to flatten to string first, and that gave the shift registers the edge.

For simple read-write operations this is all valid.

But...

In many applications I find myself dealing with large data sets where a single channel can have millions of samples.

This data often has to go to different places at different times: log files as I go, the user interface when the user asks for it, filtering and analysis operations, and so on.

This often leads me back to the LV2 on steroids, which lets me keep all of the data in a shift register and operate on it in place for everything else. The pumped-up LV2 tracks what new data has arrived since the last log-file write and returns only that. The analysis and so on is built into the LV2, so the data set is not copied out to be processed.
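
Roughly, in text form, the pattern is something like this (a Python sketch of the action-engine idea; the class and method names are only illustrative, and the in-place aspect is only approximate in Python):

class ChannelStore:
    # One owner of the big data set; every action works on it here
    # instead of copying it out, like the shift register in an LV2 global.
    def __init__(self):
        self._samples = []      # the data held in the "shift register"
        self._logged_upto = 0   # how much has already gone to the log file

    def append(self, new_samples):
        self._samples.extend(new_samples)

    def new_since_last_log(self):
        # Return only the samples that arrived since the last log write.
        start = self._logged_upto
        self._logged_upto = len(self._samples)
        return self._samples[start:]

    def mean(self):
        # Analysis built into the engine so the full data set never leaves it.
        return sum(self._samples) / len(self._samples) if self._samples else 0.0

store = ChannelStore()
store.append([1.0, 2.0, 3.0])
store.new_since_last_log()   # -> [1.0, 2.0, 3.0]
store.append([4.0])
store.new_since_last_log()   # -> [4.0]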

Just my 2 cents,

Ben
Message 14 of 20
(4,638 Views)
One more thought...

The LV2 method has semaphore-like protection built in (sort of, because the VI is non-reentrant), so this is simpler to code.
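
A rough text-language analogue of that built-in protection (Python, names mine): making the engine non-reentrant behaves like wrapping every call in a single lock.

import threading
from functools import wraps

def non_reentrant(func):
    # Only one caller at a time can be inside func, which is roughly the
    # mutual exclusion a non-reentrant VI gives you for free.
    lock = threading.Lock()
    @wraps(func)
    def wrapper(*args, **kwargs):
        with lock:
            return func(*args, **kwargs)
    return wrapper

@non_reentrant
def lv2_action(action, data=None):
    # shift-register-style state updates would go here
    pass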

Ben
Message 15 of 20
(4,636 Views)
Ben,

Dequeueing an element of a queue does not necessarily make a copy, despite what the buffer viewer reports. Try it yourself by single-stepping through the attached VI (LV7.1) with the data size set to something large while watching your OS memory monitor (Task Manager for all you Windows folks). This in-placeness probably also contributes to the speed.

That said, I also use LV2 globals on steroids for the same purpose. It gives you a good "object" with both data and functions in a single, easy-to-maintain format.
Message 16 of 20
(4,623 Views)
Hello,
I was searching the discussion board and came across this thread. I also discovered that the queue (LV 7.0 and greater) has very good performance, and I created the dqGOOP Toolkit to take advantage of it. There is more information available at this link

http://www.dataact.com/dqGOOP.htm

and a FREE download is available from this link

http://www.dataact.com/downloads.htm

Please note that this is a beta release, but I am close to the final release. The GOOP test classes zip file on the downloads page contains the classes that were used for the performance tests behind the performance charts. If there is enough interest, I will create a new thread to discuss dqGOOP.

Brian
Brian Gangloff
DataAct Incorporated
Message 17 of 20
(4,619 Views)
You guys all get stars for such an informative discussion. Looks like I am going to have to start playing with queues as a database.

Chris
Message 18 of 20
(4,593 Views)
DFgray,

Could you save the test VIs in 7.0 format? I'm curious where memory tags in the DSC engine would land in these results.
Message 19 of 20
(4,524 Views)
Here you go. It does not have the GOOP2 extensions since I do not have that on my machine.
Message 20 of 20
(4,506 Views)