LabVIEW


wire Class into DVR

Solved!

Ah, so you've demonstrated you have a copying problem that a DVR improves, but not that it is a Queue issue necessarily.  

Message 11 of 18

I've found that most of my simple utility-type classes work just fine with the by-value cluster of a class.  Most of my uses have an initialize that does some setup, then other functions that operate on that class, where all the important parts of those functions rely on the data from the initialize.  For those, making things by reference just seems like another layer that isn't required.  If everything that matters comes from the init, just shove it all into the class data to be used later.

 

Yes, there are times when initially the class just needs stuff from the init, and later on I realize that having something be by reference would make the API cleaner by letting callers specify things later.  In cases like that I store the data in a DVR that holds a type-def'd cluster.  But because of the nature of classes and private data, upgrading the class from by value to by reference should still be compatible, assuming I didn't do something crazy like change connector panes.  Even then, I can leave the old VI in the package, just hidden from the palette as a somewhat deprecated function, and replace it on the palette with a new one.

 

I guess I'm just trying to say I default to by value, and use a DVR to become by reference when it's needed.
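Since LabVIEW is graphical, here is a rough text-language analogy in Python of that default (illustrative names throughout, not anything from the thread): by-value state that is copied when the "wire" forks, versus the same state promoted behind a mutable reference wrapper, which is roughly the role the DVR plays.

```python
from dataclasses import dataclass, replace

# By-value style: each operation returns an updated copy, like data
# flowing along a LabVIEW class wire; forking the "wire" forks the state.
@dataclass(frozen=True)
class Counter:
    count: int = 0

    def increment(self) -> "Counter":
        return replace(self, count=self.count + 1)

# By-reference style: a mutable holder (the DVR analogue) shared by all
# callers, so every holder of the reference sees the same updated state.
class CounterRef:
    def __init__(self, value: Counter):
        self.value = value

    def increment(self) -> None:
        self.value = self.value.increment()

a = Counter()
b = a.increment()            # 'a' is untouched: the update produced a copy
ref = CounterRef(Counter())
alias = ref                  # both names refer to the same underlying state
alias.increment()            # the update is visible through 'ref' as well
```

Upgrading from the first style to the second changes only where the state lives, not the operations on it, which is why the API can stay compatible.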

Message 12 of 18

@drjdpowell wrote:

Ah, so you've demonstrated you have a copying problem that a DVR improves, but not that it is a Queue issue necessarily.  


Actually I have - but that's in other threads 6+ months ago.
I won't go into the full details of the months of debugging... but the upshot was:
The issue is that queue memory does not get freed until the queue is released. So if you assign large contiguous blocks of data to a queue, then manipulate part of the data after dequeue, the contiguous block of memory is no longer allocated to the queue, but the queue keeps the same amount of memory allocated until the garbage collector kicks in.
This can be a real PITA.
The DETT showed where the issue was, and an internal tool showed that the RAM allocated to the EXE was ramping up over time due to the queues until the restructure - and it would carry on ramping for 2 days to 2 weeks, depending on execution speed, until the garbage collector kicked in properly.
The DVR inside the queue resolves the issue: large datasets don't get put into queues, so the memory is allocated and freed much more rapidly, and the program settles within an hour.
(Which is kind of important considering that if it locks up we can be streaming at >180 MB/s into the program, and we are processing all of the data in real time - the datasets are big!)
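To make that workaround concrete, here is a hedged Python sketch (the original is LabVIEW, so `queue.Queue`, the list-as-handle, and all names are illustrative stand-ins): enqueue a small handle to the large block rather than the block itself, so the queue's internal storage only ever holds tiny objects and never retains a big buffer after dequeue.

```python
import queue

# Illustrative sketch: the one-element list acts as the "DVR" handle.
# The queue stores only the small handle object, not the 10 MB payload.
q = queue.Queue()

big_block = bytearray(10_000_000)   # stands in for a large contiguous dataset
handle = [big_block]                # cheap reference wrapper around the data

q.put(handle)                       # the queue holds a tiny handle, not 10 MB

received = q.get()
data = received.pop()               # take the data out and drop the reference
# 'data' is the very same object; no copy of the block was ever queued
```

Once the handle is emptied, nothing in the queue machinery keeps the big buffer alive, which mirrors the "allocate and free rapidly" behaviour described above.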

James

CLD; LabVIEW since 8.0, Currently have LabVIEW 2015 SP1, 2018SP1 & 2020 installed
Message 13 of 18

@Hooovahh wrote:

 

I guess I'm just trying to say I default to by value, and use a DVR to become by reference when it's needed.


I guess I'm not using a DVR in the normal use case. I'm actually using it to prevent a memory leak and queues getting bloated.
So I create the DVR, enqueue it, and destroy it straight after dequeue, as I seem to get a more memory-efficient architecture.
So I guess you would probably say I'm using the DVR to pass data by value rather than by reference (since I only ever use a single data value reference with each dataset sent, to avoid race conditions).
It seems to be more memory efficient and to allow better parallel processing. (In all the benchmarking I've done, this holds for arrays over 1000x1000 in size - you have to go smaller than that to see a performance decrease with the functions I'm using.)
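The create/enqueue/destroy-straight-after-dequeue pattern could be sketched like this in Python (illustrative only - the dict stands in for a reference registry, and every name here is made up for the example):

```python
import queue

# One fresh reference per dataset, destroyed right after dequeue:
# the "pass by value via a single-use reference" pattern.
refs = {}          # live references: ref id -> dataset
next_id = 0
q = queue.Queue()

def new_ref(dataset):
    """Create a single-use reference to a dataset (the 'New DVR' step)."""
    global next_id
    refs[next_id] = dataset
    next_id += 1
    return next_id - 1

def delete_ref(ref_id):
    """Destroy the reference and hand back its data (the 'Delete DVR' step)."""
    return refs.pop(ref_id)   # frees the registry slot immediately

for _ in range(3):
    q.put(new_ref(list(range(1000))))   # enqueue the small id, not the data

totals = []
while not q.empty():
    data = delete_ref(q.get())          # destroy the ref straight after dequeue
    totals.append(sum(data))

# 'refs' is empty again: no dataset lingers in the queue or the registry
```

Because each reference is consumed exactly once, there is never a second reader to race against - which is the point made above about avoiding race conditions.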

James

Message 14 of 18

How does using multiple DVRs to the same data prevent race conditions? A single DVR would ensure that there are no parallel write actions to the data. I believe it will allow parallel reads, though. Are you creating/destroying DVRs to the same data set? That seems like it would be an issue. If it is different data, then this approach would work. Another solution would be to queue a class with the DVR to the dataset in the class private data. That would fully encapsulate the data set.



Mark Yedinak
Certified LabVIEW Architect
LabVIEW Champion

"Does anyone know where the love of God goes when the waves turn the minutes to hours?"
Wreck of the Edmund Fitzgerald - Gordon Lightfoot
Message 15 of 18

@James_W wrote:

@drjdpowell wrote:

Ah, so you've demonstrated you have a copying problem that a DVR improves, but not that it is a Queue issue necessarily.  


Actually I have - but that's in other threads 6+ months ago.
I won't go into the full details of the months of debugging... but the upshot was:
The issue is that queue memory does not get freed until the queue is released. So if you assign large contiguous blocks of data to a queue, then manipulate part of the data after dequeue, the contiguous block of memory is no longer allocated to the queue, but the queue keeps the same amount of memory allocated until the garbage collector kicks in.
This can be a real PITA.
The DETT showed where the issue was, and an internal tool showed that the RAM allocated to the EXE was ramping up over time due to the queues until the restructure - and it would carry on ramping for 2 days to 2 weeks, depending on execution speed, until the garbage collector kicked in properly.
The DVR inside the queue resolves the issue: large datasets don't get put into queues, so the memory is allocated and freed much more rapidly, and the program settles within an hour.
(Which is kind of important considering that if it locks up we can be streaming at >180 MB/s into the program, and we are processing all of the data in real time - the datasets are big!)

James


Ah, I see what you mean now.  I experimented with the DETT and some simple test VIs.  I found it wasn't the Queue per se (I got the same results with plain arrays), but rather that, going by value, memory is not deallocated, while with DVRs it is.  LabVIEW must follow one algorithm by value (retain the allocation with the expectation of reusing it) and a different one with the DVR (deallocate).  In your use case (a large buffered build-up of data on startup that isn't repeated), using DVRs makes sense.

Message 16 of 18

@Mark_Yedinak wrote:

How does using multiple DVRs to the same data prevent race conditions? A single DVR would ensure that there are no parallel write actions to the data. I believe it will allow parallel reads, though. Are you creating/destroying DVRs to the same data set? That seems like it would be an issue. If it is different data, then this approach would work. Another solution would be to queue a class with the DVR to the dataset in the class private data. That would fully encapsulate the data set.


The race condition is in the consumers - I want the consumers to all run in parallel as fast as possible, but I don't know which will be the last consumer to finish with the data.
By value, I don't need to worry. If I used a DVR properly, I could create one DVR and have all the consumers read it - but then I would need to know when I had finished processing the data (and even whether a given consumer is meant to be processing it at all). A DVR with multiple parallel reads might be nice, but it is going to be more of a headache because of the race conditions I would get.
In other words, using DVRs properly (and allowing parallel reads) would create the race conditions...

So I split the wires and then create the DVRs (each to its own copy of the same dataset) - as I said, it's about memory management.
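As a rough Python analogy of "split the wires, then create the DVRs" (hypothetical names; explicit deep copies stand in for the copies LabVIEW makes when a wire is forked): each consumer gets its own copy of the dataset behind its own single-use reference, so the consumers run fully in parallel with no shared state and therefore nothing to race on.

```python
import copy
import queue
import threading

# Each consumer gets an independent copy behind its own handle,
# so parallel consumers never touch shared data.
dataset = list(range(100))
consumer_queues = [queue.Queue() for _ in range(3)]

for cq in consumer_queues:
    cq.put([copy.deepcopy(dataset)])   # fork the "wire": one copy per consumer

results = [None] * 3

def consumer(i: int) -> None:
    handle = consumer_queues[i].get()
    data = handle.pop()                # release the reference immediately
    results[i] = sum(data)             # work on this consumer's private copy

threads = [threading.Thread(target=consumer, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

No consumer has to know whether it is the last one to finish: each copy dies with its consumer, which is the trade of extra copies for simpler lifetime management described above.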

Message 17 of 18

@drjdpowell wrote:


Ah, I see what you mean now.  I experimented with the DETT and some simple test VIs.  I found it wasn't the Queue per se (I got the same results with plain arrays), but rather that, going by value, memory is not deallocated, while with DVRs it is.  LabVIEW must follow one algorithm by value (retain the allocation with the expectation of reusing it) and a different one with the DVR (deallocate).  In your use case (a large buffered build-up of data on startup that isn't repeated), using DVRs makes sense.


You found it then. 😉
(and hence my reasoning for throwing a Class down a DVR)

My use case is actually the build-up of buffered data in an acquisition stage, but the size of the acquisition is repeatable but re-configurable.

Message 18 of 18