
Reliability of SASE (Notifiers)

Yep, a single-element queue may be an alternative.  I switched out the Notifiers in the original code with SEQs.
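
Since G is graphical, here is a rough Python analogue of that SEQ-based SASE pattern, offered only as a sketch; the names (`send_request`, `worker`) are made up for illustration, and `queue.Queue(maxsize=1)` stands in for a LabVIEW single-element queue. The requester creates the one-slot queue, encloses it with the request, and blocks on it for the reply.

```python
import queue
import threading

def worker(requests):
    """Consumer loop: pull (payload, reply_queue) pairs and answer each one."""
    while True:
        payload, reply_q = requests.get()
        if payload is None:            # shutdown sentinel
            break
        reply_q.put(payload * 2)       # put the result into the enclosed "envelope"

def send_request(requests, payload):
    """SASE: create a single-slot reply queue, enclose it, block for the answer."""
    reply_q = queue.Queue(maxsize=1)   # stands in for a LabVIEW single-element queue
    requests.put((payload, reply_q))
    return reply_q.get()               # blocks until the worker replies

requests = queue.Queue()
threading.Thread(target=worker, args=(requests,), daemon=True).start()
print(send_request(requests, 21))      # prints 42
requests.put((None, None))             # stop the worker
```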

CLA
Message 11 of 31
(1,243 Views)

@drjdpowell wrote:

As a tangential question, why are you using Notifiers for this use case?  For single-use "self-addressed stamped envelopes" I use one-element Queues, not Notifiers.  No timestamps, no history.  To wait on multiple replies I just wait on the Queues one after another; I don't poll them.


Same here.

 

But in general, repeatedly creating things in LV and then destroying them will always suck up memory. That pattern is one I avoid unless I know it will be throttled by how fast (slow) a user may be. Letting a CPU loose to repeatedly create/destroy seems like an accident waiting to happen.

 

Ben

 

 

Retired Senior Automation Systems Architect with Data Science Automation | LabVIEW Champion | Knight of NI and Prepper | LinkedIn Profile | YouTube Channel
Message 12 of 31
(1,238 Views)

I am running Q Test.vi and watching the memory consumption using Windows Performance Monitor.  I did edit it by taking out the de-queue timeout (set to -1 instead of 20).  So far no steady growth but I will let it run for a couple hours.

 

Is this how the OP is monitoring memory consumption?

CLA
0 Kudos
Message 13 of 31
(1,233 Views)

3442 MB and "out of memory" messages.......

0 Kudos
Message 14 of 31
(1,191 Views)

@drjdpowell wrote:

As a tangential question, why are you using Notifiers for this use case?  For single-use "self-addressed stamped envelopes" I use one-element Queues, not Notifiers.  No timestamps, no history.  To wait on multiple replies I just wait on the Queues one after another; I don't poll them.


OK, that's interesting, man.

 

This is a difference in implementation I was aware of (I'm sure I knew that at one stage) but have probably long since forgotten.  Yup, I think SEQ is the way to go for my design.  I will make a strong mental note of that and adapt practically ALL of my asynchronous patterns accordingly.

 

This turns out to be quite an important point, all due to an implementation detail of Notifiers and Queues that I had forgotten.  Thanks guys, this Forum still rocks.

Message 15 of 31
(1,190 Views)

Oh, and the reason why I poll them is that in our actual application any one of the responses can take up to 30 minutes to be received.  We may be sweeping the signal of a 6T magnet in a cryogenic environment.  That takes time.

 

I want the software to be responsive in that time.  Hence polling.
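
A hedged Python sketch of that polling approach, purely as an analogue and not LabVIEW code; `handle_ui_events` is a hypothetical stand-in for whatever keeps the loop responsive. Instead of one blocking dequeue with a -1 timeout, the loop waits in short slices so it can keep doing other work while a reply may be half an hour away.

```python
import queue
import time

def handle_ui_events():
    pass    # placeholder for whatever keeps the loop responsive (UI, housekeeping)

def responsive_wait(reply_q, poll_ms=100, timeout_s=1800):
    """Wait for a reply in short slices instead of a single blocking dequeue,
    so the loop stays responsive while a reply may take up to 30 minutes."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            return reply_q.get(timeout=poll_ms / 1000)   # short wait, not -1/forever
        except queue.Empty:
            handle_ui_events()
    raise TimeoutError("no reply within the 30-minute window")
```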

0 Kudos
Message 16 of 31
(1,188 Views)

@Ben wrote:

@drjdpowell wrote:

As a tangential question, why are you using Notifiers for this use case?  For single-use "self-addressed stamped envelopes" I use one-element Queues, not Notifiers.  No timestamps, no history.  To wait on multiple replies I just wait on the Queues one after another; I don't poll them.


Same here.

 

But in general, repeatedly creating things in LV and then destroying them will always suck up memory. That pattern is one I avoid unless I know it will be throttled by how fast (slow) a user may be. Letting a CPU loose to repeatedly create/destroy seems like an accident waiting to happen.

 

Ben

 

 


But isn't this an admission that we've come to accept LabVIEW's sub-optimal implementation of basic things?  Why should we automatically expect creation and destruction of anything to carry such a high risk of memory leaks?  I'm destroying each and every notifier correctly.  It's exactly this line of thinking which makes me think I've been drinking too much NI Kool-aid.

 

This kind of association should not be automatic in a professional programming language.  Would we make this assumption with other languages, assuming we created and destroyed correctly?

0 Kudos
Message 17 of 31
(1,174 Views)

So to recap, Shane's msg #4 in the thread anticipates *why* the "Wait...History" node(s?) grow memory unbounded.  It appears that the History node(s) must make and own a list of associated Notifiers (unique refnum plus data), and timestamps.  And when the Notifier is destroyed, the corresponding Notifier & timestamp stay in the list, orphaned.

 

So the underlying bugfix would be that the "Release Notifier" node needs to go on a search and destroy mission through any lists held by any "...History" nodes, right?
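
A toy Python model of that suspected mechanism, pure speculation based on the description above and not LabVIEW's actual internals: the history registry grows by one entry per notifier, and the release path never purges the entry, so destroyed notifiers pile up as orphans.

```python
# Toy model only (not LabVIEW internals): a global "history" registry that
# tracks every notifier it has seen, and a release that never cleans it up.
history_registry = {}      # refnum -> (timestamp, last notification data)
_next_refnum = 0

def obtain_notifier():
    global _next_refnum
    _next_refnum += 1
    history_registry[_next_refnum] = (None, None)   # history list starts tracking it
    return _next_refnum

def release_notifier(refnum):
    pass    # the suspected bug: no `del history_registry[refnum]` here

for _ in range(100_000):
    release_notifier(obtain_notifier())

print(len(history_registry))   # 100000 orphaned entries -> unbounded growth
```

In this toy model, the "search and destroy" fix described above would amount to deleting the registry entry (refnum, data, timestamp) inside `release_notifier`.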

 

The problem with a SEQ workaround as a general solution is that it doesn't scale to the case where multiple consumers wait on the same notification.  And that's exactly one of the kinds of cases that would have inspired the choice of a Notifier in the first place.
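
A small Python illustration of that limitation, again only as an analogue: a queue element goes to exactly one consumer, whereas a notifier broadcasts the latest value to every waiter, so a queue-based sender would have to enqueue one copy per consumer.

```python
import queue
import threading

q = queue.Queue()
results = []

def consumer(name):
    results.append((name, q.get()))   # each get() consumes the element for one waiter only

threads = [threading.Thread(target=consumer, args=(n,)) for n in ("loop A", "loop B")]
for t in threads:
    t.start()

q.put("done")        # only one of the two waiting loops receives this...
q.put("done")        # ...the sender must enqueue once per waiter to reach both
for t in threads:
    t.join()
print(results)       # both loops got a copy only because we sent two
```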

 

Seems like this needs a CAR.   Nice find and thanks for reporting!

 

 

-Kevin P

CAUTION! New LabVIEW adopters -- it's too late for me, but you *can* save yourself. The new subscription policy for LabVIEW puts NI's hand in your wallet for the rest of your working life. Are you sure you're *that* dedicated to LabVIEW? (Summary of my reasons in this post, part of a voluminous thread of mostly complaints starting here).
0 Kudos
Message 18 of 31
(1,162 Views)

@Intaris wrote:

But isn't this an admission that we've come to accept LabVIEW's sub-optimal implementation of basic things?  Why should we automatically expect creation and destruction of anything to carry such a high risk of memory leaks?  

Personally, I don't expect this.  It's only the "Notifier History" thing that is a problem, and I've never used that.

0 Kudos
Message 19 of 31
(1,159 Views)

@Intaris wrote:

Oh, and the reason why I poll them is that in our actual application any one of the responses can take up to 30 minutes to be received.  We may be sweeping the signal of a 6T magnet in a cryogenic environment.  That takes time.

 

I want the software to be responsive in that time.  Hence polling.


Ah.  For that use case I just have a single receive User-Event/Queue for each message-handling loop (one that lives for the lifetime of the loop), and I attach that communication reference as the reply address.  I use a newly created one-element queue only when I actually want to block waiting (usually when the response is expected in milliseconds), or when I need all of a set of responses before proceeding, so a timeout on any reply is an error condition for all (this use case is also short-lived; I don't think I've ever used it for something expected to take more than a couple of seconds).
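
A rough Python analogue of that arrangement, not LabVIEW code and with all names made up for illustration: each handler loop owns one long-lived inbox queue that gets attached to outgoing requests as the reply address, so no per-request references are created or destroyed on the slow path.

```python
import queue

class MessageLoop:
    """Each message-handling loop owns one long-lived inbox (the analogue of a
    per-loop User Event/Queue); that inbox is attached to outgoing requests as
    the reply address, so replies arrive alongside all other messages."""

    def __init__(self):
        self.inbox = queue.Queue()             # lives as long as the loop does

    def request(self, service_q, payload):
        service_q.put((payload, self.inbox))   # reply comes back to my inbox

    def run(self, handle):
        while True:
            msg = self.inbox.get()             # replies and other messages arrive here
            if msg is None:                    # shutdown sentinel
                break
            handle(msg)
```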

0 Kudos
Message 20 of 31
(1,143 Views)