LabVIEW


Reliability of SASE (Notifiers)

I don't see how a user event helps me particularly.  I have one sender sending out multiple requests to individual handlers (each handler gets a different request).  They then return a "Value Reached" signal when they have finished (which, as mentioned, can take up to many minutes).

 

While I love User Events, how would that pan out in code?  How would I have code which scales from 1 to N handlers?  I'm not being overly critical here, I just don't see how User Events help me out here.  If they can, cool, but I might need some help seeing the light.

Message 21 of 31

@drjdpowell wrote:

@Intaris wrote:

But isn't this an admission that we've come to accept LabVIEW's sub-optimal implementation of basic things?  Why should we automatically expect creation and destruction of anything to carry such a high risk of memory leaks?

Personally, I don't expect this.  It's only the "Notifier History" thing that is a problem, and I've never used that.


Warning!

 

Wild speculations follow...

 

I have a number of 4k-byte blocks in my head that I suspect are some type of handle LV uses for all creatable resources: VISA tasks, queues, ...

 

Even when they are closed/destroyed, the handle remains.

 

Still speculating...

 

Avoiding "access violations" when code in one thread attempts to access a resource that has been destroyed may play a role in the memory allocated for those "handles". An old bug in LV 6i I encountered illustrates the complexity of what LabVIEW does for us in the background.

 

I served an Action Engine that was part of an RT application running on FP nodes to a Windows application using VI Server. The AE exposed queues in the RT application so that the Windows app could acquire the info being stored in the queues waiting to be logged. The "Init" of the AE was invoked by the RT app when it started, creating the queues. Since the queues were created by the AE, LV considered them as being "owned" by the AE (still speculating to explain what I saw).

 

The bug raised its ugly head when the network connection went down and the VI Server service was closed. Since the VI Server ref to the AE was closed, it attempted to close the AE, and when that happened, the queue was cleaned up along with the other resources owned by the AE. That problem has since been fixed, but I report it now to illustrate the challenge of "knowing" when it is safe to destroy resources in a multithreaded environment.

 

I have run into the same issue with losing memory with what I believe is a basic LVOOP concept, the "Factory Pattern". Say I want to write a universal video game using LVOOP. I could use a Factory Pattern in a loop where the factory spits out a game instance of whatever game the user chooses. So on the first iteration it spits out an instance of "Jutland", which has a big memory footprint, and when the user decides to play a second game of "Monopoly", the memory allocated to Jutland is still allocated. To work around that issue I would have to use dynamically launched VIs that I could formally close to free up the memory. So a Factory Pattern in a loop in LV is a bomb waiting to go off unless I bend over backwards to make it work.

 

Still trying to help...

 

I ran into the same problem with a "Controls on the Fly" application written in LVOOP. After loading and reloading different screen layouts... out of memory.

 

Again, when I wrote a 3D viewer to render images from ultrasound as spheres in space ...

 

 

I could not use a factory pattern to spit out spheres of each type repeatedly. I had to develop a "sphere pool" where I would only create new sphere instances if I needed more. Otherwise I would reuse the spheres from the previous rendering so that I was not continually gobbling up memory as the user switched between different images.

 

That brings me back to this thread.

 

Rather than creating and destroying, a method could be used to only create new resources when we need more, and to recycle whenever possible.
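That "create only when needed, recycle otherwise" idea is essentially a resource pool. Since LabVIEW code can't be shown as text, here is a minimal sketch in Python; `ResourcePool` and the `factory` callable are illustrative names, not anything from the thread:

```python
class ResourcePool:
    """Reuse previously created resources instead of create/destroy cycles."""

    def __init__(self, factory):
        self._factory = factory   # called only when the pool has nothing idle
        self._idle = []

    def acquire(self):
        # Recycle an idle resource if one exists; create only when we need more.
        return self._idle.pop() if self._idle else self._factory()

    def release(self, resource):
        # Return the resource for reuse rather than destroying it.
        self._idle.append(resource)


# Example: a "sphere pool" that never allocates more spheres than the
# peak number in use at once.
made = []
pool = ResourcePool(lambda: made.append(object()) or made[-1])
a, b = pool.acquire(), pool.acquire()   # two allocations
pool.release(a)
pool.release(b)
c = pool.acquire()                      # reuses an idle sphere, no third allocation
```

The total allocation count is bounded by the peak concurrent demand, which is exactly what avoided the out-of-memory behaviour in the sphere-rendering case above.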

 

Done dribbling on for now...

 

Ben

 

Message 22 of 31

@Intaris wrote:

I don't see how a user event helps me particularly.  I have one sender sending out multiple requests to individual handlers (each handler gets a different request).  They then return a "Value Reached" signal when they have finished (which, as mentioned, can take up to many minutes).

 

While I love User Events, how would that pan out in code?  How would I have code which scales from 1 to N handlers?  I'm not being overly critical here, I just don't see how User Events help me out here.  If they can, cool, but I might need some help seeing the light.


I am a small player here, and I might not have got the question 100%; if so, sorry 🙂

You have a certain number of parallel processes; any of them can be a sender or a receiver if you use User Events. You can use a single FGV to store all your User Event refnums, created in the "Main.VI" at initialisation. Then any of your parallel processes (each of them needs to have an Event Structure even if there is no GUI) can subscribe dynamically to any Event (Register For Events).

Also, you can use Generate User Event from any of your VIs, and ALL of the subscribers will get these messages losslessly.
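The register/generate mechanics described above can be sketched in Python (a rough analogue only; LabVIEW's actual event registration refnums behave differently in detail). Each registration gets its own lossless queue, and generating the event fans out to every registration:

```python
from queue import Queue


class UserEvent:
    """Rough analogue of a LabVIEW User Event: any VI may generate it,
    and every registered listener receives every message (lossless)."""

    def __init__(self):
        self._registrations = []

    def register(self):
        # "Register For Events": each subscriber gets its own queue,
        # so a slow subscriber never loses messages.
        q = Queue()
        self._registrations.append(q)
        return q

    def generate(self, msg):
        # "Generate User Event": fan the message out to every registration.
        for q in self._registrations:
            q.put(msg)


evt = UserEvent()
sub1, sub2 = evt.register(), evt.register()   # two parallel processes subscribe
evt.generate("Value Reached")                 # both subscribers receive it
```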

Such communication methods are used in the Delacor QMH: http://sine.ni.com/nips/cds/view/p/lang/en/nid/213286

 

Is this what you were asking? Sorry if I misunderstood your post...

Message 23 of 31

@Intaris wrote:

 

While I love User Events, how would that pan out in code?  How would I have code which scales from 1 to N handlers?  I'm not being overly critical here, I just don't see how User Events help me out here.  If they can, cool, but I might need some help seeing the light.


User Events are M to N.   The standard intended use case is to have one sender (M=1) and 1 to N receivers, but User Events work fine with M senders and only one receiver (N=1).    In my case, each loop owns a User Event that only it is registered for, and it passes the User Event to other loops in order to allow messages to be sent back to it.  I'm using the User Event like a Queue (and when I don't want to receive other Event types then I often do use a Queue).
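The "each loop owns a User Event that only it is registered for" arrangement is effectively a mailbox: M senders, one receiver. A minimal Python sketch of that shape (using a plain queue as the stand-in for the owned event):

```python
import threading
from queue import Queue


def handler_loop(my_event, results):
    """A loop that owns its own event channel and is the only registrant;
    any number of senders may hold a reference to it."""
    while True:
        msg = my_event.get()
        if msg == "stop":
            break
        results.append(msg)


inbox = Queue()          # the event this loop alone is registered for
results = []
t = threading.Thread(target=handler_loop, args=(inbox, results))
t.start()

# M senders, one receiver: the owner hands `inbox` to other loops,
# and they all post messages back to it.
for sender in ("GUI", "Logger", "Watchdog"):
    inbox.put("hello from " + sender)
inbox.put("stop")
t.join()
```

Whether the channel is a User Event or a Queue only matters when the loop also needs to receive other event types in the same structure, as the post says.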

Message 24 of 31

@Blokk wrote:
Such communication methods are used in the Delacor QMH: http://sine.ni.com/nips/cds/view/p/lang/en/nid/213286

 


DQMH (and the JKI frameworks it borrowed the idea from) use User Events in two ways.  Outgoing information events are 1 to N (single sender, other processes may register to receive) and incoming command events are M to 1 (only the owning process receives, which is why JKI calls them "private" events).

 

I believe there are some "bus"-style architectures out there, where everyone broadcasts to everyone; they might be using User Events M to N (I can't remember).

Message 25 of 31

OK.  But if I have 1..N requests I want to handle in parallel, how do I do that with Events?

 

I need to mention that I have the functionality to start asynchronous requests with private return channels for each.  I loop over all receivers, telling each of them the new value to go to (and passing on the SASE reference).  Then, in a second step, I start waiting for all of the return values by inspecting the SASEs.

 

I know I can register for arrays of user events; I think that would be necessary because I can't register statically due to the unknown number of requests at compile time.  I use the same code for 1 request as for 10.
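The two-step scatter/gather described above (hand each handler its own private return channel, then wait on each SASE) looks roughly like this in Python; `doer` and the request tuple are illustrative stand-ins for the actual handlers:

```python
import threading
from queue import Queue


def doer(request):
    """Each handler works asynchronously and replies on the enclosed SASE."""
    value, reply_to = request
    # ... potentially minutes of work here ...
    reply_to.put(("Value Reached", value))


# Scatter: the same code works for 1 request or 10 -- just loop.
requests = [(v, Queue()) for v in (1.5, 2.0, 3.3)]   # value + its private SASE
for req in requests:
    threading.Thread(target=doer, args=(req,)).start()

# Gather: wait on each SASE in turn. Blocking here is fine because all
# handlers are already running in parallel; total wait time is bounded
# by the slowest handler, not the sum.
replies = [reply_to.get() for _, reply_to in requests]
```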

Message 26 of 31

Hi Ben,

The inability of LVOOP classes to unload from memory is definitely problematic.   Re the 3D stuff, I'm not sure there are memory leak bugs in LabVIEW if everything is closed properly, but I AM sure that it is very easy to screw up and fail to properly close 3D objects.  I've created memory leaks by not closing things.

Message 27 of 31

@Intaris wrote:

OK.  But if I have 1..N requests I want to handle in parallel, how do I do that with Events?

 


My message-handling loop has one "Message" User Event that it created and registered for during initialization.  It attaches that User Event to each of the N requests.   All N replies come back asynchronously on the one User Event.
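This variant uses one shared reply channel instead of N private ones; since all N replies come back asynchronously on the same event, each reply carries a request id so the single receiver can match them up. A hedged sketch of that shape (names are illustrative):

```python
import threading
from queue import Queue

reply_event = Queue()   # the one "Message" event the loop created and registered for


def doer(request_id, value, reply_to):
    # The reply carries the request id so the single receiver can match it up.
    reply_to.put((request_id, "Value Reached", value))


# Attach the same reply channel to each of the N requests.
pending = {1: 10.0, 2: 20.0, 3: 30.0}
for rid, v in pending.items():
    threading.Thread(target=doer, args=(rid, v, reply_event)).start()

# All N replies come back asynchronously (in any order) on the one event.
results = {}
while pending:
    rid, status, value = reply_event.get()
    results[rid] = (status, value)
    del pending[rid]
```

Compared to the per-request SASE approach, this keeps the receiver's event registration fixed regardless of N, at the cost of tagging replies.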

Message 28 of 31

Side question about architecture/design when you do this kind of S.A.S.E. stuff:

 

I think I'm missing something important.  My first reaction to the S.A.S.E. idea is, "Hey, cool!  Very parallel and decoupled.  Processes don't need to know ahead of time whose requests they will respond to or who they will send results to.  It's all up for grabs dynamically at run time."

 

But when thinking through the implementation a little bit, it seems that the datatype of the payload *does* need to be known on both sides of the comm link so that the sender can package it and the receiver can interpret it.  So now things are back to being more coupled again, and the advantages are less obvious.

 

Is the S.A.S.E. approach mainly for cases where multiple parallel processes are each doing pretty much the same thing?  I can see how it'd be useful there.  It just isn't as clear to me how it fits into a case with parallel processes doing unique things.

 

Looking to learn...

 

 

-Kevin P 

Message 29 of 31

Oh, the datatype IS known and fixed.  It's about multiple modules exposing the same interface.  If I want to set a signal, whether that signal is Temperature, Voltage or Magnetic field is irrelevant to the module setting the value, it just wants to know the value has been reached.  There may be multiple modules implementing the "Value changed" Interface (for want of a better word) even though the devices or code behind them may be different.  There may also be N instances of any given implementation.

 

In reality, the Doers are analog outputs in our software.  The requesters are any modules which need to modify an output voltage.  Our software is massively parallel so many different modules may actually all ask the same output to change to different values at the same time.  Obviously, whichever request arrives last will "win".

 

If a request is made while previous requests are still outstanding (not yet "Value Reached"), then all requesters up to that point receive a notification that their request has been superseded, but the return channel remains open.  The analog output module retains all return channels between "Value Reached" points.  Because of this I also need to be able to differentiate between the different requesters, ruling out the possibility of having a single return channel.  It does happen that we have several return channels active for any output at any time (this is essentially user error, but we need to handle it properly).
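A sketch of that supersede-but-keep-open behaviour, in Python; `AnalogOutput` and its methods are hypothetical names to illustrate the described protocol, not the actual software:

```python
from queue import Queue


class AnalogOutput:
    """Sketch of an output that keeps every requester's return channel open
    until "Value Reached", notifying superseded requesters along the way."""

    def __init__(self):
        self._channels = []     # return channels of all outstanding requests
        self._target = None

    def request(self, value, reply_to):
        # A newer request supersedes the outstanding ones, but their
        # channels stay open until the value is actually reached.
        for ch in self._channels:
            ch.put("Superseded")
        self._channels.append(reply_to)
        self._target = value    # last request wins

    def value_reached(self):
        # Every open channel hears the final outcome, then all are retired.
        for ch in self._channels:
            ch.put(("Value Reached", self._target))
        self._channels.clear()


ao = AnalogOutput()
first, second = Queue(), Queue()
ao.request(1.0, first)
ao.request(2.0, second)      # first requester is told it was superseded
ao.value_reached()           # both requesters hear the final value
```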

 

I've started thinking of them more as "Context-aware responses".

Message 30 of 31