
LabVIEW Development Best Practices Documents


Specification Pattern

Intent

To give a class different behavior depending upon some value it carries, without updating case structures throughout your VI hierarchy every time you add another value.

Motivation

Your data has some field – perhaps an enum, perhaps a string, perhaps something more complex – and you have a case structure that cases out on that field to decide which of several actions to perform. The data in this field is “constant,” meaning that once you have initialized the class this field never changes. When you find yourself creating a lot of these case structures, or, worse, updating a lot of these structures to add a new case, you should remember this pattern.

Implementation

Examples of this pattern occur in just about every shipping example. Any time you have a child class that overrides a dynamic dispatch VI defined by the parent class, you’re seeing an application of this pattern.

If some data element is constant from the time the cluster (or existing class) is initialized onward, then you should strongly consider creating a set of child classes. The data in that field becomes unnecessary – stop hauling it around the diagram; a class already knows its own type. For any case structure that used to select based on that field, create a new dynamic dispatch VI on the parent class and move the code that used to be in each frame of the case structure into the children’s override VIs. Instead of one monolithic cluster with an enum field, you now have a parent class and many more specific classes – thus the name of this pattern.
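LabVIEW class hierarchies are wired on the block diagram rather than written as text, so the refactoring itself cannot be shown literally here, but its shape can be sketched in a text language. The following C++ sketch is purely illustrative (Shape, ShapeKind, Circle, Square, and Area are invented names, not anything shipped with LabVIEW); the "before" half is the monolithic cluster with an enum, and the "after" half is the parent class whose dynamic dispatch VIs become overrides:

// Illustrative sketch only (not LabVIEW code).
// Before: one cluster plus an enum, and a case structure everywhere the
// behavior differs. Adding a new kind means editing every such case structure.
enum class ShapeKind { Circle, Square };

struct ShapeData {
    ShapeKind kind;   // constant after initialization -- the smell this pattern targets
    double size;
};

double Area(const ShapeData &s) {
    switch (s.kind) {                 // the case structure that keeps multiplying
        case ShapeKind::Circle: return 3.14159 * s.size * s.size;
        case ShapeKind::Square: return s.size * s.size;
    }
    return 0.0;
}

// After: the enum disappears. Each former case becomes an override on a child.
class Shape {                         // the "abstract" parent; never expected on a wire
public:
    virtual ~Shape() = default;
    virtual double Area() const = 0;  // corresponds to the dynamic dispatch VI on the parent
};

class Circle : public Shape {
public:
    explicit Circle(double r) : r_(r) {}
    double Area() const override { return 3.14159 * r_ * r_; }
private:
    double r_;
};

class Square : public Shape {
public:
    explicit Square(double s) : s_(s) {}
    double Area() const override { return s_ * s_; }
private:
    double s_;
};

Adding a Triangle now means adding one more child class with its own Area override, rather than revisiting every case structure in the hierarchy.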

The parent class is frequently one that you never expect to actually see traveling on a wire in your application. It exists merely to define the API that all of your child classes will implement. Such parents are frequently referred to as “abstract classes” because you never expect them to be physically instantiated on the wire.

Consequences

This refactoring typically results in many more VIs in your project, which can increase the load time of your application as a whole. But the run-time benefits may make it worthwhile, since each object carries less data than the pre-refactoring cluster did, so less data is copied around.

This refactoring assumes that you’re going to change all existing case structures that select on type into calls to dynamic dispatch VIs. However, you may not have time to change all the structures, or the structures may be on VIs owned by other developers. If you encounter a situation in which you cannot create a specific dynamic dispatch VI for each case structure, you can still create a single dynamic dispatch VI called Get Type.vi and have each child override it to return the original data. Just call that method and pass the results into the existing case structures. You won’t be taking full advantage of dynamic dispatching, so you’ll still have to update those case structures when adding new child classes, but at least you won’t be copying the data of that field every time you copy your object.
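A variation on the illustrative sketch above shows the shape of this compromise (again, Get Type and the class names are invented, not LabVIEW APIs): the selector is no longer stored in the object, but existing case structures can still obtain it through a single dynamic dispatch call.

// Illustrative sketch of the Get Type.vi compromise.
enum class ShapeKind { Circle, Square };

class Shape {
public:
    virtual ~Shape() = default;
    virtual ShapeKind GetType() const = 0;   // the single "Get Type" dynamic dispatch VI
};

class Circle : public Shape {
public:
    ShapeKind GetType() const override { return ShapeKind::Circle; }
};

class Square : public Shape {
public:
    ShapeKind GetType() const override { return ShapeKind::Square; }
};

void ExistingCode(const Shape &s) {
    switch (s.GetType()) {   // the old case structure, unchanged except for its input
        case ShapeKind::Circle: /* circle-specific code */ break;
        case ShapeKind::Square: /* square-specific code */ break;
    }
}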

If you cannot add Get Type.vi to your hierarchy (perhaps because the class is password protected), there are other ways to do type testing. The most obvious is the To More Specific function, which returns an error if the given data is not of the target class. So you can test “Is it this class? No. Is it this class? No. Is it this class? Yes.” Other ways of doing type-testing exist; you may find them documented elsewhere.
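In the same illustrative C++ terms (reusing the Shape, Circle, and Square classes from the first sketch), that chain of tests is a series of checked downcasts, each of which fails harmlessly until the right class is found:

// Illustrative rough analogue of testing with To More Specific: a failed
// downcast simply reports failure (here, a null pointer) rather than
// producing usable data.
void Dispatch(const Shape &s) {
    if (const Circle *c = dynamic_cast<const Circle *>(&s)) {
        // circle-specific code, using c
    } else if (const Square *q = dynamic_cast<const Square *>(&s)) {
        // square-specific code, using q
    }
    // ...and so on for every class you need to test against
}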

Collaborations

This pattern is frequently used in conjunction with the Factory pattern. When refactoring existing code, you may find the Delegation pattern helpful to create a good class hierarchy.

Editorial Comments

[Stephen Mercer] This is an obvious pattern to anyone who has grasped the basics of object-oriented design. It is easy to understand, and it is easy to work out the implementation on your own. Not all patterns are complex systems; sometimes pointing out the obvious is useful. When planning a new application, you may look through a list of patterns to consider what the best design would be, and having these so-called “obvious” patterns in the list helps remind you that you ought to use them.

It may seem silly to mention Specification as a pattern, because it is nothing more than the central use case for class inheritance and dynamic dispatching. I highlight it as a pattern because even experienced designers sometimes overlook the obvious. They may miss the fact that a piece of data in their cluster never changes. If data never changes, that data is effectively part of the data type. Rather than carrying that information around as data, which has to be copied at wire forks and other junctures, it can be carried as part of the data type, with its behavior encoded in VIs that do not require copying at all. Clusters that carry around an enum describing how to interpret their other elements, or a string recording the source of the cluster data, cry out for application of this pattern.

This pattern simplifies complex VIs and large clusters into manageable chunks by giving you an easy way to break the functionality of the case structure up across multiple VIs. It is the first pattern a new user of object orientation should learn, and the first one to consider when refactoring existing applications.

Elijah Kerry
NI Director, Software Community
Comments
MrQuestion
Member

I'm learning how to use classes in LabVIEW. I want to take advantage of this design pattern, but there are no simple examples to dive into. Where can we find an example?


Andreik_ee
Member

Where is the pattern implementation to download?

komorbela
Member

A note for those whose system is very fast and already at the limit of the hardware it runs on:

When considering replacing cases in case structures with dynamic dispatch VIs, the small overhead of dynamic dispatch can be an issue.

Here is the fact: calling the dynamic dispatch VI that belongs to the actual class is done dynamically as the code runs. This necessarily means there is some overhead (a time penalty) while deciding which VI to call. At least this is the case in LabVIEW.

So before jumping into a refactoring of old, running code, make a small test to compare case selection vs. dynamic dispatch.

tst
Knight of NI

komorbela wrote:

Here is the fact: calling the dynamic dispatch VI that belongs to the actual class is done dynamically as the code runs. This necessarily means there is some overhead (a time penalty) while deciding which VI to call. At least this is the case in LabVIEW.

According to what NI has to say about how the DD mechanism works here, I would actually expect the DD call to be faster than a case structure, at least for the selection part, since the case, if I understand correctly, always does a linear search, whereas the DD call simply takes a VI reference from a particular known index. Of course, you would still have the VI call overhead and the other issues mentioned in that article.

That said, I have seen people who complain about call times with DD vs. regular subVI calls, but I don't know the details. Also, I don't usually do optimization. Usually I have more than enough CPU to spare not to bother, so I don't have experience with actually comparing all of these. I would agree that a test could be beneficial, but I'm not sure if a small one would be good enough to test what you actually want to test.


Intaris
Proven Zealot

I've complained about the overhead for DD calls in the past HERE.

If you compare a standard DD call with a case structure choosing from a finite set of static VIs implementing the same functionality, you'll find that the ability to inline the static VIs makes this variant much faster.  We currently can't inline DD VIs.  This may change in the future (unlikely, IMHO), but currently it is impossible.

tst
Knight of NI

Shane, most people can't access your link. You might want to distill your results into another post.


TurboPhil
Active Participant

FWIW:

We have done some benchmarking of DD overhead, compared our results with some provided by NI AEs, and found it to average about 3 microseconds. That is, calling a class method VI that is basically a no-op, but requires dynamic dispatch, takes on average about 3 microseconds. These tests were conducted on some pretty beefy hardware.

There also appears to be a slight inconsistency on the first call in a loop; the first call tends to take more time (roughly double or triple, so ~6-9 us).

While that time seems pretty small in theory, if you are attempting to do a lot of DD calls in a tight loop, it can definitely pose a problem. There are some places where it has not been an option for us, because we need better performance...

Intaris
Proven Zealot

@Tst, Oops, true enough.

I found that a bare-bones DD-versus-others test (no error clusters, no debugging, etc.) showed about 700 microseconds for dynamic dispatch versus around 60 microseconds for a static LVOOP VI call on a modern Intel CPU.

AristosQueue (NI)
NI Employee (retired)

TurboPhil wrote:

We have done some benchmarking of DD overhead, compared our results with some provided by NI AEs, and found it to average about 3 microseconds. That is, calling a class method VI that is basically a no-op, but requires dynamic dispatch, takes on average about 3 microseconds. These tests were conducted on some pretty beefy hardware.

Do you have numbers comparing dynamic dispatch to a substitution that performs the same work? The work done by a dynamic dispatch method cannot be done by a static dispatch method alone. You use dynamic dispatch when you're trying to select among a set of possible VIs based on the run-time type of some input which is carrying an unknown data payload. To get that with a static dispatch method, you need some sort of selector (say an enum) and a variable payload (variant, flattened string, giant union cluster).

The giant union cluster is often faster (though not always). The other two are definitely slower.
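In rough text-language terms (types invented purely for illustration), the selector-plus-payload substitute looks something like this; every call site has to unpack the payload that dynamic dispatch would otherwise handle for you:

#include <variant>

// Illustrative "giant union cluster": a selector plus a variable payload.
struct CircleData { double radius; };
struct SquareData { double side; };

enum class Kind { Circle, Square };

struct GiantCluster {
    Kind kind;                                   // the selector (say, an enum)
    std::variant<CircleData, SquareData> data;   // the variable payload
};

double Area(const GiantCluster &c) {
    switch (c.kind) {                            // what the dynamic dispatch call replaces
        case Kind::Circle: {
            const auto &d = std::get<CircleData>(c.data);
            return 3.14159 * d.radius * d.radius;
        }
        case Kind::Square: {
            const auto &d = std::get<SquareData>(c.data);
            return d.side * d.side;
        }
    }
    return 0.0;
}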

TurboPhil
Active Participant

It's funny you should ask that, AQ. Yes, we did do a test to compare performance using classes/inheritance/DD vs. non-OO style to perform the same operations. In our case, we were testing an arbitrary arithmetic evaluator, and we were finding that the non-OO version--while much less elegant and harder to read--was significantly (up to an order of magnitude, for significantly complex formulas) faster than the OO version evaluating the same formula. The overhead was traced back to DD time.

The non-OO version was basically a "giant union cluster", with enum elements to select which cluster items were applicable for particular operations (read: ugly).

I can't post here due to NDA/ITAR restrictions, but we have discussed it some in our private community page, and there are NI AEs who have access to some examples we have provided.

AristosQueue (NI)
NI Employee (retired)

Ah. That's what I want to hear.

I mean, I'd rather hear that the time was slower, but if you did ditch it, I want to be sure that it was for legit reasons. Other users have ditched it without factoring in the extra code they used to get around dynamic dispatch.

TurboPhil
Active Participant

Right. Yeah, we haven't ditched it yet. Both versions exist as prototypes; neither has been released into production. I don't want to give up on the OO version, due to its readability/extensibility. But it's hard to ignore those kinds of timing differences.

I keep hoping I'll either find some boneheaded inefficiencies to remove or come up with some really clever optimizations for the OO version so we can get the performance and the OO benefits. I'd probably settle for a 2:1 time ratio (OO to non-OO), since the development benefits are so great, but 10:1 seems hard to justify.

AristosQueue (NI)
NI Employee (retired)

How sure are you that it is the dynamic dispatch as opposed to inplaceness? Inplaceness we can do something about... sometimes several somethings. The thing that prompts me to ask this question is this comment:

> (up to an order of magnitude, for significantly complex formulas)

Do you have a class hierarchy and each child class overrides a few methods that either return constants or perform some part of the computation?

(I really shouldn't get into this -- I don't have time to give a lot of advice/help this week, but I'm intrigued. You're one of the very few where DD overhead has ever become an issue, and I'd like to characterize more what kinds of problems are more amenable to the giant cluster solution to provide more guidance to people up front IF they are likely to be in that category. All I can say at the moment is "it is very rare for apps that have performance issues for that issue to actually be DD overhead".)

komorbela
Member

I made a test project to compare 4 different scenarios that may be used to dynamically determine (based on the class type) which method to use:

- Dynamic dispatch

- Cast and Dynamic dispatch

- Cast and Static dispatch

- Cast and Static dispatch Inlined

In this case we have a "Number" parent class that has a class-type enum. This enum is written when its children are created.

When I say Cast, I mean that based on this enum I use a case structure to cast to a more specific class (the class that I read from the enum), execute what I want on the object, and cast back to the "Number" type.

Inlining: this is the biggest problem (at least for me) when I have to use dynamic dispatch, because I cannot use inlined VIs. If I have a loop running many times, inlining the contained VIs can be a great benefit.

So in this project I randomly generate N DBL or I32 objects, and then increment them M times using any of the listed methods (as selected by the "Mode" enum control).

Here are the timing results on my PC (Intel i5) setting N and M to 1000:

340 ms - Dynamic dispatch

400 ms - Cast and Dynamic dispatch

117 ms - Cast and Static dispatch

  48 ms - Cast and Static dispatch Inlined

The same when setting N and M to 2000:

1330 ms - Dynamic dispatch

1500 ms - Cast and Dynamic dispatch

450 ms - Cast and Static dispatch

190 ms - Cast and Static dispatch Inlined

You can try it for yourself; I've attached the project. Run Loop_test.vi.

Note: I couldn't attach the zip to this post, so I attached it to the whole document. It is called "DDtest.zip". I hope I didn't screw up anything by doing so.

Intaris
Proven Zealot

Hey, look it's basically the same results I had posted in the Champions forum....

There is a problem with the speed of dynamic dispatch.  Period.  It's a problem.  I know there is overhead with a DD call, but it seems that there's just too much work being done.  For a long time it seemed like I was the only one seeing this, but it appears now that I'm not.  Why exactly is cast and static dispatch 5x faster than cast and dynamic dispatch?  Don't hold your breath waiting for answers, though.

I had made the case previously that it would be of great benefit to be able to define a LVOOP branch to DISALLOW dynamic loading of classes inheriting from it.  This way (IIRC) it should be possible to INLINE Dynamic Dispatch VIs because the full scope of the inheritance hierarchy is known at compile time.

komorbela
Member

Intaris wrote:

...

I had made the case previously that it would be of great benefit to be able to define a LVOOP branch to DISALLOW dynamic loading of classes inheriting from it.  This way (IIRC) it should be possible to INLINE Dynamic Dispatch VIs because the full scope of the inheritance hierarchy is known at compile time.

I also would welcome a solution that compiles what I have at compile time and loads at the first call of the parent class.

Just a suggestion on how to do it:

I should have an option in the parent class to "load all known children on call". If I have the child classes ready and I give them to the compiler, then these should be considered the "known classes". I should also have a "conditionally inline VIs" option, meaning a VI is inlined if it is compiled together with the parent class. These are just rough ideas; don't take them too seriously. I post them only to generate other ideas, and if we find a really good solution it could be posted to the Idea Exchange.

TurboPhil
Active Participant

Those are some great ideas, komorbela & Intaris.

I agree--the overhead of DD seems much higher than it should be. It would make sense if that were because it was designed to flexibly accept dynamic loading of new lvclasses. And then it follows that there should be a way to disable that feature.

We have already done some (admittedly, not the most elegant) things to try to force the compiler to be aware of child classes in some of our projects. In particular, where we pass objects over the network (via PSP, TCP, UDP, etc.), we'll place an instance of all child classes directly on the BD of the receiver VI. Also, because I am generally paranoid, I wire all those object constants into an array and push that to an indicator on the FP, just in case the compiler tries to get clever and removes the constants just sitting there on the BD as "dead code". [At some point, I mean to get around to making a VI to script adding any new objects to a custom VI like that, that I can add to the "always included" directory....an idea I am stealing from Fabiola, I believe]

So we're already moving in the general direction of these suggestions. Now, if we could just get the compiler to recognize what we're doing...

AristosQueue (NI)
NI Employee (retired)

Intaris wrote:

There is a problem with the speed of dynamic dispatch.  Period.  It's a problem.

At this point, I believe that this exists. I'm having trouble characterizing it on my end. Doesn't help that I don't officially work on this any more, but I'll continue in my spare time to try and puzzle out what's wrong here.

mike_nrao
Member

Sorry komorbela, I'm stuck in the past... can you please recompile DDtest for LabVIEW 2012?

I'd like to try it out on my desktop, and on cRIO.

Great ideas you guys are proposing!  Thanks AQ for investigating. 

Intaris
Proven Zealot

AristosQueue wrote:

At this point, I believe that this exists.

komorbela
Member

I posted a new version of DDtest: DDtest1_Lv2013_2012_2011.zip

It contains the original version with versions saved for LabVIEW 2012 and 2011. You can find the previous versions in the PreviousVersions folder.

To AQ: I am happy that you are committed to this problem and are trying to analyze it even in your spare time. I hope the ones responsible for LVOOP will also be interested. Thanks in advance!

P.S.:

A general question: Shall I delete the original file from the attachments? The new upload is an extended duplication of that so nothing would be lost. It would help to not overcrowd the attachments list. However it may lead to inconvenience if someone wants to find the original DDtest.zip. What is the policy in this case?

TurboPhil
Active Participant

Did some more testing of some existing DD code I had, creating new cast+static dispatch versions of the frequently-called methods (per komorbela's example) and saw results ranging between 3x and 15x performance improvement (execution time reduction). Thanks for the great tip!

The process of converting existing dynamic dispatch methods to the enum-driven static dispatch schema is pretty easy, but a bit tedious. Seems like it would be a great project for someone to write a script for. Maybe somebody out there has time to put something together to benefit the community?

Again, this is of course not the ideal solution to the DD overhead problem, but until AQ et al work out a better fix down in the bowels of the LV compiler, it's a neat trick to keep up one's sleeve. And, of course, it's not like one would want to apply this pattern everywhere that they are currently doing regular DD; just in those places where shaving off a few microseconds will actually add up to something significant...

AristosQueue (NI)
NI Employee (retired)

komorbela: Just FYI, the To More Generic node is always a no-op in all uses. It exists only because people don't like to see coercion dots on their diagrams. Coercion dots for upcasting are also no-ops. That's why there's no error cluster. It can't fail because it doesn't do any work. It gets totally compiled out.

TurboPhil
Active Participant

AristosQueue wrote:

komorbela: Just FYI, the To More Generic node is always a no-op in all uses. It exists only because people don't like to see coercion dots on their diagrams. Coercion dots for upcasting are also no-ops. That's why there's no error cluster. It can't fail because it doesn't do any work. It gets totally compiled out.

To follow up on that, AQ: if the To More Specific node fails (either because we screwed up or if it was done intentionally as a guess-and-test schema), does it return garbage, or does it return the original class data? The help doesn't seem to address this use case (it seems to be talking about it with regard to refnum classes).

So, if I wanted to do a chain-of-responsibility pattern through guess-and-test, would I need to branch the original wire to make sure the original class is preserved in all cases where my "guess" is wrong? Is there a recommended way to do that to avoid making copies?

TurboPhil
Active Participant

TurboPhil wrote:

So, if I wanted to do a chain-of-responsibility pattern through guess-and-test, would I need to branch the original wire to make sure the original class is preserved in all cases where my "guess" is wrong? Is there a recommended way to do that to avoid making copies?

(Attached image: chainofresponsibility.PNG)

Here's an example of what I mean. Originally, I was doing this with dynamic dispatch--the parent class' method was just a no-op that returned a "request fulfilled" of FALSE. Only the class(es) that could actually act on the request override and set that flag, stopping the search. Wondering if this might be a more efficient way to do it, but I'm concerned about the potential for copying the class at that wire branch upstream of the To More Specific...
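In rough text-language terms (class and method names invented for the sketch), the guess-and-test version I'm describing amounts to this:

// Illustrative guess-and-test chain of responsibility: try each concrete
// type in turn until one reports that it fulfilled the request.
struct Request { /* ... */ };

class Handler {                        // the parent class travelling on the wire
public:
    virtual ~Handler() = default;
};

class FileHandler : public Handler {
public:
    bool Handle(const Request &) { return true; }    // this child can act on the request
};

class NetworkHandler : public Handler {
public:
    bool Handle(const Request &) { return true; }
};

bool HandleRequest(Handler &h, const Request &req) {
    // A failed dynamic_cast just yields null and never copies the object; the
    // LabVIEW question is whether branching the wire ahead of To More Specific
    // forces a copy of the class data.
    if (auto *f = dynamic_cast<FileHandler *>(&h))    return f->Handle(req);
    if (auto *n = dynamic_cast<NetworkHandler *>(&h)) return n->Handle(req);
    return false;                      // "request fulfilled" = FALSE, keep searching
}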

AristosQueue (NI)
NI Employee (retired)

It's documented... it returns the default value of the cast type and an error.

komorbela
Member

I knew that it is equivalent to a coercion; I just put it in for visual symmetry. But thanks for letting me know that even a coercion is a no-op in this case. Good to know.

komorbela
Member

I am happy that the example code I gave you is a good starting point for optimizing your code! The improvements sound good, but unfortunately it's a trade-off between the real flexibility of the Specification pattern and speed.

About the example that you shared: I understand your concerns about copying the full object. If the compiler is not smart enough, that may lead to actual copy operations. And the compiler is out of our control; we can just hope that it is smart and gets smarter.

This is the reason I used an enum in the parent class to store the actual class type. It means more work on the existing classes (edit time), but less work on converting to the actual child class (run time). But don't forget, this is just a workaround!

Dynamic dispatch itself should be optimized further, if possible.

The scripting that you suggest sounds fine; that's a better workaround. I would add the restriction to do this scripting only right before compilation and to undo it afterwards (or make a copy, or save a version in source code control and do the scripting only afterwards, to be extra sure not to change the original code).

This way it would be a real benefit when building a stand-alone application (*.exe), but it doesn't help if you want to create packed project libraries (PPLs). With PPLs you can always introduce a new child in a new PPL while the parent is already compiled into an existing PPL. (It may also be problematic with shared libraries, but I have no experience in that area, so I'd rather not say anything.)

AristosQueue (NI)
NI Employee (retired)

No copy of the object is made on a successful cast. On unsuccessful cast, the value output is the default value of the class wired to the middle terminal.

AristosQueue (NI)
NI Employee (retired)

To avoid confusion on the original post, I have moved komorbela's attachments here:

https://decibel.ni.com/content/thread/19028
