Typedefs and Class Design

I have a discussion question about best practices pertaining to typedefs and class design.

 

 

Background: In terms of object composition and nested data structures, it appears that typedefs have been superseded by classes. I have noticed that typedef'd data members of the Private Class Data Cluster are excluded from the class's auto-tracked Mutation History (more about Mutation History). This causes problems when versioning persistent data, and I have also seen IDE mismatches on "Bundle/Unbundle by Name" when accessing class private data after renaming typedef'd data members.
 
This raises the question:
Is it typically a bad idea to have typedefs as data members in class private data?

 

Message 1 of 77

Message 2 of 77

> Is it typically a bad idea to have typedefs as data members in class private data?


Yes, if it is a typedef of a cluster.

No, if it is a typedef of other types (enums and numerics).

 

Because typedefs do not themselves save mutation records, classes cannot handle them gracefully when they set up their own mutation records and discover that a typedef they rely upon has changed. As long as there is a 1-to-1 substitution of type (i.e., the old enum has been extended/modified to become the new enum), everything works out. But throw a new element of a cluster into the mix, and things get a lot more complicated.
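
To illustrate AQ's point outside of LabVIEW, here is a rough Python analogy (my own sketch, not LabVIEW's actual flattened-data format): a 1-to-1 type substitution leaves old serialized values readable, but inserting a new cluster element changes the layout, so a reader without a mutation record cannot interpret old data.

```python
import struct

# Hypothetical "v1 cluster": (count: int32, scale: float64)
V1_FORMAT = "<id"
old_record = struct.pack(V1_FORMAT, 42, 1.5)

# A 1-to-1 substitution (e.g., an enum whose underlying int gained new
# values) still reads the same bytes meaningfully. But a "v2 cluster"
# that inserts an element, (count: int32, offset: int32, scale: float64),
# no longer matches the old layout:
V2_FORMAT = "<iid"
try:
    struct.unpack(V2_FORMAT, old_record)
except struct.error as e:
    print("naive unflatten fails:", e)  # old data has no 'offset' field

# A mutation record ("offset was added, default 0") makes recovery trivial:
count, scale = struct.unpack(V1_FORMAT, old_record)
migrated = struct.pack(V2_FORMAT, count, 0, scale)
```

Without that record, nothing in the old bytes says which element is missing, which is why the auto-generated mutation history cannot cope when a typedef'd cluster changes underneath the class.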

Message 3 of 77

@JackDunaway wrote:
Is it typically a bad idea to have typedefs as data members in class private data?

I can't say it's a bad idea, though I don't have any reason to disagree with AQ.  I know I rarely bother creating typedeffed clusters for the class .ctl in my code; there really isn't much point in it.  In fact, I almost never use typedeffed clusters at all any more.  Classes are safer, more flexible, and easier to work with over the long term.

 

I do believe it's a bad idea to expose a typedeffed cluster across code module boundaries, especially if the code module is intended to be reused.  The only time I consider doing that is when I'm absolutely 100% certain the typedef won't change, and even then I try to find better solutions.

 

 


@JackDunaway wrote:

This causes problems when versioning persistent data, and I have also seen IDE mismatches on "Bundle/Unbundle by Name" when accessing class private data after renaming typedef'd data members.


This is a serious limitation of generic typedefs that does not appear to be widely understood.  Renaming and/or reordering typedeffed items often confuses VIs that use the typedef.  This usually happens when a VI isn't loaded into memory at the time the typedef edits are made.  Since the only way to make sure a VI is loaded into memory is to open its front panel, editing a typedef that is used by lots of VIs can be tedious.

Message 4 of 77

The "always typedef your clusters" mentality is driven into our skulls from Day One of traditional LabVIEW best practices. When dipping toes (or diving) into LVOOP, I think this assumption is baggage that might hinder comprehension of and success with LVOOP (anecdotal, from my own experience).

 

Some OOP design patterns and concepts can be approximated using traditional LabVIEW capabilities, namely nested clustosauri. I was under the impression that LVOOP could be introduced slowly into existing applications that naturally adhere to OOP tenets, but for some reason I missed the memo that the first step for such apps is to ensure virtually no clustered typedefs are left in data hierarchies without using "Convert Contents of Control to Class" (or at least, unintuitively, converting bottom-up rather than top-down).

 

This is a crucial realization that must be understood when attempting to convert a project to LVOOP one module at a time, especially when modules are owned by those yet-to-sip-the-Kool-Aid. (Fortunately, not a personal anecdote 😉)

 

(Before going further, I'm not angry or frustrated, I'm in the middle of a 3-days-running "AHA!" experience)

 

At first, I felt vulnerable not typedef'ing data members when composing high-level classes. Then it hit me... those data members should be classes themselves. But I still felt vulnerable... the clusters were not protected by type definition. Then it hit me... the Private Class Data Cluster IS a typedef... well, actually, better: a souped-up data structure that's too-cool-for-typedef-school.


Daklu wrote:  

I do believe it's a bad idea to expose a typedeffed cluster across code module boundaries, especially if the code module is intended to be reused.  The only time I consider doing that is when I'm absolutely 100% certain the typedef won't change, and even then I try to find better solutions.


I already have a habit of "fanning out" smaller data structures onto a subVI connector pane, since this typically yields cleaner caller diagrams. This tendency has already yielded similar successes when designing LVOOP interfaces (or "code module boundaries" - your continual judicious word choice noted and appreciated). More parameters of simple datatypes typically beats fewer parameters of nested datatypes: looser coupling, and fewer .ctl (or .lvclass) files.
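
A rough Python analogy of that trade-off (my own sketch; the names are hypothetical, not from this thread): the "fanned-out" interface exposes only simple scalars, so callers never depend on the shape of a shared type.

```python
from dataclasses import dataclass

# Nested-datatype style: every caller now depends on AcquisitionConfig's
# shape, the text-based equivalent of wiring a typedeffed cluster.
@dataclass
class AcquisitionConfig:
    channel: str
    sample_rate_hz: float
    samples: int

def configure_nested(cfg: AcquisitionConfig) -> None:
    print(f"configuring {cfg.channel} at {cfg.sample_rate_hz} Hz")

# "Fanned-out" style: simple parameters, no shared type to version or break.
def configure_flat(channel: str, sample_rate_hz: float, samples: int) -> None:
    print(f"configuring {channel} at {sample_rate_hz} Hz")

configure_flat("Dev1/ai0", 10_000.0, 1_000)
# Renaming or reordering a field inside AcquisitionConfig touches every
# caller of configure_nested; configure_flat callers see only the
# connector-pane-style parameter list.
```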

Message 5 of 77

So while I'm able to follow the general discussion here, I'm wondering how all this applies to by-ref classes.

My own style is to make the data a typedef'ed cluster (owned by the lvclass) and place it inside a DVR.

This discussion leaves the impression that it's better to do it the other way round: put the object inside the DVR, so the class has its mutation history and there is no need for a typedef.
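
A loose Python analogy of the two layouts Felix describes (my own sketch; a DVR is approximated here by a lock-guarded mutable holder, and all names are hypothetical):

```python
from threading import Lock

# Layout 1 (Felix's current style): the referenced value is a bare
# "cluster" (a plain dict), a typedef-like shape with no behavior.
cluster_dvr = {"lock": Lock(), "data": {"gain": 1.0, "offset": 0.0}}

# Layout 2 (object inside the reference): the referenced value is a
# class, so its data shape stays private and can carry its own
# versioning/mutation logic.
class Calibration:
    def __init__(self, gain: float = 1.0, offset: float = 0.0) -> None:
        self._gain = gain          # private data; clients never unbundle it
        self._offset = offset

    def apply(self, x: float) -> float:
        return x * self._gain + self._offset

object_dvr = {"lock": Lock(), "data": Calibration()}

with object_dvr["lock"]:           # analogous to an In Place Element structure
    print(object_dvr["data"].apply(2.0))
```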

 

Felix

Message 6 of 77

OK, me again. Maybe I shouldn't do so much metamodelling, but I'd like to publish this concept.

 

Suppose we define some kind of meta-operations, specifically a by-ref and a type-def operation {incomplete}. Let's assume (as suggested by Jack D.'s latest post) that making a private data cluster is (similar to) a type-def operation (type-def<<class private data>>).

Basically, I'm not well-versed enough in that kind of math to write down an algebra for the proposed meta-associations, but the basic concept would be: applying the same meta-operation twice to a specific piece of meta-data is not a good idea.

So the simple case: type-def a cluster and make it a private data cluster: type-def<<class private data>>(type-def(data)) -> bad.

Same with ref: by-ref(type-def<<class private data>>(by-ref(data))) -> bad.

Because I cannot formulate an algebra for this, I can't judge a term like this (or, the other way round, I need a judgment on terms like this before I can formulate a good algebra): type-def<<class private data>>(by-ref(type-def(data))).
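
One possible way to write Felix's rule down (my own sketch, not something from the thread): treat the two meta-operations as unary operators and forbid immediate self-composition.

```latex
% Sketch: two unary meta-operations acting on a datatype x,
% T = type-def (including the class-private-data variant) and R = by-ref.
% Felix's rule forbids applying the same operator twice in a row:
\[
  m(m(x)) \rightsquigarrow \text{bad} \qquad \text{for } m \in \{T, R\}
\]
% His open question is the mixed nesting, where no operator is
% immediately repeated, so the rule above stays silent:
\[
  T_{\text{class}}\bigl(R(T(x))\bigr) \rightsquigarrow \text{?}
\]
```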

 

OK, suppose I lost you; too much metamodelling in my brain. I'll go and study algebra... 😄

 

Felix

Message 7 of 77

I would disagree. While auto-mutation of class data is nice default behavior, I would never rely on it exclusively in any case, since the class designer has no control over, or insight into, its behavior. I would also not change other parts of my class design to attempt to maintain a nicer auto-mutation path.

 

In my mind, if you require mutation, you should set up your own serialization and mutation scheme manually. We do this commonly in our product line, and once our convention for the process was established, it was fairly easy to keep up. I think it's worth the initial effort.
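
Jarrod doesn't spell out NI's convention, but a minimal sketch of the general pattern he describes (my own illustration in Python, with hypothetical names) is a version tag plus a chain of explicit migration functions:

```python
import json

CURRENT_VERSION = 2

def _migrate_1_to_2(state: dict) -> dict:
    # v2 added 'offset'; supply the default an old file could not store.
    state["offset"] = 0.0
    return state

MIGRATIONS = {1: _migrate_1_to_2}

def serialize(state: dict) -> str:
    return json.dumps({"version": CURRENT_VERSION, "state": state})

def deserialize(text: str) -> dict:
    doc = json.loads(text)
    version, state = doc["version"], doc["state"]
    while version < CURRENT_VERSION:       # walk the migration chain
        state = MIGRATIONS[version](state)
        version += 1
    return state

old_file = json.dumps({"version": 1, "state": {"gain": 2.0}})
print(deserialize(old_file))  # {'gain': 2.0, 'offset': 0.0}
```

The class designer controls every step of this, which is exactly the insight and control that the auto-generated mutation history doesn't give you.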

Jarrod S.
National Instruments
Message 8 of 77

@f. Schubert wrote:

This discussion leaves the impression that it's better to do it the other way round: put the object inside the DVR, so the class has its mutation history and there is no need for a typedef.


 

Mutation history for classes is an improvement over typedeffed clusters.  Ironically, because class data is private by definition, typedefs "need" the ability to store mutation history more than classes do.  (I'm not suggesting that should be implemented...)

 

Bundle/Unbundle operations rely on the .ctl file.  Class clients don't have access to the class .ctl file, so you can jiggle the .ctl file around all you want and client code doesn't care.  When using typedeffed clusters, client code does care: since it is directly bundling/unbundling data from the typedef, any change to the typedef has an effect on client code.

 

So, while editing a typedef is problematic unless all client VIs are loaded into memory, the mere act of using a class instead of a typedef completely avoids that problem.  In other words, mutation history is "needed" for typedefs just to make sure your client code compiles correctly.  Classes don't need mutation history to maintain compatibility... it's maintained through the class methods.
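
In text-based terms (a Python analogy, my own sketch with hypothetical names): direct bundle/unbundle is like clients reaching into a record's fields by name, while class methods give the private shape room to change.

```python
# Typedef-style: clients "unbundle" fields directly, so renaming
# 'temp_c' breaks every line like the one below.
reading = {"temp_c": 21.5, "sensor": "A1"}
print(reading["temp_c"])

# Class-style: clients only call methods; the private shape can be
# renamed, reordered, or extended without touching any caller.
class Reading:
    def __init__(self, temp_c: float, sensor: str) -> None:
        self._temp_c = temp_c
        self._sensor = sensor

    def temperature_celsius(self) -> float:
        return self._temp_c

print(Reading(21.5, "A1").temperature_celsius())
```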

 

There are two times I can think of where class mutation history is needed:

1. When persisting a class to disk.  If you save a class as v1.0, update the class source code to v1.1, and then load the saved class from disk, mutation history attempts to make that transition cleanly.

2. In a modular application, if one part is compiled with MyClass v1.0 and another part is compiled with MyClass v1.1, I think mutation is handled automatically when MyClass objects are passed between the two modules.  (I've never tried it, though, so I'm not sure what will happen.)

 

How does all this apply to by-ref classes?  All the techniques for creating by-ref classes require you to create/obtain some kind of run-time reference.  If you want to persist the class to disk, you have to implement your own serialization/deserialization methods: unflattening the class from a string won't automatically create the run-time references when you load the saved class from disk, and you'll end up with an invalid reference.
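
A compact Python analogy of that invalid-reference problem (my own sketch; names are hypothetical): the open file handle below plays the role of the run-time reference, so it is excluded from serialization and re-created on load.

```python
import json

class Logger:
    def __init__(self, path: str) -> None:
        self.path = path
        self.handle = open(path, "a")   # run-time reference; not serializable

    def to_json(self) -> str:
        # Persist only the state needed to rebuild the reference.
        return json.dumps({"path": self.path})

    @classmethod
    def from_json(cls, text: str) -> "Logger":
        # Re-create the reference instead of unflattening it; a blindly
        # restored handle would be as invalid as a stale refnum.
        return cls(json.loads(text)["path"])

restored = Logger.from_json(Logger("demo.log").to_json())
print(restored.handle.writable())  # True: the reference is live again
```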

 

This is a long-winded way of saying... I don't think it matters.

Message 9 of 77

 


@jarrod S. wrote:

I would disagree. While auto-mutation of class data is nice default behavior, I would never rely on it exclusively in any case, since the class designer has no control over, or insight into, its behavior. I would also not change other parts of my class design to attempt to maintain a nicer auto-mutation path.

 

In my mind, if you require mutation, you should set up your own serialization and mutation scheme manually. We do this commonly in our product line, and once our convention for the process was established, it was fairly easy to keep up. I think it's worth the initial effort.


 

 

  1. How do you combat the IDE issues with Bundle/Unbundle?
  2. What advantages does developer-generated/maintained data mutation code have over LabVIEW-generated/maintained mutation code? (One answer: it allows you to assign non-default run-time values to class members. My counter: where this is needed, assign a NULL/sentinel default value, which is then detected and changed by a run-time initialization ["constructor"] method; see the sketch after this list.)
  3. Setting the auto-mutation language features of the .lvclass aside: are there other reasons to continue using typedefs as class members rather than replacing all typedefs with classes?
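
A small Python sketch of the sentinel-default counter-proposal in question 2 (my own illustration; names are hypothetical): the stored default is a recognizable NULL, and an explicit initializer replaces it at run time.

```python
import math

# NaN plays the role of the NULL/sentinel that the class's default
# data can persist in place of a meaningful run-time value.
SENTINEL = math.nan

class Channel:
    def __init__(self) -> None:
        self.gain = SENTINEL        # default data, as saved with the class

    def initialize(self, measured_gain: float) -> None:
        # The run-time "constructor" swaps the sentinel for a real value.
        if math.isnan(self.gain):
            self.gain = measured_gain

ch = Channel()
ch.initialize(measured_gain=3.7)
print(ch.gain)  # 3.7
```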

 

Message 10 of 77