LabVIEW


Typedefs and Class Design


@JackDunaway wrote:

The "always typedef your clusters" mentality is driven into skulls from Day One of traditional LabVIEW best practices. When dipping toes or diving into LVOOP, I think this assumption is baggage that might hinder comprehension of and success with LVOOP (anecdotal, from my own experience).


"Always typedef your clusters" is good advice.  "Use classes instead of clusters" is better advice.  🙂

 

The desire to use typedefs definitely gets in the way of effectively using LVOOP.

 

 


@JackDunaway wrote:

(I'm in the middle of a 3-days-running "AHA!" experience)


They're fun, aren't they.  🙂

 

 


@JackDunaway wrote:
(or "code module boundaries" - your continual judicious word choice noted and appreciated) 

I use "modules" (or "components") because I almost never have a single class operating on its own.  My classes are grouped with several other classes in a lvlib, all which work together to provide a cohesive set of functionality.

Message 11 of 77

 


@JackDunaway wrote:

 


@jarrod S. wrote:

I would disagree. While it is nice default behavior to have this auto-mutation for class data, I would never rely on it exclusively in any case, since the class designer has no control or insight into its behavior. I would also not change other parts of my class design to attempt to maintain a nicer auto-mutation path.

 

In my mind, if you require mutation, you should set up your own serialization and mutation scheme manually. We do this commonly in our product line, and once our convention was established for the process, it was fairly easy to keep it up. I think it's worth the initial effort.


 

 

  1. How do you combat the IDE issues with the Bundle/Unbundle?
  2. What advantages does developer generated/maintained data mutation code have over LabVIEW generated/maintained mutation code? (One answer: allows you to assign non-default run-time values to class members. My counter: where this is needed, have a NULL/Sentinel default value assigned, which is then detected and changed using a run-time initialization ["constructor"] method.)
  3. Let's set the auto-mutation language features of the .lvclass aside: are there other reasons to continue using typedefs as class members rather than replacing all typedefs with classes?
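(Regarding the counter in point 2: a minimal text-language sketch of the sentinel-plus-constructor idea, written in C++ since a LabVIEW diagram can't be pasted here; the class and field names are invented purely for illustration.)

```cpp
#include <cmath>
#include <limits>

// Sketch of the counter-proposal in point 2: the class's default value is a
// sentinel ("NULL") that a run-time initialization method detects and replaces.
class Measurement {
public:
    // Default state uses NaN as the sentinel for "not yet initialized".
    Measurement() : scaleFactor_(std::numeric_limits<double>::quiet_NaN()) {}

    // Run-time "constructor" method: assigns the real, non-default value.
    void Initialize(double scaleFactor) {
        if (std::isnan(scaleFactor_)) {   // only replace the sentinel
            scaleFactor_ = scaleFactor;
        }
    }

    bool IsInitialized() const { return !std::isnan(scaleFactor_); }

private:
    double scaleFactor_;   // analogous to a field in the .lvclass private data
};
```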

 


 

1. I did not address this at all. I haven't really seen this much, though I'm aware of the issue.

2. There are plenty of reasons. One that comes to mind was when we negated the meaning of a boolean field for better clarity both internally and in the public serialized data structure. To do this properly, the mutation needed to negate the old value to populate the new field. This reasoning also applies to any other situation where the meaning of a field changes. For instance, one field that was an enum might change into multiple boolean fields or vice-versa. If you don't have manual versioning and you want to have mutation, you should at the very least set up some auto-tests to deserialize various class data from various versions into the most recent version to check the results.
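(A minimal text-language sketch of that kind of manual mutation, in C++ for readers who want it spelled out; the version numbers, field names, and negation are invented to mirror the example above, not taken from any real product.)

```cpp
#include <cstdint>
#include <stdexcept>

// Hypothetical persisted records for two versions of the same class data.
struct PersistedV1 { bool suppressOutput; };  // old field: true meant "hidden"
struct PersistedV2 { bool showOutput;     };  // new field: meaning negated

// Manual mutation: bring any known serialized version up to the current one.
PersistedV2 Mutate(std::uint32_t version, const void* data) {
    switch (version) {
        case 1: {
            const auto* v1 = static_cast<const PersistedV1*>(data);
            // The mutation must negate the old value to populate the new field.
            return PersistedV2{ !v1->suppressOutput };
        }
        case 2:
            return *static_cast<const PersistedV2*>(data);
        default:
            throw std::runtime_error("unknown serialized version");
    }
}
```

An auto-test along the lines of the last sentence would feed canned version-1 and version-2 records through Mutate and compare the results against the expected current-version values.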

Jarrod S.
National Instruments
Message 12 of 77

@jarrod S. wrote:

In my mind, if you require mutation, you should set up your own serialization and mutation scheme manually.


I agree.  Too bad there are no Serialize/Deserialize prims we can override in our classes.

Message 13 of 77

 


Daklu wrote:  

@JackDunaway wrote:

(I'm in the middle of a 3-days-running "AHA!" experience)


They're fun, aren't they.  🙂


 

@Daklu: AHA! is optimistic. A closer approximation is: AH-....ummm....ah-!...umm...ahh...eh...uhhh...-ha?...?? 😐

 

 

@jarrod: Yeah, I agree with your benefits of user-defined over LabVIEW-tracked mutation code, but that's not a point I'm really contesting. What I'm really interested in is question #3 from the bulleted list above, and the essence of why I started this thread: "Let's set the auto-mutation language features of the .lvclass aside: are there other reasons to continue using typedefs as class members rather than replacing all typedefs with classes?"

Message 14 of 77

I would like to point out one valid Use Case for a Cluster in a Class that I use a lot - that is, to group members for File IO.

The Cluster is Privately Scoped and you don't normally need to bother with a typedef in this case.

See MGI example
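(A rough C++ analogue of that pattern; the class, fields, and text format here are invented for illustration. The persisted members live in one privately scoped grouping so the whole block goes to and from disk in a single step, much like flattening a private cluster.)

```cpp
#include <fstream>
#include <string>

// Sketch of the "private cluster for File IO" idea: the class keeps its
// persisted members grouped in one private struct (the "cluster"), so the
// whole group can be written to or read from a file in one step.
class StageSettings {
public:
    void Save(const std::string& path) const {
        std::ofstream f(path);
        f << data_.velocity << '\n' << data_.acceleration << '\n' << data_.homed << '\n';
    }
    void Load(const std::string& path) {
        std::ifstream f(path);
        f >> data_.velocity >> data_.acceleration >> data_.homed;
    }

private:
    struct Persisted {             // privately scoped grouping, like the cluster
        double velocity = 0.0;
        double acceleration = 0.0;
        bool   homed = false;
    } data_;
};
```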

 

I also don't agree with exposing Clusters, unless under very specific constraints - e.g. easily bundling up and wiring all UI references into a Class that acts as the UI Console.

 

 

Certified LabVIEW Architect * LabVIEW Champion
Message 15 of 77

I suppose I'm not really advocating using typedef clusters in class definitions per se.

 

My argument was simply that you shouldn't avoid them purely because of the auto-mutation factors if they were otherwise useful to your organization.

Jarrod S.
National Instruments
Message 16 of 77
@JackDunaway wrote:

...

  1. Let's set the auto-mutation language features of the .lvclass aside: are there other reasons to continue using typedefs as class members rather than replacing all typedefs with classes?

 

The only reasons that come to mind are legacy related. If I am working with a driver that has typedefs in it, I'll keep them, using the excuse that they will only change when the driver changes, the driver will only change when the hardware changes, and the hardware has been stable .... so...

 

An aside on keeping typedefs updated...

 

A Tree.VI (catalog.vi, root.vi, etc.) is very useful in ensuring all VIs used in an app get a chance to "see" an update. In multi-developer apps I insist on using a Tree.VI. Note: Tree.VIs are not fool-proof. You still have to be aware of the possible issues and watch for/guard against them.

 

moving forward...

 

I have been using typedefs to a lesser extent these days and moving toward LVOOP.

 

Ben

 

Retired Senior Automation Systems Architect with Data Science Automation LabVIEW Champion Knight of NI and Prepper LinkedIn Profile YouTube Channel
Message 17 of 77

 


@JackDunaway wrote:
3. Let's set the auto-mutation language features of the .lvclass aside: are there other reasons to continue using typedefs as class members rather than replacing all typedefs with classes?

Communication with a DLL node that takes a struct. That would call for a typedef'd cluster.
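(The C side of that case looks something like the header below; the struct name and fields are invented for illustration. The LabVIEW caller would maintain a typedef'd cluster whose elements match this struct field-for-field, in the same order and with the same numeric types.)

```cpp
#include <cstdint>

// Hypothetical header for a DLL called from LabVIEW via the Call Library
// Function Node. The caller passes a typedef'd cluster whose elements match
// this struct field-for-field (same order, same numeric types).
extern "C" {

struct AcquisitionConfig {
    std::int32_t channel;       // I32 element in the cluster
    double       rangeVolts;    // DBL element
    std::uint8_t enableFilter;  // U8 (boolean) element
};

// Exported function taking a pointer to the struct; on the LabVIEW side the
// matching cluster is passed by pointer.
std::int32_t ConfigureAcquisition(const AcquisitionConfig* config);

}  // extern "C"
```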

 

Within LV, I think the time needed to develop the cluster typedef is basically identical to the time needed to create a class and populate its private data control (the "autoarrange vertical" already turned on for classes might even make the class solution faster). If I'm just using the data type programmatically, I don't see any reason to use typedef clusters instead of creating a class. I reach for typedef'd clusters only when user display or data entry is required. When I need to display the data on a user-visible panel, I wish for a different solution for classes. Creating that display can get tedious, and generally I want to fall back on the cluster at that point.

 

I suppose if I were creating "Point" or "Rectangle", I might make plain typedefs of those, but those common "pure numeric clusters" already exist as typedefs, and I've never really thought about whether I'd convert them or not. There doesn't seem to be much need for inheriting from either one. Generally, having them wide open to all math operations maximizes their utility. It might be nice to have some differentiation between a "Point" and an "Offset" and prevent adding two Points together, but that's such a minor thing, I'm not sure it would be worth it.
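(In a textual language the Point/Offset distinction would look roughly like the sketch below; the types and operators are invented to illustrate the idea, not anything LabVIEW generates.)

```cpp
// Distinct Point and Offset types: Point + Offset is meaningful,
// while adding two Points simply doesn't compile.
struct Offset { double dx, dy; };
struct Point  { double x,  y;  };

Point  operator+(Point p, Offset o)  { return { p.x + o.dx, p.y + o.dy }; }
Offset operator+(Offset a, Offset b) { return { a.dx + b.dx, a.dy + b.dy }; }
// No operator+(Point, Point) is declared, so that expression is a compile error.
```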

 

The concept of "tuples" is valuable in other programming languages. I might use a typedef'd cluster if all the elements within it were classes for some interesting sorting/searching cases.

 

I still typedef enums. I still typedef numerics if I think their representation may change in the future or if I'm using units. I definitely use strict typedefs from time to time on things that are going to be UI elements.

 

Message 18 of 77

 


@Ben wrote:

A Tree.VI (catalog.vi, root.vi etc) are very useful in ensuring all Vi used in an app get a chance to  "see" an update. In multi-developer apps I insist on using a Tree.VI.


A project loads all of its classes and a class loads all of its member VIs on the desktop platform. This is sometimes seen as a negative, but it has essentially eliminated my need for a "keep my project in memory" VI because all the VIs I'm working on are always in memory.

 

Message 19 of 77

For those who are NOT LVOOPing, this thread from Greg McKaskle is the earliest mention of the Fake Root that I know of.

 

Ben

Retired Senior Automation Systems Architect with Data Science Automation LabVIEW Champion Knight of NI and Prepper LinkedIn Profile YouTube Channel
Message 20 of 77