LabVIEW Development Best Practices Discussions


LVClass Private Data Reference in Run-Time

The answer is to programmatically create a definition of the bit-packed class message, including each element's name, data type, default value, limits, and definition. Since this information is already stored in the private class data cluster, why not allow read-only properties for it?

Message 11 of 29

> why not allow read-only properties to them?

Because reading the data is just as bad as writing it: both create dependencies on internal structure that no software entity inside the application but outside of the class itself should EVER know about. Ever. The acceptable uses for this info are scripting and debuggers. However, since the LV development environment and run-time environment are a single environment, even the debugger case is largely cut off.

If you want to create such a bit-packed representation of the objects of the class, add a method to the class that does this.
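In text-language terms (Java here, purely for illustration; every name in the sketch is hypothetical), the idea is that the class itself owns its wire format, so nothing outside it ever touches the private data:

// Minimal sketch: the class owns its packed representation, so no
// outside code ever needs to see its private fields.
public final class TemperatureMessage {
    private final String sensorName;   // private data stays private
    private final double reading;

    public TemperatureMessage(String sensorName, double reading) {
        this.sensorName = sensorName;
        this.reading = reading;
    }

    // The only sanctioned way to get a wire representation of this object.
    public String toPackedString() {
        return sensorName + "|" + reading;
    }

    // The matching factory keeps knowledge of the format inside the class.
    public static TemperatureMessage fromPackedString(String packed) {
        String[] parts = packed.split("\\|");
        return new TemperatureMessage(parts[0], Double.parseDouble(parts[1]));
    }
}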

Message 12 of 29

Well, you would need access to type information and hierarchy information to create an object serialization framework, yes?  (I agree that if complete solutions for this and other similar features become part of LabVIEW in the future, then an application developer won't need this access to create an application. There is no complete solution yet, though.)

Message 13 of 29

paul.dct wrote:

Well, you would need access to type information, and hierarchy information, to create an object serialization framework, yes?

Not necessarily; it depends on what you want the framework to do.  If you want it to accept and serialize any arbitrary class, then yes.  While someone who understands the consequences might choose to do that for a specific project, imo it is not a good global design decision.  It is better to make each class responsible for its own serialization routines.  A universal serializer is prone to errors.  (As evidenced by the ability to flatten objects to a string.)

You can make a lightweight serialization framework using delegation instead of inheritance.  It's more work and not as clean as Interfaces, but it can be done.
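As a rough sketch of what delegation looks like in a textual language (Java for illustration; all names are hypothetical), the serializer is a separate object handed to the framework rather than a parent class the data must inherit from:

// Sketch of delegation: the framework is handed a serializer object
// for each class instead of requiring a common "Serializable" parent.
interface Serializer<T> {
    String serialize(T value);
}

final class Point {
    final int x, y;
    Point(int x, int y) { this.x = x; this.y = y; }
}

// A separate object; Point itself knows nothing about serialization.
final class PointSerializer implements Serializer<Point> {
    public String serialize(Point p) { return p.x + "," + p.y; }
}

final class Sender {
    // The sender delegates to whatever serializer it is given.
    static <T> void send(T value, Serializer<T> s) {
        System.out.println("sending: " + s.serialize(value));
    }

    public static void main(String[] args) {
        send(new Point(3, 4), new PointSerializer());
    }
}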

Message 14 of 29

Serialization of any arbitrary class is exactly the thing that led me to discover this workaround.  Daklu, please elaborate briefly on what you mean by "not a good global design decision" and "universal serializer is prone to errors". 

Message 15 of 29

AristosQueue wrote:

If you want to create such a bit-packed representation of the objects of the class, add a method to the class that does this.

Our application contains hundreds of messages, and each message already requires three VIs to handle parsing/building a TCP string and executing its function. This post came from a request by other developers to minimize the number of methods each message requires.

Daklu - Do you have an example you can post using delegation instead of inheritance?

Message 16 of 29

"...universal serializer is prone to errors."

The problem with a universal serializer is that it treats all objects the same. The serializer designer has to make decisions that should be left up to the serializer user. For example, if an object has any kind of reference (queue, DVR, etc.), should the reference number or the referenced data be serialized? If not, how should the object behave when it is deserialized without valid refnums? What if the object has some data that should be serialized and some that should not? What if the class designer doesn't want--for whatever reason--an object to be serializable?
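To make the refnum dilemma concrete, here is a hedged sketch (Java, hypothetical names): the only thing a generic flattener can record for a reference is its numeric identity, which means nothing once the object is deserialized somewhere else:

import java.util.Map;
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;

// Stand-in for LabVIEW's refnum table: handle number -> live resource.
final class QueueRegistry {
    static final Map<Integer, Queue<String>> LIVE = new ConcurrentHashMap<>();
}

final class Logger {
    private final int queueRefnum; // valid only where it was created

    Logger(int queueRefnum) { this.queueRefnum = queueRefnum; }

    // All a universal serializer can capture is the bare number.
    String flatten() { return "Logger{refnum=" + queueRefnum + "}"; }

    // After deserialization in another context, the handle dangles.
    boolean isUsable() { return QueueRegistry.LIVE.containsKey(queueRefnum); }
}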

The most talked about problem with flattening an object to a string is version compatibility.  That causes plenty of headaches in itself, but it really just scratches the surface.  IMO these other questions are just as or more important than the versioning issue.

"...not a good global design decision..."

I mean it works fine in relatively isolated environments, as long as one is willing to accept the tradeoffs associated with using a universal serializer. I have no problem whatsoever with an organization deciding to use a universal serializer as part of its standards, but the conditions are too restrictive to be able to serialize any arbitrary class in even a small- to medium-sized application. In fact, I think I saw a post recently by you or Paul explaining the relatively narrow conditions in which it would be applicable. (No references, stable classes, etc.)

Conceptually, a universal serializer should successfully serialize and deserialize any object in a way that matches the object designer's intent.  After all, it's "universal."  That's an impossible task--the serializer designer has to make decisions during implementation that will inevitably conflict with a user's desire.  So the universal serializer is not, in fact, universal. 

Furthermore, using a universal serializer forces leaky abstractions. Anyone using it has to know *how* the serializer works internally, and it requires them to know the internal details of any class they want to serialize. Suppose I release a (locked) by-val class that you decide to serialize. During a bug-fix update, I add a queue as a private internal data structure. All of a sudden, deserializing the class doesn't work anymore and you don't have any idea why. My class's API is identical and none of your code changed, but the update broke your code.

Serialization decisions are highly specific to the class being serialized.  It is a responsibility best left to the class designer, not the class user or serializer designer.

Message 17 of 29

> Our application contains hundreds of messages, and each message already requires three VIs to handle parsing/building a TCP string and executing its function. This post came from a request by other developers to minimize the number of methods each message requires.
>
> Daklu - Do you have an example you can post using delegation instead of inheritance?

No, I do not. Delegation would be most useful if you need to serialize unrelated classes and treat them all identically. I can whip up a UML diagram if you need one, but if you're just concerned about serializing messages, you might consider creating an abstract SerializableMessage child class of the AF message class, adding a serialize method, and having all your messages inherit from that (sketched below). From there you have several options depending on what kind of data the messages contain and the project constraints.
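In text-language terms the suggestion is roughly this (Java for illustration; the Message parent is a stand-in for the AF message class, and the other names are hypothetical):

// One abstract child adds the serialize contract; every concrete
// message fills it in for its own private data.
abstract class Message {              // stand-in for the AF message class
    abstract void execute();
}

abstract class SerializableMessage extends Message {
    abstract String serialize();      // each message serializes itself
}

final class SetTemperatureMessage extends SerializableMessage {
    private final double setpoint;

    SetTemperatureMessage(double setpoint) { this.setpoint = setpoint; }

    void execute() { /* apply the setpoint */ }

    String serialize() { return "SetTemperature|" + setpoint; }
}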

Message 18 of 29

Point taken about references, for example. Where I think this is applicable is for a messaging interface (such as is in common use in several Java frameworks) and for which serialization frameworks exist in competing development environments (e.g., Simple XML for Java). I think other uses can fall in the category of "use at your own risk."

The particular use case we have in mind is to send data messages between components in the form of command objects for use with the Command Pattern.  It's great if issues of versioning are handled by the framework, but I don't expect this.  Rather, the definition of the interfaces handles this, and the components can be responsible for any updates, if necessary.  Serialization for messaging within LabVIEW for this purpose is easy using one of the existing global frameworks.  Serializing objects for sharing with an application in another development environment presently requires a great deal of custom code (we do this!), and I certainly don't see why this should be the case.
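For a sense of what this looks like on the Java side, here is a hedged sketch using the Simple XML library mentioned above (org.simpleframework.xml); the command class itself is hypothetical:

import org.simpleframework.xml.Element;
import org.simpleframework.xml.Root;
import org.simpleframework.xml.core.Persister;
import java.io.StringWriter;

@Root(name = "moveCommand")
final class MoveCommand {
    @Element private String axis;
    @Element private double position;

    MoveCommand() {}                  // no-arg constructor for deserialization
    MoveCommand(String axis, double position) {
        this.axis = axis;
        this.position = position;
    }
}

final class Demo {
    public static void main(String[] args) throws Exception {
        StringWriter xml = new StringWriter();
        new Persister().write(new MoveCommand("X", 12.5), xml);
        System.out.println(xml);      // XML any environment can parse

        MoveCommand roundTrip =
            new Persister().read(MoveCommand.class, xml.toString());
    }
}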

Message 19 of 29

paul.dct wrote:

  Serializing objects for sharing with an application in another development environment presently requires a great deal of custom code (we do this!), and I certainly don't see why this should be the case.

You may not see why this should be the case, but many people have looked at the problem and realized, "Wow. That's one of the hardest problems in computer science. I never would have guessed."

For the record, it is not easy within LabVIEW either. It is easy within a single application. But as soon as you communicate between two hierarchies -- say a client running in parallel to a server -- you run into the exact same problems. The LV automatic versioning only works one way. The serialization has rules for handling references, and if you want to have other rules, you have to implement your own. There are a thousand reasons why you have to roll your own, even within LabVIEW.  The auto serialization system that I added to LV was a major boon for one particular use case, but it cannot cover all the issues.
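A common hand-rolled answer to the two-way versioning problem is an explicit version tag, sketched here in Java with hypothetical names:

// Writing a version number first lets the reader cope with both older
// and newer payloads, something one-way automatic versioning cannot do.
final class SettingsCodec {
    static final int CURRENT_VERSION = 2;

    static String write(double gain, double offset) {
        // v2 added "offset"; v1 payloads carried only "gain".
        return CURRENT_VERSION + ";" + gain + ";" + offset;
    }

    static double[] read(String payload) {
        String[] parts = payload.split(";");
        int version = Integer.parseInt(parts[0]);
        double gain = Double.parseDouble(parts[1]);
        // An older writer sent no offset; fall back to the default.
        double offset = (version >= 2) ? Double.parseDouble(parts[2]) : 0.0;
        return new double[] { gain, offset };
    }
}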

Serialization is a significantly non-trivial problem... you can see my gigantic project to take this on here:

https://decibel.ni.com/content/docs/DOC-22568

It's taken me the better part of a year to just get a rough sketch up and running to cover even the most trivial cases.

I think that the framework that I've been constructing (and it has changed a lot since the last time I rev'd the document) will be able to make some of these situations easier, but you're always going to have a custom function on every class to control the serialization. Nothing else will do, for the reasons that Daklu specified and more.

Message 20 of 29