LabVIEW


Convert Complex (Pink) Cluster to Byte Array

I've seen this asked in the past. Looking to see if any updates have been made or if it warrants a feature request post because this is an issue our team is struggling with. We need to convert complex clusters to byte arrays, without anything extra in them. By complex cluster, I'm referring to the pink-colored ones. These are clusters that contain data more complex than simple scalars, such as an array or subcluster. There are currently two functions that I'm aware of for converting a cluster to byte array:

 

Type Cast - This only works with simple clusters (orange/brown).

 

Flatten To String (followed by String To Byte Array) - This prepends the size of any array or string inside the cluster into the byte array, even with the Prepend Array or String Size input set to False, since that input applies only to the cluster as a whole.
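To illustrate the problem, here is a minimal Python stand-in for the LabVIEW behavior, using a hypothetical two-field cluster (a count plus a U8 array). Flatten To String inserts a big-endian I32 element count before each embedded array, which a fixed-layout consumer will not expect:

```python
import struct

# Hypothetical cluster: { count: U16, samples: array of U8 }
count = 4
samples = [10, 20, 30, 40]

# What Flatten To String produces for the inner array:
# a big-endian I32 element count is prepended before the array data.
with_prefix = struct.pack(">H", count) + struct.pack(">i", len(samples)) + bytes(samples)

# What a fixed-layout consumer typically expects: raw fields only,
# with the count carried in its own field.
raw = struct.pack(">H", count) + bytes(samples)

print(with_prefix.hex())  # -> '0004000000040a141e28'
print(raw.hex())          # -> '00040a141e28'
```

Those four extra bytes in the middle of the packet are exactly the "superfluous data" the external parser chokes on.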

I've seen this topic addressed in the past, with some of the replies including the following:

"Why is it a problem that it's prepending the string/array size?"
"But is that really an issue? How else are you going to be able to tell how many elements are in your array or bytes in your string?"

 

This is absolutely an issue for us because those byte arrays are being sent to external dlls and/or sent as UDP packets, and being parsed by a 3rd party application that requires the data to be in a specific format. The length of the arrays is already captured inside one of the fields in the cluster itself.

 

We can manually break apart the clusters and build up the byte arrays by hand, but this is a very tedious process. We have a dozen or two clusters, each with multiple subclusters inside. Additionally, the original clusters are auto-generated as type defs using some VI scripting tools, because the shape of the clusters occasionally changes during development. When one of those clusters changes, we now have to remember to go into a subVI deep in our application and make sure the conversion for each of the regenerated clusters is still correct. It's not very dynamic or maintainable.

 

Asking for any new insight on this problem. If not, it seems to me that it warrants being proposed as a feature request.

 

0 Kudos
Message 1 of 23
(3,811 Views)

UPDATE: Just had the idea of converting to Variants first and processing the Variant through its components into a byte array.

0 Kudos
Message 2 of 23
(3,803 Views)

Hi ajf,

 


@ajf200 wrote:

Additionally, the original clusters are being auto-generated as type defs using some VI scripting tools.


Then you could also/additionally create a complete VI using scripting to convert that cluster into a byte array…

Best regards,
GerdW


using LV2016/2019/2021 on Win10/11+cRIO, TestStand2016/2019
0 Kudos
Message 3 of 23
(3,789 Views)

It's an option, but VI scripting with subclusters is kind of a pain in the ass, in my experience. And this is significantly more development time than using existing functions.

0 Kudos
Message 4 of 23
(3,778 Views)

@ajf200 wrote:

This is absolutely an issue for us because those byte arrays are being sent to external dlls and/or sent as UDP packets, and being parsed by a 3rd party application that requires the data to be in a specific format. The length of the arrays is already captured inside one of the fields in the cluster itself.


So how do you know that, except for the prepended size, the flattened cluster would be digestible by the external app?

 

How many different types of such clusters do you need? Can't you just create a subVI that does the custom flattening? (unbundle by name, flatten each part, concatenate, etc.)*

 

*Edit: just noticed that has been suggested in the other thread already.
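As a sketch of that unbundle-flatten-concatenate pattern (a Python stand-in with a hypothetical field layout, not the actual clusters from this project): each field is packed individually and the pieces concatenated, so no size prefixes ever appear, and the array length travels as an ordinary field.

```python
import struct

# Hypothetical nested "cluster", mirroring an unbundle-by-name approach.
packet = {
    "msg_id": 7,                           # U16
    "payload_len": 3,                      # U8, length carried as a real field
    "payload": [1.5, 2.5, 3.5],            # array of SGL
    "inner": {"flag": True, "code": -2},   # subcluster
}

def flatten_packet(p: dict) -> bytes:
    out = struct.pack(">H", p["msg_id"])
    out += struct.pack(">B", p["payload_len"])
    for x in p["payload"]:                 # raw elements, no prepended size
        out += struct.pack(">f", x)
    out += struct.pack(">?b", p["inner"]["flag"], p["inner"]["code"])
    return out
```

The downside is exactly what ajf describes: this code is hand-written per cluster, so every layout change means revisiting it.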

 

(Also be careful with wording, the term complex has a special meaning and I don't think that's what you have here) 

 

 

0 Kudos
Message 5 of 23
(3,764 Views)

I'm not sure this is what you need, but you could always keep unbundling until all you have are data types that can be directly converted into byte arrays.  Then just concatenate all the byte arrays and send.

Bill
CLD
(Mid-Level minion.)
My support system ensures that I don't look totally incompetent.
Proud to say that I've progressed beyond knowing just enough to be dangerous. I now know enough to know that I have no clue about anything at all.
Humble author of the CLAD Nugget.
0 Kudos
Message 6 of 23
(3,716 Views)

@altenbach wrote:


So how do you know that, except for the prepended size, the flattened cluster would be digestible by the external app?


1. That is what the documentation says.

2. That is what we have tested as working hundreds of times already. Superfluous data in the packet does not work.

3. That is what the tools for the external app verify is working.



How many different types of such clusters do you need?


Dozens of them. Each one is different. Below is a picture and I've attached a simplified typedef example (LV2020CE) of what one such cluster might look like. Now, consider a dozen or two permutations of that, with datatypes, orders, lengths, and hierarchy changing.

 

ClusterImageCapture.PNG


 

Can't you just create a subVI that does the custom flattening? (unbundle by name, flatten each part, concatenate, etc.)*

*Edit: just noticed that has been suggested in the other thread already.


Just look at the example above and imagine doing that for dozens of those. They are very tedious and error prone. Additionally, 3 months from now a developer might decide to add or edit one of the fields in the middle of the cluster. Now we have to regen the clusters, and then, assuming we notice the difference, remember to go into the subVI and make the changes manually. It's a lot to maintain for a very small part of a large test system, especially with changing datatypes, an evolving project, and various engineers and developers coming online and offline on this project.

 


(Also be careful with wording, the term complex has a special meaning and I don't think that's what you have here) 


If you have a better name for them, let me know and I will edit the post.

 

Anyway, I spent a few hours tonight developing a VI that will convert a Variant to Byte Array without anything extra in it. This should work on a bunch of different datatypes, including all real numbers, booleans, strings, paths, arrays (that include the aforementioned datatypes), and clusters of all of the above (the VI is recursive to process clusters).
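In text-language terms, the recursive walk is roughly this (a Python analog of the Variant traversal; the scalar type mappings here are illustrative assumptions, not the VI's actual type descriptors):

```python
import struct

# Recursive flattener, roughly analogous to walking a Variant's type
# information: scalars are packed directly, lists are flattened element by
# element with no size prefix, and dicts stand in for clusters and are
# recursed in field order.
def to_bytes(value) -> bytes:
    if isinstance(value, bool):                # check bool before int
        return struct.pack(">?", value)
    if isinstance(value, int):
        return struct.pack(">i", value)        # assume I32 for this sketch
    if isinstance(value, float):
        return struct.pack(">d", value)        # assume DBL
    if isinstance(value, str):
        return value.encode("ascii")           # raw bytes, no length prefix
    if isinstance(value, (list, tuple)):       # array: elements only
        return b"".join(to_bytes(v) for v in value)
    if isinstance(value, dict):                # cluster: recurse per field
        return b"".join(to_bytes(v) for v in value.values())
    raise TypeError(f"unsupported type: {type(value).__name__}")
```

The point of doing it generically like this is that the conversion keeps working when the auto-generated type defs change shape.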

 

One other thing I forgot to mention, to explain the methodology: this VI is often used on Windows in addition to Linux x64 on RT targets (cRIO and PXI) as part of VeriStand deployments. One of the limitations we ran into early on is that property nodes do not work well on RT targets, because they often put out default values, since front-panel objects are not really a thing on RT systems. This could have been due to some sloppy development early on as we were learning to implement these in VeriStand Custom Devices, but we decided to abandon all property nodes to avoid any issues with deployments. Therefore, this entire solution uses Variants and the Data Type Parsing functions.


Anyway, enjoy. For those of you who get to try this out, please test and let me know of any bugs. I finished it late at night and several beers deep.

Next up: byte array to cluster (much harder problem that will likely involve vi scripting).

 

https://github.com/aaronfleishman/VariantToByteArray

0 Kudos
Message 7 of 23
(3,698 Views)

As an alternative, you can convert the cluster to a JSON string and send it as a byte array. Since it's a string, it's easy to read and interpret, and it handles changes better.
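A minimal sketch of that approach (Python stand-in; the address, port, and field names are hypothetical): serialize the cluster-like structure to a UTF-8 byte array and send it over UDP. Field names travel with the data, so adding or reordering fields won't silently shift byte offsets.

```python
import json
import socket

# Hypothetical cluster-like structure serialized to JSON bytes.
cluster = {"msg_id": 7, "samples": [1.5, 2.5], "inner": {"flag": True}}
payload = json.dumps(cluster).encode("utf-8")

# Send the JSON byte array as a UDP packet (hypothetical address/port).
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(payload, ("127.0.0.1", 5005))
sock.close()
```

The trade-off is that the receiving side must parse JSON rather than a fixed byte layout, so this only helps if the external application can be changed.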

G# - Award winning reference based OOP for LV, for free! - Qestit VIPM GitHub

Qestit Systems
Certified-LabVIEW-Developer
Message 8 of 23
(3,678 Views)

Maybe we should attack this from another angle. What's the definition you need to adhere to on the DLL side? And/or, can't you add a function to handle some LV format?

G# - Award winning reference based OOP for LV, for free! - Qestit VIPM GitHub

Qestit Systems
Certified-LabVIEW-Developer
0 Kudos
Message 9 of 23
(3,675 Views)

So if the data structures change during development, the external app has to be changed in order to digest them properly again, right?

 

My thoughts are with Yamaeda.  Complicated, hierarchical data structures that are subject to change don't seem to map well to raw byte-mapping manipulations.  JSON, XML, or some analogously hierarchical and self-defining scheme sure *sounds* like a more natural approach overall.

 

I realize this doesn't solve the problem you asked about, but I for one sometimes find myself locked in on a difficult solution path where a few impertinent questions from a colleague help me realize that I should try a whole different approach.

 

 

-Kevin P

ALERT! LabVIEW's subscription-only policy came to an end (finally!). Unfortunately, pricing favors the captured and committed over new adopters -- so tread carefully.
0 Kudos
Message 10 of 23
(3,667 Views)