08-07-2015 01:18 AM
I'm writing a program that sends UDP packets, and I've defined the data I want to send via large clusters (with u8/u16/u32 numbers, u8/u16/u32 arrays, and nested clusters). Right before sending the data, I need to convert the clusters either into strings or byte arrays. The Flatten To String function is almost perfect for this purpose. However, it's prepending lengths to arrays and strings, which renders this method useless as far as I can tell.
As I have many of these clusters, I would rather not hard-code an Unbundle By Name plus a conversion/typecast to byte arrays or strings for each one.
Is there a feature or tool I am overlooking?
Thank you!
08-07-2015 01:35 AM
08-07-2015 05:46 AM
Why is it a problem that it's prepending the string/array size? The unflatten function should still be able to decode it back into a cluster. There's also an input to the flatten function which can turn off the prepending of string/array size to the string.
You can also type cast to a string or an array of bytes I believe as well.
If you're using LV2013, you can flatten to JSON, which is a more human-readable format (but obviously not as efficient in terms of data transferred).
When we do TCP/UDP reads, we normally prepend an integer with the packet length so we know how much data to read from the port for our message. We wait for 4 bytes, convert that to an integer and then read that many bytes to get the full message.
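To make that framing scheme concrete, here's a minimal sketch in Python (used only as a textual stand-in for the LabVIEW diagram; the function names are made up for illustration). It shows the same idea: prepend a 4-byte length, and on the receiving side read the length first, then exactly that many bytes.

```python
import struct

def frame(payload: bytes) -> bytes:
    # Prepend a 4-byte big-endian length so the receiver knows
    # how many bytes belong to this message.
    return struct.pack(">I", len(payload)) + payload

def read_message(buf: bytes) -> tuple[bytes, bytes]:
    # Read the 4-byte length, then slice out that many bytes;
    # return the message and whatever is left in the buffer.
    (length,) = struct.unpack(">I", buf[:4])
    return buf[4:4 + length], buf[4 + length:]

framed = frame(b"hello")        # b"\x00\x00\x00\x05hello"
msg, rest = read_message(framed)
```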
08-07-2015 06:46 AM
Anything embedded in the cluster will have the lengths in there when you flatten. But is that really an issue? How else are you going to be able to tell how many elements are in your array or bytes in your string? That information is there because it is important.
Just use the Flatten To String and Unflatten From String and be done with it. An extra 4 bytes added to an array/string will be minor if your clusters are as large as you say.
08-07-2015 10:18 AM
Thanks for responses!
The program I'm developing is for communicating/polling an external device that has a well documented protocol. The messages I constructed in clusters contain the exact components the other device expects. Thus, if my program appends any additional info (such as these array/string lengths), the external device's parser will not recognize the messages and return an error.
In regards to using typecast, can you send a cluster through a typecast? I haven't been able to do so - the terminal and sink are incompatible.
08-07-2015 10:41 AM
Flatten to string has a boolean input of "Prepend string or array size" ... The default value is true.
08-07-2015 10:43 AM
Unfortunately, when the cluster contains arrays within it, that boolean input only affects the top layer; any arrays/strings inside it, or inside its nested clusters, will still get a prepended length.
08-07-2015 12:05 PM
@mkirzon wrote:
The program I'm developing is for communicating/polling an external device that has a well documented protocol. The messages I constructed in clusters contain the exact components the other device expects. Thus, if my program appends any additional info (such as these array/string lengths), the external device's parser will not recognize the messages and return an error.
Unfortunately, you are going to have to unbundle your cluster and build up the individual strings to concatenate.
08-07-2015 12:41 PM
An alternate approach is to replace the embedded arrays with clusters containing exactly the right number of elements. Since the cluster size is fixed, even a nested cluster will not have a size prepended. This will also allow you to use Type Cast. You can put a cluster through Type Cast, but only if the cluster contains no variable-sized elements (strings or arrays).
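In text form, the fixed-size approach is equivalent to a wire layout where every field width is known in advance, so no length markers are needed anywhere. A small Python sketch (Python standing in for the LabVIEW diagram; the message fields here are hypothetical, not from the original protocol):

```python
import struct

# Hypothetical fixed-size message: one u8 command code, one u16
# register address, and exactly four u8 data bytes (the analog of
# a cluster of 4 u8s rather than a variable-sized array).
LAYOUT = ">BH4B"  # big-endian; no length fields anywhere

def pack_message(cmd: int, addr: int, data: bytes) -> bytes:
    # The fixed element count is what makes a length prefix unnecessary.
    assert len(data) == 4
    return struct.pack(LAYOUT, cmd, addr, *data)

packet = pack_message(0x01, 0x0010, bytes([1, 2, 3, 4]))
# 7 bytes total: 1 + 2 + 4, with no prepended sizes
```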
08-07-2015 04:06 PM
@deceased wrote:
Flatten to string has a boolean input of "Prepend string or array size" ... The default value is true.
That only specifies if a string or array size should be prepended if the outermost data element is a string or array. For embedded strings or arrays it has no influence. This is needed for the Unflatten to be able to reconstruct the size of the embedded strings and arrays.
The choice to represent the "Strings" (and Arrays) in the external protocol as LabVIEW strings (and arrays) is actually a pretty bad one, unless some other element in the cluster defines the length of the string. An external protocol always needs some means of determining how long an embedded string or array is, in order to decode the subsequent elements correctly.
Possible choices here are therefore:
1) some explicit length in the protocol (usually prepended to the actual string or array)
2) a terminating NULL character for strings (not very friendly for reliable protocol parsing)
3) A fixed size array or string
For numbers 1) and 2) you would always need to do some special processing, unless the protocol happens to use explicitly 32-bit integer length indicators directly prepended to the variable-sized data.
For number 3) the best representation in LabVIEW is actually a cluster with as many elements inside as the fixed size.
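To illustrate how options 1) and 2) differ when parsing, here is a short Python sketch (again just a textual stand-in; the helper names and the sample buffer are invented for illustration). Option 1 reads an explicit u16 length and then that many bytes; option 2 scans forward for the terminating NULL:

```python
import struct

def read_len_prefixed(buf: bytes, offset: int) -> tuple[bytes, int]:
    # Option 1: explicit u16 length prepended to the string.
    (n,) = struct.unpack_from(">H", buf, offset)
    start = offset + 2
    return buf[start:start + n], start + n

def read_null_terminated(buf: bytes, offset: int) -> tuple[bytes, int]:
    # Option 2: scan for the terminating NULL byte.
    end = buf.index(0, offset)
    return buf[offset:end], end + 1

# A buffer holding a length-prefixed "abc" followed by a
# NULL-terminated "def".
data = b"\x00\x03abc" + b"def\x00"
s1, pos = read_len_prefixed(data, 0)       # s1 == b"abc", pos == 5
s2, pos = read_null_terminated(data, pos)  # s2 == b"def", pos == 9
```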