
How to convert an array of bytes into a single integer

Solved!

Hello!

 

At the moment, I'm trying to receive data from a device that sends back an array of 10 bytes, where the first 2 bytes are a header, the next 6 are a word count that's supposed to be a single 48-bit value, and the last 2 are an error count that's supposed to be a single 16-bit value.

Ex:

Word received (10 bytes):

  Header:       0   120
  Word Count:   0   44   221   155   96   48
  Error Count:  0   0

 

In theory, what I want to do is calculate the values like this:

Word Count bytes:   0   44   221   155   96   48
Binary:             00000000 00101100 11011101 10011011 01100000 00110000
Concatenated:       000000000010110011011101100110110110000000110000
Value:              192696508464 words
Total:              192696508464 * 40 (40-bit words) = 7.7078603e+12 bits

Error Count bytes:  0   0
Binary:             00000000 00000000
Value:              0 errors
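
For reference, here's the same arithmetic as a small text sketch (Python, purely illustrative; the actual implementation will be a LabVIEW diagram). The byte values are the ones from the example above:

# Example packet: 2 header bytes, 6 word-count bytes, 2 error-count bytes
packet = [0, 120, 0, 44, 221, 155, 96, 48, 0, 0]

# Build the 48-bit word count from bytes 2..7 (big-endian: first byte is most significant)
word_count = 0
for b in packet[2:8]:
    word_count = (word_count << 8) | b      # -> 192696508464

# Build the 16-bit error count from the last two bytes
error_count = (packet[8] << 8) | packet[9]  # -> 0

total_bits = word_count * 40                # 40-bit words -> ~7.7078603e+12 bits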

 

The problem I'm running into is that none of the methods I've tried seem to give the correct result.

I've tried:

Flattening the data and then unflattening it as a U64

Converting the data to Boolean arrays, concatenating them, then converting that to a number

Joining all the bytes together using the Join Numbers function

Typecasting a byte array to a U64

 

With every method, the values I get out are incorrect compared to calculating them manually like I did above. At this point I'm unsure how to get the correct values, so any help would be greatly appreciated!

 

Thanks!

 

Message 1 of 3
Solution
Accepted by topic author Ace_Archer

This seems to work for me...
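
One way to see the idea in text form (a rough Python sketch, not the actual LabVIEW code; parse_packet is just an illustrative name): slice the word-count and error-count bytes out of the array separately before converting, so the two header bytes never end up in the value.

# Illustrative sketch only: slice out each field before converting,
# so the two header bytes don't get mixed into the 48-bit value.
def parse_packet(packet):
    word_count = int.from_bytes(bytes(packet[2:8]), byteorder="big")    # 48-bit word count
    error_count = int.from_bytes(bytes(packet[8:10]), byteorder="big")  # 16-bit error count
    return word_count, error_count

word_count, error_count = parse_packet([0, 120, 0, 44, 221, 155, 96, 48, 0, 0])
# word_count -> 192696508464, error_count -> 0

In LabVIEW terms the same thing can be done with Array Subset on the byte array and then combining the bytes big-endian (for example with Join Numbers, or by prepending two zero bytes and typecasting to U64).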


GCentral
There are only two ways to tell somebody thanks: Kudos and Marked Solutions
Unofficial Forum Rules and Guidelines
"Not that we are sufficient in ourselves to claim anything as coming from us, but our sufficiency is from God" - 2 Corinthians 3:5
Message 2 of 3

I don't know what part of it I was getting wrong before, but this worked perfectly! Thank you so much, I appreciate it!

 

 

Message 3 of 3