LabVIEW


floating point to hexadecimal string

Solved!

Hi

 

Attached is a screen shot of what I am trying to do.

 

The equation in my formula box takes a number between 30 and 100 (which represents intensity as a percentage) and yields a floating-point value which, when converted to hex and fed into my instrument, produces the corresponding intensity (i.e. 30% if I input 30 as my x in the formula). For example, my lowest intensity (30) gives a decimal value of 239 from the formula, which is 'EF' in hex.
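The exact formula only appears in the screenshot, but a minimal Python sketch of this kind of linear mapping, with coefficients assumed from the two endpoints quoted in this thread (30% -> 239, 100% -> 128), would look roughly like this:

```python
# Hypothetical linear map from intensity (%) to the instrument code.
# The coefficients are an assumption fitted to 30% -> 239 and 100% -> 128;
# the real formula lives in the LabVIEW formula box shown in the screenshot.
def intensity_to_code(percent):
    return round(239 + (128 - 239) * (percent - 30) / (100 - 30))

print(intensity_to_code(30))    # 239 (0xEF)
print(intensity_to_code(100))   # 128 (0x80)
```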

 

This code doesn't work for some reason, though. If I replace my formula and hexadecimal conversion function with just a simple string control set to hex display, type in EF, and feed that into the instrument, the intensity does adjust to 30%! But I'd rather use the slider, which doesn't work... Can someone please tell me if my conversion function is messed up or something?

 

Thanks!

Message 1 of 8

"...code doesn't work..." is not very helpful for troubleshooting.

 

Exactly what does it do when it is not working? Does it crash the computer or LabVIEW? Does it give numbers that are off by 0.001%? What values show up in String for various values on the Slide? Do you get any errors?

 

I do not see a coercion dot in your image.  When I wire a DBL to the input of Number to Hexadecimal String, I get a coercion dot, so I wonder if all those meandering wires are not connected as you think.

 

Please post the VI rather than an image.

 

Lynn

Message 2 of 8

Converting a floating-point value into hex is unusual. I am guessing that you want to convert a U32 (unsigned 32-bit) value into hex, which is much more rational. Take the output from your formula box (ugh) and send that to the format-to-hex-string function. You should probably round to an integer first to get exactly what you want.
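In text-language terms the suggestion amounts to roughly the following (a Python sketch of the idea, not the actual LabVIEW primitives; note that this produces ASCII hex text, which later posts distinguish from a raw byte):

```python
# Round the floating-point formula output to an integer first,
# then format that integer as hexadecimal text.
value = 238.6                   # example formula output (assumed value)
code = round(value)             # 239
hex_text = format(code, '02X')  # 'EF', i.e. two ASCII characters, not one raw byte
print(hex_text)
```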

 

Your formula ranges from 240 at 30% to about 126 at 100%. Is that what you want to send as a hex value?

 

 


Message 3 of 8

Thanks for the quick replies. Apologies for not being clearer - this is my first time posting here and I am really new to LabVIEW!

 

OK, so I've attached two different VIs - one gives me the desired results and the other does not, but the second one is how I'd like to structure my VI, and it isn't working for now.

 

Yes, the range of 240 to 126 is approximately correct. The light intensity follows an approximately linear relationship with the code I need to write to my instrument. These values (f) are in decimal but need to be converted to hex and fed into the VISA Write function to be read by my instrument.

 

So for the VI that gives me the right results (ATTACHED) - I need to manually input a value of EF (239 in decimal) into the INTENSITY INPUT string control (set to 'hex display' mode) to get the lowest intensity (30%) of my light source, and a value of 80 (128 in decimal) to get my highest intensity (100%). This works fine and controls my light intensity accordingly. I can also confirm that it is read properly by looking at the 'read string' indicator, which shows what the machine reads back as I input my value. The machine is programmed to return whatever hex value it receives; in this case, when I input EF it returns EF, and when I input 80 it returns 80.

 

However, if I add a slider instead of the manual input, feed it through a formula to obtain my decimal value, convert it to hex, and THEN input it into the VISA Write function, then for 30% I get a 'read string' output of F4F4. For 30% I should be getting 'EF' in the 'read string' output, as I did with the first VI. Also, for 100% on the slider I get FEFE as my 'read string', whereas it should have been 80.

 

I have a feeling this has something to do with my conversions...

 

Any help would be much appreciated!

 

Message 4 of 8
Solution
Accepted by topic author AKCanada

Try this.


Message 5 of 8

Sweet! That's solved it! Wonder if you could point me to some resources which would explain why your conversion schematic works?

 

Thanks again

Message 6 of 8

Slightly simpler using an expression node and typecast (avoids the U8 array detour ;)). Same result!
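For anyone following along in a text language, both routes (the U8 array detour and Type Cast) end up sending a single raw byte to VISA Write; a rough Python analogue, assuming the value has already been rounded, might be:

```python
# Two equivalent ways to turn the rounded code into a single raw byte
# (a rough analogue of the two LabVIEW approaches mentioned above;
# the names here are Python, not LabVIEW).
code = 239

via_array = bytes([code])           # "U8 array" style: one-element byte array
via_cast = code.to_bytes(1, 'big')  # "type cast" style: reinterpret the integer as bytes

assert via_array == via_cast == b'\xef'
print(via_array)
```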

 

 

Message 7 of 8

@AKCanada wrote:

Sweet! That's solved it! Wonder if you could point me to some resources which would explain why your conversion schematic works?


Part of the problem is that "HEX" can mean several things. It could be a string of length 1 containing a single byte, with the string display set to hex display. It could also mean a hex-formatted integer, i.e. two printable characters (0..F) per byte, with the string indicator set to normal display.
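A rough Python illustration of the two interpretations (not LabVIEW code, just to show what actually ends up on the wire in each case):

```python
code = 239

# (1) One raw byte whose *display* happens to be hex: this is what the
#     working VI sends when the string control is set to hex display.
raw_byte = bytes([code])            # b'\xef', length 1

# (2) A hex-formatted ASCII string: two printable characters, not one byte.
ascii_hex = format(code, '02X')     # 'EF', length 2

print(len(raw_byte), len(ascii_hex))   # 1 2
```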

 

Without knowing the instrument, we cannot know what is correct. You need to be much more specific.

Once we (and you!) know what you want, things are easy.

 

Since the "works" VI you attached above had the text input control set to hex display, we got a better clue about what you actually need.

 

Message 8 of 8