The Daily CLAD


Re: CLAD2017 - Decimal String to Number - Have you got the range

Member

Assuming Default Value In = 99.  What are the contents of Numeric Array Out following execution of the VI?

 

[Image: Decimal String to Number Ranges.png]

[Image: Decimal String to Number Ranges Answers.png]

Comments
Member

C

Member

D

Knight of NI

C


There are only two ways to tell somebody thanks: Kudos and Marked Solutions
Unofficial Forum Rules and Guidelines

The discussions from the Advanced User Track are not over. Join in the conversation: 2016 Advanced Users Track
Member

This is what I'm getting for an offset of 1; can someone explain, please? [Image: Capture.JPG]

Member

C

 

"Default value in" specifies the numeric representation of the number (in this case, I8).

 

If a number is out of range of the given numeric representation, it will be set to the max or min value for the data type. In this case the max value would be 127 and the min would be -128.

 

So we get:

 

-300  =>  -128
  -1  =>    -1
   0  =>     0
   1  =>     1
 300  =>   127
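The saturation described above can be sketched in Python. This is a hypothetical stand-in for the LabVIEW primitive, not NI's implementation; the function name, the regex-based parse, and the default of 99 are taken from this thread's example, not from NI documentation.

```python
# Sketch of how Decimal String To Number saturates out-of-range values
# when "default value in" is wired with an I8 constant. Hypothetical
# Python mock, not NI's implementation.
import re

I8_MIN, I8_MAX = -128, 127

def decimal_string_to_number(s, offset=0, default=99):
    """Parse a decimal integer starting at `offset`, clamped to I8."""
    m = re.match(r"[+-]?\d+", s[offset:])
    if m is None:               # nothing parseable -> default value out
        return default
    return max(I8_MIN, min(I8_MAX, int(m.group())))

print([decimal_string_to_number(s) for s in ["-300", "-1", "0", "1", "300"]])
# -> [-128, -1, 0, 1, 127]
```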

Knight of NI

Monesh,

If you have an offset of 1, then you are telling the Decimal String To Number to start looking at the second character.  So the negative signs are being ignored in the first two.  For the next two, you are telling the conversion to happen after the string, so you will get the default value (99).  For the final value, you are ignoring the '3', so you are converting "00" which will result in 0.
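That walk-through can be checked with a small Python mock of the function. The parsing rule (skip `offset` characters, parse an optional sign plus digits, fall back to the default of 99, saturate to I8) is an assumption reconstructed from this thread, not NI's actual source.

```python
# Hypothetical mock of Decimal String To Number with an offset of 1 and
# an I8 "default value in" of 99, per the behaviour described above.
import re

def decimal_string_to_number(s, offset=0, default=99):
    m = re.match(r"[+-]?\d+", s[offset:])
    if m is None:               # offset past the string -> default (99)
        return default
    return max(-128, min(127, int(m.group())))  # saturate to I8

print([decimal_string_to_number(s, offset=1)
       for s in ["-300", "-1", "0", "1", "300"]])
# -> [127, 1, 99, 99, 0]
```

With the offset, "-300" is parsed as "300" (saturating to 127), "0" and "1" have nothing left to parse (so they return 99), and "300" is parsed as "00" (giving 0), matching the explanation above.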


Member

Monesh,

 

In regards to your question, when you specify the offset, you are essentially telling LabVIEW where in the string to start looking for numbers. 

 

So with an offset of '1', you are telling the Decimal String to Number Function to disregard the first character, or '0' index, of each string.

 

So for example, '-300' becomes '300' (the minus sign is skipped over) and in this example, with the numeric representation set to 'I8' (which has a max value of 127), the output will be set to the max value. Repeat the same logic for the other strings in the array and it should become apparent what's happening.

 

What's interesting to me is the behavior of the function with '0' and '1'. I guess since these only have 1 character each, there is no '1' index so the default value of '99' is returned as the output.  

Member

In C, with an offset of 1 into a string of length 1, you'd be looking at the NULL character at the end of the string.


Member

C.  That I8 representation is the kicker. 

Member

C

Member

C

Active Participant

@SercoSteveB

 

LabVIEW strings don't have a NULL character at the end: http://zone.ni.com/reference/en-XX/help/371361P-01/lvconcepts/how_labview_stores_data_in_memory/ Instead, LabVIEW stores the string length at the front of the string.
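The layout difference can be illustrated in Python. The 4-byte big-endian length prefix matches LabVIEW's flattened-string format; the helper names here are illustrative only.

```python
# Contrast: C-style NUL-terminated storage vs LabVIEW-style
# length-prefixed storage for the one-character string "0".
# Helper names are illustrative, not LabVIEW API names.
import struct

def c_string(data: bytes) -> bytes:
    return data + b"\x00"                       # terminator after the data

def labview_string(data: bytes) -> bytes:
    return struct.pack(">i", len(data)) + data  # length stored up front

print(c_string(b"0"))        # NUL byte at the end, no length stored
print(labview_string(b"0"))  # 4-byte length prefix, no terminator
```

Because the length lives up front, an offset past the end of a LabVIEW string simply leaves nothing to parse; there is no trailing NUL to read.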

Certified LabVIEW Developer

Member

Good link, JKSH, nice one.


Member

c)

Member

C

Member

C

Member

Why does LabVIEW take the representation of the default value as the representation of the final value? The default is I32 by default, and the number (value out) is also I32 by default. If I change the default to I8 and connect the number (value out) through the conversion palette's To Long Integer (I32), I still get an answer of 127, not -300. Why? Is there a logical reason behind that?

Knight of NI

Because the value of 127 was generated at the Decimal String To Number.  You explicitly told it to convert as an I8.  So it did.


Member

For me, it is about having the flexibility to convert to the numeric representation that you want. If you are happy with an I32 output, leave the terminal unwired; if you want a different representation, wire it up and go for it.

 

Converting to a non-default representation within a function does have an impact on code readability, IMO, as the conversion is not explicit on the block diagram, and it is a good bet that some LabVIEW users won't know you can do a conversion within the function (only those LabVIEW users who don't read The Daily CLAD, that is 😉).


Member

Answer: C.  Nice one all.

 

Converts the numeric characters in string, starting at offset, to a decimal integer and returns it in number.

If number is an integer, it can overflow if the input is out of range. In that case, number is set to the maximum or minimum value for the data type. For example, if the input string is 300 and the data type is an 8-bit signed integer, LabVIEW sets the value to 127.


Member

C