09-25-2019 03:13 PM - edited 09-25-2019 03:40 PM
One of the problems is that there are 5-digit decimal numbers that exceed the U16 range, so we need to make sure the input is clean. If not, we need more bits. 😉
We also need to assume that the length is divisible by five.
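The two constraints above can be sketched in a text language (the original discussion is about a LabVIEW diagram, so this Python analogue, with a hypothetical `parse_groups` helper, is only an illustration): a 5-digit decimal group can reach 99999, but a U16 tops out at 65535, so a clean-input check or a wider type is needed, and the total length must be a multiple of five.

```python
U16_MAX = 0xFFFF  # 65535: the largest value a U16 can hold

def parse_groups(s: str) -> list[int]:
    """Split a digit string into 5-character groups and convert each."""
    if len(s) % 5 != 0:
        raise ValueError("length must be divisible by five")
    values = [int(s[i:i + 5]) for i in range(0, len(s), 5)]
    # Any group in 65536..99999 does not fit a U16; a U32 would.
    if any(v > U16_MAX for v in values):
        raise ValueError("group exceeds U16 range; use more bits")
    return values
```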
Here's what I would probably have done (B) (or maybe a reshape before the loop followed by autoindexing on the 2D array*). I have not done any benchmarking though. Either could win.
I also strongly object to constantly resizing the string in the shift register. We could just move the read index based on [i], eliminating one subset operation and the shift register entirely.
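To illustrate the two patterns being compared (again as a rough Python analogue of the LabVIEW diagram, with hypothetical helper names): the "shift register" version trims the front of the string every iteration, while the index version just slides a read position along the unchanged string.

```python
def chunks_resize(s: str) -> list[str]:
    # "Shift register" pattern: each iteration takes a chunk AND makes a
    # new, shorter copy of the remainder, i.e. two subset operations.
    out = []
    while s:
        out.append(s[:5])
        s = s[5:]  # extra subset just to shrink the carried buffer
    return out

def chunks_index(s: str) -> list[str]:
    # Index pattern: one subset per chunk at position i; the original
    # string is never modified, so nothing needs to be carried along.
    return [s[i:i + 5] for i in range(0, len(s), 5)]
```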
*Version with reshape array:
Also, the chart terminal in the original code definitely belongs after the loop, and if we show the digital display of the chart, we can also eliminate the other blue indicator.
09-25-2019 11:28 PM
@altenbach wrote:
One of the problems is that there are 5-digit decimal numbers that exceed the U16 range, so we need to make sure the input is clean. If not, we need more bits. 😉
I did realize that looking at the code and changed the indicator and chart to a U32 instead. There were so many coercion dots in the code that I assumed that the OP really had no clue about numeric representations.
I also strongly object to constantly resizing the string in the shift register. We could just move the read index based on [i], eliminating one subset operation and the shift register entirely.
I'm curious about that comment. Of course I'm familiar with the issues with growing strings or arrays, where there are slowdowns and eventually a potential out-of-memory error as things need to be moved around to find ever larger runs of free memory. But what would be the issue where the data is shrinking in size? I would think that as parts of the string are eliminated, the part that stays in memory would be shrinking, and the memory space it occupies would have the no-longer-used bytes accumulate at the beginning. So you wouldn't need to move data around or find larger blocks of memory.
09-26-2019 03:19 AM
@RavensFan wrote:
But what would be the issue where the data is shrinking in size? I would think that as parts of the string are eliminated, the part that stays in memory would be shrinking, and the memory space it occupies would have the no-longer-used bytes accumulate at the beginning. So you wouldn't need to move data around or find larger blocks of memory.
Well, the memory has already been used and I doubt that it'll free up these small fragmented spaces for anything. Typically LabVIEW holds on to memory because it will be needed again with the next call. I have the feeling that having to update the valid string size with each iteration is more expensive than just moving the read position along the existing string. I have not done any benchmarking, feel free to try. 😉
It's irrelevant for such cheap code anyway, but reducing the number of "string subset" operations from 2 to 1 and eliminating the shift register makes the code easier to read and simpler to understand.
09-26-2019 07:41 AM
I appreciate the comment as I know you are someone who has a better understanding of behind the scenes stuff in programming and always set the gold standard for benchmarking.
I wasn't thrilled with the string manipulation either, but mostly I was pointing out code that had so many unnecessary constructs: the manual incrementing of the indices, the ASCII table manipulation to get digit values, and the multiplication to combine digits. The fact is, that message thread had far bigger problems, with a completely messed-up communication protocol that far exceeded the level of Rube Goldberg applied to try to decipher it.
09-29-2019 10:05 AM
@RavensFan wrote:
But what would be the issue where the data is shrinking in size? I would think that as parts of the string are eliminated, the part that stays in memory would be shrinking, and the memory space it occupies would have the no-longer-used bytes accumulate at the beginning. So you wouldn't need to move data around or find larger blocks of memory.
Don't have LabVIEW here at the moment, but for a good example of the opposite of what you described, look at the JKI State Machine, specifically the parse states VI. Or you can see this link
mcduff
10-01-2019 04:56 PM - edited 10-01-2019 06:14 PM
Wow-just-wow!!!
Problem: "I want to convert a decimal number (from -32768 to 32767) into a 16 bits Binary two's complement."
Solution: 😮
(Not even showing the TRUE case. Hint: it's mostly identical code on the right half!)
Yes, this was labeled as "Here is a simpler version of what I did so far."! I hate to even guess what the original looked like! 😮
My gut feeling tells me that there should be an easier way 😄
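There is indeed an easier way: a signed 16-bit integer is already stored in two's complement, so "converting" is just reinterpreting the same 16 bits and formatting them as binary. A minimal Python sketch of that idea (the function name `to_twos_complement_bits` is made up for illustration; in LabVIEW the equivalent would be a cast/format of the I16 itself):

```python
def to_twos_complement_bits(n: int) -> str:
    """Return the 16-bit two's complement bit pattern of an I16 value."""
    if not -32768 <= n <= 32767:
        raise ValueError("value out of I16 range")
    # Masking with 0xFFFF reinterprets the signed value as its
    # unsigned 16-bit pattern; :016b formats it as 16 binary digits.
    return format(n & 0xFFFF, "016b")
```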
10-01-2019 06:43 PM
Your response showed the binary representation with the decimal in parentheses, how do you do that?
10-01-2019 06:54 PM - edited 10-01-2019 07:00 PM
Display format in advanced editing mode has multiple format strings for the one indicator.
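For readers outside LabVIEW, the effect of combining format strings on one indicator, binary first, decimal in parentheses, can be mimicked in a couple of lines (a hedged sketch; `show` is a hypothetical name, not anything from the thread):

```python
def show(n: int) -> str:
    # One display string combining two formats:
    # the 16-bit pattern, then the signed decimal value in parentheses.
    return f"{n & 0xFFFF:016b} ({n})"
```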
10-01-2019 07:44 PM - edited 10-01-2019 07:47 PM
10-01-2019 08:20 PM
@Jacobson-ni wrote:
Your response showed the binary representation with the decimal in parenthesis, how do you do that?
Well, you are excused because you joined the forum five years after our 2007 post. 😮
Still all true, so take some time to read and study it. 😄