Reminded me of this discussion about how to initialize a "special" array.
Eh, I kinda prefer A here. Knowing the size right away isn't as clear to me in B. I mean, adding 100 and 100 and 300 is pretty easy to understand and straightforward. But combining an array with 500 elements with one that has 100 elements can yield a size anywhere between 500 and 600, and I need to think differently in my head to realize what the final size is. Certainly a comment stating the intent would help in either case. Of course B uses less block diagram space, but if it comes at the cost of readability I wouldn't prefer it on something as simple as this.
Eh, I kinda prefer A here. Knowing the size right away isn't as clear to me in B. I mean, adding 100 and 100 and 300 is pretty easy to understand and straightforward. But combining an array with 500 elements with one that has 100 elements can yield a size anywhere between 500 and 600 ....
I am not familiar with the word "combining" (a very ambiguous word! :o). I am using "Replace array subset", which is guaranteed to operate in place and will not change the size of the array between the upper input and the output. (Most often, the total size is given and needs to match something, e.g. a data array.)
My method (B) scales significantly better if certain things are not constant. For example, if user controls for width and delay are implemented, these can be wired directly into code B, while in code A one would need to recalculate quite a few blue inputs. Many more places for bugs to hide. 😉 With variable inputs it is now also much harder for the compiler to guess the final size allocation in A. My code guarantees in-placeness. 🙂
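Since LabVIEW block diagrams don't paste well into text, here is a rough Python sketch of the two approaches as I understand them from the thread. The sizes (a 500-element result containing a 100-element pulse) come from the discussion; the function names and signatures are my own:

```python
def pulse_a(delay=100, width=100, tail=300, high=1.0):
    # Approach A: build three pieces with explicit sizes and concatenate.
    # The final size is obvious at a glance: delay + width + tail.
    return [0.0] * delay + [high] * width + [0.0] * tail

def pulse_b(total=500, delay=100, width=100, high=1.0):
    # Approach B: preallocate the full array once, then overwrite a
    # subset in place (the analogue of LabVIEW's "Replace array subset").
    # The output size always equals the preallocated size, whatever the
    # delay and width controls say -- but that isn't obvious at a glance.
    # (Unlike LabVIEW, Python slice assignment can resize the list if
    # the subset runs past the end, so keep delay + width <= total.)
    out = [0.0] * total
    out[delay:delay + width] = [high] * width
    return out

# Both produce the same 500-element pulse with the default inputs.
assert pulse_a() == pulse_b()
```

Note how `pulse_b` takes `delay` and `width` directly, matching the point about wiring user controls straight into B, while in `pulse_a` the three size inputs must be kept consistent by hand.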
I am not familiar with the word "combining" (a very ambiguous word! :o).
I wanted to use a word that is as ambiguous as the function itself, highlighting my point.
You said Replace Array Subset guarantees it won't change the size, and you are right with the way you have it. But I wouldn't know that at first glance; I need to look at the code and realize what is happening. If you asked me what the size of the final array is, I would answer 500 faster looking at A than I would looking at B.
I never said which worked better in place, or which scaled better, or which works better with controls. I just said that for readability I prefer A over B.
A > Fussy programmer: "Yeah but I want to avoid so much array initialization and manipulation..."
B > Fussy programmer: "Yeah but I want to avoid extra memory allocation..."
Just re-rube it a little and I'll give you option C...
C > Fussy programmer: "Yeah but I want to avoid processing large sections of the data..."
There's also the Spike function:
A bit of creativity with that and the Step and Square functions and you can make almost anything 😄
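LabVIEW's Spike, Step, and Square pattern functions can't be shown in text, but the idea of composing a pulse out of simpler primitives can be sketched in Python (the function names and signatures here are mine, not LabVIEW's):

```python
def step(n, edge, high=1.0):
    # Unit-step pattern: 0.0 before the edge index, `high` from it onward.
    return [high if i >= edge else 0.0 for i in range(n)]

def pulse(n, delay, width, high=1.0):
    # A rectangular pulse is the difference of two steps:
    # one rising at `delay`, one rising at `delay + width`.
    up = step(n, delay, high)
    down = step(n, delay + width, high)
    return [u - d for u, d in zip(up, down)]
```

With a bit of creativity the same subtraction trick extends to square waves (a train of alternating steps) and spikes (a pulse of width 1).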
Where (Fussy programmer = Me), altenbach's D wins on all counts: best de-rube, easiest to read, smallest diagram, least memory allocation, least array manipulation, least data processing (or at least it's all hidden from the user).
Otherwise I agree with Hooovahh in Message 2081.
I definitely prefer the blue (top), but it seems a bit rubegoldbergish. The lower will be faster because the input is an empty string constant.
What does the input string actually look like? The time-limiting step is probably the chart update, though 😉
I forgot to put a link to the source; I have added it now. Using multiple increments to increment the indices of an index array that is already self-incrementing (is that enough "in-"s in one sentence?) is quite special. The digit-by-digit manipulation of the ASCII values of string characters was quite sophisticated, but also unnecessary. I don't particularly like the string splitting, but it should work for the problem as it was presented, which was a rather bad protocol for sending data.
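To illustrate why digit-by-digit ASCII manipulation is unnecessary (this is my own Python illustration of the general pattern, not the original code):

```python
def ascii_to_int(s):
    # The laborious route: rebuild the number one digit at a time
    # from the ASCII value of each character.
    value = 0
    for ch in s:
        value = value * 10 + (ord(ch) - ord('0'))
    return value

# The built-in conversion does the same job in one call
# (LabVIEW has the equivalent "Decimal String To Number").
assert ascii_to_int("12345") == int("12345") == 12345
```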