02-14-2007 11:47 PM
Here is another method.
Please find the attachment.
-Kumar.
02-15-2007 12:23 PM
02-15-2007 12:59 PM - edited 02-15-2007 12:59 PM
Message Edited by Kevin Price on 02-15-2007 02:00 PM
02-15-2007 01:19 PM - edited 02-15-2007 01:19 PM
Kumar,
Be aware that all these constant array resizings are very expensive because they cause constant memory reallocations. Your method is not recommended for larger array inputs.
For example, with an input array of 100000 points made of sections of random length of 1..10 elements, your solution takes almost 3 seconds, while my version does it all in 70 milliseconds. A 50x speed penalty! 100000 points is not that much, but 3 seconds is a very long time!
Even my code is not fully optimized. For example, skipping the Initialize Array and doing the replacement "in place" (see image) using an inner loop and another shift register drops the time down to 45 ms, because we no longer need to allocate the averaged subarrays.
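For readers more comfortable with text code, here is a rough Python/NumPy analogue of the allocation point (it is not the posted LabVIEW diagram; the data, section lengths, and function names are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.random(100_000)

# Split the indices into sections of random length 1..10, as in the timing example.
edges = [0]
while edges[-1] < data.size:
    edges.append(min(edges[-1] + int(rng.integers(1, 11)), data.size))
sections = list(zip(edges[:-1], edges[1:]))

def averaged_growing(data, sections):
    """Slow: grow the result one section at a time, forcing a reallocation
    and copy on every iteration (the Build-Array-in-a-loop pattern)."""
    out = np.empty(0)
    for start, stop in sections:
        seg = data[start:stop]
        out = np.concatenate([out, np.full(seg.size, seg.mean())])
    return out

def averaged_in_place(data, sections):
    """Fast: work on one preallocated buffer and overwrite each section with
    its mean, analogous to replacing array subsets in place via a shift register."""
    out = data.copy()
    for start, stop in sections:
        out[start:stop] = out[start:stop].mean()
    return out

assert np.allclose(averaged_growing(data, sections), averaged_in_place(data, sections))
```

The two functions produce the same result; only the memory behavior differs, which is where the speed difference comes from.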

I am sure many other improvements are possible. This was just a quick "back of the envelope" draft. 🙂
Message Edited by altenbach on 02-15-2007 11:19 AM
02-15-2007 01:34 PM
@Kevin Price wrote:
I'm not near an LV PC to test this, but wouldn't there be some possible issues with the implied floating-point equality going on in altenbach's "Search 1D Array" approach? Doesn't it depend on where the 0 values came from and whether they are, in fact, true 0?
When I've wanted to check for "proximity to 0" with processed data, I've used the "In Range and Coerce" primitive with very small bounds like +/- 1.0e-6. You could then search the Boolean 'In Range?' array output for "True" instead of searching the original floating-point array for 0.
-Kevin P.
Kevin
In the real world, this is certainly a concern, but the posted data contained only "true zeroes", so that's what I assumed. Maybe they come from an output tunnel set to "use default if unwired"? Maybe he starts out with an all-zero array and replaces acquired data subsections at random locations?
The bounds of course need to be adjusted according to the data; for example, if the data is in nanovolts, bounds of 1e-6 are huge. 🙂
Thanks for pointing out this potential issue.
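Purely to illustrate Kevin's point in a text language (this is not LabVIEW code, and the tolerance is just his example figure, which has to be scaled to the magnitude of the real data as noted above), a proximity-to-zero test instead of an exact equality test looks like this:

```python
import numpy as np

def find_near_zero(x, tol=1e-6):
    """Return indices where |x| <= tol, instead of testing x == 0 exactly."""
    return np.flatnonzero(np.abs(np.asarray(x)) <= tol)

signal = np.array([0.3, 1.2e-9, -0.5, 0.0, 2.0])
print(find_near_zero(signal))          # [1 3]  -> catches the "almost zero" value
print(np.flatnonzero(signal == 0.0))   # [3]    -> the exact test misses index 1
```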
02-15-2007 02:25 PM
02-15-2007 05:07 PM - edited 02-15-2007 05:07 PM
Instead of having linear code, it might be more useful to make an interactive state machine where you can go back and forth through the sound segments at will and adjust the parameters dynamically until everything looks right.
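For what it's worth, here is a minimal text-language sketch of that pattern (Python, with hypothetical segment data and a single "threshold" parameter; in LabVIEW this would be the usual while loop, case structure, and shift register holding a state enum):

```python
from enum import Enum, auto

class State(Enum):
    SHOW_SEGMENT = auto()
    ADJUST_PARAMS = auto()
    NEXT_SEGMENT = auto()
    PREV_SEGMENT = auto()
    DONE = auto()

def run_ui(segments):
    state, index, params = State.SHOW_SEGMENT, 0, {"threshold": 0.1}
    while state is not State.DONE:
        if state is State.SHOW_SEGMENT:
            print(f"segment {index}: {segments[index]}  params={params}")
            cmd = input("n(ext)/p(rev)/a(djust)/q(uit): ").strip().lower()
            # Unknown input falls through to DONE and ends the loop.
            state = {"n": State.NEXT_SEGMENT, "p": State.PREV_SEGMENT,
                     "a": State.ADJUST_PARAMS}.get(cmd, State.DONE)
        elif state is State.ADJUST_PARAMS:
            params["threshold"] = float(input("new threshold: "))
            state = State.SHOW_SEGMENT
        elif state is State.NEXT_SEGMENT:
            index = min(index + 1, len(segments) - 1)
            state = State.SHOW_SEGMENT
        elif state is State.PREV_SEGMENT:
            index = max(index - 1, 0)
            state = State.SHOW_SEGMENT

run_ui(["seg A", "seg B", "seg C"])
```

The point is the structure, not the details: each pass through the loop does one thing, then decides which state to enter next, so the user can move through the segments in any order and re-tune parameters without restarting the analysis.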
Message Edited by altenbach on 02-15-2007 03:09 PM
02-15-2007 05:50 PM
02-20-2007 02:22 PM
Altenbach
Did you have a chance to go through my VI? If you did, I am really eager to find out what you think about it and whether you have any tips for me.
kind regards
madgreek