I am primarily interested in the chunking algorithms for optimizing display speed. The two subVIs dealing with decimation are of particular interest to me, as I have traditionally relied on the decimation LabVIEW performs automatically rather than on programmatic min-max decimation to handle display issues. I expect a much more accurate and responsive display with these new algorithms.
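For anyone unfamiliar with the idea, here is a rough text sketch (in Python, since a LabVIEW block diagram can't be pasted here) of what min-max decimation does. This is only my own illustration of the general technique, not Damien's actual subVIs: for each chunk of samples that maps to one display column, keep both the minimum and maximum so narrow peaks survive the reduction.

```python
def minmax_decimate(samples, n_columns):
    """Reduce `samples` to roughly 2*n_columns points (min and max per chunk)."""
    out = []
    chunk = max(1, len(samples) // n_columns)  # samples per display column
    for i in range(0, len(samples), chunk):
        block = samples[i:i + chunk]
        out.append(min(block))  # keep the trough of this chunk
        out.append(max(block))  # keep the peak of this chunk
    return out
```

Plotting the decimated array instead of the raw data keeps the graph responsive on huge datasets without hiding glitches the way plain nth-point decimation can.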
For those of you who would like to understand GLV_GigaLabVIEWMemoryStoreAndBrows.vi better (from GigaLabVIEW.LLB), I recommend turning on execution highlighting and watching the block diagram as the code executes.
As a general comment since LV7 came out, my personal coding preference is a single-loop, event-driven architecture using defined user events or Value (Signaling). This single loop contains an event structure whose cases handle both events and tasks, rather than multiple loops (one loop handling events and another handling tasks).
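Since an event structure is graphical, here is a hedged text analogy (Python) of the single-loop pattern described above: one blocking wait, with separate cases for user-fired events and for tasks, all in the same loop. The event names here are hypothetical placeholders, not LabVIEW APIs.

```python
import queue

def single_loop(incoming):
    """Run one loop whose cases handle both events and tasks (analogy only)."""
    q = queue.Queue()
    for item in incoming:
        q.put(item)
    q.put(("stop", None))  # like a Stop user event
    handled = []
    while True:
        name, payload = q.get()  # analogous to the event structure's wait
        if name == "stop":
            break
        elif name == "user_event":  # case for a defined user event
            handled.append(("event", payload))
        elif name == "task":        # task case in the same loop
            handled.append(("task", payload))
    return handled
```

The appeal of the single-loop form is that event handling and task execution are serialized in one place, so there is no cross-loop communication to manage.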
Thanks again Damien.
Sincerely,
Don