04-30-2007 10:56 AM
Hi Smercurio,
Thanks for the info.
I didn't think it was a related problem, which is why I put it in a different thread.
About the sorting: you are right, it doesn't seem that I have a solution. I may look for a completely different approach in order to avoid sorting altogether. I am asking myself whether this time cost is due to a process inherent to LabVIEW, or just plain computer power. I will try to create a DLL in C++ and compare timing.
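For reference, the kind of C++ timing comparison Gabi describes could look roughly like the sketch below. The record layout, payload size, and element count are assumptions standing in for the actual cluster, not the real data:

```cpp
#include <algorithm>
#include <chrono>
#include <cstdio>
#include <random>
#include <vector>

// Hypothetical stand-in for the LabVIEW cluster: a time key plus some payload.
struct Record {
    double time;
    double payload[5];  // assumed payload, roughly 48 bytes per record
};

int main() {
    const std::size_t n = 2000000;  // about the array size mentioned in the thread
    std::vector<Record> data(n);

    // Fill the sort key with random values so the sort does real work.
    std::mt19937_64 rng(42);
    std::uniform_real_distribution<double> dist(0.0, 1.0);
    for (auto& r : data) r.time = dist(rng);

    auto t0 = std::chrono::steady_clock::now();
    std::sort(data.begin(), data.end(),
              [](const Record& a, const Record& b) { return a.time < b.time; });
    auto t1 = std::chrono::steady_clock::now();

    std::printf("sort took %.3f s\n",
                std::chrono::duration<double>(t1 - t0).count());
    return 0;
}
```

Building the same logic into a DLL and calling it from LabVIEW adds a marshalling cost on top of this, so the standalone number is only a lower bound for the comparison.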
04-30-2007 11:00 AM
Hello Xseadog,
I have built some 16-bit D-to-A converters where the buffer update takes at most 10 µs. I have an NI 6533 card with a maximum bus speed of 2 MHz, so I update this D-to-A at the maximum rate whenever I can (when ramps and value changes are involved, to keep them as smooth as possible), because the application improves a bit.
In the meantime (in between these 10 µs updates), I address several digital and other analog outputs.
If you need more details, just ask!
05-01-2007 07:13 AM
Hi Gabi1,
Could you please post a sample VI with data for us to poke at?
When you say that your explicit sort code runs just as fast as the "Sort Array", I have to stop and wonder. I have tried to write sort code that runs as fast as the "Sort Array" and failed.
This means that either your sort routine blows away my code, or the time you are measuring is dominated by something other than the "Sort Array".
So if you post (please!) I will learn from your example or we may be able to point at something that will help you.
That sounds like a "win - win" situation to me.
Ben
05-02-2007 03:51 AM
Hi Gabi,
In the pic below, the big FOR loop builds two cluster arrays and sorts on the smaller one.
As I understand (perhaps misunderstand) this diagram, the small cluster array could be eliminated by just building and sorting on the larger one as-is, eliminating the subsequent FOR loop and two or three [large] temporary arrays. Sorry if I'm missing something here!
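In text form, this simplification amounts to sorting the large record array directly with a comparator on its time field, rather than building a second key array first. A rough C++ analogue (the field names are assumed, not taken from the actual VI):

```cpp
#include <algorithm>
#include <vector>

struct BigRecord {
    double time;  // the sort key
    // ... remaining cluster fields (assumed)
};

// Sort the full record array in place by its time field: no secondary
// key/index array and no re-ordering pass afterwards.
void sort_records(std::vector<BigRecord>& records) {
    std::sort(records.begin(), records.end(),
              [](const BigRecord& a, const BigRecord& b) { return a.time < b.time; });
}
```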
Cheers.
@Gabi1 wrote:
I tried Altenbach's solution from a thread about a year ago; here is a pic of how it looks. But the sorting actually takes exactly the same time. BTW, I have checked that it is the sorting itself, and no other element, that takes most of the time. Message Edited by Gabi1 on 04-30-2007 07:02 AM
05-02-2007 11:48 AM
Hi Gerd,
I assumed the [blue I32] index was being used, and (I think) the result of the "smaller FOR loop" is to re-index the big-cluster array into exactly the same order we'd get if we just sorted the big-cluster array in the first place! No? (It sure would be nice, for test purposes, if the VI were attached somewhere ;^) )
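For comparison with the direct sort above, the two-step scheme being described here, sorting a small (time, index) array and then pulling the big records out in that order, would look roughly like this in C++ (again with assumed field names):

```cpp
#include <algorithm>
#include <cstdint>
#include <utility>
#include <vector>

struct BigRecord {
    double time;
    // ... remaining cluster fields (assumed)
};

// Sort only (time, original index) pairs, then gather the big records in
// sorted order. This gives the same final order as sorting the big array
// directly, but needs an extra key array and an extra output array.
std::vector<BigRecord> sort_via_index(const std::vector<BigRecord>& records) {
    std::vector<std::pair<double, std::int32_t>> keys(records.size());
    for (std::int32_t i = 0; i < static_cast<std::int32_t>(records.size()); ++i)
        keys[i] = {records[i].time, i};

    std::sort(keys.begin(), keys.end());  // pairs compare by time first

    std::vector<BigRecord> sorted;
    sorted.reserve(records.size());
    for (const auto& k : keys)
        sorted.push_back(records[k.second]);  // the "smaller FOR loop"
    return sorted;
}
```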
Cheers.
@GerdW wrote:
Hi tbd,
No - your misunderstanding results from the bad wiring I have complained about before! (I was thinking the same thing at first...)
The cluster array is sorted by time values. In the smaller FOR loop the index value (!) is taken from the cluster - at least you can see a small portion of a BLUE wire.
Please, Gabi, clean up your VI and attach it here in this discussion!
05-02-2007 12:57 PM
@smercurio_fc wrote:
You know, I still haven't heard anyone refute that 3 seconds is not that unreasonable when sorting an array of clusters of over 2 million elements that takes up, as the poster indicated, over 100 MB, regardless of what the poster might believe. Gabi never indicated what kind of machine they're using, or what else is going on in terms of other processes. I wonder if we're just splitting hairs here...
Hi Smercurio, your point is well taken - but since Gabi's interest was having this run faster, why not see whether a 2X or 10X improvement is possible?
No matter what, the big-cluster array can be updated "in place" using a shift register (eliminating at least one dynamically allocated array); if the smaller cluster array and the subsequent "re-ordering loop" are eliminated, this gets rid of three or four large, dynamically allocated arrays. Seems like it would be worth a try!
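As a rough sanity check on the 100 MB figure, and on what each extra temporary copy costs, a back-of-the-envelope estimate with an assumed record layout looks like this:

```cpp
#include <cstdio>

struct BigRecord {
    double time;
    double payload[6];  // assumed layout: 56 bytes per record
};

int main() {
    const double n = 2.0e6;  // element count mentioned in the thread
    const double mb = n * sizeof(BigRecord) / (1024.0 * 1024.0);
    // With this assumed layout the data alone is roughly 107 MB, so every
    // extra temporary copy of the array costs about that much again, which
    // is why updating/sorting in place is attractive.
    std::printf("%.0f records x %zu bytes = %.0f MB\n", n, sizeof(BigRecord), mb);
    return 0;
}
```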
05-02-2007 01:24 PM
Just a thought....
What if the data were restructured so that, instead of using an array of clusters, the data was saved as a cluster of arrays, or simply as multiple separate arrays?
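In C++ terms this suggestion corresponds to a structure-of-arrays layout: keep each cluster field in its own array, sort only a permutation by the time array, and apply that permutation just to the columns that actually need it. A minimal sketch, with assumed field names:

```cpp
#include <algorithm>
#include <numeric>
#include <vector>

// Structure-of-arrays layout: one array per cluster field (names assumed).
struct Columns {
    std::vector<double> time;
    std::vector<double> value;
    // ... further field arrays
};

// Compute the order of elements sorted by time without moving any payload.
std::vector<int> sort_order(const Columns& c) {
    std::vector<int> order(c.time.size());
    std::iota(order.begin(), order.end(), 0);
    std::sort(order.begin(), order.end(),
              [&](int a, int b) { return c.time[a] < c.time[b]; });
    return order;
}

// Apply the permutation to a single column, only when that column is needed.
std::vector<double> gather(const std::vector<double>& col, const std::vector<int>& order) {
    std::vector<double> out(col.size());
    for (std::size_t i = 0; i < order.size(); ++i) out[i] = col[order[i]];
    return out;
}
```

Only the small index array is touched during the sort itself, so the large payload columns are each moved at most once, or not at all.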
Done thinking, back to work!
Ben