LabVIEW


sort cluster efficiency

Hi Smercurio,

Thanks for the info.

I didn't think it was a related problem, which is why I put it in a different thread.

About the sorting: you are right, it doesn't seem that I have a solution. I will maybe think of a completely different approach in order to avoid sorting altogether. I am asking myself whether this time cost is due to a process inherent to LabVIEW, or just plain computer power. I will try to create a DLL in C++ and compare timing.
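[Editor's note: Gabi's proposed experiment can be sketched outside LabVIEW. Since a LabVIEW diagram can't be shown in text, here is a hypothetical Python analog; the (timestamp, channel, value) tuples and the array size are made-up stand-ins for the clusters discussed in this thread. It also times the "sort on the key field only" idea mentioned later in the thread against a full-element comparison sort.]

```python
import random
import time

# Hypothetical stand-in for the array of clusters in this thread:
# each "cluster" is a (timestamp, channel, value) tuple.
N = 200_000  # scaled down from the ~2 million elements discussed
random.seed(0)
clusters = [(random.random(), i % 16, float(i)) for i in range(N)]

t0 = time.perf_counter()
by_whole = sorted(clusters)                      # compares entire tuples
t1 = time.perf_counter()
by_key = sorted(clusters, key=lambda c: c[0])    # compares only the first field
t2 = time.perf_counter()

print(f"full-compare sort: {t1 - t0:.3f} s")
print(f"key-only sort:     {t2 - t1:.3f} s")
```

The absolute numbers are machine-dependent, which is exactly the point of Gabi's proposed DLL comparison; what the sketch shows is the shape of the benchmark, not LabVIEW's actual figures.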

-----------------------------------------------------------------------------------------------------
... And here's where I keep assorted lengths of wires...
0 Kudos
Message 11 of 26
(1,420 Views)

Hello Xseadog,

I have created some 16-bit D-to-A converters, where the buffer update is at most 10 µs. I have an NI 6533 card with a max bus speed of 2 MHz, so I update this D-to-A at the maximum rate whenever I can (when ramps and value changes are involved, to be as smooth as possible), because the application improves a bit.

In the meantime (in between these 10 µs), I address several digital and other analog outputs.

If you need more details, just ask!

-----------------------------------------------------------------------------------------------------
... And here's where I keep assorted lengths of wires...
0 Kudos
Message 12 of 26
(1,421 Views)

Hi Gabi1,

Could you please post a sample VI with data for us to poke at?

When you say that your explicit sort code runs just as fast as the "Sort Array", I have to stop and wonder. I have tried to write sort code that runs as fast as the "Sort Array" and failed.

This means that either your sort routine blows away my code, or the time you are measuring is dominated by something other than the "Sort Array".
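[Editor's note: Ben's experience (that a hand-written sort rarely beats the built-in one) is easy to reproduce in any language. A hypothetical Python sketch, with made-up data, comparing a straightforward recursive quicksort against the built-in sort:]

```python
import random
import time

def quicksort(a):
    """A straightforward recursive quicksort: easy to write,
    hard to make competitive with the built-in sort."""
    if len(a) <= 1:
        return a
    pivot = a[len(a) // 2]
    less = [x for x in a if x < pivot]
    equal = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    return quicksort(less) + equal + quicksort(greater)

random.seed(1)
data = [random.random() for _ in range(100_000)]

t0 = time.perf_counter()
mine = quicksort(data)
t1 = time.perf_counter()
builtin = sorted(data)
t2 = time.perf_counter()

print(f"hand-written: {t1 - t0:.3f} s, built-in: {t2 - t1:.3f} s")
```

The built-in sort runs in optimized native code, which is the same reason LabVIEW's "Sort Array" is hard to beat from diagram code.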

So if you post (please!), I will learn from your example, or we may be able to point at something that will help you.

That sounds like a "win - win" situation to me.

Ben

Retired Senior Automation Systems Architect with Data Science Automation LabVIEW Champion Knight of NI and Prepper LinkedIn Profile YouTube Channel
0 Kudos
Message 13 of 26
(1,407 Views)
Hello Ben,
 
I can post the VI, but with data it would be about 100 MB, so I can't. A pic of what this subVI looks like is in this thread, a few posts above.
 
There might be some misunderstanding about my statements, so let's summarize them:
I discovered that my subVI takes too long: about 600 ms is used for changing some values of my clusters, and another 3000 ms (!!) just for sorting them.
So I tried several options for shortening the time. The first was to build my own sorting VI, which took about two orders of magnitude longer (...).
I then found an old thread where Altenbach proposed making sure to sort only by the first (or second, or...) element of the cluster, not by the whole cluster, which might mean less calculation time. That solution is the one you see in the pic.
Unfortunately, overall it took the same time to sort.
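[Editor's note: the trick Gabi describes (sort a small array of (key, index) clusters, then re-order the big array once) can be sketched in Python; the field names and data below are hypothetical stand-ins for the thread's clusters.]

```python
import random

# Hypothetical big clusters: a sort key ("t") plus a bulky payload.
random.seed(2)
big = [{"t": random.random(), "payload": [0.0] * 8} for _ in range(1000)]

# Build the small (key, index) clusters and sort only those.
keys = [(rec["t"], i) for i, rec in enumerate(big)]
keys.sort()                            # compares (time, index), not the payload

# One re-indexing pass puts the big array into sorted order.
reordered = [big[i] for _, i in keys]
```

The hope is that comparing small (key, index) pairs is cheaper than comparing whole clusters; as Gabi reports, whether that wins in practice depends on how much the comparisons actually cost relative to the data movement.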
 
I am now thinking of just abandoning sorting altogether.
If you still want my explicit sorting solution, I'll send it.
-----------------------------------------------------------------------------------------------------
... And here's where I keep assorted lengths of wires...
0 Kudos
Message 14 of 26
(1,384 Views)

Hi Gabi,

      In the pic below, the big for-loop builds two clusters and sorts on the smaller.

As I understand (perhaps misunderstand) this diagram, the small cluster array could be eliminated by just building and sorting the larger one as-is, eliminating the subsequent for-loop and two or three [large] temporary arrays.  Sorry if I'm missing something here!
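[Editor's note: tbd's observation (that sorting the big array directly on the same key should give the same order as the two-step re-indexing approach) can be checked with a small Python sketch; the (time, payload) pairs are made-up stand-ins.]

```python
import random

random.seed(3)
big = [(random.random(), i) for i in range(1000)]  # (time, payload) stand-ins

# Two-step: sort a small (key, index) array, then re-index the big one.
keys = sorted((t, i) for i, (t, _) in enumerate(big))
two_step = [big[i] for _, i in keys]

# One step: sort the big array directly on the same key.
one_step = sorted(big, key=lambda rec: rec[0])

print(two_step == one_step)  # True -- same ordering either way
```

Both approaches break ties by original position (the two-step version via the appended index, the one-step version via sort stability), so the results match element for element; the question the thread is circling is only which one is cheaper.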

Cheers.

 


@Gabi1 wrote:
I tried Altenbach's solution from a thread about a year ago;
here is a pic of how it looks.
But the sorting actually takes exactly the same time.
BTW, I have checked that it is the sorting itself, and no other element, that takes most of the time.
 
 

Message Edited by Gabi1 on 04-30-2007 07:02 AM



"Inside every large program is a small program struggling to get out." (attributed to Tony Hoare)
0 Kudos
Message 15 of 26
(1,381 Views)
Hi tbd,

No - your misunderstanding results from bad wiring, as I have complained about before! (I was thinking the same at first...)
The cluster is sorted by time values. In the smaller FOR loop the index value (!) is taken from the cluster - at least you can see a small portion of a BLUE wire.

Please, Gabi, clean up your VI and attach it here in this discussion!
Best regards,
GerdW


using LV2016/2019/2021 on Win10/11+cRIO, TestStand2016/2019
Message 16 of 26
(1,371 Views)
You know, I still haven't heard anyone refute the idea that 3 seconds is not that unreasonable when sorting an array of clusters of over 2 million elements that takes up, as the poster indicated, over 100 MB, regardless of what the poster might believe. Gabi never indicated what kind of machine they're using, or what else is going on in terms of other processes. I wonder if we're just splitting hairs here...
0 Kudos
Message 17 of 26
(1,358 Views)

Hi Gerd,

      I assumed the [blue I32] index was being used, and (I think) the result of the "smaller FOR loop" is to re-index the big-cluster array in exactly the same order we'd get if we just sorted the big-cluster array in the first place!  No?  (Sure would be nice, for test purposes, if the VI was attached somewhere ;^) )

Cheers.


@GerdW wrote:
Hi tbd,

no - your misunderstanding results from bad wiring as I have complained before! (I was thinking the same at the first moment...)
The cluster is sorted by time values. In the smaller FOR loop the index value (!) is taken from the cluster - at least you can see a small portion of a BLUE wire.

Please Gabi, clean up your vi and attach it here in this discussion!



"Inside every large program is a small program struggling to get out." (attributed to Tony Hoare)
0 Kudos
Message 18 of 26
(1,348 Views)

 


@smercurio_fc wrote:
You know, I still haven't heard anyone refute the idea that 3 seconds is not that unreasonable when sorting an array of clusters of over 2 million elements that takes up, as the poster indicated, over 100 MB, regardless of what the poster might believe. Gabi never indicated what kind of machine they're using, or what else is going on in terms of other processes. I wonder if we're just splitting hairs here...


Hi Smercurio, your point is well taken - but since Gabi's interest was having this run faster, why not see whether a 2X or 10X improvement is possible?

No matter what, the big-cluster array can be updated "in place" using a shift register (eliminating at least one dynamically allocated array); if the smaller cluster array and subsequent "re-ordering loop" are eliminated, this gets rid of three or four large, dynamically allocated arrays.  Seems like it would be worth a try!

 

"Inside every large program is a small program struggling to get out." (attributed to Tony Hoare)
0 Kudos
Message 19 of 26
(1,343 Views)

Just a thought....

What if the data were restructured so that, instead of using an array of clusters, the data was saved as a cluster of arrays, or simply as multiple arrays?
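[Editor's note: Ben's "cluster of arrays" idea can be sketched in Python with hypothetical field names: keep one array per cluster field, compute the sort order once from the key field, and apply that permutation to every field, so no whole-cluster comparisons ever happen.]

```python
# One array per cluster field ("cluster of arrays" layout); toy data.
times = [3.0, 1.0, 2.0]
chans = [10, 11, 12]
vals  = [0.3, 0.1, 0.2]

# Compute the sort permutation from the key field only...
order = sorted(range(len(times)), key=times.__getitem__)

# ...then apply it to every field, keeping them aligned.
times = [times[i] for i in order]
chans = [chans[i] for i in order]
vals  = [vals[i] for i in order]

print(times, chans, vals)  # [1.0, 2.0, 3.0] [11, 12, 10] [0.1, 0.2, 0.3]
```

This moves each field's data exactly once and never constructs or compares a full cluster, which is why restructuring the data can pay off even when the sort algorithm itself is unchanged.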

Done thinking, back to work!

Ben

Retired Senior Automation Systems Architect with Data Science Automation LabVIEW Champion Knight of NI and Prepper LinkedIn Profile YouTube Channel
Message 20 of 26
(1,335 Views)