10-11-2018 05:03 PM
Sounds good. Thanks for the input, mcduff! Much appreciated.
10-11-2018 05:08 PM - edited 10-11-2018 05:10 PM
@mcduff wrote:
PS: Your timing comparison is not valid; run each method on its own, without any of the others. Yes, LabVIEW can do tasks in parallel, but you cannot predict the order.
mcduff
Thanks for the suggestion, but in what way is the order not predictable? I thought each branch would execute in parallel (essentially like a separate thread, unless I'm mistaken), and the data dependency would ensure they are all fired off at the same time. Keep in mind I'm not looking for micro- or nanosecond precision.
The timing results I have so far are consistently proportional to each other.
10-11-2018 05:13 PM
The OS determines the order; one loop may run twice before another gets a turn. My first solution is slightly faster than a simple concatenate on my computer, but not by much. See below. (Old version attached)
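(Editorial aside: since LabVIEW diagrams can't be shown inline here, the snippet below is a rough text-language sketch in Python of the point above. The three "branches", the work each one does, and the array sizes are all made up for illustration; only the idea matters: branches started together finish in an order the OS scheduler decides, so it can vary run to run.)

```python
# Three "branches" started essentially together, analogous to parallel
# loops on a LabVIEW block diagram. The OS scheduler decides when each
# actually runs, so the completion order may differ between runs.
import threading

completion_order = []
lock = threading.Lock()

def branch(name):
    # Simulate a short piece of work; the scheduler may preempt it at any time.
    total = sum(i * i for i in range(200_000))
    with lock:
        completion_order.append(name)

threads = [threading.Thread(target=branch, args=(f"branch {i}",)) for i in range(3)]
for t in threads:
    t.start()   # all branches are "fired off" at the same time
for t in threads:
    t.join()

print(completion_order)  # the order typically varies from run to run
```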
10-11-2018 05:15 PM
Also, displaying the data takes time; there is only one UI thread, which all three loops need to share. Do not include updating a display in your benchmark.
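(Editorial aside: a minimal sketch of that benchmarking advice, again in Python as a stand-in for a LabVIEW sequence-structure benchmark. The data and the operation under test are invented; the point is that only the operation sits between the two time stamps, and the display update happens afterwards, outside the timed span.)

```python
# Time only the operation under test; update any display *after* the
# timed region, so the shared UI thread does not distort the measurement.
import time

data_a = list(range(1_000_000))
data_b = list(range(1_000_000))

t0 = time.perf_counter()
result = data_a + data_b          # the operation under test (concatenation)
t1 = time.perf_counter()

print(f"concatenate took {(t1 - t0) * 1e3:.2f} ms")  # display outside the timed span
```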
10-11-2018 05:19 PM
I thought you might say it was related to the OS and how CPU time is distributed. That does make sense, I agree. Thanks!
10-11-2018 05:26 PM
@mcduff wrote:
Also, displaying the data takes time; there is only one UI thread, which all three loops need to share. Do not include updating a display in your benchmark.
This is something I never realized, but makes sense. Thanks for the tip!
10-11-2018 05:26 PM
So if you are concerned about memory and speed and are starting with 1-D arrays, use the simple concatenate: you will use less memory, and it will be faster. It takes time to make a new 2-D array, allocate memory for it, transpose it, etc. If you are already starting with a 2-D array, then use the first method I posted; it is slightly faster and will work with any number of rows. (A rough illustration of the 1-D vs. 2-D trade-off follows below.)
good luck
mcduff
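(Editorial aside: the sketch below illustrates the trade-off mcduff describes, using NumPy as a stand-in for LabVIEW arrays. The array sizes and the specific 2-D construction are assumptions for illustration; the absolute timings will differ from any LabVIEW VI, but the relative cost of the extra allocation and copy is the point.)

```python
# Compare a simple 1-D concatenate against building a 2-D array and
# transposing it, which costs an extra allocation plus an extra copy.
import time
import numpy as np

a = np.arange(2_000_000, dtype=np.float64)
b = np.arange(2_000_000, dtype=np.float64)

# Simple 1-D concatenate: one new allocation, data copied once.
t0 = time.perf_counter()
flat = np.concatenate((a, b))
t1 = time.perf_counter()

# Build a 2-D array, then transpose and copy it into contiguous memory.
t2 = time.perf_counter()
two_d = np.vstack((a, b)).T.copy()
t3 = time.perf_counter()

print(f"1-D concatenate:       {(t1 - t0) * 1e3:.2f} ms")
print(f"2-D build + transpose: {(t3 - t2) * 1e3:.2f} ms")
```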