Please attach your VI so that (a) we can better understand what you want to do, (b) we can try it for ourselves to see if we can identify the source of the slowness, and (c) we can try out ideas relevant to your question without doing a lot of work that is of no benefit to ourselves. Do not attach a non-executable picture of (part of) your Block Diagram -- we cannot "test" a picture.
What function are you using to concatenate the data? I think Build Array with the Concatenate Inputs option checked is a lot faster than Insert Into Array.
I just did a benchmark with 100 billion points, and it's 40 times faster.
The key may be choosing the right function.
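The LabVIEW specifics aside, the underlying effect is general: growing an array one piece at a time inside a loop copies the whole array on every pass (quadratic work), while collecting the pieces and concatenating once is linear. A minimal sketch of the same trade-off in Python (the sizes here are toy values, not the benchmark above):

```python
import time

chunk = [0.0] * 500   # one "scan" worth of points (toy size)
n_chunks = 500

# Grow-in-loop: each concatenation copies the entire array so far,
# analogous to calling Insert Into Array once per iteration.
start = time.perf_counter()
slow = []
for _ in range(n_chunks):
    slow = slow + chunk          # full copy of the growing array every pass
slow_time = time.perf_counter() - start

# Collect then concatenate once, analogous to Build Array (Concatenate Inputs)
# or preallocating and filling with Replace Array Subset.
start = time.perf_counter()
parts = []
for _ in range(n_chunks):
    parts.append(chunk)
fast = [x for p in parts for x in p]
fast_time = time.perf_counter() - start

assert slow == fast              # same result, very different cost
print(f"loop concat: {slow_time:.3f} s, single concat: {fast_time:.3f} s")
```

The speedup ratio grows with the array size, which is why the difference only becomes dramatic at millions of points.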
I've attached my example. The input constant cluster is the analogue and digital waveforms for a single scan; I then concatenate these with delays to make a full time-resolved scan. In the code I repeat the scan once per minute for 10 hours. LabVIEW generates those waveforms in about 30 seconds on my PC. Any help welcome!
P.S. VI too large to upload, apologies.
You are dealing with some very large data sets. Have you considered streaming the data to a file instead of ever-increasing memory usage?
And what are you doing with these waveforms? It is likely that a DAQ or signal generator could just run in regeneration mode to just repeat your waveform for however long you need it to.
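To illustrate the streaming suggestion (in Python rather than LabVIEW, with toy sizes): generate one repeat unit once, then write it to disk as many times as needed, so the full dataset never exists in memory at once. The file name and sizes below are arbitrary assumptions for the sketch; in LabVIEW the equivalent would be appending to a binary or TDMS file inside the loop.

```python
import os
import struct
import tempfile

SAMPLES_PER_CHUNK = 1_000   # one "scan" worth of points (toy size)
N_CHUNKS = 100              # repeat count; the real case would be hours of data

# Build one chunk once and pack it as little-endian doubles.
chunk = [float(i) for i in range(SAMPLES_PER_CHUNK)]
packed = struct.pack(f"<{SAMPLES_PER_CHUNK}d", *chunk)

# Stream the repeats straight to disk; memory holds only one chunk.
path = os.path.join(tempfile.mkdtemp(), "waveform.bin")
with open(path, "wb") as f:
    for _ in range(N_CHUNKS):
        f.write(packed)

size = os.path.getsize(path)
print(f"wrote {size} bytes ({size // 8:,} doubles) holding one chunk in memory")
```

Sequential appends like this are fast on modern disks, which is why file streaming is usually less of a bottleneck than it first appears.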
I've got a lot of available memory, and I fear streaming to a hard disk would cause more of a slowdown.
My current solution has the waveforms being sent periodically, and that does work; I would just like the ideal case where I can prebuild a 10-hour waveform in less than a second.
I'm puzzled -- I'm clearly doing something very different from what you are doing in your code. It looked to me like you had a waveform with a million points (10 seconds at 100 kHz) and you were joining it to two other waveforms of some size. So I cobbled up some code to do just that -- it generated a Waveform with a Duration of 30 (I appended the Waveform to itself three times, so 3 * 10 = 30), the size was three million points (also correct), and it took 13 milliseconds to do this. Here's the code ...
Obviously, I'm missing something important (or have focused on the wrong part of the problem). It would seem to me that if the waveform plays for 10 seconds, and you can generate the next iteration in a few milliseconds, then exploiting LabVIEW's parallelism leaves you with 9.98 seconds to do other interesting things ...
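In text-language terms, the append-to-itself step above amounts to a single tiling copy of a contiguous block, which is why it only takes milliseconds. A rough Python analogue (toy data; the LabVIEW version would use Append Waveforms on real waveform data):

```python
import time

RATE = 100_000                 # 100 kHz sample rate
base = [0.0] * (10 * RATE)     # one 10-second waveform: a million points

start = time.perf_counter()
tripled = base * 3             # one contiguous copy, no per-element loop
elapsed = time.perf_counter() - start

print(f"built a 30 s waveform ({len(tripled):,} points) "
      f"in {elapsed * 1000:.1f} ms")
```

The cost here is a single memory copy of the final size, so it scales linearly with the output length rather than with the number of append operations.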
OK, a 10-hour waveform at 100 kHz is 3.6 Gigasamples, so if we're dealing with double-precision waveforms, that's just under 30 GB. I see a real problem here ...
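The arithmetic behind that estimate, spelled out (8 bytes per double-precision sample):

```python
RATE = 100_000               # samples per second (100 kHz)
SECONDS = 10 * 60 * 60       # 10 hours
BYTES_PER_SAMPLE = 8         # double-precision float

samples = RATE * SECONDS                        # 3.6 billion samples
gigabytes = samples * BYTES_PER_SAMPLE / 1e9    # decimal GB

print(f"{samples / 1e9:.1f} Gsamples -> {gigabytes:.1f} GB")
```

At 28.8 GB for the samples alone (before any copies LabVIEW makes while building the array), prebuilding the whole 10-hour waveform in memory is impractical on most machines, which is what makes the streaming or regeneration approaches above attractive.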