I have a question for which I have not been able to find a proper answer. I am using a USB-6211 through which I acquire a sinusoidal signal and process it on my computer. My code so far looks like the picture below.
The DAQ Assistant has a voltage channel set to RSE terminal configuration, with Samples to Read set to 1k and Rate set to 10 kHz. The while loop runs for a number of iterations, and on each iteration I get a sinusoidal waveform of 1000 samples, as expected. My problem is that the waveforms from two consecutive iterations do not align properly. Below I have plotted the part of the waveform that shows what happens at the transition between the 2nd and 3rd iterations.
Is there any way to fix this? I would appreciate any help.
Are you acquiring 1k points, exiting the loop, logging to disk, and then iterating in an external loop not shown? Or just iterating a number of times, building an array as you go via the indexing terminal, and then logging once?
You have a wait function in your while loop, which inserts a gap between successive iterations - likely the source of your problem. You need to cede resources to other processes so that you don't monopolize your CPU, but as written, you perform your data acquisition, then wait (or vice versa), and then iterate, producing the gaps. An alternative would be to replace the while loop with a timed loop that runs at your required rate, getting rid of the wait function (timed loops automatically adjust their execution period to match the configured value). This loop would then acquire continuously at the desired rate.
Disk operations are not deterministic, so in the timed loop, you would write that data into a queue or other FIFO buffer, and then create a second parallel while loop to read from that buffer and write to disk asynchronously, all without affecting your acquisition timing, which is higher priority.
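LabVIEW block diagrams can't be shown as text, so here is a rough Python analogy of the producer/consumer pattern just described, using `queue.Queue` as the FIFO buffer. The acquisition and the disk write are both simulated (block sizes and counts are made up for illustration); the point is only the structure: the acquisition loop hands each block to the queue immediately, and a parallel loop drains it, so slow logging never delays the next read.

```python
import queue
import threading

def producer(buf, n_blocks):
    # Simulates the acquisition loop: each iteration "acquires" one
    # block of 1000 samples and immediately hands it to the queue,
    # so non-deterministic disk writes never delay the next read.
    for i in range(n_blocks):
        block = [i] * 1000  # stand-in for 1000 acquired samples
        buf.put(block)
    buf.put(None)  # sentinel: tells the consumer loop to stop

def consumer(buf, log):
    # Runs in parallel and does the slow work (here: appending to a
    # list as a stand-in for writing to disk).
    while True:
        block = buf.get()
        if block is None:
            break
        log.append(block)

buf = queue.Queue()
log = []
t = threading.Thread(target=consumer, args=(buf, log))
t.start()
producer(buf, 5)
t.join()
print(len(log))  # all 5 blocks handed off, none lost
```

The same split carries over directly to LabVIEW: the timed loop plays the producer role, and a plain while loop with the file write plays the consumer.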
Alternatively, if you only need to write to disk once after acquisition as you have written, you are also dynamically building an array, which is not effective use of the memory manager. If you know the ultimate size of the array you need, start by initializing an array of that size prior to the while loop, and use the replace array subset function inside the loop to replace elements (passing the array between iterations with a shift register). This way, no dynamic memory allocations are necessary.
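As a text analogy for the Initialize Array / Replace Array Subset approach above (LabVIEW itself is graphical, so this is a hedged NumPy sketch with illustrative sizes, not the poster's code): the array is allocated once before the loop, and each iteration overwrites a slice instead of growing the array.

```python
import numpy as np

BLOCK = 1000   # samples per iteration
N_ITER = 10    # known number of loop iterations

# Equivalent of Initialize Array before the loop: one allocation up front.
data = np.zeros(N_ITER * BLOCK)

for i in range(N_ITER):
    samples = np.random.rand(BLOCK)  # stand-in for one DAQ read
    # Equivalent of Replace Array Subset inside the loop: write into the
    # preallocated region instead of appending (no reallocation, no copy).
    data[i * BLOCK:(i + 1) * BLOCK] = samples

print(data.shape)  # one contiguous buffer, filled in place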
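```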
Your DAQ Assistant is set to read N Samples. So it reads N samples and stops the task. Then on the next iteration it restarts the task and reads another N samples as they come in. But the real-world data that occurred between the end of the first loop iteration and the start of the next was never captured.
Change your Acquisition Mode to Continuous. Now the task keeps the buffer open and keeps reading data. When the loop comes back around to read 1000 samples, the 100 or so that arrived while the code was off doing other things will be sitting in the buffer and will be read once the remaining 900 come in. (100 is a guess; it depends on how long the rest of the loop takes.) Your data will be continuous with no lost samples.
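The difference between the two modes can be sketched with a toy Python model (all numbers are illustrative, matching the 1000-sample reads above with an assumed 100-sample gap per iteration; this is not real DAQmx code):

```python
def acquire(total_samples, read_size, gap, continuous):
    """Toy model of the two acquisition modes.

    total_samples: samples the signal source produces overall
    read_size:     samples returned per loop iteration (1000 here)
    gap:           samples arriving while the loop does other work
    continuous:    True  -> buffer stays open between reads
                   False -> task restarts; samples in the gap are lost
    """
    captured = 0
    pos = 0  # position in the real-world signal
    while pos + read_size <= total_samples:
        captured += read_size
        pos += read_size
        if not continuous:
            pos += gap  # task restart: these samples were never buffered
    return captured

n_samples_mode = acquire(10000, 1000, 100, continuous=False)
continuous_mode = acquire(10000, 1000, 100, continuous=True)
print(n_samples_mode, continuous_mode)  # N-Samples mode captures less
```

In the continuous case every sample the hardware produces ends up in a read; in the N-Samples case the gap between task stop and restart is simply gone, which is exactly the misalignment seen between iterations 2 and 3.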
You don't need the wait function in the loop because the timing of the DAQ Assistant will control the loop rate. Worse, if your wait is too large, you could eventually fill up the buffer if you don't read the data fast enough.
Be careful about that concatenating tunnel on the While Loop. If that loop runs long enough before you hit stop, you'll fill up your memory with that ever-growing array. In those cases, you'd be better off using a Producer/Consumer architecture with queues (or, in newer versions of LV, channel wires) to offload the file writing to a parallel loop.
Thank you very much @CFER_STS for your reply and hints. The truth is that my LabVIEW skills are quite limited at the moment and it will take me some time to work through all your suggestions, but I understand all of them at first reading. I will hopefully come back soon.
As RavensFan says, you need to use Continuous samples so the hardware is constantly acquiring data. To expand on what he says about filling up the buffer, here's a quick explanation.
With Continuous Samples, reading 1k samples @ 10kS/s, the acquisition of these 1000 samples takes 1/10 of a second. If the rest of your while loop takes longer than 1/10 of a second to complete, it will be impossible to read the samples from the buffer faster than they are coming in, you will always have a surplus of samples in the buffer and eventually it will fill up and new data will be lost. That's the reason for using a producer/consumer architecture so that the producer loop is only tasked with reading those samples and dumping them to the consumer.
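The backlog arithmetic above can be put into numbers. A hedged Python sketch, using the thread's 10 kS/s rate and 1000-sample reads with an assumed 150 ms loop time (any value over 100 ms produces a backlog):

```python
RATE = 10_000      # sample rate in S/s (from the DAQ Assistant config)
READ_SIZE = 1_000  # samples read per loop iteration
LOOP_MS = 150      # assumed total loop time in ms; illustrative only

# Time for the hardware to produce one read's worth of samples: 100 ms.
acquire_ms = READ_SIZE * 1000 // RATE

# Samples left sitting in the buffer after each iteration. This surplus
# grows every iteration until the buffer overflows and data is lost.
surplus_per_iter = (LOOP_MS - acquire_ms) * RATE // 1000

print(acquire_ms, surplus_per_iter)  # 100 ms per read, 500-sample backlog
```

A producer loop that does nothing but read and enqueue keeps its iteration time under `acquire_ms`, so the surplus stays at zero.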
@RavensFan your suggestion seems to solve the problem. So far I have not spotted any data loss between iterations. I also now understand the difference between N Samples and Continuous. I will follow your suggestions and try to improve my code as I progress with my application. Thank you!