From Saturday, Nov 23rd 7:00 PM CST - Sunday, Nov 24th 7:45 AM CST, ni.com will undergo system upgrades that may result in temporary service interruption.
We appreciate your patience as we improve our online experience.
12-08-2023 06:26 AM
Dear all,
I would like to use an NI USB-6216 (BNC) to simultaneously read and write data on specific channels. For this, I created two (or more) while loops: one to acquire the analogue inputs, and others to generate the digital/analogue outputs. However, when multiple while loops run in one VI, the rate of data acquisition on the analogue input channels (and of data generation on the output channels) is limited.
Initially I configured the sample clock with "Sample Mode" = Continuous, "Rate" = 2000, and "Samples per channel" = 200. As I understand it, the DAQ board should then acquire and generate data at a rate of 2 kHz, with data transferred in packages of 200 samples. Hence each iteration of the while loop should take at most 100 ms to sustain the 2000 Hz rate. The specs for the NI USB-6216 (BNC) look good, 250 kS/s per channel, so I am far from the limit.
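The arithmetic in the paragraph above can be double-checked in a few lines (plain Python; the numbers are the ones quoted in this post):

```python
# Timing arithmetic for a continuous task at the settings above.
rate_hz = 2000            # sample clock rate
samples_per_read = 200    # samples fetched per loop iteration

# Time spanned by one 200-sample block at 2 kHz:
loop_period_s = samples_per_read / rate_hz
print(loop_period_s)      # 0.1 -> each iteration must finish within 100 ms

# Headroom against the USB-6216's 250 kS/s per-channel limit:
print(rate_hz / 250_000)  # 0.008 -> under 1% of the device limit
```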
However, when I look at the elapsed time of each loop, it often takes longer: the "analogue input" while loop has an elapsed time of ~100 ms, while the digital output while loop is slower, 150-180 ms and sometimes above 200 ms.
I tried both parallel and sequential structuring of the channels; the delays are similar and far too large.
Also, on the right-hand side of the figure you can see how much the data are delayed (the spaces between the white bars of recorded data):
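The growing gap is consistent with simple arithmetic (plain Python; the loop time is one of the values measured above):

```python
# Backlog growth when the reading loop is slower than the sample clock.
rate_hz = 2000        # AI sample clock
block = 200           # samples fetched per iteration (covers 100 ms)
loop_time_s = 0.150   # measured iteration time of the slow loop

arrived = rate_hz * loop_time_s   # samples produced during one iteration
backlog_growth = arrived - block
print(backlog_growth)  # 100.0 -> the buffer falls further behind each loop
```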
I have also attached the example vi. Does anyone have an idea of how to improve timing?
Thank you!
12-08-2023 07:42 AM - edited 12-08-2023 07:54 AM
I cannot view your VI, which was saved in LabVIEW 2017, with either LabVIEW 2019 or LabVIEW 2021 (both of which should be able to open it). The Front Panel is close to 6000x1080 pixels in size (meaning it would take three nice monitors side-by-side to view the entire Front Panel!), and when I open the Block Diagram, I can't even get the Navigation Window to show me the entire screen (and I have to close LabVIEW)!
From the little bit that I was able to see, you seem to have two loops acquiring data, each doing 1 channel and multiple samples (I was unable to scroll and get the particulars), but let's say 1 channel x 1000 samples, done in two parallel loops. Can't your AO do two channels? If so, let the "hardware" do the "hard" work.
Bob Schor
P.S. -- Navigation View "unlocked" while I was writing the above. Your block diagram is only five screens wide by three screens tall. When I learned LabVIEW, I learned about LabVIEW "Style" from Peter Blume's "The LabVIEW Style Book", and now 95% of my LabVIEW code fits on a single laptop screen (admittedly larger now than when I started with LabVIEW!). The secret is Sub-VIs, every one with a unique Icon (if only 2-3 lines of short text).
I've not used the 6216, but I looked at its manual and I think it is capable of 2-channel D/A output ...
BS
12-08-2023 08:58 AM
It is still above 100 ms even if I keep only one analogue output while loop running. Back to my initial question: at 250 kS/s per channel, the rate I am trying to record at should be far from any limit.
By the way, I have not seen many example VIs out there that simultaneously read AI and write AO on a DAQ. Could anyone point out such examples?
12-08-2023 09:21 AM - edited 12-08-2023 09:25 AM
Hi,
when I open your VI, the block diagram has a size of ~4800x3000 pixels, still a bit too large IMHO…
You really need to clean up that VI before we can start a good analysis!
12-08-2023 09:39 AM
Hi,
1. There are only 3 frames in the sequence: 1. initialize, 2. read/write data (while loops), and 3. close
2. It is done to work with each analogue input separately, that is, plot it, extract the amplitude at a specific carrier frequency, and plot the result. These computations are not problematic, as this loop runs fast, 5-10 ms (100 ms if I constrain and time it).
3. Why I read "-1" samples from my AI channels: per the DAQmx Read help, "number of samples per channel" specifies the number of samples to read; if you leave this input unwired or set it to -1, NI-DAQmx determines how many samples to read based on whether the task acquires samples continuously or acquires a finite number of samples.
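The practical consequence of wiring -1 in a continuous task is that each read returns whatever has accumulated in the buffer, so the block size tracks the loop's iteration time. A small simulation of that effect (plain Python; the iteration times are illustrative, not measured):

```python
# With 'number of samples per channel' = -1 on a continuous task,
# DAQmx Read returns all samples currently available, so the size of
# each block depends on how long the previous loop iteration took.
rate_hz = 2000
loop_times_s = [0.100, 0.150, 0.180, 0.095]  # illustrative iteration times

block_sizes = [round(rate_hz * t) for t in loop_times_s]
print(block_sizes)  # [200, 300, 360, 190] -- variable-size reads
```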
12-08-2023 10:09 AM
Hi Yaro,
@Yaro42 wrote:
Hi,
1. There are only 3 frames in the sequence: 1. initialize, 2. read/write data (while loops), and 3. close
2. It is done to work with each analogue input separately, that is, plot it, extract the amplitude at a specific carrier frequency, and plot the result. These computations are not problematic, as this loop runs fast, 5-10 ms (100 ms if I constrain and time it).
3. Why I read "-1" samples from my AI channels: per the DAQmx Read help, "number of samples per channel" specifies the number of samples to read; if you leave this input unwired or set it to -1, NI-DAQmx determines how many samples to read based on whether the task acquires samples continuously or acquires a finite number of samples.
12-11-2023 04:28 PM
Maybe I can follow up later, but here are a few quick thoughts. Warning -- it may be a bit of a long road ahead of you, if I'm right in guessing roughly where you're headed.
1. Your AI task calls DAQmx Timing to set up a 2000 Hz sample clock. The 200 you wire to 'samples per channel' does not mean what you think it means; for a Continuous Sampling task, it often doesn't mean anything at all. (Technically, you're setting a lower limit on the buffer size, but DAQmx will choose something larger and more suitable if it thinks your number is too small.)
2. Your AO tasks do NOT call DAQmx Timing at all, putting them in software-timed "on-demand" sampling mode. Two of the implications: each DAQmx Write updates the output immediately when it is called, and the timing of those updates is at the mercy of your OS's scheduling, so it will be both slow and jittery.
3. Faster AO demands that you use a hardware clock for timing, at which point you must also put both AO channels into 1 common task.
4. Odds are you want the AO and AI to be correlated in some manner, bringing in some further considerations to get them into hardware sync.
5. Once you get involved with hardware timing, you *also* get involved with buffering and latency. Since your code shows signs of wanting to change AO outputs as a function of user inputs, you'll have "another hill to climb": detailed tradeoffs among update rate, buffer size, and latency.
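To make points 1 and 5 concrete: NI's buffering documentation gives a rate-based table of default buffer sizes for continuous tasks, and whatever is already queued ahead in an AO buffer must play out before a changed user input is heard. A sketch in plain Python (the table thresholds are paraphrased from the DAQmx help, so treat them as approximate; the queued-samples figure is purely illustrative):

```python
def default_input_buffer(rate_hz: float) -> int:
    """Approximate DAQmx default buffer size (samples per channel) for a
    continuous task, per the rate table in NI's buffering documentation."""
    if rate_hz <= 100:
        return 1_000
    if rate_hz <= 10_000:
        return 10_000
    if rate_hz <= 1_000_000:
        return 100_000
    return 1_000_000

# Point 1: at 2 kHz, DAQmx would pick a 10,000-sample buffer, so the 200
# wired to 'samples per channel' acts only as a (non-binding) lower limit.
print(max(200, default_input_buffer(2_000)))  # 10000

# Point 5: samples already queued in a buffered AO task must play out
# before a changed user input takes effect, so latency = backlog / rate.
rate_hz = 2000
queued_samples = 1_000     # illustrative amount written ahead
print(queued_samples / rate_hz)  # 0.5 -> half a second of latency
```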
Sorry it's mostly bad news, just trying to let you know from the outset that there's probably gonna be quite a bit you need to learn and that's gonna take some time...
-Kevin P