12-20-2010 04:06 PM
That does seem to work!
I am now using Continuous Mode with Regeneration at 4 kHz, with a buffer size of 2 samples and a 1 ms write timeout. It's a bit jittery, but I can update a 32-channel task across two NI 9264 modules every 5 ms.
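In DAQmx C API terms, the setup I ended up with is roughly the sketch below (the device/channel names are placeholders for my two 9264 modules, and error checking is omitted):

#include <NIDAQmx.h>
#include <windows.h>   /* Sleep() */

int main(void)
{
    TaskHandle ao = 0;
    int32      written = 0;
    float64    data[32 * 2] = {0};   /* 2 samples per channel x 32 channels */

    DAQmxCreateTask("", &ao);
    DAQmxCreateAOVoltageChan(ao, "cDAQ1Mod1/ao0:15,cDAQ1Mod2/ao0:15", "",
                             -10.0, 10.0, DAQmx_Val_Volts, NULL);
    DAQmxCfgSampClkTiming(ao, "", 4000.0, DAQmx_Val_Rising,
                          DAQmx_Val_ContSamps, 2);
    DAQmxCfgOutputBuffer(ao, 2);                        /* 2-sample host buffer per channel */
    DAQmxSetWriteRegenMode(ao, DAQmx_Val_AllowRegen);   /* repeat old data until new data arrives */

    DAQmxWriteAnalogF64(ao, 2, 0, 1.0, DAQmx_Val_GroupByChannel,
                        data, &written, NULL);          /* prime the buffer */
    DAQmxStartTask(ao);

    for (;;) {
        /* refresh data[] with the new output values here */
        DAQmxWriteAnalogF64(ao, 2, 0, 0.001, DAQmx_Val_GroupByChannel,
                            data, &written, NULL);      /* 1 ms timeout */
        Sleep(5);                                       /* ~5 ms update period */
    }
}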
Thank you for the advice.
05-17-2011 08:33 AM
ClausB wrote:
The strange thing about all this is that, as you can see in my API call above, I set the timeout to zero. But the more channels in the task, the longer it takes the API call to return to my code (see timing table above). When it does return, it signals no error. So it must be waiting for something, but it should not wait because the timeout is zero.
Paul C. wrote:
As for the behavior you are seeing with multiple channels, I agree that this is strange, and R&D is looking into it to see if they can improve it.
Have they found any way to improve this?
The continuous mode workaround occasionally fails with error -200621, which says "the driver could not write data to the device fast enough to keep up with the device output rate." I thought regeneration would prevent such an error.
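For context, the zero-timeout static write from the quoted post boils down to something like this (channel names are placeholders, error checking omitted); the call should return immediately, yet the time it takes grows with the number of channels in the task:

#include <NIDAQmx.h>

int main(void)
{
    TaskHandle ao = 0;
    int32      written = 0;
    float64    values[32] = {0};   /* one value per channel */

    DAQmxCreateTask("", &ao);
    DAQmxCreateAOVoltageChan(ao, "cDAQ1Mod1/ao0:15,cDAQ1Mod2/ao0:15", "",
                             -10.0, 10.0, DAQmx_Val_Volts, NULL);
    DAQmxStartTask(ao);   /* on-demand task: no sample clock configured */

    /* one sample per channel, autoStart off, timeout = 0.0 */
    DAQmxWriteAnalogF64(ao, 1, 0, 0.0, DAQmx_Val_GroupByChannel,
                        values, &written, NULL);

    DAQmxClearTask(ao);
    return 0;
}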
05-18-2011 01:01 PM - edited 05-18-2011 01:01 PM
Hi ClausB,
As far as I can tell there are two issues here:
1. The amount of time required for static AO writes increases proportionally as you add more channels to the task:
This is being investigated by NI under corrective action request 299889. I'm not sure if a fix will be possible or not, but NI is looking into it. If you want to check on the status of the corrective action request you can reference it when you call in to NI support (or just post back on this forum).
2. The continuous task is underflowing with error -200621:
You could try writing more samples to the buffer. This adds more latency, but if you increase the sample rate as well, I think it should give better results than what you are currently seeing; however, I have not benchmarked this myself. The whole problem with running the continuous task is that it introduces latency between the write and the moment the sample is actually clocked out on the hardware. The benefit is that the data is transferred in one larger packet. Without doing extensive benchmarking I can't say for certain in which cases the continuous task will be sustainable while also providing lower latency than the static writes you are using now.
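As a rough illustration only (the numbers are made up and not benchmarked), a 20 kHz clock with 10 samples per channel keeps about the same 0.5 ms of buffered data as the 4 kHz / 2-sample setup, but hands the driver a larger packet on each write:

#include <NIDAQmx.h>

int main(void)
{
    TaskHandle ao = 0;
    int32      written = 0;
    float64    data[32 * 10] = {0};   /* 10 samples per channel x 32 channels */

    DAQmxCreateTask("", &ao);
    DAQmxCreateAOVoltageChan(ao, "cDAQ1Mod1/ao0:15,cDAQ1Mod2/ao0:15", "",
                             -10.0, 10.0, DAQmx_Val_Volts, NULL);
    DAQmxCfgSampClkTiming(ao, "", 20000.0, DAQmx_Val_Rising,
                          DAQmx_Val_ContSamps, 10);
    DAQmxCfgOutputBuffer(ao, 10);
    DAQmxSetWriteRegenMode(ao, DAQmx_Val_AllowRegen);

    DAQmxWriteAnalogF64(ao, 10, 0, 1.0, DAQmx_Val_GroupByChannel,
                        data, &written, NULL);   /* prime the buffer */
    DAQmxStartTask(ao);

    /* ...then refresh data[] and rewrite 10 samples per channel on each loop pass... */
    return 0;
}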
Ideally I think the latency with static writes should be addressed so the workaround in #2 wouldn't be necessary. However, I'm not sure how difficult a fix would be to implement or whether it would even be possible. As has been said before, USB is far from the ideal bus if single-point latency is a concern. Even with a possible fix to issue #1, the single-point latency of USB is going to far exceed what you might see with a PCI- or PCIe-based I/O product.
Best Regards,
06-14-2012 10:14 AM
I finally tried the CAR fix in DAQmx 9.5.1 with a 9264 module and 16 AOs updating at 200 Hz and it works.
Thanks!
06-14-2012 01:47 PM
I posted too soon. I had tested the AOs along with a 32-channel AI task in Continuous Samples mode and it ran fine. But when I added more modules and tasks, it slowed to about 6 ms. The killer was a DI task in 1 Sample On Demand mode. When I changed that task to Continuous Samples mode, it all ran fine again.
So there is still a problem with the 1 Sample On Demand mode at 200 Hz, at least in tasks other than AO. The original system that revealed this problem had a DI task too - maybe that was part of the original problem. I'll have to test again on DAQmx 9.1 when I get some time.
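For anyone following along, the change that fixed it amounts to giving the DI task a sample clock instead of reading on demand. A rough C API sketch (the module name is a placeholder, and it assumes the module and chassis support hardware-timed DI):

#include <NIDAQmx.h>

int main(void)
{
    TaskHandle di = 0;
    uInt32     state = 0;
    int32      read = 0;

    DAQmxCreateTask("", &di);
    DAQmxCreateDIChan(di, "cDAQ1Mod3/port0", "", DAQmx_Val_ChanForAllLines);
    /* Continuous Samples instead of 1 Sample On Demand */
    DAQmxCfgSampClkTiming(di, "", 1000.0, DAQmx_Val_Rising,
                          DAQmx_Val_ContSamps, 1000);
    /* let the buffer wrap and always read the newest sample */
    DAQmxSetReadOverWrite(di, DAQmx_Val_OverwriteUnreadSamps);
    DAQmxSetReadRelativeTo(di, DAQmx_Val_MostRecentSamp);
    DAQmxSetReadOffset(di, -1);
    DAQmxStartTask(di);

    for (;;) {   /* application loop, ~5 ms per pass */
        DAQmxReadDigitalU32(di, 1, 0.01, DAQmx_Val_GroupByChannel,
                            &state, 1, &read, NULL);
        /* use state here */
    }
}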
11-21-2018 06:54 PM
@John_P1 wrote:
Hi ClausB,
As far as I can tell there are two issues here:
1. The amount of time required for static AO writes increases proportionally as you add more channels to the task:
This is being investigated by NI under corrective action request 299889. I'm not sure if a fix will be possible or not, but NI is looking into it. If you want to check on the status of the corrective action request you can reference it when you call in to NI support (or just post back on this forum).
I am a coworker of Claus, and I am developing the new version of our software, which runs on the latest software platforms, from Windows (currently version 1809, build 17763.134) to DAQmx (currently 17.6). The new software can run much faster than the 200 Hz cycle of the old one, so this problem is exacerbated. I am still testing on our most popular hardware platform (a cDAQ-9174 and some AI/AO/DI/DO modules) and observing the same issues. I am not willing to do any "tuning" of acquisition settings, be it the DAQmx task configuration or the parameters of the DAQmxWriteAnalogF64 call. Instead, I would like to ask two questions:
1) Taking John up on his suggestion to ask for an update on the status of the corrective action request above: has this been addressed (it doesn't seem like it), and what should we expect? And,
2) If a solution is as simple as moving to a PCIe board, I'd be willing to do it. I just need some assurance that such an attempt would not be a waste of time and money. Has anyone tried that?
Kamen
11-23-2018 07:38 AM
I can't speak to your overall code, but I'd be very confident that you can do on-demand DAQmx driver calls at well over 200 Hz when you use a PCIe-based desktop card. I recall doing some quick benchmarking a year or more ago.
Hmm, I can't seem to find the thread where I reported it. From memory, I believe I was able to run a loop with on-demand DAQmx calls at upwards of 30 kHz. Sorry I can't find the specific thread just now.
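In DAQmx C API terms, the kind of quick benchmark I have in mind is roughly the loop below (the device name and iteration count are arbitrary placeholders, I'm using an AO write but the same idea applies to reads, and the per-call time you measure will depend heavily on the device and the bus):

#include <stdio.h>
#include <windows.h>
#include <NIDAQmx.h>

int main(void)
{
    TaskHandle    ao = 0;
    int32         written = 0;
    float64       value = 0.0;
    LARGE_INTEGER freq, t0, t1;
    const int     N = 10000;

    DAQmxCreateTask("", &ao);
    /* on-demand task: no sample clock configured, one write = one update */
    DAQmxCreateAOVoltageChan(ao, "Dev1/ao0", "", -10.0, 10.0, DAQmx_Val_Volts, NULL);
    DAQmxStartTask(ao);

    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&t0);
    for (int i = 0; i < N; i++) {
        value = (i % 2) ? 1.0 : -1.0;   /* alternate the output value */
        DAQmxWriteAnalogF64(ao, 1, 0, 0.0, DAQmx_Val_GroupByChannel,
                            &value, &written, NULL);
    }
    QueryPerformanceCounter(&t1);

    printf("%.1f us per on-demand write\n",
           1e6 * (double)(t1.QuadPart - t0.QuadPart) / freq.QuadPart / N);

    DAQmxClearTask(ao);
    return 0;
}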
-Kevin P
11-26-2018 06:51 PM
@Kevin_Price wrote:
I can't speak to your overall code, but I'd be very confident that you can do on-demand DAQmx driver calls at well over 200 Hz when you use a PCIe-based desktop card. I recall doing some quick benchmarking a year or more ago.
Hmm, I can't seem to find the thread where I reported it. From memory, I believe I was able to run a loop with on-demand DAQmx calls at upwards of 30 kHz. Sorry I can't find the specific thread just now.
-Kevin P
Thank you, Kevin. I guess I'll request a loaner from NI and test it myself. I just wish they had fixed the problem I referred to above.
Kamen