Multifunction DAQ

Fastest way to do single read/write, digital and analog? Thanks!

I need to do immediate, on-demand single reads/writes for digital and analog I/O, plus counter inputs (buffered period measurements).  For the analog inputs I have a continuous acquisition running at 2 kHz and just grab the latest value (our loops run at 100 Hz, so that is close enough).  The digital inputs I have to read immediately.  With simple I/O drivers this is very fast; with the DAQmx libraries, not so fast at all.  I have tried "committing", "reserving", and "starting" the tasks, but it does not seem to speed things up any.  Any help will be appreciated.  Thanks, Joe.
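For concreteness, here is a minimal sketch of that pattern in the NI-DAQmx C API (the DAQmx VB6 examples wrap these same entry points).  "Dev1" and the port name are placeholders, and error checking is omitted:

#include <NIDAQmx.h>

int main(void)
{
    TaskHandle di = 0;
    uInt32 port0;
    int32 read;

    DAQmxCreateTask("", &di);
    /* No sample clock configured, so reads are software-timed (on demand) */
    DAQmxCreateDIChan(di, "Dev1/port0", "", DAQmx_Val_ChanForAllLines);

    /* Commit and start once, outside the loop, so each iteration pays
       only for the read call itself */
    DAQmxTaskControl(di, DAQmx_Val_Task_Commit);
    DAQmxStartTask(di);

    for (int i = 0; i < 100000; i++) {
        /* 1 sample per channel, 1 s timeout */
        DAQmxReadDigitalU32(di, 1, 1.0, DAQmx_Val_GroupByChannel,
                            &port0, 1, &read, NULL);
    }

    DAQmxClearTask(di);
    return 0;
}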
 
Message 1 of 8
Perhaps this is not a real answer to your question, but you could use analog inputs to read the digital signals. I sometimes do that to gain speed and to synchronize AI to DI signals. You can avoid some lost DI samples too.
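
A minimal sketch of that idea in the NI-DAQmx C API ("Dev1/ai0" and the 2.5 V TTL threshold are assumptions; error checking is omitted):

#include <NIDAQmx.h>
#include <stdio.h>

int main(void)
{
    /* Read a TTL line through an analog input channel and recover the
       digital state by thresholding in software */
    TaskHandle ai = 0;
    float64 volts;
    int32 read;

    DAQmxCreateTask("", &ai);
    DAQmxCreateAIVoltageChan(ai, "Dev1/ai0", "", DAQmx_Val_RSE,
                             0.0, 5.0, DAQmx_Val_Volts, NULL);
    DAQmxStartTask(ai);

    DAQmxReadAnalogF64(ai, 1, 1.0, DAQmx_Val_GroupByChannel,
                       &volts, 1, &read, NULL);
    printf("line state: %d\n", volts > 2.5);  /* 1 = high, 0 = low */

    DAQmxClearTask(ai);
    return 0;
}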


Message 2 of 8

Javier,

Thanks for the response - I've done the same thing, on different hardware.

It looks like single digital input reads are quite a bit faster than single analog input reads - the digital input read "task" seems to default to different options, or something similar.  In any case, the per-read overhead I was seeing on the analog input reads is not happening with the digital input reads - I get about 100,000 digital input read calls per second.  Maybe the analog side is slower because of hardware setup or something - at 250 kS/s, the time for a single analog input read (a 16-channel scan) must be dominated by something other than the actual analog sampling time.

Thanks,

Joe

Message 3 of 8

Hi DynoMan,

I have benchmarked the software-timed DAQmx Digital Read/Digital Write (Port U32) at between 15 and 20 microseconds on Windows XP on my average (1.8 GHz) Dell workstation.  You are getting about 100,000 digital read calls per second, so that is about what I would expect on Windows XP.

The overhead is due to the kernel transition time on Windows.  For comparison, on an 8176 controller running LabVIEW Real-Time (ETS), a digital read or write takes more like 5 microseconds, because on the real-time operating system there is no kernel transition - everything runs in the kernel!

If you need to do more than 100,000 digital reads per second, both of the solutions I can recommend would mean buying more stuff:

1. Using one of the new M-Series boards, you can use "correlated digital", which is hardware-timed digital.  With hardware-timed digital combined with continuous acquisition you will get much higher throughput, because the data is streamed between the processor and the device using DMA.  This helps a lot if you want to read multiple digital values at once.  With correlated digital you do pick up some additional per-read overhead, because of the effect of the extra processing required for buffered reads and writes.  (This additional overhead due to buffering is probably what you were seeing in the analog case.)  But if you can read multiple samples at a time (e.g. 100, 1000, or 10000 samples), buffered reads are the way to go - see the sketch after this list.

2. If you get LabVIEW Real-Time and run your app on the real-time operating system, you will get a significant performance increase for software-timed digital (3x to 4x).  But a real-time operating system has its own pros and cons, so this may or may not be the best solution for your application.
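
A hedged sketch of option 1 in the NI-DAQmx C API (device names, clock routing, and rates are assumptions; on M-Series boards the DI subsystem has no internal sample clock, so this borrows the AI sample clock - a counter output or an external clock would also work):

#include <NIDAQmx.h>

int main(void)
{
    TaskHandle di = 0;
    uInt32 data[1000];
    int32 read;

    DAQmxCreateTask("", &di);
    DAQmxCreateDIChan(di, "Dev1/port0", "", DAQmx_Val_ChanForAllLines);
    /* Hardware-timed ("correlated") DI, clocked by the AI sample clock;
       assumes an AI task is running so that clock is actually ticking */
    DAQmxCfgSampClkTiming(di, "/Dev1/ai/SampleClock", 2000.0,
                          DAQmx_Val_Rising, DAQmx_Val_ContSamps, 10000);
    DAQmxStartTask(di);

    /* 1000 samples per call instead of 1: the per-read overhead is paid
       once, and DMA streams the data in the background */
    DAQmxReadDigitalU32(di, 1000, 10.0, DAQmx_Val_GroupByChannel,
                        data, 1000, &read, NULL);

    DAQmxClearTask(di);
    return 0;
}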

Message 4 of 8

Jonathan,

FYI, I am using a PCI-6221 I/O board with DAQmx 8.0, from VB 6.

The digital input reads at 100,000 Hz are fast enough - it is the analog input reads that were slower than expected.

I have the analog reads in continuous mode now, just grabbing the latest values, which is OK.

The single-read digital routine (reads P0, P1, P2) is faster than the single-read analog routine (reads AI 0-15 in RSE mode) by more than I think can be explained by the analog sampling time.  I'm not sure exactly how DAQmx works, but I think I told it to do *scans* at 2,000 Hz, so the analog read rate would be 2,000 * 16 = 32,000 samples/second, well below the 250,000 S/s rate of the board.  (I got errors at 100,000 Hz, which the board can do for one channel, so I am thinking the 2,000 Hz is the scan rate, not the conversion rate.)  So the scan time is about 0.0005 seconds (2,000 Hz).  However, I had to set the task and read up to do at least 2 samples (I got an error message when trying to use 1), so say 0.001 seconds per scan.  I would therefore expect about 1,000 calls a second to be possible - however, I could only get about 100 calls per second.  Any idea why?  I have to guess there is some software overhead happening, or more likely a board-configuration thing for gain or routing or something.  Or maybe I am just not doing it quite the right way - I am basically using one of the (10 or so!) DAQmx VB examples for the read logic.

Anyway, continuous acquisition mode and grabbing the last sample read seems to be OK for our purposes.

FYI, if I do a most-recent-sample-relative read with an offset of 0, it is much, much slower than if I do the same read with an offset of -1.  Not sure what that means, but that's what my timing shows.
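(For reference, a minimal C sketch of this setup - the DAQmx VB6 examples wrap the same C entry points, and the device/channel names are placeholders.  The comment about the offset is an inference from the documented read-position semantics, not something verified here.)

#include <NIDAQmx.h>

int main(void)
{
    TaskHandle ai = 0;
    float64 scan[16];
    int32 read;

    DAQmxCreateTask("", &ai);
    DAQmxCreateAIVoltageChan(ai, "Dev1/ai0:15", "", DAQmx_Val_RSE,
                             -10.0, 10.0, DAQmx_Val_Volts, NULL);
    /* Continuous 16-channel scan at 2 kHz, 1 s of buffer per channel */
    DAQmxCfgSampClkTiming(ai, "", 2000.0, DAQmx_Val_Rising,
                          DAQmx_Val_ContSamps, 2000);
    /* Offset -1 from the most recent sample returns the scan already in
       the buffer immediately; offset 0 points just past the newest scan,
       so the read would block up to one scan period (0.5 ms at 2 kHz)
       waiting for fresh data - which would explain the timing above */
    DAQmxSetReadRelativeTo(ai, DAQmx_Val_MostRecentSamp);
    DAQmxSetReadOffset(ai, -1);
    DAQmxStartTask(ai);

    /* Grab the latest scan on demand */
    DAQmxReadAnalogF64(ai, 1, 1.0, DAQmx_Val_GroupByChannel,
                       scan, 16, &read, NULL);

    DAQmxClearTask(ai);
    return 0;
}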

Joe

 

Message 5 of 8
Hi Joe,
 
Sorry, I didn't understand your problem the first time.
Also, I didn't notice you said "Visual Basic" - my previous benchmarks assumed LabVIEW.  I don't have anything against VB, but I have to admit I'm not a VB expert.  Perhaps a VB expert will respond to this post too.
 
However, I have some ideas.  Since you are using continuous reads for AI, you are doing buffering.  Buffering adds some per-read overhead, plus a little additional time for each sample requested.  I have measured a buffered read of one sample on Windows XP at typically 50 to 100 microseconds on my system (using LabVIEW), though it varies with system speed.  The per-sample cost is _very_ system dependent, but it is much less than a microsecond per sample requested.  So for a single scan I would expect you to be able to do 5,000 to 10,000 reads a second - something is hurting performance and keeping you from what I would consider optimal results.

Here are some ideas:
1. Make sure you are calling DAQmx Start on the analog task _outside_ your loop (you implied you are doing this already).
2. Double check the convert (inter-channel) rate the driver is using.  When you are scanning, the driver will sometimes use more convert time than you expect because of settling-time considerations.  You can check it by querying the Convert Rate property after you have set your timing.  I don't think this is your issue (it sounds like the board can keep up in your case), but you can also set the Convert Rate to be faster - just make sure it is within spec for the board's settling time.
3. You mentioned you are reading the "latest" sample.  How are you getting the latest sample?  Are you setting your read position/offset every time you call read?  That could be slowing you down - I haven't benchmarked setting the position/offset every iteration of a loop, but you might want to measure how long it takes with the tick counter.  Maybe you can set it once, outside the loop.  Or, if you know data is available, you can avoid it altogether by calling the single-sample read with a timeout of 0, in a loop, until you get no data back.  (This should be pretty fast as long as you are not far behind.)
4. If you can, use the Hardware Timed Single Point timing mode instead of Continuous, since you want to read a single scan each time.  HWTSP is faster than Continuous for reading single samples.  However, be aware that Hardware Timed Single Point will force you to keep up (it returns errors if you get behind).
5. There could be some additional overhead specific to Visual Basic that I'm not aware of.  If you can set up your acquisition so that you know the samples are available (e.g. configure a finite acquisition of 10000 samples, call Start, and then call "Wait Until Done"), you can benchmark the DAQmx Read time all by itself in a loop and measure the average time taken - see the sketch after this list.  I am curious what you will get; hopefully the performance is about as good as LabVIEW (maybe 100 microseconds per read?).
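
A hedged C sketch of idea 5, with the convert-rate query from idea 2 folded in (device names, rates, and counts are assumptions; error checking omitted):

#include <NIDAQmx.h>
#include <stdio.h>
#include <time.h>

int main(void)
{
    TaskHandle ai = 0;
    float64 convRate, scan[16];
    int32 read;

    DAQmxCreateTask("", &ai);
    DAQmxCreateAIVoltageChan(ai, "Dev1/ai0:15", "", DAQmx_Val_RSE,
                             -10.0, 10.0, DAQmx_Val_Volts, NULL);
    /* Finite acquisition: 10000 scans at 2 kHz */
    DAQmxCfgSampClkTiming(ai, "", 2000.0, DAQmx_Val_Rising,
                          DAQmx_Val_FiniteSamps, 10000);

    /* Idea 2: see what inter-channel convert rate the driver chose */
    DAQmxGetAIConvRate(ai, &convRate);
    printf("convert rate: %g Hz\n", convRate);

    DAQmxStartTask(ai);
    DAQmxWaitUntilTaskDone(ai, 10.0);  /* all 10000 scans now buffered */

    /* Every scan is already available, so this times pure per-read cost */
    clock_t t0 = clock();
    for (int i = 0; i < 10000; i++)
        DAQmxReadAnalogF64(ai, 1, 0.0, DAQmx_Val_GroupByChannel,
                           scan, 16, &read, NULL);
    printf("%.1f us per read\n",
           1e6 * (double)(clock() - t0) / CLOCKS_PER_SEC / 10000.0);

    DAQmxClearTask(ai);
    return 0;
}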

Hope these ideas help!



Message 6 of 8
May I ask a quick question: is it possible to change the read position (offset) in hardware-timed single point mode?  If so, please help me.  Thank you.
Message 7 of 8
HW-Timed Single Point is designed to support fast (low-latency) single-point reads of the _next_ available sample.  Unfortunately, in order to get that improved performance, some features that are available in a buffered acquisition are not available in HW-Timed Single Point.
 
The read position attribute is one of these features.  You can only set the read position when performing a buffered acquisition (Continuous or Finite).  The read position indicates a location in the acquisition memory buffer on the host computer, and when you are using HW-Timed Single Point, DAQmx does not use a host computer memory buffer. 
 
(Actually, I lied.  DAQmx does use a very small two-sample acquisition buffer when you enable HW-Timed Single Point, so that it can program the device to transfer samples using DMA.  However, this buffer is extremely simple and only supports reading samples one by one, in the order they are acquired.)
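
A hedged sketch of what HW-Timed Single Point looks like through the NI-DAQmx C API (device name and rate are assumptions; note there is no read offset to set - each read simply returns the next sample):

#include <NIDAQmx.h>

int main(void)
{
    TaskHandle ai = 0;
    float64 sample;

    DAQmxCreateTask("", &ai);
    DAQmxCreateAIVoltageChan(ai, "Dev1/ai0", "", DAQmx_Val_RSE,
                             -10.0, 10.0, DAQmx_Val_Volts, NULL);
    /* Sample mode = HW-Timed Single Point; the samples-per-channel
       argument is not used as a buffer size in this mode */
    DAQmxCfgSampClkTiming(ai, "", 1000.0, DAQmx_Val_Rising,
                          DAQmx_Val_HWTimedSinglePoint, 1);
    DAQmxStartTask(ai);

    for (int i = 0; i < 1000; i++) {
        /* Blocks until the next clock edge's sample; errors out if the
           loop falls behind the hardware clock */
        DAQmxReadAnalogScalarF64(ai, 10.0, &sample, NULL);
    }

    DAQmxClearTask(ai);
    return 0;
}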
 
Hope this helps,
Jonathan
 
Message 8 of 8