From Friday, April 19th (11:00 PM CDT) through Saturday, April 20th (2:00 PM CDT), 2024, ni.com will undergo system upgrades that may result in temporary service interruption.
We appreciate your patience as we improve our online experience.
01-03-2013 11:28 AM
Using WriteMultiSamplePort() (in C# and .NET 3.5) to output an array of ints to the 4 digital output lines of a USB-6211 is very slow. (It's thankfully a lot faster than my first attempt, which used WriteSingleSampleSingleLine() in nested loops.) The ints are output at a rate of approximately 1 per millisecond and I need it to go much faster than this - maybe 100x or 1000x faster. I've googled and found mention of Task.Timing.ConfigureSampleClock(), but as far as I can see this applies to analog I/O and I don't see a way to make it affect digital outputs.
Even the NI example code WriteDigPort.2008 only does 1 int per millisecond. Can anyone help, please? If you can show me how to make the C# NI example go faster, I can transfer the idea to my program.
Thanks.
01-03-2013 01:03 PM
As you can see from the specs, the digital lines are software-timed. The 1 ms rate you are getting is about the best you can hope for, and with Windows it will be subject to considerable jitter.
01-04-2013 05:01 AM
I'm very new to programming in this environment, but it is hard to understand why a 2 GHz CPU would need around 15 ms to transfer 20 nibbles over USB. It must be going slow deliberately. Surely?!
So I carried on digging and discovered DigitalSingleChannelWriter.WriteWaveform() method which writes a DigitalWaveform object. It says in the help that this *is* affected by the ConfigureSampleClock(). Can I use this to write data from an array of ints to the digital outputs at a near-MHz rate? I'll try to get this running, but does anyone know of example code that would help me along?
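In case it helps anyone following along, here is a minimal sketch of what hardware-timed digital output looks like with the DAQmx .NET API, assuming a device whose digital lines actually support a sample clock (the device name "Dev1", the port, the rate, and the sample count are all placeholders):

```csharp
using NationalInstruments;
using NationalInstruments.DAQmx;

using (var task = new NationalInstruments.DAQmx.Task())
{
    // One channel covering all lines of port0
    task.DOChannels.CreateChannel("Dev1/port0", "",
        ChannelLineGrouping.OneChannelForAllLines);

    // Clock the samples out at 1 MS/s using the device's own timing engine
    task.Timing.ConfigureSampleClock("", 1e6,
        SampleClockActiveEdge.Rising, SampleQuantityMode.FiniteSamples, 1000);

    // Build a 1000-sample, 4-signal waveform and fill in the states you need,
    // e.g. waveform.Signals[0].States[0] = DigitalState.ForceUp;
    var waveform = new DigitalWaveform(1000, 4);

    var writer = new DigitalSingleChannelWriter(task.Stream);
    writer.WriteWaveform(true, waveform);  // autoStart = true
    task.WaitUntilDone(10.0);              // wait up to 10 s for completion
}
```

Note that if the device's digital subsystem is software-timed only, DAQmx will raise an error when the sample clock is configured or the task starts, so this only answers the question for hardware that supports clocked DO.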
01-04-2013 07:16 AM
01-04-2013 09:30 AM
Oh.
So I've got the wrong board.
OK, thanks. I'll have to think of another way of doing it.
Thanks again.
04-11-2014 10:50 AM - edited 04-11-2014 10:54 AM
@Dennis_Knutson wrote:
As you can see from the specs, the digital lines are software-timed. The 1 ms rate you are getting is about the best you can hope for, and with Windows it will be subject to considerable jitter.
The digital lines are software timed, true. But why do you think that 1 ms is the fastest that software timed IO can operate? Do you have any evidence or reference for this? In my opinion that statement is naive.
Open up your favorite programming environment and write a simple loop. Using .NET on a modern PC (e.g. 1.6 GHz+) you'll see that the loop runs MILLIONS of times per second. I benchmarked my fairly slow PC at 7 million loops per second. That is 7000 loops PER MILLISECOND! Now add in some function call that communicates with the hardware, for example AnalogMultiChannelReader.ReadSingleSample(). Even with this function call (accurately returning the correct voltage) the loop operates at 4 MS/s -- i.e. 4000 samples per millisecond.
So is digital output limited by software execution speed? NO!
Is digital output limited by USB communication speed? NO!
The 1 ms limit for digital output is NOT some jittery approximate speed. It is a very precise, controlled timing. To see this, place a function call that performs digital output in your benchmark code, e.g. using the following function: DigitalSingleChannelWriter.WriteSingleSamplePort
If you benchmark a loop in which you use that function but write the SAME value to the port (e.g. 0) over and over, you will see that it runs at about 100 kS/s (i.e. about 100 times per millisecond). I'm guessing this is because the NiDaqmx driver realizes that the output is already set to the specified value and does not lock the thread for the 1 ms timeslot.
However, if you write alternating values to the port, for example (using a Stopwatch so the loop is actually runnable):

var sw = System.Diagnostics.Stopwatch.StartNew();
while (sw.ElapsedMilliseconds < 1000)
{
    writer.WriteSingleSamplePort(false, 1); // first argument is autoStart
    writer.WriteSingleSamplePort(false, 0);
}
You will see that this operation executes at ~500 loops per second, which is exactly 1000 writes per second. Moreover, if you time each function call individually, you will see that they consume EXACTLY ONE millisecond each.
(Note that in your own testing you may see anywhere from 490 to 500 loops per second, depending on processor load, but this is only because the thread manager could devote time slices to another thread during the benchmark. If you are careful to measure only the time consumed by the digital output function calls, you will see that they are exactly 1 millisecond each.)
So in summary, both software execution and USB communication occur at well over 1 MS/s. Function calls to write the SAME digital output value to a port execute at about 100 kS/s, but writing alternating on/off values (to a single line or to a whole port, it doesn't matter) occurs at precisely 1 kS/s (i.e. 1 ms each).
So this 1 ms limit IS NOT an operating system or software execution timing issue. My guess is that it is imposed by the NIDAQmx driver, and may be specific to the device being used.
With a device that is hardware timed, you can certainly avoid this issue. But that in itself doesn't mean that software timed IO needs to be limited to 1 ms. (Indeed analog input can easily be performed at up to 48 kS/s).
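As a concrete version of the analog benchmark described above (a sketch only: the device and channel names are placeholders, and it needs the NI-DAQmx driver plus hardware to actually run):

```csharp
using System;
using System.Diagnostics;
using NationalInstruments.DAQmx;

using (var task = new NationalInstruments.DAQmx.Task())
{
    // Single-ended voltage input on a placeholder channel
    task.AIChannels.CreateVoltageChannel("Dev1/ai0", "",
        AITerminalConfiguration.Rse, -10.0, 10.0, AIVoltageUnits.Volts);

    var reader = new AnalogSingleChannelReader(task.Stream);
    var sw = Stopwatch.StartNew();
    int n = 0;
    while (sw.ElapsedMilliseconds < 1000)
    {
        reader.ReadSingleSample(); // one software-timed read per call
        n++;
    }
    Console.WriteLine("software-timed reads per second: " + n);
}
```

The count printed at the end is the effective software-timed sampling rate, which is how the loop rates quoted above were measured.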
Does anyone have a reference that explains the precise 1 ms delay when writing digital output? Does anyone have a reference regarding the maximum sampling frequency of digital IO on the USB-6008/9 ? (The sampling frequency for analog input is given in the specifications, but NOT the digital sampling frequency)
Thanks!
04-11-2014 02:37 PM - edited 04-11-2014 02:51 PM
Does anyone have a reference that explains the precise 1 ms delay when writing digital output? Does anyone have a reference regarding the maximum sampling frequency of digital IO on the USB-6008/9 ?
The 6009 is a USB Full Speed device. USB Full Speed transfers data in fixed 1 ms frames. I wasn't sure where to find this in the USB specifications though (and don't really have the time to look, to be honest), so the best link I can provide is the Wikipedia article on USB.
If your single point I/O has to go through software (as opposed to processing on an FPGA for example), you'll be much better served using a data acquisition device with lower bus latency (e.g. PCI- or PCIe- based).
Best Regards,
04-17-2014 03:29 AM
04-17-2014 01:27 PM - edited 04-17-2014 01:28 PM
@Snowpig wrote:
the 6211 is a USB-STC2 device according to its manual, section 11. Does that imply USB2?
Only coincidentally (the manual does specifically mention a "USB 2.0 Hi Speed Interface" though).
The analog I/O and the counters are all capable of hardware-timed operations on the 6211, while the digital I/O is not. This doesn't really have anything to do with the single-point performance though. Single point reads/writes use "Programmed I/O", while buffered I/O uses "USB Bulk" transfers for more throughput (but it doesn't provide lower latency).
Anyway, the precise 1 ms delays mentioned by ricovox seem quite likely related to the minimum 1 ms transaction time of USB Full Speed (which the 6008/6009 are). USB High Speed (which the 6211 uses according to the specs) does provide for micro-frames (125 us long), but I'm not sure if/how they are being used by the 6211. However, regardless of how the driver is implemented, you would never be able to get the 100x or 1000x faster single-point performance you are asking for. Higher throughput is available on devices with clocked digital I/O of course, but this is achieved by writing an entire buffer of samples at once and then clocking it out some time later.
To repeat from my previous post...
If your single point I/O has to go through software (as opposed to processing on an FPGA for example), you'll be much better served using a data acquisition device with lower bus latency (e.g. PCI- or PCIe- based).
For example, on my PCIe-6351, I get a ~2 us write time for digital I/O (which I reduced further to ~500 ns by enabling memory mapping).
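For reference, a sketch of how enabling that might look in the .NET API. The C driver exposes this setting as the DAQmx_DO_MemMapEnable attribute; the .NET property name used below (MemoryMapEnable) is my assumption based on the C attribute name, so check the .NET API reference for your driver version:

```csharp
using NationalInstruments.DAQmx;

using (var task = new NationalInstruments.DAQmx.Task())
{
    var ch = task.DOChannels.CreateChannel("Dev1/port0", "",
        ChannelLineGrouping.OneChannelForAllLines);

    // Assumed property name; corresponds to DAQmx_DO_MemMapEnable in the C API.
    // Memory mapping only applies to plug-in (PCI/PCIe) devices, not USB.
    ch.MemoryMapEnable = true;

    var writer = new DigitalSingleChannelWriter(task.Stream);
    writer.WriteSingleSamplePort(true, 1); // single-point, programmed I/O
}
```

With memory mapping enabled, single-point writes bypass a driver/kernel transition and go straight to the device registers, which is where the ~500 ns figure comes from.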
Best Regards,