
Digital I/O


WriteMultiSamplePort is slow

Using WriteMultiSamplePort() (in C# with .NET 3.5) to output an array of ints to the four digital output lines of a USB-6211 is very slow. (It's thankfully a lot faster than my first attempt, which used WriteSingleSampleSingleLine() in nested loops.) The ints are output at roughly one per millisecond, and I need it to go much faster than this, maybe 100x or 1000x faster. I've googled and found mention of Task.Timing.ConfigureSampleClock(), but as far as I can see it applies to analog I/O, and I don't see a way to make it affect digital outputs.

 

Even the NI example code WriteDigPort.2008 only does 1 int per millisecond. Can anyone help, please? If you can show me how to make the C# NI example go faster, I can transfer the idea to my program.

 

Thanks.

Message 1 of 9

As you can see from the specs, the digital lines on that board are software-timed. The 1 ms rate you are getting is about the best you can hope for, and under Windows it will be subject to considerable jitter.

Message 2 of 9

I'm very new to programming in this environment, but it is hard to understand why a 2 GHz CPU would need around 15 ms to transfer 20 nibbles over USB. It must be going slow deliberately, surely?!

 

So I carried on digging and discovered the DigitalSingleChannelWriter.WriteWaveform() method, which writes a DigitalWaveform object. The help says this *is* affected by ConfigureSampleClock(). Can I use it to write data from an array of ints to the digital outputs at a near-MHz rate? I'll try to get this running, but does anyone know of example code that would help me along?

Message 3 of 9
You don't have a sample clock with the digital outputs on your board. You would need a board that supports hardware timing.
Message 4 of 9
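For reference, on a board that does support a digital output sample clock (e.g. a PCIe X Series card, not the USB-6211), hardware-timed output might look something like the following minimal sketch. The device name "Dev1", the 1 MHz rate, and the data values are placeholder assumptions, not details from this thread.

```csharp
// Hypothetical sketch: hardware-timed digital output with the NI-DAQmx .NET API,
// on a board whose port0 supports a DO sample clock (NOT the USB-6211).
// "Dev1" and the 1 MHz rate are placeholder assumptions.
using NationalInstruments.DAQmx;

class HardwareTimedDO
{
    static void Main()
    {
        int[] data = { 0x1, 0x2, 0x4, 0x8 };  // one int per sample on the port's lines

        using (Task doTask = new Task())
        {
            doTask.DOChannels.CreateChannel("Dev1/port0", "",
                ChannelLineGrouping.OneChannelForAllLines);

            // The step that has no effect on software-timed ports: pace the writes
            // with a hardware sample clock instead of individual USB transactions.
            doTask.Timing.ConfigureSampleClock("", 1000000.0,
                SampleClockActiveEdge.Rising,
                SampleQuantityMode.FiniteSamples, data.Length);

            var writer = new DigitalSingleChannelWriter(doTask.Stream);
            writer.WriteMultiSamplePort(true, data);  // buffer all samples, auto-start
            doTask.WaitUntilDone();
        }
    }
}
```

The whole buffer is transferred to the device up front and clocked out by the board itself, which is why this pattern can reach MHz rates where per-sample software writes cannot.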

Oh.

 

So I've got the wrong board.

 

OK, thanks. I'll have to think of another way of doing it.

 

Thanks again.

Message 5 of 9

@Dennis_Knutson wrote:

As you can see from the specs, the digital lines are software timed. The ms rate you are getting is about the best you can hope for and with windows, that will be subject to considerable jitter.


The digital lines are software-timed, true. But why do you think that 1 ms is the fastest software-timed I/O can operate? Do you have any evidence or a reference for this? In my opinion that statement is naive.

 

Open up your favorite programming environment and write a simple loop. Using .NET on a modern PC (e.g. 1.6 GHz+) you'll see that the loop runs MILLIONS of times per second. I benchmarked my fairly slow PC at 7 million loops per second. That is 7,000 loops PER MILLISECOND! Now add in a function call that communicates with the hardware, for example AnalogMultiChannelReader.ReadSingleSample(). Even with this call (accurately returning the correct voltage) the loop operates at 4 MS/s, i.e. 4,000 samples per millisecond.
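The bare-loop part of that benchmark can be sketched in a few lines. The NI calls are left out, so this runs anywhere; the iteration count is arbitrary and the printed rate will of course vary by machine and JIT.

```csharp
// Minimal loop-rate benchmark, as described above. Pure .NET, no hardware needed.
using System;
using System.Diagnostics;

class LoopBenchmark
{
    static void Main()
    {
        const long iterations = 10000000;
        long counter = 0;

        Stopwatch sw = Stopwatch.StartNew();
        for (long i = 0; i < iterations; i++)
            counter += i & 1;  // trivial work so the JIT can't remove the loop
        sw.Stop();

        double loopsPerSecond = iterations / sw.Elapsed.TotalSeconds;
        Console.WriteLine("{0:F1} million loops/s (counter = {1})",
            loopsPerSecond / 1e6, counter);
    }
}
```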

 

So is digital output limited by software execution speed? NO!  

Is digital output limited by USB communication speed? NO!

 

The 1 ms limit for digital output is NOT some jittery, approximate speed. It is very precise, controlled timing. To see this, add a call to a digital output function to your benchmark code, e.g. DigitalSingleChannelWriter.WriteSingleSamplePort.

 

If you benchmark a loop that uses that function but writes the SAME value to the port (e.g. 0) over and over, you will see that it runs at about 100 kS/s (i.e. about 100 calls per millisecond). I'm guessing this is because the NI-DAQmx driver notices that the output is already set to the specified value and does not lock the thread for the 1 ms timeslot.

 

However, if you write alternating values to the port, such as:

while (time < 1000)
{
    writer.WriteSingleSamplePort(false, 1);
    writer.WriteSingleSamplePort(false, 0);
}

 

You will see that this executes at ~500 loops per second, which is exactly 1,000 writes per second. Moreover, if you time each function call individually, you will see that each consumes EXACTLY ONE millisecond.

 

(Note that in your own testing you may see anywhere from 490 to 500 loops per second, depending on processor load, but that is only because the thread manager can devote time slices to other threads during the benchmark. If you are careful to measure only the time consumed by the digital output calls themselves, you will see that each is exactly 1 millisecond.)
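A minimal sketch of that per-call timing test, assuming a software-timed DO task; "Dev1/port0" is a placeholder device name, not one from this thread.

```csharp
// Sketch: time each software-timed WriteSingleSamplePort call individually.
// "Dev1" is a placeholder; expected per-call times depend on the device/driver.
using System;
using System.Diagnostics;
using NationalInstruments.DAQmx;

class WriteTiming
{
    static void Main()
    {
        using (Task doTask = new Task())
        {
            doTask.DOChannels.CreateChannel("Dev1/port0", "",
                ChannelLineGrouping.OneChannelForAllLines);
            var writer = new DigitalSingleChannelWriter(doTask.Stream);

            Stopwatch sw = new Stopwatch();
            for (int i = 0; i < 10; i++)
            {
                int value = i & 1;  // alternate 0, 1, 0, 1, ... to force real transfers
                sw.Restart();
                writer.WriteSingleSamplePort(true, value);
                sw.Stop();
                Console.WriteLine("write {0} (value {1}): {2:F3} ms",
                    i, value, sw.Elapsed.TotalMilliseconds);
            }
        }
    }
}
```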

 

So in summary: both software execution and USB communication run at well over 1 MS/s. Calls that write the SAME digital output value to a port execute at about 100 kS/s, but writing alternating on/off values (to a single line or to a whole port, it doesn't matter) occurs at precisely 1 kS/s (i.e. 1 ms per write).

 

So this 1 ms limit is NOT an operating-system or software-execution timing issue. My guess is that it is imposed by the NI-DAQmx driver, and it may be specific to the device being used.

 

With a device that is hardware timed, you can certainly avoid this issue. But that in itself doesn't mean that software timed IO needs to be limited to 1 ms.  (Indeed analog input can easily be performed at up to 48 kS/s).

 

Does anyone have a reference that explains the precise 1 ms delay when writing digital output? Does anyone have a reference for the maximum sampling frequency of digital I/O on the USB-6008/9? (The analog input sampling frequency is given in the specifications, but the digital sampling frequency is not.)

 

Thanks!

Message 6 of 9

@ricovox wrote:

Does anyone have a reference that explains the precise 1 ms delay when writing digital output? Does anyone have a reference regarding the maximum sampling frequency of digital IO on the USB-6008/9 ?

The 6009 is a USB Full Speed device.  USB Full Speed transmits in fixed 1 ms frames.  I wasn't sure where to find this in the USB specifications though (and don't really have the time to look, to be honest), so the best link I can provide is the Wikipedia article on USB.

 

If your single point I/O has to go through software (as opposed to processing on an FPGA for example), you'll be much better served using a data acquisition device with lower bus latency (e.g. PCI- or PCIe- based).

 

 

Best Regards,

John Passiak
Message 7 of 9
I'm sorry to say I never managed to get my digital I/O working at a reasonable speed and have been putting up with limited performance ever since.

It's odd, though, that the 1 ms restriction from John P's USB Wikipedia link refers to a USB 1 Full Speed interface, while the 6211 is a USB-STC2 device according to section 11 of its manual. Does that imply USB 2? Or is it possible to have a streaming controller but only implement the USB 1 spec?

It may be irrelevant, though, as further on in section 11 the four streaming controllers are allocated to the analog I/O and the counters. It looks like digital I/O is the poor cousin that doesn't get the high-speed treatment. I suspect ricovox is right about this being a driver limitation. There is mention of the Data Transfer Mechanism property node function (sic) in NI-DAQmx, which can be set to streams or programmed. I will have a look at this later, but I somehow doubt it holds the key.

So I'd like to ask: does NI have any plans to upgrade the digital I/O to match the performance of the analog?

Message 8 of 9

@Snowpig wrote:
the 6211 is a USB-STC2 device according to its manual, section 11. Does that imply USB2? 


Only coincidentally (the manual does specifically mention a "USB 2.0 Hi Speed Interface" though).

 

The analog I/O and the counters are all capable of hardware-timed operation on the 6211, while the digital I/O is not.  This doesn't really have anything to do with single-point performance, though.  Single-point reads/writes use programmed I/O, while buffered I/O uses USB bulk transfers for more throughput (but not lower latency).

 

Anyway, the precise 1 ms delays mentioned by ricovox seem quite likely related to the minimum 1 ms transaction time of USB Full Speed (which the 6008/6009 is).  USB High Speed (which the 6211 uses, according to the specs) does provide for 125 us micro-frames, but I'm not sure if or how the 6211 uses them.  However, regardless of how the driver is implemented, you would never get the 100x or 1000x faster single-point performance you are asking for.  Higher throughput is available on devices with clocked digital I/O, of course, but that is achieved by writing an entire buffer of samples at once and then clocking it out some time later.

 

To repeat from my previous post...

 

If your single point I/O has to go through software (as opposed to processing on an FPGA for example), you'll be much better served using a data acquisition device with lower bus latency (e.g. PCI- or PCIe- based).

 

For example, on my PCIe-6351, I get a ~2 us write time for digital I/O (which I reduced further to ~500 ns by enabling memory mapping).

 

 

Best Regards,

John Passiak
Message 9 of 9