VISA Read function slower for USB VCP than "real" serial port

Solved!

We have discovered that the VISA Read function (I’m using LabVIEW 8.5) is significantly slower for VCPs (“Virtual COM Ports” used in USB-to-serial converters, even NI’s) than for "real" serial ports.

 

Here is an example: We have an instrument that can output, on command, 250 “readings” per second (“RPS”) at 115,200 baud. Each reading is a string of (nominally) 12 bytes, including a terminating CR-LF pair (\r\n). (It’s not relevant, but FYI: “readings” consist of a floating-point number of 4 or 5 significant digits, plus a decimal point and possibly a minus sign, then a space followed by a “unit” of 1 to 5 characters.) So 250 RPS x 12 bytes/reading = 3000 bytes per second. With one start bit, one stop bit, no parity, and 8 data bits, that is a baud rate of 30,000. Since we are running at 115,200 baud, we are at roughly 25% of the attainable rate. So far, so good.
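(For reference, the arithmetic above worked out in a few lines of plain C; nothing here is measured, it just restates the numbers from the post.)

```c
#include <stdio.h>

int main(void)
{
    const double readings_per_s = 250.0;     /* instrument output rate               */
    const double bytes_per_rdg  = 12.0;      /* nominal reading length, CR-LF incl.  */
    const double bits_per_byte  = 10.0;      /* 1 start + 8 data + 1 stop, no parity */
    const double line_rate      = 115200.0;  /* configured baud rate                 */

    double bytes_per_s = readings_per_s * bytes_per_rdg;   /* 3000 bytes/s  */
    double baud_needed = bytes_per_s * bits_per_byte;      /* 30,000 baud   */

    printf("payload: %.0f bytes/s = %.0f baud\n", bytes_per_s, baud_needed);
    printf("link utilisation: %.0f%%\n", 100.0 * baud_needed / line_rate);
    printf("one reading every %.1f ms\n", 1000.0 / readings_per_s);
    return 0;
}
```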

 

After sending the command to instruct the instrument to start sending readings, my test vi simply loops (with no wait ms delay), reading the number of bytes at the port with the VISA Read function. (With my test vi I can optionally tabulate and graph the data.) With a “real” COM port the vi keeps up with the 250 RPS data rate. However, with a USB-to-serial converter (using its VCP driver; these all use the bulk-transfer USB mode), the VISA Read’s throughput is only about 70 RPS. The remaining data that has come into the port does not get lost; it sits in the VISA read buffer, as indicated by the output of the “Bytes at Port” VISA property node.
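For readers without the vi in front of them, a rough text-language equivalent of that loop, written against the VISA C API, is sketched below; the resource name "ASRL3::INSTR" and the "START" command are placeholders, and error handling is omitted.

```c
/* Rough VISA C equivalent of the test vi's loop: poll "Bytes at Port"
 * (VI_ATTR_ASRL_AVAIL_NUM), then read exactly that many bytes. */
#include <visa.h>

int main(void)
{
    ViSession rm, vi;
    ViUInt32  avail, ret, total = 0;
    ViByte    buf[256];

    viOpenDefaultRM(&rm);
    viOpen(rm, "ASRL3::INSTR", VI_NULL, VI_NULL, &vi);   /* placeholder resource   */
    viSetAttribute(vi, VI_ATTR_ASRL_BAUD, 115200);

    viWrite(vi, (ViBuf)"START\r\n", 7, &ret);            /* placeholder command    */

    while (total < 30000) {                              /* free-running, no wait  */
        viGetAttribute(vi, VI_ATTR_ASRL_AVAIL_NUM, &avail);  /* "Bytes at Port"    */
        if (avail == 0)
            continue;
        if (avail > sizeof buf)
            avail = sizeof buf;
        viRead(vi, buf, avail, &ret);                    /* one VISA Read per poll */
        total += ret;                                    /* ...parse 'ret' bytes...*/
    }

    viClose(vi);
    viClose(rm);
    return 0;
}
```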

 

The time lost must be in the VISA Read function. It apparently takes longer to return when reading from the VCP driver’s buffer than from a real serial port driver, which reads the port’s hardware UART. Obviously the VCP is another software layer on top of the USB software layer, so it’s not that surprising that it is slower, but less than one-third of the speed is surprising.

 

Have any of you on this forum experienced this or similar throughput problems when using USB-to-serial-port converters? I’m hoping that NI engineers who have intimate knowledge of the VISA drivers will be able to shed some light on this issue, and hopefully improve the speed in the future. Thank you.

 

Ed

Message 1 of 109
(5,808 Views)

You MAY be able to tweak the latency settings and greatly improve your throughput, depending on the flavor of the VCP hardware driver.  In this thread I placed links to a fairly complete series of technical docs for the FTDI chipset and offered some insight into how to reduce VCP latency for small packet sizes.

 

I do not know how the Si Labs "Bridge" can be similarly tweaked, but you essentially need the SDK to build a custom driver.

 

The Prolific driver has stability issues and I avoid it like the plague.  (To be fair, it may be poorly engineered or improperly manufactured hardware that lets some marginal units make it to market, and the driver itself may be just peachy.)


"Should be" isn't "Is" -Jay
0 Kudos
Message 2 of 109
(5,802 Views)

The USB layer is the culprit.  I ran some tests several years ago with several different USB-232 and PCMCIA-232 adapters.  All were slower than a standard 232 port, but the USB adapters were much slower than the PCMCIA ones.  As I recall, there seemed to be a delay associated with each VISA call.  Property nodes such as 'Bytes at Port' were particularly bad.

 

Suggest you modify your code to use the termination character and not use the 'Bytes at Port'.  This will cut your VISA calls in half and should help speed things up.
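Roughly, at the VISA C level the suggestion amounts to the sketch below: enable the LF termination character and request more bytes than one reading can contain, so each read returns one complete reading with a single VISA call and no "Bytes at Port" query. The helper name is made up and error handling is omitted.

```c
#include <visa.h>

static ViStatus read_one_reading(ViSession vi, ViByte *line, ViUInt32 size, ViUInt32 *got)
{
    /* One-time setup; could equally live where the port is opened. */
    viSetAttribute(vi, VI_ATTR_TERMCHAR,    '\n');      /* LF ends a reading   */
    viSetAttribute(vi, VI_ATTR_TERMCHAR_EN, VI_TRUE);   /* stop the read at LF */

    /* Ask for up to 'size' bytes (e.g. 15); viRead returns early at the LF. */
    return viRead(vi, line, size, got);
}
```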

0 Kudos
Message 3 of 109
(5,795 Views)

Thanks, Jeff, for your reply. Yes, in the port's Properties in Device Manager I have the Advanced Port Settings box "Use FIFO buffers (requires 16550 compatible UART)" checked, with the buffers set to the maximum for highest speed. This is the default. I tried unchecking the box, since the VCP doesn't have a physical UART, but it made absolutely no difference. There are no other settings to "tweak" in this driver. (You are correct that I am using the Silicon Labs driver, and I have confirmed with them that it meets the speed specification of their CP2102 chip data sheet of up to 1 Mbit/s. I'm using about 1/8 of that speed, 115,200 baud.)

 

I also want to mention that at the start of the project I purchased the National Instruments USB-to-serial converter ($99) and returned it when I discovered that its performance was significantly SLOWER than ours or a $20 converter purchased at Staples.

 

Furthermore, we have a customer who developed his own driver for the Si Labs chip using "WinUSB". He gets the same slow performance running my vi (from the exe I built and gave him) as when using the Si Labs driver. However, he gets the correct speed when using a C# program he wrote.

 

I've been working on this for quite a while, as you probably can tell, and have really narrowed it down to the fact that the VISA Read function takes much longer when interfacing with a VCP driver as opposed to a "real" COM port.

 

Ed

0 Kudos
Message 4 of 109
(5,790 Views)

Thanks, Wayne, I just tried your suggestion, but unfortunately it made no difference. (I actually already am using the LF as a termination character, but it never comes into play because the "Bytes at Port" read completes first.) I deleted the Bytes at Port property node and set the required byte count input of the VISA Read to 15 (because each reading string will be shorter than that). Now the VISA Read returns the whole 12-13 byte string in one call, but, as I said, the slow performance is identical to what I see with Bytes at Port.

 

I would agree that this could point to the USB layer. However, a customer of ours was able to get the full 250 RPS performance with his C# program, which uses standard Windows serial port DLL calls, and that indicates to me that the problem is strictly in the VISA Read's interaction with the USB layer. Since Windows and C# can attain the full speed, I would think that the VISA code could also. After all, isn't LabVIEW developed in C++?
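For comparison, a minimal Win32 serial-read loop of the kind the customer's C# program presumably boils down to (System.IO.Ports.SerialPort wraps these same calls) might look like this; "COM3" is a placeholder and error handling is trimmed.

```c
#include <windows.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    HANDLE h = CreateFileA("\\\\.\\COM3", GENERIC_READ | GENERIC_WRITE,
                           0, NULL, OPEN_EXISTING, 0, NULL);
    if (h == INVALID_HANDLE_VALUE) return 1;

    DCB dcb;
    memset(&dcb, 0, sizeof dcb);
    dcb.DCBlength = sizeof dcb;
    GetCommState(h, &dcb);
    dcb.BaudRate = CBR_115200;
    dcb.ByteSize = 8;
    dcb.Parity   = NOPARITY;
    dcb.StopBits = ONESTOPBIT;
    SetCommState(h, &dcb);

    COMMTIMEOUTS to;
    memset(&to, 0, sizeof to);
    to.ReadIntervalTimeout = 2;        /* return as soon as the byte stream pauses */
    SetCommTimeouts(h, &to);

    char  buf[256];
    DWORD got;
    while (ReadFile(h, buf, sizeof buf, &got, NULL)) {
        if (got)
            fwrite(buf, 1, got, stdout);   /* ~3000 bytes/s should arrive here */
    }

    CloseHandle(h);
    return 0;
}
```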

 

(I realize that VISA has tremendous functionality compared to a simple Windows DLL call, but there could be a "simple" mode for the VISA functions specifically for this application.)

 

Ed

0 Kudos
Message 5 of 109
(5,782 Views)

I also meant to tell you that using HyperTerminal to input data from our device (capturing to the screen's terminal window and optionally to a file) also keeps up with the 250 RPS rate. That also tells me that the USB layer can keep up (full-speed USB uses 1 ms frames, so it should easily handle the roughly 11 bytes per ms that arrive at 115.2 kbaud), and the problem lies in the coding of VISA Read (and VISA Write, too). No?

0 Kudos
Message 6 of 109
(5,779 Views)

Put the Wait in the loop, set it to zero, and see what happens.

0 Kudos
Message 7 of 109
(5,769 Views)

Well, you've done your homework!  (Except most of LabVIEW is written in G these days.)

This very well could be a LabVIEW-to-VISA issue.  Are you calling viWrite / viRead or viWriteAsync / viReadAsync?  And what version of LabVIEW are you using?  There is a release note I had explained to me about a known issue fixed in 2011.
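For anyone following along, the two call styles Jeff is asking about look roughly like this at the VISA C level (in LabVIEW, the VISA Read node's synchronous/asynchronous setting selects between them); this is only a sketch, with error handling omitted.

```c
#include <visa.h>

void read_sync(ViSession vi, ViByte *buf, ViUInt32 n)
{
    ViUInt32 got;
    viRead(vi, buf, n, &got);          /* blocks until count, termchar, or timeout */
}

void read_async(ViSession vi, ViByte *buf, ViUInt32 n)
{
    ViJobId     job;
    ViUInt32    got = 0;
    ViEvent     ev;
    ViEventType type;

    viEnableEvent(vi, VI_EVENT_IO_COMPLETION, VI_QUEUE, VI_NULL);
    viReadAsync(vi, buf, n, &job);     /* returns immediately with a job id */

    /* Wait for the transfer to complete, then pull the byte count off the event. */
    viWaitOnEvent(vi, VI_EVENT_IO_COMPLETION, 2000, &type, &ev);
    viGetAttribute(ev, VI_ATTR_RET_COUNT, &got);   /* 'got' = bytes transferred */
    viClose(ev);
}
```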


"Should be" isn't "Is" -Jay
0 Kudos
Message 8 of 109
(5,768 Views)

Wayne, of course I've tried putting in a 0 (my app uses a 1 so my vi yields some time to Windows), and have even deleted the wait ms vi completely. No difference.

0 Kudos
Message 9 of 109
(5,760 Views)

Jeff,

 

I'm simply using the VISA Read and VISA Write functions in asynchronous mode. My simple test vi loop is only using a VISA Read, of course, except for the initial write command in the first iteration to get our device to output readings.

0 Kudos
Message 10 of 109
(5,757 Views)