Serial read/write primitive slow in LabVIEW 2016 compared to 2012

Solved!
Go to solution

Hello all,

I have a very simple VI for a serial command:

Write -> wait -> read.

 

My wait time in the LabVIEW 2012 version was 20 ms, which was acceptable to read back the response.

When I port the same code to 2016, the wait time has to be increased to at least 75 ms to be able to read back the response.

 

Is this a bug in 2016, or a performance hit due to some UI threads?

Please help.

Thanks

Message 1 of 13

Is the LabVIEW version the only difference here? The host operating system, serial port hardware, serial port drivers, other running processes and more can all impact the timing when reading a serial port buffer. Your code also isn't configuring the baud rate, which can have a big effect on timing.

 

As an aside, using timing to read serial port responses isn't ideal (due to issues like you're seeing now). Use termination characters as part of the serial protocol if possible. You can then read from the port with a long timeout, and it will return the response once the termination character appears in the input buffer.
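To illustrate why a fixed wait is fragile, here is a minimal pure-Python sketch (no real VISA calls; `read_response` and the arrival chunks are hypothetical stand-ins for the serial input buffer filling up over time). A fixed wait captures only whatever has arrived so far, while a termination-based read keeps accumulating until the full message is in:

```python
def read_response(chunks, term=b"\n", max_bytes=100):
    """Accumulate arriving chunks until the termination character
    appears (or max_bytes is reached), then return the message."""
    buf = bytearray()
    for chunk in chunks:          # each chunk = bytes arrived since last poll
        buf.extend(chunk)
        if term in buf or len(buf) >= max_bytes:
            break
    i = buf.find(term)
    return bytes(buf[: i + len(term)]) if i != -1 else bytes(buf[:max_bytes])

# Response trickles in over three arrivals:
arrivals = [b"VOLT ", b"12.5", b"\n"]
partial = arrivals[0]                    # what a too-short fixed wait captures
full = read_response(iter(arrivals))     # termination-based read gets it all
```

The point: the termination-based read is self-pacing, so the code no longer depends on a tuned wait time that can break between LabVIEW versions or machines.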




Certified LabVIEW Architect
Unless otherwise stated, all code snippets and examples provided
by me are "as is", and are free to use and modify without attribution.
Message 2 of 13
Solution
Accepted by topic author freemason

The solution has already been stated, but

DO NOT USE THE BYTES AT PORT!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! (still haven't emphasized that enough...)

 

Judging purely from how you are doing the write, your instrument is using a termination character, so use that to your advantage. When you call VISA Configure Serial Port, enable the termination character (the Boolean on top) and set the termination character to 0xA (10, LF, \n). Both of those are the defaults, so you can just leave them unwired.

Now when you do the read, all you have to do is set the number of bytes to read to more than you ever expect in a reply. I like to use 50 or 100, depending on the instrument. This works because the VISA Read will stop when it has read the specified number of bytes or it reads the termination character, whichever happens first. When you use the Bytes At Port, you are forcing the read to take only the bytes that have arrived at that exact moment, disregarding any other bytes that may still be in transit. If you just call the VISA Read with a high byte count, your only real limit is the timeout, which defaults to 10 seconds. That should be plenty of time for your data to come back.

See?  Much simpler: no waits and no checking the Bytes At Port.
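The "byte count or termination character, whichever comes first" behaviour can be sketched in pure Python (`visa_like_read` is a hypothetical stand-in for the VISA Read stop conditions, with `io.BytesIO` playing the role of the port's input stream):

```python
import io

def visa_like_read(stream, count, term=b"\n"):
    """Return once `count` bytes are read OR the termination
    character is seen, whichever happens first."""
    out = bytearray()
    while len(out) < count:
        b = stream.read(1)
        if not b:                # nothing left (a real read would wait/timeout)
            break
        out.extend(b)
        if out.endswith(term):   # termination character ends the read early
            break
    return bytes(out)

short = visa_like_read(io.BytesIO(b"OK\n garbage"), 100)  # stops at the \n
capped = visa_like_read(io.BytesIO(b"ABCDEFGH"), 4)       # stops at 4 bytes
```

This is why asking for 100 bytes is harmless: the termination character ends the read as soon as the reply is complete.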

 

And just for the record, I have found only one legitimate use for the Bytes At Port, and that is when you have no clue when data might come in. Even then, the Bytes At Port is just a check to see whether data has started to arrive, not a way to tell the VISA Read how many bytes to read.


GCentral
There are only two ways to tell somebody thanks: Kudos and Marked Solutions
Unofficial Forum Rules and Guidelines
"Not that we are sufficient in ourselves to claim anything as coming from us, but our sufficiency is from God" - 2 Corinthians 3:5
Message 3 of 13

@crossrulz wrote:

 

 

And just for the record, I have only found 1 legitimate use for the Bytes At Port, and that is when you have no clue when data might possibly come in.  


What about when you have many ports working in parallel?  I heard something about how using the synchronous read functions would take a CPU thread and that if I'm working with 8 serial ports talking to 8 different devices (but of the same model) then that could do weird things.  Am I wrong?  I've just always done the whole polling and bytes at port thing myself with the extra work of keeping track of partial messages, but am willing to change if it scales since I often have to talk to many serial ports in parallel. 

Message 4 of 13

@Hooovahh wrote:

What about when you have many ports working in parallel?  I heard something about how using the synchronous read functions would take a CPU thread and that if I'm working with 8 serial ports talking to 8 different devices (but of the same model) then that could do weird things.  Am I wrong?  I've just always done the whole polling and bytes at port thing myself with the extra work of keeping track of partial messages, but am willing to change if it scales since I often have to talk to many serial ports in parallel.


You might have something there.  But I go back to the fact that you should only be using the Bytes At Port to see if a message has started and then read the whole thing using a very large number of bytes to read (assuming ASCII protocol, binary/hex usually have their own protocols that you should be following instead of relying on the Bytes At Port).  This will make your life A LOT easier.  Trust me, I have gone down that road as well.  It turned into a big giant mess very quickly with lots of potential memory issues.
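The "one blocking read per port" pattern for eight parallel devices can be sketched in pure Python (hypothetical stand-ins throughout: `queue.Queue` plays the part of a serial port's input stream, and the `COM` names are made up; in LabVIEW the analogue would be a termination-character VISA Read in each parallel loop):

```python
import queue
import threading

def reader(port, results, name, term=b"\n"):
    """Block until this port's message arrives, ending at the
    termination character -- no polling, no Bytes-at-Port bookkeeping."""
    buf = bytearray()
    while not buf.endswith(term):
        buf.extend(port.get(timeout=10))  # blocks until bytes "arrive"
    results[name] = bytes(buf)

ports = {f"COM{i}": queue.Queue() for i in range(1, 9)}
results = {}
threads = [threading.Thread(target=reader, args=(q, results, name))
           for name, q in ports.items()]
for t in threads:
    t.start()
for name, q in ports.items():            # simulate 8 devices replying
    q.put(f"reply from {name}\n".encode())
for t in threads:
    t.join()
```

Each reader sits idle while blocked, so eight parallel blocking reads cost little; the partial-message tracking simply disappears.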


Message 5 of 13

Great suggestion; I might prototype a couple of methods to see if one works better than the other. My current method is close to what you described: it loops reading the Bytes at Port and, if there are none, just waits and tries again; when bytes are there it reads them and then looks for termination characters as needed. It also wraps the write and read into a single call that can keep track of what the request was if a timeout occurred.

 

Attached is a quick example of reading voltage or current on a power supply. This isn't a full demo, just a way of showing a write and read using this method. In this example I also show a multi-byte termination of \r\n, which likely isn't necessary; either byte alone would work. In the past, though, I have needed a multi-byte termination, such as the closing bracket of a JSON message followed by a return.
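The partial-message bookkeeping with a multi-byte \r\n termination can be sketched like this (a hypothetical helper, not the attached VI; it keeps any incomplete tail in `pending` for the next poll):

```python
def extract_messages(pending, incoming, term=b"\r\n"):
    """Append newly arrived bytes, split out every complete message
    (ending in the multi-byte terminator), keep the partial tail."""
    pending.extend(incoming)
    messages = []
    while True:
        i = pending.find(term)
        if i == -1:
            break
        messages.append(bytes(pending[: i + len(term)]))
        del pending[: i + len(term)]
    return messages

pending = bytearray()
first = extract_messages(pending, b"12.0\r\n3.")   # one complete, one partial
second = extract_messages(pending, b"5\r\n")       # tail completes next poll
```

Searching for the full two-byte terminator avoids splitting on a bare \r or \n that happens to sit inside a message body.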

Message 6 of 13

Can you post a 2016 version?

Thanks

Message 7 of 13

I am using a USB-485 to communicate with an external device in half-duplex mode. What brought me to this thread was the initial 20 ms timing statement. For my hardware setup, a send/receive packet takes less than 4 ms as observed on an oscilloscope, but when this is part of a loop there is roughly 18-20 ms between packets. I initially had the "poll for bytes" method, and using the "High Resolution Relative Seconds" VI to measure how long this took gave me the same 18-20 ms indication, so I thought I had found the cause. Changing to the termination character method then had the LabVIEW timer showing a ridiculous 3 us delay between messages, and I thought this would solve my issue. Looking at the oscilloscope, though, the gap between messages was still 18-20 ms. I can add a loop timer to adjust the delay between messages, and it works for delays greater than 20 ms. Is the 20 ms a Win7/64 tick thing coupled with the use of the USB port? I doubt it makes any difference, but the environment is LabVIEW 2014.

Message 9 of 13

The 20ms is probably more to do with the half-duplex comms, and in my experience 20ms is a pretty common number. I've written drivers for different hardware (furnace controllers, lab auto samplers, etc) on multi-drop, half-duplex RS-485 networks, and 20ms was the delay required between issuing commands and receiving responses. This allows time for all hardware on the RS-485 network to switch into listening mode, wait for a command, parse the command, switch to writing mode, then write the response, all at 9600 baud.
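A quick line-time calculation supports the ~20 ms figure, assuming 8N1 framing at 9600 baud (1 start bit + 8 data bits + 1 stop bit = 10 bits on the wire per byte; the 10-byte message lengths are illustrative assumptions):

```python
BAUD = 9600
BITS_PER_BYTE = 10                  # 8N1 framing: start + 8 data + stop

def transmit_ms(n_bytes, baud=BAUD):
    """Pure wire time to clock n_bytes out at the given baud rate."""
    return n_bytes * BITS_PER_BYTE / baud * 1000.0

per_byte = transmit_ms(1)           # roughly 1.04 ms per byte at 9600 baud
round_trip = transmit_ms(10) + transmit_ms(10)  # 10-byte cmd + 10-byte reply
```

So a modest command/response pair already consumes close to 21 ms of pure line time at 9600 baud, before any device turnaround or parsing delay is added; the observed 18-20 ms gap is in line with the physics of the link.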




Message 10 of 13