Telnet Timeout is not a true Timeout?

The Telnet Timeout will wait the maximum Timeout time before ending its operation. Is there any way to make it function like a normal timeout?


Message 1 of 12
What do you mean by "normal" timeout? Is there an "abnormal" timeout? (Shades of "Young Frankenstein" come to mind for some reason.)
Message 2 of 12
Amazing how a Gene Wilder movie reference pops up :)  I found his book "Kiss Me Like a Stranger" to be very enlightening.

Back to the subject: I would not expect a timeout to force an operation to take the maximum timeout time.

For example: I put a Timeout inside a function. If my function completes its operation under the specified Timeout time, then it exits the function. Why wait further if the work is already completed?

The Telnet Timeout seems to work like this: "OK, done with my operation... I think I'll wait the remaining time until the Timeout expires." This is very wasteful.
 


Message 3 of 12
Please post your code.  Perhaps there is something wrong in the setup.  What "operation" are you trying to do and how is it supposed to signal that it's done?
Message 4 of 12
What mode do you have set for your TCP/IP read?  From the LV help:

mode indicates the behavior of the read operation.

0 - Standard (default)—Waits until all bytes you specify in bytes to read arrive or until timeout ms runs out. Returns the number of bytes read so far. If fewer bytes than the number of bytes you requested arrive, returns the partial number of bytes and reports a timeout error.
 
1 - Buffered—Waits until all bytes you specify in bytes to read arrive or until timeout ms runs out. If fewer bytes than the number you requested arrive, returns no bytes and reports a timeout error.

2 - CRLF—Waits until the function receives a CR (carriage return) followed by a LF (linefeed) within the number of bytes you specify in bytes to read or until timeout ms runs out. Returns the bytes up to and including the CR and LF. If the function does not find a CR and LF, returns no bytes and reports a timeout error.

3 - Immediate—Waits until the function receives any bytes from those you specify in bytes to read. Waits the full timeout only if the function receives no bytes. Returns the number of bytes so far. Reports a timeout error if the function receives no bytes.

So if you're using Standard (or haven't wired anything), then unless you receive the number of bytes you wired into bytes to read, the read will time out. So if you have wired 1000 bytes and only 999 come in, you will wait out the full timeout. I usually end up using the Immediate option.
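For anyone more comfortable seeing this in text code, here is a minimal Python socket sketch of those two behaviors. The host is not shown, and the byte counts and timeouts are hypothetical; this only approximates the LabVIEW modes, it is not their implementation:

```python
# Minimal sketch of "Standard" vs. "Immediate" read behavior using raw
# Python sockets. Byte counts and timeouts are hypothetical.
import socket
import time

def read_standard(sock, nbytes, timeout_s):
    """Standard-mode analogue: keep reading until exactly nbytes arrive
    or the overall deadline passes, then return the (possibly partial)
    data. The partial-read case is what produces LabVIEW's error 56."""
    deadline = time.monotonic() + timeout_s
    data = b""
    while len(data) < nbytes:
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            break                      # overall timeout: return partial data
        sock.settimeout(remaining)
        try:
            chunk = sock.recv(nbytes - len(data))
        except socket.timeout:
            break
        if not chunk:
            break                      # peer closed the connection
        data += chunk
    return data

def read_immediate(sock, nbytes, timeout_s):
    """Immediate-mode analogue: return as soon as *any* bytes arrive;
    the full timeout is only paid when nothing arrives at all."""
    sock.settimeout(timeout_s)
    try:
        return sock.recv(nbytes)
    except socket.timeout:
        return b""
```

With 1000 bytes requested and only 999 available, read_standard waits out the whole deadline before returning the partial data, while read_immediate returns whatever has already arrived.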


Message 5 of 12
I'm sorry, I can't post my code until I make a smaller example from my Telnet library and remove some sensitive information (I'll see if I can put something together). The main idea is that I have three main Telnet VIs that I use: open, read/write, and close. Typically in my application, when I write I require a read. There may be a problem in how I implement my code. I'm using the Telnet VI library palette, not TCP/IP, although I do understand that TCP/IP is used within the Telnet library.

The modes that are allowed in the Telnet functions are normal, line, and buffered. I am using normal. I believe the problem is that I don't know how many bytes I will be getting back from a Telnet session, so I'm usually padding more than is necessary. For example, if I read a version string from my product, the byte length can vary. This may be what is forcing my read to wait out the maximum Timeout. This leads to a very interesting dilemma: how to handle responses of varying byte length. I guess I could change to line mode, but I was wondering how people use normal mode (one byte at a time?). Is there a way to detect that the Telnet buffer has been read and is empty?
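For what it's worth, one way to approximate "the buffer is empty" outside the Telnet VIs is to keep reading until the connection goes quiet for a short idle window. A rough Python socket sketch of that idea follows; the idle value is a made-up tuning knob, and "quiet" only approximates "reply complete":

```python
# Hypothetical "drain until idle" read for replies of unknown length:
# keep reading until no new bytes arrive for idle_s seconds.
import socket

def read_until_idle(sock, idle_s=0.05, chunk=4096):
    sock.settimeout(idle_s)
    data = b""
    while True:
        try:
            piece = sock.recv(chunk)
        except socket.timeout:
            break          # line went quiet: assume the reply is complete
        if not piece:
            break          # peer closed the connection
        data += piece
    return data
```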
 
I made a separate post regarding the Telnet Line Client example, and now I think these posts cover the same issue. I'm just trying to figure out the best way to make a bulletproof, flexible Telnet client.
 


Message 6 of 12

Looks like the Telnet Client Example is designed to fail with a timeout for any read of less than 1000 bytes. The key is that it sets the timeout very low and continues to process until a null string is returned. This still guarantees the minimum Timeout will be 100 ms. I would like an implementation that minimizes the time regardless of the number of bytes that need to be read.

I'm looking for a lightning-fast implementation as well; any best-practice suggestions would be appreciated.
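One way to beat that 100 ms floor, assuming the device terminates every reply with a known prompt (the b"> " below is purely an assumption), is to check for the terminator inside the loop and return the moment it appears, keeping the short idle timeout only as a fallback. A Python sketch of that idea:

```python
# Sketch: return the instant a known terminator arrives, so a complete
# reply never has to wait out the idle window. terminator, idle_s, and
# max_s are all assumptions to tune for the actual device.
import socket
import time

def read_reply(sock, terminator=b"> ", idle_s=0.1, max_s=5.0):
    deadline = time.monotonic() + max_s
    data = b""
    while time.monotonic() < deadline:
        sock.settimeout(idle_s)
        try:
            piece = sock.recv(4096)
        except socket.timeout:
            break                      # quiet line: fall back to idle detection
        if not piece:
            break                      # connection closed
        data += piece
        if data.endswith(terminator):
            break                      # reply complete: return immediately
    return data
```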

 



Message 7 of 12
Telnet Read
 
read mode determines how the VI reads data from the Telnet connection. read mode can have the following values:
  • normal—(Default) The VI reads from the connection until it reads the maximum bytes to read or timeout ms expires, returning all data the VI reads.
  • line—The VI reads from the connection until it reads the maximum bytes to read, the line separator matches the data, or timeout ms expires. The VI returns all data it reads, up to and including the characters that match line separator.
  • buffered—The VI reads from the connection until it reads the maximum bytes to read or timeout ms expires. If the operation times out, the VI returns no data.
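If it helps to map these modes onto something textual, line mode corresponds closely to read_until in Python's standard telnetlib (note that telnetlib is deprecated since Python 3.11 and removed in 3.13; the host, port, and separator below are placeholders):

```python
# Line-mode analogue: read until CRLF or timeout. On timeout, read_until
# returns whatever partial data arrived, much like normal mode does.
from telnetlib import Telnet

with Telnet("192.168.1.10", 23, timeout=5) as tn:
    reply = tn.read_until(b"\r\n", timeout=1.0)
    print(reply)
```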


Message 8 of 12
Let's say I open a Telnet session and by default I should read a header of 126 bytes. I set my bytes to read to 126... why do I still get error 56?

I did some more investigating with timeouts and my Telnet VIs. I made a simple program using the Telnet client example as a template (see attached) with a ms timer before and after the session. I could have used the performance profiler, but I was graphing the results in a loop to see how things compared.

The Open Telnet Timeout, as long as it's > 4 ms, opens correctly (I defaulted this to 100 ms since it doesn't seem to change the overall time of my connection).

Just to see how long it took for a single byte to be read back, I set bytes to read = 1 and my timeout = 1000 ms. Using my ms timer, the result was around 25 ms, and the byte is properly read back.

I then changed bytes to read = 1000 and left the Timeout at 25 ms. To my surprise, I read back a whopping 115 bytes (mmm... whopper).

So now I increase my Timeout until I get my full 126 bytes. Again to my surprise, it takes Timeout = 90 ms to get the data back (not 50 ms as you would expect). This whole time I have yet to see error 56 go away :)
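For anyone who wants to reproduce this kind of measurement outside LabVIEW, a Python version of the ms-timer-around-the-read test might look like the sketch below. It times a single plain recv, not the Telnet VI internals, so it is an analogy rather than a reproduction:

```python
# Hypothetical timing harness: wrap one read in perf_counter calls and
# return the elapsed milliseconds alongside the data, so per-read
# latency can be graphed the same way as the ms-timer test above.
import socket
import time

def timed_read(sock, nbytes, timeout_s):
    sock.settimeout(timeout_s)
    t0 = time.perf_counter()
    try:
        data = sock.recv(nbytes)
    except socket.timeout:
        data = b""
    elapsed_ms = (time.perf_counter() - t0) * 1000.0
    return data, elapsed_ms
```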
 
 


Message 9 of 12
If you dive into the code, you will find a No Time Out Error.vi. In it, the author checks for a timeout error and clears only the status flag, not the message or error code. The Telnet Buffered Read.vi uses a while loop to collect the data with much smaller timeout values so it can fall out as fast as possible. Since the author doesn't clear the code or message, if no data is present during an iteration of the while loop, you will get that warning. If you modify No Time Out Error.vi to clear everything, then you will not receive the warning. Additionally, if you put a small time delay between your open connection and your close connection, you can avoid the warning. In my testing, the warning came because the server had not started sending data yet immediately following the opening of the connection.
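A rough Python analogue of both points: collect the data with many small-timeout reads, and swallow each intermediate timeout completely so no stale code or message survives to be reported as a warning. The slice and total timeout values below are placeholders:

```python
# Sketch of the buffered-read-with-small-timeouts pattern, with each
# intermediate timeout fully "cleared": a quiet slice leaves no residue.
import socket
import time

def buffered_read(sock, nbytes, total_s=1.0, slice_s=0.025):
    deadline = time.monotonic() + total_s
    data = b""
    sock.settimeout(slice_s)
    while len(data) < nbytes and time.monotonic() < deadline:
        try:
            chunk = sock.recv(nbytes - len(data))
        except socket.timeout:
            continue   # quiet slice: swallow it entirely and retry
        if not chunk:
            break      # peer closed the connection
        data += chunk
    return data
```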
Message 10 of 12