
TCP Read with custom message termination symbol (LF instead of CRLF)

I'm interfacing with an instrument that terminates all messages with a \n (LF) character instead of the more common (or assumed-to-be-common) CRLF sequence.

Is it possible to configure a TCP connection to do reads based on this termination character? The messages coming in are of indeterminate length.

 

My current work-around is to wait for the first character with a long-time-out, single-byte read, then feed that to a while loop that reads one character at a time on a very short time-out until it finds \n or times out, at which point it returns all the characters read. This results in sometimes hundreds or even thousands of individual reads for the longer data messages, which seems very inefficient.
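In textual form, the work-around looks roughly like the following Python sketch (Python is used here only for illustration; the function name and the time-out values are placeholders, not from the post):

```python
import socket

def read_until_lf(sock: socket.socket,
                  first_timeout: float = 10.0,
                  char_timeout: float = 0.05) -> bytes:
    """Long wait for the first byte, then one-byte reads on a short
    time-out until LF arrives (or a read times out)."""
    sock.settimeout(first_timeout)
    msg = sock.recv(1)                 # blocks up to first_timeout
    sock.settimeout(char_timeout)
    while msg and not msg.endswith(b"\n"):
        try:
            b = sock.recv(1)           # one byte per call -> many reads
        except socket.timeout:
            break                      # give up; return what we have
        if not b:                      # peer closed the connection
            break
        msg += b
    return msg
```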

 

Another option is to assume that all data will be returned within some window "x": do a standard read for a best-guess (or 999999999) byte count and wait for the time-out, assuming the message is complete once it fires. This is of course inefficient, since even short messages have to wait out the full read time-out.
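A sketch of that option under the same illustrative assumptions (the time-out and chunk size are arbitrary): keep reading until the overall time-out lapses, mirroring how a fixed-size read waits out its time-out when fewer bytes arrive:

```python
import socket
import time

def read_until_timeout(sock: socket.socket, timeout: float = 2.0) -> bytes:
    """Keep reading until the full time-out lapses, then assume the
    message is complete (even short messages pay the whole time-out)."""
    deadline = time.monotonic() + timeout
    data = b""
    while True:
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            break                      # time-out reached: call it done
        sock.settimeout(remaining)
        try:
            chunk = sock.recv(4096)
        except socket.timeout:
            break
        if not chunk:                  # connection closed
            break
        data += chunk
    return data
```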

 

A third option I'm toying with as I'm typing this is to wait on the first character + <x ms>, then do a 9999999-byte read with a short time-out. The assumption is that while the instrument response time can vary, once it starts transmitting data it will finish the message within x ms of sending the first character. Not fool-proof, but this may be a good balance between time and efficiency?
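A sketch of that third idea (again illustrative Python; the first-byte time-out and settle time are made-up numbers): block for the first character, sleep x ms, then drain whatever has arrived:

```python
import socket
import time

def read_after_settle(sock: socket.socket,
                      first_timeout: float = 10.0,
                      settle_ms: int = 100) -> bytes:
    """Block (long time-out) for the first character, give the instrument
    settle_ms to finish the message, then drain the receive buffer."""
    sock.settimeout(first_timeout)
    msg = sock.recv(1)                 # long wait for the first byte
    time.sleep(settle_ms / 1000)       # assume the rest lands within x ms
    sock.setblocking(False)
    try:
        while True:
            chunk = sock.recv(65536)
            if not chunk:              # connection closed
                break
            msg += chunk
    except BlockingIOError:            # buffer drained
        pass
    finally:
        sock.setblocking(True)
    return msg
```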

 

Any other suggestions or creative variations?

Thanks,

Q

QFang
-------------
CLD LabVIEW 7.1 to 2016
0 Kudos
Message 1 of 14
(7,397 Views)

I attached a snippet of my current version. It combines query and read operations that are relatively quick and responsive (on the scale of our instrument communication) with some degree of scalability and fault checking.

 

The constants could/should be tweaked depending on your application.

The "read immediate" byte count constant and time-out inside the while loop could be tweaked towards more iterations with shorter time-outs, or fewer iterations with longer time-outs. The while loop will build up the message string until the read throws an error (if the error is a time-out error, we pass the error from before the read while-loop out instead).

After the while-loop finishes, it will flag a warning if the last character isn't the expected termination character. This gives some flexibility downstream from the Query read on whether to process the potentially partial message, or ignore the warning.
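For readers without LabVIEW handy, here is a loose Python rendering of the flow described above (not the actual VI; the names, byte count, and time-out are illustrative only):

```python
import socket
import warnings

def query_read(sock: socket.socket, term: bytes = b"\n",
               read_timeout: float = 0.05, read_bytes: int = 512) -> bytes:
    """Accumulate reads until one times out, then warn (rather than
    error) if the result doesn't end with the expected terminator."""
    sock.settimeout(read_timeout)
    msg = b""
    while True:
        try:
            chunk = sock.recv(read_bytes)   # "read immediate" stand-in
        except socket.timeout:
            break                # a time-out ends the loop, as in the VI
        if not chunk:            # connection closed
            break
        msg += chunk
    if not msg.endswith(term):
        # flag a warning so downstream code can decide whether to
        # process the potentially partial message or ignore it
        warnings.warn("message did not end with the expected terminator")
    return msg
```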

 

Comments are appreciated.

 

Thanks,

Q

QFang
-------------
CLD LabVIEW 7.1 to 2016
0 Kudos
Message 2 of 14
(7,392 Views)

I should have included the snippet as an in-line image as well so people can comment on the code without actually opening it.

[snippet image]

 

Comments and constructive criticism are encouraged!

Thanks,

Q

QFang
-------------
CLD LabVIEW 7.1 to 2016
0 Kudos
Message 3 of 14
(7,370 Views)

Hi Q,

 

Based on my research, it is not possible to modify the actual TCP Write function to trigger based on LF instead of CRLF. However, there might be another possibility to ensure the same behavior. You could use the “Search and Replace String” function to find the LF character in the string from your instrument and replace it with a CRLF before feeding the string to the TCP Write function. An example of how to do this is shown below.

 

[image: StringReplace_LF_with_CRLF.JPG]
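In text form, the replace-before-write idea amounts to something like this (Python for illustration; the function name is made up):

```python
import socket

def write_with_crlf(sock: socket.socket, message: str) -> None:
    """Swap the LF terminator for CRLF before the TCP write."""
    sock.sendall(message.replace("\n", "\r\n").encode("ascii"))
```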

I hope this helps; please follow up with any more questions you might have.

 

Regards,

Mike Watts
Product Manager - Modular Instruments
0 Kudos
Message 4 of 14
(7,346 Views)

Michael5000Watts,

 

That's an interesting answer, but I'm confused about why you are talking about adding LF/CRLF to the end of an outbound string (TCP Write) when my opening post and my code snippet both deal with TCP Read operations.

 

Anyway, thanks for looking.

QFang
-------------
CLD LabVIEW 7.1 to 2016
0 Kudos
Message 5 of 14
(7,338 Views)

The property node associated with the VISA VIs has an option under "Message Based Settings" for setting the termination character and "Termination Character Enable". The default is 0x0A (linefeed) for the char and False for the enable. Try setting the enable to True; then you can experiment with setting the termination character to different characters to verify that it works.

 

(If you haven't used property nodes: wire the VISA resource to a property node, then click where it says "Property" on the node. You will then get a drop-down menu of choices where you will find the "Message Based Settings" section mentioned above.)
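For comparison, the same settings expressed with PyVISA, the Python VISA binding (the resource string and query below are placeholders):

```python
import pyvisa

rm = pyvisa.ResourceManager()
inst = rm.open_resource("TCPIP0::192.168.1.10::5025::SOCKET")  # placeholder address
inst.read_termination = "\n"    # termination character = LF (0x0A)
inst.write_termination = "\n"
inst.timeout = 5000             # milliseconds
print(inst.query("*IDN?"))      # the read stops at the LF terminator
```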

 

Hope that helps.

Message 6 of 14
(7,333 Views)

Riley, 

 

Using the VISA palette is not an option in this application; it needs to be done at a lower level using TCP Reads. Otherwise a good suggestion.

 

QFang
-------------
CLD LabVIEW 7.1 to 2016
0 Kudos
Message 7 of 14
(7,325 Views)

QFang, this is a useful, well designed function. I'm really curious about the "WAIT" icon inserted in the error cluster wire just inside the case structure. Is this a built-in LV function?

 

Thanks,

Clay

0 Kudos
Message 8 of 14
(7,217 Views)

Thanks Clay, I'm happy you are finding it useful.

 

 

The "Wait" is simply a wait ms type function that follows the data-flow of the error wire so that I can control where/when the wait happens..

 

It is perhaps more complex than you would expect, even though it's super simple... so here are my reasons for coding it the way I did:

 

It is a sub-VI containing a flat sequence structure with a single frame. Inside the frame is a "Wait (ms)" NI function, and the error-in wires through the sequence structure to the error-out. The Wait.vi properties are set to "re-entrant" and "inline subVI into calling VIs". The sequence structure is only necessary because of the "inline subVI into calling VIs" check-mark; it keeps the code from changing its behavior when you inline it.

The reason I inline these is that I tend to use this function all over my code: I don't want the sub-VI to block, and I don't want the VI cache to track potentially hundreds of re-entrant instances of this simple function. This over-complication of a simple wait may or may not be sensible, but it works for me. Your mileage may vary! 😛

[image: wait ms.png]
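A loose text analogue of what the Wait VI does (Python, purely illustrative): the sleep is a pass-through on the error value, so its place in the execution order comes from the data dependency, just as wiring the error through does in G:

```python
import time
from typing import Optional

def wait_ms(error_in: Optional[Exception], ms: int) -> Optional[Exception]:
    """Sleep, then pass the error through, so callers sequence the wait
    by data dependency rather than by structure placement."""
    time.sleep(ms / 1000)
    return error_in
```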

 

Thanks,

Q

 

edit: I noticed your post was the first post you have made here at the NI Forums, so welcome! Also, note that the pictures in the posts above are "snippets" that can be downloaded and dragged into LabVIEW (2011 or later), which will (hopefully) convert the PNG image to actual code.

QFang
-------------
CLD LabVIEW 7.1 to 2016
0 Kudos
Message 9 of 14
(7,211 Views)

Inlining a SubVI adds a virtual sequence structure for you; there is no need to do it explicitly.

0 Kudos
Message 10 of 14
(7,208 Views)