
LabVIEW


Bytes at Port property node: when to use it and when not?


Hello all,

 

I've frequently used the Bytes at Port property node when performing serial reads and haven't experienced problems.  I have heard mixed reviews on it, and I would like to know when to use it and when not to use it.

 

Thanks for any and all input.



-Matt
Message 1 of 46
Solution
Accepted by topic author Wolleee

Let me start with when it shouldn't be used.

 

When there is ASCII data (as in human-readable) coming through and it is separated by a termination character.  Enable the termination character and request more bytes than you ever expect in a given message.

 

When there is binary data, but a protocol where you know clearly how the message packet is set up.  In this case you will want to disable the termination character, because any byte could be legitimate data and be misinterpreted as the termination character.  With these protocols, if you know the message is always X bytes, then read X bytes.  If the message is variable length, but the protocol is defined to tell you how long the message is, then do partial reads.  So if every message starts with 2 bytes that say X bytes follow, then read 2 bytes, convert that to a number, and read that number of bytes.

 

When to use:

The only time to really use Bytes at Port is if you are using a terminal type of setup: just grab and display on screen whatever happens to have arrived at the port since the last time it was read, when you don't care where the message breaks are.

 

If you do use Bytes at Port and care about message breaks, then you are forced to concatenate all of your new data into a string you store in a shift register, and on every read parse through that data to determine whether you have a complete, valid message.  If you don't, do nothing and go back and read more.  You might actually have to run through the string data multiple times in a row before going back to read again, in case multiple message packets came through in a single read.
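That accumulate-and-parse loop can be sketched in Python for a termination-character protocol. The name `extract_messages` is hypothetical; in LabVIEW the `buffer` value is what would live in the shift register.

```python
def extract_messages(buffer, term=b"\n"):
    """Split an accumulated receive buffer into complete messages plus
    the leftover partial bytes.

    Loops rather than splitting once, because several message packets
    may have arrived in a single read.
    """
    messages = []
    while term in buffer:
        # partition() returns (before, separator, after); keep parsing
        # the remainder until no full terminated message is left.
        msg, _, buffer = buffer.partition(term)
        messages.append(msg)
    return messages, buffer
```

Each call would be fed the previous leftover plus the newly read bytes; an empty message list means "do nothing and go back and read more."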

 

The vast majority of applications fall into one of the situations above.  The last situation isn't as common, and if you use Bytes at Port with a messaging system that has defined packets or a defined protocol, you are doing even more work in programming to maintain a software buffer of data on top of the hardware buffer at the serial port.

 

 

Message 2 of 46

Hi ravensfan,

 

That is all great information!  That clears up a lot of confusion for me.  Thank you.

 

Consider one more situation: a terminal that sends ASCII data infrequently, but with a termination character.  You set up the port to read until the termination character, but also use Bytes at Port to determine whether you should actually perform a read at all.  The application is set up so that if more than 0 bytes are waiting, it performs a read until the termination character, requesting more bytes than ever expected.
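A minimal Python sketch of that check-then-read pattern, assuming a pyserial-style port with an `in_waiting` property and a `read_until` method. The `FakePort` class is just a stand-in so the logic can run without hardware; the function name is illustrative.

```python
class FakePort:
    """Minimal stand-in exposing the two pyserial-style members used below."""

    def __init__(self, pending=b""):
        self._pending = pending

    @property
    def in_waiting(self):
        # Number of bytes currently waiting in the receive buffer.
        return len(self._pending)

    def read_until(self, expected=b"\n", size=None):
        # Return bytes up to and including the terminator (or everything,
        # if no terminator has arrived), capped at `size` bytes.
        idx = self._pending.find(expected)
        end = len(self._pending) if idx == -1 else idx + len(expected)
        if size is not None:
            end = min(end, size)
        chunk, self._pending = self._pending[:end], self._pending[end:]
        return chunk

def poll_and_read(port, term=b"\n", max_len=4096):
    """Only issue a terminated read once at least one byte has arrived."""
    if port.in_waiting == 0:
        return None  # nothing has started arriving; poll again later
    return port.read_until(term, max_len)
```

Here Bytes at Port (`in_waiting`) only answers "is there anything to read?"; the terminated read, with a generous byte count, still decides where the message ends.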

 

I attached an example in case that was confusing.



-Matt
Message 3 of 46
Solution
Accepted by topic author Wolleee

In my experience, there are 6 ways of getting data from an instrument based on 2 parameters.  The first parameter is the data format: ASCII or binary.  The second parameter is when the data is sent from the instrument: 1) when commanded (only responds to commands), 2) stream (data is sent at a constant rate), and 3) intermittent (data is sent at unknown intervals, typically when something changes).

 

I will just focus on the ASCII data format since that has covered >90% of the situations I have seen.  For commanded and stream, Bytes at Port makes no sense to use at all.  For intermittent, I do use Bytes at Port to see if there is any data there (i.e., a message has at least started) and then read the entire message, much like what your example does.

 

EDIT: One final note, NEVER use the Bytes At Port to tell the VISA Read how many bytes to read.  Always try to read more bytes than you ever expect to have in a message.


Message 4 of 46

I am most likely doing things wrong, as I typically always use the Bytes at Port property.  I have used too many serial instruments that do not follow protocol.  For example, one gave a multiline response at times with the termination character in between lines; others have had multiple termination characters.  I found that Bytes at Port handles the odd corner cases better than the standard read-until-termination-character.

 

The snippet shows what I usually use.  But now I am a bit worried, as it goes against the Knights' advice.

 

Cheers,

mcduff

 

Snippet.png

Message 5 of 46

If it works for you, then you should use it.  I said it doesn't need to be used 95% of the time.  That still leaves about 5% of the time where it needs to be used, and in your situation it might be useful.  You said yourself "odd corner cases," which implies exceptions and not very typical situations.

 

If by multiple termination characters you mean messages that end with multiple bytes, such as a CR/LF, then I'd use the last byte as the defined termination character.  If by multiple you mean some messages end with one byte and other messages end with a different byte, then you are probably going to need Bytes at Port and manage all of your message handling in code.  A device that does that doesn't deserve to exist, and I would try to avoid ever using it.

 

If you are dealing with multi-line responses, then I would try to detect when I'd expect a multi-line response and do multiple consecutive reads.   (e.g. This query gives a 3 line response, I'll do 3 VISA Reads.)
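The multi-read approach can be sketched in Python against a pyserial-style `read_until`. The `FakePort` class below is an illustrative stand-in, not real instrument I/O, and the function name is an assumption.

```python
class FakePort:
    """Stand-in for a pyserial-style port; hands back queued bytes."""

    def __init__(self, data):
        self._data = data

    def read_until(self, expected=b"\n", size=None):
        # Return one chunk up to and including the terminator,
        # or whatever remains if no terminator is present.
        idx = self._data.find(expected)
        if idx == -1:
            chunk, self._data = self._data, b""
        else:
            end = idx + len(expected)
            chunk, self._data = self._data[:end], self._data[end:]
        return chunk

def read_multiline_response(port, lines, term=b"\n"):
    """For a query known to produce `lines` terminated lines,
    do that many terminated reads rather than one Bytes-at-Port read."""
    return [port.read_until(term) for _ in range(lines)]
```

The knowledge "this query returns 3 lines" lives in the calling code, so the port configuration (termination character enabled) never has to change.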

 

In my experience, every device I used followed a protocol that was pretty clear and did not require me to use Bytes at Port.  I just had to use termination characters, or read X number of bytes without using a termination character, to get a complete message.

 

I just find that the basic NI examples for serial ports use Bytes at Port incorrectly, which guides new users toward using it and causes them more problems in getting valid data than it ever should.  A frequent error is: write the query message, read Bytes at Port, then read that many bytes, with no delay to give the device any time to begin responding.  A delay between the write and the check of Bytes at Port helps, but when the device uses a good protocol that has a termination character, then the termination character should be used and Bytes at Port should not.

Message 6 of 46

Thanks for the advice.

 

"If you are dealing with multi-line responses, then I would try to detect when I'd expect a multi-line response and do multiple consecutive reads.  (e.g. This query gives a 3 line response, I'll do 3 VISA Reads.)"

 

I am admittedly pretty lazy. I just wanted a general IO VI that I could use without worrying about any specific command.

 

I could also reuse the same VI for binary data or ASCII data at the serial port. (See above)

 

Cheers,

mcduff

Message 7 of 46

Hi everybody,

 

Really great input on this matter.  I've questioned whether I was using the Bytes at Port node correctly for some time now.  As always, I appreciate the help.



-Matt
Message 8 of 46

I don't have the same version of LabVIEW.  Can you please post a picture of the code?

Message 9 of 46

@Dr. Strange

 

You need to tell us whose code you are referring to.

 

mcduff

Message 10 of 46