
How to check if data is ready on visa/serial


@billko wrote:

@crossrulz wrote:

@Dobrinov wrote:
In my case I work with embedded systems. I am responsible for implementing the end-of-line testers for those, and those almost always use a UART interface as a means of communication with the OS running on them. As a rule, as part of the testing process I have to boot from a certain source with a specially prepared bootloader, boot some version of Linux configured for that specific DUT, log in, run a number of prepared scripts, log their console feedback, analyze it, and determine whether everything ran properly, i.e. whether a test step passed or failed, and in that case log the results in a TestStand report via string parsing. Termination characters are useless in that case, since every line of the console feed is terminated anyway. Counting the number of returned bytes is useless as well - if any error occurs, that number will change. Some scripts can run for multiple minutes, so fixed timeouts that long are unacceptable; with a loop I can determine that a DUT is not communicating at all and interrupt the loop long before the timeout would occur. In production, wasting minutes for no good reason is a no-go - time is money. The only way is to wait until a certain string, found by regex, appears on that console, and then proceed with whatever the test specification demands.

 

Nobody in that field would consider a standard embedded system running Linux as "brain damaged". Me insisting on anybody changing how things are done just so I can use VISA Read with termination characters would get me laughed out of the building.


Ah, a Command Line Interface, aka a Terminal.  I do plenty of those.  Use the termination character to read a line at a time.  You can easily read multiple lines or do some search to stop reading lines.  What I like to do is have my terminal interface happening in a loop all on its own.  I use a queue to send commands to that loop, including "write", "stop", "return lines that match a specific pattern".  That return data is sent through another queue defined by the writer.


In my experience, terminals always have a prompt.  I read to the prompt.  The prompt is my termination character.  If the response has human formatting of several lines separated by prompts, I do as many VISA reads as needed.  If I'm not initiating a query, then I'm using Bytes at Port to see if there is anything to read.  If it's not zero, then I do a VISA read.

Is that what you meant by "termination character"?


That works.  I still tend to use the Carriage Return and/or Line Feed, depending on the device.  Thinking through the terminals I have had to deal with, there are some commands and/or responses that will have the prompt character inside the line.  You just have to be careful about that.
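
In text form, the "read to the prompt" idea looks roughly like this - a minimal Python/pyserial sketch standing in for VISA Read, where the port settings, the command, and the prompt string are all assumptions:

    import serial

    PROMPT = b'$ '  # hypothetical shell prompt; use whatever your device prints

    ser = serial.Serial('COM3', 115200, timeout=5)  # assumed port and baud rate
    ser.write(b'uname -a\n')

    # read_until() plays the role of VISA Read with a termination character,
    # except the "terminator" here is the whole prompt string. Caveat from
    # above: if the prompt characters can also appear inside a line of
    # output, this returns early, so sanity-check what came back.
    reply = ser.read_until(PROMPT)
    print(reply.decode(errors='replace'))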


GCentral
There are only two ways to tell somebody thanks: Kudos and Marked Solutions
Unofficial Forum Rules and Guidelines
"Not that we are sufficient in ourselves to claim anything as coming from us, but our sufficiency is from God" - 2 Corinthians 3:5
Message 21 of 52

@crossrulz wrote:

@billko wrote:

@crossrulz wrote:

@Dobrinov wrote:
In my case I work with embedded systems. I am responsible for implementing the end-of-line testers for those, and those almost always use a UART interface as a means of communication with the OS running on them. As a rule, as part of the testing process I have to boot from a certain source with a specially prepared bootloader, boot some version of Linux configured for that specific DUT, log in, run a number of prepared scripts, log their console feedback, analyze it, and determine whether everything ran properly, i.e. whether a test step passed or failed, and in that case log the results in a TestStand report via string parsing. Termination characters are useless in that case, since every line of the console feed is terminated anyway. Counting the number of returned bytes is useless as well - if any error occurs, that number will change. Some scripts can run for multiple minutes, so fixed timeouts that long are unacceptable; with a loop I can determine that a DUT is not communicating at all and interrupt the loop long before the timeout would occur. In production, wasting minutes for no good reason is a no-go - time is money. The only way is to wait until a certain string, found by regex, appears on that console, and then proceed with whatever the test specification demands.

 

Nobody in that field would consider a standard embedded system running Linux as "brain damaged". Me insisting on anybody changing how things are done just so I can use VISA Read with termination characters would get me laughed out of the building.


Ah, a Command Line Interface, aka a Terminal.  I do plenty of those.  Use the termination character to read a line at a time.  You can easily read multiple lines or do some search to stop reading lines.  What I like to do is have my terminal interface happening in a loop all on its own.  I use a queue to send commands to that loop, including "write", "stop", "return lines that match a specific pattern".  That return data is sent through another queue defined by the writer.


In my experience, terminals always have a prompt.  I read to the prompt.  The prompt is my termination character.  If the response has human formatting of several lines separated by prompts, I do as many VISA reads as needed.  If I'm not initiating a query, then I'm using Bytes at Port to see if there is anything to read.  If it's not zero, then I do a VISA read.

Is that what you meant by "termination character"?


That works.  I still tend to use the Carriage Return and/or Line Feed, depending on the device.  Thinking through the terminals I have had to deal with, there are some commands and/or responses that will have the prompt character inside the line.  You just have to be careful about that.


I never thought about that!  Fortunately, my "terminals" were always really commands that I sent and responses I received, and I knew I wasn't expecting a prompt character inside a message.  But holy cow, if I ever expanded that to be an actual terminal emulator used by a human being, that would not work well at all.

Bill
CLD
(Mid-Level minion.)
My support system ensures that I don't look totally incompetent.
Proud to say that I've progressed beyond knowing just enough to be dangerous. I now know enough to know that I have no clue about anything at all.
Humble author of the CLAD Nugget.
Message 22 of 52

@crossrulz wrote:

@Dobrinov wrote:
In my case I work with embedded systems. I am responsible for implementing the end-of-line testers for those, and those almost always use a UART interface as a means of communication with the OS running on them. As a rule, as part of the testing process I have to boot from a certain source with a specially prepared bootloader, boot some version of Linux configured for that specific DUT, log in, run a number of prepared scripts, log their console feedback, analyze it, and determine whether everything ran properly, i.e. whether a test step passed or failed, and in that case log the results in a TestStand report via string parsing. Termination characters are useless in that case, since every line of the console feed is terminated anyway. Counting the number of returned bytes is useless as well - if any error occurs, that number will change. Some scripts can run for multiple minutes, so fixed timeouts that long are unacceptable; with a loop I can determine that a DUT is not communicating at all and interrupt the loop long before the timeout would occur. In production, wasting minutes for no good reason is a no-go - time is money. The only way is to wait until a certain string, found by regex, appears on that console, and then proceed with whatever the test specification demands.

 

Nobody in that field would consider a standard embedded system running Linux as "brain damaged". Me insisting on anybody changing how things are done just so I can use VISA Read with termination characters would get me laughed out of the building.


Ah, a Command Line Interface, aka a Terminal.  I do plenty of those.  Use the termination character to read a line at a time.  You can easily read multiple lines or do some search to stop reading lines.  What I like to do is have my terminal interface happening in a loop all on its own.  I use a queue to send commands to that loop, including "write", "stop", "return lines that match a specific pattern".  That return data is sent through another queue defined by the writer.


Well, reading single lines, i.e. everything up to the termination character, in a loop would work. I'd still jam everything into a shift register for display (and debugging) purposes, so no changes there. But if one assumes - and this is very much an assumption - that the string used as the stop criterion will be contained in a single line, i.e. between two termination characters, then the search part would be more efficient, since I'd only scan each line as it comes in rather than everything in the shift register up to that point.

 

I'd have to use a low timeout though, and clear the resulting timeout error on each iteration if I want to avoid blocking execution, but it would work. I'll build a prototype and see whether I like it better sometime this week.
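
For the text-inclined, a rough analog of that prototype - Python with pyserial standing in for the VISA calls; the stop pattern, port settings, and timing numbers are all assumptions:

    import re
    import time
    import serial

    STOP = re.compile(rb'login:')   # hypothetical stop string, found by regex
    DEAD_AFTER = 10.0               # give up if the DUT goes silent this long (s)

    ser = serial.Serial('COM3', 115200, timeout=0.1)  # low per-read timeout

    log = bytearray()               # plays the role of the shift register
    last_data = time.monotonic()
    while True:
        # one line, or whatever arrived before the timeout (possibly b'');
        # unlike VISA, pyserial has no timeout error to clear
        line = ser.readline()
        if line:
            log += line
            last_data = time.monotonic()
            if STOP.search(line):   # assumes the pattern fits in a single line
                break
        elif time.monotonic() - last_data > DEAD_AFTER:
            raise TimeoutError('DUT stopped communicating')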

"Good judgment comes from experience; experience comes from bad judgment." Frederick Brooks
Message 23 of 52

@Dobrinov wrote:

I'd have to use a low timeout though, and clear the resulting timeout error on each iteration if I want to avoid blocking execution, but it would work. I'll build a prototype and see whether I like it better sometime this week.


Which brings us back to the original question.  You can use the Bytes At Port to see if any data is there.  If there is data, read the line.  If there is no data, either do nothing or have a short wait.
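
A minimal sketch of that polling pattern - Python/pyserial, where in_waiting is the analog of Bytes at Port, and the port settings and wait time are assumptions:

    import time
    import serial

    ser = serial.Serial('COM3', 115200, timeout=1)  # assumed port settings

    while True:
        if ser.in_waiting:          # "Bytes at Port": is any data there?
            line = ser.readline()   # data present, so read one line
            print(line)             # stand-in for whatever processing you do
        else:
            time.sleep(0.05)        # no data: short wait instead of spinning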

 

You might want to check out some of my code found here: https://github.com/crossrulz/SerialPortNuggets.  This is the code I used in my VI Week presentation last year (VIWeek 2020/Proper way to communicate over serial).


GCentral
There are only two ways to tell somebody thanks: Kudos and Marked Solutions
Unofficial Forum Rules and Guidelines
"Not that we are sufficient in ourselves to claim anything as coming from us, but our sufficiency is from God" - 2 Corinthians 3:5
Message 24 of 52

Another method, which I mentioned earlier: if you are doing some type of command/response system and the response you get back will be large, with multiple line feeds, do a read of a single character with a reasonable timeout. If this read times out, the device failed to give any return data, and that is considered an error. If this read succeeds, then go into a tight loop reading large amounts of data, typically several KB in a single read. The data is buffered while continually reading. These reads use a very short timeout, such as 50 ms. When a timeout is encountered here, it is considered the end of the data, and the result gets passed on for further processing.
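
Sketched in text - Python/pyserial under the same stand-in caveats; the chunk size and the 50 ms figure mirror the description above, the port settings and first timeout are made up:

    import serial

    ser = serial.Serial('COM3', 115200, timeout=5)  # "reasonable" first timeout

    first = ser.read(1)             # a single character: any response at all?
    if not first:
        raise TimeoutError('device gave no return data')  # treated as an error

    response = bytearray(first)
    ser.timeout = 0.05              # very short timeout (~50 ms) for the tight loop
    while True:
        chunk = ser.read(4096)      # pull several KB at a time
        if not chunk:               # a timeout here marks the end of the data
            break
        response += chunk           # buffered while continually reading
    # 'response' now gets passed on for further processing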

 

I work with testing printer firmware, and we issue commands to the printers that return several hundred lines of data with no defined termination. This data is basically intended for some type of terminal application and a human looking at the data. I have found this approach works very well and is very efficient. In some cases, where we have a long-running process with lots of return data, we do this but pass the data on for processing in parallel. Over the years we have gotten the development team to implement more machine-friendly protocols to support our testing efforts, but we still need to test and verify the old terminal-based interfaces.

 

I have found that this approach is more efficient when dealing with large blocks of multi-line responses than reading line by line with a termination character.



Mark Yedinak
Certified LabVIEW Architect
LabVIEW Champion

"Does anyone know where the love of God goes when the waves turn the minutes to hours?"
Wreck of the Edmund Fitzgerald - Gordon Lightfoot
Message 25 of 52

@Mark_Yedinak wrote:

Another method, which I mentioned earlier: if you are doing some type of command/response system and the response you get back will be large, with multiple line feeds, do a read of a single character with a reasonable timeout. If this read times out, the device failed to give any return data, and that is considered an error. If this read succeeds, then go into a tight loop reading large amounts of data, typically several KB in a single read. The data is buffered while continually reading. These reads use a very short timeout, such as 50 ms. When a timeout is encountered here, it is considered the end of the data, and the result gets passed on for further processing.

 

I work with testing printer firmware, and we issue commands to the printers that return several hundred lines of data with no defined termination. This data is basically intended for some type of terminal application and a human looking at the data. I have found this approach works very well and is very efficient. In some cases, where we have a long-running process with lots of return data, we do this but pass the data on for processing in parallel. Over the years we have gotten the development team to implement more machine-friendly protocols to support our testing efforts, but we still need to test and verify the old terminal-based interfaces.

 

I have found that this approach is more efficient when dealing with large blocks of multi-line responses than reading line by line with a termination character.


Well, before doing it with Bytes at Port I was reading single bytes in a tight loop and saving everything in a shift register. I didn't even bother with loop timing of any sort - the VISA timeout slowed the loop down when it ran out of stuff to read. I switched to Bytes at Port because I liked not having to deal with VISA timeouts at all (using my own instead) while achieving basically the same result. But based on your experience I won't bother testing reading whole lines until a termination character is found, since I can already foresee several potential problems (especially if the termination characters differ between what the Linux OS uses and what custom-written scripts produce, since that is hardly standardized).
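
For comparison, the byte-at-a-time approach described above, in the same Python/pyserial shorthand (the stop string and port settings are assumptions):

    import serial

    STOP = b'login:'                 # hypothetical stop string

    ser = serial.Serial('COM3', 115200, timeout=2)  # VISA-style read timeout

    log = bytearray()                # the shift-register equivalent
    while STOP not in log:           # scan everything accumulated so far
        log += ser.read(1)           # single-byte reads; when nothing is left
                                     # to read, the timeout paces the loop
    # note: as described, there is no loop timing, so a dead DUT would keep
    # this spinning; a real version would also bail out after prolonged silence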

"Good judgment comes from experience; experience comes from bad judgment." Frederick Brooks
Message 26 of 52

@crossrulz wrote:

@Dobrinov wrote:

I'd have to use a low timeout though, and clear the resulting timeout error on each iteration if I want to avoid blocking execution, but it would work. I'll build a prototype and see whether I like it better sometime this week.


Which brings us back to the original question.  You can use the Bytes At Port to see if any data is there.  If there is data, read the line.  If there is no data, either do nothing or have a short wait.

 

You might want to check out some of my code found here: https://github.com/crossrulz/SerialPortNuggets.  This is the code I used in my VI Week presentation last year (VIWeek 2020/Proper way to communicate over serial).


Err, no, it doesn't bring us back there - that would just be a different flavor of doing the exact same thing. There are positives and negatives to both ways of doing this, and neither solution is a straight winner - one has to choose which set of pros and cons is more suitable for the application at hand. And I fail to see the point of mixing Bytes at Port with reading whole lines - just because there is something in the port buffer does not guarantee that a whole line (i.e. with a termination character) is there.

 

Also, I started watching your linked presentation, and frankly, I have some issues with it (I didn't watch the whole thing, only the beginning). Your stance on Bytes at Port is, to put it lightly, very strange. The problem with the standard NI example for serial communication isn't the use of Bytes at Port, it's the fixed amount of time waited until, presumably, the whole answer is there to read. If one sets the wait time to a sufficiently high number, there will be no issue reading the whole reply. Every single time. And as with all fixed waiting times, this is inefficient: either one doesn't wait long enough and only part of the reply is in the buffer (again, nothing to do with Bytes at Port), or one waits too long, in which case everything will be there but the time is, at least partially, pure waste. On the other hand, doing things this way means LabVIEW novices don't need to know how to scan the contents of the buffer for any sort of stop criterion, termination character or not. So, instead of telling people not to use Bytes at Port (with a thousand exclamation points), tell them how to do it properly.
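
For reference, the fixed-wait pattern being criticized boils down to something like this (a Python/pyserial sketch of the shape of the NI example; the command, the 500 ms figure, and the port settings are arbitrary assumptions):

    import time
    import serial

    ser = serial.Serial('COM3', 115200)  # assumed port settings
    ser.write(b'some_command\n')         # hypothetical query

    time.sleep(0.5)                      # fixed wait, hoping the full reply arrived
    reply = ser.read(ser.in_waiting)     # read whatever happens to be buffered

    # Sleep too short and the reply is truncated; sleep long enough and it
    # always works, but the surplus wait is dead time. The flaw is the fixed
    # delay, not the in_waiting ("Bytes at Port") read itself.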

 

Another point - you really seem to misunderstand what "racing conditions" mean - if, in this NI example, I set my waiting time too short, and repeatedly don't get the whole reply of a system, this has NOTHING to do with racing conditions. Like, at all.

"Good judgment comes from experience; experience comes from bad judgment." Frederick Brooks
Message 27 of 52

@Dobrinov wrote:

@crossrulz wrote:

@Dobrinov wrote:

I'd have to use a low timeout though, and clear the resulting timeout error on each iteration if I want to avoid blocking execution, but it would work. I'll build a prototype and see whether I like it better sometime this week.


Which brings us back to the original question.  You can use the Bytes At Port to see if any data is there.  If there is data, read the line.  If there is no data, either do nothing or have a short wait.

 

You might want to check out some of my code found here: https://github.com/crossrulz/SerialPortNuggets.  This is the code I used in my VI Week presentation last year (VIWeek 2020/Proper way to communicate over serial).


Err, no, it doesn't bring us back there - that would just be a different flavor of doing the exact same thing. There are positives and negatives to both ways of doing this, and neither solution is a straight winner - one has to choose which set of pros and cons is more suitable for the application at hand. And I fail to see the point of mixing Bytes at Port with reading whole lines - just because there is something in the port buffer does not guarantee that a whole line (i.e. with a termination character) is there.

 

Also, I started watching your linked presentation, and frankly, I have some issues with it (I didn't watch the whole thing, only the beginning). Your stance on Bytes at Port is, to put it lightly, very strange. The problem with the standard NI example for serial communication isn't the use of Bytes at Port, it's the fixed amount of time waited until, presumably, the whole answer is there to read. If one sets the wait time to a sufficiently high number, there will be no issue reading the whole reply. Every single time. And as with all fixed waiting times, this is inefficient: either one doesn't wait long enough and only part of the reply is in the buffer (again, nothing to do with Bytes at Port), or one waits too long, in which case everything will be there but the time is, at least partially, pure waste. On the other hand, doing things this way means LabVIEW novices don't need to know how to scan the contents of the buffer for any sort of stop criterion, termination character or not. So, instead of telling people not to use Bytes at Port (with a thousand exclamation points), tell them how to do it properly.

 

Another point - you really seem to misunderstand what "racing conditions" mean - if, in this NI example, I set my waiting time too short, and repeatedly don't get the whole reply of a system, this has NOTHING to do with racing conditions. Like, at all.


I thought this whole discussion involved how to use "Bytes at Port" the right way?

 

Also, what are "racing conditions"?  Is there some language barrier that changed "race condition" to "racing conditions"?  Do these terms mean the same thing?

 

Trying to say that figuring out how to get all the data in the reply has nothing to do with "Bytes at Port" is like saying that driving over a spike strip has nothing to do with flat tires because it is the puncturing of the tire that causes the flat, not driving over a spike strip.

 

I would like to reiterate what rolfk said so long ago: you are just trying to recreate what VISA does automatically, and it is horribly inefficient. Only very specific cases ever need "Bytes at Port"; otherwise, just let VISA Read do the thing it does best, that is, manage your communications for you, so you don't have to.

Bill
CLD
(Mid-Level minion.)
My support system ensures that I don't look totally incompetent.
Proud to say that I've progressed beyond knowing just enough to be dangerous. I now know enough to know that I have no clue about anything at all.
Humble author of the CLAD Nugget.
Message 28 of 52

@billko wrote:

@Dobrinov wrote:

@crossrulz wrote:

@Dobrinov wrote:

I'd have to use a low timeout though, and clear the resulting timeout error on each iteration if I want to avoid blocking execution, but it would work. I'll build a prototype and see whether I like it better sometime this week.


Which brings us back to the original question.  You can use the Bytes At Port to see if any data is there.  If there is data, read the line.  If there is no data, either do nothing or have a short wait.

 

You might want to check out some of my code found here: https://github.com/crossrulz/SerialPortNuggets.  This is the code I used in my VI Week presentation last year (VIWeek 2020/Proper way to communicate over serial).


Err, no, it doesn't bring us back there - that would just be a different flavor of doing the exact same thing. There are positives and negatives to both ways of doing this, and neither solution is a straight winner - one has to choose which set of pros and cons is more suitable for the application at hand. And I fail to see the point of mixing Bytes at Port with reading whole lines - just because there is something in the port buffer does not guarantee that a whole line (i.e. with a termination character) is there.

 

Also, I started watching your linked presentation, and frankly, I have some issues with it (I didn't watch the whole thing, only the beginning). Your stance on Bytes at Port is, to put it lightly, very strange. The problem with the standard NI example for serial communication isn't the use of Bytes at Port, it's the fixed amount of time waited until, presumably, the whole answer is there to read. If one sets the wait time to a sufficiently high number, there will be no issue reading the whole reply. Every single time. And as with all fixed waiting times, this is inefficient: either one doesn't wait long enough and only part of the reply is in the buffer (again, nothing to do with Bytes at Port), or one waits too long, in which case everything will be there but the time is, at least partially, pure waste. On the other hand, doing things this way means LabVIEW novices don't need to know how to scan the contents of the buffer for any sort of stop criterion, termination character or not. So, instead of telling people not to use Bytes at Port (with a thousand exclamation points), tell them how to do it properly.

 

Another point - you really seem to misunderstand what "racing conditions" mean - if, in this NI example, I set my waiting time too short, and repeatedly don't get the whole reply of a system, this has NOTHING to do with racing conditions. Like, at all.


I thought this whole discussion involved how to use "Bytes at Port" the right way?

 

Also, what are "racing conditions"?  Is there some language barrier that changed "race condition" to "racing conditions"?  Do these terms mean the same thing?

 

Trying to say that figuring out how to get all the data in the reply has nothing to do with "Bytes at Port" is like saying that driving over a spike strip has nothing to do with flat tires because it is the puncturing of the tire that causes the flat, not driving over a spike strip.

 

I would like to reiterate what rolfk said so long ago: you are just trying to recreate what VISA does automatically, and it is horribly inefficient. Only very specific cases ever need "Bytes at Port"; otherwise, just let VISA Read do the thing it does best, that is, manage your communications for you, so you don't have to.


Well, it would be an actual discussion if his stance weren't effectively "never, ever use Bytes at Port!!!!!!!!!!!" and so on. Watch his presentation.

 

Racing conditions -> https://en.wikipedia.org/wiki/Race_condition

 

You can have a single racing condition or many of them simultaneously, not sure what your point is.

 

And in the quoted NI example, the problem, which supposedly leads to said racing conditions, is that the decision about when to stop reading from the VISA input buffer (or rather when to read it a single time) is based entirely on a fixed amount of time, before blindly reading all the data currently in it and hoping that everything you are waiting for is already in the buffer. How you read that data from the VISA input buffer is completely irrelevant to the problem at hand. You can read the data without using Bytes at Port at all, which makes using that example as some sort of scapegoat for it completely beside the point. The problem is the fixed time constant, not how you read the data.

 

And I would like to repeat, yet again, that what you, and he, are talking about is using termination characters as a criterion to stop reading from the input buffer. In my case the stop criterion has absolutely nothing to do with termination characters - the issue isn't choosing the proper termination character, it's that my stop criterion isn't a termination character in the first place. It is an arbitrary string, which sometimes arrives 3 minutes later, from an input buffer that is constantly being filled with streaming data I need to display anyway. Either I'm not making clear how UARTs work when it comes to embedded systems booting Linux as an OS over a UART interface, or there is a very strange determination to misunderstand me on purpose...

"Good judgment comes from experience; experience comes from bad judgment." Frederick Brooks
Message 29 of 52

@Dobrinov wrote:

Another point - you really seem to misunderstand what "racing conditions" mean - if, in this NI example, I set my waiting time too short, and repeatedly don't get the whole reply of a system, this has NOTHING to do with racing conditions. Like, at all.


It is a race condition.  You are just trying to limit the term to software.  In the case of serial communications, using the NI example, the instrument is racing to get the full message out and read by the computer's UART chip before the Bytes At Port is executed by your software.

 


@Dobrinov wrote:
So, instead of telling people not to use Bytes at Port (with a thousand exclamation points), tell them how to do it properly.

That is just my macro.  If you actually find my posts where I use that, I go through the full description of how to use it, or not, based on the situation of the thread.


GCentral
There are only two ways to tell somebody thanks: Kudos and Marked Solutions
Unofficial Forum Rules and Guidelines
"Not that we are sufficient in ourselves to claim anything as coming from us, but our sufficiency is from God" - 2 Corinthians 3:5
Message 30 of 52