LabVIEW


How to check if data is ready on visa/serial


@Dobrinov wrote:

And I would like to repeat, yet again, that what you, and he, are talking about is using termination characters as a criterion to stop reading from the input buffer.


If that is the conclusion you drew from my presentation, you did not watch it.  I tried to be quite clear: use the message protocol to read entire messages and avoid partial messages.

 

For 99.9% of ASCII-based protocols, the termination character is the way to go, and VISA just makes your life very easy.
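In text form, the termination-character approach looks roughly like this. A minimal Python/PyVISA sketch, since a LabVIEW diagram can't be pasted here; the resource name and command are placeholders:

import pyvisa

rm = pyvisa.ResourceManager()
inst = rm.open_resource("ASRL1::INSTR")  # placeholder serial resource
inst.read_termination = "\n"   # VISA ends each read at the termchar
inst.write_termination = "\n"
inst.timeout = 2000            # ms; a generous upper bound, not a tuned delay

reply = inst.query("MEAS?")    # write, then read one complete message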

 

For raw/binary/hex-based protocols, you have to follow the message format very carefully.  This typically involves reading 1 byte at a time until the start character is read, then progressing from there until the full message is read and verified.
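A hypothetical sketch of that byte-hunting loop (Python/PyVISA; the start byte, frame length, and checksum rule are made up for illustration):

import pyvisa

START = 0x02       # assumed start-of-frame byte
FRAME_LEN = 16     # assumed total frame length, including the start byte

rm = pyvisa.ResourceManager()
inst = rm.open_resource("ASRL1::INSTR")  # placeholder
inst.timeout = 2000

def read_frame(inst):
    # Discard bytes one at a time until the start character appears
    while inst.read_bytes(1)[0] != START:
        pass
    body = inst.read_bytes(FRAME_LEN - 1)      # rest of the frame
    frame = bytes([START]) + body
    if sum(frame[:-1]) & 0xFF != frame[-1]:    # assumed trailing checksum byte
        raise ValueError("checksum mismatch - resync and retry")
    return frame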

 

So perhaps your terminals don't behave like the 99.9% of terminals I have used, where you can consistently pull out the lines of data and find the data you need.  I have seen some weird things out there, including a raw/binary/hex data block sent as the response to one specific command, so I would not be surprised if that is the case.


crossrulz
Message 31 of 52

@crossrulz wrote:

@Dobrinov wrote:

Another point: you really seem to misunderstand what a "race condition" means. If, in this NI example, I set my waiting time too short and repeatedly don't get the whole reply from a system, this has NOTHING to do with race conditions. Like, at all.


It is a race condition.  You are just trying to limit race conditions to software.  In the case of serial communications, using the NI example, the instrument is racing to get the full message out and into the computer's UART before Bytes At Port is executed by your software.

 


@Dobrinov wrote:
So, instead of telling people not to use Bytes at Port (with a thousand exclamation points), tell them how to do it properly.

That is just my macro.  If you actually look at the posts where I use it, I go through a full description of how to use it, or not, based on the situation in the thread.


Sorry, but that is NOT a race condition. It would be one, I suppose, if those two processes were racing in parallel in a non-deterministic fashion, but everything here is serialized: first you wait a certain amount of time, as determined by YOU, and THEN you read everything in the buffer. So it really isn't one. And Bytes at Port is, again, irrelevant, as it is all about the VISA Read. I can have the EXACT same situation without using Bytes at Port at all, i.e. read everything from the input buffer and call it a day before the whole system response is there, i.e. too soon.

Your actual issue is with the fixed amount of time you wait, not with the way you read the data. If you set the waiting time too short and then attempt to read the data, you will get an incomplete response every single time, i.e. the system behavior is deterministic, which by definition means it is not a race condition.

 

And yes, according to your use list, one should NOT use Bytes at Port for constantly streamed data... which I completely disagree with, and have over many posts already. EDIT: And to avoid nitpicking here, I mean using Bytes at Port to read everything in the input buffer, not just to check whether there is anything in it at all.

"Good judgment comes from experience; experience comes from bad judgment." Frederick Brooks
0 Kudos
Message 32 of 52
(1,568 Views)

@Dobrinov wrote:

Sorry, but that is NOT a race condition. It would be one, I suppose, if those two processes were racing in parallel in a non-deterministic fashion, but everything here is serialized: first you wait a certain amount of time, as determined by YOU, and THEN you read everything in the buffer. So it really isn't one.


Instrument communications are non-deterministic at a system level.  The race is between you and the instrument you are communicating with.  What if the instrument has a hiccup and takes 110 ms to give you a response instead of the normal 100 ms?  It lost the race, and you are therefore reading a partial message.  So then your fix is to increase the wait, which just wastes time 99% of the time, greatly and unnecessarily increasing test time.  This is all based on using Bytes At Port to determine the number of bytes to read.

 

Or how about an instrument that only sends out data when something changes?  This is the real-world situation where I developed most of the ideas in that presentation.  I had an instrument that sent out data whenever the angle of a nut changed.  I would constantly get partial messages because I was just reading the number of bytes at the port.  That is a race condition.  This is when I figured out to use Bytes At Port just to see whether a message has started.  I could then read an entire message (ASCII protocol, so using the termination character), and I never again saw a partial message, and I didn't have to jump through a ton of hoops to hold data, check for a complete message, etc.  VISA Read did all of the work for me once I could determine that a message had at least started to come in.

Just to complete the story: the same instrument constantly sent out the measured nut torque through another serial port.  For that one, I just did the normal VISA Read using the termination character and never saw an issue.  It had other issues getting bytes synced up at the UART level during initialization, but that is another story.
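For what it's worth, that pattern transliterates to something like this (a Python/PyVISA sketch of the idea, not the original LabVIEW code; names are placeholders):

import pyvisa

rm = pyvisa.ResourceManager()
inst = rm.open_resource("ASRL1::INSTR")  # placeholder
inst.read_termination = "\n"
inst.timeout = 2000

def poll_for_message(inst):
    # The byte count is used ONLY to detect that a message has started
    if inst.bytes_in_buffer == 0:
        return None          # nothing yet; go do other work and poll again
    return inst.read()       # reads up to the termchar: one complete message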


crossrulz
Message 33 of 52

@crossrulz wrote:

@Dobrinov wrote:

Sorry, but that is NOT a race condition. It would be one, I suppose, if those two processes were racing in parallel in a non-deterministic fashion, but everything here is serialized: first you wait a certain amount of time, as determined by YOU, and THEN you read everything in the buffer. So it really isn't one.


Instrument communications are non-deterministic at a system level.  The race is between you and the instrument you are communicating with.  What if the instrument has a hiccup and takes 110 ms to give you a response instead of the normal 100 ms?  It lost the race, and you are therefore reading a partial message.  So then your fix is to increase the wait, which just wastes time 99% of the time, greatly and unnecessarily increasing test time.  This is all based on using Bytes At Port to determine the number of bytes to read.

 

Or how about an instrument that only sends out data when something changes?  This is the real-world situation where I developed most of the ideas in that presentation.  I had an instrument that sent out data whenever the angle of a nut changed.  I would constantly get partial messages because I was just reading the number of bytes at the port.  That is a race condition.  This is when I figured out to use Bytes At Port just to see whether a message has started.  I could then read an entire message (ASCII protocol, so using the termination character), and I never again saw a partial message, and I didn't have to jump through a ton of hoops to hold data, check for a complete message, etc.  VISA Read did all of the work for me once I could determine that a message had at least started to come in.

Just to complete the story: the same instrument constantly sent out the measured nut torque through another serial port.  For that one, I just did the normal VISA Read using the termination character and never saw an issue.  It had other issues getting bytes synced up at the UART level during initialization, but that is another story.


Well, as far as I'm concerned, more or less everything that isn't running on an FPGA is non-deterministic at a system level, as everything else will have jitter. A system running, or using, non-deterministic processes can nonetheless be made deterministic. Even if you run your modules on a Real-Time target there will still be some amount of jitter; the question is not whether there is any, but how much.

 

Now to the problem at hand: there is no race, and thus no race condition. What you describe is effectively putting the system into a state where the inherent jitter makes its behaviour unpredictable. If I want to measure a signal so small that the noise contained in it is comparable in amplitude, that would not make the measurement an example of a "race condition". I do not "race" with the noise…

 

And in your case, you can make said system perfectly deterministic by increasing the wait time. Not doing so is your choice. There is nothing happening in parallel, as the whole procedure is explicitly serialized. Just because you don't want to make everything stable by waiting a sufficiently long time, i.e. longer than the time needed for the whole system response to arrive in the input buffer plus any realistic communication jitter, does not change the nature of the system. And it has absolutely nothing to do with Bytes at Port. How you choose to read the data from said input buffer is completely irrelevant here; you can do it without using Bytes at Port at all. Thus using it as some sort of scapegoat is… strange at best.

 

What you misread as a race condition is akin to having two processes: a producer that feeds data into a queue and a consumer that reads that data from it. Only in your case you insist on making the consumer attempt to read the data before it has arrived, and you call that a race condition, which by definition it is not. A race condition would be if you had a producer loop feeding data into said queue but two consumers reading from it simultaneously, at the same rate. Those two consumer loops would effectively "race" each other to grab as much of the queued data as possible, resulting in completely unpredictable, i.e. non-deterministic, system behaviour: you would have no means of telling which consumer loop will read what data, or how much of it.
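To make that concrete, here is a minimal Python sketch of the two-consumer race described above (my illustration, not anyone's production code):

import queue
import threading

q = queue.Queue()
for i in range(100):
    q.put(i)

def consume(got):
    # Each consumer drains the shared queue as fast as it can
    while True:
        try:
            got.append(q.get_nowait())
        except queue.Empty:
            return

a, b = [], []
ta = threading.Thread(target=consume, args=(a,))
tb = threading.Thread(target=consume, args=(b,))
ta.start(); tb.start()
ta.join(); tb.join()
print(len(a), len(b))   # the split between the two consumers varies per run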

 

You have effectively used a communication structure based on wait time as the stop criterion for receiving a response, insisted on doing it wrong, blamed Bytes at Port for your insufficient wait times, called it a race condition, which it is not, and now, based on that misunderstood personal experience, are telling people to stay as far away from Bytes at Port as possible, as if it were the digital personification of the devil himself. Leave poor Bytes at Port alone.

"Good judgment comes from experience; experience comes from bad judgment." Frederick Brooks
0 Kudos
Message 34 of 52
(1,535 Views)

Hi Dobrinov,

 


@Dobrinov wrote:

You have effectively used a communication structure based on wait time as the stop criterion for receiving a response, insisted on doing it wrong, blamed Bytes at Port for your insufficient wait times, called it a race condition, which it is not, and now, based on that misunderstood personal experience, are telling people to stay as far away from Bytes at Port as possible, as if it were the digital personification of the devil himself. Leave poor Bytes at Port alone.


I have to support crossrulz on that matter: it is a race condition!

Two processes access/influence the very same resource without any order of execution: BytesAtPort is reading the number of bytes in the buffer while the (external) device is trying to put some bytes into that buffer.

Most often BytesAtPort will win that race and report a wrong number of bytes, because the message has not been received completely yet. Placing some additional waits before BytesAtPort may bias that race, but it may still fail from time to time…

Best regards,
GerdW


Message 35 of 52

@GerdW wrote:

Hi Dobrinov,

 


@Dobrinov wrote:

You have effectively used a communication structure based on wait time as the stop criterion for receiving a response, insisted on doing it wrong, blamed Bytes at Port for your insufficient wait times, called it a race condition, which it is not, and now, based on that misunderstood personal experience, are telling people to stay as far away from Bytes at Port as possible, as if it were the digital personification of the devil himself. Leave poor Bytes at Port alone.


I have to support crossrulz on that matter: it is a race condition!

Two processes access/influence the very same resource without any order of execution: BytesAtPort is reading the number of bytes in the buffer while the (external) device is trying to put some bytes into that buffer.

Most often BytesAtPort will win that race and report a wrong number of bytes, because the message has not been received completely yet. Placing some additional waits before BytesAtPort may bias that race, but it may still fail from time to time…


Only there IS an order of execution, as in the standard NI example used as the basis for this... debacle. You send a command, wait a fixed amount of time, determined by YOU, and THEN you read everything currently in the VISA input buffer. It does not happen in parallel; it is serialized. Increase that wait time sufficiently and the system behaviour will be perfectly deterministic. The only way to make it unpredictable is if you deliberately lower that wait time until jitter comes into play. This has absolutely nothing to do with race conditions.
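For reference, the NI-example sequence under debate is essentially this (a Python/PyVISA transliteration for illustration; resource and command are placeholders):

import time
import pyvisa

rm = pyvisa.ResourceManager()
inst = rm.open_resource("ASRL1::INSTR")  # placeholder
inst.write_termination = "\n"

inst.write("MEAS?")
time.sleep(0.1)               # fixed wait, chosen by the developer
n = inst.bytes_in_buffer      # the Bytes at Port equivalent
reply = inst.read_bytes(n)    # reads whatever happens to be in the buffer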

"Good judgment comes from experience; experience comes from bad judgment." Frederick Brooks
0 Kudos
Message 36 of 52
(1,524 Views)

It definitely is a race condition in the very sense of the term. Just because you have a software degree and learned that two threads accessing the same resource asynchronously is called a race condition doesn't make that the ONLY way of having one. It's not limited to software threads accessing the same resource or variable!

 

You have the external device causing the bytes at Serial port to change and your software routine trying to interpret that value in an asynchronous manner. The only very loose synchronization gate is usually that your software sends a command that causes the device to respond. How fast it responds is up to the device: its efficient or not-so-efficient firmware, whether it is busy with other things at that moment, and potentially even the current moon phase. Your software has to guess when the device has finished sending its message unless there is a clear synchronization point. That synchronization point can be a lot of things, but the most reliable ones are:

 

- a designated termination character (for ASCII text messages)

- a pre-agreed number of bytes (for binary communication protocols)

 

Other much less reliable and more inefficient synchronization points could be:

 

- a certain amount of time without new data (this slows down your communication, since you have to wait that long before you can be sure you have received everything; and if the device can send unsolicited messages, it is impossible)

- a specific pattern in the response depending on the command you send (valid, but requires you to do a lot of command-specific parsing)

 

For the first two methods, VISA Read can do the work for you without any need for Bytes at Serial Port, other than to see that there is actually any data to receive (though you could also just let VISA Read time out and treat the timeout error not as a fatal error but simply as an indication that there is nothing to do yet). If you insist on the other methods, you have to do extra work.
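The timeout variant mentioned above might look like this (a Python/PyVISA sketch, under the assumption of an ASCII protocol with a termination character):

import pyvisa
from pyvisa import constants, errors

rm = pyvisa.ResourceManager()
inst = rm.open_resource("ASRL1::INSTR")  # placeholder
inst.read_termination = "\n"
inst.timeout = 50   # ms; short, because timing out is the normal idle case

def try_read(inst):
    try:
        return inst.read()   # one complete message, up to the termchar
    except errors.VisaIOError as e:
        if e.error_code == constants.VI_ERROR_TMO:
            return None      # not fatal: there is simply nothing to do yet
        raise                # any other VISA error is a real error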

 

And yes, VISA Read can simply time out; it doesn't block the rest of your LabVIEW software while a VISA Read is waiting for a timeout. It certainly doesn't block your program any more than your loop waiting for the specific message condition does.

 

LabVIEW is very capable of executing multiple VISA Reads in parallel on different VISA sessions while doing everything else in your program at the same time too: calculations, communicating with several dozen other devices or applications over VISA, TCP/IP, UDP, CAN, MXI, etc., updating the user interface, and playing a sound if you wish. The only things that force execution dependencies in LabVIEW are data flow and pushing something into the UI thread, by configuring a VI or Call Library Node to execute in that thread or by calling such functions in your code.

 

Of course you can make it deterministic by simply waiting long enough after sending the command before reading Bytes at Serial Port. BUT: you will have to use an extremely overestimated wait time to account for what you call "jitter". It may work now, but then you decide you need a 25 m RS-232 cable in the final installation rather than the 3 m cable you developed and tested the system with, and sh*t, the 115k baud communication doesn't work over that long a cable, so you have to lower it to 56k baud, and BAMM, the software delay in your communication driver is too small now. Or the manufacturer revises the firmware and suddenly the device needs not 5 ms to respond but 100 ms. Of course it can all be fixed, but having to do such fixes in the field is extremely expensive, and the extra delay slows down your communication unnecessarily even in situations where it is not needed.
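To put rough numbers on that (my arithmetic, assuming 8N1 framing, i.e. 10 bit times per byte, and a hypothetical 100-byte reply):

def transmit_ms(n_bytes, baud, bits_per_byte=10):
    # Time for the instrument to clock the reply out over the wire
    return 1000.0 * n_bytes * bits_per_byte / baud

print(transmit_ms(100, 115200))   # ~8.7 ms
print(transmit_ms(100, 57600))    # ~17.4 ms: a delay tuned near 10 ms now truncates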

 

A defined order of execution is NOT enough to avoid race conditions; you need clear synchronization points for that. A software delay is never such a synchronization point; it is at best a band-aid!

Rolf Kalbermatter
Message 37 of 52

@rolfk wrote:

A defined order of execution is NOT enough to avoid race conditions; you need clear synchronization points for that. A software delay is never such a synchronization point; it is at best a band-aid!


If anything deserved a kudo, it is that statement there.


crossrulz
Message 38 of 52

I've always maintained that, unless your manual says to insert a wait, needing waits to make your communications code work usually means you don't fully understand the protocol.

Bill
Message 39 of 52

@rolfk wrote:

It definitely is a race condition in the very sense of the term. Just because you have a software degree and learned that two threads accessing the same resource asynchronously is called a race condition doesn't make that the ONLY way of having one. It's not limited to software threads accessing the same resource or variable!


Which meaning would that be, exactly? All the definitions I know of explicitly make this NOT a race condition. Jitter, or randomness, by itself is not a race condition. And I have two degrees in engineering, thanks.

 


@rolfk wrote:

You have the external device causing the bytes at Serial port to change and your software routine trying to interpret that value in an asynchronous manner.

Err, nope, my software isn't interpreting anything. In this example it waits a set amount of time before reading everything in the input buffer, in a deterministic, serial manner as defined by me. I determine whether that software will a) always fail, b) work some of the time, or c) always work. There is literally no interpretation of any sort in the NI example used here. Just a simple sequence of write, wait, read.

 


@rolfk wrote:

The only very loose synchronization gate is usually that your software sends a command that causes the device to respond. How fast it responds is up to the device: its efficient or not-so-efficient firmware, whether it is busy with other things at that moment, and potentially even the current moon phase. Your software has to guess when the device has finished sending its message unless there is a clear synchronization point. That synchronization point can be a lot of things, but the most reliable ones are:

 

- a designated termination character (for ASCII text messages)

- a pre-agreed number of bytes (for binary communication protocols)

 

Other much less reliable and more inefficient synchronization points could be:

 

- a certain amount of time without new data (this slows down your communication, since you have to wait that long before you can be sure you have received everything; and if the device can send unsolicited messages, it is impossible)

- a specific pattern in the response depending on the command you send (valid, but requires you to do a lot of command-specific parsing)

 

For the first two methods, VISA Read can do the work for you without any need for Bytes at Serial Port, other than to see that there is actually any data to receive (though you could also just let VISA Read time out and treat the timeout error not as a fatal error but simply as an indication that there is nothing to do yet). If you insist on the other methods, you have to do extra work.


All of this has absolutely nothing to do with race conditions...

 


@rolfk wrote:

And yes, VISA Read can simply time out; it doesn't block the rest of your LabVIEW software while a VISA Read is waiting for a timeout. It certainly doesn't block your program any more than your loop waiting for the specific message condition does.


Let me get this straight: you are telling me that an explicitly blocking LabVIEW function, while waiting for a timeout, is not going to block the rest of my LabVIEW code? Like, seriously? And "my loop" can actually exit at any time before said timeout is reached. I am not aware of any way to interrupt a VISA Read that is waiting for its timeout, on the other hand. Even though it doesn't block anything...

 


@rolfk wrote:

LabVIEW is very capable of executing multiple VISA Reads in parallel on different VISA sessions while doing everything else in your program at the same time too: calculations, communicating with several dozen other devices or applications over VISA, TCP/IP, UDP, CAN, MXI, etc., updating the user interface, and playing a sound if you wish. The only things that force execution dependencies in LabVIEW are data flow and pushing something into the UI thread, by configuring a VI or Call Library Node to execute in that thread or by calling such functions in your code.

How is that relevant, again? Everything in LabVIEW can be done in parallel with the exception of the things that can't. 👍

 


@rolfk wrote:

Of course you can make it deterministic by simply waiting long enough after sending the command before reading Bytes at Serial Port. BUT: you will have to use an extremely overestimated wait time to account for what you call "jitter". It may work now, but then you decide you need a 25 m RS-232 cable in the final installation rather than the 3 m cable you developed and tested the system with, and sh*t, the 115k baud communication doesn't work over that long a cable, so you have to lower it to 56k baud, and BAMM, the software delay in your communication driver is too small now. Or the manufacturer revises the firmware and suddenly the device needs not 5 ms to respond but 100 ms. Of course it can all be fixed, but having to do such fixes in the field is extremely expensive, and the extra delay slows down your communication unnecessarily even in situations where it is not needed.

 

A defined order of execution is NOT enough to avoid race conditions; you need clear synchronization points for that. A software delay is never such a synchronization point; it is at best a band-aid!


So, you CAN make the system deterministic, but you... choose... not to, because then you'd have nothing to complain about? That's not how race conditions work.

 

So, effectively you are saying that despite having full control over the system, i.e. everything that happens is determined, explicitly, by you, you choose to make the system unstable just so you can complain about the randomness of the output result? And then you blame a completely irrelevant function, which on top of that has nothing to do with the introduced randomness in the first place. That system would be perfectly deterministic if the chosen wait time were either low enough or high enough, i.e. you either never get the full response, or always do. So the only way to get it only part of the time is to adjust the wait time just so that the inherent system randomness, i.e. the jitter, makes the outcome… drum roll… random. And then you call it a "race condition" as if you couldn't do anything about it. Basically: I can make it stable, but I won't...

 

Whether the system would remain stable if you changed the way the communication is implemented is also, again, completely irrelevant. That's like saying: yes, using a termination character would make everything work, but if the script running on the system starts using a different termination character tomorrow, then your design will be completely useless. How is that even an argument? One can make the system deterministic, but things can change, thus it doesn't count? And no, I don't need any explicit synchronization points to make the system deterministic, i.e. free of randomness, i.e. free of what you refer to as a "race condition", as you say so yourself...

"Good judgment comes from experience; experience comes from bad judgment." Frederick Brooks
0 Kudos
Message 40 of 52
(1,476 Views)