05-27-2015 10:07 AM
I've been using Bytes at Port, based on NI's basic examples, in my application for years. It works flawlessly. It may not be "needed" in most cases, but it doesn't do any harm either, unless I'm missing something?
05-27-2015 10:34 AM
@Edjsch wrote:
Dennis - Can you point to some examples/tutorials?
You'll notice my VI does use the LF termination character. Wiring Bytes at Port to byte count is okay because, from the LabVIEW 2009 Help
(http://zone.ni.com/reference/en-XX/help/371361F-01/lvinstio/visa_read/): "The VISA Read function returns less than the number of bytes specified by byte count if the function reaches a termination character (mine is set to LF), or it reaches the end of the buffer, or a timeout occurs." (My timeout is set to 1 sec.) So this is the best of both worlds: if the serial string being read does not have a termination character, it will keep reading as long as data is being received, within the timeout period.
Ed
Actually, your VI does NOT use the termination character. It sets it, but it doesn't use it. Just eliminate the Bytes at Port and the wait. Then you can just leave the port open until you're done with it.
Set up as Dennis had suggested, you do automatically what you're trying to do manually - without polling the serial port and looping unnecessarily (and using up CPU cycles) - because you have now changed the structure from polling to waiting on an event. When the event is triggered - i.e., the VISA Read encounters the termination character - it outputs the data and the loop continues. It will also continue if there is a timeout, which triggers a timeout error, or if the buffer becomes full, in which case it returns a warning that the number of bytes read equals the buffer size and there might be more data to read. (That warning is actually quite useful, because if you are using the termination character it almost always means there was a problem with communication - the exception being if you set the requested byte count to a number that was actually smaller than the largest chunk of data to be read.)
You no longer need the wait because the loop will no longer be spinning out of control.
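For anyone following along in text form, here is a minimal sketch of that setup written with Python and PyVISA, purely as an illustration of the flow (this is my assumption of an equivalent configuration, not the actual VI; the resource name, baud rate, and command string are placeholders):

import pyvisa

rm = pyvisa.ResourceManager()
inst = rm.open_resource("ASRL3::INSTR")  # hypothetical serial port
inst.baud_rate = 115200
inst.read_termination = "\n"             # LF termination character
inst.write_termination = "\r"
inst.timeout = 1000                      # 1 second, in milliseconds

inst.write("?")                          # send the request
reply = inst.read()                      # blocks until the LF arrives; a timeout raises an error
# No Bytes at Port polling and no wait: the blocking read is the "event".
# Leave the port open for the rest of the session, then inst.close() when done.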
05-27-2015 10:48 AM
My mantra for serial communications is - with few exceptions - if you need a hard coded wait, you aren't understanding the protocol.
05-27-2015 10:54 AM
My system is for a "request / reply" protocol, not like HyperTerminal, which basically listens for anything being received, requested or not. (Although I have modes in my application where it just listens, but that's a special case.)
Actually, my VI does use the termination character. Because of the delay after writing the request string ("String to Write", usually a "?\r") and the fast, known response time of the instruments I interface with, by the time I start reading, the buffer already has data in it. For short replies such as reading a value (about 12 bytes) at high baud rates (usually 115200), the LF has already been received by the time the read occurs, and the read function returns immediately.
I do get your point that I don't strictly need Bytes at Port, but it was in the original examples from NI, and I saw no reason not to use it. Furthermore, some of my commands retrieve a large, unknown amount of data, so what constant should I wire to Byte Count? What I do is keep looping as long as I'm receiving data.
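In text form, that "keep looping while data arrives" pattern might look roughly like the following Python/PyVISA sketch (the helper name and delay values are my own placeholders, not part of the actual VI):

import time

def read_while_receiving(inst, first_wait_s=0.1, poll_s=0.05):
    # Wait for the reply to start arriving (the delay after the "?\r" request).
    time.sleep(first_wait_s)
    chunks = []
    # Keep reading as long as the port still has data (the Bytes at Port idea).
    while inst.bytes_in_buffer:
        chunks.append(inst.read_bytes(inst.bytes_in_buffer))
        time.sleep(poll_s)  # give the instrument time to send more
    return b"".join(chunks)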
05-27-2015 10:57 AM
The wait also serves as the required yield to the OS in the While loop; otherwise CPU usage skyrockets.
05-27-2015 11:07 AM
You are correct. If the response time can be long and the amount of data that will be received is known, your way is better. Just remember to set the timeout in the VISA Configure Serial Port VI to a large enough number.
I did check in my VI: if I set the delay to 0 and wire a large number (1000) to the byte count, the VISA Read function does yield to the OS while it is waiting for the LF, so CPU usage is okay. What happens is that the data is retrieved from my serial device as fast as it can respond at the given baud rate. My application needs a user-defined sampling rate, which the delay provides.
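A rough Python/PyVISA sketch of that variant, again only as an illustration (the 5-second timeout, sample count, and period are made-up values, and "inst" is an already-configured serial resource as in the earlier sketch):

import time

def sample_loop(inst, n_samples=10, sample_period_s=0.5):
    inst.timeout = 5000                   # generous timeout, in milliseconds
    readings = []
    for _ in range(n_samples):
        readings.append(inst.query("?"))  # write the request, read until the LF
        time.sleep(sample_period_s)       # user-defined sampling rate
    return readings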
05-27-2015 11:08 AM - edited 05-27-2015 11:22 AM
@Edjsch wrote:
The wait also serves as the required yield to the OS in the While loop; otherwise CPU usage skyrockets.
As I mentioned before, using Dennis' method changes the structure from polling to event-driven. Your loops will no longer be spinning out of control. No need for the wait.
[edit] I guess I'm a little late, huh? [/edit]