Instrument Control (GPIB, Serial, VISA, IVI)

serial comm character available

Typically, when writing code to receive serial data in assembly or C, a programmer's code would read the serial chip's status register and test the bit indicating that a character was available. Once a character was available, the code would read it. This would be repeated for a string of characters.
 
In my application:
I started with a long (200 ms) wait between writing a serial command string and reading back the response string. This was much too slow for "increment or decrement the value sent" communications with the external device. So I used the "Bytes at Port" property node in a while loop to determine when characters had been received. I figured that the value the node output would start at 0 and then increment for every character received. Then, with a 5 ms delay following the closing of that loop, I would read the complete character string, using a second "Bytes at Port" property node to tell the read VI how many characters to read. This sped things up a lot, but it is rather clumsy. I need something more "slick".
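For illustration only, here is a rough Python/PyVISA analogue of that workaround; the resource name, command string, and delays are placeholders, not taken from the original post:

```python
import time
import pyvisa

rm = pyvisa.ResourceManager()
inst = rm.open_resource("ASRL3::INSTR")  # hypothetical serial resource name

inst.write("VAL:INC")                    # hypothetical command to the device

# Poll "Bytes at Port" (bytes_in_buffer) until the first character arrives
while inst.bytes_in_buffer == 0:
    time.sleep(0.001)

# Short settling delay so the rest of the response can arrive
time.sleep(0.005)

# Read exactly as many bytes as the port currently reports
response = inst.read_bytes(inst.bytes_in_buffer)
print(response)
```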
 
Question 1:
Is there a VI that will indicate when a character is available in the buffer?
 
Question 2:
What is the VISA / Wait on Event / Serial Character event used for, and how does it relate to the hardware?
Message 1 of 8
Wilber,

I hope I got you right.
I'd suggest starting with the Bytes at Port property. If it is > 0, there are as many bytes available as indicated by that property, and you can read them using VISA Read. Similar to what you would have done in C.
If you use a loop, use a shift register to hold the string received in previous iterations and append whatever was read in the current iteration. Use a proper delay whenever no bytes were available. And scan your string, or use any other means, to determine when to finish the loop.
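In text form, the loop described above might look roughly like this Python/PyVISA sketch; the terminator, delay, and resource name are assumptions:

```python
import time
import pyvisa

rm = pyvisa.ResourceManager()
inst = rm.open_resource("ASRL3::INSTR")  # hypothetical serial resource name

received = b""                           # plays the role of the shift register
while b"\n" not in received:             # scan for an assumed termination character
    n = inst.bytes_in_buffer             # "Bytes at Port"
    if n > 0:
        received += inst.read_bytes(n)   # append this iteration's bytes
    else:
        time.sleep(0.01)                 # proper delay when nothing arrived

message = received.split(b"\n", 1)[0]    # the complete response, terminator stripped
```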

To answer your Qs:
1. Use Bytes at Port.
2. I understand this event as a signal that a new character is available. But I do not see any reason to use events here to solve your problem.


HTH and greetings from Germany!
--
Uwe
Message 2 of 8

Lul

Thanks for responding, and greetings from the US.

Initially, I tried using the Serial Character Event to do the job because I suspected that it would produce an output telling me when a character was available. I could not get it to work, and the Help was too vague to be useful. So I came up with the idea of using the Bytes at Port property node to tell me when characters were in the buffer, instead of its intended use of telling the Read VI how many characters were in the buffer. It was a work-around. If, as you say, the Serial Character Event does indicate when a character is available, then it is the proper method to use for receiving serial data. It would be exactly the method I have used in other programming languages for the last 20+ years. Can you show me a working example of the Serial Character Event method?

Wilber

 

 

Message 3 of 8
Wilber,

No and no.
Bytes at Serial Port is not _just_ used to tell the Read VI how many bytes to read. That is just -one- possible application. You can also request (many) more bytes than are available at the moment. That way the Read VI will wait until it gets the requested number of bytes, a termination character is read (if enabled), or a timeout occurs. Seen from the other side: I have a lot of apps where I check if Bytes at Port > 0. If so, I read the available bytes and append them to what is already there. Otherwise I wait an appropriate number of ms. So I'd say _this_ is similar to checking the UART status byte.
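The behaviour described here (the read returns on byte count, termination character, or timeout) can be sketched in Python/PyVISA as follows; the resource name, query, and settings are only examples:

```python
import pyvisa

rm = pyvisa.ResourceManager()
inst = rm.open_resource("ASRL3::INSTR")  # hypothetical serial resource name
inst.timeout = 2000                      # ms: the read gives up after this
inst.read_termination = "\n"             # enable a termination character

inst.write("MEAS?")                      # hypothetical query
# read() blocks until the termination character is seen or the timeout
# expires (raising VisaIOError), with no polling loop in user code.
response = inst.read()
```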

IMHO the Serial Character Event is more like an interrupt. Having some code that waits for this event to be fired is like an interrupt service handler. Not too long ago I tried this out, but finally decided it was _not_ worth the effort. You open a can of worms in LabVIEW when you use this event unnecessarily (it adds lots of overhead, needs an extra execution system to separate the processes, needs some means to bring the data to your processing code [it's a bad idea to process data in an ISR handler!], etc.). All of this is already built into VISA, so why not use it?

So, no, Bytes at Serial Port has more functionality than you expected.
And, no, I have no working example for the Serial Character event.

Reading your statement I get the impression you are an experienced C programmer.
LabVIEW is different. It took me some time (months) to _think_ LabVIEW and to work _with_ it, not _against_ it. Search the examples (Help » Find Examples)!

Just my Euro 0.02!
Greetings from Germany!
--
Uwe
Message 4 of 8

Lul

More experienced at assembly language than C, but you are right about having to get used to thinking in LabVIEW. They are very different. It is nice to know that I'm advanced enough in LabVIEW to have come up with the solution of using Bytes at Port as a "status" without any outside help; it must mean that I am thinking more in LabVIEW. I really like it, too! Not only do I have a seat at work, but, being a student currently, I purchased the Student Edition for myself. I'm using IMAQ for USB with a CCD web cam, and I plan to use it with my school work. After all these years I'm going for my EE degree.

Thanks for your insight!

Wilber

 

 

 

Message 5 of 8

@Lul

I don't agree with you. Using Bytes at Port means that you are polling the serial interface every N ms. Take a look at:

http://forums.ni.com/ni/board/message?board.id=170&message.id=91386&query.id=118883#M91386

It seems that Madri (a student) made what Wilber was looking for, that is to say, an interrupt-driven version at a high level. We know that at a low level, recent microcontrollers manage the serial interface with interrupts, so the assembly code is written in the fashion Wilber described. I made a program that emulates a Modbus slave, so I have to choose a COM port and wait there for bytes to arrive. Do you think that using "Bytes at Port" performs better than having an interrupt (i.e. wait on notification, VISA wait on serial character, using queue structures, etc.)? IMHO, I don't think so!
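The VISA serial-character event itself is awkward to show in a portable snippet, but the "wait on notification / queue" style mentioned above can be approximated with a dedicated reader that blocks on the read and hands complete messages to a queue. A hypothetical Python/PyVISA sketch, with made-up resource name and settings:

```python
import queue
import threading
import pyvisa

def reader(inst, messages):
    """Block on VISA Read and push each complete message onto the queue."""
    while True:
        try:
            messages.put(inst.read())    # returns when the termination character arrives
        except pyvisa.errors.VisaIOError:
            pass                         # timeout: nothing received, keep waiting

rm = pyvisa.ResourceManager()
inst = rm.open_resource("ASRL3::INSTR")  # hypothetical serial resource name
inst.read_termination = "\n"
inst.timeout = 1000                      # ms

messages = queue.Queue()
threading.Thread(target=reader, args=(inst, messages), daemon=True).start()

# The consumer blocks here only when it actually wants a message,
# much like waiting on a notification instead of polling in a loop.
print(messages.get())
```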

Message 6 of 8
Slyfer,

maybe I was not clear enough in my wording...
You are right that using Bytes at Serial Port is a kind of polling. And using an event with an appropriate handling structure, as described in the referenced postings, is probably a better way to go. BUT:
* With serial communications you usually have quite slow transfer rates. Even when operating at 115200 or 230400 baud with 8N1, you get just about 11.25 to 22.5 kB/s. Not too much for modern CPUs, but it translates to several thousand single characters per second received by the serial port, which can create several thousand events. This easily becomes a pain when using the methods referenced above. Of course, at 9600 baud this is not a problem.
* Serial communication is often message based. You send a command to a device and it responds with an answer. These messages often have a predefined format (message-start and message-stop characters or similar mechanisms). I believe this was one reason to introduce the termination-character mechanism into VISA. This way VISA can handle all the required interrupt and event handling. And VISA does not necessarily run at the application level of the operating system; running at a level closer to the system allows better performance for such hardware-related tasks.
* As serial communication is rather slow and message based, it is usually not necessary to have real-time functionality. If one can live with a delay of a few ms, it is much easier either a) to read a large number of characters with a timeout and a termination character, or b) to read smaller chunks of data and append them in your code, checking for message termination and then processing. Method a) works only if the messages end with a proper termination character. If not (for example, when a message carries its length at a fixed position), one can use method b).
Modern serial (UART) port chips have internal buffers of several bytes up to kilobytes, and the system interrupt handler (which services the UART's hardware interrupt) copies received bytes to a system buffer and fires a system event; this enlarges the effective buffer to values of up to 32 kB. Even at 230 kbaud this is enough for more than 1.4 s of continuous data. That should be enough, even on Windows, not to miss data 😉

So if I check Bytes at Serial Port every 10 ms, I will get a couple of hundred bytes at most. I append those to my local buffer and check for complete messages. Complete messages are then removed from the buffer and sent to a parallel process for handling.
This reduces the coding effort as well as the CPU load, which was my intention.
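As a concrete sketch of that scheme in Python/PyVISA, with an assumed 10 ms polling interval, made-up start/stop framing characters, and a queue standing in for the parallel process:

```python
import time
import queue
import pyvisa

rm = pyvisa.ResourceManager()
inst = rm.open_resource("ASRL3::INSTR")   # hypothetical serial resource name

to_consumer = queue.Queue()               # stands in for the parallel process
local = b""                               # local accumulation buffer
STX, ETX = b"\x02", b"\x03"               # assumed message start/stop characters

while True:
    n = inst.bytes_in_buffer              # Bytes at Port, checked every 10 ms
    if n:
        local += inst.read_bytes(n)
    # Extract every complete STX...ETX message and hand it off
    while True:
        start = local.find(STX)
        end = local.find(ETX, start + 1) if start >= 0 else -1
        if start < 0 or end < 0:
            break
        to_consumer.put(local[start + 1:end])  # payload without framing characters
        local = local[end + 1:]
    time.sleep(0.01)
```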

Just my Euro 0.02!
Greetings from Germany!
--
Uwe
Message 7 of 8