LabVIEW


Hi GerdW

 

I'd like to ask a basic question...

Why do you not recommend the Bytes at Port node?

I've used it a lot in the past, something like this example:

https://forums.ni.com/t5/image/serverpage/image-id/479i28695C616103B7C7?v=mpbl-1

Write the command into the RS-232 buffer, wait for a while (a delay that had to be tested beforehand to make sure it was long enough), and then read the RS-232 buffer.

I'm a little worried that if the byte count is not fixed, I can't use the method of specifying the byte count.

My understanding of serial communication may not be solid, so I hope I can strengthen it here.

 

By the way, how do you generate the word "Deprecate" in the Set Cell Value node of a table?

 

Message 21 of 50

@William1225 wrote:

Why do you not recommend the Bytes at Port node?


Where to begin...

 

I have only legitimately run into one situation where Bytes at Port seemed like the way to go.  Looking back, I could have made that code A LOT simpler without it... but I digress.

 

Any good serial instrument has a defined message protocol.  If the message uses ASCII data (human-readable numbers, etc.) and a termination character, then Bytes at Port is completely worthless.  You just set up the termination character (part of Configure Serial Port) and then tell the VISA Read to read more bytes than you will ever receive in a single message.  The VISA Read will stop when it finds the termination character.
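
For anyone following along in text rather than G, here is roughly the same idea as a minimal sketch in Python with PyVISA. The resource name ASRL1::INSTR, the baud rate, and the *IDN? command are placeholder assumptions, not anything from this thread:

import pyvisa

rm = pyvisa.ResourceManager()
inst = rm.open_resource("ASRL1::INSTR")  # placeholder serial resource
inst.baud_rate = 9600
inst.read_termination = "\n"   # the termination character ends every read
inst.write_termination = "\n"
inst.timeout = 10000           # ms; only reached when something is wrong

inst.write("*IDN?")   # placeholder query command
reply = inst.read()   # returns as soon as the terminator arrives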

 

For binary messages, the protocol usually has fixed-length messages.  So you just tell the VISA Read to read that many bytes and you are done.
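
In the same PyVISA sketch, a fixed-length binary read is one call; the 12-byte message length here is a made-up example value:

inst.read_termination = None   # no terminator for binary framing
raw = inst.read_bytes(12)      # blocks until exactly 12 bytes arrive (or timeout)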

 


GCentral
There are only two ways to tell somebody thanks: Kudos and Marked Solutions
Unofficial Forum Rules and Guidelines
"Not that we are sufficient in ourselves to claim anything as coming from us, but our sufficiency is from God" - 2 Corinthians 3:5
Message 22 of 50

Hi crossrulz,

 

Thanks for your instant reply.

Maybe I'm overthinking this...

In many cases I communicate with data loggers or spectrum analyzers, and the data comes back in the form "3.72343, 2.343298, 1.22166, ...", depending on the channels I selected.

I felt it was hard to estimate the maximum length I would receive, so instead I tried to find out how long I had to wait to receive the whole message.

I'd like to know the pros and cons of these two methods:

1. Set the correct termination character, estimate the maximum length of the data, and pass that to the VISA Read.

2. Estimate the maximum time the whole message takes, wait that long, and then read with Bytes at Port.

I suspect the wait over-estimates the timing and wastes some time, so method 1 is better since it finishes as soon as the termination character arrives, and it saves one node as well. Is that right?

I still haven't figured out what GerdW meant by "the Wrong Command".

These might be small questions, but the answers help me a lot. Thank you :)

 

 

Message 23 of 50

@William1225 wrote:

In many cases I communicate with data loggers or spectrum analyzers, and the data comes back in the form "3.72343, 2.343298, 1.22166, ...", depending on the channels I selected.

I felt it was hard to estimate the maximum length I would receive, so instead I tried to find out how long I had to wait to receive the whole message.


If you do not know how much data is coming, then how do you know how long to wait?  With a wait and Bytes at Port you either have a race condition (you may not get all of the data) or you slow your program down more than necessary.  Besides, you can set the bytes to read to be really large; I often see 4096 bytes in instrument drivers.  But with the termination character, the read still only takes as long as is needed to get your single message.
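
Continuing the PyVISA sketch from above: the read there does not even need a byte count; with the terminator set, a comma-separated reply like the one described above comes back whole, however long it is. MEAS? is a placeholder command:

reply = inst.query("MEAS?")    # write, then read until the "\n" terminator
values = [float(v) for v in reply.split(",")]   # e.g. [3.72343, 2.343298, 1.22166]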


Message 24 of 50
You DON'T estimate any byte length when there is a terminator. You simply set it to some arbitrarily large number. You should look at some of the existing instrument drivers to see how simple it is with a termination character.
Message 25 of 50

Exactly; perhaps I just thought it was easier to test the timing than to count the message bytes.

Now I have to change my habit and use the byte count instead of Bytes at Port.

I can see how using a wait and the Bytes at Port node slows the program down.

How does the race condition happen when using a wait and Bytes at Port?

I've seen some drivers indeed set the count to 4096. Thanks for your suggestions; they helped me make up my mind to get rid of Bytes at Port.

Message 26 of 50

Hi Dennis_Knutson,

 

Yes, but it was hard for me to pick an "arbitrarily" large number; I was often worried it would not be enough.

That was contradictory, since I believed an "arbitrarily" long wait was safer...

Anyway, I should pay more attention to how much data I might receive (how many channels, how many digits per value in a channel, etc.).

Then I could set the large number safely without underestimating. Is that right?

Thanks for your reply; I'll switch to using the byte count and termination character instead.

 

Message 27 of 50

@William1225 wrote:

Hi Dennis_Knutson,

 

Yes, but it was hard for me to pick an "arbitrarily" large number; I was often worried it would not be enough.

That was contradictory, since I believed an "arbitrarily" long wait was safer...

Anyway, I should pay more attention to how much data I might receive (how many channels, how many digits per value in a channel, etc.).

Then I could set the large number safely without underestimating. Is that right?

Thanks for your reply; I'll switch to using the byte count and termination character instead.

 


If I recall correctly (which is somewhat dubious these days), the VISA Read doesn't actually preallocate the memory, so even setting it to something like 5000 isn't going to hurt anything.

 

Could someone clarify that? I really don't know if I am remembering this correctly.

Bill
CLD
(Mid-Level minion.)
My support system ensures that I don't look totally incompetent.
Proud to say that I've progressed beyond knowing just enough to be dangerous. I now know enough to know that I have no clue about anything at all.
Humble author of the CLAD Nugget.
Message 28 of 50

@William1225 wrote:

How does the race condition happen when using a wait and Bytes at Port?


Say you have a message that is 100 bytes long.  You wait 50ms and then do the read.  But since the instrument is slow, you only get 75 bytes.  That is the race condition: you did not wait long enough and therefore did not get all of the message.
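
That fragile pattern, sketched in the same PyVISA terms; bytes_in_buffer is the "Bytes at Port" equivalent, and the 50 ms value is from the example above:

import time

time.sleep(0.05)              # arbitrary wait, hoping the reply is complete
n = inst.bytes_in_buffer      # "Bytes at Port": may be 75 of the 100 bytes
partial = inst.read_bytes(n)  # race: reads only what happens to have arrived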

 

So the logical thing to an engineer is to increase the wait to something like 100ms.  Ok, this time you got all the data.  But you wasted 25ms waiting for the data.  25ms does not sound like much, but in some of my test systems, that has added up to over an hour of test time.  And, for some reason, let's say the instrument has a hiccup and takes 150ms to send all of the data.  Oops.  You didn't get all of your message again!  Increase the wait again?  Sure, let's just make it 1 second to make sure we always get the message.  That will really increase your test time!

 

Back to just using the termination character: the only timing issue you have left is the Timeout, which defaults to 10 seconds, and a read should only ever take that long when something is terribly wrong.  As soon as the message is received, you are off and running.  No wasted time on arbitrary waits and no missing data in your message.  It is by far the best way to do serial communications (assuming the instrument's protocol supports it).


Message 29 of 50

@crossrulz wrote:

So the logical thing to an engineer is to increase the wait to something like 100ms.  Ok, this time you got all the data.  But you wasted 25ms waiting for the data.  25ms does not sound like much, but in some of my test systems, that has added up to over an hour of test time.  And, for some reason, let's say the instrument has a hiccup and takes 150ms to send all of the data.  Oops.  You didn't get all of your message again!  Increase the wait again?  Sure, let's just make it 1 second to make sure we always get the message.  That will really increase your test time!

 

Back to just using the termination character: the only timing issue you have left is the Timeout, which defaults to 10 seconds, and a read should only ever take that long when something is terribly wrong.  As soon as the message is received, you are off and running.  No wasted time on arbitrary waits and no missing data in your message.  It is by far the best way to do serial communications (assuming the instrument's protocol supports it).


This is kind of making me rethink my stance. But coming from working with the serial port directly rather than VISA (WinCE devices, yay!), I have used a method rather similar to Bytes at Port: the system reads the number of bytes at the port every 25 ms, up until the timeout value. If the byte count stays at 0, it keeps waiting toward the timeout; if some data has been received, it waits another 25 ms, and if nothing has changed in that 25 ms period, the subVI stops and passes the message out. In this version the read overruns by at most 50 ms.
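
A sketch of that quiet-period poll in PyVISA terms, assuming the inst serial session from earlier; the 25 ms interval and overall timeout are the values described above, and the helper name is made up:

import time

def read_until_quiet(inst, poll_s=0.025, timeout_s=2.0):
    """Poll bytes-at-port until the count stops growing, then read it all."""
    deadline = time.monotonic() + timeout_s
    last = 0
    while time.monotonic() < deadline:
        time.sleep(poll_s)
        n = inst.bytes_in_buffer
        if n and n == last:   # no new bytes in one poll period: message is done
            return inst.read_bytes(n)
        last = n
    raise TimeoutError("no complete message before timeout")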

 

Message 30 of 50