
LabVIEW


How to get the number of bytes at an Ethernet port using TCP/IP?

I have data of variable size. I am receiving the data from another system running a program written in C.

How can I get the exact number of bytes arriving at the port before issuing the read, so that the byte count can be wired as the input to the TCP Read VI?

Message 1 of 19

The other system running the C app should prepend two bytes of length to the packet. That way your code can read a fixed size of two bytes, convert it to a count, and then read the rest.

 

The shipping TCP/IP examples use this technique.

 

Have fun,

 

Ben

Retired Senior Automation Systems Architect with Data Science Automation. LabVIEW Champion, Knight of NI.
Message 2 of 19

Ben wrote:

The other system running the C app should prepend two bytes of length to the packet. That way your code can read a fixed size of two bytes, convert it to a count, and then read the rest.

 

The shipping TCP/IP examples use this technique.

 

Have fun,

 

Ben


This is the preferred and traditional method for relaying this type of information. If you do this, though, I would recommend using a 32-bit value for the length. In addition, make sure that your C program uses network byte ordering; if it doesn't, you can end up with endianness issues in your data.

Message Edited by Mark Yedinak on 10-29-2009 10:46 AM


Mark Yedinak
Certified LabVIEW Architect
LabVIEW Champion

"Does anyone know where the love of God goes when the waves turn the minutes to hours?"
Wreck of the Edmund Fitzgerald - Gordon Lightfoot
Message 3 of 19

The incoming message contains a header, source, destination, data, and data size. Here the data size is variable.

The message contains the data size, but it is in the middle of the message, i.e. not at the beginning.

So before reading the message I have to get the message/data size at the port.

Message 4 of 19
Is the header you are talking about the actual data header, or a TCP or UDP header? If it is a header in the actual data, then you should be all set. The header should be ordered such that the source, destination, and size fields are all fixed lengths. Therefore you should be able to read the number of bytes equal to those fields, get the data length from that, and then complete your read based on the data size read from the header. If the size field of the header comes after the data, then that is a very poorly designed data header.


Message 5 of 19
I am talking about the actual data header. The data size comes before the data, but the data size varies from 0 to 4 bytes according to the command I give from the system where the C program resides, so the whole message size is not fixed every time.
Message 6 of 19

I understand that the data size may vary. However, are you also telling me that the header, excluding the data, also varies in size? If so, you will need to change this so that all of your headers are consistent in size and format. If that is not possible, then the first byte or bytes of the header must be a message ID that dictates the header format. Using that, you can then read the data length field.

 

If you can use a consistent header for all of your messages it should be defined something like the following:

 

struct data_header {
    int source;
    int destination;
    int data_length;
    char data[];    /* variable-length payload follows the fixed header */
};

 

As long as the data length field is in a consistent position, you would have your LabVIEW application read 12 bytes (using the above example header, with 4-byte ints) and pull the data length out of the last 4 bytes of the header. Once you have this number, do a second read for a number of bytes equal to the data length you just read.



Message 7 of 19

Thanks, I hope this will work.

I will let you know the result tomorrow.

Message 8 of 19

So many responses saying your question is wrong... typical of this site, and no decent answer after 5 years!

The approach I have used is a Call Library Function node:

 

int ioctlsocket(SOCKET s, long cmd, u_long *argp);

where cmd is the Windows-defined constant FIONREAD = 0x4004667F and *argp receives the number of bytes available to read.

 

The socket can be obtained using the TCP Get Raw Net Object.vi, which comes with LabVIEW (even as far back as version 7).

 

Good luck.

 

Message 9 of 19

@Philippe_RSA wrote:

So many responses saying your question is wrong... typical of this site, and no decent answer after 5 years!

The approach I have used is a Call Library Function node:

 

int ioctlsocket(SOCKET s, long cmd, u_long *argp);

where cmd is the Windows-defined constant FIONREAD = 0x4004667F and *argp receives the number of bytes available to read.

 

The socket can be obtained using the TCP Get Raw Net Object.vi, which comes with LabVIEW (even as far back as version 7).

 

Good luck.

 


A protocol requiring such a hack is IMHO very poorly designed. You should always have some way on the wire to determine the data stream size. If the data is fixed size, that is inherent to the protocol; if it is variable sized, there should be a fixed-size header or a known message-termination indication that can be used to determine how to read the rest of the message.

 

As a side note, I do consider the existence of VISA Bytes at Serial Port a big error, and that is most likely where this question originally came from. Using "bytes at port" to decode a protocol will ALWAYS lead to protocol errors sooner or later, and to code that is unnecessarily complicated by forcing the routine to fold the asynchronous reading of the "bytes at port" into the protocol decoding.

 

If a protocol can't be decoded with fixed-size reads, fixed-size reads followed by variable-size reads determined from information in the header, or a specific message-termination indication, then it is very badly flawed.

Rolf Kalbermatter
My Blog
Message 10 of 19