LabVIEW


TCP/IP Maximum Transfer Rate

I'm using the example VI bundled with LabVIEW 7.1 - "Multiple Connections - Server.vi"...
 
I have modified the program to send a longer string containing some other data across my Ethernet link, and I have been able to pick it up OK using the sister program "Multiple Connections - Client.vi". However, when I try to remove the "wait until next ms" timer from the upper loop of the server VI, my TCP/IP transfer rate drops to zero (the client stops receiving data). I'm not sending much (at most 480 kbps, typically MUCH less), so why does running the program this fast appear to choke the TCP/IP connection? With the timer set to about 10 ms instead of the default 1000 ms everything works OK, but because my source data is generated continuously I am now missing parts of it. Is this timer just slowing the program down to ease buffering, or is there a way I can remove it?
 
Also, is it possible to send a data type other than a string using "TCP Write"? At the moment I'm programmatically converting my data from a 16-bit binary word into a string (and hence into the equivalent ASCII characters) to send it across the Ether. Is there a better way that avoids this conversion? The same applies to the "UDP Write" function I am using elsewhere in my program.
 
Thanks
Message 1 of 2


russelldav wrote:
I'm using the example VI bundled with LabVIEW 7.1 - "Multiple Connections - Server.vi"... [snip]



You can't remove every timing element from a free-running LabVIEW loop: such a loop tends to monopolize the CPU by running at maximum speed, leaving little for the other loop(s) in your program (in this case, probably the receiving client VI running on the same computer). So there needs to be some Wait functionality in that loop. Also note that running the loop 1000 times and sending a little data each time causes much more CPU overhead than running the loop 10 times and sending 100 times as much data per iteration. So if the total amount of data matters to you, aggregate the data beforehand and send it in one go, instead of sending it in many small chunks.
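The aggregation idea can be sketched outside LabVIEW as well. The snippet below is a minimal Python illustration, not LabVIEW code: `make_sample` and `SAMPLES_PER_WRITE` are invented names, and a local `socketpair` stands in for a real TCP connection. The point is that one write of 200 bytes costs far less per-call overhead than 100 writes of 2 bytes each.

```python
import socket

# Hypothetical sketch: aggregating many small samples into one write.
# make_sample stands in for producing one 16-bit data word as raw bytes.
def make_sample(i):
    return i.to_bytes(2, "big")

SAMPLES_PER_WRITE = 100  # illustrative batch size, not from the VI

# A connected local socket pair stands in for TCP Open / TCP Listen.
sender, receiver = socket.socketpair()

# Instead of 100 writes of 2 bytes, do a single write of 200 bytes.
payload = b"".join(make_sample(i) for i in range(SAMPLES_PER_WRITE))
sender.sendall(payload)

data = receiver.recv(4096)
print(len(data))  # 200

sender.close()
receiver.close()
```

The same batching applies directly to TCP Write in LabVIEW: build the whole string first, then call TCP Write once per batch.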
 
If you have a protocol that requires small data packets to be sent back and forth, you may have to look into the Nagle algorithm too. This is an algorithm in the socket library that tries to aggregate small packets automatically: as long as the data does not fill an entire TCP/IP frame, it is held back and only sent after either 100 to 150 ms have elapsed or the frame can be completely filled. It is possible to disable the Nagle algorithm on a TCP/IP socket connection, but that is not a built-in LabVIEW feature. On the NI site there is a VI library that allows you to disable that socket option for a specific TCP/IP or UDP connection (the search feature will find it for you).
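The socket option involved is the standard `TCP_NODELAY` flag. As a minimal sketch in Python (not LabVIEW) of what disabling Nagle looks like at the socket level:

```python
import socket

# Disable the Nagle algorithm on a TCP socket by setting the standard
# TCP_NODELAY option; a nonzero value means Nagle is turned off.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

# Read the option back to confirm it is set.
nodelay = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
print(nodelay)

sock.close()
```

A VI library that wraps this option would be doing the equivalent `setsockopt` call on the socket handle behind LabVIEW's TCP connection refnum.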
 
Last but not least, the TCP/IP VIs accept strings as input and return strings too. This is not a limitation, since a string is nothing more than an array of bytes. Instead of converting numbers into strings, you can simply Flatten them to strings. Flattening does not change the binary representation in memory; it mainly changes how LabVIEW interprets that memory area. As far as the TCP/IP VIs are concerned, they simply send the binary bytes of the original data, so the conversion is avoided entirely. Of course, the receiver side needs to unflatten the data again in order to do something useful with it. But here too, unflattening does not really change the memory; it mostly changes the wire colour and, consequently, which LabVIEW nodes you can use to operate on that wire.
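Flatten To String has a close textual analogue in most languages: reinterpreting a number as its raw bytes. A small Python sketch of the same idea (the `>H` big-endian layout is my assumption, chosen to mirror LabVIEW's default big-endian flattening of a 16-bit word):

```python
import struct

# Flatten: reinterpret a 16-bit word as its two raw bytes.
word = 0x1234                       # example 16-bit value
wire = struct.pack(">H", word)      # big-endian, as LabVIEW flattens by default
assert wire == b"\x12\x34"          # the "string" is just the raw bytes

# Unflatten: the receiver reinterprets those bytes as a 16-bit word again.
(received,) = struct.unpack(">H", wire)
print(hex(received))                # 0x1234
```

No ASCII conversion happens at any point; two bytes go over the wire per 16-bit word, and both sides just agree on the byte order.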
 
Rolf Kalbermatter 


Message Edited by rolfk on 11-19-2007 11:27 AM
Rolf Kalbermatter  My Blog
DEMO, Electronic and Mechanical Support department, room 36.LB00.390
Message 2 of 2