TCP/IP communication memory full

 


@Ben wrote:

 

But now back to your situation and the issue shown above...

 

How can we get a bad packet through TCP/IP since it guarantees delivery intact?

 


Very easily. I had a library much like the STM functionality, and in it I had messed something up when receiving the data, so that I ended up interpreting the data about one byte off from how it was sent. Bingo!

 

Rolf Kalbermatter  My Blog
DEMO, Electronic and Mechanical Support department, room 36.LB00.390
Message 21 of 25
(1,787 Views)

I will throw my hat into this discussion a little late in the game, but I have seen nearly this exact scenario using TCP a number of times before, and Rolf hit the reason square on the head. The TCP read gets a bad packet or otherwise corrupted data stream and uses the first element of the bad packet as the number of bytes to allocate for the next read. If this number happens to be some really huge value... BLAMO!... the "Not enough memory..." dialog shows up.
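For illustration only, here is a minimal Python sketch of that length-prefixed read pattern and the failure mode it creates. The thread is about the LabVIEW TCP VIs, so this is not the actual STM code; the 4-byte big-endian header, the sanity limit, and the helper names are all assumptions made for the example.

```python
# Sketch of a length-prefixed TCP read and the "huge allocation" failure mode.
# Illustrative only -- not the LabVIEW/STM implementation discussed in the thread.
import socket
import struct

MAX_REASONABLE_SIZE = 10 * 1024 * 1024  # assumed sanity limit; tune for your protocol

def recv_exact(sock: socket.socket, n: int) -> bytes:
    """Keep calling recv() until exactly n bytes have arrived."""
    buf = bytearray()
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed the connection")
        buf.extend(chunk)
    return bytes(buf)

def read_message(sock: socket.socket) -> bytes:
    # Read a 4-byte big-endian length header, then that many payload bytes.
    # If the reader ever gets out of step with the sender, these 4 bytes are
    # really payload data, the decoded length can be enormous, and the next
    # read/allocation blows up with "not enough memory".
    (length,) = struct.unpack(">I", recv_exact(sock, 4))
    if length > MAX_REASONABLE_SIZE:
        raise ValueError(f"implausible message length {length}; stream is out of sync")
    return recv_exact(sock, length)
```

The length check is exactly the guard this failure calls for: if the four bytes just read are really payload, the decoded size is usually absurd, and it is better to error out (or re-sync) than to attempt the allocation.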

 

I agree with your point, Ben, that TCP does guarantee delivery, but nonetheless I have seen this issue many times in the past. I can't necessarily explain why it occurs, but I know that it does happen on some networks and not on others, and not all of the time.

 

I am not sure if the original poster has control over the TCP stream at the server end, but our solution was to change the mode of the TCP VIs from standard to CRLF terminated. We obviously needed to alter the server-end TCP write code to append a CRLF terminator to each packet, as well as base64 encode/decode the data stream we were sending to get rid of any potential CRLFs in the data payload. We found that this had two results. First, it completely solved the bad #bytes problem, because if a packet was corrupted the TCP read kept reading until the end of the next packet. Granted, that read was 2X as large as it should have been, but it never tried to allocate all of the available memory due to some huge number on the # bytes input. The second benefit was that if we did get a corrupt packet, the TCP link would automatically fall back into sync and the data would begin streaming again.
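Here is a rough Python sketch of that CRLF-terminated, base64-encoded framing. The original fix used the LabVIEW TCP VIs in CRLF mode; the helper names and the byte-at-a-time read below are made up for the example.

```python
# Sketch of CRLF-delimited framing with a base64-encoded payload.
# Because base64 output never contains CR or LF, the terminator can only mark
# a frame boundary, so a corrupted frame costs at most one message before the
# reader is back in sync.
import base64
import socket

def send_frame(sock: socket.socket, payload: bytes) -> None:
    sock.sendall(base64.b64encode(payload) + b"\r\n")

def recv_frame(sock: socket.socket) -> bytes:
    line = bytearray()
    while not line.endswith(b"\r\n"):
        chunk = sock.recv(1)  # simple but slow; a buffered reader is better in practice
        if not chunk:
            raise ConnectionError("peer closed the connection")
        line.extend(chunk)
    try:
        return base64.b64decode(bytes(line[:-2]), validate=True)
    except ValueError:
        # Corrupt frame: discard it; the caller simply reads the next frame,
        # which is how the link falls back into sync.
        raise ValueError("corrupt frame discarded; stream re-syncs at next CRLF")
```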

 

Unfortunately I don't understand why we were getting corrupt packets, however I can assure you that it was happening from time to time.

Greg Sussman
Sr Business Manager A/D/G BU
Message 22 of 25
(1,784 Views)

Hi All,

 

I am facing the same problem in my project. I log an error file, shown below.

 

Date        Time          Bytes        Error

7/25/2011   11:28:56 AM   1793724      0
7/25/2011   11:28:57 AM   1793724      0
7/25/2011   11:28:58 AM   1793724      0
7/25/2011   11:28:59 AM   1793724      0
7/25/2011   11:29:02 AM   286185923    2

 

As you can see in the above log, I am continuously getting 1793724 bytes per read from the PXI. Suddenly I missed 1-2 seconds of data and then got a byte count of 286185923. When I tried to read that many bytes over TCP/IP, I got two errors:

 

1. Windows error "Not enough memory to complete this operation"

2. LabVIEW error 2, which indicates memory is full.

 

I am not able to understand where all of these junk bytes are suddenly coming from.

Waiting for your responses.

 

Thanks and Regards
Himanshu Goyal | LabVIEW Engineer - Power System Automation
Message 23 of 25
(1,741 Views)

If your PXI system happens to be an RT-based system, then what may be happening is that the CPU is getting loaded to 100%. This causes the TCP process on the LabVIEW side to be preempted until higher-priority tasks can complete. While this is the case, additional TCP data may have come in and filled up the TCP memory, causing a loss of data in the TCP buffer. When you read it out, you are now offset by some bytes and get a request for a huge memory allocation because part of the data payload is read as the payload size.

 

Because RT systems have a tendency to do this, you need to be sure that you are not using any more than about 75% of the CPU during steady-state operations. This allows for some surge capacity if the CPU gets busy, without causing any of the threads to wait on the CPU for longer than normal.

 

The only way I have found to deal with this is to lower the overall CPU utilization or to convert the TCP communication to a delimited format that will allow it to re-sync if you do have a loss of data somewhere. 

Greg Sussman
Sr Business Manager A/D/G BU
Message 24 of 25
(1,729 Views)

@gsussman wrote:

If your PXI system happens to be an RT-based system, then what may be happening is that the CPU is getting loaded to 100%. This causes the TCP process on the LabVIEW side to be preempted until higher-priority tasks can complete. While this is the case, additional TCP data may have come in and filled up the TCP memory, causing a loss of data in the TCP buffer. When you read it out, you are now offset by some bytes and get a request for a huge memory allocation because part of the data payload is read as the payload size.


That's not how TCP works, regardless of whether you are on an RT system or not. TCP streams will never lose data in the middle of a stream. If the receiving side is not receiving fast enough, it will eventually cause the sender's window to go to zero so that no more data can be sent (if the TCP send is set to block, it will block until it can send). The sender's application could choose to drop new data at that point, but it would be that application's decision to do so. If you control the TCP socket on both ends, then you can be guaranteed (short of memory corruption or a bug in the TCP stack) that the data arrives intact and in sequence if you get data. If you are seeing this behavior, I strongly suspect it is because your code on the sending side is dropping data when it hits a timeout.
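To make that distinction concrete, here is a hedged Python sketch of the sending side; the function name and the timeout policy are assumptions made for illustration, not anyone's actual code. TCP itself never drops bytes mid-stream, but the sending application may decide to skip a message when its send blocks or times out.

```python
# Illustrative sketch: data "loss" on a slow link is an application decision.
# If the receiver falls behind, sendall() blocks (or times out); TCP itself
# delivers whatever it accepted, intact and in order.
import socket

def send_with_policy(sock: socket.socket, payload: bytes, timeout_s: float = 2.0) -> bool:
    sock.settimeout(timeout_s)
    try:
        sock.sendall(payload)  # blocks while the receiver's window is closed
        return True
    except socket.timeout:
        # TCP did not lose anything -- the application is choosing to skip
        # this message rather than wait, which is where "missing" data
        # usually comes from on a heavily loaded RT target.
        return False
```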

 

Eric

 

Message 25 of 25
(1,714 Views)