
TCP/IP communication memory full

Hi

 

Thanks for your quick reply. Here are the answers to your questions.

 

1) If this all ran fine, what is the throughput in BITS/second you need to keep this connection alive and well?

 

A1) The signal is sampled at 500 Bytes/s, which gives a data rate of about 1 Mbit/s. I can also confirm this by watching my network utilization, which is about 1% on a 1 Gbps Ethernet link.
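As a sanity check on figures like these, a byte rate converts to a bit rate by a factor of 8, and link utilization follows from the line speed. A quick sketch in Python (the numbers below are illustrative, not taken from the measurement above):

```python
# Convert a byte rate to a bit rate and to Ethernet link utilization.
# All values here are hypothetical, for illustration only.
bytes_per_s = 500_000                      # example stream rate
bits_per_s = bytes_per_s * 8               # 1 byte = 8 bits
link_bps = 1_000_000_000                   # 1 Gbps Ethernet
utilization = 100 * bits_per_s / link_bps  # percent of the link used
print(bits_per_s, utilization)  # 4000000 0.4
```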

 

2) What is your network topology and its speed, or have you tried a direct connection via a cross-over cable?

 

A2) I am using a 1 Gbps Ethernet switch between my PC and the PXI (you could call it a point-to-point connection).

 

3) Try commenting out the memory-hungry stuff or replacing it with constants (specifically, the Flatten To String looks like a good candidate to attack).

 

A3) It was suggested somewhere not to use the Type Cast function, since it takes more time for the conversion; that's why I was using Flatten To String in my VIs. If you have another suggestion for converting a byte string to 16-bit/8-bit data, and 16-bit/8-bit data back to a byte string, I can look into that as well.
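LabVIEW's Flatten To String / Unflatten From String have no direct text-code equivalent, but the byte-to-integer conversion being discussed can be sketched in Python with the standard `struct` module (the function names here are my own, and the big-endian signed 16-bit format is an assumption):

```python
import struct

# Rough analogue of "Unflatten From String": interpret a raw byte
# string as big-endian signed 16-bit samples.
def bytes_to_i16(raw: bytes) -> list:
    count = len(raw) // 2
    return list(struct.unpack(">%dh" % count, raw[:count * 2]))

# ...and the reverse, analogous to "Flatten To String".
def i16_to_bytes(samples: list) -> bytes:
    return struct.pack(">%dh" % len(samples), *samples)

packet = i16_to_bytes([100, -200, 300])
assert bytes_to_i16(packet) == [100, -200, 300]  # lossless round trip
```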

 

4) Windows is a strange animal to tame. Review the notes I posted in post #7 of this thread on shutting down or minimizing interruptions of your code.

 

A4) I have made changes to tame this animal as you suggested in the above-mentioned post.

 

5) Can you throttle that bottom loop? It looks like it spins as fast as the CPU will allow.

 

A5) No, the lowermost loop is not running very fast; it runs at the same speed as the FM demod loop. Yes, the uppermost data-acquisition loop runs at full speed, and there are iterations of this loop when no data is available (just a few bytes of a packet).

 

6) When the app is hanging, what is the top CPU user (under the Windows Task Manager)?

A6) Under the Task Manager no application is using much of the CPU. As I said, CPU usage is 0 or 1 %, which means no CPU-hungry application is running.

 

7) The "Out of Memory" error does not mean that all memory is filled, although full memory can result in that error message. What that message really means is "LabVIEW attempted to allocate a contiguous block of memory but failed." LabVIEW only works with contiguous memory blocks, so fragmented memory can result in that error. Memory fragmentation results from many repeated allocations. The first sign I look for while developing is my app using more and more memory (as shown by the Windows Task Manager). Ideally your app should wake up, allocate everything it needs, reuse it, and sit at a fixed memory size. There are many threads that talk about reusing memory.
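The allocate-once-and-reuse pattern Ben describes (wake up, allocate, then sit at a fixed memory size) can be illustrated outside LabVIEW; this Python sketch is only an analogue of the idea:

```python
# Allocate-once-and-reuse: the buffer is sized up front, and each
# "acquisition" writes into the same memory instead of building a
# fresh array per iteration (repeated allocations fragment the heap).
PACKET = 1024

buf = bytearray(PACKET)          # one allocation at startup

def process(chunk: bytes, out: bytearray) -> int:
    """Copy a chunk into the preallocated buffer, in place."""
    n = min(len(chunk), len(out))
    out[:n] = chunk[:n]          # reuse the existing block
    return n

total = 0
for _ in range(1000):            # steady state: buffer never grows
    total += process(b"\x00" * 512, buf)
print(total)  # 512000
```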

 

A7) I have increased the memory, but all I end up with is that it runs for 2 or 4 hours more, which means this is not a solution. What I wonder is: if my queue's status shows no elements available, then why is memory increasing? Who is consuming space in virtual memory? It seems data is accumulating somewhere in virtual memory (i.e., after being demodulated it remains in virtual memory; after being dequeued it is still in virtual memory).

 

8) For us to make progress on your app I urge you to break it down into smaller parts that each do a portion of the big goal, and analyze the hell out of each section to see which parts (plural) are acting up.

 

A8) I have done that as well. If I just receive the data (only the uppermost loop), it runs for about 8-10 hours; if I add the demod loop, it runs for 6-8 hours. I think you can see the pattern.

 

9) In the end I would not be surprised if the culprit ends up being getting the data on to and off of the wire, since that is a natural bottleneck that can't be avoided (you would not entertain switching to SCRAMNet reflective memory, would you? ... that was a joke...).

 

A9) Truthfully, I didn't quite get your point, but as far as I understand it: yes, I have a Foundry Networks 1 Gbps switch, and no, no reflective memory.

 

10) Again, that Flatten To String is screaming at me. It was the Flatten To String that prevented me from using the early non-polymorphic queues, since the Flatten To String was a pig.

 

A10) What holy cow can I use in place of that Flatten To String pig?

 

 Regards

0 Kudos
Message 11 of 25
(2,024 Views)

Re:A10

 

Use a constant and run for double whatever it took to hang before.

 

RE: CPU @ 1%

 

Not sure, but have you selected "Show Kernel Times" in the Task Manager?

 

I believe that when the OS is busy allocating memory it is in kernel mode, and those times will show up in red.

 

I think I read someone suggesting using the Trace Execution Toolkit to see which part is repeatedly demanding too much memory, but first you should really try to figure out which parts are demanding the memory. To this end it will require commenting out large portions of the code to see which part is the culprit.
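The same divide-and-conquer hunt for the allocating code path has text-language counterparts; as an illustration only (not a LabVIEW technique), Python's standard `tracemalloc` can diff heap snapshots to show which line keeps demanding memory. The two stage functions are hypothetical:

```python
import tracemalloc

# Hypothetical leaky and well-behaved stages, to show how snapshot
# diffs point at the code path that keeps demanding memory.
hoard = []

def leaky_stage():
    hoard.append(bytearray(100_000))   # keeps every buffer alive

def tidy_stage():
    _ = bytearray(100_000)             # buffer freed on each call

tracemalloc.start()
before = tracemalloc.take_snapshot()
for _ in range(50):
    leaky_stage()
    tidy_stage()
after = tracemalloc.take_snapshot()

# The biggest size_diff belongs to leaky_stage's allocation line.
top = after.compare_to(before, "lineno")[0]
print(top.size_diff > 1_000_000)
```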

 

I'll take another quick look at those diagrams and post if I see anything.

 

Please note: Nowhere in my previous reply or this one did I imply this was an easy task (I have been analyzing computer performance in one shape or form for about 25 years and each situation reveals a new twist).

 

Reader's Digest version:

 

Break it into parts and make sure each part does not eat up memory.

 

Ben

 

Retired Senior Automation Systems Architect with Data Science Automation LabVIEW Champion Knight of NI and Prepper LinkedIn Profile YouTube Channel
Message 12 of 25

 

Hi

 

Ben, I have come across a few more deductions I want to share, if you can look into them.

 

1. I thought the memory increase was due to my PC's virtual memory, since virtual memory, paging, and all that usually take more time than instructions executed directly from RAM. So I disabled my virtual memory and ran my client VI. This time I can see that my RAM usage increases at about 2.5 MB/min, and it keeps increasing until my RAM overflows, so the suspicion seemed justified.

 

2. Then I disabled one of my while loops (the resample-and-playback one) and ran the VI. This time the application runs for about 5 hours, and I can see that there is no increase in my RAM; it remains at 270 MB throughout the 5 hours. But after 5 hours it gives the same error.

 

What I have deduced is: it is not Windows that is giving the memory-full error, it is actually LabVIEW's memory that got full. Also, I have seen that the Windows error wording is different from this one: Windows says "not enough memory to perform this operation", while the LabVIEW error says "not enough memory to complete this operation".

 

Now what I am left with in my VI (the data-acquisition and FM-demod loops) is the enqueue/dequeue structure, the same Flatten To String, Index Array, and the FM demod block. So who do you think is the culprit now?

 

Regards

Message 13 of 25

Could you please show us "what the chess board looks like now" and we will kibitz if time permits.

 

Ben

Message 14 of 25

Hello

 

Board looks like this now.......

Message 15 of 25

 


@madd wrote:

 

Hi

 

What I have deduced is: it is not Windows that is giving the memory-full error, it is actually LabVIEW's memory that got full. Also, I have seen that the Windows error wording is different from this one: Windows says "not enough memory to perform this operation", while the LabVIEW error says "not enough memory to complete this operation".

 

Actually, this reasoning is somewhat flawed. LabVIEW has no way to allocate memory that Windows doesn't know about. So if Windows claims LabVIEW has only 270 MB allocated, that is the upper limit of what LabVIEW is really using. In reality it is usually quite a bit less, since Windows tracks memory per application based on a working set. This is the upper bound of memory the application can allocate without causing another virtual-memory reallocation. By default Windows trims the working set when you minimize an application's main window, and then you see a more realistic number for how much memory the application has really allocated.

 

So LabVIEW itself can't have allocated more memory than those 270 MB you see, and if LabVIEW then claims that it can't allocate any more memory, this is due to some other, LabVIEW-external cause. It could be a kernel driver LabVIEW is using, another process such as a service LabVIEW is communicating with for some reason, or something even more complicated.

Rolf Kalbermatter  My Blog
DEMO, Electronic and Mechanical Support department, room 36.LB00.390
Message 16 of 25

Hello

 

Rolf, when I say 270 MB I am not claiming this is the memory only the LabVIEW process is taking. I know there might be a hundred and thirty processes running in the background, or whatever. What I want to say is that my RAM usage does not increase beyond this value, even when the error has popped up in front of me. But the actual point is: who is generating this error, "not enough memory to complete this operation"? And what do you think about the different error wordings from Windows and LabVIEW? Above everything:

 

Do we have any solution to this memory problem?

Message 17 of 25

The error is displayed by LabVIEW after it asked the OS to provide a certain chunk of memory and the OS declined. So is it LabVIEW generating the error, or the OS, which declines the allocation?

 

It seems your Windows kernel somehow gets into trouble providing LabVIEW with a big enough chunk of memory. Since LabVIEW doesn't store arrays in small discontiguous chunks, it has to have one big contiguous chunk for the entire array at once, and memory fragmentation can then cause trouble. That said, unless you are running on an actually constrained system (< 1 GB of main memory) I wouldn't expect that to kick in at 270 MB of saturated memory consumption.

 

Another thing that hasn't been looked at enough yet is the Unflatten function. This function could try to allocate a huge chunk of data because of corruption in the flattened data stream. I have had that happen in the past, when a bug in my network software was causing data bytes to be lost, so that the information about how much data was to follow was corrupted and caused LabVIEW to try to allocate huge amounts.
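Rolf's failure mode, and the usual defense of sanity-checking a length prefix before allocating, can be sketched in Python; the 4-byte big-endian frame format and the size bound are assumptions for illustration, not the actual protocol in the VI:

```python
import struct

MAX_PACKET = 1 << 20  # sanity bound: refuse absurd length prefixes

def read_packet(frame: bytes) -> bytes:
    # Hypothetical frame format: 4-byte big-endian length, then payload.
    (length,) = struct.unpack_from(">I", frame, 0)
    if length > MAX_PACKET:
        # A corrupted prefix would otherwise trigger a giant allocation,
        # i.e. exactly the "out of memory" symptom described above.
        raise ValueError("implausible packet length %d" % length)
    return frame[4:4 + length]

good = struct.pack(">I", 5) + b"hello"
assert read_packet(good) == b"hello"

corrupt = struct.pack(">I", 0xFFFFFFFF) + b"hello"
try:
    read_packet(corrupt)
except ValueError:
    pass  # rejected instead of attempting a ~4 GB allocation
```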

Message 18 of 25

@rolfk wrote:

...

Another thing that hasn't been looked at enough yet is the Unflatten function. This function could try to allocate a huge chunk of data because of corruption in the flattened data stream. I have had that happen in the past, when a bug in my network software was causing data bytes to be lost, so that the information about how much data was to follow was corrupted and caused LabVIEW to try to allocate huge amounts.


Excellent point, Rolf!

 

I had seen a similar situation when two data-log files got their references crossed up, resulting in the data from one file being written to the other. On read-back, the data from the "bad" packet was interpreted as the byte count and, bingo, out of memory.

 

But now back to your situation and the issue shown above...

 

How can we get a bad packet through TCP/IP, since it guarantees delivery intact?

 

I can't answer my own question.

 

Ben

 

PS I'll look at the diagram again.

Message 19 of 25

After looking again...

 

If you look at your data path, you are running into a number of situations where the data buffers are being duplicated from one form to another. I suspect that using "Show Buffer Allocations" on all of your arrays will reveal a lot of buffers.

 

The other thing I can suggest you try quickly is to put an upper limit on that queue: instead of using an unlimited queue size (-1), set a limit and try the code again.

 

Why I am suggesting that...

 

Queues will, whenever possible, transfer the data "in place", meaning that once data is queued up, the data itself remains in the buffer holding it, and via the mechanism of the queue the receiver can get at that data without any other data copies. Since the data coming out of that queue is being transferred into a cluster, I suspect there is a data copy involved to get it into the cluster, so...

 

if the queue backs up, once the bottom loop starts consuming the data, it could suddenly have an extreme hunger for memory.

 

Changing the queue size should only hurt you if you are feeding it faster than you are emptying it, so your code should still do what it is doing now, right up until the queue fills.

 

If that prevents the memory issue and you see the queue overflowing, then you can turn your attention to optimizing how the bottom loop handles its data.
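Ben's bounded-queue experiment maps onto any producer/consumer pair; as a rough analogue (not LabVIEW code), Python's `queue.Queue` with a `maxsize` makes the producer block once the consumer falls behind, instead of letting unread data grow without limit:

```python
import queue
import threading

# Bounded queue analogue of setting a finite LabVIEW queue size:
# when 100 items are pending, put() blocks until the consumer
# catches up, so backlog memory is capped.
q = queue.Queue(maxsize=100)

def producer(n: int) -> None:
    for i in range(n):
        q.put(bytes([i % 256]) * 512)   # blocks when the queue is full

def consumer(n: int, out: list) -> None:
    for _ in range(n):
        out.append(q.get())
        q.task_done()

received = []
t1 = threading.Thread(target=producer, args=(1000,))
t2 = threading.Thread(target=consumer, args=(1000, received))
t1.start(); t2.start()
t1.join(); t2.join()
print(len(received), q.qsize())  # 1000 0
```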

 

I hope that does something for someone,

 

Ben

Message 20 of 25