
Performance question on multi core processor

What makes a header malformed? TCP has internal mechanisms to maintain connection fidelity, so I don't see why this is necessary. Which side establishes the initial connection?

 

How is the queue configured?

Message 11 of 20

A malformed header indicates a previously corrupted packet, which can throw off synchronization since I am reading fixed lengths.  In that case, I close and reopen the socket to resync.
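To make the resync idea concrete, here is a minimal sketch in Python (the original code is LabVIEW, so this is only an illustration). The frame format is hypothetical — a made-up 4-byte magic word plus a 2-byte length — but the technique is the same: when a fixed-length read lands mid-frame, scan forward for the next plausible header instead of trusting the current offset.

```python
# Hypothetical framing: 4-byte magic word + 2-byte big-endian length + payload.
MAGIC = b"\xAA\x55\xAA\x55"
HEADER_LEN = 6

def parse_frames(buf: bytes) -> list:
    """Extract complete frames from a receive buffer.

    If a header does not start with the magic word, the stream has lost
    sync (e.g. after a corrupted packet); skip forward to the next magic
    word and resume, rather than closing and reopening the socket."""
    frames = []
    i = 0
    while i + HEADER_LEN <= len(buf):
        if buf[i:i + 4] != MAGIC:
            nxt = buf.find(MAGIC, i + 1)   # resync: hunt for the next header
            if nxt == -1:
                break
            i = nxt
            continue
        length = int.from_bytes(buf[i + 4:i + 6], "big")
        if i + HEADER_LEN + length > len(buf):
            break                          # incomplete frame; wait for more data
        frames.append(buf[i + HEADER_LEN:i + HEADER_LEN + length])
        i += HEADER_LEN + length
    return frames
```

Resyncing in place like this avoids the close/open cycle, at the cost of possibly discarding one payload that happens to contain the magic bytes.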

 

We have a separate Local Controller running code on a Linux box that is the source of the data stream.  It provides the socket.  My program connects and starts gathering real-time data.  I simply push the binary string read from the socket into the queue to be parsed asynchronously.
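The read-then-enqueue pattern described above keeps the socket reader loop as thin as possible. A rough Python equivalent (names are illustrative, not from the original program) would be a blocking consumer thread fed by a `queue.Queue`, with the reader doing nothing but `put`:

```python
import queue
import threading

def start_parser(parse, q: "queue.Queue") -> threading.Thread:
    """Start a consumer thread that parses raw binary strings off the queue.

    The socket reader only enqueues and returns, so parsing cost never
    delays the next read. A None sentinel shuts the worker down."""
    def worker():
        while True:
            raw = q.get()
            if raw is None:        # sentinel: stop consuming
                q.task_done()
                break
            parse(raw)             # async parse happens here, off the reader
            q.task_done()
    t = threading.Thread(target=worker, daemon=True)
    t.start()
    return t
```

With this split, a queue depth that never exceeds one or two packets (as reported later in the thread) suggests the consumer is keeping up and the reader side is where bytes are going missing.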

 

This runs for months until a peak load event happens.  It handles the peak load for several minutes and then pukes.  My local queues never get more than one, or at most two, packets deep.

 

We have written simple C code on a Linux box that can read the same socket, and it does not get the corrupted packets.  We have a complete send log from the Local Controller of what was sent and do a bit-for-bit comparison with what was received.  The C code on Linux works fine.  The LabVIEW code on Win7 does not.
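For a bit-for-bit comparison like this, the most useful single number is the offset of the first divergence between the send log and the receive capture — it tells you whether corruption starts at a frame boundary or mid-payload. A small helper (hypothetical name, any language would do) might look like:

```python
def first_divergence(sent: bytes, received: bytes) -> int:
    """Return the byte offset where two captures first differ.

    Returns -1 if the captures are identical; a truncated capture
    diverges at its own length (data stopped arriving there)."""
    n = min(len(sent), len(received))
    for i in range(n):
        if sent[i] != received[i]:
            return i
    return -1 if len(sent) == len(received) else n
```

Checking whether that offset is always a multiple of the fixed read length would help distinguish a framing bug in the reader from genuine byte corruption on the wire.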

Message 12 of 20

@t_houston wrote:

I didn't realize that if different threads share the UI, they would run on the same core.  I guess it makes sense. 

 

If I launch stand-alone VITs, each with its own panel, will they still always run on the same core?

Sorry but I should correct part of what I said.

 

The UI thread is indeed single-threaded, BUT the OS can choose to schedule that thread on any core it chooses. The important point is that if your code is not utilizing multiple cores while there is plenty of work to do, then the bottleneck could be the UI thread.

 

Example:

When trying to update complex 3D images, I can pre-process all of the data using as many cores as I have on my machine, but when it comes time to update the graphics, the average load on the machine works out to whatever fraction a single core represents. In my case I could see 12.5% busy across eight cores.

 

I have seen a situation on a dual core where all of the CPU was shifted to the first core. I had that image in my head as I posted.

 

Please forgive that improper posting.


Take care,

 

Ben

Retired Senior Automation Systems Architect with Data Science Automation LabVIEW Champion Knight of NI and Prepper LinkedIn Profile YouTube Channel
Message 13 of 20

@t_houston wrote:

This runs for months until a peak load event happens.  It handles the peak load for several minutes and then pukes.  My local queues never get more than one, or at most two, packets deep.


Can you artificially simulate a peak load at will to instantly reproduce the problem?

 

Comparing with a Linux box is probably not fair, because the networking hardware could be different and the event seems so rare. Have you tried updating the NIC driver, or even substituting a different NIC or cable? What is the OS of the LabVIEW machine (Mac, Linux, or Windows, and which version)?

Message 14 of 20

The LabVIEW machine is a two-month-old Dell Precision T3500, a 4-core Xeon at 3.25 GHz with 12 GB of RAM, running Win7 Professional.  The problem shows up on multiple computers with different configurations.  The setup uses a local LAN with dedicated switches; everything is gigabit, with fibre interconnects between switches.  The same network is used with the Linux boxes.

 

I am working on a method of repeatably generating peak workloads.
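One way to generate a repeatable peak workload is a small sender that pushes frames at a configurable rate. A Python sketch (illustrative only — the real Local Controller is a separate Linux program) that paces writes against a fixed schedule rather than a naive per-frame sleep, so timing error does not accumulate:

```python
import time

def blast_frames(write, frame: bytes, count: int, rate_hz: float) -> float:
    """Send `count` copies of `frame` at roughly `rate_hz` frames/second.

    `write` is any byte sink (e.g. a connected socket's sendall).
    Pacing is against absolute deadlines from the start time, so small
    sleep overshoots do not accumulate into rate drift. Returns elapsed
    seconds so the achieved rate can be checked against the target."""
    interval = 1.0 / rate_hz
    start = time.monotonic()
    for i in range(count):
        write(frame)
        deadline = start + (i + 1) * interval   # absolute slot for frame i
        now = time.monotonic()
        if deadline > now:
            time.sleep(deadline - now)
    return time.monotonic() - start
```

Ramping `rate_hz` until the receiver starts seeing malformed headers would turn the months-long wait for a natural peak event into a bench test.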

Message 15 of 20

What is the maximum packets per second any of you folks have actually done over an extended period of time?

Message 16 of 20

I do not have numbers to quote regarding packets per second.

 

I suggest you search on TCP, since online gamers talk about this a bit. There is a white paper on the MS site discussing the TCP stack and the parameters used to control timeouts and other performance-related settings.

 

The Desktop Trace Execution Toolkit may also shed some light.

 

Ben

Message 17 of 20

I am quite aware of the info on tailoring the TCP stack.  After all I have seen, it really looks like a LV buffered-read problem.  That is why I am asking how fast anybody has actually pushed it, not how a buddy's friend's cousin did. 😉

Message 18 of 20

We have pushed 10 UDP transmits per second to support a 2-second timeout across various network targets (I think we had about 32 running at one point). While that was going on, we had TCP exchanges taking place with packets that varied in size.

 

The only issue we had was with the CPU in some of the old FP units falling behind.

 

In a rather old thread from years ago, another contributor achieved close to 100 MHz bandwidth, but in that case he was optimizing packet sizes to get that number.

 

But managing multiple small packets is the job of the TCP stack, and since you mentioned it only bites you under high-throughput conditions, that is why I suggested you look at the factors the OS exposes for managing traffic when it backs up.

 

The Desktop Trace Execution Toolkit will let you look into what is happening down in the muck of LV, if you suspect LV as the flaw.

 

Wireshark (using a hub) could also let you trace the packets right up to the Ethernet port, so you can move forward with confidence that the drops are in the LV box.

 

Offering what I can.

 

Ben

Message 19 of 20

What is interesting is that the problem only occurs after several hours of high traffic, not due to a short-term spike.

Message 20 of 20