TCP listen takes long time to get connection

If there is a limit beyond which things start to go wrong, I would suspect the number of threads LabVIEW allocates per subsystem on startup. Given how that blocking is most likely caused, it doesn't seem very probable, but it's not completely impossible.
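For readers unfamiliar with the failure mode being hypothesized here: a rough analogy in Python shows how a fixed-size thread pool behaves once every thread is parked in a blocking call. This is only an illustration, not LabVIEW's actual implementation, and the pool size is an arbitrary stand-in:

```python
import concurrent.futures
import time

POOL_SIZE = 4  # arbitrary stand-in for the threads LabVIEW gives one subsystem

def blocking_call(seconds):
    """Simulates a call that parks a thread, e.g. a TCP Read waiting on data."""
    time.sleep(seconds)
    return seconds

pool = concurrent.futures.ThreadPoolExecutor(max_workers=POOL_SIZE)

# Saturate the pool: every worker thread is now stuck in a blocking call.
for _ in range(POOL_SIZE):
    pool.submit(blocking_call, 0.5)

# A new, instant task (think: handling a freshly accepted connection)
# cannot even start until one of the blocked threads frees up.
t0 = time.monotonic()
pool.submit(blocking_call, 0.0).result()
wait = time.monotonic() - t0

pool.shutdown()
print(f"new task waited {wait:.2f}s before running")
```

The new task itself does no work at all; the entire delay comes from waiting for a free thread, which is the kind of behavior that would show up as a slow TCP Listen.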

 

The number of threads that LabVIEW allocates per subsystem is determined dynamically based on the number of CPU cores your system has. You can also configure it through a somewhat crude configuration VI in vi.lib\Utility\sysinfo.llb\threadconfig.vi.

 

https://knowledge.ni.com/KnowledgeArticleDetails?id=kA00Z000000PARmSAO&l=nl-NL

 

You could, for instance, increase the number of threads for the "other1" subsystem and explicitly assign the top-level VIs of your TCP/IP servers/clients to that subsystem. If the blocking has to do with the number of available threads, this should significantly raise the point at which it starts to go wrong.

 

The threadconfig.vi stores the settings in the LabVIEW ini file. If you want to transfer this configuration to a built executable, you have to make sure to copy the relevant ini keys to the ini file of your built application. The LabVIEW Run-Time Engine allocates threads on startup according to these settings (and chooses sane default values if the keys are not present). Changing the values afterwards has no effect until you restart LabVIEW or the exe.
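As a convenience for readers, the "copy the relevant ini keys" step can be scripted. The sketch below is hedged throughout: the `"ESys"` key prefix, the file names, and the section name are assumptions for illustration only; inspect what threadconfig.vi actually wrote to your LabVIEW.ini and adjust accordingly (a built app's ini section is normally named after the executable):

```python
import configparser

def copy_thread_keys(labview_ini, app_ini, app_section, prefix="ESys"):
    """Copy thread-configuration keys from LabVIEW.ini into a built app's ini.

    The "ESys" prefix is an assumption for illustration -- check the keys
    that threadconfig.vi actually wrote and adjust the prefix to match.
    """
    src = configparser.ConfigParser()
    src.optionxform = str  # preserve key case exactly as LabVIEW wrote it
    src.read(labview_ini)

    dst = configparser.ConfigParser()
    dst.optionxform = str
    dst.read(app_ini)
    if not dst.has_section(app_section):
        dst.add_section(app_section)

    copied = []
    for key, value in src.items("LabVIEW"):
        if key.startswith(prefix):
            dst.set(app_section, key, value)
            copied.append(key)

    with open(app_ini, "w") as f:
        dst.write(f)
    return copied
```

Run this as part of your build post-step so the executable's ini always carries the same thread settings you tuned in the development environment.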

 

And if it is indeed related to the number of allocated threads, that would absolutely explain why seemingly nobody else has run into this. On modern hardware, LabVIEW creates at least about 8 to 12 threads per subsystem. Back when this feature was introduced, LabVIEW allocated 4 threads per subsystem on the Pentium CPUs of the day; nowadays, with octa-core and larger CPUs, the default can easily go above 16 threads per subsystem.

 

Most likely your application currently has all VIs set to run in the "same thread as the caller". This causes top-level VIs (your main VI and any VI you launch through VI Server with Run, or Call and Forget) to be launched in the standard subsystem. Assigning your TCP daemons explicitly to "other1" or similar might already give them more breathing room, as they would no longer have to compete for threads with the rest of your application.
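The benefit of moving the TCP daemons to their own subsystem can be pictured with the same kind of Python analogy as before: two independent pools, loosely standing in for the "standard" and "other1" subsystems (pool sizes are arbitrary, not LabVIEW's actual thread counts):

```python
import concurrent.futures
import time

# Two independent pools, loosely analogous to the "standard" and "other1"
# execution subsystems (sizes are arbitrary for the demonstration).
standard_pool = concurrent.futures.ThreadPoolExecutor(max_workers=4)
other1_pool = concurrent.futures.ThreadPoolExecutor(max_workers=4)

# The rest of the application keeps the "standard" pool fully busy.
for _ in range(8):
    standard_pool.submit(time.sleep, 0.5)

# TCP work assigned to its own pool does not have to wait for those threads.
t0 = time.monotonic()
other1_pool.submit(lambda: None).result()
tcp_wait = time.monotonic() - t0
print(f"TCP task waited {tcp_wait:.3f}s")

standard_pool.shutdown()
other1_pool.shutdown()
```

Even with the "standard" pool completely saturated, the task on the separate pool starts essentially immediately, which is the isolation effect being suggested here.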

Rolf Kalbermatter  My Blog
DEMO, Electronic and Mechanical Support department, room 36.LB00.390
Message 31 of 32

Hi Rolf,

Many thanks again for the insightful information. Your explanation is likely correct, but I am unable to test it at the moment (I don't currently have access to the machine that runs the server application).

 

To provide more detail on the structure of the server app: the app uses many different DQMH modules. Some of these are Singleton, others are Cloneable. In addition to the standard Event Handling Loop (EHL) and Message Handling Loop (MHL), many of these modules have one or more helper loops. One Singleton module has as many as 13 helper loops (so 15 while loops in total inside that one module). Fifteen of the streaming connections are implemented as fifteen clones of a Cloneable module that contains 3 while loops, so those 15 clones account for 45 while loops between them.

Moreover, I am a fan of using preallocated-clone subVIs as opposed to non-reentrant VIs (my reasons for this preference are explained in this idea and this idea), so lots of preallocated subVIs are used throughout the application. In short, the application is likely to "want" many threads.

Being aware of this, during the troubleshooting session I recorded the number of threads the application was using: it oscillated between 78 and 79 every few seconds. I viewed this number in both Process Explorer and the Task Manager Details page, and the two always agreed (both displayed 78, then 79, then back to 78, and so on).

 

"the number of CPU cores your system has" - The machine is a modern Windows 11 tower workstation, but I have not recorded the processor model number or how many cores it has. I will do so next time I have access to the machine.

 

"Most likely your application currently has all VIs set to run in the "same thread as the caller"" - I have just inspected the VI properties of the Main VI of the Cloneable module that manages the 15 streaming connections and can confirm that its "Preferred Execution System" is "same as caller". I have not knowingly changed this setting for any VI in the codebase, so it is likely that all VIs use "same as caller".

 

I recently learned that the number of threads LabVIEW allocates can be altered through INI file keys, but I have not tried this yet.

 

"Assigning your TCP daemons explicitly to "other1" or similar might already give them more air as they don't have to compete for threads with the rest of your system." - I will try this during the next troubleshooting session.

Message 32 of 32