Linux 64 thread limitation in library

If the pipe fails to create, it has no file descriptor. Simple as that. Besides, I had error reporting turned on at one point and there were none. The remaining pipes beyond 64 just go into a waiting state.
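As a plain-C sketch of that point (my own illustration, not the LabVIEW code): pipe(2) either hands back two file descriptors or fails with errno set, typically EMFILE once the per-process descriptor limit is reached, so a pipe that failed to create leaves nothing behind to wait on.

/* Hypothetical C sketch: keep creating pipes until the OS refuses.
   A failed pipe() returns -1 and sets errno; no descriptor exists. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int fds[2];
    int count = 0;

    while (pipe(fds) == 0)   /* each pipe consumes two descriptors */
        count++;

    printf("created %d pipes before failure: %s\n",
           count, strerror(errno));   /* typically EMFILE */
    return 0;
}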

Message 21 of 26

Could a race condition between the command line call and Open Pipe.vi be relevant?

Message 22 of 26

For a sanity check I added a sequence structure and separated out the process start so it runs first. I even added a delay between executions. No change, the problem still persists. I'm starting to wonder if the system is simply out of CPU, even though the process monitor shows otherwise and it really shouldn't be the case.

Message 23 of 26

Low CPU would mostly make things slower. Only if there are timeouts of some kind could the delay make a difference, and I wouldn't expect a 100% reproducible result in that case. Since it's always exactly 64 pipes that work, I'd expect a hard limit somewhere. Even the sync between the command line and the Open Pipe was just to be sure.

Message 24 of 26

I agree. It doesn't really slow down, I just don't get any more data throughput. It does appear to be a hard limit somewhere; that's why I suspected the pipe read.
The pipes are not being read and sit in a wait state in the resource monitor, which is kind of telling. It's possible something else is preventing the read, but I can't imagine what that would be. I tried many different bytes-to-read values, thinking perhaps that was the issue, but it's not. I even tried splitting the process into 4 different loops, each reading 20. No change.
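As a hedged diagnostic idea (a plain-C sketch outside LabVIEW, not something from this thread): poll(2) can watch all the pipe read ends at once and report how many actually have data pending, which separates "no data is arriving" from "the reads are not being serviced", regardless of how the reads are split across loops.

/* Hypothetical C sketch: create a batch of pipes, make them all readable,
   and ask poll() how many really report data ready. */
#include <poll.h>
#include <stdio.h>
#include <unistd.h>

#define NPIPES 80

int main(void)
{
    int fds[NPIPES][2];
    struct pollfd pfds[NPIPES];
    int i, ready;

    for (i = 0; i < NPIPES; i++) {
        if (pipe(fds[i]) != 0) {
            perror("pipe");          /* stop if a descriptor limit is hit */
            return 1;
        }
        pfds[i].fd = fds[i][0];      /* watch the read end */
        pfds[i].events = POLLIN;
    }

    for (i = 0; i < NPIPES; i++)
        if (write(fds[i][1], "x", 1) != 1)   /* every pipe should now be readable */
            return 1;

    ready = poll(pfds, NPIPES, 1000);        /* 1 s timeout */
    printf("%d of %d pipes report data ready\n", ready, NPIPES);
    return 0;
}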

Message 25 of 26

Have you tried "ulimit -a"?

This might return a lot of info, but anything '64' is suspicious.
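For reference, a small C sketch (my own addition, not from the original posts) that prints the same limits from inside the process via getrlimit(2); the "open files" value is what pipes count against, and on Linux "processes" also covers threads.

/* Hypothetical C sketch: print the resource limits the process actually sees. */
#include <stdio.h>
#include <sys/resource.h>

static void show(const char *name, int resource)
{
    struct rlimit rl;
    if (getrlimit(resource, &rl) == 0)
        printf("%-12s soft=%llu hard=%llu\n", name,
               (unsigned long long)rl.rlim_cur,
               (unsigned long long)rl.rlim_max);
}

int main(void)
{
    show("open files", RLIMIT_NOFILE);   /* each pipe uses file descriptors */
    show("processes", RLIMIT_NPROC);     /* threads count against this too */
    return 0;
}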

Message 26 of 26