04-11-2022 03:41 PM - edited 04-11-2022 03:47 PM
If the pipe fails to create, it has no file descriptor. Simple as that. Besides, I had Error turned on at one point and there were none. The remaining pipes beyond 64 just go into a wait state.
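To illustrate the point in C (POSIX-style pipes assumed, since that's what the pipe primitives boil down to outside of Windows; the FIFO path is made up for illustration): a failed open never hands back a descriptor, only -1 and an errno to inspect.

```c
/* Minimal sketch, POSIX assumed: a pipe that fails to open yields no
 * file descriptor at all, just -1 and an errno explaining why. */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    /* "/tmp/demo_fifo" is a hypothetical path for illustration. */
    int fd = open("/tmp/demo_fifo", O_RDONLY | O_NONBLOCK);
    if (fd == -1) {
        /* No descriptor exists at this point. */
        fprintf(stderr, "open failed: %s\n", strerror(errno));
        return 1;
    }
    close(fd);
    return 0;
}
```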
04-12-2022 06:52 AM
Could a race condition between the command line and Open Pipe.vi be relevant?
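If it were, one way to rule it out would be to retry the open for a bounded time until the other side has created the pipe. A rough C sketch (POSIX assumed; the helper name and parameters are made up):

```c
/* Sketch: retry opening a named pipe until the writer has created it,
 * so a create/open race can be ruled out. POSIX assumed. */
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

int open_pipe_retry(const char *path, int attempts, useconds_t delay_us)
{
    for (int i = 0; i < attempts; i++) {
        int fd = open(path, O_RDONLY | O_NONBLOCK);
        if (fd != -1)
            return fd;      /* pipe exists and is open */
        if (errno != ENOENT)
            return -1;      /* a real error, not just "not created yet" */
        usleep(delay_us);   /* not there yet; wait and try again */
    }
    return -1;              /* gave up: probably not a race after all */
}
```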
04-12-2022 07:52 AM - edited 04-12-2022 07:55 AM
For a sanity check I added a sequence structure and separated out the process start so it runs first. I even added a delay between executions. No change; the problem still persists. I'm starting to wonder if the system is just out of CPU, even though the process monitor shows otherwise, so that really shouldn't be the case.
04-12-2022 09:12 AM
Low CPU would mostly make things slower. Only if there were timeouts of some kind could the delay make a difference, and even then I wouldn't expect a 100% reproducible result. Since it's always exactly 64 pipes that work, I'd expect a hard limit somewhere. Syncing the command line and the Open Pipe was just to be sure.
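For what it's worth, one classic source of an exactly-64 ceiling on Windows is WaitForMultipleObjects(), which is hard-capped at MAXIMUM_WAIT_OBJECTS (64) handles. Whether the pipe library actually funnels all pipes into one such wait is pure speculation on my part, but the cap itself is easy to demonstrate in C:

```c
/* Windows-only sketch: WaitForMultipleObjects() rejects more than
 * MAXIMUM_WAIT_OBJECTS (64) handles outright. If some layer under the
 * pipe reads waits on all pipes at once, pipe #65 onward would never
 * be serviced -- an assumption, not a diagnosis. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    HANDLE events[65];
    for (int i = 0; i < 65; i++)
        events[i] = CreateEvent(NULL, TRUE, FALSE, NULL);
    SetEvent(events[0]);  /* make a legal wait return immediately */

    DWORD r = WaitForMultipleObjects(65, events, FALSE, 0);
    if (r == WAIT_FAILED)  /* ERROR_INVALID_PARAMETER (87) */
        printf("65 handles: WAIT_FAILED, error %lu\n", GetLastError());

    r = WaitForMultipleObjects(64, events, FALSE, 0);
    printf("64 handles: wait returned %lu\n", r);  /* WAIT_OBJECT_0 */

    for (int i = 0; i < 65; i++)
        CloseHandle(events[i]);
    return 0;
}
```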
04-12-2022 09:56 AM - edited 04-12-2022 10:02 AM
I agree. It doesn't really slow down; I just don't get any more data throughput. It does appear to be a hard limit someplace, which is why I suspected the pipe read.
The pipes that aren't being read sit in a wait state in the Resource Monitor, which is kind of telling. It's possible something else is preventing the read, but I can't imagine what that would be. I tried many different "bytes to read" values, thinking perhaps that was the issue, but it wasn't. I even tried splitting the process into 4 different loops, each reading 20 pipes. No change.
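As a side note, if the reads are being multiplexed through a fixed-size wait set somewhere below the VI layer, a poll()-based reader has no such ceiling. A rough C sketch, assuming POSIX pipes (the pipe count and function name are hypothetical):

```c
/* Sketch, POSIX assumed: poll() watches an arbitrary number of pipe
 * descriptors, sidestepping fixed wait-set limits such as select()'s
 * FD_SETSIZE or the 64-handle wait cap on Windows. */
#include <poll.h>
#include <stdio.h>
#include <unistd.h>

#define NPIPES 80  /* hypothetical count, chosen to exceed 64 */

void drain_pipes(const int fds[NPIPES])
{
    struct pollfd pfds[NPIPES];
    for (int i = 0; i < NPIPES; i++) {
        pfds[i].fd = fds[i];
        pfds[i].events = POLLIN;
        pfds[i].revents = 0;
    }

    if (poll(pfds, NPIPES, 1000) <= 0)
        return;  /* timeout or error: nothing readable */

    char buf[4096];
    for (int i = 0; i < NPIPES; i++) {
        if (pfds[i].revents & POLLIN) {
            ssize_t n = read(fds[i], buf, sizeof buf);
            if (n > 0)
                printf("pipe %d: %zd bytes\n", i, n);
        }
    }
}
```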
04-12-2022 11:05 AM
Have you tried "ulimit -a"?
This might return a lot of info, but anything '64' is suspicious.
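If it helps, the same limits can also be read from code; a small C sketch using getrlimit(), which reports what "ulimit -n" shows (POSIX assumed):

```c
/* Sketch, POSIX assumed: getrlimit() reads the same per-process limits
 * that "ulimit -a" prints, e.g. the open-file ceiling. */
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;
    if (getrlimit(RLIMIT_NOFILE, &rl) == 0)
        printf("open files: soft=%llu hard=%llu\n",
               (unsigned long long)rl.rlim_cur,
               (unsigned long long)rl.rlim_max);
    return 0;
}
```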