04-11-2022 03:09 AM
@TonyStone wrote:
Wasn't trying to start a "flame war" by the way. I am never in the mood for that nonsense.
Good. Ignore that comment.
@TonyStone wrote:
I am just becoming frustrated with what should be quite simple according to the LV documentation I have gone through.
Somehow, limitations are often omitted from documentation. That's always frustrating when you run into such a limit.
@TonyStone wrote:
Is there anything special that needs to be done after modifying the configuration file? Just restarting LV should be enough, correct?
Edit the file while LabVIEW is closed (closing LabVIEW overwrites it).
If you're not using parallelized for loops, I don't think it should make a difference.
04-11-2022 03:53 AM - edited 04-11-2022 03:56 AM
wiebe@CARYA wrote:
LabVIEW was multithreading its code when the OSs (well, Windows 3.11) only had one. Even the CPU only had one thread on one core...
Not just on Windows! The cooperative parallel operation was first implemented on the Mac and then ported over to Windows. It was based in part on asynchronous system calls that the Mac OS provided, which let a process start IO calls, go on doing other things, and eventually collect the result after the OS signaled completion. It's similar to overlapped IO in Windows, although the Mac OS way was in fact even more complex to handle properly. In this way IO operations did not block the single thread a process had access to. LabVIEW also used its dataflow paradigm to implement its own code chunk scheduler that could "multitask" independent diagram portions. This scheduler is still present today and still allows parallel execution of code chunks even when there aren't as many threads available as there are parallelizable code chunks.
Functions such as file IO, network nodes, VISA nodes and similar all internally used (and still use) asynchronous IO calls to make this cooperative "multi-threading" possible.
LabVIEW threads are not OS threads.
Actually, LabVIEW absolutely uses OS threads for its multithreading. With LabVIEW 5.0 it learned to use OS-provided threads, which allowed real multithreading controlled directly by the OS scheduler.
The aforementioned LabVIEW code chunk scheduler was extended to make use of these threads too. This allowed LabVIEW to operate in parallel even across synchronous function calls, since the OS thread scheduler takes care of saving and restoring the thread state on each thread switch. Before LabVIEW 5.0 a Call Library call was always synchronous, since LabVIEW could not determine what such a call might change with respect to its own state, including the stack.
There is one drawback to this seamless LabVIEW multi-threading capability: the thread configuration is static and determined at process startup. During startup LabVIEW allocates an ini-file-configurable number of threads for each of its internal execution systems. There is a fixed set of execution systems that LabVIEW knows of, and a VI can be specifically assigned to run in one of them. When such a VI is executed for the first time, LabVIEW determines which execution system it should run in and assigns it accordingly. Whenever the LabVIEW scheduler determines that a code chunk from that VI should be executed, it passes control to one of the available threads within that execution system.
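For illustration, these per-execution-system thread counts live as tokens in the labview.ini file (if I remember right there is a threadconfig.vi utility in vi.lib to edit them safely). I'm writing the token names from memory, so treat this as a hypothetical sketch and check your own ini file for the real spelling:

[LabVIEW]
; hypothetical token names - verify against your own labview.ini
ESys.Normal=8      ; threads for the standard execution system
ESys.Instrument=4  ; threads for the instrument I/O execution system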
As far as the original problem goes, it most likely has to do with the fact that the pipe VIs call into a shared library somehow, but a maximum of 64 pipes doesn't really sound like a direct LabVIEW limitation. I don't know for sure, though, as I have no idea about the actual implementation of that shared library. Unless you have a monster CPU, LabVIEW's default thread allocation at initialization should not go as high as 64 threads per execution system, so that is not likely the problem. But there are various limits in the Linux OS itself, such as the maximum number of file descriptors that can be created system-wide as well as per process. There might even be a limit on the number of file descriptors that can be created per thread. Since this VI is called in a loop, and LabVIEW does not execute a loop in multiple threads (unless it is parallelized) because switching between threads is fairly costly, such a limit certainly could be at play here.
I can't quickly find anything about a per-thread limit for pipes, but the per-process limit certainly is suspect. You also need to consider that a pipe always consumes two file descriptors, so that might certainly be a factor too.
What does the command
ulimit -n
give on your system?
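To make the descriptor arithmetic concrete, here is a minimal C sketch (plain pipe() calls, not the actual implementation of the LabVIEW pipe library) that simply creates pipes until the limit is hit:

#include <stdio.h>
#include <errno.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int fds[2];
    int count = 0;

    /* each successful pipe() call consumes TWO file descriptors */
    while (pipe(fds) == 0)
        count++;

    /* EMFILE = per-process fd limit hit, ENFILE = system-wide limit hit */
    printf("created %d pipes (%d descriptors), then: %s\n",
           count, 2 * count, strerror(errno));
    return 0;
}

With the common default of 1024 descriptors per process this reports around 510 pipes, since each pipe takes two descriptors and a few are already in use for stdin/stdout/stderr.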
04-11-2022 04:03 AM
@rolfk wrote: LabVIEW threads are not OS threads.
Actually, LabVIEW absolutely uses OS threads for its multithreading. With LabVIEW 5.0 it learned to use OS-provided threads, which allowed real multithreading controlled directly by the OS scheduler.
I meant that not every parallel path in LabVIEW is an OS thread.
As a reaction to:
@TonyStone wrote:
I was under the impression LV is automatically threading my loops because it has some sort of a "smart compiler".
Execution is handled by clumping and sometimes threading.
This is not just done by the compiler, but also by the run-time engine...
04-11-2022 04:12 AM
@rolfk wrote: What does the command
ulimit -n
give on your system?
ulimit - Set process limits - IBM Documentation
"ulimit -a" might also give insight. It sounds to me the limit on file descriptors might be higher then the soft or hard limits?
Next, try to set the limit?
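A process can also query and raise its own soft limit programmatically; this minimal C sketch does essentially what "ulimit -n" does from the shell (raising beyond the hard limit still needs privileges):

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }
    printf("soft: %llu  hard: %llu\n",
           (unsigned long long)rl.rlim_cur,
           (unsigned long long)rl.rlim_max);

    /* raise the soft limit up to the hard limit */
    rl.rlim_cur = rl.rlim_max;
    if (setrlimit(RLIMIT_NOFILE, &rl) != 0)
        perror("setrlimit");
    return 0;
}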
04-11-2022 04:22 AM
@n2new wrote:
Another point from the images you posted in your first post: you seem to totally ignore any error handling in your code. Are you sure the Create Pipe function doesn't already return an error for every pipe beyond 64? The error message returned in the corresponding error cluster might give more information as to the actual cause.
If the Create Pipe function failed to create a pipe, you can try to write data to those never-created pipes all day long, but nothing will ever happen.
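In plain C terms the point looks like this (the path is hypothetical; I don't know what the pipe library does internally, but any create call can fail and must be checked):

#include <stdio.h>
#include <errno.h>
#include <string.h>
#include <sys/types.h>
#include <sys/stat.h>

int main(void)
{
    const char *path = "/tmp/demo_pipe";  /* hypothetical pipe name */

    /* if the create call fails, stop here - writing to a pipe that
       was never created cannot accomplish anything */
    if (mkfifo(path, 0666) != 0 && errno != EEXIST) {
        fprintf(stderr, "mkfifo(%s): %s\n", path, strerror(errno));
        return 1;
    }
    printf("pipe %s created\n", path);
    return 0;
}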
04-11-2022 01:08 PM
ulimit is 1024.
Error handling was ignored for speed, but prior to turning it off I verified that all the descriptors are being assigned by probing the array and using an indicator. I have also verified that the pipes exist in the application directory. Let me attach some code here as well as the VI itself. Perhaps we can get to the bottom of this.
04-11-2022 01:10 PM
I attached some code, and the current ulimit is 1024. I set it higher and it made no difference.
Tom
04-11-2022 01:15 PM
The pipe library appears to be assembly. Very interesting code. If I recall some NI history, I believe the first platform was Mac only?
04-11-2022 03:19 PM
@n2new wrote:
The pipe library appears to be assembly. Very interesting code. If I recall some NI history, I believe the first platform was Mac only?
True, but totally irrelevant. Only the Unix versions of LabVIEW ever had pipe support. And I'm not sure what you mean by interesting code. It's simply a shared object library file, and that is of course C(++) code compiled into binary machine code (which a disassembler will happily show you as assembly). Nothing strange or obscure about that; it's very standard procedure for Linux applications. Your entire Linux OS is built out of such shared libraries that make many different kernel facilities available to user space.
04-11-2022 03:22 PM - edited 04-11-2022 03:25 PM
How did you verify that the Create Pipe VI did not fail when creating more than 64 pipes? I see no error cluster anywhere.