07-25-2015 08:26 PM
"a block of code can execute once it has received values on all of its inputs"
I am sure a lot of people interpret this phrase (wrongly?) the same way I do. As a matter of fact, I treated it as the foundation of how LabVIEW code operates. You seem to take this in stride, while I am in shock at the moment.
As a matter of fact, I have read papers that suggested splitting code into copies and wiring them up side by side in the hope of "concurrent execution". The technique worked, no doubt. But now I have to ask myself whether those blocks will really execute sequentially, one after another, given that even built-in function blocks may block.
Some examples are:
07-25-2015 08:59 PM
@zigbee1 wrote:
As a matter of fact, I have read papers that suggested splitting code into copies and wiring them up side by side in the hope of "concurrent execution". The technique worked, no doubt. But now I have to ask myself whether those blocks will really execute sequentially, one after another.
Concurrent execution is the normal case with code that doesn't have dataflow dependencies, but it's important to understand what that actually means. If you take the first article you linked to, it shows the transition from code, through the OS to two CPU cores:
This means that at most two instructions can actually execute in parallel. Because computers are really fast, this looks like they're doing many more things in parallel, but that's just because they're breaking them up into smaller chunks and going back and forth between them. This should give you the first clue as to what might be going on - what happens if there's a task which can't be broken up? You have a core which is stuck until that task is done. What happens if you have two such tasks at the same time? Now both cores are stuck and you can't do anything. That's why I said that LV doesn't guarantee parallel execution - it can't.
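The "both cores stuck" scenario above can be sketched in code. This is a Python illustration (not LabVIEW, which has no textual form to paste here): a thread pool with 2 workers stands in for a 2-core CPU, and `time.sleep()` stands in for a task that cannot be broken up. The function names and durations are made up for the example.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def unbreakable_task(seconds):
    time.sleep(seconds)  # blocks its worker for the whole duration
    return seconds

start = time.monotonic()
with ThreadPoolExecutor(max_workers=2) as pool:    # "two CPU cores"
    long_a = pool.submit(unbreakable_task, 0.5)    # both workers get stuck
    long_b = pool.submit(unbreakable_task, 0.5)    # on long tasks...
    short = pool.submit(unbreakable_task, 0.1)     # ...so this one must wait
    short.result()
elapsed = time.monotonic() - start

# The short task cannot even start until one of the long tasks finishes,
# so the total time is roughly 0.5 + 0.1 s rather than 0.1 s.
print(f"short task finished after {elapsed:.2f} s")
```

The short task is logically independent of the long ones, yet it is serialized behind them simply because no execution resource is free, which is exactly the point being made about LabVIEW not being able to guarantee parallel execution.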
This problem doesn't actually happen at the CPU level, because operations there are relatively short and should actually have a fixed time, but it can happen at higher levels. Like I said, in this specific case, I'm guessing that the replace function does a DLL call to implement the regex functionality, and DLL calls block the thread they're in. This shouldn't apply to the majority of the primitives in LV, because as far as I know they directly generate machine code. The regex goes through a DLL because it's a standard implementation (PCRE) which isn't done by NI. Presumably other threads should keep running while this blocks.
So again, code which you write can generally execute in parallel (under at least some interpretation of the term: it might be true parallelism, or it might be task-swapping), but there are cases where it won't happen. This doesn't change the functionality, but it can affect the actual execution. The reason it doesn't bother me is that it's not as common as you now seem to think it is. In fact, other than your post, I don't think I've run into something similar in quite some time.
07-26-2015 08:00 AM - edited 07-26-2015 08:02 AM
Now I hope this is not complete garbage I'm posting here... (if so, I bank on the community to correct me...)
If this particular case is so important to you, put the timing stuff into a subVI and give it another "Preferred Execution System" in the "Execution" tab of the VI properties. As far as I understand it, this will (most likely?) cause LabVIEW to run the subVI in a thread away from the one that is blocked by the "search and replace" residing in the calling VI.
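The idea of moving the timing code out of the blocked thread can be sketched in Python (again, only an analogy for the LabVIEW behavior, with invented names and a `time.sleep()` standing in for the blocking DLL call): the blocking work runs in its own thread, so a timing loop keeps ticking instead of waiting behind it.

```python
import threading
import time

def blocking_call():
    time.sleep(0.3)  # stands in for the DLL call that blocks its thread

ticks = []
stop = threading.Event()

def timing_loop():
    # the "subVI" doing the timing work, running in its own thread
    while not stop.is_set():
        ticks.append(time.monotonic())
        time.sleep(0.05)

timer = threading.Thread(target=timing_loop)
worker = threading.Thread(target=blocking_call)
timer.start()
worker.start()
worker.join()   # the blocking work finishes...
stop.set()
timer.join()    # ...but the timer kept running alongside it the whole time

print(f"timer recorded {len(ticks)} ticks while the blocking call ran")
```

Because the two pieces of work live in separate OS threads, one thread being stuck in a blocking call no longer stalls the other, which is roughly what assigning the subVI a different execution system is meant to achieve.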
07-27-2015 07:05 AM
comrade wrote:
If this particular case is so important to you, put the timing stuff into a subVI and give it another "Preferred Execution System" in the "Execution" tab of the VI properties. As far as I understand it, this will (most likely?) cause LabVIEW to run the subVI in a thread away from the one that is blocked by the "search and replace" residing in the calling VI.
In general, that will only start to matter when you already have a lot of threads in the execution system. And since thread swapping can be expensive, I recommend only using that method for long tasks or for modules that need to run in parallel with the rest of your system.
What I am suspecting is happening here is the sequence frame is forcing everything in it to be a single "clump". But I am not a compiler guy, so I will not claim to know that for sure.
07-27-2015 12:27 PM
A little off-topic, but just to provide some context, LabVIEW FPGA is the environment that provides true parallel execution down to the nanosecond level. It achieves this by replacing the CPU(s) with custom execution hardware that implements the dataflow design directly. In that environment, Tick Count executes exactly as you describe and allows you to configure it to return more precise units of microseconds or actual ticks of the FPGA clock.
The tradeoff is that only a restricted set of (deterministic) functionality is supported on the FPGA, so the string manipulations you're doing are not possible in that environment.