Behaviour of global variable in LabVIEW-RT

Solved!

In LabVIEW 2016 RT (Pharlap), we know that calling an FGV from three tasks at different priority levels can lead to a priority inversion (because of the mutex on the non-reentrant FGV VI, which is not priority-inversion safe).

What about access to a global variable instead? Is there a similar mutex involved there as well?

Has this priority inversion issue been corrected in more recent LabVIEW-RT versions?

Thanks for your support

Message 1 of 3
Solution
Accepted by topic author thumble

@thumble wrote:

Has this priority inversion issue been corrected in more recent LabVIEW-RT versions?


It is not NI's problem when you use non-deterministic constructs in a system that needs determinism.

 

If all you are doing is getting or setting a value in the FGV, then you really should not be seeing issues.  A normal Global Variable should be a little faster.  There is still a mutex involved, but a GV is pretty fast.

 

A few other ideas:

1. You can make the FGV a "subroutine" (in the VI Properties->Execution dialog).  In your deterministic loop you can then right-click on the VI call and choose to skip it if busy.  This assumes your deterministic loop is only trying to read.  (A rough text-language analogue of this pattern is sketched after this list.)

 

2. Use RT FIFOs to pass data between the deterministic loop and the other loops.
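
LabVIEW diagrams can't be shown in text here, but the "skip if busy" idea in point 1 corresponds to a non-blocking try-lock. A minimal C sketch of the pattern, with illustrative names only (fgv_lock, fgv_value and read_fgv_if_free are an analogy, not NI's implementation):

```c
#include <pthread.h>
#include <stdbool.h>

/* Shared state protected by a mutex, standing in for the FGV's data. */
static pthread_mutex_t fgv_lock = PTHREAD_MUTEX_INITIALIZER;
static double fgv_value = 0.0;

/* Deterministic loop: try to read, but never block on the lock.
   Returns false if the lock was busy and the caller should reuse
   its previous copy -- the "skip if busy" behaviour. */
bool read_fgv_if_free(double *out)
{
    if (pthread_mutex_trylock(&fgv_lock) != 0)
        return false;               /* someone else holds it: skip */
    *out = fgv_value;
    pthread_mutex_unlock(&fgv_lock);
    return true;
}
```

Similarly, an RT FIFO (point 2) is in essence a fixed-size single-producer/single-consumer queue that never blocks either side. A rough C11 sketch of that idea, again with hypothetical names (rt_fifo_t, fifo_push, fifo_pop are not the RT FIFO VIs themselves):

```c
#include <stdatomic.h>
#include <stdbool.h>

#define FIFO_SIZE 64u            /* power of two, fixed at init time */

typedef struct {
    double      buf[FIFO_SIZE];
    atomic_uint head;            /* written only by the producer */
    atomic_uint tail;            /* written only by the consumer */
} rt_fifo_t;                     /* zero-initialize before use    */

/* Producer side (e.g. the deterministic loop): never blocks. */
static bool fifo_push(rt_fifo_t *f, double v)
{
    unsigned h = atomic_load_explicit(&f->head, memory_order_relaxed);
    unsigned t = atomic_load_explicit(&f->tail, memory_order_acquire);
    if (h - t == FIFO_SIZE)
        return false;                            /* full */
    f->buf[h % FIFO_SIZE] = v;
    atomic_store_explicit(&f->head, h + 1, memory_order_release);
    return true;
}

/* Consumer side (e.g. a lower-priority logging loop). */
static bool fifo_pop(rt_fifo_t *f, double *v)
{
    unsigned t = atomic_load_explicit(&f->tail, memory_order_relaxed);
    unsigned h = atomic_load_explicit(&f->head, memory_order_acquire);
    if (h == t)
        return false;                            /* empty */
    *v = f->buf[t % FIFO_SIZE];
    atomic_store_explicit(&f->tail, t + 1, memory_order_release);
    return true;
}
```

The point of both patterns is the same: the time-critical loop never waits on a resource that a lower-priority loop might be holding.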


GCentral
There are only two ways to tell somebody thanks: Kudos and Marked Solutions
Unofficial Forum Rules and Guidelines
"Not that we are sufficient in ourselves to claim anything as coming from us, but our sufficiency is from God" - 2 Corinthians 3:5
Message 2 of 3

Thanks for your quick answer,

 

For information, the issue I mentioned is the following:

1 task T1 with prio 100 (high prio)

1 task T2 with prio 50 (medium prio)

1 task T3 with prio 10 (low prio)

T3 is preempted by T1 while it is inside the FGV (or global variable?). T1 then wants to access the FGV, so it blocks on the mutex, and T2 takes the CPU. Because T2 performs a long operation, T1 stays blocked for a long time, which it should not. Normally the OS should have raised T3's priority to 100 when T1 queued on the mutex, so that T3 could finish the critical section without being preempted by T2.
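
For what it's worth, what you describe (boosting T3 to priority 100 while T1 waits on the lock) is exactly what a priority-inheritance mutex does on a POSIX-style RTOS. LabVIEW does not let you configure the FGV's internal mutex, so this is only a sketch of the mechanism you were expecting, assuming a pthreads environment that supports PTHREAD_PRIO_INHERIT (init_pi_mutex is an illustrative name; Pharlap's internals may differ):

```c
#include <pthread.h>

static pthread_mutex_t fgv_lock;

/* Create the mutex with the priority-inheritance protocol: a
   low-priority task holding it is temporarily boosted to the priority
   of the highest-priority task blocked on it, so T3 would run at 100
   while T1 waits and T2 (priority 50) could no longer preempt it. */
static int init_pi_mutex(void)
{
    pthread_mutexattr_t attr;
    int err;

    if ((err = pthread_mutexattr_init(&attr)) != 0)
        return err;
    if ((err = pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT)) != 0) {
        pthread_mutexattr_destroy(&attr);
        return err;
    }
    err = pthread_mutex_init(&fgv_lock, &attr);
    pthread_mutexattr_destroy(&attr);
    return err;
}
```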

You can argue that T2 should run at a lower priority than T3, but in our real-life case that is not a good option, especially since there are not just 3 threads competing but a dozen or so.

We can manage to save T1 by using "skip if busy" in the case of a simple FGV, but sometimes we need to call native NI VIs that are not reentrant and suffer from the same issue. Even an unplanned memory allocation can trigger it.

Message 3 of 3