LabVIEW RT, Timed loop, finished late, Call by reference

I have a timed loop triggered by the sample clock of a DAQ card. The sample clock is 8 kHz and the loop runs with dt = 4. Normally the loop runs without Finished Late? [i-1]. But from time to time the loop runs extremely long: instead of 0.5 ms it needs 4 - 5 ms. It seems this never occurs while accessing DAQmx.
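To make the numbers concrete: an 8 kHz clock with dt = 4 gives a 500 µs period, so a 4-5 ms iteration overruns by roughly 8-10 periods. A minimal sketch (plain Python, not LabVIEW) of how the finished-late condition follows from those numbers:

```python
# Sketch (not LabVIEW): how a timed loop's "finished late" flag is derived.
# Clock 8 kHz, dt = 4 ticks -> period of 500 us; an iteration taking
# 4-5 ms overruns roughly 8-10 periods.

CLOCK_HZ = 8000
DT_TICKS = 4
PERIOD_S = DT_TICKS / CLOCK_HZ  # 0.0005 s = 500 us

def finished_late(iteration_duration_s: float) -> bool:
    """An iteration finishes late if it ran longer than its period."""
    return iteration_duration_s > PERIOD_S

assert PERIOD_S == 0.0005
assert not finished_late(0.0004)  # 400 us: on time
assert finished_late(0.0045)      # 4.5 ms: roughly 9 periods late
```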

 

The application uses plugin technology, and some of the VIs in the timed loop are preloaded and called via Call By Reference.

 

Do those VIs inherit the priority, execution system, and CPU assignment of the timed loop?

 

The application is running on LV RT 2009 on a dual-core PXI controller. The timed loop is bound to CPU 1. There is no difference between running the application from the development environment and running it as a startup application.

 

For the measurement test I modified the application so that it has no:

disk access for storing the result file

TCP/IP communication

controls on the front panel of the top-level VI

Waldemar

Using 7.1.1, 8.5.1, 8.6.1, 2009 on XP and RT
Don't forget to give Kudos to good answers and/or questions
Message 1 of 13

There are a couple of possible issues. I say possible because I am relying on my memory about two things.

 

1) I believe the Call By Reference (CBR) node executes in the UI thread. A thread swap to the UI thread could be causing the finished late. I think you can verify or disprove this possibility by using the RT Execution Trace Toolkit. (If you do this, could you PLEASE post back and let us know what you find?)

 

2) Data buffers for the data into and out of the CBR cannot be reused, so there is at least a data buffer copy, and if the size jumps up, a call to the memory manager to get more space.
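The buffer-growth point can be sketched generically: a buffer that must grow calls the memory manager, while a preallocated one does not. This is an illustrative model, not LabVIEW's actual memory manager:

```python
# Generic sketch (not the LabVIEW memory manager): a buffer whose size
# jumps up triggers reallocations, while a preallocated fixed-size
# buffer never does, which is what you want in a real-time loop.

class Buffer:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.length = 0
        self.reallocations = 0

    def append(self, n: int) -> None:
        """Store n more samples, doubling capacity when we run out."""
        while self.length + n > self.capacity:
            self.capacity *= 2
            self.reallocations += 1  # each doubling is a memory-manager call
        self.length += n

growing = Buffer(capacity=64)
preallocated = Buffer(capacity=1 << 20)
for _ in range(1000):
    growing.append(100)       # sizes jump past capacity repeatedly
    preallocated.append(100)  # always fits: deterministic timing

assert preallocated.reallocations == 0
assert growing.reallocations > 0
```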

 

I think you can get a good handle on what the issue is by using the RT Execution Trace Toolkit. I am curious what you find and how you fix it, so please keep us updated.

 

Your partner in real-time wires,

 

Ben

Retired Senior Automation Systems Architect with Data Science Automation. LabVIEW Champion, Knight of NI.
Message 2 of 13

To keep you informed:

 

I stripped the application down, leaving only the measurement storage module and the timed loop. The loop was using the microsecond timer, running every 500 µs. Additionally, all non-real-time modules were loaded, and only one module creates an XML string and sends it to the communication layer, which drops it because no TCP/IP port is open. The XML string is created every 300 ms.

In this case I don't get any finished late iterations.

 

Next I added the DAQ module. Now I get finished late again, but less frequently than in the original.

 

I removed all unnecessary tasks from the DAQ module, leaving one task on a PXI-4204 to use as the clock source for the timed loop. Now I get finished late only seldom.

 

I removed the module which sends the XML string, and I don't get finished late anymore.

 

Next I added code to watch memory allocation. I can see when memory allocation changes, but it is not related to the finished late iterations.

 

Next time I have the chance to do more tests, I will check which DAQ task triggers the finished late iterations. I have one AI task on a PXI-4204, 4 counter tasks on a PXI-6602, 1 DI task on a PXI-6514, 1 DO task on a PXI-6514, 1 DI task on a PXI-6259, 1 DO task on a PXI-6259, 1 AO task on a PXI-6259, and 1 AO task on a PXI-6711.

The AI task on the PXI-4204 is running in continuous sampling (hardware-timed single point is not supported); all other tasks except the DI and DO are hardware-timed single point.

Waldemar

Message 3 of 13

waldemar.hersacher wrote:

To keep you informed:

...

The AI task on the PXI-4204 is running in continuous sampling (hardware-timed single point is not supported); all other tasks except the DI and DO are hardware-timed single point.


If all of the data you requested is not available in the buffer when you ask for it, the read function will wait until the data is in the buffer. Maybe check the number of samples waiting and only read that many to avoid the wait, IF that is what is causing the finished lates.
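The read-only-what-is-available pattern Ben describes can be sketched as follows; the buffer class here is a stand-in for a DAQmx task, not the real API:

```python
# Sketch of the pattern with a stand-in buffer instead of a DAQmx task:
# a blocking read waits until all requested samples exist; reading only
# what is already available returns immediately and never stalls the loop.

class AcquisitionBuffer:
    def __init__(self, available: int):
        self.available = available  # samples already acquired by hardware

    def read(self, n: int) -> int:
        """Return samples; a request beyond what's available would block."""
        if n > self.available:
            raise TimeoutError("would block waiting for more samples")
        self.available -= n
        return n

def nonblocking_read(buf: AcquisitionBuffer, wanted: int) -> int:
    # Read at most what is already there; the loop's next iteration
    # picks up the remainder.
    return buf.read(min(wanted, buf.available))

buf = AcquisitionBuffer(available=3)
assert nonblocking_read(buf, wanted=10) == 3  # partial read, no wait
```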

 

Ben

Message 4 of 13

Today I got the time to run some tests on the finished late problem.

I analyzed a VI which reads in all data needed in the real-time loop. This VI uses two subVIs for local DAQmx tasks and one subVI as an FGV.

I measured the time the VIs take and kept 50,000 iterations for analysis. I found that the finished late condition occurs about 10 to 12 times.

 

 

 

Since the VI doesn't have much code, the last thing I hadn't looked at was the "Merge Errors" VI. This VI is a subroutine, and setting the call to "Skip Subroutine Call if Busy" removed the finished late condition.

Message Edited by waldemar.hersacher on 04-28-2010 09:20 PM
Waldemar

Message 5 of 13

So are you calling this fixed, and the issue was Merge Errors?

 

This is something I am going to have to remember!

 

Thank you!

 

Ben

Message 6 of 13

Just a guess, I've never done RT. One of the inputs of Merge Errors is an array of clusters which contain a string. Maybe you can replace the Merge Errors with a Case structure that does the same (instead of skipping the error).

 

Felix

Message 7 of 13

F. Schubert wrote:

Just a guess, I've never done RT. One of the inputs of Merge Errors is an array of clusters which contain a string. Maybe you can replace the Merge Errors with a Case structure that does the same (instead of skipping the error).

 

Felix


 

But he said that setting the instance of "Merge Errors" to "skip if busy" cleared the problem. So that would indicate that Merge Errors, running in another context, was hanging the call to that VI in the code illustrated above. So if simultaneous calling of Merge Errors (ME) is actually the issue, then setting ME to reentrant should help as well, WITHOUT running the risk of losing errors when the ME call is skipped.
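The three options here (a blocking call into a shared non-reentrant subVI, "skip subroutine call if busy", and a reentrant clone per caller) can be modeled roughly with locks. This is an analogy in Python threading terms, not a claim about LabVIEW's actual scheduler:

```python
import threading

shared_instance = threading.Lock()  # non-reentrant subVI: one shared clone

def call_blocking() -> str:
    with shared_instance:            # a second caller waits here -> jitter
        return "ran"

def call_skip_if_busy() -> str:
    if not shared_instance.acquire(blocking=False):
        return "skipped"             # call (and its error data) is dropped
    try:
        return "ran"
    finally:
        shared_instance.release()

def call_reentrant() -> str:
    local_instance = threading.Lock()  # per-caller clone: never contended
    with local_instance:
        return "ran"

# While one caller holds the shared instance, skip-if-busy drops the call:
shared_instance.acquire()
assert call_skip_if_busy() == "skipped"
shared_instance.release()
assert call_skip_if_busy() == "ran"
assert call_reentrant() == "ran"
```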

 

Ben

Message 8 of 13

I really wonder if that is the issue.

Considering the facts that:

* Merge Errors is of subroutine priority

* The code is stripped down for benchmarking (so not tons of calls to this VI)

* Time penalty of about 4 ms (see first post)

I think there must be a huge memory reallocation (maybe the garbage collector running), which wouldn't be solved by making it reentrant, but only by not using dynamic-size (array) in dynamic-size (cluster) in dynamic-size (string) data. I read some threads (most probably on LAVA) that discussed the disadvantage of strings in the error cluster on RT.
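Felix's concern about variable-size data can be illustrated with byte sizes: a fixed-width error code always packs to the same number of bytes, while a message string grows with its content, so error clusters carrying strings can force run-time reallocation. A Python sketch with stand-in structures, not LabVIEW's actual error cluster layout:

```python
import struct

def packed_fixed(code: int) -> bytes:
    """A numeric error code packs to a fixed 4 bytes - RT friendly."""
    return struct.pack("<i", code)

def packed_with_string(code: int, source: str) -> bytes:
    """Code plus message: total size depends on the string's length."""
    payload = source.encode("utf-8")
    return struct.pack("<i", code) + payload

assert len(packed_fixed(-200279)) == 4
assert len(packed_fixed(0)) == 4                    # size never changes
short = packed_with_string(-200279, "DAQmx Read")
long_ = packed_with_string(-200279, "DAQmx Read" * 100)
assert len(long_) > len(short)                      # grows with the message
```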

I hope Waldemar will provide us with some new benchmarks...

 

Felix

Message 9 of 13

F. Schubert wrote:

I really wonder if that is the issue.

Considering the facts that:

* Merge Errors is of subroutine priority

* The code is stripped down for benchmarking (so not tons of calls to this VI)

* Time penalty of about 4 ms (see first post)

I think there must be a huge memory reallocation (maybe the garbage collector running), which wouldn't be solved by making it reentrant, but only by not using dynamic-size (array) in dynamic-size (cluster) in dynamic-size (string) data. I read some threads (most probably on LAVA) that discussed the disadvantage of strings in the error cluster on RT.

I hope Waldemar will provide us with some new benchmarks...

 

Felix


The issue with strings in the error cluster came from me, since I learned that one the hard way.

 

But your reply did flip a switch!

 

What if the OTHER instance of ME is being passed a large array of errors? Then that instance could be hanging the ME call in this context.

 

I suspect Waldemar will reply... eventually.

 

Ben

Message 10 of 13