real-time target gets disconnected when using timed loops

Hi,

 

I have a problem running a space vector modulation on a cRIO-9022 together with a NI 9114 FPGA chassis, and I have been stuck for 3 days now :-(. The idea is to calculate the switching timings for the next period on the RT target, then calculate the switching signals and wait the given times on the FPGA, and output the signals with a NI 9401. The currents are measured with a NI 9219 AI module.

 

The problem is that I need to run the loop faster than once per millisecond. Previously I used two while loops, each with a "Wait Until Next ms Multiple" function and a value of 1 wired to it. To go faster, I replaced them with timed loops, as can be seen in the pictures (RT_interface). Now the block diagram sometimes runs for a short time and then disconnects, or it disconnects right away and gives the following message:

 

"Waiting for Real-Time Target (Ni-cRio9022-014XXXXXX) to respond. - Stop waiting and disconnect"

 

followed by the message:

 

"Warning: Connection to the Real-Time target ((Ni-cRio9022-014XXXXXX) has been lost."

 

I attached the block diagrams of the RT target and the FPGA. The calculations on the RT target are mainly simple multiplications plus two sine/cosine calculations in the Park transformations, so I doubt that they need that many resources. Even if I set the period of the timed loop to 1 ms, the RT target does not get disconnected, but the front panel updates much more slowly and reacts sluggishly to a knob (it takes up to half a second). It was much faster with simple while loops.
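For reference, the per-iteration math is roughly of this form – a plain Python sketch of a standard amplitude-invariant abc→dq Park transform, only to illustrate the computational load, not the actual LabVIEW VI:

# Rough sketch (plain Python, not LabVIEW) of the kind of per-iteration math
# described above: a standard amplitude-invariant abc -> dq Park transform.
from math import cos, sin, pi

def park_abc_to_dq(ia, ib, ic, theta):
    """Transform three measured phase currents into the rotating dq frame."""
    k = 2.0 / 3.0
    d = k * (ia * cos(theta) + ib * cos(theta - 2*pi/3) + ic * cos(theta + 2*pi/3))
    q = -k * (ia * sin(theta) + ib * sin(theta - 2*pi/3) + ic * sin(theta + 2*pi/3))
    return d, q

# Example: a balanced 1 A three-phase set at angle theta = 0.3 rad
theta = 0.3
ia, ib, ic = cos(theta), cos(theta - 2*pi/3), cos(theta + 2*pi/3)
print(park_abc_to_dq(ia, ib, ic, theta))  # roughly (1.0, 0.0)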

 

Why does it get disconnected with timed loops? Is there a way to optimize the code or are there tools to analyze a possible bottleneck in the code? Why is the behavior using the timed loops so different?

 

Furthermore, is it possible to avoid the disconnection and resetting of the unit if the RT target gets too busy? The point is that if the RT target disconnects, the code keeps running and can only be stopped by pressing the reset button on the cRIO-9022. For safety reasons I would prefer that all digital channels of the NI 9401 switch to 0 if the RT target gets disconnected.

 

Thank you in advance, and best regards

 

Andy

Message 1 of 8

Hey everyone,

 

I also worked over the weekend, but still no progress. 😞

Is there anyone who has an idea or suggestion for my problem? I am totally open to any idea that could reduce the problem or improve the code, and I would give it a try.

 

I really appreciate your help.

 

Thank you and best regards

 

Andy

Message 2 of 8

Hello Andy,

 

I am not sure exactly what you are trying to accomplish just from looking at your FPGA code, but I might be able to provide some feedback.

 

The DIO lines you write to in your FPGA code mimic pulse-width modulation behavior, with high/low times that depend on the calculations performed in your RT code.

 

I would implement the FPGA code differently, but since that is not your main issue here we can ignore it for now. I am somewhat curious, though – you have a sector control that is fed into multiple case structures inside your sequence structure. What do the other cases in those case structures do?

 

All the messages you are receiving indicate that you are choking the cRIO controller, i.e. the workload is so high that you starve the user interface thread, which is what updates the GUI of your RT application. First of all, you should only use a user interface on the RT application for debugging purposes; in the end you need a separate host application.

 

You wrote that the code was faster with while loops, but just because you wired "1" into the "Wait Until Next ms Multiple" function does not mean you were actually running at that rate – read up on how that function works. When you specify that a timed loop should run at a given rate with a higher priority, LabVIEW tries hard to meet the specified deadlines, and you can configure how it should act on missed deadlines when configuring the loop itself.
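As a rough illustration (plain Python, not LabVIEW, with a deliberately slow loop body and made-up numbers), the difference between just waiting for the next millisecond boundary and scheduling against absolute deadlines looks like this:

# Sketch: why wiring "1" to a wait does not guarantee a 1 ms loop period,
# while deadline tracking (what a timed loop does) makes overruns visible.
import time

PERIOD = 0.001            # intended 1 ms loop period

def do_work():
    time.sleep(0.0015)    # pretend the calculations take 1.5 ms

def wait_until_next_ms_style(iterations):
    """Overruns are invisible: each iteration just takes ~2 ms instead of 1 ms."""
    t0 = time.monotonic()
    for _ in range(iterations):
        do_work()
        now = time.monotonic()
        time.sleep(PERIOD - (now % PERIOD))   # wait for the next ms boundary
    print(f"wait-style: {iterations} iterations took {time.monotonic() - t0:.3f} s")

def timed_loop_style(iterations):
    """Deadlines are explicit: a late finish can be detected and acted on."""
    deadline = time.monotonic()
    missed = 0
    for _ in range(iterations):
        deadline += PERIOD
        do_work()
        late = time.monotonic() - deadline
        if late > 0:
            missed += 1        # behavior on a miss is configurable in LabVIEW
        else:
            time.sleep(-late)
    print(f"timed-style: missed {missed} of {iterations} deadlines")

wait_until_next_ms_style(100)
timed_loop_style(100)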

 

There are multiple ways to measure how long the different actions in your code take, but why not simply increase the period of your timed loops and check the CPU load when running at a 1 kHz loop rate, for example?

 

Finally, you are using two timed loops – why not use an ordinary while loop instead of the lower-priority timed loop? A timed loop executes in a single thread and is good for high-priority tasks with minimal parallel code (pipelining, for example, is not supported in timed loops in the same way as in an ordinary while loop), whereas an ordinary while loop works well with parallel code.

 

You could implement a heartbeat signal where the FPGA reads a control updated by the RT code, and whenever that control is not updated within a certain time period the FPGA sets your DIO lines to a safe state. Toggle a Boolean in your RT code at a certain interval, and if that interval is exceeded when measuring the time on the FPGA, take whatever action you like. See the basic idea in the attached picture and the sketch below.
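In pseudo-code form (a hypothetical Python sketch with assumed periods and timeouts, not the actual FPGA/RT implementation), the heartbeat idea is roughly:

# Heartbeat/watchdog sketch: one side toggles a flag at a fixed interval,
# the other side goes to a safe state if the flag stops changing.
import threading
import time

HEARTBEAT_PERIOD = 0.1   # RT toggles the flag every 100 ms (assumed value)
WATCHDOG_TIMEOUT = 0.5   # "FPGA" trips to safe state after 500 ms of silence

heartbeat = {"value": False}   # stands in for the front-panel Boolean
rt_alive = threading.Event()
rt_alive.set()

def rt_side():
    """Stands in for the RT loop: toggle the Boolean at a fixed interval."""
    while rt_alive.is_set():
        heartbeat["value"] = not heartbeat["value"]
        time.sleep(HEARTBEAT_PERIOD)

def fpga_side():
    """Stands in for the FPGA loop: if the Boolean stops toggling, go safe."""
    last_seen = heartbeat["value"]
    last_change = time.monotonic()
    while True:
        now = time.monotonic()
        if heartbeat["value"] != last_seen:
            last_seen = heartbeat["value"]
            last_change = now                 # heartbeat arrived, reset timer
        elif now - last_change > WATCHDOG_TIMEOUT:
            print("heartbeat lost -> drive all NI 9401 DIO lines low")
            return
        time.sleep(0.01)

threading.Thread(target=rt_side, daemon=True).start()
watchdog = threading.Thread(target=fpga_side)
watchdog.start()

time.sleep(1.0)
rt_alive.clear()     # simulate the RT target hanging: heartbeat stops toggling
watchdog.join()      # the "FPGA" side trips and reports the safe-state action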

Regards,
Jimmie Adolph
Systems Engineering Manager, National Instruments Northern European Region

Message 3 of 8

Hello Jimmie,

 

thanks a lot for your detailed answer and the suggested tips and solutions. I found out that there is a "Real-Time Execution Trace Toolkit", so I will give it a try to analyze my code further – or do you think it would not be helpful? I will also answer your points below to explain my control structure in more detail, so you can get a better understanding of my project.

 


Jimmie A. wrote:
>... I would implement the FPGA code differently, but since that is not your main issue here we can ignore it for now. I am somewhat curious, though – you have a sector control that is fed into multiple case structures inside your sequence structure. What do the other cases in those case structures do?
  • It is a space vector control. That means the other cases in the FPGA case structures just apply different true/false values to the same DIO channels, depending on the sector of the space vector (there are 6 sectors in total).

 

>... All the messages you are receiving indicate that you are choking the cRIO controller, i.e. the workload is so high that you starve the user interface thread, which is what updates the GUI of your RT application. First of all, you should only use a user interface on the RT application for debugging purposes; in the end you need a separate host application.

 

  • I am planning to use a host application to interface with the real-time target, but I have not implemented it yet. Do I have to use shared variables for this, or are there other ways to implement and connect the host interface to the real-time target?

 

>... Finally, you are using two timed loops – why not use an ordinary while loop instead of the lower-priority timed loop? A timed loop executes in a single thread and is good for high-priority tasks with minimal parallel code (pipelining, for example, is not supported in timed loops in the same way as in an ordinary while loop), whereas an ordinary while loop works well with parallel code.

 

  • Thank you for this explanation; it makes the difference between them much clearer to me. I have now rearranged several parts between the loops, putting the important calculations in the high-priority timed loop and the interface controls and output in a normal while loop. In this case I assume that LabVIEW tries to meet the deadlines of the high-priority timed loop and just runs the while loop in between, whenever there is time left. Or did I understand this wrong?

>You could implement a heartbeat signal where the FPGA reads a control updated by the RT code, and whenever that control is not updated within a certain time period the FPGA sets your DIO lines to a safe state. Toggle a Boolean in your RT code at a certain interval, and if that interval is exceeded when measuring the time on the FPGA, take whatever action you like. See the basic idea in the attached picture.

 

  • This looks like a really good way to implement a kind of emergency stop if the connection to the host is lost, thanks a lot. I tried to implement your suggested code, but I cannot find the green VI with the arrow over the star that is connected to the "indicator being read by FPGA" variable and goes to the "Not Equal?" function. It looks like some kind of memory VI, but I cannot find it in my LabVIEW (2009 SP1). Can you give me the name of this VI so that I can search for it?

 

Thanks a lot in advance for your help.

 

Regards

 

Andy

Message 4 of 8

Hello Andy,

The Real-Time Execution Trace Toolkit will be helpful in that respect: it shows you where you are spending time in your application. You can also see when memory allocations occur (not a good thing in the world of real-time operating systems) by enabling detailed tracing. Remember one thing, though: if the CPU load is too high and you cannot kill the application, you might not be able to see the results, depending on where you have placed the function that sends the trace to the host or saves it to disk. One idea would be to start the trace right away, before entering the loops, and after XXX samples/iterations save the trace to disk on your target. If you then need to reset the device, you can FTP into the controller, retrieve the trace, and open it in the Real-Time Execution Trace Toolkit. Does that make sense, Andy?

OK – now it makes more sense what you are trying to do in your FPGA code.

Network-published shared variables are an option that many customers use, since they work pretty well for single-point applications and are easy to use. Another option would be to use TCP/IP directly, or maybe the Simple TCP/IP Messaging protocol, which is based on TCP/IP but has a higher abstraction layer and is easier to use. You can download it here:

http://zone.ni.com/devzone/cda/epd/p/id/2739

And read about it more here:

http://zone.ni.com/devzone/cda/tut/p/id/4095
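To give a rough idea of what such a messaging layer does (length-prefixed messages over a TCP connection), here is a small Python sketch – the message name and the framing are just assumptions for illustration, not the actual STM wire format:

# Illustration of length-prefixed framing over TCP: [4-byte length][name]\0[payload]
import socket
import struct

def send_message(sock, name: str, payload: bytes):
    """Frame a message and send it over an established connection."""
    body = name.encode() + b"\x00" + payload
    sock.sendall(struct.pack(">I", len(body)) + body)

def recv_message(sock):
    """Read one length-prefixed message and split it back into name/payload."""
    (length,) = struct.unpack(">I", _recv_exact(sock, 4))
    body = _recv_exact(sock, length)
    name, _, payload = body.partition(b"\x00")
    return name.decode(), payload

def _recv_exact(sock, n):
    data = b""
    while len(data) < n:
        chunk = sock.recv(n - len(data))
        if not chunk:
            raise ConnectionError("connection closed")
        data += chunk
    return data

# Quick self-test over a local socket pair; in practice the RT side would send
# and the host's normal-priority loop would receive.
a, b = socket.socketpair()
send_message(a, "iq_setpoint", b"0.35")
print(recv_message(b))   # -> ('iq_setpoint', b'0.35')
a.close(); b.close()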

Your timed loop will get most of the CPU time and will run with a higher priority than your while loop; the while loop will run whenever time permits. Your timed loop will run deterministically, and this is a normal architectural choice: you divide the application into different priorities so you can control which portions of the code need to behave deterministically.

Network code (TCP/IP) and writing to disk are typical things that you should put in the while loop that runs with normal priority, whereas things related to acquisition and signal processing, where timing is important, should run in your high-priority loops/VIs.

When the high-priority thread is done, it releases control of the CPU by completing what it is supposed to do or by going to sleep. Lower-priority threads can then execute…
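As a rough analogy in plain Python (not LabVIEW – the real mechanism would be a timed loop plus an RT FIFO), the split between a time-critical loop and a normal-priority loop looks like this:

# Sketch: a time-critical producer hands results to a lower-priority consumer
# through a FIFO; the consumer does the "slow" work whenever time permits.
import queue
import threading
import time

fifo = queue.Queue(maxsize=1000)   # stands in for an RT FIFO between the loops
running = threading.Event()
running.set()

def time_critical_loop():
    """1 kHz-style loop: only the control math, no UI, disk or network I/O."""
    next_deadline = time.monotonic()
    i = 0
    while running.is_set():
        next_deadline += 0.001
        result = i * 0.5               # placeholder for the SVM timing math
        try:
            fifo.put_nowait((i, result))
        except queue.Full:
            pass                       # never block the deterministic loop
        i += 1
        delay = next_deadline - time.monotonic()
        if delay > 0:
            time.sleep(delay)

def normal_priority_loop():
    """Runs when the critical loop sleeps: UI updates, disk, network go here."""
    while running.is_set() or not fifo.empty():
        try:
            i, result = fifo.get(timeout=0.1)
        except queue.Empty:
            continue
        if i % 200 == 0:
            print(f"iteration {i}: result {result}")   # the 'slow' work

threading.Thread(target=time_critical_loop, daemon=True).start()
consumer = threading.Thread(target=normal_priority_loop)
consumer.start()
time.sleep(1.0)
running.clear()
consumer.join()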

The function you could not find is the Feedback Node, which you will find on the Functions palette under Programming»Structures, i.e. in the same place as the while loops and for loops. Don't forget to rate a good answer!

Regards,
Jimmie Adolph
Systems Engineering Manager, National Instruments Northern European Region

Message 5 of 8

Hello Jimmie,

 

thanks again for the very fast response. I tried the Real-Time Execution Trace Toolkit, and it gives me good information on how long several parts of my code take to execute and where it may be possible to optimize the project. If I set the timed loop rate a little lower (1 ms period), the RT target does not get disconnected and I can see everything in the trace toolkit. For the faster execution (500 µs), I will try your tip of saving the trace to internal memory and accessing the file later via FTP.

 

For my host application, it seems the easiest way is shared variables. Or do the Simple TCP/IP Messaging protocol and its VIs consume less CPU power and make the RT VI faster? Furthermore, I am wondering how to start the VI on the real-time target on the cRIO from the host application. Is there a function for connecting to and starting the RT VI's block diagram?

 

Thanks again.

 

Regards

 

Andy

Message 6 of 8

Hello Andy,

It sounds good that you are up and running with the Real-Time Execution Trace Toolkit; I think it gives you really valuable information.

Shared variables are the easiest for sure, but based on my experience I would say that the Simple TCP/IP Messaging protocol, if set up correctly, works better for demanding applications.

Normally you would build your real-time application into an executable using the LabVIEW Application Builder (included in LabVIEW Professional), and then you have the option to set this executable as startup, which means that whenever you power on your cRIO that executable starts running.

There is no single block/function on the Functions palette that does what you ask, but using VI Server is also an option. With it, it is possible to start a VI on a cRIO target from your Windows application, for example. The RT application needs to be transferred to the cRIO target via FTP so that it resides on disk; it is then possible to open a VI Server connection to the cRIO, and then to the VI, and tell it to run. You need to ensure that all subVIs and so on are present as well, and to ensure this you typically create a source distribution using the LabVIEW Application Builder.

Regards,
Jimmie Adolph
Systems Engineering Manager, National Instruments Northern European Region

Message 7 of 8

Hello Jimmie,

 

thanks again for your fast answer and help. The RT target VI is now running correctly and fast enough – the first step is finished. My next step is some holidays 🙂, and afterwards I will try to deploy the RT application directly on my cRIO as an executable and run it there with the help of VI Server. Also thanks for this detailed explanation.

 

Regards

 

Andy

 

Message 8 of 8