Real-time system with Simulation Interface Toolkit

Hi all,

I have developed a controller model in MATLAB/Simulink and compiled it into a DLL using the nidll.tlc target specification.
I am now using the LabVIEW Simulation Interface Toolkit (SIT) to communicate with the DLL.
In the SIT connection manager I specify "localhost" as the real-time target, since I want the control algorithm running on my desktop PC, and I define the signal mappings.
The resulting driver and host VIs work fine, except that they do not run in real time, even though LabVIEW only uses a small fraction of my CPU power.

Can anyone tell me how to get them running in real time? Do I have to embed the host and driver VIs into a real-time project?
Or do I have to use an RTX target instead of "localhost"?

I am using LabVIEW 8.2 and SIT 3.0.2.

Thanks!
Yves

Message 1 of 18

Hi Yves,

Typically, a model DLL running on localhost should run very close to real time. Since a desktop OS is non-deterministic, there will probably be considerable jitter, but that should not slow down the overall execution of your model. Just to clarify, at what rate do you see your model running on localhost? How does this rate compare to the execution speed of your model in Simulink? If you have any waveform charts mapped in your client VI, they should display the simulation time as it runs.

Also, have you run any other models and seen the same issue you described?  It would be interesting to see if the same thing happens with any of the SIT shipping examples.

Chris M

Message 2 of 18
Hi Chris,

On localhost my model runs about 60x slower than real time (so I am not even certain I can ever run it in real time). However, in Simulink I can run the same model about 5-6x faster than real time (in accelerator/compiled mode). The period of my model is 30 microseconds.

I have run the sine wave shipping example in real-time mode and it works correctly.

Thanks!
Yves
Message 3 of 18

Yves,

I have a few questions about your application. First, you said you specify localhost as the real-time target. Are you doing this to test the DLL? The localhost will always be considerably slower than a real-time target because of the overhead involved. Do you see the same issue if you run the model on an actual real-time target with a real-time operating system? Also, how are you measuring the speed of your model? You said it is running 50-60x slower; how is that being measured? You mentioned several times that things were slower than "real time". Can you explain what you mean by "real time"? I think you may have the wrong definition there.

 

reggier

Message 4 of 18

Yves,

With a 30 us time step, I'm not surprised that your model runs slowly. You said it runs much faster in accelerated mode, but how fast does the Simulink model run in normal mode? That should be more comparable to what you see with a DLL model on localhost. When a DLL is built, no simplification is done to the model that would make it run as fast as the accelerated model. Does the accuracy of your simulation depend on having a 30 us time step? Increasing this period would be the best way to make your model run faster.

Chris M

Message 5 of 18

Reggier,

Right now I am indeed trying to test my DLL; in particular, I want to make certain I can run it in real time. Later on, I want to add a DAQ device to interface my physical device with the control algorithm running on my PC. So I don't really have any other real-time target, but if localhost is not suitable for real-time control, I was thinking about using an RTX target.

By 50-60x slower I mean that it takes about 1 minute for the plot on my waveform chart to scroll through 1 second of simulation time. What I want is for new values to be output every 30 us (instead of every 1.8 ms), which is the sampling period I specified in my controller. In other words, time in my simulation should run just as fast as in the real world.
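The factor follows directly from those two periods; as a quick arithmetic sketch (plain Python, nothing SIT-specific):

```python
# Sanity check on the figures in this thread (plain arithmetic).
model_period = 30e-6       # controller sampling period: 30 us
observed_period = 1.8e-3   # effective update period seen on localhost: 1.8 ms

slowdown = observed_period / model_period
print(f"slowdown factor: {slowdown:.0f}x")  # prints "slowdown factor: 60x"
```

That is one minute of wall-clock time per simulated second, matching the scrolling rate described above.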

 

Thanks!

Yves

Message 6 of 18

Chris,

When I run my Simulink model in normal mode, it is still about 2x faster than real time. I assumed that compiling the Simulink model with nidll.tlc would apply at least some of the simplifications and overhead removal that accelerator mode applies.

The accuracy of my model does depend on the period; at most I might be able to double it, which does not really solve my problem. Also, LabVIEW only uses part of my CPU power, so wouldn't it be possible to increase that?

Thanks!

Yves

Message 7 of 18
If you are running the simulation on your Windows computer, what is slowing it down is not the fact that you've compiled the code into a DLL. The problem is that the Simulation Interface Toolkit is designed to run a simulation at a specific rate, not as fast as possible. A Wait function is called in the base-rate loop to throttle execution to that rate. But Windows doesn't provide a MHz clock to applications, only a kHz clock, which means you are limited to loop periods of around 2 ms.
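To make the throttling effect concrete, here is a hypothetical sketch (in Python rather than LabVIEW, since SIT's base-rate loop can't be shown as text) of what happens when a software-timed loop asks a desktop OS for a 30 us wait; the exact overshoot depends on the OS and its timer resolution:

```python
import time

def average_loop_period(requested_s, iterations=100):
    """Run a throttled loop and return the average period actually achieved."""
    start = time.perf_counter()
    for _ in range(iterations):
        time.sleep(requested_s)  # stands in for the Wait call in SIT's base-rate loop
    return (time.perf_counter() - start) / iterations

requested = 30e-6  # the 30 us model period discussed in this thread
actual = average_loop_period(requested)
print(f"requested {requested * 1e6:.0f} us, achieved ~{actual * 1e6:.0f} us per iteration")
```

On a stock desktop OS the achieved period is typically far above 30 us, which slows the whole simulation by roughly that same ratio; a real-time OS with a MHz clock is what removes this floor.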

Real-time operating systems export a MHz clock and as such allow the Simulation Interface Toolkit to run models at loop rates higher than 1 kHz. Your simulation ran so fast in Simulink because that application doesn't try to throttle the loop at all; it simply runs it almost as fast as it can.
Jarrod S.
National Instruments
Message 8 of 18
Thanks a lot for that information. The ratio of 2 ms to 30 us explains the factor by which my simulation is too slow pretty well.

So if I used an RTX target instead of localhost, could that provide a MHz clock?

Yves
Message 9 of 18
Hmmm... sorry to say I really don't know too much about RTX. I would imagine it supports MHz clock rates, but definitely don't quote me on that. From this web page it appears that it doesn't support Timed Loops with MHz or even kHz clocks, so that could be a real problem. Also, RTX systems have much more limited support for I/O; the DAQmx driver, for instance, doesn't work with RTX.

I think a much better option in terms of performance and I/O capabilities would be to turn your computer into a Desktop ETS RT system. This would allow you to use the NI-DAQmx driver with DAQ boards for I/O, and would give you MHz clock rates just like a PXI Real-Time system. You could configure your computer to dual-boot if you still need it to serve as a regular computer as well.

Check this page for more information. I think this method would give you the most benefits.
Jarrod S.
National Instruments
Message 10 of 18