Typically, a model DLL running on localhost should run very close to real time. Since a desktop OS is non-deterministic, there will probably be considerable jitter, but that should not slow down the overall execution of your model. Just to clarify, at what rate are you seeing your model run on localhost? How does this rate compare to the execution speed of your model in Simulink? If you have any waveform charts mapped on your client VI, they should display the simulation time as it runs.
Also, have you run any other models and seen the same issue you described? It would be interesting to see if the same thing happens with any of the SIT shipping examples.
I have a few questions about your application. First, you said you specify localhost as the real-time target. Are you doing this to test the DLL? Localhost is always going to be considerably slower than a real-time target because of the overhead of the desktop operating system. Do you see the same issues when you run the model on an actual real-time target running a real-time operating system? Also, how are you measuring the speed of your model? You said it is running 50-60x slower; how is that being measured? You also mentioned several times that things were slower than "real time". Can you explain what you mean by "real time"? I suspect you may be using that term differently than it is usually defined.
With a 30 us time step, I'm not surprised that your model is running slowly. You said that it runs much faster in accelerated mode, but how fast does the Simulink model run in normal mode? That should be more comparable to what you would see with a DLL model on localhost. When a DLL is built, no simplification is done to the model that would make it run as fast as the accelerated model. Does the accuracy of your simulation depend on having a 30 us time step? Increasing this period would be the best way to make your model run faster.
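To put that 30 us step in perspective, here is a minimal sketch of the real-time budget it implies (the arithmetic only; no per-step cost is assumed):

```python
# Real-time execution means every model step, including DLL call
# overhead, must complete within the 30 us step period.
step_period_us = 30.0
steps_per_sim_second = 1_000_000 / step_period_us
print(int(steps_per_sim_second))  # 33333 steps per simulated second

# Doubling the step period halves the number of steps needed per
# simulated second, which is why a larger step is the most direct speedup.
print(int(1_000_000 / (2 * step_period_us)))  # 16666
```

So at 30 us the host has to finish over thirty thousand step-plus-overhead cycles for every second of simulated time, which is demanding for a non-deterministic desktop OS.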
Right now I am indeed trying to test my DLL; in particular, I want to make certain I can run it in real time. Later on, I want to add a DAQ device to interface my physical device with the control algorithm running on my PC. So I don't really have any other real-time target, but if localhost is not suitable for real-time control, I was thinking about using an RTX target.
By 50-60x slower I mean that it takes about 1 minute for the plot on my waveform chart to scroll through 1 second of simulation time. What I want is for it to output new values every 30 us (instead of every 1.8 ms), which is the sampling period I specified in my controller. In other words, time in my simulation should run just as fast as it does in the real world.
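For what it's worth, the two numbers quoted above are consistent with each other; a quick sketch of the slowdown arithmetic, using only the figures stated:

```python
# Two independent estimates of the slowdown factor.
# 1) Waveform chart: ~60 s of wall-clock time per 1 s of simulation time.
slowdown_from_chart = 60.0 / 1.0
# 2) Output rate: updates arrive every ~1.8 ms instead of every 30 us.
slowdown_from_rate = 1.8e-3 / 30e-6
print(slowdown_from_chart, slowdown_from_rate)  # both ~60x
```

Both estimates land at roughly 60x, so the chart scroll rate and the observed update interval are describing the same slowdown.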
When I run my Simulink model in normal mode, it is still about 2x faster than real time. I assumed that when compiling the Simulink model with nidll.tlc, at least some of the simplifications and overhead removal done in accelerator mode would also be applied.
The accuracy of my model does depend on the period; still, I might be able to double it at most, which does not exactly solve my problem. Also, LabVIEW only uses part of my CPU power, so wouldn't there be a way to increase that?