Hi Ross@UL,
This question has come up a number of times, so let me share my thoughts on the subject.
1) NI-DAQ is HUGE, and many people have asked for scaled-down versions of it. Simulation code would only make it larger.
2) What kind of signal should be simulated? I have customers who watch reference voltages that vary by only microvolts per day. A simple constant could simulate that signal, but it would disappoint the customers who are monitoring Bessel functions.
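To make that point concrete, here is a minimal sketch (plain Python, since NI-DAQ offers no such hook; every name here is hypothetical) of why the signal shape really has to come from the application, not from the driver: the only thing the two customers share is a "sample this waveform at this rate" loop.

```python
import math

def drifting_reference(t):
    """Near-constant 5 V reference drifting roughly 1 uV per day (assumed rate)."""
    return 5.0 + 1e-6 * (t / 86400.0)

def bessel_j0(t):
    """J0(t) from its power series -- a stand-in for a customer's Bessel signal."""
    term, total = 1.0, 1.0
    for k in range(1, 20):
        term *= -(t * t) / (4.0 * k * k)
        total += term
    return total

def simulate(waveform, rate_hz, n_samples):
    """Sample any caller-supplied waveform at a fixed rate."""
    return [waveform(i / rate_hz) for i in range(n_samples)]
```

A built-in simulator would have to pick one `waveform` for everybody; an application-level one just passes in whichever function matches the customer's hardware.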
3) I build simulators into almost all of my applications, just to make it possible to test without having to drive to a customer site. Until recently (i.e., until 1 GHz processors hit the streets), attempts to simulate 120 channels at 1000 Hz would take more CPU than what was left over by the application!
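For what it's worth, the usual trick for keeping a software simulator that cheap is to generate a whole block of samples per timer tick instead of one point per channel per tick. A rough sketch, with hypothetical names and rates taken from the numbers above:

```python
import math

def simulate_block(t0, n_channels=120, rate_hz=1000, block_ms=100):
    """Return block_ms worth of samples for every channel, starting at time t0.

    One call per 100 ms replaces 120 channels x 100 per-sample timer
    events -- the per-sample version is what choked pre-1 GHz machines.
    """
    n = rate_hz * block_ms // 1000
    block = []
    for ch in range(n_channels):
        # give each channel its own phase so the traces are distinguishable
        block.append([math.sin(2 * math.pi * (t0 + i / rate_hz) + ch)
                      for i in range(n)])
    return block
```

The shape mirrors what a buffered DAQ read hands back anyway, so the rest of the application cannot tell simulation from hardware.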
4) What about errors? Should the simulator simulate buffer over-runs, etc.? Now the simulator would have to know the specs of the boards and the system bus behaviour to decide if it should simulate an error.
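And if a simulator did try to model errors, it would need at least a toy model of the board's acquisition buffer, something along these lines (a hypothetical sketch; `capacity_samples` stands in for a board spec, and real over-runs also depend on bus bandwidth, which this ignores entirely):

```python
class SimulatedDaqBuffer:
    """Toy model of a board's acquisition buffer that can over-run."""

    def __init__(self, capacity_samples):
        self.capacity = capacity_samples
        self.pending = 0  # samples acquired but not yet read out

    def acquire(self, n_samples):
        """Hardware side: deposit freshly 'acquired' samples."""
        if self.pending + n_samples > self.capacity:
            # the reader fell behind -- simulate the driver's over-run error
            raise OverflowError("simulated buffer over-run")
        self.pending += n_samples

    def read(self, n_samples):
        """Application side: drain up to n_samples; returns how many were read."""
        taken = min(n_samples, self.pending)
        self.pending -= taken
        return taken
```

Even this crude version forces you to pick a capacity per board model, which is exactly the spec-tracking burden a built-in simulator would take on.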
So....
I can accept the fact that simulation is not built in.
I understand this is not what you wanted to hear, but sometimes it is easier to accept pain if you know there is a reason for it.
Ben