Dear LabVIEW users,
As a final-year student at KdG, I have to complete a final-year project.
The goal is to simulate automotive sensor signals and send them to the ECU (electronic control unit) of the engine management system. The ECU will interpret these signals as the actual conditions in the car while it is driving and the engine is running.
All the other sensor signals have been simulated; now we have to simulate the crankshaft signal. This comes from an inductive sensor mounted next to a flywheel with 35 teeth, where one tooth has been removed after the 35th (a 36-1 wheel). Each passing tooth generates one sine cycle in the inductive sensor. At the spot where the tooth is missing, the sensor generates a sine cycle with a larger amplitude (this is purely a physical effect) and a lower frequency. The ECU triggers on each falling edge of the sine signal; at the missing tooth it takes longer before a falling edge arrives, and this serves as the reference point from which the ECU knows the position of the cylinders in the engine.
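In text form, the idea behind our VI is roughly the following (a Python sketch, not the actual LabVIEW code; the function name and the choice of 32 samples per tooth are just illustrative assumptions, and the gap is modelled as a flat zero segment like in our VI, so the larger amplitude of the real sensor is not modelled):

```python
import math

def crank_waveform(rev_hz, samples_per_tooth=32, teeth=36, missing=1):
    """Build one revolution of a 36-1 crankshaft signal.

    The buffer always holds an exact integer number of samples
    (teeth * samples_per_tooth), so if the card regenerates this
    buffer continuously, the missing-tooth gap lands at the same
    place every revolution, whatever rev_hz is. The sample rate
    needed to play it back at rev_hz revolutions per second is
    returned alongside the samples.
    """
    n = teeth * samples_per_tooth
    sample_rate = n * rev_hz  # samples per revolution * revolutions per second
    wave = []
    for i in range(n):
        tooth = i // samples_per_tooth
        phase = 2 * math.pi * (i % samples_per_tooth) / samples_per_tooth
        if tooth >= teeth - missing:
            wave.append(0.0)        # gap: flat zero for the missing tooth
        else:
            wave.append(math.sin(phase))  # one sine cycle per tooth
    return wave, sample_rate
```

The point of returning the sample rate is that the playback rate follows the engine speed, instead of keeping the card's sample rate fixed and recomputing the waveform.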
As an attachment I've uploaded what we have right now, and a picture of how it should look when it's perfect.
The waveform graph in the VI shows, at every frequency, that after 35 pulses one pulse is equal to zero; there the falling edge comes later, so the ECU knows this is the reference.
But when I measure the signal on an oscilloscope, the zero pulse my PCI card generates is not in the middle of the signal. At 20 Hz it's perfect, but when I use a floating-point (non-integer) frequency, the zero pulse is no longer in the middle (at the reference position).
The zero pulse should be in the middle of the signal at every frequency, so that the ECU always measures a zero pulse after 35 pulses.
How should I solve this problem? The minimum frequency is 14.7 Hz (engine idling) and the maximum frequency is 108.4 Hz (engine at its highest rpm).
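My suspicion in numbers (a Python sketch; the 50 kHz sample rate is only an assumed example, not necessarily what my PCI card actually uses): if the card's sample rate stays fixed while the signal frequency changes, one revolution only maps onto a whole number of samples at "nice" frequencies, and at fractional frequencies the gap cannot stay in the same place from buffer to buffer.

```python
# With a fixed sample rate, check whether one revolution (one full
# 36-1 pattern) corresponds to a whole number of output samples.
sample_rate = 50_000.0  # Hz, assumed example card rate

for rev_hz in (20.0, 14.7, 108.4):
    samples_per_rev = sample_rate / rev_hz
    print(rev_hz, samples_per_rev, samples_per_rev.is_integer())
```

At 20 Hz this gives exactly 2500 samples per revolution, but at 14.7 Hz and 108.4 Hz the result is fractional, which would explain why only 20 Hz looks right on the oscilloscope.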
Thank you very much for every reply,
Stijn