I'm trying to better understand how to calculate the input delay.
According to this page, which is based on the 9237 specifications, at 2 kS/s the input delay is 0.0200097 seconds.
Assuming that the frequency-related part of the equation is in seconds and the constant part is in us (weird to have 2 different units in the same equation...), the result of the calculation would be 0.0200094 seconds (instead of 0.0200097).
Compared to the acquisition rate (2 kS/s = 500 us per sample), this input delay seems huge. Very huge...
But since an equation is usually written with a single specified unit for all its terms, it would mean that the frequency-related part of the equation AND the constant part are both in us. Following this hypothesis, the input delay would be 4.520005 us, which makes a big difference!
So, which calculation is right?
I have done the calculations myself and the number should be 0.0200094 -- you are correct. I will edit the KB so that it shows the correct number. Conceptually, if the numerator of the input delay equation is in samples (I say this because the math then works out in our favor and it makes sense conceptually), the math yields that number. Essentially, the top number is probably the number of samples that R&D has determined would be invalid and give you unwanted results, so you would simply throw that many samples away. That way, the equation is essentially (samples/second)/(kS/s) + microseconds, and you end up with seconds at the very end. Does this make sense?
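To make the unit bookkeeping concrete, here is a minimal sketch of that "numerator is a sample count" reading. The numerator of 40 samples and the fixed offset of 4.5 us are hypothetical placeholders for illustration, not values taken from a datasheet:

```python
# Illustration of the "numerator is a sample count" interpretation.
# N_SAMPLES and CONST_US are hypothetical placeholders, not datasheet values.
N_SAMPLES = 40.0  # samples discarded (e.g. filter settling)
CONST_US = 4.5    # fixed offset, in microseconds

def input_delay_seconds(fs_hz: float) -> float:
    # samples / (samples per second) gives seconds;
    # the fixed offset is converted from us to s before adding.
    return N_SAMPLES / fs_hz + CONST_US * 1e-6

print(input_delay_seconds(2000.0))  # ~0.0200045 s at 2 kS/s with these placeholders
```

With a sample count in the numerator, the frequency term dominates at low data rates while the fixed term stays constant, which is why the choice of units for each term changes the result by several orders of magnitude.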
That is what I was afraid of...
When you say 'the top number is probably the number of samples', it would have been nice to be sure that it really is, because with our modules it does not match what we actually see. When we measure the phase between the same signal acquired by a 9244 and a 9246, we should see approximately 16 degrees (≈790 us), but we only see 1.6 us.
Could you confirm with R&D that the numerator really is a number of samples per second?
Secondly, 'the equation is essentially (samples/second)/(kS/s) + microseconds' is a mathematical error. In mathematics, all terms of an equation must be expressed in the same unit. It would therefore be wiser to write the input delay information in the data sheet this way:
Input delay (s) = 40*5/512/fS + 3.3E-6
Input delay (us) = (40*5/512/fS)*1E6 + 3.3
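As a quick sanity check on the proposed rewrite, here is a minimal sketch using the coefficients exactly as given; note that converting a value in seconds to microseconds means multiplying by 1e6, so the two forms must agree:

```python
def input_delay_s(fs_hz: float) -> float:
    # Proposed form with all terms in seconds
    return 40 * 5 / 512 / fs_hz + 3.3e-6

def input_delay_us(fs_hz: float) -> float:
    # Same delay with all terms in microseconds: the frequency
    # term is scaled by 1e6, the constant is already in us
    return (40 * 5 / 512 / fs_hz) * 1e6 + 3.3

# The two forms should agree to within rounding at any data rate
for fs in (2000.0, 10000.0, 50000.0):
    assert abs(input_delay_s(fs) * 1e6 - input_delay_us(fs)) < 1e-9
```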
Does this make sense?
Looking forward to hearing a confirmation from the R&D team on this subject.
Also, yes, sorry about the mathematical error; I merely wanted to make the units explicit so you could see how they cancel out. I am going to edit the KB so that it reflects the correct number, and I will clarify the document as well.