I'm developing an FPGA program on the NI PXIe-7962R with the NI 5781 A/D, using a single-cycle timed loop in which I read data from the A/D channels.
When selecting a clock, performance seems much better with the IO module clock than with the onboard clock, even when both are set to the same frequency. By better performance I mean fewer spikes in the sampled data.
1) Why is the IO module clock better? It consumes more resources, which makes it harder to get the loop to do more.
2) What is a PLL lock? In some LabVIEW examples, the host waits for a PLL lock before running the FPGA. What does this function do? Can it affect the quality of the sampled data as well?
I'm unable to find any information regarding a decrease in accuracy of the 5781 when using different clocks. Can you provide more information about this change in accuracy you're experiencing? Which example are you using?
I carefully rechecked the code. This time I kept everything exactly the same and only replaced the 100 MHz onboard clock with the 100 MHz IO module clock.
The results are still as I described initially: the IO module clock gives noise-free sampling.
See attached image showing both code and results.
Any ideas why this difference occurs?
What you are seeing is an issue with crossing clock domains from the sample clock in the CLIP to your onboard clock. The compilation tools are not able to analyze this crossing, so data integrity is not guaranteed. If the onboard clock is even slightly off from the sample clock, it can capture the data while it is changing, which corrupts it. The data should be sampled with the IO module clock so that the tools can analyze timing properly. If you would like to do processing in the onboard clock domain, you can use a FIFO to safely cross between clock domains.

Kyle
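To see why "nominally the same frequency" is not good enough, here is a toy Python model of the hazard (purely illustrative, not NI or LabVIEW code; all names and numbers are mine): the ADC data has a short invalid window after each source-clock edge, and a reader on an independent clock with even a tiny frequency error slowly drifts in phase until its sample lands inside that window.

```python
# Toy model of the clock-domain-crossing hazard described above (illustrative
# only): a source updates data on its 100 MHz sample clock, and each update
# has a short "transition" window during which a reader would capture garbage.
# A reader on the same (IO module) clock keeps a fixed, safe phase; a reader
# on a nominally equal but independent onboard clock drifts through the window.

SRC_PERIOD = 10.0   # ns, 100 MHz sample clock period
TRANSITION = 0.5    # ns of invalid data right after each source edge (assumed)

def sample_ok(t_sample):
    """True if the sample instant lands outside the data-transition window."""
    return (t_sample % SRC_PERIOD) >= TRANSITION

def count_corrupt(reader_period, n_samples=100_000, offset=SRC_PERIOD / 2):
    """Count reader samples that land inside the transition window."""
    return sum(not sample_ok(offset + i * reader_period)
               for i in range(n_samples))

same_clock = count_corrupt(10.0)     # source-synchronous: phase never moves
drifting = count_corrupt(10.0001)    # ~10 ppm frequency error: phase walks

print(same_clock, drifting)          # 0 corrupt samples vs. thousands
```

The source-synchronous reader never corrupts a sample, while the independent clock produces periodic bursts of bad reads as its phase sweeps through the transition window, which matches the intermittent spikes reported above.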
Do you know why the IO module clock works well at 100 MHz but not at 40 MHz? When I lower the frequency, the spikes come back even though I'm on the same clock.
How are you lowering the frequency?
I've seen the spikes you are referring to when folks change the Compile for Single Frequency field in the clock properties under the false assumption that it changes the speed the clock runs at; that field only tells the compiler what frequency to assume when analyzing the design for timing.
The getting-started examples use the System Synchronous CLIP, which does not provide an option to divide down the ADC or DAC clock, so if you are using an onboard oscillator my recommendation is to oversample at 100 MHz and then discard the samples you don't want.
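As a host-side illustration of the oversample-and-discard idea (a sketch in Python, not LabVIEW FPGA code; the function names are mine): acquire at the fixed 100 MHz CLIP rate and keep only the samples you need for your lower effective rate. For an integer ratio you keep every Nth sample; for a rate like 40 MHz that does not divide 100 MHz evenly, you keep the acquired sample nearest each ideal output instant.

```python
# Sketch of "oversample at 100 MHz and discard the samples you don't want"
# (illustrative, assumes samples arrive as a flat list at f_acq).

def decimate(samples, keep_every):
    """Keep every Nth sample: effective rate = f_acq / keep_every."""
    return samples[::keep_every]

def decimate_fractional(samples, f_acq_mhz, f_eff_mhz):
    """Keep the acquired sample nearest each ideal output instant."""
    stride = f_acq_mhz / f_eff_mhz           # e.g. 100 / 40 = 2.5 samples
    n_out = int(len(samples) / stride)
    return [samples[round(i * stride)] for i in range(n_out)]

raw = list(range(10))                        # 10 samples taken at "100 MHz"
print(decimate(raw, 2))                      # 50 MHz: [0, 2, 4, 6, 8]
print(decimate_fractional(raw, 100, 40))     # 40 MHz: [0, 2, 5, 8]
```

Note the 40 MHz output is not uniformly spaced in the raw stream (indices 0, 2, 5, 8, since Python's `round()` rounds ties to even); that jitter of up to half an acquisition period is the trade-off of discarding rather than truly reclocking the converter.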
You could also use an external clock, routed through the backplane from a timing and sync module or into the CLK In connector on the front of the module from an external source, and then run your sample clock at whatever arbitrary rate you set on that source.
A third option, which I do not recommend, is using the other CLIP that we ship with the 5781, which allows you to divide the clock routed to the ADC or DAC by 1, 2, 4, 6, 8, or 10. I don't recommend it because this CLIP uses a regional clock rather than a global clock, which limits the usable FPGA space you have available (the compiler will tell you that you are out of resources even though the three quarters of the FPGA that is not clocked off that regional clock is still empty).