Signal Generators


Jitter in signal generator response to digital edge trigger when using NI-TClk with digitizer


I have written a VI that uses NI-TClk to synchronize a signal generator (PXI-5422, named FGEN1) and a digitizer (PXI-5122, named DIGITIZER1).  There is also a third card, TIMING3, generating a digital clock.

 
FGEN1 is in script mode and uses "Wait until scriptTrigger0" to wait for a digital edge from the clock generated on the other card (/TIMING3/PFI1).  In response, FGEN1 generates a burst of waveforms and exports two markers: Marker1 -> PXI_Trig1 (to trigger DIGITIZER1) and Marker0 -> FGEN1/PFI0 (for monitoring).
 
On an external HP oscilloscope, I am monitoring TIMING3/PFI1, FGEN1/CH0, and FGEN1/PFI0.  I can see that FGEN1/CH0 and FGEN1/PFI0 are synchronized, but with each occurrence of an edge on TIMING3/PFI1, the marker on FGEN1/PFI0 systematically moves backwards relative to TIMING3/PFI1.  This results in a systematic jitter on the order of about 2 us.  It seems like it can probably be explained by the way that TClk synchronizes, but I don't quite understand all the details.  Could anyone help explain this to me?
 
I am interested in reducing the TIMING3/PFI1-to-FGEN1/PFI0 (Marker0) jitter, i.e., the jitter in the response of FGEN1 to the digital edge trigger, while still maintaining the tight sync between DIGITIZER1 and FGEN1.
 
Is there anything I can do to improve this?  Or is there a tradeoff here that I should understand better?
 
 
My script:
script myScript1
  repeat forever
    wait until scriptTrigger0
    repeat 32
      generate wfmSine marker0(0,100) marker1(0,100)
      generate wfmDC0V
    end repeat
  end repeat
end script
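
(For anyone following along in a text-based API rather than LabVIEW, a rough equivalent of this FGEN1 configuration using the nimi-python nifgen wrapper is sketched below. The waveform data, lengths, and the marker attribute path are my assumptions, so treat it as illustrative rather than a drop-in replacement for the VI.)

import numpy as np
import nifgen

SCRIPT = '''
script myScript1
  repeat forever
    wait until scriptTrigger0
    repeat 32
      generate wfmSine marker0(0,100) marker1(0,100)
      generate wfmDC0V
    end repeat
  end repeat
end script
'''

with nifgen.Session('FGEN1') as fgen:
    fgen.output_mode = nifgen.OutputMode.SCRIPT

    # Download the two named waveforms the script references
    # (1000-sample placeholder data).
    fgen.allocate_named_waveform('wfmSine', 1000)
    fgen.write_waveform('wfmSine', np.sin(np.linspace(0, 2 * np.pi, 1000)))
    fgen.allocate_named_waveform('wfmDC0V', 1000)
    fgen.write_waveform('wfmDC0V', np.zeros(1000))

    fgen.write_script(SCRIPT)
    fgen.script_to_generate = 'myScript1'

    # scriptTrigger0 fires on the digital edge from the timing card.
    fgen.script_triggers[0].script_trigger_type = nifgen.ScriptTriggerType.DIGITAL_EDGE
    fgen.script_triggers[0].digital_edge_script_trigger_source = '/TIMING3/PFI1'

    # Route the markers: Marker1 to the backplane to trigger DIGITIZER1,
    # Marker0 to the front panel for monitoring (attribute path assumed).
    fgen.markers[1].exported_marker_event_output_terminal = 'PXI_Trig1'
    fgen.markers[0].exported_marker_event_output_terminal = 'PFI0'

    with fgen.initiate():
        input('Generating; press Enter to stop.')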

 

Message 1 of 4

I cleaned up my demo VI a little bit so that it is reasonably understandable.

 

I also tried a few things.  I noticed that when I disabled the calls to the TClk synchronization VIs (the "Use NI TCLK" button), the phase jitter on my digitized waveform ("intra-burst delay jitter") went up to the order of nanoseconds, with a uniform random distribution.  When I then manually set both devices' reference clocks to the PXI backplane clock (rather than their internal references), that phase jitter went back down to a 20 ps gaussian distribution, the same as when using TClk.  That makes sense.
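
(In the text-based APIs, that manual reference-clock setup is just two property writes. A minimal sketch with the nimi-python wrappers; the clock-source identifiers are assumptions from the driver docs, so check them against your driver version:)

import nifgen
import niscope

fgen = nifgen.Session('FGEN1')
scope = niscope.Session('DIGITIZER1')

# PLL each device's timebase to the 10 MHz PXI backplane clock so the
# clocks cannot drift apart, even with NI-TClk disabled.
fgen.reference_clock_source = nifgen.ReferenceClockSource.PXI_CLOCK
scope.input_clock_source = 'VAL_PXI_CLOCK'  # identifier string assumed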

 

When TClk is disabled, the jitter in the response of FGEN1 to the digital edge trigger ("inter-burst delay jitter") is better: on the order of 3 ns rather than 1 us.  What is it that TClk changes that causes this?

 

I am hoping to reduce the inter-burst delay jitter as much as possible.  3 ns is marginally acceptable, but are there any other clocks or settings I can try to improve it to somewhere between 50 ps and 1 ns?

Message 2 of 4
Solution
Accepted by topic author gregoryng

It seems like it can probably be explained by the way that TClk synchronizes, but I don't quite understand all the details.  Could anyone help explain this to me?

 

You are correct. NI-TClk makes sure that all synchronized devices start at almost the same time, down to the same sample clock edge, with very tight synchronization. Sometimes the level of synchronization across NI-TClk-synchronized devices beats the synchronization between channels within a single device on some competitors' instruments. But this doesn't come for free: there are tradeoffs involved, and added jitter in the trigger response time is precisely one of them. Here's an attempt at explaining why:

 

When you don't use NI-TClk and you send a trigger, each device responds to the trigger on its next clock cycle after the trigger signal reaches it. Say you have several devices, all configured with the same clock rate. You lock them to the PXI_Clk10 signal using their PLLs so they don't drift apart, but each device's clock edges will still be offset from the others' by up to +-0.5 clock cycles. If you send a trigger to all of them, each responds on its own next clock edge, after the trigger arrives at each device with its own propagation delay. You get up to one clock cycle of jitter across the devices' reactions to the trigger.

 

When you use NI-TClk, several things happen. All devices are locked to the PXI_Clk10 signal to eliminate drift. Then the device clocks are aligned at a sub-clock-period level, very tightly. Then a common, slower clock signal called TClk is generated inside all the devices. All trigger generation is delayed so that triggers are sent on the next TClk rising edge, and all trigger consumption is delayed so that triggers are received on the next TClk falling edge. This guarantees that differences in propagation delay cannot cause one device to react to the trigger a clock cycle later than the others.
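
(For completeness, in the text-based nimi-python wrapper this whole sequence is driven by a few calls from the nitclk module on already-configured sessions; a sketch, assuming fgen and scope sessions like the ones in the earlier posts:)

import nitclk

sessions = [fgen, scope]                          # nifgen / niscope sessions
nitclk.configure_for_homogeneous_triggers(sessions)
nitclk.synchronize(sessions)                      # aligns clocks, sets up TClk
nitclk.initiate(sessions)                         # all devices start together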

 

This is why you see higher jitter in trigger reaction times: the reaction is now quantized to the slower TClk period rather than to a sample clock period. And when you synchronize devices with different clock settings, the TClk frequency may need to be lower still, so that a frequency that divides evenly into every device's clock rate can be found. This makes the trigger reaction time jitter even larger!
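
(A back-of-the-envelope illustration of both regimes: without NI-TClk the trigger response is quantized to one sample clock period, with NI-TClk to one TClk period. The 200 MS/s figure is the PXI-5422 maximum and the 500 kHz TClk rate is hypothetical; the driver picks the real one.)

# Trigger-response jitter is bounded by the quantization step in each case.
sample_rate = 200e6   # assumed sample rate (PXI-5422 maximum)
tclk_freq = 500e3     # hypothetical TClk rate chosen by the driver
print(1.0 / sample_rate)  # 5e-09 s -> ~5 ns without TClk (order of the 3 ns seen)
print(1.0 / tclk_freq)    # 2e-06 s -> ~2 us with TClk (order of jitter in post 1)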

 

I hope this helps you understand what you are seeing.

Marcos Kirsch
Chief Software Engineer
NI Driver Software
Message 3 of 4

Thanks Marcos, 

 

I must have missed your reply.  That is a good explanation, and it was along the lines of what I was thinking but couldn't form into a complete explanation.  I guess the TClk software is already optimizing the settings to reduce jitter as well as it can.

 

I think that for this application, synchronizing without using TClk is probably the right tradeoff, but it's good to now have a solid handle on what I am trading away.

Message 4 of 4