
clock sample rate seems wrong

I have an NI 7851R in a PXI chassis (LabVIEW 8.6), and I wrote a UART for the FPGA to decode a 500 kbps serial data stream (8-N-1).

I simulated the UART first in standard LabVIEW; it relies on a classic 16x oversample to decode each bit, and it worked perfectly.  Once on the FPGA it decoded data, but the output was all wrong.  Further troubleshooting revealed that the sample clock was running at half of what I expected.  The FPGA loop runs with a 5 clock tick rate, which I am thinking should give me 8 MHz sampling, or 16 samples across the 2 microsecond bit time of the stream.  I instrumented the VI with a DMA pipe to send the 160 samples of a decoded byte ((1 start + 8 data + 1 stop) x 16 samples) to the host.  To my surprise there were only HALF the samples, so instead of a bit having 16 samples it only had 8.  After re-adjusting the UART parameters it worked, BUT why was it running at 4 MHz instead of 8 MHz?  Has anyone seen this behaviour??
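For readers unfamiliar with the 16x oversample technique, here is a minimal Python model (not LabVIEW code, and the helper names `decode_8n1` and `encode_8n1` are made up for illustration): the decoder finds the falling edge of the start bit, then samples each bit at its center, 8 samples (half an oversample period) past the edge plus a whole bit period per bit.

```python
def decode_8n1(samples, oversample=16):
    """Decode one 8-N-1 byte from an oversampled line (idle high).
    samples: list of 0/1 line states taken at oversample x the bit rate."""
    s = list(samples)
    # Find the falling edge that marks the start bit.
    i = 0
    while i + 1 < len(s) and not (s[i] == 1 and s[i + 1] == 0):
        i += 1
    start = i + 1
    mid = oversample // 2
    if s[start + mid] != 0:          # confirm start bit at its midpoint
        return None
    byte = 0
    for bit in range(8):             # sample each data bit at its center, LSB first
        byte |= s[start + mid + (1 + bit) * oversample] << bit
    if s[start + mid + 9 * oversample] != 1:   # stop bit must be high
        return None
    return byte

def encode_8n1(byte, oversample=16):
    """Produce an oversampled 8-N-1 frame: idle, start, 8 data bits LSB first, stop."""
    s = [1] * oversample + [0] * oversample
    for bit in range(8):
        s += [(byte >> bit) & 1] * oversample
    return s + [1] * oversample
```

This is why halving the oversample clock breaks the decode: with only 8 samples per bit, the fixed center offsets land on the wrong bits.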

 

Thank You !!

 

Chris

Message 1 of 5

Your question would be easier to answer if you included your project and the FPGA VI.  If you can't share everything, a subset that demonstrates the problem would be useful.

Kosta
Message 2 of 5

So here is what I found.  As I am sure many of you with the FPGA toolkit have learned, there are some major things you have to remember to code these things "right", so this is for the "noobs" who might read this.  (BTW, I have been using LabVIEW since version 3.0, but I am very new to the FPGA stuff.)

 

1.  You can build a loop that reads a port and use a delay to wait a certain number of clock ticks, BUT as soon as you add even the simplest logic or shift registers, the timing goes out the window and the loop may take longer than the specified number of clock ticks.  My original loop worked this way, and as I added processing the loop would no longer run at 8 MHz.

 

2.  The way out of this movie is to use a single-cycle timed loop, run it at the full clock rate (40 MHz), and then create a counter to sample the data at the desired rate (in my case 8 MHz).

 

The caveat is that a variety of functions do not work in a single-cycle timed loop, and if your processing takes longer than a single cycle it won't work.  But if you need accurate sampling, it's the only way to go as near as I can tell!
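The counter trick in point 2 can be sketched as a Python model (not LabVIEW, and `sctl_sampler` is a made-up name): the loop conceptually runs once per 40 MHz tick, and a counter that rolls over every 5 ticks gates the sampling, giving 40 / 5 = 8 MHz.

```python
def sctl_sampler(line_states, divisor=5):
    """Model of a single-cycle timed loop with a divide-by-5 counter.
    line_states: one line reading per 40 MHz clock tick.
    Returns the readings taken every `divisor` ticks (i.e. at 8 MHz)."""
    samples = []
    counter = 0
    for state in line_states:       # one iteration per 40 MHz tick
        if counter == 0:
            samples.append(state)   # sample on tick 0 of each 5-tick period
        counter = (counter + 1) % divisor
    return samples
```

The key property is that the counter advances every clock tick regardless of what else the loop does, so the sample rate cannot drift the way a delay-based loop can.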

Message 3 of 5

It is true that using the Single Cycle Timed Loop (SCTL) will give you the most control over execution timing on the FPGA, and for your particular application it makes the most sense to use one.  However, as you mentioned, there are some functions that are not supported inside a SCTL.  One thing I want to note is that we can still have accurate timing inside a normal while loop, provided we understand what is actually happening in the hardware.  If this were not the case, we would have no control over the sample rates of an analog channel, which cannot be run inside a SCTL.

 

There are two separate timing functions on the FPGA Timing palette that I want to address.  The first is the Wait function.  This function does just what it says, it waits until the specified amount of time has elapsed and then allows other code in the flow to execute.  This means that if you have a wait, followed by some other code, the loop will take more than the specified amount of time to complete because other code execution is not factored into the wait.  In some instances, this is desirable, but what it sounds like you are looking for is the Loop Timer function.

 

The Loop Timer basically behaves as if it were taking a timestamp the first time it is called.  Then, code after this function executes immediately.  The next time it is called, the Loop Timer compares the current timestamp to the previous timestamp.  If the specified time has not elapsed, it waits.  Once the time does elapse, the subsequent code executes immediately again.  So, the loop will actually run at the specified rate unless the code inside the loop takes longer to execute.

 

I’ve attached two images to show the difference in behavior.  Each of these loops with a wait time of 0 or 1 tick specified will take 9 ticks to run.  However, when we put the wait time to 10 ticks, notice that the loop with the Wait now takes 18 ticks while the loop with the Loop Timer only takes 10 ticks.  This code does not accomplish much other than to show that you can still get accurate timing in a while loop, but there is a big difference in how the loop is timed.  The tick times for each of the first frames on the diagram are for a count of 10 ticks specified for the wait.
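The difference can be reduced to two lines of arithmetic.  In this Python model (helper names are made up for illustration), the body of the loop is taken to be 8 ticks, which matches the post's numbers: a 9-tick loop with a 1-tick wait implies an 8-tick body.

```python
def loop_period_wait(body_ticks, wait_ticks):
    """Wait function: the wait executes in sequence with the body,
    so the iteration time is the sum of the two."""
    return body_ticks + wait_ticks

def loop_period_loop_timer(body_ticks, timer_ticks):
    """Loop Timer: pads the iteration up to timer_ticks, but the body
    runs free if it already takes longer than the timer setting."""
    return max(body_ticks, timer_ticks)
```

With a 10-tick setting and an 8-tick body, the Wait loop takes 8 + 10 = 18 ticks while the Loop Timer loop takes max(8, 10) = 10 ticks, matching the numbers above.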

 

Using a loop timer, you could then add pipeline stages to your code inside the while loop.  This means you could still sample at the correct rate, there would just be a latency between when the sample was taken and when it was sent to the DMA FIFO.  Depending on how/if you are packing bits before sending the sample to the DMA FIFO, you should be able to get an 8 MHz sample rate and send data through DMA in the same while loop.
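The latency trade-off of pipelining can also be shown with a small Python model (not LabVIEW; `pipelined` is a made-up name).  Shift registers act as pipeline stages: every iteration accepts one new sample and produces one result, but each result corresponds to the sample taken `stages` iterations earlier.

```python
def pipelined(samples, stages=3):
    """Model a pipelined loop: one sample in and one result out per
    iteration, with results delayed by `stages` iterations."""
    pipe = [None] * stages       # models `stages` shift registers
    out = []
    for s in samples:
        out.append(pipe[-1])     # result from `stages` iterations ago
        pipe = [s] + pipe[:-1]   # shift every stage forward one step
    return [x for x in out if x is not None]   # drop the start-up bubbles
```

Throughput is unchanged (one sample per iteration), which is why the 8 MHz rate survives; only the delay before the first valid result grows.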
Message Edited by Donovan B on 03-16-2010 01:52 PM
Donovan
Message 4 of 5

Excellent info!  I had started with one of the NI example VIs, and it could very well have had a Wait instead of a Loop Timer.  Seems kind of obvious; I will probably go look at it and see if it indeed has a Wait.  That could actually explain a lot, since it did not seem like the code should take that long to execute.

 

Thank You !!!!

Message 5 of 5