LabVIEW


Spartan 3E low performance ADC

Hi, I have a Spartan 3E board and LabVIEW 2010 with the XUP driver properly installed.

 

When I program the FPGA board in VHDL, I get (more or less) a 1.3 MHz ADC acquisition sample rate.

But when I program it in LabVIEW FPGA, the loop time for a simple continuous acquisition is about 8 µs (corresponding to a sample rate of about 125 kHz).

 

I know that each flat sequence frame I use with only an FPGA I/O node inside takes more or less 4-10 ticks, when it should take only one tick.

 

 

Could anyone explain this to me?

 

Thanks in advance.

 

 

Message 1 of 8

Hi,

 

Thank you for posting your question on National Instruments' Forums.

 

Could you be more specific about which I/O node you are using?

 

A DIO node should indeed take only 1 clock tick to execute, but that is not the case for AI and AO nodes, which typically take 15 ticks and 32 ticks respectively.
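
 

To put those numbers in perspective, assuming the 50 MHz base clock of the Spartan-3E target (20 ns per tick), a rough sketch of the resulting ceilings would be:

    DIO node:  1 tick  x 20 ns =  20 ns
    AI node:  15 ticks x 20 ns = 300 ns  ->  at best ~3.3 MS/s from the node alone
    AO node:  32 ticks x 20 ns = 640 ns  ->  at best ~1.5 MS/s from the node alone

Any sequence frames or other dataflow wrapped around the node add further ticks on top of these figures.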

 

Also, if you have other code in addition to the I/O node, have you tried creating a single-cycle Timed Loop (SCTL) to speed up the execution time?

 

If you could provide your FPGA VI, it would help me understand the root cause of your issue.

 

Best regards,

 

Guillaume H.
National Instruments France

Message 2 of 8

In LabVIEW FPGA, when you are dealing with sequence frames and normal dataflow outside of a single-cycle Timed Loop (which guarantees that the contents of the loop execute in a single clock cycle), there is a non-visible framework that enforces dataflow, known as the enable chain, and it adds overhead to your design. It is described in the following article:

 

http://zone.ni.com/reference/en-XX/help/371599G-01/lvfpgaconcepts/fpga_sctl_and_enablechain/

 

Also, as Guillaume H noted, some nodes require more than one clock cycle to execute (it is exactly this kind of multi-cycle node that makes the enable chain necessary).

 

Basically, you could conceivably create a design that matches the performance you saw with the HDL-coded design, for example by using a case structure inside an SCTL to build a simple state machine that performs the acquisition, but it is perhaps not the most obvious design.

Message 3 of 8

Hi Guillaume H and BradM,

 

I have attached my code.

 

I hope that will help you.

 

Thanks,

 

Gabriel

Message 4 of 8

Each one of those frames adds at least one, maybe two (I'd have to check again), ticks to the execution time of its contents, and this adds up as you progress through the loop. However, doing some quick back-of-the-envelope math, even those delays don't account for the low sample rate you're seeing (something on the order of 40 clock cycles for the HDL vs. 400 for the LabVIEW implementation). Do you mind posting the actual project you are using?
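
 

As a rough conversion of those cycle counts into rates (assuming the 50 MHz / 20 ns base clock of this target):

    ~40 cycles  x 20 ns ≈ 0.8 µs  ->  ~1.25 MS/s  (close to the 1.3 MHz HDL figure)
    ~400 cycles x 20 ns =  8 µs   ->  125 kS/s    (the LabVIEW FPGA figure)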

Message 5 of 8

Hi BradM,

 

(in the meantime I was working on other projects)

 

My project is in the attachment.

 

Thanks in advance for your contribution. I hope this can help clarify my question.

 

 

Message 6 of 8


At what point are you measuring the time required to acquire a sample? If you are looking at the transfer rates to the host VI, those rates are going to be low as a result of the bottleneck the JTAG link introduces, and you have no real hope of getting near the 1.5 MS/s rate. If you are looking at the timestamps in ADC.vi, the issue is more concerning, but there are multiple instances of sub-optimal design. Considering you are looping over a flat sequence that has 4 frames, and each frame contains some diagram that takes at least one additional cycle, it is not too surprising that you are seeing the rates that you are (conservatively, each iteration takes on the order of 8-10 cycles; multiply that by 32 and you begin to see why the acquisition takes around 400 cycles with the LabVIEW FPGA design).
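
 

A rough sketch of that arithmetic (the per-frame overhead is an estimate, not a measured figure):

    4 frames x (~1-2 ticks of frame/enable-chain overhead + the frame contents) ≈ 8-10 ticks per iteration
    8-10 ticks x 32 iterations ≈ 256-320 ticks, plus loop overhead and the conversion itself ≈ 400 ticks
    400 ticks x 20 ns ≈ 8 µs per sample ≈ 125 kS/s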

 

In order to get the tighter timing that you see in the HDL code, you would need to replace the various flat sequences with a finite state machine housed in a single-cycle Timed Loop, so that each of the logical steps in your design takes one cycle to complete, which is undoubtedly how you wrote the HDL. I would recommend reviewing this handy introduction to FSMs in LabVIEW FPGA and trying to replace the simple while loop with a single-cycle loop. Also, look at the datasheet for the LTC1407 family of ADCs to determine what the timing needs to be for the SPI communication (example: with a 25 MHz SPI clock, as provided by an FSM running in a single-cycle Timed Loop, you can probably capture SDO on the same cycle in which you de-assert SPI_CLK, since the delay from rising SCK to SDO being valid is 6 ns and the duration from SCK rising to falling is one SCTL cycle, 20 ns).
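
 

Purely as illustration (since a LabVIEW diagram can't be pasted inline here), below is a minimal sketch of what that case-structure-in-an-SCTL state machine looks like written out as HDL: each "when" branch corresponds to one case of the case structure, and each rising edge of the 50 MHz clock corresponds to one 20 ns SCTL iteration. All signal names, the 32-bit frame length, and the single-cycle CONV pulse are placeholder assumptions, not taken from the datasheet; the real frame length and conversion timing must come from the LTC1407 datasheet.

-- Illustrative sketch only: a state machine clocked at 50 MHz that toggles
-- SCK every cycle (25 MHz) and samples SDO on the cycle where SCK is
-- de-asserted, mirroring the SCTL/case-structure design described above.
-- The 32-bit frame and the single-cycle CONV pulse are placeholders; check
-- the LTC1407 datasheet for the real frame length and conversion timing.
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity adc_spi_fsm is
  port (
    clk50  : in  std_logic;                      -- 50 MHz base clock (one SCTL tick)
    start  : in  std_logic;                      -- begin one acquisition
    sdo    : in  std_logic;                      -- serial data from the ADC
    conv   : out std_logic;                      -- conversion start pulse
    sck    : out std_logic;                      -- 25 MHz serial clock
    sample : out std_logic_vector(31 downto 0);  -- captured frame
    done   : out std_logic
  );
end entity;

architecture rtl of adc_spi_fsm is
  type state_t is (idle, convert, sck_high, sck_low, finish);
  signal state   : state_t := idle;
  signal bit_cnt : unsigned(5 downto 0) := (others => '0');
  signal shreg   : std_logic_vector(31 downto 0) := (others => '0');
begin
  process (clk50)
  begin
    if rising_edge(clk50) then
      conv <= '0';
      done <= '0';
      case state is                          -- one branch = one case of the SCTL case structure
        when idle =>
          sck <= '0';
          if start = '1' then
            state <= convert;
          end if;
        when convert =>
          conv    <= '1';                    -- placeholder: real CONV width / conversion wait per datasheet
          bit_cnt <= (others => '0');
          state   <= sck_high;
        when sck_high =>
          sck   <= '1';                      -- SCK high for one 20 ns tick
          state <= sck_low;
        when sck_low =>
          sck   <= '0';                      -- de-assert SCK; SDO became valid ~6 ns after the
          shreg <= shreg(30 downto 0) & sdo; -- rising edge, so it can be captured on this cycle
          if bit_cnt = 31 then
            state <= finish;
          else
            bit_cnt <= bit_cnt + 1;
            state   <= sck_high;
          end if;
        when finish =>
          sample <= shreg;
          done   <= '1';
          state  <= idle;
      end case;
    end if;
  end process;
end architecture;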

 

Let us know if you have further questions.

Message 8 of 8