
Estimating FPGA compile success rate

I'm wondering if other users have faced the same problem we currently do and if so, how do you deal with it?

 

We have some FPGA code which is "hard to compile".  We can have up to 20 failed compilations in a row.  We can, however, also have four or five successful compilations in a row; the long runs of failures are far more prevalent, though.  If we make some minor change prior to a successful compilation, human nature demands we attribute the success to that minor change, but alas, time and experience are whispering in my ear that the whole correlation versus causation thing is completely the wrong way around for FPGA compiles (i.e. the correlation suggests no causation whatsoever).

 

I ask this because we're trying to optimise some of our CLIP code which is tied to certain hardware pins and thus places strong limitations on the routing algorithm.  Since most of our timing violations come from routing rather than logic (with routing delays typically being 2-3x larger than logic delays), this would seem to be a logical place to try to gain some routing headroom.  In order to do this, we would ideally need to be able to make statements like "this change improved compilation success by X%".  But in a world of wildly varying results and long compilation times, how do we go about this?  I have already borrowed a co-worker's login details for a cloud compile account to allow at least 10 simultaneous cloud compiles, but even this is frustrating: one set of 10 can give a 0% success rate, another 80%.  I'm starting to consider plotting the phase of the moon along the X axis of my success rate chart.
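To put numbers on how little a batch of 10 tells you: compile success is a binomial proportion, so each batch can carry a confidence interval. Here is a minimal sketch in plain Python (nothing LabVIEW-specific; the pass counts are invented for illustration) using the Wilson score interval:

```python
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return max(0.0, centre - half), min(1.0, centre + half)

# Invented batches: a bad day, a good day, and a pooled larger sample.
for passed, total in [(0, 10), (8, 10), (24, 60)]:
    lo, hi = wilson_interval(passed, total)
    print(f"{passed}/{total} passed -> true rate plausibly in "
          f"[{lo:.0%}, {hi:.0%}]")
```

With batches of 10, each interval spans tens of percentage points, so "before" and "after" batches have to differ enormously before the change, rather than seed luck, is the credible explanation; resolving a modest improvement needs on the order of 50-100 compiles per variant.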

 

Is there a general rule of thumb regarding this kind of statistic or is the wish for such a thing just pie in the sky?

Message 1 of 6

You could try removing the FFT you used to extract the moon phase. This will free up some space for placing and routing the rest of the code.

Message 2 of 6

What devices are you trying to compile for?  From what I have been told, ISE (the older compiler) was quite random in how it would optimize, and therefore your builds were almost completely random.  But Vivado (used for the Kintex-7 and Zynq chipsets) is supposed to be faster and more consistent.


Message 3 of 6

@crossrulz wrote:

What devices are you trying to compile for?  From what I have been told, ISE (the older compiler) was quite random in how it would optimize, and therefore your builds were almost completely random.  But Vivado (used for the Kintex-7 and Zynq chipsets) is supposed to be faster and more consistent.


I'm pretty sure that there is (or used to be) a random seed that decides where the placer starts.
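For what it's worth, in ISE that seed surfaces as the placer "cost table". If you can drive the Xilinx tools directly on the intermediate files (outside the normal LabVIEW compile flow), a sweep over cost tables gives repeatable, independent placement attempts from identical source. A rough sketch; the file names are hypothetical and parsing the reports is omitted:

```python
import subprocess

# Hypothetical file names: the mapped design and constraints file from
# your own ISE build directory.
MAPPED_NCD = "toplevel_map.ncd"
PCF = "toplevel.pcf"

# par's -t option selects the starting placer cost table (1-100), which
# acts as the placement "seed" in ISE.
for cost_table in range(1, 11):
    out_ncd = f"routed_t{cost_table:02d}.ncd"
    result = subprocess.run(
        ["par", "-w", "-t", str(cost_table), MAPPED_NCD, out_ncd, PCF],
        capture_output=True, text=True,
    )
    # A zero return code only means par completed; whether timing was
    # actually met has to be read from the .par/.twr reports.
    print(f"cost table {cost_table:2d}:",
          "completed" if result.returncode == 0 else "par failed")
```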

 

 

Message 4 of 6

We're currently using Virtex-5, so ISE 14.7.

Although we certainly do see variation between individual compilations, a general trend "seems" to appear: if I start 10 compilations today and 10 tomorrow, the deviation between today and tomorrow will be greater than the deviation within either day's batch.  Compilations started from the same source code (just duplicated build specifications) seem to vary less than copying the actual top-level VI and making a new build specification.

But again, this is all conjecture.  Either way, does anyone have experience trying to narrow down some kind of reliable compilation success percentage?  Aside from compiling 10 times a day for 6 months...
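For the day-to-day question specifically: given pass/fail counts from two batches, you can at least test whether they plausibly share a single underlying success rate. A minimal sketch with invented counts, using SciPy's Fisher exact test:

```python
from scipy.stats import fisher_exact

# Invented counts: (passed, failed) for two 10-compile batches.
day1 = [8, 2]
day2 = [1, 9]

# A small p-value means the two batches are unlikely to come from the
# same underlying success rate, i.e. something really differed between
# the days (source copy, toolchain state, ...).
_, p_value = fisher_exact([day1, day2])
print(f"p = {p_value:.4f}")
```

For 8/10 versus 1/10 the p-value lands around 0.005, so batches that different genuinely point at a between-day factor; something like 8/10 versus 5/10 does not reach significance with samples this small.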

Message 5 of 6

Hello,

 

For compilations, the Xilinx compiler uses heuristic algorithms and therefore does not necessarily generate the same bitstream/bitfile each time for the same code. Additionally, while this may not apply to CLIP code, you could use IP Builder for your G code; it can also estimate resource usage and timing. I have included some documentation below:

 

http://www.ni.com/white-paper/14036/en/

 

For actual timing violations, the issue is a long combinatorial path of logic which needs to execute within a specific time frame. Unfortunately this is where we reach the limits of what is possible; there are, however, a few things that can be done. As I mentioned in a previous post, using an SCTL (single-cycle Timed Loop) removes the enable-chain registers and enforces that a sequence of operations executes in one tick (if possible).

 

Another way to work around timing violations is pipelining: inserting registers to split a long combinatorial path into shorter stages, which then run in parallel on successive data. I've included the main documentation on fixing timing violations below; note that it also mentions that compilations can differ from one another:

 

http://zone.ni.com/reference/en-XX/help/371599G-01/lvfpgaconcepts/fpga_fix_timing_violations/
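To make the pipelining trade-off concrete, here is a worked-numbers sketch (plain Python, with invented delays in the 2-3x routing-to-logic ratio mentioned earlier in the thread) showing why one register in the middle of a failing path can rescue it:

```python
CLOCK_MHZ = 40.0                    # typical LabVIEW FPGA top-level clock
period_ns = 1000.0 / CLOCK_MHZ      # 25 ns budget per tick

# Invented delays for one combinatorial path; routing ~2-3x logic.
logic_ns, routing_ns = 9.0, 22.0
total_ns = logic_ns + routing_ns

print(f"single stage: {total_ns:.1f} ns vs {period_ns:.1f} ns budget ->",
      "VIOLATION" if total_ns > period_ns else "OK")

# Pipelining: a register roughly mid-path splits it into two stages,
# each of which only has to fit in one clock period on its own.
for i, stage_ns in enumerate((total_ns * 0.55, total_ns * 0.45), start=1):
    print(f"stage {i}: {stage_ns:.2f} ns ->",
          "VIOLATION" if stage_ns > period_ns else "OK")
```

The cost is one tick of latency on that path, which is exactly the trade-off described for the CLIP boundary below.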

 

If your CLIP code itself can compile in an SCTL at the data clock rate without actually performing any logic on the inputs/outputs, I would suggest pipelining at the point where you read in the data. That way you actually split the combinatorial path: you will be reading data from one tick ago, but the CLIP and the LabVIEW code will be separated in terms of timing.

 

Best regards,

 

Ed

Message 6 of 6