
[FPGA] Increasing speed in a delay


Hello, I've implemented a fairly simple delay inside a while loop in LabVIEW FPGA, as I haven't found any built-in delay with an adjustable number of cycles (see the attached file). Now, however, I'm trying to increase the speed of my code, and this delay is the bottleneck, taking a minimum of 150 clock ticks to execute.
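
In pseudo-C terms, what I've wired is roughly the following (buffer size and variable names are just for illustration):

    #include <stdint.h>

    #define BUF_SIZE 1000                /* illustrative memory-block size */
    static int32_t buf[BUF_SIZE];
    static int32_t writeIdx = 0;

    /* One loop iteration: write the new sample, read the delayed one. */
    int32_t delay_sample(int32_t input, int32_t delay_cycles)
    {
        /* modulo to get the read pointer; note the result can be
           negative in C when writeIdx < delay_cycles */
        int32_t readIdx = (writeIdx - delay_cycles) % BUF_SIZE;
        buf[writeIdx] = input;                    /* memory write          */
        int32_t output = buf[readIdx];            /* delayed memory read   */
        writeIdx = (writeIdx + 1) % BUF_SIZE;     /* wrap the write pointer */
        return output;
    }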

 

Does anyone have an idea or suggestion on how to implement this delay more efficiently?

Also, what is usually faster: several different while loops feeding one another through FIFOs, or a single loop with shift registers? And are there other differences, besides speed and memory usage?

Message 1 of 5
Solution
Accepted by topic author gerardpc

Hi gerard,

 

As a first step, you should replace any Divide/Modulo function in your FPGA code!

Replace both modulo operations by simple "IF x >= divisor THEN x := x - divisor" constructs.

If you define your memory block to hold 1024 elements, you can replace the modulo operation by an even simpler "x AND 1023" operation!
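
In C terms, both replacements might look like this (the helper names are mine, just for illustration):

    #include <stdint.h>

    #define BUF_SIZE 1024                /* power of two */

    /* Modulo by conditional subtract: sufficient because a pointer
       that only grows by 1 per cycle never exceeds 2*divisor - 1. */
    static inline int32_t wrap_sub(int32_t x, int32_t divisor)
    {
        if (x >= divisor)
            x -= divisor;
        return x;
    }

    /* With a power-of-two size, wrapping is a single AND: x AND 1023. */
    static inline int32_t wrap_mask(int32_t x)
    {
        return x & (BUF_SIZE - 1);
    }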

 

- Additionally, I prefer the Increment function instead of adding a constant "1".

- It seems your memory pointers are defined as I32. You could use an I16 instead; it handles numbers between 0 and 1023 easily!

- You need to make sure your "delay cycles" value doesn't generate a negative memory index due to the subtract function (a guard is sketched below)…
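
A guard for the last point could look like this in C (again only a sketch):

    #include <stdint.h>

    /* Read pointer = write pointer minus delay, wrapped back into
       [0, size) so the memory index can never go negative: */
    static inline int32_t read_index(int32_t writeIdx, int32_t delay_cycles, int32_t size)
    {
        int32_t readIdx = writeIdx - delay_cycles;
        if (readIdx < 0)
            readIdx += size;
        return readIdx;
    }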

See how much improvement you can achieve…

Best regards,
GerdW


using LV2016/2019/2021 on Win10/11+cRIO, TestStand2016/2019
Message 2 of 5

What do you mean by "this delay is the bottleneck, taking a minimum of 150 clock ticks to execute"?

If you change your code according to GerdW's recommendations and, for example, replace the subtraction on the read side with an addition on the write side, your code should be able to run in a single-cycle Timed Loop (SCTL), i.e. in one clock cycle.
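
In C terms, what I mean could look like this (illustrative names, a power-of-two buffer assumed so the wrap is a single AND):

    #include <stdint.h>

    #define BUF_SIZE 1024
    static int32_t buf[BUF_SIZE];
    static int32_t readIdx = 0;

    /* The read pointer only increments; the *write* pointer is
       read + delay, so there is no subtraction and no negative index.
       Assumes 0 <= delay_cycles < BUF_SIZE. */
    int32_t delay_sample(int32_t input, int32_t delay_cycles)
    {
        int32_t writeIdx = (readIdx + delay_cycles) & (BUF_SIZE - 1);
        buf[writeIdx] = input;                     /* write "delay" slots ahead */
        int32_t output = buf[readIdx];             /* read the current slot     */
        readIdx = (readIdx + 1) & (BUF_SIZE - 1);  /* single AND, no modulo     */
        return output;
    }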

Message 3 of 5

Hi, thanks for the recommendations. The execution time decreased drastically.

The problem I have right now is that I'm using a couple of VIs that can't be placed inside an SCTL (the Butterworth filter, for example), so I'm forced to use regular while loops (I don't know if there is any workaround).

 

My other question is: assuming regular while loops, which will be faster, while loops in series fed by FIFOs, or feedback nodes after every calculation step inside a single while loop (to pipeline and parallelize)?

Message 4 of 5

When you use a regular while loop, every single operation is latched and costs a clock cycle. The execution time depends on the critical path, that is, the path with the most sequential operations (without feedback nodes).

 

If you pack parts of your code, for example your delay code, into an SCTL and only run that loop once, you can reduce the execution time for that section of code to one clock cycle (you skip the internal latches), reducing the time needed to execute your critical path.

 

So try to pack as much as you can into SCTLs. If your code ends up having one while loop that contains 5 sequential SCTLs, your execution time will be slightly more than 5 ticks (the while loop itself also costs a few ticks).

 

By the way, N sequential SCTLs can also be implemented as one SCTL running N times (5 ticks in the example above) with your N code sections in the same loop, separated by shift registers (pipelining).
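
In C terms, a 3-stage version of that pattern might look like this (the per-stage arithmetic is arbitrary; only the shift-register pattern matters):

    #include <stdint.h>

    /* Three pipeline stages in one loop; reg1/reg2 play the role of
       the shift registers between stages. Each iteration (roughly one
       SCTL tick) every stage processes a different sample, so outputs
       lag inputs by the pipeline depth (2 ticks here). */
    void pipeline(const int32_t *in, int32_t *out, int n)
    {
        int32_t reg1 = 0, reg2 = 0;
        for (int i = 0; i < n; i++) {
            out[i] = reg2 * 2;    /* stage 3: uses stage-2 result from last tick */
            reg2   = reg1 + 1;    /* stage 2: uses stage-1 result from last tick */
            reg1   = in[i];       /* stage 1: take in a new sample               */
        }
    }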

Message 5 of 5