A general question regarding the resource usage of Block Ram primitives in LabVIEW FPGA.
I'm investigating utilising a group of BRAMs as "interconnects" between several different loops running on my FPGA. I can serialise and deserialise many values through these BRAMs so that the available depth is used more efficiently.
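To make the interconnect idea concrete, here is a minimal behavioural sketch (in Python, since LabVIEW code is graphical). The class name, frame layout, and depths are purely illustrative assumptions, not anything from the LabVIEW FPGA API: one loop serialises one value per channel into a "frame" of BRAM addresses, and another loop deserialises them.

```python
# Hypothetical model of a BRAM used as a serialised interconnect between
# two loops. Names and sizes are illustrative assumptions only.

class BramInterconnect:
    """One writer loop serialises a frame of channel values into
    consecutive BRAM addresses; a reader loop deserialises them.
    Depth consumed per frame = number of channels, so a deep BRAM
    can buffer many frames."""

    def __init__(self, depth, num_channels):
        self.mem = [0] * depth          # models the BRAM contents
        self.depth = depth
        self.num_channels = num_channels

    def write_frame(self, frame_index, values):
        # Serialise: one channel value per address within the frame.
        assert len(values) == self.num_channels
        base = (frame_index * self.num_channels) % self.depth
        for ch, v in enumerate(values):
            self.mem[(base + ch) % self.depth] = v

    def read_frame(self, frame_index):
        # Deserialise: read the frame's addresses back into channel order.
        base = (frame_index * self.num_channels) % self.depth
        return [self.mem[(base + ch) % self.depth]
                for ch in range(self.num_channels)]
```

For example, packing 8 channels per frame into a 1024-deep BRAM leaves room for 128 buffered frames before addresses wrap.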
My question regards how (or whether) the attached feedback nodes are represented in fabric. Are they purely placeholders so that the latency can be visualised, or are they actually implemented in fabric (LUTs, registers, SRLs)?
I have since found out that BRAMs have optional input and output registers (internal to the BRAM), so choosing a delay of 1 or 2 does NOT utilise fabric on a Virtex-5. The third delay, however, does seem to utilise fabric.
Therefore two new questions:
Why does LV offer a delay of 3 when only 2 are supported inside the BRAM itself, meaning the third actually utilises fabric?
Why does the third feedback node utilise registers instead of SRLs? I have disabled the reset signal, and according to the LV 2012 help, this allows the compiler to implement the code as 16-bit SRLs (LUTs) instead of registers. This text is missing from the LV 2015 help. Has this behaviour changed? I use this knowledge a LOT in our design to significantly save resources: using 24 LUTs for a 16-cycle delay of a 24-bit signal is a lot better than using 384 registers to do the same thing.
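For anyone following the arithmetic, here is a small sketch of the resource comparison I mean. The cost model is the usual Xilinx one (one SRL16 LUT handles up to a 16-cycle delay of a single bit, provided no reset is needed); the function names are my own, not from any tool.

```python
import math

def delay_register_cost(delay_cycles, width_bits):
    """Fabric flip-flops needed for a plain register shift chain:
    one register per bit per cycle of delay."""
    return delay_cycles * width_bits

def delay_srl16_cost(delay_cycles, width_bits):
    """LUTs needed when the delay maps to 16-bit shift-register LUTs
    (SRL16). Assumes the reset is disabled, which is what lets the
    synthesiser use SRLs instead of flip-flops. One SRL16 covers up
    to 16 cycles of delay for one bit, so chains of ceil(delay/16)
    LUTs are needed per bit."""
    return math.ceil(delay_cycles / 16) * width_bits

# The case from the question: a 16-cycle delay of a 24-bit signal.
print(delay_register_cost(16, 24))  # 384 registers
print(delay_srl16_cost(16, 24))     # 24 LUTs
```

That 16x saving per delay line is why losing the SRL mapping (if the behaviour really did change in LV 2015) would matter so much in our design.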