02-28-2018 05:22 AM
Hello,
Does anybody know if there is a limit on the number of Case structures or instances of a local variable in LabVIEW FPGA?
I have 8 Case structures, and in each one I have an instance of a local variable. When I run the VI, it seems that cases 6, 7 and 8 don't execute.
02-28-2018 06:35 AM
You are only limited by the fabric in the FPGA. But if there was a problem, the compilation would fail.
First of all, try not to have local variables. They use more gates and fabric. If you are writing to the same variable, move that out of the case structure.
Any more advice would require seeing your code.
03-01-2018 03:27 AM
In general, I'd avoid clusters. Not sure if that applies to your code, but each local costs a certain amount of gates/slices on the FPGA for its data, and then some more for synchronization. Locals of clusters will be expensive.
03-01-2018 07:02 AM
Pedantic correction:
Clusters themselves are not expensive. The width of a datatype is what makes it expensive. A U8 or a cluster of 8 Booleans will consume the same fabric on the FPGA because they are the same width.
The problem with clusters is that they tend to only grow (they rarely shrink), so costs that were acceptable when the code was designed can become harmful later when the size of the cluster changes (even if we're only actually modifying a small portion of it).
So the problem with clusters is that the minimum data width of a cluster is the sum of ALL the elements within it, and that CAN be expensive. But it's the data content that makes it expensive, not the cluster per se.
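To illustrate the width arithmetic (in Python, since G code can't be pasted as text): the bit widths below are the usual LabVIEW scalar widths, and the whole point is simply that a cluster's minimum data width is the sum of its elements' widths, nothing more.

# Back-of-the-envelope width arithmetic for the cluster-vs-scalar point above.
WIDTH_BITS = {"Boolean": 1, "U8": 8, "U16": 16, "U32": 32, "U64": 64}

def cluster_width(element_types):
    # Minimum data width of a cluster: the sum of its elements' widths.
    return sum(WIDTH_BITS[t] for t in element_types)

print(cluster_width(["Boolean"] * 8))  # 8 bits, same as a lone U8
print(WIDTH_BITS["U8"])                # 8 bits
print(cluster_width(["U64"] * 16))     # 1024 bits -- this is where it gets expensive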
Shane O'Neill
03-01-2018 08:41 AM
@Intaris wrote:
Pedantic correction:
Clusters themselves are not expensive. The width of a datatype is what makes it expensive. A U8 or a cluster of 8 Booleans will consume the same fabric on the FPGA because they are the same width.
The problem with clusters is that they tend to only grow (they rarely shrink), so costs that were acceptable when the code was designed can become harmful later when the size of the cluster changes (even if we're only actually modifying a small portion of it).
So the problem with clusters is that the minimum data width of a cluster is the sum of ALL the elements within it, and that CAN be expensive. But it's the data content that makes it expensive, not the cluster per se.
Shane O'Neill
In theory, yes. But has that been proven? Maybe it has changed between LV versions?
Basically, all LV code is translated to VHDL. Are you sure there is no overhead at all in using clusters? You might very well be right, but I wouldn't bet on it until I see some benchmarks.
03-01-2018 08:52 AM - edited 03-01-2018 08:54 AM
There is a caveat with clusters when using FP controls.
Due to the atomic nature of operations on single datatypes, and the fact that a cluster can grow beyond 64 bits, LV needs to do some extra housekeeping to ensure that a write to an FP cluster occurs atomically (arrays too, BTW). This is not required for non-cluster, non-array elements, since they cannot grow beyond 64 bits. The reason for the 64-bit boundary, and the necessity for the housekeeping, is the 64-bit width of the DMA engine. So as FP elements, clusters (and arrays) require extra fabric to operate. Locals don't have this limitation since they have no relation to the inherent 64-bit data width of the DMA engine.
03-01-2018 08:54 AM
@Intaris wrote:
There is a caveat with clusters when using FP controls.
Due to the atomic nature of operations on single datatypes, and the fact that a cluster can grow beyond 64 bits, LV needs to do some extra housekeeping to ensure that a write to an FP cluster occurs atomically (arrays too, BTW). This is not required for non-cluster, non-array elements, since they cannot grow beyond 64 bits. The reason for the 64-bit boundary, and the necessity for the housekeeping, is the 64-bit width of the DMA engine. So as FP elements, clusters (and arrays) require extra fabric to operate. Locals don't have this limitation since they have no relation to the inherent 64-bit data width of the DMA engine.
But that doesn't answer the question, although a "yes, it's proven" seems to be implied.
03-01-2018 09:01 AM
Correct. The reason for my aside is that there IS a case where clusters are more expensive than just the width of their counterparts.
As to a "yes", I need first to ask what your definition of "proven" is.
03-01-2018 09:25 AM
@Intaris wrote:
As to a "yes", I need first to ask what your definition of "proven" is.
Shown beyond reasonable doubt.
I'll take anything, really. If you say that in your experience bundling/unbundling adds zero overhead, I'll accept that. But it doesn't count if you have been ignoring potential overhead because there were plenty of resources.
03-01-2018 10:00 AM - edited 03-01-2018 10:07 AM
I have made some trivial code to show there is zero difference.
When I compile the two main VIs (doing the same work, one via 16 U64 registers, one via a cluster of 16 U64s), I see no discernible difference in resource usage between them (aside from the usual 1-2 LUT / register differences you get when compiling the same VI multiple times).
Hmm, wait, maybe it's all being optimised away.....