In current versions of LabVIEW FPGA, placing a For Loop inside an SCTL results in code that cannot be compiled. This is because For Loops conventionally execute iteratively, and therefore require multiple clock cycles, one to drive each new iteration.
However, I think a logical implementation of a For Loop within an SCTL would be to generate multiple parallelised instances of whatever code is inside the For Loop. This would greatly improve readability and flexibility by sparing the user from manually creating multiple separate instances of the same critical code on the Block Diagram.
This would require the For Loop to execute a known maximum number of times.
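Since LabVIEW G is graphical, here is a minimal Python sketch of what this unrolling would mean conceptually. The channel count, gains, and function names are hypothetical example data, not anything from LabVIEW itself; the point is that a loop with a known maximum iteration count can be flattened into independent copies of its body, which on an FPGA would become parallel hardware instances all completing within one SCTL cycle.

```python
# Illustrative sketch (Python pseudocode) of unrolling a fixed-count
# For Loop into parallel instances. GAINS and N are hypothetical data.

N = 4                     # known maximum iteration count -- required for unrolling
GAINS = [2, 3, 5, 7]      # example per-channel scaling factors

def iterative(samples):
    """Conventional For Loop: N sequential iterations, one per clock cycle."""
    out = []
    for i in range(N):
        out.append(samples[i] * GAINS[i])
    return out

def unrolled(samples):
    """Unrolled form: N independent copies of the loop body.
    On an FPGA each line would be its own hardware instance, and all of
    them would execute in the same single cycle of the SCTL."""
    return [
        samples[0] * GAINS[0],  # instance 0
        samples[1] * GAINS[1],  # instance 1
        samples[2] * GAINS[2],  # instance 2
        samples[3] * GAINS[3],  # instance 3
    ]

# Both forms compute the same result; only the execution model differs.
assert iterative([1, 2, 3, 4]) == unrolled([1, 2, 3, 4])
```

This also shows why the iteration count must be known at compile time: the compiler has to know how many hardware copies to instantiate.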
I am currently running into this issue where I have some constant data I'm trying to write to some DO lines. I want this data to be a constant array on my block diagram, so I create the array programmatically under the My Computer target, change the indicator that is populated into a constant, and move it to the FPGA target. When I right-click and set the array to be a fixed size, it replaces all my data with 0s. I propose that the data should not be cleared when the fixed size I set equals the size the array already is.
This might have already been asked, but I couldn't find any posts.
X-Nodes are huge compared to a subVI or most anything else on the block diagram, so let's shrink them down.
Can we remove the Read/Write box? We already have the little triangle to tell us the function/direction.
Can we use the node name instead of the generic term Data/Element? It's already there.
From there we can model it after a property node, using references instead of error lines, or we can model it after the I/O node, which is a little cramped but gets the job done. Both options retain a purple/pink bar to help identify its X-Node-ness.
A smaller (and cheaper) sbRIO based on the Xilinx Zynq chip. The target size is the SO-DIMM form factor (68 x 30 mm, half the area of a credit card, 200 pins). Such a board would be OEM friendly and could be plugged into a product, unlike the current sbRIO offerings, which require the product to be developed around the sbRIO rather than the sbRIO fitting into your product. Also, a Base Board that is used only during development. Below is roughly what the proposed sbRIO and Base Board would look like (courtesy of Enclustra FPGA Solutions):
I just manually transferred a fairly large LabVIEW FPGA project from one target to another (7965R to 7966R). It would be nice to be able to right-click the RIO target in the project and have a "Migrate to New FPGA Target" option in the context menu. The option would open a dialog where you select the new RIO target, which is then automatically added to the project and populated with the VIs, FIFOs, derived clocks, memory blocks, etc. from the original target. The user can choose whether or not to delete the original RIO target.
This would also make it very easy for users to transfer sample code from the LabVIEW Example Finder to the correct FPGA target (instead of having the folder labeled "Move These Files").
In real-time engineering, the clock rate is usually a parameter needed in calculations, so it would be useful to be able to access that rate as an integer (or float). It is clear, especially in FPGA programming, that the clock (and its rate) is not a variable that can be chosen by the application user; this idea is rather about code development, in order to avoid bugs. In the current situation I am forced to define a separate constant copying the clock rate, and in the course of later code changes I risk forgetting to update that constant when changing the clock.
For the same reason it would be useful to be able to access a clock reference of an FPGA VI (and with it its rate) from the calling VI.
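To illustrate the bug class this avoids, here is a hedged Python sketch (names like `CLOCK_HZ` and `ticks_for` are my own, purely illustrative): when every timing calculation derives from one clock-rate value, changing the clock updates all derived constants; with a manually duplicated constant, the values silently drift apart.

```python
# Hypothetical sketch: deriving timing values from a single clock-rate
# constant. If this value could be read from the FPGA clock itself,
# changing the clock in the project would update every calculation.

CLOCK_HZ = 40_000_000  # today: a manually maintained copy of the clock rate

def ticks_for(seconds):
    """Convert a time span to a tick count at the current clock rate."""
    return int(seconds * CLOCK_HZ)

timeout_ticks = ticks_for(0.001)   # 1 ms at 40 MHz -> 40000 ticks
pulse_ticks   = ticks_for(25e-9)   # 25 ns at 40 MHz -> exactly 1 tick
```

The risk described above is precisely that `CLOCK_HZ` here is a copy: if the real clock moves to 80 MHz and this constant is forgotten, every derived tick count is silently wrong by a factor of two.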
Currently Measurement & Automation Explorer (MAX) only shows the following information for a typical R-Series card:
It would be helpful to add a "Device Pinouts" tab that shows you all the pin assignments for your Analog and Digital IO:
I am Muhammad Was,
an AE from NIJ.
When choosing an FPGA variable, we should have a sorted variable list in the FPGA Read/Write Control option, just as the shared variable list is always sorted from A to Z.
In the FPGA Read/Write Control option, variables added to the FPGA VI recently get a higher position in the list than older ones.
This is the voice of one of our FPGA customers.
Thanks and regards,
Having recently attempted to get started with simulation for debugging my FPGA code, I found out that the built-in LV support for native LV testbenches against a simulated FPGA apparently works only with ModelSim SE.
This is a shame since ISim is included with the FPGA toolkit.
If feasible, expanding the functionality to allow co-simulation with ISim would be a rather splendid idea indeed!
In LabVIEW FPGA 2011, only the base clocks enumerated in the project and clocks derived from the base clock(s) are available in the FPGA Clock Control. I’d like LabVIEW to show the top level clock in this control as well.
Consider designs with nested components that both CAN and CANNOT be optimized with the single-cycle timed loop. If the domain of the SCTL does not match the top-level clock domain that contains it, you seem to pay a heavy performance penalty. I presume it’s due to the clock-crossing logic under the hood. Thank you, by the way, for dealing with this for me! For example, consider this VI:
The While Loop will take more ticks (a few hundred more in cases I've seen) to execute than if the Clock Control constant were set to 200 MHz (assuming you could compile). So, just set the TLC and the clock control to be the same, right? Sure, except when you change the top-level clock and, a few hours later when the compile is finished, realize you forgot (gasp) to change a clock constant, and the code no longer meets its timing requirement.
LabVIEW 2011 Behavior:
With the availability of fast FlexRIO cards (such as the NI 5761) and FPGA frame grabbers (NI 1483, PXIe-1435, NI PCIe-1433), data rates of 1 GB/s are becoming commonplace. However, the FPGA Module is limited to communicating only with 32-bit LabVIEW. Since you typically want to store more than 2 seconds of data in RAM, you would like to use 64-bit LabVIEW for your host application. Unfortunately, this isn't possible yet.
While I can imagine that a full-blown 64-bit FPGA Module add-on would be pretty difficult to build (and especially test), I believe there is a solid middle ground at this point. I can imagine coding and compiling the FPGA in the normal 32-bit LabVIEW environment, and then just using a 64-bit host application to read/write front panel controls and the DMA buffers of the FPGA. I don't know the details, but these communication protocols could be very low-hanging fruit if it's just a simple matter of recompiling a few key pieces for 64-bit operation.
Since the data rates passing to and from FPGAs will continue to climb, as will the prevalence of 64-bit OSes, a 64-bit version of the FPGA Module is needed in the new feature pipeline. This should also be kept in mind as other new FPGA Module features and tools are created, as planning for 64-bit compatibility now will make the eventual transition to 64-bit much, much easier down the road.
Sometimes I just want to compile a lot of bitfiles (be it for a release or a debugging test case), and I have to right-click each and every build spec, choose "Build", then wait about 10 seconds and do the same again for the next build spec.
How about being able to select multiple build specs and then select "Build Selection" and have time to go for lunch while the PC queues up all the compilations?
I don't use a compile farm and everything is done locally but at least the queuing could be automated.
Even though ibberger touched on the concept in the idea, I do think that most people use LabVIEW under the Windows environment. Compiling an FPGA VI happens entirely on the PC under Windows. I noticed that during this process the compiler uses only one core; since I'm using a machine with a 4-core processor, the CPU usage rarely goes above 25%.
My idea is to update the compiler to allow it to use multiple cores. The user should have the option to limit the maximum number of cores available to the compiler; this is necessary because the user may want to continue working while the compilation runs in the background.
I like rearranging my BD items manually, and FPGA items behave weirdly.
If I shift a normal Property Node left or right, the "latch point" of the wire is shown below. For some reason FPGA nodes latch differently, which leads to some ugly wiring.
It's only cosmetic, but please change it....
Single-cycle timed loops are a huge performance enhancer in LV FPGA. We learn to use these very prolifically in and around our code to save precious FPGA space, yet the BD representation of the SCTL is the standard Timed Loop structure, with both the left and right "ears" visible as well as the conditional terminal.
I propose that the SCTL be given its own representation on the block diagram, one without the "ears" and without the conditional terminal (by definition it only runs once). This would promote much cleaner-looking FPGA code and more readable diagrams.
This morning, after a night of FPGA compilation, I moved my LabVIEW project to another location.
(Without modifying the relative paths inside the project directory.)
And then ... when I tried to launch my FPGA main VI ... the compilation started again!
It would be nice if the "change detection mechanism", which detects whether a compilation is required, did not take absolute paths into account!
I think the "change detection mechanism" of FPGA code should be modified to take into account only the FPGA code dependencies.
The dependencies should not include ...
The Tick Count function in LabVIEW FPGA can represent time periods of up to 2^32 clock cycles with single-tick accuracy, which (using the standard 40 MHz FPGA clock) is about 107 seconds.
Sometimes I need to handle longer time spans, and I use this example.
I suggest implementing a built-in 64-bit tick counter.
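As a rough illustration of what such a counter does internally, here is a Python sketch (my own names, not any NI API) of the usual rollover-extension technique: keep the 32-bit tick value as the low word and increment a high word every time the 32-bit counter wraps around.

```python
# Minimal sketch of extending a 32-bit tick count to 64 bits by counting
# rollovers -- conceptually what a built-in 64-bit counter would do in
# hardware. Class and method names are hypothetical.

class Tick64:
    def __init__(self):
        self._high = 0   # number of 32-bit rollovers observed
        self._last = 0   # previous 32-bit reading

    def read(self, tick32):
        """tick32: current value of the 32-bit Tick Count function.
        Must be sampled at least once per 2^32 ticks (~107 s at 40 MHz)
        so that no rollover is missed."""
        if tick32 < self._last:   # counter wrapped around since last read
            self._high += 1
        self._last = tick32
        return (self._high << 32) | tick32

t = Tick64()
t.read(0xFFFFFFFF)                 # just before the wrap
assert t.read(5) == (1 << 32) + 5  # wrap detected, high word incremented
```

A hardware implementation would simply chain a second 32-bit register off the first counter's carry-out, so no polling constraint would apply.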