Does your idea apply to LabVIEW in general? Get the best feedback by posting it on the original LabVIEW Idea Exchange.
Browse by label or search in the LabVIEW FPGA Idea Exchange to see if your idea has previously been submitted. If your idea exists, be sure to vote for it by giving it kudos to indicate your approval!
If your idea has not been submitted, click New Idea to submit a product idea to the LabVIEW FPGA Idea Exchange. Be sure to submit a separate post for each idea.
Watch as the community gives your idea kudos and adds their input.
As NI R&D considers the idea, they will change the idea status.
Give kudos to other ideas that you would like to see in a future version of LabVIEW FPGA!
How amazing would it be to have the ability to visualise resource usage on an FPGA target using a view similar to the one shown above (courtesy of WinDirStat)?
I only recently shaved a significant portion off my FPGA usage by finding out that I had a massively oversized FIFO in my code for almost a year without noticing. I feel that this kind of visualisation (with mouse over showing what is actually occupying the space) with differentiation between Registers, LUTs, BRAM, DSPs and so on would greatly aid those of us trying to squeeze as much as possible out of our FPGA designs.
I think providing this information based on the "estimated resource utilisation" i.e. before Xilinx optimises stuff away would be OK. I'm not sure if the final resource utilisation can be mapped as accurately in this way.
It would also be nice to see CLIP utilisation and NI-internal utilisation at a glance as this is apparently hugely different between targets.
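To make the idea concrete, here is a minimal sketch (in Python, since the report format is hypothetical) of the kind of aggregation such a view would need: group estimated resource counts per top-level VI and per category, so a treemap or a simple ranking can show where the big consumers hide. The input tuple format is an assumption, not the actual layout of the estimated-utilisation report.

```python
from collections import defaultdict

def summarise_utilisation(records):
    """Aggregate estimated resource usage per top-level VI and category.

    `records` is a list of (vi_path, category, count) tuples, e.g. taken
    from a parsed 'estimated device utilisation' report (hypothetical
    input format -- the real report would need its own parser).
    """
    by_vi = defaultdict(lambda: defaultdict(int))
    for path, category, count in records:
        top = path.split("/")[0]          # group by top-level VI
        by_vi[top][category] += count
    return {vi: dict(cats) for vi, cats in by_vi.items()}

def biggest_consumers(summary, category, n=3):
    """Return the n VIs using the most of one resource category."""
    ranked = sorted(summary.items(),
                    key=lambda kv: kv[1].get(category, 0), reverse=True)
    return [(vi, cats.get(category, 0)) for vi, cats in ranked[:n]]
```

A ranking like `biggest_consumers(summary, "BRAM")` would have surfaced the oversized FIFO mentioned above immediately.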
I don't like static resource definitions (FIFOs, Block RAMs or DMAs) in my projects. I prefer to have the code declare such entities as they are required, because this makes scalability much easier to achieve.
For FIFOs, Block RAM and so on this is no problem, but there are two things we currently cannot instantiate in code:
To deal with the second of these: why is it currently not possible to create a derived clock in code? The ability to have one piece of code accept a single clock reference and let one loop run at a multiple of that speed is something I've wanted in the past, but it is currently impossible in LabVIEW.
Please let us configure / define derived clocks in LV code.
I have to remember to place a comment next to every memory element so that I can quickly see its size. This is especially important because I can't get to the property window when I'm viewing the VI outside of the FPGA Target Scope. I can view the data type of a memory/FIFO element by hovering over the wire that goes into a read/write property with the context help window open, so the data type isn't a concern. However, I cannot view its size. This could be fixed in one of two ways: add the size to the context help shown when hovering over the element, or display it directly on the memory element.
I have recently started placing wire labels in my code to keep track of datatypes of wires flowing around my code. This is very useful to understand what is going on on the FPGA since not all grey wires (FXP) are equal and slight mismatches can mean big trouble in the code.
As such I tend to label wires like "Phase FXP+25,0" and so on.
What I'd love is to be able to place a format specifier in a wire label so that such labels stay up to date. At the moment they are probably more error-prone than helpful, because code changes and bugfixes leave some wires and labels out of sync.
If I could set a wire label to "Phase %s" or similar to place the ACTUAL data type in the label, that would be amazing.
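For illustration, generating such a label from the type itself is trivial; the hard part is LabVIEW doing it automatically. A sketch of the substitution, assuming the "+"/"+-" sign marker and the "word length, integer length" convention used in the label above:

```python
def fxp_label(name, signed, word_length, integer_length):
    """Render a wire label in the 'Phase FXP+25,0' style used above.

    The '+' vs '+-' sign marker for unsigned vs signed is an assumed
    convention; word/integer lengths are in bits.
    """
    sign = "+-" if signed else "+"
    return f"{name} FXP{sign}{word_length},{integer_length}"

def label_is_current(label, name, signed, word_length, integer_length):
    """Check a hand-written label against the wire's actual type."""
    return label == fxp_label(name, signed, word_length, integer_length)
```

A lint pass built on `label_is_current` could at least flag stale labels until a real `%s`-style formatter exists.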
In connection with another general idea I have posted, I have come to the conclusion that it would be nice to run an analysis of the Xilinx log in order to report which code has been constant-folded by the Xilinx compiler.
Other aspects such as specific resource utilisation would be really cool too (SRL32 vs. registers for Feedback Nodes). This would obviously be a post-bitfile operation, but it could at least give some direct feedback on what the Xilinx compiler has modified in the code (dead code elimination, constant folding, etc.).
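A first cut at such an analysis could be a simple pattern scan over the compile log. The message wording below is illustrative only; the exact phrasing in a given XST/Vivado log would need to be checked before relying on it:

```python
import re
from collections import defaultdict

# Hypothetical message patterns -- verify against a real Xilinx log
# before trusting the results.
PATTERNS = {
    "constant": re.compile(
        r"FF/Latch <(?P<name>[^>]+)> .*constant value of (?P<val>\S+)"),
    "unconnected": re.compile(
        r"Signal <(?P<name>[^>]+)> is unconnected"),
}

def scan_compile_log(text):
    """Collect signal names per optimisation category from a log dump."""
    findings = defaultdict(list)
    for kind, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            findings[kind].append(match.group("name"))
    return dict(findings)
```

Mapping the reported HDL signal names back to block diagram objects would be the genuinely hard part, and is exactly why this would be valuable as a built-in feature.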
Having recently attempted to get started with simulation for debugging my FPGA code, I found out that the built-in LabVIEW support for native LabVIEW test benches against a simulated FPGA apparently works only with ModelSim SE.
This is a shame, since ISim is included with the FPGA toolkit.
If feasible, expanding the functionality to allow co-simulation with ISim would be a rather splendid idea indeed!
I know it's not necessarily a LabVIEW FPGA issue, but it's us LabVIEW FPGA users who use fixed-point numbers most often. Why don't fixed-point numbers always show coercion dots? If every unnecessary bit wastes chip resources, then isn't it all the more important that we know about these coercions so that we can avoid them?
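The check the compiler would have to make is cheap. A sketch of the rule, assuming an FXP type is described by (signed, word length, integer word length), with fractional bits being word length minus integer length:

```python
def fxp_coerces(src, dst):
    """Return the kinds of loss when assigning src -> dst.

    src/dst are (signed, word_length, integer_length) triples.
    This is a simplified model -- signed/unsigned edge cases in real
    FXP arithmetic are more subtle than shown here.
    """
    losses = []
    s_signed, s_wl, s_il = src
    d_signed, d_wl, d_il = dst
    if s_il > d_il or (s_signed and not d_signed):
        losses.append("range")        # high-order bits or sign lost
    if (s_wl - s_il) > (d_wl - d_il):
        losses.append("precision")    # low-order fractional bits dropped
    return losses
```

Anywhere this returns a non-empty list is exactly where a coercion dot (and a resource-conscious developer's attention) belongs.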
I've developed a small test utility for testing FPGA code on my Windows machine before proper FPGA testing, and have noticed something I think could be improved.
If I have a graph or any other indicator with a DBL default data type inside a conditional disable structure, the compiler still throws an error about the unsupported data type (DBL). Although the terminal is in the disabled diagram, the object seems to remain on the front panel (and also in the code being sent to the FPGA compiler).
Could the graph (or other culprit control) be removed in the background before starting the actual compile, so that I don't need to drop and re-connect it every time I want to switch execution systems?
When working with LabVIEW FPGA if so much as the value of a block diagram constant is changed, the entire application must be recompiled.
It seems that there could be a smarter method to deal with recompiling that might allow selected portions of the block diagram to be recompiled without the need to recompile the entire app.
Some of the tradeoffs might be lower FPGA utilization efficiency and looser timing constraints, but allowing minor changes to the FPGA code without the overhead of a complete recompile would certainly make debugging applications much faster.
I am not necessarily proposing an implementation, just posting an idea that seems like it would add value to the LV-FPGA development experience.
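One way the bookkeeping side of this could look (a sketch only, not a claim about how the toolchain would actually partition or place the design): hash each VI's generated source, and only rebuild the partitions whose hash changed since the last compile.

```python
import hashlib

def diagram_hash(source):
    """Fingerprint one VI's generated source text (illustrative stand-in
    for whatever intermediate form the compile flow actually produces)."""
    return hashlib.sha256(source.encode()).hexdigest()

def vis_to_recompile(current, previous):
    """Compare per-VI hashes; only changed or new VIs need rebuilding.

    `current` and `previous` map VI name -> hash.
    """
    return {vi for vi, h in current.items() if previous.get(vi) != h}
```

The hashing is the trivial part; the real cost of incremental compilation lies in keeping placement and timing closure valid across partition boundaries, which is why the tradeoffs above exist.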
Presently, the Xilinx Compile Tools do not appear in the MAX technical report or NI License Manager. As a result, to determine the version, users must go to Add/Remove Programs in the control panel to determine which versions they have installed. It would be great for troubleshooting if the Xilinx Version could be implemented into the MAX technical report.
In addition, the Compile Worker states that the version of Xilinx used is 12.4, regardless of whether you are using 12.4 or 12.4 SP1. It would be useful for the compile worker to note which version it is using. Specifically, often the compilation chooses the compile tools based on what it was compiled with previously. When upgrading to 12.4 SP1, the user may think the compiler automatically uses the new compile tools and has no visual cue to verify the compile tools version used.
At present, if you are trying to simulate your FPGA's actual logic, using a custom VI like this:
Then you know that your custom VI test bench only has one case for methods (just a general method case, not a case for each available method). There are ways to work around this problem; for example, this example emulates a node and suggests using different timeout values for Wait on Rising Edge, Wait on Falling Edge, and so on, but one still has to write the code for the different methods.
My suggestion is as simple as this: make test benches easier to use by handling all of the methods and properties with a set behavior. That way, all one has to set up when creating a test bench is the input and output on each I/O read/write line. At the very least, it would be nice to have the ability to read what method is being called, so the appropriate code can be set up without complicated case structures.
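The shape of the requested behaviour is a dispatch table keyed by method name rather than one catch-all case. A sketch, where the method names and the node being simulated are purely illustrative and not the actual LabVIEW FPGA simulation API:

```python
class SimulatedFifoNode:
    """Dispatch simulated-node method calls by name.

    Hypothetical sketch: 'Write'/'Read' and their signatures are
    made up for illustration.
    """
    def __init__(self):
        self.queue = []
        self.handlers = {
            "Write": self._write,
            "Read": self._read,
        }

    def call(self, method, *args):
        if method not in self.handlers:
            raise NotImplementedError(f"no handler for method {method!r}")
        return self.handlers[method](*args)

    def _write(self, value):
        self.queue.append(value)

    def _read(self):
        return self.queue.pop(0)
```

Even just exposing the invoked method's name to the test bench, as suggested above, would let users build exactly this kind of table instead of nesting case structures.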
When you configure Memory in LV FPGA, there is no way to find references to a particular memory block using the search function. The search shows ALL memory access nodes, not just the ones you are looking for. Additionally, a text search will not catch the text from the XNodes. This can be particularly tedious if you have many nodes in your hierarchy and only want to see references to one particular memory block.
I would like to see the search improved to allow filtering of memory nodes the same way we can search for global variables (find definition and find references).
While attempting to debug NI1483 issues, I found it necessary to modify the NI1483 CLIP. In LabVIEW 2014 and earlier, it's not possible to maintain your own IO Module CLIP directory. One must keep all IO Modules within the IO Module search path (the <National Instruments>\Shared\FlexRIO\IO Modules folder). This can be done by copying an existing IO Module to a new path within that folder and then editing the *.tbc file to rename the "model" key. The main issues with this approach are the potential lack of administrator permissions and the difficulty of maintaining source control in a non-project system directory.
The suggestion is thus:
1. Give the user an option to select the path of the IO module under the IO module Properties General Category (When Enable IO Module is selected).
I find myself again and again having to memorise bit-field indices and other details in order to debug FPGA code efficiently. What I would really like is to create an XControl with a compatible data type (say, U64) and have it display and accept input in the form of human-readable information.
The data being transported is simply a U64 and the FPGA code doesn't need to know anything about the XControl itself. Just allow a host visualisation based on an XControl to ease things a bit.
I've already started using LVOOP on FPGA and I think this could be another big improvement in the debugging experience. Having an input XControl (or a set of XControls) for certain LVOOP modules on the FPGA just gets me all excited.
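The decode the XControl would perform is simple bit slicing. A sketch of the idea, where the field names and layout are invented for illustration:

```python
def unpack_bitfields(value, layout):
    """Split a U64 into named fields for human-readable display.

    `layout` maps field name -> (offset, width), with offsets counted
    from bit 0 (LSB). The layout itself would live in the XControl,
    not in the FPGA code, which keeps transporting a plain U64.
    """
    out = {}
    for name, (offset, width) in layout.items():
        out[name] = (value >> offset) & ((1 << width) - 1)
    return out
```

The key point from above holds: the FPGA side stays a plain U64, and only the host-side display layer knows the layout.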
Memory initialization is one of the more tedious aspects of LVFPGA coding. A lot of my LVFPGA VIs have multiple memory elements that I need to access simultaneously for a given operation. I've tried to streamline the initialization process by making all memory initialization VIs read from an init-values file and populate the array indicator. However, now I have multiple initialization VIs reading from different points in the same init-values file. If I could somehow get a parameter into the memory initialization VI, I could programmatically select where in the init-values file to read from. Here is how this could work:
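In pseudocode terms, the parameterised initialization VI would amount to a windowed read over the shared init-values table. A sketch, with the file already parsed into a flat list (an assumption about the init-file format):

```python
def memory_init_values(init_values, offset, depth, default=0):
    """Return `depth` values for one memory block, starting at `offset`
    in a shared table of init values; pad with `default` if the table
    runs short. `init_values` stands in for the parsed init file.
    """
    window = init_values[offset:offset + depth]
    return window + [default] * (depth - len(window))
```

With offset and depth passed in as the requested parameter, one generic initialization VI could serve every memory element instead of one hand-edited copy per block.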
I do a lot of debugging by simply running my LVFPGA code in traditional LabVIEW test benches. It's kind of a pain to have to open an FPGA-scoped version of my VIs just to configure the memory elements or to view their lengths/data types.
Array to number is very useful for auto-sign-extending numbers, but it would be nice to see this visually without having to go to each instance and inspect the context menu. How about some coercion dots? I don't really care which colors. Here's an example:
I would like to see some form of simple locking mechanism for VIs that are targeted to an FPGA.
The use case would be where you have compiled a VI for your FPGA target and are currently in the process of debugging/testing it. While running interactively and opening and closing VIs, you accidentally move something on a block diagram without realizing it. The next time you hit the run button LV shows you the "Generating Intermediate Files" dialog and you have now ventured down the one way street to a full FPGA recompile.
I know that source code control or setting all files to read only would also work, but when debugging a project, it is cumbersome to continually check all files in and out, or to continually change the directory attributes.
Just a simple lock/unlock button on the toolbar to keep from shooting myself in the foot while debugging.
....posted as I sit here waiting on a 4 hour FPGA compile for just this reason.