LabVIEW FPGA Idea Exchange


 

How amazing would it be to have the ability to visualise resource usage on an FPGA target using a view similar to that shown above (courtesy of WinDirStat)?

 

I only recently shaved a significant portion off my FPGA usage by discovering that I had had a massively oversized FIFO in my code for almost a year without noticing.  I feel that this kind of visualisation (with mouse-over showing what is actually occupying the space), with differentiation between registers, LUTs, BRAM, DSPs and so on, would greatly aid those of us trying to squeeze as much as possible out of our FPGA designs.

 

I think providing this information based on the "estimated resource utilisation", i.e. before Xilinx optimises things away, would be OK.  I'm not sure whether the final resource utilisation could be mapped as accurately in this way.
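To make the idea a little more concrete, here is a very rough sketch (in Python, since I can't mock this up in G here) of the kind of roll-up such a view would be built on: estimated usage per node, grouped by resource type so the biggest consumers stand out. All node names and numbers below are invented purely for illustration.

from collections import defaultdict

# Purely illustrative numbers and node names - this just shows the grouping a
# WinDirStat-style view would need: estimated usage per node, per resource type.
estimates = [
    ("DMA FIFO 'AI Data'",   "BRAM",       24),
    ("Oversized debug FIFO", "BRAM",       96),   # the sort of thing that should leap out
    ("PID loop",             "DSP",         6),
    ("PID loop",             "LUT",       950),
    ("CLIP: serial core",    "LUT",      4100),
    ("Debug registers",      "Register",  640),
]

by_type = defaultdict(list)
for node, resource, count in estimates:
    by_type[resource].append((count, node))

for resource, items in sorted(by_type.items()):
    total = sum(count for count, _ in items)
    print(f"{resource}: {total} total")
    for count, node in sorted(items, reverse=True):
        print(f"  {count / total:6.1%}  {node}")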

 

It would also be nice to see CLIP utilisation and NI-internal utilisation at a glance, as this apparently differs hugely between targets.

 

Shane.

Allows for ILA and other debugging capabilities.

 

While For Loops inside SCTLs offer limited functionality, placing an unsupported element inside the For Loop does not result in broken code. Instead, one has to wait until the second stage of generating intermediate files to discover that the element is not supported. Code like the example below should show a broken run arrow if it is not supported.

 

Annotation 2019-08-14 111042.png

I have to remember to place a comment next to every memory element so that I can quickly see its size.  This is especially important because I can't get to the property window if I'm viewing the VI outside of the FPGA target scope.  I can view the data type of a memory/FIFO element by hovering over the wire that goes into a read/write node with the context help window open, so that part isn't a problem.  However, I cannot view its size.  This could be fixed in one of two ways: add the size to the context help shown when hovering over the element, or display it directly on the memory element.


Malleable FPGA VIs import into the Desktop Execution Node with the same datatype as the one saved on the FPGA VI's "malleable terminal".  The Desktop Execution Node does not mutate that input type the way the "malleable terminal" of the FPGA VI itself would.  As a result, host VI test benches cannot exercise the different Type Specialization Structure cases in the malleable FPGA VI.

 

The "anything" input to this Assert Structural Type Match node is an I16, which breaks this case against an I16, which is the "malleable terminal" of this VI.

 

PIE5669450_0-1687372970404.png

 

The Desktop Execution Node only ever sees the I16, and coerces any other datatype to it.

 

PIE5669450_1-1687373121899.png

 

IMO each compiled Type Specialization Structure case is a critical unit test, and which case compiles depends on the datatype of the control wired to the "malleable terminal", so this is a serious limitation of the Desktop Execution Node.
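For what it's worth, here is a rough text-language analogy (Python, not LabVIEW; the class names are made up) of why this matters: the specialization case that runs is chosen by the actual input type, so a harness that coerces everything to one type can never exercise the other cases.

# Rough analogy only: I16/I32 stand in for the types a malleable terminal would
# adapt to; the dispatch below mimics a Type Specialization Structure.
class I16(int): pass
class I32(int): pass

def malleable_vi(x):
    if isinstance(x, I16):      # analogue of the I16 specialization case
        return x * 2
    if isinstance(x, I32):      # this case is never covered if the harness
        return x * 4            # coerces every input to I16 first
    raise TypeError(type(x).__name__)

print(malleable_vi(I16(3)))       # 6  - the only path the DEN-style harness exercises
print(malleable_vi(I16(I32(3))))  # 6  - coercion hides the I32 path entirely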

 

I think the intended use case for the DEN is to hook into an FPGA VI that is running in a loop, and in that situation the inputs to the malleable VI are selected by the calling VI.  So maybe this isn't a limitation of the DEN itself, but of the DEN workflow.

 

Thanks for your consideration,

 

Steve K

I hope the FPGA Register function could add a "Find Caller" option....

 

1.png

 

2.png

Hello,

 

I recently had an issue configuring FPGA VIs to be run seamlessly by the same host code, because of incompatible interfaces between the VIs. Here is the Configure FPGA VI Reference Interface dialog:

 

Configure FPGA VI Reference Interface

 

And here are the two (different) interfaces, for FPGA 1.vi and FPGA 2.vi respectively, as seen in the context help. I just duplicated the VI for the example and modified the tab order - see under Registers:

 

Context Help for FPGA 1.vi / Context Help for FPGA 2.vi

 

I think it would be more consistent to have the same kind of display in the configure dialog, with the same control order. It's quite confusing to see no difference when configuring a reference, only to discover that something is wrong at run time (controls and indicators are separated and then sorted alphabetically - I only set controls in my example code, no indicators). The context help over the dynamic reference finally helped me figure out what was wrong, but it took me a while...

 

Please note that the FPGA FIFOs have to be defined in the same order from one bitfile to another (if there are different targets, or different projects). This is correctly reflected by the configuration window.

 

So I suggest a more coherent display of the control and indicator interfaces, one that corresponds to the effective interface (just as the context help does), i.e. the tab order of the controls under Registers.

 

Best regards,

I know it's not necessarily a LVFPGA issue, but it's us LVFPGA users who use fixed-point numbers most often.  Why don't fixed-point numbers always show coercion dots?  If every unnecessary numerical digit wastes chip resources, isn't it all the more important that we know about these coercions so that we can avoid them?

 

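For anyone who wants to see what one of those silent coercions actually does to a value, here is a small illustrative sketch (Python, not LabVIEW; the quantiser and the <+,25,0> / <+,16,0> formats are just examples I made up):

# Toy FXP quantiser: word_length total bits, integer_word_length of them before
# the binary point (roughly the LabVIEW FXP convention).
def fxp_quantize(value, word_length, integer_word_length, signed=True):
    frac_bits = word_length - integer_word_length
    lsb = 2.0 ** -frac_bits
    levels = 2 ** word_length
    lo = -(levels // 2) * lsb if signed else 0.0
    hi = (levels // 2 - 1) * lsb if signed else (levels - 1) * lsb
    return min(max(round(value / lsb) * lsb, lo), hi)

phase = 0.123456789
fine   = fxp_quantize(phase, 25, 0, signed=False)   # e.g. a <+,25,0> wire
coarse = fxp_quantize(fine, 16, 0, signed=False)    # silently coerced to <+,16,0>
print(fine, coarse, abs(fine - coarse))             # the error a coercion dot would flag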

I have recently started placing wire labels in my code to keep track of the datatypes of the wires flowing around it.  This is very useful for understanding what is going on on the FPGA, since not all grey (FXP) wires are equal, and slight mismatches can mean big trouble in the code.

 

As such I tend to label wires like "Phase FXP+25,0" and so on.

 

What I'd love to be able to do is place a format specifier in a wire label so that such labels keep themselves up to date.  At the moment the practice is probably more error-prone than anything else, because code changes and bug fixes leave some wires and labels out of sync.

 

If I could set a wire label to "Phase %s" or similar and have the ACTUAL datatype substituted into the label, that would be amazing.
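In other words, something along these lines (hypothetical behaviour, sketched in Python; the function name is invented):

# A "%s" placeholder in the wire label would be re-substituted with the wire's
# actual datatype string whenever the code changes, so the label can't go stale.
def render_wire_label(template, wire_datatype):
    return template.replace("%s", wire_datatype)

print(render_wire_label("Phase %s", "FXP+25,0"))  # -> "Phase FXP+25,0"
print(render_wire_label("Phase %s", "FXP+32,6"))  # label follows a later retype automatically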

Having recently attempted to get started with simulation for debugging my FPGA code, I found out that the built-in LV support for native LV test benches running against a simulated FPGA apparently works only with ModelSim SE.

 

Failed Simulation FPGA.png

 

This is a shame since ISim is included with the FPGA toolkit.

 

If feasible, expanding the functionality to allow co-simulation with ISim would be a rather splendid idea indeed!

 

Shane.

Arising from similar requirements to those I posted many moons ago HERE, I naively thought putting a terminal in a disable structure would remove it from the FPGA compile.  It doesn't.

 

Years later, I have developed a nice debug interface for my FPGA code which is becoming more and more modular as I refactor it.  I have many sub-modules with their own debug interfaces which can be turned on or off from the top-level VI via LVOOP method injection.

 

The problem is that I can't really compile my entire FPGA VI with ALL debug paths enabled, as this just won't fit (it will sometimes compile, but most often not, and our FPGA code base is still growing).  And this is before I even think about making my debug information more detailed.  I would like to be able to easily switch certain aspects of the debug interface on and off as testing requirements change.  At the debug-interface level I can do this easily, by simply not reading the data from the objects being used for the data transfer, or by passing in abstract methods which don't actually do anything and get optimised away.  But I'm left with a load of FP controls which are still eating up resources on the FPGA target.  I don't want to delete the controls, because that leads to X copies of ever-so-slightly out-of-sync versions of my test VI, which quickly becomes a maintenance nightmare.  Instead, I want to be able to "easily" reconfigure my test front panel to compile only the stuff I'm currently interested in.

 

Part of what I would like is the ability to define areas of the FP which are enabled or disabled (preferably also based on whether simulation is active or not - hence conditional disables for the FP).  This way, the disabled FP elements will actually disappear when compiling, and full resource savings can be made (as Xilinx is clever enough to optimise away any pointless code LV may still have instantiated in VHDL).  In addition, the ability to define certain controls as being enabled only in simulation mode would allow us to have SGL graphs and so on present when needed during debugging.

 

So, would having conditional disable options for the FP (where controls are shown as greyed out when not available) be of interest to anyone?  If this were an FPGA-only thing, I wouldn't shed any tears.

 

Am I the only one who would use this? hmm. Maybe.

I don't like static resource definitions (FIFOs, block RAMs or DMAs) in my projects.  I prefer to have the code declare such entities as they are required, because this makes scalability much easier to achieve.

For FIFOs, block RAM and so on this is no problem, but there are two things we currently cannot instantiate in code:

DMA Channels

Derived clocks

 

To deal with the second item: why is it currently not possible to create a derived clock in code?  The ability to have one piece of code accept a single clock reference and let one loop run at a multiple of that speed is something I've wanted in the past, but it is currently impossible in LabVIEW.

 

Please let us configure / define derived clocks in LV code.

When you configure memory in LabVIEW FPGA, there is no way to find references to a particular memory block using the search function. The search shows ALL memory access nodes, not just the ones you are looking for. Additionally, a text search will not catch the text from the XNodes. This can be particularly tedious if you have many nodes in your hierarchy and want to see only the references to one particular memory block.

 

I would like to see the search improved to allow filtering of the memory nodes, the same way that we can search for global variables (find definition and find references).

When working with LabVIEW FPGA, if so much as the value of a block diagram constant is changed, the entire application must be recompiled.

 

It seems that there could be a smarter approach to recompiling that would allow selected portions of the block diagram to be recompiled without the need to recompile the entire application.

Some of the trade-offs might be lower FPGA utilization efficiency and timing constraints, but allowing minor changes to the FPGA code without the overhead of a complete recompile would certainly make debugging applications much faster.

 

I am not necessarily proposing an implementation, just posting an idea that seems like it would add value to the LV FPGA development experience.

 

I've developed a small test utility for testing FPGA code on my Windows machine before proper FPGA testing, and have noticed something I think could be improved.

 

If I have a graph or any other indicator with a DBL default data type inside a conditional disable structure, the compiler still throws an error about the (unsupported) DBL datatype.  Although the terminal is in the disabled diagram, the object seems to remain on the FP (and also in the code being sent to the FPGA compiler).

 

Can we actually remove the graph (or other culprit control) in the background before starting the actual compile, so that I don't need to drop and re-connect controls every time I want to switch execution systems?

 

Shane.

Hello,

 

I would suggest some improvements to the Timing Violations window.

 

  • Highlight the whole path, with all elements and wires, on the block diagram when I click on a path in the Timing Violations window.
mark path.png

Path Highlighted.png

 

  • Highlight the element on the block diagram that I clicked in the Timing Violations window.

   mark element.png
Element Highlighted.png

 

 

  • Give the problem window of the Timing Violations window the ability to adjust its size; I often have to scroll. (This can be seen in the Timing Violations window above.)

 

With kind regards

westgate

Presently, the Xilinx compile tools do not appear in the MAX technical report or in NI License Manager. As a result, to determine the version, users must go to Add/Remove Programs in the Control Panel to see which versions they have installed. It would be great for troubleshooting if the Xilinx version could be included in the MAX technical report.

 

In addition, the Compile Worker states that the version of Xilinx used is 12.4, regardless of whether you are using 12.4 or 12.4 SP1. It would be useful for the Compile Worker to report which version it is actually using. Specifically, the compilation often chooses the compile tools based on what the bitfile was compiled with previously. When upgrading to 12.4 SP1, users may assume the compiler automatically uses the new compile tools, and they have no visual cue to verify which version was actually used.

I don't like static resource definitions (FIFOs, block RAMs or DMAs) in my projects.  I prefer to have the code declare such entities as they are required, because this makes scalability much easier to achieve.

For FIFOs, block RAM and so on this is no problem, but there are two things we currently cannot instantiate in code:

DMA Channels

Derived clocks

 

To deal with the first item: why can't we define a DMA channel in the code?  When parsing the code before compiling, the presence of a DMA channel could be auto-detected and added to the interface of the bitfile.

 

To try to decouple my code from static DMAs, I have actually started defining my core FPGA VIs as accepting FIFOs, with Write functions required (for DMAs to the host) or Read functions required (for writing to the FPGA).  I can then, without having to change my project, wrap this FPGA VI in another VI which can pass in either a DMA channel (which unfortunately must be defined in the project) or a standard FIFO, which can then be used for debugging.
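For anyone curious, the wrapper pattern boils down to something like this loose analogy (Python stand-ins; the class and method names are invented, nothing here is an NI API):

class TargetScopedFifo:
    """Stand-in for a plain target-scoped FIFO - handy for debugging."""
    def __init__(self):
        self.items = []
    def write(self, x):
        self.items.append(x)

class DmaToHostChannel:
    """Stand-in for a DMA channel that streams to the host application."""
    def write(self, x):
        print(f"DMA -> host: {x}")

def core_fpga_logic(sink):
    # The "core VI" only needs something with a write method; it never cares
    # whether that is a DMA channel or an ordinary FIFO.
    for sample in range(3):
        sink.write(sample)

core_fpga_logic(TargetScopedFifo())   # debug configuration
core_fpga_logic(DmaToHostChannel())   # deployment configuration, core code unchanged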

 

Please allow for the instantiation of DMA channels in code.

At present, if you are trying to simulate your FPGA's actual logic using a custom VI like this:

1234.png

Then you know that your custom VI test bench only has one case for methods (just a general method case, not a case for each available method). There are ways to get around this problem - for example, this example emulates a node and suggests using a different timeout value for Wait on Rising Edge, Wait on Falling Edge, etc. - but one still has to write the code for the different methods.

 

My suggestion is as simple as this: make test benches easier to use by handling all of the methods and properties with a set behaviour. That way, all one has to set up when creating a test bench is the input and output on each I/O read/write line. At the very least, it would be nice to have the ability to read which method is being called, so the appropriate code can be set up without complicated case structures.
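Something like the following sketch is what I have in mind (Python pseudo-test-bench; the method names are only examples, not an actual NI interface):

# If the test bench could read which method the I/O node was called with, the
# per-method behaviour becomes a simple lookup instead of a guessed timeout.
def io_node_testbench(method, channel_state):
    handlers = {
        "Read":                 lambda s: s["level"],
        "Wait on Rising Edge":  lambda s: s["rising_edge_seen"],
        "Wait on Falling Edge": lambda s: s["falling_edge_seen"],
    }
    return handlers[method](channel_state)

state = {"level": True, "rising_edge_seen": False, "falling_edge_seen": False}
print(io_node_testbench("Read", state))                 # True
print(io_node_testbench("Wait on Rising Edge", state))  # False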

When debugging, I find it useful to have graphs on my FPs - mostly for running in simulation mode, but sometimes I also want to verify that the compiled code behaves the same way.

 

I currently have to replace all of my graphs (fed with fixed-size arrays) with arrays, since unlike arrays I can't define the graph FP element to be a fixed size.  This makes debugging a bit more of a pain than it needs to be.

 

Is it possible to get the option to define a graph as being a fixed size, so that this replacement step is unnecessary?