LabVIEW FPGA Idea Exchange


Allows for ILA and other debugging capabilities.

 

When using FPGA code on a different target, you can copy/paste all the User-Defined Variables (UDVs) to the new target, but you must then manually re-link every UDV in the FPGA code to the corresponding UDV on the new target, which is a very tedious process when there are more than a handful of UDVs. Having the UDVs relative-referenced would make the FPGA code much more portable. On a cRIO I always use FIFOs, but UDVs are the only method of getting data out of an EtherCAT chassis.

 

Malleable FPGA VIs import into the Desktop Execution Node with the same datatype as the FPGA VI's "malleable terminal".  The Desktop Execution Node does not mutate the input type to match the "malleable terminal" of the FPGA VI.  As a result, host VI test benches cannot iterate Type Specialization Structure cases in the malleable FPGA VI.

 

The "anything" input to this Assert Structural Type Match node is an I16, which breaks this case against an I16, the datatype currently on this VI's "malleable terminal".

 

PIE5669450_0-1687372970404.png

 

The Desktop Execution Node only sees the I16, and coerces other datatypes.

 

PIE5669450_1-1687373121899.png

 

IMO the Type Specialization Structure case that actually gets compiled depends on the data type of the control wired to the "malleable terminal", and exercising each of those cases is a critical unit test, so this is a critical limitation of the Desktop Execution Node.
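To illustrate why each case needs its own concrete input type, here is a rough text-language analogy of my own (not NI's implementation): think of the malleable VI as a function template and the Type Specialization Structure as a compile-time branch. A test bench that only ever supplies one type can only ever exercise one branch.

```cpp
#include <cstdint>
#include <iostream>
#include <type_traits>

// Rough analogy only: the Type Specialization Structure selects a case from
// the data type wired to the "malleable terminal", much like if-constexpr
// selects a branch from the template parameter. A test bench that only calls
// process<int16_t> can never exercise the other branch, which mirrors the
// Desktop Execution Node limitation described above.
template <typename T>
T process(T x)
{
    if constexpr (std::is_same_v<T, int16_t>)
        return static_cast<T>(x * 2);   // the "I16" specialization case
    else
        return x;                       // the fallback case
}

int main()
{
    std::cout << process<int16_t>(21) << ' '    // exercises the I16 case
              << process<int32_t>(21) << '\n';  // exercises the fallback case
}
```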

 

I think the intended use-case for the DEN is to hook into an FPGA VI that's in a loop, and if so, the inputs to the malleable VI are selected by the calling VI.  So, maybe this isn't a limitation of the DEN itself, but of the DEN workflow.

 

Thanks for your consideration,

 

Steve K

When simulating an FPGA VI, I use sampling probes (https://www.ni.com/docs/en-US/bundle/labview-fpga-module/page/lvfpgahelp/using_sampling_probe.html).

 

If I close the VI, the sampling probes are lost.  This is a request to be able to save the sampling probes for a given FPGA VI.

In the past, customers used ChipScope (https://www.xilinx.com/products/intellectual-property/chipscope_ila.html) with LabVIEW FPGA, or this break-out box: https://www.ni.com/en-us/support/downloads/tools-network/download.xilinx-chipscope-pro-debugging-break-out-box.html#372379.

 

This is a request to support the Xilinx Vivado Integrated Logic Analyzer (ILA) in the LabVIEW FPGA tool flow.

When dragging multiple IO items from the project to a block diagram, it would be great to have them show up as a single IO node instead of multiple ones. To be backward compatible, it could be something like <Shift>-drag. This improves code readability by producing more compact code.

 

AndreasStark_0-1680632105226.png

 

Hello,

 

I recently had an issue configuring FPGA VIs to be run seamlessly by the same host code, because of incompatible interfaces between VIs. Here is the Configure FPGA VI Reference Interface:

 

Configure FPGA VI Reference Interface

 

And here are the two (different) interfaces, for FPGA 1.vi and FPGA 2.vi respectively, as seen in the context help. I just duplicated the VI for the example and modified the tab order - see under Registers:

 

Context Help for FPGA 1.vi and Context Help for FPGA 2.vi

 

I think it would be more consistent to have the same kind of display in the configure dialog, with the same control order. It's quite confusing not to see any difference when configuring a reference, only to discover at run time that something is wrong (controls and indicators are separated and then sorted alphabetically - I only set controls in my example code, no indicators). The context help over the dynamic reference finally helped me figure out what was wrong, but it took me a while...

 

Please note that the FPGA FIFOs have to be defined in the same order from one bitfile to another (if there are different targets, or different projects). This is correctly reflected by the configuration window.

 

So I suggest a more coherent display of the control and indicator interfaces, one that corresponds to the effective interface (just like the context help does), i.e. the tab order of the controls under Registers.

 

Best regards,

We're starting development on an UltraScale device (KU40) and are missing the option to utilise the DSP48E2 primitive as we have for the DSP48 and DSP48E1.

 

Intaris_0-1670929335347.png

 

NXG seemed to have it, but as we all know, NXG is no more.

 

https://www.ni.com/docs/en-US/bundle/labview-nxg-fpga-module-cdl-api-ref/page/dsp48e2.html

 

Can we please have a DSP48E2 primitive for LabVIEW FPGA? I would really like access to the new features supported, including the wider multiplier.

As somewhat of an opposite request to this idea:

https://forums.ni.com/t5/LabVIEW-FPGA-Idea-Exchange/Ability-to-define-datatype-of-Registers-FIFOs-from-code-without/idi-p/3123936

I would like to show some pertinent information about the configuration of certain primitives in FPGA code.

 

Intaris_0-1663335955202.png

 

The ability to turn this display on and off just like a label would be very welcome indeed. I'd always have it visible.

 

I just spent two days tracking down a bug which ended up being an under-dimensioned Block RAM instantiation (combined with how BRAM indexing works: the upper bits of the read/write index are simply thrown away instead of the index being coerced), something whose configuration is completely hidden from view. Why can't we have some visible elements to show the size of a Block RAM and its datatype (FXP would do for any given FXP type)? The same goes for FIFOs: whether a FIFO is 16 or 8192 elements deep is a very important piece of information. And of course I mean only the primitives which instantiate the resources, not FP references to these items, even though the datatype of those would also be a very welcome addition.
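To make the hidden-configuration problem concrete, here is roughly the behaviour that bit me, sketched in C++ (my own illustration, not LabVIEW code; the 1024-element depth is just an example):

```cpp
#include <array>
#include <cstddef>
#include <cstdint>
#include <iostream>

// With a BRAM dimensioned to 1024 elements, address bits above bit 9 are
// simply dropped, so an out-of-range index silently aliases onto a lower
// address instead of being coerced or reported.
constexpr std::size_t kBramDepth = 1024;
std::array<std::uint16_t, kBramDepth> bram{};

std::uint16_t bram_read(std::size_t index)
{
    return bram[index & (kBramDepth - 1)];   // upper index bits thrown away
}

int main()
{
    bram[37] = 0x1234;
    // Index 1061 is out of range; its top bit is discarded, so this reads
    // element 37 (1061 - 1024) and "works" without any error.
    std::cout << std::hex << bram_read(1061) << '\n';   // prints 1234
}
```

Having the depth and datatype visible on the diagram would have made this obvious at a glance.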

The project I'm currently working on involves a USRP-2954R with a small amount of FPGA programming (with the code running at 200 MS/s). I, of course, started by editing the USRP FPGA Streaming example code to achieve this.

 

The Receiver code on the FPGA was edited to take the samples at 200 MS/s (without the usual decimation) and perform a complex multiplication on them using the high-throughput math palette. After that, I decimate my samples (using the same decimator VI used in the Streaming example code) in a different loop in the main FPGA VI.

 

Unfortunately I keep receiving a timing error on compilation which, upon investigation, shows a large number of non-diagram components eating away at the loop time. What I don't understand is why a complex multiplication followed by a decimation would require that much time to execute.

 

I've tried using the pipeline feature of the Complex Multiply and also various compilation strategies that optimize for timing, but I'm not able to exceed a 150 MHz clock rate.

 

I also checked the NI knowledge base page that talks about non-diagram components, but most of the issues it describes come down to a long critical path, which shouldn't apply in my case because I literally perform only two operations in the concerned loop, along with the necessary pipelining.
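For reference, here is the expansion I assume the high-throughput Complex Multiply performs under the hood (my own sketch; how LabVIEW actually maps it onto DSP48 slices and pipeline registers is up to the compiler). It shows why "only two operations" on the diagram still amounts to several multiplies and adds that each need register stages at high clock rates:

```cpp
#include <iostream>

// Illustrative expansion of one complex multiply into real operations.
struct Cplx { float re, im; };

Cplx cmul(Cplx x, Cplx y)
{
    return {
        x.re * y.re - x.im * y.im,   // 2 multiplies + 1 subtract
        x.re * y.im + x.im * y.re    // 2 multiplies + 1 add
    };
}

int main()
{
    Cplx z = cmul({1.0f, 2.0f}, {3.0f, 4.0f});   // (1+2i)(3+4i) = -5+10i
    std::cout << z.re << " + " << z.im << "i\n";
}
```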

 

I've included the image of the timing violation along with the VIs. Could anyone please let me know what's going on or if I'm doing something wrong?

 

PS: The compiler I'm using is Vivado 2019.1.1

 

PS 2 : I haven't started working on the host yet


Sometimes you might not care about the outputs under certain input conditions. "Not caring" can lead to significant improvements in optimization, and thus resource utilization, but there's currently no way to tell LabVIEW that you "don't care". I propose we create new data types that support "don't care". It should start with booleans, but when you convert a boolean array to an integer, if one of the booleans is a "don't care", the numeric output then also becomes a "don't care", which is yet another data type we would need.

 

Here's what a "don't care" might look like if the user didn't care about the output when the input is 2:

dont care.png
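To sketch the intent in text form (hypothetical illustration, not LabVIEW syntax; the function names and the truth table are mine):

```cpp
#include <iostream>

// Spec: out(0) = 1, out(1) = 0, out(3) = 1, and the caller promises that
// input 2 never occurs.

// Today we must still pick SOME value for input 2 (say 0), and the
// synthesizer has to honour it:
bool with_forced_value(unsigned in)      // equivalent to XNOR(bit1, bit0)
{
    return ((in >> 1) & 1u) == (in & 1u);
}

// With a "don't care" on input 2, the optimizer would be free to choose
// out(2) = 1 and reduce the whole function to a single comparison:
bool with_dont_care(unsigned in)
{
    return in != 1u;
}

int main()
{
    for (unsigned in = 0; in < 4; ++in)   // the two differ only at in == 2
        std::cout << in << ": forced=" << with_forced_value(in)
                  << " dont-care=" << with_dont_care(in) << '\n';
}
```

The two functions agree everywhere except at input 2, and that freedom is exactly what the optimizer could exploit.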

Both resource and timing reports can be hard to read.

 

The timing report shows clocks that may not appear as SCTLs on the diagram.

The resource report shows items that are not easy to trace back to resources on the FPGA itself.

 

This varies based on the target being compiled, e.g. the 7976 (Kintex-7), the 5785 (UltraScale), or the X410 (RFSoC).

 

There could be a knowledge base article on "How to understand LabVIEW FPGA compile results".

 

I know that for more detailed compile results we can export to Vivado, but for new users this can be very intimidating and a big distraction.

Vision FPGA has been around for maybe ten years. Nowadays FPGAs have far more resources than their ancestors, but in Vision FPGA we can still only handle 8 pixels per cycle, which sometimes becomes a bottleneck.

 

Also, I think it would be good to add a signed 16-bit image data type to Vision FPGA; when we use U8 subtraction, we lose some of the information because the output is only another U8 image. If we had an I16 datatype, lossless subtraction would be possible. Sometimes every bit counts.
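A quick illustration of the information loss (plain C++, just to show the arithmetic):

```cpp
#include <cstdint>
#include <iostream>

int main()
{
    std::uint8_t a = 10, b = 200;

    // U8 minus U8 stored back into a U8 image: the true result (-190)
    // cannot be represented, so it wraps (here to 66) or saturates to 0.
    std::uint8_t diff_u8 = static_cast<std::uint8_t>(a - b);

    // The same subtraction into a signed 16-bit pixel keeps both sign and
    // magnitude, so nothing is lost.
    std::int16_t diff_i16 = static_cast<std::int16_t>(a) - static_cast<std::int16_t>(b);

    std::cout << "u8: " << +diff_u8 << "   i16: " << diff_i16 << '\n';
}
```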

 

 

 

The functionality of the update firmware button in NI MAX (R Series targets) is not clearly described in the NI documentation:

 

Juanjo_B_0-1636058231554.png

 

It would be nice to add a description of its functionality in the LabVIEW FPGA Module Help document.

 

I'm bringing back to life this long-lost idea: https://forums.ni.com/t5/LabVIEW-FPGA-Idea-Exchange/pre-and-post-build-options/idi-p/2364676

as I think there are lots of situations where pre/post build actions could be useful for FPGA.

 

For example, as suggested here: https://forums.ni.com/t5/LabVIEW/Populate-FPGA-array-with-values/m-p/4145330#M1195362

I have a large array of coefficients that I want to load from a file and then use to populate an array constant. I have a script to do it; now I would like it to run automatically before the bitfile is compiled. For context, I want it to be a constant because controls take more resources and do not allow constant-folding optimization.

 

I already had another situation where I made a tool to auto-generate code inside case structures based on specifications given by the developer. If, however, the developer forgets to run it before compiling, the FPGA VI won't work properly, as the necessary code has not been generated.

 

More generally, I think scripting for FPGA is way underrated. As FPGA code is often tedious and repetitive to create (because optimization is prioritized over readability, and because of the lack of type genericity), scripting has great potential here. Allowing pre/post-build actions to be run for FPGA bitfiles would surely take FPGA scripting to the next level!

Even with simple examples, we experience errors when trying to run Instruction Framework-based LabVIEW FPGA VIs.

 

This is a blocker for our using Instruction Framework.

MGT interfacing to the 7915 is provided: https://forums.ni.com/t5/Examples-and-IP-for-Software/Aurora-64b-66b-Streaming-Example-for-the-PXIe-7915-Ultrascale/ta-p/3952187

 

It is not provided for other cards such as the 5785.  Is the interfacing the same?  Could examples for this be provided?

Working with the NI 5785, our team had a hard time understanding how to use TClk without all of the extra (e.g. streaming) code that comes with the example.

 

Through support we were eventually put in touch with R&D and they told us how to initiate TClk by setting some of the FPGA controls.  This was helpful but not intuitive.

 

TClk supports the beamforming applications shown in NI marketing, but without this usability it is very difficult (if not impossible) to develop the applications promised.

 

TClk also has other lower-level features, such as delay correction. No information is posted on this either, but it is a property we can read.

When not using the Instruction Framework to interface from the host to LabVIEW FPGA, the FPGA VI Reference register items cannot be ordered by the user.

 

They appear in a random order (order of creation) and it is not easy to find and select them.

 

I am referring to this function: https://zone.ni.com/reference/en-XX/help/371599P-01/lvfpgahost/readwrite_control/

P2P is a very useful technology for sharing data between NI targets.

 

Could this be provided for GPUs?