
LabVIEW FPGA Idea Exchange


This would allow for ILA and other debugging capabilities.

 

Malleable FPGA VIs import into the Desktop Execution Node with the same datatype as the FPGA VI's "malleable terminal".  The Desktop Execution Node does not mutate the input type to match the "malleable terminal" of the FPGA VI.  As a result, host VI test benches cannot iterate Type Specialization Structure cases in the malleable FPGA VI.

 

The "anything" input to this Assert Structural Type Match node is an I16, which asserts this case against the I16 "malleable terminal" of this VI.

 

PIE5669450_0-1687372970404.png

 

The Desktop Execution Node only sees the I16, and coerces other datatypes.

 

PIE5669450_1-1687373121899.png

 

IMO, which Type Specialization Structure case gets compiled depends on the data type of the control wired to the "malleable terminal", and exercising each compiled case is a critical unit test, so this is a significant limitation of the Desktop Execution Node.
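Since LabVIEW code can't be quoted inline, here is a rough Python analogy of what the test bench needs (all names are illustrative, not an NI API): a function that specializes its behavior on the input type, and a loop that feeds it each type so every specialization gets exercised.

    from functools import singledispatch

    @singledispatch
    def scale(x):
        # Fallback: analogous to a Type Specialization case that fails to match
        raise TypeError(f"unsupported type: {type(x).__name__}")

    @scale.register
    def _(x: int):      # analogous to the I16-specialized case
        return x * 2

    @scale.register
    def _(x: float):    # analogous to a SGL/DBL-specialized case
        return x * 2.0

    # What a host test bench needs: iterate the input types so that EVERY
    # specialization is exercised, not just the one type the DEN imports.
    for stimulus in (21, 21.0):
        assert scale(stimulus) == 42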

 

I think the intended use-case for the DEN is to hook into an FPGA VI that's in a loop, and if so, the inputs to the malleable VI are selected by the calling VI.  So, maybe this isn't a limitation of the DEN itself, but of the DEN workflow.

 

Thanks for your consideration,

 

Steve K

When simulating an FPGA VI, I use sampling probes (https://www.ni.com/docs/en-US/bundle/labview-fpga-module/page/lvfpgahelp/using_sampling_probe.html).

 

If I close the VI, the sampling probes are lost.  This is a request to be able to save the sampling probes for a given FPGA VI.

In the past, customers used ChipScope (https://www.xilinx.com/products/intellectual-property/chipscope_ila.html) with LabVIEW FPGA, or the Xilinx ChipScope Pro Debugging Break-Out Box (https://www.ni.com/en-us/support/downloads/tools-network/download.xilinx-chipscope-pro-debugging-break-out-box.html#372379).

 

This is a request for the Integrated Logic Analyzer (ILA) of Xilinx Vivado in the LabVIEW FPGA tool flow.

Hello,

 

I recently had issues configuring FPGA VIs to be run seamlessly by the same host code, because of incompatible interfaces between VIs. Here is the Configure FPGA VI Reference Interface:

 

Configure FPGA VI Reference Interface

 

And here are the two (different) interfaces, for FPGA 1.vi and FPGA 2.vi respectively, as seen in the context help. I just duplicated the VI for the example and modified the tab order - see under Registers:

 

Context Help for FPGA 1.vi / Context Help for FPGA 2.vi

 

I think it would be more consistent to have the same kind of display in the configure dialog, with the same control order. It's quite confusing not to see any difference when configuring a reference, only to discover at run time that something is wrong (controls and indicators are separated, then sorted alphabetically - I only set controls in my example code, no indicators). The context help over the dynamic reference finally helped me figure out what was wrong, but it took me a while...

 

Please note that FPGA FIFOs have to be defined in the same order from one bitfile to another (if there are different targets or different projects). This is correctly reflected by the configuration window.

 

So I suggest a more coherent display of the control and indicator interfaces, one that corresponds to the effective interface (just like the context help does), i.e. the tab order of the controls under Registers.

 

Best regards,

Can support for simulating CLIP nodes (as can be done with IP Integration Node) be provided in LabVIEW FPGA?

 

When you try to use the serial NI 9870/9871 modules with cRIO controllers, you are led directly to accessing them from FPGA mode. However, you will find it difficult, or not possible, to use that connection with a MODBUS device, so the modules need to be accessed in Scan Mode instead, by installing the specified software on your cRIO to enable Scan Mode for these devices. Maybe we need a clear statement in the NI 9870/9871 datasheet that it is possible to connect them in Scan Mode (it currently guides us only to FPGA mode), together with the best practices for doing so.

While for loops inside SCTLs offer limited functionality, placing an unsupported element inside the for loop does not result in broken code. Instead, one has to wait until the second stage of generating intermediate files to discover that the element is not supported. Code like the example below should show a broken run arrow if it is not supported.

 

Annotation 2019-08-14 111042.png

When debugging, I find it useful to have Graphs on my FPs. Mostly for running in simulation mode but sometimes I want to verify that the compiled code behaves the same way.

 

I currently have to replace all of my Graphs (fed with fixed-size arrays) with Arrays, since unlike an Array, a Graph FP element can't be defined as fixed-size.  This makes debugging a bit more of a pain than it needs to be.

 

Is it possible to get the option to define a Graph as being a fixed size, so that this replacement step is unnecessary?

Arising from similar requirements to those I posted many moons ago (HERE...): I naively thought putting a terminal in a disable structure would remove it from the FPGA compile. It doesn't.

 

Years later, I have developed a nice debug interface for my FPGA code which is becoming more and more modular as I refactor it.  I have many sub-modules with their own debug interfaces which can be turned on or off from the top-level VI via LVOOP method injection.

 

The problem is that I can't compile my entire FPGA VI with ALL debug paths enabled, as this just won't fit (it will sometimes compile, but most often not, and our FPGA code base is still growing).  And this is before I even think about making my debug information more detailed.  I would like to be able to easily switch certain aspects of the debug interface on and off as testing requirements change.  On the debug-interface level I can do this easily, by simply not reading the data from the objects being used for the data transfer, or by passing in abstract methods which don't actually do anything and get optimised away.  But I'm left with a load of FP controls which are still eating up resources on the FPGA target.  I don't want to delete the controls, because that leads to X copies of ever-so-slightly out-of-sync versions of my test VI, which quickly becomes a maintenance nightmare.  Instead, I want to be able to "easily" reconfigure my test front panel to only compile the stuff I'm currently actually interested in.
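For illustration, a minimal Python sketch of the injection pattern just described (names are mine, not an NI API): the core code is handed a debug object, and a no-op implementation stands in for the abstract methods that get optimised away.

    class DebugSink:
        """No-op sink: analogous to the abstract method that is optimised away."""
        def log(self, name, value):
            pass

    class PanelDebugSink(DebugSink):
        """'Real' sink: analogous to the debug path that eats FPGA resources."""
        def __init__(self):
            self.samples = {}
        def log(self, name, value):
            self.samples.setdefault(name, []).append(value)

    def process(samples, debug=DebugSink()):
        # Core code calls debug.log() unconditionally; the injected object
        # decides whether that call actually does anything.
        acc = 0
        for s in samples:
            acc += s
            debug.log("acc", acc)
        return acc

    dbg = PanelDebugSink()           # inject the real sink only while testing
    process([1, 2, 3], dbg)
    assert dbg.samples["acc"] == [1, 3, 6]

The remaining problem, as described above, is that this pattern cannot make the FP controls themselves disappear from the compile.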

 

Part of what I would like is the ability to actually define areas of the FP which are enabled or disabled (preferably also based on whether simulation is active or not - hence conditional disables for the FP).  This way, when compiling, the FP elements will actually disappear and full resource savings can be made (as Xilinx is clever enough to optimise away any pointless code LV may still have instantiated in VHDL).  In addition, the ability to define certain controls as being enabled only in simulation mode would allow us to have SGL graphs and so on present when needed during debugging.

 

So, would having conditional disable options for the FP (where controls are shown as greyed out when not available) be of interest to anyone?  If this were an FPGA-only thing, I wouldn't shed any tears.

 

Am I the only one who would use this? hmm. Maybe.

The error cluster has a string, "source", to identify where the error occurred. In FPGA code, a string is only accepted (no broken run arrow) inside an error cluster. I guess this is implemented this way to maintain code compatibility when you move code to another kind of target. The problem is that whatever you write in these strings in an FPGA environment is ignored.

Some people use the same error code for a whole class of errors, changing only the source to identify where the error occurred. I got software, written by somebody else, that used error code 5000 for all user-defined errors, changing only the "source" string. That gave me no clue where the error was happening.

Since on an FPGA target only the "code" in an error cluster is useful, I propose two solutions:

1) A warning (at compile time) when the string in an error cluster is not empty;

2) The FPGA compiler converts the ASCII characters in the "source" string of the error cluster into a fixed-size array of bytes [U8]. This array would be converted back into a string on a target that can handle it. This is very common when you read an error cluster indicator of an FPGA VI from an RT VI. This solution has a little overhead, but it maintains 100% compatibility.

I like the second solution a bit more. A limited number of characters should be allowed in order to save memory; one option is a configurable setting for the maximum number of characters allowed.
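A minimal sketch of what solution 2 could look like, in Python for illustration (the 32-byte maximum and the zero-padding policy are assumptions, per the configurable limit suggested above):

    MAX_SOURCE_BYTES = 32   # assumed configurable maximum, as suggested above

    def source_to_u8(source):
        # FPGA side: "source" string becomes a fixed-size, zero-padded [U8]
        data = source.encode("ascii", errors="replace")[:MAX_SOURCE_BYTES]
        return list(data) + [0] * (MAX_SOURCE_BYTES - len(data))

    def u8_to_source(arr):
        # RT/host side: turn the fixed-size array back into a string
        return bytes(arr).split(b"\0", 1)[0].decode("ascii")

    arr = source_to_u8("MyModule.vi")          # carried through the FPGA VI
    assert u8_to_source(arr) == "MyModule.vi"  # recovered by the RT host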

When debugging FPGA code, I still like creating debug code right there in the FPGA code with FP debug indicators.  After some simulation I can then compile (the exact same code) and test with hardware.

 

The IDE, however, makes my life really hard.  In the background, each VI has a default build spec or bitfile associated with it.  When a tiny, tiny change occurs in the source code (and the change detection seems overly sensitive, BTW), the interactive mode will not start.

 

It would be nice if we had the option, assuming that the FP controls are identical, to start an interactive mode where the existing bitfile is used with the same FP as the VI source.  A visual indicator that the bitfile MAY NOT be identical to the code would be a good idea.  Sometimes changes are trivial; sometimes, when fixing a bug, we might want to double-check old behaviour for a moment before starting a compile process.  The option to keep executing the last compiled code seems like it would be a nice addition.

 

And yes, we could make an RT app which interacts with the FP elements, but since debugging code changes often (including the FP elements), this is a problematic maintenance issue.

I don't like static resource definitions (FIFOs, Block RAMs or DMAs) in my projects.  I prefer to have the code declare such entities as they are required, because this makes scalability much easier to achieve.

For FIFOs, Block RAM and so on this is no problem, but there are two things we currently cannot instantiate in code:

DMA Channels

Derived clocks

 

To deal with the second option: why is it currently not possible to create a derived clock in code?  The ability to have one piece of code accept a single clock reference and let one loop run at a multiple of that clock's speed is something I've wanted in the past, but it is currently impossible in LabVIEW.

 

Please let us configure / define derived clocks in LV code.
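Purely as illustration of the request, a hypothetical sketch in Python (nothing like derive_clock exists in the LabVIEW FPGA API today; that is exactly what this idea asks for):

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Clock:
        name: str
        hz: int

    def derive_clock(base, multiply=1, divide=1):
        # Hypothetical: declare a derived clock from a single base clock
        # reference, so a loop can run at a multiple of the caller's clock.
        return Clock(f"{base.name} x{multiply}/{divide}",
                     base.hz * multiply // divide)

    base = Clock("40 MHz Onboard Clock", 40_000_000)
    clk2x = derive_clock(base, multiply=2)   # an SCTL could then run at 80 MHz
    assert clk2x.hz == 80_000_000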

I don't like static resource definitions (FIFOs, Block RAMs or DMAs) in my projects.  I prefer to have the code declare such entities as they are required, because this makes scalability much easier to achieve.

For FIFOs, Block RAM and so on this is no problem, but there are two things we currently cannot instantiate in code:

DMA Channels

Derived clocks

 

To deal with the first: why can't we define a DMA channel in the code?  When parsing the code before compiling, the presence of a DMA channel can be auto-detected and added to the interface for the bitfile.

 

To decouple my code from static DMAs, I have actually started defining my core FPGA VIs as accepting FIFOs, with Write functions required (for DMAs to the host) or Read functions required (for writing to the FPGA).  I can then, without having to change my project, wrap this FPGA VI in another VI which can input either a DMA channel (which unfortunately must be defined in the project) or a standard FIFO, which can then be used for debugging.
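A short Python sketch of this decoupling pattern (class and method names are illustrative, not an NI API): the core logic is written against a FIFO interface, and the wrapper decides whether a DMA channel or an ordinary FIFO sits behind it.

    from collections import deque

    class Fifo:
        """Interface the core FPGA logic is written against."""
        def write(self, element): ...
        def read(self): ...

    class LocalFifo(Fifo):
        """Target-scoped FIFO stand-in, usable for debugging/simulation."""
        def __init__(self):
            self._q = deque()
        def write(self, element):
            self._q.append(element)
        def read(self):
            return self._q.popleft()

    def core_logic(samples, out):
        for s in samples:
            out.write(s)        # core code never names a DMA channel directly

    out = LocalFifo()           # a wrapper could inject a DMA-backed Fifo instead
    core_logic([1, 2, 3], out)
    assert out.read() == 1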

 

Please allow for the instantiation of DMA channels in code.

In correlation with another general idea I have posted, I have come to the conclusion that it would be nice to run an analysis of the Xilinx log in order to give feedback on which code has been constant-folded by the Xilinx compiler.

 

Other aspects, such as specific resource utilisation, would be really cool too (SRL32 vs. registers for Feedback Nodes).  This would obviously be a post-bitfile operation, but it could at least give some direct feedback on what the Xilinx compiler has modified in the code (dead-code elimination, constant folding, etc.).
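A rough Python sketch of what such a post-compile analysis could look like (the message patterns and the log file name are assumptions; exact Vivado log texts vary by version):

    import re

    # Assumed message patterns - illustrative only, not exact Vivado wording.
    PATTERNS = {
        "removed (unused / dead code)": re.compile(r"unused and will be removed"),
        "tied to a constant (folded)":  re.compile(r"constant", re.IGNORECASE),
    }

    def summarize_xilinx_log(path):
        hits = {label: [] for label in PATTERNS}
        with open(path, encoding="utf-8", errors="replace") as log:
            for line in log:
                for label, pattern in PATTERNS.items():
                    if pattern.search(line):
                        hits[label].append(line.strip())
        return hits

    # Usage (hypothetical log name):
    # for label, lines in summarize_xilinx_log("toplevel_gen.log").items():
    #     print(f"{label}: {len(lines)} message(s)")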

I hope the FPGA Register function could add "Find Caller"...

 

1.png

 

2.png

Hello,

 

I simulate small FPGA code parts from time to time, and I use the helpers below while doing it.

There are 2 helpers.

 

1) Simulation time estimate and progress: Module_SimulationProgress_Caller + Module_SimulationProgress_Popup

The idea here is that you just add the caller VI, and it will call and display the progress popup.

It has an "autotune" function so that it does not call the popup too often, but still updates once in a while. It aims for an update interval of around 0.5-1.5 seconds.

This minimizes the time spent on the popup after some iterations. It also makes it possible to stop the main simulation VI.

The estimator only works if your code is fairly static.
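For illustration, a Python sketch of the "autotune" idea (the 0.5-1.5 s window comes from the description above; everything else is assumed):

    import time

    class ProgressThrottle:
        """Call maybe_update() every iteration; it says 'yes' only often
        enough for the popup update to land in the 0.5-1.5 s window."""
        def __init__(self, lo=0.5, hi=1.5):
            self.lo, self.hi = lo, hi
            self.every = 1        # update every N iterations, tuned on the fly
            self.count = 0
            self.last = time.monotonic()

        def maybe_update(self):
            self.count += 1
            if self.count % self.every:
                return False
            elapsed = time.monotonic() - self.last
            if elapsed < self.lo:
                self.every *= 2               # popup fired too often: back off
            elif elapsed > self.hi and self.every > 1:
                self.every //= 2              # too rarely: update more often
            self.last = time.monotonic()
            return True   # caller refreshes the popup and checks for a stop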

 

2) Data collector while running: Module_FGV_DataCapture.vi

The idea here is to collect data (in a fast buffer) while simulating, and use it for display during the simulation.

It has 5 buffers that can have different numbers of elements in them, but all have the same length.

Then, in a "slow" loop, I update the graphs once every second, so I can abort if I see something wrong.

This avoids having graph plotting in the high-speed loop, or only being able to use the graph after the simulation has finished.
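And a Python sketch of the capture idea (buffer count and length are illustrative): ring buffers filled by the fast simulation loop, snapshotted by the slow display loop.

    from collections import deque

    class CaptureBuffers:
        def __init__(self, n_buffers=5, length=4096):
            # Equal-capacity ring buffers; fill levels may differ per channel.
            self.bufs = [deque(maxlen=length) for _ in range(n_buffers)]

        def capture(self, channel, value):
            self.bufs[channel].append(value)   # cheap enough for the fast loop

        def snapshot(self, channel):
            return list(self.bufs[channel])    # slow loop copies, then plots

    caps = CaptureBuffers()
    for i in range(10_000):          # stand-in for the high-speed simulation loop
        caps.capture(0, i % 7)
    trace = caps.snapshot(0)         # e.g. once per second: update the graph
    assert len(trace) == 4096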

 

3?) Maybe I will add a plot VI that can take data in from the buffer, just to clean up the simulation VI and make it generic.

 

Can I get some feedback on whether this is useful or not? Any other suggestions are welcome!

Or, how do you do your small FPGA simulations?

 

Thanks.

 

I love the FPGA Desktop Execution Node. I'd love it even more if I could access global variables from the FPGA VI that is being emulated:

 

Globals in DEN.png

 

I normally use globals, as opposed to controls and indicators, to curb FPGA resource usage in cases where I won't need those values available through the FPGA Interface on the deployed application.

There needs to be a way to physically probe the FlexRIO card edge when an NI or custom module is installed.  A time-honored method of debugging has always been to probe signals with an o-scope or logic analyzer.  Routing debugging signals to unused pins (e.g. within a CLIP) for probing seems a necessity when dealing with hardware and FPGAs.

 

Let's get them to design one and make it a purchasable accessory!