
LabVIEW FPGA Idea Exchange

I would like a way to name all of the connector I/O from an external source, perhaps an Excel file.  I envision importing a single file for all of the I/O.
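A hypothetical sketch of what such an import file might contain (the columns are invented purely for illustration):

    connector,resource,name
    Connector0,DIO0,StartTrigger
    Connector0,DIO1,EStop
    Connector1,AI0,ShaftSpeed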

Well, it seems that persistent/non-volatile memory is not available on FPGA targets for scenarios involving cycling power.

 

Apparently, the recommended approach is to transfer the data to the host and store it to disk.

 

This is a bit problematic in that the choice is either to write data to disk at a high rate or to accept that the most recent data might not be reloaded on restart.  For instance, an operator might expect to know exactly how many revolutions a shaft has undergone across power cycles of the FPGA target. A guarantee that this information is as up to date as possible probably can't be met (perhaps not even when transferring data to disk at a high rate).

 

So I'd like to request this: persistent/non-volatile memory on FPGA targets that survives a power cycle.
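For reference, the workaround available today is host-side persistence. A minimal sketch using the NI FPGA Interface Python API (nifpga), where the bitfile path, register name, and write rate are all hypothetical:

    import json
    import time

    from nifpga import Session

    BITFILE = "shaft_counter.lvbitx"   # hypothetical bitfile
    RESOURCE = "RIO0"                  # hypothetical RIO resource
    STATE_FILE = "shaft_state.json"

    with Session(BITFILE, RESOURCE) as session:
        counter = session.registers["Revolution Count"]  # hypothetical register
        while True:
            # Persist the latest count to disk. Anything counted between
            # the last write and a power cycle is still lost, which is
            # exactly the gap this idea asks to close.
            with open(STATE_FILE, "w") as f:
                json.dump({"revolutions": counter.read()}, f)
            time.sleep(0.1)  # 10 Hz: a high write rate, yet still not gap-free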

As part of my quest to solve problems arising from over-cautious Register transfers HERE I found a solution which WOULD have worked if I were able to force multiple clocks derived from the same source to have synchronised start points (so that the iteration counters of the loops are known relative to each other). It seems that clocks derived from the same base clock do not necessarily all start with Iteration zero at the same time.

 

My suggestion would be to either

  • Give some option to force such loops to have synchronous starts (also when using external clocks) -or-
  • Allow loops with external clocks to terminate so that we can put together our own synchronisation method

Shane

I find myself again and again having to memorise bit-field indices and other details in order to debug FPGA code efficiently.  What I would really like is to create an XControl with a compatible datatype (say, U64) and have it display and accept input in the form of human-readable information.

 

The data being transported is simply a U64 and the FPGA code doesn't need to know anything about the XControl itself.  Just allow a host visualisation based on an XControl to ease things a bit.

 

I've already started using LVOOP on FPGA and I think this could be another big improvement in the debugging experience.  Having an input XControl (or a set of XControls) for certain LVOOP modules on the FPGA just gets me all excited.
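To make the request concrete, here is a rough Python sketch of the kind of mapping such an XControl would encapsulate: decoding a packed U64 into named, human-readable fields and encoding it back. The field layout is entirely hypothetical:

    # Hypothetical layout of a packed U64 status word: (name, offset, width)
    FIELDS = [
        ("channel",    0,  4),
        ("gain_code",  4,  3),
        ("overrange",  7,  1),
        ("sample",     8, 24),
    ]

    def decode(word):
        """The human-readable view the XControl would display."""
        return {name: (word >> off) & ((1 << width) - 1)
                for name, off, width in FIELDS}

    def encode(fields):
        """The inverse: accept human-readable input, produce the raw U64."""
        word = 0
        for name, off, width in FIELDS:
            word |= (fields[name] & ((1 << width) - 1)) << off
        return word

    raw = encode({"channel": 3, "gain_code": 5, "overrange": 1, "sample": 123456})
    print(decode(raw))  # {'channel': 3, 'gain_code': 5, 'overrange': 1, 'sample': 123456}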

It would be nice if the sbRIO targets supported the DMA Acquire Write Region (and Acquire Read Region, although maybe that's already supported - I only tried Write Region since that's what I need for my application). Failing that, perhaps the documentation could mention that those methods are not supported on all targets?

When I build a new bitfile for my project, I sometimes (shock horror) make mistakes and bring the whole house of cards crashing down.

 

In situations like that, I would love to have the last version of the bitfile available for re-testing.  Ideally, I could specify a pre- and post-build option for my compilation where I can define my own automatic re-naming and archiving scheme so that I no longer need to do painful re-compiles for reverting my code.

 

I am aware that this probably applies to more than FPGA, but here the compilation times are more prohibitive and I feel the need is greater.
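In the meantime, a post-build archiving step can be scripted externally. A minimal sketch, assuming the compiled bitfiles land in an "FPGA Bitfiles" folder next to the project (paths and naming scheme are hypothetical):

    import shutil
    import time
    from pathlib import Path

    BITFILE_DIR = Path("FPGA Bitfiles")  # hypothetical location
    ARCHIVE_DIR = BITFILE_DIR / "archive"

    def archive_bitfiles():
        """Copy every .lvbitx aside with a timestamp before a new build overwrites it."""
        ARCHIVE_DIR.mkdir(parents=True, exist_ok=True)
        stamp = time.strftime("%Y%m%d-%H%M%S")
        for bitfile in BITFILE_DIR.glob("*.lvbitx"):
            shutil.copy2(bitfile, ARCHIVE_DIR / f"{bitfile.stem}-{stamp}{bitfile.suffix}")

    if __name__ == "__main__":
        archive_bitfiles()

Run it before each compile (or from a build hook) and the previous bitfile is always one copy away.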

Currently the SMB Trigger is connected to the Real-Time controller to act as a DIO line or as drift correction for the RT clock. Some applications require sub-millisecond accuracy on the trigger, which is not possible with the current configuration of the SMB trigger. This idea is to connect the SMB Trigger to the FPGA as a DIO line to achieve better accuracy.

-Ryan

Working on the FPGA, I use fixed-point (FXP) numbers quite often.  I have grown tired of selecting the FXP representation from the right-click context menu (block diagram or front panel), only to then right-click again to navigate to the "size" tab to select the configuration.

 

The default configuration is very rarely what I need it to be -- there should be a faster way to change this.

When writing LabVIEW code for an FPGA target, the most important considerations are speed and resource usage.  By using the single-cycle timed loop (SCTL), we can increase the speed of the program by allowing more than one operation to complete per clock cycle.  We also decrease resource usage by removing the flip-flops that would be required to store values between clock cycles for the operations in the SCTL.

 

However, there are limitations to the SCTL.  For some operations, it takes significantly fewer resources to implement something using a for loop than a single-cycle timed loop.  With a for loop, one can auto-index a result at the border of the for loop to obtain a fixed-size array (valid on the FPGA).  Below is the simplest possible example:

 

AutoIndexed For Loop

 

The equivalent with a single-cycle timed loop would be:

 

SCTL

The Replace Array Subset function consumes resources proportional to the size of the array.  Depending on the operation being performed, this can increase resource usage such that it is more practical to use a for loop (as shown above).
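For readers without the diagrams in front of them, the two patterns compute the same result. In rough Python terms (N and f are placeholders for the real loop count and per-element operation):

    N = 8  # fixed, compile-time-known iteration count

    def f(i):
        return i * i  # stand-in for the per-element computation

    # Auto-indexed for loop: the array is assembled at the loop border,
    # so no explicit replace step is needed.
    result = [f(i) for i in range(N)]

    # SCTL pattern: preallocate a fixed-size array and replace one element
    # per iteration; this replace step is what costs resources proportional
    # to the array size.
    result = [0] * N
    i = 0
    while i < N:          # models the SCTL running one iteration per clock cycle
        result[i] = f(i)  # the Replace Array Subset step
        i += 1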

 

I propose the creation of a single-cycle timed for loop.  Here is a very rough mock-up (MS Paint is not the most adequate of image processing tools... you will get the idea):

 

SCTFL

 

This solves two problems: 1) It allows the compiler to know at compile time how many times the loop will run.  It also simplifies the UI by letting the user know how many times the loop will run without having to think through a condition.  2) It allows for the more efficient creation of fixed-size arrays through an SCTL (rather than through a for loop).

I would like to see some form of simple locking mechanism for VIs that are targeted to an FPGA.

 

The use case would be where you have compiled a VI for your FPGA target and are currently in the process of debugging/testing it. While running interactively and opening and closing VIs, you accidentally move something on a block diagram without realizing it. The next time you hit the run button, LV shows you the "Generating Intermediate Files" dialog and you have now ventured down the one-way street to a full FPGA recompile.

 

I know that source code control or setting all files to read-only would also work, but when debugging a project, it is cumbersome to continually check all files in and out, or to continually change the directory attributes.

 

Just a simple lock/unlock button on the toolbar to keep from shooting myself in the foot while debugging.

 

....posted as I sit here waiting on a 4-hour FPGA compile for just this reason.

Cross Posted

 

I do a fair amount of Pipelining and it would be cool if I could Offset the Input Shift Register from the Output Shift Register.

The default would be to keep them aligned, but a right-click would give me the option to offset the input or output terminal. I think it would be bad form to allow crossing the terminals between multiple Shift Registers, so the top Input terminal would correspond to the top Output terminal.

Offset Shift registers.JPG

 

The NI 9802 (Secure Digital Removable Storage Module for CompactRIO) is a cRIO module that has two SD memory card slots. The problem is that the programmer cannot index the slots as "0" and "1". Today the only option is to write the code for slot "0" and repeat it for slot "1".

 

The proposal is to allow the user to select the memory card via a terminal on the "Method" and "Property" nodes.

 

Since the maximum capacity per card is 2 GB, the programmer has to split the data across the two cards whenever more than 2 GB is needed. Right now the code must be duplicated and selected with a "Case" structure. In many other situations the programmer may need to use one card or the other, for example when a big file should be saved after the usual check of the available free space on both cards.

 

 

Malleable FPGA VIs import into the Desktop Execution Node with the same datatype as the FPGA VI's "malleable terminal".  The Desktop Execution Node does not mutate the input type to match the "malleable terminal" of the FPGA VI.  As a result, host VI test benches cannot iterate Type Specialization Structure cases in the malleable FPGA VI.

 

The "anything" input to this Assert Structural Type Match node is an I16, which breaks this case against an I16, which is the "malleable terminal" of this VI.

 

PIE5669450_0-1687372970404.png

 

The Desktop Execution Node only sees the I16, and coerces other datatypes.

 

PIE5669450_1-1687373121899.png

 

IMO the compiled Type Specialization Structure case is a critical unit test, which depends on the data type of the control wired to the "malleable terminal", so this is a critical limitation of the Desktop Execution Node.

 

I think the intended use-case for the DEN is to hook into an FPGA VI that's in a loop, and if so, the inputs to the malleable VI are selected by the calling VI.  So, maybe this isn't a limitation of the DEN itself, but of the DEN workflow.

 

Thanks for your consideration,

 

Steve K

When simulating an FPGA VI, I use sampling probes (https://www.ni.com/docs/en-US/bundle/labview-fpga-module/page/lvfpgahelp/using_sampling_probe.html).

 

If I close the VI, the sampling probes are lost.  This is a request to be able to save the sampling probes for a given FPGA VI.

In the past, customers used ChipScope (https://www.xilinx.com/products/intellectual-property/chipscope_ila.html) with LabVIEW FPGA, or this ChipScope Pro debugging break-out box: https://www.ni.com/en-us/support/downloads/tools-network/download.xilinx-chipscope-pro-debugging-break-out-box.html#372379.

 

This is a request to support the Xilinx Vivado Integrated Logic Analyzer (ILA) in the LabVIEW FPGA tool flow.

The project I'm currently working on involves a USRP 2954R with a small amount of FPGA programming (with the code running at 200 MS/s). I, of course, started by editing the USRP FPGA Streaming example code to achieve this.

 

The Receiver code on the FPGA was edited to take the samples at 200 MS/s (without the usual decimation) and perform a complex multiplication on them using the High Throughput Math palette. After this, I decimate my samples (using the same decimator VI used in the Streaming example code) in a different loop in the Main FPGA VI.

 

Unfortunately, I keep receiving a timing error on compilation which, upon investigation, shows a large number of non-diagram components eating away at the loop time. What I don't understand is why a complex multiplication followed by a decimation would require that much time to execute.

 

I've tried using the pipeline feature of the Complex Multiplication and also various compilation styles that optimize timing, but I'm not able to exceed a 150 MHz clock rate.

 

I also checked the NI knowledge base page that talks about non-diagram components, but most of the issues it describes come down to a long critical path, which doesn't seem relevant in my case because I literally perform only 2 operations in the loop in question, along with the necessary pipelining operations.

 

I've included the image of the timing violation along with the VIs. Could anyone please let me know what's going on or if I'm doing something wrong?

 

PS: The compiler I'm using is Vivado 2019.1.1

 

PS 2: I haven't started working on the host yet.


Sometimes, you might not care about the outputs under certain input conditions. "Not caring" can lead to significant improvements in optimization and thus resource utilization, but there's currently no way to tell LabVIEW that you "don't care". I propose we create new data types that support "don't care". It should start with the booleans, but when you convert a boolean array to an integer, if one of the booleans is a "don't care", the numeric output also becomes a "don't care", which is yet another data type we would need.

 

Here's what a "don't care" might look like if the user didn't care about the output if the input was 2:

dont care.png
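A rough Python model of the proposed semantics, using None as a stand-in for the "don't care" value (all names are hypothetical):

    DONT_CARE = None  # stand-in for the proposed "don't care" value

    def bool_array_to_int(bits):
        """Boolean-array-to-number with the proposed propagation rule:
        if any bit is a "don't care", the numeric result is one too,
        leaving the compiler free to optimize that case away."""
        if any(b is DONT_CARE for b in bits):
            return DONT_CARE
        return sum(int(b) << i for i, b in enumerate(bits))

    print(bool_array_to_int([True, False, True]))      # 5
    print(bool_array_to_int([True, DONT_CARE, True]))  # don't care (None)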

When not using the Instruction Framework to interface from the host to LabVIEW FPGA, the FPGA VI reference register items cannot be ordered by the user.

 

They appear in an arbitrary order (the order of creation), and it is not easy to find and select them.

 

I am referring to this function: https://zone.ni.com/reference/en-XX/help/371599P-01/lvfpgahost/readwrite_control/

Can support for simulating CLIP nodes (as can already be done with the IP Integration Node) be provided in LabVIEW FPGA?

 

This would vastly simplify making reusable, modular subVIs to handle complex interactions involving reading and writing front-panel controls to communicate over the FPGA interface. Presently, this requires a lot of complex code to be copied onto a large, complex top-level VI. Being able to pass registers linked to front-panel controls would allow controls to be bundled into clusters of registers and sent to subVIs that could then be generically usable for repeat functionality or across multiple "channels".