
LabVIEW FPGA Idea Exchange


When you configure memory in LabVIEW FPGA, there is no way to find references to a particular memory block using the search function. The search shows "ALL" memory access nodes, not just the ones you are looking for. Additionally, a text search will not catch the text from the XNodes. This can be particularly tedious if you have many nodes in your hierarchy and only want to see references to a particular memory block.

 

I would like to see the search improved to allow filtering of memory nodes in the same way that we can search for global variables (Find Definition and Find References).

Problem:

LabVIEW's auto-indexing is carried over to LabVIEW FPGA, with one small caveat: you can easily auto-index into a loop, but not out of it. You will understand this better if you have already worked with LabVIEW FPGA.

In the FPGA paradigm, we enforce compile-time resource determinism by making sure all our arrays are of a fixed, pre-determined size. When auto-indexing out of a loop, the size of the resulting array may not be known at compile time, so the VI breaks with the error "Arrays must be of fixed size". Try to write the following code in LabVIEW FPGA for a better picture:

Auto-Indexing LV FPGA

 

Solution:

The current workaround is to use a fixed-size array which we carry into and out of the loop, replacing its elements as shown below.

 

2.PNG

 

However, an easier and much more intuitive solution would be to simply right-click the auto-indexed tunnel and set the dimension size.

Auto-Index Pop 

 

Of course, this means the number of elements flowing out of the loop could exceed our fixed size. We handle that case by providing the user with an "In case of overflow" option.

4.PNG
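To make the requested semantics concrete, here is a rough C sketch (not LabVIEW; the names and the size N are hypothetical) of the pattern that both the workaround and the proposed tunnel option boil down to: a compile-time fixed buffer, an element count, and an overflow flag.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical sketch: what "auto-index out of a loop into a fixed-size
 * array" amounts to on an FPGA target. N is fixed at compile time, so
 * resource usage stays deterministic; anything past N is reported via
 * an overflow flag (the "In case of overflow" option). */
#define N 16

typedef struct {
    int32_t  data[N];   /* fixed-size result array                  */
    uint32_t count;     /* how many elements were actually written  */
    bool     overflow;  /* true if the loop produced more than N    */
} fixed_array_t;

static void collect(const int32_t *samples, uint32_t n_samples,
                    fixed_array_t *out)
{
    out->count = 0;
    out->overflow = false;
    for (uint32_t i = 0; i < n_samples; ++i) {
        if (out->count < N) {
            out->data[out->count++] = samples[i];  /* "Replace Array Subset" */
        } else {
            out->overflow = true;   /* discard extra elements, flag it */
        }
    }
}
```

Discarding the extra elements is only one possible overflow policy; the "In case of overflow" option could equally offer stopping the loop or overwriting from the start.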

 

 

This would ease the effort of coding LabVIEW FPGA as much as it would make the code more intuitive. Vote for this idea if you think it would make your life a tad bit easier.

When you are using the same code on different boards, it would be a big help if you could set the "FPGA VI Reference" indicator to "Adapt to source". When I use different DMAs on different targets, the wire breaks every time I change targets:

 

My FGV for the FPGA reference looks like:

 

FPGA ref Adapt to source.png

When setting up a custom VI to use as a simulation test bench, you are currently limited to setting a constant path. If you move your VI, send it to another user, change the name, or otherwise change the path of the custom VI, you must enter the FPGA target settings and modify the path. That is, if you moved the project below from user1's desktop to user2's desktop, you would have to enter the properties and change the location. This makes code sharing somewhat tedious.

psugg.png

Using a relative path or a project item would make this feature easier to use.

Presently, the Xilinx Compile Tools do not appear in the MAX technical report or in NI License Manager. As a result, users must go to Add/Remove Programs in the Control Panel to determine which versions they have installed. It would be great for troubleshooting if the Xilinx version could be included in the MAX technical report.

 

In addition, the Compile Worker states that the Xilinx version used is 12.4, regardless of whether you are using 12.4 or 12.4 SP1. It would be useful for the Compile Worker to report exactly which version it is using. In particular, the compilation often chooses the compile tools based on what the code was compiled with previously; after upgrading to 12.4 SP1, the user may assume the compiler automatically uses the new compile tools, but has no visual cue to verify which version was actually used.

When setting up memory in LabVIEW FPGA, there seems to be one important setting missing: a description.

 

Good programming practice dictates that we should have descriptions in our code, similar to the VI description of a global variable. This would also help immensely when using bit-packed memory blocks to define status bytes and the like: the meaning of each individual bit could be recorded in the description rather than being dropped as a block-diagram comment everywhere one of the nodes is used.
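As an illustration (a hypothetical C sketch, since the memory block itself has no textual form), this is the kind of per-bit documentation that currently ends up scattered as block-diagram comments and could instead live in a description field:

```c
#include <stdint.h>

/* Hypothetical status word stored in one U32 memory element.
 * In text-based languages the bit meanings live next to the definition;
 * a description field on the FPGA memory block would serve the same role. */
enum {
    STATUS_READY      = 1u << 0,  /* bit 0: module initialised          */
    STATUS_OVERRANGE  = 1u << 1,  /* bit 1: input exceeded full scale   */
    STATUS_FIFO_FULL  = 1u << 2,  /* bit 2: host DMA FIFO overflowed    */
    STATUS_PLL_LOCKED = 1u << 3,  /* bit 3: sample clock PLL is locked  */
};

static inline int status_has(uint32_t status_word, uint32_t flag)
{
    return (status_word & flag) != 0;
}
```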

Perhaps there's already a good way to do this, but some structures/nodes are allowed in a Single-Cycle Timed Loop (SCTL) even though their behaviour is significantly changed, perhaps breaking your VI.

It would be good to be able to mark VIs in some way as unsuitable for use within a SCTL.

 

An example is the Flat Sequence Structure: you can place this in a SCTL and it can pass intermediate file generation, but the behaviour is as if there were no sequence structure at all.

Assuming the sequence structure isn't always superfluous, this probably indicates invalid behaviour, but it is not necessarily easy to detect (e.g. via broken compilation or intermediate file generation).

 

Some specific node that could be placed on a block diagram and indicate that a VI cannot be placed inside a SCTL would be useful.

Something like a Divide node can be used for this, but not trivially: you need to actually use the output of the Divide, or dead-code elimination lets the intermediate files be generated happily. It took me quite a few tries to get a failure even with a SGL-precision divide in a SCTL... wiring to a structure or an indicator is not enough; it must be something that actually uses the value.

The actual "Wait on ... I/O method" has an input named Timeout ... With no unit !

 

Wait on I_O.PNG

 

So I went to the NI forums and found this article: http://digital.ni.com/public.nsf/allkb/0D8325309894115286256F3B00341159

 

It would be nice to add the unit to the name of the input parameter, e.g. "Timeout (ticks)".

 

Better still, it would be nice to be able to configure the timeout unit, as you can for timing objects, like this:

 

Wait on I_O_2.PNG

 

 

Like the Formula Node or the MathScript Node, why not a node that supports Verilog and VHDL? Yes, I know that hand-coding HDL will take a lot more time compared to what can be sweetly done in LabVIEW (so don't compare), but at times Verilog support would have its advantages.

In LabVIEW FPGA 2011, only the base clocks enumerated in the project and clocks derived from the base clock(s) are available in the FPGA Clock Control. I’d like LabVIEW to show the top level clock in this control as well.

 

Consider designs with nested components that both CAN and CANNOT be optimized with the single-cycle timed loop. If the domain of the SCTL does not match the top-level clock domain that contains it, you seem to pay a heavy performance penalty. I presume it’s due to the clock-crossing logic under the hood. Thank you, by the way, for dealing with this for me!  For example, consider this VI:

 

2013-03-04_164826.png

 

The While Loop will take more ticks to execute (a few hundred more in cases I've seen) than if the Clock Control constant were set to 200 MHz (assuming you could compile). So, just set the top-level clock and the clock control to be the same, right? Sure, except when you change the top-level clock and, a few hours later when the compile is finished, realize you forgot (gasp) to change a clock constant, and the code no longer meets its timing requirement.

 

Project Clocks:

2013-03-04_163653.png

 

LabVIEW 2011 Behavior:

2013-03-04_163818.png

 

Desired Behavior:

2013-03-04_163818b.png

 

Thanks!

 

-Steve K

Why the "Stacked Sequence Structure" is still present in the FPGA palette? It has been removed from other targets palettes. I think it's better not be in the FPGA palette also.

FPGA registers would be more user-friendly if they could be Quick Dropped and were searchable (Find Caller, as has been suggested before). This would also be great for handshakes.

I often work with the FPGA in hybrid mode because the Scan Interface covers most of the project requirements 90% of the time. When NI added support for the SGL datatype to the FPGA Module in 2012 (?), they overlooked user-defined variables. There is currently no built-in support for typecasting a SGL to a U32, so passing SGL data back to the host requires front-panel controls or custom typecasting solutions (see SGL typecast) on both the FPGA and host layers.

 

Please add SGL as an option for user-defined variables.
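For reference, the "custom typecasting" mentioned above is just a bit-level reinterpretation of the 32 bits; a minimal C sketch (the function names are mine) of what currently has to be rebuilt on both the FPGA and host layers:

```c
#include <stdint.h>
#include <string.h>

/* Minimal sketch of the SGL <-> U32 round trip that has to be hand-built
 * today: reinterpret the 32 bits of a single-precision float as a U32 on
 * one side, then undo it on the other. */
static uint32_t sgl_to_u32(float value)
{
    uint32_t bits;
    memcpy(&bits, &value, sizeof bits);   /* bit-for-bit copy, no numeric conversion */
    return bits;
}

static float u32_to_sgl(uint32_t bits)
{
    float value;
    memcpy(&value, &bits, sizeof value);
    return value;
}
```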

 

 

I don't like static resource definitions (FIFOs, block RAMs, or DMAs) in my projects. I prefer to have the code declare such entities as they are required, because this makes scalability much easier to achieve.

For FIFOs, block RAM and so on this is no problem, but there are two things we currently cannot instantiate in code:

  • DMA Channels
  • Derived clocks

 

To deal with the first, why can't we define a DMA channel in the code?  When parsing the code before compiling, the presence of a DMA channel can be autodetected and added to the interface for the Bitfile. 

 

To decouple my code from static DMAs, I have started defining my core FPGA VIs as accepting FIFOs, requiring Write functions (for DMAs to the host) or Read functions (for writing to the FPGA). I can then, without having to change my project, wrap this FPGA VI in another VI which passes in either a DMA channel (which unfortunately must be defined in the project) or a standard FIFO which can then be used for debugging.

 

Please allow for the instantiation of DMA channels in code.

It would be nice to be able to use logic operators with fixed-point numbers.


 

 

The Loop Timer Express VI is very useful for timing a loop to an exact rate. However, if you want to be sure the loop is meeting the requested rate, you also have to add Tick Count VIs like this:

 

loop counter fpga.png

 

Since the Loop Timer Express VI is already calculating how long it needs to wait to achieve the desired loop time, I would prefer that it at least output a Boolean indicating that it failed to achieve the required timing.

 

failed timing.png

 

It would be best if it output the actual number of ticks it waited in a signed form such as I16, so it could go negative (indicating the number of ticks by which it missed the timing).

 

counts waited.png
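In textual terms, the requested outputs amount to one signed subtraction using values the Express VI already has; a rough C sketch under that assumption (all names are hypothetical):

```c
#include <stdbool.h>
#include <stdint.h>

/* Rough sketch of the requested outputs: given the requested period and
 * the ticks actually elapsed since the last iteration, report how many
 * ticks were left over. A negative "ticks_spare" means the iteration ran
 * late by that many ticks; "met_timing" is the proposed Boolean output. */
typedef struct {
    int16_t ticks_spare;  /* >= 0: waited this many ticks; < 0: late */
    bool    met_timing;
} loop_timing_t;

static loop_timing_t check_loop_timing(uint32_t requested_ticks,
                                       uint32_t elapsed_ticks)
{
    loop_timing_t r;
    r.ticks_spare = (int16_t)((int32_t)requested_ticks - (int32_t)elapsed_ticks);
    r.met_timing  = (r.ticks_spare >= 0);
    return r;
}
```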

For some applications, I find myself configuring memory blocks to store custom controls which I maintain as type definitions. Type definitions normally have the advantage that changing them updates them everywhere they are used.

 

Unfortunately, when I change a type-defined control for which a memory block has been configured, the memory block does not pick up the change, and my code breaks. It appears that the memory block disconnects the control from its type definition when configured. It would be nice if the memory block were updated automatically, as that is what I would expect from a type-defined control.

When using the Xilinx IP nodes in LabVIEW FPGA, it becomes very difficult to support source code control and branching. The biggest issue is that the "Folder for Support Files" entry is an absolute path, so when we branch the code to isolate new feature development from the main trunk, the path is wrong. Please make this and all other paths relative, to support a more robust development environment.

 

 

It would be nice to have "time unit converters" in the LabVIEW FPGA Timing menu.

 

My need would be to automatically convert ticks to µs according to the local clock frequency ...

 

  • Ticks -> µs
  • µs -> Ticks
  • Ticks -> ms
  • ms -> Ticks

 

Using this kind of automatic converter in place of manual calculations with constants would help during code evolution ...
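As a sketch of what such converters would compute (plain C rather than LabVIEW; the clock rate is passed in explicitly, whereas on an FPGA target it would come from the local clock domain, e.g. 40 MHz by default on many targets):

```c
#include <stdint.h>

/* Sketch of the proposed converters. clock_hz would be taken from the
 * local clock domain; here it is an explicit parameter. Intermediate
 * math is done in 64 bits to avoid overflow. */
static uint32_t ticks_to_us(uint32_t ticks, uint32_t clock_hz)
{
    return (uint32_t)(((uint64_t)ticks * 1000000u) / clock_hz);
}

static uint32_t us_to_ticks(uint32_t us, uint32_t clock_hz)
{
    return (uint32_t)(((uint64_t)us * clock_hz) / 1000000u);
}

static uint32_t ticks_to_ms(uint32_t ticks, uint32_t clock_hz)
{
    return (uint32_t)(((uint64_t)ticks * 1000u) / clock_hz);
}

static uint32_t ms_to_ticks(uint32_t ms, uint32_t clock_hz)
{
    return (uint32_t)(((uint64_t)ms * clock_hz) / 1000u);
}
```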

Currently, the FPGA palette is specific to the target used in the LabVIEW project. It would be a great idea to have a common part of this palette for adding drivers/functions regardless of the target type.