LabVIEW FPGA Idea Exchange

I'd like to have a dedicated FPGA Compile Server based on a really slim OS, e.g. Damn Small Linux or even Phar Lap. The OS does not really matter, as long as it is multi-core capable and uses as few system resources as possible, leaving as many resources as possible for the compilation process.

 

Purpose: get maximum compilation speed

 

Hello,

Recently (last year) I had a bad experience when I tried to compile my old FPGA applications under Windows 10.

 

=> The Xilinx ISE FPGA compiler is no longer compatible with Windows 10.

 

Will something be done? I got no clear answer from NI support... supposedly it is a Xilinx problem!

 

The issue is that some products on the NI web site are sold without clear information about this incompatibility with Windows 10!

 

Please add a clear, highlighted warning on the product pages for FPGA boards and cRIO targets in order to inform buyers about the problem.

 

Thanks for your help.

 

Presently, the Xilinx Compile Tools do not appear in the MAX technical report or NI License Manager. As a result, users must go to Add/Remove Programs in the Control Panel to determine which versions they have installed. It would be great for troubleshooting if the Xilinx version could be included in the MAX technical report.
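In the meantime, the version information can be pulled from the same place Add/Remove Programs reads it. A minimal Python sketch, assuming a Windows host; the registry path is the standard uninstall hive, everything else is illustrative:

```python
# List installed Xilinx tools by scanning the Add/Remove Programs
# registry entries (Windows-only; 32-bit registry view caveats ignored).
import winreg

UNINSTALL = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall"

def installed_xilinx_tools():
    """Yield (name, version) for installed entries mentioning Xilinx."""
    hive = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, UNINSTALL)
    for i in range(winreg.QueryInfoKey(hive)[0]):
        try:
            sub = winreg.OpenKey(hive, winreg.EnumKey(hive, i))
            name, _ = winreg.QueryValueEx(sub, "DisplayName")
            if "xilinx" in name.lower():
                version, _ = winreg.QueryValueEx(sub, "DisplayVersion")
                yield name, version
        except OSError:
            continue  # entry lacks a DisplayName/DisplayVersion

for name, version in installed_xilinx_tools():
    print(f"{name}: {version}")
```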

 

In addition, the Compile Worker states that the version of Xilinx used is 12.4, regardless of whether you are using 12.4 or 12.4 SP1. It would be useful for the Compile Worker to report exactly which version it is using. In particular, the compilation often chooses the compile tools based on what the VI was compiled with previously; after upgrading to 12.4 SP1, the user may assume the compiler automatically uses the new tools, with no visual cue to verify which compile tools were actually used.

Perhaps there's already a good way to do this, but some structures/nodes are allowed in a Single-Cycle Timed Loop (SCTL) even though their behaviour there is significantly changed, perhaps breaking your VI.

It would be good to be able to mark VIs in some way as unsuitable for use within a SCTL.

 

An example is the Flat Sequence structure - you can place one in an SCTL and it passes intermediate file generation, but it behaves as if there were no sequence structure at all.

Assuming the structure isn't superfluous, this probably indicates invalid behaviour, yet it is not obvious to detect (there is no broken compilation, and intermediate file generation succeeds).

 

Some specific node that could be placed on a block diagram to indicate that a VI cannot be placed inside an SCTL would be useful.

Something like a Divide can be used for this, but not trivially - you need to actually use the output of the Divide, or else dead-code elimination lets the intermediate files be generated happily. It took me quite a few tries to get a failure even with a SGL-precision Divide in an SCTL... wiring to a structure or an indicator is not enough; it must be something that actually uses the value.

I don't like static resource definitions (FIFOs, Block RAMs, or DMAs) in my projects. I prefer to have the code declare such entities as they are required, because this makes scalability much easier to achieve.

For FIFOs, Block RAM and so on this is no problem, but there are two things we currently cannot instantiate in code:

DMA Channels

Derived clocks

 

To deal with the first: why can't we define a DMA channel in the code? When parsing the code before compiling, the presence of a DMA channel could be auto-detected and added to the interface for the bitfile.

 

To decouple my code from static DMAs, I have actually started defining my core FPGA VIs as accepting FIFOs, with Write functions (for DMAs to the host) or Read functions (for writing to the FPGA) as required. I can then, without having to change my project, wrap this FPGA VI in another VI which can pass in either a DMA channel (which unfortunately must still be defined in the project) or a standard FIFO, which can then be used for debugging.

 

Please allow for the instantiation of DMA channels in code.

Compiling can take a long time, and it would be cool to get updates via SMS or email at various stages of the compile process.
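Until something like this is built in, it can be bolted on externally. A minimal Python sketch of the notifier itself, assuming an SMTP relay you can reach; the host and addresses are placeholders, and SMS could be covered via a carrier's email-to-SMS gateway:

```python
# Send a one-line status email for a named compile stage.
import smtplib
from email.message import EmailMessage

def notify(stage: str, to: str = "me@example.com") -> None:
    msg = EmailMessage()
    msg["Subject"] = f"FPGA compile: {stage}"
    msg["From"] = "compile-server@example.com"   # placeholder sender
    msg["To"] = to
    msg.set_content(f"The compile has reached stage: {stage}")
    with smtplib.SMTP("smtp.example.com") as server:  # placeholder relay
        server.send_message(msg)

# e.g. call at whichever stages you can detect:
notify("intermediate files generated")
notify("bitfile generation complete")
```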

 

 

According to LabVIEW FPGA 2018 Help, "Using a sequence structure inside a single-cycle Timed Loop has no sequencing effect."

 

The compile should fail when these structures are used inside single-cycle Timed Loops.

 

NI's own example of guaranteeing sequential access to a shared resource shows a Flat Sequence structure, with no note or caveat about using the structure inside an SCTL.

 

-Steve K

When debugging, I find it useful to have Graphs on my front panels - mostly for running in simulation mode, but sometimes I also want to verify that the compiled code behaves the same way.

 

I currently have to replace all of my Graphs (fed with fixed-size arrays) with Arrays, since I can't define a Graph on the front panel to be a fixed size the way I can with an Array. This makes debugging a bit more of a pain than it needs to be.

 

Is it possible to get the option to define a Graph as being a fixed size, so that this replacement step is unnecessary?

 

Recompiling an FPGA VI can be time-consuming when debugging a large program. The emulator mode is not useful when the process includes debugging real I/O connections (vs. emulator-simulated ones). I would propose a useful "fix" to the emulator I/O problem: could the emulation mode have the ability to use all the I/O as "pass-through" connections from the FPGA to the host, in order to actually exercise the I/O? This would involve a very simple FPGA VI that connects all the I/O to appropriate indicators or controls. If this pre-compiled VI were downloaded and running on the FPGA during emulation mode, you could actually debug real I/O connections without compiling your entire VI.

FPGA bitfiles should not have any dependency on the project name or target name. What if you change the name of your project? What if you change the name of the target? These dependencies should correspond only to the VI and its location in the project tree and FPGA target. FPGA bitfiles should live in the same directory as the VI, just with a different extension.

Change the automatic name and path of FPGA bitfiles from:

.\FPGA Bitfiles\ProjectName.lvproj_TargetName_ViName.vi.lvbitx

to

.\ViName.vi.lvbitx
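Until the default changes, a post-build script can at least mirror each bitfile to the flat name. A minimal Python sketch; the folder and naming pattern are taken from the default above, and the split assumes no underscores in the VI name:

```python
# Copy Project_Target_ViName.vi.lvbitx to <vi_dir>/ViName.vi.lvbitx.
from pathlib import Path
import shutil

def flatten_bitfile(bitfile: Path, vi_dir: Path) -> Path:
    # .stem strips only ".lvbitx"; the last "_" segment is "ViName.vi"
    # (assumption: the VI name itself contains no underscores).
    vi_name = bitfile.stem.split("_")[-1]
    dest = vi_dir / f"{vi_name}.lvbitx"
    shutil.copy2(bitfile, dest)
    return dest

for bf in Path("FPGA Bitfiles").glob("*.lvbitx"):
    print("->", flatten_bitfile(bf, Path(".")))
```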

 


I would suggest implementing the possibility of using multiple compile servers at the same time.

Imagine you have a project with many FPGA targets: it would be useful to send the FPGA VIs to compilers working in parallel (a sort of compile farm...).
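A minimal Python sketch of what the dispatch could look like. The server list, VI names, and the `compile_fpga_vi` command are all hypothetical placeholders (LabVIEW exposes no such CLI); this only illustrates round-robin fan-out:

```python
# Farm FPGA compile jobs out to several servers in parallel.
import subprocess
from concurrent.futures import ThreadPoolExecutor

SERVERS = ["fpga-build-01", "fpga-build-02", "fpga-build-03"]  # hypothetical hosts
TARGETS = ["RT_Main.vi", "DAQ_Loop.vi", "Safety_Monitor.vi"]   # hypothetical VIs

def compile_on(server: str, vi: str) -> int:
    # Placeholder: stands in for whatever submits one job to one server.
    return subprocess.call(["compile_fpga_vi", "--server", server, vi])

with ThreadPoolExecutor(max_workers=len(SERVERS)) as pool:
    servers = (SERVERS[i % len(SERVERS)] for i in range(len(TARGETS)))
    for vi, code in zip(TARGETS, pool.map(compile_on, servers, TARGETS)):
        print(vi, "OK" if code == 0 else f"failed ({code})")
```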

 

Cheers,

Marco

 

Improper use of Global Variables in an SCTL causes compile error 61056.

 

Currently, this error does not alert the user until a considerable amount of compile time has already been spent.

Please include a check in LabVIEW for improper use, and alert the user before compiling.

 

*Created for service request per customer recommendation.

Many times I create a new FPGA VI to run from the same project, and it needs an extra memory block or maybe a new I/O pin, so I add it to the project for that new VI. Meanwhile, all my other FPGA VIs that have nothing to do with the added piece will now need to recompile (very time-consuming).

 

It would be nice if those VIs did not need to recompile, since the new memory block, I/O, or clock is not used in the already-compiled VIs.

The CORDIC High throughput functions available in LabVIEW are capable of running at high frequencies, thus allowing FPGA code to (for example) multiplex multiple demodulators without exploding device utilisation.

 

Unfortunately, the option to apply a gain correction to the results does not pipeline the actual multiplication, artificially limiting the achievable speed of the CORDIC algorithms.

 

In my code I always deactivate the gain compensation and do it "manually", allowing the code to compile at much higher frequencies while utilising the FPGA device more efficiently.
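For background on why the manual step is just one multiply: the CORDIC rotation gain converges to K = prod sqrt(1 + 2^-2i) ~= 1.64676, so compensation is a single multiplication by 1/K ~= 0.60725, which can be pipelined like any other multiply. A quick Python check of the constant:

```python
# Compute the CORDIC gain K and the compensation factor 1/K.
import math

def cordic_gain(iterations: int) -> float:
    k = 1.0
    for i in range(iterations):
        k *= math.sqrt(1.0 + 2.0 ** (-2 * i))
    return k

K = cordic_gain(16)
print(f"gain K = {K:.6f}, compensation factor 1/K = {1/K:.6f}")
# gain K = 1.646760, compensation factor 1/K = 0.607253
```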

 

It would be great if it were possible to also pipeline this multiplication as part of the CORDIC High-throughput node instead of being forced to implement the multiplication separately.

In connection with another, more general idea I have posted, I have come to the conclusion that it would be nice to run an analysis of the Xilinx log in order to report which code has been constant-folded by the Xilinx compiler.

 

Other aspects, such as specific resource utilisation, would be really cool as well (SRL32 vs. registers for Feedback Nodes). This would obviously be a post-bitfile operation, but it could at least give some direct feedback as to what the Xilinx compiler has modified in the code (dead-code elimination, constant folding, etc.).
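A minimal Python sketch of such a post-compile pass. The phrases matched below are typical of Xilinx synthesis reports but are assumptions; a real tool would need tuning against actual logs, and the log path is a placeholder:

```python
# Scan a Xilinx compile log for optimization-related messages.
import re
from pathlib import Path

PATTERNS = [
    r"constant",   # constant-folded signals
    r"trimm\w*",   # trimmed (dead) logic
    r"remov\w*",   # removed redundant blocks
    r"SRL\d+",     # shift-register LUT inference
]
RX = re.compile("|".join(PATTERNS), re.IGNORECASE)

def summarize(log: Path) -> None:
    for lineno, line in enumerate(log.read_text(errors="ignore").splitlines(), 1):
        if RX.search(line):
            print(f"{lineno:>6}: {line.strip()}")

summarize(Path("XilinxLog.txt"))  # hypothetical log path
```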

In LabVIEW 2009 and prior, after the compilation of an FPGA VI, the bitfile was automatically downloaded to the EtherCAT target. From 2010 on, that process became manual: after the compilation, you need to go to the Build Specifications, right-click the bitfile that was created, and select Download. A regular cRIO does this automatically, and I don't see the point of downloading manually.

 

Does anyone know the reason for this? If it was not intended, I like auto-download a lot better. At the same time, in 2009 and prior the bitfiles were not shown under the Build Specifications, which also bothered me. So my conclusion is: it would be better to show the bitfile under the Build Specifications AND download it automatically to the EtherCAT target.

When I build a new bitfile for my project, I sometimes (shock horror) make mistakes and bring the whole house of cards crashing down.

 

In situations like that, I would love to have the last version of the bitfile available for re-testing. Ideally, I could specify pre- and post-build options for my compilation, where I define my own automatic renaming and archiving scheme so that I no longer need painful recompiles just to revert my code.
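A minimal Python sketch of the pre-build half of such a scheme: before a new compile overwrites the bitfile, stash a timestamped copy. The folder layout is an assumption:

```python
# Archive the current bitfiles with a timestamp before recompiling.
import shutil, time
from pathlib import Path

BITFILE_DIR = Path("FPGA Bitfiles")          # assumed output folder
ARCHIVE_DIR = BITFILE_DIR / "archive"

def archive_last_good() -> None:
    ARCHIVE_DIR.mkdir(exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    for bf in BITFILE_DIR.glob("*.lvbitx"):
        shutil.copy2(bf, ARCHIVE_DIR / f"{bf.stem}_{stamp}{bf.suffix}")

archive_last_good()   # call this as the pre-build action
```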

 

I am aware that this probably applies to more than FPGA, but here the compilation times are more prohibitive and I feel the need is greater.

I would like to see some form of simple locking mechanism for VIs that are targeted to an FPGA.

 

The use case would be where you have compiled a VI for your FPGA target and are currently in the process of debugging/testing it. While running interactively and opening and closing VIs, you accidentally move something on a block diagram without realizing it. The next time you hit the Run button, LabVIEW shows you the "Generating Intermediate Files" dialog, and you have now ventured down the one-way street to a full FPGA recompile.

 

I know that source code control, or setting all files to read-only, would also work; but when debugging a project it is cumbersome to continually check all files in and out, or to keep changing file attributes.
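The attribute-flipping variant can at least be automated. A minimal Python sketch that toggles the read-only flag on every VI under a project folder, so an accidental diagram edit cannot be saved; the path is a placeholder:

```python
# Flip the read-only attribute on all VIs under a project folder.
import os, stat
from pathlib import Path

def set_lock(project_dir: Path, locked: bool) -> None:
    mode = stat.S_IREAD if locked else stat.S_IREAD | stat.S_IWRITE
    for vi in project_dir.rglob("*.vi"):
        os.chmod(vi, mode)

set_lock(Path(r"C:\MyFPGAProject"), locked=True)    # lock
# set_lock(Path(r"C:\MyFPGAProject"), locked=False) # unlock
```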

 

Just a simple lock/unlock button on the toolbar to keep from shooting myself in the foot while debugging.

 

....posted as I sit here waiting on a 4-hour FPGA compile for just this reason.

I know that when connected to the compile server, the local compile status window will show you when a compile is done. However, that severely limits productivity: the only way to get back to working in LabVIEW is to disconnect from the compile server, and then you don't get any feedback as to when your compile has completed. This is especially true if your compile server is running on a remote machine.

 

Why not add a feature to LabVIEW that allows disconnecting from the compile server while a background polling service updates the user when the compile has completed? Something as simple as a dialog box telling me that my compile is ready would be great. It would allow me to get back to work on other sections of the code while still closing the loop on the running FPGA compile process and alerting me when it is done.

 

If the system polled once every minute or so, that would be more than adequate.
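Pending a built-in feature, the poll can live outside LabVIEW. A minimal Python sketch that checks once a minute for a finished compile (detected here as a new or updated .lvbitx, an assumption) and pops a simple dialog; the watched folder is a placeholder:

```python
# Poll for a fresh bitfile once a minute, then show a dialog.
import time
import tkinter as tk
from tkinter import messagebox
from pathlib import Path

WATCH = Path(r"C:\MyProject\FPGA Bitfiles")   # placeholder output folder

def latest_mtime() -> float:
    return max((f.stat().st_mtime for f in WATCH.glob("*.lvbitx")), default=0.0)

baseline = latest_mtime()
while latest_mtime() <= baseline:
    time.sleep(60)                            # poll once every minute

root = tk.Tk(); root.withdraw()               # no main window, just the dialog
messagebox.showinfo("FPGA Compile", "Your compile is ready.")
```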

 

 

The project I'm currently working on involves a USRP-2954R with a small amount of FPGA programming (with the code running at 200 MS/s). I, of course, started by editing the USRP FPGA Streaming example code to achieve this.

 

The receiver code on the FPGA was edited to take the samples at 200 MS/s (without the usual decimation) and perform a complex multiplication on them using the high-throughput math palette. After this, I decimate my samples (using the same decimator VI used in the Streaming example code) in a different loop in the main FPGA VI.

 

Unfortunately I keep receiving a timing error on compilation which, upon investigation, shows a large number of non-diagram components eating away at the loop time. What I don't understand is why a complex multiplication followed by a decimation would require that much time to execute.

 

I've tried using the pipelining option of the complex multiplication, and also various compilation strategies that optimize for timing, but I'm not able to exceed a 150 MHz clock rate.
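For a sense of what those numbers mean (assuming a 200 MHz SCTL is needed to process 200 MS/s at one sample per cycle, which is not stated above): closing timing at 150 MHz but not 200 MHz brackets the critical path. A quick back-of-the-envelope in Python:

```python
# Cycle budget at the desired vs. achieved clock rates.
for f_mhz in (200, 150):
    print(f"{f_mhz} MHz -> {1000 / f_mhz:.2f} ns per cycle")
# 200 MHz -> 5.00 ns per cycle
# 150 MHz -> 6.67 ns per cycle
# i.e. the critical path lies somewhere between 5.00 and 6.67 ns.
```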

 

I also checked the NI knowledge base page that talks about non-diagram components, but most of the issues described there concern a long critical path, which in my case is not relevant, because I literally perform only two operations in the loop concerned, along with the necessary pipelining.

 

I've included the image of the timing violation along with the VIs. Could anyone please let me know what's going on or if I'm doing something wrong?

 

PS: The compiler I'm using is Vivado 2019.1.1

 

PS 2: I haven't started working on the host yet.
