Does your idea apply to LabVIEW in general? Get the best feedback by posting it on the original LabVIEW Idea Exchange.
Browse by label or search in the LabVIEW FPGA Idea Exchange to see if your idea has previously been submitted. If your idea already exists, be sure to vote for it by giving it kudos to indicate your approval!
If your idea has not been submitted, click New Idea to submit a product idea to the LabVIEW FPGA Idea Exchange. Be sure to submit a separate post for each idea.
Watch as the community gives your idea kudos and adds their input.
As NI R&D considers the idea, they will change the idea status.
Give kudos to other ideas that you would like to see in a future version of LabVIEW FPGA!
Even though ibberger touched on the concept in the idea, I think most people use LabVIEW under Windows. Compiling an FPGA VI happens entirely on the PC under Windows. I noticed that during this process the compiler uses only one core. Since I'm using a machine with a four-core processor, CPU usage rarely goes above 25%.
My idea is to update the compiler to make it multicore. The user should have the option to limit the maximum number of cores available to the compiler. This is necessary because the user may want to continue working while the compile runs in the background.
I don't like static resource definitions (FIFOs, Block RAMs, or DMAs) in my projects. I prefer to have the code declare such entities as they are required, because this makes scalability much easier to achieve.
For FIFOs, Block RAM and so on this is no problem, but there are two things we currently cannot instantiate in code:
To deal with the second option: why is it currently not possible to create a derived clock in code? The ability to have one piece of code accept a single clock reference and let one loop run at a multiple of that speed is something I've wanted in the past, but it is currently impossible in LabVIEW.
Please let us configure / define derived clocks in LV code.
If I am choosing to offload multiple FPGA compilations to either a local or cloud compile farm, can we not at least do the intermediate file generation in parallel? Our current design takes approximately 10-15 minutes to generate intermediate files. For 5 cloud compiles, this blocks my IDE for around an hour.
Since the file creation processes are independent of each other, why can't we do them in parallel?
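Since the per-spec generation steps are independent, the requested behavior amounts to fanning the work out across workers. A minimal sketch of that idea in Python, where `generate_intermediate_files()` is a hypothetical stand-in for whatever LabVIEW does per build specification (it is not a real LabVIEW API):

```python
from concurrent.futures import ThreadPoolExecutor

def generate_intermediate_files(build_spec: str) -> str:
    # Hypothetical placeholder for the real per-spec generation work.
    return f"{build_spec}: intermediate files written"

def generate_all(build_specs, max_workers=4):
    # Each build spec is independent, so they can run concurrently;
    # map() preserves the order of the input specs in the results.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(generate_intermediate_files, build_specs))
```

With 5 cloud compiles and independent 10-15 minute generation steps, running them concurrently would bound the wait at roughly the longest single spec instead of the sum of all five.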
This is the current situation when dealing with register creation on FPGA targets:
This is what I would like:
I am currently creating a group of classes to abstract out inter-loop communication, and the ONLY thing changing between classes (aside from variations between Register vs FIFO vs Global and so on) is the datatype. Being able to link the Register creation to a data input (the data value of the class itself, for example) would save a lot of work in such operations. If it were also possible to do the same for the Register stored within the class private data, then implementing different classes in this way would be really easy.
Even without classes, the ability to autoadapt the type of registers / FIFOs in this way would be a real step towards re-usable code on FPGA.
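The "type follows the data" idea is essentially what generics give you in text-based languages: one definition that adapts to whatever datatype it is created with, instead of one hand-edited copy per type. A sketch of the concept using Python generics as an analogy (the `Register` class here is illustrative only, not a LabVIEW FPGA API):

```python
from typing import Generic, TypeVar

T = TypeVar("T")

class Register(Generic[T]):
    """One register definition whose datatype adapts to its init value."""

    def __init__(self, initial: T) -> None:
        self._value: T = initial  # datatype inferred from the data input

    def write(self, value: T) -> None:
        self._value = value

    def read(self) -> T:
        return self._value

# The same definition serves any payload type -- no per-type copies:
r_int = Register(0)           # integer register
r_arr = Register([0.0] * 8)   # fixed-size array register
```

In LabVIEW terms, wiring the class's own data value into the Register creation node would play the role of the `initial` parameter above.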
I have several FPGA projects that require significant compile time (up to 1.5 hours), and for that I am thankful to have my compile server running on a separate computer.
The issue comes with the seven pre-compile steps that occur before LabVIEW sends the code to the compiler. On one particular project this step alone can take up to 35 minutes, during which time I can do nothing on that machine.
I would like to see much of this precompile time moved from the development environment to the compile server. There already exists a mechanism for updating the user with the compile status so those precompile errors could be annunciated in a similar fashion.
Get the development system back online as quickly as possible.
For debugging, using FPGA VIs in interactive mode can be very valuable. I have, to this day, not been able to find out how LV determines if a bitfile and a VI match.
Therefore, whenever I click the run button for a VI, I'm never quite sure whether the bitfile will match, and I often have to wait 1-5 minutes before I can resume working with LabVIEW. This is a very high price to pay for something I end up cancelling. I would very much like the IDE to TELL ME that the bitfile and VI don't match before it starts a new compilation and wastes my time.
This is as opposed to a Ctrl-click of the run arrow, which explicitly tells the IDE to compile.
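The requested check could work like any artifact-caching scheme: record a signature of the VI hierarchy when a bitfile is built, and compare signatures before launching a new compile. A hypothetical sketch, where both helpers and the hashing scheme are assumptions, not LabVIEW's actual (undocumented) matching mechanism:

```python
import hashlib

def vi_signature(vi_source: bytes) -> str:
    # Hypothetical stand-in for however the IDE fingerprints a VI hierarchy.
    return hashlib.sha256(vi_source).hexdigest()

def needs_recompile(vi_source: bytes, bitfile_signature: str) -> bool:
    # Compare the current VI against the signature stored with the bitfile;
    # only a mismatch warrants kicking off a new multi-minute compile.
    return vi_signature(vi_source) != bitfile_signature
```

With such a check, an unchanged VI would go straight to interactive mode, and a mismatch would produce the requested warning instead of a silent recompile.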
When developing an FPGA application in LabVIEW, after submitting an FPGA code compilation (usually quite a lengthy process), modifying the code on either the Front Panel or the Block Diagram while compilation is in progress results in a Compilation Error at the end.
This occurs even if the modification is a mere cosmetic change with no implication for the code being compiled. It is quite frustrating to realize that the compilation has failed (maybe after half an hour of waiting) just because you absent-mindedly clicked and resized some control or node.
In such a situation, when LabVIEW detects a code change while an FPGA compilation is running, it should warn the user with a message box. If the user confirms the code change, the current compilation can be immediately aborted or allowed to continue (at the user's option); if the user cancels the modification, nothing happens and the compilation continues to a (hopefully) successful end.
Wouldn't it be nice if, when you build an FPGA VI, rather than popping up a modal window and preventing you from doing anything useful for 10 minutes or so (or more, depending on the FPGA VI), LabVIEW went away and generated the intermediate files in the background?
After all, the actual compilation is now performed asynchronously (and you are using the cloud compile, aren't you?), so why should we sit and watch the intermediate files being generated?
Imagine the hours you would save a week, just by being able to get on and do something else.
During the compilation of LabVIEW FPGA code to a bitfile, there is an intermediate step where a VHDL file (or maybe Verilog?) is generated. This file would be very beneficial if you want to use another FPGA target that NI does not support. I know that this VHDL file cannot be used directly on an unsupported FPGA, but it would be a very good starting point for those who know VHDL.
The LabVIEW FPGA module has supported static dispatch of LabVIEW Class types since 2009. This essentially means all class wires must be analyzable and statically determinable at compile-time to a single type of class. However, this class can be a derived class of the original wire type which means, for instance, invoking a dynamic dispatch method can be supported since the compiler knows exactly which function will always be called.
This is not sufficient for many applications. Implementations that require message passing or other more event oriented programming models tend to use enums and flattened bit vectors to pass different pieces of data around on the same wire. All of this packing and unpacking can automatically be handled by the compiler if we can use run-time dynamic dispatch to describe the application.
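The manual packing the previous paragraph describes typically looks like this: an enum tag and a flattened payload share one fixed-width "wire" word, and every sender and receiver must agree on the field layout. A sketch of that boilerplate (the 4-bit tag and 28-bit payload widths are arbitrary examples, not anything LabVIEW prescribes):

```python
# One fixed-width word carries a message tag plus a flattened payload.
TAG_BITS = 4
PAYLOAD_BITS = 28

def pack(tag: int, payload: int) -> int:
    # Tag occupies the top bits, payload the bottom bits.
    assert 0 <= tag < (1 << TAG_BITS)
    assert 0 <= payload < (1 << PAYLOAD_BITS)
    return (tag << PAYLOAD_BITS) | payload

def unpack(word: int) -> tuple[int, int]:
    # Recover (tag, payload) by shifting and masking.
    return word >> PAYLOAD_BITS, word & ((1 << PAYLOAD_BITS) - 1)
```

Run-time dynamic dispatch would let the compiler derive exactly this layout from the class hierarchy, so nobody has to hand-maintain the shift-and-mask arithmetic on both ends of every wire.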
We call for the LabVIEW FPGA module to add support for true run-time dynamic dispatch to take care of this tedious, annoying, and downright boring job of figuring out how to pack and unpack bits everywhere. Who's with me?
Per NI Applications Engineering, "If you intend to run multiple compiles in parallel on the [Linux] server then yes, you will need the Compile Farm Toolkit running on a Windows machine to handle the parallel workers." I would like NI to support the FPGA Compile Farm Toolkit on Linux, so I don't need a dedicated Windows server to outsource compiles to workers.
In correlation with another general idea I have posted, I have come to the conclusion that it would be nice to run an analysis of the Xilinx log in order to report which code has been constant-folded by the Xilinx compiler.
Other aspects such as specific resource utilisation would be really cool too (SRL32 vs Registers for Feedback nodes). This would obviously be a post-bitfile operation, but it could at least give some direct feedback on what the Xilinx compiler has modified in the code (dead code elimination, constant folding, etc.).
Hi, since there can be a queue for compiling FPGA code, it seems natural to me to also be able to queue the generation of intermediate files.
I'm working with 10 build specifications for compilation per project, and generating intermediate files for my design takes approximately 3-4 minutes each. This means I need to sit by my computer for half an hour, just waiting and clicking Build on every build specification. Sometimes I work with FPGA VIs that need 7-10 minutes to build intermediate files, so this is a pain.
It would be great if there were a way to just highlight all build specifications for compilation with Shift and have the intermediate files created for them automatically, one by one.
Long compile times are a necessary evil of FPGA code. Even with the vast improvements of Vivado, compile time still ranks as the biggest killer of large-project efficiency. As compile times approach 3-4 hours, their successful completion becomes paramount. All too often I find that the Xilinx compiler running on the compile worker has completed successfully, but some small communication glitch, either between my development machine and the farm or between the farm and the worker, has caused the compile to be lost. It is quite frustrating to know you have a completed bitfile from Xilinx but the NI tools will not perform the final processing steps required to create the lvbitx file. The only solution is to restart the compile, costing another 3-4 hours of productivity.
Typical workflow in our company for these large projects is to spend mornings testing and stressing the compile(s) from overnight. Then make any bug fixes and incremental feature improvements and try to start a compile by mid-morning. By mid-afternoon when the compile is complete do the process again so that you can process another build for overnight. If one of the compiles fails because of timing or resource problems, there's nothing that can be done. But if it fails because of glitches in NI's compile wrapper code, that becomes a waste of a half of a day of productivity.
I propose that the current methods for compiling bitfiles be modified. The goal is to improve user productivity. Some of my suggestions include:
For a given build specification, give it the ability to re-attempt to retrieve the last completed compile. This option would be available even if the VI's that created that compile had been modified.
If a compile was completed previously for this build specification and there has yet to be a successful lvbitx generation, prompt the user before doing anything that would destroy the ability to retrieve it.
Make sure that all of this still works when changing connections to the worker. For example, if I start my compile at work, then take my laptop home, I want to log in to my VPN at night to check on my compile.
Don't remove any chance to get a compile if there was a communication error. Right now when I get the communication error, I see a red X in the compilation status and my only option is to remove it from the list.
We can programmatically mass-compile VIs and build executables, but there is no easy method of compiling FPGA code. We have a large application that consists of C++ and LabVIEW code. We have automated our build process, but we still have to compile the FPGA code by following a manual procedure. It would be nice to write a script or a VI that would compile all of our FPGA code.
Sometimes I just want to compile a lot of bitfiles (be it for a release or a debugging test case), and I have to right-click each and every build spec and choose "Build", then wait about 10 seconds and do the same again for the next build spec.
How about being able to select multiple build specs, choose "Build Selection", and have time to go for lunch while the PC queues up all the compilations?
I don't use a compile farm and everything is done locally but at least the queuing could be automated.
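The requested "Build Selection" behavior is just a sequential job queue over the highlighted build specs. A minimal sketch of that idea, where `compile_build_spec()` is a hypothetical placeholder, since LabVIEW does not expose local FPGA builds through a documented scripting call like this:

```python
def compile_build_spec(spec: str) -> bool:
    # Hypothetical placeholder: start the build for one spec and
    # block until it finishes, returning True on success.
    return True

def build_queue(selected_specs):
    # Compile the selected specs one after another, unattended,
    # recording the outcome of each so failures are visible afterwards.
    results = {}
    for spec in selected_specs:  # one local compile at a time
        results[spec] = compile_build_spec(spec)
    return results
```

Even a strictly serial queue like this would recover the lunch break: the user selects once, and the machine works through the list instead of waiting for a click between every build.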