The Xilinx log window should use a fixed-width font.
Which of these two string indicators with identical content is easier to read?
The LabVIEW FPGA module has supported static dispatch of LabVIEW class types since 2009. Essentially, this means every class wire must be statically resolvable at compile time to a single class. That class can, however, be a derived class of the original wire type, which means, for instance, that invoking a dynamic dispatch method is supported, since the compiler knows exactly which implementation will always be called.
This is not sufficient for many applications. Implementations that require message passing or other more event-oriented programming models tend to use enums and flattened bit vectors to pass different pieces of data around on the same wire. All of that packing and unpacking could be handled automatically by the compiler if we could use run-time dynamic dispatch to describe the application.
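LabVIEW diagrams don't quote well as text, so here is a rough C++ analogy (the message types and names are hypothetical, purely for illustration) of the two styles: the enum-plus-flattened-bit-vector pattern we write by hand today, versus what run-time dynamic dispatch would let the compiler generate for us.

```cpp
#include <cstdint>
#include <cstring>

// Today's workaround: one wire type, so every message is packed by hand.
enum class MsgKind : uint8_t { SetGain, Trigger };

struct PackedMsg {            // the "flattened bit vector" on the wire
    MsgKind  kind;
    uint32_t payload;         // meaning depends entirely on 'kind'
};

void dispatch(const PackedMsg& m) {
    switch (m.kind) {         // manual unpacking at every endpoint
    case MsgKind::SetGain: {
        float gain;
        std::memcpy(&gain, &m.payload, sizeof gain);  // un-flatten the bits
        // ... apply gain ...
        break;
    }
    case MsgKind::Trigger:
        // ... fire ...
        break;
    }
}

// With run-time dynamic dispatch, the compiler does that bookkeeping for us.
struct Msg {
    virtual void handle() const = 0;
    virtual ~Msg() = default;
};
struct SetGain : Msg { float gain{}; void handle() const override { /* apply gain */ } };
struct Trigger : Msg {               void handle() const override { /* fire */ } };

void dispatch(const Msg& m) { m.handle(); }  // correct method chosen at run time
```

On an FPGA the compiler would still have to flatten each class to bits on the wire; the point is that it, not the developer, would own that packing.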
We call for the LabVIEW FPGA module to add support for true run-time dynamic dispatch to take care of this tedious, annoying, and downright boring job of figuring out how to pack and unpack bits everywhere. Who's with me?
User Lorn has found a brilliant tip for *DRASTICALLY* speeding up FPGA compile times under Windows on PCs with the Turbo Boost feature. What's more, it's extremely simple to implement.
Please, let's see this as standard in future versions of LabVIEW.
I got the following feedback from a LabVIEW FPGA user:
When developing an FPGA application in LabVIEW, after submitting an FPGA code compilation (usually quite a lengthy process), modifying the code on either the front panel or the block diagram while compilation is in progress results in a compilation error at the end.
This occurs even if the modification is a mere cosmetic change with no effect whatsoever on the code being compiled.
It is quite frustrating to realize that the compilation has failed (perhaps after waiting half an hour) just because you absent-mindedly clicked and resized some control or node.
In such a situation, when LabVIEW detects a code change while an FPGA compilation is running, it should warn the user with a message box. If the user confirms the code change, the current compilation is either aborted immediately or allowed to continue (at the user's option); if the user cancels the modification, nothing happens and the compilation continues to a (hopefully) successful end.
For debugging, using FPGA VIs in interactive mode can be very valuable. To this day, I have not been able to find out how LabVIEW determines whether a bitfile and a VI match.
Therefore, whenever I click the run button for a VI, I'm never quite sure whether the bitfile will match, and I often have to wait 1-5 minutes before I can resume working with LabVIEW. That is a very high price to pay for something I usually end up cancelling. I would very much like the IDE to TELL ME that the bitfile and the VI don't match before it starts a new compilation and wastes my time.
This is as opposed to a Ctrl-click of the run arrow, which explicitly tells the IDE to compile.
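One way such a check could work, purely as a sketch (how LabVIEW actually fingerprints a VI against a bitfile is undocumented, and every name below is hypothetical): record a hash of the bitfile's source at compile time and compare it with the VI's current hash before anything is queued.

```cpp
#include <cstdint>

// Hypothetical fingerprints -- LabVIEW's real matching rule is not public.
struct Bitfile { uint64_t sourceHash; };   // hash recorded when the bitfile was compiled
struct FpgaVi  { uint64_t currentHash; };  // hash of the VI as it stands right now

// A cheap check the IDE could run on the run-arrow click, *before* queuing a
// multi-minute compile: report "bitfile is stale" instead of silently recompiling.
bool bitfileMatches(const Bitfile& bf, const FpgaVi& vi) {
    return bf.sourceHash == vi.currentHash;
}
```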
Currently, when you build a VI, the bitfile path is stored as an absolute path (you can see it in the project XML). This means that if you change the project location, you have to recompile the FPGA to use VI mode or run interactively. It seems the bitfile could be stored as a relative path, like all the VIs in the project.
On the cRIO-9068, the third serial port and the second Ethernet adapter are actually implemented on the FPGA, and FPGA resources are consumed to redirect them to the real-time side. Currently developers have no access to these resources from the FPGA; they are accessible only from the real-time target.
I would like some I/O Nodes for interacting with these devices on the FPGA. NI could publish some examples of how they could be used.
Today these resources are invisible to the developer, except for the long additional compile time and the resources used (about 7%).
I have attached pictures of the FPGA design and the resources consumed by a blank VI.
Even though ibberger touched on the concept in an earlier idea, I think most people use LabVIEW under Windows. Compiling an FPGA VI happens entirely on the PC under Windows. I noticed that during this process the compiler uses only one core. Since I'm using a machine with a four-core processor, CPU usage rarely goes above 25%.
My idea is to update the compiler to make it multicore. The user should have the option to limit the maximum number of cores available to the compiler, since the user may want to keep working while the compilation runs in the background.
Per NI Applications Engineering, "If you intend to run multiple compiles in parallel on the [Linux] server then yes, you will need the Compile Farm Toolkit running on a Windows machine to handle the parallel workers." I would like NI to support the FPGA Compile Farm Toolkit on Linux, so I don't need a dedicated Windows server to outsource compiles to workers.
Hi, since there can be a queue for compiling FPGA code, it seems natural to me to also be able to queue the generation of intermediate files.
I'm working with 10 build specifications for compilation per project, and generating the intermediate files for my design takes approximately 3-4 minutes each. This means I need to sit at my computer for half an hour, just waiting and clicking Build on every build specification. Sometimes I work with FPGA VIs whose intermediate files take 7-10 minutes to build, so this is a pain.
It would be great if there were a way to just shift-select all the build specifications and have the intermediate files created for them automatically, one by one.
Can this be done?
This morning, after a night of FPGA compilation, I moved my LabVIEW project to another location
(without modifying any relative paths inside the project directory).
And then... when I tried to launch my FPGA main VI... the compilation started again!
It would be nice if the "change detection mechanism", which detects whether a compilation is required, did not take absolute paths into account!
I think the "change detection mechanism" for FPGA code should be modified to take only the FPGA code dependencies into account.
The dependencies should not include ...
During the compilation of LabVIEW FPGA code into a bitfile, there is an intermediate step in which a VHDL file (or maybe Verilog?) is generated. This file would be very beneficial if you want to use an FPGA target other than the ones NI supports. I know this VHDL file cannot be used directly on an unsupported FPGA, but it would be a very good starting point for those who know the VHDL language.
I have several FPGA projects that require significant compile time (up to 1.5 hours), and for those I am thankful to have my compile server running on a separate computer.
The issue comes with the seven pre-compile steps that occur before LabVIEW sends the code to the compiler. On one particular project this step alone can take up to 35 minutes, during which time I can do nothing on that machine.
I would like to see much of this pre-compile time moved from the development environment to the compile server. A mechanism already exists for updating the user with the compile status, so pre-compile errors could be reported in a similar fashion.
Get the development system back online as quickly as possible.
Sometimes I just want to compile a lot of bitfiles (be it for a release or a debugging test case), and I have to right-click each and every build spec and choose "Build", then wait about 10 seconds and do the same again for the next build spec.
How about being able to select multiple build specs, choose "Build Selection", and have time to go for lunch while the PC queues up all the compilations?
I don't use a compile farm and everything is done locally, but at least the queuing could be automated.
Wouldn't it be nice if, when you build an FPGA VI, rather than popping up a modal window and preventing you from doing anything useful for 10 minutes or so (or more, depending on the FPGA VI), LabVIEW went away and generated the intermediate files in the background?
After all, the actual compilation is now performed asynchronously (and you are using the cloud compile, aren't you?), so why should we sit and watch the intermediate files being generated?
Imagine the hours you would save each week just by being able to get on and do something else.
The CORDIC High throughput functions available in LabVIEW are capable of running at high frequencies, thus allowing FPGA code to (for example) multiplex multiple demodulators without exploding device utilisation.
Unfortunately, the option to apply a Gain correction to the results does not pipeline the actual multiplication, thus artificially limiting the available speed of the CORDIC algorithms.
In my code I always deactivate the gain compensation and do it "manually", allowing the code to compile at much higher frequencies and make more efficient use of the FPGA device.
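For reference, the gain being corrected is the fixed CORDIC scaling factor that accumulates over the rotation stages; for $n$ iterations,

$$K_n \;=\; \prod_{i=0}^{n-1} \sqrt{1 + 2^{-2i}} \;\longrightarrow\; K \approx 1.6468, \qquad \tfrac{1}{K} \approx 0.6073 .$$

The compensation is therefore just a multiplication by the constant $1/K$, and like any other fixed-point multiply it can be split across pipeline registers.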
It would be great if it were possible to also pipeline this multiplication as part of the CORDIC High-throughput node instead of being forced to implement the multiplication separately.
If you try to compile while pointed at a Compile Server that is for any reason inaccessible (server down, firewall, typo in the hostname, etc.), you must wait through the generation of intermediate files before you receive the error message that LabVIEW FPGA was unable to contact the Compile Server at the configured hostname/IP. Generating intermediate files can be a lengthy process, and it shouldn't be necessary to sit through it just to find out whether you have configured your Compile Server correctly. Checking that the server is reachable before generating the intermediate files would be a much better experience.
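A pre-flight reachability test costs almost nothing. A minimal sketch in C++ using POSIX sockets (the host name and port below are placeholders, not NI's actual compile-server protocol):

```cpp
#include <netdb.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>

// Try a plain TCP connect to the configured Compile Server before doing any
// real work, so a bad hostname or a firewall is reported in seconds.
bool serverReachable(const char* host, const char* port) {
    addrinfo hints{}, *res = nullptr;
    hints.ai_family   = AF_UNSPEC;      // IPv4 or IPv6
    hints.ai_socktype = SOCK_STREAM;
    if (getaddrinfo(host, port, &hints, &res) != 0)
        return false;                   // typo in hostname / DNS failure
    bool ok = false;
    for (addrinfo* p = res; p && !ok; p = p->ai_next) {
        int fd = socket(p->ai_family, p->ai_socktype, p->ai_protocol);
        if (fd < 0) continue;
        ok = (connect(fd, p->ai_addr, p->ai_addrlen) == 0);  // down / firewalled
        close(fd);
    }
    freeaddrinfo(res);
    return ok;
}

int main() {
    // Placeholder host and port, for illustration only.
    if (!serverReachable("compileserver.example", "12345"))
        std::puts("Compile Server unreachable -- fix the configuration first.");
}
```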
A simple one that I have heard a number of people request: why is there no auto-increment on the version of an FPGA build specification, as there is for every other versioned build specification in LabVIEW? This should be a simple addition that brings the FPGA module in line with the other LabVIEW modules.
I know it's not necessarily a LabVIEW FPGA issue, but it's us LabVIEW FPGA users who use fixed-point numbers most often. Why don't fixed-point numbers always show coercion dots? If every unnecessary numerical digit wastes chip resources, isn't it all the more important that we know about these coercions so that we can avoid them?