I often work with the FPGA in hybrid mode because the Scan Interface covers the project requirements 90% of the time. When NI added support for the SGL datatype to the FPGA module in 2012 (?), they overlooked user-defined variables. There is currently no built-in support for typecasting a SGL to a U32, so passing SGL data back to the host requires front-panel controls or custom typecasting solutions (see SGL typecast) on both the FPGA and host layers.
Please add SGL as an option for user-defined variables.
What I would like to see when transferring FPGA projects from one target machine to another is that I am not forced to recompile the VI code when testing the FPGA code and passing parameters via the front panel on the new target. Workarounds like creating a small host front panel to allow parameter changes rather defeat the object of FPGA front-panel execution.
We need a way to simply reinterpret the bits in our FPGAs. I currently have a situation where I need to change my SGL values into U32 for the sake of sending data up to the host. Currently, the only way is to make an IP node. That is just silly. We should be able to use the Type Cast simply for the purpose of reinterpreting the bits.
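On the host side this kind of bit reinterpretation is trivial; as a point of comparison, here is a minimal Python sketch of the SGL-to-U32 reinterpret being asked for (purely an illustration of the semantics, not a LabVIEW feature):

```python
import struct

def sgl_to_u32(value: float) -> int:
    """Reinterpret the bits of a single-precision float as an unsigned 32-bit int."""
    return struct.unpack("<I", struct.pack("<f", value))[0]

def u32_to_sgl(bits: int) -> float:
    """Reinterpret an unsigned 32-bit int's bits as a single-precision float."""
    return struct.unpack("<f", struct.pack("<I", bits))[0]
```

The round trip is lossless because no conversion happens, only a relabelling of the same 32 bits — exactly what a Type Cast on the FPGA would do.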
It is time-consuming that we have to recompile all of the LabVIEW FPGA code even when there is only a tiny change to it.
I understand there are the Sampling Probe, the Desktop Execution Node, and simulation tools to reduce this time.
Our customer in Japan would like to have an incremental compile function in LabVIEW as well (please see below).
I agree with his opinion.
What do you think?
Application Engineer at National Instruments Japan.
How amazing would it be to have the ability to visualise resource usage on an FPGA target using a similar view to that shown above (courtesy of Windirstat)?
I only recently shaved a significant portion off my FPGA usage by finding out that I had a massively oversized FIFO in my code for almost a year without noticing. I feel that this kind of visualisation (with mouse over showing what is actually occupying the space) with differentiation between Registers, LUTs, BRAM, DSPs and so on would greatly aid those of us trying to squeeze as much as possible out of our FPGA designs.
I think providing this information based on the "estimated resource utilisation" i.e. before Xilinx optimises stuff away would be OK. I'm not sure if the final resource utilisation can be mapped as accurately in this way.
It would also be nice to see CLIP utilisation and NI-internal utilisation at a glance as this is apparently hugely different between targets.
Xilinx log window should use a fixed-width font.
Which of these two string indicators with identical content is easier to read?
When you are using the same code on different boards, it would be a big help if you could set the "FPGA VI Reference" indicator to "Adapt to source". When I use a different DMA on a different target, the wire breaks every time I change targets:
My FGV for the FPGA reference looks like:
Well, it seems that persistent/non-volatile memory is not available on FPGA targets for scenarios where power is cycled.
Apparently, the recommended approach is to transfer the data to the host and store it to disk.
This is a bit problematic in that the choice is either to write data to disk at a high rate or to accept that the most recent data might not be reloaded on restart. For instance, an operator might expect to know exactly how many revolutions a shaft has undergone across power cycles of the FPGA target. A guarantee that this information is as up to date as possible probably can't be met (maybe not even when transferring data to disk at a high rate).
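A common host-side mitigation is to checkpoint the counter to disk atomically at a modest rate and accept a bounded amount of loss. A minimal sketch in Python (function names and the JSON state layout are hypothetical):

```python
import json
import os
import tempfile

def save_state(path: str, state: dict) -> None:
    """Checkpoint state to disk atomically.

    Write to a temp file, fsync, then rename over the target, so a power
    loss mid-write never leaves a corrupt state file behind.
    """
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=directory)
    with os.fdopen(fd, "w") as f:
        json.dump(state, f)
        f.flush()
        os.fsync(f.fileno())
    os.replace(tmp, path)  # atomic on POSIX and modern Windows

def load_state(path: str, default: dict) -> dict:
    """Reload the last checkpoint, falling back to a default on first boot."""
    try:
        with open(path) as f:
            return json.load(f)
    except (FileNotFoundError, ValueError):
        return default
```

Even with this, anything that happened between the last checkpoint and the power loss is gone — which is exactly why on-target non-volatile storage would be valuable.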
So I'd like to request this.
The LabVIEW FPGA module has supported static dispatch of LabVIEW Class types since 2009. This essentially means all class wires must be analyzable and statically determinable at compile-time to a single type of class. However, this class can be a derived class of the original wire type which means, for instance, invoking a dynamic dispatch method can be supported since the compiler knows exactly which function will always be called.
This is not sufficient for many applications. Implementations that require message passing or other more event oriented programming models tend to use enums and flattened bit vectors to pass different pieces of data around on the same wire. All of this packing and unpacking can automatically be handled by the compiler if we can use run-time dynamic dispatch to describe the application.
We call for the LabVIEW FPGA module to add support for true run-time dynamic dispatch to take care of this tedious, annoying, and downright boring job of figuring out how to pack and unpack bits everywhere. Who's with me?
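For illustration, the kind of manual enum-plus-bit-vector packing this idea would eliminate looks roughly like the following Python sketch (the 32-bit message layout is entirely made up):

```python
from enum import IntEnum

class MsgType(IntEnum):
    SET_GAIN = 0
    SET_OFFSET = 1

# Hypothetical 32-bit message layout: bits [31:28] hold the type tag,
# bits [27:0] hold the payload.
TAG_SHIFT = 28
PAYLOAD_MASK = (1 << 28) - 1

def pack(tag: MsgType, payload: int) -> int:
    """Flatten a (tag, payload) message into one 32-bit word."""
    assert 0 <= payload <= PAYLOAD_MASK
    return (int(tag) << TAG_SHIFT) | payload

def unpack(word: int) -> tuple[MsgType, int]:
    """Recover the (tag, payload) message from a 32-bit word."""
    return MsgType(word >> TAG_SHIFT), word & PAYLOAD_MASK
```

With run-time dynamic dispatch, the compiler would derive this packing from the class hierarchy instead of the developer maintaining shift-and-mask code by hand.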
When accessing an FPGA control/indicator from the host, a property-node-like item is used, and the developer has to pick the right control/indicator from a list.
When there's lots of items on the list, it can be a pain to scan the list for the right one.
At the moment they are listed in the order they were created on the FPGA.
Could they instead be listed in alphabetical order? Or give the developer the choice?
A very useful feature of the FPGA Butterworth filter is the ability to use it multiple times, saving FPGA resources.
However, this is not possible for 32-bit-wide filters, only for 16-bit filters.
It would be useful if the 32-bit filters could go multichannel too, at least two channels.
I have recently started placing wire labels in my code to keep track of datatypes of wires flowing around my code. This is very useful to understand what is going on on the FPGA since not all grey wires (FXP) are equal and slight mismatches can mean big trouble in the code.
As such I tend to label wires like "Phase FXP+25,0" and so on.
What I'd love to be able to do is place a format specifier in a wire label so that such labels stay up to date; at the moment they are probably more error-prone than anything else, since wires and labels can fall out of sync after code changes or bug fixes.
If I could set a wire label to "Phase %s" or similar to place the ACTUAL datatype in the label, that would be amazing.
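To illustrate why labels like "FXP+25,0" matter, here is a small Python sketch of how a raw fixed-point word is interpreted; the word-length/integer-word-length reading of the notation is my assumption, so treat the details as illustrative:

```python
def fxp_to_float(raw: int, word_length: int, int_word_length: int,
                 signed: bool = False) -> float:
    """Interpret raw bits as a LabVIEW-style fixed-point (FXP) number.

    word_length:     total number of bits in the word
    int_word_length: bits to the left of the binary point
    (e.g. 'FXP+25,0' would be word_length=25, int_word_length=0, unsigned,
    covering the range [0, 1) — assumption based on my reading of the label.)
    """
    if signed and raw >= 1 << (word_length - 1):
        raw -= 1 << word_length  # two's-complement sign extension
    return raw * 2.0 ** (int_word_length - word_length)
```

Two FXP wires with the same grey colour but different word/integer lengths map the same raw bits to very different values — which is exactly the mismatch that stale labels hide.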
Vision is available under LabVIEW 64-bit, and this makes sense since vision can generate very large amounts of data. I think now is the time to bring FPGA over to LabVIEW 64-bit as well, since FPGA systems can also generate very large data sets. Also, with cards like the PCIE 1473R, you have a card that requires Vision and generates lots of data, but it also requires FPGA, so you can only use it in LabVIEW 32-bit. This is not a good thing. It has been five years since LabVIEW 64-bit was released; it is time to finish moving the add-ons over to 64-bit.
I got the following feedback from a LabVIEW FPGA user:
When developing an FPGA application in LabVIEW, after submitting an FPGA code compilation (usually quite a lengthy process), modifying the code on either the front panel or the block diagram while compilation is in progress results in a compilation error at the end.
This occurs even if the modification is a mere cosmetic change, with no implication for the code being compiled.
It is quite frustrating to realise that the compilation has failed (maybe after half an hour of waiting) just because you unconsciously clicked and resized some control or node.
When LabVIEW detects a code change while an FPGA compilation is running, it should warn the user with a message box. If the user confirms the code change, the current compilation can either be aborted immediately or allowed to continue (at the user's option); if the user cancels the modification, nothing happens and the compilation continues to a (hopefully) successful end.
On the cRIO-9068, the third serial port and the second Ethernet adapter are actually implemented on the FPGA, and resources are consumed to redirect them to the real-time side. Currently there is no access to these resources on the FPGA for developers, only from the real-time side.
I would like some I/O Nodes for interacting with these devices on the FPGA. NI could put up some examples how they could be used.
Today these resources are invisible to the developer, except for the additional long compile time and the resources used (about 7%).
I have attached pictures of the FPGA design and the resources consumed by a blank VI.
As part of my quest to solve problems arising from over-cautious register transfers HERE, I found a solution which WOULD have worked if I had been able to force multiple clocks derived from the same source to have synchronised start points (so that the iteration counters of the loops are known relative to each other). It seems that clocks derived from the same base clock do not necessarily all start with iteration zero at the same time.
My suggestion would be to either
HERE I detailed a problem I currently have with Registers between two clock domains which are closely related (phase-locked).
It turns out that there is handshaking going on which, essentially, is not really necessary. It would be nice to have the option of something similar to a register for such clock domains, where we know the relationships between the clocks explicitly and thus do not require handshaking.
I find myself again and again having to memorise bit-field indices and other details in order to be able to debug FPGA code efficiently. What I would really like to be able to do is to create an XControl with a compatible datatype (say, U64) and have this display and accept input in the form of human-readable information.
The data being transported is simply a U64 and the FPGA code doesn't need to know anything about the XControl itself. Just allow a host visualisation based on an XControl to ease things a bit.
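As a rough idea of the visualisation in question, a host-side formatter that splits a packed U64 into named fields might look like this Python sketch (the field layout and names are made up for illustration):

```python
def describe_u64(word: int, fields: list[tuple[str, int]]) -> str:
    """Render a packed U64 as human-readable 'name=value' pairs.

    fields: (name, bit_width) pairs, listed LSB-first — a purely
    hypothetical layout standing in for whatever the FPGA code packs.
    """
    parts, shift = [], 0
    for name, width in fields:
        parts.append(f"{name}={(word >> shift) & ((1 << width) - 1)}")
        shift += width
    return " ".join(parts)
```

An XControl doing the same thing would let the developer read "ch=1 cmd=2 data=3" instead of mentally decoding a hex dump, while the FPGA side still sees only a plain U64.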
I've already started using LVOOP on FPGA and I think this could be another big improvement in the debugging experience. Having an input XControl (or a set of XControls) for certain LVOOP modules on the FPGA just gets me all excited.
The CORDIC High throughput functions available in LabVIEW are capable of running at high frequencies, thus allowing FPGA code to (for example) multiplex multiple demodulators without exploding device utilisation.
Unfortunately, the option to apply a Gain correction to the results does not pipeline the actual multiplication, thus artificially limiting the available speed of the CORDIC algorithms.
In my code I always deactivate the Gain compensation and do this "manually" allowing the code to compile at much higher frequencies and more efficiently utilising the FPGA device.
It would be great if it were possible to also pipeline this multiplication as part of the CORDIC High-throughput node instead of being forced to implement the multiplication separately.
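For reference, the gain being compensated is the standard CORDIC scale factor, which converges to roughly 1.6468 as the iteration count grows; a quick Python check of that constant (the pipelining of the correction multiply is, of course, an FPGA-side concern):

```python
import math

def cordic_gain(iterations: int) -> float:
    """Accumulated CORDIC rotation gain after n iterations.

    Each iteration i scales the vector by sqrt(1 + 2^(-2i)); the product
    converges to about 1.6468, which is what the gain-correction multiply
    (by 1/gain) undoes.
    """
    gain = 1.0
    for i in range(iterations):
        gain *= math.sqrt(1.0 + 2.0 ** (-2 * i))
    return gain
```

Doing this correction "manually" means one extra multiplier by `1 / cordic_gain(n)` that can be placed in its own pipeline stage, which is exactly the flexibility the built-in option currently lacks.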
User Lorn has found a brilliant tip for *DRASTICALLY* speeding up FPGA compile times under Windows for PCs with the turbo boost feature. What's more, it's extremely simple to implement.
Please let's see this in future versions of LabVIEW as standard.