We need a way to simply reinterpret the bits in our FPGAs. I have a situation where I need to change my SGL values into U32s for the sake of sending data up to the host. Currently, the only way is to make an IP node. That is just silly. We should be able to use the Type Cast simply for the purpose of reinterpreting the bits.
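To be concrete about what's being asked for, here is the reinterpretation modeled in Python with the standard struct module: the same 32 bits read first as an IEEE-754 single, then as a U32, with no numeric conversion. (Python is only for illustration; on the FPGA the proposed Type Cast would be a zero-cost rewiring of the same bits.)

```python
import struct

def sgl_to_u32(x: float) -> int:
    """Reinterpret the IEEE-754 single-precision bits of x as a U32."""
    return struct.unpack("<I", struct.pack("<f", x))[0]

def u32_to_sgl(n: int) -> float:
    """Reinterpret a U32 bit pattern as an IEEE-754 single."""
    return struct.unpack("<f", struct.pack("<I", n))[0]

print(hex(sgl_to_u32(1.0)))    # 0x3f800000 -- same bits, new type
print(u32_to_sgl(0x40490FDB))  # ~3.1415927 -- and back again
```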
Many data streams contain information for multiple channels or multiple samples. Today one must pack this data into larger integer types or manually interleave the data into multiple writes to the DMA FIFO API. It would be much simpler if DMA natively supported cluster and array data types. The local FIFO, Memory, and Register APIs already support this; extend it to DMA.
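As a sketch of the manual workaround this idea would eliminate, here is the pack/unpack dance for two 16-bit channel samples sharing one U32 FIFO element, modeled in Python (the channel layout is hypothetical; the FPGA side packs, the host side unpacks):

```python
def pack_two_i16(ch0: int, ch1: int) -> int:
    """FPGA side: pack two 16-bit samples into one U32 FIFO element."""
    return ((ch1 & 0xFFFF) << 16) | (ch0 & 0xFFFF)

def unpack_two_i16(word: int) -> tuple:
    """Host side: split one U32 element back into two signed samples."""
    def to_i16(u):  # sign-extend a 16-bit field
        return u - 0x10000 if u & 0x8000 else u
    return to_i16(word & 0xFFFF), to_i16((word >> 16) & 0xFFFF)

word = pack_two_i16(-2, 300)
print(unpack_two_i16(word))  # (-2, 300)
```

Native cluster/array support in the DMA FIFO would push both halves of this bookkeeping into the compiler.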
Vision is available under 64-bit LabVIEW, and this makes sense, since vision can generate very large amounts of data. I think now is the time to bring FPGA over to 64-bit LabVIEW as well: FPGA systems can also generate very large data sets. Moreover, with cards like the PCIe-1473R you have a vision card that generates lots of data but requires the FPGA module, so you can only use it under 32-bit LabVIEW. This is not a good thing. It has been five years since 64-bit LabVIEW was released; it is time to finish moving the add-ons over to 64 bit.
Now that most numeric operators have the ability to saturate, it would be nice to be able to differentiate these operations visually. I know that the majority of the time you can determine this information easily with the context help, but this would make it much easier to spot. I tend to copy operators that are already being used in my VIs rather than grab a new one off the palette, and this would let me know which type of operator I'm copying.
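For anyone who hasn't hit the distinction, a minimal Python sketch of the two overflow behaviors on a U8 (illustrative only; LabVIEW configures this per operator):

```python
def add_u8_wrap(a: int, b: int) -> int:
    """Wrap-on-overflow (modular) addition, the traditional default."""
    return (a + b) & 0xFF

def add_u8_saturate(a: int, b: int) -> int:
    """Saturate-on-overflow addition: clamp at the U8 maximum."""
    return min(a + b, 0xFF)

print(add_u8_wrap(250, 10))      # 4   -- silently wraps around
print(add_u8_saturate(250, 10))  # 255 -- pins at the rail
```

Two operators that look identical on the diagram can differ by exactly this much, which is why a visual cue matters.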
When parsing message packets in my FPGA code, I often use a custom packing method, with multiple individual datasets packed into a single U64 (for example) for transport.
With the introduction of Malleable VIs, I would love to be able to create my own packing and unpacking Malleable VIs, but I don't yet see how. The problem is that "Boolean Array to Number" does not allow attaching a datatype the way "To FXP" does. I would like to marry these two functions into a "Boolean Array to FXP" node where I can wire in my input datatype and have the output datatype automatically maintained in a malleable VI.
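Here is a Python sketch of the kind of pack/unpack pair I keep rewriting by hand; the field layout is hypothetical, and the point is that a "Boolean Array to FXP" node with a wired-in type would let a single malleable VI adapt to any such layout:

```python
# Hypothetical packet layout: (field name, width in bits), packed LSB-first
# into a single U64 for transport. Widths must total 64.
LAYOUT = [("status", 4), ("channel", 6), ("sample", 24), ("timestamp", 30)]

def pack_fields(values: dict) -> int:
    word, shift = 0, 0
    for name, width in LAYOUT:
        word |= (values[name] & ((1 << width) - 1)) << shift
        shift += width
    return word

def unpack_fields(word: int) -> dict:
    out, shift = {}, 0
    for name, width in LAYOUT:
        out[name] = (word >> shift) & ((1 << width) - 1)
        shift += width
    return out

pkt = pack_fields({"status": 5, "channel": 17, "sample": 0xABCDE, "timestamp": 123456})
print(unpack_fields(pkt)["channel"])  # 17
```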
I would like to access class attributes of my FPGA class hierarchy with property nodes. I prefer the property node API over VIs for data member access because it lets you grab properties from across the hierarchy in a single node, which leads to a (much) cleaner block diagram and expedites development. For example, in the screenshot below, the FXP attributes belong to the NI_9205 class, while the "_OP" attributes belong to the parent. I see no advantage to the Invoke Node API over a subVI, though, because the wiring work and diagram appearance are about the same, IMHO.
The LabVIEW FPGA module has supported static dispatch of LabVIEW class types since 2009. This essentially means all class wires must be analyzable and statically determinable at compile time to a single class type. However, that class can be a derived class of the original wire type, which means, for instance, that invoking a dynamic dispatch method can be supported, since the compiler knows exactly which implementation will always be called.
This is not sufficient for many applications. Implementations that require message passing or other more event-oriented programming models tend to use enums and flattened bit vectors to pass different pieces of data around on the same wire. All of this packing and unpacking could be handled automatically by the compiler if we could use run-time dynamic dispatch to describe the application.
We call for the LabVIEW FPGA module to add support for true run-time dynamic dispatch to take care of this tedious, annoying, and downright boring job of figuring out how to pack and unpack bits everywhere. Who's with me?
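To make the pattern concrete, here is the enum-plus-flattened-bits idiom sketched in Python (message names and field widths are made up); run-time dynamic dispatch would let the compiler generate everything below:

```python
from enum import IntEnum

class MsgType(IntEnum):  # the enum "tag" carried alongside the payload
    SET_GAIN = 0
    READ_TEMP = 1

def pack_msg(tag: MsgType, payload: int) -> int:
    """Flatten tag + payload into one U64 'wire' value (tag in the top 8 bits)."""
    return (int(tag) << 56) | (payload & ((1 << 56) - 1))

def dispatch(word: int) -> None:
    """The hand-written unpack-and-branch that true dynamic dispatch
    would replace with a method call on a class wire."""
    tag = MsgType(word >> 56)
    payload = word & ((1 << 56) - 1)
    if tag is MsgType.SET_GAIN:
        print("set gain to", payload)
    elif tag is MsgType.READ_TEMP:
        print("read temperature channel", payload)

dispatch(pack_msg(MsgType.SET_GAIN, 42))  # set gain to 42
```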
We need more FPGA Vision example code. I followed NI's introductory articles on image processing using FPGA, and they sound great, but I was very much disappointed when trying to find usable examples: there are only 5 examples on the IPNet, far fewer than what the intro articles suggest FPGA can do.
Xilinx supports BRAM primitives (FIFO and plain BRAM) whose read and write ports can have certain differing widths. For some applications, the ability to write 2x 16-bit values to a FIFO in one loop and read 1x 16-bit value from the FIFO at double the clock rate in another loop can be very useful.
As it stands, the IP core for such BRAM primitives, although present in LabVIEW FPGA, cannot be used without a CLIP (essentially making this aspect of the IP core useless).
It would be cool if LabVIEW exposed the ability to configure different read and write port widths for BRAM.
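A behavioral Python model of the asymmetric-width FIFO in question: the producer writes one 32-bit word per cycle, and a consumer clocked at twice the rate reads it back as 16-bit halves. This is an illustrative model of the behavior, not the Xilinx primitive itself:

```python
from collections import deque

class AsymmetricFifo:
    """Model of a FIFO with a 32-bit write port and a 16-bit read port."""
    def __init__(self):
        self._halves = deque()

    def write_u32(self, word: int) -> None:
        # Split into 16-bit halves; the low half is read out first.
        self._halves.append(word & 0xFFFF)
        self._halves.append((word >> 16) & 0xFFFF)

    def read_u16(self) -> int:
        return self._halves.popleft()

f = AsymmetricFifo()
f.write_u32(0xBEEFF00D)   # one write per producer cycle at rate f ...
print(hex(f.read_u16()))  # 0xf00d ... two reads per write at rate 2f
print(hex(f.read_u16()))  # 0xbeef
```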
I often work with the FPGA in hybrid mode because the Scan Interface covers the project requirements 90% of the time. When NI added support for the SGL datatype to the FPGA module in 2012 (?), they overlooked user-defined variables. There is currently no built-in support for typecasting a SGL to a U32, so passing SGL data back to the host requires front-panel controls or custom typecasting workarounds (see SGL typecast) on both the FPGA and host layers.
Please add SGL as an option for user-defined variables.
I posted this suggestion in the forums, but it is something I would like to see improved and included in the FPGA library. The idea is to multiplex multiple inputs/outputs to a single high-throughput math function. If someone has to do a lot of fixed-point math on the FPGA, the resources get used up quickly. The multiply block is primarily what I would like to see this implemented for, but I think it would be useful for all of the high-throughput math functions.
In one project I quickly ran out of DSP48Es on my FPGA, and since I had many fixed-point multiplies with the same data type configuration, I created a state machine to step through the inputs, allowing me to replace 4 high-throughput multiplies with one multiply block shared across multiple operations. Sequential operations are possible by feeding the output of one operation into the input of another (I didn't implement that in the forum post below, but it can be done). I think LabVIEW could improve on my version with better pipelining of the multiplexed function, easy configuration of the number of inputs/outputs and the data type, handshaking logic for operation in an SCTL, and so on. LabVIEW could also show a separate schematic figure for each of the multiplexed functions (for example, PCB layout software such as Eagle shows a separate schematic block for each op-amp on a chip containing multiple op-amps).
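A conceptual Python model of that round-robin sharing (names are hypothetical); the real thing is a state machine inside an SCTL, but the scheduling idea is the same:

```python
class MultiplexedMultiplier:
    """Time-multiplex one hardware multiplier across n input pairs,
    servicing one channel per clock cycle in round-robin order."""
    def __init__(self, n: int):
        self.n = n
        self.idx = 0                 # state: which channel gets this cycle
        self.results = [0] * n

    def clock(self, a, b):
        """One SCTL iteration: one shared multiply, then advance."""
        i = self.idx
        self.results[i] = a[i] * b[i]      # the single DSP48E at work
        self.idx = (i + 1) % self.n        # each output refreshes every n cycles
        return self.results

mux = MultiplexedMultiplier(4)
for _ in range(4):                         # after n cycles all outputs are valid
    out = mux.clock([1, 2, 3, 4], [10, 20, 30, 40])
print(out)  # [10, 40, 90, 160] -- four products from one multiplier
```

The trade is throughput for fabric: each result only refreshes every n cycles, but n-1 multiplier blocks are freed.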
The Loop Timer express VI is very useful for timing a loop to an exact rate; however, if you want to be sure the loop is actually meeting the requested rate, you also have to put in Tick Count VIs like this:
Since the Loop Timer express VI already calculates how long it needs to wait in order to achieve the desired loop time, I would prefer that it at least output a Boolean indicating that it failed to achieve the requested timing.
It would be best if it also output the actual ticks it waited, as an I16 so it could go negative (indicating the number of ticks by which it failed to achieve timing).
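A minimal sketch of the proposed semantics in Python, assuming the behavior described above rather than any existing API: the timer reports a signed tick margin (an I16, in LabVIEW terms) plus a late flag, where a negative margin is the number of ticks the loop overran:

```python
def loop_timer(now_ticks: int, deadline_ticks: int):
    """Hypothetical output pair for the Loop Timer express VI."""
    margin = deadline_ticks - now_ticks  # ticks to spare before the deadline
    late = margin < 0                    # True if the loop missed its period
    return margin, late

margin, late = loop_timer(now_ticks=1050, deadline_ticks=1000)
print(margin, late)  # -50 True -- missed the requested period by 50 ticks
```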
At present, if you are trying to simulate your FPGA's actual logic using a custom VI like this:
then you know that your custom VI test bench only has one case for methods (just a general method case, not a case for each available method). There are ways to get around this problem; for example, this example emulates a node and suggests using a different timeout value to distinguish Wait on Rising Edge, Wait on Falling Edge, and so on, but one still has to write the code for the different methods.
My suggestion is as simple as this: make test benches easier to use by handling all of the methods and properties with a set behavior. That way, all one has to set up when creating a test bench is the input and output on each I/O read/write line. At the very least, it would be nice to have the ability to read which method is being called, so the appropriate code can be set up without complicated case structures.
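As a sketch of what that could feel like, here is a hypothetical dispatch hook in Python: if the test bench could simply read the name of the invoked method, one function per emulated I/O node would replace the timeout-encoded case structures (every name below is made up for illustration):

```python
def emulate_io_node(method: str, value=None):
    """Hypothetical test-bench hook, called once per I/O node invocation
    with the method name the FPGA code actually used."""
    if method == "Read":
        return 3.14                  # canned simulated input
    if method == "Write":
        print("DUT wrote", value)    # capture the output under test
    elif method == "Wait on Rising Edge":
        return True                  # pretend the edge arrived in time
    else:
        raise NotImplementedError(f"method {method!r} not modeled yet")

print(emulate_io_node("Read"))  # 3.14
emulate_io_node("Write", 2.5)   # DUT wrote 2.5
```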