I would like a feature for accessing several I/O pin ranges, to avoid having to program this for a 9205 cRIO module:
With DIO modules like the NI 9403 you can program this:
Why not provide Mod2/AI0:31 in the above image (with subranges like AI0:7, AI8:15, … similar to the DIO module)?
The Xilinx log window should use a fixed-width font.
Which of these two string indicators with identical content is easier to read?
When you compile on your own machine, the output of the "coregen" step is cached and reused on subsequent compiles, which saves considerable time. The fact that this does not happen on the NI cloud compile service erases any speed gains (and more). Two suggestions, with a sketch of the caching scheme after the list:
- Cache cores at NI, keeping each for xx days since last use (or forever, if space is not an issue)
- Transfer cores that already exist in the local cache to the compile server along with the other intermediate files
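As a rough illustration of the kind of server-side cache being proposed (not NI's actual infrastructure; the file names, cache location, and hash-key scheme here are all assumptions), the compile service could key each generated core on a hash of its generation inputs and reuse the artifact whenever the key matches:

```python
import hashlib
import os
import shutil

CACHE_DIR = "/var/cache/coregen"  # hypothetical location on the compile server

def core_cache_key(spec_file: str) -> str:
    """Key a generated core on the exact bytes of its generation inputs."""
    with open(spec_file, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def get_or_generate_core(spec_file: str, generate) -> str:
    """Return a cached core if one exists for these inputs; otherwise run
    coregen (the `generate` callable) and cache its output."""
    key = core_cache_key(spec_file)
    cached = os.path.join(CACHE_DIR, key + ".ngc")
    if os.path.exists(cached):
        os.utime(cached)            # touch: refresh the "last used" time
        return cached               # cache hit: skip the slow coregen step
    artifact = generate(spec_file)  # cache miss: run the real generator
    os.makedirs(CACHE_DIR, exist_ok=True)
    shutil.copy(artifact, cached)
    return cached
```

Expiring entries "xx days since last use" then amounts to pruning files whose modification time is older than the cutoff.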
A smaller (and cheaper) sbRIO based on the Xilinx Zynq chip. The target size is the SO-DIMM form factor (68 x 30 mm, half the area of a credit card, 200 pins). Such a board would be OEM-friendly and could be plugged into a product, unlike the current sbRIO offerings, which require the product to be developed around the sbRIO rather than the sbRIO fitting into your product. Also, a base board that is used only during development. Below is roughly what the proposed sbRIO and base board would look like (courtesy of Enclustra FPGA Solutions).
We need a way to simply reinterpret the bits in our FPGAs. I currently have a situation where I need to change my SGL values into U32 for the sake of sending data up to the host. Currently, the only way is to make an IP node, which is just silly. We should be able to use the Type Cast function simply to reinterpret the bits.
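For reference, this is the plain bit-for-bit reinterpretation being asked for; a minimal Python sketch of the same operation on the host side using the standard struct module (no numeric conversion, just relabeled bits):

```python
import struct

def sgl_bits_to_u32(x: float) -> int:
    """Reinterpret the 32 bits of a single-precision float as a U32,
    without any numeric conversion (what Type Cast should do on FPGA)."""
    return struct.unpack("<I", struct.pack("<f", x))[0]

def u32_bits_to_sgl(n: int) -> float:
    """The inverse: treat a U32 bit pattern as an SGL."""
    return struct.unpack("<f", struct.pack("<I", n))[0]

print(hex(sgl_bits_to_u32(1.0)))  # 0x3f800000 -- the IEEE-754 pattern for 1.0
```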
How amazing would it be to have the ability to visualise resource usage on an FPGA target using a view similar to that shown above (courtesy of WinDirStat)?
I only recently shaved a significant portion off my FPGA usage by discovering that a massively oversized FIFO had been sitting in my code for almost a year without my noticing. I feel that this kind of visualisation (with mouse-over showing what is actually occupying the space), with differentiation between registers, LUTs, BRAM, DSPs and so on, would greatly aid those of us trying to squeeze as much as possible out of our FPGA designs.
I think providing this information based on the "estimated resource utilisation", i.e. before Xilinx optimises things away, would be OK. I'm not sure whether the final resource utilisation can be mapped as accurately in this way.
It would also be nice to see CLIP utilisation and NI-internal utilisation at a glance, as this is apparently hugely different between targets.
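To make the idea concrete (a sketch only; the record layout is an assumption, not any report format NI actually emits), the estimated-utilisation data could be reduced to (diagram path, resource type, count) records that a treemap widget then renders, with mouse-over text taken from the path:

```python
from collections import defaultdict

# Hypothetical per-node estimates: (diagram path, resource type, count).
estimates = [
    ("Top/AcqLoop/FIFO_raw",   "BRAM", 48),  # an oversized FIFO would stand out
    ("Top/AcqLoop/Filter",     "DSP",  12),
    ("Top/CommLoop/Packer",    "LUT",  900),
    ("CLIP/AuroraCore",        "LUT",  4500),
    ("NI-internal/DMA engine", "BRAM", 6),
]

def totals_at_depth(level: int):
    """Aggregate counts per resource type at the given hierarchy depth."""
    out = defaultdict(lambda: defaultdict(int))
    for path, res, n in estimates:
        node = "/".join(path.split("/")[: level + 1])
        out[node][res] += n
    return out

for node, res in sorted(totals_at_depth(0).items()):
    print(node, dict(res))  # one treemap tile per node, sized by count
```

Top-level tiles for "Top", "CLIP", and "NI-internal" would give exactly the at-a-glance split between user code, CLIP, and NI-internal usage mentioned above.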
Hi, how about a facility to import and export I/O labels in an FPGA/Real-Time project, as shown in the image, instead of manually renaming each I/O?
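A plain text exchange format would be enough; for example (a hypothetical layout, not an existing NI format), a CSV pairing each channel with its label, which a small script or the project dialog could read back in:

```python
import csv

# labels.csv -- hypothetical format, one "channel,label" pair per line:
#   Mod1/AI0,TankPressure
#   Mod1/AI1,TankTemperature
#   Mod2/DIO0,PumpEnable
with open("labels.csv", newline="") as f:
    labels = {channel: label for channel, label in csv.reader(f)}

print(labels["Mod1/AI0"])  # -> TankPressure
```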
User Lorn has found a brilliant tip for *DRASTICALLY* speeding up FPGA compile times under Windows for PCs with the turbo boost feature. What's more, it's extremely simple to implement.
Please let's see this in future versions of LabVIEW as standard.
It is time-consuming to have to recompile all of the LabVIEW FPGA code even when there is only a tiny change to it.
I understand there are the Sampling Probe, the Desktop Execution Node, and simulation tools to reduce such time.
Our customer in Japan would like to have an incremental compile function in LabVIEW as well. (Please see below.)
I agree with his opinion.
What do you think?
Application Engineer at National Instruments Japan.
When there are many controls on the front panel of the FPGA VI, selecting a control from a Read/Write Control node in the host can become a pain. It is one very large list of all the controls on the front panel of the FPGA. This list has no scroll bar, no browse or search feature, and no obvious way of grouping controls.
Here is one example of such a front panel, and a video showing how long it takes to scroll through the list of controls.
And here is the video of me scrolling through the controls: http://screencast.com/t/PLzptTwq58aw
There is plenty of room for improvement. Here are just a few ways I think NI could make this better.
Browse and Search
When using a Property Node or Invoke Node, the very top option is "Browse...". From there, a list of all properties or methods can be seen in a resizable window, where you can also search and sort alphabetically. The Read/Write Control node could have similar functionality, making selection of controls easier.
Front Panel Selection From FPGA
There could be an option for creating a node by selecting the controls on the front panel of the FPGA VI. A solution that may work today is to select the controls, then invoke a custom QuickDrop command that creates the node and puts it on the clipboard so it can be pasted into the host VI. If this were to become an option, I'd hope there would be a way to combine two nodes into one by concatenating the controls of one onto the other.
Front Panel Selection From Host
Let's say you already have the Read/Write Control node on the host. There could be a right-click option that would open a new window showing a static image of the front panel of the FPGA VI, which the user could then click on. This would be great because the developer probably already knows the control they want based on its front-panel location. I don't know how feasible this is, because you could load a bitfile which won't have any front-panel information.
Easier Grouping of Controls
Right now there is a way to group the controls of an FPGA VI. This feature is never talked about and doesn't work on dynamic bitfiles. Here is a discussion where I describe the steps for making controls grouped on the host. Still, this isn't supported on all FPGA setups, and you have to conform to a specific naming convention. Why can't controls that are grouped on the front panel simply be grouped in the host?
This idea exchange is really for any kind of improvement to the FPGA control selection.
Even though ibberger touched on the concept in the idea, I do think that most people use LabVIEW under Windows. Compiling an FPGA VI happens entirely on the PC under Windows. I noticed that during this process the compiler uses only one core. Since I'm using a machine with a four-core processor, the CPU usage rarely goes above 25%.
My idea is to update the compiler to allow it to use multiple cores. The user should have the option to limit the maximum number of cores available to the compiler, because the user may want to continue working while the compile runs in the background.
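The user-facing control could be as simple as a worker cap. A minimal sketch using Python's multiprocessing (the per-partition task is a stand-in; real FPGA compilation stages are not this cleanly independent, which is part of why this is hard):

```python
import multiprocessing as mp

def compile_partition(partition_id: int) -> str:
    """Stand-in for one independent chunk of synthesis work."""
    return f"partition {partition_id} done"

def parallel_compile(n_partitions: int, max_cores: int | None = None):
    # Cap the workers so the user can keep some cores for interactive work.
    workers = min(n_partitions, max_cores or mp.cpu_count())
    with mp.Pool(processes=workers) as pool:
        return pool.map(compile_partition, range(n_partitions))

if __name__ == "__main__":
    print(parallel_compile(8, max_cores=3))  # leaves a core free on a 4-core PC
```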
The FIFO read looks like an event-based node (like a dequeue or Wait on Occurrence), and I think a lot of people assume it will use minimal CPU resources while it is waiting for data. I'm wondering if we can have an option that behaves like that. For example, could we have a fixed-size FIFO read where the FPGA could trigger an interrupt to let the RT side know the data is ready?
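The difference between the two behaviours, sketched in Python with threading.Event standing in for an FPGA-raised interrupt (the names and structure are illustrative only, not how the RIO driver is implemented):

```python
import threading
import time

data_ready = threading.Event()  # stands in for the FPGA asserting an IRQ

def polling_read(timeout_s: float) -> bool:
    """Roughly the behaviour people are surprised by: the reader spins,
    repeatedly checking for data, and keeps one RT core busy."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if data_ready.is_set():
            return True
    return False

def interrupt_read(timeout_s: float) -> bool:
    """The requested behaviour: sleep until the FPGA signals that a full
    fixed-size block is ready, using near-zero CPU while waiting."""
    return data_ready.wait(timeout_s)
```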
I just manually transferred a fairly large LabVIEW FPGA project from one target to another (7965R to 7966R). It would be nice to be able to click on the RIO target in the project and have a "Migrate to New FPGA Target" option in the context menu. The menu would open a new dialog where you could select the new RIO target, which would then be added to the project automatically and populated with the VIs, FIFOs, derived clocks, memory blocks, etc. from the original target. The user could choose whether or not to delete the original RIO target.
This would also make it very easy for users to transfer sample code from the LabVIEW Example Finder to the correct FPGA target (instead of having the folder labeled "Move These Files").
Currently, when you build a VI, the bitfile path is stored as an absolute path (you can see it in the project XML). This means that if you change the project location, you have to recompile the FPGA to use VI mode or run interactively. It seems the bitfile could instead be stored as a relative path, like all the VIs in the project.
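Resolving the path relative to the project file is a one-liner, which is why this feels like low-hanging fruit. A minimal sketch with hypothetical paths:

```python
import os

project_dir = "/projects/MyProject"
bitfile_abs = "/projects/MyProject/FPGA Bitfiles/target.lvbitx"

# What could be stored in the project XML instead of the absolute path:
stored = os.path.relpath(bitfile_abs, start=project_dir)
print(stored)  # FPGA Bitfiles/target.lvbitx

# What LabVIEW would do when the project is loaded from a new location:
resolved = os.path.join("/moved/MyProject", stored)
```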
Vision is available under 64-bit LabVIEW, and this makes sense since vision can generate very large amounts of data. I think now is the time to bring FPGA over to 64-bit LabVIEW as well. With FPGA systems you can also generate very large data sets. Also, a card like the PCIe-1473R is a vision card that generates lots of data but requires FPGA, so you can only use it in 32-bit LabVIEW. This is not a good thing. It has been 5 years since 64-bit LabVIEW was released; it is time to finish moving the add-ons over to 64 bit.
With the availability of fast FlexRIO cards (such as the NI 5761) and FPGA frame grabbers (NI 1483, PXIe-1435, NI PCIe-1433), data rates of 1 GB/s are becoming commonplace. However, the FPGA Module is limited to communicating only with 32-bit LabVIEW. Since you typically want to store more than 2 seconds of data in RAM, and at 1 GB/s that is already more than 2 GB (beyond what a 32-bit process can address), you would like to use 64-bit LabVIEW for your host application. Unfortunately, this isn't possible yet.
While I can imagine that a full-blown 64-bit FPGA Module add-on would be pretty difficult to build (and especially to test), I believe there is a solid middle ground at this point. I can imagine coding and compiling the FPGA in the normal 32-bit LabVIEW environment, and then just using a 64-bit host application to read/write front-panel controls and to read/write the DMA buffers of the FPGA. I don't know the details, but these communication protocols could be very low-hanging fruit if it's just a simple matter of recompiling a few key pieces for 64-bit operation.
Since the data rates passing to and from FPGAs will continue to climb, as will the prevalence of 64-bit operating systems, a 64-bit version of the FPGA Module is needed in the new-feature pipeline. This should also be kept in mind as other new FPGA Module features and tools are created, as planning for 64-bit compatibility now will make the eventual transition much, much easier down the road.
In current versions of LabVIEW FPGA, placing a For Loop inside an SCTL results in code that cannot be compiled; this is because, conventionally, For Loops work iteratively and therefore require multiple clock cycles to drive each new iteration.
However, I think a logical implementation of a For Loop within an SCTL would be the generation of multiple parallelised instances of whatever code is inside the For Loop. This would greatly improve readability and flexibility by saving the user from manually creating multiple separate instances of the same critical code on the block diagram.
This would require the For Loop to execute a known, fixed maximum number of times, i.e. the loop count would have to be a compile-time constant.
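In software terms, the request is compile-time loop unrolling: with a constant bound N, the loop body is replicated N times as parallel hardware rather than executed over N clock cycles. A Python sketch of the transformation (illustrative only; names are made up):

```python
N = 4  # must be a compile-time constant for the unroll to be possible

def body(i: int, sample: list) -> int:
    """The critical code currently copy-pasted N times on the diagram."""
    return sample[i] * 2

# What you write: one For Loop inside the SCTL.
def sctl_for_loop(sample: list) -> list:
    return [body(i, sample) for i in range(N)]

# What the compiler would generate: N parallel instances, no iteration.
def sctl_unrolled(sample: list) -> list:
    return [body(0, sample), body(1, sample),
            body(2, sample), body(3, sample)]

assert sctl_for_loop([1, 2, 3, 4]) == sctl_unrolled([1, 2, 3, 4])
```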
Many data streams contain information for multiple channels or multiple samples. Today, one must pack this data into larger integer types or manually interleave the data into multiple writes to the DMA FIFO API. It would be much simpler if DMA natively supported cluster and array data types. The local FIFO, Memory, and Register APIs already support this; extend it to DMA.
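For comparison, this is the manual packing the idea would make unnecessary; a sketch of two 16-bit channels packed into one U32 per DMA element and unpacked on the host (the channel layout chosen here is an assumption):

```python
def pack_channels(ch_a: int, ch_b: int) -> int:
    """Pack two U16 samples into one U32 DMA element (channel A high)."""
    return ((ch_a & 0xFFFF) << 16) | (ch_b & 0xFFFF)

def unpack_channels(word: int) -> tuple:
    """Host side: recover the interleaved channels from each U32."""
    return (word >> 16) & 0xFFFF, word & 0xFFFF

word = pack_channels(0x1234, 0xABCD)
assert unpack_channels(word) == (0x1234, 0xABCD)
```

With native cluster/array support, this bookkeeping (and the matching unpack code that must be kept in sync on the host) would simply disappear.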
This is the current situation when dealing with register creation on FPGA targets:
This is what I would like:
I am currently creating a group of classes to abstract out inter-loop communication, and the ONLY thing changing between classes (aside from variations between Register vs FIFO vs Global and so on) is the datatype. Being able to link the Register creation to a data input (the data value of the class itself, for example) would save a lot of work in such operations. If it were also possible to do the same for the Register stored within the class private data, then implementing different classes in this way would be really easy.
Even without classes, the ability to auto-adapt the type of registers and FIFOs in this way would be a real step towards reusable code on FPGA.
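In a textual language this is ordinary generic/inferred typing; a Python sketch of a register whose stored type is fixed by the value wired at creation (the class name and API are illustrative, not an NI interface):

```python
from typing import Generic, TypeVar

T = TypeVar("T")

class Register(Generic[T]):
    """A register whose datatype is taken from its initial value,
    rather than from a manually configured type."""
    def __init__(self, initial: T) -> None:
        self._value: T = initial  # the type is inferred from the input

    def write(self, value: T) -> None:
        self._value = value

    def read(self) -> T:
        return self._value

sgl_reg = Register(0.0)  # behaves as a Register[float] -- like an SGL register
u32_reg = Register(0)    # behaves as a Register[int]   -- like a U32 register
```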