LabVIEW FPGA Idea Exchange

About LabVIEW FPGA Idea Exchange

Have a LabVIEW FPGA Idea?

  1. Does your idea apply to LabVIEW in general? Get the best feedback by posting it on the original LabVIEW Idea Exchange.
  2. Browse by label or search the LabVIEW FPGA Idea Exchange to see if your idea has previously been submitted. If your idea exists, be sure to vote for it by giving it kudos to indicate your approval!
  3. If your idea has not been submitted, click New Idea to submit a product idea to the LabVIEW FPGA Idea Exchange. Be sure to submit a separate post for each idea.
  4. Watch as the community gives your idea kudos and adds their input.
  5. As NI R&D considers the idea, they will change the idea status.
  6. Give kudos to other ideas that you would like to see in a future version of LabVIEW FPGA!

I love using enums because they can often make discrete options much clearer. Example:

enum clarity.PNG

But, at least as of 2017, the code below will use fewer resources and propagate faster, because LabVIEW uses an 8-bit register for the enum above instead of the two bits needed below.

 

My solution is to allow smaller integer representations of enums (I'd use a U2 in the above case).

 

Ideally the user wouldn't even have to specify the integer size; the IDE could just calculate the minimum at edit time and show the user what it is.
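
The width calculation itself is trivial. As a minimal sketch (plain Python; the function name is mine, not an NI API), the minimum register width is just the bit length of the largest encoded value, n_values - 1:

    def min_enum_bits(n_values: int) -> int:
        """Smallest register width that can encode n_values discrete states."""
        return max(1, (n_values - 1).bit_length())

    print(min_enum_bits(4))    # 2 -> a U2, as in the example above
    print(min_enum_bits(17))   # 5 instead of the 8 bits LabVIEW uses today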

A smaller (and cheaper) sbRIO based on the Xilinx Zynq chip. The target size is the SO-DIMM form factor (68 x 30 mm, half the area of a credit card, 200 pins). Such a board would be OEM-friendly and could be plugged into a product, unlike the current sbRIO offerings, which require the product to be developed around the sbRIO rather than the sbRIO fitting into your product. Also, a base board that is used only during development. Below is roughly what the proposed sbRIO and base board would look like (courtesy of Enclustra FPGA Solutions).

mars_pm3_350.jpg       mars_pm3_350.jpg

Xilinx log window should use a fixed-width font.

 

Which of these two string indicators with identical content is easier to read?

 

FPGA Xilinx Log font.png

 

Spoiler: The one on the left is Courier; the one on the right is the default Application font.

 

How amazing would it be to have the ability to visualise resource usage on an FPGA target using a view similar to that shown above (courtesy of WinDirStat)?

 

I only recently shaved a significant portion off my FPGA usage by finding out that I had a massively oversized FIFO in my code for almost a year without noticing.  I feel that this kind of visualisation (with mouse over showing what is actually occupying the space) with differentiation between Registers, LUTs, BRAM, DSPs and so on would greatly aid those of us trying to squeeze as much as possible out of our FPGA designs.

 

I think providing this information based on the "estimated resource utilisation", i.e. before Xilinx optimises things away, would be OK.  I'm not sure whether the final resource utilisation can be mapped as accurately in this way.

 

It would also be nice to see CLIP utilisation and NI-internal utilisation at a glance, as this is apparently hugely different between targets.
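
As a rough illustration of the kind of breakdown I mean, here is a minimal text-only sketch in Python (every entity name and figure below is invented) that renders per-resource shares in a WinDirStat-like proportional view:

    # Hypothetical per-entity figures, as an "estimated resource usage"
    # report might expose them (names and numbers are made up).
    usage = {
        "DMA FIFO 'AcqData' (oversized)": {"BRAM": 120, "LUT": 400,  "Reg": 350},
        "Control-loop SCTL":              {"BRAM": 0,   "LUT": 900,  "Reg": 1200},
        "CLIP: Aurora core":              {"BRAM": 8,   "LUT": 2500, "Reg": 3100},
        "NI-internal infrastructure":     {"BRAM": 16,  "LUT": 1800, "Reg": 2400},
    }

    for res in ("BRAM", "LUT", "Reg"):
        total = sum(entity[res] for entity in usage.values())
        print(f"\n{res} ({total} used):")
        for name, entity in sorted(usage.items(), key=lambda kv: -kv[1][res]):
            share = entity[res] / total
            print(f"  {'#' * round(share * 40):<40} {share:6.1%}  {name}")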

 

Shane.

NI has released an exciting new product with the High-Speed Serial board, which is essentially a Kintex-7 with GTX transceivers exposed.

 

http://sine.ni.com/nips/cds/view/p/lang/en/nid/212693

http://sine.ni.com/nips/cds/view/p/lang/en/nid/212695

 

Some applications we are working on would benefit greatly from having some way to expose the GTP / GTX / GTH transceivers on the various NI FPGA boards to enable high-speed serial transmission on a wider product palette.  Even the Virtex-5 supports up to 16 GTP transceivers.  Having even 8 of these available for Aurora communication would be a complete game changer, as we could move over to optical links between individual devices.

 

In order to be able to make proper use of this, NI needs to offer GTX / GTH / GTP options for many of their boards.  Imagine the power which could be unlocked by interfacing 8x Zynq 7010 boards with a single Kintex-7, each Zynq performing some aspect of pre-processing (analog control, switching, some aspects of control) before sending results back to the Kintex-7. For those of us working in scientific areas, the prospect of a system like this is truly mouth-watering.

We need a way to simply reinterpret the bits in our FPGAs.  I currently have a situation where I need to change my SGL values into U32 for the sake of sending data up to the host.  Currently, the only way is to make an IP node.  That is just silly.  We should be able to use the Type Cast simply for the purpose of reinterpreting the bits.
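
For reference, the desired operation is a pure bit-for-bit reinterpretation, not a numeric conversion. A sketch of the intended semantics in Python (the function name is mine):

    import struct

    def sgl_bits_as_u32(x: float) -> int:
        """Reinterpret the raw IEEE-754 single-precision bits of x as a U32."""
        return struct.unpack("<I", struct.pack("<f", x))[0]

    print(hex(sgl_bits_as_u32(1.0)))   # 0x3f800000
    print(hex(sgl_bits_as_u32(-2.5)))  # 0xc0200000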

Even though ibberger touched on the concept in the idea, I do think that most people use LabVIEW under Windows. Compiling an FPGA VI happens entirely on the PC under Windows. I noticed that during this process the compiler uses only one core. Since I'm using a machine with a 4-core processor, CPU usage rarely goes above 25%.

 

My idea is to update the compiler to allow it to be multicore. The user should have the option to limit the maximum number of cores available to the compiler. This is necessary because the user may want to continue working while the compilation runs in the background.
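
To make the core-limit option concrete, here is a minimal sketch of the idea in Python (the compile step is a placeholder, not the real Xilinx toolchain):

    import os
    from concurrent.futures import ProcessPoolExecutor

    def compile_step(step_id: int) -> int:
        return step_id  # stand-in for one parallelisable unit of compiler work

    if __name__ == "__main__":
        reserved_for_user = 1  # user option: keep a core free for interactive work
        max_workers = max(1, (os.cpu_count() or 1) - reserved_for_user)
        with ProcessPoolExecutor(max_workers=max_workers) as pool:
            results = list(pool.map(compile_step, range(16)))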

 

User Lorn has found a brilliant tip for *DRASTICALLY* speeding up FPGA compile times under Windows for PCs with the Turbo Boost feature. What's more, it's extremely simple to implement.

 

Please let's see this in future versions of LabVIEW as standard.

 

http://forums.ni.com/t5/LabVIEW-FPGA-Idea-Exchange/Multi-core-Compiling/idc-p/2301338#M297

Hi, how about a facility to import and export I/O labels in an FPGA/Real-Time project, as shown in the image, instead of manually renaming each I/O?

 

IO Label.jpg
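
A simple exchange format would already do the job. As a sketch, a CSV round-trip for the label map (file and channel names invented):

    import csv

    labels = {"Mod1/AI0": "PressureSensor1", "Mod1/AI1": "TempSensor1"}

    # Export: one "channel,label" row per I/O
    with open("io_labels.csv", "w", newline="") as f:
        csv.writer(f).writerows(labels.items())

    # Import: read the same rows back into a channel -> label mapping
    with open("io_labels.csv", newline="") as f:
        imported = dict(csv.reader(f))

    assert imported == labels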

I just manually transferred a fairly large LabVIEW FPGA project from one target to another (7965R to 7966R).  It would be nice to be able to click on the RIO target in the project and have a "Migrate to New FPGA Target" option in the context menu.  This would open a new dialog where you could select the new RIO target, which would then be automatically added to the project and populated with the VIs, FIFOs, derived clocks, memory blocks, etc. from the original target.  The user could choose whether or not to delete the original RIO target.

 

This would also make it very easy for users to transfer sample code from the LabVIEW Example Finder to the correct FPGA target (instead of having the folder labeled "Move These Files").

With the availability of fast FlexRIO cards (such as the NI 5761) and FPGA frame grabbers (NI 1483, PXIe-1435, NI PCIe-1433), data rates of 1 GB/s are becoming commonplace.  However, the FPGA Module is limited to communicating only with 32-bit LabVIEW.  Since you typically want to store more than 2 seconds of data in RAM, you would like to use 64-bit LabVIEW for your host application.  Unfortunately, this isn't possible yet.

 

While I can imagine that a full-blown 64-bit FPGA Module add-on would be pretty difficult to build (and especially test), I believe there is a solid middle ground at this point.  I can imagine coding and compiling the FPGA in the normal 32-bit LabVIEW environment, and then just using a 64-bit host application to read/write front-panel controls and the DMA buffers from the FPGA.  I don't know the details, but this communication layer could be very low-hanging fruit if it's just a simple matter of recompiling a few key pieces for 64-bit operation.

 

Since the data rates passing to and from FPGAs will continue to climb, as will the prevalence of 64-bit operating systems, a 64-bit version of the FPGA Module is needed in the new-feature pipeline.  This should also be kept in mind as other new FPGA Module features and tools are created, as planning for 64-bit compatibility now will make the eventual transition much, much easier down the road.

FPGA registers would be more user-friendly if they could be Quick Dropped and were searchable (Find Caller, as has been suggested before). This would also be great for handshakes.

Hello all.

 

It is time-consuming to have to recompile all of the LabVIEW FPGA code even when there is only a tiny change to it.

I understand there are the sampling probe, the Desktop Execution Node, and simulation tools to reduce this time.

 

Our customer in Japan would like to have an incremental compile function in LabVIEW as well (please see below).

I agree with his opinion.

 

http://www.youtube.com/watch?v=9v50uCVdW3o

 

What do you think?

 

Eisuke Ono

Application Engineer at National Instruments Japan.

 

 

If I am choosing to offload multiple FPGA compilations to either a local or a cloud compile farm, can we not at least do the intermediate file generation in parallel?  Our current design takes approximately 10-15 minutes to generate intermediate files.  For 5 cloud compiles, this blocks my IDE for around an hour.

 

Since the file creation processes are independent of each other, why can't we do them in parallel?
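
Since each build specification's generation step depends only on its own inputs, the orchestration could look like this sketch (Python; the generation function and build-spec names are placeholders):

    from concurrent.futures import ProcessPoolExecutor

    def generate_intermediate_files(build_spec: str) -> str:
        # Stand-in for the per-target intermediate file generation step.
        return f"{build_spec}: intermediate files ready"

    if __name__ == "__main__":
        build_specs = ["fpga_a", "fpga_b", "fpga_c", "fpga_d", "fpga_e"]
        # Five independent jobs: ~10-15 minutes of wall time instead of ~an hour.
        with ProcessPoolExecutor(max_workers=len(build_specs)) as pool:
            for line in pool.map(generate_intermediate_files, build_specs):
                print(line)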

Currently, when you build a VI, the bitfile path is stored as absolute (you can see it in the project XML). This means that if you change the project location by either:

 

  • moving machines, or
  • checking in and out of source code control on different machines,

you have to recompile the FPGA to use VI mode or run interactively. It seems the bitfile could be stored as a relative path, like all the VIs in the project.
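
The change amounts to storing the path relative to the project file. For illustration (paths invented):

    import os

    project_dir = r"C:\work\MyProject"
    bitfile_abs = r"C:\work\MyProject\FPGA Bitfiles\target.lvbitx"

    # Store this in the project XML instead of the absolute path:
    bitfile_rel = os.path.relpath(bitfile_abs, start=project_dir)
    print(bitfile_rel)  # FPGA Bitfiles\target.lvbitx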

 

Cheers,

James

It would be nice to be able to use logic operators on arrays in Single Cycle Timed Loops.

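To make the request concrete: elementwise logic over fixed-size arrays maps onto independent gates, so it should be synthesisable within a single cycle. A sketch of the intended semantics in Python:

    a = [True, False, True, True]
    b = [True, True, False, True]

    # Elementwise AND: each element is an independent gate, so all four
    # results could be produced in the same clock cycle.
    result = [x and y for x, y in zip(a, b)]
    print(result)  # [True, False, False, True]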

I don't like static resource definitions (FIFOs, block RAMs, or DMAs) in my projects.  I prefer to have the code declare such entities as they are required, because this makes scalability much easier to achieve.

For FIFOs, block RAM and so on this is no problem, but there are two things we currently cannot instantiate in code:

DMA Channels

Derived clocks

 

To deal with the second option: why is it currently not possible to create a derived clock in code?  The ability to have one piece of code accept a single clock reference and let one loop run at a multiple of that speed is something I've wanted in the past, but it is currently impossible in LabVIEW.

 

Please let us configure / define derived clocks in LV code.

The Tick Count function in LabVIEW FPGA can represent time periods of up to 2^32 clock cycles with single-tick accuracy, which (using the standard 40 MHz FPGA clock) is about 107 seconds.

Sometimes I need to handle longer time spans, and I use this example.

 

I suggest implementing a built-in 64-bit tick counter.

 

 

 

64bittick.png
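
Until a built-in counter exists, the usual workaround is to extend the 32-bit count in software by watching for rollover. A minimal sketch of that logic in Python (class and method names are mine; the real implementation would live in FPGA code):

    class Tick64:
        """Extend a free-running 32-bit tick counter to 64 bits.

        Assumes read() is called at least once per rollover period
        (2**32 ticks / 40 MHz, i.e. roughly every 107 s), so no wrap is missed.
        """
        def __init__(self):
            self.high = 0  # upper 32 bits, maintained in software
            self.last = 0  # previous raw 32-bit reading

        def read(self, raw32: int) -> int:
            if raw32 < self.last:  # the hardware counter wrapped around
                self.high += 1
            self.last = raw32
            return (self.high << 32) | raw32

A built-in U64 Tick Count node would simply bake this bookkeeping into the function itself.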

In current versions of LabVIEW FPGA, placing a For Loop inside an SCTL results in code that cannot be compiled; this is because For Loops conventionally work iteratively and therefore require multiple clock cycles to drive each new iteration.

 

However, I think a logical implementation of a For Loop within an SCTL would be to generate multiple parallelised instances of whatever code is inside the For Loop. This would greatly improve readability and flexibility by sparing the user from manually creating multiple separate instances of the same critical code on the block diagram.

 

This would require the For Loop to execute a known maximum number of times.

 

SCTLs.png
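
In software terms the proposal is loop unrolling: with a known maximum count, the body can be replicated rather than iterated. A sketch of the equivalence in Python (the body function is a placeholder):

    N = 4
    x = [1, 2, 3, 4]

    def body(i):
        return x[i] * 2  # stand-in for the replicated critical code

    # Iterative semantics (what a For Loop means today):
    sequential = []
    for i in range(N):
        sequential.append(body(i))

    # Unrolled semantics (what the SCTL could synthesise: N parallel copies):
    unrolled = [body(0), body(1), body(2), body(3)]

    assert sequential == unrolled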

 

 

I would like to have a feature for accessing ranges of I/O pins, to avoid having to program this for a NI 9205 cRIO module:

check.png

With DIO modules like the NI 9403 you can program this:

check2.png

Why not provide Mod2/AI0:31 in the above image (with subranges like AI0:7, AI8:15, … similar to the DIO modules)?