LabVIEW FPGA Idea Exchange

About LabVIEW FPGA Idea Exchange

Have a LabVIEW FPGA Idea?

  1. Does your idea apply to LabVIEW in general? Get the best feedback by posting it on the original LabVIEW Idea Exchange.
  2. Browse by label or search in the LabVIEW FPGA Idea Exchange to see if your idea has previously been submitted. If your idea already exists, be sure to vote for it by giving it kudos to indicate your approval!
  3. If your idea has not been submitted, click New Idea to submit a product idea to the LabVIEW FPGA Idea Exchange. Be sure to submit a separate post for each idea.
  4. Watch as the community gives your idea kudos and adds their input.
  5. As NI R&D considers the idea, they will change the idea status.
  6. Give kudos to other ideas that you would like to see in a future version of LabVIEW FPGA!

The error "You must recompile the VI for the selected target" appears for reasons that, to me, are often obscure or even inexplicable. Recompiling is, as we know, painful. It would be good if the error message included the reason(s) for refusing the existing bitfile, since then I may be able to work out how to stop it happening.

I understand the message appears because LabVIEW decides there are "dirty dots" associated with the bitfile; what I would like the error message to tell me is which dots are dirty and why.

Please add the ability to right-click on a Memory, Register, and/or FIFO item and FIND ALL instances throughout the project and/or VI hierarchy. Ideally it would work just like it does for local and global variables (as shown) in desktop LabVIEW.

 

[Image: FIND all instances for BRAMs.png]

All FPGA properties dialog boxes that allow you to specify a custom control data type (e.g. Memory, Register, FIFO) should show the path and name of the last .ctl file configured. For example:

[Image: properties.PNG]

In LabVIEW it is not allowed to exit an SCTL running in an external clock domain. LabVIEW claims this could lead to instability of the code due to glitches, etc., on the external clock.

I propose leaving the option open to the programmer to take that risk, which is not always present. It can lead to more understandable code.

For example, I have code where I read data from an NI 5752 ADC module and store it in block RAM (32 ADC channels, 32 block RAMs). Reading from that ADC implies acquiring the data in the external ADC clock domain, so the writes to memory also happen in that clock domain.

I needed to implement a function to reset the memory as well. That means writing to that memory, which has to be done in an SCTL in the same external clock domain.

However, this reset function (subVI) cannot be inserted in the normal "enable chain" of the main program, since the SCTL cannot be terminated and the memory-reset subVI therefore never terminates.

 

So I had to use an ugly trick to get this done. In the main program I create a dead branch that does the reset. That subVI never stops, but after the reset has been done it sends a signal via a FIFO to the "wait reset" subVI in the main enable chain. The "wait reset" subVI runs in the default clock domain and can exit its wait loop once the reset signal has been received.

[Image: Capture.PNG]

However, this trick is not easy to understand from the program. It would have been simpler if the reset function (the externally clocked loop) could have exited by itself and been inserted in the main enable chain. That would have been more logical.
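
Since the block diagram can't be pasted as text, here is a minimal conceptual sketch of that handshake in Python, with a thread standing in for the free-running SCTL and a queue standing in for the signalling FIFO (all names are made up):

```python
import threading
import queue

reset_done = queue.Queue(maxsize=1)   # stands in for the signalling FIFO
block_ram = [0xFF] * 1024             # stands in for one block RAM

def reset_branch():
    """Dead branch: external clock domain; never terminates (like the SCTL)."""
    for addr in range(len(block_ram)):
        block_ram[addr] = 0           # write the reset value to every address
    reset_done.put(True)              # signal "reset done" through the FIFO
    threading.Event().wait()          # park forever: the SCTL cannot exit

def wait_reset():
    """Main enable chain: default clock domain; exits once reset is signalled."""
    reset_done.get()                  # block until the dead branch signals

threading.Thread(target=reset_branch, daemon=True).start()
wait_reset()                          # the main enable chain can now continue
print("reset complete:", all(v == 0 for v in block_ram))
```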

 

 

It would be nice if this code:

[Image: casestructure.png - Case Structure version]

 

could be replaced by this code:

[Image: SPI Driver_BD.png - Select version]

 

but doing so gives an edit-time error specifying that "Select: Possiblity of Dynamic Refnum not supported for current target".

 

Is there a fundamental reason this can't be allowed? The behaviour is presumably the same in each case, and the references come from the same place in both cases. In the case structure, the exact same references that would be passed to the Select node are used, with the same True/False choice.

 

It's not a huge issue, but it would be a nice usability/readability improvement.
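
As a text analogy only (Python standing in for the two block-diagram forms, with made-up names), the equivalence being claimed is:

```python
# Both forms pick one of two pre-existing references; neither branch
# has side effects, so the result is identical.
ref_a, ref_b, use_a = "FIFO A", "FIFO B", True

selected = ref_a if use_a else ref_b   # "Select" style: a 2-to-1 multiplexer

if use_a:                              # "Case Structure" style: explicit branches
    chosen = ref_a
else:
    chosen = ref_b

assert selected == chosen              # same result either way
```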

I love using enums because they can often make discrete options much clearer. Example:

[Image: enum clarity.PNG]

But, at least as of 2017, the code below is going to use fewer resources and propagate faster, because LabVIEW uses an 8-bit register for the enum above instead of the two bits needed below.

 

My solution is to allow smaller integer representations of enums (I'd use a U2 in the above case).

 

Ideally the user wouldn't even have to specify the integer size; LabVIEW could just calculate the minimum at edit time and show the user what it is.
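
For what it's worth, that edit-time calculation would be trivial; a small sketch (the helper name is hypothetical):

```python
import math

def min_enum_bits(num_values: int) -> int:
    """Smallest register width that can encode num_values enum values."""
    return max(1, math.ceil(math.log2(num_values)))

print(min_enum_bits(3))    # 3-state enum -> 2 bits (instead of a U8's 8)
print(min_enum_bits(17))   # 17 values    -> 5 bits
```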

A smaller (and cheaper) sbRIO based on the Xilinx Zynq chip. The target size is the SO-DIMM form factor (68 x 30 mm, half the area of a credit card, 200 pins). Such a board would be OEM friendly and could be plugged into a product (rather than the current sbRIO offerings, which require the product to be developed around the sbRIO instead of the sbRIO fitting into your product). Also, a base board that is (only) used during development. Below is roughly what the proposed sbRIO and base board would look like (courtesy of Enclustra FPGA Solutions).

[Image: mars_pm3_350.jpg]

Hello,

Last year I had a bad experience when I tried to compile my old FPGA applications under Windows 10.

 

=> The Xilinx ISE FPGA compiler is no longer compatible with Windows 10.

 

Will something be done? I got no clear answer from NI support... supposedly it is a Xilinx problem!

 

The issue is that some products on the NI web site are sold without clear information about the incompatibility with Windows 10!

 

Please add a clear, highlighted warning on the product pages, both for FPGA boards and for cRIO targets, to inform users about the problem.

 

Thanks for your help.

 

The Xilinx log window should use a fixed-width font.

 

Which of these two string indicators with identical content is easier to read?

 

[Image: FPGA Xilinx Log font.png]

 

(The one on the left is Courier; the one on the right is the default Application font.)

Along the same line of thinking as this idea:

https://forums.ni.com/t5/LabVIEW-FPGA-Idea-Exchange/Handle-Fixed-Point-Integers-Going-Into-Case-Stru...

It would be cool if we got rid of the coercion dot here for smaller integers and integer fixed-point numbers used as an array index.

Smaller integer representation for indexes2.png

As it is, it's hard to tell if the compiler will use the bigger I32 for this.

This behavior would also be useful for other integer inputs, such as the memory address input.
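
The required index width is just as easy to derive; a small sketch (the helper name is hypothetical):

```python
def index_bits(array_size: int) -> int:
    """Bits needed to address every element of an array of the given size."""
    return max(1, (array_size - 1).bit_length())

print(index_bits(16))    # a 16-element array needs a 4-bit index, not an I32
print(index_bits(1024))  # 10 bits
```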

Arising from similar requirements as I posted many moons ago: HERE... I naively thought that putting a terminal in a disable structure would remove it from the FPGA compile. It doesn't.

 

Years later, I have developed a nice debug interface for my FPGA code, which is becoming more and more modular as I refactor it. I have many sub-modules with their own debug interfaces, which can be turned on or off from the top-level VI via LVOOP method injection.

 

The problem is that I can't really compile my entire FPGA VI with ALL debug paths enabled, as this just won't fit (it will sometimes compile, but most often not, and our FPGA code base is still growing). And this is before I even think about making my debug information more detailed. I would like to be able to easily switch certain aspects of the debug interface on and off as testing requirements change. On the debug-interface level I can do this easily, by simply not reading the data from the objects being used for the data transfer, or by passing in abstract methods which don't actually do anything and get optimised away. But I'm left with a load of FP controls which are still eating up resources on the FPGA target. I don't want to delete the controls, because that leads to X copies of ever-so-slightly out-of-sync versions of my test VI, which quickly becomes a maintenance nightmare. Instead, I want to be able to "easily" reconfigure my test front panel to only compile the stuff I'm currently actually interested in.

 

Part of what I would like is the ability to define areas of the FP which are enabled or disabled (and preferably also based on whether simulation is active or not - hence conditional disables for the FP). This way, when compiling, the FP elements will actually disappear and full resource savings can be made (as Xilinx is clever enough to optimise away any pointless code LV may still have instantiated in VHDL). In addition, the ability to define certain controls as being enabled only when in simulation mode would allow us to have SGL graphs and so on present when needed during debugging.

 

So, would having conditional disable options for the FP (where controls are shown greyed out when not available) be of interest to anyone? If this were an FPGA-only thing, I wouldn't shed any tears.

 

Am I the only one who would use this? Hmm. Maybe.

 

How amazing would it be to have the ability to visualise resource usage on an FPGA target using a view similar to that shown above (courtesy of WinDirStat)?

 

I only recently shaved a significant portion off my FPGA usage by finding out that I had a massively oversized FIFO in my code for almost a year without noticing. I feel that this kind of visualisation (with mouse-over showing what is actually occupying the space), with differentiation between registers, LUTs, BRAM, DSPs and so on, would greatly aid those of us trying to squeeze as much as possible out of our FPGA designs.
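
To make the idea concrete, here is a rough mock-up of such a treemap using made-up utilization numbers (assumes the third-party squarify and matplotlib packages; a real version would read the compile tools' utilization report):

```python
import matplotlib.pyplot as plt
import squarify  # pip install squarify

# Made-up per-block resource estimates; real numbers would come from
# the estimated resource utilization report.
usage = {
    "DMA FIFO (oversized!)": 420,
    "Acquisition SCTL": 180,
    "Debug interface": 150,
    "CLIP / NI-internal": 250,
}

squarify.plot(sizes=list(usage.values()), label=list(usage.keys()), alpha=0.8)
plt.axis("off")
plt.title("FPGA resource usage by block (illustrative only)")
plt.show()
```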

 

I think providing this information based on the "estimated resource utilisation", i.e. before Xilinx optimises things away, would be OK. I'm not sure whether the final resource utilisation can be mapped as accurately in this way.

 

It would also be nice to see CLIP utilisation and NI-internal utilisation at a glance as this is apparently hugely different between targets.

 

Shane.

Even though ibberger touched on the concept in the idea, I do think that most people use LabVIEW under Windows. Compiling an FPGA VI happens entirely on the PC under Windows. I noticed that during this process the compiler uses only one core: since I'm using a machine with a 4-core processor, CPU usage rarely goes above 25%.

 

My idea is to update the compiler to allow it to run multicore. The user should have the option to limit the maximum number of cores available to the compiler; this is necessary because the user may want to continue working while the compilation runs in the background.

We need a way to simply reinterpret the bits in our FPGAs. I currently have a situation where I need to change my SGL values into U32s for the sake of sending data up to the host. Currently, the only way is to make an IP node, which is just silly. We should be able to use Type Cast simply to reinterpret the bits.
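
For reference, this is the zero-cost reinterpretation being asked for, shown host-side in Python (on the FPGA it would just be a rewiring of the same 32 bits):

```python
import struct

def sgl_to_u32(x: float) -> int:
    """Reinterpret a SGL (float32) bit pattern as a U32."""
    return struct.unpack("<I", struct.pack("<f", x))[0]

def u32_to_sgl(u: int) -> float:
    """Reinterpret a U32 bit pattern as a SGL (float32)."""
    return struct.unpack("<f", struct.pack("<I", u))[0]

print(hex(sgl_to_u32(1.0)))     # 0x3f800000
print(u32_to_sgl(0x40490FDB))   # ~3.1415927
```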

With the availability of fast FlexRIO cards (such as the NI 5761) and FPGA frame grabbers (NI 1483, PXIe-1435, NI PCIe-1433), data rates of 1 GB/s are becoming commonplace. However, the FPGA Module can communicate only with 32-bit LabVIEW. Since you typically want to store more than 2 seconds of data in RAM, you would like to use 64-bit LabVIEW for your host application; at 1 GB/s, 2 s of data already approaches the ~2 GB a 32-bit process can normally address. Unfortunately, this isn't possible yet.

 

While I can imagine that a full-blown 64-bit FPGA Module add-on would be pretty difficult to build (and especially to test), I believe there is a solid middle ground at this point: coding and compiling the FPGA in the normal 32-bit LabVIEW environment, and then just using a 64-bit host application to read/write front-panel controls and the DMA buffers on the FPGA. I don't know the details, but this communication layer could be very low-hanging fruit if it's just a simple matter of recompiling a few key pieces for 64-bit operation.

 

Since the data rates passing to and from FPGAs will continue to climb, as will the prevalence of 64-bit operating systems, a 64-bit version of the FPGA Module is needed in the new feature pipeline. This should also be kept in mind as other new FPGA Module features and tools are created, as planning for 64-bit compatibility now will make the eventual transition to 64-bit much, much easier down the road.

 

User Lorn has found a brilliant tip for *DRASTICALLY* speeding up FPGA compile times under Windows on PCs with the Turbo Boost feature. What's more, it's extremely simple to implement.

 

Please let's see this in future versions of LabVIEW as standard.

 

http://forums.ni.com/t5/LabVIEW-FPGA-Idea-Exchange/Multi-core-Compiling/idc-p/2301338#M297

I just manually transferred a fairly large LabVIEW FPGA project from one target to another (7965R to 7966R). It would be nice to be able to click on the RIO target in the project and have a "Migrate to New FPGA Target" option in the context menu. This would open a dialog where you could select the new RIO target, which would then be automatically added to the project and populated with the VIs, FIFOs, derived clocks, memory blocks, etc. from the original target. The user could choose whether or not to delete the original RIO target.

 

This would also make it very easy for users to transfer sample code from the LabVIEW Example Finder to the correct FPGA target (instead of having the folder labeled "Move These Files").

Hi, how about a facility to import and export I/O labels in an FPGA/Real-Time project, as shown in the image below, instead of manually renaming each I/O?

 

[Image: IO Label.jpg]
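
As an illustration of how simple the facility could be, here is a purely hypothetical sketch of an exported label mapping being round-tripped through a CSV file (the file format and channel names are invented):

```python
import csv

labels = {"Mod1/AI0": "PressureSensor1", "Mod1/AI1": "PressureSensor2"}

# Export: one "channel,label" row per I/O.
with open("io_labels.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["channel", "label"])
    writer.writerows(labels.items())

# Import: read the mapping back for bulk renaming.
with open("io_labels.csv") as f:
    imported = {row["channel"]: row["label"] for row in csv.DictReader(f)}
print(imported)
```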

The Tick Count function in LabVIEW FPGA can represent time periods of up to 2^32 clock cycles, which (using the standard 40 MHz FPGA clock) is about 107 seconds.

Sometimes I need to handle longer time spans, and I use this example.

 

I suggest implementing a built-in 64-bit tick counter.

 

 

 

[Image: 64bittick.png]
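
Until then, the usual workaround is to extend the wrapping 32-bit tick count to 64 bits by counting rollovers; a minimal sketch (update must be called at least once per ~107 s wrap period, and the tick readings here are made up):

```python
class Tick64:
    """Extend a wrapping 32-bit tick count to 64 bits."""
    def __init__(self):
        self.high = 0    # number of 2^32 rollovers observed
        self.last = 0    # previous 32-bit reading

    def update(self, tick_u32: int) -> int:
        if tick_u32 < self.last:   # counter wrapped past 2^32 - 1
            self.high += 1
        self.last = tick_u32
        return (self.high << 32) | tick_u32

t = Tick64()
print(t.update(0xFFFFFFFE))  # 4294967294
print(t.update(0x00000001))  # rollover detected -> 4294967297
```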

Hello all.

 

It is time-consuming to have to recompile all of the LabVIEW FPGA code even when there is only a tiny change.

I understand there are sampling probes, the Desktop Execution Node, and simulation tools to reduce this time.

 

A customer of ours in Japan would like to have an incremental compile function in LabVIEW as well (please see the video below).

I agree with his opinion.

 

http://www.youtube.com/watch?v=9v50uCVdW3o

 

What do you think?

 

Eisuke Ono

Application Engineer at National Instruments Japan.