
LabVIEW FPGA Idea Exchange


The Tick Count function in LabVIEW FPGA can represent time periods of up to 2^32 clock cycles, which (at the standard 40 MHz FPGA clock) is about 107 seconds before the counter rolls over.

Sometimes I need to handle longer time spans, so I use the workaround shown below.

 

I suggest implementing a built-in 64-bit tick counter.

 

 

 

64bittick.png
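
For comparison, here is a minimal C sketch of the same rollover-extension technique (a hypothetical host-side helper, not NI code); on the FPGA it corresponds to cascading a second U32 counter that increments each time the tick count wraps:

```c
#include <stdint.h>

/* Extend a wrapping 32-bit tick count to 64 bits. Must be called at
 * least once per wrap period (~107 s at 40 MHz) or rollovers are lost. */
static uint64_t extend_tick(uint32_t tick32)
{
    static uint32_t last = 0;  /* previous 32-bit reading         */
    static uint64_t high = 0;  /* accumulated 2^32 rollovers      */

    if (tick32 < last)         /* counter wrapped since last call */
        high += (uint64_t)1 << 32;
    last = tick32;
    return high | tick32;
}
```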

With the availability of fast FlexRIO cards (such as the NI 5761) and FPGA frame grabbers (NI 1483, PXIe-1435, NI PCIe-1433), data rates of 1 GB/s are becoming commonplace.  However, the FPGA Module can communicate only with 32-bit LabVIEW, whose process address space limits a host buffer to roughly 2 GB. Since you typically want to store more than two seconds of data in RAM at these rates, you would like to use 64-bit LabVIEW for your host application.  Unfortunately, this isn't possible yet.

 

While I can imagine that a full-blown 64-bit FPGA Module add-on would be pretty difficult to build (and especially to test), I believe there is a solid middle ground at this point: code and compile the FPGA in the normal 32-bit LabVIEW environment, then use a 64-bit host application only to read/write front-panel controls and the DMA buffers.  I don't know the details, but these communication protocols could be very low-hanging fruit if it's just a matter of recompiling a few key pieces for 64-bit operation.

 

Since data rates to and from FPGAs will continue to climb, as will the prevalence of 64-bit operating systems, a 64-bit version of the FPGA Module belongs in the feature pipeline.  This should also be kept in mind as other FPGA Module features and tools are created: planning for 64-bit compatibility now will make the eventual transition much, much easier down the road.

I have several FPGA projects that require significant compile time (up to 1.5 hours), and for that reason I am thankful to have my compile server running on a separate computer.

 

The issue is the seven pre-compile steps that occur before LabVIEW hands the code to the compiler. On one particular project these steps alone can take up to 35 minutes, during which I can do nothing else on that machine.

 

I would like to see much of this pre-compile time moved from the development environment to the compile server. A mechanism already exists for updating the user with the compile status, so pre-compile errors could be reported in a similar fashion.

 

The goal: get the development system back online as quickly as possible.

 

User Lorn has found a brilliant tip for *DRASTICALLY* speeding up FPGA compile times under Windows on PCs with the Turbo Boost feature. What's more, it's extremely simple to implement.

 

Please make this standard in future versions of LabVIEW.

 

http://forums.ni.com/t5/LabVIEW-FPGA-Idea-Exchange/Multi-core-Compiling/idc-p/2301338#M297

Writeable inputs to FPGA I/O nodes can be left disconnected without any warning (or broken-VI indication) from the VI in which the I/O node is used. This can cause some vigorous head-scratching if the missing connection is not immediately obvious, as in the screenshot below. For obvious reasons, FPGA controls have no connector assignment or "Recommended, Required, Optional" attribute. In that case, and to avoid playing "Where's Waldo" on the block diagram, I suggest making FPGA I/O node input connections implicitly "required"; if one is not wired, the VI would be broken. This would be the same behaviour as seen with cluster nodes.

FPGANode.png

The latest Virtex-7 FPGAs have something like 20 times the computing power of the biggest FPGA supported by LabVIEW FPGA; it would be cool to be able to get those on a FlexRIO card.

 

Other companies make FPGA boards with up to 32 GB of RAM, while the biggest FlexRIO has 512 MB; it would be cool to have FlexRIO cards with RAM in the gigabytes.

I love using enums because they can often make discrete options much clearer. Example:

enum clarity.PNG

But, at least as of 2017, the lower code will use fewer resources and propagate faster, because LabVIEW uses an 8-bit register for the enum above instead of the two bits actually needed below.

 

My solution is to allow smaller integer representations of enums (I'd use a U2 in the above case).

 

Ideally the user wouldn't even have to specify the integer size; LabVIEW could calculate the minimum at edit time and show the user what it is.
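
As a back-of-the-envelope illustration (plain C, nothing LabVIEW-specific), the minimum register width for an enum with n values is ceil(log2(n)) bits, which is exactly what the editor could compute and display:

```c
#include <stdio.h>

/* Minimum number of bits needed to represent n distinct enum values. */
static unsigned min_bits(unsigned n_values)
{
    unsigned bits = 0;
    while ((1u << bits) < n_values)
        bits++;
    return bits;
}

int main(void)
{
    printf("%u\n", min_bits(3));   /* a 3-value enum fits in 2 bits */
    printf("%u\n", min_bits(256)); /* only here does a U8 pay off   */
    return 0;
}
```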

When considering options, it's important to see both the development and deployment price. Please put sbRIO prices on the NI website so we can take them into account.

When writing LabVIEW code for an FPGA target, the most important considerations are speed and resource usage.  By using the single-cycle timed loop (SCTL), we can increase the speed of the program by allowing more than one operation to complete per clock cycle.  We also decrease resource usage by removing the flip-flops that would be required to store values between clock cycles for the operations in the SCTL.

 

However, the SCTL has limitations.  Some operations take significantly fewer resources implemented with a for loop than with a single-cycle timed loop.  With a for loop, one can auto-index a result at the loop border (if the array preallocation option is selected) to obtain a fixed-size array (valid on the FPGA).  Below is the simplest possible example:

 

AutoIndexed For Loop

 

The equivalent with a single-cycle timed loop would be:

 

SCTL

The Replace Array Subset node consumes resources proportional to the size of the array.  Depending on the operation being performed, this can increase resource usage to the point where it is more practical to use a for loop (as shown above).

 

I propose the creation of a single-cycle timed for loop.  Here is a very rough mock-up (MS Paint is not the most adequate of image editing tools, but you will get the idea):

 

SCTFL

 

This solves two problems: 1) It lets the compiler know at compile time how many times the loop will run, and it simplifies the UI by telling the user the iteration count without having to think through a stop condition.  2) It allows the more efficient creation of fixed-size arrays through an SCTL (rather than through a for loop).
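
The compile-time trip count is the key point. As a rough C analogy (conceptual only, not LabVIEW tooling): a constant loop bound lets the compiler size the output array statically and prove when the loop ends, which is the information the proposed structure would carry for the FPGA compiler:

```c
#define N 8   /* trip count known at compile time */

/* Because N is a compile-time constant, the output array can be
 * statically sized and the compiler knows exactly when the loop
 * terminates -- what a single-cycle timed for loop would express in G,
 * with one iteration executing per clock cycle. */
void scale(const int in[N], int out[N], int gain)
{
    for (int i = 0; i < N; i++)
        out[i] = in[i] * gain;
}
```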

For debugging, using FPGA VIs in interactive mode can be very valuable.  I have, to this day, not been able to find out how LV determines if a bitfile and a VI match.

 

Therefore, whenever I click the run button for a VI, I'm never quite sure whether the bitfile will match, and I often have to wait 1-5 minutes before I can resume working with LabVIEW.  This is a very high price to pay for something I end up cancelling.  I would very much like the IDE to TELL ME that the bitfile and VI don't match before it starts a new compilation and wastes my time.

 

This is as opposed to a Ctrl-click of the run arrow, which explicitly tells the IDE to compile.

The first thing you hear about programming FPGAs with LabVIEW is: use single-cycle timed loops. But if you build a state machine (while loop + enum + case structure), in many cases you cannot make the outer while loop a single-cycle loop, because not every state fits into a single cycle. You therefore have to place a single-cycle loop inside every case that has to run in one cycle, which takes up block diagram space and is cumbersome.

Hence my idea: a single-cycle case structure. It would apply the compiling mechanism of the single-cycle loop to every case that is capable of running inside a single cycle. Ideally this behaviour would be configurable, so I could decide case by case whether it is single-cycle or not, via right-click menu options like "make this case single-cycle" or "make every possible case single-cycle". Of course, the mode of each case (single-cycle or not) should be displayed somehow.

 

Regards,

Marc

It sure would be nice to take advantage of timed loops when your FPGA dictates the timing of your host code with IRQs. DAQ and XNET can create timing sources to drive a timed loop; why not LV FPGA?

 

I'm imagining a "Create Timing Source from IRQ" VI: an IRQ number input and a timing source name (string) output, along with the FPGA reference wires.

My original problem was that I had an array of data on the FPGA that needed some calculation, and the only appropriate approach was a pipeline. The pipeline is a very powerful tool on the FPGA, but I think the LabVIEW tools could be changed to make pipelining easier.

 

My old implementation of the pipeline:

old pipeline.jpg

 

Suggested implementation of a single-cycle pipeline, with shift registers in the tunnels, so that all the code runs on each of the five elements in the array:

New pipeline.jpg

Perhaps an iteration count input should also be added (as on the for loop) for cases where the amount of data is not defined by the array size.
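
For readers less familiar with pipelining, here is a rough C analogy (the per-stage operations are arbitrary placeholders): the stage variables play the role of the shift registers proposed for the tunnels, so on each iteration every stage works on the value its predecessor produced one iteration earlier:

```c
#include <stdint.h>

/* Three-stage pipeline over n input elements; s1 and s2 act as the
 * shift registers. Two extra iterations flush the pipeline. */
void pipeline(const int32_t in[], int32_t out[], int n)
{
    int32_t s1 = 0, s2 = 0;

    for (int i = 0; i < n + 2; i++) {
        if (i >= 2)
            out[i - 2] = s2 + 1;      /* stage 3: finish oldest value */
        s2 = s1 * 2;                  /* stage 2: uses previous s1    */
        s1 = (i < n) ? in[i] : 0;     /* stage 1: read next input     */
    }
}
```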

In another project I have a continuously running pipeline, which I have implemented in different ways; one simple version is shown here:

 

Before

 

old pipeline2.jpg

 

 

Here the new pipeline structure could also be used, maybe like the following:

 

New pipeline2.jpg

 

Here the loop tunnel should indicate that the input data is read continuously.

In both examples I have shown single-cycle timed loops, but maybe the pipeline structure should also be able to run with other timing (determined by the slowest frame).

I have seen the idea about the timed frame; it would help with the last example, but there would still be a need for a pipeline structure.

I think it would be useful if LabVIEW kept track of device utilization across compilations. The data could be presented as a graph, which might give the developer useful clues about how the project is approaching the limits of the FPGA. This data could also, optionally, be stored in the same folder as the bitfile so that the developer can review its history in source control.

utilization 3.PNG

 

Hi, since there can be a queue for compiling FPGA code, it seems natural to me to also have a queue for generating intermediate files.

 

I'm working with 10 build specifications per project, and generating intermediate files for my design takes approx. 3-4 minutes each. This means I need to sit by my computer for half an hour, just waiting and clicking Build on every build specification. Sometimes I work with FPGA VIs that need 7-10 minutes to build intermediate files, so this is a pain.

 

It would be great if I could just shift-select all the build specifications and have their intermediate files created automatically, one by one.

 

Can this be done?

I've searched but can't see anything similar: please add a method for setting the timeout of FPGA nodes, including the Open FPGA VI Reference and FPGA I/O nodes.

 

If you disconnect a cRIO FPGA (e.g. an NI 9148) from the network, it takes 20-30 s for the I/O node or Open FPGA VI Reference to execute. This is really bad for the user experience: if users try to exit the application during this time, it may take half a minute for it to close. It also means that in most use cases you may have to wait that long just to realise that your FPGA has disconnected (though you can obviously run an external watchdog loop to check that the node is executing in a timely manner).

 

Please allow me to configure the timeouts for these nodes, similar to the TCP/UDP or VISA nodes. Those are very similar to the FPGA nodes in how they operate (i.e. a hardware device driver that is susceptible to disconnects!), so I don't understand why timeouts have been omitted here.

 

I wouldn't mind having to set the timeout as part of opening the FPGA reference and then having it internally use the same timeout for the other I/O nodes, as follows:

2014-09-29_17-16-29.jpg

 

 

Problem:

LabVIEW's auto-indexing extends to LabVIEW FPGA, with one small caveat: you can easily auto-index into a loop, but not out of it. You will understand this better if you've already worked with LV FPGA.

In the FPGA paradigm, we enforce compile-time resource determinism by making sure all our arrays have a fixed, pre-determined size. When auto-indexing out of a loop, the size of the array may not be known, so the VI breaks with the error "Arrays must be of fixed size". Try writing the following code in LV FPGA for a better picture:

Auto-Indexing LV FPGA

 

Solution:

The current workaround is to create a fixed-size array, wire it through the loop, and replace its elements, as shown below.

 

2.PNG

 

However, an easier and much more intuitive solution would be to simply right-click the auto-indexed tunnel and set the dimension size.

Auto-Index Pop 

 

Of course, the number of elements flowing out of the loop could exceed our fixed size. We handle that case by providing the user with an "In case of overflow" option.

4.PNG
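
In C-style pseudocode, the proposed tunnel semantics could look like this sketch (the names and the exact overflow behaviour are my assumptions, not an NI design):

```c
#include <stdbool.h>

#define TUNNEL_SIZE 8          /* dimension size set on the tunnel */

int  out[TUNNEL_SIZE];         /* fixed-size output array          */
bool overflow = false;         /* the "In case of overflow" flag   */

/* Called once per loop iteration with the value to auto-index. */
void autoindex(int value, int i)
{
    if (i < TUNNEL_SIZE)
        out[i] = value;        /* same as the replace-element workaround */
    else
        overflow = true;       /* extra elements are discarded           */
}
```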

 

 

This would ease the effort of coding LV FPGA as much as it would make coding more intuitive. Vote for this idea if you think it would make your life a tad easier.

If I choose to offload multiple FPGA compilations to either a local or a cloud compile farm, can we not at least do the intermediate file generation in parallel?  Our current design takes approximately 10-15 minutes to generate intermediate files.  For five cloud compiles, this blocks my IDE for around an hour.

 

Since the file creation processes are independent of each other, why can't we do them in parallel?

The FIFO Read looks like an event-based node (like a dequeue or a Wait on Occurrence), and I think a lot of people assume it will use minimal CPU resources while waiting for data. I'm wondering if we could have an option that behaves that way. For example, could we have a fixed-size FIFO read where the FPGA triggers an interrupt to let the RT side know the data is ready?
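
For what it's worth, the waiting half of this pattern can already be sketched with the FPGA Interface C API (a sketch under my assumptions about session setup and the generated FIFO constant; error handling omitted): block on an interrupt instead of polling, then drain the DMA FIFO once the FPGA signals that data is ready:

```c
#include "NiFpga.h"

/* Wait for IRQ 0 from the FPGA, then read 512 elements from a DMA FIFO.
 * 'session' is an open NiFpga session; 'fifo' is the generated FIFO ID. */
void read_when_ready(NiFpga_Session session, uint32_t fifo)
{
    NiFpga_IrqContext ctx;
    uint32_t asserted;
    NiFpga_Bool timedOut;
    uint32_t data[512];
    size_t remaining;

    NiFpga_ReserveIrqContext(session, &ctx);

    /* Sleeps until the FPGA asserts IRQ 0 -- no CPU spent polling. */
    NiFpga_WaitOnIrqs(session, ctx, 1u << 0, NiFpga_InfiniteTimeout,
                      &asserted, &timedOut);

    /* Data is known to be ready, so a zero-timeout read succeeds. */
    NiFpga_ReadFifoU32(session, fifo, data, 512, 0, &remaining);

    NiFpga_AcknowledgeIrqs(session, asserted);
    NiFpga_UnreserveIrqContext(session, ctx);
}
```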

On the cRIO-9068, the third serial port and the second Ethernet adapter are actually implemented on the FPGA, and resources are consumed redirecting them to the real-time side. Currently developers have no access to these resources on the FPGA; they are available only from the real-time target.

 

I would like some I/O nodes for interacting with these devices on the FPGA. NI could put up some examples of how they could be used.

 

Today these resources are invisible to the developer, except for the additional compile time and the logic consumed (about 7%).

 

I have attached pictures of the FPGA design and the resources consumed by a blank VI.

 

 

Sincerely,

Jens Eriksen