LabVIEW FPGA Idea Exchange


In LabVIEW it is not allowed to exit an SCTL running in an external clock domain. LabVIEW claims this could lead to unstable code due to glitches etc. on the external clock.

I propose leaving the programmer the option to take that risk, which is not always present. It can lead to more understandable code.

For example, I have code where I read data from an NI 5752 ADC module and store it in block RAM (32 ADC channels, 32 block RAMs). Reading from that ADC implies acquiring the data in the external ADC clock domain, so the writes to memory are in that clock domain as well.

I also needed to implement a function to reset the memory, which means writing to that memory. That has to be done in an SCTL in the same external clock domain.

However, this reset function (subVI) cannot be inserted in the normal "enable chain" of the main program, since the SCTL cannot be terminated and the memory-reset subVI never terminates.

 

So I had to resort to an ugly trick to get this done. In the main program I create a dead branch that performs the reset. That subVI never stops, but once the reset has been done it sends a signal via a FIFO to the "wait reset" subVI in the main enable chain. The "wait reset" subVI runs in the default clock domain and can exit its wait loop once the reset signal has been received.

Capture.PNG

However, this trick is not easy to understand from reading the program. It would have been easier if the reset function (the externally clocked loop) could exit by itself and be inserted into the main enable chain. That would have been more logical.
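For readers who want the shape of this workaround in text form, here is a minimal, purely conceptual C model of the pattern. The names, the memory depth, and the single-flag stand-in for the FIFO are all illustrative; this is not generated from the actual LabVIEW code.

```c
/* Conceptual C model of the workaround above: a never-terminating
 * "reset" loop in the external ADC clock domain clears the block RAM,
 * then signals completion through a FIFO so that a "wait reset" loop
 * in the default clock domain can exit. */
#include <stdbool.h>
#include <stdio.h>

#define MEM_DEPTH 1024

static int  block_ram[MEM_DEPTH];
static bool reset_done_fifo = false;  /* stands in for a 1-element FIFO */

/* One tick of the SCTL in the external clock domain.  It never
 * terminates; after the last word is cleared it just idles. */
static void reset_loop_tick(int *addr)
{
    if (*addr < MEM_DEPTH) {
        block_ram[*addr] = 0;         /* clear one word per clock  */
        (*addr)++;
        if (*addr == MEM_DEPTH)
            reset_done_fifo = true;   /* tell the default domain   */
    }
}

/* The "wait reset" subVI in the default clock domain: this loop CAN
 * exit, so it fits into the normal enable chain. */
int main(void)
{
    int addr = 0;
    while (!reset_done_fifo)
        reset_loop_tick(&addr);       /* simulate the parallel loop */
    printf("reset done, enable chain continues\n");
    return 0;
}
```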

 

 

On the cRIO-9068, the third serial port and the second Ethernet adapter are actually implemented on the FPGA, and resources are consumed to redirect them to the real-time side. Currently developers have no access to these resources on the FPGA, only from the real-time side.

 

I would like some I/O Nodes for interacting with these devices from the FPGA. NI could put up some examples of how they could be used.

 

Today these resources are invisible to the developer, except for the additional compile time and the resources consumed (about 7%).

 

I attached pictures of the FPGA design and the resources consumed for a blank VI.

 

 

Sincerely,

Jens Eriksen

 

 

P2P is a very useful technology for sharing data between NI targets.

 

Could this be provided for GPUs?

Problem:

LabVIEW's auto-indexing extends to LabVIEW FPGA, with one small caveat: you can easily auto-index into a loop, but not out of it. You will understand this better if you've already worked with LV FPGA.

In the FPGA paradigm, we enforce compile-time resource determinism by making sure all our arrays are of a fixed, pre-determined size. When auto-indexing out of a loop, the size of the resulting array may not be known at compile time, so the VI breaks with the error "Arrays must be of fixed size". Try to write the following code in LV FPGA for a better picture:

Auto-Indexing LV FPGA

 

Solution:

The current workaround is to use a fixed-size array that we pass into and out of the loop, replacing its elements as shown below.

 

2.PNG
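In text form, the workaround boils down to the following pattern (a minimal C sketch; the size and values are illustrative):

```c
/* The workaround in miniature: pre-allocate a fixed-size array and
 * replace one element per iteration instead of auto-indexing out. */
#include <stdio.h>

#define N 8                       /* fixed, compile-time size */

int main(void)
{
    int result[N] = {0};          /* "Initialize Array" before the loop */
    for (int i = 0; i < N; i++)
        result[i] = i * i;        /* "Replace Array Subset" per pass    */
    for (int i = 0; i < N; i++)
        printf("%d ", result[i]);
    printf("\n");
    return 0;
}
```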

 

However, an easier and much more intuitive solution would be to simply right-click the auto-indexed tunnel and set the dimension size.

Auto-Index Pop 

 

This does mean that the number of elements flowing out of the loop could exceed our fixed size. We handle that case by providing the user with an "In case of overflow" option.

4.PNG
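Here is a sketch of how the proposed tunnel might behave on overflow. The semantics are speculative; "keep the first N elements and drop the rest" is just one of the options the dialog could offer.

```c
/* Possible "In case of overflow" behavior: the tunnel keeps the first
 * N elements and drops (or flags) the rest. */
#include <stdbool.h>
#include <stdio.h>

#define N 4                       /* dimension size set on the tunnel */

int main(void)
{
    int  out[N] = {0};
    bool overflow = false;
    int  produced = 7;            /* loop runs more times than N      */

    for (int i = 0; i < produced; i++) {
        if (i < N)
            out[i] = i * 10;      /* element fits: store it           */
        else
            overflow = true;      /* element dropped: raise the flag  */
    }
    printf("overflow=%d, out=%d %d %d %d\n",
           overflow, out[0], out[1], out[2], out[3]);
    return 0;
}
```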

 

 

This would ease the effort of coding LV FPGA as much as it would improve intuitiveness. Vote for this idea if you think it would make your life a tad easier.

Hello,

 

In LabVIEW 2010, the implementation of inline VIs has been improved, but this feature is not available in LabVIEW FPGA.

 

When you are hunting for ticks/space you have to replace VI calls with their contents ... and then the FPGA VIs rapidly become unreadable.

 

I think inline VIs could be very interesting in FPGA because of the ...

 

 

  • Ability to create user-friendly / updatable / readable / clear FPGA diagrams
  • Optimized time/space needed to call a real VI
By default, FPGA VIs should all be "inline" VIs!
Manu.

 

We have a "group" of occurrences.  We can't identity individual occurrencs easily because you can't store them in an array (where we would use indexes to identify).  When using a cluster they all get the same name (labels aren't used!) which is bug-prone.  The only workaround is to create a control cluster, and that's not a clean solution.

Like the Formula Node or the MathScript Node, why not have a node that supports Verilog and VHDL? Yeah, yeah, I know coding it will take a hell of a lot more time compared to what can be done sweetly in LV (so don't compare), but at times Verilog support would have its advantages.

Currently, when you put a fixed-point number into a case structure, it uses the next largest integer and you get a red coercion dot:

Allow fxd point integers in case structures.PNG

This is unfortunate because you then have to have a default case. It would be nice if case structures could take the fixed-point type, since there isn't any of the ambiguity that exists with floating point. Using a smaller number for the selector might also enable an optimization.
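To see why a fixed-point selector is unambiguous: an FXP value is just an integer with an implied scale, so exact matching on its raw bits is well defined, unlike floating point. A small C illustration (the <4,2> format and the values are made up):

```c
/* FXP values are integers with an implied scale, so matching on the
 * raw bits is exact.  Format and values below are illustrative. */
#include <stdint.h>
#include <stdio.h>

/* Unsigned FXP <4,2>: 4 total bits, 2 integer bits.
 * raw = value * 4, so 0.25 -> 1 and 1.50 -> 6. */
typedef uint8_t fxp4_2;

static const char *classify(fxp4_2 x)
{
    switch (x) {                 /* exact match on the raw bits */
    case 1:  return "0.25";
    case 6:  return "1.50";
    default: return "other";
    }
}

int main(void)
{
    fxp4_2 sample = (fxp4_2)(1.5 * 4);   /* encode 1.50 */
    printf("%s\n", classify(sample));    /* prints "1.50" */
    return 0;
}
```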

 

 

HERE I detailed a problem I currently have with Registers between two clock domains that are closely related (phase-locked).

 

It turns out that there is handshaking going on which, essentially, is not really necessary.  It would be nice to have the option of something similar to a Register for clock domains where we explicitly know the relationship between the clocks, and which therefore does not require handshaking.

 

Shane.

At present, if you are trying to simulate your FPGA's actual logic using a custom VI like this:

1234.png

Then you know that your custom VI test bench has only one case for methods (a single general method case, not a case for each available method). There are ways to get around this problem; for example, this example emulates a node and suggests using a different timeout value for Wait on Rising Edge, Wait on Falling Edge, etc., but one still has to write the code for the different methods.

 

My suggestion is as simple as this: make test benches easier to use by handling all of the methods and properties with a set behavior. That way, all one has to set up when creating a test bench is the input and output on each I/O read/write line. At the very least, it would be nice to have the ability to read which method is being called, so the appropriate code can be set up without complicated case structures.
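The heart of the request, sketched in C: instead of encoding methods as magic timeout values, the test bench could read which method was invoked and dispatch on it. The method list and handler bodies below are hypothetical, not an existing LabVIEW API.

```c
/* Hypothetical test-bench dispatch: read WHICH method the FPGA VI
 * invoked and branch on it, instead of overloading a timeout value. */
#include <stdio.h>

typedef enum {
    METHOD_WAIT_RISING_EDGE,
    METHOD_WAIT_FALLING_EDGE,
    METHOD_READ,
    METHOD_WRITE
} io_method_t;

static void handle_method(io_method_t m)
{
    switch (m) {
    case METHOD_WAIT_RISING_EDGE:  puts("emulate rising edge");  break;
    case METHOD_WAIT_FALLING_EDGE: puts("emulate falling edge"); break;
    case METHOD_READ:              puts("drive read data");      break;
    case METHOD_WRITE:             puts("capture written data"); break;
    }
}

int main(void)
{
    /* a recorded sequence of calls, as a test bench might observe */
    io_method_t trace[] = { METHOD_WAIT_RISING_EDGE, METHOD_READ };
    for (int i = 0; i < 2; i++)
        handle_method(trace[i]);
    return 0;
}
```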

Hello,

 

It would be nice to be able to get some general information, on Windows, about an FPGA VI using its reference.

 

For example, it would be interesting to get ...

 

 

  • The main cycle loop frequency
  • A version ID 
  • The CRC of the bitfile
  • ...
This kind of information could be useful in the case of dynamic bitfile downloads ...
Or when you try to connect to a running target, you could dynamically ask for this information ...
I think this kind of information is already known to LabVIEW FPGA ... only the property nodes are missing.
Thanks.

 

A very useful feature of the FPGA Butterworth filter is the ability to use the same instance multiple times (multiple channels), saving FPGA resources.

 

However, this is not possible for 32-bit-wide filters, only for 16-bit filters.

 

It would be useful if the 32-bit filters could go multichannel too, at least two channels.

 

 

The CORDIC High throughput functions available in LabVIEW are capable of running at high frequencies, thus allowing FPGA code to (for example) multiplex multiple demodulators without exploding device utilisation.

 

Unfortunately, the option to apply a Gain correction to the results does not pipeline the actual multiplication, thus artificially limiting the available speed of the CORDIC algorithms.

 

In my code I always deactivate the gain compensation and do it "manually", allowing the code to compile at much higher frequencies and to utilise the FPGA device more efficiently.

 

It would be great if it were possible to also pipeline this multiplication as part of the CORDIC High-throughput node instead of being forced to implement the multiplication separately.
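For reference, this is the shape of the "manual" gain correction: the multiply is registered across pipeline stages so the critical path stays short. A C model of one clock tick follows; the Q1.15 gain constant and the two-stage depth are illustrative, not the node's actual implementation.

```c
/* C model of a manually pipelined CORDIC gain correction: the scaling
 * multiply is registered so it no longer limits the loop clock rate. */
#include <stdint.h>
#include <stdio.h>

#define GAIN_Q15 19898   /* ~0.60725, i.e. 1/CORDIC-gain, in Q1.15 */

typedef struct { int32_t in_reg, out_reg; } gain_pipe_t;

/* One SCTL tick: two registered stages give two cycles of latency,
 * but each stage's combinational path is short. */
static int32_t gain_pipe_tick(gain_pipe_t *p, int32_t cordic_out)
{
    int32_t result = p->out_reg;                       /* stage 2 out */
    p->out_reg = (int32_t)(((int64_t)p->in_reg * GAIN_Q15) >> 15);
    p->in_reg  = cordic_out;                           /* stage 1 in  */
    return result;
}

int main(void)
{
    gain_pipe_t p = {0, 0};
    int32_t samples[] = {32768, 16384, 0, 0};          /* + flush     */
    for (int i = 0; i < 4; i++)
        printf("%d\n", gain_pipe_tick(&p, samples[i]));
    return 0;
}
```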

In this thread, I learned that you can't change the sbRIO analog IO to Raw. I would like that functionality to help reduce FPGA resource usage.

raw sbrio2.png

Well, it seems that persistent/non-volatile memory is not available on FPGA targets for scenarios that involve power cycling.

 

Apparently, the recommended approach is to transfer the data to the host and store it to disk.

 

This is a bit problematic in that the choice is either to write data to disk at a high rate or to accept that the most recent data might not be reloaded on restart.  For instance, an operator might expect to know exactly how many revolutions a shaft has undergone across power cycles of the FPGA target. A guarantee that this information is as up to date as possible probably can't be met (maybe not even when transferring data to disk at a high rate).

 

So I'd like to request this. 

As part of my quest to solve problems arising from over-cautious Register transfers HERE, I found a solution which WOULD have worked if I were able to force multiple clocks derived from the same source to have synchronised start points (so that the iteration counters of the loops are known relative to each other). It seems that clocks derived from the same base clock do not necessarily all start with iteration zero at the same time.

 

My suggestion would be to either

  • Give some option to force such loops to have synchronous starts (also when using external clocks) -or-
  • Allow loops with external clocks to terminate so that we can put together our own synchronisation method

Shane

It would be nice if the sbRIO targets supported the DMA Acquire Write Region (and Acquire Read Region, although maybe that's already supported - I only tried Write Region since that's what I need for my application). Failing that, perhaps the documentation could mention that those methods are not supported on all targets?

When writing LabVIEW code for an FPGA target, the most important considerations are speed and resource usage.  By using the single-cycle timed loop (SCTL), we can increase the speed of the program by allowing more than one operation to complete per clock cycle.  We also decrease resource usage by removing the flip-flops that would be required to store values between clock cycles for the operations in the SCTL.

 

However, there are limitations to the SCTL.  For some operations, it takes significantly fewer resources to implement something using a for loop rather than a single-cycle timed loop.  With a for loop, one can auto-index a result at the border of the for loop to obtain a fixed-size array (valid on the FPGA).  Below is the simplest possible example:

 

AutoIndexed For Loop

 

The equivalent with a single-cycle timed loop would be:

 

SCTL

The Replace Array Subset function consumes resources proportional to the size of the array.  Depending on the operation being performed, this can increase resource usage to the point where it is more practical to use a for loop (as shown above).
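The difference between the two diagrams, restated as C (illustrative only): in hardware terms, the for-loop version unrolls into fixed wiring, while the SCTL version needs an addressed write, i.e. a mux across all N elements, on every cycle.

```c
/* C restatement of the two diagrams above.  N is illustrative. */
#include <stdio.h>

#define N 16

/* For-loop / auto-indexing style: the compiler sees every index at
 * compile time, so each element gets fixed wiring. */
static void fill_for_loop(int out[N])
{
    for (int i = 0; i < N; i++)
        out[i] = i;
}

/* SCTL style: one call == one clock cycle.  The index lives in a
 * register, so the write out[*i] implies an N-way address decoder,
 * which is where the extra resources go. */
static void fill_sctl_tick(int out[N], int *i)
{
    if (*i < N) {
        out[*i] = *i;            /* "Replace Array Subset" per tick */
        (*i)++;
    }
}

int main(void)
{
    int a[N], b[N] = {0};
    int i = 0;
    fill_for_loop(a);
    while (i < N)
        fill_sctl_tick(b, &i);   /* N simulated clock cycles */
    printf("a[5]=%d b[5]=%d\n", a[5], b[5]);
    return 0;
}
```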

 

I propose the creation of a single-cycle timed for loop.  Here is a very rough mock-up (MS Paint is not the most adequate of image processing tools... you will get the idea):

 

SCTFL

 

This solves two problems: 1) It allows the compiler to know at compile time how many times the loop will run.  It also simplifies the UI by letting the user see how many times the loop will run without having to reason through a condition.  2) It allows for the more efficient creation of fixed-size arrays through an SCTL (rather than through a for loop).

Sometimes you might not care about the outputs under certain input conditions. "Not caring" can lead to significant improvements in optimization, and thus resource utilization, but there's currently no way to tell LabVIEW that you "don't care". I propose we create new data types that support "don't care". It should start with Booleans; but when you convert a Boolean array to an integer, if one of the Booleans is a "don't care", the numeric output also becomes a "don't care", so we need that data type as well.

 

Here's what a "don't care" might look like if the user didn't care about the output if the input was 2:

dont care.png
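Restating the picture in C to show where the savings come from. The concrete output values, and the assumption that the selector only ever takes the values 0, 1, or 2, are illustrative.

```c
/* Why "don't care" saves logic.  Suppose the selector only ever takes
 * the values 0, 1, 2, and the user does not care what input 2 yields. */
#include <stdio.h>

/* Today: every case, including the default, must pick a concrete
 * value, so the circuit must distinguish all three inputs. */
static int f_today(unsigned sel)
{
    switch (sel) {
    case 0:  return 7;
    case 1:  return 4;
    default: return 0;   /* forced choice for the don't-care input */
    }
}

/* With a "don't care" type, the compiler is free to fold input 2 into
 * whichever branch gives the smallest circuit, e.g. reuse case 1: */
static int f_optimized(unsigned sel)
{
    return (sel == 0) ? 7 : 4;   /* one comparator instead of two */
}

int main(void)
{
    for (unsigned s = 0; s < 3; s++)
        printf("sel=%u today=%d optimized=%d\n",
               s, f_today(s), f_optimized(s));
    return 0;
}
```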

The error cluster has a string, "source", that identifies where the error occurred. In FPGA code a string is only accepted (no broken run arrow) inside an error cluster. I guess it is implemented this way to maintain code compatibility when you move code to another kind of target. The problem is that it doesn't matter what you write in these strings in an FPGA environment; they are ignored.

Some people use the same error code for a whole class of errors, changing only the source to identify where the error occurred. I once received software, written by somebody else, that used error code 5000 for all user-defined errors, changing only the "source" string. That gave me no clue where the error was happening.

Since only the "code" field of an error cluster is useful on an FPGA target, I propose two solutions:

1) A compile-time warning when a string in an error cluster is not empty;

2) The FPGA compiler converts the ASCII characters of the error cluster's "source" string into a fixed-size array of bytes [U8]. This array would be converted back into a string on a target that can handle strings. This is very common when you read an error cluster indicator of an FPGA VI from an RT VI. This solution has a little overhead, but it maintains 100% compatibility.

I like the second solution a bit more. A limited number of characters should be allowed in order to save memory. One way to handle that is a configurable option that determines the maximum number of characters allowed.
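A sketch of what solution 2 could look like on the host side of the boundary. The struct layout, the 32-character maximum, and the helper name are all hypothetical; only the idea (fixed-size U8 array standing in for the string) comes from the proposal above.

```c
/* Hypothetical packing for solution 2: the "source" string becomes a
 * fixed-size U8 array on the FPGA side and is turned back into a
 * string by the host. */
#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define MAX_SRC_LEN 32            /* the proposed configurable maximum */

typedef struct {
    uint8_t status;               /* error flag                         */
    int32_t code;                 /* the only field the FPGA acts on    */
    uint8_t source[MAX_SRC_LEN];  /* fixed-size stand-in for the string */
} fpga_error_t;

static void set_error(fpga_error_t *e, int32_t code, const char *src)
{
    size_t n = strlen(src);
    if (n > MAX_SRC_LEN - 1)
        n = MAX_SRC_LEN - 1;      /* truncate to bound memory use       */
    memset(e->source, 0, MAX_SRC_LEN);
    memcpy(e->source, src, n);
    e->status = 1;
    e->code   = code;
}

int main(void)
{
    fpga_error_t e;
    set_error(&e, 5000, "AI read loop");  /* same code, distinct source */
    printf("code %d, source \"%s\"\n", (int)e.code, (const char *)e.source);
    return 0;
}
```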