In current versions of LabVIEW FPGA, placing a For Loop inside an SCTL produces code that cannot be compiled. This is because For Loops conventionally execute iteratively and therefore require multiple clock cycles, one to drive each new iteration.
However, I think a logical implementation of a For Loop within an SCTL would be to generate multiple parallelised instances of whatever code is inside the loop. This would greatly improve readability and flexibility by sparing the user from manually creating multiple separate instances of the same critical code on the Block Diagram.
This would require the For Loop to execute a known maximum number of times.
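To make the proposed semantics concrete, here is a minimal sketch in Python (not LabVIEW, which has no text form) of what "unrolling" a fixed-count loop into parallel instances would mean; the `invert` body and the four-element array are hypothetical examples.

```python
# Conceptual model of the proposed SCTL semantics: a For Loop with a
# compile-time-known count N is "unrolled" into N parallel copies of its
# body, all completing in one cycle, rather than iterating over N cycles.

def unrolled_sctl(body, inputs):
    """Evaluate one 'cycle': every replicated body instance fires at once.
    In hardware these would be N parallel circuit instances; here we just
    map the body over the fixed-size input array."""
    return [body(x) for x in inputs]

# Hypothetical loop body: invert one digital line
invert = lambda bit: bit ^ 1

# Fixed-size array -> known maximum iteration count at compile time
lines = [0, 1, 1, 0]
print(unrolled_sctl(invert, lines))  # -> [1, 0, 0, 1]
```

The key constraint is visible in the sketch: `len(inputs)` must be fixed at compile time, because each "iteration" becomes its own piece of hardware.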
I am currently running into an issue where I have some constant data I'm trying to write to some DO lines. I want this data to be a constant array on my block diagram, so I create the array programmatically under the "My Computer" target. I then change the indicator that is populated into a constant and move it to the FPGA target. When I right-click and set the array to a fixed size, all my data is replaced with 0s. I propose that the data not be cleared when the fixed size I set equals the array's current size.
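A small sketch in Python of the behavior I'm proposing versus what happens today (the function name and the "wipe on any resize" fallback are my own illustration, not LabVIEW's actual internals):

```python
def set_fixed_size(data, new_size):
    """Model of the proposed fix: setting the fixed size to the array's
    current length should preserve the existing values."""
    if new_size == len(data):
        return list(data)       # proposed: data survives a same-size change
    return [0] * new_size       # today's behaviour: everything reset to 0

print(set_fixed_size([7, 3, 5], 3))  # -> [7, 3, 5] (data kept)
print(set_fixed_size([7, 3, 5], 4))  # -> [0, 0, 0, 0] (data wiped)
```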
I am Muhammad Was,
an AE from NIJ.
When choosing an FPGA variable, we should have a sorted variable list in the FPGA Read/Write Control option, as we have in the shared variable list, which is always sorted from A to Z.
In the FPGA Read/Write Control option, variables added recently to the FPGA VI get a higher position in the list than older ones.
This is the voice of one of our FPGA customers.
Thanks and regards,
Make DRAM on sbRIO expandable on new and legacy sbRIOs. Add USB for connectivity, and add more I/O expansion for 0-3, 0-5 and 0-24 VDC.
sbRIO is pricey even in quantity; a 35% profit margin should be fine.
When I build a new bitfile for my project, I sometimes (shock horror) make mistakes and bring the whole house of cards crashing down.
In situations like that, I would love to have the last version of the bitfile available for re-testing. Ideally, I could specify pre- and post-build options for my compilation where I can define my own automatic renaming and archiving scheme, so that I no longer need to do painful re-compiles just to revert my code.
I am aware that this probably applies to more than FPGA, but here the compilation times are more prohibitive and I feel the need is greater.
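As a sketch of the kind of post-build hook I have in mind, here is a minimal Python script LabVIEW could call after each successful compile; the archive directory and timestamped naming scheme are hypothetical, just one possible user-defined policy.

```python
# Post-build archiving sketch: copy the freshly built bitfile to a
# timestamped name so the previous known-good version survives the next
# (possibly broken) build.
import shutil
import time
from pathlib import Path

def archive_bitfile(bitfile, archive_dir="bitfile_archive"):
    """Copy `bitfile` into `archive_dir` under a timestamped name and
    return the archived path."""
    src = Path(bitfile)
    dest_dir = Path(archive_dir)
    dest_dir.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d_%H%M%S")
    dest = dest_dir / f"{src.stem}_{stamp}{src.suffix}"
    shutil.copy2(src, dest)  # copy2 preserves the build timestamp
    return dest
```

Reverting then becomes a file copy instead of an hours-long recompile.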
In LabVIEW FPGA 2011, only the base clocks enumerated in the project and clocks derived from the base clock(s) are available in the FPGA Clock Control. I'd like LabVIEW to show the top-level clock in this control as well.
Consider designs with nested components that both CAN and CANNOT be optimized with the single-cycle timed loop. If the domain of the SCTL does not match the top-level clock domain that contains it, you seem to pay a heavy performance penalty, presumably due to the clock-crossing logic under the hood. (Thank you, by the way, for dealing with that for me!) For example, consider this VI:
The While Loop will take more ticks (a few hundred more in cases I've seen) to execute than if the Clock Control constant were set to 200 MHz (assuming you could compile). So, just set the TLC and the clock control to be the same, right? Sure, except when you change the top-level clock and, a few hours later when the compile is finished, realize you forgot (gasp) to change a clock constant, and the code no longer meets its timing requirement.
LabVIEW 2011 Behavior:
This might have already been asked, but I couldn't find any posts.
X-Nodes are huge in comparison to a subVI or most anything else on the block diagram, so let's shrink them down.
Can we remove the Read/Write box? We already have the little triangle to tell us the function/direction.
Can we use the node name instead of the generic term Data/Element? It's already there.
From there we can model it after a property node, using references instead of error lines, or we can model it after the I/O node, which is a little cramped but gets the job done. Both options retain a purple/pink bar to help identify its X-Node-iness.
I've been thinking about an experiment in which I would need to acquire single-shot spectra at 1 MHz with a single-line camera (1024 pixels, 12 bit), and then manipulate these spectra by performing this kind of operation:
[ (spectrum1 - spectrum2)/spectrum2 + (spectrum3 - spectrum4)/spectrum4 + ... + (spectrumN-1 - spectrumN)/spectrumN ] / N
As I don't have experience with FPGA, do you think it would be possible to do this kind of thing, considering that the data flow (1M × 1024 × 12 bit) will be something like 2 GB/s?
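To pin down the math, here is a host-side NumPy sketch of the pairwise operation described above (not an FPGA implementation), plus the back-of-envelope data rate; the array shapes follow the post, and the tiny test values are made up.

```python
# Host-side model of: [ (s1-s2)/s2 + (s3-s4)/s4 + ... + (sN-1 - sN)/sN ] / N
import numpy as np

def pairwise_result(spectra):
    """spectra: (N, 1024) array of spectra, N even.
    Returns the per-pixel result of the formula above."""
    odd = spectra[0::2].astype(np.float64)   # s1, s3, s5, ...
    even = spectra[1::2].astype(np.float64)  # s2, s4, s6, ...
    return ((odd - even) / even).sum(axis=0) / len(spectra)

# Back-of-envelope data rate for the raw acquisition:
bits_per_s = 1_000_000 * 1024 * 12
print(bits_per_s / 8 / 1e9)  # 1.536 GB/s, roughly the ~2 GB/s quoted
```

Note the per-pair division: on an FPGA, that divide (or a reciprocal lookup) at line rate is likely the hardest part, much more so than the subtractions and the accumulation.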
Similar to the overflow-status functionality (which I misused in the picture below), it would be useful to make it possible to include handshaking bit(s) in the signal as well. It is true that this could be implemented using a cluster. However, the additional cluster level would imply a multiplication of type definitions; moreover, built-in handshaking-bit functionality could be included directly in the high-speed math functions and registers, so that a separate 'output valid' terminal would not be necessary.
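A tiny Python model of the idea, assuming a multiply primitive as the example (the `Sample` pairing and the propagation rule are my illustration of how the built-in behavior might work, not an existing API):

```python
# Each sample travels with its own 'valid' handshaking bit, so math
# primitives can propagate validity without a separate terminal.
from collections import namedtuple

Sample = namedtuple("Sample", ["value", "valid"])

def hs_multiply(a, b):
    """Multiply with built-in handshaking: the result is valid only
    when both inputs are valid."""
    return Sample(a.value * b.value, a.valid and b.valid)

print(hs_multiply(Sample(3, True), Sample(4, True)))   # valid result
print(hs_multiply(Sample(3, True), Sample(4, False)))  # invalid result
```

Because the bit rides along with the data, no extra cluster type definitions are needed and the validity logic cannot be forgotten at a call site.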
In real-time engineering, the clock rate is usually a parameter needed in calculations. Therefore it would be useful to be able to access that rate as an integer (or float). It is clear, especially in FPGA programming, that the clock (and its rate) is not a variable that can be chosen by the application user. This idea is rather about code development, in order to avoid bugs. In the current situation I am forced to define a separate constant copying the clock rate; in the course of later code changes I risk forgetting to change that constant when changing the clock.
For the same reason it would be useful to be able to access a clock reference of an FPGA VI (and with it its rate) from the calling VI.
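A plain-Python illustration of the bug class this would prevent: today the rate lives in a hand-typed constant that can drift out of sync with the actual clock, whereas the proposed node would supply it from a single source of truth (the 40 MHz value and `ticks_for` helper below are hypothetical).

```python
# With the proposed feature, CLOCK_HZ would be read from the clock itself
# rather than duplicated by hand; every derived timing value then updates
# automatically when the clock changes.
CLOCK_HZ = 40_000_000  # single source of truth for the clock rate

def ticks_for(seconds, clock_hz=CLOCK_HZ):
    """Convert a delay in seconds to a tick count at the given clock rate."""
    return round(seconds * clock_hz)

print(ticks_for(1e-3))  # 1 ms at 40 MHz -> 40000 ticks
```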
Having recently attempted to get started with simulation for debugging my FPGA code, I found out that the built-in LabVIEW support for native LabVIEW testbenches using a simulated FPGA apparently works only with ModelSim SE.
This is a shame since ISim is included with the FPGA toolkit.
If feasible, expanding the functionality to allow co-simulation with ISim would be a rather splendid idea indeed!
Currently Measurement & Automation Explorer (MAX) only shows the following information for a typical R-Series card:
It would be helpful to add a "Device Pinouts" tab that shows all the pin assignments for your Analog and Digital I/O:
Sometimes I just want to compile a lot of bitfiles (be it for a release or a debugging test case), and I have to right-click each and every build spec, choose "Build", then wait about 10 seconds and do the same again for the next build spec.
How about being able to select multiple build specs and then select "Build Selection" and have time to go for lunch while the PC queues up all the compilations?
I don't use a compile farm and everything is done locally but at least the queuing could be automated.
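The queued behaviour I'm after, sketched in Python; the `compile_spec` callable stands in for some way of kicking off a single build, which is hypothetical here (there is no such hook in LabVIEW 2011, which is exactly the point of the idea).

```python
# Batch-build sketch: queue every selected build spec and run the
# compilations back to back, so one click is enough to walk away.
import queue

def build_all(build_specs, compile_spec):
    """Run `compile_spec` on each spec in order; returns (spec, result)
    pairs once every queued compilation has finished."""
    q = queue.Queue()
    for spec in build_specs:
        q.put(spec)
    results = []
    while not q.empty():
        spec = q.get()
        results.append((spec, compile_spec(spec)))  # blocks until compiled
    return results

# Fake 'compiler' for illustration only
fake = lambda spec: f"{spec}.lvbitx"
print(build_all(["TopLevel_Debug", "TopLevel_Release"], fake))
```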
When working with CLIP-generated clocks, we need good UCF files for proper compilation control (something we now have, after WEEKS of debugging).
At the moment the UCF files MUST be in Users\Public\Documents\National Instruments\FlexRIO\IOModules for the code to work, even though all other CLIP-relevant files can be located anywhere.
Please let us use the UCF file located in the same directory as the CLIP we're using; otherwise we'll end up with cross-linking nightmares between users who don't have the right version in their local folder.
It would be nice if we could use the Atlys board with the FPGA module. As far as I can see, not a lot of FPGA boards are supported besides the RIO FPGA boards (only the Spartan-3E XUP). Since these are outdated, could drivers be developed for some low-cost non-NI board? Preferably the Atlys?
I just want to know: can I acquire 10,000 samples per second or more (when the program runs in real time) when taking data from a sensor through the cRIO?
I didn't find something related to this, so I hope it's a new idea.
I frequently use VI scripting in LabVIEW; it is very useful, for example, to generate template VIs.
But this feature doesn't exist under FPGA (I mean, some code is specific to this module), and I think it would be great to be able to generate FPGA VIs programmatically. For example, in my job we do FPGA programming for magnet security. Even if the global structure is the same for all magnets, we have to adapt a lot of things depending on the type of magnet and the instrumentation available. The idea would be to create for ourselves a kind of Magnet Safety Editor, based on VI scripting specific to FPGA, to allow non-programmers (but magnet specialists) to generate an adapted security system themselves.
It's just an example, but when we see the power of VI scripting for LabVIEW, it would give great results if it were extended to FPGA, and even to the Real-Time module, why not?
The "FPGA I/O Properties" that can be set for an I/O point under an FPGA target consist of just the name for the I/O point.
On the other hand, the "Shared Variable Properties" that can be set for a similar I/O point under a cRIO chassis are much more extensive and include a description field.
I'd like to see a similar description field included/available for the FPGA I/O points. As this is information maintained as part of the project, the reduced functionality normally associated with an FPGA should not be an issue.
As long as I'm wishing, it would also be nice to be able to export/import names/descriptions/properties via something akin to the "Multiple Variable Editor" for both "FPGA I/O Properties" and "Shared Variable Properties", and if an I/O module is moved from the FPGA level to the cRIO level or vice versa, to allow us to transfer or import the relevant properties from one level to the other.
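A sketch of the export/import round trip I'm wishing for, using a made-up two-column CSV layout (name, description); the real tool would of course read and write the project file rather than plain CSV.

```python
# Hypothetical export/import of I/O point names and descriptions as CSV.
import csv
import io

def export_io(points):
    """points: list of (name, description) tuples -> CSV text."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["name", "description"])
    writer.writerows(points)
    return buf.getvalue()

def import_io(text):
    """CSV text -> list of (name, description), skipping the header row."""
    rows = list(csv.reader(io.StringIO(text)))
    return [tuple(r) for r in rows[1:]]

# Example round trip with made-up I/O points
pins = [("Mod1/AI0", "Tank level sensor"), ("Mod1/DO3", "Pump relay")]
print(import_io(export_io(pins)) == pins)  # -> True
```

The same file could then be re-imported at either the FPGA level or the cRIO level, which is exactly the move/transfer case described above.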