LabVIEW Idea Exchange

About LabVIEW Idea Exchange

Have a LabVIEW Idea?

  1. Browse by label or search the LabVIEW Idea Exchange to see if your idea has previously been submitted. If it has, be sure to vote for it by giving it kudos to indicate your approval!
  2. If your idea has not been submitted, click Post New Idea to submit a product idea to the LabVIEW Idea Exchange. Be sure to submit a separate post for each idea.
  3. Watch as the community gives your idea kudos and adds their input.
  4. As NI R&D considers the idea, they will change the idea status.
  5. Give kudos to other ideas that you would like to see in a future version of LabVIEW!

Do you have an idea for LabVIEW NXG?

Use the in-product feedback feature to tell us what we’re doing well and what we can improve. NI R&D monitors feedback submissions and evaluates them for upcoming LabVIEW NXG releases. Tell us what you think!


I propose that if an array is wired into a for loop, the tunnel should be auto-indexing by default (current behavior) UNLESS there is already an auto-indexing input tunnel in that for loop (new behavior).


Generally, when I wire an array into a for loop, I want an auto-indexing tunnel, so I am happy that it creates one by default. However, when I wire a second array into the same for loop, it creates another auto-indexing tunnel by default. This is usually not what I want, because the loop will stop early if one array is smaller. I'm afraid this default behavior may cause bugs for new programmers who don't realize they need to change it (in fact, this has happened even to me). The default behavior should be the "safe" behavior. Having more than one auto-indexing input tunnel in a loop is a decision that should be carefully considered, so it shouldn't happen by default; it should be made explicitly by the user.
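To illustrate the risk (hypothetical data; Python used as a textual analogy, since LabVIEW is graphical): two auto-indexing input tunnels behave like `zip`, which silently stops at the shorter input.

```python
# Two auto-indexed arrays iterate like zip(): the loop count is the
# MINIMUM of the array lengths, so extra elements are silently dropped.
voltages = [1.2, 3.4, 5.6, 7.8]   # 4 elements
timestamps = [0.0, 0.1]           # only 2 elements -- easy to miss

pairs = list(zip(voltages, timestamps))
print(pairs)   # only 2 iterations ran; the last two voltages are dropped
```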


I know there have been many ideas posted about the current auto-indexing default behavior, but I didn't see this specific one anywhere, and I think it is an important suggestion.

Now that the SSP package is delivered on USB instead of DVDs (good stuff!), I have a minor request: could the USB stick's label include a release/version name?

It might add too much cost depending on how the sticks are customized, but if that is not an issue it would be very practical to see what the USB contains from its label (as we could with the DVDs).


On a side note: many companies have strict regulations on the use of USB devices, and the need for caution has increased with vulnerabilities like BadUSB. Perhaps NI could state something about how the USB sticks they send out are protected, either in the delivery package or just as a public statement. That way, people who need to convince their IT departments to allow the NI USB sticks will have something to show (I'm sure some legal disclaimers will have to be added as well, but that's OK).

Many controls allow you to make scrollbars visible. When a user clicks anywhere within the control, including on the scrollbar, this counts as a Mouse Down. It would be nice if the Mouse Down event would indicate whether the click was on the scrollbar or on the actual clickable area of the control, so you could take different actions based on which it was. Of course, you can usually do this manually by checking the boundaries of the control against the coordinates of the click, but it seems like a common enough need that a built-in check would be easier.

Scrollbar Idea.png
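The manual workaround mentioned above (comparing the click coordinates against the control's bounds) can be sketched like this; the coordinates and the scrollbar width are hypothetical, and Python stands in for the equivalent G logic:

```python
# Hypothetical sketch: decide whether a Mouse Down landed on a vertical
# scrollbar strip along the right edge of a control's bounding box.
def click_on_scrollbar(click_x, click_y, left, top, width, height,
                       scrollbar_width=17):
    """Return True if the click falls inside the scrollbar strip."""
    inside = (left <= click_x < left + width
              and top <= click_y < top + height)
    return inside and click_x >= left + width - scrollbar_width

# A click 5 px from the right edge of a 200 px wide control:
print(click_on_scrollbar(195, 50, 0, 0, 200, 100))   # True
print(click_on_scrollbar(100, 50, 0, 0, 200, 100))   # False
```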

I would like the possibility to "negate" some comparison functions such as "Empty String/Path?". This would avoid having to add a "Not" operator.

The picture below shows a possible implementation. The dot on the input (or equally on the output) indicates the negation.

Labview Idea.png

It might be good to implement this for the following functions:

Labview Idea 2.png

When a 1-dimensional array of any type shows only a single element, LabVIEW forces a horizontal scrollbar. I couldn't find any documentation or reasoning behind this. It's really annoying and ruins UI design: vertical is the normal scrolling direction for just about everything else, and LabVIEW breaks that convention for some seemingly arbitrary reason.

The LabVIEW IDE may coerce diagram constants when an enumerated typedef changes. The problem is that the constants may occur in many places, and the coerced value is not necessarily meaningful.

In the pictures below, the "Idle" element is removed from the typedef and the IDE coerces the diagram constant to a neighboring value "triggered".


Now the program behaves as if triggered. 



When a diagram constant becomes invalid, wouldn't it be better to flag it as broken, so that the programmer is forced to handle the problem?
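The requested behavior has a textual analogy (hypothetical enum; Python used only for illustration): a reference to a deleted value should fail loudly rather than silently become a neighboring value.

```python
from enum import Enum

class State(Enum):          # the typedef after "Idle" was deleted
    TRIGGERED = 0
    DONE = 1

# The stale "Idle" constant: the lookup fails instead of being
# coerced to a neighboring value.
try:
    state = State["IDLE"]
except KeyError:
    state = None            # i.e. the constant is flagged as broken
print(state)                # the programmer is forced to handle it
```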


It would be nice if one could select which drivers to download, specifying the version.


Currently there is only a single "Device Driver" item in the web-based installer. With this option, the user must download a very large (~10 GB) driver package.


LabVIEW 2016 Platform (web-based installer).png


It would be very helpful to select only the necessary drivers, similar to the RT software installation UI, which lets users choose a version such as NI-XNET 16.0, 16.1, or 17.0.

Once in a while I encounter a case or event list so long that it forces me to grow the structure in order to keep the list readable:


It would be nice if the item list would automatically "wrap" instead.


After reading Restore High Contrast Icons I procrastinated as long as possible before installing LV2016.  When I finally did, I was disappointed by the additional space required for the palettes; all of them!  I have been using LabVIEW since 5.0 and switched to an Icon view of the palettes shortly after getting comfortable with the graphics.  Now I have to move my mouse further to get to each sub-menu and VI selection.  It's a waste of developers' time, apparently done for no good reason except to make a change; very similar to the washed-out icons.

This extra space needs to be removed or at least an option provided to set the spacing back to the condensed spacing always available.

These images show the relative size of the palettes in LV2016 vs. 2015.

Controls Palette



Functions Palette




Yes, this might seem trivial, until you think about traversing several palettes to get to your needed VI.



*Random example: if one were doing FTP development, they'd pin the menu.

** The original size of the above graphic is 1030 pixels wide; less than 800 for 2015.


Quit messing with what works and has become the standard with regard to options.  At least when that ridiculous "default" setting for icons instead of terminals was introduced, we could undo the setting in Options.

It seems that NI has hired some non-G experts to mess up the interface simply so they can enumerate all the "great" improvements they've made.  Or, was all the extra space to make sure newbies couldn't miss the folder tab, since connecting the "right arrow" on an icon to it being a sub-folder would be too difficult for children?


I like constant folding.  LabVIEW says "this is going to be a constant".


There are times when I want to see the actual values of the constant. In the middle of a large VI, it can be a pain to detour to make a constant and then come back. Looking at an array of actual values can make troubleshooting easier.


I wish there were a way to right-click and either show what constant folding produces, or even convert it to an actual constant. This would change the development side, not the execution side.


While it doesn't have to be reversible, it would be nice to verify I got it right and then revert it to code, in case I want to vary the generation of values in the future.
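For what it's worth, text-based compilers expose their folded constants: CPython, for example, folds constant expressions at compile time, and the `dis` module shows the folded value, which is roughly the "show me the result" feature this idea asks for.

```python
import dis

def seconds_per_day():
    return 24 * 60 * 60    # a constant expression

# CPython has already folded 24*60*60 into a single literal:
print(86400 in seconds_per_day.__code__.co_consts)   # True
dis.dis(seconds_per_day)   # the bytecode just loads the constant 86400
```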

There is no way to programmatically disable the Run-Time Shortcut Menu with a property node.

The only workaround is to use event structures, which is quite unhandy:

The ctrl+drag and ctrl+alt+drag shortcuts for adding and removing space on the BD are super awesome. But I think they would be even *more* awesome with the following tweak:

Limit space addition/removal to the visible diagram; i.e. don't make any changes to other cases in a case structure.


I have had many situations where I was doing cosmetic tweaks in one case of a case structure, only to find overlapping items in another case that I wasn't looking at.




Hopefully this simple animated gif demonstrates what I'm talking about.


(And yes, I know we could enable "auto-grow" on the case structure to eliminate the shrink-case-smaller-than-contents problem, but that still doesn't address all the potential issues here.)

HideLabelsInInPlace.png

It'd be great if you could hide the label, or show just the first letter (and thus the color), on the left side of an IPS (In Place Element Structure); the forced full labels often give no extra information and steal a lot of space.



The label should be hidden by default on This VI Reference, as it gives no extra information and only clutters the block diagram.


A picture is worth a thousand words.


Listbox Control.png


Basically I'd like more control over the text in listboxes.  I want the same level of control you get from a string control, where each character in a string can have custom font settings.  At the moment every line in a listbox must share the same settings.  This idea is to allow finer control over the font settings of listboxes and multicolumn listboxes, as well as implementing the property nodes that allow these settings to be controlled programmatically.

I'm developing some software for a colleague, to run on (one of) his machines.  I do my best to follow Good LabVIEW Practices, including using Version Control (SVN) to maintain my code.  Hence my Project "jumps" from computer to computer.


I recently noticed an old problem re-appear, namely occasional Front Panel and Block Diagram labels appearing in a different Font Size (18) than I've set as the default on my Office Desktop and home Laptop (15).  This was really irritating (especially having to find those wayward labels and "fix" them), forcing me to re-examine the Where and How of setting Default Fonts in LabVIEW.


This still appears to be a Dark Art, one (perhaps) involving LabVIEW.ini (and some undocumented keys, not present in the "vanilla" configuration file).  There appear to be several such INI files, with LabVIEW.ini attuned for development, and an INI file in the Data folder of a built Executable for that Executable.  But still, the values are bereft of documentation, and not everything is explained (like why some values are in quotes, what they mean, and how one sets a specific Font, e.g. Arial).
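For reference, the font keys the community has reverse-engineered in LabVIEW.ini look roughly like this (key names as commonly reported; treat them as undocumented, version-dependent, and unsupported):

```ini
; Commonly reported (undocumented) font overrides in LabVIEW.ini.
; The quoted name is the font face; the trailing number is the point size.
appFont = "Arial" 15
dialogFont = "Arial" 15
systemFont = "Arial" 15
```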


One thing that I, in particular, would like to see would be the ability to set the Font Defaults on a Project basis.  For myself, I "own" the Project, and would want it to have a consistent Font (size) on all my VIs (unless I specifically decide to Emphasize something), no matter on what machine I develop them and when.  If I have to set the Font Default on a machine-wide basis, then every time I develop on my colleague's PC, I'd have to (a) note his settings, (b) change them to mine, and (c) remember to set them back when I finish.  As such sessions are often an hour here, an hour there, this "machine-centric" setting becomes a nuisance fast.


I recently had the opportunity to discuss this with an NI Applications Engineer, who assisted me in finding (some of) the obscure references to Font Setting Tricks.  I noted that a lot of what the Community knows seems to come from "Reverse-Engineering" NI's settings, and that some Documentation and Standardization (let's get away from designating Fonts as "1", "2", or "3", which have no intrinsic meaning, please) would be a good idea.  Hence this LabVIEW Idea.


Bob Schor

Currently, having one misconnected wire breaks the entire wire tree and pressing ctrl+b wipes out everything. Poof!


In the vast majority of (my) scenarios, a broken wire is due to a small problem isolated to one branch so it does not make sense to drag the entire wire from the source to all valid destinations down with it and break everything in the process.


Here is a simplified example to illustrate the problem (see picture).



In (A) we have mostly good code. If we add a wire as shown, that wire (and VI!) must break of course because such a wire would not make any sense.


However, it does not make sense to also break the good, existing branches of the wire (the cluster in this case), but that is exactly what we get today, as shown in (B). If we press ctrl+b at this point, all broken wires will disappear and we would have to start wiring from scratch (or undo, of course). Even the context help and tip strip are misleading, because they claim that the "source is a cluster ... the sink is long ...", while that is only true for 25% of the sinks in this case!


What we should get instead is shown in part (C). Only the tiny bad wire branch should break, leaving all the good connections untouched. Pressing ctrl+b at this point should only remove the short bad wire.


The entire wire should only break in cases where nothing is OK along its entire length, e.g. if there is no source at all, or if it connects to two different data sources.


Summary: Good parts of a wire should remain intact if only some of the branches are bad. Wires that go to a destination compatible with the wire source should not break.


(Similarly, for dangling wires, the red X should be on the broken branch, not on the good source wire as it is today)


Implementation of this idea would significantly help in isolating the location of the problem. Currently, one small mistake will potentially cover the entire diagram with broken wires going in all directions and finding the actual problem is much more difficult than it should be.


"Build Path" should be growable,

something like this:




We can right-click a string object and change its state (control, indicator, constant, array, element). Unfortunately, the current behavior is (partially) inconsistent in how the display format (normal, '\' codes, password, hex) is handled. Here are some results (the list is incomplete); the symbol <> means in either direction.


Control<>indicator: The display format is retained

Array<>array constant: The display format is reset to "normal". *(Also see below)

Control|indicator<>constant: The display format is reset to "normal".


(*Note that if I drop a string constant into an empty array container, the format and element size are retained. Converting to an array using right-click should do the same!)


Whenever a conversion involves a diagram constant, the current display format is lost. I think it should be retained!


I've been trying to follow Google's Material Design guidelines.


There are many things I struggle with when building a UI in LabVIEW... I think following a set of guidelines designed by Google is a good starting point.


LabVIEW's UI capabilities should evolve to help us implement Material Design UIs.