The QControl Toolkit is a fantastic library of tools for developing reusable UI components. I think QControls are a great alternative to XControls. Not only does the QControl Toolkit provide me with a framework for developing my own QControls, but it also ships with some fully functional QControls, my favorite probably being the tree with checkboxes.
I think QControls are useful enough for all LabVIEW users that they should be part of the LabVIEW core product instead of an add-on toolkit.
The enum has a built-in right-click menu to 'Select Item', but it's worthless... it does exactly the same thing as clicking on the enum directly:
I think we should replace the 'Select Item' right-click on enums and rings with a Quick Drop-like UI that lets you filter the enum/ring contents and select an item without having to scroll a giant list:
For about a decade and a half I have consistently changed the default block diagram grid spacing to 4. Why? To match the distance of shift-arrow moves. Hold on: why not make the shift-arrow moves equal the grid spacing instead?
(note: this idea is similar to this one, but different enough that I thought it warranted a new post)
LabVIEW 2019 will include a Clean Up Front Panel feature that repositions front panel objects to match their positions on the (already wired) connector pane. I think we should also consider implementing the opposite behavior, where you can arrange your front panel the way you want your connector pane, then assign that arrangement to the (currently empty) connector pane:
I envision the gesture used to employ this feature would be a Quick Drop Keyboard Shortcut, an entry in one of the drop-down menus, or a right-click on the Connector Pane.
There are already a couple ideas to retain wire values on a hierarchy (here and here). This is a request to disable (or toggle) the 'retain wire values' option on all VIs of a hierarchy, in the same vein as 'disable breakpoints on hierarchy'. This request was spurred by an investigation into VI performance when resizing a very large 2D array of strings, wherein several memory-allocation and lookup strategies (arrays, maps, variants) had been exhausted, only to discover that a subVI (whose front panel and diagram had been closed) was set to retain wire values. Merely toggling the retain-wire-values setting off on this particular subVI improved performance by approximately one order of magnitude for this particular project.
Investigating and accurately measuring run-time LabVIEW performance can be challenging with so many debugging tools available, the impacts of which are not immediately obvious. For this reason, I would like to see a menu option that recurses through the VI hierarchy and toggles the 'retain wire values' option, in the same vein as the option that removes all breakpoints from the VI hierarchy. This option would be very helpful for run-time performance investigation and optimization.
OK, using malleable VIs is cool. Imagine being able to not only wrap code around a variable datatype, but to wrap an object around a variable datatype.
I personally have quite a few classes which do pretty much exactly the same thing but differ only in the precise datatype contained within their private cluster. No changes in the number of elements, no changes in methods, only the datatype of the specific data.
Obviously, the datatype would have to be compatible with all methods and VIs of the class which use that data field or we would have broken code.
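For comparison, this is essentially what text-based languages call generics: one class body parameterized over the contained datatype, with the compiler checking that the type supports every method. A minimal Python sketch of the shape of the request (the class name `Channel` and its methods are hypothetical, purely illustrative):

```python
from typing import Generic, TypeVar

T = TypeVar("T")  # the "variable datatype" the class is wrapped around

class Channel(Generic[T]):
    """One class definition; the contained datatype varies per instance."""

    def __init__(self, value: T) -> None:
        self._value = value  # analogous to the single field in the private cluster

    def get(self) -> T:
        return self._value

    def set(self, value: T) -> None:
        self._value = value

ints = Channel(42)       # a Channel[int]
text = Channel("hello")  # a Channel[str] -- same methods, different datatype
```

The key property is the one the idea asks for: the class is written once, and each use site binds it to a concrete datatype, so there is no duplicated class per type.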
CTRL+Click on an input is a great little tool to switch the input.
However, it only works when both inputs are wired. Often (when I or Quick Drop connected a wire to the wrong input) I want to switch the input before wiring the second one, only to find it doesn't work...
Having to connect the second wire just disrupts the flow when I'm focused on the first input. Being forced to make things worse (connect two wrong wires) before being able to make them right just feels itchy to me.
It's a minor thing, but I never understood why it would be limited to 2 wires.
There are cases where a variant constant / control needs to have a data value. This can be done by right-clicking a control / constant, selecting Data Operations -> Copy Data, then right-clicking the variant and selecting Data Operations -> Paste Data. This can be time consuming for many variants.
The idea is to allow dragging and dropping of a control or constant onto a variant, and have the data copied to the variant:
When you use Quick Drop to insert a division operation onto a wire, you almost always want the numerator to be automatically wired and the denominator to be left unwired. Yet Quick Drop always wires up the denominator.
When I print a front panel containing a graph (who ever does that?) I get a stunningly beautiful publication-quality render of my data (overlooking the poorly done vertical y-axis label). However, all the many options of directly exporting a graph image (detailed here) ultimately provide only a screen-resolution image. See the contrast in image quality below:
Printed front panel vs exported graph image.
This idea has been discussed ad infinitum on the forum (see, for example, here, here, here, here, and here), but has never been adequately resolved. (Enlarging the graph for export does not work since graph elements do not scale, so lines, points and labels end up being far too small.) High time to implement a feature that has been in demand since (at least) 2002!
Panel Actors provide a very extensible UI tool. There is simply nothing better than panel actors for building composite UIs, but we are still limited in that we can only insert UI elements into existing sub-panels or new windows. What if there was a simple function called Create SubPanel.vi with inputs of size and position, and an output of the reference to the created subpanel? What if closing the subpanel reference removed it from the front panel? Then we could create an infinitely complex UI at runtime. It should be a simple matter to build special support for this into the run-time environment. (I actually have no idea how hard it would be.)
Use case: an acquisition UI with pre-defined display elements (graphs, gauges, numeric boxes, etc, etc), all created as individual panel actors. The user asks for a new graph element display during runtime. We programmatically drop a subpanel and then spawn the requested graph actor into it. User wants it moved, and we can do that with mouse events. Same for resize. User wants it gone and we can do that too. User wants 10 elements or 10,000. We can do that with no problem. Give us this and we can create literally any UI of any complexity, fully configurable by the user.
One of the most overlooked LabVIEW VI Properties, mentioned in all of the "Good LabVIEW Coding Practices", is the one called "Documentation", where you describe what the VI does, and its Inputs and Outputs (at a minimum). [NI examples are certainly guilty of skipping this.]
I've been trying (and mostly succeeding) to ensure that every VI I write has such Documentation. Sometimes I make a typo, highlight the "bad" parts, hit "Delete" (or Backspace), then say "Oops, erased too much, let's undo that with ^Z". Except there is no Undo, or at least it isn't bound to ^Z here.
I can find no other place in LabVIEW where ^Z doesn't restore deleted text. It works in String Constants, in Labels (whether Free Labels or names of Controls/Indicators), and other places.
To encourage LabVIEW Developers to use the Documentation property, can you please allow us to "undo a boo-boo" with ^Z?
The "Convert Instance VI to Standard VI" right-click option on malleable VIs made me wonder whether this technology could be used for debugging otherwise inlined VIs (and therefore also malleable VIs).
VIs are obviously inlined without their block diagrams, so my suggestion would require two changes:
1) 'Allow debugging' must be settable in combination with inlining in VI Properties >> Execution.
2) LabVIEW IDE and RTE would have to call inlined VIs in different ways, depending on this setting. Suggestion follows:
Running in the IDE
The IDE would now create a temporary unique instance for each running inlined subVI that has debugging enabled. This would allow us to double-click on an "inlined" subVI while it's running, and have that temporary instance open up ready for probing. Settings like the 'SubVI Node Setup' dialog would now also be available for inlined VIs, if they have 'Allow debugging' on. Set 'Allow debugging' off and inlined VIs will behave as today (so no change for any existing code).
Running in the RTE
Executables are built with 'Enable debugging' either on or off (Advanced page of the Application Builder). I suggest that Application Builder builds in unique instances in place of debug-enabled inlined VIs when 'Enable debugging' is on, and bakes in ordinary inlined code when 'Enable debugging' is off. This would allow us to debug into inlined (and thus also malleable) VIs inside executables with remote debugging. Loose VIs called with the RTE would have had their inlined callees compiled with either debug capability or not, depending on their 'Allow debugging' setting as discussed above in "Running in the IDE".
What do you think - would it be nice to be able to debug inlined VIs?
With the advent of the IoT, the growing need to synchronize automation applications, and other TSN (Time-Sensitive Networking) use cases, UTC (Coordinated Universal Time) is becoming more and more problematic. Currently, there are 37 seconds not accounted for in the timestamp, which is stored in UTC. The I64 portion of the timestamp datatype, which holds the number of seconds elapsed since 00:00:00 Dec 31, 1900 GMT, simply ignores leap seconds. This is consistent with most modern programming languages and is not a flaw of LabVIEW per se, but it isn't quite good enough for where the future is heading. In fact, there is a joint IERS/IEEE working group on TSN.
Enter TAI, or International Atomic Time: TAI has the advantage of being contiguous and is based on the SI second, making it ideal for industrial automation applications. Unfortunately, a LabVIEW timestamp cannot be formatted in TAI. Entering a time of 23:59:60 31 Dec 2016, a real second that did occur, is not allowed. IERS Bulletin C publishes the current UTC-TAI offset, but implementing the lookup requires extensive code, and the time text just won't display properly in a %<>T or %^<>T (local absolute-time and UTC absolute-time containers). We need a %#<>T TAI time container format specifier. (Or soon will!)
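The "extensive code" in question is basically a table lookup: TAI runs ahead of UTC by a cumulative offset that changes whenever IERS announces a leap second. A minimal sketch of that lookup (the table below is a small, hand-picked excerpt for illustration; a real implementation would parse IERS Bulletin C for the full history):

```python
from datetime import datetime, timedelta, timezone

# Excerpt of the leap-second table: (date the offset took effect, TAI-UTC in seconds).
# Partial by design -- the authoritative list comes from IERS Bulletin C.
LEAP_TABLE = [
    (datetime(1972, 1, 1, tzinfo=timezone.utc), 10),
    (datetime(2015, 7, 1, tzinfo=timezone.utc), 36),
    (datetime(2017, 1, 1, tzinfo=timezone.utc), 37),
]

def utc_to_tai(utc: datetime) -> datetime:
    """Shift a UTC timestamp by the TAI-UTC offset in force at that instant."""
    offset = 0
    for effective, cumulative in LEAP_TABLE:
        if utc >= effective:
            offset = cumulative  # keep the latest offset at or before `utc`
    return utc + timedelta(seconds=offset)

# Since 2017-01-01, TAI is 37 seconds ahead of UTC.
print(utc_to_tai(datetime(2020, 1, 1, tzinfo=timezone.utc)))
```

Note the limitation the post complains about: because the conversion happens outside the timestamp type, the leap second itself (23:59:60) still cannot be represented or displayed, which is exactly what a native %#<>T TAI container would fix.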
<background>So I just had a silly thing happen to me: I compiled a program and deployed it to the target computer, just like I did yesterday. Then it started complaining about a missing LabVIEW 2017 Runtime... How? Why?
I reinstalled and repaired both the program and the runtime, several times, and it didn't help.
I tested another program which started just fine, so what's missing? Is it some sub-package that my custom RT-installer didn't include?
I called NI, and the support technician couldn't see anything wrong either, so with him on the phone I went to the development computer to look at some installer settings he had dug up. That's when I noticed I was in LabVIEW 2017 64-bit! The messages said nothing about that! </background>
Improve the error messaging when the runtime is missing: include the bitness that's missing. Right now it only says "Missing LabVIEW RunTime 2017"; if it had said "Missing LabVIEW RunTime 2017 (64 bit)" I would have reacted and understood immediately. As it was, it cost way too much time to track down. :/
Also, isn't it time to scrap the 32 bit version and go only 64?