It is currently difficult to select all the contents of a cluster, whether on the front panel or the block diagram, without also selecting the cluster itself. This is especially true when the contents are themselves clusters or arrays; a marquee selection often picks up only a few of their elements. I suggest adding a command to a cluster's right-click menu called "Select all cluster contents" that does exactly that. Alternatively, or in addition, this could be a shortcut such as Ctrl+Shift+A (to match the existing Ctrl+A for selecting everything on a front panel or block diagram).
An example situation where this is useful is when you place a TypeDef constant on the block diagram, and want to show the labels on each element of the cluster, to help you fill in the values.
Overlay drawings are super useful for highlighting regions of images, but sometimes a solid line is not enough. I'd like to be able to change overlay line styles between solid, dashed, and dotted. Color isn't always enough to distinguish different overlays, and I think line styles would help with that. I've considered implementing it myself, but computing segmented overlays in my own code would likely be much slower than something built in.
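The segmentation mentioned above is straightforward to sketch in text form. This is an illustrative Python version (not LabVIEW, and not any IMAQ API) of how a solid line could be split into dash segments that an overlay routine would then draw individually:

```python
import math

def dash_segments(p0, p1, dash=6.0, gap=4.0):
    """Split the line p0->p1 into dash segments of the given dash/gap
    lengths (in pixels). Returns a list of (start, end) point pairs."""
    (x0, y0), (x1, y1) = p0, p1
    dx, dy = x1 - x0, y1 - y0
    length = math.hypot(dx, dy)
    if length == 0:
        return []
    ux, uy = dx / length, dy / length   # unit direction vector
    segments, t = [], 0.0
    while t < length:
        end = min(t + dash, length)     # clip the last dash at the endpoint
        segments.append(((x0 + ux * t, y0 + uy * t),
                         (x0 + ux * end, y0 + uy * end)))
        t = end + gap                   # skip over the gap
    return segments
```

Each returned pair would become one short solid overlay line, which is exactly the per-segment overhead that makes a built-in line-style option attractive.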
My idea is very simple - I'd like to see a new size indicator available for arrays, so the user can tell which dimensions the array has. The need partially arises from this thread: in that case the array looks visually empty but actually contains a hidden row or column, and there's no way to know about it except by calling the Array Size function on it. It would also be useful for the developer to see the exact number of elements in an array on the front panel or block diagram.
I suggest this new context menu item:
The indicator might look like a common LV indicator like this one:
I know its implementation adds one extra operation in the IDE, but I think it should be fast enough to work smoothly.
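To illustrate what such an indicator could display, here is a hypothetical Python sketch (the function name and label format are my own invention, not anything LabVIEW provides) that derives a dimension-size label from a nested list, including the "visually empty" case from the linked thread:

```python
def size_label(arr):
    """Format the dimension sizes of a (possibly nested) list,
    e.g. '3x0' for a 2-D array with 3 rows and 0 columns -
    the case where an array looks empty but isn't."""
    dims = []
    while isinstance(arr, list):
        dims.append(len(arr))
        arr = arr[0] if arr else None   # descend into the first row
    return "x".join(str(d) for d in dims) or "scalar"
```

A 3-row, 0-column array would show "3x0", immediately revealing the hidden rows that currently require a call to Array Size to discover.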
Working with applications of 5000+ VIs means a lot of folders, libraries, and classes. Normally you work on a specific class for a while, or perhaps on one method of that class. But every time I have to restart LabVIEW for some reason, I have to do a lot of clicking to get back to a specific VI. If I'm lucky it is in the Recent Files list, but often I'm working on more VIs than the list holds and have to click down the same path in the project, over and over again.
Wouldn't it be sweet if you could right-click on a VI/class/lvlib/control and select "Create shortcut" and a link to the item in the tree is created in the "Shortcut" folder? Right-click on a shortcut and select "Remove shortcut" and it's gone.
I also have VIs in my project that I use often (for example a GUI showing all modified VIs in the project, where I can mark VIs and lock them in Subversion), and they would always "hang out" in the Shortcut list. So perhaps it should be possible to mark items in the shortcut list as permanent, so they follow you into all projects.
LabVIEW 2017 ships with an example project, Channel Message Handler.lvproj, which provides a starting point for a QMH using channel wires. However, it would be neat to have a more sophisticated sample project (something on par with the Continuous Measurement and Logging sample project) which uses channel wires as opposed to the traditional queue approach.
I propose that if an array is wired into a for loop, the tunnel should be auto-indexing by default (current behavior) UNLESS there is already an auto-indexing input tunnel in that for loop (new behavior).
Generally, when I wire an array into a for loop, I want an auto-indexing tunnel, so I am happy that one is created by default. However, when I wire a second array into the same for loop, it also creates an auto-indexing tunnel by default. This is usually not what I want, because the loop will stop early when one array is smaller than the other. I'm afraid this default may cause bugs for new programmers who don't realize they need to change it (in fact, this has even happened to me). Default behavior should be the "safe" behavior. The decision to have more than one auto-indexing input tunnel in a loop should be made carefully, so it shouldn't happen by default; it should be an explicit change by the user.
I know there have been many ideas posted about the current auto-indexing default behavior, but I didn't see this specific one anywhere, and I think it is an important suggestion.
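For readers more familiar with text languages: two auto-indexing input tunnels on one loop behave like Python's zip(), which silently stops at the shorter sequence - exactly the bug described above. A minimal sketch:

```python
a = [1, 2, 3, 4, 5]
b = [10, 20, 30]

# Like two auto-indexing tunnels on one for loop: the iteration count is
# min(len(a), len(b)), silently dropping the last two elements of a.
pairs = list(zip(a, b))
```

Making the second tunnel non-indexing by default would be analogous to requiring the programmer to write zip() explicitly rather than getting its truncating behavior for free.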
Now that the SSP package is delivered on USB instead of DVDs (good stuff!), I have a minor request: could the USB drive's label include a release/version name?
It might add too much cost depending on how you get the drives customized, but if that is not an issue, it would be very practical to see what the drive contains from its label (as we could with the DVDs).
On a side note: many companies have strict regulations on USB drives, and the need for such rules has increased with weaknesses like BadUSB. Perhaps NI could state something about how the USB sticks it sends out are protected, either in the delivery package or as a statement on ni.com? That way, people who need to convince their IT departments to allow the NI USB sticks will have something to show (I'm sure you will have to add some legal disclaimers there as well, but that's OK).
Many controls allow you to make scrollbars visible. When a user clicks anywhere within the control, including on the scrollbar, this counts as a Mouse Down. It would be nice if the Mouse Down event indicated whether the click was on the scrollbar or on the actual clickable area of the control, so you could take different actions based on which it was. Of course, you can usually check this manually by comparing the click coordinates against the control's boundaries, but it seems common enough that a built-in check would be easier.
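The manual workaround described above amounts to a bounds check. An illustrative Python sketch, assuming a vertical scrollbar occupying the rightmost strip of the control (the function name and the 17-pixel width are hypothetical, not LabVIEW constants):

```python
def click_on_scrollbar(click, bounds, scrollbar_width=17):
    """Rough manual check: did a Mouse Down land in the vertical scrollbar
    region of a control? bounds = (left, top, right, bottom) in panel
    coordinates; the scrollbar is assumed to be the rightmost strip."""
    x, y = click
    left, top, right, bottom = bounds
    inside = left <= x < right and top <= y < bottom
    return inside and x >= right - scrollbar_width
```

A built-in flag in the Mouse Down event data would make this per-control arithmetic (and its assumptions about scrollbar geometry) unnecessary.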
From time to time it would be handy to have an "insert again" option on the context-sensitive menu; not necessarily a history for insert (as suggested at https://forums.ni.com/t5/LabVIEW-Idea-Exchange/History-for-insert/idi-p/1524082 where the poster does mention a single-item history) nor as extensive as QuickDrop - just a simple repeat of the last inserted item. Here is an example of being able to insert another conversion identical to the one just inserted, with a minimum of mousing and no keystrokes.
One (probably superior) alternative would be to change the behavior of <ctrl><right-click>; hovering over a wire and pressing <ctrl> shows the wiring tool, and a right click could repeat the last insertion (instead of bringing up the context-sensitive menu, which is a redundant behavior and a squandered control sequence).
I like constant folding. LabVIEW says "this is going to be a constant".
There are some times that I want to see the actual values of the constant. In the middle of a large VI it can be a pain to de-rail to make a constant, then come back. It can be easier to look at an array of values for troubleshooting.
I wish there was a way to right-click and either show what constant-folding gives, or even convert it to an actual constant. This is going to change the development side, not the execution side.
While it doesn't have to be reversible, it would be nice to know I got it right, then revert it to code, in case I want to vary the generation of values at a future time.
I would like the ability to launch a root actor to test without creating a new VI for each new actor I want to test. This has the added benefit of not having to search for your launcher VI in a sea of open VIs and project items.
This could simply call Launch Root Actor with the actor that was clicked and set it to visible.
When using the waveform datatype in my applications, I noticed that the t0 field of this datatype drifts out of sync with the system clock of the machine it is running on.
The origin of this behaviour is that time synchronisation only takes place at the start of a measurement session; after that, waveform timestamps are derived from the measurement device's clock rather than the system clock. Small differences in clock accuracy cause the two clocks to drift apart.
This effect is especially noticeable in applications that run 24/7, e.g. monitoring continuous industrial processes for weeks at a time.
When this happens and data is saved for later analysis, it can be hard to synchronize the saved data with data from other sources, because the timestamps differ.
The only way to prevent this is to stop the task and start it again, but that is not always possible due to the nature of the processes being monitored.
It would be very nice to have an option in the AI Read function that automatically re-synchronizes waveform timestamps with the system clock on a periodic basis.
Functionally, this would do something like what I programmed in the attached VI.
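Since the attached VI isn't available here, a rough text sketch of the correction it presumably performs: re-anchor the block's t0 to the host clock by subtracting the block's duration from the current system time. The function name and approach are illustrative, not any DAQ API:

```python
import time

def resync_t0(samples_read, dt, now=None):
    """Illustrative re-sync of a waveform's t0: anchor the timestamp of the
    first sample in the current block to the host's system clock instead of
    the device clock. samples_read = samples in this block, dt = 1/rate."""
    if now is None:
        now = time.time()
    # The first sample of the block was acquired samples_read * dt ago.
    return now - samples_read * dt
```

Applying such a correction periodically (rather than on every read) would keep long-running acquisitions aligned with the system clock without perturbing sample-to-sample timing within a block.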