We use NIRG a lot, and I'd like to move away from what looks like random free text on the block diagram to a text block that's instantly recognizable as a requirement. Kind of related to this idea, but specific to NIRG.
Spurred by Darren's latest nugget, I think it would be an excellent addition to the feature to be able to retain wire values for all subVIs of a particular VI as well as the VI itself. Several times, I have found myself having to run a VI a couple of times to get to the point where I can satisfactorily examine the data flow. There is a JKI RCF plugin by Vishal posted here which implemented this, but native functionality would be much preferred.
I'm not sure how best it could be implemented in the UI so as not to disturb those who don't want this, and I can foresee a hairy situation arising if a particular subVI is called from a different hierarchy later. Ideally, the subVI would retain values for the last execution in the retained hierarchy, but obviously that's incorrect in the grand scheme of things. I'd love to hear other ideas on how to handle that scenario.
Multiple search result windows should be allowed, so that the results of a search can be retained when you need to run a second or third search for something else before you have finished checking the results of the previous search.
Currently the results of a search are lost as soon as you search for something else.
This has been a major nuisance for me, especially when I am investigating code and (for example) I want to check all instances of a VI (say 20 instances) but during this process I also need to search for something else.
Currently your options would be to either NOT search for something else before you have finished checking all instances of the VI you searched for, or search for that "something else" and lose the previous search results. After you have found what you were looking for in the second search, go back and search again for that VI. Then, in the results list, try to figure out which instances of that VI you had already checked!
Currently the IMAQ (Vision) display windows maintain a separate event queue which can only be accessed by polling for new events. This should be integrated into LabVIEW event handling so that all IMAQ events (e.g. Click Event, Draw Event, Move Event) can be event sources for the Event Structure.
It would be nice to have a run button at the project level. Which VI to run could be selected in the project properties.
I find that I often need to go looking for the top-level VI in projects. This is especially true with Actor Framework based projects, since the launcher/splash screen is closed each time after the software is run.
The icon for the VI in the project tree could be altered to denote top level VI or the VI that will be executed from the project run button.
I would love to be able to show a ruler on the front panel. When turned on, it should float above the objects on the panel, not unlike a cursor with lines in both planes if the panel were a graph. Unlike the grid, this would make it simple to align, for example, columns when you have multiple tables underneath each other. You could of course have this on the diagram as well, but the main use for such accuracy is on the front panel.
I often use tab controls to get more front panel real estate for applications where many parameters have to be accessible rapidly. I use the tab control to group my controls according to their functionality. It would be desirable to have the controls from each tab available as a single cluster in order to pass them to a subVI. Of course, the controls from each tab can be grouped using a cluster on the front panel, but this leads to an extra border on each tab (the cluster border). Also, if one wants to pass all parameters to subVIs, he has to pass the many clusters to the subVI, or bundle the clusters into one new cluster and then pass this cluster to the subVI. I thought of using a tab control in a cluster, but it seems that clusters have a problem dealing with tab controls.
My idea would be to have a cluster type which looks and works like a tab control, but instead, each tab would be a sub-cluster of the Tabbed cluster.
This would simplify the development of complex parameter intensive user interfaces and remove lots of control terminals in the block diagram.
When I have large projects with lots of classes, LabVIEW's edit-time environment slows down to a painful crawl. Almost every meaningful action triggers a recompile that gives me a busy cursor for 3-10 seconds. My productivity falls off a cliff and I avoid making important changes in favor of the cheap hack. (The dev environment has high viscosity.)
I often wish there were a way to turn off the compiler while I make a set of changes. I'm not interested in the errors at every single intermediate state of the code while I'm making the changes. I *know* the code will be broken after I delete some nodes or wires. Let me turn the compiler off, make my changes, and then turn it back on for error checking.
I'm sure there are lots of different solutions to address this problem. My preference would be to make compiling faster so it is unnoticeable. I doubt NI will ship me a new computer with the next release. My second choice would be to stop loading all of the class' member VIs when any single VI is loaded. I vaguely remember a discussion about that some time ago and it wasn't practical to do that. That leaves me my third choice--more control over what automatically compiles when I make edits.
It could be something as simple as Ctrl-Shift clicking the run arrow to toggle auto-compiling at the VI level. Or it could be a right-click option in the project window that controls auto-compiling for an entire library or virtual folder. Either would be okay; both would be nice.
(For that matter, it would probably be a lot cheaper if NI just shipped me a new computer...)
As of LV2010SP1, there is no good way to interact with the Right-Click menu of a control programmatically. You can create a custom right-click menu for a control at edit time, and as long as you save the menu with the control (saving it as a discrete *.rtm file doesn't work--see CAR #256160), you're in business.
But if you want to make sure multiple controls all have the same menu, you're pretty much out of luck. There is no way to programmatically load an *.rtm file for a control. The best you can do is build up the menu one item at a time using the menu primitive functions. And you have to do that in the menu activation event, because you can't get at the control's menu reference any other way.
This is pretty kludgy.
The other solution is to make those controls strict type definitions, but that isn't ideal in a lot of cases. For instance, I want to make sure all the XY Graphs in my application have the same RTM API, so I could use a single strict typedef, but that sucks if I don't want them all the same size, etc....
So here's the idea: fix runtime menus for controls.
Add an "RTM Path" property for all LVControls to allow loading custom RTM files at runtime.
Add a "Runtime Menu" property to all LVControls to grab the menu reference at runtime.
Make sure that RTM files are loaded properly in EXEs (see CAR #256160).
I would like to see an option for a Child Window. This option would be useful for keeping an open front panel inside a parent front panel, even when it is moved around. I understand LabVIEW does not support MDI, but a simple Child Window option would be helpful. Currently, I am using the Windows API to do this, but a natively supported function/option would be a big help (I wouldn't have to use VI Server, a dynamic path builder to check the application type (VI, EXE, LLB), and other useful things that need to be done).
When working with large projects with many VIs, I often find myself rewriting portions of the 'find and replace' feature that already exists, as part of my VI Analyzer overnight code-checks on the server...
It would be really neat if the existing find and replace logic available via the dialog was wrapped up into some simple API to allow for searching:
for full/partial matching text
VI path to subroutine...
If it had parameters to suppress dialog popups for passwords and not-found subVIs, that would be really helpful for running searches headlessly.
The API could return just a 1D array of paths to subVIs where instances were found, or, if it was really nice, the VI references. I'd be content if just the 'find' side of the equation was implemented... I could take any manipulations from there. I can see that there might be a dependency on VI Scripting to accomplish the search, and I'd be OK with an API that only works on a machine with a dev license... especially if the 'replace' side of things was implemented somehow...
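The shape of such a headless 'find' API can be sketched in text form. The Python below searches ordinary files for a string and returns matching paths, purely to illustrate the dialog-free, 1D-array-of-paths behavior the post asks for; it is not real VI Scripting, and every name in it is hypothetical:

```python
from pathlib import Path

def find_instances(root, needle):
    """Hypothetical 'find' API: return paths of files under `root`
    whose text contains `needle` (standing in for 'instances of a subVI').
    No dialogs, no UI -- suitable for headless/overnight runs."""
    matches = []
    for path in sorted(Path(root).rglob("*")):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # unreadable file: skip silently instead of popping a dialog
        if needle in text:
            matches.append(str(path))
    return matches  # a plain 1D array of paths, as the idea suggests
```

The 'replace' side would be a similar function that rewrites each match, again without any modal prompts.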
When developing a utility to traverse any control using VI Server and save its contents to a file (similar to the OpenG utility using Variant), it is quite challenging to find out the size of an array's data.
There are various workarounds, but all of these are relatively tedious and over-complicated.
Why don't we have an "array data size" read-only value on the property node of an array?
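For comparison, text-based environments expose this directly with no traversal or workaround. A minimal sketch using Python's stdlib array module (the 'h' typecode is roughly LabVIEW's I16) shows the two numbers such a property would report:

```python
from array import array

# A 1-D array of signed 16-bit integers, comparable to an I16 array in LabVIEW.
data = array('h', range(1000))

n_elems = len(data)                  # element count
n_bytes = len(data) * data.itemsize  # size of the underlying data, in bytes
print(n_elems, n_bytes)              # 1000 elements, 2000 bytes
```

Either figure, exposed as a read-only property, would make the save-to-file utility above straightforward.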
It computes variance = (1/W) * sum((X_i - mean)^2), where the X_i are the array elements (N in total), mean is their average, and:
W = N when weighting = Population
W = N - 1 when weighting = Sample (the default).
I am not a statistician, but I would be surprised if many in the engineering and scientific fields are using the second (Sample) definition.
If you have a large sample, the difference is minimal.
In the other cases, all bets are off.
In particular, try N = 1: with the Sample weighting, W = N - 1 = 0, so the formula divides by zero.
There is a very verbose mention of it in the new source of all knowledge, Wikipedia, at the end of the article, but it has been proposed to move it to some more specialized article. In other words, nobody cares, unless they are statisticians. In this case, they'll use anything but LabVIEW to analyze data.
So either set the default value to "Population", OR make the input required AND the doc much clearer about the consequences of the weighting choice.
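The behavior described above can be mirrored in a small sketch (an illustration of the formula in the post, not NI's implementation):

```python
def variance(x, weighting="Sample"):
    """variance = (1/W) * sum((X_i - mean)^2),
    with W = N for 'Population' and W = N - 1 for 'Sample' (the default),
    matching the weighting terminal described in the post."""
    n = len(x)
    mean = sum(x) / n
    w = n if weighting == "Population" else n - 1
    # With the default weighting and N = 1, w == 0: division by zero.
    return sum((xi - mean) ** 2 for xi in x) / w

print(variance([1.0, 2.0, 3.0], "Population"))  # 2/3
print(variance([1.0, 2.0, 3.0], "Sample"))      # 1.0
# variance([5.0]) with the default weighting raises ZeroDivisionError,
# which is the N = 1 trap the post warns about.
```

For large N the two results converge, which is why the discrepancy goes unnoticed until a small sample bites.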
Too many times, a generic "Out of Memory" error pops up without explanation, source, or traceability. Sometimes it occurs intermittently when executing the exact same process. Tracking these mystery errors down takes more time than necessary and takes away from the efficiency and gains intended by the design of the automatic memory manager within LabVIEW. After some research and help from an application engineer, it is apparent that the memory manager is not well suited for modern PCs and OSs when needing to process larger amounts of data.
LabVIEW should be able to use all the application memory offered by the OS, not just the contiguous parcels it is lucky enough to find. Not only should it be able to use fragmented virtual memory but it should also be able to exploit more than just 75% of a 1GB application segment, particularly when 16 GB is installed on the motherboard.
For example, simple arrays of I16s are sometimes denied if they are only tens of MB in length, and denied all the time if they are in the hundreds of MB. That doesn't even come close to the available memory capacity of the PC. Granted, those arrays are large compared to VIs written for simple GPIB devices twenty years ago, but the need for larger arrays is now more prevalent with high-speed data acquisition and high-resolution imaging.
Why can't the memory manager grow with the latest PC memory capacities, motherboard architectures, modern OSs, and modern instruments that can acquire and transmit data with those array sizes? Isn't it time to challenge the need for contiguous memory? Can't more intelligence be added to the memory management strategy so that large arrays aren't copied redundantly, causing "out of memory" errors? Can't a memory manager work within the fragmented virtual memory space of a Windows OS without having to reboot? Shouldn't it adapt to the OS environment instead of needing to prevent every other application from running in order to statistically gain more contiguous memory? Can't better automatic tracing and error messaging be delivered to the programmer to prevent so much wasted time?
I have been impressed by the quality of service and the detail of the online help in tiptoeing around these limitations. However, it seems time to graduate from building contraptions to avoid the problem and instead apply that effort towards solving the problem. Are there plans to issue a new automatic memory manager to optimize the potential of modern PCs and OSs?