LabVIEW Idea Exchange

About LabVIEW Idea Exchange

Have a LabVIEW Idea?

  1. Browse by label or search in the LabVIEW Idea Exchange to see if your idea has previously been submitted. If your idea already exists, be sure to vote for it by giving it kudos to indicate your approval!
  2. If your idea has not been submitted, click Post New Idea to submit a product idea to the LabVIEW Idea Exchange. Be sure to submit a separate post for each idea.
  3. Watch as the community gives your idea kudos and adds their input.
  4. As NI R&D considers the idea, they will change the idea status.
  5. Give kudos to other ideas that you would like to see in a future version of LabVIEW!


Hi!

Maybe this has already been requested elsewhere and I'm missing it...

but it would be useful to have a Wait (ms) with connectors for error in and out.

This can help keep the BD clean...
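
For comparison, here is a minimal textual sketch of how such a node could behave (illustrative only; the function name and the choice to skip the wait when an error comes in are my assumptions, not part of the proposal):

```python
import time

def wait_ms(milliseconds, error_in=None):
    # Passing the error cluster through lets the wait be sequenced purely by
    # dataflow, instead of being wrapped in a flat sequence structure.
    if error_in is None:
        time.sleep(milliseconds / 1000.0)  # skip the wait if an upstream error arrived
    return error_in  # error out = error in, unchanged
```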

Marco


The "Probe Display" pane of most default probes should have indicators that are set to "Fit to Pane". The worst offenders, in my opinion, are Strings and Variants...I often find myself cursing the fact that I can't see more of the data, and usually have to copy & paste into Notepad++ or something to actually see what I'm looking for.

 

Yes, you can make custom probes for this behavior. But it should be native.

 

probe.png

SR1.png

Here's how we currently make cluster element labels appear, and a proposed idea for improvement.

ClusterLabels.png

(Inspired by this discussion)

 

The Index & Bundle Cluster Array function currently discards any labels of the input data. I think it would be more useful if it would try to retain the names (i.e. copy the original array labels (if available) to the element labels of the output).

 

The picture illustrates the idea. On the left we see the current behavior and on the right what we would get after this idea is implemented.

 

The goal of this idea is to make it easy for the LabVIEW ecosystem to create reusable, type-independent libraries for LabVIEW. Let's think for a second about dictionaries, also called key-value stores. Dictionaries are data structures that allow storing and retrieving values with a specific key. Creating a generic, reusable, strongly typed dictionary is currently impossible with the LabVIEW type system. One can create a dictionary that is type specific, but then it's not reusable. Or one can create a reusable dictionary, but then it's not strongly typed. Type Parameters and Parametrized Generic Types, as explained in this idea, would allow creating strongly typed dictionaries that are widely reusable across applications. More generally, type parameters and parametrized generic types would allow the LabVIEW ecosystem to develop highly reusable, strongly typed components that solve various common programming problems. This would allow National Instruments to put more focus on the core of the language, as the LabVIEW ecosystem could solve a much wider range of problems that previously have required National Instruments to contribute.

 

Add a new control type, Type Parameter, to LabVIEW that augments the current Control, Type Def and Strict Type Def control types. The Type Parameter type would act like a regular Type Def control with one special and important distinction: you could wire anything to an input terminal expecting a specific Type Parameter type, and the downstream type would adapt at compile time to the type wired to the type parameter input.

Type Parameter Definition

 

In a single VI, a type parameter could be used in multiple places, but all instances of the type parameter would adapt to the same type.

 

Type Parameters

 

When a VI that uses Type Parameters on the front panel is used on a block diagram, the template VI is replaced by the compiler with a type-specific instance that has adapted the type parameters to the type wired to the Type Parameter input. Notice below how in our VI the control and the indicator were of type Type Parameter with a default type of DBL, and the instance got adapted to the U32 type that was wired to the input.

Calling a VI with Type Parameter Inputs

The same type parameter could be used on multiple inputs of a VI.

Multiple Inputs of The Same Type Parameter

And all of the type parameters would adapt to the same type when the VI is being used.

Calling a VI with Multiple Type Parameter Inputs of the Same Type

Note that in the above example we chose the element of the array to be a specific type specified by a type parameter. However, the arrays themselves could just as well have been specified by a type parameter.
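
Since G is graphical, here is a rough analogy of the same adaptation in a textual language, using Python type hints (purely illustrative; the names are made up): one type variable used on several terminals forces them all to resolve to the same concrete type at the call site.

```python
from typing import TypeVar

T = TypeVar("T")  # plays the role of a Type Parameter control

def bundle_pair(x: T, y: T) -> list[T]:
    # Both inputs and the output adapt to the same concrete type per call site.
    return [x, y]

ints = bundle_pair(1, 2)        # T resolves to int -> list[int]
words = bundle_pair("a", "b")   # T resolves to str -> list[str]
# bundle_pair(1, "b")  -- a checker widens T to a common supertype or flags it,
#                         mirroring the "same or compatible type" rule below
```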

 

So far we have focused on the VI boundary, where type parameters adapt the whole VI to a specific type (or types, if multiple different type parameters are used in the connector pane of the VI). Type parameters can also be used in composite types (e.g. arrays, clusters, classes), and the downstream composite types would adapt to what is wired to the type parameter input.

 

Type Parameters with a Cluster

Note that x and y, as instances of the same type parameter, have to be of the same or a compatible type.

Type Error With Type Parameters

 

Type parameters can also be used in class private data to create parametrized custom types. This is where type parameters become extremely powerful. Let's assume that we have a class 3D Vector.lvclass that has three instances of a "Data Element.ctl" Type Parameter. The default type of the Data Element is set to be DBL. The cluster private data has three instances of the Data Element, one for each of X, Y and Z.

Type Parameters in Class Private Data

 

Now we could create a Create 3D Vector method VI for this class that allows us to construct a type-parametrized instance of the class type.

Creating a Type Parametrized Object

Now calling this Create 3D Vector.vi with strings as the values for the type-parametrized inputs X, Y and Z will create an instance of class 3D Vector with compile-time type 3D Vector[String].

Calling a VI that Creates a Type Parametrized Object
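
For readers more used to textual generics, a hypothetical sketch of such a parametrized class (the names are mine, not part of the proposal):

```python
from typing import Generic, TypeVar

DataElement = TypeVar("DataElement")  # stands in for "Data Element.ctl" (default type DBL)

class Vector3D(Generic[DataElement]):
    # Private data: three instances of the same type parameter.
    def __init__(self, x: DataElement, y: DataElement, z: DataElement) -> None:
        self.x, self.y, self.z = x, y, z

v_dbl = Vector3D(1.0, 2.0, 3.0)   # Vector3D[float] -- analogous to 3D Vector[DBL]
v_str = Vector3D("a", "b", "c")   # Vector3D[str]   -- analogous to 3D Vector[String]
```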

 

And this is where we start seeing the superpowers of type parameters and parametrized types, as well as the generic type-parametrized VIs that go along with them. We now have the capability to create custom VIs and custom types that can both adapt to different parameter types at usage time.

 

Let's get back to the question of dictionaries. We could easily construct a dictionary that allows the key type to be parametrized with one parameter and the value type to be parametrized with another. For example, we could use the dictionary with I32s as keys and Strings as values. Or we could use it with Strings as keys and File Paths as values. A constructor for such a custom type would be trivial to create.

Type Parametrized Create Method for a Dictionary

 

Once we have constructed the dictionary we would naturally like to use it. We could now use method VIs of the Dictionary class to add and fetch elements from the dictionary. As an example, Get Element By Key would look something like this in its simplest form.

 

Get Element From Dictionary

 

 

Note that Dictionary In is type parametrized with two different type parameters, Key Type and Value Type. In the class library there are Type Parameter controls Key Type.ctl and Value Type.ctl. The type parameter Key Type.ctl is used both inside the private data of the class and on the front panel as the Key input; the type of these two must be the same. The same is true for the Value Type element of the private data and the Value indicator, which both derive from the Value Type.ctl type parameter. The hash function is any function that can convert any LabVIEW type to a string that we can use as a key for the variant attribute node. We are using variant attributes as the store implementation in this basic example.
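
As a purely illustrative sketch in a textual language (assumed names; a plain dict keyed on the hashed key stands in for the variant-attribute store), the parametrized dictionary with Add Element and Get Element By Key might look like this:

```python
from typing import Generic, TypeVar

KeyType = TypeVar("KeyType")      # Key Type.ctl
ValueType = TypeVar("ValueType")  # Value Type.ctl

class Dictionary(Generic[KeyType, ValueType]):
    def __init__(self) -> None:
        # Stand-in for the variant-attribute store in the class private data.
        self._store: dict[str, ValueType] = {}

    def add_element(self, key: KeyType, value: ValueType) -> None:
        self._store[self._hash(key)] = value

    def get_element_by_key(self, key: KeyType) -> ValueType:
        return self._store[self._hash(key)]

    @staticmethod
    def _hash(key: object) -> str:
        # Any function that converts an arbitrary value to a string key will do here.
        return repr(key)
```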

 

Calling the Dictionary with an integer as the key and a string as the value would look something like this.

 

Calling a Type Parametrized Dictionary

 

As you can see, the 0 and the empty string propagate as the type parameter types for Key Type and Value Type on the dictionary wire. Now Add Element.vi would have to adapt to these selections for Key Type and Value Type the moment the Dictionary wire is connected: the Key input would immediately change to type INT32 and the Value input to type String. The same would be true if the wires were connected in the reverse order. Connecting a "University of Texas" string to the Value input of Add Element and connecting the number 1 of type INT32 to the Key input would immediately adapt the Dictionary In and Dictionary Out terminals to be of type Dictionary[Key Type = INT32, Value Type = String]. A type error would occur if Dictionary In were of a different type.
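
Continuing the illustrative Dictionary sketch from above (a rough equivalent only: in Python the type parameters are pinned by an annotation rather than inferred from the wired values):

```python
d: Dictionary[int, str] = Dictionary()     # Key Type = INT32, Value Type = String
d.add_element(1, "University of Texas")    # OK: key adapts to int, value to str
name = d.get_element_by_key(1)             # inferred as str
# d.add_element("1", 2)  -- type error: this wire is already Dictionary[int, str]
```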

 

Type Parametrized Generic Types are an extremely powerful concept to include in a language, and this idea describes a feasible way to implement them in the visual dataflow model of LabVIEW. This has been, for maybe ten years, my absolute #1 feature request for LabVIEW, and I think the time is right to officially make it. Ideally, Type Parameters could also be bounded, but that's a topic for a whole other idea post.

I am struggling with my Event Structure event list and the corresponding list of cases in the parallel consumer loop Case Structure.

Both currently have over 100 cases each, and finding one, or scrolling down to access the latest one, has become painful due to the lack of a scrollbar in these lists.

For instance, here is the Event Structure list:

 

Screen Shot 2015-09-29 at 12.19.16.png

 

Same goes for the list of controls in a Local Variable (and other objects, I am sure).

There is no reason why such lists should not have a vertical scrollbar when the corresponding list for an Enum does have one:

 

Screen Shot 2015-09-29 at 13.33.30.png

 

Or is there?

 

Suggestion: All long pulldown lists should have a vertical scrollbar

Instead of doing all this:

Captured1.png

I would like to be able to just drop a VI on the Start Asynchronous Call node, and get this:

Captured1.png

I often have the case that I have to display an array of clusters. It takes a lot of space on the front panel to show all the captions of the cluster elements, because they are shown on each array element. A better solution would be to show the captions or labels only on the top visible element.

IMAG000.jpg

IMAG001.jpg

When analyzing execution performance it would really help if we had a simple yet comprehensive way to check a VI and all its dependencies for "things" that would create a UI thread swap / access.

 

cheers

My idea is to have LabVIEW cease and desist its self-important modal behavior. Not that I think LabVIEW is anything other than the most important application I run, but I don't think it should force its (many windows') way to the front of the line when I shift focus to a LabVIEW window. I didn't find any other idea that matched this, but there is this discussion that covers the notion well.

 

An example case: when chasing efficiency I frequently have Task Manager open to observe CPU usage while I change front panel controls. I'll run the VI and load Task Manager, but when I click on a front panel control ALL the LabVIEW windows come to the front and cover Task Manager:

Modal.png

 

So, my suggestion is to have only the selected LabVIEW window come to the front.  I get the impression that Ctrl-Tab and Ctrl-e behavior are why LabVIEW controls its own window z-placement, but leaving their function out of it, my suggestion is just to change the modal behavior of LabVIEW windows.

The problem is that the height of a local variable is 17 pixels and that of a terminal is 16 pixels, but the distance between terminals in the Unbundle function is 15 pixels.

As a result, aligning with Vertical Compress causes steps in the wires:

 

Screenshot.png

 

Right now, terminals/local variables have to be slightly overlapped for "step edge free" wiring.

Please synchronize the size of these icons with the distance between terminals (16 pixels seems to be the ideal size).

 

Not sure if this was already posted in the Idea Exchange or not.

 

Andrey.

 

While there are ways to temporarily disable autowiring, some auto-created wires should never happen in the first place. For example, if the resulting wire has a net right-to-left flow, it should not be created at all!

 

Let's have a look at the following simple scenario (see picture):

 

Place an "add" primitive (or almost anything else!), then use ctrl-drag to create a second copy right below it. (Sometimes we need to add a few different things in parallel!). LabVIEW immediately connects a random input and output with a circuitous backwards wire for no good reason at all. Smiley Sad

 

 

If I really wanted them to be connected, I definitely would have placed them side-by-side!!!

 

Idea summary: if any auto-generated wire would result in a net backwards dataflow, it should not be created at all!

Not so much an idea as a wish/plea/rant:

 

Please make the next version of LabVIEW a major update of the features we have available to create user interfaces.

 

2011 was the "improved stability" version. 2014 should be the year it becomes simple and fun to create user interfaces that blow everyone's socks off. I'm not even talking about fancy stuff, just get the basics right! Fix the graph indicators and provide better front panel scaling options - and that alone will make 2014 the best update ever!

 

 

I started writing a list of all the things I find bad with the graphs/charts, for example, and found out that it would be better to just do a search here on the Idea Exchange to see how many ideas mention graphs alone. 2390 ideas! (Yes, I have not gone through them all to filter out the ones that do not actually request changes to the graphs, but most of them do, directly or indirectly...) My own little list started like this, in random order:

 

Graphs and charts

1. You cannot stack plots in any of the graph indicators, only in charts
2. Number of plots stacked cannot be varied at run-time
3. Annotation properties are only partially available programmatically
4. Auto-scaling cannot be restricted to one direction only, and its behaviour cannot be configured in any way
5. Legends, palettes and tools do not fit together to form an organized user interface, nor can they be detached from each other to get more flexible designs/scaling, for example...
6. XY graphs become sluggish and almost unusable with large data sets, whereas alternatives written in other languages have no performance issues
7. Plot colors could automatically adjust to the chosen background color - suggesting unique colors for the added plots that provide the best possible visibility.

8. Graphs on e.g. Google and Yahoo have tonnes of cool features like animated zooming, thumbnail graphs, highlighting of the plot you hover the mouse over, etc., which provide a very interactive feel. You can achieve some of this in LV as well, but it could/should be possible with little or no programming

9. Getting charts to accept data with a variable sample rate (delta X) is possible, but it is cumbersome and an almost hidden feature...

 

Mixed signal
1. You cannot set the group names programmatically
2. The number of plot areas is not configurable at run-time
3. You cannot assign plots to a given group programmatically
4. You cannot show the visibility checkbox of each plot etc.

 

And then you have the additional 2000 ideas...;-)

 

As for front panel scaling there are not that many ideas (naturally), but the impact/value of them would change every LabVIEW programmer's life significantly. We can stop spending so much time finding ways around limitations in LV, and start focusing on the actual goal of our applications.

Would that not make everyone's day?

 

 

(This is different and less controversial than this related old idea)

 

Arrays of timestamps only contain legal values and can even be sorted. We can use "search 1D array" to find a matching timestamp just fine.

 

As with DBLs, there might be a very close array value that is a few LSBs off, but well within the error of the experiment so it is sufficient to find the closest value. We can always test later if that value is close enough for what we need or if "not found" would be a better result.

 

If we have a sorted input array, the easiest solution is "threshold 1D array", giving us a fractional index of a linearly interpolated value. For some unknown reason, timestamps are not allowed for the inputs, limiting the usefulness of the tool. One possible workaround is to convert to DBLs, with the disadvantage that quite a few bits are dropped (timestamp: 128 bits, DBL: 64 bits).

 

Similarly, "Interpolate 1D array" also does not allow timestamp inputs. In both cases, there is an obvious and unique result that is IMHO in no way confusing or controversial.

 

 

 

IDEA Summary: "Threshold 1D Array" and "Interpolate 1D Array" should work with timestamps.

 

 

 

I like constant folding.  LabVIEW says "this is going to be a constant".

 

There are times when I want to see the actual values of the constant. In the middle of a large VI it can be a pain to derail to make a constant, then come back. It can be easier to look at an array of values for troubleshooting.

 

I wish there were a way to right-click and either show what constant folding gives, or even convert it to an actual constant. This would change the development side, not the execution side.

 

While it doesn't have to be reversible, it would be nice to know I got it right, then revert it to code, in case I want to vary the generation of values at a future time.

NI sends us the NI Developer Suite each year on DVDs, all packed in a nice little NI-branded DVD carry case. We are on the SSP subscription and receive three shipments a year, which means I have a whole stack of them.

 

I suggest that NI start shipping USB keys instead. USB keys have several advantages:

 

  • USB keys are smaller
  • USB keys are more usable on devices without a DVD drive
  • Installing from one large USB key means no more DVD swapping. I can go to lunch while NI installs/updates without having to change the DVD every couple of minutes.
  • USB keys are reusable: when you get a new version of LabVIEW on a new USB key, you can repurpose the old one for everyday use. This also means less waste, since the USB keys are still in use after a new version ships, but the DVDs are useless.

 

Ship the Developer Suite on NI USB keys

LabVIEW on Windows just looks awful. Here's a little gallery of horrors. In my opinion it makes LabVIEW look very unprofessional. These are all taken from a "stock" LV2016 Full license installed on Windows 7, from some of the windows I see all the time. This list could go on forever!

 

1.PNG

 

2.PNG

 

3.PNG

 

4.PNG

 

5.PNG

 

6.PNG

 

A lot of these problems seem to originate from using LabVIEW-style controls and indicators in its own windows and panels, which are not rendered correctly. Please save LabVIEW from itself!

Currently, if you right-click on a subVI from the block diagram and choose Properties, it brings up the Object Properties dialog. The only options you can change there are label options, which can easily be changed in the "Visible Items" submenu. I can't think of one time when this has ever been what I wanted out of this action. Instead, I think this action should open the VI Properties window for the VI.

 

properties1.png

Typical question during development: "How quickly does my code execute? What runs faster... Code A or Code B?" So, if you're like me, you throw in a quick sequence that looks like this:

 

TimingDuringDevelopment.png

 

AHHH! What a mess! It's so hard to fit it in, with FP real estate so packed these days!
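
For reference, the throwaway scaffold in that picture is the same two-line pattern you would jot down in a textual language (sketch only):

```python
import time

t0 = time.perf_counter()   # first "Tick Count" frame
# ... Code A under test goes here ...
t1 = time.perf_counter()   # last "Tick Count" frame
print(f"elapsed: {(t1 - t0) * 1000:.3f} ms")   # the subtract-and-display part
```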

 

We need this:

ProposedTimingDuringDevelopment.png

 Just like my other idea, and for simplicity's sake NI, I would be PERFECTLY happy even if you had to set up the probes during edit mode, and were not able to "probe" while running.

 

As a bonus, this idea may be extrapolated to n timing probes, where you can find the delta t between any two of the probes.