LabVIEW Idea Exchange

Community Browser
About LabVIEW Idea Exchange

Have a LabVIEW Idea?

  1. Browse by label or search in the LabVIEW Idea Exchange to see if your idea has previously been submitted. If your idea exists, be sure to vote for it by giving it kudos to indicate your approval!
  2. If your idea has not been submitted, click Post New Idea to submit a product idea to the LabVIEW Idea Exchange. Be sure to submit a separate post for each idea.
  3. Watch as the community gives your idea kudos and adds their input.
  4. As NI R&D considers the idea, they will change the idea status.
  5. Give kudos to other ideas that you would like to see in a future version of LabVIEW!


As discussed here, please incorporate the System Controls 2.0 UI elements into core LabVIEW. This would eliminate the need to visit VIPM to obtain the full functionality of the System UI control palette. System Controls 2.0 is not an additional UI style; rather, it is the partial completion of the original System controls palette.

 

Including these controls in the default installation would allow the merging of these two palette entries for those who use the System controls for UI development.

 

system.png

 

 

0 Kudos

If you want to set a VI's icon, you can choose from 32 icons. None of them seem to be used by LabVIEW, and the icons that LV does use are not available for picking.

 

Icons for FP scaling, moving cursors, or moving splitters should be available to choose from. If LV can use them, why can't I?

 

Fact is, splitters, graphs, and even LV FP's have limits. If you are ever forced to pass those limits (e.g. make a picture control look like any other graph, or make a splitter that can disappear completely), the lack of access to these cursors makes it an even harder problem.

 

A VI to get additional icon resources as pixmaps, and one to set icons from a pixmap, seem like a good idea, but if it can be solved in another way I'd accept that.

0 Kudos

After 17 years I still make this mistake.

 

You've been working on something and feel you've messed up, so you revert the VI, only to find out the first 70% was not messed up... Sadly, you can't undo the revert.

 

I know it's a bit "against the nature" of the revert as it has been for the past 15-20(?) years, but maybe it's time for a change? I do understand it might be difficult from a technical point of view, as the backup heaps as they are probably won't support this...

0 Kudos

Why is the property to read the default value a scripting property?

 

If you ask me, reading the default value should be a normal property...

 

Writing it is scripting, so maybe it should be split up.

0 Kudos

This previously wasn't such a useful idea, because typically the datatypes unbundled would be different, requiring a To Variant to get anywhere. This is still the only way I can see to do this (and unless I look into scripting, it's fairly time-consuming):

 

Unbundle with one case per value

Would it be possible to get either

a) a primitive node that takes an Enum and a Cluster as inputs, and outputs the data element matching the value of the enum as a variant (as in the code above). This could break at edit time if the enum contains any values that are not the labels of an element of the cluster, similar to how a case structure breaks when a case is not covered by an enum value (but in reverse, I suppose; so more like not having all values or a default...).

 

b) far more excitingly (but I don't see how it would be possible) a VIM that outputs the data value without the variant. I can't quite see this being possible, since the value of an enum requires runtime information whilst the datatype output needs to be available at edit time, but I'll list it as a tentative dream.
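For what it's worth, option a) can be mimicked in a text language. Below is a minimal Python analogy (all names here are hypothetical, not LabVIEW code): the cluster is a dataclass, the selector is an enum whose member names match the cluster's element labels, and the lookup returns the matching element:

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical stand-ins: a "cluster" as a dataclass and a selector enum
# whose member names match the cluster's element labels.
@dataclass
class Measurement:
    voltage: float
    current: float
    note: str

class Field(Enum):
    voltage = "voltage"
    current = "current"

def unbundle_by_enum(cluster, field: Field):
    """Return the cluster element whose label matches the enum value,
    like the variant output of the proposed primitive. A selector with
    no matching label fails here at runtime; the proposed node would
    break at edit time instead."""
    return getattr(cluster, field.value)
```

The edit-time check in the proposal corresponds to verifying, once, that every enum member name is a field of the cluster type.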

Please vote to improve the help. NI wants votes before improving the Desktop Execution Trace Toolkit help! The various log events should be defined. Service request reference #7718574 said:

 

"It seems that the information regarding the descriptions of the Desktop Execution Trace Toolkit Events is not publicly available at this time" and that it could be requested via the Idea Exchange here.

 

There are nine event categories, each with a short description (Errors, Event Structures, Memory Allocations, Queues and Notifiers, Reference Leaks, User Events, User-Defined Trace Events, VI Execution, VIs). There are several events under each category which are not described, and some are not self-explanatory. For example, what is the difference between VI Execution's Call and Start Execution, or Return and Stop Execution? (rhetorical)

This must be a duplicate, but I can't find it anywhere...

 

What I'm missing is a Static VI Reference, that instantiates a clone:

Static Clone Reference.png  

 

 

It might need a new name, since it's not a static reference. But "Clone Reference" is ambiguous (a reference to a clone). Maybe "Clone Allocator"?

 

Obviously, making it a strict type is still possible:

Static Clone Reference (strict).png

From:

https://forums.ni.com/t5/LabVIEW/possible-to-change-opc-ua-alarm-limits-at-runtime/m-p/3688481

 

The request is mostly in the title. Right now it seems as though changing an alarm limit requires stopping the OPC UA server. It seems reasonable to expect that users may wish to adjust alarm limits on the fly -- for my use case, tuning the system during a long commissioning process. It might be many months before a system is fully ready to go, and during that time alarm limits change regularly and we still want to report alarms to operators during this period.

In LabVIEW we can dynamically run a VI in a few ways:

a) If it's not a running Top Level VI, or if the VI is re-entrant: with the Run method.

b) If it's already running as a sub VI: with Call By Reference.

c) Make a new VI and drop the (running) sub VI on its diagram.

 

The downside of a) is that we can't always make sub VI's re-entrant, but still want to call them by reference. The downside of b) is that we need to know the strict type (connector pane). The downside of c) is that we might end up with a lot of VI's just to function as Top Level VI for the sub VI's, and it doesn't work in executables.

 

I'd like to propose a method, so we can dynamically call a sub VI without knowing the strict type.

This is all I want.png

Using it, we enable LV to dynamically run sub VI's while setting/getting their parameters by name.

 

For sub VI's (already running) this method will act as Top Level VI. For Top Level VI's it will fail unless it's idle.
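As a rough text-language analogy of the proposed method (Python; the registry, routine, and parameter names below are made up for illustration), calling "by name" with named parameters could look like this:

```python
import inspect

def run_by_name(registry, name, **params):
    """Look up a routine by name and pass only the parameters it accepts,
    ignoring extras: analogous to setting a VI's controls by label without
    knowing its strict connector pane at the call site."""
    fn = registry[name]
    accepted = set(inspect.signature(fn).parameters)
    return fn(**{k: v for k, v in params.items() if k in accepted})

def measure(channel, gain=1.0):
    """Stand-in for a sub VI with 'channel' and 'gain' controls."""
    return {"channel": channel, "reading": 0.5 * gain}

registry = {"Measure.vi": measure}
```

The point of the sketch: the caller never sees the callee's signature, only its name and a name/value parameter list, which is what the proposed Run-with-parameters method would provide.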

 

 

 

(Please ignore my first confusing attempt)

0 Kudos

I've always wondered why this is not possible.

 

So you can dynamically start a VI. Except when it's already "running". I quoted "running", but the VI itself is not running; it's just on the diagram of a running VI.

 

To start a VI dynamically when it's already "running", we need to make it re-entrant. This is very limiting (see the use case in the following post).

 

Technically, there should not be a problem. You can create a new VI, put the sub VI on the diagram and run it.

 

Why can't we do this by reference???

The "Read Key (Double).vi" does not give out proper values when an SI-notation number is present in the INI file. For example, if the number is 1m it returns 1, while the actual value should be 0.001. It would be helpful if the format specifier were provided as another control.
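The conversion being asked for is straightforward; a minimal sketch of it (in Python, with a made-up helper name, since the toolkit VI itself is LabVIEW code) would be:

```python
# Hypothetical helper: parse an INI value string that may carry an SI
# suffix, which "Read Key (Double).vi" reportedly drops ("1m" -> 1).
SI_PREFIXES = {
    "p": 1e-12, "n": 1e-9, "u": 1e-6, "m": 1e-3,
    "k": 1e3, "M": 1e6, "G": 1e9, "T": 1e12,
}

def read_si_double(text):
    """Convert '1m' to 0.001, '2.5k' to 2500.0, plain numbers unchanged."""
    text = text.strip()
    if text and text[-1] in SI_PREFIXES:
        return float(text[:-1]) * SI_PREFIXES[text[-1]]
    return float(text)
```

Note the case sensitivity: "m" (milli) and "M" (mega) must be distinguished, which is one reason an explicit format-specifier control on the VI would help.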

Here's a dumb mistake I think many of us can relate to:

bundle_by_name.gif

It would be really nice if the VI were just broken in this situation. But I can understand why it's not necessarily simple to mark node *outputs* as required.

 

 

But maybe there could be a way for the editor to *hint* that there is a problem here. Maybe the bundle nodes could change color when the output terminal is wired, so you could get a little more obvious feedback if you screwed up like this.  The same could go for any other primitives that have a "for type" input (e.g. unflatten from string, variant to data, to more specific class, etc).

 

Granted, VI Analyzer could report bugs like this, but having a little more immediate feedback would probably be a big win.

 

(Let me know if this should be cross-posted to the NXG idea exchange, too).

When performing a single point read on an XNet session, you will receive the value of the signal that was last read, or the Default value as defined by the database if it has never been read.

 

This type of functionality is sometimes useful, but more often I'm interested in knowing what the last reading was, if the reading is relatively recent.  The problem with the NI implementation is that you have no way of knowing (with this one session type) if the device is still broadcasting data or not.  I think a useful property might be a way of specifying an amount of time for which a signal's value is still valid.  If I haven't seen an update to a signal in 2 seconds, for example, I can likely assume my device is no longer communicating, and the reading I get back from the XNet read should return NaN.

 

I had a small discussion on this topic and posted a solution using an XY session type here, which demonstrates the functionality I am talking about.  I'd like this to be built into the API.
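The proposed validity property boils down to comparing the last update's timestamp against a timeout at read time. A minimal sketch of that logic (in Python; the class and method names are made up, not part of the XNet API):

```python
import math
import time

class StaleGuardedSignal:
    """Last-known signal value that reads back as NaN once it is older
    than validity_s seconds, mirroring the proposed XNet property."""

    def __init__(self, validity_s):
        self.validity_s = validity_s
        self._value = math.nan
        self._stamp = -math.inf  # never updated -> always stale

    def update(self, value, now=None):
        """Record a new reading (e.g. on each received frame)."""
        self._value = value
        self._stamp = time.monotonic() if now is None else now

    def read(self, now=None):
        """Return the last value, or NaN if it has gone stale."""
        now = time.monotonic() if now is None else now
        if now - self._stamp > self.validity_s:
            return math.nan  # device presumed no longer broadcasting
        return self._value
```

With a 2-second validity window, a device that stops broadcasting would start reading back NaN shortly after its last frame, which is exactly the behavior requested above.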

What I propose is to have functionality built into the XNet API that is similar to the DAQmx API.  I'd want a separate subpalette where functions like Start and Stop Logging can be invoked, which will log the raw data from an XNet interface into a TDMS file.  Maybe even some other functions like forcing a new file, or forcing a new file once the current file reaches a certain age or size.  On buses that are heavily loaded, reading every frame and then logging it can use a non-trivial amount of resources, and having this built into the API would likely be more efficient.

 

XNet already has a standard for how to read and write raw frames into a TDMS file that is compatible with DIAdem, and has several shipping examples with LabVIEW to log into this format.

0 Kudos

Hi All,

 

It would be a great time saver if NI built stop-actor functionality into the Actor Framework VIs.

 

Reference to my post : Change Actor Framework override VIs

 

Thanks!

0 Kudos

If, in the middle of building a VI, a user needs to look over something in another project to ensure the two programs function together, the user clicks File > Open Project. This spawns a Windows Explorer dialog where you have to browse to the directory where the project is located. Since the LabVIEW .ini file contains the paths to known projects, the dialog box should populate with the project names and paths for known projects. This would allow the user to one-click the project instead of having to push down through several directories to get there. Of course, the user could still browse to another project path not listed in the INI file if the dialog is coded as such. Just a matter of convenience and ease of use.

0 Kudos

Now that there is the capability to highlight code and push it into a sub-VI, there should at least be rudimentary help created, or at least a warning when the project is saved, to alert the creator that there are VIs being saved with no documentation or still using the generic VI icon.

 

While creating a sub-VI from highlighted code makes for a cleaner visual effect, it is possible that the code pushed down loses all documentation.  Plus, it may even end up with the generic VI icon, which provides even less information as to the purpose and function of the newly created VI for future use.

 

Follow-ons could include things like a simple help creator.

0 Kudos

Generate a VI or set of VIs with a general driver to get low-end FPGA boards working with LabVIEW FPGA. To keep this dynamic, parameters would come only from the users: the total count of I/Os, the FPGA type, the location of I/O items (e.g. buttons) on the FPGA board, etc. It would be a bit of work, but it would pay off in the end. Done well, this is no more than an extension of LabVIEW (say, with the parameters written in an XML file), and it would be a very powerful tool for researchers. It would also generate more sales of LabVIEW FPGA to researchers, university students, and engineers who want to test a few things without a full initial commitment to NI tools.

 

 

0 Kudos

If someone had asked whether there was a setting to run a VI after an installer was built, I would have said yes, until today, when I realized I needed it and couldn't find it.  When building an EXE, there is a Pre/Post Build Actions section where you can specify a VI to be run before or after the build.  But this appears to be limited to building EXEs, not installers.  There are several limitations to the NI build process, and I'd like to improve them by having functions run after a build of an installer completes.

 

 

EXE settings

EXE Builder Settings.png

 

 

Installer Settings

Installer Settings.png

 

 

The Excel Specific, Excel General sub-palette of the Report Generation Toolkit contains a useful function, "Excel Get Last Row".  This allows a user to add new rows of data to an existing Worksheet by returning a cluster of MS Office Parameters that can be used with the other Excel Report functions (such as Excel Easy Table) to place the new (row-organized) data below existing rows, such as Column Headers, that might already be present in the Template.

 

 

I propose that NI add a similar "Excel Get Last Column" that does the same thing, but returns the last column on the WorkSheet.  This would be useful when entering data one column at a time, which is not uncommon when entering multiple channels (columns) of sampled data, where you want the new data to be placed just to the right of the existing (columnar) data.

 

Get Last Column.png

I could easily write such a function myself, but so could NI, and if NI did it, everyone who uses the Report Generation Toolkit would have access to such functionality.
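The logic really is simple; here is a sketch of the column-wise scan in plain Python, with a 2-D list standing in for the worksheet ("last_used_column" is a made-up name, not an RGT VI):

```python
def last_used_column(rows):
    """Return the 1-based index of the right-most non-empty cell across
    all rows, or 0 for an empty sheet: the column-wise analog of how
    "Excel Get Last Row" finds where existing data ends."""
    last = 0
    for row in rows:
        # Scan each row from the right for its last non-empty cell.
        for idx in range(len(row), 0, -1):
            if row[idx - 1] not in (None, ""):
                last = max(last, idx)
                break
    return last
```

New columnar data would then be written starting at column `last_used_column(...) + 1`, just to the right of what is already there.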

 

Bob Schor