
LabVIEW Idea Exchange


Background

This idea is derived from the following idea: Front panel controls and indicators should be genuine after being replaced with control or indicator of different style. Please read that thread for background.


Problem

There are three ways to replace front panel controls and indicators:
1. Right-click the control or indicator, select Replace, then select the object to replace it with.

2. Select the control or indicator, press Ctrl + Space to bring up QuickDrop, search for the object to replace with, then press Ctrl + P.

3. Select a control or indicator and press Ctrl + X. The control will disappear from the front panel. Select another control or indicator. Press Ctrl + V. The second control is replaced by the first control.

 

The first two methods preserve some of the look and feel of the original object. For example, when replacing a Modern-style string control with a Classic-style string control, the end result is a control that looks like a mixture of the two styles. This may or may not be the desired behaviour.


The third method does not preserve any of the attributes of the object that was replaced. The end result is a control that looks exactly like the control that was selected when Ctrl + X was pressed.

 

Both behaviours can be desirable in different situations, and it's good that there are ways to achieve both. However, the third method is not easily discoverable.

Proposed solution

A more intuitive and discoverable solution could be: after right-clicking an object, selecting Replace, and selecting the object to replace it with, a two-button dialogue could pop up. The message could ask something like "Preserve attributes?" The buttons' Boolean text could be "Yes" and "No".

 

Ideally the same popup would appear when using the QuickDrop Ctrl + Space, Ctrl + P method.

 

This would:

1. Enable the user to choose whether or not to preserve attributes, depending on their intention at that particular moment (the same user may prefer either behaviour at different times).

2. Increase awareness that both behaviours are available.

Thanks

I use an electrical drawing package, and on different pages of schematics it is nice to have copied components (e.g. potential lines) in the same position, so that when flicking through pages you can easily see the differences, and it looks prettier.

 

In an event structure or case structure, when I copy and paste an object from one case to another, I would like to be able to also copy the position of the copied item. I like to be able to scroll through event structures and see any identical components in the same location.

 

I propose that the X and Y keys on the keyboard be used to trigger this auto-alignment, so that after pasting, pressing Y will move the copied item to the same Y position, and pressing X will move it to the same X position.

 

 

An insider program for LabVIEW, letting signed-up users access and pretest the current or next version of LabVIEW and all its modules for non-academic/commercial/industrial applications. In exchange for the free access, signed-up members would help NI detect bugs, which could further improve LabVIEW before release, in addition to serving as a communication channel.

 

Interested members could sign up, with an active SSP as the entry requirement.

 

The batch message class should have a public static method to build a batch message that can be sent to Time-Delayed Message.vi.

[Image: tkendall_0-1643323757644.png]

 

I would like to wait xxxx ms then send three messages in order.

[Image: tkendall_1-1643323849848.png]

Yes, there are twelve other ways to do this, but this would be much cleaner, and it is an easy change.
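
Since LabVIEW is graphical, here is only a rough text-language analogy: a minimal Python sketch of the requested pattern, in which a static batch factory bundles messages so that one time-delayed send delivers them in order (the function names are hypothetical, not part of any LabVIEW API).

```python
import queue
import threading

def make_batch(*messages):
    """Hypothetical static batch builder: bundle messages so a single
    delayed send can deliver them together, in order."""
    return list(messages)

def send_time_delayed(inbox: queue.Queue, batch, delay_s: float):
    """Rough analogy to Time-Delayed Message.vi: after delay_s seconds,
    deliver every message in the batch, preserving order."""
    def deliver():
        for msg in batch:
            inbox.put(msg)
    threading.Timer(delay_s, deliver).start()

inbox = queue.Queue()
send_time_delayed(inbox, make_batch("stop acq", "flush log", "close file"), 2.0)
```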

Allow type casting between Strictly Typed VI References

[Image: TypeCastVI.png]

Why:

The asynchronous methods require strictly typed VIs. Strictly typed VI references depend on the connector pane and the terminal wiring type (Required, Recommended, Optional). Depending on your development environment and configuration options (Front Panel > Connector pane terminals default to required), newly connected terminals may be Required or Recommended.

Opening a VI reference or calling an async method with the wrong terminal wiring type will result in Error 1057: Type mismatch. 

[Image: TypeCastVI_Justification2.png]

To work around the connector pane variance, we need to check every connector pane terminal variant in order to call the proper async method without getting the Type Mismatch error.

[Image: TypeCastVI_Current.png]

The amount of unnecessary complexity needed to call and collect a thread is infuriating. There is already enough complexity when it comes to spawning new threads, between the conditional options (Non-Reentrant, Reentrant, Async Forget, Async Collect), the VI type specifier (Generic, Strict), the connector pane type (Pattern, Rotation), the terminal wiring types (Required, Recommended, Optional), etc. Threading in LabVIEW needs improvement.

Idea:

To better support asynchronous calls, we need the ability to type cast the required strictly typed VIs. Two possible approaches:

1. Add an option to Start Asynchronous Call and Wait on Asynchronous Call methods to ignore the VI terminal wiring type flags:

     Required (0x21000), Recommended (0x20800), Optional (0x20000)

2. Support type casting VIs To More Generic Type and To More Specific Type, to avoid the error 1057 Type Mismatch altogether.

[Image: TypeCastVI_Idea.png]

Regards,

A LabVIEW Enthusiast 

Two more suggestions

 

There has also been a need for rising-edge and falling-edge triggers for Boolean values, instead of having to manually code this in every time. I know this would take extra memory space, but it could be turned on or off, maybe in the control settings dialog.

 

I also have to manually code time-delay functions, because everything I do is in loops with parallel code. The timers can't execute properly this way, and it would be nice to be able to use the built-in timing functions instead of hand-coding them.
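
Since LabVIEW is graphical, here is only a hedged text-language sketch (Python) of the two patterns being hand-coded today: an edge trigger that compares the current Boolean with the previous iteration's value, and a non-blocking elapsed-time check that can live inside a parallel loop.

```python
import time

class EdgeTrigger:
    """Rising/falling edge detection: compare this iteration's value
    with the previous one (the state a shift register would carry)."""
    def __init__(self):
        self.prev = False

    def update(self, value: bool):
        rising = value and not self.prev
        falling = self.prev and not value
        self.prev = value
        return rising, falling

class ElapsedTimer:
    """Non-blocking delay: poll has_elapsed() each loop iteration
    instead of sleeping, so parallel code keeps running."""
    def __init__(self, delay_s: float):
        self.t0 = time.monotonic()
        self.delay_s = delay_s

    def has_elapsed(self) -> bool:
        return time.monotonic() - self.t0 >= self.delay_s
```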

Aren't you tired of seeing Labview, or LABview, or LabView online?

 

NI certifications (CLA, CLD, etc.) should be stripped from people who engage in misspelling LabVIEW.

 

And those who aren't certified should handwrite "LabVIEW stands for Laboratory Virtual Instrument Engineering Workbench" ten thousand times!

With LabVIEW classes, I appreciate that:

- I can save and load class data to/from disk.

- Backwards compatibility is automatically taken care of with no coding required (in most cases).

 

However... this functionality breaks as soon as you rename a class (which includes moving it into a library).

 

It'd be great if I could rename a class (or add it to a library) and then still load data that was saved with a previous version of that class. I've been burned by this a few times, where legacy data files were unrecoverable after code refactoring, and it's a tradeoff I'd rather not have to make in the future!
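
For comparison, Python's pickle has the same failure mode when a class is renamed, and it solves the problem by letting you remap old names at load time. A minimal sketch of that idea (the class names here are hypothetical):

```python
import io
import pickle

class Config:                      # the old name, as it existed when data was saved
    def __init__(self, rate=100):
        self.rate = rate

legacy_bytes = pickle.dumps(Config(250))   # stands in for a legacy data file

class AppConfig:                   # the class after renaming; same data layout
    pass

class RenamingUnpickler(pickle.Unpickler):
    RENAMES = {("__main__", "Config"): AppConfig}

    def find_class(self, module, name):
        # Redirect lookups for the old class name to the renamed class.
        return self.RENAMES.get((module, name)) or super().find_class(module, name)

obj = RenamingUnpickler(io.BytesIO(legacy_bytes)).load()
print(type(obj).__name__, obj.rate)        # AppConfig 250
```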

 

(Apologies if this suggestion has already been submitted...but...didn't find it in my search)

We are already familiar with Ctrl+Scroll for exploring structures in the Block Diagram.

 

It is a handy feature for exploring the numerous pages of a structure.

 

And on the Front Panel, we have a similar primitive which can have as many pages as required - the Tab Control.

 

In different scenarios we may end up with numerous pages on our Tab Control, and we may not even have the Tabs/Page Labels Display visible in the GUI.

 

In such scenarios, while editing the GUI, we may need to switch between different pages. Currently we have only one option: the Right Click->Go To Page->select page process.

 

This makes the whole GUI editing process slower.

 

We do have workarounds to avoid the Right Click->Go To Page->select page delay, such as temporarily making the Page Labels Display visible during edit time to select which page to show.

 

But let's consider the efficiency if we had this Ctrl+Scroll feature for Tab Controls to explore the pages!

 

It could work only in edit mode.

When creating a real-time app (.rtexe) intended to run on multiple targets with target-relative references, the recommended technique is to create a build with an IP address of 0.0.0.0 and then, using some other utility outside of the LabVIEW environment, copy the executable to the startup folder on the target.

 

Given that, it would seem reasonable to create a feature within the LabVIEW Project Explorer that allows direct deployment to one or more specific RT targets from the right-click menu, instead of having no option to select an alternate target. If setting the IP address in the project to 0.0.0.0 is the recommended technique for creating a target-relative (versus absolute) executable, then the LabVIEW Project Explorer should be able to key off this value and take a different, more appropriate action.

 

The same would be true of the Set as startup and Run as startup right-click menu options.
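
In the meantime, the "other utility" step can be scripted. A minimal Python sketch, assuming targets reachable over FTP; the host addresses and the startup path are hypothetical and vary by RT target and configuration:

```python
import ftplib
from pathlib import Path

TARGETS = ["10.0.0.11", "10.0.0.12"]        # hypothetical RT target addresses
STARTUP_DIR = "/ni-rt/startup"              # assumption: varies by target type
RTEXE = Path("builds/MyApp/startup.rtexe")

def deploy(host: str) -> None:
    """Copy the target-relative .rtexe into the target's startup folder."""
    with ftplib.FTP(host) as ftp:
        ftp.login()                         # anonymous here; use credentials as needed
        ftp.cwd(STARTUP_DIR)
        with RTEXE.open("rb") as f:
            ftp.storbinary(f"STOR {RTEXE.name}", f)

for host in TARGETS:
    deploy(host)
```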

 

 

Although the 2020 version of the Block Diagram Cleanup Tool (BDCT) does do a better job than its predecessors, I would still say that its results are less than optimal. Most LabVIEW Idea Exchange posts concerning the BDCT talk about label positioning and alignment. Here I would like to focus on the issue of horizontal expansion of the block diagram and take a holistic view of which LabVIEW features contribute to that effect.

 

Like most programmers, when developing a VI block diagram, I try to keep the diagram no more than one screen wide. I have learned a few tricks over the years that help manage horizontal expansion, such as resizing an object's label so that the words appear on multiple lines before running the BDCT; this allows some compression horizontally, with some growth vertically to compensate. Horizontal expansion of the block diagram is expected to some extent, because data flow happens left to right, of course; but it would seem logical to incorporate that knowledge into the BDCT algorithm, finding ways to adjust the spacing so that the rearrangement creates a bit more vertical expansion and less horizontal expansion, while still satisfying the usual criteria such as no backwards-running wires.

 

Because data flow is horizontal, to help the BDCT work better, NI may want to think about what visual features, other than left-to-right data flow, contribute to an unnecessarily wide block diagram. I seldom have an issue with a block diagram becoming too tall. Admittedly, poor programming technique can result in too-wide block diagrams, but let's look at a few other things. What elements of LabVIEW's block diagram unnecessarily consume width? Here are a few that I could think of:

 

  1. Long names in bundle/unbundle cluster objects, property/invoke nodes, enum/ring constants, local variables, formula nodes, etc. Most LabVIEW-supported languages are written left to right, and long names, especially with nested clusters, take up horizontal space. I like being able to use long names; they help the code be more self-documenting. If these types of objects supported word wrap, that would help conserve width.
  2. Expression nodes and multi-digit constants: no word wrap available here either.
  3. Named terminals of timing and event structures: they are what they are, you cannot remove all of the ones you don't use, and so they take up space horizontally.
  4. Some native functions: some LabVIEW native function icons are wider than they are tall, for example In Range and Coerce and Initialize Array.
  5. Shift registers: the effect is subtle, but shift registers are wider than they are tall. Do they have to be that way?

To fully tackle this issue, I believe you have to look at things holistically and not just as potential improvements that could be made in the BDCT.  Recognize that data flow is left-to-right horizontal and you will have long text names; you can’t really do anything about that. However, there are other things that could be done in LabVIEW feature-wise to help compensate for width-wise block diagram growth.

Since it has been announced that LabVIEW 2021 will adopt the advantageous features of NXG, I expect something like:

- In NXG we could dock constants to functions and nodes; I hope this can also come to LabVIEW.

- In NXG we could also reduce block diagram space with Ctrl + mouse click and drag up/left, while down/right extends the space. In LabVIEW we still can't reduce the space; all mouse directions extend it.

- Provide an unplaced terminals, controls, and/or indicators palette; it is very useful.

- Zoom in/out. I have no idea why NXG can while LabVIEW can't.

When integrating libraries written in C#, a common scenario is the need to cast a method to a generic delegate defined in the class, in order to, for instance, subscribe callbacks to events belonging to sub-classes inherited by the object instantiated by the Constructor Node.

 

So far LabVIEW efficiently allows subscribing to events, registering a VI as a callback for when the event in question is fired, but it does not allow casting a VI to a generic delegate, and that's very limiting.

 

The only existing workaround is to wrap the C# library, adding an additional C# layer that instantiates the delegates on behalf of LabVIEW and then exposes plain methods, properties, and events that are directly callable by LabVIEW itself.

 

The idea is to allow LabVIEW to instantiate generic delegates based on the delegate definition provided by the C# library, and to return the delegate object so that it can be passed as input to the library's methods that request it.

 

This feature would make LabVIEW integration of C# DLLs more solid and completely manageable within LabVIEW code, with no further wrappers needed.

Imagine that I have a UI with dozens of graphs all showing acquired time history data.  Time could be on the X axis, but it could also be on the Y axis depending on the convention of the UI.  The user wants to see what happened between 1 minute ago and 2 minutes ago.  The user needs to look at data on various graphs to make a determination.  The user wants to go to one plot and make a zoom operation on the time scale and have all other plots show the same time scale interval.

 

Why not have a properties tab on plots that allows the developer to define a named linked scale and assign a scale on the current graph to link to the global scale? The developer could then drop XY graphs and link their time scales to create a standard time scale common between plots. When the user changes the time scale on one linked plot, it automatically propagates to all other linked plots. The common scales would be global across any VIs in memory.

 

This would be useful for linking time scales, but I also see it as potentially useful for linking other scales; say the Y scale is temperature and the user wants all temperature plots to match scale. This would be harder, though, as some negotiation would be needed if autoscale is turned on: all plots would have to report their min/max to a process that would then pass out updated scale ranges at some reasonable interval (1 Hz, 10 Hz, etc.).
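
A minimal sketch of the propagation side of this, in Python because LabVIEW is graphical; the registry and link name are hypothetical stand-ins for the proposed named linked scales:

```python
from collections import defaultdict

class ScaleLinkRegistry:
    """Graphs register a callback under a link name; any range change
    on one member propagates to every other member of the link."""
    def __init__(self):
        self.links = defaultdict(list)       # link name -> [callback]

    def register(self, name, on_range_change):
        self.links[name].append(on_range_change)

    def set_range(self, name, lo, hi, source=None):
        for cb in self.links[name]:
            if cb is not source:             # don't echo back to the originator
                cb(lo, hi)

# Usage sketch: two graphs share a hypothetical "time" link.
registry = ScaleLinkRegistry()
g1 = lambda lo, hi: print(f"graph1 x-scale -> [{lo}, {hi}]")
g2 = lambda lo, hi: print(f"graph2 x-scale -> [{lo}, {hi}]")
registry.register("time", g1)
registry.register("time", g2)
registry.set_range("time", 60.0, 120.0, source=g1)   # user zoomed graph1
```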

 

All of the above can be managed with resize events and programmatic handling of those events, but I waste many dev hours on that task. I'd love for this to be an out-of-the-box feature.

I am extending an old idea, but the implementation is different from the OP's, so I made this a new idea:

https://forums.ni.com/t5/LabVIEW-Idea-Exchange/Decimation-feature-built-into-the-graph-indicators/idi-p/1109956

 

What I would want is an XY graph with automatic disk buffering and on-screen decimation. Imagine this: I drop a super-duper large-data-set XY graph. Then I start sending data in chunks of XY pairs to the graph (updating the graph at 10 Hz while acquisition is running at 5000+ Hz). We are acquiring lots of high-rate data. The user wants to see XX seconds on screen, probably 60 seconds, 120 seconds, or maybe 10 or 30 minutes, whatever. That standard plot width in time is defined as a property of the plot. So now data flows in and is buffered to a temp TDMS file on disk, with only the last XX seconds of data showing on the graph. The user can specify a file location for the plot buffers in the plot properties (read-only at runtime).

 

We decimate the incoming data as follows (a sketch of the min/max step appears below):

  • Calculate the maximum possible pixel width of the graph for the largest single attached monitor.
  • Divide the standard display width in time by the max pixel width to calculate the decimation interval.
  • Buffer incoming data in RAM and calculate the min and max values over the time interval that corresponds to one pixel width. Write both the full-rate data and the time-stamped min/max values at the decimation interval to the temp TDMS file.
  • Plot a vertical line filling from the min to the max value at each decimation interval.
  • Incoming data is always decimated at the standard rate, with both the decimated and the full-rate data saved to file.
  • In most use, the user will only watch data streaming to the XY graph without interaction. In some cases they may grab an X scroll bar and scroll back in time; in that case the graph displays the previously decimated values, so that disk reads and processing are minimized for the scroll-back request.
  • If the user pauses the graph update, they can zoom in on X. In that case, the graph would rapidly re-zoom on the decimated envelope of data. In the background, the raw data is read from the TDMS file and re-decimated for the current graph X range and pixel width, and the now less-decimated data is enveloped on screen to replace the prior decimated envelope. The user can carry on zooming in this manner until there is at least one vertical line of pixels for every data point, at which point the user sees individual points rather than an envelope between the min and max values.
  • Temp TDMS files are cleared when the graph is closed.
  • The developer can opt to clear out the specified temp location on each launch, in case a file was left on disk due to a crash.

This arrangement would allow unlimited zooming and graphing of large datasets without writing excessive data to the UI indicator or trying to hold excessive data in RAM. It would allow a user to scroll back over days of data easily and safely. The user would also have access to the high-rate data should they need it.
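
The min/max decimation step referenced above, sketched in Python (LabVIEW being graphical); the bucket boundaries and data sizes are illustrative:

```python
import numpy as np

def minmax_decimate(t, y, n_pixels):
    """For each pixel-wide time bucket, keep the min and max sample
    so the on-screen envelope of the full-rate data is preserved."""
    edges = np.linspace(t[0], t[-1], n_pixels + 1)
    idx = np.searchsorted(t, edges)
    t_out, y_min, y_max = [], [], []
    for lo, hi in zip(idx[:-1], idx[1:]):
        if hi > lo:                           # skip empty buckets
            seg = y[lo:hi]
            t_out.append(t[lo])
            y_min.append(seg.min())
            y_max.append(seg.max())
    return np.array(t_out), np.array(y_min), np.array(y_max)

# Usage sketch: 60 s of 5 kHz data decimated for a 1000-pixel-wide graph.
t = np.arange(0, 60, 1 / 5000.0)
y = np.sin(2 * np.pi * 0.5 * t) + 0.1 * np.random.randn(t.size)
tx, y_lo, y_hi = minmax_decimate(t, y, 1000)
```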

 

Just recently I ran into an issue with XML namespaces where the LabVIEW implementation just didn't cut it. The AE had R&D confirm this too. The workaround, after trawling through the vast sea of .NET XML features and forums, was there. Something this simple shouldn't burn up this much time.

The world around us uses JSON and XML extensively, yet the examples are minuscule. It doesn't matter whether new examples use .NET, Python, or something else, but we need to get to solutions quicker.
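
As an illustration of how small such an example needs to be, here is namespace-aware lookup using Python's standard library; the document, prefix, and URI are invented for the sketch:

```python
import xml.etree.ElementTree as ET

doc = """<cfg:settings xmlns:cfg="http://example.com/cfg">
  <cfg:channel name="ai0"><cfg:range>10</cfg:range></cfg:channel>
</cfg:settings>"""

NS = {"cfg": "http://example.com/cfg"}          # prefix -> namespace URI

root = ET.fromstring(doc)
for ch in root.findall("cfg:channel", NS):      # namespace-qualified search
    print(ch.get("name"), ch.find("cfg:range", NS).text)   # ai0 10
```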

 

How about indexing cluster elements by name or order number?

 

 

[Image: AdarshaPakala_0-1625234655052.png]

 

This would help with dynamically or programmatically indexing the cluster elements.
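
For what the request means in text form, here is a hedged Python sketch with a dataclass standing in for a cluster (all names are illustrative):

```python
from dataclasses import dataclass, fields

@dataclass
class Measurement:                    # stand-in for a LabVIEW cluster
    channel: str
    value: float
    valid: bool

def element_by_name(cluster, name):
    return getattr(cluster, name)

def element_by_order(cluster, index):
    return getattr(cluster, fields(cluster)[index].name)

m = Measurement("ai0", 4.2, True)
print(element_by_name(m, "value"))    # 4.2
print(element_by_order(m, 0))         # ai0
```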

 

 

Thank You

Adarsh

LabVIEW since 2006

CLA since 2014

On larger diagrams (e.g. state machines) it's sometimes annoying to search for a specific control or constant that defines the data type of a wire (especially if it is hidden or outside the visible front panel area), or just to see the exact type of the wire.

To make this easier I have several suggestions:

1) Add the data type to visible items: right-click a wire -> Visible Items -> Data Type. This should show either the basic data type, like "U32", or the name of the type def control or class.


2) Add a tooltip function to wires: after a few seconds of mouse-on-wire, the type / type def name should appear.

[Image: ToolTip.jpg]

 

3) Add an option to open the type def control file by right-clicking a wire (see RightClick.jpg); as an alternative to #2, this menu could also show the data type.

 

[Image: RightClick.jpg]

 

I wanted to suggest adding an option to resize the "Browse for Variable" window, which would help with the development of new projects, especially when you have a generic subVI where you just change the variable but keep the built logic.


In the world of tab controls, a tab caption refers to the actual text in the tabs that you click to select the different pages; see attachment "Tab Caption". Currently, there is no property node that allows you to change the font characteristics of the tab captions. The font characteristics can be customized non-programmatically by right-clicking the tab control and selecting Advanced/Customize. However, within the control editor all the tab caption properties are linked, so if I make the font color red for the page 1 tab caption, it will automatically change the page 2 tab caption text as well. The same applies to the other font characteristics.

 

This functionality would be useful for users who want more control over the aesthetics of their front panel. For example, if a user wanted each tab to represent a test that he was running, he could change the individual text and color to indicate whether the test passed (green "passed") or failed (red "failed").