LabVIEW Idea Exchange

This topic keeps coming up randomly. A LabVIEW class keeps a mutation history so that it can load older versions of the class. But how often does this actually need to be done? I have never needed it, and many others I have talked with have never needed it either. Yet it often causes problems as projects and histories get large. For reference: Slow Editor Performance with Large LabVIEW Projects Containing Many Classes. The history is also just added bloat in your files (the .lvclass file and any built files that use it) for something most of us do not need, and it sometimes causes build errors.

 

My proposal is to add an option to the class properties to keep the mutation history.  It can be enabled by default to keep the current behavior.  But allowing me to uncheck this option and then never have to worry about clearing the history again would be well worth it.


Some languages, like Rust and Zig, have a feature called Tagged Enums (or Sum Types) that lets you create a data type that can hold one of several different types, with a name associated with each one. In LabVIEW, however, Enums are limited to consecutive numeric integer values -- there's no way to associate a type with each named value.

 

The power of combining an Enum with a data type for each value is that we could potentially use a Case Structure as a switch statement with type assertion and data conversion built in! This would allow us to create robust, type-safe code that is easier to maintain and understand.
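
For readers who haven't met the feature elsewhere, here is a minimal Rust sketch of a tagged enum consumed by a match statement -- the text-language analog of the Case Structure behavior proposed here. The Equipment type and its variants are hypothetical names chosen for illustration, loosely following the equipment-variant example pictured below.

// A tagged enum (sum type): each named variant carries its own data type.
enum Equipment {
    Thermometer { celsius: f64 },
    Scale { grams: u32 },
    Camera { resolution: (u32, u32) },
}

fn describe(item: Equipment) -> String {
    // `match` works like a Case Structure with the type assertion and data
    // conversion built in: each arm sees only its own variant's data.
    match item {
        Equipment::Thermometer { celsius } => format!("{celsius} °C"),
        Equipment::Scale { grams } => format!("{grams} g"),
        Equipment::Camera { resolution: (w, h) } => format!("{w}x{h} px"),
    }
}

fn main() {
    println!("{}", describe(Equipment::Scale { grams: 250 }));
}

The compiler forces every variant to be handled and hands each arm the correct data type automatically, which is exactly the kind of robustness this idea is after.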

 

example_equipment_variant.png

See this GitHub repository for a more complete proposal and an example implementation that gets us closer to achieving this in LabVIEW.

Background

 

The DAQmx APIs allow streaming measurements to TDMS. This supports spanning multiple files (by setting the maximum size for individual files in the span set), which is very useful.

We need a similar feature for files we're writing to directly (i.e. not using DAQmx) with the TDMS File functions in LabVIEW.

Jim_Kring_0-1707938647270.png

 

Note: The main reason we want this feature right now is that our files are growing quite large, and when we run the TDMS Defragment function we get out-of-memory errors in RT on cRIO (e.g. if we have 512 MB of RAM on our cRIO and the TDMS file is around the same size).

 


Alternatives

We're considering doing this ourselves; however, the TDMS File API does not support checking the file size from a TDMS file reference (and adding that is a great idea, too).

I'm not totally sure how this would be added to the TDMS File API -- maybe there could be a "TDMS Advanced Spanning" palette with options for configuring and interacting with TDMS spanning.

Currently, the TDMS File API does not offer a way to get the TDMS file size.

 

Our use case is that we'd like to limit the size of the TDMS files and span them across multiple individual files (and I've posted an idea suggestion for adding that as a native feature, too). To do this, we need to be able to monitor the TDMS file size, so that we can save/close the current file and then create the next file in the span for continued use (until we hit the size limit again).
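
For illustration, here is a rough sketch (in Rust rather than G, since block diagrams don't paste into a forum post) of the size-monitored rotation we currently have to do by hand. The 256 MB limit, the file-naming scheme, and the write payload are all assumptions made up for the example.

use std::fs::{metadata, File};
use std::io::Write;

const MAX_FILE_SIZE: u64 = 256 * 1024 * 1024; // assumed per-file limit (256 MB)

// Append a chunk, rolling over to the next file in the span once the current
// file reaches the size limit. Returns the (possibly incremented) file index.
fn write_spanned(base: &str, index: u32, chunk: &[u8]) -> std::io::Result<u32> {
    let path = format!("{base}_{index:04}.tdms");
    // Checking the current size before writing is the step the native
    // TDMS File API gives us no way to do on an open reference today.
    let index = match metadata(&path) {
        Ok(m) if m.len() >= MAX_FILE_SIZE => index + 1, // close/rotate
        _ => index,
    };
    let mut file = File::options()
        .create(true)
        .append(true)
        .open(format!("{base}_{index:04}.tdms"))?;
    file.write_all(chunk)?;
    Ok(index)
}

fn main() -> std::io::Result<()> {
    let mut index = 0;
    for _ in 0..10 {
        index = write_spanned("measurement", index, &[0u8; 1024])?;
    }
    Ok(())
}

A native spanning option in the TDMS File functions would fold all of this into the write node itself, the way DAQmx logging already does.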

 

 

Jim_Kring_0-1707938415587.png

 

The compare tool is fundamentally broken, which is somewhat shocking considering that such a tool is essential to any sort of large-scale or long-term software development.

 

Attached is a basic LV 2023 example that illustrates some of the many flaws in the tool. If you compare the "old version" and "new version" VIs without including cosmetic changes, the tool shows 33 differences, most of which are incorrect.

 

When looking for unexpected behaviour in the time or memory usage of a project, the Profiler is useful and easier than the execution tracer, but it could be made much more useful by adding the ability to monitor for changes and analyse the issues.

 

Mads_0-1695798309247.png

 

Issue:
Currently, to detect how e.g. the memory use or run count of a VI changes over time, you have to take snapshots, save them, and then compare the values in e.g. Excel (which the saved traces do not fit into directly either...).

Proposed feature: Trends
It would be nice if you could just set the tool to automatically sample and log all/selected numbers regularly and then be able to view the trends. 

 

Proposed feature: Automated Analysis
Having trends will help in manually detecting issues, but the profiler could also have tools to help with this, e.g. highlighting which VIs show continuous growth in memory. This could then be expanded by being able to call VI Analyzer on any given VI - preferably set up to identify possible reasons for a memory leak (unclosed references, continuous array building, etc.).

Current Behavior:

When using the "retain all errors" option of the Merge Errors function, the combined error message JSON output is not a valid JSON string.

This is the source output of Merge Errors when "retain all errors" is active - it is not a valid JSON string:

 

<JSONErrorMultiple_1.0>[{
    "status": true,
    "code": 111,
    "source": "This is a test message including\nnewlines"
},{
    "status": true,
    "code": 222,
    "source": "This is a test message including\nnewlines"
}]

 

 

Desired Behavior:

Make the combined errors source string a valid JSON string so that it can be parsed by other tools.

Valid JSON string:

 

[{"status":true,"code":111,"source":"This is a test message including\nnewlines"},{"status":true,"code":222,"source":"This is a test message including\nnewlines"}]

 

 The above JSON string can be formatted nicely by most tools - e.g. JSON pretty print - https://jsonformatter.org/json-pretty-print

 

The below string including the JSONErrorMultiple_1.0 key (but formatted correctly) also works:

 

{"<JSONErrorMultiple_1.0>":[{"status": true,"code": 111,"source": "This is a test message including\nnewlines"},{"status": true,"code": 222,"source": "This is a test message including\nnewlines"}]}

 

 

Timed Loops are only supported on Windows and RT, making it difficult to port applications that use them to Linux.

 

From talking to some NI R&D people, it should be possible to port the RT Linux shared library to desktop Linux. (Perhaps it wouldn't even need to be recompiled, since it's x86.) We also need support in the editor on Linux to allow us to drop the structure.

 

This seems like low-hanging fruit to at least try to implement.

 

Using 2023 for the first time (skipped 2021 and 2022)

 

QuickDrop used to be quick.

It was very convenient.

 

Now it takes a second to load.

 

That's pretty inconvenient if you're trying to... do much of anything.

 

Best example: If you're trying to remove code (Quickdrop's Control-R), now instead of quickly removing the selected code, you get to deal with accidentally RUNNING your VI (Menu's Control-R)

 

Please make it Quick again

Please implicitly consider the array index when indexing/replacing elements in the In Place Element Structure if I am starting from index 0.

 

Present method:
image.png

 

Expected method:
image (1).png

 


I like channel wires, for obvious reasons. I like to have modules that work on their own and come with their own GUI. This is convenient for developing and testing my modules. Afterwards I want to be free to insert such a module into a bigger project, which then takes control of the module.

Channel wire controls on the connector pane can be set to optional, but LabVIEW throws an error when executing a VI with an unconnected channel wire control ("All branches of a channel wire must be connected to a channel endpoint...").
Please make VIs work with unconnected, optional channel wire controls on the connector pane. That way we can have channel-wire-based modules that can run on their own as a single VI, or be controlled remotely, without any change.

Quiztus2_0-1707901224255.png

 

The Swap Values node only supports a Boolean input on the '?' terminal.

swap-values-error-cluster.png

It should also support an error cluster input, in the same way as the Select node (pictured) and many other nodes with Boolean inputs.

There are times when I leave a VI with modal properties open and then run, in the development environment, the main application that also calls this VI. This locks all running windows because of the modal VI. I propose a button in the taskbar that aborts all running VIs, OR perhaps a list of all running VIs that opens on right-click 🙂

 

abort_all_running_vi.png

 

 

It would be nice to easily find objects that prevent a structure with "Auto Grow" from becoming smaller in size. For example, the Add function in the True case in the simplified example below.

 

NIExpert_0-1680056795428.png

 

 

Windows 10 now handles long file paths, although this has to be enabled via various possible registry or group policy edits, and Windows applications have to opt in. LabVIEW source code should be able to utilize long file paths.

Searched briefly but couldn't find any ideas about this.

 

I know we have the ability to use #via_ignore comments to ignore specific tests for a specific VI, but I am looking at a different use case.

 

Here's the use case: I use DQMH. When you create a new DQMH module there is a lot of plumbing code that comes with it. It's standard stuff. Very rarely do I have to open or edit it. Much of it is scripting-generated. It often fails tests, but I don't care. In addition to failing, it takes up test time, which slows my feedback loop. I would like a way to signal to VI Analyzer to skip these files. I know I can use the VIAN API to limit the files it checks, but I was thinking there had to be a better way.

 

Implementation Ideas - I had 2 main ones:

- Regex matching on VI names - with the regex pulled out of a text file somewhere, à la gitignore (see the sketch after this list). Many of the DQMH-generated VIs have standard names, so that is easy. The VIs generated for events don't, but you could easily add a prefix/suffix or something that the regex would pick up.

- Using the Tag API to tag the VI. I like this because the scripting can just apply the tag, or apply it to the template the scripter uses. The downside is that it's kind of hidden from the user, and if I later decide to edit such a VI I may want VIAN to stop ignoring it, and it's not immediately obvious how to do that.
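
As a rough sketch of the first idea, here is how a gitignore-style name filter could look in a text language (Rust, using the regex crate). The ".vianignore" file name and the patterns are hypothetical; the real feature would presumably live inside VI Analyzer itself.

use regex::Regex;
use std::fs;

// Load ignore patterns (one regex per line, '#' for comments) from a
// gitignore-style text file, e.g. a hypothetical ".vianignore".
fn load_ignore_patterns(path: &str) -> Vec<Regex> {
    fs::read_to_string(path)
        .unwrap_or_default()
        .lines()
        .map(str::trim)
        .filter(|line| !line.is_empty() && !line.starts_with('#'))
        .filter_map(|line| Regex::new(line).ok())
        .collect()
}

fn should_skip(vi_name: &str, patterns: &[Regex]) -> bool {
    patterns.iter().any(|re| re.is_match(vi_name))
}

fn main() {
    // Hypothetical patterns such as "^Start Module\.vi$" would match the
    // standard DQMH plumbing names; business-logic VIs would not match.
    let patterns = load_ignore_patterns(".vianignore");
    for vi in ["Start Module.vi", "My Business Logic.vi"] {
        println!("{vi}: skip = {}", should_skip(vi, &patterns));
    }
}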

 

Note:

I picked the Execution and Performance Label because it didn't seem to fit any of the labels easily. If this is the wrong label and you are an admin, please relabel it.

 

 

Classes? OOP? ... Huh?

Even if you don't (yet) work with LV classes, you may have noticed that they are becoming increasingly widespread in the LV world. In fact, the excellent new Actor Framework that ships with LV2012 relies heavily on classes. LV classes are great, but they can impact your performance as a developer as your application becomes larger. I'd encourage everyone to click the magic KUDOS button for this idea, since classes will likely affect us all sooner or later!

 

 

The problem:

Most class-based architectures contain some degree of linking. One form of linking is inheritance, where parent-child relationships are implicitly defined; another arises from nesting libraries, where classes (for example) are placed inside other libraries.

 

Unfortunately as the linking increases in a project, the IDE starts to become very sluggish! Those who have worked on mid-sized class-based applications know the symptoms:

  • Opening the "class properties" window takes 10 seconds or more
  • Renaming a class brings the editor to a standstill

For many projects these symptoms are a minor annoyance, but as your project grows they can become a serious impediment to productivity. Why should it take over 30 seconds to modify a class's inheritance?!

 

Obviously careful design can reduce linking to some extent, but that just postpones the pain. The reality is that all class-based projects start to suffer from these symptoms once they reach a "reasonable" size.

 

 

The idea:

Improve the responsiveness of the LV editor when working with classes.

  • Highly repetitive tasks such as editing a class library's icon deserve a snappy response from the IDE, regardless of how many classes I have loaded!
  • Modifying inheritance is a fundamental operation. It should be quick and easy! (See this related idea)
  • Placing classes in libraries promotes good project organisation. It should *not* bring the editor to a grinding halt!

hierarchy.png

 

Credits:

Others have written about this topic well before me. Here are a few relevant discussions:

Feel free to link more! 🙂

Hello,

 

As shown in the image below, if I index a numeric array and wire the indexed output to any node from the Numeric palette, the wire comes out unaligned, whereas if I wire the same index output to a Boolean function, the wire is well aligned.

Because of this, wiring a Numeric function node to the index output terminal fills our code with wire bends, which also does not follow NI LabVIEW coding standards.

So here I want to draw NI's attention to making some correction to these specific Numeric function nodes, so that we can make neat and clean code in LabVIEW.

Wire cleanup.PNG

Many of these VI properties are "Run-time" writeable. They should not be set while the VI is in an "Idle" state.

 

Screenshot 2023-05-12 191049.png

When it comes to using a queue, there is a somewhat common design pattern used by NI examples: a producer/consumer loop where the consumer uses a dequeue function with a timeout of -1. This means the function will wait forever until an element comes in. But a neat feature of this function is that it also returns when the queue reference becomes invalid, which can happen if the queue reference is closed or if the VI that created that reference stops running.

 

This idea is to have similar functionality when it comes to user events.  I have a common design pattern with a publisher subscriber design where a user event is created and multiple loops register for it.  If for some reason the main VI stops, that reference becomes invalid but my other asynchronous loops will continue running.  In the past I've added a timeout case, where I check to see if the user event is still valid once every 5 seconds or so, and if it isn't then I go through my shutdown process.

 

What I am thinking is that there could be another event to register for, which gets generated when the registered user event becomes invalid, so that polling for the validity of the user event isn't necessary.
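
For comparison, here is a minimal Rust sketch of the behaviour this idea is asking for, using a standard-library channel as a stand-in for the user event: the consumer wakes up with an error the moment the sending side disappears, instead of polling validity every few seconds. The message type and timings are made up for the example.

use std::sync::mpsc;
use std::thread;
use std::time::Duration;

fn main() {
    let (tx, rx) = mpsc::channel::<String>();

    // Consumer loop: analogous to an event structure registered for the
    // user event (or a dequeue with a -1 timeout).
    let consumer = thread::spawn(move || loop {
        match rx.recv() {
            Ok(msg) => println!("got event: {msg}"),
            // The sender was dropped: this is the "reference became invalid"
            // case the idea wants surfaced as its own registerable event.
            Err(_) => {
                println!("event source gone, shutting down");
                break;
            }
        }
    });

    tx.send("hello".to_string()).unwrap();
    thread::sleep(Duration::from_millis(100));
    drop(tx); // the "main VI" stops: the consumer wakes up immediately

    consumer.join().unwrap();
}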

 

before:

before.png

after:

after.png