LabVIEW Idea Exchange


This is an idea I've been working on for a while. It's time to let others start evaluating it. 🙂

000.png

001.png

 ^ I included the above for Dmitry Sagatelyan and similar folks who have asked me for these things over the years so they know the mindset to use when evaluating the idea. But it's written up below for LabVIEW users who only know LabVIEW as it stands today (Q3 2024).

002.png

003.png

004.png

005.png

006.png

007.png

 

Feedback and questions welcome. 

This topic keeps coming up randomly. A LabVIEW class keeps a mutation history so that it can load older versions of the class. But how often does this actually need to be done? I have never needed it. Many others I have talked with have never needed it. But it often causes problems as projects and histories get large. For reference: Slow Editor Performance with Large LabVIEW Projects Containing Many Classes. The history is also just added bloat to your files (the lvclass and any built files that use the lvclass) for something most of us do not need, and it sometimes causes build errors.

 

My proposal is to add an option to the class properties to keep the mutation history.  It can be enabled by default to keep the current behavior.  But allowing me to uncheck this option and then never have to worry about clearing the history again would be well worth it.

TL;DR: please make building in LabVIEW 2023 Q3 and later as fast as in older versions when the compiled object cache is clear and when your file timestamps have changed.

 

LabVIEW 2023 Q3 includes a major change to the app builder: 

 

LabVIEW 2023 Q3 has improved cache behavior for packed project libraries and applications.

 

The first build will populate the cache, and then the subsequent builds will be much faster.

Great! It is pretty easy to confirm this and it works! Unless...

 

1. File timestamps change - such as when you clean a directory and clone project files in your CI job

  • Git does not make file timestamps match the commit date - when you clone fresh, every timestamp is set to the time of the clone; when you change branches, every changed file is reset to now (a workaround sketch follows below)

2. The cache is cleared prior to the build - such as when you want a repeatable build unaffected by prior state so you set your CI jobs to clear it before each build

3. You are doing a "first build" on a new project - such as when you use a container or VM disk image to reset the entire environment to a clean state prior to your CI job

 

In all my testing, any of these conditions reliably makes builds slower in LabVIEW 2023 Q3 than in earlier versions: typically ~60% slower with a cleared cache; with new timestamps and a persisted cache, the slowdown depends on how much of the code has new timestamps.

 

In other words, I expect almost everyone using CI will find building slower in 2023 Q3+ than in earlier versions. So, the idea is: whatever optimizations were done, include an option to revert to the old behavior - build spec, config ini, editor option, whatever.
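
As a stopgap for condition 1 (not a substitute for the requested option), some CI setups reset each file's mtime to its last commit date after checkout, so an otherwise-unchanged working copy also looks unchanged to the cache. A minimal sketch, assuming Python and Git are on the PATH and it is run from the repository root:

# restore_mtimes.py - reset each tracked file's mtime to its last commit date (sketch).
import os
import subprocess

def restore_mtimes(repo_root: str = ".") -> None:
    files = subprocess.run(
        ["git", "ls-files", "-z"],
        cwd=repo_root, capture_output=True, text=True, check=True,
    ).stdout.split("\0")
    for rel_path in filter(None, files):
        # %ct = committer date (Unix time) of the last commit touching this file
        out = subprocess.run(
            ["git", "log", "-1", "--format=%ct", "--", rel_path],
            cwd=repo_root, capture_output=True, text=True, check=True,
        ).stdout.strip()
        if out:
            ts = int(out)
            os.utime(os.path.join(repo_root, rel_path), (ts, ts))

if __name__ == "__main__":
    restore_mtimes()  # note: one git call per file, so this is slow on huge repos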


Some languages like Rust and Zig have a feature called Tagged Enums (or Sum Types) that allow you to create a data type that can be one of a few different types where there is a name associated with each type. In LabVIEW, however, Enums are limited to consecutive numeric integer values -- there's no way to associate a type with each named value.

 

The power of combining an Enum with a data type for each value is that we could potentially use a Case Structure as a switch statement with type assertion and data conversion built in! This would allow us to create robust, type-safe code that is easier to maintain and understand.
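
For readers who have not used those languages: a rough text-language analogy of a tagged enum plus a type-asserting switch (Python 3.10+ here, not LabVIEW; the equipment names are made up for illustration):

# Sketch of a tagged enum ("sum type") and a type-asserting switch.
from dataclasses import dataclass

@dataclass
class Oven:
    max_temp_c: float

@dataclass
class Scale:
    resolution_g: float

@dataclass
class Offline:
    reason: str

Equipment = Oven | Scale | Offline  # one "wire", several named cases

def describe(eq: Equipment) -> str:
    # match plays the role of a Case Structure whose selector both picks the
    # case and hands you that case's data, already typed.
    match eq:
        case Oven(max_temp_c=t):
            return f"Oven rated to {t} degC"
        case Scale(resolution_g=r):
            return f"Scale with {r} g resolution"
        case Offline(reason=why):
            return f"Offline: {why}"

print(describe(Scale(resolution_g=0.1)))  # -> Scale with 0.1 g resolution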

 

example_equipment_variant.png

See this github repository for a more complete proposal and an example implementation that gets us closer to achieving this in LabVIEW.

The LabVIEW compiler currently appears to use one core of a multi-core processor.  It would be nice if it fully utilized multiple cores to speed building of large projects, and recompilation of VIs when editing/opening source code.

Currently, the TDMS File API does not offer a way to get the TDMS file size.

 

Our use case is that we'd like to limit the size of the TDMS files and span them across multiple individual files (I've posted an idea suggestion for adding that as a native feature, too). To do this, we need to be able to monitor the TDMS file size, so that we can save/close the current file and then create the next file in the span for continued use (until we hit the size limit again).
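
To illustrate the spanning logic we have in mind, here is a minimal sketch in a text language (plain binary files in Python, since the TDMS File API currently has no size query; the names and the size limit are placeholders):

# Sketch of size-limited file spanning: write, watch the size, roll over.
import os

MAX_BYTES = 1_000_000_000  # placeholder span limit

class SpanningWriter:
    def __init__(self, base_path: str):
        self.base_path = base_path
        self.index = 0
        self.fh = open(self._path(), "wb")

    def _path(self) -> str:
        return f"{self.base_path}_{self.index:04d}.bin"

    def write(self, chunk: bytes) -> None:
        self.fh.write(chunk)
        self.fh.flush()
        # This size check is exactly what we need on a TDMS file reference.
        if os.path.getsize(self._path()) >= MAX_BYTES:
            self.fh.close()
            self.index += 1
            self.fh = open(self._path(), "wb")

    def close(self) -> None:
        self.fh.close()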

 

 

Jim_Kring_0-1707938415587.png

 

It would be useful to have a node that could be fed any wire and return the size (in bytes) of the data on that wire. In other words, the node would return the number of bytes that the wire occupies in memory.

 

1 (edited).png

For example, the node would return a value of:

  • 1 byte when fed a U8 wire
  • 2 bytes when fed a U16
  • 4 bytes when fed an I32
  • 8 bytes when fed a DBL
  • 800 bytes when fed a 1D array that contains 100 DBL elements
  • 9 bytes when fed a cluster that contains a DBL and a U8
  • 9 bytes when fed an object that contains a DBL and a U8
  • 18 bytes when fed an object that contains two other objects that occupy 9 bytes each
  • and so on

Notes

  • The node would enhance LabVIEW programmers' ability to monitor and audit memory usage.
  • The node may serve as an additional tool to detect memory leaks (by repeatedly calling the VI on the same wire and checking whether the size is going up).
  • The node would simply be interesting to programmers interested in performance and would enable programmers to learn more about LabVIEW internals.
  • The node would be especially useful for querying the size of complex data structures, such as objects that contain other objects that themselves contain objects, or clusters that contain arrays, or arrays that contain clusters, or objects that contain DVRs.
  • I would be happy if the node had a second input named "Mode" (or similar). This input may be a typedef enum with items named "Shallow Measurement" and "Deep Measurement" (or similar). This input could be required, recommended, or optional.
    • When "Shallow Measurement" mode is selected, the node would return the size of all the by-value data fields in the main input wire, but would not add up the size of data referenced by DVRs or other references. For example, a wire that contains a cluster that contains a DBL, a U8, and a DVR would return perhaps 13 bytes (8 + 1 + presumably 4 bytes for the DVR reference itself). It would not add to the result the size of the data referenced by the DVR.
    • When "Deep Measurement" is selected, the node would recursively scan all data structures, including DVRs and other references.
  • In both "Shallow Measurement" and "Deep Measurement" modes there should be no limit to the scanning depth. In other words, if a cluster contains a cluster that contains a cluster and so on, they should all be measured regardless of the nesting depth. Similarly, if "Deep Measurement" is selected, and a DVR contains a DVR that contains a DVR and so on, the data behind all these DVRs should be added to the total.
  • When in "Deep Measurement" mode and fed a Queue reference wire, the node could perhaps return the size of all the data in the queue. In other words, the size of all the elements present in the queue.
  • Perhaps the LabVIEW compiler already uses a "Get Data Size" function internally? If such a function already exists, perhaps it would be relatively straightforward to expose it as a node in the palettes?
  • Perhaps the best location for this node would be in the Programming >> Application Control >> Memory Control palette.
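
As a rough analogy for the shallow vs. deep distinction in a text language (Python here, not LabVIEW; sys.getsizeof is only loosely comparable to a LabVIEW flattened size):

# Shallow vs. deep size measurement, sketched in Python.
import sys

def shallow_size(obj) -> int:
    # Size of the top-level object only; referenced data is not followed.
    return sys.getsizeof(obj)

def deep_size(obj, seen=None) -> int:
    # Recursively follows references (dicts, lists, object attributes),
    # analogous to the proposed "Deep Measurement" mode.
    seen = set() if seen is None else seen
    if id(obj) in seen:
        return 0
    seen.add(id(obj))
    total = sys.getsizeof(obj)
    if isinstance(obj, dict):
        total += sum(deep_size(k, seen) + deep_size(v, seen) for k, v in obj.items())
    elif isinstance(obj, (list, tuple, set)):
        total += sum(deep_size(item, seen) for item in obj)
    elif hasattr(obj, "__dict__"):
        total += deep_size(vars(obj), seen)
    return total

data = {"samples": [1.0] * 100, "unit": "V"}
print(shallow_size(data), deep_size(data))  # the deep result is much larger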

2.png

Thanks!

The recently introduced Raspberry Pi is a 32-bit ARM-based single-board computer that is very popular. It would be great if we could programme it in LabVIEW. This product could leverage the already available LabVIEW Embedded for ARM and the LabVIEW Microcontroller SDK (or other methods of getting LabVIEW to run on it).

 

The Raspberry Pi is a $35 (with Ethernet) credit-card-sized computer that is open hardware. Its Broadcom SoC has an ARM11 core running at 700 MHz, delivering about 875 MIPS of performance. By way of comparison, the current LabVIEW Embedded for ARM Tier 1 (out-of-the-box experience) boards have only 60 MIPS of processing power. So, about 15 times the processing power!

 

Wouldn’t it be great to programme the Raspberry Pi in LabVIEW?

The array "Startup/Always Included" carries these Vi's into pre/post build action VI's.

The order of the vi's in this array depends on the order they have been added to/created in the project when, not the order they can be seen on the project window itself.

Here's the project window:

GICSAGM_0-1718176051948.png

Here's the order within prebuild vi:

GICSAGM_1-1718176157393.png

 

The order is NOT the same. "startup" is first, but if you delete it from the project and then re-add it, it becomes last.

 

I suggest that the order in pre/post build VI should be:

first - startup.vi

following: always-included VIs in the same order they appear in the project window.

 

Also, the always-included list should have a mechanism to change the order of the VIs in the list - be it up/down arrow buttons on the side that move the selected VI, or a similar command in a right-click drop-down menu.

 

In this way, any pre/post-build action that involves these VIs can be clearly defined and remain stable over the lifetime of the project, without the risk of operating on the wrong VI and without having to edit the pre/post-build VI to add or change VI names every time the startup/always-included list changes.

 

Background

 

The DAQmx APIs allow streaming measurements to TDMS. This supports spanning multiple files (by setting the max file size for individual files in the span set), which is very useful.

We need a similar feature for files we're writing to directly (i.e. not using DAQmx) with the TDMS File functions in LabVIEW.

Jim_Kring_0-1707938647270.png

 

Note: The main reason we want this feature right now is that our files are growing quite large, and when we run the TDMS Defragment function we get out-of-memory errors in RT on cRIO (e.g. when we have 512 MB of RAM on our cRIO and the TDMS file is around the same size).

 


Alternatives

We're considering doing this ourselves; however, the TDMS File API does not support checking the file size from a TDMS file reference (and that's a great idea, too).

I'm not totally sure how this would be added to the TDMS File API -- maybe there could be a "TDMS Advanced Spanning" palette with options for configuring and interacting with TDMS spanning.

The compare tool is fundamentally broken, which is somewhat shocking considering that such a tool is essential to any sort of large-scale or long-term software development.

 

Attached is a basic LV 2023 example which illustrates some of the many flaws with the tool. If you compare the "old version" and "new version" VIs without including cosmetic changes, the tool shows 33 differences, most of which are incorrect.

 

Please implicitly wire the array index for the Array Index / Replace Elements border node of the In Place Element Structure when I am starting from index 0.

 

Present method:
image.png

 

Expected method:
image (1).png

 

[admin edit 2021-02-24]: placed images in-line with text and removed them as attachments

A typical question during development: "How quickly does my code execute? What runs faster... Code A or Code B?" So, if you're like me, you throw in a quick sequence that looks like this:

 

TimingDuringDevelopment.png

 

AHHH! What a mess! It's so hard to fit it in, with FP real estate so packed these days!

 

We need this:

ProposedTimingDuringDevelopment.png

Just like my other idea, and for simplicity's sake, NI, I would be PERFECTLY happy even if you had to set up the probes during edit mode and could not "probe" while running.

 

 As a bonus, this idea may be extrapolated into n timing probes, where you can find delta t between any two of the probes.
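
A rough sketch of the n-probe / delta-t idea in a text language (Python; the probe names and the workload are placeholders):

# Sketch of n timing probes with delta-t between any two of them.
import time

class TimingProbes:
    def __init__(self):
        self.stamps = {}

    def probe(self, name: str) -> None:
        # Drop a "timing probe" at this point in the code.
        self.stamps[name] = time.perf_counter()

    def delta(self, a: str, b: str) -> float:
        # Elapsed seconds between probe a and probe b.
        return self.stamps[b] - self.stamps[a]

probes = TimingProbes()
probes.probe("before A")
sum(range(1_000_000))  # stand-in for "Code A"
probes.probe("after A")
print(f"Code A took {probes.delta('before A', 'after A'):.6f} s")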

When looking for unexpected behaviour in the time or memory usage of a project, the Profiler is useful and easier than the execution tracer, but it could be made much more useful by adding the ability to monitor for changes and analyse the issues.

 

Mads_0-1695798309247.png

 

Issue:
Currently, to detect how e.g. the memory usage or run counts of a VI change over time, you have to take snapshots, save them, and then compare the values in e.g. Excel (which the saved traces do not directly fit into either...).

Proposed feature: Trends
It would be nice if you could just set the tool to automatically sample and log all/selected numbers regularly and then be able to view the trends. 

 

Proposed feature: Automated Analysis
Having trends will help with manually detecting issues, but the profiler could also have tools that help with this, e.g. highlighting which VIs show continuous growth in memory. This could then be expanded by being able to call the VI Analyzer on any given VI - preferably made/set up to identify possible reasons for a memory leak (unclosed references, continuous array building, etc.).
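
A minimal sketch of what the trend sampling and "continuous growth" flagging might look like in a text language (Python; the sampled values and the 90% threshold are placeholders):

# Sketch: sample a metric per VI at a fixed interval and flag steady growth.
def flag_continuous_growth(samples: dict, min_points: int = 10) -> list:
    # samples maps a VI name to its memory readings over time.
    suspects = []
    for vi, values in samples.items():
        if len(values) < min_points:
            continue
        rises = sum(1 for a, b in zip(values, values[1:]) if b > a)
        # Flag VIs whose memory rises in (almost) every sampling interval.
        if rises >= 0.9 * (len(values) - 1):
            suspects.append(vi)
    return suspects

history = {
    "Logger.vi":  [10.0, 10.8, 11.7, 12.5, 13.6, 14.4, 15.3, 16.1, 17.0, 18.2],
    "Display.vi": [5.0, 5.1, 5.0, 5.1, 5.0, 5.1, 5.0, 5.1, 5.0, 5.1],
}
print(flag_continuous_growth(history))  # -> ['Logger.vi']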

There are times when I leave a VI with modal window properties open in the development environment and then run the main application, which also calls this VI. This locks all running windows because of the modal VI. I propose a button in the taskbar that aborts all running VIs, OR perhaps a list of all running VIs opened on right-click 🙂

 

abort_all_running_vi.png

 

 

Using 2023 for the first time (skipped 2021 and 2022)

 

QuickDrop used to be quick.

It was very convenient.

 

Now it takes a second to load.

 

That's pretty inconvenient if you're trying to... do much of anything.

 

Best example: If you're trying to remove code (Quickdrop's Control-R), now instead of quickly removing the selected code, you get to deal with accidentally RUNNING your VI (Menu's Control-R)

 

Please make it Quick again

The Swap Values node only supports a Boolean input on the '?' terminal.

swap-values-error-cluster.png

It should also support an error cluster input, in the same way as the Select node (pictured) and many other nodes with Boolean inputs.

As soon as we have more complicated data structures (e.g. clusters of arrays), a large portion of the FP real estate is wasted, taken up by borders, frames, trims, etc.

 

We need a palette full of "Amish" 😉 controls, indicators, and containers that eliminate all that extra baggage. We have a few controls already in the classic palette, but this needs to be expanded to include all types of controls, including graphs, containers, etc.

 

A flat control consists of a plain square and some text (numerical value, string, ring, boolean text, etc). A flat container is a simple borderless container.  A flat graph is a simple line drawing that would look great on a b&w printer. A flat picture ring looks like the image alone.

 

They have a single area color and a single-pixel outline; if both have the same color, the outline does not show. They can also be made transparent, of course. If we look at them in the control editor, there are only very few parts.

 

Now, why would that be useful?

 

Let's have a look at the data structure in the image. There is way too much fluff, distracting from the actual data. If we had flat objects, the same could look like the "table" below. Note that this is now the actual array of clusters, no formatting involved! It is fully operational, e.g. I can pick another enum value, uncheck the boolean, or enter data as in the cluster above.

 

Many years ago in LabVIEW 4, I actually made a borderless cluster container in the control editor and it looked fine, but it was difficult to use because it was nearly impossible to grab the right thing with the mouse at edit time.

 

The main problem, of course, is that the object edges completely overlap, making targeted selection with the mouse impossible. (For example, the upper-right corner pixel is the corner of an array, a cluster, another array, and an element at the same time.)

 

So what we need is a layer selection tool that allows us to pick what we want (similar to tools in graphics editing software). It could look similar to the context help shown in the picture, with selection boxes for each line. Picking an object would show the relevant handles so we can interact with the desired object. Another possibility would be to hover over the corner and hit a certain key to rotate through all nearby elements until the right element is selected, showing its resize handles. I am sure there are other solutions.

 

As a welcome side effect, redrawing such an FP is relatively cheap.

 

Message Edited by altenbach on 06-03-2009 09:20 AM

I don't know how many times I've added a case structure post-programming, but I do know that there isn't an easy way to make a tunnel the case selector. Usually I delete the tunnel, drag the case selector down, and then rewire; there should be an easier way. For Loops and While Loops have an easy way to index/unindex a tunnel or replace it with a shift register - why can't a case structure be the same?

Case2.PNG 

Case3.PNG 

Current Behavior:

When using the "retain all errors" option of the Merge Errors function the combined error message JSON output is not a valid JSON string.

This is the source output of Merge Errors when retain all errors is active - it is not a valid JSON string.

 

<JSONErrorMultiple_1.0>[{
    "status": true,
    "code": 111,
    "source": "This is a test message including\nnewlines"
},{
    "status": true,
    "code": 222,
    "source": "This is a test message including\nnewlines"
}]

 

 

Desired Behavior:

Make the combined errors source string a valid JSON string so that it can be parsed by other tools.

Valid JSON string:

 

[{"status":true,"code":111,"source":"This is a test message including\nnewlines"},{"status":true,"code":222,"source":"This is a test message including\nnewlines"}]

 

 The above JSON string can be formatted nicely by most tools - e.g. JSON pretty print - https://jsonformatter.org/json-pretty-print

 

The below string including the JSONErrorMultiple_1.0 key (but formatted correctly) also works:

 

{"<JSONErrorMultiple_1.0>":[{"status": true,"code": 111,"source": "This is a test message including\nnewlines"},{"status": true,"code": 222,"source": "This is a test message including\nnewlines"}]}