LabVIEW Idea Exchange


Many of us use graphical programming for scientific applications, where we deal with numbers, measurements, etc.

How often do we reach for the Windows Calculator to compute simple equations?

What about the ability to enter something like 3,75*2,8 into any constant or control (in principle, anywhere we can enter numbers) and get the computation result in its place:

Screenshot 2024-03-06 09.16.22.png

In the past I worked in the desktop publishing industry, using software called "Macromedia Freehand MX", and this was a real "killer feature" that saved a huge amount of time.

 

This is how it works:

resize.gif

Or for example, 5 rotated copies:

rotate.gif

Even in the Color mixer, simple computations are allowed:

mixer.gif

Anywhere I can enter numbers, in any dialog:

guides.gif

So, my suggestion is to have the same in every numeric control or constant.

 

This is what I mean:

numeric1.gif

It should be possible to enter something like "3*5" or "42+3*5" here.

As an MVP suggested, having only the base operations *, /, +, - (combined as shown above) would be enough at a minimum, but maybe "advanced" support (like that fully offered by the Formula String) is also not so bad, why not:

numeric2.gif
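To make the intent concrete, here is a minimal sketch of the evaluation behaviour (in Python, purely to describe what the numeric field would do; the feature itself would live inside LabVIEW): only numbers and the four base operations are accepted, and a comma works as the decimal separator, as in the screenshots above.

```python
import ast
import operator

# Allowed binary operations: only *, /, +, - as proposed above.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def eval_numeric_entry(text):
    """Evaluate a simple arithmetic entry such as '3,75*2,8' or '42+3*5'."""
    tree = ast.parse(text.replace(",", "."), mode="eval")  # comma decimal separator

    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and isinstance(node.op, ast.USub):
            return -walk(node.operand)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("only numbers and + - * / are allowed")

    return walk(tree)

print(eval_numeric_entry("3,75*2,8"))  # 10.5
print(eval_numeric_entry("42+3*5"))    # 57
```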

In any case, it should work everywhere, including constants on the Block Diagram:

numeric3.gif

Also, for example, in the Resize Objects dialog:

Screenshot 2024-03-06 11.20.04.png

Or even in the Settings (in general, anywhere there is a numeric field across the whole of LabVIEW):

options.png

 

And it should also work at run time, of course, not only in the Development Environment.

 

If you think an "always enabled" feature would be annoying, then I suggest making it optional per control/constant:

Screenshot 2024-03-06 11.31.54.png

Or maybe as a global setting in the Options.

Background

 

The DAQmx APIs allow streaming measurements to TDMS. This supports spanning multiple files (by setting the maximum size of the individual files in the span set), which is very useful.

We need a similar feature for files we're writing to directly (i.e. not using DAQmx) with the TDMS File functions in LabVIEW.

Jim_Kring_0-1707938647270.png

 

Note: The main reason we want this feature right now is that our files are growing quite large, and when we run the TDMS Defragment function we get out-of-memory errors in RT on cRIO (e.g. if we have 512 MB of RAM on our cRIO and the TDMS file is around the same size).

 


Alternatives

We're thinking of doing this ourselves; however, the TDMS File API does not support checking the file size from a TDMS file reference (and adding that would be a great idea, too).

I'm not totally sure how this would be added to the TDMS File API -- maybe there could be a "TDMS Advanced Spanning" palette with options for configuring and interacting with TDMS spanning.

Currently, the TDMS File API does not offer a way to get the TDMS file size.

 

Our use case is that we'd like to limit the size of the TDMS files and span them across multiple individual files (I've posted an idea suggesting that as a native feature, too). To do this, we need to be able to monitor the TDMS file size, so that we can save/close the current file and then create the next file in the span for continued use (until we hit the size limit again).
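As a rough sketch of the rollover logic we have in mind (written in Python rather than G, and with open_file/write_segment/close_file as hypothetical stand-ins for the TDMS Open/Write/Close functions), the missing piece is exactly the "get file size" query:

```python
import os

MAX_FILE_SIZE = 100 * 1024 * 1024  # e.g. 100 MB per file in the span set

def write_spanned(base_path, segments, open_file, write_segment, close_file):
    """Roll over to a new file whenever the current one exceeds the size limit.

    open_file/write_segment/close_file are hypothetical stand-ins for the
    TDMS File functions; os.path.getsize plays the role of the 'get TDMS
    file size' function the API currently lacks.
    """
    index = 0
    path = f"{base_path}_{index:04d}.tdms"
    ref = open_file(path)
    for segment in segments:
        write_segment(ref, segment)
        if os.path.getsize(path) >= MAX_FILE_SIZE:
            close_file(ref)                        # finish the current span file
            index += 1
            path = f"{base_path}_{index:04d}.tdms"
            ref = open_file(path)                  # continue in the next file
    close_file(ref)
```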

 

 

Jim_Kring_0-1707938415587.png

 

While struggling to replace an R&S FSW-K70 with a VST and RFmx for my customer, I found that 16APSK and 32APSK are not directly supported by RFmx DeMod. The customer is currently using the FSW-K70, and their current test scenarios require 16APSK and 32APSK modulation and demodulation.

 

I found a thread mentioning APSK support below.

 

https://forums.ni.com/t5/LabVIEW/APSK-Modulation-Demodulation/td-p/4232293
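In the meantime, the constellation itself is easy to generate for custom modulation. Here is a sketch in Python of a 16APSK constellation in the DVB-S2 style 4+12 ring layout; the ring ratio of 2.85 is my assumption based on the DVB-S2 rate-3/4 value, so check the value your standard actually specifies:

```python
import numpy as np

def apsk16_constellation(gamma=2.85):
    """16APSK symbol points: 4 inner + 12 outer ring (DVB-S2 style layout).

    gamma is the outer/inner ring-radius ratio; 2.85 is an assumption taken
    from the DVB-S2 rate-3/4 parameters -- use the ratio your standard gives.
    """
    r1 = 1.0
    r2 = gamma * r1
    inner = r1 * np.exp(1j * (np.pi / 4 + np.pi / 2 * np.arange(4)))
    outer = r2 * np.exp(1j * (np.pi / 12 + np.pi / 6 * np.arange(12)))
    return np.concatenate([inner, outer])

print(apsk16_constellation().round(3))
```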

When looking for unexpected behaviour in the time or memory usage of a project, the Profiler is useful and easier than the execution tracer, but it could be made much more useful by adding the ability to monitor for changes and analyse the issues.

 

Mads_0-1695798309247.png

 

Issue:
Currently, to detect how e.g. the memory usage or run counts of a VI change over time, you have to take snapshots, save them, and then compare the values in e.g. Excel (which the saved traces do not directly fit into either...).

Proposed feature: Trends
It would be nice if you could just set the tool to automatically sample and log all/selected numbers regularly and then be able to view the trends. 

 

Proposed feature: Automated Analysis
Having trends will help in manually detecting issues, but the profiler could also have tools that help with this, e.g. highlighting which VIs show continuous growth in memory. This could then be expanded by being able to call the VI Analyzer on any given VI, preferably made/set up to identify possible reasons for a memory leak (unclosed references, continuous array building, etc.).
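As a sketch of the kind of automated analysis I mean (in Python for brevity; the sample store and threshold are illustrative assumptions), flagging "continuous growth" could be as simple as fitting a trend line through the logged samples per VI:

```python
import numpy as np

def flag_possible_leaks(samples_by_vi, min_slope=1024.0):
    """samples_by_vi: {vi_name: [memory_bytes sampled at regular intervals]}.

    Flags a VI when a least-squares trend line through its samples rises
    faster than min_slope bytes per sampling interval -- a crude but useful
    heuristic for continuous memory growth.
    """
    suspects = []
    for vi, samples in samples_by_vi.items():
        t = np.arange(len(samples))
        slope = np.polyfit(t, samples, 1)[0]  # bytes per sampling interval
        if slope > min_slope:
            suspects.append((vi, slope))
    return sorted(suspects, key=lambda s: -s[1])
```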

One of the most common operations performed on arrays is to determine whether an element is found inside the array or not.

 

There should be a function dedicated to this fundamental operation. My workaround is to use the "Search 1D Array" function followed by a "Greater Or Equal To 0?" function, as seen below. While it only takes a few seconds to drop the comparison function using Quick Drop, it's still slightly frustrating that this is necessary.
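In textual terms (Python shown purely for comparison), the proposal collapses the two-step workaround into the one-step membership test most languages provide:

```python
def search_1d_array(array, element):
    """Equivalent of LabVIEW's Search 1D Array: index of element, or -1."""
    try:
        return array.index(element)
    except ValueError:
        return -1

values = [3, 1, 4, 1, 5]
found = search_1d_array(values, 9) >= 0  # current two-step workaround
found = 9 in values                      # the proposed one-step "Element of Array?"
```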

 

The set and map data types have rightly been given the "Element of Set?" and "Look In Map" functions. An equivalent should be provided for arrays.

Element of Array - edited.png

 

Thanks! 

Many common functions include a "found" or "exists" output. Examples include:

  • Get Variant Attribute
  • Element of Set?
  • Config file VIs (Read Key, Write Key, Get Key Names, etc.)
  • Check if File or Folder Exists

 

Why, then, does Look In Map provide an inverted ("not found") output? Wouldn't it be better if it were consistent with other similar functions?

fabric_0-1617240144803.png

This is most frustrating when replacing existing code that uses variant attribute lookups with equivalent maps. "Look In Map" is pin-compatible with "Get Variant Attribute" except for that one inverted output! This has caught me out on more than one occasion...

Make it possible for Boolean functions to accept an error cluster as input, as in this example:

 

StopOnError.png     StopOnErrorCast.png
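In textual form, the proposal amounts to this (a Python sketch, treating the error cluster as a record whose status field carries the boolean meaning):

```python
# A LabVIEW error cluster is (status, code, source); the idea is that
# Boolean functions read just the status field when wired to an error cluster.
error = {"status": True, "code": 5005, "source": "Read File"}
stop_button = False

stop = stop_button or error["status"]  # "Or" accepting an error cluster directly
```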

In LabVIEW you need to know the number of submatches at edit time and cannot handle arbitrary regular expressions. It would be nice if there were a regex function that returned submatches in an array, which can be handled much more abstractly than a pre-sized XNode.

 

In C++, the regex utilities return a container-like class: https://www.cplusplus.com/reference/regex/match_results/

In PHP, an array of matches is returned: https://www.php.net/manual/en/function.preg-match.php

In JavaScript, an array of matches is returned: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String/match
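For comparison, this is the Python behaviour I'd like mirrored: the submatches come back as an array whose length is only known at run time.

```python
import re

def submatches(pattern, text):
    """Return all submatches as an array, sized at run time."""
    m = re.search(pattern, text)
    return list(m.groups()) if m else []

print(submatches(r"(\d+)-(\d+)", "range 10-20"))    # ['10', '20']
print(submatches(r"(\w+)@(\w+)\.(\w+)", "a@b.com"))  # ['a', 'b', 'com']
```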

 

Many or most VIs that ship with LabVIEW have their protection set to Unlocked (no password). The screenshot below shows a selection of such VIs.

1.png

 

It would be much better if the protection of vi.lib VIs was set to "Locked (no password)", to prevent accidental modification.

2.png

 

It seems very risky for built-in VIs to be open to modification, especially accidental modification.

 

Scenario 1: Developer A is developing an application on their machine, which contains vi.lib VIs that were accidentally modified during work on previous projects. They build an application which passes validation and starts to be used in production. All of the developer's source code is committed to a source code repository. Developer A leaves the company. Six months later, Developer B is asked to pull the code from the repository and add a minor improvement. The application behaves very differently when Developer B builds it on their machine. A long and complicated troubleshooting session later, Developer B concludes that the different behaviour was likely caused by modified vi.lib VIs on Developer A's machine. Developer B cannot be sure, because Developer A's machine was wiped when they left, so there is no way to unequivocally prove the conclusion.

 

Scenario 2: A team of developers builds a test system for a defence application. The code is completed, and the test system is put through a thorough commissioning and validation process that involves testing dozens of known-good units and known-bad units. The validation process takes three weeks to complete. Management plans not to run the whole validation process for future minor changes. Instead, they will ask the development team to perform code reviews and record notes for each minor change. Revalidation is not necessary if the code reviewers agree that the changes are non-functional, for example, the wording was changed in a dialogue message, or a logo was added to the UI. This sounds like a great plan, but it is technically unsafe. Strictly speaking, the whole revalidation process would have to be rerun even for minor changes, because not all of the source code is visible in the repository (there is uncontrolled source code in vi.lib that could have been modified between builds).

Essentially, I don't think it's safe for an app to contain source code that is not visible or tracked in a repository.

I can't think of a simple, quick solution to the concerns above, but having all vi.lib VIs set to "Locked (no password)" could be a quick first step towards reducing the likelihood of this issue. Developers would at least have to consciously edit vi.lib VIs, rather than doing it accidentally which can happen now. Of course, a malicious actor could still wreak havoc by editing a few inconspicuous vi.lib VIs.

The risk would be reduced further if the vi.lib VIs were password-protected. This would come at the expense of not being able to view the source code of native VIs, something which I find useful. Therefore, I personally would prefer "Locked (no password)" to password-protected, but I might prefer password-protected to unlocked.

Similar concerns apply to non-NI third-party libraries that install into vi.lib, user.lib or instr.lib, for example the extremely useful OpenG libraries. These too are examples of uncontrolled source code. For this reason, some developers I worked with preferred to copy the OpenG and other libraries into the project repository (a tedious job of opening each library VI and relinking it to the other library VIs in their new location).
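Until something like this exists, one mitigation is to hash vi.lib against a known-good baseline as part of the build. A small script along these lines would do it (a Python sketch; the baseline file format and where you store it are up to the team):

```python
import hashlib
import json
import pathlib

def hash_tree(root):
    """SHA-256 of every file under root (e.g. <LabVIEW>/vi.lib)."""
    hashes = {}
    for path in sorted(pathlib.Path(root).rglob("*")):
        if path.is_file():
            hashes[str(path.relative_to(root))] = hashlib.sha256(
                path.read_bytes()).hexdigest()
    return hashes

def check_against_baseline(root, baseline_file):
    """Report vi.lib files that differ from (or are missing in) the baseline."""
    baseline = json.loads(pathlib.Path(baseline_file).read_text())
    current = hash_tree(root)
    return [f for f, h in current.items() if baseline.get(f) != h]
```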


This idea is similar to, but potentially easier to implement than, the following idea: Make the VI's from the "vi.lib" Read-only - NI Community

 

Thanks

When you compare two VIs, one with a control and the other with the same control but with a change to the text justification (you can modify it via the dropdown menu of text settings, next to the pause button):

 

daguero_0-1680115378128.png

 

 

If you compare the VIs at this moment, it only shows this:

----------------------------

Difference Type: text justification

 

As we can see, there is information missing relative to other comparisons, such as the label name, object type and a detailed description, as in the following example:

 

----------------------------

Label Name: Label1

Object Type: String

Difference Type: moved

Detailed Description: Changed from "(168,132)" to "(12,71)"

 

daguero_1-1680115378130.png

 

 

 

Consider that this is a small scenario (only comparing two indicators), where the difference can be found easily, but if you try this with bigger VIs, a lot of information will be missing.

 

The fact that this information is not available is time-consuming; even though the differences can be walked through manually, the record of changes is compromised.

Currently, you can place a probe on a wire while developing, which acts as an indicator of the data on the wire. I want the ability to CONTROL the data on the wire, with a data-forcing mechanism.

 

The implementation would be very simple... right-click on a wire, and in the context menu the option "Force" would be right under "Probe". It would pop up a window with the forcing control, and while the VI is running and forcing is set to "Enable", the programmer can control the values that pass along the wire. If the force window were set to "Disable", the data on the wire would be completely controlled by the VI's logic.

 

DataForcing.png

 

I think the implementation by NI could be trivially simple. If you only allow a forcing control to be added during edit mode (not while the VI is running), the force could be added as an inline VI (as denoted by the green rectangle on the wire). The code inside the inline VI would be as follows, and the front panel would be "Data Force (1)" as shown above.

 

ForcingImplementation.png

 

Of course, if you could add a force to a wire during runtime like probes, props to NI. But I would be PERFECTLY happy if you could only add these force controls in edit mode, prior to running.

 

One level further (and this would be AMAZING, NI, AMAZING): enable and disable certain parts of the cluster that you would like to force, and allow the other elements to be controlled by the VI logic. I made the example above because it would be very natural to force ONLY Sensor1 and Sensor2, letting the output run its course from your forced input.
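Expressed textually, the logic inside the hypothetical inline force VI is tiny (a Python sketch, including the per-element cluster forcing; the names are illustrative):

```python
def force(wire_value, forced_value, enabled):
    """The inline force VI: pass the wire through, or substitute the forced value."""
    return forced_value if enabled else wire_value

def force_cluster(wire_cluster, forced_elements):
    """Per-element forcing: only the keys in forced_elements are overridden.

    e.g. force_cluster({"Sensor1": 0.1, "Sensor2": 0.2, "Output": 1.4},
                       {"Sensor1": 5.0, "Sensor2": 5.0})
    leaves "Output" under the control of the VI logic.
    """
    return {**wire_cluster, **forced_elements}
```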

It is time to put a dent in the floating point "problems" encountered by many in LV.  Due to the (not so?) well-known limitations of floating point representations, comparisons can often lead to surprising results.  I propose a new configuration for the comparison functions when floats are involved, call it "Compare Floats" or otherwise.  When selected, I suggest that Equals? becomes "Almost Equal?" and the icon changes to the approximately equal sign.  EqualToZero could be AlmostEqualToZero, again with appropriate icon changes.  GreaterThanorAlmostEqual, etc.

 

AlmostEqual.png

 

 

I do not think these need to be new functions on the palette, just a configuration option (Comparison Mode). They should expose a couple of terminals for options so we can control what "close" means (number of significant figures, number of digits, absolute difference, etc.), with reasonable defaults so that in most cases we do not have to worry about it. We get all of the ease and polymorphism that comes with the built-in functions.
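For reference, the semantics I'm after are essentially those of math.isclose in Python: a relative tolerance with a sensible default, plus an absolute tolerance for comparisons near zero (the "AlmostEqualToZero" case).

```python
import math

print(0.1 + 0.2 == 0.3)                        # False: the classic Equals? trap
print(math.isclose(0.1 + 0.2, 0.3))            # True with the default relative tolerance
print(math.isclose(1e-12, 0.0, abs_tol=1e-9))  # True: absolute tolerance near zero
```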

 

There are many ways to do this, and I won't be so bold as to specify which way to go. I am confident that any reasonable method would be a vast improvement over the current method, which is to hope that you are never bitten by Equals?.

In .NET each of the integer data types provides two constant fields, named MinValue and MaxValue. These constants return the minimum and maximum values that can be stored in that data type. For example, for Int32 MinValue would return -2147483648 and MaxValue would return 2147483647.

 

1 C# min and max values.png

 

It would be nice if there were similar constants in LabVIEW for all eight integer data types. These constants would be similar to the DBL constants that already exist (+Inf, -Inf, Machine Epsilon and NaN), and similar in concept to string constants such as Space Constant and End of Line Constant.

 

The new constants could be placed in the Numeric palette, or in a new sub palette of the Numeric palette, as shown in the image below. I would be happy with any location.

 

3 Where to place the constants in LabVIEW (edited).png

 

I am aware that I can easily create constant VIs to implement this functionality, which would be very similar to how Space Constant.vi is implemented, but it would be nice if the constants were built-in.

 

My current workaround when I need the min or max value of an integer data type is to drop a numeric constant and type something like -9999999999999999 or 9999999999999999, which LabVIEW correctly coerces to the min or max value after Enter is pressed.
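This mirrors what other environments expose; besides the .NET fields above, NumPy in Python makes the limits queryable per integer type:

```python
import numpy as np

# MinValue/MaxValue for all eight integer data types, as proposed for LabVIEW.
for dtype in (np.int8, np.int16, np.int32, np.int64,
              np.uint8, np.uint16, np.uint32, np.uint64):
    info = np.iinfo(dtype)
    print(f"{info.dtype}: MinValue={info.min}, MaxValue={info.max}")
# e.g. int32: MinValue=-2147483648, MaxValue=2147483647
```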

 

Thanks

LabVIEW has a somewhat hidden feature built into the variant attribute functionality that easily allows the implementation of high-performance associative arrays. As discussed elsewhere, it is implemented as a red-black tree.

 

I wonder if this functionality could be exposed with a more intuitive set of tools that does not require dummy variants and somewhat obscure VIs hidden deep in the variant palette (who would ever look there!).

 

Also, the key is currently restricted to strings (of course, we can flatten anything to a string to make a "name" for a more generalized use of all this).

 

I imagine a set of associative array tools:

 

 

  • Create associative array (key datatype, element datatype)
  • insert key/element pair (replace if key exists)
  • lookup key (index key) to get element
  • read all keys
  • delete key/element
  • delete all keys/elements
  • dump associative array to disk
  • restore associative array from disk
  • destroy associative array
  • ... (I probably forgot a few more)
 
 
I am currently writing such a tool set as a high performance cache to avoid duplicate expensive calculations during fitting (Key: flattened input parameters, element: calculated array).
 
I am currently writing such a tool set as a high-performance cache to avoid duplicate expensive calculations during fitting (key: flattened input parameters; element: calculated array).
 
Example performance (key size: 1200 bytes, element size: 4096 bytes, 10000 elements):
insert: ~60 microseconds
random lookup: ~12 microseconds
(compare with a random lookup using linear search (search array): 10ms average. 1000x slower!)
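In textual form, the cache I'm writing amounts to this (a Python dict sketch; in LabVIEW the dict's role is played by a variant with attributes, and Flatten To String produces the key):

```python
cache = {}  # key: flattened input parameters, element: calculated array

def cached_calculation(params, expensive_calculation):
    """Skip duplicate expensive calculations by keying on the flattened inputs."""
    key = repr(params)  # stand-in for LabVIEW's Flatten To String
    if key not in cache:                            # lookup key
        cache[key] = expensive_calculation(params)  # insert key/element pair
    return cache[key]
```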
 
Thanks! 

 

It would be helpful if the LabVIEW Python node natively supported Python dictionaries as LabVIEW maps. This would make it simpler/easier to support a frequently used Python structure. You can work around it, but you have to do extra data preparation/formatting in both LabVIEW and Python. It would be nice if the node handled the conversion to a list of tuples and building the LabVIEW map (or vice versa) for us.
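The workaround today looks roughly like this on the Python side (converting to and from a list of key/value tuples that the node can pass, which then still has to be rebuilt into a map in G):

```python
def to_pairs(d):
    """Prepare a dict for the Python node: a list of (key, value) tuples."""
    return list(d.items())

def from_pairs(pairs):
    """Rebuild a dict from the (key, value) pairs a LabVIEW map was turned into."""
    return dict(pairs)

print(to_pairs({"gain": 1.5, "offset": 0.1}))  # [('gain', 1.5), ('offset', 0.1)]
```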

Add a field which shows the size of the array.

 

new probe watch window

 

So simple but so helpful 🙂

This is an extension of Darin's excellent idea and touches on my earlier comment there.

 

I suggest that it should be allowed to mix booleans with numeric operations, in which case the boolean would be "coerced" (for lack of a better word) to 0 or 1, where FALSE=0 and TRUE=1. This would dramatically simplify certain code fragments without loss of clarity. For example, to count the number of TRUEs in a boolean array, we could simply use an "Add Array Elements".

 

(A possible extension would be to also allow mixing of error wires and numerics in the same way, in which case the error would coerce to 0 or 1 (0 = No Error, 1 = Error).)

 

Here's how it could look (left); equivalent legacy code is shown on the right. Counting the number of TRUEs in a boolean array is an often-needed function, and this idea would eliminate two primitives. This is only a very small sampling of the possible applications.

 

 

Idea Summary: When a boolean (or possibly an error wire) is wired to a function that accepts numerics, it should be automatically coerced to 0 or 1 of the dominant datatype connected to the other inputs. If there is no other input, e.g. in the case of "Add Array Elements", it should be coerced to I32.
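This is exactly the behaviour Python exhibits, where booleans participate in arithmetic as 0/1, so the "count the TRUEs" example becomes a single primitive:

```python
bool_array = [True, False, True, True, False]

# "Add Array Elements" on a boolean array: each True coerces to 1, False to 0.
num_trues = sum(bool_array)
print(num_trues)  # 3
```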

The NI updater kindly informed me that LabVIEW 2014 SP1 was released (even though I uninstalled it shortly after I tried it last year), and out of curiosity I took a look at the known issues list.

I learned a few interesting things I did not know about, and also that some problems had been reported as long ago as version 7.1.1. This type of stuff looks like bugs that won't be fixed, ever.

For instance, CAR #48016 states that there is a type-casting bug in the Formula Node. It was reported in version 8, and the suggested workaround is to use a MathScript Node instead of a Formula Node (where is the "Replace Formula Node with a MathScript Node" contextual menu item?).

Problem: the MathScript RT Module is required. Even in my Professional Development System, this is not included by default. Does this really count as a workaround?

I read: we don't have the resources to fix that bug, or we don't want to break code that expected that bug.

In any case, this bug will most likely never be fixed.

The bottom line is, we can waste a lot of time as users rediscovering bugs that have been known for a while and will probably never be fixed. As a user, I would really appreciate a courteous warning from NI that there are known traps, with a complete description readily available in the help entry for the affected function.

 

My suggestion: add a list of known issues (with links to their descriptions) for all objects, properties, functions, VIs, etc., in the corresponding entry in the Help File.

TensorFlow is an open source machine learning tool originally developed by Google research teams.

 

Quoting from their API page:

 

TensorFlow has APIs available in several languages both for constructing and executing a TensorFlow graph. The Python API is at present the most complete and the easiest to use, but the C++ API may offer some performance advantages in graph execution, and supports deployment to small devices such as Android.

Over time, we hope that the TensorFlow community will develop front ends for languages like Go, Java, JavaScript, Lua, R, and perhaps others.

 

Idea Summary: I would love to see LabVIEW among the "perhaps others". 😄

 

(Disclaimer: I know very little in that particular field of research)