LabVIEW Idea Exchange


TensorFlow is an open source machine learning tool originally developed by Google research teams.

 

Quoting from their API page:

 

TensorFlow has APIs available in several languages both for constructing and executing a TensorFlow graph. The Python API is at present the most complete and the easiest to use, but the C++ API may offer some performance advantages in graph execution, and supports deployment to small devices such as Android.

Over time, we hope that the TensorFlow community will develop front ends for languages like Go, Java, JavaScript, Lua, R, and perhaps others.

 

Idea Summary: I would love to see LabVIEW among the "perhaps others". 😄

 

(Disclaimer: I know very little in that particular field of research)

 

The In Range and Coerce function is frequently used to determine whether a value is within the range set by upper and lower limit values.

 

But when it is out of range, you often also want to know whether the value is out of range too high, or out of range too low.  It is easy enough to add a comparison function alongside and compare the original value to the upper limit.  It's another primitive and 2 more wire branches.  But since comparison is one of the primary purposes of the In Range and Coerce function, why shouldn't it be built into it?

 

The use case that made me think of this, and that has come up in my code a few times over the years, is any kind of thermostat-type control, particularly one that has hysteresis built into it.  If a temperature is within range (True), then you would often do nothing.  If it is lower than the lower limit, you'd want to turn on a heater.  If it is higher than the upper limit, then you'd turn off the heater.  (Or the opposite if you are dealing with a chiller.)

 

Request: Add an additional output to In Range and Coerce that tells whether the out-of-range condition is higher than the upper limit or lower than the lower limit.

 

(That does leave the question of what the value of this extra output should be when the input value is within range.  Perhaps the output should not be a boolean, but a numeric value of 1, 0, and -1, or perhaps an enum.)
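
A minimal sketch of how such a tri-state output could behave (Python used purely for illustration; the -1/0/+1 encoding and the thermostat values are assumptions of this example, not a proposed NI implementation):

```python
def in_range_and_coerce(x, lower, upper):
    """Return the coerced value plus a direction flag:
    -1 = below lower limit, 0 = in range, +1 = above upper limit."""
    if x < lower:
        return lower, -1
    if x > upper:
        return upper, +1
    return x, 0

# Thermostat-style use with hysteresis limits
temperature, setpoint, heater_on = 18.2, 21.0, False
coerced, direction = in_range_and_coerce(temperature, setpoint - 1.0, setpoint + 1.0)
if direction < 0:
    heater_on = True       # too cold: turn the heater on
elif direction > 0:
    heater_on = False      # too hot: turn it off
# direction == 0: within the hysteresis band, do nothing
```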

The Timing palette is looking bad with all these gaps.  A simple fix would be to fill these holes with useful functions. I'm proposing three and attaching two from my reuse code. (I may re-create the third later.)

timing2.PNG

 

Time to XL.vi (attached) and its inverse, XL to Time.vi

12:00:00.0 AM Jan 0, 1900 is a pretty common epoch (base date) for external programs, and converting from the LabVIEW epoch shows up several times a year on the forums; Time to Excel has a few solved threads under its belt.  Moreover, for analysis against external data from other environments you are often using Access, Excel, Lotus... All share the same epoch (and leap-year bug) in their date/time formats.  These VIs have been pretty useful to me, although the names may change to avoid trademark infringements.
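
For reference, the conversion is just an epoch shift plus a seconds-to-days scaling. A rough Python equivalent of what Time to XL.vi / XL to Time.vi would do (the 1899-12-30 base absorbs the 1900 leap-year bug; the details here are assumptions of this sketch, not the attached VIs):

```python
from datetime import datetime, timezone

LV_EPOCH = datetime(1904, 1, 1, tzinfo=timezone.utc)     # LabVIEW epoch
XL_EPOCH = datetime(1899, 12, 30, tzinfo=timezone.utc)   # effective Excel "day 0"
DAY = 86400.0
OFFSET_DAYS = (LV_EPOCH - XL_EPOCH).days                 # 1462 days

def time_to_xl(lv_seconds):
    """Seconds since the LabVIEW epoch -> Excel serial date (days)."""
    return lv_seconds / DAY + OFFSET_DAYS

def xl_to_time(xl_days):
    """Excel serial date (days) -> seconds since the LabVIEW epoch."""
    return (xl_days - OFFSET_DAYS) * DAY

# Noon UTC on 1 Jan 2020 -> Excel serial ~43831.5
lv = (datetime(2020, 1, 1, 12, tzinfo=timezone.utc) - LV_EPOCH).total_seconds()
print(time_to_xl(lv))
```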

 

Time to Time of Day.vi (attached) has also been in my arsenal; it proves valuable and comes up on a few threads per year on the forum.
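
The time-of-day split is equally small; a sketch of what Time to Time of Day.vi presumably computes (UTC assumed, no DST handling in this sketch):

```python
def time_to_time_of_day(lv_seconds):
    """Split a LabVIEW timestamp into whole days since the epoch
    and seconds elapsed since midnight (UTC)."""
    days, seconds_into_day = divmod(lv_seconds, 86400.0)
    return int(days), seconds_into_day
```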

 

The gaps in the palette make it a perfect fit.

timing.PNG


I'd like to have a PDF export function within the Report Generation Toolkit.

That way, no additional software has to be installed on the target system to generate PDF files including graphics and text.

 

The PDF format is the most used format for sharing documents because nearly everybody has a reader installed.

It looks identical on every computer and is perfect for printing.

 

Software like OpenOffice and Word, and programming languages like PHP, support the generation of PDF files, so why not LabVIEW?

 

Regards

After applying my own subjective intellisense (see also ;)), I noticed that "replace array subset" is almost invariably followed by a calculation of the "index past replacement". Most of the time this index is kept in a shift register for efficient in-place algorithm implementations (see example at the bottom of the picture copied from here).

 

I suggest new additional output terminals for "replace array subset". The new output should be aligned with the corresponding index input and output the "index past replacement" value. This would eliminate the need to calculate this typically needed value externally and would also eliminate the need for "wire tunneling" as in the example in the bottom right. (Sure, we can wire around as in the top-right examples, but this is one of the cases where I always hide the wire to keep things aligned with the shift register.)
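
In text form the requested output is simply the start index plus the length of the replacement; a minimal Python sketch (the function name is made up for illustration):

```python
def replace_array_subset(arr, index, new_subset):
    """Replace elements of arr starting at index and also return the
    'index past replacement', i.e. where the next replacement would start."""
    arr[index:index + len(new_subset)] = new_subset
    return arr, index + len(new_subset)

data = [0, 0, 0, 0, 0, 0]
data, next_index = replace_array_subset(data, 1, [7, 8])
# data == [0, 7, 8, 0, 0, 0], next_index == 3, ready for the shift register
```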

 

Of course the idea needs to be extended for multidimensional arrays. I am sure it can all be made consistent and intuitive. There should be no performance impact, because the compiler can remove the extra code if the new output is not wired.

 

Several string functions have an "offset past ..." output (e.g. "search and replace string", "match pattern", etc.), and I suggest re-using the same glyph on the icon.

 

Here is how it could look (left) after implementing the idea. Equivalent legacy code is shown on the right.

 

 

 

Idea Summary: "Replace array subset" should have new outputs, one for each index input, that provides the "index past replacement" position.

 

 

 

 

Support project-based VI Analyzer configurations that contain target-specific information (important, e.g., to correctly analyze FPGA VIs) and that can be executed through the VI Analyzer API, so they can be integrated into a CI process.

A graphical programming language deserves to have great graphical tools representing the design of big applications.

 

It is possible to use scripting to determine whether a method of a class has another class as an input or output terminal, and to know whether a class composes another class in its private data. The only thing I cannot do myself right now is show this information in the Class Hierarchy window. Representing the usage and composition relationships only requires parsing through the project once, and then again every time it changes.

I am right now using a script to parse through all project classes to understand which classes have what relationships, which are actors and which are messages, and I draw a PlantUML diagram from that. The intermediate step is to generate PlantUML text, but this should be integrated into LabVIEW.


https://forums.ni.com/t5/LabVIEW-APIs-Discussions/Project-to-Plant-UML-Diagram-Script/td-p/3984499
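
A stripped-down sketch of the PlantUML-generation step (the relationship data and class names below are invented for illustration; the real script scrapes them from the project via scripting):

```python
# Relationship tuples scraped from a LabVIEW project (illustrative data)
relationships = [
    ("Temperature Actor", "inherits", "Actor"),
    ("Temperature Actor", "composes", "PID Controller"),
    ("Temperature Actor", "uses",     "Log Message"),
]

ARROWS = {"inherits": "--|>", "composes": "*--", "uses": "..>"}

def to_plantuml(rels):
    lines = ["@startuml"]
    lines += [f'"{a}" {ARROWS[kind]} "{b}"' for a, kind, b in rels]
    lines.append("@enduml")
    return "\n".join(lines)

print(to_plantuml(relationships))   # paste the output into any PlantUML renderer
```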

5.png

 

6.png

 

Please improve the readability of object-oriented projects with additional tools for LabVIEW. This is especially critical for NXG, where the ability to understand an OO project is currently even more limited than in LabVIEW 2019.

There is a plethora of timestamp formats used by various operating systems and applications. The IEEE 1588 time protocol itself lists several. Windows is different from the various flavors of Linux, and Excel is very different from about all of them. Then there are the details of daylight saving time, leap years, etc. LabVIEW contains all the tools to convert from one of these formats to another, but getting it right can be difficult. I propose a simple primitive to do this conversion. It would need to be polymorphic to handle the different data types that timestamps can take. This should only handle numeric data types, such as the numeric Excel timestamp (a double) or a Linux timestamp (an integer). Text-based timestamps are already handled fairly well. Inputs would be timestamp, input format type, output format type, and error. Outputs would be the resulting timestamp and error.
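
A sketch of what the proposed primitive boils down to: every numeric format is an epoch plus a unit scale, so the conversion is two subtractions and a rescale. The epochs and units below are common conventions assumed for illustration; time zones and DST are exactly the part such a primitive would need to get right and are not handled here.

```python
from datetime import datetime, timedelta, timezone

FORMATS = {  # (epoch, seconds per unit)
    "labview": (datetime(1904, 1, 1, tzinfo=timezone.utc), 1.0),
    "unix":    (datetime(1970, 1, 1, tzinfo=timezone.utc), 1.0),
    "windows": (datetime(1601, 1, 1, tzinfo=timezone.utc), 1e-7),      # FILETIME ticks
    "excel":   (datetime(1899, 12, 30, tzinfo=timezone.utc), 86400.0), # serial days
}

def convert_timestamp(value, src, dst):
    """Convert a numeric timestamp between epoch/unit conventions."""
    src_epoch, src_scale = FORMATS[src]
    dst_epoch, dst_scale = FORMATS[dst]
    utc = src_epoch + timedelta(seconds=value * src_scale)
    return (utc - dst_epoch).total_seconds() / dst_scale

print(convert_timestamp(0, "labview", "excel"))   # 1462.0: the LabVIEW epoch as an Excel date
```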

When you Or two numeric values, it does a bitwise Or.  The same should happen when using Or Array Elements, but it is not supported for numeric arrays. I see no reason why it shouldn't work just the same.
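
In text form the requested behaviour is just a bitwise-Or reduction over the array; a two-line Python illustration:

```python
from functools import reduce
from operator import or_

values = [0b0001, 0b0100, 0b0010]
result = reduce(or_, values, 0)   # bitwise Or of all elements
print(bin(result))                # 0b111
```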

 

test.PNG

The goal of this idea is to make it easy for the LabVIEW ecosystem to create reusable libraries for LabVIEW that would be type independent. Let's think for a second about dictionaries, also called key-value stores. Dictionaries are data structures that allow storing and retrieving values with a specific key. Creating a generic, reusable, strongly typed dictionary is currently impossible with the LabVIEW type system. One can create a dictionary that is type specific, but then it's not reusable. Or one can create a reusable dictionary, but then it's not strongly typed. Type Parameters and Parametrized Generic Types as explained in this idea would allow creating strongly typed dictionaries that are widely reusable across applications. Specifically, type parameters and parametrized generic types would allow the LabVIEW ecosystem to develop highly reusable, strongly typed components to solve various common programming problems. This would allow National Instruments to put more focus on the core of the language, as the LabVIEW ecosystem could solve a much wider range of problems that previously have required National Instruments to contribute.
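
For comparison, this is roughly what the same capability looks like in a text language with parametrized generic types (a minimal Python sketch; the class and method names are only for illustration):

```python
from typing import Dict, Generic, TypeVar

K = TypeVar("K")   # "Key Type" parameter
V = TypeVar("V")   # "Value Type" parameter

class Dictionary(Generic[K, V]):
    """A reusable dictionary whose key and value types are type parameters."""
    def __init__(self) -> None:
        self._store: Dict[K, V] = {}

    def add_element(self, key: K, value: V) -> None:
        self._store[key] = value

    def get_element_by_key(self, key: K) -> V:
        return self._store[key]

# One reusable component, strongly typed at each use site:
by_id: Dictionary[int, str] = Dictionary()
by_name: Dictionary[str, float] = Dictionary()
```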

 

Add a new control type Type Parameter to LabVIEW that augments the current Control, Type Def and Strict Type Def control types. The Type Parameter type would act like a regular Type Def control with one special and important distinction. You could wire anything to an input terminal expecting a specific Type Parameter type and the downstream type would adapt at compile time to the type wired to the type parameter input.

Type Parameter Definition

 

In a single VI type parameter could be used in multiple places but all instances of the type parameter would adapt to the same type.

 

Type Parameters

 

When a VI that uses Type Parameters on the front panel is used on a block diagram, the template VI is replaced by the compiler with a type-specific instance that has adapted the type parameters to the type wired to the Type Parameter input. Notice below how in our VI the control and the indicator were of type Type Parameter with a default type of DBL, and the instance got adapted to the U32 type that was wired to the input.

Calling a VI with Type Parameter Inputs

The same type parameter could be used on multiple inputs of a VI.

Multiple Inputs of The Same Type Parameter

And all of the type parameters would adapt to the same type when the VI is being used.

Calling a VI with Multiple Type Parameter Inputs of the Same Type

Note that in the above example we chose the element of the array to be a specific type specified by a type parameter. However, the arrays themselves could just as well have been specified by a type parameter.

 

So far we have focused on the VI boundary, where type parameters adapt the whole VI to a specific type (or types, if multiple different type parameters are used in the connector pane of the VI). Type parameters can also be used in composite types (e.g. arrays, clusters, classes), and the downstream composite types would adapt to what is wired to the type parameter input.

 

Type Parameters with a Cluster

Note that x and y as instances of the same type parameter have to be of the same or compatible type.

Type Error With Type Parameters

 

Type parameters can also be used in class private data to create parameterized custom types. This is where type parameters become extremely powerful. Let's assume that we have a class 3D Vector.lvclass that has three instances of a "Data Element.ctl" Type Parameter. The default type of the Data Element is set to be DBL. The cluster private data has three instances of the Data Element, one for each of X, Y and Z.

Type Parameters in Class Private Data

 

Now we could create a Create 3D Vector method VI for this class that allows us to construct a type-parametrized instance of the class type.

Creating a Type Parametrized Object

Now calling this Create 3D Vector.vi with strings as the inputs for the type-parametrized inputs X, Y and Z will create an instance of class 3D Vector with compile-time type 3D Vector[String].

Calling a VI that Creates a Type Parametrized Object

 

And this is where we start seeing the superpowers of type parameters and parametrized types, as well as the generic type-parameterized VIs that go along with them. Now we have the capability of creating custom VIs and custom types that both can adapt to different parameter types at usage time.

 

Let's get back to the question of dictionaries. We could easily construct a dictionary that allows the key type to be parametrized with one parameter and the value type with another. For example, we could use the dictionary with I32s as keys and Strings as values, or with Strings as keys and File Paths as values. A constructor for such a custom type would be trivial to create.

Type Parametrized Create Method for a Dictionary

 

Once we have constructed the dictionary, we would naturally like to use it. We could now use method VIs of the Dictionary class to add and fetch elements from the dictionary. As an example, Get Element By Key would look something like this in its simplest form.

 

Get Element From Dictionary

 

 

Note that Dictionary In is type parametrized with two different type parameters, Key Type and Value Type. In the class library there are Type Parameter controls Key Type.ctl and Value Type.ctl. Type parameter Key Type.ctl is used both inside the private data of the class and on the front panel as the Key input; the types of these two must be the same. The same is true for the Value Type element of the private data and the Value indicator, which both derive from the Value Type.ctl type parameter. The hash function is any function that can convert any LabVIEW type to a string that we can use as a key for the variant attribute node. We are using variant attributes as the store implementation in this basic example.
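
A rough text analogue of that implementation, mimicking the hash-to-string step and the variant-attribute-style store (all names are illustrative, and repr() merely stands in for the hash function):

```python
def create_dictionary():
    return {"store": {}}                       # stands in for the class private data

def add_element(dictionary, key, value):
    dictionary["store"][repr(key)] = value     # "hash" the key to a string, then store
    return dictionary

def get_element_by_key(dictionary, key):
    return dictionary["store"][repr(key)]      # same hash, then read the attribute back

d = add_element(create_dictionary(), 1, "University of Texas")
print(get_element_by_key(d, 1))
```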

 

Calling the Dictionary with an integer as the key type parameter and a string as the value would look something like this.

 

Calling a Type Parametrized Dictionary

 

As you can see, the 0 and the empty string will propagate as the type parameter types for Key Type and Value Type in the dictionary wire. Now Add Element.vi would have to adapt to these selections for Key Type and Value Type the moment the Dictionary wire is connected: the Key input would immediately change to type INT32 and the Value input to type String. The same would be true if the wires were connected in the reverse order. Connecting the "University of Texas" string to the Value input of Add Element and connecting the number 1 of type INT32 to the Key input of Add Element would immediately adapt the Dictionary in and Dictionary out terminals to be of type Dictionary[Key Type = INT32, Value Type = String]. A type error would occur if Dictionary in were of a different type.

 

Type Parametrized Generic Types are an extremely powerful concept to include in a language, and this idea describes a feasible way to implement them in the visual dataflow model of LabVIEW. This is, and has been for maybe ten years, my absolute #1 feature that I have wanted to see in LabVIEW. I think the time is right for me to officially make this request. Ideally Type Parameters could be bounded, but that's a topic for a whole other idea post.

Many of the Mathematics and Signal Processing VIs retain state, rendering them unusable inside reentrant VIs: http://digital.ni.com/public.nsf/allkb/543589DF37BA48CF8625744A00543FF3

 

Many of the VIs in this list (all those in my current application, unfortunately) can only work with single-channel data. When manipulating multi-channel data, you can work around that fact by running the channels serially through the VI you need, but that (1) takes much longer for large data sets or several channels, and (2) is not an option when performing live manipulation of streaming data block-by-block.

 

I ran into this problem while developing code in the Actor Framework, where Do.vi and Actor Core.vi (the two main framework methods) are both Shared Reentrant. Now that AF is a native feature in LV, I expect that more people will run into problems with these VIs.

 

We need stateless versions of these VIs so we can use multiple copies on a multichannel data set. You can probably keep backward compatibility by pushing the core logic into a new stateless subVI and keeping the shift register or feedback node on the main VI's diagram.
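
A sketch of that refactoring pattern on a toy first-order IIR stage: the state lives outside the core function, so each channel (or each reentrant caller) owns its own copy. The coefficients and structure are illustrative, not any specific NI VI.

```python
def filter_step_stateless(x, b0, b1, a1, state):
    """Stateless core: takes the delay state explicitly and returns the
    updated state (direct form II transposed, first order)."""
    y = b0 * x + state
    new_state = b1 * x - a1 * y
    return y, new_state

def process_block(channels, coeffs, states):
    """Run one sample of every channel through its own independent state."""
    b0, b1, a1 = coeffs
    out = []
    for ch, x in enumerate(channels):
        y, states[ch] = filter_step_stateless(x, b0, b1, a1, states[ch])
        out.append(y)
    return out, states

# Three independent channels, each with its own filter state
out, states = process_block([0.1, -0.2, 0.3], (0.5, 0.5, -0.2), [0.0, 0.0, 0.0])
```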

I recently did some work that required a lot of simple equations. The formula node works fine, but the diagram would have been easier to read if I could have entered equations like this:

 

 Equation Node.PNG

 

Equation Node : Mathcad :: MathScript Node : MATLAB

This would replicate the ability of the standard Index Array function to extract a subarray from a multi-dimensional array. Currently both indices must be wired, which only selects scalar elements.

ArrayIndexInPlace.png

 

Another related enhancement that may be useful is to provide the equivalent of "Array Subset" on the In Place Element Structure.
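
For reference, the asymmetry in text form (NumPy used purely as an analogy): indexing with one index yields a whole sub-array that can be modified in place, while indexing with both yields a scalar, which is all the In Place Element Structure's array node currently allows.

```python
import numpy as np

data = np.arange(12).reshape(3, 4)

data[1] *= 10      # one index wired: a whole sub-array (row), modified in place
data[2, 3] = -1    # both indices wired: a single scalar element
```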

 

I would like to see more native support for compression and encryption/hash algorithms.

 

When it comes to compression and encryption, we are very much the poor relations to other languages, which have a plethora of source and examples for SHA1, DES, 3DES, Blowfish, HMAC, Rijndael, AES (encryption) and LZO, LZMA, LZW, GZip, JPEG2000 (compression), to name but a few.
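
For a sense of how little it takes in languages that ship these natively, here is the Python standard library doing hashing, keyed authentication and lossless compression (AES and the other ciphers still need a third-party package there too, so this only covers part of the list above):

```python
import hashlib, hmac, zlib

message = b"measurement record"
key = b"shared secret"

digest = hashlib.sha256(message).hexdigest()              # SHA-256 hash
tag = hmac.new(key, message, hashlib.sha256).hexdigest()  # HMAC authentication tag
packed = zlib.compress(message)                           # lossless (DEFLATE) compression
assert zlib.decompress(packed) == message
```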

 

Apart from "Zip" (which can only be used on files) and "twofish" (which is hardly an industry standard because of security concerns) we have very little choice but to use DLLs and SOs meaning our applications violate one of the biggest reasons for using LabView........cross-platform. We have very little in our tool-kit to make efficient and secure network applications for example. Especially if we are required to interface to existing systems that ave used these technologies for many years.

 

We cannot even write applications to access Twitter any more!

 

With the introduction of "Stream" prototypes in the new LV2010, it is essential to have these tools in our palettes.

I would like to have the ability to set the compare aggregates mode for comparisons involving containers (arrays certainly; clusters would be a nice bonus) and a scalar value.  This includes the comparison-to-0 functions as well.
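
To make the request concrete, here is the difference in a NumPy sketch, reading "aggregate" as "a single boolean for the whole array" (whether NI would define it via all-elements or via the existing aggregate ordering rules is deliberately left open here):

```python
import numpy as np

data = np.array([1.2, 3.4, -0.5, 2.0])

elementwise = data > 0.0              # "Compare Elements": one boolean per element
aggregate = bool(np.all(data > 0.0))  # one possible "Compare Aggregates" result
```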

 

compareAggregatesIdea.png

FIR Filter is almost the same as Convolution, except that it has an init/cont terminal while Convolution has an input for the algorithm (direct, frequency domain). FIR Filter always uses direct convolution.

 

If "cont" is not used, convolution based FIR filtering could be orders of magnitude faster because it scales much less steeply with input sizes. Examples have been discussed where switching algorithms from "Direct" to "frequency domain" can turn minutes into seconds (e.g. 1M points and 5k filter).

 

While the knowledgeable programmer can of course make their own using the convolution primitives (also programming around other limitations because this idea is not implemented :(), it might be more intuitive if FIR Filter had an "algorithm" input where we can select between the same choices as for Convolution. (From my casual understanding, "frequency domain" would ignore the "cont" input because it is incompatible; this can just be mentioned in the help.)
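
The workaround, and the speed difference being asked for, in SciPy terms (assuming the "cont" behaviour is not needed; sizes match the example above):

```python
import numpy as np
from scipy.signal import fftconvolve, lfilter

x = np.random.randn(1_000_000)   # 1M samples
taps = np.random.randn(5_000)    # 5k-tap FIR filter

direct = lfilter(taps, 1.0, x)        # direct-form FIR: O(N*M), noticeably slower
fast = fftconvolve(x, taps)[:len(x)]  # frequency-domain: O(N log N), same result
print(np.allclose(direct, fast, atol=1e-6))
```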

 

altenbach_0-1600102620742.png

 

NXG needs an Idea Exchange.  The feedback button is a lame excuse for a replacement.  Why?

 

  • I can't tell if my idea has been suggested before.  (And maybe someone else's suggestion is BETTER and I want to sign onto it, instead.)
  • NI has to slog through bunches of similar feedback submissions to determine whether or not they are the same thing.
  • Many ideas start out as unfocused concepts that are honed razor sharp by the community.
  • This is an open loop feedback system.

Let's make an Idea Exchange for NXG!

I just found out about LabVIEW NXG's C Node. I know my team would love to have it in LabVIEW to use for bit masking in a familiar syntax. The Expression and MathScript Nodes have issues with 64-bit values and give unexpected results. A quick test in the C Node does fix these issues.
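
The kind of 64-bit masking expressions meant here, written out in Python for illustration (Python sidesteps the width issue because its integers are arbitrary precision; the register value and masks are made up):

```python
status = 0xFFFF_0000_1234_ABCD                    # a 64-bit register value (illustrative)

upper = (status & 0xFFFF_FFFF_0000_0000) >> 32    # isolate the upper 32 bits
msb_set = bool(status & (1 << 63))                # test the most significant bit
low_byte = status & 0xFF                          # mask off the lowest byte
```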

I recently found out that LabVIEW uses the Wichmann-Hill (1984) random number generator.

http://digital.ni.com/public.nsf/allkb/9D0878A2A596A3DE86256C29007A6B4A

 

...which in some fields is completely unacceptable for not being random enough (cycle length of 6.9536e12?)

A much better (and arguably much more standard nowadays) generator would be the Mersenne Twister (cycle length of 2^19937 − 1).

http://www.mathworks.com/help/matlab/ref/randstream.list.html
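
For what it's worth, several mainstream environments already default to it; CPython's random module, for example, is an MT19937 implementation:

```python
import random

rng = random.Random(12345)   # CPython's Random is the Mersenne Twister (MT19937)
print(rng.random())          # period 2**19937 - 1, versus ~7e12 for Wichmann-Hill
```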

Hello,

 

In many of my applications I had to run dynamic asynchronous VIs using a VI path.

 

When the VI you want to launch has some problems (VI broken, missing resources, ...) you get the generic error 1003. (The VI you want to launch may be broken ... ?)

 

This error can be easily solved when you are using the LabVIEW IDE. 🙂

 

But sometimes this error can occur in an executable you have deployed to your final user's computer.

And then the error 1003 message (the VI you try to call may be broken ...) is a little bit poor for analysing the problem. 🙁

This error occurs, for example, when the dynamic VI tries to access missing DLLs.

 

It would be nice if the error 1003 message could be filled out with an additional "error description" giving more information about the error source.

For example, an error list, like the error list in the LabVIEW IDE. (Broken arrow error list !)

 

Thanks a lot.

 

Manu.