LabVIEW Idea Exchange

About LabVIEW Idea Exchange

Have a LabVIEW Idea?

  1. Browse by label or search the LabVIEW Idea Exchange to see if your idea has previously been submitted. If it has, be sure to vote for it by giving it kudos to indicate your approval!
  2. If your idea has not been submitted, click Post New Idea to submit a product idea to the LabVIEW Idea Exchange. Be sure to submit a separate post for each idea.
  3. Watch as the community gives your idea kudos and adds their input.
  4. As NI R&D considers the idea, they will change the idea status.
  5. Give kudos to other ideas that you would like to see in a future version of LabVIEW!

Do you have an idea for LabVIEW NXG?


Use the in-product feedback feature to tell us what we’re doing well and what we can improve. NI R&D monitors feedback submissions and evaluates them for upcoming LabVIEW NXG releases. Tell us what you think!


Cluster Size as a Wired Input:

 

  • Easier to see
  • More explicit
  • Nearly impossible to forget to set it (if it were a required input).

 Cluster Size.gif

I would like the ability to set the compare aggregates mode for comparisons involving containers (arrays certainly; clusters would be a nice bonus) and a scalar value. This includes the comparison-to-zero functions as well.

 

compareAggregatesIdea.png
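To pin down the requested semantics, here is a rough Python sketch of the two modes for an array-vs-scalar comparison. The aggregate interpretation ("all elements satisfy") is my assumption of what the mode would return, not confirmed behavior:

```python
data = [1, 5, 3]

# "Compare Elements" (what you get today for array vs. scalar):
# one boolean per element.
elements = [x > 2 for x in data]       # [False, True, True]

# "Compare Aggregates" (the requested mode): a single boolean for the
# whole container -- assumed here to mean "all elements satisfy".
aggregate = all(x > 2 for x in data)   # False
```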

Recently I've been doing some XNet development on embedded RT targets, particularly the Linux RT hardware with the embedded UI. This is a very neat platform that allows for a UI on RT without needing an industrial PC to talk to it. So I'd hoped features like importing a database from a USB drive, or exporting one, would be supported by XNet, but apparently they aren't.

 

I made a post here describing my use case: an engineer walks up to a system with a database on a USB drive, selects the database, and starts logging data back to the USB drive. Then I thought that while the data was being logged, the database used could be exported and saved alongside the raw data, so that the data being taken carries a copy of the database used, for traceability. There does exist a SQLite database on the RT target, but it is not in any form that is useful for importing into XNet.

 

So this idea is to add support for more database manipulation on RT targets, particularly when it comes to importing and exporting databases.

  • Execution & Performance

This is written as both an Idea and as a Community Nugget.

 

Did you know there exists a function that decreases code fragility when it comes to typecasting and type converting datatypes? It's called 'Coerce to Type', and I bet you've never heard of this function unless you have kept up with this conversation. Thanks to RandyP for creating that Idea which culminated in a 'public' release of the Coerce to Type function.

 

21195iD087EF25489F6CED

 

Since that post, I have become aware of potential risks/bugs I had been proliferating in my coding style. First, I will briefly describe my understanding of the difference between typecasting and type converting in the context of LabVIEW. Next, I'll show a few use cases where this node increases code robustness. Finally, it's up to you to Kudos this Idea so we get Coerce to Type officially supported and in the palette!

 

Simply put, "type converting" preserves the value of a wire, and "typecasting" preserves the raw bits that express that value on a wire. These two concepts are not interchangeable - they perform distinctly different data transfer functions (which is why they show up on two separate subpalettes: "Numeric>>Conversion" and "Numeric>>Data Manipulation"). Then there's this new function: Coerce to Type. I think of it as a Coerce-Cast combo. The data input is first coerced to the datatype harvested from the top input, and then it is typecast to that type.
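For readers more at home in text languages, here is a rough Python sketch of that distinction; the struct-based reinterpretation merely stands in for LabVIEW's Typecast:

```python
import struct

x = 2.5

# Type conversion preserves the VALUE: 2.5 becomes the integer 2.
converted = int(x)

# Typecasting preserves the BITS: reinterpret the 8 bytes of the
# float64 as an int64, which yields a very different number.
cast = struct.unpack('>q', struct.pack('>d', x))[0]

print(converted)  # 2
print(cast)       # 4612811918334230528
```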

 

Dynamic event registration is sensitive to the name on the wire, and for documentation's sake it has historically been important to typecast the event source ref to achieve a good name. Well, typecasting refs can get you into trouble (ask Ben), especially if you change the source control type while forgetting to update your "pretty name" ref constant.

 

21187iA83D4F02211D2DCB

 

My next favorite example is when you need to coerce a numeric datatype into an enum. Sometimes it's impossible to control the source datatype of an integer, while it's desirable to typecast the value into an enum for self-documented syntax inside your app. 

 

For instance, take the "standard" integer datatype in LabVIEW - I32 - and cast it to an enum. You're going to get some unexpected results (assuming you expected the *value* to be preserved). Consider the following scenario:

 

  1. You want to typecast a plain integer '2' into an enum {Zero, One, Two, Three}, and after 45 minutes of debugging you realize "Typecasting" has hacked off 75% of the bits and clobbered the value (see the byte-level sketch below the picture). Drats!
  2. Enterprising engineer that you are, you determine how to fix the problem with a deftly-placed "Coerce to U8". (This is one of the fragile errors I proliferated prior to learning about this node.)
  3. A maniacal manager/customer comes along and says "I want that enum to count to 10k". Drats again. That means a datatype change from U8 to U16 for the typedef'd enum, plus a lot of typing in the wretched enum editor. Finally, two hours into testing and wondering why the program doesn't work, you realize you also forgot to replace all of the U8 type converts with U16 ones (this is the definition of fragile code: you change one thing, and other things break).
  4. Rockstar Programmer Epiphany: use Coerce to Type and bask in your robust code. You even enjoy data value preservation from floating point numbers.
21191iD3339D40A531F181
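A byte-level sketch of step 1, relying on the fact that LabVIEW's Typecast works on big-endian flattened data, so casting a 4-byte I32 down to a 1-byte type keeps the most significant byte:

```python
import struct

i32 = struct.pack('>i', 2)      # I32 '2' flattened big-endian: 00 00 00 02

# Typecast to a U8-sized target keeps only the first (most significant)
# byte of the flattened data, so the value is clobbered.
bad = i32[0]                    # 0, not 2

# Converting to U8 first (value-preserving), then casting, works.
good = struct.pack('>B', 2)[0]  # 2
```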
 
Finally, typecasting can generate mysterious failure modes at Run-Time, but Coerce to Type can catch errors at Design Time. This is especially helpful for references (see above), but can also prevent some boneheaded data gymnastics (see below). Whew! Saved by compiler type resolution mismatch!
 
21193iC698E122C5BE16AC
 
In short, now that you realize you need this function, I hope you will want to see it added to the Data Manip palette.
 
Penultimate note: Coerce to Type is neither a replacement for typecast nor for any of the type converts!!! There are distinct scenarios appropriate for each of the three concepts.
Ultimate note: please check the comments section, because I expect you'll find corrections to my terminology/concepts from GregR, et al.

There is a construct I am quite fond of in pointer-friendly languages: using iterator math to implement circular buffers of arbitrary data types. They are a little slower to use than straight arrays, but they provide a nice syntax for fixed-size buffers and are helpful in cases where you will be prepending and appending elements.

 

I am pretty certain that queues are implemented as circular buffers under the hood, so much of the infrastructure is already in place; this is mostly adding a new API. Added bonus: the explicit circular buffer can be synchronous, unlike the queue, so for example you can put them in subroutine VIs.

 

It should be easy to convert 1D arrays to/from circular buffers. Array->CB is basically free, since the elements are already in order in memory. CB->Array requires two block copies (most of the time). This can be strategically managed, much like Reverse or Transpose operations.
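To make the iterator math concrete, here is a minimal Python sketch of such a buffer; the API names are hypothetical, and the to_array method shows the "two block copies" mentioned above:

```python
class CircularBuffer:
    def __init__(self, size):
        self.data = [None] * size   # fixed-size backing store
        self.head = 0               # index of the logical first element
        self.count = 0

    def append(self, x):
        # Write at the logical end; once full, overwrite the oldest element.
        tail = (self.head + self.count) % len(self.data)
        self.data[tail] = x
        if self.count < len(self.data):
            self.count += 1
        else:
            self.head = (self.head + 1) % len(self.data)

    def prepend(self, x):
        # Write at the logical front by moving the head back one slot.
        self.head = (self.head - 1) % len(self.data)
        self.data[self.head] = x
        if self.count < len(self.data):
            self.count += 1

    def to_array(self):
        # CB -> 1D array: at most two block copies (head..end, then the wrap).
        first = self.data[self.head:self.head + self.count]
        wrap = max(0, self.count - (len(self.data) - self.head))
        return first + self.data[:wrap]

cb = CircularBuffer(4)
for x in [1, 2, 3, 4, 5]:
    cb.append(x)            # 1 is overwritten once the buffer is full
print(cb.to_array())        # [2, 3, 4, 5]
```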

 

Use cases:

 

You can implement most of the following two ideas naturally:

http://forums.ni.com/t5/LabVIEW-Idea-Exchange/Looping-Input-Tunnels/idi-p/2020406

http://forums.ni.com/t5/LabVIEW-Idea-Exchange/New-modes-on-auto-indexed-input-array-tunnels-in-loops...

 

Circular buffers would auto-index and cycle the elements and not participate in setting 'N'.

 

You can do 95+% of what I wanted to do with negative indexing:

http://forums.ni.com/t5/LabVIEW-Idea-Exchange/Negative-Values-in-Index-Array-or-Array-Subset/idi-p/9...

 

A lot of the classic divide-and-conquer algorithms become tractable in LV. You can already use queues to implement your own stack and outperform native recursion. A CB implementation of the stack would be amenable to subroutine priority and give a nice performance kick. I have done it by hand for a few datatypes, and the beauty and simplicity of the recursive solution gets buried in the implementation of the stack. A drop-in node or two would give you a cleaner look and high-octane performance.

 

Finally, perhaps the most practical reason yet: simple XY Charts.

 

As for appearance, I'd suggest a modified wire like the matrix data type. Most if not all Array primitives should probably accept the CB. A few new nodes are needed to get/set the buffer size and number of elements and to do the conversions to/from 1D arrays. The control/indicator could have some superpowers: set the first element, wraparound scrolling (the first element should be highlighted).

  • Execution & Performance

LabVIEW has a somewhat hidden feature built into the variant attribute functionality that easily allows the implementation of high-performance associative arrays. As discussed elsewhere, it is implemented as a red-black tree.

 

I wonder if this functionality could be exposed with a more intuitive set of tools that does not require dummy variants and somewhat obscure VIs hidden deep in the variant palette (who would ever look there!).

 

Also, the key is currently restricted to strings (of course, we can flatten anything to a string to make a "name" for more generalized use of all this).

 

I imagine a set of associative array tools (roughly sketched in code after the list below):

 

 

  • create associative array (key datatype, element datatype)
  • insert key/element pair (replace if key exists)
  • lookup key (index key) to get element
  • read all keys
  • delete key/element
  • delete all keys/elements
  • dump associative array to disk
  • restore associative array from disk
  • destroy associative array
  • ... (I probably forgot a few more)
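A rough sketch of that tool set in Python, with a plain dict standing in for the red-black tree behind variant attributes (the names and disk format are illustrative only):

```python
import pickle

class AssociativeArray:
    def __init__(self):                 # create associative array
        self._tree = {}

    def insert(self, key, element):     # insert/replace key-element pair
        self._tree[key] = element

    def lookup(self, key):              # index key to get element
        return self._tree[key]

    def keys(self):                     # read all keys
        return list(self._tree)

    def delete(self, key):              # delete key/element
        del self._tree[key]

    def clear(self):                    # delete all keys/elements
        self._tree.clear()

    def dump(self, path):               # dump associative array to disk
        with open(path, 'wb') as f:
            pickle.dump(self._tree, f)

    def restore(self, path):            # restore associative array from disk
        with open(path, 'rb') as f:
            self._tree = pickle.load(f)
```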
 
 
I am currently writing such a tool set as a high-performance cache to avoid duplicate expensive calculations during fitting (key: flattened input parameters; element: calculated array).
 
However, I cannot easily write it in a truly generalized way, only as a version targeted at my specific datatypes. I've done some casual testing, and the variant attribute implementation is crazy fast for lookup and insertion. Somebody at NI really did a fantastic job, and it would be great to get more exposure for it.
 
Example performance (key size: 1200 bytes, element size: 4096 bytes, 10000 elements):
insert: ~60 microseconds
random lookup: ~12 microseconds
(compare with a random lookup using linear search (search array): 10ms average. 1000x slower!)
 
Thanks!
The BeagleBoard xM is a very popular 32-bit ARM-based single-board computer. It would be great if we could programme it in LabVIEW. This product could leverage the already available LabVIEW Embedded for ARM and the LabVIEW Microcontroller SDK (or other methods of getting LabVIEW to run on it).

 

The BeagleBoard xM costs $149 and is open hardware. It uses an ARM Cortex-A8 running at 1,000 MHz, resulting in 2,000 MIPS of performance. By way of comparison, the current LabVIEW Embedded for ARM Tier 1 (out-of-the-box experience) boards have only 60 MIPS of processing power. So, about 33 times the processing power!

 

Wouldn’t it be great to programme the BeagleBoard xM in LabVIEW?

NI Community,

I have developed some applications where it was desirable to have a Wait, but 1 millisecond is just too long.

 

I came up with a method using the High Resolution Relative Seconds VI to create a delay in the microsecond range (it's attached). This works for the particular application I need it for, because I am waiting on an external buffer to be ready to accept new data (it can accept new data every 60 nanoseconds). Waiting for an entire millisecond in this case is just too long.
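The attachment isn't reproduced here, but the technique is essentially a busy-wait on a high-resolution clock; a minimal Python sketch of the same idea:

```python
import time

def wait_us(microseconds):
    # Busy-wait until the interval elapses. Like the attached VI, this
    # never sleeps, so accuracy is tied to processor speed and load.
    deadline = time.perf_counter() + microseconds * 1e-6
    while time.perf_counter() < deadline:
        pass

wait_us(50)   # ~50 us delay, far below the 1 ms floor of Wait (ms)
```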

 

The downside to this method is that it is tied to your processor speed and current workload. It would be great if NI supported a 'Wait Until Next us Multiple.vi' (one that doesn't sleep).

Attached is my work-around.  I'd love to see other ideas on this topic. 

Thank you, 

Dan

  • Execution & Performance

LabVIEW is awesome. But sometimes things don't go quite right, and when something isn't quite right somewhere (maybe a bad bit of binary), it would be great to be able to recompile your code. Mass Compile is great for bringing code up from older major versions, but it will ignore VIs that are already compiled in the current version. What I'd like to see is a "Force Recompile" flag in Mass Compile to ensure all identified VIs are most definitely recompiled - especially useful when you call Mass Compile from the shortcut menu in Project view.

 

Yes, it could take some time, but less time than scripting it, and you can use the wait to make yourself a nice cup of hot British tea.

 

forcerecompile.png

  • Execution & Performance

After applying my own subjective intellisense (see also ;)), I noticed that "replace array subset" is almost invariably followed by a calculation of the "index past replacement". Most of the time this index is kept in a shift register for efficient in-place algorithm implementations (see the example at the bottom of the picture, copied from here).

 

I suggest new additional output terminals for "replace array subset". Each new output would be aligned with the corresponding index input and would carry the "index past replacement" value. This would eliminate the need to calculate this typically needed value externally, and would also eliminate the need for "wire tunneling" as in the example at the bottom right. (Sure, we can wire around as in the top-right examples, but this is one of the cases where I always hide the wire to keep things aligned with the shift register.) A sketch of the proposed semantics follows.
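In list terms, the proposal amounts to returning the write position alongside the replaced array; a tiny Python sketch, just to pin down the semantics:

```python
def replace_array_subset(arr, index, new_elements):
    # Existing behavior: in-place replacement starting at 'index'.
    arr[index:index + len(new_elements)] = new_elements
    # Proposed extra output: the index just past the replaced span,
    # ready to feed the next iteration's shift register.
    return arr, index + len(new_elements)

data, next_index = replace_array_subset([0, 0, 0, 0, 0], 1, [7, 8])
print(data, next_index)   # [0, 7, 8, 0, 0] 3
```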

 

Of course the idea needs to be extended for multidimensional arrays; I am sure it can all be made consistent and intuitive. There should be no performance impact, because the compiler can remove the extra code if the new output is not wired.

 

Several string functions have an "offset past ..." output (e.g. "search and replace string", "match pattern", etc.), and I suggest re-using the same glyph on the icon.

 

Here is how it could look (left) after implementing the idea; equivalent legacy code is shown on the right.

 

Idea Summary: "Replace array subset" should have new outputs, one for each index input, that provide the "index past replacement" position.

 

Swap Long.jpg

(hope this hasn't been posted before, couldn't find it)

 

Since we now have 64-bit numbers, can we please have a Swap Long function?

 

This is a real hassle to do yourself (convert to a byte array, rotate the array, and convert back? Or flatten to a string and unflatten with a different byte order?). If you have large arrays of "quads", it is a real performance hit without the function.
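By analogy with Swap Bytes (which swaps the two bytes of each 16-bit word) and Swap Words (which swaps the two 16-bit words of each 32-bit integer), a Swap Long would swap the two 32-bit halves of a 64-bit integer. A quick Python sketch of my reading of the request:

```python
def swap_long(x):
    # Swap the two 32-bit halves of a 64-bit unsigned integer.
    x &= 0xFFFFFFFFFFFFFFFF
    return ((x & 0xFFFFFFFF) << 32) | (x >> 32)

print(hex(swap_long(0x1122334455667788)))   # 0x5566778811223344
```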

 

Regards,

 

Wiebe.

  • Execution & Performance

 

This time, I hope this idea was not already proposed!

 

 

original6.png

 


  • Execution & Performance

I recently found out that LabVIEW uses the Wichmann-Hill (1984) random number generator.

http://digital.ni.com/public.nsf/allkb/9D0878A2A596A3DE86256C29007A6B4A

 

...which in some fields is completely unacceptable for not being random enough (a cycle length of only 6.9536e12).

A much better (and arguably much more standard) choice would be the Mersenne Twister (cycle length of 2^19937 - 1).

http://www.mathworks.com/help/matlab/ref/randstream.list.html
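For reference, many environments already default to it; CPython's random module, for instance, is an MT19937 Mersenne Twister:

```python
import random

rng = random.Random(42)   # CPython's random module uses MT19937,
print(rng.random())       # the Mersenne Twister with period 2**19937 - 1
```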

I would like to be able to create executables in LabVIEW that don't require the runtime engine. Perhaps a palette of basic functions that can be compiled without the runtime engine, and an option in the application builder for that. I routinely get executables from programmers that don't require a runtime installation: I just put it on my desktop and it runs. It would be nice if, when I get a request, I could create, build, and send someone an exe in an email without worrying about runtime engine versions, transferring large installer files to them, etc.

  • Execution & Performance

When I have an array of clusters and I want to locate the array index where a specific element of the cluster has a certain value, I need to first build an array from the Array of Clusters and then search that to find the Array element I want:

 code example.jpg

 

It would be nice (and cleaner, and likely faster) if I could wire the array of clusters into an Unbundle By Name function and select String[] to get the array of string elements to search.

In addition, if the cluster contained nested clusters, I could access them the same way, using the dot notation already supported. For example, the unbundle would let me select 'cluster1.subcluster2.String[]' to access the subarray of an element.

 

code example2.jpg 
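In text-language terms (Python dicts standing in for clusters), today's workaround and the proposed shortcut look roughly like this; the field names are made up for illustration:

```python
# An "array of clusters", with dicts standing in for clusters.
points = [{"name": "a", "value": 1},
          {"name": "b", "value": 2},
          {"name": "c", "value": 3}]

# Today: first build the element array, then search it
# (Build Array in a loop + Search 1D Array).
names = [p["name"] for p in points]
index = names.index("b")              # 1

# The proposal essentially fuses these two steps into the unbundle,
# e.g. selecting 'name[]' (or 'cluster1.subcluster2.name[]') directly.
```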

 

Format Into Text is very useful but can become hard to edit when it has a lot of inputs. I propose that, instead of one huge format string, the programmer be allowed to put the required format next to the corresponding input (see the sketch below the picture). Also, the user should be allowed to enter constant strings, e.g. \n, \t, or "Comment", and have the corresponding input field automatically grayed out.

 

FORMAT INTO TEXT IDEA
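A rough Python analog of the proposal: pair each input with its own format fragment, with constant strings allowed in between, instead of maintaining one monolithic format string (the pairing scheme here is hypothetical):

```python
# One format fragment per input; None marks a constant string whose
# input field would be grayed out.
fields = [("%.2f V", 3.14159),   # input 1 with its own format
          ("\t", None),          # constant separator
          ("%d samples", 42)]    # input 2 with its own format

line = "".join(fmt if val is None else fmt % val for fmt, val in fields)
print(line)   # "3.14 V\t42 samples"
```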

  • Execution & Performance

Currently, the VariantDataType VIs are buried deep in vi.lib, but they are very useful when trying to program based on datatype. Can we bring some of those functions into the light?

 

http://forums.ni.com/t5/LabVIEW/Darren-s-Weekly-Nugget-10-05-2009/m-p/996866

http://forums.ni.com/t5/LabVIEW/Darren-s-Occasional-Nugget-03-25-2014/m-p/2791974

 

Get items from enum
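As a rough text-language analog of the "Get items from enum" example pictured above (Python's enum standing in for a LabVIEW enum):

```python
from enum import Enum

class Mode(Enum):                # stand-in for a LabVIEW enum
    IDLE = 0
    RUN = 1
    STOP = 2

items = [m.name for m in Mode]   # ['IDLE', 'RUN', 'STOP']
```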

  • Execution & Performance

I liked the new DBL on the palette that we got a year or so ago - it saves a step. But it should NOT adapt to entered data by default! Why would I specifically place a DBL only to have it change?

 

1.png

A pretty common use case of LVCompare in my workflow is as a diff tool in SVN to compare different versions of a VI. When I do that, the previous version is downloaded into a temp directory, and then there is a significant load time because dependency paths have to be resolved differently for the version in the temp directory, and some recompiling happens. For top-level VIs in large applications, it seems like the whole dependency tree gets loaded, which takes a long time. But really, for comparing VIs, there's no need to load the contents of lower-level subVIs (and their dependencies, and dependencies of dependencies, etc.). As long as the connector pane, the typedefs on the connector pane, and the icon of a subVI are loaded, that should be enough information for a visual diff of the top-level VI.

  • Execution & Performance

I think it would be very useful to be able to view the final, optimised output of the Dataflow Intermediate Representation (DFIR) before it's passed to the Low-Level Virtual Machine (LLVM). At the moment it's difficult to tell whether certain things have been optimised, and even more difficult to know how they have been optimised.

 

NI LabVIEW Compiler: Under the Hood

 

Consider the following (horrific) example:

Badly designed string concatenation.png

 

Does LabVIEW refactor it into the following, or is it somewhere in between, like the second image?

Optimised string concatenation.png

 

Alternative.png

 

Sometimes leaving things inside loops makes the code look neater and easier to change in the future.

Example: reading the length of a constant string - I'm never sure whether this gets folded into a constant length value or not.
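This is the kind of transformation in question, sketched in Python: does the optimiser constant-fold the invariant expression and hoist it out of the loop?

```python
def process(i, k):
    return i * k

# As written: the invariant expression sits inside the loop...
for i in range(5):
    k = len("constant string")   # recomputed every iteration?
    process(i, k)

# ...what constant folding plus loop-invariant code motion would produce:
k = 15                           # len("constant string") folded to 15
for i in range(5):
    process(i, k)
```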

 

Also, for a VI that uses a password-protected subVI marked as inlined, it would make sense not to show the optimised code for the password-protected part, but instead just to show the unprotected parts with a note that the result does not represent the final optimisation.


Note: I've seen a previous idea that mentioned this marked as "declined". However, that idea mostly proposed existing/similar functionality to the VI Analyser, with only a small section on the DFIR output, so I'm assuming that is the reason it was declined.

http://forums.ni.com/t5/LabVIEW-Idea-Exchange/Compile-Advisor-Report/idi-p/1215369

  • Execution & Performance