LabVIEW Idea Exchange

Make it possible for Boolean functions to accept an error cluster as an input, as in this example:

 

StopOnError.png     StopOnErrorCast.png

Currently, you can place a probe on a wire while developing, which is an indicator of the data on a wire. I want the ability to CONTROL the data on the wire, with a data forcing mechanism.

 

The implementation would be very simple... right-click on a wire, and in the context menu the option "Force" would be right under "Probe." It would pop up a window with the forcing control, and while the VI is running and forcing is set to "Enable", the programmer can control the values that pass on the wire. If the force window is set to "Disable", the data on the wire is completely controlled by the VI's logic.

 

DataForcing.png

 

I think the implementation by NI could be trivially simple. If you only allow a forcing control to be added during edit mode (not while the VI is running), the force could be added as an inline VI (as denoted by the green rectangle on the wire). The code inside the inline VI would be as follows, and the front panel would be "Data Force (1)" as shown above.

 

ForcingImplementation.png

 

Of course, if you could add a force to a wire at run time like probes, props to NI. But I would be PERFECTLY happy if you could only add these force controls in edit mode prior to running.

 

One level further (and this would be AMAZING, NI, AMAZING): enable and disable certain parts of the cluster that you would like to force and allow the other elements to be controlled by the VI logic. I made the example above because it would be very natural to ONLY force Sensor1 and Sensor2, and let the output run its course from your forced inputs.

It is time to put a dent in the floating point "problems" encountered by many in LV.  Due to the (not so?) well-known limitations of floating point representations, comparisons can often lead to surprising results.  I propose a new configuration for the comparison functions when floats are involved; call it "Compare Floats" or similar.  When selected, I suggest that Equals? becomes "Almost Equal?" and the icon changes to the approximately-equal sign.  EqualToZero? could become AlmostEqualToZero?, again with appropriate icon changes.  GreaterThanOrAlmostEqual?, and so on.

 

AlmostEqual.png

 

 

I do not think these need to be new functions on the palette, just a configuration option (Comparison Mode).  They should expose a couple of terminals for options so we can control what "close" means (number of significant figures, number of digits, absolute difference, etc.), with reasonable defaults so that in most cases we do not have to worry about it.  We get all of the ease and polymorphism that comes with the built-in functions.

 

There are many ways to do this, and I won't be so bold as to specify which way to go.  I am confident that any reasonable method would be a vast improvement over the current method, which is to hope that you are never bitten by Equals?.
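To make the tolerance idea concrete, here is a rough Python sketch of what an "Almost Equal?" comparison could compute (Python only because G code cannot be shown as text; the tolerance defaults mirror math.isclose and are just one possible choice):

```python
import math

def almost_equal(x, y, rel_tol=1e-9, abs_tol=0.0):
    """One possible meaning of "Almost Equal?": true when x and y differ by
    no more than abs_tol, or by rel_tol relative to the larger magnitude."""
    return abs(x - y) <= max(rel_tol * max(abs(x), abs(y)), abs_tol)

print(0.1 + 0.2 == 0.3)               # False -- the surprise exact Equals? gives today
print(almost_equal(0.1 + 0.2, 0.3))   # True with a tolerance
print(math.isclose(0.1 + 0.2, 0.3))   # Python's built-in equivalent agrees
```

Whatever tolerance scheme NI picks, the point is that it ships with sensible defaults and exposed option terminals, exactly as described above.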

LabVIEW has a somewhat hidden feature built into the variant attributes functionality that easily allows the implementation of high-performance associative arrays. As discussed elsewhere, it is implemented as a red-black tree.

 

I wonder if this functionality could be exposed with a more intuitive set of tools that does not require dummy variants and somewhat obscure VIs hidden deeply in the variant palette (who would ever look there!).

 

Also, the key is currently restricted to strings (Of course we can flatten anything to strings to make a "name" for a more generalized use of all this).

 

I imagine a set of associative array tools:

 

 

  • Create associative array (key datatype, element datatype)
  • insert key/element pair (replace if key exists)
  • lookup key (index key) to get element
  • read all keys
  • delete key/element
  • delete all keys/elements
  • dump associative array to disk
  • restore associative array from disk
  • destroy associative array
  • ... (I probably forgot a few more)
 
 
I am currently writing such a tool set as a high performance cache to avoid duplicate expensive calculations during fitting (Key: flattened input parameters, element: calculated array).
 
However, I cannot easily write it in a truly generalized way, only as a version targeted at my specific datatype. I've done some casual testing and the variant attribute implementation is crazy fast for lookup and insertion. Somebody at NI really did a fantastic job and it would be great to get more exposure for it.
 
Example performance: (key size: 1200 bytes, element size: 4096 bytes, 10000 elements)
insert: ~60 microseconds
random lookup: ~12 microseconds
(compare with a random lookup using linear search (search array): 10 ms average, 1000x slower!)
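For readers more comfortable with a text language, here is a rough Python sketch of the same caching pattern, with a dict standing in for the variant-attribute associative array (function and variable names are made up for illustration):

```python
cache = {}                            # dict standing in for the variant-attribute store

def expensive_model(params):
    return [p * p for p in params]    # placeholder for the costly fit calculation

def cached_model(params):
    key = tuple(params)               # "flatten" the inputs into a hashable key
    if key not in cache:              # lookup key
        cache[key] = expensive_model(params)   # insert key/element pair
    return cache[key]

cached_model([1.0, 2.0, 3.0])         # computed and stored
cached_model([1.0, 2.0, 3.0])         # served straight from the cache
```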
 
Thanks! 

 

Add a field which shows the size of the array in the new probe watch window.

 

So simple but so helpful 🙂

LabVIEW loops are quite flexible and allow me to do just about anything I want, but I often find myself writing the same code over and over again trying to iterate across different data types and data structures. There are several situations where smarter looping constructs could greatly simplify my code. One simple example is stepping by a delta between a minimum and maximum value. Currently, I have to calculate the number of iterations required ahead of time (for a for loop) or do a comparison with the maximum (with a while loop) and use shift registers to maintain the intermediate value. I'd like to be able to wire the max, min, and delta to my loop and have LabVIEW do the required calculations for me. The iteration terminal could also adapt to the proper data type given the input parameters. Perhaps the iteration terminal would have two outputs, one with the current iteration count and the other with the proper iteration value.

 

stepped-loop.PNG 
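To make the requested min/max/delta semantics concrete, here is a rough Python sketch (Python used only because the G equivalent is graphical); the stepped_range helper name is made up for illustration:

```python
def stepped_range(minimum, maximum, delta):
    """Hypothetical helper mirroring the proposed two-output iteration
    terminal: yields (iteration count, iteration value)."""
    i = 0
    value = minimum
    while value <= maximum:
        yield i, value
        i += 1
        value = minimum + i * delta   # recompute to avoid accumulating rounding error

for i, x in stepped_range(0.0, 1.0, 0.25):
    print(i, x)                       # 0 0.0 ... 4 1.0
```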

 

Another useful feature would be allowing me to wire a queue reference to the loop count terminal of the for loop and having it automatically pop each value from the queue and feed it to me through the iteration terminal. It would do this until there are no values left in the queue or until the code stops the loop. One could write an algorithm that pushes new points into the queue from within the loop or push the current value back onto the queue for later processing.
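A rough text-language sketch of the queue-fed loop, again just to illustrate the intended behavior (names are made up):

```python
from collections import deque

work = deque([1.0, 2.0, 3.0])        # initial points in the "queue"

while work:                          # run until no values are left
    point = work.popleft()           # the loop hands the next value to the code
    print("processing", point)
    if point == 2.0:
        work.append(20.0)            # the algorithm may push new points back in
```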

 

I'm sure there are other useful iteration strategies that are fairly common; please share them with the community.

This is an extension of Darin's excellent idea and touches on my earlier comment there.

 

I suggest that it should be allowed to mix booleans with numeric operations, in which case the boolean would be "coerced" (for lack of a better word) to 0,1, where FALSE=0 and TRUE=1. This would dramatically simplify certain code fragments without loss in clarity. For example to count the number of TRUEs on a boolean array, we could simply use an "add array elements".
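For comparison, Python already behaves this way, which is roughly the semantics being proposed here:

```python
flags = [True, False, True, True]

# "Add array elements" on a boolean array: count the TRUEs.
true_count = sum(flags)      # True coerces to 1, False to 0 -> 3

# Mixing a boolean with numerics in ordinary arithmetic:
total = 10 + (5 > 3)         # 11, because the comparison coerces to 1
```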

 

(A possible extension would be to also allow mixing of error wires and numerics in the same way, in which case the error would coerce to 0,1 (0=No Error, 1=Error))

 

Here's how it could look (left). Equivalent legacy code is shown on the right. Counting the number of TRUEs in a boolean array is an often-needed function, and this idea would eliminate two primitives. This is only a very small sampling of the possible applications.

 

 

Idea Summary: When a boolean (or possibly an error wire) is wired to a function that accepts numerics, it should be automatically coerced to 0 or 1 of the dominant datatype connected to the other inputs. If there is no other input, e.g. in the case of "add array elements", it should be coerced to I32.

I'd like to have a PDF export function within the Report Generation Toolkit.

That way, no additional software has to be installed on the target system to generate PDF files including graphics and text.

 

The PDF format is the most used format for sharing documents because nearly everybody has a reader installed.

It looks identical on every computer and prints well.

 

Software like OpenOffice and Word, and programming languages like PHP, support the generation of PDF files, so why not LabVIEW?
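For reference, this is roughly what programmatic PDF generation looks like in Python with the reportlab package (the file name and report text are placeholders); the idea is to have something equally direct on the Report Generation palette:

```python
from reportlab.pdfgen import canvas

report = canvas.Canvas("test_report.pdf")          # output file, no Office or viewer install needed
report.drawString(72, 750, "UUT 1234 passed all limits")
report.save()
```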

 

Regards

The Timing palette is looking bad with all these gaps.  A simple fix would be to fill these holes with useful functions. I'm proposing 3 and attaching 2 from my re-use code. (I may re-create the third later.)

timing2.PNG

 

Time to XL.vi (Attached): and its inverse, XL to Time.vi

12:00:00.0 AM Jan 0, 1900 is a pretty common epoch (base date) for external programs, and converting from the LabVIEW epoch shows up several times a year on the forums; Time to Excel has a few solved threads under its belt.  Moreover, for analysis against external data from other environments you are often using Access, Excel, Lotus... all of which share the same epoch (and leap year bug) in their date/time formats.  These VIs have been pretty useful to me, although the names may change to avoid (tm) infringements.
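For anyone curious about the arithmetic behind such a VI, here is a hedged Python sketch; it assumes the LabVIEW epoch of 1904-01-01 00:00 UTC and Excel's 1900 date system, ignores time zones and the pre-March-1900 leap-year quirk, and uses the documented 1462-day offset between Excel's 1900 and 1904 date systems:

```python
SECONDS_PER_DAY = 86400
# The LabVIEW epoch (1904-01-01 00:00 UTC) lands on Excel 1900-system serial day 1462,
# the documented offset between Excel's 1900 and 1904 date systems.
LV_EPOCH_AS_EXCEL_SERIAL = 1462

def labview_to_excel(lv_seconds):
    """LabVIEW timestamp (seconds since 1904-01-01 UTC) -> Excel serial days."""
    return lv_seconds / SECONDS_PER_DAY + LV_EPOCH_AS_EXCEL_SERIAL

def excel_to_labview(excel_days):
    """Excel serial days -> LabVIEW timestamp (seconds since 1904-01-01 UTC)."""
    return (excel_days - LV_EPOCH_AS_EXCEL_SERIAL) * SECONDS_PER_DAY
```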

 

Time to Time of Day.vi (attached) has also been in my arsenal and proves valuable on a few threads per year on the forum.

 

The gaps in the palette make it a perfect fit.

timing.PNG


I recently did some work that required a lot of simple equations. The formula node works fine, but the diagram would have been easier to read if I could have entered equations like this:

 

 Equation Node.PNG

 

Equation Node : Mathcad ::  Mathscript Node : MATLAB

This would replicate the ability of the standard Index Array function to extract a subarray from a multi-dimensional array. Currently both indices must be wired, which only selects scalar elements.

ArrayIndexInPlace.png

 

Another related enhancement that may be useful is to provide the equivalent of "Array Subset" on the In Place Element Structure.
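As a point of reference, NumPy-style indexing in Python shows the two behaviors side by side (the array values are arbitrary):

```python
import numpy as np

a = np.arange(12).reshape(3, 4)   # a 2-D array

element = a[2, 3]     # both indices wired: a single scalar element
row = a[2]            # only the row index wired: a 1-D sub-array

a[2] = row * 10       # whole-row replacement, the kind of in-place access
                      # requested for the In Place Element Structure
```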

 

NI updater kindly informed me that LabVIEW 2014 SP1 was released (even though I uninstalled it shortly after I tried it last year) and out of curiosity, I took a look at the known issues list.

I learned a few interesting things I did not know about, and also that some problems had been reported as long ago as version 7.1.1. This type of stuff looks like bugs that won't be fixed, ever.

For instance, CAR #48016 states that there is a type casting bug in the Formula Node. It was reported in version 8 and the suggested workaround is to use a MathScript Node instead of a Formula Node (where is the "Replace Formula Node with a MathScript Node" contextual menu item?).

Problem: the MathScript RT Module is required. Even in my Professional Development System, this is not included by default. Does this really count as a workaround?

I read: we don't have the resources to fix that bug, or we don't want to break code that expected that bug.

In any case, this bug will most likely never be fixed.

The bottom line is, as users we can waste a lot of time rediscovering bugs that have been known for a while and will probably never be fixed. I would really appreciate a courteous warning from NI that there are known traps, with a complete description readily available in the help entry for the affected function.

 

My suggestion: add a list of known issues (with links to their descriptions) for all objects, properties, functions, VIs, etc., in the corresponding entry in the help file.

I would like to see more native support for compression and encryption/hash algorithms.

 

When it comes to compression and encryption, we are very much the poor relations to other languages, which have a plethora of source code and examples for SHA1, DES, 3DES, Blowfish, HMAC, Rijndael, AES (encryption) and LZO, LZMA, LZW, GZip, JPEG2000 (compression), to name but a few.

 

Apart from "Zip" (which can only be used on files) and "twofish" (which is hardly an industry standard because of security concerns) we have very little choice but to use DLLs and SOs meaning our applications violate one of the biggest reasons for using LabView........cross-platform. We have very little in our tool-kit to make efficient and secure network applications for example. Especially if we are required to interface to existing systems that ave used these technologies for many years.

 

We cannot even write applications to access Twitter any more!

 

With the introduction of "Stream" prototypes in the new LV2010, it is essential to have these tools in our palettes.
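By way of comparison, this is the level of convenience other environments get from their standard libraries; a few lines of Python cover hashing, HMAC and in-memory compression (the key and payload here are placeholders):

```python
import hashlib, hmac, zlib

payload = b"measurement block"        # placeholder data
key = b"shared-secret"                # placeholder key

digest = hashlib.sha1(payload).hexdigest()                  # SHA1 hash
mac = hmac.new(key, payload, hashlib.sha256).hexdigest()    # HMAC-SHA256
packed = zlib.compress(payload)                             # in-memory DEFLATE compression
restored = zlib.decompress(packed)
```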

When you OR two numbers, it does a bitwise OR.  The same should happen when using Or Array Elements, but it is not supported for numeric arrays. I see no reason why it shouldn't work just the same.

 

test.PNG
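For reference, the requested behavior is just a bitwise OR folded across the array, e.g. in Python:

```python
from functools import reduce
from operator import or_

values = [0b0001, 0b0100, 0b0010]

combined = reduce(or_, values, 0)    # 0b0111 == 7, the bitwise OR of all elements
```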

Many common functions include a "found" or "exists" output. Examples include:

  • Get Variant Attribute
  • Element of Set?
  • Config file VIs (Read Key, Write Key, Get Key Names, etc)
  • Check if File or Folder Exists

 

Why then does Look In Map provide an inverted ("not found") output? Wouldn't it be better if it were consistent with other similar functions?

fabric_0-1617240144803.png

This is most frustrating when replacing existing code using variant attribute lookups with equivalent maps. "Look In Map" is pin compatible with "Get Variant Attribute" except for that one inverted output! This has caught me out on more than one occasion...

After applying my own subjective intellisense (see also ;)), I noticed that "replace array subset" is almost invariably followed by a calculation of the "index past replacement". Most of the time this index is kept in a shift register for efficient in-place algorithm implementations (see example at the bottom of the picture copied from here).

 

I suggest new additional output terminals for "replace array subset". The new output should be aligned with the corresponding index input and outputs the "index past replacement" value. This would eliminate the need for external calculation of this typically needed value and would also eliminate the need for "wire tunneling" as in the example in the bottom right. (sure we can wire around as in the top right examples, but this is one of the cases where I always hide the wire to keep things aligned with the shift register).

 

Of course the idea needs to be extended for multidimensional arrays. I am sure it can all be made consistent and intuitive. There should be no performance impact, because the compiler can remove the extra code if the new output is not wired.

 

Several String functions have an "offset past ..." output (e.g. "search and replace string", "match pattern", etc.) and I suggest to re-use the same glyph on the icon.

 

Here is how it could look (left) after implementing the idea. Equivalent legacy code is shown on the right.

 

 

 

Idea Summary: "Replace array subset" should have new outputs, one for each index input, that provides the "index past replacement" position.

 

 

 

 

Many of the Mathematics and Signal Processing VIs retain state, rendering them unusable inside reentrant VIs: http://digital.ni.com/public.nsf/allkb/543589DF37BA48CF8625744A00543FF3

 

Many of the VIs in this list (all those in my current application, unfortunately) can only work with single-channel data. When manipulating multi-channel data, you can work around that fact by running the channels serially through the VI you need, but that (1) takes much longer for large data sets or several channels, and (2) is not an option when performing live manipulation of streaming data block-by-block.

 

I ran into this problem while developing code in the Actor Framework, where Do.vi and Actor Core.vi (the two main framework methods) are both Shared Reentrant. Now that AF is a native feature in LV, I expect that more people will run into problems with these VIs.

 

We need stateless versions of these VIs so we can use multiple copies on a multichannel data set. You could probably keep backward compatibility by pushing the core logic into a new stateless subVI and keeping the shift register or feedback node on the main VI's diagram.
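To illustrate the suggested refactor in a text language: the difference is whether the filter state lives inside the function (shared by every caller) or is passed in and out explicitly so each channel owns its own copy. A hedged Python sketch, with a simple exponential smoother standing in for the library VI:

```python
# Stateful version: internal state, like an uninitialized shift register.
_last = 0.0
def smooth_stateful(x, alpha=0.1):
    global _last
    _last = alpha * x + (1 - alpha) * _last
    return _last                  # every caller shares _last -> single channel only

# Stateless core: the caller owns the state, one copy per channel,
# so any number of channels can be processed independently.
def smooth_stateless(x, last, alpha=0.1):
    y = alpha * x + (1 - alpha) * last
    return y, y                   # output plus the state to feed back next call

states = [0.0, 0.0]               # one "shift register" per channel
for block in [(1.0, 5.0), (2.0, 4.0)]:
    for ch, x in enumerate(block):
        _, states[ch] = smooth_stateless(x, states[ch])
```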

TensorFlow is an open source machine learning tool originally developed by Google research teams.

 

Quoting from their API page:

 

TensorFlow has APIs available in several languages both for constructing and executing a TensorFlow graph. The Python API is at present the most complete and the easiest to use, but the C++ API may offer some performance advantages in graph execution, and supports deployment to small devices such as Android.

Over time, we hope that the TensorFlow community will develop front ends for languages like Go, Java, JavaScript, Lua, R, and perhaps others.

 

Idea Summary: I would love to see LabVIEW among the "perhaps others". 😄

 

(Disclaimer: I know very little in that particular field of research)

 

The In Range and Coerce function is frequently used to determine whether a value is within the range set by upper and lower limit values.

 

But when it is out of range, you often also want to know whether the value is out of range too high or out of range too low.  It is easy enough to add a comparison function alongside and compare the original value to the upper limit, but that is another primitive and 2 more wire branches.  Since comparison is one of the primary purposes of the In Range and Coerce function, why shouldn't it be built into it?

 

The use case that made me think of this, which has come up in my code a few times over the years, is any kind of thermostat-style control, particularly one with hysteresis built into it.  If a temperature is within range (True), then you would often do nothing.  If it is lower than the lower limit, you'd want to turn on a heater.  If it is higher than the upper limit, then you'd turn off the heater.  (Or the opposite if you are dealing with a chiller.)

 

Request: Add an additional output to In Range and Coerce that tells whether the out-of-range condition is higher than the upper limit or lower than the lower limit.

 

(That does leave the question of what the value of this extra output should be when the input value is within range.  Perhaps the output should not be a boolean, but a numeric value of 1, 0, or -1, or perhaps an enum.)
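A rough Python sketch of the proposed behavior, using the -1/0/+1 variant (names are made up for illustration):

```python
def in_range_and_coerce(x, lower, upper):
    """Sketch of In Range and Coerce with the proposed extra output:
    -1 = below the lower limit, 0 = in range, +1 = above the upper limit."""
    if x < lower:
        return lower, False, -1
    if x > upper:
        return upper, False, +1
    return x, True, 0

coerced, in_range, side = in_range_and_coerce(17.0, 18.0, 22.0)
if side == -1:
    heater_on = True          # too cold: turn the heater on
elif side == +1:
    heater_on = False         # too hot: turn the heater off
                              # side == 0: leave the heater alone (hysteresis)
```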

There are a plethora of timestamp formats used by various operating systems and applications. The 1588 internet time protocol itself lists several. Windows is different from various flavors of Linux, and Excel is very different from about all of them. Then there are the details of daylight savings time, leap years, etc. LabVIEW contains all the tools to convert from one of these formats to another, but getting it right can be difficult. I propose a simple primitive to do this conversion. It would need to be polymorphic to handle the different data types that timestamps can take. This should only handle numeric data types, such as the numeric Excel timestamp (a double) or a Linux timestamp (an integer). Text-based timestamps are already handled fairly well. Inputs would be timestamp, input format type, output format type, and error. Outputs would be resultant format and error.
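As an illustration of the arithmetic such a primitive would wrap, here is a hedged Python sketch converting between Excel serial dates and Unix time; it assumes the usual 25569-day offset (the Excel 1900-system serial for 1970-01-01) and ignores time zones and DST:

```python
from datetime import datetime, timezone

SECONDS_PER_DAY = 86400
EXCEL_UNIX_OFFSET_DAYS = 25569      # Excel 1900-system serial for 1970-01-01

def excel_to_unix(excel_days):
    """Excel serial date (double) -> Unix timestamp (seconds since 1970)."""
    return (excel_days - EXCEL_UNIX_OFFSET_DAYS) * SECONDS_PER_DAY

def unix_to_excel(unix_seconds):
    """Unix timestamp -> Excel serial date."""
    return unix_seconds / SECONDS_PER_DAY + EXCEL_UNIX_OFFSET_DAYS

print(datetime.fromtimestamp(excel_to_unix(43831.0), tz=timezone.utc))
# 2020-01-01 00:00:00+00:00
```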