LabVIEW Idea Exchange

I know this idea has already been proposed in A new head-controlled while loop, but it was closed because it didn't receive more than 4 kudos. That was 3 years ago, and I never got to vote because I wasn't aware the post existed. I'd like to re-open the discussion. For how long do we keep suggested topics closed just because there wasn't enough interest at the time? It is not all that difficult, and yet it is 2019 and NI still hasn't solved this (just as squaring an input with units doesn't square the unit).

 

I have been programming a lot in LabVIEW and generally love it, but I also find it quite frustrating that you cannot do the classic "check condition first, then run". I also find the classic counter-arguments unconvincing.

 

Some argue that it is easily solved using a case structure and a while loop, which is not equivalent. One solution is to put the while loop inside the case structure, which forces you to duplicate the conditional code, and good programmers don't like duplicate code. (You could create a subVI, but that also adds overhead, both for the programmer and for the program at run time.) Another solution, as some suggested here, is to put the case structure inside the while loop, but then if you are indexing the output, you get an array with one element, whereas you probably wanted an empty array. In that case you can work with conditional terminals, but that is also a lot of overhead, and there is even a bug in LabVIEW when you output empty multidimensional arrays this way: if you later try to iterate over one, a for loop will execute its body even though the array is "empty".

 

I also find the argument that it is a rare case unconvincing: I have run into it often myself, and the whole point of a while (or conditional for) loop is that you don't know beforehand how many times you will have to iterate. So it may well be that you don't need to iterate even once. That sounds reasonable, no? Otherwise you would use a for loop. It is actually the other way round: the classic "do while" is the rare case (which is why textbooks teach the while loop first).
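For readers coming from text languages, here is a minimal Python sketch of the distinction (the loop bodies are placeholders, just to show the zero-iteration case):

    def double_all(items):
        """Pre-check ("while"): the body may run zero times, so the
        result can legitimately be an empty list."""
        results = []
        i = 0
        while i < len(items):          # condition checked BEFORE each iteration
            results.append(items[i] * 2)
            i += 1
        return results                 # [] when items is empty

    def double_at_least_once(items):
        """Post-check ("do while"), which is what a LabVIEW while loop does:
        the body always runs at least once, so an empty input still produces
        one (unwanted) element when the output is indexed."""
        results = []
        i = 0
        while True:
            results.append(items[i] * 2 if i < len(items) else None)
            i += 1
            if i >= len(items):        # condition checked AFTER the body
                break
        return results

    print(double_all([]))             # []      -> the empty array you usually want
    print(double_at_least_once([]))   # [None]  -> one element, as with the case-structure workaround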

 

I also don't agree with the claim that the way LabVIEW works makes this impossible. I understand that you need a value for non-indexing terminals, but those would simply not be supported in this type of loop. A "check first, then run" kind of loop would only work with indexing terminals (which can be empty) and shift registers (which can take the value they were initialized to).

A separate part of the loop's body would be reserved for the conditional code (for instance a rectangle fixed at the lower left of the while loop rectangle), just like in most text languages, where you have to write the condition in brackets as part of the loop statement...

 

This type of loop would also solve the problem that you often have to delete the last element of your indexed array because the condition you were building it for was no longer true (but the loop executed one more time before deciding that).

 

Lastly, it would give new LabVIEW programmers coming from any other language more familiarity with what they already know. This is arguably a rather big flaw in LabVIEW (see the argumentation above): the construct is common and basic in other languages, and you are likely to stumble on its absence soon after you start experimenting in LabVIEW. That alone might make you turn around right away, as was almost the case when I first started programming in LabVIEW and didn't yet know how beautiful and well designed the rest of the language and environment is.

Wiring a complex time waveform or complex WDT to a graph plots only the real ("I") portion of the complex signal. It should plot the real and imaginary parts as two overlaid plots. I always have to do that by hand when plotting complex data, which wastes time and clutters the diagram.
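As a rough text-language analogy of the requested behaviour (a matplotlib sketch with a made-up complex signal):

    import numpy as np
    import matplotlib.pyplot as plt

    # Hypothetical complex waveform, purely for illustration
    t = np.linspace(0.0, 1.0, 500)
    y = np.exp(2j * np.pi * 5 * t)

    # Plot the real and imaginary parts as two overlaid traces,
    # which is what the idea asks the graph to do automatically.
    plt.plot(t, y.real, label="Re")
    plt.plot(t, y.imag, label="Im")
    plt.legend()
    plt.show()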

 

The data reduction algorithm used by Citadel for logging values inherently assumes that only the absolute magnitude of the change is relevant; that means that a single value for the resolution and the deadband is sufficient for the whole value range to be monitored.

There are situations, though, where the underlying value could span several orders of magnitude. In such cases, it would be useful to set deadbands on the minimal relative change, instead.

The current workarounds are either to reduce the deadband to a tiny value (thus logging far more data than needed) or to log the logarithm of the value, but that involves an additional step when writing to the SV and when reading histories. It would be nice to have a built-in option for this.
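A minimal sketch of the difference between the two logging criteria (hypothetical helper names, not Citadel's API):

    def should_log_absolute(new, last_logged, deadband):
        # Current behaviour: log when the absolute change exceeds the deadband.
        return abs(new - last_logged) > deadband

    def should_log_relative(new, last_logged, rel_deadband):
        # Requested option: log when the relative change exceeds the deadband,
        # so values spanning several orders of magnitude are handled uniformly.
        return abs(new - last_logged) > rel_deadband * abs(last_logged)

    # With a 1 % relative deadband, a change of 0.5 is logged around a value
    # of 10 but ignored around a value of 1000.
    print(should_log_relative(10.5, 10.0, 0.01))      # True
    print(should_log_relative(1000.5, 1000.0, 0.01))  # False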

 

One of the more important and useful functions, especially for exploratory data analysis in R, is the ability to take N samples from an input set, with or without replacement.

 

LabVIEW should have that.  It would be relatively straightforward, and would start moving G toward more overlap with R.

 

https://www.rdocumentation.org/packages/base/versions/3.5.1/topics/sample
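For reference, the Python standard library offers the same primitive; the snippet below just illustrates the two sampling modes being requested:

    import random

    data = [3.2, 5.1, 4.8, 7.0, 6.3]

    # N samples without replacement (each element picked at most once)
    print(random.sample(data, k=3))

    # N samples with replacement (elements may repeat)
    print(random.choices(data, k=3))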

 

Personally, I think you should look at the NIST EDA techniques library and build a toolset off of it. https://www.itl.nist.gov/div898/handbook/eda/section3/eda3.htm

 

It is tax-funded source code, so it is something you should be able to access and integrate without too much work.

https://www.itl.nist.gov/div898/handbook/dataplot.htm

We can get the top level VI using property nodes / call chain, and we can also determine the running state of any specific VI (given a path).

It would be really useful to know which VI is running at any point in time (without resorting to execution highlighting) - not really applicable to time-critical applications, of course.

Uses for this include:

1) The ability to change a VI's execution state from modal to normal with custom-created drop-ins, if required.

2) Automation - it would be really useful to retrofit projects with a tool/drop-in for soak testing / debugging that detects certain windows - normally modal dialogs - and enters data in them to allow the program to continue. (I know these features can be built in at the start of a project, but dropping them in later would be very powerful for pre-existing projects and would allow the use of the NI dialogs.)

3) Simple debugging - the code is stuck and execution highlighting is not showing where. This would give the answer.

 

James

When FFT data is shown, the most interesting data occurs as vertical lines or peaks. When interrogating the data with a plot cursor, there is no cursor type that shows the data underneath the cursor. See the plots below.

 


The line style of the cursor can be changed to some type of "dashed" line to show some of the data, but there is currently no cursor style that shows a draggable line without covering the data.

One possible solution is to add a selectable cursor type that shows the line above the Y value but not below it. When the cursor is moved to a peak, the Y value of the cursor sits at the peak; there is a small circle or box, and a line extends from the Y value upward, with no line below the Y value going downward.

Current cursor style.PNG
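As a rough illustration of the proposed style (a matplotlib sketch with made-up data, not the actual graph cursor API):

    import numpy as np
    import matplotlib.pyplot as plt

    # Hypothetical spectrum with a sharp peak, purely for illustration
    f = np.linspace(0.0, 100.0, 1000)
    mag = np.exp(-((f - 40.0) ** 2) / 0.5) + 0.02 * np.random.rand(f.size)

    cursor_x = 40.0
    cursor_y = mag[np.argmin(np.abs(f - cursor_x))]

    plt.plot(f, mag)
    # Proposed cursor: a marker at the data point and a line extending only
    # upward from the Y value, leaving the data below the peak uncovered.
    plt.plot(cursor_x, cursor_y, "o")
    plt.vlines(cursor_x, cursor_y, mag.max() * 1.1, linestyles="dashed")
    plt.show()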

 

 

It'd be nice to have compound arithmetic functions "Greatest" and "Least" that output the largest or smallest of the input values. It could look like this, but with re-sizable input nodes.

 

Also, I think the top way is the best way to find the greatest in a set, but let me know if there is a better way. 
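In text-language terms, the requested primitive is just an n-ary max/min; Python shown purely for comparison:

    # "Greatest" and "Least" over an arbitrary number of inputs
    values = (3.1, 7.4, -2.0, 5.5)

    greatest = max(values)   # 7.4
    least = min(values)      # -2.0
    print(greatest, least)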

Two recommendations:

 

1) Add an OSI Read VI for N wavelength spectra that returns each wavelength spectrum as a waveform (i.e. an N x number-of-channels array of waveforms). This will provide the time stamp of each wavelength spectrum, and will also be more efficient than reading one set of channel wavelength spectra at a time.

 

2) Add a VI that applies the wavelength scaling to wavelengths.  This will allow the user to apply their own power spectrum peak search algorithm to wavelength spectra, whilst still applying the current wavelength scaling algorithm. This will free the user from having to use the OSI peak search algorithm.

 

We have found that the peak search algorithm utilised by NI-OSI performs poorly when the peaks in the power spectrum are of poor quality, and that a much simpler algorithm yields far better performance. Our algorithm simply reports the maximum-power wavelength within the sensor wavelength range if it is above a minimum threshold. This is in significant contrast to the NI-OSI algorithm, which searches for peaks across the entire power spectrum using four tuning parameters, identifies all detected peaks that fall within the sensor wavelength range, and then reports the one with the maximum wavelength.
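A minimal sketch of that simpler approach (NumPy, with hypothetical variable names, not the NI-OSI implementation):

    import numpy as np

    def peak_wavelength(wavelengths, power, wl_min, wl_max, threshold):
        """Report the maximum-power wavelength inside the sensor range,
        or None if nothing in that range exceeds the threshold."""
        in_range = (wavelengths >= wl_min) & (wavelengths <= wl_max)
        if not np.any(in_range):
            return None
        idx = np.argmax(np.where(in_range, power, -np.inf))
        return wavelengths[idx] if power[idx] >= threshold else None

    wl = np.linspace(1540.0, 1560.0, 2001)        # made-up wavelength axis (nm)
    p = np.exp(-((wl - 1550.2) ** 2) / 0.01)      # made-up power spectrum
    print(peak_wavelength(wl, p, 1549.0, 1551.0, 0.5))   # ~1550.2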

 

 

For reference, we have a PXIe-4844, and are using NI-OSI 16.0.

Definition of the problem:

To find the scope of a method in code quickly, there are two options:

  1. Place it in another VI or in a child class method; if it breaks, you know the scope.
  2. Look in the project explorer and find it there.

It would be nice if we could avoid the two methods above for finding out the scope.

 

Current solutions:

Of course we can add text to the icon (limited space), or rename the method so it has the private/public/community designation in its name (long file names and a lot of work when changing the access scope), or even add the scope to the description, which requires the context help window to be open.

 

But there is a more elegant solution: in the project explorer there is a way to see the scope of the methods at a single glance, from the glyphs used on the icons.

 

Project Explorer.png

 

So incorporating this into the icons of the methods makes sense. Even better, there are already glyphs for protected and private methods.

 Proposed Icons.png

 

Proposed Idea:

Changing the access scope to private/protected/public or community would automatically add a layer with the corresponding glyph for that access scope. This makes it clear what the scope of the methods is, and thus avoids breaking the code, using long names, writing it in the icon, or placing it in the description. Automating this also removes a lot of extra clicks to get the same result.

This NIST site defines a non-integer factorial:

http://www.itl.nist.gov/div898/handbook/pmc/section3/pmc32.htm

 

The result of the computation is used in statistical process control.
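In text-language terms, the non-integer factorial is just the gamma function shifted by one; here is a small sketch of how it feeds the c4 unbiasing constant used in control charting (my own illustration, not NIST's code):

    from math import gamma, sqrt

    def factorial(x):
        # Non-integer factorial: x! = Gamma(x + 1)
        return gamma(x + 1.0)

    def c4(n):
        # Unbiasing constant for the sample standard deviation, built from
        # two non-integer factorials as on the linked NIST page.
        return sqrt(2.0 / (n - 1)) * factorial(n / 2 - 1) / factorial((n - 1) / 2 - 1)

    print(factorial(0.5))   # ~0.8862 (= sqrt(pi) / 2)
    print(c4(10))           # ~0.9727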

 

I think that LabVIEW should keep its VIs compatible with NIST and "engineered for process control", seeing how they control processes. The control shouldn't be exclusively about operation; it should also have bounds and quick/effective alerting for an out-of-control condition (OOC).

The signal processing VIs tend to support only DBL waveforms and waveform arrays. Since waveforms can themselves carry many data types, the signal processing VIs should also be able to adapt to non-DBL waveform data - EXT at the very least.

Excel has been around forever, and that means it is optimized for a mix of demographics.  Something qualitatively similar can be said for most tools that have been around for a while, but what differentiates an end-user group includes the "canonical use case for the feature" and market/economics.  (I am convinced that if NI could make a sustainable/defensible few billion per year with a spreadsheet, it would make a spreadsheet product.  It is business 101.)

 

One of the tools that Excel, Open/LibreOffice, MATLAB, R, and Python have that LabVIEW doesn't really have (MathScript has it, but it isn't integrated) is a variation on "Goal Seek" [1, 2, 3a/3b, 4, 5].  It is trivially easy to say "make the output of my function equal this, by changing this input" in all of those.  It can be done in LabVIEW, but it is not a "one-liner", and that comes from the UI being engineered around software creation but not around data exploration.

 

I recommend a tool that works on the front panel of a VI and lets me attempt to set an output to a particular value by varying an input.  It could work with fzero, fsolve, or any number of optimization routines.
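For comparison, this is the kind of one-liner being described in other tools (a SciPy sketch with a made-up model function standing in for the VI):

    from scipy.optimize import brentq

    # Hypothetical front-panel relationship: output as a function of one input
    def output(x):
        return x ** 3 - 2 * x       # stand-in for "run the VI with input x"

    target = 5.0

    # Goal seek: find the input that makes output(x) equal the target,
    # searching a bracket known to contain the solution.
    x_solution = brentq(lambda x: output(x) - target, 0.0, 10.0)
    print(x_solution, output(x_solution))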

 

The demographic here is different from the conventional LabVIEW market in that it is the data explorer.  This is for folks who want to understand something new, not wrap an understanding they already have into a package.  Instead of capturing data in LabVIEW and processing it in another tool, why not start moving the line so more is done in LabVIEW?  Greater return of value for the same price means continuous growth of market share.  That isn't a bad thing.

Quite by accident, I stumbled upon an inconsistency in LabVIEW's "Square" function.  For almost all operations, you get the same result if you wire a numeric or "structured" numeric (like an Array or Cluster) to either Square or to both inputs of Multiply.  Furthermore, you get the same (or a close approximation to the same) result if you compare the input to Square to the output from Square -> Square Root.  Here's an example for a 10x10 Array of 1's -- if you run this, both Booleans will return True.  Note that I'm testing for "Equality of Dbls" by subtracting them, taking their absolute value, and saying they're equal if their difference is <= 0.000000001.

Square Works for Array.png

 

However, if you transform the square Array into a square Matrix, Multiply still does what Multiply should (mathematically) do for a Matrix, namely a matrix multiplication (giving a 10x10 Matrix of 10s), but Square, instead, simply squares each Matrix entry, giving a 10x10 Matrix of 1s.  Curiously, the Square Root function does do a proper matrix square root, i.e. it produces the Matrix that, when multiplied by itself, gives the argument to Square Root (shown by "Square Root is Inverse of Multiply").

Square Fails for Matrices.png

So here we have a double inconsistency.  We have a Square function that does not always produce the same quantity as "multiply something by itself", and a Square function that is not the inverse of the Square Root function, meaning you don't get back the same thing if you Square (a Matrix) and then take its Square Root, or vice versa.
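For comparison, here is the same distinction in NumPy/SciPy terms (a small, well-conditioned symmetric matrix is used instead of the all-ones matrix from the example only because the all-ones matrix is singular):

    import numpy as np
    from scipy.linalg import sqrtm

    A = np.array([[2.0, 1.0],
                  [1.0, 3.0]])

    elementwise = A ** 2      # what LabVIEW's Square currently does to a Matrix
    matrix_square = A @ A     # what Multiply does, i.e. a true matrix product

    print(np.allclose(elementwise, matrix_square))   # False: the two differ

    # A proper matrix square root inverts the matrix product, as LabVIEW's
    # Square Root already does, which is why Square and Square Root disagree.
    root = sqrtm(matrix_square)
    print(np.allclose(root @ root, matrix_square))   # True
    print(np.allclose(root, A))                      # True (A is symmetric positive definite)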

 

There is no "warning" (that I've seen) that Square has this anomalous behavior.  It was pointed out to me (by an NI AE) that there is a listing of functions that "work" for Matrices, listing Square Root, but not Square.

 

At the risk of potentially causing code written with the "old" mathematically-wrong Square function to fail if migrated to a function that (mathematically) works properly for Matrices (i.e. is the inverse of Square Root and gives the same thing as "multiply by yourself"), I recommend that NI:

  • Fix the Square function for Matrices so that it multiplies the matrix by itself, and
  • Prominently note that the functionality of Square has been "corrected in this Version of LabVIEW", including on the Help for the Square function.

Bob Schor

 

P.S. -- I searched the Idea Exchange for "Matrix" and found no other mention of this "bug".  It occurs to me that while NI might be reluctant to admit this is a clear bug and to fix it, potentially causing "old code moved to a new Version" to break, I strongly suspect that anyone actually using matrices in LabVIEW would already "know" that Square is broken and therefore not use it for fear of having code that "doesn't do what it seems to say it is doing", a further argument that this should really be fixed!

Every time I want to use the Pick Line String function, it's because I want to process a string line by line.

However, its usage is limited by the fact that it does not have a Boolean output that goes true on the last line of the string.

 

This would make it easier to use in a while loop for instance, where the last line can terminate the loop. 

If you go past the end of the string, it returns an empty string - the same as an empty line...

Using a unique marker string not seen in the file, e.g. "<EOF>", also doesn't help if a line is empty.

 

Pick Line.png

 

P.S. The documentation is not clear on whether the picked line includes the end-of-line character(s)... It doesn't.

 

So I try to use Pick Line, realise (again) its drawback, and then have to think of an alternative... which is easy enough, as there are many ways to do it... but my initial thought was to match the end-of-line character and split the string...

 

Pick Line work around.png
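In text-language terms, the missing piece is just a "last line" flag while iterating; a Python sketch of the intent (not the VI itself):

    text = "first line\n\nthird line"      # note the genuinely empty second line

    lines = text.split("\n")
    for i, line in enumerate(lines):
        is_last = (i == len(lines) - 1)    # the Boolean this idea asks Pick Line to output
        print(repr(line), is_last)

    # An empty string here always means an empty line, never "past the end",
    # so a loop can terminate cleanly on the real last line.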

 

 

In the documentation of the Divide Quaternions function (in the Quaternion subpalette of the Robotic Arm subpalette of the Robotics Module), it is never mentioned whether, when dividing quaternion by y, the result is (quaternion * y^(-1)) or (y^(-1) * quaternion). As these two are in general different, this seems important.
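To illustrate why the order matters, here is a small Python sketch with a hand-rolled Hamilton product (my own illustration, not the Robotics Module's code):

    def q_mul(a, b):
        # Hamilton product of quaternions given as (w, x, y, z)
        w1, x1, y1, z1 = a
        w2, x2, y2, z2 = b
        return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
                w1*x2 + x1*w2 + y1*z2 - z1*y2,
                w1*y2 - x1*z2 + y1*w2 + z1*x2,
                w1*z2 + x1*y2 - y1*x2 + z1*w2)

    def q_inv(q):
        w, x, y, z = q
        n = w*w + x*x + y*y + z*z
        return (w/n, -x/n, -y/n, -z/n)

    q = (1.0, 1.0, 0.0, 0.0)   # example quaternions, purely illustrative
    y = (1.0, 0.0, 1.0, 0.0)

    print(q_mul(q, q_inv(y)))  # q * y^-1  -> (0.5, 0.5, -0.5, -0.5)
    print(q_mul(q_inv(y), q))  # y^-1 * q  -> (0.5, 0.5, -0.5,  0.5)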

If you need to compare two VI hierarchies in memory, it would be nice if the compare tool took into account the current Conditional Disable Symbols for the files.  (I know it might not be possible for VIs loaded from file, but ones in memory would be compiled according to the symbol state, wouldn't they?)

 

I currently have a project with multiple sub-projects that use conditional disable structure values to allow for various deployment options. When I need to debug a specific sub-project, I generally start with a compare against the project's 'base' sub-project to verify that the sub-project is up to date with any revisions.  It would be nice to be able to verify at the same time that any conditional symbols are set correctly (e.g. 'base' has all conditional symbols set to 'FALSE', while sub-project 'A' has two of the three symbols set to 'TRUE').

Currently, doing a compare will not catch this; it will consider the conditional disable structures the same, even though the actual code that gets compiled (and is not greyed out) is different.

 

So my suggestion is: let the Compare tool take into account which subdiagram of a conditional disable structure is the 'active' one when doing a compare.

There is some analytic software that adds connections to Excel.  JMP comes to mind - it allows two-way data transfer, and allows Excel to run some of JMP's analysis routines.  Those packaged routines are vetted by PhDs and can't be quickly damaged by a tired user.

 

This details how JP Morgan lost a few billion dollars because they were using Excel for hard math and someone had an error in an equation.

 

Excel is likely in wider use across the planet than either JavaScript or SQL.

 

It might be very powerful to be able to import an Excel spreadsheet, specify inputs, specify outputs, and have it translated to G.  This could allow it to run thousands of times faster, and the visual language would allow a kind of technical vetting that can be very hard for humans to do on hundreds and hundreds of pages of tables of numbers or formulas.

 

This would be a motive for banks like JP Morgan (or their peers) and their multi-billion dollar business units to use LabVIEW.  Stunning speed, optimization, and scale.  Stunning accessibility to error correction.  Packages that, once wrapped, stay wrapped, but can plug in where they need to.

 

Other add-ins or connectors for Excel:

It is unlike the TDM-importer in that it sends data to LabVIEW, the data is transformed or processed, and a numeric, text, or image result is returned to Excel.

When you have a probe to see the data in an array, you see all elements in one row (in the Value column) and just one element in the Probe Display area. If you have a large array, it is difficult to get a good overview.

 

My suggestion is that in the part of the probe window called Probe Display, you should be able to pull the bottom right corner and see several elements simultaneously.

 

Probe 4.png

 

An extra feature would be to also show the array size somewhere in the probe window.

The Report Generation Toolkit includes Excel Easy Table, which allows either text (2D string arrays) or numeric (2D DBL arrays) data to be written to Excel.  The function is written as a polymorphic VI to handle the two types of input.  However, when processing numeric input, an inner function called "Append Numeric Table to Report (wrap)" converts the numeric data to a string array using the format string %.3f.  This is, in fact, a control input of that function, but its caller does not wire the input, forcing the numeric data to be truncated to three decimal places.

 

I suggest that either the default be changed to %.15f (or something similar) to preserve the precision of the input data, or the format string be "brought out" to the user (though there are no free connector slots) to allow the user to control the precision.
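A quick illustration of the precision loss (Python formatting, but the format-string semantics are the same):

    x = 123.456789012345

    print("%.3f" % x)    # '123.457' -> the unwired default keeps only three decimal places
    print("%.15f" % x)   # keeps essentially the full precision of the DBL value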

It would be useful if there were functionality within the VI Analyzer, under the style section, that checked for overlapping objects on the block diagram. This would allow you to check readability and catch some mistakes. For example, it is possible to copy and paste while loops on top of each other. The VI Analyzer should be able to tell you that there are two while loops overlapping, to help with style and debugging.