LabVIEW Idea Exchange


Support project-based VI Analyzer configurations that contain target-specific information (important, for example, to correctly analyze FPGA VIs) and that can be executed through the VI Analyzer API, so analysis can be integrated into a CI process.

NXG needs an Idea Exchange.  The feedback button is a lame excuse for a replacement.  Why?

 

  • I can't tell if my idea has been suggested before.  (And maybe someone else's suggestion is BETTER and I want to sign onto it, instead.)
  • NI has to slog through bunches of similar feedback submissions to determine whether or not they are the same thing.
  • Many ideas start out as unfocused concepts that are honed razor sharp by the community.
  • This is an open-loop feedback system.

Let's make an Idea Exchange for NXG!

A Diagram Disable structure executes all the code that is not inside of it, but not what is inside of it. I think it would be helpful to have the opposite: put a structure around a snippet of code and execute only what is inside the structure. I usually copy and paste the snippet into a new VI, but it would be helpful to have that functionality within the VI I'm working on.

While trying to make a malleable VI for some set/map functionality, I could not find a good way to break the VIM call when the input is not a set/map. It would be helpful to have an Assert Set and an Assert Map method added to the Assert Type palette for this.

Related post 

LabVIEW AF is a fantastic tool for creating multi-threaded messaging applications the right way. It promotes best practices and helps write big yet scalable applications.

I feel, however, that there is not enough included in the box 🙂 When looking at many other industry implementations of the actor model, e.g. Scala Akka, Erlang, Elixir, microservices in many languages, etc., you find that the frameworks usually include many more features. https://getakka.net/

 

I understand the design decisions behind AF, yet I would like to start a discussion about some of these.

I would like to see the following features included in AF:
1. Ability to get an actor's enqueuer by path through the hierarchy of actors, e.g. /root/pactor/chactor could be used to get the enqueuer of chactor. This would probably require actors to have unique paths pointing to them. Having a system manager that keeps all enqueuers on the local system is not a bad thing. It is a good thing.

2. Ability to seamlessly communicate over the network and create your own queue specifications. For example, actors should be able to talk between physical systems. Using the Network Actor is not a good solution, because it is simply too verbose and difficult to understand. The philosophy of Akka should be embraced here: "This effort has been undertaken to ensure that all functions are available equally when running within a single process or on a cluster of hundreds of machines. The key for enabling this is to go from remote to local by way of optimization instead of trying to go from local to remote by way of generalization. See this classic paper for a detailed discussion on why the second approach is bound to fail."

3. Improved debugging features, like the MGI Monitored Actor being built in. https://www.mooregoodideas.com/actor-framework/monitored-actor/monitored-actor-2-0/

4. An included Subscriber/Publisher communication scheme as a standard way of communicating outside the tree hierarchy. https://forums.ni.com/t5/Actor-Framework-Documents/Event-Source-Actor-Package/ta-p/3538542?profile.language=en

5. Certain templates, standard actors, HALs, and MALs should be part of the framework, e.g. a TDMS Logger Actor and a DAQmx Actor. Right now the framework feels naked when you start with it, and substantial effort is required to prepare it for real application development. The fact that standard solutions would not be perfect for everyone should not prevent us from providing them, because 80% of programmers will still benefit.

6. Interface-based messaging. The need to create messages for all communication is a major drag on productivity and on the speed of actor programming. It also decreases readability. It is better with the BD Preview in the Choose Implementation dialog in LV19, but still 🙂

7. This is more of an object-orientation thing than an actor thing, but it would have huge implications for the Actor Core VI or the Receive Message VI. Please add pattern matching to OO LV. It could look like a case structure adapting to a class hierarchy on the case selector and doing the type casting to the specific class inside the case structure (see the sketch after this list). You could have dynamic, class-based behavior without creating dynamic dispatch VIs, and you would still keep everything type safe. https://docs.scala-lang.org/tour/pattern-matching.html
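A rough sketch of what this could feel like, using Python 3.10 structural pattern matching as a stand-in for a hypothetical class-selector case structure; the message classes below are invented purely for illustration.

```python
from dataclasses import dataclass

class Message: ...          # root of a hypothetical message hierarchy

@dataclass
class StartAcq(Message):
    rate_hz: float

@dataclass
class StopAcq(Message):
    flush: bool = True

def receive(msg: Message) -> str:
    # One "case structure" whose cases are classes: each arm both selects
    # on the concrete type and safely downcasts, without a dynamic
    # dispatch VI per message.
    match msg:
        case StartAcq(rate_hz=rate):
            return f"starting acquisition at {rate} Hz"
        case StopAcq(flush=flush):
            return f"stopping acquisition, flush={flush}"
        case _:
            return "unhandled message"

print(receive(StartAcq(1000.0)))    # starting acquisition at 1000.0 Hz
```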

 

The natural way for programming languages to evolve is to take ideas from the community and build them into the language. This needs to become the LabVIEW way as well. I saw a demo of features of LV20 and I am really happy to see that many new features were inspired by the community. Let's take some ideas about actors and add them in.

 

I wanted to share my view on the direction I think AF should go. I see in it the potential to become the standard way to do big applications, but for that we need to minimize the barrier to entry.

A graphical programming language deserves to have great graphical tools representing the design of big applications.

 

It is possible to use scripting to determine whether a method of a class has another class as an input or output terminal, and whether a class composes another class in its private data. The only thing I cannot do myself right now is show this information in the Class Hierarchy window. Representing the usage and composition relationships only requires parsing through the project once, and then again every time it changes.

I am currently using a script to parse through all project classes to determine which have what relationships, which are actors and which are messages, and I draw a PlantUML diagram from that. The intermediate step is to generate PlantUML text, but this should be integrated into LV.


https://forums.ni.com/t5/LabVIEW-APIs-Discussions/Project-to-Plant-UML-Diagram-Script/td-p/3984499
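This is not the script from the linked thread, just a minimal sketch of the intermediate step it describes: turning a table of harvested class relationships into PlantUML text. The class names and relationship kinds are invented for illustration.

```python
# Relationships a scripting pass might harvest:
# (owning class, related class, kind), where "composes" means the class
# appears in private data and "uses" means it appears on a method terminal.
relationships = [
    ("Controller.lvclass", "Logger.lvclass", "uses"),
    ("Logger.lvclass", "FileWriter.lvclass", "composes"),
]

def to_plantuml(rels):
    lines = ["@startuml"]
    for owner, other, kind in rels:
        arrow = "*--" if kind == "composes" else "..>"
        lines.append(f'"{owner}" {arrow} "{other}"')
    lines.append("@enduml")
    return "\n".join(lines)

print(to_plantuml(relationships))
```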

(Attached images: 5.png, 6.png)

 

Please improve the readability of object-oriented projects with additional tools for LabVIEW. This is especially critical for NXG, where the ability to understand an OO project is currently even more limited than in LV19.

I know this idea has been coined already in A new head-controlled while loop, though it was closed because it didn't receive more than 4 kudos. That was 3 years ago, and I didn't get to vote because I wasn't aware that post was out there. I'd like to re-open the discussion. Or for how long do we close suggested topics because there was not enough interest at the time? It is not all that difficult, and yet it is 2019 and NI still hasn't solved this (just as taking the square of an input with units doesn't square the unit).

 

I have been programming a lot in LabVIEW and generally love it, but I also find it quite frustrating that you cannot do the classic "check condition first, then run". I also find the classic arguments against it not valid.

 

Some argue that it is easily solved using a case structure and a while loop, but that is not equivalent. One solution is to put the while loop inside a case structure, which forces you to duplicate the conditional code, and good programmers don't like duplicate code. (You could create a subVI, but that also creates overhead, both for the programmer and for the program at run time.) Another solution, as some have suggested, is to put the case structure inside the while loop, but then, if you are indexing the output, you output an array with one element, whereas you probably wanted an empty array. In this case you can work with conditional terminals, but that is also a lot of overhead, and there is even a bug in LabVIEW when you output empty multidimensional arrays this way: if you later try to iterate over one, a for loop will execute its contents even though the array is "empty".

 

I also find the argument that it is a rare case not compelling. I myself have run into this often while programming, and the whole point of a while (or conditional for) loop is that you don't know beforehand how many times you will have to iterate. So it may well be that you don't need to iterate even once. That sounds reasonable, no? Otherwise you would use a for loop. It is actually the other way around: the classic "do while" is the rare case (which is why textbooks teach the while loop first).
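For readers coming from text languages, here is the distinction sketched in Python: a head-controlled loop can run zero times, whereas a do-while style loop (like LabVIEW's While Loop) always runs at least once. The empty buffer is just a made-up example.

```python
samples = []          # pretend the data to process is already empty

# Head-controlled ("check first, then run"): the body never executes,
# so an indexed output would naturally be an empty array.
doubled = []
i = 0
while i < len(samples):
    doubled.append(samples[i] * 2)
    i += 1
print(doubled)        # []

# Do-while style: the body runs once even though there is no work,
# which is why the indexed output ends up with one unwanted element.
doubled = []
i = 0
while True:
    doubled.append(samples[i] * 2 if i < len(samples) else 0)
    i += 1
    if i >= len(samples):
        break
print(doubled)        # [0]
```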

 

I also don't agree that the way LabVIEW works makes this impossible. I understand that you need a value for non-indexing output terminals, but those would simply not be supported in this type of loop. A "check first, then run" kind of loop would only work with indexing terminals (which can be empty) and shift registers (which can take the value they were initialized to).

A separate part of the loop's body would be reserved for the conditional code (for instance a rectangle fixed at the lower left of the while loop rectangle), just like in most languages, where you have to write your conditional statement in brackets as part of the loop statement...

 

This type of loop would also solve the problem that you often have to delete the last element of your indexed array, because the condition for which you probably wanted it was no longer true (but the loop executed one more time before deciding that).

 

Lastly, it would give new LabVIEW programmers coming from any other language more familiarity with what they know. This is arguably a rather big flaw in LabVIEW (see the argumentation above), concerning something so common and basic in other languages that you are likely to stumble on it soon after you start experimenting in LabVIEW, and it might just make you turn around right away. That was almost the case when I first started programming in LabVIEW, before I knew how beautiful and well designed the rest of the language and environment is.

I am still hoping that NI will bring back the Standard Report type in the future, and I know that many others share my feeling. Whatever the reason for discontinuing it, there might be another or even better way to implement good-looking reports natively in LabVIEW without having to buy and rely on third-party vendors.

It would be good to have a Search Array function that also gives the number of times the search element occurs in the array.

Count of elements.jpg

 


Most of us have implemented this logic with a for loop that increments a count on each match, as in the sketch below.
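For reference, the same for-loop pattern in a text language (the array and search value are arbitrary):

```python
data = [3, 7, 3, 1, 3, 9]
target = 3

# Walk the array and increment a counter on every match.
count = 0
for value in data:
    if value == target:
        count += 1
print(count)    # 3 occurrences of the search element
```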

Wiring a complex time waveform (complex WDT) to a graph plots only the "I", or real, portion of the complex signal. It should plot the real and imaginary parts as two overlaid plots. I always have to do that manually when plotting complex data, which wastes time and clutters the diagram.
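Roughly the manual step described above, shown with NumPy/Matplotlib as a stand-in: split the complex signal into its real and imaginary parts and overlay both plots. The test signal is arbitrary.

```python
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0.0, 1.0, 500)
z = np.exp(2j * np.pi * 5 * t)      # arbitrary complex test signal

# Today's workaround: explicitly plot real and imaginary parts, overlaid.
plt.plot(t, z.real, label="Re")
plt.plot(t, z.imag, label="Im")
plt.legend()
plt.show()
```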

 

The data reduction algorithm used by Citadel for logging values inherently assumes that only the absolute magnitude of the change is relevant; that means that a single value for the resolution and the deadband is sufficient for the whole value range to be monitored.

There are situations, though, where the underlying value could span several orders of magnitude. In such cases, it would be useful to set deadbands on the minimal relative change, instead.

The current workarounds are either to reduce the deadband to a tiny value (thus logging far more data than needed) or to log the logarithm of the value, but that involves an additional step when writing to the SV and when reading histories. It would be nice to have a built-in option for this.
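A minimal sketch of the difference in Python (the thresholds are arbitrary): an absolute deadband logs when the change exceeds a fixed amount, while the requested relative deadband would log when the fractional change exceeds a fixed ratio, which works across several orders of magnitude.

```python
def should_log_absolute(last, new, deadband):
    # Citadel-style: only the absolute magnitude of the change matters.
    return abs(new - last) > deadband

def should_log_relative(last, new, ratio):
    # Requested option: trigger on the relative change instead.
    return last != 0 and abs(new - last) / abs(last) > ratio

print(should_log_absolute(0.001, 0.002, deadband=0.01))   # False: change looks tiny
print(should_log_relative(0.001, 0.002, ratio=0.05))      # True: the value doubled
```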

 

One of the more important and useful functions, especially for exploratory data analysis in R, is the ability to take N samples from a provided input set, with or without replacement.

 

LabVIEW should have that. It would be relatively straightforward, and would start moving G toward a larger shared Venn diagram with R.

 

https://www.rdocumentation.org/packages/base/versions/3.5.1/topics/sample
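For comparison, what R's sample() does, sketched here with Python's standard library:

```python
import random

population = list(range(100))

without_replacement = random.sample(population, k=10)    # each element picked at most once
with_replacement = random.choices(population, k=10)      # elements may repeat

print(without_replacement)
print(with_replacement)
```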

 

Personally, I think you should look at the NIST EDA techniques library and build a toolset off of it. https://www.itl.nist.gov/div898/handbook/eda/section3/eda3.htm

 

It is tax-funded source code, so it is something you should be able to access and integrate without too much work.

https://www.itl.nist.gov/div898/handbook/dataplot.htm

We can get the top level VI using property nodes / call chain, and we can also determine the running state of any specific VI (given a path).

It would be really useful to know which VI was running at any point in time (without resorting to execution highlighting) - not really applicable to time-critical applications, of course.

Uses for this include:

1) The ability to change a VI's execution state from modal to normal with custom-created drop-ins, if required.

2) Automation - it would be really useful to retrofit projects with a tool/drop-in for soak testing and debugging that detects certain windows (normally modal dialogs) and enters data in them to allow the program to continue. I know these features can be built in at the start of a project, but dropping them in later would be very powerful for pre-existing projects and would allow the use of the NI dialogs.

3) Simple debugging - code is stuck and execution highlighting is not showing where. This would give the answer.

 

James

A recent request in the LabVIEW Forums was to append additional lines of text to the end of a (LabVIEW-generated, I assume) HTML Report.  "Easy," I thought, "just use the Template input of New Report, move to the end, and Append away!"  So I tried it -- it overwrote the earlier Report.  Oops!

 

Examination of the Toolkit showed there were no functions analogous to "Find Last Row" or "Read ..." from the Excel-specific sub-Palettes in the HTML or Report sub-palettes.  Indeed, when examining the New Report functions for HTML, it was clear that the Template input was simply ignored!

 

It seems to me (naively) that it should be fairly simple to add code to (a) allow a Template to be specified for HTML, and (b) allow a positioning function to position the "appended" input at the end of the existing HTML <body></body> block.
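The positioning part is cheap in any language; here is a rough sketch (in Python, with an invented file name) of appending new elements just before the closing </body> tag of an existing report:

```python
from pathlib import Path

report = Path("report.html")        # hypothetical existing HTML report
html = report.read_text()

new_lines = "<p>Appended line 1</p>\n<p>Appended line 2</p>\n"

# Insert the new content immediately before the existing </body> tag,
# so the appended elements stay inside the document body.
idx = html.rfind("</body>")
if idx != -1:
    report.write_text(html[:idx] + new_lines + html[idx:])
```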

 

A potentially more "useful" (but, I would think, also fairly simple) extension would be to allow a "Read Next HTML Element" function, which would return the next Line, Image, whatever, along with a Tag identifying it, allowing the user to copy from the Template file selected Elements and append new Elements where appropriate.  For this to work "nicely", you would need to also return a "No More HTML Elements" flag when you reached the end of the Template.  This would allow the User to copy as much as needed, adding in new data where appropriate, as well as "appending to the end".  As with Excel, there would be no issue with having the Template File be the same or different from the final Save Report file.

 

Bob Schor

Currently LabVIEW Timestamps truncate values rather than round them when using the %u format code.  The numeric format codes all round values (0.499999 formatted with %.2g is 0.50).  I would like an alternative to the %u code that does the same.

 

See https://forums.ni.com/t5/LabVIEW/Timestamp-Formatting-Truncates-Instead-of-Rounds/m-p/3782011 for additional discussion.

 

 

 

My use case here was generating event logs.  I was logging the timestamp when parameters started/stopped failing.  To save space, I stored a set of data identical to the compressed digital waveform data type.

 

I had a couple of parameters which would alternate failing (one would fail, then the other would start failing as the first stopped failing). As they had different t0 values, the floating-point math of applying an offset multiplied by a dt to each t0 gave values that truncated to 46.872 for one and 46.873 for the other, leading to a strange ordering of events in the log file (both would appear to be failing/not failing for a brief time).

 

To make the timestamps match, I made the attached VI (saved in LabVIEW 2015) to do my own rounding (based on the feedback in the linked forum post).
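This is not the attached VI, but the same idea in Python for illustration: round the fractional seconds to a fixed number of digits before formatting, so two values that differ only by floating-point noise format identically.

```python
def format_seconds(t: float, digits: int = 3) -> str:
    # Round to the desired precision *before* formatting, instead of
    # letting the format code truncate the extra digits.
    scale = 10 ** digits
    return f"{round(t * scale) / scale:.{digits}f}"

# Two t0 + offset*dt results that differ only by floating-point noise:
print(format_seconds(46.87299999))   # 46.873
print(format_seconds(46.87300001))   # 46.873
```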

When FFT data is shown, the most interesting data appears as vertical lines or peaks. When interrogating the data with a plot cursor, there is no cursor type that will show the data underneath the cursor. See the plots below.

 

(Attached plot images)

The line style of the cursor can be changed to some type of "dashed" to show some of the data; however, there is currently no cursor style that shows a draggable line without covering the data.

One possible solution is to add a selectable cursor type that shows the line above the Y value but not below it. When the cursor is moved to a peak, the Y value of the cursor is at the peak; there is a small circle or box and a line going positive from the Y value, but no line going negative below it.

(Attached image: Current cursor style.PNG)

 

 

LabVIEW comes with an example (Check TDMS File for Fragmentation.vi) to check if a TDMS file is fragmented:

(Image: Check TDMS File for Fragmentation.vi block diagram)

I created a subVI based on this example and added it to my personal library. How about having a "TDMS is Fragmented" native function in the "TDMS Streaming" palette?
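The shipping example is written in G, but for context, one common heuristic is to compare the size of the .tdms_index file with the size of the .tdms data file, since a heavily fragmented file has many small segments and therefore a proportionally large index. Whether that is exactly what the example measures, and what ratio should count as "fragmented", are assumptions here.

```python
import os

def tdms_fragmentation_ratio(tdms_path: str) -> float:
    # Ratio of index size to data size; larger values suggest many small
    # segments, i.e. a file that would benefit from defragmentation.
    data_size = os.path.getsize(tdms_path)
    index_size = os.path.getsize(tdms_path + "_index")
    return index_size / data_size if data_size else 0.0

print(tdms_fragmentation_ratio("measurement.tdms"))   # hypothetical file
```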

 

This is a proposal for the VI's icon:

TDMS is fragmented Icon.png

It'd be nice to have compound arithmetic functions, "Greatest" and "Least", which output the largest or smallest of the input values. It could look like this, but with resizable input nodes.

 

Also, I think the top way is the best way to find the greatest in a set, but let me know if there is a better way. 

I need to adjust the sampling rate of a waveform. The TSA Resampling VI accomplishes this task, but it does not keep the t0 value, since it creates a new waveform instead of adjusting the original one.

I think an additional option should be added to keep the t0 of the original waveform, or the documentation should specify that you will have to modify the VI in order to keep t0 (a sketch of that workaround follows).
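The workaround this implies, sketched in Python with a waveform as a plain dict (the field names and the naive decimation are purely illustrative): carry the original t0 over to the resampled waveform instead of letting it reset.

```python
def resample_keep_t0(waveform: dict, new_dt: float) -> dict:
    # Stand-in for resampling: build a new waveform but explicitly copy
    # t0 from the original so the time reference is preserved.
    factor = int(round(new_dt / waveform["dt"]))
    return {
        "t0": waveform["t0"],           # keep the original start time
        "dt": new_dt,
        "Y": waveform["Y"][::factor],   # naive decimation, illustration only
    }

wf = {"t0": 1650000000.0, "dt": 0.001, "Y": list(range(10))}
print(resample_keep_t0(wf, 0.002))
```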

 

TSA.png

 


Two recommendations:

 

1) Add an OSI Read VI for N wavelength spectra that returns each wavelength spectrum as a waveform (i.e. an N x number-of-channels array of waveforms). This will provide the time stamp of each wavelength spectrum, and will also be more efficient than reading one set of channel wavelength spectra at a time.

 

2) Add a VI that applies the wavelength scaling to wavelengths.  This will allow the user to apply their own power spectrum peak search algorithm to wavelength spectra, whilst still applying the current wavelength scaling algorithm. This will free the user from having to use the OSI peak search algorithm.

 

We have found that the peak search algorithm utilised by NI-OSI performs poorly when the peaks in the power spectrum are of poor quality, and that a much simpler algorithm yielded far better performance. Our algorithm was simply to report the maximum-power wavelength in the sensor wavelength range if it was above a minimum threshold (see the sketch below). This is in significant contrast to the NI-OSI algorithm, which searches for peaks across the entire power spectrum using four algorithm tuning parameters, identifies all of the detected peaks that fall within the sensor wavelength range, and then reports the one with maximum wavelength.
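Our alternative is simple enough to show in a few lines; the Python below is only an illustration (the array layout, wavelength range, and threshold values are assumptions, not the NI-OSI API).

```python
import numpy as np

def peak_wavelength(wavelengths, power, wl_min, wl_max, min_power):
    # Report the maximum-power wavelength within the sensor's wavelength
    # range, but only if it exceeds a minimum power threshold.
    wavelengths = np.asarray(wavelengths)
    power = np.asarray(power)
    in_range = (wavelengths >= wl_min) & (wavelengths <= wl_max)
    if not in_range.any():
        return None
    idx = np.argmax(np.where(in_range, power, -np.inf))
    return wavelengths[idx] if power[idx] >= min_power else None

wl = np.linspace(1510, 1590, 801)                   # synthetic wavelength axis (nm)
pw = np.exp(-((wl - 1550.2) ** 2) / 0.05)           # synthetic power spectrum
print(peak_wavelength(wl, pw, 1548, 1552, 0.5))     # ~1550.2
```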

 

 

For reference, we have a PXIe-4844, and are using NI-OSI 16.0.