LabVIEW Idea Exchange


Imagine that you have written a lot of code, wiring up several VIs etc., all with the same data (a cluster, for example) going in and out. Now you regret not having defined that wire as a typedef. Today, replacing the data with a typedef involves a lot of steps; you start by clicking on one of the controls or indicators and creating a typedef... then you need to do the tedious work of replacing all the other controls and indicators up- and downstream.

 

Would it not be nice if you could just right-click on the wire and select "Define type"/"Create type definition" or "Replace with type def/class"  - and then choose to have the type definition automatically replace everything along the wire ("propagating type def")?

 

This idea was inspired by and first came about as a comment to this idea by cowen71.

I'm surprised a search did not turn up anything about this.  NI should create a LabVIEW Amazon Web Services/Azure/Docker Image(s).  The sample case I am thinking of is having a Jenkins CI server running on AWS EC2 with a full LabVIEW environment installed.  This would allow off-loading the long FPGA or full RF suite builds during CI.

 

Apparently NI had this with LabVIEW 2012 on AWS.

 

I would suggest to NI the following steps:

  1. Collect underpants
  2. Create an AWS LabVIEW image for cloud-based compiling - charge per use
  3. Use AWS to work out the kinks to get an efficient, speedy image created
  4. Open an NI data-center that does what AWS/Azure does but for the tech/engineering field and host all of their own software images
  5. Profit (or at least start to pay back that huge CapEx outlay)

Thanks!

Charlie

 


....I have used LabVIEW since 1999; I don't like, and I'm not able to, write textual code and remember syntax.
I use LabVIEW for every kind of software, from data acquisition to DB management.
During the last two years I have needed to develop some applications for mobile, but there's no way to do it in LabVIEW... Data Dashboard is definitely not "programming in LabVIEW" for mobile.
Obviously I would like LabVIEW to be open source - a community version and a professional one could be nice - but first of all, I would like LabVIEW to be able to compile executables for Windows, Mac, Linux, web, Android, and iOS from the same platform,
like I can do using LiveCode (which, as I said, is simpler than other textual programming languages, but is still a textual language).
Right now, if I need an application for Linux I need LabVIEW for Linux (a tragedy) and have to build the executable from there; the same goes for Mac.
I would like a single IDE where I can build executables for every platform (this SHOULD BE a real cross-platform IDE).
...maybe I can ask Santa for Christmas...

This idea is for improving the connector pane to default to required inputs for terminals that use reference type data.  So if I have a new blank subVI and I wire a VI reference to an input, this should be set to Required by default.  I'd also suggest this be the same for a Create SubVI from selection.  Obviously you could change it from required, because the developer may have some code in the VI to detect an invalid reference and do something specific.  But in most cases if I do something like wire a Queue reference to an input, that input should be required.

 

One could make the argument that this idea could be done today by making all inputs default to required, but I think that goes too far. Many times I have code that detects unwired inputs (by looking at the default value for the control), and making an input required when it isn't really required would break that. What I mean is that in my workflow the majority of inputs should be recommended, but the majority of references (Queues, DVRs, Control References, VI References) should be required. There are a few data types, like classes, that may or may not be reference-based; I could see an argument for these being required or recommended, and I don't really care how they behave. But for inputs that are clearly reference-based, I think this would help prevent code where the developer mistakenly leaves an input recommended.

 

This could be an INI key in LabVIEW for those that don't want it, or for those that choose to make all inputs required.

The idea is to add some System-style controls which are not available in LabVIEW 2016...

The GUI Interest group shows many alternatives, but none for I/O controls.

IOsystControlsGUI.JPG

The idea is to get System controls with identical functionality to the Modern UI ones (I/O Name filtering, ...).

 

Vincent

 

The creation method for polymorphic VIs is too cumbersome. It is a lot of work to go through each field and manually enter the information, especially if there is a lot of data to enter. It would be much more time-saving to be able to enter it all in a spreadsheet and have the creator read from that spreadsheet. People would be much less intimidated to use the creator.
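To make the suggestion concrete, here is a minimal sketch (in Python, since the idea is about the tool, not LabVIEW code) of what reading such a spreadsheet could look like. The CSV columns shown are assumptions for illustration, not an actual NI format:

```python
import csv
import io

# Hypothetical CSV describing polymorphic instances, one row per instance VI.
# Column names (instance_vi, menu_name, show_in_menu) are assumed, not an
# existing NI convention.
SAMPLE = """\
instance_vi,menu_name,show_in_menu
Add DBL.vi,Add (DBL),yes
Add I32.vi,Add (I32),yes
Add CDB.vi,Add (CDB),no
"""

def read_instances(text):
    """Parse the spreadsheet into a list of instance descriptions."""
    rows = list(csv.DictReader(io.StringIO(text)))
    for row in rows:
        # Normalize the yes/no column to a boolean for the creator to consume.
        row["show_in_menu"] = row["show_in_menu"].strip().lower() == "yes"
    return rows

instances = read_instances(SAMPLE)
```

A polymorphic VI creator fed by a table like this could batch-populate the instance list, menu names, and menu visibility in one pass instead of one dialog field at a time.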

Reference conversation in this thread: Speed of Cursor Clicks on graph

 

The cursor button pad for changing the position of the cursor is overly sensitive to the duration of the click in deciding how far to move the cursor. A normal button click sometimes moves the cursor one tick like you'd expect; sometimes it jumps several positions. It requires an exceptionally quick click to keep it from jumping too far. The button pad is designed so that if you hold down the button, the cursor moves continuously, but what is a normal single click often gets interpreted as a button hold.

 

There needs to be more of a delay between when a button goes down and when it gets interpreted as a hold before repeating the cursor. Some usability testing may be needed, but perhaps a delay of 1/2 to 1 second. I believe Windows has a setting that determines when a double click is treated as a double click or as two single clicks; that timing might apply here.
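The proposed behavior can be sketched as a simple threshold on press duration. The threshold and repeat-rate values below are assumptions taken from the 1/2 to 1 second range suggested above (on Windows, a related system setting is available via the Win32 `GetDoubleClickTime()` call):

```python
# Sketch of the proposed click-vs-hold logic for the cursor button pad.
# Both constants are assumptions for illustration, not LabVIEW's actual values.
HOLD_THRESHOLD_MS = 500   # assumed delay before a press counts as a "hold"
REPEAT_INTERVAL_MS = 150  # assumed auto-repeat rate once holding

def cursor_moves(press_duration_ms):
    """Return how many ticks the cursor should move for one button press."""
    if press_duration_ms < HOLD_THRESHOLD_MS:
        return 1  # a normal click always moves exactly one tick
    # Past the threshold, auto-repeat: the initial move plus one per interval held.
    extra = (press_duration_ms - HOLD_THRESHOLD_MS) // REPEAT_INTERVAL_MS
    return 1 + int(extra)
```

With this scheme, any press shorter than the threshold moves exactly one tick, which is the predictability the current pad lacks.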

 

If you are uncertain what I am talking about, create a graph with some data.  Make the cursor palette visible and add a cursor.  Try to click the button left or right to make it move just a single tick.

 

One more thing to improve: when doing this while the VI is not running and is in edit mode, sometimes a button click on the pad winds up distorting the diamond-shaped button rather than being interpreted as a click.

 

CVI allows for color modification of the sweep line in a sweep chart (ATTR_SWEEP_LINE_COLOR), but LabVIEW does not allow any modification of the sweep line.

 

It would be nice if the developer could change sweep line width and color to make the sweep chart fit better visually in a customer-facing UI.

Many advanced functions in the Optimization and Fitting palettes allow the use of a "VI model", given as a strictly typed VI reference, to be defined by the user. A great feature!

 

LabVIEW provides various templates containing the correct connector pattern. Here is a list of the templates I found:

 

in labview\vi.lib\gmath\NumericalOptimization:

  • ucno_objective function template.vit

  • cno_objective function template.vit

  • LM model function and gradient.vit

in labview\vi.lib\gmath

  • Zero Finder f(x) 1D.vit

  • Zero Finder f(x) nD.vit

  • 1D Evolutionary PDE Func Template.vit

  • 2D Evolutionary PDE Func Template.vit

  • 2D Stationary PDE Func Template.vit

  • ODE rhs.vit

  • Global Optimization_Objective Function.vit

  • DAE Radau 5th Order Func Template.vit

  • function_and_derivative_template.vit

  • function_template.vit

As you can see, the naming is quite inconsistent:

  • all files are templates as is immediately obvious from the file extension! (*.vit )
  • some contain the word "_template" (underscore/lowercase t)
  • some contain the word " template" (space/lowercase t)
  • some contain the word " Template" (space/uppercase T)
  • some don't contain the word template in any form (good! :D)

 Since the extension fully defines them as templates, maybe the word "template" could be scrubbed from all the filenames, making things more uniform and consistent.

 

Idea Summary: remove the word "template" from all model template names that contain it.

Waveform charts are really useful but have an annoying bug. Every now and then (maybe once every 10 seconds, depending on the update rate), the digital displays blink to zero even though a zero value was never written to the chart. I use these charts frequently in HMI-type displays, and explaining this behavior is always part of the training for new operators ("Don't worry if this critical sensor goes to zero for a second; unless the line on the chart also goes to zero, then you should freak out and hit E-stop"). This has been brought up a couple of times over the last 8 years (http://forums.ni.com/t5/LabVIEW/Waveform-chart-digital-display-blinks-zero/td-p/554868) but has never gained enough attention to be fixed. I know this functionality could be duplicated with additional numeric indicators or even an XControl, but I would prefer to just have waveform charts function correctly. I attached a VI that shows this behavior. All three of the digital displays randomly blink to zero.

 

 

Chart digital display blinks to zero.png

It is possible to import an EPICS .db file into LabVIEW in order to use the Process Variables (PVs) within LabVIEW.

(see http://www.ni.com/white-paper/14144/en/ )

 

But all records are imported as separate LabVIEW items.

 

Each PV has to be separately added as a 'bound shared variable' for inclusion in a VI.

Then each PV needs to be separately connected up to a control or indicator, unless some means of iterating over the collection is implemented.

 

This is all fine if there are 3 or 4 PVs (as is the case for the example app).

 

My current application is quite modest in scope - there are 15 PVs for each of 6 devices, so 90 PVs altogether. It is barely feasible to follow this manual process for each of these - it would take hours and be very finger-trouble prone.

 

Many EPICS IOCs can use thousands, or even millions, of PVs.

 

I would suggest that the .db file import wizard process the PVs from each file into a cluster.

Or - possibly better - process PVs into an array of clusters.
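To illustrate the grouping being asked for, here is a minimal sketch (Python, outside LabVIEW) that parses standard EPICS `record(type, "NAME")` entries from a .db file and groups the PVs by device prefix; the `DEVICE:SIGNAL` naming split is an assumption about the site's PV naming convention:

```python
import re

# Sample text in standard EPICS .db record syntax (contents hypothetical).
SAMPLE_DB = """
record(ai, "DEV1:TEMP") { field(DESC, "Temperature") }
record(ai, "DEV1:PRES") { field(DESC, "Pressure") }
record(bo, "DEV2:ENABLE") { field(DESC, "Enable") }
"""

# Matches the record type and the quoted PV name.
RECORD_RE = re.compile(r'record\s*\(\s*(\w+)\s*,\s*"([^"]+)"\s*\)')

def group_pvs_by_device(db_text):
    """Return {device_prefix: [(record_type, pv_name), ...]}."""
    groups = {}
    for rec_type, pv in RECORD_RE.findall(db_text):
        device = pv.split(":", 1)[0]  # assumes DEVICE:SIGNAL naming
        groups.setdefault(device, []).append((rec_type, pv))
    return groups

groups = group_pvs_by_device(SAMPLE_DB)
```

An import wizard doing essentially this grouping could then emit one cluster (or one element of an array of clusters) per device, instead of one loose item per PV.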

 

IMO, the current implementation just isn't scalable to 'real world' control system IOC use.

 

If NI wishes to provide LabVIEW integration with large-scale EPICS projects, I believe a better way of doing this needs to be found.

 

(Re-posted from https://forums.ni.com/t5/LabVIEW-Idea-Exchange/Enhanced-EPICS-Support/idc-p/3203 )

 

NI_ChannelLength is a handy property, written to each channel in a TDMS file, that can be read to tell you the number of samples in that channel without having to read all the samples and do an Array Size operation. Having this in a property is also useful for programs like DIAdem or the DataFinder toolkit, which index these properties.

 

This idea is to have an option to add a few more properties built into the TDMS Write operation. It would be best if this were an option on TDMS Open, off by default.

 

I think adding NI_ChannelMinimum, NI_ChannelMaximum, and NI_ChannelAverage properties would be very helpful, so that this information is available without having to read every sample for every channel in every group. Again, the benefit is clear when using DIAdem or DataFinder and having this information quickly available.

 

Of course we can do this today if we don't mind reading every sample, computing the Min/Max/Average, and then writing the properties ourselves, but this can be a very time- and memory-intensive process for large files with lots of samples, channels, and groups. For channels with non-numeric data types, a constant could be used, like NaN, or 0 if the data type is not a double. I think this would be most useful for channels with a numeric, waveform, or timestamp data type.
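The computation being proposed is simple; the cost is only in re-reading the data after the fact. A sketch of the per-channel statistics (in Python for illustration; NI_ChannelMinimum etc. are the property names proposed above, not existing TDMS properties, and the NaN fallback for non-numeric channels is the convention suggested above):

```python
from statistics import fmean

def channel_stats(samples):
    """Return the proposed min/max/average properties for one channel."""
    numeric = all(isinstance(s, (int, float)) for s in samples)
    if not samples or not numeric:
        # Non-numeric or empty channel: fall back to NaN as suggested above.
        nan = float("nan")
        return {"NI_ChannelMinimum": nan,
                "NI_ChannelMaximum": nan,
                "NI_ChannelAverage": nan}
    return {"NI_ChannelMinimum": min(samples),
            "NI_ChannelMaximum": max(samples),
            "NI_ChannelAverage": fmean(samples)}

stats = channel_stats([2.0, 4.0, 9.0])
```

Doing this incrementally inside TDMS Write, while the samples are already in memory, is exactly what would make it cheap compared with re-reading the whole file.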

Dear community and developers

 

I would like to suggest adding a simple context menu entry on property nodes and on invoke nodes.

 

Just add to invoke nodes a menu entry "Change to Property Node", and

on property nodes add an entry "Change to Invoke Node".

 

I often have to change this and this would help to improve the work flow for me.

 

Gernot Hanel

IONICON Analytik Gesellschaft m.b.H.

www.ionicon.com

An RT program can be run either from a host PC (what I call "interpreter mode") or as an exe in the startup directory on the RT controller. When running from the host PC (for debugging purposes), front panel property nodes execute properly, as you would expect. After building and transferring the RT app to the startup directory on the RT controller, the program errors out on the first occurrence of a front panel property node. The reason is obvious: a front panel is non-existent in an RT application, hence the front panel property nodes are rejected. Of note, no errors or warnings are generated during the RT app build operation.

 

Recommend that the build application simply ignore the front panel property nodes as it ignores the front panel in general. This would allow the programmer to retain the same version of the source code for either mode of operation.

 

Thanks,

Bob

Instead of moving through cases by selecting them one by one every time, whenever we highlight the case selector it should show the cases. This would reduce programming and debugging time.

 

 CASE STRUCTURE.png

In addition, or as an extension, to this idea:

http://forums.ni.com/t5/LabVIEW-Idea-Exchange/Graphs-and-Charts-with-semi-transparent-fill-option/idi-p/1530960

 

I would like to have a transparency control/property in the colour selection for plots in graphs and charts.

 

Currently, if the user edits the X or Y position of a cursor, this doesn't fire any event (that I am aware of), while in effect it is equivalent to the user grabbing the cursor and instantaneously moving it to a new location.

There are "Cursor Move", "Cursor Release" and "Cursor Grab" events, as well as the possibility to catch the user selecting a cursor context menu entry such as "Bring to Center".

It is possible to check whether the user 

 

My suggestion: Add a "Cursor Location Edited" event for Graphs with cursors

 

In order to provide an Owner when showing a .NET dialog, we need to create an IWin32Window handle to the front panel window. This KB article shows how this can be done, which requires a DLL to be available. My suggestion is that there be a Front Panel property which returns this handle as a .NET IWin32Window value.

 

.NET Handle.png

The only way to reorder plotted items is to rewire the block diagram. It would be useful to have the ability to change the order by dragging items in the plot legend into the desired order.

 

reorderplots.PNG

 

When using strict typedefs, the automatic inclusion of the typedef in the actual datatype of a control (and the associated data passing down the wire) leads to some annoying corner cases.

 

Imagine an FP with several different strict typedef Booleans (we re-designed and standardised our entire UI recently). If you try to group value change events for these in a single event structure case, then the event datatype changes to Variant... Huh? Because the typedef is part of the datatype (allowing downstream wires to "Create Indicator" or "Create Control" with the correct representation), the IDE sees different wire datatypes and moves to a more generic type (Variant).

 

 

Separation of Control and Data.png

 

In this example I don't care that the Boolean wire originated from a strict typedef; it's there purely for cosmetic reasons. I would ideally like to be able to have a strict representation of the control (which will auto-update if the typedef is modified) with a standard datatype (non-strict wire).