LabVIEW Idea Exchange


For the people who have been actively using and supporting LabVIEW for more than, say, the past 20 years in more than just basic applications, it should be possible to be granted a CLV certificate. Last Friday I tried the CLD exam and it was a struggle ...

This is coercion:

Capture.PNG

 

Best practices say it is bad, so how do I fix it?  A mindless hunt?  Try them all?  Sometimes I can't change the upstream data because it is "fresh out of the file read".  Sometimes there is no in-VI upstream to modify outside of the read itself, and that can be challenging with mixed, not explicitly typed data.

 

Suggestions:

  • I should be able to right-click on the wire and have an option "Insert explicit conversion" that picks the conversion I need, the one the coercion dot is doing internally, and inserts it into the wire just upstream of the dot.
  • I can configure the output representation of many VIs.  Can I configure the input representation?  That would be explicit.
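For comparison, text languages make the same fix explicit at the call site. Here is a minimal Python sketch of what an inserted explicit conversion would do; the `to_i32` helper is invented for illustration and mimics half-to-even rounding with 32-bit wrap-around:

```python
def to_i32(x):
    """Explicit conversion to an I32, the text-language analog of
    dropping an explicit conversion node on the wire instead of
    letting a coercion dot do it silently.
    NOTE: this helper is illustrative, not an NI-provided function."""
    n = round(x)                    # round-half-to-even, like 'Round To Nearest'
    n &= 0xFFFFFFFF                 # wrap into 32 bits
    return n - 0x100000000 if n >= 0x80000000 else n

# Data "fresh out of the file read" often arrives as DBL:
raw = 3.999
print(to_i32(raw))                  # explicit, visible conversion: 4
print(to_i32(2.5))                  # ties round to even: 2
```

The point is not the arithmetic but the visibility: the conversion is a named step on the wire rather than a dot you have to hunt for.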

 

A number of elite users consider the VI hierarchy "useless". 

 

The compiler currently makes some assumptions (the detailed particulars of which I have only heard rumors about) regarding the user's ability to impose a boundary in the block diagram.  Whether it is the selection of elements to turn into a subVI or how frame edges interact with memory, there is some assumption there.

 

Why not remove all the "boundaries", then use graph theory to make better ones?  I would like to see an "auto-hierarchy" button.  When I click it, the tool should remove every artificial boundary, then use graph clustering (reference, toy example) to find the actual best boundaries, then impose those.

 

I would expect a single chain to end up in a single "box".  I would expect something that starts at a single element, splits into two chains, then recombines into a single final element to have the head element, the tail element, and two "boxes" holding the chains.  I would expect the boxes to inform what makes a decent subVI during the simplification/splitting of a detailed VI.
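A toy version of that clustering step can be sketched in text form. This is only an illustration of the chain-collapsing idea described above, not anything LabVIEW's compiler actually does; the node names are made up:

```python
from collections import defaultdict

def chain_boxes(edges):
    """Group the nodes of a dataflow DAG into 'boxes' by collapsing
    linear chains: maximal runs of nodes with in- and out-degree <= 1
    stay together, while fan-in/fan-out nodes each get their own box.
    A toy stand-in for the graph-clustering step the idea describes."""
    succ, pred, nodes = defaultdict(list), defaultdict(list), set()
    for a, b in edges:
        succ[a].append(b); pred[b].append(a)
        nodes.update((a, b))

    def chainable(n):
        return len(succ[n]) <= 1 and len(pred[n]) <= 1

    boxes, seen = [], set()
    for n in sorted(nodes):
        if n in seen:
            continue
        if not chainable(n):
            boxes.append([n]); seen.add(n); continue
        head = n                     # walk back to the head of this chain
        while pred[head] and chainable(pred[head][0]) and pred[head][0] not in seen:
            head = pred[head][0]
        box, cur = [], head          # walk forward collecting the chain
        while cur is not None and chainable(cur) and cur not in seen:
            box.append(cur); seen.add(cur)
            cur = succ[cur][0] if succ[cur] else None
        boxes.append(box)
    return boxes

# head -> two parallel chains -> tail (the split/recombine shape from the text)
edges = [("head", "a1"), ("a1", "a2"), ("a2", "tail"),
         ("head", "b1"), ("b1", "b2"), ("b2", "tail")]
print(chain_boxes(edges))
```

On the diamond example this yields exactly the expected result: the head element, the tail element, and two boxes holding the chains.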

I have just had my first look at some sample VIs demonstrating the use of channel wires, and I have seen channel wires going from left to right and also from right to left in the block diagram.  It was my understanding that LabVIEW best programming practice was to avoid backward wires.  The VI Analyzer actually tests for the number of occurrences of backward wires.  If the new programming best practice is going to allow backward channel wires, then, to avoid confusion about the direction of data flow, channel wires should have a visual indication of the direction of data flow.  My suggestion would be a chevron design that indicates the direction of data flow; see the attached file.

 

[admin edit 2016-09-15] Changed title of idea from 'Channel Wires' to something more descriptive.

It would be really nice if Flatten To JSON supported more LabVIEW data types.  In particular, I find it very annoying that it doesn't support flattening a waveform to JSON, since waveforms are very common in most of the measurement APIs.

 

In the meantime, I have come up with this little workaround, but it really seems like something that should be natively supported.

 

Waveform to JSON.png

 

JSON to Waveform.png
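In text form, the workaround in the pictures amounts to flattening the waveform's components individually. A minimal Python sketch of the round trip; the field names t0/dt/Y mirror the waveform components, but this is not an NI-defined JSON schema:

```python
import json

def waveform_to_json(t0, dt, y):
    """Flatten a LabVIEW-style waveform (t0, dt, Y) to a JSON string.
    The key names are my own choice, not an NI-defined schema."""
    return json.dumps({"t0": t0, "dt": dt, "Y": list(y)})

def json_to_waveform(s):
    """Rebuild the waveform components from the JSON string."""
    d = json.loads(s)
    return d["t0"], d["dt"], d["Y"]

s = waveform_to_json(0.0, 0.001, [1.0, 2.0, 3.0])
print(s)
print(json_to_waveform(s))
```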

LabVIEW has a nice debugging window which is very helpful for DLL debugging purposes. It is able to output numeric variables, strings, LabVIEW paths and so on. But it also lacks some standard functionality, which makes it not very comfortable to use. Currently this window looks like this:

 

DbgPrintf window

 

The first main disadvantage of this style is that the window has no scrollbars, so we can only see the last 24 rows of text. Older strings are completely unavailable, even if they have not been cleaned from memory at all. The second disadvantage is that the user cannot copy or export the debug text to the clipboard or a text file. This is especially important when working with large numbers or complex data that is hard to remember and hard to retype by hand. And the third disadvantage is that the window is missing some standard elements that almost any window has: minimize/maximize buttons, a bottom-right resize corner, etc.

 

Because of these drawbacks I'm forced to give up on this DbgPrintf function and use a standard Windows console window (AllocConsole / FreeConsole from WinAPI). That requires additional work in the code and is not as easy to use as the mentioned function. It is also not a cross-platform way of debugging.

 

So, it would be perfect if the window looked like this:

 

DbgPrintf (new look)

 

As you can see, the window gains several new capabilities:

1. It has both horizontal and vertical scrollbars, so all of the text can be read;

2. It allows selecting the text and copying/exporting it to the clipboard or a .txt file;

3. It has minimize/maximize buttons on the title bar;

4. It has a bottom-right resize corner for changing the window size.

 

That's the basic functionality, but there could also be additional settings for the window's appearance or for ease of debugging.
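Until then, the console workaround described above can at least keep full history and support export. A minimal cross-platform Python sketch of the idea (the class and method names are my own, standing in for DbgPrintf):

```python
import sys

class DebugLog:
    """Minimal stand-in for DbgPrintf: every message is echoed to the
    console AND appended to an in-memory history, so older lines are
    never lost off-screen and the whole log can be exported to a file."""
    def __init__(self):
        self.history = []

    def printf(self, fmt, *args):
        line = fmt % args if args else fmt
        self.history.append(line)          # keeps scrollback forever
        print(line, file=sys.stderr)       # still visible live

    def export(self, path):
        """Dump the full history to a .txt file for later inspection."""
        with open(path, "w") as f:
            f.write("\n".join(self.history) + "\n")

log = DebugLog()
log.printf("value = %d", 42)
log.printf("path = %s", "/tmp/data.bin")
```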

If you have a project where a VI is used on multiple targets and contains conditional disable structures that vary between the targets (for example, loading VIs that are not compatible with all targets), you may see application build previews fail until you open the affected VIs on the relevant target, just to get LabVIEW to update the conditional disable structure.

 

The actual build is smart enough to do this, but the preview function is not. This has the funny side effect that installer builds can fail even when the relevant application builds succeed, because the installer will run a preview on the application builds even though they have already been built successfully, and then fail because the preview fails.

 

Now, I'm sure the installer build's use of the preview has its good reasons, so I will not suggest removing it, but rather suggest resolving the issue by making the preview functionality smarter: if (and only if) it runs into trouble when generating the preview, it should do what the actual application build is already able to do: load the troublesome VI and see if the issue can be resolved on the fly.

We all know that LabVIEW has changed how engineers program in the measurement and control field. But that is not enough; I would like to suggest expanding LabVIEW with a machine learning feature. It could acquire data while the user develops applications in LabVIEW, including the user's programming style and use of professional background knowledge, and after training it would be able to automatically generate relevant code for the user. Microsoft has already started to research this technology. If it comes true, it will change every programmer's life.

I only mean that this should apply to the subVIs that come with LabVIEW. I was putting together a VI that is execution-time sensitive. I had a choice between IMAQ Histogram and IMAQ Histograph. I could get the result I needed from either one, but I was forced to try each, run it a few times, and clock each one. There are many such "which of these two similar options is fastest" choices we make for every program, and knowing the answer up front would be very helpful.
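The manual clocking described above follows a standard pattern. A Python sketch of it; `option_a`/`option_b` are hypothetical stand-ins for two similar library calls like the IMAQ pair:

```python
import timeit

def time_option(func, *args, repeat=5, number=1000):
    """Clock one of two 'similar option' calls, the way the text
    describes doing by hand: run it several times and keep the best
    per-call time in seconds (best-of filters out scheduler noise)."""
    return min(timeit.repeat(lambda: func(*args),
                             repeat=repeat, number=number)) / number

data = list(range(1000))

def option_a(xs):        # hypothetical stand-in for one library call
    return sum(xs)

def option_b(xs):        # hypothetical stand-in for the other
    total = 0
    for x in xs:
        total += x
    return total

print("A:", time_option(option_a, data))
print("B:", time_option(option_b, data))
```

Published per-function timing data would make exactly this kind of benchmarking unnecessary, which is the point of the idea.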

I would like to raise a question about programming in LabVIEW. Recently, I used LabVIEW to develop an application which includes some file I/O functionality, such as creating a path to save data, and I used the subVI's path to derive the new path for saving data. Unfortunately, I found a serious problem after generating the EXE application and running it.

 

The application was unable to save data. So I had to use the top-level VI's path to generate the new path and solve the problem. From this experience, I have an idea to share: when generating an EXE application, LabVIEW could perform an application self-check for such issues, as shown in the next table.

 

Check Path: determine whether a path is safe to use or unavailable after the EXE application is produced and running.

Auto-change scroll position in table control: if the user sets the number of rows per page for display, the table control automatically adjusts its scroll position when new page data arrives.

When the error list window is open, scripting sometimes triggers updates and can cause significant lag. Adding an environment method to suppress updates would make it possible to prevent this.

I have a decently large VI composed of layers of subVIs and some MathScript.

 

Sometimes, when I am working on it, it takes about 30 seconds to display the front panel.  I just see the little Windows 8 analog of the hourglass spinning.

 

If LabVIEW is taking that long to "think" there is a problem.  It isn't telling me what it is doing, only that what it is doing takes so much resources that it can only display blank white panes while it is doing so.

 

You might want to fix that.  I'm running an 8-core Xeon, there isn't any lack of computing resources.  It is a software bottleneck.

Background:

MathScript is slow compared to compiled LabVIEW for my application.  Maybe, because it is interpreted, it is slower for most cases and not just mine.

 

In (older) MATLAB, it is generally better to use "built-ins" than hand-made functions, because the built-ins are compiled and run faster.  Sometimes "a = sum(b,c)" is much faster than "a = b+c".

 

Current MathScript allows me to make hand-made functions, but not directly called VIs (as far as I know or can search).

Suggestion:
I would like to call a VI from the text in MathScript, like I would a function.  As long as I pass in the right variables, I would expect the compiled (pure) VI to run substantially faster than MathScript alone.

 

Additional thought:

When moving from mostly MathScript to purely LabVIEW, there are intermediate stages where parts of the code exist in both forms.  If we were able to do the above, the intermediate form would be faster and cleaner than the original, and would allow piecewise migration instead of "all or nothing".

 

 

It would be nice if property/invoke nodes supported "interlinked wiring" among each other. Something like this:

nodes.png

 

At the moment I route the wires manually behind the nodes. This saves some space and (in my opinion) contributes to a well-arranged block diagram. But do not select "Clean up wire" afterwards!

I started to use LabVIEW Web Services and I have a couple of suggestions.

 

In the startup VIs you are able to monitor the status of the web service, but you are not able to close it. It says that LabVIEW will automatically close the web service when closing a standalone EXE, but there are some scenarios where you might want to close the web service programmatically, e.g.:

 

- When building a standalone EXE, you may want to be able to close the service programmatically.

 

Also,

 

As soon as the web service receives a URI that doesn't have a Web Resource, it throws an error without your being able to handle it. Being able to handle the error could be beneficial when trying to build a dynamic routing system, as many web frameworks do.
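The request can be sketched language-neutrally as a route lookup with a user-supplied fallback instead of a hard error. A minimal Python illustration; the route names and handlers are invented:

```python
def resolve(routes, uri, fallback):
    """Look up a handler for a URI; unknown resources go to a
    user-supplied fallback for dynamic routing instead of raising
    an unhandleable error (the behavior the post complains about)."""
    handler = routes.get(uri, fallback)
    return handler(uri)

routes = {
    "/temperature": lambda uri: "25.0 C",   # hypothetical static resources
    "/status":      lambda uri: "OK",
}

def dynamic_router(uri):
    # e.g. map /device/<n>/... paths at runtime, or return a custom 404
    if uri.startswith("/device/"):
        return "routed dynamically: " + uri
    return "404: " + uri

print(resolve(routes, "/status", dynamic_router))
print(resolve(routes, "/device/3/value", dynamic_router))
```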

 

BR

I am facing a problem with VIs that have visual images on the front panel for operator instruction screens

(such as connecting cables, visual inspection of components, etc.).

 

If you use these types of visual images, the total executable size increases hugely (150 MB in my case, because of many products and many images; the images are copied from *.jpeg or *.png files). If LabVIEW could detect these images copied onto the front panel and compress them, we could reduce the EXE file size by almost 30%. The large images also make building and compiling take more time.

 

NI should investigate whether there is a technique for this (nowadays many image compression models are available): for example, compile the code first and add the compressed images to the front panel after compilation.

I very much like the formula parse and evaluate VIs. For me, writing a formula is easier, and I make fewer mistakes writing formulas than wiring numeric nodes, especially when the formula is taken from literature.
Unfortunately, the parsed formula is much slower than using standard numeric nodes. Browsing through the formula nodes, I notice that formulas are parsed down to the same standard numeric nodes (add, subtract, etc.). Still, the formula parsing method is much slower because of the many case structures that have to be executed before arriving at the level of the numeric building blocks.
I think, from where the formula parsing blocks are now, it would be feasible to have them generate VIs using only numeric nodes, so the formula parsing nodes would have the same performance as standard LabVIEW mathematics. The best solution would be to include this in the building/compiling of the code.
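The speed gap described above, interpreting a parsed tree on every call versus emitting real compiled operations once, can be sketched in Python, where compile() plays the role of generating a VI from the parsed formula (the helper names are illustrative):

```python
import math

def make_compiled(formula, varnames):
    """Parse the formula ONCE and compile it, instead of walking
    case structures on every evaluation -- the analog of generating
    a VI of plain numeric nodes from the parsed formula."""
    code = compile(formula, "<formula>", "eval")
    def f(*vals):
        return eval(code, {"__builtins__": {}, "sin": math.sin},
                    dict(zip(varnames, vals)))
    return f

def interpreted(formula, varnames, *vals):
    """Re-parse on every call: the current parse-and-evaluate path."""
    return eval(compile(formula, "<formula>", "eval"),
                {"__builtins__": {}, "sin": math.sin},
                dict(zip(varnames, vals)))

f = make_compiled("a*a + 2*a*b + b*b", ["a", "b"])
print(f(3.0, 4.0))                                          # 49.0
print(interpreted("a*a + 2*a*b + b*b", ["a", "b"], 3.0, 4.0))
```

Both paths give the same answer; the compiled one simply pays the parsing cost once instead of on every evaluation.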

 

Arjan

 

 

LabVIEW should have a project setting option to enter command-line arguments, simulating what a user could type at the command line as if they were running the project as an EXE, even while I am debugging from within LabVIEW.

 

Ultimately, I want to build my project as an executable file. My project would read in the command-line parameters as typed by the user at the command line. LabVIEW already has a property node that reads these parameters. However, that only works when running the project as an executable. I want to debug my project from within LabVIEW.

 

This would be similar to what Visual Studio has: a setting in the project properties. LabVIEW has a project property setting for "Pass command line arguments to application" in the build specifications, but no way to initialize these parameters for debugging purposes.
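The requested behavior is a common pattern in text languages: use the real arguments when they exist, and fall back to configured debug arguments while developing. A Python sketch; DEBUG_ARGS and its contents are hypothetical:

```python
import sys

# Hypothetical debug arguments, analogous to the Visual Studio
# project setting the post describes.
DEBUG_ARGS = ["--input", "test.csv", "--verbose"]

def get_args(argv=None):
    """Return the real command-line arguments when running as a built
    application, or the configured debug arguments when none were
    passed (i.e. while developing in the IDE)."""
    if argv is None:
        argv = sys.argv[1:]
    return argv if argv else DEBUG_ARGS

print(get_args(["real.csv"]))    # built app: real arguments win
print(get_args([]))              # IDE debugging: fall back to DEBUG_ARGS
```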

 

The FPGA Desktop Execution Node doesn't allow applying a new input value on each simulation tick. For example, simulating a digital filter would require an array of input samples, with a new input sample processed in each step/tick. The VI under test has internal feedback nodes to store the data history. The following example is no solution, because the simulation initializes the feedback nodes every time.

 

Desktop Execution Node
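What the idea asks for, one new sample per tick while the VI's feedback state persists, can be illustrated with a one-pole IIR filter standing in for the FPGA VI (the class is a made-up model, not NI API):

```python
class OnePoleFilter:
    """Stand-in for an FPGA VI with a feedback node: 'state' must
    survive from one simulated tick to the next, which is exactly
    what re-initializing the feedback nodes on every call breaks."""
    def __init__(self, alpha=0.5):
        self.alpha = alpha
        self.state = 0.0           # the feedback-node register

    def tick(self, x):
        """Process ONE new input sample per simulation tick."""
        self.state = self.alpha * x + (1 - self.alpha) * self.state
        return self.state

samples = [1.0, 1.0, 1.0, 1.0]     # array of input samples, one per tick
filt = OnePoleFilter(alpha=0.5)
out = [filt.tick(s) for s in samples]
print(out)   # converges toward 1.0 because state persists across ticks
```

If `state` were reset before every `tick` call (the current behavior being complained about), every output would be 0.5 and the filter's history would be lost.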

I would like to see a few improvements to the Bookmark Manager:

 

  • parse only text fields that actually start with a # at position 0 (not displaying fields that have a # somewhere in the middle)
  • display tags case-insensitively

I see that the actual work is done in the "Get VI Bookmarks" Invoke Node, so it may not be trivial to add such features.
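The two bullets amount to a small filtering rule. A Python sketch of the intended parsing behavior; the sample strings are invented:

```python
def parse_tags(texts):
    """Collect bookmark tags only from text fields that actually START
    with '#', and fold tags case-insensitively, per the two bullets
    above. Returns the unique lower-cased tags, sorted."""
    tags = {}
    for t in texts:
        if not t.startswith("#"):
            continue               # a '#' in the middle is not a tag
        word = t.split()[0]        # the tag is the leading word
        tags.setdefault(word.lower(), word)   # '#ToDo' and '#todo' merge
    return sorted(tags)

fields = ["#todo fix this loop", "see issue #42", "#ToDo later", "plain text"]
print(parse_tags(fields))   # ['#todo']
```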

 

One related idea deals with providing custom tag identifiers: http://forums.ni.com/t5/LabVIEW-Idea-Exchange/bookmark-symbol-change-to-or-user-selected/idi-p/2976865

 

-Benjamin