The Ctrl-Space Ctrl-R shortcut is pretty powerful. Witness this showcase of wizardry from LV:
Ctrl-Space Ctrl-R results in this:
No broken wires, even though there is still some useless stuff to remove.
Now, why can't I select the subVI I just removed and the two DVR functions and Ctrl-Space Ctrl-R them (obviously, that was the effect I was trying to get to eventually)?
As a matter of fact, I'd be happy if I could select the two remaining DVR functions (and their connecting wire) and Ctrl-Space Ctrl-R them as in:
But as of today, that's what we get:
Suggestion: When a set of selected objects has an unambiguous set of source wires and an unambiguously corresponding set of sink wires, Ctrl-Space Ctrl-R should be able to zap everything and reconnect the unambiguously corresponding wires.
As a corrolary, when there is ambiguity, do not do anything with the wires that cannot be reconnected. Leave them broken. Do not do extra cleaning!
For instance, in the following case:
If I Ctrl-Space Ctrl-R the selection, here is the result:
Note that the DVR wire is connected properly, but the enum constant is gone, even though I did not select it!
The Clean Up button was a huge improvement in LabVIEW. Now I can concentrate on making my program work, not on what it looks like: just put the blocks and wires anywhere on the block diagram and let Clean Up move them to a suitable place. But unfortunately, Clean Up can sometimes make the program look rather messy. A Format Into File may be placed at the bottom and its format string at the top. Things that naturally belong together should stay together. Manual correction is a waste of time; it will be messed up again on the next cleanup. The only solution is to put some code in a sequence structure, but a feature should not be used for purposes other than it is meant for if you want other people to understand your program.
What we need is a lightweight box that can be placed around code that should stay together. This could be like the Flat Frame on the Decorations palette, which does not work for this today: cleanup moves it outside any code.
CleanUp is just a minor problem in this small program. In large programs it might be much worse.
Some of these may already exist, but this list is based on my experience.
- Read and write connection to each variable
- Highlight execution only for a selected area or block
- Possibility to change the Highlight execution speed
- When the grid is activated:
  - Snap wiring to grid
  - Size icons and blocks automatically to grid
  - Snap connections to grid
  - Align and size cases, loops, sequences, ... to grid
- Search option for a combination of object and text
- Logging and alarm properties for shared vars in cRIO
- Modern set of valves and pipes
- Cleanup wiring within selection
- Preview wiring while moving items
I think that a couple of very simple UI improvements to the linked tunnels ability would be great.
1 - when you click "create and wire linked nodes", the cursor becomes a soldering iron and, annoyingly, the scroll bars stop working - this is a pain if you have a large case structure. So, first, the scroll bars should keep working.
2 - another simple tweak would be to allow a quick jump to the other side of the case structure, since this is typically where you are connecting.
3 - once a linked node is created, if the link is broken, then in order to rewire it you have to "create and wire linked inputs" again - even though LabVIEW knows where the node is. It would be nice to have another option: wire to the already linked node.
The built-in LabVIEW comparison and array sort primitives are inadequate for many applications involving clusters. For instance, the clusters may contain elements that should be excluded from the comparison, or string elements that should be compared without regard to case.
For example, consider the following cluster:
Now, suppose I want to sort an array of this cluster, but I am uninterested in the VendorCode or the Password, and I want the Server, Database, and User to be compared caselessly. The Sort 1-D Array primitive will not do this properly. The common pattern for overcoming this is something like the code below.
This does the job, but it is not particularly efficient for large arrays. I could code my own sort routine, but that's not the best use of my time, nor is it very efficient.
A similar argument can be made for simple comparison (lt, le, eq, etc.) between two clusters, although this is easily done with a sub-VI.
My proposal is to take an object-oriented approach and allow clusters to decide how they are to be compared. This would involve something like attaching a VI to a cluster (typedef). This would allow the default comparison of two of these clusters to be determined by the provider of the cluster, rather than the writer of the code that later needs to compare the clusters. I will leave it to LabVIEW designers how to associate the comparison code with the cluster, but giving a typedef a block diagram is one way that comes to mind.
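As a rough analogy in a textual language (a sketch only; the `Connection` class and its field names are taken from the example cluster in this post, and `_key` is a hypothetical helper), attaching comparison code to the typedef is like letting a type define its own ordering, so every generic sort or compare uses it by default:

```python
# Sketch of type-owned comparison: the type itself decides how two
# instances compare, analogous to giving a cluster typedef its own
# comparison code. Password and VendorCode are excluded; the string
# fields are compared caselessly.
from functools import total_ordering

@total_ordering
class Connection:
    def __init__(self, server, database, user, password, vendor_code):
        self.server = server
        self.database = database
        self.user = user
        self.password = password        # not part of the comparison
        self.vendor_code = vendor_code  # not part of the comparison

    def _key(self):
        # Caseless comparison of the three fields of interest.
        return (self.server.lower(), self.database.lower(), self.user.lower())

    def __eq__(self, other):
        return self._key() == other._key()

    def __lt__(self, other):
        return self._key() < other._key()

conns = [
    Connection("beta", "db1", "alice", "secret", 1),
    Connection("Alpha", "db2", "bob", "secret", 2),
]
conns.sort()  # uses the type's own comparison, not a caller-supplied one
```

The point of the analogy: the provider of the type, not the later caller, decides what "less than" means.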
Of course, different elements may need to be compared in different ways at different times. This leads to the thought that Sort 1-D Array ought to take an optional reference to a sorting VI to be used instead of whatever the default is. This idea was touched on in this thread but never thoroughly explored. The reference would have to be to a VI that conformed to some expected connector pane, with well-defined outputs, like this:
Strictly speaking, the x > y? output is not required here. Another possibility is
which simply outputs an integer whose sign determines the comparison results. Clusters that cannot be strictly ordered would somehow have to be restricted to equal and not equal.
The advantage to wiring a reference to such a VI into the Sort 1-D Array primitive is obvious. It is less obvious that there would be any utility to be gained from providing such an input to the lt, le, eq, etc. primitives, but consider that this would allow the specifics of the comparison to be specified at run-time much more easily than can presently be done.
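The single-integer-output variant above has a direct textual-language analogue: a comparator function whose sign encodes the ordering, passed to a generic sort. This is a sketch only, with the field names taken from the example cluster in this post:

```python
# A comparator whose sign determines the comparison result (negative,
# zero, or positive), analogous to the proposed single-output VI wired
# into Sort 1-D Array. Server, Database, and User are compared
# caselessly; Password and VendorCode are ignored.
from functools import cmp_to_key
from typing import NamedTuple

class Connection(NamedTuple):
    Server: str
    Database: str
    User: str
    Password: str    # ignored by the comparator
    VendorCode: int  # ignored by the comparator

def compare(x: Connection, y: Connection) -> int:
    kx = (x.Server.lower(), x.Database.lower(), x.User.lower())
    ky = (y.Server.lower(), y.Database.lower(), y.User.lower())
    return (kx > ky) - (kx < ky)  # sign encodes <, ==, >

conns = [
    Connection("beta", "db1", "alice", "pw", 2),
    Connection("Alpha", "db2", "bob", "pw", 1),
]
sorted_conns = sorted(conns, key=cmp_to_key(compare))
```

Because the comparator is supplied at the call site, the specifics of the comparison can be chosen at run time, which is exactly the flexibility argued for above.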
I searched on "polymorphic" and did not find this idea posted.
I just learned over here that when you use a polymorphic VI, all flavors of that VI load into memory! That's why a VI hierarchy gets so cluttered so fast when you use them.
In the object-oriented version of polymorphism, all possible polymorphic cases need to be coded and loaded into memory, since any of these possible cases could be called depending on the execution of the program. In the LabVIEW-specific version of polymorphism, where a function has many flavors, perhaps due to a change in data type on one of the inputs, it is not usually the case that all of the different polymorphic members can execute at run time. In fact, I believe it is usually the case that only ONE of the cases will ever be called or execute.
So, why are all of the other polymorphic members in memory? I don't know. I think they shouldn't be. They seem to be eating RAM for no good purpose.
Load only the specifically called version of a polymorphic VI into memory.
Sometimes I do not wish to delete a file permanently; instead I want to keep the deleted file around for a later restore. Usually this is done in Windows by sending the file to the recycler and getting it back from there.
Two possibilities to get this functionality into LabVIEW:
- Add "Recycler path" to the VI "Get System Directory.vi" to return the path to the recycler. The file can than by moved to the recycler and eventually back again
- Add a new VI "recycle" to the files palette. It should work like the "delete.vi" and return the path to the recycled file in the recycler.
Local/global/Shared variables or properties nodes (that are read/writeable) are usually placed in read mode.
For power users it would be great if, by holding e.g. the Ctrl key, the local variable or property node were placed in write mode. It would also be good if the node adapted to the wire (input or output), but that idea is already on the idea exchange.
I often find myself not using arrays of clusters in my applications, since it's too much hassle to wrap up the data in For Loops and then use it. Instead I put my data in straight arrays.
If a feature like "unbundle array" were implemented, this problem would be solved and the code would even look elegant.
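In textual-language terms, "unbundle array" would turn an array of clusters into one array per cluster element in a single step. A minimal sketch (with made-up data, since the post shows none):

```python
# An array of clusters (here, tuples of an int and a float) is split
# into one array per element in one operation - what the proposed
# "unbundle array" node would do for an array of clusters.
points = [(1, 2.0), (3, 4.0), (5, 6.0)]
xs, ys = map(list, zip(*points))  # one list per cluster element
```

Today in LabVIEW this takes a For Loop with an Unbundle node inside; the proposal is to collapse that pattern into one node.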
When creating a "file view" tab control, it is nice to be able to reorder the tabs by dragging them. Currently there are no "drag started" events for tab pages.
It would make this really easy if there were a tab property called "enable page reorder". I know the Tab is currently an enum, so reordering would not make sense for the value, but if you just move the "Tab Caption" then it works.
If there were enough events and location information for the tabs I think it would be possible even if the functionality for "moving tabs" wasn't fully built in.
See any program with tabs for an example: Internet Explorer, Google Chrome, Notepad++.
I was inspired by the idea 'Color properties should default to colorbox data' to post this one.
You can drop the listbox symbol constant from the palette onto the block diagram. But if you create a control, or convert the constant to a control, you end up with a numeric control, not a pict ring of the symbols.
It would be nice, when including the listbox symbol in a data structure, to allow for its graphical representation. That way you would not have to look up the index number with the constant to figure out what symbol had been set in the data structure when debugging.
If there is a work around for this, please let me know!
LabVIEW is a graphical programming language, and I try to decorate my VIs with meaningful icons. But when I place subVIs, I miss being able to see the icon of the selected VIs/typedefs etc. in the file dialog.
Why not improve the file dialog to show the icon (or even an icon list)?
When you edit a VI, you may move the origin of the pane without noticing it. Next time you run it, your controls are not centered. There are several ways to work around this:
How about an option which sets the panel origin to 0,0 every time we save?
.... i.e., an Option which does this for us.
I have recently encountered some problems when using libraries in my code while needing/wanting to disconnect multiple VIs from that library. I have found the process of manually selecting the VIs for disconnection rather tedious and time-consuming (especially if they have enums and subVIs associated with them), so I would suggest that an option to "disconnect all VIs from library" could be a possibility. After a brief discussion with some of my colleagues, some of whom are very experienced developers, I have found they are reluctant to use libraries for this and various other reasons. I have managed to find a workaround on LAVA with this handy little script. I was surprised this functionality was not included in the development environment. Any input from other users on the pros and cons they have encountered when using libraries, and other suggested workarounds, would be greatly appreciated. Thanks!
An uninitialized feedback node will always start with the default value of its data type. Then you can select whether the feedback node should initialize once on compile or load, or on every first call. The latter option is only available when you wire something to the initializer terminal; otherwise it's grayed out:
I suggest that the Initialize On First Call option should always be selectable. If you haven't wired anything to the initializer terminal, the initialization value should just be the default value of the feedback node's data type (as usual). You may argue that you can always just wire a constant to the initializer terminal, but if that constant doesn't differ from the default value, it shouldn't be necessary, and such constants might even take up considerable block diagram real estate (clusters, arrays, or refnums for instance).
There should be a LabVIEW primitive that returns the size of the connected data type in bytes, similar to sizeof() in C.
I know this can be done using "Flatten To String" and "String Length", but that's only a workaround since this way all the data has to be copied just to get the size information.
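For comparison, textual languages can compute the byte size of a fixed layout from the type description alone, without touching any data. A rough Python analogy (the format string below is a hypothetical cluster layout, not from the post):

```python
# struct.calcsize derives the flattened byte size purely from a layout
# description - no data is copied or flattened. The proposed primitive
# would do the same from the wired data type.
import struct

# Hypothetical cluster layout: int32, float64, 8-byte string
# ('>' selects standard sizes with no padding).
layout = ">id8s"
size = struct.calcsize(layout)  # 4 + 8 + 8 bytes
```

This is exactly the difference the post is after: size from the type, not from a flattened copy of the data.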