As I continue to use LVOOP in my LabVIEW code, I'm beginning to question the value of the graphical programming concept as a whole. I think that it's a very interesting concept for non-programmers and for educational or entertainment purposes. When it comes to professional hardware integration and product development, I feel the graphical environment becomes a hindrance. I understand that LabVIEW is a powerful programming tool and can cut down on the time to build a product. What I question is whether this is due to the graphical programming environment, or due to the plethora of reusable functions NI has provided that are useful for hardware integration.
What I propose is to provide a text-based, object-oriented programming environment, perhaps a flavor of C++ or Java, such that applications could be optionally written and compiled in Eclipse or Visual Studio. I understand CVI/LabWindows is similar, but I feel it's not as full-featured an environment as other text-based IDEs.
An even more clever environment might allow text-based and graphical programming to be used simultaneously, each wherever it is most appropriate (e.g., keep the "front panel" as it is and allow a graphical representation of data flow and code parallelism).
I feel that providing such an environment will also invite more programmers to the National Instruments community. Unfortunately, this might hurt the traditional LV developers who have benefitted from a scarcity of professionals who work with such an esoteric programming language. In the long run, I think that it would be a good direction to go for the overall health of the community and the betterment of the craft.
A polymorphic VI always displays input/output labels, captions, and tips from the instance VI (which makes sense, as these aren't defined within the poly-VI). You do get a choice of where to take the VI icon from, though:
The VI description also follows this selection, which I sometimes find unfortunate. I generally make two kinds of poly-VIs:
1) API or Lib-type poly-VIs, where each function can have very different IO on the con-pane.
2) Type overloading poly-VIs, which contain a bunch of almost identical VIs that only differ in the data type of some "Value" input or output.
In the former case I prefer both VI icon and VI description to come from the instances, but in the latter case I'd like to get the VI icon from the instances and the VI description from the poly-VI (the VI description can be long, only differing maybe in a single sentence, easy to make into a common description for all data types). So in polymorphic VIs I suggest splitting the selection of VI icon and VI description source into two:
I think it would be useful if Bundle by Name (the cluster creation function) could automatically pick up the names of its inputs as the default names of the members of the cluster being created (if others are not specified).
In recent years, monitors and graphics cards have become more and more advanced, supporting very high pixel resolutions.
Using 1600x1200 is common now, but this can create problems with LabVIEW due to the absence of a zoom function.
At that resolution, the VI connectors are too small and too close together, so they can be difficult to work with; more generally, the whole block diagram of a VI can be difficult to edit.
So I suggest implementing a zoom in/out function in LabVIEW (perhaps using the mouse wheel), as in common CAD software.
LabVIEW remembers, in a buffer, the folder from which it last retrieved a VI, project, or other file.
If a VI has been running (using file I/O), the folder buffer is set to the folder from which the last file was read or written.
If you then want to open a new VI, the file browser opens in that most recently used folder.
It would be nice to remember the VI folder and the data folder separately. This means two buffers instead of one.
I would like to have some kind of compiler optimization options.
Save times are often too long, and editing is annoying.
Editing in LV2010 often halts for 10-20 seconds because it is recompiling the code for some reason.
If we had an option to turn off "advanced optimization", things might go smoothly, as in the old days.
When replacing a normal Add (of a timestamp and a value) with Compound Arithmetic, the timestamp input gets broken; this should not be the case.
LabVIEW does currently allow you to run multiple instances of your LabVIEW executable, but it requires editing the *.ini file:
Is there any reason this shouldn't be an option/checkbox in the Build Specifications?
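For reference, the commonly cited approach is to add a token to the .ini file that sits next to the built executable. A minimal sketch follows; the token name is the one usually quoted in NI forum and knowledge-base posts, so verify it against your LabVIEW version's documentation before relying on it, and note that `MyApp` is a placeholder for your own executable's name:

```ini
; MyApp.ini, located in the same folder as MyApp.exe
; The section name must match the executable name.
[MyApp]
AllowMultipleInstances=True
```

Exposing exactly this as a checkbox in the Build Specifications would save everyone from hand-editing the file after each build.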
When you want to use an array of simple types or clusters ... the uninitialized array elements are viewed as shadow elements.
It would be nice to be able to modify this behavior through the array configuration.
One of the frustrating (and mystifying) things about LabVIEW is that it doesn't seem to know which libraries it needs to build an installer. I have to guess which libraries I am accessing, and if I don't realise that one of the subVIs I used calls a function from the Math Kernel Library, I have to rebuild the whole thing and do another test install. Depending on the size of the project and the machines you are using, this can take a considerable chunk of time (on my machine, half an hour or more). It also selects a set of "standard" libraries to install, many of which I'm not using, but I must guess which ones I don't need. Again, I won't know whether a subVI is accessing one of them until I actually try installing it.
Wouldn't it be great if LabVIEW could look at its own VI hierarchy and automatically include the libraries it accesses when you do a build, or at least tell you what you need, like most other languages do? (Are there any others that don't?)
My customer is using LabVIEW 2009 with SIT, MATLAB R2007B and he was able to transfer more than 97 elements of array data with his Simulink .mdl file with the following configuration:
- Configuration parameter of MATLAB model: solve complete time: inf, type: fixed step, solver: discrete, fixed step sample time: 0.01
- Default LabVIEW array reading rate defined by auto generated VI: 50ms >> changed to 5ms
- Executed in Windows local host
However, when the .mdl file is converted to a DLL, it seems that an array of size over 97 cannot be transferred.
The issue can be reproduced with multiple arrays or with multiple scalar controls as well, so I believe it is an issue of how much data can be handled, not a data-type issue. As I mentioned previously, data is passed up to a certain amount, but beyond this "threshold" data does not seem to be passed, and the default value of 0 is displayed on the indicator. (In an array, the specified array size is initialized, but after the threshold, 0s are shown.)
I was also able to reproduce the issue on my end with the attached files.
Whenever I have a couple of VIs open, I drag and arrange them around the screen(s) for better visibility. I think it would be great if there were a way to have LabVIEW open VIs at a specified position on the screen, not necessarily the last open position. (This would be useful during development, especially if one plans to share these VIs with others.)
There is an existing idea about moving labview.ini and other user configuration files to the corresponding user folder.
I propose to go a step further and make the LabVIEW environment project based, which means that you can define a labview.ini file, VI templates (including standard VI, if you create a new VI), palettes, QuickDrop stuff, etc. inside a project.
E.g., templates are saved in a custom location and linked inside the project. If you do this and open the "New..." dialog, then all the templates inside the project are listed first (below the project).
If you do not use this feature in your project, the default files are used (default meaning the way LabVIEW works now).
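To make the proposal concrete, here is a hypothetical sketch of what a project-scoped configuration might look like. Every file name and key below is invented for illustration; nothing like this exists in LabVIEW today:

```ini
; MyProject.lvproj.ini - hypothetical per-project override file,
; stored alongside MyProject.lvproj and loaded when the project opens.
[Environment]
TemplateDir=.\templates        ; project VI templates shown first in "New..."
DefaultVITemplate=.\templates\Standard.vit
PaletteDir=.\palettes          ; project-specific palette set
QuickDropShortcuts=.\quickdrop.ini
; Any key not listed here falls back to the global labview.ini,
; preserving today's behavior for projects that opt out.
```

The fallback rule in the last comment is the key design point: projects that define nothing behave exactly as LabVIEW does now.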
When working with reentrant VIs, every time you double-click an instance of a reentrant VI on the block diagram, the cloned instance is opened. This is fine during debugging, but during development, when you want to modify the actual code, it requires that you open a clone and then press CTRL+M to open the master VI. It is much more frustrating if you have to dig several VIs deep through a highly reentrant hierarchy.
We have CTRL+right-click to open the block diagram directly; how about something like ALT+right-click to open the master VI instead of the clone?
Under normal development this is not such an issue, but if you happen to be working with LV FPGA, just about everything is reentrant.
My team has been working with LVMerge.exe for a while. I wish I could disable "auto-resolve" by default, instead of unchecking it after every load. Disabling block-diagram cosmetic changes by default would also be nice. Some method to persist these settings, so that on the next VI load the modified settings are used, would be appreciated. My original post is here.