LabVIEW currently allows users to execute a MATLAB script inside the "MATLAB Script" structure, which lets you add inputs/outputs to the edge, set datatypes, and then type your MATLAB code in the central box.
If you already have a MATLAB script, you can use the right-click menu to "Import" it (and, conversely, you can test a script in LabVIEW and then Export it if you want).
However, you cannot link to a script by path; importing simply copy-pastes the content into the Script node. This behaviour, whilst probably useful in some cases (it avoids future changes to the .m file breaking your nicely tested LabVIEW code), is different from most other nodes I can think of (Call Library Function Node, Python Node, .NET methods, ...).
Please add an option to pass a path to a "myFunction.m" file to the MATLAB execution system rather than copying the contents of a .m file into the structure's box.
(As a workaround, I believe this could be accomplished by running the MATLAB interpreter via command line and using the System Exec node, but that would require various path -> command line string parsing operations, and perhaps complicate cleanup of VIs using MATLAB.)
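The command-line workaround above could be sketched roughly as follows. This is a minimal sketch, not LabVIEW code: it assumes `matlab` is on the PATH and supports the `-batch` flag (R2019a or later), and the function names are invented for illustration.

```python
import subprocess
from pathlib import Path

def matlab_batch_command(m_file):
    """Build the argv list to run a .m file headlessly.

    Hypothetical helper; assumes MATLAB's -batch flag (R2019a+),
    which runs the named function and exits with its status.
    """
    m_path = Path(m_file).resolve()
    # cd to the script's folder, then call it by its stem (function name)
    return ["matlab", "-batch",
            f"cd('{m_path.parent.as_posix()}'); {m_path.stem}"]

def run_matlab_script(m_file):
    # Requires MATLAB installed and on PATH; raises CalledProcessError
    # if the script errors out.
    result = subprocess.run(matlab_batch_command(m_file),
                            capture_output=True, text=True, check=True)
    return result.stdout
```

The string-building in `matlab_batch_command` is exactly the "path -> command line string parsing" busywork the workaround would force on every VI, which is why a native path input on the MATLAB node would be cleaner.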
Trying to toggle back and forth between the instructions and the code for the online CLD exam was both distracting and difficult since most hot keys didn't work. Placing the instructions or goals in the block diagram would allow easier reference for those taking the exam.
Since LabVIEW 2017, it has been possible to build an application that is compatible with future versions of the run-time engine.
This option is set by default but can be disabled.
I just discovered that this option is also set for real-time applications and cannot be unset. That is, if you build your application with LabVIEW Real-Time 2017, it will run on a system with a newer version of LabVIEW Real-Time installed.
This can be a good idea, but I'm a little surprised that there is no information about this option for real-time applications and that I can't control it.
Here is a way to test it (verified on a real-time desktop target running Phar Lap):
1. Install the RT target with LabVIEW 2017.
2. Build an application and set it to run as startup. A simple application that writes something to the console is enough.
3. Make sure your application is running at startup.
4. Update the system by installing only LabVIEW Real-Time 2019.
5. Restart the system: your application is still running!
Because I faced an issue where LabVIEW 2020 broke an application built in LabVIEW 2017, I wonder how NI can guarantee that a real-time system will work in every case when it is upgraded to a higher version of LabVIEW Real-Time without recompiling the application.
Real-time systems can be used to control equipment that must remain safe. If a user updates a system by mistake, I want the system to stay safe for its users.
So my idea is to either remove this option or let the user deselect it, to avoid any bad behaviour.
When creating a new Source Distribution to run on an RT system, it makes no sense that the "Exclude vi.lib VIs" option is checked by default. The resulting VIs will not run and cannot be launched asynchronously, which is the whole point of a source distribution on an RT system.
The "New Source Distribution" wizard that creates it with default properties should look at the context in which it is being created and pick appropriate options. This is supposed to be a smart IDE.
We use Queued Message State Machines (QMSM) a lot and often send messages through API calls. In a state machine I prefer using enums to determine the state, and in the QMSM that would be useful too, because sometimes a typo in a message in an API call stops it from working.
However, the Message Queue functions accept strings only.
We could modify the message cluster or make the VI polymorphic, but we would then need to do that every time we set up a new machine in LabVIEW. Would this be useful for anybody else?
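The typo argument can be illustrated in text-code terms (a minimal Python sketch, not LabVIEW; the `Msg` enum and `handle` dispatcher are invented for illustration): an enum-typed message fails loudly at the send site, whereas a misspelled string is accepted silently and only surfaces later when no case matches.

```python
from enum import Enum, auto
from queue import Queue

class Msg(Enum):
    INIT = auto()
    ACQUIRE = auto()
    SHUTDOWN = auto()

def handle(msg):
    # Dispatch analogous to a QMSM case structure
    if msg is Msg.ACQUIRE:
        return "acquiring"
    return "ignored"

q = Queue()
q.put(Msg.ACQUIRE)   # a typo such as Msg.AQUIRE raises AttributeError here
# With string messages, q.put("Aquire") would be accepted without complaint
# and the state machine would silently fall through to its default case.
```

This is the kind of compile-time (or at least send-time) checking that enum-based messages would bring to the QMSM pattern.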
I wanted to use a property node in an application today. I happened to have the Help context up and it showed "Run-Time Engine: No". But I could easily have missed that and not discovered the issue until much later, after building a lot of code and wasting time.
If the node had an obvious indication as soon as you put it on the block diagram - for example, a different color - that would help a lot and potentially save a lot of headache.
Breakpoints are great for debugging. But...I've never wanted to share them with another developer, and I've never liked it when they add the dirty dot to code that I haven't otherwise changed. This can lead to unnecessary code changes which can add hassle to source control, and it can lead to other developers unintentionally inheriting your debugging breakpoints.
How about an option to manage breakpoints separately from source code, perhaps similar to how compiled code is handled?
I work with large projects. And I often need to move items around in my projects. However... while you're dragging an item in the Project Explorer, if the cursor is at the top or bottom of the visible area, the window does not scroll. This means that the item you're moving, and its destination, must both be visible at the same time in order to drag and drop it.
I regularly have to collapse items just to move a file. This is time-consuming and forces me to collapse tree items that I would prefer to keep expanded.
On bigger projects, when creating a new class, I find it time-consuming to track down the parent class I'd like to inherit from.
It'd save me some pain if there were some kind of filter and/or search option for this in the New Class dialog.
Other thoughts on this:
- While the tree structure is useful, I usually know the name of the parent class I want to inherit from, but I don't necessarily know the full inheritance of it, meaning the tree structure isn't the most efficient way to find it. (Even alphabetical by class name would be faster in these cases).
- I'd find the tree structure here easier to follow if the lines were visible.
Many times, the bulk of LabVIEW development happens on computers that will never interface with hardware. A dozen engineers may be collaborating on code that will ultimately run on a dedicated machine somewhere, that is connected. Yet, as things currently are, I have to install more than I need on my development machine to get access to API VIs. If I am working on my laptop on an application with DAQ, RF, Spectrum analyzer, etc. components, I have to choose to either download and install all of that, or deal with missing VIs and broken arrows. This seems needless, since my particular machine will never actually interface with the hardware.
I would like to have the option to install only the LabVIEW VIs and ignore the driver itself. In many, if not most cases, the LabVIEW API could be independent of driver version. It could install very quickly, since it would just be a set of essentially no-op VIs. I don't care that the VIs would do nothing. They would just be placeholders for my development purposes. This would allow me to have full API access to develop my code without having to carry around large driver installations that I will never actually use.
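The placeholder idea can be sketched in text-code terms as a stub that mirrors a driver API's signatures without touching hardware (a rough sketch; `DAQSession` and its methods are invented names, not a real NI API):

```python
# Hypothetical stub mirroring a driver API's call signatures so that
# application code loads and compiles on machines without the driver.
# None of these names correspond to an actual NI interface.

class DAQSession:
    def __init__(self, resource):
        # Stores the resource name; no hardware is opened.
        self.resource = resource

    def read_samples(self, n):
        # Placeholder: returns zeros instead of acquiring data.
        return [0.0] * n

    def close(self):
        # Nothing to release in the stub.
        pass
```

On the dedicated test machine, the same calls would resolve to the real driver; on a development laptop, the stubs are enough to get unbroken arrows and full API access.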
I find the process of initializing Maps in LabVIEW to be unintuitive and inconsistent with the initialization process for other similar concepts (such as, say, arrays). After some initial trial and error, the process I've settled into for creating maps is to drop a map constant, and then to drag and drop the appropriate data types onto it for my key and value. I first looked for an Initialize Map VI (which doesn't exist) and then tried to create the constant by wiring up appropriate types to an "Insert Into Map" and creating the constant from there -- but this doesn't work as expected because the terminals don't update appropriately.
What drove me to come to the forum today, though, was that in creating a malleable VI with a map inside it, I found a breaking point for the "drop a constant and drag values into it" approach. Since the map data types need to be dynamic to support malleable VIs, I've had to get creative to work around that...