It would be nice to add a new "Build Specification" in LabVIEW RT that could generate something like an RT target installer.
This installer could contain ...
When the installer is executed, it would be nice to show all the available targets (like in MAX).
The user would then select the destination target from this list.
Next, it would be nice to show the target configuration (like in MAX), initialized with the settings contained in the installer.
An installation could be like this ...
This feature could be an easy way to deploy RT applications without having LabVIEW RT installed on the host computer.
An RT application could be installed entirely by the application's end customer, without having to install the target, drivers ...
It would also make it easy to clone an RT application many times.
Inspired by this post, and by my own experience debugging a problem that only appeared after I compiled an RT executable: LabVIEW should warn the user when compiling a real-time application that contains property nodes requiring front panel access. Those property nodes will not execute properly in an RT application, and the source of the problem can be very difficult to find if you don't know this.
When developing RT code (especially system upgrades) it would be truly helpful to have a virtual machine (VMware, MS Virtual PC, Sun VirtualBox, etc.) that would allow us to run the actual VxWorks OS and LVRT in its native environment, within the Windows OS. This would allow the code to run on the actual RTOS (I realize that determinism would be sacrificed) and provide the ability to actually test the functionality of the code in its real environment to ensure that it runs as it should. It would also remove the need to keep a bunch of RT controllers sitting on the shelf in case you might need them.
There is an emulator for the PDA module, so why not one for RT?
Many developers have the primary Ethernet port of their development computer reserved for corporate intranet/internet access.
Unfortunately, MAX and other tools like the RT System Deployment Utility expect the targets to be connected to that same primary port for initial configuration, because they do not allow the specification of a local IP on which to exchange the UDP configuration packets.
Being able to select the Ethernet port to which the RT system is connected, e.g. through a ring control populated with all available NICs and their local IPs, would facilitate development enormously in such setups, because the developer would not need to switch cables and IP configurations every time the RT system needs to be reconfigured.
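The kind of NIC selection being requested can be sketched in ordinary socket code: bind the configuration socket to a chosen local IP instead of letting the OS pick the default interface. This is only an illustration of the idea, not NI's actual discovery protocol, and the helper names are invented:

```python
import socket

def open_discovery_socket(local_ip, port=0):
    """Bind a UDP socket to a specific local NIC instead of the OS default."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    s.bind((local_ip, port))  # outgoing packets now leave via this interface
    return s

def local_ips():
    """Enumerate candidate local IPv4 addresses, as a ring control would list them."""
    hostname = socket.gethostname()
    return sorted({info[4][0]
                   for info in socket.getaddrinfo(hostname, None, socket.AF_INET)})
```

A tool that offered `local_ips()` in a ring control and bound its discovery socket to the selected address would work no matter which NIC the target hangs off.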
Currently, if you have hardware in a LabVIEW project (e.g. a cRIO controller, cRIO chassis, or R Series PXI card), the only way to change it to another product is to add a new one to the project and delete the old one. It would be nice to be able to use a configuration window to change the model number of a piece of hardware to a different but similar one: for example, changing a 9072 in the project to a 9073, or changing a PXI-7813R to a 7854R via menus. Of course the user would have to update any code written to account for changes due to the new hardware. This would be especially convenient when you are simulating and configuring test systems but aren't quite sure exactly what hardware you need. Currently, for each new piece of hardware (similar or not) you have to create a new device and copy all of the I/O, VIs, libraries, etc. under the new device in the project.
When working in the Windows development environment, the application builder can set a version number for the built executable, and LabVIEW can query this version number through a property node. I would like to see this feature carried over to RT systems as well. It would be very helpful in determining which particular build of the startup.rtexe file is running on the target.
Right now, you can simulate the I/O from an FPGA target. Timing features and other hardware-specific VIs are not executed, but the code still functions and allows you to debug certain aspects of it without working through the compile process. It would be similarly helpful if you could simulate the real-time controller, or a cRIO in Scan Mode, with simulated I/O. Again, the resulting VI would not be truly real-time, but it would allow useful development without constant access to the cRIO.
I often work with a CompactRIO and need to change the IP configuration or the software settings. Currently I have to open MAX, refresh my target list (which can sometimes take a minute or two, since we have so many targets), and then open the RT target settings.
I think the IP address and software settings should be accessible from the project window by right-clicking on the target and selecting properties.
The MAX settings page could be reproduced in the General settings section, which already has a limited ability to edit the IP.
You could also add a Software category, so you could update the RIO driver version or install the Scan Engine.
If you get errors when deploying your real-time VI, you have to scroll through a small window with many, many lines of messages to find the error that's in bold text... often only to find that it's just the "startup application is missing" error. It would be better to have a separate box where the errors are summarized for you.
Currently, you can view the console from MAX, but if you don't know which buttons to press... let's just say it's in there somewhere. My idea is to make it more accessible, such as through a right-click option on the RT target.
I noticed there seems to be no way to guarantee the state of an output module controlled by the Scan Engine if the RT application (or the host application, depending on who is controlling the chassis) crashes. With FPGA you can program a kind of watchdog that sets the output values of a module back in case the RT executable fails. With Scan Mode there just seems to be no such possibility.
This is why I think adding a FailSafe value for a Scan I/O node could be a great idea. If the RT application gets aborted or stops without cleanup, the output values would no longer be random but would be set back to their FailSafe values. I imagine it could look like this:
What do you think about it?
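The FailSafe behaviour being proposed can be illustrated in ordinary code as outputs that snap back to declared safe values whenever the controlling code aborts. A minimal sketch, with invented channel names and values:

```python
class FailSafeOutputs:
    """Outputs that revert to declared fail-safe values when the app aborts."""

    def __init__(self, failsafe):
        self.failsafe = dict(failsafe)  # channel -> safe value
        self.values = dict(failsafe)    # current output values

    def write(self, channel, value):
        self.values[channel] = value

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        if exc_type is not None:
            # Application aborted without cleanup: drive every output
            # back to its fail-safe value instead of leaving it random.
            self.values = dict(self.failsafe)
        return False  # propagate the exception

outputs = FailSafeOutputs({"AO0": 0.0})
try:
    with outputs as io:
        io.write("AO0", 7.5)
        raise RuntimeError("simulated RT application abort")
except RuntimeError:
    pass
# outputs.values["AO0"] is back at the fail-safe value 0.0
```

In Scan Mode this logic would live below the application, so the safe values would apply even when the RT executable dies without running any cleanup code.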
It would be nice to be able to duplicate a cRIO target easily:
=> be able to duplicate the target
=> be able to duplicate the backplane configuration
=> be able to duplicate an FPGA target
It would also be nice to be able to change easily ...
=> the cRIO type
=> the cRIO backplane type
Some commands to manipulate targets are missing ... or are hidden ...
For example, the copy/paste commands are available by keystroke ... but are not visible through the LabVIEW project tree context menu.
I think the usability of target manipulation should be improved.
Thanks for your help.
When running a VI in development mode while targeting an RT device, you must "Save All" prior to deployment. This is annoying, especially when using SCC. I'm sure that SourceOnly will minimize this effect in LV2010, but the concept still remains: I don't want to be forced to Save All when I don't want my edits (or automatic linking edits) to persist.
Wouldn't it be nice to have a native LabVIEW XML parser available on real-time targets? Storing your config data in XML rather than in INI config files is more flexible, techniques like XML-RPC would be easier to implement, etc.
Yes, I know about the third-party EasyXML library, but I don't want to spend extra money when we are already paying for LabVIEW.
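To show the kind of flexibility being asked for, here is what parsing a structured config looks like with a standard-library XML parser in Python; the tag and attribute names are made up for illustration:

```python
import xml.etree.ElementTree as ET

# Hypothetical channel configuration -- element names are illustrative only.
CONFIG = """
<config>
  <channel name="AI0" range="10.0" enabled="true"/>
  <channel name="AI1" range="5.0" enabled="false"/>
</config>
"""

def load_channels(xml_text):
    """Return a dict of per-channel settings parsed from an XML string."""
    root = ET.fromstring(xml_text)
    return {
        ch.get("name"): {
            "range": float(ch.get("range")),
            "enabled": ch.get("enabled") == "true",
        }
        for ch in root.findall("channel")
    }
```

Unlike flat INI sections, nested elements and typed attributes map naturally onto structured settings, which is exactly what a native parser on the RT target would enable.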
Dual network interfaces are often part of the requirements for redundancy, but in such cases it is also very common to specify that both interfaces should behave identically. You see it in subsea control systems that have an "A" and a "B" channel, and you see it topside where a device might need to be on two networks, etc.
Unfortunately this is not the case for any of the dual-port RT targets from NI. The secondary port is really a second-class NIC: it has limited configuration options, it does not support DHCP, you cannot specify a gateway for it, and the code to make programmatic changes to its configuration is not easily available.
Please make the two ports fully interchangeable. Port 1 or port 2: it should not really matter which one you use.
The cRIO chassis can only be configured to use one time server for time synchronization. If that time server fails, the only recourse is to detect the failure somehow, edit the ni-rt.ini file, and then reboot the controller. This is rather clumsy, and it would be helpful to be able to specify alternate IP addresses.
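The alternate-address behaviour being requested amounts to a simple failover loop over an ordered server list. A minimal sketch: the probe callable is injected (e.g. a one-shot SNTP query in a real system) so the selection logic stands on its own, and the addresses are placeholders:

```python
def first_reachable(servers, probe):
    """Return the first time server that answers, or None if all fail.

    servers -- ordered list of candidate IPs/hostnames (primary first)
    probe   -- callable(server) -> bool, e.g. a one-shot SNTP query
    """
    for server in servers:
        try:
            if probe(server):
                return server
        except OSError:
            continue  # treat network errors as "unreachable" and try the next
    return None
```

With this kind of fallback built in, a dead primary time server would degrade the system to a secondary server instead of forcing an ini-file edit and a reboot.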
Changing the IP setup currently requires a restart. It is now possible to change the configuration of all network interfaces programmatically using the System Configuration API, but it is not possible to put such changes into effect on the fly, so you actually have to shut down your application to apply them, which might not be allowable.
So let's say, for example, that I have a controller with dual network interfaces and I want to reconfigure one of them (turn on DHCP, for instance), but I do NOT want to stop the RT application, because it controls processes via the other interface, and those processes must be kept alive. On a Windows PC I could do this. On an NI controller, which is ironically used for more critical tasks, I cannot.