LabVIEW Real-Time Idea Exchange

About LabVIEW Real-Time Idea Exchange

Have a LabVIEW Real-Time Idea?

  1. Does your idea apply to LabVIEW in general? Get the best feedback by posting it on the original LabVIEW Idea Exchange.
  2. Browse by label or search in the LabVIEW Real-Time Idea Exchange to see if your idea has previously been submitted. If your idea already exists, be sure to vote for it by giving it kudos to indicate your approval!
  3. If your idea has not been submitted, click New Idea to submit a product idea to the LabVIEW Real-Time Idea Exchange. Be sure to submit a separate post for each idea.
  4. Watch as the community gives your idea kudos and adds their input.
  5. As NI R&D considers the idea, they will change the idea status.
  6. Give kudos to other ideas that you would like to see in a future version of LabVIEW Real-Time!

Dual network interfaces are often part of the requirements for redundancy; however, in such cases it is also very common to specify that the behaviour of both of these should be identical. You see it in subsea control systems, where they have an "A" and a "B" channel, and you see it topside, where the device might need to be on two networks, etc.

 

Unfortunately this is not the case for any of the dual-port RT targets from NI. The secondary port is really a second-class NIC. It has limited configuration options: it does not support DHCP, you cannot specify a gateway for it, and the code needed to change its configuration programmatically is not easily available.

 

Please make the two ports fully interchangeable. Port 1 or 2? It should not really matter which one you use.

 

 

The SVE on a Windows-based platform acts as an OPC-DA server. Due to the requirements for DCOM, this is not possible on a real-time target. However, as OPC-UA is gaining traction, I would find it very useful to have the SVE act as an OPC-UA server for integration with 3rd-party systems (SCADA, IIoT).

 

I know that there is the OPC-UA API palette; however, this takes more time to set up than creating a shared variable (i.e. a tag) and moving on with the application development. If update rates are slow enough (500-1000 ms), I have found the shared variable to be more than adequate for my applications (500-800 NSVs), so I'm not looking for granular control over how I send data out of the cRIO system. Ease of use is my primary concern.
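To illustrate the kind of 3rd-party integration this would enable: if the SVE exposed an OPC-UA endpoint, a SCADA/IIoT client could read a published tag with a few lines of generic OPC-UA client code. The sketch below uses the open-source Python opcua (FreeOpcUa) client; the endpoint URL and node ID are placeholders rather than an actual NI interface:

    from opcua import Client

    # Placeholder endpoint: a UA server hosted by the SVE on the cRIO
    client = Client("opc.tcp://192.168.1.10:4840")
    client.connect()
    try:
        # Placeholder node ID; a published shared variable would map to some UA node
        tank_level = client.get_node("ns=2;s=TankLevel")
        print(tank_level.get_value())
    finally:
        client.disconnect()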

When running a VI in development mode while targeting an RT device, you must "Save All" prior to deployment. This is annoying, especially when using SCC. I'm sure that SourceOnly will minimize this effect in LV2010, but the concept still remains: I don't want to be forced to Save All when I don't want my edits (or automatic linking edits) to persist.

It would be very convenient if we could format and install the OS/RTE on a target directly from the project window.

 

Currently, if I'm about to deploy a new application, I typically format the target, install the OS software on it, then deploy the application onto it, and finally make an image of it that will serve as a way for others to deploy the application to production targets.

 

This involves the use of NI MAX (format and install the OS), LabVIEW (deploy the app) and RAD (make the image). Doing all of these operations (image-making as well would be great) from LabVIEW would make the workflow much nicer.

 

P.S. In the project window today you have Utilities>>System Manager. The described operations should be available directly from the menu, but it would also feel natural to have these options in the system manager.

When I choose to "Stop waiting and disconnect" from an RT system that is unresponsive, suppress the dialog box that fires immediately afterward saying that the connection has been lost. I already know that; I am the one who asked for the disconnection.

Many of the features that are not supported on LinuxRT are things that one would expect not to work (due to the underlying technologies), but why no menu and cursor functions? 

 

The lack of menu support in particular means that most GUIs have to be customized to run on the target, even though Xfce should be perfectly capable of handling them in their original design. I was really hoping we could port some of our GUIs straight to the embedded GUI and replace our industrial computers immediately with cRIOs (the differences in appearance are easier to handle; after all, we use system controls and take cross-platform variations into consideration already).

 

Now we'll have to (decide if we want to) spend time on customizing the GUIs. :(

If you have an RT target set up with a startup application (ready to be deployed in the field), running a VI on it from the IDE should not change anything permanently. 

 

Today (LabVIEW RT 2013), doing this will not just temporarily stop the VIs running on the target (from the executable), as you get warned about, but also cause the RTTarget.LaunchAppAtBoot=True line to be removed from ni-rt.ini. So the startup application will not be launched the next time the device is started up, rendering your device useless in the field. Why?
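For reference, the setting in question is a single line in ni-rt.ini on the target; a typical excerpt looks roughly like this (the section name shown here is illustrative and may differ by target):

    [LVRT]
    RTTarget.LaunchAppAtBoot=True

Running a VI from the IDE removes that line, so on the next boot no startup application is launched.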

 

- We had an incident where we narrowly escaped such a scenario. The RT target was embedded in a canister that was about to be sealed, but an unforeseen issue made it necessary to run a special test on it with a VI from the IDE. The startup application in this case provides the only way into the system once the canister is sealed (no Ethernet access, just RS485), so having it no longer start would be a catastrophe. No one expected that running the VI would actually change anything permanently. We tested it of course, and saw that it stopped the startup application (and so we loaded an image of the correct setup afterwards to be sure all changes were removed), but it would be much better and more intuitive if no such permanent and fundamental changes occurred (if it is actually possible to implement it in such a way).

 

 

Currently, viewing, testing or configuring NI-9862 modules in a cRIO using NI-MAX takes a lot of effort. It requires:

- Installation of the complete LabVIEW Real-Time and FPGA development suite on the connected PC

- Setting up a LabVIEW project including the NI-9862 modules and an empty FPGA VI

- Compiling the empty FPGA VI to an FPGA bitfile

- Starting the FPGA VI

Without these steps, the NI-9862 modules are invisible in NI-MAX.

 

Proposed improvement:

- NI-MAX should include a default FPGA bitfile that includes the communication with NI-9862 modules

- When the user connects to the cRIO with NI-MAX, the current cRIO configuration should be automatically saved to a backup and the default bitfile should be activated

- When the user disconnects, the backup should be restored

 

Advantage:

- Viewing, testing or configuring NI-9862 modules in a cRIO would be possible just by starting NI-MAX and connecting to the cRIO, as simply as with any standard I/O modules.

Changing the IP setup currently requires a restart. It is now possible to change the configuration of all network interfaces programmatically using the System Configuration API, but it is not possible to put such changes into effect on the fly, so you actually have to shut down your application to apply the changes, something that might not be allowable.

 

So let's say, for example, that I have a controller with dual network interfaces and I want to reconfigure one of them (turn on DHCP, say), but I do NOT want to stop the RT application, because it controls processes via other interfaces, and those processes must be kept alive. If it were a Windows PC, I could have done it. On an NI controller, which ironically is typically used for more critical tasks, I cannot.

The cRIO chassis can only be configured to use one time server for time synchronization. If that time server fails, the only recourse is to detect that in some way, edit the ni-rt.ini file, and then reboot the controller. This is rather clumsy, and it would be helpful to be able to specify alternate IP addresses.
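To make the current limitation concrete: the SNTP source in ni-rt.ini points at a single host, along the lines of the excerpt below (the exact key names vary by controller OS and version, so treat them as approximate). The commented-out line is purely hypothetical and only illustrates the kind of alternate-server list this idea asks for:

    [TIME SYNC]
    source.sntp.enable=true
    source.sntp.address=10.0.0.5
    source.sntp.interval=60
    ; hypothetical fallback list -- not a real key today:
    ; source.sntp.address.alternates=10.0.0.6;10.0.0.7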

 

Sometimes, you really just want an RT front panel. Of course, there is no real front panel, but when you run in interactive mode LabVIEW automatically sets up network communication to pull data from the RT target and push data to it. Wouldn't it be great if you could just convert that whole front panel into a basic host VI? Right now, you have to manually convert all controls and indicators to shared variables. Granted, this does force you to be very careful about limiting the number of network-shared items that you have, but sometimes you really just want to run the VI in interactive mode... but deployed.


 

Or, better than that, convert everything into a web service and automatically build a Silverlight UI (using Web UI Builder) that is hosted on the cRIO. Anything that provides a quick and easy way to go from the RT debugging UI to a basic host VI would be great.

To allow expansion of the DAQ capabilities of a real-time PXI rack, it would be nice to be able to add a CompactDAQ chassis on the Ethernet port and address it like you can on a desktop. I understand this is possible for USB-connected chassis but not Ethernet.

 

This would allow an existing RT DAQ system to be easily expanded, or to acquire data from remote points without the necessity of wiring every channel back to the main rack.

Wouldn't it be nice to have a native LabVIEW XML parser available on real-time targets? Storing your config data in XML rather than in config INI files is more flexible, techniques like XML-RPC would be easier to implement, etc.
Yes, I know about the third-party EasyXML library, but I don't want to spend extra money as we are already paying for LabVIEW. ;)
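As a small illustration of the flexibility argument: nested, per-item settings that are awkward to flatten into INI sections map naturally onto XML. The element and attribute names below are made up purely for the example:

    <config>
      <channel name="AI0" enabled="true">
        <range min="-10" max="10" unit="V"/>
        <filter type="lowpass" cutoffHz="100"/>
      </channel>
      <channel name="AI1" enabled="false"/>
    </config>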

Given a project topology like this:

 

Windows PC Host <-> cRIO-9024 <-> NI-9114 backplane <-> NI-9144 EtherCAT #1 <-> NI-9144 EtherCAT #2

running in hybrid scan mode: during development I wish it were possible to, for example, tell the project to run without EtherCAT #2 being physically connected, using virtual/simulated I/O instead. As things stand now, when I try to switch from Configuration to Active mode, I get an error saying that "the slave device cannot be found".

 

More generally, I often find myself thinking that the project explorer needs a good way to "comment out" various pieces of the project without having to resort to "Remove From Project".

I had posted this idea to the LabVIEW idea exchange before, but it makes more sense to have it here.

http://forums.ni.com/t5/LabVIEW-Idea-Exchange/Allow-RT-FIFOs-to-be-of-type-LV-Class/idi-p/2426726

 

It would be very useful if RT FIFOs could be of type lvclass, as long as the class's private members are of static types (performing the same check that is done for clusters when you try to use them as the type for an RT FIFO).

Any controller that has either a USB port or an SD/SDIO card slot would have a bootstrap loader, available when the controller is in safe mode, that would allow the controller to be brought up to operational status (OS, drivers, RTEXE, etc.) from a deployment image contained on the removable media. This would allow a replacement controller to be unboxed and installed in a stand-alone system without the need to install software from a development computer.

 

This is one of those things that sort of bridges the gap between a hardware request and a software request.

One common request for production systems is some kind of monitoring/surveillance.

The current NI System Monitor 1.1.1 lacks official support for the recent LabVIEW versions and the most recent RT controllers.

For an updated version, I would love to see some kind of S.M.A.R.T. support (including wear-out indicators for SSDs).

If access to the S.M.A.R.T. data is technically not possible during normal real-time operation, then it would be nice to get that feature when running in safe mode.

Today we have to remember to build the application prior to deploying it. Instead of just throwing an error if we choose to deploy without having built first, either automatically run a build if necessary, or offer to do so.

 

Very often it would also be nice to just be able to select "Run as startup" and have all three stages done automatically (build, deploy, run as startup).

On all of the RT platforms, if the CPU maxes out at 100%, the RTOS goes through a sort of "load shedding" and begins suspending non-critical threads to lower the CPU overhead and hopefully maintain determinism. One of the first threads that gets suspended is the TCP thread, which essentially cuts off the RT target from the Ethernet and the outside world.

 

While I understand the concept of the load shedding, I believe that if you have the CPU pushed that hard, then your determinism is out the window anyway. Dropping the TCP thread only serves to disconnect the RT system from the development environment or the built application, with no real benefit. I have yet to see a single application that was actually able to accomplish anything substantial after the point that the TCP thread was suspended. In most cases, the high CPU usage happens during development of the application or when something is critically wrong with the setup of the deployed app. In those cases, it wouldn't matter how many threads you suspended; the CPU would still be maxed out.

 

It is akin to blowing out a match in the midst of a forest fire.

 

I would propose that the RTOS be reconfigured to treat the TCP thread as one of the critical threads and keep it from being suspended during high CPU excursions, OR give the developer the option to turn this feature on or off through the device properties.

In some critical installations, it is absolutely unacceptable for the cRIO to be able to "go to sleep" (as it can do now) and be unreachable via its Ethernet connection, especially when the software runs out of real time. Currently, when a cRIO becomes unresponsive, it requires manual intervention to kill power, push buttons and/or set DIP switches, etc. to bring it back to life. Our company is dealing with cRIOs that are located in places that are very difficult, expensive and time-consuming to reach.

 

I strongly recommend that NI add Ethernet-accessible, hardware-based reset/restart functions to the cRIO. These functions should be completely separate from the current cRIO software and hardware and should allow remote control of the cRIO power supply and setting of the DIP switches and reset buttons mounted on the cRIO front panel. NI-MAX should be enhanced to allow a user to remotely control cRIO reboots via these functions.