Based on the discussion in this thread, I would like to propose the following:
My suggestion is to support execution of Python Nodes on LabVIEW Real-Time targets. This is currently not supported in LabVIEW 2018 Current Gen, even though that version can execute a Python Node on the desktop via the new "Connectivity -> Python" support.
This should be possible on targets running NI Linux Real-Time as the operating system, since Python runs natively on Linux.
I understand that the execution of Python scripts is not deterministic, which rules out using them in time-critical code. That I can follow, but it should be possible to use the Python Node in lower-priority threads or non-real-time code, for example for communication with a REST API, downloading/uploading files, connectivity to online services and so on.
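To illustrate the kind of lower-priority work meant here, below is a minimal sketch of a Python 3 module that a Python Node could call from a normal-priority loop; the endpoint URL, function name and payload shape are made up for this example.

```python
# rest_client.py -- sketch of a module a LabVIEW Python Node could call.
# The endpoint URL, function name and payload are placeholders.
import json
import urllib.request

def post_measurement(endpoint, value):
    """Send one measurement to a REST API and return the HTTP status code."""
    payload = json.dumps({"value": value}).encode("utf-8")
    request = urllib.request.Request(
        endpoint,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request, timeout=5) as response:
        return response.status
```

The timeout bounds how long the call can block, and nothing here would need to touch the time-critical loop.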
Therefore I propose to simplify this and open up the support for Python 3.
Some additional features are essential for the success:
The selection of the Python runtime should be fixed to Python 3, since Python 2 reaches end of life on 2020-01-01 and will receive no further support or active development.
It would gain the best momentum if it were possible to deploy Python from the Device Software installation as part of the build specification of the real-time application.
Python developers can specify their external package requirements by defining a "requirements.txt" or "Pipfile.lock", depending on the virtualization method and choice of development style. It would help if the project manager or build specification could read/parse this file as the starting point of the application, to prevent manual labour and possible errors.
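As a rough sketch of what such a build step could do, parsing a requirements.txt takes only a few lines; the function name is made up, and real requirements files have extras (editable installs, -r includes, markers, hashes) that are ignored here for brevity.

```python
# Sketch: collect the package specifiers a build specification would install.
# Skips blank lines, comments and pip options such as -r or --hash.
from pathlib import Path

def read_requirements(path="requirements.txt"):
    """Return the plain package specifiers listed in a requirements file."""
    specifiers = []
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or line.startswith("-"):
            continue
        specifiers.append(line)
    return specifiers

# e.g. ["requests==2.22.0", "numpy>=1.16"] -> packages for the deployment step
```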
Backwards Compatibility:
Support for this should be available from LabVIEW 2018 and forward, since these versions support the Python Node.
Currently, non-default map constants (like the one shown below) are detected as fatal insanities when deploying to Real-Time targets and crash the LabVIEW development environment.
There is a workaround, which is to build the map from other data-structure constants. The map constant above was built from the code below. However, it takes longer to execute and requires more code. That is okay in some cases, but it would be better if there were an option to simply use the map constant above.
NI introduced malleable VIs in LabVIEW 2017. They are a great feature for advanced LabVIEW application development.
However, the "Stall Data Flow.vim" function has been missing from the Timing palette on Real-Time targets since 2017.
It is not that you cannot use malleable VIs on the RT side: I have copied the same VI from a desktop VI to an RT VI, and it works fine without any error or broken arrow.
NI should include these types of VIs in the RT package by default.
RT has the Type Specialization Structure too, which makes me wonder why this VI is missing.
If you have any reason(s) to share on why it is not included on the RT side, please share your thoughts.
Could we have SSL on by default in the standard software sets for RT targets? It is not like there is much security in the default setup anyway, so having SSL active, as well as the web interface with default passwords, is just more practical (if that is the rationale for not having it on...(?)).
Background: Whenever I have loaded a default software package onto an RT target using NI MAX, I run into this little snag: I try to log onto it using SFTP or PuTTY, only to rediscover that this is disabled. So then I have to go to the web interface or NI MAX to enable it. Most of the time we use RAD to replicate PACs anyway, but this is still another little thing we have to remember every time when dealing with the very first setup.
64-bit has been the dominant architecture for a decade; any computer with more than ~2.5 GB of RAM must use it, after all. It is inevitable that 32-bit machines will cease to be made - maybe not tomorrow, but let's be realistic. Let's get ahead of the times and convert modules to 64-bit support. Please!
I noticed there seems to be no way to guarantee the state of an output module controlled by the Scan Engine in case the RT application (or the host application, depending on who is controlling the chassis) crashes. With FPGA, one can program some kind of watchdog that sets the output values of a module back in case the RT executable fails. With Scan Mode there just seems to be no such possibility.
This is why I think adding a FailSafe value for a Scan I/O node could be a great idea. In case the RT application gets aborted or stops without cleanup, the output values would no longer be random but would be set back to their FailSafe values. I imagine it could look like this:
The real-time controllers have a "Time server" IP input in their setup. That is great, but it is not great that the time server has to be a piece of NI software (Logos). If it were possible to specify an NTP server, for example (and/or other standard protocols), this feature would be much more useful.
Most of the time we need the PACs to be synced with a third-party system.
I use "Set System Image" to set an image on freshly manufactured units. Every now and then, the imaging process fails for one reason or another (possibly a hardware/networking failure). I know that the image process in my case usually takes about 4 minutes. Unfortunately when the imaging process fails, there's no way for the application to gracefully stop "Set System Image". The only option is to wait a really long time or use the task manager to force-quit the application. The abort button on the VI doesn't work
So, I'd like one of two things. Either, an input so I can adjust the timeout based on my application or some way to let "Set System Image" know that the application would like to stop if possible (maybe nothing had been written yet). For example, it could take in a DVR and the application could feed it a "true".
Many measurement and process control applications run at relatively slow rates (<100 Hz). Using the Scan Engine on the CompactRIO for data acquisition is ideal for these applications because you don't need to program the FPGA, and all the measurement and control logic can be implemented on the Real-Time controller.
In many cases you want to process your data before you analyze it. Currently you can only get the raw measurement data from the AI modules, so you need to add the data-processing code to your existing LabVIEW program. It would be helpful if the Scan Engine could offload some of the data processing (e.g. a lowpass filter or sample averaging) to the FPGA and provide the user with already processed data. For example, this functionality could be added to the module configuration page:
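To make concrete what kind of processing is meant by "sample averaging", here is a minimal host-side sketch of a boxcar (moving) average applied per channel; the window length is an arbitrary example, and this only illustrates the math, not the proposed FPGA implementation.

```python
# Illustration of the preprocessing the Scan Engine could offload:
# a boxcar (moving) average over the last N raw samples of one channel.
from collections import deque

class BoxcarAverage:
    def __init__(self, window=10):            # window length is an example value
        self._samples = deque(maxlen=window)

    def update(self, raw_sample):
        """Add one raw AI sample and return the current windowed average."""
        self._samples.append(raw_sample)
        return sum(self._samples) / len(self._samples)

# filt = BoxcarAverage(window=10)
# smoothed = [filt.update(x) for x in raw_channel_samples]
```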
Currently, on the cRIO-903x series controllers, the chassis temperature and the USER1 push button are only available on the FPGA (http://digital.ni.com/public.nsf/allkb/75E039282A8B691886257E450049C7B6) and require the chassis to operate in Hybrid Mode in order to use both the Scan Interface and those chassis I/O options.
The cRIO-902x series allowed both of these I/O options to be accessed directly in the real-time OS. This small feature would add convenience on the cRIO-903x series controllers.
During LabVIEW development, a PXI real-time system was connected over LAN right next to my laptop, and the build/deployment process was no problem.
Now the RT system has been shipped far away to the customer site, and I no longer have access to that network.
So I need another way to distribute the application, and
here is my IDEA #1: (below is another IDEA #2)
FACT: Most customers have no knowledge of how to find the real-time system and update the application on it.
So the new build item would create an "Offline Installer.exe" that runs on any customer PC and performs the update job fully automatically.
Features:
- create a detailed installation report which can be stored and sent back to the developer.
- In case the Real-Time System isn't connected to any LAN, the customer could use the "Offline Installer.exe" to prepare a USB stick, which then has to be plugged into the Real-Time System. A reboot would then launch the fully automated application update process (the "RT LEDs" would give feedback when the process is finished).
I also tried another way, but without success:
Step 1: Add a ZIP file to the Project Explorer: "My Zip File"
Step 2: Configure "My Zip File" => add "My Real-Time Application"
Step 3: Build "My Zip File"... but the result was bad:
But even if the build were successful, a lot more steps would still be needed:
- sending the ZIP to the customer
- unpacking the ZIP
- finding the Real-Time System on the customer's network
- an FTP tool is needed to update the system (a scripted transfer of this kind is sketched after this list)
- there is no way to stop the running application (from a PC without NI software)
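For reference, the file-transfer step could at least be scripted; here is a minimal sketch using Python's ftplib, where the host address, credentials, file names and remote path are placeholders, and it assumes the target's (legacy) FTP server is enabled at all.

```python
# Sketch: push a built startup executable to an RT target over FTP.
# Host, credentials, file names and the remote path are placeholders,
# and the target's (legacy) FTP server is assumed to be enabled.
from ftplib import FTP

def upload_startup(host, user, password, local_exe, remote_dir):
    with FTP(host) as ftp:
        ftp.login(user, password)
        ftp.cwd(remote_dir)                      # e.g. the startup directory
        with open(local_exe, "rb") as f:
            ftp.storbinary("STOR startup.rtexe", f)

# upload_startup("192.168.1.10", "admin", "", "build/startup.rtexe", "/ni-rt/startup")
```

Even then, this covers only the file transfer; stopping and restarting the application still has to be handled separately, which is exactly the gap the proposed "Offline Installer.exe" would close.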
Conclusion:
The update with "ZIP" is far away from a comfortable solution.
IDEA #2
If the Real-Time System has a running web server (the one that uses Silverlight),
then the application could be updated using a web browser.
So first we would have to update the web server on the Real-Time System (which is as difficult as updating the application).
Dual network interfaces are often part of the requirements for redundancy; however, in such cases it is also very common to specify that the behaviour of both of them should be identical. You see it in subsea control systems that have an "A" and a "B" channel, and you see it topside where the device might need to be on two networks, etc.
Unfortunately this is not the case for any of the dual-port RT targets from NI. The secondary port is really a second-class NIC: it has limited configuration options, it does not support DHCP, you cannot specify a gateway for it, and the code to make programmatic changes to its configuration is not easily available.
Please make the two ports fully interchangeable. Port 1 or 2? It should not really matter which one you use.
When running a VI in development mode while targeting an RT device, you must "Save All" prior to deployment. This is annoying, especially when using SCC. I'm sure that SourceOnly will minimize this effect in LV2010, but the concept still remains: I don't want to be forced to Save All when I don't want my edits (or automatic linking edits) to persist.
The cRIO chassis can only be configured to use one time server for time synchronization. If that time server fails, the only recourse is to detect that in some way, edit the ni-rt.ini file, and then reboot the controller. This is rather clumsy, and it would be helpful to have alternate IP addresses.
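To show how clumsy the current recourse is, here is a rough sketch of the kind of edit-and-reboot step involved; the ini path, section name and key name below are placeholders and would need to be checked against the actual file on the controller.

```python
# Sketch of the manual fallback: rewrite the time-server entry in ni-rt.ini,
# then reboot the controller. Path, section and key names are placeholders.
import configparser

def switch_time_server(ini_path, fallback_ip,
                       section="TIME SYNC", key="TimeServerAddress"):
    config = configparser.ConfigParser()
    config.optionxform = str      # keep the original capitalisation of keys
    config.read(ini_path)
    config[section][key] = fallback_ip
    with open(ini_path, "w") as f:
        config.write(f)
    # ...followed by a controller reboot for the change to take effect.

# switch_time_server("/etc/natinst/share/ni-rt.ini", "10.0.0.2")
```

Having the controller accept a list of alternate time-server addresses would make this whole step unnecessary.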
Sometimes, you really just want an RT front panel. Of course, there is no real front panel, but when you run in interactive mode, LabVIEW automatically sets up network communication to pull data from the RT target and push data to it. Wouldn't it be great if you could just convert that whole front panel into a basic host VI? Right now, you have to manually convert all controls and indicators to shared variables. Granted, this does force you to be very careful about limiting the number of network-shared items that you have, but sometimes you really just want to run the VI in interactive mode... but deployed.
Or, better than that, convert everything into a web service and automatically build a Silverlight UI (using Web UI Builder) which is hosted on the cRIO. Anything that provides a quick and easy way to convert from the RT debugging UI to a basic host VI would be great.