LabVIEW Real-Time Idea Exchange


Hello fellow developers and engineers.

 

Based on the discussion in this thread, I would like to propose the following:

 

My suggestion is to add support for executing Python nodes on LabVIEW Real-Time targets. This is currently not supported by LabVIEW 2018 Current Gen, even though that version can execute a Python node on desktop targets through the new "Connectivity -> Python" palette.

This should be possible on targets that run NI Linux Real-Time as the operating system, since Linux treats Python as a native application language.

 

I understand that the execution of Python scripts is not deterministic, which rules out its use in time-critical code. That I can accept, but it should be possible to use the Python node in lower-priority threads or non-real-time code for communicating with a REST API, downloading/uploading files, connecting to online services, and so on.
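For illustration, this is the kind of module such a node could call from a lower-priority loop. It is a minimal sketch using only the Python standard library; the module name, post_measurement, and the payload shape are invented for the example:

    # rest_client.py -- a module a Python node could call from
    # non-time-critical code; standard library only.
    import json
    import urllib.request

    def post_measurement(url, payload):
        """POST a measurement record as JSON and return the HTTP status."""
        req = urllib.request.Request(
            url,
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        with urllib.request.urlopen(req, timeout=5.0) as resp:
            return resp.status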

It has been shown here on the forums, by Sev_k in the thread Getting the Most Out of your NI Linux Real-Time Target, that Python can be compiled and executed on real-time targets.

 

Therefore I propose simplifying this setup and opening official support for Python 3.

 

Some additional features are essential for this to succeed:

  • The Python runtime should be fixed to Python 3, since Python 2 will receive no further support or active development from 2020-01-01 onward.
  • The feature would gain the most momentum if Python could be deployed from the Device Software installation as part of the real-time application's build specification.
  • Python developers specify their external dependencies by defining a "requirements.txt" or "Pipfile.lock", depending on the virtualization method and choice of development style. The project manager or build specification should read/parse this file as the starting point of the application, to prevent manual labour and possible errors (see the sketch below).
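If the build specification could consume that file, the staging step might be as small as this Python sketch (install_requirements is a hypothetical helper; the real integration would live inside the build specification):

    # Hypothetical pre-build step: read requirements.txt and
    # pip-install the listed packages before deployment.
    import subprocess
    import sys

    def install_requirements(path="requirements.txt"):
        """Parse a pip requirements file and install each entry."""
        with open(path) as f:
            reqs = [line.strip() for line in f
                    if line.strip() and not line.startswith("#")]
        if reqs:
            subprocess.check_call(
                [sys.executable, "-m", "pip", "install", *reqs])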

 

Backwards Compatibility:

Support for this should start with LabVIEW 2018 and later, since these versions already support Python nodes.

 

Best Regards

Jesper

When working in the Windows development environment, the application builder can embed a version number in the built executable, and LabVIEW can query this version number through a property node. I would like to see this feature carried over to RT systems as well. It would be very helpful for determining which particular build of the startup.rtexe file is running on the target.

 

64-bit has been the dominant architecture for a decade; after all, any computer with more than ~2.5 GB of RAM must use it. It is inevitable that 32-bit machines will cease to be made - maybe not tomorrow, but let's be realistic. Let's get ahead of the times and update the modules to support 64-bit. Please!

 

 

Sometimes, you really just want an RT front panel. Of course, there is no real front panel, but when you run in interactive mode, LabVIEW automatically sets up network communication to pull data from and push data to the RT target. Wouldn't it be great if you could convert that whole front panel into a basic host VI? Right now, you have to manually convert all controls and indicators to shared variables. Granted, this does force you to be very careful about limiting the number of network-shared items you have, but sometimes you really just want to run the VI in interactive mode...but deployed.


 

Or, better than that, convert everything into a web service and automatically build a Silverlight UI (using Web UI Builder) that is hosted on the cRIO. Anything that provides a quick and easy way to convert the RT debugging UI into a basic host VI would be great.

Changing the IP setup currently requires a restart. It is now possible to change the configuration of all network interfaces programmatically using the System Configuration API, but such changes cannot be put into effect on the fly, so you actually have to shut down your application to apply them - something that might not be allowable.

 

So let's say that I have a controller with dual network interfaces and I want to reconfigure one of them (e.g. turn on DHCP), but I do NOT want to stop the RT application, because it controls processes via the other interface, and those processes must be kept alive. If it were a Windows PC, I could have done it. On an NI controller, which ironically is typically used for more critical tasks, I cannot.

Streaming data is an important part of many applications. If the amount of data is very large, a RAID can improve system performance significantly.

Sadly, RAID is currently not supported for RT, so real-time applications cannot use this valuable tool for enhancing streaming performance.

On Hypervisor systems, RAIDs can be used, but the data has to be passed from the RT side to Windows first. So streaming does not work like/with DMA; it consumes CPU load instead, which is in fact a waste of resources.

 

Norbert

On all of the RT platforms, if the CPU maxes out at 100%, the RTOS goes through a sort of "load shedding" and begins suspending non-critical threads to lower the CPU overhead and hopefully maintain determinism. One of the first threads that gets suspended is the TCP thread, which essentially cuts the RT target off from Ethernet and the outside world.

 

While I understand the concept of load shedding, I believe that if you have pushed the CPU that hard, your determinism is out the window anyway. Dropping the TCP thread only serves to disconnect the RT system from the development environment or the built application, with no real benefit. I have yet to see a single application that was actually able to accomplish anything substantial after the point where the TCP thread was suspended. In most cases, the high CPU usage happens during development of the application or when something is critically wrong with the setup of the deployed app. In those cases, it wouldn't matter how many threads were suspended; the CPU would still be maxed out.

 

It is akin to blowing out a match in the midst of a forest fire.

 

I would propose that the RTOS be reconfigured to treat the TCP thread as one of the critical threads and keep it from being suspended during high-CPU excursions, OR give the developer the option to turn this feature on or off through the device properties.

If you have an RT target set up with a startup application (ready to be deployed in the field), running a VI on it from the IDE should not change anything permanently. 

 

Today (LabVIEW RT 2013), doing this will not just temporarily stop the VIs running on the target (from the executable) - as you get warned about - but will also cause the RTTarget.LaunchAppAtBoot=True line to be removed from ni-rt.ini. So the startup application will not be launched the next time the device starts up, rendering your device useless in the field. Why?

 

- We had an incident where we narrowly escaped such a scenario. The RT target was embedded in a canister that was about to be sealed, but an unforeseen issue made it necessary to run a special test on it with a VI from the IDE. The startup application in this case provides the only way into the system once the canister is sealed (no Ethernet access, just RS485), so having it no longer start would be a catastrophe. No one expected that running the VI would actually change anything permanently. We tested it of course, and saw that it stopped the startup application (and so we loaded an image of the correct setup afterwards to be sure all changes were removed), but it would be much better and more intuitive if no such permanent and fundamental changes occurred (if it is actually possible to implement it in such a way).
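Until this behavior changes, a read-only check along these lines could be run after any interactive session. It is a minimal Python sketch; the ni-rt.ini location differs between target types, so the path here is an assumption:

    # Verify that the startup application will still launch at boot.
    NI_RT_INI = "/etc/natinst/share/ni-rt.ini"  # path is target-specific

    def launches_at_boot(path=NI_RT_INI):
        """Return True if ni-rt.ini still contains the boot-launch flag."""
        with open(path) as f:
            text = f.read().replace(" ", "")
        return "RTTarget.LaunchAppAtBoot=True" in text

    if __name__ == "__main__":
        print("startup app enabled:", launches_at_boot())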

 

 

Many of the features that are not supported on NI Linux Real-Time are things that one would expect not to work (due to the underlying technologies), but why no menu and cursor functions?

 

The lack of menu support in particular means that most GUIs have to be customized to run on the target, even though Xfce should be perfectly capable of handling them in their original design. I was really hoping we could port some of our GUIs straight to the embedded GUI and replace our industrial computers with cRIOs immediately (the differences in appearance are easier to handle; after all, we already use system controls and take cross-platform variations into consideration).

 

Now we'll have to (decide if we want to) spend time on customizing the GUIs.

It has a number of improvements over the plain Reliance FS. One of them is that there is no (or less) performance degradation when there are several hundred files in one folder. One thing to note is that the system suffers even if you are not reading files from the folder in question.

I had posted this idea to the LabVIEW idea exchange before, but it makes more sense to have it here.

http://forums.ni.com/t5/LabVIEW-Idea-Exchange/Allow-RT-FIFOs-to-be-of-type-LV-Class/idi-p/2426726

 

It would be very useful if RT FIFOs could be of type LV class, as long as the class's private members are of static types (performing the same check that is done for clusters when you try to use them as the type for an RT FIFO).

Currently, many controllers from NI have only one way to deal with a failure to contact a DHCP server: they try to use a link-local address (APIPA). If this too fails (e.g. due to Proxy ARP), the controller will not even start the RT application (which might have other tasks to do regardless of the lack of networking, and/or might run code that would remedy the IP problem), but will go into Safe Mode. This in itself is bad enough and should be changed to allow the application to run; however, the main request/idea I have would fix this indirectly:

 

- There should be an option to not fall back to APIPA, but to use the last known configuration instead (as long as the lease time is still valid). If the DHCP server is down, the controller will use the IP configuration it received the last time the server was up. A sketch of this policy is shown below.
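This is not an existing NI API, just the proposed fallback decision expressed as a minimal Python sketch (the lease-cache file and field names are invented):

    import json
    import time

    LEASE_FILE = "/var/lib/last_lease.json"  # hypothetical cache location

    def choose_ip_config(dhcp_result):
        """Return the IP configuration to apply after a DHCP attempt.

        dhcp_result is None when the DHCP server could not be reached;
        otherwise it is a dict with the offered settings, including a
        "lease_seconds" field.
        """
        now = time.time()
        if dhcp_result is not None:
            # Success: remember the lease for the next failure.
            with open(LEASE_FILE, "w") as f:
                json.dump({**dhcp_result, "obtained": now}, f)
            return dhcp_result
        try:
            with open(LEASE_FILE) as f:
                lease = json.load(f)
            if now - lease["obtained"] < lease["lease_seconds"]:
                return lease  # last known good configuration
        except (OSError, KeyError, ValueError):
            pass
        return None  # caller falls back to APIPA (or, today, Safe Mode)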

 

 

As cRIOs are deployed in ever-growing applications, it would sure be nice if there were an option to use SFTP (and disable FTP altogether) on the controller. Ideally it would be supported at the OS level, i.e. the existing cRIO FTP server would be upgraded or extended to include SFTP as well.

 

If you think this is already supported, try searching for "sftp" or "crio sftp" and you'll see only one third-party toolkit; I confirmed with that company (Labwerx.net) that it (labSSH) does not support cRIO/FPGA/RT targets and that there are no concrete plans to add support for other targets.

 

NI: I call on you to either create the FTP toolkit I would need to write my own SFTP server, or better yet, update the cRIO FTP server to include the "s". . . It is only one letter, how hard can that be!? 😉

 

If this is already possible through some (obscure?) way, please update your site search engine to recognize "sftp" and/or "crio sftp", and/or link to a KB or whitepaper on the topic, as I did not find any.

(Note: cRIO safety and security IS covered in pretty good detail in the article series starting with Overview of Best Practices for Security on RIO Systems and the three linked articles.)
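For context, this is the client-side pull that an SFTP-capable controller would enable, sketched with the third-party paramiko library (host, credentials, and paths are placeholders):

    # Download one data log file over SFTP.
    import paramiko

    def pull_log(host, user, password, remote_path, local_path):
        with paramiko.SSHClient() as client:
            # Accepting unknown host keys is for illustration only.
            client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
            client.connect(host, username=user, password=password)
            with client.open_sftp() as sftp:
                sftp.get(remote_path, local_path)

    # usage: pull_log("192.168.0.10", "admin", "secret",
    #                 "/c/logs/data.tdms", "data.tdms")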

At the moment, there is a small development problem with dynamically launched VIs (via the Asynchronous Call method). They are not recognised as dependencies during the deployment process, so the code can't be run effectively in development mode via the run arrow (those VIs will return File Not Found errors).


A simple fix for this would be the ability to mark VIs as "always deploy" in the project window.

I am using the AF-1501 Frame Grabber Module from MoviMED. Right now I am able to capture and store images in BMP and TIFF formats at the RT level, but I would like to be able to create AVI videos at the RT level.

The idea is to be able to record videos and then store them (either in the internal memory of the RIO or on an SD card using the NI-9802).

 

The problem is that the RT processor does not run Windows and has no codecs, so the AVI-generation VIs from the Vision palette do not work. Even if it were tied to a particular codec, it would be great to be able to create AVIs on the RT target.
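To illustrate that a single fixed codec would be enough, here is a minimal sketch assuming a Python/OpenCV build were available on the target (the path is hypothetical); it writes Motion-JPEG frames into an AVI container:

    # Write an AVI on the target using one fixed codec (MJPG).
    import cv2  # assumes an OpenCV build exists for the RT target

    def open_writer(path, fps, width, height):
        """Open an AVI writer that encodes frames as Motion-JPEG."""
        fourcc = cv2.VideoWriter_fourcc(*"MJPG")
        return cv2.VideoWriter(path, fourcc, fps, (width, height))

    # usage:
    #   writer = open_writer("/c/logs/run.avi", 30.0, 640, 480)
    #   writer.write(frame)  # frame: height x width x 3, uint8
    #   writer.release()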

 

 

Similar to NI's Visual Studio 2008 support, it would be great if NI installed the Visual Studio 2010 runtime on the Real-Time target and supported it officially with tools such as the DLL checker.

 

I know that I can currently link the 2010 runtime statically into my built libraries if I want to run them on RT... but it would be nice not to have to. This would also enable products like the LabVIEW Simulation Interface Toolkit and NI VeriStand to officially claim 2010 support in their readmes, as by default the makefiles used by these NI products do not statically link in the runtime, and I have to hack the makefile to enable support.

The development environment for RT is really a great, integrated system. It's so convenient to develop on a regular PC and deploy the VIs for testing. The real-time UI feedback during development is a boon to productivity. It's difficult to imagine a useful RT workflow that doesn't include a live panel in development mode - it's an idea that is both powerful and intuitive.

 

What I would really like to see next is similar RT panel interactivity in a deployed run-time system.

 

At the moment, I am using a remote panels implementation to stream an interactive panel view back to the PC. (I assume that this is similar to how things are managed by the development environment itself.) The problem with this approach is that the Remote Panels API seems a bit flaky at times; I don't always get a reliable connection. What's worse is the programming overhead associated with setting up the remote panels connection, and the sometimes fickle behavior of Windows in allowing a remote PC to connect.

 

Since the RT development environment already has the capability of seamlessly displaying an interactive UI on the Windows client, would it be that difficult to add UI panel feedback into the run-time executable environment?

 

I realize that, at first glance, this may seem to defeat the whole purpose of an RT system. However, I have built a few different Vision RT systems over the last few years, and the same challenge always seems to come up in every project. When I need to implement a calibration, focus and/or alignment mode on the RT target, I find myself performing all kinds of contortions to retrofit a simple interactive RT utility for use in the deployable application.

 

If I could simply implement an RT VI that presents an interactive UI on the Windows client (just like when you are in the LV Development mode), perhaps I could eliminate the complexity of my current strategies: Either 1) developing an elaborate case structure and messaging system on both sides of the network connection to pass images and parameters across the network in real-time, or 2) implementing a (hit-or-miss) remote panel linkage to a self-contained RT configuration VI over a remote panel connection that isn't always as robust as I wish it would be.

 

If the heavy lifting has already been done for the development system, surely it wouldn't be too much of a stretch to enable similar functionality in the run-time environment...?

 

Anyway, that's my wish for today.

Every fixed-point number allocates 64 bits (8 bytes), or 72 bits (9 bytes) if you include an overflow status. I am working on a cRIO project in which I need to acquire and save a large amount of data, to be processed later when the RT CPU has free time (i.e. is neither acquiring nor transmitting data). Since the data is acquired on 12 channels at 51.2 kS/s with 24 bits per point, the fixed-point type allocates 64 bits but uses only 24, wasting 5 bytes (62.5%) for every single data point acquired.

 

As a workaround, I developed two VIs: one to compact the data by removing the 5 unused bytes of every fixed-point number, keeping the 3 bytes with the data; the second VI does the reverse job. With this I reduced the information to 37.5% of its original size, saving space and time when writing it to non-volatile memory. Maybe there's a way to do this directly, but I couldn't figure it out.
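The same pack/unpack scheme, written as a minimal Python sketch for illustration (little-endian byte order and two's-complement signedness are assumptions):

    def pack_24bit(samples):
        """Pack signed 24-bit integer samples into 3 bytes each,
        dropping the 5 bytes a 64-bit fixed-point word would waste."""
        out = bytearray()
        for s in samples:
            out += (s & 0xFFFFFF).to_bytes(3, "little")
        return bytes(out)

    def unpack_24bit(data):
        """The reverse job: expand 3-byte groups back to signed ints."""
        samples = []
        for i in range(0, len(data), 3):
            v = int.from_bytes(data[i:i + 3], "little")
            if v & 0x800000:  # sign-extend bit 23
                v -= 1 << 24
            samples.append(v)
        return samples

    assert unpack_24bit(pack_24bit([-1, 0, 123456])) == [-1, 0, 123456]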

 

My idea is to add, in some way, the possibility to use the minimum number of bytes needed by the data, at least for storage purposes. It would be nice if NI added an option to have fixed-point numbers in two memory-allocation modes:

 

  • Standard: the current layout with 64 or 72 bits (probably faster)
  • Packed: the number of bytes allocated is the minimum needed to hold all the information; for 16-bit data, each point would use only 2 bytes.

At any one time, we have several complex LabVIEW RT projects that run behavior experiments for us, taking multiple channels of analog and digital data while providing a complex series of audio and visual stimuli. For each project, there is an RT part that "runs" the experiment and a tailored host part that runs the UI, handles an Excel workbook that specifies the various trials being performed, displays the data, and saves the sampled results to disk.

 

Each of these projects is developed using a LabVIEW Project, and each LV Project has its own unique name. When we build UI and RT executables, we would like an "automatic" way to associate them with each other. A "natural" way would be to use the "shared" information of their joint Project. There are ways to get the Project name in the host UI code, both in development mode and from an executable.

 

I would like to propose that NI provide a Property or something similar for the RT side so that the RT code, at Run Time, could determine the Project from which it came.  With both sets of code knowing their shared "ancestor", they could use this information to ensure they are talking to their proper counterpart.  They could also use it to "mark" data structures (such as Files or Folders) that belong to them.  For example, there could be multiple configuration files, one for each Project, but they could be uniquely identified as "<Project> Config.xml" (where "<Project>" is the name of the Project containing the VI, allowing the appropriate Configuration file to unambiguously be chosen at Run Time).

 

Bob Schor

We use cRIOs in industrial deployments all the time.

While a lot can be done to work around the lack of security offered by the built-in FTP server, it would be much preferred if NI added FTPS (and possibly SFTP) support, or at the very least modified the existing FTP server solution to expose/support the following functionality:

 

* define users (currently, only the "password" field is used by the cRIO FTP server; the user name is ignored, or limited to the "admin" name only)

* define folder access rules on a per-user level

 

Ideally, the current FTP server would be expanded further to include:

* secured file transfers via built-in FTPS support (or by adding FTPS VIs to the existing FTP toolkit)

* (secure binary transfers via a built-in SFTP server)

* opening up and/or documenting how the current web configuration tool does its HTTPS file transfer interface

 

Currently, there is no easy way to allow our end customers to securely pull data log files remotely, and no way to prevent an end customer from accidentally accessing or modifying sensitive parts of the application.

It would be possible to build a custom HTTPS file server on the cRIO, but I have several issues with this approach: it adds development and maintenance of another module to our code; it would most likely require another client-side application for the customer to use; and most importantly, a lot of our customers have data collection servers set up to pull data over standard FTPS directly into their historians and database systems.
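For reference, the kind of standard FTPS pull those historians perform can be expressed with Python's built-in ftplib; the host, credentials, and file names here are placeholders:

    # Pull one log file over explicit FTPS (FTP over TLS).
    from ftplib import FTP_TLS

    def pull_log(host, user, password, remote_name, local_path):
        with FTP_TLS(host) as ftps:
            ftps.login(user, password)
            ftps.prot_p()  # protect the data channel as well
            with open(local_path, "wb") as f:
                ftps.retrbinary("RETR " + remote_name, f.write)

    # usage: pull_log("192.168.0.10", "logs", "secret",
    #                 "data_2015-06-01.csv", "data.csv")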