LabVIEW Real-Time Idea Exchange


Scenario

Suppose we want to start coding our Real-Time application, but the hardware hasn't arrived yet.

 

We can't Discover the chassis and modules, so we need to add the modules manually.

 

 

Current editor

cseriesedit_old.png

 

To add N modules, we need to launch this dialog N times:

 

  1. Right-click on Chassis in the Project Explorer
  2. Hover over "New"
  3. Click "C Series Modules…"
  4. Click "New target or device"
  5. Click "C Series Module"
  6. Click "OK"
  7. Wait for LabVIEW to fetch the module list (~1 second)
  8. Select Type (2 clicks)
  9. Select Location (2 clicks)
  10. Click "OK"
  11. Return to step 1 to add another module

 

How tedious!

 

 

Proposed Editor

Wouldn't it be nice if we could set up all the modules in 1 dialog?

 

cseriesedit_proposed.png

 

Features

  • Table auto-fills itself with modules already in the project
  • Number of rows is determined by the chassis model. No need to select Location
  • Ability to leave rows/slots empty
  • Editable Name field (with default name) appears upon selecting Type
  • Description appears upon selecting Type

 

 

Feel the difference

Adding N modules (using default names) requires...

  • Current dialog: 10N clicks, N hovers, waiting N seconds
  • Proposed dialog*: (6+2N) clicks, 1 hover, waiting 1 second

 

So, adding 8 modules requires...

  • Current dialog: 80 clicks, 8 hovers, waiting 8 seconds
  • Proposed dialog*: 22 clicks, 1 hover, waiting 1 second

*Assuming that steps 1-7 and 10 only need to be performed once (the short sketch below just re-checks this arithmetic)
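Purely as a sanity check of that arithmetic, here is a throwaway sketch; the helper functions are mine and have nothing to do with any LabVIEW API.

```python
# Re-check of the click counts claimed above (counting helpers only).

def clicks_current(n):
    """Current dialog: steps 1-10 repeated for every module, 10 clicks each."""
    return 10 * n

def clicks_proposed(n):
    """Proposed dialog: steps 1-7 and 10 once (6 clicks), plus 2 clicks per Type selection."""
    return 6 + 2 * n

for n in (1, 4, 8):
    print(n, clicks_current(n), clicks_proposed(n))
# 8 modules -> 80 clicks today vs. 22 clicks in the proposed editor
```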

 

Hello, fellow developers and engineers.

 

Based on the discussion in this thread I would like to propose the following:

 

My suggestion is to add support for executing Python nodes on LabVIEW Real-Time targets. This is currently not supported by LabVIEW 2018 Current Gen, even though this version supports executing a Python node through the new "Connectivity -> Python" support.

This should be possible on the targets that use NI Linux Real-Time as the operating system, since Linux treats Python as a native application language.

 

I understand that the execution of Python scripts is not deterministic, which rules it out for time-critical code. That I can follow, but it should be possible to use the Python node in lower-priority threads or non-real-time code, for communication with a REST API, downloading/uploading files, connectivity to online services, and so on.
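To make the use case concrete, this is the kind of script one would hand to the Python node from a low-priority loop. It is only an illustration: the endpoint URL and payload are placeholders, and nothing beyond the Python standard library is assumed.

```python
# Illustrative housekeeping task (REST call) of the sort that belongs in a
# low-priority, non-deterministic loop rather than in time-critical code.
import json
import urllib.request

def post_status(endpoint, payload):
    """Send a status dictionary to a (placeholder) REST endpoint and return the reply."""
    data = json.dumps(payload).encode("utf-8")
    req = urllib.request.Request(endpoint, data=data,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.loads(resp.read().decode("utf-8"))

if __name__ == "__main__":
    # Placeholder URL; a real target would point at the plant or cloud service.
    print(post_status("http://example.com/api/status",
                      {"rig": "cRIO-01", "state": "running"}))
```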

It has been shown that Python can be compiled and executed on real-time targets, here on the forums by Sev_k in this thread: Getting the Most Out of your NI Linux Real-Time Target.

 

Therefore I propose simplifying this and opening up support for Python 3.

 

Some additional features are essential for this to succeed:

  • The selection of the Python runtime should be fixed to Python 3, since Python 2 receives no further support or active development from 2020-01-01 onward.
  • It would gain the most momentum if Python could be deployed from the Device Software installation as part of the build specification of the real-time application.
  • Python developers specify their external dependencies in a "requirements.txt" or "Pipfile.lock", depending on the virtualization method and choice of development style. It would help if the project manager or build specification could read/parse this file as the starting point for the application, to prevent manual labour and possible errors (a small sketch of what parsing it involves follows this list).
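As a minimal sketch of what "read/parse" could mean in practice (my own illustration, not an existing project-manager or build-specification feature), a requirements.txt is just a line-oriented list of package specifiers:

```python
# Minimal, illustrative requirements.txt reader. A real build step would hand
# these specifiers to pip on the target; this only shows how simple the file is.

def parse_requirements(path="requirements.txt"):
    """Return the package specifiers, ignoring blank lines and comments."""
    specs = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#"):
                specs.append(line)
    return specs

if __name__ == "__main__":
    print(parse_requirements())  # e.g. ['requests==2.22.0', 'numpy>=1.17']
```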

 

Backwards Compatibility:

Support for this should cover LabVIEW 2018 and later, since these versions support the Python node.

 

Best Regards

Jesper

Hi,

 

I noticed there seems to be no way to guarantee the state of an output module controlled by the Scan Engine in case the RT application (or the host application, depending on who is controlling the chassis) crashes. With FPGA, one can program some kind of watchdog that sets the output values of a module back in case the RT executable fails. With Scan Mode there just seems to be no possibility.

 

This is why I think adding a FailSafe value for a Scan I/O node could be a great idea. In case the RT application gets aborted or stops without cleanup, the output values would no longer be random but would be set back to their FailSafe values. I imagine it could look like this (a rough code sketch of the intended behaviour follows the image):

 

failsafe.png
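In plain code, the behaviour I am after is roughly the watchdog pattern below. This is only a description of the logic (Python, with a made-up FailSafeOutput class); the real mechanism would of course live inside the Scan Engine, not in application code.

```python
# Illustration of the requested FailSafe behaviour: if the application stops
# refreshing an output, the value falls back to a safe default on its own.
import threading

class FailSafeOutput:
    """Reverts to a failsafe value when no write arrives within `timeout` seconds."""
    def __init__(self, failsafe_value=0.0, timeout=1.0):
        self.failsafe_value = failsafe_value
        self.timeout = timeout
        self.value = failsafe_value
        self._timer = None

    def write(self, value):
        self.value = value
        if self._timer is not None:
            self._timer.cancel()
        self._timer = threading.Timer(self.timeout, self._trip)
        self._timer.start()

    def _trip(self):
        # The application stopped writing (abort/crash): force the safe state.
        self.value = self.failsafe_value

if __name__ == "__main__":
    ao0 = FailSafeOutput(failsafe_value=0.0, timeout=0.5)
    ao0.write(7.5)  # normal operation keeps refreshing the value
```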

 

What do you think about it?

It is almost impossible to find errors in reentrant subVIs without debugging capabilities.
 
 
 
 

The real-time controllers have a "Time server" IP input in their setup. That is great, but it is not great that the time server has to be a piece of NI software (Logos). If it were possible to specify an NTP server, for example (and/or other standard protocols), this feature would be much more useful.

 

Most of the time we need the PACs to be synced with a third-party system.
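For context, a plain (S)NTP query is tiny at the protocol level. The sketch below is my own illustration (pool.ntp.org is only an example server) of the kind of standard mechanism that would let a PAC sync against third-party infrastructure:

```python
# Minimal SNTP query, to show how lightweight the standard protocol is.
import socket
import struct
import time

NTP_EPOCH_OFFSET = 2208988800  # seconds between the NTP epoch (1900) and the Unix epoch (1970)

def sntp_time(server="pool.ntp.org", port=123, timeout=5.0):
    """Return the server's time as a Unix timestamp (whole seconds)."""
    # First byte 0x1B = LI 0, version 3, mode 3 (client); rest of the 48-byte packet is zero.
    packet = b"\x1b" + 47 * b"\0"
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(packet, (server, port))
        data, _ = sock.recvfrom(512)
    transmit_seconds = struct.unpack("!I", data[40:44])[0]  # integer part of the transmit timestamp
    return transmit_seconds - NTP_EPOCH_OFFSET

if __name__ == "__main__":
    print(time.ctime(sntp_time()))
```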

During LabVIEW development, a PXI Real-Time system was connected via LAN right next to my laptop, and the build/deployment process was no problem.

Now the RT system has been shipped far away to the customer site, and I no longer have access to that network.

 

So I need another way to distribute the application. Here is my IDEA #1 (another IDEA #2 follows below):

 

Offline RT-Distribution LabVIEW Idea Exchange.GIF

 

FACT: Most customers have no knowledge of how to find the Real-Time system and update the application on it.

 

So the new build item would create an "Offline Installer.exe" that runs on any customer PC and performs the whole update job automatically.

 

Features:

 - create a detailed installation report which can be stored and sent back to the developer.

 - If the Real-Time system isn't connected to any LAN, the customer could use the "Offline Installer.exe" to prepare a USB stick that is then plugged into the Real-Time system. A reboot then launches the fully automated application update process (the RT LEDs give feedback when the process is finished).

 

 


 

I also tried another way, but without success:

 

Step 1: Add a ZIP file to the Project Explorer: "My Zip File"

Offline RT-Distribution 1.GIF

 

Step 2: Configure "My Zip File" => add "My Real-Time Application"

Offline RT-Distribution 2 zip.GIF

 

Step 3: Build "My Zip File"... but the result was bad:

Offline RT-Distribution 2 Build My Zip File.GIF

 

But anyway, even if the build had been successful, a lot more steps would still be needed:

 - sending the ZIP to the customer

 - unpacking the ZIP

 - finding the Real-Time system on the customer's network

 - using an FTP tool to update the system (a rough sketch of that transfer follows this list)

 - and there is no way to stop the running application (from a PC without NI software)
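To make the FTP step concrete, this is roughly what that manual transfer amounts to today. Everything in the sketch (address, credentials, file name, remote path) is a placeholder that depends on the specific target and its configuration.

```python
# Rough sketch of the manual FTP update step; host, credentials and remote
# path below are examples only and depend on the target's configuration.
from ftplib import FTP

def push_built_file(host, user, password, local_file, remote_dir):
    """Upload one built file to the RT target over FTP."""
    with FTP(host) as ftp:
        ftp.login(user=user, passwd=password)
        ftp.cwd(remote_dir)
        with open(local_file, "rb") as f:
            ftp.storbinary(f"STOR {local_file}", f)

if __name__ == "__main__":
    # Example values only.
    push_built_file("192.168.1.10", "admin", "", "startup.rtexe", "/remote/startup")
```

And even with such a script, the last point still stands: there is no way to stop the running application from a customer PC that has no NI software installed.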

 

Conclusion:

The update via "ZIP" is far from being a comfortable solution.

 


 

IDEA #2

If the Real-Time system has a running web server (the one that uses Silverlight),

then the application could be updated using a web browser.

So, first we would have to update the web server on the Real-Time system (which is as difficult as updating the application).

 

(But who the f..k has Silverlight installed  )

 

 

When running a VI in development mode while targeting an RT device, you must "Save All" prior to deployment. This is annoying, especially when using SCC. I'm sure that SourceOnly will minimize this effect in LV2010, but the concept still remains: I don't want to be forced to Save All when I don't want my edits (or automatic linking edits) to persist.

Dual network interfaces are often part of the requirements for redundancy; however, in such cases it is also very common to specify that the behaviour of both of them should be identical. You see it in subsea control systems, where they have an "A" and a "B" channel, and you see it topside, where the device might need to be on two networks, etc.

 

Unfortunately this is not the case for any of the dual-port RT targets from NI. The secondary port is really a second-class NIC. It has limited configuration options: it does not support DHCP, you cannot specify a gateway for it, and the code to make programmatic changes to its configuration is not easily available.

 

Please make the two ports fully interchangeable. Port 1 or 2? It should not really matter which one you use.

 

 

Wouldn't it be nice to have a native LabVIEW XML parser available on real-time targets? Storing your config data in XML rather than in config INI files is more flexible, and techniques like XML-RPC would be easier to implement, etc.
Yes, I know about the third-party EasyXML library, but I don't want to spend extra money as we are already paying for LabVIEW 😉
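To illustrate the "more flexible" point: nested, per-channel configuration like the snippet below is awkward to express in an INI file but natural in XML. The tag names and values are purely an example, and Python is used here only because it ships a standard-library XML parser to demonstrate with; the wish is for the equivalent natively in LabVIEW on RT.

```python
# Example of nested config data that maps naturally onto XML but not onto INI sections.
# Tag names and values are illustrative only.
import xml.etree.ElementTree as ET

CONFIG = """
<rig>
  <channel name="AI0" unit="degC">
    <filter type="lowpass" cutoff_hz="10"/>
  </channel>
  <channel name="AI1" unit="bar">
    <filter type="average" samples="16"/>
  </channel>
</rig>
"""

root = ET.fromstring(CONFIG)
for channel in root.findall("channel"):
    flt = channel.find("filter")
    print(channel.get("name"), channel.get("unit"), flt.get("type"))
```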

Sorry if this is a duplicate idea; I searched and couldn't believe it hasn't already been posted.

 

This idea is to add Linux RT deployment support for desktop PCs. In the past, NI has had a process for purchasing a Phar Lap RT license and then deploying it to a normal x86-based PC. Here are the System Requirements, and a manual.

 

This opened the door for desktop PCs to become embedded PCs controlling NI hardware, and it allowed for an upgrade path where embedded cDAQ devices, and even PXI controllers, don't have many options.

 

At the moment there are unofficial ways to get Linux RT onto an x86-based desktop, with limited success and no support. These options are outlined in the comments of an idea to allow a VM of the Linux RT environment. NI could go a route similar to Phar Lap and sell a license with a list of supported hardware.

I really miss the ability to debug reentrant VIs in LabVIEW Real-Time. Currently there is no way at all, short of indirect probing by programming your own debug functionality (logging of some sort).

 

I understand there are inherent difficulties in this, since LV RT systems are typically headless, and definitely GUI-less (except for maybe www-access).

 

But, maybe if we had a way to send a BD reference to a Host PC, the block diagram could be opened there? Something like an Open.BD method with a "destination" input on it? Then the reentrant VI could open its own BD on another LabVIEW instance running Desktop LV.

Sometimes, you really just want an RT front panel. Of course, there is no real front panel, but when you run in interactive mode LabVIEW automatically sets up network communication to pull data from and push data to the RT target. Wouldn't it be great if you could just convert that whole front panel into a basic host VI? Right now, you have to manually convert all controls and indicators to shared variables. Granted, this does force you to be very careful about limiting the number of network-shared items that you have, but sometimes you really just want to run the VI in interactive mode... but deployed.

Untitled.png

 

Or, better than that, convert everything into a web service and automatically build a Silverlight UI (using Web UI Builder) which is hosted on the cRIO. Anything that provides a quick and easy way to go from the RT debugging UI to a basic host VI would be great.

Many measurement and process control applications run at relatively slow rates (<100 Hz). Using the Scan Engine on the CompactRIO for data acquisition is ideal for these applications because you don't need to program the FPGA, and all the measurement and control logic can be implemented on the Real-Time controller.

 

 

In many cases you want to process your data before you analyze it. Currently you can only get the raw measurement data from the AI modules, so you need to add the data processing code to your existing LabVIEW program. It would be helpful if the Scan Engine could offload some of the data processing (e.g. a lowpass filter or a sample average) to the FPGA and provide the user with already-processed data. For example, this functionality could be added to the module configuration page:

 

SCAN Engine.png   
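To pin down what I mean by "sample average": per-channel logic along the lines of the sketch below, which today has to run in the RT loop, is exactly what the Scan Engine/FPGA could apply before handing the value to the host. Python is used only to describe the logic; the class and window size are illustrative.

```python
# Sketch of the kind of per-channel processing (a simple moving average) that
# the Scan Engine / FPGA could apply before the data reaches the RT code.
from collections import deque

class MovingAverage:
    """Fixed-window moving average, one instance per scanned channel."""
    def __init__(self, window=16):
        self.samples = deque(maxlen=window)

    def update(self, raw_value):
        self.samples.append(raw_value)
        return sum(self.samples) / len(self.samples)

if __name__ == "__main__":
    ai0 = MovingAverage(window=4)
    for raw in (1.0, 1.0, 5.0, 1.0, 1.0):
        print(ai0.update(raw))
```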

The cRIO chassis can only be configured to use one time server for time synchronization. If that time server fails, the only recourse is to detect that in some way, edit the ni-rt.ini file, and then reboot the controller. This is rather clumsy, and it would be helpful to be able to specify alternate time server IP addresses.

 

64-bit has been the dominant architecture for a decade; any computer with more than ~2.5 GB of RAM must use it, after all. It is inevitable that 32-bit machines will cease to be made - maybe not tomorrow, but let's be realistic. Let's get ahead of the times and give the modules 64-bit support. Please!

 

 

Streaming data is an important part of many applications. If the amount of data is very large, a RAID can improve system performance significantly.

Sadly, RAID is currently not supported for RT, so real-time applications cannot use this valuable tool for enhancing streaming performance.

On Hypervisor systems, RAIDs can be used, but the data has to be passed from the RT side to Windows first. So streaming does not go through DMA; it consumes CPU instead, which is in fact a waste of resources.

 

Norbert

Given a project topology like this:

 

Windows PC host <-> cRIO-9024 <-> NI-9114 backplane <-> NI-9144 EtherCAT #1 <-> NI-9144 EtherCAT #2, which is run in hybrid scan mode. During development, I wish it were possible to tell the project to run without EtherCAT #2 being physically connected, using virtual/simulated I/O instead. As things stand now, when I try to switch from Configuration to Active mode, I get an error saying that "the slave device cannot be found".

 

More generally, I often find myself thinking that the project explorer needs a good way to "comment out" various pieces of the project without having to resort to "Remove From Project".

On all of the RT platforms, if the CPU maxes out at 100%, the RTOS goes through a sort of "load shedding" and begins suspending non-critical threads to lower the CPU overhead and hopefully maintain determinism. One of the first threads that gets suspended is the TCP thread, which essentially cuts off the RT target from Ethernet and the outside world.

 

While I understand the concept of the load shedding, I believe that if you have the CPU pushed that hard, then your determinism is out the window anyway. Dropping the TCP thread only serves to disconnect the RT system from the development environment or the built application with no real benefit. I have yet to see a single application that was actually able to accomplish anything substantial after the point that the TCP thread was suspended. In most cases, the high CPU usage happens during the development of the application or when something is critically wrong with the setup of the deployed app. In those cases, it wouldn't matter how many threads you suspended, the CPU will still be maxed out.

 

It is akin to blowing out a match in the midst of a forest fire.

 

I would propose that the RTOS be reconfigured to treat the TCP thread as one of the critical threads and keep it from being suspended during high-CPU excursions, OR give the developer the option to turn this feature on or off through the device properties.

Hello,

 

It would be nice to have the ability to create more than one RT target with the same IP address in a project.

At the moment, if you try to use the same IP address twice, the LabVIEW IDE doesn't let you save your modifications! 😞

 

You may say, "WTF!!! Manu has such curious ideas!!!"

 

My need is, for example, to have two configurations for one and the same RT cRIO:

 

  • 1 configuration in Scan Interface 
  • 1 configuration in FPGA interface

Or another way of using multiple targets...

 

  • 1 project linking to Version 1 sources
  • 1 project linking to Version 2 sources

You may say this can be done by using different build specifications.

 

I would say yes... but my need is to keep the two versions of the LabVIEW sources separate!

=> 1 project/target linked to an auto-populating folder for version A

=> 1 project/target linked to an auto-populating folder for version B

 

=> So my need is to be able to use RT targets as "target versions"

=> To be able to do this... I need to create multiple targets with identical IP addresses. 😉

 

Thanks for reading.

 

Manu.

I do not know about most people's use cases with RT systems, but I have the reflex of clicking the run arrow to test all my VIs. When I am on site and do have access to the hardware, everything is rosy. But when I do not have access to the hardware, depending on the system complexity, I now have to wait up to over a minute before I can resume working. It would be nice if, as soon as I realized my mistake, I could tell LabVIEW (right away) to run my VI in the main application instance instead of attempting to deploy it to the missing target.

Alternatively, once I have made the mistake once, LabVIEW could automatically run the VI in the main application instance.

I do not know what the best approach to fix this is, but I know that I am very annoyed every time I make that mistake.