LabVIEW Real-Time Idea Exchange


This would be implemented on RT targets such as cRIO or WSN controllers. The purpose would be to significantly decrease the amount of power a controller uses while idling. A typical application would be a remote field deployment of controllers programmed as either data loggers or WSN controller nodes. Competing products offer much lower power consumption than a cRIO or WSN controller. If the RT controller could be put to sleep, with a watchdog timer, for instance, setting the wake-up time, power could be conserved.

 

At the moment the only way around this is to have a third-party device apply power to the controller...

 

Thx.

L.


I think this would be a good option. I have run into the problem during debugging where a previously used Network Stream did not get destroyed, and when the code tries to reopen the stream you get an error that the stream is still in use. Since the refnum is lost, there is no way around this dilemma.

It would allow an RT executable to deploy shared variables automatically

(and not from the project explorer or the host computer as mentioned here)

Hello,

It would be nice to add a file name check to the cRIO executable build process, in order to detect accented characters!

When you use accented file names, the generated cRIO Linux executable does not work: it crashes without any message! NI Linux does not support accented file names.

The problem is that Debug mode works fine and therefore does not behave like the executable!

 

It would be nice to add a file name verification step to the build process in order to highlight the problem.
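As an illustration of the requested check, here is a minimal Python sketch that scans a source tree and flags file or folder names containing non-ASCII (accented) characters; the directory argument is only an example, and the real check would of course live inside the build process itself:

```python
# Minimal sketch of the proposed check: walk the build sources and flag any
# file or folder name containing non-ASCII (accented) characters before deployment.
# The directory path is only an example; adapt it to your project layout.
import os
import sys

def find_non_ascii_names(root):
    """Yield paths whose file or folder name contains non-ASCII characters."""
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            if any(ord(ch) > 127 for ch in name):
                yield os.path.join(dirpath, name)

if __name__ == "__main__":
    root = sys.argv[1] if len(sys.argv) > 1 else "."
    offenders = list(find_non_ascii_names(root))
    for path in offenders:
        print("Non-ASCII name:", path)
    # A build-time check could fail the build here instead of crashing at run time.
    sys.exit(1 if offenders else 0)
```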

 

Thanks for your help.

 

Manu.

At the moment, there is a small development problem with dynamically launched VIs (via the Asynchronous Call method). They are not recognised as dependencies during the deployment process, so the code can't be run effectively in development mode via the run arrow (those VIs will return File Not Found errors).


A simple fix for this would be the ability to mark VIs as "always deploy" in the project window.

In the absence of a hardware idea exchange I'll post this here. I know composite ideas are not ideal either, but my main point is to voice the wish for a different kind of RT controller:

 

The "sb-cFP":

 

We basically use cFP-2220s as small, low-power, rugged embedded computers in our systems. We've looked at sbRIOs, especially the newer ones like the 9606, but they do not have dual networking, nor the 4 built-in serial ports (yes, you can use the mezzanine, but the power consumption, size, etc. go up). No FPGA is needed, nor any I/O other than communication ports like Ethernet/serial (USB and CAN bus are nice though).

 

In an ideal world we would have an sb-cFP version of the 2220, designed for embedded use, at the price of an sbRIO without an FPGA. The only thing we would miss then is lower power consumption. The cFP-2220 is specified to use around 6 W. In real stand-alone use it typically draws around 3.2 W (other applications might push that higher though), but that's still a lot for many types of embedded use.

 

Underclocking

Perhaps underclocking could be a solution to lower the power consumption of NI's controllers when customers need a less power-hungry device? Imagine being able to adjust this dynamically from the System Configuration API / NI MAX... An sb-cFP with 1-2 W consumption would remove any reason to abandon LabVIEW RT and NI hardware in favour of controllers running micro-Linux, for example.
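As a rough illustration, on a stock Linux system CPU frequency scaling is exposed through the standard sysfs cpufreq interface; whether a given NI Linux Real-Time kernel enables cpufreq is an assumption to verify, but a Python sketch of inspecting it could look like this:

```python
# Sketch only: on a stock Linux system, CPU frequency scaling is exposed through
# sysfs (cpufreq). Whether a given NI Linux Real-Time kernel enables cpufreq is
# an assumption here -- check for the directory before relying on it.
import glob
import os

CPUFREQ_GLOB = "/sys/devices/system/cpu/cpu*/cpufreq"

def read(node, name):
    """Read one cpufreq attribute, or return 'n/a' if it is not exposed."""
    try:
        with open(os.path.join(node, name)) as f:
            return f.read().strip()
    except OSError:
        return "n/a"

for node in sorted(glob.glob(CPUFREQ_GLOB)):
    print(node)
    print("  governor:", read(node, "scaling_governor"))
    print("  current :", read(node, "scaling_cur_freq"), "kHz")
    print("  range   :", read(node, "scaling_min_freq"), "-",
          read(node, "scaling_max_freq"), "kHz")
    # Writing e.g. "powersave" to scaling_governor (as root) is the usual Linux way
    # to lower the clock -- essentially what the idea asks NI MAX / the System
    # Configuration API to expose in a supported fashion.
```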

 

I am using the AF-1501 Frame Grabber Module from MoviMED. Right now I am able to capture and store images in BMP and TIFF formats at the RT level, but I would like to be able to create AVI videos at the RT level.

The idea is to be able to record videos and then store them (either in the internal memory of the RIO or on an SD card using the NI-9802).

 

The problem is that the RT processor does not run Windows and has no codecs. Therefore the AVI generation VIs from the Vision palette do not work. Even if it is tied to a particular codec, it would be great to be able to create AVIs on the RT target.
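As a possible stop-gap, if OpenCV can be installed on a target running NI Linux Real-Time (its availability via opkg/pip is an assumption, and this is not the Vision palette; it does not apply to older VxWorks/Phar Lap controllers), its VideoWriter can produce Motion JPEG AVI files without any Windows codecs. A minimal Python sketch:

```python
# Workaround sketch, not the Vision palette: if OpenCV can be installed on a
# target running NI Linux Real-Time (availability via opkg/pip is an assumption),
# its VideoWriter can produce Motion JPEG AVI files without any Windows codecs.
import numpy as np
import cv2

WIDTH, HEIGHT, FPS = 640, 480, 30

writer = cv2.VideoWriter(
    "/home/lvuser/capture.avi",           # e.g. internal storage or an SD-card mount
    cv2.VideoWriter_fourcc(*"MJPG"),      # self-contained codec, no external DLLs
    FPS,
    (WIDTH, HEIGHT),
)

# Dummy frames stand in for images delivered by the frame grabber.
for i in range(FPS * 2):                  # two seconds of video
    frame = np.full((HEIGHT, WIDTH, 3), i % 256, dtype=np.uint8)
    writer.write(frame)

writer.release()
```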

 

 

Similar to NI's Visual Studio 2008 support, it would be great if NI installed the Visual Studio 2010 run-time on the Real-Time target and supported it officially with tools such as the DLL checker.

 

I know that currently I can statically link the 2010 run-time into my built libraries if I want to run them on RT... but it would be nice not to have to. This would also enable products like the LabVIEW Simulation Interface Toolkit and NI VeriStand to officially claim 2010 support in their readmes... as by default the makefiles used by these NI products do not statically link in the run-time, and I have to hack the makefile to enable support.

The development environment for RT is really a great, integrated system. It's so convenient to develop on a regular PC and deploy the VIs for testing. The real-time UI feedback during development is a boon to productivity. It's difficult to imagine a useful RT workflow that doesn't include a live panel in development mode -- it's an idea that is both powerful and intuitive.

 

What I would really like to see next is similar RT panel interactivity in a deployed run-time system.

 

At the moment, I am using a 'remote panels' implementation to stream an interactive panel view back to the PC. (I assume that this is similar to how things are managed by the development environment itself.) The problem with this approach is that the Remote Panels API seems a bit flakey at times. I don't always seem to get a reliable connection. What's worse is the programming overhead associated with setting up the remote panels connection, and the sometimes fickle behavior of Windows in allowing a remote PC to connect.

 

Since the RT development environment already has the capability of seamlessly displaying an interactive UI on the Windows client, would it be that difficult to add UI panel feedback into the run-time executable environment?

 

I realize that, at first glance, this may seem to defeat the whole purpose of an RT system. However, I have built a few different Vision RT systems over the last few years, and the same challenge always seems to come up in every project. When I need to implement a calibration, focus and/or alignment mode on the RT target, I find myself performing all kinds of contortions to retrofit a simple interactive RT utility for use in the deployable application.

 

If I could simply implement an RT VI that presents an interactive UI on the Windows client (just like when you are in the LV Development mode), perhaps I could eliminate the complexity of my current strategies: Either 1) developing an elaborate case structure and messaging system on both sides of the network connection to pass images and parameters across the network in real-time, or 2) implementing a (hit-or-miss) remote panel linkage to a self-contained RT configuration VI over a remote panel connection that isn't always as robust as I wish it would be.

 

If the heavy lifting has already been done for the development system, surely it wouldn't be too much of a stretch to enable similar functionality in the run-time environment..?

 

Anyway, that's my wish for today.

The cRIO-9031 with NI Linux Real-Time does not support SDXC cards. Only SDHC cards, with a maximum of 32 GB, are supported by the cRIO.

For logging applications there is a need for more than 32 GB, hence the request to support SDXC cards of 128/256 GB.

 

The Distributed System Manager does not allow one to see the values for a cluster in a deployed process.  It would be nice to view and edit values in the cluster.

When renaming a set of variables for all the channels on a cRIO module, the names are assigned numbers starting with 1. These names do not line up with the names of the physical channels, and referencing the inputs becomes confusing with two different assigned numbers. This could be resolved by zero-indexing the numbers that are appended to the channel names.

 

(Attached screenshots: MVE3.png, MVE4.png)
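A small Python sketch of the mismatch and of the proposed zero-indexed naming (the module name, channel count, and base name are made-up examples):

```python
# Illustration of the current mismatch vs. the proposed zero-indexed naming.
# The module name, channel count, and base name are made-up examples.
channels = 4

current  = ["Temperature %d" % (i + 1) for i in range(channels)]  # today: starts at 1
proposed = ["Temperature %d" % i for i in range(channels)]        # idea: starts at 0
physical = ["Mod1/AI%d" % i for i in range(channels)]             # hardware channels are 0-indexed

for phys, cur, prop in zip(physical, current, proposed):
    print(phys, "->", cur, "(today) |", prop, "(proposed)")
```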

If you have a deployed RT system running the Scan Engine, you may have I/O variable channels with scaling that was configured and deployed from the project.

What happens when you need to change a scaling factor after the system has been deployed? You would need the development tools to do this.

Since the smarts for this are already built into the LabVIEW IDE, why not put the same code into MAX for remote viewing and modification of I/O variable properties?

If you right-click on the RT target in the project and select Utilities you can:

 

-View in system manager

-View in browser

-etc...

 

It would be useful to have a "View in FTP Client" option there as well. I seldom need to access the web interface, but I often want to delete files prior to a new deployment, for example, and this would be a quick way to do it.

 

Yes, you can use the web interface to access the files as well, but that's much more cumbersome than just opening an FTP connection.
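Until such a shortcut exists, a quick scripted alternative is Python's standard ftplib; the target address, login, and directory below are placeholders:

```python
# Scripted alternative while no "View in FTP Client" shortcut exists: Python's
# standard ftplib can clear files on the target before a new deployment.
# The IP address, login, and directory below are placeholders.
from ftplib import FTP

TARGET_IP = "10.0.0.2"               # RT target address (example)
REMOTE_DIR = "/ni-rt/startup/data"   # directory to clean out (example)

with FTP(TARGET_IP) as ftp:
    ftp.login()                      # anonymous login; use login(user, passwd) if needed
    ftp.cwd(REMOTE_DIR)
    for name in ftp.nlst():
        try:
            ftp.delete(name)         # sub-directories raise an error and are skipped
            print("deleted", name)
        except Exception as exc:
            print("skipped", name, "-", exc)
```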

Creating an OPC UA address space by hand is very tedious. It would be easier if you could import an OPC UA NodeSet XML file and map OPC UA variables to measurements.
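As a sketch of what such an importer could do, a standard UANodeSet XML file can be read with nothing more than the Python standard library to enumerate the variable nodes that would be mapped to measurements; the file name is a placeholder, and the namespace/attribute names assume the standard UANodeSet schema:

```python
# Sketch of what an importer could read from a standard UANodeSet XML file:
# enumerate the UAVariable nodes (NodeId, BrowseName, DataType) so they could be
# mapped to measurements. The file name is a placeholder.
import xml.etree.ElementTree as ET

NS = {"ua": "http://opcfoundation.org/UA/2011/03/UANodeSet.xsd"}

def list_variables(nodeset_path):
    """Yield (NodeId, BrowseName, DataType) for every UAVariable in the node set."""
    root = ET.parse(nodeset_path).getroot()
    for var in root.findall("ua:UAVariable", NS):
        yield var.get("NodeId"), var.get("BrowseName"), var.get("DataType")

if __name__ == "__main__":
    for node_id, browse_name, data_type in list_variables("MyServer.NodeSet2.xml"):
        print(node_id, browse_name, data_type)
```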

Every fixed-point number allocates 64 bits (8 bytes), or 72 bits (9 bytes) if you include an overflow status. I am working on a cRIO project in which I need to acquire and save a large amount of data to process later, when the RT CPU has free time (i.e. is not acquiring or transmitting data). Since the data is acquired on 12 channels at 51.2 kS/s with 24 bits per point, the fixed-point representation allocates 64 bits but uses only 24, wasting 5 bytes (62.5%) for every single data point acquired.

 

As a workaround, I developed two VIs: one to compact the data, removing the 5 unused bytes of every fixed-point number and keeping the 3 bytes that carry the data; the second VI does the reverse job. With this I reduced the data to 37.5% of its original size, saving space and time when writing to non-volatile memory. Maybe there's a way to do it directly, but I didn't figure it out.
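For illustration, here is a minimal sketch of such a pack/unpack pair, assuming each sample is a signed 24-bit value carried in a 64-bit integer; the names and the little-endian layout are illustrative, not the original VIs:

```python
# Minimal sketch of the pack/unpack workaround described above, assuming each
# sample is a signed 24-bit value carried in a wider integer. Names and the
# little-endian layout are illustrative, not the original VIs.
import struct

def pack24(samples):
    """Pack signed 24-bit samples into 3 bytes each (little-endian)."""
    out = bytearray()
    for s in samples:
        out += struct.pack("<i", s)[:3]          # keep the 3 low bytes, drop the rest
    return bytes(out)

def unpack24(data):
    """Reverse pack24: expand 3-byte groups back to ints with sign extension."""
    samples = []
    for i in range(0, len(data), 3):
        chunk = data[i:i + 3] + (b"\xff" if data[i + 2] & 0x80 else b"\x00")
        samples.append(struct.unpack("<i", chunk)[0])
    return samples

packed = pack24([100, -100, 8388607, -8388608])
print(len(packed), "bytes")                      # 12 bytes instead of 32
print(unpack24(packed))                          # [100, -100, 8388607, -8388608]
```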

 

My idea is to add, in some way, the possibility to use the minimum number of bytes needed by the data, at least for storage purposes. It would be nice if NI added an option to have fixed point in two memory allocation modes:

 

  • Standard: the current way, with 64 or 72 bits (probably faster)
  • Packed: the number of bytes allocated is the minimum needed to hold the information. For 16-bit data, each point would use only 2 bytes.
We need a possibility to read out the installed BIOS version, to check whether the system is up to date.
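For what it's worth, on x86 Linux targets the firmware version is usually exposed through sysfs DMI entries; whether a particular NI real-time target provides these files is an assumption to verify. A minimal Python sketch:

```python
# Sketch only: on x86 Linux systems the firmware/BIOS version is usually exposed
# through sysfs DMI entries. Whether a particular NI real-time target provides
# these files is an assumption to verify (VxWorks/Phar Lap targets will not).
def read_dmi(field):
    try:
        with open("/sys/class/dmi/id/" + field) as f:
            return f.read().strip()
    except OSError:
        return "unavailable"

for field in ("bios_vendor", "bios_version", "bios_date"):
    print(field + ":", read_dmi(field))
```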

It would be nice if LabVIEW RT could give more output during the startup process of a LabVIEW Real-Time executable, for example whether the rtexe is started at all, whether there are missing dependencies during the loading process, or other useful status information.

 

In my case I had problems with the German codepage. The build process completed without failure, but the rtexe didn't start at all. I had used special German characters like "öäü" in some VI names, and the rtexe couldn't be loaded for that reason. But I didn't get any message.

 

So please improve the debug output for LabVIEW RT.

It would be good to enhance access security to also include programmatic control of cRIOs. As it is now, you can set user access for a cRIO in a project by opening the Real-Time CompactRIO properties and setting Allow/Deny access by IP. However, this only limits access to deploying settings and any RT applications on the cRIO. You can still control the cRIO (e.g. set outputs and, as in my case, control servo motor drives connected to the cRIO) from a LabVIEW application on any PC on the LAN.

This added access control could, for example, be set up in MAX.

The Reliance file system used by LabVIEW RT can be accessed on a Windows computer using the driver provided by Datalight. Unfortunately this driver is only for 32-bit Windows systems (including Windows 7). It is getting harder and harder to find 32-bit Windows computers as 64-bit processors become widespread.

 

Could we get a 64-bit Windows Driver for Reliance FS?