I see in MAX that the System Resources section for my target shows CPU Total Load. I'm wondering how this gets pulled into MAX and whether there's an equivalent way to get at those values via a console command. Thanks!
For Linux RT (or, rather, for Linux in general), the following link is an interesting read to learn about CPU measurements:
Similarly, reading memory monitoring numbers isn't as trivial as it seems at first glance. We have started collecting information in our own wiki, but there's plenty of information to be found elsewhere, for example this:
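To illustrate the non-obvious part: on Linux, `MemFree` alone understates available memory, because the kernel borrows idle RAM for caches it can reclaim at any time, which is why `MemAvailable` (kernel 3.14 and later) is usually the number to watch. A minimal sketch in Python (the field names are the standard `/proc/meminfo` keys; nothing here is NI-specific):

```python
def parse_meminfo(lines):
    """Return {key: kB} from /proc/meminfo-style lines."""
    info = {}
    for line in lines:
        key, _, rest = line.partition(":")
        parts = rest.split()
        if parts:
            info[key.strip()] = int(parts[0])  # values are reported in kB
    return info

def memory_usage_percent():
    """Percent of RAM in use, counting reclaimable caches as available."""
    with open("/proc/meminfo") as f:
        info = parse_meminfo(f)
    total = info["MemTotal"]
    # Prefer MemAvailable; approximate it on kernels that predate the field.
    available = info.get(
        "MemAvailable",
        info["MemFree"] + info.get("Buffers", 0) + info.get("Cached", 0))
    return 100.0 * (total - available) / total
```

The `MemFree + Buffers + Cached` fallback is only a rough approximation of what newer kernels compute for `MemAvailable`, but it is the conventional one.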
Thanks Joerg, that is a fantastic wiki (I especially liked the part about setting up a VMware NI Linux RT VM; I had been toying with that idea, and it was super helpful to see the steps).
I've been going over the ways Linux presents CPU information and I agree it's not trivial, which is why I'm curious how NI MAX does it in a way that presents an almost Windows-like percentage per core. Alternatively, before I go write my own CPU parser, I want to know whether it's possible to tap into whatever process the controller uses to communicate with MAX and extract data from it.
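For what it's worth, a Windows-like percentage per core can be derived the way `top` does it: sample the tick counters in `/proc/stat` twice and compute busy time from the deltas. Whether MAX does exactly this internally is an assumption on my part; this is just the standard Linux approach, sketched in Python:

```python
import time

def parse_cpu_counters(lines):
    """Return {core_name: (idle_ticks, total_ticks)} from /proc/stat lines."""
    counters = {}
    for line in lines:
        if not line.startswith("cpu"):
            continue
        name, *fields = line.split()
        ticks = [int(x) for x in fields]
        # Field order: user nice system idle iowait irq softirq steal ...
        idle = ticks[3] + (ticks[4] if len(ticks) > 4 else 0)  # idle + iowait
        counters[name] = (idle, sum(ticks))
    return counters

def cpu_load_percent(interval=0.5):
    """Percent busy per core ('cpu' is the aggregate) over the interval."""
    with open("/proc/stat") as f:
        before = parse_cpu_counters(f)
    time.sleep(interval)
    with open("/proc/stat") as f:
        after = parse_cpu_counters(f)
    loads = {}
    for name, (idle_a, total_a) in after.items():
        idle_b, total_b = before[name]
        total_delta = total_a - total_b
        if total_delta > 0:
            loads[name] = 100.0 * (1.0 - (idle_a - idle_b) / total_delta)
    return loads
```

Running `cpu_load_percent()` on the target (e.g. over SSH) should give numbers comparable to what MAX shows, though the sampling window will differ, so don't expect them to match moment to moment.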
Thanks, Tim, I'm glad you find the wiki useful.
As for tapping into the communication between MAX and the controller, I'm not aware of any way to do so. Frankly, I'd be surprised if it was possible. You can use the NI System Configuration API, though, to programmatically read information from target devices. I can't remember whether CPU load is part of the available information, though.
Additionally, you might find this interesting: MAX uses the NI Discovery Protocol (amongst others?) to communicate with targets. That protocol is based on UDP, which allows finding targets and configuring their IP addresses across IP subnet boundaries.
I started reverse-engineering it years ago; whatever I needed back then (mostly only setting IP addresses) can be found here:
I can't remember seeing CPU loads in the UDP messages, though. I believe that kind of information is not sent via UDP but is part of the TCP communication (i.e. the connection made after the PC and target have been configured with IP addresses in the same subnet).
I've done some work with the System Configuration API on there, but I wasn't really sure the values I was getting out were accurate (again, it's kind of apples and oranges when comparing against code that used to run on PharLap to get CPU usage). Since this is for VeriStand, I might just switch to the System Monitor custom device NI has up on its GitHub, but I'll still need to tweak some things, since that custom device doesn't give a straight percentage like my old code did.
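In case it helps with the tweaking: if what the custom device reports turns out to be a load average rather than a busy percentage (I haven't checked what the System Monitor custom device actually outputs, so this is an assumption), the usual normalization is to divide by the core count. A hedged sketch:

```python
import os

def normalize_load(load_avg, ncpus):
    """Express a load average as a rough percent of total CPU capacity.
    Caveat: a load average counts runnable (and, on Linux, uninterruptible)
    tasks, so 100% here means 'one runnable task per core on average',
    which is not the same quantity as instantaneous CPU busy time."""
    return 100.0 * load_avg / ncpus

def current_load_percent():
    # 1-minute load average, normalized by the number of online cores.
    one_min, _, _ = os.getloadavg()
    return normalize_load(one_min, os.cpu_count() or 1)
```

So it will track a busy percentage only loosely; if you need numbers comparable to your old PharLap code, the `/proc/stat` delta approach mentioned earlier in the thread is closer.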
I think it is important to point out bug #206134 in the known issues list for LabVIEW Real-Time. In summary, CPU load may be misreported when Timed Loops are used. I would recommend using a Linux-based approach like what was linked above.
Good to know about that bug, but what if my RT application doesn't use Timed Loops (it's an Async Custom Device)? Or does this apply even if other code running on the system uses Timed Loops (such as the VeriStand engine)?