Is there a way to configure the cRIO's /var/log/messages to preserve the data from a previous run, either in a rotated file or in the same file?
At the moment the file is cleared on start-up and contains only data from the current run.
I'd like to be able to see what was logged last before a cRIO hangs, for example.
Unfortunately, /var/log/messages is designed as a buffer file that is intentionally overwritten on reboot. As far as I know, there isn't a native way to change this behaviour.
My recommendation would be to programmatically move this file in our code; if we're trying to debug a hang/crash, this obviously becomes much more difficult. What exactly are you trying to debug? Are we able to get information from the logs in /var/local/natinst/log? Are we able to log system resource usage to a file?
Thanks for the response. This is part of an investigation for which I also have an SR open.
We have one cRIO, out of several that run the exact same application with the same settings and hardware modules, which hangs/crashes after a while.
Hangs/crashes means it no longer responds to any connection requests. We use:
- VI server
- NSVE (scan engine)
- Remote panels
The proof that the application has stopped is that the User LED no longer blinks, but the FPGA continues to function.
The reason for the log question is to figure out if the system might have logged something that could indicate the root cause or help with that.
Since the system is in a deadlock state, the only option to reach it again is a power cycle, which ....
Hope this clarifies it a bit more.
Thanks for the clarification. /var/log/messages is essentially a catch-all for logging processes on the system that aren't explicitly kept in the other log files. I wouldn't expect to see any information about crashes in here, as this is cleared on reboot.
From your description of the hang, it sounds like the CPU usage is being maxed out. Is a LV coredump appearing in /var/local/natinst/log? Alternatively, you could just log CPU usage over time (the sampling interval depending on how quickly it hangs) to see whether a pattern appears.
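As a starting point for that logging, a minimal sketch in shell could sample the standard Linux /proc interfaces to a persistent file. The log path and the 30 s interval are assumptions, not cRIO defaults; tune the interval to how quickly the hang develops.

```shell
#!/bin/sh
# Sketch: append a timestamped load/memory sample to a persistent log.
# LOG path and sleep interval are assumptions -- adjust for your system.
LOG=${LOG:-/home/lvuser/resource.log}

sample() {
    # /proc/loadavg holds the 1/5/15-minute load averages;
    # /proc/meminfo's MemFree line gives a quick view of free RAM.
    printf '%s load=%s %s\n' \
        "$(date '+%Y-%m-%d %H:%M:%S')" \
        "$(cut -d' ' -f1-3 /proc/loadavg)" \
        "$(grep MemFree /proc/meminfo)"
}

# Run in the background from a startup script:
# while :; do sample >> "$LOG"; sleep 30; done
```

If the samples flat-line at a high load shortly before the hang, that would support the maxed-out-CPU theory.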
Interesting that only one system out of the several is hanging; is the same firmware installed on all of them?
Too bad, I would have hoped to be able to change a configuration file to secure the content of /var/log/messages for these purposes.
Too bad also that my Linux knowledge only goes so far. But I would have thought that the syslog-ng config could perhaps be persuaded to dump all messages to two output locations with different logrotate settings.
The core dumps you mentioned contain only the default threading info, no errors like the ones I have seen in others in the past.
I ran a small app (User LED blink only) and that also made it hang/crash, so the next step is an RMA.
All systems were identical software-wise (firmware, NI sw stack, application).
It's been a while since I've looked at explicitly what /var/log/messages contains - does it update all the way up to the crash?
Could we just copy the contents of the file periodically? We wouldn't capture the very end, but we could check whether the file has been updated and copy its contents to another (non-volatile) location.
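That periodic copy could be sketched roughly as below. The source and destination paths, and the idea of comparing file sizes to detect an update, are assumptions for illustration; on a cRIO, /home/lvuser is one location that survives a reboot.

```shell
#!/bin/sh
# Sketch: snapshot a log file into non-volatile storage whenever it changes.
# Paths and the 10 s polling interval are assumptions, not cRIO defaults.

snapshot_if_changed() {
    src=$1   # e.g. /var/log/messages (volatile, cleared at boot)
    dst=$2   # e.g. /home/lvuser/messages.latest (survives reboot)
    [ -f "$src" ] || return 0
    # Copy only when the source size differs from our last snapshot.
    if [ ! -f "$dst" ] || [ "$(wc -c < "$src")" -ne "$(wc -c < "$dst")" ]; then
        cp "$src" "$dst"
    fi
}

# Run as a background loop from a startup script:
# while :; do snapshot_if_changed /var/log/messages /home/lvuser/messages.latest; sleep 10; done
```

As noted, anything logged in the final polling interval before the hang would still be lost.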
I wouldn't know what gets logged before it crashes; it may even be nothing special.
Syslog-ng seems pretty flexible, so I wouldn't be surprised if multiple destination files can be configured for the same filter, but that might be a question better asked in a syslog-ng expert forum.
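For reference, a duplicate destination in syslog-ng might look roughly like the untested sketch below. The source name `s_src` and the persistent path are assumptions, and the exact config file location on NI Linux RT may differ; a log statement can list more than one destination.

```
# Untested sketch for a syslog-ng configuration (e.g. /etc/syslog-ng/syslog-ng.conf):
# write the same messages to the volatile file and a second, persistent copy.

destination d_messages { file("/var/log/messages"); };
destination d_persist  { file("/home/lvuser/messages.persist"); };

log { source(s_src); destination(d_messages); destination(d_persist); };
```

The persistent copy could then get its own logrotate settings, independent of the volatile one.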
I think a duplicate destination might catch something, while a copy action is possibly too slow to catch anything that happened very close to a crash, due to the extra latency.
Looking at the contents of /var/log/messages, I don't think they'd be helpful for troubleshooting the hang you see. I'd recommend implementing something like this community code instead; it would be better suited to helping with this. You could change the sampling frequency to better catch something that's more of a spike towards the end.
Linux tools (other than Linux RT) aren't my area of expertise so there may be something you could find in those.
Thanks for pointing out this gem.
The cRIO is now being shipped for repair, but always good to be prepared for the next case.