NI Linux Real-Time Discussions


File reporting on NI Linux RT

Solved!
Go to solution

I have seen some odd behavior when putting files onto my NI Linux RT target. I generated a text file that was about 43 MB in size and put it in the /home/lvuser directory on my RT target. When using the "du -h /home/" command, the space taken up appears to be about 42 MB, which is close to the 43 MB reported in Windows. However, the "df -mh" command shows the file as consuming only about 33 MB. The same behavior occurs with binary files. I have two questions about this:

 

1. Is this expected behavior, and is it something that is inherent to Linux? 

 

2. When monitoring disk usage on the cRIO, should you trust the du command or the df command?

 

Some clarity on this would be very helpful!

Message 1 of 6

Taken from stackexchange:

 

du == Disk Usage. It walks through the directory tree and counts the sum of the sizes of all files therein. It may not output exact information due to the possibility of unreadable files, hardlinks in the directory tree, etc. It shows information about the specific directory requested. Think: "How much disk space is being used by these files?"

df == Disk Free. It looks at the used blocks directly in the filesystem metadata. Because of this it returns much faster than du, but it can only show info about the entire disk/partition. Think: "How much free disk space do I have?"
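A small self-contained comparison of the two views (a sketch runnable on any Linux box; the temp directory and 512 KiB file are made up for illustration):

```shell
# du sums the files under a directory tree; df reads the filesystem's
# own block accounting. Create a throwaway directory with one file and
# look at it from both sides.
d=$(mktemp -d)
dd if=/dev/zero of="$d/blob" bs=1024 count=512 2>/dev/null   # 512 KiB file
du_kb=$(du -sk "$d" | cut -f1)                     # du: usage of just this directory
fs_used_kb=$(df -kP "$d" | awk 'NR==2 {print $3}') # df: used KiB on the whole filesystem
echo "du sees ${du_kb} KiB in $d; df sees ${fs_used_kb} KiB used on the filesystem"
rm -rf "$d"
```

Note that df cannot be pointed at the directory alone: it always reports the partition that the directory happens to live on.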

 

How do you make df show the size of a single file?
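As far as I know, you can't: given a file path, df reports on the filesystem that contains the file, not on the file itself. A quick sketch (using a throwaway temp file):

```shell
# df accepts a file argument, but it resolves it to the containing
# filesystem; for a single file's size use stat, ls, or du instead.
f=$(mktemp)
printf 'hello world\n' > "$f"
df -hP "$f"                 # whole-filesystem view, even with a file argument
bytes=$(stat -c %s "$f")    # the file's own apparent size in bytes
du -h "$f"                  # blocks actually allocated to the file
echo "file size: ${bytes} bytes"
rm -f "$f"
```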




DSH Pragmatic Software Development Workshops (Fab, Steve, Brian and me)
Release Automation Tools for LabVIEW (CI/CD integration with LabVIEW)
HSE Discord Server (Discuss our free and commercial tools and services)
DQMH® (The Future of Team-Based LabVIEW Development)


Message 2 of 6

I have attached a file that shows what I am discussing in my initial post. It shows the df output before and after putting the 43 MB file on the Linux RT target, as well as what du reports after the file has been stored in the /home/lvuser directory.

 

Thank you

Message 3 of 6
Solution
Accepted by topic author Shezaan

From the output of the df utility, I can see that you're using a Zynq-based controller. One interesting thing to keep in mind with the 906x, SOM, and sbRIO-96xx controllers is that they use raw NAND flash, and the filesystem that sits on top of it performs transparent compression. This explains the difference between the reported size of the file and the change in available space on the NAND: you're measuring the uncompressed size in the former case and the compressed, on-NAND size in the latter.
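A rough illustration of why a 43 MB text file can occupy only ~33 MB on NAND (runnable on any Linux machine, not the target; gzip here is only a stand-in for the LZO/zlib compression the flash filesystem uses, and the sample text is made up):

```shell
# Repetitive text compresses well, so the logical size (what du/stat
# report) exceeds the physical space a compressing filesystem consumes
# (what df observes disappearing).
f=$(mktemp)
yes 'sample log line from the RT application' | head -c 1048576 > "$f"  # 1 MiB of text
orig=$(stat -c %s "$f")        # logical (uncompressed) size
comp=$(gzip -c "$f" | wc -c)   # size after compression
echo "uncompressed: ${orig} bytes, compressed: ${comp} bytes"
rm -f "$f"
```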

 

Now, with that bit of information and an understanding of what the two tools measure, it really comes down to what you want to monitor. To make sure you're not running low on storage capacity, use df to check the space remaining on the NAND. To check that a single file is not growing too large, use du (or ls or stat) on the file you care about.
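The two checks above could be combined into something like the following sketch (the log path and both thresholds are hypothetical, chosen purely for illustration):

```shell
# Warn when the root filesystem passes 80% used (df's whole-disk view)
# or when one watched file exceeds 10 MiB (du's per-file view).
LOG=/home/lvuser/app.log   # hypothetical file to watch
used_pct=$(df -P / | awk 'NR==2 {gsub(/%/,""); print $5}')
if [ "$used_pct" -ge 80 ]; then
    echo "WARNING: root filesystem is ${used_pct}% full"
fi
if [ -f "$LOG" ]; then
    size_kb=$(du -k "$LOG" | cut -f1)
    if [ "$size_kb" -ge 10240 ]; then
        echo "WARNING: $LOG has grown to ${size_kb} KiB"
    fi
fi
```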

Message 4 of 6

Hey Brad, 

 

Thank you so much for clearing that up. One follow-up question: is there any way to control how much these files are compressed, or is it entirely dependent on the hardware/software?

 

Thank you

Message 5 of 6
Solution
Accepted by topic author Shezaan

Hi Shezaan,

 

It doesn't look like there's a convenient way to control the compression level, but there is a way to switch the compression algorithm between LZO (the default: faster, slightly larger output) and ZLIB (slower, better compression), or a mix of the two. You'd need to change how the filesystem is formatted, and given the limited gains (less than 10%, with the actual difference depending on filesystem contents), the added complexity and non-standardness of the approach, the fact that it hasn't been tested on NI hardware, and the slight performance hit, I'm not going to recommend you try it.
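For reference only (not something I'd run on a production target): the stock UBIFS tooling in mtd-utils selects the compressor when an image is built, via the -x option of mkfs.ubifs. The geometry numbers below are placeholders that would have to match the target's actual NAND layout, so treat this purely as a sketch:

```
# Build a UBIFS image from rootfs/ using ZLIB instead of the LZO default.
# -m (min I/O size), -e (LEB size), and -c (max LEB count) are placeholder
# values and must match the specific NAND part.
mkfs.ubifs -r rootfs/ -o rootfs.ubifs -m 2048 -e 126976 -c 2048 -x zlib
```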

 

You could adjust the compression level itself by digging into the kernel code and the UBI userspace tools to add that functionality. If you decide to tackle that, be sure to submit the changes to the appropriate open-source mailing lists (I'm sure there are others who would benefit from the work).

Message 6 of 6