Linux RAMDISK with tmpfs

The Linux kernel provides tmpfs, a file system that lives entirely in memory. It can be used to store temporary data such as caches or log files. Read more about tmpfs in the kernel documentation: tmpfs.txt
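
Most distributions already use tmpfs out of the box, for example for /dev/shm or /run. As a quick check (the exact mount points depend on your distribution), the existing tmpfs mounts can be listed like this:

df -h -t tmpfs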

After reading this excellent article about using tmpfs in Linux, I decided to put it to the test. Even though the Linux kernel already does a good job of caching files, I wanted to see how this solution performs under different loads. For this, I am using the IOzone tool that I already used for my ZFS tests (1) (2) and my Amazon EC2 IO test.

Mount tmpfs

First of all, we need to create a folder for mounting the file system:

mkdir -p /mnt/ramdisk/

Since I wanted to keep this mountpoint even after rebooting the machine, I edited the /etc/fstab file and added the following line:

tmpfs /mnt/ramdisk      tmpfs   size=128M,mode=0777     0       0

Then mount all the mountpoints using mount -a.
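
To double-check that the RAM disk is actually in place, something like the following should do (the exact output depends on your system):

df -h /mnt/ramdisk
mount | grep ramdisk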

Performance

I am testing this configuration on an IBM System x3550 M2, Type 7946. To compare the performance of tmpfs with that of local storage, the server features six 10k RPM HDDs (IBM 44W2194) in a RAID10 configuration, which should provide good comparison values for a modern physical local storage system.

To run the performance tests, I use the following commands:

iozone -a -g80m -f /mnt/ramdisk/testfile

respectively

iozone -a -g80m -f /root/testfile
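
As a side note, IOzone can also generate an Excel-compatible report directly via its -R and -b flags; a variant of the command above could look like this (results.xls is just an example filename):

iozone -a -g80m -R -b results.xls -f /mnt/ramdisk/testfile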

I then parsed the generated output with Excel (grab the raw data here) and plotted it using a bar graph. This is what we get:

[Figure: Bar diagram comparing hard disk and RAMDISK performance (IOzone output)]

Conclusion

Most of the bars show what I expected: the RAM-based disk is a lot faster than the physical storage, sometimes even up to 79% faster (random write test). However, especially in read-based operations, the hard disks suddenly perform even faster than the temporary filesystem. Why is that? From my point of view, two factors come into play:

  • Disk Controller Cache
  • Kernel Filesystem Cache

We notice that especially in the “reread” scenarios, the hard disks leap ahead of the RAM-based approach. This might have to do with the kernel already caching these blocks in memory and therefore removing the advantage of the ramdisk. However, in all write-based tests, tmpfs clearly shows superior performance, even though the server features a RAID controller cache. Without the RAID controller cache, the write performance of the hard disks would have been even worse. Nonetheless, both solutions show impressive throughput in nearly all tests, the lowest value being around 130,000 KBytes/sec.
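
As a rough sketch for anyone who wants to reduce the influence of the kernel filesystem cache when repeating such tests, the page cache, dentries and inodes can be dropped between runs (run as root; this only discards clean cached data):

sync
echo 3 > /proc/sys/vm/drop_caches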
