5 ways to improve hard drive speed in Linux

Summary: This article describes five ways to improve hard drive performance in Linux.

Resolution

  1. Bypass PAGE-CACHE for "read-once" Data.
  2. Bypass PAGE-CACHE for Large Files.
  3. IF (CPU-BOUND) THEN SCHEDULER == NO-OP;
  4. Block-Size: Bigger is better
  5. SYNC vs. ASYNC (& READ vs. WRITE)

 

1. Bypass PAGE-CACHE for "read-once" Data.


The page-cache caches recently accessed pages from the hard drive, thus reducing seek times for subsequent accesses to the same data. The page-cache does not improve performance the first time a page is accessed from the hard drive. So if an app is going to read a file once and only once, bypassing the page-cache is the better way to go. This is possible by using the O_DIRECT flag, which tells the kernel not to consider this particular data for the page-cache. Reducing cache contention means that other pages (which would be accessed repeatedly) have a better chance of being retained in the page-cache. This improves the cache-hit ratio, providing better performance.

#define _GNU_SOURCE /* required for O_DIRECT */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

void ioReadOnceFile()
{
/* Using direct_fd and direct_f bypasses the kernel page-cache.
* - direct_fd is a low-level file descriptor
* - direct_f is a filestream similar to one returned by fopen()
* NOTE: Use getpagesize() for determining optimal sized buffers.
*/

int direct_fd = open("filename", O_DIRECT | O_RDWR);
FILE *direct_f = fdopen(direct_fd, "w+");

/* direct disk-I/O done HERE */

fclose(direct_f); /* also closes the underlying direct_fd */
}
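Note that O_DIRECT transfers generally require the user buffer (and often the file offset and transfer size) to be suitably aligned; page alignment is the safe choice. Below is a minimal sketch of allocating an aligned buffer with posix_memalign() for a direct read; the file name "filename" is only a placeholder.

#define _GNU_SOURCE /* for O_DIRECT */
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

void ioDirectAlignedRead()
{
long pagesize = sysconf(_SC_PAGESIZE);
void *buf;

/* O_DIRECT generally requires aligned buffers;
 * a page-aligned buffer satisfies the alignment rules. */
if (posix_memalign(&buf, pagesize, pagesize) != 0)
    return;

int direct_fd = open("filename", O_DIRECT | O_RDONLY);
if (direct_fd >= 0) {
    /* read one page-sized, page-aligned chunk */
    read(direct_fd, buf, pagesize);
    close(direct_fd);
}
free(buf);
}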


2. Bypass PAGE-CACHE for Large Files.


Consider the case of reading in a large file (e.g. a database) made up of a huge number of pages. Every page accessed goes into the page-cache, only to be dropped later as more and more pages are read. This severely reduces the cache-hit ratio. In this case the page-cache does not provide any performance gains, so one is better off bypassing it when accessing large files.

#define _GNU_SOURCE /* required for O_DIRECT and O_LARGEFILE */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

void ioLargeFile()
{
/* Using direct_fd and direct_f bypasses the kernel page-cache.
* - direct_fd is a low-level file descriptor
* - direct_f is a filestream similar to one returned by fopen()
* NOTE: Use getpagesize() for determining optimal sized buffers.
*/

int direct_fd = open("largefile.bin", O_DIRECT | O_RDWR | O_LARGEFILE);
FILE *direct_f = fdopen(direct_fd, "w+");

/* direct disk-I/O done HERE */

fclose(direct_f); /* also closes the underlying direct_fd */
}
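A gentler alternative to O_DIRECT for large streaming reads is to keep ordinary buffered I/O and tell the kernel to drop the cached pages once they are no longer needed, using posix_fadvise(). This is a sketch of that approach, not part of the original example:

#include <fcntl.h>
#include <unistd.h>

/* After streaming through a large file with normal buffered I/O,
 * advise the kernel to drop its cached pages so they do not
 * crowd out more useful data in the page-cache. */
void dropCachedPages(int fd)
{
fsync(fd); /* dirty pages cannot be dropped, so flush them first */
posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED); /* length 0 = to EOF */
}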

3. IF (CPU-BOUND) THEN SCHEDULER == NO-OP;


The I/O scheduler optimizes the order of I/O operations queued to the hard drive. As seek time is the heaviest penalty on a hard drive, most I/O schedulers attempt to minimize it. Most are implemented as a variant of the elevator algorithm, i.e. they re-order the randomly ordered requests from numerous processes into the order in which the data is laid out on the hard drive. This re-ordering requires a significant amount of CPU time.

Certain tasks that involve complex operations tend to be limited by how fast the CPU can process vast amounts of data. A complex I/O scheduler running in the background can consume precious CPU cycles, thereby reducing system performance. In this case, switching to a simpler algorithm like noop reduces the CPU load and can improve system performance.
echo noop > /sys/block/<device>/queue/scheduler
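To verify the change, the same sysfs file lists the available schedulers, with the active one shown in brackets. Note that on newer kernels that use the multi-queue block layer, the equivalent minimal scheduler is named none rather than noop.

cat /sys/block/<device>/queue/scheduler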

 


4. Block-Size: Bigger is better


An app can transfer data in chunks of any size, even a few bytes at a time. While this will eventually get the job done, it is definitely not the most optimal way. From the kernel's perspective, the most optimal size for I/O requests is the file system block size (i.e. the page-size). As all I/O in the file system (and the kernel page-cache) is done in terms of pages, it makes sense for the app to do transfers in multiples of the page-size too. Also, with multi-segmented caches making their way into hard drives, one benefits hugely from doing I/O in multiples of the block-size.


The following command can be used to determine the optimal block-size:
stat --printf="bs=%s optimal-bs=%S\n" --file-system /dev/<device>
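As an illustration, here is a minimal sketch of transferring data in page-sized chunks rather than a few bytes at a time; the file name "data.bin" is only a placeholder:

#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

void ioBlockSized()
{
/* Size each request to the page-size instead of a few bytes. */
long pagesize = sysconf(_SC_PAGESIZE);
char *buf = malloc(pagesize);
if (buf == NULL)
    return;

int fd = open("data.bin", O_RDONLY);
if (fd >= 0) {
    ssize_t n;
    /* each read() now transfers a full page per syscall */
    while ((n = read(fd, buf, pagesize)) > 0) {
        /* process n bytes here */
    }
    close(fd);
}
free(buf);
}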


5. SYNC vs. ASYNC (& READ vs. WRITE)


When an app initiates a SYNC I/O read, the kernel queues a read operation for the data and returns only after the entire block of requested data is read back. During this period, the kernel marks the application's process as blocked for I/O. Other processes can utilize the CPU, resulting in better overall performance for the system.

When an app initiates a SYNC I/O write, the kernel queues a write operation for the data and puts the application's process in the blocked-for-I/O state. Unfortunately, this means the application's process is blocked and cannot do any other processing (or I/O, for that matter) until the write operation completes.

When an app initiates an ASYNC I/O read, the read() function usually returns after reading only a subset of the requested block of data. The app needs to repeatedly call read() with the size of the data remaining, until all the required data is read in. Each additional call to read() introduces some overhead, as it causes a context switch between userspace and the kernel. Implementing a tight loop to repeatedly call read() wastes CPU cycles that other processes could have used. Hence one usually blocks using select() until the next read() returns a non-zero number of bytes read in; i.e. the ASYNC read is made to block just like the SYNC read does, as in the sketch below.
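The following is a minimal sketch of that select()-based pattern; readFully() is a hypothetical helper, and note that select() always reports regular files as readable, so this pattern is mainly useful for descriptors such as pipes and sockets:

#include <sys/select.h>
#include <unistd.h>

/* Read nbytes from fd into buf, blocking in select() between
 * partial reads instead of spinning in a tight read() loop. */
ssize_t readFully(int fd, char *buf, size_t nbytes)
{
size_t done = 0;
while (done < nbytes) {
    fd_set rfds;
    FD_ZERO(&rfds);
    FD_SET(fd, &rfds);

    /* sleep until fd has data instead of burning CPU cycles */
    if (select(fd + 1, &rfds, NULL, NULL, NULL) < 0)
        return -1;

    ssize_t n = read(fd, buf + done, nbytes - done);
    if (n <= 0)
        break; /* EOF or error */
    done += n;
}
return (ssize_t)done;
}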

When an app initiates an ASYNC I/O write, the kernel updates the corresponding pages in the page-cache and marks them dirty. Control then returns quickly to the app, which can continue to run. The data is flushed to the hard drive later, at a more optimal time (low CPU load) and in a more optimal way (sequentially bunched writes).

Hence, SYNC-reads and ASYNC-writes are generally a good way to go as they allow the kernel to optimize the order and timing of the underlying I/O requests.

Affected Products

Red Hat Enterprise Linux Version 5, Red Hat Enterprise Linux Version 6, Red Hat Enterprise Linux Version 7, Red Hat Enterprise Linux Version 8, Red Hat Enterprise Linux Version 9
Article Properties
Article Number: 000140396
Article Type: Solution
Last Modified: 16 Aug 2024
Version:  4