

Fio stands for Flexible I/O Tester and is a tool for measuring I/O performance. With Fio, devices such as hard drives or SSDs can be benchmarked by running a user-defined workload and collecting performance data. The following article answers the most important questions that should be settled before a performance test and shows how they map to Fio parameters.

Test an entire device or individual files?

This question is answered via the --filename parameter, which accepts either an entire block device or a file name.

Attention: a write test against a block device overwrites the whole device!

If the parameter is not used, Fio creates the test files based on the --name parameter; in this case the file size must be specified with --size. When a block device is used without --size, the whole device is used.

$ fio --rw=write --name=test --size=10M
[...]
$ ls -sh
total 10M
10M test.1.0
$ fio --rw=write --name=test --size=10M --filename=file1
[...]
$ ls -sh
total 10M
10M file1
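
A block device can be tested in the same way via --filename. The following is a minimal sketch; /dev/sdX is a placeholder for a scratch device, and a read workload is used here so that no data is destroyed. Without --size the whole device is read; with --size only the first part of it:

$ fio --rw=read --name=devtest --filename=/dev/sdX --direct=1
$ fio --rw=read --name=devtest --filename=/dev/sdX --size=1G --direct=1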

How big should the test files be?

The --size parameter defines the size of the test files. If an entire block device is specified via --filename and --size is omitted, the whole device is used.

$ fio --rw=write --name=test --size=20M
[...]
$ ls -sh
total 20M
20M test.1.0

Use of main memory (especially the page cache under Linux) or Direct I/O?

Setting --direct=1 activates Direct I/O (O_DIRECT). This means, above all, that the page cache is bypassed and main memory is no longer used as an intermediate buffer.

For asynchronous accesses (e.g. via the libaio library), --direct is a prerequisite, because the page cache cannot be accessed asynchronously.

 
$ fio --rw=write --name=test --size=20M --direct=1
[...]
Run status group 0 (all jobs):
  WRITE: io=20480KB, aggrb=28563KB/s

Without --direct, the speed of the main memory is measured:

$ fio --rw=write --name=test --size=20M
[...]
Run status group 0 (all jobs):
  WRITE: io=20480KB, aggrb=930909KB/s

Which block size should be used?

By default, a block size of 4 KB is used. The desired block size can be passed via the --bs parameter:

$ fio --rw=write --name=test --size=20M --direct=1
test: (g=0): rw=write, bs=4K-4K/4K-4K
[...]
$ fio --rw=write --name=test --size=20M --direct=1 --bs=1024k
test: (g=0): rw=write, bs=1M-1M/1M-1M, ioengine=sync, iodepth=1

Sequential or Random Access?

The --rw parameter decides whether the I/O accesses are issued sequentially or randomly. In addition to picking one variant, it is also possible to define a mixed workload, for example 50% read and 50% write. The following options are available for --rw:

  • read – sequential reads
  • write – sequential writes
  • randread – random reads (see the sketch after this list)
  • randwrite – random writes
  • readwrite, rw – mixed sequential workload
  • randrw – mixed random workload
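
As a quick sketch of a purely random workload, a random-read run with the default 4 KB block size can be started in the same style as the earlier examples (file name and size are only placeholders):

$ fio --rw=randread --name=test --size=20M --direct=1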

By default, a mixed workload uses 50% reads and 50% writes. To achieve a different distribution, the --rwmixread or --rwmixwrite parameter can be specified.

$ fio --rw=readwrite --name=test --size=20M --direct=1 --bs=1024k
[...]
  read: io=11264KB, bw=54154KB/s, iops=52, run=208msec
[...]
  write: io=9216.0KB, bw=44308KB/s, iops=43, run=208msec
[...]
Disk stats (read/write):
  sda: ios=14/12, merge=0/0, ticks=156/76, in_queue=232, util=54.55%

With an 80% to 20% split the result is:

$ fio --rw=readwrite --name=test --size=20M --direct=1 --bs=1024k --rwmixread=80
[...]
  read: io=17408KB, bw=83292KB/s, iops=81, run=209msec
[...]
  write: io=3072.0KB, bw=14699KB/s, iops=14, run=209msec
[...]
Disk stats (read/write):
  sda: ios=24/3, merge=0/0, ticks=192/16, in_queue=216, util=56.67%

What type of data does Fio use?

Fio always uses random data. To reduce the overhead of generating it, a buffer of random data is created at the beginning and reused throughout the test. Because of this reuse, the data is in most cases still compressible.

On SSDs with compressing controllers (e.g. the Intel 520 Series), this buffer reuse causes the controller to compress the data; performance is then almost the same as when all zeros are written, e.g. with the --zero_buffers option.

To bypass this SSD compression effect, Fio can be instructed with --refill_buffers to refill the buffer for each I/O submit:

refill_buffers If this option is given, fio will refill the IO buffer on every submit.

Here is an example demonstrating the above effect with the Intel 520 Series SSDs.

  • By default, data can be compressed. See the specifications on intel.com:
    • Compressible performance – sequential write (SATA 6 Gb/s): 520 MB/s
    • Incompressible performance – sequential write (up to): 235 MB/s
/usr/local/bin/fio --rw=write --name=intel520-run0 --bs=128k --direct=1 --filename=/dev/sdb --numjobs=1 --ioengine=libaio --iodepth=32
intel520-run0: (g=0): rw=write, bs=128k-128k/128k-128k, ioengine=libaio, iodepth=32
Jobs: 1 (f=1): [W] [100.0% done] [0K/475.7M/s] [0/3629 iops] [eta 00m:00s]
  write: io=228937MB, bw=394254KB/s, iops=3080, run=594619msec

With --zero_buffers the performance stays almost the same:

/usr/local/bin/fio --rw=write --name=intel520-run0 --bs=128k --direct=1 --filename=/dev/sdb --numjobs=1 --ioengine=libaio --iodepth=32 --zero_buffers
intel520-run0: (g=0): rw=write, bs=128k-128k/128k-128k, ioengine=libaio, iodepth=32
Jobs: 1 (f=1): [W] [100.0% done] [0K/490.4M/s] [0/3741 iops] [eta 00m:00s]
intel520-run0: (groupid=0, jobs=1): err=0: pid=13274
  write: io=228937MB, bw=401393KB/s, iops=3135, run=584044msec

With the --refill_buffers option, the incompressible performance is reached:

/usr/local/bin/fio --rw=write --name=intel520-run0 --bs=128k --direct=1 --filename=/dev/sdb --numjobs=1 --ioengine=libaio --iodepth=32 --refill_buffers
Jobs: 1 (f=1): [W] [100.0% done] [0K/242.8M/s] [0/1852 iops] [eta 00m:00s]
intel520-run0: (groupid=0, jobs=1): err=0: pid=13289
  write: io=228937MB, bw=203629KB/s, iops=1590, run=1151267msec

How can parallel accesses be performed?

Parallel accesses can be realized in different ways. On the one hand, several processes can be started that execute jobs in parallel (--numjobs); on the other hand, the I/O depth can be increased by using an asynchronous I/O engine. Especially for SSDs, parallel I/O requests increase performance because SSDs internally have several flash channels for processing (see Intel SSDs at a glance).

  • numjobs

Specifies the number of processes that each generate the defined workload (default 1). A value n > 1 starts n parallel processes that execute the same workload/job. The --size parameter in particular has to be considered, because with e.g. numjobs=4, four times the specified size of storage space is needed.

  • group_reporting

When enabled, this option generates a single group report for tests with numjobs > 1 instead of individual job reports.

$ fio --rw=readwrite --name=test --size=50M --direct=1 --bs=1024k --numjobs=2
test: (g=0): rw=rw, bs=1M-1M/1M-1M, ioengine=sync, iodepth=1
test: (g=0): rw=rw, bs=1M-1M/1M-1M, ioengine=sync, iodepth=1
fio-2.0.8-9-gfb9f0
Starting 2 processes
[...]
test: (groupid=0, jobs=1): err=0: pid=25753: Thu Aug 30 10:40:30 2012
[...]
test: (groupid=0, jobs=1): err=0: pid=25754: Thu Aug 30 10:40:30 2012
[...]
$ ls -sh
total 100M
50M test.1.0 50M test.2.0

--group_reporting summarizes the statistics:

$ fio --rw=readwrite --name=test --size=50M --direct=1 --bs=1024k --numjobs=2 --group_reporting
test: (g=0): rw=rw, bs=1M-1M/1M-1M, ioengine=sync, iodepth=1
test: (g=0): rw=rw, bs=1M-1M/1M-1M, ioengine=sync, iodepth=1
fio-2.0.8-9-gfb9f0
Starting 2 processes
Jobs: 2 (f=2)
test: (groupid=0, jobs=2): err=0: pid=25773: Thu Aug 30 10:43:00 2012
  read: io=56320KB, bw=41020KB/s, iops=40, run=1373msec
[...]
$ ls -sh
total 100M
50M test.1.0 50M test.2.0

libaio and iodepth

libaio enables asynchronous access from the application level, allowing several parallel I/O requests to be issued. The --iodepth parameter specifies the number of outstanding requests. Using --ioengine=libaio also requires --direct=1, because the page cache cannot be accessed asynchronously under Linux.

Here’s an example using 16 as the IO depth:

$ fio --rw=readwrite --name=test --size=50M --direct=1 --bs=1024k --ioengine=libaio --iodepth=16
test: (g=0): rw=rw, bs=1M-1M/1M-1M, ioengine=libaio, iodepth=16
[...]
  IO depths: 1=2.0%, 2=4.0%, 4=8.0%, 8=16.0%, 16=70.0%, 32=0.0%, >=64=0.0%
[...]

As the example shows, the exact distribution of the I/O depths can be checked after a job run. This is useful because, due to OS restrictions, the desired I/O depth cannot always be enforced:

Even async engines may impose OS restrictions causing the desired depth not to be […] achieved.

How do I limit the duration of a test?

How long a test should run can be specified with the --runtime parameter. To make sure the test does not end earlier than the specified time, the --time_based parameter is recommended; it repeats the workload until the desired runtime has been reached.

$ fio --rw=readwrite --name=test --size=50M --direct=1 --bs=1024k --numjobs=2 --group_reporting --runtime=2
[...]
  read: io=50176KB, bw=36545KB/s, iops=35, run=1373msec
[...]

The duration of this test is only 1.373 seconds, despite the --runtime specification. With --time_based the requested runtime is reached:

$ fio --rw=readwrite --name=test --size=50M --direct=1 --bs=1024k --numjobs=2 --group_reporting --runtime=2 --time_based
[...]
  read: io=77824KB, bw=38718KB/s, iops=37, run=2010msec
[...]
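
Putting the parameters discussed above together, a typical random-read benchmark could look like the following sketch; the device path, runtime, and queue depth are placeholders that should be adapted to the system under test:

$ fio --rw=randread --name=iops-test --filename=/dev/sdX --bs=4k --direct=1 --ioengine=libaio --iodepth=16 --numjobs=2 --group_reporting --runtime=60 --time_based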


