I'm trying to understand the relationship between device IOPS and throughput. I used AmorphousDiskMark to gather some information, and I have one question:

My SSD uses 4KB blocks, so I guess the IOPS should be fixed regardless of whether my operating system reads/writes a 1MB file or a 4KB file. I believe this is not a problem with the software, but rather my lack of background knowledge in hardware. Could someone please help me understand this better? Thank you very much.

---------------------------------------------------------------------
AmorphousDiskMark 4.0.1 (C) 2016-2023 Katsura Shareware
                    Katsura Shareware : https://katsurashareware.com/
---------------------------------------------------------------------
* MB/s = 1,000,000 bytes/s [SATA/600 = 600,000,000 bytes/s]
* KB = 1,000 bytes, KiB = 1,024 bytes
* MB = 1,000,000 bytes, MiB = 1,048,576 bytes

  Sequential Read 1MiB (QD=   8) :   2637.09 MB/s [   2514.9 IOPS]
 Sequential Write 1MiB (QD=   8) :   2538.94 MB/s [   2421.3 IOPS]
     Sequential Read 1MiB (QD=1) :   1428.81 MB/s [   1362.6 IOPS]
    Sequential Write 1MiB (QD=1) :   1941.95 MB/s [   1852.0 IOPS]
      Random Read 4KiB (QD=  64) :    942.61 MB/s [ 230130.6 IOPS]
     Random Write 4KiB (QD=  64) :    250.66 MB/s [  61197.3 IOPS]
         Random Read 4KiB (QD=1) :     38.68 MB/s [   9443.2 IOPS]
        Random Write 4KiB (QD=1) :    335.59 MB/s [  81930.9 IOPS]

    Test : 1 GiB (x5) [Interval=5 sec]
  Volume : Macintosh HD: 17% used (76/465 GiB)
  Device : APPLE SSD AP0512N
     CPU : Intel Core i7-1068NG7
    Date : 2024-06-21T06:54:36Z
      OS : macOS 14.5 23F79

1 Answer

IOPS and throughput are tied together only at a given block size: throughput = IOPS × request size. Published IOPS limits describe small-block workloads; if you request larger blocks, each operation moves far more data, so you saturate the interface long before you hit the drive's IOPS limit. The two figures are related, but largely independent.
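The arithmetic link is easy to check against the benchmark output above. A small sketch using two rows from the AmorphousDiskMark run (MiB = 1,048,576 bytes, and MB/s = 1,000,000 bytes/s, as the tool's own header states):

```python
# Throughput (bytes/s) = IOPS × request size (bytes).
MIB = 1_048_576  # 1 MiB
KIB = 1_024      # 1 KiB

# (IOPS, request size in bytes) taken from the benchmark run above.
runs = {
    "Sequential Read 1MiB QD=8": (2514.9, 1 * MIB),
    "Random Read 4KiB QD=64":    (230130.6, 4 * KIB),
}

for name, (iops, size) in runs.items():
    mb_per_s = iops * size / 1_000_000  # MB/s, as the tool reports it
    print(f"{name}: {mb_per_s:.2f} MB/s")
```

Both results reproduce the reported MB/s figures (2637 and 943 MB/s) to within rounding, which is why the 1MiB tests show a few thousand IOPS while the 4KiB tests show hundreds of thousands: the product, not either factor alone, is what the interface delivers.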

The different IOPS values you see are mainly down to caching and its implementation in the drive.

You see the biggest IOPS figures in the QD=64 tests because QD is queue depth. The OS is essentially handing the drive a list of 64 random blocks and saying "go fetch", rather than waiting for each request to finish before sending the next (QD=1). At QD=64 the drive controller can look ahead and prepare the next read or write while the current one is in progress, which makes much more efficient use of the path to the flash.
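One way to quantify the queue-depth effect is Little's law (requests in flight ≈ IOPS × per-request latency, so latency ≈ QD / IOPS). A rough sketch using the 4KiB random-read numbers from the run above; the latency figures are derived estimates, not measurements:

```python
# Little's law: concurrency ≈ IOPS × latency  =>  latency ≈ QD / IOPS.
# 4 KiB random-read results from the benchmark above.
for qd, iops in [(1, 9443.2), (64, 230130.6)]:
    latency_us = qd / iops * 1_000_000
    print(f"QD={qd}: {iops:>9.1f} IOPS, ~{latency_us:.0f} µs per request")
```

Per-request latency roughly doubles (~106 µs vs ~278 µs), but with 64 requests serviced concurrently the drive completes about 24× more of them per second.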

The sequential 1MiB tests show lower IOPS because each individual request is much bigger. The drive may be performing a similar number of operations internally, but externally it is fulfilling one big request that ties up the interface for longer, so fewer requests complete per second.
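The "ties up the interface for longer" point can be made concrete from the QD=1 rows above: time per completed request is simply 1 / IOPS. A quick sketch comparing the two request sizes:

```python
# Time per completed request = 1 / IOPS (QD=1 figures from the run above).
seq_1mib_iops = 1362.6   # Sequential Read 1MiB, QD=1
rand_4kib_iops = 9443.2  # Random Read 4KiB, QD=1

print(f"1 MiB read: ~{1 / seq_1mib_iops * 1e6:.0f} µs each")
print(f"4 KiB read: ~{1 / rand_4kib_iops * 1e6:.0f} µs each")
```

Each 1MiB request occupies the drive roughly seven times longer than a 4KiB one (~734 µs vs ~106 µs), so the IOPS count is necessarily lower even though far more data moves per second.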

  • I understand the meaning of QD. Thank you for the explanation. In macOS, is it possible to see the physical-block IOPS regardless of whether the logical block size is 1MB or 4K? I'm just using the fio tool, but I'm not sure if it's helping. Another question: can the operating system control the IOPS? Commented Jun 21 at 16:52
  • IOPS is a function of how fast the controller can switch between data blocks within the flash on the SSD. The OS doesn't really have anything to do with it apart from the block size it requests and the queue depth it works at. The OS can't tell the controller how to do its job, only what it wants from it. "Physical IOPS" isn't really a relevant figure, and due to wear levelling it is going to be similar to the highest random-IOPS numbers: wear levelling means data may well be randomly distributed across the flash memory anyway.
    – Mokubai
    Commented Jun 21 at 17:25
  • I understand that IOPS is influenced by the SSD controller's ability to switch between data blocks, and the OS influences this by the block size it requests and the queue depth. To clarify: When testing IOPS with a 1MB block size for random access, does it behave similarly to sequential access with 1MB blocks, effectively selecting sequential 1MB blocks randomly? Is testing random access with a 1MB block size practical? For accurate random access testing, should I use the physical block size instead? Commented Jun 22 at 14:07
  • Random 1MB blocks will be roughly equivalent to sequential 1MB blocks, with potentially a slight time difference if the device needs to power up read circuitry in a different flash cell or device; that difference is minuscule compared to the time to read a 1MB block. I would expect 1MB random to be nearly identical to 1MB sequential on an SSD, so it's not really worth testing. Random tests show worst-case performance (smallest blocks scattered everywhere); sequential tests show best-case performance (big blocks close together). Normal operation falls somewhere between the two.
    – Mokubai
    Commented Jun 22 at 14:18
  • Thank you for the explanation. Your clarification was very helpful. Commented Jun 23 at 4:29
