7

Is there a maximum number of hard drives that one can connect to a 64-bit Linux machine? I'm not concerned with practicality, as my situation involves a VM.

  • Edited your question, since "mounts" are not quite the same thing as "hard drives".
    – Renan
    Commented Mar 29, 2012 at 18:50
  • Another way to ask this is "Could I attach, say, 147 file systems to my Linux machine?" In other words, tell us what you are trying to accomplish and we can give you more practical answers. Unless this is an intellectual exercise only.
    – uSlackr
    Commented Mar 29, 2012 at 21:45
  • This is an intellectual exercise only.
    – monksy
    Commented Jan 25, 2013 at 23:09

4 Answers

10

From this LinuxQuestions post:

Linux does not put arbitrary limits on the number of hard disks.

Also, from this post in the Debian mailing list:

That's easy. After /dev/sdz comes /dev/sdaa. And, I've just tested it by making and logging into 800 ISCSI targets on my laptop, after /dev/sdzz comes /dev/sdaaa. :)

and this blog post:

For SATA and SCSI drives under a modern Linux kernel, the same as above applies except that the code to derive names works properly beyond sdzzz up to (in theory) sd followed by 29 z's!

So, theoretically there are limits, but in practice they are unreachable.
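To see how the series keeps going, here is a standalone sketch of the bijective base-26 naming, similar in spirit to (but not a copy of) the kernel's sd_format_disk_name():

/* Sketch: reproduce the sd naming series (sda ... sdz, sdaa ... sdzz, sdaaa ...)
 * with bijective base-26. Illustration only, not the kernel's code. */
#include <stdio.h>

static void disk_name(int index, char *buf, int buflen)
{
    char tmp[32];
    int pos = 0, len;

    do {                                    /* letters come out in reverse order */
        tmp[pos++] = 'a' + (index % 26);
        index = index / 26 - 1;
    } while (index >= 0 && pos < (int)sizeof(tmp));

    len = snprintf(buf, (size_t)buflen, "sd");
    while (pos > 0 && len + 1 < buflen)     /* copy them back the right way round */
        buf[len++] = tmp[--pos];
    buf[len] = '\0';
}

int main(void)
{
    int samples[] = { 0, 25, 26, 701, 702 };
    char name[16];

    for (int i = 0; i < (int)(sizeof(samples) / sizeof(samples[0])); i++) {
        disk_name(samples[i], name, (int)sizeof(name));
        printf("disk index %4d -> /dev/%s\n", samples[i], name);
    }
    return 0;   /* prints sda, sdz, sdaa, sdzz, sdaaa */
}

With larger indices the same scheme simply produces longer names (for example, an index of 262143 comes out as sdnwtl, which matches the last answer below).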

  • Surely there is a limit.
    – monksy
    Commented Mar 29, 2012 at 23:04
  • There is, but in practice it's almost unreachable.
    – Renan
    Commented Mar 29, 2012 at 23:29
  • Using hardware RAID, the number of devices any kernel has access to may be greater than what can be seen.
    – Clearer
    Commented Nov 21, 2014 at 20:44
2

There is, in fact, a limit on the number of drives exposed by Linux's abstract SCSI subsystem, which includes SATA and USB drives. This is because device files are identified by major/minor device number pairs, and the numbering scheme allocated to the SCSI subsystem imposes an implicit limit.

https://www.kernel.org/doc/Documentation/devices.txt

The following major numbers are allocated: 8, 65 through 71, and 128 through 135, for a total of 16 allocated majors. Each major has 256 possible minor numbers (range 0..255). Each disk gets 16 consecutive minors, where the first represents the entire disk and the next 15 represent partitions.

let major = number of allocated major numbers = 16
let minor = number of minor numbers per major = 256
let parts = number of minors per disk = 16
major * (minor / parts) = 16 * (256 / 16) = 256 possible drives
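As a rough illustration of that calculation (a sketch based on the static allocation in devices.txt, not the kernel's own allocation code):

/* Sketch of the static sd major/minor scheme from Documentation/devices.txt:
 * majors 8, 65-71 and 128-135, with 16 minors (1 whole disk + 15 partitions)
 * per disk. */
#include <stdio.h>

static const int sd_majors[16] = {
      8,  65,  66,  67,  68,  69,  70,  71,
    128, 129, 130, 131, 132, 133, 134, 135
};

int main(void)
{
    /* 16 majors * (256 minors / 16 minors per disk) = 256 whole-disk nodes */
    for (int disk = 0; disk < 256; disk += 85) {    /* print a few sample disks */
        int major = sd_majors[disk / 16];           /* 16 disks per major       */
        int minor = (disk % 16) * 16;               /* minor 0 of that disk     */
        printf("disk %3d -> %d:%d\n", disk, major, minor);
    }
    printf("total under this scheme: %d disks\n", 16 * (256 / 16));
    return 0;
}

The last sample printed, 135:240, is exactly the point where the extended enumeration described in the last answer takes over.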

I've previously seen people write 128 as the limit. I believe Linux more recently added majors 128..135, which would explain the discrepancy.

The naming scheme (e.g. /dev/sdbz7) is chosen by userland, not by the Linux kernel. In most cases the device nodes are managed by udev, eudev, or mdev (though in the past they were created manually). I don't know their naming schemes. Don't rely on all Linux-based systems naming devices the same way, as the system administrator can modify the device naming policies.

  • "The naming scheme (/dev/sdbz7) is chosen by userland": this is wrong. The kernel assigns the device name in sd_format_disk_name().
    – uncleremus
    Commented Dec 21, 2022 at 15:39
1

The RHEL technology capabilities and limits page suggests at least 10000 with a recent enough kernel (see the 'Maximum number of device paths ("sd" devices)' row). This amount is greater than that mentioned by @luiji-maryo because:

  1. If the kernel is configured to allow it, a device can be allocated major/minor numbers dynamically (see https://www.kernel.org/doc/Documentation/devices.txt for details).
  2. Minor Linux device numbers can be much bigger than an 8-bit value.

One way to show this to yourself is using the scsi_debug module:

modprobe scsi_debug max_luns=10 num_tgts=128

After a short wait, on a mainstream Linux distro you should now have 1280 more SCSI disks (128 targets × 10 LUNs). You can use

ls -l <pathtodisk>

to see their major/minor numbers.
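If you'd rather check programmatically than eyeball the ls output, here is a minimal sketch (assuming you pass it the path of one of the new /dev/sd* nodes) that prints the numbers via stat():

/* Sketch: print a block device's major/minor numbers. Pointing it at nodes
 * created by scsi_debug shows minor numbers well above 255 once you have
 * enough disks. */
#include <stdio.h>
#include <sys/stat.h>
#include <sys/sysmacros.h>   /* major(), minor() */

int main(int argc, char **argv)
{
    struct stat st;

    if (argc != 2) {
        fprintf(stderr, "usage: %s /dev/sdXX\n", argv[0]);
        return 1;
    }
    if (stat(argv[1], &st) != 0) {
        perror("stat");
        return 1;
    }
    printf("%s: major %u, minor %u\n",
           argv[1], major(st.st_rdev), minor(st.st_rdev));
    return 0;
}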

NB (1): virtualisation software normally imposes much lower limits (in the hundreds or fewer; see e.g. the vSphere 6.0 limits) on the maximum number of controllers that can be attached to a VM and the maximum number of disks you can hang off those controllers, so you're unlikely to hit Linux's limits that way.

NB (2): Both BSG and SG limit themselves (via BSG_MAX_DEVS and SG_MAX_DEVS respectively) to a maximum of 32768 devices. Even if you somehow didn't need /dev/ entries for the disks themselves you would have difficulty sending down more specialised SCSI commands without these extra devices.

0

The answer from the kernel source is 262144 (possibly 1048576).

  • There are 16 major numbers for SCSI disks: 8, 65..71, 128..135.
  • Out of the 20 bits the kernel reserves for minor numbers, 4 are used for enumerating partitions.
  • For historical reasons, the disks aren't enumerated contiguously. After 135:240 comes 8:256, ..., 65:256, ..., 135:496, 8:512, ..., etc.

I'm not quite sure why the comment in the kernel source says "16k disks" per major, while the remaining 16 minor bits would in theory suffice for 64k, giving us 1M disks. I suppose nobody has tried more than 262k disks yet.
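Spelling out the arithmetic behind those two figures:

16 majors * 16384 ("16k") disks per major = 262144 disks
16 majors * 65536 disks per major (using all 16 remaining minor bits) = 1048576 disks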

The device name of disk 262143 (135:262128) is /dev/sdnwtl.
