83

I understand what mounting is in Linux, and I understand device files. However I do not understand WHY we need to mount.

For example, as explained in the accepted answer of this question, using this command:

mount /dev/cdrom /media/cdrom

we are mounting the CDROM device at /media/cdrom and can then access the files of the CDROM with the following command

ls /media/cdrom

which will list the content of the CDROM.

Why not skip mounting altogether, and do the following?

ls /dev/cdrom

and have the content of the CDROM listed? I expect one of the answers to be: "This is how Linux is designed." But if so, then why was it designed that way? Why not access the /dev/cdrom directory directly? What's the real purpose of mounting?

12
  • 3
    You may also be interested in Trouble with understanding the concept of mounting
    – PM 2Ring
    Commented Jan 8, 2015 at 6:41
  • 23
    Note pretty much all operating systems "mount". It's just transparent in most cases. When in Windows you pick "safely remove" for a pendrive, you're actually performing umount, after it was automatically mounted by the system. Linux just doesn't isolate the user so far from the process, so as a result you are able to 'customize' it more - say, a umsdos partition doesn't differ in any visible way from vfat, but if you use mount -t umsdos to mount it, you have all the Linux permissions, ownerships, special files, fifos, etc. If you mount -t vfat it behaves like a plain Windows partition.
    – SF.
    Commented Jan 8, 2015 at 7:02
  • 2
    help.ubuntu.com/community/Autofs
    – jamesdlin
    Commented Jan 8, 2015 at 19:46
  • 8
    "I understand what mounting is in linux, and I understand device files." Apparently not ;) Commented Jan 9, 2015 at 16:50
  • 6
    Why not access the /dev/cdrom directory directly? Because it's not a directory.
    – Brandon
    Commented Jan 11, 2015 at 3:41

9 Answers

76

One reason is that block-level access is a bit lower-level than ls would be able to work with. /dev/cdrom or /dev/sda1 may be your CD-ROM drive and partition 1 of your hard drive, respectively, but they aren't implementing ISO 9660 / ext4 - they're just raw pointers to those devices, known as device files.

One of the things mount determines is HOW to use that raw access - which filesystem logic / driver / kernel module will manage the reads and writes, translating ls /mnt/cdrom into which blocks need to be read, and interpreting the content of those blocks into things like file.txt.
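
As a sketch (using the question's device and mount point), the -t flag names that filesystem driver explicitly:

mount -t iso9660 /dev/cdrom /media/cdrom    # interpret the raw blocks as an ISO 9660 data CD
mount -t udf /dev/cdrom /media/cdrom        # or as UDF, if that's what was burned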

Other times, this low-level access can be good enough; I've read from and written to serial ports, USB devices, tty terminals, and other relatively simple devices directly. But I would never try to manually read/write from /dev/sda1 to, say, edit a text file, because I'd basically have to reimplement ext4 logic, which may include, among other things: look up the file's inode, find the storage blocks, read the full blocks, make my change(s), write the full blocks back, then update the inode (perhaps), or instead write all this to the journal - much too difficult.
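
A couple of illustrative one-liners of that legitimate raw access, where no filesystem is involved at all (device names will vary):

dd if=/dev/sda of=mbr.bin bs=512 count=1    # copy the first sector (the MBR), byte for byte
cat /dev/ttyS0                              # read whatever arrives on the first serial port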

One way to see this for yourself is just to try it:

[root@ArchHP dev]# cd /dev/sda1
bash: cd: /dev/sda1: Not a directory

/dev is a directory, and you can cd and ls it all you like. /dev/sda1 is not a directory; it's a special type of file that the kernel offers up as a 'handle' to that device.
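
You can see the difference in the first character of ls -l output: d marks a directory, b a block special file (sample output; details will vary):

$ ls -l /dev/sda1
brw-rw---- 1 root disk 8, 1 Jan  8 04:55 /dev/sda1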

See the Wikipedia entry on device files for a more in-depth treatment.

11
  • 4
    I'm glossing over some details, because I think bad things will happen if you just start writing to whatever data is stored on /dev/sda1, and thus I assume there are some preventative measures or perhaps an abstraction that would stop you from overwriting things. But to sum it up, if you knew exactly how and where to write to the disk you could do it manually through /dev/sda1. Note some tools DO interact directly with raw disks, such as swapon/swapoff and dd.
    – Ehryk
    Commented Jan 8, 2015 at 4:55
  • 4
    Just to add a bit more, mounting initializes the filesystem and thus also activates an entire layer of automatic handling of input/output operations that is transparent to the user (such as caching files in RAM, queueing the operations, holding the states of open files and so on). This is why you also have to unmount the filesystem correctly to avoid corruption (or at least sync it). Mounting is present on all commonly used platforms, not just Linux. If mounting is automatically handled by the desktop environment (KDE or GNOME), it's just as hidden as in MS Windows.
    – orion
    Commented Jan 8, 2015 at 8:01
  • 6
    @Ehryk (3 comments up) the only preventative measure in a typical Linux system is the filesystem permissions - in other words, you have to use the root account to write to a device file. If you do, you can cat >/dev/sda1 to your heart's content and Linux won't stop you. (Needless to say, doing so would completely corrupt the filesystem.)
    – David Z
    Commented Jan 8, 2015 at 9:04
  • 5
    @psusi You couldn't do that on Windows 95, true. But it was present (and well hidden) on MS DOS and Windows NT. Your modern NT-based Windows certainly allows you to mount and unmount partitions at will (even into folders on other partitions, and even into multiple folders at the same time) - it just usually mounts all unknown partitions to drive letters by default. You can also access the device without mounting it by using its full path (very similar to the unix way), but only if it's not locked - which it of course is if it's currently mounted.
    – Luaan
    Commented Jan 8, 2015 at 16:22
  • 3
    @Luaan & psusi Psusi has the right of it. But for nearly all intends and purposes the effect for the caller of the Win32 API is basically identical. (For Posix compliance there is even an emulation of mount semantics.) Win9x actually does have the concept of mount because it still runs on top of DOS. FAT support was build in into DOS as a native handler somewhat similar to the way the NT kernel handles it. But CDROM and network filesystems had to be mounted. (Remember MSCDEX for CD. This provided the ISO/RockRidge filesystem handler and mounted it on drive-letter).
    – Tonny
    Commented Jan 8, 2015 at 21:02
23

Basically, and to put it simply, the operating system needs to know how to access the files on that device.

mount is not only "giving you access to the files"; it's telling the OS which filesystem the drive has, whether access is read-only or read/write, etc.
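
You can see exactly what the OS has been told by running mount with no arguments; it lists each device, its mount point, the filesystem type, and the options in effect (illustrative output):

$ mount
/dev/sda1 on / type ext4 (rw,relatime)
/dev/cdrom on /media/cdrom type iso9660 (ro)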

/dev/cdrom is a low-level device; the operating system functions wouldn't know how to access it... imagine you put a weirdly formatted CD-ROM in the drive (even an audio CD): how would ls tell which files (if any) are on the CD-ROM without "mounting" it first?

Note that this happens automatically in many OSes (even in Linux, on some distributions and graphical interfaces), but that doesn't mean other OSes are not "mounting" the drives.

10

I'd call it historical reasons. Not that the other answers are wrong, but there's a bit more to the story.

Compare Windows: Windows started as a single-computer, single-user OS. That single computer probably had one floppy drive and one hard drive, no network connection, no USB, no nothing. (Windows 3.11 had native networking capabilities; Windows 3.1 didn't.)

The kind of setting Windows was born into was so simple that there was no need to be fancy: just mount everything (all two devices) automatically every time; there aren't (weren't) many things that could go wrong.

In contrast, Unix was made to run on server networks with multiple users from the very start.

One of the Unix design decisions was that the file system should appear as a single uniform entity to the end users, no matter how many computers the physical disks were spread over, no matter what kind of disk, and no matter which of dozens of computers the user accessed it from. The logical path to the user's files would stay the same, even if the physical location of those files had changed overnight, e.g. due to server maintenance.

They were abstracting the logical file system, paths to files, from the physical devices that stored those files. Say server A is normally hosting /home, but server A needs maintenance: just unmount server A and mount backup server B on /home instead, and no one apart from the administrators would even notice.
(Unlike the Windows convention of giving different names to different physical devices - C:, D:, etc. - which works against the transparency that Unix was striving for.)
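
With NFS, for example, the server swap described above could be just a pair of commands (hostnames and export paths hypothetical):

umount /home
mount -t nfs serverB:/export/home /home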

In that kind of setting, you can't just mount everything in sight willy-nilly.

In a large network, individual disks and computers are out of commission constantly. Administrators need the ability to say what is mounted where and when, e.g. to do a controlled shutdown of one computer while another computer transparently takes over hosting the same files.

So that's the historical perspective: Windows and Unix came from different backgrounds. You could call it a cultural difference, if you like:

  • Unix was born in an environment where the administrator needed to control mounting; of the dozens of storage devices on the network the admin must decide what is mounted where and when.
  • Windows was born in a setting where there was no administrator and only two storage devices, and the user would probably know whether their file was on the floppy or the hard drive.
  • (Linux was born as a single-computer OS, of course, but it was also explicitly designed from the start to mimic Unix as closely as possible on a home computer.)

More recently, the OSes have been moving closer to each other:

  • Linux has added more single-computer, single-user stuff (like automounting); as it became frequently used in single-computer settings.
  • Windows has added more security, networking, support for multiple users etc.; as networking became more ubiquitous and Microsoft started making an OS for servers as well.

But it's still easy to tell that the two are the result of different traditions.

1
  • 2
    It's not just that. The devices are low-level abstractions for filesystem drivers (and possibly other system software) to use. Unix was designed to be an operating system for operating system programmers. For instance, the programmers of those filesystem drivers. That is why these low-level abstractions are exposed to the user. Commented Jan 9, 2015 at 17:18
8

For consistency

Imagine you have some partitions on the first hard drive in your system, for example /dev/sda2. You later decide that the drive isn't large enough, so you purchase a second one and add it to the system. All of a sudden, the new drive may become /dev/sda, and your current drive becomes /dev/sdb. Your partition is now /dev/sdb2.

Using your proposed system, you'd have to change all scripts, applications, settings, etc. that access the data on your old partition to reflect this change in names.

However, mounting allows you to still use the same mount point for this renamed drive. You'd have to edit /etc/fstab to tell your system that (for example) /media/backup is now /dev/sdb2 instead, but that is only one edit.

Note that modern-day systems are even easier. Instead of referencing the device as /dev/sda2 or /dev/sdb2, they have UUIDs, which look like c5845b43-fe98-499a-bf31-4eccae14261b, or can be given friendlier labels such as backup, either of which can be used to reference the device when mounting. This way, the device name doesn't change when adding a new device, which makes administration even simpler:

# mount LABEL="backup" /media/backup

For safety

By requiring a device to be mounted, the administrator can control access to the device. The device can be removed when unmounted, but not when in use (unless you want to suffer data loss). If you are (were) a Windows user, remember the little green icon in the notification area that tells you it's safe to remove a USB stick? That is Windows mounting and unmounting the stick for you. So the principle isn't just a Unix/Linux one.
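
For example (mount point as above), unmounting flushes pending writes and refuses while the device is still in use, which is exactly the protection being described (illustrative session; the exact message varies by umount version):

# umount /media/backup
umount: /media/backup: target is busy.
# lsof +f -- /media/backup    # show which processes still hold files open there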

2
  • The universal IDs are actually UUIDs, not Microsoft's GUIDs.
    – Ruslan
    Commented Jan 12, 2015 at 10:21
  • @Ruslan - so they are! I had my MS head on at the time. Many thanks - I've changed it. Commented Jan 12, 2015 at 10:39
6

The question title asks: Why do we need to mount on Linux?

One way to interpret this question: Why do we need to issue explicit mount commands to make file systems available on Linux?

The answer: we don't.

You don't need to mount file systems explicitly; you can arrange for it to be done automatically, and Linux distributions already do this for most devices, just like Windows and Macs do.

So that probably isn't what you meant to ask.

A second interpretation: Why do we sometimes need to issue explicit mount commands to make file systems available on Linux? Why not make the operating system always do it for us, and hide it from the user?

This is the question I am reading in the question text, when you ask:

Why not skip mounting altogether, and do the following

ls /dev/cdrom

and have the content of the CD-ROM listed?

Presumably, you mean: why not just have that command do what

ls /media/cdrom

does now?

Well, in that case, /dev/cdrom would be a directory tree, not a device file. So your real question seems to be: why have a device file in the first place?

I'd like to add an answer to the ones already given.

Why do users get to see device files?

Whenever you use a CD-ROM, or any other device that stores files, a piece of software is used that interprets whatever is on your CD-ROM as a directory tree of files. It is invoked whenever you use ls or any other kind of command or application that accesses the files on your CD-ROM. That software is the file system driver for the particular file system used to write the files to your CD-ROM. Whenever you list, read or write files on a file system, it's the job of that software to make sure that the corresponding low-level read and write operations are performed on the device in question.

Whenever you mount a file system, you're telling the system which file system driver to use for the device. Whether you do this explicitly with a mount command, or leave it to the OS to be done automatically, it will need to be done, and of course the file system driver software will need to be there in the first place.

How does a file system driver do its job? The answer: it does it by reading from and writing to the device file. Why? The answer, as you stated already: Unix was designed this way.

In Unix, device files are the common low-level abstraction for devices. The really device-specific software (the device driver) for a particular device is supposed to implement opening, closing, reading and writing on the device as operations on the device file. That way, higher-level software (such as a file system driver) doesn't need to know as much about the internal workings of individual devices. The low-level device drivers and the file system drivers can be written separately, by different people, as long as they agree on a common way to interface with each other, and that is what the device files are for.
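
You can imitate that bottom layer yourself: reading raw bytes through the device file, with no file system driver involved. For instance, an ext4 superblock sits 1024 bytes into its partition (device name illustrative):

dd if=/dev/sda1 bs=1024 skip=1 count=1 | xxd | head    # dump the superblock, raw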

So file system drivers need the device files.

But why do we, ordinary users, get to see the device files? The answer is that Unix was designed to be used by operating system programmers. It was designed to allow its users to write device drivers and file system drivers. That is in fact how they get written.

The same is true for Linux: you can write your own file system driver (or device driver), install it, and then use it. It makes Linux (or any other variant of Unix) easily extensible (and it is in fact the reason Linux was started): when some new piece of hardware comes on the market, or a new, smarter way to implement a file system is designed, someone can write the code to support it, make it work, and contribute it to Linux.

Device files make this easier.

1
  • 1
    very well explained
    – Shailendra
    Commented Jan 15, 2018 at 12:50
5

There are several advantages to the current arrangement. They can be grouped into advantages of block special files and advantages of mountpoints.

Special files are files that represent devices. One of the ideas that Unix was built on is that everything is a file. This makes many things simple; for example, user interaction is just file reads and writes on a tty device, which is a character special file. Likewise, checking for bad blocks, partitioning or formatting a disk is just file operations. It does not matter if the disk is MFM, IDE, SCSI, Fibre Channel, or something else; it is just a file.

But on the other hand you may not want to deal with the whole disk or partition, just the files, and in many cases more files than will fit on a disk. So we have mountpoints. A mountpoint allows you to put a whole disk (or partition) on a directory. Back in my Slackware days, when a good-sized hard disk was a couple hundred MB, it was common to use the CD as /usr and the hard disk for /, /usr/local, and swap. Or you could put / on one drive and /home on another.

Now, I noticed that you mentioned mounting your CD on /media/cdrom, which is handy for computers with only one CD-ROM drive, but what if you have more than one? Where should you mount the second? Or the third? Or the fifteenth? You could certainly use /media/cdrom2, etc. Or you could mount it on /src/samba/resources/windows-install, or /var/www, or wherever it made sense to do so.
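
Nothing ties a given device to a given directory; all of the following are legal (device names and the latter mount points hypothetical):

mount /dev/cdrom  /media/cdrom
mount /dev/cdrom1 /var/www                                 # serve a disc's contents as a web root
mount /dev/cdrom2 /src/samba/resources/windows-install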

7
  • I think OP meant why not skip the whole mount entirely, and just interact with /dev/cd0, /dev/cd2, /dev/sda1, /dev/sda2 directly - each already has a designated 'directory' of sorts.
    – Ehryk
    Commented Jan 8, 2015 at 5:32
  • 1
    You are correct, but would you really find /dev/sdb9/share/doc/package/README a good path? Even d:/share/doc/package/README is better, but /usr/share/doc/package/README has semantics! That is the value of a mountpoint.
    – hildred
    Commented Jan 8, 2015 at 5:39
  • 3
    I suspect the semantic usage came later as a useful byproduct of the utter necessity of 'putting some code in between the directory system and that raw file pointer to the device', because using cd/ls/nano/everything else is much easier than raw writes: dd if=/file of=/dev/sda2 bs=4096 skip=382765832 count=84756, let alone the associated inode/FAT/journal updating.
    – Ehryk
    Commented Jan 8, 2015 at 5:45
  • (some Linux masochists would probably love /dev/sdb9 as working directories, I'm sure)
    – Ehryk
    Commented Jan 8, 2015 at 5:47
  • 2
    My first computer ran CP/M on two 8" floppies. It did not support sub-directories at all: one directory per disk. A path looked like b:name.ext. The idea of semantic naming was already established; even tape systems used filenames. UNIX had already rejected the idea of drive letters for mountpoints. By the way, @Ehryk, did you know that you can mount a drive on a directory not only in Windows but in DOS? I did it on MS-DOS 5. Besides, when I'm looking for a man page I don't want to have to remember which computer it is on, much less which drive.
    – hildred
    Commented Jan 8, 2015 at 7:46
4

Many database engines can work directly with raw disks or partitions. For example, MySQL:

http://dev.mysql.com/doc/refman/5.7/en/innodb-raw-devices.html

This avoids the overhead of going through filesystem drivers, when all the DB engine really needs is one huge file that fills the disk.
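
As a sketch based on those linked docs, the relevant my.cnf lines hand InnoDB the partition's device file directly (device name hypothetical; the docs describe initializing with newraw first, then switching to raw):

cat >> /etc/my.cnf <<'EOF'
[mysqld]
innodb_data_home_dir=
innodb_data_file_path=/dev/sdb1:3Gnewraw
EOF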

3

Because /dev/cdrom is a device, whereas /media/cdrom is a filesystem. You need to mount the former on the latter in order to access the files on the CD-ROM.

Your operating system is already automatically mounting the root and user filesystems from your physical hard disk device when you boot your computer. This is just adding more filesystems to use.

All operating systems do this; however, some (such as Windows, when it mounts a CD-ROM onto D:) do it transparently. Linux leaves it to you so that you have greater control over the process.

2
  • 2
    I have to disagree with your wording. /dev/cdrom is a device file (one with special abilities allowing us to easily have I/O communication from/to the associated device). /media/cdrom is a directory, but essentially it is another file (remember, everything is a file in Linux, including directories). Now, when we mount, we end up having a special ability to view the contents of the device file as a filesystem. My understanding of the last sentence is from reading the answers above.
    – Greeso
    Commented Jan 10, 2015 at 1:32
  • @Greeso: I stand by my answer. Commented Jan 10, 2015 at 7:26
0

It does so because, with many kinds of media on desktop and laptop UIs, there is ambiguity about what to do when the media is inserted: user intuition says that inserting a disk into the physical box the user interacts with is no different from, say, inserting it into a device next to the computer that has a network connection.

Thus, in the fundamental sense, the UI for media needs to treat the two kinds of potential mount events similarly, yet there is no good way for computers to handle network mounts in as intuitive a manner as one can on UIs for devices such as smartphones, tablets and wearable computers, which lack the possibility of inserting physical media. (Note how horrible the iPhone interface is for switching SIM cards, the one kind of physical media iOS devices have inserted into them.)

Note also that other popular approaches to UIs for this type of physical box (for example, Windows 98, Windows 8, Mac OS X v10.2 (Jaguar), and Mac OS X v10.9 (Mavericks)) run into the same issues, and use additional GUI dialogs to sort out the potential confusion (for example, Windows 8 is typically configured to prompt, for each new CD inserted, whether it should be mounted as a filesystem, as music media, or, if appropriate, as a collection of MP4 videos). There is no reason why any of these user dialogs cannot be used with Linux or other UNIXes.
