There are two main reasons that computer systems need an orderly shutdown:

## Application state

Many applications have state that must be written to permanent storage. The obvious example is a database server, but even read-mostly applications such as Web or NTP servers may write logs or statistics which may be unintelligible if a write is interrupted.

It may be possible to alleviate this problem if the applications in question don't read or write files directly, but perform these operations via a transactional mechanism such as writing to a relational database.
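
As a minimal sketch of that approach (the table and values here are invented for illustration), SQLite wraps the statements inside a transaction so the commit is all-or-nothing:

```python
import sqlite3

# A hypothetical state database, purely for illustration.
conn = sqlite3.connect("app-state.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS stats (name TEXT PRIMARY KEY, value INTEGER)"
)

# The with-block wraps both statements in one transaction: either
# both reach the database or neither does.  If power fails mid-commit,
# SQLite's journal rolls the file back to the last committed state on
# the next open, rather than leaving a half-written record.
with conn:
    conn.execute("INSERT OR REPLACE INTO stats VALUES ('requests', 42)")
    conn.execute("INSERT OR REPLACE INTO stats VALUES ('bytes_sent', 1024)")
```

How much protection this buys depends on the database's durability settings, of course; the point is that the application never leaves a half-written record of its own.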

## Filesystem structure

As the operating system writes files on behalf of the applications, writes may be buffered until the disks catch up, meaning that applications' writes don't necessarily complete until quite some time afterwards. Power saving mechanisms tend to increase the delay here, so you have a trade-off between energy consumption and data safety.
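
An application that can't tolerate that delay can force its own writes out of the caches. A minimal sketch in Python (the filename and record are invented):

```python
import os

# Append a log record and push it to stable storage before carrying on.
# flush() empties Python's userspace buffer into the kernel's page
# cache; fsync() then asks the kernel to write the cached pages out to
# the disk itself.
with open("app.log", "a") as f:
    f.write("request served\n")
    f.flush()
    os.fsync(f.fileno())
```

Note that some drives with volatile write caches can still acknowledge a flush early, so fsync() narrows the window rather than guaranteeing it's closed on all hardware.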

Whilst data are being written to disk, there are points where the filesystem data are inconsistent. Modern filesystem implementations take care to minimise these periods, but they can't be eliminated entirely. For example, when a block is taken from the free list, there is a short window where it is neither allocated nor free. This consistency problem is why, after an unclean shutdown, an OS needs to perform a filesystem check on the next boot, examining all blocks to ensure they are correctly accounted for.

Journalling filesystems alleviate this to some extent, by recording intended changes into a log before actually performing them. Then the filesystem check can run much faster, by replaying all the complete log entries and discarding incomplete ones.
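
The core idea can be sketched in a few lines (this is a toy sketch of write-ahead logging, not how any real filesystem lays out its journal):

```python
import os

def apply_change(change):
    """Placeholder for the real update -- in a filesystem this would be
    the block allocations, directory entries, and so on."""

def journalled_update(change, log_path="journal.log"):
    # 1. Record the intent and force it to disk first.
    with open(log_path, "a") as log:
        log.write(f"BEGIN {change}\n")
        log.flush()
        os.fsync(log.fileno())

    # 2. Only now perform the change itself.
    apply_change(change)

    # 3. Mark the entry complete.  Crash recovery replays entries that
    #    have a matching END and discards the rest, so it only needs to
    #    read the log instead of scanning every block on the disk.
    with open(log_path, "a") as log:
        log.write(f"END {change}\n")
        log.flush()
        os.fsync(log.fileno())
```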

Filesystem consistency issues can be avoided by not having local disks, and NFS-mounting the root filesystem, but the loss of cached writes is still a problem for these systems. The only systems I'm willing to hard power-off without shutdown are those that have the disks mounted read-only (mostly embedded systems such as my Empeg Car music player, but also a couple of diskless web-browsing terminals I have lying around for visitors).

## TL;DR

Data writes to permanent storage must be completed before power-off. If you have no writeable storage, then removing the power is low risk.
