127

I have just learned that one should "Never defragment your SSD". But I have no idea if that is true.

I believe that Windows 10 had automatically scheduled a defragmentation run on my SSD, but I cancelled it. Could the defragmentation passes that already ran have caused any problems?

The SSD is still not partitioned, because I cannot see the SSD drive in the My Computer folder, only in the system hardware manager. What are the correct steps I should take to install Windows on an SSD (it's my first SSD)?

5
  • 12
    The advice to "never defragment your SSD" is obsolete and comes from a time when SSDs were slower and had much more limited write endurance than modern SSDs. Modern SSDs tend to be IOPS limited, and defragmented file systems need fewer I/Os. Commented Nov 28, 2016 at 18:34
  • 5
    To @DavidSchwartz's point, the amount of writes/deletes needed to kill a modern SSD is ridiculously high. Unless you are processing an extraordinary amount of information, your SSD will most likely last longer than many of your other components even if you are performing conventional defrags.
    – user488701
    Commented Nov 28, 2016 at 20:23
  • 31
    Why would you want to defragment an SSD? The point of defragmentation is to make files be contiguous on the disk, so the read heads don't have to seek all over the place (which takes time, as it involves physical movement) to read the file. I'm no expert, but AFAIK SSDs are solid state and random access. All accesses take the same time, so it shouldn't matter how file blocks are distributed.
    – jamesqf
    Commented Nov 30, 2016 at 4:46
  • 3
    Possible duplicate of Why can't you defragment Solid State Drives? Commented Nov 30, 2016 at 12:06
  • 3
    @jamesqf Did you read my comment? You asked a question I already answered -- Modern SSDs are typically IOPS limited. Reading a file that's in four fragments takes at least four IOs. Reading a file that's in one fragment may take only one. Commented Dec 5, 2016 at 4:53

8 Answers

141

Let Windows do its job. Once per month it performs a real, full defragmentation, even on an SSD, to optimize the internal metadata.

The short answer is, yes, Windows does sometimes defragment SSDs, yes, it's important to intelligently and appropriately defrag SSDs, and yes, Windows is smart about how it treats your SSD.

Here is a reply from Microsoft:

Storage Optimizer will defrag an SSD once a month if volume snapshots are enabled. This is by design and necessary due to slow volsnap copy on write performance on fragmented SSD volumes. It’s also somewhat of a misconception that fragmentation is not a problem on SSDs. If an SSD gets too fragmented you can hit maximum file fragmentation (when the metadata can’t represent any more file fragments) which will result in errors when you try to write/extend a file. Furthermore, more file fragments means more metadata to process while reading/writing a file, which can lead to slower performance.

As far as Retrim is concerned, this command should run on the schedule specified in the dfrgui UI. Retrim is necessary because of the way TRIM is processed in the file systems. Due to the varying performance of hardware responding to TRIM, TRIM is processed asynchronously by the file system. When a file is deleted or space is otherwise freed, the file system queues the trim request to be processed. To limit the peak resource usage this queue may only grow to a maximum number of trim requests. If the queue is of max size, incoming TRIM requests may be dropped. This is okay because we will periodically come through and do a Retrim with Storage Optimizer. The Retrim is done at a granularity that should avoid hitting the maximum TRIM request queue size where TRIMs are dropped.

So install Windows on the SSD and forget it. Windows will do everything on its own.
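
If you want to see for yourself what the Storage Optimizer thinks of your volume (a minimal check of my own, assuming Windows 8 or later and an elevated PowerShell prompt; it is not part of the Microsoft reply above), neither of these commands changes anything on the drive:

    # Report fragmentation statistics for C: without modifying the volume
    Optimize-Volume -DriveLetter C -Analyze -Verbose

    # The same analysis with the classic command-line tool (/A = analyze, /U = show progress)
    defrag C: /A /U

The monthly pass itself runs on the schedule shown in the dfrgui ("Optimize Drives") UI mentioned in the quote.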

4
  • 6
    Fragmentation is kept to a minimum in EXT but can still occur in specific use cases, at least in Ext3: en.wikipedia.org/wiki/Ext3#Disadvantages
    – Bret
    Commented Dec 2, 2016 at 15:39
  • 6
    EXT variants do fragment and lose performance. Anyone saying otherwise is peddling Linux-superiority lies. Source: I implemented a driver for it and it does.
    – geometrian
    Commented Dec 3, 2016 at 21:37
  • 1
    @spraff That's because ext leaves space in between the blocks, so the file doesn't need to be partly written somewhere else when it grows. Basically, when you have a video file (which never grows) it still takes more space than needed and thus wastes space. No system is perfect.
    – BlueWizard
    Commented Dec 5, 2016 at 9:24
  • @spraff Also ext doesn't support volume snapshots at all, performance of synchronous TRIM on Linux is terrible, and async TRIM is not implemented.
    – gronostaj
    Commented Jan 21, 2021 at 9:07
49

I have just learned that one should "Never defragment your SSD". But I have no idea if that is true.

A little knowledge is a dangerous thing. Never defragmenting your SSD would be good advice if your system were utterly clueless about what an SSD is - say, Windows XP - and if SSDs were fragile snowflakes likely to wear out and melt in the harsh heat of normal usage. I have a detailed answer on why that isn't true; it is pretty hard to 'wear out' a drive in normal usage. It might be handy to unlearn this advice.

Let's also take into account that if an application were killing SSDs, or even just doing unnecessarily heavy writes as Spotify once did, people would flip. And quite often the people who write OSes are smart.

I'm referencing this blog post from Scott Hanselman heavily for the rest of this answer. Magicandre's answer references it too, but I took away somewhat different lessons from it. It's worth a read for the details; I'm taking a few liberties with how I represent the information. I'd start with this:

I think the major misconception is that most people have a very outdated model of disk/file layout, and of how SSDs work.

SSDs do fragment, and these fragments need to be kept track of. At a fundamental level, defragging an SSD helps your file system run efficiently, even if the reasons differ from those for a spinning-rust drive. The post I referenced points out that volume snapshots would be slow without defragmentation.

SSDs also have the concept of TRIM. While TRIM (and the periodic "retrim") is a separate concept from fragmentation, it is still handled by the Windows Storage Optimizer subsystem, and its schedule is managed by the same UI from the user's perspective.

TRIM is good. TRIM saves on writes, since it's a mechanism for marking blocks as no longer in use without erasing them immediately, and erasing them as needed.
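
If you want to check that TRIM is actually enabled (a quick sanity check of my own, not from the blog post; run it from an elevated PowerShell or command prompt):

    # 0 means delete notifications (TRIM) are sent to the drive; 1 means they are disabled
    fsutil behavior query DisableDeleteNotify

Windows 10 enables it by default for drives it recognises as SSDs.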

Whoever told you to never defragment a drive has no idea that modern OSes are designed for SSDs and that the necessary housekeeping processes are rolled in.

While it's tempting to assume you know better, in this case the people who wrote the OS have optimised things for you. Keep calm, and let Windows defragment your drive.

2
  • 9
    I think this answer would be greatly improved by mention of logical block addresses vs physical block addresses. It's not possible to defragment an SSD; the filesystem-level procedure that results in sequential logical addresses will still leave data scattered all around the physical disk, due to the flash mapping, and that's OK because SSDs are random-access.
    – Ben Voigt
    Commented Nov 29, 2016 at 20:07
  • To be honest, that's a concept that I still haven't wrapped my head around. I do believe a full treatment of that would make an awesome answer for one of my questions, and I have some rep floating around in my test account that I'd be happy to award as a bounty for it.
    – Journeyman Geek
    Commented Nov 30, 2016 at 0:28
21

For completeness' sake:

Fragmentation depends on the filesystem (FS), not on the disc or OS.

This means that the answer to your question does not really depend on Windows*; the SSD is the special case - it works differently than an ordinary disc.

An FS is a way of organizing your files on the disc. The most common Windows formats are NTFS and FAT32. The most commonly used FSs on Linux are ext3/ext4, but there are many others (zfs, xfs, jfs, ReiserFS, btrfs, and more).

A disc is divided into blocks. You can imagine it as a long tape on which you can write some data. When you write something onto the disc, you use these blocks. Obviously you want related files to be written next to each other, and a single file to be written to contiguous blocks, so you don't have to jump around the tape. When things are all scattered around, that's what we call fragmentation. Defragmentation organizes them.

Obviously how you organize things (the FS) determines how well they are organized (whether there is fragmentation). If you organize your files from the start, you won't have fragmentation. That's what happens in some filesystems (e.g. the ext family). These filesystems organize your files on the fly (before writing), so you don't have to defragment them except under special circumstances when there wasn't any other choice but to introduce a little disorder.

For more information about ext4 and how it prevents fragmentation, you can refer to this page
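
If you want to see this for yourself on Linux (a hedged example; the paths are just placeholders), e2fsprogs ships read-only tools for inspecting fragmentation:

    # List the extents (fragments) a single file occupies
    filefrag -v /path/to/some/large/file

    # Report a fragmentation score for a directory or mount point without changing anything (-c)
    sudo e4defrag -c /home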

Now an SSD works differently; it's not a tape. You get instant access everywhere. The whole point of defragmentation is that you organize your files neatly so that you don't have to jump around. There's no cost to "jumping around" on an SSD. You don't care whether you have to go back and forth to the other end of the tape; there is no tape.

However, there are other ways of optimizing an SSD. See this topic for clarification.
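
On Linux, the usual SSD "optimization" is not defragmentation but periodic TRIM; a minimal sketch, assuming util-linux and systemd:

    # Trim free space on all mounted filesystems that support it, and report how much was discarded
    sudo fstrim -av

    # Most distributions ship a weekly timer that does the same thing automatically
    systemctl status fstrim.timer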

*Almost; filesystem choice is correlated with OS. Most Linux users use different FSs than Windows or OS X users do.

7
  • 3
    Exactly this. Fragmentation happens on FS level, regardless of storage medium. Some media are more affected by it than others, but there's always some impact, and it won't go away simply because you have an SSD. Commented Nov 29, 2016 at 10:37
  • 1
    The main difference is really Ext3 and FAT vs Ext4 and NTFS; but even then, applications, the OS and even the hardware do contribute, often significantly. Windows organizes files used at startup in the order that they are used, for example - allowing most of startup to use block reads instead of seeks. You could call that defragmentation - it's just that apart from defragmenting on the FS level (reducing file fragmentation), it also "defragments" groups of files to optimize access in a way the FS can't really help. You could imagine many similar optimizations, e.g. moving DLLs close to EXEs.
    – Luaan
    Commented Nov 29, 2016 at 11:47
  • There's many different layers that all matter in their own way. For example, striping is a way of intentional fragmentation that can improve performance. Physical organization of a HDD can also use this, if the HDD has multiple heads that can read from multiple platters concurrently. SSDs don't need to spin the platter and move heads to seek, but they still have an IOPS limit - and with the speed of SSDs today, this is often more important than raw bandwidth. SSD seeks are no longer fast enough to saturate the bandwidth. Fragmentation is a small part of the fundamental problem - caching.
    – Luaan
    Commented Nov 29, 2016 at 11:53
  • @Luaan: There's no such thing as an "SSD seek". I think you're actually talking about the per-command processing overhead.
    – Ben Voigt
    Commented Nov 29, 2016 at 20:10
  • 1
    @BenVoigt I think Luaan means that non-consolidated files still take longer to read even on SSDs. There's no seek delay, but there's a significant difference between sequential and random reads on SSDs. Commented Nov 30, 2016 at 8:24
7

The existing answers are great, but I have some things to supplement them...
I defragment my SSD and disable automatic TRIM, but for totally different reasons than the ones mentioned:

  1. I want to be able to recover files or partitions if & when I accidentally delete something.
    No, it doesn't happen often, but the few times it's happened, it's been quite frustrating to not be able to recover things that I would have been able to recover on a hard drive, even when I tried to recover them immediately after deletion.

  2. I expand, shrink, and even move partitions around every few months, and defragmenting and consolidating files makes this operation far faster and less risky. You'd think you could trust partition managers nowadays, but as recently as December 2015 I've run into errors (corruption) on plain move/resize operations. And the smarter partition managers try to avoid running on volumes that are heavily fragmented before any damage is done (and usually, though not always, succeed).

  3. I use Linux sometimes, and I've gotten burned by its corruption of NTFS volumes as recently as a year or so ago. This isn't due to fragmentation specifically, but seeing that it can't even handle unfragmented files properly, I'm on the defensive and try to present as clean a volume as possible to it (and even then, I avoid writes most of the time).

The sad part about #2 and #3 is, people who haven't seen these problems with their own eyes always think I'm nuts and am making this all up, or that my system must be broken somehow. But I've reproduced these quite a few times on multiple systems, and as someone who has written his own NTFS readers, I know a thing or two about file systems and kernel programming... with NTFS being at the center. So I know bugs when I see them. Nobody believes me, but I warn people anyway, given that I've seen these happen with my own eyes -- so, if you mess with partitions or use Linux at all, I recommend you keep your drives defragmented. YMMV.

Oh, and don't forget to run TRIM manually every once in a while when you don't need to recover anything. Although if I'm being honest I have yet to see any benefit from it...
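
For that manual TRIM pass, here is a hedged example of how to trigger it on Windows (my own commands, not the answerer's exact procedure; run from an elevated PowerShell prompt):

    # Send TRIM for all free space on the volume (a "retrim")
    Optimize-Volume -DriveLetter C -ReTrim -Verbose

    # Equivalent with the classic tool
    defrag C: /L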

3
  • 1
    Excellent; it is good procedure to defrag (and even "zero") flash drives for the same reasons, particularly for photography. Contiguous files are easier to recover, even partially. Seek time aside, you will also benefit from pre-fetch (read-ahead) if the OS is clever enough. SSDs tend to be slower than memory (not always true though).
    – mckenzm
    Commented Dec 2, 2016 at 16:44
  • Isn't your Linux advice actually "if you use Linux and mount NTFS file system read/write with it", not actually "if you use Linux at all"? Using Linux with ext4 or XFS on SSD is perfectly safe. (And FAT, for that matter, so you could make a FAT partition for data exchange if need be.)
    – mattdm
    Commented Dec 4, 2016 at 7:02
  • @mattdm: Yeah, I guess. (Weird, I thought I replied to this already...)
    – user541686
    Commented Dec 6, 2016 at 1:15
1

You can defragment your SSD. Should you? Not very often, but there are a few cases where an occasional defragmentation can be beneficial.

a) You have a Samsung 840 EVO affected by the slow-read-of-old-data bug. Defragmenting will effectively rewrite the files, so they won't be old data any longer.

b) While the effects of fragmentation on SSDs are extremely minor, the controller still has to reassemble files that are spread out over many flash chips. The performance impact of this is very small, but again defragmenting should reorganise the files and make them easier for the controller to reassemble.

c) If you have anything close to a modern SSD, an occasional defrag will not affect its lifetime enough to matter. In 2018 a hardware tech site ran an SSD endurance test, and the Samsung 840 EVO 500 GB, which uses 2D TLC (which has very bad endurance), failed at around 600 TB of writes. Anything better is likely using 3D TLC (called V-NAND by some companies), which has a lot more endurance. Bigger models also have more cells to write to, which further increases endurance. And if you have a large Pro drive, endurance is a complete non-issue (in the test I mentioned, the Samsung Pro 512 GB lasted for 9 PB of writes; larger/newer models should last even longer). These numbers are practically impossible to reach unless you are trying on purpose. Writes used to kill SSDs (cheap ones anyway; expensive models used SLC, which has a lot of endurance) back when it was a new technology, capacities were small, and controllers were stupid.
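
To put those endurance figures in perspective (my own back-of-envelope arithmetic, not from the test): even at a fairly heavy 40 GB of writes per day, 600 TB works out to roughly 600,000 GB ÷ 40 GB/day ≈ 15,000 days, or about 41 years of use, so an occasional defrag pass is noise by comparison.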

0

Yes, you can defrag your SSD; these days we don't need to worry so much about excessive read/write operations wearing out the NAND cells.

However, storage efficiency isn't the same as storage performance. When we read or write data to an SSD, we are addressing a memory space on the SSD. Because there are no moving parts, it doesn't matter where a file fragment is stored. Imagine a grid where each cell has an address, and the fragments of your file are stored in different cells: it doesn't matter whether the fragments sit in two cells next to each other (contiguous) or far apart (fragmented); the cells will be addressed and read/written just as quickly.

Remember, we're just reading 1s and 0s from addresses.

-4

Defragging an SSD promotes early failure of the lowest-addressed memory blocks.

See: http://techreport.com/review/27909/the-ssd-endurance-experiment-theyre-all-dead

"Even with wear-leveling algorithms spreading writes evenly across the flash, all cells will eventually fail or become unfit for duty. When that happens, they're retired and replaced with flash allocated from the SSD's overprovisioned area. This spare NAND ensures that the drive's user-accessible capacity is unaffected by the war of attrition ravaging its cells."

"The casualties will eventually exceed the drive's ability to compensate, leaving unanswered questions. How many writes does it take? What happens to your data at the end? Do SSDs lose any performance or reliability as the writes pile up?"

2
  • 5
    This is an extreme case where the drive's having large amounts of data written and overwritten to it. What does this have to do with regular, routine defragging?
    – Journeyman Geek
    Commented Nov 30, 2016 at 0:55
    No, it is not an extreme case... It is a test that shows the inherent or eventual weakness of the product. Tests are designed to show limits.
    – jwzumwalt
    Commented Dec 4, 2016 at 4:43
-6

Each cell on an SSD gets slower every time it is rewritten. The disk hides that wear by keeping track of which cells have been written to, and writing to little-used cells first. Defragging means massive rewriting on many cells, and therefore it will wear down the SSD. HDs benefit from defragging because performance is improved by not having to move the servo arm around often, but SSDs are not slowed as much by random placement of data.

1
  • 3
    You're confusing write amplification with wear leveling. SSDs have to erase before they can write, but they erase larger blocks than they can write. This means a single write might do multiple writes to shuffle data around and free up a block to erase. As more writes happen, this process gets more complicated. That's write amplification. There are various ways to deal with this like TRIM and over-provisioning. Wear leveling is spreading out which cells you write to so they don't fail entirely.
    – Schwern
    Commented Nov 28, 2016 at 21:03
