
I have a cleanly formatted 8TB NTFS drive on a Windows Server 2008 system. I'm copying 6TB of documents onto it using several parallel robocopy jobs. There are a large number of smallish files (~150 million), spread over a number of directories. On the whole, the files are too large to fit inline in the MFT. About three quarters of the way through, the performance of the copy dropped off significantly.

Looking at procmon, it appears the bottleneck is MFT expansion. I see each of the robocopy processes taking ~3.5s on CreateFile. Immediately after the first call is issued, I see an IRP_MJ_READ on the $Mft returning END OF FILE. Just before the CreateFile calls succeed, I see SUCCESS on another $Mft read.

Some pertinent information:

  • The MFT is already large (~115GB). However, this is far less than the default reservation of 12.5% of the drive.
  • The MFT is rapidly fragmenting: Contig.exe reports 100,000 fragments (see the command below), and new fragments are being added frequently (multiple times per second).
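
For completeness, the fragment count above comes from running Contig in analysis mode against the MFT metadata file. The invocation below assumes the volume is mounted as d: and that this Contig version accepts metadata-file paths (documented for recent versions, but I'm quoting from memory):

contig64 -a d:\$mft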

My question:

Can I make the MFT expand in larger chunks?

I'm also curious as to why the MFT is fragmenting even though it's far below the reservation size. I know the MFT doesn't start out at the reservation size, but what's the point of the reservation if the MFT can't grow contiguously into it? There's still 33% free space on the drive, so normal data shouldn't be eating into the reservation yet.
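
The closest knob I've found is fsutil's mftzone setting. As I understand it, this controls the size of the reserved zone rather than the chunk size the MFT grows by, so it may not address this directly, but for reference:

fsutil behavior query mftzone
fsutil behavior set mftzone 2

Values 1 through 4 reserve roughly 12.5%, 25%, 37.5% and 50% of the volume; the setting is system-wide and, as far as I know, only takes effect when a volume is next mounted.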

Update: fsutil fsinfo ntfsinfo gives the following info for the MFT:

Mft Valid Data Length: 0x0000001ca90c0000
Mft Start Lcn:         0x0000000000000000
Mft Zone Start:        0x000000003c828360
Mft Zone End:          0x000000003c828380

The zone is very small; is this normal?
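
Doing the arithmetic on those two LCNs (a quick sketch; the 4KB cluster size is the NTFS default for a volume this size, but I haven't re-checked it against the full fsutil output):

# PowerShell: compute the size of the reported MFT zone
$zoneStartLcn = 0x3c828360                   # Mft Zone Start (clusters)
$zoneEndLcn   = 0x3c828380                   # Mft Zone End (clusters)
$zoneClusters = $zoneEndLcn - $zoneStartLcn  # 0x20 = 32 clusters
$zoneBytes    = $zoneClusters * 4KB          # assumes 4KB clusters
"{0} clusters = {1} KB" -f $zoneClusters, ($zoneBytes / 1KB)
# => 32 clusters = 128 KB

So the zone currently reserved ahead of the MFT is only 128KB, which does seem tiny next to a ~115GB MFT.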

  • You would probably be better off zipping/compressing all 150 million files into a single archive, transmitting that, and then unzipping at the destination.
    – cybernard
    Commented Mar 8, 2018 at 12:56
  • @cybernard This is iSCSI to iSCSI across a single switch. The network has nothing to do with the bottleneck; it's file system overhead, which I will still pay when unzipping.
    – Laurence
    Commented Mar 8, 2018 at 13:31

1 Answer


The latest version of Sysinternals Contig can report on free-space fragmentation.

contig64 -f 

It shows:

Free cluster space       : 2,838,753,701,888 bytes
Free space fragments     : 89,747,382 frags
Largest free space block : 90,112 bytes
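
A quick division on those figures (numbers are just what Contig printed above):

# PowerShell: average size of a free-space fragment
$freeBytes = 2838753701888          # free cluster space
$fragments = 89747382               # free space fragments
$avgBytes  = $freeBytes / $fragments
"Average free fragment: {0:N0} bytes (~{1:N1} KB)" -f $avgBytes, ($avgBytes / 1KB)
# => roughly 31,600 bytes, i.e. under 8 clusters per fragment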

I think this explains everything. Even though there is over 2TB free out of 8TB (roughly a third of the drive), the free space is completely fragmented: the average free fragment is about 31KB, and the largest contiguous run is only 88KB. This will impact MFT growth, and there is nothing I can do at this stage, apart from looking at defragmentation options.
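
On the defragmentation front, newer versions of the built-in defragmenter (Windows 7 / Server 2008 R2 and later) have a free-space consolidation switch. I haven't verified whether it exists on Server 2008 itself, so treat this as a pointer rather than a tested fix (d: is assumed):

defrag d: /X /U /V

(/X consolidates free space, /U prints progress, /V is verbose.)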

I'm not sure if there's a way I could have avoided this situation in the first place. It feels like you should be able to copy files of a known size in parallel to a newly formatted disk without getting this level of fragmentation.

