
I had never paid attention to ext4lazyinit before. But today, after formatting my 4 TB external USB hard drive and mounting it for the first time, the LED light kept blinking even though nothing was reading or writing. I figured out that this is ext4lazyinit initializing the inode tables; there is an [ext4lazyinit] kernel process as well. I have some questions:

1. Since NTFS does not have this "problem", why does ext4 do this? Does initializing the inode tables make ext4 safer or faster than NTFS? I know NTFS does not have inodes and handles its data differently, but NTFS can be formatted and mounted quickly without initializing anything. So why is ext4 so different? It must have some advantage over NTFS; otherwise, what is the point of spending so long initializing?

2. Is it normal for ext4lazyinit to take a long time? It is still running at the moment; it has been an hour and I don't know how long it will take. I am not asking for a time estimate here, just whether this is normal. I googled and found other people reporting ext4lazyinit runs of 6+ hours.

3. Given this long period of drive activity, can it damage the hard drive in some way? I don't feel good about the LED blinking for hours, and the drive is getting rather hot.

4. If I initialize the inode tables during the format (by running mkfs.ext4 manually with the right options), will it take the same long time? Will the format take 6+ hours? That seems ridiculous to me. I read that ext4lazyinit is a feature of newer kernels. What about the old days, before lazy init? I don't remember ever formatting a drive and waiting hours for it to complete.

Thank you.

Update: the ext4lazyinit process has now completed. It took about 2 hours on this drive, which is fairly new.

2 Answers


Strictly speaking, lazyinit is not necessary as long as we never need to attempt certain extreme file system corruption repairs. The problem is that if a pre-existing file system is reformatted using mke2fs (aka mkfs.ext4), there may be old inodes in the inode table that could be mistaken for inodes belonging to the current file system. Normally there is an entry in the block group descriptor indicating the range of valid inodes in that portion of the inode table. But if the block group descriptors have been trashed, or otherwise can't be trusted (because their metadata checksum is invalid, or we are using the backup block group descriptors, etc.), then e2fsck (aka fsck.ext4) will search all of the blocks in the inode table, and it is highly desirable that it not stumble across inodes from previous incarnations of the file system and get confused.
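You can see this per-group bookkeeping with dumpe2fs. As a rough sketch (the device name /dev/sdb1 is an assumption; substitute your own partition), on file systems with checksummed group descriptors each block group reports how many inodes at the end of its table have never been used, and carries an ITABLE_ZEROED flag once the lazyinit thread has zeroed that group's table:

# device name is an example; replace with your own partition
sudo dumpe2fs /dev/sdb1 | grep -E 'ITABLE_ZEROED|unused inodes' | head

Counting the ITABLE_ZEROED flags over time is a crude way to watch the background initialization make progress.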

So that is what the lazyinit thread does: it slowly zeroes out the not-yet-used portions of the inode table. Previously, the inode table initialization was done during mke2fs, but this made the mkfs process very slow for very large or very slow disks. So instead, we defer it to when the file system is mounted. The reason it can take hours is that we deliberately throttle it to roughly 10% of the disk bandwidth, to minimize the impact on system performance. This can be tuned via a mount option, or you can force the inode table initialization to happen at mke2fs time (so what takes 2 hours in the background on your drive would make mke2fs about 12 minutes longer). You can also temporarily or permanently disable the lazy initialization via a mount option; see the examples below.
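As a concrete sketch (the device and mount point names here are illustrative assumptions, not taken from the question):

# let the lazyinit thread run at full speed instead of ~10% of bandwidth
sudo mount -o remount,init_itable=0 /mnt/usb

# suppress the lazyinit thread entirely for this mount
sudo mount -o noinit_itable /dev/sdb1 /mnt/usb

# or pay the cost up front and zero the inode tables during the format
sudo mkfs.ext4 -E lazy_itable_init=0 /dev/sdb1

With init_itable=n, the thread waits n times as long as it took to zero the previous block group's inode table before doing the next one, so init_itable=0 means no throttling at all.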

What do other file systems do? One approach is to simply throw up one's figurative hands in the face of these extreme corruptions. (After all, everyone does regular backups, right? So we don't have to try that hard. :-) This approach is the more common one when the file system does not have a fixed inode table, as is the case with NTFS.

Reiserfs does not have a fixed inode table either, so what it will do when it can't find its "dancing inodes" is a brute-force search of all blocks in the file system, trying to find blocks that look like they contain inodes. The problem with that approach is that if you store VM disk images containing reiserfs file systems inside a reiserfs file system, and fsck.reiserfs does its brute-force search of all blocks, the resulting franken-filesystem can be... entertaining.

As to whether this will damage your disk drive: it shouldn't. But if spinning for an hour or so makes the drive hot enough to damage it, then it is not properly designed or installed, and normal operation could just as easily cause damage. In that case, you should be complaining to your hardware vendor or provider.

  • Isn’t there some fixed UUID in each inode to clearly mark it as belonging to a certain file system, or am I remembering this from XFS? With such a UUID, also stored in the superblock, it could quickly be determined whether an inode belongs to the current superblock or not (thus also avoiding your "VM images on the file system confusing recovery" issue).
    – Ro-ee
    Commented Sep 13, 2020 at 15:31
  • Thank you very much for your detailed explanation. To check that my understanding is correct: (1) the inode table initialization is only useful when doing fsck; it won't help ext4 performance at all, right? (2) Since external hard drives are auto-mounted nowadays, can I simply run the following to force lazyinit to run at full speed? sudo mount -o remount,init_itable=0 /media/user/sda1. Is this command right? Thanks a lot.
    – sgon00
    Commented Sep 14, 2020 at 4:32
  • @Ro-ee a UUID is 16 bytes, and that's a lot of space to spend on each inode. We could spread the UUID out so we only spend one byte per inode across each inode table block, but that's not something we've done to date, since inode initialization (lazy or not) was "good enough" for most users. However, if someone wants to propose which byte in the core 128-byte inode should be used, and wants to send a patch, it would be a relatively easy starter project for someone who wants to get involved in ext4 development.
    – Theodore Ts'o
    Commented Sep 16, 2020 at 22:03
  • @TheodoreTs'o OK, then I must be remembering this UUID thing from XFS or similar, not from ext4. I happened to come upon it while writing a file system driver function that changes a file system's UUID (after cloning, etc.). In older XFS versions you had to change every single file entry (and recalculate checksums). Newer versions have one 'public' UUID (which can be changed without many repercussions) and another UUID repeated in each inode, which never needs changing, even after a clone.
    – Ro-ee
    Commented Sep 16, 2020 at 22:17

To make it faster, you can initialize all inode tables at file system creation time with the command:

mkfs.ext4 -E lazy_itable_init=0 /dev/...
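If you also want to skip the (much shorter) lazy zeroing of the journal, mke2fs accepts both options together; the trade-off is simply that mkfs itself takes proportionally longer up front:

# zero both the inode tables and the journal during the format
mkfs.ext4 -E lazy_itable_init=0,lazy_journal_init=0 /dev/...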

