While transferring a lot of files from a Red Hat Linux workstation to a FAT32-formatted external hard disk (a WD 2 TB drive formatted with Mac Disk Utility), I ran into an error saying there was no space left on the disk. But when I checked, there were still ~700 GB free, so I am guessing I ran out of space because of the long file names (not sure). How can I check for that?
My external HDD mount details are:
/dev/sdc1 on /media/GUDDULINUX3 type vfat (rw,nosuid,nodev,relatime,uid=988,gid=2000,fmask=0022,dmask=0077,codepage=cp437,iocharset=ascii,s
Currently there are ~545 directories, each holding between ~7000 and ~11000 files. Each file is a binary file of ~32K or 96K (checked with du -sh; roughly half of each size), with a name like XC6_9k.131_132.12.2012.210.s3 (29 characters long). The file sizes look right, since the files are supposed to hold 8000 or 24000 floating-point values.
Is it possible something else is wrong? Unfortunately, I cannot check the exact disk space consumed by the directories; running du -sh on them takes forever.
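Counting files per directory is much faster than du, since it never has to stat every file for its size. A rough sketch (count_files is just a helper name I made up; the mount point is the one from above):

```shell
#!/bin/sh
# Hypothetical helper: list each subdirectory of a root with its file
# count, largest first. Faster than du because it only counts names.
count_files() {
    for d in "$1"/*/; do
        printf '%8d %s\n' "$(find "$d" -maxdepth 1 -type f | wc -l)" "$d"
    done | sort -rn
}

# e.g. count_files /media/GUDDULINUX3
```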
Edit 1-
I used Mac Disk Utility to verify the external hard disk, and it reports:
11361590 files, 1076797472 KiB free (33649921 clusters)
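As a quick sanity check on those numbers: dividing the free KiB by the free cluster count gives the cluster size of this volume, which shows that each ~32K file fills a cluster almost exactly, so cluster slack is not where the space went:

```shell
# Numbers taken verbatim from the Disk Utility output above:
# free KiB / free clusters = cluster size of this FAT32 volume.
echo $(( 1076797472 / 33649921 ))   # -> 32 (KiB per cluster)
```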
Edit 2-
Following Angelo's suggestions, I tried df -h and df -i with the external hard disk attached to my laptop (a Mac). It looks like I have run out of free inodes on /Volumes/GUDDULINUX3. Any suggestions on what to do? Will I gain inodes if I tar the small files into one tar file per directory? Should I move to an NTFS-formatted disk?
avinash$ df -h
Filesystem Size Used Avail Capacity iused ifree %iused Mounted on
/dev/disk0s2 233Gi 216Gi 17Gi 93% 56587186 4482254 93% /
devfs 187Ki 187Ki 0Bi 100% 646 0 100% /dev
map -hosts 0Bi 0Bi 0Bi 100% 0 0 100% /net
map auto_home 0Bi 0Bi 0Bi 100% 0 0 100% /home
/dev/disk1s1 1.8Ti 836Gi 1.0Ti 45% 0 0 100% /Volumes/GUDDULINUX3
avinash$ df -i
Filesystem 512-blocks Used Available Capacity iused ifree %iused Mounted on
/dev/disk0s2 488555536 452185504 35858032 93% 56587186 4482254 93% /
devfs 373 373 0 100% 646 0 100% /dev
map -hosts 0 0 0 100% 0 0 100% /net
map auto_home 0 0 0 100% 0 0 100% /home
localhost:/rGEmV8JCfpffeQBEQFAlLe 488555536 488555536 0 100% 0 0 100% /Volumes/MobileBackups
/dev/disk1s1 3906009792 1752414720 2153595072 45% 0 0 100% /Volumes/GUDDULINUX3
These are the results with the disk attached to my Linux workstation; here df doesn't show the inode information at all.
seismo82% df -h /media/GUDDULINUX3/
Filesystem Size Used Avail Use% Mounted on
/dev/sdc1 1.9T 836G 1.1T 45% /media/GUDDULINUX3
seismo82% df -i /media/GUDDULINUX3/
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/sdc1 0 0 0 - /media/GUDDULINUX3
Edit 3-
It seems the inode concept doesn't apply to FAT32. I now think the problem is that FAT32 limits how many files a directory can hold, and the limit is below the theoretical ~65k depending on the filename length. First, I tarred up a lot of pre-existing files on the external HDD, which should have freed a lot of inodes (or the FAT32 equivalent). But moving the big directory (~23k files) still gave the "no space left on device" error. Then, instead of moving individual files, I made a tar of the directory, and moving that single file to the external disk worked! Trying to untar it on the external disk gave the error again. So I think I ran into a limit on the number of files per directory. See w3dk's comment on this:
Max files per directory
I checked the directories that had reported errors on moving. The limit seems to be 16383 files for 29-character filenames and 21843 files for 20-character filenames. Theoretically, the limit is ~65k files for names in 8.3 format. Thanks to everyone who helped me diagnose the issue. For now, I will just tar up whatever I have.