
While transferring a large number of files from a Red Hat Linux workstation to a FAT32-formatted external hard disk (WD 2 TB, formatted with Mac Disk Utility), I ran into an error saying there was no space left on the device. But I checked, and there is still ~700 GB of disk space left, so I am guessing I ran out of space because of long file names (not sure?). How can I check for that?

My external HDD details are

/dev/sdc1 on /media/GUDDULINUX3 type vfat (rw,nosuid,nodev,relatime,uid=988,gid=2000,fmask=0022,dmask=0077,codepage=cp437,iocharset=ascii,s

Currently there are around ~545 directories, with anywhere between ~7000 and ~11000 files in each directory. Each file is a binary file of ~32K or ~96K (roughly half each, checked with du -sh), with a name like XC6_9k.131_132.12.2012.210.s3 (29 characters long). The file sizes look right, because they are supposed to be binary files containing 8000 or 24000 floating-point values.
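(Assuming these are single-precision, 4-byte floats, the sizes work out: 8000 × 4 bytes ≈ 32 KB and 24000 × 4 bytes ≈ 96 KB, which matches what du reports.)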

Is it possible something else is wrong? Unfortunately, I cannot check the exact disk space consumed by the directories; running du -sh takes forever.

Edit 1- I used Mac Disk Utility to verify the external hard disk, and it reports: 11361590 files, 1076797472 KiB free (33649921 clusters).
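(Those numbers also imply a 32 KiB cluster size: 1076797472 KiB ÷ 33649921 clusters = 32 KiB per cluster.)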

Edit 2-

Following Angelo's suggestions, I tried df -h and df -i with the external hard disk attached to my laptop (a Mac). It looks like I have run out of free inodes on /Volumes/GUDDULINUX3. Any suggestions on what to do? Will I gain inodes if I tar the small files into one tar file per directory (rough sketch after the df output below)? Should I move to an NTFS-formatted disk?

avinash$ df -h
Filesystem                          Size   Used  Avail Capacity  iused   ifree %iused  Mounted on
/dev/disk0s2                       233Gi  216Gi   17Gi    93% 56587186 4482254   93%   /
devfs                              187Ki  187Ki    0Bi   100%      646       0  100%   /dev
map -hosts                           0Bi    0Bi    0Bi   100%        0       0  100%   /net
map auto_home                        0Bi    0Bi    0Bi   100%        0       0  100%   /home
/dev/disk1s1                       1.8Ti  836Gi  1.0Ti    45%        0       0  100%   /Volumes/GUDDULINUX3

avinash$ df -i
Filesystem                        512-blocks       Used  Available Capacity  iused   ifree %iused  Mounted on
/dev/disk0s2                       488555536  452185504   35858032    93% 56587186 4482254   93%   /
devfs                                    373        373          0   100%      646       0  100%   /dev
map -hosts                                 0          0          0   100%        0       0  100%   /net
map auto_home                              0          0          0   100%        0       0  100%   /home
localhost:/rGEmV8JCfpffeQBEQFAlLe  488555536  488555536          0   100%        0       0  100%   /Volumes/MobileBackups
/dev/disk1s1                      3906009792 1752414720 2153595072    45%        0       0  100%   /Volumes/GUDDULINUX3

These are the results with the disk attached to my Linux workstation; it doesn't show any inode information.

seismo82% df -h /media/GUDDULINUX3/ 
Filesystem Size Used Avail Use% Mounted on 
/dev/sdc1 1.9T 836G 1.1T 45% /media/GUDDULINUX3 

seismo82% df -i /media/GUDDULINUX3/
Filesystem     Inodes IUsed IFree IUse% Mounted on
/dev/sdc1           0     0     0     - /media/GUDDULINUX3
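As for tarring, what I have in mind is roughly this (just a sketch; the source path is a placeholder, and I haven't decided on compression yet):

cd /path/to/source
for d in */ ; do                                    # one tar per source directory
    tar -cf "/media/GUDDULINUX3/${d%/}.tar" "$d"    # e.g. dir1/ becomes dir1.tar on the FAT32 disk
done

That would turn each directory of ~7000 to ~11000 small files into a single file on the FAT32 volume.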

Edit 3-

It seems the inode count doesn't apply to FAT32. I think the problem is that FAT32 has a limit on how many files can be in a single directory, lower than ~65k depending on the filename length. First, I tarred up a lot of the pre-existing files on the external HDD, which should have freed up a lot of inodes (or the FAT32 equivalent). But moving the big directory (it has ~23k files) still produced the "no space left on device" error. Then, instead of moving individual files, I made a tar of the directory, and moving that to the external disk worked! Trying to untar it on the external disk gave me the error again. So I think I ran into a limit on the number of files per directory (see w3dk's comment on this: Max files per directory).
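For the record, the tar-and-move step was roughly this (bigdir is just a stand-in for the actual directory name):

tar -cf bigdir.tar bigdir          # pack the ~23k small files into a single archive on the Linux disk
mv bigdir.tar /media/GUDDULINUX3/  # moving the one tar file to the FAT32 disk succeeds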

I checked the directories that had reported errors on moving. The limit seems to be 16383 files for filenames with 29 characters and 21843 files for filenames with 20 characters. Theoretically, the limit is ~65k files for names in 8.3 format. Thanks to everyone who helped me diagnose the issue. For now, I will just tar up whatever I have.
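These numbers fit the way FAT stores long filenames, as far as I understand it: a FAT32 directory can hold at most 65536 directory entries of 32 bytes each, and a file with a long name uses one 8.3 entry plus one LFN entry per 13 characters of the name. So, roughly:

29-character names: 1 + ceil(29/13) = 4 entries per file, 65536 / 4 = 16384 files
20-character names: 1 + ceil(20/13) = 3 entries per file, 65536 / 3 ≈ 21845 files

which is within a few entries of the 16383 and 21843 I observed (I counted with something like find DIR -maxdepth 1 -type f | wc -l for each affected directory).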

  • You should post the command you're running and the error you are receiving. Max filename length is 255 characters.
    – Angelo
    Commented Oct 20, 2016 at 3:22
  • And you didn't run out of space or inodes (df -h and df -i)?
    – Angelo
    Commented Oct 20, 2016 at 3:43
  • df should be instant
    – Angelo
    Commented Oct 20, 2016 at 3:46
  • FAT doesn't really have "inodes" in the ext2/ext3 sense, but the total number of files is limited, and the "inodes" value is likely reporting that. So yes, tarring the small files (or just all files in a directory) will get around this.
    – dirkt
    Commented Oct 20, 2016 at 5:20
  • @Guddu I don't think you can conclude from that output that you have run into a limit for the number of files. I'm pretty sure that OSX is just reporting 100% usage because it sees 0 available inodes and 0 inodes used. I'm still uncertain where the problem is. I use Mac/Windows/Linux also, so I understand how difficult it is to select a satisfactory filesystem for all 3. I don't remember which I felt was better: NTFS support on Mac or exFAT support on Linux. I think I ended up just using NTFS and, on OSX, VirtualBox with Linux to mount and write to it, because it was infrequent.
    – Angelo
    Commented Oct 20, 2016 at 18:38

1 Answer


In addition to the partition size limits, file size limits, and directory size limits of the FAT32 file system (all of which it sounds like you are aware of), there is also a maximum limit of 268,435,437 total files on a FAT32 volume, regardless of directory.

Doing quick math, 545 directories with 7000 files in each is almost 4 million files -- far in excess of what FAT32 can handle.

  • Sorry, I feel like I am missing something; 4 million is still less than 268 million?
    – Guddu
    Commented Oct 20, 2016 at 3:07
  • No, you are not missing anything. When I read your question I just knew you had to be running into some FS limit, so I googled those limits, saw 268 thousand, and decided that was it. Of course you are right... That's 268 million, not thousand, so my answer is wrong. I feel a bit embarrassed, but I got two upvotes for it! At least two other people made the same mistake I just did, and that makes me feel less stupid. Hahaha!
    – Wes Sayeed
    Commented Oct 20, 2016 at 6:21
