
How does a 5 GB gzipped file get read into memory and decompressed? Does the whole file need to be read into memory before decompression? My question relates to processing gzipped files in Hadoop, which cannot split their processing the way it does for uncompressed files. What about bzip2? Any differences?

Thanks,

2 Answers


No, the 5 GB does not need to be read into memory. You can read in a byte at a time if you like and decompress it that way. gzip, bzip2, and all compression formats that I am aware of are streaming formats: you can read small pieces and decompress them serially, never having to seek backwards in the file. (The .ZIP format has header information at the end, so unzippers usually seek backwards from there to locate the entries. That is not required, however, and a .ZIP file can be both compressed and decompressed as a stream.)
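As a minimal sketch of what that looks like in practice (Python here, with "big.gz" as a hypothetical path to your 5 GB file), you can decompress the whole file while only ever holding one small chunk in memory:

    import gzip

    def count_decompressed_bytes(path, chunk_size=64 * 1024):
        """Stream-decompress a .gz file, holding only one chunk in memory."""
        total = 0
        with gzip.open(path, "rb") as f:       # decompresses transparently
            while True:
                chunk = f.read(chunk_size)     # at most chunk_size bytes of output
                if not chunk:                  # empty read means end of stream
                    break
                total += len(chunk)
        return total

    if __name__ == "__main__":
        # "big.gz" is a hypothetical path to a large gzipped file.
        print(count_decompressed_bytes("big.gz"))

Peak memory stays around chunk_size (plus a small internal decompression buffer), regardless of how large the file is.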


gzipped files are not splittable, which means MapReduce will always use only one mapper to read the whole file. The best practice is therefore to unzip the file before putting it on HDFS. bzip2 files are splittable, so they are a better fit for Hadoop than gzipped files. A sketch of the decompress-first step is shown below.
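For example, a minimal way to do that decompression step in Python before the HDFS upload (paths here are hypothetical), streaming so the 5 GB file never sits in memory all at once:

    import gzip
    import shutil

    # Decompress input.gz to plain text so MapReduce can split the result
    # across many mappers. copyfileobj streams in fixed-size chunks.
    with gzip.open("input.gz", "rb") as src, open("input.txt", "wb") as dst:
        shutil.copyfileobj(src, dst, length=64 * 1024)

After this, you would put input.txt on HDFS instead of the original .gz file.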
