I can't think of an easy way to make rsync perform the decompression itself. You could do the decompression via a custom preloaded library or a custom FUSE filesystem (on the remote side, the side with the gzipped files), but either would be a lot of work.
If you can assume that the timestamp of the gzipped file matches the timestamp of the original, uncompressed file, and it's acceptable to retransfer a whole file when it is modified, then you don't need rsync's incremental update capability. The same goes if the files are never modified after they are created.
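You can spot-check that assumption: with GNU gzip, gzip -lv prints the timestamp of the original file as recorded in the gzip header, which you can compare against the .gz file's own modification time (some-report.gz below is just a placeholder name):

gzip -lv some-report.gz   # "date"/"time" columns: the original file's timestamp from the header
ls -l some-report.gz      # the .gz file's own mtime

If the two match, timestamp-based comparisons against the gzipped files are meaningful.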
The first hurdle is that this is a remote copy. Eliminate the problem by mounting the remote directory with SSHFS so that it is available as a local filesystem.
Now you can use find to traverse the directory tree, create directories as needed, and decompress the files on the fly.
mkdir server1
sshfs test@server1:/var/www/html/reports server1
cd server1
find . \( -type d -o -type f -name '*.gz' \) -exec sh -c '
for x; do
    if [ -d "$x" ]; then
        # recreate the directory structure under the destination ($0)
        [ -d "$0/$x" ] || mkdir "$0/$x"
    else
        # decompress into the destination, dropping the .gz suffix,
        # and give the copy the same timestamp as the compressed file
        zcat "$x" >"$0/${x%.gz}"
        touch -r "$x" "$0/${x%.gz}"
    fi
done
' /home/myself/reports {} +
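If the tree is large and changes little between runs, a re-run can skip files whose decompressed copy is already up to date, instead of reading every file over the network again. This is only a sketch building on the command above (same hypothetical destination /home/myself/reports, run from inside the mount, directories assumed to exist from a first full pass); note that the -nt test is a common sh extension (bash, dash, ksh) rather than strict POSIX:

find . -type f -name '*.gz' -exec sh -c '
for x; do
    dest="$0/${x%.gz}"
    # re-decompress only if the copy is missing or the .gz is newer
    if [ ! -e "$dest" ] || [ "$x" -nt "$dest" ]; then
        zcat "$x" >"$dest"
        touch -r "$x" "$dest"
    fi
done
' /home/myself/reports {} +

Because touch -r gives each copy the same timestamp as its .gz source, unchanged files are skipped on subsequent runs. When you're done, cd out of the mount point and unmount it with fusermount -u server1 (on Linux; plain umount on some other systems).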