I have some huge folders containing about 40,000 files (and many more gitignored files, such as `node_modules`), and I want to `rsync` them to a remote machine.
Currently, I use a simple script like `run_rsync; fswatch -o $BASE_DIR | while read f; do run_rsync; done`. But whenever a file changes, a full rsync is triggered, and I can see it scanning all 40,000 files to compute the diff, which consumes quite a bit of CPU and time.
Thus I wonder: since `fswatch` knows exactly which file(s) changed, can `rsync` be made aware of that, so it only examines those few files and decides what to do with them? Is this possible?
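For reference, this is roughly the kind of pipeline I have in mind (an untested sketch; `BASE_DIR` and `REMOTE` are placeholders, and I'm assuming `fswatch` without `-o` prints one changed path per line and that rsync's `--files-from` treats entries as relative to the source directory):

```shell
#!/bin/sh
# Untested sketch of the per-file approach I'm hoping for.
# BASE_DIR and REMOTE are placeholders for my actual paths.
BASE_DIR="$HOME/project"
REMOTE="user@host:/path/to/project"

# One full sync up front, as my current script does.
rsync -az "$BASE_DIR"/ "$REMOTE"

# Without -o, fswatch prints one changed path per event.
fswatch -r "$BASE_DIR" | while read -r changed; do
  # rsync --files-from expects paths relative to the source dir,
  # while fswatch reports absolute paths, so strip the prefix.
  rel="${changed#"$BASE_DIR"/}"
  printf '%s\n' "$rel" | rsync -az --files-from=- "$BASE_DIR" "$REMOTE"
done
```

This way each event would make rsync stat only the one reported file instead of walking the whole tree, if such a thing works.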
Thanks!