
I am trying to understand how mysqldump works:

If I execute mysqldump on my PC and connect to a remote server:

mysqldump -u mark -h 34.32.23.23 -pxxx --quick | gzip > dump.sql.gz

will the server compress the dump and send it over to me as gzip, or will my computer receive all the data first and then compress it?

I ask because I have a very large remote DB to export, and I would like to know the fastest way to do it over the network!

  • If this line is executed on your PC, then gzip will run on your PC too. That means you will receive the raw dump uncompressed over the network.
    – pritaeas
    Commented Mar 27, 2012 at 10:28

3 Answers


You should make use of ssh + scp: the dump runs against localhost, which is faster, and you only need to scp the gzipped file over (less network overhead).

You can likely do it like this:

ssh mark@34.32.23.23 "mysqldump -u mark -h localhost -pxxx --quick | gzip > /tmp/dump.sql.gz"

scp mark@34.32.23.23:/tmp/dump.sql.gz .

(the /tmp directory is just an example; change it to whatever directory you are comfortable with)
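If ssh access works but you want to avoid leaving a temp file on the server, a variant (just a sketch; the user and host are reused from the question, adjust to your setup) is to stream the compressed dump straight over the ssh connection:

ssh mark@34.32.23.23 "mysqldump -u mark -h localhost -pxxx --quick | gzip" > dump.sql.gz

Here gzip still runs on the server, so only compressed bytes cross the network, and the file lands directly on your PC.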

  • Nice idea. The problem is that I am dealing with ClearDB (which is a commercial instance of Amazon RDS) and I did not manage to log in via ssh... don't know if it's going to be possible!
    – Mark Belli
    Commented Mar 27, 2012 at 18:33
  • Yep, that's the next step... But on a (virtually) shared DB I don't think they are going to let me do it :(
    – Mark Belli
    Commented Mar 27, 2012 at 18:38
  • Do you have an EC2 instance or any other server near the database server? If so, run the command on that server, then scp the file back to your PC. Otherwise, just bear with the slow network transfer...
    – ajreal
    Commented Mar 27, 2012 at 18:42
  • Unfortunately no EC2... but I could buy an hourly instance just to do this transfer. Thanks for the hint!
    – Mark Belli
    Commented Mar 27, 2012 at 18:49
  • I do this often, and it works well. Thought I'd add: if you're moving an updated dump of a table that you've dumped before (backups?), you can use mysqldump | gzip --rsyncable and rsync the file to its destination instead of scp. This prevents fetching the bits that you already have (a sketch follows below). Commented Jul 6, 2018 at 20:38
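A minimal sketch of that last comment's approach (user, host, and paths are illustrative; --rsyncable is a common but not universal gzip build option):

ssh mark@34.32.23.23 "mysqldump -u mark -h localhost -pxxx --quick | gzip --rsyncable > /tmp/dump.sql.gz"
rsync -avP mark@34.32.23.23:/tmp/dump.sql.gz .

--rsyncable keeps gzip's compressed blocks aligned across small input changes, so when a previous copy of dump.sql.gz already exists locally, rsync only transfers the changed blocks.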

Have you tried the --compress parameter?

http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html#option_mysqldump_compress
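Note (as the comments below point out): --compress only compresses the client/server protocol traffic, so the SQL reaching your pipe is still plain text and you would still gzip it locally. A sketch combining both, reusing the command from the question:

mysqldump -u mark -h 34.32.23.23 -pxxx --quick --compress | gzip > dump.sql.gz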

  • Yes, it reduces the data transfer a lot! But what I wanted to understand was whether mysqldump caches the data on my PC or on the remote server. From what I understood, it does transfer all the data to my PC first. Am I correct?
    – Mark Belli
    Commented Mar 27, 2012 at 18:32
  • There is not a lot of documentation on how the --compress parameter works. I can make an educated guess, but would need to look at the source to be sure. But I thought your question was whether it got compressed server-side or client-side. In your example it is certainly client-side. If you have seen the --compress parameter reduce the data transfer, then that should answer your question: it compresses it server-side. Commented Mar 27, 2012 at 18:56
  • This only compresses the data in transit between the client and server, i.e. it reduces the amount of data transferred while creating the dump file. It does not compress the SQL file generated at the end.
    – Can YILDIZ
    Commented Mar 11 at 8:04

This is how I do it:

Do a partial export using SELECT INTO OUTFILE and create the files on the same server.

If your table contains 10 million rows, do a partial export of 1 million rows at a time, each into a separate file.

Once the first file is ready, you can compress and transfer it. In the meantime, MySQL can continue exporting data to the next file.

On the other server you can start loading the file into the new database.

BTW, a lot of this can be scripted; a rough sketch follows.
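A rough shell sketch of the chunked export, run on the database server itself (table name, key column, database, and credentials are all hypothetical; assumes an integer primary key id and that INTO OUTFILE may write to the server's /tmp):

# export 10M rows in 1M-row chunks; gzip each chunk while the next one exports
for i in 0 1 2 3 4 5 6 7 8 9; do
  start=$((i * 1000000))
  end=$((start + 1000000))
  mysql -u mark -pxxx mydb -e "SELECT * FROM big_table WHERE id >= $start AND id < $end INTO OUTFILE '/tmp/chunk_$i.csv'"
  gzip /tmp/chunk_$i.csv &
done
wait

Each finished chunk_$i.csv.gz can be transferred and loaded on the target with LOAD DATA INFILE while later chunks are still exporting. Note the comment below: INTO OUTFILE writes on the database server's filesystem, which is why this approach is not possible on Amazon RDS.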

  • OUTFILE is not possible in Amazon RDS Commented Feb 16, 2017 at 2:05
