121

I am having problems with getting a crontab to work. I want to automate a MySQL database backup.

The setup:

  • Debian GNU/Linux 7.3 (wheezy)
  • MySQL Server version: 5.5.33-0+wheezy1(Debian)
  • directories user, backup and backup2 have 755 permission
  • The user names for MySQL db and Debian account are the same

From the shell this command works

mysqldump -u user -p[user_password] [database_name] | gzip > dumpfilename.sql.gz

When I place this in a crontab using crontab -e

* * * * * /usr/bin/mysqldump -u user -pupasswd mydatabase | gzip> /home/user/backup/mydatabase-backup-`date +\%m\%d_\%Y`.sql.gz >/dev/null 2>&1

A file is created every minute in /home/user/backup directory, but has 0 bytes.

However, when I also redirect the output to a second directory, backup2, I note that the properly compressed mysqldump file is created there. I am unable to figure out what mistake I am making that results in a 0-byte file in the first directory and the expected output in the second directory.

* * * * * /usr/bin/mysqldump -u user -pupasswd my-database | gzip> /home/user/backup/mydatabase-backup-`date +\%m\%d_\%Y`.sql.gz >/home/user/backup2/mydatabase-backup-`date +\%m\%d_\%Y`.sql.gz 2>&1

I would greatly appreciate an explanation.

Thanks

3
  • Sorry for the typo in the first line of code: it should be gzip instead of zip. Commented Mar 9, 2014 at 4:17
  • I would not run this every minute
    – m79lkm
    Commented Mar 9, 2014 at 4:19
  • 1
    I was running it just to test the commands. Commented Mar 9, 2014 at 5:13

5 Answers

151

First the mysqldump command is executed and its standard output is sent through the pipe to gzip as standard input. After gzip, each output redirection operator (>) opens (and truncates) its file, but only the last redirection on the line actually receives the data; that is where the compressed dump is saved, and the earlier files are left empty.

For example, this command will dump the database and run it through gzip, but the data will only land in three.gz; one.gz and two.gz are created but left empty:

mysqldump -u user -pupasswd my-database | gzip > one.gz > two.gz > three.gz

$> ls -l
-rw-r--r--  1 uname  grp     0 Mar  9 00:37 one.gz
-rw-r--r--  1 uname  grp  1246 Mar  9 00:37 three.gz
-rw-r--r--  1 uname  grp     0 Mar  9 00:37 two.gz
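Applied to the crontab entry from the question, the trailing >/dev/null is the last redirection on the line, so gzip's output ends up in /dev/null and the backup file is left at 0 bytes. A corrected entry might look like this (a sketch, assuming the same paths and a once-a-day schedule at 02:00; only gzip's stderr is discarded):

0 2 * * * /usr/bin/mysqldump -u user -pupasswd mydatabase | gzip > /home/user/backup/mydatabase-backup-`date +\%m\%d_\%Y`.sql.gz 2>/dev/null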

My original answer was an example of redirecting the database dump to several compressed files (without double compressing) - I skimmed the question and seriously missed the point, sorry about that.

This is an example of recompressing files:

mysqldump -u user -pupasswd my-database | gzip -c > one.gz; gzip -c one.gz > two.gz; gzip -c two.gz > three.gz

$> ls -l
-rw-r--r--  1 uname  grp  1246 Mar  9 00:44 one.gz
-rw-r--r--  1 uname  grp  1306 Mar  9 00:44 three.gz
-rw-r--r--  1 uname  grp  1276 Mar  9 00:44 two.gz

This is a good resource explaining I/O redirection: http://www.codecoffee.com/tipsforlinux/articles2/042.html

7
  • The problem that I am having is that the mysqldump and gzip commands work. In the first redirect, to directory 'backup', a 0-byte file is created. The second redirect, to directory 'backup2', which does not involve any re-compression, creates the file that I want. I wanted to know why this is happening. Commented Mar 9, 2014 at 16:17
  • 1
    The output of gzip is not going into the first file - it is being redirected again. Each redirection is processed in turn until the last one on the line, and that is the file that actually receives the compressed data.
    – m79lkm
    Commented Mar 9, 2014 at 17:05
  • 1
    Bash recognizes the redirection and first attempts to open the file. The file is opened (created), then bash determines where the data is to be redirected. If there is an error opening the file, the command fails and writes errors to stderr. If the output is redirected again, the data is sent on to the next file in line. If the file opens and there are no other redirections, the data is written to that file.
    – m79lkm
    Commented Mar 9, 2014 at 19:18
  • 2
    Thank you for explaining that bash processes the redirections to the end of the line, so gzip's output goes to the last file. I think I had > /dev/null 2>&1 in my initial code and the mysqldump output was being sent to /dev/null and discarded. I now have the code working as it should. Commented Mar 10, 2014 at 0:28
  • 1
    Minor: do not put the password in the command; it will end up in your history and can be retrieved. (For cron it is fine.)
    – tibi
    Commented Jan 13, 2017 at 13:53
41

If you need to add a date-time stamp to your backup file name (CentOS 7), use the following:

/usr/bin/mysqldump -u USER -pPASSWD DBNAME | gzip > ~/backups/db.$(date +%F.%H%M%S).sql.gz

this will create the file: db.2017-11-17.231537.sql.gz
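One caveat: inside a crontab entry (as opposed to a shell script), a bare % is treated specially by cron, so it has to be escaped as \% just like in the question's entry. A sketch of the same command as a daily cron job (schedule and paths are assumptions):

0 2 * * * /usr/bin/mysqldump -u USER -pPASSWD DBNAME | gzip > /home/USER/backups/db.$(date +\%F.\%H\%M\%S).sql.gz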

19

Besides m79lkm's solution, my 2 cents on this topic:

OPTION 1:

Don't pipe (|) the result directly into gzip; first dump it as a .sql file, and then gzip it.
So go for && gzip instead of | gzip if you have the free disk space.

Depending on your system, the dump itself can easily be twice as fast, but you will need a lot more free disk space. Your tables will be locked for a shorter time, so there is less downtime/slow responding of your application. The end result is exactly the same.

So it is very important to check the free disk space first with

df -h

Then estimate the dump size of your database and see if it fits the free space:

# edit this code to only get the size of what you would like to dump

SELECT Data_BB / POWER(1024,2) AS Data_MB,
       Data_BB / POWER(1024,3) AS Data_GB
FROM (SELECT SUM(data_length) AS Data_BB
      FROM information_schema.tables
      WHERE table_schema NOT IN ('information_schema','performance_schema','mysql')) A;

(credits dba.stackexchange.com/a/37168)

And then execute your dump like this:

mysqldump -u user -p [database_name] > dumpfilename.sql && gzip dumpfilename.sql
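Note that gzip dumpfilename.sql replaces the plain dumpfilename.sql with dumpfilename.sql.gz, so the extra disk space is only needed while the dump and compression are running.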

OPTION 2:

Another tip is to use the option --single-transaction. It prevents the tables from being locked yet still results in a consistent backup (for InnoDB tables; see the mysqldump documentation). And since this does not lock your tables for most queries, you can actually pipe the dump | directly into gzip (in case you don't have the free disk space):

mysqldump --single-transaction -u user -p [database_name] | gzip > dumpfilename.sql.gz
1
  • 1
    Good suggestion. I think the relative performance of the two approaches depends on the system specs, though: you'd get very different results on a machine with fast CPU(s) and lots of fast memory but slow storage vs one with less compute and memory but a fast SSD. I'd suggest benchmarking both approaches to compare which is faster/optimal. Same for going the other direction. It also depends on whether the machine has different storage devices for the DB table data vs the file system (which is common). As usual YMMV.
    – Gruff
    Commented Nov 16, 2022 at 18:12
12

You can use the tee command to redirect output:

/usr/bin/mysqldump -u user -pupasswd my-database | \
tee >(gzip -9 -c > /home/user/backup/mydatabase-backup-`date +\%m\%d_\%Y`.sql.gz) | \
gzip > /home/user/backup2/mydatabase-backup-`date +\%m\%d_\%Y`.sql.gz 2>&1

see the tee documentation
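If you want to confirm that both copies received the full dump, the two archives should decompress to identical data; a quick sanity check (filenames follow the example above):

zcat /home/user/backup/mydatabase-backup-*.sql.gz | md5sum
zcat /home/user/backup2/mydatabase-backup-*.sql.gz | md5sum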

3
  • My question was why the first redirect results in a 0-byte file and the second one in a complete file. Commented Mar 9, 2014 at 5:14
  • 2
    sorry about that - I posted an answer to your original question. I will leave this here - hope this little snippet will be useful to someone.
    – m79lkm
    Commented Mar 9, 2014 at 6:59
  • 1
    Indeed it was :) Commented Oct 3, 2019 at 8:59
1

Personally, I created a file.sh (permissions 755) in the root directory; this file does the job and is run by the crontab.

Crontab code:

10 2 * * * root /root/backupautomatique.sh

File.sh code:

#!/bin/bash

# Erase the old dump
rm -f /home/mordb-148-251-89-66.sql.gz

# The dump itself (what you have done)
mysqldump mor | gzip > /home/mordb-148-251-89-66.sql.gz

# Send a copy somewhere else in case the sending server crashes,
# because it is too old (like me ;-))
scp -P2222 /home/mordb-148-251-89-66.sql.gz root@otherip:/home/mordbexternes/mordb-148-251-89-66.sql.gz
