
I would like to be able to tell, programmatically, whether CrashPlan has backed up a particular file, including the latest changes to that file. I.e., that the current contents of the file are backed up.

It's relatively easy to tell when CrashPlan last backed up a file: its name appears in /usr/local/crashplan/log/backup_files.log.0. With some accuracy, I could compare that backup time with the file's last modification time, but that method seems somewhat dubious.
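For illustration, something along these lines is roughly the log-based check I mean. Note that I'm guessing at the layout of the log lines (field positions, timestamp format), so the parsing would need adjusting to whatever backup_files.log.0 actually contains:

    #!/bin/bash
    # Rough sketch only: assumes each log line contains the file's full path
    # verbatim and begins with a date and time that GNU date can parse.
    FILE="$1"
    LOG=/usr/local/crashplan/log/backup_files.log.0

    # Last log line mentioning the file.
    line=$(grep -F -- "$FILE" "$LOG" | tail -n 1)
    [ -z "$line" ] && { echo "never backed up"; exit 2; }

    # Assumed: the first two whitespace-separated fields are the backup date and time.
    backup_ts=$(date -d "$(echo "$line" | awk '{print $1, $2}')" +%s 2>/dev/null)
    mtime=$(stat -c %Y "$FILE")

    if [ -n "$backup_ts" ] && [ "$backup_ts" -ge "$mtime" ]; then
        echo "current contents appear to be backed up"
        exit 0
    else
        echo "not backed up since last modification"
        exit 1
    fi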

A couple of methods I can think of, but don't know how to implement:

  • Compare the current file to CrashPlan's metadata about that file. This requires knowledge of the format of CrashPlan's "cache" files as well as the hashing scheme it uses. It might be achievable through the CLI, but the CLI is just a portal into the GUI, and I need something that's scriptable.

  • Restore the file to a temporary directory, and compare it. Unfortunately, there is no CLI to do restores; the GUI is the only way.

Here is what I'm trying to achieve. It would still be nice to know how to do the above, even if there are alternative approaches to the following:

I'm using CrashPlan for continuous backups of my PostgreSQL database, using WAL archives. In the current configuration, the archive command copies the files to an archive directory, which is backed up by CrashPlan. Every so often I manually confirm (or just trust) that a group of WALs is backed up, remove them from the archive directory, and occasionally do a restore through the GUI to ensure I can retrieve current and "deleted" WALs. The xlog directory is backed up, too, so I have a good chance of doing a near-full restore even if a particular xlog hasn't been archived by PostgreSQL yet.
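For context, the relevant PostgreSQL settings are along these lines (the archive directory path here is just a placeholder for the directory CrashPlan backs up):

    # postgresql.conf -- example values only
    archive_mode = on
    archive_command = 'test ! -f /var/backups/wal_archive/%f && cp %p /var/backups/wal_archive/%f'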

I'd like to be able to automate this process, which necessitates either confirming the backup status and recency, or automating a restore for comparison purposes.

(As a bonus, if the method is trustworthy, I could turn the "archive_command" from "copy to archive directory" into "confirm CrashPlan has backed up the current version", and do away with the archive directory completely).
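To make that concrete, the idea would be something like the following, where check_crashplan_backup.sh is a made-up name for a script along the lines of the sketch earlier in the question. It relies on PostgreSQL keeping the WAL segment and retrying whenever archive_command exits non-zero:

    # Hypothetical: %p is the segment path relative to the data directory, so the
    # script would need to resolve it to the absolute path CrashPlan logs. It
    # simply fails (and PostgreSQL retries) until the current version shows up
    # as backed up.
    archive_command = '/usr/local/bin/check_crashplan_backup.sh "%p"'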

(And, yes, I'm doing regular pg_dumpall's, in addition to the above.)

2 Answers


This will not be possible, I'm afraid. Certainly not with the consumer version; I'm not familiar with the business/enterprise versions.

Part of the problem is that the data is encrypted locally, so I don't think you can simply pull anything useful from the cache.

I think that you are looking at the wrong tool for the job. I would recommend researching a more focused backup tool, perhaps one that has specific Postgres agents.

  • Yes, of course there are more appropriate solutions, but all of those involve adding another backup service, which I was hoping to avoid! Commented Jun 6, 2014 at 2:32
  • Fair enough. Much as I like CP, though, it does have limitations. I haven't found any single solution that can do everything I want; each has strengths and weaknesses. Commented Jun 6, 2014 at 9:49

I've written a simple script that does the trick. However, it works by comparing the backup time with the last modification time of the file; it's the only solution I've found. Below are links to my blog post and the corresponding gist:

http://bougui505.github.io/2016/05/20/get_crashplan_backup_status_using_a_shell_command_line_on_linux.html

https://gist.github.com/bougui505/ba9db84a2fc6f9330f3ccf32a352a98e#file-backup_stat-sh
