
I have a job that runs a nightly backup of all attached databases, but it can bog the server down if someone is connected at the time. I have TONS of CPU overhead to work with (I am barely above 5% in use; of 8 cores, the busiest peaks at maybe 30% and the rest are at idle), but I am pegging my I/O. Looking at Resource Monitor, I noticed that the reads from the .mdf and the writes to the .bak file both have an I/O priority of Normal.

Is there a way to make my backups run at Background priority?

4 Answers


You can back up databases to a UNC path (manually or by creating a backup device), so you could try that, but you'll be trading disk I/O for network I/O.
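A minimal sketch of what that looks like in T-SQL (the server, share, and database names here are hypothetical, and the SQL Server service account would need write access to the share):

    -- One-off backup straight to a network share (hypothetical path).
    BACKUP DATABASE MyDatabase
    TO DISK = N'\\BackupServer\SQLBackups\MyDatabase.bak'
    WITH INIT;

    -- Or register the share as a reusable backup device.
    EXEC sp_addumpdevice 'disk', 'NetworkBackup',
        N'\\BackupServer\SQLBackups\MyDatabase.bak';
    BACKUP DATABASE MyDatabase TO NetworkBackup;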

  • Hmm, I immediately transfer the files to another computer anyway. This may be my solution, but I will leave this open for a day or two in case better suggestions come along. Commented Aug 26, 2011 at 17:44
  • There are all sorts of memory problems that go with streaming large database backups to a remote server. Basically this causes all the memory to be assigned to the system cache and taken away from SQL Server. I highly recommend never doing this.
    – mrdenny
    Commented Aug 26, 2011 at 17:47
  • If I mounted an iSCSI device, would the same issues apply? Commented Aug 26, 2011 at 17:52
  • If Windows sees it as a local disk on the SQL Server it doesn't apply (mapped drives do not count as local disk). Only when going to a network path (or a mapped drive).
    – mrdenny
    Commented Aug 26, 2011 at 17:55

This doesn't actually answer the question (how to tweak I/O priority for a backup operation), but it may help significantly speed up backups and reduce their effect on other activity:

If you can, write your backups to drives that are not holding the database's data and log files. With spinning-disk drives this improves performance considerably, because the backup operation no longer forces the heads to flip constantly between the area holding the live data (to read pages) and the area the backup is being written to (to store those pages).

This may mean adding an extra physical drive (or multiple drives so you can use RAID for redundancy) to the machine for the backups to go to, particularly if you only have one drive or array that holds everything. If your data and log files are on different disks/arrays but you don't have a third drive/array to write the backups to and are unable to add one, writing the backups to the drive with the logs is usually faster than writing them to the drive with the data files (since the active pages in the log files are far fewer, except in unusual cases). A sketch of the idea follows.
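An illustrative sketch, assuming (hypothetically) that data files live on D:, logs on E:, and a separate F: drive holds the backups:

    -- Data on D:, logs on E:; writing the backup to F: keeps the
    -- backup writes off the spindles serving the live database.
    BACKUP DATABASE MyDatabase
    TO DISK = N'F:\Backups\MyDatabase.bak'
    WITH INIT;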

  • Unfortunately this is a 1U server and all the bays are full and used in a RAID configuration. There is also an issue with the USB controller not showing up in hardware devices (that is the reason it is running on the same disk; it used to go to an external drive, but now it gets backed up locally and then FTPed to another machine). Commented Aug 26, 2011 at 17:28
  • Is there another machine it can see over the local network to back up to? That would separate the source and destination of the backup process onto different spindle sets, and if the network is only 100 Mbit it will also artificially reduce the I/O load imposed by the operation in any given second. Commented Aug 26, 2011 at 20:58
  • Doing it over the network looks like the solution I will end up doing. Commented Aug 26, 2011 at 21:01

No, there is no supported way of changing one thread to a different priority level. You can try using something like LiteSpeed and/or HyperBac to compress the backups, which will reduce the write I/O requirements.
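As a rough sketch only: LiteSpeed drives its backups through extended stored procedures, along these lines (the procedure and parameter names here are recalled from LiteSpeed's xp_backup_database interface; verify them against the vendor's documentation before use):

    -- LiteSpeed compressed backup (sketch; parameter names assumed,
    -- check the installed LiteSpeed version's documentation).
    EXEC master.dbo.xp_backup_database
        @database = N'MyDatabase',
        @filename = N'D:\Backups\MyDatabase.bkp',
        @compressionlevel = 3;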

  • Would using Windows file and folder compression free up I/O resources at the expense of CPU resources, or would it not make a difference? Commented Aug 26, 2011 at 17:30
  • I'm not sure if NTFS compression is supported for SQL backups. I know that NTFS compression isn't very good, so it probably wouldn't help much. HyperBac isn't that expensive and has a very good ROI. Here's a real-world post I wrote up about it: itknowledgeexchange.techtarget.com/sql-server/…
    – mrdenny
    Commented Aug 26, 2011 at 17:35
  • I just ran it by the higher-ups and they'd rather have it go slow than spend money on making the backups run faster :( Commented Aug 26, 2011 at 17:44
  • Really, they can't fork over $795? Ouch.
    – mrdenny
    Commented Aug 26, 2011 at 17:46
  • For a batch job running at 2 AM that maybe inconveniences one person a week, yea. Commented Aug 26, 2011 at 17:50

Upgrade to SQL Server 2008 Enterprise or SQL Server 2008 R2 Standard or higher and make use of native backup compression; you'll trade some of those spare CPU cycles for a potentially much shorter backup time.
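Native compression is just a WITH option on the backup command; a minimal example (database name and path are placeholders):

    -- Native backup compression (Enterprise in SQL Server 2008;
    -- available in Standard edition from 2008 R2 onward).
    BACKUP DATABASE MyDatabase
    TO DISK = N'D:\Backups\MyDatabase.bak'
    WITH COMPRESSION, INIT;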

  • Unfortunately I cannot upgrade this instance. I have two instances, this one and an R2 instance; on the R2 instance I do use native compression. Commented Aug 26, 2011 at 18:32
  • Or upgrade to 2014, which now has compression in Standard edition
    – Brain2000
    Commented Jan 16, 2017 at 20:28

