Taking a SQL Server database backup can take a very long time on production servers, and backup performance is a common DBA complaint, so in this blog post I would like to highlight a few tips to improve it. Remember that once the BACKUP DATABASE command is executed, SQL Server automatically issues a 'Checkpoint' to reduce the recovery time and to make sure that, at the point of command execution, there are no dirty pages in the buffer pool. After that, SQL Server creates at least three workers, 'Controller', 'Stream Reader' and 'Stream Writer', to read and buffer the data asynchronously into the buffer area (outside the buffer pool) and to write the buffers to the backup device. You may read my previous blog post on SQL Server Backup Internals for further details.
[info]Be aware that there is one 'Stream Reader' thread per volume and one 'Stream Writer' thread per backup device.[/info]
Many factors affect SQL Server backup performance, such as hardware, storage configuration, Windows configuration, SQL Server configuration and memory capacity. The following are my backup performance tuning tips.
Enable ‘Instant File Initialization’ in Windows Configuration
Data and log files are initialized to overwrite any existing data left on the disk from previously deleted files. Data and log files are first initialized by filling the files with zeros when you perform one of the following operations:
- Create a database.
- Add files, log or data, to an existing database.
- Increase the size of an existing file (including autogrow operations).
- Restore a database or filegroup.
File initialization causes these operations to take longer. However, when data is written to the files for the first time, the operating system does not have to fill the files with zeros. In SQL Server, data files can be initialized instantaneously. This allows for fast execution of the previously mentioned file operations. Instant file initialization reclaims used disk space without filling that space with zeros. Instead, disk content is overwritten as new data is written to the files. Log files cannot be initialized instantaneously.
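As a quick check, on SQL Server 2016 SP1 and later you can see whether instant file initialization is active from the `sys.dm_server_services` DMV; on older versions, verify instead that the service account holds the 'Perform volume maintenance tasks' privilege.

```sql
-- Check whether instant file initialization is enabled
-- (instant_file_initialization_enabled is available from SQL Server 2016 SP1)
SELECT servicename,
       instant_file_initialization_enabled
FROM   sys.dm_server_services
WHERE  servicename LIKE 'SQL Server (%';
```

A value of 'Y' means data files are initialized instantaneously; remember that log files still cannot be.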
Use Data Compression
Compression feature in SQL Server helps compress the data inside a database, and it can help reduce the size of the database. Apart from the space savings, data compression provides another benefit: Because compressed data is stored in fewer pages, queries need to read fewer pages from the disk, thereby improving the performance of I/O intensive workloads. However, extra CPU resources are required on the database server to compress and decompress the data, while data is exchanged with the application. Therefore, it is important to understand the workload characteristics when deciding which tables to compress.
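As a sketch of the workflow, you can estimate the savings first and then rebuild a table with page compression; the table name below is a placeholder.

```sql
-- Estimate the space saved by PAGE compression (table name is hypothetical)
EXEC sp_estimate_data_compression_savings
     @schema_name      = 'dbo',
     @object_name      = 'SalesOrderDetail',
     @index_id         = NULL,
     @partition_number = NULL,
     @data_compression = 'PAGE';

-- Apply PAGE compression to the table and its clustered index
ALTER TABLE dbo.SalesOrderDetail
REBUILD WITH (DATA_COMPRESSION = PAGE);
```

Estimating first helps you skip tables where the CPU cost would outweigh the space and I/O savings.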
Balance Stream Reader and Stream Writer Threads
The number of data files and backup devices should be equal to provide balanced 'Stream Reader' and 'Stream Writer' threads. The 'Stream Reader' thread loads data into the buffer area asynchronously and the 'Stream Writer' thread writes the data to the backup device; once a buffer has been written to the backup device, it goes back into the 'Stream Reader' queue to be filled up, and later into the 'Stream Writer' queue to be emptied. The buffer count varies, and the buffer size can range from 64 KB to 4 MB.
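One way to get an extra 'Stream Writer' thread is to stripe the backup across several devices; the paths, BUFFERCOUNT and MAXTRANSFERSIZE values below are illustrative starting points, not universal recommendations.

```sql
-- Stripe the backup across two devices so each gets its own
-- 'Stream Writer' thread; tune BUFFERCOUNT/MAXTRANSFERSIZE for
-- your storage (paths and values here are examples only).
BACKUP DATABASE AdventureWorks
TO DISK = 'E:\Backup\AdventureWorks_1.bak',
   DISK = 'F:\Backup\AdventureWorks_2.bak'
WITH BUFFERCOUNT = 50,
     MAXTRANSFERSIZE = 4194304,  -- 4 MB, the maximum transfer size
     STATS = 10;
```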
Use Multiple Physical Disks
Try to use dedicated physical disks for the database data files and backup devices to improve backup performance. You may also use a SAN with a dedicated LUN, dedicated physical disks per LUN, and a proper read/write cache setting.
Reserve Enough Memory for SQL Server MTL
MTL stands for MemToLeave, a memory region in the SQL Server process that holds the plan cache, SQLMGR, DLLs, thread working sets and backup buffers. Therefore, reserving sufficient memory space for MTL improves backup performance. You may read my SQL Server Memory Capacity Planning blog post for more information.
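To see how virtual memory outside the buffer pool is being consumed, you can inspect the memory clerks; this is a rough diagnostic sketch, not a sizing formula.

```sql
-- Top memory clerks by committed virtual memory (outside the buffer pool)
SELECT TOP (10)
       [type],
       SUM(virtual_memory_committed_kb) AS vm_committed_kb
FROM   sys.dm_os_memory_clerks
GROUP  BY [type]
ORDER  BY vm_committed_kb DESC;
```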
Clear the backup history in MSDB monthly
A complete history of all SQL Server backup and restore operations on a server instance is stored in the msdb database. As this history grows, writing the new history rows that accompany each backup gets slower, so clear it regularly.
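msdb ships with a stored procedure for pruning this history; for example, a monthly job could keep only the last 30 days (the cutoff below is illustrative).

```sql
-- Delete backup/restore history older than 30 days from msdb
DECLARE @cutoff DATETIME = DATEADD(DAY, -30, GETDATE());
EXEC msdb.dbo.sp_delete_backuphistory @oldest_date = @cutoff;
```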
Use Backup Compression Feature
Because a compressed backup is smaller than an uncompressed backup of the same data, compressing a backup typically requires less device I/O and therefore usually increases backup speed significantly. By default, compression significantly increases CPU usage, and the additional CPU consumed by the compression process might adversely impact concurrent operations. Therefore, you might want to create low-priority compressed backups in a session whose CPU usage is limited by Resource Governor.
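A minimal compressed backup looks like this (the path is a placeholder); if CPU pressure is a concern, run it in a session mapped to a low-priority Resource Governor workload group.

```sql
-- Take a compressed backup; STATS reports progress every 10 percent
BACKUP DATABASE AdventureWorks
TO DISK = 'E:\Backup\AdventureWorks_compressed.bak'
WITH COMPRESSION,
     STATS = 10;
```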
The compression ratio of a compressed backup depends on the data that has been compressed. A variety of factors can impact the compression ratio obtained. Major factors include:
- The type of data: Character data compresses more than other types of data.
- The consistency of the data among rows on a page: Typically, if a page contains several rows in which a field contains the same value, significant compression might occur for that value. In contrast, for a database that contains random data or that contains only one large row per page, a compressed backup would be almost as large as an uncompressed backup.
- Whether the data is encrypted: Encrypted data compresses significantly less than equivalent unencrypted data. If transparent data encryption is used to encrypt an entire database, compressing backups might not reduce their size by much, if at all.
- Whether the database is compressed: If the database is compressed, compressing backups might not reduce their size by much, if at all.
For compressed backups, the size of the final backup file depends on how compressible the data is, and this is unknown before the backup operation finishes. Therefore, by default, when backing up a database using compression, the Database Engine uses a pre-allocation algorithm for the backup file. This algorithm pre-allocates a predefined percentage of the size of the database for the backup file. If more space is needed during the backup operation, the Database Engine grows the file. If the final size is less than the allocated space, at the end of the backup operation, the Database Engine shrinks the file to the actual final size of the backup.
[warning]Backup compression is only available in Enterprise Edition, and it might not reduce the backup duration because of the extra CPU cost of compression.[/warning]
To allow the backup file to grow only as needed to reach its final size, use trace flag 3042. Trace flag 3042 causes the backup operation to bypass the default backup compression pre-allocation algorithm. This trace flag is useful if you need to save on space by allocating only the actual size required for the compressed backup. However, using this trace flag might cause a slight performance penalty (a possible increase in the duration of the backup operation).
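The trace flag can be enabled instance-wide before the backup and disabled afterwards:

```sql
-- Bypass the pre-allocation algorithm for compressed backups
DBCC TRACEON (3042, -1);

BACKUP DATABASE AdventureWorks
TO DISK = 'E:\Backup\AdventureWorks_compressed.bak'
WITH COMPRESSION;

DBCC TRACEOFF (3042, -1);
```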
Use Native SQL Server Backup
The native SQL Server backup process runs while your database is receiving transactions and actively reading and writing data files, unlike some third-party tools that freeze database I/O, which can cause user transaction timeouts or temporary inaccessibility.
Balance the ‘Disk Write Queue Length’ and Database Backup Threads
'Disk Write Queue Length' indicates the number of disk write requests that are currently waiting, plus those currently being serviced. It is subject to wide variation unless the workload has reached a steady state and you have collected enough samples to establish a pattern. The counter value should not exceed 1 for non-RAID storage; if you are using a SAN with the read/write cache configured at 70/30 (or, in some cases, dynamic), the counter may reasonably stay below 1000.
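Alongside the Performance Monitor counter, write latency can also be inspected from inside SQL Server; this query is a rough sketch showing cumulative write stalls per database file since instance startup.

```sql
-- Cumulative write stalls per database file since instance startup
SELECT DB_NAME(vfs.database_id)  AS database_name,
       mf.physical_name,
       vfs.num_of_writes,
       vfs.io_stall_write_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0)
                               AS avg_write_stall_ms
FROM   sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN   sys.master_files AS mf
       ON mf.database_id = vfs.database_id
      AND mf.file_id     = vfs.file_id
ORDER  BY vfs.io_stall_write_ms DESC;
```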
The following Performance Monitor charts are based on Case E, with and without the buffer configuration.
Without Buffer Configuration
With Buffer Configuration
Backup Performance Comparison
I have defined a few cases for the database backup comparison. The test database is 150 GB with 131 GB of data, and the storage I/O benchmark is 214 MB/s per thread for read and 250 MB/s per thread for write operations.
The following table shows the backup process configurations:
The following charts represent the backup process performance results.
Without Buffer Configuration
With Buffer Configuration