DBAs are always under pressure to boost database performance by tuning queries, refining the database design, and placing data files carefully, but those efforts only improve performance up to a point; beyond that, faster storage hardware is required. In this blog post, I will share how to improve the performance of Create, Update, and Delete (CUD) operations by using server memory as a disk drive.
Before showing my test results, I would like to briefly explain how SQL Server handles transactions. Once a CUD query is submitted, the SQL Server relational engine creates an execution plan based on the existing statistics objects, or reuses an existing plan. The Query Executor runs the plan and asks the Storage Engine to perform the work, as the figure below illustrates.
The performance impact starts here: the Transaction Manager must write the transaction into the LDF file to make sure it is durable (the D in ACID). There is a buffer for transactions called the “Log Pool”, which holds recent transactions, groups them, and flushes them to disk. Prior to SQL Server 2012 the “Log Pool” size was 64KB; from SQL Server 2012 onward it is 112KB. The Transaction Manager flushes the “Log Pool” content to disk under any of the following conditions:
- The Log Pool is full.
- One or more transactions are committed.
- A checkpoint command is issued.
Once the Transaction Manager flushes the “Log Pool” content to disk, it acknowledges the other components and sends the result to the client application.
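If you want to watch this flushing behavior on your own instance, SQL Server's built-in sys.dm_os_performance_counters DMV exposes the relevant counters. A minimal sketch (the database name POS_DISK is taken from the test later in this post; substitute your own):

```sql
-- Sketch: observe log-flush activity for one database via a built-in DMV.
SELECT instance_name AS database_name,
       counter_name,
       cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%:Databases%'
  AND counter_name IN ('Log Flushes/sec', 'Log Bytes Flushed/sec')
  AND instance_name = N'POS_DISK';
```

Note that these cntr_value figures are cumulative since instance startup; tools such as Performance Monitor derive the per-second rate by sampling twice and dividing by the interval.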
Disk I/O Latency vs. RAM I/O Latency
Everybody knows that RAM is much faster than disk, but by how much? Disk I/O latency is measured in milliseconds, while RAM I/O latency is measured in nanoseconds. Disk I/O latency also depends heavily on the I/O pattern, sequential versus random, whereas RAM I/O latency does not depend on the pattern at all.
Even a fast disk will slow your application down if there is too much disk I/O. If disk time and disk queue length keep growing from excessive read/write activity, disk access may be the performance bottleneck.
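One way to quantify the latency your log files actually experience is the sys.dm_io_virtual_file_stats DMV, which reports cumulative I/O stalls per database file. A rough sketch:

```sql
-- Sketch: average write latency per transaction-log file,
-- derived from cumulative I/O stall statistics since startup.
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.num_of_writes,
       CASE WHEN vfs.num_of_writes = 0 THEN 0
            ELSE vfs.io_stall_write_ms / vfs.num_of_writes
       END AS avg_write_latency_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON  vfs.database_id = mf.database_id
  AND vfs.file_id     = mf.file_id
WHERE mf.type_desc = 'LOG';
```

An average write latency of more than a few milliseconds on the log file is a sign that log flushes, not query plans, are limiting your CUD throughput.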
How to Boost Performance?
Now that you understand, at a high level, how SQL Server handles transactions, it follows that to improve transaction performance we need a disk that performs I/O as fast as RAM. There are several options, such as RAM disks, NVRAM, or NVDIMM, but all of them are quite expensive and have limited capacity; NVDIMM modules currently max out at 8GB.
Why not use server memory as a virtual disk through third-party software? I ran my test using RAMDisk software from DATARAM, which supports up to 128GB in some editions (I used the trial version).
The following figure shows my disk volumes: POS-RAM is the RAM-based disk and POS-DISK is the physical disk.
For my test, I created two databases, POS_Disk and POS_RAM, with each log file placed on the corresponding volume. The following script creates the databases:
CREATE DATABASE [POS_DISK]
CONTAINMENT = NONE
ON PRIMARY
( NAME = N'POS_DISK',
  FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\DATA\POS_DISK.mdf',
  SIZE = 256000KB, MAXSIZE = UNLIMITED, FILEGROWTH = 1024KB )
LOG ON
( NAME = N'POS_DISK_log',
  FILENAME = N'E:\POS_DISK_log.ldf',
  SIZE = 65536KB, MAXSIZE = 987136KB, FILEGROWTH = 65536KB )
GO
USE POS_DISK;
GO
CREATE TABLE Transactions (ID BIGINT IDENTITY(1,1), TransactionPadding BINARY(30));
GO

CREATE DATABASE [POS_RAM]
CONTAINMENT = NONE
ON PRIMARY
( NAME = N'POS_RAM',
  FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\DATA\POS_RAM.mdf',
  SIZE = 256000KB, MAXSIZE = UNLIMITED, FILEGROWTH = 1024KB )
LOG ON
( NAME = N'POS_RAM_log',
  FILENAME = N'F:\POS_RAM_log.ldf',
  SIZE = 65536KB, MAXSIZE = 987136KB, FILEGROWTH = 65536KB )
GO
USE POS_RAM;
GO
CREATE TABLE Transactions (ID BIGINT IDENTITY(1,1), TransactionPadding BINARY(30));
GO
The following script inserts 1,000,000 records into the Transactions table in each database. Be sure to run the two scripts in separate connections so the results can be compared.
--Run in separate connections
INSERT INTO [POS_DISK].dbo.Transactions DEFAULT VALUES
GO 1000000

INSERT INTO [POS_RAM].dbo.Transactions DEFAULT VALUES
GO 1000000
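If your client tool does not support the SSMS-specific `GO 1000000` batch repeater, an explicit loop produces the same one-insert-per-transaction workload and times itself as well; a sketch:

```sql
-- Sketch: equivalent workload to "GO 1000000", with built-in timing.
-- Each INSERT is its own autocommit transaction, forcing a log flush per row.
SET NOCOUNT ON;
DECLARE @i INT = 0, @start DATETIME2 = SYSDATETIME();
WHILE @i < 1000000
BEGIN
    INSERT INTO [POS_DISK].dbo.Transactions DEFAULT VALUES;
    SET @i += 1;
END;
SELECT DATEDIFF(SECOND, @start, SYSDATETIME()) AS elapsed_seconds;
```

Run the same loop against [POS_RAM].dbo.Transactions in a second connection for the comparison.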
[info]Make sure that Performance Monitor is already set up before executing the test script.[/info]
The first comparison is the Write Transactions/sec counter, which shows that the POS_RAM database performs almost 50% better than the POS_DISK database under the same workload.
The next performance counter to observe is Log Flushes/sec, which shows how many times per second the “Log Pool” content is flushed into the LDF file. As the figure shows, the RAM-based volume is dramatically faster here.
The next performance counter is Disk Writes/sec, which shows how many write I/Os occur against each volume. As you can see, Disk Writes/sec averages 3,473.713 for the POS_DISK database, but 5,546.017 write operations per second for the POS_RAM database, a substantial gain without spending much money on server hardware.
Finally, let's look at the elapsed time for both connections. The following figure shows that POS_RAM finished in 3:03 while POS_DISK took 4:32.
You can use RAMDisk software to turn unutilized server memory into a disk volume for the database transaction log file, boosting CUD transaction performance by dropping I/O latency from milliseconds to nanoseconds. This solution also works well if you have a SQL Server AlwaysOn Availability Group in synchronous mode, reducing HADR_REDO_THREAD wait time and increasing transaction log commit performance on the replica database.
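Before trying this in an Availability Group, you can check whether replica-related waits are actually a bottleneck by querying the cumulative wait statistics; a sketch:

```sql
-- Sketch: cumulative Availability Group (HADR) wait statistics
-- since instance startup, largest total wait time first.
SELECT wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type LIKE N'HADR%'
ORDER BY wait_time_ms DESC;
```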
[warning]This solution is very subjective, and you must do your own assessment and evaluation against your production server. Fard Solutions Sdn Bhd and the author are not responsible for any damage caused by using this solution in your SQL Server environment.[/warning]