For a website or an application to retain its users, it must run fast. Hanging and lagging are deal-breakers for many. For users in critical work environments that demand high throughput and processing speed, delayed information can create big problems and make their work less efficient than it should be. As the database grows, fetching data and writing it back should remain as fast as possible so that every process runs smoothly. To keep these processes fast, you should periodically tune the database servers for performance.
How to Tune a SQL Server Database?
This article describes step-by-step database performance tuning on one of the most popular and widely used database servers on the market: Microsoft SQL Server.
1- Finding the culprit
When the database is not performing as it should, we need to discover why it is not delivering the expected results. The first things to check are:
- The installation and hardware settings may need correcting.
- Server needs are specific; the wrong settings can lead to malfunctions.
- The T-SQL code submitted to SQL Server for execution is correct.
If all of the above is in order but SQL Server is still lagging, observe the performance of the different threads by examining their wait statistics. These can be queried through the sys.dm_os_wait_stats dynamic management view (DMV).
It is important to know that these are not operating system threads of the machine where SQL Server is installed; they are scheduled by SQL Server's own pseudo operating system (SQLOS). The dynamic management view exposes their present state. Paul Randal's script is among the many scripts that query this DMV. It is easy to understand and includes the most important columns used to observe wait statistics.
WITH [Waits] AS
(
    SELECT
        [wait_type],
        [wait_time_ms] / 1000.0 AS [WaitS],
        ([wait_time_ms] - [signal_wait_time_ms]) / 1000.0 AS [ResourceS],
        [signal_wait_time_ms] / 1000.0 AS [SignalS],
        [waiting_tasks_count] AS [WaitCount],
        100.0 * [wait_time_ms] / SUM ([wait_time_ms]) OVER() AS [Percentage],
        ROW_NUMBER() OVER(ORDER BY [wait_time_ms] DESC) AS [RowNum]
    FROM sys.dm_os_wait_stats
    WHERE [wait_type] NOT IN (
        N'BROKER_EVENTHANDLER', N'BROKER_RECEIVE_WAITFOR', N'BROKER_TASK_STOP',
        N'BROKER_TO_FLUSH', N'BROKER_TRANSMITTER', N'CHECKPOINT_QUEUE', N'CHKPT',
        N'CLR_AUTO_EVENT', N'CLR_MANUAL_EVENT', N'CLR_SEMAPHORE',
        N'DBMIRROR_DBM_EVENT', N'DBMIRROR_EVENTS_QUEUE', N'DBMIRROR_WORKER_QUEUE',
        N'DBMIRRORING_CMD', N'DIRTY_PAGE_POLL', N'DISPATCHER_QUEUE_SEMAPHORE',
        N'EXECSYNC', N'FSAGENT', N'FT_IFTS_SCHEDULER_IDLE_WAIT', N'FT_IFTSHC_MUTEX',
        N'HADR_CLUSAPI_CALL', N'HADR_FILESTREAM_IOMGR_IOCOMPLETION',
        N'HADR_LOGCAPTURE_WAIT', N'HADR_NOTIFICATION_DEQUEUE', N'HADR_TIMER_TASK',
        N'HADR_WORK_QUEUE', N'KSOURCE_WAKEUP', N'LAZYWRITER_SLEEP', N'LOGMGR_QUEUE',
        N'ONDEMAND_TASK_QUEUE', N'PWAIT_ALL_COMPONENTS_INITIALIZED',
        N'QDS_PERSIST_TASK_MAIN_LOOP_SLEEP',
        N'QDS_CLEANUP_STALE_QUERIES_TASK_MAIN_LOOP_SLEEP',
        N'REQUEST_FOR_DEADLOCK_SEARCH', N'RESOURCE_QUEUE', N'SERVER_IDLE_CHECK',
        N'SLEEP_BPOOL_FLUSH', N'SLEEP_DBSTARTUP', N'SLEEP_DCOMSTARTUP',
        N'SLEEP_MASTERDBREADY', N'SLEEP_MASTERMDREADY', N'SLEEP_MASTERUPGRADED',
        N'SLEEP_MSDBSTARTUP', N'SLEEP_SYSTEMTASK', N'SLEEP_TASK',
        N'SLEEP_TEMPDBSTARTUP', N'SNI_HTTP_ACCEPT', N'SP_SERVER_DIAGNOSTICS_SLEEP',
        N'SQLTRACE_BUFFER_FLUSH', N'SQLTRACE_INCREMENTAL_FLUSH_SLEEP',
        N'SQLTRACE_WAIT_ENTRIES', N'WAIT_FOR_RESULTS', N'WAITFOR',
        N'WAITFOR_TASKSHUTDOWN', N'WAIT_XTP_HOST_WAIT',
        N'WAIT_XTP_OFFLINE_CKPT_NEW_LOG', N'WAIT_XTP_CKPT_CLOSE',
        N'XE_DISPATCHER_JOIN', N'XE_DISPATCHER_WAIT', N'XE_TIMER_EVENT')
    AND [waiting_tasks_count] > 0
)
SELECT
    MAX ([W1].[wait_type]) AS [WaitType],
    CAST (MAX ([W1].[WaitS]) AS DECIMAL (16, 2)) AS [Wait_S],
    CAST (MAX ([W1].[ResourceS]) AS DECIMAL (16, 2)) AS [Resource_S],
    CAST (MAX ([W1].[SignalS]) AS DECIMAL (16, 2)) AS [Signal_S],
    MAX ([W1].[WaitCount]) AS [WaitCount],
    CAST (MAX ([W1].[Percentage]) AS DECIMAL (5, 2)) AS [Percentage],
    CAST (MAX ([W1].[WaitS]) / MAX ([W1].[WaitCount]) AS DECIMAL (16, 4)) AS [AvgWait_S],
    CAST (MAX ([W1].[ResourceS]) / MAX ([W1].[WaitCount]) AS DECIMAL (16, 4)) AS [AvgRes_S],
    CAST (MAX ([W1].[SignalS]) / MAX ([W1].[WaitCount]) AS DECIMAL (16, 4)) AS [AvgSig_S]
FROM [Waits] AS [W1]
INNER JOIN [Waits] AS [W2] ON [W2].[RowNum] <= [W1].[RowNum]
GROUP BY [W1].[RowNum]
HAVING SUM ([W2].[Percentage]) - MAX ([W1].[Percentage]) < 95; -- percentage threshold
GO
The rows at the top of the output matter most because the result is sorted by accumulated wait time in descending order, so they represent the dominant wait types. To make the correct decisions, we need to learn and understand these wait types.
As an example, suppose PAGEIOLATCH_XX waits appear at the top far too often. This points to a memory bottleneck: threads are waiting for pages to be read from disk into the buffer pool. It does not necessarily mean the I/O subsystem is poor or that memory is insufficient; upgrading either will only solve the problem temporarily.
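As a quick first check, the raw DMV can be filtered directly for this wait family. The query below is a minimal sketch that lists the accumulated PAGEIOLATCH_* waits since the statistics were last cleared:

-- Quick check: accumulated PAGEIOLATCH_* waits
SELECT [wait_type], [waiting_tasks_count], [wait_time_ms], [signal_wait_time_ms]
FROM sys.dm_os_wait_stats
WHERE [wait_type] LIKE N'PAGEIOLATCH_%'
ORDER BY [wait_time_ms] DESC;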
To find the right solution, database administrators and other technicians need to think through the problem from different angles and run sample tests to reach a permanent fix. Each wait type calls for a different remedy, which only emerges in the course of tuning the database's performance. Finding and tuning problematic T-SQL solves 60% to 70% of the problems.
2- Finding problematic queries
The following T-SQL code returns the 20 worst-performing queries:
SELECT TOP 20
    total_worker_time / execution_count AS Avg_CPU_Time,
    execution_count,
    total_elapsed_time / execution_count AS Avg_Run_Time,
    total_elapsed_time,
    -- Extract just the offending statement from the containing batch
    (SELECT SUBSTRING([text],
                      statement_start_offset / 2 + 1,
                      (CASE WHEN statement_end_offset = -1  -- -1 means "to the end of the batch"
                            THEN DATALENGTH([text])
                            ELSE statement_end_offset
                       END - statement_start_offset) / 2 + 1)
     FROM sys.dm_exec_sql_text(sql_handle)) AS Query_Text
FROM sys.dm_exec_query_stats
ORDER BY Avg_CPU_Time DESC;
Examine the results carefully: a query that runs only once can top the list on average run time, yet its total impact on the server is low compared to a query with a medium average run time that executes many times a day.
3- Fine-tuning queries
The key skill is knowing how to write queries and apply indexes so that the SQL Server query optimizer can find an optimized plan for the work we want done. Each new release of SQL Server brings a more sophisticated optimizer that covers more of our mistakes in writing less-than-optimal queries and fixes bugs found in the previous optimizer. But however sophisticated it becomes, we still need to give the optimizer what it needs to do its job efficiently.
To make educated guesses about why SQL Server is taking certain actions, we need to understand its search and sorting algorithms. Having identified the queries that need tuning, the next step is to see how the server interprets them through the execution plan (a way to capture it is sketched after this list). Things to consider while examining the execution plan:
- Find out the operators that take most of the query cost.
- Find out why an operator is taking too much time. Scans are usually more costly than seeks, so examine why a particular scan, whether an index scan or a table scan, is being executed instead of an index seek. There is no out-of-the-box fix, but implementing proper indexes on the table columns usually resolves it.
- It is helpful to know roughly all 78 operators that represent the various actions and decisions of an execution plan. You can consult the Microsoft documentation to understand them.
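One way to obtain the plan without actually executing the query is SET SHOWPLAN_XML. The sketch below assumes a hypothetical dbo.Orders table with OrderId, OrderDate, and CustomerId columns:

SET SHOWPLAN_XML ON;  -- must be the only statement in its batch
GO
-- The query under investigation (dbo.Orders and its columns are assumptions)
SELECT OrderId, OrderDate
FROM dbo.Orders
WHERE CustomerId = 42;
GO
SET SHOWPLAN_XML OFF;
GO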
4- Execution plan reuse
Even with proper indexes on tables and well-written T-SQL code, we will run into performance issues if the execution plan is not reused. After fine-tuning the queries, we need to make sure the execution plan can be reused when needed.
Implementing parameterized stored procedures is one of the best ways to reuse the execution plan. When stored procedures cannot be implemented, we can use sp_executesql to run parameterized T-SQL statements where the parameter values are the only thing that changes between executions. In most such cases, SQL Server will reuse the execution plan it generated on the first execution.
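A minimal sketch of this pattern, again assuming the hypothetical dbo.Orders table:

-- Parameterized ad-hoc SQL: the plan is cached once and reused for any @CustomerId
DECLARE @sql NVARCHAR(MAX) =
    N'SELECT OrderId, OrderDate FROM dbo.Orders WHERE CustomerId = @CustomerId;';
EXEC sp_executesql @sql, N'@CustomerId INT', @CustomerId = 42;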
By running the query below, you can see how often each cached execution plan has been reused; the usecounts column holds the number of times the plan was reused:
SELECT [ecp].[refcounts],
       [ecp].[usecounts],
       [ecp].[objtype],
       DB_NAME([est].[dbid]) AS [db_name],
       [est].[objectid],
       [est].[text] AS [query_text],
       [eqp].[query_plan]
FROM sys.dm_exec_cached_plans ecp
CROSS APPLY sys.dm_exec_sql_text(ecp.plan_handle) est
CROSS APPLY sys.dm_exec_query_plan(ecp.plan_handle) eqp;
There are circumstances where the execution plan is no longer valid, including:
- When the query indexes are dropped or changed.
- When the statistics, schema, or structure of a table used by the query change.
- When the “recompile” option is used (see the sketch after this list).
- When there are too many inserts, deletes, or updates.
- When DML and DDL statements are mixed within a single batch.
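To illustrate the “recompile” option from the list above, the hint below forces a fresh plan on every execution instead of reusing a cached one (dbo.Orders is again a hypothetical table):

SELECT OrderId, OrderDate
FROM dbo.Orders
WHERE CustomerId = 42
OPTION (RECOMPILE);  -- compile a fresh plan for this statement; do not reuse a cached one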
5- Removing unnecessary indexes
After tuning the queries, we need to check how the indexes are used. Maintaining indexes costs a lot of I/O and CPU time, because the server must update every index each time data is inserted, updated, or deleted. It is wise to remove indexes that are no longer used to reclaim that performance. The query below shows the usage statistics; indexes with many user_updates but no seeks, scans, or lookups are candidates for removal.
SELECT OBJECT_NAME(ius.[object_id]) AS [object_name],
       DB_NAME(ius.database_id) AS [database_name],
       i.[name] AS [index_name],
       ius.user_seeks,
       ius.user_scans,
       ius.user_lookups,
       ius.user_updates
FROM sys.dm_db_index_usage_stats AS ius
INNER JOIN sys.indexes AS i
    ON i.[object_id] = ius.[object_id]
   AND i.index_id = ius.index_id;
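Once an index has been confirmed to be unused, dropping it is a one-liner; the index and table names below are assumptions:

-- Drop an index identified as unused by the usage statistics above
DROP INDEX IX_Orders_LegacyStatus ON dbo.Orders;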
6- Server installation and database setup
During database setup, it is advisable to keep the log and data files on separate physical drives. The reason is that access to data files is random, while writing to log files is sequential; placing both on the same drive prevents either from being used in an optimized way.
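A minimal sketch of such a layout, assuming hypothetical drive letters, paths, and database names:

-- Data file on one drive (D:), log file on another (E:); names and paths are assumptions
CREATE DATABASE SalesDb
ON PRIMARY
    (NAME = SalesDb_data, FILENAME = 'D:\SQLData\SalesDb.mdf', SIZE = 10GB)
LOG ON
    (NAME = SalesDb_log, FILENAME = 'E:\SQLLogs\SalesDb_log.ldf', SIZE = 2GB);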
Vendor recommendations, such as those for setting up a Storage Area Network, may not always be helpful. Have a clear discussion with the hardware and networking specialists about keeping log and data files separate so that both can be fully optimized.
7- Do not overload SQL Server
A database administrator's main duty is to keep the database running smoothly so that it serves users efficiently. For this to be possible, maintain separate databases on separate machines for the production, development, testing, and analytical environments.
A production database should use the full recovery model; the other databases can get by with the simple recovery model. Testing against the production database can overload the transaction log, indexes, I/O, and CPU, which is why hosting the databases on different machines reduces the load carried by the input/output subsystem and the CPU.
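Setting the recovery model is a single statement per database; the database names below are assumptions:

-- Production keeps full recovery; a development copy runs with simple recovery
ALTER DATABASE SalesDb     SET RECOVERY FULL;
ALTER DATABASE SalesDb_Dev SET RECOVERY SIMPLE;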
8- Memory, transaction log, and tempdb
It is fundamental for log files to have enough free space for normal operations, because an auto-grow operation on a log file forces other operations to wait until it completes, which is very time-consuming. We can use DBCC SQLPERF(LOGSPACE) to find the log size of each database and how much of it is in use.
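For example:

-- One row per database: log size in MB and the percentage of log space in use
DBCC SQLPERF(LOGSPACE);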
tempdb should live on its own separate disk. Set its initial size as large as you can accommodate, because the database's performance decreases every time an auto-grow event occurs.
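A sketch of pre-sizing tempdb's default files (the sizes are assumptions; tempdev and templog are the default logical file names):

-- Pre-size tempdb so auto-grow rarely, if ever, triggers
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 8192MB);
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, SIZE = 2048MB);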
Leave around 2 GB of memory free for the operating system, and more if the machine is part of a cluster. SQL Server should run on a dedicated machine that does not host any other applications.
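One way to enforce this is the max server memory setting; the value below assumes a machine with 16 GB of RAM:

-- Cap SQL Server's memory so roughly 2 GB stays free for the operating system
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 14336;  -- 14 GB of an assumed 16 GB
RECONFIGURE;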
Conclusion
Customers and users prefer websites and applications that run fast and serve their needs efficiently; a system that is slow and keeps hanging is a deal-breaker. The steps and suggestions discussed in this article are intended purely for performance tuning. Following them, we may see a 40% to 50% performance improvement on average; to achieve greater gains, you need to research each of the steps above more deeply.