
THE SQL SERVER MISCONFIGURATION CHRONICLES: PART 3 Database Files

In the previous part of the SQL Server Misconfiguration Chronicles we handled the default instance memory configuration (https://swyssql.wordpress.com/2015/12/17/the-sql-server-misconfiguration-chronicles-part-2-instance-memory-settings/).

In the third part of the misconfiguration chronicles we will handle the database files. If you create a database, SQL Server will look at your model database and create the file structure as specified there. By default this means you will get 1 data file with a size of 4 MB and 1 log file with a size of 1 MB. These files can grow automatically through auto growth, where the data file grows in steps of 1 MB and the log file increases by 10% with each growth.

Why is this a bad thing?

In this episode of the misconfiguration chronicles, this is actually a multi-part question.

  1. Why is the data file configuration not optimal?

It all starts with another question: how big is your database going to be? If your database will only ever contain 7 MB of data, this data file setup will work for you, but when you have to scale your database it will not be a good configuration. Let's say you are going to have a database into which 4 GB of data is initially loaded, and this might grow to 8 GB within the next year. In this situation the default data file setup is not optimal. If you leave the default settings on, your data file will grow automatically roughly 4,092 times during your initial load alone (from 4 MB to 4,096 MB in 1 MB steps), causing file fragmentation! This will make your IO calls a lot slower, because you have to access your data on a fragmented file. (For more information on fragmented files and how to solve them, the following blog gives a perfect explanation of the problem and the fix: https://www.mssqltips.com/sqlservertip/3008/solving-sql-server-database-physical-file-fragmentation/)
This fragmentation will definitely impact your performance, causing your SQL Server to perform slower or even throw errors due to file system limitations. You can prevent this issue by sizing your database correctly from the start: for example, give your initial database file a size of 4096 MB with an auto growth setting of 1024 MB. This will minimize the fragmentation of your file.
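A minimal sketch of that fix, assuming a hypothetical database [SalesDB] whose logical data file is also named SalesDB (check sys.database_files for the real logical names on your system):

USE [master];
GO
-- Pre-size the data file and give it a fixed growth increment
ALTER DATABASE [SalesDB]
MODIFY FILE (NAME = N'SalesDB', SIZE = 4096MB, FILEGROWTH = 1024MB);
GO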
Also important to remember is that SQL Server can use Instant File Initialization for its data files. This has to be set at the OS level: you have to grant the SQL Server service account the right to Perform Volume Maintenance Tasks. Why is Instant File Initialization a good thing? IFI will "reserve" the database file space of your auto growth instead of filling the space with zeros. This keeps your file available during an auto growth (without IFI your transactions become suspended while the file is growing).

Another thing you can do is use multiple data files; this can enhance performance and gives you flexibility in scaling your database. For more information I suggest you read Paul Randal's post on the performance impact of multiple data files (http://www.sqlskills.com/blogs/paul/benchmarking-do-multiple-data-files-make-a-difference/).
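For illustration only (the database name and file path below are hypothetical), adding a second data file could look like this:

USE [master];
GO
-- Add a second, pre-sized data file to the database
ALTER DATABASE [SalesDB]
ADD FILE (NAME = N'SalesDB_2', FILENAME = N'D:\Data\SalesDB_2.ndf', SIZE = 4096MB, FILEGROWTH = 1024MB);
GO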

  2. Why is the Log file configuration not optimal?

For physical file level fragmentation the explanation from the data file configuration also applies here; only the parts about multiple files and Instant File Initialization do not. For multiple log files: SQL Server will only use one transaction log file at a time. If you add another transaction log file, it will not be used until the first file is full and/or has reached its max size. This is because SQL Server writes sequentially into the transaction log, transaction after transaction; it cannot spread its workload over different files.

The first difference with the data file part is that the default here is an auto growth of 10%. This makes your file grow in very small portions at first, but later on 10% can become a very big growth. 10% of 1 MB is not much and will cause heavy fragmentation if you have a large transaction running on your SQL Server. It is better to set an auto growth in MB, according to the workload your SQL Server has to handle. I usually set the auto growth on LOG or DATA files to 64 MB, 128 MB, 256 MB, 512 MB or 1024 MB, depending on the size of the file and the workload.
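As a hedged example, again using the hypothetical [SalesDB] with a logical log file named SalesDB_log, switching the log from percentage growth to a fixed increment could look like this:

USE [master];
GO
-- Replace the default 10% growth with a fixed 256 MB increment
ALTER DATABASE [SalesDB]
MODIFY FILE (NAME = N'SalesDB_log', FILEGROWTH = 256MB);
GO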

The second and most important difference with data files is that log files use something called Virtual Log Files, aka VLFs. When your log file grows, SQL Server adds additional Virtual Log Files. (For more information on how many virtual log files will be created, and on VLFs in general, you can read these great blogs: http://www.sqlskills.com/blogs/kimberly/transaction-log-vlfs-too-many-or-too-few/ & http://www.sqlskills.com/blogs/paul/important-change-vlf-creation-algorithm-sql-server-2014/)
Having too many VLFs will impact your transaction log backup & restore process: these will take longer and longer depending on the number of VLFs in your transaction log file. In extreme cases this might even impact your insert/update/delete performance (as explained on Brent Ozar's website: http://www.brentozar.com/blitz/high-virtual-log-file-vlf-count/).
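If you want a quick look at how many VLFs a database currently has, the (undocumented but long-standing) DBCC LOGINFO command returns one row per VLF:

USE [SalesDB]; -- hypothetical database name
GO
-- One row per Virtual Log File; the row count is your VLF count
DBCC LOGINFO;
GO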

How do you solve this?

There is no straight answer for this one, because it is a multi-part question and it all depends. We can say that the default settings will have to be altered 99% of the time, but the configuration you have to choose always depends on what your size is and will be, and on what your workload will be.

Thank you all for reading and happy reconfiguring!

Stay Tuned!


THE SQL SERVER MISCONFIGURATION CHRONICLES: PART 2 Instance Memory Settings

In the first part of the SQL Server Misconfiguration Chronicles we handled the database-level issues of AutoClose & AutoShrink (https://swyssql.wordpress.com/2015/12/08/the-sql-server-misconfiguration-chronicles-part-1-database-autoclose-autoshrink/).

The second part of the misconfiguration chronicles handles the default memory settings of a freshly installed SQL Server instance. By default SQL Server configures the instance to take a maximum memory of 2,147,483,647 MB, which is about 2 petabytes.

Why is this a bad thing?

Let's say you have configured your SQL Server memory to be able to take 2 petabytes, but your server only has 64 GB of memory available. This means SQL Server will be able to take all the memory that exists on the server. But what happens to the other processes on your server? As you know, you also have to run an OS on your server, and maybe SSIS & SSAS & god knows what else. If SQL Server takes all the memory, it will be constantly fighting with the other processes over the free memory of the server. This results in memory pressure and, with that, heavy CPU consumption, because data constantly has to be swapped between memory and disk, which in turn results in a badly performing server.
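Before you change anything, you can check the current setting; this is a standard query against sys.configurations:

-- Shows the configured and running values for max server memory
select name, value, value_in_use
from sys.configurations
where name = N'max server memory (MB)';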

How do you solve this?

That answer is quite straightforward: lower the default memory setting of your SQL Server to a value which will not take all the memory of your server. For example, if you have a server with 12 GB of RAM, I would start off by giving 8 GB to SQL Server, leaving 4 GB for the OS & other processes. This will result in better performance of your server.

The following code will allow you to alter the default max memory setting of your SQL Server.

----------------------------------------------------------------------
----------------------------------------------------------------------
-- Created By Stijn Wynants
-- SQL MaxMemoryFixer
-- Description: Execute Script to PRINT ALTER ServerMemory
--              Statements for SQL Server Instance
----------------------------------------------------------------------
----------------------------------------------------------------------
DECLARE @ServerMemory int
DECLARE @SM nvarchar(200)
DECLARE @CMD varchar(max)
DECLARE @MemoryForOS int
DECLARE @MemoryOtherProcesses int
----------------------------------------------------------------------
----------------------------------------------------------------------
-- SET YOUR MEMORY FOR SQL SERVER HERE !!!!
SET @ServerMemory = 14000        --> In MB
SET @MemoryForOS = 4096          --> In MB (recommended minimum 2048 MB, optimal 4096 MB; with more memory (64 GB, 256 GB, ...) leave about 10%)
SET @MemoryOtherProcesses = 0    --> In MB (if there are no other processes, just set this to 0)
----------------------------------------------------------------------
----------------------------------------------------------------------
SET @SM = cast(@ServerMemory as nvarchar)
DECLARE @FullSystemMemory as int
-- Total physical memory of the server, in MB
SET @FullSystemMemory = (select top 1 total_physical_memory_kb/1024 from sys.dm_os_sys_memory)
IF @ServerMemory <= (@FullSystemMemory - @MemoryForOS - @MemoryOtherProcesses)
BEGIN
SET @CMD = '
EXEC sp_configure ''show advanced option'', ''1'';
GO
RECONFIGURE
GO
EXEC sys.sp_configure N''max server memory (MB)'', '''+@SM+''';
GO
RECONFIGURE WITH OVERRIDE
GO
EXEC sp_configure ''show advanced option'', ''0'';
GO
RECONFIGURE
GO
'
END
ELSE
BEGIN
SET @CMD = 'There is only '+cast((@FullSystemMemory - 2048) as varchar)+' MB available on this server, you have tried to choose more memory than the server can deliver. Pick a lower value than '+@SM+' MB.'
END
PRINT @CMD

Thank you all for reading and happy reconfiguring!

The next part will be on Data File Misconfiguration!

(https://swyssql.wordpress.com/2015/12/24/the-sql-server-misconfiguration-chronicles-part-3-database-files/)

Stay Tuned!


The SQL Server Misconfiguration Chronicles: Part 1 Database AutoClose & AutoShrink

In the SQL Server consulting world you see a lot of different environments, but mostly you see the same basic misconfigurations in every SQL Server environment.

The following blog posts will show you the most common issues with the configuration of SQL Server: what the issue is, why it is a bad configuration, and how you can solve it.

The first post will be on two common misconfigured database options: AutoClose & AutoShrink.

What does Autoclose do to my database?

AutoClose will close your database when the last user using the database disconnects.

Why is that a bad thing?

Every time a user connects to a SQL database which is closed, the database has to be opened again. This degrades performance because the connecting user has to wait until the database is open before being able to access the data.
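If you want a quick overview of which databases currently have AutoClose switched on, the is_auto_close_on column in sys.databases tells you:

-- Lists every database that still has AutoClose enabled
select name from sys.databases where is_auto_close_on = 1;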

How do we solve this issue?

You can execute the following script, which will generate statements to set auto close to false for all databases on the instance.

----------------------------------------------------------------------
----------------------------------------------------------------------
--        Created By Stijn Wynants
--        SQL AutoClose Fixer
--        Description: Execute Script to PRINT ALTER STATEMENTS
--                     auto_close to false on all databases
----------------------------------------------------------------------
----------------------------------------------------------------------
DECLARE @CMD varchar(max)
DECLARE @Database varchar(200)
DECLARE AutoCloseFixer CURSOR
FOR
-- database_id > 4 skips the system databases (master, tempdb, model, msdb)
select name from sys.databases where database_id > 4 and is_auto_close_on = 1
OPEN AutoCloseFixer
FETCH NEXT FROM AutoCloseFixer INTO @Database
WHILE @@FETCH_STATUS = 0
BEGIN
    BEGIN TRY
        PRINT
'----------> DisableAutoClose on '+@Database+'
USE [master];
GO
ALTER DATABASE ['+@Database+'] SET AUTO_CLOSE OFF WITH NO_WAIT;
PRINT ''Disabled AutoClose on '+@Database+'''
'
    END TRY
    BEGIN CATCH
        SELECT ERROR_MESSAGE()
    END CATCH
    FETCH NEXT FROM AutoCloseFixer INTO @Database
END
CLOSE AutoCloseFixer
DEALLOCATE AutoCloseFixer

What does AutoShrink do to my Database?

It will remove the unused space from your database file, thus making your database file smaller.

Why is that a bad thing?

Because the auto shrink process removes all free space from your database file, there is no space left for new data. So the moment new data gets loaded, your database file has to grow again (this is handled by auto growth, which will be discussed in the next post). The file grows larger than the space you actually needed, which leaves some free space; when the auto shrink feature kicks in again, it removes that space again. This constant shrink-grow-shrink-grow cycle causes file fragmentation (on a system level), and it uses a lot of resources. The shrink operation itself is also expensive, because it moves pages to other places in your database file, which takes CPU & IO and generates transaction log. Last but not least, when auto shrink services your database it will not only shrink your database: because it moves the pages around, it has a devastating side effect! It causes complete fragmentation of all your database indexes, which will make your performance plummet.
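As with AutoClose, sys.databases shows you which databases still have the option enabled:

-- Lists every database that still has AutoShrink enabled
select name from sys.databases where is_auto_shrink_on = 1;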

How do we solve this issue?

You can execute the following script, which will generate statements to set auto shrink to false for all databases on the instance.

----------------------------------------------------------------------
----------------------------------------------------------------------
--        Created By Stijn Wynants
--        SQL AutoShrink Fixer
--        Description: Execute Script to PRINT ALTER STATEMENTS
--                     auto_shrink to false on all databases
----------------------------------------------------------------------
----------------------------------------------------------------------
DECLARE @CMD varchar(max)
DECLARE @Database varchar(200)
DECLARE AutoShrinkFixer CURSOR
FOR
-- database_id > 4 skips the system databases (master, tempdb, model, msdb)
select name from sys.databases where database_id > 4 and is_auto_shrink_on = 1
OPEN AutoShrinkFixer
FETCH NEXT FROM AutoShrinkFixer INTO @Database
WHILE @@FETCH_STATUS = 0
BEGIN
    BEGIN TRY
        PRINT
'----------> DisableAutoShrink on '+@Database+'
USE [master];
GO
ALTER DATABASE ['+@Database+'] SET AUTO_SHRINK OFF WITH NO_WAIT;
PRINT ''Disabled AutoShrink on '+@Database+'''
'
    END TRY
    BEGIN CATCH
        SELECT ERROR_MESSAGE()
    END CATCH
    FETCH NEXT FROM AutoShrinkFixer INTO @Database
END
CLOSE AutoShrinkFixer
DEALLOCATE AutoShrinkFixer

Thank you all for reading and happy reconfiguring!

The next part will be on Default Memory Settings Misconfiguration!
(https://swyssql.wordpress.com/2015/12/17/the-sql-server-misconfiguration-chronicles-part-2-instance-memory-settings/)

Stay Tuned!
