
Using Trace Flags in SQL Server November 14, 2013

Posted by fofo in Sql Server, Sql Server 2005, SQL Server 2008, SQL Server 2012.

I have just finished delivering a seminar on SQL Server 2012, and one of the topics I explained thoroughly was trace flags.

In this post I will try to shed some light on what trace flags are and how we can use them.

Trace flags are used to change the behavior of SQL Server. Please bear in mind that you should use them with caution.

There are hundreds of trace flags that help us troubleshoot and optimise SQL Server installations.

Before turning trace flags on, make sure you understand fully what they do.

There are global trace flags (server level), which stay in effect for as long as SQL Server is running.

There are also session trace flags, which are enabled and disabled at the client session level and affect only the current session.

I will provide some hands-on demos. In the first one I will look into backups and trace flags, and in the second into deadlocks and trace flags.

We use the DBCC TRACEON and DBCC TRACEOFF commands to enable and disable trace flags at both the global and the session level. They do not require a restart of the SQL Server service.
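As a quick illustration (a minimal sketch using trace flag 3226, which also appears later in this post), the same commands switch between session and global scope depending on the -1 argument:

-- session level: affects only the current connection
DBCC TRACEON (3226);
DBCC TRACESTATUS (3226);   -- shows whether this particular flag is on, and at which scope
DBCC TRACEOFF (3226);

-- global level: the -1 argument applies the flag server-wide
DBCC TRACEON (3226, -1);
DBCC TRACEOFF (3226, -1);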

You can also enable them at startup through SQL Server Configuration Manager, by adding a startup parameter for each flag in the -Txxxx format.

Have a look at the picture below.

trace-flags-1
I have SQL Server 2012 Enterprise edition installed on my machine. You can try these examples in any edition/version of SQL Server.

I connect to my local instance through Windows authentication and create a new query window.

USE master
GO

DBCC TRACESTATUS(-1);
GO

First I check whether any trace flags are enabled on my server at the global level. In my case there are none.

Type and execute the following T-SQL code.

First I recycle the SQL Server error log so that I have a brand new one. Then I take a full backup of the AdventureWorksLT2012 database. You can use any database you want.


EXEC sp_cycle_errorlog;   -- start with a fresh error log

BACKUP DATABASE [AdventureWorksLT2012] TO
 DISK = N'C:\sqldata\fulladv.bak'
 WITH NOFORMAT, NOINIT,
 NAME = N'AdventureWorksLT2012-Full Database Backup',
 SKIP, NOREWIND, NOUNLOAD, STATS = 10
GO 20   -- run the backup batch 20 times to generate plenty of error log entries

By default, every backup writes an entry with detailed information to the SQL Server error log.

When I view the SQL Server error log again, I see lots of entries about the backups.

Have a look at the picture below

backup-trace
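If you prefer T-SQL to the Log File Viewer, you can search the current error log with the undocumented xp_readerrorlog procedure (treat this as a hedged example; the parameters are log number 0 = current log, log type 1 = SQL Server error log, and a search string):

EXEC xp_readerrorlog 0, 1, N'BACKUP DATABASE';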

I might want to change that behaviour and write fewer entries to the SQL Server error log, so that the real errors are easier to spot.

In a new query window type (or copy-paste) the following.


--enable this trace flag at the global level
--with trace flag 3226 we can suppress the successful-backup entries in the error log
DBCC TRACEON(3226,-1)

--check to see if flags exist at the global level

DBCC TRACESTATUS(-1)

--recycle the error log again

EXEC sp_cycle_errorlog;

BACKUP DATABASE [AdventureWorksLT2012] TO
 DISK = N'C:\sqldata\fulladv1.bak'
 WITH NOFORMAT, NOINIT,
 NAME = N'AdventureWorksLT2012-Full Database Backup',
 SKIP, NOREWIND, NOUNLOAD, STATS = 10
GO 20

I enable the 3226 flag. Then I check to see if this flag is indeed enabled with the DBCC TRACESTATUS command.

Then I recycle the error log and run the database backup script again. This time the backup entries are not logged in the error log.

Have a look at the picture below.

trace-flags-2
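As a quick check, re-running the same hedged xp_readerrorlog search against the fresh error log should now return no backup entries:

EXEC xp_readerrorlog 0, 1, N'BACKUP DATABASE';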

In this new demo I will show you what trace flags you can enable in order to get deadlock information.

By default deadlock related entries do not go into the SQL Server error log.

We have a deadlock when two separate processes/transactions hold locks on resources that the other one needs. Nothing can proceed, and the situation could go on forever, so SQL Server forces one of the transactions to "die": it selects a victim and kills that process/transaction, usually the one that is least expensive to roll back. We can also influence which session is chosen as the victim by setting the deadlock priority with SET DEADLOCK_PRIORITY LOW. That option makes SQL Server choose that session as the victim when a deadlock occurs.
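As a small illustration (a sketch, separate from the demo that follows), the deadlock priority is set per session and can be reset afterwards:

-- this session volunteers to be the deadlock victim
SET DEADLOCK_PRIORITY LOW;
-- valid values are LOW, NORMAL, HIGH or an integer between -10 and 10
SET DEADLOCK_PRIORITY NORMAL;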

I will use the Northwind database to create a deadlock. If you need the installation scripts for the sample Northwind database, click here

Type (copy-paste) the following T-SQL statements.


--Returns the resources and types of locks that are participating in a deadlock
DBCC TRACEON(1222,-1)

--check to see if flags exist at the global level

DBCC TRACESTATUS(-1)

USE Northwind
GO
--open a query window (1) and run these commands
begin tran
update products set CategoryID = 7

use Northwind
GO
--go to query window (2) and run this command

begin tran
UPDATE Suppliers set CompanyName = 'Mycompany'
update products set CategoryID = 12

--go back to the original query window (1) and run this line after executing the statements in window (2)

update Suppliers set CompanyName = 'TheCompany'

First, I enable the 1222 trace flag. Then I check that the flag is enabled. Then I start a transaction in the query window of the current session.

In another query window I type and execute the second batch of T-SQL statements. Then I return to the original query window and type

update Suppliers set CompanyName = 'TheCompany'

After I execute the line above, a deadlock occurs.

If I go to the SQL Server error log I will see the deadlock info captured in it. Have a look at the picture below.

trace-flags-3

By all means use trace flags, but make sure you know what each one does before you enable it in your SQL Server ecosystem.

Hope it helps!!!

A thorough look into transactions in SQL Server November 9, 2011

Posted by fofo in Sql Server, Sql Server 2005, SQL Server 2008.

In this post I would like to look into transactions in SQL Server. A transaction is a group of tasks that perform as a single unit of work. In database terms, all parts of the transaction must be saved in the database or the whole thing must be abandoned. It is a great feature of the relational world, since through transactions we make sure the database is never left in an inconsistent state. We will need an instance of SQL Server for our examples. In case you do not have an instance of the Enterprise or Developer edition of SQL Server 2008 / R2, you can download a trial version from here. The Developer edition has the same features as the Enterprise edition but different licensing requirements; it is not free, as many people believe.

We need a database as well. I will use the Northwind database.

If you need the installation scripts for the sample Northwind database, click here

All transactions must possess the ACID properties: they must be atomic, consistent, isolated and durable. Atomic means that a transaction has to work as a single unit; the transaction must either fully commit or be abandoned when it completes. Let's have a look at an example of what atomicity means in SQL Server. Execute the code below.

1) Launch SSMS and connect to the local instance of SQL Server.


USE tempdb
IF (SELECT  OBJECT_ID('dbo.nikostable')) IS NOT NULL
DROP TABLE dbo.nikostable
GO
CREATE TABLE dbo.nikostable (ID INT CONSTRAINT check_ID CHECK (ID = 1),
NAME NVARCHAR(50) NOT NULL)
GO
INSERT  INTO dbo.nikostable
VALUES (1,'nikos'),(2,'kostas'),(3,'Mary')
GO
SELECT  * FROM    dbo.nikostable
--this is the final step...make sure you do not execute this line until you have
-- completed the previous steps.
DROP TABLE dbo.nikostable

When you run the code above (including the SELECT statement) you will notice that no rows are in the table. I have placed a CHECK constraint on the table. Even though only the rows with ID 2 and 3 violate the constraint, none of the three rows is inserted: an error is raised, and because the single INSERT statement is a unit of work (atomic), everything it touched is rolled back.

2) If we write the previous T-SQL statements in a different way, we get different results. Execute the code below.

USE tempdb
IF (SELECT  OBJECT_ID('dbo.nikostable')) IS NOT NULL
DROP TABLE dbo.nikostable
GO
CREATE TABLE dbo.nikostable(ID INT CONSTRAINT check_ID CHECK (ID = 1),NAME NVARCHAR(50) NOT NULL)
GO
BEGIN TRANSACTION
INSERT  INTO dbo.nikostable
VALUES (1,'nikos')
INSERT  INTO dbo.nikostable VALUES (2,'kostas')
COMMIT TRANSACTION
SELECT  * FROM    dbo.nikostable
DROP TABLE dbo.nikostable

When you run the statements above, you will see that the first INSERT statement succeeds and the second one fails. Each statement inside the transaction is atomic on its own, which means each one acts independently of the others. Some people find this behaviour odd, but it is the default.

3) We can change this behaviour (so that if one statement fails within the transaction, all of them fail) by setting XACT_ABORT ON. Execute the code below. No rows will be inserted in the table this time. XACT_ABORT OFF is the default option.

USE tempdb
IF (SELECT  OBJECT_ID('dbo.nikostable')) IS NOT NULL
DROP TABLE dbo.nikostable
GO
CREATE TABLE dbo.nikostable
(ID INT CONSTRAINT check_ID CHECK (ID = 1),NAME NVARCHAR(50) NOT NULL)
GO
SET XACT_ABORT ON
GO
BEGIN TRANSACTION
INSERT  INTO dbo.nikostable
VALUES (1,'nikos')
INSERT  INTO dbo.nikostable
VALUES (2,'kostas')
COMMIT TRANSACTION
GO
SELECT  * FROM    dbo.nikostable
SET XACT_ABORT OFF
GO
DROP TABLE dbo.nikostable

Consistent means that the transaction must leave all data in a consistent state after it completes. Isolated means that each transaction must be independent of the data operations of all other concurrent transactions. Durable means that the effects on the data are permanent once the transaction completes, even if the system subsequently fails. SQL Server provides various facilities that ensure all ACID aspects of a transaction are honoured: transaction management, locking and logging mechanisms.

Many people believe that SQL Server supports only one type of transaction. This is wrong. We have explicit transactions, where we are responsible for starting (BEGIN TRANSACTION), committing (COMMIT TRANSACTION) or rolling back (ROLLBACK TRANSACTION) the transaction. We also have autocommit transactions; this is the default mode, in which every statement is automatically committed when it completes or automatically rolled back when an error occurs. Finally we have implicit transactions: in this mode SQL Server starts a new transaction whenever the previous one is committed or rolled back, and we have to commit or roll back each one ourselves.

All transactions are managed by SQL Server at the connection level, and all statements on that connection belong to the active transaction. Transactions use resources and hold locks while they operate; when we commit or roll back a transaction, the locks and resources it used are freed. Locks are one way of protecting data: a lock prevents certain operations on the locked resource (e.g. a table) and is held for as long as the transaction is active. For example, with explicit transactions, locks are held until the COMMIT or ROLLBACK statement. For read-only operations SQL Server uses shared locks that are quickly released. SQL Server uses dynamic locking to find the most cost-effective locking strategy/mode for any given operation on a particular resource.
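Going back to the three transaction modes mentioned above, here is a minimal sketch (the table dbo.demo_modes is just an illustrative name, created in tempdb):

-- a throwaway table in tempdb just for this illustration
USE tempdb;
CREATE TABLE dbo.demo_modes (id INT);

-- 1) autocommit (the default): every statement commits on its own
INSERT INTO dbo.demo_modes VALUES (1);

-- 2) explicit: we start and finish the transaction ourselves
BEGIN TRANSACTION;
INSERT INTO dbo.demo_modes VALUES (2);
COMMIT TRANSACTION;

-- 3) implicit: SQL Server opens a transaction for us, but we must still end it
SET IMPLICIT_TRANSACTIONS ON;
INSERT INTO dbo.demo_modes VALUES (3);    -- this silently starts a transaction
SELECT @@TRANCOUNT AS open_transactions;  -- returns 1
COMMIT TRANSACTION;
SET IMPLICIT_TRANSACTIONS OFF;

DROP TABLE dbo.demo_modes;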

Let's look into locking, blocking and deadlocking in more detail.

What we want are highly concurrent SQL Server 2008 R2 applications. In a perfect scenario every SQL query would execute concurrently without being blocked by other queries. What happens in real life is that queries block each other. SQL Server uses the lock manager to control concurrent access to database resources; without the lock manager the data would end up in an inconsistent state. Locking determines what type of lock is placed on a resource when a transaction wants to access it. When two sessions (connections are identified by sessions) try to access a common database resource in conflicting ways, the lock manager makes the second session wait until the first session completes its task. Blocking occurs when a connection/transaction needs to access data and has to wait for another transaction's lock to clear. Deadlocking is when two or more connections/sessions/transactions are in a deadly embrace, each waiting on the other for a lock to clear. That is a very bad situation to find ourselves in, which is why we should keep transactions as short as possible, avoid requiring user input inside them, and access the least amount of data we can.

There are many ways to get information about locks and lock types. We have Activity Monitor and SQL Server Profiler; I am sure you are familiar with both these tools. SQL Server Profiler has a Locks event category that can give us a lot of information about locks. There is also the sys.dm_tran_locks dynamic management view, and you can use the sp_lock and sp_who (sp_who2) system stored procedures.
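For example, a minimal query against sys.dm_tran_locks (a sketch; the filter on the Northwind database and the chosen columns are just for illustration) shows who holds or is waiting for locks:

SELECT request_session_id,
       resource_type,
       request_mode,
       request_status
FROM sys.dm_tran_locks
WHERE resource_database_id = DB_ID('Northwind');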

Another important issue when it comes to transactions, connections and sessions is the isolation level. The isolation level defines how much a transaction has to be isolated from other transactions, and it determines the possible side effects of reading uncommitted changes. The default isolation level is READ COMMITTED. If you want to learn more about isolation levels have a look here.
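As a quick illustration (a sketch for the current session; DBCC USEROPTIONS simply reports the session settings), the isolation level is set per session:

DBCC USEROPTIONS;   -- the "isolation level" row shows the current setting

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
-- ... queries here do not take shared locks and may read uncommitted data ...
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;   -- back to the default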

Let's have a look at a locking example. We will simulate a locking situation.

5) In a new query window (the first one) type the following and execute the code.


USE Northwind
GO
BEGIN TRANSACTION
UPDATE dbo.Categories
SET CategoryName = 'Fishfood'
WHERE CategoryName = 'Seafood';

In a second query window type the following and execute the code.


USE Northwind
SELECT CategoryName FROM dbo.Categories;

Now I will use Activity Monitor to collect information about the locking that takes place. In Object Explorer, right-click the SQL Server instance and choose Activity Monitor.

Have a look at the picture below. Note the SPID 54.

In order to resolve the locking issue we can type (in the first query window)


ROLLBACK TRANSACTION

6) Now I will show you how to get the same information using SQL Server Profiler. We will need to use an event that identifies long-running blocked processes; in this case we will use the Blocked Process Report event. This event is turned off by default. I know this might sound strange, but that is the case.

We need to run a few T-SQL statements to enable the Blocked Process Report event (the threshold is specified in seconds). Execute the code below.

EXEC sp_configure 'show advanced options', 1;
GO
RECONFIGURE
GO
EXEC sp_configure 'blocked process threshold', 5;   -- report blocking that lasts longer than 5 seconds
GO
RECONFIGURE
GO

Launch SQL Server Profiler and connect (most likely with Windows Authentication) to the instance of SQL Server. From the available templates use TSQL_Locks. In the Events Selection tab deselect all the events except the Blocked Process Report event. Give your trace a name, e.g. Locking_Info, and then hit the Run button.

We will have to simulate the same locking example as before.

So first run the 1st T-SQL statement in a new query window (see the previous example) and then run the 2nd T-SQL statement in another query window.

Wait for about 30 seconds and have a look at the results in Profiler. The picture below shows what I got when I ran the trace.

Run this statement in the 1st query window to resolve the blocking issue.


ROLLBACK TRANSACTION

Stop the trace in the profiler and exit the profiler.

7) Now we will use the same example as before and get locking information using the sp_who2 system stored procedure.

We will have to simulate the same locking example as before.

So first run the 1st T-SQL statement in a new query window (see the previous example) and then run the 2nd T-SQL statement in another query window.

In a third query window type


sp_who2 54

I did not choose 54 at random; it was the SPID of the session running the 2nd statement (SELECT CategoryName FROM dbo.Categories;). In your case the SPID will most likely be different, so adjust accordingly.

Have a look at the picture below to see what kind of results I get by running the sp_who2 stored procedure.
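sp_who2 reports the blocking session in the BlkBy column. If you prefer a result set you can filter, the sys.dm_exec_requests DMV exposes the same information (a sketch, not part of the original demo):

SELECT session_id,
       blocking_session_id,
       wait_type,
       wait_time,
       command
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;   -- only requests that are currently blocked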

Now, if you wonder why the second transaction (the SELECT statement) was blocked, it is because the first transaction had placed a lock on the resource (the Categories table) with lock mode X (exclusive). If you want to see what kind of locks (lock type, lock mode) are taken on which resources, you can use the sp_lock system stored procedure, e.g. sp_lock 57, in a new query window.

When we execute a query, SQL Server determines the database resources that need to be accessed and the lock manager grants locks to the process, so the query executes after acquiring the required locks. During query execution the locks do not necessarily stay the same: the lock manager can escalate many fine-grained locks to a coarser level, which is known as lock escalation. Obviously we, the developers, could not be in charge of such a complex process; the lock manager takes care of it, and all of this happens during the course of the transaction.

We have various lock levels. There are row locks, which are maintained at the row level, and page locks, which are maintained at the page level. The lock manager can escalate row and page level locks to table level locks, and when this happens all the lower level locks are released.

We also have lock modes; the lock manager grants locks on resource types by mode. The Sch-S (schema stability) mode prevents transactions from altering the schema of a resource while it is in use. The S (shared) mode is compatible with all other lock modes except the exclusive (X) mode. The X mode is granted on resources that are being dropped, created or modified.

I will use a new example, similar to the previous one, together with the sp_lock stored procedure.

In a new query window type (in my example this is session 53)


USE Northwind
GO
BEGIN TRANSACTION
UPDATE dbo.Categories
SET CategoryName = 'Refreshments'
WHERE CategoryName = 'Drinks';

Execute the query.

In a new query window type (in my example this is session 58)


USE Northwind
go
SELECT CategoryName FROM dbo.Categories;

Execute the query above. This is going to be a blocked query.

In a new query window type


EXEC sp_lock 53
EXEC sp_lock 58

When you execute the queries above you will get results similar to these; note the Mode and Type columns. You can notice the lock modes granted and how the locks change as the query is executed.

8) Now that we have seen some examples of locking and blocking, I would like to look into deadlocks.

I have explained previously what a deadlock is; a simpler way to put it is that we have locks on resources that are held for too long. Two separate processes/transactions hold locks on resources that the other one needs, so nothing can happen and the situation could go on forever. One of the transactions is therefore forced by SQL Server to "die": SQL Server selects a victim and kills that process/transaction, usually the one that is least expensive to roll back. We can also influence which session is chosen as the victim by setting the deadlock priority with SET DEADLOCK_PRIORITY LOW; that option makes SQL Server choose that session as the victim when a deadlock occurs. When a deadlock happens, the victim receives an error with code 1205. There are many ways to find information about deadlocks. We have two trace flags that help us analyse them, 1222 and 1204, and we can turn them on by typing


DBCC TRACEON (1222,-1)
DBCC TRACEON (1204,-1)

The deadlock information is stored in the error log.

Using lower isolation levels, where appropriate, helps to avoid deadlocks. Another thing to avoid is recursive and nested triggers.
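Since the deadlock victim receives error 1205, applications commonly catch that error and retry the transaction. Here is a minimal sketch (the retry limit and the UPDATE statement are purely illustrative):

DECLARE @retries INT = 0;
DECLARE @done BIT = 0;
WHILE @retries < 3 AND @done = 0
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION;
        UPDATE dbo.Categories SET CategoryName = CategoryName WHERE CategoryID = 1;
        COMMIT TRANSACTION;
        SET @done = 1;                       -- success, leave the loop
    END TRY
    BEGIN CATCH
        IF XACT_STATE() <> 0 ROLLBACK TRANSACTION;
        IF ERROR_NUMBER() = 1205             -- we were chosen as the deadlock victim
            SET @retries = @retries + 1;     -- try again
        ELSE
            SET @retries = 3;                -- give up on any other error
    END CATCH
END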

I will show you an example of a deadlock. I will simulate a deadlock situation and use SQL Server Profiler to capture the deadlock information.

Before we move on to the T-SQL statements, we need to launch SQL Server Profiler. From SSMS go to Tools -> SQL Server Profiler.

Connect (most likely with Windows Authentication) to the local instance of SQL Server. From the available templates use TSQL_Locks. In the Events Selection tab do not deselect any event. Give your trace a name, e.g. DeadLocking_Info, and then hit the Run button.

In a new query window , execute this first statement.


USE Northwind
GO
--open a query window (1) and run these commands
begin tran
update products set CategoryID = 7

In a new query window , execute this second statement.


use Northwind
GO
--go to query window (2) and run this command

begin tran
UPDATE Suppliers set CompanyName = 'Mycompany'
update products set CategoryID = 12

Go back to the first query window, and execute this third statement.


-- go back to query window (1) and run this command

update Suppliers set CompanyName = 'TheCompany'

Now we have a deadlock situation. Switch to Profiler, pause the trace and look for the Deadlock events.

You will see something similar to the picture below: the Deadlock graph event, with a graph showing all the processes involved in the deadlock and the deadlock victim.

Go to the first query window and execute this statement to undo all changes.


ROLLBACK

Go to the second query window and execute this statement to undo all changes.


ROLLBACK

I hope you realise how bad deadlocks are for the performance of our system.

I know this was a very long post; we talked about many things like transactions, aspects and types of transactions, concurrency, locking, blocking, deadlocking, lock modes etc.

Hope it helps.