
Looking into Nested Transactions and Rollback operations April 18, 2012

Posted by fofo in Sql Server, Sql Server 2005, SQL Server 2008.

In this post I would like to show you what happens when we issue ROLLBACK operations in nested transactions.

Have a look at this post of mine to find out more about transactions. A lot of people are confused about how a ROLLBACK statement affects a nested transaction.

I will move on with my hands-on examples. I will use the tempdb database.

We will need an instance of SQL Server running on our machine. You can download and install the free SQL Server Express edition from here.

Launch SSMS and connect to an instance of SQL Server. In a new query window type


USE TempDB
GO
CREATE TABLE TranTable (ID INT, name VARCHAR(100))
GO

SELECT ID, name
FROM TranTable;

BEGIN TRANSACTION
SELECT @@TRANCOUNT tranCount;
INSERT INTO TranTable (ID,  name)
VALUES (1, 'Nick');

SELECT ID, name
FROM TranTable;

BEGIN TRANSACTION
SELECT @@TRANCOUNT tranCount;
INSERT INTO TranTable (ID, name)
VALUES (2, 'George');

SELECT ID, name
FROM TranTable;

--we issue a rollback

ROLLBACK

--we have no values in the table

SELECT ID, name
FROM TranTable;

--this is 0
--a rollback in a nested transaction rolls back to the outermost transaction

SELECT @@TRANCOUNT tranCount;

DROP TABLE TranTable
GO

I create a table and insert a value into it by explicitly starting a transaction, querying the @@TRANCOUNT system function as I go. @@TRANCOUNT basically returns the number of BEGIN TRANSACTION statements that have occurred on the current connection. Then I do the same in a second explicit transaction; @@TRANCOUNT is 2 at this point. Then we issue a ROLLBACK statement followed by "SELECT ID, name FROM TranTable;". We notice that there is no data in the table and that @@TRANCOUNT is 0. This means that a ROLLBACK statement rolls back to the outermost transaction that was defined. I want to show a similar example where I use the exact same T-SQL as before, but I commit the inner transaction before I issue the ROLLBACK statement. In a new query window type


USE TempDB
GO
CREATE TABLE TranTable (ID INT, name VARCHAR(100))
GO

SELECT ID, name
FROM TranTable;

BEGIN TRANSACTION
SELECT @@TRANCOUNT tranCount;
INSERT INTO TranTable (ID,  name)
VALUES (1, 'Nick');

SELECT ID, name
FROM TranTable;

BEGIN TRANSACTION
SELECT @@TRANCOUNT tranCount;
INSERT INTO TranTable (ID, name)
VALUES (2, 'George');

COMMIT TRANSACTION

SELECT ID, name
FROM TranTable;

--we issue a rollback

ROLLBACK

--we have no values in the table

SELECT ID, name
FROM TranTable;

--this is 0
--a rollback in a nested transaction rolls back to the outermost transaction

SELECT @@TRANCOUNT tranCount;

DROP TABLE TranTable
GO

Execute the script down to the second SELECT that follows the COMMIT TRANSACTION statement. We commit the inner transaction and then check that the two rows are in the table. Note that the inner COMMIT does not make anything durable; it simply decrements @@TRANCOUNT from 2 to 1. Then we issue a ROLLBACK statement followed by "SELECT ID, name FROM TranTable;". We notice that there is no data in the table and that @@TRANCOUNT is 0. This means that a ROLLBACK statement again rolls back to the outermost transaction that was defined, undoing even the inserts of the inner, committed transaction.
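
To see for yourself that the inner COMMIT merely decrements @@TRANCOUNT without making anything durable, you can run a minimal sketch like the one below (this is my own illustration, not part of the original scripts):

```sql
-- A minimal sketch: the inner COMMIT only decrements @@TRANCOUNT.
BEGIN TRANSACTION                 -- @@TRANCOUNT: 0 -> 1
BEGIN TRANSACTION                 -- @@TRANCOUNT: 1 -> 2
COMMIT TRANSACTION                -- inner commit: 2 -> 1, nothing is durable yet
SELECT @@TRANCOUNT AS tranCount;  -- returns 1
ROLLBACK                          -- rolls everything back: 1 -> 0
SELECT @@TRANCOUNT AS tranCount;  -- returns 0
```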

In this last example I will demonstrate how we can avoid rolling back to the outmost transaction by using savepoints.

In a new query window type


USE TempDB
GO
CREATE TABLE TranTable (ID INT, name VARCHAR(100))
GO

SELECT ID, name
FROM TranTable;

BEGIN TRANSACTION Tran1
SAVE TRANSACTION Tran1
SELECT @@TRANCOUNT tranCount;
INSERT INTO TranTable (ID,  name)
VALUES (1, 'Nick');

SELECT ID, name
FROM TranTable;

BEGIN TRANSACTION Tran2
SAVE TRANSACTION Tran2
SELECT @@TRANCOUNT tranCount;
INSERT INTO TranTable (ID, name)
VALUES (2, 'George');

SELECT ID, name
FROM TranTable;

--we issue a rollback to a savepoint

ROLLBACK TRANSACTION Tran2

--we have one row in the table

SELECT ID, name
FROM TranTable;

--@@TRANCOUNT  is 2

SELECT @@TRANCOUNT tranCount;

ROLLBACK TRANSACTION Tran1

SELECT ID, name
FROM TranTable;

--@@TRANCOUNT  is 2

SELECT @@TRANCOUNT tranCount;

ROLLBACK

--@@TRANCOUNT  is 0

SELECT @@TRANCOUNT tranCount;

DROP TABLE TranTable
GO

Execute the script down to the second "SELECT ID, name FROM TranTable;". We issue the same statements as before, but as you can clearly see we create savepoints inside the first and second explicit transactions (Tran1, Tran2). We then confirm that we have two rows in the table. Then we issue "ROLLBACK TRANSACTION Tran2", rolling back to that savepoint, and the SELECT that follows shows the first row still in the table; @@TRANCOUNT is 2 at this point. Then we issue "ROLLBACK TRANSACTION Tran1", and the next SELECT shows that there are no rows in the table; @@TRANCOUNT is still 2, because rolling back to a savepoint does not end the transaction. Finally we issue a plain ROLLBACK statement, and @@TRANCOUNT becomes 0. Savepoints are the only way to return to a point within our transaction when issuing ROLLBACK statements.

Hope it helps!!!

A thorough look into transactions in SQL Server November 9, 2011

Posted by fofo in Sql Server, Sql Server 2005, SQL Server 2008.

In this post I would like to look into transactions in SQL Server. A transaction is a group of tasks that performs as a single unit of work. In database terms, all parts of the transaction must be saved in the database or all must be abandoned. It is a great feature in the relational world, since through transactions we make sure the database is never left in an inconsistent state. We will need an instance of SQL Server for our examples. In case you do not have an instance of the Enterprise or Developer edition of SQL Server 2008 / R2, you can download a trial version from here. The Developer edition has the same features as the Enterprise edition but has different licensing requirements; it is not free, as many people believe.

We need a database as well. I will use the Northwind database.

If you need the installation scripts for the sample Northwind database, click here

All transactions must possess the ACID properties: they must be atomic, consistent, isolated and durable. Atomic means that a transaction has to work as a unit; the transaction must fully commit or be fully abandoned when it completes. Let's have a look at an example of what atomicity means in SQL Server. Execute the code below.

1) Launch SSMS and connect to the local instance of SQL Server.


USE tempdb
IF (SELECT  OBJECT_ID('dbo.nikostable')) IS NOT NULL
DROP TABLE dbo.nikostable
GO
CREATE TABLE dbo.nikostable (ID INT CONSTRAINT check_ID CHECK (ID = 1),
NAME NVARCHAR(50) NOT NULL)
GO
INSERT  INTO dbo.nikostable
VALUES (1,'nikos'),(2,'kostas'),(3,'Mary')
GO
SELECT  * FROM    dbo.nikostable
--this is the final step...make sure you do not execute this line until you have
-- completed the previous steps.
DROP TABLE dbo.nikostable

When you run the code above (including the last SELECT statement) you will notice that no rows were inserted into the table. I have placed a CHECK constraint on the table, and an error is raised when we try to insert the rows with ID values 2 and 3. Because the multi-row INSERT is a single statement, it is treated as one unit of work (atomic): none of the rows is inserted, not even the valid row with ID 1, and everything is rolled back.

2) If we write the previous T-SQL statements in a different way, we get different results. Execute the code below.

USE tempdb
IF (SELECT  OBJECT_ID('dbo.nikostable')) IS NOT NULL
DROP TABLE dbo.nikostable
GO
CREATE TABLE dbo.nikostable(ID INT CONSTRAINT check_ID CHECK (ID = 1),NAME NVARCHAR(50) NOT NULL)
GO
BEGIN TRANSACTION
INSERT  INTO dbo.nikostable
VALUES (1,'nikos')
INSERT  INTO dbo.nikostable VALUES (2,'kostas')
COMMIT TRANSACTION
SELECT  * FROM    dbo.nikostable
DROP TABLE dbo.nikostable

When you run the statements above you will see that the first INSERT statement succeeds and the second one fails. Each statement inside the transaction is atomic on its own, meaning each acts independently of the others. Some might find this behaviour weird, but it is the default behaviour.

3) We can change this behaviour (so that if one statement fails within a transaction, all of them fail) by setting XACT_ABORT to ON. Execute the code below; no rows will be inserted into the table this time. XACT_ABORT OFF is the default option.

USE tempdb
IF (SELECT  OBJECT_ID('dbo.nikostable')) IS NOT NULL
DROP TABLE dbo.nikostable
GO
CREATE TABLE dbo.nikostable
(ID INT CONSTRAINT check_ID CHECK (ID = 1),NAME NVARCHAR(50) NOT NULL)
GO
SET XACT_ABORT ON
GO
BEGIN TRANSACTION
INSERT  INTO dbo.nikostable
VALUES (1,'nikos')
INSERT  INTO dbo.nikostable
VALUES (2,'kostas')
COMMIT TRANSACTION
GO
SELECT  * FROM    dbo.nikostable
SET XACT_ABORT OFF
GO
DROP TABLE dbo.nikostable

Consistent means that the transaction must leave all data in a consistent state after it completes. Isolated means that each transaction must be independent of the data operations of all other concurrent transactions. Durable means that all effects on data are permanent once the transaction commits, even if the system fails afterwards. SQL Server provides various facilities that ensure all ACID aspects of a transaction are honoured: transaction management, locking and logging mechanisms.

People believe that SQL Server supports one type of transaction. This is wrong. We have explicit transactions, where we are responsible for starting (BEGIN TRANSACTION), committing (COMMIT TRANSACTION) or rolling back (ROLLBACK TRANSACTION) the transaction. We also have autocommit transactions; this is the default mode, in which every statement is automatically committed when it completes or automatically rolled back when an error occurs. Finally we have implicit transactions: in this mode SQL Server starts a new transaction whenever the previous one is committed or rolled back, and we have to commit or roll back each transaction ourselves.

All transactions are managed by SQL Server at the connection level; all statements on a connection belong to the active transaction. Transactions use resources and, while they operate, hold locks. When we commit or roll back a transaction, the locks and resources it used are freed. Locks are one way of protecting data: a lock prevents certain operations on a locked resource (e.g. a table). The lock is held as long as the transaction is active; for example, with explicit transactions, locks are held until a COMMIT or ROLLBACK statement. In read-only operations SQL Server uses shared locks that are quickly released. SQL Server uses dynamic locking to find the most cost-effective locking strategy/mode for any given operation on a particular resource. Let's look into locking, blocking and deadlocking in more detail.
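
As a quick illustration of implicit transaction mode (my own sketch, not one of the post's examples; try it in tempdb), SET IMPLICIT_TRANSACTIONS ON makes SQL Server open a transaction automatically on the first data-access statement:

```sql
-- A sketch of implicit transaction mode.
SET IMPLICIT_TRANSACTIONS ON;

CREATE TABLE #demo (ID INT);       -- this statement opens a transaction implicitly
SELECT @@TRANCOUNT AS tranCount;   -- returns 1, though we never issued BEGIN TRANSACTION

INSERT INTO #demo VALUES (1);      -- still inside the same implicit transaction
COMMIT;                            -- we must end the transaction ourselves

SET IMPLICIT_TRANSACTIONS OFF;     -- back to the default autocommit mode
DROP TABLE #demo;
```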

What we want is to have highly concurrent SQL Server applications. In a perfect scenario every SQL query would execute concurrently without being blocked by other queries. What happens in real life is that queries block each other. SQL Server uses the lock manager to control concurrent access to database resources; without the lock manager, the data could end up in an inconsistent state. Locking determines what type of lock is placed when a transaction wants to access a piece of data. When two sessions (connections are identified by sessions) try to access a common database resource in conflicting ways, the lock manager makes the second session wait until the first session completes its task. Blocking occurs when a connection/transaction needs to access data and has to wait for another transaction's lock to clear. Deadlocking is when two or more connections/sessions/transactions are in a deadly embrace: each transaction is waiting on the other for a lock to clear. That is a very bad situation to find ourselves in, which is why we should keep transactions as short as possible, avoid requiring user input inside them, and access the least amount of data.
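
A related session-level knob worth knowing (my own sketch, not covered in the examples below) is SET LOCK_TIMEOUT, which caps how long a session waits on a blocked resource before giving up:

```sql
-- A sketch: cap lock waits for the current session at 5 seconds.
SET LOCK_TIMEOUT 5000;                       -- milliseconds; -1 (the default) means wait forever

SELECT @@LOCK_TIMEOUT AS currentTimeoutMs;   -- returns 5000

-- A query in this session that blocks for more than 5 seconds now fails
-- with error 1222 ("Lock request time out period exceeded") instead of
-- waiting indefinitely.
SET LOCK_TIMEOUT -1;                         -- restore the default
```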

There are many ways to get information about locks and lock types. We have Activity Monitor and SQL Server Profiler; I am sure you are familiar with both tools. SQL Server Profiler has a Locks event category through which we can get much information about locks. There is also the sys.dm_tran_locks dynamic management view, and you can use the sp_lock and sp_who (sp_who2) system stored procedures.
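
For example, a query along these lines against sys.dm_tran_locks (a sketch; the column list is a small subset of what the view exposes) shows the granted and waiting lock requests per session:

```sql
-- A sketch: inspect current lock requests via the DMV.
SELECT  request_session_id,     -- the SPID holding or requesting the lock
        resource_type,          -- OBJECT, PAGE, KEY, RID, DATABASE, ...
        resource_database_id,
        request_mode,           -- S, X, IX, Sch-S, ...
        request_status          -- GRANT, WAIT, CONVERT
FROM    sys.dm_tran_locks
ORDER BY request_session_id;
```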

Another important issue when it comes to transactions, connections and sessions is the isolation level. The isolation level defines how much a transaction has to be isolated from other transactions; it determines the possible side effects of uncommitted changes. The default isolation level is READ COMMITTED. If you want to learn more about isolation levels, have a look here.
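
As a quick sketch (my own addition, not one of the post's numbered examples), the isolation level is set per session; for instance, READ UNCOMMITTED lets a SELECT read rows that another transaction has changed but not yet committed (dirty reads):

```sql
-- A sketch: relax the isolation level for the current session.
USE Northwind
GO
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

-- This SELECT can now read uncommitted ("dirty") data and is not
-- blocked by other sessions' exclusive locks.
SELECT CategoryName FROM dbo.Categories;

-- Restore the default.
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
```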

Let’s have a look at a locking example. We will simulate a locking situation.

4) In a new query window (the first) type the following and execute the code


USE Northwind
GO
BEGIN TRANSACTION
UPDATE dbo.Categories
SET CategoryName = 'Fishfood'
WHERE CategoryName = 'Seafood';

In a new query window (second-2nd) type the following and execute the code


USE Northwind
SELECT CategoryName FROM dbo.Categories;

Now I will use Activity Monitor to collect information about the locking that takes place. Select the SQL Server instance in the Object Explorer window, right-click and choose Activity Monitor.

Have a look at the picture below. Note the SPID 54.

In order to resolve the locking issue we can type ( in the first query window)


ROLLBACK TRANSACTION

5) Now I will show you how to get the same information using SQL Server Profiler. We need to identify long-running blocked processes, so in this case we will use the Blocked Process Report event. This event is turned off by default; I know this might sound strange, but it is the case.

We need to write T-SQL statements to enable the Blocked Process Report event.Execute the code below.

EXEC sp_configure 'show advanced options', 1;
GO
RECONFIGURE
GO
EXEC sp_configure 'blocked process threshold', 5;
GO
RECONFIGURE
GO

Launch SQL Server Profiler and connect (most likely with Windows Authentication) to the instance of SQL Server. From the available templates use TSQL_Locks. In the Events Selection tab deselect all events except the Blocked Process Report event. Give your trace a name, e.g. Locking_Info, then hit the Run button.

We will have to simulate the same locking example as before.

So first you write in a new query window the 1st T-SQL statement (see previous example) and then in a new query window the 2nd T-SQL statement.

Wait for 30 seconds and have a look at the results in the Profiler. Have a look at the picture below to see what I got when I ran the same trace in the Profiler.

Run this statement in the 1st query window to resolve the blocking issue.


ROLLBACK TRANSACTION

Stop the trace in the profiler and exit the profiler.

6) Now we will use the same example as before and get locking information using the sp_who2 system stored procedure.

We will have to simulate the same locking example as before.

So first you write in a new query window the 1st T-SQL statement (see previous example) and then in a new query window the 2nd T-SQL statement.

In a third query window type


sp_who2 54

I chose 54 because it was the SPID of the 2nd statement (SELECT CategoryName FROM dbo.Categories;) in my session. In your case the SPID will most likely be different, so adjust accordingly.

Have a look at the picture below to see what kind of results I get by running the sp_who2 stored procedure.

Now, if you wonder why the second transaction (the SELECT statement) was blocked: the first transaction had placed a lock on the resource (the Categories table) with lock mode X (exclusive). If you want to see what kind of locks (lock type, lock mode) are taken on which resources, you can use the sp_lock system stored procedure, e.g. sp_lock 57, in a new query window.

When we execute a query, SQL Server determines the database resources that need to be accessed and the lock manager grants locks to the process; the query executes after acquiring the required locks. During query execution the locks do not remain the same: the lock manager may escalate locks to higher levels, which is known as lock escalation. Obviously we, the developers, could not be in charge of such a complex process; the lock manager takes care of it over the course of the transaction. We have various lock levels: row locks, maintained at the row level, and page locks, maintained at the page level. The SQL Server lock manager can escalate from row or page locks to table-level locks, and when this happens all lower-level locks are released. We also have lock modes; the lock manager grants locks on resource types by mode. The Sch-S (schema stability) mode prevents transactions from altering the schema of a resource while it is in use. The S (shared) mode is compatible with all other lock modes except the exclusive X mode. The X mode is granted on resources that are being dropped, created or modified. I will use a new example, similar to the previous one, together with the sp_lock stored procedure.

In a new query window type (in my example this is session 53)


USE Northwind
GO
BEGIN TRANSACTION
UPDATE dbo.Categories
SET CategoryName = 'Refreshments'
WHERE CategoryName = 'Drinks';

Execute the query.

In a new query window type (in my example this is session 58)


USE Northwind
go
SELECT CategoryName FROM dbo.Categories;

Execute the query above. This is going to be a blocked query.

In a new query window type


EXEC sp_lock 53
EXEC sp_lock 58

When you execute the query above you will get results similar to these. Note the Mode and Type columns; you can see the lock escalation as the query executes and the lock modes granted.

7) Now that we have seen some examples of locking and blocking, I would like to look into deadlocks.

I have explained previously what a deadlock is; a simpler explanation is that locks on resources are held for too long. Two separate processes/transactions hold locks on resources that each other needs. This is a deadlock: nothing can happen, and the situation could go on forever. Instead, one of the transactions is forced by SQL Server to "die": SQL Server selects a victim and kills that process/transaction, usually the one that is least expensive to roll back. We can have some control over which process is chosen as the victim by setting the deadlock priority with "SET DEADLOCK_PRIORITY LOW"; this makes SQL Server choose that session as the victim when a deadlock occurs. When a deadlock occurs, an error with code 1205 is generated. There are many ways to find information about deadlocks. We have two trace flags that help us analyse deadlocks, 1222 and 1204. We can turn them on by typing


DBCC TRACEON (1222,-1)
DBCC TRACEON (1204,-1)

The deadlock information is stored in the error log.
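
To illustrate the SET DEADLOCK_PRIORITY option mentioned above (a sketch of my own; run it in the session you are willing to sacrifice):

```sql
-- A sketch: mark the current session as the preferred deadlock victim.
SET DEADLOCK_PRIORITY LOW;     -- values: LOW, NORMAL (the default) or HIGH

BEGIN TRANSACTION
-- ... do work here that might deadlock with another session ...
-- If a deadlock occurs, this session is chosen as the victim and its
-- transaction is rolled back with error 1205.
COMMIT TRANSACTION

SET DEADLOCK_PRIORITY NORMAL;  -- restore the default
```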

Using lower isolation levels helps to reduce blocking and the chance of deadlocks. Another thing to avoid is recursive and nested triggers.

I will show you an example on deadlocks. I will simulate a deadlock situation and I will use SQL Server Profiler to catch the deadlock information.

Before we move on with our T-SQL statements, we need to launch SQL Server Profiler. From the SSMS environment go to Tools -> SQL Server Profiler.

Launch SQL Server Profiler and connect (most likely with Windows Authentication) to the local instance of SQL Server. From the available templates use TSQL_Locks. In the Events Selection tab do not deselect any event. Give your trace a name, e.g. DeadLocking_Info, then hit the Run button.

In a new query window , execute this first statement.


USE Northwind
GO
--open a query window (1) and run these commands
BEGIN TRAN
UPDATE Products SET CategoryID = 7

In a new query window , execute this second statement.


use Northwind
GO
--go to query window (2) and run this command

BEGIN TRAN
UPDATE Suppliers SET CompanyName = 'Mycompany'
UPDATE Products SET CategoryID = 12

Go back to the first query window, and execute this third statement.


-- go back to query window (1) and run this command

UPDATE Suppliers SET CompanyName = 'TheCompany'

Now we have a deadlock situation. Switch to your Profiler, pause the trace and look for the Deadlock events.

You will see something similar to the picture below. You can see the Deadlock graph event and the graph with all the processes involved in the deadlock, including the deadlock victim.

Go to the first query window and execute this statement to undo all changes.


ROLLBACK

Go to the second query window and execute this statement to undo all changes.


ROLLBACK

I hope you realise how bad deadlocks are for the performance of our system.

I know this was a very long post; we talked about many things, such as transactions, aspects of transactions, types of transactions, concurrency, locking, blocking, deadlocking, lock modes, etc.

Hope it helps.
