Alexander Kuznetsov

Learning PostgreSQL: serialization failures with SERIALIZABLE

We shall reproduce a serialization failure and see how the SERIALIZABLE isolation level enforces data integrity.

Prerequisites

We shall need the following test data:

CREATE TABLE Carpools(
  Car_Name VARCHAR NOT NULL,
  Passenger VARCHAR NOT NULL
);

INSERT INTO Carpools(car_name, passenger)
VALUES
('Carol''s car', 'Jim'),
('Carol''s car', 'Carmen'),
('Carol''s car', 'Ted');

Reproducing a serialization failure

Suppose that Carol can take only four passengers in her car; since three are already signed up, we can add at most one more. The following command verifies that there is still room for one more passenger, adds one more person, but does not commit. The script completes successfully:

BEGIN ISOLATION LEVEL SERIALIZABLE;
INSERT INTO Carpools(car_name, passenger)
SELECT 'Carol''s car', 'Juan'
WHERE (SELECT COUNT(*) FROM Carpools
       WHERE car_name='Carol''s car')<4;

In another session, we can successfully add one passenger - there is no blocking whatsoever:

BEGIN ISOLATION LEVEL SERIALIZABLE;
INSERT INTO Carpools(car_name, passenger)
SELECT 'Carol''s car', 'Pawel'
WHERE (SELECT COUNT(*) FROM Carpools
       WHERE car_name='Carol''s car')<4;

Should both transactions commit, we'd have five passengers, but this is not going to happen. Only one of the two transactions can commit, no matter whether the first or the second. When we try to commit the other one, PostgreSQL detects a serialization failure and rolls back the transaction with the following error:

ERROR:  could not serialize access due to read/write dependencies among transactions

The reason is as follows: there is no way these two transactions could have inserted these two rows serially, with one transaction committing before the other one begins. Whichever transaction ran last would see that there already are four passengers and would not insert a fifth.
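When a transaction is rolled back with this error, the standard remedy is simply to retry the whole transaction. The sketch below (my own illustration, not from the original post) shows the retry pattern in Python; the exception class is a stand-in for the driver-specific serialization error (for example, SQLSTATE 40001 as surfaced by psycopg2), since no live PostgreSQL connection is assumed here:

```python
# Minimal retry-loop sketch for serialization failures. In real code the
# caught exception would be the driver's SQLSTATE 40001 error; here we
# simulate it with a stand-in exception class so the example is runnable.

class SerializationFailure(Exception):
    """Stand-in for the driver's 'could not serialize access' error."""

def run_with_retry(txn, max_attempts=3):
    """Run a transaction callable, retrying on serialization failures."""
    for attempt in range(1, max_attempts + 1):
        try:
            return txn()
        except SerializationFailure:
            if attempt == max_attempts:
                raise  # give up after the last attempt

# Demo: a "transaction" that fails once, then succeeds on the retry.
attempts = []
def add_passenger():
    attempts.append(1)
    if len(attempts) < 2:
        raise SerializationFailure("could not serialize access")
    return "committed"

print(run_with_retry(add_passenger))  # prints: committed
```

The key point is that the retry re-runs the entire transaction, so the COUNT(*) check is re-evaluated against the state left by whichever transaction committed first.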

As we have seen, PostgreSQL successfully detects a potential violation of data integrity and fails a transaction to prevent the violation. Clearly this is a highly useful feature. As we migrate functionality to PostgreSQL, we should utilize its advantages. In my experience, we are not very productive when we try to emulate SQL Server's behavior on PostgreSQL - we are much more productive when we have a good understanding of the new platform and use it the way it is supposed to be used.

Of course, this data integrity rule is very simple; as such, we could use constraints to enforce it. I chose this simple scenario to keep the examples easy to follow, not because the problem cannot be solved using only constraints.
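For completeness, here is one way the rule could be enforced declaratively (a sketch of my own, not from the original post): give each passenger an explicit seat number, restrict it with a CHECK constraint, and make the (car, seat) pair unique. The demo uses Python's built-in sqlite3 module purely so it runs without a database server; the same DDL idea applies in PostgreSQL:

```python
import sqlite3

# Sketch: enforce "at most 4 passengers per car" with constraints alone.
# Each passenger gets an explicit seat number; a CHECK limits seats to 1..4
# and a UNIQUE constraint prevents two passengers from taking the same seat.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE Carpools(
      Car_Name  TEXT NOT NULL,
      Passenger TEXT NOT NULL,
      Seat_No   INTEGER NOT NULL CHECK (Seat_No BETWEEN 1 AND 4),
      UNIQUE (Car_Name, Seat_No)
    )""")

for seat, name in enumerate(['Jim', 'Carmen', 'Ted', 'Juan'], start=1):
    conn.execute("INSERT INTO Carpools VALUES (?, ?, ?)",
                 ("Carol's car", name, seat))

# A fifth passenger has no valid seat: seat 5 violates the CHECK,
# and seats 1-4 are taken, so a duplicate seat violates the UNIQUE constraint.
try:
    conn.execute("INSERT INTO Carpools VALUES (?, ?, ?)",
                 ("Carol's car", 'Pawel', 5))
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```

With this design the database rejects the fifth passenger regardless of isolation level, which is exactly why constraints are preferable when the rule is simple enough to express declaratively.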

Published Monday, January 06, 2014 12:38 PM by Alexander Kuznetsov


About Alexander Kuznetsov

Alex Kuznetsov has been working with object oriented languages, mostly C# and C++, as well as with databases for more than a decade. He has worked with Sybase, SQL Server, Oracle and DB2. He regularly blogs on sqlblog.com, mostly about database unit testing, defensive programming, and query optimization. Alex has written a book entitled "Defensive Database Programming with Transact-SQL" and several articles on simple-talk.com and devx.com. Currently he works as an agile developer.
