THE SQL Server Blog Spot on the Web


Davide Mauri

A place for my thoughts and experiences with the Microsoft Data Platform

Will SQL Azure make Normalization popular again?

Normalization should be very popular and widespread. Unfortunately, in my experience, 99% of developers who work with databases (and thus design database schemas) don't know it. Even worse, they don't realize that normalization is a key point in application development.

As far as I can tell, developers and project managers don't care much about normalization because they think it only helps lower the storage required by the database and nothing more. Since space is cheap today, why bother normalizing the database?

As I said, normalization is not just a way to save space; it is the key to preserving data integrity and achieving good overall performance. Of course, since part of the process involves removing redundancy, the result is that space is saved as well.

Now, with SQL Azure, space won't be that cheap anymore. An interesting post by the SQL Azure team shows how much it costs to store a row in a SQL Azure database, given that the price for 1 GB is set at $9.99 per month.

And that's the interesting part, where normalization once again becomes a practice worth putting in place. By choosing correct data types and normalizing the schema properly, you can easily save from 30% to 80% of the storage (as a very general rule of thumb, depending on the data model and the data you have to store).
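To make the data-type point concrete, here is a minimal sketch comparing the raw per-column storage of a "lazy" schema against one with tighter types. The column names and toy schema are hypothetical; the byte sizes follow SQL Server's documented fixed sizes (datetime = 8 bytes, smalldatetime = 4, int = 4, bigint = 8, NVARCHAR = 2 bytes per character, VARCHAR = 1), and row header and page overhead are ignored for simplicity.

```python
# Illustrative only: raw declared bytes per row, no row/page overhead.
def row_bytes(columns):
    """Sum the declared byte sizes of a (name, bytes) column list."""
    return sum(size for _name, size in columns)

lazy = [
    ("OrderDate", 8),       # datetime
    ("CustomerName", 100),  # NVARCHAR(50), repeated on every order row
    ("Qty", 8),             # bigint "just in case"
]
tight = [
    ("OrderDate", 4),       # smalldatetime is enough here
    ("CustomerID", 4),      # int FK; the name lives once in a Customers table
    ("Qty", 4),             # int
]

saving = 1 - row_bytes(tight) / row_bytes(lazy)
print(f"{saving:.0%} smaller per row")  # roughly 90% on this toy example
```

The toy example is extreme on purpose: moving the repeated NVARCHAR name out to a lookup table is exactly the kind of redundancy removal normalization prescribes.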

Now, let's make some simple calculations. Imagine we have three databases of 50 GB each. At $499.50 per database, the monthly cost is 3 × $499.50 = $1,498.50, which sums to $17,982 per year. Let's call it $18,000 to keep the numbers round. Saving 30% means you can save about $5,400 per year.
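The arithmetic above, as a runnable sketch, assuming SQL Azure's 2010 price of $9.99 per GB per month:

```python
# Back-of-envelope SQL Azure storage cost, at $9.99/GB/month (2010 pricing).
price_per_gb = 9.99
db_size_gb = 50
databases = 3

monthly = databases * db_size_gb * price_per_gb   # 3 x $499.50 = $1,498.50
yearly = monthly * 12                             # $17,982.00
saved = yearly * 0.30                             # ~$5,395 at a 30% reduction
print(f"yearly: ${yearly:,.2f}, saved at 30%: ${saved:,.2f}")
```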

Given that I've seen databases smaller than 100 GB only in small companies, I can tell you that saving $5,000 is very important to them!

This also holds true for bigger databases, where, of course, the money saved is proportional. I've seen tables with more than 100 columns, many of them completely redundant or put there "just in case", like ColInt1, ColInt2 and so on.

So, now that space means money, maybe normalization will become popular again.

I really hope so, because this will also lead to better data and higher quality solutions.


I didn't mention this, but keep in mind that you'll also pay for the space used by indexes. So the worse your table design is, the more you'll pay for any index you add, since each will be bigger than it could have been if the database were normalized.
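A rough sketch of why this matters: a nonclustered index stores its key in every leaf row, so at $9.99/GB a wide key is effectively billed once per row per index. The row count and index count below are hypothetical, and this ignores the real B-tree internals (page fill, non-leaf levels, row overhead).

```python
# Illustrative index storage cost: leaf-level key bytes only, no B-tree overhead.
rows = 50_000_000
indexes = 3
price_per_gb = 9.99

def index_cost(key_bytes):
    """Monthly cost of storing key_bytes per row across all indexes."""
    gb = rows * indexes * key_bytes / 1024**3
    return gb * price_per_gb

wide = index_cost(100)   # NVARCHAR(50) customer name as the key
narrow = index_cost(4)   # int surrogate key
print(f"${wide - narrow:,.2f}/month saved by the narrow key")
```

The point isn't the exact figure but the proportionality: a key 25 times wider costs roughly 25 times more per index, every month.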

Published Monday, August 30, 2010 8:46 PM by Davide Mauri


jamiet said:

Very good points Davide. Something that has occurred to me is that with the limits imposed by SQL Azure your choice of datatype now becomes really important. Do you need datetime or will smalldatetime suffice? Is NVARCHAR(50) really necessary or can you get away with VARCHAR(20)? I have a feeling that SQL Azure is going to make people strongly consider these things - something which hasn't really been done sufficiently for many years now.


August 30, 2010 3:41 PM

Alexander Kuznetsov said:


I agree with your calculations, but you are not including human costs. The $5,000/yr is less than it costs to hire one good professional for one month. Spending too much valuable time on saving storage may easily become counter-productive, especially when you include the lost profit from not doing something more useful.

August 30, 2010 3:46 PM

Michael K. Campbell said:


Great numbers and thoughts. But, I think you've just made a strong case against Azure adoption. Azure costs entirely too much.

$500/month for 50GB is steep. For the price in question ($1500/month) I could set up a hosted cluster somewhere with great uptime and easily have around 300-600GB of disk on hand. Then, theoretically, devs could make stuff as craptastic as they wanted - without worrying about space.

Azure just costs too much - and I don't think that folks will therefore adopt best practices to save $$ when using it. Instead, I think they'll look at other options and use what they're comfortable with while saving $$$.

August 30, 2010 3:49 PM

John Paul Cook said:

Since Azure abstracts the database server away to the cloud, I'm expecting some people to think normalization is even less important. Picture an EAV data model hosted by Azure.

August 30, 2010 4:54 PM

Davide Mauri said:

Thanks to everyone for the feedback!

September 1, 2010 4:44 AM

RichB said:

"Picture an EAV data model hosted by Azure", stamping on the DBA's face forever.

September 1, 2010 6:21 AM

Davide Mauri said:

EAV is anything but simple for developers to manage: too many joins or too many nulls. It's more likely they will go for JSON serialization and a table like (ID, JSON_Object), which is now very popular...

September 1, 2010 9:45 AM



About Davide Mauri

Director of Software Development & Cloud Infrastructure @ Sensoria, an innovative smart garments and wearables company. After more than 15 years playing with the Microsoft Data Platform, with a specific focus on high-performance databases, Business Intelligence, Data Science and Data Architectures, he's now applying all his skills to IoT, defining architectures to crunch numbers, create nice user experiences and provide meaningful insights, all leveraging the Microsoft Azure cloud. An MVP on Data Platform since 2006, he has a very strong background in development and loves both the ER model and OO principles. He is also a fan of Agile methodology and automation, which he tries to apply everywhere he can, to make sure that "people think, machines do".
