THE SQL Server Blog Spot on the Web


Louis Davidson

Normalization and How to Know When You Are Done… The short version…

A while back, I was working on a short article about normalization for a book that never got published (admittedly I wasn’t getting paid for the article, and it wasn’t for charity, so I wasn’t that broken up over it). The task at hand was, in two pages or less, to describe the process of normalization and help you know when you have finished. In my upcoming book Pro SQL Server 2000 + N (where N > 10) Relational Database Design and Implementation, it takes about 45 pages, so it wasn’t really a realistic task, especially considering I have now spent a full paragraph just letting you know how hard the task is going to be. The most important thing missing from this short introduction is examples, which I include in the book in truckloads.

There are two distinct ways that normalization is approached. In the formal manner, there is a progressive set of “rules” specifying “forms” that you work to achieve. There is nothing wrong with that definition, but progressing through the forms in a stepwise manner is certainly not how any seasoned data architect is likely to approach the problem of designing data storage. Instead, you design with the principles of normalization in mind and use the normal forms as a test of your design.

The problem of getting a great database design is compounded by how natural the process seems. The first database that the past, uneducated version of me built had 10+ tables, all the obvious ones like customer, orders, etc., set up so the user interface could be produced to satisfy the client. However, addresses and even order items were left as part of the main tables, making the design a beast to query, and as my employer wanted more and more out of the system, the design became more and more taxed. The basics were there, but the internals were all wrong, and the design could have used about 50 or so tables to flesh out the correct solution. Soon after (at my next company; sorry, Terry), I gained a real education in the basics of database design, and the little 1000-watt halogen light bulb went off…

That light bulb went off because what had looked, in my college database class, like a more complicated design than any normal person would create (bet you can’t guess what my grade was in that class!) was really there to help my design fit in with the tools I was using. It turns out that the people who create relational database engines use the same concepts of normalization to guide how the engine is built that I needed to follow for my database to work well. So if the relational engine vendors are using a set of concepts to guide how they create the engine, it is actually quite helpful if you follow along.

First, let’s look at the “formal” rules. The normalization rules are stated in terms of “forms”, starting at First Normal Form and including several others, some of which are numbered and some named for the creators of the rule. (Note that in the strictest terms, to be in a greater form, you must also conform to the lesser forms, so you can’t be in Third Normal Form without also satisfying the definition of the First.) To be honest, it is rare that a data architect will refer to the normal forms specifically in conversation unless they are having a nerd argument with a developer who is trying to design an entire customer relationship management system in a single table, but understanding the basics of normalization is essential to understanding why it is needed. What follows is a very quick restatement of the normal forms:

  • First Normal Form/Definition of a Table – Attribute and row “shape”
    • All columns must be atomic—one value per column
    • All rows of a table must contain the same number of values – no arrays
    • Each row should be different from all other rows in the table – unique rows
  • Boyce-Codd Normal Form – Every candidate key is identified, and all attributes are fully dependent on a key, and all columns must identify a fact about a key and nothing but a key.
    • Encompasses:
      • Second Normal Form - All attributes must be a fact about the entire primary key and not a subset of the primary key
      • Third Normal Form - All attributes must be a fact about the primary key and nothing but the primary key
  • Fourth Normal Form - There must not be more than one multivalued dependency represented in the entity. That is to say that every attribute relates to the key with a cardinality of one. Not a common rule to violate, but it definitely does occur.
  • Fifth Normal Form - All relationships are broken down to binary relationships when the decomposition is lossless. Very rarely violated in typical designs.
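
To make the forms a bit more concrete, here is a hypothetical sketch (all table and column names are invented for illustration, not from any real system) of a design that violates the lesser forms, followed by a normalized version:

```sql
-- A design that breaks the rules: CustomerName depends on OrderId alone
-- (a Second Normal Form violation -- a fact about part of the key), and
-- PhoneNumbers packs several values into one column (a First Normal Form
-- violation -- the column is not atomic).
CREATE TABLE OrderItemWide
(
    OrderId      int          NOT NULL,
    ProductId    int          NOT NULL,
    Quantity     int          NOT NULL,
    CustomerName varchar(60)  NOT NULL,  -- fact about the order, not the item
    PhoneNumbers varchar(200) NULL,      -- 'home:555-1234,work:555-9876'
    PRIMARY KEY (OrderId, ProductId)
);

-- Normalized: every non-key column is a fact about its table's whole key.
CREATE TABLE Customer
(
    CustomerId   int         NOT NULL PRIMARY KEY,
    CustomerName varchar(60) NOT NULL
);
CREATE TABLE CustomerPhone
(
    CustomerId  int         NOT NULL REFERENCES Customer (CustomerId),
    PhoneType   varchar(10) NOT NULL,
    PhoneNumber varchar(20) NOT NULL,
    PRIMARY KEY (CustomerId, PhoneType)
);
CREATE TABLE [Order]
(
    OrderId    int NOT NULL PRIMARY KEY,
    CustomerId int NOT NULL REFERENCES Customer (CustomerId)
);
CREATE TABLE OrderItem
(
    OrderId   int NOT NULL REFERENCES [Order] (OrderId),
    ProductId int NOT NULL,
    Quantity  int NOT NULL,
    PRIMARY KEY (OrderId, ProductId)
);
```

Note how each fact (a customer's name, a phone number, which customer placed an order) now lives in exactly one place.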

There are other, more theoretical forms that I won’t mention, as it is rare to even encounter their definitions. In the reality of the development life cycle, the stated rules are not hard and fast but merely guiding principles that can help you avoid certain pitfalls. In practice, we end up with denormalization (meaning purposely violating a normalization principle for a stated, understood purpose, not ignoring the rules to get done faster), mostly to satisfy some programming or performance need of the consumers of the data (programmers, queriers, etc.).
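
As a hypothetical sketch of what purposeful denormalization can look like (the table and column names are invented), a stored order total duplicates what could be computed from the line items, accepted deliberately to speed up a hot query:

```sql
-- Deliberate denormalization: OrderTotal repeats information derivable from
-- the order's line items, traded for cheaper reads on a hot path.
ALTER TABLE [Order] ADD OrderTotal money NULL;

-- The cost of the shortcut: the stored value must be kept in sync whenever
-- line items change, or it silently becomes wrong.
UPDATE o
SET    OrderTotal = (SELECT SUM(oi.Quantity * oi.UnitPrice)
                     FROM   OrderItem AS oi
                     WHERE  oi.OrderId = o.OrderId)
FROM   [Order] AS o;
```

The “stated, understood purpose” part is the sync obligation: every code path that touches line items now has to maintain the total.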

Once you deeply “get” the concepts of normalization, you really will find that you build a database like a well-thought-out Lego creation, deciding how each piece will fit into the creation before putting pieces together, because disassembling 1000 Lego bricks to make a small change makes Legos more like work than fun. Some rebuilding to stay agile may be needed, but the more you plan ahead, the less data you will have to reshuffle.

In actual practice, the formal definitions of the rules aren’t thought of at all; instead, the guiding principles they encompass are. I keep the following four concepts in the back of my mind to guide the database I am building, falling back to the more specific rules for the really annoying/complex problems I am trying to solve:

  • Columns - One column, one value
  • Table/row uniqueness – Tables have independent meaning, rows are distinct from one another.
  • Proper relationships between columns – Columns either are a key or describe something about the row identified by the key.
  • Scrutinize dependencies - Make sure relationships between three values or tables are correct. Reduce all relationships to binary relationships if possible.

The question in the title still has yet to be conquered: “How do you know when you are done?” What I left out of the description of normalization was how granular to go. The word “atomic” is a common way to describe a table or column that is normalized enough. Atomic would tend to indicate something broken down to its absolute lowest form. But if you are a nerd (and would you really be reading this if you weren’t?), you know that there are lots of particles smaller than an atom, and when you try to mess with those, you get a mushroom cloud that even Timothy Leary would not have approved of.

It is the same way with databases. Tables and columns split to their atomic level have one and only one meaning. Deal with them at a higher level, and you will suffer through lots of substring calls and switching on attributes to figure out what a row means in a given situation. But break things down too far, and you will suffer even more. My best example of this is a column that holds a large quantity of text. If you never need to use part of the data in SQL, a single column is perfect (a set of notes that the user works with on a screen is a good example); you wouldn’t want a paragraph, sentence, and character table to store this information. On the other hand, that same character column is abused when users start putting coded information into it (because users WILL find a way to work if your software fails them). Once you have to search for that coded information, you will need to start working with the less comfortable string manipulation functions in SQL… And just try to index part of a large text column. Possible? Sometimes. Best way to go? Never.
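
As a hypothetical sketch of how this plays out (the Notes column and the 'PRIORITY:HIGH' code are invented), compare querying a coded free-text column against a proper atomic column:

```sql
-- Users have started hiding coded data inside a free-text column, so every
-- search turns into string surgery that no ordinary index can help with.
SELECT OrderId
FROM   [Order]
WHERE  CHARINDEX('PRIORITY:HIGH', Notes) > 0;

-- Giving the fact its own atomic column makes the query plain and indexable.
ALTER TABLE [Order] ADD Priority varchar(10) NULL;

SELECT OrderId
FROM   [Order]
WHERE  Priority = 'HIGH';
```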

The key to knowing what is normalization and what is an academic exercise for a nerd is to understand the needs of the users (commonly referred to as requirements, as in “Why don’t we ever have good requirements before we code!?!”). If it is clear that the user is planning to maintain a list of values and will need to update them programmatically, then it is your job to make each value a row in a table. But if there is no requirement to ever search on a value in that list or programmatically access part of the value, then it may be overkill to do anything other than leave the value alone. It is often best to err on the side of caution, but taken to the extreme, the ideal relational storage for a document would be at the word/punctuation level. If you have read this far and are convinced that would be the proper solution, then you need to get a complete book or take a class on the subject before you start creating a relational database.
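
A hypothetical sketch of the trade-off (names invented): when the requirement is to search individual values, a delimited column forces LIKE gymnastics, while one row per value makes the requirement trivial to meet:

```sql
-- Searching a delimited column: Tags holds 'internal,urgent,2011', so the
-- query has to pad with delimiters to avoid matching values like 'not-urgent'.
SELECT ProjectId
FROM   Project
WHERE  ',' + Tags + ',' LIKE '%,urgent,%';

-- One value per row: the search is a simple, indexable equality test.
CREATE TABLE ProjectTag
(
    ProjectId int         NOT NULL,
    Tag       varchar(30) NOT NULL,
    PRIMARY KEY (ProjectId, Tag)
);

SELECT ProjectId
FROM   ProjectTag
WHERE  Tag = 'urgent';
```

If no requirement ever calls for searching or updating a single tag, the delimited column may be perfectly fine; the requirement, not the theory, makes the call.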

The reasonable answer to when you are done with normalization is when users have exactly the right number of places to store the data they need, and you can query/use the data without parsing it… Easy enough until the user changes their mind, huh?

Published Sunday, May 29, 2011 4:54 PM by drsql


Comments

 

Bob said:

An excellent introduction, though nothing beats examples for teaching.  It would be valuable to show a poorly designed database, show why it's poor, and show how to correct it.

The light bulb moment is an amazing thing too.  Everyone (who's successful) has one.  It's fun to think about how the brain works and why these sudden leaps of comprehension take place.

Most of my work now is with data warehouses so I do the opposite -- denormalization.  As I was learning and implementing the denormalization process, it was interesting how much my understanding of normalization practices increased.  I had to do things counter to my instincts and I made some mistakes along the way.  This forced me to think more specifically about the decisions I was making while putting data into a star and, in contrast, I better understood the normalization process.

A couple years back, I was asked to handle the database design and implementation for a small IT-internal project used to track something (can't actually remember what for).  I worked with a co-worker to flesh out the requirements.  He would be designing the website, and his first ideas about the database were to have just a couple of Big Tables and handle all of the record management in code.  (As an aside, I see this tendency among a lot of decent programmers - mostly because they never learned better ways.)  I steered him away from that and started breaking things up.  He agreed we'd handle CRUD operations through sprocs and views, allowing him to focus on the function of the website instead of wrangling data.

I made the mistake of letting him talk me into leaving one table less normalized than my gut was telling me since it only served a small piece of the solution and normalization ended up adding a few xref tables that he felt made things too complex.  As I started writing insert and update sprocs for the subsystem, I saw it turning into an awful, broken mess requiring far more effort to manage than if it had been normalized.  I jumped on correcting it right away despite his protests (though he trusted me enough not to protest too much).  That taught me to rely on my instincts but it also made me understand the mindset of people who haven't learned to normalize.  I think they are afraid of the perceived complexity behind it when the fact is, proper normalization makes the pieces of a project work together more cleanly and removes a great deal of complexity in the use and function of the database.  This is especially true when you encapsulate tasks in sprocs (like p_AddUser or p_RemoveItem).

I should have saved this for my own blog (if I ever get around to making one) but thanks for the thought inspiring article.

May 30, 2011 11:31 AM
 

Louis Davidson said:

This post is part of an ongoing series of blogs I am writing while preparing to give a presentation based

July 1, 2014 12:24 AM
