
Louis Davidson

How In-Memory Database Objects Affect Database Design: Or does it?

This post is part of an ongoing series of blogs I am writing while preparing to give a presentation based on the first part of this post's title. It is a work in progress as I explore the effects of the new in-memory model. I would love comments telling me that I am wrong (especially if it turns out that I am!)

Before I start describing what I have discovered so far along the way, let's make a few educated guesses. I have my data model from the previous blog entry (here), and I have a plan of attack for entering data. Basically, I plan to take the "normal" way that people deal with data like this and enter data into each table separately, outside of an overarching transaction.

So I will create or retrieve the Customer first, then the CustomerAddress; then I will create the SalesOrder and however many SalesOrderItems the customer (or my data generation using RedGate's Data Generator!) wants. I may dress it up over time, but for my initial experiments, this is the plan.
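To make the plan concrete, here is a minimal sketch of that flow in T-SQL. The table names come from the data model; the column lists and the surrogate key handling are my own assumptions for illustration, and each statement commits as its own autocommit transaction since there is no enclosing BEGIN TRANSACTION:

    DECLARE @customerId int, @salesOrderId int;

    -- each INSERT commits on its own; a failure partway through
    -- leaves the earlier rows in place
    INSERT INTO dbo.Customer (CustomerNumber, CustomerName)
    VALUES ('CUST0001', 'Sample Customer');
    SET @customerId = SCOPE_IDENTITY();

    INSERT INTO dbo.CustomerAddress (CustomerId, AddressLine1, City)
    VALUES (@customerId, '123 Any Street', 'Anytown');

    INSERT INTO dbo.SalesOrder (CustomerId, OrderDate)
    VALUES (@customerId, SYSDATETIME());
    SET @salesOrderId = SCOPE_IDENTITY();

    INSERT INTO dbo.SalesOrderItem (SalesOrderId, ProductId, Quantity)
    VALUES (@salesOrderId, 1, 1);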

For a database with UNIQUE constraints on alternate keys and FOREIGN KEY constraints on relationships, this is all pretty easy and safe. I know I can't duplicate a customer or violate a foreign key constraint. How this will play out in the code is still just a bunch of guesses; my only real goal is that the ETL architect never has to decide whether running me over with a car is punishment enough for me introducing more bad data to deal with.
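For reference, here is roughly what those declarative protections look like on the disk-based versions of two of the tables (a sketch only; the column and constraint names are illustrative, not taken from the actual model):

    CREATE TABLE dbo.Customer
    (
        CustomerId int IDENTITY(1,1) NOT NULL
            CONSTRAINT PKCustomer PRIMARY KEY,
        CustomerNumber char(8) NOT NULL
            CONSTRAINT AKCustomer_CustomerNumber UNIQUE -- alternate key: no duplicate customers
    );

    CREATE TABLE dbo.CustomerAddress
    (
        CustomerAddressId int IDENTITY(1,1) NOT NULL
            CONSTRAINT PKCustomerAddress PRIMARY KEY,
        CustomerId int NOT NULL
            CONSTRAINT FKCustomerAddress$Customer
                REFERENCES dbo.Customer (CustomerId) -- relationship enforced by the engine
    );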

But the title of the session ends in "...Database Design", and code isn't database design. Rather, as an architect, it is important to realize that the database design affects the code. So will we need different data structures, or will the normal normalized model suffice? My initial feeling is that the logical model doesn't change. The actual implementation details (such as the lack of foreign key, unique, and check constraints) will differ, and the hardware/implementation layer changes immensely, and this affects your code in ways you had best understand.

I "think" that what I determine will be that the basic data architects output remains rather constant. The rules of normalization (briefly described here), don't change at all. You still design the tables and columns based on the same factors of cardinality you would before. The physical model will be the same, and if your data modeling tool supports the in-memory structures, it is just a matter of choosing which tables should be in-mem and which don't really merit it, followed by a lot of testing (a lot).

For my first tests, I only made the four "hot" tables in-mem:

[Image: data model diagram highlighting the four in-memory tables]
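At the DDL level, that choice looks roughly like the following (a minimal sketch assuming SQL Server 2014 syntax; the columns and bucket count are illustrative assumptions):

    CREATE TABLE dbo.SalesOrder
    (
        SalesOrderId int IDENTITY(1,1) NOT NULL
            PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
        CustomerId int NOT NULL, -- no FOREIGN KEY possible here in SQL Server 2014
        OrderDate datetime2(0) NOT NULL
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);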

This design will work fine when dealing with interop code, even if I want to implement data integrity checks against the domain tables. If I want to use native code, then all of the tables used will need to be in-memory. Are data integrity checks that important? Well, they are to me, but not so much to a lot of folks who trust their other layers to get it right. My scenario, where all of the domain tables are "static", enables this to work just fine. As long as ProductType never changes, and the other code layers have only the right values, you can easily say "this works" (as long as it does... and your code has been tested for what happens if anything crashes on any given statement to the database... which is not an easy task).
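For context, "native code" here means a natively compiled stored procedure, which in SQL Server 2014 can reference only memory-optimized tables; that is where the "all tables in-memory" requirement comes from. A minimal sketch (the procedure and column names are mine, not from the series):

    CREATE PROCEDURE dbo.SalesOrder$Insert
        @CustomerId int, @OrderDate datetime2(0)
    WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
    AS
    BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'English')
        -- every table referenced here must be memory-optimized, so checking
        -- this row against a disk-based domain table is not an option
        INSERT INTO dbo.SalesOrder (CustomerId, OrderDate)
        VALUES (@CustomerId, @OrderDate);
    END;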

In my next post, I will share at least one of the effects the change to in-memory tables has had on the code I am using, and what I am considering doing about it.

Published Monday, June 30, 2014 5:03 PM by drsql


Comments

 

Sunk818 said:

Your post seems to end without a conclusion. I assume there is a part two?

June 30, 2014 11:01 PM
 

drsql said:

Actually there will be many more parts. I have a few earlier posts that set up the design, and here I am setting up the scenario that I am going to try testing. In the next day or two, I will be posting about how changing these tables to in-memory tables affected the outcome of my tests, and start to ruminate on what I will do next.

I will edit this post to make that more clear...

Thanks!

July 1, 2014 12:19 AM

