THE SQL Server Blog Spot on the Web

Louis Davidson

How In-Memory Database Objects Affect Database Design: The Logical and Physical Models

This post is part of an ongoing series of blog entries I am writing while preparing to give a presentation based on the prefix of this blog's title. It is a work in progress as I explore the effects of the new in-memory model. I would love comments telling me where I am wrong (especially if it turns out that I am!)

Now that I have the conceptual model I want to work with (and a bunch of "life" tasks out of the way), I head into fleshing out the model. I go ahead and add attributes, along with domains that I will use to build my actual database. The model continues with a minimalist approach: just the basics that are needed (times, names, etc.), nothing more, nothing less.

Logical Model

As this is a very simple model, I won't drag it out any more than I need to, and I am going to go ahead and make this a physical model:



Basically, I set datatypes, added domain tables for some of the simple domains (to make data checking easier when (if?) I build manual foreign keys in the code, since in-memory tables don't support the concept), and added row create and modify times to every editable table: the four tables (Customer, CustomerAddress, SalesOrder, and SalesOrderItem) that I will be loading as part of my stress testing. Nothing too fancy, but at the same time, not so simplistic that I felt it would lose any and all meaning. The other tables are not really "transactional", so I will be loading them all at once as a configuration step. My not adding physical columns like row modify times to those tables isn't a suggestion that they aren't needed in a real system (who added that $.10 fare from Boston to LA?), but rather an indication that I wasn't planning on dealing with that here. The four major tables will be loaded full speed ahead; as fast as we can take orders from new and existing customers, we will. All other data will simply be configuration data, for me.
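To make that concrete, here is a rough sketch of what one of the editable tables might look like as a memory-optimized table. The names, datatypes, and bucket count are my illustrative assumptions, not final DDL; the point is the row time columns, and the fact that the reference to the domain table has to be checked in code rather than declared:

```sql
-- Sketch only (hypothetical names, SQL Server 2014 syntax). Memory-optimized
-- tables do not support FOREIGN KEY constraints, so the AddressTypeCode
-- reference to its domain table is not enforced by the DDL.
CREATE TABLE dbo.CustomerAddress
(
    CustomerAddressId INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    CustomerId        INT NOT NULL,
    AddressTypeCode   VARCHAR(20) NOT NULL, -- "manual foreign key" to dbo.AddressType
    RowCreateTime     DATETIME2(0) NOT NULL,
    RowLastModifyTime DATETIME2(0) NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

-- The manual foreign key check then lives in the insert code, something like
-- this fragment (inside a procedure that takes @AddressTypeCode):
DECLARE @AddressTypeCount INT;

SELECT @AddressTypeCount = COUNT(*)
FROM   dbo.AddressType
WHERE  AddressTypeCode = @AddressTypeCode;

IF @AddressTypeCount = 0
    THROW 50001, N'AddressTypeCode does not exist', 1;
```

The COUNT(*) assignment pattern (rather than EXISTS) is deliberate here, since the supported T-SQL surface area inside natively compiled procedures in 2014 is quite restricted.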

On the other hand, if I were going to do a full-day test, a process that might be interesting to include would be a price hike. Who chooses the price? The database code, or the front end? If the price changes while the client has the old price in hand, should the transaction fail and say "Sorry, our prices went up, try again! Thank you, come again." or just accept the previous price? And then, how much time might acceptably pass before the failure notice comes up? Could that be hacked by a crafty 10-year-old to get pricing from a year ago? Discussions like this are why actual design sessions take simply forever to complete, and are very boring to people who strictly code... It is also why we are avoiding the topic here, for the most part.
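If I were implementing the strictest of those answers (fail on any stale price), the check in the order-taking code might look something like the following fragment. The table and column names are hypothetical, and @ProductId and @ClientPrice would be parameters of the procedure:

```sql
-- Hypothetical sketch of the "reject stale price" policy inside the
-- order-taking procedure.
DECLARE @CurrentPrice DECIMAL(10, 2);

SELECT @CurrentPrice = UnitPrice
FROM   dbo.Product
WHERE  ProductId = @ProductId;

IF @CurrentPrice <> @ClientPrice
    THROW 50002, N'Sorry, our prices went up, try again!', 1;
```

The softer policies (accept the old price, or accept it within a grace period) would just change the IF condition, which is exactly why the decision belongs in a design session and not in the code.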

In terms of our implementation, two of the more interesting columns to deal with are going to be CustomerNumber and SalesOrderNumber. Generating unique values for customer consumption is a fairly simple concept, but it will change when we start working with in-memory code, certainly for the 2014 version.
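As I understand the 2014 restrictions, IDENTITY on memory-optimized tables is limited (only IDENTITY(1,1) at RTM), and NEXT VALUE FOR cannot be called inside a natively compiled procedure, so a customer-facing number has to come from somewhere like a SEQUENCE fetched in interpreted T-SQL before the insert. A sketch, with hypothetical names and format:

```sql
-- Sketch with hypothetical names; one workaround for generating
-- customer-facing numbers for 2014 memory-optimized tables.
CREATE SEQUENCE dbo.CustomerNumberSequence
    AS INT
    START WITH 1
    CACHE 100; -- caching reduces contention under heavy insert load

-- Fetched in interpreted T-SQL (not from inside a natively compiled
-- procedure), then formatted for the customer:
DECLARE @CustomerNumber CHAR(10) =
    'CUST' + RIGHT('000000'
        + CAST(NEXT VALUE FOR dbo.CustomerNumberSequence AS VARCHAR(6)), 6);
```

This is part of what I mean by the implementation changing: the generation step moves out of the table definition and into the loading code.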

The next steps are already underway (though I am still looking for interesting things to blog about along the way). In the next entry in this series, I will discuss my methods of testing the structures, the issues I have seen along the way, and anything else that comes up.


Published Saturday, June 28, 2014 4:04 PM by drsql


