As soon as my book is complete, I'm going to focus full-time on www.codeplex.com/nordic and an ISV app based on Nordic. In the meantime, here's an email I recently received from an ISV that's testing Nordic as the database design for their next version:
I have implemented a variation of Nordic that relies on composite (ID int, ServerID tinyint) keys for PKs and FKs, replacing the GUIDs. I've added one table, which I mentioned to you a while back: a home for the AssociationID, where the AssociationID (actually the composite ID, ServerID) is unique.
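To make the composite-key idea concrete, here is a minimal sketch. The writer is on SQL Server, but SQLite (via Python's sqlite3) can show the same structure; the table and column names beyond ID/ServerID/AssociationID are my own guesses, not the ISV's actual schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Composite (ID, ServerID) primary key in place of a GUID:
# each server hands out its own ID sequence, and ServerID
# disambiguates rows that originate on different servers.
conn.executescript("""
CREATE TABLE Object (
    ID       INTEGER NOT NULL,
    ServerID INTEGER NOT NULL,
    Name     TEXT,
    PRIMARY KEY (ID, ServerID)
);
CREATE TABLE Association (
    AssociationID  INTEGER NOT NULL,
    ServerID       INTEGER NOT NULL,
    ObjectID       INTEGER NOT NULL,
    ObjectServerID INTEGER NOT NULL,
    PRIMARY KEY (AssociationID, ServerID),
    -- FKs are composite too, referencing the full key pair.
    FOREIGN KEY (ObjectID, ObjectServerID) REFERENCES Object (ID, ServerID)
);
""")
# The same local ID from two different servers is fine,
# because the (ID, ServerID) pair is what must be unique.
conn.execute("INSERT INTO Object VALUES (1, 1, 'from server 1')")
conn.execute("INSERT INTO Object VALUES (1, 2, 'from server 2')")
count = conn.execute("SELECT COUNT(*) FROM Object").fetchone()[0]
```

The narrower int + tinyint pair keeps keys small and index-friendly compared to a 16-byte GUID, while still being globally unique across servers.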
We ran into problems with the Nordic stored procedures, since Entity Framework has no concept of table variables in SQL. So I've taken to writing triggers on the tables and on the views over the associations. The triggers use table variables internally, so they are hidden from Entity Framework. As a side benefit, the triggers let us enforce, very simply, a global "never delete" rule, which has been a design goal of ours for a long while.
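A rough sketch of the view-plus-trigger routing and the "never delete" rule, again in SQLite as a stand-in for SQL Server (SQLite has no table variables, so that part is omitted; the view and column names are illustrative, not the ISV's):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Object (
    ID        INTEGER NOT NULL,
    ServerID  INTEGER NOT NULL,
    Name      TEXT,
    IsDeleted INTEGER NOT NULL DEFAULT 0,
    PRIMARY KEY (ID, ServerID)
);
-- The app (and the ORM) sees only this view of live rows.
CREATE VIEW vObject AS
    SELECT ID, ServerID, Name FROM Object WHERE IsDeleted = 0;

-- INSTEAD OF triggers route DML through the view. An INSERT
-- passes through to the base table...
CREATE TRIGGER vObject_insert INSTEAD OF INSERT ON vObject
BEGIN
    INSERT INTO Object (ID, ServerID, Name)
    VALUES (NEW.ID, NEW.ServerID, NEW.Name);
END;

-- ...while a DELETE becomes a soft delete, so no row is ever
-- physically removed: the "never delete" rule, enforced in one place.
CREATE TRIGGER vObject_delete INSTEAD OF DELETE ON vObject
BEGIN
    UPDATE Object SET IsDeleted = 1
    WHERE ID = OLD.ID AND ServerID = OLD.ServerID;
END;
""")
conn.execute("INSERT INTO vObject VALUES (1, 1, 'kept forever')")
conn.execute("DELETE FROM vObject WHERE ID = 1 AND ServerID = 1")
live = conn.execute("SELECT COUNT(*) FROM vObject").fetchone()[0]
kept = conn.execute("SELECT COUNT(*) FROM Object").fetchone()[0]
```

After the DELETE, the view shows nothing, but the base row still exists, which is exactly what lets the ORM stay oblivious to the logic behind the view.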
Our database is a bit of a stripped-down Nordic now, but I've populated it over the last week and I have to say it kicks ass!!! Using the INSTEAD OF triggers on the views, I've pushed the Object table to 870,000 rows. I made our association table an extension of Object as well, since we want the Object attributes to apply to associations just as much as they apply to entities. 46% of the Objects are entity data spread over 7 different classes, and 54% are class/role associations spread over 15 class/role association types.
The insert triggers can create 500,000 rows spread across 4 tables (including object rows, class entities, and class/role associations) in 2 minutes (on a machine that is 6 years old!!), starting from an empty model. Our tables are tall and narrow, and I did strip the object table down a little from what you have. I'm not sure yet what the performance will be when we get up into the 10-million-row range and add bunches of indexes.
Writing the triggers kind of sucks, but the bang for the buck is good so far and I'm getting faster and better at it. Someday, maybe Marc will incorporate some of the logic into his framework, Interacx.
And I've tested this model in a P2P (peer-to-peer) replicated configuration. (The ServerID's purpose in life is to make PKs unique in a replicated environment.) In all my tests so far, the data model maintains integrity when we make changes to two different database instances at the same time.
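The replication scenario can be simulated in miniature: two independent instances take writes concurrently, and the rows are then merged. This is only a toy sketch (SQLite in place of SQL Server's P2P replication, and a naive row copy in place of the real replication agent), but it shows why ServerID keeps the keys from colliding:

```python
import sqlite3

DDL = """CREATE TABLE Object (
    ID       INTEGER NOT NULL,
    ServerID INTEGER NOT NULL,
    Name     TEXT,
    PRIMARY KEY (ID, ServerID)
)"""

# Two independent database instances take writes at the same time.
a = sqlite3.connect(":memory:")
b = sqlite3.connect(":memory:")
a.execute(DDL)
b.execute(DDL)

# Each instance assigns the same local ID (1), but stamps its own ServerID.
a.execute("INSERT INTO Object VALUES (1, 1, 'written on A')")
b.execute("INSERT INTO Object VALUES (1, 2, 'written on B')")

# "Replication" here is a naive merge: copy every row from B into A.
# Because ServerID is part of the key, the identical local IDs
# never collide and integrity is preserved.
for row in b.execute("SELECT ID, ServerID, Name FROM Object"):
    a.execute("INSERT INTO Object VALUES (?, ?, ?)", row)

merged = a.execute("SELECT COUNT(*) FROM Object").fetchone()[0]
```

With plain int identity keys and no ServerID, the same merge would fail on a duplicate key; that collision is exactly what the ServerID column exists to prevent.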
Looks like we will continue to extend this model, adding features to it, and maybe someday use it in a production environment instead of just as a research tool.
When your book reviews end, we should talk on the phone someday.