A few months ago I gave a talk at SQLBits 9 comparing Vertipaq and OLAP (you can watch the session recording here), which I have since repeated at other conferences and webcasts. The title was intentionally provocative, but it contained an explanation of the real content – in fact, it is “Vertipaq vs OLAP: Change Your Data Modeling Approach”. These days I am realizing how huge the impact of this technology will be, and how I have only scratched the surface of the revolution happening in the BI modeling world. Regardless of whether you have watched the session or not (and you may have another good reason to do so after reading this post), let’s try to look at what is happening from another point of view.
How do you model a BI solution? Well, you may have a data warehouse or not, and you can apply the SQLBI Methodology or another of your choice (there are many of them, yes). But, in the end, you create a star schema. Maybe a snowflake one, even though that is almost always a bad idea. But a snowflake schema can be converted into a star schema, so let’s go with that. You have to conform your data model to a well-known pattern that can be easily digested by an Analysis Services cube – I mean, a Multidimensional model. And that’s fine: it works great, and millions of customers are very happy with it.
But then someone comes up with a question that was not planned in advance. Someone says, “I’d like to analyze customer behavior, define a cluster of my customers based on cube slicing and dicing, and then I want to know whether in this specific month, for this promotion, they bought more or less than their own individual average over the last three months”. And they expect this to be doable and fast. At this point you have several options:
- Talk about the next football/baseball/NBA/rugby match hoping he’ll forget what he asked for;
- Finally show off your master level of MDX and write a geeky MDX query that doesn’t work in Excel and cannot be browsed in an Excel PivotTable (users seem unable to appreciate how nice it is to write your own custom client code with AMO – except a few Italian speakers, who understand that AMO is just the first person singular present of the verb “to love”, even if for some inexplicable reason most of them tend to associate the word with the noun “fishhook”, which has the same spelling in Italian);
- Change the data model in Analysis Services, only to realize that you have to change the Data Mart design and the ETL as well, pushing the business logic of a single query down into the ETL implementation – to the delight of those guys who worked hard to pump data into your ultimate data model, the one that was supposed to answer any query from any user at any time.
Well, I have tried all of them, and I can say that the last one is the most expensive and the only one that really makes the user happy – for about five minutes, until he realizes he would like to see something else he hadn’t thought of before. This process is called knowledge discovery; it is perfectly well known and described in the books, and if only users were able to formulate their questions along a predictable path, we would live in a better world. But, hey, I’m a consultant: there are no issues, just opportunities. At least until an opportunity moves beyond the horizon of the profitability ROI line.
Are you with me? Have you experienced all of this? (If you’re not a consultant, you can just replace “profitability ROI” with “acceptable quality of working life”, which is the equivalent concept – a consultant never experiences the notion of a life beyond work, would not understand the expression at all, and would simply file a Connect item asking the Word team to fix the Spelling and Grammar checker.)
Well, now there is Analysis Services 2012. You can create models in Multidimensional or Tabular mode. At first sight, Tabular has fewer features than Multidimensional: it is simpler, and you can define a model that looks like a Multidimensional one. You can create a Tabular model that, when you connect from Excel, appears exactly like a Multidimensional one. It might be faster, it might be easier to develop, but in the end it will seem to offer the same features to the end user. Apparently, nothing has changed. At least, if you are lazy enough not to think you can do more.
A few months after my talk at SQLBits, I keep seeing new reasons to move to Tabular, if only for two capabilities that are buried in the marketing material but are incredibly important from a data modeling point of view:
- You can query data using relationships that are not defined in the data model as “regular” relationships
- You can extend the data model without reprocessing the whole database by using calculated columns
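To give an idea of what these two capabilities look like in practice, here is a minimal DAX sketch. Every table and column name in it (Customer, Sales, Sales[PromotionCode], the “SUMMER2012” code) is a hypothetical example of mine, not taken from any real model; the point is only that the join and the new column both live inside the query, with no change to the deployed model and no reprocessing:

```dax
-- Hypothetical model: a Customer table and a Sales table, with NO
-- relationship defined between them in the Tabular model.
EVALUATE
ADDCOLUMNS (
    VALUES ( Customer[CustomerKey] ),
    -- A column computed at query time: nothing to reprocess
    "Promo Sales",
        CALCULATE (
            SUM ( Sales[Amount] ),
            -- A "virtual" relationship: this filter plays the role of a
            -- regular relationship, but it exists only inside this query
            FILTER (
                Sales,
                Sales[CustomerKey] = Customer[CustomerKey]
                    && Sales[PromotionCode] = "SUMMER2012"
            )
        )
)
```

The same pattern works for any pair of columns with matching values, which is exactly why a question that would have forced a model change in Multidimensional can stay a query-level concern in Tabular.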
These two features add a fourth option for answering the question that originated this blog post. And the fourth option is: write a DAX query that answers it directly, without touching the data model.
I’m not saying it’s easy. But by moving your DAX skills to an expert level, you can write complex queries that keep good performance without requiring any change to the underlying data model. Translated into practical terms: happy customer, problem solved, out of the office in time to watch your favorite football/baseball/NBA/rugby/whatever match on TV.
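To make the fourth option concrete, here is a hedged sketch of a query answering the “did they buy more than their own three-month average during this promotion?” question from the beginning of the post. All names and dates are assumptions of mine (Sales, Customer, a 'Date' table with regular relationships to Sales, and June 2012 as the promotion month), not from any real model:

```dax
-- Compare each customer's sales in the assumed promotion month
-- (June 2012) against their own monthly average over the previous
-- three months, keeping only customers who beat their average.
EVALUATE
FILTER (
    ADDCOLUMNS (
        VALUES ( Customer[CustomerKey] ),
        "Promo Month Sales",
            CALCULATE (
                SUM ( Sales[Amount] ),
                DATESBETWEEN ( 'Date'[Date], DATE ( 2012, 6, 1 ), DATE ( 2012, 6, 30 ) )
            ),
        "Avg Previous 3 Months",
            CALCULATE (
                SUM ( Sales[Amount] ),
                DATESBETWEEN ( 'Date'[Date], DATE ( 2012, 3, 1 ), DATE ( 2012, 5, 31 ) )
            ) / 3
    ),
    -- Columns added by ADDCOLUMNS can be referenced here by name
    [Promo Month Sales] > [Avg Previous 3 Months]
)
```

The business logic lives entirely in the query: the next time the user changes his mind about the clustering or the time window, you edit a few lines of DAX instead of the Data Mart and the ETL.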
A few months after my talk at SQLBits, I keep seeing new reasons to move to Tabular, if only for the flexibility it offers when you query it. If you tried OLAP in the past and found it too rigid in data modeling for your requirements, give Tabular a chance. You have to learn it more deeply than a casual user would, but you will find an unexplored territory of new data modeling and reporting opportunities. (How many articles have you read about querying Tabular from Reporting Services?)
Next mission: convince IT Pros not to virtualize an SSAS box running Tabular, because it is much more convenient to spend money on fast CPUs (not too many cores), large and fast memory, and cheap disks. Tactic: design a fancy dashboard for the CEO and gain immunity in the fight against the SAN lobbies (Thomas would use a less polite term here).