THE SQL Server Blog Spot on the Web


SQLBI - Marco Russo

Please visit the same blog at - comments are now disabled here but you can comment on the new blog (all blog posts available here are available there, too).
You can follow me on Twitter: @marcorus

How to Incrementally Process your #Tabular Model by Using Process Add in #ssas

I recently wrote an article about how to implement Process Add on a table in a Tabular model. This is an area where Microsoft's documentation is lacking, especially if you want to use a custom binding query for every Process Add command, which should be the common case in a Process Add scenario. Cathy Dumas wrote several blog posts about this topic, and I tried to put everything together, showing the same example written in XMLA script, AMO, and PowerShell. You can also use Integration Services, but considering that you probably want to customize the binding query, chances are you will opt for a more programmatic approach, perhaps embedding an XMLA script or an AMO script in a standard task in SSIS.
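To make the custom-binding idea concrete, here is a minimal Python sketch that assembles a ProcessAdd XMLA batch with an out-of-line QueryBinding, so each run can restrict the source query to the newly arrived rows. All the IDs (TabularDB, Model, Sales, SqlServerSource) and the filter column are placeholders, not names from the article:

```python
# Sketch: build a ProcessAdd XMLA batch whose out-of-line QueryBinding
# limits the rows read to those added since the last load.
# Every ID below is a placeholder for your own database/partition names.

XMLA_TEMPLATE = """<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Process>
    <Type>ProcessAdd</Type>
    <Object>
      <DatabaseID>{database}</DatabaseID>
      <CubeID>{cube}</CubeID>
      <MeasureGroupID>{measure_group}</MeasureGroupID>
      <PartitionID>{partition}</PartitionID>
    </Object>
  </Process>
  <Bindings>
    <Binding>
      <DatabaseID>{database}</DatabaseID>
      <CubeID>{cube}</CubeID>
      <MeasureGroupID>{measure_group}</MeasureGroupID>
      <PartitionID>{partition}</PartitionID>
      <Source xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
              xsi:type="QueryBinding">
        <DataSourceID>{data_source}</DataSourceID>
        <QueryDefinition>{query}</QueryDefinition>
      </Source>
    </Binding>
  </Bindings>
</Batch>"""

def build_process_add(database, cube, measure_group, partition,
                      data_source, query):
    """Return the XMLA script for a ProcessAdd with a custom binding query."""
    return XMLA_TEMPLATE.format(database=database, cube=cube,
                                measure_group=measure_group,
                                partition=partition,
                                data_source=data_source, query=query)

script = build_process_add(
    "TabularDB", "Model", "Sales", "Sales", "SqlServerSource",
    "SELECT * FROM Sales WHERE LoadDate &gt; '2012-02-21'")
print(script)
```

You would then send the resulting script to the server (for example via an Execute method on an AMO/ADOMD connection, or an SSIS Analysis Services Execute DDL Task), regenerating the `QueryDefinition` filter on every run.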

The next step for me will be studying how close to "near real time" a Tabular model can get by using this type of process option. In fact, it should be possible to use push-mode processing, as in a Multidimensional model, and my initial tests suggest there is no "out of service" window when you process data in Tabular. So you can forget the locking issues you may have in a Multidimensional model in Analysis Services. However, you need memory, and it is hard to say today how well this architecture can scale in terms of concurrent users once you start processing data incrementally. The quality of the compression performed by VertiPaq might also suffer.

If you have any experience in this area, please contact me and share your knowledge! Otherwise, stay tuned: I'll try to study this topic further and then blog what I find.

Published Wednesday, February 22, 2012 2:31 PM by Marco Russo (SQLBI)



Nick Singleton said:

Hi Marco -

I know that you can import data from a query to your tabular model.  I need to import Oracle data using a WHERE clause on a column "LastModified" to get incremental extracts.  However, this data could contain rows that have already been imported.  These rows would have been changed since a prior load.  Since SSAS doesn't prevent duplicates, how would I handle this?  If I was persisting the data in SQL Server, I know that I could do a delete-insert or use the merge statement.  However, I'm trying to pull the data directly from Oracle into the tabular model.  What do you recommend?



November 3, 2014 10:06 AM

Marco Russo (SQLBI) said:

Nick, I would suggest reading the data into a partition and overwriting it. The alternative to rebuilding a partition is to create compensating transactions, so that the overall result is the correct one. You cannot delete or change single rows once they are imported.

I hope it helps,


November 4, 2014 2:07 AM

Bhavin said:

How do I process a Tabular model for the last 30 days and schedule it as a daily process?

March 7, 2017 10:12 AM


Vivek said:

In my scenario, old records are getting updated. So when I fetch the latest modified records for cube processing, I get both the new records and the old, updated records. By using ProcessAdd, I am able to handle the new records, but the values are not correct for the old, modified records: instead of the value being replaced by the new value, it is added. For example, if the old weight was 100 and the new updated weight is 10, then after processing the model I get 110.

Please suggest an approach for this scenario.

August 3, 2017 5:31 AM

Marco Russo (SQLBI) said:

Two options:

1) refresh the old partition containing the modified records

2) create a set of rows visible to the ProcessAdd operation that contains compensating transactions for the modified records.
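For option 2, the compensating row for each updated record carries the difference between the new and old measure values, so that summing the already-loaded row and the compensating row yields the correct total. A minimal sketch of the idea (the `id` and `weight` column names are hypothetical, taken from Vivek's example):

```python
def compensating_rows(old_rows, new_rows, key="id", measure="weight"):
    """For each updated record, emit a row whose measure is the delta
    (new - old); ProcessAdd-ing it corrects the aggregated total."""
    old_by_key = {r[key]: r for r in old_rows}
    deltas = []
    for new in new_rows:
        old = old_by_key.get(new[key])
        if old is not None and old[measure] != new[measure]:
            delta = dict(new)                       # keep the other columns
            delta[measure] = new[measure] - old[measure]
            deltas.append(delta)
    return deltas

# Vivek's example: old weight 100, updated weight 10.
old = [{"id": 1, "weight": 100}]
new = [{"id": 1, "weight": 10}]
deltas = compensating_rows(old, new)
print(deltas)  # [{'id': 1, 'weight': -90}]  ->  100 + (-90) = 10
```

In practice you would expose these delta rows through the view or query bound to the ProcessAdd operation, instead of the raw modified rows.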

August 4, 2017 2:19 AM

Rodrigues said:


I have a scenario where the Tabular cube holds the last 30 days of data, one partition for each day. I was thinking of partitioning the cube into 30 partitions, each querying data for a particular day of the month.

"SELECT * FROM fact WHERE DAY(Date) = n" (for n = 1, 2, 3, ..., 30; 30 queries, one for each partition)

How do I go about overwriting existing data in a processed partition? Can that be achieved? That way, if my ETL received data for Day 13, Day 20, and Day 1, I'd want the cube to process only those three days and overwrite any existing data in those partitions.

Would really like to know the simplest method to go about this.

August 11, 2017 12:37 PM

Marco Russo (SQLBI) said:

You have to either:

- define an algorithm by which each day is assigned a number between 1 and 30, always overwriting the partition that previously held that number

- remove one partition and create a new one

I would prefer the latter; it is simpler to manage.
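The second approach (drop the oldest daily partition and create a new one for the latest day) can be sketched as a rolling-window diff. The actual drop/create would go through AMO or TMSL; the `Fact_YYYYMMDD` naming convention here is only an assumption for illustration:

```python
from datetime import date, timedelta

def partition_name(day):
    """Name a daily partition after its date, e.g. 'Fact_20170813'."""
    return "Fact_" + day.strftime("%Y%m%d")

def rolling_window_changes(existing, today, days=30):
    """Given the names of the existing daily partitions, return which ones
    to drop (now outside the window) and which ones to create."""
    window = {partition_name(today - timedelta(days=n)) for n in range(days)}
    to_drop = sorted(set(existing) - window)
    to_create = sorted(window - set(existing))
    return to_drop, to_create

# Partitions from yesterday back 30 days; today rolls the window forward.
existing = [partition_name(date(2017, 8, 13) - timedelta(days=n))
            for n in range(1, 31)]
drop, create = rolling_window_changes(existing, date(2017, 8, 13))
print(drop)    # ['Fact_20170714'] - oldest day, now outside the window
print(create)  # ['Fact_20170813'] - today's new partition
```

Naming partitions after the actual date (rather than a day-of-month slot 1-30) avoids the ambiguity of months with 28, 29, or 31 days, which is one reason the drop-and-recreate approach is simpler to manage.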

August 13, 2017 7:00 AM
New comments to this post are disabled

About Marco Russo (SQLBI)

Marco Russo is a consultant, writer, and trainer specialized in Business Intelligence with Microsoft technologies. He runs the SQLBI.COM website, which is dedicated to distributing resources useful to BI developers, such as Integration Services components, Analysis Services models, tools, and technical information. Marco is certified as MCT, MCDBA, MCSD.NET, MCSA, and MCSE+I.
