
SQLBI - Marco Russo

SQLBI is a blog dedicated to building Business Intelligence solutions with SQL Server.
You can follow me on Twitter: @marcorus

Why to use #Tabular in Analysis Services 2012 #ssas

A few months ago I gave a session at SQLBits 9 comparing Vertipaq vs. OLAP (you can watch the session recording here), which I have since repeated a few times at other conferences and webcasts. The title was intentionally provocative, but it contained an explanation of the real content – in fact it is “Vertipaq vs OLAP: Change Your Data Modeling Approach”. These days I’m realizing how huge the impact of this technology will be, and how I have just scratched the surface of the revolution happening in the BI modeling world. Regardless of whether you watched the session or not (and you may have another good reason to do so after reading this post), let’s try to see what’s happening from another point of view.

How do you model a BI solution? Well, you can have a data warehouse or not, and you can apply the SQLBI Methodology or another of your choice (there are many of them, yes). But, in the end, you create a star schema. Maybe a snowflake one, even if that is almost always a bad idea. But a snowflake schema can be converted into a star schema, so let’s go with that. You have to conform your data model to a well-known pattern that can be easily digested by an Analysis Services cube. I mean, a Multidimensional model. And that’s fine, it works great, and millions of customers are very happy with it.

But then, someone comes up with a question that was not planned in advance. Someone says “I’d like to analyze customer behavior, define a cluster of my customers based on cube slicing and dicing, and then I want to know whether in this specific month, for this promotion, they bought more or less than their own individual average over the last three months”. And they expect this to be doable and fast. At this point you have several options:

  • Talk about the next football/baseball/NBA/rugby match hoping he’ll forget what he asked for;
  • Finally show your master level of MDX and write a geeky MDX query that doesn’t work in Excel and cannot be browsed in an Excel PivotTable (users seem unable to appreciate how nice it is to write your own custom client code with AMO – except a few Italian-speaking people, who understand that AMO is just the first person singular present of the verb LOVE, even if for an inexplicable reason most of them tend to associate the word with the noun fishhook, which has the same spelling in Italian);
  • Change the data model in Analysis Services, only to realize that you have to change the Data Mart design and the ETL, pushing the business logic of a query down to the ETL implementation, to the delight of those guys who worked hard to pump data into your ultimate data model that should have been able to answer any query from any user at any time.

Well, I have tried all of them, and I can say that the last one is the most expensive and the only one that really makes the user happy for 5 minutes, until he realizes he would like to see something else he hadn’t thought of before. This process is called knowledge discovery; it is perfectly well known and described in the books, and if only the user were able to formulate questions along a predictable path, we would live in a better world. But, hey, I’m a consultant, there are no issues, just opportunities. At least until an opportunity moves beyond the horizon of the profitability ROI line.

Are you with me? Have you experienced all of this? (If you’re not a consultant, you can just replace “profitability ROI” with “acceptable quality of working life”, which is the equivalent concept - a consultant never experiences the notion of a life beyond work, would not understand the expression at all, and would simply report a Connect item to the Word team in order to fix the Spelling and Grammar checker).

Well, now there is Analysis Services 2012. You can create models in Multidimensional or Tabular. At first sight, Tabular has fewer features than Multidimensional; it’s simpler, and you can define a model that looks like a Multidimensional one. You can create a Tabular model that, when you connect from Excel, appears exactly like a Multidimensional one. It could be faster, it could be easier to develop, but in the end it will seem to offer the same features to the end user. Apparently, nothing has changed. At least, if you are lazy enough not to think you can do more.

A few months after my talk at SQLBits I continue to see new reasons to move to Tabular, just for two capabilities that are hidden in the marketing material but are incredibly important from a data modeling point of view:

  • You can query data using relationships that are not defined in the data model as “regular” relationships
  • You can extend the data model without reprocessing the whole database by using calculated columns
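
To make these two capabilities concrete, here is a minimal DAX sketch (the Sales, Promotion and Geography tables and their columns are hypothetical, just to illustrate the pattern): a calculated column added to an existing table, and a measure that propagates a filter through a relationship that is not defined in the model.

-- Calculated column on the (hypothetical) Sales table: it "joins" Sales to
-- Promotion on PromotionCode even though no relationship is defined in the model.
-- Adding it recalculates only this column, not the whole database.
Sales[PromotionDiscount] =
LOOKUPVALUE (
    Promotion[DiscountPct],
    Promotion[PromotionCode], Sales[PromotionCode]
)

-- Measure that filters Sales by the regions currently selected on a Geography
-- table, even though no relationship between Sales and Geography exists.
Sales In Selected Regions :=
CALCULATE (
    SUM ( Sales[SalesAmount] ),
    FILTER (
        Sales,
        CONTAINS (
            VALUES ( Geography[RegionKey] ),
            Geography[RegionKey], Sales[RegionKey]
        )
    )
)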

These two features add a fourth option for answering the question that originated this blog post. And the fourth option is:

  • Just do it

I’m not saying it’s easy. But by moving your DAX skills to an expert level, you are able to write complex queries that keep good performance without requiring changes to the underlying data model. Translated into practical terms: happy customer, problem solved, out of the office in time to watch your favorite football/baseball/NBA/rugby/whatever match on TV.
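
To give an idea of what “just do it” means, here is a sketch of the kind of DAX query you could send to the model (from SSMS or Reporting Services) to answer the original question. Table names, column names and the promotion code are hypothetical: for each customer, the query returns the purchases made in March 2012 under a given promotion together with that customer’s own monthly average over the previous three months.

-- For each customer: sales in March 2012 for promotion "P01" versus the
-- customer's own monthly average over December 2011 - February 2012.
EVALUATE
ADDCOLUMNS (
    VALUES ( Customer[CustomerKey] ),
    "PromoSales",
        CALCULATE (
            SUM ( Sales[SalesAmount] ),
            Promotion[PromotionCode] = "P01",
            DATESBETWEEN ( 'Date'[Date], DATE ( 2012, 3, 1 ), DATE ( 2012, 3, 31 ) )
        ),
    "AvgPrevious3Months",
        CALCULATE (
            SUM ( Sales[SalesAmount] ),
            DATESBETWEEN ( 'Date'[Date], DATE ( 2011, 12, 1 ), DATE ( 2012, 2, 29 ) )
        ) / 3
)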

A few months after my talk at SQLBits I continue to see new reasons to move to Tabular, just for the flexibility it allows when you query it. If you tried OLAP in the past and it was too rigid in data modeling for your requirements, give Tabular a chance. You have to learn it more deeply than a casual user would, but you will find an unexplored territory of new data modeling and reporting opportunities. (How many articles have you read about querying Tabular from Reporting Services?)

Next mission: convince IT Pros not to virtualize an SSAS box running Tabular, because it is much more convenient to spend money on a fast CPU (not too many cores), large and fast memory, and cheap disks. Tactic: design a fancy dashboard for the CEO and get immunity to fight against the SAN lobbies (Thomas would use a less polite term here).

Published Thursday, March 01, 2012 2:43 PM by Marco Russo (SQLBI)


Comments

 

Lilly said:

I went to the Microsoft SQL Saturday / PASS event last Saturday in Silicon Valley; in one of the sessions the presenter told us that the industry is poised to move away from MOLAP to Tabular, and he actually mentioned you as a pioneer (NOT his exact words)

March 5, 2012 12:21 PM
 

Marco Russo (SQLBI) said:

Well, I wouldn't suggest moving a working MOLAP cube to Tabular today, and I'm still starting new projects in MOLAP these days.

What I clearly see is that there are opportunities in Tabular thanks to its flexibility, but these are very early days and we lack new client tools able to take advantage of the new model. Power View is just one, but there is space for more.

Thanks for your comment!

Marco

March 5, 2012 3:26 PM
 

HarshB said:

Hi Marco,

When you say there is space for more client tools -

a. do you mean a client tool like the old Proclarity - but with more features appropriate for BISM/DAX ?

b. Or do you mean more capabilities within Excel?

c. other ideas?

I remember your post:

http://sqlblog.com/blogs/marco_russo/archive/2010/01/04/microsoft-doesn-t-play-the-traditional-bi-client-game.aspx

Thanks

March 6, 2012 6:33 PM
 

Marco Russo (SQLBI) said:

I mean both.

Take a look at XLCubed, for example; they have an Excel add-in that will support DAX very soon (weeks).

Stay tuned!

Marco

March 6, 2012 6:39 PM
 

HarshB said:

http://cwebbbi.wordpress.com/2012/02/28/natural-and-unnatural-hierarchies-in-the-ssas-2012-tabular-model/

Hey Marco,

Just wondering - based on the new findings by Chris Webb in the above post - how deep do you think the following capability will be?

"You can query data using relationships that are not defined in the data model as “regular” relationships "

I am very interested in this topic - as it can enable some new data analysis techniques.

Thanks

March 9, 2012 7:01 PM
 

Marco Russo (SQLBI) said:

Hierarchies in Tabular are not related (for now) to specific features in Tabular; they serve more as a support for Multidimensional and MDX compatibility. You can of course use them in MDX, but for now not in DAX.

But I think there will be some evolution in future releases of Tabular...

Marco

March 10, 2012 6:18 AM
 

max conde said:

hi marco

starting new in BI, which technology would you choose?

April 16, 2012 12:43 PM
 

JO said:

How are larger datasets (terabyte size) handled in Tabular vs MOLAP?

June 12, 2012 12:54 PM
 

Marco Russo (SQLBI) said:

There are cubes as large as 24TB or more - and the size of the source data in SQL Server might be larger.

In Tabular the limit is the RAM available. However, because the compression rate might vary (usually 10x, but it can reach 100x in certain conditions) depending on data distribution, you can build a 200GB cube that corresponds to a 20TB table in SQL Server or another relational database. But you cannot make a generic statement about that, because it really depends on the data distribution.

June 13, 2012 9:04 AM
 

jo said:

Thanks for the response.

From what I've understood so far - and please correct me if I'm wrong - multidimensional is meant for larger datasets and tabular is geared more toward smaller datasets. You won't see Yahoo or AdCenter moving their multidimensional models over to tabular anytime soon. If I have a cube in SSAS 2008 that is over 1TB I wouldn't dream of migrating over to tabular unless I have a bunch of RAM available. No?

Being limited by RAM scares me, at least for the next couple of years. Generally it's much easier to add disk than to add RAM. Disk is always cheaper than memory.

June 19, 2012 2:37 PM
 

Marco Russo (SQLBI) said:

Jo, memory is a concern, but there are several scenarios in which 100GB compressed corresponds to 1TB or more of data on disk, and having no maintenance costs for aggregations lowers the TCO much more than the cost of the RAM. It really depends; there is no reason to migrate a working OLAP solution to Tabular, but there are scenarios that don't fit well in Multidimensional and work pretty well in Tabular.

June 19, 2012 6:58 PM
 

Vadim said:

I understood from my overall reading that Tabular knows how to swap data to disk, so with proper partitioning for large cube(s), and querying (mostly) a specific area (partition) of the cube, we can get reasonable performance even for a huge cube that cannot fit in memory. Am I right?

August 28, 2012 3:47 PM
 

Marco Russo (SQLBI) said:

Vadim, unfortunately not. First, paging in Tabular is designed only for paging data during processing, not for when queries are running or when the database doesn't fit in memory. Second, the xVelocity in-memory engine (VertiPaq) in its current implementation does not consider table partitioning in order to reduce the column scan during a query. ColumnStore in SQL Server, instead, is able to do such an optimization. In Tabular you might consume less memory by ignoring some columns in the table, but not by querying only some partitions. However, this is in theory; in practice the current implementation does not support a smart way to page out unused columns.

August 28, 2012 5:35 PM
 

Vadim said:

Thanks a lot for your answer.

Does your answer mean that for large cubes we need to stay with the multidimensional model (assuming that we don't have enough RAM)?

Regarding the SQL Server option: we have discovered that the combination of ColumnStore index + data is much more expensive from a storage point of view (15M in multidimensional SSAS against ~500M for the compressed test table plus ColumnStore index). The ColumnStore index alone(!) took 150M. When speaking about production storage this becomes very significant. It seems very strange, doesn't it? We defined it very straightforwardly via the wizard with all default parameters.

August 29, 2012 10:49 AM
 

Marco Russo (SQLBI) said:

Vadim, this is really strange. Usually Multidimensional does not store data in less space than Tabular. ColumnStore might be less optimized than Tabular, but the compression really depends on many factors. You can perform many optimizations in compression for Tabular (and thus also for ColumnStore) - for example, take a look at this article: http://www.sqlbi.com/articles/optimizing-high-cardinality-columns-in-vertipaq/

August 29, 2012 11:07 AM
 

Vadim said:

Thanks a lot.

I will follow your article's advice.

August 29, 2012 11:25 AM
 

John said:

Great article Marco. I'm finding it hard to work out why a structure of preaggregated fact/dimension tables in the relational engine is not talked about more as an alternative to Tabular or MD cubes.

Imagine a load is done into these aggregated tables each night from the OLTP data. We're in the 30-40 million row / 50-100GB of raw data (e.g. without indexes) range, so caching of the aggregated data would not be a problem.

September 23, 2012 9:01 PM
 

Marco Russo (SQLBI) said:

John, you have to consider that Analysis Services provides a layer of metadata that SQL Server alone cannot provide. With a large set of data you can evaluate using ROLAP or DirectQuery in order to keep data in SQL Server (maybe with ColumnStore indexes). ROLAP in Multidimensional also creates indexed views for aggregations automatically.

September 26, 2012 1:00 PM
 

Harsha said:

But there are many issues/functionalities that still need to be incorporated in Tabular.

1. Incremental Data Refresh/Processing is not as powerful as in traditional MOLAP.

a. If you have Slowly Changing Dimensions of Type 1 or 2 there is no option other than a full refresh. This proves very difficult to implement when your dimensions have a large number of records (5 million+ rows with 100 columns each in the table).

b. If your fact table is transactional (records getting updated), again the only option is a full refresh.

2. Development on projects that contain more than 20 tables is pathetic as the UI responses become dead slow. Just a click on one of the tables in the model takes 30+ seconds to respond.

Do you have any workarounds for these?

October 18, 2012 12:18 PM
 

Marco Russo (SQLBI) said:

Harsha, regarding your point 1: SCDs handled in Multidimensional have a similar challenge - a ProcessUpdate always has to read the whole dimension - and a transactional fact table has the same refresh issue as in SSAS Multidimensional.

If you compare with SQL Server, that's a different story, but the main difference is still the lack of metadata; then there is a difference in expressivity between SQL and DAX.

Regarding your point 2, I completely understand your pain. While waiting for improvements in SQL Server Data Tools, the only workarounds are:

- use a workspace server on the same PC as SSDT

- use a small set of data in design mode

- use a faster clock for your CPU - favor a workstation with fewer cores but a faster clock.

Hope it helps.

Marco

October 18, 2012 6:22 PM
 

Harsha said:

Marco... I need to know the guidelines for configuring the hardware for an instance of Tabular SSAS. Currently I have a server that has 2 CPUs with 8 cores each and 500 GB of RAM.

I am looking at 10+ fact tables each with 40-50 million records, 10 frequently used dimension tables with the same number of records, and 20+ dimensions with 3-4 million records each.

2 of the dimension tables have role-based row-level security enabled. Data access is mapped based on the NT login of every user.

With this configuration I am facing performance issues, with simple queries (MDX queries generated by MS Excel) taking 20+ minutes to execute with 20 concurrent users. With a single user I am able to run these queries in under 2 minutes.

Please let me know if I can reach out to you with further details...

October 25, 2012 4:37 AM
 

Marco Russo (SQLBI) said:

Harsha,

the performance issue you described seems related to some calculation that doesn't scale well in the storage engine of Tabular.

This probably cannot be improved with hardware, but rather by checking the data model and the DAX measures used (queries generated by Excel in MDX usually don't contain particularly critical calculations).

Using the profiler we might perform a deeper analysis.

Please contact me at marco (at) sqlbi (dot) com so that we can establish a direct connection if you are interested in such a consulting activity.

Marco

October 25, 2012 8:55 AM
 

Hiren Parikh said:

Sir, I have purchased your book on SQL Server Analysis Services 2012 and found it really helpful. Please provide me your Twitter details so I can follow you. Thanks again

January 8, 2013 6:37 AM
 

Marco Russo (SQLBI) said:

Hiren, my twitter account is @marcorus

You can follow me here: https://twitter.com/marcorus

January 8, 2013 7:19 AM
 

Ruben said:

Hi Marco,

I've been testing some of new BI features of SQL Server 2012.

I'm interested in the Tabular Model using DirectQuery.

I've set up some examples. But I have some questions about the DirectQuery mode usage.

When we use DirectQuery, does it mean we don't need a database exclusively prepared to support the BI model? A Microsoft white paper says that "Using DirectQuery does not eliminate the need for ETL. Updates to the source table still need to be managed.", but in that case how can we have real-time queries if we have to load the data from the source DB?

When we use the multidimensional model we have an application DB from which we extract data into another DB prepared for the cube. My question is whether, when using Tabular with DirectQuery, we still need to have this DB with extracted data, or whether we can use the application DB directly. That way we could reflect changes in the business in real time without having to wait for the data to be loaded into another database. Hope it makes sense.

February 27, 2013 9:55 AM
 

Marco Russo (SQLBI) said:

Ruben

there are two problems related to working with the application DB: performance and data consistency.

The performance issue exists when using DirectQuery in general - the SQL code generated is not very efficient in this release, and it could stress SQL Server much more than a well-designed custom SQL query would.

The data consistency issue appears when you have several tables with many relationships and there is missing data, which could hide valid transactions. In a simple database this is not a big issue, but in complex databases this is the very reason an ETL is needed - not for performance, but to "clean" the data for analysis.

You can do your ETL by using SQL views, but usually this could hurt performance, too.

It's a hard balance. I suggest you make some tests with DirectQuery using a copy of your real data, so you will be able to evaluate the performance you can obtain this way. If your database is quite small (a few MBs), this could be a good choice for you. But with a large database (tens of GBs) you will see that the speed of Analysis Services with xVelocity is way better than DirectQuery (and DirectQuery could be really slow, unless you have a SQL Server Parallel Data Warehouse...).

February 27, 2013 10:20 AM
 

Bill said:

Hi Marco,

Great article!

Is there any way to convert an instance from Multidimensional to Tabular once installed?  Or does SSAS need to be reinstalled as Tabular?

Thanks.

March 5, 2013 11:59 AM
 

Marco Russo (SQLBI) said:

I think the best choice is to install two instances side-by-side.

Technically it is possible to change a configuration setting in msmdsrv.ini and start the instance in a different mode than the original installation, but I think this is absolutely not supported and you should remove all databases before restarting the instance.

If you install two instances side-by-side, you can always start only the service you are using, without consuming RAM when it is not required.

March 5, 2013 12:15 PM
 

Ruben said:

Hi Marco,

thanks for your answer to my previous post.

My main interest in using DirectQuery would be to achieve real-time BI, or near real-time.

Is this still possible to achieve using xVelocity?

Is there any model that allows me to have a change made in my application DB available in my reports as soon as possible?

Or do I still have to wait for all the processing to be done from my application DB to my clean DB ready for data analysis?

Thank you.

March 6, 2013 6:16 AM
 

Marco Russo (SQLBI) said:

Ruben, if you use DirectQuery you can obtain real-time results, because any DAX query is translated into a SQL one. However, you no longer use the xVelocity in-memory analytics engine (VertiPaq) in this scenario, and performance depends on SQL Server power. In this case, you use Analysis Services just as a metadata layer on top of your SQL Server database.

As an alternative, you might consider continuous incremental processing of a partition or table, reaching a 1-2 minute refresh period (it's hard to refresh Tabular at a higher frequency).

Thanks.

March 6, 2013 6:21 PM
 

Velmourougan Catinat said:

Hi Marco,

We were looking at using the data mining algorithms supported by the SSAS Multidimensional model with the Tabular model. It looks like this is currently not supported. Is it possible to do linear regression with DAX/the Tabular model?

Regards

Vel

May 9, 2013 12:21 PM
 

Marco Russo (SQLBI) said:

Currently, it is not supported in Tabular. However, you can install a Multidimensional instance on the same server and then use that instance for the data mining algorithms.

Marco

May 9, 2013 5:04 PM
 

Sam said:

Hi Marco. I am about to start up a new BI project. So far I haven't done a Tabular project yet. The business seems very interested in Power View as a front end, which is only supported by Tabular data models at this point in time, as far as I understand. But because of the nature of their business, about 50% of their "dimensions" will have many-to-many relationships, which are not supported "out of the box" in Tabular. What is your comment?

Thanks

May 24, 2013 12:33 AM
 

Marco Russo (SQLBI) said:

Hi Sam,

as always, it depends. There are many things to consider and this is part of my daily consultancy job (http://www.sqlbi.com/consulting) :-)

Generally speaking, many-to-many relationships are usually a reason to choose Tabular over Multidimensional because of better performance - but if you have many measures (e.g. 100 measures), then the effort involved in applying the M2M pattern to every measure might be too much in terms of development cost (especially for maintenance).
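
Just to give an idea of the effort, here is a minimal sketch of the pattern (with hypothetical table names: a Transactions fact table related to Account, and an AccountCustomer bridge table relating Account to Customer); every measure has to be rewritten along these lines:

-- Transfer the filter on Customer to Account through the bridge table,
-- then compute the base aggregation over the filtered fact table.
Amount M2M :=
CALCULATE (
    SUM ( Transactions[Amount] ),
    SUMMARIZE ( AccountCustomer, Account[AccountKey] )
)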

But, again, it depends...

Marco

May 24, 2013 1:45 PM
 

Eduard said:

Hello,

I'm in the analysis phase of a new BI project.

Still deciding which tool to use (MS or not).

I had been playing with PowerPivot recently and I love the possibility for business users to "link" Excel sheets to the data model, so they can create custom bands and groups and modify their values within the same Excel file. That's cool. But on the other side I would like to have per-user security in which, depending on the user, some data is shown and some is not. For that, and to provide a similar user experience (creating pivots, charts and slicers), I would choose an SSAS Tabular model. But then I lose the per-user "link to Excel sheet".

Is there a possibility to have both things?

Something like a link to the "SSAS tabular model", so that afterwards a user can privately enrich the data model?

Thanks

June 28, 2013 4:04 AM
 

Eduard said:

Oops...

It seems I'm not the only one asking that...

http://sqlblog.com/blogs/marco_russo/archive/2013/06/19/common-request-export-tabular-model-and-data-to-powerpivot.aspx

Any workaround to my particular problem?

Were you able to provide a security filter in PowerPivot? Or a link-to-Excel experience in an SSAS tabular model?

June 28, 2013 4:39 AM
 

Marco Russo (SQLBI) said:

Eduard,

as you have read, this is still a missing feature.

You cannot have security filters in PowerPivot; all the data in the workbook can be seen by any user who has the workbook.

However, you can navigate a Tabular Model by using a PivotTable - what do you mean by "link to Excel experience in SSAS Tabular Model"?

June 28, 2013 12:04 PM
 

Eduard said:

Sorry for my English.

I was trying to use the analogy between "import from Excel" and "link to Excel" in PowerPivot.

In reality I was asking for SSAS tabular model inheritance in PowerPivot.

AFAIK I can connect to an existing SSAS Tabular Model within PowerPivot, but then I need to manually create the MDX queries to recreate the structure (tables, relationships, etc...).

If IT adds a new relationship, table or column in the SSAS tabular model, IT(*) needs to manually update all the PowerPivot MDX queries in all the business users' Excel files.

(*) Connecting to an SSAS tabular model from PowerPivot and generating MDX queries is not very business-user friendly IMO.

I see the PowerPivot connectivity to the SSAS tabular model as "metadata import" and "data link". I would like to have a "metadata link" too.

The SSAS tabular model would be the mandatory and read-only starting point for the private business user PowerPivot model.

July 2, 2013 4:49 AM
 

Eduard said:

As a workaround I would investigate the possibility of creating a PowerPivot workbook that connects to the SQL data warehouse using Windows authentication, and setting row-level security in the SQL Server DWH (users would only be able to select from custom filtered views).

Before deploying the PowerPivot Excel file I would need to delete its data. Maybe by refreshing the PowerPivot model using a Windows user that cannot see data (within my custom filter).

I lose central management, but from a user perspective I think it is easier to work directly against the DWH and the PowerPivot model, as long as a working example is given: a link to an Excel table, some measures, some KPIs, some calculated columns, ...

When a version 2.0 comes (IT creating a new PowerPivot Excel file), business users will lose their changes unless they provide them to IT, so that IT is able to consolidate them in the new version.

Do you think it could work in the real world?

July 2, 2013 5:27 AM
 

Marco Russo (SQLBI) said:

Eduard,

I can say that this scenario is well known to the Microsoft developers; we already illustrated this need in 2010, shortly after the first beta version of PowerPivot was made available. Unfortunately, the upcoming new features are related to Power BI, and I'm worried we will have to wait some time before having a feature related to the problem of merging metadata between models.

Please note that you can use BISM Normalizer in order to compare two data models (just import PowerPivot to SSAS by using SSMS in order to compare two PowerPivot workbooks' schema):

http://visualstudiogallery.msdn.microsoft.com/5be8704f-3412-4048-bfb9-01a78f475c64

Marco

July 17, 2013 6:09 PM
 

Zach said:

Marco,

I'm an infrastructure guy and we are attempting to put a SQL Server 2012 clustered environment in VMware ESX 5.1.  The infrastructure would appear to be solid, but queries that take 3 seconds in SQL Server 2008 on a physical box are taking 7 seconds on a SQL Server 2012 virtual box.  Would you expect this delay when going virtual and using Tabular mode?  We have done as much testing as we can to verify it is fully loaded into memory; it just sits and churns for 7 seconds before the output appears.  Have you had any experience with virtualizing a Tabular environment? Any suggestions would be greatly appreciated!

Thanks,

Zach

July 29, 2013 4:49 PM
 

Ron Cash said:

Hi Marco,

I am using SSAS 2012 Tabular and would like to filter a table based on a parameter passed to the WHERE clause, which needs to be based on the user's permissions.  And before spending hours of research time, I would like to find out if it is even possible.

So in all my tables there is a TMNL_KEY column, which stands for Terminal Key and indicates which Terminal that row of data is for.  There are, say, Terminal 5, 6, 7 values in the TMNL_KEY column.

So I only want Terminal 5 data to be seen by users that work at Terminal 5.  So based on their logon, I was thinking of somehow passing the value of 5 to the table query when users from Terminal 5 log on; there are several users in Terminal 5.

I thought of doing this in the query for the table, WHERE TMNL_KEY = ISNULL(@Terminal, TMNL_KEY), so that when they log on from Terminal 5 their permissions pass the value '5' (varchar) to that @Terminal parameter and they only see the results filtered for Terminal '5'.

Is this even possible and, of course, what are the best options for this - are there several?  The company does use Active Directory.

September 4, 2014 2:32 PM
 

Marco Russo (SQLBI) said:

Ron,

you should review how security works in Tabular.

The data are read only once from SQL Server (or other sources). Security is local to SSAS and can only filter rows, depending on the user connected.
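
For example, a typical dynamic row filter in a Tabular role looks like the following sketch. It assumes a hypothetical Permissions table imported into the model that maps Windows logins to TMNL_KEY values; the expression is set as the row filter of the (hypothetical) Terminal table, and the filter then propagates to the related fact tables:

-- Keep only the Terminal rows whose TMNL_KEY is mapped to the connected user.
= CONTAINS (
    Permissions,
    Permissions[LoginName], USERNAME (),
    Permissions[TMNL_KEY], Terminal[TMNL_KEY]
)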

Our SSAS Tabular book also contains an entire chapter about security, if you are interested:

http://www.sqlbi.com/books/microsoft-sql-server-2012-analysis-services-the-bism-tabular-model/

September 5, 2014 7:20 PM
 

Senthil said:

Hi Marco,

Is there a way to use the MDX query designer that comes with PowerPivot in a WinForms C# application? i.e. call the API from C# code or whatever?

Since PowerPivot is not supported in Excel 2003 or 2007, I just wanted to replicate the feature in those versions using C# WinForms.

Regards,

Senthil

November 17, 2014 7:56 AM
 

Marco Russo (SQLBI) said:

It is not supported - the same editor is part of Reporting Services and Report Builder, but it is not available as an API as far as I know.

November 17, 2014 7:59 AM


About Marco Russo (SQLBI)

Marco Russo is a consultant, writer and trainer specialized in Business Intelligence with Microsoft technologies. He runs the SQLBI.COM website, which is dedicated to distributing resources useful for BI developers, such as Integration Services components, Analysis Services models, tools, technical information and so on. Marco is certified as MCT, MCDBA, MCSD.NET, MCSA, MCSE+I.
