
SQLBI - Marco Russo

SQLBI is a blog dedicated to building Business Intelligence solutions with SQL Server.
You can follow me on Twitter: @marcorus

  • Calculating First and Second Year of Sales by Customer in #dax #powerpivot #ssas #tabular

    I recently wrote an article that describes how to calculate a measure (such as sales, but it could be anything else) considering, for each customer, the first 12 months of activity, then months 13-24, and so on. In practice, every customer has a different range of dates to consider, starting on the date of their first order.

    Yearly Historical Sales by Year

    In the past I implemented similar calculations in Analysis Services Multidimensional (MOLAP) by adding a special dimension to the fact table, populated by the ETL, because otherwise performance would have been very bad, not to mention the complexity of the MDX involved. In Power Pivot and Tabular, however, it is much simpler, and even though you can perform the calculation in a completely dynamic way, from a performance point of view it is better to rely on a calculated column that persists the date of the first order for each customer. If you are curious about the fully dynamic approach, take a look at the DAX query below: it takes 2 seconds when filtering only the Management occupation, but 13 seconds if you remove the Occupation filter entirely. The solution described in the article (which also includes sample Excel workbooks) simply moves the FirstOrder calculation into a calculated column, so that the FILTER iterates only the FirstOrder column and performance is much better, with response times almost always in the 1-2 second range. A sketch of the calculated-column alternative follows the query.

    DEFINE
        MEASURE Customer[FirstOrder] =
            CALCULATE (
                MIN ( 'Internet Sales'[Order Date] ),
                ALL ( 'Date' )
            )
        MEASURE Customer[SalesFirstYear] =
            SUMX (
                FILTER (
                    Customer,
                    CONTAINS ( 'Date', 'Date'[Date], [FirstOrder] )
                ),
                CALCULATE (
                    SUM ( 'Internet Sales'[Sales Amount] ),
                    DATESINPERIOD ( 'Date'[Date], [FirstOrder], 12, MONTH )
                )
            )
        MEASURE Customer[SalesSecondYear] =
            SUMX (
                FILTER (
                    Customer,
                    CONTAINS ( 'Date', 'Date'[Date], [FirstOrder] )
                ),
                CALCULATE (
                    SUM ( 'Internet Sales'[Sales Amount] ),
                    DATEADD (
                        DATESINPERIOD ( 'Date'[Date], [FirstOrder], 12, MONTH ),
                        1,
                        YEAR
                    )
                )
            )
        MEASURE Customer[SalesThirdYear] =
            SUMX (
                FILTER (
                    Customer,
                    CONTAINS ( 'Date', 'Date'[Date], [FirstOrder] )
                ),
                CALCULATE (
                    SUM ( 'Internet Sales'[Sales Amount] ),
                    DATEADD (
                        DATESINPERIOD ( 'Date'[Date], [FirstOrder], 12, MONTH ),
                        2,
                        YEAR
                    )
                )
            )
    EVALUATE
    CALCULATETABLE (
        ADDCOLUMNS (
            SUMMARIZE (
                'Internet Sales',
                'Date'[Calendar Year]
            ),
            "First Order", [FirstOrder],
            "Sales 1st", [SalesFirstYear],
            "Sales 2nd", [SalesSecondYear],
            "Sales 3rd", [SalesThirdYear]
        ),
        Customer[Occupation] = "Management"
    )
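
    The optimization described in the article can be sketched as follows. This is my illustration rather than the article's exact code, and the FirstOrderDate column name is an assumption: the FirstOrder measure becomes a calculated column in the Customer table, where context transition makes CALCULATE compute the first order date once per customer row and persist it in memory.

        Customer[FirstOrderDate] =
            CALCULATE ( MIN ( 'Internet Sales'[Order Date] ) )

    Each measure can then compare against the persisted column instead of recomputing the MIN for every customer at query time, which is what brings response times down to the 1-2 second range.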

     

  • Time Intelligence Video Tutorial on Project Botticelli #dax #powerpivot #ssas

    Alberto Ferrari recently recorded a free short video introducing time intelligence.

    When you have to perform time-related calculations in Power Pivot or in Analysis Services Tabular, you have to rely on DAX expressions. There are specific functions grouped in a “Time Intelligence” section of the online documentation, but it’s a good idea to learn a few basic concepts first, since these functions require a Date table in the data model and all of them simply apply more complex filters to a CALCULATE expression. Knowing that, you will avoid a few pitfalls, like the one involved in performing time intelligence calculations within a row context, and you will be able to perform week-based calculations (which are not supported by the native time intelligence functions in DAX).
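
    To illustrate that concept with a minimal sketch (my example, not code from the video; the Adventure Works-style names match the other examples on this blog), a year-to-date measure written with a time intelligence function is just shorthand for a CALCULATE with a date filter:

        Sales YTD := TOTALYTD ( SUM ( 'Internet Sales'[Sales Amount] ), 'Date'[Date] )

        -- Equivalent formulation: DATESYTD returns the set of dates
        -- that CALCULATE applies as a filter over the Date table
        Sales YTD :=
            CALCULATE (
                SUM ( 'Internet Sales'[Sales Amount] ),
                DATESYTD ( 'Date'[Date] )
            )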

    If you want to learn these concepts from the ground up, with a longer and more detailed description, you can read either the SSAS Tabular or the Power Pivot book. Or you can set aside one hour of your time and watch the full Time Intelligence video, where Alberto explains in detail how to use these functions. You can access the entire video library available on Project Botticelli by buying a monthly subscription (you can also register for just one month – no long-term commitment is required). If you are interested, a special 15% discount is available until the end of September 2013 using the following discount code: SQLBI2013SEP. Otherwise, a 10% discount is available until the end of 2013: SQLBI2013.

  • Microsoft shut down MCM/MCSM/MCA programs–my recap #msmcm

    A couple of weeks ago, I wrote a blog post about an MCSM for BI certification. This raised an interesting discussion about the reasons why such a certification does not exist, whether it was really necessary, what it should include, and so on. At the time there was no sign of the decision Microsoft communicated on Friday evening to the existing and candidate MCM/MCSM/MCA community. The email transcript has been reported by Neil Johnson, a Microsoft employee certified as MCSM: Messaging.

    You can imagine the community reaction, and I will not repeat the many points expressed in the posts and comments I list below. I disagree with this decision, but I understand Microsoft’s need to shut down the program for economic reasons. I don’t like it, and I don’t think it’s a smart move for Microsoft either – but I won’t add to the points already made by many people. My point is about communication and timing.

    A few days after the news, I still cannot understand:

    • Why this mail was sent to MCM/MCSM/MCA holders on a Friday night, before a long weekend in the US
    • Why such short notice (exams will be retired on October 1, 2013, in just one month)
    • Why there is no communication about future plans
    • Why the MCA and MCSM web pages are still active without any update
    • Why this decision comes after other changes were already applied to the MCSM program effective July 1, 2013
    • Why Microsoft didn’t put out an official statement (but this is a consequence of sending an email on a Friday night before an extended weekend)

    I really would like an answer to these open questions, because I cannot believe that the consequences of such a communication plan weren’t discussed in advance. But, again, this time I cannot understand why. The obvious alternative was a soft landing for the MCM/MCSM/MCA program: for example, no new admissions and 6 months to complete certifications already in progress; publishing a roadmap towards new certifications and how existing ones would be automatically remapped onto the new ones; a public statement and an immediate update of the web pages for a coordinated communication that would still have hurt someone, but would not have destroyed confidence and trust in a much bigger community than the one directly affected by this decision. Why wasn’t this possible? I would like to know the answer.

    From a PR point of view, I cannot imagine a worse way to communicate the bad news. Sending an email on a Friday night, leaving bloggers, journalists, analysts, influencers, and the community in general 4 entire days to discuss, assign blame, and write up bad feelings, without any official answer from Microsoft, will be hard to recover from. When on Tuesday morning the Microsoft people in Redmond start reading the thunderstorm of the past few days, they will realize that in many areas of the world outside the US, 4 days elapsed: 2 in a regular weekend and 2 whole working days. They are lucky that the Microsoft-Nokia deal will get everyone’s attention, so the MCM news is not going to spread too widely, but the damage done to community trust will remain.

    Here is a list of links published on this topic so far:

    As I said, at this moment I’m not concerned about the future of Microsoft certifications; I will judge that when there is a roadmap and some new announcements. I am concerned about the destruction of trust and loyalty in a large community and ecosystem that helps Microsoft sell its architecture. My perception is that the damage this communication is producing goes way beyond the boundary of the MCM/MCSM/MCA community. Maybe I have a misperception about this entire topic and I am just worried about a self-referential issue that does not affect the real world.

    I really hope I am wrong this time.

  • Dynamic Calculation of Inventory with #dax in #powerpivot and #ssas #tabular

    I recently wrote an article on SQLBI describing the dynamic calculation of stock inventory in DAX. With this technique, you can avoid creating a snapshot fact table just to store the inventory stock for every day and every product – a table that in certain businesses is huge and expensive, especially if you have to load it in memory.
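
    The core idea can be sketched like this (my illustration, not the article’s exact code; a Movements table with a signed Quantity column is an assumption here): the stock at any given date is the running total of all movements up to that date, computed dynamically instead of being read from a snapshot.

        Units In Stock :=
            CALCULATE (
                SUM ( Movements[Quantity] ),    -- signed: positive = in, negative = out
                FILTER (
                    ALL ( 'Date' ),
                    'Date'[Date] <= MAX ( 'Date'[Date] )
                )
            )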

    I also performed some performance comparisons between the two approaches. I have to say that I would not suggest one option over the other unless I could run some tests on a real set of data. The problem is that the real convenience depends on data distribution and density. Moreover, performance differences also depend on the query: I have found that the traditional snapshot-based approach is faster when performance is good overall (queries below one second), but becomes slower for queries returning large result sets. I still don’t have a clear explanation for these differences, so if you have any feedback based on experience with different data sets, you are more than welcome to share it here!

    You can read the article and download the sample workbooks (for both Excel 2010 and Excel 2013) by following this link.

  • SSAS Maestro and MCM:BI, MCA:BI, MCSM:BI – make your voice heard #ssas #mcsm #sqlserver

    Two years ago Microsoft started a certification program called SSAS Maestro, which was intended as a step towards a broader certification such as Microsoft Certified Architect (MCA) or Microsoft Certified Solutions Master (MCSM) for Business Intelligence (hence the names MCA:BI and MCSM:BI). Analysis Services is not the only part of the skillset in such a solution, but it’s one of the more complex parts of the stack. I think that an MCA:BI certification should also include SSIS, SSRS, data warehouse data modeling, PerformancePoint and, in the near future, Power BI. But SSAS is a key part, and the SSAS Maestro certification has a level of difficulty comparable with the requirements for MCA. An MCSM:BI should have a strong commitment to Analysis Services too, and it could benefit from much of the existing SSAS Maestro training, even if it would not include the case study preparation.

    Unfortunately, Microsoft hasn’t pushed MCA/MCSM:BI so far, and the only certification available is SSAS Maestro, which is also not valid as part of the credentials for a Microsoft Partner. It’s not a good situation for all those people who invested time in passing the exam (which required writing a case study that often took days of work). And it gives companies no incentive to invest in such training and certification for their employees, since nothing is returned to the company as a direct benefit.

    Others and I have already pinged Microsoft several times, at several levels, asking them to pay some attention to this. But, until today, nothing has happened. Microsoft is a big company, and it only cares about big numbers. I see the numbers of visitors to this blog and the SQLBI web site, of the books sold, of the questions I receive by e-mail. I can say I am not alone; the community around this is large, and for some reason I think its importance is underestimated in Redmond. Maybe we need to send a signal.

    So here is the call to action: please, write an email to advcert@microsoft.com and tell them you would be interested in Microsoft Certified Architect (MCA) and Microsoft Certified Solutions Master (MCSM) for Business Intelligence certifications. Make your voice heard!

  • Excel 2013 stand-alone now includes #powerpivot

    This time Microsoft listened to the many complaints received since the Excel 2013 release, and the stand-alone version of Excel now includes Power Pivot and Power View! This is very important news, because there were customers who didn’t upgrade from Excel 2010 to Excel 2013 simply because they would have lost the Power Pivot add-in!

    This news makes my previous posts written in February obsolete, and I am very happy about that!

    Microsoft updated this blog post describing the SKUs that include Power Pivot and Power View, and the list now includes the Excel stand-alone version that anyone can buy (for example, on Amazon). If you already have Excel 2013 and do not see Power Pivot active, check that you have this upgrade.

    As a side note, Power Pivot (formerly known as PowerPivot) now has an official name made of two words; it is no longer a single word. The reason is that Microsoft announced Power BI, which includes Power Pivot, Power View, Power Query (code-named Data Explorer), Power Map (code-named GeoFlow) and much more. For a long time, we will see this name written both ways, I think…

  • Performance Tuning of #tabular models in Analysis Services 2012 #ssas

    In the last few weeks the Microsoft BI buzzword has been Power BI for Office 365, which will be available soon in preview. I read a few comments about the fact that there is constant discussion about the future, but a lack of information and discussion about what is really used and required today in the real world. So I think it is good news that Microsoft released the first important white paper about performance in Tabular: Performance Tuning of Tabular Models in SQL Server 2012 Analysis Services.

    I suggest reading this white paper to all of you using Analysis Services Tabular, and also to Power Pivot modelers. If you already have our SSAS Tabular book, you will find more details in the white paper in many areas. And remember the repository we have for DAX query plan analysis on SQLBI: http://www.sqlbi.com/topics/query-plans/ – we will hopefully add more content in the coming months, after the DAX Query Plans white paper written by Alberto Ferrari.

    Performance optimization in Analysis Services Tabular is a brand new topic: it’s different from the past, it’s not like SQL and it’s not like MDX. We still don’t have the right tools, or the proper knowledge and experience written down in books, but I think we’re on the right track to fill the gap.

  • Understanding #DAX Query Plans for #powerpivot and #tabular

    Alberto Ferrari wrote a very interesting white paper about DAX query plans. We published it on a page where we'll gather articles and tools about DAX query plans: http://www.sqlbi.com/topics/query-plans/

    I reviewed the paper, which is the result of many months of study – we know we have just scratched the surface of this topic, partly because we still don’t have enough information about the internal behavior of many of the operators contained in a query plan. However, by reading the paper you will be able to start reading a query plan, and you will understand how the optimization Chris Webb found one month ago for the events-in-progress scenario works (the naive formulation of that scenario is sketched below). The white paper also contains a more optimized query (10 times faster), even if performance depends on data distribution and the best choice really depends on the data you have. Now you should be curious enough to read the paper to the end, because the more optimized query is the last example in the paper!
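
    For context, here is the classic events-in-progress measure in its naive, slow formulation – my sketch with Adventure Works-style names, not the code from the paper. It counts the orders placed but not yet shipped as of the last visible date:

        Open Orders :=
            CALCULATE (
                COUNTROWS ( 'Internet Sales' ),
                FILTER (
                    ALL ( 'Internet Sales' ),
                    'Internet Sales'[Order Date] <= MAX ( 'Date'[Date] )
                        && 'Internet Sales'[Ship Date] > MAX ( 'Date'[Date] )
                )
            )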

  • Watch #TechEd sessions online for free #powerpivot #dax

    The sessions recorded at TechEd 2013 are online. Alberto Ferrari and I delivered the same sessions at both conferences, and the recordings are available for both TechEd North America and TechEd Europe.

    It’s nice for me to be able to compare two deliveries of the same session. In theory, the second delivery (Europe) should have improved based on feedback from the previous conference. If anyone watches the two recordings, I will be very happy to get any further feedback!

    Here are the two sessions, with links to the recordings:

    • Microsoft PowerPivot Excel 2013 in Action by Alberto Ferrari
      Microsoft Excel 2013 and Business Intelligence: a marriage made in heaven. PowerPivot and Power View, the two most exciting BI tools in the Microsoft stack, are now integrated into Excel 2013, providing the perfect environment to build self-service BI solutions. In this session, see how to create a business solution from scratch: loading tables into PowerPivot, enriching them with new data, building complex formulas using the DAX language, leveraging the materialization of DAX queries in Excel, and finally building dashboards and reports using the new PivotTable features and Power View. The goal of the session is to provide a clear understanding of the power of BI in Excel 2013 and raise your curiosity about the DAX language and the Tabular architecture.
      Recording at TechEd North America
      Recording at TechEd Europe

     

    • Querying in DAX by Marco Russo
      DAX is not only an expression language but also a query language and, when it comes to performance, the xVelocity in-memory engine is second to none. Scanning fact tables and performing leaf-level computation happens in a matter of milliseconds. Nevertheless, as with any other language, you can write good DAX or bad DAX, depending on your understanding of the engine internals. This session introduces DAX as a query language, showing the different ways of querying with DAX using real-world data. Some queries will be fast, others will need optimizations. Many practical examples based on common patterns and an analysis of the query plans show how to get the best out of DAX.
      Recording at TechEd North America
      Recording at TechEd Europe

    And if anyone is interested in attending the SSAS Tabular Workshop online on July 22-23, hurry up! The early-bird rate expires soon!

  • Common request: export #Tabular model and data to #PowerPivot

    I have received this request in many courses, messages and forum discussions: given an Analysis Services Tabular model, it would be nice to be able to extract a corresponding PowerPivot data model. In order of priority, here are the specific features people (including me) would like to see:

    • Create an empty PowerPivot workbook with the same data model as a Tabular model
    • Change the connections of the tables in the PowerPivot workbook so that they extract data from the Tabular data model
      • Every table should have an EVALUATE ‘TableName’ query in DAX (see the sketch after this list)
    • Apply a filter to the data extracted from every table
      • For example, you might want to extract all data for a single country, year or customer group
      • Using the same filter technique used for role-based security would be nice
    • Expose an API to automate the process of creating a PowerPivot workbook
      • Use case: prepare one workbook for every employee containing only their data, which they can use offline
      • A common request from salespeople who want a mini-BI tool to use in front of a customer/lead/supplier, regardless of whether a connection is available
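
    As a sketch of what the per-table extraction queries mentioned above could look like in DAX (the Customer table and its [Country] column are placeholders of mine, not a confirmed design):

        -- Full extraction of one table
        EVALUATE 'Customer'

        -- The same extraction with a filter applied
        EVALUATE
        CALCULATETABLE (
            'Customer',
            'Customer'[Country] = "Italy"
        )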

    This feature would increase the adoption of PowerPivot and Tabular (and, therefore, of Business Intelligence licenses instead of Standard), and would probably raise the sales of Office 2013 / Office 365 driven by ISVs, which are the companies that request this feature the most. If Microsoft did this, it would be acceptable for it to work only with Office 2013. But if a third party does it, it will make sense (for their revenue) to cover both Excel 2010 and Excel 2013.

    Another important reason for this feature is that the “Offline cube” feature available in Excel is not supported when your PivotTable is connected to a Tabular model; it can only be used when you connect to Analysis Services Multidimensional.

    If you think this is an important feature, you can vote for this Connect item.

  • Campaign for #PowerPivot and #Tabular textual modeling language #msbi #ssas

    If you think PowerPivot and Tabular are good tools but sometimes the design tools are not fast/complete/flexible enough, please consider voting for this suggestion on Connect, requesting a textual representation of Tabular and PowerPivot data models.

    This is the simplest way to improve the productivity of PowerPivot and Tabular users/analysts/designers.

    I would like to have a simple textual representation of a BISM model that could be translated back and forth to a XAML representation of the data model (this could then be integrated within Visual Studio, but other development tools might use this language as well).
    I’d like to write (for Tabular) something like:
     
    TABLE Customers (
        CustomerKey INTEGER,
        Name STRING,
        Weight FLOAT
    )
     
    TABLE Sales (
        CustomerKey INTEGER ref Customers[CustomerKey],
        Quantity INTEGER,
        Amount CURRENCY
    )
     
    This syntax would keep the functional style, even though I would prefer a more declarative language, such as:
     
    TABLE Customers
        CustomerKey INTEGER
        Name STRING
        Weight FLOAT
    END TABLE
     
    TABLE Sales
        CustomerKey INTEGER ref Customers[CustomerKey]
        Quantity INTEGER
        Amount CURRENCY
    END TABLE
     
    We can discuss the syntax, but we all know this can be done, especially for Tabular, which has a smaller number of properties to handle compared to Multidimensional (which could benefit from the same approach, too).
    Implementing tools that translate models to/from this language would be cheaper than writing/improving advanced graphical editors, and there are many other features you get for free once you have a text file:

    • Source control
    • Revision history
    • Version comparison
    • Team development

    This is the single feature for Tabular/PowerPivot with the best ROI that I can imagine.
    And if Microsoft does not have time to implement it… well, at least they could publish a document such as “Tabular Language Draft 1.0” and somebody in the community would implement the necessary translation tools. Having an "official" language is important, because that way nobody would spend time on something that could change too much in the future.
     
    Are you convinced? Nice!
    So please, help me in this campaign for a PowerPivot and Tabular textual modeling language.
    This is the call to action:

    1. Vote for this item on Connect
    2. Spread the word as loud as you can!

    Don't wait, do it now!

  • Not selected as a speaker at PASS Summit 2013 #sqlpass #sqlfamily

    I received the communication from the PASS Program Committee that none of my proposals were accepted for PASS Summit 2013. Probably this year there were too many proposals, and I made the mistake of not using all the submission slots available per speaker. In fact, I proposed just one pre-conference day and two regular sessions:

    • From 0 to DAX (preconference)
      DAX is the new language for creating Business Intelligence in the Microsoft stack. You can use it to query and program both Microsoft PowerPivot for Excel 2013 and the new SQL Server Analysis Services Tabular model. Learning the basics of DAX is very simple, but mastering it is a different story, because it requires a mindset that takes some study, both for BI professionals and information workers.
      During this one-day workshop, we will guide you through the process of learning the DAX language, with many examples and scenarios that help you understand the important concepts of DAX and how to leverage them in your everyday work.
      The seminar is useful to many profiles: BI developers who want to learn the DAX language, and information workers interested in exploiting the many interesting features of PowerPivot for Excel 2013.
    • Optimizing Data Models in Tabular & PowerPivot (regular session)
      Is your new Tabular solution performing at its best? Are you using the best practices to reduce memory footprint, increase query speed and get the best out of the new engine? In this session, we will look at several techniques that can really make a difference in a Tabular solution. Distinct count reduction, join optimizations, condition consolidation, the pros and cons of normalized data models, and the selection of the columns to store in the database are some of the topics covered in this session, which highlights not only the best practices, but also practices that were best in Multidimensional and became worst in Tabular. As we often teach: if you know Multidimensional, you need to forget it in order to let the new concepts of Tabular shape your model.
    • Time Intelligence with DAX (regular session)
      Time intelligence is probably the most used feature of any analytical solution. Computing Year To Date, Month To Date and Same Period Previous Year is quite easy in DAX. However, the DAX formulas get harder to write as soon as the requests for time intelligence involve working-day evaluation, complex and custom calendar shapes, seasonal pattern recognition, ISO-based week handling, or 4-4-5 fiscal calendars.
      In this session, we start by showing how to compute the classical time intelligence calculations using the built-in DAX functions. Then, we will see some more complex time-intelligence formulas that require thinking outside the box, using advanced data modeling and querying techniques to produce interesting and useful formulas.

    I have been informed that a new selection process took place this year. Abstract reviews and speaker reviews were made by two separate teams, and a final judgment was made by combining the two ratings. In particular, the abstracts team didn’t know the speaker’s name for each session. It seems a smart way to handle 850 submissions in a transparent way. Unfortunately, I wasn’t aware of this new process.

    In past years, Alberto Ferrari and I had too many sessions selected, so the board decided to assign us only a few of them, and the selected ones weren’t always the best ones (because it wasn’t possible for us to establish an order of preference in case multiple submissions were selected). For this reason, this year we limited our submissions to the sessions most likely to be interesting, considering the current demand in the market, the questions we usually receive, and the need to avoid presenting something already presented the previous year. So we made a small selection of sessions, avoiding submitting the maximum number of sessions per speaker. But this was the wrong choice, because under the new evaluation system it would certainly have been better to propose more sessions, and not only in the areas in which we are most specialized.

    I am not saying that my proposed sessions were better than the ones selected. I have seen the proposals, and there are very interesting sessions among them (I still don’t know which ones have been selected at the moment of writing). I would have just proposed at least two more regular sessions, more advanced and focused on specific use cases. Being part of a board selecting sessions for other conferences (like this one), I know that every selection process has drawbacks, and the best thing you can do is to communicate very clearly how the selection process will work, so that everybody is able to help the board by producing the best proposals. I thought that proposing a smaller number of sessions would have been better, but I was wrong.

    Well, I have been a speaker at the PASS Summit for the last 4 years in a row (2009, 2010, 2011, 2012), so it’s probably time to take some rest from speaking! :)

  • How Far Can You Push Tabular? Birds of a Feather at TechEd 2013–North America #msteched #ssas #tabular

    Alberto Ferrari and I will moderate the following Birds of a Feather session at TechEd North America 2013:

    CODE: BOF-ITP21
    TITLE: How Far Can You Push Tabular?
    SPEAKER: A. Ferrari; M. Russo
    TIMESLOT: June 5, 2013 at 1:30 pm
    ROOM: 263

    Abstract: Tabular is the new engine in SQL Server Analysis Services. It is an in-memory columnar database capable of unprecedented performance, but… how far can we go with an in-memory database? Will you be able to load your data warehouse in memory to obtain the best performance? What should you watch out for when building the DWH? What does “fast” mean? What about real-time data? Can you query many billions of rows really fast? In this BOF we will show some theory and share some of our experience, but we welcome your questions and experience, building a round table of experts to discuss this fascinating topic.

    I really look forward to this event – Alberto and I have some experience using Tabular in many different scenarios, but every time we meet new people at conferences we discover that there are new challenges ahead. This session will be the place to discuss together where the limits are today. Much farther out than many people realize, in my opinion!

  • From 0 to DAX at TechEd Pre-Conference Seminar #dax #msteched #tee13

    In June, Alberto and I will deliver a pre-conference seminar at both TechEd North America (New Orleans, LA) and TechEd Europe (Madrid, Spain).

    This day is a very good quickstart for those of you who haven’t yet finished one of our books, or who missed one of our workshops about Tabular or PowerPivot. If you are planning to go to TechEd, you might consider spending a full day on DAX by attending the From 0 to DAX one-day seminar. Here are the links:

    • TechEd North America – From 0 to DAX Pre-Conference Seminar (New Orleans, LA - June 2, 2013)
    • TechEd Europe – From 0 to DAX Pre-Conference Seminar (Madrid, Spain - June 24, 2013)

    And in case you are underestimating the importance of DAX in your future BI projects... read this blog post from Dandy Weyn – his privileged point of view inside Microsoft highlights how important DAX is today and how pervasive it will be in the future!

  • Group Sales by Age of Customers #dax #powerpivot #tabular

    I published an article describing how to group sales transactions by the age of the customer at the moment of the transaction, using PowerPivot or Analysis Services. The same pattern can be used for any kind of banding operation; this specific case is also useful for recycling the formula that computes the exact age of the customer for each transaction.

    An interesting point relates to performance optimization. The technique is based on adding a calculated column to a table that might contain millions of transactions. This is less expensive than adding a column containing a foreign key plus a relationship between a table of group definitions and the transactions table. Every relationship is expensive and generates additional structures (you can see the extra files in the Analysis Services database, too). Adding one or two columns that have a low number of distinct values (10-15) usually has a lower memory cost than creating a relationship with another table. The article also contains PowerPivot examples for both Excel 2010 and Excel 2013. A sketch of the two calculated columns follows.
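
    This is a minimal sketch of the pattern (my illustration, not the article’s exact code; the column names and band boundaries are assumptions): two calculated columns on the fact table compute the age at the moment of the sale and the band it falls into.

        'Internet Sales'[Customer Age] =
            INT ( YEARFRAC ( RELATED ( Customer[Birth Date] ), 'Internet Sales'[Order Date] ) )

        'Internet Sales'[Age Group] =
            SWITCH (
                TRUE (),
                'Internet Sales'[Customer Age] <= 25, "25 and under",
                'Internet Sales'[Customer Age] <= 35, "26-35",
                'Internet Sales'[Customer Age] <= 50, "36-50",
                "Over 50"
            )

    Because the Age Group column has only a handful of distinct values, it compresses very well in the column store, which is why it is cheaper than a relationship to a separate banding table.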

    If only I could decouple attribute visualization from physical structure, I would put these “degenerate dimensions” in a separate folder; as it stands, such attributes appear among the attributes belonging to the fact table, which might not be so clear when presenting data. However, I understand that such a decoupling could make life very hard for DAX clients (although for MDX it would probably not be a big issue).
