
SQLBI - Marco Russo

SQLBI is a blog dedicated to building Business Intelligence solutions with SQL Server.
You can follow me on Twitter: @marcorus

  • Hardware and Virtualization Settings for Analysis Services Tabular #ssas #tabular

    If you use Analysis Services Tabular, you should dedicate a good amount of time to hardware selection. Unfortunately, throwing money at an expensive hardware configuration can be a very bad idea, resulting in your $1,500 desktop running faster than your expensive $100,000 server. Moreover, if you use virtualization, you have to be very careful with certain settings that can affect performance badly. When I say this, I mean queries running 2-3 times slower than in optimal conditions. So why spend time chasing a 10% gain when you have a bigger issue to solve?

    I described the main best practices in the article Optimize Hardware Settings for Analysis Services Tabular on SQLBI. It is the result of helping many companies detect hardware bottlenecks and plan the right hardware configuration. My experience says that the time you spend correctly allocating the budget has a huge return on investment. Usually you cannot change the CPU or the RAM of a brand new server, so this step is critical. The next step is to check that the hardware configuration is correct. It’s incredible how many times I discovered that the BIOS settings of an expensive server were the reason for slow performance, so checking them is now my first priority when I see a benchmark with suspicious numbers (compared to the tech specs of the CPU).

    Now, a common objection I hear is that “we have a standardized hardware and virtualization platform”. I completely understand that, but I like to remind people that the goal is a better return on investment, and the ultimate goal of standardization is to reduce costs. So we start by evaluating the cost of a solution that is compliant with the standards but allocates different hardware to a specific workload. The result is spending less (in hardware and licenses) and getting more (performance).

    I’d like to hear your stories about that – write your experience in the comments!

  • Challenge your #dax skills with DAX Puzzle #powerbi #powerpivot

    Last week we launched a new page on the SQLBI website: DAX Puzzle (you can also use

    The idea is very simple: we describe a scenario, we ask you to solve a problem in that scenario, and we might provide some hints to help you find the solution. You can download a sample file (for Power BI Desktop, which is freely available to anyone – but we might consider Excel too, so please provide your feedback) and spend some time finding the solution. When you are done, or if you are curious and don’t have enough time, you can access the page with the solution, read our description, and download the file with the solved problem.

    There are no prizes. It’s just a workout for your mind. But it’s a good way to check whether you still have something to learn in DAX. For every puzzle, we also provide links to the particular sections of our Definitive Guide to DAX that describe the related topic in more depth. Yes, I admit, it’s also a marketing initiative, but it’s fun if you like DAX!

    The first puzzle we published is about USERELATIONSHIP. We have already received several comments, and I suggest using the scenario page only for comments about the question, whereas the solution page is the right place to discuss alternative solutions. There are interesting conversations about the performance of different approaches, and I would like to point out that this first puzzle is not about performance. In fact, the fastest solution doesn’t use USERELATIONSHIP at all, and requires the new GROUPBY function. Now you have another reason to read not only the solution, but also all the comments!
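
    If you have never used this function, here is a minimal sketch of the USERELATIONSHIP pattern, assuming a hypothetical model with an inactive relationship between Sales[DeliveryDateKey] and 'Date'[DateKey] (these names are not taken from the puzzle):

    Delivered Amount :=
    CALCULATE (
        SUM ( Sales[SalesAmount] ),
        USERELATIONSHIP ( Sales[DeliveryDateKey], 'Date'[DateKey] )
    )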

    The next puzzle will be published in a few days… subscribe to our newsletter to be notified about new puzzles!

  • The Definitive Guide to DAX is now available! #dax #powerpivot #powerbi #ssas #tabular

    I am so happy to announce that The Definitive Guide to DAX is finally available!

    Alberto Ferrari and I spent one year writing this book, and several years collecting the knowledge necessary to do it. The complete title is The Definitive Guide to DAX: Business intelligence with Microsoft Excel, SQL Server Analysis Services, and Power BI. You can imagine why we like to shorten it! However, the complete title gives you an important hint: this book covers the new DAX syntax of Excel 2016, Power BI Desktop, and Analysis Services 2016. For example, we covered all the table functions useful for calculated tables, a feature released in Power BI Desktop after we had completed writing the book. This was an additional challenge, but our goal was to publish a book dedicated to the DAX language, independent of the product and completely up to date.

    But everything has a cost. It took us a huge amount of time to reach the depth and completeness we wanted in this book. And it will take you weeks, if not months, to read it cover to cover. Yes, I know, you no longer read technical books this way. You open the right chapter and get the content you need, copy the pattern, get the useful hint. I do that at least once a week. But you will be able to use this book in that way only once you have a solid understanding of DAX. At the beginning, my suggestion is to start from chapter 1, even if you are an experienced DAX developer.

    What if you are a DAX beginner? This book will be your guide, but you might consider a more introductory book to start with (you can find other books from us and from Rob Collie, depending on the product you use and the writing style you prefer). This is particularly important because we don’t spend a single line in the book discussing the user interface. We wrote a book about the DAX language, so you have to already know the UI of a product that uses this language. Today, the list includes Excel (2010/2013/2016), Analysis Services (2012/2014/2016), and Power BI Desktop.

    Why am I so excited about this book? After all, I have written many books (this should be the 10th in English, plus three more in Italian). Well, first of all, a few months after completing the writing, neither Alberto nor I would add or change anything in this book. As you will read in the introduction, we made no compromises. We thought the size would be 450-500 pages, but the result is 530 pages of content (plus indexes, table of contents, and so on). Is it the perfect book? No, I am pretty sure we will discover some errors and things to clarify and fix. It always happens. But we set the bar very high this time, and we are very satisfied with the final result. Only reviews will tell us whether our perception is right, but we know this is the best result possible today. We had technical reviewers who helped us immensely in getting the point of view of the reader, and I would like to mention the incredible job done by Gerhard Brueckl. Believe me, if you have written a technical book, your worst nightmare is the technical reviewer who reviews too much, so that you spend more time explaining why you were right instead of fixing the content. Well, Gerhard had the skill and the ability to highlight the right things. Thanks Gerhard, you deserve a public mention!

    After this self-celebration, let me spend a few paragraphs on the content. We use this book as companion content for our courses Mastering DAX and Optimizing DAX. During the courses we have hands-on labs and a lot of interaction, but we constantly refer to the book for more detailed information about specific functions and behaviors. Thus, if you attend these courses, you will find it easier to read the book. But you will not be able to skip it! Here is the table of contents, with some comments:

    • Foreword: three of the authors of the DAX language and the VertiPaq engine wrote the foreword of our book: Marius Dumitru, Cristian Petculescu, and Jeffrey Wang.
    • Introduction: read the introduction before buying the book. You will understand whether this is the book for you or not.
    • Chapter 1: What is DAX?
    • Chapter 2: Introducing DAX
    • Chapter 3: Using basic table functions
    • Chapter 4: Understanding evaluation contexts
    • Chapter 5: Understanding CALCULATE and CALCULATETABLE
    • Chapter 6: DAX examples
    • Chapter 7: Time intelligence calculations
    • Chapter 8: Statistical functions
    • Chapter 9: Advanced table functions
    • Chapter 10: Advanced evaluation context
    • Chapter 11: Handling hierarchies
    • Chapter 12: Advanced relationships
    • Chapter 13: The VertiPaq engine
    • Chapter 14: Optimizing data models
    • Chapter 15: Analyzing DAX query plans
    • Chapter 16: Optimizing DAX

    Topics in chapters 1 to 12 are covered in our Mastering DAX workshop. We organized the content so that you can read the chapters one after the other. The content is very dense; at the beginning we use simpler examples, but we never repeat the same concepts, so if you skip one chapter you might miss knowledge needed to fully understand the following topics. Even in chapter 6, which tries to consolidate the previous content with practical examples, you will find something new in terms of ways you can use DAX.

    Topics in chapters 13 to 16 are covered in our Optimizing DAX workshop. Please don’t jump to this part if you haven’t read the previous chapters. For attendees of the course, we suggest completing the self-assessment of prerequisites before attending, and you can do the same for the book. If you are not ready, you will simply see a huge amount of numbers without understanding how to connect the dots. You need a solid, deep knowledge of how evaluation context works in DAX before doing any optimization.

    My personal estimate is that if you dedicate one week to every chapter, you will be able to complete the learning in 4 months. Read the book, absorb the content, practice. You might be faster at the beginning if you already know DAX. But be careful: you have never read anywhere what we describe in chapter 10 (we rewrote that chapter 3 times… but that is another story), and it is of paramount importance to really “understand” DAX. You have probably never seen the complete description of all DAX table functions that is in chapter 9. You will not find extensive use of variables, but the VAR / RETURN syntax is described early in the book, and you will see it used more and more with the advent of Excel 2016 / Power BI Desktop / SSAS 2016.
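
    If you have never seen the variable syntax, this is a minimal sketch of a measure using VAR / RETURN; the Sales table and its columns are hypothetical names, not taken from the book:

    Margin % :=
    VAR TotalSales = SUM ( Sales[SalesAmount] )
    VAR TotalCost = SUM ( Sales[TotalCost] )
    RETURN
        DIVIDE ( TotalSales - TotalCost, TotalSales )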

    Finally, the goal of the book is not to give you patterns and best practices, but to teach you how DAX works, how to write good code, and how to measure performance, find the bottlenecks, and possibly optimize it. As I always say, do not trust any best practice when it comes to DAX optimization. Don’t trust blogs, articles, or books. Don’t trust my writings either. Simply measure and evaluate case by case. The reason is the first answer to any question a consultant receives: it depends!

    If you want to order the book on Amazon, here is a quick reference to the links for all the available versions:

    Happy reading!

  • A new hope. Tale from SQL Saturday 454 #sqlsat454

    This is one of the few non-technical posts of this blog. Just skip it if you want to quickly come back to 100% BI related topics.

    Last Saturday we ran SQL Saturday 454 in Turin. I was part of the organization, and actually one of the promoters of this event, held in the same city just a few months after SQL Saturday 400. The reason was an idea we had a few months ago: running a SQL Saturday very close to Milan, the city hosting Expo 2015 until October 31, 2015. In our plans, we should have been able to attract a large number of foreign attendees interested in combining a weekend in Italy, one day in Turin for SQL Saturday, and one day in Milan for Expo 2015. The initial target was more than double the attendees of a “regular” SQL Saturday in Italy, reaching 250 people and maybe even 300. After all, everyone was looking forward to visiting Expo 2015, right?

    Unfortunately, I was wrong.

    Part of my job is reading through the numbers. Just a few hours after opening a survey through our SQLBI newsletter and other social media, I realized that Expo 2015 was not the worldwide attraction we had initially assumed. Our ambitious goal was completely unreachable, and this was clear to me before anyone else accepted it. So we downsized the venue, but we wanted to run the best event we could. After all, it was still the SQL Saturday close to Expo 2015. And we kept the event in English. We asked all the speakers to deliver their sessions in English, regardless of the fact that 90% of the attendees would be Italian.

    Now, if you have never visited Italy, you might not be aware of the lack of English skills in the majority of the population. You might think that people working in IT have English skills in their CV by default. While this is true for reading technical documents, it is not entirely true for listening and speaking. From this point of view, the situation in Europe varies a lot between countries. Smaller countries have better English skills. My guess is that this is because their movies are not dubbed and many have just subtitles, whereas larger countries (Germany, France, Spain, and Italy) tend to distribute only dubbed versions of movies, keeping the original version for a limited number of cinemas in large cities. This fact alone makes a big difference in listening and speaking capabilities. I don’t have any study demonstrating this correlation; it’s just my experience as a frequent traveler.

    I wanted to write this disclaimer to describe another challenge we had for SQL Saturday 454. We were at risk of not having enough foreign attendees (a certainty for me) and of not having a good number of Italian attendees, who might be frightened by the fact that all the sessions would be in English. In the past, we had only a few sessions in English, but a complete conference in a foreign language without simultaneous translation was an unprecedented experiment. However, I was confident this would stop a few people, but not many of the interested attendees.

    At this point, you might be curious to know whether the event was a success or a failure. Well, in terms of numbers, we reached our predicted (downsized) target. It was slightly larger than the average event in Italy and, ignoring our initial unreachable dreams of glory, it was a success. But what impressed me was something unexpected.
    There are a number of IT professionals in Italy who can attend an event, follow all the sessions, engage the speakers, ask questions, and keep the conversation going without the language barrier I was used to seeing a few years ago. I was wrong again, but this time in a pleasant way.

    The economic turmoil of recent years has been very tough in this country. I have a privileged position and a particular point of view, clearly seeing the issues that limit the competitiveness of companies and professionals in the global market, especially in IT. The language barrier is one of the many issues I see. Lack of self-investment in education is another. And the list does not end here. I am an optimist by nature, but I am also realistic in any forecast. People around me know I don’t predict anything good for Italy in the short and medium term. However, even if I still don’t have data supporting it, I feel something has been changing.

    I have a new hope.

    There are a number of people willing to spend a sunny Saturday in Italy attending a conference in English, and they are able not only to listen, but to interact in a foreign language. I am sure nobody (myself included) would have bet anything on that ten years ago. For one day, I felt at home in my city doing my job. If you attended SQL Saturday 454 in Turin, I would like to thank you. You made my day.


  • When to use calculated tables in #dax

    In the September release of Power BI Desktop, Microsoft introduced an important new feature: calculated tables. Chris Webb wrote a quick introduction to this feature, and Jason Thomas published a longer post about when to use calculated tables.

    The reason for the excitement about this feature is that it adds an important tool to the data modeling capabilities of DAX-based tools (even if, at the moment, only Power BI Desktop exposes it, I guess that at least Analysis Services 2016 will provide the same capability). Using calculated tables you can materialize the result of a DAX table expression in the data model, also adding relationships to it. Without this tool, you would have to read the data from outside Analysis Services and then push it back - and this wouldn't be possible in Power BI. I implemented similar techniques in the past by using SQL Server linked servers, materializing the result of a DAX query in a SQL Server table, and then importing that table again into the data model. Thanks to calculated tables, today I wouldn't do this roundtrip, and I would save processing time and reduce data model complexity.
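
    To give an idea of the syntax, this is a minimal sketch of a calculated table definition in Power BI Desktop; the Sales and Customer tables are hypothetical names, not related to the article mentioned below:

    Top Customers =
    TOPN (
        10,
        ADDCOLUMNS (
            VALUES ( Customer[CustomerName] ),
            "Revenues", CALCULATE ( SUM ( Sales[SalesAmount] ) )
        ),
        [Revenues], DESC
    )

    The expression is evaluated when the model is refreshed, and the resulting table is stored in the model like any other table, so you can create relationships with it.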

    Alberto Ferrari wrote an article describing a good use case for calculated tables. The article presents an implementation of a transition matrix between customer categories, evaluated automatically based on other measures (for example, the revenues). I suggest reading Transition Matrix Using Calculated Tables and then trying to implement the same intermediate table with other techniques (ETL, SQL, ...). You will discover that calculated tables help you write cleaner and faster code for a transition matrix pattern.

  • Synoptic Panel for Power BI Best Visual Contest #powerbi #contest

    Today (September 30, 2015) is the last day to submit an entry to the Power BI Best Visual contest. Daniele Perilli (who has the skills to design and implement UI) and I spent hours thinking about something that would be challenging and useful at the same time. Daniele published a couple of components (Bullet Chart and Card with States) that were useful for understanding the interfaces required to implement a Power BI visual component. But the “big thing” that required a huge amount of time was something else.

    We wanted a component to color the areas of a diagram, a floor plan, a flow chart, and of course a map. From this idea, Daniele developed (and published today – what a rush!) the Synoptic Panel component for Power BI.

    The easiest way to see it is to watch the video. However, an additional description can help. Let’s consider a couple of scenarios. For a brick-and-mortar shop, you can color the areas corresponding to categories (and subcategories) of products, using either color saturation or three-state logic (red-yellow-green, but you can customize these colors, too).


    But what if you are in the airline industry? No problem, it’s just another bitmap.


    Wait a minute, how do you map your data to the graphics? How can you start from a bitmap, and define the areas that you want to relate to airplane seats or product categories and subcategories? We don’t have coordinates like latitude and longitude, right?

    Well, you can simply go in, import a bitmap, and draw your areas, straight in the browser: no download, no setup, no fees required. Each area has a name that you will use to connect it to your data model. Yes, you read that right: you will not change your data model to use the Synoptic Panel. For example, here you draw the seat areas in an airplane:


    And with some patience you locate all the areas of a shop, too:


    In the right panel you have the coordinates, which you can modify manually, and the editor also has a grid to help you with alignment (a snap-to-grid feature is also available).

    Once you have finished, you export the area definitions to a JSON file that you have to save at a publicly accessible URL so that it can be read by the component (we will add the capability to store this information in the database, too – yes, dynamic areas will be available as well).

    At this point, in Power BI you insert the component, specify the URL of the bitmap, the URL of the JSON file with the areas, the category, the measure to display, and the measure to use for the color (as saturation or color state); you customize the colors, and your data is now live in a beautiful custom visualization.

    Thanks Daniele for your wonderful job!

  • Fix performance issue of pivot tables with Tabular models

    If you use SSAS Tabular, this is very important news!

    Microsoft released a very important update for Analysis Services 2012 that provides performance improvements to pivot tables connected to an Analysis Services Tabular model: it is SQL Server 2012 SP2 Cumulative Update 8.

    Microsoft discussed some of these improvements in this blog post: Performance problems on high cardinality column in tabular model

    UPDATE 2015-09-22: I have fixed the following part of this post.

    In a previous version of this post, I wrongly reported that this update fixed the unnatural hierarchies problem, too! That issue is described in the article Natural Hierarchies in Power Pivot and Tabular. In reality, only Power Pivot for Excel 2016 and SQL Server Analysis Services 2016 fix it; it is still present in previous versions of Analysis Services (2012/2014) and Power Pivot for Excel (2010/2013).

  • Don’t ignore the Context Transition in #dax

    Almost 3 years ago I wrote an article with the rules for DAX code formatting. If you quickly look at the article, you might think that it is all about readability of the code, and this is fundamentally true. But there are two rules that have particular importance for performance, too:

    • Never use table names for measures
    • Always use table names for column references
      • Even when you define a calculated column within a table

    Well, it is not that writing or omitting the table name has a direct impact on performance, but without this convention you can easily miss an important bottleneck in your formula. Let me clarify with an example. If I read this:

    = [A] + SUMX ( Fact, Fact[SalesAmount] )

    I would say that SalesAmount is a column of the Fact table, and the SUMX iteration will not perform a context transition. But if I read this:

    = [A] + SUMX ( Fact, [SalesAmount] )

    I would start to worry about the number of rows in the Fact table, because each row will invoke a context transition for the measure SalesAmount, creating a different filter context for each evaluation.
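
    To make the difference explicit: assuming, hypothetically, that the measure SalesAmount is defined as SUM ( Fact[SalesAmount] ), the second formula is equivalent to:

    = [A] + SUMX ( Fact, CALCULATE ( SUM ( Fact[SalesAmount] ) ) )

    The implicit CALCULATE wrapped around the measure reference is what performs the context transition on every row of Fact.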

    This simple detail makes a huge difference in performance. A context transition is fast, but doing it millions of times takes time.

    Wait a minute: are you asking yourself what a context transition is and why a measure generates one? No problem: read the article Understanding Context Transition to get a quick recap of the topic (and if you want to dig deeper, preorder The Definitive Guide to DAX, available in October 2015!).

  • Back to the basic: #dax primers for new #powerbi users

    The growing adoption of Power BI Desktop is bringing new users to the DAX language. At the same time, there are a few new features, such as bidirectional filter propagation, that add new concepts to existing knowledge. For this reason, in recent weeks we published two articles describing important basic concepts and clarifying the behavior of filter context propagation with bidirectional filters.
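
    As a small taste of the bidirectional concept in code, the new DAX also exposes the CROSSFILTER function, which lets a single measure override the cross-filter direction of a relationship. This is only a minimal sketch, with hypothetical Sales and Customer tables and a hypothetical [Sales Amount] base measure, not taken from the articles mentioned above:

    Sales Bidirectional :=
    CALCULATE (
        [Sales Amount],
        CROSSFILTER ( Sales[CustomerKey], Customer[CustomerKey], BOTH )
    )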

    A more complete description is included in our new book, The Definitive Guide to DAX, which will be available in October 2015.

  • Power BI Desktop & Excel

    The August 2015 update of Power BI Desktop added two important features for existing Excel and Analysis Services users:

    In case you haven't tried it before, Power BI Desktop can connect to Analysis Services Tabular (connectivity for Multidimensional will arrive later, but Microsoft is working on it). It is interesting to note that Power BI Desktop sends different DAX queries to Analysis Services 2012/2014 and Analysis Services 2016. The latter has better performance, thanks to the many new DAX functions and other improvements in the query engine. Thus, especially for complex reports, consider a test using the latest available CTP of SQL Server 2016 (at the moment of writing, CTP 2.3 is the latest available, but new versions might be released every month).

    The other important news is that you can import into Power BI Desktop an existing data model created in Power Pivot for Excel. In reality, you also import Power Query scripts and Power View reports. I found some minor issues when I imported linked tables, but overall the experience was very good. After you import the data model, you can refresh it within Power BI. If you used Excel linked tables, you get a Power Query script that reads the same data from the original Excel files when you refresh the data model.

    The opposite is not possible, so you cannot import into Power Pivot for Excel a data model created in Power BI Desktop. Since a real pivot table is not present in Power BI today, it would be very useful to be able to connect an Excel pivot table to an existing Power BI data model. If you would like to have this feature integrated and supported, please vote for the Ability to connect Excel to Power BI Data Model and create Pivot/Charts suggestion on the Power BI support website.

    Now, as described in the proposal, there are two ways to obtain this feature:

    • Connect the pivot table to the model hosted on the Power BI service: this would be similar to connecting to a model hosted in SharePoint. I guess that the only existing barrier to implementing this feature is authentication; in fact, such a feature is not available in SharePoint Online either. Of course, such a feature would be more than welcome.
    • Connect the pivot table to a local PBIX file: this is a completely different story, and it would be part of the scenarios I described in the Power BI Designer API feature request a few months ago (with around 1,400 votes it is the fifth most requested feature). In this case, the implementation might be realized in two ways: by integrating the Power BI engine within Excel, or by connecting Excel to Power BI Desktop. The former is unlikely to happen, because Power Pivot for Excel is already the engine we are talking about, and I think the release cycles of the two products will always be too different to enable this scenario. The latter is simpler, and it is actually already possible, although completely unsupported. It would be nice if Microsoft simply enabled support for it.

    At this point you might be curious about how to connect an Excel pivot table to Power BI Desktop. Well, let me start with an important note.

    DISCLAIMER: the following technique is completely unsupported. You should not rely on it for production use, and you should not provide it to end users who might rely on it for their job. Use it at your own risk and don't blame either me or Microsoft if something doesn't work as expected. I suggest using this only to quickly test measures and models created in Power BI using a pivot table.

    Well, now if you want to experiment, this is the procedure:

    1. Open Power BI Desktop and load the data model you want to use
    2. Open DAX Studio and connect to the Power BI Desktop model
    3. In the lower right corner of DAX Studio, you will find a string such as "localhost:99999", where 99999 is a number that is different every time you open a model in Power BI Desktop (the same model gets a different number every time you open it). Remember this number.
    4. Open Excel (2007, 2010, 2013 - you can use any version) and connect to Analysis Services (in Excel 2013, go to Data / Get External Data / From Other Sources / From Analysis Services), specifying the previous string "localhost:99999" as the server name (using the right number instead of 99999) and using Windows Authentication.
    5. At this point you will see a strange database name and a cube named Model. Click Finish and enjoy your data using a pivot table, a pivot chart, or a Power View report (why you would use the latter in this scenario, I don't know...).

    I will save you time by describing the problems you will have using this approach:

    • If you close the Power BI Desktop window, the connection will be lost and the pivot table will no longer respond to user input.
    • If you save an Excel file created with this connection, the next time you open it you will have to update the connection with the right server name and port number (if you try to refresh the pivot table, you get an error, and you can change the server name in the dialog box that appears).
    • This feature might be turned off by Microsoft at any moment (in any future update of Power BI Desktop).

    That said, I use this technique to test the correctness of measures in a Power BI Desktop data model, because a pivot table is faster than the other available UI elements for navigating data and examining a large number of values. But I never thought for a second of providing such a way to navigate data to an end user. I would like this to be supported by Microsoft before doing so. Thus, if you feel the same, vote for Microsoft to support it.

  • Large Dimensions in SSAS Tabular #ssas #vertipaq

    After many years of helping companies around the world create small and large data models using SQL Server Analysis Services Tabular, I’ve seen a common performance issue that is underestimated at design time. The VertiPaq engine in SSAS Tabular is amazingly fast: you can have billions of rows in a table and query performance is incredible. However, in certain conditions, queries over tables with just a few million rows are very slow. Why is that?

    Sometimes the problem is caused by DAX expressions that can be optimized. But if the problem is in the storage engine (something you can measure easily with DAX Studio), then you might be in bigger trouble. However, if you are not materializing too much (a topic for another post, or for the Optimizing DAX course), chances are that you are paying the price of expensive relationships in your data model.

    The rule of thumb is very simple: a relationship using a column that has more than 10 million unique values will likely be slow (hopefully this will improve in future versions of Analysis Services – this information is correct for SSAS 2012/2014). You might observe slower performance already at 1 million unique values in the column defining the relationship. As a consequence, if you have a star schema and a large dimension, you have to consider some specific optimizations (watch my session at Microsoft Ignite to get some hints about that).
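
    As a quick sanity check, you can measure the cardinality of the column used by the relationship with a simple query in DAX Studio; Sales[CustomerKey] here is only a hypothetical example of such a column:

    EVALUATE
    ROW (
        "Unique values in relationship column",
        DISTINCTCOUNT ( Sales[CustomerKey] )
    )

    If the result approaches the thresholds above, the relationship itself is a good candidate for optimization.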

    If you want to know more, read my article on SQLBI about the Costs of Relationships in DAX, with a more complete discussion of the problem and a few measurements of the timings involved.

  • DAX Formatter now supports Power BI Desktop and Excel 2016 #dax #powerbi

    If you use DAX, you should try DAX Formatter. Now it supports all the new functions introduced in Power BI Desktop and in Excel 2016.

    There are more than 70 new functions, even if half of them correspond to Excel functions with the same name (see the second group). DAX Formatter also supports the variable syntax available in the new DAX.

    These are the new “original” DAX functions:

    • EXACT
    • EXCEPT
    • IGNORE
    • MEDIAN
    • UNION
    • XIRR
    • XNPV
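
    As an example of one of these new functions, here is a minimal sketch of a query using EXCEPT to return the customer countries where there is no store; the Customer and Store tables and their Country columns are hypothetical names, not part of any specific demo model:

    EVALUATE
    EXCEPT (
        VALUES ( Customer[Country] ),
        VALUES ( Store[Country] )
    )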

    And this is the list of the functions identical to the Excel ones:

    • ACOS
    • ACOSH
    • ACOT
    • ACOTH
    • ASIN
    • ASINH
    • ATAN
    • ATANH
    • BETA.INV
    • COMBIN
    • COS
    • COSH
    • COT
    • COTH
    • EVEN
    • GCD
    • ISODD
    • ISEVEN
    • LCM
    • ODD
    • PERMUT
    • SIN
    • SINH
    • SQRTPI
    • TAN
    • TANH
  • Zero Inbox

    This is a blog post completely unrelated to the technical content I usually cover. But I’ve been asked so many times how I handle my mail that I thought having a blog post would save me time explaining. So, if you are not interested, wait for the next blog post, which will be about Business Intelligence again!

    First of all, I only use email. I’ve seen (and tried) several other technologies with their to-do lists and workflow management. But the problem is that I work for so many customers, with different standards, that it’s impossible to standardize on a single technology. The mantra today is to keep it simple, and my conclusion is to use only one system. So I use email, and only email.

    Now, the problem is that email includes communication with customers, colleagues, and friends. But it also contains newsletters, alerts, and reports. And I also receive digests from forums, blog posts, Facebook messages, Yammer communications, SharePoint alerts, and so on. It seems crazy, but this means I have to handle email properly and I cannot afford to lose or forget a message. The side effect is that email is the most reliable way to get something done by me. I send email to myself from my mobile phone to remember stuff. But if you try to contact me by SMS, Twitter, Facebook, WhatsApp, or anything else that does not forward me an email… well, sooner or later I will see it, but you might be out of luck. I receive an average of 150-200 emails every working day, with peaks of 300. I send an average of 30-40 emails every working day. It happens that I do something wrong and lose an email, but that happens once a month, maybe less. It is 99.97% reliability, and I can live with that. However, I can manage it only thanks to methodology and tools.

    Methodology: I use zero inbox. The idea is simple: at the end of the day, your inbox is empty. I have to admit that this does not happen every day, but only because I want to keep some messages visible regardless of everything else. There are a lot of examples on the web of how to achieve that, but the principle is simple: triage often, process immediately or defer, but keep the inbox empty or relatively small.

    I’m addicted to Outlook and I use Office 365. It is very consistent and integrated. I tried Gmail with a personal account, but I never got into it. I work with people who would never give up their Gmail inbox, whereas I’m in another camp. Outlook allows me to define rules that run on the server. This is very important for certain messages (forums, mailing lists) that I don’t want polluting my Inbox, because I will read them at the right time during the day. No rush. Rules running server-side are important when I check email from my mobile phone. However, I have to use the Outlook desktop client, because I rely on a couple of add-ins that I absolutely need.

    First of all, I have to remove messages from the Inbox once I have processed them. I don’t delete them; I archive them in a relatively lean folder structure (less than 100 folders in a hierarchy). The archive is very important for quickly finding the stuff I need. However, moving messages quickly is important. I use SimplyFile. It has an algorithm that predicts the right folder, and when the first choice is not the right one, you can browse the list or search the available folder names. I archive 80% of messages with a single keyboard shortcut, and the other 20% with fewer than 5 keystrokes. No mouse involved. Important for productivity. It also archives messages I send, so in the folder of a customer I have both the messages received and sent. Very useful. The only problem is that when I triage and/or reply from my mobile phone, I know I will have to complete the archiving on the desktop. But I don’t like services that do something similar only online, because I want to be able to triage email when I have no connection. The latency of a bad connection is another big issue, and I travel a lot. So if you have a suggestion for an alternative, please don’t waste time describing an online-only service, because I will never spend time trying it. I’m happy with Outlook; I want the same experience on a mobile device.

    Second, I have to defer mail that I cannot process immediately. Outlook has its own tools, but I prefer to use SnoozeIt. This tool simply moves a message out of the inbox for a certain amount of time (which I can choose for every message). It could be one hour, one day, one week, one month. When the time comes, the message appears in the inbox again (marked as unread if you want). There are many other features (categorization, statistics, and so on) but I simply don’t care about them. I see the mail in the inbox when I figured it would be a good time. I am writing this blog post because I had the idea a few months ago, but I couldn’t find the time until I finished my latest book about DAX. And finally that day arrived (well, you have to wait a few more weeks for the book because of final production processes, but the content is ready; it is now in the paging and proofreading stage).

    And that’s it.

    I have around 200 messages snoozed for a future date. This does not correspond to 200 tasks I have delayed; most of them are tasks that I cannot do until a certain date, or just reminders to check whether a certain action has been done by someone else. Well, I do have many tasks I delayed because I didn’t have time, but not 200!

    I have been using this technique since 2007, initially with SpeedFiler (no longer supported, I think). I moved to SimplyFile in 2010 because SpeedFiler did not support Outlook 2010. I have used SnoozeIt since its first beta in 2014. It works very well for me. However, I’ve seen that it is not good for everyone. Depending on your habits, you might love it or hate it. I’m not trying to convince anyone to use this technique; I’m just writing down my experience because I think it will save me time the next time someone sees my empty inbox and asks how it is possible.

    DISCLAIMER: I regularly paid for the licenses of SpeedFiler, SimplyFile, and SnoozeIt that I use. I do not receive any compensation from these companies, and I will not get any fee for purchases made by blog readers. Feel free to add alternative products in the comments, provided you describe your own experience and it is not just advertising.

  • The ALLSELECTED function under the cover #dax #tabular #powerpivot #powerbi

    Alberto Ferrari and I recently completed the writing of The Definitive Guide to DAX, and we spent months correctly describing the internals of evaluation contexts in this language. There are many details that make a data model work with both DAX and MDX, and sometimes there are behaviors that are not intuitive to understand.

    A function that seems to work like magic is ALLSELECTED, which is very useful when you create measures that will be used in Excel pivot tables. What is not obvious is that the DAX engine has to figure out what the user is selecting in a pivot table that generates a query in MDX. In reality, there is no communication between client and server other than the MDX query, and ALLSELECTED is not related to MDX; it is a DAX function!

    Alberto extracted part of this description from the book and published the Understanding ALLSELECTED article on SQLBI. You will see that the magic in this function is just a particular manipulation of the filter context, which keeps track of the iterated table every time a context transition happens. Not clear enough? Well, the article explains it better!
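
    The typical use case is a ratio over the total of the current selection in a pivot table. Here is a minimal sketch, assuming a hypothetical Sales[SalesAmount] column and a Product table on the rows of the pivot table:

    Pct of Visual Total :=
    DIVIDE (
        SUM ( Sales[SalesAmount] ),
        CALCULATE (
            SUM ( Sales[SalesAmount] ),
            ALLSELECTED ( Product )
        )
    )

    The denominator removes the filters coming from the rows and columns of the pivot table, but keeps the filters coming from slicers and report filters, which is exactly the “selection” the function name refers to.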

  • VertiPaq Analyzer for Analysis Services #ssas #tabular #powerpivot #powerbi

    During the writing of The Definitive Guide to DAX I wanted a simple way to analyze the content and distribution of the data compressed in the VertiPaq engine, which is used by Analysis Services Tabular, Power Pivot, and Power BI models. I have always relied on BISM Memory Report (thanks Kasper!), but when you focus on a single database there are a number of details available in dynamic management views (DMVs) other than the one used by BISM Memory Report.

    I created VertiPaq Analyzer, a Power Pivot data model that collects data from these other DMVs and shows it in pivot tables that provide information about compression, the size of data and related structures (such as relationships and hierarchies), and column selectivity (very important to understand how to optimize DAX queries).

    You can download the workbook here, and read the article that describes all the metrics used.

