THE SQL Server Blog Spot on the Web


Stacia Misner

  • Automate MDX Query Testing

    In one of my recent MDX classes, a student asked about automating MDX queries for the purposes of testing query performance, so today I’ll answer that question and draw your attention to a few available resources.

    Get the ASCMD Utility

    If you’re running SQL Server 2008 Analysis Services or SQL Server 2008 R2 Analysis Services, you can download  the ASCMD_StressTestingScripts from the Analysis Services Samples page on Codeplex to get the main prize, the compiled ASCMD executable. (This version should also work with SQL Server 2005 Analysis Services, but I haven’t tested it.)

    This utility allows you to execute MDX or DMX queries or XMLA commands from the command line. This capability means that you can build complex batch files to automate your MDX query testing, along with other repetitive tasks such as database processing. You can view the 2005 version of the ReadMe file to see the syntax as well as the various ways that you might use this utility (if you scroll all the way to the bottom of the page).

    The old ReadMe file doesn’t include all of the arguments that are in the latest version of ASCMD. You can download a more current ReadMe document that’s buried deep in the bowels of Codeplex to get more information.

    Try a Simple Command

    In theory, the stress-testing scripts let you execute ASCMD as a single client with a single query or multiple queries, or as multiple clients, each with a single query or multiple queries. I wasn’t able to get that piece working because a file seems to be missing from the download, but you can create your own batch files to do similar things. You can get the basic idea by reviewing the RunASCMDParallelStressTest.cmd and RunASCMDSerialStressTest.cmd files that come in the download.

    If you don’t want to wade through all that batch file stuff to figure out how to make ASCMD work, let me cut to the chase. You can execute ascmd /? to see the available parameters. For running an MDX query, assuming that you want to capture trace information to see the query duration and key trace events like QuerySubcubeVerbose and GetDataFromCache, you use the following syntax:

     ascmd -S <server\instance> -d <database name> -i <mdx file> -o <output file> -T <trace file>

    So, for example, to run on my local instance, using a query file found in the Queries subfolder of the download, I would run the following command:

    ascmd -S localhost -d "Adventure Works DW 2008R2" -i Queries\Query1-3.mdx -o Output.xml -T Trace.csv

    The Output.xml will contain the query results and a lot of metadata about the cube that you can probably ignore if your goal is to get performance testing data. The Trace file is a pipe-delimited CSV file that contains the same type of trace data that you get when you run a SQL Server Profiler trace for Analysis Services. The trace file gets overwritten on each execution, except as noted below.

    You can adjust the level of detail in your trace file by adding the -Tl argument with one of the following values:

    • High (default) – captures everything.
    • Medium – captures everything except ProgressReportCurrent and Notification events.
    • Low – captures only “End” and “Error” events.
    • Duration – captures only execution duration and writes one line in the trace file with the current time, duration, execution text, database, and server name. If you use this argument and execute ASCMD multiple times, each execution appends new data to the file.
    • Duration-result – captures the same results as “duration” but includes an additional column to store the result of the execution. Each execution appends new data to the file.
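    Because the duration modes append one line per execution, the trace file lends itself to scripted analysis. As a sketch, here’s how you might average durations per query in Python; note that the exact column order of the pipe-delimited file is an assumption here (check your own ASCMD output before relying on it):

```python
import csv
from collections import defaultdict

def average_durations(path):
    """Average the duration per query text from an ASCMD duration-mode
    trace file. Assumes pipe-delimited rows laid out as
    current time | duration | execution text | database | server,
    which may differ from the actual ASCMD output on your system."""
    totals = defaultdict(lambda: [0.0, 0])  # query text -> [sum, count]
    with open(path, newline="") as f:
        for row in csv.reader(f, delimiter="|"):
            if len(row) < 5:
                continue  # skip malformed lines
            duration, query = row[1], row[2]
            try:
                ms = float(duration)
            except ValueError:
                continue  # skip the header row or non-numeric values
            totals[query.strip()][0] += ms
            totals[query.strip()][1] += 1
    return {q: s / n for q, (s, n) in totals.items()}
```

    Run ASCMD several times with -Tl duration, then point this function at the trace file to compare average query times before and after a cube design change.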

    Batch It

    The ASCMD utility can run one file at a time, but you can put multiple queries into the file using a GO command between the queries. Or you can set up a batch process with a loop to call ASCMD multiple times.
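    A loop that calls ASCMD once per query file can be generated from any scripting language. Here’s a minimal Python sketch that builds the command lines; the helper function, file names, and the duration-only trace option are illustrative, so adapt them to your own environment:

```python
def build_ascmd_commands(server, database, query_files,
                         trace_file="Trace.csv", trace_level="duration"):
    """Build one ASCMD command line (as an argument list) per MDX query
    file. With -Tl duration, each execution appends a one-line timing
    record to the shared trace file."""
    commands = []
    for query_file in query_files:
        commands.append(
            ["ascmd", "-S", server, "-d", database,
             "-i", query_file, "-o", "Output.xml",
             "-T", trace_file, "-Tl", trace_level])
    return commands

# On a machine where ASCMD is installed, each command could be handed
# to subprocess.run; here we just print them for inspection.
for cmd in build_ascmd_commands("localhost", "Adventure Works DW 2008R2",
                                ["Queries\\Query1-3.mdx"]):
    print(" ".join(cmd))
```

    Building the argument lists separately from executing them makes it easy to review the generated commands before kicking off a long test run.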

    You can leave out the -o argument, but then the output stream displays on your screen, unless of course you put the command into a batch file and use echo off before and echo on after executing ASCMD.

    For performance testing, you should also include some additional steps before you run the MDX query to get true baseline performance measurements. Put these steps into an MDX or XMLA script file and use ASCMD to execute them in your batch file before running the MDX query that you’re testing:

    • Clear the Analysis Services cache:

    <Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
      <ClearCache>
        <Object>
          <DatabaseID>Adventure Works DW 2008R2</DatabaseID>
        </Object>
      </ClearCache>
    </Batch>

    • Initialize the MDX script by executing a query that returns no data:

    SELECT {} ON 0 FROM [Adventure Works]
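    If you script this setup, the cache-clearing XMLA can be generated per database rather than maintained by hand. The ClearCache batch structure below is the standard XMLA command, but the helper function itself is just a hypothetical convenience sketch:

```python
def clear_cache_xmla(database_id):
    """Return a ClearCache XMLA batch for the given Analysis Services
    database, ready to save to a file and pass to ascmd -i."""
    return (
        '<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">\n'
        "  <ClearCache>\n"
        "    <Object>\n"
        f"      <DatabaseID>{database_id}</DatabaseID>\n"
        "    </Object>\n"
        "  </ClearCache>\n"
        "</Batch>\n")

# Write the script for the sample database; your batch file can then run
# it with ascmd before each timed query.
with open("ClearCache.xmla", "w") as f:
    f.write(clear_cache_xmla("Adventure Works DW 2008R2"))
```

    Generating the file this way keeps the database name in one place when you test the same queries against multiple databases.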

    Learn More About Query Performance Analysis

    Now that you’ve captured trace event information, what do you do with it? Here are some resources to help you determine what it means:

  • 3 Big Changes in Analysis Services 2012 Enabling Flexible Design

    The upcoming release of SQL Server 2012 has a lot of new features for business intelligence developers to love. The free preview of Introducing Microsoft SQL Server 2012 (Microsoft Press, 2012) does not include the chapter on Analysis Services, but you’ll be able to read the details when the final version of the ebook is released for download in March.

    Overall, there are a lot of changes in Analysis Services 2012, and it’s easy to get overwhelmed by the details. So, just as I did for Integration Services 2012 last month, I thought about the key aspects of this release that I would single out as important:

    • Business Intelligence Semantic Model
    • Tabular mode
    • Installation experience of PowerPivot for SharePoint

    Business Intelligence Semantic Model (BISM)

    Back in Analysis Services 2005, the Unified Dimensional Model (UDM) made its debut.  The UDM was supposed to blur the lines between relational and multidimensional modeling. How well it accomplished that goal could be argued, but I’m not going to take sides on that issue because now it no longer matters. Why? Because BISM replaces UDM and very definitely addresses two styles of modeling: multidimensional and tabular (about which I explain more below).  To get the background, see Microsoft's vision and roadmap statement describing the benefits at a high level.

    Just as many people didn’t really understand that their development in Analysis Services (versions 2005, 2008, or 2008 R2) produced a UDM, they also don’t need to understand that their development in Analysis Services 2012 produces a BISM. Cubes from prior versions upgrade automatically when migrated to Analysis Services 2012. A migrated cube stays multidimensional, and there is no magic button to convert it to tabular.

    Lack of a magic button is not that big of a deal. I can’t think of a good reason to convert from multidimensional to tabular in most cases. However, I have heard through the grapevine of some scenarios where queries performed significantly faster against a tabular model as compared to a comparable design in a multidimensional model. It’s probably too early to say whether these performance improvements resulted from an anomaly or a genuine benefit of the VertiPaq engine that the tabular model uses.  For now, I would say the only way to know is to test it yourself with your own data. To do this, you will have to build the tabular model from scratch (unless some clever developer comes up with a tool to do the conversion some day).

    Tabular Mode

    When you install Analysis Services, you must choose one of three server modes: Multidimensional mode (the one we’ve had since Analysis Services 2005), Tabular mode, or PowerPivot for SharePoint mode. They differ in the way they store data, how they use memory, and the engine they use to retrieve data for queries. Tabular mode uses the same VertiPaq engine as PowerPivot for SharePoint mode, but as a separate instance and without the same dependency on SharePoint.

    You can only store tabular models on a server running a tabular mode instance. If you have ever built a PowerPivot model, you will find that building tabular models is strikingly similar. The difference is that you use SQL Server Data Tools (SSDT), the SQL Server 2012 replacement for Business Intelligence Development Studio that runs as a Visual Studio 2010 shell.

    With tabular models, you can go beyond using relational sources to populate the model, using any of the sources that PowerPivot supports:  relational databases, Analysis Services cubes (any mode), Reporting Services reports (as a data feed), Azure DataMarket datasets, ATOM data feeds, Excel files, or text files. This flexibility can significantly speed up development time, and enables you to rationalize development of one-off or limited life-span tabular models, which might not happen if you were limited to multidimensional mode.

    Some shortcomings of the tabular model that originated in PowerPivot have been overcome in this release. The following design features are now available in both PowerPivot and tabular models:

    • Hierarchical structures, including parent-child hierarchies
    • Perspectives
    • Key performance indicators
    • Diagram interface for working with tables and relationships

    (UPDATE: Removed aggregations from the list. Strikethrough doesn't seem to work for me here!)

    One advantage that tabular models have over PowerPivot models is the ability to partition the data. You can then manage reprocessing of individual partitions manually within SSDT or using scripts that you execute on the tabular mode server.

    Another advantage with tabular models is the ability to use role-based security to control what people can see and do. At minimum, you set one role with permissions to allow read access and another role to administer the database. You can optionally create a role to allow users to both query and process the database, or restrict users only to processing. Another aspect of security that you might implement is row-level filtering. For example, you might set up a role that can view only Bikes as a category, but you can also completely block a role from viewing any rows from a particular table.

    A third distinctive feature of tabular mode is its DirectQuery mode. You use this mode when you need to retrieve current data (rather than use the cache that tabular mode creates by default) from the source or when you want to query data volumes that are too large to hold in memory. You can use DirectQuery only when using SQL Server 2005 (or later) data. There are some additional limitations with formulas, security, and client tool support that you should understand before choosing this option.

    Use the following links to download tabular samples and follow a tutorial:

    Use these links to follow blogs that discuss BISM and tabular topics:

    PowerPivot for SharePoint Installation Experience

    Installation and configuration of SQL Server 2008 R2 PowerPivot for SharePoint and SharePoint Server 2010 can be a bit challenging. Curiously, my post on performing these steps on a Windows 7 machine is one of the most popular posts on my blog. It was challenging due to the dependencies on the SharePoint farm, but the new release provides a configuration wizard that greatly simplifies and automates the process. You are not required to use it, but it’s a terrific option if you’re not well-versed in SharePoint administration. If you prefer to use PowerShell, there are SharePoint and PowerPivot PowerShell cmdlets that you can use instead.

     Virtual Event Coming Soon!

    And if possible, make some time to learn more at the SQL Server 2012 Virtual Launch on March 7th.

    What features are you really looking forward to trying out?

  • Book Review: MDX with Microsoft SQL Server 2008 R2 Analysis Services Cookbook


    As I mentioned in an earlier post, I first met Tomislav Piasevoli (blog|twitter) in 2008 at the PASS Summit in Seattle, and saw him most recently in Seattle at PASS Summit 2011. There I had the pleasure of telling him personally how much I liked his book. There are not many MDX books available, so when a new one arrives on the market, I definitely want to take a look. Now I have taught MDX classes for many years; I think perhaps my first class was in 2002, but it’s been so long now that I don’t recall exactly when that first class was. What I do know is that my understanding of MDX and my fluency with the language have grown by leaps and bounds since then through experience with all kinds of bleeding-edge MDX, and I have tried to communicate some of the insights I developed along the way to students in my classroom. MDX concepts can stretch your thinking (to put it mildly), and more so if you have to unlearn SQL concepts as well, which is true of the majority of my students. As a native English speaker, I can find it challenging at times to explain certain aspects of MDX. For all these reasons, I can fully appreciate Tomislav’s efforts to create this book and commend him for his excellent examples and explanations in a language that is not his mother tongue.

    Tomislav’s book is definitely not for beginners. There is an underlying assumption that you already have some familiarity with the basics of MDX. The purpose of this book is to provide you with the tools necessary to continue building your skills. The chapters group together a series of related concepts, called recipes. You don’t need to read the book sequentially from cover to cover. Instead, you search for the desired outcome, such as handling division by zero errors (described in Chapter 1) or calculating row numbers (found in Chapter 7). Ideally, you don’t just read the recipe, but actually try out the steps yourself. I did some of my reading of this book when I was away from my laptop, and frequently found that I wanted to try out something that Tomislav mentioned. Even long-time MDX developers like myself can find interesting tidbits of information to round out their repertoire!

    Each concept within a chapter is presented in a similar manner, beginning with “Getting ready” which are the steps you need to perform to set up your query. For example, you might need to work in Management Studio and set up a query as a starting point or open the Script View for a cube in Business Intelligence Development Studio. Then the next section is “How to do it…” which provides step-by-step instructions for working with a query or MDX script to accomplish the intended outcome. Then Tomislav continues with the “How it works…” section which provides background information on the key concepts for the current recipe. This section is the real meat of the book, but is nicely separated from the implementation steps if you want to jump straight to the technical details. Depending on the recipe, Tomislav also includes additional sections, such as “There’s more…” to provide alternative solutions or to point you to helpful links on the Internet such as whitepapers and blog articles providing more insight, and “See also…” to cross-reference you to another recipe in the book that covers a closely related concept.  Where applicable, he also includes information about how the techniques in the recipe behave differently in earlier versions of Analysis Services.

    Chapter 1: Elementary MDX Techniques

    Don’t let the word Elementary fool you into thinking you can learn entry-level MDX from this chapter. As I mentioned earlier, this book is not for beginners. There are good techniques here and a few basics, such as a great explanation of the FORMAT_STRING property and troubleshooting its use. However, I would consider the majority of these techniques to be elementary only as compared to the other techniques found later in the book. For example, the WHERE clause is one of those things that I see people really get into a tangle over (because they can’t forget their T-SQL), and Tomislav demonstrates using it to implement a logical OR on members from different hierarchies and a logical AND for members from the same hierarchy. One of my favorite sections in this chapter is the coverage of alternatives for the FILTER() function, which can cause performance problems. Use this chapter to get grounded with some foundational concepts, then strap on your seatbelt before diving into the rest of the book!

    Chapter 2: Working with Time

    Time is something that every cube has. Or at least every cube that I’ve ever met. I can’t imagine a cube without one (even if it’s called Date) because often business analysis is comparing one period to another or monitoring trends over time. Tomislav starts with the basics of the YTD() function but delves into variations on the theme and points out pitfalls to avoid. He moves on to parallel periods, moving averages, and finding last dates with data, among other time-related topics. A useful recipe in this chapter is the use of string functions to calculate a date, as I see this requirement a lot when working with Reporting Services reports that use Analysis Services as a source. A good case for working with a single-member named set rather than a calculated member is also made in this chapter.

    Chapter 3: Concise Reporting

    A report in this chapter means a pivot table used in some front-end tool for Analysis Services, and not Reporting Services exclusively. The goal of this chapter is to reduce the size of the pivot table, and thereby improve performance. I would characterize this chapter as one that helps you find the best or the worst members in a group, whether in a hierarchy, among siblings, or among descendants. Tomislav starts off the chapter with a recipe to get the top N members. In this recipe, Tomislav includes a great explanation of what can go wrong when you use the TopCount() function. Well, it’s not a matter of it behaving incorrectly because it’s doing what you ask. The problem is that many people misunderstand how the TopCount() function behaves under certain conditions and Tomislav delves deeply into the behavior here. He then builds on these ideas throughout the chapter and introduces alternatives for finding and displaying the best and the worst.

    Chapter 4: Navigation

    Hierarchies in a dimension are extremely useful for a number of reasons, one of which is navigation. The chapter begins with some simple queries that use Boolean logic to test the context of a current member on the row axis, and then expands to use scoping in the MDX script or use a query (using CELL CALCULATION) to determine if members are in the same branch of a hierarchy. Are you confused about when to use the Exists() function and the EXISTING keyword? Tomislav covers them both in this chapter in a variety of contexts. Also, having advocated on behalf of a named set in a previous chapter, Tomislav explores the pros and cons of named sets more fully in this chapter.

    Chapter 5: Business Analytics

    This chapter covers several techniques that are encountered less frequently (depending on who you ask, I suppose) than those covered up to this point in the book. For example, the chapter begins with linear regression which I’ve never had to use in 10 years of writing MDX. But I said the same about the Correlation() function once upon a time and I now use it frequently in a current project, so my feeling is that you never know when you’ll need to use a seemingly obscure function. Because these analytical functions are used less commonly, the amount of information available through Books Online or elsewhere on the Internet is pretty slim. Therefore, having this chapter’s working examples at your fingertips is invaluable. Also covered in this chapter is adjusting forecasts based on periodic cycles, alternative approaches to expense allocations, finding slow-moving inventory items, categorizing customers, and ABC analysis (which is an application of Pareto analysis).

    Chapter 6: When MDX is Not Enough

    In this chapter, Tomislav makes the case that when an MDX approach gets overly complicated, it’s time to look at making changes to the dimension or cube design. For example, he says, “Every time you catch yourself using functions like Except(), Filter(), or similar() too often in your calculations, you should step back and consider whether that’s a repeating behavior and whether it would pay off to have an attribute to separate the data you’ve been separating using MDX calculations.” He also explains how and why to create a placeholder measure in the cube to use with assignments in the MDX script. Utility dimensions for unit conversion or for time-based calculations are also covered in this chapter.

    Chapter 7: Context-aware Calculations

    Understanding context is an important aspect of MDX development. As Tomislav explains in the introduction to this chapter, context can be unpredictable based on what a user might select to place on rows and columns, or it can be partially known when you expect a particular measure or hierarchy to be used, or it can be completely known. The trick is to produce a calculation that behaves correctly regardless of context, which can be made trickier based on a combination of factors that Tomislav describes. The recipes in this chapter help you explore context from a number of, um, contexts, starting with how to know how many columns and rows will be in a query’s result set, how to determine which axis contains measures, how to determine what has been placed on an axis, among other useful techniques.

    Chapter 8: Advanced MDX Topics

    Now frankly I considered several of the recipes up to this point to be advanced, so I had to chuckle at the title of this chapter. Let’s just say these recipes are more complex! In this chapter, you’ll find techniques for working with parent-child hierarchies and displaying random values for sampling purposes. Hopefully, you’re avoiding the use of parent-child hierarchies and random sampling is not a common request in reports, so this section of the chapter is interesting primarily from an academic viewpoint. But then we move to complex sorts – a very useful subject indeed. Tomislav provides several examples and highlights potential problem areas. Also in this chapter is a recipe for recursively calculating cumulative values.

    Chapter 9: On the Edge

    Tomislav uses this chapter to collect topics that don’t neatly fit into the earlier chapters. Here he covers Analysis Services stored procedures (which are nothing like T-SQL stored procedures, by the way), as well as using the OPENQUERY() and OPENROWSET() functions for calling MDX from a T-SQL statement. He also introduces Dynamic Management Views (DMVs) for documenting and monitoring cubes, and shows how to use SQL Server Profiler to capture MDX queries. Last, he shows how to use the DRILLTHROUGH command.

    If you’re an MDX developer, whether brand new or experienced, you will find lots of good information in this book and practical examples of how and why to implement specific techniques. I definitely recommend that you add it to your library, in whatever format you prefer. It’s available in paperback, PDF, ePub and Mobi from the publisher and in Kindle format from Amazon.

  • SQL Community, Twitter, and Hashtags

    I’ve been working with the Microsoft SQL Server BI stack since late 2000. When I started with SQL Server, my “community” consisted of my co-workers, my clients, and me. As communities go, it was a great community to be in. In 2001, our small boutique consulting company EssPro, founded by Mike Luckevitch, merged with another small boutique consulting company called Aspirity to create…Aspirity (which we lovingly referred to for a while as Aspirity 2 internally). Aspirity was the company who wrote the Microsoft Official Curriculum for the SQL Server 2000 BI stack – the DTS, OLAP, and MDX courses. I learned a lot about these products from the likes of Hilary Feier, Liz Vitt, Reed Jacobson, and Tom Chester. I even learned enough that they let me teach those courses, and thus my career as a BI trainer was launched. Eventually, they even let me write new courses and help co-author a book with Mike and Liz, Business Intelligence: Making Better Decisions Faster, thus launching my career as an author. Now that’s a great community!

    But as great as that community was, it was only a microcosm of all the people working day in and day out with the Microsoft BI stack. I met many of them over the years at various Microsoft events, including my first PASS Summit in 2002. But in the early years of the past decade, I was one of many in an organization (even if it was small) that couldn’t send everyone to PASS, so it wasn’t until 2006 when I went independent that I could start going to PASS regularly. My community circle widened and PASS became a reunion for community. But even of all the people who were attending PASS, they still represented only a fraction of the community at large.

    Somewhere along the line, Twitter became a phenomenon with the SQL Server community. When I first learned about Twitter, I didn’t understand the big deal. But now that I’ve been tweeting for a while, I’ve really come to appreciate the power of this medium for building relationships, sharing knowledge, and helping one another. I appreciate Twitter for its ability to create and foster community. And for a good laugh from time to time. So no matter where in the world that we find ourselves, our community is only a tweet away. We don’t need to meet face to face to be part of community. We simply need to engage.

    One of the ways that the SQL Community engages to help one another by using Twitter is to use the #sqlhelp hashtag. (If you’re new to Twitter, a hashtag is a string of characters preceded by the # symbol and used to filter tweets so that you can easily find the tweets related to a particular topic.) But the SQL Server stack is loaded with a variety of technologies, and there’s a lot more traffic related to database engine Q & A than this business intelligence girl can sift through to spot where I can help.

    Mark Vaillancourt (blog | twitter) – a thoroughly entertaining fellow that I met last week at PASS Summit 2011 – proposed the idea that the community should separate these Twitter traffic streams and introduced the #SSRSHelp hashtag earlier this week. Following along on that theme, I asked my fellow SSAS Maestro Council members if they would be willing to help me monitor a new hashtag, #SSASHelp, for questions related to Analysis Services and they agreed. Joining me are:

    Of course, anyone can help us answer questions. Although the four of us are spread out across the US and Europe, covering a variety of time zones, we’re busy folks too (and occasionally sleep), so we can’t promise to be instantly available. The more people involved in helping, the more people we can help. And that’s what makes this community so awesome!

  • PASS Summit 2011 Epilogue

    It's already half a week after PASS Summit 2011 and my head is still reeling! Just when I think PASS Summit can't possibly get any better, it does. When Erika and I arrived at the Washington State Convention Center on Sunday afternoon to register, I felt like I had just been there for Summit 2010. Has this past year been only a dream?

    SQL Community

    Early arrival meant Erika and I were both "interviewed" about SQL Community, which was displayed at the opening of the first day's keynote on Wednesday, October 12. Both of us were also in the Welcome to PASS Summit video.

    Connect. Share. Learn.

    That was the theme for this year's Summit. And Erika and I did plenty of each. I connected with friends made at last year's Summit, friends from my very first Summit in 2002, and of course friends made along the way. It was especially fun to connect with people whom I knew until now only through Twitter. Some people were taller than I imagined, and some were shorter - but I'm not confessing any further details here! One feature of this year's Summit that I thought was executed quite well was the attention given to first timers. The blogging community provided help in advance, first timers were given special ribbons to wear, and a special reception was held to make first timers feel welcome and learn the ropes. Then they were turned loose into the Summit community! I spoke with many first timers throughout the week and there was much appreciation expressed for the extra effort to help them get acclimated.

    For sharing - I participated in several events. First, there was my pre-conference session on Tuesday which had a lot more people than I expected frankly for the somewhat niche topics of MDX, DAX, and DMX. If you were in my class (and even if you weren't), you can download the demonstration files here, plus in the file I included the queries that I managed to save off from our detour discussions:

  • MDX Demos
  • Workbook for DAX
  • DMX Demos
  • MDXPrecon - the "final" queries


    Then on Thursday, Denny Cherry (Blog | Twitter) and I co-presented "So How does the BI Workload Impact the Database Engine?". We had some great questions from the audience and still managed to get through our content. Originally, we had thought this was a 4-hour session, but discovered a few weeks prior to the Summit that it was only a 3-hour session. I think we did a decent overview and certainly could have talked at least another hour, if not an entire day drilling into the details. Denny put up  a Web page containing links to resources to get background information on the topics we covered in this session.

    On Friday, I was super busy with my Connect and Share parts of the program. I hosted a Birds of a Feather table where we discussed Collaborative and Mobile BI. I was surprised how many people chose to be at the table rather than land there by default, considering this topic is still somewhat futuristic! Mostly the conversation centered around Mobile BI. No one here is actively using it yet, although there is interest. We discussed the current offerings in the marketplace, the announcement of Microsoft's mobile BI strategy, and the challenges everyone has in delivering mobile BI to their users.

  • After lunch, I dashed off to Starbucks for a mini-reunion with my Aspirity colleagues, with whom I had worked since 2000 (well, most of them!). Some of us have moved on to independent consulting careers, while others remain with Hitachi Consulting, which acquired Aspirity in 2004. From left to right: Dave DuVarney (Twitter), Dan Reh, me, Hilary Feier, Susan O'Connell, Scott Cameron (we started working together pre-Aspirity in 1997), and Paul Turley (Blog | Twitter) (who joined us after the photo shoot!).

    Another connecting/sharing activity on Friday was the panel session, Are You a Linchpin? I was on the panel with Brent Ozar (Blog | Twitter), Jeremiah Peschka (Blog | Twitter), Louis Davidson (Blog | Twitter), Kevin Kline (Blog | Twitter), Andy Warren (Blog | Twitter), and Thomas LaRock (Blog | Twitter). The session was moderated by Andy Leonard (Blog | Twitter). We had a very interesting, interactive discussion about professional development and how one becomes indispensable in today's fast-paced world. The jumping off point for the discussion was a book by Seth Godin, Linchpin: Are You Indispensable?

    Afterwards, Erika reminded me about a tweet I had shared recently. She was just waiting for me to bring it up during the panel discussion, but it had long ago fluttered out of my mind! However, I agree that it was pertinent to the discussion, and am including the link here for those interested. The article, Why Do Some People Learn Faster?, discussed failure as a learning tool. It makes me feel better about all those mistakes I make. I'm just learning faster that way! And you'll hear this as a common refrain in my classes. I don't think we learn anything when software, processes, technology, (fill in the blank) just works. It's only when something breaks that we have the opportunity to investigate to see why it broke and how to keep it from breaking again. That's true learning.

    What else did I learn? Lots of new stuff coming from Microsoft next year, although that's not huge news to me as I've already started writing about SQL Server 2012 (Denali). You can download a draft of the eBook that Ross Mistry and I are co-authoring, Introducing Microsoft SQL Server Code Name Denali. The final book is due out next year.

    But I also learned a few new things at the keynotes as well. You can catch my highlights of the keynotes in my previous posts:

    There was so much else going on during the week that I can't possibly capture it all here. I'm looking forward to catching the replay of many sessions as I didn't get a chance to attend as many as I would have liked. Apparently, my plan to clone myself (via Erika) didn't enable me to absorb more knowledge. On to Plan B...

  • So Many Choices, but Only One of Me! PASS Summit 2011 Picks

    It’s that time of year again when the anticipation of the upcoming PASS Summit begins to build. My first attempt at using the Summit Schedule Builder by selecting the various topics of interest to me resulted in extreme overbooking. Sigh… Now I really have to do some serious thinking about which sessions I want to see and how that fits around my obligations to participate in other events besides the sessions that I’m presenting:

    Preconference: MDX, DAX, and DMX: An Introduction to the Languages of BI (Tuesday, Oct 11, 8:30 am – 4:30 pm)

    Half-day session: So How Does the BI Workload Impact the Database Engine? (Thursday, Oct 13, 1:30 pm – 4:30 pm). Denny Cherry and I are co-presenting this session and provided a sneak preview for 24 Hours of PASS Fall 2011, which you can view here (see Session 9).

    Panel session: Are You a Linchpin? Career management lessons to help you become indispensable (Friday, Oct 14, 2:30 pm – 3:45 pm)

    Of course, I’d love for you to come to all of my sessions, but there are plenty of other speakers to see, too! If you’re a business intelligence beginner, I recommend you try to see these sessions:

    If you’ve already got some BI experience, the following sessions are must-see:

    Then, of course, the big news at PASS this year is the upcoming Denali release. There are plenty of Denali-focused BI sessions to see, but I’ll leave it to you to find the sessions that are right for you!

    Overlap? Unfortunately, yes – so we’re all going to have to make some hard choices. But if your budget permits, then the best way out of this conundrum is to buy the conference DVD! Then you have the benefit of reviewing presentations later and don’t have to worry about trying to remember everything that was discussed during Summit, because I guarantee it’s going to be an information overload kind of week!

  • Back to the Future with MDX and PASS

    An interesting confluence of events this week takes me back to the past as I ponder the future! What gives? Well, the recent release of Tomislav Piasevoli’s (blog|twitter) new book, MDX with Microsoft SQL Server 2008 R2 Analysis Services Cookbook, reminded me of my first meeting with him at a PASS Summit event a few years ago. Alas, I don’t remember which event because – I confess – I’m getting old and my brain just doesn’t store details like that anymore. Not when it’s full of tidbits more necessary to my daily work! If I had to guess, I think it was the first event that Erika attended with me, which means it would have been in 2008, but it could have been 2009, and definitely not 2010. How’s that for narrowing it down? [Edit: Tomislav reminded me that we actually met at the 2008 PASS Summit but we talked about the Axis function at the 2009 PASS Summit Birds of a Feather.]

    Regardless of which year it was, Tomislav was seated at a table during a Birds of a Feather lunch talking to several people about MDX, and he made a comment about the Axis function which I thought was very interesting. To be honest, I don’t remember what he said specifically, but it intrigued me enough to jot it down to pursue at a later time. It turned out not to be helpful for whatever MDX problem I was faced with at the time, but I kept the note to myself in my smartphone, and it transferred over to the next two smartphones I’ve owned since then. My experience with MDX is that there are a handful of MDX functions that we who are inclined to write MDX use quite often. For an interesting, but entirely unscientific, analysis of function popularity, see this 2005 post from Mosha Pasumansky, “What are the popular MDX functions?” Although the post is a bit dated, I think it’s still valid. When I look at the top list of functions, I do indeed use many of those functions frequently (although NonEmptyCrossJoin is deprecated and I now use the NonEmpty function instead).

    This year, I finally encountered a situation where the Axis function proved useful. Basically, this function gives us a way to “see” what’s currently on rows or columns. The situation facing me was to come up with a way to create a calculation based on values found in two columns in a pivot table. More specifically, each column could be a different time period. So the calculation might be comparing Q1 and Q2 of a particular year, or it might be Q1 of one year and Q2 of another year. But we would not know in advance which two time periods would be involved. I built a calculation to count how many date members were on the columns axis and return a null if there were more than two, otherwise to do the math. I’ll provide the details for this in a separate post, but there are other scenarios in which this function is useful as Tomislav explains in this SQLBits session (recording freely accessible, slides and samples downloadable): Universal calculated measures in MDX queries.
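    As a rough sketch of the idea (the member and measure names here are illustrative, not from the actual solution):

    ```mdx
    -- Sketch only: return NULL when more than two date members are on columns;
    -- otherwise subtract the first column's value from the second's
    WITH MEMBER [Measures].[Period Change] AS
      IIF(
        Axis(0).Count > 2,
        NULL,
        ( Axis(0).Item(1).Item(0), [Measures].[Sales Amount] )
          - ( Axis(0).Item(0).Item(0), [Measures].[Sales Amount] )
      )
    SELECT
      { [Date].[Calendar Quarter].&[2007]&[1],
        [Date].[Calendar Quarter].&[2008]&[2] } ON COLUMNS,
      { [Measures].[Period Change] } ON ROWS
    FROM [Adventure Works]
    ```

    Because Axis(0) returns whatever set the query (or pivot table) placed on columns, the same calculation adapts to whichever two time periods happen to be there.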

    That encounter with Tomislav is just one example of many that highlights the value of attending the PASS Summit. It’s not just what you learn by attending the sessions, but also what you learn in casual conversations with experts. Opportunities abound for these conversations, whether at a Birds of a Feather table or in the hallway or at one of the many evening events. (Hint: Asking questions during SQLKaraoke is probably not a good time.) People who are at PASS – both speakers and non-speakers alike – are there because they’re passionate about what they do. If you plan to go and it’s your first time, the best advice I can give is Don’t Be Shy! Talk to someone, anyone, everyone! You won’t regret it.

    It’s only a few more short weeks until Erika and I head out to this year’s summit. I’ve got to finalize my demos for my preconference workshop on MDX, DAX, and DMX and for my half-day session with Denny Cherry, So How Does the BI Workload Impact the Database Engine?. I think all I need to do is show up for my panel session, Are You a Linchpin? 

    Meanwhile, I’m going to read Tomislav’s new book to see what other gems he has to share, and I’ll post a review when I finish the book. Stay tuned!

  • 24HOP: BI Workload Follow-Up

    Yesterday, Denny Cherry (blog|twitter) and I co-presented a 24HOP session for the Fall 2011 lineup, “So How Does the BI Workload Impact the Database Engine?” 24HOP stands for 24 Hours of PASS and is a semiannual roundup of speakers from the SQL Server community. Initially, this event consisted of 24 consecutive sessions, each lasting an hour, but later it became a two-day event with 12 consecutive sessions each day. The sessions are free to attend and feature many great topics covering the spectrum of SQL Server things to know. Even if you missed previous 24HOP events, you can always go back and view recordings of sessions that interest you at the 24HOP site for Spring 2011 and Fall 2010.

    And if you missed Denny and me yesterday, a recording will be available in a couple of weeks and I’ll update this post with a link. Our hour-long session for 24HOP was a sneak preview of our upcoming half-day session of the same name that we’ll be presenting at the PASS Summit in Seattle on Thursday, October 13, 2011 from 1:30 pm to 4:30 pm. In our half-day session, we’ll dig into the details and spend more time on database engine analysis, whereas in our 24HOP session, we focused on reviewing the architecture and highlighting the connection between BI components and the database engine.

    We were able to answer a few questions at the end, but one question in particular could not be answered in a sentence or two within the time allotted: How much RAM do I need to plan for Integration Services (SSIS)? Andy Leonard (blog|twitter) did manage a succinct response: All of it! I, on the other hand, am not known for being succinct, so deferred the question for this post.

    Andy is right that SSIS wants as much memory as you can give it, which can be problematic if you’re executing an SSIS package on the same box as SQL Server. On the other hand, there are benefits to executing the package on the same box as well, so there is no one-size-fits-all solution. And the solution for one data integration scenario might not be the right solution for another data integration scenario. A lot depends on what CPU and RAM resources a given server has and how much data is involved. In order to know how much horsepower you need, you’re going to have to do some benchmark testing with packages. Here are some good resources for SSIS if you’re concerned about memory:

    Is there a rule of thumb for deciding how much memory you’ll need for SSIS? Well, no less than 4 GB per CPU core is a good place to start. But if that’s not possible, you certainly want to have memory that’s at least two or three times the size of data that you expect to be processing at a given time. So if you’re processing 1 GB of data, you’ll want at least 2-3 GB of memory and, of course, more memory is even better!

  • Women in Technology - Answering the Call

    I read an article today on Women in Technology - A Call for Obsoletion by Claire Willett. I started to comment and realized it was turning into a whole post on its own! I, too, have been puzzled by the disappearance of women in the field. Coming up on 27 years in IT, I am definitely not the norm (since most leave between 10-20 years in), but I did introduce my daughter (and my son, too!) to computers at an early age and told them they could be anything they wanted to be. Interestingly, my daughter didn't show much interest, but got her degree in theoretical math from MIT, while my son showed a lot of interest, but then became a welder. Go figure!

    Claire pointed out several proposed explanations that others have come up with to explain why there are fewer women entering the field and cited statistics to show they're not staying in the field. The quandary that I have is where the line is between pushing kids in a direction they don't want to go and encouraging them to explore technology/science careers. Either way, I think this applies to any kids - not just mine - and not just girls!

    As for the discouragement factor that she brings up, I think it's a plausible, and insidious, explanation for the problem. Having "techie" parents and growing up in the heart of the space industry, I never experienced any discouragement as a young girl or teenager. But Claire's article reminded me of when I went back to school at 28 and attempted to take an advanced math class; I really had to battle with the counselor to get her to sign off on the course. I was the recipient of a Pell grant at the time, and couldn't just take it without authorization. I was told that I had been out of school too long to be successful. Furthermore, I wanted to take the self-paced class rather than the lecture class and was told that no one ever succeeds with that route. Well, I pushed back, explaining that I was in the software industry and could hold my own quite nicely in math. (I got an A. Can you imagine how badly I wanted to wave that in the face of that mis-guidance counselor!) How many other young women are getting told not to bother trying and believe it?

    Fortunately for me, that's the only real discouragement I encountered that I actively had to battle. I discuss a few others in my previous post, Maybe It's Just Me...A Perspective from One Woman in IT. And in that post, I also described my daughter's perspective as a representative of the incoming generation of women in technology.

    Although providing the girls in our sphere of influence exposure to the field of technology is a good step, we also need to be cognizant of other voices that they hear (friends, teachers, guidance counselors) that might be counteracting our best efforts. Helping them develop confidence in themselves and to believe that they can be whatever they want to be is also vitally important. Again, I feel this way about all kids, so I struggle a lot with singling out girls in particular. But if girls are especially vulnerable to the discouragement factor, then we need to be aware of that and take every opportunity to help them resist that discouragement, whether it's your daughter, niece, friend's daughter, or neighbor. We CAN make a difference one girl at a time! Along with Claire, I'd really like to see the obsolescence of WIT as well.

  • Ready for the Rally? SQLRally 2011, Here I Come!

    When I was a kid, living in New Jersey at the time, my dad was very involved in sports car racing and motorcycles. Our garage was never used to actually park a car that we rode around in. Instead, it was strewn with all kinds of half-built engines and body parts. Car body parts, that is. And much to our neighbors' chagrin, I'm sure, we often had cars in the backyard that my dad would use to snag some part that he needed for his current racing machine (TR5 at one point, later Formula V). Many weekends were spent traveling to the latest Sports Car Club of America (SCCA) race somewhere in New England (I don't remember ever hearing about NASCAR in those days) or we were off to see the professionals in the Grand Prix at Watkins Glen. Frankly, being a spectator at these events wasn't particularly fun as a kid, nor was it particularly memorable except for the time that I got to see Paul Newman at a race when he was just getting started in the sport. What always seemed much more fun to me were the rally events - I guess because there was some sort of puzzle involved or some type of navigation required that let me participate with my dad, even if I wasn't much help. Of all the things from those experiences that persist to this day in my life, I am very good at using maps! The rest didn't rub off so much on me - at least not the car-related stuff. I did inherit half of my computer-savvy genes from him!

    So for me, the word rally has very positive connotations and fond memories of being with my dad. And I get to add to those warm and fuzzies by attending the very first PASS SQLRally 2011 in Orlando this May! I feel very honored to be part of the speaker line-up because the SQL Server community decided who the speakers would be, and there was quite an exceptional field of contenders. The session I'll be presenting is Data Visualizations in SSRS 2008 R2. While using data visualizations in reports can be an effective way to communicate information, there are good and bad ways to do that, which I'll be sharing with attendees. My focus will be on Reporting Services 2008 R2, but even attendees who haven't migrated to the latest version will learn some useful tips in this session.

    Now I won't just be presenting my session and hiding out in my hotel room the rest of the time. I'll be checking out other speakers' sessions, hanging out with people that I usually only get to see at PASS, and meeting new people, too! The "people" part of conferences is just as much fun and invaluable as the learning opportunity.

    I'm going to rev up my SQL Rally experience by attending a pre-conference session by Grant Fritchey (blog | twitter). I'm a business intelligence kinda gal so normally I throw data into a cube to get really good performance, but sometimes that's not an option. And I didn't get into business intelligence by starting out as a DBA, so my relational performance tuning skills for SQL Server are pretty basic. I'm looking forward to adding some new skills to my repertoire.

    Although I'd like to be able to sit in on everyone's session - I'm also a learning kinda gal - not everything is applicable to what I do, and some things I already know how to do. But there are still a few sessions that I really want to see:

    I'm really looking forward to this event. If you're going to be there too, please be sure to look for me and say hi!

  • Multi-State Maps in Reporting Services

    In SQL Server 2008 R2 Reporting Services, you can now create maps for reports using the built-in map gallery of the United States or individual states, ESRI shapefiles, or a spatial query. In a previous post, I explained how to obtain an ESRI shapefile for another country and convert it into spatial data so that you can have other maps available in the map gallery. In this post, I'll show you how to create one map from multiple ESRI shapefiles. Whether you add it to the map gallery or not is up to you!

    Obtaining State Shapefiles

    To start, the process is similar to the one I described in my previous post on working with spatial data. This time my goal is to create a map of two states - Nevada and California. The best place to download free shapefiles for US states is the U.S. Census Bureau, where I used the link to download by state. I found the Nevada directory and then used the topmost directory, 32, which contained many files to download, and chose the state shapefile there. Similarly, in the California directory, I used the topmost directory to find and download the corresponding file. Then I extracted each zip file to its own directory on my computer. They each contain a variety of files, including SHP and DBF files that I could use for a map using the ESRI shapefile data source option if I wanted to create a separate map for each state. But I want to combine these maps, so my mission is not yet complete.

    Converting Shapefiles to SQL Spatial Data

    My next step is to use a tool called Shape2SQL to extract the spatial data from the shapefile into a SQL Server table. Pinal Dave (blog|twitter) has a tutorial explaining how to do that. I had the same experience that I described in my last post - I had to clear the Create Spatial Index checkbox to get the table to load properly for the first state. When I loaded the second state, I had to clear the Replace Existing Table checkbox. Now I have both states in a table called State, as shown below, with a geom column having the SQL Geometry data type.
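    Before moving on to the report, it's worth running a quick query against the new table to confirm that both states loaded. Something like this works, assuming the shapefile import produced the NAME00 column as mine did:

    ```sql
    -- Sanity check: one row per state, with the geometry type of each shape
    SELECT NAME00,
           geom.STGeometryType() AS ShapeType,
           geom.STIsValid()      AS IsValid
    FROM dbo.State;
    ```

    You should see two rows, one for each state, and the spatial results tab in Management Studio will even render the shapes for you.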

    Creating the Multi-State Map

    Now that I have my spatial data in a table, I'm ready to create the map. In Business Intelligence Development Studio, I added the map to my report, which launched the Map Wizard, and then I used the following steps:

    1. On the Choose a source of spatial data page of the wizard, select SQL Server spatial query, and click Next.
    2. On the Choose a dataset with SQL Server spatial data page, select Add a new dataset with SQL Server spatial data.
    3. On the Choose a connection to a SQL Server spatial data source page, select New.
    4. In the Data Source Properties dialog box, on the General page, add a connection string like this (changing your server name if necessary):
      Data Source=(local);Initial Catalog=SpatialData
    5. Click OK and then click Next.
    6. On the Design a query page, add a query for the state shapes, like this:
      SELECT NAME00, geom FROM State
    7. Click Next. The map wizard reads the spatial data and renders it for you on the Choose spatial data and map view options page, as shown below.

    Of course, you can use this technique with any shapefiles that you can locate. It's not limited to the United States. For example, if you want to create a map of multiple European countries or a combination of states in India, you could use a similar process by downloading shapefiles from either the Global Administrative Areas spatial database or the Natural Earth database.

  • The Case of the Extra Page: Rendering Reporting Services as PDF

    I had to troubleshoot a problem with a mysterious extra page appearing in a PDF this week. My first thought was that it was likely caused by one of the most common problems that people encounter when developing reports that eventually get rendered as PDF: blank pages inserted into the PDF document. The cause of the blank pages is usually related to sizing. You can learn more at Understanding Pagination in Reporting Services in Books Online.

    When designing a report, you have to be really careful with the layout of items in the body. As you move items around, the body will expand to accommodate the space you're using, and although you might eventually tighten everything back up again, the body doesn't automatically collapse. One of my favorite things to do in Reporting Services 2005 - which I dubbed the "vacu-pack" method - was to just erase the size property of the Body and let it auto-calculate the new size, squeezing out all the extra space. Alas, that method no longer works beginning with Reporting Services 2008. Even when you make sure the body size is as small as possible (with no unnecessary extra space along the top, bottom, left, or right side of the body), it's important to calculate the body size plus header plus footer plus the margins and ensure that the calculated height and width do not exceed the report's height and width (shown as the page in the illustration above). This won't matter if users always render reports online, but they'll get extra pages in a PDF document if the report's height and width are smaller than the calculated space.

    Beginning the Investigation

    In the situation that I was troubleshooting, I checked the properties:

    Item         Property            Value
    Body         Height              6.25in
    Body         Width               10.5in
    Page Header  Height              1in
    Page Footer  Height              0.25in
    Report       Left Margin         0.1in
    Report       Right Margin        0.1in
    Report       Top Margin          0.05in
    Report       Bottom Margin       0.05in
    Report       Page Size - Height  8.5in
    Report       Page Size - Width   11in

    So I calculated the total width using Body Width + Left Margin + Right Margin and came up with a value of 10.7 inches. And then I calculated the total height using Body Height + Page Header Height + Page Footer Height + Top Margin + Bottom Margin and got 7.6 inches. Well, page sizing couldn't be the reason for the extra page in my report because 10.7 inches is smaller than the report's width of 11 inches and 7.6 inches is smaller than the report's height of 8.5 inches. I had to look elsewhere to find the culprit.

    Conducting the Third Degree

    My next thought was to focus on the rendering size of the items in the report. I've adapted my problem to use the Adventure Works database. At the top of the report are two charts, and then below each chart is a rectangle that contains a table.

    In the real-life scenario, there were some graphics present as a background for the tables. They fit within the rectangles, which were about 3 inches high, so the visual space of the rectangles matched the visual space of the charts - also about 3 inches high. But there was also a huge amount of white space at the bottom of the page, and as I mentioned at the beginning of this post, a second page which was blank except for the footer that appeared at the bottom. Placing a textbox beneath the rectangles to see whether it would appear on the first page resulted in the textbox appearing on the second page instead. For some reason, the rectangles wanted a buffer zone beneath them. What's going on?

    Taking the Suspect into Custody

    My next step was to see what was really going on with the rectangle. The graphic appeared to be correctly sized, but the behavior in the report indicated the rectangle was growing. So I added a border to the rectangle to see what it was doing.

    When I added borders, I could see that the size of each rectangle was growing to accommodate the table it contains. The rectangle on the right is slightly larger than the one on the left because the table on the right contains an extra row. The rectangle is trying to preserve the whitespace that appears in the layout, as shown below.

    Closing the Case

    Now that I knew what the problem was, what could I do about it? Because of the graphic in the rectangle (not shown), I couldn't eliminate the use of the rectangles and just show the tables. But fortunately, there is a report property that comes to the rescue: ConsumeContainerWhitespace (accessible only in the Properties window). I set the value of this property to True. Problem solved. Now the rectangles remain fixed at the configured size and don't grow vertically to preserve the whitespace. Case closed.
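    If you'd rather set the property directly in the report definition, it corresponds to a single element under Report in the RDL. Here's a fragment (not a complete report definition) showing where it lands:

    ```xml
    <Report xmlns="http://schemas.microsoft.com/sqlserver/reporting/2008/01/reportdefinition">
      <!-- ... data sources, body, and page definitions ... -->
      <ConsumeContainerWhitespace>true</ConsumeContainerWhitespace>
    </Report>
    ```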

  • Applying Interactive Sorting to Multiple Columns in Reporting Services

    A nice feature that appeared first in SQL Server 2005 (not SQL Server 2008 - I mis-remembered!) is the ability to allow the user to click a column header to sort that column. It defaults to an ascending sort first, but you can click the column again to switch to a descending sort. You can learn more about interactive sorts in general in the Adding Interactive Sort to a Data Region topic in Books Online. Not mentioned in the article is how to apply interactive sorting to multiple columns, hence the reason for this post!

    Let’s say that I have a simple table like this:

    To enable interactive sorting, I open the Text Box properties for each of the column headers – the ones in the top row. Here’s an example of how I set up basic interactive sorting:

    Now when I preview the report, I see icons appear in each text box on the header row to indicate that interactive sorting is enabled. The initial sort order that displays when you preview the report depends on how you design the report. In this case, the report sorts by Sales Territory Group first, and then by Calendar Year.

    Interactive sorting overrides the report design. So let’s say that I want to sort first by Calendar Year, and then by Sales Territory Group. To do this, I click the arrow to the right of Calendar Year, and then, while pressing the Shift key, I click the arrow to the right of Sales Territory Group twice (once for ascending order and then a second time for descending order). Now my report looks like this:

    This technique only seems to work when you have a minimum of three columns configured with interactive sorting. If I remove the property from one of the columns in the above example, and try to use the interactive sorting on the remaining two columns, I can sort only the first column. The sort on the second column gets ignored. I don’t know if that’s by design or a bug, but I do know that’s what I’m experiencing when I try it out!
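    For the curious, the settings in the Text Box properties dialog translate into a UserSort element inside the header's Textbox in the RDL, along these lines (the Textbox name and field here are from my example and purely illustrative):

    ```xml
    <Textbox Name="CalendarYearHeader">
      <!-- ... paragraphs, style, and other properties ... -->
      <UserSort>
        <SortExpression>=Fields!CalendarYear.Value</SortExpression>
      </UserSort>
    </Textbox>
    ```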

  • Multidimensional Thinking–24 Hours of Pass: Celebrating Women in Technology

    It’s Day 1 of #24HOP and it’s been great to participate in this event with so many women from all over the world in one long training-fest. The SQL community has been abuzz on Twitter with running commentary which is fun to watch while listening to the current speaker. If you missed the fun today because you’re busy with all that work you’ve got to do – don’t despair. All sessions are recorded and will be available soon. Keep an eye on the 24 Hours of PASS page for details.

    And the fun’s not over today. Rather than run 24 hours consecutively, #24HOP is now broken down into 12 hours over two days, so check out the schedule to see if there’s a session that interests you and fits your schedule. I’m pleased to announce that my business colleague Erika Bakse (Blog | Twitter) will be presenting on Day 2 – her debut presentation for a PASS event. (And I’m also pleased to say she’s my daughter!)

    Multidimensional Thinking: The Presentation

    My contribution to this lineup of terrific speakers was Multidimensional Thinking. Here’s the abstract:

    “Whether you’re developing Analysis Services cubes or creating PowerPivot workbooks, you need to get into a multidimensional frame of mind to produce a model that best enables users to answer their business questions on their own. Many database professionals struggle initially with multidimensional models because the data modeling process is much different than the one they use to produce traditional, third normal form databases. In this session, I’ll introduce you to the terminology of multidimensional modeling and step through the process of translating business requirements into a viable model.”

    If you watched the presentation and want a copy of the slides, you can download a copy here. And you’re welcome to download the slides even if you didn’t watch the presentation, but they’ll make more sense if you did!

    Kimball All the Way

    There’s only so much I can cover in the time allotted, but I hope that I succeeded in my attempt to build a foundation that prepares you for starting out in business intelligence. One of my favorite resources that will get into much more detail about all kinds of scenarios (well beyond the basics!) is The Data Warehouse Toolkit (Second Edition) by Ralph Kimball. Anything from Kimball or the Kimball Group is worth reading.

    Kimball material might take reading and re-reading a few times before it makes sense. From my own experience, I found that I actually had to just build my first data warehouse using dimensional modeling on faith that I was going the right direction because it just didn’t click with me initially. I’ve had years of practice since then and I can say it does get easier with practice. The most important thing, in my opinion, is that you simply must prototype a lot and solicit user feedback, because ultimately the model needs to make sense to them. They will definitely make sure you get it right!

    Schema Generation

    One question came up after the presentation about whether we use SQL Server Management Studio or Business Intelligence Development Studio (BIDS) to build the tables for the dimensional model. My answer? It really doesn’t matter how you create the tables. Use whatever method you’re comfortable with. But it just so happens that it IS possible to set up your design in BIDS as part of an Analysis Services project and to have BIDS generate the relational schema for you. I did a Webcast last year called Building a Data Mart with Integration Services that demonstrated how to do this. Yes, the subject was Integration Services, but as part of that presentation, I showed how to leverage Analysis Services to build the tables, and then I showed how to use Integration Services to load those tables. I blogged about this presentation in September 2010 and included downloads of the project that I used. In the blog post, I explained that I missed a step in the demonstration. Oops.

    Just as an FYI, there were two more Webcasts to finish the story begun with the data – Accelerating Answers with Analysis Services and Delivering Information with Reporting Services.

    If you want to just cut to the chase and learn how to use Analysis Services to build the tables, you can see the Using the Schema Generation Wizard topic in Books Online.

  • Books are Dead! Long Live the Books!

    We live in interesting times with regard to the availability of technical material. We have lots of free written material online in the form of vendor documentation, forums, blogs, and Twitter. And we have written material that we can buy in the form of books, magazines, and training materials. Online videos and training – some free and some not free – are also an option. All of these formats are useful for one need or another.

    As an author, I pay particular attention to the demand for books, and for now I see no reason to stop authoring books. I assure you that I don’t get rich from the effort, and fortunately that is not my motivation. As someone who likes to refer to books frequently, I am still a big believer in books and have evidence from book sales that there are others like me. If I can do my part to help others learn about the technologies I work with, I will continue to produce content in a variety of formats, including books. (You can view a list of all of my books on the Publications page of my site and my online training videos at Pluralsight.)

    As a consumer of technical information, I prefer books because a book typically can get into a topic much more deeply than a blog post, and can provide more context than vendor documentation. It comes with a table of contents and a (hopefully accurate) index that helps me zero in on a topic of interest, and of course I can use the Search feature in digital form. Some people suggest that technology books are outdated as soon as they get published. I guess it depends on where you are with technology. Not everyone is able to upgrade to the latest and greatest version at release. I do assume, however, that the SQL Server 7.0 titles in my library have little value for me now, but I’m certain that the minute I discard the book, I’m going to want it for some reason! Meanwhile, as electronic books overtake physical books in sales, my husband is grateful that I can continue to build my collection digitally rather than physically as the books have a way of taking over significant square footage in our house!

    Blog posts, on the other hand, are useful for describing the scenarios that come up in real-life implementations that wouldn’t fit neatly into a book. For as many years as I have been working with the Microsoft BI stack, I still run into new problems that require creative thinking. Likewise, people who work with BI and other technologies that I use share what they learn through their blogs. Internet search engines help us find information in blogs that simply isn’t available anywhere else. Another great thing about blogs is the connection to community and the dialog that can ensue between people with common interests.

    With the trend towards electronic formats for books, I imagine that we’ll see books continue to adapt to incorporate different forms of media and better ways to keep the information current. At the moment, I wish I had a better way to help readers with my last two Reporting Services books. In the case of the Microsoft® SQL Server™ 2005 Reporting Services Step by Step book, I have heard many cases of readers having problems with the sample database that shipped on CD – either the database was missing or it was corrupt. So I’ve provided a copy of the database for download on my site.

    Then for the Microsoft® SQL Server™ 2008 Reporting Services Step by Step book, we decided to avoid the database problem by using the AdventureWorks2008 samples that Microsoft published on Codeplex (although code samples are still available on CD). We had this silly idea that the URL for the download would remain constant, but it seems that expectation was ill-founded. The sample database can currently still be found on Codeplex, but I have no idea how long that location will remain valid.

    My latest books (#9 and #10 which are milestones I never anticipated), Building Integrated Business Intelligence Solutions with SQL Server 2008 R2 and Office 2010 (McGraw Hill, 2011) and Business Intelligence in Microsoft SharePoint 2010 (Microsoft Press, 2011), will not ship with a CD, but will provide all code samples for download at a site maintained by the respective publishers. I expect that the URLs for the downloads for the book will remain valid, but there are lots of references to other sites that can change or disappear over time. Does that mean authors shouldn’t make reference to such sites? Personally, I think the benefits to be gained from including links are greater than the risks of the links becoming invalid at some point.

    Do you think the time for technology books has come to an end? Is the delivery of books in electronic format enough to keep them alive? If technological barriers were no object, what would make a book more valuable to you than other formats through which you can obtain information?
