THE SQL Server Blog Spot on the Web


Stacia Misner

  • Managing Expectations

    If you're a business intelligence practitioner like me, you might have certain expectations from a book entitled Business Intelligence Success Factors: Tools for Aligning your Business in the Global Economy. You might think it contains lots of practical examples for implementing BI projects. Or if you're someone who uses (or wants to use) BI in day-to-day operations, you might think it can help you to find and fix the gaps in your current usage. This book meets neither of those expectations.

    Digression alert…Once upon a time, I owned a copy of Olivia Parr Rud's book Data Mining Cookbook. I say "once upon a time" because many years ago I had a collection of data mining books that I would take to my Analysis Services classes for students to peruse on the day that we tackled the data mining component. I kept the books in a bag in the trunk of my car, but one day I set the bag in the garage to make space in the trunk and then forgot about the bag. My husband assumed that a bag of books in the garage must mean that my intent was to dispose of the books, so he set the bag out for the garbage collector. It took me a long time to overcome my shock and horror that someone would actually throw books away. (Confession: I hoard books.)

    Having read Data Mining Cookbook, I expected Rud's second book to provide the same type of "how-to" explanation that I found in her first book. It didn't take me long to realize that my expectations were all wrong.

    So what is the book about? The first part of the book concentrates on the state of business in general. Chapter 1, "The Evolving Business Landscape," cites plenty of problems familiar to those in management positions and to BI practitioners. One point with which I agree wholeheartedly is the acknowledgment that IT has, for the most part, kept up with advances in technology (such as BI), but business people are often unable to keep the same pace, which ultimately leads to failure of the technology.

    Overlooking human issues related to the technology is a key contributor to this failure, Rud explains. A survey of the possibilities for addressing this failure is the real premise of the book, rather than a focus on BI as a technology or process. Chapter 2, "Models from Science and Nature," is an interesting blend of quantum theory, the hive mind, chaos theory, and universality (among others) and a hypothesis that these models might be applied successfully to the business world. But how would you do that?

    The goal of Part 2 is to answer that question by enumerating five key competencies that a business must adopt to be competitive and ostensibly to use its information effectively: effective communication, collaboration, innovation, adaptability, and leadership. Each of the five chapters in Part 2 is devoted to one competency. While you won't find anything that directly relates to BI here, you will find some interesting ideas and brief case studies that you might find helpful if you're responsible for managing groups of people.

    In Part 3, the book transitions to "Models and Practices." Chapter 8, "Systems Thinking," is where the book gets technical, but not about BI in the pure sense. Instead, the chapter leads you through the application of systems thinking to business analytics, with lots of diagrams to illustrate recurring patterns commonly found in time-series analysis. The final pages of the chapter explain how this approach ties into BI in general. Chapter 9 introduces "Holacracy," an innovative way to manage a business that among other non-traditional practices includes "integrative decision-making."

    Part 4 concludes the book with a single chapter, "Possibilities," which imagines a world in which the practice of Holacracy extends beyond business to community, geographical regions, and beyond. The chapter also includes highlights of a few entrepreneurs that exemplify thinking out of the box and fostering positive change in the world. Whereas the book began with an assessment of the world as we know it, it concludes with an inspired vision of the world as it might be.

    In the end, I felt that this book was not really about business intelligence after all. But that was just a matter of my personal expectations. If one considers business intelligence as a way of doing business by interacting intelligently with people, then the book provides ample food for thought and describes interesting aspects of current research in fields that relate directly or indirectly to business management. From that perspective, I think it would make a good addition to the curriculum of an MBA program.

    Olivia Parr Rud, Business Intelligence Success Factors: Tools for Aligning Your Business in the Global Economy. Hoboken, NJ: John Wiley & Sons, 2009. ISBN: 0470392401.

  • A Whole Lotta Talkin' Goin' On!

    Winding down on the authoring process of my latest book, Building Integrated Business Intelligence Solutions with SQL Server 2008 R2 & Office 2010, I decided to look at my calendar to see what I should work on next, and saw that I have a lot of presentations to prepare! Here's a list of events planned through November (with placeholders for details that aren't yet publicly available - to be updated soon). Maybe you can join me for one or more?

    Building a Data Mart with Integration Services (Free Webcast)

    September 8, 2010 @ 3:00 PM - 4:00 PM EDT

    Understanding current trends in business conditions often requires asking a lot of questions, but allowing users to run query after query against your operations databases causes more problems than it solves. By creating a data mart, you can maintain a separate data store that can withstand the heavy query load that business analysis demands. In this webcast, you'll learn why you need a dimensional data model for your data mart, how to quickly build a dimensional model, and how to use control flow and data flow components in Integration Services to build simple packages that populate the dimensional model with data.

    Intro to MDX - 24 Hours of Pass (Free Webcast)

    September 15, 2010 @ 2:00 PM - 3:00 PM EDT

    "Everyone" says MDX is hard - so what's the point of learning it? Quite simply, some answers are easier to get from your data by using multidimensional queries instead of relational queries. Sure, it's possible to write pages of T-SQL code to do things like period-over-period analysis, cumulative results such as month-to-date and year-to-date comparisons, or trending calculations for Key Performance Indicators. But in this session, you'll learn how much more easily you can perform these calculations by using MDX (after you put your data into a cube, of course!), not to mention how much faster MDX queries execute compared to their relational counterparts.
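    To give you a taste of the kind of calculation the session covers, here's a minimal sketch of a year-to-date running total in MDX. This assumes the Adventure Works sample cube; your measure and hierarchy names will differ.

    ```mdx
    -- Year-to-date accumulation: one calculated member does the work
    -- that would otherwise take a self-join or window-style logic in T-SQL.
    WITH MEMBER [Measures].[Sales Amount YTD] AS
        SUM(
            -- YTD() returns all periods from the start of the year
            -- through the current member on the rows axis
            YTD([Date].[Calendar].CurrentMember),
            [Measures].[Internet Sales Amount]
        )
    SELECT
        { [Measures].[Internet Sales Amount],
          [Measures].[Sales Amount YTD] } ON COLUMNS,
        [Date].[Calendar].[Month].Members ON ROWS
    FROM [Adventure Works];
    ```

    The cube engine resolves the accumulation for every month at once, which is a big part of why these queries tend to outperform their relational counterparts.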

    To help you get started with MDX, I'll explain how to think about multidimensional space, how to create basic queries, which functions to learn first, and how to construct simple, but useful, calculations. In my PASS Summit 2010 session, "Demystifying MDX in Reporting Services," I'll then show you how to apply these concepts to queries in Reporting Services and how to adapt queries that return multidimensional results to the two-dimensional dataset structure required by Reporting Services.

    Strategies to Fast Track Your BI Implementation (Free Webcast)

    September 23, 2010 @ 12:00 PM - 1:00 PM EDT

    One of the secrets to a successful BI implementation is quick delivery of a solution that provides valuable information to your business users. Quick delivery does not require you to take shortcuts to reduce the length of the project, but instead demands that you implement strategies that enable you to leverage technology, people, and process effectively. Before you can begin implementing these strategies, you must have a clear understanding of the opportunity areas for information delivery and assess both the level of complexity and value of information for each opportunity. Rather than build a comprehensive solution, you can manage each opportunity as a separate project while keeping the big picture in mind. Then by following a simple set of strategies while remaining mindful of critical success factors for each project, you can progressively deliver an integrated and flexible solution that meets the needs of business users sooner rather than later.

    Attend this webinar and learn:

    • How to quickly deliver a BI solution without taking shortcuts that sacrifice vital components
    • Critical success factors for managing a new BI implementation
    • How to deliver a scalable BI solution in manageable stages

    Advanced SQL Server Reporting Services (Online training - details to come)

    September 28-30, 2010 @ 12:00 - 1:30 PM and 2:00 - 3:30 PM EDT

    Accelerating Answers with Analysis Services (Free Webcast - details to come)

    October 12, 2010 @ 3:00 PM - 4:00 PM EDT

    A relational data mart is a great way to improve query performance for business analysis, but having that data available in an Analysis Services cube can provide an even better experience. A cube can not only deliver queries faster, but can simplify the queries needed to compare one time period to another or to work with cumulative values, like month-to-date and quarter-to-date. Join us for this webcast to learn how a cube can deliver high-performance queries, how to build a basic cube, and how to share the cube with your user community.

    BI Immersion Event (Event details and link to registration w/early bird rate for registrations by September 17)

    October 25 - 29, 2010 M-Th 9:00 AM - 5:30 PM, Fri 9:00 - 3:30 PM in Bellevue, WA

    Even if you have little to no knowledge of BI, this course will provide you with immediately usable, production-level knowledge of BI. The course has been designed to take you from installation to configuration to application and integration - for all facets of the BI stack (SSIS, SSAS, SSRS, PowerPivot, and SharePoint) - and it is meant for anyone who wants to get involved in BI. Our first thought was to create a class for DBAs who want to better understand their environment, but the course content is just as valuable whether you're new to BI or ready to expand your skills. This class might be the perfect way to advance your career and dig deeper into topics that don't typically get covered in most BI classes.

    Maybe you accomplish quite a bit with Excel - but how do you get the data there? Quite a few folks use cut-and-paste, or import from a CSV - but of course there's a much better, and much more scalable and robust way using SSIS! Relational data warehousing - with lots of aggregate tables - helps you keep your data organized for Excel or any other type of reporting or analysis tool that you might want to use, but some types of business questions are answered more easily when you move your data from the data warehouse into SSAS cubes. Whether you store data in tables or cubes, you have a variety of options in the BI stack to access and analyze the data, but which is best? Learn how to choose the right tool for the task and how to manage a secure and scalable environment for reporting and analysis.

    The course starts out in the 100-200 range on Day 1 but moves into the 200-300 range for the remainder of the week. For each technology in the BI stack, you'll learn basic usage and design principles, followed by more coverage of configuration, performance tuning, and other management tasks than you'll find in most BI end-to-end classes. The course will not only bridge the gap in your BI knowledge, but will continue on to take you a lot further. If you've always wanted to immerse yourself into BI, figure out the end-to-end story, and go behind-the-scenes to learn how to install, configure, and deploy BI solutions, this is the place to be!

    PASS Summit 2010 (Conference details)

    November 8 - 11, 2010, Seattle, WA

    Real World Analysis Services Stored Procedures

    Analysis Services stored procedures (ASSP) are nothing like the stored procedures you create for your SQL Server databases. Not much is written about this feature, so you might wonder whether you need to bother with them at all. Put simply, with a little C# (or VB) code and MDX, you can simplify your MDX queries, combine cube data with relational data, or personalize calculations for specific users - just to name a few possibilities. Come to this session to learn some practical ways you can take advantage of Analysis Services stored procedures.

    Demystifying MDX in Reporting Services

    As an enterprise reporting solution, Reporting Services can consume many different types of data sources, including Analysis Services cubes and PowerPivot workbooks. You can build simple reports using a graphical design interface in Reporting Services without learning the Multidimensional Expressions (MDX) language, but to fully leverage multidimensional data in your reports, you should develop a solid understanding of this language and how to structure your query to facilitate the report design process. In this session, we'll review how the graphic designer constructs an MDX query, explore ways that you might want to change the generated query, and demonstrate how to work with parameterized MDX queries. We'll also explore how to use T-SQL stored procedures to call MDX queries.

    Recruiting, Retaining & Advancing Women in Technology: Why does it matter? WIT Luncheon and Panel Discussion at PASS

    November 10, 2010 - 11:30 am - 1:30 pm

    Increasing the role of women in technology has a direct impact on the women working in hi-tech, but the effects can go far beyond that. How do female tech workers influence innovation and product development? How do men benefit from having more women working in technology? Can the presence of women in tech affect a company's bottom line? What does it mean for women and their families when they have access to hi-tech jobs?

    Delivering Information with Reporting Services (Free Webcast - details to come)

    November 17, 2010 @ 3:00 PM - 4:00 PM EDT

    Whether your data source is an operations database, a data mart, or an Analysis Services cube, you can use Reporting Services to deliver information on demand or on a scheduled basis in a variety of formats. With Reporting Services in place, report developers can quickly produce static or interactive reports, system administrators can effectively manage security and performance, and decision makers can easily find, use, and share information. If you've never used Reporting Services, join us for this webcast to learn about the report controls available to satisfy a variety of reporting requirements, the reporting features that create an interactive user experience, and the steps to publish, secure, and manage content on the report server.

  • Maybe It's Just Me... A Perspective from One Woman in IT

    I could be a little late to the party, but I caught wind of a blog post, "Booth Babes Don't Wear Glasses," circulating in the Twitterverse that prompted me to investigate, and I was a bit startled by some of the statements made. I was trying to wrap up my latest chapter on my current book project (the primary reason for my silence this month - that and a 2-week vacation!), so I didn't spend a lot of time analyzing the blog post other than to tweet my initial reaction that I found it hard to believe. Not the booth babe part. I've seen those gals before, but not so much at PASS, or SQL Connections, or TechEd. Or at least I haven't noticed them if they're there. Maybe it's just me...

    No, the statement that caught my attention first was "...when I approach male attendees, they seem a bit shocked that I am talking technology with them." And I thought, really? In this day and age? I mean, it's 2010, right? If I am reading this post correctly, this quote comes from something that Denise Dubie of NetworkWorld wrote, but I couldn't locate the article online and the blog post didn't provide a link.

    Conceding that I read the post in a hurry yesterday, I decided that perhaps I'd been too hasty and read something out of context. So I went back and re-read the post today. The author, Lori MacVittie, appears to be quite a prolific blogger at DevCentral. I don't know anything about her, but her blog archives are impressive. Clearly, she's a woman with several years of experience, not someone who just stepped out of college and entered the work world. I'd expect that across those years, she'd have a variety of experiences with all types of men - some who were favorably disposed towards working with women, and some who weren't - but her post implies a preponderance of those who weren't. She doesn't really provide any evidence, anecdotal or otherwise, but makes the following observations:

    • Offering an explanation for the failure of women to enter science, technology, engineering, or math (STEM) fields: "Young women, according to research [which she fails to cite], aren't thinking about the difficulties that exist in a traditionally male-dominated field...[it has] everything to do with men and attitudes."
    • Why do these men have these attitudes? She explains that IT is still young compared to other STEM fields, it's dominated by men, and women in general haven't been out in the work force all that long (less than 40 years, she says). "It's no surprise that men in general, then, haven't had a whole lot of time yet to adjust."
    • "It would likely be difficult to find a woman in technology that hasn't had this same experience. The shock, the surprise, the change in tone and demeanor that comes from male counterparts upon realizing that the woman they're talking to knows what she's talking about."
    • "The debate over 'booth babes' isn't really about booth babes... It's about the reaction of men to a technically competent woman, no matter what she's wearing...It's about the assumption that no woman is technically competent -- at a trade show or on a conference call."

    Wow. Those are some heavy-duty attitudes she's talking about. And it's quite possible those are attitudes that she has to contend with. I just haven't seen anything like it in my career, at least not for a very long time. And even when I did, I would consider the experience an outlier. No, actually - the time I'm thinking about had nothing to do with the man in question going into shock upon learning that I knew what I was talking about. In my situation, he was blatantly discriminating by holding me back from a job he knew I could do in favor of a man, and he kindly asked me to do the work required of that position until he found a man. In retrospect, it really did work out for the best. I was the one in shock that I actually heard him say - to my face - that he wanted to put a man in that position. But I digress...

    I started my IT career in 1984, installing computer systems for tractor dealerships and training the personnel how to do everything with that new computer -- running the parts department, managing the service department, selling farm equipment, doing payroll, and keeping the books. As a very young woman in a very male-dominated industry - agriculture - it was quite an experience. I had one good ol' boy in Kansas refuse to hand over the $250,000 check for his newly installed system because he didn't think it was right that my company send someone who was still wet behind the ears (and female, although he didn't say it out loud). I just smiled and reminded him that he had a working system and trained personnel, so I thought he got exactly what he paid for. He smiled back and handed me the check.

    Other than dealing with the very male client base, I didn't realize at the time what a novel position I was in from a sociohistoric point of view. The software company that employed me had a lot of women. More than 50% women as I recall. The men there didn't have an attitude that I recollect. But maybe it's just me...and a bad memory. Anyway, it just seemed normal to have women in the workplace. It wasn't until years later when I took a women's history course that I realized that 1984 was the year that women entered the workplace en masse, in professions other than teachers and nurses and secretaries. I was so busy being in the middle of it that I didn't notice. And had nothing to compare it to.

    If I had no reason to appreciate the novelty of what I was doing in 1984, because I had nothing to compare with the experience, why should men who are younger than me require a period to adjust? That's the implication in the "Booth Babes" post. We simply need to give men more time. And let me do that math - 2010 - 1984 = 26 years. If any man my age or older can't adjust in 26 years, well, I'm sorry. I think he's got other issues, since the world is moving much faster and requiring many more adjustments than having women around. Maybe it's just me...I might need to develop a more sympathetic outlook towards men who are having such trouble coping with technologically-savvy women. If I can find one.

    I guess I'm luckier than women like Lori and Denise who are encountering men with bad attitudes. I am surrounded by a community that is admittedly male-dominated. I no longer work for that software company surrounded by women, and as I get older, I notice there are fewer and fewer women in IT. The facts bear this out if one considers just computer science degrees - only 18% were earned by women in 2008 as compared to 37% in 1985 (National Center for Women & Information Technology).

    My daughter graduated from MIT in 2008. Not with a computer science degree, but one in theoretical math. I thought young women weren't supposed to do that sort of thing? Maybe it's just me...and my genes. :) She decided that the proper thing to do with a theoretical math degree would be to continue on to grad school, but ultimately decided against that and became a business intelligence consultant like me. (See, I'm doing my part for WIT!)

    I asked for her perspective on this notion that men have negative attitudes about women in IT or STEM for that matter. She said that in school she had heard that women in math could have problems with old professors, but she never personally encountered it and never felt disadvantaged, although being female clearly put her in the minority. No, the bigger problem was with other male students, but she chalked that up to their introverted natures. Hmm, could it be the shock and awe that Lori and Denise described was not a negative attitude, but rather the social awkwardness that sometimes characterizes guys who work in IT? Just a thought...

    As for her experiences in the work world, my daughter says her biggest obstacle to acceptance has not been gender, but age. She's still young and has to prove herself. And that's to be expected.

    Maybe it's just me... Perhaps my worldview of the attitudes towards women in IT is too narrowly focused. Over a 26-year career, I've worked in a lot of industries with a lot of different technologies, but I certainly have no idea what's going on in the IT world at large. My focus has been limited to the Microsoft SQL Server community for the past 10 years. From the way Lori and Denise make it sound out there, I'm rather glad I'm having such a sheltered experience. The SQL Server community has been quite welcoming to women for as long as I've been a member. In fact, there is an active effort to get more women involved through a Women in Technology (WIT) special interest group. At every conference I attend, there is always some event devoted to WIT, and this year I've been invited to participate in the panel discussion at PASS Summit 2010.

    Maybe it's just me... but I think the guys in the SQL Server community are awesome and certainly don't deserve to be lumped into the same group with others who might have a problem with women in IT. Looking forward to seeing all of you in Seattle in November!

  • Analysis Services Data Connections and SharePoint - One Solution to "Connection Cannot Be Made"

    Recently I came across a problem with an Analysis Services data connection when I was trying to create a status indicator in SharePoint 2010 (formerly known as a key performance indicator in SharePoint 2007). I had successfully created the data connection in Excel, verified that I could create a PivotTable with it, and then saved the connection to Excel Services as an Office Data Connection (ODC) file. However, when I tried to use the ODC to create a status indicator, the following error displayed:

    "A connection cannot be made. Ensure that the server is running."

    Well, I knew that the server was running, because I had just created a PivotTable with the same ODC. Then I tried searching the Web, but couldn't find anything useful (which prompted me to write this post!). In this scenario, I am running all servers on the same machine for demonstration purposes, so I knew it wasn't a double-hop problem. I had read that enabling SQL Browser might help, or that using the IP address rather than the server name could be a solution, but neither of those recommendations worked.

    I tried different connections that I had created over time and several had the same problem - connection cannot be made. After fussing around with different options, I finally went back to study my connection strings and saw my problem:

    Provider=MSOLAP.4;Integrated Security=SSPI;Persist Security Info=True;Initial Catalog=Adventure Works DW 2008R2;Data Source=.;Location=.;MDX Compatibility=1;Safety Options=2;MDX Missing Member Mode=Error

    When I removed Location=.; from the string, and exported the new connection to SharePoint, the connection worked just fine! In further testing, I found that if I put the server name, rather than the . placeholder that represents localhost, the connection also worked.
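    For reference, here's the shape of the working connection string - the same string as above, minus the Location property (your Data Source and Initial Catalog will differ, of course):

    ```
    Provider=MSOLAP.4;Integrated Security=SSPI;Persist Security Info=True;Initial Catalog=Adventure Works DW 2008R2;Data Source=.;MDX Compatibility=1;Safety Options=2;MDX Missing Member Mode=Error
    ```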

    Now the mystery is how I got that property into the connection string in the first place. I can't reproduce it by using the Data Connection Wizard in Excel, but that's generally the only way I produce an ODC file. Regardless of how it came to be in my connection string, I now know that the Location property either is not required for the ODC file to work or, if it is included, must reference the actual server name.

  • Laissez les bons temps rouler! (Microsoft BI Conference 2010)

    "Laissez les bons temps rouler!" is a Cajun phrase that I heard frequently when I lived in New Orleans in the mid-1990s. It means "Let the good times roll!" and encapsulates a feeling of happy expectation. As I met with many of my peers and new acquaintances at the Microsoft BI Conference last week, this phrase kept running through my mind as people spoke about their plans in their respective businesses, the benefits and opportunities that the recent releases in the BI stack are providing, and their expectations about the future of the BI stack.

    Notwithstanding some jabs here and there to point out the platform is neither perfect now nor will be anytime soon (along with admissions that the competitors are also not perfect), and notwithstanding several missteps by the event organizers (which I don't care to enumerate), the overarching mood at the conference was positive. It was a refreshing change from the doom and gloom hovering over several conferences that I attended in 2009. Although many people expect economic hardships to continue over the coming year or so, everyone I know in the BI field is busier than ever and expects to stay busy for quite a while.

    Self-Service BI

    From the Microsoft BI Conference Keynote delivered by Ted Kummert

    Self-service was definitely a theme of the BI conference. In the keynote, Ted Kummert opened with a look back to a fairy tale vision of self-service BI that he told in 2008. At that time, the fairy tale future was one in which "every end user was able to use BI technologies within their job in order to move forward more effectively." He then transitioned to the present, in which SQL Server 2008 R2, Office 2010, and SharePoint 2010 are available to deliver managed self-service BI.

    This set of technologies is presumably poised to address the needs of the 80% of users who, Kummert said, do not use BI today. He proceeded to outline a series of activities that users ought to be able to do themselves--from simple changes to a report, like formatting or an additional data visualization, to integration of an additional data source. The keynote then continued with a series of demonstrations of both current and future technology in support of self-service BI. Some highlights that interested me:

    • PowerPivot, of course, is the flagship product for self-service BI in the Microsoft BI stack. In the TechEd keynote, which was open to the BI conference attendees, Amir Netz (twitter) impressed the audience by demonstrating interactivity with a workbook containing 100 million rows. He upped the ante at the BI keynote with his demonstration of a future-state PowerPivot workbook containing over 2 billion records. It's important to note that this volume of data is being processed by a server engine, and not in the PowerPivot client engine. (Yes, I think it's impressive, but none of my clients are typically wrangling with 2 billion records at a time. Maybe they're thinking too small. This ability to work quickly with large data sets has greater implications for BI solutions than for self-service BI, in my opinion.)
    • Amir also demonstrated KPIs for the future PowerPivot, which appeared to be easier to implement than in any other Microsoft product that supports KPIs, apart from simple KPIs in SharePoint. (My initial reaction is that we have one more place to build KPIs. Great. It's confusing enough. I haven't seen how well those KPIs integrate with other BI tools, which will be important for adoption.)
    • One more PowerPivot feature that Amir showed was a graphical display of the lineage for calculations. (This is hugely practical, especially if you build up calculations incrementally. You can more easily follow the logic from calculation to calculation. Furthermore, if you need to make a change to one calculation, you can assess the impact on other calculations.)
    • Another product demonstration will be available within the next 30 days--Pivot for Reporting Services. If you haven't seen this technology yet, check it out at www.getpivot.com. (It definitely has a wow factor, but I'm skeptical about its practicality. However, I'm looking forward to trying it out with data that I understand.)
    • Michael Tejedor (twitter) demonstrated a feature that I think is really interesting and not emphasized nearly enough--overshadowed by PowerPivot, no doubt. That feature is the Microsoft Business Intelligence Indexing Connector, which enables search of the content of Excel workbooks and Reporting Services reports. (This capability existed in MOSS 2007, but was more cumbersome to implement. The search results in SharePoint 2010 are not only cooler, but more useful by describing whether the content is found in a table or a chart, for example.)

    This may yet be the dawning of the age of self-service BI - a phrase I've heard repeated from time to time over the last decade - but I think BI professionals are likely to stay busy for a long while, and need not start looking for a new line of work. Kummert repeatedly referenced strategic BI solutions in contrast to self-service BI to emphasize that self-service BI is not a replacement for the services that BI professionals provide. After all, self-service BI does not appear magically on user desktops (or whatever device they want to use). A supporting infrastructure is necessary, and grows in complexity in proportion to the need to simplify BI for users.

    It's one thing to hear the party line touted by Microsoft employees at the BI keynote, but it's another to hear from the people who are responsible for implementing and supporting it within an organization. Rob Collie (blog | twitter), Kasper de Jonge (blog | twitter), Vidas Matelis (site | twitter), and I were invited to join Andrew Brust (blog | twitter) as he led a Birds of a Feather session at TechEd entitled "PowerPivot: Is It the BI Deal-Changer for Developers and IT Pros?" I would single out the prevailing concern in this session as the issue of control. On one side of this issue were those who were concerned that they would lose control once PowerPivot is implemented. On the other side were those who believed that data should be freely accessible to users in PowerPivot, and even an acknowledgment that users would get the data they want even if it meant they would have to manually enter it into a workbook to have it ready for analysis. For another viewpoint on how PowerPivot played out at the conference, see Rob Collie's observations.

    Collaborative BI

    I have been intrigued by the notion of collaborative BI for a very long time. Before I discovered BI, I was a Lotus Notes developer and later a manager of developers, working in a software company that enabled collaboration in the legal industry. Not only did I help create collaborative systems for our clients, I created a complete project management system from the ground up to collaboratively manage our custom development work. In that case, collaboration involved my team, my client contacts, and me. I was also able to produce my own BI from that system, though I didn't know that's what I was doing at the time. Only in recent years has SharePoint begun to catch up with the capabilities that I had with Lotus Notes more than a decade ago. Eventually, I had the opportunity at that job to formally investigate BI as another product offering for our software, and the rest - as they say - is history. I built my first data warehouse with Scott Cameron (who has also ventured into the authoring world by writing Analysis Services 2008 Step by Step and was at the BI Conference last week where I got to reminisce with him for a bit) and that began a career that I never imagined at the time.

    Fast forward to 2010, and I'm still lauding the virtues of collaborative BI, if only the tools will catch up to my vision! Thus, I was anxious to see what Donald Farmer (blog | twitter) and Rita Sallam of Gartner had to say on the subject in their session "Collaborative Decision Making." As I suspected, the tools aren't quite there yet, but the vendors are moving in the right direction. One thing I liked about this session was a non-Microsoft perspective of the state of the industry with regard to collaborative BI. In addition, this session included a better demonstration of SharePoint collaborative BI capabilities than appeared in the BI keynote. Check out the video in the link to the session to see the demonstration. One of the use cases that was demonstrated was linking from information to a person, because, as Donald put it, "People don't trust data, they trust people."

    The Microsoft BI Stack in General

    A question I hear all the time from students when I'm teaching is how to know what tools to use when there is overlap between products in the BI stack. I've never taken the time to codify my thoughts on the subject, but saw that my friend Dan Bulos provided good insight on this topic from a variety of perspectives in his session, "So Many BI Tools, So Little Time." I thought one of his best points was that ideally you should be able to design in your tool of choice, and then deploy to your tool of choice. Unfortunately, the ideal is yet to become real across the platform. The closest we come is with the RDL in Reporting Services which can be produced from two different tools (Report Builder or Business Intelligence Development Studio's Report Designer), manually, or by a third-party or custom application. I have touted the idea for years (and publicly said so about 5 years ago) that eventually more products would be RDL producers or consumers, but we aren't there yet. Maybe in another 5 years.

    Another interesting session that covered the BI stack against a backdrop of competitive products was delivered by Andrew Brust. Andrew did a marvelous job of consolidating a lot of information in a way that clearly communicated how various vendors' offerings compared to the Microsoft BI stack. He also made a particularly compelling argument about how the existence of an ecosystem around the Microsoft BI stack provided innovation and opportunities lacking for other vendors. Check out his presentation, "How Does the Microsoft BI Stack...Stack Up?"

    Expo Hall

    I had planned to spend more time in the Expo Hall to see who was doing new things with the BI stack, but didn't manage to get very far. Each time I set out on an exploratory mission, I got caught up in some fascinating conversations with one or more of my peers. I find interacting with people that I meet at conferences just as important as attending sessions to learn something new. There were a couple of items that really caught my eye, however, that I'll share here.

    Pragmatic Works. Whether you develop SSIS packages, build SSAS cubes, or author SSRS reports (or all of the above), you really must take a look at BI Documenter. Brian Knight (twitter) walked me through the key features, and I must say I was impressed. Once you've seen what this product can do, you won't want to document your BI projects any other way. You can download a free single-user database edition, or choose from more feature-rich standard or professional editions.

    Microsoft Press ebooks. I also stopped by the O'Reilly Media booth to meet some folks that one of my acquisitions editors at Microsoft Press recommended. In case you haven't heard, Microsoft Press has partnered with O'Reilly Media for distribution and publishing. Apart from my interest in learning more about O'Reilly Media as an author, an advertisement in their booth caught my eye which I think is a really great move. When you buy Microsoft Press ebooks through the O'Reilly web site, you can receive them in any (or all) of the following formats where possible: PDF, EPUB, .mobi for Kindle, and .apk for Android. You also have lifetime DRM-free access to the ebooks. As someone who is an avid collector of books, I find myself running out of room for storage. In addition, I travel a lot, and it's hard to lug my reference library with me. Today's e-reader options make the move to digital books a more viable way to grow my library. Having a variety of formats means I am not limited to a single device, and lifetime access means I don't have to worry about keeping track of where I've stored my files. Because the e-books are DRM-free, I can copy and paste when I'm compiling notes, and I can print pages when necessary. That's a winning combination in my mind!

    Overall, I was pleased with the BI conference. There were many more sessions that I couldn't attend, either because the room was full when I got there or there were multiple sessions running concurrently that I wanted to see. Fortunately, many of the sessions are accessible for viewing online at http://www.msteched.com/2010/NorthAmerica along with the TechEd sessions. You can spot the BI sessions by the yellow skyline on the title slide of the presentation as shown below.

  • SQL Server 2008 R2: These are a Few of My Favorite Things


    This month's T-SQL Tuesday is hosted by Jorge Segarra (blog | twitter) who decided that we should write about our favorite new feature in SQL Server 2008 R2. The majority of my published works concentrates on Reporting Services, so the obvious answer for me with regard to favorite new features is...Reporting Services. I can't pick just one thing in Reporting Services, so instead I thought I'd compile a list of my posts of the new features in Reporting Services 2008 R2:

    And I wrote an overview of the business intelligence features in SQL Server 2008 R2 for Microsoft Press in the free e-book, Introducing Microsoft SQL Server 2008 R2, if you're curious about what else is new in both the BI platform as well as the relational engine.

  • Installing SharePoint 2010 and PowerPivot for SharePoint on Windows 7

    Many people like me want (or need) to do their business intelligence development work on a laptop. As someone who frequently speaks at various events or teaches classes on all subjects related to the Microsoft business intelligence stack, I need a way to run multiple server products on my laptop with reasonable performance. Once upon a time, that requirement meant only that I had to load the current version of SQL Server and the client tools of choice. In today's post, I'll review my latest experience with trying to make the newly released Microsoft BI products work with a Windows 7 operating system.

    The entrance of Microsoft Office SharePoint Server 2007 into the BI stack complicated matters and I started using Virtual Server to establish a "suitable" environment. As part of the team that delivered a lot of education as part of the Yukon pre-launch activities (that would be SQL Server 2005 for the uninitiated), I was working with four - yes, four - virtual servers. That was a pretty brutal workload for a 2GB laptop, which worked if I was very, very careful. It could also be a finicky and unreliable configuration as I learned to my dismay at one TechEd session several years ago when I had to reboot a very carefully cached set of servers just minutes before my session started. Although it worked, it came back to life very, very slowly much to the displeasure of the audience. They couldn't possibly have been less pleased than me.

    At that moment, I resolved to get the beefiest environment I could afford and consolidate to a single virtual server. Enter the 4GB 64-bit laptop to preserve my sanity and my livelihood. Likewise, for SQL Server 2008, I managed to keep everything within a single virtual server and I could function reasonably well with this approach.

    Now we have SQL Server 2008 R2 plus Office SharePoint Server 2010. That means a 64-bit operating system. Period. That means no more Virtual Server. That means I must use Hyper-V or another alternative. I've heard alternatives exist, but my few dabbles in this area did not yield positive results. It might have been just me having issues rather than any failure of those technologies to adequately support the requirements.

    My first run at working with the new BI stack configuration was to set up a 64-bit 4GB laptop with a dual-boot to run Windows Server 2008 R2 with Hyper-V. However, I was generally not happy with running Windows Server 2008 R2 on my laptop. For one, I couldn't put it into sleep mode, which is helpful if I want to prepare for a presentation beforehand and then walk to the podium without the need to hold my laptop in its open state along the way (my strategy at the TechEd session long, long ago). Secondly, it was finicky with projectors. I had issues from time to time and while I always eventually got it to work, I didn't appreciate those nerve-wracking moments wondering whether this would be the time that it wouldn't work.

    Somewhere along the way, I learned that it was possible to load SharePoint 2010 on Windows 7, which piqued my interest. I had just acquired a new laptop running Windows 7 64-bit, and thought surely running the BI stack natively on my laptop must be better than running Hyper-V. (I have not tried booting to a Hyper-V VHD yet, but that's on my list of things to try, so the jury of one is still out on this approach.) Recently, I had to build up a server with the RTM versions of SQL Server 2008 R2 and SharePoint Server 2010 and decided to follow suit on my Windows 7 Ultimate 64-bit laptop. The process is slightly different, but I'm happy to report that it IS possible, although I had some fits and starts along the way.

    DISCLAIMER 1: These products are NOT intended to be run in production mode on the Windows 7 operating system. The configuration described in this post is strictly for development or learning purposes and not supported by Microsoft. If you have trouble, you will NOT get help from them. I might be able to help, but I provide no guarantees of my ability or availability to help.

    DISCLAIMER 2: The instructions below apply only to an environment in which the Windows 7 client is joined to a domain. You must log into the client machine using a valid Windows user account for that domain. Thanks to Kevin who pointed out this requirement which I had neglected to mention in my initial post (Updated May 30, 2010).

    I won't provide the step-by-step instructions in this post as there are other resources that provide these details, but I will provide an overview of my approach, point you to the relevant resources, describe some of the problems I encountered, and explain how I addressed those problems to achieve my desired goal.

    Because my goal was not simply to set up SharePoint Server 2010 on my laptop, but specifically PowerPivot for SharePoint, I started out by referring to the installation instructions at the PowerPivot-Info site, but mainly to confirm that I was performing steps in the proper sequence. I didn't perform the steps in Part 1 because those steps are applicable only to a server operating system, which I am not running on my laptop. The instructions in Part 2 won't work exactly as written for the same reason. Instead, I followed the instructions on MSDN, Setting Up the Development Environment for SharePoint 2010 on Windows Vista, Windows 7, and Windows Server 2008. In general, I found the following differences in installation steps from the steps at PowerPivot-Info:

    • You must copy the SharePoint installation media to the local drive so that you can edit the config.xml to allow installation on a Windows client.
    • You also have to manually install the prerequisites. The instructions provide links to each item that you must manually install and provide a command-line instruction to execute which enables required Windows features.
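    For reference, the config.xml edit is a single setting added inside the Configuration element. The snippet below is a sketch based on the MSDN article; the surrounding element content is abbreviated, so verify the placement against the copy of config.xml in your own installation media:

```xml
<Configuration>
  <!-- ...existing settings from the installation media remain here... -->
  <!-- Permits SharePoint 2010 setup to run on a Windows client OS -->
  <Setting Id="AllowWindowsClientInstall" Value="True"/>
</Configuration>
```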

    I will digress for a moment to save you some grief in the sequence of steps to perform. I discovered later that a missing step in the MSDN instructions is to install the November CTP Reporting Services add-in for SharePoint. When I went to test my SharePoint site (I believe I tested after I had a successful PowerPivot installation), I ran into the following error:

    Could not load file or assembly 'RSSharePointSoapProxy, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91' or one of its dependencies. The system cannot find the file specified.

    I was rather surprised that Reporting Services was required. Then I found an article by Alan le Marquand, Working Together: SQL Server 2008 R2 Reporting Services Integration in SharePoint 2010, that instructed readers to install the November add-in. My first reaction was, "Really?!?" But I confirmed it in another TechNet article on hardware and software requirements for SharePoint Server 2010. It doesn't refer explicitly to the November CTP but following the link took me there. (Interestingly, I retested today and there's no longer any reference to the November CTP. Here's the link to download the latest and greatest Reporting Services Add-in for SharePoint Technologies 2010.) You don't need to download the add-in anymore if you're doing a regular server-based installation of SharePoint because it installs as part of the prerequisites automatically.

    When it was time to start the installation of SharePoint, I deviated from the MSDN instructions and from the PowerPivot-Info instructions:

    • On the Choose the installation you want page of the installation wizard, I chose Server Farm.
    • On the Server Type page, I chose Complete.
    • At the end of the installation, I did not run the configuration wizard.

    Returning to the PowerPivot-Info instructions, I tried to follow the instructions in Part 3 which describe installing SQL Server 2008 R2 with the PowerPivot option. These instructions tell you to choose the New Server option on the Setup Role page where you add PowerPivot for SharePoint. However, I ran into problems with this approach and got installation errors at the end.

    It wasn't until much later, as I was investigating an error, that I encountered Dave Wickert's post explaining that installing PowerPivot for SharePoint on Windows 7 is unsupported. Uh oh. But he did want to hear about it if anyone succeeded, so I decided to take the plunge. Perseverance paid off, and I can happily inform Dave that it does work so far. I haven't tested absolutely everything with PowerPivot for SharePoint but have successfully deployed a workbook and viewed the PowerPivot Management Dashboard. I have not yet tested the data refresh feature, but I have installed it. Continue reading to see how I accomplished my objective.

    I uninstalled SQL Server 2008 R2 and started again. I had different problems which I don't recollect now. However, I uninstalled again and approached installation from a different angle and my next attempt succeeded. The downside of this approach is that you must do all of the things yourself that are done automatically when you install PowerPivot as a new server. Here are the steps that I followed:

    • Install SQL Server 2008 R2 to get a database engine instance installed.
    • Run the SharePoint configuration wizard to set up the SharePoint databases.
    • In Central Administration, create a Web application using classic mode authentication as per a TechNet article on PowerPivot Authentication and Authorization.
    • Then I followed the steps I found at How to: Install PowerPivot for SharePoint on an Existing SharePoint Server. Especially important to note - you must launch setup by using Run as administrator. I did not have to manually deploy the PowerPivot solution as the instructions specify, but it's good to know about this step because it tells you where to look in Central Administration to confirm a successful deployment.

    I did spot some incorrect steps in the instructions (at the time of this writing) in How To: Configure Stored Credentials for PowerPivot Data Refresh. Specifically, in the section entitled Step 1: Create a target application and set the credentials, both steps 10 and 12 are incorrect. They tell you to provide an actual Windows user name and password on the page where you are simply defining the prompts for your application in the Secure Store Service. To add the Windows user name and password that you want to associate with the application - after you have successfully created the target application - you select the target application and then click Set credentials in the ribbon.

    Lastly, I followed the instructions at How to: Install Office Data Connectivity Components on a PowerPivot server. However, I have yet to test this in my current environment.

    I did have several stops and starts throughout this process and edited those out to spare you from reading non-essential information. I believe the explanation I have provided here accurately reflects the steps I followed to produce a working configuration. If you follow these steps and get a different result, please let me know so that together we can work through the issue and correct these instructions. I'm sure there are many other folks in the Microsoft BI community that will appreciate the ability to set up the BI stack in a Windows 7 environment for development or learning purposes.

  • SQL Server 2008 R2 Reporting Services - The World is But a Stage (T-SQL Tuesday #006)

    Host Michael Coles (blog|twitter) has selected LOB data as the topic for this month's T-SQL Tuesday, so I'll take this opportunity to post an overview of reporting with spatial data types. As part of my work with SQL Server 2008 R2 Reporting Services, I've been exploring the use of spatial data types in the new map data region. You can create a map using any of the following data sources:

    Rob Farley (blog|twitter) points out today in his T-SQL Tuesday post that using the SQL geography field is a preferable alternative to ESRI shapefiles for storing spatial data in SQL Server. So how do you get spatial data? If you don't already have a GIS application in-house, you can find a variety of sources. Here are a few to get you started:

    In a recent post by Pinal Dave (blog|twitter), you can find a link to free shapefiles for download and a tutorial for using Shape2SQL, a free tool to convert shapefiles into SQL Server data.
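    If you just want to experiment with the geography type before hunting down shapefiles, you can also create spatial data directly in T-SQL. The table and values below are hypothetical, purely for illustration, not from my sample project:

```sql
-- Hypothetical table for store locations using the geography type
CREATE TABLE dbo.StoreLocation
(
    StoreID   int PRIMARY KEY,
    StoreName nvarchar(50),
    Location  geography
);

-- geography::Point takes latitude, longitude, and an SRID (4326 = WGS 84)
INSERT INTO dbo.StoreLocation (StoreID, StoreName, Location)
VALUES (1, N'Paris Store', geography::Point(48.8566, 2.3522, 4326));

-- Inspect the stored point as Well-Known Text
SELECT StoreName, Location.ToString() AS LocationWKT
FROM dbo.StoreLocation;
```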

    In my post today, I'll show you how to combine spatial data that describes boundaries with spatial data in AdventureWorks2008R2 that identifies store locations to embed a map in a report.

    Preparing the spatial data

    First, I downloaded shapefile data for the administrative boundaries in France and unzipped the data to a local folder. Then I used Shape2SQL to upload the data into a SQL Server database called Spatial. I'm not sure why, but I had to uncheck the option to create a spatial index before uploading the data. Otherwise, the upload appeared to run successfully, but no table appeared in my database.

    The zip file that I downloaded contained three files, but I didn't know what was in them until I used Shape2SQL to upload the data into tables. Then I found that FRA_adm0 contains spatial data for the country of France, FRA_adm1 contains spatial data for each region, and FRA_adm2 contains spatial data for each department (a subdivision of region).

    Next I prepared my SQL query containing sales data for fictional stores selling Adventure Works products in France. The Person.Address table in the AdventureWorks2008R2 database (which you can download from Codeplex) contains a SpatialLocation column which I joined - along with several other tables - to the Sales.Customer and Sales.Store tables. I'll be able to superimpose this data on a map to see where these stores are located. I included the SQL script for this query (as well as the spatial data for France) in the downloadable project that I created for this post.
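    The general shape of that join is sketched below. I've abbreviated the column list, and the exact query with the sales data is in the downloadable project, so treat this as an outline rather than the final script:

```sql
-- Sketch: store names with their spatial locations in AdventureWorks2008R2
-- (full version, including sales columns, is in the downloadable project)
SELECT s.Name AS StoreName,
       a.SpatialLocation
FROM Sales.Store AS s
INNER JOIN Sales.Customer AS c
    ON c.StoreID = s.BusinessEntityID
INNER JOIN Person.BusinessEntityAddress AS bea
    ON bea.BusinessEntityID = s.BusinessEntityID
INNER JOIN Person.Address AS a
    ON a.AddressID = bea.AddressID;
```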

    Step 1: Using the Map Wizard to Create a Map of France

    You can build a map without using the wizard, but I find it's rather useful in this case. Whether you use Business Intelligence Development Studio (BIDS) or Report Builder 3.0, the map wizard is the same. I used BIDS so that I could create a project that includes all the files related to this post. To get started, I added an empty report template to the project and named it France Stores. Then I opened the Toolbox window and dragged the Map item to the report body which starts the wizard. Here are the steps to perform to create a map of France:

    1. On the Choose a source of spatial data page of the wizard, select SQL Server spatial query, and click Next.
    2. On the Choose a dataset with SQL Server spatial data page, select Add a new dataset with SQL Server spatial data.
    3. On the Choose a connection to a SQL Server spatial data source page, select New.
    4. In the Data Source Properties dialog box, on the General page, add a connection string like this (changing your server name if necessary):
      Data Source=(local);Initial Catalog=Spatial
    5. Click OK and then click Next.
    6. On the Design a query page, add a query for the country shape, like this:
      select * from fra_adm1
    7. Click Next. The map wizard reads the spatial data and renders it for you on the Choose spatial data and map view options page, as shown below.
      Map Wizard with France regional boundaries
      You have the option to add a Bing Maps layer which shows surrounding countries. Depending on the type of Bing Maps layer that you choose to add (from Road, Aerial, or Hybrid) and the zoom percentage you select, you can view city names and roads and various boundaries. To keep from cluttering my map, I'm going to omit the Bing Maps layer in this example, but I do recommend that you experiment with this feature. It's a nice integration feature.
    8. Use the + or - button to resize the map as needed. (I used the + button to increase the size of the map until its edges were just inside the boundaries of the visible map area, which is called the viewport.) You can eliminate the color scale and distance scale boxes that appear in the map area later.
    9. Select the Embed map data in this report option for faster rendering. The spatial data won't be changing, so there's no need to leave it in the database. However, it does increase the size of the RDL. Click Next.
    10. On the Choose map visualization page, select Basic Map. We'll add data for visualization later. For now, we have just the outline of France to serve as the foundation layer for our map. Click Next, and then click Finish.
    11. Now click the color scale box in the lower left corner of the map, and press the Delete key to remove it. Then repeat to remove the distance scale box in the lower right corner of the map.

    Step 2: Add a Map Layer to an Existing Map

    The map data region allows you to add multiple layers. Each layer is associated with a different data set. Thus far, we have the spatial data that defines the regional boundaries in the first map layer. Now I'll add in another layer for the store locations by following these steps:

    1. If the Map Layers window is not visible, click the report body, and then click twice anywhere on the map data region to display it.
    2. Click on the New Layer Wizard button in the Map layers window.
      2_NewLayerWizard.png
      And then we start over again with the process by choosing a spatial data source.
    3. Select SQL Server spatial query, and click Next.
    4. Select Add a new dataset with SQL Server spatial data, and click Next.
    5. Click New, add a connection string to the AdventureWorks2008R2 database, and click Next.
    6. Add a query with spatial data (like the one I included in the downloadable project), and click Next.
    7. The location data now appears as another layer on top of the regional map created earlier. Use the + button to resize the map again to fill as much of the viewport as possible without cutting off edges of the map. You might need to drag the map within the viewport to center it properly.
    8. Select Embed map data in this report, and click Next.
    9. On the Choose map visualization page, select Basic Marker Map, and click Next.
    10. On the Choose color theme and data visualization page, in the Marker drop-down list, change the marker to diamond. There's no particular reason for a diamond; I think it stands out a little better than a circle on this map. Clear the Single color map checkbox as another way to distinguish the markers from the map.
      You can of course create an analytical map instead, which would change the size and/or color of the markers according to criteria that you specify, such as sales volume of each store, but I'll save that exploration for another post on another day.
    11. Click Finish and then click Preview to see the rendered report. Et voilà...c'est fini.
      3_FinalMap.png
      Yes, it's a very simple map at this point, but there are many other things you can do to enhance the map. I'll create a series of posts to explore the possibilities.
  • Reporting Services - It's a Wrap!

    If you have any experience at all with Reporting Services, you have probably developed a report using the matrix data region. It's handy when you want to generate columns dynamically based on data. If users view a matrix report online, they can scroll horizontally to view all columns and all is well. But if they want to print the report, the experience is completely different and you'll have to decide how you want to handle dynamic columns. By default, when a user prints a matrix report for which the number of columns exceeds the width of the page, Reporting Services determines how many columns can fit on the page and renders one or more separate pages for the additional columns. In this post, I'll explain two techniques for managing dynamic columns. First, I'll show how to use the RepeatRowHeaders property to make it easier to read a report when columns span multiple pages, and then I'll show you how to "wrap" columns so that you can avoid the horizontal page break. Included with this post are the sample RDLs for download.

    First, let's look at the default behavior of a matrix. A matrix that has too many columns for one printed page (or for output to a page-based renderer like PDF or Word) renders the first page with the row group headers and the initial set of columns, as shown in Figure 1.

    Figure 1

    The second page continues by rendering the next set of columns that can fit on the page, as shown in Figure 2. This pattern continues until all columns are rendered.

    Figure 2

    The problem with the default behavior is that you've lost the context of employee and sales order - the row headers - on the second page. That makes it hard for users to read this report because the layout requires them to flip back and forth between the current page and the first page of the report. You can fix this behavior by finding the RepeatRowHeaders property of the tablix report item and changing its value to True. The second (and subsequent) pages of the matrix now look like the image shown in Figure 3.

    Figure 3

    The problem with this approach is that the number of printed pages to flip through is unpredictable when you have a large number of potential columns. What if you want to include all columns on the same page? You can take advantage of the repeating behavior of a tablix and get repeating columns by embedding one tablix inside of another.

    For this example, I'm using SQL Server 2008 R2 Reporting Services. You can get similar results with SQL Server 2008. (In fact, you could probably do something similar in SQL Server 2005, but I haven't tested it. The steps would be slightly different because you would be working with the old-style matrix as compared to the new-style tablix discussed in this post.) I created a dataset that queries AdventureWorksDW2008 tables:

    SELECT TOP (100)
        e.LastName + ', ' + e.FirstName AS EmployeeName,
        d.FullDateAlternateKey,
        f.SalesOrderNumber,
        p.EnglishProductName,
        SUM(f.SalesAmount) AS SalesAmount
    FROM FactResellerSales AS f
    INNER JOIN DimProduct AS p ON p.ProductKey = f.ProductKey
    INNER JOIN DimDate AS d ON d.DateKey = f.OrderDateKey
    INNER JOIN DimEmployee AS e ON e.EmployeeKey = f.EmployeeKey
    GROUP BY p.EnglishProductName, d.FullDateAlternateKey,
        e.LastName + ', ' + e.FirstName, f.SalesOrderNumber
    ORDER BY EmployeeName, f.SalesOrderNumber, p.EnglishProductName

    To start the report:

    • Add a matrix to the report body and drag Employee Name to the row header, which also creates a group.
    • Next drag SalesOrderNumber below Employee Name in the Row Groups panel, which creates a second group and a second column in the row header section of the matrix, as shown in Figure 4.

    Figure 4

    Now for some trickiness.

    • Add another column to the row headers. This new column will be associated with the existing EmployeeName group rather than causing BIDS to create a new group. To do this, right-click on the EmployeeName textbox in the bottom row, point to Insert Column, and then click Inside Group-Right.
    • Then add the SalesOrderNumber field to this new column. By doing this, you're creating a report that repeats a set of columns for each EmployeeName/SalesOrderNumber combination that appears in the data.
    • Next, modify the first row group's expression to group on both EmployeeName and SalesOrderNumber. In the Row Groups section, right-click EmployeeName, click Group Properties, click the Add button, and select [SalesOrderNumber].
    • Now you need to configure the columns to repeat. Rather than use the Columns group of the matrix like you might expect, you're going to use the textbox that belongs to the second group of the tablix as a location for embedding other report items. First, clear out the text that's currently in the third column - SalesOrderNumber - because it's already added as a separate textbox in this report design. Then drag and drop a matrix into that textbox, as shown in Figure 5.

    Figure 5

    Again, you need to do some tricks here to get the appearance and behavior right. We don't really want repeating rows in the embedded matrix, so follow these steps:

    • Click on the Rows label which then displays RowGroup in the Row Groups pane below the report body.
    • Right-click RowGroup, click Delete Group, and select the option to delete associated rows and columns.

    As a result, you get a modified matrix which has only a ColumnGroup in it, with a row above a double-dashed line for the column group and a row below the line for the aggregated data. Let's continue:

    • Drag EnglishProductName to the data textbox (below the line).
    • Add a second data row by right-clicking EnglishProductName, pointing to Insert Row, and clicking Below. Add the SalesAmount field to the new data textbox.
    • Now eliminate the column group row without eliminating the group. To do this, right-click the row above the double-dashed line, click Delete Rows, and then select Delete Rows Only in the message box.

    Now you're ready for the fit and finish phase:

    • Resize the column containing the embedded matrix so that it fits completely.
    • Also, the final column in the matrix is for the column group. You can't delete this column, but you can make it as small as possible. Just click on the matrix to display the row and column handles, and then drag the right edge of the rightmost column to the left to make the column virtually disappear.
    • Next, configure the groups so that the columns of the embedded matrix will wrap. In the Column Groups pane, right-click ColumnGroup1 and click on the expression button (labeled fx) to the right of Group On [EnglishProductName].
    • Replace the expression with the following: =RowNumber("SalesOrderNumber"). We use SalesOrderNumber here because that is the name of the group that "contains" the embedded matrix.
    • The next step is to configure the number of columns to display before wrapping. Click any cell in the matrix that is not inside the embedded matrix, and then double-click the second group in the Row Groups pane - SalesOrderNumber.
    • Change the group expression to the following expression: =Ceiling(RowNumber("EmployeeName")/3)
    • The last step is to apply formatting. In my example, I set the SalesAmount textbox's Format property to C2 and also right-aligned the text in both the EnglishProductName and the SalesAmount textboxes. And voila - Figure 6 shows a matrix report with wrapping columns.

    Figure 6
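    To see how the two group expressions cooperate to lay the orders out in a grid, here's a minimal sketch of the arithmetic in plain Python (standing in for the SSRS RowNumber and Ceiling functions, with hypothetical order numbers):

```python
import math

# Orders as RowNumber("EmployeeName") would number them: 1, 2, 3, ...
orders = ["SO43661", "SO43662", "SO43663", "SO43664",
          "SO43665", "SO43666", "SO43667"]

wrap_width = 3  # number of columns to display before wrapping

layout = []
for row_number, order in enumerate(orders, start=1):
    # =Ceiling(RowNumber("EmployeeName")/3): which wrapped row the order lands on
    wrapped_row = math.ceil(row_number / wrap_width)
    # position within that row; the embedded matrix repeats once per slot
    column = (row_number - 1) % wrap_width + 1
    layout.append((order, wrapped_row, column))
    print(order, "-> row", wrapped_row, "column", column)
```

    With seven orders and a wrap width of 3, the first three orders land on row 1, the next three on row 2, and the seventh starts row 3 - the same wrapping behavior you see in Figure 6.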

  • Introducing Microsoft SQL Server 2008 R2 - Business Intelligence Samples

    On April 14, 2010, Microsoft Press (blog | twitter) released my latest book, co-authored with Ross Mistry (twitter), as a free ebook download - Introducing Microsoft SQL Server 2008 R2. As the title implies, this ebook is an introduction to the latest SQL Server release. Although you'll find a comprehensive review of the product's features in this book, you will not find the step-by-step details that are typical in my other books. For those readers who are interested in a more interactive learning experience, I have created two sample files for download.

    Here's a recap of the business intelligence chapters and the samples I used to generate the screen shots by chapter:

    Chapter 6: Scalable Data Warehousing covers a new edition of SQL Server, Parallel Data Warehouse. Understandably, Microsoft did not ship me the software and hardware to set up my own Parallel Data Warehouse environment for testing purposes, and consequently you won't see any screenshots in this chapter. I received a lot of information and a lot of help from the product team during the development of this chapter to ensure its technical accuracy.

    Chapter 7: Master Data Services is a new component in SQL Server. After you install Master Data Services (MDS), which is a separate installation from SQL Server although it's found on the same media, you can install sample models to explore (which is what I did to create the screenshots for the book). To do this, you deploy the packages found at \Program Files\Microsoft SQL Server\Master Data Services\Samples\Packages. You will first need to use the Configuration Manager (in the Microsoft SQL Server 2008 R2\Master Data Services program group) to create a database and a Web application for MDS. Then when you launch the application, you'll see a Getting Started page with a Deploy Sample Data link that you can use to deploy any of the sample packages.

    Chapter 8: Complex Event Processing is an introduction to another new component, StreamInsight. This topic was way too large to cover in-depth in a single chapter, so I focused on information such as architecture, development models, and an overview of the key sections of code you'll need to develop for your own applications. StreamInsight is an engine that operates on data in-flight and as such has no user interface that I could include in the book as screenshots. The November CTP version of SQL Server 2008 R2 included code samples as part of the installation, but these are not the official samples that will eventually be available in Codeplex. At the time of this writing, the samples are not yet published.

    Chapter 9: Reporting Services Enhancements provides an overview of all the changes to Reporting Services in SQL Server 2008 R2, and there are many! In previous posts, I shared more details than you'll find in the book about new functions (Lookup, MultiLookup, and LookupSet), properties for page numbering, and the new global variable RenderFormat. I will confess that I didn't use actual data in the book for my discussion on the Lookup functions, but I did create real reports for the blog posts and will upload those separately. For the other screenshots and examples in the book, I have created the IntroSQLServer2008R2Samples project for you to download. To preview these reports in Business Intelligence Development Studio, you must have the AdventureWorksDW2008R2 database installed, and you must download and install SQL Server 2008 R2. For the map report, you must execute the PopulationData.sql script that I included in the samples file to add a table to the AdventureWorksDW2008R2 database. The IntroSQLServer2008R2Samples project includes the following files:

    • 01_AggregateOfAggregates.rdl to illustrate the use of embedded aggregate functions
    • 02_RenderFormatAndPaging.rdl to illustrate the use of page break properties (Disabled, ResetPageNumber), the PageName property, and the RenderFormat global variable
    • 03_DataSynchronization.rdl to illustrate the use of the DomainScope property
    • 04_TextboxOrientation.rdl to illustrate the use of the WritingMode property
    • 05_DataBar.rdl
    • 06_Sparklines.rdl
    • 07_Indicators.rdl
    • 08_Map.rdl to illustrate a simple analytical map that uses color to show population counts by state
    • PopulationData.sql to provide the data necessary for the map report

    Chapter 10: Self-Service Analysis with PowerPivot introduces two new components to the Microsoft BI stack, PowerPivot for Excel and PowerPivot for SharePoint, which you can learn more about at the PowerPivot site. To produce the screenshots for this chapter, I created the Sales Analysis workbook which you can download (although you must have Excel 2010 and the PowerPivot for Excel add-in installed to explore it fully). It's a rather simple workbook because space in the book did not permit a complete exploration of all the wonderful things you can do with PowerPivot. I used a tutorial that was available with the CTP version as a basis for the report so it might look familiar if you've already started learning about PowerPivot.

    In future posts, I'll continue exploring the new features in greater detail. If there are any special requests, please let me know!
