SQLblog.com - The SQL Server Blog Spot on the Web

Sergio Govoni

A place where I share my experience with SQL Server and related technologies

  • 24 Hours of PASS (September 2014): Summit Preview Edition

    Which sessions can you expect to find at the next PASS Summit 2014? Find out on September 9, 2014 (12:00 GMT) at the free online event 24 Hours of PASS: Summit Preview Edition.

    Register now at this link.

    No matter where in the world you follow the event from, the important thing to know is that you will get 24 hours of continuous training on SQL Server and Business Intelligence, right on your computer!

  • AdventureWorks databases for SQL Server 2014

    From the CodePlex website, you can download the AdventureWorks databases (OLTP, Data Warehouse, Tabular, and Multidimensional models) for SQL Server 2014.

    Download links and installation details are available here.

    Enjoy the new features of SQL Server 2014!

  • 24 Hours of PASS (June 2014): Recordings Now Available!

    The sessions of the event 24 Hours of PASS: SQL Server 2014 (held last June 25th and 26th) were recorded and are now available for online streaming!

     

    If you missed one particular session, or the entire live event, you can watch or rewatch your preferred sessions here.

    What can you expect from the next PASS Summit? Find out on September 9 at 24 Hours of PASS: Summit 2014 Preview Edition!

  • Updateable Column Store Indexes in SQL Server 2014

    Introduction

    Column store indexes were released with SQL Server 2012 to optimize data warehouse workloads that follow a specific pattern: data is loaded through T-SQL scripts or SSIS packages, perhaps several times a day (the frequency is not important; what matters is that the available data is loaded within the same execution), and at the end of the ETL process the data is read with reporting tools. In short, data is usually written once and then read many times.

    SQL Server 2012 offered only the non-clustered column store index; like a traditional B-Tree non-clustered index, it is a secondary index. However, it differs from a traditional B-Tree index because it is based on a columnar structure, while the base table remains organized by row (in a structure called a row store, saved in 8 KB data pages).

    Column store indexes are part of the Microsoft In-Memory technologies because they use the xVelocity engine for optimized data compression, and their implementation is based on a columnar structure, as in PowerPivot and SSAS Tabular. Data in a column store index is organized by column: each memory page stores data from a single column, so each column can be accessed independently. This means that the SQL Server Storage Engine can fetch only the columns it needs. In addition, the data is highly compressed, so more data fits in memory and I/O operations can decrease dramatically.
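    As a minimal sketch of how the two flavors are created (dbo.FactSales and its columns are hypothetical names, used only for illustration):

        -- SQL Server 2012 and later: non-clustered (secondary) column store
        -- index; the base table stays row-organized and becomes read-only
        -- while this index exists. dbo.FactSales is a hypothetical table.
        CREATE NONCLUSTERED COLUMNSTORE INDEX IX_FactSales_NCCS
        ON dbo.FactSales (OrderDateKey, ProductKey, Quantity, Amount);

        -- SQL Server 2014: clustered column store index; it becomes the
        -- table itself and, unlike the 2012 index, it is updateable.
        -- (An alternative to the index above: a table cannot have both.)
        CREATE CLUSTERED COLUMNSTORE INDEX IX_FactSales_CCS
        ON dbo.FactSales;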

    Column store indexes structure

    Before talking about the new column store index features in SQL Server 2014, it is important to introduce three keywords: segment, row group, and compression. In a column store index, a segment contains the values of one column for a particular set of rows called a row group. As you can see in the following picture, each red and gray portion is a segment. When you create a column store index, the rows in the table are divided into groups, and each row group contains about one million rows (the exact number is 1,048,576, that is 2^20, rows per row group). The column store transforms the internal index organization from row organization to columnar organization, so there is one segment for each column in each row group. Column store indexes are part of the Microsoft In-Memory technologies, in which compression plays a very important role: each segment is compressed and stored in a separate LOB.
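    You can inspect row groups and segments through the catalog views sys.column_store_row_groups (new in SQL Server 2014) and sys.column_store_segments; a minimal sketch, reusing the hypothetical dbo.FactSales table from the example above:

        -- One row per row group: its state and how many rows it holds.
        SELECT rg.row_group_id, rg.state_description, rg.total_rows
        FROM sys.column_store_row_groups AS rg
        WHERE rg.object_id = OBJECT_ID(N'dbo.FactSales');

        -- One row per segment (one segment per column per row group).
        SELECT s.column_id, s.segment_id, s.row_count, s.on_disk_size
        FROM sys.column_store_segments AS s
        JOIN sys.partitions AS p ON p.partition_id = s.partition_id
        WHERE p.object_id = OBJECT_ID(N'dbo.FactSales');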

    This article does not detail the algorithms used to compress data in a column store index. At any rate, keep in mind that each segment stores values of the same type, so the data within a segment is homogeneous; in this scenario, the compression algorithms are more efficient than the ones used to compress table rows, because a row usually contains different types of data. In general, data compression can be implemented using techniques such as:

    • Dictionary Encoding
    • Run-length Encoding
    • Bit Packing
    • Archival Compression (SQL Server 2014 only)
      • It can reduce disk space usage by up to 27%

    The techniques used by SQL Server to compress data are undocumented.
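    Archival compression is enabled by rebuilding the index with the COLUMNSTORE_ARCHIVE option; a minimal sketch, again assuming the hypothetical dbo.FactSales table and the index created earlier:

        -- SQL Server 2014 only: archival compression trades extra CPU on
        -- access for a smaller on-disk footprint (e.g. for cold partitions).
        ALTER INDEX IX_FactSales_CCS ON dbo.FactSales
        REBUILD WITH (DATA_COMPRESSION = COLUMNSTORE_ARCHIVE);

        -- Revert to the standard column store compression.
        ALTER INDEX IX_FactSales_CCS ON dbo.FactSales
        REBUILD WITH (DATA_COMPRESSION = COLUMNSTORE);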

    The following picture shows an example of row groups and segments in a column store index... continue reading the full article here.

     

    Thanks to my friend Viviana Viola for the grammar review.

  • Introducing Microsoft Azure HDInsight

    A post on the Microsoft Press Blog has announced the publication of the eBook Introducing Microsoft Azure HDInsight.

    If you want to know what Big Data really means and how you can use it to your advantage in your company or organization, don't miss this eBook!

    Enjoy the book!

  • 24 hours of PASS is back!

    The most important free online event on SQL Server and Business Intelligence is back! 24 Hours of PASS returns with a great edition fully dedicated to the new features of SQL Server 2014.

    Register now at this link.

    No matter where in the world you follow the event from, the important thing to know is that you will get 24 hours of continuous training on SQL Server and Business Intelligence.

  • Review of the book Learning Windows Azure Mobile Services for Windows 8 and Windows Phone 8

    Recently I had the opportunity to read the book Learning Windows Azure Mobile Services for Windows 8 and Windows Phone 8, written by Geoff Webber-Cross (@webbercross) and published by Packt Publishing.

    In the last year, Windows Azure has expanded its offering of cloud-based services hosted on the Windows Azure platform. One of these new services is Windows Azure Mobile Services, which allows developers to build web-connected applications easily.

    Before reading this book, my knowledge of Windows Azure concerned other topics such as SQL Azure Database, Storage, and Windows Azure Virtual Machines. When I heard about the opportunity to read and review this book, I thought it was a great chance to learn something new about the services Windows Azure offers for mobile applications.

    The book covers all the features of Windows Azure Mobile Services, from preparing the Windows Azure Mobile Services Portal up to the best practices for developing web-connected applications. When you start developing an app for Windows 8 or Windows Phone 8 with Windows Azure Mobile Services, you may want to know what software and hardware are needed; this topic is covered in the second chapter. Security, customization, notifications, and scalability are covered in chapters 3, 4, 5, and 6.

    Another thing I appreciated in this book is the attention paid to the cost of the services; many times in the book I read sentences like "At this point, if we choose the … we will start incurring costs". Fittingly, the topic "Choosing a pricing plan for services you wish to implement" is covered at the beginning of the first chapter.

    There are lots of pictures in the book, which make it practical and easy to read. If you want to look inside the book, you can download a sample chapter here. This is the table of contents:

    • Chapter 1: Preparing the Windows Azure Mobile Services Portal
    • Chapter 2: Start Developing with Windows Azure Mobile Services
    • Chapter 3: Securing Data and Protecting the User
    • Chapter 4: Service Customization with Scripts
    • Chapter 5: Implementing Push Notifications
    • Chapter 6: Scaling Up with the Notifications Hub
    • Chapter 7: Best Practices for Web-connected Apps

    This book cannot be missing from your digital or physical library. Enjoy!

  • Introducing Microsoft SQL Server 2014

    The release of each of the two latest versions of SQL Server (2008 R2 and 2012) was accompanied by the publication of a corresponding eBook. On April 2nd, a post on the Microsoft Press Blog announced the publication of the eBook Introducing Microsoft SQL Server 2014!

     

    This eBook is divided into two parts. The first is dedicated to database administration: here the authors describe the editions of SQL Server 2014, the characteristics of the new In-Memory engine, and how easy the added support for hybrid environments makes the transition from on-premises solutions to the cloud. In the second part, the authors describe the components that support data analysis and the improvements made in related technologies such as Microsoft Excel 2013, Power BI for Office 365, HDInsight, and PolyBase.

    The PDF version of the eBook is available for free here. In addition, you will find this book in the eBook section of the Microsoft Virtual Academy site.

    Enjoy the book, and thanks to the authors Ross Mistry (@RossMistry) and Stacia Misner (@StaciaMisner).

  • Unexpected SPID change

    A while ago I had the opportunity to investigate an unexpected SPID (Session Process ID) change. The inquiry began when a customer of ours started to complain about locks and lock timeout errors. These locks were sometimes more frequent and sometimes less so.

    To investigate the issue, I took two SQL Profiler trace files from two executions, with the same execution criteria, of the application functionality the customer complained about. The only difference is that the first trace file was taken when the user experienced locks and lock timeout errors, while the second one was taken when no locks occurred.

    Comparing the two SQL Profiler trace files, I noticed an "unexpected" SPID change that happened exactly when the application was blocked; I write "unexpected" because the application always uses a single connection to perform the queries captured by SQL Profiler. I checked the piece of code that executes the queries shown in the following picture, and I verified that no new connection was opened explicitly.

    The following picture shows the first execution in which you can see the unexpected SPID change.

    Picture 1 – Execution with unexpected SPID change

    Let me focus on the trace captured during the first execution. As you can see in Picture 1, at a certain point for ClientProcessID 192 there was a SPID change, from SPID 111 to SPID 110. The last query executed with SPID 110 has only the SP:StmtStarting event, without the SP:StmtCompleted event, because this query was blocked by the previous SPID 111; for this reason the application was blocked.

    The following picture shows the second execution.

    Picture 2 – Execution without SPID change

    Let me focus on the second execution of the same application functionality, on the same client and with the same execution criteria, of course. As you can see in Picture 2, the SPID is always 68 for all the queries performed by the same piece of code (as I expected).

    Now the questions are: What is the reason for this unexpected SPID change? What conditions cause the SPID number to change between two query executions?

    Talking about this issue with Erland Sommarskog, he asked me, "What API does the application use?". My answer was "OLE DB", and he replied, "We have the answer!".

    When you use OLE DB, or something based on OLE DB, and you execute a query on a connection that has not fully consumed the result set of the previous query, the default behavior is to open a new connection behind the scenes. This new connection has a different SPID number; it is an attempt to be helpful, but it is not always helpful.

    In particular, the result set that had not been fully consumed belonged to the last query executed by SPID 111; the last query on SPID 110 then accessed the same tables, and that is where the lock occurred.
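    If you suspect this behavior in your own application, one way to spot it from the server side is to look for multiple sessions coming from the same client process; a minimal sketch (nothing here is specific to my customer's case):

        -- More than one SPID for the same host_process_id can indicate
        -- hidden extra connections, such as the ones OLE DB opens behind
        -- the scenes when a result set is not fully consumed.
        SELECT host_name, host_process_id, program_name,
               COUNT(*) AS session_count
        FROM sys.dm_exec_sessions
        WHERE is_user_process = 1
        GROUP BY host_name, host_process_id, program_name
        HAVING COUNT(*) > 1;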

    Thanks for the help Erland!

  • Hurry up! Buy one book and get one for free on Packt Publishing!

    Packt Publishing is celebrating the release of its 2000th title with the great "Buy One, Get One Free" offer, applied across its comprehensive catalog.

    If you buy a book of your choice, you will get another one for free. In addition, you can take advantage of this offer an unlimited number of times during the offer period.

    I suggest you take a look at the available titles; you have the chance to bring home two books for the price of one :) In particular, I suggest you look at these categories:

    Enjoy the books!

  • Performance Tuning with SQL Server Dynamic Management Views

    From the Red Gate website you can download (for free) the eBook Performance Tuning with SQL Server Dynamic Management Views, written by Louis Davidson (@drsql) and Tim Ford (@sqlagentman).

    DMVs are objects that publish information about the health of a SQL Server instance. They allow you to monitor the heart of the system, diagnose problems, and measure performance. They are a very good tool for DBAs and database developers alike. Reading this book, you will learn how to obtain information from the most frequently used DMVs when investigating query execution, index usage, session and transaction activity, disk I/O, and how SQL Server is using or abusing the operating system.
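    To give a taste of the kind of query the book teaches, here is a minimal sketch of a classic DMV pattern: the cached statements that have consumed the most CPU, taken from sys.dm_exec_query_stats (the TOP value is an arbitrary choice):

        -- Top 10 cached statements by total worker (CPU) time.
        SELECT TOP (10)
               qs.total_worker_time, qs.execution_count,
               SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                   ((CASE qs.statement_end_offset
                         WHEN -1 THEN DATALENGTH(st.text)
                         ELSE qs.statement_end_offset
                     END - qs.statement_start_offset) / 2) + 1) AS statement_text
        FROM sys.dm_exec_query_stats AS qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
        ORDER BY qs.total_worker_time DESC;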

    The editorial review is available here.

    This book cannot be missing from your digital library! And, of course, you can buy the paperback on Amazon.

  • Troubleshooting Clusters

    The session Troubleshooting Clusters by Allan Hirt (@SQLHA) has been published on the SQLPASS TV channel.

    Abstract

    Whether you are new to clusters or have years of experience, you may still cross your fingers when implementing a failover cluster instance (FCI) of SQL Server or an availability group (AG). Both require an underlying Windows Server failover cluster (WSFC), but how many DBAs are involved in the planning and deployment of it? What happens if you try to install an FCI or configure an AG and things don't work? Or worse, what if the implementation fails after going live in production? This session will cover how to diagnose what went wrong and the free tools available to you, as well as demonstrate how to fix common issues that every DBA should know about.

    Link

    http://www.youtube.com/watch?v=ELtmP_1uEi8 

    Enjoy! 

  • Skewed Data - Poor Cardinality Estimates... and Plans Gone Bad

    The session Skewed Data, Poor Cardinality Estimates, and Plans Gone Bad by Kimberly Tripp (@KimberlyLTripp) has been published on the SQLPASS TV channel.

    Abstract

    When data distribution is heavily skewed, cardinality estimation (how many rows the query optimizer expects each operator to process) can be wildly incorrect, resulting in poor quality query plans and degraded performance. You've probably seen the advice to update all statistics if a query plan looks wrong – but is that the right advice? In many cases, no! These are "sledgehammer" approaches, and while they might solve some problems (usually parameter sniffing problems), they don't solve the actual problem. In this session, you'll learn a generalized yet tailored-to-the-table way to solve query plan quality problems for very large tables (VLTs). Topics will include creating, using, and updating filtered statistics; using forced parameterization and templatized plan guides; and understanding stored procedures and how they can leverage filtered statistics.
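    For a taste of one of the techniques the session names, here is a minimal, hypothetical sketch of filtered statistics (the table, columns, and values are invented for illustration):

        -- Hypothetical skew: most rows have Status = 'CLOSED', so a
        -- table-wide histogram can misestimate the rare 'OPEN' rows.
        -- A filtered statistics object describes just that slice.
        CREATE STATISTICS stats_Orders_Open
        ON dbo.Orders (CustomerID)
        WHERE Status = 'OPEN';

        -- Refresh it with a full scan when the data changes significantly.
        UPDATE STATISTICS dbo.Orders stats_Orders_Open WITH FULLSCAN;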
     
    Enjoy!
  • Review of the book SQL Server 2012 Reporting Services Blueprints

    Recently I had the opportunity to read the book SQL Server 2012 Reporting Services Blueprints, written by Marlon Ribunal and Mickey Stuewe and published by Packt Publishing.

    First, I have to tell you that I am not a Reporting Services guy; I have much more knowledge of other topics such as database development, performance tuning, and so on. When I heard about the opportunity to read this book, I thought it was a great chance to increase my knowledge of SQL Server Reporting Services; but time is always short, so I needed a practical book that gets to the point quickly.

    If I had to describe the book with only one adjective, I would definitely tell you that this book is concrete, and easy to read. There are many pictures in the book, and the authors guide the reader, step by step, from defining the data source for an SSRS project up to the Reporting Services best practices and transactional replication topics.

    The book covers all the features of SQL Server Reporting Services and provides all the information you need to start developing database reports with SSRS within a week. If you want to look inside the book, you can download a sample chapter here. This is the table of contents:

    • Chapter 1: Let's Breakdown the Numbers
    • Chapter 2: Making a Good Impression
    • Chapter 3: Getting Some Actions
    • Chapter 4: Everything Looks Better with Charts
    • Chapter 5: Location, Location, Locations!
    • Chapter 6: Let's Get Analytical!
    • Chapter 7: Sir, We're Ready to Deploy!
    • Chapter 8: Beyond the Horizon!
    • Appendix A: SSRS Best Practices
    • Appendix B: Transactional Replication for Reporting Services

    My favorite chapters are 2, 4, and 5.

    This book cannot be missing from your digital or physical library!

  • Building Your T-SQL Tool Kit: Window Function Fundamentals

    The session Building Your T-SQL Tool Kit: Window Function Fundamentals by Christina E. Leo (@christinaleo) has been published on the SQLPASS TV channel.

    Abstract

    Have you pulled a script to identify duplicates from a blog post but couldn't quite get it to work because you weren't sure what that ROW_NUMBER() function was doing? Maybe you heard talk about creating running totals without using subqueries, but you got frustrated when the groups weren't totaling correctly. Or maybe, you've never even heard of window functions. All are good reasons to attend this all-demo session, which demystifies this versatile T-SQL tool. First, we'll break apart the OVER clause, the key to understanding how window functions work. Then we'll expand on each group of functions that can use the OVER clause: ranking, aggregate, and analytic functions. Finally, we'll look at real scenarios where this tool works and talk about performance considerations. When you leave, you'll have the fundamentals you need to fully develop your mastery of window functions.
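    As a quick, hypothetical taste of the two scenarios the abstract mentions (the tables and columns are invented for illustration):

        -- Deduplication: rows numbered 2 and higher share the same Email.
        SELECT Email,
               ROW_NUMBER() OVER (PARTITION BY Email ORDER BY CreatedAt) AS rn
        FROM dbo.Customers;

        -- Running total per account, without subqueries (SQL Server 2012+).
        SELECT AccountID, TxDate, Amount,
               SUM(Amount) OVER (PARTITION BY AccountID
                                 ORDER BY TxDate
                                 ROWS UNBOUNDED PRECEDING) AS RunningTotal
        FROM dbo.Transactions;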

    Enjoy!
