SQLblog.com – The SQL Server Blog Spot on the Web

Louis Davidson

  • Louisville SQL Saturday…

    One more week until we get to SQL Saturday 122 in Louisville, KY. I have a couple of sessions on the calendar this time. First up, the one on Sequences:

    ------------------------------------

    What Sequence objects are (and are not)

    SQL Server 2012 adds a new object to our arsenal called a sequence that gives us the capability to implement automatically incrementing values. However, it cannot replace a lot of the functionality that we have used a numbers table and windowing functions for (though they can be complementary to one another). In this session I will demonstrate the uses and performance characteristics of sequences, including how they complement the use of numbers tables and windowing functions to create surrogate key and sorting values, and more.

    ------------------------------------
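
    As a minimal taste of what the session covers (my own sketch, not material from the session itself), a sequence is a standalone object, so unlike an IDENTITY column it is not bound to a single table:

    --A minimal sketch (my example): create a sequence once, draw values anywhere
    CREATE SEQUENCE dbo.OrderNumbers AS int
        START WITH 1 INCREMENT BY 1 CACHE 50; --CACHE trades restart gaps for speed
    GO
    SELECT NEXT VALUE FOR dbo.OrderNumbers AS order_number; --returns 1
    SELECT NEXT VALUE FOR dbo.OrderNumbers AS order_number; --returns 2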

    The second session is my professional development session that goes along with my What Counts for a DBA blog series on SimpleTalk. Come with your ideas about what makes a great DBA so we can all get into the conversation (and not just so I won’t have to create too many slides). I will have my topic spinning wheel with me, so not even I know exactly what we will discuss.

    ------------------------------------
    What Counts For a DBA

    The world of a DBA can be daunting, whether you are new to the job or an old hand, because not only do you need to keep up with new and emerging technologies, but also with the code and designs of your coworkers. In this highly participation-driven session, we will employ a random topic chooser to pick several of these traits for discussion as a group. Possible topics include past blog topics such as Logic, Curiosity, Failure, Humility, Skill and Passion, as well as any other topics that might be added for that day. So come prepared to participate and voice your opinion about what counts for a DBA.

    ------------------------------------

    Hope to see you there! (And if you can’t make it to Louisville, I will be in Cleveland, OH for SQL Saturday #164 on August 16, and then in Chattanooga, TN for Devlink the last week of August.)

  • Utility Queries–Database Files (and Filegroups)

    It has been a while since I last posted a utility query, and today, to avoid other work I am supposed to be doing, I decided to work on another post.  This time it is a server configuration type query. One query I find I use pretty often is the following one that lists the files in the database. In this blog I will include two queries.  The first deals with files and databases, and the second runs in a database to see the files and their filegroups (if there is an easy way to get the filegroups at a server level, I am not sure of it…let me know).

    Database Files, All Databases – File Level (Download source)

    It is a pretty simple query, and it returns the following columns (a value of '@TOTAL' indicates that the row is a summary row, and some file types will not report a file size):

    • database_name – The name of the database
    • database_file_name – The file name that was set when the file was added to the database (the logical name, not the physical name)
    • size_in_kb – The size of the file in kilobytes, such that it matches the file size in the Windows Explorer
    • size_in_mb – The size of the file in megabytes, the size people more typically want to see
    • size_in_gb – The size of the file in gigabytes, useful when looking at really large files
    • file_type – How the file is used in the server
    • filesystem_drive_letter – The drive letter where the file is located
    • filesystem_file_name – The name of the physical file
    • filesystem_path – The path where the files are located

    --Get the files and total size of files for all databases

    SELECT  --the name of the database
            CASE WHEN GROUPING(DB_NAME(database_id)) = 1 THEN '@TOTAL'
                 ELSE DB_NAME(database_id)
            END AS database_name ,

            --the logical name of the file
            CASE WHEN GROUPING(master_files.name) = 1 THEN '@TOTAL'
                 ELSE master_files.name
            END AS database_file_name ,

            --the size of the file is stored in # of pages
            SUM(master_files.size * 8.0) AS size_in_kb,
            SUM(master_files.size * 8.0) / 1024.0 AS size_in_mb,
            SUM(master_files.size * 8.0) / 1024.0 / 1024.0 AS size_in_gb,

            --how the file is used in the server
            CASE WHEN GROUPING(master_files.name) = 1 THEN ''
                 ELSE MAX(master_files.type_desc)
            END AS file_type ,
           
            --the drive letter where the file is located
            CASE WHEN GROUPING(master_files.name) = 1 THEN ''
                 ELSE MAX(UPPER(SUBSTRING(master_files.physical_name, 1, 1)))
            END AS filesystem_drive_letter ,


           --thanks to Phillip Kelley from http://stackoverflow.com/questions/1024978/find-index-of-last-occurrence-of-a-sub-string-using-t-sql
           --for the REVERSE code to get the filename and path.

            --the physical filename only
            CASE WHEN GROUPING(master_files.name) = 1 THEN ''
                 ELSE MAX(REVERSE(LEFT(REVERSE(master_files.physical_name),
                         CHARINDEX('\', REVERSE(physical_name)) - 1)))
            END AS filesystem_file_name ,

            --the path of the file only
            CASE WHEN GROUPING(master_files.name) = 1 THEN ''
                 ELSE MAX(REPLACE(master_files.physical_name,
                    REVERSE(LEFT(REVERSE(master_files.physical_name),
                                 CHARINDEX('\', REVERSE(physical_name)) - 1)), ''))
                 END AS filesystem_path

    FROM    sys.master_files
    GROUP BY DB_NAME(database_id) , --the database and the file (all of the parts)
             master_files.name WITH ROLLUP
    ORDER BY database_name, database_file_name
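
    If you only care about a single database, a small variation (mine, not part of the downloadable source) filters sys.master_files before grouping:

    --A variation of my own: the same rollup restricted to one database
    SELECT  master_files.name AS database_file_name , --NULL on the rollup total row
            SUM(master_files.size * 8.0) / 1024.0 AS size_in_mb
    FROM    sys.master_files
    WHERE   database_id = DB_ID(N'tempdb') --or whatever database interests you
    GROUP BY master_files.name WITH ROLLUP
    ORDER BY database_file_name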

    Single Database By Filegroup (Download source)

    Edited: Added code based on one of the comments here: http://www.sqlblog.lv/2011/05/ka-apskatit-datu-bazes-failu-izmeru-un.html.  His post does all databases with sizing, but I preferred to have this query only work on one database. I added columns for available space, used space, as well as on-disk space.

    The second query lists, for one database, all of the row and log filegroups and their files. Like the previous query, it may list files that have a 0 size for types like full text. It uses sys.database_files for the files. This has one downside: if the database is read-only, it is possible that the results will not be correct and will reflect a previous version of the metadata. Use sys.master_files if you want current values, but there is no guarantee that it will match the filegroups.

    It will return:

    • filegroup_name – The name of the filegroup in the database
    • database_file_name – The file name that was set when the file was added to the database (the logical name, not the physical name)
    • size_in_kb – The size of the file in kilobytes, such that it matches the file size in Windows Explorer
    • size_in_mb – The size of the file in megabytes, the size people more typically want to see (Commented Out)
    • size_in_gb – The size of the file in gigabytes, useful when looking at really large files
    • used_size_in_kb – The amount of the file that has data allocated, in kilobytes
    • used_size_in_mb – The amount of the file that has data allocated, in megabytes, the size people more typically want to see (Commented Out)
    • used_size_in_gb – The amount of the file that has data allocated, in gigabytes, useful when looking at really large files
    • available_size_in_kb – The amount of free space in kilobytes
    • available_size_in_mb – The amount of free space in megabytes, the size people more typically want to see (Commented Out)
    • available_size_in_gb – The amount of free space in gigabytes, useful when looking at really large files
    • size_on_disk_kb – The amount of space the file takes in the file system (reported from the DMVs)
    • file_type – How the file is used in the server
    • filesystem_drive_letter – The drive letter where the file is located
    • filesystem_file_name – The name of the physical file
    • filesystem_path – The path where the files are located

    SELECT

                   --the name of the filegroup (or LOGS for the log file, which doesn't have a filegroup)
                   CASE WHEN GROUPING(filegroups.name) = 1 THEN '@TOTAL'
                             WHEN filegroups.name IS NULL THEN 'LOGS'
                             ELSE filegroups.name
                    END AS filegroup_name ,
           
                   --the logical name of the file
                   CASE WHEN GROUPING(database_files.name) = 1 THEN '@TOTAL'
                            ELSE database_files.name
                   END AS database_file_name ,

                   --the size of the file is stored in # of pages
                   SUM(database_files.size * 8.0) AS size_in_kb,
                   --SUM(database_files.size * 8.0) / 1024.0 AS size_in_mb,
                   SUM(database_files.size * 8.0) / 1024.0 / 1024.0 AS size_in_gb,
                  
                   SUM(FILEPROPERTY(database_files.NAME,'SpaceUsed') * 8.0) AS used_size_in_kb,
                   --SUM(FILEPROPERTY(database_files.NAME,'SpaceUsed') * 8.0)/ 1024.0  AS used_size_in_mb,
                   SUM(FILEPROPERTY(database_files.NAME,'SpaceUsed') * 8.0) / 1024.0 / 1024.0 AS used_size_in_gb,                             

                   SUM((database_files.size - FILEPROPERTY(database_files.NAME,'SpaceUsed')) * 8.0) AS available_size_in_kb,
                   --SUM((database_files.size - FILEPROPERTY(database_files.NAME,'SpaceUsed')) * 8.0)/ 1024.0  AS available_size_in_mb,
                   SUM((database_files.size - FILEPROPERTY(database_files.NAME,'SpaceUsed')) * 8.0) / 1024.0 / 1024.0 AS available_size_in_gb,  

                   SUM(DIVFS.size_on_disk_bytes/1024.0) AS size_on_disk_kb,
                  
                  --how the file is used in the server
                  CASE WHEN GROUPING(database_files.name) = 1 THEN ''
                            ELSE MAX(database_files.type_desc)
                   END AS file_type ,

                   --the drive letter where the file is located
                   CASE WHEN GROUPING(database_files.name) = 1 THEN ''
                            ELSE MAX(UPPER(SUBSTRING(database_files.physical_name, 1, 1)))
                   END AS filesystem_drive_letter ,
           
                  --thanks to Phillip Kelley from http://stackoverflow.com/questions/1024978/find-index-of-last-occurrence-of-a-sub-string-using-t-sql

                   --the physical filename only
                   CASE WHEN GROUPING(database_files.name) = 1 THEN ''
                             ELSE MAX(REVERSE(LEFT(REVERSE(database_files.physical_name), CHARINDEX('\', REVERSE(database_files.physical_name)) - 1)))
                    END AS filesystem_file_name ,

                    --the path of the file only
                   CASE WHEN GROUPING(database_files.name) = 1 THEN ''
                             ELSE MAX(REPLACE(database_files.physical_name,
                                              REVERSE(LEFT(REVERSE(database_files.physical_name),
                                                           CHARINDEX('\', REVERSE(database_files.physical_name)) - 1)), ''))
                    END AS filesystem_path
    FROM    sys.database_files --use sys.master_files if the database is read only and you want the current file metadata
                 --log files do not have a filegroup
                         LEFT OUTER JOIN sys.filegroups
                                 ON database_files.data_space_id = filegroups.data_space_id
                        LEFT OUTER JOIN sys.dm_io_virtual_file_stats(DB_ID(), DEFAULT) AS DIVFS
                                ON database_files.file_id = DIVFS.file_id
    GROUP BY  filegroups.name ,
                     database_files.name WITH ROLLUP
    ORDER BY     --the name of the filegroup (or Log for the log file, which doesn't have a filegroup)
                     CASE WHEN GROUPING(filegroups.name) = 1 THEN '@TOTAL'
                              WHEN filegroups.name IS NULL THEN '@TOTAL-SortAfter'
                              ELSE filegroups.name
                      END,
                      database_file_name

    Hope these queries help out sometime.  More on the way as I finish up other projects!

  • Speaking at PASS 2012… Exciting and Scary… As usual…

    Edit: As I reread this, I felt I should clarify. "As usual" refers mostly to the "Scary" part. I have a lot of stage fright that I have to work through. And it is always exciting to be picked.

    I have been selected to present two sessions at the PASS Summit 2012 this year, and they are both going to be interesting.

    • Pre-Con: Relational Database Design Workshop - Abstract
    • Triggers: Born Evil or Misunderstood? - Abstract

    The pre-con session entitled Relational Database Design Workshop will be (at least) the third time I will have done this pre-con session, and I am pretty excited to take it to a bit larger scale. The one big change that I am forcing this time is a limit on the lecture time. Each of the first two times I have done this session, the biggest disappointment has been that we didn't get nearly enough time for the exercises. If people get tired of the exercises, I will certainly have a lot of extra material to do, but the focus will be on getting at least three hours of design time in: some as a full group on the screen, and some broken up into groups. (Of course, we will adjust the schedule based on the size of the group and whatever they are throwing at me verbally… and physically! I will have material to pad out at least an hour if people start looking bored (or if the group members start screaming at each other…you know, like a real database design session would be if people weren't concerned with losing their jobs.))

    The triggers session is the one that I have been mulling over for years now, and it is going to be, at the very least, interesting, and probably a lot of fun (particularly if Hugo Kornelis (@Hugo_Kornelis) and Tom LaRock (@SQLRockstar) (Tom is not a fan of triggers! http://thomaslarock.com/2009/03/sql-database-triggers/) show up to offer their opinions). Triggers are probably the most controversial of SQL Server objects, and for good reason. There are server and database settings that affect how they work, and it is not trivial to write them in a manner that doesn't harm performance. Worse yet, they can cause side effects that (if not coded correctly) really harm performance, data integrity, and the sanity of the developers who don't realize they exist. But for all of the negatives, there are some tasks that just fit the trigger to a T. I might be trying to do too much in a 75-minute session, but it would not be the first time!

    So I hope to see a good number of you there, for the pre-con, and certainly for the Trigger session. Just leave the potential projectiles in your hotel room...

  • PASS Nomination Committee

    Edit: Well, I didn't get elected, and at this point, seeing who won (great choices in Angela, Stephanie, and Jason) and what else I have going on between now and the end of the year (presentations, grandchildren being born, etc.), I couldn't be more pleased with the outcome.  Usually the phrase "maybe next year" is cold comfort to the people who were not (as the election site so eloquently puts it) top vote getters, but in this case, I think we can all be happy that the top vote getters will do a great job for PASS and that we have more time to do other community activities. So, see you maybe next year.

     -------------------------------------------

    Starting today you can vote for your choice for who should be on the Nomination Committee for SQL PASS (or perhaps some previous day if you aren't reading this on the 13th, and if it is after noon on the 23rd, well, tough). There are 12 people up for a vote with 3 slots available, so this should be fun. There are a few people I know quite well on the list, and a few I don't really know at all. Looking at their qualifications I think one thing stands out: you probably can't make a bad choice. One of them is me, and this is a job I did once in the past, and one I feel somewhat uniquely qualified to do after 11 years of working with PASS members and board members alike.

    The job is helping to pick who will get the chance to run for the Board of Directors for PASS by vetting the people who apply (according to the PASS By-Laws, Section VIII.3, assuming I am reading it right.) So, the fact is, your choice ought to be somewhat based on how your idea of a board member matches up with our ideas. Of course, it will be a team effort and there is always compromise.

    So let me tell you the criteria that I personally have always hoped to find in board members.

    Professional level technical skills (primarily SQL). The last thing we need is people who know nothing of what we do simply trying to make a name for themselves with the concept of a professional group like ours. We need people that understand what it is they will be representing. They don't have to be experts by any means, just solid skills, probably demonstrated in the community.

    Strong leadership qualifications. Ideally, we will get people who have led groups of people before. Not everyone who would be a great Board Member will live near enough to a SQL user group to be involved, so this could mean technical management, local government, volunteer work for charities, church leadership, or even being a team leader. A big plus is that the person should be able to communicate well publicly to unify the community of PASS through the good times and the times that are opposite of the good times.

    History of volunteerism. Possibly just with PASS, or possibly with some other type of organization. The person ought to know what it means to work without pay.

    History with PASS. Even if for just a reasonably short period of time, the person should know enough about PASS that they aren't just blindly trying to join at the top. Attendance at a conference or two would be a good bonus so they know what it is about, but not everyone can afford to attend, even if they were comp'd the entrance fee.

    Ideas on what they think PASS should be like and how they can help get it there. (Perhaps) obviously we all expect that candidates aren't just campaigning because they think it sounds cool. They have a desire to move PASS toward some better future.

    Over the years I have put my name in the hat several times for the board of directors; twice I was cut by the Nomination Committee, and the one time I was a candidate I lost. The funniest part of that year's Summit for me was that I was sincerely relieved when Kevin Kline sat me down and told me I had lost. I did not (and still do not, really) have the strong leadership qualifications needed to be on the Board of Directors. However, like most any longtime volunteer, I feel like I have a solid understanding of what kind of person inspires me as a volunteer. (Hence the list!)

    If you look at the 5 qualities, all of them are solidly focused on community. If you agree with my vision of what a PASS leader should be, vote for me. I promise to make sure the candidates are up to the high quality that you expect. If you don't agree (or perhaps just didn't find my black and white photo to your liking), vote for someone else; everyone seems quite solid as a member of this committee.

    If I don't get elected...there is less work for me to do, and I will have more time to serve PASS and the SQL Community in any other way I can.

  • Crazy Week Next Week–SQLRally and SQL Saturday Birmingham

    A lot of my time lately has been spent getting ready for next week, and I hope to see you there.  First up: SQL Rally.

    On Wednesday, I will be doing a pre-con session on Database Design. I have already blogged about it here, and I am definitely excited about it.  There are still seats available, and I will be giving away a few goodies, including my new design book (though it won’t yet be printed, I will get it to attendees), several of the DMV books that Tim Ford, Glenn Berry, and I have written for Red Gate, and who knows what else I might have in my bag (Apress is sending me a lot of swag to give away during the conference too).

    Friday (last session of the conference), I will be presenting my new session entitled Sequences (What They Are and Are Not).  This session is a great departure from my typical presentation.  I typically do 95% slides, and sometimes 100% slides.  In this presentation I currently have 17 slides, and that will only change to include a bit more contact information.  I have a lot of code using sequences in multiple ways, including racing them against identities and different caching levels.

    After the rally ends, I will hop in my yellow Focus and make the following delightful journey:

    [image: route map]

    Assuming traffic is kind to me and Bing’s time estimates are reasonably correct, after a bit over 9 hours of driving and a night in a Hampton Inn to be named later, I should arrive at Birmingham’s SQL Saturday event.

    Sounds insane, right?  Well, yeah, it kinda does.  But I wanted to make their event again this year, and it was just a couple of hours out of my way back to Nashville. You may also think it sounds like a bit of vanity, and it isn’t really.  Assuming traffic is favorable, my hope is that I can arrive for the 2:00 session by Chris Price on Data Quality (or Kevin Boles on Column Store Indexes), and Sven Aelterman’s session on File Tables as well, before my 4:30 session on Sequences.  Obviously if traffic is bad it might turn out that I only make my own session, but even then I plan to hang out for the after event so it won’t be a total loss.

    In any case, hope to see you all in Dallas or Birmingham next week!

  • Utility Queries–Column Metadata

    Very often, I find myself wanting to query system metadata for columns. Some of the metadata is fairly easy to deal with; if you want to find column names, simply querying sys.columns is very straightforward.  But if you want to query data types, it gets stickier.  There are two types listed: one that is the declared type, and another for the base data type, which differs if you are using CREATE TYPE to create alias data types. So I started working on the query based on INFORMATION_SCHEMA values, because it is a lot easier to work with. In my design book, I used that version (and will link this version in the errata/book updates).
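
    To see the distinction, here is a quick demo of my own (not from the post): declare a column using the alias type sysname and look at the two type rows.

    --sysname is an alias type; user_type_id points at the alias,
    --system_type_id at the intrinsic type it is built on
    CREATE TABLE dbo.TypeDemo (name sysname NOT NULL);
    GO
    SELECT types.name AS declared_type, baseType.name AS base_type
    FROM   sys.columns
              JOIN sys.types
                  ON columns.user_type_id = types.user_type_id
              JOIN sys.types AS baseType
                  ON columns.system_type_id = baseType.system_type_id
                     AND baseType.user_type_id = baseType.system_type_id
    WHERE  columns.object_id = OBJECT_ID('dbo.TypeDemo');
    --returns declared_type = sysname, base_type = nvarchar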

    But in the Books Online entry for the COLUMNS object (and others too), I discovered the following warning, a bit too late to change: “Do not use INFORMATION_SCHEMA views to determine the schema of an object. The only reliable way to find the schema of an object is to query the sys.objects catalog view.” So I rewrote it to use the catalog views.

    The query is fairly complex looking, but overall is pretty simple.  I put comments through the code that explain what it does.  I will note that I have tested the database using a case sensitive collation, but I haven’t tried it with CLR types.  The query returns the following:

    • table_type – The type from sys.tables. Could be a user table, system table, user view, internal table
    • schema_name – The name of the schema of the table that the column is in
    • table_name – The name of the table
    • column_name – The name of the column
    • nullability – Whether that column is nullable, plus if the column is an identity column
    • declared_datatype – The data type as declared. For alias types (like sysname), this will be the name the user specified.
    • base_datatype – The data type as is implemented. For alias types this will be the base intrinsic type (like for sysname this will be nvarchar(128))
    • primary_key_column – A bit value, indicating that the column participates in a primary key constraint
    • column_id – The internal key of the column, which is used for ordering the columns
    • default_value – If there is a default constraint on the column, this will be the text of the declaration
    • column_check_constraint – If there is a check constraint that only deals with the one column, it will be considered a column check constraint. This will contain the text of the check constraint
    • table_check_constraint_reference – A bit value that indicates that there is one or more table level check constraints that reference the column

    The query text follows.  It is pretty unwieldy, so instead of the derived table you might want to consider building it into a view or even a procedure.  I keep an SSMS project with all of my metadata queries that I maintain over time to call in any place I need them, so I just use this one ad hoc.  You can download the query from my website’s downloadable packages page here: http://www.drsql.org/Documents/ColumnMetadataQuery.sql

    SELECT  *
    FROM    (
    SELECT    REPLACE(LOWER(objects.type_desc), '_', ' ') AS table_type, schemas.name AS schema_name, objects.name AS table_name,
                        columns.name AS column_name, CASE WHEN columns.is_identity = 1 THEN 'IDENTITY NOT NULL'
                                                          WHEN columns.is_nullable = 1 THEN 'NULL'
                                                          ELSE 'NOT NULL'
                                                     END AS nullability,
                       --types that have an ASCII character or binary length
                        CASE WHEN columns.is_computed = 1 THEN 'Computed'
                             WHEN types.name IN ( 'varchar', 'char', 'varbinary' )
                                  THEN types.name +
                                            CASE WHEN columns.max_length = -1 THEN '(max)'
                                                 ELSE '(' + CAST(columns.max_length AS VARCHAR(4)) + ')'
                                            END
                             --types that have a Unicode character type that requires length to be halved
                             WHEN types.name IN ( 'nvarchar', 'nchar' )
                                  THEN types.name +
                                            CASE WHEN columns.max_length = -1 THEN '(max)'
                                                 ELSE '(' + CAST(columns.max_length / 2 AS VARCHAR(4)) + ')'
                                            END
                             --types with a datetime precision
                             WHEN types.name IN ( 'time', 'datetime2', 'datetimeoffset' )
                                  THEN types.name + '(' + CAST(columns.scale AS VARCHAR(4)) + ')'

                             --types with a precision/scale
                             WHEN types.name IN ( 'numeric', 'decimal' )
                             THEN types.name + '(' + CAST(columns.precision AS VARCHAR(4)) + ',' +
                                                                        CAST(columns.scale AS VARCHAR(4)) + ')'

                            --timestamp should be reported as rowversion
                             WHEN types.name = 'timestamp' THEN 'rowversion'
                             --and the rest. Note, float is declared with a bit length, but is
                             --represented as either float or real in types 
                             ELSE types.name
                        END AS declared_datatype,

                       --types that have an ASCII character or binary length
                        CASE WHEN baseType.name IN ( 'varchar', 'char', 'varbinary' )
                                      THEN baseType.name +
                                                  CASE WHEN columns.max_length = -1 THEN '(max)'
                                                       ELSE '(' + CAST(columns.max_length AS VARCHAR(4)) + ')'
                                                  END
                   
                         --types that have a Unicode character type that requires length to be halved
                             WHEN baseType.name IN ( 'nvarchar', 'nchar' ) 
                                      THEN baseType.name +
                                                  CASE WHEN columns.max_length = -1 THEN '(max)'       
                                                        ELSE '(' + CAST(columns.max_length / 2 AS VARCHAR(4)) + ')'       
                                                  END

                             --types with a datetime precision
                             WHEN baseType.name IN ( 'time', 'datetime2', 'datetimeoffset' )
                                       THEN baseType.name + '(' + CAST(columns.scale AS VARCHAR(4)) + ')'

                             --types with a precision/scale
                             WHEN baseType.name IN ( 'numeric', 'decimal' )
                             THEN baseType.name + '(' + CAST(columns.precision AS VARCHAR(4)) +
                                                     ',' +  CAST(columns.scale AS VARCHAR(4)) + ')'

                             --timestamp should be reported as rowversion
                             WHEN baseType.name = 'timestamp' THEN 'rowversion'
                             --and the rest. Note, float is declared with a bit length, but is
                             --represented as either float or real in types
                             ELSE baseType.name
                        END AS base_datatype,
                        CASE WHEN EXISTS ( SELECT *
                                           FROM   sys.key_constraints
                                                     JOIN sys.indexes
                                                          ON key_constraints.parent_object_id = indexes.object_id
                                                             AND key_constraints.unique_index_id = indexes.index_id
                                                     JOIN sys.index_columns
                                                          ON index_columns.object_id = indexes.object_id
                                                             AND index_columns.index_id = indexes.index_id
                                           WHERE  key_constraints.type = 'PK'
                                             AND columns.column_id = index_columns.column_id
                                             AND columns.object_id = index_columns.object_id )
                                 THEN 1
                             ELSE 0
                        END AS primary_key_column,

                        columns.column_id, default_constraints.definition AS default_value,
                        check_constraints.definition AS column_check_constraint,
                        --table-level check constraints (parent_column_id = 0) that mention
                        --the column name in brackets; ~ escapes [ and ] in the LIKE pattern
                        CASE WHEN EXISTS ( SELECT   *
                                           FROM     sys.check_constraints AS cc
                                           WHERE    cc.parent_object_id = columns.object_id
                                                    AND cc.definition LIKE '%~[' + columns.name + '~]%' ESCAPE '~'
                                                    AND cc.parent_column_id = 0 ) THEN 1
                             ELSE 0
                        END AS table_check_constraint_reference
              FROM      sys.columns
                        JOIN sys.types
                            ON columns.user_type_id = types.user_type_id
                        JOIN sys.types AS baseType
                            ON columns.system_type_id = baseType.system_type_id
                               AND baseType.user_type_id = baseType.system_type_id
                        JOIN sys.objects
                                JOIN sys.schemas
                                       ON schemas.schema_id = objects.schema_id
                            ON objects.object_id = columns.OBJECT_ID
                        LEFT OUTER JOIN sys.default_constraints
                            ON default_constraints.parent_object_id = columns.object_id
                                  AND default_constraints.parent_column_id = columns.column_id
                        LEFT OUTER JOIN sys.check_constraints
                            ON check_constraints.parent_object_id = columns.object_id
                                 AND check_constraints.parent_column_id = columns.column_id ) AS rows
    WHERE   table_type = 'user table'
                  AND schema_name LIKE '%'
                  AND table_name LIKE '%'
                  AND column_name LIKE '%'
                  AND nullability LIKE '%'
                  AND base_datatype LIKE '%'
                  AND declared_datatype LIKE '%'
    ORDER BY table_type, schema_name, table_name, column_id
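
    If you do build it into a view, a stripped-down sketch might look like the following (the view name is my invention, and this version keeps only the core columns; the full query above adds the length/precision decoration, keys, and constraints):

    --A sketch of the "build it into a view" suggestion, core columns only
    CREATE VIEW dbo.ColumnMetadataBasic
    AS
    SELECT schemas.name AS schema_name, objects.name AS table_name,
           columns.name AS column_name, columns.column_id,
           types.name AS declared_datatype, baseType.name AS base_datatype
    FROM   sys.columns
              JOIN sys.types
                  ON columns.user_type_id = types.user_type_id
              JOIN sys.types AS baseType
                  ON columns.system_type_id = baseType.system_type_id
                     AND baseType.user_type_id = baseType.system_type_id
              JOIN sys.objects
                  ON objects.object_id = columns.object_id
              JOIN sys.schemas
                  ON schemas.schema_id = objects.schema_id
    WHERE  objects.type = 'U'; --user tables only
    GO
    --then filter it ad hoc instead of editing the LIKE placeholders
    SELECT *
    FROM   dbo.ColumnMetadataBasic
    WHERE  table_name LIKE 'Customer%'
    ORDER BY schema_name, table_name, column_id;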

  • SQL Rally Relational Database Design Pre-Con Preview

    On May 9, 2012, I will be presenting a pre-con session at the SQL Rally in Dallas, TX on relational database design. The fact is, database design is a topic that demands more than a simple one hour session to really do it right. So in my Relational Database Design Workshop, we will have seven times the amount of time in the typical session, giving us time to cover our topics in a bit more detail, look at a lot more designs/code, and even get some time to do some design as a group. Our topics will be:

    • Theory - More or less the foundational principles and processes that will presumably help you understand the "why" behind the rest of the material.
    • General Data Modeling - The basic concepts behind database design and data modeling, graphically and semantically. Terminology and concepts will be covered to make sure that when I say a word, you know what I mean. The field of computer science is littered with confusing terminology that needs to be made clear.
    • Normalization - Basically the process of making your database work well with the relational engine, from both a performance and data integrity point of view.
    • Physical Modeling - The actual process of creating a working database by choosing proper data types, protecting the data integrity, etc. We will discuss the steps, as well as take a look at a model that is implemented.
    • Implementation Patterns - For the most part, the primary pattern that we use in a relational database is normalization. Databases, tables, and columns are rather easily mapped to requirements, and using normalization, we usually will arrive at a solution. However, there are certain types of solutions that crop up in common implementations. Examples include uniqueness, hierarchies, files, user-specified schema, data-driven design, and more.
    • Other Miscellaneous Advice - Time permitting, I have a set of slides that cover some basic performance/security/implementation related material.

    A lot of the material/examples/demos come from my 2012 book (SQL Server 2012 Relational Database Design and Implementation) that will be shipping just before the Rally, so I am making arrangements to get copies of the book for most if not all of the attendees. So the day will introduce the material to you in a rapid format, and then you can take the material home and read it again (and my email is always available for further questions on the material as well.)

  • Utility Objects–Waitfor Delay Coordinator (SQL Server 2008+)

    Finally… took longer than I had expected when I wrote this a while back, but I had to move my website and get DNS moved before I could post code…

    When I write code, I do my best to test that code in as many ways as necessary. One of the last types of tests that is necessary is concurrency testing. Concurrency testing is one of the most difficult types of testing because it requires running multiple processes simultaneously and making sure that you get the correct answers multiple times. This is really difficult when you need to run more than one statement that takes only a second or two simultaneously with another (or even 10 other) connections.  What I am trying to replicate is the WAITFOR statement with a DELAY, but where multiple connections use the same delay.

    As I was writing my sequences presentation for SQL Rally, I wanted to test how values were allocated to each row based on different caching levels. I was looking at the clock, creating WAITFOR TIME statements, and copying the values into multiple windows as I have done so many times before. I was also thinking that I should start blogging again, and (like my sequence presentation) do something code oriented rather than design for a bit (I just finished my design book, so I am going to rest on that subject for a while, well, other than the SQL Rally precon that I am subliminally trying to get you to sign up for before prices go up again).
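
    For reference, the manual pattern that gets old fast is just this, pasted into every query window (my illustration):

    --pick a clock time far enough out to get every window started...
    WAITFOR TIME '17:55:03'; --...and update it by hand every single run

    --the statements you want to start simultaneously go here
    SELECT SYSDATETIME() AS actual_start_time;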

    So I envisioned a kind of multi-user WAITFOR statement that would say: start all processes in 20 seconds. (As I write this, I am envisioning a kind of semaphore version of this that you could hold until ready to go, but more on that some other day.)  I initially used some 2012 features like FORMAT, but I wanted to make sure this was useful to a wider audience.  The usage of the system is really simple; to have your connection wait for 20 seconds, you execute:

    EXEC Utility.WaitForSync$StartWait @WaitSeconds=20

    To have multiple connections start at the same second, you execute the same statement on each of the other connections.  The stored procedure calculates the time 20 seconds out, stores the value off, and executes a WAITFOR TIME statement on each connection you have executed this statement on.  After the time passes and the connection continues, you have to reset the connection or you will receive the following message (not an error, just a PRINT statement, since you may want the batch to complete):

    WaitForSync: An error occurred. Message:
    Too late to start another WAITFOR session. Last session DELAY time was 2012-03-20 17:55:03. Use Utility.WaitForSync$reset to start new sessions

    To reset the session, as the message says, just execute Utility.WaitForSync$reset. There are also procedures to view the time when the processes will continue. 

    If you want to skip the action when the WAITFOR TIME statements have already completed before you start an operation, the procedure will return a –100.  So you might use something like the following TRY…CATCH block to control execution:

    BEGIN TRY
        DECLARE @RETVAL int;
        EXEC @RETVAL = Utility.WaitForSync$StartWait @WaitSeconds=10;
        IF @RETVAL = -100
             RAISERROR ('Time Elapsed',16,1);

        --Other code

    END TRY
    BEGIN CATCH
        PRINT ERROR_MESSAGE();
    END CATCH

    Which would return something like:

    WaitForSync: An error occurred. Message:
    Too late to start another WAITFOR session. Last session DELAY time was 2012-03-20 21:20:06. Use Utility.WaitForSync$reset to start new sessions

    Time Elapsed

    You can download the code from my website’s downloadable code page

     

    Of course, the most interesting thing about downloading code is learning how to write your own code that uses the same patterns and techniques, so let me cover the highlights.  The table is pretty simple:

    CREATE TABLE Utility.WaitForSync
    (
        WaitforDelayValue nchar(8) NOT NULL,
        StartTime         datetime2(0) NOT NULL, --used to make sure you can do a waitfor that crosses a day boundary
        OnlyOneRow        tinyint PRIMARY KEY DEFAULT (1) CHECK (OnlyOneRow = 1)
    );

    It only allows a single row, using the PK constraint, a default, and a check constraint. A bit of a trick, but a definitely workable one to ensure a given cardinality in a table.  The two important columns are WaitforDelayValue, which holds the value that will be used in the WAITFOR statement, and StartTime, which is used to allow you to do your demos right around midnight (and yes, I put this in because when I was first using the procedure, it was taking a really long time to complete late one night.)
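
    A quick test of the trick (my own demo batch): the second insert violates the primary key on OnlyOneRow, since the column defaults to 1 both times, so the table can never hold more than one row.

    INSERT INTO Utility.WaitForSync (WaitforDelayValue, StartTime)
    VALUES (N'17:55:03', SYSDATETIME()); --succeeds

    INSERT INTO Utility.WaitForSync (WaitforDelayValue, StartTime)
    VALUES (N'17:55:04', SYSDATETIME()); --fails with a PRIMARY KEY violation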

    The idea is, the first user will put a row in here at a given time, then every caller after that (and before the StartTime) will use the same time.  After the StartTime, the process has already started.

    The simplest procedure is the reset procedure.  It just deletes the row from the table (even if other users are still in progress, but this is just a utility for demos and testing).

    CREATE PROCEDURE Utility.WaitForSync$Reset
    AS
    ----------------------------------------------------------------------------------------------
    -- Louis Davidson        drsql.org
    --
    -- Simple enough, just delete the row from the table
    ----------------------------------------------------------------------------------------------
        DELETE FROM Utility.WaitForSync
    GO

    The StartWait procedure is a bit more complicated than that.  It takes a parameter of @WaitSeconds that lets the first caller set the time.  Usually 10 seconds works great for a few connections, but it depends on just how much you need to coordinate.  The code is commented, and is really pretty simple.  The basics are that it checks to see if a row exists and grabs the value; if a row doesn’t exist, it creates a new one.

    CREATE PROCEDURE Utility.WaitForSync$StartWait
    (
        @WaitSeconds    int --minimum amount of time to wait. The WAITFOR statement
                                --starts as the second changes.
    )
    AS
    ------------------------------------------------------------------------------------------------
    ---- Louis Davidson        drsql.org
    ----
    ---- Either starts a new wait for session or uses the existing one
    ------------------------------------------------------------------------------------------------
    SET NOCOUNT ON
    BEGIN TRY
        DECLARE @StartTime datetime2(0),
                @WaitforDelayValue NCHAR(8),
                @ProcStartTime datetime2(0) = SYSDATETIME();
        --Get the row from the table where we hold the sync value.
        SELECT  @StartTime = StartTime,
                @WaitforDelayValue = WaitforDelayValue
        FROM   Utility.WaitForSync;

        --if a value was not already stored, we are just starting
        IF @StartTime IS NULL
          BEGIN
                --set the startTime to the current time + the number of seconds in the parameter
                  SET @StartTime = DATEADD(second,@waitSeconds,SYSDATETIME());

                --then I use a good old style convert with a format to get the time for the delay
                SET @WaitforDelayValue = CONVERT(nchar(8),@starttime, 108);

                --insert the value into the WaitForSync table. StartTime will be used to make sure
                --the time is later, even if it crosses the day boundary
                 INSERT INTO Utility.WaitForSync(WaitforDelayValue, StartTime)
                VALUES (@WaitforDelayValue,@StartTime);
          END
        --if the time has already passed, we raise an error to the client; you can't piggyback on the
        --existing session, so reset instead
        ELSE IF @StartTime <= SYSDATETIME()
          BEGIN
            DECLARE @msg nvarchar(1000)
            SET @msg = N'Too late to start another WAITFOR session. Last session DELAY time was '
                        + CAST(@startTime AS NVARCHAR(20)) + '.'
                       + N' Use Utility.WaitForSync$reset to start new sessions';
            RAISERROR (@msg,16,1);
          END
        --finally, we take the delay value we created and use it in an execute statement
        --note that SSMS won't show the PRINT until after the batch resumes.
        DECLARE @queryText NVARCHAR(100) = 'WAITFOR TIME ' + QUOTENAME(@WaitforDelayValue,'''');
       
        EXECUTE (@queryText);
        PRINT 'WaitForSync: Starting at: ' + CAST(@startTime AS NVARCHAR(20)) +
               ' Waited for: ' + CAST(DATEDIFF(SECOND,@procStartTime, SYSDATETIME()) AS VARCHAR(10)) + ' seconds.'
      END TRY
     
      BEGIN CATCH
        --benign output that won't stop anything. Simply keep going
        PRINT 'WaitForSync: An error occurred. Message: ' + CHAR(13) + CHAR(10)
                + ERROR_MESSAGE()
        RETURN -100 --The caller can use the return value to decide if they want to stop what they are doing
      END CATCH
    GO

    Finally, there is a viewing procedure that lets you see the time at which the waiting connections will continue.  Note that it has a return value of 1 if no session has been started, and a –100 if an expired session is out there or there is an error.
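
    The download has the real thing; a minimal reconstruction of my own (the procedure in the download may differ in name and detail) would look something like:

    CREATE PROCEDURE Utility.WaitForSync$ViewSession
    AS
    ----------------------------------------------------------------------------------------------
    -- A sketch of a viewer, following the return value contract described above
    ----------------------------------------------------------------------------------------------
        DECLARE @StartTime datetime2(0);

        SELECT @StartTime = StartTime
        FROM   Utility.WaitForSync;

        IF @StartTime IS NULL
          BEGIN
            PRINT 'WaitForSync: No session started.';
            RETURN 1; --no session started
          END

        IF @StartTime <= SYSDATETIME()
          BEGIN
            PRINT 'WaitForSync: Session expired at ' + CAST(@StartTime AS nvarchar(20)) + '.';
            RETURN -100; --expired session (or error)
          END

        PRINT 'WaitForSync: Connections will continue at ' + CAST(@StartTime AS nvarchar(20)) + '.';
    GO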

  • Utility Objects Series Introduction (but mostly a bit of an update)

    So, I have been away from blogging about technical stuff for a long time (I haven’t blogged at all since my resolutions blog, and even my Simple Talk “commentary” blog hasn’t had an entry since December!).  Most of this has been due to finishing up my database design book, which I will blog about at least one more time after it ships next month, but now it is time to get back to it, certainly a bit more regularly.

    For SQL Rally, I have two sessions: a precon on Database Design, and another session on Sequence objects. As I was building a demo with concurrent connections, I found myself again struggling with getting WAITFOR statements synchronized across multiple connections, so I built a quick tool to make it easier to get multiple connections to start simultaneously. As I did this, I realized that I have a ton of utility objects that may (or may not) be interesting to the rest of the community. Other concepts I have in store are objects to grant rights, do DDL that SQL Server doesn’t implement (like dropping a schema with all of its objects, or dropping a column and its constraints), metadata viewing, etc.

    So, I will present a series (maybe long, maybe short, who knows) of T-SQL utilities that I find useful, and perhaps you will too. If you have ideas for utilities you are interested in having built (or want me to share/link to), let me know either as a comment or email me at louis@drsql.org.


    And hey, if you want a day filled with database design, be sure and register for SQL Rally and my precon in May in Dallas, TX…

  • 2012 Blog Resolutions

    I skipped last year making blog resolutions, but this year I need to get myself back on the straight and narrow and encourage myself to do a few things.

    1. Finish my book quickly and efficiently – Well, duh, I suppose, but the quickly and efficiently is the most important part.  I have no idea when SQL Server 2012 will be released, but with RC0 having been publicly released, it isn’t going to be SQL Server 2013 now is it?

    2. Blog regularly – Note I didn’t exactly say more, but I do want to be regular. Perhaps a substantive blog (more than telling people about some event) once a month here on SQL Blog while working on a book, and 2 or 3 times otherwise.  On simple-talk my goal is to average 2 What Counts For A DBA blog entries a month.

    2.1 Blog about my other (computer) love occasionally – I love gadgets (I now carry 6 music/video players, including 1 Windows Phone 7, 4 Zunes, my new Android player, and a Nintendo 3DS) and software (user interfaces are my favorite pet peeve), and occasionally I feel like talking about them.

    3. Do 1 or 2 new presentations, and cut back on the ones on database design. I love talking about database design, and I will still put my main db design session in for conferences, but I have 3 new ideas for sessions that are about 50% code and 50% design (for example, a session on triggers, in which I want to show not only how they can be used, but why they should be used (or honestly not be used, which is a far larger list indeed)).

    4. Develop and present a variable length (1-3 day?) DB design seminar.  I am pretty happy with how the book is organized, based on how I forced myself to work on it this time. I had a bunch of ideas about how I wanted to change things up and shorten the book, but during the process, I realized the natural progression of how it ought to be done. I feel like it translates well to a class, even a bit more than the pre-con I have written that I did for the Orlando SQL Saturday (and which I will be happy to do for any SQL Saturday for any size group; I don’t care so much about making money, as long as my expenses are covered after the room is paid for…though I wouldn’t mind making a buck :)

    5. Get more involved with our local group. I attend, I help out, I did some work on trying to get SQL Rally to Nashville, but I think I could do more to help out there. It helps that I work with Christine Leo, Kevin Kline, and Joe Webb, all super dynamic folks who kick butt.

    In my personal life, I have to make a number of changes that are probably apparent to anyone who has met me or seen a picture of me… That will probably be the hardest of them all, though possibly the most important.

    Wow, looking at the list I realize why I stopped making resolutions… Maybe I should just resolve to go to Disney World more? Well, that would be way too easy!

  • Planning my Louisville SQL Saturday Presentation

    So it is Tuesday night, just a few days until my presentation entitled What Counts For A DBA, and I am still not completely sure exactly what is going to go on. In fact, I don’t exactly plan to know what is going on until the presentation is over.  On paper it seems like a simple idea. I am going to use 9 of the topics I have posted on my simple-talk blog about What Counts For a DBA (http://www.simple-talk.com/community/blogs/drsql/), and write them on my spinning wheel, which looks like this:

    [image: 24 Inch Dry Erase Spinning Prize Wheel, White Face]

    The extra three spaces will be prize slots, which will (the first time they are landed on) earn the spinner a book: I will bring a copy of MVP Deep Dives 2, a copy of my relational design book, and a copy of Tom LaRock’s DBA Survivor book too. In these three spots I will have a Joker that lets the spinner choose their own topic, and two envelope spots that will let them choose an envelope that contains one of my queued up topics that I will ask the spinner, as well as anyone else in attendance, to comment on. Any repeat topics will merit an envelope open as well.

    Will it work? I don’t have a clue. It will take me really stepping up my looseness when I speak, and an audience that isn’t prepping their minds for the Women In Technology lunch. Either I dropped 120 bucks on a spinning wheel that takes up garage space and is never used, or on one that is used a few times a year.  Either way, hope to see you in Louisville this weekend!

  • PASS Week/Speaking/Doing Schedule


    Well, we are finally here at what is the secular version of the holiday season for Microsoft SQL Server nerd types, the week of the SQLPASS Summit. This year, I am speaking 3 times and will also be doing the Quiz Bowl at the Welcome Reception, so I am going to be busy. If you are here and are interested in database design, please do stop by and check out my sessions.

    Monday and Tuesday I will be in side sessions that are NDA for much of the day, and that is probably all that I can say. Tuesday night will be the Quiz Bowl, where Tim Ford and I will attempt to one-up ourselves and (along with some unwitting (but hopefully full of wit) experts) entertain you with our yearly Jeopardy-esque wannabe game show.

    Wednesday, from 1-1:30, a bunch of the writers of the SQL Server MVP Deep Dives 2 book will be signing copies for you (there is a second signing at 7:15 - 8 AM on Friday, if you can stomach the earliness). If you want to see what is in the book, check http://www.manning.com/delaney/. The book will be on sale starting Tuesday afternoon, whether you want it signed or not. Note that all author proceeds of the book are going to charity (this edition’s proceeds going to http://www.operationsmile.org/), so you get a good book with lots of different subjects, and make a donation to a worthy cause. I am almost certain that you get an ebook edition of the book for free with the paper version that you can read on your portable device as well.

    At 4:45 - 6 PM, I will be doing my "Characteristics of a Great Relational Database" session that was picked up as an alternate. This needs to be a very interactive session, and my hope is to learn a bit from your ideas as well as the slides I have prepared. The session is fairly light and a bit humorous, so if you are feeling particularly serious and sour, well, my Thursday session is far less fun:

     

    Thursday afternoon, from 3-4:15 PM, I will be presenting "Database Design Fundamentals", which is a deeper dive into the concepts that go into designing a relational database. My goal here is to present the basics of the process of creating a database from conception until you are ready to start typing CREATE TABLE statements.

    Friday, I will be in a panel discussion called "Are you a Linchpin? Career management lessons to help you become indispensable.", representing the corporate developers in the world who like that they have one set of problems to solve that, while the basis never varies, can never really be solved because the demand outstrips the realities of the day. Other members of the panel include Jeremiah Peschka, Stacia Misner, Kevin Kline, Brent Ozar, Thomas LaRock, Andy Warren, and Andy Leonard.

    Saturday I go home...

  • Chapters 9, 10, 11, and 12

    So I have been a bit remiss in my blogging-the-book duties. The fact is, the first 8 chapters were fairly heavy rewrites and reworkings, with even a good amount of new material.  But when doing a new version of a book that has already existed, you do need to reuse a good deal of the material from the previous version.  Chapters 9, 10, and 11 are those chapters for this edition of the book.  The chapters are:

    • Chapter 9 - Database Security and Security Patterns – The biggest change to this chapter was the differences that Contained Databases bring to the picture.
    • Chapter 10 - Table Structures and Indexing – Minimal changes to this chapter, mostly concerning some of the changes to compression. Since columnstore indexes aren’t really pertinent to OLTP databases, I didn’t do anything with them. An example of a columnstore index will appear in Chapter 12.
    • Chapter 11 - Coding for Concurrency – Again, not a tremendous difference for Denali, so just a bit of touch up.

    Then comes Chapter 12. Back in the original version of the book, I had a reporting chapter, and I wasn’t amazingly pleased with it. So in 2005 I cut it, and decided to leave it out.  But it always felt like a bit of a hole in the book, having to say “don’t denormalize and do reporting in your OLTP database; build a data warehouse, which I won’t talk about”.  Since those early years, I have learned a lot about data warehousing, attended a Kimball class on dimensional modeling, and have designed our corporate data warehouse (with the requisite original “failure” before training, naturally.)  But I had to face facts: I was not the right person to write even a chapter on reporting/dimensional modeling.

    I had made a decision to not have cowriters for this version, a bit for space reasons (I loved having Kevin Kline as a cowriter for the past two versions, but I just didn’t have space last time, and we made it a download), a bit for coordination reasons (I had a devil of a time with a spatial section for the last book), and a bit for selfish reasons (you can figure that out for your own self.)  But a few months ago, as the outline gelled, I decided I just needed something about dimensional modeling…

    So I reached out to a particular writer named Jessica Moss (@jessicammoss), whom I have known for a while, and had recently worked with on my data warehouse project as a mentor who helped our team grow quite a bit in our ETL skills and helped mold our design.  I asked her if she wanted to write a chapter on dimensional modeling, and she accepted.  I have seen her early version of the chapter, and I am very excited to add it as chapter 12.

    Only one more chapter to write, number 13, which I will blog about in the next day or two once I get my ideas down on “paper” (and that is assuming I don’t decide to split the chapter, which would annoy my editor, so probably not!)

  • MVP Deep Dives 2: Coming Really Soon

    You probably have noticed that I haven’t blogged all that much these days. Part of the reason has been taking on way too many projects/speaking engagements/writing projects, etc. I am generally proud of all of these things, but the project that I am probably the most proud of is the MVP Deep Dives 2 book.  The project was helmed by the one and only Kalen Delaney, whom I have always admired for many reasons. Working with her on the project has been awesome, even when she had to crack the whip on us pokey editors.

    The other editors are a group of names that, let’s face it, if you have heard of me, you have heard of them, and probably long before me (well, unless you are a family member of mine!): Greg Low (who runs www.sqldownunder.com), Brad McGehee (the smiling face of many Red Gate ads and blogger at http://www.bradmcgehee.com/), Paul Nielsen (writer of the SQL Server Bible series until 2008 and a good friend of mine who loves database design like I do), and Paul Randal and Kimberly Tripp of SQLSkills fame. It is an honor to even be mentioned along with these names.

    You can see the table of contents and purchase the book here: http://www.manning.com/delaney/. Just like last time, the book starts out with a section on Architecture, edited by me. It is the smallest section, which is probably to be expected. The 6 chapters in Architecture focus on the softer topics, including constraints/uniqueness, storage, generalization of designs, and general characteristics of designs.  I hope you like it, but one thing that is excellent about a book like this is that there are 60 chapters, with something for everyone.

    If you are going to be at SQLPASS, there are scheduled to be a good number of books for purchase there, and we will have a mass signing again. I always feel funny about signing books, but the book signing was a lot of fun last time.  The charity for this book is Operation Smile (http://www.operationsmile.org/); all author and editor proceeds will go to them. Last time we raised well into a 5-digit sum for War Child, so it isn’t chicken feed in the least.


    One last note: over the course of the project I lost an author, due to his time commitments and some difficulty with the size of the chapter. I feel bad that we couldn’t work it out. His material was really good in its original form, though far too large for this format, and when we cut it down it just couldn’t be saved. The terrible part of being an editor is occasionally having to make a hard decision. In any case, this person’s writing, reasoning, etc. were all good, and if I see the material posted later I will link to it for you.

  • PASS First Timer Advice–Make it worth it!

    I have seen a lot of other people giving advice about what to do on your first trip to the SQL PASS Conference, and I want to give you my two cents’ worth as well. Many people will be pushing the social aspects of the conference, and that is excellent advice which I too will emphasize, but in my mind there is one main thing you need to do:

    Make it worth it.

    Someone has shelled out a pretty large sum of money to get you there, and they want to see some return on investment in order for you or your coworkers to do it again. There are far too many choices out there for training, and PASS is a great bet to really learn a breadth of information in a short amount of time. Add to that a pre-con or two and you can get some deep insight in a short period of time.  One thing that I love about it is that there are so many sessions that even after 10 years I can get some extremely deep information from super geniuses Conor Cunningham and Bob Ward (and not of the Wile E Coyote variety!), as well as some very deep information from a host of others, all on topics that I am already quite good at, and still learn a very valuable thing or two (and sometimes even more). Then I can pick up beginner and intermediate topics on stuff that I am just interested in.

    If you have problems you need solved, write them down and bring them with you. Bring your laptop with demonstrations of your problem. I know I love to help people out with design problems if they have enough information to make it easy to see what they are trying to do. The SQLCat team (http://sqlcat.com/) usually has a great presence and will talk to you about problems, and there are labs to try out features that you might not usually have access to. Add to that the lounge, with a bunch of current and future MVPs hanging out, willing to give you some time talking about SQL Server-related topics. Just don’t come to a technical session expecting to ask a question that takes 10 minutes of explaining and get your solution while everyone else waits…

    So take it somewhat seriously and learn something to take back to your company to show that the investment was worth it.  And try not to quit and change jobs the week after the Summit, if you can. Nothing kills a training budget like management getting the feeling that they are paying their employees to go to a job fair for a week.

    Now, as long as you can make the investment pay off for whoever has paid for you to come to the Summit (even if it is you!), have fun. There are tons of opportunities to have fun at the Summit. On the opening night we have a Quiz Bowl game where we quiz some of the smartest (goofiest) people in the SQL community on various insane topics.  There is a PASS Party one night, a dinner you can sign up to attend on Monday, and, if you keep your ears open, plenty of other happenings around the Summit. Right around the conference center there are a theater, numerous restaurants, and an excellent arcade, so there are plenty of places to hang out with the new PASS friends you make. And if you take one of the shuttles to your hotel with other people, you will probably meet a few people heading to the conference right after you get off of the plane (if you don’t bump into someone on the plane!). One of the best things about attending a large conference like this is that you can meet a lot of people you probably read/watch on the Internet and find out that they are just people (albeit people who spend a good amount of free time punishing various keyboard devices a little extra).

    I said I would mention it, and social networking is a very useful tool, especially at conferences. My suggestion is to sign up for Twitter (at least a few weeks prior to the Summit), get a Twitter client, and follow @sqlpass at a minimum (feel free to follow @drsql too!). Also use your Twitter client (or, if you refuse, a browser) to periodically watch a search on sqlpass: http://twitter.com/#!/search/sqlpass. All of the Twitter types will be telling everyone what is going on, so if you go to a session and don’t like it, you can find another that is good. If you want to find a group of people to hang out with one night, there is always something going on. A handy tool I have started using on my Windows Phone 7 is an app called Spout (there is an iPhone and Android version too) that lets you watch a Twitter stream, a Twitter search, a Facebook account, Google Reader, and several other feeds in a cool-looking rotating display. I used it at Devlink last week, and it was cool watching what everyone was saying based on a Twitter search of “devlink”. And the best part of using Twitter? The friends you make at the conference go home with you and become close friends over time, sometimes even those you never physically meet.

    In the end, you can either go to the conference, attend some sessions, and go home, or…end up with a head full of knowledge, some real new friends, a host of virtual friends, and a community that you can lean on when you have needs (of course, they will lean back too). And if you really like this conference stuff, there are lots of user groups, and these days there are little one-day PASS conferences called SQL Saturday all over the world that you can go to, where you will see some of the same people and lots of new faces.  Who knows, you might even find yourself compelled to speak at next year’s event!
