The SQL Server Blog Spot on the Web


Peter DeBetta's SQL Programming Blog

Peter DeBetta's blog about programming in SQL Server using technologies such as T-SQL, .NET, CLR, C#, VB, Visual Studio, and SQL Server Management Studio.

  • DevWeek Downloads

    For all those who attended any of my talks at DevWeek in London last week, I have included any changed slide decks and code samples in the attached zip file (at the bottom of the post).

    Enjoy!

  • Multiple-Valued Parameters

    So I was doing some research on which is the best way to pass multiple-valued parameters into a stored procedure or user-defined function when I found some interesting results. I had always used a string-based user-defined function to parse a delimited list of values and I knew I could use XML to do the same thing, yet I had never compared the two side-by-side to see which is a better choice.

    So which was faster? Well, using one of many available string-based UDFs, I got unexpected results. I thought the XML version would be equivalent, if not slower, but alas, I was wrong. The XML-based UDF consistently performed about 35% faster than the string-based inline table-valued UDF and about 65% faster than the string-based multistatement table-valued UDF I tested.

    I will be following up this post soon with a post about the various XML-based techniques that I tested…

    One last item of note – this post was published (tags and all) from Word 2007.

    As for now, here are the various functions used in the tests:

    CREATE FUNCTION dbo.fnString2IntList(@sData varchar(8000), @sDelim char(1))
    RETURNS @tList TABLE (ListValue int)
    AS
    BEGIN
        DECLARE @sTemp int, @nPos int, @nPos2 int
        IF (Len(RTrim(LTrim(@sData))) = 0) RETURN

        SET @nPos = CharIndex(@sDelim, @sData, 1)

        IF @nPos = 0 OR @nPos = Len(@sData)
         BEGIN
            SET @sTemp = CAST(SubString(@sData, 1, (Len(@sData) - Sign(@nPos))) as int)
            INSERT INTO @tList VALUES(@sTemp)
            RETURN
         END

        SET @sTemp = CAST(SubString(@sData, 1, @nPos - 1) as int)
        INSERT INTO @tList VALUES(@sTemp)

        WHILE @nPos > 0
         BEGIN    
            SET @nPos2 = CharIndex(@sDelim, @sData, @nPos + 1)
            IF (@nPos2 = 0) SET @sTemp = CAST(SubString(@sData, @nPos + 1, Len(@sData)) as int)
            ELSE SET @sTemp = CAST(SubString(@sData, @nPos + 1, ABS(@nPos2 - @nPos - 1)) as int)

            INSERT INTO @tList VALUES(@sTemp)
            SET @nPos = CharIndex(@sDelim, @sData, @nPos + 1)
         END
        RETURN
    END
    GO

    CREATE FUNCTION dbo.fnString2IntList2(@sData varchar(8000), @sDelim char(1))
    RETURNS TABLE
    AS
        RETURN
        (WITH csvtbl(i,j)
        AS
        (
            SELECT i = 1, j = CHARINDEX(@sDelim, @sData + @sDelim)
            UNION ALL
            SELECT i = j + 1, j = CHARINDEX(@sDelim, @sData + @sDelim, j + 1)
            FROM csvtbl
             WHERE CHARINDEX(@sDelim, @sData + @sDelim, j + 1) <> 0
        )
        SELECT CAST(SUBSTRING(@sData, i, j - i) as int) AS ListValue
        FROM csvtbl)
    GO

    /*    Assumes XML is as such
        <list>
            <i>1</i>
            <i>23</i>
        </list>
        etc...

        Uses minimal xml markup to keep input size as small as possible
    */

    CREATE FUNCTION dbo.fnXml2IntList(@xmlList xml)
    RETURNS TABLE
    AS
        RETURN (SELECT tList.ListValue.value('.', 'int') AS ListValue
                FROM @xmlList.nodes('list/i') AS tList(ListValue))
    GO
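    For readers who want to see the two input shapes side by side outside of SQL Server, here is a minimal Python sketch (mine, not from the original tests) that parses the same delimited-string and minimal-XML inputs the UDFs above accept:

    ```python
    # Sketch (mine, not from the post): the two input shapes the UDFs
    # above accept - a delimited string and minimal XML markup.
    import xml.etree.ElementTree as ET

    def string_to_int_list(data, delim=","):
        """Rough equivalent of dbo.fnString2IntList: split a delimited string."""
        data = data.strip()
        if not data:
            return []
        return [int(part) for part in data.split(delim)]

    def xml_to_int_list(xml_list):
        """Rough equivalent of dbo.fnXml2IntList: shred <list><i>1</i>...</list>."""
        return [int(node.text) for node in ET.fromstring(xml_list).findall("i")]

    print(string_to_int_list("1,23,456"))                           # [1, 23, 456]
    print(xml_to_int_list("<list><i>1</i><i>23</i><i>456</i></list>"))  # [1, 23, 456]
    ```

    Note how little markup the XML form adds over the delimited string; that small footprint is part of why the XML approach stays competitive on input size.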

  • Folder Info

    Although not a SQL Server post, I thought this would be helpful for anyone...

    In an attempt to clean up my laptop, I have been archiving old projects, setup files, etc., off to a file server. Alas, some directories have a multitude of subdirectories, and trying to figure out how much space each takes up is cumbersome at best. So, I decided to write a quick utility that iterates through all files in a directory and its subdirectories (or not, if you so choose).

    It outputs its results to a multiline text box, so you can easily copy and paste into Excel, sort the results, and see which directories are using all your hard drive space.

    The program requires the .NET 2.0 Framework.

    And of course, use at your own risk.
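    As a rough illustration of what the utility does (this is a hypothetical Python sketch, not the actual .NET code), summing file sizes per directory looks like this:

    ```python
    # Sketch (hypothetical Python, not the actual .NET utility): total bytes
    # per directory, optionally recursing into subdirectories, with
    # tab-separated output that pastes cleanly into Excel.
    import os

    def folder_sizes(root, recurse=True):
        sizes = {}
        for dirpath, dirnames, filenames in os.walk(root):
            if not recurse:
                dirnames.clear()  # stop os.walk from descending further
            sizes[dirpath] = sum(
                os.path.getsize(os.path.join(dirpath, f))
                for f in filenames
                if os.path.isfile(os.path.join(dirpath, f))
            )
        return sizes

    # Largest directories first, one "path<TAB>bytes" line per directory
    for path, size in sorted(folder_sizes(".").items(), key=lambda kv: -kv[1]):
        print(f"{path}\t{size}")
    ```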

  • Non-Technical: Blog Tag

    It seems that a new game has been making its way around the blog sites: Blog Tag. I was myself just tagged by Richard Hundhausen yesterday. Luckily I got a heads-up about it or I might not have known for weeks. Anyways, part of the game is that you write 5 things about yourself that people probably didn't know, and then you tag five more people. Five things... Hmmm...

    1. My first introduction to computers and programming was the Atari 400, which I still possess to this day, and once every couple of years I like to take it out for a spin.
    2. I graduated from Bucknell University in 1990 with a BA in Linguistics and had no formal training until I took a training job in Southwest Connecticut back in 1992.
    3. The last time I played tag was in the spring of 2000 during a production of Taming of the Shrew at Raleigh Little Theatre (Raleigh, NC) where I played the part of Pedant. The game is played during live productions, very discreetly, on stage, and certainly adds a bit of thrill to group scenes.
    4. I sing and play guitar, the latter mostly to backup my singing. I have written and performed some original pieces that you can find here. I am self-taught on guitar, but am very much out of practice and never got to a level that I considered to be expert. I'm hoping to change that in the coming years as my kids begin to take part in music (my son already sings quite well for a 3 year old).
    5. I have been working since I was about 12 and have done the following jobs: Newspaper delivery boy, supermarket stock boy, video store clerk, and more notably, locksmith, graphic artist, and high school teacher.

    Well, I think that is more than anyone ever needed to know about me. Now it's my turn to tag some fellow bloggers. I hereby tag Kirk Haselden, Bob Beauchemin, Kent Tegels, Michael Campbell, and my fellow SQLblog blogger Adam Machanic.

  • SQLblog News: SP2, Windows Live Writer, and Other Tools

    Many thanks to Marco Russo for telling me about Windows Live Writer and the bug it had with Community Server SP1 in relation to posting images. 

    So, first things first - I upgraded the SQLblog.com site to Community Server SP2. Very painless! :-) I definitely recommend using a file diff tool like WinMerge to determine which web files actually need updating. I also use FileZilla so I can queue up the changed files prior to processing the queue itself.

    And now, I am writing this post in Live Writer, and am including a picture in the post (which is very easy to do in Live Writer) of Rupert Jee (of David Letterman fame) and me in his Hello Deli shop outside of the CBS Studios in New York City.

    Happy Blogging!

  • Non-Technical: SQLblog, Upgrade, New Hosting Provider, Happy New Year

    Although it should be apparent from the news on the home page, SQLblog has changed hosting providers, but what may not be readily apparent is that SQLblog is now running on CommunityServer 2.1 (it was on 2.0 until last week). I had been prepping the site in my spare time over the last couple of months, including some upgrades to the ReverseDoS component, CSS changes, and other sundries. I did a number of test runs of moving the data from the old provider (SQL 2000) to the new one (SQL 2005), upgrading the schema, etc. Everything seems to be working very smoothly, and ServerIntellect (our new provider) has been very helpful in keeping it that way - a small issue was quickly resolved and so far, we are very happy with them. (No, I do not get any kickbacks if you switch to them.)

    Putting images inline is now easier with 2.1, as shown here (a picture of me and my son, Chris).

    I also want to wish everyone a very Happy New Year and good luck with the resolutions, whatever they may be. :-)


  • Medians, Actual Query Cost, and Statistics

    I was going to comment on Adam's post "Medians, ROW_NUMBERs, and performance" (which was a follow-up to Aaron's post "Take the high road to middle ground") and I realized that I needed to show some code and alas, comments are not so nice about formatting code, so I am posting a follow-up post...

    At the PASS Summit back in November, Itzik and I were examining this issue of the QP doing the table spool when one would think it shouldn't need to, and we were both stumped as to why. But further examination revealed some strangeness: even though the logical reads for Ben-Gan's solution are quite high, the actual query cost is less than that of Celko's solution. It makes one wonder: which is the better indicator of performance - actual query cost or statistics such as time and IO? I have seen people use actual query cost as the basis for choosing one solution over another. Is using actual query cost a valid way of deciding which solution to choose? Like I said, it makes one wonder....

    For Ben-Gan's solution, I too was dissatisfied with the QP's choice of doing the table spool. And so I decided to create a variation on Ben-Gan's solution that calculates the count for each CustomerID separately. Although the logical reads are higher than for Celko's solution, it is only 748 as compared to Celko's 703; and better yet, the execution time is [on average] just slightly better and the query cost is always better (37% versus 63%). Put a supporting index in place (CustomerID, TotalDue) and this solution's logical reads drop to 121 (versus 76 for Celko's), yet it is consistently better in execution time and actual query cost (12% versus 88%).

    Here is the query:

    ;with c as
      (SELECT CustomerId , TotalDue, 
        Row_Number()
          OVER (PARTITION BY CustomerId ORDER BY TotalDue) as rn
      FROM Sales.SalesOrderHeader
    )
    ,
    d as
      (SELECT CustomerId ,
        (COUNT(*)+1) /2 as cnt1,
        (COUNT(*)+2) /2 as cnt2
      FROM Sales.SalesOrderHeader
      GROUP BY CustomerID
    )
    SELECT c.CustomerId, Avg(c.TotalDue) AS Median
    FROM C
      INNER JOIN d
        ON c.CustomerId = d.CustomerId AND c.rn IN (d.cnt1, d.cnt2)
    GROUP BY c.CustomerId
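    For anyone who wants to sanity-check the cnt1/cnt2 arithmetic outside SQL Server, here is a small Python sketch (mine, not from the post) of the same median logic: for n sorted values, average the values at 1-based positions (n+1)/2 and (n+2)/2, using integer division just as T-SQL does:

    ```python
    # Sketch (mine, not from the post): the same cnt1/cnt2 median logic
    # as the query above, for a single group of values.
    def median(values):
        vals = sorted(values)
        n = len(vals)
        cnt1 = (n + 1) // 2   # (COUNT(*)+1)/2 - lower middle row (1-based)
        cnt2 = (n + 2) // 2   # (COUNT(*)+2)/2 - upper middle row (1-based)
        return (vals[cnt1 - 1] + vals[cnt2 - 1]) / 2

    print(median([3, 1, 2]))     # 2.0 - odd count: cnt1 == cnt2
    print(median([4, 1, 3, 2]))  # 2.5 - even count: average of the two middles
    ```

    For an odd count the two positions coincide, so the average is just the middle value; for an even count they straddle the middle, which is exactly why the query's `c.rn IN (d.cnt1, d.cnt2)` plus `AVG` yields the median.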

    I agree with Adam that cost-based optimization is far from perfect. But I don't like the inconsistency in the various results of STATISTICS IO, STATISTICS TIME, and actual query cost. And so what is the best way to decide which query performs better and which will scale better: STATISTICS IO, STATISTICS TIME, actual query cost, a combination of all of these, or perhaps something else (such as a detailed analysis of the actual query plans)?

  • XML Data Type Methods and SSIS Package Variables

    This is a follow-up to a posting from several months ago (Using XML Data Type Methods to query SSIS Packages). I included a temp table (based on information I found in the Microsoft.SqlServer.Dts.Runtime.VarTypeConvert class's constructor) which gives the conversion of the numeric data type value to the appropriate TypeCode (System.TypeCode). Thank goodness for Lutz Roeder's Reflector. This code iterates through all the executables in the package and for all variables returns the following properties: Executable Scope, Name, Namespace, Value, DataType (and data type description), EvaluateAsExpression, Expression, ReadOnly, RaiseChangedEvent, DTSID, Description, and CreationName. I am waiting on some confirmation about the property data types themselves (string lengths in particular) and will follow up as soon as I know more. In the meantime, here is the code. Be sure to change the package path to the location of your package file before running...

    DECLARE @x xml

    CREATE TABLE #t (PackageName varchar(100), PackageCode xml)

    SET @x = (SELECT * FROM OPENROWSET
       (BULK N'C:\Projects\MyPackage.dtsx', SINGLE_BLOB) AS Document)

    INSERT INTO #t VALUES ('MyPackage.dtsx', @x)

    CREATE TABLE #VarTypeCode (TypeID int, TypeName varchar(20))

    INSERT INTO #VarTypeCode VALUES(0, 'Empty')
    INSERT INTO #VarTypeCode VALUES(1, 'DBNull')
    INSERT INTO #VarTypeCode VALUES(2, 'Int16')
    INSERT INTO #VarTypeCode VALUES(3, 'Int32')
    INSERT INTO #VarTypeCode VALUES(4, 'Single')
    INSERT INTO #VarTypeCode VALUES(5, 'Double')
    INSERT INTO #VarTypeCode VALUES(6, 'Decimal')
    INSERT INTO #VarTypeCode VALUES(7, 'DateTime')
    INSERT INTO #VarTypeCode VALUES(8, 'String')
    INSERT INTO #VarTypeCode VALUES(9, 'Object')
    INSERT INTO #VarTypeCode VALUES(10, 'Object')
    INSERT INTO #VarTypeCode VALUES(11, 'Boolean')
    INSERT INTO #VarTypeCode VALUES(12, 'Object')
    INSERT INTO #VarTypeCode VALUES(13, 'Object')
    INSERT INTO #VarTypeCode VALUES(14, 'Decimal')
    INSERT INTO #VarTypeCode VALUES(16, 'SByte')
    INSERT INTO #VarTypeCode VALUES(17, 'Byte')
    INSERT INTO #VarTypeCode VALUES(18, 'UInt16')
    INSERT INTO #VarTypeCode VALUES(19, 'UInt32')
    INSERT INTO #VarTypeCode VALUES(20, 'Int64')
    INSERT INTO #VarTypeCode VALUES(21, 'UInt64')
    INSERT INTO #VarTypeCode VALUES(22, 'Int16')
    INSERT INTO #VarTypeCode VALUES(23, 'UInt16')
    INSERT INTO #VarTypeCode VALUES(24, 'Empty')

    ;WITH XMLNAMESPACES ('www.microsoft.com/SqlServer/Dts' as p1,
       'www.microsoft.com/SqlServer/Dts' as DTS),
    Executables AS
    (   SELECT CAST('' as varchar(100)) AS ParentNodeName,
           P.Vars.value('./p1:Property[@p1:Name="ObjectName"][1]', 'varchar(100)') AS NodeName,
           P.Vars.query('.') AS CurrentNode,
           P.Vars.query('./p1:Executable') AS ChildNodes,
           1 AS Level
        FROM #t
           CROSS APPLY #t.PackageCode.nodes('/DTS:Executable') AS P(Vars)

        UNION ALL

        SELECT Executables.NodeName AS ParentNodeName,
           P.Vars.value('./p1:Property[@p1:Name="ObjectName"][1]', 'varchar(100)') AS NodeName,
           P.Vars.query('.') AS CurrentNode,
           P.Vars.query('./p1:Executable') AS ChildNodes,
           Executables.Level + 1
        FROM Executables
           CROSS APPLY Executables.ChildNodes.nodes('/DTS:Executable') AS P(Vars)
    ),
    Variables AS
    (   SELECT Executables.NodeName, Executables.Level,
           V.Vars.value('./p1:Property[@p1:Name="ObjectName"][1]', 'varchar(255)') AS [Name],
           V.Vars.value('./p1:Property[@p1:Name="Namespace"][1]', 'varchar(255)') AS Namespace,
           V.Vars.value('./p1:VariableValue[1]', 'varchar(1024)') AS [Value],
           V.Vars.value('./p1:VariableValue[1]/@p1:DataType', 'int') AS DataType,
           V.Vars.value('./p1:Property[@p1:Name="Expression"][1]', 'varchar(1024)') AS Expression,
           V.Vars.value('./p1:Property[@p1:Name="EvaluateAsExpression"][1]', 'bit') AS EvaluateAsExpression,
           V.Vars.value('./p1:Property[@p1:Name="ReadOnly"][1]', 'bit') AS ReadOnly,
           V.Vars.value('./p1:Property[@p1:Name="RaiseChangedEvent"][1]', 'bit') AS RaiseChangedEvent,
           V.Vars.value('./p1:Property[@p1:Name="DTSID"][1]', 'uniqueidentifier') AS DTSID,
           V.Vars.value('./p1:Property[@p1:Name="Description"][1]', 'varchar(255)') AS Description,
           V.Vars.value('./p1:Property[@p1:Name="CreationName"][1]', 'varchar(255)') AS CreationName
        FROM Executables
           CROSS APPLY Executables.CurrentNode.nodes('/p1:Executable/p1:Variable') AS V(Vars)
    )
    SELECT V.NodeName AS ExecutableName, V.Level, V.[Name],
       V.Namespace, V.[Value], V.DataType, TCC.TypeName,
       V.Expression, V.EvaluateAsExpression, V.ReadOnly,
       V.RaiseChangedEvent, V.DTSID, V.Description, V.CreationName
    FROM Variables AS V
       INNER JOIN #VarTypeCode AS TCC ON V.DataType = TCC.TypeID
    ORDER BY Level, NodeName, [Name]

    DROP TABLE #t

    DROP TABLE #VarTypeCode
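    As a rough cross-check of what the recursive CTE is doing, here is a hypothetical Python sketch (mine, with a simplified, made-up DTS XML fragment; the property names match the query above) that walks Executables and lists each Variable with its scope, level, value, and DataType:

    ```python
    # Sketch (mine, simplified/hypothetical DTS XML): walk every Executable
    # recursively and yield its Variables, like the recursive CTE above.
    import xml.etree.ElementTree as ET

    NS = {"DTS": "www.microsoft.com/SqlServer/Dts"}
    DTS = "{www.microsoft.com/SqlServer/Dts}"

    sample = """<DTS:Executable xmlns:DTS="www.microsoft.com/SqlServer/Dts">
      <DTS:Property DTS:Name="ObjectName">MyPackage</DTS:Property>
      <DTS:Variable>
        <DTS:Property DTS:Name="ObjectName">Counter</DTS:Property>
        <DTS:VariableValue DTS:DataType="3">42</DTS:VariableValue>
      </DTS:Variable>
    </DTS:Executable>"""

    def prop(node, name):
        # Find the <Property Name="..."> child, as the value() calls do above
        for p in node.findall("DTS:Property", NS):
            if p.get(DTS + "Name") == name:
                return p.text
        return None

    def walk(executable, level=1):
        scope = prop(executable, "ObjectName")
        for var in executable.findall("DTS:Variable", NS):
            value = var.find("DTS:VariableValue", NS)
            yield (scope, level, prop(var, "ObjectName"),
                   value.text, value.get(DTS + "DataType"))
        for child in executable.findall("DTS:Executable", NS):
            yield from walk(child, level + 1)

    for row in walk(ET.fromstring(sample)):
        print(row)
    ```

    The `level + 1` recursion on nested Executable elements mirrors the `Executables.Level + 1` column in the CTE's recursive member.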

    --Peter

  • iPod and iTunes - Parsing the XML Library

    So I was trying to figure out a way to get my iTunes/iPod music library into SQL Server. Unfortunately, the XML in the library is not the easiest with which to work. Here is an abridged version of that XML:

    <plist version="1.0">
       <dict>
          <key>Major Version</key>
          <integer>1</integer>
          <key>Application Version</key>
          <string>7.0.2</string>
          <key>Show Content Ratings</key>
          <true/>
          <key>Tracks</key>
          <dict>
             <key>1288</key>
             <dict>
                <key>Track ID</key>
                <integer>1288</integer>
                <key>Name</key>
                <string>Brighter Than Sunshine</string>
                <key>Artist</key>
                <string>Aqualung</string>
                <key>Album Artist</key>
                <string>Aqualung</string>
                <key>Album</key>
             </dict>
          </dict>
       </dict>
    </plist>

    It uses a name-value pairing of the nodes, which would not have been my first choice for the schema. I would have gone with something more like this:

    <plist version="1.0">
       <key name="Major Version" value="1" type="integer"/>
       <key name="Application Version" value="7.0.2" type="string"/>
       <key name="Show Content Ratings" value="true" type="boolean"/>
       <tracks>
          <track id="1288">
             <key name="TrackID" value="1288" type="integer"/>
             <key name="Name" value="Brighter Than Sunshine" type="string"/>
             <key name="Artist" value="Aqualung" type="string"/>
             <key name="Album Artist" value="Aqualung" type="string"/>
             <key name="Album" value="Strange &amp; Beautiful" type="string"/>
          </track>
       </tracks>
    </plist>

    I digress. The point here is how to get the data out of the exported library file (File... Export Library... in iTunes). It was an interesting task, and the solution uses the XML, CTE, and PIVOT features of SQL Server 2005:

    DECLARE @x xml
    CREATE TABLE #t (LibraryID int identity(1, 1), Library xml)

    -- Load the XML; you will need to do the convert to drop the DTD refs
    -- and load without errors
    SET @x = (SELECT CONVERT(xml, BulkColumn, 2) FROM OPENROWSET
       (BULK N'C:\Library.xml', SINGLE_BLOB) AS Document)

    -- Use the nodes method to extract the tracks
    INSERT INTO #t (library)
    SELECT T.C.query('.') AS SongInfo
    FROM @x.nodes('/plist[1]/dict[1]/dict[1]/dict') AS T(C)

    -- Generate the name-value pairs
    ;WITH kvps AS
       (SELECT library.value('/dict[1]/integer[1]', 'int') AS [ID],
          t.c.value('local-name(.)', 'varchar(255)') AS [type],
          t.c.value('.', 'varchar(255)') AS [value],
          t.c.value('for $s in . return count(../*[. << $s]) + 1', 'int') AS RowNum
          -- this last one generates a row number for the returned nodes
       FROM #t
          CROSS APPLY library.nodes('/dict/*') AS T(C))
    , final AS
       (SELECT k.id,
          k2.RowNum / 2 AS [RowNum],
          k.value AS [Key],
          CASE WHEN k2.[value] IS NOT NULL AND k2.[value] != ''
             THEN k2.[value] ELSE k2.type END AS [Value],
          CASE WHEN k2.[value] IS NOT NULL AND k2.[value] != ''
             THEN k2.[type] ELSE 'bit' END AS [Type]
       FROM kvps AS k
          INNER JOIN kvps AS k2
             ON k.rownum + 1 = k2.rownum AND k.id = k2.id
       WHERE k2.rownum > 1 AND k2.rownum % 2 = 0
    )
    SELECT *
    INTO #t2
    FROM final

    -- Pivot the results
    ;WITH t2 AS (SELECT [id], [key], [value] FROM #t2)
    SELECT [id], [Name], [Artist], [Album Artist], [Album], [Genre],
       [Kind], [Size], [Total Time],
       IsNull([Disc Number], 1) AS [Disc Number],
       IsNull([Disc Count], 1) AS [Disc Count],
       IsNull([Track Number], 1) AS [Track Number],
       [Track Count], [Year], [Date Modified],
       [Date Added], [Bit Rate], [Sample Rate]
    INTO #t3
    FROM t2
    PIVOT
       (MAX([Value])
       FOR [key] IN
          ([Name], [Artist], [Album Artist], [Album], [Genre], [Kind],
          [Size], [Total Time], [Disc Number], [Disc Count],
          [Track Number], [Track Count], [Year], [Date Modified],
          [Date Added], [Bit Rate], [Sample Rate])
       ) AS pvt

    -- Voilà - you have a list of tracks
    SELECT * FROM #t3
    ORDER BY [Artist], [Album], [Disc Number], [Track Number]

    --DROP TABLE #t
    --DROP TABLE #t2
    --DROP TABLE #t3
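    The trickiest part above is the kvps/final pairing. Here is a small Python sketch (mine, not from the post) of the same idea: number a dict's children in document order and pair each odd-positioned key element with the even-positioned value element that follows it, just as the RowNum self-join does:

    ```python
    # Sketch (mine, not from the post): the kvps/final pairing trick.
    # A plist <dict>'s children alternate <key> and value elements, so
    # pairing child n with child n+1 matches each key to its value.
    import xml.etree.ElementTree as ET

    track = """<dict>
      <key>Track ID</key><integer>1288</integer>
      <key>Name</key><string>Brighter Than Sunshine</string>
      <key>Compilation</key><true/>
    </dict>"""

    def pairs(dict_xml):
        nodes = list(ET.fromstring(dict_xml))  # children in document order
        result = {}
        for key_el, val_el in zip(nodes[0::2], nodes[1::2]):
            # Valueless tags like <true/> carry their meaning in the tag
            # name - the same CASE logic as the 'final' CTE above.
            result[key_el.text] = val_el.text if val_el.text else val_el.tag
        return result

    print(pairs(track))
    ```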

    If you want to make the data more permanent, don't use temporary tables.

    Now, this will take some time to run, depending on how many songs you actually have. My collection of 13,851 songs took 5 1/2 minutes to run this code and generate my results. Mind you, the XML file is 20 MB and the joins are awful; some indexes would make this go faster.

    The cool thing is that you could take this concept to the next level using other technologies (such as SSIS) in order to keep your SQL database up to date.

    FYI: I included the SQL file that contains all the code in this post.

    Enjoy!

    --Peter

  • SQL Connections Slides and Demos & Other Sundries

    Well, after a much-needed relaxing weekend with my family and another hectic start to the week, I have finally gotten all the slide decks and code ready for posting from the SQL Connections conference in Las Vegas last week.

    Special thanks to Richard Hundhausen - the SQL 2005 Worst Practices talk would not have been the same without him!

    I have been getting SQLblog.com ready for a big move to a new hosting provider. Once this is done, I will be posting some technical content again.

    --Peter

  • Non-Technical: I'm now a father of two...

    My lovely wife, Claudia, delivered our new baby girl on Monday, 25 September at 2:49 PM CDT. Kate weighed 6 lbs, 5 oz and was 19 inches long. She joins her 3-year-old brother, Christopher, to make us four in total.

    Katie gave us quite a scare during the pregnancy, with blood incompatibilities that, fortunately, never manifested into any real issues. We dodged the proverbial bullet and have a healthy little girl.

    I expect I won't be posting for the next couple of weeks, but I shall return soon...

    I've attached a picture of Chris about to give Katie a kiss.

  • Another RegEx Trick for SSMS

    So I wanted to have some consistency in some T-SQL code for bracketed identifier use. Objects in the code had bracketed identifiers in some cases, and not in others (sometimes even on the next line of code) and so I saw both TableName and [TableName]. Again, RegEx saved the day. For any given object OBJECT_NAME, use the following in the Search and Replace dialog (using the RegEx option):

    1. Find what:
      {[^\[]}{<OBJECT_NAME>}{[^\]]}
    2. Replace with:
      \1\[\2\]\3

    This will replace the OBJECT_NAME with [OBJECT_NAME] without losing surrounding white space or punctuation and without doubling up the brackets. So if I had 
      [SomeSchema].OBJECT_NAME
    and did this replace, it would then be
      [SomeSchema].[OBJECT_NAME]

    And of course [SomeSchema].[OBJECT_NAME] would not be affected, so you would not end up with [SomeSchema].[[OBJECT_NAME]] after replacing.

    But of course, be wise and either replace one at a time, or if you choose to replace all, always double check your code afterwards.
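    For anyone working outside SSMS, the same replace can be sketched with Python's re module (my translation, not from the post; SSMS 2005 uses the older Visual Studio regex dialect, where {...} groups map to (...) and the <...> word anchors map to \b):

    ```python
    # Sketch (mine, not from the post): Python re version of the SSMS replace
    # {[^\[]}{<OBJECT_NAME>}{[^\]]} -> \1\[\2\]\3
    import re

    def bracket(identifier, code):
        pattern = r"([^\[])\b(" + re.escape(identifier) + r")\b([^\]])"
        return re.sub(pattern, r"\1[\2]\3", code)

    print(bracket("OBJECT_NAME", "[SomeSchema].OBJECT_NAME "))
    print(bracket("OBJECT_NAME", "[SomeSchema].[OBJECT_NAME] "))  # unchanged
    ```

    The `[^\[]` and `[^\]]` guards are what keep an already-bracketed name from being bracketed twice, exactly as in the SSMS pattern.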

  • Accessors and Case-Sensitivity in T-SQL and CLR

    So in my preparation for the 2006 PASS Community Summit presentation that I will be giving on Managed Types in SQL Server 2005, I was mulling over the statement that "Accessors are case-sensitive" and decided to test out some things.

    Yes, if you have a UDT with a public field named "Test" in your CLR code (C# shown here)...

    public Int32 Test;

    ...then you must refer to that field as Test in your T-SQL code.

    DECLARE @d Date

    SET @d = '2004-02-29'

    SET @d.Test = 7

    If you try to refer to "test", you will get an exception

    Msg 6592, Level 16, State 3, Line 9

    Could not find property or field 'test' for type...

    But what happens if you implement both "Test" and "test"?

    public Int32 Test;

    public Int32 test;

    Well, they are both distinct fields and therefore are both usable, regardless of the database being case-sensitive or case-insensitive.

    DECLARE @d Date

    SET @d = '2004-02-29'

    SET @d.Test = 7

    SET @d.test = 10

    SELECT @d.Test AS Test, @d.test AS test

    Returns...

    Test        test

    ----------- -----------

    7           10

    (1 row(s) affected)

    Now I am not suggesting that you create accessors that vary only by case, just that it's possible...

  • Replacing Quoted Identifiers with Bracketed Identifiers

    So I just generated a DDL script using Visio, and it uses quoted identifiers for all object names. But I don't like quoted identifiers and much prefer bracketed identifiers. So how do I manage to replace all those pesky double quotes? Why, simply use "Find and Replace" in SQL Server Management Studio (SSMS)!

    1. SAVE THE SCRIPT FILE (just in case) before doing this operation!
    2. Open the Find and Replace dialog.
    3. At the bottom (in the "Find options") is a check box called "Use:". Check it.
    4. Select "Regular expressions" from the drop list below it.
    5. Then enter the following:
      1. Find what:
        \"{<.@>}\"
      2. Replace with:
        \[\1\]
    6. Click Replace All.

    That's it, you now have bracketed identifiers for your objects.

    WARNING: This changes all quoted text, including data in INSERT, UPDATE, or even DELETE statements, and therefore could cause issues with your data. This is meant to be used only with DDL scripts that contain no data. And please check before saving again and better yet, check the code into source control before doing the massive replace.
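    Outside SSMS, the same idea can be sketched with Python's re module (my translation, not from the post; the non-greedy (.+?) plays roughly the role of SSMS's {<.@>} group):

    ```python
    # Sketch (mine, not from the post): replace "quoted" identifiers with
    # [bracketed] ones, keeping the captured name via a backreference.
    import re

    def quoted_to_bracketed(ddl):
        return re.sub(r'"(.+?)"', r'[\1]', ddl)

    print(quoted_to_bracketed('CREATE TABLE "dbo"."Customer" ("ID" int)'))
    # CREATE TABLE [dbo].[Customer] ([ID] int)
    ```

    The same warning applies: this rewrites every double-quoted run, string literals included, so use it only on data-free DDL scripts.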

  • Really, what are your SQL Server "Worst Practices"

    Richard Hundhausen already posted about this topic on his blog, Tales from the Doghouse, but I thought it would be wise to post as well, if only to get more feedback.

    Rich and I will be presenting at SQL Connections about SQL Server "worst practices", while of course, highlighting "preferred practices".

    So, as Rich asked, "what are some of your 'worst practices' that you'd like to share?"
