
SELECT Hints, Tips, Tricks FROM Hugo Kornelis WHERE RDBMS = 'SQL Server'

  • Principles of Modeling: the Jargon Principle

    In one of my previous posts, I discussed whether data modeling is art or science, and I concluded that, unfortunately, the current state of affairs is that it’s closer to art than to science, whereas I would like to see the opposite. And I think that the same applies to process modeling.

    Back in 1994, I learned about a methodology that managed to transition data modeling from art to science. As a result, I have become much more effective at creating successful data models. The strength of this methodology is that it is founded on three basic principles that I have since embraced in everything I do.

    The Jargon Principle

    The first of the principles I learned in 1994 is called “The Jargon Principle”. I don’t recall the exact wording, but here’s how I would paraphrase it:

    “For all communication that takes place between analyst and domain expert, the analyst will use the jargon of the domain expert instead of forcing the domain expert to use the jargon of the analyst.”

    So, what does this mean, and how do you apply this principle?

    Jargon is great …

    For most people, the first association that comes with the word “jargon” is the nutty professor scribbling tons of unintelligible symbols on the chalkboard, or the highly qualified engineer that no one understands because he assumes that everyone not only knows exactly what an ACMF engine is, but is also extremely interested in discussing its details. And though these are definitely fine examples of jargon, I am using the term in a much broader sense here.

    The jargon that the Jargon Principle references applies to all forms of communication, be it language (spoken or written words), symbolic (numbers, diagrams, tables, etc), or any other form. As soon as this communication uses elements that are familiar to people in a certain group but not (or less) familiar to people outside that group, I consider it to be jargon.

    Every group and every profession has its own jargon. And for good reason. When I am in the United States to attend a conference (like, recently, the PASS Summit 2011), I sometimes switch the television to a sports channel. And then I invariably get to see people playing the game the Americans call football (even though they primarily hold the ball in their hands), and I then hear the commentator mention that a player is “three for seven”. I have no idea what that means. But I am sure it makes sense to all who regularly watch these games, and it would become very boring and longwinded if the presenter had to explain this bit of jargon every time he uses it.

    And just think how hard our own job would become if you could not say to your colleague that you believe that discount should be an attribute of the Sale entity instead of a relationship. Or if you could not use Entity-Relationship Diagrams to lay out the design of a new application and discuss it with your peers. Those are exactly the situations that jargon is intended for, because it helps you communicate more efficiently and more clearly with your co-workers – the shared jargon ensures rapid communication without the misunderstandings that could result from having to describe everything in “normal” English.

    … except when it isn’t.

    The examples above that illustrate the greatness of jargon all have one common factor: in each of them, the jargon was used to facilitate communication between people who share knowledge of the same jargon – except the first example, which left me sitting bewildered in front of the television set, wondering what the hey the commentator meant when telling me that a player was “three for seven”.

    When you, in your job role as a data modeler or process modeler, step outside the safety of your office where your coworkers are to discuss the next application to build with the domain expert, you are talking with someone who does not know the same jargon you do. If you have to create the data model for the local candy store and you ask the owner if she agrees that discount should be an attribute of the Sale entity, your only answer will probably be the “thud” of her jaw hitting the floor. If she answers anything else, either she is an expert in the field of data modeling herself, in which case she doesn’t need you, or she is afraid she’ll look dumb if she admits that your words are Greek to her and gives random replies in an attempt to mask her perceived lack of knowledge. Whereas in fact, you are the dumb person, because you asked a question in a jargon that is unfamiliar to the domain expert, setting her, yourself, and the entire project up for failure.

    What’s the alternative?

    I hope my examples above have convinced you that you should not ever use your own jargon, the jargon of the data or process modeler, when talking to domain experts. And since not talking to them is not an option, you are now left with only two options. The first is to avoid the use of any jargon. That might look like a viable option, but it has problems. How do you even know if a word is jargon or not? When I use the word “person”, I usually think about a human being. But for a lawyer, a person is a legal entity. He will use the word in that context without even realizing he’s using jargon. And I can’t prevent that, because (a) I may not even be aware that he is using jargon, and (b) even if I could, it would still mean forcing the domain expert to talk my (now jargon-free) language, a practice I don’t like.

    And that leaves only one option – modelers should use the domain expert’s jargon when talking to him or her. That’s the Jargon Principle.

    How?

    Stating a principle is easy, living up to it can be hard. But if I want to take myself seriously, if I want to ensure that I base my models on trustworthy information, I have to force myself to live up to this principle. Any other method of communicating with the domain expert simply carries too much risk of failure.

    And I’m not the only one who believes so. If you check job adverts for data modeling jobs, or even for analyst and developer positions, you’ll almost always see the requirement that candidates have several years’ experience in the company’s line of business. That is not because oil producers use different modeling languages or different dialects of T-SQL and C# than hospitals. It’s because people with several years in a line of business generally have picked up enough of the relevant jargon to be able to communicate efficiently with the domain experts. However, this interpretation of the Jargon Principle is both limiting (for the modeler) and dangerous (for the business).

    The “limiting” aspect is quite obvious. If your first job as modeler happens to be with a bank and you decide to move on after six years, you’ll probably have a good chance to be hired as data modeler for another bank – but not in any other industry. You might as well update the job title on your resume from “data modeler” to “data modeler in the banking industry”, for that is what you’ll be doing the rest of your working life. Unless you are willing to slip back into the junior role again, and to accept the corresponding pay cut.

    The “dangerous” aspect is less obvious. So let’s start with a quick show of hands. Everyone who has ever witnessed a data professional, be it a modeler, an analyst, a developer, or whatever other function, say something like “they don’t specify this configuration option, but I’ll put it in anyway, just in case”, or “rip this out? Hmmm, I’ll just comment it, they’ll surely want it back within a few months” (or encountered such commented code in decades old programs), or even “nah, that spec is incorrect, I’ll do it another way”, raise your hands. Yup, thought so. Now raise your other hand if the person who said that just happened to be you. I’m willing to bet that at least 80% of you are now in the universal “I surrender” position.

    This comes from the fact that several years of experience in an industry will teach you way more than just the jargon. You’ll find that your knowledge starts to match that of the domain expert, or even exceed it if the domain expert has less experience than you do. And if the domain expert says something that does not match your experience and your notions of how the business works, you may even be tempted to disregard the words of the domain expert and push on with the model that you think is the more correct one. And you might even be correct. But you might be wrong as well, and that’s where the danger lies. In a well-organized company, statements made by the domain experts are under scrutiny, because they make up the specs of the system. Deviations from those specs by a smart-ass modeler (or developer) have a much higher chance to go unnoticed until it’s too late.

    Bottom line

    I believe the Jargon Principle to be of utmost importance when doing data or process modeling. In the current state of affairs in our profession (that of data professionals), this unfortunately means that one needs sufficient experience in the industry of the domain expert to pick up his jargon. The down sides and risks of this have to be accepted as the lesser of all evils.

    However, I also believe that there must be a better way. I believe that it’s possible for a data modeler to speak the domain expert’s jargon without first having to master that jargon, by simply following rules that specify exactly what questions to ask and how to phrase them. How these rules work exactly is out of scope for this post, but those who own the first SQL Server MVP Deep Dives book (and those who don’t should buy it now, along with the second SQL Server MVP Deep Dives book – all author royalties go to charity, so you get two exceptional books and help children in need along the way) can read my chapter on finding functional dependencies (a small but important part of the work of a data modeler) to get some general idea how this communication method works.

  • Bin packing part 5: Set-based iteration

    One of the most common techniques authors use to keep their readers interested is to leave them with a cliff-hanger. It’s what I did when I finished part 4 of my series on the bin packing problem – never intending to leave you all hanging over a cliff for almost three years, though that is exactly what happened. My apologies to everyone who has been checking my blog on a daily basis all that time, in the idle hope of finally learning that faster method I promised.

    For those of you who have forgotten what I wrote in the previous parts, or who have never read them before, here are a few quick links:

    • The first post includes an explanation of the bin-packing solution, sets up some sample tables and test data, and establishes a baseline that all other solutions have to be compared against – for both performance (lowest execution time) and efficiency (lowest number of bins – sessions in the case of the chosen sample).
    • The second post introduces several techniques to increase packing efficiency, though at the cost of reduced performance.
    • In the third post, I investigate several ways to improve performance, and find out exactly how much they decrease packing efficiency.
    • The fourth post investigates how a completely set-based solution is guaranteed to find the best possible solution for bin-packing problems that are limited enough that you could do it by hand, but falls apart completely when you want to scale.

    Changed hardware, changed performance

    Three years have passed since I last worked on this series, and my test environment has changed considerably. My old laptop has been replaced by a newer one, with two disk drives, more memory and a faster, 64-bit processor. And I have also upgraded my DBMS to SQL Server 2008 R2, Service Pack 1. This has invalidated all my previous measurements, so I decided to first repeat the performance tests for the shortlist of relevant solutions that I included in part 4 – except for OrderDesc, since I found in part 4 that this should not have been listed as a relevant solution at all. I have executed each version ten times and calculated the average execution time from those test results. In most cases the execution times were fairly close; only the baseline showed a larger variation. The table below lists the results of my tests:

    [Table: average execution times of the baseline and the relevant cursor-based solutions on the new test environment]

    If you compare the table above to the table I posted in part 4, you’ll notice that all algorithms got faster by 13 to 20 percent. The overall performance boost is a great confirmation that the hard-earned cash I invested in my new laptop was not wasted. The difference in percentage performance gain suggests that extensive testing of all algorithms might yield some surprises; some of the algorithms previously excluded from the list above might have to be added again – but I don’t expect the difference to ever be more than a few percent. I decided not to spend the extra time that would have been needed for this full investigation. You are of course free to do so yourself, all the required code is still available from my blog. But after seeing the performance of the algorithm I’ll describe in this blog post, you’ll probably understand why I am not that interested anymore in whether any of the other cursor-based algorithms now happens to be one or two percent faster than those listed above.

    All at once or one at a time?

    One of the more common best practices for SQL Server is to avoid using cursors and other iterative solutions, and use set-based logic instead – and only choose an iterative solution if you are 100% sure that you have run into one of the very few situations where a set-based solution will not work. While I do endorse this best practice in general, it has a down side: it makes people believe that set-based processing (all rows at once) and iterative processing (one row at a time, usually with cursors) are the only alternatives.

    They are not. There are more options available. One of these alternatives that I have found to be highly useful in a few situations is what I have dubbed “set-based iteration”.

    Set-based iteration: the perfect blend?

    If you characterize set-based processing as “using a single query that processes all rows at once”, and iterative processing as “using a loop that processes one row per execution”, then you can characterize set-based iteration as “using a loop that processes many rows per execution”. So you have a set-based query that processes many (but not all!) rows, that is enclosed in a loop to repeat that query as often as needed. The challenge here is to find a form where the set-based query does not take too much time, yet processes as many rows as possible so that the number of iterations remains low.

    For the bin packing problem, this means that instead of filling one bucket at a time (as we did in the various cursor-based solutions), we’ll fill many buckets at once. The number of buckets to fill should be as high as possible, but not so high that we end up taking more buckets than needed, as the goal was to use as few buckets as possible. The only problem here is that we don’t know in advance how many buckets we will end up needing. But we do know the minimum number that will be required anyway – if the maximum seating capacity of the examination room is 100 students and there are 1742 students registered, we can be absolutely sure that there is no way we will ever pack those students into 17 sessions; we know for sure that we will need at least 18. So instead of opening one session and assigning registrations to it one at a time, we can now create 18 sessions at once, assign up to 18 registrations at a time to those sessions, and repeat this until either all registrations are assigned or all sessions are full. If at that point we are still left with unassigned registrations, the distribution of registration sizes was apparently unlucky and we need one or more extra sessions for the remaining registrations; this is done by simply repeating the process for only the unassigned registrations.
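
    Purely as an illustration of that lower bound (this is not code from the attached SetBasedIter.sql; the table and column names are assumptions), the calculation boils down to a single aggregate:

        -- Illustrative sketch only: dbo.Registrations and NumCandidates are assumed names.
        DECLARE @MaxCandidatesPerSession int = 100;

        SELECT CEILING(SUM(NumCandidates) * 1.0 / @MaxCandidatesPerSession) AS MinSessions
        FROM   dbo.Registrations;
        -- With 1742 candidates in total this returns 18: seventeen sessions of 100 seats
        -- can never hold them all, so we create 18 empty sessions right away.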

    The algorithm in detail

    The exact algorithm I use in my “set-based iteration” solution for the bin packing problem needs some explaining, so I decided to use some sample data and some pretty (ahem) pictures to illustrate the various steps. To keep it simple, I limit the bucket capacity to 10, and I pack 9 packages, three of size 6, three of size 5, and the last three of sizes 3, 2, and 1. To find the minimum number of bins required, we calculate the total size of all packages (3 * 6 + 3 * 5 + 3 + 2 + 1 = 39) and divide by bin size, rounding all fractions up (39 / 10 = 3.9, rounded up to 4 bins). So we immediately create 4 empty bins.

    To assign packages to these bins, we find the threshold (the largest available capacity in the current range of bins – 10 for now, since all bins are still empty), rank the bins by descending available capacity, rank the packages that don’t exceed the threshold by descending size, and then assign packages to bins based on equal rank – but only if the package does actually fit in the bin with the same rank. This is illustrated below.

     
      [Figure: first assignment round – the four largest packages are assigned, by rank, to the four empty bins]

    The threshold calculation, the ranking of bins by remaining capacity, and the test that a package fits the bin with the same rank may all seem pretty pointless when assigning the first bunch of packages to the bins. But the same code is reused for later iterations and then these are all important ingredients, as you will see in a bit.
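
    Stripped to its essence, one such assignment round could look like the sketch below. The working tables #Bins and #Packages and their columns are invented for this illustration; the attached SetBasedIter.sql uses the real tables of the test scenario, but the threshold, the two ROW_NUMBER() rankings, and the “does it actually fit” check are the same ingredients described above.

        -- Invented working tables for the sketch.
        CREATE TABLE #Bins     (BinNo int PRIMARY KEY, SpaceLeft int NOT NULL);
        CREATE TABLE #Packages (PackageNo int PRIMARY KEY, Size int NOT NULL, BinNo int NULL);

        -- One assignment round: rank bins and eligible packages, pair them up by rank,
        -- and keep only the pairs where the package really fits.
        DECLARE @Threshold int = (SELECT MAX(SpaceLeft) FROM #Bins);

        WITH RankedBins AS
         (SELECT BinNo, SpaceLeft,
                 ROW_NUMBER() OVER (ORDER BY SpaceLeft DESC) AS rn
          FROM   #Bins),
             RankedPackages AS
         (SELECT PackageNo, Size,
                 ROW_NUMBER() OVER (ORDER BY Size DESC) AS rn
          FROM   #Packages
          WHERE  BinNo IS NULL          -- not assigned yet
          AND    Size  <= @Threshold)   -- skip packages that no longer fit anywhere
        UPDATE p
        SET    BinNo = rb.BinNo
        FROM   #Packages      AS p
        JOIN   RankedPackages AS rp ON rp.PackageNo = p.PackageNo
        JOIN   RankedBins     AS rb ON rb.rn = rp.rn
        WHERE  rp.Size <= rb.SpaceLeft; -- discard pairs such as package F / bin 1 below
        -- After this, recompute SpaceLeft and repeat until an iteration assigns nothing.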

    After assigning these first four packages, the remaining capacity of all bins is recalculated and the process repeats – the threshold is calculated (now 5, since that is the largest remaining capacity). None of the remaining packages exceeds this threshold, so all remaining packages are ranked by size; all bins are ranked by remaining capacity, and packages are once more assigned to bins based on equal ranks, as illustrated below:

    [Figure: second assignment round – packages E, G, and H are assigned to bins 4, 2, and 3; the pairing of package F with bin 1 is discarded because F does not fit]

    As you can see, package F and bin 1 are both ranked 2 in their respective orderings, so they are assigned to each other, but package F exceeds the remaining capacity of bin 1, so this combination is discarded, as indicated by the dotted arrow. The other packages all do fit in their assigned bins, so packages E, G, and H are assigned to bins 4, 2, and 3 respectively.

    On the third iteration of this step, bin 1 has the highest remaining capacity, so the threshold is set to 4. Package F at size 5 exceeds this threshold; this package won’t fit in any of the remaining bins, so it is exempted from the process until we have finished filling the current batch of bins and start a new series of bins.

    The bins are ranked by descending remaining capacity. The single remaining package is ranked as well, and then assigned to the bin with the same rank, as illustrated in the figure below:

    [Figure: third assignment round – the single remaining eligible package is assigned to the highest-ranked bin]

    A fourth iteration of this process doesn’t cause any new changes. There is only one package left to assign and it exceeds the threshold (that is now even down to 3), so the iteration stops here; all 4 bins that were assigned at the start of the process have been filled as far as possible with the available packages.

    Not all packages have been assigned to a bin, though. Apparently, the sizes of the available packages were distributed such that it was not possible to distribute them to only four bins; an extra bin is needed. So the whole process starts again from scratch, using only the single remaining package: calculate total size (5), divide by bin size and round up to find the minimum required number of additional bins (1), rank both the single bin for this batch and the single package, then assign the package to the bin. The end result can be seen below:

    [Figure: end result – the four original bins filled as far as possible, plus one extra bin holding the remaining package]

    The implementation

    I won’t spend much time on the T-SQL implementation of this algorithm. You can find the full code in “SetBasedIter.sql”, which is part of the ZIP file I attached to this blog post. I included comments where I thought they might be relevant. The T-SQL I used in this code uses several features that many won’t use on a regular basis, so just looking at this code and trying to understand how it works should already present a learning opportunity.

    But how does it perform?

    At the end of the day, the only thing we’re interested in is results. So I executed this procedure a total of ten times, and the average execution time is only 6,741 ms. That is an improvement of almost 90% over the baseline, and over 75% faster than FillThenNext, the previous fastest solution. And unlike FillThenNext, the SetBasedIter algorithm does not pay for its increased performance with lower efficiency. With my standard set of test data, SetBasedIter packs all registrations in a total of 19,293 sessions. Better than the baseline or the two fastest cursor-based algorithms, but admittedly not as good as the two slower cursor-based algorithms FillThenSearchDesc and Order50FirstB. But given the performance difference, I expect most companies to be willing to accept that 2% efficiency loss for the 80% performance gain.

    Too good to be true?

    You probably know the saying “when something sounds too good to be true, it probably is”. Well, this is the exception. Saving 75% on even the fastest solution does sound too good to be true, but all my tests show that it is actually true. Could that be a sign that this algorithm is not “too good to be true” at all? That it is, in fact, still not good enough? Well, to me it definitely isn’t good enough yet. In the next part of this series, I will show how some smart changes in the code of the set-based iteration algorithm can reduce its execution time even further. And I will also investigate how the various algorithms scale, for an algorithm that is the winner for this test set but scales exponentially could quickly become a loser when the company increases its business – and I like to know that kind of stuff before it happens!

    So stay tuned for the sixth part of this series. And I promise, it won’t be another three-year wait this time!

  • Bin packing part 4: The set-based disaster

    Almost a year ago now, I started a series of blog posts on the bin packing problem. But after the first three posts, various reasons caused the research I still had to do for the fourth part to be massively delayed. It’s only now that I have finally found the time to finish my research and write up the fourth installment.

     

    After a nine-month delay, I can hardly expect you to remember what I covered in the first posts, so you may want to follow these links to re-read them:

    • The first post includes an explanation of the bin-packing solution and establishes a baseline that all other solutions have to be compared against – for both performance (lowest execution time) and efficiency (lowest number of sessions, or bins).

    • The second post introduces several techniques to increase packing efficiency, though at the cost of reduced performance.

    • In the third post, I investigate several ways to improve performance, and find out exactly how much they decrease packing efficiency.

     

    Changed version, changed performance

     

    All performance numbers quoted in the previous posts were based on tests on my laptop, which was at that time running SQL Server 2005, SP2. But not anymore. In the meantime, I have upgraded my laptop to SQL Server 2008 RTM, so it’s highly probable that performance of all queries tested so far has changed – hopefully for the better. Since I don’t want to revive SQL Server 2005 on my laptop, I had no choice but to repeat all tests that I ran earlier in the course of researching and writing the first three parts of this series. I won’t bore you with all the details; instead I’ll just show an updated version of the table that I posted in the conclusion of the third part, listing the performance and efficiency of the baseline and of all versions of the algorithm that show an interesting trade-off between efficiency and performance.

     

    Version                            Execution time (ms)    Number of sessions

    Baseline                                        76,928                19,457
    FillThenNext                                    37,968                23,630
    FillThenSearch (with index)                     42,353                19,749
    FillThenSearchDesc (with index)                 46,977                18,925
    Order50FirstB (with index)                      63,924                18,923
    OrderDesc (with index)                          73,824                18,923

     

    If you compare the table above to the table I posted in part 3, you’ll notice a few interesting changes:

    • There are no really big performance changes – queries got just a couple of percent faster or slower, but not the big speedups one would hope for when moving to a new version.

    • Two of the algorithms, the baseline and OrderDesc (with index), are now slower than before. The others all got a bit faster.

    • There’s also a new algorithm in the table: Order50FirstB (with index). Without the index, this algorithm needed 18,932 sessions – but with the index, this improved to 18,923. Unfortunately, when I originally tested the effects of the index I was too focused on the performance to note the increased packing efficiency. So, at the presumed 18,932 sessions I concluded that another algorithm (FillThenSearchDesc) was both faster and more efficient, and thus excluded this one from the table – whereas in reality, Order50FirstB should have been in and OrderDesc should have been out (for it’s now exactly as efficient but slower). In fact, the only reason I still included OrderDesc (with index) in the table above is so that you can compare it to the table in the previous part.
    And in case you are wondering how it’s possible that adding an index affected packing efficiency as well as speed, here’s why: the Order50FirstB procedure uses a SELECT TOP 1 query without ORDER BY, which is known to produce nondeterministic results; in this case, adding the index obviously affected which sessions were returned whenever this query was executed, and this had its effects on the packing efficiency. Which only goes to prove, once more, that one should never use the TOP clause without ORDER BY. (Okay, in this case I’m pretty satisfied with the unexpected change, but it could just as well have been the other way around!)
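
    To see for yourself how TOP without ORDER BY behaves, here is a toy illustration (not taken from the bin packing scripts; the table variable and its contents are made up for this example):

        -- A tiny stand-in for the sessions table.
        DECLARE @Sessions table (SessionNo int, SpaceLeft int);
        INSERT INTO @Sessions VALUES (1, 4), (2, 4), (3, 2);
        DECLARE @Size int = 2;

        -- Nondeterministic: any qualifying row may be returned, and the choice can
        -- change whenever the plan changes, for instance after adding an index.
        SELECT TOP (1) SessionNo
        FROM   @Sessions
        WHERE  SpaceLeft >= @Size;

        -- Deterministic: the ORDER BY (with a tie-breaker) pins down which row TOP (1) means.
        SELECT TOP (1) SessionNo
        FROM   @Sessions
        WHERE  SpaceLeft >= @Size
        ORDER BY SpaceLeft, SessionNo;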

     

    Less sample data

     

    As promised in the previous part, I will now focus on developing a purely set-based solution for the bin packing problem. But before I can go there, I have to produce a new set of test data that is drastically reduced in size compared to the original test data. There are various reasons for this. One reason is that the set-based solution requires advance knowledge of the number of sessions required, with the size of the query expanding as the theoretical number of sessions increases. Another reason is that the size of the query also increases as the session size increases, so I wanted to test with a smaller session size first. And finally, the performance on this tiny test set turned out to be already pretty bad; I’ll try to make an estimate of the total execution time of a set-based solution for the original problem near the end of this post – after which you’ll understand why I never actually built or tested this version.

     

    In the attached ZIP file, you’ll find the file “Tiny Testset (Random).sql”. This is a new version of the random test data generator presented previously in the first part of this series, with the following modifications:

    • The maximum number of candidates per registration that was hardcoded as 100 in the original script has been replaced by a variable, defaulting to 10.

    • All quarters now use the same, simple expression to get an even distribution of registration sizes between 1 and the maximum size. The switches to include or exclude each quarter have been removed. I felt that the extra complexity of different distributions was not required in this case.

    • Since I had to code the query for a fixed maximum number of possible solutions, I added an extra variable for the maximum number of candidates in all sessions for each quarter; if the random distribution of registration sizes causes this to be exceeded, the script will simply remove the largest registrations until the total size of all remaining registrations in each quarter no longer exceeds the maximum.

     

    For my first round of testing, I decided to limit the sessions to a maximum of 20 candidates, and to assume a maximum of four sessions per quarter. I limited the number of subjects (registrations per quarter) to 10, and set the maximum registration size to 10 as well. With these numbers, I decided to set the maximum number of candidates per quarter to 70. In theory, this still allows for impossible data combinations – for instance, there is no way that you can ever fit 10 registrations for 7 candidates each in less than 5 sessions – but none of the tests I ran ever actually ran into this problem. But if you are going to run your own test with a different seed for the randomizer, or with different settings for other variables, you need to be aware of this possibility. Since the set-based algorithm will simply not return any results for a quarter that needs more sessions than the query caters for, this problem is easy to spot – you just need to check that the query returns results for as many quarters as your test data spans (which can be done by simply checking the number of rows returned – easily visible when returning results in grid view in SQL Server Management Studio).

     

    The set-based solution

     

    Credit where credit is due. The set-based solution I present here is not mine. I have first seen this solution in a newsgroup posting by John Gilson, dating back to early 2005. His code did contain some minor errors that I corrected, and I have also added some optimizations to his code – but the logic used in this solution is entirely his.

     

    The set-based solution presented here is not implemented in a single query but as a series of views, building on top of each other. This has several advantages. One of them is that the logic is easier to understand. Another advantage is that it prevents endless code repetition: every view can reference the previously defined views by name instead of repeating the entire definition of the view as a subquery. This does not automatically lead to performance gains though – SQL Server will replace each view by its definition before handing the result over to the query optimizer, so the resulting plan will be exactly the same as if the entire query were written out referencing only base tables.

     

    As in the previous parts, the code is too long to reproduce here. In the attached ZIP file, you’ll find the code for this solution in the script “PureSetbasedViews.sql”, complete with extensive comments. This code also contains (commented out) statements to review the contents of each of the intermediate views, so if you wish you can uncomment them and see how the steps build upon each other to massage the data into the final solution.

     

    Step 1: Aggregating by size

     

    The first step of the algorithm is pretty simple. It builds on the basic idea that in any given solution, it’s not important which subjects are included in each session, but only how many of each size. And for the purpose of the bin packing algorithm, subjects with the same number of candidates are completely interchangeable. As an example, let’s say that there are five registrations for a given quarter, three for 6 candidates each and two for 9 candidates each. This will be represented by five rows in the dbo.Registrations table, but it can just as well be represented by two rows – one row representing 3 registrations for 6 candidates, and a second row representing 2 registrations for 9 candidates. This is implemented in the attached code in the dbo.RegsBySize view.
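
    The view itself boils down to a simple GROUP BY. The sketch below shows the idea; the Year, Quarter, and NumCandidates column names of dbo.Registrations are guesses for the purpose of this illustration, so check the attached script for the real definition:

        -- Sketch of dbo.RegsBySize: collapse interchangeable registrations
        -- (same quarter, same number of candidates) into one row with a count.
        CREATE VIEW dbo.RegsBySize
        AS
        SELECT   Year, Quarter,
                 NumCandidates AS Size,
                 COUNT(*)      AS Cnt
        FROM     dbo.Registrations
        GROUP BY Year, Quarter, NumCandidates;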

     

    The great advantage of this is that it vastly reduces the number of potential (but not really different) solutions. For instance, there may be solutions where a session is built by combining one of the registrations for 6 candidates with one of the registrations for 9 candidates. When building straight from the dbo.Registrations table, any possible set-based solution would find 6 variations on this session (combining each of the three 6-candidate registrations with each of the two 9-candidate ones); when building from dbo.RegsBySize, there will be just one row representing this possible combination of two registrations.

     

    Step 2: Building sessions

     

    Using the aggregated registrations, the second step is to combine one or more registrations into possible sessions of all sizes from one up to the maximum session size. Combinations of registrations that would exceed the maximum session size are excluded, but combinations that are smaller than the maximum are not. The view dbo.PossibleSessions describes a completely denormalized result set, holding up to five (Size, Count) pairs from the dbo.RegsBySize view. Like in the previous step, only one row is created for each possible permutation – so based on the example above, there will be one row for “1 registration size 6 and 1 registration size 9 (total size 15)”, even though two such sessions could be formed from the total registrations available. Other possible sessions constructed from the above sample data would be “1 registration size 6 (total size 6)”, “1 registration size 9 (total size 9)”, “2 registrations size 6 (total size 12)”, “3 registrations size 6 (total size 18)”, and “2 registrations size 9 (total size 18)”.

     

    As you can see, this view also includes a column SessionSize (holding the total number of candidates in all the session’s registrations) and columns for the total number of registrations available of each size used. The former can easily be computed from the Size and Cnt columns, and the latter can easily be fetched by joining to dbo.RegsBySize – but redundantly including these columns here vastly reduces the amount of code required in the next step.

     

    The final column in this view, RowNum, uses the ROW_NUMBER() function to assign each possible session a unique number within its quarter. The numbers are assigned by ordering the possible sessions by session size and then by size and count of each of the (size, count) pairs; this could in fact easily have been replaced by any other ordering – the only goal is to have some method of assigning unique row numbers within each quarter. These numbers are used in the next step, when solutions are made by combining multiple possible sessions, to prevent duplicate solutions – for instance, if one solution combines possible sessions 1 and 2, we don’t want another solution formed by combining sessions 2 and 1. If you want to run this code on SQL Server 2000, or on any other database platform that does not support the ROW_NUMBER() function, check John Gilson’s original posting (that was written before SQL Server 2005 was released) to see how he achieved the same goal without the RowNum column – by adding lots of complicated comparisons in the next logical step.
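
    To make the role of RowNum a bit more tangible, here is a stripped-down fragment of how the next step can combine two possible sessions without also generating the mirror-image duplicate. All the capacity and availability checks of the real query are omitted, and I’m assuming the view exposes Year and Quarter columns alongside RowNum, as described above:

        -- Because ps2.RowNum may not be lower than ps1.RowNum, the combination (1, 2)
        -- is produced but (2, 1) is not; (1, 1) is still allowed, since the same
        -- possible session may be used more than once.
        SELECT    ps1.Year, ps1.Quarter,
                  ps1.RowNum AS FirstSession,
                  ps2.RowNum AS SecondSession
        FROM      dbo.PossibleSessions AS ps1
        LEFT JOIN dbo.PossibleSessions AS ps2
              ON  ps2.Year    = ps1.Year
              AND ps2.Quarter = ps1.Quarter
              AND ps2.RowNum >= ps1.RowNum;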

     

    You’ll probably note that the maximum session size of 20 is hardcoded in this view, since parameters are not supported in views. I could have used an inline table-valued user-defined function instead (sacrificing support for SQL Server 2000 and before), but that would not really have helped, since the number of columns this view requires is also dependent on the maximum session size. The five (size, count) pairs supported in this version are not chosen arbitrarily; it is the minimum number required to describe all possible permutations of registrations whose total size does not exceed 20. The smallest possible session size that would require a sixth (size, count) pair would be 21 (combining one registration each of the sizes 1, 2, 3, 4, 5, and 6). Similarly, a seventh (size, count) pair (and accompanying avail column) would be required when the maximum session size is 28 or more, and so on. For the maximum session size of 100 that I used in the original test data, no less than 13 (size, count) pairs are required – so if you want to extend the set-based solution to this original size, prepare to expand the view to a total of 43 columns, and no less than a whopping 307 lines of code (if I didn’t mess up my estimations…). I’m sure this makes you start appreciating why I chose to limit myself to a much smaller set of test data for this part of the series!

     

    Step 3: Finding solutions

     

    Now that we know all possibilities to assemble sessions of all allowed sizes, it’s time to combine these possible sessions into complete solutions. Of course, not just any combination is a solution – in order to qualify as a solution, each registration has to be used exactly once. This is implemented with a clever trick – the code checks that no registration is used more than once (or rather, that no registration size is used more often than the number of registrations of that size), and that the total number of candidates in all sessions combined equals the total number of candidates in all registrations. Since the only way to reach a total equal to the total registration size without using registrations twice is to use them all, this requirement is equivalent to the requirement to use each registration exactly once, but much easier to test.

     

    When assembling solutions from the possible sessions view, it’s important to remember that the possible sessions stores permutations – so it’s allowed to use multiple instances of the same possible session as long as the total number of available registrations of each size is not exceeded. So going back to the sample data presented earlier, one possible solution would include two occurrences of the “1 registration size 6 and 1 registration size 9 (total size 15)” session, plus one occurrence of “1 registration size 6 (total size 6)” to top it off. Another solution would include one occurrence of “3 registrations size 6 (total size 18)”, and one occurrence of “2 registrations size 9 (total size 18)”.

     

    If you look at the dbo.Solutions view in the code, you’ll see a design that is even more extremely denormalized than the dbo.PossibleSessions view already was. It includes the five (size, count) pairs already familiar from the latter, but repeats them four times – thus allowing for a maximum of four possible sessions to be combined to make up a solution. This means that the solution fails when it’s given data that requires five or more sessions to accommodate all registrations – it will simply not return any rows for that quarter. To solve that, all you have to do is add more columns for a fifth session, and add an extra LEFT JOIN to a fifth occurrence of the dbo.PossibleSessions view. You may have noted that the join condition for each join references all “previously” joined occurrences of dbo.PossibleSessions, thus increasing in size with each additional occurrence.

     

    Apart from the columns for the denormalized description of all sessions in a solution, the dbo.Solutions view includes four more columns. Apart from the unavoidable Year and Quarter columns, there is a NumSessions column holding the number of sessions used, based on a simple CASE expression (only added for easier understanding of the output; it’s not really required). The final column calculates a ranking for each solution by using the ROW_NUMBER() function, partitioning by quarter and ordering by number of sessions in the solution (least sessions used first) using a clever trick based on the fact that NULL sorts before other values (I could also have repeated the CASE expression used for the NumSessions column, but that would have been more typing). Note that the order of solutions with the same number of sessions is irrelevant for the approach used here. This Ranking column will be used in the next step.

     

    If you have already checked the query, you’ll probably understand that it’s impossible to scale this to the dimensions required to solve the original problem. The maximum session size of 100 candidates already forces us to use 13 (size, count) pairs in the dbo.PossibleSessions view, that all return here; the inclusion of one copy of that view per session in the solution moves us into some really insane figures. For instance, if you want to allow up to 200 sessions per quarter, you’d need to define 5,204 columns in the view (which is more than SQL Server allows), and the CREATE VIEW statement would have more than four million lines, well beyond the maximum batch size SQL Server allows. And that would still not be enough for the original test data – the tests conducted earlier show that some quarters need slightly more than 700 sessions. Maybe this query will find a way to pack them into less than 700, but then we’d first need to find a database that allows views with 18,204 columns and a definition of 51,155,633 (!) lines…

     

    Step 4: The best solution …

     

    After all this hardcore query magic, the fourth and final step is almost disappointingly simple. Finding the best solution is a simple matter of returning the solution with the lowest score in the Ranking column – i.e. a ranking equal to 1. And if you prefer to see all possible solutions with the lowest possible number of sessions, all it takes is to replace ROW_NUMBER() with RANK() in the definition of the dbo.Solutions view – this will assign the number 1 to all solutions with the lowest number of sessions, and higher numbers to all solutions that use more sessions.
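
    In other words, the final view is little more than a filter on that ranking; conceptually it is as simple as the sketch below (simplified; the actual definition is in the attached PureSetbasedViews.sql):

        -- Sketch of the final step: keep only the top-ranked solution per quarter.
        -- With RANK() instead of ROW_NUMBER() in dbo.Solutions, this same filter
        -- returns every solution that ties for the lowest number of sessions.
        CREATE VIEW dbo.BestSolutions
        AS
        SELECT *
        FROM   dbo.Solutions
        WHERE  Ranking = 1;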

     

    Once more, you can check John Gilson’s posting to see that without ROW_NUMBER() the dbo.Solutions view has to be joined to (a derived table based upon) itself. In practice this means that after replacing views by their definitions, the resulting query passed internally to the query optimizer doubles in size – I don’t even want to begin to imagine what effect this will have on this solution’s performance. Especially considering that performance is already pretty bad with using the ROW_NUMBER() function.

     

    Maybe this is also a good place to point out that the results you get when you query the dbo.BestSolutions view are not actually yet the final results. Okay, you will get a result set showing that, for some quarter, you have a session consisting of three registrations for five candidates plus one registration for three candidates, but you still have to select the four actual registrations to assign to this session. Since this blog post is already longer than any I’ve done before (and still growing), I decided to leave that part as an exercise to the reader. Just one word of warning – if you come up with a solution that uses multiple copies of the dbo.BestSolutions view, replace it with a temporary table holding the contents of the view. Otherwise, SQL Server will happily produce a query execution plan where dbo.BestSolutions is evaluated as often as you use it in your query – and trust me, once you’ve seen how it performs, you don’t want that to happen!

     

    … for some definition of “best”

     

    If you have already checked out the definitions of the views used, you probably don’t expect great performance anymore. And after reading my ominous hints above, you’re really bracing yourself – and with good reason. With the amount of test data reduced to just ten (or fewer) registrations per quarter, querying the dbo.BestSolutions view to find the best solutions for all quarters (on an empty cache) turns out to take an average execution time of 94,535 milliseconds – that’s more than one and a half minutes!

     

    I have already indicated that the original code that I based this solution on was written before SQL Server 2005 was available. That is probably the reason why separate views were used instead of a single query with CTEs. But now that I’m already running SQL Server 2008, there is of course no reason not to try a CTE-based version. After all, I also started to make use of the ranking functions that were introduced in SQL Server 2005. So I changed the view-based code to the CTE-based but otherwise equivalent code in PureSetbasedCTEs.sql (also in the attached ZIP file), and executed it a couple of times to see if this would affect performance. In theory, it shouldn’t – but in practice it did: the average execution time for the CTE-based version was 90,984 ms, almost 4% faster. Even more intriguing was my observation that the CTE-based version performed more consistently, with all measurements between 89 and 94 seconds, whereas the view-based version ranged from 80 to over 100 seconds. I have not been able to explain either the performance difference or the difference in variation.

     

    I also tested a third version. Not as “purely relational” as the views or the CTEs, but much better performing: by materializing intermediate results in temporary tables instead of keeping them virtual in views or CTEs, I worked around an annoying weakness in SQL Server’s query optimizer, namely that it does not realise that, for a query that includes multiple copies of the same logic, that logic need not be duplicated in the execution plan. Even in cases such as these, where the duplication is very easy to spot because it’s just multiple copies of the same view or CTE name, the optimizer will still spit out a plan that happily repeats the same steps over and over again. So I replaced the views with temporary tables, forcing SQL Server to materialize and reuse intermediate results instead of recalculating them over and over again. This code, which you can find in PureSetbasedTemp.sql in the attached ZIP file, did indeed perform lots better, though still a far cry from the cursor-based solutions I presented in the previous posts: 32,879 milliseconds for the tiny amount of test data this solution handles.
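
    The rewrite itself is straightforward; the fragment below shows the pattern (the actual statements are in PureSetbasedTemp.sql):

        -- Materialize the intermediate result once...
        SELECT *
        INTO   #PossibleSessions
        FROM   dbo.PossibleSessions;

        -- ...and let every later step read the temp table, so the work behind
        -- dbo.PossibleSessions is done once instead of once per reference.
        SELECT COUNT(*) AS NumPossibleSessions
        FROM   #PossibleSessions;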

     

    It gets worse

     

    Even though I already know that the set-based solution will never scale to the size required to handle the original problem, I still wanted to have some idea of how it scales as the maximum session size, the maximum number of sessions, and/or the amount of registrations increase. So I started with the “best” version so far, the version with temp tables, and created two variations on it: one that allows for one more session in the solution, bringing the total to a maximum of five sessions per quarter; and another one that does the same but also allows the session size to go up to a maximum of 35 candidates by using seven (size, amount) pairs for the possible sessions instead of five. These versions are also in the attached ZIP file as PureSetbasedTempMore.sql and PureSetbasedTempBigger.sql.

     

    I did not test these versions with data that actually required bigger or more sessions, but I did test all the versions with various amounts of data by changing the number of registrations per quarter. In the table below, you will find the number of registrations I chose, the actual average number of registrations per quarter after trimming the data down to at most 70 candidates per quarter, and the running time of each of the three set-based versions with temporary tables. Note that in this case, I executed the longer running queries only once instead of my usual practice of averaging the execution time of five executions. And in case you intend to test these queries yourself, make sure to size your databases appropriately – to repeat the tests I executed, you’ll need at least 200 MB in the data file and 400 MB in the log file. Allocate less, and autogrow will probably kick in, reducing performance even further.

     

    #Registrations per quarter            Execution time (ms)
    requested    actual               Base         More        Bigger

    6            6                     847        1,015         1,600
    7            7                   1,510        1,934         3,612
    8            8                   4,546        6,968        14,687
    9            8.975              11,980       23,071        57,347
    10           9.875              32,789       80,656       247,186
    11           10.675             70,220      202,123       726,233

     

    After seeing these numbers, I didn’t even try to extrapolate them to the original problem (that needs 13 (size, amount) pairs for possible sessions up to 100 candidates, and up to 700 sessions per quarter with the full amount of test data). It would probably be several years, maybe even centuries – though of course I have already demonstrated that the query will never run on any current database anyway, because it’s too long and returns too many columns.

     

    Back to the cursor?

     

    By now, you might have concluded that this post shows that for the bin-packing problem, using a cursor really is the only way to go. But allow me to disagree. While there is no doubt that a truly set-based single query solution is absolutely not viable here, there are still other possibilities. As I already indicated in the first part of the series, it is possible to combine set-based queries with iteration for a solution that packs about as well as the better cursor-based variants, but runs much faster. This solution will be the subject of my next post in this series – so stay tuned!

  • Speaking at PASS

    Are you planning to attend this year’s PASS Community Summit? There’s only three weeks left before the pre-conference seminars kick off, so if you’re not registered yet, now is the time to act!

     

    The session schedule shows two days with seven full-day pre-conference seminars each, followed by three days packed full of high-quality sessions – with four time slots per day and twelve sessions in parallel at each time slot, your problem will not be finding sessions that are of interest to you, but rather choosing between them.

     

    And you will be able to see and hear me speaking at two of those sessions.

    • On Wednesday, I will be one of the participants in Much Ado: A Panel Discussion About “Nothing”, a debate between several MVPs and SQLBloggers about the always heated subject of NULL.
    • On Thursday, I’ll present “Cursors or setbased? Why not both?”. In this session, I’ll show that using cursors or a set-based query is not an either/or decision, but that there are ways to combine the strengths of both approaches into a (sometimes!) winning strategy.

    (Please note that I’m not sure if the session schedule is final or still subject to change)

     

    So if you are at PASS, I hope that you’ll decide to drop in on either or both of those sessions. But even if you don’t, I still look forward to meeting you – so if you see me passing by and want to shake hands or exchange some opinions, don’t hesitate to catch me!

  • Data modeling: art or science?

    When I started blogging here on sqlblog.com, I intended to write about stuff like T-SQL, performance, and such; but also about data modeling and database design. In reality, the latter has hardly happened so far – but I will try to change that in the future. Starting off with this post, in which I will pose (and attempt to answer) the rather philosophical question of the title: is data modeling an art or a science?

     

    Before I can answer the question, or at least tell you how I think about the subject, we need to get the terms straight. So that if we disagree, we’ll at least disagree over the same things instead of actually agreeing but not noticing. In the next two paragraphs, I’ll try to define what data modeling is and how it relates to database design, and what sets art apart from science in our jobs.

     

    Data modeling and database design

     

    When you have to create a database, you will ideally go through two stages before you start to type your first CREATE TABLE statement. The first stage is data modeling. In this stage, the information that has to be stored is inventoried and the structure of that information is determined. This results in a logical data model. The logical data model should focus on correctness and completeness; it should be completely implementation agnostic.

     

    The second stage is a transformation of the logical data model into a database design. This stage usually starts with a mechanical conversion from the logical data model to a first draft of the implementation model (for instance, if the data model is represented as an ERM diagram but the data has to be stored in an RDBMS, all entities and all many-to-many relationships become tables, and all attributes and 1-to-many relationships become columns). After that, optimization starts. Some of the optimizations will not affect the layout of tables and columns (examples are choosing indexes, or implementing partitioning), but some other optimizations will do just that (such as adding a surrogate key, denormalizing tables, or building indexed views to pre-aggregate some data). The transformation from logical data model to database design should focus on performance for an implementation on a specific platform. As long as care is taken that none of the changes made during this phase affect the actual meaning of the model, the resultant database design will be just as correct and complete (or incorrect and incomplete) as the logical data model that it is based on.
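
    As a small, hypothetical illustration of that mechanical first draft (the entities and attributes are invented for this example, not taken from any real model): two entities with a many-to-many relationship between them become three tables, and the attributes become columns. No surrogate keys and no denormalization yet; those belong to the optimization phase described above.

        CREATE TABLE dbo.Student
            (StudentNumber int          NOT NULL PRIMARY KEY,
             StudentName   varchar(100) NOT NULL);

        CREATE TABLE dbo.Course
            (CourseCode    char(6)      NOT NULL PRIMARY KEY,
             CourseName    varchar(100) NOT NULL);

        -- The many-to-many relationship "Student enrolls in Course" becomes a table of its own.
        CREATE TABLE dbo.Enrollment
            (StudentNumber int          NOT NULL REFERENCES dbo.Student,
             CourseCode    char(6)      NOT NULL REFERENCES dbo.Course,
             PRIMARY KEY (StudentNumber, CourseCode));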

     

    Many people prefer to take a shortcut by combining these two stages. They produce a data model that is already geared towards implementation in a specific database. For instance by adding surrogate keys right into the first draft of the data model, because they already know (or think) that they will eventually be added anyway. I consider this to be bad practice for the following reasons:

    1. It increases the chance of errors. When you have to focus on correctness and performance at the same time, you can more easily lose track.
    2. It blurs the line. If a part of a data model can never be implemented at acceptable performance, an explicit decision has to be made (and hopefully documented) to either accept crappy performance or change the data model. If you model for performance, chances are you’ll choose a better performing alternative straight away and never document properly why you made a small digression from the original requirements.
    3. It might have negative impact on future performance. The next release of your DBMS, or maybe even the next service pack, may cause today’s performance winner to be tomorrow’s performance drainer. By separating the logical data model from the actual database design, you make it very easy to periodically review the choices you made for performance and assess whether they are still valid.
    4. It reduces portability and maintainability. If, one day, your boss informs you that you need to port an existing application to another RDBMS, you’ll bless the day you decided to separate the logical data model from the physical database design. Because you now only need to pull out the (still completely correct) logical data model, transform again, but this time apply optimization tricks for the new RDBMS. And also, as requirements change, it is (in my experience) easier to identify required changes in an implementation-independent logical data model and then move the changes over to the physical design, than to do that all at once if only the design is available.
    5. It may lead to improper choices. More often than I’d like, I have seen good modelers fall victim to bad habits. Such as, for instance, adding a surrogate key to every table (or entity or whatever) in the data model. But just because surrogate keys are often good for performance (on SQL Server that is – I wouldn’t know about other DBMS’s) doesn’t mean they should always be used. And the next step (that I’ve witnessed too!) is forgetting to identify the real key because there already is a key (albeit a surrogate key).

     

    Art and science

     

    For the sake of this discussion, “art” (work created by an artist) is the result of some creative process, usually completely new and unique in some way. Most artists apply learned skills, though not always in the regular way. Artists usually need some kind of inspiration. There is no way to say whether a work of art is “good” or “bad”, as that is often in the eye of the beholder – and even if all beholders agree that the work sucks, you still can’t pinpoint what exactly the artist has done wrong. Examples of artists include painters, composers, architects, etc. But some people not usually associated with art also fit the above definition, such as politicians, blog authors, or scientists (when breaking new grounds, such as Codd did when he invented the relational model). Or a chef in a fancy restaurant who is experimenting with ingredients and cooking processes to find great new recipes to include on the menu.

     

    “Science” does NOT refer to the work of scientists, but to work carried out by professionals, for which not creativity but predictability is the first criterion. Faced with the same task, a professional should consistently arrive at correct results. That doesn’t imply that he or she will always get correct results, but if he or she doesn’t, then you can be sure that an error was made and that careful examination of the steps taken will reveal exactly what that error was. Examples of professionals include bakers, masons, pilots, etc. All people that you trust to deliver work of a consistent quality – you want your bread to taste the same each day, you want to be sure your home won’t collapse, and you expect to arrive safely whenever you board a plane. And a regular restaurant cook is also supposed to cook the new meal the chef put on the menu exactly as the chef instructed.

     

    Data modeling: art or science?

     

    Now that all the terms have been defined, it’s time to take a look at the question I started this blog post with – is data modeling an art or a science? And should it be?

     

    To start with the latter, I think it should be a science. If a customer pays a hefty sum of money to a professional data modeler to deliver a data model, then, assuming the customer did everything one can reasonably expect1 to answer questions from the data modeler, the customer can expect the data model to be correct2.

     

    1          I consider it reasonable to expect that the customer ensures that all relevant questions asked by the data modeler are answered, providing the data modeler asks these questions in a language and a jargon familiar to the customer and/or the employees he interviews. I do not consider it reasonable to expect the customer (or his employees) to learn language, jargon, or diagramming style used by data modelers.

    2          A data model is correct if it allows any data collection that is allowed according to the business rules, and disallows any data collection that would violate any business rule.

     

    Unfortunately, my ideas of what data modeling should be like appear not to be in line with the current state of reality. One of the key factors I named for “science” is predictability. And to have a predictable outcome, a process must have a solid specification. As in, “if situation X arises, you need to ask the customer question Y; in case of answer Z, add this and remove that in the data model”. Unfortunately, such exactly specified process steps are absent in most (and in all commonly used!) data modeling methods. Lacking such rules, you have to rely on the inspiration of the data modeler – will he or she realize that question Y has to be asked? And if answer Z is given, will the modeler realize that this has to be added and that has to be removed? And if that doesn’t happen, then who’s to blame? The modeler, for doing everything the (incomplete) rules that do exist prescribe, but lacking the inspiration to see what was required here? Or should we blame ourselves, our industry, for allowing data modeling to remain an art for several decades already, and still accepting this as “the way it is”?

     

    Many moons ago, when I was a youngster who had just landed a job as a PL/I programmer, I was sent to a course for Jackson Structured Programming. This is a method to build programs that process one or more inputs and produce one or more outputs. Though it can be used for interactive programs as well, its main strength is for batch programs accessing sequential files. The course was great – though the students would not always arrive at the exact same design, each design would definitely be either correct or incorrect. All correct designs would yield the same end result when executed against the same data. And for all incorrect designs, the teacher was able to pinpoint where in the process an error was made. For me, this course changed programming from art into science.

     

    A few years later, I was sent to a data modeling course. Most of the course focused on how to represent data models (some variation of ERM was used). We were taught how to represent the data model, but not how to find it. At the end, we were given a case study and asked to make a data model, which we would then present to the class. When we were done and the first model was presented, I noticed some severe differences from my model – differences that would result in different combinations of data being allowed or rejected. So when the teacher made some minor adjustments and then said that this was a good model, I expected to get a low grade for my work. Then the second student had a model that differed from both the first and my model – and again, the teacher mostly agreed with the choices made. This caused me to regain some of my confidence – and indeed, when it was my turn to present my, once again very different, model, I too was told that this was a very good one. So we were left with three models, all very different, and according to the instructor, they were all “correct” – and yet, data that would be allowed in the one would be rejected by the other. So this course taught me that data modeling was not science, but pure art.

     

    This was over a decade ago. But has anything changed in between? Well, maybe it has – but if so, it must have been when I was not paying attention, for I still do not see any mainstream data modeling method that does provide the modeler with a clear set of instructions on what to do in every case, or the auditor with a similar set of instructions to check whether the modeler did a great job, or screwed up.

     

    Who’s to blame?

     

    Another difference between art and science is the assignment of blame. If you buy a loaf of bread, have a house built, or board a plane, then you know who to blame if the bread is sour, if the house collapses, or if the plane lands at the wrong airport. But if you ask a painter to create a painting for your living room and you don’t like the result, you cannot tell him that he screwed up – because beauty is truly a matter of taste.

     

    Have you ever been to a shop where any color of paint can be made by combining adequate amounts of a few base colors? Suppose you go to such a shop with a small sample of dyed wood, asking them to mix you the exact same color. The shopkeeper rummages through some catalogs, compares some samples, and then scribbles on a note: “2.98 liters white (#129683), 0.15 liters of cyan (#867324), and 0.05 liters of red (#533010)”. He then tells you that you have to sign the note before he can proceed to mix the paint. So you sign. And then, once the paint has been mixed, you see that it’s the wrong color – and the shopkeeper then waves the signed slip of paper, telling you it’s “exactly according to the specification you signed off”, so you can’t get your money back. Silly, right?

     

    And yet, in almost every software project, there will be a moment when a data model is presented to the customer, usually in the form of an ERM diagram or a UML class diagram, and the customer is required to sign off for the project to continue. This is, with all respect, the same utter madness as the paint example. Let’s not forget that the customer is probably a specialist in his trade, be it banking, insurance, or selling ice cream, but not in reading ERM or UML diagrams. How is he supposed to check whether the diagram is an accurate representation of his business needs and business rules?

     

    The reason why data modelers require the customer to sign off on the data model is that they know that data modeling is not science but art. They know that the methods they use can’t guarantee correct results, even on correct inputs. So they require a signature on the data model, so that later, when nasty stuff starts hitting the fan, they can wave the signature in the customer’s face, telling him that he himself signed for the implemented model.

     

    In the paint shop, I’m sure that nobody would agree to sign the slip with the paint serial numbers. I wouldn’t! I would agree, though, to place my signature on the sample I brought in, as that is a specification I can understand. Translating that to paint numbers and quantities is supposed to be the shopkeeper’s job, so let him take responsibility for that.

     

    So, I guess the real question is … why do customers still accept it when they are forced to sign for a model they are unable to understand? Why don’t they say that they will gladly sign for all the requirements they gave, and for all the answers they provided to questions that were asked in a language they understand, but that they insist on the data modeler taking responsibility for his part of the job?

     

    Maybe the true art of data modeling is that data modelers are still able to get away with it…

  • [OT] Alive and kicking

    Wow! Would you believe that it’s almost five months since my last blog post? How time flies.

     

    No, I have not forgotten about you. I know you’ve all been faithfully checking the site (or your feed) each day, maybe even each hour, to get my next post. What can I say? I’m sorry for keeping you waiting so long.

     

    The truth is that I have been very, very busy. Lots of time has been spent on work (for without work, there will be no salary and hence no way to feed the family) and family (for why bother working hard to feed them if I can’t enjoy my time with them). And out of the remaining spare time, large chunks have been allocated to tech editing the next edition of Paul Nielsen’s SQL Server Bible. And apart from that, there are also a few hobbies and interests that have nothing to do with SQL Server and that I will therefore not share with you :-).

     

    I hope to be able to post a bit more in the future. I plan to publish some of my thoughts on database design. But I have not forgotten about the unfinished “bin packing” series I started late last year – at least two more installments are planned there. And apart from that, I have various ideas and snippets collected. So if time permits, you can expect to see a bit more from me in the future.

  • Let's deprecate UPDATE FROM!

    I guess that many people using UPDATE … FROM on a daily basis do so without being aware that they are violating all SQL standards.

     

    All versions of the ANSI SQL standard that I checked agree that an UPDATE statement has three clauses – the UPDATE clause, naming the table to be updated; the SET clause, specifying the columns to change and their new values; and the optional WHERE clause to filter the rows to be updated. No FROM or JOIN – if you need data from a different table, use a subquery in the SET clause. The optional FROM and JOIN clauses were added by Microsoft, as an extension to the standard syntax (and just to make our lives more interesting, they invented different variations of the syntax for SQL Server and for Access). So when you are in the habit of using them, be prepared to review all your UPDATE statements when moving to Oracle, DB2, Sybase, MySQL, or even a different Microsoft database!

     

    Standards? Bah, who cares?

     

    Well, some do. Me for instance – I will never use proprietary syntax if I know a standard alternative, except if using the latter has severe negative consequences. And maybe you will, one day, when your boss comes back from the golf course with the great news that he managed to convince a colleague (who just happens to work in an Oracle shop) to buy a copy of your company’s application instead of some off-the-shelf product. Or when there’s a great job opportunity for someone with cross-platform skills. Or when you are asked to help out this new colleague with 10+ years of DB2 experience. One of the lesser known side effects of Murphy’s Law is that those who least expect having to move their database to another platform, will.

     

    But even if you really don’t care about portability, there are other reasons to be wary of using UPDATE FROM. In fact, the most important reason why I dislike UPDATE FROM is not that it’s non-standard, but that it is just too easy to make mistakes with.

     

    Correctness? Bah, who cares?

     

    Well, most do. That’s why we test.

     

    If I mess up the join criteria in a SELECT query so that too many rows from the second table match, I’ll see it as soon as I test, because I get more rows back than expected. If I mess up the subquery criteria in an ANSI standard UPDATE query in a similar way, I see it even sooner, because SQL Server will return an error if the subquery returns more than a single value. But with the proprietary UPDATE FROM syntax, I can mess up the join and never notice – SQL Server will happily update the same row over and over again if it matches more than one row in the joined table, with only the result of the last of those updates sticking. And there is no way of knowing which row that will be, since that depends on the query execution plan that happens to be chosen. A worst case scenario would be one where the execution plan just happens to result in the expected outcome during all tests on the single-processor development server – and then, after deployment to the four-way dual-core production server, our precious data suddenly hits the fan…
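    To see how easily this slips through, here is a minimal sketch (the temp tables are hypothetical, they are not part of the example further down):

    CREATE TABLE #Products     (ProductID int NOT NULL PRIMARY KEY, Price decimal(9,2) NOT NULL);
    CREATE TABLE #PriceChanges (ProductID int NOT NULL, NewPrice decimal(9,2) NOT NULL);
    INSERT INTO #Products     (ProductID, Price)    VALUES (1, 10.00);
    INSERT INTO #PriceChanges (ProductID, NewPrice) VALUES (1, 12.00);
    INSERT INTO #PriceChanges (ProductID, NewPrice) VALUES (1, 15.00);   -- two rows match the same product

    -- The join matches the single #Products row twice; only one of the two updates
    -- sticks, no error or warning is raised, and which one "wins" is undefined.
    UPDATE     p
    SET        Price         = c.NewPrice
    FROM       #Products     AS p
    INNER JOIN #PriceChanges AS c
          ON   c.ProductID    = p.ProductID;

    SELECT ProductID, Price FROM #Products;   -- 12.00 or 15.00, depending on the plan
    DROP TABLE #Products, #PriceChanges;

    The ANSI standard equivalent, with the new price in a subquery, would have aborted with a “Subquery returned more than 1 value” error instead of silently picking one of the two.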

     

    That’s all?

     

    Well, almost. There’s one more thing. Probably not something you’ll run into on a daily basis, but good to know nonetheless. If the target of the update happens to be a view instead of a base table, and there is an INSTEAD OF UPDATE trigger defined for the view, the UPDATE will fail with this error message:

     

    Msg 414, Level 16, State 1, Line 1

    UPDATE is not allowed because the statement updates view "v1" which participates in a join and has an INSTEAD OF UPDATE trigger.

     

    Of course, most people will never run into this. But I did have the misfortune of doing so once – unfortunately, I discovered this limitation after rewriting several hundred ANSI standard UPDATE statements to the equivalent UPDATE FROM, and having to convert them all back after no more than a single test…
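    If you want to see this limitation for yourself, a minimal repro looks roughly like this (all object names invented; a do-nothing trigger is enough to hit the error):

    CREATE TABLE dbo.BaseTable  (KeyCol int NOT NULL PRIMARY KEY, Val int NOT NULL);
    CREATE TABLE dbo.OtherTable (KeyCol int NOT NULL PRIMARY KEY, Val int NOT NULL);
    go
    CREATE VIEW dbo.v1
    AS SELECT KeyCol, Val FROM dbo.BaseTable;
    go
    CREATE TRIGGER dbo.v1_InsteadOfUpdate ON dbo.v1 INSTEAD OF UPDATE
    AS SET NOCOUNT ON;   -- the body is irrelevant; its mere existence is what matters
    go
    -- This UPDATE FROM joins the view to another table and therefore fails with error 414.
    UPDATE     v
    SET        Val            = o.Val
    FROM       dbo.v1         AS v
    INNER JOIN dbo.OtherTable AS o
          ON   o.KeyCol        = v.KeyCol;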

     

    And that’s why you want to deprecate UPDATE FROM?

     

    Well, no. The view with INSTEAD OF UPDATE trigger won’t affect many people. And the possibility of error can be somewhat reduced by making sure (and double-checking) to always include all columns of the primary key (or a unique constraint) of the source table in the join. So we’re back to the more principled point of avoiding proprietary syntax if there is an ANSI standard alternative with no or limited negative consequences. And in the case of UPDATE FROM, there are some cases where the standard syntax just doesn’t cut it.

     

    One such scenario is when a file is read in periodically with updated information that has to be pushed into the main table. The code below sets up a simplified example of this – a table Customers, with SSN as its primary key, that stores address and lots of other information, and a table Moved, which is the staging table containing the contents of a file received from a third party listing new addresses for people who recently moved. I have also included the code to preload the tables with some mocked up data – the Customers table has 10,000 rows, and the Moved table has 3,000 rows, 1,000 of which match an existing row in the Customers table. The others don’t – those people are apparently not our customers.

     

    CREATE TABLE Customers
          (SSN char(9) NOT NULL,
           Street varchar(40) NOT NULL,
           HouseNo int NOT NULL,
           City varchar(40) NOT NULL,
           LotsOfOtherInfo char(250) NOT NULL DEFAULT (''),
           PRIMARY KEY (SSN),
           CHECK (SSN NOT LIKE '%[^0-9]%')
          );
    CREATE TABLE Moved
          (SSN char(9) NOT NULL,
           Street varchar(40) NOT NULL,
           HouseNo int NOT NULL,
           City varchar(40) NOT NULL,
           PRIMARY KEY (SSN),
           CHECK (SSN NOT LIKE '%[^0-9]%')
          );
    go
    INSERT INTO Customers(SSN, Street, HouseNo, City)
    SELECT RIGHT(Number+1000000000,9), 'Street ' + CAST(Number AS varchar(10)),
           Number, 'City ' + CAST(Number AS varchar(10))
    FROM   dbo.Numbers
    WHERE  Number BETWEEN 1 AND 30000
    AND    Number % 3 = 0;
    INSERT INTO Moved(SSN, Street, HouseNo, City)
    SELECT RIGHT(Number+1000000000,9), 'New street ' + CAST(Number AS varchar(10)),
           Number * 2, 'New city ' + CAST(Number AS varchar(10))
    FROM   dbo.Numbers
    WHERE  Number BETWEEN 1 AND 30000
    AND    Number % 10 = 0;
    go

     

    Since ANSI-standard SQL does not allow a join to be used in the UPDATE statement, we’ll have to use subqueries to find the new information, and to find the rows that need to be updated, resulting in this query:

     

    UPDATE Customers
    SET    Street  = (SELECT Street
                      FROM   Moved AS m
                      WHERE  m.SSN = Customers.SSN),
           HouseNo = (SELECT HouseNo
                      FROM   Moved AS m
                      WHERE  m.SSN = Customers.SSN),
           City    = (SELECT City
                      FROM   Moved AS m
                      WHERE  m.SSN = Customers.SSN)
    WHERE EXISTS     (SELECT *
                      FROM   Moved AS m
                      WHERE  m.SSN = Customers.SSN);

     

    There’s a lot of duplicated code in here. And if we were getting data from a complicated subquery instead of the table Moved, it would be even worse (though we can at least put all the duplicated code in a CTE since SQL Server 2005). Of course, writing the code is done quickly enough once you master the use of copy and paste, but the code has to be maintained as well.
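    To illustrate that remark: a CTE version could look roughly like the sketch below (NewAddress here just wraps the Moved table, but imagine a complex query in its place). It keeps the complex part in one place, even though the four subqueries themselves remain.

    WITH NewAddress AS
         (SELECT SSN, Street, HouseNo, City
          FROM   Moved)                 -- imagine a complex query here
    UPDATE Customers
    SET    Street  = (SELECT n.Street  FROM NewAddress AS n WHERE n.SSN = Customers.SSN),
           HouseNo = (SELECT n.HouseNo FROM NewAddress AS n WHERE n.SSN = Customers.SSN),
           City    = (SELECT n.City    FROM NewAddress AS n WHERE n.SSN = Customers.SSN)
    WHERE EXISTS     (SELECT * FROM NewAddress AS n WHERE n.SSN = Customers.SSN);

    This helps the person maintaining the code, not the optimizer – more about that in a moment.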

     

    Maybe even worse is that the performance of this query just sucks – if you run this (enclosed in a BEGIN TRAN / ROLLBACK TRAN, so you can run the variations below without having to rebuild the original data) and check out the execution plan, you’ll see that the optimizer needs no less than five table scans (one for Customers, and four for Moved) and four merge join operators. And that, too, would be much worse if the source of the data had been a complex subquery (and no, using a CTE will not help the optimizer find a better plan – it just doesn’t understand that the four subqueries are similar enough that they can be collapsed).
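    The test wrapping I refer to is nothing more than this:

    BEGIN TRAN;

    -- run one of the UPDATE variations here and inspect its execution plan

    ROLLBACK TRAN;   -- undo the changes, so the next variation starts from the same data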

     

    Now, if Microsoft had chosen to implement row-value constructors (as defined in the ANSI standard), we could have simplified this to

     

    UPDATE Customers
    SET   (Street, HouseNo, City)
                   = (SELECT Street, HouseNo, City
                      FROM   Moved AS m
                      WHERE  m.SSN = Customers.SSN)
    WHERE EXISTS     (SELECT *
                      FROM   Moved AS m
                      WHERE  m.SSN = Customers.SSN);

     

    But this is invalid syntax in any version of SQL Server (including the latest CTP for SQL Server 2008), and I know of no plans to change that before SQL Server 2008 RTMs.

     

    But by using the proprietary UPDATE FROM syntax, we can simplify this, and get much better performance to boot. Here’s how the same update is written in non-portable code:

     

    UPDATE     c
    SET        Street     = m.Street,
               HouseNo    = m.HouseNo,
               City       = m.City
    FROM       Customers AS c
    INNER JOIN Moved     AS m
          ON   m.SSN      = c.SSN;

     

    And now, the optimizer will produce a plan that scans each table only once and has only a single merge join operator. Some quick tests (with many more rows in the tables) show that it executes two to three times quicker than the ANSI standard version. For that performance gain, I will gladly choose the proprietary syntax over the standard!

     

    What’s with the title of this post then? Why deprecate a fine feature?

     

    Patience, we’re getting there. Bear with me.

     

    All the above is true for versions of SQL Server up to SQL Server 2005. But SQL Server 2008 will change the playing field. It introduces a new statement, MERGE, that is specifically designed for situations where rows from a table source either have to be inserted into a destination table, or have to be used to update existing rows in the destination table. However, there is no law that prescribes that any MERGE should always actually include both an insert and an update clause – so with this new statement, we can now rewrite the above code as follows:

     

    MERGE INTO Customers AS c
    USING      Moved     AS m
          ON   m.SSN      = c.SSN
    WHEN MATCHED
    THEN UPDATE
    SET        Street     = m.Street,
               HouseNo    = m.HouseNo,
               City       = m.City;

     

    As you can see, the source table and the join criteria are included only once, just as in the proprietary UPDATE FROM. The execution plan (tested on the February CTP, also known as CTP6) is also quite similar, including just a few extra operators that are specific to the new MERGE statement. What really surprised me was that the plan for the MERGE statement was estimated to be about 65% cheaper (faster) than the corresponding UPDATE FROM statement. However, I think SQL Server is lying here – a quick test with more data shows only an extremely marginal advantage of MERGE over UPDATE FROM. This test was too limited to draw serious conclusions, but I am quite sure that there will not be a 65% saving by using MERGE over UPDATE FROM. (I do expect such a saving from either MERGE or UPDATE FROM over the ANSI-compliant UPDATE statement for this case.)

     

    The good news is that:

    1)      The MERGE statement is described in SQL:2003 and can thus be considered ANSI standard. (In fact, SQL Server implements a superset of the ANSI standard MERGE syntax: everything described in the standard is implemented, but there are some non-standard extensions that make the command even more useful as well. However, the example above uses only the standard features and should hence run on each DBMS that conforms to the SQL:2003 version of MERGE).

    2)      The MERGE statement will return an error message if I mess up my join criteria so that more than a single row from the source is matched:

    Msg 8672, Level 16, State 1, Line 1

    The MERGE statement attempted to UPDATE or DELETE the same row more than once. This happens when a target row matches more than one source row. A MERGE statement cannot UPDATE/DELETE the same row of the target table multiple times. Refine the ON clause to ensure a target row matches at most one source row, or use the GROUP BY clause to group the source rows.

    3)      The MERGE statement will gladly accept a view with an INSTEAD OF UPDATE trigger as the target of the update.

     

    So as you see, MERGE allows me to achieve what I previously could achieve only with an ANSI standard UPDATE statement with lots of duplicated code and lousy performance, or with an UPDATE FROM statement that hinders portability, introduces a higher than normal risk of errors going unnoticed through QA right into the production database, and has some odd limitation on views with INSTEAD OF UPDATE triggers. None of these downsides and limitations apply to MERGE. And if there are any other problems with MERGE, I have yet to find them.

     

    With this alternative available, I fail to see any reason why the proprietary UPDATE FROM syntax should be maintained. In my opinion, it can safely be marked as deprecated in SQL Server 2008. It should of course still work, as “normal” supported syntax in both SQL Server 2008 and the next version, and in at least one version more if the database is set to a lower compatibility – but it should be marked as deprecated, and it should eventually be removed from the product. Why waste resources on maintaining that functionality, when there is an alternative that is better in every conceivable way? I’d much rather see the SQL Server team spend their time and energy on more important stuff, such as full support for row-value constructors and full support for the OVER() clause. Or maybe even on releasing Service Pack 3 for SQL Server 2005!

  • Want a Service Pack? Ask for it!

    Service pack 2 for SQL Server 2005 is already 11 months old. And there is still no sign of service pack 3 on the horizon. Why is that? Has Microsoft managed to release a perfect, completely bug-free product? No, of course not – given the size and complexity of a product such as SQL Server, that will simply never happen.

     

    In fact, enormous numbers of bugs have been uncovered and fixed since SP2 was released. And roughly once every two months, a so-called “cumulative update package” gets released. The latest one is officially called “Cumulative update package 5 for SQL Server 2005 Service Pack 2”, or simply CU5 for friends. Quite a mouthful. But if you think that name is long, check out the list of bugs that CU5 fixes!

     

    I think that it’s great that Microsoft now releases these cumulative update packages at regular intervals. I see them as a good addition that fits nicely between hotfixes (quick fixes with limited testing, only for those who need them) on one side, and fully tested service packs that are released once or at most twice per year on the other side.

     

    Given the long list of bugs fixed in CU5, should everyone be recommended to install it, just as if it were a service pack? Well, no. Microsoft themselves advise against this. In fact, you can’t even just download and install the package; you have to “submit a request to Microsoft Online Customer Services to obtain the cumulative update package”. This quote comes directly from the Knowledge Base article for CU5, as do these further disclaimers:

    it is intended to correct only the problems that are described in this article. Apply it only to systems that are experiencing these specific problems

    if you are not severely affected by any of these problems, we recommend that you wait for the next SQL Server 2005 service pack that contains the hotfixes in this cumulative update package

     

    These quotes, which have appeared in the same or a similar form in all cumulative updates, make it pretty clear that I should wait for the next service pack. So we waited. And waited. And waited some more. And then, some MVPs got impatient and suggested (in the MVP newsgroup) to release SP3 as soon as possible. The answer surprised me – apparently, Microsoft has no plans yet to release a new service pack, because not enough customers have asked for it. (MS: “Good, but honestly, at least our management says, we're not getting feedback requesting SP3 from enough customers to require it” – Me: “Is that reason under NDA? Because if it's not, I can post a blog entry supporting people to write MS management asking for SP3” – MS: “As far as I know that's a public response”). So, the KB article says to wait, all customers do as asked, and then Microsoft concludes that nobody wants a service pack because nobody asks for it? And I misunderstood when I thought that “any hotfix that is provided in a SQL Server service pack is included in the next SQL Server service pack” implies that there will actually be a next service pack. Apparently, my grasp of the English language is not as good as I’d like to believe…

     

    Anyway, I now understand that Microsoft will only release a new service pack if enough people ask for it. So I’ve decided to make sure that they get the message. I’ve gone to Connect and filed a suggestion to release Service Pack 3 for SQL Server 2005, including all changes up to and including CU5.

     

    If you read this, and you agree with me that Service Pack 3 for SQL Server 2005 is overdue, you now know what to do – log in to Connect and vote 5 for my suggestion. And if you think that I’m losing my marbles and that there should be no Service Pack 3, then you should log in to Connect and vote 1. In short: make yourself heard!

     

    Microsoft will not release service pack 3 because not enough customers are asking for it? Well, I’m asking – can we have service pack 3, please? Pretty please? Pretty pretty pretty pretty pretty please????

     

    With sugar on top…
  • Bin packing part 3: Need for speed

    In the first post of this series, I explained the bin-packing problem and established a baseline solution. The second post investigated ways to increase the packing efficiency. In none of these posts did I pay particular attention to performance – and frankly, it shows. Performance of all solutions presented thus far sucks eggs. Time to see what can be done about that.

     

    If you look in detail at the most efficient solution so far (dbo.OrderDesc, as described in the second post of the series), you’ll see that all 40,000 registrations are processed one by one, with a cursor. No surprise here, as I’ve already promised to postpone the set-based solution to a later moment. For each of these 40,000 registrations, the following actions are executed (a simplified sketch of the loop follows the list):

     

    ·        The dbo.Sessions table is queried to find a session for the correct year and quarter that still has enough space left for the current registration.

    ·        When needed, a new session is added to the dbo.Sessions table.

    ·        The registration is updated in the dbo.Registrations table. Well, updating all 40,000 registrations is pretty hard to avoid as the goal is to assign each registration to a session, and as long as the processing is cursor-based, the updates will inevitably come in one by one.

    ·        The chosen session is updated in the dbo.Sessions table to reduce the space left by the size of the registration that was just assigned to the session.
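    In outline, the loop looks like the simplified sketch below (some names and cursor options are guessed; the real code is in the ZIP file attached to the first post of this series):

    DECLARE @Year int, @Quarter int, @SubjectCode int,        -- name and type of @SubjectCode guessed
            @NumCandidates int, @SessionNo int;

    DECLARE RegCursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT   Year, Quarter, SubjectCode, NumCandidates
    FROM     dbo.Registrations
    ORDER BY Year, Quarter, NumCandidates DESC;                -- the dbo.OrderDesc ordering

    OPEN RegCursor;
    FETCH NEXT FROM RegCursor INTO @Year, @Quarter, @SubjectCode, @NumCandidates;
    WHILE @@FETCH_STATUS = 0
    BEGIN
       -- 1. Search dbo.Sessions for a session in this quarter with enough space left.
       SET @SessionNo = (SELECT TOP (1) SessionNo
                         FROM   dbo.Sessions
                         WHERE  Year      = @Year
                         AND    Quarter   = @Quarter
                         AND    SpaceLeft >= @NumCandidates);
       -- 2. If none was found (@SessionNo IS NULL): insert a new row into dbo.Sessions.
       -- 3. Update the registration with the chosen session number.
       -- 4. Update the chosen session: SpaceLeft = SpaceLeft - @NumCandidates.
       FETCH NEXT FROM RegCursor INTO @Year, @Quarter, @SubjectCode, @NumCandidates;
    END;
    CLOSE RegCursor;
    DEALLOCATE RegCursor;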

     

    Improve indexes

     

    Indexing is one of the most common answers to performance problems. In this case, based on the above summary of actions taken in the loop, it’s clear that adding or changing an index on dbo.Registrations won’t do much. The update of this table is currently based on the clustered index key, which is the fastest possible way to perform an update. The cursor requires a table scan (to read all rows) and a sort (to order them by year, quarter, and descending number of candidates); the sort can be avoided if the clustered index is on these three columns, but at the price of 40,000 bookmark lookups for the updates – I don’t need to run a test to see that this is a bad idea!

     

    The dbo.Sessions table is less clear-cut. The clustered index is on (Year, Quarter, SessionNo), so searching for a session with enough room for a registration currently seeks the clustered index to process only the sessions of a single quarter, but it still has to scan through sessions in the quarter until it finds one with enough space. A nonclustered index on (Year, Quarter, SpaceLeft) will speed up this process, especially since it is covering for the query (the query uses the SessionNo column as well, but that column is included in the nonclustered index as part of the reference to the clustered index key). The downside to this index is that it has to be updated each time the space left in a session changes and each time a session is added. So, the question to answer is whether the performance gained when searching for a session to add a registration to outweighs the performance lost during updates. To find out, I added this index before repeating the tests of dbo.OrderDesc:

     

    CREATE NONCLUSTERED INDEX ix_Sessions

    ON dbo.Sessions (Year, Quarter, SpaceLeft);

     

    The packing efficiency didn’t change. The execution time did change, though – with the index in place, the average execution time of five test runs of dbo.OrderDesc was down to 68,300 ms, a 12.4% improvement over the execution time without the index. Clearly, the extra overhead incurred on the UPDATE statements is less than the savings on the SELECT statements.

     

    Note that I create the index before starting the test. I assume that the index can be added permanently to the database – if the database frequently updates the dbo.Sessions table at other moments, when the bin packing procedure is not executing, it might make more sense to create it at the start of this procedure and remove it when done – and in that case, the time taken for creating and dropping the index (less than 100 ms) should be added in.

     

    For completeness, I also tested the dbo.Order50FirstB version, that proved to be a good trade-off between performance and efficiency in the previous post. This version should of course see a similar performance benefit from the additional index, and indeed it does – the average execution time for dbo.Order50FirstB was down to 68,144 ms after adding the index, a saving of 9.8%.

     

    If you’re duplicating the tests on your own server, then don’t forget to remove the index – we won’t need it in the following step, and the overhead of maintaining it would just waste precious time.

     

    DROP INDEX dbo.Sessions.ix_Sessions;

     

    Do less work …

     

    Saving 12.4% execution time is great – but (of course) not enough to satisfy me! So it’s time to take another approach: let’s see if I can’t find any way to reduce the number of times the dbo.Sessions table is searched and updated. How, you might ask? Well, let’s again check how a human would operate. If I have to stow packages in boxes, I’d probably keep adding packages to the same box until I get a package that won’t fit. Only when I get a package that doesn’t fit the box anymore would I put the box away and open a new box. I have coded the T-SQL equivalent of this method, and called it dbo.FillThenNext (see FillThenNext.sql in the attached ZIP file).
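    The heart of that procedure, stripped down to a sketch (the variable names are mine, not necessarily those in the ZIP file):

    -- Inside the loop, only the "current" session is tracked, in local variables.
    IF @NumCandidates <= @CurrentSpaceLeft
    BEGIN
       -- It fits: add the registration to the current session.
       SET @CurrentSpaceLeft = @CurrentSpaceLeft - @NumCandidates;
    END
    ELSE
    BEGIN
       -- It does not fit: save the current session and open a new, empty one.
       SET @CurrentSessionNo = @CurrentSessionNo + 1;
       SET @CurrentSpaceLeft = @MaxCandidatesPerSession - @NumCandidates;
    END;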

     

    The average execution time for this simple algorithm turned out to be just 39,110 ms, so this saves 42.6% over dbo.OrderDesc with index, or 47.0% over the baseline. But the packing efficiency turns out to be really down the drain with this one:

     

    Quarter NumSessions AvgSessionSize   AvgEmptySeats

    ------- ----------- ---------------- ----------------

    1       6670        75.364617        16431.800000

    2       3548        78.769447        7532.600000

    3       8066        75.025787        20144.200000

    4       5346        78.283015        11609.900000

    ALL     23630       76.420440        13929.625000

     

     

    A total of 23,630 sessions exceeds the number of sessions required by the baseline by 21.4%, and that of dbo.OrderDesc by 24.9%. What a high price to pay for a speed advantage!

     

    … but do it smart

     

    The reason for this enormous efficiency loss is that I became very wasteful. Suppose that the first registration processed is for 20 persons, and the second for 85. The second cannot be combined with the first in a single session, so the algorithm opens a new session – and never again looks at the first session, even though there still are 80 seats left! That is of course a HUGE waste of resources. So I modified the algorithm. I still have a “current” session that I keep adding registrations to as long as I can, but if I find a registration that won’t fit I now first search the previous sessions for one with enough empty seats before closing the current session and opening a new one. The code for this version, dbo.FillThenSearch, is in FillThenSearch.sql in the attached ZIP file. And the test results are really awesome! The average execution time is now 45,506 ms (with the nonclustered index back in place – without it, execution time is 50,874 ms), which is of course slightly slower than my previous attempt but still 33.4% faster than dbo.OrderDesc with index, and 38.3% faster than the baseline. But the packing is much better:

     

    Quarter NumSessions AvgSessionSize   AvgEmptySeats

    ------- ----------- ---------------- ----------------

    1       5399        93.106501        3721.800000

    2       2863        97.615787        682.600000

    3       6945        87.135781        8934.200000

    4       4542        92.140246        3569.900000

    ALL     19749       91.438300        4227.125000

     

    This version still doesn’t pack as efficiently as dbo.OrderDesc (4.4% more sessions) or the baseline (1.5% more sessions) – but saving 33.4% execution time at the price of only 4.4% more sessions sounds like a proposition that deserves serious consideration, unlike the previous attempt!
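    The difference with dbo.FillThenNext is confined to the branch where a registration no longer fits the current session; in sketch form (names assumed, as before):

    -- The registration does not fit the current session anymore.
    -- First search the sessions already stored for one with enough empty seats ...
    SET @SessionNo = (SELECT TOP (1) SessionNo
                      FROM   dbo.Sessions
                      WHERE  Year      = @Year
                      AND    Quarter   = @Quarter
                      AND    SpaceLeft >= @NumCandidates);
    IF @SessionNo IS NULL
    BEGIN
       -- ... and only when that search comes up empty, open a new session.
       SET @CurrentSessionNo = @CurrentSessionNo + 1;
       SET @CurrentSpaceLeft = @MaxCandidatesPerSession - @NumCandidates;
    END
    ELSE
    BEGIN
       -- Otherwise, assign the registration to the session found and reduce its space.
       UPDATE dbo.Sessions
       SET    SpaceLeft = SpaceLeft - @NumCandidates
       WHERE  Year      = @Year
       AND    Quarter   = @Quarter
       AND    SessionNo = @SessionNo;
    END;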

     

    Revive an old trick

     

    If you have checked the source code, you may have noticed that I have once more removed the extra ordering that I added in the previous installment. I did that on purpose, because this extra ordering

    a)      decreased performance – I want to increase performance in this part, and

    b)      is not guaranteed to have the same effects in a different packing algorithm.

     

    But I did not want to end this post without at least testing the effect of adding back in the ordering that proved most efficient in the previous episode, so I re-introduced the sorting by descending number of candidates in dbo.FillThenSearchDesc (FillThenSearchDesc.sql in the ZIP file), and I discovered that this might be the best tradeoff so far – only 2 sessions more than dbo.OrderDesc, at only 48,626 ms (28.6% less than dbo.OrderDesc – with the nonclustered index still in place, of course).

     

    Quarter NumSessions AvgSessionSize   AvgEmptySeats

    ------- ----------- ---------------- ----------------

    1       5087        98.816984        601.800000

    2       2799        99.847802        42.600000

    3       6771        89.374981        7194.200000

    4       4268        98.055529        829.900000

    ALL     18925       95.419550        2167.125000

     

    Best options so far

     

    After having investigated so many different options, it’s all too easy to lose track. The table below lists the versions most worth remembering – except for the baseline, I did not include any version for which there was another version that produces fewer sessions in less time. The remaining versions are the ones you’ll have to choose from, making your own trade-off between saving sessions and saving execution time.

     

    Version                          Execution time (ms)  Number of sessions
    -------------------------------  -------------------  ------------------
    Baseline                                      73,759              19,457
    FillThenNext                                  39,110              23,630
    FillThenSearch (with index)                   45,506              19,749
    FillThenSearchDesc (with index)               48,626              18,925
    OrderDesc (with index)                        68,300              18,923

     

    Note that I ordered these versions, except the baseline, fastest to slowest (or least efficient to most efficient packer).

     

    This concludes the third part of this series. Though I still have some ideas that might improve the performance of my current cursor-based approach, I’ll postpone them and switch gears – next episode, I’ll start investigating if these numbers can be beaten by a set-based approach.

  • Bin packing part 2: Packing it tighter

    In my previous post, I explained the bin packing problem, explained an example scenario, and established a baseline for both speed and efficiency of bin packing algorithms by writing a rather crude cursor-based procedure. In this part, I will look at some small modifications that can be made to this code to make it better at packing bins as full as possible, so that fewer bins will be needed.

     

    Reordering the cursor

     

    The Baseline procedure didn’t process the registrations in any particular order. Granted, there is an ORDER BY Year, Quarter in the cursor declaration, but that is only needed to enable me to generate consecutive session numbers within each quarter without having to do a SELECT MAX(SessionNo) query. Had I elected to use an IDENTITY column for the Sessions table, I could even have omitted the ORDER BY completely.

     

    Since there is no order specified for the registrations within the quarter, we can assume that they will be processed in some unspecified order. But maybe we can get a more efficient distribution of registrations if we order them before processing? I am, of course, not referring to ordering by subject code, as the actual subjects are irrelevant for the algorithm – I am referring to order by the number of candidates in a registration.

     

    This is very easy to test, of course. All it takes is adding one extra column to the ORDER BY clause of the cursor definition. So I created the procedures dbo.OrderAsc and dbo.OrderDesc to test the effects of ordering by ascending or descending number of candidates (see OrderAsc.sql and OrderDesc.sql in the attached ZIP file).
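    In other words, the only change relative to the baseline cursor is one extra ORDER BY column; the declaration becomes something along these lines (the cursor name and options here are mine, just to sketch the idea):

    DECLARE RegCursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT   Year, Quarter, NumCandidates         -- plus whatever else the loop needs
    FROM     dbo.Registrations
    ORDER BY Year, Quarter, NumCandidates DESC;   -- DESC for dbo.OrderDesc, ASC for dbo.OrderAsc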

     

    Ordering by ascending number of candidates turned out to be a pretty lousy idea. Well, to be fair, I didn’t expect otherwise – after all, if you save all the biggest registrations for last, you’ll have no smaller registrations left to fill up the gaps. In fact, all registrations for 51 or more candidates will get a session of their own, and will not be combined with any other registration. So it’s not surprising at all to see that this method results in a huge number of extra sessions as compared to the baseline version – an increase of no less than 19.9%!

     

    Quarter NumSessions AvgSessionSize   AvgEmptySeats

    ------- ----------- ---------------- ----------------

    1       6435        78.116860        14081.800000

    2       3419        81.741444        6242.600000

    3       7796        77.624166        17444.200000

    4       5686        73.602004        15009.900000

    ALL     23336       77.383227        13194.625000

     

    Equally unsurprising is the fact that changing the ORDER BY clause to sort by descending number of candidates results in more successful packing of the registrations in fewer sessions. This version saves 2.7% as compared to the baseline, as shown in this breakdown:

     

    Quarter NumSessions AvgSessionSize   AvgEmptySeats

    ------- ----------- ---------------- ----------------

    1       5085        98.855850        581.800000

    2       2799        99.847802        42.600000

    3       6771        89.374981        7194.200000

    4       4268        98.055529        829.900000

    ALL     18923       95.429635        2162.125000

     

    Even though this part mainly focuses on efficiency achieved, the time taken still remains an important factor. So I ran these two procedures five times in a row, recorded execution times, and calculated the average. For dbo.OrderAsc, the average execution time was 78,531 ms, whereas dbo.OrderDesc clocked in at 77,934 ms. As you see, both versions are slightly slower than the baseline version. For the OrderDesc version, this is caused by the rapid growth of the number of sessions, because the biggest registrations are processed first. This means that searching for a session with enough empty space for a registration soon becomes a more time-consuming task. For OrderAsc, the reverse is true – since the smallest registrations are processed first, there will at first be only a few sessions. This means that this algorithm will be a lot faster at first – but once the bigger registrations are processed and the total number of sessions rapidly increases to way more than that in the baseline version, this advantage is lost, and the time required to search for sessions with enough empty space soon gets so high that the advantage this algorithm had at first turns into a disadvantage.

     

    Sorting the registrations by ascending number of candidates within a quarter before processing them hurts both speed and packing efficiency of the algorithm; we can henceforth forget about this option. On the other hand, sorting by descending number of candidates increases the packing efficiency by 2.7%, though this comes at the price of a 5.7% increase in execution time. If I had to choose between these two, my pick would depend on the needs of the organization I’m working for – if the cost per session is high and plenty of time is available for the computation, I’d go with the ordered version, but if speed is more  important than saving those few extra sessions, I’d use the unsorted one.

     

    Less obvious orderings

     

    But why choose between these two only? I have thus far only considered the obvious sort orders, ascending and descending. Why not try a few more variations?

     

    Thinking about the even distribution of data generated for the first quarter of each year in my test set, I considered that, were I tasked with manually combining the registrations as efficiently as possible, I’d probably start by making sessions by combining two registrations for 50 candidates, then combining a registration for 51 candidates with one for 49, and so on. All of these sessions would total 100 candidates, and because of the even distribution of data, I expect roughly the same number of registrations for each size, so I’d have only a few spare sessions left in the end.

     

    That technique can’t be exactly mimicked by changing the sort order of the cursor. There are other ways to mimic it, though – but I’ll leave those for a future post :-). But we can simulate this effect by ordering the registrations so that those with 50 candidates come first, then those with 49 and 51 candidates, and so on. This is done by changing the ORDER BY clause in the cursor definition to order by the “distance” between the number of candidates and the magic number 50, being half the maximum session size:

     

      ORDER BY Year, Quarter,

               ABS(NumCandidates - (@MaxCandidatesPerSession / 2.0)) ASC;

     

    I didn’t hardcode the number 50, because I wanted my stored procedures to be fit for any maximum number of candidates. I divide by 2.0 instead of just 2 so that for an odd maximum session size (e.g. 25), the fraction is retained and registrations for 12 and 13 candidates are kept together because they are the same distance from half the maximum size (12.5).
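    A quick check of that claim, with a made-up maximum session size of 25:

    -- With @MaxCandidatesPerSession = 25, sizes 12 and 13 sort together only
    -- when dividing by 2.0; integer division by 2 would pull them apart.
    SELECT ABS(12 - (25 / 2.0)) AS DistanceDecimal12,   -- 0.5
           ABS(13 - (25 / 2.0)) AS DistanceDecimal13,   -- 0.5
           ABS(12 - (25 / 2))   AS DistanceInteger12,   -- 0
           ABS(13 - (25 / 2))   AS DistanceInteger13;   -- 1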

     

    It is of course also possible to use DESC instead of ASC to start with registrations for 100 candidates, then those for 99 or 1, and so on, saving the 50-candidate registration for the last. Both these versions are included in the attached ZIP file, in the files Order50First.sql and Order50Last.sql.

     

    These versions both took slightly more time than the baseline version when I tested them on my laptop: 76,807 ms for dbo.Order50First, and 77,322 ms for dbo.Order50Last. The packing efficiency of dbo.Order50First is better than the baseline, but not as good as that of dbo.OrderDesc:

     

    Quarter NumSessions AvgSessionSize   AvgEmptySeats

    ------- ----------- ---------------- ----------------

    1       5096        98.642464        691.800000

    2       2805        99.634224        102.600000

    3       6780        89.256342        7284.200000

    4       4269        98.032560        839.900000

    ALL     18950       95.293667        2229.625000

     

    For dbo.Order50Last, the resulting number of sessions is even more than we had in the baseline!

     

    Quarter NumSessions AvgSessionSize   AvgEmptySeats

    ------- ----------- ---------------- ----------------

    1       5444        92.336884        4171.800000

    2       3032        92.174802        2372.600000

    3       6928        87.349595        8764.200000

    4       4617        90.643491        4319.900000

    ALL     20021       90.196044        4907.125000

     

    The reason for the disappointing efficiency of the dbo.Order50First procedure is that there is no control over the order of the registrations that have the same distance to 50. So it is quite possible, for instance, to start with a bunch of registrations for 49 candidates that will promptly be combined to sessions for 98 candidates each – so that, when the 51-sized registrations start coming in, they have to get sessions of their own. In an attempt to fix that, I tweaked the sort order some more, making sure that, for registrations with the same distance from 50, the “bigger” registrations come before the “smaller” ones.

     

      ORDER BY Year, Quarter,

               ABS(NumCandidates - (@MaxCandidatesPerSession / 2.0)) ASC,

               NumCandidates DESC;

     

    With this ORDER BY clause, I can be certain that all 51-candidate registrations are processed first, each getting its own session. After that, the 49-candidate registrations will exactly fill out all those sessions. This version (enclosed in Order50FirstB.sql) had a slightly better packing ratio than dbo.Order50First – but still not as good as dbo.OrderDesc. Here are the results, which took 75,547 ms (on average) to achieve:

     

    Quarter NumSessions AvgSessionSize   AvgEmptySeats

    ------- ----------- ---------------- ----------------

    1       5089        98.778148        621.800000

    2       2804        99.669757        92.600000

    3       6771        89.374981        7194.200000

    4       4268        98.055529        829.900000

    ALL     18932       95.384270        2184.625000

     

    After these tests, there was still one thing left I wanted to try. Starting with registrations for 50 places was based on an idea for evenly distributed data. For other distributions, this might turn out to be a much worse idea (though the results don’t show as much). But what if, instead of starting at half the maximum session size, we start at the average registration size? For evenly distributed data, this should work out approximately the same. But maybe this order achieves a better packing ratio for other distributions? Let’s find out.

     

    Ordering by distance from the average registration size for a quarter can be accomplished by using a correlated subquery in the ORDER BY clause (compatible with all versions of SQL Server), or by using an AVG function with the OVER clause (only SQL Server 2005 and up):

     

      ORDER BY a.Year, a.Quarter,
               ABS(a.NumCandidates - (SELECT AVG(b.NumCandidates * 1.0)
                                      FROM   dbo.Registrations AS b
                                      WHERE  b.Year = a.Year
                                      AND    b.Quarter = a.Quarter)) ASC;
    or
      ORDER BY a.Year, a.Quarter,
               ABS(a.NumCandidates - AVG(a.NumCandidates * 1.0)
                       OVER (PARTITION BY a.Year, a.Quarter)) ASC;

     

    Surprisingly, when dry-testing the query by itself, the correlated subquery turned out to be faster than the one using the OVER clause, so I didn’t have to sacrifice speed for backward compatibility. I used the correlated subquery, both with the ASC and the DESC sort option (see OrderHalfFirst.sql and OrderHalfLast.sql in the attachment), to test the two possible variations of this option. Both versions turned out to be quite inefficient packers, since they both took more sessions than the baseline. Here are the results of dbo.OrderHalfFirst, acquired in 75,056 ms:

     

    Quarter NumSessions AvgSessionSize   AvgEmptySeats

    ------- ----------- ---------------- ----------------

    1       5120        98.180078        931.800000

    2       3264        85.623161        4692.600000

    3       6771        89.374981        7194.200000

    4       5044        82.970063        8589.900000

    ALL     20199       89.401207        5352.125000

     

    And OrderHalfLast, after running 79,294 ms, produced these results:

     

    Quarter NumSessions AvgSessionSize   AvgEmptySeats

    ------- ----------- ---------------- ----------------

    1       6036        83.280649        10091.800000

    2       2941        95.026861        1462.600000

    3       7796        77.624166        17444.200000

    4       4951        84.528580        7659.900000

    ALL     21724       83.125345        9164.625000

     

    Conclusion

     

    I’ve investigated many options to increase packing efficiency. It turns out that, at least with the test data I used, just starting with the biggest registration and working down to the smallest yields the best results. This is not the fastest option, though. The baseline version discussed in the previous episode of this series is still fastest. So the choice would appear to depend on the requirements of your application – if you have plenty of time and computer capacity but need to use as few sessions as possible, go for the dbo.OrderDesc version. If execution time is of utmost importance and a few extra sessions are no big deal, then stick to the baseline (for now).

     

    If you are in search of a solution that offers both speed and efficient packing, then the dbo.Order50FirstB version seems to be the right choice. It is only 0.05% less efficient than the best packer (dbo.OrderDesc), but over 3% faster. In the next episode I’ll be looking at ways to make the algorithm go faster. I’ll be making huge performance improvements – but packing efficiency will suffer. How much? As soon as I have completed all my tests and written the accompanying text, you’ll read it. Right here on sqlblog.com.

  • Bin packing part 1: Setting a baseline

    Some problems can only be solved by brute-forcing every possible combination. The problem with such an approach is that execution time grows exponentially as the amount of input data grows – so that even on the best possible hardware, you will get unacceptable performance once the input data goes beyond the size of a small test set. These problems are called “NP-complete” or “NP-hard”, and the most viable way to deal with them is to use an algorithm that finds a solution that, while not perfect, is at least good enough – with a performance that, while not perfect, is at least fast enough.

     

    The bin packing problem is one of those problems. It is basically a very simple problem – for example, you are given a number of packages, each with its own weight, and an unlimited number of bins with a given maximum weight capacity. Your task is to use as few bins as possible for packing all packages. There are various situations in real life where some sort of bin packing is required – such as loading trucks to transport all freight in as few trucks as possible, assigning groups of people to rooms, cutting forms from raw material (this is a two-dimensional variation of bin packing), etc.

     

    Back in 2004 and 2005, when I had just started answering questions in the SQL Server newsgroups, I replied to some questions that were essentially a variation on the bin packing problem – one involving packages (with a weight) and trucks (with a maximum load capacity); the other involving inviting families to dinner. I came up with an algorithm that combined set-based and iterative characteristics, and managed to find a very efficient distribution of packages, at a very good performance. I wanted to share this algorithm ever since I started blogging; the reason I haven’t done so yet is that there is much more to say about this category of problems, and that I never before found the time to investigate and describe it all.

     

    This is the first part of what will become a series of posts investigating all possible (and some impossible) solutions to this problem, including some new possibilities (such as SQLCLR) that have only become available since SQL Server 2005 was released.

     

    The sample scenario

     

    As a sample scenario for testing various algorithms, I decided to stray from the packages and trucks, and switch to examinations. The scenario outlined here is imaginary, though it is (very loosely) based on a Dutch examination institute that I have done some work for, several years ago.

     

    ImEx (Imaginary Examinations Inc.) is responsible for various certifications. It does not teach students, but it does define what candidates need to know, publish model exams, and (of course) take exams. The latter activity takes place four times per year. Candidates register for one of the many subjects available. ImEx only has a small office for its staff, so it has to rent a room in a conference centre where the exams are taken. This room has a maximum capacity of 100 seats; there are more candidates, so the room is rented for a period of time; during that time, ImEx will hold two examination sessions per day. All candidates that take an exam in the same subject have to be seated in the same session, since an expert on that subject has to be available to answer questions and settle disputes. However, candidates for different subjects can be combined in the same session, as long as the maximum capacity is not exceeded. Since the rent for this room is high, ImEx wants to seat all registered candidates in as few sessions as possible.

     

    The script file “Create DB + tables.sql” (see attached ZIP file) contains the SQL to create a database for ImEx, and to create the two tables that we will focus on in this series. The table dbo.Registrations holds the registrations – not the individual registrations (imagine that they are in a different table that is not relevant for the bin packing problem), but the aggregated number of registrations per subject for each quarter. The table dbo.Sessions holds the sessions that will be held in each quarter. One of the columns in dbo.Registrations is a foreign key to dbo.Sessions; this column is NULL at the start and has to be filled with a link to the session in which each subject will be examined. The table dbo.Sessions has a column SpaceLeft that is equal to (100 – SUM(NumCandidates) of the registrations assigned to this session); this is of course just a helper column, included solely for performance.
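
    The exact definitions are in the attachment, but as a rough sketch the two tables could look something like this (column names and data types beyond the ones mentioned above are guesses, not necessarily what the script uses):

    -- Hypothetical sketch of the two tables; the real definitions are in
    -- "Create DB + tables.sql". Only SpaceLeft and NumCandidates are named in the
    -- text above; all other names and data types here are guesses.
    CREATE TABLE dbo.Sessions
        (Quarter    tinyint  NOT NULL,
         SessionNo  int      NOT NULL,
         SpaceLeft  smallint NOT NULL,    -- helper column: 100 - SUM(NumCandidates)
         PRIMARY KEY (Quarter, SessionNo));

    CREATE TABLE dbo.Registrations
        (Quarter       tinyint     NOT NULL,
         SubjectCode   varchar(10) NOT NULL,
         NumCandidates smallint    NOT NULL,
         SessionNo     int         NULL,  -- starts NULL; filled by the bin packing algorithm
         PRIMARY KEY (Quarter, SubjectCode),
         FOREIGN KEY (Quarter, SessionNo)
             REFERENCES dbo.Sessions (Quarter, SessionNo));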

     

    Another script file, “Generate test data.sql” (also in the attached ZIP file), fills the table dbo.Registrations with a set of randomly generated data. I use different data distributions for each of the four quarters, so that I can test the various bin packing algorithms for evenly distributed data (Q1), for data with more small groups and fewer large groups (Q2), for data with more large groups and fewer small groups (Q3), and for data with a non-linear distribution of group size – very few small (4 – 10) and large (70 – 80) groups; many average (35 – 45) groups (Q4). I seed the random number generator with a known value at the beginning of the script to get reproducible results, so that I can make meaningful comparisons when I regenerate the data to test a new algorithm. For “truly” random data, you’ll have to remove this statement – or you can use a different seed value to see if different data makes much difference for the results.
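
    To give an idea of the mechanics (the real script is more elaborate and uses a different distribution for each quarter), a seeded, reproducible random fill could look something like this minimal sketch, reusing the guessed column names from above:

    -- Minimal sketch of seeded random test data for a single quarter; the real
    -- "Generate test data.sql" uses four different distributions.
    SELECT RAND(12345);   -- seed once for reproducible results; remove for "truly" random data

    DECLARE @i int;
    SET @i = 1;
    WHILE @i <= 10000
    BEGIN;
      INSERT INTO dbo.Registrations (Quarter, SubjectCode, NumCandidates)
      VALUES (1, 'SUBJ' + CAST(@i AS varchar(6)),
              1 + CAST(RAND() * 99 AS int));   -- evenly distributed group sizes (Q1-style)
      SET @i = @i + 1;
    END;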

     

    Setting a baseline: the first attempt

     

    I’ll conclude this first post of the series with a first attempt at solving the problem. One that is probably neither the fastest, nor the most effective. This first version will then act as a baseline to compare future attempts to. I will measure both speed (how fast does it complete on a given set of test data) and effectiveness (how many sessions does it create for a given set of test data). I hope to find an algorithm that yields the maximum effectiveness while still exceeding the speed of other algorithms, but it’s more likely that I’ll end up having to settle for a trade-off between speed and effectiveness.

     

    This first attempt is based on mimicking how a human would approach this problem – inspect each registration in turn, and assign it to a session that still has sufficient capacity if one exists, or to a new session otherwise. Implementing this algorithm in T-SQL results in the code that you find in the enclosed ZIP file in Baseline.sql. The code is pretty straightforward. Since the algorithm will inspect the registrations one by one, it is all centred around a cursor over the registrations table – of course, using the fastest cursor options available (see this blog entry for details). I also experimented with the “poor man’s cursor” (see this post), but in this case the real cursor turned out to be (slightly) faster.

     

    I want the session numbers to be sequential and starting from 1 within each quarter. There are two ways to do that – either query the Sessions table for the MAX(SessionNo) within the current quarter each time a new session is added, or use a variable that I increment for each new session, and that I reset when a new quarter starts. I chose the latter, since variable manipulation is lots cheaper than accessing the Sessions table.
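
    In code, the difference between the two options looks roughly like this (a sketch; the variable names are guesses, not necessarily those used in Baseline.sql):

    -- Option 1 (not used): query the Sessions table every time a new session is added
    SELECT @SessionNo = COALESCE(MAX(SessionNo), 0) + 1
    FROM   dbo.Sessions
    WHERE  Quarter = @Quarter;

    -- Option 2 (used): keep a counter in a variable; reset it when a new quarter starts...
    IF @Quarter <> @PrevQuarter
    BEGIN
      SET @SessionNo   = 0;
      SET @PrevQuarter = @Quarter;
    END
    -- ...and increment it (SET @SessionNo = @SessionNo + 1) only when a new session is created.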

     

    At the heart of the procedure is the SELECT TOP 1 query that I use to find a single session that has enough space left to accommodate the current registration. Since there is no ORDER BY in this query, the results are not deterministic – that is, I know it will return a session with enough space left if there is one, but if there is more than one session with enough space left, no one can predict which one will be returned, nor whether consecutive runs will yield the exact same results. Many details, such as the number of processors available, workload, and other factors, can influence the query execution plan that is used, so don’t be surprised if you get different results when testing this code on your machine. I could make this query return deterministic, reproducible results – but that would affect both performance and efficiency of the procedure, so I left that for the next part of this series.
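
    Put together, the work done for a single registration boils down to something like the following sketch (again with guessed names; the exact code is in Baseline.sql):

    -- Sketch of one iteration of the cursor loop, for the registration currently in
    -- @Quarter, @SubjectCode, @NumCandidates.
    DECLARE @FoundSession int;
    SET @FoundSession = NULL;

    -- Find any session in this quarter with enough space left (no ORDER BY, so not deterministic)
    SELECT TOP (1) @FoundSession = SessionNo
    FROM   dbo.Sessions
    WHERE  Quarter   = @Quarter
    AND    SpaceLeft >= @NumCandidates;

    IF @FoundSession IS NULL
    BEGIN
      -- No session has enough room left: create a new one
      SET @SessionNo    = @SessionNo + 1;
      SET @FoundSession = @SessionNo;
      INSERT INTO dbo.Sessions (Quarter, SessionNo, SpaceLeft)
      VALUES (@Quarter, @FoundSession, 100 - @NumCandidates);
    END
    ELSE
    BEGIN
      -- Found one: reduce its remaining capacity
      UPDATE dbo.Sessions
      SET    SpaceLeft = SpaceLeft - @NumCandidates
      WHERE  Quarter   = @Quarter
      AND    SessionNo = @FoundSession;
    END

    -- Record which session this registration was assigned to
    UPDATE dbo.Registrations
    SET    SessionNo   = @FoundSession
    WHERE  Quarter     = @Quarter
    AND    SubjectCode = @SubjectCode;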

     

    The test setup

     

    To test this and all future algorithms, I created a generic stored procedure for testing, called dbo.PerfTest (see PerfTest.sql in the attached ZIP file). In this stored procedure, I first wipe clean any results that may have been left behind by the previous run. Then I make sure that both the data cache and the procedure cache are empty. And then, I call the procedure I need to test (which is passed to dbo.PerfTest as a parameter and assumed to be in the dbo schema), making sure to note the time the call is made and the time the procedure returns control. The difference in milliseconds is then returned to the client, as the duration of the procedure to be tested.
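
    A minimal sketch of what such a harness could look like (the real PerfTest.sql may differ in details such as the cleanup logic and the way the duration is returned):

    -- Minimal sketch of a generic test harness; the real dbo.PerfTest may differ in details.
    CREATE PROCEDURE dbo.PerfTest
      @ProcName sysname        -- procedure to test, assumed to be in the dbo schema
    AS
    BEGIN;
      -- Undo the work left behind by the previous run
      UPDATE dbo.Registrations SET SessionNo = NULL;
      DELETE FROM dbo.Sessions;

      -- Start with a cold data cache and an empty procedure cache
      CHECKPOINT;
      DBCC DROPCLEANBUFFERS;
      DBCC FREEPROCCACHE;

      -- Call the procedure under test and return the elapsed time in milliseconds
      DECLARE @start datetime;
      SET @start = CURRENT_TIMESTAMP;
      EXEC ('EXEC dbo.' + @ProcName + ';');
      SELECT DATEDIFF(ms, @start, CURRENT_TIMESTAMP) AS Duration;
    END;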

     

    The script file RunTest.sql is the file I actually execute to do the tests. Whenever I need to test a new algorithm, I only have to change the name of the stored procedure to test in the line that calls dbo.PerfTest and then I can hit the execute button and sit back. When the procedure finishes, it displays two result sets – one from dbo.PerfTest displaying the duration of the test procedure in milliseconds; the second generated by the code in RunTest.sql to assess the efficiency of the algorithm by comparing the number of sessions, the average session size, and the average number of free seats per quarter for each of the quarters and overall.
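
    The efficiency part boils down to a query along these lines (a sketch; the actual query in RunTest.sql also adds the overall row and may compute the free-seat figure differently):

    -- Sketch of the per-quarter efficiency check; the real RunTest.sql also adds an "ALL" row.
    SELECT   Quarter,
             COUNT(*)               AS NumSessions,
             AVG(100.0 - SpaceLeft) AS AvgSessionSize,
             SUM(SpaceLeft)         AS EmptySeats
    FROM     dbo.Sessions
    GROUP BY Quarter
    ORDER BY Quarter;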

     

    As was to be expected, the speed of my first attempt is abysmal. For the 40,000 registrations in my randomly generated test set, the average elapsed time for 5 test runs was 73,759 milliseconds. Faster than when I had to do it by hand, but that’s about all I can say in favour of this “speed”.

     

    The efficiency of this algorithm turned out to be pretty good. The tightest packing ratio is achieved with the data for quarter 2, that consists mainly of small groups. Quarter 3, with an overdose of big groups, turns out to be much more challenging for this algorithm. Even though the average group size for quarter 4 is slightly smaller than that of quarter 1, it is harder to pack because the bell curve results in a much lower number of small groups that can be used to fill those last few seats in an almost packed session. Here are the full results:

     

    Quarter NumSessions AvgSessionSize   AvgEmptySeats
    ------- ----------- ---------------- ----------------
    1       5271        95.367482        2441.800000
    2       2825        98.928849        302.600000
    3       6871        88.074225        8194.200000
    4       4490        93.207349        3049.900000
    ALL     19457       92.810556        3497.125000

     

    And now?

     

    As I said before, this is just a baseline. I have already investigated several other algorithms and intend to investigate even more – such as improving the cursor code (for both speed and efficiency), using a set based solution, combining set based and cursor based techniques, and employing the CLR. Watch this space for the next episode!

  • Poor men see sharp - more cursor optimization

    After making my post on cursor optimization I received some comments that triggered me to do some further investigation. Adam Machanic wrote in my blog’s comments that using SQLCLR to loop over a SqlDataReader would be much faster than any T-SQL based cursor. And Erland Sommarskog wrote in a usenet thread that he has some colleagues who think that a “poor man’s cursor” is always better than a real cursor. So I decided to give these options a try and see what comes out in a real test. I simply reused the test cases I had already used for testing the various cursor options, but with the code adapted to use SQLCLR or to use a cursor-less iteration.

     

    The poor man’s cursor

     

    I don’t think that “poor man’s cursor” is an official phrase, but what the hay – if we all start using it, we can make it official :-). In case you want to know what a term means before using it: the term “poor man’s cursor” refers to any method of iterating over the rows in the result set of a query, processing them one by one, without using the DECLARE CURSOR, OPEN, FETCH, CLOSE, and DEALLOCATE keywords that were added to T-SQL for the sole purpose of iterating over the rows in a result set of a query.

     

    Why would you want to do that, you may ask? Well, I think that the most common reason is that programmers have heard that cursors are generally bad for performance, but fail to understand that the performance impact is not caused by the cursor itself, but by the fact that iterating over a result set reduces the options available to the query optimizer and negates the work the development team in Redmond has done to optimize SQL Server for set based operations. So they think that the cursor itself is to blame, and try to code around it without moving from their algorithmic, iteration-based approach to a declarative, set-based approach.

     

    Usenet newsgroups and web forums full of simple one-liners such as “cursors are evil”, the many people claiming that cursors incur a heavy overhead, and even some otherwise respectable websites listing WHILE loops first in a list of cursor alternatives have all done their fair share to contribute to the popularity of the idea that you can improve cursor performance by simply replacing it with a different iteration mechanism. So, let’s find out if there is any truth to this claim.

     

    Reading data

     

    I started with the fastest of all cursor options, the one using a local, forward only, static, read only cursor with an ORDER BY matching the clustered index. I ripped out all cursor-related commands and replaced them with the appropriate SELECT TOP(1) commands to read and process one row at a time, and ended up with this code:

     

    -- Keep track of execution time

    DECLARE @start datetime;

    SET @start = CURRENT_TIMESTAMP;

     

    -- Declare and initialize variables for loop

    DECLARE @SalesOrderID int,

            @SalesOrderDetailID int,

            @OrderQty smallint,

            @ProductID int,

            @LineTotal numeric(38,6),

            @SubTotal numeric(38,6);

    SET @SubTotal = 0;

     

    -- Read first row to start loop

    SELECT TOP (1) @SalesOrderID = SalesOrderID,

                   @SalesOrderDetailID = SalesOrderDetailID,

                   @OrderQty = OrderQty,

                   @ProductID = ProductID,

                   @LineTotal = LineTotal

    FROM           Sales.SalesOrderDetail

    ORDER BY       SalesOrderID, SalesOrderDetailID;

     

    -- Process all rows

    WHILE @@ROWCOUNT > 0

    BEGIN;

     

      -- Accumulate total

      SET @SubTotal = @SubTotal + @LineTotal;

     

      -- Read next row

      SELECT TOP (1) @SalesOrderID = SalesOrderID,

                     @SalesOrderDetailID = SalesOrderDetailID,

                     @OrderQty = OrderQty,

                     @ProductID = ProductID,

                     @LineTotal = LineTotal

      FROM           Sales.SalesOrderDetail

      WHERE          SalesOrderID > @SalesOrderID

      OR (           SalesOrderID = @SalesOrderID

          AND        SalesOrderDetailID > @SalesOrderDetailID)

      ORDER BY       SalesOrderID, SalesOrderDetailID;

     

    END;

     

    -- Display result and duration

    SELECT @SubTotal;

    SELECT DATEDIFF(ms, @start, CURRENT_TIMESTAMP);

    go


    I ran this code five times in a row and calculated the average execution time as 3166 milliseconds. I then re-ran the cursor code five times (I didn’t want to use the old measurements, as I was unsure if I had the same applications active – and having a different load on my machine would surely influence results); this code took 3265 milliseconds. So the first round goes to the poor man’s cursor, for beating the “real” cursor by three percent. I must add that I later ran another test, as part of research for a future blog post, where the results were reversed and the real cursor beat the poor man’s cursor by a small margin.

     

    Of course, real life is not always so nice as to throw us only problems that require ordering the data by the clustered index key. So my next step was to investigate what happens to the comparison if the problem requires the data to be read in an order that can be served by a nonclustered index. Remember that I had a similar test case in the cursor option comparison, so I again was able to reuse the existing cursor code for ordering by ProductID. For the poor man’s cursor version, this involved changing the ORDER BY on both queries, but I also had to change the WHERE clause in the second – to make sure that the WHERE clause filters out all rows already processed, I have to include rows with a higher ProductID as well as rows with an equal ProductID and a higher primary key value – and in order for this to work, I also have to include the primary key columns as tie-breakers in the ORDER BY clause. I won’t post the full code, as most of it remains the same, but the “Read next row” query in the loop now reads like this:

     

      -- Read next row

      SELECT TOP (1) @SalesOrderID = SalesOrderID,

                     @SalesOrderDetailID = SalesOrderDetailID,

                     @OrderQty = OrderQty,

                     @ProductID = ProductID,

                     @LineTotal = LineTotal

      FROM           Sales.SalesOrderDetail

      WHERE          ProductID > @ProductID

      OR (           ProductID = @ProductID

          AND        SalesOrderID > @SalesOrderID)

      OR (           ProductID = @ProductID

          AND        SalesOrderID = @SalesOrderID

          AND        SalesOrderDetailID > @SalesOrderDetailID)

      ORDER BY       ProductID, SalesOrderID, SalesOrderDetailID;

     

    The impact on performance is dramatic, to say the least. With this slight modification in the order in which rows have to be processed, the average execution time for five consecutive test runs rises to 5822 ms. The cursor version gets slower as well as a result of the new sort order, but by far less – it still takes only 3377 ms, so the poor man’s cursor is now worse by over seventy percent!

     

    For the final test, I checked the effects of ordering by a column that’s not indexed at all. I did this in the original cursor test by ordering on LineTotal, so I’ll do the same here. Since LineTotal is, like ProductID in the previous test case, not constrained to be unique, the same considerations apply. That means that I can reuse the code of the version that ordered by ProductID, except of course that I have to change each occurrence of ProductID to LineTotal.

     

    This change really wrecked performance for the poor man’s cursor. I wanted to wait it out, but finally decided to kill the test after one and a half hours. I then realized that the LineTotal column I was using is a non-persisted computed column, which adds an enormous amount of overhead – for each of the 121,317 iterations, SQL Server has to recalculate the LineTotal for each of the 121,317 rows – that is a total of almost 15 billion calculations! So I decided to change this test case to sort on OrderQty instead, then left the computer to execute the test run overnight. The next day, the duration was listed as a whopping 27,859,593 ms (seven and three quarter hours!) – just a tad slower than the real cursor, which clocked in at an average execution time of 3430 ms when sorting on the computed LineTotal column and 3352 ms when sorting on OrderQty.

     

    Modifying data

     

    Naturally, I wanted to test the performance of a poor man’s cursor in a modifying scenario as well. I didn’t really expect any surprises. After all, I already know that the fastest cursor solution uses the exact same cursor options as when reading data. I’ll spare you the poor man’s cursor code this time, since it’s based on the cursor code published in my previous blog posts, with the same modifications as above. Since this update scenario happens to be based on ordering by the clustered index key, I expected the poor man’s cursor to be just a tad faster, just as in the reading scenario.

     

    After running the tests, I was surprised. The real cursor version took 5009 ms on average; the poor man’s cursor achieved the same task in just 4722 ms – a speed gain of over five percent. The speed gain was so much more than I expected that I actually repeated the tests – but with the same results. I must admit that I have no idea why the exact same cursor, transformed to the exact same poor man’s cursor, results in more speed gain when the rows are then updated than when they are merely used in a computation.

     

    I did not test performance of the poor man’s cursor in a scenario where the rows have to be processed in a different order than the clustered index key. Based on the results of the tests for reading data, I expect performance to go down the drain in a very similar way.

     

    Conclusion

     

    People claiming that a poor man’s cursor performs better than a real cursor are mostly wrong. When the order in which rows have to be processed does not match the clustered index key, a properly optimized cursor will run rings around the poor man’s cursor.

     

    The only exception is if the required ordering happens to coincide with the clustered index key. In those cases, a poor man’s cursor may sometimes beat a real cursor by a few percent, although there are other cases where the cursor still wins (also by a few percent). Even in the cases where the poor man’s cursor does win, the margin is so small that I’d recommend just using real cursors, with the appropriate options for maximum performance (that is, LOCAL, FORWARD_ONLY, STATIC, and READ_ONLY) in all cases.

     

    Except of course in the 99.9% of all cases where a set-based solution beats the cursor-based one :-).

     

    Using the CLR

     

    When you use CLR code to process data provided by SQL Server, iterating over rows to process them one at a time becomes the standard – after all, there is no support for set-based operations in C#, VB.Net, or any other .Net enabled programming language. As such, the claim made by Adam Machanic has valid grounds. A language that has no other option but to iterate over the rows and process them one at a time had better be optimized for this kind of processing!

     

    Reading data

     

    The CLR version of the code to calculate the sum of all LineTotal values is about as simple as it gets:

     

    [Microsoft.SqlServer.Server.SqlProcedure]

    public static SqlInt32 ReadData([SqlFacet(Precision = 38, Scale = 6)] out SqlDecimal Total)

    {

        // Initialize subtotal

        decimal SubTotal = 0;

     

        // Set up connection and query

        SqlConnection conn = new SqlConnection("context connection=true");

        conn.Open();

        SqlCommand cmd = conn.CreateCommand();

        cmd.CommandText = "SELECT   SalesOrderID, SalesOrderDetailID, " +

                          "         OrderQty, ProductID, LineTotal " +

                          "FROM     Sales.SalesOrderDetail " +

                          "ORDER BY SalesOrderID, SalesOrderDetailID;";

                          //"ORDER BY ProductID;";

                          //"ORDER BY LineTotal;";

                          //"ORDER BY OrderQty;";

        cmd.CommandType = CommandType.Text;

     

        // Process rows from reader; accumulate total

        SqlDataReader rdr = cmd.ExecuteReader();

        while (rdr.Read() == true)

        {

            SubTotal += (decimal)rdr[4];

        }

     

        // Clean up and return result

        conn.Dispose();

        Total = new SqlDecimal(SubTotal);

        return 0;

    }

     

    Note that I did not do any dynamic SQL or fancy stuff to make the processing order variable; I just commented one line, uncommented another and recompiled. This avoids dangerous dynamic SQL and complex CASE expressions in the ORDER BY, plus it mimics much better how a real application would work – cursors and other iterative solutions are often used when developers (think they) need to process in a certain order, so the ORDER BY would usually be fixed.

     

    The results of running the tests proved that Adam got it completely right – this CLR implementation does indeed run rings around even the fastest of all cursor solutions, taking on average only 1060 milliseconds when ordering by the clustered index, 1072 milliseconds when ordering by the nonclustered index, 1132 milliseconds when ordering by the computed non-indexed column, and 1050 milliseconds when ordering by the non-computed non-indexed column. Three of these are so close together that I think that the differences are within the statistical margin of error and that they should be considered to be the same. The 70 extra milliseconds for ordering by a computed column are obviously the time taken to compute the value for each row in order to do the sorting.

     

    I don’t understand why ordering by the clustered index key doesn’t result in some additional performance gain, as I expected this one to differ from the others by one sort step in the execution plan. This was another test case I repeated a few more times to make sure that I didn’t accidentally mess things up. If I execute the cmd.CommandText as a separate query in SQL Server Management Studio, I do get a significantly cheaper execution plan when ordering by the clustered index key, so I guess that this will just have to be filed as one of the many things of SQL Server I don’t understand.

     

    Modifying data

     

    The CLR starts showing a completely different face when you have to modify the data you read from a cursor. The main problem is that you can’t use SqlDataReader anymore, since this blocks the context connection from being used for any other queries. You could choose to open a separate connection, but that has the disadvantage that you perform the updates in a separate transaction context so that you run into a huge blocking risk, plus a rollback of the main transaction would not roll back the changes made from this procedure.

     

    So that leaves me with only one other option – use the SqlDataAdapter.Fill method to copy the entire result of the query into a DataSet, then loop over it and process its rows one by one. This results in the CLR version doing a lot more work and using a significant amount of memory. The fact that we no longer update each row right after reading it, but rather read them all first and only then update them all, means that there is also an increased chance that a row is no longer in the data cache and hence has to be read from disk a second time for the update, effectively doubling the amount of physical I/O (though this did not happen in my case).

     

    [Microsoft.SqlServer.Server.SqlProcedure]

    public static SqlInt32 ModifyData()

    {

        // Open connection (context connection since we're called in-process)

        SqlConnection conn = new SqlConnection("context connection=true");

        conn.Open();

     

        // Prepare commands to fetch rows and to update

        String SelCmd = "SELECT   SalesOrderID, SalesOrderDetailID, " +

                        "         OrderQty, ProductID, LineTotal " +

                        "FROM     Sales.SalesOrderDetail " +

                        "ORDER BY SalesOrderID, SalesOrderDetailID;";

        SqlDataAdapter da = new SqlDataAdapter(SelCmd, conn);

     

        String UpdCmd = "UPDATE Sales.SalesOrderDetail " +

                        "SET    OrderQty = @OrderQty " +

                        "WHERE  SalesOrderID = @SalesOrderID " +

                        "AND    SalesOrderDetailID = @SalesOrderDetailID;";

        SqlCommand upd = new SqlCommand(UpdCmd, conn);

        upd.Parameters.Add("@SalesOrderID", SqlDbType.Int);

        upd.Parameters.Add("@SalesOrderDetailID", SqlDbType.Int);

        upd.Parameters.Add("@OrderQty", SqlDbType.SmallInt);

     

        // Read rows to process; copy to DataAdapter

        DataSet ds = new DataSet();

        da.Fill(ds);

       

        // Process rows

        foreach (DataRow dr in ds.Tables[0].Rows)

        {

            Int32 SalesOrderID = (Int32)dr[0];

            Int32 SalesOrderDetailID = (Int32)dr[1];

            Int16 OrderQty = (Int16)dr[2];

     

            // Set parameters; perform update

            upd.Parameters[0].Value = SalesOrderID;

            upd.Parameters[1].Value = SalesOrderDetailID;

            upd.Parameters[2].Value = OrderQty + 1;

            upd.ExecuteNonQuery();

        }

     

        // Cleanup and return

        conn.Dispose();

        return 0;

    }

     

    After compiling and deploying the code above, I once more ran 5 tests. The average execution time for this version was 12,215 milliseconds, almost 150% more than the cursor version. My guess is that this huge increase in time is not a result of the update command itself, but a result of the requirement to pre-load the data in a DataSet and then iterate over that. I did not test it, but I expect to see a similar problem if a cursor requires reading some additional data, based on the data read in the cursor – this, too, would require the CLR version to employ a DataSet instead of simply looping over a SqlDataReader.

     

    Conclusion

     

    Adam’s suggestion to use CLR makes sense, but only for cases where no additional data access, either reading or modifying, is required when processing the rows in the cursor. As soon as the latter becomes a requirement, the CLR version has to switch from using a SqlDataReader to using SqlDataAdapter.Fill, and performance suffers horribly.

  • Curious cursor optimization options

    The best way to optimize performance of a cursor is, of course, to rip it out and replace it with set-based logic. But there is still a small category of problems where a cursor will outperform a set-based solution. The introduction of ranking functions in SQL Server 2005 has taken a large chunk out of that category – but some remain. For those problems, it makes sense to investigate the performance effects of the various cursor options.

     

    I am currently preparing a series of blog posts on a neat set-based solution I found for a problem that screams “cursor” from all corners. But in order to level the playing field, I figured that it would be only fair to optimize the hell out of the cursor-based solution before blasting it to pieces with my set-based version. So I suddenly found myself doing something I never expected to do: finding the set of cursor options that yields the best performance.

     

    That task turned out to be rather time-consuming, as there are a lot of cursor options that can all be combined in a huge number of ways. And I had to test all those combinations in various scenarios, like reading data in a variety of orders, and updating data in two separate ways. I won’t bore you with all the numbers here; instead, I intend to point out some highlights, including some very curious finds. For your reference, I have included a spreadsheet with the results of all tests as an attachment to this post.

     

    Disclaimer: All results presented here are only valid for my test cases (as presented below) on my test data (a copy of the SalesOrderDetail table in the AdventureWorks sample database), on my machine (a desktop with 2GB of memory, a dual-core processor, running SQL Server 2005 SP2), and with my workload (just myself, and only the test scripts were active). If your situation is different, for instance if the table will not fit in cache, if the database is heavily accessed by competing processes, or if virtually any other variable changes, you really ought to perform your own test if you want to squeeze everything out of your cursor. And also consider that many options are included to achieve other goals than performance, so you may not be able to use all options without breaking something.

     

    Reading data

     

    Many cursors are used to create reports. The data read is ordered in the order required for the report, and running totals and subtotals are kept and reset as required while reading rows. Those already on SQL Server 2005 can often leverage the new ranking functions to calculate the same running totals without the overhead of a cursor, but if you are still stuck on SQL Server 2000 or if you face a problem that the ranking functions can’t solve, you may find yourself preferring a cursor over the quadratically degrading performance of the correlated subquery that the set-based alternative requires.

     

    Since the order of these cursors is dictated by the report requirements rather than the table and index layout, I decided to test the three variations you might encounter – you may be so lucky that the order of the report matches the clustered index, or you might find that a nonclustered index matches the order you need, or you may be so unlucky that you need to order by a column that is not indexed.

     

    I used the code below for my performance tests. You can run this code as is on the AdventureWorks sample database, or you can do as I did and copy the Sales.SalesOrderDetail table, with all indexes and all data, to your own testing database.

     

    -- Keep track of execution time

    DECLARE @start datetime;

    SET @start = CURRENT_TIMESTAMP;

     

    -- Declare and initialize variables for cursor loop

    DECLARE @SalesOrderID int,

            @SalesOrderDetailID int,

            @OrderQty smallint,

            @ProductID int,

            @LineTotal numeric(38,6),

            @SubTotal numeric(38,6);

    SET @SubTotal = 0;

     

    -- Declare and init cursor

    DECLARE SalesOrderDetailCursor

      CURSOR

        LOCAL           -- LOCAL or GLOBAL

        FORWARD_ONLY    -- FORWARD_ONLY or SCROLL

        STATIC          -- STATIC, KEYSET, DYNAMIC, or FAST_FORWARD

        READ_ONLY       -- READ_ONLY, SCROLL_LOCKS, or OPTIMISTIC

        TYPE_WARNING    -- Inform me of implicit conversions

    FOR SELECT   SalesOrderID, SalesOrderDetailID,

                 OrderQty, ProductID, LineTotal

        FROM     Sales.SalesOrderDetail

        ORDER BY SalesOrderID, SalesOrderDetailID; -- Match clustered index

    --    ORDER BY ProductID;                      -- Match nonclustered index

    --    ORDER BY LineTotal;                      -- Doesn’t match an index

     

    OPEN SalesOrderDetailCursor;

     

    -- Fetch first row to start loop

    FETCH NEXT FROM SalesOrderDetailCursor

          INTO @SalesOrderID, @SalesOrderDetailID,

               @OrderQty, @ProductID, @LineTotal;

     

    -- Process all rows

    WHILE @@FETCH_STATUS = 0

    BEGIN;

     

      -- Accumulate total

      SET @SubTotal = @SubTotal + @LineTotal;

     

      -- Fetch next row

      FETCH NEXT FROM SalesOrderDetailCursor

            INTO @SalesOrderID, @SalesOrderDetailID,

                 @OrderQty, @ProductID, @LineTotal;

     

    END;

     

    -- Done processing; close and deallocate to free up resources

    CLOSE SalesOrderDetailCursor;

    DEALLOCATE SalesOrderDetailCursor;

     

    -- Display result and duration

    SELECT @SubTotal;

    SELECT DATEDIFF(ms, @start, CURRENT_TIMESTAMP);

    go

     

    The first surprise came straight away when I set my baseline by commenting out all options of the DECLARE CURSOR statement. The execution time when ordering by the clustered index was 6.9 seconds; when ordering by a nonclustered index it was 9 seconds – but when ordering by an unindexed column, the cursor with default options turned out to be faster, at only 6.4 seconds. I later found the reason for this to be that the first two defaulted to a relatively slow dynamic cursor, whereas the latter used the faster technique of a keyset cursor.

     

    Choosing LOCAL or GLOBAL had no effect on cursor performance. This was as expected, since this option only controls the scope of the cursor, nothing else. For this reason, I excluded this option from testing the variants for updating with a cursor.

     

    I didn’t see any difference between the FORWARD_ONLY and SCROLL options either. This came as a surprise, since FORWARD_ONLY exposes only a subset of the functionality of the SCROLL version. I really expected SQL Server to be able to do some clever optimization if it knew that I’d never read in any other direction than from the first to the last row. I’m really wondering why the FORWARD_ONLY option is not deprecated, seeing that there is no advantage at all in specifying it – but maybe the development team in Redmond knows something I don’t?

     

    The static, keyset, and dynamic cursors performed exactly as expected – in all cases, the static cursor was the fastest, the keyset came second, and the dynamic cursor finished last. No surprises here – until I started my tests with the cursor that orders by an unindexed column. In these tests, SQL Server informed me (due to the TYPE_WARNING option) that the created cursor was not of the requested type. It did not tell me what type it did create, nor why it disregarded the requested options. I failed to see anything in Books Online to explain this behavior, so I filed a bug for this. This did explain why the “hardest” sort option was the fastest when running with default options – since a dynamic cursor was not available, this one had to use a keyset cursor instead.

     

    My biggest surprise came when I tested the FAST_FORWARD option. According to Books Online, this option “specifies a FORWARD_ONLY, READ_ONLY cursor with performance optimizations enabled”, so I expected performance to be at least on par with, and probably better than that of a STATIC FORWARD_ONLY READ_ONLY cursor – but instead, the FAST_FORWARD option turned out to be consistently slower, in some cases even by 15%!

     

    The last set of options, the ones specifying the locking behavior, turned out to depend on the chosen cursor type. For a static cursor, the two available options made no difference. For other cursors, READ_ONLY was best – but SCROLL_LOCKS was second for keyset cursors and third for dynamic cursors, and OPTIMISTIC was second for dynamic and third for keyset. Go figure.

     

    Based on all tests, it turns out that the best performance is achieved by specifying a STATIC cursor. I would add the LOCAL, FORWARD_ONLY, and READ_ONLY options for documentation purposes, but they make no performance difference. With these options, execution time went down from 6.4 to 9 seconds (depending on the ORDER BY) to 3.3 to 3.4 seconds. Of course, none of those come even close to the 0.2 seconds of the set-based equivalent for this test case:

     

    -- Keep track of execution time

    DECLARE @start datetime;

    SET @start = CURRENT_TIMESTAMP;

     

    -- Calculate and display result

    SELECT SUM(LineTotal)

    FROM   Sales.SalesOrderDetail;

     

    -- Display duration

    SELECT DATEDIFF(ms, @start, CURRENT_TIMESTAMP);

    go

     

    Modifying data

     

    Another scenario in which cursors are used is when data has to be updated, and the calculation to determine the new data is thought to be too complicated for a set-based approach. In those cases, a cursor is used to process the rows one by one, calculate the new data, and update the data with the calculation results.

     

    If you specify the FOR UPDATE clause in the cursor declaration, you can use the WHERE CURRENT OF clause of the UPDATE command to update the last row fetched. Of course, you can also omit the FOR UPDATE clause and use a regular UPDATE statement, using the primary key values of the row just read to find the row to update.

     

    Since I expected a FOR UPDATE cursor to be optimized for updating the last row fetched, I first tested its performance, by using this code:

     

    -- Enclose in transaction so we can roll back changes for the next test

    BEGIN TRANSACTION;

    go

     

    -- Keep track of execution time

    DECLARE @start datetime;

    SET @start = CURRENT_TIMESTAMP;

     

    -- Declare and initialize variables for cursor loop

    DECLARE @SalesOrderID int,

            @SalesOrderDetailID int,

            @OrderQty smallint,

            @ProductID int,

            @LineTotal numeric(38,6);

     

    -- Declare and init cursor

    DECLARE SalesOrderDetailCursor

      CURSOR

        LOCAL           -- LOCAL or GLOBAL makes no difference for performance

        FORWARD_ONLY    -- FORWARD_ONLY or SCROLL

        KEYSET          -- KEYSET or DYNAMIC

                        --    (other options are incompatible with FOR UPDATE)

        SCROLL_LOCKS    -- SCROLL_LOCKS or OPTIMISTIC

                        --    (READ_ONLY is incompatible with FOR UPDATE)

        TYPE_WARNING    -- Inform me of implicit conversions

    FOR SELECT   SalesOrderID, SalesOrderDetailID,

                 OrderQty, ProductID, LineTotal

        FROM     Sales.SalesOrderDetail

        ORDER BY SalesOrderID, SalesOrderDetailID

    FOR UPDATE          -- FOR UPDATE or FOR UPDATE OF OrderQty

        ;

     

    OPEN SalesOrderDetailCursor;

     

    -- Fetch first row to start loop

    FETCH NEXT FROM SalesOrderDetailCursor

          INTO @SalesOrderID, @SalesOrderDetailID,

               @OrderQty, @ProductID, @LineTotal;

     

    -- Process all rows

    WHILE @@FETCH_STATUS = 0

    BEGIN;

     

      -- Change OrderQty of current order

      UPDATE Sales.SalesOrderDetail

      SET    OrderQty = @OrderQty + 1

      WHERE  CURRENT OF SalesOrderDetailCursor;

     

      -- Fetch next row

      FETCH NEXT FROM SalesOrderDetailCursor

            INTO @SalesOrderID, @SalesOrderDetailID,

                 @OrderQty, @ProductID, @LineTotal;

     

    END;

     

    -- Done processing; close and deallocate to free up resources

    CLOSE SalesOrderDetailCursor;

    DEALLOCATE SalesOrderDetailCursor;

     

    -- Display duration

    SELECT DATEDIFF(ms, @start, CURRENT_TIMESTAMP);

    go

     

    -- Rollback changes for the next test

    ROLLBACK TRANSACTION;

    go

     

    Just as with the tests that only read the data, there was no difference between SCROLL and FORWARD_ONLY cursors. And just as with the tests that only read the data, KEYSET cursors were consistently faster than their DYNAMIC counterparts. However, in this case the SCROLL_LOCKS locking option turned out to be consistently faster than OPTIMISTIC, though I expect that this might change if only a fraction of the rows is updated.

     

    From a performance point of view, there is absolutely no difference between a generic FOR UPDATE and a completely specified FOR UPDATE OF column, column, … For documentation purposes, I would prefer the latter.

     

    And again, just as with the tests that only read the data, the default cursor options chosen when I did not specify any turned out to select the slowest of all available options. Ugh!

     

    However, the real kicker came when I left out the FOR UPDATE clause of the DECLARE CURSOR statement and changed the UPDATE statement to use the primary key values instead of the WHERE CURRENT OF clause. One would expect this clause to be fast – since it is written especially for, and can be used exclusively in, the processing of a FOR UPDATE cursor, every trick in the book can be used to optimize it. However, the reverse turned out to be true. Even the fastest of all WHERE CURRENT OF variations I tested was easily beaten by even the slowest of all WHERE PrimaryKey = @PrimaryKey variations. Here is the code I used, in case you want to test it yourself:

     

    -- Enclose in transaction so we can roll back changes for the next test

    BEGIN TRANSACTION;

    go

     

    -- Keep track of execution time

    DECLARE @start datetime;

    SET @start = CURRENT_TIMESTAMP;

     

    -- Declare and initialize variables for cursor loop

    DECLARE @SalesOrderID int,

            @SalesOrderDetailID int,

            @OrderQty smallint,

            @ProductID int,

            @LineTotal numeric(38,6);

     

    -- Declare and init cursor

    DECLARE SalesOrderDetailCursor

      CURSOR

        LOCAL           -- LOCAL or GLOBAL makes no difference for performance

        FORWARD_ONLY    -- FORWARD_ONLY or SCROLL

        STATIC          -- STATIC, KEYSET, DYNAMIC, or FAST_FORWARD

        READ_ONLY       -- READ_ONLY, SCROLL_LOCKS, or OPTIMISTIC

        TYPE_WARNING    -- Inform me of implicit conversions

    FOR SELECT   SalesOrderID, SalesOrderDetailID,

                 OrderQty, ProductID, LineTotal

        FROM     Sales.SalesOrderDetail

        ORDER BY SalesOrderID, SalesOrderDetailID;

     

    OPEN SalesOrderDetailCursor;

     

    -- Fetch first row to start loop

    FETCH NEXT FROM SalesOrderDetailCursor

          INTO @SalesOrderID, @SalesOrderDetailID,

               @OrderQty, @ProductID, @LineTotal;

     

    -- Process all rows

    WHILE @@FETCH_STATUS = 0

    BEGIN;

     

      -- Change OrderQty of current order

      UPDATE Sales.SalesOrderDetail

      SET    OrderQty = @OrderQty + 1

      WHERE  SalesOrderID = @SalesOrderID

      AND    SalesOrderDetailID = @SalesOrderDetailID;

     

      -- Fetch next row

      FETCH NEXT FROM SalesOrderDetailCursor

            INTO @SalesOrderID, @SalesOrderDetailID,

                 @OrderQty, @ProductID, @LineTotal;

     

    END;

     

    -- Done processing; close and deallocate to free up resources

    CLOSE SalesOrderDetailCursor;

    DEALLOCATE SalesOrderDetailCursor;

     

    -- Display duration

    SELECT DATEDIFF(ms, @start, CURRENT_TIMESTAMP);

    go

     

    -- Rollback changes for the next test

    ROLLBACK TRANSACTION;

    go

     

    So from using WHERE CURRENT OF and default options, at 16.6 seconds, I’ve gotten execution time down to 5.1 seconds by using the primary key for the update and specifying a STATIC cursor (including the LOCAL, FORWARD_ONLY, and READ_ONLY options for documentation). Looks good, as long as I close my eyes to the 0.4 second execution time of the set-based version:

     

    -- Enclose in transaction so we can roll back changes for the next test

    BEGIN TRANSACTION;

    go

     

    -- Keep track of execution time

    DECLARE @start datetime;

    SET @start = CURRENT_TIMESTAMP;

     

    -- Change OrderQty of all orders

    UPDATE Sales.SalesOrderDetail

    SET    OrderQty = OrderQty + 1;

     

    -- Display duration

    SELECT DATEDIFF(ms, @start, CURRENT_TIMESTAMP);

    go

     

    -- Rollback changes for the next test

    ROLLBACK TRANSACTION;

    go

     

    Conclusion

     

    If you have to optimize a cursor for performance, keep the following considerations in mind:

     

    1. Always try to replace the cursor by a set-based equivalent first. If you fail to see how, do not hesitate to ask in one of the SQL Server newsgroups.
    2. If you are really stuck with a cursor, then do NOT rely on the default options. They will result in the slowest of all possible option combinations.
    3. If you think that the FAST_FORWARD option results in the fastest possible performance, think again. I have not found one single test case where it was faster than, or even as fast as, a STATIC cursor.
    4. Do NOT use the WHERE CURRENT OF syntax of the UPDATE command. Using a regular WHERE clause with the primary key values will speed up your performance by a factor of two to three.
    5. Do not rely blindly on my performance results. Remember, the one thing that is always true when working with SQL Server is: “it depends”.
  • So-called "exact" numerics are not at all exact!

    Attempting to dispel myths tends to make me feel like Don Quixote, riding against hordes of windmills that won’t budge. In this case, even some of my fellow MVPs and Microsoft’s own Books Online are among the windmills…

     

    Books Online says that there are two categories of numeric data types: “approximate” (float and real), and “exact” (all others, but for this discussion mainly decimal and numeric). It also says that “floating point data is approximate; therefore, not all values in the data type range can be represented exactly”, thereby suggesting that other numeric data types are capable of representing all values in the data type range. The latter is of course not true, for there is no way that values such as 1/3, π, or √2 can ever be represented exactly in any of SQL Server’s data types.

     

    But Books Online is not the only one to blame – many respected MVPs carry part of the blame as well. For instance, Aaron Bertrand, the original author of the famous website www.aspfaq.com, wrote on a page about rounding errors when using floating point mathematics: “You should try to avoid the FLOAT datatype whenever possible, and opt for the more versatile, and precise, DECIMAL or NUMERIC datatypes instead”. And just today, I was reading this (otherwise impressive) book by Bob Beauchemin and Dan Sullivan, when I came across a passage that presented a code snippet to demonstrate rounding errors in the .Net equivalent of float; the authors did present size and speed as possible reasons to choose float over decimal, but failed to mention that decimal is not exact either.

     

    Since reading this paragraph was the final straw that caused me to blog on this, I’ll start with a SQL Server equivalent of the code presented by Bob and Dan:

     

    DECLARE @Float1 float, @Float2 float, @Float3 float, @Float4 float;

    SET @Float1 = 54;

    SET @Float2 = 0.03;

    SET @Float3 = 0 + @Float1 + @Float2;

    SELECT @Float3 - @Float1 - @Float2 AS "Should be 0";

     

    Should be 0

    ----------------------

    1.13797860024079E-15

     

    DECLARE @Fixed1 decimal(8,4), @Fixed2 decimal(8,4), @Fixed3 decimal(8,4);

    SET @Fixed1 = 54;

    SET @Fixed2 = 0.03;

    SET @Fixed3 = 0 + @Fixed1 + @Fixed2;

    SELECT @Fixed3 - @Fixed1 - @Fixed2 AS "Should be 0";

     

    Should be 0

    ---------------------------------------

    0.0000

     

    As you see, adding some numbers and then subtracting them again does indeed incur a rounding error. The result is 0.0000000000000011379786 instead of 0. But what happens if we do a similar test with multiplying and dividing? The code below should always return 1. It does for the floating point calculation, but not for the fixed point version – this one’s result is off by exactly 1E-15, approximately the same margin of error that float caused when adding and subtracting.

     

    DECLARE @Float1 float, @Float2 float, @Float3 float, @Float4 float;

    SET @Float1 = 54;

    SET @Float2 = 0.03;

    SET @Float3 = 1 * @Float1 / @Float2;

    SELECT @Float3 / @Float1 * @Float2 AS "Should be 1";

     

    Should be 1

    ----------------------

    1

     

    DECLARE @Fixed1 decimal(8,4), @Fixed2 decimal(8,4), @Fixed3 decimal(8,4);

    SET @Fixed1 = 54;

    SET @Fixed2 = 0.03;

    SET @Fixed3 = 1 * @Fixed1 / @Fixed2;

    SELECT @Fixed3 / @Fixed1 * @Fixed2 AS "Should be 1";

     

    Should be 1

    ---------------------------------------

    0.99999999999999900

     

    It even gets more interesting when you change the value of @Fixed2 from 0.03 to 0.003 – in that case, the floating point calculation still runs fine and without error, whereas the fixed point calculation bombs:

     

    Msg 8115, Level 16, State 8, Line 11

    Arithmetic overflow error converting numeric to data type numeric.

    Should be 1

    ---------------------------------------

    NULL

     

    Now I’m sure that many of you will already have experimented and found that you could “fix” this by increasing the scale and precision of the fixed point numbers. But the precision can never exceed 38, and it’s not hard at all to come up with examples of rounding errors in fixed point calculations for any setting of scale and precision.
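
    For instance, here is a minimal illustration (not taken from the book or from Books Online):

    -- A third can never be stored exactly in a decimal, no matter what scale you pick,
    -- so multiplying it back by 3 exposes the rounding error.
    DECLARE @Third decimal(38,36);
    SET @Third = 1.0 / 3.0;                        -- already rounded during the division
    SELECT 1 - (@Third * 3) AS "Rounding error";   -- not 0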

     

    Mind you, I am not saying that float is “better” than decimal. It is not – but it’s not worse either. Both “exact” and “approximate” numeric data types have their place. The “exact” numeric data types are a grand choice when dealing with numbers that have a fixed number of decimal places and represent an exact amount, such as monetary units. There’s no way that I would ever use floating point data in such an application!

     

    But if you are dealing with scientific data, that is usually derived from some measurement and hence by definition an approximation of reality (since there’s no way to measure with unlimited precision), floating point data is an excellent choice. Not only because its approximate nature mimics the act of trying to get a measure as close as possible to reality, but also (or maybe I should say: mainly) because it can easily represent both very large and very small numbers with a large number of significant figures – try for instance to do something like this with “exact” numeric data types, if you don’t believe me!

     

    DECLARE @Float1 float, @Float2 float, @Float3 float, @Float4 float;

    SET @Float1 = 987654321.0 * 123456789.0;

    SET @Float2 = 0.123456789 / 998877665544332211.0;

    SET @Float3 = 1 * @Float1 / @Float2;

    SELECT @Float3 / @Float1 * @Float2 AS "Should be 1";

     

    Should be 1

    ----------------------

    1

  • How NOT to pass a lot of parameters

    Did you know that SQL Server allows stored procedures to have up to 2100 parameters? And more important: do you care? Well, some people do care, and Joe Celko seems to be one of them.

     

    If you are a regular reader of SQL Server newsgroups, you probably know Joe Celko from his always unfriendly and often incorrect replies. Here is a typical example, one that I have seen several times recently, in a paraphrased form:

    Question: I want to send a list of values to my stored procedure, but WHERE ColumnName IN (@ValueList) does not work – how to solve this?

    Answer: SQL Server can handle over 1000 parameters. You should use WHERE ColumnName IN (@Parm1, @Parm2, …, @ParmN).

     

    Joe Celko is the only one I have ever seen giving this advice. Many people will then jump into the discussion, challenging Joe’s advice. To which Joe will always reply that he has received a smart stored procedure that will solve a Sudoku puzzle, and that takes 81 parameters (one for each cell in the puzzle) as its input – unfortunately, Joe has so far refused to actually publish his code to let other people verify his claims.

     

    The test setup

     

    I still wanted to see for myself how passing 81 parameters into a stored procedure compares to other methods of passing in the same input, so I wrote three simple test procedures. Each of these procedures takes a Sudoku puzzle as input, but in three different forms. Each of the three then uses the input to populate a temporary table (#Problem) with the puzzle, and then performs a pretty standard pivot query to output the puzzle in the usual form.
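
    The pivot query could look something like this sketch (the column names of #Problem, here assumed to be Row, Col, and Value, are guesses, not necessarily the names used in the attachment):

    -- Sketch of the kind of pivot query used to display the puzzle as a 9x9 grid.
    SELECT   [Row],
             MAX(CASE WHEN Col = 1 THEN Value END) AS C1,
             MAX(CASE WHEN Col = 2 THEN Value END) AS C2,
             MAX(CASE WHEN Col = 3 THEN Value END) AS C3,
             MAX(CASE WHEN Col = 4 THEN Value END) AS C4,
             MAX(CASE WHEN Col = 5 THEN Value END) AS C5,
             MAX(CASE WHEN Col = 6 THEN Value END) AS C6,
             MAX(CASE WHEN Col = 7 THEN Value END) AS C7,
             MAX(CASE WHEN Col = 8 THEN Value END) AS C8,
             MAX(CASE WHEN Col = 9 THEN Value END) AS C9
    FROM     #Problem
    GROUP BY [Row]
    ORDER BY [Row];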

     

    After verifying that all of the procedures worked as expected, I commented out the pivot query to reduce the output for my performance tests. I then set up the tests. I selected two real Sudoku puzzles (an easy one, with 34 cells given, and a hard one with only 27 cells given) and added two nonsensical ones of my own (one with only 5 cells given, and one with 72 cells). For each combination of a puzzle and a procedure, I coded a loop that calls the procedure a thousand times and records the elapsed time in a table. These twelve loops were then enclosed in an endless loop. Once satisfied with the code, I hit the execute button, pushed my laptop out of the way and went on to other stuff.

     

    Some 24-odd hours later, I interrupted the test script. Each of the twelve “thousand calls” tests had been executed 400 times. I dusted off a query I originally wrote for testing the performance of code for the Spatial Data chapter in “Expert SQL Server 2005 Development” to calculate the average duration per single call, disregarding the fastest and slowest 10% of the measurements to exclude the influence of semi-random other activities on my laptop.
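
    That kind of query can be reconstructed along these lines (the table and column names are guesses): use NTILE to cut each test’s measurements into ten bands, then average only the middle eight.

    -- Sketch of a trimmed average: ignore the fastest and slowest 10% of the measurements.
    WITH Banded AS
     (SELECT TestName, Duration,
             NTILE(10) OVER (PARTITION BY TestName ORDER BY Duration) AS Band
      FROM   dbo.TestResults)                  -- hypothetical table holding the recorded times
    SELECT   TestName,
             AVG(1.0 * Duration) / 1000 AS AvgDurationPerCall   -- each measurement covers 1000 calls
    FROM     Banded
    WHERE    Band BETWEEN 2 AND 9
    GROUP BY TestName;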

     

    (Note that all code to create the stored procedures and run the tests is in the attachment to this post, so you can always repeat these tests on your machine.)

     

    The contenders

     

    The first contender is of course the procedure with 81 parameters that Joe Celko is such an avid fan of. Creating this procedure involved a lot of copying and pasting, a lot of editing numbers in the copied and pasted code, and a lot of tedious debugging until I had finally found and corrected all locations where I had goofed up the copy and paste or where I had failed to edit a number after pasting. The resulting code is very long, tedious to read and maintain, and screams “Hey! You forgot to normalize these repeating groups into their own table” all the way. Manually typing the EXEC statements to call this procedure with test data was also very cumbersome and error-prone. In a real situation, the procedure would probably be called from some user-friendly front end code. I’m far from an expert in front end code, but I expect this code to be very long as well, since it has to check and optionally fill and pass 81 parameters.

     

    The second contender uses a pretty standard CSV string as input, with the additional requirement that each value in the CSV is three characters: row#, column#, value. The procedure uses a variation of one of the scripts found on Erland Sommarskog’s site to parse the CSV list into a tabular format. This code is lots shorter, and as a result easier on the eyes and easier to maintain. Typing in the EXEC statements for testing is still pretty cumbersome (though I found a way to cheat – simply copy the parameter list for the Celko version, do some global search and replace to remove various characters, and the end result was exactly the string I needed to call this procedure). The front end code will probably be lots shorter, since it can use a simple loop to process the input and build the CSV parameter.
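
    For illustration, a fixed-width variation of such a split could look like this sketch (dbo.Numbers is an assumed helper table of consecutive integers; #Problem uses the same guessed columns as above):

    -- Sketch of parsing the fixed-width CSV: every element is three characters (row, column, value),
    -- so element n starts at character position 4 * (n - 1) + 1.
    INSERT INTO #Problem ([Row], Col, Value)
    SELECT SUBSTRING(@CSV, 4 * (n.Number - 1) + 1, 1),
           SUBSTRING(@CSV, 4 * (n.Number - 1) + 2, 1),
           SUBSTRING(@CSV, 4 * (n.Number - 1) + 3, 1)
    FROM   dbo.Numbers AS n
    WHERE  n.Number <= (LEN(@CSV) + 1) / 4;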

     

    The third and last contender takes a CHAR(81) parameter as input. The first 9 characters of this parameter describe the top row, using a space to depict an empty cell; the second set of 9 characters is for the second row, and so forth. Parsing this parameter turned out to be even easier than parsing the CSV parameter. Another observation I made is that it was much easier to manually enter the parameter for the tests – just read the puzzle left to right and top to bottom and type either a number or a space for each cell. This was absolutely painless, and I didn’t make a single mistake. Of course, this is irrelevant for the expected real situation where the parameter is built by the front end – the code to do this will probably be about as complex as that for the CSV parameter.
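
    Unpacking the CHAR(81) parameter is a matter of simple arithmetic on the character position, as in this sketch (again using the assumed dbo.Numbers helper table and the guessed #Problem columns):

    -- Sketch of unpacking the CHAR(81) parameter: characters 1-9 are row 1, 10-18 are row 2, etc.
    INSERT INTO #Problem ([Row], Col, Value)
    SELECT (n.Number - 1) / 9 + 1,                    -- row number
           (n.Number - 1) % 9 + 1,                    -- column number within the row
           SUBSTRING(@Puzzle, n.Number, 1)            -- the digit for this cell
    FROM   dbo.Numbers AS n
    WHERE  n.Number <= 81
    AND    SUBSTRING(@Puzzle, n.Number, 1) <> ' ';    -- skip the empty cells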

     

    Performance results

     

    If you’re eager to see the test results, you’ll probably have skipped the previous section. No problem, just promise to go back and read it later, m’kay?

     

    Test version               Joe Celko’s 81 parameters   Single CSV parameter   Single CHAR(81) parameter
    Almost empty (5 cells)     1.08 ms                     1.40 ms                1.05 ms
    Hard puzzle (27 cells)     1.80 ms                     1.78 ms                1.35 ms
    Easy puzzle (34 cells)     2.04 ms                     1.90 ms                1.45 ms
    Almost full (72 cells)     3.34 ms                     2.56 ms                1.99 ms

     

    As you see, using lots of parameters is faster than using a single CSV parameter only if you don’t actually pass values in these parameters. As soon as you use the parameters, performance of a procedure with lots of parameters deteriorates quickly.

     

    You can also see that the CHAR(81) parameter wins in all cases.

     

    Network bandwidth

     

    My testing was all carried out on my laptop. The results will for the most part be a result of the time needed to process the input, not of network capacity. However, it is easy to see by just looking at the EXEC statements that the CHAR(81) version uses the least network resources, Celko’s version with 81 parameters uses the most, and the CSV version sits nicely in between.

     

    Final thoughts

     

    You may have noted that I have not included a version with an XML input parameter in my tests. I probably should have done that, but I have to admit that I still have so much to learn on how to handle XML in a SQL Server database that I didn’t feel comfortable enough to sit down and write one myself. But your submissions are welcomed – if you feel that you can write an efficient version of this procedure that accepts its input in XML format, do not hesitate to write me. As soon as I can spare the time to set up the laptop for another all-nighter of performance testing, I’ll rerun the script with your XML solution included and post the results back to this site.

     

    While writing this post, I found a newsgroup posting by Joe Celko where he reveals a snippet of “his” Sudoku solver. And guess what? I was wrong when I thought that I could guess how his procedure looks. It turns out that he does not use defaults for his parameters; you always have to supply them all, using 0 for an empty cell. I didn’t want to repeat all the tests at this time. I expect that this will reduce performance even more, though not by much – but it will also cause a huge increase in network usage!

    I also saw that the parameters in Joe Celko’s version were declared as integer, so that each parameter will use 4 bytes instead of just 1. This will definitely affect both the performance of the procedure and the network pressure.

     

    Conclusion

     

    If you have to pass a long list of values to a stored procedure or function, you should not use a long collection of parameters. It makes the code harder to write and maintain and more prone to subtle errors; it makes the code longer (which will affect parse and compile time, though I did not include this in my test); it uses far more network resources than any of the other alternatives (except, maybe, XML); and it gets terribly slow as more parameters are actually used.

     

    Joe Celko will probably find that he too can shorten the amount of code in his Sudoku solver *and* increase performance by using a different strategy to pass the puzzle. Of course, in the case of solving a Sudoku, those 2 milliseconds of extra execution time won’t really matter, nor will the few hundred extra bytes travelling over the network. But if you ever encounter a similar multi-parameter problem in a procedure that will be called from a web page that gets hundreds of hits per second, those 2 milliseconds and those extra bytes on the network can suddenly become a huge bottleneck!

     

    SQL Server may support up to 2100 parameters – but that does not imply that it is a good idea to actually use them!
