THE SQL Server Blog Spot on the Web


Alexander Kuznetsov

Controlling the number and size of unit tests.

I have been practicing database unit testing for two years, and I am a big fan of it. My test harness is a great help to me - it lets me change my modules quickly and with confidence. However, having too many unit tests is counterproductive - they become a burden and slow you down rather than help you out. Joel Spolsky recently wrote a great article about the overhead of maintaining your unit tests being "disproportional to the amount of benefit that you get out of them".

That article is a must-read!


Unit tests are an overhead.


It takes time to develop unit tests, and more time to maintain them. They also slow down working with source control and searches. For example, every time you search for a stored procedure name in your solution, every unit test that runs it shows up in your search results. Yet we accept this overhead because we expect the payoff from better code quality and easier maintenance to compensate for it. Clearly, to break even or benefit from unit tests, we need to keep the overhead within reason.


Unit tests take time to run, sometimes too much time.


To be really useful, your unit test harness needs to run quickly - otherwise it will slow you down. You will either spend too much time watching it run, or you will not run it as often as you should - and if you break something, you will not notice it right away. Sometimes this becomes a serious problem even with C# or C++ solutions. Because database tests usually run much slower, it may become a serious problem much sooner. In many cases you simply cannot afford to keep all your database unit tests because of the sheer amount of time they take to run. Alternatively, database unit testing can be abandoned altogether because of this slowness, dismissed as yet another approach that sounds good in theory but does not hold up in the real world. Database unit testing does hold up in real-life projects, but only if you use common sense and do it properly.


Unit tests may help you out if you need to change your module.


They can be extremely useful - you can recall what the module is supposed to do, identify an issue, make a change, and retest in just a few minutes. This is when we need them most of all. So, the more potentially useful a unit test is, the more you want to keep it long term. By the same logic, if a unit test is very unlikely to ever help you out, there is no reason to keep it.


Complex modules need unit tests most of all. Trivial tests and tests for simple modules can be eliminated.


Complex queries need more maintenance, because it takes more time to understand and change them, and because they are more likely to need optimization. This is where unit tests come in very handy. On the other hand, trivial queries such as the following are easier to change and less likely to need optimization later:


CREATE PROCEDURE Readers.SelectCustomerByID
@CustomerID INT
AS
SELECT CustomerID, FirstName, LastName
FROM Data.Customers
WHERE CustomerID = @CustomerID;



Do you have problems understanding what this procedure does? Do you expect it will be difficult to change it if needed? Do you think this procedure needs an explicit unit test?


Do you really need existence tests?


When we write blog posts and articles, our examples must be short and simple. Running a unit test to determine whether your database or your table exists is a clear and simple example and a great way to learn how unit tests work (make sure you have read this article by SQL Server MVP Andy Leonard). However, there is no need to keep such tests in real production solutions - if a database or a table is dropped, you will get a clear error message, and you will know what your problem is right away. You will not need an existence unit test to help you out with troubleshooting, will you?


Make a single database call, and verify everything.


Database calls are slow, so you really want to limit the number of database calls your test harness makes. For example, suppose you have a complex stored procedure, Readers.SelectCustomersByName, and if you provide only a last name, it must return all the customers with that last name. Because the procedure is complex, you need a unit test for each distinct case of its usage. Because you want to minimize the number of database calls, you want to execute the following command only once:

EXEC Readers.SelectCustomersByName @LastName = 'Larsen';

It is very important to verify everything: the number of result sets, the number of columns, their names and types, and all the values in the rows returned by the procedure. The reason is simple: any change can be a breaking change, and you need to determine whether you broke something when you make your changes. For more detail, read this article: "Close those Loopholes - Testing Stored Procedures".

I've used the approach described in that article for two years and it works great. Everything is verified automatically, so I never forget to test anything, and I only need one database call for all my checks, so it runs fast.
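The shape of that approach can be sketched as follows - this is an illustration, not the author's actual harness: sqlite3 stands in for SQL Server, and the table, data, and helper name are all made up. The key point survives the substitution: issue the query once, capture everything it returns, then run all the checks in memory.

```python
import sqlite3

# Toy database standing in for the real one.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE Customers (CustomerID INTEGER, FirstName TEXT, LastName TEXT)"
)
conn.executemany(
    "INSERT INTO Customers VALUES (?, ?, ?)",
    [(1, "Lars", "Larsen"), (2, "Nils", "Larsen")],
)

def run_and_snapshot(conn, sql, params):
    """Execute exactly once; return column names plus every row."""
    cur = conn.execute(sql, params)
    columns = [d[0] for d in cur.description]
    return columns, cur.fetchall()

columns, rows = run_and_snapshot(
    conn,
    "SELECT CustomerID, FirstName, LastName FROM Customers WHERE LastName = ?",
    ("Larsen",),
)

# One database call above; everything below is verified in memory:
# column count, column names, row count, and every value.
assert columns == ["CustomerID", "FirstName", "LastName"]
assert rows == [(1, "Lars", "Larsen"), (2, "Nils", "Larsen")]
print("all checks passed")
```

Comparing the whole snapshot against expected values means a renamed column or an extra row fails the test just as loudly as a wrong value would.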


Clean up after test driven development.


I like and practice test-driven development, but I have one reservation: all too often, when you think you are done, you have a huge number of trivial unit tests. That seems to be the whole idea behind test-driven development: every step in your progress is a unit test. However, there is no need to keep the whole history of your progress - all the tests that drove your development - for the whole life of your system (each of us humans had gills and a tail at some early stage of our development; we don't keep them forever, we undergo a refactoring). When you are done with your development, you only need the tests that might help you out in troubleshooting. Most likely you'll need to consolidate your unit tests in the way described in the previous section. Remember that test-driven development is supposed to consist of the following stages:

  1. Encounter a non-trivial problem.
  2. Write a unit test that fails.
  3. Develop some code so that the test passes.
  4. Refactor - test-driven development does not stop when your tests pass!

I think this refactoring should include not only your code but your unit tests too. Refactoring is done when your remaining unit tests are lean, so that the overhead of keeping them is justified by the benefit of having them.
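A minimal sketch of that test consolidation, with a plain Python function and made-up data standing in for the stored procedure: the step-by-step tests left over from TDD each made their own call and checked one small fact; after refactoring, one test makes one call and verifies the full result.

```python
def select_customers_by_name(customers, last_name):
    # Stand-in for the stored procedure under test.
    return [c for c in customers if c["LastName"] == last_name]

CUSTOMERS = [
    {"CustomerID": 1, "FirstName": "Lars", "LastName": "Larsen"},
    {"CustomerID": 2, "FirstName": "Nils", "LastName": "Larsen"},
    {"CustomerID": 3, "FirstName": "Ann", "LastName": "Berg"},
]

# Before: trivial leftovers from the TDD history, one call per fact.
def test_returns_a_list():
    assert isinstance(select_customers_by_name(CUSTOMERS, "Larsen"), list)

def test_returns_two_rows():
    assert len(select_customers_by_name(CUSTOMERS, "Larsen")) == 2

# After: the tests above are deleted and replaced by one test that
# makes a single call and verifies the complete snapshot.
def test_select_by_last_name():
    result = select_customers_by_name(CUSTOMERS, "Larsen")
    assert result == [
        {"CustomerID": 1, "FirstName": "Lars", "LastName": "Larsen"},
        {"CustomerID": 2, "FirstName": "Nils", "LastName": "Larsen"},
    ]

test_select_by_last_name()
print("consolidated test passed")
```

The consolidated test catches everything the trivial ones did (it is a list, it has two rows) plus every value, at the cost of a single call.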

Let me repeat myself: from where I sit, test-driven development can be a huge help if you are solving a complex problem. However, it can slow you down if the steps you are taking are too easy for you. I should probably write up an example of test-driven development, but it is getting late, so not this time.


Published Tuesday, June 23, 2009 10:59 PM by Alexander Kuznetsov



JeffS said:

What is your test harness, btw? Is it written in SQL/T-SQL, or is it at the application level (C, C++, C#...)?

June 24, 2009 8:17 AM

Alexander Kuznetsov said:

June 24, 2009 9:09 AM

Steven Feuerstein said:


I focus a lot on automated testing in the Oracle PL/SQL world (I wrote the first version of utPLSQL, which is "NUnit for PL/SQL" and am the architect of Quest Code Tester for Oracle, a commercial test automation tool).

I very much enjoyed reading your article, and learning from the experiences of someone practicing reasonably thorough, disciplined unit testing.

A few comments....

* I suggest prioritizing the automated testing of your DELIVERABLES. That is, sure and theoretically we should test every single unit - but you are right, we need to be practical. If your responsibility is to provide a program that delivers XYZ results to either the front end or some other backend unit, make sure to prove that overall delivery before you worry about testing the subunits that are called by the main program.

* Regarding the challenges of maintaining test code: actually, the bigger problem before you get to maintenance is that with most XUnit tools you simply have to write too much code. Most developers will never have the discipline or time to do this. Search out every and any possibility for generating the test code that you need. This will help you build more tests faster and greatly reduce the cost of maintaining those tests.

That's what I did with Code Tester: you describe the expected behavior of your subprograms, and it generates all the PL/SQL code needed to verify the behaviors.

Perhaps something similar can be done for T-SQL programs.


Steven Feuerstein

June 24, 2009 9:23 AM

Alexander Kuznetsov said:


When I was working with Oracle a few years ago, I truly enjoyed reading your books!

Having played with different libraries, I also think that NUNit's approach to unit testing is probably the most convenient one.

On automated tests of deliverables I'd like to add this: they verify how you implement your contract with your customers. As such, they typically live longer than the tests against internal modules, which can be refactored. Because they live longer, the investment in their development and maintenance is more likely to pay off.

I agree that with many tools "you simply have to write too much code".

I will read about Code Tester and see if there are any useful ideas/approaches that I can utilize - thanks for the link!

June 24, 2009 10:42 AM
New Comments to this post are disabled

About Alexander Kuznetsov

Alex Kuznetsov has been working with object-oriented languages, mostly C# and C++, as well as with databases, for more than a decade. He has worked with Sybase, SQL Server, Oracle and DB2. He regularly blogs, mostly about database unit testing, defensive programming, and query optimization. Alex has written a book entitled "Defensive Database Programming with Transact-SQL" and several articles. Currently he works as an agile developer.
