A couple of days ago, Aaron Bertrand posted about
a method for calculating medians in SQL Server 2005 using the ROW_NUMBER function in conjunction with the COUNT aggregate. This method (credited to Itzik Ben-Gan) is interesting, but I discovered an even better way to attack the problem in
Joe Celko's Analytics and OLAP in SQL.
Rather than using a COUNT aggregate in conjunction with the ROW_NUMBER function, Celko's method uses ROW_NUMBER twice: once with an ascending sort, and again with a descending sort. The output rows can then be matched based on the ascending row number being within +/- 1 of the descending row number. This becomes clearer with a couple of small examples:
First table (even number of rows):

    Value   Ascending   Descending
    A       1           4
    B       2           3
    C       3           2
    D       4           1

Second table (odd number of rows):

    Value   Ascending   Descending
    A       1           5
    B       2           4
    C       3           3
    D       4           2
    E       5           1
In the first table (even number of rows), the median rows are B and C. These can be matched based on [Ascending Column] IN ([Descending Column] + 1, [Descending Column] - 1). In the second table (odd number of rows), the median row is C, which is matched where [Ascending Column] = [Descending Column]. Note that in the second table, the match criteria for the first table does not apply, so the generic expression to match either case is the combination of the two: [Ascending Column] IN ([Descending Column], [Descending Column] + 1, [Descending Column] - 1).
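The matching rule is easy to sketch outside of SQL. Here is a minimal Python illustration of the same idea; the function name and sample values are mine, not from the post:

```python
def median_rows(values):
    """Return the middle row(s) using Celko's double-row-number match."""
    ordered = sorted(values)
    n = len(ordered)
    result = []
    for i, v in enumerate(ordered):
        row_asc = i + 1    # ROW_NUMBER() with an ascending sort
        row_desc = n - i   # ROW_NUMBER() with a descending sort
        # Keep rows where the ascending number is within +/- 1
        # of (or equal to) the descending number.
        if row_asc in (row_desc, row_desc - 1, row_desc + 1):
            result.append(v)
    return result

# Odd count: the single middle row matches where RowAsc = RowDesc.
print(median_rows([10, 20, 30, 40, 50]))   # [30]
# Even count: the two middle rows match within +/- 1.
print(median_rows([10, 20, 30, 40]))       # [20, 30]
```

Averaging the returned rows then yields the median in both cases.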
We can apply this logic within the AdventureWorks database to find the median of the "TotalDue" amount in the Sales.SalesOrderHeader table, for each customer:
SELECT
    CustomerId,
    AVG(TotalDue)
FROM
(
    SELECT
        CustomerId,
        TotalDue,
        ROW_NUMBER() OVER (
            PARTITION BY CustomerId
            ORDER BY TotalDue ASC, SalesOrderId ASC) AS RowAsc,
        ROW_NUMBER() OVER (
            PARTITION BY CustomerId
            ORDER BY TotalDue DESC, SalesOrderId DESC) AS RowDesc
    FROM Sales.SalesOrderHeader SOH
) x
WHERE
    RowAsc IN (RowDesc, RowDesc - 1, RowDesc + 1)
GROUP BY CustomerId
ORDER BY CustomerId;
The equivalent logic using Itzik Ben-Gan's method follows:
SELECT
    CustomerId,
    AVG(TotalDue)
FROM
(
    SELECT
        CustomerId,
        TotalDue,
        ROW_NUMBER() OVER (
            PARTITION BY CustomerId
            ORDER BY TotalDue) AS RowNum,
        COUNT(*) OVER (
            PARTITION BY CustomerId) AS RowCnt
    FROM Sales.SalesOrderHeader
) x
WHERE
    RowNum IN ((RowCnt + 1) / 2, (RowCnt + 2) / 2)
GROUP BY CustomerId
ORDER BY CustomerId;
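SQLite (3.25 and later) supports the same ROW_NUMBER and COUNT(*) OVER window functions, so both formulations can be sanity-checked outside SQL Server. This is a sketch against a made-up orders table, not AdventureWorks; the table name and sample rows are mine:

```python
import sqlite3

# Hypothetical stand-in for Sales.SalesOrderHeader; assumes a SQLite
# build with window-function support (3.25+, bundled with Python 3.8+).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (CustomerId INT, SalesOrderId INT, TotalDue REAL)")
rows = [(1, 1, 10.0), (1, 2, 20.0), (1, 3, 30.0),                 # odd count
        (2, 4, 10.0), (2, 5, 20.0), (2, 6, 30.0), (2, 7, 40.0)]  # even count
con.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)

# Celko: match ascending and descending row numbers within +/- 1.
celko = con.execute("""
    SELECT CustomerId, AVG(TotalDue)
    FROM (
        SELECT CustomerId, TotalDue,
               ROW_NUMBER() OVER (PARTITION BY CustomerId
                                  ORDER BY TotalDue ASC, SalesOrderId ASC) AS RowAsc,
               ROW_NUMBER() OVER (PARTITION BY CustomerId
                                  ORDER BY TotalDue DESC, SalesOrderId DESC) AS RowDesc
        FROM orders) x
    WHERE RowAsc IN (RowDesc, RowDesc - 1, RowDesc + 1)
    GROUP BY CustomerId ORDER BY CustomerId""").fetchall()

# Ben-Gan: pick the middle row number(s) from the partition count.
ben_gan = con.execute("""
    SELECT CustomerId, AVG(TotalDue)
    FROM (
        SELECT CustomerId, TotalDue,
               ROW_NUMBER() OVER (PARTITION BY CustomerId ORDER BY TotalDue) AS RowNum,
               COUNT(*) OVER (PARTITION BY CustomerId) AS RowCnt
        FROM orders) x
    WHERE RowNum IN ((RowCnt + 1) / 2, (RowCnt + 2) / 2)
    GROUP BY CustomerId ORDER BY CustomerId""").fetchall()

print(celko)    # [(1, 20.0), (2, 25.0)]
print(ben_gan)  # same result
```

Customer 1 (three orders) gets the single middle value, 20.0; customer 2 (four orders) gets the average of the two middle values, 25.0, and both queries agree.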
Taking a look at the estimated execution plans for these two queries, we might believe that Ben-Gan's method is superior: Celko's algorithm requires an expensive intermediate sort operation and has an estimated cost of 4.96, compared to 3.96 for Ben-Gan's.
Remember that these are merely estimates. And as it turns out, this is one of those times that the Query Optimizer's cost estimates are totally out of line with the reality of what happens when you actually run the queries. Although the performance difference is not especially noticeable on a set of data as small as that in Sales.SalesOrderHeader, check out the STATISTICS IO output. Celko's version does 703 logical reads; Ben-Gan's does an astonishing 140,110!
There is a good lesson to be learned from this: Cost-based optimization is far from perfect! Never completely trust what estimates tell you; they've come a long way, but clearly there is still some work to do in this area. The only way to actually determine that one query is better than another is to run it against a realistic set of data and look at how much IO and CPU time is actually used.
In this case, Ben-Gan's query probably should perform better than Celko's. It seems odd that the Query Processor can't collect the row counts at the same time it processes the row numbers. Regardless, as of today this is the best way to solve this problem... Not that I've ever needed a median in any production application I've worked on. But I suppose that's beside the point!