
Joe Chang

Supermicro motherboards and systems

I used to buy Supermicro exclusively for my own lab. Supermicro has always had a deep lineup of motherboards with almost every conceivable variation, and in particular offered the maximum memory and IO configurations desired for database servers. But from around 2006 I became too lazy to source the additional components necessary to complete a system, and switched to Dell PowerEdge tower servers.

Now I may reconsider, as neither Dell nor HP offers the right combination of PCI-E slots, nor do their chassis support the capability I am looking for. The two Supermicro motherboards of interest are the X9DRX+-F for 2-way Xeon E5-2600 series processors and the X9QR7-TF-JBOD for 4-way Xeon E5-4600 series processors.

Below is a comparison of the Dell, HP and Supermicro 2-way Xeon E5-2600 systems (or motherboards). Both the Dell and HP have PCI-E x16 slots. Unfortunately this is not particularly useful, as the only PCI-E SSD capable of using the full x16 bandwidth is the Fusion-io ioDrive Octal at over $90K.

                   Dell       HP          SuperMicro
                   T620       ML350p G8   X9DRX+-F
DIMM sockets       24         24          16
PCI-E 3.0 x16       4          3           0
PCI-E 3.0 x8        2          1          10
PCI-E 3.0 x4        1          4           0
PCI-E 2.0 x4        0          1           1

Below are the Dell and HP 4-way systems for the Xeon E5-4600, the HP 4-way Xeon E7 (Westmere-EX), and the Supermicro 4-way E5-4600 motherboard. It is apparent that neither the Dell nor the HP E5-4600 system is meant to fully replace the previous-generation 4-way E7 (Westmere-EX) systems, as both implement only half of the full set of PCI-E lanes.

                   Dell R820   HP DL560 Gen8   HP DL580 Gen7   SuperMicro
DIMM sockets       48          48              64              24
PCI-E 3.0 x16       2           2               0               7
PCI-E 3.0 x8        5*          3               6 (g2)          1
PCI-E 3.0 x4        0           0               0               0
PCI-E 2.0 x4        0           1               5               0

Each Xeon E5-2600 or E5-4600 series processor has 40 PCI-E gen 3 lanes, plus DMI, which is equivalent to x4 PCI-E gen 2. One processor needs to use its DMI for the south-bridge, but the others could expose theirs as a x4 gen 2 port. Of course, the full set of 160 gen 3 lanes in a 4-way system is only available if all processors are populated, but the same applies to the memory sockets. These systems are probably more suitable as VM consolidation servers. Hopefully there is a true database server in the works?
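
As a rough sanity check on those lane counts, here is a minimal back-of-envelope sketch in Python (assuming the usual ~985MB/s per gen 3 lane, per direction; the 40-lanes-per-socket figure is from above):

    # Aggregate PCI-E gen 3 bandwidth if every lane were brought out to a slot.
    # Assumes ~985 MB/s per lane per direction (8 GT/s with 128b/130b encoding).
    GEN3_MBPS_PER_LANE = 985
    LANES_PER_SOCKET = 40        # Xeon E5-2600/4600: 40 gen 3 lanes per processor

    for sockets in (2, 4):
        lanes = sockets * LANES_PER_SOCKET
        gbps = lanes * GEN3_MBPS_PER_LANE / 1000.0
        print(f"{sockets}-way: {lanes} gen 3 lanes -> ~{gbps:.0f} GB/s aggregate")

    # 2-way: 80 gen 3 lanes -> ~79 GB/s aggregate
    # 4-way: 160 gen 3 lanes -> ~158 GB/s aggregate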

Today, I am interested in maximum IO bandwidth with a uniform set of PCI-E slots. Maximum memory bandwidth is required to support this, but it is not absolutely essential to have maximum memory capacity.

The IO bandwidth plan is built around SSDs, because a 15K HDD starts at around $200 and provides about 200MB/s on the outer tracks, while a 128GB SSD can deliver 500MB/s for around $100. It would actually be easier to build a high-bandwidth system with PCI-E SSDs.

The Intel 910 400GB model is rated at 1GB/s for just over $2000, and the 800GB model does 2GB/s at the same IO bandwidth per dollar. The Fusion-io ioDrive2 Duo 2.4TB can do 3GB/s but costs $25K (or is it $38K?). The Micron P320h can also do 3GB/s, but is probably expensive, being based on SLC NAND.

The other option is 8 x 128GB SATA SSDs on a PCI-E SAS RAID controller. The LSI SAS 9260-8i can support 2.4GB/s with 8 SSDs. In theory 6 SSDs could sustain this, but I have not validated that. So one 9260-8i ($500) plus 8 x 128GB SATA SSDs ($100 each) means I can get 2.4GB/s for $1300, possibly less. I understand that the LSI SAS 9265-8i ($665) can support 2.8GB/s (or better?), but the LSI rep did not send me one when he said he would. LSI now has PCI-E 3.0 SAS controllers, the 9271 and 9286, but I do not have any yet.
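
A minimal sketch of why the controller, not the SSD count, is the ceiling here (the ~500MB/s per SATA SSD figure is from earlier, and 2.4GB/s is the 9260-8i figure above):

    # Aggregate SSD bandwidth vs. the 2.4 GB/s controller ceiling quoted above,
    # assuming ~500 MB/s sequential read per SATA SSD.
    ssd_mbps = 500
    controller_mbps = 2400

    for n in (6, 8):
        agg = n * ssd_mbps
        limit = "controller-limited" if agg >= controller_mbps else "SSD-limited"
        print(f"{n} SSDs: {agg} MB/s raw vs {controller_mbps} MB/s controller -> {limit}")

    # 6 SSDs: 3000 MB/s raw -> controller-limited (so 6 could, in theory, suffice)
    # 8 SSDs: 4000 MB/s raw -> controller-limited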

Bandwidth versus cost options
Fusion-io ioDrive2 365GB        910MB/s      $6K
Intel 910 400GB                1000MB/s      $2K
LSI + 8 SATA SSD               2400MB/s?     $1.3K?
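
A quick sketch of the cost per GB/s implied by the table above (the LSI + SATA figures carry the same question marks as in the table):

    # Cost per GB/s of sequential bandwidth, using the prices and rates quoted above.
    options = {
        "Fusion-io ioDrive2 365GB":         (910, 6000),   # (MB/s, $)
        "Intel 910 400GB":                  (1000, 2100),
        "LSI 9260-8i + 8 x 128GB SATA SSD": (2400, 1300),
    }

    for name, (mbps, price) in options.items():
        print(f"{name}: ~${price / (mbps / 1000.0):,.0f} per GB/s")

    # Fusion-io ioDrive2 365GB:         ~$6,593 per GB/s
    # Intel 910 400GB:                  ~$2,100 per GB/s
    # LSI 9260-8i + 8 x 128GB SATA SSD: ~$542 per GB/s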

To implement this strategy, the chassis should support many SATA/SAS devices, organized as 8 bays per 8 SAS lanes. Both the Dell T620 and HP ML350p support 32 2.5in SAS devices, but organized as 16 bays per dual SAS ports (x4 each?). So for my purposes, these systems are adequate to house SSDs for 2 adapters. It could also be pointed out that the 2.5in SAS bays are designed for enterprise-class 10K/15K HDDs, which are 15mm thick. SATA SSDs, on the other hand, are 9.3mm thick, designed to fit laptop HDD dimensions, and could be even thinner without the case.

I should point out that the Intel 910 400GB PCI-E SSD has 768GB of actual NAND, so about 50% of capacity is reserved; this should give very good write endurance for MLC. This is typical for enterprise-oriented SSDs. A typical consumer SSD has only about a 7% reserve: a device with 128GB of NAND in binary units (1GB = 1024^3 bytes) exposes 128GB of user capacity in decimal units (1GB = 10^9 bytes), and the difference is the reserve. So for production use, stay with the enterprise-oriented products.
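
The over-provisioning arithmetic above works out as follows (a minimal sketch; the 768GB/400GB figures are the Intel 910 numbers quoted above, and the consumer case is just the binary-vs-decimal GB difference):

    # Over-provisioning: reserve = 1 - user_capacity / raw_NAND_capacity.
    def reserve_pct(user, raw):
        return 100.0 * (1.0 - user / raw)

    # Intel 910: 400GB user capacity on 768GB of NAND.
    print(f"Intel 910 400GB:    ~{reserve_pct(400, 768):.0f}% reserved")

    # Consumer SSD: 128 binary GB of NAND (1GB = 1024^3 bytes) sold as
    # 128 decimal GB (1GB = 10^9 bytes) of user capacity.
    print(f"consumer 128GB SSD: ~{reserve_pct(128e9, 128 * 2**30):.1f}% reserved")

    # Intel 910 400GB:    ~48% reserved
    # consumer 128GB SSD: ~6.9% reserved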

Reply to RichB (see comment below):
First, this is a lab, not a production server, and I am paying for it myself, so $1-2K matters to me.
Let's start with a 2-way Xeon E5, the Dell T620 for example.
A reasonable IO target for this system is 5GB/s, based on 300MB/s per core. I can get this with 2 PCI-E cards that can do 2.5GB/s each, but the cheap card can only do 1GB/s, so I would need 5 of those. Plus I might like 2 RAID controllers connected to HDDs so I can do really fast local backups. Next I might like 2x10GbE or even an InfiniBand HBA. By this time I am out of slots. I might also like to run extreme IO tests, perhaps targeting 8GB/s.
So the x16 slots are wasting PCI-E lanes that I need for extra x8 slots. And I cannot afford the Fusion Octal, and Fusion will not lend me one long-term.
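
To make the slot math explicit, here is a small sketch of the card counts (the 16-core total for a 2-way E5-2600 is my assumption; the per-card rates are the ones discussed above):

    # IO target and card count for a 2-way box: target = cores x 300 MB/s per core.
    import math

    cores = 16                              # assumed: 2 x 8-core E5-2600
    target_gbps = cores * 0.300             # -> 4.8 GB/s
    card_rates = {
        "Intel 910 400GB (1.0 GB/s)":   1.0,
        "LSI + 8 SATA SSD (2.5 GB/s)":  2.5,
    }

    print(f"target: ~{target_gbps:.1f} GB/s")
    for name, rate in card_rates.items():
        print(f"  {name}: {math.ceil(target_gbps / rate)} cards")

    # target: ~4.8 GB/s
    #   Intel 910 400GB (1.0 GB/s): 5 cards
    #   LSI + 8 SATA SSD (2.5 GB/s): 2 cards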

Next, the Intel 910 400GB is $2100, while the Fusion ioDrive2 365GB is $5-6K (sorry, how much is £1 these days?); both are about the same in bandwidth. The Fusion is about 50% better in random read, and 10X+ better in random write. Both cite 65us latency. If I had only a single card, I am sure I could see the difference between the Intel and the Fusion. But if I were to fill the empty slots with PCI-E SSDs, I am inclined to think I would exceed the ability of SQL Server to drive random IO.

I tested this once with OCZ RevoDrives, but OCZ cannot provide server system support and OCZ uses the Marvell PCI-E to SATA controller, so I stopped using OCZ PCI-E cards. I still use OCZ SATA SSDs, just connected to LSI SAS HBAs. Intel uses an LSI controller, which has much better interrupt handling. While Fusion may be better at the individual card level, I am not willing to spend the extra money on a lab system. And I am using SATA SSDs because they are even cheaper than the Intel 910.

Realistically, I need to replace my lab equipment every 1-2 years to stay current, so I treat it as disposable, not an investment.

Published Sunday, October 28, 2012 9:47 PM by jchang


Comments

 

RichB said:

I guess you could go for a pair of the cheap and small FusionIO Duo2s - the 350ish GB are only about £3k, the 1.2s are about double that.

Also, what is the relevance of whether they can actually push the full x16 in your 3rd para?

Surely it's better to spend a few extra quid on a decent full-size bus - if your server will last you 3-5 years, maybe you will be looking at some new SSD kit before you want to replace the mobo?

Of course, I also think you should be looking at more than sheer MB/s - or does seek time not have relevance for your application?

October 29, 2012 1:28 PM
 

mrkrad said:

Check out the M5014 on eBay - $66 is the best I've found for an LSI 9260-8i. Plug 8 drives into each (of two). Samsung 830s are very solid - Xtremesystems got 6PB of writes before failure.

External SAS target perhaps? Or direct-connect SAS to SAS? Could be very cheap.

ProSafe XS* series 8-, 10- and 12-port 10GBase-T switches will be about $100/port shortly.

February 5, 2013 7:14 PM


About jchang

Reverse engineering the SQL Server Cost Based Optimizer (Query Optimizer), NUMA System Architecture, performance tools developer - SQL ExecStats, mucking with the data distribution statistics histogram - decoding STATS_STREAM, Parallel Execution plans, microprocessors, SSD, HDD, SAN, storage performance, performance modeling and prediction, database architecture, SQL Server engine
