THE SQL Server Blog Spot on the Web


Joe Chang

New Seagate SSD and Hard Disks

Seagate today announced a near complete overhaul of their enterprise product line.
This includes a second generation of SSDs, now with either SAS or SATA interfaces.
The first generation Pulsar SSD only supported the SATA interface.
The new 2.5in 15K and 10K hard drive models have higher capacity.
The 2.5in 7.2K hard drive was upgraded to 1TB last month?
The 7.2K 3.5in is now available up to 3TB.
All models support 6Gbps.

Pulsar SSD (SAS/SATA interface, 2.5in FF)
The new second generation Seagate Pulsar SSD comprises two product lines.
The Pulsar XT.2 is based on SLC NAND at 100, 200 and 400GB capacities with SAS interface only.
The Pulsar.2 is based on MLC NAND at 100, 200, 400 and 800GB capacities available in both SAS and SATA interfaces.
Performance specifications cited for the Pulsar XT.2 are 360MB/s read and 300MB/s write
at 128KB, equivalent to sequential IO in hard disks.
Random 4KB IO rates are 48K IOPS read and 22K IOPS write.
Performance specifications were not cited for the Pulsar.2.
Based on other vendors with both SLC and MLC product lines,
the expectation is that the MLC model should have comparable read performance.
Write performance might be less than the SLC model, but still adequate for almost all requirements.
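As a back-of-envelope check (my arithmetic, not Seagate's), the IOPS and MB/s figures above can be converted into each other given the transfer size:

```python
# Convert between IOPS and MB/s at a given transfer size, to sanity-check
# the Pulsar XT.2 numbers quoted above. The conversion is pure arithmetic;
# the spec figures are Seagate's.

KB = 1024

def iops_to_mb_per_s(iops, io_size_kb):
    """Throughput implied by an IOPS rate at a given transfer size."""
    return iops * io_size_kb * KB / 1_000_000  # decimal MB, as vendors quote

def mb_per_s_to_iops(mb_per_s, io_size_kb):
    """IOPS implied by a throughput figure at a given transfer size."""
    return mb_per_s * 1_000_000 / (io_size_kb * KB)

# 48K random 4KB reads works out to ~197 MB/s -- random IO is still
# bandwidth-limited well below the 360 MB/s sequential figure.
print(iops_to_mb_per_s(48_000, 4))   # ~196.6
# 360 MB/s at 128KB corresponds to ~2750 large-block IOPS.
print(mb_per_s_to_iops(360, 128))    # ~2747
```
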

Savvio 15K 2.5in HDD
The Savvio 15K.3 supersedes the 15K.2 product line, with 300GB and 146GB capacities
replacing the 146GB and 73GB models in the 15K.2 line.
Sequential transfer rate is 202MB/s on the outer tracks and 151MB/s on the inner tracks,
up from 160MB/s and 122MB/s in the 15K.2.
Average read and write seek time is reduced to 2.6/3.1ms, down from 2.9/3.3ms,
resulting in slightly improved random IO performance.

Savvio 10K 2.5in HDD
The Savvio 10K.5 product line features 300, 450, 600 and 900GB capacities (300GB per platter),
up from 450 and 600GB in the 10K.4.
Sequential transfer rate is 168 to 93MB/s, up from 141 to 75MB/s.
Average seek time is 3.4/3.8ms (3.7/4.1 for the 900GB model), down from 3.9/4.5 ms in the earlier model.
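For readers who want to see where "slightly improved random IO" comes from: a common rule-of-thumb model (my sketch, not from the spec sheets) estimates queue-depth-1 random read IOPS as one IO per (average seek + average rotational latency), where average rotational latency is half a rotation:

```python
# Back-of-envelope random read IOPS for a hard disk: one IO takes roughly
# an average seek plus half a rotation. Ignores transfer time, queuing,
# and controller overhead, so treat the outputs as rough estimates.

def rotational_latency_ms(rpm):
    # average rotational latency = time for half a rotation
    return 60_000 / rpm / 2

def random_read_iops(avg_seek_ms, rpm):
    # queue-depth-1 estimate: 1000 ms per second / ms per IO
    return 1000 / (avg_seek_ms + rotational_latency_ms(rpm))

# Savvio 15K.3 (2.6ms read seek) vs 15K.2 (2.9ms): ~217 vs ~204 IOPS
print(random_read_iops(2.6, 15000))
print(random_read_iops(2.9, 15000))
# Savvio 10K.5 (3.4ms read seek): ~156 IOPS
print(random_read_iops(3.4, 10000))
```

The model makes the "slightly improved" claim concrete: shaving 0.3ms of seek on a 15K drive buys only about 6% more random IOPS.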

Constellation 7.2K 3.5in HDD
The Constellation ES.2 enterprise grade 7200RPM 3.5in 3TB drive is available in both SAS and SATA interfaces.
Sequential transfer rate is 155MB/s. The previous generation ranged from 500GB to 2TB.

Constellation 7.2K 2.5in HDD
The Constellation.2 enterprise grade 7200RPM 2.5in product line features 250GB, 500GB and 1TB capacities,
in both SATA and SAS interfaces (SATA only for 250GB).

The 3.5in 15K and 10K product lines have not been refreshed. It is unclear whether there will be future models for these product lines.

Other Vendors
Intel released the SSD 510 this month (March 2011); it shows as in stock at some web stores.
SATA 6Gbps interface, 120 and 250GB capacities.
Sequential read/write 500/315MB/s. Random read/write 20K/8K IOPS (this seems low).

The OCZ Vertex 3 regular and Pro series, with SATA 6Gbps interface and 2.5in form factor,
have been in the news, but there is no information on the OCZ website.
Correction: the Vertex 3 is not listed under the products section, but it is described in the press release section.

Tom's Hardware lists the Vertex 3 with the SandForce SF-2281 controller at 550/525MB/s sequential, 60K IOPS 4K random, and a $499 price for the 240GB model. The Vertex 3 Pro, with the SF-2582 controller, is rated at 550/500MB/s sequential and 70K IOPS, with the 200GB model priced at $775.
The OCZ Z-Drive R3 with PCI-E gen 2 interface has been announced, probably available in April or May.
Sequential Read/Write at 1000/900MB/s.
Random at 135K IOPS.

No pricing is available on the new Seagate drives. The bare drive pricing (not from system vendors) on the current Seagate 2.5in 15K 146GB is about $200, and the 2.5in 10K 600GB about $400. Given that the consumer grade OCZ Vertex 2 is $430 for the 240GB and $215 for the 120GB, my thinking is that the 15K 146GB drives are no longer viable, even considering the higher price of the Vertex 3 Pro. The 10K 600GB is only barely viable from the capacity point of view. So the new higher capacity drives really need to come in at comparable price points to the current models.
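The viability argument above is essentially a $/GB comparison; a quick sketch using the street prices quoted in this post (prices are approximate and will drift):

```python
# Rough $/GB comparison from the street prices quoted above.
# (price in USD, capacity in GB)
drives = {
    "Seagate 15K 146GB":      (200, 146),
    "Seagate 10K 600GB":      (400, 600),
    "OCZ Vertex 2 120GB":     (215, 120),
    "OCZ Vertex 2 240GB":     (430, 240),
    "OCZ Vertex 3 Pro 200GB": (775, 200),  # Tom's Hardware figure
}

for name, (price, gb) in drives.items():
    print(f"{name}: ${price / gb:.2f}/GB")
```

The 15K 146GB lands at about $1.37/GB against $1.79/GB for consumer MLC that vastly outperforms it on random IO, while the 10K 600GB at roughly $0.67/GB still wins on capacity per dollar.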

All the Seagate 2.5in devices, SSD and HDD, are 15mm in height, the standard for server storage bays. The OCZ SSDs follow the common notebook 9.3mm height form factor. Going forward, perhaps there will be a 2U storage enclosure that can accommodate 36 x 9.3mm bays in place of the current 24 x 15mm bays. But an even better idea is a new DIMM-sized form factor with the connector on the short side, so that a 1U storage enclosure can accommodate 36 of these devices.
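The bay arithmetic behind that suggestion, assuming drive thickness is the binding dimension across the enclosure front and ignoring carrier and gap overhead (my simplification):

```python
# 2.5in drives mount on edge across the front of a 2U enclosure, so the
# total width consumed is roughly drive thickness times bay count.
current  = 24 * 15.0  # 24 bays of 15mm enterprise drives
proposed = 36 * 9.3   # 36 bays of 9.3mm notebook-height drives
print(current, proposed)  # 360.0 and ~334.8
```

So 36 of the thinner drives actually take slightly less frontal width than 24 of the 15mm drives, which is why the 36-bay layout seems plausible.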

Seagate first generation Pulsar
Someone brought up the point that Pulsar, the first generation Seagate SSD, was vaporware. My guess would be that Seagate was late to market without any performance or price advantages over products already available, did not get any design wins with major system and storage players, and hence decided not to launch. So there is concern that the second generation Pulsar might also be vaporware. Stay tuned.

Published Tuesday, March 15, 2011 10:53 PM by jchang


Greg Linwood said:

Interesting update Joe - why don't you cover Fusion-io?

March 16, 2011 8:29 AM

jchang said:

Because this is an update for very recent news. Fusion-IO came out with PCI-E gen 2 products last year(?), way before anyone else. But I don't believe they have released anything very recently. Last fall, I did a survey of SSDs, from NAND to controllers to SSDs, which mentioned Fusion-IO.

Now other players are coming out with PCI-E gen 2 or SAS/SATA 6Gbps SSDs.

March 16, 2011 8:53 AM

Greg Linwood said:

Fair enough.. Fusion-io recently released "direct cache" which isn't a hardware device but is a pretty big innovation. It's basically a software driver that allows a Fusion-io device to be used as an I/O cache, effectively enhancing the size of a storage system's cache by using the Fusion-io's flash locally..

March 16, 2011 4:45 PM

jchang said:

Well, on the subject of Fusion-IO: they had a paper at the Flash Memory Summit arguing that, given main memory is mostly used as a disk cache, it would be better implemented in NAND flash instead of DRAM.

My take is that true memory, i.e., not the buffer cache, should be moved closer to the processor. Then the buffer cache, regardless of whether it is DRAM or flash, becomes block storage instead of byte addressable.

March 16, 2011 10:54 PM

Greg Linwood said:

I haven't seen the paper but maybe by "better" they mean "more"? because you can certainly get a lot more memory closer to the processor through flash than regular memory - eg, Fusion-io offers 5TB on a single PCIE presently.

March 17, 2011 3:34 PM

Tom said:

Joe, I like your balanced approach; you cover all vendors and not just Fusion-IO like other reviewers do.

Keep up the good work, I read all your blogs

March 17, 2011 4:40 PM

Greg Linwood said:

Tom - I agree it's good to get a balanced article covering all vendors, which is why I asked why he left out Fusion-io :)

March 17, 2011 5:07 PM

jchang said:

Greg, I meant bring memory really, really close. Let's revisit. Most of memory today is used for caching. Only a small amount is used for program code and "data structures" for managing the program. So the object is not to get more "memory", but to get memory closer to the processor core than it is today, even if it means giving up memory size.

The capacity given up would be implemented as block access structures at comparable latency, or even a little more.

So by bringing memory closer to the processor, what I am really thinking of is having the memory on the processor package itself. The Intel idea is to mount the DRAM chip directly on top of the processor die. The connection is made with through-silicon connections, meaning the path between memory and processor could be many thousands of bits wide.

March 17, 2011 6:34 PM


jchang said:

What I meant is described in Figure 7 below

March 28, 2011 9:33 PM



About jchang

Reverse engineering the SQL Server Cost Based Optimizer (Query Optimizer), NUMA System Architecture, performance tools developer - SQL ExecStats, mucking with the data distribution statistics histogram - decoding STATS_STREAM, Parallel Execution plans, microprocessors, SSD, HDD, SAN, storage performance, performance modeling and prediction, database architecture, SQL Server engine
