256KB Sequential Reads
In my two previous posts (1, 2), I highlighted the fact that while file fragmentation had a huge adverse performance impact on directly attached storage (DAS), it did not have much, if any, impact on the drive presented from a high end enterprise class disk array. That observation was derived from running disk I/O tests with 1KB sequential writes.
What about other disk I/O workloads? In this post, let's look at the test results from running 256KB sequential reads on the same DAS and SAN drives.
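For concreteness, a 256KB sequential read pass can be sketched as follows. This is a minimal Python illustration, not the actual benchmark tool used in these tests; the file name and sizes are my own, and a real disk benchmark would bypass the OS page cache (for example via O_DIRECT) to measure the drive rather than memory:

```python
import os
import time

READ_SIZE = 256 * 1024  # 256KB per read, matching the test workload

def sequential_read(path, read_size=READ_SIZE):
    """Read the file front to back in fixed-size chunks.
    Returns (bytes_read, elapsed_seconds)."""
    total = 0
    start = time.perf_counter()
    with open(path, "rb") as f:
        while True:
            chunk = f.read(read_size)
            if not chunk:
                break
            total += len(chunk)
    return total, time.perf_counter() - start

if __name__ == "__main__":
    # Small stand-in file; the original tests used a 10GB file.
    path = "testfile.bin"
    with open(path, "wb") as f:
        f.write(os.urandom(4 * READ_SIZE))
    nbytes, secs = sequential_read(path)
    print(f"read {nbytes} bytes sequentially in {secs:.4f}s")
    os.remove(path)
```

Note that this single-threaded loop corresponds to a load level of one outstanding read; the tests below vary the load level.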
To push the behavior to an even greater extreme, I also ran the tests with the test file fragmented into 60,000 fragments, each 128KB in size. In total, I checked three test scenarios:
- The 10GB file was created on a freshly formatted empty drive, thus without any fragmentation at all.
- The freshly formatted drive was first filled to the full capacity with 2MB files, and then some of these 2MB files were randomly deleted to make sufficient room for the 10GB test file. In this case, the 10GB test file was fragmented into more than 3000 non-contiguous fragments.
- The freshly formatted drive was first filled to the full capacity with 128KB files, and then some of these 128KB files were randomly deleted to make sufficient room for the 10GB test file. In this case, the 10GB test file was fragmented into more than 60,000 non-contiguous fragments.
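The fill-then-delete technique used in the last two scenarios can be sketched as follows. This is a scaled-down illustration under my own assumptions (the function name, file counts, and sizes are hypothetical, and it works in a temporary directory rather than filling a real drive); the idea is that randomly scattered gaps force a subsequently created large file into non-contiguous fragments:

```python
import os
import random
import tempfile

def fragment_layout(target_dir, filler_count, filler_size,
                    delete_fraction=0.5, seed=42):
    """Fill target_dir with filler files, then randomly delete a
    fraction of them, leaving scattered free gaps on the volume.
    Returns the number of filler files remaining."""
    random.seed(seed)
    paths = []
    for i in range(filler_count):
        p = os.path.join(target_dir, f"filler_{i:05d}.bin")
        with open(p, "wb") as f:
            f.write(os.urandom(filler_size))
        paths.append(p)
    # Randomly delete some filler files to open non-contiguous gaps.
    doomed = random.sample(paths, int(filler_count * delete_fraction))
    for p in doomed:
        os.remove(p)
    return len(paths) - len(doomed)

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as d:
        remaining = fragment_layout(d, filler_count=10, filler_size=64 * 1024)
        print(remaining)  # 5 filler files remain, with gaps between them
```

On a real drive the filler files would total the volume's capacity (2MB files for the ~3000-fragment case, 128KB files for the ~60,000-fragment case), and the 10GB test file would then be created into the freed gaps.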
The following chart shows the results of many repeated tests, applying 256KB sequential reads at various load levels.
Clearly, as with the 1KB sequential write tests, severe file fragmentation had no impact on the disk I/O performance of 256KB sequential reads on this drive presented from a high end enterprise class fibre channel disk array.
Note that the disk array had a cache that was much larger than 10GB. You may say that's cheating and not a fair comparison with directly attached storage. Well, maybe, but that's how high end disk arrays work.