Wednesday, March 28, 2012

RAID 5 beats RAID 10

Can I get some feedback on these results? We were having some serious
IO issues according to PerfMon so I really pushed for RAID 10. The
results are not what I expected.

I have 2 identical servers.

Hardware:
PowerEdge 2850
2 dual-core Xeon 2800 MHz
4GB RAM
Controller Cards: Perc4/DC (2 arrays), Perc4e/Di (1 array)

PowerVault 220S
Each Array consisted of 6-300 GB drives.

Server 1 = Raid 10
3, 6-disk arrays (~838 GB each)

Server 2 = Raid 5
3, 6-disk arrays (~1360 GB each)

Test                             Winner    % Faster
SQL Server - Update              RAID 5    13
Heavy ETL                        RAID 5    16
SQLIO - Rand Write               RAID 10   40
SQLIO - Rand Read                RAID 10   30
SQLIO - Seq Write                RAID 5    15
SQLIO - Seq Read                 RAID 5    Mixed
Disktt - Seq Write               RAID 5    18
Disktt - Seq Read                RAID 5    2000
Disktt - Rand Read               RAID 5    62
Pass Mark - mixed                RAID 10   Varies
Pass Mark - Simulate SQL Server  RAID 5    1%

I have much more detail than this if anyone is interested.|||Are you absolutely sure the disk write cache on both
machines was set the same?

RAID 10 will always outperform RAID 5 on read performance in a real
situation because it has two copies of the data it can read concurrently. When
writing to disk, RAID 5 needs to read as well in order to calculate the parity.
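The write path Tony describes can be sketched as a toy model in Python (a sketch of textbook RAID behaviour, not of the PERC controllers specifically): a small RAID 5 write is a read-modify-write costing four IOs, while a mirrored write costs two.

```python
# Toy IO-count model of the RAID 5 small-write penalty (read-modify-write).
# Updating a single block:
#   RAID 5:  read old data + read old parity, then write new data + new parity
#   RAID 10: write the block to both mirror copies
def small_write_ios(level: str) -> dict:
    if level == "raid5":
        return {"reads": 2, "writes": 2}   # 4 IOs per logical write
    if level == "raid10":
        return {"reads": 0, "writes": 2}   # 2 IOs per logical write
    raise ValueError(level)

# The new parity can be computed without touching the other data disks:
#   new_parity = old_parity XOR old_data XOR new_data
# which is exactly why the old data and old parity must be read first.
def new_parity(old_parity: int, old_data: int, new_data: int) -> int:
    return old_parity ^ old_data ^ new_data
```

A big controller cache can hide much of this, which is one reason benchmark results diverge from the textbook picture.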

There is just so much to doing the comparison...

--
Tony Rogerson
SQL Server MVP
http://sqlserverfaq.com - free video tutorials

"Dave" <daveg.01@.gmail.com> wrote in message
news:1146510578.745595.255290@.g10g2000cwb.googlegroups.com...
> RAID 5 beats RAID 10
> Can I get some feedback on these results? We were having some serious
> IO issues according to PerfMon so I really pushed for RAID 10. The
> results are not what I expected.
> I have 2 identical servers.
> Hardware:
> PowerEdge 2850
> 2 dual-core Xeon 2800 MHz
> 4GB RAM
> Controller Cards: Perc4/DC (2 arrays), Perc4e/Di (1 array)
> PowerVault 220S
> Each Array consisted of 6-300 GB drives.
> Server 1 = Raid 10
> 3, 6-disk arrays (~838 GB each)
> Server 2 = Raid 5
> 3, 6-disk arrays (~1360 GB each)
> Test Winner % Faster
> SQL Server - Update RAID 5 13
> Heavy ETL RAID 5 16
> SQLIO - Rand Write RAID 10 40
> SQLIO - Rand Read RAID 10 30
> SQLIO - Seq Write RAID 5 15
> SQLIO - Seq Read RAID 5 Mixed
> Disktt - Seq Write RAID 5 18
> Disktt - Seq Read RAID 5 2000
> Disktt - Rand Read RAID 5 62
> Pass Mark - mixed RAID 10 Varies
> Pass Mark -
> Simulate SQL Server RAID 5 1%
> I have much more detail than this if anyone is interested.|||All the arrays have the same settings

Read Cache: Adaptive Read Ahead
Write Cache: Write Back
Cache Policy: Cache I/O|||If you are using Dell hardware with Perc controllers - Read this:

http://forums.2cpu.com/showpost.php...26&postcount=11

I will be testing this over the next day to see if it explains my
overall bad disk performance.

"Dave" <daveg.01@.gmail.com> wrote in message
news:1146510578.745595.255290@.g10g2000cwb.googlegroups.com...
> RAID 5 beats RAID 10
> Can I get some feedback on these results? We were having some serious
> IO issues according to PerfMon so I really pushed for RAID 10. The
> results are not what I expected.
> I have much more detail than this if anyone is interested.|||Per Schjetne wrote:
> If you are using Dell hardware with Perc controllers - Read this:
> http://forums.2cpu.com/showpost.php...26&postcount=11
> I will be testing this during the next day to see if this explains my
> overall bad diskperformance.

I may be missing something, but isn't "write through" slower than "write
back" anyway? I mean, with write through the data has to be written
twice with RAID 10 before the IO call returns; I'm not sure whether the
two writes can happen in parallel - if not, you're at twice the time. But with
write back the controller can put the data into its internal cache (as
long as there is space left), the IO call can return, and the controller can
write the data out in the background.
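That trade-off can be mocked up as a toy latency model (the numbers below are illustrative assumptions, not measurements from this thread):

```python
# Toy model of when the IO call returns under each controller cache policy.
DISK_WRITE_MS = 5.0    # assumed per-disk write latency
CACHE_WRITE_MS = 0.1   # assumed controller-cache insert latency

def io_return_time(policy: str, disk_writes: int, parallel: bool) -> float:
    """Milliseconds until the caller's IO call returns."""
    if policy == "write_back":
        # Caller waits only for the cache insert; disks are flushed later.
        return CACHE_WRITE_MS
    if policy == "write_through":
        # Caller waits for every disk write, overlapped or one after another.
        return DISK_WRITE_MS if parallel else DISK_WRITE_MS * disk_writes
    raise ValueError(policy)
```

Under these assumptions a mirrored (RAID 10) write costs 10 ms with write through and serial mirror writes, 5 ms if the mirrors are written in parallel, and 0.1 ms with write back - so the cache setting can easily swamp the RAID-level difference.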

Regards

robert|||I have the exact same situation. We had a PowerEdge 2800 with RAID 5,
when we got a new one I pushed hard for RAID 10, and then when I ran
performance tests for our database it turned out to be not quite as
good as the RAID 5.|||I can confirm the same thing. We have 2 x PowerEdge 2800 with the disks on a
PowerVault 220S. I have reconfigured one of the servers to RAID 10 and the
disk performance went slightly down. I used ATTO Disk Benchmark for testing.
I also ran some test procedures in SQL Server, and they confirmed the same
thing.

"sql_server_user" <kaioptera@.gmail.com> wrote in message
news:1147186969.361868.284160@.i39g2000cwa.googlegroups.com...
>I have the exact same situation. We had a PowerEdge 2800 with RAID 5,
> when we got a new one I pushed hard for RAID 10, and then when I ran
> performance tests for our database it turned out to be not quite as
> good as the RAID 5.|||In theory, should this happen? Does anyone know of any published
benchmarks that compare Raid 5 to Raid 10 while holding the number of
disks constant?|||Dave, I feel you should read Kimberly L. Tripp's response more carefully.
Her response is quite to the point. The performance comparison is not based
on the same number of physical disks; it is based on the same usable
capacity, using the same physical drives but a different number of them. Of
course, if you compare on the same number of physical drives, you will
get the numbers you reported, but that is just not the way the
performance is usually assessed.

"Dave" <daveg.01@.gmail.com> wrote in message
news:1147443352.693450.119150@.j73g2000cwa.googlegroups.com...
> In theory, should this happen? Does anyone know of any published
> benchmarks that compare Raid 5 to Raid 10 while holding the number of
> disks constant?|||I understood her post, I just don't think that the "current way" is
a logical or scientific way to analyze Raid. I understand the fault
tolerance and Degradation/Rebuilding benefits of Raid 10. However, for
performance reasons alone, it doesn't appear to be justified.

I admit my testing is inconclusive. I wish I had the opportunity to
conduct more tests and see how performance varies with the number of
disks in the array.

It would also be interesting to repeat the tests on different hardware.|||My opinion is that this only highlights the fact that *general*
guidelines will not always apply.

What we have here is a couple of reports that RAID 10 is slower than
RAID 5 for the database in question. The vast majority of expert reports
that I have read (including the vendor of our medical database) is that
*IN GENERAL* RAID 10 is faster than RAID 5 for databases. Nowhere have I
ever seen the statement that it is *ALWAYS* faster.

I don't doubt that the posters are reporting accurate information, I
just don't see where it means that RAID 5 is *ALWAYS* faster than RAID
10 any more than the opposite is true...

Regards,
Hank Arnold

Dave wrote:
> I understood her post, I just don't think that the "current way" is
> a logical or scientific way to analyze Raid. I understand the fault
> tolerance and Degradation/Rebuilding benefits of Raid 10. However, for
> performance reasons alone, it doesn't appear to be justified.
> I admit my testing is inconclusive. I wish I had the opportunity to
> conduct more tests and see how performance varies with the number of
> disks in the array.
> It would also be interesting to repeat the tests on different hardware.|||Hank Arnold wrote:
> My opinion is that this only highlights the fact that *general*
> guidelines will not always apply.
> What we have here is a couple of reports that RAID 10 is slower than
> RAID 5 for the database in question. The vast majority of expert reports
> that I have read (including the vendor of our medical database) is that
> *IN GENERAL* RAID 10 is faster than RAID 5 for databases. Nowhere have I
> ever seen the statement that it is *ALWAYS* faster.

that's nonsense. RAID 10 is ALWAYS as fast as or faster than RAID 5. It's
a physics question (the number of disk head movements necessary
to read or write a given amount of data).

defective implementations are another story.|||1492a2001@.terra.es wrote:
> Hank Arnold wrote:
>> My opinion is that this only highlights the fact that *general*
>> guidelines will not always apply.
>>
>> What we have here is a couple of reports that RAID 10 is slower than
>> RAID 5 for the database in question. The vast majority of expert reports
>> that I have read (including the vendor of our medical database) is that
>> *IN GENERAL* RAID 10 is faster than RAID 5 for databases. Nowhere have I
>> ever seen the statement that it is *ALWAYS* faster.
> that's nonsense. RAID 10 is ALWAYS as fast as or faster than RAID 5. It's
> a physics question (the number of disk head movements necessary
> to read or write a given amount of data).
> defective implementations are another story.

What a nice, polite response.... :-(

Regards,
Hank Arnold|||RAID 0+1 should be faster than RAID 5.

1. RAID 5 has to calculate the XOR'd parity data.
2. RAID 5 has to do 2 writes (1 for the actual data and 1 for the XOR'd
parity).
3. RAID 5 will be slow in a degraded array; the more drives, the slower it
becomes.
(If a drive fails, all the other data plus the XOR'd parity has to be
read to recreate the missing piece. 10 drives, 1 fails: all the
remaining drives have to be read.)
4. RAID 5 upside: disk efficiency. You only lose 1 drive's capacity to
redundancy (note: I didn't say you use one drive for redundancy, just
its capacity). The more drives you have, the more efficient the storage (3
drives yields 66% capacity; 10 drives yields 90% capacity).
5. RAID 0+1 still has two writes, but it does not have the overhead of
calculating the XOR'd parity.
6. RAID 0+1 does not suffer ill effects if one of its drives fails.
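Points 1 and 3 can be demonstrated with plain XOR in a few lines of Python (a minimal sketch; real controllers stripe data and rotate parity across the drives):

```python
from functools import reduce

# RAID 5 parity is the XOR of all data blocks in a stripe (point 1).
def parity(blocks: list[bytes]) -> bytes:
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

# Rebuilding a failed drive's block needs EVERY surviving block in the
# stripe (point 3): XOR the remaining data blocks with the parity block.
def rebuild(surviving: list[bytes], parity_block: bytes) -> bytes:
    return parity(surviving + [parity_block])

stripe = [b"\x0f", b"\xf0", b"\x55"]       # data blocks on 3 data drives
p = parity(stripe)                          # parity block on a 4th drive
lost = stripe[1]                            # pretend drive 1 failed
assert rebuild([stripe[0], stripe[2]], p) == lost
```

The rebuild touching every surviving drive is why degraded RAID 5 gets slower as the array gets wider.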

Normally, RAID 0+1 should blow the doors off RAID 5; it shouldn't
even be a contest. RAID 5 is great for mostly-read workloads where
performance is not critical while the array is degraded. RAID 0+1 is
faster, but more costly, since you get only 50% of the total
disk capacity.
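The capacity arithmetic in point 4 is easy to check (a quick sketch; the 6-drive figures below are based on the arrays described at the top of the thread):

```python
# Usable-capacity fractions from point 4.
def raid5_efficiency(n_drives: int) -> float:
    # RAID 5 loses one drive's worth of capacity to parity.
    return (n_drives - 1) / n_drives

def raid01_efficiency(n_drives: int) -> float:
    # RAID 0+1 mirrors everything, so half the raw capacity is usable.
    return 0.5

# 3 drives -> ~66% usable, 10 drives -> 90%, matching the figures above.
# For six 300 GB drives:
#   RAID 5:   (5/6) * 1800 GB = 1500 GB usable
#   RAID 0+1: (1/2) * 1800 GB =  900 GB usable
# roughly in line with the ~1360 GB and ~838 GB reported at the top
# (the gap is mostly decimal vs. binary gigabytes and formatting overhead).
```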

It's worrisome to me, thinking I might have one of these controllers in
my HP machine (HP bought Compaq...anyone know if HP uses the LSI
controllers?)
