Wednesday, March 28, 2012

RAID 5 beats RAID 10

Can I get some feedback on these results? We were having some serious
IO issues according to PerfMon so I really pushed for RAID 10. The
results are not what I expected.
I have 2 identical servers.
Hardware:
PowerEdge 2850
2 dual-core Xeon 2800 MHz CPUs
4GB RAM
Controller Cards: Perc4/DC (2 arrays), Perc4e/Di (1 array)
PowerVault 220S
Each array consisted of six 300 GB drives.
Server 1 = RAID 10
3 six-disk arrays (~838 GB usable each)
Server 2 = RAID 5
3 six-disk arrays (~1360 GB usable each)
Test                             Winner    % Faster
SQL Server - Update              RAID 5    13
Heavy ETL                        RAID 5    16
SQLIO - Rand Write               RAID 10   40
SQLIO - Rand Read                RAID 10   30
SQLIO - Seq Write                RAID 5    15
SQLIO - Seq Read                 RAID 5    Mixed
Disktt - Seq Write               RAID 5    18
Disktt - Seq Read                RAID 5    2000
Disktt - Rand Read               RAID 5    62
Pass Mark - mixed                RAID 10   Varies
Pass Mark - Simulate SQL Server  RAID 5    1
I have much more detail than this if anyone is interested.

Are you absolutely sure the disk write cache on both
machines was set the same?
RAID 10 will always outperform RAID 5 on read performance in a real-world
situation because it has two copies of the data that it can read concurrently.
When writing to disk, RAID 5 needs to read as well in order to calculate parity.
There is just so much to doing the comparison...
--
Tony Rogerson
SQL Server MVP
http://sqlserverfaq.com - free video tutorials
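Tony's parity point can be sketched with the textbook per-write I/O-cost model. This is a toy illustration of the classic figures, not a measurement from the servers in this thread:

```python
# Textbook model of physical I/Os per small logical write; these are
# the classic figures, not measurements from the servers above.

def ios_per_small_write(level: str) -> int:
    """Physical disk I/Os needed to complete one small logical write."""
    if level == "raid10":
        return 2  # write the block to both sides of the mirror pair
    if level == "raid5":
        # Read old data + old parity, then write new data + new parity:
        # the read-modify-write penalty described above.
        return 4
    raise ValueError(f"unknown level: {level}")

print(ios_per_small_write("raid10"))  # 2
print(ios_per_small_write("raid5"))   # 4
```

This is why controllers lean so heavily on write-back caching to hide the RAID 5 penalty.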
"Dave" <daveg.01@.gmail.com> wrote in message
news:1146510578.745595.255290@.g10g2000cwb.googlegroups.com...
> RAID 5 beats RAID 10
> Can I get some feedback on these results? We were having some serious
> IO issues according to PerfMon so I really pushed for RAID 10. The
> results are not what I expected.
> I have 2 identical servers.
> Hardware:
> PowerEdge 2850
> 2 dual core dual core Xeon 2800 MHz
> 4GB RAM
> Controller Cards: Perc4/DC (2 arrays), Perc4e/Di (1 array)
> PowerVault 220S
> Each Array consisted of 6-300 GB drives.
> Server 1 = Raid 10
> 3, 6-disk arrays
> Server 2 = Raid 5 (~838 GB each)
> 3, 6-disk arrays (~1360 GB each)
> Test Winner % Faster
> SQL Server - Update RAID 5 13
> Heavy ETL RAID 5 16
> SQLIO - Rand Write RAID 10 40
> SQLIO - Rand Read RAID 10 30
> SQLIO - Seq Write RAID 5 15
> SQLIO - Seq Read RAID 5 Mixed
> Disktt - Seq Write RAID 5 18
> Disktt - Seq Read RAID 5 2000
> Disktt - Rand Read RAID 5 62
> Pass Mark - mixed RAID 10 Varies
> Pass Mark -
> Simulate SQL Server RAID 5 1%
> I have much more detail than this if anyone is interested.
>|||All the arrays have the same settings
Read Cache: Adaptive Read Ahead
Write Cache: Write Back
Cache Policy: Cache I/O

We have almost identical hardware and I'm not satisfied at all with the disk
performance. This is based on a gut feeling working with SQL Server 2000
with the same amount of data I'm working with now on SQL Server 2005 with
better hardware. But my problem has been to estimate what to expect. I'm
therefore very interested in your results and test-methods.
First of all: How did you configure the initial raid-setup in bios? Did you
change any of the default parameters for each raid?
"Dave" <daveg.01@.gmail.com> wrote in message
news:1146510578.745595.255290@.g10g2000cwb.googlegroups.com...
> RAID 5 beats RAID 10
> Can I get some feedback on these results? We were having some serious
> IO issues according to PerfMon so I really pushed for RAID 10. The
> results are not what I expected.
> I have 2 identical servers.
> Hardware:
> PowerEdge 2850
> 2 dual core dual core Xeon 2800 MHz
> 4GB RAM
> Controller Cards: Perc4/DC (2 arrays), Perc4e/Di (1 array)
> PowerVault 220S
> Each Array consisted of 6-300 GB drives.
> Server 1 = Raid 10
> 3, 6-disk arrays
> Server 2 = Raid 5 (~838 GB each)
> 3, 6-disk arrays (~1360 GB each)
> Test Winner % Faster
> SQL Server - Update RAID 5 13
> Heavy ETL RAID 5 16
> SQLIO - Rand Write RAID 10 40
> SQLIO - Rand Read RAID 10 30
> SQLIO - Seq Write RAID 5 15
> SQLIO - Seq Read RAID 5 Mixed
> Disktt - Seq Write RAID 5 18
> Disktt - Seq Read RAID 5 2000
> Disktt - Rand Read RAID 5 62
> Pass Mark - mixed RAID 10 Varies
> Pass Mark -
> Simulate SQL Server RAID 5 1%
> I have much more detail than this if anyone is interested.
>|||Actually, I didn't set it up. The servers were built by our systems
department then handed off to me to install/configure SQL Server and to
test.
If you have any specific questions I would be more than happy to find
out. Please be specific, because I don't know much about the RAID
BIOS settings. Remember, it is a PowerEdge 2850 and PowerVault 220S.
I just noticed that we have a failed drive on our RAID 5 array. The
array failed before we conducted this last round of test so the Raid 5
data is inaccurate. I will post new data once it is available.

This isn't really an apples-to-apples comparison. You're comparing the same
number of disks instead of the same usable disk space. With the RAID 5 array
you're using, you're getting striping over 6 disks (with a penalty for
parity); with RAID 10 you're effectively striping over only 3 disks. What
you need to compare is:
RAID 5 with 4 disks v.
RAID 10 with 6 disks
where both have the same usable disk space. Then you should see RAID 10
outperform RAID 5. However, I think I'd also need a bit more detail on those
numbers.
Hope this helps,
kt
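The same-usable-space comparison suggested above is easy to check with a quick capacity calculation. This is a sketch assuming the simple n-1 (RAID 5) and n/2 (RAID 10) usable-disk models, ignoring formatting overhead:

```python
def usable_gb(level: str, disks: int, drive_gb: int) -> int:
    """Usable capacity under the simple n-1 (RAID 5) / n/2 (RAID 10) models."""
    if level == "raid5":
        return (disks - 1) * drive_gb   # one drive's worth goes to parity
    if level == "raid10":
        return (disks // 2) * drive_gb  # half the drives mirror the other half
    raise ValueError(f"unknown level: {level}")

# The suggested comparison: both sides end up with 900 GB usable.
print(usable_gb("raid5", 4, 300))   # 900
print(usable_gb("raid10", 6, 300))  # 900
```

With 300 GB drives, a 4-disk RAID 5 and a 6-disk RAID 10 really do come out at the same usable capacity, which is the point of the proposed test.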
"Dave" wrote:
> All the arrays have the same settings
> Read Cache: Adaptive Read Ahead
> Write Cache: Write Back
> Cache Policy: Cache I/O
>|||I don't agree. Why would you use a 6 disk Raid 10 when you could use a
6-disk RAID 5 that will outperform it? I think that is apples to
apples. We are talking about the most performance for the money,
aren't we?
I just re-ran SQLIO after the RAID 5 rebuild and I got similar results
to what I posted above.
Random Write - RAID 10 big winner
Sequential Write - RAID 5 winner
Random Read - tie (RAID 10 slight favorite)
Sequential Read - tie (RAID 5 slight favorite)

Hey there Dave - I completely see your point actually... but usually what
people do (in most cases) is
(1) Figure out how much disk space they need...
(2) Configure their RAID configuration to match their needs:
RAID 10 usually sacrifices more disks for better redundancy and great
performance
RAID 5 usually sacrifices performance for a lower cost.
What you should also test though - is:
The speed of the R5 array when a disk is damaged (in most systems you lose
all caching)
The speed of a backup/restore scenario
Finally, just realize that you are MUCH more likely to have complete array
failure with R5 as it can only tolerate the loss of one drive. R10 can
tolerate the loss of multiple drives (obviously not an entire mirror
pair/set).
Personally, I always err on the side of redundancy AND performance, and R5
is usually not worth it. Also, to be honest, if you're doing nothing but pure
disk throughput testing you might not really see your SQL Server perf
issues... I can do more *in* the database (in terms of perf) rather than with
the disk (don't get me wrong though - everything helps!).
As a side note - the transaction log is WAY more important (in terms of disk
speeds) than the data portion. I wrote a blog entry on tuning the log here:
http://www.sqlskills.com/blogs/kimberly/PermaLink.aspx?guid=934f3755-5b1d-4572-a386-c6a2a0d14a9e. That may give you a few other things to think about.
Finally, for some fun (and some great links too), check out www.baarf.com.
It's a bunch of Oracle experts who are in the "Battle Against Any RAID Five".
There are also some good discussions there as to why R5 really "isn't worth
it."
Hope this helps!
kt
"Dave" wrote:
> I don't agree. Why would you use a 6 disk Raid 10 when you could use a
> 6 disk Raid 5 that will out perform it? I think that is apples and
> apples. We are talking about the most performance for the money
> aren't we?
> I just re-ran SQLIO after the Raid 5 rebuild and I got similar results
> to what I posted above.
> Random Write - Raid 10 big winner
> Sequential Write - Raid 5 Winner
> Random Read - tie (Raid 10 slight favorite)
> Sequential Read - tie (Raid 5 Slight favorite)
>|||I don't think you proved the Raid 5 outperformed the Raid 10. In real life
you will almost never get sequential reads from database files. Some OLAP
operations may do that but it will be the exception rather than the rule.
One other thing to consider about RAID 10 is that you can sustain multiple
disk failures and still be operational. With RAID 5 you can only lose one
drive, and performance is severely degraded when you do.
--
Andrew J. Kelly SQL MVP
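The multiple-failure point can be illustrated by enumerating every 2-disk failure over a hypothetical 6-disk RAID 10 made of three mirror pairs (a toy model, not any particular controller):

```python
from itertools import combinations

# Hypothetical 6-disk RAID 10: disks (0,1), (2,3), (4,5) form mirror pairs.
pairs = [(0, 1), (2, 3), (4, 5)]

def survives(failed_disks) -> bool:
    """The array survives unless both members of some mirror pair fail."""
    return not any(set(pair) <= set(failed_disks) for pair in pairs)

failures = list(combinations(range(6), 2))
survivable = sum(survives(f) for f in failures)
print(survivable, len(failures))  # 12 of the 15 two-disk failures survive
```

Only the 3 failure combinations that take out both halves of one pair are fatal; a 6-disk RAID 5 survives none of the 15.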
"Dave" <daveg.01@.gmail.com> wrote in message
news:1146768412.080063.215970@.j33g2000cwa.googlegroups.com...
>I don't agree. Why would you use a 6 disk Raid 10 when you could use a
> 6 disk Raid 5 that will out perform it? I think that is apples and
> apples. We are talking about the most performance for the money
> aren't we?
> I just re-ran SQLIO after the Raid 5 rebuild and I got similar results
> to what I posted above.
> Random Write - Raid 10 big winner
> Sequential Write - Raid 5 Winner
> Random Read - tie (Raid 10 slight favorite)
> Sequential Read - tie (Raid 5 Slight favorite)
>|||I understand Raid 10 is better at fault tolerance and all. I was the
one who really pushed for Raid 10. I also understand that my little
test doesn't prove anything.
However the Raid 5 server did beat the Raid 10 server on a heavy ETL
process that involved several reads, writes, aggregations, indexing,
etc. The process took less than 5 hours on Raid 5 and well over 6
hours on Raid 10. That code was the best real world test I could come
up with. I repeated the test while the server was under a load (large
updates, and disk benchmark tools running in background) and Raid 5
still won by 20% or more.
I am still not convinced that "true" Raid 5 can beat "true"
Raid 10. I have read several posts on how Dell does not implement true
Raid 10 on the Perc4 controllers. I have also read posts that claim
that dell documentation is inaccurate. I have forwarded our findings
to Dell for comment and I have still not heard back.
http://docs.us.dell.com/support/edocs/software/smarrman/marb32/ch8_perc.htm
If the documentation is accurate and I read it correctly then Dell
really doesn't support true Raid 10.
Is there any way someone can test RAID 5 vs. RAID 10 using a different
controller card?

If you are using Dell hardware with PERC controllers, read this:
http://forums.2cpu.com/showpost.php?p=252226&postcount=11
I will be testing this during the next day to see if this explains my
overall bad disk performance.
"Dave" <daveg.01@.gmail.com> wrote in message
news:1146510578.745595.255290@.g10g2000cwb.googlegroups.com...
> RAID 5 beats RAID 10
> Can I get some feedback on these results? We were having some serious
> IO issues according to PerfMon so I really pushed for RAID 10. The
> results are not what I expected.
> I have 2 identical servers.
> Hardware:
> PowerEdge 2850
> 2 dual core dual core Xeon 2800 MHz
> 4GB RAM
> Controller Cards: Perc4/DC (2 arrays), Perc4e/Di (1 array)
> PowerVault 220S
> Each Array consisted of 6-300 GB drives.
> Server 1 = Raid 10
> 3, 6-disk arrays
> Server 2 = Raid 5 (~838 GB each)
> 3, 6-disk arrays (~1360 GB each)
> Test Winner % Faster
> SQL Server - Update RAID 5 13
> Heavy ETL RAID 5 16
> SQLIO - Rand Write RAID 10 40
> SQLIO - Rand Read RAID 10 30
> SQLIO - Seq Write RAID 5 15
> SQLIO - Seq Read RAID 5 Mixed
> Disktt - Seq Write RAID 5 18
> Disktt - Seq Read RAID 5 2000
> Disktt - Rand Read RAID 5 62
> Pass Mark - mixed RAID 10 Varies
> Pass Mark -
> Simulate SQL Server RAID 5 1%
> I have much more detail than this if anyone is interested.
>|||Per Schjetne wrote:
> If you are using Dell hardware with Perc controllers - Read this:
> http://forums.2cpu.com/showpost.php?p=252226&postcount=11
> I will be testing this during the next day to see if this explains my
> overall bad diskperformance.
I may be missing something, but isn't "write through" slower than "write
back" anyway? I mean, with write-through the data has to be written
twice with RAID 10 before the IO call returns; I'm not sure whether this
can happen in parallel - if not, you're at twice the time. But with
write-back the controller can put the data into its internal cache (as
long as there is space left), the IO call can return, and the controller can
write the stuff out in the background.
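This reasoning can be put into a toy latency model. The 5 ms disk-write and 0.05 ms cache-write figures below are illustrative assumptions only, not measurements:

```python
# Toy latency model; the timing figures are illustrative assumptions only.
DISK_WRITE_MS = 5.0    # assumed time for one physical disk write
CACHE_WRITE_MS = 0.05  # assumed time to land in the controller cache

def ack_latency_ms(policy: str, parallel_mirror: bool) -> float:
    """Time until the IO call returns for one mirrored (RAID 10) write."""
    if policy == "write-back":
        # The controller acks once the data is in its cache and
        # flushes both mirror copies in the background.
        return CACHE_WRITE_MS
    if policy == "write-through":
        # Both mirror copies must reach disk before the ack.
        return DISK_WRITE_MS if parallel_mirror else 2 * DISK_WRITE_MS
    raise ValueError(f"unknown policy: {policy}")

print(ack_latency_ms("write-back", True))      # 0.05
print(ack_latency_ms("write-through", False))  # 10.0
```

Under any such model, write-back hides the mirroring cost entirely as long as the cache has room; write-through exposes it.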
Regards
robert

I have the exact same situation. We had a PowerEdge 2800 with RAID 5,
when we got a new one I pushed hard for RAID 10, and then when I ran
performance tests for our database it turned out to be not quite as
good as the RAID 5.

I can confirm the same thing. We have 2 x PowerEdge 2800 with the disks on a
PowerVault 220S. I have reconfigured one of the servers to RAID 10 and the
disk performance went slightly down. I used ATTO Disk Benchmark for testing.
I also ran some test procedures in SQL Server and they confirmed the same
thing.
"sql_server_user" <kaioptera@.gmail.com> wrote in message
news:1147186969.361868.284160@.i39g2000cwa.googlegroups.com...
>I have the exact same situation. We had a PowerEdge 2800 with RAID 5,
> when we got a new one I pushed hard for RAID 10, and then when I ran
> performance tests for our database it turned out to be not quite as
> good as the RAID 5.
>|||In theory, should this happen? Does anyone know of any published
benchmarks that compare Raid 5 to Raid 10 while holding the number of
disks constant?

Dave, I feel you should read Kimberly L. Tripp's response more carefully.
Her response is quite to the point. The performance comparison should not be
based on the same number of physical disks; it should be based on the same
usable capacity, using the same kind of physical drives but a different
number of them. Of course, if you measure based on the same number of
physical drives, you will get the performance numbers you stated, but that
is just not the way performance is usually assessed.
"Dave" <daveg.01@.gmail.com> wrote in message
news:1147443352.693450.119150@.j73g2000cwa.googlegroups.com...
> In theory, should this happen? Does anyone know of any published
> benchmarks that compare Raid 5 to Raid 10 while holding the number of
> disks constant?
>|||I understood her post, I just don' think that the "current way" is
a logical or scientific way to analyze RAID. I understand the fault
tolerance and degradation/rebuilding benefits of RAID 10. However, on
performance grounds alone, it doesn't appear to be justified.
I admit my testing is inconclusive. I wish I had the opportunity to
conduct more tests and see how performance varies with the number of
disks in the array.
It would also be interesting to repeat the tests on different hardware.

My opinion is that this only highlights the fact that *general*
guidelines will not always apply.
What we have here is a couple of reports that RAID 10 is slower than
RAID 5 for the database in question. The vast majority of expert reports
that I have read (including from the vendor of our medical database) say that
*IN GENERAL* RAID 10 is faster than RAID 5 for databases. Nowhere have I
ever seen the statement that it is *ALWAYS* faster.
I don't doubt that the posters are reporting accurate information, I
just don't see where it means that RAID 5 is *ALWAYS* faster than RAID
10, any more than the opposite is true...
Regards,
Hank Arnold
That's nonsense. RAID 10 is ALWAYS as fast as or faster than RAID 5. It's
a physics question (the number of disk-head movements necessary
to read or write an amount of data).
Defective implementations are another story.
> Hank Arnold wrote:
>> My opinion is that this only highlights the fact that *general*
>> guidelines will not always apply.
>> What we have here is a couple of reports that RAID 10 is slower that
>> RAID 5 for the database in question. The vast majority of expert reports
>> that I have read (including the vendor of our medical database) is that
>> *IN GENERAL* RAID 10 is faster than RAID 5 for databases. Nowhere have I
>> ever seen the statement that it is *ALWAYS* faster.
> that's a nonsense. RAID10 is ALWAYS as fast or faster than RAID5. It's
> a physics questions (the number movements of the disk heads necessary
> to read or write an amount of data).
> defective implementations are another history.
>
What a nice, polite response.... :-(
Regards,
Hank Arnold

RAID 0+1 should be faster than RAID 5:
1. RAID 5 has to calculate the xor'd data.
2. RAID 5 has to do 2 writes (1 for the actual data and 1 for the xor'd
data).
3. RAID 5 will be slow in a degraded array; the more drives, the slower it
becomes.
(If a drive fails, it will have to read all the other data and pull
the xor'd data to recreate the missing piece. With 10 drives, if 1 fails, all
the surviving drives have to be read.)
4. RAID 5 upside: disk efficiency. You only lose one drive's capacity to
redundancy (note: I didn't say you use one drive for redundancy, just
its capacity). The more drives you have, the more efficient the storage (3
drives yields 66% capacity; 10 drives yields 90% capacity).
5. RAID 0+1 still has two writes, but it does not have the overhead of
calculating the xor'd data.
6. RAID 0+1 does not suffer ill effects if one of its drives fails.
Normally, RAID 0+1 should blow the doors off of RAID 5; it shouldn't
even be a contest. RAID 5 is great for mostly reads and where
performance is not critical while the array is degraded. RAID 0+1 is
faster, but more costly, since you get only 50% of the total
disk capacity.
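Points 1 and 3 above can be demonstrated with a toy XOR-parity calculation. This is a sketch over three made-up two-byte data blocks, not a model of any real controller:

```python
from functools import reduce

# Three made-up data blocks, as if from three data drives in a stripe.
data_blocks = [b"\x10\x20", b"\x0f\x0f", b"\xaa\x55"]

def xor_blocks(blocks):
    """XOR corresponding bytes across blocks - the RAID 5 parity rule."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

parity = xor_blocks(data_blocks)  # the write-time overhead of point 1

# Point 3: rebuilding a lost block means reading every surviving block.
survivors = [data_blocks[0], data_blocks[2], parity]
recovered = xor_blocks(survivors)
print(recovered == data_blocks[1])  # True
```

Because XOR is its own inverse, any one missing block can be recovered, but only by touching every other block in the stripe, which is exactly why degraded reads are so expensive.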
It's worrisome to me, thinking I might have one of these controllers in
my HP machine (HP bought Compaq...anyone know if HP uses the LSI
controllers?)|||"Kimberly L. Tripp, MVP/RD, SQLskills.com" <Kimberly L. Tripp, MVP/RD,
SQLskills.com@.discussions.microsoft.com> wrote in message
news:0CEE4502-2150-4548-85E4-52485E235710@.microsoft.com...
> Hey there Dave - I completely see your point actually... but usually what
> people do (in most cases) is
> (1) Figure out how much disk space they need...
> (2) Configure their RAID configuration to match their needs:
> RAID 10 usually sacrafices more disks for better redundancy and great
> performance
> RAID 5 usually sacrafices performance for a lower cost.
Kimberly... given your comments on testing methodology... here's a real-life
scenario I'm currently in, and your comments tend to tell me there's no
point in doing what I've been considering doing.
We have a 6x146GB HP array currently configured with HP's ADG (RAID 5+)
technology, which means we have two parity drives and four data drives,
giving us 520GB of data storage. This is absolute overkill for our
(currently) 15GB database.
I'm evaluating conversion of this ADG array to a RAID 10 array, a stripe of 3
mirrors, which would give us 3x146GB of data storage, approx 420GB... still
overkill for our 15GB database.
But, based on what you've said in this thread, I find myself asking Dave's
very same question:
Why would you use a 6-disk RAID 10 when you could use a 6-disk RAID 5
that will outperform it?
And, as for the idea of basing a purchase on "how much disk space is
needed": while the actual purchase of the number of spindles and the size of
the drives is related to this, in part, there aren't that many choices where
SCSI drive arrays are concerned. You've got 36GB, 72GB, 146GB, 320GB (and
maybe larger, now) sizes, and all spindles in the array must be the same size.
Raid10 requires at least 4 drives; Raid5 requires at least 3 drives.
Further, RAID 10 requires the purchase of spindles in even-numbered
quantities. Anyone weighing RAID 10 vs RAID 5 must therefore commit to
purchasing at least four spindles for the array. At this
point, the number of spindles in the array is pretty much cost driven for
most organizations. I have a choice of 4x72GB, 4x146GB, etc.; the total
usable volume is significantly different for each drive size configuration,
e.g.
4x72GB = 216GB RAID5 capacity; 144GB RAID10 capacity
4x146GB = 438GB RAID5 capacity; 292GB RAID10 capacity
Let's consider those environments where 100GB of storage for MDF files is
more than sufficient. In such a scenario, drive capacity is the least
significant question. What's relevant is: how many spindles do I have to buy
to implement a given technology? RAID 5 requires 3; RAID 10 requires 4. By
definition, RAID 10 is 33% more expensive to deploy at the minimum
deployment. The drive size may be a significant factor, as a 3-drive RAID 5
(3x146GB = 292GB usable) array is twice the capacity of a 4-drive RAID 10
(4x72GB = 144GB usable) array with the next drive size down -- that is,
purchasing smaller drives to get more spindles may not be a feasible
trade-off, as it may not provide the necessary capacity for the project.
So I submit that in most cases, the question of the number of spindles has
nothing to do with data capacity, but has almost everything to do with cost.
If, then, you tell me that a RAID 5 with 4x146GB drives will always outperform
a RAID 10 with 4x146GB drives, and that in order to get comparable (or better)
performance I need to use a RAID 10 with 6x146GB drives, we're talking about
an incremental cost of two 146GB drives in the array to get the same
performance.
THAT is a significant question for just this little scenario.
Consider the cost differential for larger arrays!
Now, there's a whole new picture on the cost/performance comparison of
RAID 10 in large arrays.
Please... tell me what I'm missing in understanding this picture, because if
one has to spend 33% more money to get 'better performance' (we've already
established that the RAID 5 array with a smaller number of spindles has
sufficient capacity), then RAID 10 may not be the panacea it's being made out
to be, IMHO.
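The cost argument above can be made concrete with a back-of-the-envelope calculation. The $400-per-drive price below is a made-up placeholder, and the capacities use the simple n-1 (RAID 5) / n/2 (RAID 10) models:

```python
# Back-of-the-envelope cost per usable GB; the $400-per-drive price is a
# made-up placeholder, and capacities use the simple n-1 / n/2 models.
DRIVE_GB, DRIVE_PRICE = 146, 400

def usable(level: str, disks: int) -> int:
    """Usable GB for a RAID 5 (n-1 drives) or RAID 10 (n/2 drives) array."""
    if level == "raid5":
        return (disks - 1) * DRIVE_GB
    return (disks // 2) * DRIVE_GB

for level, disks in [("raid5", 3), ("raid5", 4), ("raid10", 4), ("raid10", 6)]:
    cost = disks * DRIVE_PRICE
    print(f"{level} x{disks}: {usable(level, disks)} GB usable, "
          f"${cost / usable(level, disks):.2f}/GB")
```

Whatever the actual drive price, the ratios hold: at equal usable capacity (4-drive RAID 5 vs 6-drive RAID 10, both 438 GB here), the RAID 10 option costs half again as much in spindles.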
Lawrence Garvin, M.S., MVP-Software Distribution
Everything you need for WSUS is at
http://technet2.microsoft.com/windowsserver/en/technologies/featured/wsus/default.mspx
And, everything else is at
http://wsusinfo.onsitechsolutions.com

Hi
I can't see the rest of the thread, but how did you measure the performance
of the RAID 5 vs. RAID 10?
Was there enough IO to completely flush SQL Server's buffer cache to a point
where you became IO bound?
Regards
--
Mike
This posting is provided "AS IS" with no warranties, and confers no rights.
"Lawrence Garvin (MVP)" <onsitech@.news.postalias> wrote in message
news:u7Eyb0VlGHA.4076@.TK2MSFTNGP05.phx.gbl...
> "Kimberly L. Tripp, MVP/RD, SQLskills.com" <Kimberly L. Tripp, MVP/RD,
> SQLskills.com@.discussions.microsoft.com> wrote in message
> news:0CEE4502-2150-4548-85E4-52485E235710@.microsoft.com...
>> Hey there Dave - I completely see your point actually... but usually what
>> people do (in most cases) is
>> (1) Figure out how much disk space they need...
>> (2) Configure their RAID configuration to match their needs:
>> RAID 10 usually sacrafices more disks for better redundancy and great
>> performance
>> RAID 5 usually sacrafices performance for a lower cost.
>
> Kimberly... given your comments on testing methodology... here's a
> real-life scenario I'm currently in, and your comments tend to tell me
> there's no point in doing what I've been considering doing.
> We have a 6x146GB HP Array currently configured with HP's ADG (Raid 5+)
> technology. Which means we have two parity drives, and four data drives,
> giving us 520GB of data storage. This is absolute overkill for our
> (currently) 15GB database.
> I'm evaluating conversion of this ADG array to a RAID10 array, a stripe of
> 3 mirrors, which would give us 3x146GB of data storage, approx 420GB..
> still overkill for our 15GB database.
> But, based on what you've said in this thread, I find myself asking Dave's
> very same question:
> Why would you use a 6 disk Raid 10 when you could use a 6 disk Raid 5
> that will out perform it?
> And, as for the idea of basing a purchase on "how much disk space is
> needed", while the actual purchase of the number of spindles and size of
> the drive is related to this, in part, there aren't that many choices when
> SCSI drive arrays are concerned. You got 36GB, 72GB, 146GB, 320GB (and
> maybe larger, now) sizes, and all spindles in the array must be the same
> size. Raid10 requires at least 4 drives; Raid5 requires at least 3 drives.
> Further, RAID10 requires the purchase of spindles in even number
> quantities. Given that one would consider RAID10 vs RAID5, then first that
> scenario must be committed to purchasing at least four spindles for the
> array. At this point, the number of spindles in the array is pretty much
> cost driven by most organizations. I have a choice 4x72GB, 4x146GB, etc.
> the total usable volume is significantly different from each drive size
> configuration,
> e.g.
> 4x72GB = 216GB RAID5 capacity; 144GB RAID10 capacity
> 4x146GB = 438GB RAID5 capacity; 292GB RAID10 capacity
> Let's consider those environments where 100GB of storage for MDF files is
> more than sufficient. In such a scenario, drive capacity is the least
> signficant question. What's relevant is: How many spindles do I have to
> buy to implement a given technology. RAID5 requires 3; RAID10 requires 4.
> By definition, RAID10 is 25% more expensive to deploy, at the minimum
> deployment. The drive size may be a significant factor, as a 3 drive RAID5
> (3x146GB = 292GB) array is twice the capacity of a 4 drive RAID10 (4x72GB
> = 144GB) array with the next drive size down -- that is, purchasing
> smaller drives to get more spindles may not be a feasible trade-off, as it
> may not provided the necessary capacity for the project.
> So I submit that in most cases, the question of the number of spindles has
> nothing to do with data capacity, but has almost everything to do with
> cost.
> If then, you tell me that a RAID5 with 4x146GB drives will always
> outperform a RAID10 with 4x146GB drives, and in order to get comparable
> (or better) performance, I need to use a RAID10 with 6x146GB drives, we're
> talking about an incremental cost of 2 146GB drives in the array to get
> the same performance.
> THAT is a significant question for just this little scenario.
> Consider the cost differential for larger arrays!
> Now, there's a whole new picture on the cost/performance comparision of
> RAID10 in large arrays.
> Please.. tell me what I'm missing in understanding this picture, because
> if one has to spend 33% more money to get 'better performance' (we've
> already established that the RAID5 array with a smaller number of spindles
> has sufficient capacity), then RAID10 may not be the panacea it's being
> made out to be, IMHO.
>
> --
> Lawrence Garvin, M.S., MVP-Software Distribution
> Everything you need for WSUS is at
> http://technet2.microsoft.com/windowsserver/en/technologies/featured/wsus/default.mspx
> And, eveything else is at
> http://wsusinfo.onsitechsolutions.com
> ...
>|||Guys,
Read that and read this:
http://docs.us.dell.com/support/edocs/software/smarrman/marb32/ch3_stor.htm#1030240
Goto "Organizing Data Storage for Availability and Performance" toward the
bottom.
Not sure what Dell is thinking, but there RAID 10, is NOT ture RAID 10 on
these controllers! BEWARE!
It's called RAID 1 - Concatenation. It does NO STRIPING!!! It sees an
array of drives as 1 volume, but it fills one drive completely, then starts
filling the next and the next. These "concatenated" drive are then mirrored
in a RAID 1.
Since you do not use all the drives wands at the same time, No performance
boost of RAID 10 period! Basically, it just like having RAID 1 array with 2
HUGE drives. But in this case the 2 huge drives are made up of many drives.
Thank you Dell for you new RAID type, I call it "FRAID 10". Stands for
"FAKE RAID 10". And I'd be a-FRAID to use it.
MP
"Per Schjetne" wrote:
> If you are using Dell hardware with Perc controllers - Read this:
> http://forums.2cpu.com/showpost.php?p=252226&postcount=11
> I will be testing this during the next day to see if this explains my
> overall bad diskperformance.
>
> "Dave" <daveg.01@.gmail.com> wrote in message
> news:1146510578.745595.255290@.g10g2000cwb.googlegroups.com...
> > RAID 5 beats RAID 10
> >
> > Can I get some feedback on these results? We were having some serious
> > IO issues according to PerfMon so I really pushed for RAID 10. The
> > results are not what I expected.
> >
> > I have 2 identical servers.
> >
> > Hardware:
> > PowerEdge 2850
> > 2 dual core dual core Xeon 2800 MHz
> > 4GB RAM
> > Controller Cards: Perc4/DC (2 arrays), Perc4e/Di (1 array)
> >
> > PowerVault 220S
> > Each Array consisted of 6-300 GB drives.
> >
> > Server 1 = Raid 10
> > 3, 6-disk arrays
> >
> > Server 2 = Raid 5 (~838 GB each)
> > 3, 6-disk arrays (~1360 GB each)
> >
> > Test Winner % Faster
> > SQL Server - Update RAID 5 13
> > Heavy ETL RAID 5 16
> > SQLIO - Rand Write RAID 10 40
> > SQLIO - Rand Read RAID 10 30
> > SQLIO - Seq Write RAID 5 15
> > SQLIO - Seq Read RAID 5 Mixed
> > Disktt - Seq Write RAID 5 18
> > Disktt - Seq Read RAID 5 2000
> > Disktt - Rand Read RAID 5 62
> > Pass Mark - mixed RAID 10 Varies
> > Pass Mark -
> > Simulate SQL Server RAID 5 1%
> >
> > I have much more detail than this if anyone is interested.
> >
>
>|||Interesting.
If you have mirroring in hardware, you can let SQL do the striping for you.
Just mount each of your mirror pairs in the OS and in SQL Server create
filegroups with files on multiple volumes. Since SQL Server evenly
disributes data and load across the files in a filegroup, this will give you
the same effect as striping the volumes together.
David
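This suggestion can be sketched as a toy model of the fill behavior: with one file per mirrored volume, spreading allocations across the files of a filegroup approximates striping. The allocate-to-least-full rule below is a simplification of SQL Server's real proportional-fill algorithm, and the volume names are made up:

```python
# Toy model: one database file per mirrored volume; each 64 KB extent
# goes to the least-full file, a simplification of proportional fill.
files_kb = {"volume1": 0, "volume2": 0, "volume3": 0}

def allocate_extent(files):
    """Place one 64 KB extent on the least-full file."""
    target = min(files, key=files.get)
    files[target] += 64
    return target

for _ in range(30):
    allocate_extent(files_kb)

print(files_kb)  # each volume ends up with 10 extents (640 KB)
```

Because allocations round-robin across the volumes, sequential scans and heavy inserts end up hitting all the mirror pairs at once, which is the striping effect the hardware refused to provide.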
"Mrpush" <Mrpush@.discussions.microsoft.com> wrote in message
news:228CF599-6063-4F10-9343-1AECB73C3418@.microsoft.com...
> Guys,
> Read that and read this:
> http://docs.us.dell.com/support/edocs/software/smarrman/marb32/ch3_stor.htm#1030240
> Goto "Organizing Data Storage for Availability and Performance" toward the
> bottom.
> Not sure what Dell is thinking, but there RAID 10, is NOT ture RAID 10 on
> these controllers! BEWARE!
> It's called RAID 1 - Concatenation. It does NO STRIPING!!! It sees an
> array of drives as 1 volume, but it fills one drive completely, then
> starts
> filling the next and the next. These "concatenated" drive are then
> mirrored
> in a RAID 1.
> Since you do not use all the drives wands at the same time, No performance
> boost of RAID 10 period! Basically, it just like having RAID 1 array with
> 2
> HUGE drives. But in this case the 2 huge drives are made up of many
> drives.
> Thank you Dell for you new RAID type, I call it "FRAID 10". Stands for
> "FAKE RAID 10". And I'd be a-FRAID to use it.
> MP
>
>
> "Per Schjetne" wrote:
>> If you are using Dell hardware with Perc controllers - Read this:
>> http://forums.2cpu.com/showpost.php?p=252226&postcount=11
>> I will be testing this during the next day to see if this explains my
>> overall bad diskperformance.
>>
>> "Dave" <daveg.01@.gmail.com> wrote in message
>> news:1146510578.745595.255290@.g10g2000cwb.googlegroups.com...
>> > RAID 5 beats RAID 10
>> >
>> > Can I get some feedback on these results? We were having some serious
>> > IO issues according to PerfMon so I really pushed for RAID 10. The
>> > results are not what I expected.
>> >
>> > I have 2 identical servers.
>> >
>> > Hardware:
>> > PowerEdge 2850
>> > 2 dual core dual core Xeon 2800 MHz
>> > 4GB RAM
>> > Controller Cards: Perc4/DC (2 arrays), Perc4e/Di (1 array)
>> >
>> > PowerVault 220S
>> > Each Array consisted of 6-300 GB drives.
>> >
>> > Server 1 = Raid 10
>> > 3, 6-disk arrays
>> >
>> > Server 2 = Raid 5 (~838 GB each)
>> > 3, 6-disk arrays (~1360 GB each)
>> >
>> > Test Winner % Faster
>> > SQL Server - Update RAID 5 13
>> > Heavy ETL RAID 5 16
>> > SQLIO - Rand Write RAID 10 40
>> > SQLIO - Rand Read RAID 10 30
>> > SQLIO - Seq Write RAID 5 15
>> > SQLIO - Seq Read RAID 5 Mixed
>> > Disktt - Seq Write RAID 5 18
>> > Disktt - Seq Read RAID 5 2000
>> > Disktt - Rand Read RAID 5 62
>> > Pass Mark - mixed RAID 10 Varies
>> > Pass Mark -
>> > Simulate SQL Server RAID 5 1%
>> >
>> > I have much more detail than this if anyone is interested.
>> >
>>
