I'm making whatever hardware upgrades I can very easily do on this box.
Due to only having 6 HDD slots in my IBM x235 tower's backplane, I can't have 3 volumes AND have one of them as RAID 5. Also, a 3-disk RAID 5 just seems silly. Wouldn't it have worse performance than a mirror (both read and write)? Well, I don't claim any knowledge on all this, so I should stop with the wild guesses.
My DB is 3GB (+ 2GB log file). It's looking like it may grow up to 5 or 10 GB per year on the very outside, so 140GB for DB should be plenty for years to come.
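To put that in perspective, a quick back-of-envelope check in Python (the 10 GB/year figure is the worst-case guess from above, not a measurement):

```python
# Rough headroom check for the data volume.
# Assumes: 3 GB today, worst-case growth of 10 GB/year, 140 GB available.
current_gb = 3
growth_gb_per_year = 10  # the "very outside" estimate
capacity_gb = 140

years_of_headroom = (capacity_gb - current_gb) / growth_gb_per_year
print(f"Worst-case headroom: {years_of_headroom:.1f} years")  # → 13.7 years
```

Even at the pessimistic growth rate, 140GB covers well over a decade.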
I just want to keep this upgrade small and simple with zero exposure to upgrade heartache and regret. Therefore; I plan to avoid an O/S or Software rebuild.
My hardware / network guy says upgrading the O/S drive from 10K RPM to a larger 15K RPM drive requires OS reload. He's not entirely sure since he's never done it using this IBM RAID card, but 10K RPM should be fine for the O/S.
Existing:
Internal IBM PCI RAID Controller (not sure model, but decent). All RAID is via hardware controller.
Volume 1 = 10K RPM mirrored SCSI.
Volume 2 = 10K RPM RAID 5 with 3 disks
Slot 6 empty.
Solution 1:
1. 2-volume option that includes a small RAID 5.
Volume 1 = 10K RPM RAID 1 SCSI (O/S, log, and DB backup files)
Volume 2 = 15K RPM RAID 6 or RAID 5 with 4 disks
2. 3-volume option
Volume 1 = 10K RPM RAID 1 SCSI (O/S only)
Volume 2 = 15K RPM RAID 1 SCSI (DB only)
Volume 3 = 15K RPM RAID 1 SCSI (log files only)
I have heard it's better to have log files on a mirrored disk because the I/O tends to be write-intensive and sequential. I don't know how intensive DB I/O is vs. log file I/O vs. OS I/O. Maybe someone who does could advise me in this matter.
Additional question: are higher-capacity HDDs faster for read/write? Say, comparing 32GB vs. 300GB: the 300GB drive must store data more densely, so more data would pass under the heads. Yet the specs don't seem to show much difference (unless I'm missing something).
Thanks!|||Hi vich -
Others will be able to answer - I'm pretty hopeless on hardware. Every so often I try to work my way through this and pick up a little more each time:
http://sql-server-performance.com/Community/forums/t/2337.aspx|||Let me take a stab at this and then let the experts chew me apart...
I would recommend the 3 RAID-1 volumes with a big caveat: make sure you have adequate space on the main data volume. If you don't, and you end up "spilling over" data files onto either the O/S volume or the log volume, you've negated any advantage.
Also, check your channel configuration carefully; you'd like the I/O for the data LUNs on a separate channel from the I/O for the log files (and definitely separate from the backup LUNs). Usually one SCSI card supports two channels, so you'll probably end up having to compromise something here.
My reasoning:
The OS volume should have the paging file; segregating the I/O for the paging file onto a separate LUN should be a plus;
Having the log files write to a RAID 1 LUN should be a plus; RAID-1 does sequential writes better (when compared to RAID-5);
Provided there's sufficient room, having a RAID-1 LUN for your data should be good; better would be to have more physical spindles for data access (ie, reads), but everything's a compromise in the end.
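The RAID 1 vs. RAID 5 write trade-off above can be put in rough numbers using the classic write-penalty rule of thumb: RAID 1 costs 2 physical I/Os per logical write, RAID 5 costs 4 (read data, read parity, write data, write parity). This is only a sketch - the 125 IOPS per spindle is an assumed typical figure for a 10K RPM SCSI disk, not any drive's measured spec:

```python
# Back-of-envelope random-write throughput under classic RAID write penalties.
# Assumption: each spindle sustains ~125 random IOPS (typical 10K RPM SCSI).
def raid_write_iops(spindles, per_disk_iops, write_penalty):
    """Aggregate random-write IOPS the array can sustain."""
    return spindles * per_disk_iops / write_penalty

raid1 = raid_write_iops(spindles=2, per_disk_iops=125, write_penalty=2)
raid5 = raid_write_iops(spindles=3, per_disk_iops=125, write_penalty=4)
print(f"RAID 1 (2 disks): {raid1:.0f} write IOPS")  # → 125
print(f"RAID 5 (3 disks): {raid5:.0f} write IOPS")  # → 94
```

Under these assumptions a 2-disk mirror actually out-writes a 3-disk RAID 5, which is the intuition behind keeping log files (write-heavy) on RAID 1. Sequential write caching on the controller muddies this in practice.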
You did not specify where you were going to write the DB backup files in the 3-volume scenario; I would probably go with the OS LUN (again, if it's big enough). Alternatively, if your network is big enough and reliable enough (and the target is reliable enough), you can write your backups over the network. A LOT of people recommend against this (and with good reason). But if you understand the risks and you are SURE the network is adequate and the target is reliable, then over-the-network backups can be a big boost in terms of space saved and I/O performance.
Higher-capacity drives do not directly affect I/O performance. The key metrics for I/O performance are RPM and latency. Indirectly, however, they do have an impact: since the new drives are higher capacity, the budget guys tend to think you need fewer drives. Fewer drives means fewer spindles. Fewer spindles means slower (read) performance.
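The RPM/latency point can be modeled simply: per-disk random IOPS is roughly the inverse of average seek time plus half a rotation. The seek times below are illustrative ballpark figures for the two drive classes, not specs for any particular drive:

```python
# Approximate per-disk random IOPS from rotational speed and seek time.
# Service time ≈ average seek + average rotational latency (half a revolution).
def disk_iops(rpm, avg_seek_ms):
    rotational_latency_ms = (60_000 / rpm) / 2  # half a revolution, in ms
    service_time_ms = avg_seek_ms + rotational_latency_ms
    return 1000 / service_time_ms

# Illustrative seek times: ~4.9 ms (10K class), ~3.8 ms (15K class).
print(f"10K RPM: ~{disk_iops(10_000, 4.9):.0f} IOPS")
print(f"15K RPM: ~{disk_iops(15_000, 3.8):.0f} IOPS")
```

With these assumed figures that works out to roughly 127 vs. 172 random IOPS per spindle - about a 35% gain going from 10K to 15K - and notice that platter capacity appears nowhere in the formula (it mainly helps sequential transfer rate, not random I/O).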
Recognizing that not every organization can afford them, I keep pushing SANs, even if it means using some of the lower end systems like the AX 1000 from EMC. Once you break into this world, you'll never want to go back.
Okay, so now everyone can rip into me and show me how ignorant I am!
Regards,
hmscott|||Okay, so now everyone can rip into me and show me how ignorant I am! I had to google LUN :o|||Sounds pretty good to me, hmscott.
In an ideal world, I would want separate spindles for the following objects split off in order of importance:
1) OS/pagefile
2) data filegroups
3) logs
4) tempdb
5) index filegroups
If you have a system that does not do a lot of writing, and mainly reads joined tables or sorts output, items 3 and 4 invert. Reporting systems love tempdb. If you cannot get 5 physical spindles, indexes collapse into the data. If you cannot get 4 spindles, try to get tempdb in with the transaction logs. If you only have 2 spindles, hope you have a lightly used system.
Tempdb and transaction logs like having mirror arrays. Data/index filegroups can live with RAID 5, since the write I/O is asynchronous.
EDIT: Cleared up some of the ambiguous passages.|||Coo - why would you make logs a higher priority on a read heavy system?|||DOH! I renumbered things when I added OS. That should be that tempdb and logs invert, not data files and logs invert. Perhaps I should do some editing...|||Great info. Thanks.
So, would I be correct to interpret this as: having separate spindles for logs + tempdb (i.e., 3 spindles rather than just two) should take precedence over putting RAID 5 on the data?
Also, for heavy reporting (I think that's my environment) you clearly say that putting tempdb on a separate spindle is better. But should the log collapse onto the DB (+ index) spindle or the tempdb spindle? In any given hour, I would say only 10 minutes actually have a report running somewhere in the background. Most of those reports are heavy index users that do not do sorts (Crystal Reports does the sorting - separate processor), so I don't think they use tempdb much. Is there a stock-software, simple way to analyze tempdb activity vs. log activity? My thinking is to combine tempdb and the log files.
I should note: Although we run a fair number of report queries, the non-reporting aspect of the system is more performance critical. Users here wouldn't notice a 100% gain in reporting speed but would find a 200% update / random access slow-down unacceptable.
Your wording almost implies that you don't mind putting data on a mirror if space allows. My DB is relatively tiny (3GB), so mirrored 72GB drives would be ample. I thought striping actually added performance, albeit hampered by extra processing, so it only applies with an adequate controller - ergo the popular recommendation of RAID 0 on speed-hungry systems not requiring fault tolerance (hard to imagine). Are you saying it's a minor consideration compared with the advantage of spindle separation?
Note: As a humble suggestion, in a high-budget system, wouldn't you make a 6th spindle for the OS's swap file? It could be small and cheap, but placing it with the OS would thrash some. Having its own dedicated spindle would keep the heads in one place and, since it's memory-like storage, it should be as fast as humanly possible.|||If the users are willing to pay the extra $X,000 for the extra spindle, I would certainly not mind using it. Part of this job, however, is to deliver maximum performance at minimum cost. If the existing system shows no signs of disk queues, then most of this is moot. At 3GB, most of this database is going to fit in memory, especially if only the last 2 weeks of data are accessed 90% of the time.|||
Thanks. They don't mind paying ~$2.5K for 4 new disks ($650 ea.) and $700 for two 1GB RAM sticks. It'll make this box last until next year. It's a safety net for the largely unknown extra load of the upcoming MRP system - going live next month - and for a $30M company, really, $3K is nothing for a little insurance. They could easily pay the $30K for a new box if it were warranted, but it's not. The larger expense is drawing my time away from my larger project, the MRP implementation.
Lots of other hardware boost paths make more sense, but this is the easiest (I think).
My hardware guy will do the actual upgrade. The day before, I'll simply move the DB from the E: drive (the RAID 5 volume) to the D: drive (the mirrored volume shared by the C: and D: drives). It'll be a little slow for a day. If it's a disaster, I'll take it down and move it back (15 minutes). Then, the next evening (Friday night), he can remove the RAID 5 volume and rebuild those 4 bays however I request.
That box's backplane only has 6 disk bays.
I can either have one 4-disk RAID 5 volume or two RAID 1 volumes.
So; the question on this thread is which one?
With more memory (4GB) plus the boot.ini /3GB switch on, you have a good point that (for the time being) nearly all of the popular pages will always be in memory. So (DB writes being asynchronous), having slower DB drives will be fine.
Since, in an effort to keep this risk free and simple, we are NOT upgrading the OS volume (C/D), if I go with just a single RAID-5 volume, the more time-critical (synchronous) LOG files will still be on the slower volume.
Going the 3-RAID-1-volume route, for the shorter, more ongoing index-heavy random I/O, I'm gambling that it will usually hit in-memory pages, since short random reads are where striped volumes help (correct? still fuzzy for me).
Solution 2 is to migrate the DB to our Standby Server, a non-production machine (I use it for development). It's the same model (IBM x235), but by a fluke it has 2 processors that are faster (2.6 GHz vs. 2.0 GHz) and a faster backplane (although purchased only a month later, IBM had just increased the specs from 400K to 530K). But it currently only has a single Windows mirrored volume.
This weekend, while remoted in and fooling around with all this upgrade-option stuff, I discovered its RAID controller isn't the one my hardware guy thought. It's the 4Lx (see link (http://www-304.ibm.com/jct01004c/systems/support/supportsite.wss/docdisplay?brandind=5000008&lndocid=MIGR-59936)), not the 6M as he thought. Huge difference! So, as of this morning, Solution 2 is looking more attractive. A new RAID controller is another $950, but this (Standby) box (with 15K drives and the faster controller) should be at least a 30% overall boost. That would also leave me with a far better standby solution (the old server). Slightly more risk, since the OS needs rebuilding.
However; the Standby machine also has the same 6 bay backplane. So I'm stuck with the same Volume choice.
MCrowley - I realize I'm splitting hairs here. Although it is a real upgrade exercise, it's been a wonderful opportunity to display my ignorance and, by doing so, allow some of it to evaporate. :) However, any more "thinking" on this could be considered "playing around". It's really been an education to see the parts in motion, as it were. Well, back to the application software side of my job here. I feel more comfortable knowing I at least won't screw things up by eliminating RAID 5. Next year, I'll add a SAS or SAN box if the MRP grinds and expands as I think it will.|||As a caution, I do feel I have to say that learning this stuff is a good thing for you to do. But knowledge, being power, can be dangerous, too (just ask the Sorcerer's Apprentice). I do not know how your shop works, but as a generalism, businesses like to talk about ROI, and results, and other such rubbish. If you spend too much money for no noticeable effect, then the budget for next year may not consider your improvement ideas. It is an almost impossible hole to dig out of, too. Think about it this way: if your doctor prescribes a bunch of pills one week, then a whole new set the next week, and a third set the third week, and you are not feeling any better, how many weeks would you maintain faith in the doctor? This is much more a matter of office politics than technical detail, so your judgment there is going to carry much more weight than mine.|||
Thanks. I'm taking that to mean you don't think it's warranted. From a performance standpoint, you may be right; I'm not skilled enough at reading disk performance stats to really know. The upcoming MRP implementation is the motivation. It's well into six figures already, so $4K to add 30% to 4-year-old servers seems worth the insurance.
However, I'll take your advice and go with migrating the DB to the Standby Server. This solution has the added ROI of improving the realistic ability of our Standby Server to actually assume the production load in the event of a failure. Coupled with the extra performance (and accompanying extra capacity for load growth), it's a true ROI. Not ROI they may see, but "they" are in tune with the fact that not all ROIs are visible.
Of course, if I "upgraded" and made things worse by eliminating RAID 5 (ergo, this thread), then I'd (rightly) have a credibility problem.
The way this shop works is that my credibility is very good, so they allow me to make all IT decisions, including spending - within a set budget, but they don't hesitate to expand that budget on my say-so. In 4 years, they have never once said "no" nor questioned my word, except for clarification. I've taken that trust to heart. In return, they're very satisfied with how smoothly IT runs itself (so to speak). They happily shell out my rate and give me utmost respect. It's wonderful, and I would hate to forfeit that position, so safeguarding my credibility is a worthy suggestion. Still, they spend $300K+ on new manufacturing gear, trucks, etc. at the drop of a hat. $4K won't even be noticed (unless it buys them grief).
Appreciate all the help.