Building a Home Server – The Complete Guide


It’s finally time to install Windows Server 2012!

Actually, no it’s not. Do you remember the part about UEFI when we listed what items to get? Well, that matters now (to a certain degree). If you don’t have a UEFI motherboard, there’s no need to worry; the install process will just be a tad different. UEFI users can install Windows Server 2012 as they normally would.


Before we begin the install, let us first go through what exactly UEFI/EFI is. Traditional BIOSes only allow booting from MBR (Master Boot Record) drives. The average person, even today, doesn’t install an operating system on a large 3TB+ drive; for those who do, however, there is UEFI. UEFI allows us to boot from GPT (GUID Partition Table) disks, which for Windows requires a 64-bit operating system (and WS 2012 is exclusively 64-bit). The Microsoft GPT FAQ is a brilliant resource if you want to read more; the maximum NTFS GPT volume sizes come from it:
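The gist of that size table is easy to reconstruct yourself: NTFS addresses at most 2^32 − 1 clusters per volume, so the maximum volume size is just the cluster size multiplied by that cluster count. A quick sketch (ours, not Microsoft’s) that reproduces the ceiling for each cluster size:

```python
# NTFS addresses at most 2^32 - 1 clusters per volume, so
# max volume size = cluster size * (2^32 - 1).
MAX_CLUSTERS = 2**32 - 1

def max_ntfs_volume_tib(cluster_bytes: int) -> float:
    """Maximum NTFS volume size in TiB for a given cluster size."""
    return cluster_bytes * MAX_CLUSTERS / 2**40

for kib in (4, 8, 16, 32, 64):
    print(f"{kib:>2} KiB clusters -> ~{max_ntfs_volume_tib(kib * 1024):.0f} TiB max volume")
```

With the default 4 KiB clusters the ceiling is about 16 TiB; a 19TB array only works because Windows steps up to 8 KiB clusters (roughly a 32 TiB ceiling) at that capacity.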


So why does it matter? Well, MBR doesn’t allow booting into volumes greater than 2.2TB in size. In fact, Windows will list the maximum capacity of the drive at 1.91TB with a 4K cluster size (the default). This is a problem for us, because our primary boot drive is a 19TB RAID 5 volume. GPT, on the other hand, can boot into the array with no problem (8K is the default cluster size at that capacity).
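That 2.2TB ceiling falls straight out of the MBR on-disk format: partition start and length are stored as 32-bit sector counts, and with standard 512-byte sectors the largest addressable size is 2^32 × 512 bytes. A quick back-of-the-envelope check:

```python
# MBR stores partition start and length as 32-bit LBA sector counts.
SECTOR_BYTES = 512   # standard logical sector size
MAX_SECTORS = 2**32  # largest value a 32-bit sector-count field can hold

max_bytes = MAX_SECTORS * SECTOR_BYTES
print(f"MBR ceiling: {max_bytes:,} bytes")          # 2,199,023,255,552
print(f"           = {max_bytes / 10**12:.1f} TB")  # decimal terabytes
print(f"           = {max_bytes / 2**40:.1f} TiB")  # binary units
```

Windows reports capacities in binary units (labeled “TB”), which is why the installer shows the ceiling as just under 2TB rather than 2.2TB.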


Where the problem comes into play is how the installation media and the BIOS interact. If you have a UEFI BIOS, the boot device menu will give you the option of loading the install media in normal BIOS or UEFI mode (it will label which is which). Booting the installer in UEFI mode tells Windows Setup that the >2TB drive present in the system can be converted to GPT and used as a single primary OS volume, as we intended all along.

The fact is, it may not work. UEFI integration on motherboards, while not brand-spanking new, is not very reliable either. Take our motherboard, for example: it has EFI support for CD/DVD media, BUT NOT USB. This forces us to install a DVD drive, burn the OS onto a DVD, and cross our fingers in the hope that everything proceeds correctly. That’s too much hassle and too little trust for this to be viable. If your motherboard natively supports UEFI, though, you have nothing to lose. The problems with Gigabyte’s Hybrid EFI are also documented in this blog post in case you want to read more.

gigabyte efi uefi bios screen

Now, what if you don’t have UEFI? We can tell you right off that using the entire RAID array as your primary OS boot drive is out of the question. Your best bet is either to install the operating system on a lone drive separate from the array, or to break the array down to 17TB and use the remaining space as your OS partition. If you don’t do this, you end up with 17 terabytes of unusable space. The Windows installer will read the entire 19TB volume, but without a UEFI BIOS it will designate it as MBR instead of GPT. So you boot into Windows Server 2012, only to find 1.91TB of usable space, and the rest unusable:


Because the installer made a partition based on MBR, and because our entire array is now MBR, the unusable space cannot be touched and will remain as “unallocated space” in the Windows Storage Manager. Remember, the array shows up as one drive, not as eight separate drives. Therefore, we can’t do anything with the unallocated space because the entire drive is MBR; no conversion to GPT either, because we’re booted into Windows. Hooray! 17TB of useless space.

Why can’t we convert to GPT before installing Windows? You can try, but trust us when we say these WILL NOT work if you do not have UEFI/EFI. The first method is converting the RAID array to GPT and then assigning the array a 512K stripe size and an 8K cluster size so Windows reads the drive properly (check the Windows cluster tables above to see which size you need to set for maximum capacity to be utilized). Here are the commands you need to run:

Windows Server 2012 Configuration (10)

To get into the Command Prompt (CMD) during setup, press SHIFT + F10. If you want to try out a few extra commands, they are as follows; read the Microsoft Diskpart.efi GUID article for extra tips and steps:


list disk

select disk 0

clean

convert gpt

create partition primary align=512

format fs=ntfs unit=8K quick


Didn’t work? We told you so. The conversion to GPT can only be done from within the operating system, and not on the main OS drive while Windows is running. If you try to convert your OS boot drive to GPT via the method listed above, the Windows installer will see right through it, yielding the message that Windows cannot be installed to a GPT disk without the required BIOS (UEFI):


Comparing this to GPT, we can see how it splits the volumes nicely and still allows us to use our entire array:

windows gpt uefi efi

Again, if you don’t have UEFI/EFI, your best bet is what we said before: grab an SSD, make it your primary OS boot drive, and install Windows Server 2012 on it. Without UEFI it will be classified as MBR, but we don’t care; it’ll be less than 2.2TB anyway.

windows convert gpt array


  1. How much did this cost altogether?

      • The system before the HDDs, LSI card, PSU and case was about $5000. Factor in about $3000 for the sponsored equipment IF you need eight of those hard drives. A stingy builder could probably put this exact system together for under $7000, I would bet.


    • Hey Andy,

      The total price is listed in the components and conclusions section, but there is no precise amount.

      Since we didn’t prioritize heavy gaming or heavy usage, our components are a tad older. If you were to take the present-day updated equivalents (for example, a 3770K, a GTX 680, and a better motherboard), it comes out to roughly $5000 on average; of course, give or take $750 depending on where you live and what your prices are.

      If you want what we paid for ours at the time we bought them, it would come out to that listed price. If you can find the exact components we’re using present-day, you’re looking at around $3500 total!

      Again, it’s just a matter of where you live, what you plan on buying, and what your prices/availability look like.

      Sorry for the late reply; my comments kept getting marked as spam.

  2. Timely article! I’m just about to start a backup PC build using mostly
    parts on the shelf from previous builds, an LSI 9265-8i about to be replaced
    by an LSI 9271-8iCC, and new drives. Question: will a copy of Win 7 Home
    Premium work for the OS? The 2 workstations on our home LAN (wired gigabit)
    are using Win 7-64 Home Premium and Win 7-64 Pro.

    Thanks Cal

    • Hey Cal,

      Make sure to read Section IV (page 10) if you plan on using a 3TB+ drive/array as your OS boot drive.

      Aside from that, Win 7 Home Premium should work just fine!

      • Hi again.

        The beat-up old case that was a demo on the back shelf at my local NCIX store has room for 15 drives: eight 2TB enterprise HDDs on a 9265-8i as RAID 5 w/ spare, five 2TB on the MB as RAID 10 w/ spare, a SATA CD/DVD, and either a 1TB SATA or a 320GB IDE drive for the OS.

        My ASUS MB started crashing after I tried to add RAM, so I bought an MSI MB to use the X58 CPU and RAM for the backup build.

        Low BIOS RAM: In your write-up you made a quick aside about difficult access to the LSI WebBIOS with the limited BIOS size on the MB that you used. I have an LSI 9211-8i in an Intel D975BX2 with that same issue, and I expect the same problem with a 9265 in the MSI X58M as well. I could not find a reference to the solution at LSI (probably my searching abilities), but I did find a reference which pointed to an unavailable LSI KnowledgeBase Article 16602.

        Thanks for the help. Cal

      • Hey Cal,

        I hope I’m reading the right part, but it seems like you’re having trouble getting into WebBIOS.

        I feared this may come up. I was hoping that it was an issue with my Gigabyte board, but I can see now that it is not.

        The LSI diagnostic check, while helpful, takes its sweet time. However, there are a few things that make it go haywire, and mass rebooting is the best way to fix these problems.

        I’ll outline a couple, starting from WebBIOS:

        1. CTRL+H not triggering WebBIOS – this can happen for a couple of reasons. One of them is mashing CTRL+H. This is a big no-no, as it’ll just hang if you do that. Press it once, and it’ll load.

        The second reason is more subtle, and probably what you’re suffering from. WebBIOS says that it will load once the computer POSTs. Well, if it POSTs, we’ll go into Windows (or a drive boot failure if you haven’t installed an OS), not WebBIOS. The trick is to get into your boot menu (usually F12). I’m not sure what the MSI boot menu looks like, but if you’re familiar with it, you should end up with an odd entry. Mine was something along the lines of “LSI CD ROM”. It could also be something having to do with SCSI or PCI-E RAID. Regardless, the out-of-place entry will be the WebBIOS boot utility.

        Now, the server may restart when you pick this, or it may go straight into WebBIOS. If it does restart, let it load normally and hit CTRL+H at the prompt once without mashing anything else. It’ll load right into WebBIOS.

        2. Failure to load into options – an exceptionally aggravating problem with the diagnostic check is that sometimes your keyboard presses will not register when you try to get into certain options. For example, as you mash DELETE to get into your BIOS, or F12 to get into your boot menu, after the CTRL+H WebBIOS prompt the server will either hang, or continue on as if oblivious to your key presses.

        If this happens, restart and try again. The CTRL+H prompt stays on screen for about 15 seconds, so while mashing from the BIOS post screen up to the prompt, hit the respective key you’re mashing once, a couple of seconds before the CTRL+H prompt goes away. This will ensure that you will get into whatever option you’re trying to get into.

        For some reason the diagnostic has a short and long memory. Sometimes you just need to mash for a few seconds to get into an option; other times you need to do it right from post up to the final seconds before CTRL+H prompt goes away and boots. Super annoying, especially considering how long the diagnostic check goes on for.

      • Seems my strange workaround on WebBIOS entry on Z68 motherboards still applies (it was always a little goofy to have to do this, but even LSI points to this same workaround)

        with SSD Review mention over here:

        Thanks for posting your saga, Deepak!

      • No problem, and thank you as well for the contribution Paul 🙂

  3. Are you guys on weed? A home server, and you choose to bypass a key feature for the home or SOHO user, Storage Spaces. You choose to use very expensive drives and hardware RAID and try to aim this at the home user. WTF?

    Choosing to stick an OS on such a large array is also just plain over-complicated. You could have used a cheap pair of SATA drives on the on-board SATA ports in RAID 1, or AHCI with dynamic RAID 1, and kept the pool for what is wanted. Now you have an array that will be turning and burning 24/7 because there is an OS on it.
    No mention of the networking headaches that arise with Server 2012 and its extra bloat that plays havoc with older OSes or network devices.

    Sorry, but you guys missed the mark in so many ways.

    • Thanks for the response and we can see your view of things. Fortunately, we have received several responses to the contrary as well. From our viewpoint, we wanted to approach things from the most understandable level and such that it was a complete picture that could be followed by others. We hope to have accomplished this.

      To answer your question, I recall our initial discussions where we wanted to build a system that anyone could build, using conservative parts and/or those that have some bite to them, leaving ourselves open to build on the initial report in the future. Can the server be upgraded, or could it have been built in a different fashion? Absolutely, and I am sure you will agree that we could have thrown an SSD in as well… Watch for things as you have suggested in the future as we build on this first report. With the response we have seen thus far, we believe interest in this subject is much greater than originally thought.

      Thanks again.

    • Just to follow up on what Les posted, yes we left the guide open for people to choose however they wanted to approach the server. We mentioned that you could go hardware RAID, software/motherboard RAID, or something else, including Storage Spaces. Even the OS can be different. There are even more methods, such as unRAID, but it all depends on what the user wants. If a user is going for Windows 8 or WS 2012, Storage Spaces will be advertised for obvious reasons. It’s not something that needs showcasing, but we did mention it in case readers don’t know about it.

      Onboard doesn’t always work, and that was the case with our motherboard. When we chose a RAID 1 array with two of the drives, it built the array, but the Windows installer didn’t recognize it; the installer showed the two drives separately, untouched, without any pooling or redundancy. In addition, this forced us to use IDE mode, and did not allow us to get into UEFI or use GPT.

      We want to provide the technical analysis of what it is like to build an ultimate home server – the keyword there being ultimate. Therefore, we want to address the most complicated methods and be as comprehensive as we can so people can fine-tune their own procedure, and remove/add steps as they see fit.

      We also prioritized our build, which was solely about storage. With the money we saved using older parts, the rest of the budget went into high-end storage parts, because that is what matters to us the most. We made a note that if users want anything extra, such as the ability to play games or encode, then they should be willing to spend more for a mid- to high-end GPU. The approach to building can be heavily customized, and we tried our best to offer advice for users who want to do something other than what we did. Of course, we couldn’t cover every single possibility, but we hit the typical ones.

      We also made a note that it may be beneficial to use an SSD as the main boot drive, provided you are willing to spend the money. It is not a requirement, and it would indeed have led to an extra step of making a backup schedule for the SSD directed to the RAID 5 array in case something went awry, a solution we would pick over Storage Spaces.

      The way we planned it meant there wasn’t such a need, so if the user decided to use the RAID 5 array as their primary boot drive, backing up wouldn’t have to be done at all. Yes, it is more complicated, but it is worth it in the long haul.

      We also mentioned and addressed the networking problems by doing a walk-through. We have used WHS 2011, and the same problems are in WS 2012. Once configured, there should be no problems at all. Our server has two other NAS units, two laptops, and ten desktops connected to it without any issues.

      Finally, on the subject of Storage Spaces and why we chose the array over it: we don’t want to leave the decision to Microsoft. WHS V1 had Drive Extender and WHS 2011 did not, which meant that users either had to stay on an old, unsupported operating system, or upgrade and lose that feature. WS 2012 brings pooling back, yes, but there is no telling what the future holds. We don’t want readers to have to deal with this problem should it ever arise again; if anything, we would rather provide them with an easy means to upgrade, considering how quickly Microsoft has killed off all of its previous server operating systems, with WHS 2011 barely lasting a year before getting the EoL stamp.

      Hope this clears up the confusion.

    • ThatHomeServerBuilder

      1. Not his fault you’re broke… 2. Onboard RAID sucks; it’s “fake RAID”.

      3. RAID 1, really? Why not RAID 4, 5, or 6?


  4. Les, very good review… You’re reading my mind, as at the moment I am trying to make a parts list for a similar home project (a 9260-4i, though).

    Is the system available for some benchmarking yet?
    I mean, since you did a massive build with a very, very helpful and detailed article, it would be nice to measure the performance and maybe time LAN transfers through the onboard gigabit adapter.

    • Let’s ask Deepak, as this review was his baby… stay tuned…

    • Hey Felix,

      Great question. I actually ran four simultaneous copy tests from two different sources to the server, including a 600GB live backup session. Overall speed was pretty darn nice, about 35+ MB/s for each copy session, varying with file sizes of course. A 550GB folder of 2.5k files and 141 folders took about 4 hours to copy over while the other copies were going on.

      We’re waiting on CacheCade at the moment to post proper benchmark results. We got about 650MB/s read, but only about 60MB/s write in random IO testing using CDM, so we’ll see how much CacheCade boosts performance.

      The LAN side of it is doing much better, and those numbers will rise once we activate CacheCade. My LAN network is entirely on gigabit speeds, and there were absolutely no hiccups while all of this was going on. It was absolutely seamless.

  5. Been planning a project like this – though thinking of running Linux. What do you see are the pros and cons between Linux and Windows? OS price is my main consideration. Support is offered by some companies (e.g. Red Hat) though I haven’t done my research in that area – I’m on Linux Mint and the forums help enough.

    • I believe overall Linux will suit you better, but in terms of ease I prefer Windows. Support I find is better for Linux. Keep in mind WS 2012 is new, but if WHS 2011 is anything to go by, solutions are most often offered by forums as you said. Tech support, especially on the MS side, is awful to put it nicely.

      I can also add that WS 2012, while a lot more polished, is just not as well documented. If you run into a problem, chances are you’re on your own.

      If you are comfortable, definitely use Linux. Just remember what you want out of your server. WS 2012 has everything I want, so I went with it. Give Amahi a try and see if you like it. If not, go with WS 2012, or WHS 2011 if you’re worried about cost.

      WHS 2011 is EoL but support will last for about 4-5 years. Just remember it has no Storage Pooling feature.

  6. WOW… I cannot believe you are advising people not to have a backup plan because you are using RAID 5: “so we don’t have to worry about backing-up anything as RAID 5 has redundancy (managed by the LSI 9270-8i, or whatever RAID controller you’re using).” That is by far the worst advice I have heard in a LONG time. Otherwise, a good article.

    • Hey Brian,

      The reason I said that is because the array is used as the main OS drive, and unfortunately WS 2012 doesn’t allow backups onto the main drive. At least in my case, it gave me errors whenever I tried setting it up, and would always try to look for other drives to back up to.

      Aside from that though, if you do have extra hard drives, definitely set up a backup. I made it optional, as WS 2012 has some weird backup options. I had to move my backup folder to a drive separate from the array to actually get the backup process working.

  7. There are so many… inexperienced things with your server build.

    First of all, you could have bought/rented a cheap VPS off the net.
    I have recently moved my 6 websites to a host where I have rented a 4-core/20GB RAM/400GB SSD server, called Zalmoxe, with cPanel.

    Yes, the cPanel extensions are like $116 per month and the server itself is another $35 per month, but that is very far from the $7000-8000 spent on your build, which is far worse than mine.

    You don’t feel comfortable with Linux? With a simple apt-get install, or yum, or pkg, or zypper… you can have everything, and Linux is like 8 times faster than Windows Server (which I have also tried).
    Read the messages from the system and you are good to go.

    Secondly… one cheap 500W power source? Let me tell you, if you run your server for like one week, your power source will be gone, along with your video card and your motherboard.
    Not much to lose though… a 4670? Really?
    I remember when I cursed all those VPSes for having bad graphics, but my 5670 runs circles around it.

    Thirdly, I see now that you are recommending SSDs, but you are using HDDs for storing your forums :))))
    so I guess their reliability is not so great.
    Fourth: the site is really slow; you perhaps have bad sectors on your Toshiba RAID array… use a RAID 10 next time.

    Fifth: your server is gone, and you are using Amazon VPSes right now… you have nonexistent pages… visit my websites :)))
