
I faced the same problem during my first install of ESXi. As mentioned earlier, the VMware hypervisor does not recognize the RAID arrays built by the software RAID controller (also called fake RAID).

I'm building a virtual environment for a small business. It is based around a single ESXi 5.1 host, which will host half a dozen or so VMs.

I'm having some doubts regarding how to implement the storage though. I naturally want the datastore to be fault tolerant, but I can't get the funds for a separate storage machine, nor expensive hardware RAID solutions, so I would like to use some software RAID (lvm/mdadm, most likely).

How can this be implemented? My only idea so far would be to create a VM which has the storage adapter as passthrough, puts some software RAID on top of the disks and then presents the resulting volumes 'back' to the ESXi host, which then creates a datastore from which the other VMs get their storage. This does seem kind of round-about; do I have any better options? From my research, passthrough seems to come with quite a few drawbacks, such as no suspend/resume, etc.

Your general idea is spot-on. I would personally suggest using ZFS with Solaris or FreeBSD, but mdadm might also work.

With mdadm, though, you may not get all of the advantages I describe in this post, so take that as a disclaimer. This post will be quite long; I apologize in advance for the wall of text.

'From my research, passthrough seems to come with quite a few drawbacks, such as no suspend/resume etc.'

There are some, notably:

- Passthrough only works with VT-d (or AMD IOMMU) support on the CPU (Intel) or CPU+board (AMD): not a problem in practice, as it is hard to get a server without it today; every current Intel CPU except the Atoms has it (even the basic systems from HP, Dell and others).
- VMware snapshots of VMs with passed-through devices cannot be created: in theory a problem, but it only affects your storage VM, which has a minimal configuration and is used for nothing else. You can take your snapshots at the storage level instead, where they are faster, cheaper and do not slow down the system. Additionally, you can snapshot all other VMs on the host without problems (and even combine that with filesystem snapshots for powerful restore options and long-term archival of states).
- Your internal complexity increases in some areas: at first sight this is true; you add an additional layer and you have more to manage, like your new internal SAN (network/VLAN setup in VMware) or the storage VM itself (updates etc.).

But on the other hand you also gain simplicity and flexibility:

- Consistent backups of running virtual machines can be created automatically with a few simple scripts, completely free of charge (a sketch follows this list). They can also be stored on another machine, another disk or in the cloud (encrypted), all without any additional expensive software.

- In case your server dies, just buy another off-the-shelf replacement, install ESXi, enable passthrough, configure your network, add your disks and boot your storage VM. After it is up, rescan your storage and it is as if there had just been a power failure: all your data is safe and you know it (with hardware RAID, you merely hope it is).

- Special requirements can be met with minimal change as the need arises. The business has a legacy application that requires local disks for backups? Just configure iSCSI and present your storage transparently.

- They experience growth and need more storage? Just grow the pool with more disks and present them either directly over iSCSI or via VMware (NFS, or iSCSI with VMDKs on top). They want to use a database on a beefy separate server? Just open up your NFS export on another LAN/VLAN and supply it to the new server like a 'real' SAN.
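
To make the first point above (consistent, script-driven backups) concrete, here is a minimal sketch, assuming a ZFS-based storage VM as suggested earlier; the dataset name tank/vmstore, the VM id 42 and the host backuphost are placeholders, not part of the original setup.

    # On the ESXi host: take a quiesced VMware snapshot of VM 42 so its filesystem is consistent
    vim-cmd vmsvc/snapshot.create 42 backup "pre-backup" 0 1
    # On the storage VM: snapshot the ZFS dataset backing the NFS datastore
    zfs snapshot tank/vmstore@$(date +%Y%m%d)
    # Back on the ESXi host: drop the VMware snapshot; the ZFS snapshot preserves the consistent state
    vim-cmd vmsvc/snapshot.removeall 42
    # Optionally replicate the snapshot to another machine over SSH (encrypted in transit)
    zfs send tank/vmstore@$(date +%Y%m%d) | ssh backuphost zfs receive -F backup/vmstore

Incremental sends (zfs send -i) and a loop over all VM ids would turn this into a complete backup script, but the basic flow is just these few commands.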

- GPU passthrough works only for expensive Nvidia cards and for all AMD cards: this is currently true, but your storage VM does not need a dedicated GPU in any case.

There are also general annoyances unrelated to passthrough:

- To reboot the storage VM, all other VMs that depend on it have to be shut down first: this obvious problem is seldom talked about, but in my eyes it is the most annoying one. Of course, updates to ESXi itself also require a full reboot, but since you now have two systems, the reboot windows may not line up perfectly. I recommend a stable operating system and scheduling non-critical updates on both systems together. Additionally, you should limit the storage VM to its own internal virtual LAN, which further reduces the need to apply fixes as soon as they are released. Note that this also applies to accidental reboots of the storage VM from the GUI.
- Errors in the underlying stack can render your whole machine inoperable: this risk is increased compared to ESXi alone, because you now have two systems and two network stacks between them. On the bright side, both your storage VM and ESXi should normally be stable and errors should be few.

Nonetheless, I advise scheduling updates some days or weeks after release, so you can see whether others have run into problems. Not changing the configuration, on the other hand, means it stays very stable, which is a plus for an SME (less support needed).

- The solution is not known to third-party support personnel: it is quite a rare setup, so the chance that a random replacement admin can figure it out from the start without your documentation is slim; this may be a problem or an advantage, depending on your business goals. It can be mitigated with some basic documentation explaining the structure of the setup (use pictures/diagrams!), a comparison to a traditional RAID setup, and what to do in common problem cases (backup, restore, disk replacement, updates, network changes, hardware expansion).

Technical issues aside, you have to think about your goals and the ways to achieve them.

This determines the practicability of your chosen solution, along with its upsides and downsides, and your overall outcome (success or failure). It largely depends on the needs of the business itself. Some considerations for or against your proposed solution from a business perspective:

- Budget: there are businesses that can justify paying several thousand dollars a year for a support contract that is almost never used, because the one time it is used, it is worth far more to them. There are also businesses that can only pay for immediate value and that can cope with unexpected downtime quite cheaply and flexibly, so that money would be wasted.
- Safety requirements: there are businesses where a single destroyed or damaged file from 10 years ago is not an option, and others where even a complete loss of all current data is only an inconvenience. The needs for backups, snapshots, etc. change depending on this.
- Support structure: there are businesses that want to buy a machine, set it up and then run it (virtually) forever without any support at all, and others that want and need continuous change, upgrades and direct support from people they trust.
- Flexibility requirements: there are businesses that change so rapidly that you could not anticipate their needs in advance, which may or may not be an argument for a more flexible setup. On the other hand, there are those that never change and value stability and predictability over anything else.

You should keep those points in mind in any case. Your solution can only be successful if it meets the goals; it does not matter what the majority would choose, only what is a) technically possible to implement, b) within the budget, and c) meeting the goals of the business. If all those points are met and you still have to decide, choose the easier, less complicated solution (KISS). If they are equally easy, decide for the one that brings you more money and/or happiness. Yes, I agree, 4 years ago ESXi was the best solution in most cases; now KVM has caught up considerably, even surpassing ESXi in some areas (for example, there are no GPU passthrough limitations).

In my experience, even without 'official' compatibility from the list, most server-grade hardware works reliably, for example HP and Dell entry-level servers. Of course, if possible buy nice hardware, like LSI HBAs instead of the default Intel onboard SATA, but this largely depends on the budget. – Jun 21 '16 at 16:07

I would fire you if I were a small business and you deployed something like this. This is a common theme, though. VMware has a well-defined hardware compatibility list (HCL).

However, when used as a standalone server, you NEED hardware RAID. Non-RAIDed disks will work as well, but that's not what you want.

So my questions:

- Not enough funds for storage? What type of server hardware is this? You can afford disks but no RAID controller? A compatible RAID controller is not expensive.
- Isn't this a case of managing customer expectations? Separate storage would clearly be more expensive than hardware RAID. While setups like the one you describe are possible, they are best suited to specific technical requirements, not cost-cutting. It's a case of VMware abuse.

Software RAID is not supported. I would go back to the customer and revise the build/requirements.

'How much is your data worth?' Small businesses do not care what you sell them, as long as it's within the budget and gets the job done.

So your focus should be on 'how can I achieve the needed goals with the given means?' and not 'what would people at vendor X say, even though I will never talk to them?' Vendors can only offer limited support and will charge you handsomely for it. Their way may be the correct one in some cases and not in others; what counts is the goal you are trying to achieve.

Additionally, the question already outlined VT-d, therefore it is not about software RAID on ESXi, but about an (internal) SAN. – Jun 21 '16 at 14:21

'I naturally want the datastore to be fault tolerant, but I can't get the funds for a separate storage machine, nor expensive hardware RAID solutions, so I would like to use some software RAID (lvm/mdadm, most likely). How can this be implemented?' ESXi will not work without a REAL hardware-based RAID for the datastore. Not even BIOS-based software RAID will work. I run a hardware-backed 2x1TB SSD datastore for VMs; I got my RAID controller, an Adaptec 6405E, for $100 on eBay!

BUT in regard to the next part, 'My only idea so far would be to create a VM which has the storage adapter as passthrough, puts some software RAID on top of the disks and then presents the resulting volumes 'back' to the ESXi host which then creates a datastore from which other VMs get their storage presented': my 'FileServer' consists of 4x5TB HDDs passed directly to a VM.

I then built an mdadm RAID 5 for a total of about 14TB and exported it over NFS to all my VMs, about 15-20 at any given time, with 10-20 dev VMs that are off unless being used. This has worked well for me, but it is not serving a large group of users. In fact, I am really the only local user, though I do host websites which generate some traffic; they are mostly static.
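
For reference, a rough sketch of that kind of mdadm RAID 5 plus NFS export; the device names, mount point and network range below are examples, not the poster's actual values.

    # Build a 4-disk RAID 5 array (4x5TB -> roughly 14-15TB usable)
    mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
    # Put a filesystem on it and mount it
    mkfs.ext4 /dev/md0
    mkdir -p /srv/fileserver && mount /dev/md0 /srv/fileserver
    # Export it over NFS to the other VMs (and optionally to the ESXi host as a datastore)
    echo '/srv/fileserver 192.168.10.0/24(rw,sync,no_subtree_check)' >> /etc/exports
    exportfs -ra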

A good question to ask in this scenario, if you are thinking of using this idea, is: what is the FileServer for? In my case, 90% of my VMs, if not all, host all the necessary data inside the VM (Linux) and are less than 20GB in size. I use the FileServer as a central repo for backups; media applications like Plex read from the FileServer, and my P2P client saves directly to it, but none of my hosts have a database or anything else that resides on the FileServer. They do, however, send all their backups to it. My FileServer is my only VM that hosts two services: NFS for the VMs and SMB for Windows access via GUI. This has worked wonderfully for me. I have also mounted the FileServer via NFS as a datastore, so I can mount ISOs to new VMs straight from the datastore.

I also back up OVA snapshots over SMB in Windows directly to the FileServer. Running VMs on an exported NFS software RAID would be nuts, but exporting a large NFS datastore back to the ESXi host has many benefits.

I am a big fan of software RAID (Linux) because it is flexible, cost-effective, easy to manage and completely predictable. In real-life scenarios it always beats mid-range hardware RAID controllers on speed. The ONLY problem is getting reliable RDM or disk-controller passthrough to the VM that runs the NAS.

Most inexpensive LSI controllers in IT mode do the trick. I get amazing speed and stability with a software RAID 10 on an OpenMediaVault-based virtual NAS (VMXNET3 adapter, paravirtual disk controller) that exports a datastore for the other VMs on the same host via NFS (10Gbit internal link). It's just a matter of budget: if your budget is unlimited, go with top-range RAID adapters from the whitelist; if you want to save some money and you are familiar with ESXi and NAS internals, go with software RAID. I think your idea could work quite well.
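
If you go this route, re-attaching the NAS VM's NFS export to ESXi as a datastore is a one-liner; the IP address, export path and datastore name here are placeholders for whatever your virtual NAS actually uses.

    # On the ESXi host: mount the virtual NAS's NFS export as a datastore
    esxcli storage nfs add --host=192.168.100.10 --share=/export/datastore --volume-name=nas-datastore
    # Confirm it is mounted
    esxcli storage nfs list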

See, for example, [link]. It's a pity that ESXi does not support mdadm. So your idea does sound round-about, but I assume that if you configure everything properly, there will be only a very minor performance overhead.

See the following article about performance: [link]. If you have a small setup, mdadm will probably use only 5-25% of a single CPU core. From my own experience, mdadm caused very low CPU usage on a CentOS server with RAID 5 that was connected to ESXi as an NFS share over a 1 Gbit network. However, we had problems with VMs that used their disks heavily (mainly not because of the software RAID, but because of NFS).
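
If you suspect that NFS rather than mdadm is the bottleneck in a setup like this, a quick-and-dirty check might look like the following; the file path and sizes are arbitrary examples.

    # Inside a VM whose disk lives on the NFS-backed datastore: sequential write test,
    # bypassing the guest page cache so the result reflects the storage path
    dd if=/dev/zero of=/tmp/ddtest bs=1M count=2048 oflag=direct
    # Meanwhile, on the storage server: see how busy the RAID and NFS threads really are
    top -b -n 1 | grep -E 'md[0-9]+_raid|nfsd'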

Please see my other answer for details: [link]. Another advantage of such a setup is that mdadm is well documented, and it is easy to reconnect the array to another server in case of a server failure. However, you should consider that the usual ESXi setup does not expect the datastore to be on the same server as ESXi itself. In your case, if the hard drive with ESXi and/or the 'VM which has the storage adapter as passthrough' fails (and it is not in a RAID, of course), your datastore will not be accessible anymore.

If you have a separate datastore, you will need fewer steps to restore your setup in the case of a failure. So I think you should try again to find funds for a separate datastore PC. It could be a used PC with a 1-2 GHz CPU and a SATA controller, where you could set up a Linux OS with mdadm. Do not forget to set up monitoring (e.g. e-mail notifications) for the status of your mdadm RAID array.
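
A minimal sketch of such monitoring, assuming mdadm's own monitor daemon and a working local mail setup; the e-mail address is a placeholder.

    # /etc/mdadm.conf (Debian/Ubuntu: /etc/mdadm/mdadm.conf) - where alerts should go
    MAILADDR admin@example.com

    # Run the monitor daemon so degraded-array/failed-disk events are mailed out
    mdadm --monitor --scan --daemonise
    # Send a test alert to verify that notifications actually arrive
    mdadm --monitor --scan --test --oneshot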

I loved ESXi when I first used it; I still like it but no longer love it, and I am testing Proxmox as an alternative. What I have learned is that VMware of course wants to make money, and they likely have agreements with hardware vendors to only support high-end, server-class hardware, which boosts those vendors' sales in return for kickbacks. I know another plausible explanation is managing their support overhead, but I think it is more about 'encouraging' people over to server-class hardware. I have used software RAID in various situations and I consider it just as reliable as hardware RAID, if not more so. I think the only solid advantage hardware RAID has is the battery backup allowing writes to complete on power failure. However, my home systems are hooked to UPS devices, and my business systems in datacentres have UPS supplied by the datacentre.

I consider something like software ZFS RAID much safer than something like an HP Smart Array, thanks to bitrot protection as well as direct access to the disks' SMART status. That said, it doesn't mean I never use hardware RAID: if the customer directly requests it, or they have a handsome budget which will pay for it, then I deploy it. But I don't have the attitude whereby, if I have a poor customer, I start telling them they need to rethink because anything but hardware RAID is suicidal; that's just moronic.