Mate,
Damn, my browser destroyed my long post before I could save/post it, argh :(
But in short now:
Officially VMDP (VMDirectPath, i.e. VM passthrough using Intel VT-d or AMD IOMMU; the related solution suggested by the PCI-SIG group is SR-IOV) was designed for passing through only NIC cards. Of course there is a huge possibility to pass almost anything, like USB, PCIe devices themselves, graphics cards and RAID/HBA controllers, and in theory they can be SR-IOV capable if their vendor makes them like that, but there are no such cards on the market (to the best of my knowledge; maybe something from LSI, but I'm not sure it really works).
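Just to illustrate what passthrough looks like on the VM side: when you add a PCI device to a VM for VMDP, you should end up with entries like the ones below in the .vmx (a minimal sketch only - the device/vendor/system IDs here are placeholders, in practice the vSphere Client fills in the real values for your card when you add the device):
pciPassthru0.present = "TRUE"
pciPassthru0.deviceId = "0x6898"
pciPassthru0.vendorId = "0x1002"
pciPassthru0.systemId = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
pciPassthru0.id = "04:00.0"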
There is a long topic about VMDP of ATI graphics cards here (look especially from page 22 onwards, but if you would like to understand the VMDP problem, read the whole long discussion):
In my opinion VMDP will be used to pass through most hardware in the near future, but it is not easy and documented for all hardware; some cards work easily and some are really hard to get working. Most successes come from editing/adding parameters in the .vmx file and/or advanced parameters on the host (harder to do and may cause crashes on ESXi). For example, the solution for getting VMDP graphics cards working is to edit the .vmx file and add something like this:
pciHole.start = "1200"
pciHole.end = "2200"
and then you can use more than 1 vCPU and more than 2 GB of RAM (I used 16 GB and 4 vCPUs with an ATI Radeon 5870 and some onboard Matrox G200, as I remember)
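Just for clarity, a minimal sketch of how the relevant part of such a .vmx could look for a config like mine (the numbers are examples only, your pciHole range and pciPassthru entries will differ for your hardware):
memsize = "16384"
numvcpus = "4"
pciHole.start = "1200"
pciHole.end = "2200"
pciPassthru0.present = "TRUE"
Also remember that a VM with VMDP devices needs a full memory reservation (reservation = all of its RAM), otherwise it will refuse to power on.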
So that technology still needs to be tuned for graphics cards and RAID/HBA controllers, which I think will happen in the next few years, but it is very good to post the problems and solutions we find here, to share important and little-known knowledge and save others time and money on tests.
So Thank You mate
And about the performance of a RAID/HBA controller on HW (bare metal), in a VM via VMDP, and in a VSA setup - I analysed it and it looks like this:
- on bare metal/HW performance is 100%
- on a VM with VMDP the performance varies but should be about 90% - I plan to use my Areca 1882-24 RAID6 controller like that in the near future, so I will write my opinions/problems/solutions here or create a new topic with a link here.
- on a VM (as a virtual storage appliance) that uses a HW RAID controller connected to ESXi (v4/v5), with the storage given to that VSA through VMFS and then served back to the next or local ESXi, performance is poor, in the range of 20-30%
- on a VM (as a VSA) that uses a HW RAID controller connected to ESXi (v4/v5), with the storage given to that VSA as LUNs (connected directly from the RAID controller to the VSA VM) and then served back to the next or local ESXi, performance is still rather poor, in the range of 30-50%, so it only makes sense to use SSDs like that; normal disks, especially SATA 7.2k disks, work terribly in this model :( There are some VSAs that try to do such operations fast, but in truth (in my opinion, after long years of tests) it's not working fast and it's only good for really low-performance lab/test environments (I absolutely don't use it in my lab, even though I have bought licenses for some such products).
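If you want to check these percentages on your own hardware, a quick and dirty way (assuming a Linux guest and a scratch file on the datastore/disk you want to test - nothing scientific) is something like:
dd if=/dev/zero of=/mnt/test/ddfile bs=1M count=4096 oflag=direct   (sequential write)
dd if=/mnt/test/ddfile of=/dev/null bs=1M iflag=direct   (sequential read)
Run the same test on bare metal, in a VMDP VM and on a VSA-backed datastore and compare; for proper IOPS numbers better use something like Iometer or fio, but even dd shows the big differences I described above.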
Additionally, about storage performance for VMware ESXi 4/5, you have to note that:
- RAID level is really important; fastest are RAID 1, RAID 0 and RAID 10, and a single disk (yep!! SSDs work best as single disks or in some simple RAID 1)
- to have good (optimal) performance, especially for RAID, you have to have a BBU (Battery Backup Unit) on the RAID controller; otherwise, in RAID 5 for example, you will get less than 10 MB/s writes and it also reduces reads. There are some methods to cheat VMware into enabling full write cache without a BBU, but it's really dangerous, and after 2-3 crashes/reboots/halts of the ESXi host your VMFS will be gone :( it's normal, unfortunately
- in RAID, especially RAID 5 and RAID 6, you have to consider the stripe size very carefully, it's VERY important. For example, a VM OS generates on average about 6 kB chunks, so the best stripe size will be something close to that, so 8 kB as I used. You really notice a huge difference when you start more than 5-8 VMs on your lab server that use the HDD, for example for a simple LU (you start 8 VMs that are not up to date and then they load updates and kill your storage wonderfully), and of course when your VMs are starting up; after 20 VMs on one storage you will really notice the difference even if your VMs don't do a lot on the storage. Note also that a single VMFS should officially be occupied by max 8 VMs !!!
- in RAID 5 and RAID 6 you also have to consider the number of disks as carefully as possible; it's important because a bad number of disks can reduce your storage performance by 2x (-50%), because when you do something on the storage like admin tasks or just copying something that does large chunks (64-256 kB), your storage has to do 2x more IOPS than it should (see the quick arithmetic example after this list). So the idea when designing RAID 5 storage is to use 4+1 disks, then 8+1 disks, then 16+1 disks (in theory you may use 12+1 disks, but I will not recommend it because it's still rather a problem, same as 6+1); for RAID 6 the idea is 4+2, 8+2, 16+2 (and in theory 12+2, which I will not recommend, same as 6+2 in my opinion). For performance, price and safety reasons I recommend RAID 5 as 4+1 + hot spare or 8+1 + 2x hot spare (ideally RAID 5EE), and RAID 6 as 8+2 or 16+2 + hot spare - but be warned that RAID controllers sometimes support only up to 16 HDDs/SSDs in a RAID 5 or RAID 6 set. These last recommendations are mainly for test/lab/home/SOHO/SMB environments; for enterprise there are some different rules, also because of the type of storage, disks and array software/vendor used.
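To show the stripe/disk-count arithmetic from the last two points on a simple (assumed) example: with RAID 5 as 4+1 and an 8 kB stripe per disk, one full stripe of data is 4 x 8 kB = 32 kB, so a 64 kB sequential write covers exactly 2 full stripes and the controller can write them with parity calculated on the fly, no extra reads. With 6+1 the full stripe is 6 x 8 kB = 48 kB, so the same 64 kB write is one full stripe plus a partial one, and the partial stripe forces a read-modify-write of data and parity - that is where the extra IOPS (up to about 2x) come from. The exact numbers depend on the controller and workload, so treat this only as an illustration of why 4+1 / 8+1 / 16+1 (and 4+2 / 8+2 / 16+2 for RAID 6) line up so nicely.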
Regards
NTShad0w