Boston Linux & UNIX was founded in 1994 as part of The Boston Computer Society. We meet on the third Wednesday of each month, online, via Jitsi Meet.

BLU Discuss list archive



[Discuss] New to VMWare DataStores



It's seeing the 4 TB disks as 1.8 TB each?  Sounds like the
well-known 2 TB limitation of MBR partition tables with 512-byte
sectors.  Does VMWare support 4096-byte-sector disks?  Does it
support GPT partition tables?  Perhaps this will help:

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2058287
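
For reference, here's the arithmetic behind that limit (a quick
sketch in Python; the assumption is that something in the stack is
still reading an MBR label with 512-byte sectors):

    # MBR stores partition sizes as 32-bit counts of 512-byte sectors,
    # so the largest size it can describe is:
    SECTOR = 512
    mbr_max = 2**32 * SECTOR            # 2,199,023,255,552 bytes = 2 TiB
    print(mbr_max / 10**12)             # ~2.2 TB in decimal units

    # A "4 TB" drive is 4 * 10**12 bytes.  A tool that wraps at the
    # MBR limit would report only the excess above 2 TiB:
    disk = 4 * 10**12
    print((disk - mbr_max) / 10**12)    # ~1.8 TB -- matching what vCenter shows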

On Wed, Jan 14, 2015 at 12:57:40PM -0500, Scott Ehrlich wrote:
> A server reboot and a closer look via vCenter client shows the machine
> shipped with 5.5 U2.
> 
> It also shows what appear to be our four 4 TB drives - they do
> show up in the machine's BIOS.
> 
> But under the vCenter client, when viewing the disks available for
> use, it lists the 4 x 4 TB disks at 1.8 TB each.
> 
> Again, they are NOT in a RAID, but individual SAS drives, directly
> attached.  There is another storage device with ESXi on it.
> 
> The server does have a PERC, but it does not acknowledge any disks,
> indicating the four disks are truly independent.
> 
> Given the latest ESXi version, what is the next step to get the
> system to actually see each 4 TB drive at or near its raw capacity
> (i.e. 3.6 TB)?
> 
> This page - https://communities.vmware.com/thread/467221 - has been
> very helpful.
> 
> Thanks.
> 
> Scott
> 
> On Wed, Jan 14, 2015 at 11:30 AM, Edward Ned Harvey (blu)
> <blu at nedharvey.com> wrote:
> >> From: Discuss [mailto:discuss-bounces+blu=nedharvey.com at blu.org] On
> >> Behalf Of Scott Ehrlich
> >>
> >> I am new to VMWare Datastores.   At previous positions, the
> >> vCenter system was already built and enterprise storage was ready.
> >>
> >> We just installed a new Dell R520 PowerEdge server with 4 x 4 TB SAS
> >> drives, vCenter 5.5.0 preinstalled, and a 2 TB boot disk.
> >>
> >> Do we need to create a PERC RAID set with the 4 TB disks for vCenter
> >> to see that volume as a Datastore?
> >>
> >> I've been googling to see what is fundamentally needed for disks to be
> >> visible in vCenter for a Datastore to be created.
> >
> > Oh dear.  You're not going to like this answer.
> >
> > First of all, if you're installing on a Dell, you should ensure you've checked Dell's site to see if you need the Dell customized installation image.
> >             Go to http://support.dell.com
> >             Enter your service tag
> >             Go to Drivers & Downloads
> >             For OS, select VMWare ESXi 5.1 (or whatever is latest)
> >             If you see "Enterprise Solutions" with "ESXi Recovery Image" under it, you need to use that custom ISO.
> >         Otherwise, go to vmware.com and download the standard VMWare ESXi ISO
> >
> > Before you begin, you probably want to configure your PERC as one
> > big RAID set.  VMware has no support for storage changes, soft
> > RAID, changes to hardware RAID, snapshots, or backups; it can
> > handle raw disks, iSCSI, NFS, and not much else.  Backups in
> > particular require a paid add-on (not sure which one, or how
> > much).  With a Dell server specifically, you can ask Dell how to
> > install OMSA, which gives you an interface for managing the PERC
> > configuration without rebooting the system -- though in practice
> > its usefulness is limited to replacing failed disks without a
> > reboot.  Just make sure you don't upgrade VMware after you've
> > installed OMSA (or else you have to reinstall OMSA).
> >
> > By far, far, far, the best thing to do is to run VMware on a
> > system that is either diskless or has a minimal amount of disk
> > you don't care about, just for VMware.  My preference is to make
> > the VMware host diskless and let the storage system handle all
> > the storage, including the VMware boot disk.  (Generally
> > speaking, it's easy to boot from iSCSI nowadays, so there's no
> > need to muck around with PXE or anything annoying.)  Let the
> > guest machine storage reside on something like a ZFS box or
> > another storage device that's built to handle storage well,
> > including snapshots and backups.  That solves *all* the problems.
> > Simple dumb 10G Ethernet works well and inexpensively as long as
> > its performance is sufficient (it's roughly equivalent to 6 to 9
> > disks).  Anything higher performance will need InfiniBand, Fibre
> > Channel, or similar.
> >
> > By far, my favorite setup is a ZFS server for storage, and a diskless ESX server to do work.
> _______________________________________________
> Discuss mailing list
> Discuss at blu.org
> http://lists.blu.org/mailman/listinfo/discuss
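
On Scott's capacity question: the ~3.6 TB he expects is simply the
same 4 TB drive measured in binary units, and VMFS-5 (the default in
ESXi 5.5) puts a GPT label on the disk, which isn't subject to the
2 TB MBR ceiling.  A quick unit check, again in Python (plain
arithmetic, not measured values):

    # Marketing "4 TB" is decimal; most OS tools report binary TiB:
    disk = 4 * 10**12          # bytes
    print(disk / 2**40)        # ~3.64 -- the "3.6 TB" Scott expects

So once each disk carries a GPT label and a VMFS-5 filesystem, it
should show up at roughly that size.  (partedUtil on the ESXi shell
can tell you which label a disk currently carries, if you want to
verify before reformatting.)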
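
On Ned's 10G Ethernet point, the "6 or 9 disks" equivalence is easy
to sanity-check (the ~150 MB/s per-drive streaming rate below is my
assumption, not a measurement):

    # 10 GbE line rate vs. typical spinning-disk sequential throughput
    link_Bps = 10e9 / 8         # ~1.25 GB/s on the wire
    disk_Bps = 150e6            # ~150 MB/s per drive (assumed)
    print(link_Bps / disk_Bps)  # ~8.3 drives saturate the link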



BLU is a member of BostonUserGroups
We also thank MIT for the use of their facilities.



Boston Linux & Unix / webmaster@blu.org