This post focuses on design considerations for using Auto Deploy Server to provision stateless ESXi hosts. Here are the three topics that I’ll address:
- Placement of New Services (Auto Deploy Server, DHCP, TFTP, Syslog, ESXi Core Dump Collector)
- DHCP Reservations
- Dedicated NICs for ESXi Management
Placement of New Services
Auto Deploy has a few prerequisites:
- PXE boot environment (DHCP and TFTP)
- Auto Deploy Server
- Image Builder (included with PowerCLI)
Some other nice-to-haves are the Syslog Collector and the ESXi Core Dump Collector – both are installable services that can reside on the same Windows host.
Note: Auto Deploy Server shares the same system requirements as vCenter Server. Here are VMware’s suggested configurations:
- Medium Deployments – up to 50 hosts and 500 powered on VMs: 2 cores and 4GB memory
- Large Deployments – up to 300 hosts and 3,000 powered on VMs: 4 cores and 8GB memory
- Extra-large Deployments – up to 1,000 hosts and 10,000 powered on VMs: 8 cores and 16GB memory
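Since the tiers above are simple thresholds, picking a size can be reduced to a lookup. A quick sketch (the function name is mine; the numbers come straight from the list above):

```python
def suggested_config(hosts, powered_on_vms):
    """Return (cores, memory_gb) per VMware's suggested sizing tiers."""
    # Thresholds mirror the Medium/Large/Extra-large tiers listed above.
    if hosts <= 50 and powered_on_vms <= 500:
        return 2, 4      # Medium
    if hosts <= 300 and powered_on_vms <= 3000:
        return 4, 8      # Large
    if hosts <= 1000 and powered_on_vms <= 10000:
        return 8, 16     # Extra-large
    raise ValueError("beyond the Extra-large tier; consult VMware's docs")

print(suggested_config(40, 400))   # a 40-host lab lands in the Medium tier: (2, 4)
```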
Here’s my current lab configuration:
- Server 1 – vCenter Server and vCenter Orchestrator
- Server 2 – Auto Deploy Server, Microsoft DHCP Server, and TFTPD32
- Server 3 – VUM, Syslog Collector, ESXi Core Dump Collector
All of these servers reside on the management VLAN.
During the installation, you’ll need to specify the size of the image profile depot. The default is 2GB, but I changed mine to 5GB. When planning the Auto Deploy installation, consider the number of server vendors in your environment and the number and type of 3rd-party host extension providers.
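One rough way to plan the depot size is image-profile count times offline-bundle size, with headroom for revisions. A sketch (the ~350MB per offline bundle and the 2x headroom are my own rough assumptions, not VMware figures):

```python
def depot_size_gb(num_image_profiles, avg_bundle_mb=350, headroom=2.0):
    """Rough depot sizing: profiles x bundle size x headroom, in GB."""
    # avg_bundle_mb approximates an ESXi offline bundle plus vendor
    # add-ons; headroom leaves room for patched revisions of each image.
    return num_image_profiles * avg_bundle_mb * headroom / 1024

# Two vendors x two image revisions each = 4 profiles
print(round(depot_size_gb(4), 1))   # about 2.7 GB, comfortably under 5GB
```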
Note: VMware includes Auto Deploy Server, a DHCP server, and a TFTP server with the vCenter Server Appliance. If you already have a functioning PXE boot environment, then you can use that provided the ESXi hosts can access it.
DHCP Reservations
Most management environments use static IP addressing. However, that won’t work with Auto Deploy because of the PXE boot requirement, so the next best thing is to create DHCP reservations. This ensures address consistency without affecting my management tools. You can use a simple script to add the reservations. Below is the command-line syntax and an example:
*Update: I forgot to mention that you need to right-click cmd.exe and select Run as administrator for this to work. Also, specifying the server name/IP is not required if you are running this on the DHCP server itself.*
```
netsh.exe dhcp server scope <scope> add reservedip <IP_address> <MAC_address> <reservation_name>
netsh.exe dhcp server scope 192.168.1.0 add reservedip 192.168.1.100 0025b503e01a esx1-mgmt
```
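For more than a handful of hosts, generating the reservation commands from a host table beats typing them one by one. A quick sketch (the scope, MACs, and hostnames below are made-up lab values):

```python
# Generate netsh DHCP reservation commands from a simple host table.
SCOPE = "192.168.1.0"

hosts = [
    # (reserved IP, MAC address without separators, reservation name)
    ("192.168.1.100", "0025b503e01a", "esx1-mgmt"),
    ("192.168.1.101", "0025b503e01b", "esx2-mgmt"),
]

def reservation_commands(scope, table):
    """Build one 'netsh ... add reservedip' command per host."""
    return [
        f"netsh.exe dhcp server scope {scope} add reservedip {ip} {mac} {name}"
        for ip, mac, name in table
    ]

for cmd in reservation_commands(SCOPE, hosts):
    print(cmd)
```

Redirect the output into a .cmd file and run it from an elevated prompt, per the note above.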
Dedicated NICs for ESXi Management
All of the NICs in my lab hosts are connected to trunk ports, and I have several VLANs already defined for things like Management, IP storage, vMotion, etc. This also means that I have a default VLAN (which is what I use for PXE booting). Since I do not want my management traffic on the default VLAN, I need a workaround for PXE booting on the Management VLAN.
I decided to dedicate a pair of NICs to ESXi management traffic. This keeps my management traffic isolated from everything else AND gives me the ability to PXE boot on the Management VLAN. I’m sure there are other methods, but this one is simpler and gets the job done.
If anyone has other suggestions or feedback on what I’ve done, please feel free to comment. I’d love to hear how you addressed your design challenges.