Complete Homelab Environment - Current State

Last Updated: 2025-12-28 | Gathered via: SSH inventory + Proxmox API | Purpose: ESXi to Proxmox migration planning


Executive Summary

Current State:

  • 2x ESXi 7.0.2 hosts (NUC9i9QNX) - Production workloads
  • 1x Proxmox VE 8.4.1 server (Staging) - Already running Home Assistant + Frigate
  • Total VMs: 12 on ESXi + 10 on Proxmox = 22 VMs
  • Active Services: Plex, Docker, Palo Alto FW, Home Assistant, Frigate, Pi-hole

Migration Context:

  • βœ… Blue Iris β†’ Frigate migration COMPLETE (now running on Proxmox with Coral TPU)
  • βœ… Proxmox platform validated via staging server
  • 🎯 Goal: Migrate ESXi workloads to NUCs running Proxmox
  • 🎯 End State: 2x NUC Proxmox nodes + Staging server as HA witness

1) Physical Infrastructure

Host 1: ghost-esxi-01 (10.1.1.120) - ESXi

Hardware:

  • Model: Intel NUC9i9QNX (NUC 9 Extreme)
  • CPU: Intel Core i9-9980HK (8 cores / 16 threads, HT enabled)
  • RAM: 64GB (63.7 GB usable)
  • iGPU: Intel UHD Graphics 630 - Passthrough to β€œxeon” VM
    • PCI ID: 0000:00:02.0
    • Vendor/Device: 0x8086/0x3e9b
    • Driver: pciPassthru
  • Storage: 1TB WD Blue SN550 NVMe
    • Datastore: m2-primary-datastore (VMFS-6)
    • Total: 931 GB | Free: 425 GB | Used: 506 GB (54%)
  • OS: VMware ESXi 7.0.2 (build-17867351)

Physical Network Adapters:

  • vmnic0: Intel I219-LM 1GbE

    • MAC: a4:ae:12:77:0d:65
    • Status: Up, 1000 Full Duplex
    • MTU: 9000
    • vSwitch: vSwitch0 (uplink)
  • vmnic1: Intel 82599 10 Gigabit Dual Port (Port 0)

    • MAC: 90:e2:ba:74:4d:84
    • Status: Up, 10000 Full Duplex
    • MTU: 1500
    • vSwitch: vSwitch1 (uplink - vMotion)
  • vmnic2: Intel 82599 10 Gigabit Dual Port (Port 1)

    • MAC: 90:e2:ba:74:4d:85
    • Status: Up, 10000 Full Duplex
    • MTU: 9000
    • vSwitch: vSwitch0 (uplink)
    • β†’ MikroTik Switch SFP1 (10GbE DAC cable)
  • vmnic3: Intel I210 1GbE

    • MAC: a4:ae:12:77:0d:66
    • Status: Down (not connected)
    • MTU: 1500

vSwitch Configuration:

vSwitch0 (MTU 9000)

  • Uplinks: vmnic2, vmnic0 (LAG/active-active)
  • Used Ports: 14/2560
  • Port Groups:
    • Management Network (VLAN 0) - 1 active client (ESXi management)
    • Internal (VLAN 4095) - 6 active clients (xeon, jarnetfw ×2, iridium, pihole, docker)
    • Public (VLAN 300) - 1 active client (jarnetfw external interface)
    • Lab (VLAN 50) - 1 active client (jarnetfw)
    • VM Network (VLAN 0) - 0 clients

vSwitch1 (MTU 1500)

  • Uplinks: vmnic1
  • Used Ports: 4/2560
  • Port Groups:
    • vMotion (VLAN 0) - 1 active client

vSwitch2 (MTU 1500)

  • Uplinks: None (internal only)
  • Used Ports: 1/2560
  • Port Groups:
    • Lab-Int (VLAN 0) - 0 clients

VMs (7 total, 5 powered on):

| VM Name     | vCPU | RAM (GB) | Disk (GB) | Status | Guest OS      | Purpose                                  |
|-------------|------|----------|-----------|--------|---------------|------------------------------------------|
| xeon        | 4    | 8        | ~60       | ON     | Ubuntu 64-bit | Plex server w/ iGPU (Quick Sync)         |
| docker      | 2    | 4        | ~60       | ON     | Ubuntu 64-bit | Docker services (media stack)            |
| jarnetfw    | 4    | 7        | ~60       | ON     | CentOS 64-bit | Palo Alto NGFW (PA-VM-10.1.3) - CRITICAL |
| pihole      | 1    | 1        | ~20       | ON     | Ubuntu 64-bit | DNS / Ad blocking                        |
| iridium     | 2    | 1        | ~20       | ON     | Ubuntu 64-bit | Unknown - needs clarification            |
| Win-11      | 4    | 8        | ~60       | Off    | Windows 11    | Lab/testing                              |
| Win7-Victim | 1    | 2        | ~20       | Off    | Windows 7     | Metasploitable/security lab              |

VM Network Mappings:

| VM Name     | Port Groups                | VLANs         | IP Address | Notes                             |
|-------------|----------------------------|---------------|------------|-----------------------------------|
| xeon        | Internal                   | 4095          | 10.1.1.125 | 1 NIC                             |
| docker      | Internal                   | 4095          | 10.1.1.32  | 1 NIC                             |
| jarnetfw    | Lab, Public, Internal (x2) | 50, 300, 4095 | 10.1.1.103 | 4 NICs - Multi-interface firewall |
| pihole      | Internal                   | 4095          | 10.1.1.35  | 1 NIC                             |
| iridium     | Internal                   | 4095          | 10.1.1.126 | 1 NIC                             |
| Win-11      | (Powered off)              | -             | -          | -                                 |
| Win7-Victim | (Powered off)              | -             | -          | -                                 |

Host 2: ghost-esx-02.skinner.network (10.1.1.121) - ESXi

Hardware:

  • Model: Intel NUC9i9QNX (NUC 9 Extreme)
  • CPU: Intel Core i9-9980HK (8 cores / 16 threads, HT enabled)
  • RAM: 64GB (63.7 GB usable)
  • iGPU: Intel UHD Graphics 630 - Passthrough to β€œhome-security” VM
    • PCI ID: 0000:00:02.0
    • Vendor/Device: 0x8086/0x3e9b
    • Driver: pciPassthru
    • Note: VM is powered off (Blue Iris replaced by Frigate)
  • Storage: 1TB WD Blue SN550 NVMe
    • Datastore: m2-datastore (VMFS-6)
    • Total: 2.79 TB | Free: 1.43 TB | Used: 1.36 TB (49%)
  • OS: VMware ESXi 7.0.2 (build-17867351)

Physical Network Adapters:

  • vmnic0: Intel I219-LM 1GbE

    • MAC: a4:ae:12:77:0e:13
    • Status: Up, 1000 Full Duplex
    • MTU: 1500
    • vSwitch: vSwitch0 (uplink)
  • vmnic1: Intel X520 10 Gigabit (Port 0)

    • MAC: a0:36:9f:07:e3:74
    • Status: Up, 10000 Full Duplex
    • MTU: 1500
    • vSwitch: vSwitch1 (uplink - vMotion)
  • vmnic2: Intel X520 10 Gigabit (Port 1)

    • MAC: a0:36:9f:07:e3:76
    • Status: Up, 10000 Full Duplex
    • MTU: 1500
    • vSwitch: vSwitch0 (uplink)
    • β†’ MikroTik Switch SFP2 (10GbE DAC cable)
  • vmnic3: Intel I210 1GbE

    • MAC: a4:ae:12:77:0e:14
    • Status: Down (not connected)
    • MTU: 1500
    • vSwitch: vSwitch3 (uplink, but unused)

vSwitch Configuration:

vSwitch0 (MTU 1500)

  • Uplinks: vmnic2, vmnic0 (LAG/active-active)
  • Used Ports: 6/2560
  • Port Groups:
    • Management Network (VLAN 0) - 1 active client (ESXi management)
    • Lab (VLAN 50) - 0 clients
    • Public (VLAN 300) - 0 clients
    • VM Network (VLAN 4095) - 0 clients

vSwitch1 (MTU 1500)

  • Uplinks: vmnic1
  • Used Ports: 4/2560
  • Port Groups:
    • vMotion (VLAN 0) - 1 active client

vSwitch2 (MTU 1500)

  • Uplinks: None (internal only)
  • Used Ports: 1/2560
  • Port Groups:
    • Lab-Int (VLAN 0) - 0 clients

vSwitch3 (MTU 1500)

  • Uplinks: vmnic3 (link down)
  • Used Ports: 3/2560
  • Port Groups:
    • Tap (VLAN 0) - 0 clients

VMs (5 total, ALL powered off):

| VM Name       | vCPU | RAM (GB) | Disk (GB) | Status | Guest OS            | Purpose                                    |
|---------------|------|----------|-----------|--------|---------------------|--------------------------------------------|
| home-security | 8    | 8        | ~100      | Off    | Ubuntu 64-bit       | OLD Blue Iris (iGPU) - REPLACED by Frigate |
| server-2019   | 4    | 8        | ~100      | Off    | Windows Server 2019 | OLD Windows host for Blue Iris?            |
| xsoar         | 4    | 8        | ~100      | Off    | Ubuntu 64-bit       | SOAR/security platform (lab)               |
| win11-sse     | 4    | 4        | ~60       | Off    | Windows 11          | Lab/testing                                |
| win-10        | 2    | 4        | ~60       | Off    | Windows 10          | Lab/testing                                |

Host 3: pve-staging (10.1.1.123) - Proxmox VE (PRODUCTION)

Hardware:

  • Model: Unknown (likely desktop/workstation)
  • CPU: Intel Core i5-8400T (6 cores, no HT, @ 1.70GHz)
  • RAM: 32GB (31 GB usable)
  • USB Controller: Intel Cannon Lake PCH USB 3.1 xHCI
    • PCI ID: 0000:00:14
    • Driver: vfio-pci (passed to home-sec VM)
    • Coral TPU USB device attached here
  • Storage:
    • Root: 94GB (9.3GB used, 11%)
    • local-lvm: 833GB (165GB used, 20%)
  • Network:
    • eno1: 1GbE β†’ vmbr0 (10.1.1.123/24)
    • enp1s0f0/f1: Additional NICs (not configured)
    • wlp3s0: WiFi (not used)
  • OS: Proxmox VE 8.4.1 (kernel 6.8.12-11-pve)

Network Configuration:

  • vmbr0 (Linux bridge): eno1
    • IP: 10.1.1.123/24
    • Gateway: 10.1.1.1
    • Purpose: VM network bridge

VMs (10 total, 2 running):

| VMID | Name            | vCPU | RAM (GB) | Disk (GB) | Status  | Purpose                                                             |
|------|-----------------|------|----------|-----------|---------|---------------------------------------------------------------------|
| 103  | home-sec        | 4    | 8        | 100       | RUNNING | Home Assistant + Frigate (USB controller passthrough for Coral TPU) |
| 200  | docker-host-1   | 4    | 8        | 60        | RUNNING | Docker services                                                     |
| 201  | promethium      | 4    | 8        | 60        | Stopped | Docker host (backup/testing?)                                       |
| 100  | ubuntu-test     | 2    | 2        | 32        | Stopped | Testing                                                             |
| 102  | test-cloud-init | 2    | 2        | 33.5      | Stopped | Cloud-init testing                                                  |
| 300  | jumpbox         | 1    | 1        | 20        | Stopped | K8s lab - jumpbox                                                   |
| 301  | server          | 1    | 2        | 20        | Stopped | K8s lab - control plane                                             |
| 302  | node-0          | 1    | 2        | 20        | Stopped | K8s lab - worker                                                    |
| 303  | node-1          | 1    | 2        | 20        | Stopped | K8s lab - worker                                                    |
| 9001 | ubuntu-24-cloud | 2    | 2        | 3.5       | Stopped | Cloud-init template                                                 |

Key Configuration:

  • VM 103 (home-sec):
    • Machine type: q35
    • Static IP: 10.1.1.208/24
    • PCI Passthrough: hostpci0: 0000:00:14,pcie=1 (USB controller for Coral TPU)
    • Boot on start: Yes
    • QEMU agent: Enabled

2) Network Architecture

Core Switch: MikroTik CRS328-24P-4S+ (10.1.1.102)

Hardware:

  • Model: CRS328-24P-4S+ (24-port PoE + 4x SFP+ 10GbE)
  • MAC Address: 2c:c8:1b:d0:a1:34
  • Serial: F6090ED7A959
  • Management IP: 10.1.1.102 (static)
  • Uptime: 15 days 18:43:40

10GbE SFP+ Ports (Critical):

  • SFP1 β†’ ghost-esxi-01 (10.1.1.120) - 10G link (FS 3m copper DAC)

    • Carries VLAN 1 β€œLab” for VMs
    • Carries VLAN 300 β€œPublic” for Palo Alto firewall (MAC: 00:50:56:a7:0f:60)
    • Multiple ESXi VMs visible (VMware MAC prefixes: 00:0c:29, 00:50:56)
  • SFP2 β†’ ghost-esx-02 (10.1.1.121) - 10G link (FS 3m copper DAC)

    • Intel NIC MAC: a4:ae:12:77:0e:13
    • VLAN 1 β€œLab”
  • SFP3: No link (available)

  • SFP4 β†’ Synology NAS (10.1.1.150) - 10G link (FS 2m copper DAC) βœ…

    • Synology expansion card MAC: 0c:c4:7a:1f:04:99
    • VLAN 1 β€œLab”
    • Provides NFS storage for all media (27TB)

VLANs Configured:

  • VLAN 1 β€œLab”: Management and internal services

    • Members: SFP1, SFP2, SFP4, Ports 2, 7, 8, 11, 15, 16, 18-24
    • Contains: ESXi hosts, Synology NAS, Proxmox staging, VMs
  • VLAN 300 β€œPublic”: Internet-facing

    • Members: Port24 (Internet NTU), SFP1 (Palo Alto FW external interface)
    • Purpose: Internet uplink and firewall external interface
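
On the Proxmox side, these port-group/VLAN memberships translate to a VLAN-aware Linux bridge, with the tag set per virtual NIC rather than per port group. A sketch for a future NUC node; the interface name and reuse of the host IP are assumptions:

```
# /etc/network/interfaces sketch for a future Proxmox NUC node (names assumed)
auto vmbr0
iface vmbr0 inet static
    address 10.1.1.120/24
    gateway 10.1.1.1
    bridge-ports enp2s0        # 10GbE uplink toward the MikroTik SFP+ port (name assumed)
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 50,300         # tagged VLANs that VM NICs may request
```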

Gigabit Ethernet Ports:

  • Port23: Proxmox staging server (Intel NIC: a4:ae:12:77:0d:65)
  • Port19: UniFi Access Point (multiple wireless client MACs)
  • Port15: Old NAS LAG (deprecated - now using SFP4 10GbE)
  • Port24: Internet NTU/ONT (VLAN 300, MAC: 00:a2:00:b2:00:c2)
  • Port11: Unknown device (MACs: bc:24:11:95:dc:45, 98:fa:9b:a0:72:e9)
  • Port21: Unknown device (MAC: d0:11:e5:ef:9f:55)
  • Port18: Unknown device (MAC: d8:5e:d3:5e:9f:53)
  • Port22: Unknown device (MAC: 00:18:dd:24:11:7b)
  • Ports 7, 8, 16, 20: 100M devices (various MACs)

Switch Features:

  • IGMP Snooping: Disabled
  • MikroTik Discovery Protocol: Enabled
  • Independent VLAN Lookup: Enabled
  • Watchdog: Enabled

Complete Network Topology:

                                    Internet
                                       β”‚
                              [Internet NTU/ONT]
                                       β”‚
                                  Port24 (VLAN 300)
                                       β”‚
                        β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
                        β”‚    MikroTik CRS328-24P-4S+  β”‚
                        β”‚         (10.1.1.102)        β”‚
                        β””β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                           β”‚        β”‚        β”‚
                    β”Œβ”€β”€β”€β”€β”€β”€β”˜        β”‚        └──────┐
                    β”‚               β”‚               β”‚
                 SFP1 (10G)      SFP2 (10G)      SFP4 (10G)
                    β”‚               β”‚               β”‚
         β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”      β”‚      β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”
         β”‚  ghost-esxi-01    β”‚      β”‚      β”‚  Synology NAS   β”‚
         β”‚  (10.1.1.120)     β”‚      β”‚      β”‚  (10.1.1.150)   β”‚
         β”‚                   β”‚      β”‚      β”‚  27TB RAID5     β”‚
         β”‚ vmnic2 (10G MTU9000)β”‚     β”‚      β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
         β”‚ MAC: 90:e2:ba:74:4d:85β”‚  β”‚
         β”‚                   β”‚      β”‚
         β”‚ β”Œβ”€ vSwitch0 ─────┐│      β”‚      β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
         β”‚ β”‚ MTU 9000       β”‚β”‚      └─────▢│ ghost-esx-02    β”‚
         β”‚ β”‚                β”‚β”‚             β”‚ (10.1.1.121)    β”‚
         β”‚ β”‚ Port Groups:   β”‚β”‚             β”‚                 β”‚
         β”‚ β”‚ β€’ Internal (VLAN 4095)β”‚       β”‚ vmnic2 (10G)    β”‚
         β”‚ β”‚ β€’ Public (VLAN 300)β”‚          β”‚ MAC: a0:36:9f:07:e3:76β”‚
         β”‚ β”‚ β€’ Lab (VLAN 50)β”‚β”‚             β”‚                 β”‚
         β”‚ β”‚ β€’ Mgmt (VLAN 0)β”‚β”‚             β”‚ β”Œβ”€ vSwitch0 ───┐│
         β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”˜β”‚             β”‚ β”‚ MTU 1500     β”‚β”‚
         β”‚          β”‚         β”‚             β”‚ β”‚ Port Groups: β”‚β”‚
         β”‚    β”Œβ”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”  β”‚             β”‚ β”‚ β€’ Lab (50)   β”‚β”‚
         β”‚    β”‚ VMs:       β”‚  β”‚             β”‚ β”‚ β€’ Public (300)β”‚β”‚
         β”‚    β”‚            β”‚  β”‚             β”‚ β”‚ β€’ Mgmt (0)   β”‚β”‚
         β”‚    β”‚ β€’ jarnetfw │◀─┼─────┐       β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜β”‚
         β”‚    β”‚   (FW 4 NICs)β”‚ β”‚     β”‚       β”‚                 β”‚
         β”‚    β”‚ β€’ xeon (Plex)β”‚ β”‚     β”‚       β”‚ All VMs: OFF    β”‚
         β”‚    β”‚ β€’ docker    β”‚  β”‚     β”‚       β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
         β”‚    β”‚ β€’ pihole    β”‚  β”‚     β”‚
         β”‚    β”‚ β€’ iridium   β”‚  β”‚     β”‚
         β”‚    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜  β”‚     β”‚
         β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜     β”‚
                                     β”‚
                            Palo Alto FW
                            (jarnetfw VM)
                            β”œβ”€ eth0: Internal (10.1.1.1/24) - DHCP Server
                            β”œβ”€ eth1: Public (VLAN 300) - External
                            β”œβ”€ eth2: Lab (VLAN 50)
                            └─ eth3: Internal (additional)
                                     β”‚
                              [Default Gateway]
                                  10.1.1.1
                                     β”‚
                        β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
                        β”‚                         β”‚
                   All VMs/Clients           DNS: Pi-hole
                   (10.1.1.0/24)            (10.1.1.35)

Key Network Insights:

  1. 10GbE Backbone:

    • ghost-esxi-01 vmnic2 β†’ MikroTik SFP1 (MTU 9000 for jumbo frames)
    • ghost-esx-02 vmnic2 β†’ MikroTik SFP2 (MTU 1500)
    • Synology NAS β†’ MikroTik SFP4 (MTU 9000)
  2. VLAN Strategy:

    • VLAN 4095 (Internal): All production VMs on ghost-esxi-01
    • VLAN 300 (Public): Internet uplink + Palo Alto external interface
    • VLAN 50 (Lab): Palo Alto lab interface
    • VLAN 0 (Untagged): Management traffic
  3. Palo Alto Firewall (Critical):

    • Runs as VM with 4 virtual NICs on ghost-esxi-01
    • Provides default gateway (10.1.1.1) for entire 10.1.1.0/24 network
    • DHCP server for all clients
    • Routes between VLANs and to Internet (VLAN 300)
  4. MTU Configuration:

    • ghost-esxi-01 vSwitch0: MTU 9000 (jumbo frames for media/NFS)
    • ghost-esx-02 vSwitch0: MTU 1500 (standard)
    • Synology NAS: MTU 9000 on eth5
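
Because the two hosts differ (vSwitch0 at MTU 9000 vs 1500), jumbo frames should be verified end-to-end before and after migration. A minimal check, assuming a Linux host on the jumbo path:

```shell
# Largest ICMP payload that fits in a 9000-byte MTU:
# 9000 - 20 (IPv4 header) - 8 (ICMP header) = 8972
payload=$((9000 - 20 - 8))
echo "$payload"
# With fragmentation forbidden (-M do), this only succeeds if every hop honors MTU 9000:
#   ping -M do -s "$payload" -c 3 10.1.1.150
```

If the ping fails with "Message too long", some hop (switch port, bridge, or NIC) is still at MTU 1500.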

Routing/Firewall:

  • Virtual Palo Alto NGFW (jarnetfw on ghost-esxi-01) handles L3/inter-VLAN routing
  • Gateway: 10.1.1.1 (Palo Alto ethernet1/1 - internal interface)
  • DNS: 10.1.1.35 (Pi-hole VM)

3) Storage Summary

| Host          | Type    | Datastore            | Filesystem | Total   | Used    | Free    | % Used |
|---------------|---------|----------------------|------------|---------|---------|---------|--------|
| ghost-esxi-01 | ESXi    | m2-primary-datastore | VMFS-6     | 931 GB  | 506 GB  | 425 GB  | 54%    |
| ghost-esx-02  | ESXi    | m2-datastore         | VMFS-6     | 2.79 TB | 1.36 TB | 1.43 TB | 49%    |
| pve-staging   | Proxmox | local-lvm            | LVM-Thin   | 833 GB  | 165 GB  | 668 GB  | 20%    |
| pve-staging   | Proxmox | local (root)         | ext4       | 94 GB   | 9.3 GB  | 80 GB   | 11%    |

Notes:

  • NFS shares from Synology (10.1.1.150) - see below
  • All host storage is local (no shared storage between hosts)
  • Planned upgrade: 1TB β†’ 2TB WD Blue SN580 NVMe on both NUCs

External Storage: Synology NAS β€œtitanium” (10.1.1.150)

Hardware:

  • Model: Synology DS1618+ (6-bay NAS)
  • CPU: Intel Atom C3538 (4 cores)
  • Network: 6x Ethernet ports
    • Built-in: 4x 1GbE (eth0-eth3)
    • Expansion: 2x 1GbE (eth4-eth5)
    • Active: eth5 with MTU 9000 (jumbo frames enabled)

Storage Configuration:

  • RAID Type: RAID5 (1 disk fault tolerance)
  • Array: 6 disks (sda-sdf)
  • Capacity: 27TB usable (29.3TB raw minus RAID5 parity)
  • Usage: 21TB used, 6.1TB free (77% full)
  • Volume: /dev/vg1000/lv mounted on /volume1

RAID Details:

md2: RAID5 across sda5, sdb5, sdc5, sdd5, sde5, sdf5 [6/6] [UUUUUU]
md1: RAID1 (system) across 6 disks [6/6] [UUUUUU]
md0: RAID1 (boot) across 6 disks [6/6] [UUUUUU]

NFS Exports:

  • /volume1/datastore β†’ Exported to 10.1.1.0/24 (entire VLAN)
    • CRITICAL: Contains all media for Plex/Radarr/Sonarr
    • Mounted on platinum VM (10.1.1.125) as /mnt/media
    • Mounted on docker VM (10.1.1.32) as /mnt/media
  • /volume1/Backup β†’ Exported to 10.1.1.120, 11.11.11.20
    • Purpose: ESXi VM backups

Media Library Structure (/volume1/datastore/media):

  • Movies: 88,718 files
  • TV: 10,314 files
  • 4K Movies: 352 files
  • Music: 44,404 files
  • Books: 1,884 files
  • Downloads: Active download directory for SABnzbd
  • Family Videos
  • Incomplete: SABnzbd incomplete downloads

Other Services:

  • Docker installed (@docker directory exists, not currently accessible)
  • Synology Active Backup for Business (@ActiveBackup)
  • Hyper Backup (@Repository)
  • Snapshot Replication (@S2S)

Network Configuration:

  • Primary IP: 10.1.1.150
  • Interface: eth5 (MTU 9000 - jumbo frames)
  • Protocol: NFSv4.1
  • Read/Write size: 128KB

Backup Configuration:

  • Has Active Backup, Hyper Backup, and Snapshot Replication installed
  • ESXi VMs backed up to /volume1/Backup
  • Internal snapshots for data protection

Migration Impact:

  • ⚠️ CRITICAL DEPENDENCY - All media stored here
  • No data migration needed (media stays on NAS)
  • NFS mounts must be reconfigured on migrated VMs
  • Verify Proxmox nodes can access NFS (tested working on pve-staging)
  • 77% full - monitor space during migration
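
When VMs are migrated, the media mount can be re-created with the parameters inventoried above (NFSv4.1, 128 KB read/write size). A sketch of the fstab entry, assuming the same /mnt/media mount point:

```
# /etc/fstab (sketch) - media share from titanium (10.1.1.150)
10.1.1.150:/volume1/datastore  /mnt/media  nfs  vers=4.1,rw,hard,rsize=131072,wsize=131072  0  0
```

The `hard` option is deliberate: media services should block and retry rather than see I/O errors if the NAS briefly drops during the migration window.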

4) PCI Passthrough / Hardware Acceleration

| Host          | VM            | Device                  | PCI ID  | Purpose                                | Status            |
|---------------|---------------|-------------------------|---------|----------------------------------------|-------------------|
| ghost-esxi-01 | xeon          | Intel UHD 630 iGPU      | 00:02.0 | Plex Quick Sync transcoding            | Active            |
| ghost-esx-02  | home-security | Intel UHD 630 iGPU      | 00:02.0 | OLD Blue Iris hardware decode          | Inactive (VM off) |
| pve-staging   | home-sec      | USB 3.1 xHCI Controller | 00:14.0 | Coral TPU for Frigate object detection | Active            |

5) Critical Services Mapping

Production Services (Currently Running)

| Service        | Host          | VM            | vCPU | RAM | Priority | Notes                            |
|----------------|---------------|---------------|------|-----|----------|----------------------------------|
| Palo Alto NGFW | ghost-esxi-01 | jarnetfw      | 4    | 7GB | CRITICAL | Network will be down if offline  |
| Plex           | ghost-esxi-01 | xeon          | 4    | 8GB | HIGH     | iGPU passthrough for transcoding |
| Docker Stack   | ghost-esxi-01 | docker        | 2    | 4GB | HIGH     | Radarr, Sonarr, SABnzbd, etc.    |
| Pi-hole        | ghost-esxi-01 | pihole        | 1    | 1GB | MEDIUM   | DNS/ad-blocking                  |
| Home Assistant | pve-staging   | home-sec      | 4    | 8GB | HIGH     | Home automation + Frigate NVR    |
| Frigate        | pve-staging   | home-sec      | 4    | 8GB | HIGH     | CCTV/NVR with Coral TPU          |
| Docker         | pve-staging   | docker-host-1 | 4    | 8GB | MEDIUM   | Secondary docker host            |

Retired/Replaced Services

| Service         | Old VM                       | Status      | Replacement                    |
|-----------------|------------------------------|-------------|--------------------------------|
| Blue Iris       | home-security (ghost-esx-02) | Powered off | Frigate (pve-staging/home-sec) |
| Blue Iris host? | server-2019 (ghost-esx-02)   | Powered off | N/A                            |

6) Migration Strategy - Updated with Context

End-State Architecture

Production Nodes (NUCs):

  1. Proxmox Node 1 (ghost-esxi-01 β†’ proxmox-01): Primary workload host
  2. Proxmox Node 2 (ghost-esx-02 β†’ proxmox-02): Primary workload host

Cluster Configuration:

  • 3-node Proxmox cluster with HA capability
  • Nodes 1 & 2: Production workhorses
  • Node 3 (pve-staging): Quorum/witness node (QDevice)

Workload Distribution (Proposed):

  • Node 1: Plex (iGPU), Docker stack, Pi-hole, Palo Alto FW
  • Node 2: Home Assistant + Frigate (will move from staging), spare capacity
  • Staging: Docker, K8s lab, templates, witness role
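
The cluster bootstrap for this end state would look roughly like the following, assuming pve-staging joins as a full third node (the cluster name "homelab" and the node order are assumptions; if it ends up as a pure QDevice instead, corosync-qnetd replaces the third `pvecm add`):

```shell
# On the first reinstalled NUC (becomes proxmox-01):
pvecm create homelab
# On each additional node (proxmox-02, then pve-staging), pointing at an existing member:
pvecm add 10.1.1.120
# Verify quorum and membership from any node:
pvecm status
```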

Key Insights from Current State

  1. βœ… Blue Iris migration already complete - Frigate working well on Proxmox
  2. βœ… Proxmox validated - Platform proven suitable for your workloads
  3. ⚠️ iGPU passthrough critical - Must work on NUCs for Plex
  4. ⚠️ Coral TPU passthrough - USB controller passthrough proven working
  5. ⚠️ Network dependency - Palo Alto FW must stay online during migration
  6. ⚠️ Host 2 VMs all offline - Easier to wipe and install Proxmox first
  7. βœ… Storage headroom - Both ESXi hosts ~50% used, room for migration data

7) Questions & Unknowns

Questions for User (Already Answered in Context)

  1. βœ… Which VM is Plex? β†’ xeon on ghost-esxi-01
  2. βœ… Which VM is Blue Iris? β†’ Replaced by Frigate on pve-staging
  3. βœ… Proxmox platform suitable? β†’ Yes, validated on staging server
  4. βœ… What is pve-staging’s role? β†’ Will become HA witness node

Remaining Questions

  1. VM β€œiridium” purpose? (2GB RAM, Ubuntu, on ghost-esxi-01 - running but unknown purpose)
  2. NVMe upgrade timing? Before, during, or after migration?
  3. Proxmox storage backend? ZFS or LVM-Thin for NUC nodes?
  4. NFS backup mount (10.1.1.150)? Still in use? Purpose?
  5. server-2019 VM? Was this the old Blue Iris host? Can we delete it?
  6. xsoar VM? Keep or decommission?
  7. Acceptable downtime windows? For Plex, Palo Alto, etc.?

8) Planned Hardware Upgrades

Both NUC Hosts:

  • Upgrade: 1TB WD Blue SN550 NVMe β†’ 2TB WD Blue SN580 NVMe
  • Impact: Doubles storage capacity
  • Timing: TBD - Need user input

Timing Options:

  • Before migration: Fresh Proxmox install on 2TB drives (recommended)
  • During migration: Replace Host 2 drive during Proxmox install
  • After migration: More complex, requires VM migration again

Next Steps

  1. βœ… Complete inventory (DONE)
  2. βœ… Identify current state (DONE)
  3. Answer remaining questions (NVMe timing, storage backend, etc.)
  4. Create detailed migration runbooks for each phase
  5. Test iGPU passthrough on Proxmox (can test on staging first)
  6. Plan migration sequence with specific downtime windows
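
Step 5 can start with a few host-side checks on any box already running Proxmox; these are standard Linux/Proxmox commands, and output will vary by host:

```shell
# Confirm IOMMU is active and see how devices are grouped before attempting passthrough
dmesg | grep -i -e DMAR -e IOMMU
ls /sys/kernel/iommu_groups/
# Identify the UHD 630 and which driver is currently bound to it
lspci -nnk -s 00:02.0
```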

Appendix: Access Information

ESXi Hosts:

  • SSH: ssh -i ~/.ssh/esxi_migration_rsa root@10.1.1.120
  • SSH: ssh -i ~/.ssh/esxi_migration_rsa root@10.1.1.121

Proxmox:

  • SSH: ssh -i ~/.ssh/esxi_migration_rsa root@10.1.1.123
  • Web UI: https://10.1.1.123:8006
  • API Token: PVEAPIToken=terraform@pam!terraform=4c5b41e3-1b7c-4936-b002-c2477991915a
  • Query script: ./proxmox-query.sh {nodes|vms|running}
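
For reference, the query script's behavior can be reproduced with plain curl against the Proxmox API. This is a guess at the script's shape using the token above, not its actual contents (requires jq):

```shell
#!/bin/sh
# Hypothetical reconstruction of proxmox-query.sh
TOKEN='PVEAPIToken=terraform@pam!terraform=4c5b41e3-1b7c-4936-b002-c2477991915a'
BASE='https://10.1.1.123:8006/api2/json'
case "$1" in
  nodes)   curl -ks -H "Authorization: $TOKEN" "$BASE/nodes" | jq '.data' ;;
  vms)     curl -ks -H "Authorization: $TOKEN" "$BASE/cluster/resources?type=vm" | jq '.data' ;;
  running) curl -ks -H "Authorization: $TOKEN" "$BASE/cluster/resources?type=vm" \
             | jq '.data[] | select(.status == "running") | .name' ;;
  *)       echo "usage: $0 {nodes|vms|running}" >&2; exit 1 ;;
esac
```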