Complete Homelab Environment - Current State
Last Updated: 2025-12-28
Gathered via: SSH inventory + Proxmox API
Purpose: ESXi to Proxmox migration planning
Executive Summary
Current State:
- 2x ESXi 7.0.2 hosts (NUC9i9QNX) - Production workloads
- 1x Proxmox VE 8.4.1 server (Staging) - Already running Home Assistant + Frigate
- Total VMs: 12 on ESXi + 10 on Proxmox = 22 VMs
- Active Services: Plex, Docker, Palo Alto FW, Home Assistant, Frigate, Pi-hole
Migration Context:
- ✅ Blue Iris → Frigate migration COMPLETE (now running on Proxmox with Coral TPU)
- ✅ Proxmox platform validated via staging server
- 🎯 Goal: Migrate ESXi workloads to NUCs running Proxmox
- 🎯 End State: 2x NUC Proxmox nodes + Staging server as HA witness
1) Physical Infrastructure
Host 1: ghost-esxi-01 (10.1.1.120) - ESXi
Hardware:
- Model: Intel NUC9i9QNX (NUC 9 Extreme)
- CPU: Intel Core i9-9980HK (8 cores / 16 threads, HT enabled)
- RAM: 64GB (63.7 GB usable)
- iGPU: Intel UHD Graphics 630 - Passthrough to “xeon” VM
- PCI ID: 0000:00:02.0
- Vendor/Device: 0x8086/0x3e9b
- Driver: pciPassthru
- Storage: 1TB WD Blue SN550 NVMe
- Datastore: m2-primary-datastore (VMFS-6) - Total: 931 GB | Free: 425 GB | Used: 506 GB (54%)
- OS: VMware ESXi 7.0.2 (build-17867351)
Physical Network Adapters:
- vmnic0: Intel I219-LM 1GbE
- MAC: a4:ae:12:77:0d:65
- Status: Up, 1000 Full Duplex
- MTU: 9000
- vSwitch: vSwitch0 (uplink)
- vmnic1: Intel 82599 10 Gigabit Dual Port (Port 0)
- MAC: 90:e2:ba:74:4d:84
- Status: Up, 10000 Full Duplex
- MTU: 1500
- vSwitch: vSwitch1 (uplink - vMotion)
- vmnic2: Intel 82599 10 Gigabit Dual Port (Port 1)
- MAC: 90:e2:ba:74:4d:85
- Status: Up, 10000 Full Duplex
- MTU: 9000
- vSwitch: vSwitch0 (uplink)
- → MikroTik Switch SFP1 (10GbE DAC cable)
- vmnic3: Intel I210 1GbE
- MAC: a4:ae:12:77:0d:66
- Status: Down (not connected)
- MTU: 1500
vSwitch Configuration:
vSwitch0 (MTU 9000)
- Uplinks: vmnic2, vmnic0 (LAG/active-active)
- Used Ports: 14/2560
- Port Groups:
- Management Network (VLAN 0) - 1 active client (ESXi management)
- Internal (VLAN 4095) - 6 active clients (xeon, jarnetfw x2, iridium, pihole, docker)
- Public (VLAN 300) - 1 active client (jarnetfw external interface)
- Lab (VLAN 50) - 1 active client (jarnetfw)
- VM Network (VLAN 0) - 0 clients
vSwitch1 (MTU 1500)
- Uplinks: vmnic1
- Used Ports: 4/2560
- Port Groups:
- vMotion (VLAN 0) - 1 active client
vSwitch2 (MTU 1500)
- Uplinks: None (internal only)
- Used Ports: 1/2560
- Port Groups:
- Lab-Int (VLAN 0) - 0 clients
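This layout can be re-dumped from the host at any point before cutover; the standard esxcli calls below should reproduce the vSwitch and port-group data above (run over the SSH access listed in the Appendix):

```bash
# Enumerate vSwitches, uplinks, port counts, and MTU on ghost-esxi-01
ssh -i ~/.ssh/esxi_migration_rsa root@10.1.1.120 \
    "esxcli network vswitch standard list"

# Port group -> VLAN mapping
ssh -i ~/.ssh/esxi_migration_rsa root@10.1.1.120 \
    "esxcli network vswitch standard portgroup list"
```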
VMs (7 total, 5 powered on):
| VM Name | vCPU | RAM (GB) | Disk (GB) | Status | Guest OS | Purpose |
|---|---|---|---|---|---|---|
| xeon | 4 | 8 | ~60 | ON | Ubuntu 64-bit | Plex server w/ iGPU (Quick Sync) |
| docker | 2 | 4 | ~60 | ON | Ubuntu 64-bit | Docker services (media stack) |
| jarnetfw | 4 | 7 | ~60 | ON | CentOS 64-bit | Palo Alto NGFW (PA-VM-10.1.3) - CRITICAL |
| pihole | 1 | 1 | ~20 | ON | Ubuntu 64-bit | DNS / Ad blocking |
| iridium | 2 | 1 | ~20 | ON | Ubuntu 64-bit | Unknown - needs clarification |
| Win-11 | 4 | 8 | ~60 | Off | Windows 11 | Lab/testing |
| Win7-Victim | 1 | 2 | ~20 | Off | Windows 7 | Metasploitable/security lab |
VM Network Mappings:
| VM Name | Port Groups | VLANs | IP Address | Notes |
|---|---|---|---|---|
| xeon | Internal | 4095 | 10.1.1.125 | 1 NIC |
| docker | Internal | 4095 | 10.1.1.32 | 1 NIC |
| jarnetfw | Lab, Public, Internal (x2) | 50, 300, 4095 | 10.1.1.103 | 4 NICs - Multi-interface firewall |
| pihole | Internal | 4095 | 10.1.1.35 | 1 NIC |
| iridium | Internal | 4095 | 10.1.1.126 | 1 NIC |
| Win-11 | (Powered off) | - | - | - |
| Win7-Victim | (Powered off) | - | - | - |
Host 2: ghost-esx-02.skinner.network (10.1.1.121) - ESXi
Hardware:
- Model: Intel NUC9i9QNX (NUC 9 Extreme)
- CPU: Intel Core i9-9980HK (8 cores / 16 threads, HT enabled)
- RAM: 64GB (63.7 GB usable)
- iGPU: Intel UHD Graphics 630 - Passthrough to “home-security” VM
- PCI ID: 0000:00:02.0
- Vendor/Device: 0x8086/0x3e9b
- Driver: pciPassthru
- Note: VM is powered off (Blue Iris replaced by Frigate)
- Storage: 1TB WD Blue SN550 NVMe
- Datastore: m2-datastore (VMFS-6) - Total: 2.79 TB | Free: 1.43 TB | Used: 1.36 TB (49%)
- OS: VMware ESXi 7.0.2 (build-17867351)
Physical Network Adapters:
- vmnic0: Intel I219-LM 1GbE
- MAC: a4:ae:12:77:0e:13
- Status: Up, 1000 Full Duplex
- MTU: 1500
- vSwitch: vSwitch0 (uplink)
- vmnic1: Intel X520 10 Gigabit (Port 0)
- MAC: a0:36:9f:07:e3:74
- Status: Up, 10000 Full Duplex
- MTU: 1500
- vSwitch: vSwitch1 (uplink - vMotion)
- vmnic2: Intel X520 10 Gigabit (Port 1)
- MAC: a0:36:9f:07:e3:76
- Status: Up, 10000 Full Duplex
- MTU: 1500
- vSwitch: vSwitch0 (uplink)
- → MikroTik Switch SFP2 (10GbE DAC cable)
- vmnic3: Intel I210 1GbE
- MAC: a4:ae:12:77:0e:14
- Status: Down (not connected)
- MTU: 1500
- vSwitch: vSwitch3 (uplink, but unused)
vSwitch Configuration:
vSwitch0 (MTU 1500)
- Uplinks: vmnic2, vmnic0 (LAG/active-active)
- Used Ports: 6/2560
- Port Groups:
- Management Network (VLAN 0) - 1 active client (ESXi management)
- Lab (VLAN 50) - 0 clients
- Public (VLAN 300) - 0 clients
- VM Network (VLAN 4095) - 0 clients
vSwitch1 (MTU 1500)
- Uplinks: vmnic1
- Used Ports: 4/2560
- Port Groups:
- vMotion (VLAN 0) - 1 active client
vSwitch2 (MTU 1500)
- Uplinks: None (internal only)
- Used Ports: 1/2560
- Port Groups:
- Lab-Int (VLAN 0) - 0 clients
vSwitch3 (MTU 1500)
- Uplinks: vmnic3 (link down)
- Used Ports: 3/2560
- Port Groups:
- Tap (VLAN 0) - 0 clients
VMs (5 total, ALL powered off):
| VM Name | vCPU | RAM (GB) | Disk (GB) | Status | Guest OS | Purpose |
|---|---|---|---|---|---|---|
| home-security | 8 | 8 | ~100 | Off | Ubuntu 64-bit | OLD Blue Iris (iGPU) - REPLACED by Frigate |
| server-2019 | 4 | 8 | ~100 | Off | Windows Server 2019 | OLD Windows host for Blue Iris? |
| xsoar | 4 | 8 | ~100 | Off | Ubuntu 64-bit | SOAR/security platform (lab) |
| win11-sse | 4 | 4 | ~60 | Off | Windows 11 | Lab/testing |
| win-10 | 2 | 4 | ~60 | Off | Windows 10 | Lab/testing |
Host 3: pve-staging (10.1.1.123) - Proxmox VE (PRODUCTION)
Hardware:
- Model: Unknown (likely desktop/workstation)
- CPU: Intel Core i5-8400T (6 cores, no HT, @ 1.70GHz)
- RAM: 32GB (31 GB usable)
- USB Controller: Intel Cannon Lake PCH USB 3.1 xHCI
- PCI ID: 0000:00:14
- Driver: vfio-pci (passed to home-sec VM)
- Coral TPU USB device attached here
- Storage:
- Root: 94GB (9.3GB used, 11%)
- local-lvm: 833GB (165GB used, 20%)
- Network:
- eno1: 1GbE → vmbr0 (10.1.1.123/24)
- enp1s0f0/f1: Additional NICs (not configured)
- wlp3s0: WiFi (not used)
- OS: Proxmox VE 8.4.1 (kernel 6.8.12-11-pve)
Network Configuration:
- vmbr0 (Linux bridge): eno1
- IP: 10.1.1.123/24
- Gateway: 10.1.1.1
- Purpose: VM network bridge
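For reference, the vmbr0 setup above corresponds to an /etc/network/interfaces stanza roughly like this (standard Proxmox ifupdown2 format, reconstructed from the values above rather than copied off the host):

```
auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
    address 10.1.1.123/24
    gateway 10.1.1.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
```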
VMs (10 total, 2 running):
| VMID | Name | vCPU | RAM (GB) | Disk (GB) | Status | Purpose |
|---|---|---|---|---|---|---|
| 103 | home-sec | 4 | 8 | 100 | RUNNING | Home Assistant + Frigate (USB controller passthrough for Coral TPU) |
| 200 | docker-host-1 | 4 | 8 | 60 | RUNNING | Docker services |
| 201 | promethium | 4 | 8 | 60 | Stopped | Docker host (backup/testing?) |
| 100 | ubuntu-test | 2 | 2 | 32 | Stopped | Testing |
| 102 | test-cloud-init | 2 | 2 | 33.5 | Stopped | Cloud-init testing |
| 300 | jumpbox | 1 | 1 | 20 | Stopped | K8s lab - jumpbox |
| 301 | server | 1 | 2 | 20 | Stopped | K8s lab - control plane |
| 302 | node-0 | 1 | 2 | 20 | Stopped | K8s lab - worker |
| 303 | node-1 | 1 | 2 | 20 | Stopped | K8s lab - worker |
| 9001 | ubuntu-24-cloud | 2 | 2 | 3.5 | Stopped | Cloud-init template |
Key Configuration:
- VM 103 (home-sec):
- Machine type: q35
- Static IP: 10.1.1.208/24
- PCI Passthrough: hostpci0: 0000:00:14,pcie=1 (USB controller for Coral TPU)
- Boot on start: Yes
- QEMU agent: Enabled
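To re-verify this wiring on the staging node, a quick check along these lines should work (qm is the stock Proxmox VM CLI; the grep pattern is just for brevity):

```bash
# Confirm USB-controller passthrough, machine type, and agent on VM 103
ssh -i ~/.ssh/esxi_migration_rsa root@10.1.1.123 \
    "qm config 103 | grep -E 'hostpci|machine|agent'"
# Expected, per the config above:
#   agent: 1
#   hostpci0: 0000:00:14,pcie=1
#   machine: q35
```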
2) Network Architecture
Core Switch: MikroTik CRS328-24P-4S+ (10.1.1.102)
Hardware:
- Model: CRS328-24P-4S+ (24-port PoE + 4x SFP+ 10GbE)
- MAC Address: 2c:c8:1b:d0:a1:34
- Serial: F6090ED7A959
- Management IP: 10.1.1.102 (static)
- Uptime: 15 days 18:43:40
10GbE SFP+ Ports (Critical):
- SFP1 → ghost-esxi-01 (10.1.1.120) - 10G link (FS 3m copper DAC)
- Carries VLAN 1 “Lab” for VMs
- Carries VLAN 300 “Public” for Palo Alto firewall (MAC: 00:50:56:a7:0f:60)
- Multiple ESXi VMs visible (VMware MAC prefixes: 00:0c:29, 00:50:56)
- SFP2 → ghost-esx-02 (10.1.1.121) - 10G link (FS 3m copper DAC)
- Intel NIC MAC: a4:ae:12:77:0e:13
- VLAN 1 “Lab”
- SFP3: No link (available)
- SFP4 → Synology NAS (10.1.1.150) - 10G link (FS 2m copper DAC) ✅
- Synology expansion card MAC: 0c:c4:7a:1f:04:99
- VLAN 1 “Lab”
- Provides NFS storage for all media (27TB)
VLANs Configured:
- VLAN 1 “Lab”: Management and internal services
- Members: SFP1, SFP2, SFP4, Ports 2, 7, 8, 11, 15, 16, 18-24
- Contains: ESXi hosts, Synology NAS, Proxmox staging, VMs
- VLAN 300 “Public”: Internet-facing
- Members: Port24 (Internet NTU), SFP1 (Palo Alto FW external interface)
- Purpose: Internet uplink and firewall external interface
Gigabit Ethernet Ports:
- Port23: Proxmox staging server (Intel NIC: a4:ae:12:77:0d:65)
- Port19: UniFi Access Point (multiple wireless client MACs)
- Port15: Old NAS LAG (deprecated - now using SFP4 10GbE)
- Port24: Internet NTU/ONT (VLAN 300, MAC: 00:a2:00:b2:00:c2)
- Port11: Unknown device (MACs: bc:24:11:95:dc:45, 98:fa:9b:a0:72:e9)
- Port21: Unknown device (MAC: d0:11:e5:ef:9f:55)
- Port18: Unknown device (MAC: d8:5e:d3:5e:9f:53)
- Port22: Unknown device (MAC: 00:18:dd:24:11:7b)
- Ports 7, 8, 16, 20: 100M devices (various MACs)
Switch Features:
- IGMP Snooping: Disabled
- MikroTik Discovery Protocol: Enabled
- Independent VLAN Lookup: Enabled
- Watchdog: Enabled
Complete Network Topology:
```
Internet
  |
[Internet NTU/ONT]
  |
Port24 (VLAN 300)
  |
MikroTik CRS328-24P-4S+ (10.1.1.102)
  |
  |-- SFP1 (10G) --> ghost-esxi-01 (10.1.1.120)
  |      vmnic2: 10G, MTU 9000, MAC 90:e2:ba:74:4d:85
  |      vSwitch0 (MTU 9000) port groups:
  |        - Internal (VLAN 4095)
  |        - Public (VLAN 300)
  |        - Lab (VLAN 50)
  |        - Mgmt (VLAN 0)
  |      VMs: jarnetfw (FW, 4 NICs), xeon (Plex), docker, pihole, iridium
  |
  |-- SFP2 (10G) --> ghost-esx-02 (10.1.1.121)
  |      vmnic2: 10G, MAC a0:36:9f:07:e3:76
  |      vSwitch0 (MTU 1500) port groups: Lab (50), Public (300), Mgmt (0)
  |      All VMs: OFF
  |
  |-- SFP4 (10G) --> Synology NAS (10.1.1.150)
         27TB RAID5

Palo Alto FW (jarnetfw VM, via vSwitch0 on ghost-esxi-01)
  |-- eth0: Internal (10.1.1.1/24) - DHCP Server
  |-- eth1: Public (VLAN 300) - External
  |-- eth2: Lab (VLAN 50)
  |-- eth3: Internal (additional)
        |
  [Default Gateway 10.1.1.1]
        |
        +-- All VMs/Clients (10.1.1.0/24)
        +-- DNS: Pi-hole (10.1.1.35)
```
Key Network Insights:
- 10GbE Backbone:
- ghost-esxi-01 vmnic2 → MikroTik SFP1 (MTU 9000 for jumbo frames)
- ghost-esx-02 vmnic2 → MikroTik SFP2 (MTU 1500)
- Synology NAS → MikroTik SFP4 (MTU 9000)
- VLAN Strategy:
- VLAN 4095 (Internal): All production VMs on ghost-esxi-01
- VLAN 300 (Public): Internet uplink + Palo Alto external interface
- VLAN 50 (Lab): Palo Alto lab interface
- VLAN 0 (Untagged): Management traffic
- Palo Alto Firewall (Critical):
- Runs as VM with 4 virtual NICs on ghost-esxi-01
- Provides default gateway (10.1.1.1) for entire 10.1.1.0/24 network
- DHCP server for all clients
- Routes between VLANs and to Internet (VLAN 300)
- MTU Configuration (jumbo-frame check after this list):
- ghost-esxi-01 vSwitch0: MTU 9000 (jumbo frames for media/NFS)
- ghost-esx-02 vSwitch0: MTU 1500 (standard)
- Synology NAS: MTU 9000 on eth5
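The MTU mismatch on ghost-esx-02 is worth validating end-to-end before pushing NFS-heavy migration traffic; a quick jumbo-frame check (8972 bytes = 9000 minus 28 bytes of IP/ICMP headers):

```bash
# From any jumbo-enabled Linux host: must succeed without fragmentation
ping -M do -s 8972 -c 3 10.1.1.150

# From ESXi itself, vmkping exercises the vmkernel stack (-d = don't fragment)
ssh -i ~/.ssh/esxi_migration_rsa root@10.1.1.120 \
    "vmkping -d -s 8972 10.1.1.150"
```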
Routing/Firewall:
- Virtual Palo Alto NGFW (jarnetfw on ghost-esxi-01) handles L3/inter-VLAN routing
- Gateway: 10.1.1.1 (Palo Alto ethernet1/1 - internal interface)
- DNS: 10.1.1.35 (Pi-hole VM)
3) Storage Summary
| Host | Type | Datastore | Filesystem | Total | Used | Free | % Used |
|---|---|---|---|---|---|---|---|
| ghost-esxi-01 | ESXi | m2-primary-datastore | VMFS-6 | 931 GB | 506 GB | 425 GB | 54% |
| ghost-esx-02 | ESXi | m2-datastore | VMFS-6 | 2.79 TB | 1.36 TB | 1.43 TB | 49% |
| pve-staging | Proxmox | local-lvm | LVM-Thin | 833 GB | 165 GB | 668 GB | 20% |
| pve-staging | Proxmox | local (root) | ext4 | 94 GB | 9.3 GB | 80 GB | 11% |
Notes:
- NFS shares from Synology (10.1.1.150) - see below
- All host storage is local (no shared storage between hosts)
- Planned upgrade: 1TB → 2TB WD Blue SN580 NVMe on both NUCs
External Storage: Synology NAS “titanium” (10.1.1.150)
Hardware:
- Model: Synology DS1618+ (6-bay NAS)
- CPU: Intel Atom C3538 (4 cores)
- Network: 6x Ethernet ports
- Built-in: 4x 1GbE (eth0-eth3)
- Expansion: 2x 1GbE (eth4-eth5)
- Active: eth5 with MTU 9000 (jumbo frames enabled)
Storage Configuration:
- RAID Type: RAID5 (1 disk fault tolerance)
- Array: 6 disks (sda-sdf)
- Capacity: 27TB usable (29.3TB raw minus RAID5 parity)
- Usage: 21TB used, 6.1TB free (77% full)
- Volume: /dev/vg1000/lv mounted on /volume1
RAID Details:
md2: RAID5 across sda5, sdb5, sdc5, sdd5, sde5, sdf5 [6/6] [UUUUUU]
md1: RAID1 (system) across 6 disks [6/6] [UUUUUU]
md0: RAID1 (boot) across 6 disks [6/6] [UUUUUU]
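This listing was presumably pulled from /proc/mdstat; array health can be re-checked the same way before subjecting the NAS to heavy migration I/O (assumes SSH is enabled on DSM):

```bash
# RAID member and resync status on the Synology
ssh admin@10.1.1.150 "cat /proc/mdstat"
```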
NFS Exports:
- /volume1/datastore → Exported to 10.1.1.0/24 (entire VLAN) - CRITICAL: Contains all media for Plex/Radarr/Sonarr
- Mounted on platinum VM (10.1.1.125) as /mnt/media
- Mounted on docker VM (10.1.1.32) as /mnt/media
- /volume1/Backup → Exported to 10.1.1.120, 11.11.11.20 - Purpose: ESXi VM backups
Media Library Structure (/volume1/datastore/media):
- Movies: 88,718 files
- TV: 10,314 files
- 4K Movies: 352 files
- Music: 44,404 files
- Books: 1,884 files
- Downloads: Active download directory for SABnzbd
- Family Videos
- Incomplete: SABnzbd incomplete downloads
Other Services:
- Docker installed (@docker directory exists, not currently accessible)
- Synology Active Backup for Business (@ActiveBackup)
- Hyper Backup (@Repository)
- Snapshot Replication (@S2S)
Network Configuration:
- Primary IP: 10.1.1.150
- Interface: eth5 (MTU 9000 - jumbo frames)
- Protocol: NFSv4.1
- Read/Write size: 128KB
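On a migrated guest, remounting with the same protocol settings would look roughly like this (mount point mirrors the existing VMs; 128 KB = 131072 bytes):

```bash
# One-off mount matching the NAS settings (NFSv4.1, 128KB rsize/wsize)
sudo mount -t nfs -o nfsvers=4.1,rsize=131072,wsize=131072 \
    10.1.1.150:/volume1/datastore /mnt/media

# Equivalent /etc/fstab entry:
# 10.1.1.150:/volume1/datastore  /mnt/media  nfs  nfsvers=4.1,rsize=131072,wsize=131072  0  0
```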
Backup Configuration:
- Has Active Backup, Hyper Backup, and Snapshot Replication installed
- ESXi VMs backed up to /volume1/Backup
- Internal snapshots for data protection
Migration Impact:
- ⚠️ CRITICAL DEPENDENCY - All media stored here
- No data migration needed (media stays on NAS)
- NFS mounts must be reconfigured on migrated VMs
- Verify Proxmox nodes can access NFS (tested working on pve-staging)
- 77% full - monitor space during migration
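If the Proxmox nodes should also mount the NAS directly (e.g., for backups or ISOs), a sketch of the storage registration, assuming a hypothetical storage ID of titanium-nfs:

```bash
# Register the Synology export as cluster storage (run on any Proxmox node)
pvesm add nfs titanium-nfs \
    --server 10.1.1.150 \
    --export /volume1/datastore \
    --content backup,iso \
    --options vers=4.1

pvesm status   # the new storage should report 'active'
```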
4) PCI Passthrough / Hardware Acceleration
| Host | VM | Device | PCI ID | Purpose | Status |
|---|---|---|---|---|---|
| ghost-esxi-01 | xeon | Intel UHD 630 iGPU | 00:02.0 | Plex Quick Sync transcoding | Active |
| ghost-esx-02 | home-security | Intel UHD 630 iGPU | 00:02.0 | OLD Blue Iris hardware decode | Inactive (VM off) |
| pve-staging | home-sec | USB 3.1 xHCI Controller | 00:14.0 | Coral TPU for Frigate object detection | Active |
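iGPU passthrough on the NUCs is still untested under Proxmox (see Next Steps). A minimal sketch of the usual prerequisites, assuming a hypothetical VMID 105 for the migrated Plex VM and a GRUB-booted node:

```bash
# 1) Enable the IOMMU in the kernel cmdline, then apply and reboot
#    /etc/default/grub: GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"
update-grub && reboot

# 2) Confirm the UHD 630 (8086:3e9b) is visible and in its own IOMMU group
lspci -nn | grep -i vga
find /sys/kernel/iommu_groups/ -type l | grep 0000:00:02.0

# 3) Attach the iGPU to the VM (pcie=1 requires machine type q35)
qm set 105 --hostpci0 0000:00:02.0,pcie=1
```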
5) Critical Services Mapping
Production Services (Currently Running)
| Service | Host | VM | vCPU | RAM | Priority | Notes |
|---|---|---|---|---|---|---|
| Palo Alto NGFW | ghost-esxi-01 | jarnetfw | 4 | 7GB | CRITICAL | Entire network loses gateway/DHCP if offline |
| Plex | ghost-esxi-01 | xeon | 4 | 8GB | HIGH | iGPU passthrough for transcoding |
| Docker Stack | ghost-esxi-01 | docker | 2 | 4GB | HIGH | Radarr, Sonarr, SABnzbd, etc. |
| Pi-hole | ghost-esxi-01 | pihole | 1 | 1GB | MEDIUM | DNS/ad-blocking |
| Home Assistant | pve-staging | home-sec | 4 | 8GB | HIGH | Home automation + Frigate NVR |
| Frigate | pve-staging | home-sec | 4 | 8GB | HIGH | CCTV/NVR with Coral TPU |
| Docker | pve-staging | docker-host-1 | 4 | 8GB | MEDIUM | Secondary docker host |
Retired/Replaced Services
| Service | Old VM | Status | Replacement |
|---|---|---|---|
| Blue Iris | home-security (ghost-esx-02) | Powered off | Frigate (pve-staging/home-sec) |
| Blue Iris host? | server-2019 (ghost-esx-02) | Powered off | N/A |
6) Migration Strategy - Updated with Context
End-State Architecture
Production Nodes (NUCs):
- Proxmox Node 1 (ghost-esxi-01 → proxmox-01): Primary workload host
- Proxmox Node 2 (ghost-esx-02 → proxmox-02): Primary workload host
Cluster Configuration:
- 3-node Proxmox cluster with HA capability
- Nodes 1 & 2: Production workhorses
- Node 3 (pve-staging): Quorum/witness node (QDevice)
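Since pve-staging acts as a QDevice rather than a third full node, the standard setup is a qnetd daemon on the witness plus a one-time join from the cluster (stock Proxmox/Debian packages):

```bash
# On pve-staging (the witness):
apt install corosync-qnetd

# On proxmox-01 or proxmox-02 (any member of the two-node cluster):
apt install corosync-qdevice
pvecm qdevice setup 10.1.1.123

# Quorum should now show 3 expected votes
pvecm status
```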
Workload Distribution (Proposed):
- Node 1: Plex (iGPU), Docker stack, Pi-hole, Palo Alto FW
- Node 2: Home Assistant + Frigate (will move from staging), spare capacity
- Staging: Docker, K8s lab, templates, witness role
Key Insights from Current State
- ✅ Blue Iris migration already complete - Frigate working well on Proxmox
- ✅ Proxmox validated - Platform proven suitable for your workloads
- ⚠️ iGPU passthrough critical - Must work on NUCs for Plex
- ⚠️ Coral TPU passthrough - USB controller passthrough proven working
- ⚠️ Network dependency - Palo Alto FW must stay online during migration
- ⚠️ Host 2 VMs all offline - Easier to wipe and install Proxmox first
- ✅ Storage headroom - Both ESXi hosts ~50% used, room for migration data
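For the VM moves themselves, the common ESXi-to-Proxmox path is to copy the VMDK over and import it. A sketch, assuming a pre-created empty VM 105 on local-lvm (VM name, paths, and VMID here are illustrative):

```bash
# Copy descriptor + flat VMDK off the ESXi datastore (both files are needed)
scp -i ~/.ssh/esxi_migration_rsa \
    "root@10.1.1.120:/vmfs/volumes/m2-primary-datastore/xeon/xeon*.vmdk" /tmp/

# Import as a Proxmox disk, then attach and make it bootable
qm importdisk 105 /tmp/xeon.vmdk local-lvm
qm set 105 --scsi0 local-lvm:vm-105-disk-0 --boot order=scsi0
```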
7) Questions & Unknowns
Questions for User (Already Answered in Context)
- ✅ Which VM is Plex? → xeon on ghost-esxi-01
- ✅ Which VM is Blue Iris? → Replaced by Frigate on pve-staging
- ✅ Proxmox platform suitable? → Yes, validated on staging server
- ✅ What is pve-staging's role? → Will become HA witness node
Remaining Questions
- VM “iridium” purpose? (2 vCPU, 1GB RAM, Ubuntu, on ghost-esxi-01 - running but unknown purpose)
- NVMe upgrade timing? Before, during, or after migration?
- Proxmox storage backend? ZFS or LVM-Thin for NUC nodes?
- NFS backup mount (10.1.1.150)? Still in use? Purpose?
- server-2019 VM? Was this the old Blue Iris host? Can we delete it?
- xsoar VM? Keep or decommission?
- Acceptable downtime windows? For Plex, Palo Alto, etc.?
8) Planned Hardware Upgrades
Both NUC Hosts:
- Upgrade: 1TB WD Blue SN550 NVMe → 2TB WD Blue SN580 NVMe
- Impact: Doubles storage capacity
- Timing: TBD - Need user input
Timing Options:
- Before migration: Fresh Proxmox install on 2TB drives (recommended)
- During migration: Replace Host 2 drive during Proxmox install
- After migration: More complex, requires VM migration again
Next Steps
- ✅ Complete inventory (DONE)
- ✅ Identify current state (DONE)
- Answer remaining questions (NVMe timing, storage backend, etc.)
- Create detailed migration runbooks for each phase
- Test iGPU passthrough on Proxmox (can test on staging first)
- Plan migration sequence with specific downtime windows
Appendix: Access Information
ESXi Hosts:
- SSH (ghost-esxi-01): ssh -i ~/.ssh/esxi_migration_rsa root@10.1.1.120
- SSH (ghost-esx-02): ssh -i ~/.ssh/esxi_migration_rsa root@10.1.1.121
Proxmox:
- SSH: ssh -i ~/.ssh/esxi_migration_rsa root@10.1.1.123
- Web UI: https://10.1.1.123:8006
- API Token: PVEAPIToken=terraform@pam!terraform=4c5b41e3-1b7c-4936-b002-c2477991915a
- Query script: ./proxmox-query.sh {nodes|vms|running}
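For ad-hoc API calls outside the query script, the token goes in an Authorization header (standard Proxmox API token auth; the endpoint shown is the stock nodes listing):

```bash
# List cluster nodes via the Proxmox API using the token above
curl -ks https://10.1.1.123:8006/api2/json/nodes \
    -H "Authorization: PVEAPIToken=terraform@pam!terraform=4c5b41e3-1b7c-4936-b002-c2477991915a"
```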