Application Architecture - Complete Stack
Last Updated: 2025-12-28
Discovered via: Automated SSH inventory
Purpose: Application-layer documentation for migration planning
Executive Summary
Architecture Pattern: Microservices with centralized NFS storage
Key Components:
- Media Server: Plex with hardware transcoding (iGPU)
- Media Management: Radarr, Sonarr, SABnzbd (automated downloads)
- Smart Home: Home Assistant + Frigate NVR (with Coral TPU)
- Network Services: Pi-hole DNS, Palo Alto Firewall
- Infrastructure: Traefik reverse proxy, Portainer, Uptime Kuma
Critical External Dependency:
- NFS Storage: 10.1.1.150 (Synology NAS) - 27TB (21TB used, 77% full)
- All media stored centrally, mounted on Plex + Docker VMs
Application Stack by VM
1. “platinum” - Plex Media Server ⭐ HIGH PRIORITY
Hostname: platinum
IP: 10.1.1.125
Host: ghost-esxi-01 (10.1.1.120)
Resources: 2 vCPU, 1GB RAM, 20GB disk (78% used)
Note: ESXi VM name may be “iridium” but hostname is “platinum”
Primary Application:
- Plex Media Server v1.41.6.9685-d301f511a
- Process running since Dec 25
- Hardware transcoding: Intel UHD 630 (iGPU passthrough)
- Plex database: Local on VM (~500MB)
- Database location: /var/lib/plexmediaserver/Library/Application Support/Plex Media Server/
Storage:
- NFS Mount: 10.1.1.150:/volume1/datastore/media → /mnt/media (see the mount sketch after this list)
- Protocol: NFSv4.1
- Total: 27TB
- Used: 21TB (77%)
- Free: 6.1TB
- Read/write size: 128KB
- Media library points to this mount
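The live mount options were not captured during the inventory; as a working reference, here is a minimal sketch of how the mount could be reproduced on a rebuilt VM. The export path and NFS version come from this section; the rsize/wsize values simply mirror the observed 128KB read/write size and are assumptions, not the current fstab.

```bash
# Sketch: reproduce the media mount on a rebuilt Plex VM.
# Confirm options against the existing /etc/fstab before reuse.
sudo mkdir -p /mnt/media

# One-off test mount (verify the share is reachable before editing fstab)
sudo mount -t nfs4 -o vers=4.1,rsize=131072,wsize=131072 \
  10.1.1.150:/volume1/datastore/media /mnt/media

# Candidate /etc/fstab entry for a persistent mount:
# 10.1.1.150:/volume1/datastore/media  /mnt/media  nfs4  vers=4.1,rsize=131072,wsize=131072,_netdev  0  0
```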
Media Library Structure (on NFS):
- Movies: /mnt/media/Movies
- TV Shows: /mnt/media/TV
- Plex libraries are configured to scan these directories
Dependencies:
- ✅ NFS Server (10.1.1.150) - CRITICAL! Media files
- ✅ Intel iGPU - For hardware transcoding (Quick Sync)
- ⚠️ Traefik (on docker VM) - Reverse proxy for remote access (if configured)
Migration Considerations:
- iGPU passthrough MUST work on Proxmox (see the sketch after this list)
- NFS mount must be reconfigured
- Plex database should be backed up before migration
- Transcode directory can be tmpfs or local disk
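A minimal sketch of the Proxmox-side checks and passthrough wiring: the VMID (110) is a placeholder, and 00:02.0 is only the typical Intel iGPU address; confirm both on the actual target node.

```bash
# Sketch: validate and wire up iGPU passthrough on the Proxmox target node.
lspci -nn | grep -i vga                  # find the iGPU PCI address on this node
dmesg | grep -e DMAR -e IOMMU            # confirm IOMMU is enabled

# Attach the iGPU to the migrated Plex VM (hypothetical VMID 110)
qm set 110 -hostpci0 0000:00:02.0

# After boot, verify inside the guest that Plex can see the render device
ls -l /dev/dri/
```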
2. “docker” VM - Media Management Stack ⭐ HIGH PRIORITY
ESXi VM Name: docker
Hostname: docker
IP: 10.1.1.32
Host: ghost-esxi-01 (10.1.1.120)
Resources: 2 vCPU, 4GB RAM, ~60GB disk
Docker Compose Stack (/home/luke/docker/docker-compose.yml):
Download Management
- SABnzbd (Usenet downloader)
  - Image: linuxserver/sabnzbd
  - Port: 2099:8080
  - Config: ${USERDIR}/docker/sabnzbd:/config
  - Downloads: ${USERDIR}/Downloads/completed:/downloads
  - Incomplete: ${USERDIR}/Downloads/incomplete:/incomplete-downloads
Media Automation
- Radarr (Movie management)
  - Image: linuxserver/radarr
  - Movies: /mnt/media/Movies (NFS mount)
  - Downloads: /mnt/media/Downloads (NFS mount)
  - Behind Traefik: cyan.${DOMAINNAME}
- Sonarr (TV management)
  - Image: linuxserver/sonarr
  - TV: /mnt/media/TV (NFS mount)
  - Downloads: /mnt/media/Downloads (NFS mount)
  - Behind Traefik: teal.${DOMAINNAME}
Request Management
- Ombi
  - Image: linuxserver/ombi:latest
  - Port: 3579
  - Behind Traefik: ombi.${DOMAINNAME}
- Overseerr
  - Image: sctx/overseerr:latest
  - Port: 5055
  - Behind Traefik: overseerr.${DOMAINNAME}
Infrastructure
- Traefik (Reverse Proxy) ⭐ CRITICAL
  - Image: traefik:v1.7.16
  - Ports: 80:80, 443:443
  - Cloudflare integration (DNS-01 challenge for SSL)
  - Environment: CLOUDFLARE_EMAIL, CLOUDFLARE_API_KEY
  - Provides SSL termination for all services
  - Behind Traefik: traefik.${DOMAINNAME}
- Portainer (Docker management)
  - Image: portainer/portainer
  - Port: 9000:9000
  - Web UI for Docker management
- Watchtower (Auto-updater)
  - Image: v2tec/watchtower
  - Schedule: Daily at 4:00 AM
  - Auto-updates all containers
- Uptime Kuma (Monitoring)
  - Image: louislam/uptime-kuma:1
  - Port: 3001:3001
  - Behind Traefik: status.${DOMAINNAME}
Storage:
- NFS Mount: 10.1.1.150:/volume1/datastore/media → /mnt/media (same share as Plex)
- 27TB total, 21TB used
- Shared with Plex server
- Local Docker volumes: 16 volumes for container configs/databases
- User directory: ${USERDIR}/docker/ for container configs
Network:
- Docker network: traefik_proxy (external network for Traefik)
- Bridge networks: multiple Docker bridge networks (172.19.0.1, 172.20.0.1)
Dependencies:
- ✅ NFS Server (10.1.1.150) - CRITICAL! Media files
- ✅ Cloudflare account - For Traefik SSL certificates
- ✅ Domain name - For Traefik reverse proxy
- ✅ Plex - Radarr/Sonarr update Plex libraries after downloads
- ⚠️ SABnzbd credentials - Needed for Radarr/Sonarr integration
Data Flow:
- User requests media via Ombi/Overseerr
- Radarr/Sonarr search for content
- SABnzbd downloads from Usenet
- Radarr/Sonarr move the files to /mnt/media/Movies or /mnt/media/TV
- Plex automatically scans and adds to library
- User watches via Plex
Migration Considerations:
- Docker compose file must be migrated
- Environment variables (.env file) needed
- NFS mount must be reconfigured
- Cloudflare API credentials needed
- Traefik config directory must be migrated
- All container config volumes should be backed up (see the bring-up sketch after this list)
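Assuming the compose file and .env are copied to the same paths on the new VM, a minimal bring-up and validation sequence might look like the following (Docker Compose v2 syntax is assumed; substitute docker-compose if the legacy binary is installed).

```bash
# Sketch: validate the migrated stack on the new Docker VM.
cd /home/luke/docker

docker compose config                          # confirm compose file + .env resolve cleanly
docker network create traefik_proxy || true    # external network Traefik expects
docker compose up -d
docker compose ps                              # all containers should report running/healthy
```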
3. “iridium” - Plex Support Services ⭐ MEDIUM PRIORITY
Hostname: iridium
IP: 10.1.1.126
Host: ghost-esxi-01 (10.1.1.120)
Resources: 4 vCPU, 8GB RAM, ~60GB disk
Note: ESXi VM name may be “xeon” but hostname is “iridium”
iGPU: Intel UHD 630 passthrough configured (not currently in use)
Applications Running:
- Tautulli (Plex Monitoring)
  - Port: 8181
  - Purpose: Plex statistics, monitoring, and notifications
  - Process: Python-based, running via s6-supervise
  - Dependencies: Connects to Plex at 10.1.1.125
  - Web UI: http://10.1.1.126:8181
- Cloudflared (Cloudflare Tunnel)
  - Purpose: Secure tunnel for remote access to services
  - Running as: cloudflared tunnel
  - Provides: Remote access without port forwarding
  - Dependencies: Cloudflare account, tunnel configured
  - Note: Tunnel token visible in the process list (should be backed up)
- UniFi Controller (Network Management)
  - Purpose: Ubiquiti UniFi network device management
  - Process: Running via s6-supervise (svc-unifi-controller)
  - Ports: Likely 8443 (HTTPS), 8080 (HTTP)
  - Manages: UniFi access points, switches, gateways
  - Database: Local MongoDB (embedded in UniFi Controller)
  - Web UI: https://10.1.1.126:8443
Dependencies:
- ✅ Plex Server (10.1.1.125) - Tautulli monitors this
- ✅ Cloudflare Account - For tunnel authentication
- ✅ UniFi Network Devices - Managed by controller
- ⚠️ Network connectivity - Critical for network management
Migration Considerations:
- HIGH IMPORTANCE: UniFi Controller manages network infrastructure
- Backup UniFi Controller database before migration
- Tautulli config and database should be backed up
- Cloudflare tunnel token must be documented/backed up
- UniFi devices will lose management during migration (brief outage acceptable)
- Consider: May not need iGPU passthrough (can reclaim for other use)
4. “pihole” VM - DNS/Ad-Blocking
ESXi VM Name: pihole
Hostname: pihole
IP: 10.1.1.35
Host: ghost-esxi-01 (10.1.1.120)
Resources: 1 vCPU, 1GB RAM, ~20GB disk
Primary Application:
- Pi-hole - Network-wide ad blocking and DNS
- Provides DNS resolution for entire network
- Blocks ads, trackers, malware domains
- Upstream DNS: 127.0.0.1 (likely using its own recursive DNS or forwarding)
Network Role:
- Primary DNS server: 10.1.1.35
- Used by DHCP (likely served by Palo Alto or separate DHCP server)
- All network clients point here for DNS
Configuration:
- Config: /etc/pihole/setupVars.conf
- Custom blocklists (if configured)
- Local DNS records (if configured)
Dependencies:
- ⚠️ Upstream DNS - Internet connectivity for DNS resolution
- ⚠️ Network routing - Must be accessible from all VLANs
Migration Considerations:
- Minimal downtime critical (DNS outage affects entire network)
- Backup Pi-hole configuration before migration (see the Teleporter sketch after this list)
- Consider temporary DNS fallback (8.8.8.8) during migration
- Verify DHCP servers updated to point to new IP if it changes
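A backup sketch using Pi-hole's Teleporter export; the flag set should be verified against the installed Pi-hole version, and the restore is done through the web UI.

```bash
# Sketch: back up Pi-hole before the move.
pihole -a -t                                  # Teleporter export: writes a portable .tar.gz of settings, adlists, local DNS
sudo cp /etc/pihole/setupVars.conf ~/pihole-setupVars.conf.bak

# After migration, restore via Settings → Teleporter in the web UI,
# then confirm resolution from a client:
dig @10.1.1.35 example.com +short
```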
5. “jarnetfw” VM - Palo Alto Firewall ⭐ CRITICAL
ESXi VM Name: jarnetfw
Hostname: Unknown (PA-VM appliance)
IP: 10.1.1.103 (management interface)
Host: ghost-esxi-01 (10.1.1.120)
Resources: 4 vCPU, 7GB RAM, ~60GB disk
Primary Application:
- Palo Alto PA-VM-10.1.3 - Next-Generation Firewall
- Inter-VLAN routing (Layer 3)
- Firewall rules
- NAT (likely for internet access)
- Possibly VPN (site-to-site or remote access)
Network Interfaces:
- Multiple vNICs mapped to different VLANs
- Handles routing between:
- VLAN 0 (Management)
- VLAN 50 (Lab)
- VLAN 300 (Public)
- VLAN 4095 (Internal/Trunk)
Critical Functions:
- Inter-VLAN routing: ALL traffic between VLANs goes through this
- Internet gateway: Likely provides NAT for outbound internet
- Security policies: Firewall rules control traffic
Dependencies:
- ⚠️ Network will be DOWN if this VM is offline
- All VLANs depend on this for routing
Migration Considerations:
- HIGHEST RISK migration
- Network outage for all services during migration
- Must migrate during maintenance window
- Export Palo Alto configuration before migration (see the export sketch after this list)
- Document all firewall rules, NAT, routing
- Test all VLAN connectivity after migration
- Verify interface → VLAN mappings match exactly
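A sketch of the configuration export: the scp-export syntax below is typical PAN-OS CLI but should be checked against the PAN-OS 10.1 documentation, and the destination host/path are placeholders. The same export is also available in the web UI under Device → Setup → Operations → "Export named configuration snapshot".

```bash
# Sketch: export the PA-VM running config ahead of the migration.
# From a management workstation:
ssh admin@10.1.1.103

# Then, at the PAN-OS CLI (syntax may vary slightly by PAN-OS version):
#   scp export configuration from running-config.xml to user@10.1.1.32:/home/luke/backups/jarnetfw-running-config.xml
```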
6. “home-sec” VM - Home Assistant + Frigate NVR ⭐ HIGH PRIORITY
Proxmox VM ID: 103
Hostname: home-sec
IP: 10.1.1.208
Host: pve-staging (10.1.1.123)
Resources: 4 vCPU, 8GB RAM, 100GB disk
Special: USB controller passthrough (PCI 00:14.0) for Coral TPU
Docker Containers:
- Home Assistant (Smart Home Hub)
  - Image: ghcr.io/home-assistant/home-assistant:stable
  - Status: Up 9 days
  - Purpose: Home automation central hub
  - Integrations: Lights, sensors, cameras, etc.
  - Web UI: Port 8123
- Frigate (NVR - Network Video Recorder)
  - Image: ghcr.io/blakeblackshear/frigate:stable
  - Status: Up 9 days (healthy)
  - Purpose: CCTV/camera recording with AI object detection
  - Coral TPU: Google Coral USB accelerator for object detection
    - Passed through via USB controller (PCI 00:14.0)
    - Enables real-time object detection (person, car, etc.)
  - Recording storage: Local on VM (100GB disk)
  - Camera streams: RTSP from IP cameras
- Mosquitto (MQTT Broker)
  - Image: eclipse-mosquitto:latest
  - Status: Up 9 days
  - Purpose: Message broker for IoT devices
  - Used by: Home Assistant, Frigate, possibly other IoT devices
Storage:
- Local disk: 100GB for VM OS, Docker, Frigate recordings
- Recordings: Stored locally on VM (consider space requirements)
Dependencies:
- ✅ Coral TPU - For Frigate object detection (USB passthrough)
- ✅ IP Cameras - RTSP streams from cameras on network
- ⚠️ MQTT clients - Any IoT devices using Mosquitto
Migration Considerations:
- USB controller passthrough (Coral TPU) must work on the new node (see the sketch after this list)
- Different USB controller PCI ID on NUC vs staging server
- Frigate recordings may be large - check disk usage
- Home Assistant config backup recommended
- Mosquitto persistence data should be backed up
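A sketch of re-attaching the USB controller on the NUC node, assuming the VM keeps VMID 103. The PCI address shown is the staging host's value and will almost certainly differ on the NUC, so it must be re-discovered first.

```bash
# Sketch: re-wire the USB controller (Coral TPU) after the VM moves to the NUC.
lspci -nn | grep -i 'usb controller'     # find the NUC's USB controller address
qm set 103 -hostpci0 0000:00:14.0        # replace with the NUC's actual address

# Inside the guest, confirm the Coral enumerates
# (shows as "Global Unichip Corp." before init, "Google Inc." after)
lsusb | grep -iE 'global unichip|google'
```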
7. “docker-host-1” VM - Spare Docker Host
Proxmox VM ID: 200
Hostname: docker-host-1
IP: 10.1.1.200
Host: pve-staging (10.1.1.123)
Resources: 4 vCPU, 8GB RAM, 60GB disk
Status:
- Docker installed
- NO containers currently running
- Purpose unclear - backup/failover host?
Migration Considerations:
- Low priority
- Can potentially be decommissioned if not needed
External Dependencies
1. NFS Storage Server - 10.1.1.150 ⭐ CRITICAL
Type: Likely Synology NAS
Mount: /volume1/datastore/media
Size: 27TB total, 21TB used (77% full), 6.1TB free
Protocol: NFSv4.1
Mounted On:
- Plex VM (iridium/platinum, 10.1.1.125): /mnt/media
- Docker VM (docker, 10.1.1.32): /mnt/media
Contents:
- /mnt/media/Movies - Movie library for Plex/Radarr
- /mnt/media/TV - TV show library for Plex/Sonarr
- /mnt/media/Downloads - Download directory for SABnzbd
Critical Importance:
- ALL media files stored here
- Plex and media management tools depend on this
- If NFS server is down, Plex cannot serve media
Migration Considerations:
- NFS mounts must be reconfigured on migrated VMs
- Ensure network connectivity from Proxmox nodes to 10.1.1.150
- No data migration needed (media stays on NAS)
- Test NFS mount performance on Proxmox before migration (see the sketch after this list)
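A quick reachability and throughput check that can be run from a Proxmox node shell before any VM is cut over; the mount point and read test are illustrative only.

```bash
# Sketch: verify NFS access from a Proxmox node before migration.
showmount -e 10.1.1.150                  # export list should include /volume1/datastore/media

mkdir -p /mnt/nfs-test
mount -t nfs4 -o vers=4.1 10.1.1.150:/volume1/datastore/media /mnt/nfs-test

# Rough sequential read check against an existing media file (read-only)
dd if="$(find /mnt/nfs-test/Movies -type f | head -n1)" of=/dev/null bs=1M count=1024 status=progress

umount /mnt/nfs-test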
2. Cloudflare Account
Purpose: Traefik reverse proxy SSL certificates
API Key: Required for DNS-01 challenge
Domains: ${DOMAINNAME} (specific domain not captured)
Used For:
- Automatic SSL certificate generation (Let’s Encrypt)
- DNS-based validation
- All public-facing services (Radarr, Sonarr, Ombi, Overseerr, etc.)
Migration Considerations:
- Cloudflare API key must be available for Traefik
- Domain DNS records may need updating if IPs change
3. Usenet Provider
Used By: SABnzbd
Purpose: Content downloads
Migration Considerations:
- SABnzbd config includes Usenet provider credentials
- Must be migrated with SABnzbd container config
Service Dependencies Map
┌─────────────────────────────────────────────────────────────────┐
│ External Dependencies │
│ ┌──────────────┐ ┌────────────┐ ┌─────────────────────────┐ │
│ │ NFS Storage │ │ Cloudflare │ │ Usenet Provider │ │
│ │ 10.1.1.150 │ │ (SSL/DNS) │ │ (Downloads) │ │
│ └──────┬───────┘ └─────┬──────┘ └───────┬─────────────────┘ │
└─────────┼────────────────┼─────────────────┼───────────────────┘
│ │ │
┌─────────┼────────────────┼─────────────────┼───────────────────┐
│ │ Network Layer (Palo Alto FW + VLANs) │ │
│ │ │ │ │ │
└─────────┼────────────────┼─────────────────┼───────────────────┘
│ │ │
┌─────────┼────────────────┼─────────────────┼───────────────────┐
│ Application Layer │
│ │ │ │ │
│ ┌──────▼────────┐ ┌────▼─────────────────▼─────────────┐ │
│ │ Plex Server │ │ Docker VM (Media Stack) │ │
│ │ (iridium) │ │ ┌─────────────────────────────┐ │ │
│ │ │ │ │ Traefik (Reverse Proxy) │ │ │
│ │ - NFS mount │◄─┼───┤ - Cloudflare integration │ │ │
│ │ - iGPU HW TX │ │ └─────────────────────────────┘ │ │
│ │ - Libraries │ │ ┌─────────────────────────────┐ │ │
│ └───────────────┘ │ │ SABnzbd (Downloader) │ │ │
│ │ │ - Usenet downloads │ │ │
│ ┌──────────────┐ │ └────────┬────────────────────┘ │ │
│ │ Pi-hole │ │ │ │ │
│ │ (DNS) │ │ ┌────────▼────────────────────┐ │ │
│ │ 10.1.1.35 │ │ │ Radarr/Sonarr (Automation) │ │ │
│ └──────────────┘ │ │ - NFS mount (media) │ │ │
│ │ │ - Manages downloads │ │ │
│ ┌──────────────┐ │ └────────┬────────────────────┘ │ │
│ │ Home │ │ │ │ │
│ │ Assistant │ │ ┌────────▼────────────────────┐ │ │
│ │ + Frigate │ │ │ Ombi/Overseerr (Requests) │ │ │
│ │ (Coral TPU) │ │ │ - User requests │ │ │
│ └──────────────┘ │ └─────────────────────────────┘ │ │
│ │ ┌─────────────────────────────┐ │ │
│ │ │ Portainer (Management) │ │ │
│ │ └─────────────────────────────┘ │ │
│ │ ┌─────────────────────────────┐ │ │
│ │ │ Uptime Kuma (Monitoring) │ │ │
│ │ └─────────────────────────────┘ │ │
│ │ ┌─────────────────────────────┐ │ │
│ │ │ Watchtower (Auto-update) │ │ │
│ │ └─────────────────────────────┘ │ │
│ └─────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────────┘
Data Flow:
- User requests content via Ombi/Overseerr (web UI via Traefik)
- Request triggers Radarr/Sonarr search
- Radarr/Sonarr sends download to SABnzbd
- SABnzbd downloads from Usenet to /mnt/media/Downloads
- Radarr/Sonarr moves completed downloads to /mnt/media/Movies or /mnt/media/TV
- Plex automatically scans the NFS mount and adds to library
- User streams via Plex (with iGPU hardware transcoding)
Storage Architecture
NFS-Based Centralized Storage
┌──────────────────────────────────────┐
│ Synology NAS (10.1.1.150) │
│ /volume1/datastore/media │
│ 27TB (21TB used, 6.1TB free) │
│ │
│ ├── Movies/ (Radarr/Plex) │
│ ├── TV/ (Sonarr/Plex) │
│ └── Downloads/ (SABnzbd) │
└──────────┬───────────────────────────┘
│
│ NFSv4.1 Mount
│
┌─────┴─────┐
│ │
┌────▼────┐ ┌───▼─────┐
│ Plex VM │ │ Docker │
│ /mnt/ │ │ /mnt/ │
│ media │ │ media │
└─────────┘ └─────────┘
Key Points:
- Media is NOT on VMs - stored centrally on NAS
- Simplifies migration (no large data transfers)
- Both Plex and Docker VM mount same NFS share
- NAS has ~6TB free space for growth
Network Diagram
Internet
│
┌───▼────────────────────────────────────────────┐
│ Palo Alto Firewall (10.1.1.103) │
│ - Inter-VLAN Routing │
│ - NAT │
│ - Security Policies │
└───┬────────────────────────────────────────────┘
│
┌───┴────────────────────────────────────────────┐
│ MikroTik Switch (10GbE SFP+) │
│ VLANs: 0 (Mgmt), 50 (Lab), 300 (Public), │
│ 4095 (Trunk) │
└───┬────────────────────────────────────────────┘
│
├─ VLAN 0 (Management): 10.1.1.0/24
│ ├─ ESXi 10.1.1.120, 10.1.1.121
│ ├─ Proxmox 10.1.1.123
│ ├─ Palo Alto 10.1.1.103
│ ├─ Pi-hole 10.1.1.35
│ ├─ Plex 10.1.1.125
│ ├─ Docker 10.1.1.32
│ ├─ Home Assistant 10.1.1.208
│ └─ NFS Server 10.1.1.150
│
├─ VLAN 50 (Lab)
│
├─ VLAN 300 (Public)
│
└─ VLAN 4095 (Internal/Trunk)
Migration Impact Analysis
Critical Services (Highest Priority)
- Palo Alto Firewall - Network outage affects ALL services
- Plex - Media streaming, requires iGPU passthrough
- Docker Stack - Media management, downloads, reverse proxy
- Home Assistant/Frigate - Home automation, CCTV
Medium Priority
- Pi-hole - DNS (can use fallback 8.8.8.8 temporarily)
Low Priority / Can Decommission
- xeon VM - No active workload (spare Docker host)
- docker-host-1 - No active workload (spare Docker host)
Sequence Recommendation
Based on dependencies and risk:
- Phase 1: Migrate low-risk VMs first (test migration process)
  - xeon (spare Docker host) - good test case
  - docker-host-1 (spare)
- Phase 2: Migrate supporting services
  - Pi-hole (prepare DNS fallback first)
- Phase 3: Migrate Docker stack
  - docker VM with all media automation
  - Test NFS mount
  - Test Traefik/Cloudflare integration
  - Validate all containers start correctly
- Phase 4: Migrate Plex (HIGH RISK - iGPU required)
  - Backup Plex database
  - Ensure iGPU passthrough works on Proxmox
  - Test NFS mount
  - Validate hardware transcoding
- Phase 5: Migrate Palo Alto Firewall (HIGHEST RISK)
  - Scheduled maintenance window required
  - Network outage expected
  - Thorough testing of all VLANs post-migration
- Phase 6: Migrate Home Assistant/Frigate
  - Move from staging to the NUC node
  - USB controller passthrough for Coral TPU
  - Validate object detection working
Configuration Backup Checklist
Before migration, backup these critical configs:
Plex
- Plex database: /var/lib/plexmediaserver/Library/Application Support/Plex Media Server/ (see the backup sketch after this list)
- Plex config: Preferences.xml
- Library locations (document NFS mount paths)
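A minimal archive sketch, assuming Plex runs as the standard plexmediaserver systemd service and is stopped during the copy so the SQLite databases are quiescent; the destination path is a placeholder.

```bash
# Sketch: archive Plex application data before migration.
sudo systemctl stop plexmediaserver
sudo tar -czf /tmp/plex-appdata-$(date +%F).tar.gz \
  "/var/lib/plexmediaserver/Library/Application Support/Plex Media Server/"
sudo systemctl start plexmediaserver
```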
Docker Stack
- Docker Compose file: /home/luke/docker/docker-compose.yml
- Environment file: /home/luke/docker/.env (if it exists)
- Traefik config: ${USERDIR}/docker/traefik/
- All container configs: ${USERDIR}/docker/*/ (see the archive sketch after this list)
- Document Cloudflare API key
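A minimal archive sketch: ${USERDIR} must resolve to the value defined in the .env file, and stopping the stack first keeps the container config databases consistent.

```bash
# Sketch: archive the compose file, env file and per-container config directories.
cd /home/luke/docker && docker compose stop
tar -czf /tmp/docker-stack-$(date +%F).tar.gz \
  /home/luke/docker/docker-compose.yml \
  /home/luke/docker/.env \
  "${USERDIR}/docker/"
docker compose start
```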
Pi-hole
- Pi-hole config: /etc/pihole/setupVars.conf
- Custom DNS records (if any)
- Blocklists configuration
- DHCP settings (if DHCP server enabled)
Palo Alto Firewall
- Full config export via web UI
- Screenshot all firewall rules
- Document NAT policies
- Document routing table
- Interface → VLAN mappings
Home Assistant
- Home Assistant config directory (Docker volume)
- Frigate config (Docker volume)
- Mosquitto config/persistence (Docker volume)
Network Requirements for Migration
Required Connectivity
- NFS: Proxmox nodes must reach 10.1.1.150 on NFS ports
- DNS: All VMs need DNS resolution (use Pi-hole or fallback)
- Internet: Traefik needs Cloudflare API access
- Cloudflare: Port 443 outbound for DNS-01 challenge
Firewall Rules to Verify Post-Migration
- NFS access from Proxmox nodes to 10.1.1.150
- VLAN routing still works
- Internet access from all VLANs
- DNS resolution working (Pi-hole)
Questions Answered / Mysteries Solved
- ✅ Which VM is Plex? → “iridium” (hostname platinum, 10.1.1.125)
- ✅ Which VM is Docker? → “docker” (10.1.1.32) with full media stack
- ✅ What is xeon? → Spare Docker host with iGPU (no active workload)
- ✅ Where is media stored? → NFS on 10.1.1.150 (Synology NAS, 27TB)
- ✅ What is iridium VM? → Actually the Plex server! (Confusing naming)
- ✅ Blue Iris status? → Replaced by Frigate on Proxmox (complete)
- ✅ Home Assistant details? → On Proxmox with Frigate + Coral TPU
- ✅ docker-host-1 purpose? → Spare/unused Docker host
Recommendations for Migration
1. Rename VMs for Clarity
Current naming is confusing:
- ESXi “xeon” has hostname “iridium” (spare Docker)
- ESXi “iridium” has hostname “platinum” (Plex!)
Recommended Renaming:
- “iridium” VM → “plex” (hostname: platinum)
- “xeon” VM → “docker-spare” (hostname: iridium)
- “docker” VM → “docker-media” (hostname: docker)
2. Consolidate iGPU Usage
Currently:
- “xeon” VM has iGPU passthrough but no workload (wasted)
- “iridium” VM (Plex) has iGPU and uses it
Action: Keep the iGPU passthrough assigned to the Plex VM; the unused passthrough on "xeon" can be reclaimed.
3. Decommission Unused VMs
- xeon/docker-spare: No containers, wasting resources
- docker-host-1: No containers, wasting resources
- server-2019, home-security, xsoar on Host 2: All offline
Action: Confirm these can be permanently offline
4. NFS Mount Strategy
Current: Same NFS share mounted on 2 VMs
- Plex (read media)
- Docker (write media via Radarr/Sonarr)
For Proxmox:
- Continue same pattern - mount NFS on each VM
- Consider Proxmox storage integration (mount NFS at the PVE level; see the sketch after this list)
- Test NFS performance on Proxmox before migration
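If the PVE-level option is chosen, a sketch of registering the share as Proxmox storage; the storage ID and content types are illustrative, and per-VM guest mounts can equally be kept as-is.

```bash
# Sketch: register the Synology share as Proxmox-level NFS storage.
pvesm add nfs synology-media \
  --server 10.1.1.150 \
  --export /volume1/datastore/media \
  --options vers=4.1 \
  --content images,backup

pvesm status   # confirm the new storage is active
```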
5. Backup Strategy
No evidence of an existing backup strategy was observed during the inventory.
Recommendation:
- Proxmox Backup Server (PBS) or external backup (see the vzdump sketch after this list)
- Critical: Plex database, Docker configs, Frigate recordings
- NFS data on NAS (should have separate backup strategy)
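A minimal vzdump sketch for the Proxmox side; the VMIDs and storage ID are placeholders until the target layout is settled, and PBS would simply replace the storage target rather than the command.

```bash
# Sketch: snapshot-mode backup of the Proxmox-hosted VMs to a backup storage.
vzdump 103 200 --storage local --mode snapshot --compress zstd
```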
Next Steps
- ✅ Application layer documented (COMPLETE)
- Review and confirm VM purposes with user
- Update migration plan with application-specific steps
- Create detailed runbooks for each VM migration
- Test NFS mount on Proxmox before migration
- Test iGPU passthrough on Proxmox (critical for Plex)
- Backup all configurations before starting migration
Ready to proceed with migration planning!