Technical Reference · Infrastructure Architecture

AI Enrichment Platform
Network Architecture

This reference describes the network-level architecture of Atsky's AI enrichment platform, showing how the Multi-Agent Framework pipeline integrates with enterprise network infrastructure, Kubernetes clusters, and observability layers in a production telecom environment.


Platform

Kubernetes-native AI enrichment pipeline running on enterprise-managed infrastructure with full multi-tenancy support

Agent Framework

Multi-Agent Framework orchestrating Supervisor, Diagnostic, RCA, and Best-Action agents across distributed compute

Compliance

EU data residency enforced at network boundary. Full audit trail and egress controls aligned to EU AI Act requirements

AI Enrichment Platform — Network Architecture

Cluster: cl-hby-ai-brain-dev-00 · CNI: Antrea · K8s v1.32 · VMware Photon · amd64 · 3 control-plane + 3 worker nodes
External Systems
BHOM Platform (Network Infrastructure Platform)
API-1 — Anomaly Source
GET /api/bhom/v1/situations · polled by Orchestrator
API-2 — Writeback Sink
POST /api/bhom/v1/situations/{id}/enrich · called by Orchestrator
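A minimal sketch of how the Orchestrator might address these two endpoints. The `BHOM_BASE` host below is a placeholder, not the real gateway; only the two paths are taken from the API definitions above.

```python
# Sketch of the two BHOM endpoints the Orchestrator talks to.
# BHOM_BASE is a hypothetical host — substitute the real gateway URL.
BHOM_BASE = "https://bhom.example.internal"

def situations_url() -> str:
    """API-1 (anomaly source): polled by the Orchestrator."""
    return f"{BHOM_BASE}/api/bhom/v1/situations"

def enrich_url(situation_id: str) -> str:
    """API-2 (writeback sink): receives the enriched result for one situation."""
    return f"{BHOM_BASE}/api/bhom/v1/situations/{situation_id}/enrich"
```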
LLM Gateway
External LLM Endpoint
Network Infrastructure Platform AI Gateway / OpenAI-compatible · called by Multi-Agent Framework
VMware Tanzu Kubernetes Grid — Guest Cluster
ns: svns-hby-100007639-non-prod-00
NGINX Ingress Controller
NodePort · Workers · Port :80 / :443
Optional for this use case — both microservices are outbound-only. Useful for /metrics or admin access if needed.
Service Layer
172.16.0.0/16 · CoreDNS @ 172.16.0.10
ClusterIP Service
orchestrator-svc
172.16.x.x :8080
DNS: orchestrator-svc.<ns>.svc.cluster.local
ClusterIP Service
Multi-Agent Framework-svc
172.16.x.x :8081
DNS: Multi-Agent Framework-svc.<ns>.svc.cluster.local
Built-in Services
kube-dns
172.16.0.10
K8s API: 172.16.0.1
Pod Layer — Antrea Overlay
172.17.0.0/16 per-node /24 PodCIDRs
Orchestrator / Poller
Polls BHOM API-1 at configurable interval.
Dispatches anomaly tasks to Multi-Agent Framework.
Receives enriched result, writes to BHOM API-2.
Pod: 172.17.0.x Worker A · PodCIDR /24
Deployment · 1 replica · requests 100m/256Mi
Multi-Agent Framework (AI Agent)
Receives anomaly task from Orchestrator.
Calls external LLM Gateway for reasoning.
Returns enriched EnrichmentResult.
Pod: 172.17.6.x Worker B · PodCIDR /24
Deployment · 1–2 replicas · requests 200m/512Mi
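Since the gateway is OpenAI-compatible, the agent's outbound call can be sketched as a standard chat-completions payload. The endpoint URL, model name, and prompt wording below are assumptions, not the platform's actual values:

```python
import json

# Hypothetical gateway URL — the real endpoint is environment-specific.
LLM_GATEWAY = "https://llm-gateway.example.internal/v1/chat/completions"

def build_enrichment_request(anomaly: dict, model: str = "example-model") -> dict:
    """Build an OpenAI-compatible chat payload from an anomaly task.

    The system prompt is illustrative; the real agents (Supervisor,
    Diagnostic, RCA, Best-Action) each carry their own instructions.
    """
    return {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "Enrich the telecom anomaly with diagnosis, RCA and best action."},
            {"role": "user", "content": json.dumps(anomaly)},
        ],
    }
```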
Antrea CNI
✓ Cross-worker pod comms
✓ Per-node PodCIDR allocation
✓ Overlay tunnel (default)
✗ No NetworkPolicy yet
⚠ Add policies pre-prod
Node / VM Layer
172.20.27.0/24 · NSX-T vSphere Overlay
Control Plane VMs × 3
Control Plane
mz64v-[A/B/C] — etcd + API Server + Scheduler
3-node HA · kube-apiserver, controller-manager, etcd
Worker VMs × 3
Worker A
qz7dr-bxnkk
PodCIDR 172.17.0.0/24
← Orchestrator Pod
Worker B
qz7dr-h4fk9
PodCIDR 172.17.6.0/24
← Multi-Agent Framework Pod
Worker C
qz7dr-knp9j
PodCIDR 172.17.2.0/24
Reserved / Scale-out
Key principle: App-to-app calls always use Service IPs (172.16.x.x) via CoreDNS, never Pod IPs (ephemeral) or Node IPs. Egress to external endpoints exits via the Node VM through vSphere NSX-T routing.
1. Orchestrator Pod (172.17.0.x) → BHOM API-1 (external) · HTTPS GET, egress via Worker A node. Poll anomaly situations at the configured interval.
2. BHOM API-1 → Orchestrator Pod · ingress reply: 200 OK with a JSON SituationBundle. Anomaly data enters pod memory.
3. Orchestrator Pod (172.17.0.x) → Multi-Agent Framework-svc (172.16.x.x → Pod 172.17.6.x) · HTTP POST to Multi-Agent Framework-svc:8081 over the Service IP, Antrea cross-node. Internal K8s traffic — never leaves the cluster.
4. Multi-Agent Framework Pod (172.17.6.x) → LLM Gateway (external) · HTTPS POST, egress via Worker B node. Prompt + anomaly context feed the LLM reasoning loop.
5. LLM Gateway → Multi-Agent Framework Pod · ingress reply: HTTPS response with the AI enrichment result. Streaming or sync — model-dependent.
6. Multi-Agent Framework Pod (172.17.6.x) → Orchestrator Pod (172.17.0.x) · HTTP 200 response back to the caller, Antrea cross-node. EnrichmentResult payload returned.
7. Orchestrator Pod (172.17.0.x) → BHOM API-2 (external) · HTTPS POST, egress via Worker A node. Write the enriched result back to BHOM.
Steps 3 & 6 — Why ClusterIP + Antrea
Orchestrator → Multi-Agent Framework crosses Worker A → Worker B via Antrea overlay.
Pod IPs are ephemeral — Orchestrator always targets Multi-Agent Framework-svc:8081 (stable Service VIP at 172.16.x.x).
kube-proxy rewrites destination to live pod IP transparently.
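Conceptually, that rewrite behaves like the sketch below. The VIP and endpoint values are illustrative only; in the real cluster they come from the Service and its EndpointSlices.

```python
import random

SERVICE_VIP = "172.16.23.5"   # hypothetical ClusterIP of the agent Service
ENDPOINTS = ["172.17.6.14"]   # live Pod IPs behind it (a single replica here)

def dnat(dst_ip: str) -> str:
    """kube-proxy-style rewrite: Service VIP → one live endpoint Pod IP.

    Traffic to anything other than the VIP passes through unchanged.
    """
    if dst_ip == SERVICE_VIP and ENDPOINTS:
        return random.choice(ENDPOINTS)
    return dst_ip
```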
Steps 1, 4, 7 — Egress path
Pod egress exits via the node's VM IP (172.20.27.x) through NSX-T.
Destination (BHOM, LLM Gateway) sees the node IP, not the pod IP.
Ensure BHOM/LLM firewall rules allow the 3× worker node IPs.
Network Layers
Layer · CIDR · Used For · AWS Equivalent · Notes
External — (internet / Enterprise Operator WAN) BHOM APIs, LLM Gateway Internet Gateway / NAT Egress via node VMs through NSX-T
Node / VM 172.20.27.0/24 Control plane & worker VM IPs EC2 VPC private subnet Do NOT use as app targets
Pod 172.17.0.0/16 Workload pod IPs (per-node /24) EC2 secondary ENI IPs Ephemeral — use Service DNS
Service 172.16.0.0/16 ClusterIP virtual IPs, CoreDNS AWS internal ALB / Route53 Stable — use for all app wiring
Per-Node PodCIDR Allocation
Node · Role · Node IP · PodCIDR · Service Deployed
qz7dr-bxnkk Worker A ⬛ confidential 172.17.0.0/24 Orchestrator / Poller
qz7dr-h4fk9 Worker B ⬛ confidential 172.17.6.0/24 Multi-Agent Framework (AI)
qz7dr-knp9j Worker C ⬛ confidential 172.17.2.0/24 Reserved (future: Kafka / Qdrant / PG)
mz64v-[A/B/C] Control Plane × 3 ⬛ confidential etcd + API Server + Scheduler
Service DNS Reference
Service · Short DNS (same NS) · Full FQDN · Port
orchestrator-svc orchestrator-svc orchestrator-svc.svns-hby-100007639-non-prod-00.svc.cluster.local 8080
Multi-Agent Framework-svc Multi-Agent Framework-svc Multi-Agent Framework-svc.svns-hby-100007639-non-prod-00.svc.cluster.local 8081
kube-dns kube-dns.kube-system kube-dns.kube-system.svc.cluster.local 53 UDP/TCP
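The FQDNs in this table all follow the standard Kubernetes Service DNS scheme, which can be generated as:

```python
def service_fqdn(svc: str, namespace: str) -> str:
    """Stable in-cluster name for a Service: <svc>.<ns>.svc.cluster.local."""
    return f"{svc}.{namespace}.svc.cluster.local"

NS = "svns-hby-100007639-non-prod-00"
print(service_fqdn("orchestrator-svc", NS))
# → orchestrator-svc.svns-hby-100007639-non-prod-00.svc.cluster.local
```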
CNI: Antrea
✓ Supported Now
Cross-worker pod routing
Per-node PodCIDR allocation
Service kube-proxy forwarding
NetworkPolicy enforcement (ready)
⚠ Not Yet Configured
NetworkPolicy rules
Pod egress restrictions
Namespace isolation
mTLS (needs Istio add-on)
→ Recommended Next
Verify antrea-config trafficEncapMode
Add egress allow rules to BHOM/LLM
Block pod-to-pod except Orch↔Agent
Add resource quotas to namespace
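As a starting point for "block pod-to-pod except Orch↔Agent", the ingress side could be sketched as the NetworkPolicy below, built here as a plain manifest dict. The pod labels (`app: orchestrator`, `app: multi-agent-framework`) are assumptions — align them with the actual Deployment labels before applying.

```python
# Sketch: restrict ingress on the agent pods to the Orchestrator only, on 8081.
# Label selectors are assumed — verify against the real Deployment labels.
agent_ingress_policy = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "NetworkPolicy",
    "metadata": {
        "name": "allow-orch-to-agent",
        "namespace": "svns-hby-100007639-non-prod-00",
    },
    "spec": {
        "podSelector": {"matchLabels": {"app": "multi-agent-framework"}},
        "policyTypes": ["Ingress"],
        "ingress": [{
            "from": [{"podSelector": {"matchLabels": {"app": "orchestrator"}}}],
            "ports": [{"protocol": "TCP", "port": 8081}],
        }],
    },
}
```

Antrea enforces standard `networking.k8s.io/v1` NetworkPolicy objects, so no CNI-specific CRDs are needed for this rule; egress restrictions toward BHOM and the LLM Gateway would be a separate policy.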

Interested in this architecture for your network?

Book a technical session. We'll walk through how this maps to your infrastructure.

Book a Technical Call → See Production Results