Container Networking¶
Learning Objectives¶
- Understand Linux network namespaces and virtual networking
- Master Docker networking architecture and network drivers
- Learn Kubernetes networking model and CNI plugins
- Understand service discovery and load balancing in containers
- Implement network policies for security
- Learn service mesh concepts and implementations
- Configure ingress and external load balancing
- Troubleshoot container networking issues
Table of Contents¶
- Container Networking Fundamentals
- Docker Networking Model
- Docker Network Drivers
- Kubernetes Networking Model
- CNI Plugins Comparison
- Service Discovery and Load Balancing
- Network Policies
- Service Mesh
- Ingress and Load Balancing
- Troubleshooting Container Networks
- Practice Problems
1. Container Networking Fundamentals¶
Network Namespaces¶
Network namespaces provide network isolation in Linux:
  Default Namespace              Container Namespace
+----------------------+      +----------------------+
| eth0: 10.0.1.10      |      | eth0: 172.17.0.2     |
|                      |      |  (inside container)  |
| Routing Table        |      | Routing Table        |
| Firewall Rules       |      | Firewall Rules       |
+----------------------+      +----------------------+
Creating a network namespace:
# Create namespace
sudo ip netns add my_namespace
# List namespaces
sudo ip netns list
# Execute command in namespace
sudo ip netns exec my_namespace ip addr show
# Enter namespace shell
sudo ip netns exec my_namespace bash
Virtual Ethernet (veth) Pairs¶
veth pairs are virtual cable connections:
+------------------+                     +------------------+
|   Namespace A    |                     |   Namespace B    |
|                  |                     |                  |
|  +------------+  |    virtual cable    |  +------------+  |
|  |   veth0    |--+---------------------+--|   veth1    |  |
|  |  10.0.0.1  |  |                     |  |  10.0.0.2  |  |
|  +------------+  |                     |  +------------+  |
+------------------+                     +------------------+
Creating veth pair:
# Create veth pair
sudo ip link add veth0 type veth peer name veth1
# Move veth1 to namespace
sudo ip link set veth1 netns my_namespace
# Configure veth0 (host side)
sudo ip addr add 10.0.0.1/24 dev veth0
sudo ip link set veth0 up
# Configure veth1 (namespace side)
sudo ip netns exec my_namespace ip addr add 10.0.0.2/24 dev veth1
sudo ip netns exec my_namespace ip link set veth1 up
sudo ip netns exec my_namespace ip link set lo up
# Test connectivity
ping 10.0.0.2
Linux Bridge¶
Bridge connects multiple network interfaces:
                    Linux Bridge (br0)
      +--------------+--------------+--------------+
      |              |              |              |
    veth0          veth2          veth4       eth0 (host)
      |              |              |              |
 Container 1    Container 2    Container 3   Physical Net
 172.17.0.2/16  172.17.0.3/16  172.17.0.4/16
Creating a bridge:
# Create bridge
sudo ip link add br0 type bridge
sudo ip link set br0 up
sudo ip addr add 172.17.0.1/16 dev br0
# Create container namespace and veth pair
sudo ip netns add container1
sudo ip link add veth0 type veth peer name veth1
# Connect veth1 to container
sudo ip link set veth1 netns container1
# Connect veth0 to bridge
sudo ip link set veth0 master br0
sudo ip link set veth0 up
# Configure container interface
sudo ip netns exec container1 ip addr add 172.17.0.2/16 dev veth1
sudo ip netns exec container1 ip link set veth1 up
sudo ip netns exec container1 ip link set lo up
sudo ip netns exec container1 ip route add default via 172.17.0.1
# Enable NAT for internet access
sudo iptables -t nat -A POSTROUTING -s 172.17.0.0/16 ! -o br0 -j MASQUERADE
Container Networking Architecture¶
+---------------------------------------------------------------+
|                            Host OS                            |
|                                                               |
|  +---------------------------------------------------------+  |
|  |                Docker Bridge (docker0)                  |  |
|  |                     172.17.0.1/16                       |  |
|  +------+-------------+-------------+-------------+-------+  |
|         |             |             |             |           |
|     +---+----+    +---+----+    +---+----+    +---+----+      |
|     | veth0  |    | veth2  |    | veth4  |    | veth6  |      |
|     +---+----+    +---+----+    +---+----+    +---+----+      |
|         |             |             |             |           |
|  +------+-----+ +-----+------+ +----+-------+ +---+--------+  |
|  | Container1 | | Container2 | | Container3 | | Container4 |  |
|  | 172.17.0.2 | | 172.17.0.3 | | 172.17.0.4 | | 172.17.0.5 |  |
|  +------------+ +------------+ +------------+ +------------+  |
+---------------------------------------------------------------+
2. Docker Networking Model¶
Container Network Model (CNM)¶
Docker uses CNM with three components:
+--------------------------------------------------------+
|                     Docker Engine                      |
|                                                        |
|   +----------+      +----------+      +----------+     |
|   | Sandbox  | ---> | Endpoint | ---> | Network  |     |
|   | (netns)  |      |  (veth)  |      | (bridge) |     |
|   +----------+      +----------+      +----------+     |
+--------------------------------------------------------+
- Sandbox: Container's network stack (namespace)
- Endpoint: Virtual network interface (veth)
- Network: Virtual switch (bridge, overlay, etc.)
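These objects surface directly in Docker's CLI output: the top-level object returned by `docker network inspect` is the Network, each entry under `Containers` is an Endpoint, and the Sandbox appears as `SandboxKey` in `docker inspect <container>`. A representative, heavily abridged excerpt for the default bridge network (IDs and the container name `web` are illustrative) might look like:

```json
{
  "Name": "bridge",
  "Driver": "bridge",
  "IPAM": {
    "Config": [{ "Subnet": "172.17.0.0/16", "Gateway": "172.17.0.1" }]
  },
  "Containers": {
    "3f4a9c0d1e2b": {
      "Name": "web",
      "EndpointID": "a1b2c3d4e5f6",
      "MacAddress": "02:42:ac:11:00:02",
      "IPv4Address": "172.17.0.2/16"
    }
  }
}
```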
libnetwork¶
Docker's networking library:
+------------------------------------------------------+
|                    Docker Engine                     |
+-------------------------+----------------------------+
                          |
+-------------------------+----------------------------+
|                     libnetwork                       |
|                                                      |
|   +----------+    +----------+    +----------+       |
|   |  Bridge  |    | Overlay  |    | Macvlan  |  ...  |
|   |  Driver  |    |  Driver  |    |  Driver  |       |
|   +----------+    +----------+    +----------+       |
+------------------------------------------------------+
Docker Network Commands¶
# List networks
docker network ls
# Inspect network
docker network inspect bridge
# Create network
docker network create my_network
# Connect container to network
docker network connect my_network my_container
# Disconnect container
docker network disconnect my_network my_container
# Remove network
docker network rm my_network
# Run container on specific network
docker run --network my_network nginx
Default Docker Networks¶
# List default networks
docker network ls
NETWORK ID NAME DRIVER SCOPE
abcdef123456 bridge bridge local
1234567890ab host host local
fedcba098765 none null local
3. Docker Network Drivers¶
Bridge Network¶
Default network driver:
+--------------------------------------------------------+
|                          Host                          |
|                                                        |
|  eth0 (10.0.1.10)                                      |
|    |                                                   |
|    |    +-------------------------------------+        |
|    +----|      docker0 (172.17.0.1/16)        |        |
|         |          (Linux Bridge)             |        |
|         +-----+------------+------------+-----+        |
|               |            |            |              |
|        +------+---+ +------+---+ +------+---+          |
|        |  nginx   | |  redis   | |  mysql   |          |
|        |172.17.0.2| |172.17.0.3| |172.17.0.4|          |
|        +----------+ +----------+ +----------+          |
+--------------------------------------------------------+
Creating a custom bridge:
# Create custom bridge network
docker network create \
--driver bridge \
--subnet 192.168.1.0/24 \
--gateway 192.168.1.1 \
my_bridge
# Run containers
docker run -d --name web --network my_bridge nginx
docker run -d --name db --network my_bridge mysql
# Containers can communicate by name
docker exec web ping db
Port publishing:
# Publish port 80 to host port 8080
docker run -d -p 8080:80 --name web nginx
# iptables NAT rule created:
# DNAT: Host:8080 -> Container:80
Host Network¶
Container shares host's network stack:
+--------------------------------------------------------+
|                          Host                          |
|                                                        |
|  eth0 (10.0.1.10)                                      |
|    |                                                   |
|    |  (same network namespace)                         |
|    |                                                   |
|  +-+--------------------------------------+            |
|  |     Container (--network host)         |            |
|  |     Listens on 10.0.1.10:80            |            |
|  +----------------------------------------+            |
+--------------------------------------------------------+
Usage:
# Run with host network
docker run --network host nginx
# No port publishing needed
# Container binds directly to host's ports
Pros:
- Best performance (no NAT)
- Simple configuration

Cons:
- No network isolation
- Port conflicts possible
Overlay Network¶
Multi-host networking:
+----------------------------+    +----------------------------+
|    Host 1 (10.0.1.10)      |    |    Host 2 (10.0.1.11)      |
|                            |    |                            |
|  +----------------------+  |    |  +----------------------+  |
|  |     Container A      |  |    |  |     Container B      |  |
|  |  10.0.9.2 (overlay)  |  |    |  |  10.0.9.3 (overlay)  |  |
|  +----------+-----------+  |    |  +----------+-----------+  |
|             | VXLAN        |    |             | VXLAN        |
|  +----------+-----------+  |    |  +----------+-----------+  |
|  | br0 (overlay bridge) |  |    |  | br0 (overlay bridge) |  |
|  +----------+-----------+  |    |  +----------+-----------+  |
|             |              |    |             |              |
|  eth0 ------+--------------+----+------------ eth0           |
|  10.0.1.10                 |    |             10.0.1.11      |
+----------------------------+    +----------------------------+
        Encapsulated traffic (VXLAN over UDP 4789)
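The VXLAN encapsulation adds a fixed per-packet overhead on the underlay, which is why overlay interfaces are typically configured with a reduced MTU. A quick back-of-the-envelope check (assuming an IPv4 underlay):

```shell
# VXLAN overhead: outer Ethernet + outer IPv4 + UDP + VXLAN headers
eth=14; ip4=20; udp=8; vxlan=8
overhead=$(( eth + ip4 + udp + vxlan ))
# With a standard 1500-byte underlay MTU, the usable overlay MTU shrinks accordingly
echo "overhead=${overhead} bytes, overlay MTU=$(( 1500 - overhead ))"
```

This matches the 1450-byte MTU commonly seen on Docker overlay interfaces.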
Creating overlay network (Docker Swarm):
# Initialize swarm
docker swarm init
# Create overlay network
docker network create \
--driver overlay \
--subnet 10.0.9.0/24 \
my_overlay
# Deploy service on overlay
docker service create \
--name web \
--network my_overlay \
--replicas 3 \
nginx
Macvlan Network¶
Assign MAC addresses to containers:
+--------------------------------------------------------+
|                          Host                          |
|                                                        |
|  eth0 (10.0.1.10) --- Physical Network                 |
|    |                                                   |
|    |  (macvlan in bridge mode)                         |
|    |                                                   |
|  +-+---------------------------------------------+     |
|  |                                               |     |
|  |  +---------+  +---------+  +---------+        |     |
|  |  |  nginx  |  |  redis  |  |  mysql  |        |     |
|  |  |10.0.1.20|  |10.0.1.21|  |10.0.1.22|        |     |
|  |  +---------+  +---------+  +---------+        |     |
|  |  (appears on physical network with own MAC)   |     |
|  +-----------------------------------------------+     |
+--------------------------------------------------------+
Creating macvlan network:
# Create macvlan network
docker network create -d macvlan \
--subnet=10.0.1.0/24 \
--gateway=10.0.1.1 \
-o parent=eth0 \
my_macvlan
# Run container
docker run -d \
--network my_macvlan \
--ip 10.0.1.20 \
nginx
Use cases:
- Legacy applications requiring L2 connectivity
- Containers that monitor the physical network
- Direct network access without NAT
Network Driver Comparison¶
| Driver | Isolation | Multi-host | Performance | Use Case |
|---|---|---|---|---|
| Bridge | Yes | No | Good | Single host, development |
| Host | No | No | Excellent | Performance-critical |
| Overlay | Yes | Yes | Good | Multi-host, Swarm/K8s |
| Macvlan | Yes | No | Excellent | L2 connectivity needed |
| None | Complete | N/A | N/A | No networking required |
4. Kubernetes Networking Model¶
Kubernetes Network Requirements¶
Kubernetes imposes these requirements:
- All pods can communicate with each other without NAT
- All nodes can communicate with all pods without NAT
- Pod sees its own IP as others see it (no NAT)
+--------------------------------------------------------------+
|                       Cluster Network                        |
|                                                              |
|  +-------------------+          +-------------------+        |
|  |      Node 1       |          |      Node 2       |        |
|  |  IP: 10.0.1.10    |          |  IP: 10.0.1.11    |        |
|  |                   |          |                   |        |
|  |  +-------------+  |          |  +-------------+  |        |
|  |  |    Pod A    |  |          |  |    Pod C    |  |        |
|  |  | 10.244.1.2  |--+----------+--| 10.244.2.2  |  |        |
|  |  +-------------+  |          |  +-------------+  |        |
|  |                   |          |                   |        |
|  |  +-------------+  |          |  +-------------+  |        |
|  |  |    Pod B    |  |          |  |    Pod D    |  |        |
|  |  | 10.244.1.3  |  |          |  | 10.244.2.3  |  |        |
|  |  +-------------+  |          |  +-------------+  |        |
|  +-------------------+          +-------------------+        |
|                                                              |
|  Pod A can directly communicate with Pod C (10.244.2.2)      |
+--------------------------------------------------------------+
Pod Networking¶
Each pod gets its own IP:
+-------------------------------------------------+
|                Pod (10.244.1.5)                 |
|                                                 |
|  +---------------+       +----------------+     |
|  |  Container A  |       |  Container B   |     |
|  | localhost:80  |       | localhost:3306 |     |
|  +-------+-------+       +-------+--------+     |
|          |                       |              |
|          +-----------+-----------+              |
|                      |                          |
|            +---------+---------+                |
|            |    Network NS     |                |
|            |       eth0        |                |
|            |    10.244.1.5     |                |
|            +-------------------+                |
+-------------------------------------------------+
Container Network Interface (CNI)¶
CNI is the standard interface for network plugins:
+---------------------------------------------------+
|               Kubernetes (kubelet)                |
+------------------------+--------------------------+
                         |
                         |  CNI Specification
                         |
+------------------------+--------------------------+
|                    CNI Plugin                     |
|      (Calico, Cilium, Flannel, Weave, etc.)       |
|                                                   |
|  Responsibilities:                                |
|  - Allocate IP to pod                             |
|  - Set up network interface                       |
|  - Configure routing                              |
+---------------------------------------------------+
CNI configuration example:
{
"cniVersion": "0.4.0",
"name": "k8s-pod-network",
"type": "calico",
"log_level": "info",
"datastore_type": "kubernetes",
"ipam": {
"type": "calico-ipam"
},
"policy": {
"type": "k8s"
},
"kubernetes": {
"kubeconfig": "/etc/cni/net.d/calico-kubeconfig"
}
}
5. CNI Plugins Comparison¶
Calico¶
Architecture:
+-----------------------------------------------------+
|                  Calico Components                  |
|                                                     |
|  +---------------+       +---------------+          |
|  |     Felix     |       |     BIRD      |          |
|  |    (Agent)    |       | (BGP daemon)  |          |
|  |   - Routes    |       |   - Route     |          |
|  |   - ACLs      |       |     exchange  |          |
|  +---------------+       +---------------+          |
|                                                     |
|  +-----------------------------------+              |
|  |      etcd / Kubernetes API        |              |
|  |           (Datastore)             |              |
|  +-----------------------------------+              |
+-----------------------------------------------------+
Features:
- Pure L3 networking (no overlay)
- BGP route distribution
- Scalable (tested with 1000+ nodes)
- Rich network policy

Network modes:
- IP-in-IP (encapsulation)
- VXLAN
- Direct/Native (no encapsulation)
Cilium¶
Architecture:
+-----------------------------------------------------+
|                  Cilium Components                  |
|                                                     |
|  +-----------------------------------+              |
|  |      eBPF Programs (kernel)       |              |
|  |      - Packet filtering           |              |
|  |      - Load balancing             |              |
|  |      - Network policy             |              |
|  +----------------+------------------+              |
|                   |                                 |
|  +----------------+------------------+              |
|  |           Cilium Agent            |              |
|  |      - Identity management        |              |
|  |      - Policy enforcement         |              |
|  +-----------------------------------+              |
+-----------------------------------------------------+
Features:
- eBPF-based (Linux kernel technology)
- L7 protocol visibility (HTTP, gRPC, Kafka)
- Identity-based security
- Hubble observability

Use cases:
- Advanced security policies
- Service mesh without sidecars
- API-aware filtering
Flannel¶
Architecture:
+-----------------------------------------------------+
|                 Flannel Components                  |
|                                                     |
|  +-----------------------------------+              |
|  |         flanneld (agent)          |              |
|  |      - Allocate subnet            |              |
|  |      - Configure VXLAN/host-gw    |              |
|  +----------------+------------------+              |
|                   |                                 |
|  +----------------+------------------+              |
|  |      etcd / Kubernetes API        |              |
|  |       (Subnet allocation)         |              |
|  +-----------------------------------+              |
+-----------------------------------------------------+
Features:
- Simple overlay network
- Multiple backends (VXLAN, host-gw, UDP)
- Easy to deploy
- No network policy support

Backend comparison:
- VXLAN: works across L3 networks, some encapsulation overhead
- host-gw: requires L2 adjacency, better performance
- UDP: legacy, poor performance
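The backend is chosen in flanneld's network configuration, typically delivered as `net-conf.json` in the kube-flannel ConfigMap. A typical VXLAN configuration (the 10.244.0.0/16 cluster CIDR is the common default, but an assumption here) looks roughly like:

```json
{
  "Network": "10.244.0.0/16",
  "Backend": {
    "Type": "vxlan"
  }
}
```

Switching to `"Type": "host-gw"` trades encapsulation overhead for an L2-adjacency requirement between nodes.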
Weave¶
Architecture:
+-----------------------------------------------------+
|                  Weave Components                   |
|                                                     |
|  +-----------------------------------+              |
|  |     Weave Router (per node)       |              |
|  |      - Overlay network            |              |
|  |      - Mesh topology              |              |
|  |      - Encryption (optional)      |              |
|  +-----------------------------------+              |
|                                                     |
|     Automatic mesh formation between nodes          |
+-----------------------------------------------------+
Features:
- Mesh network topology
- Built-in encryption
- Multicast support
- Network policy support
CNI Plugin Comparison¶
| Plugin | Technology | Performance | Features | Complexity |
|---|---|---|---|---|
| Calico | BGP/eBPF | Excellent | Rich policy, scalable | Medium |
| Cilium | eBPF | Excellent | L7 policy, observability | High |
| Flannel | VXLAN/host-gw | Good | Simple, reliable | Low |
| Weave | Mesh/VXLAN | Good | Encryption, multicast | Medium |
Selection criteria:
- Simple overlay: Flannel
- Network policy + scale: Calico
- L7 visibility: Cilium
- Encryption: Weave
- On-premises with L2: Calico (BGP)
6. Service Discovery and Load Balancing¶
Kubernetes Services¶
Services provide stable endpoints for pods:
+-----------------------------------------------------+
|                 Service (ClusterIP)                 |
|              my-service: 10.96.0.10:80              |
+--------------------------+--------------------------+
                           | (load balances to endpoints)
          +----------------+----------------+
          |                |                |
   +------+-----+   +------+-----+   +------+-----+
   |   Pod A    |   |   Pod B    |   |   Pod C    |
   | 10.244.1.2 |   | 10.244.1.3 |   | 10.244.2.2 |
   +------------+   +------------+   +------------+
Service types:
- ClusterIP (default): Internal cluster IP
- NodePort: Exposes on each node's IP at static port
- LoadBalancer: External load balancer (cloud provider)
- ExternalName: CNAME to external service
Service definition:
apiVersion: v1
kind: Service
metadata:
name: my-service
spec:
type: ClusterIP
selector:
app: nginx
ports:
- protocol: TCP
port: 80 # Service port
targetPort: 80 # Container port
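For comparison, a NodePort variant of the same service only changes the type and optionally pins the node port, which must fall in the default 30000-32767 range. A sketch (the name and the port value 30080 are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service-nodeport
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80          # Service port (ClusterIP)
    targetPort: 80    # Container port
    nodePort: 30080   # Static port opened on every node
```

The service is then reachable at `<any-node-ip>:30080` from outside the cluster, in addition to its ClusterIP inside it.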
kube-proxy¶
kube-proxy implements service load balancing:
+-----------------------------------------------------+
|                        Node                         |
|                                                     |
|  +---------------------------------+                |
|  |           kube-proxy            |                |
|  |  Watches Service/Endpoint API   |                |
|  +----------------+----------------+                |
|                   |                                 |
|  +----------------+----------------+                |
|  |    Packet forwarding rules      |                |
|  |    (iptables / IPVS / eBPF)     |                |
|  +---------------------------------+                |
+-----------------------------------------------------+
iptables Mode¶
Default mode using iptables NAT:
# Example iptables rules for service 10.96.0.10:80
# (Load balance to 3 backends)
# KUBE-SERVICES chain
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m tcp --dport 80 \
-j KUBE-SVC-XYZ
# Service chain (probabilistic load balancing)
-A KUBE-SVC-XYZ -m statistic --mode random --probability 0.33333 \
-j KUBE-SEP-AAA
-A KUBE-SVC-XYZ -m statistic --mode random --probability 0.50000 \
-j KUBE-SEP-BBB
-A KUBE-SVC-XYZ -j KUBE-SEP-CCC
# Endpoint chains (DNAT to pod IPs)
-A KUBE-SEP-AAA -p tcp -m tcp \
-j DNAT --to-destination 10.244.1.2:80
-A KUBE-SEP-BBB -p tcp -m tcp \
-j DNAT --to-destination 10.244.1.3:80
-A KUBE-SEP-CCC -p tcp -m tcp \
-j DNAT --to-destination 10.244.2.2:80
Pros:
- Mature, well-tested
- Kernel-level performance

Cons:
- O(n) rule processing
- Poor performance with 1000+ services
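The cascade of --probability values above (0.33333, then 0.5, then an unconditional rule) is how iptables achieves a uniform split: each rule's probability applies only to traffic that earlier rules did not match, so every endpoint ends up with roughly 1/3 of connections overall. A small simulation sketch of that logic:

```shell
# Simulate kube-proxy's cascaded probability rules for 3 endpoints:
# rule 1 matches 1/3 of connections, rule 2 matches 1/2 of the remainder,
# rule 3 catches everything else -- each endpoint receives ~1/3 overall.
split=$(awk 'BEGIN {
  srand(7)
  for (i = 0; i < 30000; i++) {
    if (rand() < 1.0/3) a++          # KUBE-SEP-AAA
    else if (rand() < 0.5) b++       # KUBE-SEP-BBB
    else c++                         # KUBE-SEP-CCC
  }
  printf "%.3f %.3f %.3f", a/30000, b/30000, c/30000
}')
echo "fraction per endpoint: $split"
```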
IPVS Mode¶
IP Virtual Server for better performance:
+-----------------------------------------------------+
|                 IPVS Virtual Server                 |
|            10.96.0.10:80 (rr scheduling)            |
+--------------------------+--------------------------+
                           |
          +----------------+----------------+
          |                |                |
     10.244.1.2       10.244.1.3       10.244.2.2
     (weight 1)       (weight 1)       (weight 1)
Advantages:
- O(1) lookup complexity
- Better performance at scale
- Multiple scheduling algorithms (rr, lc, dh, sh, etc.)
Enable IPVS mode:
# kube-proxy config
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
scheduler: "rr" # round-robin
CoreDNS¶
DNS-based service discovery:
+-----------------------------------------------------+
|                      CoreDNS                        |
|       Watches Services, creates DNS records         |
+-----------------------------------------------------+

DNS records (as resolved from the default namespace):
- my-service.default.svc.cluster.local -> 10.96.0.10
- my-service.default.svc -> 10.96.0.10
- my-service.default -> 10.96.0.10
- my-service -> 10.96.0.10
# From pod:
nslookup my-service
# Returns: 10.96.0.10
CoreDNS configuration:
# Corefile
.:53 {
errors
health
kubernetes cluster.local in-addr.arpa ip6.arpa {
pods insecure
fallthrough in-addr.arpa ip6.arpa
}
prometheus :9153
forward . /etc/resolv.conf
cache 30
loop
reload
loadbalance
}
7. Network Policies¶
Kubernetes NetworkPolicy¶
Control traffic between pods:
Default: all traffic allowed
+-------+       +-------+       +-------+
| Pod A | ----> | Pod B | ----> | Pod C |
+-------+       +-------+       +-------+

With NetworkPolicy:
+-------+       +-------+       +-------+
| Pod A | ----> | Pod B | --X-> | Pod C |
+-------+       +-------+       +-------+
         allowed         blocked
Basic NetworkPolicy:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: allow-from-frontend
namespace: default
spec:
podSelector:
matchLabels:
app: backend
policyTypes:
- Ingress
ingress:
- from:
- podSelector:
matchLabels:
app: frontend
ports:
- protocol: TCP
port: 8080
Effect:
- Only pods with label app=frontend can access backend on port 8080
- All other traffic to backend is denied
Ingress and Egress Rules¶
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: db-network-policy
spec:
podSelector:
matchLabels:
app: database
policyTypes:
- Ingress
- Egress
ingress:
# Allow from backend pods on port 3306
- from:
- podSelector:
matchLabels:
app: backend
ports:
- protocol: TCP
port: 3306
egress:
# Allow DNS
- to:
- namespaceSelector:
matchLabels:
name: kube-system
ports:
- protocol: UDP
port: 53
# Allow external backup server
- to:
- ipBlock:
cidr: 10.0.5.0/24
ports:
- protocol: TCP
port: 22
Namespace Isolation¶
# Deny all traffic to pods in namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: default-deny-all
namespace: production
spec:
podSelector: {}
policyTypes:
- Ingress
- Egress
# No ingress/egress rules = deny all
# Allow only from same namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: allow-same-namespace
namespace: production
spec:
podSelector: {}
policyTypes:
- Ingress
ingress:
- from:
- podSelector: {} # Same namespace
Calico NetworkPolicy¶
More advanced than Kubernetes NetworkPolicy:
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
name: deny-egress-to-metadata-server
spec:
selector: all()
types:
- Egress
egress:
# Deny access to cloud metadata service
- action: Deny
destination:
nets:
- 169.254.169.254/32
# Allow all other egress
- action: Allow
Calico features:
- Global policies
- Policy ordering
- Layer 7 rules (with Istio)
- Logging/monitoring
8. Service Mesh¶
Service Mesh Architecture¶
+-----------------------------------------------------+
|                    Control Plane                    |
|     (Istio Pilot, Citadel, Galley, Telemetry)       |
+--------------------------+--------------------------+
                           | configuration
          +----------------+----------------+
          |                |                |
     +----+----+      +----+----+      +----+----+
     |  Envoy  |      |  Envoy  |      |  Envoy  |
     |  Proxy  |      |  Proxy  |      |  Proxy  |
     +---------+      +---------+      +---------+
     |  App A  |      |  App B  |      |  App C  |
     +---------+      +---------+      +---------+
Istio Sidecar Pattern¶
+---------------------------------------------------+
|                        Pod                        |
|                                                   |
|  +------------------+      +------------------+   |
|  |   Application    |      |   Envoy Proxy    |   |
|  |   Container      | ---> |   (Sidecar)      |   |
|  |  localhost:8080  |      |   - mTLS         |   |
|  +------------------+      |   - Metrics      |   |
|                            |   - Tracing      |   |
|                            |   - Retries      |   |
|                            +--------+---------+   |
+-------------------------------------+-------------+
                                      |
                              encrypted traffic
Mutual TLS (mTLS)¶
Service-to-service encryption:
  Service A                        Service B
+-------------+                  +-------------+
|     App     |                  |     App     |
|      |      |                  |      |      |
|      v      |   mTLS tunnel    |      v      |
|   Envoy ----+----------------->+---- Envoy   |
|             | (cert exchange)  |             |
+------+------+                  +------+------+
       |                                |
       +--------------------------------+
        encrypted, authenticated traffic
Istio PeerAuthentication:
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
name: default
namespace: default
spec:
mtls:
mode: STRICT # STRICT, PERMISSIVE, or DISABLE
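The mTLS mode can also be scoped to a single workload rather than a whole namespace. A sketch that keeps one legacy app (the `app: legacy` label is hypothetical) on PERMISSIVE while the namespace default stays STRICT:

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: legacy-workload
  namespace: default
spec:
  selector:
    matchLabels:
      app: legacy      # applies only to pods carrying this label
  mtls:
    mode: PERMISSIVE   # accept both plaintext and mTLS during migration
```

Workload-scoped policies take precedence over namespace-wide ones, which makes this pattern useful for incremental mTLS rollouts.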
Traffic Management¶
Virtual Service (routing rules):
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
name: reviews
spec:
hosts:
- reviews
http:
# 90% to v1, 10% to v2 (canary deployment)
- match:
- headers:
end-user:
exact: jason
route:
- destination:
host: reviews
subset: v2
- route:
- destination:
host: reviews
subset: v1
weight: 90
- destination:
host: reviews
subset: v2
weight: 10
Destination Rule (load balancing, circuit breaking):
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
name: reviews
spec:
host: reviews
trafficPolicy:
loadBalancer:
simple: LEAST_REQUEST
connectionPool:
tcp:
maxConnections: 100
http:
http1MaxPendingRequests: 50
maxRequestsPerConnection: 2
outlierDetection:
      consecutive5xxErrors: 5
interval: 30s
baseEjectionTime: 30s
subsets:
- name: v1
labels:
version: v1
- name: v2
labels:
version: v2
Service Mesh Comparison¶
| Feature | Istio | Linkerd | Envoy |
|---|---|---|---|
| Language | Go/C++ | Rust | C++ |
| Complexity | High | Low | Medium |
| Resource usage | Heavy | Light | Medium |
| Features | Extensive | Moderate | Proxy only |
| mTLS | Yes | Yes | Yes |
| Observability | Extensive | Good | Basic |
9. Ingress and Load Balancing¶
Ingress Controller¶
HTTP/HTTPS routing to services:
            Internet
               |
       +-------+-------+
       | LoadBalancer  |
       |  (Cloud LB)   |
       +-------+-------+
               |
 +-------------+-------------+
 |    Ingress Controller     |
 |  (nginx/traefik/haproxy)  |
 +-------------+-------------+
               |
 +-------------+-------------+
 |             |             |
+----+----+ +--+------+ +----+----+
|Service A| |Service B| |Service C|
+---------+ +---------+ +---------+
Ingress resource:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: my-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- host: example.com
http:
paths:
- path: /app1
pathType: Prefix
backend:
service:
name: app1-service
port:
number: 80
- path: /app2
pathType: Prefix
backend:
service:
name: app2-service
port:
number: 80
tls:
- hosts:
- example.com
secretName: tls-secret
Gateway API¶
Next-generation Ingress:
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
name: my-gateway
spec:
gatewayClassName: nginx
listeners:
- name: http
protocol: HTTP
port: 80
- name: https
protocol: HTTPS
port: 443
tls:
certificateRefs:
- name: tls-cert
---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
name: my-route
spec:
parentRefs:
- name: my-gateway
hostnames:
- example.com
rules:
- matches:
- path:
type: PathPrefix
value: /app1
backendRefs:
- name: app1-service
port: 80
External Load Balancer¶
Cloud provider integration:
apiVersion: v1
kind: Service
metadata:
name: my-service
spec:
type: LoadBalancer
selector:
app: nginx
ports:
- protocol: TCP
port: 80
targetPort: 80
# Cloud-specific annotations
annotations:
service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
Result:
- Cloud provider provisions a load balancer
- External IP assigned to the service
- Traffic routed to NodePorts
10. Troubleshooting Container Networks¶
Docker Networking Troubleshooting¶
Check container network:
# Inspect container network
docker inspect <container> | jq '.[0].NetworkSettings'
# Check IP address
docker inspect -f '{{.NetworkSettings.IPAddress}}' <container>
# List connected networks
docker inspect -f '{{range $net, $conf := .NetworkSettings.Networks}}{{$net}} {{end}}' <container>
# Enter container network namespace
docker exec -it <container> bash
ip addr show
ip route show
Test connectivity:
# Ping from one container to another
docker exec container1 ping container2
# Check DNS resolution
docker exec container1 nslookup container2
# Check port connectivity
docker exec container1 nc -zv container2 80
Common issues:
1. Containers can't reach each other:
   - Check if on same network
   - Check firewall rules
   - Verify DNS resolution
2. Container can't reach internet:
   - Check NAT/masquerade rules
   - Verify default route
   - Check DNS configuration
3. Port publishing not working:
   - Verify iptables DNAT rules
   - Check host firewall
   - Confirm port not already in use
Kubernetes Networking Troubleshooting¶
Pod connectivity:
# Check pod IP and network
kubectl get pod <pod> -o wide
# Describe pod (check events)
kubectl describe pod <pod>
# Check pod network interfaces
kubectl exec <pod> -- ip addr show
# Test pod-to-pod connectivity
kubectl exec <pod1> -- ping <pod2-ip>
# Test service connectivity
kubectl exec <pod> -- curl http://my-service
Service troubleshooting:
# Check service endpoints
kubectl get endpoints my-service
# Verify service DNS
kubectl exec <pod> -- nslookup my-service
# Check kube-proxy logs
kubectl logs -n kube-system -l k8s-app=kube-proxy
# Check iptables rules (iptables mode)
kubectl exec -n kube-system <kube-proxy-pod> -- iptables-save | grep <service-name>
CNI troubleshooting:
# Check CNI plugin pods
kubectl get pods -n kube-system | grep calico
kubectl get pods -n kube-system | grep cilium
# Check CNI logs
kubectl logs -n kube-system <cni-pod>
# Verify CNI configuration
cat /etc/cni/net.d/*.conf
NetworkPolicy debugging:
# Check if policy applied
kubectl get networkpolicy
# Describe policy
kubectl describe networkpolicy <policy>
# Test connectivity before/after policy
kubectl exec <pod> -- curl <target>
# Check CNI plugin logs (policy enforcement)
kubectl logs -n kube-system <calico-node-pod>
Network Diagnostic Tools¶
Container debugging image:
# Run debug container with network tools
kubectl run debug --rm -it --image=nicolaka/netshoot -- bash
# Tools included:
# - tcpdump, wireshark
# - curl, wget, httpie
# - nslookup, dig, host
# - netcat, socat
# - iperf3, mtr, traceroute
Ephemeral debug container (K8s 1.23+):
# Attach debug container to existing pod
kubectl debug -it <pod> --image=nicolaka/netshoot --target=<container>
Packet capture:
# Capture traffic on pod interface
kubectl exec <pod> -- tcpdump -i eth0 -w /tmp/capture.pcap
# Copy to local machine
kubectl cp <pod>:/tmp/capture.pcap ./capture.pcap
# Analyze with Wireshark
wireshark capture.pcap
11. Practice Problems¶
Problem 1: Docker Custom Network¶
Create a custom Docker network with:
- Subnet: 172.20.0.0/16
- Gateway: 172.20.0.1
- Run 3 containers: web, api, db
- web should connect to api, api to db
- Test connectivity
Solution:
# Create network
docker network create \
--driver bridge \
--subnet 172.20.0.0/16 \
--gateway 172.20.0.1 \
mynetwork
# Run containers
docker run -d --name db --network mynetwork postgres
docker run -d --name api --network mynetwork my-api
docker run -d --name web --network mynetwork nginx
# Test connectivity
docker exec web ping api
docker exec api ping db
docker exec web curl http://api:8080/health
Problem 2: Kubernetes Service¶
Deploy a web application with:
- 3 nginx pods
- ClusterIP service
- Verify load balancing
Solution:
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: web
spec:
replicas: 3
selector:
matchLabels:
app: web
template:
metadata:
labels:
app: web
spec:
containers:
- name: nginx
image: nginx
ports:
- containerPort: 80
---
# service.yaml
apiVersion: v1
kind: Service
metadata:
name: web-service
spec:
selector:
app: web
ports:
- protocol: TCP
port: 80
targetPort: 80
# Apply
kubectl apply -f deployment.yaml
# Check endpoints
kubectl get endpoints web-service
# Test load balancing
kubectl run test --rm -it --image=busybox -- sh
# In pod:
for i in $(seq 1 10); do
wget -qO- http://web-service | grep 'Server:'
done
# Should see different pod IPs
Problem 3: NetworkPolicy¶
Implement a security policy:
- Allow frontend to access backend on port 8080
- Allow backend to access database on port 5432
- Deny all other traffic
Solution:
# backend-policy.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: backend-policy
spec:
podSelector:
matchLabels:
tier: backend
policyTypes:
- Ingress
- Egress
ingress:
- from:
- podSelector:
matchLabels:
tier: frontend
ports:
- protocol: TCP
port: 8080
egress:
# Allow DNS
- to:
- namespaceSelector: {}
ports:
- protocol: UDP
port: 53
# Allow database
- to:
- podSelector:
matchLabels:
tier: database
ports:
- protocol: TCP
port: 5432
---
# database-policy.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: database-policy
spec:
podSelector:
matchLabels:
tier: database
policyTypes:
- Ingress
ingress:
- from:
- podSelector:
matchLabels:
tier: backend
ports:
- protocol: TCP
port: 5432
Problem 4: Debugging Network Issue¶
Scenario: Pod can't reach external service (example.com)
Troubleshooting steps:
# 1. Check pod IP and interface
kubectl exec <pod> -- ip addr show
kubectl exec <pod> -- ip route show
# 2. Check DNS resolution
kubectl exec <pod> -- nslookup example.com
# If fails, check DNS pods
kubectl get pods -n kube-system -l k8s-app=kube-dns
# 3. Test external connectivity
kubectl exec <pod> -- ping 8.8.8.8
# If fails, check network policy
kubectl get networkpolicy
# 4. Check egress policy
kubectl describe networkpolicy <policy>
# Ensure egress to 0.0.0.0/0 is allowed
# 5. Check NAT/masquerade
# On node:
sudo iptables -t nat -L POSTROUTING -n -v
# Should see MASQUERADE rule for pod CIDR
# 6. Verify CNI configuration
cat /etc/cni/net.d/*.conf
# Check if CNI supports egress
# 7. Check node routing
ip route show
# Should have route for pod CIDR
Problem 5: Service Mesh Traffic Splitting¶
Implement a canary deployment:
- 90% of traffic to v1
- 10% of traffic to v2
- Use Istio
Solution:
# virtual-service.yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
name: my-app
spec:
hosts:
- my-app
http:
- route:
- destination:
host: my-app
subset: v1
weight: 90
- destination:
host: my-app
subset: v2
weight: 10
---
# destination-rule.yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
name: my-app
spec:
host: my-app
subsets:
- name: v1
labels:
version: v1
- name: v2
labels:
version: v2
# Apply configuration
kubectl apply -f virtual-service.yaml
# Generate traffic and observe distribution
for i in $(seq 1 100); do
curl http://my-app/version
done | sort | uniq -c
# Should see approximately 90 v1, 10 v2 responses
Summary¶
Container networking is complex but follows consistent principles:
Key concepts:
1. Network namespaces provide isolation
2. veth pairs and bridges connect containers
3. Docker CNM defines standardized networking
4. Kubernetes networking model requires a flat network
5. CNI plugins implement different approaches (overlay, BGP, eBPF)
6. Services provide stable endpoints and load balancing
7. NetworkPolicies control traffic flow
8. Service meshes add L7 features (mTLS, observability)
9. Ingress provides HTTP/HTTPS routing

Best practices:
- Choose CNI based on requirements (policy, performance, scale)
- Use NetworkPolicies for security
- Monitor network performance
- Implement service mesh for complex microservices
- Plan IP address allocation carefully
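As a sketch of the last point, the arithmetic for one common layout (assumed values: a /16 cluster CIDR carved into one /24 pod subnet per node, as in the 10.244.0.0/16 examples earlier):

```shell
# Hypothetical IP plan: /16 cluster CIDR, /24 per-node pod subnets
cluster_prefix=16
node_prefix=24
node_subnets=$(( 1 << (node_prefix - cluster_prefix) ))   # max nodes the CIDR supports
pods_per_node=$(( (1 << (32 - node_prefix)) - 2 ))        # minus network/broadcast addresses
echo "${node_subnets} node subnets, ${pods_per_node} usable pod IPs each"
```

Running the numbers like this before cluster creation avoids painful CIDR resizing later, since most CNIs cannot change the cluster CIDR in place.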
Container networking continues to evolve with technologies like eBPF (Cilium) and Gateway API, making it more powerful and easier to manage.
Difficulty: ★★★★
Further Reading:
- Kubernetes Network Model: https://kubernetes.io/docs/concepts/cluster-administration/networking/
- CNI Specification: https://github.com/containernetworking/cni
- Istio Documentation: https://istio.io/latest/docs/
- Calico Documentation: https://docs.projectcalico.org/