root@fedora:/home/cilium# cilium status --verbose
KVStore:                 Ok   Disabled
Kubernetes:              Ok   1.24 (v1.24.3) [linux/amd64]
Kubernetes APIs:         ["cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "core/v1::Namespace", "core/v1::Node", "core/v1::Pods", "core/v1::Service", "discovery/v1::EndpointSlice", "networking.k8s.io/v1::NetworkPolicy"]
KubeProxyReplacement:    Strict   [enp1s0 192.168.31.100 (Direct Routing)]
Host firewall:           Disabled
CNI Chaining:            none
Cilium:                  Ok   1.12.0 (v1.12.0-9447cd1)
NodeMonitor:             Listening for events on 4 CPUs with 64x4096 of shared memory
Cilium health daemon:    Ok
IPAM:                    IPv4: 3/254 allocated from 10.0.0.0/24,
Allocated addresses:
  10.0.0.11 (kube-system/coredns-6d4b75cb6d-tbgrr)
  10.0.0.201 (health)
  10.0.0.78 (router)
BandwidthManager:        Disabled
Host Routing:            BPF
Masquerading:            Disabled
Clock Source for BPF:    ktime
Controller Status:       23/23 healthy
  Name                                   Last success   Last error   Count   Message
  cilium-health-ep                       29s ago        never        0       no error
  dns-garbage-collector-job              34s ago        never        0       no error
  endpoint-1313-regeneration-recovery    never          never        0       no error
  endpoint-2868-regeneration-recovery    never          never        0       no error
  endpoint-2945-regeneration-recovery    never          never        0       no error
  endpoint-gc                            3m34s ago      never        0       no error
  ipcache-inject-labels                  3m30s ago      3m33s ago    0       no error
  k8s-heartbeat                          4s ago         never        0       no error
  link-cache                             15s ago        never        0       no error
  metricsmap-bpf-prom-sync               4s ago         never        0       no error
  resolve-identity-1313                  3m29s ago      never        0       no error
  resolve-identity-2868                  3m29s ago      never        0       no error
  resolve-identity-2945                  3m30s ago      never        0       no error
  sync-endpoints-and-host-ips            30s ago        never        0       no error
  sync-lb-maps-with-k8s-services         3m30s ago      never        0       no error
  sync-node-with-ciliumnode (fedora)     3m32s ago      3m33s ago    0       no error
  sync-policymap-1313                    26s ago        never        0       no error
  sync-policymap-2868                    26s ago        never        0       no error
  sync-policymap-2945                    26s ago        never        0       no error
  sync-to-k8s-ciliumendpoint (1313)      9s ago         never        0       no error
  sync-to-k8s-ciliumendpoint (2868)      9s ago         never        0       no error
  sync-to-k8s-ciliumendpoint (2945)      0s ago         never        0       no error
  template-dir-watcher                   never          never        0       no error
Proxy Status:            OK, ip 10.0.0.78, 0 redirects active on ports 10000-20000
Global Identity Range:   min 256, max 65535
Hubble:                  Ok   Current/Max Flows: 377/4095 (9.21%), Flows/s: 1.67   Metrics: Disabled
KubeProxyReplacement Details:
  Status:                Strict
  Socket LB:             Enabled
  Socket LB Protocols:   TCP, UDP
  Devices:               enp1s0 192.168.31.100 (Direct Routing)
  Mode:                  DSR
  Backend Selection:     Maglev (Table Size: 16381)
  Session Affinity:      Enabled
  Graceful Termination:  Enabled
  NAT46/64 Support:      Disabled
  XDP Acceleration:      Disabled
  Services:
  - ClusterIP:      Enabled
  - NodePort:       Enabled (Range: 30000-32767)
  - LoadBalancer:   Enabled
  - externalIPs:    Enabled
  - HostPort:       Enabled
BPF Maps:   dynamic sizing: on (ratio: 0.002500)
  Name                          Size
  Non-TCP connection tracking   65536
  TCP connection tracking       131072
  Endpoint policy               65535
  Events                        4
  IP cache                      512000
  IP masquerading agent         16384
  IPv4 fragmentation            8192
  IPv4 service                  65536
  IPv6 service                  65536
  IPv4 service backend          65536
  IPv6 service backend          65536
  IPv4 service reverse NAT      65536
  IPv6 service reverse NAT      65536
  Metrics                       1024
  NAT                           131072
  Neighbor table                131072
  Global policy                 16384
  Per endpoint policy           65536
  Session affinity              65536
  Signal                        4
  Sockmap                       65535
  Sock reverse NAT              65536
  Tunnel                        65536
Encryption:              Disabled
Cluster health:          2/2 reachable   (2022-08-23T01:59:33Z)
  Name                 IP               Node        Endpoints
  fedora (localhost)   192.168.31.100   reachable   reachable
  knode1               192.168.31.101   reachable   reachable
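The `Controller Status: 23/23 healthy` line above is a quick way to confirm the agent's internal controllers are all running. If you want to check this in a script rather than by eye, one approach is to parse that summary line from the `cilium status` output; the following is a minimal sketch (the `controllers_healthy` helper and its parsing logic are my own illustration, not part of the Cilium CLI):

```python
import re

def controllers_healthy(status_output: str) -> bool:
    """Return True only when the 'Controller Status: X/Y healthy'
    line in `cilium status` output reports X == Y."""
    m = re.search(r"Controller Status:\s*(\d+)/(\d+) healthy", status_output)
    if not m:
        raise ValueError("no 'Controller Status' line found")
    healthy, total = int(m.group(1)), int(m.group(2))
    return healthy == total

# Against the output shown above:
print(controllers_healthy("Controller Status: 23/23 healthy"))  # True
print(controllers_healthy("Controller Status: 22/23 healthy"))  # False
```

In practice you would feed it the captured stdout of `cilium status --verbose`; `cilium status` also supports `-o json` if you prefer structured output over text scraping.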
The result of `cilium-health status --probe` shows that connectivity between the nodes is normal:
root@fedora:/home/cilium# cilium-health status --probe
Probe time:   2022-08-23T02:01:04Z
Nodes:
  fedora (localhost):
    Host connectivity to 192.168.31.100:
      ICMP to stack:   OK, RTT=140.825µs
      HTTP to agent:   OK, RTT=199.54µs
    Endpoint connectivity to 10.0.0.201:
      ICMP to stack:   OK, RTT=128.2µs
      HTTP to agent:   OK, RTT=263.034µs
  knode1:
    Host connectivity to 192.168.31.101:
      ICMP to stack:   OK, RTT=235.981µs
      HTTP to agent:   OK, RTT=330.706µs
    Endpoint connectivity to 10.0.1.251:
      ICMP to stack:   OK, RTT=177.807µs
      HTTP to agent:   OK, RTT=275.869µs
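Each probe line above reports an ICMP and an HTTP round-trip time per target. If you want to track these RTTs over repeated probes (e.g. to spot a slow path between nodes), one option is to extract them from the text output; this is a sketch of my own (the `parse_rtts` helper is hypothetical, not a cilium-health feature):

```python
import re

def parse_rtts(probe_output: str) -> dict:
    """Collect RTT samples, in microseconds, from the
    `cilium-health status --probe` text output shown above.
    Returns e.g. {'ICMP to stack': [140.825, ...], 'HTTP to agent': [...]}."""
    rtts: dict = {}
    pattern = r"(ICMP to stack|HTTP to agent):\s*OK, RTT=([\d.]+)(µs|ms)"
    for kind, value, unit in re.findall(pattern, probe_output):
        # Normalize everything to microseconds.
        us = float(value) * (1000.0 if unit == "ms" else 1.0)
        rtts.setdefault(kind, []).append(us)
    return rtts

sample = (
    "ICMP to stack:   OK, RTT=140.825µs\n"
    "HTTP to agent:   OK, RTT=199.54µs\n"
)
print(parse_rtts(sample))
# {'ICMP to stack': [140.825], 'HTTP to agent': [199.54]}
```

Sub-millisecond RTTs like the ones above are what you would expect between healthy nodes on the same LAN; consistently higher HTTP-to-agent numbers than ICMP numbers can hint at agent-side latency rather than network latency.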