This guide covers how to set up Submariner for the external network use case. In this use case, pods running in a Kubernetes cluster can access external applications outside of the cluster, and vice versa, using DNS resolution supported by Lighthouse or manually using the Globalnet ingress IPs. In addition to connectivity, the source IP of the traffic is also preserved.
Prepare:
In this guide, we will use the following Kubernetes clusters and non-cluster host.
Name | IP | Description |
---|---|---|
cluster-a | 192.168.122.26 | Single-node cluster |
cluster-b | 192.168.122.27 | Single-node cluster |
test-vm | 192.168.122.142 | Linux host |
In this example, everything is deployed in the 192.168.122.0/24 segment, but only cluster-a and test-vm are required to be in the same segment. The other clusters, cluster-b and any additional clusters, can be deployed in different segments or even in other networks on the internet. Clusters can also be multi-node clusters.
Subnets of non-cluster hosts should be distinct from those of the clusters so that the external network CIDR can be specified easily. In this example, cluster-a and cluster-b belong to 192.168.122.0/25 and test-vm belongs to 192.168.122.128/25; therefore, the external network CIDR for this configuration is 192.168.122.128/25. In test environments with just one host, an external network CIDR like 192.168.122.142/32 can be specified. However, the design of the subnets needs to be considered when more hosts are used.
Choose the Pod CIDR and the Service CIDR for the Kubernetes clusters and deploy them.
In this guide, we will use the following CIDRs:
Cluster | Pod CIDR | Service CIDR |
---|---|---|
cluster-a | 10.42.0.0/24 | 10.43.0.0/16 |
cluster-b | 10.42.0.0/24 | 10.43.0.0/16 |
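As a minimal sketch, assuming the clusters are deployed with k3s (an assumption; any distribution that lets you set the Pod and Service CIDRs works), cluster-a could be deployed as follows:
# Assumption: k3s is used; substitute the equivalent flags of your own Kubernetes distribution.
curl -sfL https://get.k3s.io | sh -s - server --cluster-cidr=10.42.0.0/24 --service-cidr=10.43.0.0/16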
Note that this guide uses Globalnet, so overlapping CIDRs are supported.
In this configuration, global IPs are used for access between the gateway node and non-cluster hosts, which means packets are sent to IP addresses that are not part of the actual network segment. To prevent such packets from being dropped, anti-spoofing rules need to be disabled for the hosts and the gateway node.
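How this is done depends on the platform hosting the machines. As one hedged example, if the hosts and the gateway node are OpenStack instances (an assumption; your environment may differ), port security can be disabled on the relevant Neutron ports:
# Assumption: OpenStack environment; PORT_ID is the port of the gateway node or non-cluster host.
openstack port set --no-security-group --disable-port-security PORT_ID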
Ensure that kubeconfig files for both clusters are available.
This guide assumes cluster-a's kubeconfig file is named `kubeconfig.cluster-a` and cluster-b's is named `kubeconfig.cluster-b`.
Install subctl:
Download the subctl binary and make it available on your PATH.
curl -Ls https://get.submariner.io | bash
export PATH=$PATH:~/.local/bin
echo export PATH=\$PATH:~/.local/bin >> ~/.profile
If you have Go and the source code, you can build and install subctl instead:
cd go/src/submariner-io/subctl
go build -o $GOBIN/subctl github.com/submariner-io/subctl/cmd
(and ensure your `go/bin` directory is on your `PATH`).
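Either way, you can confirm that subctl is installed and on your PATH:
subctl version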
Deploy the broker with Globalnet enabled, using cluster-a's kubeconfig:
subctl deploy-broker --kubeconfig kubeconfig.cluster-a --globalnet
When Submariner joins a cluster to the broker via the subctl join
command, it chooses a node on which to install the
gateway by labeling it appropriately. By default, Submariner uses a worker node for the gateway; if there are no worker
nodes, then no gateway is installed unless a node is manually labeled as a gateway. Since we are deploying all-in-one
nodes, there are no worker nodes, so it is necessary to label the single node as a gateway. By default, the node name is
the hostname. In this example, the hostnames are “cluster-a” and “cluster-b”, respectively.
Execute the following on cluster-a:
kubectl label node cluster-a submariner.io/gateway=true
Execute the following on cluster-b:
kubectl label node cluster-b submariner.io/gateway=true
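Optionally, confirm that the labels were applied, for example on cluster-a:
kubectl --kubeconfig kubeconfig.cluster-a get node cluster-a --show-labels | grep submariner.io/gateway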
Carefully review the `CLUSTER_CIDR` and `EXTERNAL_CIDR` values and run:
CLUSTER_CIDR=10.42.0.0/24
EXTERNAL_CIDR=192.168.122.128/25
subctl join --kubeconfig kubeconfig.cluster-a broker-info.subm --clusterid cluster-a --natt=false --clustercidr=${CLUSTER_CIDR},${EXTERNAL_CIDR}
subctl join --kubeconfig kubeconfig.cluster-b broker-info.subm --clusterid cluster-b --natt=false
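Once both clusters have joined, you can check the connection between them with subctl (output varies by environment):
subctl show connections --kubeconfig kubeconfig.cluster-a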
Create a list of upstream DNS servers as `upstreamservers`:
Note that `dnsip` is the IP of the DNS server for the test-vm, which is defined as `nameserver` in `/etc/resolv.conf`.
dnsip=192.168.122.1
lighthousednsip=$(kubectl get svc --kubeconfig kubeconfig.cluster-a -n submariner-operator submariner-lighthouse-coredns -o jsonpath='{.spec.clusterIP}')
cat << EOF > upstreamservers
server=/svc.clusterset.local/$lighthousednsip
server=$dnsip
EOF
Create a ConfigMap from the list:
export KUBECONFIG=kubeconfig.cluster-a
kubectl create configmap external-dnsmasq -n submariner-operator --from-file=upstreamservers
Create a `dns.yaml` as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-dns-cluster-a
  namespace: submariner-operator
  labels:
    app: external-dns-cluster-a
spec:
  replicas: 1
  selector:
    matchLabels:
      app: external-dns-cluster-a
  template:
    metadata:
      labels:
        app: external-dns-cluster-a
    spec:
      containers:
        - name: dnsmasq
          image: registry.access.redhat.com/ubi8/ubi-minimal:latest
          ports:
            - containerPort: 53
          command: [ "/bin/sh", "-c", "microdnf install -y dnsmasq; ln -s /upstreamservers /etc/dnsmasq.d/upstreamservers; dnsmasq -k" ]
          securityContext:
            capabilities:
              add: ["NET_ADMIN"]
          volumeMounts:
            - name: upstreamservers
              mountPath: /upstreamservers
      volumes:
        - name: upstreamservers
          configMap:
            name: external-dnsmasq
---
apiVersion: v1
kind: Service
metadata:
  namespace: submariner-operator
  name: external-dns-cluster-a
spec:
  ports:
    - name: udp
      port: 53
      protocol: UDP
      targetPort: 53
  selector:
    app: external-dns-cluster-a
Use this YAML to create the DNS server and assign a global ingress IP:
kubectl apply -f dns.yaml
subctl export service -n submariner-operator external-dns-cluster-a
Check global ingress IP:
kubectl --kubeconfig kubeconfig.cluster-a get globalingressip external-dns-cluster-a -n submariner-operator
NAME IP
external-dns-cluster-a 242.0.255.251
Modify routing for global CIDR on test-vm:
Note that `subm_gw_ip` is the IP of the gateway node of the cluster that is in the same network segment as the hosts; in this guide's example, it is the node IP of cluster-a. Also, 242.0.0.0/8 is the default globalCIDR.
subm_gw_ip=192.168.122.26
ip r add 242.0.0.0/8 via ${subm_gw_ip}
To persist the above configuration across reboots, check the documentation for your Linux distribution. For example, on CentOS 7, to set a persistent route for eth0, the following command is required:
echo "242.0.0.0/8 via ${subm_gw_ip} dev eth0" >> /etc/sysconfig/network-scripts/route-eth0
Modify `/etc/resolv.conf` on test-vm to change the DNS server for the host. For example:
nameserver 192.168.122.1
nameserver 242.0.255.251
Check that the DNS server itself can be resolved:
nslookup external-dns-cluster-a.submariner-operator.svc.clusterset.local
Server: 242.0.255.251
Address: 242.0.255.251#53
Name: external-dns-cluster-a.submariner-operator.svc.clusterset.local
Address: 10.43.162.46
Start a test HTTP server on test-vm:
# Python 2.x:
python -m SimpleHTTPServer 80
# Python 3.x:
python -m http.server 80
Create a headless Service without a selector, an Endpoints resource, and a ServiceExport so that the test-vm can be accessed from cluster-a:
Note that `Endpoints.subsets.addresses` needs to be set to the IP of the test-vm.
export KUBECONFIG=kubeconfig.cluster-a
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: test-vm
spec:
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  clusterIP: None
EOF
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Endpoints
metadata:
  name: test-vm
subsets:
  - addresses:
      - ip: 192.168.122.142
        hostname: "web0"
    ports:
      - port: 80
        name: "web"
EOF
subctl export service -n default test-vm
`subsets.addresses[*].hostname` and `subsets.ports[*].name` in `Endpoints` must be specified; otherwise, the corresponding `globalingressip` and `endpointslice` won't be created.
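To confirm that the export worked, you can also check for the generated EndpointSlice on cluster-a (the exact resource name is generated by Submariner and may differ):
kubectl --kubeconfig kubeconfig.cluster-a get endpointslices -n default | grep test-vm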
Check global ingress IP for test-vm, on cluster-a:
kubectl get globalingressip
NAME IP
ep-test-vm-192.168.122.142 242.0.255.253
Verify access to test-vm from clusters:
export KUBECONFIG=kubeconfig.cluster-a
kubectl -n default run tmp-shell --rm -i --tty --image quay.io/submariner/nettest -- bash
curl web0.cluster-a.test-vm.default.svc.clusterset.local
curl 242.0.255.253
export KUBECONFIG=kubeconfig.cluster-b
kubectl -n default run tmp-shell --rm -i --tty --image quay.io/submariner/nettest -- bash
curl web0.cluster-a.test-vm.default.svc.clusterset.local
curl 242.0.255.253
On test-vm, check the console log of the HTTP server to verify there are accesses from the pods. The source IPs for these accesses will be one of the global egress IPs of the cluster.
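To see which global egress IPs a cluster has been allocated, you can inspect its ClusterGlobalEgressIP resource; the default resource name used here is an assumption based on Globalnet defaults:
kubectl --kubeconfig kubeconfig.cluster-a get clusterglobalegressip cluster-egress.submariner.io -o yaml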
Create Deployment in cluster-b:
export KUBECONFIG=kubeconfig.cluster-b
kubectl -n default create deployment nginx --image=registry.k8s.io/nginx-slim:0.8
kubectl -n default expose deployment nginx --port=80
subctl export service --namespace default nginx
From test-vm, verify access:
curl nginx.default.svc.clusterset.local
Check the console log of the HTTP server (nginx) to verify there is access from test-vm:
kubectl logs -l app=nginx
The source IP for the access will be the global ingress IP of the endpoint for the test-vm.
A `StatefulSet` uses a headless `Service`. Create a `web.yaml` file as follows:
apiVersion: v1
kind: Service
metadata:
  name: nginx-ss
  labels:
    app.kubernetes.io/instance: nginx-ss
    app.kubernetes.io/name: nginx-ss
spec:
  ports:
    - port: 80
      name: web
  clusterIP: None
  selector:
    app.kubernetes.io/instance: nginx-ss
    app.kubernetes.io/name: nginx-ss
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx-ss"
  replicas: 2
  selector:
    matchLabels:
      app.kubernetes.io/instance: nginx-ss
      app.kubernetes.io/name: nginx-ss
  template:
    metadata:
      labels:
        app.kubernetes.io/instance: nginx-ss
        app.kubernetes.io/name: nginx-ss
    spec:
      containers:
        - name: nginx-ss
          image: registry.k8s.io/nginx-slim:0.8
          ports:
            - containerPort: 80
              name: web
Apply the above YAML to create a web `StatefulSet` with nginx-ss as the headless service:
export KUBECONFIG=kubeconfig.cluster-b
kubectl -n default apply -f web.yaml
subctl export service -n default nginx-ss
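Optionally, confirm that the ServiceExport was created by the subctl export command:
kubectl -n default get serviceexport nginx-ss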
From test-vm, verify access:
curl nginx-ss.default.svc.clusterset.local
curl cluster-b.nginx-ss.default.svc.clusterset.local
curl web-0.cluster-b.nginx-ss.default.svc.clusterset.local
curl web-1.cluster-b.nginx-ss.default.svc.clusterset.local
Check the console log of the HTTP server to verify there are accesses from test-vm:
kubectl logs web-0
kubectl logs web-1
Confirm the global egress IPs for each pod managed by the StatefulSet:
export KUBECONFIG=kubeconfig.cluster-b
kubectl get globalingressip | grep web
pod-web-0 242.1.255.251
pod-web-1 242.1.255.250
nslookup web-0.cluster-b.nginx-ss.default.svc.clusterset.local
Server: 242.0.255.251
Address: 242.0.255.251#53
Name: web-0.cluster-b.nginx-ss.default.svc.clusterset.local
Address: 242.1.255.251
nslookup web-1.cluster-b.nginx-ss.default.svc.clusterset.local
Server: 242.0.255.251
Address: 242.0.255.251#53
Name: web-1.cluster-b.nginx-ss.default.svc.clusterset.local
Address: 242.1.255.250
Verify that the source IP of each access from each pod to test-vm is the same as its global egress IP:
export KUBECONFIG=kubeconfig.cluster-b
kubectl exec -it web-0 -- bash
curl web0.cluster-a.test-vm.default.svc.clusterset.local
curl 242.0.255.253
exit
export KUBECONFIG=kubeconfig.cluster-b
kubectl exec -it web-1 -- bash
curl web0.cluster-a.test-vm.default.svc.clusterset.local
curl 242.0.255.253
exit