This quickstart guide covers the steps required to deploy two OpenShift Container Platform (OCP) clusters: one on AWS and one on OpenStack, both with full stack automation, also known as installer-provisioned infrastructure (IPI). Once the OpenShift clusters are deployed, we deploy Submariner with Service Discovery to interconnect the two clusters. Note that this guide focuses on Submariner deployment on clusters with non-overlapping Pod and Service CIDRs. For connecting clusters with overlapping CIDRs, please refer to the Submariner with Globalnet guide.
Before we begin, the following tools need to be downloaded and added to your $PATH:
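At a minimum, the commands used throughout this guide rely on the AWS CLI (`aws`), the OpenStack command line client, `openshift-install`, and `kubectl`; `subctl` is downloaded in a later step.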
Please ensure that the tools you downloaded above are compatible with your OpenShift Container Platform version. For more information, please refer to the official OpenShift documentation.
Configure the AWS CLI with the settings required to interact with AWS. These include your security credentials, the default AWS Region, and the default output format:
$ aws configure
AWS Access Key ID [None]: ....
AWS Secret Access Key [None]: ....
Default region name [None]: ....
Default output format [None]: text
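To confirm that the AWS CLI picks up these credentials, you can optionally run a quick identity check (a generic AWS CLI command; it simply reports the account and ARN behind the configured credentials):
aws sts get-caller-identity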
In this step you will deploy cluster-a on AWS (any other public cloud can be used instead) using the default IP CIDR ranges:
| Pod CIDR | Service CIDR |
|---|---|
| 10.128.0.0/14 | 172.30.0.0/16 |
openshift-install create install-config --dir cluster-a
openshift-install create cluster --dir cluster-a
When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the `kubeadmin` user, are displayed in your terminal.
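If you would like a quick sanity check that the new cluster is reachable before continuing, you can, for example, list its nodes:
export KUBECONFIG=cluster-a/auth/kubeconfig
kubectl get nodes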
Configure the OpenStack credentials for the command line client. Please refer to the official OpenStack documentation for detailed instructions.
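As a rough illustration, the command line client reads its credentials from a `clouds.yaml` file; a minimal sketch (every value below is a placeholder for your own environment) looks like this:
clouds:
  openstack:
    auth:
      auth_url: https://<keystone-endpoint>/v3
      username: <username>
      password: <password>
      project_name: <project>
      user_domain_name: Default
      project_domain_name: Default
    region_name: <region>
The cloud name used here (`openstack`) is the same name passed to `--cloud-entry` later in this guide.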
In this step you will deploy cluster-b, modifying the default IP CIDRs to avoid IP address conflicts with cluster-a. You can change the IP address blocks and prefixes based on your requirements. You may want to check your IP ranges with a CIDR calculator.
In this example, we will use the following IP ranges:
| Pod CIDR | Service CIDR |
|---|---|
| 10.132.0.0/14 | 172.31.0.0/16 |
openshift-install create install-config --dir cluster-b
Change the Pod network CIDR from 10.128.0.0/14 to 10.132.0.0/14:
sed -i 's/10.128.0.0/10.132.0.0/g' cluster-b/install-config.yaml
Change the Service network CIDR from 172.30.0.0/16 to 172.31.0.0/16:
sed -i 's/172.30.0.0/172.31.0.0/g' cluster-b/install-config.yaml
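After both changes, the networking section of cluster-b/install-config.yaml should look roughly like the following (other fields such as machineNetwork and networkType are left at their defaults and omitted here; the hostPrefix value may differ in your environment):
networking:
  clusterNetwork:
  - cidr: 10.132.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.31.0.0/16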
And finally deploy the cluster:
openshift-install create cluster --dir cluster-b
When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the `kubeadmin` user, are displayed in your terminal.
Download the `subctl` binary and make it available on your PATH.
curl -Ls https://get.submariner.io | bash
export PATH=$PATH:~/.local/bin
echo export PATH=\$PATH:~/.local/bin >> ~/.profile
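You can optionally confirm that the binary is found on your PATH and check which version was installed:
subctl version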
If you have Go and the source code, you can build and install subctl instead:
cd go/src/submariner-io/subctl
go build -o $GOBIN/subctl github.com/submariner-io/subctl/cmd
(and ensure your `go/bin` directory is on your `PATH`).
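For example, since the build above writes the binary to `$GOBIN`, one way to do this is to add that directory to your PATH:
export PATH=$PATH:$GOBIN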
Submariner Gateway nodes need to be able to accept traffic over UDP ports (4500 and 4490 by default). Submariner also uses UDP port 4800 to encapsulate traffic from the worker and master nodes to the Gateway nodes, and TCP port 8080 to retrieve metrics from the Gateway nodes. Additionally, the default OpenShift deployment does not allow assigning an elastic public IP to existing worker nodes, which may be necessary on one end of the tunnel connection.
`subctl cloud prepare` is a command designed to update your OpenShift installer-provisioned infrastructure for Submariner deployments, handling the requirements specified above.
The default EC2 instance type for the Submariner gateway node is `c5d.large`, optimized for better CPU performance, which was found to be a bottleneck for the IPsec and WireGuard drivers. Alternatively, you can choose to deploy using a different instance type.
export KUBECONFIG=cluster-a/auth/kubeconfig
subctl cloud prepare aws --ocp-metadata path/to/cluster-a/metadata.json --natt-port 4747
The default OpenStack compute instance type for the Submariner gateway node is `PnTAE.CPU_4_Memory_8192_Disk_50`. Alternatively, you can choose to deploy using a different instance type.
Make sure you use the appropriate cloud name from `clouds.yaml`; here it is `openstack`.
export KUBECONFIG=cluster-b/auth/kubeconfig
subctl cloud prepare rhos --ocp-metadata path/to/cluster-b/metadata.json --cloud-entry \
openstack --natt-port 4747
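If you want to see which node was prepared as the Submariner gateway, you can optionally list the nodes carrying the gateway label (`submariner.io/gateway=true` is the label Submariner uses for gateway nodes), with KUBECONFIG pointing at the cluster you want to inspect:
kubectl get nodes -l submariner.io/gateway=true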
To install Submariner with multi-cluster Service Discovery, follow the steps below:
subctl deploy-broker --kubeconfig cluster-a/auth/kubeconfig
subctl join --kubeconfig cluster-a/auth/kubeconfig broker-info.subm --clusterid cluster-a --nattport 4747
subctl join --kubeconfig cluster-b/auth/kubeconfig broker-info.subm --clusterid cluster-b --nattport 4747
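Once both clusters have joined, you can optionally take a quick look at the resulting inter-cluster connections from either side, for example:
export KUBECONFIG=cluster-a/auth/kubeconfig
subctl show connections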
To manually verify the deployment, follow the steps below using either a headless or ClusterIP `nginx` service deployed in `cluster-b`.
export KUBECONFIG=cluster-b/auth/kubeconfig
kubectl -n default create deployment nginx --image=nginxinc/nginx-unprivileged:stable-alpine
kubectl -n default expose deployment nginx --port=8080
subctl export service --namespace default nginx
export KUBECONFIG=cluster-b/auth/kubeconfig
kubectl -n default create deployment nginx --image=nginxinc/nginx-unprivileged:stable-alpine
kubectl -n default expose deployment nginx --port=8080 --cluster-ip=None
subctl export service --namespace default nginx
Run `nettest` from `cluster-a` to access the `nginx` service:
export KUBECONFIG=cluster-a/auth/kubeconfig
kubectl -n default run tmp-shell --rm -i --tty --image quay.io/submariner/nettest -- /bin/bash
curl nginx.default.svc.clusterset.local:8080
To access a Service in a specific cluster, prefix the query with `<cluster-id>` as follows:
curl cluster-b.nginx.default.svc.clusterset.local:8080
A StatefulSet uses a headless Service. Create a `web.yaml` as follows:
apiVersion: v1
kind: Service
metadata:
  name: nginx-ss
  labels:
    app.kubernetes.io/instance: nginx-ss
    app.kubernetes.io/name: nginx-ss
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app.kubernetes.io/instance: nginx-ss
    app.kubernetes.io/name: nginx-ss
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx-ss"
  replicas: 2
  selector:
    matchLabels:
      app.kubernetes.io/instance: nginx-ss
      app.kubernetes.io/name: nginx-ss
  template:
    metadata:
      labels:
        app.kubernetes.io/instance: nginx-ss
        app.kubernetes.io/name: nginx-ss
    spec:
      containers:
      - name: nginx-ss
        image: nginxinc/nginx-unprivileged:stable-alpine
        ports:
        - containerPort: 80
          name: web
Use this YAML to create a StatefulSet `web` with `nginx-ss` as the headless Service.
export KUBECONFIG=cluster-a/auth/kubeconfig
kubectl -n default apply -f web.yaml
subctl export service -n default nginx-ss
curl nginx-ss.default.svc.clusterset.local:8080
To access the Service in a specific cluster, prefix the query with `<cluster-id>`:
curl cluster-a.nginx-ss.default.svc.clusterset.local:8080
To access an individual pod in a specific cluster, prefix the query with `<pod-hostname>.<cluster-id>`:
curl web-0.cluster-a.nginx-ss.default.svc.clusterset.local:8080
The contexts in both kubeconfig files are named `admin` and need to be modified before running the `verify` command. Here is how this can be done using `yq`:
yq e -i '.contexts[0].name = "cluster-a" | .current-context = "cluster-a"' cluster-a/auth/kubeconfig
yq e -i '.contexts[0].context.user = "admin-a" | .users[0].name = "admin-a"' cluster-a/auth/kubeconfig
yq e -i '.contexts[0].name = "cluster-b" | .current-context = "cluster-a"' cluster-b/auth/kubeconfig
yq e -i '.contexts[0].context.user = "admin-b" | .users[0].name = "admin-b"' cluster-b/auth/kubeconfig
(if you’re using `yq` 4.18.1 or later, you can use `yq -i` instead of `yq e -i`).
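You can optionally confirm the resulting context names before running the verification, for example:
kubectl config get-contexts --kubeconfig cluster-a/auth/kubeconfig
kubectl config get-contexts --kubeconfig cluster-b/auth/kubeconfig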
More generally, see the Kubernetes documentation on accessing multiple clusters using configuration files.
This will perform automated verification between the clusters:
export KUBECONFIG=cluster-a/auth/kubeconfig:cluster-b/auth/kubeconfig
subctl verify --context cluster-a --tocontext cluster-b --only service-discovery,connectivity --verbose
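If the verification reports failures, subctl's diagnostic checks can help narrow down the cause; for example, run them against one cluster at a time:
export KUBECONFIG=cluster-a/auth/kubeconfig
subctl diagnose all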