This quickstart guide covers the necessary steps to deploy two OpenShift Container Platform (OCP) clusters on Azure with full stack automation, also known as installer-provisioned infrastructure (IPI). Once the OpenShift clusters are deployed, we deploy Submariner with Service Discovery to interconnect the two clusters. Note that this guide focuses on Submariner deployment on clusters with non-overlapping Pod and Service CIDRs. For connecting clusters with overlapping CIDRs, please refer to the Submariner with Globalnet guide.
Before we begin, the following tools need to be downloaded and added to your $PATH:
Please ensure that the tools you downloaded above are compatible with your OpenShift Container Platform version. For more information, please refer to the official OpenShift documentation.
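For example, you can confirm the client tool versions locally (assuming oc and openshift-install are among the tools you downloaded; adjust for your own tool set):
openshift-install version
oc version --client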
Create a service principal and configure its access to Azure resources. Output the result in an Azure SDK compatible auth file. Please refer to the official OpenShift on Azure documentation for details.
az ad sp create-for-rbac --sdk-auth > my.auth
In this step you will deploy cluster-a using the default IP CIDR ranges:
| Pod CIDR | Service CIDR |
|---|---|
| 10.128.0.0/14 | 172.30.0.0/16 |
openshift-install create install-config --dir cluster-a
openshift-install create cluster --dir cluster-a
When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal.
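As an optional sanity check, you can point kubectl at the generated kubeconfig and confirm that all nodes are Ready before continuing:
kubectl --kubeconfig cluster-a/auth/kubeconfig get nodes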
In this step you will deploy cluster-b, modifying the default IP CIDRs to avoid IP address conflicts with cluster-a. You can change the IP address blocks and prefixes based on your requirements. For more information on IPv4 CIDR conversion, please check this page.
In this example, we will use the following IP ranges:
| Pod CIDR | Service CIDR |
|---|---|
| 10.132.0.0/14 | 172.31.0.0/16 |
openshift-install create install-config --dir cluster-b
Change the Pod network CIDR from 10.128.0.0/14 to 10.132.0.0/14:
sed -i 's/10.128.0.0/10.132.0.0/g' cluster-b/install-config.yaml
Change the Service network CIDR from 172.30.0.0/16 to 172.31.0.0/16:
sed -i 's/172.30.0.0/172.31.0.0/g' cluster-b/install-config.yaml
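To confirm that both substitutions took effect before deploying, you can grep for the new CIDRs (a quick sanity check; the surrounding key names in install-config.yaml may vary by OCP version):
grep -E '10\.132\.0\.0/14|172\.31\.0\.0/16' cluster-b/install-config.yaml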
And finally deploy the cluster:
openshift-install create cluster --dir cluster-b
When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal.
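If you want to double-check that cluster-b picked up the modified CIDRs, you can query its cluster network configuration (a sketch; the exact output formatting depends on your OCP version):
oc --kubeconfig cluster-b/auth/kubeconfig get network.config cluster -o jsonpath='{.spec.clusterNetwork[*].cidr} {.spec.serviceNetwork[*]}'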
Download the subctl binary and make it available on your PATH.
curl -Ls https://get.submariner.io | bash
export PATH=$PATH:~/.local/bin
echo export PATH=\$PATH:~/.local/bin >> ~/.profile
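You can confirm the binary is available on your PATH and check which version was installed:
subctl version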
If you have Go and the source code, you can build and install subctl instead:
cd go/src/submariner-io/subctl
go build -o $GOBIN/subctl github.com/submariner-io/subctl/cmd
(and ensure your go/bin directory is on your PATH).
Submariner Gateway nodes need to be able to accept traffic over UDP ports (4500 and 4490 by default). Submariner also uses UDP port 4800 to encapsulate traffic from the worker and master nodes to the Gateway nodes, and TCP port 8080 to retrieve metrics from the Gateway nodes. Additionally, the default OpenShift deployment does not allow assigning a public IP to existing worker nodes, which may be necessary on one end of the tunnel connection.
subctl cloud prepare is a command designed to update your OpenShift installer-provisioned infrastructure for Submariner deployments, handling the requirements specified above.
Run the command for cluster-a:
export KUBECONFIG=cluster-a/auth/kubeconfig
subctl cloud prepare azure --ocp-metadata cluster-a/metadata.json --auth-file my.auth
Run the command for cluster-b:
export KUBECONFIG=cluster-b/auth/kubeconfig
subctl cloud prepare azure --ocp-metadata cluster-b/metadata.json --auth-file my.auth
To install Submariner with multi-cluster Service Discovery, follow the steps below:
subctl deploy-broker --kubeconfig cluster-a/auth/kubeconfig
subctl join --kubeconfig cluster-a/auth/kubeconfig broker-info.subm --clusterid cluster-a
subctl join --kubeconfig cluster-b/auth/kubeconfig broker-info.subm --clusterid cluster-b
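Once both clusters have joined, you can review gateway, connection, and version status from either cluster, for example:
export KUBECONFIG=cluster-a/auth/kubeconfig
subctl show all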
To manually verify the deployment, follow the steps below using either a headless or ClusterIP nginx service deployed in cluster-b.
export KUBECONFIG=cluster-b/auth/kubeconfig
kubectl -n default create deployment nginx --image=nginxinc/nginx-unprivileged:stable-alpine
kubectl -n default expose deployment nginx --port=8080
subctl export service --namespace default nginx
export KUBECONFIG=cluster-b/auth/kubeconfig
kubectl -n default create deployment nginx --image=nginxinc/nginx-unprivileged:stable-alpine
kubectl -n default expose deployment nginx --port=8080 --cluster-ip=None
subctl export service --namespace default nginx
Run nettest from cluster-a to access the nginx service:
export KUBECONFIG=cluster-a/auth/kubeconfig
kubectl -n default run tmp-shell --rm -i --tty --image quay.io/submariner/nettest -- /bin/bash
curl nginx.default.svc.clusterset.local:8080
To access a Service in a specific cluster, prefix the query with <cluster-id> as follows:
curl cluster-b.nginx.default.svc.clusterset.local:8080
A StatefulSet uses a headless Service. Create a web.yaml as follows:
apiVersion: v1
kind: Service
metadata:
  name: nginx-ss
  labels:
    app.kubernetes.io/instance: nginx-ss
    app.kubernetes.io/name: nginx-ss
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app.kubernetes.io/instance: nginx-ss
    app.kubernetes.io/name: nginx-ss
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx-ss"
  replicas: 2
  selector:
    matchLabels:
      app.kubernetes.io/instance: nginx-ss
      app.kubernetes.io/name: nginx-ss
  template:
    metadata:
      labels:
        app.kubernetes.io/instance: nginx-ss
        app.kubernetes.io/name: nginx-ss
    spec:
      containers:
      - name: nginx-ss
        image: nginxinc/nginx-unprivileged:stable-alpine
        ports:
        - containerPort: 80
          name: web
Use this YAML to create a StatefulSet web with nginx-ss as the headless Service.
export KUBECONFIG=cluster-a/auth/kubeconfig
kubectl -n default apply -f web.yaml
subctl export service -n default nginx-ss
curl nginx-ss.default.svc.clusterset.local:8080
To access the Service in a specific cluster, prefix the query with <cluster-id>:
curl cluster-a.nginx-ss.default.svc.clusterset.local:8080
To access an individual pod in a specific cluster, prefix the query with <pod-hostname>.<cluster-id>:
curl web-0.cluster-a.nginx-ss.default.svc.clusterset.local:8080
The contexts in both kubeconfig files are named admin and need to be modified before running the verify command.
Here is how this can be done using yq:
yq e -i '.contexts[0].name = "cluster-a" | .current-context = "cluster-a"' cluster-a/auth/kubeconfig
yq e -i '.contexts[0].context.user = "admin-a" | .users[0].name = "admin-a"' cluster-a/auth/kubeconfig
yq e -i '.contexts[0].name = "cluster-b" | .current-context = "cluster-a"' cluster-b/auth/kubeconfig
yq e -i '.contexts[0].context.user = "admin-b" | .users[0].name = "admin-b"' cluster-b/auth/kubeconfig
(if you’re using yq 4.18.1 or later, you can use yq -i instead of yq e -i).
More generally, see the Kubernetes documentation on accessing multiple clusters using configuration files.
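Before running the verification, you can confirm the renamed contexts with kubectl:
kubectl config get-contexts --kubeconfig cluster-a/auth/kubeconfig
kubectl config get-contexts --kubeconfig cluster-b/auth/kubeconfig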
This will perform automated verifications between the clusters.
export KUBECONFIG=cluster-a/auth/kubeconfig:cluster-b/auth/kubeconfig
subctl verify --context cluster-a --tocontext cluster-b --only service-discovery,connectivity --verbose