This quickstart guide covers the steps needed to deploy two OpenShift Container Platform (OCP) clusters: one on VMware vSphere with user-provisioned infrastructure (UPI) and the other on AWS with full-stack automation, also known as installer-provisioned infrastructure (IPI). Once the OpenShift clusters are deployed, we deploy Submariner with Service Discovery to interconnect the two clusters.
Before we begin, the following tools need to be downloaded and added to your $PATH:
Please ensure that the tools you downloaded above are compatible with your OpenShift Container Platform version. For more information, please refer to the official OpenShift documentation.
In this step you will deploy cluster-a using the default IP CIDR ranges:
| Pod CIDR | Service CIDR |
|---|---|
| 10.128.0.0/14 | 172.30.0.0/16 |
Before you deploy an OpenShift Container Platform cluster that uses user-provisioned infrastructure, you must create the underlying infrastructure. Follow the OpenShift documentation for installation instructions on supported versions of vSphere.
Submariner Gateway nodes need to be able to accept IPsec traffic. For on-premises clusters behind corporate firewalls, the default IPsec UDP ports might be blocked. To overcome this, Submariner supports NAT Traversal (NAT-T) with the option to set custom non-standard ports. In this example, we use UDP port 4501 for NAT-T and UDP port 501 for IKE. Ensure that these ports are allowed on the gateway node and on the corporate firewall.
Submariner also uses VXLAN to encapsulate traffic from the worker and master nodes to the Gateway nodes. Ensure that the firewall configuration on the vSphere cluster allows UDP port 4800 across all nodes in the cluster in both directions.
| Protocol | Port | Description |
|---|---|---|
| UDP | 4800 | Overlay network for inter-cluster traffic |
| UDP | 4501 | IPsec NAT-T traffic |
| UDP | 501 | IKE traffic |
When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the `kubeadmin` user, display in your terminal.
Configure the AWS CLI with the settings required to interact with AWS. These include your security credentials, the default AWS Region, and the default output format:
$ aws configure
AWS Access Key ID [None]: ....
AWS Secret Access Key [None]: ....
Default region name [None]: ....
Default output format [None]: text
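If you prefer a non-interactive setup, the AWS CLI can also pick up credentials from standard environment variables instead of `aws configure`; all values below are placeholders:

```shell
# Non-interactive alternative to `aws configure`: the AWS CLI reads these
# standard environment variables. All values below are placeholders.
export AWS_ACCESS_KEY_ID="AKIA_PLACEHOLDER"
export AWS_SECRET_ACCESS_KEY="PLACEHOLDER_SECRET"
export AWS_DEFAULT_REGION="us-east-1"
export AWS_DEFAULT_OUTPUT="text"
```

Note that environment variables take precedence over the shared credentials file, so unset them later if you want your `aws configure` settings to apply.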
In this step you will deploy cluster-b, modifying the default IP CIDRs to avoid IP address conflicts with cluster-a. You can change the IP address blocks and prefixes based on your requirements. For more information on IPv4 CIDR conversion, please check this page.
In this example, we will use the following IP ranges:
| Pod CIDR | Service CIDR |
|---|---|
| 10.132.0.0/14 | 172.31.0.0/16 |
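The key requirement is that cluster-b's CIDRs must not overlap cluster-a's. As a sanity check, the overlap test can be sketched in plain bash (the helper functions below are illustrative, not part of any Submariner tooling):

```shell
# Hypothetical helper: convert a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() {
  local IFS=.
  read -r a b c d <<< "$1"
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

# Print the first and last addresses covered by a CIDR block, as integers.
cidr_range() {
  local ip=${1%/*} prefix=${1#*/}
  local base size
  base=$(ip_to_int "$ip")
  size=$(( 1 << (32 - prefix) ))
  echo "$base $(( base + size - 1 ))"
}

# Report whether two CIDR blocks overlap.
overlaps() {
  local a1 a2 b1 b2
  read -r a1 a2 <<< "$(cidr_range "$1")"
  read -r b1 b2 <<< "$(cidr_range "$2")"
  (( a1 <= b2 && b1 <= a2 )) && echo "overlap" || echo "ok"
}

overlaps 10.128.0.0/14 10.132.0.0/14   # cluster-a vs cluster-b Pod CIDRs → ok
overlaps 172.30.0.0/16 172.31.0.0/16   # cluster-a vs cluster-b Service CIDRs → ok
```

10.128.0.0/14 covers 10.128.0.0 through 10.131.255.255, so 10.132.0.0/14 starts exactly one address later, which is why the chosen ranges are safe.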
openshift-install create install-config --dir cluster-b
Change the Pod network CIDR from 10.128.0.0/14 to 10.132.0.0/14:
sed -i 's/10.128.0.0/10.132.0.0/g' cluster-b/install-config.yaml
Change the Service network CIDR from 172.30.0.0/16 to 172.31.0.0/16:
sed -i 's/172.30.0.0/172.31.0.0/g' cluster-b/install-config.yaml
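To see what these substitutions do, here is a self-contained sketch against a minimal networking stanza of the kind openshift-install generates (the real install-config.yaml contains many more fields):

```shell
# Illustrative only: a minimal networking stanza like the one
# openshift-install writes into install-config.yaml.
cat > /tmp/install-config-sample.yaml <<'EOF'
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
EOF

# Apply the same substitutions as above (GNU sed syntax, as on Linux).
sed -i 's/10.128.0.0/10.132.0.0/g' /tmp/install-config-sample.yaml
sed -i 's/172.30.0.0/172.31.0.0/g' /tmp/install-config-sample.yaml

# Confirm the new CIDRs are in place; prints both updated lines.
grep -E '10\.132\.0\.0/14|172\.31\.0\.0/16' /tmp/install-config-sample.yaml
```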
And finally deploy the cluster:
openshift-install create cluster --dir cluster-b
When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the `kubeadmin` user, display in your terminal.
Submariner Gateway nodes need to be able to accept traffic over UDP ports (4500 and 500 by default) when using IPsec. Submariner also uses UDP port 4800 to encapsulate traffic from the worker and master nodes to the Gateway nodes, and TCP port 8080 to retrieve metrics from the Gateway nodes. Additionally, the default OpenShift deployment does not allow assigning an elastic public IP to existing worker nodes, which may be necessary on one end of the IPsec connection.
`prep_for_subm.sh` is a script designed to update your OpenShift installer-provisioned AWS infrastructure for Submariner deployments, handling the requirements specified above.
Download the `prep_for_subm.sh` script and set permissions:

curl https://raw.githubusercontent.com/submariner-io/submariner/master/tools/openshift/ocp-ipi-aws/prep_for_subm.sh -L -O
chmod a+x ./prep_for_subm.sh
Please note that `oc`, `aws-cli`, `terraform`, and `wget` need to be installed before the `prep_for_subm.sh` script can be run. Also note that the script is known to work with Terraform version 0.12; the maximum compatible version is 0.12.12.
The script deploys an `m5n.large` EC2 instance type by default, optimized for improved network throughput and packet rate performance, for the Submariner gateway node. Please ensure that the AWS Region you deploy to supports this instance type. Alternatively, you can customize the AWS instance type.
Run the `prep_for_subm.sh` script for cluster-b:

export IPSEC_NATT_PORT=4501
export IPSEC_IKE_PORT=501
./prep_for_subm.sh cluster-b # respond "yes" when Terraform asks for approval, or otherwise add the -auto-approve flag
curl -Ls https://get.submariner.io | bash
export PATH=$PATH:~/.local/bin
echo export PATH=\$PATH:~/.local/bin >> ~/.profile
To install Submariner with multi-cluster service discovery, follow the steps below:
subctl deploy-broker --kubeconfig cluster-b/auth/kubeconfig --service-discovery
subctl join --kubeconfig cluster-b/auth/kubeconfig broker-info.subm --ikeport 501 --nattport 4501
subctl join --kubeconfig cluster-a/auth/kubeconfig broker-info.subm --ikeport 501 --nattport 4501
To manually verify the deployment, follow the steps below using either a headless or ClusterIP `nginx` service deployed in `cluster-b`.
export KUBECONFIG=cluster-b/auth/kubeconfig
kubectl -n default create deployment nginx --image=nginxinc/nginx-unprivileged:stable-alpine
kubectl -n default expose deployment nginx --port=8080
subctl export service --namespace default nginx
Note that headless Services can only be exported on non-globalnet deployments.
export KUBECONFIG=cluster-b/auth/kubeconfig
kubectl -n default create deployment nginx --image=nginxinc/nginx-unprivileged:stable-alpine
kubectl -n default expose deployment nginx --port=8080 --cluster-ip=None
subctl export service --namespace default nginx
Run `nettest` from `cluster-a` to access the `nginx` service:
export KUBECONFIG=cluster-a/auth/kubeconfig
kubectl -n default run tmp-shell --rm -i --tty --image quay.io/submariner/nettest -- /bin/bash
curl nginx.default.svc.clusterset.local:8080
To access a Service in a specific cluster, prefix the query with `<cluster-id>` as follows:
curl cluster-b.nginx.default.svc.clusterset.local:8080
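The DNS names above follow a fixed pattern: `<service>.<namespace>.svc.clusterset.local`, optionally prefixed with a cluster ID. A tiny illustrative helper (not part of subctl) makes the pattern explicit:

```shell
# Hypothetical helper that builds the clusterset DNS name for an exported
# service, optionally scoped to a specific cluster.
clusterset_dns() {
  local svc=$1 ns=$2 cluster=${3:-}
  if [ -n "$cluster" ]; then
    echo "${cluster}.${svc}.${ns}.svc.clusterset.local"
  else
    echo "${svc}.${ns}.svc.clusterset.local"
  fi
}

clusterset_dns nginx default            # → nginx.default.svc.clusterset.local
clusterset_dns nginx default cluster-b  # → cluster-b.nginx.default.svc.clusterset.local
```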
A StatefulSet uses a headless Service. Create a `web.yaml` as follows:
apiVersion: v1
kind: Service
metadata:
  name: nginx-ss
  labels:
    app.kubernetes.io/instance: nginx-ss
    app.kubernetes.io/name: nginx-ss
spec:
  ports:
  - port: 8080
    name: web
  clusterIP: None
  selector:
    app.kubernetes.io/instance: nginx-ss
    app.kubernetes.io/name: nginx-ss
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx-ss"
  replicas: 2
  selector:
    matchLabels:
      app.kubernetes.io/instance: nginx-ss
      app.kubernetes.io/name: nginx-ss
  template:
    metadata:
      labels:
        app.kubernetes.io/instance: nginx-ss
        app.kubernetes.io/name: nginx-ss
    spec:
      containers:
      - name: nginx-ss
        image: nginxinc/nginx-unprivileged:stable-alpine
        ports:
        - containerPort: 8080
          name: web
Use this YAML to create a StatefulSet `web` with `nginx-ss` as the headless Service.
export KUBECONFIG=cluster-a/auth/kubeconfig
kubectl -n default apply -f web.yaml
subctl export service -n default nginx-ss
curl nginx-ss.default.svc.clusterset.local:8080
To access the Service in a specific cluster, prefix the query with `<cluster-id>`:
curl cluster-a.nginx-ss.default.svc.clusterset.local:8080
To access an individual pod in a specific cluster, prefix the query with `<pod-hostname>.<cluster-id>`:
curl web-0.cluster-a.nginx-ss.default.svc.clusterset.local:8080
The following command performs automated verifications between the clusters:
subctl verify cluster-a/auth/kubeconfig cluster-b/auth/kubeconfig --only service-discovery,connectivity --verbose