OpenShift with Service Discovery (vSphere/AWS)

In this quickstart guide, we cover the steps necessary to deploy OpenShift Container Platform (OCP) on vSphere and AWS. Once the OCP clusters are deployed, we show how to deploy Submariner and connect the two clusters.

OpenShift Prerequisites

Before proceeding, download the following prerequisites and make sure the command-line tools are available on your $PATH:

  1. openshift-installer
  2. pull secret
  3. oc tools
  4. aws cli

Please ensure that the tools you downloaded above are compatible with the OpenShift Container Platform version you intend to deploy.
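
As a quick sanity check, you can confirm the versions of the tools you downloaded (output formats vary by release):

openshift-install version
oc version --client
aws --version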

Deploy Cluster on vSphere (OnPrem)

Create the necessary infrastructure on vSphere and ensure that your machines have direct internet access before starting the installation. To deploy OCP 4.4, follow the instructions shown here.

Assuming that you deployed the cluster (say, cluster-a) with the default network configuration, the Pod and Service CIDRs would be:

Pod CIDR Service CIDR
10.128.0.0/14 172.30.0.0/16
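
Once cluster-a is up, you can confirm the CIDRs actually in use. A minimal sketch, assuming cluster-a's kubeconfig is at cluster-a/auth/kubeconfig as elsewhere in this guide:

export KUBECONFIG=cluster-a/auth/kubeconfig
oc get network.config.openshift.io cluster -o jsonpath='{.spec.clusterNetwork[*].cidr}{" "}{.spec.serviceNetwork[*]}{"\n"}'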

Submariner creates a VXLAN overlay network in the local cluster and uses port 4800/UDP to encapsulate traffic from the worker nodes to the gateway nodes, preserving the source IP of inter-cluster traffic. Ensure that the firewall configuration on the vSphere cluster allows 4800/UDP between all of the worker nodes.

Protocol Port Description
UDP 4800 overlay network for inter-cluster traffic
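
Once Submariner is installed later in this guide, newer subctl releases also provide a diagnose command that can check this path for you. A hedged sketch; availability and exact syntax depend on your subctl version:

export KUBECONFIG=cluster-a/auth/kubeconfig
subctl diagnose firewall intra-cluster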

Although we are using the default OCP network configuration on vSphere, you can also install OCP on vSphere with a custom network configuration, as shown here.

Deploy Cluster on AWS

Configure the AWS CLI with the appropriate values:

$ aws configure
AWS Access Key ID [None]: ....
AWS Secret Access Key [None]: ....
Default region name [None]: ....
Default output format [None]: text

For more details, follow this link:

https://docs.openshift.com/container-platform/4.4/installing/installing_aws/installing-aws-account.html
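
To confirm that the CLI picks up your credentials and region, a quick check:

aws sts get-caller-identity
aws ec2 describe-availability-zones --output table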

In this step, we modify the default Pod and Service CIDRs so that they do not overlap with those of cluster-a, and deploy cluster-b on AWS.

Pod CIDR Service CIDR
10.132.0.0/14 172.31.0.0/16

First, generate the install config:

openshift-install create install-config --dir cluster-b

Change the Pod IP network. Note that it is a /14 range by default, so the second octet ("128") must be incremented in steps of 4, for example: 10.132.0.0, 10.136.0.0, 10.140.0.0, …

sed -i 's/10.128.0.0/10.132.0.0/g' cluster-b/install-config.yaml

Change the Service IP network. This is a /16 range by default, so the second octet ("30") can simply be incremented by 1, for example: 172.31.0.0, 172.32.0.0, 172.33.0.0, …

sed -i 's/172.30.0.0/172.31.0.0/g' cluster-b/install-config.yaml
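
Before deploying, it is worth confirming that both replacements landed in the networking section of the install config. A small sketch; the exact fields and their order depend on the installer version:

grep -A 10 '^networking:' cluster-b/install-config.yaml
# Expect clusterNetwork cidr: 10.132.0.0/14 and serviceNetwork: 172.31.0.0/16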

And finally, deploy the cluster:

openshift-install create cluster --dir cluster-b
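
After the installer finishes, you can confirm that the customized CIDRs took effect on cluster-b, for example:

export KUBECONFIG=cluster-b/auth/kubeconfig
oc get nodes -o wide
# Re-check the Pod/Service CIDRs the same way as for cluster-a above:
oc get network.config.openshift.io cluster -o jsonpath='{.spec.clusterNetwork[*].cidr}{" "}{.spec.serviceNetwork[*]}{"\n"}'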

Make your AWS cluster ready for Submariner

Submariner gateway nodes need to be able to accept IPsec traffic. The default ports are 4500/UDP and 500/UDP. However, when you have on-premises clusters (like the vSphere cluster in this example), which are typically inside a corporate network, the corporate firewall may not allow the default IPsec ports. This limitation can be overcome by using non-standard ports such as 4501/UDP and 501/UDP.

Additionally, default OpenShift deployments do not allow assigning an elastic public IP to existing worker nodes, which is necessary on at least one end of the IPsec connections.

To handle these requirements on AWS, we provide a script that will prepare your AWS OpenShift deployment for Submariner, and will create an additional gateway node with an external IP.


curl https://raw.githubusercontent.com/submariner-io/submariner/master/tools/openshift/ocp-ipi-aws/prep_for_subm.sh -L -O
chmod a+x ./prep_for_subm.sh

./prep_for_subm.sh cluster-b      # respond yes when terraform asks

Note: oc, aws-cli, terraform, and unzip need to be installed before running the prep_for_subm.sh script.
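
A quick way to check that these tools are available on your PATH:

for tool in oc aws terraform unzip; do command -v "$tool" >/dev/null || echo "missing: $tool"; done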

Currently, the prep_for_subm.sh script does not support specifying custom IPsec ports. Until this is resolved, execute the following commands to open the necessary ports and re-run the script:


export BROKER_IKEPORT=501
export NAT_PORT=4501
sed "s/\ 500/\ $BROKER_IKEPORT/g" -i cluster-b/ocp-ipi-aws/ocp-ipi-aws-prep/ec2-resources.tf
sed "s/\ 4500/\ $NAT_PORT/g" -i cluster-b/ocp-ipi-aws/ocp-ipi-aws-prep/ec2-resources.tf

./prep_for_subm.sh cluster-b      # respond yes when terraform asks
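
After the script completes, you can confirm that the extra gateway node was created. A hedged check; depending on the script and subctl versions, the submariner.io/gateway=true label may be applied here or later by subctl join:

export KUBECONFIG=cluster-b/auth/kubeconfig
oc get nodes -o wide
oc get nodes -l "submariner.io/gateway=true"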

Submariner Installation

Download the subctl binary and make it available on your PATH.

curl -Ls https://get.submariner.io | bash
export PATH=$PATH:~/.local/bin
echo export PATH=\$PATH:~/.local/bin >> ~/.profile
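
You can confirm the binary is available with:

subctl version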

Install Submariner with Service Discovery

To install Submariner with multi-cluster service discovery, follow the steps below.

Use cluster-b (AWS) as the broker, with service discovery enabled:

subctl deploy-broker --kubeconfig cluster-b/auth/kubeconfig --service-discovery

Join cluster-b (AWS) and cluster-a (vSphere) to the broker:

subctl join --kubeconfig cluster-b/auth/kubeconfig broker-info.subm --clusterid cluster-b --ikeport 501 --nattport 4501
subctl join --kubeconfig cluster-a/auth/kubeconfig broker-info.subm --clusterid cluster-a --ikeport 501 --nattport 4501
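
Once both clusters have joined, you can check that the Submariner pods are running and that the gateway tunnel comes up. A hedged sketch; subctl show is available in newer subctl releases, and its output varies by version:

export KUBECONFIG=cluster-a/auth/kubeconfig
oc -n submariner-operator get pods
subctl show connections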

Verify Deployment

To manually verify the deployment, follow the steps below using either a ClusterIP or a headless nginx service deployed in cluster-b.

Deploy ClusterIP Service

export KUBECONFIG=cluster-b/auth/kubeconfig
kubectl -n default create deployment nginx --image=nginxinc/nginx-unprivileged:stable-alpine
kubectl -n default expose deployment nginx --port=8080
subctl export service --namespace default nginx
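
Depending on the Submariner version, the export is represented by a ServiceExport resource in the service's namespace. A hedged check; the ServiceExport API group and status fields differ between releases:

kubectl -n default get svc nginx
kubectl -n default get serviceexport nginx -o yaml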

Deploy Headless Service

Note that headless Services can only be exported on non-globalnet deployments.

export KUBECONFIG=cluster-b/auth/kubeconfig
kubectl -n default create deployment nginx --image=nginxinc/nginx-unprivileged:stable-alpine
kubectl -n default expose deployment nginx --port=8080 --cluster-ip=None
subctl export service --namespace default nginx

Verify

Run nettest from cluster-a to access the nginx service:

export KUBECONFIG=cluster-a/auth/kubeconfig
kubectl -n default run --generator=run-pod/v1 tmp-shell --rm -i --tty --image quay.io/submariner/nettest -- /bin/bash
curl nginx.default.svc.clusterset.local:8080

Perform automated verification

This runs automated verification tests for connectivity and service discovery between the clusters.

subctl verify cluster-a/auth/kubeconfig cluster-b/auth/kubeconfig --only service-discovery,connectivity --verbose