In this quickstart guide, we cover the steps needed to deploy OpenShift Container Platform (OCP) on vSphere and AWS. Once the OCP clusters are deployed, we show how to deploy Submariner and connect the two clusters.
Before we proceed, the required tools have to be downloaded and added to your $PATH. Please ensure that the tools you downloaded are compatible with your OpenShift Container Platform version.
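A quick way to check is to print the client versions and compare them against your target OCP release, for example:

```bash
# Print the client tool versions; they should match the OCP release you plan to deploy
openshift-install version
oc version --client
```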
Create the necessary infrastructure on vSphere and ensure that your machines have direct internet access before starting the installation. To deploy OCP 4.4, follow the instructions shown here.
Assuming that you deployed the cluster (say, cluster-a) with the default network configuration, the Pod and Service CIDRs would be:
| Pod CIDR      | Service CIDR  |
|---------------|---------------|
| 10.128.0.0/14 | 172.30.0.0/16 |
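Once cluster-a is up, you can confirm these values from the cluster's network configuration, for example:

```bash
# Read the Pod and Service CIDRs from the cluster-wide network configuration
export KUBECONFIG=cluster-a/auth/kubeconfig
oc get network.config cluster -o jsonpath='{.spec.clusterNetwork}{"\n"}{.spec.serviceNetwork}{"\n"}'
```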
Submariner creates a VXLAN overlay network in the local cluster and uses port 4800/UDP to encapsulate traffic from the worker nodes to the gateway nodes, preserving the source IP of inter-cluster traffic. Ensure that the firewall configuration on the vSphere cluster allows 4800/UDP across all the worker nodes.
| Protocol | Port | Description                               |
|----------|------|-------------------------------------------|
| UDP      | 4800 | Overlay network for inter-cluster traffic |
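To spot-check that 4800/UDP is reachable between two worker nodes, a simple netcat probe can help (a minimal sketch; it assumes nc is available on the nodes, its exact flags vary by netcat variant, and the destination IP below is a placeholder):

```bash
# On the destination worker node: listen for a UDP datagram on port 4800
nc -u -l 4800

# On the source worker node: send a test datagram (replace the placeholder IP)
echo test | nc -u <destination-node-ip> 4800
```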
Although we are using the default OCP network configuration on vSphere, you can install OCP on vSphere with a custom network configuration as shown here.
```bash
$ aws configure
AWS Access Key ID [None]: ....
AWS Secret Access Key [None]: ....
Default region name [None]: ....
Default output format [None]: text
```
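Before proceeding, you can verify that the configured credentials are valid:

```bash
# Ask AWS which identity the configured credentials resolve to
aws sts get-caller-identity
```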
For more details, follow this link:
In this step, we modify the default Pod and Service CIDRs and deploy cluster-b on AWS.
| Pod CIDR      | Service CIDR  |
|---------------|---------------|
| 10.132.0.0/14 | 172.31.0.0/16 |
openshift-install create install-config --dir cluster-b
Change the Pod IP network. Note that it's a /14 range by default, so you need to increment the second octet ("128") in steps of 4; for example: 10.132.0.0, 10.136.0.0, 10.140.0.0, …
sed -i 's/10.128.0.0/10.132.0.0/g' cluster-b/install-config.yaml
Change the Service IP network. This is a /16 range by default, so just increment the second octet ("30") in steps of 1; for example: 172.31.0.0, 172.32.0.0, 172.33.0.0, …
sed -i 's/172.30.0.0/172.31.0.0/g' cluster-b/install-config.yaml
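Before deploying, it's worth confirming that both CIDRs were updated as intended, for example:

```bash
# Show the networking section of the generated install config;
# expect 10.132.0.0/14 under clusterNetwork and 172.31.0.0/16 under serviceNetwork
grep -A 8 'networking:' cluster-b/install-config.yaml
```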
And finally, deploy the cluster:
openshift-install create cluster --dir cluster-b
Submariner gateway nodes need to be able to accept IPsec traffic. The default ports are 4500/UDP and 500/UDP. However, on-premises clusters (like the vSphere cluster in this example) typically sit inside a corporate network, and the firewall configuration on the corporate router may not allow the default IPsec traffic. We can overcome this limitation by using non-standard ports such as 4501/UDP and 501/UDP.
Additionally, default OpenShift deployments do not allow assigning an elastic public IP to existing worker nodes, which is necessary on at least one end of the IPsec connection.
To handle these requirements on AWS, we provide a script that will prepare your AWS OpenShift deployment for Submariner, and will create an additional gateway node with an external IP.
```bash
curl https://raw.githubusercontent.com/submariner-io/submariner/master/tools/openshift/ocp-ipi-aws/prep_for_subm.sh -L -O
chmod a+x ./prep_for_subm.sh
./prep_for_subm.sh cluster-b  # respond "yes" when Terraform asks
```
Note: oc, aws-cli, terraform, and unzip need to be installed before running the prep_for_subm.sh script.
Currently, the prep_for_subm.sh script does not support specifying custom IPsec ports. Until this issue is resolved, execute the following commands to open the necessary ports:
```bash
export BROKER_IKEPORT=501
export NAT_PORT=4501
sed "s/\ 500/\ $BROKER_IKEPORT/g" -i cluster-b/ocp-ipi-aws/ocp-ipi-aws-prep/ec2-resources.tf
sed "s/\ 4500/\ $NAT_PORT/g" -i cluster-b/ocp-ipi-aws/ocp-ipi-aws-prep/ec2-resources.tf
./prep_for_subm.sh cluster-b  # respond "yes" when Terraform asks
```
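After the script finishes, you can check that the additional gateway node was created (assuming the script applies the standard submariner.io/gateway=true label; the exact node name will vary):

```bash
# List cluster-b nodes carrying the Submariner gateway label
export KUBECONFIG=cluster-b/auth/kubeconfig
oc get nodes -l submariner.io/gateway=true
```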
```bash
curl -Ls https://raw.githubusercontent.com/submariner-io/submariner-operator/master/scripts/subctl/getsubctl.sh | bash
export PATH=$PATH:~/.local/bin
echo export PATH=\$PATH:~/.local/bin >> ~/.profile
```
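You can confirm that subctl was installed and is on your PATH:

```bash
# Print the installed subctl version
subctl version
```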
To install Submariner with multi-cluster service discovery, follow the steps below.
subctl deploy-broker --kubeconfig cluster-b/auth/kubeconfig --service-discovery
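The deploy-broker step writes a broker-info.subm file to the current working directory; the join commands below consume it:

```bash
# The broker deployment produces broker-info.subm in the working directory
ls -l broker-info.subm
```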
subctl join --kubeconfig cluster-b/auth/kubeconfig broker-info.subm --clusterid cluster-b --ikeport 501 --nattport 4501
subctl join --kubeconfig cluster-a/auth/kubeconfig broker-info.subm --clusterid cluster-a --ikeport 501 --nattport 4501
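Before running the full verification, you can sanity-check each join by listing the Submariner pods (subctl installs them into the submariner-operator namespace by default):

```bash
# All Submariner pods should reach Running state on both clusters
kubectl --kubeconfig cluster-a/auth/kubeconfig get pods -n submariner-operator
kubectl --kubeconfig cluster-b/auth/kubeconfig get pods -n submariner-operator
```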
To verify the deployment, follow the steps below, which create an nginx service and a ServiceExport for it.
```bash
export KUBECONFIG=cluster-b/auth/kubeconfig
kubectl -n default create deployment nginx --image=nginxinc/nginx-unprivileged:stable-alpine
kubectl -n default expose deployment nginx --port=8080
subctl export service --namespace default nginx
```
```bash
export KUBECONFIG=cluster-a/auth/kubeconfig
kubectl -n default run --generator=run-pod/v1 tmp-shell --rm -i --tty --image quay.io/submariner/nettest -- /bin/bash

# From inside the tmp-shell pod:
curl nginx.default.svc.supercluster.local:8080
```
The following command performs the full automated verification between your clusters:
subctl verify cluster-a/auth/kubeconfig cluster-b/auth/kubeconfig --only service-discovery,connectivity --verbose