On AWS with Globalnet

This quickstart guide covers the necessary steps to deploy two OpenShift Container Platform (OCP) clusters on AWS with full-stack automation, also known as installer-provisioned infrastructure (IPI). Once the OpenShift clusters are up, we deploy Submariner to interconnect them. Since the two clusters share the same cluster and Service CIDR ranges, Globalnet will be enabled.

Prerequisites

Before we begin, the following tools need to be downloaded and added to your $PATH:

  1. OpenShift installer, pull secret, and command line interface (oc), all of which can be downloaded from the Red Hat OpenShift Cluster Manager.
  2. AWS CLI, which can be downloaded from the AWS CLI documentation.

Please ensure that the tools you downloaded above are compatible with your OpenShift Container Platform version. For more information, please refer to the official OpenShift documentation.
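
For example, you can quickly confirm that the downloaded tools are on your $PATH and check their versions (the exact output will vary with your OCP release):

openshift-install version
oc version --client
aws --version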

Set Up Your AWS Profile

Configure the AWS CLI with the settings required to interact with AWS. These include your security credentials, the default AWS Region, and the default output format:

$ aws configure
AWS Access Key ID [None]: ....
AWS Secret Access Key [None]: ....
Default region name [None]: ....
Default output format [None]: text
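
To confirm that the CLI is configured correctly, query the active identity; this should print the account ID and ARN for the credentials you just configured:

aws sts get-caller-identity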

Create and Deploy cluster-a

In this step you will deploy cluster-a using the default IP CIDR ranges:

Pod CIDR       Service CIDR
10.128.0.0/14  172.30.0.0/16

openshift-install create install-config --dir cluster-a
openshift-install create cluster --dir cluster-a
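
For reference, the networking section generated in cluster-a/install-config.yaml should resemble the following sketch; the exact networkType and machine network depend on your OCP version and platform, but the cluster and Service CIDRs are the defaults shown above:

networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16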

When the cluster deployment completes, the terminal displays directions for accessing your cluster, including a link to its web console and the credentials for the kubeadmin user.

Create and Deploy cluster-b

In this step you will deploy cluster-b using the same default IP CIDR ranges:

Pod CIDR       Service CIDR
10.128.0.0/14  172.30.0.0/16

openshift-install create install-config --dir cluster-b
openshift-install create cluster --dir cluster-b

When the cluster deployment completes, the terminal displays directions for accessing your cluster, including a link to its web console and the credentials for the kubeadmin user.
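
At this point you can sanity-check access to both clusters with the kubeconfig files the installer generated:

export KUBECONFIG=cluster-a/auth/kubeconfig
kubectl get nodes
export KUBECONFIG=cluster-b/auth/kubeconfig
kubectl get nodes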

Prepare AWS Clusters for Submariner

Submariner Gateway nodes need to be able to accept traffic over UDP ports (4500 and 500 by default) when using IPsec. Submariner also uses UDP port 4800 to encapsulate traffic from worker and master nodes to the Gateway nodes. Additionally, the default OpenShift deployment does not allow assigning an elastic public IP to existing worker nodes, which may be necessary on one end of the IPsec connection.

prep_for_subm is a script designed to update your OpenShift installer provisioned AWS infrastructure for Submariner deployments, handling the requirements specified above.
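
For illustration only, the ingress rules the script creates are conceptually equivalent to the following AWS CLI commands, where <worker-sg-id> is a placeholder for the worker nodes' security group; the script itself applies these changes via Terraform, so you do not need to run them manually:

# IPsec IKE and NAT-T ports for the tunnel to the other cluster
aws ec2 authorize-security-group-ingress --group-id <worker-sg-id> --protocol udp --port 500 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id <worker-sg-id> --protocol udp --port 4500 --cidr 0.0.0.0/0
# encapsulation port for traffic from worker/master nodes to the gateway (in practice restricted to the cluster's own network)
aws ec2 authorize-security-group-ingress --group-id <worker-sg-id> --protocol udp --port 4800 --cidr 0.0.0.0/0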

  • Download the prep_for_subm.sh script and set permissions:
curl https://raw.githubusercontent.com/submariner-io/submariner/master/tools/openshift/ocp-ipi-aws/prep_for_subm.sh -L -O
chmod a+x ./prep_for_subm.sh

Please note that oc, aws-cli, terraform, and wget must be installed before the prep_for_subm.sh script can be run. Also note that the script is known to work with Terraform 0.12; the maximum compatible version is 0.12.12.
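
A quick way to confirm that the required tools are present and that Terraform is in the supported range:

command -v oc aws terraform wget
terraform version   # should report 0.12.x, no later than 0.12.12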

By default, the script deploys an m5n.large EC2 instance, a type optimized for network throughput and packet rate, for the Submariner gateway node. Please ensure that the AWS Region you deploy to supports this instance type; alternatively, you can customize the instance type as shown below.

  • Run the prep_for_subm.sh script for cluster-a and cluster-b:
./prep_for_subm.sh cluster-a # respond "yes" when Terraform asks for approval, or otherwise add the -auto-approve flag
./prep_for_subm.sh cluster-b # respond "yes" when Terraform asks for approval, or otherwise add the -auto-approve flag

Note that certain parameters, such as the IPsec UDP ports and AWS instance type for the gateway, can be customized before running the script. For example:

export IPSEC_NATT_PORT=4501
export IPSEC_IKE_PORT=501
export GW_INSTANCE_TYPE=m4.xlarge
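
These variables only need to be visible to the script, so for a one-off run you can also pass them inline:

IPSEC_NATT_PORT=4501 IPSEC_IKE_PORT=501 GW_INSTANCE_TYPE=m4.xlarge ./prep_for_subm.sh cluster-a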

Install subctl

Download the subctl binary and make it available on your PATH.

curl -Ls https://get.submariner.io | bash
export PATH=$PATH:~/.local/bin
echo export PATH=\$PATH:~/.local/bin >> ~/.profile
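
Verify that the binary is available:

subctl version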

Install Submariner with Service Discovery and Globalnet

To install Submariner with multi-cluster service discovery and support for overlapping CIDRs, follow the steps below.

Use cluster-a as Broker with service discovery and globalnet enabled

subctl deploy-broker --kubeconfig cluster-a/auth/kubeconfig --service-discovery --globalnet
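
This writes a broker-info.subm file to the current directory and creates the submariner-k8s-broker namespace on cluster-a; you can confirm both:

ls broker-info.subm
kubectl --kubeconfig cluster-a/auth/kubeconfig get ns submariner-k8s-broker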

Join cluster-a and cluster-b to the Broker

subctl join --kubeconfig cluster-a/auth/kubeconfig broker-info.subm --clusterid cluster-a
subctl join --kubeconfig cluster-b/auth/kubeconfig broker-info.subm --clusterid cluster-b
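
Once both joins complete, you can inspect the datapath from either cluster; the output should list a connection to the other cluster's gateway with status "connected":

export KUBECONFIG=cluster-a/auth/kubeconfig
subctl show connections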

Verify Deployment

To manually verify the deployment, follow the steps below using either a headless or ClusterIP nginx service deployed in cluster-b.

Deploy ClusterIP Service

export KUBECONFIG=cluster-b/auth/kubeconfig
kubectl -n default create deployment nginx --image=nginxinc/nginx-unprivileged:stable-alpine
kubectl -n default expose deployment nginx --port=8080
subctl export service --namespace default nginx
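
Exporting the Service creates a ServiceExport resource, which makes the Service discoverable across the cluster set (with Globalnet, it is reached via an assigned global IP). You can confirm the export with:

kubectl -n default get serviceexport nginx -o yaml
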
Deploy Headless Service

Note that headless Services can only be exported on non-globalnet deployments.

export KUBECONFIG=cluster-b/auth/kubeconfig
kubectl -n default create deployment nginx --image=nginxinc/nginx-unprivileged:stable-alpine
kubectl -n default expose deployment nginx --port=8080 --cluster-ip=None
subctl export service --namespace default nginx

Verify

Run nettest from cluster-a to access the nginx service:

export KUBECONFIG=cluster-a/auth/kubeconfig
kubectl -n default run tmp-shell --rm -i --tty --image quay.io/submariner/nettest -- /bin/bash
curl nginx.default.svc.clusterset.local:8080

To access a Service in a specific cluster, prefix the query with <cluster-id> as follows:

curl cluster-a.nginx.default.svc.clusterset.local:8080

Verify StatefulSets

A StatefulSet uses a headless Service. Create a web.yaml as follows:

apiVersion: v1
kind: Service
metadata:
  name: nginx-ss
  labels:
    app.kubernetes.io/instance: nginx-ss
    app.kubernetes.io/name: nginx-ss
spec:
  ports:
  - port: 8080  # nginx-unprivileged listens on 8080 by default
    name: web
  clusterIP: None
  selector:
    app.kubernetes.io/instance: nginx-ss
    app.kubernetes.io/name: nginx-ss
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx-ss"
  replicas: 2
  selector:
    matchLabels:
      app.kubernetes.io/instance: nginx-ss
      app.kubernetes.io/name: nginx-ss
  template:
    metadata:
      labels:
        app.kubernetes.io/instance: nginx-ss
        app.kubernetes.io/name: nginx-ss
    spec:
      containers:
      - name: nginx-ss
        image: nginxinc/nginx-unprivileged:stable-alpine
        ports:
        - containerPort: 8080
          name: web

Use this YAML to create a StatefulSet named web with nginx-ss as its headless Service:

export KUBECONFIG=cluster-a/auth/kubeconfig
kubectl -n default apply -f web.yaml
subctl export service -n default nginx-ss

Since clusterset.local names only resolve inside the clusters, run the curl from a pod (for example, the tmp-shell nettest pod used above):

curl nginx-ss.default.svc.clusterset.local:8080

To access the Service in a specific cluster, prefix the query with <cluster-id>:

curl cluster-a.nginx-ss.default.svc.clusterset.local:8080

To access an individual pod in a specific cluster, prefix the query with <pod-hostname>.<cluster-id>:

curl web-0.cluster-a.nginx-ss.default.svc.clusterset.local:8080

Perform automated verification

This performs automated verification of connectivity and service discovery between the clusters.

subctl verify cluster-a/auth/kubeconfig cluster-b/auth/kubeconfig --only service-discovery,connectivity --verbose