With Service Discovery and Globalnet

AWS

openshift-install and pull-secret

Download the openshift-install and oc tools, and copy your pull secret from:

https://cloud.redhat.com/openshift/install/aws/installer-provisioned

Find more detailed instructions here:

https://docs.openshift.com/container-platform/4.3/installing/installing_aws/installing-aws-default.html

Make sure the AWS CLI is properly installed and configured.

Installation instructions

https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html

$ aws configure
AWS Access Key ID [None]: ....
AWS Secret Access Key [None]: ....
Default region name [None]: ....
Default output format [None]: text
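
To confirm that the credentials work, you can ask the AWS CLI who you are (an optional sanity check, not required by the installer):

aws sts get-caller-identity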

For more details, see:

https://docs.openshift.com/container-platform/4.3/installing/installing_aws/installing-aws-account.html

Create cluster A

This step will create a cluster named “cluster-a” with the default IP CIDRs.

Pod CIDR        Service CIDR
10.128.0.0/14   172.30.0.0/16
openshift-install create install-config --dir cluster-a
openshift-install create cluster --dir cluster-a
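
For reference, these defaults appear in the networking section of the generated install-config.yaml (abridged; values shown are the OpenShift 4.3 defaults, and you normally don't need to edit them for this guide):

networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16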

This may take some time to complete, so you can move on to the next section in parallel if you wish.

Create cluster B

This step will create a cluster named “cluster-b” with the default IP CIDRs.

Pod CIDR        Service CIDR
10.128.0.0/14   172.30.0.0/16
openshift-install create install-config --dir cluster-b

And finally, deploy the cluster:

openshift-install create cluster --dir cluster-b
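
Once an installer run finishes, a quick sanity check (optional) confirms the cluster is reachable:

oc --kubeconfig cluster-b/auth/kubeconfig get nodes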

Make your clusters ready for Submariner

Submariner gateway nodes need to be able to accept traffic over UDP ports (4500 and 500 by default) when using IPsec. In addition, port 4800/UDP is used to encapsulate traffic from the worker nodes to the gateway nodes, ensuring that Pod IP addresses are preserved.

Additionally, the default OpenShift deployment does not allow assigning an Elastic IP to existing worker nodes, which may be necessary on at least one end of each IPsec connection.
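
For reference only, opening these ports by hand would look roughly like the AWS CLI calls below, where <worker-sg-id> and <vpc-cidr> are placeholders for your workers' security group and VPC CIDR (the prep_for_subm.sh script below handles all of this for you):

aws ec2 authorize-security-group-ingress --group-id <worker-sg-id> --protocol udp --port 4500 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id <worker-sg-id> --protocol udp --port 500 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id <worker-sg-id> --protocol udp --port 4800 --cidr <vpc-cidr>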

To handle all those details, we provide the prep_for_subm.sh script, which will prepare your AWS OpenShift deployment for Submariner and create an additional gateway node with an external IP:

curl https://raw.githubusercontent.com/submariner-io/submariner/master/tools/openshift/ocp-ipi-aws/prep_for_subm.sh -L -O
chmod a+x ./prep_for_subm.sh

./prep_for_subm.sh <OCP install path>  # answer "yes" when Terraform asks for approval, or append -auto-approve after the path

Note: oc, aws-cli, terraform, and wget need to be installed before running the prep_for_subm.sh script.

In the following example, we create the gateway node on cluster-b, with custom IPsec ports and instance type:

export IPSEC_NATT_PORT=4501
export IPSEC_IKE_PORT=501
export GW_INSTANCE_TYPE=m4.xlarge

./prep_for_subm.sh cluster-b
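
Once the script completes, you can verify that a node was labeled as a Submariner gateway (an optional check; submariner.io/gateway is the label Submariner looks for):

oc --kubeconfig cluster-b/auth/kubeconfig get nodes -l submariner.io/gateway=true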

Install subctl

Download the subctl binary and make it available on your PATH.

curl -Ls https://get.submariner.io | bash
export PATH=$PATH:~/.local/bin
echo export PATH=\$PATH:~/.local/bin >> ~/.profile
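
You can confirm the binary is available with:

subctl version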

Install Submariner with Service Discovery and Globalnet

To install Submariner with multi-cluster service discovery and support for overlapping CIDRs, follow the steps below.

Use cluster-a as the Broker, with service discovery and Globalnet enabled

subctl deploy-broker --kubeconfig cluster-a/auth/kubeconfig --service-discovery --globalnet

This writes a broker-info.subm file to the current directory; the join commands below consume it.

Join cluster-a and cluster-b to the Broker

subctl join --kubeconfig cluster-a/auth/kubeconfig broker-info.subm --clusterid cluster-a
subctl join --kubeconfig cluster-b/auth/kubeconfig broker-info.subm --clusterid cluster-b

The cluster ID given at join time is the <cluster-id> prefix used in the clusterset DNS queries later in this guide.
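
After both joins complete, you can check the tunnel status from either cluster (an optional check):

KUBECONFIG=cluster-a/auth/kubeconfig subctl show connections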

Verify Deployment

To manually verify the deployment, follow the steps below using either a headless or ClusterIP nginx service deployed in cluster-b.

Deploy ClusterIP Service

export KUBECONFIG=cluster-b/auth/kubeconfig
kubectl -n default create deployment nginx --image=nginxinc/nginx-unprivileged:stable-alpine
kubectl -n default expose deployment nginx --port=8080
subctl export service --namespace default nginx
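The export creates a ServiceExport resource in the same namespace, which you can inspect if you want to confirm it was accepted:

kubectl -n default get serviceexport nginx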

Deploy Headless Service

Note that headless Services can only be exported on non-Globalnet deployments, so this example only applies if you deployed the Broker without the --globalnet flag.

export KUBECONFIG=cluster-b/auth/kubeconfig
kubectl -n default create deployment nginx --image=nginxinc/nginx-unprivileged:stable-alpine
kubectl -n default expose deployment nginx --port=8080 --cluster-ip=None
subctl export service --namespace default nginx

Verify

Run nettest from cluster-a to access the nginx service:

export KUBECONFIG=cluster-a/auth/kubeconfig
kubectl -n default run tmp-shell --rm -i --tty --image quay.io/submariner/nettest -- /bin/bash
curl nginx.default.svc.clusterset.local:8080

To access a Service in a specific cluster, prefix the query with the <cluster-id> used at join time; here, nginx is deployed on cluster-b:

curl cluster-b.nginx.default.svc.clusterset.local:8080

Verify StatefulSets

A StatefulSet uses a headless Service. Create a web.yaml as follows:

apiVersion: v1
kind: Service
metadata:
  name: nginx-ss
  labels:
    app.kubernetes.io/instance: nginx-ss
    app.kubernetes.io/name: nginx-ss
spec:
  ports:
  - port: 8080
    name: web
  clusterIP: None
  selector:
    app.kubernetes.io/instance: nginx-ss
    app.kubernetes.io/name: nginx-ss
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx-ss"
  replicas: 2
  selector:
    matchLabels:
      app.kubernetes.io/instance: nginx-ss
      app.kubernetes.io/name: nginx-ss
  template:
    metadata:
      labels:
        app.kubernetes.io/instance: nginx-ss
        app.kubernetes.io/name: nginx-ss
    spec:
      containers:
      - name: nginx-ss
        image: nginxinc/nginx-unprivileged:stable-alpine
        ports:
        - containerPort: 8080
          name: web

Use this YAML to create a StatefulSet named web with nginx-ss as its headless Service, and export the Service so it can be resolved across the clusterset:

export KUBECONFIG=cluster-a/auth/kubeconfig
kubectl -n default apply -f web.yaml
subctl export service --namespace default nginx-ss

Then, from the tmp-shell pod started earlier (clusterset.local names only resolve inside the clusters):

curl nginx-ss.default.svc.clusterset.local:8080

To access the Service in a specific cluster, prefix the query with <cluster-id>:

curl cluster-a.nginx-ss.default.svc.clusterset.local:8080

To access an individual pod in a specific cluster, prefix the query with <pod-hostname>.<cluster-id>:

curl web-0.cluster-a.nginx-ss.default.svc.clusterset.local:8080

Perform automated verification

This will run automated verification of connectivity and service discovery between the clusters.

subctl verify cluster-a/auth/kubeconfig cluster-b/auth/kubeconfig --only service-discovery,connectivity --verbose