On GCP (LoadBalancer mode)

This quickstart guide covers the necessary steps to deploy two OpenShift Container Platform (OCP) clusters on GCP leveraging a cloud network load balancer service in front of the Submariner gateways.

The main benefit of this mode is that there is no need to dedicate specialized nodes with a public IP address to act as gateways. The administrator only needs to manually label any existing node or nodes in each cluster as Submariner gateways, and the Submariner Operator will take care of creating a LoadBalancer type Service pointing to the active Submariner gateway.
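
For example, a node can be marked as a gateway with kubectl, and the resulting Service can be inspected after the join step later in this guide. This is a minimal sketch: <node-name> is a placeholder, and the submariner-operator namespace assumes a default installation.

# Mark an existing node as a Submariner gateway (replace <node-name>)
kubectl label node <node-name> submariner.io/gateway=true

# After joining, list the Services in the Submariner namespace to find the LoadBalancer
kubectl -n submariner-operator get services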

Please note that this mode is still experimental and may need more testing. For example, we haven’t yet measured the impact on HA failover times.

Prerequisites

Before we begin, the following tools need to be downloaded and added to your $PATH:

  1. OpenShift installer, pull secret, and command line interface. All can be downloaded from the official Installer documentation.
  2. GCP CLI, which can be downloaded from the official GCP documentation.

Please ensure that the tools you downloaded above are compatible with your OpenShift Container Platform version. For more information, please refer to the official OpenShift documentation.

Setup Your GCP Profile

Configure the GCP credentials, such as project_id and private_key, in the ~/.gcp/osServiceAccount.json file. Please refer to the official documentation for detailed instructions.
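
The file is a standard GCP service account key in JSON format. The sketch below is truncated and uses placeholder values; the real file contains additional fields such as client_id and token_uri.

{
  "type": "service_account",
  "project_id": "<your-project-id>",
  "private_key_id": "<key-id>",
  "private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n",
  "client_email": "<service-account-name>@<your-project-id>.iam.gserviceaccount.com"
}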

Create and Deploy cluster-a

In this step you will deploy cluster-a using the default IP CIDR ranges:

Pod CIDR        Service CIDR
10.128.0.0/14   172.30.0.0/16

openshift-install create install-config --dir cluster-a
openshift-install create cluster --dir cluster-a

When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, are displayed in your terminal.
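
If you want to confirm the cluster is reachable before continuing, a quick check against the kubeconfig written by the installer (assuming the default output path) is:

export KUBECONFIG=cluster-a/auth/kubeconfig
kubectl get nodes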

Create and Deploy cluster-b

In this step you will deploy cluster-b, modifying the default IP CIDRs to avoid IP address conflicts with cluster-a. You can change the IP address blocks and prefixes based on your requirements. For more information on IPv4 CIDR conversion, please check this page.

In this example, we will use the following IP ranges:

Pod CIDR        Service CIDR
10.132.0.0/14   172.31.0.0/16

openshift-install create install-config --dir cluster-b

Change the Pod network CIDR from 10.128.0.0/14 to 10.132.0.0/14:

sed -i 's/10.128.0.0/10.132.0.0/g' cluster-b/install-config.yaml

Change the Service network CIDR from 172.30.0.0/16 to 172.31.0.0/16:

sed -i 's/172.30.0.0/172.31.0.0/g' cluster-b/install-config.yaml
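
After these two edits, the networking section of cluster-b/install-config.yaml should look roughly like the sketch below; the machineNetwork, hostPrefix, and networkType values depend on your installer defaults and are shown only for illustration:

networking:
  clusterNetwork:
  - cidr: 10.132.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OVNKubernetes
  serviceNetwork:
  - 172.31.0.0/16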

And finally deploy the cluster:

openshift-install create cluster --dir cluster-b

When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, are displayed in your terminal.

Install subctl

Download the subctl binary and make it available on your PATH.

curl -Ls https://get.submariner.io | bash
export PATH=$PATH:~/.local/bin
echo export PATH=\$PATH:~/.local/bin >> ~/.profile
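
You can confirm that subctl is installed and on your PATH by printing its version:

subctl version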

If you have Go and the source code, you can build and install subctl instead:

cd go/src/submariner-io/subctl
go build -o $GOBIN/subctl github.com/submariner-io/subctl/cmd

(and ensure your go/bin directory is on your PATH).

Prepare GCP Clusters for Submariner

Submariner Gateway nodes need to be able to accept traffic over UDP ports (4500 and 4490 by default). Submariner also uses UDP port 4800 to encapsulate traffic from the worker and master nodes to the Gateway nodes, and TCP port 8080 to retrieve metrics from the Gateway nodes. Additionally, the default OpenShift deployment does not allow assigning a public IP to existing worker nodes, which may be necessary on one end of the tunnel connections.

subctl cloud prepare is a command designed to update your OpenShift installer-provisioned infrastructure for Submariner deployments, handling the requirements specified above.

Run the command for cluster-a:

export KUBECONFIG=cluster-a/auth/kubeconfig
subctl cloud prepare gcp --ocp-metadata cluster-a/metadata.json

Run the command for cluster-b:

export KUBECONFIG=cluster-b/auth/kubeconfig
subctl cloud prepare gcp --ocp-metadata cluster-b/metadata.json
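
Optionally, you can list the GCP firewall rules that were created for Submariner. The exact rule names depend on your cluster's infra ID, so the filter below is only a rough guess:

gcloud compute firewall-rules list --filter="name~submariner"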

Install Submariner with Service Discovery

To install Submariner with multi-cluster Service Discovery, follow the steps below:

Use cluster-a as Broker

subctl deploy-broker --kubeconfig cluster-a/auth/kubeconfig
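
This writes a broker-info.subm file to the current directory, which the join commands below consume. As a rough sanity check, you can also verify that the Submariner CRDs were installed on the broker cluster:

kubectl --kubeconfig cluster-a/auth/kubeconfig get crds | grep submariner.io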

Join cluster-a and cluster-b to the Broker

subctl join --load-balancer --kubeconfig cluster-a/auth/kubeconfig broker-info.subm --clusterid cluster-a
subctl join --load-balancer --kubeconfig cluster-b/auth/kubeconfig broker-info.subm --clusterid cluster-b
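
Once both joins have completed, you can check the gateway and tunnel status from either cluster; the connections may take a minute or two to reach the connected state:

export KUBECONFIG=cluster-a/auth/kubeconfig
subctl show connections
subctl show gateways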

Verify Deployment

To manually verify the deployment, follow the steps below using either a headless or ClusterIP nginx service deployed in cluster-b (choose one of the two options).

Deploy ClusterIP Service

export KUBECONFIG=cluster-b/auth/kubeconfig
kubectl -n default create deployment nginx --image=nginxinc/nginx-unprivileged:stable-alpine
kubectl -n default expose deployment nginx --port=8080
subctl export service --namespace default nginx

Deploy Headless Service

export KUBECONFIG=cluster-b/auth/kubeconfig
kubectl -n default create deployment nginx --image=nginxinc/nginx-unprivileged:stable-alpine
kubectl -n default expose deployment nginx --port=8080 --cluster-ip=None
subctl export service --namespace default nginx
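
Whichever option you chose, you can confirm that the ServiceExport was created and accepted before moving on. This is a quick check; the exact status conditions may vary between Submariner versions:

export KUBECONFIG=cluster-b/auth/kubeconfig
kubectl -n default get serviceexport nginx -o yaml
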
Verify

Run nettest from cluster-a to access the nginx service:

export KUBECONFIG=cluster-a/auth/kubeconfig
kubectl -n default run tmp-shell --rm -i --tty --image quay.io/submariner/nettest -- /bin/bash
curl nginx.default.svc.clusterset.local:8080

To access a Service in a specific cluster, prefix the query with <cluster-id> as follows:

curl cluster-b.nginx.default.svc.clusterset.local:8080

Verify StatefulSets

A StatefulSet uses a headless Service. Create a web.yaml as follows:

apiVersion: v1
kind: Service
metadata:
  name: nginx-ss
  labels:
    app.kubernetes.io/instance: nginx-ss
    app.kubernetes.io/name: nginx-ss
spec:
  ports:
  - port: 8080
    name: web
  clusterIP: None
  selector:
    app.kubernetes.io/instance: nginx-ss
    app.kubernetes.io/name: nginx-ss
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx-ss"
  replicas: 2
  selector:
    matchLabels:
        app.kubernetes.io/instance: nginx-ss
        app.kubernetes.io/name: nginx-ss
  template:
    metadata:
      labels:
        app.kubernetes.io/instance: nginx-ss
        app.kubernetes.io/name: nginx-ss
    spec:
      containers:
      - name: nginx-ss
        image: nginxinc/nginx-unprivileged:stable-alpine
        ports:
        - containerPort: 8080
          name: web

Use this YAML to create a StatefulSet named web with nginx-ss as its headless Service.

export KUBECONFIG=cluster-a/auth/kubeconfig
kubectl -n default apply -f web.yaml
subctl export service -n default nginx-ss
curl nginx-ss.default.svc.clusterset.local:8080
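
Note that clusterset.local names are served by the cluster's DNS, so the curl above must be run from a pod inside one of the clusters (for example the nettest pod used earlier), not from your local machine. It can also help to confirm that both StatefulSet replicas are running before querying individual pods; the label selector below matches the one used in web.yaml:

kubectl -n default get pods -l app.kubernetes.io/name=nginx-ss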

To access the Service in a specific cluster, prefix the query with <cluster-id>:

curl cluster-a.nginx-ss.default.svc.clusterset.local:8080

To access an individual pod in a specific cluster, prefix the query with <pod-hostname>.<cluster-id>:

curl web-0.cluster-a.nginx-ss.default.svc.clusterset.local:8080

Perform automated verification

The contexts in both kubeconfig files are named admin and need to be renamed before running the verify command. Here is how this can be done using yq:

yq e -i '.contexts[0].name = "cluster-a" | .current-context = "cluster-a"' cluster-a/auth/kubeconfig
yq e -i '.contexts[0].context.user = "admin-a" | .users[0].name = "admin-a"' cluster-a/auth/kubeconfig
yq e -i '.contexts[0].name = "cluster-b" | .current-context = "cluster-a"' cluster-b/auth/kubeconfig
yq e -i '.contexts[0].context.user = "admin-b" | .users[0].name = "admin-b"' cluster-b/auth/kubeconfig

(if you’re using yq 4.18.1 or later, you can use yq -i instead of yq e -i).

More generally, see the Kubernetes documentation on accessing multiple clusters using configuration files.
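
To confirm the renames took effect, you can list the contexts in each file:

kubectl config get-contexts --kubeconfig cluster-a/auth/kubeconfig
kubectl config get-contexts --kubeconfig cluster-b/auth/kubeconfig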

The following commands perform automated verification between the clusters:

export KUBECONFIG=cluster-a/auth/kubeconfig:cluster-b/auth/kubeconfig
subctl verify --context cluster-a --tocontext cluster-b --only service-discovery,connectivity --verbose