Rancher

Prerequisites

These instructions were developed with Rancher v2.4.x.

Make sure you are familiar with Rancher and with creating clusters. You can create either node driver clusters or Custom clusters, as long as your designated gateway nodes can communicate with each other.

Create and Deploy Cluster A

In this step you will deploy cluster A with the default IP CIDRs:

Pod CIDR       Service CIDR
10.42.0.0/16   10.43.0.0/16

Use the Rancher UI to create a cluster, leaving the default options selected.

Make sure you create at least one node with a publicly accessible IP and the label submariner.io/gateway: "true", either via a node pool or via a custom node registration command.
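If you register the node with a custom command, you can also apply the gateway label afterwards with kubectl. A minimal sketch, with kubectl pointed at cluster A and assuming a hypothetical node name node-a-gw (substitute your own):

kubectl label node node-a-gw submariner.io/gateway=true --overwrite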

Create and Deploy Cluster B

In this step you will deploy cluster B, modifying the default IP CIDRs:

Pod CIDR       Service CIDR
10.44.0.0/16   10.45.0.0/16

Create your cluster, but select "Edit as YAML" in the cluster creation UI. Edit the services stanza to reflect the options below, keeping any options that were already defined.

  services:
    kube-api:
      service_cluster_ip_range: 10.45.0.0/16
    kube-controller:
      cluster_cidr: 10.44.0.0/16
      service_cluster_ip_range: 10.45.0.0/16
    kubelet:
      cluster_domain: cluster.local
      cluster_dns_server: 10.45.0.10

As with cluster A, make sure you create at least one node with a publicly accessible IP and the label submariner.io/gateway: "true", either via a node pool or via a custom node registration command (the kubectl label command shown above also works here).

Once you have done this, you can deploy your cluster.
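After the cluster is up, it is worth checking that the custom CIDRs took effect. One way, once you have the cluster's kubeconfig (covered below), is to print each node's Pod CIDR allocation, which should fall within 10.44.0.0/16:

kubectl --kubeconfig kubeconfig-cluster-b get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'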

Install subctl

Download the subctl binary and make it available on your PATH.

curl -Ls https://get.submariner.io | bash
export PATH=$PATH:~/.local/bin
echo export PATH=\$PATH:~/.local/bin >> ~/.profile

If you have Go and the source code, you can build and install subctl instead:

cd go/src/github.com/submariner-io/subctl
go build -o $GOBIN/subctl github.com/submariner-io/subctl/cmd

(and ensure $GOBIN, typically your go/bin directory, is on your PATH).
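Whichever method you used, confirm that subctl is installed and on your PATH:

subctl version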

Obtain the kubeconfig file for each of your clusters from the Rancher UI, saving them with the file names shown below.

Cluster     Kubeconfig File Name
Cluster A   kubeconfig-cluster-a
Cluster B   kubeconfig-cluster-b

Edit the kubeconfig files so they use the context names "cluster-a" and "cluster-b".
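You can rename the contexts with kubectl rather than editing the files by hand. First list the current context names, then rename them (the existing names below are placeholders for whatever Rancher generated):

kubectl config --kubeconfig kubeconfig-cluster-a get-contexts
kubectl config --kubeconfig kubeconfig-cluster-a rename-context <existing-name> cluster-a
kubectl config --kubeconfig kubeconfig-cluster-b rename-context <existing-name> cluster-b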

Use cluster-a as Broker

subctl deploy-broker --kubeconfig kubeconfig-cluster-a
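This writes a broker-info.subm file to the current directory; the join commands below consume it. The Broker itself consists of CRDs and a namespace on cluster A, so a quick sanity check (assuming the default Submariner CRD names) is:

kubectl --kubeconfig kubeconfig-cluster-a get crds | grep -i submariner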

Join cluster-a and cluster-b to the Broker

subctl join --kubeconfig kubeconfig-cluster-a broker-info.subm --clusterid cluster-a
subctl join --kubeconfig kubeconfig-cluster-b broker-info.subm --clusterid cluster-b
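Once both joins complete, subctl can summarize the deployed gateways and the tunnels between them. Assuming your subctl version provides the show subcommands:

subctl show gateways --kubeconfig kubeconfig-cluster-a
subctl show connections --kubeconfig kubeconfig-cluster-a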

Verify connectivity

This runs a series of E2E tests that verify proper connectivity between the Pods and Services of the two clusters.

export KUBECONFIG=kubeconfig-cluster-a:kubeconfig-cluster-b
subctl verify --context cluster-a --tocontext cluster-b --only connectivity --verbose