The `subctl` command-line utility simplifies the deployment and maintenance of Submariner by automating interactions with the Submariner Operator.
`subctl [command] [--flags] ...`
`curl -Ls https://get.submariner.io | bash`

The installer places `subctl` in `~/.local/bin`; add that directory to your `PATH` for the current session and persist the change in your profile:

`export PATH=$PATH:~/.local/bin`

`echo export PATH=\$PATH:~/.local/bin >> ~/.profile`
By default, https://get.submariner.io will provide the latest release of `subctl`, and hence Submariner. Specific versions can be requested using the `VERSION` environment variable. Available options are:

- `latest`: the latest stable release (default)
- `devel`: the master branch code
- `rc`: the latest release candidate
- `x.x.x`: a specific release (like `0.6.1` or `0.5.0`)

For example:

`curl https://get.submariner.io | VERSION=devel bash`
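A specific release can be pinned the same way; the version number below is only an example:

```bash
# Install a specific subctl release instead of the latest
curl -Ls https://get.submariner.io | VERSION=0.6.1 bash
```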
### deploy-broker

`subctl deploy-broker [flags]`
The deploy-broker command configures the cluster specified by the `--kubeconfig` flag (or the `KUBECONFIG` environment variable) and the `--kubecontext` flag as the Broker. It installs the necessary CRDs and creates the `submariner-k8s-broker` namespace.
In addition, it generates a `broker-info.subm` file which can be used with the `join` command to connect clusters to the Broker. This file contains the details needed for subsequent `subctl` runs, such as the Broker connection information and the IPsec PSK.

#### deploy-broker flags

| Flag | Description |
|---|---|
| `--kubeconfig <string>` | Absolute path(s) to the kubeconfig file(s) (default `$HOME/.kube/config`) |
| `--kubecontext <string>` | Kubeconfig context to use |
| `--service-discovery` | Enable Multi Cluster Service Discovery (default true) |
| `--globalnet` | Enable support for overlapping Cluster/Service CIDRs in connecting clusters (default disabled) |
| `--globalnet-cidr-range <string>` | Global CIDR supernet range for allocating GlobalCIDRs to each cluster (default `169.254.0.0/16`) |
| `--ipsec-psk-from <string>` | Import IPsec PSK from an existing Submariner broker file, like `broker-info.subm` (default `broker-info.subm`) |
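For example, to designate a cluster as the Broker with Globalnet enabled (the kubeconfig path and context name below are hypothetical):

```bash
# Install the Broker components and generate broker-info.subm,
# enabling Globalnet for clusters with overlapping CIDRs
subctl deploy-broker \
    --kubeconfig ~/.kube/broker-config \
    --kubecontext broker-cluster \
    --globalnet
```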
### export

#### export service

`subctl export service [flags] <name>` creates a ServiceExport resource for the given Service name. This makes the corresponding Service discoverable from other clusters in the Submariner deployment.
#### export service flags

| Flag | Description |
|---|---|
| `--kubeconfig <string>` | Absolute path(s) to the kubeconfig file(s) (default `$HOME/.kube/config`) |
| `--kubecontext <string>` | Kubeconfig context to use |
| `--namespace <string>` | Namespace in which the Service to be exported belongs |
If no `--namespace` flag is specified, the default namespace from the current kubeconfig context is used, if present; otherwise, `default` is used.
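For example, to export a Service named `nginx` from the `web` namespace (both names are hypothetical):

```bash
# Make the "nginx" Service in the "web" namespace discoverable from other clusters
subctl export service --namespace web nginx
```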
### join

`subctl join broker-info.subm [flags]`
The join command deploys the Submariner Operator in a cluster using the settings provided in the broker-info.subm file. The service
account credentials needed for the new cluster to access the Broker cluster will be created and provided to the Submariner Operator
deployment.
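A minimal join might look like the following; the cluster ID shown is just an illustrative name:

```bash
# Join the current cluster (per the active kubeconfig) to the Broker
# described by broker-info.subm, using an explicit cluster ID
subctl join broker-info.subm --clusterid cluster-a
```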
#### join flags (general)

| Flag | Description |
|---|---|
| `--cable-driver <string>` | Cable driver implementation. Available options are `libreswan` (default), `strongswan`, and `wireguard` |
| `--clusterid <string>` | Cluster ID used to identify the tunnels. Every cluster needs to have a unique cluster ID. If not provided, one will be generated by default based on the cluster name in the kubeconfig file |
| `--clustercidr <string>` | Specifies the cluster's CIDR used to generate Pod IP addresses. If not specified, `subctl` will try to discover it and, if unable to do so, will prompt the user |
| `--no-label` | Skip gateway labeling. This disables the prompt for a Worker node to use as gateway |
| `--enable-pod-debugging` | Enable Submariner pod debugging (verbose logging in the deployed pods) |
#### join flags (Globalnet)

| Flag | Description |
|---|---|
| `--globalnet-cluster-size <value>` | Cluster size for the GlobalCIDR allocated to this cluster (number of global IPs) |
| `--globalnet-cidr <string>` | GlobalCIDR to be allocated to the cluster. This setting is mutually exclusive with `--globalnet-cluster-size` and configures a specific Globalnet CIDR for this cluster |
#### join flags (IPsec)

| Flag | Description |
|---|---|
| `--disable-nat` | Disable NAT for IPsec |
| `--ikeport <value>` | IPsec IKE port (default 500) |
| `--ipsec-debug` | Enable IPsec debugging (verbose logging) |
| `--nattport <value>` | IPsec NAT-T port (default 4500) |
#### join flags (images and repositories)

| Flag | Description |
|---|---|
| `--repository <string>` | The repository from which the various Submariner images will be sourced (default `quay.io/submariner`) |
| `--version <string>` | Image version |
| `--image-override <string>=<string>` | Component image override. This flag can be used more than once (example: `--image-override=submariner=quay.io/myUser/submariner:latest`) |
#### join flags (health check)

| Flag | Description |
|---|---|
| `--health-check` | Enable/disable Gateway health check (default true) |
| `--health-check-interval <uint>` | The interval in seconds at which health check packets will be sent (default 1) |
| `--health-check-max-packet-loss-count <uint>` | The maximum number of packets lost at which the health checker will mark the connection as down (default 5) |
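Combining flags from the tables above, a more customized join might look like this; all values are purely illustrative:

```bash
# Join with the WireGuard cable driver, a non-default NAT-T port,
# an explicit Globalnet CIDR, and an overridden Submariner image
subctl join broker-info.subm \
    --clusterid cluster-b \
    --cable-driver wireguard \
    --nattport 4501 \
    --globalnet-cidr 169.254.1.0/24 \
    --image-override=submariner=quay.io/myUser/submariner:latest
```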
### show

#### show networks

`subctl show networks [flags]`
Inspects the cluster and reports information about the detected network plugin and detected Cluster and Service CIDRs.
#### show versions

`subctl show versions [flags]`
Shows the version and image repository of each Submariner component in the cluster.
#### show gateways

`subctl show gateways [flags]`
Shows summary information about the Submariner gateways in the cluster.
#### show connections

`subctl show connections [flags]`
Shows information about the Submariner endpoint connections with other clusters.
#### show endpoints

`subctl show endpoints [flags]`
Shows information about the Submariner endpoints in the cluster.
#### show all

`subctl show all [flags]`
Shows the aggregated information from all the other show commands.
#### show flags

| Flag | Description |
|---|---|
| `--kubeconfig <string>` | Absolute path(s) to the kubeconfig file(s) (default `$HOME/.kube/config`) |
| `--kubecontext <string>` | Kubeconfig context to use |
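For example, to aggregate all of the above for one cluster (the context name is hypothetical):

```bash
# Report networks, versions, gateways, connections, and endpoints for one cluster
subctl show all --kubeconfig ~/.kube/config --kubecontext cluster-a
```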
### verify

`subctl verify <kubeConfig1> <kubeConfig2> [flags]`
The verify command verifies that a Submariner deployment between two clusters is functioning properly. The cluster accessed via `kubeConfig1` will be referred to as ClusterA in the reports, while the `kubeConfig2` cluster will be ClusterB. The `--verbose` flag is recommended to see what's happening during the tests.
There are several suites of verifications that can be performed. By default all verifications are performed. Some verifications are deemed
disruptive in that they change some state of the clusters as a side effect. If running the command interactively, you will be prompted for
confirmation to perform disruptive verifications unless the --enable-disruptive flag is also specified. If running non-interactively (that
is with no stdin), --enable-disruptive must be specified otherwise disruptive verifications are skipped.
The connectivity suite verifies dataplane connectivity across the clusters.
The service-discovery suite verifies DNS discovery of <service>.<namespace>.svc.clusterset.local entries across the clusters.
The gateway-failover suite verifies the continuity of cross-cluster dataplane connectivity after a gateway failure in a cluster occurs.
This suite requires a single gateway configured on ClusterA and other available Worker nodes capable of serving as gateways. Please note
that this verification is disruptive.
#### verify flags

| Flag | Description |
|---|---|
| `--connection-attempts <value>` | The maximum number of connection attempts (default 2) |
| `--connection-timeout <value>` | The timeout in seconds per connection attempt (default 60) |
| `--operation-timeout <value>` | Operation timeout for Kubernetes API calls (default 240) |
| `--report-dir <string>` | XML report directory (default `.`) |
| `--verbose` | Produce verbose logs during connectivity verification |
| `--only` | Comma-separated list of specific verifications to perform |
| `--enable-disruptive` | Enable verifications which are potentially disruptive to your deployment |
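For example, to run only the non-disruptive suites described above with verbose output (the kubeconfig paths are hypothetical):

```bash
# Verify connectivity and service discovery between two clusters,
# skipping the disruptive gateway-failover suite
subctl verify ~/.kube/cluster-a ~/.kube/cluster-b \
    --only connectivity,service-discovery \
    --verbose
```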
### benchmark

#### benchmark throughput

`subctl benchmark throughput <kubeconfig1> [<kubeconfig2>] [flags]`
The benchmark throughput command runs a throughput benchmark test between two specified clusters or within a single cluster.
It deploys a Pod to run the iperf tool and logs the output to the console.
When running benchmark throughput, two types of tests will be executed.
#### benchmark latency

`subctl benchmark latency <kubeconfig1> [<kubeconfig2>] [flags]`
The benchmark latency command runs a latency benchmark test between two specified clusters or within a single cluster.
It deploys a Pod to run the netperf tool and logs the output to the console.
When running benchmark latency, two types of tests will be executed.
#### benchmark flags

| Flag | Description |
|---|---|
| `--intra-cluster` | Performs the benchmark test within a single cluster, between Pods from a non-Gateway node to a Gateway node |
| `--verbose` | Produce verbose logs during benchmark tests |
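For example (kubeconfig paths are hypothetical):

```bash
# Measure throughput between two clusters
subctl benchmark throughput ~/.kube/cluster-a ~/.kube/cluster-b

# Measure latency within a single cluster, from a non-Gateway node to a Gateway node
subctl benchmark latency --intra-cluster ~/.kube/cluster-a
```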
### version

`subctl version`
Prints the version details for the subctl binary.