Linking Kubernetes Clusters

This article previously appeared at FOSSlife.

To be able to access the cluster in AWS with kubectl or through the API, you need to add matching entries to the .kube/config file. The AWS command-line interface (CLI) provides a separate command for this,

aws eks update-kubeconfig --name <cluster-name>

which adds the required entries. On the management machine, besides the Kubernetes tools, the AWS CLI also needs to be installed and configured to have access to the AWS account.
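
A minimal sketch of the complete call, using the cluster from this example and placeholder values for the region and the CLI profile, followed by a quick check that the new context has arrived:

# aws eks update-kubeconfig --name vulcan --region <aws-region> --profile <aws-profile>
# kubectl config get-contexts

The second command simply lists all contexts so you can verify that the EKS entry was added.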

The entries in the .kube/config file for the EKS cluster look much like those for the local cluster. However, instead of logging in with a static token, you log in with the output of a command. In the background, kubectl runs the

aws eks get-token --cluster-name vulcan

command, where vulcan is the name of the Kubernetes cluster in AWS in this example. In the Kubernetes config file syntax, the whole user entry then looks like Listing 1.

Listing 1: User Entry

- name: kubernetes-eks-user
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws
      args:
      - eks
      - get-token
      - --cluster-name
      - vulcan

Because the command's arguments form a field of their own, you write them as a YAML list, one entry per line. The cluster context is also named vulcan. If something goes wrong, it is best to scroll to the beginning of the command output, which usually contains the most relevant information; the rest is a stack trace, which can be very long-winded. If everything works, a call to kubectl,

# kubectl -n kube-federation-system get kubefedclusters
NAME     AGE   READY
earth    2h    True
vulcan   5m    True

confirms that the federation now comprises two clusters.
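
If the new cluster does not show up as Ready, two checks help to narrow down the problem (a sketch using the names from this example): run the exec command from Listing 1 by hand to confirm that the AWS CLI actually returns a token, and inspect the status conditions of the KubeFedCluster object.

# aws eks get-token --cluster-name vulcan
# kubectl -n kube-federation-system describe kubefedcluster vulcan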

Deployment

Before deployment can start, you need to federate the resource types you want to deploy. These can be pods, services, namespaces, and so on. Again, the kubefedctl tool is used for this step. For example,

kubefedctl enable pods

lets you distribute resources of the Pod type. If a Kubernetes namespace exists that you want to federate as a whole, you can do so with the command:

# kubefedctl federate namespace demo --contents --enable-type

The --enable-type option ensures that any resource types not yet enabled are federated immediately. On top of this, you can create a resource of the FederatedNamespace type that lets you control at the namespace level the clusters on which K8s creates resources. This even works at the level of a single deployment, but it does cause more administrative overhead. However, if you rely on cloud providers where privacy concerns exist, this method gives you precise control over where containers are running.
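
In practice, you enable all of the types a workload needs before the rollout. A minimal sketch for the resource types used later in this article (the exact list depends on your deployment):

# kubefedctl enable namespaces
# kubefedctl enable deployments.apps
# kubefedctl enable services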

Setting up a namespace correctly for use in a federation also means having a YAML file like the one in Listing 2. The fedtestns namespace has to exist before the create step. The placement parameter lets you control the clusters to which K8s rolls out the resources. To distribute the pods across both clusters, you need to feed them in differently, as described below.

Listing 2: Federated Namespace

---
apiVersion: types.kubefed.io/v1beta1
kind: FederatedNamespace
metadata:
  name: fedns
  namespace: fedtestns
spec:
  placement:
    clusters:
    - name: earth
    - name: vulcan
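
The matching commands could look like this (the file name federated-namespace.yaml is just an example): first create the namespace on the host cluster, then apply the FederatedNamespace from Listing 2 against it.

# kubectl --context earth create namespace fedtestns
# kubectl --context earth apply -f federated-namespace.yaml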

A simple web server is used as a test case. Normal unfederated deployment results in the pods running in the local cluster only. The command for federating the namespace along with its contents distributes all the existing resources but does not ensure that newly created resources are also evenly distributed in the same way. You now have the option of creating a deployment directly as a FederatedDeployment instead. The YAML file for this is shown in Listing 3. It looks very similar to the one for a simple deployment, but the placement parameter ensures that the specified clusters also inherit the deployment.

Listing 3: Federated Deployment

apiVersion: types.kubefed.io/v1beta1
kind: FederatedDeployment
metadata:
  name: fedhttp
  namespace: fedtestns
spec:
  template:
    metadata:
      name: http
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: httpbin
          version: v1
      template:
        metadata:
          labels:
            app: httpbin
            version: v1
        spec:
          containers:
          - image: docker.io/kennethreitz/httpbin
            imagePullPolicy: IfNotPresent
            name: httpbin
            ports:
            - containerPort: 80
  placement:
    clusters:
    - name: earth
    - name: vulcan
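
To roll out the federated deployment and check the result on both member clusters, a sketch could look like this (the file name is again just an example, and the contexts are the ones used throughout this article):

# kubectl --context earth apply -f federated-deployment.yaml
# kubectl --context earth -n fedtestns get pods
# kubectl --context vulcan -n fedtestns get pods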

After a short wait, you will now have a pod and a deployment with the web service on both clusters. The pods on both clusters can also be accessed on port 80. In the case of AWS, however, you also need to enable this port in the correct security group of the AWS configuration.

For the pods in the clusters to work together, it is crucial to set up routing between the two clusters so that they can reach each other. You also need to configure firewall rules and security groups appropriately. It is advisable here to rely on an automation tool such as Ansible if the cluster at the cloud provider’s end will be on-demand only.
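
On the AWS side, opening the port in the security group can also be scripted; a sketch with placeholder values for the security group ID and the source network:

# aws ec2 authorize-security-group-ingress --group-id <sg-id> --protocol tcp --port 80 --cidr <on-prem-network>/24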

If you use a service, you also need to roll it out as a FederatedService, just like the deployment. Listing 4 shows the service for the deployment from Listing 3. Service name resolution runs locally; that is, fed-service resolves to the cluster IP in the local cluster. This behavior is something to keep in mind when designing the service.

Listing 4: Federated Service

apiVersion: types.kubefed.io/v1beta1
kind: FederatedService
metadata:
  name: fed-service
  namespace: fedtestns
spec:
  template:
    spec:
      selector:
        app: httpbin
      type: NodePort
      ports:
        - name: http
          port: 80
  placement:
    clusters:
    - name: earth
    - name: vulcan
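
A quick check that the service has arrived in both clusters, again using the contexts from this example:

# kubectl --context earth -n fedtestns get service fed-service
# kubectl --context vulcan -n fedtestns get service fed-service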

Components such as persistent volumes are also created locally on the clusters and must be populated there. Where pods access remote resources, you need to make sure they are accessible on both clusters. It is important to consider both IP accessibility and name resolution.
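
To test name resolution and reachability from inside a cluster, you can start a short-lived test pod; a sketch (the busybox image is just an example):

# kubectl --context earth -n fedtestns run dns-test --rm -it --restart=Never --image=busybox -- nslookup fed-service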

If you delete the federated resources, you delete the underlying resources on the member clusters at the same time. To remove only one cluster from the deployment, remove that cluster under placement. You can remove the second cluster from the federation entirely with the command:

# kubefedctl unjoin vulcan --host-cluster-context earth --v=2

In the test, however, I did have to manually clean up the resources rolled out in the second cluster retroactively. You can save yourself this step by completely deleting the cluster from the commercial public cloud provider.
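
As a sketch of the two removal variants described above (resource and namespace names as in the listings): removing vulcan from the placement list leaves the deployment on earth only, whereas deleting the FederatedDeployment removes it from both member clusters.

# kubectl --context earth -n fedtestns patch federateddeployment fedhttp --type=merge -p '{"spec":{"placement":{"clusters":[{"name":"earth"}]}}}'
# kubectl --context earth -n fedtestns delete federateddeployment fedhttp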

Conclusions

Kubernetes Cluster Federation makes it easy to link up one or more clusters as resource extensions. This solution does not free you from paying attention to what will be running where, and it is important to set up the extension such that the service's consumers can actually use it. In the case of a special promotion in a web store, for example, the customers' requests need to reach the containers in the new cluster in the same way they reached the previously existing clusters; otherwise, federation will not offer you any performance benefits. If you expect such situations to occur frequently, it is advisable to run your own cluster as a federation with a single member from the outset. Then, you only need to complete the steps required for the extension.
