

Marco Ceppi
on 12 April 2017

General availability of Kubernetes 1.6 on Ubuntu


We are proud to release the latest Canonical Distribution of Kubernetes (CDK), supporting Kubernetes version 1.6.1! Kubernetes 1.6 is a major milestone, and we recommend reading the upstream release notes for the new features and changes.

GA Features

Canonical’s Distribution of Kubernetes, CDK, offers a production-grade method for installing, configuring, and managing Kubernetes lifecycle operations. With this release, CDK has the following GA features.

TLS encryption

Out of the box, CDK will deploy EasyRSA for public key infrastructure (PKI). This is used to encrypt all traffic in the cluster:

  • kubernetes-master to etcd
  • kubernetes-master to kubelet
  • kubectl to kubernetes-master
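
As a quick sanity check, you can inspect the certificate the API server presents. This is a minimal sketch; it assumes the master is reachable on the default secure port 6443, and the address placeholder should be replaced with the one shown in juju status:

# Show the subject and issuer of the API server’s certificate
openssl s_client -connect <master-ip>:6443 </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer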

Highly Available

CDK produces a robust, highly available cluster: multiple etcd units, masters, and workers are spread across availability zones, distilling cloud best practices into your deployments on every cloud.

Deployable everywhere

By leveraging tools like conjure-up, CDK can be deployed across any number of clouds: from Amazon, Google Cloud, Azure, and Rackspace to OpenStack, VMware, bare metal, or your local computer. CDK offers a consistent Kubernetes experience across all of these clouds and more!

Operations baked in

In this distribution, common operational tasks such as upgrades, node maintenance, backups, and restores are included. This gives operators the tools they need to operate a Kubernetes cluster, plus an extensible way to add new actions should they need them.
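
For example, you can list the operational actions each charm exposes (the exact set varies by charm revision):

juju actions kubernetes-master
juju actions kubernetes-worker
juju actions etcd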

Extensible

This distribution produces an opinionated starting point for your production Kubernetes cluster; however, it’s designed to be something you mold into your own Kubernetes. Whether that means changing the scale, the placement of components, or the architecture itself, CDK lets you make these modifications and more without getting stuck with a one-off deployment.

When it comes to scale, you get your choice in how large or small you want to start a cluster. You can change this at any time during a deployment, adding more workers, creating different worker pools, or removing worker pools altogether, as shown below.
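
For instance, resizing the worker pool is a single juju command:

# Add three more workers to the existing pool
juju add-unit kubernetes-worker -n 3
# Remove a specific worker unit
juju remove-unit kubernetes-worker/2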

In addition to scale, you also get a choice in exactly where and how each Kubernetes component is placed. This means you can start with just one machine and everything on it, a mixture such as etcd colocated with the masters, or any other architecture that suits your needs.
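
As a sketch, placement is controlled with juju’s --to directive; the machine number here is illustrative:

# Colocate an additional etcd unit with an existing machine (machine 0)
juju add-unit etcd --to 0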

Finally, CDK allows you to choose which components to install. Out of the box you get etcd, Kubernetes, EasyRSA, and Flannel. However, if you want a different SDN, for example Calico, or you want to integrate Kubernetes with your existing Datadog dashboard or Nagios installation, you can reshape the architecture to make a repeatable Kubernetes for your use cases.
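
As a rough sketch, swapping the SDN means deploying a different CNI charm and relating it in place of Flannel. The charm name and relation endpoints below mirror the Flannel pattern and are illustrative; check the charm’s documentation for the exact endpoints:

# Hypothetical sketch: use Calico instead of Flannel as the CNI
juju deploy cs:~containers/calico
juju relate calico:etcd etcd:db
juju relate calico:cni kubernetes-master:cni
juju relate calico:cni kubernetes-worker:cni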

Beta Features

In addition to the above GA features, the following beta features are available as of this release, and we encourage everyone to try them out!

GPU Support

With the 1.6 release of Kubernetes, GPU/CUDA support has grown quite a bit. In celebration of that upstream effort, we’ve enabled GPU support for all Kubernetes nodes which have CUDA cores. See the examples below on how to leverage, test, and use this new feature in Kubernetes.

Getting Started

Here’s the simplest way to get a Kubernetes cluster up and running on an Ubuntu 16.04 system:

sudo snap install conjure-up --classic
conjure-up kubernetes

During the installation conjure-up will ask you which cloud you want to deploy on and prompt you for the proper credentials. If you’re deploying to local containers (LXD), see these instructions for localhost-specific considerations.

For production-grade deployments and cluster lifecycle management, it is recommended to read the full Canonical Distribution of Kubernetes documentation.

Source code: https://github.com/kubernetes/kubernetes/tree/master/cluster/juju

Upgrading an existing cluster

The best way to get the latest Kubernetes code is to deploy a new cluster of CDK or Kubernetes Core. If you wish to upgrade an existing deployment, there are a few steps to upgrade the components on your cluster.

Upgrading etcd

Note: this step is mandatory, and applies only when upgrading an existing cluster to the latest version.

Upgrading etcd from the previous stable (revision 24) to the current stable (revision 25) will trigger a status message indicating that operator intervention is required to continue. This migration will incur a small period of downtime while etcd is migrated from version 2.x to 3.1. Please plan for this event accordingly, and ensure you have taken a snapshot of the etcd data prior to running the upgrade so you can recover in the event of unforeseen issues.
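
A minimal sketch of taking that snapshot, assuming a snapshot action is available on your etcd charm revision (confirm with juju actions etcd); the action id below is a placeholder printed by juju run-action:

# Snapshot the etcd data on one unit before upgrading
juju run-action etcd/0 snapshot
# Retrieve the action result, which includes the snapshot location
juju show-action-output <action-id>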

Once you have captured the state of your etcd data, you may proceed with the upgrade process. The upgrade will need to be performed once per unit participating in the cluster. To upgrade the etcd charm use the juju upgrade-charm command:

juju upgrade-charm etcd

This command initiates an update of the code that operates the etcd application. Wait until the software upgrade is complete before migrating the etcd data (a manual step). You will need to upgrade each etcd unit in your cluster. Presuming you have a three-unit etcd cluster, the commands would be as follows:

juju run-action etcd/0 snap-upgrade
juju run-action etcd/1 snap-upgrade
juju run-action etcd/2 snap-upgrade

Once complete, the status messaging will return to the cluster health output. Future delivery of etcd will be performed via snaps, and the version is configurable in the charm config.
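
For example, assuming the charm exposes the snap channel as a config option named channel (as recent etcd charm revisions do), you could pin a version like so:

# Pin etcd to a specific snap channel (the value is illustrative)
juju config etcd channel=3.1/stable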

Upgrading other cluster components

We have documented the upgrade scenarios in the Kubernetes documentation. If your cluster is running pods, you should upgrade with the Blue/Green strategy outlined in the upgrade documentation.

If you have made changes to the deployment bundle, such as adding additional worker nodes under a different label, you will need to manually upgrade the components. The following command list assumes you have made no changes to the component names, and that you’ve upgraded etcd as outlined above.

# Upgrade charms
juju upgrade-charm kubernetes-master
juju upgrade-charm kubernetes-worker
juju upgrade-charm flannel
juju upgrade-charm easyrsa
juju upgrade-charm kubeapi-load-balancer

# Add new relation
juju relate kubernetes-worker:kube-control kubernetes-master:kube-control

# Remove deprecated relation
juju remove-relation kubernetes-worker:kube-dns kubernetes-master:cluster-dns

This will upgrade the operations code and move the cluster to Kubernetes version 1.6.1.

Changes in this release

  • Support for Kubernetes v1.6, with the current release being 1.6.1
  • Installation of components via snaps: kubectl, kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, and kube-proxy. To learn more about snaps, check out the Snapcraft website.
  • Added allow-privileged config option on the kubernetes-master and kubernetes-worker charms. Valid values are true|false|auto (default: auto). If the value is auto, containers will run in unprivileged mode unless GPU hardware is detected on a worker node, in which case Kubernetes runs with --allow-privileged=true. (See the example after this list.)
  • Added GPU support (beta). If Nvidia GPU hardware is detected on a worker node, Nvidia drivers and CUDA packages will be installed, and kubelet will be restarted with the flags required to use the GPU hardware. The allow-privileged config option must be true or auto.
    • Nvidia driver version = 375.26; CUDA version = 8.0.61; these will be configurable in future charm releases.
    • GPU support does not currently work on lxd.
    • This feature is beta – feedback on the implementation is welcomed.
  • Added support for running your own private registry; setup guidelines here.
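
For example, the allow-privileged option mentioned above can be set explicitly through charm config:

# Explicitly enable privileged containers on both charms
juju config kubernetes-master allow-privileged=true
juju config kubernetes-worker allow-privileged=true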

General Fixes:

  • Fixed kubeapi-load-balancer not properly forwarding SPDY/HTTP2 traffic for kubectl exec commands.

Etcd-specific changes:

  • Installation of etcd and etcdctl is now delivered with snaps.
  • Support for upgrading the previous etcd charm to the latest charm with snap delivery. See the manual upgrade process above for updating existing etcd clusters.

Changes to the bundles and layers:

  • Add registry action to the kubernetes-worker layer, which deploys a Docker registry in Kubernetes.
  • Add support for the kube-proxy cluster-cidr option.

GPU Example

With a Kubernetes bundle deployed on AWS, add a GPU-enabled worker to an existing deployment:

# Add a new worker on gpu hardware
juju deploy cs:~containers/kubernetes-worker kubernetes-worker-gpu --constraints "instance-type=p2.xlarge"
# Relate the new worker to other components
juju relate kubernetes-worker-gpu:kube-control kubernetes-master:kube-control
juju relate kubernetes-worker-gpu:certificates easyrsa:client
juju relate kubernetes-worker-gpu:cni flannel:cni
juju relate kubernetes-worker-gpu kubeapi-load-balancer

# ...wait for workload status to become active in `juju status`

# Download a gpu-dependent job spec
wget -O /tmp/nvidia-smi.yaml https://raw.githubusercontent.com/madeden/blogposts/master/k8s-gpu-cloud/src/nvidia-smi.yaml

# Create the job
kubectl create -f /tmp/nvidia-smi.yaml

# You should see a new nvidia-smi-xxxxx pod created
kubectl get pods

# Wait a bit for the job to run, then view logs; you should see the
# nvidia-smi table output

kubectl logs $(kubectl get pods -l name=nvidia-smi -o=name -a)
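
To confirm the new worker actually advertises its GPUs to the scheduler, you can check node capacity; in Kubernetes 1.6, GPUs appear under the alpha resource name:

# Verify the node reports GPU capacity (1.6-era alpha resource name)
kubectl describe nodes | grep -i "alpha.kubernetes.io/nvidia-gpu"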

Test results

The Canonical Distribution of Kubernetes is running daily tests to verify it works with the upstream code. As part of the Kubernetes test infrastructure we upload daily test runs. The test results are available on the Gubernator dashboard.

How to contact us

We’re normally found in the Kubernetes Slack channels and regularly attend these Special Interest Group (SIG) meetings:

  • sig-cluster-lifecycle
  • sig-cluster-ops
  • sig-onprem

Operators are an important part of Kubernetes, and we encourage you to participate with other members of the Kubernetes community!

We also monitor the Kubernetes mailing lists and other community channels, so feel free to reach out to us. As always, PRs, recommendations, and bug reports are welcome: https://github.com/juju-solutions/bundle-canonical-kubernetes!
