
If you’re interested in running Kubernetes you’ve probably heard of Kelsey Hightower’s Kubernetes the Hard Way. Exercises like these are important: they highlight the coordination needed between components in modern stacks, and they show how far the world has come in software automation. Could you imagine if you had to set everything up the hard way every time?

Learning is fun

Doing things the hard way is fun, once. After that, I’ve got work to do, and soon I’m looking around to see who else has worked on this problem and how I can leverage the best that open source has to offer.

It reminds me of the 1990s, when I was learning Linux. Sure, as a professional you need to know systems and how they work, down to the kernel level if need be, but having to do those things without a working keyboard or network makes the process much harder. Give me a working computer, and then I can begin. There’s value in learning how the components work together and understanding the architecture of Kubernetes, so I encourage everyone to try the hard way at least once; if nothing else, it’ll make you appreciate the work people are putting into automating all of this for you in a composable and reusable way.

The easy way

I am starting a new series of videos on how we’re making the Canonical Distribution of Kubernetes easy for anyone to deploy on any cloud. All our code is open source and we love pull requests. Our goal is to help people get Kubernetes in as many places as quickly and easily as possible. We’ve incorporated lots of the things people tell us they’re looking for in a production-grade Kubernetes, and we’re always looking to codify those best practices.

Enjoy:

Following these steps will get you a working cluster; in this example I’m deploying to us-east-2, the shiny new AWS region. Subsequent videos will cover how to interact with the cluster and do more things with it.

We’ve been trailing the Kubernetes 1.3 release for the past few weeks, mostly to ensure that etcd data migrations are preserved from 1.2 to 1.3. We’re also in the process of adding TLS between all the nodes for security reasons, and that has left us a bit behind on getting Kubernetes 1.3 out to you. Don’t worry though: we’re testing the upgrade path, and this post will outline how to set up a Kubernetes 1.2 cluster and upgrade it to v1.3.3. Once we get good feedback from the community on how this is working out for you, we’ll set v1.3.3 (or a subsequent version) as the new default for Ubuntu Kubernetes.

Our bundle, which we call “observable-kubernetes”, features the following model:

  • Kubernetes (automating deployment, operations, and scaling containers)
    • Three node Kubernetes cluster with one master and two worker nodes.
    • TLS used for communication between nodes for security.
  • Etcd (distributed key value store)
    • Three node cluster for reliability.
  • Elastic stack
    • Two nodes for ElasticSearch
    • One node for a Kibana dashboard
    • Beats on every Kubernetes and Etcd node:
      • Filebeat for forwarding logs to ElasticSearch
      • Topbeat for inserting server monitoring data to ElasticSearch

As usual, you get pure Kubernetes direct from upstream and of course it’s cross-cloud, making it easy for you to use your own bare metal for deployment.

Your First Kubernetes Cluster

After configuring Juju to use the cloud you prefer, we can start the cluster deployment.

juju deploy observable-kubernetes

This will deploy the bundle with default constraints. That’s great for testing out Kubernetes, but most clouds won’t give you enough CPU and memory to use the cluster in anger, so I recommend checking out the documentation on how to modify the bundle to more accurately reflect either the hardware you have on hand or the instance size you prefer; a rough sketch follows.
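
As a sketch of that workflow (assuming the charm command from charm-tools is installed; the constraint values and paths below are purely illustrative, not the bundle’s actual contents), you can pull a local copy of the bundle, bump the constraints, and deploy the edited file:

charm pull observable-kubernetes                  # grab a local copy of the bundle
# edit the bundle.yaml inside and raise the constraints, for example:
#   kubernetes:
#     constraints: "cpu-cores=4 mem=16G root-disk=64G"
juju deploy ./observable-kubernetes/bundle.yaml   # deploy the edited bundle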

We can watch the cluster coming up with watch juju status, which gives us a near-real-time view of the model as it settles:
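
Nothing fancy is needed here; it’s just the stock watch utility wrapped around Juju’s status command:

watch juju status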

Making sure your cluster works

We just wait for things to come up and reach an idle state before moving on. Once everything is up we can manage the cluster with kubectl. We’ve provided this tool for you on the master node, along with a config file prepopulated for you. First, let’s find the master node:

juju run --application kubernetes is-leader

The output will show you which node is the master; you can then copy the tools to your local machine:

juju scp kubernetes/0:kubectl_package.tar.gz .
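
Unpack the tarball wherever you like; for example (the destination directory is just a placeholder):

mkdir -p ~/k8s-tools
tar -xzf kubectl_package.tar.gz -C ~/k8s-tools
cd ~/k8s-tools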

Once untarred, you’ll have a kubectl binary and a kubeconfig file to use with it. You can now check the status of your cluster with:

./kubectl cluster-info --kubeconfig ./kubeconfig 
Kubernetes master is running at https://104.196.123.155:6443
KubeDNS is running at https://104.196.123.155:6443/api/v1/proxy/namespaces/kube-system/services/kube-dns

Now let’s check the version of Kubernetes we’re running; note how it responds with both the client and server versions.

./kubectl version --kubeconfig ./kubeconfig 
Client Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.3", GitCommit:"882d296a99218da8f6b2a340eb0e81c69e66ecc7", GitTreeState:"clean"}
Server Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.3", GitCommit:"882d296a99218da8f6b2a340eb0e81c69e66ecc7", GitTreeState:"clean"}

Upgrading to a new version

So far we’ve done the usual bits of getting Kubernetes running on Ubuntu; now we’re ready to test the latest stuff from upstream.

juju set-config kubernetes version=v1.3.1

And then check juju status again while the model mutates. Now let’s see the version:

./kubectl version --kubeconfig ./kubeconfig 
Client Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.3", GitCommit:"882d296a99218da8f6b2a340eb0e81c69e66ecc7", GitTreeState:"clean"}
Server Version: version.Info{Major:"1", Minor:"3", GitVersion:"v1.3.1", GitCommit:"fe4aa01af2e1ce3d464e11bc465237e38dbcff27", GitTreeState:"clean"}

Aha! As you can see here, the cluster has upgraded to v1.3.1, but my local tools are still v1.2.3. I could just recopy the kubectl tarball from the master node, but I appear to have made a mistake: the latest upstream version of Kubernetes is actually v1.3.3. No worries, man:

juju set-config kubernetes version=v1.3.3

And let’s check our version:

./kubectl version --kubeconfig ./kubeconfig
Client Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.3", GitCommit:"882d296a99218da8f6b2a340eb0e81c69e66ecc7", GitTreeState:"clean"}
Server Version: version.Info{Major:"1", Minor:"3", GitVersion:"v1.3.3", GitCommit:"c6411395e09da356c608896d3d9725acab821418", GitTreeState:"clean"}

Now that the cluster is up to date, don’t forget to copy a new version of kubectl from the master so that your client matches. We’re ballin’ on the latest upstream, so we can dive into the Kubernetes docs and start deploying a real workload inside the cluster.
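
As a quick smoke test before diving into the docs, something like this should work (nginx is just an arbitrary example image, and the flags assume the 1.2/1.3-era kubectl CLI):

./kubectl run nginx --image=nginx --replicas=2 --kubeconfig ./kubeconfig
./kubectl get pods --kubeconfig ./kubeconfig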

Future Goals

So why isn’t v1.3.3 the default? Well, we’d like to see some real feedback from people first, and we’re still validating that your data gets migrated without issues as you upgrade. As you can see, upgrading an empty cluster is trivial, and we’d like to make sure we’ve dotted the i’s and crossed the t’s before moving on.

We’d also like to take the next few weeks to prep the charms for the upcoming v1.4 release, rev the etcd and Elastic stacks to their latest upstream versions, and rev the OS itself to xenial so that the ZFS-backed storage is more robust and has a better out-of-the-box experience. We should also make it so that getting kubectl from the master isn’t so annoying.

Got any feedback for us? You can find us on the Juju mailing list and #sig-cluster-ops and #sig-cluster-lifecycle on kubernetes.slack.com. Hope to see you there!

Welcome back to another exciting update of Nvidia driver downloads!

The biggest change is that 364.15 is now the most popular version of the driver, and of course we’ve added xenial as a new series. Here are the download statistics for the graphics-drivers PPA:

Version summaries
346.72: 57
346.96: 292
352.21: 219
352.79: 249
355.06: 3291
355.11: 11483
358.09: 16949
358.16: 22317
361.18: 4475
361.28: 20638
364.12: 12170
364.15: 37125

Series summaries
precise: 985
trusty: 51066
vivid: 11307
wily: 36171
xenial: 17996
yakkety: 11740

Arch summaries
amd64: 123560
armhf: 55
i386: 5650

Want to help? Buy a game and check out ppa:graphics-drivers.

Apache Zeppelin has just graduated to become a top-level project at the Apache Software Foundation.

As always, our Big Data team has you covered; you can find all the goodness here:

But most people will likely just want to consume Zeppelin as part of their Spark cluster; check out the links below for some out-of-the-box clusters:

Happy Big-data-ing, and as always, you can join other big data enthusiasts on the mailing list: bigdata@lists.ubuntu.com

For all the fancy bits of technology in your infrastructure there are still plenty of simple things that are useful and will probably never go away. One of these is blobs. Blobs are useful, they can be workload payloads, binaries for software, or whatever bits you need to deploy and manage.

Juju never had a way of knowing about blobs. Sure, you could plop something on an HTTP server and your charm could snag it. But then we’re not really solving any problems for you: you still need to deal with managing that blob, versioning it, running a server to serve it to clients, and so on.

Ideally these blobs are accounted for just like anything else in your infrastructure, so it makes sense that as of Juju 2.0 we can model blobs as part of a model; we call this Juju Resources. That way we can track them, cache them, ACL them, and so on, just like everything else.

Resources

A new concept called “resources” has been introduced into charms. Resources are binary blobs that the charm can utilize, and they are declared in the charm’s metadata. Every declared resource has a version stored in the Charm Store; updated versions can also be uploaded from an admin’s local machine to the controller.

Change to Metadata

A new clause has been added to metadata.yaml for resources. Resources can be declared as follows:

resources:
  name:
    type: file                         # the only type initially
    filename: filename.tgz
    description: "One line that is useful when operators need to push it."

New User Commands

Three new commands have been introduced:

  1. juju list-resources

    Pretty obvious: this command shows the resources required by, and those in use by, an existing service or unit in your model.

  2. juju push-resource

    This command uploads a file from your local disk to the juju controller to be used as a resource for a service.

  3. juju charm list-resources

    juju charm is the juju CLI equivalent of the “charm” command used by charm authors, though only applicable functionality is mirrored.

In addition, resources may be uploaded when deploying or upgrading charms by specifying the --resource option to the deploy command. The option takes a name=filepath pair and may be repeated to upload more than one resource.

juju deploy foo --resource bar=/some/file.tgz --resource baz=./docs/cfg.xml

or

juju upgrade-charm foo --resource bar=/some/file.tgz --resource baz=./docs/cfg.xml

Where bar and baz are resources named in the metadata for the foo charm.
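
Updating one of those resources on a running deployment then goes through push-resource from the list above; assuming it takes the same name=filepath pairs as the deploy option (check juju help push-resource for the exact syntax), it would look something like:

juju push-resource foo bar=/some/newer/file.tgz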

Conclusion

It’s pretty simple. Put stuff in a place and be able to snag it later. People can use resources for all sorts of things.

  • Payloads for the service you’re deploying.
  • Software. Let’s face it: when you look at the amount of enterprise software in the wild, you’re not going to be able to apt everything; you can now gate trusted binaries into resources to be used by charms.

Hope you enjoy it!

I’ve pushed new sample bundles to the Juju Charm Store. The first is a simple mediawiki with mysql:

For a more scalable approach I’ve also pushed up a version with MariaDB, haproxy, and memcached. This allows you to add more wiki units to horizontally scale:
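
For example, once the scalable bundle is deployed (the bundle and application names here are placeholders; check the store listing for the real ones), adding wiki capacity is a single command:

juju deploy mediawiki-scalable        # placeholder bundle name
juju add-unit mediawiki -n 2          # add two more wiki units behind haproxy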

I’ll be working on a “smoosh everything onto one machine” bundle next, so stay tuned!

One of the nice things about system containers is that they make the cost of creating a new machine essentially zero. With LXD 2.0 around the corner, this is now easier than ever. LXD now ships with native simplestreams support, which means we can now list and cache all sorts of interesting OS cloud images.

You can see every image provided by running lxc image list images:, but the nice thing is that the syntax is easy to remember. You can now just launch all sorts of Linux server cloud images from all sorts of vendors and projects:

$ lxc launch images:centos/7/amd64 test-centos
Creating test-centos
Retrieving image: 100%
Starting test-centos
$ lxc exec test-centos /bin/bash
[root@test-centos ~]# yum update
Loaded plugins: fastestmirror

… and so on. Give ’em a try:

lxc launch images:debian/sid/amd64
lxc launch images:gentoo/current/amd64
lxc launch images:oracle/6.5/i386

And of course Ubuntu is supported:

lxc launch ubuntu:14.04

So if you’re sick of manually snagging ISOs for things and keeping them up to date, then you’ll dig 16.04: just install LXD and you can launch almost any Linux instantly. We’ll keep ’em cached and updated for you too. I can lxc launch ubuntu:12.04 and run python --version faster than I can look it up on packages.ubuntu.com. That’s pretty slick.
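
For example, checking the stock Python on an older release (the container name is arbitrary):

lxc launch ubuntu:12.04 precise-python
lxc exec precise-python -- python --version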