KubeCon North America 2017 Notes - Part 1
Jan 12, 2018
Lately, all the conferences I care about seem to have started publishing video recordings of their talks. People like me who live very far away can get access to important information just a few days or weeks after these events. I’m really thankful for that.
I’ve watched some of the KubeCon North America 2017 videos and took a few notes. Here they are, in no particular order:
Kubernetes Distributions and ‘Kernels’ - There are some initial discussions about adopting a model similar to the Linux kernel and its distributions. This would allow the core project to move fast while distributions focus on providing a more stable deployment target for users, adding value through support, etc. Another alternative is to slow the project down (moving from quarterly releases to something less frequent); this is currently being discussed on the kubernetes-dev mailing list and I expect it will be a big focus this year. This is very welcome, since things are changing too fast for production clusters and putting it all together is no easy task.
Kata Containers - Even with the recent Meltdown/Spectre attacks on practically all processors sold in the last 20 years, the x86 ISA and hypervisors are still a much better understood separation layer than kernel containers (cgroups, namespaces, etc). That’s because the latter is a work in progress that surely keeps getting better, but you still have that single kernel acting as middleman for all your containers. So the idea of Kata Containers (formerly Intel Clear Containers and Hyper’s runV) is to make virtual machines so fast and well integrated with Kubernetes’ CRI that they can be a deployment target for pods. It supports x86 and KVM for now, with plans to expand to other architectures and hypervisors. Memory pressure from running many kernels is alleviated by memory deduplication (KSM). Excellent for multi-tenant environments.
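To make “deployment target for pods” concrete: before RuntimeClass existed, the common integration was a per-pod annotation telling the CRI runtime to run the pod in a VM-based sandbox. The manifest below is a sketch assuming containerd’s CRI plugin with a Kata-style runtime configured for untrusted workloads; the exact annotation key varies by runtime and version, so check your runtime’s documentation.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-sandboxed
  annotations:
    # Assumption: the node's CRI runtime maps "untrusted" workloads
    # to a VM-based runtime such as Kata. Key name is era-specific.
    io.kubernetes.cri.untrusted-workload: "true"
spec:
  containers:
  - name: nginx
    image: nginx
```

From the scheduler’s point of view this is an ordinary pod; only the node-level runtime decides to wrap it in a lightweight VM.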
Cloud Native at AWS - For sure the biggest announcement is Amazon EKS (currently in alpha). Getting Kubernetes running on AWS is not particularly terrible, but it’s not a great experience either (according to CNCF, 63% of all cloud deployments of Kubernetes run on AWS. Impressive). Having to integrate Kops and Terraform or other solutions is a major distraction. Just like Google GKE and Azure ACS, Amazon is going to offer a managed Kubernetes service. This will simplify everything, and there’s already work being done on Terraform to support Amazon EKS. I doubt ECS will be phased out any time soon since a lot of customers depend on it and it’s very well integrated with all of AWS’s offerings, but I expect EKS will get a lot of attention moving forward (it’ll probably be the default target for new container deployments once it’s out of alpha). Better participation in open source projects is also very nice and well overdue!
This Job is Too Hard: Building New Tools, Patterns and Paradigms to Democratize - Brendan talks about the need to simplify the development process. There are a lot of tools, different languages you have to know, duplication of configuration data, etc. He touches on the need to have things in one place, build libraries instead of inflexible platforms, and encourage re-use. Instead of YAML files and more DSLs to learn, make cloud-native abstractions into programming language features. He draws a parallel between standard libraries and the Metaparticle project. You basically use language features to define how things will get built and deployed. I have seen this concept also described as self-deploying applications, and it seems very interesting. It’s something I want to explore more in detail because it completely changes how we approach CI/CD.
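To make the “language features instead of YAML” idea concrete, here is a minimal Python sketch of the pattern. The `containerize` decorator and its arguments are illustrative only, not Metaparticle’s actual API: the point is that build/deploy configuration lives next to the code as ordinary language constructs.

```python
# Hypothetical sketch of a "self-deploying application": deployment
# configuration is expressed as language features (a decorator) rather
# than in separate YAML files. Names here are illustrative, not
# Metaparticle's real API.

def containerize(image, replicas=1, ports=()):
    """Attach build/deploy metadata to the application's entrypoint."""
    def wrap(fn):
        fn.deployment = {
            "image": image,
            "replicas": replicas,
            "ports": list(ports),
        }
        return fn
    return wrap

@containerize("docker.io/example/hello:1.0", replicas=3, ports=[8080])
def main():
    return "serving on 8080"

# A companion tool could read main.deployment to build the image and
# generate the Kubernetes objects, replacing hand-written manifests.
print(main.deployment["replicas"])  # prints 3
```

The appeal is that refactoring tools, type checkers, and code review all apply to the deployment configuration, because it is just code.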
Istio: Weaving the Service Mesh - “Service Mesh is a network for services, not for bytes”. The network doesn’t help with issues at L7, which complicates application code. A single implementation that works across all the languages your company might be using is a great advantage of adopting this. Implementation is through sidecar containers (Envoy or some other load balancer, like nginx). Control is enforced on both outbound and inbound paths. Lots of metrics without instrumenting apps. Security policies are specified with SPIFFE identities. I really like the idea of a service mesh. It gives cluster administrators a lot of power in managing the infrastructure. That being said, I think it’s a bit too early for moderately risk-averse organizations to jump on this bandwagon considering a lot of features are still very alpha at the moment.
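As an example of the kind of control this gives administrators, traffic splitting in Istio of that era was expressed as a RouteRule object, roughly like the sketch below. This is the old v1alpha2 schema (since replaced by VirtualService), and the field names are from memory, so treat it as illustrative and check the docs for your Istio version.

```yaml
# Sketch: send 90% of traffic to v1 of the "reviews" service and 10%
# to v2, without touching application code. Era-specific API; the
# modern equivalent is a VirtualService.
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: reviews-canary
spec:
  destination:
    name: reviews
  route:
  - labels:
      version: v1
    weight: 90
  - labels:
      version: v2
    weight: 10
```

The sidecars enforce the split, so every language and framework in the cluster gets canary routing for free.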
CRI-O: All the Runtime Kubernetes Needs, and Nothing More - Maintaining different runtimes (Docker, rkt, etc) was a burden, so CRI was introduced to decouple k8s from runtimes. Kubelet implements the client side of CRI. CRI-O fetches and verifies images through containers/image, and manages layers and creates root filesystems through containers/storage. OCI-compatible containers need only a config.json and a root filesystem. It currently supports runC and Clear Containers, with Kata Containers coming soon. It differentiates between trusted and untrusted runtimes (configurable). CNI is used for setting up networking for the container (any CNI plugin should work). CRI-O can be restarted without affecting running containers. CRI-O releases now match Kubernetes versions.
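The “config.json plus a root filesystem” point is worth illustrating: an OCI runtime bundle is literally a directory containing a rootfs and one configuration file. Below is a heavily trimmed sketch of a config.json; a real one (e.g. as generated by `runc spec`) also declares mounts, capabilities, and namespaces.

```json
{
  "ociVersion": "1.0.0",
  "process": {
    "terminal": false,
    "user": { "uid": 0, "gid": 0 },
    "args": [ "sh" ],
    "cwd": "/"
  },
  "root": {
    "path": "rootfs",
    "readonly": true
  },
  "hostname": "example"
}
```

Because the bundle format is this simple, swapping runC for a VM-based runtime like Kata is mostly a matter of pointing CRI-O at a different OCI runtime binary.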
Building a Secure, Multi-Protocol and Multi-Tenant Cluster for Internet-Facing Services - This talk is about Platform9’s decco. It’s interesting how Custom Resources are enabling all sorts of customizations on top of Kubernetes. They have a controller and some CRDs that represent what they call spaces (domain name, project, certificate configuration) and applications (a thin wrapper around PodSpec that creates an external DNS entry and a URL path). It’s a kind of pod+service+ingress configuration in a single object. There’s also the concept of a project, which connects many spaces and is used when defining the Network Policy. There’s a global space where the project field is empty, so it accepts traffic from any other space. They also automate TLS configuration and seem to be working on migrating to Istio. Interesting approach, but very specific to Platform9.
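The mechanism underneath is plain Kubernetes: registering a CustomResourceDefinition teaches the API server a new object type, and a controller reconciles instances of it. A minimal, era-appropriate (v1beta1) CRD sketch is below; the group and kind names are illustrative, not decco’s actual schema.

```yaml
# Sketch: register a "Space" custom resource type. Once applied,
# "kubectl get spaces" works, and a controller can watch for Space
# objects and create the underlying pods/services/ingresses.
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: spaces.decco.example.com
spec:
  group: decco.example.com
  version: v1
  scope: Namespaced
  names:
    plural: spaces
    singular: space
    kind: Space
```

This is why CRDs keep showing up in talks: they let vendors ship opinionated abstractions without forking Kubernetes itself.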
I’ll cover more talks in future posts, as I get a chance to watch them.