Below is an evolving list of my hardware, its problems with various releases of Fedora, and my attempts to fix or work around them. Current Hardware: Lenovo P1 Gen 3, Lenovo X1 Carbon 6th gen, Lenovo ThinkPad T470s. I swap laptops through a Lenovo Thunderbolt 3 Dock (model no. DBB9003L1), through which I run my main monitor, keyboard, and speakers (via aux). The goal is to be able to switch between work and personal laptops with the same monitor, keyboard, mouse, and speaker setup.
Nodes? Where we're going, we don't need.... nodes. (what if kube was just the API that could talk to anything?) prototype and call to ideate at https://t.co/XCv3iCd0rI — Clayton Coleman (@smarterclayton) May 5, 2021 Earlier this week Clayton Coleman presented Kubernetes as the Hybrid Cloud Control Plane as a keynote at KubeCon EU 2021, and revealed the kcp prototype. kcp is exploring re-use of the Kubernetes API at a higher level to orchestrate many different workloads and services across the hybrid cloud.
Earlier this year I did a short talk for Halihax, a local technology community, providing an introduction to the Kubernetes operator pattern. This was my first attempt at giving a talk of any kind (outside of demos at work), but hopefully it will prove useful to someone out there.
I made a resolution for 2020 to be less reliant on Google by the end of the year. This post is an update on where I ended up in that process. Replacements: E-mail: This one was actually pretty easy, as e-mail has become less important to me over the years, but I ended up paying the $5 a month for FastMail. Absolutely no regrets here: they're good at importing all your GMail, have a great Android app with dark mode, offer better privacy, and do everything I need.
I’ve been using zsh for about 15 years, but despite this I’ve noticed lately that I’m pretty inefficient at editing commands, mostly because I don’t have a clue about emacs keybindings. I am, however, very familiar with vi bindings, but my config was never properly set up for zsh: I couldn’t search history like I could in emacs mode, and I’d been blundering along in this state for too long. (Turns out it was just because the bindkeys were not declared after doing bindkey -v to switch to vi mode, oops.)
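A minimal sketch of the fix described above: declare the bindings after bindkey -v, since entering vi mode resets the keymap. The particular history-search widgets chosen here are one common setup, not necessarily mine:

```shell
# ~/.zshrc -- enable vi mode FIRST, then declare bindings,
# because `bindkey -v` resets the keymap and drops earlier ones
bindkey -v

# restore incremental history search (an emacs-mode default)
bindkey '^R' history-incremental-search-backward
bindkey '^S' history-incremental-search-forward

# in vi command mode, j/k step through history entries
# that begin with what's already typed
bindkey -M vicmd 'k' history-search-backward
bindkey -M vicmd 'j' history-search-forward

# shorten the ESC delay when switching between insert/command mode
export KEYTIMEOUT=1
```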
I spent some time recently revamping my zsh setup, something I haven’t really spent any dedicated time on since about 2006. In transitioning to oh my zsh I discovered fasd, a command-line productivity booster. Essentially it tracks the files and directories you work with in your terminal and ranks them by “frecency”, a blend of frequency and recency. You can then reference them with short, usually single-character aliases and fuzzy matching.
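To make that concrete, here are typical uses of fasd's default aliases; the v alias is a common addition (alias v='f -e vim') rather than a built-in:

```shell
z conf       # cd to the highest-ranked ("frecent") directory matching "conf"
f rc         # list frecent files matching "rc"
d proj       # list frecent directories matching "proj"
v nginx      # open the best-matching file in vim (assumes: alias v='f -e vim')
sd proj      # interactively select among matching directories
```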
One of my new year’s resolutions is to be less reliant on Google by this time next year. I’ve wanted to do this for a long time, and I’m sure many others do as well once we realize what’s involved and why those services are free. However, it requires time, in some cases money, and most of all a loss of functionality, because there’s no denying Google builds great stuff. In the end I’m generally just complacent and end up accepting the warm embrace of slick free services harvesting every possible detail about me to power advertising.
For my work on OpenShift I wanted a way to use my local workstation as a test cluster, with VMs for a master and multiple nodes. Ideally it would be possible to quickly tear down and rebuild the whole cluster, but I also want reliable hostnames (and IPs) across each rebuild. This post outlines a way to do this with Fedora (25 as of writing) and Vagrant. The key to getting Fedora configured so that the hostnames and DNS will work is this post by Dominic Cleal.
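One way to get stable IPs and hostnames across rebuilds with vagrant-libvirt is to pin DHCP host entries in the libvirt network; a sketch with placeholder MACs, names, and addresses (the network name and subnet will depend on your setup):

```shell
# pin a DHCP lease in the "default" libvirt network so the master VM
# always gets the same IP and hostname (placeholder values)
virsh net-update default add ip-dhcp-host \
  "<host mac='52:54:00:00:00:10' name='master.example.com' ip='192.168.124.10'/>" \
  --live --config

# repeat for each node VM
virsh net-update default add ip-dhcp-host \
  "<host mac='52:54:00:00:00:11' name='node1.example.com' ip='192.168.124.11'/>" \
  --live --config
```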
On my personal VPS I host a handful of websites accessed from a variety of domains and sub-domains, as well as a few more involved webapps such as tt-rss. Historically, applications that cross multiple programming languages and databases have been a terrible pain to deploy and keep running on a private server, but since containers arrived this has become a lot easier. On my server, I wanted a web server listening on the standard http/https ports, proxying traffic for a variety of sites and applications based on the domain or sub-domain in the request.
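One common container-based pattern for this (not necessarily the exact setup described in the post) is the jwilder/nginx-proxy image, which watches the Docker socket and generates per-domain proxy config for each container automatically; the image names and domains below are placeholders:

```shell
# run the proxy on the standard ports, watching the docker socket
docker run -d -p 80:80 -p 443:443 \
  -v /var/run/docker.sock:/tmp/docker.sock:ro \
  jwilder/nginx-proxy

# each app container declares the domain it serves via VIRTUAL_HOST;
# the proxy routes requests by Host header (hypothetical images/domains)
docker run -d -e VIRTUAL_HOST=ttrss.example.com my-ttrss-image
docker run -d -e VIRTUAL_HOST=blog.example.com  my-blog-image
```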
tito 0.6.10 was tagged and built this morning, brought to you almost entirely by the newest tito committer, skuznets. Changelog: Do not undo tags when git state is dirty (firstname.lastname@example.org); Parse options in tito init (email@example.com); Only use rpmbuild --noclean if it is supported (firstname.lastname@example.org); Explicitly define indices in formatting statements (email@example.com); Achieve quiet output from rpmbuild without passing --quiet (firstname.lastname@example.org); Update the MANIFEST.in (email@example.com); Correctly pass verbosity options through the builder CLI (skuznets@redhat.
I’ve just pushed a release of tito 0.6.9 with the following changes: Simplified version and release update logic (firstname.lastname@example.org); Added --use-release flag for tito tag (email@example.com); Fix typos/errors in man pages (firstname.lastname@example.org); Explain how automatic tagging was done (email@example.com); Add support for bumping version for Cargo projects (firstname.lastname@example.org). Right now this is available in my Copr repo, and builds are on their way for Fedora and EPEL. My thanks to all who contributed patches!
A large portion of my time on the OpenShift team has been spent working on cluster lifecycle improvements, particularly in the realm of upgrades. Throughout this work we’ve been targeting the ability to upgrade clusters without requiring application downtime. I recently took some time to demonstrate that we can hit that target, please check out the results on the OpenShift Blog: Zero Downtime Upgrades With OpenShift Ansible
With Kubernetes 1.4, sig-cluster-lifecycle released an alpha of kubeadm, a new utility we’ve been working on to make cluster bootstrapping as simple as possible for new users, while also providing tooling and infrastructure that can be used for production clusters. The initial goal was simple: install the bits (now delivered via new OS packages), then one command to create a cluster: $ kubeadm init And one very short command to copy and paste to join nodes to the cluster:
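For illustration, the flow looked roughly like this in the 1.4 alpha; the token and address are placeholders, and kubeadm init prints the exact join command to paste:

```shell
# on the master: bootstrap the control plane and print a join token
kubeadm init

# on each node: join the cluster using the printed token (placeholders)
kubeadm join --token <token> <master-ip>
```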