Experimenting with a Personal Fedora Atomic and OpenShift Origin Server

Last weekend I decided to play around with my old workstation, which has been sitting powered off for years, mostly replaced by a Raspberry Pi 3 that handles most of my home network and storage. The workstation is from around 2009; it isn’t particularly fast, but it has 6GB of RAM and doesn’t consume much power if you yank the video card. I have a small Rails 4 app running as the backend for one of my Android pet projects, and thought it might be fun to repurpose this machine to host it on OpenShift.

I really don’t want to have to think about upgrades much. CentOS would be a pretty good option there, but I thought it would be better to get a taste of what’s been going on with Project Atomic: a minimal OS just for running containers, which lets you upgrade and roll back the whole OS filesystem as if you were using git.

Fedora Atomic

For an Atomic OS I went with Fedora Atomic. Normally Fedora would mean too much upgrade maintenance for what I want, but being able to upgrade everything and reboot so easily with Atomic should negate that, and I’d like to be able to see the latest work being done in this area. Fedora installation has become incredibly refined over the years since I started using it, and installing Atomic was no exception (just dd an ISO to a USB drive and boot off it).
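The git-like upgrade and rollback workflow mentioned above looks roughly like this on Fedora Atomic (the atomic CLI wraps rpm-ostree; output omitted and varies by release):

```shell
# Show the booted deployment and the rollback target
$ atomic host status

# Download and stage the newest tree; it becomes active on the next boot
$ atomic host upgrade
$ systemctl reboot

# If the new tree misbehaves, boot back into the previous deployment
$ atomic host rollback
```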

Post-install you might want to do a one-time root partition extension so you have room for Atomic upgrades:

$ lvresize --resizefs --size=+10G /dev/mapper/fedora--atomic-root
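Before and after resizing, it’s worth confirming the volume group actually has free extents to grow into and that the filesystem picked up the change (the VG name here is inferred from the device-mapper path above):

```shell
# Free space in the volume group and current size of the root LV
$ vgs fedora-atomic
$ lvs fedora-atomic

# Confirm the root filesystem grew after the resize
$ df -h /
```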

For a headless server you might want Cockpit for remote management:

$ atomic install cockpit/ws
$ atomic run cockpit/ws

At this point you can see it running locally with a curl to https://yourip:9090, but accessing it remotely took some work. Fedora Atomic does not appear to use firewalld, but it does ship a default-deny iptables policy. There’s got to be a better way, but I worked past this with:

$ systemctl stop iptables
$ vi /etc/sysconfig/iptables

Right after the ACCEPT for ssh (port 22) I added:

-A INPUT -p tcp -m state --state NEW -m tcp --dport 9090 -j ACCEPT

$ systemctl start iptables
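To confirm the rule is live after restarting iptables, something like this should work (exact output format varies):

```shell
# The new rule should appear in the running ruleset
$ iptables -S INPUT | grep 9090

# And Cockpit should now answer over TLS (self-signed cert, hence -k)
$ curl -k -o /dev/null -w '%{http_code}\n' https://localhost:9090
```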

OpenShift Origin

To install OpenShift Origin I went with openshift-ansible. Using something like oc cluster up would probably be the logical option for a single system, but I wanted to stick to a more production-focused path.

To do this we need an Ansible inventory. The pieces relevant to this setup:

[OSEv3:children]
masters
nodes

[masters]
master.local

[nodes]
# Master would be unschedulable by default leaving us with nowhere for pods to run.
master.local openshift_schedulable=true

[OSEv3:vars]
# htpasswd auth
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]
openshift_master_htpasswd_users={'dgoodwin': 'HASHEDPW'}

To get the hashed password you can just create a temporary htpasswd file:

$ htpasswd -c deleteme-file username

Copy the whole hashed password out of the file (everything after the “username:” separator, including any trailing ‘/’) and then delete the file.
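If you’d rather skip the temporary file entirely, htpasswd can print the entry to stdout with -n, and piping through cut strips the leading username (the username here is just an example):

```shell
# -n prints "user:hash" to stdout instead of writing a file;
# cut -d: -f2- keeps everything after the first colon (the hash itself)
$ htpasswd -n dgoodwin | cut -d: -f2-
```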

We’ll also need something for persistent storage. With a setup like this I just went with NFS, using the following small playbook:

- hosts:
  - masters
  become: yes
  tasks:
  - file: path=/var/nfsshare state=directory mode=0777
  - lineinfile: dest=/etc/exports line="/var/nfsshare *(rw,sync,no_root_squash)"
  - service: name=rpcbind state=started enabled=yes
  - service: name=nfs-server state=restarted enabled=yes
  - service: name=rpc-statd state=started enabled=yes
  - service: name=nfs-idmapd state=started enabled=yes

This should probably be modified to create a few PVs as subdirectories of /var/nfsshare.
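A quick sketch of what that modification might look like (the count and base path are assumptions; adjust to taste):

```shell
#!/bin/sh
# Create numbered subdirectories under the NFS export to back individual PVs.
BASE="${BASE:-/var/nfsshare}"
for i in $(seq 1 5); do
  mkdir -p "$BASE/pv$i"
  chmod 777 "$BASE/pv$i"   # matches the wide-open mode used in the playbook
done
ls -d "$BASE"/pv*
```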

At this point you should be ready to run openshift-ansible. A recent addition is the ability to run the openshift-ansible playbooks from a container rather than having to git clone the repository.

I, however, ran from my local git clone. Because of my hardware, a couple of OpenShift checks have to be skipped; these disk and memory checks should not be relevant at the scale I’m dealing with.

$ ansible-playbook -b -i ./hosts ~/src/openshift-ansible/playbooks/byo/openshift-cluster/config.yml -e openshift_disable_check=disk_availability,memory_availability

After installation I needed a couple of tweaks because this is a single-system “cluster”: by default the master is not schedulable, and thus you’ve got nowhere to run pods.

$ oc label node master.local region=infra
$ oadm manage-node master.local --schedulable=true

We also need to define a single persistent volume:

$ cat nfspv.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfspv1
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  nfs:
    path: /var/nfsshare
    server: master.local
  persistentVolumeReclaimPolicy: Recycle
$ oc create -f /root/nfspv.yml
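If the PV was created correctly it should show up with an Available status until something claims it (columns vary by version):

```shell
$ oc get pv
```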

At this point I had OpenShift Origin up and running containerized on Fedora Atomic Host. You can operate as the cluster admin by using root on the Atomic host, but you’ll naturally want to log in as the regular user we created earlier to actually run applications:

$ oc login -u dgoodwin https://yourip:8443
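From there, deploying the Rails app mentioned at the top would look something like this; the repo URL and app name are placeholders, not from the original setup:

```shell
# Source-to-image build from a git repo using the ruby builder image
$ oc new-app ruby~https://example.com/you/rails-backend.git --name=rails-backend

# Follow the build, then expose the service outside the cluster
$ oc logs -f bc/rails-backend
$ oc expose svc/rails-backend
```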


This of course isn’t the most logical thing to do; a single-node Kubernetes or OpenShift cluster doesn’t make a lot of sense. It is, however, kind of fun to play with and have running in your basement. I really like Atomic for servers: the small footprint and not having to worry about upgrades is really satisfying. OpenShift, even at such a small scale, is handy for building from source and keeping an eye on your logs.

Not sure how long I’ll keep it running; my $5 Linode was doing this job previously and ran my app as a plain container (no OpenShift) just fine, so realistically the app might end up back there soon. Regardless, it is presently fun to tinker with and learn from.
