Using Containers for Day 2 Operations

There is no shortage of material discussing deploying applications in containers. They’re small, lightweight, and provide a completely self-contained environment purpose-built for running your application.

Those same features, in particular the purpose-built environment, make containers excellent for performing Day 2 operations. Load your tools into a container customised specifically for them, then administer your application or environment.

In this blog post I’ll show you one technique: we’ll build a container with Buildah and copy in some playbooks, run it with Podman with a vault password mounted in, then execute the playbooks.

Prerequisites

Firstly, we’re not going to use Docker for this. Instead we’ll be using Buildah and Podman, which enable you to build (and run!) OCI-compliant containers without needing a daemon process running as root in the background (looking at you, Docker).

Secondly, we’ll use Fedora as the base image for this container. We’ll need to make sure some key repositories are available on the build host. I’m building this on a standard Fedora 29 deployment, so the default enabled repositories will be fine.
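If you want to confirm what’s enabled on the build host before starting, a quick check is:

# list the repositories dnf will use on the build host
dnf repolist enabled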

Thirdly, while not required for this technique, consider placing your playbooks, roles and inventory into source control. You can then check out a known, specific version of your playbooks when building the container, tagging your container with the commit version.
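If you do, you can also tag the resulting image with the commit you built from. Here’s a minimal sketch, assuming the playbooks live in a repository cloned next to the build script we’ll write below (the URL and version tag are placeholders):

# clone a known version of the playbooks and record the commit
git clone https://example.com/cloudforms-ansible.git
git -C cloudforms-ansible checkout v1.0.0
commit=$(git -C cloudforms-ansible rev-parse --short HEAD)

# build as usual, then add a tag matching the commit
buildah unshare ./build_container.sh
buildah tag localhost/cloudforms-ansible:latest localhost/cloudforms-ansible:$commit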

Build and Run!

For this example I use some playbooks I wrote for deploying a CloudForms region, available here. You can use anything you like – in fact, for a proof of concept you could just use ansible -m ping all and test connectivity to a host.

With Buildah we can build with nothing more than a regular Bash script (although you can use a regular Dockerfile if you like). Here’s our build script, build_container.sh:

#!/bin/bash

build_container=$(buildah from fedora)
container_mount=$(buildah mount $build_container)

# install base packages - ansible and bash
dnf install --installroot $container_mount ansible --setopt=tsflags=nodocs --setopt=override_install_langs=en_US.utf8 -y --releasever 29
dnf clean all -y --installroot $container_mount --releasever 29

rm -rf $container_mount/var/cache/dnf

# copy the cloned repository
cp -r cloudforms-ansible $container_mount/opt/cloudforms-ansible

# set the default command
buildah config --cmd /bin/bash $build_container

# name the container
buildah config --label name=cloudforms-ansible $build_container

# unmount the filesystem and commit the container
buildah unmount $build_container
buildah commit $build_container cloudforms-ansible

The build script will:

  1. Create a new build container from the Fedora image
  2. Install Ansible
  3. Clean out the dnf cache from the container filesystem to save space
  4. Copy in our cloned repository
  5. Set the default command for the container
  6. Name and commit the container.
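As mentioned earlier, you can express the same build as a regular Dockerfile instead of a script. Here’s a rough equivalent sketch of the script above (untested, and assuming the cloned cloudforms-ansible repository sits next to the Dockerfile):

FROM fedora:29

# install ansible, skipping docs and extra locales, then clean the cache to keep the image small
RUN dnf install -y ansible --setopt=tsflags=nodocs --setopt=override_install_langs=en_US.utf8 && \
    dnf clean all -y

# copy the cloned repository into the image
COPY cloudforms-ansible /opt/cloudforms-ansible

CMD ["/bin/bash"]

You’d build that with buildah bud -t cloudforms-ansible . instead of running the script.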

Note: when running this script on an XFS filesystem, you may need to install the fuse3-devel package, which the fuse-overlayfs driver requires.
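On a Fedora build host that’s a single install:

sudo dnf install -y fuse3-devel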

We can build the container like so:

$ buildah unshare ./build_container.sh

Why ‘unshare’?

buildah unshare runs our script inside a new user namespace where our user ID and group ID are remapped to 0 – in other words, inside this namespace we appear to be root. This enables buildah to mount the container filesystem for modification. Without this, we’d need to run the build script as root. Here’s a good blog post on rootless Buildah.
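You can see the remapping in action by running a command under unshare. The exact IDs depend on your /etc/subuid and /etc/subgid configuration, so treat the numbers below as illustrative:

$ id -u
1000
$ buildah unshare id -u
0
$ buildah unshare cat /proc/self/uid_map
         0       1000          1
         1     100000      65536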

After the build completes, the image will be available in the output of buildah images:

[agoossen@agoossen cloudforms-ansible-container]$ buildah images
IMAGE NAME                                               IMAGE TAG            IMAGE ID             CREATED AT             SIZE
docker.io/library/fedora                                 latest               d09302f77cfc         Mar 12, 2019 11:20     283 MB
localhost/cloudforms-ansible                             latest               a86e7f2952f8         May 5, 2019 17:18      407 MB

That’s it. Now we’ve got an image built on our local system. Let’s run it with Podman. Here’s the script, start_container.sh:

#!/bin/bash

sudo podman \
     run \
     --rm \
     --interactive \
     --tty \
     --mount type=bind,source="$(pwd)/.vaultpass",destination=/opt/cloudforms-ansible/.vaultpass \
     localhost/cloudforms-ansible:latest \
     /bin/bash

We run the container, removing it when complete (--rm), making it interactive and assigning a TTY so we can use the console (--interactive and --tty), then mount in the vault password as a file, and finally specify the image name and the command to run, in this case bash. Let’s run it:

[agoossen@agoossen cloudforms-ansible-container]$ ./start_container.sh 
[sudo] password for agoossen: 
[root@4a547190188b /]$ ansible --version
ansible 2.7.10
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3.7/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 3.7.2 (default, Jan 16 2019, 19:49:22) [GCC 8.2.1 20181215 (Red Hat 8.2.1-6)]
[root@4a547190188b /]$ cd opt/cloudforms-ansible/
[root@4a547190188b cloudforms-ansible]# ls
convert_region_to_vip  deploy_region  group_vars  hosts  README.md  tasks  update_cloudforms.yml

We’re in the container! Let’s run a quick ping to verify we have connectivity:

[root@4a547190188b cloudforms-ansible]$ ANSIBLE_HOST_KEY_CHECKING=False ansible -m ping -i hosts/cfme all -u root -k --vault-password-file ./.vaultpass
SSH password: 
cfme-3.home.ajg.id.au | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
cfme-5.home.ajg.id.au | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
cfme-2.home.ajg.id.au | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
cfme-4.home.ajg.id.au | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
cfme-1.home.ajg.id.au | SUCCESS => {
    "changed": false,
    "ping": "pong"
}

Success!
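Because the playbooks and inventory are baked into the image, you don’t even need the interactive shell: you can pass the ansible-playbook invocation straight to podman. Here’s a sketch along those lines, assuming key-based SSH authentication so nothing needs to prompt for a password:

#!/bin/bash

sudo podman \
     run \
     --rm \
     --mount type=bind,source="$(pwd)/.vaultpass",destination=/opt/cloudforms-ansible/.vaultpass \
     --workdir /opt/cloudforms-ansible \
     localhost/cloudforms-ansible:latest \
     ansible-playbook -i hosts/cfme update_cloudforms.yml --vault-password-file ./.vaultpass

That form is handy if you want to wire the container into a scheduler or CI job rather than running it by hand.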

Secrets

In this model I’ve bind-mounted the vault password file directly into the container. You can use the same approach for other secrets (e.g. SSH keys) that need to be passed into the container for administration.
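For example, an SSH private key can be mounted the same way, read-only, alongside the vault password (the paths here are illustrative):

sudo podman \
     run \
     --rm \
     --interactive \
     --tty \
     --mount type=bind,source="$(pwd)/.vaultpass",destination=/opt/cloudforms-ansible/.vaultpass \
     --mount type=bind,source="$HOME/.ssh/id_rsa",destination=/root/.ssh/id_rsa,ro=true \
     localhost/cloudforms-ansible:latest \
     /bin/bash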

Conclusion

So there you go – another use for containers as a self-contained, completely controlled environment for performing Day 2 operations. We imported some playbooks into a new container we built with Buildah, ran that container with Podman and bind mounted in a secret, then verified connectivity to the hosts under management. Not bad!

Of course, I’m not suggesting you pack up your existing workflow and transition it to a container-based model, but this is another tool in the toolbox that you can draw on.

I’ve used Ansible playbooks as the example, but this can be used for a wide variety of use cases. Give it a try, and see what you can do!
