OpenStack Heat vs Ansible

You’ve almost certainly heard of Ansible – the uber-simple IT automation engine developed by Red Hat. Perhaps you’ve also heard of OpenStack Heat, the orchestration engine built into the OpenStack platform.

In this post I’m going to try to summarise the major differences between these two technologies (and there are many). Mostly, however, I’m aiming to show how these two fantastic technologies can be combined to enable powerful, repeatable orchestration and configuration of infrastructure deployed on OpenStack.

Read more “OpenStack Heat vs Ansible”

OSP13: clean up old images in a local container registry

Red Hat OpenStack Platform 13 (upstream Queens) is a fully containerised solution, meaning that all of its components are deployed as containers, rather than traditional RPM-based packages.

You have a few options for how you obtain these container images – you can point directly at the Red Hat Container Catalog, you can point to a container registry elsewhere in your environment, or you can create and use a registry on the undercloud. All of these options are covered in the documentation, but for this post I’m assuming you use a local registry on the undercloud.

As you update the overcloud and new container versions arrive, older versions remain in the registry consuming valuable disk space on the undercloud. Better to clean out older versions of images once your updates are successful! Unfortunately there’s no simple method to remove old images, so (with a little help from some Googling) I’ve developed a simple script to do just that.
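For a flavour of what the script does, here’s a minimal sketch of the underlying Docker registry v2 API calls (the registry address, repository name and tag are placeholders, I’m assuming the usual undercloud registry port 8787, and the registry must be configured to allow deletes):

# list the tags currently held for a repository
curl -s http://192.168.24.1:8787/v2/rhosp13/openstack-nova-api/tags/list
# fetch the manifest digest for an old tag (the Docker-Content-Digest header)
curl -sI -H 'Accept: application/vnd.docker.distribution.manifest.v2+json' \
  http://192.168.24.1:8787/v2/rhosp13/openstack-nova-api/manifests/OLD_TAG | grep Docker-Content-Digest
# delete the manifest by its digest
curl -s -X DELETE http://192.168.24.1:8787/v2/rhosp13/openstack-nova-api/manifests/sha256:DIGEST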

Read more “OSP13: clean up old images in a local container registry”

Red Hat OpenStack 13 on a KVM hypervisor, part 3

See the other parts of this series:


In the last two posts we’ve discussed the physical configuration of the KVM hypervisor and the virtual networking configuration. In this post I’m going to cover the configuration of the VMs themselves, and some gotchas that bit me as I went through the process.

Read more “Red Hat OpenStack 13 on a KVM hypervisor, part 3”

Red Hat OpenStack 13 on a KVM hypervisor, part 2

Previous parts of this series:


In the second post in this series I’ll discuss the virtual setup for the undercloud and overcloud.

Enter stage left: TripleO QuickStart

My first plan was to use TripleO Quickstart to rapidly deploy an OpenStack cloud. Quickstart would configure everything for me – libvirt packages, virtual networks, instances, the undercloud, and the deployment of the overcloud. Sounds great, right?

The problem is that TripleO Quickstart achieves its simplicity by being highly opinionated – that’s fine, and it’s a very fair trade-off if you’re happy to accept it. The killer for me was that it wants to download a pre-canned undercloud image – built on CentOS – and deploy from there.

Plus, TripleO Quickstart targets upstream OpenStack. I’m after a Red Hat OpenStack cloud, so sadly Quickstart isn’t going to work for me.

Let’s see what’s behind door number two…

Enter stage right: the manual method

Door number two is doing it yourself – build the undercloud, build the TripleO environment files and templates, and build the overcloud. I’m doing this as a learning experience, so the manual method doesn’t worry me too much.

So, here’s what I need to do:

  • Brutus, and the VMs on it, need access to my wider network. That means I need to set up a Linux bridge and add Brutus’ external interface to it. My libvirt VMs can then also be attached to this bridge, giving them connectivity.
  • I need a dedicated virtual network for provisioning, as my VMs will PXE boot on this network.
  • I need a dedicated virtual network for cloud services networks – think storage, internal API, tenant isolation, etc.
  • Deploy the Virtual Baseboard Management Controller (VirtualBMC) package, and establish VirtualBMCs for each of my overcloud nodes. This enables the undercloud to start/stop/reboot nodes as needed during the introspection and provisioning processes.
  • Create a VM for the undercloud and configure it per the documentation.
  • Create placeholder VMs for my overcloud nodes. These have nothing installed, but are ready to PXE boot.
  • Build my instackenv.json and introspect my overcloud VMs.
  • Prepare the environment files for my deployment, in particular the NIC configuration for the overcloud nodes.
  • Deploy!

Home network Linux bridge

Creating a Linux bridge with NetworkManager is a simple process. This is a great article that takes you through it.

TL;DR: create the bridge, add your home network interface as a member of the bridge, give the bridge an IP address that’s reachable on your home network, then take the bridge down and bring it back up.
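For reference, here’s roughly what that looks like with nmcli (a sketch – the gateway address is an assumption about my home network, and the connection names nmcli generates may differ from the listing below):

# create the bridge connection and give it a static address on the home network
nmcli con add type bridge ifname br0 con-name br0
nmcli con mod br0 ipv4.method manual ipv4.addresses 192.168.0.35/24 ipv4.gateway 192.168.0.1
# add the physical interface as a member of the bridge
nmcli con add type bridge-slave ifname em1 master br0
# bounce the bridge so the new configuration takes effect
nmcli con down br0 && nmcli con up br0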

End result (br0 is my bridge and em1 is my interface on my home network):

[root@iron2 ~]# nmcli con show
NAME    UUID                                  TYPE      DEVICE 
br0     d2d68553-f97e-7549-7a26-b34a26f29318  bridge    br0    
em1     1dad842d-1912-ef5a-a43a-bc238fb267e7  ethernet  em1

Extra virtual networks

I’ve created two new virtual networks via virt-manager: OS1 (for my OpenStack services) and “provisioning” (for provisioning of my nodes). These don’t have any DHCP or IPv4 address space definitions; they’re just plain L2 networks. The undercloud will assign subnets as needed:

[root@iron2 ~]# virsh net-dumpxml os1
<network>
  <name>os1</name>
  <uuid>802f717d-14a2-496e-a9c0-883e65251b1a</uuid>
  <bridge name='virbr2' stp='on' delay='0'/>
  <mac address='52:54:00:16:03:52'/>
  <domain name='tenant'/>
</network>

[root@iron2 ~]# virsh net-dumpxml provisioning
<network>
  <name>provisioning</name>
  <uuid>4437355e-c202-49e2-9ccb-e07d033a05c0</uuid>
  <bridge name='virbr1' stp='on' delay='0'/>
  <mac address='52:54:00:05:ca:77'/>
  <domain name='provisioning'/>
</network>

Don’t forget to mark them as autostart using virsh net-autostart, otherwise you’ll do what I did and wonder why PXE booting isn’t working (provisioning network was down…).
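If you prefer the command line to virt-manager, the same networks can be defined from XML and set to autostart like so (a sketch – provisioning.xml is a hypothetical file containing XML like the dump above):

# define, start and autostart the network from an XML definition
virsh net-define provisioning.xml
virsh net-start provisioning
virsh net-autostart provisioning
# confirm both networks are active and marked autostart
virsh net-list --all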

Virtual Baseboard Management Controller (VirtualBMC)

In order to provision nodes, the undercloud needs to be able to power on, power off, reset and change the boot device for the nodes. Since these are virtual machines that don’t have a physical IPMI interface, we make use of VirtualBMC to emulate the IPMI interface.

VBMC is available via pip, or you can roll your own from the GitHub repo. I’ve installed mine into a virtualenv on Brutus:

[root@iron2 ~]# virtualenv virtualbmc
New python executable in /root/virtualbmc/bin/python
Installing setuptools, pip, wheel...done.
[root@iron2 ~]# . virtualbmc/bin/activate
(virtualbmc) [root@iron2 ~]# pip install virtualbmc
Collecting virtualbmc
...snip...
Successfully installed PrettyTable-0.7.2 PyYAML-3.13 asn1crypto-0.24.0 cffi-1.11.5 cliff-2.13.0 cmd2-0.8.8 contextlib2-0.5.5 cryptography-2.3 enum34-1.1.6 idna-2.7 ipaddress-1.0.22 libvirt-python-4.6.0 pbr-4.2.0 pycparser-2.18 pyghmi-1.2.4 pyparsing-2.2.0 pyperclip-1.6.4 pyzmq-17.1.2 six-1.11.0 stevedore-1.29.0 subprocess32-3.5.2 unicodecsv-0.14.1 virtualbmc-1.4.0 wcwidth-0.1.7
(virtualbmc) [root@iron2 ~]#

Here comes the catch

Jumping ahead a little, Ironic will not allow you to import nodes that have the same IP address for their power management driver, even if you specify a different port. I worked around this by setting aside a chunk of my home /24 to be addresses for my VBMC to listen on.

I added these addresses as /32s to my br0 using NetworkManager, then brought br0 down and up. The result looks like this:

(virtualbmc) [root@iron2 ~]# ip a
...snip...
6: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether f0:4d:a2:3b:a1:4c brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.35/24 brd 192.168.0.255 scope global noprefixroute br0
       valid_lft forever preferred_lft forever
    inet 192.168.0.180/32 brd 192.168.0.180 scope global noprefixroute br0
       valid_lft forever preferred_lft forever
    inet 192.168.0.181/32 brd 192.168.0.181 scope global noprefixroute br0
       valid_lft forever preferred_lft forever
    inet 192.168.0.182/32 brd 192.168.0.182 scope global noprefixroute br0
       valid_lft forever preferred_lft forever

Pinging one of these works fine from elsewhere in my network:

[agoossen@agoossen ~]$ ping 192.168.0.180
PING 192.168.0.180 (192.168.0.180) 56(84) bytes of data.
64 bytes from 192.168.0.180: icmp_seq=1 ttl=64 time=0.586 ms
64 bytes from 192.168.0.180: icmp_seq=2 ttl=64 time=0.315 ms
64 bytes from 192.168.0.180: icmp_seq=3 ttl=64 time=0.396 ms
^C
--- 192.168.0.180 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2050ms
rtt min/avg/max/mdev = 0.315/0.432/0.586/0.114 ms

In essence, these are a number of virtual IPs that exist on Brutus’ br0 specifically so I can attach VBMC to them.
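For the record, appending those addresses with NetworkManager looks something like this (a sketch, assuming the br0 bridge connection created earlier and continuing the pattern for the rest of the block):

# append extra /32 addresses to the existing bridge connection
nmcli con mod br0 +ipv4.addresses "192.168.0.180/32,192.168.0.181/32,192.168.0.182/32"
# bounce the bridge so the new addresses appear
nmcli con down br0 && nmcli con up br0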

Now we can go back to VBMC.

Back to VBMC – adding and starting the libvirt domains

To use VBMC, you need to add your libvirt domains (i.e. VMs) into VBMC, specify which address/port to listen for IPMI commands on, provide an optional username and password, then start the VBMC listener. Like so (I have a libvirt VM called ceph0 already defined):

(virtualbmc) [root@iron2 ~]# vbmc add ceph0 --address 192.168.0.188 --username root --password foobarbaz
(virtualbmc) [root@iron2 ~]# vbmc start ceph0
2018-08-14 18:03:39,790.790 2778 INFO VirtualBMC [-] Started vBMC instance for domain ceph0
(virtualbmc) [root@iron2 ~]# vbmc list
+-------------+---------+---------------+------+
| Domain name | Status  | Address       | Port |
+-------------+---------+---------------+------+
| ceph0       | running | 192.168.0.188 |  623 |
| ceph1       | down    | 192.168.0.189 |  623 |
| ceph2       | down    | 192.168.0.190 |  623 |
| compute0    | down    | 192.168.0.186 |  623 |
| compute1    | down    | 192.168.0.187 |  623 |
| controller0 | down    | 192.168.0.183 |  623 |
| controller1 | down    | 192.168.0.184 |  623 |
| controller2 | down    | 192.168.0.185 |  623 |
+-------------+---------+---------------+------+
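Rather than typing the add/start pair for every node, the whole set can be scripted; a rough sketch using the domain/address pairs from the table above (skip any domain you’ve already added):

# add and start a vBMC endpoint for each overcloud VM
for pair in controller0:192.168.0.183 controller1:192.168.0.184 controller2:192.168.0.185 \
            compute0:192.168.0.186 compute1:192.168.0.187 \
            ceph0:192.168.0.188 ceph1:192.168.0.189 ceph2:192.168.0.190; do
  vbmc add "${pair%%:*}" --address "${pair##*:}" --username root --password foobarbaz
  vbmc start "${pair%%:*}"
done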

And checking power status:

[root@agoossen ~]# ipmitool -I lanplus -H 192.168.0.188 -U root -P foobarbaz power status
Chassis Power is off

Turning on the VM…

[root@agoossen ~]# ipmitool -I lanplus -H 192.168.0.188 -U root -P foobarbaz power on
Chassis Power Control: Up/On

Success!
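The undercloud will also need to force nodes to PXE boot during introspection and provisioning, and the emulated BMC handles that too. A quick sketch, using the same hypothetical credentials as above:

# force the next boot to PXE via the emulated BMC
ipmitool -I lanplus -H 192.168.0.188 -U root -P foobarbaz chassis bootdev pxe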


So where have we got to?

We have:

  • Bridged the home network to make it available to the libvirt VMs running on the hypervisor (br0).
  • Added two new virtual networks, OS1 and provisioning, that the VMs will use to PXE boot and communicate within the cloud. External connectivity will be provided by an interface on br0. In the end my overcloud VMs will have three interfaces – provisioning, external, and internal cloud communication.
  • Created a number of virtual IPs for fake IPMI endpoints and exposed them as /32 addresses on br0.
  • Installed VBMC, added a simple test libvirt domain that listens on one of the aforementioned VIPs, and ensured we can power the VM on and off using ipmitool.

In the next chapter we’ll look at building the undercloud and the shell VMs that will become our overcloud.

Stay tuned!

Red Hat OpenStack 13 on a KVM hypervisor, part 1

Other posts in this series:


I recently acquired a Dell PowerEdge R810 (used, of course!) server, and I wanted to put its 16 cores/32 threads and 128GB of RAM to good use. So I did what any logical person would do – I installed RHEL, KVM, and deployed a Red Hat OpenStack 13 cloud on it!

This series of blog posts covers some of the challenges I bumped into along the way.

Configuring Brutus

(Yes, Brutus is the name of the PowerEdge server…)

First problem I ran into: no VGA cable (naturally I’d purged them in the Great Cable Purge of 2017). A quick trip to MSY later, and we’re back in business.

Then I ran into the second problem – my USB keyboard wasn’t recognised by the server. It was clearly getting power (LED lights were lighting up), but no keystrokes were having any effect.

I suspected the keyboard itself was too new for the server (some fancy gaming keyboard that I don’t use for gaming) and using a USB protocol version that wasn’t supported. Sure enough, another quick trip to Officeworks for an $8 old school USB keyboard and it worked a treat.

New isn’t always better…

Disk configuration. Brutus has six SAS slots in its backplane, and the server came with 2x 300GB SAS HDDs installed. I bought a further 4x 240GB Kingston SSDs and the associated after-market caddies and slotted them straight in. (You can fit SATA drives into a SAS backplane, but not the other way around.)

No firmware updates required in my case; I was unsure how the Dell hardware would react to having decidedly non-Dell, non-enterprise drives in it. As it turns out: just fine!

I configured the two SAS disks as a virtual drive in RAID 0, and the 4x SSDs as a further virtual drive in RAID 5. I’m taking a risk with no redundancy on the OS drives – perhaps I can rebuild at some point into a RAID 1 configuration.

The SAS disks are used for OS storage, and the RAID 5 SSDs are used as the storage pool for libvirt.
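For reference, carving the SSD virtual drive into a dedicated libvirt storage pool looks roughly like this (a sketch – the pool name and the mount point /var/lib/libvirt/images-ssd are assumptions):

# define a directory-backed storage pool on the SSD virtual drive
virsh pool-define-as ssd-pool dir --target /var/lib/libvirt/images-ssd
virsh pool-build ssd-pool
virsh pool-start ssd-pool
virsh pool-autostart ssd-pool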

Deploying RHEL 7.5. This was very easy – I installed it via a USB stick onto the virtual drive I created for the OS disk. I did need to change the USB device emulation mode in the BIOS to hard disk, however. No further configuration necessary. It would be nice to PXE boot this one day, but that involves changes to my home network that I’m not prepared to make just yet.

Satellite configuration. Configured and registered Brutus to my Satellite instance, with a new Content View and activation key for OpenStack created. virt-who was deployed onto Brutus to enable my VMs to receive virtual subscriptions.
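Registration amounts to something like the following (a sketch – the org and activation key names are placeholders for my own, and the Satellite CA consumer RPM needs to be installed first):

# register the hypervisor to Satellite using the OpenStack activation key
subscription-manager register --org="MyOrg" --activationkey="openstack-hypervisor"
# install and start virt-who so the VMs can receive virtual subscriptions
yum install -y virt-who
systemctl enable virt-who
systemctl start virt-who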

A yum update later and I’m ready to roll.


In the next post I’ll talk about the libvirt configuration, virtual baseboard management controller (VirtualBMC) setup, and the establishing of my VMs and networks in preparation for the undercloud deployment.

First experience installing OpenStack

Over the last two days I’ve been installing OpenStack Pike, following the installation guide for self-service networks. The installation guide was pretty straightforward and everything mostly worked fine.

There are a couple of quirks about my installation that caught me out. For context, I’m following a basic installation with one controller node and one compute node. Both of these are VMs running on a single hypervisor (AMD Ryzen 5 [6 core], 64GB RAM, 500GB SSD). My OSP VMs are therefore VMs inside VMs – nested virtualisation – which caused the first problem.
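As an aside, nested virtualisation on an AMD host is governed by the kvm_amd module’s nested parameter; a quick sketch of checking and enabling it (whether or not that turns out to be the whole story here):

# check whether nested virt is currently enabled (1/Y means enabled)
cat /sys/module/kvm_amd/parameters/nested
# enable it persistently, then reload the module (with all VMs shut down)
echo "options kvm_amd nested=1" > /etc/modprobe.d/kvm-nested.conf
modprobe -r kvm_amd && modprobe kvm_amd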

Read more “First experience installing OpenStack”