OpenStack, Identity Management/IPA and TLS-Everywhere

novajoin is a WSGI application that serves dynamic vendordata to overcloud nodes (and instances, if you wish) via the cloud-init process. Its purpose is to register a host with an IPA server and create any necessary services in IPA so that certificates for them can be issued on the hosts.

The purpose of this post is to describe how the “TLS Everywhere” functionality in OSP13 onwards operates. In particular, I wanted to answer these questions:

  • What does novajoin do?
  • How and when does novajoin register hosts in IPA?
  • What changes does novajoin make to IPA?
  • When does the host enrol to IPA, and how does it get its configuration?

Fundamental to this process are the TLS Everywhere environment files. Of these, enable-internal-tls.yaml does the bulk of the legwork: it creates a set of custom metadata that is passed to nova as part of the creation of the Server resource for each baremetal node. This metadata is specifically formed to be parsed by novajoin, and includes the hostname of the node and the services that run on it (along with the networks they run on). If we start a deployment with the TLS Everywhere environment files, we can view this metadata on the nova instances on the undercloud (with a straight openstack server show):

| properties                          | compact_services='{"HTTP": ["ctlplane", "storage", "storagemgmt", "internalapi", "external", "management"], "mysql": ["internalapi"], "rabbitmq": ["internalapi"], "libvirt-vnc": ["internalapi"], "novnc-proxy": ["internalapi"], "neutron": ["internalapi"]}', ipa_enroll='true', managed_service_haproxyctlplane='haproxy/cloud.ctlplane.os1.home.ajg.id.au', managed_service_haproxyinternal_api='haproxy/cloud.internalapi.os1.home.ajg.id.au', managed_service_haproxystorage='haproxy/cloud.storage.os1.home.ajg.id.au', managed_service_haproxystorage_mgmt='haproxy/cloud.storagemgmt.os1.home.ajg.id.au', managed_service_mysqlinternal_api='mysql/cloud.internalapi.os1.home.ajg.id.au', managed_service_redisinternal_api='redis/cloud.internalapi.os1.home.ajg.id.au' |
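It's worth noting that compact_services is stored as a JSON-encoded string inside the instance properties, not as nested structure. A minimal sketch of pulling it apart (the value is copied from the output above):

```python
import json

# compact_services as it appears in `openstack server show` output:
# a JSON string mapping service names to the networks they run on.
compact_services = json.loads(
    '{"HTTP": ["ctlplane", "storage", "storagemgmt", '
    '"internalapi", "external", "management"], '
    '"mysql": ["internalapi"], "rabbitmq": ["internalapi"], '
    '"libvirt-vnc": ["internalapi"], "novnc-proxy": ["internalapi"], '
    '"neutron": ["internalapi"]}'
)

for service, networks in sorted(compact_services.items()):
    print(f"{service}: {', '.join(networks)}")
```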

This metadata is passed to novajoin by nova during resource creation. When received by novajoin, this is what the metadata looks like:

{ 
   "boot-roles": "admin,_member_",
   "hostname": "os1-controller-2",
   "image-id": "9fee86cc-e4a6-469d-b599-5bb66ccc4e70",
   "instance-id": "1d698f4a-bc91-43ca-be0a-6188314b6a0b",
   "metadata": {
       "compact_services": {
           "HTTP": [
               "ctlplane",
               "storage",
               "storagemgmt",
               "internalapi",
               "external",
               "management"
           ],
           "libvirt-vnc": [
               "internalapi"
           ],
           "mysql": [
               "internalapi"
           ],
           "neutron": [
               "internalapi"
           ],
           "novnc-proxy": [
               "internalapi"
           ],
           "rabbitmq": [
               "internalapi"
           ]
       },
       "ipa_enroll": "true",
        "managed_service_haproxyctlplane": "haproxy/cloud.ctlplane.os1.home.ajg.id.au",
        "managed_service_haproxyinternal_api": "haproxy/cloud.internalapi.os1.home.ajg.id.au",
        "managed_service_haproxystorage": "haproxy/cloud.storage.os1.home.ajg.id.au",
        "managed_service_haproxystorage_mgmt": "haproxy/cloud.storagemgmt.os1.home.ajg.id.au",
        "managed_service_mysqlinternal_api": "mysql/cloud.internalapi.os1.home.ajg.id.au",
        "managed_service_redisinternal_api": "redis/cloud.internalapi.os1.home.ajg.id.au"
   },
   "project-id": "908f75aa71aa47f095c6c666207eb1ba"
}

Notice how the metadata contains all the detail necessary to enrol this particular host in IPA, as well as to create the sub-hosts and services for the host. In particular, the key ipa_enroll='true' is used to kick the entire process off. Note that if this isn’t set as metadata on the instance, novajoin will check whether it has been set as metadata on the image used to provision the node.
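Conceptually, novajoin expands this metadata into one IPA sub-host per network the node sits on, and one Kerberos service principal per service/network pair. The naming scheme below is a simplified sketch of that expansion, not novajoin's literal code:

```python
import json

hostname = "os1-controller-2"
domain = "os1.home.ajg.id.au"
realm = "OS1.HOME.AJG.ID.AU"

# A cut-down compact_services value from the metadata above.
compact_services = json.loads(
    '{"HTTP": ["internalapi", "storage"], "mysql": ["internalapi"]}'
)

subhosts = set()
principals = set()
for service, networks in compact_services.items():
    for network in networks:
        # One sub-host per network the node is attached to...
        subhost = f"{hostname}.{network}.{domain}"
        subhosts.add(subhost)
        # ...and one service principal per service on that network.
        principals.add(f"{service}/{subhost}@{realm}")

print(sorted(subhosts))
print(sorted(principals))
```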

Metadata in hand, novajoin creates the IPA host and all required services. Novajoin then returns an IPA One Time Password (OTP) to nova – this OTP will be used by the host to enrol with IPA and download its keytab. Nova stores the resulting data in the vendordata2.json file and requests that Ironic build a config drive with this metadata included.
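The shape of the data novajoin hands back can be sketched like this – the OTP generation here is purely illustrative (the real OTP is set on the host entry by IPA when novajoin registers it):

```python
import json
import uuid

# Illustrative only: in reality the OTP comes from the IPA host entry
# that novajoin creates, not from a local uuid.
otp = uuid.uuid4().hex

# The "join" payload nova stores in vendordata2.json on the config drive.
vendordata2 = {
    "join": {
        "hostname": "os1-controller-2.os1.home.ajg.id.au",
        "krb_realm": "OS1.HOME.AJG.ID.AU",
        "ipaotp": otp,
    }
}

print(json.dumps(vendordata2, indent=4, sort_keys=True))
```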

Static vendordata is also used to provide the cloud-init script that performs the actual enrolment. This, too, is provided on the config drive.

The config drive is a partition that Ironic adds to the disk when it provisions the node. The partition carries a specific label, config-2, which cloud-init recognises. Once booted, cloud-init writes the IPA enrolment script (provided by the static vendordata, and obtained via the config drive) to /root and then executes it.

This script grabs the OTP out of the metadata and enrols the host to IPA. Done!

The output from the IPA setup script is available at /var/log/setup-ipa-client.log. You will also find the IPA enrolment script has been deployed to /root/setup-ipa-client.sh.
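In spirit, the enrolment script boils down to the following sketch (a simplification, not the actual setup-ipa-client.sh; the sample file here stands in for the copy of vendordata2.json on the config drive):

```python
import json
import os
import tempfile

# Stand-in for the vendordata2.json delivered on the config drive.
sample = {
    "join": {
        "hostname": "os1-controller-0.os1.home.ajg.id.au",
        "krb_realm": "OS1.HOME.AJG.ID.AU",
        "ipaotp": "932757b15b9f44289a757237005459b6",
    }
}
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(sample, f)
    path = f.name

# Grab the OTP and identity out of the metadata...
with open(path) as f:
    join = json.load(f)["join"]
os.unlink(path)

# ...and build the ipa-client-install invocation the script would run.
cmd = [
    "ipa-client-install", "-U",
    "-w", join["ipaotp"],
    "--realm", join["krb_realm"],
    "--hostname", join["hostname"],
]
print(" ".join(cmd))
```

This matches the manual ipa-client-install invocation shown later in the post.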

If using RHEL, the packages that the static script asks cloud-init to install will not be available until the system has been registered to Satellite, which doesn’t occur until the first Puppet run on the host. As a result the package installation will fail – but in this case the required packages are baked into the image, so they are always available at deploy time.

Novajoin also hooks into the rabbit bus on the undercloud and listens for notification messages about instances (i.e. servers) being deleted. When an instance is destroyed, novajoin un-enrols the associated host and removes it, its sub-hosts and its services from IPA.
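The dispatch logic amounts to something like this sketch – the real service listens on nova's notification topic via oslo.messaging, and the event type and FakeIPA client here are stand-ins for illustration:

```python
# Sketch of novajoin's delete handling: on an instance-delete
# notification, remove the matching host from IPA. The event-type
# string and IPA client below are illustrative stand-ins.
def handle_notification(event_type, payload, ipa):
    if event_type == "compute.instance.delete.end":
        ipa.delete_host(payload.get("hostname"))
        return True
    return False

class FakeIPA:
    """Records deletions instead of talking to a real IPA server."""
    def __init__(self):
        self.deleted = []
    def delete_host(self, hostname):
        self.deleted.append(hostname)

ipa = FakeIPA()
handle_notification(
    "compute.instance.delete.end",
    {"hostname": "os1-controller-2"},
    ipa,
)
print(ipa.deleted)  # -> ['os1-controller-2']
```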

What about the network metadata endpoint – 169.254.169.254?

As it turns out you can query this endpoint after the node has been provisioned and get feedback from Novajoin! Here’s an example:

[root@os1-controller-0 ~]# curl http://169.254.169.254/openstack/2016-10-06/vendor_data2.json | python -m json.tool
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  3169  100  3169    0     0    411      0  0:00:07  0:00:07 --:--:--   859
{
    "join": {
        "hostname": "os1-controller-0.os1.home.ajg.id.au",
        "krb_realm": "OS1.HOME.AJG.ID.AU"
    },
    "static": {
        ...
I’ve skipped the bulk of the static data there – but notice that we still get the dynamic data (in the “join” key). This is the data from Novajoin. In this case it doesn’t include a host OTP, because this host is already registered – in fact, Novajoin does attempt to register the host again and gets an error from the IPA server.

But what if the host wasn’t registered? Is it possible to use novajoin to enrol a host after the fact, i.e. after it has been deployed? Technically, yes.

To do so, you need to ensure the instance metadata on the undercloud is updated to include all of the hosts, sub-hosts and services that the node expects. A request to the vendor_data2.json endpoint then causes this metadata to be sent to novajoin, which in turn triggers the IPA host creation.
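Rather than hand-typing the long openstack server set invocation, the --property arguments can be generated from a services description. This sketch uses a cut-down service list; the property names follow the example deployment above:

```python
import json

# Cut-down services description for the node being re-enrolled.
# Names follow the example deployment in this post.
domain = "os1.home.ajg.id.au"
compact_services = {"HTTP": ["ctlplane", "internalapi"], "mysql": ["internalapi"]}
managed_services = {
    "managed_service_mysqlinternal_api": f"mysql/cloud.internalapi.{domain}",
}

props = {
    # compact_services must be a JSON-encoded string, not a dict.
    "compact_services": json.dumps(compact_services),
    "ipa_enroll": "true",
    **managed_services,
}

args = []
for key, value in sorted(props.items()):
    args += ["--property", f"{key}='{value}'"]

print("openstack server set <server-id> " + " ".join(args))
```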

Enrolling a host after deployment

Set the metadata manually on the instance:

(undercloud) [stack@undercloud overcloud-osp13]$ openstack server set b7c6d332-6db4-4d32-8b93-78649c8d1fd9 \
    --property compact_services='{"HTTP": ["ctlplane", "storage", "storagemgmt", "internalapi", "external", "management"], "mysql": ["internalapi"], "rabbitmq": ["internalapi"], "libvirt-vnc": ["internalapi"], "novnc-proxy": ["internalapi"], "neutron": ["internalapi"]}' \
    --property ipa_enroll='true' \
    --property managed_service_haproxyctlplane='haproxy/cloud.ctlplane.os1.home.ajg.id.au' \
    --property managed_service_haproxyinternal_api='haproxy/cloud.internalapi.os1.home.ajg.id.au' \
    --property managed_service_haproxystorage='haproxy/cloud.storage.os1.home.ajg.id.au' \
    --property managed_service_haproxystorage_mgmt='haproxy/cloud.storagemgmt.os1.home.ajg.id.au' \
    --property managed_service_mysqlinternal_api='mysql/cloud.internalapi.os1.home.ajg.id.au' \
    --property managed_service_redisinternal_api='redis/cloud.internalapi.os1.home.ajg.id.au'
(undercloud) [stack@undercloud overcloud-osp13]$ openstack server show b7c6d332-6db4-4d32-8b93-78649c8d1fd9

curl the metadata endpoint from within the deployed node:

[root@os1-controller-0 ~]# curl http://169.254.169.254/openstack/2016-10-06/vendor_data2.json | python -m json.tool 
{
    "join": {
        "hostname": "os1-controller-0.os1.home.ajg.id.au",
        "ipaotp": "932757b15b9f44289a757237005459b6",
        "krb_realm": "OS1.HOME.AJG.ID.AU"
    },

We now have an IPA OTP! Run ipa-client-install manually…

[root@os1-controller-0 ~]# ipa-client-install -U -w 932757b15b9f44289a757237005459b6 --realm OS1.HOME.AJG.ID.AU --hostname os1-controller-0.os1.home.ajg.id.au
WARNING: ntpd time&date synchronization service will not be configured as
conflicting service (chronyd) is enabled
Use --force-ntpd option to disable it and force configuration of ntpd

Discovery was successful!
Client hostname: os1-controller-0.os1.home.ajg.id.au
Realm: OS1.HOME.AJG.ID.AU
DNS Domain: os1.home.ajg.id.au
IPA Server: idm.os1.home.ajg.id.au
BaseDN: dc=os1,dc=home,dc=ajg,dc=id,dc=au

And we are enrolled.
