Finishing the Puppet 4 Migration

Two days ago I finished our migration to Puppet 4. Overall I'd say the process was pretty painless. The gist of what I did:

  • start running rspec tests against Puppet 4 (see the Gemfile sketch just after this list)
  • fix issues found in tests
  • run the catalog preview tool and fix any issues found
  • turn on the future parser on the existing master
  • turn off stringify facts
  • create new master and PuppetDB server
  • migrate agents to new master
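The post doesn't show the test setup for that first step, but a common convention is a Gemfile that pins the Puppet gem via an environment variable so the same spec suite can run under Puppet 3 and Puppet 4. A minimal sketch (the version numbers are just examples):

source 'https://rubygems.org'

# Run the suite against whatever Puppet version CI asks for,
# defaulting to the current Puppet 3 release
gem 'puppet', ENV['PUPPET_GEM_VERSION'] || '~> 3.8.0'
gem 'rspec-puppet'
gem 'puppetlabs_spec_helper'

Then something like PUPPET_GEM_VERSION='~> 4.8' bundle update puppet && bundle exec rake spec exercises the same code under Puppet 4 (assuming the usual puppetlabs_spec_helper Rakefile).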

Thankfully our code wasn’t too difficult to update and most of the forge modules we use had also been updated.

Creating the New Master

I did not want to upgrade our existing master, for a variety of reasons I won't get into here. Instead I took the opportunity to migrate it from an old VM to running on EC2, with PuppetDB backed by RDS. I have to give props to the Puppet team for greatly simplifying the setup process of a new master in Puppet 4. Setting up puppetserver is significantly easier than setting up the old Passenger-based server.
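To give a sense of how simple it is now: on an EL 7 instance, a basic puppetserver install roughly boils down to the commands below. This is a sketch, not our exact build (the PuppetDB and RDS setup is a separate exercise).

# Add the Puppet Collection 1 (PC1) repo and install puppetserver
rpm -Uvh https://yum.puppetlabs.com/puppetlabs-release-pc1-el-7.noarch.rpm
yum install -y puppetserver

# Adjust the JVM heap for the instance size in /etc/sysconfig/puppetserver,
# then start the service
systemctl enable puppetserver
systemctl start puppetserver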

Migrating the Agents

Puppet provides a module to migrate your agents to the new master. It will copy the existing SSL certs to the new directory and upgrade the agent. I was not able to use this since I was not migrating certs to the new master (I needed to add new DNS alt names). The consequence of this was needing to find a way to upgrade and migrate the agents in an automated fashion. I accomplished this entirely with Puppet! The process was:

  • pre-create /etc/puppetlabs/puppet
  • drop a new config into /etc/puppetlabs/puppet/puppet.conf with the new master name
  • set up the Puppet Labs Puppet Collection (PC1) repo
  • install the new puppet-agent package
  • update cron job for new puppet paths (the fact that I already ran puppet using cron made this simple)
  • purge the old /etc/puppet directory

A Puppet run would take place on the old master and prep things using the steps above. Then, when the cron job kicked in, it would run against the new master and get a new cert issued. Overall this worked really well, and we only had to touch 2 machines by hand.
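The post doesn't reproduce the puppet.conf that gets dropped in place, but a minimal sketch of what it might contain looks like this (the master hostname is a placeholder):

[main]
server = puppet4.example.com

The agent's certname defaults to the node's fqdn, so pointing at the new server is all the migration really needs.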

Migrate Puppet Manifest

This is the manifest I used to migrate our Linux machines. It's available on GitHub at https://github.com/dschaaff/puppet-migrate.

class migrate {

  # lay down the new Puppet 4 config directory and puppet.conf
  file { '/etc/puppetlabs':
    ensure => directory,
  }
  ->
  file { '/etc/puppetlabs/puppet':
    ensure => directory,
  }
  ->
  file { '/etc/puppetlabs/puppet/puppet.conf':
    ensure => present,
    source => 'puppet:///modules/migrate/puppet.conf',
  }

  if $facts['osfamily'] == 'Debian' {
    include apt
    apt::source { 'puppetlabs-pc1':
      location => 'http://apt.puppetlabs.com',
      repos    => 'PC1',
      key      => {
        'id'     => '6F6B15509CF8E59E6E469F327F438280EF8D349F',
        'server' => 'pgp.mit.edu',
      },
      notify   => Class['apt::update'],
    }
    package { 'puppet-agent':
      ensure  => present,
      require => Class['apt::update'],
    }
  }

  if $facts['osfamily'] == 'RedHat' {
    $version = $facts['operatingsystemmajrelease']
    yumrepo { 'puppetlabs-pc1':
      baseurl  => "https://yum.puppetlabs.com/el/${version}/PC1/\$basearch",
      descr    => 'Puppetlabs PC1 Repository',
      enabled  => true,
      gpgcheck => '1',
      gpgkey   => 'https://yum.puppetlabs.com/RPM-GPG-KEY-puppetlabs',
    }
    ->
    package { 'puppet-agent':
      ensure => present,
    }
  }

  # stagger runs: a random minute per host, and again 30 minutes later
  $time1  = fqdn_rand(30)
  $time2  = $time1 + 30
  $minute = [ $time1, $time2 ]

  # cron job for the new Puppet 4 agent...
  cron { 'puppet-agent':
    command => '/opt/puppetlabs/bin/puppet agent --no-daemonize --onetime --logdest syslog > /dev/null 2>&1',
    user    => 'root',
    hour    => '*',
    minute  => $minute,
  }
  ->
  # ...and remove the old Puppet 3 cron job
  cron { 'puppet-client':
    ensure  => 'absent',
    command => '/usr/bin/puppet agent --no-daemonize --onetime --logdest syslog > /dev/null 2>&1',
    user    => 'root',
    hour    => '*',
    minute  => $minute,
  }

  # purge the old Puppet 3 directories (force is required to
  # remove a directory and its contents)
  file { '/etc/puppet':
    ensure => absent,
    force  => true,
  }
  ->
  file { '/var/lib/puppet/ssl':
    ensure => absent,
    force  => true,
  }
}

I used a similar manifest for macOS:

class migrate::mac {
  $mac_vers = $facts['macosx_productversion_major']

  # lay down the new Puppet 4 config directory and puppet.conf
  file { '/etc/puppetlabs':
    ensure => directory,
  }
  ->
  file { '/etc/puppetlabs/puppet':
    ensure => directory,
  }
  ->
  file { '/etc/puppetlabs/puppet/puppet.conf':
    ensure => present,
    source => 'puppet:///modules/migrate/puppet.conf',
  }

  # install the puppet-agent package from the matching macOS dmg
  package { "puppet-agent-1.8.2-1.osx${mac_vers}.dmg":
    ensure => present,
    source => "https://downloads.puppetlabs.com/mac/${mac_vers}/PC1/x86_64/puppet-agent-1.8.2-1.osx${mac_vers}.dmg",
  }

  # stagger runs: a random minute per host, and again 30 minutes later
  $time1  = fqdn_rand(30)
  $time2  = $time1 + 30
  $minute = [ $time1, $time2 ]

  cron { 'puppet-agent':
    command => '/opt/puppetlabs/bin/puppet agent --no-daemonize --onetime --logdest syslog > /dev/null 2>&1',
    user    => 'root',
    hour    => '*',
    minute  => $minute,
  }

  # purge the old Puppet 3 directories
  file { '/etc/puppet':
    ensure => absent,
    force  => true,
  }
  ->
  file { '/var/lib/puppet/ssl':
    ensure => absent,
    force  => true,
  }
  ->
  # using gem since puppet 3.8 did not have packages for Sierra
  package { 'puppet':
    ensure   => absent,
    provider => 'gem',
  }
}
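The post doesn't show how these classes were applied on the old master, but a hypothetical site.pp snippet on the Puppet 3 master (with the future parser on, so $facts is available) would look something like this:

node default {
  # macOS reports its kernel fact as Darwin
  if $facts['kernel'] == 'Darwin' {
    include migrate::mac
  } else {
    include migrate
  }
}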

Post Migration Experience

After migrating the agents I only ran into one piece of code that broke due to the upgrade. Somehow I had overlooked the removal of dynamic scoping in ERB templates. This piece of code was not covered by rspec tests (an area for improvement!). I relied on this behavior to configure the logstash output to elasticsearch. Under Puppet 3 the relevant piece of ERB looked like this:

output {
  if [type] == "syslog" {
    elasticsearch {
      hosts => [<%= @es_input_nodes.collect { |node| '"' + node.to_s + ':' + @elasticsearch_port.to_s + '"' }.join(',') %>]
      ssl => true
    }
  }
}

The value of es_input_nodes was pulled from the params class:

class elk::logstash (
  $syslog_port         = $elk::params::syslog_port,
  $elasticsearch_nodes = $elk::params::elasticsearch_nodes,
  $es_input_nodes      = $elk::params::es_input_nodes,
  $elasticsearch_port  = $elk::params::elasticsearch_port,
  $netflow_port        = $elk::params::netflow_port
)

The params class pulls the info from PuppetDB, using the query_nodes function from the puppetdbquery module:

$es_input_nodes = sort(query_nodes('Class[Elk::elasticsearch] and elasticsearchrole=data or elasticsearchrole=client'))

The removal of dynamic scoping in templates meant the template was putting empty values into the logstash config and breaking the service. The fix was to fully qualify the variables in the template, which now looks like this:

output {
  if [type] == "syslog" {
    elasticsearch {
      hosts => [<%= scope['elk::logstash::es_input_nodes'].collect { |node| '"' + node.to_s + ':' + scope['elk::logstash::elasticsearch_port'].to_s + '"' }.join(',') %>]
      ssl => true
    }
  }
}

Remaining Work

Prior to the migration I relied on stephenrjohnson/puppetmodule to manage the Puppet agent on Linux and macOS. Some work has been done on Puppet 4 compatibility, but there is still more to do. I'm close to updating the agent pieces for my needs, but there is a lot of work left to add puppet master support.

Terraform AMI Maps

Up until today we had been using a map variable in Terraform to choose our Ubuntu 14.04 AMI based on region:

variable "ubuntu_amis" {
    description = "Mapping of Ubuntu 14.04 AMIs."
    default = {
        ap-northeast-1 = "ami-a25cffa2"
        ap-southeast-1 = "ami-967879c4"
        ap-southeast-2 = "ami-21ce8b1b"
        cn-north-1     = "ami-d44fd2ed"
        eu-central-1   = "ami-9cf9c281"
        eu-west-1      = "ami-664b0a11"
        sa-east-1      = "ami-c99518d4"
        us-east-1      = "ami-c135f3aa"
        us-gov-west-1  = "ami-91cfafb2"
        us-west-1      = "ami-bf3dccfb"
        us-west-2      = "ami-f15b5dc1"
    }
}

We would then set the AMI ID like so when creating an EC2 instance:

ami = "${lookup(var.ubuntu_amis, var.region)}"

The problem we ran into is that we now use Ubuntu 16.04 by default and wanted to expand the AMI map to contain its IDs as well. I quickly discovered that nested maps like the one below do not work (Terraform map variables only support flat string values):

 variable "ubuntu_amis" {
    description = "Mapping of Ubuntu 14.04 AMIs."
    default = {
        "ubuntu14" = {
          ap-northeast-1 = "ami-a25cffa2"
          ap-southeast-1 = "ami-967879c4"
          ap-southeast-2 = "ami-21ce8b1b"
          cn-north-1     = "ami-d44fd2ed"
          eu-central-1   = "ami-9cf9c281"
          eu-west-1      = "ami-664b0a11"
          sa-east-1      = "ami-c99518d4"
          us-east-1      = "ami-c135f3aa"
          us-gov-west-1  = "ami-91cfafb2"
          us-west-1      = "ami-bf3dccfb"
          us-west-2      = "ami-f15b5dc1"
      }
      "ubuntu16" = {
          ap-northeast-1 = "ami-a25cffa2"...

I also tried the solution from this old GitHub issue, but it is no longer valid since the concat function only accepts lists now. In the end I landed on using a variable for the OS version and flattening the map keys like this:

variable "os-version" {
    description = "Whether to use ubuntu 14 or ubuntu 16"
    default     = "ubuntu16"
}
ariable "ubuntu_amis" {
    description = "Mapping of Ubuntu 14.04 AMIs."
    default = {
          ubuntu14.ap-northeast-1 = "ami-a25cffa2"
          ubuntu14.ap-southeast-1 = "ami-967879c4"
          ubuntu14.ap-southeast-2 = "ami-21ce8b1b"
          ubuntu14.cn-north-1     = "ami-d44fd2ed"
          ubuntu14.eu-central-1   = "ami-9cf9c281"
          ubuntu14.eu-west-1      = "ami-664b0a11"
          ubuntu14.sa-east-1      = "ami-c99518d4"
          ubuntu14.us-east-1      = "ami-c135f3aa"
          ubuntu14.us-gov-west-1  = "ami-91cfafb2"
          ubuntu14.us-west-1      = "ami-bf3dccfb"
          ubuntu14.us-west-2      = "ami-f15b5dc1"
          ubuntu16.ap-northeast-1 = "ami-a68e3ec7"
          ubuntu16.ap-southeast-1 = "ami-5b7ed338"
          ubuntu16.ap-southeast-2 = "ami-e2112881"
          ubuntu16.cn-north-1     = "ami-593feb34"
          ubuntu16.eu-central-1   = "ami-df02c5b0"
          ubuntu16.eu-west-1      = "ami-be376ecd"
          ubuntu16.sa-east-1      = "ami-8f34aae3"
          ubuntu16.us-east-1      = "ami-2808313f"
          ubuntu16.us-gov-west-1  = "ami-19d56d78"
          ubuntu16.us-west-1      = "ami-900255f0"
          ubuntu16.us-west-2      = "ami-7df25b1d"
    }
}

Then we use a lookup like this when creating an instance:

ami = "${lookup(var.ubuntu_amis, "${var.os-version}.${var.region}")}"

Hopefully this helps someone out, and if you know of a better way to accomplish this, please share!

Adventures in Ruby

I'm learning Ruby. Finding time to work towards this goal is proving difficult, but I'm forcing myself to use Ruby wherever possible to aid my learning. I'll be putting some of my lame code on here to chronicle my learning and hopefully get some feedback on how I can improve things. I recently came across a good opportunity when I needed to generate a list of nodes to use with the puppet catalog preview tool.

I wanted to get a full picture of my infrastructure and represent all our nodes in the report output without having to manually type a large node list. Puppet already has all my node names, so I just needed to extract them. My first step was to query the nodes endpoint in PuppetDB for all nodes and pipe the output into a file.

curl http://puppetdb.example.com:8080/v3/nodes/ > nodesout.txt

The output of this is JSON containing an array of hashes:

[{
  "name" : "server2.example.com",
  "deactivated" : null,
  "catalog_timestamp" : "2016-11-28T19:28:14.828Z",
  "facts_timestamp" : "2016-11-28T19:28:12.112Z",
  "report_timestamp" : "2016-11-28T19:28:13.443Z"
},{
  "name" : "server.example.com",
  "deactivated" : null,
  "catalog_timestamp" : "2016-11-28T19:28:14.828Z",
  "facts_timestamp" : "2016-11-28T19:28:12.112Z",
  "report_timestamp" : "2016-11-28T19:28:13.443Z"
}]

I only wanted the name of each node, so I needed to parse that out. It was a great opportunity to open pry and get some practice!

  • load JSON so I can parse the file
[1] pry(main)> require 'json'
=> true
  • read in the file
[2] pry(main)> file = File.read('nodesout.txt')
  • parse the file into a variable
[3] pry(main)> data_hash = JSON.parse(file)
=> [{"name"=>"server.example.com",
"deactivated"=>nil,
"catalog_timestamp"=>"2016-11-29T00:37:03.202Z",
"facts_timestamp"=>"2016-11-29T00:37:00.972Z",
"report_timestamp"=>"2016-11-29T00:36:38.679Z"},
{"name"=>"server2.example.com",
"deactivated"=>nil,
"catalog_timestamp"=>"2016-11-29T00:37:03.202Z",
"facts_timestamp"=>"2016-11-29T00:37:00.972Z",
"report_timestamp"=>"2016-11-29T00:36:38.679Z"}]
[4] pry(main)> data_hash.class
=> Array
  • set up a method to iterate over the data and write each hostname to a new line in a file
[5] pry(main)> def list_nodes(input)
[5] pry(main)*   File.open('nodes_out.txt', 'a') do |f|
[5] pry(main)*     input.each do |i|
[5] pry(main)*       f.puts i["name"]
[5] pry(main)*     end
[5] pry(main)*   end
[5] pry(main)* end
=> :list_nodes
  • run the method against my data_hash
[6] pry(main)> list_nodes(data_hash)
[7] pry(main)> exit

I now have the list of nodes I was looking for!

$ cat nodes_out.txt
server.example.com
server2.example.com

This accomplished what I needed and also saved me a lot of time. I'm certain there's a cleaner way to do this (like making the PuppetDB query directly from Ruby instead of piping curl output to a file), but that's what learning is for!
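For what it's worth, here's a sketch of what that cleaner version might look like, rolling the query and the parsing into one small script (the PuppetDB hostname is a placeholder):

require 'json'
require 'net/http'

# query the PuppetDB v3 nodes endpoint directly instead of shelling out to curl
uri = URI('http://puppetdb.example.com:8080/v3/nodes/')
nodes = JSON.parse(Net::HTTP.get(uri))

# write one node name per line for the catalog preview tool
File.open('nodes_out.txt', 'w') do |f|
  nodes.each { |node| f.puts node['name'] }
end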

Using Puppet Catalog Preview with FOSS Puppet

We're working to upgrade our infrastructure to Puppet 4 and are making use of the catalog preview tool to help identify code that needs to be updated. The preview tool in and of itself is handy, but the output it produces can be a bit daunting. During the "Getting to the Latest Puppet" talk at PuppetConf, they pointed out a tool that professional services uses to create a nice HTML version of the output. Naturally I got excited to use this, but discovered it doesn't work properly with open source Puppet due to some hardcoded Puppet Enterprise paths. Fortunately it was only 3 lines to update! My fork is here if it's useful to others.
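For context, a typical invocation of the preview tool against a node list like the one generated above looks something like this (the environment names are placeholders; the flags are the ones documented by the catalog_preview module):

puppet preview \
  --baseline-environment production \
  --preview-environment future_production \
  --migrate 3.8/4.0 \
  --view overview \
  --nodes nodes_out.txt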

No Longer Barfing at the Mention of ChatOps

I've poked a lot of fun at ChatOps, but I have found some value in portions of the practice. Let me state upfront that I do not believe paying attention to the chat room all day and having your attention interrupted non-stop is a productive or healthy practice. I have found some big benefits to "chatops", however.

Visibility

Work that is done in the chat room, or filtered into the chat room, is visible to the whole team. This helps the team stay aware of what others are doing and stay up to date. I've picked up on quite a few things this way that I wouldn't have learned otherwise. This is also why we choose to route a fair number of notifications into chat. For example, we have Jira connected to HipChat, which makes it really easy to stay on top of issues. We also push commit notifications, build notifications, etc. into the chat room. The downside is that the rooms get noisy, which makes it harder to follow actual conversations between humans. One strategy we use to combat that is creating multiple rooms, each focused on a subject.

Less Context Switching

Even better than the visibility wins is the ability to accomplish a task without context switching by using bots. We have HipChat open all day. It is far quicker to switch focus to the HipChat window, slam in a command, and get back to what we were doing than it is to SSH into a server, run a command, and log back out. My co-worker got inspired by Kevin Paulisse's talk at PuppetConf and their use of Hubot, which led us to get our chatops on using the open source bot Lita. We've already done some work on our own plugins, and I'm now sold on working this way.

Using Lita

Working with Active Directory

Our lita-activedirectory plugin allows us to leverage our cratus gem to query and interact with Active Directory over LDAP. It currently supports checking whether a user account is locked and unlocking the account. We plan to extend it to allow querying group memberships and other attributes. One less reason to open Active Directory Users and Computers!

Working with Puppet

Our lita-puppet (https://github.com/knuedge/lita-puppet) plugin lets us interact with our Puppet infrastructure and PuppetDB. Some tasks it currently supports:

  • list all nodes containing a specific class
  • list all the profile and role classes applied to a node
  • clean a cert off the puppet master
  • run the puppet agent on a node
  • deploy code with r10k on the puppet master

Other Cool Uses

We've implemented other helpful abilities:

  • run a DNS lookup using dig
  • ping a name or IP
  • test for the availability of a URL
  • reset OTP tokens

Being able to accomplish these tasks without leaving an app I already have open all the time has been huge. Combine this with the visibility chatops brings to our team and we have a winning combination. I'd encourage you to give it a shot, while we all remember to be mindful of people's time and attention.