Update to lita-activedirectory

I updated our Active Directory Lita plugin today with support for querying the members of a given group. See https://github.com/knuedge/lita-activedirectory. It still needs some work to properly present errors when a user or group doesn’t actually exist in the directory; right now it returns nothing rather than a helpful error. It works splendidly with legitimate users and groups, however.

ChatOps ftw!

Automated Puppet Tests with Bamboo

rnelson0 is walking through automated Puppet testing with Jenkins on his blog. I thought I’d highlight how you can use a similar workflow in Atlassian’s Bamboo application. This assumes you already have a working Bamboo setup and are familiar with the general process for testing Puppet modules with rspec.

Create a new plan

The first step is to set up a new plan to use for the testing. Click “Create” and then “Create a new plan” in the top menu bar.

Bamboo organizes related jobs, builds, etc. into projects. On the next screen, either create a new Puppet project or select an existing project if you’ve already set one up.

Fill in the details and select the repository you’d like to start testing. Bamboo can read from a large number of version control systems; I happen to use Bitbucket, so this is simple. Once you’re happy with the selections, click “Configure plan”.


At the next screen we set up our tasks. The first task, Source Code Checkout, is added by default and checks out the repo configured in the previous step. I like to break things down into small script tasks so they are easier to troubleshoot and to duplicate between jobs.

Click “Add task” and select “Script”.


The first script task sets up the Ruby environment. This presumes you already have Bamboo build agents up and running and that RVM is installed. The script below is what I use; it performs some checks to ensure Ruby is properly set up.


#!/bin/bash

# Bamboo may launch script tasks with /bin/sh, so re-exec under bash
# if needed (rvm requires bash)
if [ "$(ps -p "$$" -o comm=)" != "bash" ]; then
  /bin/bash "$0" "$@"
  exit "$?"
fi

# Load rvm and make sure the desired ruby is installed
source /etc/profile.d/rvm.sh
ruby="ruby-2.1.8"
install_count=$(rvm list | grep -c "$ruby")

if [ "$install_count" -lt 1 ]; then
  rvm install "$ruby"
fi
rvm use "$ruby"
# keep gemsets in the build user's home directory
rvm user gemsets

Now click “Add task” and add another script task. We will use this step to create a new gemset and install the gems listed in the Gemfile using Bundler. If you use an environment variable to specify the version of Puppet to install, enter that in the environment variables field.

#!/bin/bash

# Re-exec under bash if the task was launched with /bin/sh
if [ "$(ps -p "$$" -o comm=)" != "bash" ]; then
  /bin/bash "$0" "$@"
  exit "$?"
fi

# Create a dedicated gemset for this plan and install the Gemfile's gems
source /etc/profile.d/rvm.sh
ruby="ruby-2.1.8"
gemset="puppet-4.8.0-validate"
rvm use "$ruby"
rvm user gemsets
rvm gemset create "$gemset"
rvm gemset use "$gemset"
gem install bundler
rm -f Gemfile.lock
bundle install
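For reference, driving the Puppet version from an environment variable usually looks something like this in the module’s Gemfile. This is a sketch of the common pattern; PUPPET_GEM_VERSION is a conventional variable name, not something Bamboo sets for you, and the rest of your Gemfile will differ.

```ruby
# Gemfile (sketch): install a specific Puppet version when
# PUPPET_GEM_VERSION is set, otherwise the latest release.
source 'https://rubygems.org'

if (puppetversion = ENV['PUPPET_GEM_VERSION'])
  gem 'puppet', puppetversion, :require => false
else
  gem 'puppet', :require => false
end
```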

Now let’s add another script step and use it to run the actual tests. At the end of the run it deletes the gemset to ensure a clean environment for each new build. You can also set other options here using environment variables, such as STRICT_VARIABLES=no.

#!/bin/bash

# Re-exec under bash if the task was launched with /bin/sh
if [ "$(ps -p "$$" -o comm=)" != "bash" ]; then
  /bin/bash "$0" "$@"
  exit "$?"
fi

# Activate the gemset created in the previous task
source /etc/profile.d/rvm.sh
ruby="ruby-2.1.8"
gemset="puppet-4.8.0-validate"
rvm use "$ruby"
rvm user gemsets
rvm gemset use "$gemset"

# Run the tests, then delete the gemset for a clean slate next run
bundle exec rake validate
bundle exec rake spec
rvm gemset delete --force "$gemset"
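The validate and spec tasks used above come from the puppetlabs_spec_helper gem; if your module doesn’t already have them wired up, its Rakefile just needs to pull in that gem’s rake tasks:

```ruby
# Rakefile: puppetlabs_spec_helper supplies the validate, spec, and
# lint rake tasks commonly used for testing Puppet modules.
require 'puppetlabs_spec_helper/rake_tasks'
```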

We’ve now covered the basics needed to get started and can click “Create” at the bottom to finalize the plan. After doing so we’ll be looking at the job configuration page. The script tasks we just created are in the “Default Job”.

That name isn’t very clear, so let’s update it. Click “Default Job” and then select the “Job details” tab. Here we’ll enter a more descriptive name such as “Puppet 4.8 rspec” and click save. A descriptive name is handy because you can clone stages across plans to avoid repeating the script setup.

Repository Triggers

If you’re using the built-in Bitbucket Server integration like my setup, Bamboo will automatically run the plan whenever a new commit is pushed to the repo. You can customize the trigger to use polling, scheduling, or other options as well. Simply select the “Triggers” tab under the plan configuration and set it appropriately.

Tracking Results

If you were to run this plan now you would get a pass or fail in Bamboo, but you would have to read the job’s logs to see the actual details of the results. We can use JUnit to get a better view of the tests in Bamboo. To set this up, create a .rspec file in the root of the Puppet module you’re testing with the following content:

--format RspecJunitFormatter
--out results.xml
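The RspecJunitFormatter class is provided by the rspec_junit_formatter gem, so that gem needs to be available alongside your other test dependencies. A sketch of the Gemfile addition (your other gems will differ):

```ruby
# Gemfile additions (sketch) for JUnit-formatted rspec output
group :test do
  gem 'rspec_junit_formatter'
end
```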

This will write the results of the tests in JUnit format to the results.xml file. You’ll also want to add that file to .gitignore. Now we can add a step to our plan to parse it. Go back to the plan configuration and select the stage we set up previously. Click “Add task” and select “JUnit Parser.”

Configure the JUnit parser so it finds our results file.

Click save and run the plan again. You’ll now see your test results nicely formatted in the plan history. Here’s an example of what that looks like for failed tests.

You don’t need to redo this work for each repo. When setting up a plan for a new module, start by removing the default stage, then click “Add job” and select “Clone an existing job.”

You can then select which plan to clone from.

It takes a bit more setup than using Travis CI, but it’s not difficult to get rspec testing up and running in Bamboo.


Finishing the Puppet 4 Migration

Two days ago I finished our migration to Puppet 4. Overall I’d say the process was pretty painless. The gist of what I did:

  • start running rspec tests against Puppet 4
  • fix issues found in tests
  • run the catalog preview tool and fix any issues found
  • turn on the future parser on the existing master
  • turn off stringify facts
  • create new master and PuppetDB server
  • migrate agents to new master

Thankfully our code wasn’t too difficult to update and most of the forge modules we use had also been updated.

Creating the New Master

I did not want to upgrade our existing master, for a variety of reasons I won’t get into here. Instead I took the opportunity to migrate it from an old VM to running on EC2 with PuppetDB in RDS. I have to give props to the Puppet team for greatly simplifying the setup of a new master in Puppet 4. Setting up puppetserver is significantly easier than the old Passenger-based server.

Migrating the Agents

Puppet provides a module to migrate your agents to the new master. It copies the existing SSL certs to the new directory and upgrades the agent. I was not able to use it, since I was not migrating certs to the new master (I needed to add new DNS alt names). The consequence was needing a way to upgrade and migrate the agents in an automated fashion. I accomplished this entirely with Puppet! The process was:

  • pre-create /etc/puppetlabs/puppet
  • drop a new config into /etc/puppetlabs/puppet/puppet.conf with the new master name
  • setup the puppetlabs puppet collection repo
  • install the new puppet-agent package
  • update cron job for new puppet paths (the fact that I already ran puppet using cron made this simple)
  • purge the old /etc/puppet directory

A Puppet run on the old master would prep things using the steps above. Then, when the cron job kicked in, the agent would run against the new master and get a new cert issued. Overall this worked really well, and we only had to touch two machines by hand.

Migrate Puppet Manifest

This is the manifest I used to migrate our Linux machines. It’s available on GitHub at https://github.com/dschaaff/puppet-migrate.

class migrate {

  file { '/etc/puppetlabs':
    ensure => directory,
  }
  -> file { '/etc/puppetlabs/puppet':
    ensure => directory,
  }
  -> file { '/etc/puppetlabs/puppet/puppet.conf':
    ensure => present,
    source => 'puppet:///modules/migrate/puppet.conf',
  }

  if $facts['osfamily'] == 'Debian' {
    include apt
    apt::source { 'puppetlabs-pc1':
      location => 'http://apt.puppetlabs.com',
      repos    => 'PC1',
      key      => {
        'id'     => '6F6B15509CF8E59E6E469F327F438280EF8D349F',
        'server' => 'pgp.mit.edu',
      },
      notify   => Class['apt::update'],
    }
    package { 'puppet-agent':
      ensure  => present,
      require => Class['apt::update'],
    }
  }

  if $facts['osfamily'] == 'RedHat' {
    $version = $facts['operatingsystemmajrelease']
    yumrepo { 'puppetlabs-pc1':
      baseurl  => "https://yum.puppetlabs.com/el/${version}/PC1/\$basearch",
      descr    => 'Puppetlabs PC1 Repository',
      enabled  => true,
      gpgcheck => '1',
      gpgkey   => 'https://yum.puppetlabs.com/RPM-GPG-KEY-puppetlabs',
    }
    -> package { 'puppet-agent':
      ensure => present,
    }
  }

  # stagger agent runs with two per-node offsets
  $time1  = fqdn_rand(30)
  $time2  = $time1 + 30
  $minute = [ $time1, $time2 ]

  cron { 'puppet-agent':
    command => '/opt/puppetlabs/bin/puppet agent --no-daemonize --onetime --logdest syslog > /dev/null 2>&1',
    user    => 'root',
    hour    => '*',
    minute  => $minute,
  }
  -> cron { 'puppet-client':
    ensure  => 'absent',
    command => '/usr/bin/puppet agent --no-daemonize --onetime --logdest syslog > /dev/null 2>&1',
    user    => 'root',
    hour    => '*',
    minute  => $minute,
  }

  # remove the old Puppet 3 directories once the new agent is in place
  file { '/etc/puppet':
    ensure => absent,
    force  => true,
  }
  -> file { '/var/lib/puppet/ssl':
    ensure => absent,
    force  => true,
  }
}

I used a similar manifest for macOS:

class migrate::mac {
  $mac_vers = $facts['macosx_productversion_major']

  file { '/etc/puppetlabs':
    ensure => directory,
  }
  -> file { '/etc/puppetlabs/puppet':
    ensure => directory,
  }
  -> file { '/etc/puppetlabs/puppet/puppet.conf':
    ensure => present,
    source => 'puppet:///modules/migrate/puppet.conf',
  }

  package { "puppet-agent-1.8.2-1.osx${mac_vers}.dmg":
    ensure => present,
    source => "https://downloads.puppetlabs.com/mac/${mac_vers}/PC1/x86_64/puppet-agent-1.8.2-1.osx${mac_vers}.dmg",
  }

  # stagger agent runs with two per-node offsets
  $time1  = fqdn_rand(30)
  $time2  = $time1 + 30
  $minute = [ $time1, $time2 ]

  cron { 'puppet-agent':
    command => '/opt/puppetlabs/bin/puppet agent --no-daemonize --onetime --logdest syslog > /dev/null 2>&1',
    user    => 'root',
    hour    => '*',
    minute  => $minute,
  }

  # remove the old Puppet 3 directories and the gem-installed agent
  file { '/etc/puppet':
    ensure => absent,
    force  => true,
  }
  -> file { '/var/lib/puppet/ssl':
    ensure => absent,
    force  => true,
  }
  # using gem since puppet 3.8 did not have packages for Sierra
  -> package { 'puppet':
    ensure   => absent,
    provider => 'gem',
  }
}

Post Migration Experience

After migrating the agents I only ran into one piece of code that broke due to the upgrade. Somehow I had overlooked the removal of dynamic scoping in ERB templates. This piece of code was not covered by rspec tests (an area for improvement!). I relied on it to configure Logstash’s output to Elasticsearch. Under Puppet 3 the relevant piece of ERB looked like this:

output {
  if [type] == "syslog" {
    elasticsearch {
      hosts => [<%= @es_input_nodes.collect { |node| '"' + node.to_s + ':' + @elasticsearch_port.to_s + '"' }.join(',') %>]
      ssl => true
    }
  }
}

The value of es_input_nodes was pulled from the params class:

class elk::logstash (
  $syslog_port         = $elk::params::syslog_port,
  $elasticsearch_nodes = $elk::params::elasticsearch_nodes,
  $es_input_nodes      = $elk::params::es_input_nodes,
  $elasticsearch_port  = $elk::params::elasticsearch_port,
  $netflow_port        = $elk::params::netflow_port
)

The params class pulls the info from PuppetDB:

$es_input_nodes = sort(query_nodes('Class[Elk::elasticsearch] and elasticsearchrole=data or elasticsearchrole=client'))

The removal of dynamic scoping in templates meant the template was putting empty values in the Logstash config and breaking the service. To fix it, the variables needed to be scoped properly in the template; they now look like this:

output {
  if [type] == "syslog" {
    elasticsearch {
      hosts => [<%= scope['elk::logstash::es_input_nodes'].collect { |node| '"' + node.to_s + ':' + scope['elk::logstash::elasticsearch_port'].to_s + '"' }.join(',') %>]
      ssl => true
    }
  }
}

Remaining Work

Prior to the migration I relied on stephenrjohnson/puppetmodule to manage the Puppet agent on Linux and macOS. Some work has been done on Puppet 4 compatibility, but there is still more to do. I’m close to updating the agent pieces for my needs, but there is a lot of work left to add puppet master support.

Terraform AMI Maps

Up until today we had been using a map variable in Terraform to choose our Ubuntu 14 AMI based on region.

variable "ubuntu_amis" {
    description = "Mapping of Ubuntu 14.04 AMIs."
    default = {
        ap-northeast-1 = "ami-a25cffa2"
        ap-southeast-1 = "ami-967879c4"
        ap-southeast-2 = "ami-21ce8b1b"
        cn-north-1     = "ami-d44fd2ed"
        eu-central-1   = "ami-9cf9c281"
        eu-west-1      = "ami-664b0a11"
        sa-east-1      = "ami-c99518d4"
        us-east-1      = "ami-c135f3aa"
        us-gov-west-1  = "ami-91cfafb2"
        us-west-1      = "ami-bf3dccfb"
        us-west-2      = "ami-f15b5dc1"
    }
}

We would then set the AMI ID like so when creating an EC2 instance.

ami = "${lookup(var.ubuntu_amis, var.region)}"

The problem we ran into is that we now use Ubuntu 16 by default and wanted to expand the AMI map to contain its IDs as well. I quickly discovered that nested maps like the one below do not work.

 variable "ubuntu_amis" {
    description = "Mapping of Ubuntu 14.04 AMIs."
    default = {
        "ubuntu14" = {
          ap-northeast-1 = "ami-a25cffa2"
          ap-southeast-1 = "ami-967879c4"
          ap-southeast-2 = "ami-21ce8b1b"
          cn-north-1     = "ami-d44fd2ed"
          eu-central-1   = "ami-9cf9c281"
          eu-west-1      = "ami-664b0a11"
          sa-east-1      = "ami-c99518d4"
          us-east-1      = "ami-c135f3aa"
          us-gov-west-1  = "ami-91cfafb2"
          us-west-1      = "ami-bf3dccfb"
          us-west-2      = "ami-f15b5dc1"
      }
      "ubuntu16" = {
          ap-northeast-1 = "ami-a25cffa2"...

I also tried the solution from this old GitHub issue, but it is no longer valid since the concat function only accepts lists now. In the end I landed on using a variable for the OS version and setting it like this.

variable "os-version" {
    description = "Whether to use ubuntu 14 or ubuntu 16"
    default     = "ubuntu16"
}
variable "ubuntu_amis" {
    description = "Mapping of Ubuntu AMIs by OS version and region."
    default = {
          ubuntu14.ap-northeast-1 = "ami-a25cffa2"
          ubuntu14.ap-southeast-1 = "ami-967879c4"
          ubuntu14.ap-southeast-2 = "ami-21ce8b1b"
          ubuntu14.cn-north-1     = "ami-d44fd2ed"
          ubuntu14.eu-central-1   = "ami-9cf9c281"
          ubuntu14.eu-west-1      = "ami-664b0a11"
          ubuntu14.sa-east-1      = "ami-c99518d4"
          ubuntu14.us-east-1      = "ami-c135f3aa"
          ubuntu14.us-gov-west-1  = "ami-91cfafb2"
          ubuntu14.us-west-1      = "ami-bf3dccfb"
          ubuntu14.us-west-2      = "ami-f15b5dc1"
          ubuntu16.ap-northeast-1 = "ami-a68e3ec7"
          ubuntu16.ap-southeast-1 = "ami-5b7ed338"
          ubuntu16.ap-southeast-2 = "ami-e2112881"
          ubuntu16.cn-north-1     = "ami-593feb34"
          ubuntu16.eu-central-1   = "ami-df02c5b0"
          ubuntu16.eu-west-1      = "ami-be376ecd"
          ubuntu16.sa-east-1      = "ami-8f34aae3"
          ubuntu16.us-east-1      = "ami-2808313f"
          ubuntu16.us-gov-west-1  = "ami-19d56d78"
          ubuntu16.us-west-1      = "ami-900255f0"
          ubuntu16.us-west-2      = "ami-7df25b1d"
    }
}

Then use a lookup like this when creating an instance:

ami = "${lookup(var.ubuntu_amis, "${var.os-version}.${var.region}")}"

Hopefully this helps someone out, and if you know of a better way to accomplish this, please share!

Adventures in Ruby

I’m learning Ruby. Finding time to work toward this goal is proving difficult, but I’m forcing myself to use Ruby wherever possible to aid my learning. I’ll be putting some of my lame code on here to chronicle my progress and hopefully get some feedback on how I can improve things. I recently came across a good opportunity when I needed to generate a list of nodes to use with the Puppet catalog preview tool.

I wanted to get a full picture of my infrastructure and represent all our nodes in the report output without having to manually type a large node list. Puppet already has all my node names, so I just needed to extract them. My first step was to query the nodes endpoint in PuppetDB for all nodes and pipe the output into a file.

curl http://puppetdb.example.com:8080/v3/nodes/ > nodesout.txt

The output of this is JSON with an array of hashes.

[{
  "name" : "server2.example.com",
  "deactivated" : null,
  "catalog_timestamp" : "2016-11-28T19:28:14.828Z",
  "facts_timestamp" : "2016-11-28T19:28:12.112Z",
  "report_timestamp" : "2016-11-28T19:28:13.443Z"
},{
  "name" : "server.example.com",
  "deactivated" : null,
  "catalog_timestamp" : "2016-11-28T19:28:14.828Z",
  "facts_timestamp" : "2016-11-28T19:28:12.112Z",
  "report_timestamp" : "2016-11-28T19:28:13.443Z"
}]

I only wanted the name of each node, so I needed to parse that out. It was a great opportunity to open pry and get some practice!

  • load the json library so I can parse the file
[1] pry(main)> require 'json'
=> true
  • read in the file
[2] pry(main)> file = File.read('nodesout.txt')
  • parse the file into a variable
pry(main)> data_hash = JSON.parse(file)
=> [{"name"=>"server.example.com",
"deactivated"=>nil,
"catalog_timestamp"=>"2016-11-29T00:37:03.202Z",
"facts_timestamp"=>"2016-11-29T00:37:00.972Z",
"report_timestamp"=>"2016-11-29T00:36:38.679Z"},
{"name"=>"server2.example.com",
"deactivated"=>nil,
"catalog_timestamp"=>"2016-11-29T00:37:03.202Z",
"facts_timestamp"=>"2016-11-29T00:37:00.972Z",
"report_timestamp"=>"2016-11-29T00:36:38.679Z"}]
[4] pry(main)> data_hash.class
=> Array
  • set up a method to iterate over the data and write each hostname to a new line in a file
[5] pry(main)> def list_nodes(input)
[5] pry(main)*   File.open('nodes_out.txt', 'a') do |f|
[5] pry(main)*     input.each do |i|
[5] pry(main)*       f.puts i["name"]
[5] pry(main)*     end
[5] pry(main)*   end
[5] pry(main)* end
=> :list_nodes
  • run the method against my data_hash
[6] pry(main)> list_nodes(data_hash)
[7] pry(main)> exit

I now have the list of nodes I was looking for!

$ cat nodes_out.txt
server.example.com
server2.example.com

This accomplished what I needed and saved me a lot of time. I’m certain there’s a cleaner way to do this (like putting the PuppetDB query directly in the Ruby code), but that’s what learning is for!
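For the curious, a rough sketch of what that cleaner version might look like, querying PuppetDB directly from Ruby instead of going through curl and an intermediate file. The host and endpoint are the same examples used above; treat the method names as hypothetical.

```ruby
#!/usr/bin/env ruby
# Sketch: fetch the node list straight from PuppetDB's v3 nodes
# endpoint and print one hostname per line.
require 'json'
require 'net/http'

# Pull the "name" field out of each node hash returned by PuppetDB
def node_names(nodes)
  nodes.map { |node| node['name'] }
end

# Query the nodes endpoint and return the parsed array of node hashes
def fetch_nodes(host = 'puppetdb.example.com', port = 8080)
  JSON.parse(Net::HTTP.get(URI("http://#{host}:#{port}/v3/nodes/")))
end

# puts node_names(fetch_nodes)   # one node name per line
```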