Speeding up RSpec Tests in Bamboo

Now that roles and profiles are in my control repo, my RSpec tests are taking longer than ever. As of this writing the control repo contains 938 tests, and I’m still a long way from 100% coverage. This really slows down the feedback loop. When working locally I often run RSpec against a specific spec file rather than the whole suite, but I still wanted a way to speed things up in Bamboo.

I had used parallel_tests before to run tests more quickly on my local machine, but each parallel process was overwriting the JUnit output file, leaving me with an incomplete result set at the end. I stumbled across a fix for this yesterday which I’m pretty happy with. My original .rspec file had the JUnit output file name hard coded.

--format documentation
--color
--format RspecJunitFormatter
--out results.xml

With the following change, each parallel process writes to its own JUnit output file.

--format documentation
--color
--format RspecJunitFormatter
--out results<%= ENV['TEST_ENV_NUMBER'] %>.xml

Bamboo was already parsing the results using a wildcard, so no change was needed there (see this post for details on my Bamboo setup). The last step was to change the rake task Bamboo runs from rake spec to rake parallel_spec. This cut the average test time from 24 minutes down to 8 minutes, and faster feedback is always a plus!
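For the curious, the renaming works because parallel_tests exports TEST_ENV_NUMBER to each worker: the first worker gets an empty string and the rest get 2, 3, and so on, so the ERB tag in .rspec expands to a unique file per process. A quick shell sketch of the resulting file names:

```shell
#!/bin/sh
# Sketch of how parallel_tests' TEST_ENV_NUMBER yields unique JUnit files.
# The first worker gets TEST_ENV_NUMBER="" and the rest get "2", "3", ...
for n in "" 2 3 4; do
  TEST_ENV_NUMBER="$n"
  echo "results${TEST_ENV_NUMBER}.xml"
done
```

The empty first value is why the first worker still writes plain results.xml, which keeps the single-process behavior unchanged.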


Tackling Tech Debt in Puppet

I spent some time tackling technical debt in our Puppet code this week. The biggest outstanding item was implementing eyaml to protect secrets in Hiera. I’d been encouraging developers to contribute to the Puppet code base for some time, but they were locked out of the control repo because of the secrets kept in Hiera. This put a big damper on collaboration, since Hiera is the data engine for our roles and profiles. Separate git repos were also used for the profile and role modules because of this workflow.

Hiera-eyaml to the rescue! Props to voxpupuli, as this was dead simple to implement. Once the secrets were encrypted I tidied up a few more things before collaboration could rain down!

  • created a new branch on the existing control repo
  • moved the roles and profiles modules into the site directory of the control repo
  • created an environment.conf file to add the site dir to the module path
  • tested an r10k run on the new environment
  • spent some time fighting RSpec, as you do
  • merged into production
  • created a new git repo for the control module to remove commit history containing secrets
  • opened up access to the development team
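The environment.conf change from the list above is a one-liner. A minimal sketch, assuming the standard control-repo layout where site/ holds the roles and profiles modules:

```
# environment.conf at the root of the control repo
modulepath = site:modules:$basemodulepath
```

This keeps Forge modules in modules/ (managed by r10k via the Puppetfile) while internal code lives in site/.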

We’ve now got a control repo with encrypted secrets open to contributions from across the org. I’m also enjoying the simplified workflow with environments now that hieradata, roles, and profiles are all in a single git repo.

Update to lita-activedirectory

I updated our Active Directory lita plugin today with support for querying the members of a given group. See https://github.com/knuedge/lita-activedirectory. It still needs some work to properly surface errors when a user or group doesn’t actually exist in the directory; right now it returns nothing rather than a helpful error. It works splendidly with legit users and groups, however.

ChatOps ftw!

Automated Puppet Tests with Bamboo

rnelson0 is walking through automated Puppet testing with Jenkins on his blog. I thought I’d highlight how you can use a similar workflow in Atlassian’s Bamboo application. This assumes you already have a working Bamboo setup and are familiar with the general process for testing Puppet modules with rspec.

Create a new plan

The first step is to set up a new plan to use for the testing. Click “Create” and then “Create a new plan” in the top menu bar.

Bamboo organizes related jobs, builds, etc. into projects. On the next screen either create a new Puppet project or select an existing project if you’ve already set that up.

Fill in the details and select the repository you’d like to start testing. Bamboo can read from a large number of version control systems; I happen to use Bitbucket, so this is simple. Once you’re happy with the selections click “Configure plan”.


At the next screen we set up our tasks. The first task, Source Code Checkout, is added by default and checks out the repo configured in the previous step. I like to break things down into small script tasks so they are easier to troubleshoot and to duplicate between jobs.

Click “Add task” and select “Script”.


The first script task sets up the Ruby environment. This presumes you already have Bamboo build agents up and running and that rvm is installed on them. The script below is what I use; it performs some checks to ensure Ruby is properly set up.


#!/bin/bash

# Re-exec under bash if the build agent launched this script with another shell
if [ "$(ps -p "$$" -o comm=)" != "bash" ]; then
  /bin/bash "$0" "$@"
  exit "$?"
fi

source /etc/profile.d/rvm.sh
ruby="ruby-2.1.8"
install_count=$(rvm list | grep "$ruby" | wc -l)

if [ "$install_count" -lt 1 ]; then
  rvm install "$ruby"
fi
rvm use "$ruby"
rvm user gemsets

Now click “Add task” and add another script task. We will use this step to create a new gemset and install the gems listed in the Gemfile using bundler. If you use an environment variable to specify the version of Puppet to install, enter it in the environment variables field.

#!/bin/bash

# Re-exec under bash if needed (same guard as the previous task)
if [ "$(ps -p "$$" -o comm=)" != "bash" ]; then
  /bin/bash "$0" "$@"
  exit "$?"
fi

source /etc/profile.d/rvm.sh
ruby="ruby-2.1.8"
gemset="puppet-4.8.0-validate"
rvm use "$ruby"
rvm user gemsets
rvm gemset create "$gemset"
rvm gemset use "$gemset"
gem install bundler
rm -f Gemfile.lock
bundle install
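The bundle install step reads the module’s Gemfile. Mine isn’t shown here, but a hedged sketch along these lines works; PUPPET_GEM_VERSION is an assumed variable name, so match it to whatever you set in the task’s environment variables field:

```ruby
# Gemfile (sketch) - the usual Puppet testing stack;
# PUPPET_GEM_VERSION is a hypothetical variable set in the Bamboo task
source 'https://rubygems.org'

gem 'puppet', ENV['PUPPET_GEM_VERSION'] || '~> 4.8.0'
gem 'puppetlabs_spec_helper'
gem 'rspec-puppet'
gem 'rspec_junit_formatter'
```

Pinning Puppet through an environment variable lets you clone the same job to test against multiple Puppet versions.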

Now let’s add another script task and use it to run the actual tests. At the end of the run it deletes the gemset to ensure a clean environment for the next build. You can also set other options here using environment variables, such as STRICT_VARIABLES=no.

#!/bin/bash

# Re-exec under bash if needed (same guard as the previous tasks)
if [ "$(ps -p "$$" -o comm=)" != "bash" ]; then
  /bin/bash "$0" "$@"
  exit "$?"
fi

source /etc/profile.d/rvm.sh
ruby="ruby-2.1.8"
gemset="puppet-4.8.0-validate"
rvm use "$ruby"
rvm user gemsets
rvm gemset use "$gemset"

bundle exec rake validate
bundle exec rake spec
rvm gemset delete --force "$gemset"
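For reference, the validate and spec tasks invoked above are the ones shipped with puppetlabs_spec_helper; a module’s Rakefile only needs to require them. A minimal sketch, assuming that gem is in the Gemfile:

```ruby
# Rakefile (sketch) - provides rake validate, rake lint, rake spec, etc.
require 'puppetlabs_spec_helper/rake_tasks'
```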

We’ve now covered the basics needed to get started and can click “Create” at the bottom to finalize the plan. After doing so we’ll be looking at the job configuration page. The script tasks we just created live in the “Default Job”.

That name isn’t very clear, so let’s update it. Click “Default Job” and then select the “Job details” tab. Here we’ll enter a more descriptive name such as “Puppet 4.8 rspec” and click save. A descriptive name is handy because you can clone jobs across plans to avoid repeating the script setup.

Repository Triggers

If you’re using the built-in Bitbucket Server integration like my setup, Bamboo will automatically run the plan whenever a new commit is pushed to the repo. You can customize the trigger to use polling, scheduling, or other options as well. Simply select the “Triggers” tab under plan configuration and set it appropriately.

Tracking Results

If you ran this plan now you would get a pass or fail in Bamboo, but you would have to read the job logs to see the actual details of the results. We can use JUnit output to get a better view of the tests in Bamboo. To set this up, create a .rspec file in the root of the Puppet module you’re testing with the following content:

--format RspecJunitFormatter
--out results.xml

This writes the test results in JUnit format to results.xml. You’ll also want to add this file to .gitignore. Now we can add a step to our plan to parse the file. Go back to the plan configuration and select the stage we set up previously. Click add task and select “JUnit Parser.”

Configure the JUnit parser so it finds our results file.

Click save and run the plan again. You’ll now see your test results nicely formatted in the plan history. Here’s an example of what that looks like for failed tests.

You don’t need to redo this work for each repo. When setting up a plan for a new module, start by removing the default stage. Then click “Add job” and select “Clone an existing job”.

You can then select which plan to clone from.

It takes a bit more setup than using Travis CI, but it’s not difficult to get RSpec testing up and running in Bamboo.


Finishing the Puppet 4 Migration

Two days ago I finished our migration to Puppet 4. Overall I’d say the process was pretty painless. The gist of what I did:

  • start running rspec tests against Puppet 4
  • fix issues found in tests
  • run the catalog preview tool and fix any issues found
  • turn on the future parser on the existing master
  • turn off stringify facts
  • create new master and PuppetDB server
  • migrate agents to new master

Thankfully our code wasn’t too difficult to update and most of the forge modules we use had also been updated.

Creating the New Master

I did not want to upgrade our existing master, for a variety of reasons I won’t get into here. Instead I took the opportunity to move it from an old VM to EC2, with PuppetDB backed by RDS. I have to give props to the Puppet team for greatly simplifying the setup of a new master in Puppet 4. Setting up puppetserver is significantly easier than the old Passenger-based setup.

Migrating the Agents

Puppet provides a module to migrate agents to a new master. It copies the existing SSL certs to the new directory and upgrades the agent. I was not able to use it since I was not migrating certs to the new master (I needed to add new DNS alt names). The consequence was needing a way to upgrade and migrate the agents in an automated fashion. I accomplished this entirely with Puppet! The process was:

  • pre-create /etc/puppetlabs/puppet
  • drop a new config into /etc/puppetlabs/puppet/puppet.conf with the new master name
  • setup the puppetlabs puppet collection repo
  • install the new puppet-agent package
  • update cron job for new puppet paths (the fact that I already ran puppet using cron made this simple)
  • purge the old /etc/puppet directory

A Puppet run on the old master would prep things using the steps above. Then, when the cron job kicked in, the agent would run against the new master and get a new cert issued. Overall this worked really well, and we only had to touch 2 machines by hand.

Migrate Puppet Manifest

This is the manifest I used to migrate our Linux machines. It’s available on GitHub at https://github.com/dschaaff/puppet-migrate.

class migrate {

  file { '/etc/puppetlabs':
    ensure => directory,
  }
  -> file { '/etc/puppetlabs/puppet':
    ensure => directory,
  }
  -> file { '/etc/puppetlabs/puppet/puppet.conf':
    ensure => present,
    source => 'puppet:///modules/migrate/puppet.conf',
  }

  if $facts['osfamily'] == 'Debian' {
    include apt
    apt::source { 'puppetlabs-pc1':
      location => 'http://apt.puppetlabs.com',
      repos    => 'PC1',
      key      => {
        'id'     => '6F6B15509CF8E59E6E469F327F438280EF8D349F',
        'server' => 'pgp.mit.edu',
      },
      notify   => Class['apt::update'],
    }
    package { 'puppet-agent':
      ensure  => present,
      require => Class['apt::update'],
    }
  }

  if $facts['osfamily'] == 'RedHat' {
    $version = $facts['operatingsystemmajrelease']
    yumrepo { 'puppetlabs-pc1':
      baseurl  => "https://yum.puppetlabs.com/el/${version}/PC1/\$basearch",
      descr    => 'Puppetlabs PC1 Repository',
      enabled  => true,
      gpgcheck => '1',
      gpgkey   => 'https://yum.puppetlabs.com/RPM-GPG-KEY-puppetlabs',
    }
    -> package { 'puppet-agent':
      ensure => present,
    }
  }

  # Stagger runs: two per hour at a host-specific offset
  $time1  = fqdn_rand(30)
  $time2  = $time1 + 30
  $minute = [$time1, $time2]

  cron { 'puppet-agent':
    command => '/opt/puppetlabs/bin/puppet agent --no-daemonize --onetime --logdest syslog > /dev/null 2>&1',
    user    => 'root',
    hour    => '*',
    minute  => $minute,
  }
  -> cron { 'puppet-client':
    ensure  => 'absent',
    command => '/usr/bin/puppet agent --no-daemonize --onetime --logdest syslog > /dev/null 2>&1',
    user    => 'root',
    hour    => '*',
    minute  => $minute,
  }

  # absent + force removes the whole tree (the file type has no 'purged' value)
  file { '/etc/puppet':
    ensure => absent,
    force  => true,
  }
  -> file { '/var/lib/puppet/ssl':
    ensure => absent,
    force  => true,
  }
}
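The cron scheduling in the manifest deserves a note: fqdn_rand(30) hashes the node name to a stable minute between 0 and 29, and adding 30 gives a second run each hour, so agents check in twice an hour without stampeding the master. A shell sketch of the idea (cksum is a hypothetical stand-in for Puppet’s hashing, not the real fqdn_rand algorithm):

```shell
#!/bin/sh
# Model of the fqdn_rand(30) splay: hash the FQDN to a stable minute
# in 0..29, then offset by 30 for the second hourly run.
# cksum stands in for Puppet's internal hash (illustrative only).
fqdn="node01.example.com"
t1=$(( $(printf '%s' "$fqdn" | cksum | cut -d' ' -f1) % 30 ))
t2=$((t1 + 30))
echo "minute = [$t1, $t2]"
```

Because the hash is derived from the FQDN, every run on a given host computes the same minutes, so the cron resource stays stable across Puppet runs.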

I used a similar manifest for macOS:

class migrate::mac {
  $mac_vers = $facts['macosx_productversion_major']

  file { '/etc/puppetlabs':
    ensure => directory,
  }
  -> file { '/etc/puppetlabs/puppet':
    ensure => directory,
  }
  -> file { '/etc/puppetlabs/puppet/puppet.conf':
    ensure => present,
    source => 'puppet:///modules/migrate/puppet.conf',
  }

  package { "puppet-agent-1.8.2-1.osx${mac_vers}.dmg":
    ensure => present,
    source => "https://downloads.puppetlabs.com/mac/${mac_vers}/PC1/x86_64/puppet-agent-1.8.2-1.osx${mac_vers}.dmg",
  }

  $time1  = fqdn_rand(30)
  $time2  = $time1 + 30
  $minute = [$time1, $time2]

  cron { 'puppet-agent':
    command => '/opt/puppetlabs/bin/puppet agent --no-daemonize --onetime --logdest syslog > /dev/null 2>&1',
    user    => 'root',
    hour    => '*',
    minute  => $minute,
  }

  file { '/etc/puppet':
    ensure => absent,
    force  => true,
  }
  -> file { '/var/lib/puppet/ssl':
    ensure => absent,
    force  => true,
  }
  # Remove the old gem-installed agent; Puppet 3.8 had no packages for Sierra
  -> package { 'puppet':
    ensure   => absent,
    provider => 'gem',
  }
}

Post Migration Experience

After migrating the agents I only ran into one piece of code that broke due to the upgrade. Somehow I had overlooked the removal of dynamic scoping in ERB templates. This code was not covered by RSpec tests (an area for improvement!). I relied on it to configure Logstash’s output to Elasticsearch. Under Puppet 3 the relevant piece of ERB looked like this:

output {
  if [type] == "syslog" {
    elasticsearch {
      hosts => [<%= @es_input_nodes.collect { |node| '"' + node.to_s + ':' + @elasticsearch_port.to_s + '"' }.join(',') %>]
      ssl => true
    }
  }
}

The value of es_input_nodes was pulled from the params class:

class elk::logstash (
  $syslog_port         = $elk::params::syslog_port,
  $elasticsearch_nodes = $elk::params::elasticsearch_nodes,
  $es_input_nodes      = $elk::params::es_input_nodes,
  $elasticsearch_port  = $elk::params::elasticsearch_port,
  $netflow_port        = $elk::params::netflow_port
)

The params class pulls the info from PuppetDB.

$es_input_nodes = sort(query_nodes('Class[Elk::elasticsearch] and elasticsearchrole=data or elasticsearchrole=client'))

With dynamic scoping removed, the template was putting empty values in the Logstash config and breaking the service. The fix was to scope the variables properly in the template; they now look like this:

output {
  if [type] == "syslog" {
    elasticsearch {
      hosts => [<%= scope['elk::logstash::es_input_nodes'].collect { |node| '"' + node.to_s + ':' + scope['elk::logstash::elasticsearch_port'].to_s + '"' }.join(',') %>]
      ssl => true
    }
  }
}

Remaining Work

Prior to the migration I relied on stephenrjohnson/puppetmodule to manage the Puppet agent on Linux and macOS. Some work has been done on Puppet 4 compatibility, but there is still more to do. I’m close to updating the agent pieces for my needs, but there is a lot of work left to add puppet master support.