-
Moving to Vox Pupuli
Today I decided to move two of my Puppet modules, danzilio/letsencrypt and danzilio/virtualbox, to Vox Pupuli. I have not been able to provide the attention that these modules need and deserve. I’ve let a number of important pull requests go stale, and I haven’t actually used these modules myself in quite a while. I know a number of users depend on these modules in their environments, and the right thing to do was to give them up.
If you’re not familiar with Vox Pupuli, it is a community of module authors and Puppet users who collectively maintain a growing number of Puppet modules. Vox Pupuli is a group of very talented and passionate folks, and I know these modules are in very good hands with them. If you have a PR open on one of these modules, please feel free to join the Vox Pupuli folks in `#voxpupuli` on Freenode or Slack. I will do my best to help ease the transition and shepherd existing PRs as best I can.
-
Puppet Design Patterns: The Factory Pattern
Yesterday, at PuppetCamp Boston, I gave a talk on Puppet Design Patterns. Last week, I wrote a blog post about the Strategy Pattern. In this post I’ll talk about the Factory Pattern.
What’s in a name?
You may be familiar with the Factory Pattern by a different name: the `create_resources` pattern. I’ve decided to call this the Factory Pattern because the `create_resources` function is no longer the only way to implement it. With the new language features introduced in Puppet 4, we can now use iteration natively in the DSL instead of relying on the `create_resources` function.

I chose the name “Factory” because this pattern closely aligns with the GoF Factory Method and Abstract Factory patterns. Others have used the Factory Pattern terminology to refer generically to an object that instantiates other objects based on some input or message. While Puppet doesn’t have objects (in the traditional sense), it does have resources. A Puppet class can act as a resource factory, creating other resources based on input from its interface. This is becoming an increasingly common pattern.
The Factory Pattern in Action
The Factory Pattern gives your module a single point of entry. The entry point can use the information passed to it to determine which resources to create. Let’s take a look at an example using the `create_resources` function.

```puppet
class filefactory (
  $files = {},
) {
  validate_hash($files)

  unless empty($files) {
    create_resources('file', $files)
  }
}
```
The example above is contrived. The `filefactory` class does nothing but create other resources. In reality, your module would likely have other responsibilities in addition to creating resources. Here, we accept a single parameter named `$files`, and we expect the data passed to `$files` to be a hash. We check to make sure the `$files` variable is not empty, and then we pass that hash to the `create_resources` function, telling it to create resources of type `file`. Let’s take a look at what this would look like in Puppet 4.

```puppet
class filefactory (
  Hash $files,
) {
  $files.each |$name, $resource| {
    file { $name:
      * => $resource,
    }
  }
}
```
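Either version of the class could be fed its data from Hiera. The following is a hypothetical sketch of what that data might look like; the paths and attribute values are purely illustrative:

```yaml
# Hypothetical Hiera data for the filefactory example above.
# Each top-level key becomes a resource title; each nested hash
# becomes that resource's attributes.
filefactory::files:
  '/etc/motd':
    ensure: 'file'
    content: "Managed by Puppet\n"
  '/srv/app':
    ensure: 'directory'
    mode: '0755'
```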
In the Puppet 4 example, we can use iteration inside the DSL instead of delegating it to the `create_resources` function. We still accept a `$files` parameter, but we don’t provide a default here. We also rely on the type system to ensure that the data passed to `$files` is a hash, instead of using the `validate_hash` function. We iterate over the `$files` hash, unpacking each key as `$name` and each value as `$resource`. We then create a file, passing `$name` as the resource title, and use the splat operator to pass the `$resource` hash. The splat operator passes the hash’s keys to the resource as parameter names, and the hash’s values as the parameter values.

In both examples, the `filefactory` class is the resource factory. It creates new resources based on the data passed to the factory’s interface. Let’s take a look at a real-world example of this.

The Factory Pattern in ghoneycutt/nrpe
The Factory Pattern allows you to create resources inside your entry point. We see this in a number of modules, but let’s take a look at Garrett Honeycutt’s `nrpe` module here. This module consists of one class and one defined type. The class installs and manages the `nrpe` service, while the defined type allows the user to configure `nrpe` plugins and checks. Garrett uses the Factory Pattern to allow his users to simply `include nrpe` and let the `nrpe` class create the `nrpe::plugin` resources based on Hiera data. Let’s take a look at the code (I’ve truncated it to focus on the parts we care about).

```puppet
class nrpe (
  ...
  $plugins             = undef,
  $hiera_merge_plugins = false,
  ...
) {
  ...
  if is_string($hiera_merge_plugins) {
    $hiera_merge_plugins_bool = str2bool($hiera_merge_plugins)
  } else {
    $hiera_merge_plugins_bool = $hiera_merge_plugins
  }
  ...
  if $plugins != undef {
    if $hiera_merge_plugins_bool {
      $plugins_real = hiera_hash(nrpe::plugins)
    } else {
      $plugins_real = $plugins
    }
    validate_hash($plugins_real)
    create_resources('nrpe::plugin', $plugins_real)
  }
}
```
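Since the class looks its data up under the `nrpe::plugins` key, a user could drive plugin creation entirely from Hiera. The fragment below is a hypothetical sketch only; the plugin title and parameter name are illustrative placeholders, and the module’s own documentation defines the real `nrpe::plugin` interface:

```yaml
# Hypothetical Hiera data; each key becomes an nrpe::plugin resource title.
# 'check_command' is an illustrative parameter name, not the module's API.
nrpe::plugins:
  'check_root_disk':
    check_command: '/usr/lib/nagios/plugins/check_disk -w 20% -c 10% -p /'
```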
Here, as you can see, Garrett’s `nrpe` class accepts the `$plugins` and `$hiera_merge_plugins` parameters. If the `$plugins` parameter is not `undef`, the class resolves the plugin data (using a `hiera_hash` merge lookup when the user has enabled it via `$hiera_merge_plugins`) and passes the resulting hash to the `create_resources` function. This use of the Factory Pattern provides a safe and simple interface to this module.

A Note on `create_resources`
Many people in the community consider use of the `create_resources` function an antipattern. This is mainly because it was created as a workaround for the lack of iteration support in older versions of Puppet, and because it was often misused in the past. I personally think that `create_resources` is perfectly fine when used judiciously. Now that we have powerful iteration support in Puppet 4, hopefully the Factory Pattern can begin to be seen as a more ‘legitimate’ pattern without the baggage of `create_resources` weighing it down.

A Note on the Name
I don’t claim to be an authority on names. I think this pattern closely resembles what others refer to as a Factory. Hopefully this terminology will catch on, but I don’t represent the community as a whole. This post is meant to start the discussion :)
-
Puppet Design Patterns: The Strategy Pattern
I’m currently in the process of putting together a talk about Design Patterns for Puppet, so I figured I’d blog a bit about it along the way. I’m pretty passionate about patterns for a number of reasons. I think they really help you understand the dynamics of the language you’re working in. They also help you understand design decisions involved in implementing a lasting solution to a problem.
Design patterns are frequently used solutions to common problems. They tend to emerge naturally, and are usually observed rather than invented. The seminal work on design patterns was Design Patterns: Elements of Reusable Object-Oriented Software, commonly referred to as the Gang of Four (GoF) book. If you’re interested in understanding more about design patterns in general, I highly recommend you pick up a copy of the GoF book as well as Design Patterns in Ruby by Russ Olsen.
Not all of the GoF patterns can be directly applied to Puppet, and most of the patterns that do apply need a little bit of massaging to get there. This is mostly because Puppet is not an object-oriented programming language. That being said, I think there are definitely some lessons to be learned from the GoF when it comes to Puppet. There’s also great value in simply identifying a pattern and giving it a name.
The Strategy Pattern
The Strategy pattern is used when you have a part of an algorithm that must vary under certain conditions. The Strategy pattern uses composition to achieve that variation. The GoF describes this as “pull[ing] the algorithm out into a separate object.”
There are two types of classes in the Strategy pattern: the strategy and the context. The strategy classes are parts of the code that need to vary; they are broken out into separate classes and (ideally) implement a common interface. The context class uses the strategy classes to achieve some complex behavior while abstracting the implementation from the user.
We often see the Strategy pattern used when trying to make our modules work across various platforms. Since Debian-based and RedHat-based Linux distributions use different package managers, we must often vary our repository management logic to accommodate the differences in configuration semantics and available primitives. Let’s take a look at the `puppetlabs/rabbitmq` module for a real-world example of the Strategy pattern.

Strategy in Action: puppetlabs/rabbitmq
In the `rabbitmq` class there’s a `manage_repos` parameter to enable or disable management of the package repository. The `rabbitmq` module supports `yum`- and `apt`-based distributions, and it breaks the logic to configure those package managers into two separate classes: `rabbitmq::repo::rhel` and `rabbitmq::repo::apt`. These are the strategy classes. Let’s take a look at how they are used in the `rabbitmq` class.

```puppet
if $manage_repos != false {
  case $::osfamily {
    'RedHat', 'SUSE': {
      include '::rabbitmq::repo::rhel'
      $package_require = undef
    }
    'Debian': {
      class { '::rabbitmq::repo::apt':
        key_source  => $package_gpg_key,
        key_content => $key_content,
      }
      $package_require = Class['apt::update']
    }
    default: {
      $package_require = undef
    }
  }
} else {
  $package_require = undef
}
```
As you can see, this code looks at the `osfamily` fact to determine which `rabbitmq::repo` class to include. For `RedHat`- and `SUSE`-based distributions, the `rabbitmq` class includes the `rabbitmq::repo::rhel` class. For `Debian`-based distributions, it includes the `rabbitmq::repo::apt` class. The `rabbitmq` class is the user of the strategy classes; the GoF called this the context class.

The Strategy pattern here allows us to abstract away the innards of repository management based on the client’s `osfamily`. By breaking repository management out into separate classes, we’ve made the code more modular and easier to read. We’ve achieved a better separation of concerns by delegating repository management to a separate class. We’ve also made it easier and less risky to add new platforms to this module.
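Stripped of the rabbitmq specifics, the shape of the pattern can be sketched like this. This is a hypothetical outline, not code from any real module; the `mymodule` names are placeholders:

```puppet
# Context class: selects a strategy based on the osfamily fact.
class mymodule (
  Boolean $manage_repos = true,
) {
  if $manage_repos {
    case $facts['os']['family'] {
      'Debian': { include mymodule::repo::apt }
      'RedHat': { include mymodule::repo::yum }
      default:  { fail("${facts['os']['family']} is not supported") }
    }
  }
}

# Strategy classes: each encapsulates one platform's repository logic
# behind the same (empty-parameter) interface.
class mymodule::repo::apt {
  # apt repository resources would live here
}

class mymodule::repo::yum {
  # yumrepo resources would live here
}
```

Adding support for a new platform then means adding one new strategy class and one new `case` branch, without touching the existing strategies.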
-
Blog Month: Week 1 Roundup
Today marks the end of week one of Blog Month. I figured I’d take this opportunity to do a quick retrospective of the past week. Let’s take a look at this week’s progress.
- Sunday: Welcome to Blog Month!
- Monday: Changes
- Tuesday: RSpec For Ops Part 2: Diving in with rspec-puppet
- Wednesday: RSpec For Ops Part 3: Test driven design with rspec-puppet
- Thursday: RSpec For Ops Part 4: Test driven Docker
- Friday: No post
- Saturday: No post
I wrote five posts, three of which were technical in nature. To be on track I’d need to write seven posts each week, so I’m currently at ~71% of my target. I’ll need to make up for that in the coming weeks.
Observations
Blogging is hard, especially writing on technical topics. I probably spent, on average, about three hours on each technical post. That’s a lot of time writing. Although the technical posts were really rewarding, they’re also very challenging. I have a list of technical topics I want to cover this month, so I’m going to have to really ramp it up if I want to be on track to make 30 posts this month.
I’ve gotten some great feedback, but I’d love to hear more! If you’re reading my posts, please let me know how you think I’m doing! Stay tuned for more posts this coming week.
-
RSpec For Ops Part 4: Test driven Docker
This is the fourth part in a series of blog posts on RSpec for Ops. See the first post here, second post here, and third post here.
Over the course of this blog series I’ve talked about RSpec in general and TDD with `rspec-puppet`. Now I’d like to take a more platform-agnostic approach by exploring TDD with Serverspec. For this particular example, we’ll look at how to build a test-driven Docker image using Serverspec.

Serverspec
Serverspec is a framework built on top of RSpec that allows you to write tests that examine the state of a running system. Serverspec provides a number of cross-platform matchers and helpers that build on the RSpec DSL. This enables us to examine a system’s resources and ensure that the system’s state matches our expectations. Let’s take a look at a simple Serverspec test.
```ruby
describe 'a web server' do
  it 'should be installed and running' do
    expect(package('apache2')).to be_installed
    expect(process('apache2')).to be_running
    expect(port('80')).to be_listening
  end

  describe file('/etc/apache2/sites-enabled/000-default.conf') do
    it { is_expected.to be_symlink }
    its(:content) { is_expected.to match /DocumentRoot \/var\/www\/html/ }
  end
end
```
In this snippet we have an example group describing a web server. We’ve written examples that test to make sure the `apache2` package is installed, that the `apache2` process is running and listening on port 80, that the default virtual host is enabled, and that the document root is configured correctly.

Serverspec gives us several options for executing these tests. We can run them on the local system, we can use `ssh` to connect to a remote system, or we can run the commands inside a Docker container. There are many other execution backends, but these are the most common. Let’s take a look at what our `spec_helper.rb` would look like (we’re using the `docker-api` gem).

```ruby
require 'serverspec'
require 'docker'

set :backend, :docker

project_root = File.expand_path(File.join(__FILE__, '..', '..', 'docker'))

RSpec.configure do |c|
  c.before(:suite) do
    set :docker_image, Docker::Image.build_from_dir(project_root).id
  end
end
```
This `spec_helper.rb` file sets up all of our testing dependencies, including loading the `serverspec` and `docker` libraries we’ll be using in all of our tests. It also tells Serverspec that we’re using the `docker` backend, sets the `project_root` variable, and configures a `before` hook for our RSpec tests.

A note on `Dockerfile` builds: Docker caches each build step, so the image is only rebuilt when the `Dockerfile` (or its build context) changes. This allows us to run these tests frequently and with relatively low overhead. Here, we’ve configured the `before` hook to build our Docker image before running our tests. You’ll notice that we’re using the `suite` before hook, because we only want to build the Docker image once during each run of Serverspec.

In order to run these tests, we need a minimal `Dockerfile`. Remember, we wrote the tests first, so we want to run them and make sure they fail. Let’s create a minimal `Dockerfile`:
```dockerfile
FROM ubuntu:14.04
```
This is the absolute bare minimum necessary to run our tests: the `Dockerfile` only sets the parent image to `ubuntu:14.04`. Let’s run the tests and see what we get:

```
a web server
  should be installed and running (FAILED - 1)
  File "/etc/apache2/sites-enabled/000-default.conf"
    should be symlink (FAILED - 2)
    content
      should match /DocumentRoot \/var\/www\/html/ (FAILED - 3)

Failures:

  1) a web server should be installed and running
     Failure/Error: expect(package('apache2')).to be_installed
       expected Package "apache2" to be installed
     # ./spec/acceptance/apache_spec.rb:5:in `block (2 levels) in <top (required)>'

  2) a web server File "/etc/apache2/sites-enabled/000-default.conf" should be symlink
     Failure/Error: it { is_expected.to be_symlink }
       expected `File "/etc/apache2/sites-enabled/000-default.conf".symlink?` to return true, got false
     # ./spec/acceptance/apache_spec.rb:11:in `block (3 levels) in <top (required)>'

  3) a web server File "/etc/apache2/sites-enabled/000-default.conf" content should match /DocumentRoot \/var\/www\/html/
     Failure/Error: its(:content) { is_expected.to match /DocumentRoot \/var\/www\/html/ }
       expected "" to match /DocumentRoot \/var\/www\/html/
       Diff:
       @@ -1,2 +1,2 @@
       -/DocumentRoot \/var\/www\/html/
       +""
     # ./spec/acceptance/apache_spec.rb:12:in `block (3 levels) in <top (required)>'

Finished in 2.08 seconds (files took 0.34553 seconds to load)
3 examples, 3 failures

Failed examples:

rspec ./spec/acceptance/apache_spec.rb:4 # a web server should be installed and running
rspec ./spec/acceptance/apache_spec.rb:11 # a web server File "/etc/apache2/sites-enabled/000-default.conf" should be symlink
rspec ./spec/acceptance/apache_spec.rb:12 # a web server File "/etc/apache2/sites-enabled/000-default.conf" content should match /DocumentRoot \/var\/www\/html/
```
Great! We have our failing tests. Now, let’s implement those features in our `Dockerfile`. Remember, we want to write the minimum amount of code necessary to get our tests to pass.

```dockerfile
FROM ubuntu:14.04

RUN apt-get update && apt-get install -y apache2

CMD apache2ctl -D FOREGROUND
```
As you can see, we’ve edited our `Dockerfile` to install the `apache2` package and configured the web server to start with the container. Now let’s run our tests again and see how things look.

```
a web server
  should be installed and running
  File "/etc/apache2/sites-enabled/000-default.conf"
    should be symlink
    content
      should match /DocumentRoot \/var\/www\/html/

Finished in 13.85 seconds (files took 0.4131 seconds to load)
3 examples, 0 failures
```
Awesome! We have a working Dockerized web server, and it’s WEBSCALE! Now you’re ready to conquer the internet. If you’re interested in using this example for your own Docker development purposes, you can find the code on my GitHub here. Happy containerizing!
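As a footnote on the `before(:suite)` choice above: the “build exactly once per run” behavior can be illustrated in plain Ruby, independent of RSpec and Docker. This is a sketch; `ImageBuilder` is a hypothetical stand-in, and the block plays the role of the expensive `Docker::Image.build_from_dir` call:

```ruby
# Sketch: memoize an expensive build so repeated lookups reuse the result,
# mirroring what the before(:suite) hook achieves in the spec_helper.
class ImageBuilder
  def initialize(&build)
    @build = build   # the expensive operation, supplied as a block
    @image_id = nil
    @builds = 0      # how many times the build actually ran
  end

  # Returns the cached image id, running the build only on the first call.
  def image_id
    @image_id ||= begin
      @builds += 1
      @build.call
    end
  end

  attr_reader :builds
end

builder = ImageBuilder.new { "sha256:deadbeef" } # stand-in for a real build
3.times { builder.image_id }                     # three "examples" ask for the image
puts builder.builds   # => 1
puts builder.image_id # => sha256:deadbeef
```

A `before(:each)` hook, by contrast, would be like calling the build block on every access: correct, but needlessly slow when the image can't change mid-run.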