• Puppet 4 Preparedness: Testing modules with the future parser and rspec-puppet

    If you’re like me, you’re interested in making the transition to Puppet 4 as quickly and seamlessly as possible (er…at least a little easier and faster than from 2.7 to 3). Now that the Puppet 4 language is nearly complete, we can use the future parser in Puppet 3.7 to test our code to make sure it’s ready for the switch. Thanks to some recent changes in rspec-puppet, we can more easily integrate testing with the future parser into our CI workflow. I recently added this functionality to my virtualbox module on the Forge. Head on over to GitHub if you’re interested in the full example.

    First, you’ll need to use rspec-puppet version 2.0.0 or higher. As of this writing, version 2.0.0 has been tagged in the GitHub repo but has yet to be uploaded to rubygems.org, so you’ll need to pull the code directly from GitHub by adding the :git and :ref parameters to the rspec-puppet entry in your Gemfile:

    gem 'rspec-puppet',
      :git => 'https://github.com/rodjek/rspec-puppet.git',
      :ref => 'v2.0.0'
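
    While we’re in the Gemfile: the Travis CI configuration later in this post pins Puppet versions with a PUPPET_VERSION environment variable, so the Gemfile needs to read that variable as well. Here’s a sketch of what the relevant entries might look like (the fallback constraint and the extra gems are my own choices, not gospel):

    # Gemfile
    source 'https://rubygems.org'

    # Let CI pin the Puppet version; fall back to any modern release locally
    gem 'puppet', ENV['PUPPET_VERSION'] || '>= 3.0'

    gem 'puppetlabs_spec_helper'
    gem 'rspec-puppet',
      :git => 'https://github.com/rodjek/rspec-puppet.git',
      :ref => 'v2.0.0'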
    

    Next, you’ll need to tweak your spec_helper a little bit to enable the future parser:

    # spec/spec_helper.rb
    require 'puppetlabs_spec_helper/module_spec_helper'
    
    if ENV['PARSER'] == 'future'
      RSpec.configure do |c|
        c.parser = 'future'
      end
    end
    

    The block I’ve added to spec_helper.rb checks the PARSER environment variable and enables the future parser when it’s set to future. Now, to run your rspec-puppet tests with the future parser, just set that variable at the command line:

    $ PARSER='future' bundle exec rake test
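
    One note on the test task above: as far as I know it isn’t part of puppetlabs_spec_helper’s stock rake tasks (those give you spec, lint, and friends); in my Rakefile it’s just an alias. A minimal sketch:

    # Rakefile
    require 'puppetlabs_spec_helper/rake_tasks'

    # puppetlabs_spec_helper's rake_tasks give us :lint and :spec;
    # :test is my own alias that chains them together
    desc 'Run lint and spec tests'
    task :test => [:lint, :spec]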
    

    You can also do this very easily with Travis CI using the .travis.yml file:

    # .travis.yml
    ---
    language: ruby
    bundler_args: --without development
    before_install: rm Gemfile.lock || true
    sudo: false
    rvm:
      - 1.8.7
      - 1.9.3
      - 2.0.0
      - 2.1.0
    script: bundle exec rake test
    env:
      - PUPPET_VERSION="~> 2.7.0"
      - PUPPET_VERSION="~> 3.1.0"
      - PUPPET_VERSION="~> 3.2.0"
      - PUPPET_VERSION="~> 3.3.0"
      - PUPPET_VERSION="~> 3.4.0"
      - PUPPET_VERSION="~> 3.5.0"
      - PUPPET_VERSION="~> 3.6.0"
      - PUPPET_VERSION="~> 3.7.0"
    matrix:
      exclude:
      - rvm: 1.9.3
        env: PUPPET_VERSION="~> 2.7.0"
      - rvm: 2.0.0
        env: PUPPET_VERSION="~> 2.7.0"
      - rvm: 2.0.0
        env: PUPPET_VERSION="~> 3.1.0"
      - rvm: 2.1.0
        env: PUPPET_VERSION="~> 2.7.0"
      - rvm: 2.1.0
        env: PUPPET_VERSION="~> 3.1.0"
      - rvm: 2.1.0
        env: PUPPET_VERSION="~> 3.2.0"
      - rvm: 2.1.0
        env: PUPPET_VERSION="~> 3.3.0"
      - rvm: 2.1.0
        env: PUPPET_VERSION="~> 3.4.0"
      include:
      - rvm: 1.8.7
        env:
        - PUPPET_VERSION="~> 3.7.0"
        - PARSER="future"
      - rvm: 1.9.3
        env:
        - PUPPET_VERSION="~> 3.7.0"
        - PARSER="future"
      - rvm: 2.0.0
        env:
        - PUPPET_VERSION="~> 3.7.0"
        - PARSER="future"
      - rvm: 2.1.0
        env:
        - PUPPET_VERSION="~> 3.7.0"
        - PARSER="future"
    

    The easiest way I found, given my test matrix, was the include directive in .travis.yml. If you can think of a better way to do this (or any of this, really), please let me know!
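
    If you want to reproduce one of the future parser builds locally, combine both environment variables the same way Travis does (this assumes your Gemfile reads PUPPET_VERSION, as sketched earlier):

    $ PUPPET_VERSION="~> 3.7.0" PARSER='future' bundle exec rake test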


  • PuppetConf 2013: Day 2 and Beyond

    Well, PuppetConf is over. I’m writing this on the plane (with no WiFi…ahem) back to Albuquerque. It turns out that I failed to recognize the nuance between a “direct” and “nonstop” flight, so I’m taking the time to write this blog on the tarmac at LAX. Today began most disturbingly at about 6:40am with my coworker calling my room, rousing me from my deep and wondrous slumber to ask if I intended on making the flight back home. Apparently, another nuance that recently escaped me is the “weekday only” setting on my alarm. But, here I am.

    Day 2 of PuppetConf was definitely the stronger of the two in terms of content. I opted to try my hand at the Puppet Certified Professional exam that was being offered gratis by Puppet Labs. The exam was harder than I anticipated, but I passed. I’m now a PCP! (Cue the balloons and champagne.) Also, who came up with the name for this cert? PCP?

    I spent the day in the DevOps track (located in the Grand Ballroom). The first presentation after lunch was entitled “Puppet Module Reusability – What I Learned from Shipping to the Forge” by Gareth Rushgrove (UK Government Digital Service). Gareth is a top contributor to the Forge and a pretty interesting guy to talk to, and his presentation was one of the best I heard at PuppetConf. He’s put together a great tool for anybody writing Puppet code: a skeleton for the puppet module generate command. You can find it at http://github.com/garethr/puppet-module-skeleton; I’ve been using it for a couple of weeks now and highly recommend it. Gareth stressed the need for the community to become more opinionated about coding standards, and I appreciate his pythonic approach to code. He also recommended that open source module maintainers insist that contributors write tests for their code before approving a pull request. I couldn’t agree more. I’m not the best at writing tests, but I’m getting better, and it’s because I have to; there is no excuse not to test your modules anymore. He also talked about how tests should be implemented: test the logic of the module through the interface it exposes, not its implementation. We need to go beyond simply rewriting our modules in rspec and pay very close attention to the interfaces our modules expose. This is something I need to get better at, and I will definitely be using Gareth’s modules as an example of how to do things right.
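
    To make that concrete, here’s roughly what testing the interface (rather than the implementation) looks like in rspec-puppet. This is my own illustration, not from Gareth’s slides; the ntp module and its servers parameter are hypothetical:

    # spec/classes/ntp_spec.rb
    require 'spec_helper'

    describe 'ntp' do
      # Exercise the interface: the parameters users actually set
      let(:params) { { :servers => ['0.pool.ntp.org'] } }

      # Assert on the resources users depend on...
      it { should contain_service('ntpd').with_ensure('running') }
      it { should contain_file('/etc/ntp.conf').with_content(/0\.pool\.ntp\.org/) }

      # ...not on private implementation details like internal
      # helper classes or variable names
    end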

    The next presentation was “DevOps isn’t Just for WebOps: The Guerrilla’s Guide to Cultural Change” by Michael Stahnke. This was by far the best presentation I saw at PuppetConf and one of the best presentations I’ve ever seen. I highly recommend keeping an eye out for the video of this talk and sending it to everybody you know; I plan on sending it around the office. Michael talked about his experience at a large tech company and how he was able to take it from a complete and utter mess (think The Phoenix Project) to a marginally DevOps-oriented workplace. This pattern is very familiar to me: an IT shop where servers are treated as one-of-a-kind works of art and nobody takes an architectural approach to the system. Michael is now an executive at Puppet Labs, and his story couldn’t have been more compelling. He came up with five methods for changing the culture in an environment: reduce variability; stop, collaborate, and listen; shout your failures; experimentation matters; and solve causes, not symptoms. His approach makes a lot of sense and is in line with a lot of recent organizational behavior research.

    One of the things Michael really stressed was owning, and not being afraid of, failure. Failure is inevitable, and failure is still valid data: knowing what doesn’t work can be just as valuable as knowing what does. The obvious caveat is that you need to learn from your failures, which means doing post-mortems and root cause analysis for every failure. Shit happens, and if you can learn from it, you can prevent (or at least mitigate) it next time. Getting your team on board with the need to get smarter about these things will pay off very quickly. You also need to make sure that the environment tolerates failure (obviously, to an extent) because, as he explains, experimentation matters. Experimentation leads to innovation, but sometimes experiments fail. If people are too afraid to experiment, innovation will come to a grinding halt.

    I also found his point about collaboration and communication particularly powerful. DevOps is supposed to be all about Dev and Ops folks getting together and living happily ever after, right? No. Yes, DevOps is about collaboration between Dev and Ops, but it’s also about a culture of trust and communication, and both of those are two-way streets. Michael insisted that Devs need to be trusted to do some things that Ops is normally uncomfortable with, but that they need to be held accountable. For example, for every upgrade, there needs to be a Dev on call to champion the upgrade. He also insisted that Devs be included in pager duty for production servers, which gives them a better idea of how their code affects the system (and I’m using that term literally). This all boils down to breaking down silos. Silos are bad because people lose perspective: Dev and Ops are working on the same system, and the only way to truly understand a system is to take a systems approach. That approach is central to the DevOps philosophy, but not everybody thinks that way, and that’s one of the hardest things to change in a culture. The systems approach is a must for DevOps to succeed in an organization; own it, sell it, and don’t stop until everybody is thinking about the system instead of their own corner of it.

    Michael’s presentation was followed by Leo Zhadanovsky’s presentation “The Road to The White House with Puppet and AWS”. Leo worked for the DNC and ran the Obama campaign’s IT organization. This tickled my particularly bleeding heart, but regardless of your politics, the tech was incredibly cool. Head on over to http://awsofa.info and take a look at the infrastructure—you won’t be disappointed. Of particular note was the colocation in US-West, which was done in response to the threat posed by Hurricane Sandy. They managed to colocate their entire infrastructure (in a reduced capacity) in just nine hours. NINE HOURS PEOPLE. It took three months to deploy the entire US-East environment, and NINE EFFING HOURS to deploy the entire US-West environment. Mind = blown.

    Sam Bashton gave a presentation entitled “Continuously Integrated Puppet in a Dynamic Environment” which turned out to be about something completely unrelated to continuous integration; I felt a little duped. He talked about how they’ve implemented masterless Puppet in their environment. It wasn’t my cup of tea, but it was well done. I did enjoy his comparison of servers to cattle. He explained that most people treat servers like pets: they care for them tenderly, nurse them when they’re sick, and keep them around until they die. But servers should be treated like cattle: kept around only while they’re needed, and when they get sick or outlive their usefulness, taken out back and shot. I found this particularly amusing as someone who’s never set foot on a farm.

    Kris Buytaert closed with a fantastic presentation about “Monitoring in an Infrastructure as Code Age”. I love this term “Infrastructure as Code” and I promise to start incorporating it into everyday conversation. Kris was great to listen to. He boiled DevOps down to CLAMS which stood for culture, lean, automation, monitoring and metrics, and sharing. I’m going to get t-shirts made. Kris’s presentation can basically be summarized in four words: MONITOR ALL THE THINGS! He talked about making metrics central to the culture of your team (my coworker and I shared a frustrated glance at this). I would also recommend watching for this presentation on the PuppetLabs YouTube channel; I am not doing the presentation justice here.

    I’ve attempted to boil down the top talking points presented at PuppetConf into some helpful aphorisms.

    • Infrastructure as code IS code, and it must be treated as such. This means version control, code review, testing, and continuous integration. We’re all software developers now.
    • Stop writing shit modules. Write modules that are modular (ZOMG! Modular modules!) and sharable.
    • Externalizing data from your modules is good. Do it from the start and avoid refactoring later. Hiera is your friend.
    • If you aren’t testing your modules, you’re doing it wrong. There’s no excuse anymore. The tooling is there; use it.
    • Roles and profiles keep your modules modular, flexible, and sharable. They are a good idea.
    • Follow the style guide. We need to get more pythonic about the Puppet language. Use puppet-lint. Seriously, do it.
    • Puppet is a programming language. Accept it, folks. Puppet has grown more and more capable, and the sooner you realize it’s not just a DSL, the sooner you can start writing really robust modules. But Puppet also has a long way to go.
    • Remember the UNIX model. Pick one thing and do it really really well.

    I’ll close this post with some Python wisdom which is particularly applicable:

    >>> import this
    The Zen of Python, by Tim Peters
    
    Beautiful is better than ugly.
    Explicit is better than implicit.
    Simple is better than complex.
    Complex is better than complicated.
    Flat is better than nested.
    Sparse is better than dense.
    Readability counts.
    Special cases aren't special enough to break the rules.
    Although practicality beats purity.
    Errors should never pass silently.
    Unless explicitly silenced.
    In the face of ambiguity, refuse the temptation to guess.
    There should be one-- and preferably only one --obvious way to do it.
    Although that way may not be obvious at first unless you're Dutch.
    Now is better than never.
    Although never is often better than *right* now.
    If the implementation is hard to explain, it's a bad idea.
    If the implementation is easy to explain, it may be a good idea.
    Namespaces are one honking great idea -- let's do more of those!
    

  • PuppetConf 2013: Day 1, Part 2

    Will Farrington’s presentation “Puppet at GitHub” was fantastic. It was really interesting to see how a DevOps giant does things. Needless to say, we’re totally intrigued by the idea of ChatOps. We’re going to start playing with Hubot. Will talked about how they avoid having “hand crafted, artisanal, free range servers” by automating everything from the start. One command to Hubot boots the server using IPMI into memtest for an hour, then into a stress testing regimen for a day, and then into provisioning. We really like the sound of this. Will is a fantastic presenter with great stage presence. It’s incredibly clear that the work GitHub is doing is paving the way for innovation in DevOps.

    The last session of the day was “Building Data-Driven Infrastructure with Puppet” by James Fryman (also of GitHub). James’s presentation was incredibly well done; he made a great case for treating Puppet, and operations in general, like software development. I think he may have coined the term BeerOps right in front of us (BeerOps is all about spending less time working and more time drinking beer). He spoke very eloquently on the need to provide hooks and interfaces in your Puppet code that allow for more intelligent automation. Once you’ve included these interfaces, it’s significantly easier to take your automation to the next level and allow your systems to respond to events in real time. By making machine-readable inputs and outputs, we can configure our systems to respond to any kind of event however we choose. It was a fantastic talk and I highly recommend watching it when it’s available.


  • PuppetConf 2013: Day 1

    Okay, so I guess a better title for this post is “PuppetConf 2013: Day 0 and the first half of Day 1,” but we’re going for concise here. Yesterday we finished up the Advanced Puppet class; see my post PuppetConf 2013: Advanced Puppet Training for more info on that. My feelings didn’t change much, although we did manage to get through all the topics (with varying degrees of coverage); we skipped the capstone project scheduled for the last day.

    Last night I checked in for the conference and attended the kickoff party. Arriving at the party, I started to get a better idea of where the conference fees were going. Open bars. EVERYWHERE. It was fantastic. Also, the conference food got significantly better, with the exception of breakfast, which had a noticeable dearth of protein. Muffins, croissants, and danishes, oh carbs!

    I had a fantastic time at the kickoff and met some really awesome people. I had a great chat with a guy from Visa. We talked about the difference between star performers and those who just float. We seemed to agree that it was about passion and drive. Star performers are motivated by an internal passion and drive for excellence. We work constantly on passion projects and geek out both at work and on our own time. Our spouses constantly yell at us for working too much, but they just don’t get it! We like this shit! It doesn’t just pay the mortgage, it keeps us going! It was really a great conversation.

    The first two keynotes were really great. Of particular note was Gordon Rowell’s presentation “Why Did We Think Large Scale Distributed Systems Would be Easy?” Gordon is a Site Reliability Manager at Google. He talked a lot about the challenges of managing large scale distributed systems and how he and his team have addressed some of them. He was pushing Anycast as an HA/LB solution pretty heavily.

    A lot of what Gordon said resonated with my coworker and me. Specifically, his recommendation that operations do a post-mortem for every outage; we’re struggling to hammer this behavior into our team at SNL. The same goes for monitoring: Gordon stressed monitoring everything in every possible way, which is again something we’re having a very hard time selling to our SysAdmins. We both came from environments with incredibly robust monitoring, where everything is monitored by default. Our department doesn’t monitor anything. When I say that, I really mean it. There is no monitoring system. Nothing. Except on our systems, of course; we’re a bit of a silo. I’ll have to write a post about the environment to give some context. It’s pretty bad though.

    After the keynotes, my coworker and I split up and I attended Gene Kim’s presentation “How Do We Better Sell DevOps?” Gene is just a fantastic presenter. If you don’t know who he is, you need to learn. Gene is the coauthor of The Phoenix Project and a veritable DevOps god. Also, if you haven’t read The Phoenix Project you need to drop everything and get to a bookstore or pick up that iPad/Kindle/whatever and read it. Seriously, I’ll wait…

    …all set? Okay, Gene’s presentation was fantastic and I’m not even going to try and parse it; wait for the video to be posted and watch it. He talks about how to sell DevOps to the right people in their language. Now, it’s time for Will Farrington’s presentation “Puppet at GitHub.” I’ll post more later!


  • PuppetConf 2013: Advanced Puppet Training

    So, it’s Wednesday morning and I’ve been in San Francisco since Sunday for PuppetConf. The conference doesn’t start until tomorrow (Thursday) but I opted to attend Advanced Puppet training which came with free admission to the conference. I’ve been using Puppet since 2009 and I’ve never attended any kind of training for it. I’ve read all the books, most of the documentation, and tons of blogs and other resources. I didn’t have particularly high expectations for the training; I guess I was looking to use it to fill in some gaps. I’m sorry to say that the class hasn’t been able to live up to my already low expectations.

    The instructor is very nice and seems generally knowledgeable, but he had a hard time explaining some of the more nuanced topics, like anchors and inheritance, and a hard time pacing the class. It’s meant to be a three-day class; the materials are split up by day, and they aren’t particularly ambitious as far as coverage goes. Here’s the agenda according to the training guide:

    Day 1:

    • Puppet Basics Review
    • Facts and Functions
    • Classification
    • Advanced Coding Techniques
    • Roles and Profiles
    • Best Practices

    Day 2:

    • File Manipulation
    • Data Separation
    • Virtual Resources
    • Exported Resources and Collections
    • Server Scaling

    Day 3:

    • Advanced Reporting
    • Troubleshooting
    • Provisioning
    • MCollective

    Now, I’m not convinced all of these topics are appropriate for an “advanced” level class, and I’ll talk more about that below. But by the end of Day 1 we had barely gotten through the “Facts and Functions” section. I knew something was up when, by the 2pm break, we hadn’t finished the “Puppet Basics Review” section. Yesterday, by the end of the day, we had barely finished the “Data Separation” section. Class starts in about 30 minutes, and I can’t imagine we’re going to get through the last seven sections by the end of the day. What really bugs me is that most of the really “advanced” content is on the back end of the class! The instructor spent WAY too much time on the fundamentals. I was talking to some of the trainers at lunch yesterday, and most of them said they were in about the same place with their advanced classes.

    I don’t get it. The Puppet training website clearly lists “Puppet Fundamentals” or “equivalent hands-on experience with Puppet” as a prerequisite for the Advanced course. The “Basics Review” section of the training manual is 33 pages long. For perspective, the “Advanced Coding Techniques” section is 32 pages, “Roles and Profiles” is 30 pages, and “Data Separation” is only 22 pages. I think the folks at Puppet Labs need to sit down and figure out what this “Advanced Puppet” class is meant to be. There’s no mention in this class of rspec, continuous integration, advanced module development, or any number of truly advanced topics necessary to take your Puppet code to the next level.

    Now, I knew the agenda when I bought the ticket, so I’m not faulting Puppet for that. I just think that there is a need for a truly advanced level Puppet class. I would like to see a class devoted to coding best practices, test driven module development, continuous integration and deployment of Puppet code, and integration of Vagrant and rspec-system into the module development workflow. These are the kinds of “advanced” topics that are really useful to the community.