Thursday, September 1, 2016

Making PS DSC modules available to nodes.

Editor's Note: Started working with Desired State Configuration (DSC) and boy do I miss the things Puppet automates for me!

I've been working with PowerShell DSC and have an interesting scenario:
  • You want to use other resources, like xFirewall, to manage firewall state on a node.
  • You are pushing your configuration to nodes.
  • Problem: nodes need the modules installed on them.
You might run into an error similar to this:
Cannot find module xNetworking_2.11.0.0 from the server'c41c31d5-67e4-4f80-8204-f749ff1ae192',ModuleName='xNetworking',ModuleVersion='')/ModuleContent. Could not install module dependencies needed by the configuration.
You could switch your nodes from push to pull mode, and nodes would then pull modules in addition to their configuration.  But maybe you don't want to use pull mode.

First you'll need to configure your nodes to use a pull server.  Configure the DSC Local Configuration Manager (LCM) like you normally would for a node, but include only this configuration block.  Leave the node in push mode.  Now the node knows where to look for modules, if required.  Replace ServerUrl with your pull server's URL.  What do you mean you don't have a pull server?
  ConfigurationRepositoryWeb PullServer {
      ServerUrl = ''
  }
See the gist if you haven't opened it yet.  The gist shows a DSC configuration where we're using the xFirewall and xSmbShare resources; neither resource is installed by default.  They come from the xNetworking and xSmbShare modules, respectively.

The gist includes some additional PowerShell to package the modules we're using and place them in the pull server's modules directory.

Packaging modules for download by nodes is a pain in the ass.  Maybe you've come across an error like this:
module file did not contain a module with required version
You cannot simply archive the module at its path:
C:\Program Files\WindowsPowerShell\Modules\xNetworking
You must archive the files within the directory corresponding to the module version you want.  Assuming you want the latest version, you can use the Get-Module cmdlet to find the version.
$version = (Get-Module -ListAvailable $module).Version.ToString()
Then you can package all the files in the module's base directory (the .ModuleBase property from Get-Module) into an archive named for the module and version; the pull server expects the form "ModuleName_Version.zip".
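
Putting that together, here's a minimal PowerShell sketch.  The destination directory below is a placeholder for your pull server's modules path, and New-DscChecksum generates the checksum file the pull server expects next to each archive:

```powershell
# Sketch: package the latest installed version of a module for a DSC pull server.
# The destination path is a placeholder; use your pull server's modules directory.
$module  = 'xNetworking'
$mod     = Get-Module -ListAvailable $module |
           Sort-Object Version -Descending | Select-Object -First 1
$version = $mod.Version.ToString()

# Archive the contents of the versioned directory, not the module directory itself
$zip = Join-Path 'C:\Program Files\WindowsPowerShell\DscService\Modules' "$($module)_$($version).zip"
Compress-Archive -Path (Join-Path $mod.ModuleBase '*') -DestinationPath $zip

# The pull server also needs a checksum file next to each archive
New-DscChecksum -Path $zip
```

Note that Compress-Archive requires PowerShell 5.0 or later.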

I'd like to find a way to use the $modules array in the Import-DscResource statement on line 11 of the gist.  It would make this sort of snippet more reusable, since I wouldn't be repeating myself by defining the required modules in two places.

Tuesday, May 31, 2016

Speaking History

Over the past couple of years I've been lucky enough to present a few talks, either at conferences or on webinars.  Here's a list along with links to the slides (if any).

Tuesday, May 10, 2016

Testing Puppet Code: Bundle, Rake and Gems. Oh My! Part 3

This is the third post in a series on testing Puppet code.  The first two posts explored why we should test code and how to use Puppet's 'puppet parser validate' and rodjek's 'puppet-lint' commands.  In this post we'll set up the Ruby framework needed to run 'puppet parser validate' and 'puppet-lint' via Ruby gems.  The Ruby framework will also allow us to add unit tests with rspec.

A collection of Puppet code, tests, and associated templates and files is called a Puppet Module.  All of those files should be in a single directory named 'your_puppet_module'.  Your Puppet manifests should be in a directory named 'manifests'.  See the Puppet documentation for more information on Puppet module layout.  Although not required, if your Puppet code is not currently in module format you should try to get it into one now.

- your_puppet_module
| - manifests
  | - init.pp
  | - another_manifest.pp

Before following the examples, please install Ruby and Bundler on your system.

The Gemfile and Rakefile

We'll use Ruby's 'rake' command to run puppet-lint.  'rake' is a Ruby tool used to run repetitive tasks, such as tests.  'rake' uses a special file named 'Rakefile' where we configure the tasks we want to run.

The 'Rakefile' can require other Ruby gems to run the tasks we write.  We'll use the Ruby dependency manager Bundler to manage the gems our 'Rakefile' needs.  Bundler uses a configuration file named 'Gemfile', which lists the required Ruby gems and their versions.

Our sample Puppet module has a 'Rakefile', 'Gemfile', and a manifests directory with a sample manifest.  The 'Gemfile' contains our two dependencies: 'puppet-lint' and 'rake'.  The 'Rakefile' requires the 'puppet-lint' gem.
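
For reference, a minimal sketch of the two files described above might look like this (the gem list matches the dependencies described for the sample repository):

```ruby
# Gemfile
source 'https://rubygems.org'

gem 'rake'
gem 'puppet-lint'

# Rakefile
require 'puppet-lint/tasks/puppet-lint'
```

Requiring 'puppet-lint/tasks/puppet-lint' is what defines the 'lint' rake task.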

Clone the sample repository and look at the 'Rakefile' and 'Gemfile'.
git clone

Running puppet-lint

According to the 'puppet-lint' documentation all we have to do is run 'rake lint' after the module is installed.

Clone the sample repository.
git clone
Enter the directory and checkout example1.
cd testing_puppet
git checkout example1 
Use bundler to install the required Ruby gems in a directory local to our project.
bundle install --path vendor/bundle
Run the puppet-lint task on our Puppet code.  We prefix the 'rake' command with 'bundle exec', which runs 'rake' in the context of the gems we just installed.
bundle exec rake lint

Congratulations!  We just ran puppet-lint as a rake task and it checked all the Puppet manifests in our project.  puppet-lint found some problems, but those files live under the vendor directory and aren't part of our project.  We'll need to configure puppet-lint to ignore that directory.

Checkout the tag named 'example2'.
git checkout example2
In example2 I've made two modifications to the Rakefile:
  1. Clear any previous definition of the rake task named 'lint'.
  2. Configure puppet-lint to ignore the vendor directory.
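
Those two changes could be sketched in the 'Rakefile' like so; the ignore pattern assumes bundler installed the gems under 'vendor/bundle' as in the earlier step:

```ruby
require 'puppet-lint/tasks/puppet-lint'

# 1. Clear the default 'lint' task defined by the puppet-lint gem
Rake::Task[:lint].clear

# 2. Redefine it, ignoring any manifests under vendor/
PuppetLint::RakeTask.new :lint do |config|
  config.ignore_paths = ['vendor/**/*.pp']
end
```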

Run the rake task again and puppet-lint is no longer checking files in the vendor directory.

puppet-lint can be configured entirely in the 'Rakefile'.  You can still use puppet-lint's configuration file, named '.puppet-lint.rc', when puppet-lint is called as a rake task.

Running puppet syntax checks

Adding Puppet syntax checks to the 'Rakefile' is just as easy.  Let's look at example 3 to see the changes.

Checkout the tag named 'example3'.
git checkout example3
We need two new Ruby gems: puppet-syntax and puppet.  The puppet-syntax gem provides a rake task named 'syntax' that will check our manifests, and it depends on the puppet gem.  Both gems are added to our Gemfile.

We require the puppet-syntax gem in the 'Rakefile' to be able to run the 'syntax' task and, like puppet-lint, we'll configure it to ignore the 'vendor' directory.
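
A sketch of the 'Rakefile' additions:

```ruby
require 'puppet-syntax/tasks/puppet-syntax'

# Like puppet-lint, skip the gems bundler installed under vendor/
PuppetSyntax.exclude_paths = ['vendor/**/*']
```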

Run the rake command to check our manifests for any syntax errors.  No errors!
bundle exec rake syntax 

What if we had errors in our code?  What would that look like?  I've inserted an error into example 4 so we can see what happens.

Checkout the tag named 'example4'.
git checkout example4
Run the 'rake' command to check our manifests for any syntax errors.
bundle exec rake syntax

Puppet could not parse our manifest, init.pp, due to a missing curly brace at the end of the file.  The syntax check tells you the file, the line number, and the error message.  The error message is the same as if we ran the 'puppet parser validate' command.

The puppet-syntax rake task will also check templates and hiera YAML files if present.  You can read the full documentation for puppet-syntax on GitHub.


In the previous blog posts I demonstrated running 'puppet-lint' and 'puppet parser validate' on the command line.  Many shell scripts take advantage of those command line utilities and I relied heavily on scripts when I started.  The command line utilities may have a place in your environment but advancing to the next level of testing with rspec and beaker requires laying a foundation.

We started with our Puppet code and gathered our manifests, templates, and files into something more closely resembling a Puppet module.  Then we installed Ruby and Bundler in our development environment.  After Bundler was installed we added the 'Gemfile' to our project to manage our Ruby gem dependencies.  Lastly we created a 'Rakefile' that defines the rake tasks we will repeatedly run to lint and syntax check our Puppet code.  Now we have a project that can easily be added to a continuous integration system, and we can start building a software development pipeline for our Puppet code.

Tuesday, May 3, 2016

Improving Code Quality With SonarQube and Jenkins CI

I recently had the opportunity to install and configure SonarQube with Puppet and Jenkins.  Not knowing what I was getting into, I found it a little daunting at first, but I love the insight SonarQube provides into the Puppet code.

SonarQube is a platform for measuring and reporting code quality.  SonarQube scans the codebase for quality standards violations, aka the Developer's Seven Deadly Sins, and produces a list of issues and reports.  Although SonarQube can find syntactical errors, it is more like a linter and does not replace unit testing.  SonarQube is complementary to the array of utilities software developers have at their disposal.

If you're using Puppet you can easily stand up a SonarQube server with either this or that puppet module.  I chose to fork Maestrodev's puppet module because it was tied to their wget module and caused a conflict for me.

Getting started with SonarQube is easy when integrated with Jenkins.  First install the SonarQube plugin for Jenkins.  Second, navigate to the Jenkins "Configure System" section for SonarQube.  Enter your SonarQube server information and it will be made available to jobs.

I prefer to create a service account for Jenkins to log into SonarQube and have all Jenkins jobs use it.  You could also assign developers, or a group of developers, their own SonarQube credentials.

In your job's configuration, check the box and Jenkins will automatically inject the required environment variables for the SonarQube server.

Maven projects can add sonar options as properties to the pom.xml as in this example.  Add the sonar goal and environment variables for the SonarQube server for the Maven target.

Freestyle projects use the Execute Shell block to run sonar-scanner with a properties file.  The properties file specifies where the source is located, the language type, the project name, and the project key.  This example shows a typical properties file.  The options passed on the command line are required for connecting to the SonarQube server.

In the following example I test for a file in my project.  If that file exists then I run sonar-scanner with environment variables from the Jenkins global settings for SonarQube.
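
A sketch of that Execute Shell block; the properties file name and the SONAR_* variable names assume the environment injected by the Jenkins SonarQube plugin:

```shell
# Only run the scan when the project carries a sonar-project.properties file
if [ -f sonar-project.properties ]; then
  sonar-scanner \
    -Dsonar.host.url="$SONAR_HOST_URL" \
    -Dsonar.login="$SONAR_AUTH_TOKEN"
fi
```
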
After your Jenkins job runs, links are available to the SonarQube reports for each run.  You can click on the "SonarQube" link in the main menu or on the blue wavy lines for the job.  You will be taken to the project report on the SonarQube server.

SonarQube by default comes with the Java plugin to scan Java code.  If you want to scan Puppet code you'll need to install the Puppet plugin via SonarQube's Update Center.  When you scan your codebase you'll get a nifty dashboard with the ability to drill down and get specific errors.

The level of effort to get the server running and integrated with Jenkins was minimal.  Not knowing anything about the platform, I was able to get the server up and running in a day, and that included working on my Puppet module.  It took another four hours of messing around until I figured out how to get it working from Jenkins.  It's so easy to get started that I highly recommend setting this up.

Monday, May 2, 2016

Testing Puppet Code, Linting Part 2

In the first post I reviewed testing Puppet code syntax.  If you are just getting started testing Puppet code, I suggest you start there and read about why we test code and how to get started with the Puppet parser.  If you've already read that, then let's get started!

Linting is the act of tidying up your code.  Each programming language has a certain style, or way of writing.  Much of it is just a matter of formatting spaces, tabs, and newlines so that your code looks a certain way.

Linting is not related to how a program functions, whether it will compile, or anything technical.  Linting is purely a matter of style.  So why include linting in a series on testing code?  Because adhering to the language's style makes your code more readable to other programmers.  Linting is the act of testing how well your code adheres to that style and cleaning it up.

There's a nifty program for Puppet code called puppet-lint.  puppet-lint will check your Puppet code and ensure it adheres to the Puppet Style Guide.  Run this to install:

     gem install puppet-lint

Once installed, run it with a manifest as the argument and it will tell you if your code is OK.

     puppet-lint init.pp

Let's see a real example on a manifest.

In the above example we can see puppet-lint has thrown some warnings and one error.  puppet-lint has full documentation explaining each error and warning in detail.  In this case the autoloader layout error is related to the directory name not matching the module name.

We can ignore bogus errors and warnings by telling puppet-lint to turn off specific checks.
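
For example, a couple of checks can be switched off on the command line (check names per puppet-lint's documentation):

```shell
# Skip the 80-character and autoloader layout checks for this run
puppet-lint --no-80chars-check --no-autoloader_layout-check init.pp
```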

puppet-lint can also fix your poorly indented code with its handy 'fix' feature.  Here I missed an indent.

I run puppet-lint with the --fix option to fix any indentation problems.

Look back at the code and we see puppet-lint has fixed the issue for me.

Practical experience has shown me that it sometimes gets confused by multiple nested hashes.  Sometimes I've wondered if the resulting linted code is easier to read or more confusing.  puppet-lint also has a thing for 80 characters in a line.  There are times when I just can't (or won't) fit a line of code into 80 characters.  I typically ignore that check every time I run puppet-lint.

puppet-lint is a great tool for turning ugly code into easily readable code.  Add it to your arsenal of tests along with "puppet parser validate" to ensure your code is top notch.  Use your best judgement when running puppet-lint.  The goal is to make your code readable in a standard format.  The style guide is just that, a guide.

Wednesday, March 30, 2016

Testing Puppet Code, Part 1

This is the first in a series of posts about how you, as a Puppet practitioner, can test your Puppet code.

Why do we test our code?  First, testing code saves time.  It’s counterintuitive but true.  When writing tests you may find new solutions to the problem or write something more efficient.  Second, running tests discovers bugs.  Problems arise from untested code and you’ll spend time fixing those problems later on.  Lastly, adding tests enhances documentation.  Tests describe how the code should behave and are a self-documenting feature.  There are many other reasons and specific types of tests, but we'll save that for later.

We start with basic Puppet syntax checks.  The command `puppet parser validate` will "validate(s) Puppet DSL syntax without compiling a catalog or syncing any resources".  Simply put, it examines a Puppet manifest and ensures those curly braces `{ }` are correct and that we followed the Puppet DSL.

Run `puppet parser validate` without any arguments and it looks for the default manifest at `/etc/puppetlabs/puppet/manifests/site.pp`.  I ran the command, but the file doesn’t exist, so I got an error.

Note: I run `echo $?` to show the previous command's exit status.  Knowing the exit status is helpful when scripting.

To test our Puppet manifest we need to pass the filename as an argument to the command.  In this example we pass the manifest `init.pp`.  The old adage “No news is good news” is true.  The command found no errors and there’s no output aside from some warnings.
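
For reference, the command and the exit status check look like this:

```shell
puppet parser validate init.pp
echo $?    # 0 when the parse succeeds, non-zero on error
```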

Let's break the manifest and rerun the test.  I remove a single curly brace and now we have an error.

That output tells us the file and line number of the error: file init.pp, line number 105.  When I look at line 105 I can’t find the error.  But if we look a few lines up, at line 103, we see the `case` statement is missing a curly brace `{`.  We have to play detective to find errors as they are not always at the reported line number.

Usually we want to test more than one file at a time.  We could pass multiple files to `puppet parser`, but it will stop at the first error.  Send one manifest at a time to the command and we get a complete list of all errors in all the manifests.  We can use the `find` command to run `puppet parser` on every manifest.
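
One way to do that with `find`, assuming your manifests live under 'manifests':

```shell
# Run the validator once per manifest so every file's errors are reported
find manifests -name '*.pp' -exec puppet parser validate {} \;
```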

We test our code to avoid bugs and make our code more efficient.  Tests can be done quickly and easily with the simple syntax test, `puppet parser validate`.  We can save some time and perform multiple tests with simple shell scripting.  In the next article we will improve code readability with the command `puppet-lint`.

Monday, February 1, 2016

Packer, Puppet and Jenkins: Automating VM Images


VM templates.  We love to have them.  Hate to update them.  Worse is the problem of "template sprawl": a web server template, a file server template, a DNS server template, this one runs Elasticsearch, and this one is for Mongo.

You can script the software update process but what if your Elasticsearch config changes slightly?  What if your webserver template uses apache and you want to change over to nginx?

Here's a process and series of tools that automate a VM template process and link to configuration management.  First we briefly look at the individual tools and what they do.  Then we identify our VM template creation process.  Lastly, we connect the tools together in a way that mirrors our process.

Process Overview

If you have ever created a VM template you have a list of steps.  Here are the steps I use to create a VM template with tools to automate the process.
  1. Jenkins - Start the image build process.
  2. Packer - Create a VM from specified configuration.
  3. Packer, VMware Fusion, VirtualBox - Boot the VM from installation media.
  4. Packer - Install the OS.
  5. Puppet - Configure VM using production Puppet code.
  6. Packer - Perform templating tasks to configure a "generic" OS install.
  7. Packer - Shutdown the VM.
  8. Packer - Mark as template or export to OVF.
  9. Jenkins - Archive build artifacts, deploy to Vagrant repository in Artifactory.
Now that we have a basic overview of the process I'll discuss the tools in detail.


Packer is developed by HashiCorp.  Packer is a tool to describe how to build a VM or container and the resulting output.  Possible outputs include VM formats such as VirtualBox, VMware, and AMI, or containers.

Packer is a way to describe how to build the same VM with control over the output format and user defined variables.  We can use the same set of scripts and configuration file to build a web server or a mysql server by changing how we call packer.  Our packer configuration can install VirtualBox tools or VMware tools depending on what output format was specified.
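
As a rough sketch, a Packer template (JSON, as Packer used at the time) can take a user variable and hand it to a provisioner; the builders section is omitted here and the script name is made up for illustration:

```json
{
  "variables": { "role": "webserver" },
  "provisioners": [
    {
      "type": "shell",
      "environment_vars": ["SERVER_ROLE={{user `role`}}"],
      "scripts": ["scripts/install-puppet.sh"]
    }
  ]
}
```

Overriding `role` on the command line is what lets the same template build a web server or a mysql server.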

I really like the Chef bento repository.  It has a number of great packer configuration files and scripts for Linux.  I use the bento repository and added a few modifications to install my preferred configuration management tool, Puppet.


Puppet is a configuration management tool.  Puppet allows us to describe the system state in code.  There are many benefits to describing your infrastructure as code.  In this case, we are able to use the same code we use after a VM is deployed to also build a VM from scratch.

Packer describes how to build the VM by calling a provisioner.  There are many types of provisioners: shell or PowerShell scripts, Puppet, Chef, and others.  I modified the Packer scripts in the bento repository to install Puppet Enterprise in the VM.

I configured Puppet Enterprise to use my puppet-control repository.  If you are not familiar with this concept, read this post by Gary Larizza.  In short, I configure the Puppet Enterprise install in the VM to grab a copy of all my production Puppet code.  I use a role/profiles method so each server type corresponds to one role.

Then I run puppet apply and specify which role I want to configure.  Since the node has full Puppet Enterprise installed it also has my hieradata and all my code "just works".
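
Conceptually, the apply step is just this (the role name is hypothetical):

```shell
# Apply one role using the production control repo already on the node
puppet apply -e 'include role::webserver'
```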

At this point I have a fully provisioned node using my production code.  Packer takes over again when provisioning is complete and exports the VM image into my specified format.


Jenkins is primarily a continuous integration tool.  In this case it acts as job scheduler, worker, and archiver for my VM template process.  I schedule the packer script to run once a week.  The Jenkins nodes have VMware and VirtualBox hypervisors installed and run the packer script in headless mode; no GUI required.  I modified the Jenkins VM settings to allow for nested virtualization.

Jenkins runs the entire provisioning process and archives the resulting packer image.  If the image is a Vagrant box, Jenkins uploads it to a Vagrant repository on the Artifactory server.  This requires setting three artifact properties for Vagrant to detect the box.  These properties should be passed from the job to Artifactory.

  • box_provider
    • Hypervisor used to create VM.
  • box_name
    • Must match output .box file.
  • box_version
    • Must be a number.
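
One way to set those properties at upload time is Artifactory's matrix parameters; the server, repository, and credentials below are placeholders:

```shell
# Upload the box and attach the three properties in one request
curl -u "$ARTIFACTORY_USER:$ARTIFACTORY_API_KEY" -T output/mybox.box \
  "https://artifactory.example.com/artifactory/vagrant-local/mybox.box;box_name=mybox;box_provider=virtualbox;box_version=1.0.0"
```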


Using Packer, Puppet, and Jenkins we can schedule and automate the VM template build process.  Puppet also allows us to create VM templates based on the types of nodes we use in our production environment.  We can have one VM template for each Puppet role.

The VM templates can be used in a number of ways and provide benefits:
  1. Developers have recent production VM templates to develop on.
  2. Provisioning VMs from templates is faster because Puppet has already installed and configured most of the software.
  3. Deploying to multiple cloud providers is more manageable with consistency between VMs.
  4. VM template library can grow in size as manual management and update of VM templates is eliminated.
There are some things I would like to work on.
  1. Resulting VM template is over 1GB.
    1. Some of my VM templates are over 1GB.  This may be OK for deploying to vSphere but not good if a developer needs to download a 1GB Vagrant box.  The Puppet Enterprise install is consuming some space.  It may be that our images are just large.  Maybe I can enable some compression?
  2. Jenkins, VirtualBox and VMWare Workstation don't always play nice.
    1. Sometimes the hypervisors on the Jenkins nodes crap out.  When that happens I need to run the packer scripts in GUI mode and try to debug the process on the Jenkins node.  This is a pain in the ass.  I think this is due to kernel updates, then the VirtualBox and VMware kernel drivers need to be rebuilt.  Until VMware Workstation 12 I had a problem with the USB Arbitrator service.
  3. Use Puppet in a way that doesn't require a full Puppet Enterprise install.
    1. I would like to use Puppet but not install Puppet Enterprise on the VM template.  I think there must be a way to install just the Puppet agent and still run the code I want.  I just haven't given much thought to how this would work.