
Planet Puppet

Puppet blog aggregator!

Puppet Labs: The Quest To Learn Puppet: New Learning VM

Thu, 04/17/2014 - 23:29

Puppet skills are in big demand in the job market. The new Learning VM makes it easier to get started learning Puppet and monitor your own progress.

Puppet Labs: Patching the Heartbleed OpenSSL Vulnerability with Puppet Enterprise

Wed, 04/16/2014 - 20:26

Patch management can be quick and easy with Puppet Enterprise. In cases like the recent Heartbleed vulnerability, time is of the essence: As system administrators, we need to quickly and efficiently deploy patches for these security vulnerabilities, and just as important, be able to show our management team that we’ve done it.

Puppet Labs: What Heartbleed Tells Us About the Need for IT Automation

Tue, 04/15/2014 - 21:59

Heartbleed amply demonstrated how important IT automation is in security situations. But it's just as important for turning on a dime when the business demands it.

The Razor's Edge: puppet cloudera module 2.0.2

Tue, 04/15/2014 - 08:16
This is a minor bugfix release of my Puppet module to deploy Cloudera Manager. When I released the module, I had assumed that the testing I did for the C5 beta2 would be 100% valid for C5 GA.  It turns out that Cloudera shipped a newer version of the Oracle 7 JDK and a symlink […]

Puppet Labs: For Clojure Nerds: Puppet Labs Application Services

Mon, 04/14/2014 - 17:51

For Clojure developers: A look at the technology underlying Trapperkeeper, the new open source application services framework created by Puppet Labs.

OlinData: Clojure: an outsider's investigation

Mon, 04/14/2014 - 03:45

Last week, this post on the Puppet Labs blog caught my eye. It announces a services framework called TrapperKeeper, which seems interesting. To be honest, I haven't yet looked into what it does or how it works.

I did, however, spend a bit of time investigating Clojure as well as the community response to this announcement, and I'll share my thoughts here. I do have to warn that this is all found through creative surfing, so welcome to how my mind works when investigating a (to me) new piece of open source technology.

Clojure

I started by looking at Clojure. Not so much at what the language can do, its syntax and all that, since a) my programming days are (sadly) mostly over and b) there are far smarter minds that can say sensible things about that.

I am, however, increasingly interested in the continuity of technologies, as this seems to be an important factor in whether enterprises adopt them. This in turn helps me decide whether we should look into offering training for those technologies. So, I dug into the information that is publicly and easily accessible:

  • The GitHub contributor stats page: as of this moment, the vast majority of the commits (1600+, vs. 200+ by Stuart Halloway, the runner-up) were made by Rich Hickey, the original author. In the past 3 months, however, Alex Miller has the lead (stats here), indicating a possible shift of attention for Rich. Of course this is pure speculation and I don't claim to have any inside knowledge here; remember, this is just an outsider's perspective.
  • Let's dig into which companies are behind those top contributors. This gives me a decent feeling: there are not a lot of commits going into the clojure core repository, but control doesn't seem to rest with a single company. That said, the main contributor is a professional services company, so people can turn somewhere if they want support.
    • 4 out of the top 6 are from Cognitect, a company fully focused on Clojure and Datomic. Doing a bit of reading, they seem to be the "good kind" of open source company. Minor downside: the company is quite young.
    • The other 2 top contributors are Toby Crawley, who works for Red Hat, and Andy Fingerhut, who according to a quick LinkedIn search works for Cisco. Good: two major enterprises that at the very least have people working on this. Toby's site clearly states he works on this professionally; for Andy this is less clear.
  • Going through profiles of the main contributors, I found an interesting blog post by Alex Miller summarizing the 2013 State of Clojure survey. Inside it, we find some interesting nuggets:
    • Tooling is the biggest category of complaints. This is interesting, because it directly conflicts with what the Puppet Labs blog post lists as a good reason to go for the JVM. It seems like the culprit is that "the relationship between Clojure code and bytecode is complex and not necessarily 1-to-1 – getting good s-expression level support is challenging". I don't pretend to know better than either party, but I am cautioned by this. Anyone who can shed light on this is welcome to leave a comment.
    • The other big category of problems is documentation. That is the same thing we can read in the Hacker News discussion. Having spent a decent bit of time with half-assed documentation in the MySQL HA scene, I am not super thrilled when reading this.
  • Comments on the Puppet Labs blog post itself:
    • 'Engineer' said: "It's sad that the level of technical ability is so low that people adopt languages like clojure because they think its "concurrent". The JVM is not concurrent, therefore clojure is not concurrent therefore you're just signing up for a world of hurt. The Erlang VM has been around longer than the JVM, it really is concurrent, and it really is battle tested."
      Sadly, I have no idea if this is true, and without much further investigation it's hard to verify. I just see it as a possible red flag that I'd want to dig into later on.
    • 'Jeff Dickey' said: "First clue: the "we're so awesome we have to build our own infrastructure even when we're probably complete n00bs in our new hipster language" syndrome. Most of the dings that were laid out as "justification" were *operational nice-to-haves*; if your new environment isn't mature enough to have rock-solid operational support (and anything on the JVM really *should*), then you are fundamentally misunderstanding something."
      This doesn't pertain to Clojure so much as to the fact that Puppet Labs created TrapperKeeper. While the language used here is not my favorite, I am concerned about the underlying point: this framework is obviously not Puppet Labs' core business. While important for their products, I wonder whether it's a great long-term plan to build this stuff in-house (and thus spend resources on it). I guess the longer term will have to determine that, mostly by whether the project attracts outside contributors/contributions. Irrelevant but ironic: discussing this issue in this specific case, given that Puppet was created to counter an exactly similar problem of everyone creating their own tooling in-house :)

That's all for now. It's too early to tell what this all means in the grander scheme of the Puppet ecosystem and where this will all lead in the next few years. Personally I'm not happy with the JVM from an operational perspective, as its startup time and memory usage are a bit of a turnoff. That said, PuppetDB has been a major step forward over stored configurations in Puppet 3.x, so I'm just going to sit back and digest all my newly found knowledge while waiting to see where this is all going.

Puppet Labs: A New Era of Application Services at Puppet Labs

Fri, 04/11/2014 - 19:15

Trapperkeeper is a new Clojure framework for hosting long-running applications and services. Built at Puppet Labs as an open source project, it combines learnings about high-performance server-side architecture with patterns for maximizing modular, reusable code.

Puppet Labs: It's Red Hat Summit Time

Fri, 04/11/2014 - 17:01

We're heading to Red Hat Summit next week, April 14-17 in San Francisco, and we’d love to see you there. Here’s your guide to making the most out of the Summit.

Visit Us and Win Cool Prizes

First stop: Visit the OpenStack Pavilion to pick up a Red Hat OpenStack Passport. Then visit us at booth No. 113 to get a stamp, watch a demo and pick up a Puppet Labs t-shirt or one of our new koozies.

Puppet Labs: Heartbleed Update: Regeneration Still the Safest Path

Fri, 04/11/2014 - 00:49

We believe the most conservative approach is still the safest and most secure: Regenerate your certificate authority and all OpenSSL certs throughout your Puppet-managed infrastructure.

Puppet Labs: Heartbleed and Puppet-Supported Operating Systems

Wed, 04/09/2014 - 23:48

Full list of Puppet Enterprise-supported operating systems that have the Heartbleed vulnerability, along with the vulnerable versions of OpenSSL that they shipped with.

The Foreman: Blogs: EC2 provisioning using Foreman

Wed, 04/09/2014 - 14:41
One of Foreman's goals is to provide a simple and familiar process to provision systems, regardless of where they are located.

We've now added the ability to provision systems in EC2, alongside the existing virtualization providers such as RHEVM, libvirt, VMware etc.

In this blog, I'll try to describe step by step what is required in order to provision a new instance in EC2.

Requirements
  • You should be using a recent version of Foreman, either directly from git or via the nightly packages for Debian, Red Hat or Fedora.
  • Have a working Foreman server; this should include operating system definitions and unattended mode enabled. In addition, storeconfigs data must not be stored in the Foreman database.
  • Valid Amazon EC2 access and secret keys.
  • A security group which allows Foreman to SSH to the instance.
Configuring AWS
Click on the More tab and select Compute Resources.
Compute resources are services that can generate a host, e.g. VMware, libvirt, OpenStack etc.

Click on New Compute Resource and fill in the information about your new compute resource. Normally the name should represent something meaningful to you, such as a combination of the EC2 region and the account used.

If everything is entered correctly, you should get back a list of regions and be able to select the region that you would like to deploy to.

Foreman will then automatically create a new SSH keypair, which will be used to configure the instance (you may remove it later on).

Then, the next step is to define which images are allowed to be used and to assign them to Foreman operating systems and architectures.

Click on the image tab and select New Image.




Since Foreman will SSH to the instance (at least for now - we decided to use SSH first, cloud-init later), it is very important that you define the correct user that is configured on the AMI (normally the ubuntu user or ec2-user) and, of course, the AMI ID.




Foreman is now ready to create your instance. However, in order to fully automate Puppet starting up upon instance launch, we need to create a little post script; this is where provisioning templates come into play.

Configuring Provisioning Templates
Add or edit a provisioning template: More => Provisioning Templates => New.

Select Finish as the template type and paste in your finish script.
Don't forget to associate the template (in the association tab) and set a default per OS (in the OS settings), and then add the snippets too:

etc-hosts
puppet.conf

An important note about UUIDs for certnames: if you want to use this feature, please make sure that you enable use_uuid_for_certificates under More => Settings; if not, you can simply use <%= @host.name %> for the certname. Additionally, it is not compatible with storeconfigs at this time.

master_bootstrap
If you want to provision a whole puppet master in EC2, you can use this snippet to get it up and running.


Now, if you ask yourself how variables like ntp-server get resolved: they are simply Foreman smart variables.



Actual instance launch
Go to the Hosts tab and click on New Host. Among other settings, make sure you select your compute resource, image and hardware profile.
[Screenshots: primary tab, operating system tab, progress bar, EC2 console]
As always, since this is a new feature, any feedback, comments etc. are welcome!

The Foreman: Blogs: Creating a new host using foreman API

Wed, 04/09/2014 - 14:36
Using the Foreman API is fairly simple; here I'll show an example using curl.

Using this simple script, you can automate your VM/bare-metal provisioning process plus Puppet configuration in one simple step.


Create a new Host

In this example, I've hidden most of the logic in the host group attribute in Foreman, meaning that it already knows the provisioning and Puppet attributes. It's not a problem to extend the script so it does not rely on a given host group, or to simply override certain default attributes (such as memory size or host operating system).

The best way to figure out the additional attributes (besides RTFM) is simply to look at the Foreman log during the creation POST request.

A typical response from Foreman will include the IP address (automatically assigned), the MAC address (auto-generated if it's a VM), and so on.

Note that in order to use the host group you need to know its ID. That's easily found by looking at the URL while you edit an existing host group: for example, if your URL is https://foreman/hostgroups/1-base/edit, then the ID is 1.

Delete a Host

The Foreman: Blogs: Getting foreman search results into your Puppet manifest

Wed, 04/09/2014 - 14:32
Let's say you want to know all of the hosts your monitoring host needs to monitor, or maybe the hosts to which your database needs to allow access. Traditionally, the solution to this problem was Puppet storeconfigs.

In this blog post, I mentioned how you could utilize the Foreman search language to get customized results.

Storeconfigs is a great solution, and if it works for you, by all means do keep using it. In this post, however, I would like to show you how to use Foreman to query for similar data, plus Foreman data as well.

Let's say we want to allow VPN access only to client hosts which ran Puppet in the last week.

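A minimal sketch of what such a manifest could look like - assuming the foreman() parser function accepts a search item and a query string (the exact signature may differ), and with an illustrative vpn::server class standing in for whatever manages your VPN:

# sketch only - the foreman() signature and the vpn::server class are illustrative
$vpn_clients = foreman('hosts', 'last_report > "1 week ago"')

class { 'vpn::server':
  allowed_hosts => $vpn_clients,
}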

You could easily change the search conditions; for example, to get a list of hosts without any puppet failures, simply change the query to status.failed = 0.

We could easily search for conditions based on facts, Puppet classes, owner, reports, and combinations of them.

The output from the puppet function may include complex data such as arrays and hashes, depending on the query object used; host lists will mostly be an array, while host facts will be a hash, for example:

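As an illustration only (the fact names are invented, not from the original post), a host facts query returns a hash of fact names to values:

# illustrative shape of a facts query result
# { 'kernel' => 'Linux', 'ipaddress' => '10.0.1.5', 'uptime' => '30 days' }
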
You can utilize such data either in templates or in versions of Puppet that support hashes.

Quick start

  1. Install and setup foreman (Foreman puppet modules might be a quick starting point).
  2. If you are not using the official foreman installer, download and put the following file in your modules lib directory, and ensure you are using pluginsync.
  3. Adjust the file to point to your Foreman server.
  4. Use it in your manifest.

Puppet Labs: Heartbleed Security Bug: Update for Puppet Users

Wed, 04/09/2014 - 07:45

We've released step-by-step documentation for remediating the Heartbleed security vulnerability. Links in post.

Puppet on the Edge: Getting your Puppet Ducks in a Row

Tue, 04/08/2014 - 03:45

A conversation that comes up frequently is whether the Puppet Programming Language is declarative or not. This usually comes up when someone has been fighting with how the master-side order of evaluation of manifests works and has been left beaten by what may sometimes seem like random behavior. In this post I want to explain how Puppet works and try to straighten out some of the misconceptions.

First, let's get the terminology right (or this will remain confusing). It is common to refer to "parse order" instead of "evaluation order", and the use of the term "parse order" is deeply rooted in the Puppet community - this is unfortunate, as it is quite misleading. A computer language is typically first parsed and then evaluated (Puppet does the same), and as you will see, almost all of the peculiarities occur during evaluation.

"Parse Order"

Parse Order is the order in which Puppet reads puppet manifests (.pp) from disk, turns them into tokens and checks their grammar. The result is something that can be evaluated (technically an Abstract Syntax Tree (AST)). The order in which this is done is actually of minor importance from a user perspective; you really do not need to think about how an expression such as $a = 1 + 2 becomes an AST.

The overall ordering of the execution is that Puppet starts with the site.pp file (or possibly the code setting in the configuration), then asks external services (such as the ENC) for additional things that are not included in the logic that was loaded from the site.pp. In versions from 3.5.1 the manifest setting can also refer to a directory of .pp files (preferred over using the now deprecated import statement).

After having parsed the initial manifest(s), Puppet then matches the information about the node making a request for a catalog with available node definitions, and selects the first matching node definition. At this point Puppet has the notion of:

  • node - a mapping of the node the request is for.
  • a set of classes to include and possibly parameters to set that it got from external sources.
  • parsed content in the form of one or several ASTs (one per file that was initially parsed)

Evaluation of the puppet logic (the ASTs) now starts. The evaluation order is imperative - lines in the logic are executed in the order they are written. However, all classes and defines in a file are defined prior to the start of evaluation, but they are not evaluated (i.e. their bodies of code are just associated with the respective name and set aside for later "lazy" evaluation).

Which leads to the question what "being defined" really means.

Definition and Declaration

In computer science these terms are used as follows:

  • Declaration - introduces a named entity and possibly its type, but it does not fully define the entity (its value, functionality, etc.)
  • Definition - binds a full definition to a name (possibly declared somewhere). A definition is what gives a variable a value, or defines the body of code for a function.

A user-defined resource type is defined in puppet using a define expression. E.g. something like this:

define mytype($some_parameter) {
# body of definition
}

A host class is defined in puppet using the class expression. E.g. something like this:

class ourapp {
# body of class definition
}

After such a resource type definition or class definition has been made, if we try to ask whether mytype or ourapp is defined by using the function defined, we will be told that it is not! This is because the implementer of the function defined used the word in a very ambiguous manner - the defined function actually answers "is ourapp in the catalog?", not "do you know what a mytype is?".

The terminology is further muddled by the fact that the result of a resource expression is computed in two steps - the instruction is queued, and later evaluated. Thus, there is a period of time when it is defined, but what it defines does not yet exist (i.e. it is a kind of recorded desire / partial evaluation). The defined function will however return true for resources that are either in the queue or have been fully evaluated.

mytype { '/tmp/foo': ...}
notice defined(Mytype['/tmp/foo']) # true

When this is evaluated, a declaration of a mytype resource is made in the catalog being built. The actual resource '/tmp/foo' is "on its way to be evaluated" and the defined function returns true since it is (about to be) "in the catalog" (only not quite yet).

Read on to learn more, or skip to the examples at the end if you want something concrete, and then come back and read about "Order of Evaluation".

Order of Evaluation

In order for a class to be evaluated, it must be included in the computation via a call to include, or by being instantiated via the resource instantiation expression. (In comparison to a classic Object Oriented programming language include is the same as creating a new instance of the class). If something is not included, then nothing that it in turn defines is visible. Also note that instances of Puppet classes are singletons (a class can only be instantiated once in one catalog).

Originally, the idea was that you could include a given class as many times as you wanted. (Since there can only be one instance per class name, multiple calls to include a class only repeat the desire to include that single instance. There is no harm in this.) Prior to the introduction of parameterized classes, it was easy to ensure that a class was included; a call to 'include' before using the class was all that was required. Parameterized classes were then introduced, along with new expression syntax allowing you to "instantiate a class as a resource". When a class is parameterized, the “signature” of the class is changed by the values given to the parameters, but the class name remains the same. (In other words, ourapp(“foo”) has a different signature than ourapp(42), even though the class itself is still ourapp.) Parameterization of classes therefore implies that including a class only works when that class does not have multiple signatures. This is because multiple signatures would require multiple singleton instantiations of the same class (a logical impossibility). Unfortunately puppet cannot handle this even if the parameter values are identical - it sees this as an attempt to create a second (illegal) instance of the class.
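
A minimal illustration (standard puppet logic, not taken from the original post):

include ourapp        # fine - includes the singleton instance of ourapp
include ourapp        # also fine - merely repeats the desire to include it

class { 'ourapp': }   # error - seen as a second (illegal) declaration of
                      # Class[Ourapp], even though no parameters conflict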

When something includes a class (or uses the resource instantiation expression to do the same), the class is auto-loaded; this means that puppet maps the name to a file location, parses the content, and expects to find the class with a matching name. When it has found the class, this class is evaluated (its body of code is evaluated).

The result of the evaluation is a catalog - the catalog contains resources and edges and is declarative. The catalog is transported to the agent, which applies the catalog. The order in which resources are applied is determined by their dependencies as well as their containment, use of the anchor pattern or the contain function, and settings (apply in random order, or by source order, etc.). No evaluation of any puppet logic takes place at this point (at least not in the current version of Puppet) - on the agent the evaluation is done by the providers operating on the resources in the order that is determined by the catalog application logic running on the agent.
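
For example (standard puppet syntax, not from the original post), the dependencies recorded in the catalog drive the agent-side application order:

package { 'ntp': ensure => installed }

service { 'ntp':
  ensure  => running,
  require => Package['ntp'],  # apply the package before the service
}

# the chaining arrow expresses the same relationship
Package['ntp'] -> Service['ntp']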

The duality of this - a mostly imperative, but sometimes lazy, production of a catalog (as you will learn below) and a declarative catalog application - is something that confuses many users.

As an analog; if you are writing a web service in PHP, the PHP logic runs on the web server and produces HTML which is sent to the browser. The browser interprets the HTML (which consists of declarative markup) and decides what to render where and the order in which rendering will take place (images load in the background, some elements must be rendered first because their size is needed to position other elements etc.). Compared to Puppet; the imperative PHP backend corresponds to the master computing a catalog in a mostly imperative fashion, and an agent's application of the declarative catalog corresponds to the web browser's rendering of HTML.

Up to this point, the business of "doing things in a particular order" is actually quite clear; the initial set of puppet logic is loaded, parsed and evaluated, which defines nodes (and possibly other things), then the matching node is evaluated, things it references are then autoloaded, parsed and evaluated, etc. until everything that was included has been evaluated.

What still remains to be explained is the order in which the bodies of classes and user-defined types are evaluated, as well as when relationships (dependencies between resources) and queries are evaluated.

Producing the Catalog

The production of the catalog is handled by what is currently known as the "Puppet Compiler". This is again a misnomer, it is not a compiler in the sense that other computer languages have a compiler that translates the source text to machine code (or some intermediate form like Java Byte Code). It does however compile in the sense that it is assembling something (a catalog) out of many pieces of information (resources). Going forward (Puppet 4x) you will see us referring to Catalog Builder instead of Compiler - who knows, one day we may have an actual compiler (to machine code) that compiles the instructions that builds the catalog. Even if we do not, for anyone that has used a compiler it is not intuitive that the compiler runs the program, which is what the current Puppet Compiler does.

When puppet evaluates the AST, it does this imperatively - $a = $b + $c, will immediately look up the value of $b, then $c, then add them, and then assign that value to $a. The evaluation will use the values assigned to $b and $c at the time the assignment expression is evaluated. There is nothing "lazy" going on here - it is not waiting for $b or $c to get a value that will be produced elsewhere at some later point in time.

Some instructions have side effects - i.e. something that changes the state of something external to the function. This is in contrast to an operation like + which is a pure function - it takes two values, adds them, and produces the result; once this is done there is no memory of that having taken place (unless the result is used in yet another expression, etc., until it is assigned to some variable - a side effect).

The operations that have an effect on the catalog are evaluated for the sole purpose of their side effect. The include function tells the catalog builder about our desire to have a particular class included in the catalog. A resource expression tells the catalog builder about our desire to have a particular resource applied by the agent; a dependency formed between resources again tells the catalog builder about our desire that one resource should be applied before/after another. While the instructions that cause the side effects are immediate, the side effects are not completely finished; instead they are recorded for later action. This is the case for most operations that involve building a catalog. This is what we mean when we say that evaluation is lazy.

To summarize:

  • An include will evaluate the body of a class (since classes are singletons this happens only once). The fact that we have instantiated the class is recorded in the catalog - a class is a container of resources, and the class instance is fully evaluated and exists as a container, but it does not yet contain the actual resources. In fact, it only contains instructions (i.e. our desire to have a particular resource with particular parameter values applied on the agent).
  • A class included via what looks like a resource expression i.e. class { name: } behaves like the include function wrt. evaluation order.
  • A dependency between two (or a chain of) resources is also just an instruction at this point.
  • A query (i.e. a space-ship expression) is an instruction to find and realize resources (see the sketch below).
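
For instance (a standard-syntax sketch, not from the original post):

# a virtual resource - recorded, but not realized by itself
@user { 'deploy':
  ensure => present,
}

# a space-ship query: an instruction to find and realize matching resources
User <| title == 'deploy' |>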

When there are no more expressions to immediately evaluate, the catalog builder starts processing the queued up instructions to evaluate resources. Since a resource may be of user-defined type, and it in turn may include other classes, the processing of resources is interrupted while any included classes are evaluated (this typically adds additional resource instructions to the queue). This continues until all instructions about what to place in the catalog have been evaluated (and nothing new was added). Now, the queue is empty.

The lazy evaluation of the catalog building instructions is done in the order they were added to the catalog, with the exception of the application of default values, queries, and relations, which are delayed until the very end. (Exactly how these work is beyond the topic of this already long blog post.)

How many different Orders are there?

The different orders are:

  • Parse Order - a more or less insignificant term meaning the order in which text is translated into something the puppet runtime can act on. (If you have a problem with ordering, you are almost certainly not having a problem with parse order.)
  • Evaluation Order - the order in which puppet logic is evaluated with the purpose of producing a catalog. Pure evaluation order issues are usually related to the order in which arguments are evaluated, or the order in which case options are evaluated - these are usually not difficult to figure out.
  • Catalog Build Order - the order in which the catalog builder evaluates definitions. (If you are having problems with ordering, this is where things appear to be mysterious.)
  • Application Order - the order in which the resources are applied on an agent (host). (If you are having ordering problems here, they are more apparent: "resource x" must come before "resource y", or something (like a file) that "resource y" needs will be missing. Solutions here are to use dependencies, the anchor pattern, or the contain function.)
Please Make Puppet less Random!

This is a request that pops up from time to time, usually because someone has blown a fuse over a Catalog Build Order problem. As you have learned, the order is far from random. It is, however, still quite complex to figure out the order, especially in a large system.

Is there something we can do about this?

The mechanisms in the language have been around for quite some time, and they are not an easy thing to change due to the number of systems that rely on the current behavior. However, there are many ways around the pitfalls that work well for people creating complex configurations - i.e. there are "best practices". There are also some things that are impossible or difficult to achieve.

Many suggestions have been made about how the language should change to be both more powerful and easier to understand, and several options are being considered to help with the mysterious Catalog Build Order and the constraints it imposes. These options include:

  • Being able to include a resource multiple times if the declarations are identical (or if they augment each other).
  • If using a resource expression to instantiate a class, consider a previous include of that class to be identical (since the include did not specify any parameters it can be considered as a desire of lower precedence). (The reverse interpretation is currently allowed).

Another common request is to support decoupling between resources, sometimes referred to as "co-op", where there is a desire to include things "if they are present" (as opposed to someone explicitly including them). The current set of functions and language mechanisms makes this hard to achieve (due to the Catalog Build Order being complex to reason about).

Here the best bet is the ENC (for older versions), or the Node Classifier for newer Puppet versions. Related to this is the topic of "data in modules", which in part deals with the overall composition of the system. The features around "data in modules" have not been settled yet; there are experimental things to play with, but none of the existing proposals is a clear winner at present.

I guess this was a long way of saying: we will get to it in time. What we have to do first (and what we are working on) is the semantics of evaluation and catalog building. At this point, the new evaluator (that evaluates the AST) is available when using the --parser future flag in the just-to-be-released 3.5.1. We have just started the work on the new Catalog Builder, where we will more clearly (with the goal of being both strict and deterministic) define the semantics of the catalog and the process that constructs it. We currently do not have "inversion of control" as a feature under consideration (i.e. making a module's starting point included simply by adding the module to the module path), but we are well aware that this feature is much wanted (in conjunction with being able to compose data).

What better way to end than with a couple of examples...

Getting Your Ducks in a Row

Here is an example of a manifest containing a number of ducks. In which order will they appear?

define duck($name) {
notice "duck $name"
include c
}

class c {
notice 'in c'
duck { 'duck0': name => 'mc scrooge' }
}

class a {
notice 'in a'
duck {'duck1': name => 'donald' }
include b
duck {'duck2': name => 'daisy' }
}

class b {
notice 'in b'
duck {'duck3': name => 'huey' }
duck {'duck4': name => 'dewey' }
duck {'duck5': name => 'louie' }
}

include a

This is the output:

Notice: Scope(Class[A]): in a
Notice: Scope(Class[B]): in b
Notice: Scope(Duck[duck1]): duck donald
Notice: Scope(Class[C]): in c
Notice: Scope(Duck[duck3]): duck huey
Notice: Scope(Duck[duck4]): duck dewey
Notice: Scope(Duck[duck5]): duck louie
Notice: Scope(Duck[duck2]): duck daisy
Notice: Scope(Duck[duck0]): duck mc scrooge

(This manifest is found in this gist if you want to get it and play with it yourself).

Here is a walk through:

  • class a is included and its body starts to evaluate
  • it places duck1 - donald in the catalog builder's queue
  • it includes class b and starts evaluating its body (before it evaluates duck2 - daisy)
  • class b places ducks 3-5 (the nephews) in the catalog builder's queue
  • class a evaluation continues, and duck2 - daisy is now placed in the queue
  • the immediate evaluation is now done, and the catalog builder starts executing the queue
  • duck1 - donald is first; when it is evaluated, its name is logged and class c is included
  • class c queues duck0 - mc scrooge
  • the catalog builder now processes the remaining queued ducks in order 3, 4, 5, 2, 0

The order in which resources are processed may seem to be random, but now you know the actual rules.

Summary

In this (very long) post, I tried to explain "how puppet master really works", and while the order in which puppet takes action may seem mysterious or random at first, it is actually both defined and deterministic - albeit quite unintuitive when reading the puppet logic at "face value".

Big thanks to Brian LaMetterey, and Charlie Sharpsteen who helped me proof read, edit, and put this post together. Any remaining mistakes are all mine...

Puppet Labs: New! Support for Non-Root Agents in Puppet Enterprise

Mon, 04/07/2014 - 17:52

Pro efficiency tip: People without root access privileges can manage resources with Puppet Enterprise 3.2. Learn how.

Puppet Labs: Quickly Deploy MySQL with Puppet Enterprise Supported Module Puppetlabs-MySQL

Fri, 04/04/2014 - 17:51

Managing a MySQL deployment is faster, simpler and more manageable with the puppetlabs-mysql module.

OlinData: How Puppet fits in Complex Enterprise IT Environments

Fri, 04/04/2014 - 16:23

This blog is part 1 of a 2 part series about using Puppet in Complex Enterprise Environments.

Enterprise IT environments are usually complex, heterogeneous and spread across multiple data centers. Server deployment usually takes multiple days unless proper automation or systems are in place. Configuration drift, IT compliance, agility and visibility are other challenges. To address such challenges, sysadmins often prefer to go with configuration management and automation tools like Puppet, Chef, Ansible, CFEngine, etc. In this blog, I will discuss Puppet.

What is Puppet?

Puppet is next-generation IT automation software for system administrators. Puppet lets system administrators monitor the entire infrastructure life cycle. It also allows automation of repetitive tasks, deployment of critical applications, and proactive change management.

In a complex enterprise IT environment, an ideal automation solution should:

  • Support multiple operating systems
  • Provision servers
  • Support virtualization
  • Integrate with monitoring solutions
  • Manage configuration of servers
  • Support compliance initiatives
  • Support cloud infrastructure

Now let's see how Puppet fits these roles.

SUPPORT FOR MULTIPLE OS 

Puppet supports the following operating systems. OSes marked with * support Puppet agents only.

  • Red Hat Enterprise Linux (RHEL) 4*, 5, 6 
  • Windows* Server 2003/2008 R2/2012, and Windows* 7 
  • Ubuntu 10.04 LTS & 12.04 LTS 
  • Debian 6, 7 
  • Solaris* 10, 11 
  • SLES 11 SP1 or greater
  • Scientific Linux 4*, 5, 6 
  • CentOS 4*, 5, 6 
  • Oracle Linux 4*, 5, 6
  • AIX* 5.3, 6.1, 7.1 

Native Support for Microsoft Windows

Windows support is very important for many companies; in a lot of organisations Windows has a share of more than 75%. Puppet has recently significantly improved its support for Windows. Puppet Enterprise offers native support for:

  • Windows Server 2003, Windows Server 2008 R2, and Windows 7. 
  • Graphical installation (.msi package) or command line installation 
  • Puppet resource types: File, User, Group, Scheduled Task, Package (.msi), Service, Exec, Host (see the sketch after this list)
  • Pre-Built Puppet Forge Modules - IIS, SQL Server, Azure, win_facts, windows registry, etc.
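
A minimal sketch of managing Windows resources with the standard types (resource names illustrative):

# illustrative - a local user and a Windows service managed by Puppet
user { 'deploy':
  ensure => present,
}

service { 'wuauserv':   # the Windows Update service
  ensure => running,
  enable => true,
}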

PROVISIONING NEW SERVERS 

Traditionally, you provision new servers up to a basic level only, using Cobbler, Kickstart, Razor or any other provisioning tool of your choice. After that, you might go in manually to configure and set up everything else. Maybe you have scripts for it, but they are not super-flexible.

With Puppet, you integrate the setup of the Puppet agent into your provisioning process. Then the Puppet agent runs and configures the whole server by itself. Just wait 10 minutes, and the bare OS installation will have turned into a fully usable, production-ready machine.

Puppet & Kickstart 

When you create the OS image that goes onto the machine with Kickstart, you make sure that it contains the Puppet agent, already installed and configured to run on boot. When the machine boots for the first time, it connects to the Puppet master, and you can use Puppet to apply the desired configuration. In short, Kickstart installs the minimum needed to get Puppet running. Puppet can then convert the bare OS install into a web server or database server in minutes.
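
As a minimal sketch (the hostname and module are illustrative), the node's first agent run could apply a definition like this:

# illustrative - assumes a web server module such as puppetlabs-apache is available
node 'web01.example.com' {
  include apache    # the bare OS becomes a working web server on the first run
}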

Puppet and SCCM 

System Center Configuration Manager (SCCM) brings to the table a Windows-native tool that is well-integrated with its target software and OS, capable of managing configuration from the provisioning step on up. Puppet is limited to the configuration layer only and does not descend as low as provisioning, and it doesn't come with a Windows-native GUI for setting up policies. What Puppet does differently than SCCM is offer true infrastructure-as-code configuration management. In terms of technical ability, Puppet core types and providers give a solid spread of out-of-the-box functionality that can be built on per typical Puppet practice to fashion larger abstractions either within the Puppet language or in Ruby. Puppet is explicitly designed to be a highly extensible framework, therefore additional resource types are easy to write, distribute, or find on the Puppet Forge. All of this, combined with the significantly lower per-node price, make Puppet Enterprise a compelling choice for hybrid Windows/Unix/Linux IT environments, an agile alternative to SCCM, and a tool complementary to Group Policy.

Puppet & Razor

Razor is an advanced provisioning application that can deploy both bare metal and virtual systems. Razor makes it easy to provision a node with no previously installed operating system and bring it under the management of Puppet Enterprise. Razor’s policy-based bare-metal provisioning enables you to inventory and manage the lifecycle of your physical machines. With Razor, you can automatically discover bare-metal hardware, dynamically configure operating systems and/or hypervisors, and hand nodes off to Puppet Enterprise for workload configuration. Razor policies use discovered characteristics of the underlying hardware and user-provided data to make provisioning decisions.

The following steps show a high-level view of provisioning a node with Razor:

  1. Razor identifies a new node - When a new node appears, Razor discovers its characteristics by booting it into the Razor microkernel and inventorying its facts.
  2. The node is tagged - The node is tagged based on its characteristics. Tags contain a match condition — a Boolean expression that has access to the node’s facts and determines whether the tag should be applied to the node or not.
  3. The node tags match a Razor policy - Node tags are compared to tags in the policy table. The first policy with tags that match the node’s tags is applied to the node.
  4. Policies pull together all the provisioning elements
  5. The node is provisioned with the designated OS and managed with PE

VIRTUALIZATION

System administrators face numerous challenges in today's virtualized world. VM sprawl, configuration drift, and the increasingly heterogeneous nature of IT environments - public, private, and hybrid cloud platforms, multiple operating systems, new application stacks - make managing infrastructure even more complex. In addition, organizations' expectations for rapid response times and fast delivery of applications only seem to increase. Using Puppet's declarative, model-based approach to IT automation, system administrators can take full advantage of the responsiveness of their VMware deployments without any loss in productivity. Furthermore, Puppet's abstraction layer enables sysadmins to reuse their configurations across physical, virtual, and cloud environments, as well as operating systems, databases, and application servers. Sysadmins can benefit from using Puppet Labs and VMware integrations for configuring VMs and provisioning private cloud applications.

Puppet & vSphere / ESXi

The sheer volume and dynamic nature of nodes makes managing the lifecycle of VMware virtual machines a challenge. In particular, keeping configurations consistent across dev, test, and prod environments while rapidly provisioning, configuring, updating, and terminating VMs requires automation in order to scale without impacting quality of service. Puppet Enterprise can help.

With its integration with VMware vCenter, Puppet Enterprise enables sysadmins to provision VMware VMs and automatically install, configure, and deploy applications like web servers and databases. Furthermore, these declarative, model-based configurations are reusable across operating systems, dev-test-prod environments, and even physical and public cloud infrastructures. Using Puppet Enterprise, IT teams can automate away the menial, repetitive tasks around lifecycle management of their VM infrastructure, allowing them to scale services and applications quickly, reliably, and efficiently.  

ENTERPRISE MONITORING

Puppet Enterprise ships with out-of-the-box contextual dashboards. Leveraging the functionality of PuppetDB in Puppet Enterprise, you can centrally monitor advanced features such as inventory services and exported resources. This large inventory of metadata for each node can help sysadmins optimize their deployments and report on expected or unexpected behaviour. Puppet's integration with products like ScienceLogic empowers IT administrators to standardize, automate change and manage policies, while simultaneously ensuring the performance and availability of their systems and Puppet Enterprise deployments. This combined solution enables Puppet Enterprise customers to discover, configure, manage and monitor their dynamic infrastructure, especially in larger distributed environments. The integration includes support for automated discovery of all Puppet Enterprise resources. Aligning these resources in device categories and groups enables you to apply different KPIs and events to different classes of service - for example, identifying the top 10 most resource-consuming Puppet nodes per environment. Monitoring can also be done using external tools like Nagios, which works quite well with Puppet.

In the next blog post in the series we shall discuss the following:

  • Puppet for configuration management
  • Puppet for compliance
  • Puppet for automation of cloud infrastructure

Puppet on the Edge: Stdlib Module Functions vs. Puppet Future Parser / Evaluator

Thu, 04/03/2014 - 21:11

Earlier in this series of blog posts about the future capabilities of Puppet, and the Puppet Type System in particular, you have seen how the match operator can be used to check the type of values. In Puppet 3.6 (with --parser future) there is a new function called assert_type that helps with type checking. This led to questions about the existing functionality in the puppetlabs-stdlib module, and how the new capabilities differ and offer alternatives.

In this post I am going to show examples of when to use type matching, and when to use the new assert_type function as well as showing examples of a few other stdlib functions and how the same tasks can be achieved with the future parser/evaluator available in Puppet 3.5.0 and later.

The Stdlib is_xxx functions

The puppetlabs-stdlib module has several functions for checking if the given value is an instance of a particular type. Here is a comparison:

stdlib            type system
is_array($x)      $x =~ Array
is_bool($x)       $x =~ Boolean
is_float($x)      $x =~ Float
is_hash($x)       $x =~ Hash
is_integer($x)    $x =~ Integer
is_numeric($x)    $x =~ Numeric
is_string($x)     $x =~ String
n/a               $x =~ Regexp

Note that the type system operations do not coerce strings into numbers or vice versa. They also do not make a distinction about how a number was entered (decimal, hex, or octal). The stdlib functions vary in their behavior, but typically only treat strings with decimal notation as being numeric or integer (which is both wrong and confusing).

In addition to the basic type checking shown in the table above, you can also match against parameterized types to perform more advanced checks: ranges of numeric values, the size of an array, the size and type of elements in an array, arrays with a sequence of different types (i.e. using the Tuple type), and the same for Hash, where the Struct type allows specification of expected keys and their respective types. See the earlier posts in this series for how to use those types; a few examples follow.
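
For example (illustrative values, using the future parser's type system):

1 =~ Integer[0, 10]                     # true - within the numeric range
[1, 2, 3] =~ Array[Integer]             # true - all elements are Integer
[1, 2, 3] =~ Array[Integer, 3, 3]       # true - exactly three Integers
[1, 'one'] =~ Tuple[Integer, String]    # true - a sequence of types
{ 'mode' => 'ro' } =~ Struct[{ 'mode' => String }]  # true - expected key and value type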

The Stdlib validate_xxx functions

The puppetlabs-stdlib module has several functions to validate whether the given value is an instance of a particular type; if not, an error is raised. The new assert_type function does the same, but it checks only one argument. Thus, if you want to check multiple values at once, you place them in an array and check against an Array type parameterized with the type you want each element of the array to be an instance of. Here are examples:

stdlib                    type system
validate_array($x)        assert_type(Array, $x)
validate_array($x, $y)    assert_type(Array[Array], [$x, $y])
validate_bool($x)         assert_type(Boolean, $x)
validate_bool($x, $y)     assert_type(Array[Boolean], [$x, $y])
validate_hash($x)         assert_type(Hash, $x)
validate_hash($x, $y)     assert_type(Array[Hash], [$x, $y])
validate_re($x)           assert_type(Regexp, $x)
validate_re($x, $y)       assert_type(Array[Regexp], [$x, $y])
validate_string($x)       assert_type(String, $x)
validate_string($x, $y)   assert_type(Array[String], [$x, $y])

Note that the Regexp type only matches regular expressions. If the desire is to assert that a String is a valid regular expression, it can be given as a parameter to the Regexp or Pattern type, since this performs a regular expression compilation of the pattern string and raises an error with details about the failure.

'foo' =~ Pattern["{}[?"] # this will fail with error

Note that the 3.5.0 --parser future does not validate the regular expression pattern until it is used in a match (not when it is constructed). This is fixed in Puppet 3.6.

The validate_slength function

The validate_slength function is a bit of a Swiss Army knife and it allows validation of length in various ways for one or more strings. It has the following signatures:

validate_slength(String value, Integer max, Integer min) - arg count {2,3}
validate_slength(Array[String] value, Integer max, Integer min) - arg count {2,3}

To achieve the same with the type system:

# matching (there is no is_xxx function for this)
$x =~ String[min, max]
[$x, $y] =~ Array[String[min, max]]

# validation
assert_type(String[min,max], $x)
assert_type(Array[String[min,max]], [$x, $y])

A common assertion is to check if a string is not empty:

assert_type(String[1], $x)
The Stdlib values_at function

The stdlib function values_at can pick values from an array given a single index value or a range. The same can now be achieved with the [] operator by simply giving it a range.

stdlib                          future parser
values_at([1,2,3,4], 2)         [1,2,3,4][2]
values_at([1,2,3,4], ["1-2"])   [1,2,3,4][1,2]

values_at also allows picking various values by giving it an array of indexes to pick. This is not supported by the [] operator. OTOH, if you find that you often need to pick elements 1, 6, 32-38, and 164 from an array, you are probably not doing it right.

The Stdlib type function

The type function returns the name of the type as a lower case string, i.e. 'array', 'hash', 'float', 'integer', 'boolean'. This stdlib function does not perform any inference or provide details about the types; it only returns the name of the base type.

At the time of writing, there is no corresponding function for the new type system, but a type_of function will be added in 3.6 that returns a fully inferred Puppet Type (with all details intact). When this function is added, it may have an option to make the type generic (i.e. reduce it to its most generic form).

The typical usage of type is to... uh, check the type - this is easily done with the match operator:

stdlib                 future parser
type($x) == 'string'   $x =~ String

The Stdlib merge, concat, difference functions

Merging of hashes and concatenation of arrays can be performed with the + operator instead of calling merge and concat. The - operator can be used to compute the difference.

stdlib               future parser
merge($x, $y)        $x + $y
concat($x, $y)       $x + $y
difference($x, $y)   $x - $y
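
For example, with the future parser:

$merged = { 'a' => 1 } + { 'b' => 2 }   # { 'a' => 1, 'b' => 2 }
$joined = [1, 2] + [3, 4]               # [1, 2, 3, 4]
$rest   = [1, 2, 3] - [2, 3]            # [1]
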
Other functions

There are other functions that partially overlap new features (like the range function), but where the new feature does not completely replace the functionality provided by the function. There is also the possibility of enhancing some of the functions to give them the ability to accept a block of code, or to make use of the type system.

At some point during the work on Puppet 4x we will need to revisit all of the stdlib functions.

Puppet Labs: New Integrations with Microsoft Azure and Visual Studio

Thu, 04/03/2014 - 17:56

Today we're excited to announce our integrations with Microsoft Azure and Visual Studio. This morning at Build 2014, Mark Russinovich, technical fellow, Cloud and Enterprise Division at Microsoft, and Luke Kanies, CEO of Puppet Labs, unveiled the integration between Puppet Enterprise and Azure.

