At Gracenote, one of the challenges we've consistently faced on projects is how to manage credentials for servers, databases, and other services within our apps. We don't want to store them in our code repositories for obvious reasons, so we've traditionally fallen back to passing them from developer to developer on an ad hoc basis. While this works (and can be secure if you take precautions not to send credentials over email or chat), continually tracking down the right credentials to set up a specific environment quickly becomes tedious. The real problem is that you only need those credentials at certain times, such as when generating configuration files while launching a new server instance. So wouldn't it be nice if you could store the credentials somewhere secure and simply pull them into processes as needed?
This post is an explanation of how you can do just that. We'll create a fully provisioned Amazon AMI for our application, use it to launch an EC2 instance, and automatically create our environment configuration for that instance by pulling down credentials from a secure S3 bucket.
Before we get started, it's worth mentioning that this post assumes you have some familiarity with Ansible, Packer and AWS. If you don't, or need a little refresher, please check their respective docs.
Chapter 1 – Initial Setup
First things first, let's set the stage. We'll be using Ansible and Packer to create our AMI and, later, our config files. I've put up a repository with this example on GitHub (https://github.com/ponysmith/s3-credentials); feel free to refer to it as we work our way through.
Chapter 2 – Ansible
Packer uses Ansible to provision AWS instances by creating a temporary EC2 instance, copying your Ansible playbooks and configurations to the server, and then executing the playbook locally on the server. Because the playbook will be executed locally, we can create a very simple hosts file for AWS:
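Something like this single-entry inventory is enough (the group name is arbitrary):

```ini
# hosts -- Packer runs the playbook on the instance itself,
# so the only host we need is localhost over a local connection
[local]
localhost ansible_connection=local
```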
2.1 The Ansible playbooks
We'll need to create two Ansible playbooks. The first will handle the full provisioning of the AMI that we'll be building via Packer. The second will be run during the startup (cloud-init) phase of launching an EC2 instance and will copy our credentials file from S3 and use it to generate our environment config files. Let's look at them one at a time:
This playbook handles all of the server provisioning you need to do for your environments, so adjust it as necessary for your app. To meet our goal of pulling secure credentials from S3, though, we specifically need to install the aws-cli on the machine. We've added a task for that to our aws-cli role:
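As a sketch, the playbook just applies your roles to the instance (the role names here are illustrative, not necessarily those in the example repo):

```yaml
# provision.yml -- provisioning playbook run by Packer
- hosts: all
  become: yes
  roles:
    - base       # your app's general server setup
    - aws-cli    # installs the AWS CLI for the S3 pull at boot
```

And the aws-cli role's install task might look like this (installing via pip is one common route; an apt package would also work):

```yaml
# roles/aws-cli/tasks/main.yml
- name: Install pip
  apt: name=python-pip state=present update_cache=yes

- name: Install the AWS CLI
  pip: name=awscli state=present
```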
Our second playbook is the one that we'll be running during the cloud init phase of launching our EC2 instance. Here's the playbook:
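A sketch of what that boot-time playbook can look like (the file path and role name are assumptions for illustration): it loads the credentials file that the user-data script will have copied down from S3, then hands off to a role that writes the config files:

```yaml
# configure.yml -- run by cloud-init at first boot
- hosts: all
  become: yes
  vars_files:
    - /tmp/secret-credentials.yml   # copied down from S3 by the user-data script
  roles:
    - configure                     # writes the environment config files
```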
And the matching task:
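As a sketch, the task can be a single template call that renders the credential variables into the app's config (the destination path and template name are illustrative):

```yaml
# roles/configure/tasks/main.yml
- name: Generate database.yml from the credentials pulled from S3
  template:
    src: database.yml.j2
    dest: /var/www/myapp/config/database.yml
```

The database.yml.j2 template simply references the variables loaded from the credentials file (mysql_host, mysql_password, and so on) with standard Jinja2 syntax.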
Chapter 3 – Packer
Next, we need to create the Packer template that will be responsible for creating our AMI. This is one of the bigger parts of the process so we'll step through it bit by bit:
In the variables section, we're setting blank strings for our AWS credentials. We'll be passing the actual values for these to Packer when we execute the build. But Packer requires that variables be initialized here, so we're just setting them to blank strings for now.
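In Packer's JSON template format, that section looks roughly like this:

```json
"variables": {
  "aws_access_key": "",
  "aws_secret_key": ""
}
```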
We're setting up two provisioners: a shell provisioner and an Ansible provisioner. Remember that Packer copies the Ansible files to the temporary EC2 instance and runs the playbook locally, which means the instance must have Ansible installed. So we use a simple shell provisioner to install Ansible via apt.
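A minimal sketch of that shell provisioner for Ubuntu 14.04 (the Ansible PPA gives a newer version than the stock archive; either works):

```json
{
  "type": "shell",
  "inline": [
    "sudo apt-get update",
    "sudo apt-get install -y software-properties-common",
    "sudo apt-add-repository -y ppa:ansible/ansible",
    "sudo apt-get update",
    "sudo apt-get install -y ansible"
  ]
}
```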
Our second provisioner is the actual Ansible provisioner. Here we're simply pointing it to the playbook file, playbook directory, and inventory file.
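Using Packer's ansible-local provisioner, that looks roughly like the following (the paths assume the Ansible files live in an ansible/ directory; adjust to your layout):

```json
{
  "type": "ansible-local",
  "playbook_file": "ansible/provision.yml",
  "playbook_dir": "ansible",
  "inventory_file": "ansible/hosts"
}
```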
The builders section of our template defines the basic configuration for the AMI we want to create. Most of these settings are pretty self-explanatory. The two AWS credential properties reference the variables we defined above (which, again, will be passed to Packer when we run the build; yay for no passwords in the repo). It's important to note that in our example we're basing our AMI on Ubuntu 14.04. The OS and distribution you choose will determine the correct values for the source_ami and ssh_username properties. Once you've decided on a base, consult the AMI finder (https://cloud-images.ubuntu.com/locator/ec2/) to find the AMI ID for the OS/distribution/zone you want.
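A sketch of an amazon-ebs builder along those lines (the source_ami value is a placeholder; look up the real ID for your region with the AMI finder, and adjust region, instance type, and AMI name to taste):

```json
"builders": [{
  "type": "amazon-ebs",
  "access_key": "{{user `aws_access_key`}}",
  "secret_key": "{{user `aws_secret_key`}}",
  "region": "us-east-1",
  "source_ami": "ami-xxxxxxxx",
  "instance_type": "t2.micro",
  "ssh_username": "ubuntu",
  "ami_name": "myapp-{{timestamp}}"
}]
```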
Chapter 4 – Running Packer
With all that set, we can now run our Packer script to have it connect to AWS and build our AMI. Don't forget to adjust the credentials to match those for your account:
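The build command passes the credentials in as variables (replace the placeholders with your own keys; template.json is whatever you named your Packer template):

```shell
packer build \
  -var 'aws_access_key=YOUR_ACCESS_KEY' \
  -var 'aws_secret_key=YOUR_SECRET_KEY' \
  template.json
```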
Chapter 5 – Adding credentials to S3
Since the whole point of this exercise is to pull our credentials from S3, we need to go ahead and put them there. In this case, we'll store some credentials for a MySQL database. We're going to be using Ansible to create our config file eventually, so we'll store our credentials in a YAML file that Ansible can easily ingest. This file will be uploaded directly into your S3 bucket.
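For example, a secret-credentials.yml along these lines (all names and values here are placeholders):

```yaml
# secret-credentials.yml -- uploaded to the S3 bucket, never committed
mysql_host: db.example.com
mysql_database: myapp
mysql_username: myapp_user
mysql_password: changeme
```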
Chapter 6 – Create an IAM role for S3 access
In order to access your credentials on S3 during the startup phase, you'll need to make sure your instance has permission to access your S3 bucket. Go to the IAM page in the AWS console (https://console.aws.amazon.com/iam) and create a new role. Assign the role the ability to read objects from S3 (as of this writing, the managed policy for read-only S3 access is AmazonS3ReadOnlyAccess).
Chapter 7 – Launch your instance
With everything else done, it's time to launch your instance. In the EC2 dashboard, create a new EC2 instance. Select the custom AMI you built with Packer as the base AMI for your instance. After you select your instance type, make sure you continue on to the configuration details portion of the setup. On the configuration details page, you'll need to do two things:
- Assign the instance the IAM role you created earlier, so it has access to S3.
- Enter the following in the User data field (making sure to change the bucket name, file name and region as necessary):
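A minimal user-data script along these lines works (the bucket name, file name, region, and playbook paths are illustrative):

```shell
#!/bin/bash
# Pull the credentials file down from S3 (the instance's IAM role grants access)
aws s3 cp s3://my-secret-bucket/secret-credentials.yml /tmp/secret-credentials.yml --region us-east-1
# Run the boot-time playbook baked into the AMI to generate the config files
ansible-playbook -i /srv/ansible/hosts /srv/ansible/configure.yml
```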
This script runs automatically when the instance starts up for the first time. All we're doing here is copying our credentials file from S3 to the instance using the AWS CLI and then running our Ansible playbook. Once you've entered the user data, review your setup and launch the EC2 instance. After the server launches, ssh into the machine and you should find a database.yml file complete with the credentials from your secret-credentials.yml file in S3. Success!
Chapter 8 – Conclusion
Using Packer, Ansible and S3 can provide a very efficient way of storing your secure credentials and using them to create configuration files. While this method does require you to pay a little set-up cost up front, it can potentially save you a lot of time provisioning remote servers, especially if you use auto-scaling. Beyond that, it's an effective method of automating the creation of configuration files in a way that doesn't require handing off credentials to every developer on your team.
Will it be ideal for every situation or project? Probably not. It doesn't really have a place in development environments, where developers can maintain their own local credentials, and it may be overkill for small teams.
But for large teams with many developers and a need for multiple environments or auto-scaling, this method can be a great solution. At the very least, it opens the door to exploring new methods of configuration and automation. Give it a try and let us know in the comments if it suits your particular needs.