
Overview

When it came to using AWS (Amazon Web Services) in conjunction with our on-premise infrastructure, the Gracenote team quickly realized that EC2 (Amazon Elastic Compute Cloud) hosts in AWS would need to resolve names of on-premise systems and vice-versa. We were pretty confident this was an already-solved problem, so we scanned the internet for a comprehensive end-to-end solution. After searching with no luck, we decided to design something ourselves. Now we're sharing our solution with all of you in hopes it will save you time and headaches. If you use AWS in conjunction with on-premise infrastructure, then this blog post is for you.

Problem

Integrate on-premise internal DNS (Domain Name System) with AWS internal DNS and public DNS, so that AWS hosts can resolve internal and external Gracenote names as well as non-Gracenote domains.

Requirements
  • Easy-to-support name resolution in a large-scale multi-VPC (Virtual Private Cloud) AWS account
  • Easy to leverage for most public cloud providers or on-premise infrastructures
  • Minimal engineering effort
  • Cost efficiency
  • High availability
  • Under $80/month in EC2 resources
  • Low administrative overhead

Solution

The obvious answer was Route53 from Amazon - a highly available and scalable cloud DNS web service.

However, the more we looked at it, the less Route53 made sense. Instead, we came up with the following:

  • Set up BIND (Berkeley Internet Name Domain) slaves in AWS with static private IP addresses, leveraging ENIs (Elastic Network Interfaces).
  • In the Cloudformation VPC DHCPOptions, point to those static private IP addresses so that every VPC EC2 instance uses resolvers that can resolve both internal and external names. These static-IP resolvers live both inside and outside of AWS.

Current VPC Setup

  • 9 AWS VPCs, each with multiple subnets spread across multiple zones, for both external and internal resources.
  • Each VPC peering with a central VPC (hub and spoke VPC topology) where our BIND slaves would live.
  • All VPCs having DirectConnect access to our internal network.
  • Various VPC ACLs, firewalls, and other security in place.
  • Linux and Windows EC2 instances.
  • Expansive data center and internal network footprint with thousands of records across various BIND masters and slaves.
  • EC2 instances in each VPC needing to communicate with servers, VMs (Virtual Machines), and other systems in our data centers and internal network by way of the various BIND masters and slaves already in place.

A central VPC can host shared services that all other VPC-hosted systems can access. Not needing to deploy the same service into each VPC helps reduce costs. We still need to solve for the situation where the central VPC goes down; however, that is beyond our scope here and would require another blog post or two.

Our VPC Setup: (As diagrammed by my daughter. I told her that if she drew this up, I'd buy her ice cream.)

[Diagram: VPC setup]

A note about shared services: while they represent single points of failure, without them the costs become prohibitive. Virtual machines share hosts, Docker containers share Docker hosts, virtual hosts share Apache web servers, EC2 instances share hypervisors, cell phones share towers, and so on. A question remains, however: how do we architect around a shared-service failure? That topic is also beyond the scope of this blog.

We could have set VPC DHCP options to point to Amazon-provided DNS (i.e. Route53). However, the following questions would remain:

  • How would Route53 know all the internal records of our BIND masters located in our various data centers and internal networks?
  • If we migrated to GCE (Google Compute Engine), Microsoft Azure, or one of our data centers, could we continue to use Route53 for internal name resolution?
  • What could we design that would work in any public cloud provider or internal data center with minimal re-engineering effort?

This is not about Docker containers or any other specific technology; it is about architecture. Since Route53 was not a good fit, we decided to build BIND slaves in AWS to support our requirements.

Building BIND Slaves

When a VPC is created, it is given a default DNS server that all VPC EC2 instances will use to resolve names. However, if your AWS VPC is connected to your internal network and you expect EC2 instances to be able to resolve names of internal systems in your data center or office, then the default VPC resolver cannot help - it does not know the internal DNS and you cannot log in to it and configure it to do forwarding.

The solution? Override the default VPC name resolver and configure the VPC and all EC2 instances to know what name resolvers to use. This can be done using AWS Cloudformation templates. Cloudformation gives developers and systems administrators an easy way to create and manage a collection of related AWS resources, provisioning and updating them in an orderly and predictable fashion.

Cloudformation Snippet to Define Name Servers

Here is a Cloudformation sample that defines specific name servers that all VPC EC2 instances can use.




# Set name servers.
# As long as EC2 instances in the VPC can reach internal name servers via DirectConnect or VPN, you're OK.
# Here 10.0.1.1 is a BIND server we built in the VPC;
# 172.16.10.1 and 192.168.10.1 are BIND servers in the Gracenote internal network.
"MyDhcpOptionsVPC01" : {
  "Type" : "AWS::EC2::DHCPOptions",
  "Properties" : {
    "DomainName" : "some.thing.domain",
    "DomainNameServers" : [ "10.0.1.1", "172.16.10.1", "192.168.10.1" ],
    "Tags" : [
      { "Key" : "Name", "Value" : "some-value" }
    ]
  }
},
# Associate the name servers with a particular VPC
"VPC01DHCPOptionsAssociation" : {
  "Type" : "AWS::EC2::VPCDHCPOptionsAssociation",
  "Properties" : {
    "VpcId" : "vpc-af09233bef",
    "DhcpOptionsId" : { "Ref" : "MyDhcpOptionsVPC01" }
  }
},
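
Applying the change is an ordinary stack update. For example, with the AWS CLI (the stack and template names here are hypothetical):

aws cloudformation update-stack \
  --stack-name our-vpc-stack \
  --template-body file://vpc-template.json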


Once you add this and update the Cloudformation template, all VPC EC2 instances will, in a few minutes thanks to DHCP, get the three domain name servers as their resolvers. However, this creates the following problems:

  • Every time a name server IP address changes, you will need to update the Cloudformation template.
  • Every time the Cloudformation template is updated, there is a chance name resolution will break for all VPC EC2 instances, causing big problems.

Infrastructure should ideally be robust, dependable, and as maintenance-free as possible. We certainly do not want to deal with these issues constantly. Rather, we want static private IP addresses in AWS that we can assign to the BIND servers.

Resolver IP Address vs FQDN

We considered putting an internal ELB (Elastic Load Balancer) in front of the BIND slaves but decided that would not work: resolvers need to be reachable at an IP address, not a Fully-Qualified Domain Name (FQDN), and ELBs only expose an FQDN. In general, on TCP/IP networks (the Internet and most other networks), everything communicates via IP addresses, not names. Names are an abstraction over IP addresses because it is easier to remember gracenote.com than, say, the IPv4 address 209.10.40.1, let alone an IPv6 address such as 2001:0db8:85a3:0000:0000:8a2e:0370:7334. An IP address may map to the name gracenote.com, but the only way to know that is to ask a resolver to look it up. To start looking up names, then, you have to know a resolver's IP address, not its FQDN.
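
You can see this constraint in any Unix resolver configuration: /etc/resolv.conf only accepts IP addresses. On a VPC instance using the DHCPOptions shown earlier, it would look something like this:

# /etc/resolv.conf as populated via the VPC DHCP options
search some.thing.domain
# BIND slave inside the VPC:
nameserver 10.0.1.1
# On-premise BIND servers:
nameserver 172.16.10.1
nameserver 192.168.10.1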

Creating Static Resolver IP Addresses

As mentioned earlier, we do not want to constantly change resolver IP addresses. Google uses the easy-to-remember "8.8.8.8" IPv4 address as a public resolver. If it were to change, numerous systems would break. For similar reasons, we want our internal resolvers to have static IP addresses. But how to get a static private IP address for a resolver EC2 instance running inside an AWS VPC? You cannot use an EIP (Elastic IP) since EIPs are public, not private, IPs. Our solution was to create an ENI (Elastic Network Interface) and “hard-code” a private IP address for it.

Something to note is that, when you create a VPC, Amazon takes the parent subnet's lowest IP addresses for internal use. For example, if your VPC's parent subnet is 10.0.0.0/24, you can further divide that into the following:

  • 10.0.0.0/26 (Private subnet for Availability Zone A)
  • 10.0.0.64/26 (Private subnet for Availability Zone B)
  • 10.0.0.128/26 (Private subnet for Availability Zone C)
  • 10.0.0.192/26 (Public subnet for Availability Zone A)

Amazon will then take the lowest addresses (10.0.0.0-7 in this example) for its internal use; in particular, the default resolver it sets up for your VPC lives in that reserved space, at the VPC's base address plus two (10.0.0.2 here).

We decided to carve up the VPC's parent subnet in such a way as to reserve and isolate the lower part. For example:

  • 10.0.0.0/22 - Parent subnet for VPC (1x /22 = 4x /24s)
  • 10.0.0.0/24 - Reserved for Amazon use
  • 10.0.1.0/24 - Private subnet01 for Availability Zone A
  • 10.0.2.0/24 - Private subnet02 for Availability Zone B
  • 10.0.3.0/24 - Public subnet01 for Availability Zone A

We created some ENIs in private subnet01 and private subnet02, then hard-coded private IP addresses onto them. By doing this, we reserved a number of static private IP addresses we can use without worrying about random EC2 instances grabbing them.

Cloudformation Snippet to Hard-Code Static IP Addresses



"Mappings" : {
 "SandboxSubnet1" : {
         "z11" : { "ip" : "10.0.1.1" },
         "z12" : { "ip" : "10.0.1.2" },
         "z13" : { "ip" : "10.0.1.3" },
         "z14" : { "ip" : "10.0.1.4" }
"SandboxSubnet2" : {
         "z21" : { "ip" : "10.0.2.1" },
         "z22" : { "ip" : "10.0.2.2" },
         "z23" : { "ip" : "10.0.2.3" },
         "z24" : { "ip" : "10.0.2.4" }
"Resources" : {
 "SandboxENIz11" : {
         "Type" : "AWS::EC2::NetworkInterface",
         "Properties" : {
            "Description" : "Sandbox reserved ENI on us-west-2a for future admin use",
            "GroupSet" : [
                        { "Fn::FindInMap" : [ "usw2secgroups", "sandbox-default", "default"]}
                  ],
            "PrivateIpAddress" : { "Fn::FindInMap" : [ "SandboxSubnet1", "z11", "ip"]},
            "SubnetId" : { "Fn::FindInMap" : [ "usw2subnets", "sandbox-private", "z1"]},
            "Tags" : [
               { "Key" : "Name", "Value" : "bind-eni-01" },
               { "Key" : "Email", "Value" : "email@domain.com" }
               ]
            },
            "DeletionPolicy" : "Retain"
         },
 "SandboxENIz21" : {
      "Type" : "AWS::EC2::NetworkInterface",
      "Properties" : {
        "Description" : "Sandbox reserved ENI on us-west-2b for future admin use",
        "GroupSet" : [
                  { "Fn::FindInMap" : [ "usw2secgroups", "sandbox-default", "default"]}
               ],
        "PrivateIpAddress" : { "Fn::FindInMap" : [ "SandboxSubnet2", "z21", "ip"]},
        "SubnetId" : { "Fn::FindInMap" : [ "usw2subnets", "sandbox-private", "z2"]},
        "Tags" : [
          { "Key" : "Name", "Value" : "bind-eni-02" },
          { "Key" : "Email", "Value" : "email@domain.com" }
          ]
      },
      "DeletionPolicy" : "Retain"
    },
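
Once the stack is up, the reserved ENIs can be verified with the AWS CLI; this is the same lookup the boot script below performs:

aws ec2 describe-network-interfaces \
  --filters Name=tag:Name,Values=bind-eni-01 \
  --query 'NetworkInterfaces[0].[NetworkInterfaceId,PrivateIpAddress,Status]' \
  --output text --region us-west-2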


Preventing EC2 Failure

To recover automatically from EC2 instance failure, we took the following steps:

  • Created an autoscaling group of Min 1 / Max 1 to ensure that if the EC2 instance died it would come right back (a minimal Cloudformation sketch follows this list).
  • Wrote a script that runs on an EC2 instance after it boots. The script looks for the appropriate ENI/private IP and attaches to it. If the EC2 instance dies and comes back, it runs the script again and reattaches to the same ENI/private IP.
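
Here is what that autoscaling group looks like as a Cloudformation sketch. This is illustrative, not our exact template: the resource name and the referenced launch configuration (BindSlaveLaunchConfig) are hypothetical, and the subnet lookup reuses the mappings from the ENI snippet above.

"BindSlaveASG" : {
  "Type" : "AWS::AutoScaling::AutoScalingGroup",
  "Properties" : {
    "MinSize" : "1",
    "MaxSize" : "1",
    "LaunchConfigurationName" : { "Ref" : "BindSlaveLaunchConfig" },
    "VPCZoneIdentifier" : [
      { "Fn::FindInMap" : [ "usw2subnets", "sandbox-private", "z1" ] }
    ],
    "Tags" : [
      { "Key" : "Name", "Value" : "bind-slave-01", "PropagateAtLaunch" : "true" }
    ]
  }
},
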
Bash Script Run at EC2 Bootup

#!/bin/bash

# Get the AWS Zone to extract the AWS region
awsZone=$(ec2metadata --availability-zone)

# The AWS region is the AWS zone without the last character
awsRegion=${awsZone::-1}

# Get the instance ID
instanceId=$(ec2metadata --instance-id)

# Get the instance name
instanceName=$(aws ec2 describe-tags --filters "Name=resource-id,Values=$instanceId" "Name=key,Values=Name" --region $awsRegion --query 'Tags[0].[Value]' --output text)

# Get the ENI name using the instance name tag
eniName="$instanceName-eni"

# Get the instance ENI id
eniId=$(aws ec2 describe-network-interfaces --filters Name=tag:Name,Values=$eniName --region $awsRegion --query 'NetworkInterfaces[0].[NetworkInterfaceId]' --output text)

# Get the ENI IP
eniIp=$(aws ec2 describe-network-interfaces --filters Name=tag:Name,Values=$eniName --region $awsRegion --query 'NetworkInterfaces[0].[PrivateIpAddress]' --output text)

# Get the gateway
gateway=$(route -n |grep 'eth0' | awk '{print $2;}' | sed -n 1p)

# Get the netmask
netmask=$(ifconfig eth0 | grep 'Mask' | awk '{ print $4;}' | cut -c6-)

# Wait for the ENI status to be available
try=0
while [ "$(aws ec2 describe-network-interfaces --network-interface-ids $eniId --query 'NetworkInterfaces[0].[Status]' --output text --region $awsRegion)" != "available" ]; do
  echo "Waiting for ENI to be available: try $try ..."
  if [ $try -eq 60 ] ; then
    echo "Failed to attach ENI $eniId:$eniName on $instanceId:$instanceName after $try tries"
    exit 1
  fi
  try=$((try + 1))
  sleep 5
done
# Attach the network interface
aws ec2 attach-network-interface --network-interface-id $eniId --instance-id $instanceId --device-index 1 --region $awsRegion
if [ $? -eq 0 ]
then
 try=0
 # Wait for ENI to attach and create the eth1 directory
 while [ ! -d /sys/class/net/eth1 ] ; do
   echo "Waiting for ENI to attach: try $try ..."
   try=$((try + 1))
   if [ $try -eq 10 ] ; then
     echo "Failed to attach ENI $eniId:$eniName on $instanceId:$instanceName after $try tries"
     exit 1
   fi
   sleep 5
 done
 echo "Succeeded to attach ENI $eniId:$eniName on $instanceId:$instanceName on try $try"
else
 echo "Failed to attach ENI $eniId:$eniName on $instanceId:$instanceName using AWS CLI"
 exit 1
fi
# Add the eth1 route table (append; do not overwrite existing tables)
grep -q 'eth1_rt' /etc/iproute2/rt_tables || echo "200 eth1_rt" >> /etc/iproute2/rt_tables

# Create the eth1 config file (unquoted heredoc so the variables below expand)
cat <<EOT > /etc/network/interfaces.d/eth1.cfg

# The secondary network interface
auto eth1
iface eth1 inet static
 address $eniIp
 netmask $netmask
 post-up ip route add default via $gateway dev eth1 table eth1_rt
 post-up ip rule add from $eniIp/32 table eth1_rt
 post-up ip rule add to $eniIp/32 table eth1_rt
 post-up ip route flush cache
EOT

# Wait 5 more seconds before bringing up the interface (to reduce potential race conditions)
sleep 5

# bring up eth1 interface (verbose mode)
ifup -v eth1
if [ $? -eq 0 ]
then
 echo "Succeeded in bringing up eth1 ($eniId:$eniName on $instanceId:$instanceName)"
 exit 0
else
 echo "Failed to bring up up eth1 ($eniId:$eniName on $instanceId:$instanceName)"
 exit 1
fi
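
We run this script once at boot. A simple way to wire that up, assuming the script is baked into the AMI (the path and filename here are hypothetical), is EC2 user data:

#!/bin/bash
# EC2 user data: attach the reserved ENI at first boot and keep a log
/usr/local/bin/attach-bind-eni.sh >> /var/log/attach-bind-eni.log 2>&1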


One problem remains. We can now launch EC2 instances with static private IP addresses, and the autoscaling groups of Min 1 / Max 1 ensure they come back if they die. However, the instances are not yet resolvers: no software has been installed on them for that purpose. To fix that, we installed Chef, BIND9, the AWS CLI (for S3 access), and a cron job.

Next, we wrote a BIND9 cookbook wrapper and uploaded it to an S3 bucket. When an EC2 instance boots, it downloads the cookbook from the bucket. A cron job then checks the S3 bucket at regular intervals and converges the cookbook. If we want to change something, we upload a new cookbook file to S3. Now we have a resolver with a static private IP address. If you want to check out the Chef cookbook, here it is:
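
The converged result is, in essence, a named.conf along these lines. This is a minimal sketch rather than the cookbook's actual output: the zone name and on-premise master IPs are reused from the DHCPOptions example above, and the query ACL values are illustrative.

// Minimal sketch of a BIND slave resolver config
options {
    directory "/var/cache/bind";
    recursion yes;                 // act as a full resolver for VPC clients
    allow-query { 10.0.0.0/8; 172.16.0.0/12; 192.168.0.0/16; };
};

// Secondary (slave) copy of the internal zone, transferred from the on-premise masters
zone "some.thing.domain" {
    type slave;
    file "db.some.thing.domain";
    masters { 172.16.10.1; 192.168.10.1; };
};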

https://github.com/gracenote/gnops_bind9

Finally, we put our resolver's static private IP address in the AWS VPC DHCPOptions mentioned earlier. We not only put the IP address of the resolver we built in the VPC, we also added the addresses of resolvers we have on-premise since things in our VPC can talk to things on our non-AWS internal network.
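
With that in place, a quick sanity check from any EC2 instance in the VPC confirms both internal and external resolution (the internal hostname here is hypothetical):

# Internal name, answered by the BIND slaves
dig +short somehost.some.thing.domain

# External name, resolved recursively through the same resolvers
dig +short gracenote.com

# Or query the in-VPC BIND slave directly at its static private IP
dig +short @10.0.1.1 gracenote.com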

So how do we set hostnames in a dynamic environment? How do they talk to each other? That is subject matter for another Gracenote Tech Blog post coming soon. Thanks for reading.

by Justin Franks | November 29, 2016
