As you saw in our previous blog, “The Emergence of Containers” by Alan Conley, Infoblox has developed an IPAM plugin driver for Docker to bring our functionality to the container world. Container technology is gaining traction because it speeds up the development and deployment of new applications. Container platforms such as Docker provide some level of IPAM support, but they don’t solve the problem for more complex multi-app deployments.
Though Docker provides IP address management for containers running on a single host, it doesn’t have the visibility to manage IP addresses across the multiple hosts that make up a container cluster. Real-world organizations rarely have container-only environments; they typically also have VMs, physical hosts, and so on. This creates the need to manage IP addresses holistically across the organization, and that is the IPAM problem. This post is the first in a series that will go into more detail about how Docker networking works and what the needs are as they pertain to our IPAM functionality. Docker enables third-party vendors to load network drivers, including IPAM drivers, to support more sophisticated solutions.
We will start with a brief overview of the Docker Container Network Model (CNM); more details are available here:
https://docs.docker.com/engine/userguide/networking/dockernetworks/
A “network” is a named object within the Docker network model that provides an abstraction for communication between containers. In practice it represents a layer 2 broadcast domain with a layer 3 subnet. Each container attaches to a network at an “endpoint.” Containers need networks to communicate; you can create multiple networks and add a container to more than one of them. Containers can communicate within a network but not across networks, unless you add some sort of external router, which Docker does not supply today. A container attached to two networks can communicate with member containers in either network. As you build a complex application with containers that span multiple networks, you need to track all the IP addresses and subnets being created across multiple hosts, both within your data center and outside it.
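As a quick illustration, the commands below are a minimal sketch of creating networks and attaching a container to more than one of them; the network, container, and image names are just placeholders:

```
# Create two user-defined networks (the bridge driver is the default)
docker network create frontend
docker network create backend

# Start a container on "frontend", then also connect it to "backend"
docker run -d --name web --net=frontend nginx
docker network connect backend web

# "web" can now reach containers on either network; a container that is
# only on "frontend" still cannot reach one that is only on "backend"
```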
Infoblox provides the glue that makes this problem solvable at scale, offering a manageable approach to the creation and deletion of these IP addresses and subnets.
When you launch a container, you can tell Docker to provide it with one of several different types of networking (example commands follow the list):
- “None” – In this case, the container launches and only has a loopback interface defined.
- “Host” – In this case, the container will have the same networking as the host. That is, the container networking will not be isolated from the host at all.
- “Default” or “Bridge” – In this case, the container launches and has a loopback interface, plus an interface that is attached to an internal bridge on the host. If you do not specify any specific networking, then your container will run in this mode. All such containers are attached to the same bridge and can intercommunicate. The bridge is attached to the outside network and provides external access via NAT.
- “Shared network stack mode” – In this case, two (or more) containers reuse the same network namespace. This allows a set of related containers to use the same networking, while isolating them from the host and other containers on that host.
- “User Defined” – In this case, you may create different networks using different drivers, and attach containers to them as needed. For example, instead of using the default bridge network, you can create a network “foo” using the “bridge” driver. This creates a new bridge on the host, and any container started with --net=foo is attached to that bridge instead of the default bridge. As of this writing there are four Docker-provided drivers for user-defined networks:
- Bridge
- Overlay
- Macvlan (starting with Docker 1.12)
- Ipvlan (experimental)
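To make these modes concrete, here is a rough sketch of how each one is selected when launching a container; the image and container names are placeholders:

```
# None: only a loopback interface inside the container
docker run --rm --net=none alpine ip addr

# Host: share the host's network stack, no isolation from the host
docker run --rm --net=host alpine ip addr

# Default bridge: used when no --net option is given
docker run -d --name app1 nginx

# Shared network stack: reuse app1's network namespace
docker run --rm --net=container:app1 alpine ip addr

# User defined: create a network "foo" with the bridge driver and attach to it
docker network create -d bridge foo
docker run -d --name app2 --net=foo nginx
```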
Figure 1 above shows the “bridge” driver, with the network “foo” as described earlier. Note that each of these bridges gets an IP address, which serves as the default gateway for the attached containers. The host then uses iptables, represented by the firewall icon, with network address translation (NAT) to provide outbound access from the bridge subnet to the outside world. However, routing is not provided between the container subnets, so containers on different bridges cannot intercommunicate.
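If you want to check what was allocated for such a bridge, the following commands show the subnet and gateway Docker assigned and the NAT rules the host uses for outbound traffic; the exact output will vary by environment:

```
# Show the subnet and gateway assigned to the "foo" bridge network
docker network inspect foo

# List the NAT rules; Docker adds a MASQUERADE rule for the bridge subnet
sudo iptables -t nat -S POSTROUTING
```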
The overlay driver uses VXLAN to make the network span all hosts in the cluster. In this case, each container is also attached to a special “gateway” bridge that is used to provide external access via NAT. For example, if we create a “foobar” overlay network, it will look something like this:
A bridge network is useful in cases where you want to run a relatively small network on a single host. You can create significantly larger networks, spanning multiple hosts, by using an overlay network instead. However, the overlay driver requires the deployment of a distributed key/value store (such as etcd or Consul), which substantially increases the complexity of the deployment.
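As a sketch of what that looks like with the pre-swarm-mode overlay setup (the Consul address and interface name below are placeholders), each Docker daemon is pointed at the shared key/value store before the overlay network is created:

```
# On every host, start the daemon with a shared key/value store
dockerd --cluster-store=consul://consul.example.com:8500 \
        --cluster-advertise=eth0:2376

# On any host, create the overlay network and attach a container to it
docker network create -d overlay foobar
docker run -d --name svc1 --net=foobar nginx
```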
As of the writing of this blog, other user-defined network drivers are being developed – in particular, the Macvlan driver has just been added in Docker 1.12, and there is an experimental Ipvlan driver. These drivers show a lot of promise for simplifying container networking and improving its scalability.
The Macvlan driver, shown in Figure 3, enables you to give your containers IP addresses from the external network by creating an L2 broadcast domain that is shared with a host interface. With this technology the containers use IPs that are routable outside of the host. This makes integration with the enterprise IPAM system – either through our IPAM driver or via the experimental DHCP driver – absolutely critical in order to avoid duplicate IPs or subnets.
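A Macvlan network is created by tying it to a parent host interface and the external subnet it lives on. For example (the subnet, gateway, and interface values below are placeholders that must match your physical network):

```
# Create a macvlan network that shares eth0's L2 broadcast domain
docker network create -d macvlan \
  --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
  -o parent=eth0 pub_net

# Containers on this network get IPs that are routable outside the host
docker run -d --name web --net=pub_net nginx
```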
The Ipvlan driver provides a similar function, but can also run in a mode in which it acts as a virtual router within the Docker host. You can then advertise internal subnets to upstream routers, allowing the rest of the network to route into those subnets. (Note, however, that the route advertisement is not part of the driver and must be done through other means.) For a detailed discussion of these drivers, see the Docker GitHub repository: https://github.com/docker/docker/blob/master/experimental/vlan-networks.md.
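For reference, a sketch of the experimental Ipvlan driver in L3 mode looks like the following (the subnets and parent interface are placeholders); the host routes between the internal subnets, but advertising them upstream is still up to you:

```
# Create an ipvlan network in L3 mode with two internal subnets
docker network create -d ipvlan \
  --subnet=10.1.100.0/24 --subnet=10.1.200.0/24 \
  -o parent=eth0 -o ipvlan_mode=l3 ipnet
```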
Going back to the user-defined bridge network, you can achieve a similar result to Macvlan by adding an external interface (physical, or a virtual dot1Q VLAN interface) to the Docker bridge, as shown below. This creates a shared L2 broadcast domain between the Docker bridge network and the physical network attached to the external interface. It requires manual intervention on the host, however, as the bridge driver cannot do this for you automatically. This diagram also illustrates how the IPAM driver fits into the network architecture. If you used the default Docker IPAM driver, each of the bridges on the hosts would have overlapping IP addresses; that is, each would be considered as owning the entire 10.0.0.0/24 subnet. This would make it impossible to bridge them with the physical network. By utilizing the centralized IPAM from Infoblox, the hosts share the subnet without producing any IP overlap.
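A rough sketch of this setup, assuming the Infoblox IPAM plugin is registered under the driver name “infoblox” (check your plugin documentation for the exact name), might look like this on each host:

```
# Create the bridge network, delegating address management to Infoblox
docker network create -d bridge \
  --ipam-driver=infoblox --subnet=10.0.0.0/24 blue

# Manually add the external interface to the bridge Docker created;
# user-defined bridges are named br-<first 12 characters of the network ID>
BRIDGE=br-$(docker network inspect -f '{{.Id}}' blue | cut -c1-12)
sudo brctl addif "$BRIDGE" eth1
```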
One other aspect of Docker networking we haven’t covered is the concept of global and local address spaces. The network driver (not the IPAM driver) specifies which address space to use: each network driver declares itself as using either local or global IP addresses. The bridge network driver, for example, declares itself as local, meaning the IP addresses only have meaning within the host. The overlay driver declares itself as global, meaning the IP addresses have meaning across the cluster of hosts. In the example above, you can configure the Infoblox driver to use the same address space (called a network view in Infoblox terminology) for both global and local addresses, avoiding any IP conflict or overlap.
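Docker lets you pass driver-specific options to the IPAM driver with --ipam-opt, so selecting a network view might look something like the following hypothetical example; the option name “network-view” is illustrative only, not a documented flag, so consult the driver documentation for the real one:

```
# Hypothetical: ask the Infoblox IPAM driver to allocate from a given
# network view (the "network-view" option name is illustrative only)
docker network create -d bridge \
  --ipam-driver=infoblox \
  --ipam-opt network-view=default \
  --subnet=10.0.0.0/24 green
```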
In the next blog we will get into more detail on the actual commands and operation of the Infoblox Libnetwork IPAM driver, covering the orderly allocation of network address space and IP addresses for both bridge and overlay user-defined networks.