Category Archives: Docker

General networking-related stuff about Docker

Docker, IPv6 and --net="host"

As you recall from the last few blog posts, I went through basic IPv6 deployment for Docker Engine, Docker Hub, Docker Registry and Docker Swarm.  All of those configurations used the default Docker networking with the Docker-provided bridge layout.

Over the past few months, I have met with several customers who don't use the Docker bridge setup at all. They use the Docker run option --net="host" to do what they call "native" networking.  Simply put, this flag has containers run using the networking configuration of the underlying Linux host.  The advantage of this is that it is brain-dead simple to understand, troubleshoot and use.  The one drawback is that you can very easily have port conflicts: if I run a container that listens natively on port 80 of the Linux host and then run another container that needs that same port, there is a conflict because only one listener can be active on that port at a time.

All of the customers I have met with have no need to run containers on the same host that use the exact same port, so this is a magical option for them.

IPv6 works just as it should in this networking scenario.  Let’s take a look at an example setup.

In the ‘ip a’ output below, you can see that ‘docker-v6-1’ has IPv6 addressing on the eth0 interface (See the Docker Engine post on enabling IPv6):

root@docker-v6-1:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:f3:f8:48 brd ff:ff:ff:ff:ff:ff
    inet 192.168.80.200/24 brd 192.168.80.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fd15:4ba5:5a2b:1009:e91e:221:a4a0:2223/64 scope global temporary dynamic
       valid_lft 83957sec preferred_lft 11957sec
    inet6 fd15:4ba5:5a2b:1009:20c:29ff:fef3:f848/64 scope global dynamic
       valid_lft 83957sec preferred_lft 11957sec
    inet6 fe80::20c:29ff:fef3:f848/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 00:0c:29:f3:f8:52 brd ff:ff:ff:ff:ff:ff
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:93:33:cc:66 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fd15:4ba5:5a2b:100a::1/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::42:93ff:fe33:cc66/64 scope link
       valid_lft forever preferred_lft forever
    inet6 fe80::1/64 scope link
       valid_lft forever preferred_lft forever

As a test, I will run an NGINX container using the --net="host" option. Before I run the container, I disable the "ipv6only" functionality in the NGINX default.conf file so that I have dual-stack support.

Create/edit an NGINX default.conf file with the following setting changed:

listen       [::]:80 ipv6only=off;
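For reference, here is roughly what the full default.conf looks like with that change in place. This is a sketch based on the stock NGINX default.conf, trimmed to the relevant bits:

server {
    listen       [::]:80 ipv6only=off;
    server_name  localhost;

    location / {
        root   /usr/share/nginx/html;
        index  index.html index.htm;
    }
}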

Now, run the container with the --net="host" option set and bind mount a volume to where that default.conf file is located on the Docker host:

root@docker-v6-1:~# docker run -itd --net="host" -v ~/default.conf:/etc/nginx/conf.d/default.conf nginx

With --net="host", the new container uses the same IPv4 and IPv6 addresses as the host and listens on port 80 (the default in the NGINX setup):

root@docker-v6-1:~# netstat -nlp | grep nginx
tcp6       0      0 :::80                   :::*                    LISTEN      2554/nginx: master

Test accessing the NGINX default page over IPv4 using the 192.168.80.200 address (reference eth0 above):

root@docker-v6-1:~# wget -O - http://192.168.80.200
--2016-04-21 11:00:50--  http://192.168.80.200/
Connecting to 192.168.80.200:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 612 
Saving to: ‘STDOUT’

 0% [                                                                                                 ] 0           --.-K/s              

Welcome to nginx!

.....[output truncated for clarity]

Test accessing the NGINX default page over IPv6 using the fd15:4ba5:5a2b:1009:20c:29ff:fef3:f848 address (reference eth0 above):

root@docker-v6-1:~# wget -O - http://[fd15:4ba5:5a2b:1009:20c:29ff:fef3:f848]
--2016-04-21 11:01:13--  http://[fd15:4ba5:5a2b:1009:20c:29ff:fef3:f848]/
Connecting to [fd15:4ba5:5a2b:1009:20c:29ff:fef3:f848]:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 612 
Saving to: ‘STDOUT’

 0% [                                                                                                 ] 0           --.-K/s              

Welcome to nginx!

.....[output truncated for clarity]

It works!

Remember that this is cool and all, but watch out for port conflicts between containers.  Shown below is an example of what you will see if you run two containers on the same host with --net="host" set and both use the same port.  One or more of the containers will exit and likely leave a message in the log that looks like this:

root@docker-v6-1:~# docker logs b47aa56cc822
2016/06/17 17:49:34 [emerg] 1#1: bind() to [::]:80 failed (98: Address already in use)
nginx: [emerg] bind() to [::]:80 failed (98: Address already in use)
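If you genuinely need two host-networked containers on the same box, the workaround is to have them listen on different ports. As a sketch, a second config file (I'll call it default-8080.conf; that name is mine, not a stock file) could change the listen line and be bind mounted the same way as before:

listen       [::]:8080 ipv6only=off;

root@docker-v6-1:~# docker run -itd --net="host" -v ~/default-8080.conf:/etc/nginx/conf.d/default.conf nginx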

Docker Swarm and etcd with IPv6

Hey gang!

It has been a while since I posted here.  Part of that is due to pure laziness and business travel nonsense, and the other part is that I was waiting on a couple of bugs to get resolved for Docker Swarm.  Well, the bugs have been resolved and I have finally gotten this whole thing to work.

As you recall from the last few blog posts, I went through basic IPv6 deployment for Docker Engine, Docker Hub and Docker Registry.  In this post I will talk about using IPv6 with Docker Swarm.

There is a bunch of content out there to teach you Swarm and how to get it running. Here is a link to the main Docker Swarm docs: https://docs.docker.com/swarm/.  What I care about is doing basic stuff with Docker Swarm over IPv6.

As with my previous Docker posts, I am not teaching you what the various Docker tools are, and the same goes for this post with Docker Swarm.  Normally, I don't even tell you how to install Docker Engine.  This time I will give you some basic guidance to get it running, as you will need to grab the latest Docker Swarm release candidate that has the IPv6 fixes merged.  There are a few resolved issues that allowed me to get this thing running:

One issue was with the command “docker -H tcp://[ipv6_address]”: https://github.com/docker/docker/issues/18879 and that was resolved in https://github.com/docker/docker/pull/16950. This fix was in Docker Engine 1.10.

Another issue was with Swarm discovery and the use of the 'swarm join' command: https://github.com/docker/swarm/issues/1906 and that was resolved in https://github.com/docker/docker/pull/20842. This fix is in Docker Swarm 1.2.0-rc1 and above.

Like most things with IPv6, you can reference a host via a DNS name as long as it has an AAAA record or a local /etc/hosts entry.  In this post I will be using IPv6 literals so you see the real IPv6 addresses at work.  NOTE: As with web browsers or pretty much anything where you need to call an IPv6 address along with a port number, you must put the IPv6 address in [] brackets so that the address can be differentiated from the port number.

In this post I am expanding on my earlier Docker Engine post where I added the --ipv6 flag to the DOCKER_OPTS line in the /etc/default/docker config file. In this post I am using the following line in the /etc/default/docker file:

DOCKER_OPTS="-H=tcp://0.0.0.0:2375 -H=unix:///var/run/docker.sock --ipv6 --fixed-cidr-v6=2001:db8:cafe:1::/64"

Note that each host running Docker will have a configuration similar to this; only the IPv6 prefix will be different (see the diagram and interface output below).

Let’s get started.

Below is a diagram that shows the high-level layout of the lab setup I am testing with.  I have four VMs running Ubuntu, deployed via Vagrant.  The four VMs are assigned IPv6 addresses out of the 2001:db8:cafe:14::/64 network. The "etcd-01" VM runs etcd (a distributed key-value store), "manager-01" serves as the Docker Swarm manager, and "node-01" and "node-02" are Docker Swarm nodes that run the containerized workloads.  As I stated above, the Docker daemon needs to be told which IPv6 prefix to use for containers; the "--ipv6 --fixed-cidr-v6" portion of the "DOCKER_OPTS" line I referenced above is what does that. You can see on manager-01, node-01 and node-02 that under the "docker0" bridge there are prefixes specific to each host: manager-01 = 2001:db8:cafe:1::/64, node-01 = 2001:db8:cafe:2::/64 and node-02 = 2001:db8:cafe:3::/64.  Containers that launch on these nodes will get an IPv6 address out of the corresponding prefix on that specific Docker bridge.  Again, I spell all of this out in the Docker Engine post referenced earlier.

[Diagram: swarm-v6-topo]

Note: Vagrant is limiting in that when you add a new "config.vm.network" entry, you have no option to add it to an existing NIC. Each "config.vm.network" line creates a new NIC, so I didn't have the option to purely dual-stack a single interface. Instead, it created an eth2 with the IPv6 address I assigned.  No biggie, but it does look odd when you look at the interface output (example shown below).

vagrant@node-01:~$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:08:9d:5f brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe08:9d5f/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:27:f1:28 brd ff:ff:ff:ff:ff:ff
    inet 192.168.20.12/24 brd 192.168.20.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe27:f128/64 scope link
       valid_lft forever preferred_lft forever
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:b3:7a:d2 brd ff:ff:ff:ff:ff:ff
    inet6 2001:db8:cafe:14::c/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:feb3:7ad2/64 scope link
       valid_lft forever preferred_lft forever
5: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:fa:77:dd:e0 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 2001:db8:cafe:2::1/64 scope global tentative
       valid_lft forever preferred_lft forever
    inet6 fe80::1/64 scope link tentative
       valid_lft forever preferred_lft forever

I am not recommending Vagrant or covering how to use it to deploy this, as you may want to deploy Docker Swarm on another type of setup, but you can grab my super basic Vagrantfile: https://github.com/shmcfarl/swarm-etcd. I am using the VirtualBox provider and a few Vagrant plugins (vagrant-hostmanager and vagrant-vbguest), so make sure you have that stuff squared away before you use my Vagrantfile.

When IPv6 is enabled for the Docker daemon, it adds a route for the defined prefix and enables the appropriate forwarding settings (see: https://docs.docker.com/engine/userguide/networking/default_network/ipv6/). However, you do have to ensure that routing is correctly set up for reaching each Docker IPv6 prefix on all nodes, or you will end up with broken connectivity to/from containers running on different nodes.  The configurations below statically set the IPv6 routes on each node to reach the appropriate Docker IPv6 prefixes:

There are three routes added: one for each Docker IPv6 prefix on manager-01, node-01 and node-02 (reference the diagram above).

etcd-01:

vagrant@etcd-01:~$ sudo ip -6 route add 2001:db8:cafe:1::/64 via 2001:db8:cafe:14::b
vagrant@etcd-01:~$ sudo ip -6 route add 2001:db8:cafe:2::/64 via 2001:db8:cafe:14::c
vagrant@etcd-01:~$ sudo ip -6 route add 2001:db8:cafe:3::/64 via 2001:db8:cafe:14::d

Here is the IPv6 route table for etcd-01:

vagrant@etcd-01:~$ sudo ip -6 route
2001:db8:cafe:1::/64 via 2001:db8:cafe:14::b dev eth2  metric 1024
2001:db8:cafe:2::/64 via 2001:db8:cafe:14::c dev eth2  metric 1024
2001:db8:cafe:3::/64 via 2001:db8:cafe:14::d dev eth2  metric 1024
2001:db8:cafe:14::/64 dev eth2  proto kernel  metric 256
fe80::/64 dev eth0  proto kernel  metric 256
fe80::/64 dev eth1  proto kernel  metric 256
fe80::/64 dev eth2  proto kernel  metric 256

manager-01 (Note: only routes to node-01 and node-02 are needed, as the etcd-01 service is not running in a container in this example):

vagrant@manager-01:~$ sudo ip -6 route add 2001:db8:cafe:2::/64 via 2001:db8:cafe:14::c
vagrant@manager-01:~$ sudo ip -6 route add 2001:db8:cafe:3::/64 via 2001:db8:cafe:14::d

Here is the IPv6 route table for manager-01:

vagrant@manager-01:~$ sudo ip -6 route
2001:db8:cafe:2::/64 via 2001:db8:cafe:14::c dev eth2  metric 1024
2001:db8:cafe:3::/64 via 2001:db8:cafe:14::d dev eth2  metric 1024
2001:db8:cafe:14::/64 dev eth2  proto kernel  metric 256
fe80::/64 dev eth0  proto kernel  metric 256
fe80::/64 dev eth1  proto kernel  metric 256
fe80::/64 dev eth2  proto kernel  metric 256

node-01:

vagrant@node-01:~$ sudo ip -6 route add 2001:db8:cafe:1::/64 via 2001:db8:cafe:14::b
vagrant@node-01:~$ sudo ip -6 route add 2001:db8:cafe:3::/64 via 2001:db8:cafe:14::d

node-02:

vagrant@node-02:~$ sudo ip -6 route add 2001:db8:cafe:1::/64 via 2001:db8:cafe:14::b
vagrant@node-02:~$ sudo ip -6 route add 2001:db8:cafe:2::/64 via 2001:db8:cafe:14::c
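One caveat: routes added with 'ip -6 route add' do not survive a reboot. A minimal sketch of one way to persist them on Ubuntu 14.04 is to hang post-up lines off the eth2 stanza in /etc/network/interfaces (addresses shown for etcd-01; this assumes you manage that file yourself rather than letting Vagrant rewrite it):

iface eth2 inet6 static
    address 2001:db8:cafe:14::a
    netmask 64
    post-up ip -6 route add 2001:db8:cafe:1::/64 via 2001:db8:cafe:14::b
    post-up ip -6 route add 2001:db8:cafe:2::/64 via 2001:db8:cafe:14::c
    post-up ip -6 route add 2001:db8:cafe:3::/64 via 2001:db8:cafe:14::d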

Test reachability between the nodes (e.g., node-01 can reach etcd-01 at 2001:db8:cafe:14::a):

vagrant@node-01:~$ ping6 2001:db8:cafe:14::a
PING 2001:db8:cafe:14::a(2001:db8:cafe:14::a) 56 data bytes
64 bytes from 2001:db8:cafe:14::a: icmp_seq=1 ttl=64 time=0.638 ms
64 bytes from 2001:db8:cafe:14::a: icmp_seq=2 ttl=64 time=0.421 ms
64 bytes from 2001:db8:cafe:14::a: icmp_seq=3 ttl=64 time=0.290 ms

The basic setup of each node is complete, and it is time to set up etcd and Docker Swarm.

Disclaimer: I am not recommending a specific approach for deploying etcd or Docker Swarm. Please check the documentation for each one of those for the recommended deployment options.

On the etcd-01 node, download and untar etcd:

curl -L  https://github.com/coreos/etcd/releases/download/v2.2.5/etcd-v2.2.5-linux-amd64.tar.gz -o etcd-v2.2.5-linux-amd64.tar.gz

tar xzvf etcd-v2.2.5-linux-amd64.tar.gz

rm etcd-v2.2.5-linux-amd64.tar.gz

cd etcd-v2.2.5-linux-amd64/

In the example below, I set a shell variable to the bracketed IPv6 address of the etcd-01 node and then run etcd on the console:

vagrant@etcd-01:~/etcd-v2.2.5-linux-amd64$ MY_IPv6="[2001:db8:cafe:14::a]"
vagrant@etcd-01:~/etcd-v2.2.5-linux-amd64$
vagrant@etcd-01:~/etcd-v2.2.5-linux-amd64$ ./etcd \
> -initial-advertise-peer-urls http://$MY_IPv6:2380 \
> -listen-peer-urls="http://0.0.0.0:2380,http://0.0.0.0:7001" \
> -listen-client-urls="http://0.0.0.0:2379,http://0.0.0.0:4001" \
> -advertise-client-urls="http://$MY_IPv6:2379" \
> -initial-cluster-token etcd-01 \
> -initial-cluster="default=http://$MY_IPv6:2380" \
> -initial-cluster-state new
2016-04-01 00:34:41.915304 I | etcdmain: etcd Version: 2.2.5
2016-04-01 00:34:41.915508 I | etcdmain: Git SHA: bc9ddf2
2016-04-01 00:34:41.915922 I | etcdmain: Go Version: go1.5.3
2016-04-01 00:34:41.916287 I | etcdmain: Go OS/Arch: linux/amd64
2016-04-01 00:34:41.916676 I | etcdmain: setting maximum number of CPUs to 1, total number of available CPUs is 1
2016-04-01 00:34:41.917671 W | etcdmain: no data-dir provided, using default data-dir ./default.etcd
2016-04-01 00:34:41.917858 N | etcdmain: the server is already initialized as member before, starting as etcd member...
2016-04-01 00:34:41.918340 I | etcdmain: listening for peers on http://0.0.0.0:2380
2016-04-01 00:34:41.918809 I | etcdmain: listening for peers on http://0.0.0.0:7001
2016-04-01 00:34:41.919324 I | etcdmain: listening for client requests on http://0.0.0.0:2379
2016-04-01 00:34:41.919644 I | etcdmain: listening for client requests on http://0.0.0.0:4001
2016-04-01 00:34:41.920224 I | etcdserver: name = default
2016-04-01 00:34:41.920416 I | etcdserver: data dir = default.etcd
2016-04-01 00:34:41.921540 I | etcdserver: member dir = default.etcd/member
2016-04-01 00:34:41.921949 I | etcdserver: heartbeat = 100ms
2016-04-01 00:34:41.922321 I | etcdserver: election = 1000ms
2016-04-01 00:34:41.922612 I | etcdserver: snapshot count = 10000
2016-04-01 00:34:41.923036 I | etcdserver: advertise client URLs = http://[2001:db8:cafe:14::a]:2379
2016-04-01 00:34:41.923664 I | etcdserver: restarting member d68162a449565404 in cluster 89b5c84d35f7a1e at commit index 10
2016-04-01 00:34:41.923919 I | raft: d68162a449565404 became follower at term 2
2016-04-01 00:34:41.924187 I | raft: newRaft d68162a449565404 [peers: [], term: 2, commit: 10, applied: 0, lastindex: 10, lastterm: 2]
2016-04-01 00:34:41.924780 I | etcdserver: starting server... [version: 2.2.5, cluster version: to_be_decided]
2016-04-01 00:34:41.926767 N | etcdserver: added local member d68162a449565404 [http://[2001:db8:cafe:14::a]:2380] to cluster 89b5c84d35f7a1e
2016-04-01 00:34:41.926885 N | etcdserver: set the initial cluster version to 2.2
2016-04-01 00:34:43.325015 I | raft: d68162a449565404 is starting a new election at term 2
2016-04-01 00:34:43.325357 I | raft: d68162a449565404 became candidate at term 3
2016-04-01 00:34:43.326087 I | raft: d68162a449565404 received vote from d68162a449565404 at term 3
2016-04-01 00:34:43.326594 I | raft: d68162a449565404 became leader at term 3
2016-04-01 00:34:43.327107 I | raft: raft.node: d68162a449565404 elected leader d68162a449565404 at term 3
2016-04-01 00:34:43.328579 I | etcdserver: published {Name:default ClientURLs:[http://[2001:db8:cafe:14::a]:2379]} to cluster 89b5c84d35f7a1e

Note: You can run etcd with the "-debug" flag as well, which is very helpful in seeing the GETs and PUTs from the swarm nodes.
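A quick sanity check before wiring up Swarm: hit etcd's version endpoint over the IPv6 literal from another node. The curl '-g' flag keeps curl from trying to glob the brackets:

vagrant@manager-01:~$ curl -g http://[2001:db8:cafe:14::a]:2379/version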

On the manager-01 node, fire up the Swarm manager. I am doing a 'docker run' and publishing a user-defined manager port (4000) on the host, which maps to the Docker Engine port (2375) inside the Swarm container. I am specifically calling for the Swarm 1.2.0-rc1 image that I referenced before, which has the latest IPv6 bug fixes. The manager role is launched in the container and it references etcd at etcd-01's IPv6 address and port (see "client URLs" in the etcd output above). (Note: I am running it without the '-d' flag so that it runs in the foreground):

vagrant@manager-01:~$ docker run -p 4000:2375 swarm:1.2.0-rc1 manage etcd://[2001:db8:cafe:14::a]:2379
time="2016-04-01T00:36:01Z" level=info msg="Initializing discovery without TLS"
time="2016-04-01T00:36:01Z" level=info msg="Listening for HTTP" addr=":2375" proto=tcp

On each Swarm node, kick off a "swarm join". Similar to the manager example above, a "docker run" is used to launch a container using the 1.2.0-rc1 Swarm image. The node does a "join" (participating in the discovery process), advertising its own IPv6 address, and references etcd at etcd-01's IPv6 address and port (see why testing reachability from the node AND its containers was required?) 😉
node-01:

vagrant@node-01:~$ docker run swarm:1.2.0-rc1 join --advertise=[2001:db8:cafe:14::c]:2375 etcd://[2001:db8:cafe:14::a]:2379
time="2016-04-01T00:36:36Z" level=info msg="Initializing discovery without TLS"
time="2016-04-01T00:36:36Z" level=info msg="Registering on the discovery service every 1m0s..." addr="[2001:db8:cafe:14::c]:2375" discovery="etcd://[2001:db8:cafe:14::a]:2379"

node-02:

vagrant@node-02:~$ docker run swarm:1.2.0-rc1 join --advertise=[2001:db8:cafe:14::d]:2375 etcd://[2001:db8:cafe:14::a]:2379
time="2016-04-01T00:36:57Z" level=info msg="Initializing discovery without TLS"
time="2016-04-01T00:36:57Z" level=info msg="Registering on the discovery service every 1m0s..." addr="[2001:db8:cafe:14::d]:2375" discovery="etcd://[2001:db8:cafe:14::a]:2379"

Back on the manager node, you will see messages like these:

time="2016-04-01T00:37:39Z" level=info msg="Registered Engine node-01 at [2001:db8:cafe:14::c]:2375"
time="2016-04-01T00:38:00Z" level=info msg="Registered Engine node-02 at [2001:db8:cafe:14::d]:2375"

SWEET!

Now, do some Docker-looking stuff. Take a look at the running Docker containers on the Swarm cluster by pointing Docker at the manager-01 IPv6 address and published port. The "docker ps -a" output shows that the swarm containers (running the join --advertise) are running on node-01 and node-02.

vagrant@node-01:~$ docker -H tcp://[2001:db8:cafe:14::b]:4000 ps -a
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
335952181ea4        swarm:1.2.0-rc1     "/swarm join --advert"   5 minutes ago       Up 5 minutes        2375/tcp            node-02/compassionate_wilson
057159f355b0        swarm:1.2.0-rc1     "/swarm join --advert"   5 minutes ago       Up 5 minutes        2375/tcp            node-01/adoring_mirzakhani
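Side note: typing '-H tcp://...' on every command gets old fast. The standard DOCKER_HOST environment variable should work just as well with the bracketed IPv6 literal (it goes through the same client-side parsing as the -H fix mentioned earlier), so something like this saves keystrokes:

vagrant@node-01:~$ export DOCKER_HOST=tcp://[2001:db8:cafe:14::b]:4000
vagrant@node-01:~$ docker ps -a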

“docker images” shows the swarm image:

vagrant@node-01:~$ docker -H tcp://[2001:db8:cafe:14::b]:4000 images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
swarm               1.2.0-rc1           2fe11064a124        8 days ago          18.68 MB

“docker info” shows basic info about the Swarm cluster to include the two nodes (node-01/node-02):

vagrant@node-01:~$ docker -H tcp://[2001:db8:cafe:14::b]:4000 info
Containers: 2
 Running: 2
 Paused: 0
 Stopped: 0
Images: 2
Server Version: swarm/1.2.0
Role: primary
Strategy: spread
Filters: health, port, dependency, affinity, constraint
Nodes: 2
 node-01: [2001:db8:cafe:14::c]:2375
  └ Status: Healthy
  └ Containers: 1
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 1.019 GiB
  └ Labels: executiondriver=, kernelversion=3.13.0-83-generic, operatingsystem=Ubuntu 14.04.4 LTS, storagedriver=aufs
  └ Error: (none)
  └ UpdatedAt: 2016-04-01T00:42:33Z
 node-02: [2001:db8:cafe:14::d]:2375
  └ Status: Healthy
  └ Containers: 1
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 1.019 GiB
  └ Labels: executiondriver=, kernelversion=3.13.0-83-generic, operatingsystem=Ubuntu 14.04.4 LTS, storagedriver=aufs
  └ Error: (none)
  └ UpdatedAt: 2016-04-01T00:42:47Z
Plugins:
 Volume:
 Network:
Kernel Version: 3.13.0-83-generic
Operating System: linux
Architecture: amd64
CPUs: 2
Total Memory: 2.038 GiB
Name: 39ed14412c1f
Docker Root Dir:
Debug Mode (client): false
Debug Mode (server): false
WARNING: No kernel memory limit support

Run another container:

vagrant@node-01:~$ docker -H tcp://[2001:db8:cafe:14::b]:4000 run -it ubuntu /bin/bash
root@61324b3d7117:/#

Check that the container shows up under “docker ps -a” and check which Swarm node it is running on:

vagrant@node-01:~$ docker -H tcp://[2001:db8:cafe:14::b]:4000 ps -a
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
61324b3d7117        ubuntu              "/bin/bash"              19 seconds ago      Up 18 seconds                           node-01/tender_blackwell
335952181ea4        swarm:1.2.0-rc1     "/swarm join --advert"   7 minutes ago       Up 7 minutes        2375/tcp            node-02/compassionate_wilson
057159f355b0        swarm:1.2.0-rc1     "/swarm join --advert"   8 minutes ago       Up 8 minutes        2375/tcp            node-01/adoring_mirzakhani

Run another container:

vagrant@node-01:~$ docker -H tcp://[2001:db8:cafe:14::b]:4000 run -itd ubuntu /bin/bash

Check that the container shows up and that it is running on the other Swarm node (because Swarm scheduled it there):

vagrant@node-01:~$ docker -H tcp://[2001:db8:cafe:14::b]:4000 ps -a
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
650d6678eba0        ubuntu              "/bin/bash"              12 seconds ago      Up 11 seconds                           node-02/pedantic_ardinghelli
61324b3d7117        ubuntu              "/bin/bash"              2 minutes ago       Up 2 minutes                            node-01/tender_blackwell
335952181ea4        swarm:1.2.0-rc1     "/swarm join --advert"   9 minutes ago       Up 9 minutes        2375/tcp            node-02/compassionate_wilson
057159f355b0        swarm:1.2.0-rc1     "/swarm join --advert"   9 minutes ago       Up 9 minutes        2375/tcp            node-01/adoring_mirzakhani

Check the IPv6 address on each container (Hint: Each container running on a different Swarm node should have a different IPv6 prefix):

vagrant@node-01:~$ docker -H tcp://[2001:db8:cafe:14::b]:4000 attach 6132
root@61324b3d7117:/# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
8: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.3/16 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 2001:db8:cafe:2:0:242:ac11:3/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::42:acff:fe11:3/64 scope link
       valid_lft forever preferred_lft forever
vagrant@node-01:~$ docker -H tcp://[2001:db8:cafe:14::b]:4000 attach 650d
root@650d6678eba0:/# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
8: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.3/16 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 2001:db8:cafe:3:0:242:ac11:3/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::42:acff:fe11:3/64 scope link
       valid_lft forever preferred_lft forever

Check IPv6 reachability between the two containers that are running on different Swarm nodes:

root@61324b3d7117:/# ping6 2001:db8:cafe:3:0:242:ac11:3
PING 2001:db8:cafe:3:0:242:ac11:3(2001:db8:cafe:3:0:242:ac11:3) 56 data bytes
64 bytes from 2001:db8:cafe:3:0:242:ac11:3: icmp_seq=1 ttl=62 time=0.993 ms
64 bytes from 2001:db8:cafe:3:0:242:ac11:3: icmp_seq=2 ttl=62 time=0.493 ms
64 bytes from 2001:db8:cafe:3:0:242:ac11:3: icmp_seq=3 ttl=62 time=0.362 ms

Very nice! We have one container on node-01 with an IPv6 address from the IPv6 prefix that is set in the DOCKER_OPTS line on node-01 and we have another container running on node-02 that has an IPv6 address from a different IPv6 prefix from the DOCKER_OPTS line on node-02. The routes we created earlier are allowing these nodes and containers to communicate with each other over IPv6.

TROUBLESHOOTING:

Here is a quick summary of troubleshooting tips:

  • Make sure you are on Docker Engine 1.11 or above
  • Make sure you are on Docker Swarm 1.2.0 or above
  • Make sure that you can ping6 between every node and container from every other node. The routes created on each node (or on the first hop router) are critical in ensuring the containers can reach each other. This is the #1 issue with making this work correctly.
  • Run etcd with the "-debug" flag for debugging
  • Run Swarm with debugging enabled ("--debug" before the "manage" command) on the manager
  • Check the etcd node to make sure the Swarm nodes are registered in the K/V store:
vagrant@node-01:~$ curl -L -g http://[2001:db8:cafe:14::a]:2379/v2/keys/?recursive=true | json_pp
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   661  100   661    0     0  80560      0 --:--:-- --:--:-- --:--:-- 94428
{
   "action" : "get",
   "node" : {
      "nodes" : [
         {
            "dir" : true,
            "key" : "/docker",
            "modifiedIndex" : 5,
            "createdIndex" : 5,
            "nodes" : [
               {
                  "createdIndex" : 5,
                  "nodes" : [
                     {
                        "createdIndex" : 5,
                        "nodes" : [
                           {
                              "key" : "/docker/swarm/nodes/[2001:db8:cafe:14::c]:2375",
                              "modifiedIndex" : 50,
                              "ttl" : 131,
                              "value" : "[2001:db8:cafe:14::c]:2375",
                              "expiration" : "2016-04-01T01:02:39.198366301Z",
                              "createdIndex" : 50
                           },
                           {
                              "ttl" : 153,
                              "modifiedIndex" : 51,
                              "key" : "/docker/swarm/nodes/[2001:db8:cafe:14::d]:2375",
                              "value" : "[2001:db8:cafe:14::d]:2375",
                              "expiration" : "2016-04-01T01:03:00.867453746Z",
                              "createdIndex" : 51
                           }
                        ],
                        "dir" : true,
                        "key" : "/docker/swarm/nodes",
                        "modifiedIndex" : 5
                     }
                  ],
                  "key" : "/docker/swarm",
                  "modifiedIndex" : 5,
                  "dir" : true
               }
            ]
         }
      ],
      "dir" : true
   }
}

Enjoy!

Docker Registry with IPv6

If you have been following along, you know that I started a series of posts aimed at identifying IPv6 support for the various Docker components/services.

The first blog post was focused on Docker Engine, which has pretty reasonable support for basic IPv6.

The second blog post was focused on Docker Hub, which has zero IPv6 support. This is due to it being hosted on AWS with no IPv6-enabled front end deployed.

This blog post will focus on Docker Registry.

As I stated in the past two blog entries, I am not here to teach you Docker (what it is, how to deploy it, etc.). I am simply showing basic functionality of various Docker components/services when used with IPv6.

For information on setting up your own Docker Registry, check out:

https://docs.docker.com/registry/

I am using Docker version 1.8.3, Docker Compose version 1.4.2 and Docker Registry version 2.

I am using the same Ubuntu 14.04.3 hosts that I have used in the last two blog posts.

My setup uses two hosts with the following configuration:

  • docker-v6-1:
    • Role: Docker Registry
    • IPv6 Address: fd15:4ba5:5a2b:1009:20c:29ff:fef3:f848/64
  • docker-v6-2:
    • Role: Docker Host/Client
    • IPv6 Address: fd15:4ba5:5a2b:1009:20c:29ff:febb:cbf8/64

My Docker Registry (running on 'docker-v6-1') uses a self-signed cert and is started using either the 'docker run' syntax or Docker Compose. I show both examples below:

docker run:

docker run -d -p 5000:5000 --restart=always --name registry \
  -v `pwd`/certs:/certs \
  -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
  -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
  registry:2

Use Docker Compose to run Docker Registry
I am using a file named “docker-compose.yml” to launch my registry.

registry:
  restart: always
  image: registry:2
  ports:
    - 5000:5000
  environment:
    REGISTRY_HTTP_TLS_CERTIFICATE: /certs/domain.crt
    REGISTRY_HTTP_TLS_KEY: /certs/domain.key
  volumes:
    - /certs:/certs

Run Docker Compose:

docker-compose up -d
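Before testing from the client host, an optional local sanity check is to hit the registry's v2 API endpoint over the IPv6 literal. With a self-signed cert, '-k' skips verification and '-g' keeps curl from mangling the brackets; a healthy registry:2 answers on /v2/:

root@docker-v6-1:~# curl -k -g https://[fd15:4ba5:5a2b:1009:20c:29ff:fef3:f848]:5000/v2/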

Verify Connectivity
On the Docker host/client ("docker-v6-2"), verify that the Docker Registry host ("docker-v6-1") can be reached over IPv6:

root@docker-v6-2:~# ping6 -n docker-v6-1.example.com
PING docker-v6-1.example.com(fd15:4ba5:5a2b:1009:20c:29ff:fef3:f848) 56 data bytes
64 bytes from fd15:4ba5:5a2b:1009:20c:29ff:fef3:f848: icmp_seq=1 ttl=64 time=0.402 ms
64 bytes from fd15:4ba5:5a2b:1009:20c:29ff:fef3:f848: icmp_seq=2 ttl=64 time=0.367 ms
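One more prerequisite before the push: because the registry cert is self-signed, the Docker daemon on docker-v6-2 will reject it unless it is trusted. One way to do that (per the Docker Registry docs) is to drop a copy of domain.crt into the certs.d directory for this registry host and port, then restart Docker. Assuming the cert has already been copied over to the client:

root@docker-v6-2:~# mkdir -p /etc/docker/certs.d/docker-v6-1.example.com:5000
root@docker-v6-2:~# cp domain.crt /etc/docker/certs.d/docker-v6-1.example.com:5000/ca.crt
root@docker-v6-2:~# service docker restart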

Docker Registry Push/Pull Verification
Now that connectivity to the Docker Registry host is working, tag a local Docker image and then push it (over IPv6) to the Docker Registry:

root@docker-v6-2:~# docker tag ubuntu docker-v6-1.example.com:5000/ubuntu

root@docker-v6-2:~# docker push docker-v6-1.example.com:5000/ubuntu
The push refers to a repository [docker-v6-1.example.com:5000/ubuntu] (len: 1)
a005e6b7dd01: Image successfully pushed
002fa881df8a: Image successfully pushed
66395c31eb82: Image successfully pushed
0105f98ced6d: Image successfully pushed
latest: digest: sha256:167f1c34ead8f1779db7827a55de0d517b7f0e015d8f08cf032c7e5cd6979a84 size: 6800
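To exercise the pull path as well, remove the local copy and pull it back from the registry over IPv6:

root@docker-v6-2:~# docker rmi docker-v6-1.example.com:5000/ubuntu
root@docker-v6-2:~# docker pull docker-v6-1.example.com:5000/ubuntu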

A tcpdump on the Docker Registry shows traffic between docker-v6-1 and docker-v6-2 for the ‘push’ using the previously defined port 5000:

root@docker-v6-1:~# tcpdump -n -vvv ip6 -i eth0
tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
19:36:09.283820 IP6 (hlim 64, next-header TCP (6) payload length: 40) fd15:4ba5:5a2b:1009:20c:29ff:febb:cbf8.56066 > fd15:4ba5:5a2b:1009:20c:29ff:fef3:f848.5000: Flags [S], cksum 0x65b1 (correct), seq 2754754540, win 28800, options [mss 1440,sackOK,TS val 645579 ecr 0,nop,wscale 7], length 0
19:36:09.283930 IP6 (hlim 64, next-header TCP (6) payload length: 40) fd15:4ba5:5a2b:1009:20c:29ff:fef3:f848.5000 > fd15:4ba5:5a2b:1009:6540:bb36:2e23:f5a2.56066: Flags [S.], cksum 0xcd92 (incorrect -> 0x50bd), seq 1577491031, ack 2754754541, win 28560, options [mss 1440,sackOK,TS val 859496 ecr 645579,nop,wscale 7], length 0

It works!

Happy Dockering,

Shannon

Docker Hub – We don’t need no stinking IPv6!

I got a lot of great feedback on my last post about the basic configuration of Docker Engine with IPv6.  The next topic that I wanted to cover (and was excited to test) was Docker Hub with IPv6.

My hopes and dreams were smashed in about 15 seconds when I found out that IPv6 is not enabled for hub.docker.com and none of my docker login, docker search, docker pull, docker push or even a browser session to https://hub.docker.com would work over IPv6.

An nslookup reveals no IPv6. Nada. Zip:

> hub.docker.com
Server:	208.67.222.222
Address:	208.67.222.222#53

Non-authoritative answer:
hub.docker.com	canonical name = elb-default.us-east-1.aws.dckr.io.

Authoritative answers can be found from:
> docker.com
Server:	208.67.222.222
Address:	208.67.222.222#53

Non-authoritative answer:
docker.com	nameserver = ns-1289.awsdns-33.org.
docker.com	nameserver = ns-1981.awsdns-55.co.uk.
docker.com	nameserver = ns-207.awsdns-25.com.
docker.com	nameserver = ns-568.awsdns-07.net.
docker.com
	origin = ns-207.awsdns-25.com
	mail addr = awsdns-hostmaster.amazon.com
	serial = 1
	refresh = 7200
	retry = 900
	expire = 1209600
	minimum = 86400
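If you would rather query for AAAA records directly than eyeball nslookup output, dig makes the absence plain; the AAAA query comes back empty while the A query does not:

dig AAAA hub.docker.com +short
dig A hub.docker.com +short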

Insert your favorite sad panda image here. 🙁

I know the Docker folks are in a bind with this since they are using Amazon, which is likely the last cloud provider on earth that does not have real IPv6 support (none in EC2-VPC, though you can get it in EC2-Classic).

I will move on from Docker Hub and start checking out other Docker stuff like Registry, Compose, etc…

See you next time. Sorry for the epic failure of this post.

Shannon

Basic Configuration of Docker Engine with IPv6

This is the start of a blog series dedicated to enabling IPv6 for the various components in the Docker toolbox.

I am starting the series off by talking about the basic configuration for enabling IPv6 with Docker Engine.  There are some good examples that the Docker folks have put together that you will want to read through: https://docs.docker.com/engine/userguide/networking/default_network/ipv6/

Disclaimer: I am not teaching you Docker.  There are a zillion places to go learn Docker.  I am making the dangerous assumption that you already know what Docker is, how to install it and how to use it.

I am also not teaching you IPv6.  There are also a zillion places to go learn IPv6.  I am making the even more dangerous assumption that you know what IPv6 is, what the addressing details are and how to use it.

Diagram

The graphic below shows a high-level view of my setup.  I have two Docker hosts (docker-v6-1 and docker-v6-2) running Ubuntu 14.04.  As of this first post, I am using Docker 1.8.2. Both hosts are attached to a Layer 2 switch via their eth0 interfaces.  I am using static IPv4 addresses (not relevant here) for the hosts and StateLess Address AutoConfiguration (SLAAC) for IPv6 address assignment out of the Unique Local Address (ULA) fd15:4ba5:5a2b:1009::/64 range.

[Diagram: Docker Engine - Basic IPv6]

Preparing the Docker Host for IPv6:

As I mentioned before, I am using SLAAC-based assignment for IPv6 addressing on each host.  You can use static, SLAAC, Stateful DHCPv6 or Stateless DHCPv6 if you want.  I am not covering any of that as they don’t pertain directly to Docker.

Each Docker host has an IPv6 address and can reach the outside world:

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:f3:f8:48 brd ff:ff:ff:ff:ff:ff
    inet 192.168.80.200/24 brd 192.168.80.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fd15:4ba5:5a2b:1009:cc7:2609:38b7:e6c6/64 scope global temporary dynamic
       valid_lft 86388sec preferred_lft 14388sec
    inet6 fd15:4ba5:5a2b:1009:20c:29ff:fef3:f848/64 scope global dynamic
       valid_lft 86388sec preferred_lft 14388sec
    inet6 fe80::20c:29ff:fef3:f848/64 scope link
       valid_lft forever preferred_lft forever
root@docker-v6-1:~# ping6 -n www.google.com
PING www.google.com(2607:f8b0:400f:802::2004) 56 data bytes
64 bytes from 2607:f8b0:400f:802::2004: icmp_seq=1 ttl=255 time=13.7 ms
64 bytes from 2607:f8b0:400f:802::2004: icmp_seq=2 ttl=255 time=14.5 ms

Since I am using router advertisements (RAs) for my IPv6 address assignment, it is important to force the acceptance of RAs even when forwarding is enabled:

sysctl net.ipv6.conf.eth0.accept_ra=2
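That sysctl setting does not survive a reboot. To make it permanent, one approach is to append it to /etc/sysctl.conf and reload:

echo "net.ipv6.conf.eth0.accept_ra = 2" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p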

Now, if you haven’t already, install Docker using whatever method you are comfortable with.  Again, this is not a primer on Docker. 🙂

Docker! Docker! Docker!

Now that the IPv6 basics are there on the host and you have Docker installed, it is time to set the IPv6 subnet for Docker.  You can do this via the ‘docker daemon’ command or you can set it in the /etc/default/docker file.  Below is the example using the ‘docker daemon’ command. Here, I am setting the fixed IPv6 prefix as FD15:4BA5:5A2B:100A::/64.

root@docker-v6-1:~# docker daemon --ipv6 --fixed-cidr-v6="fd15:4ba5:5a2b:100a::/64"

Here is the same IPv6 prefix being set, but this is using the /etc/default/docker file:

DOCKER_OPTS="--dns 8.8.8.8 --dns 8.8.4.4 --ipv6 --fixed-cidr-v6=fd15:4ba5:5a2b:100a::/64"

Let’s fire up a container and see what happens. The example below shows that the container got an IPv6 address out of the prefix we set above:

root@docker-v6-1:~# docker run -it ubuntu bash
root@aea405985524:/# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
5: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:11:00:01 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fd15:4ba5:5a2b:100a:0:242:ac11:1/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::42:acff:fe11:1/64 scope link
       valid_lft forever preferred_lft forever

Ping the outside world:

root@aea405985524:/# ping6 www.google.com
PING www.google.com(den03s10-in-x04.1e100.net) 56 data bytes
64 bytes from den03s10-in-x04.1e100.net: icmp_seq=1 ttl=254 time=14.6 ms
64 bytes from den03s10-in-x04.1e100.net: icmp_seq=2 ttl=254 time=12.5 ms

Fire up another container and ping the first container over IPv6:

root@docker-v6-1:~# docker run -it ubuntu bash
root@e8a8662fad76:/# ping6 fd15:4ba5:5a2b:100a:0:242:ac11:1
PING fd15:4ba5:5a2b:100a:0:242:ac11:1(fd15:4ba5:5a2b:100a:0:242:ac11:1) 56 data bytes
64 bytes from fd15:4ba5:5a2b:100a:0:242:ac11:1: icmp_seq=1 ttl=64 time=0.094 ms
64 bytes from fd15:4ba5:5a2b:100a:0:242:ac11:1: icmp_seq=2 ttl=64 time=0.057 ms

Add the 2nd Docker host

Sweet! We have one host (docker-v6-1) running with two containers that can reach each other over IPv6 and reach the outside world.  Now let’s add the second Docker host (docker-v6-2).

Repeat all of the steps from above but change the IPv6 prefix that Docker is going to use. Here is an example using FD15:4BA5:5A2B:100B::/64:

DOCKER_OPTS="--dns 8.8.8.8 --dns 8.8.4.4 --ipv6 --fixed-cidr-v6=fd15:4ba5:5a2b:100b::/64

In order to have containers on one host reach containers on another host over IPv6, we have to figure out routing. You can use host-based routing (the example I show below) or you can just use the Layer 3 infrastructure you likely already have in your data center. I would recommend the latter option. Remember that Docker does not do NAT for IPv6, so you have to have some mechanism that allows pure L3 reachability between the various IPv6 address spaces you are using.

Here is an example of using host-based routing on each of the two Docker hosts. First, configure a static IPv6 route on the first Docker host (docker-v6-1). The route statement below says to route all traffic destined for the fd15:4ba5:5a2b:100b::/64 prefix (the one used on docker-v6-2) to the IPv6 address of the docker-v6-2 eth0 interface.

root@docker-v6-1:~# ip -6 route add fd15:4ba5:5a2b:100b::/64 via fd15:4ba5:5a2b:1009:20c:29ff:febb:cbf8

Now, do the same on the 2nd Docker host (docker-v6-2). This route statement says to route all traffic destined for the fd15:4ba5:5a2b:100a::/64 prefix (used on docker-v6-1) to the IPv6 address of the docker-v6-1 eth0 interface:

root@docker-v6-2:~# ip -6 route add fd15:4ba5:5a2b:100a::/64 via fd15:4ba5:5a2b:1009:20c:29ff:fef3:f848

The final test is to ping from one container on docker-v6-1 to a container on docker-v6-2:

root@e8a8662fad76:/# ping6 fd15:4ba5:5a2b:100b:0:242:ac11:1
PING fd15:4ba5:5a2b:100b:0:242:ac11:1(fd15:4ba5:5a2b:100b:0:242:ac11:1) 56 data bytes
64 bytes from fd15:4ba5:5a2b:100b:0:242:ac11:1: icmp_seq=3 ttl=62 time=0.570 ms
64 bytes from fd15:4ba5:5a2b:100b:0:242:ac11:1: icmp_seq=4 ttl=62 time=0.454 ms

It works!

We will build on this scenario in upcoming posts as we walk through enabling IPv6 functionality in a variety of Docker network scenarios and other Docker services.

Shannon