Docker Swarm and etcd with IPv6

Hey gang!

It has been a while since I posted here.  Part of that is due to pure laziness and business travel nonsense, and the other part is that I was waiting on a couple of Docker Swarm bugs to get resolved.  Well, the bugs have been fixed and I have finally gotten this whole thing to work.

As you may recall from the last few blog posts, I have gone through basic IPv6 deployment for Docker Engine, Docker Hub and Docker Registry.  In this post I will talk about using IPv6 with Docker Swarm.

There is a bunch of content out there to teach you Swarm and how to get it running. Here is a link to the main Docker Swarm docs: https://docs.docker.com/swarm/.  What I care about is doing basic stuff with Docker Swarm over IPv6.

As with my previous Docker posts, I am not going to teach you what the various Docker tools are, and the same goes for Docker Swarm.  Normally I don’t even tell you how to install Docker Engine.  This time I will give you some basic guidance to get it running, as you will need to grab the latest Docker Swarm release candidate, which has the IPv6 fixes merged.  There are a few issues that had to be resolved before I could get this thing running:

One issue was with the “docker -H tcp://[ipv6_address]” command (https://github.com/docker/docker/issues/18879), resolved in https://github.com/docker/docker/pull/16950. This fix was in Docker Engine 1.10.

Another issue was with Swarm discovery and the ‘swarm join’ command (https://github.com/docker/swarm/issues/1906), resolved in https://github.com/docker/docker/pull/20842. This fix is in Docker Swarm 1.2.0-rc1 and above.

Like most things with IPv6, you can reference a host via a DNS name as long as it has an AAAA record or a local /etc/hosts entry.  In this post I will be using IPv6 literals so you can see the actual IPv6 addresses in action.  NOTE: As with web browsers or pretty much anything where you need to call an IPv6 address along with a port number, you must put the IPv6 address in [] brackets so that the address can be differentiated from the port number.
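
For example, these two commands are equivalent ways of pointing the Docker client at a remote daemon on port 2375; the first uses a bracketed IPv6 literal and the second assumes a hostname (made up here purely for illustration) that resolves to the same address via a AAAA record:

docker -H tcp://[2001:db8:cafe:14::b]:2375 info
docker -H tcp://manager-01.example.com:2375 info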

In this post I am expanding on my earlier Docker Engine post, where I added the --ipv6 flag to the DOCKER_OPTS line in the /etc/default/docker config. Here I am using the following line in the /etc/default/docker file:

DOCKER_OPTS="-H=tcp://0.0.0.0:2375 -H=unix:///var/run/docker.sock --ipv6 --fixed-cidr-v6=2001:db8:cafe:1::/64"

Note that each host running Docker will have a configuration similar to this; only the IPv6 prefix will be different (see the diagram and interface output below).
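
To make that concrete, here is roughly what that line looks like on each of the three Docker hosts in this lab, matching the per-host prefixes in the diagram below (only the --fixed-cidr-v6 value changes):

# manager-01
DOCKER_OPTS="-H=tcp://0.0.0.0:2375 -H=unix:///var/run/docker.sock --ipv6 --fixed-cidr-v6=2001:db8:cafe:1::/64"

# node-01
DOCKER_OPTS="-H=tcp://0.0.0.0:2375 -H=unix:///var/run/docker.sock --ipv6 --fixed-cidr-v6=2001:db8:cafe:2::/64"

# node-02
DOCKER_OPTS="-H=tcp://0.0.0.0:2375 -H=unix:///var/run/docker.sock --ipv6 --fixed-cidr-v6=2001:db8:cafe:3::/64"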

Let’s get started.

Below is a diagram that shows the high-level layout of the lab setup I am testing with.  I have four VMs running Ubuntu, deployed via Vagrant.  The four VMs are assigned IPv6 addresses out of the 2001:db8:cafe:14::/64 network. The “etcd-01” VM runs etcd (a distributed key-value store), “manager-01” runs the Docker Swarm manager role, and “node-01” and “node-02” are Docker Swarm nodes that run the containerized workloads.  As I stated above, the Docker daemon needs to be told which IPv6 prefix to use for containers; the “--ipv6 --fixed-cidr-v6” options in the “DOCKER_OPTS” line I referenced above are what do that. You can see on manager-01, node-01 and node-02 that under the “docker0” bridge there are prefixes specific to each host: manager-01 = 2001:db8:cafe:1::/64, node-01 = 2001:db8:cafe:2::/64 and node-02 = 2001:db8:cafe:3::/64.  Containers launched on these nodes will get an IPv6 address out of the corresponding prefix on that node’s Docker bridge.  Again, I spell all of this out in the earlier Docker Engine post.

[Diagram: swarm-v6-topo – lab topology with etcd-01, manager-01, node-01 and node-02]

Note: Vagrant is limiting in that when you add a new “config.vm.network” entry, there is no option to attach it to an existing NIC. Each “config.vm.network” line creates a new NIC, so I could not simply dual-stack a single interface; instead, Vagrant created an eth2 with the IPv6 address I assigned.  No biggie, but it does look odd when you look at the interface output (example shown below).

vagrant@node-01:~$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:08:9d:5f brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe08:9d5f/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:27:f1:28 brd ff:ff:ff:ff:ff:ff
    inet 192.168.20.12/24 brd 192.168.20.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe27:f128/64 scope link
       valid_lft forever preferred_lft forever
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:b3:7a:d2 brd ff:ff:ff:ff:ff:ff
    inet6 2001:db8:cafe:14::c/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:feb3:7ad2/64 scope link
       valid_lft forever preferred_lft forever
5: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:fa:77:dd:e0 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 2001:db8:cafe:2::1/64 scope global tentative
       valid_lft forever preferred_lft forever
    inet6 fe80::1/64 scope link tentative
       valid_lft forever preferred_lft forever

I am not recommending or walking through how to use Vagrant to deploy this, since you may want to deploy Docker Swarm on another type of setup, but you can grab the super basic Vagrantfile that I used.  I rely on a few Vagrant pieces such as VirtualBox, vagrant-hostmanager and vagrant-vbguest, so make sure you have that stuff squared away before you use my Vagrantfile: https://github.com/shmcfarl/swarm-etcd
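
If you do not have those plugins yet, installing them is straightforward (VirtualBox itself is the provider and is installed separately):

vagrant plugin install vagrant-hostmanager
vagrant plugin install vagrant-vbguest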

When IPv6 is enabled for the Docker daemon, Docker takes care of adding a route for the prefix you defined and enabling the appropriate forwarding settings (see: https://docs.docker.com/engine/userguide/networking/default_network/ipv6/). However, you still have to ensure that routing to each Docker IPv6 prefix is correctly set up on all nodes, or you will end up with broken connectivity to/from containers running on different nodes.  The configurations below statically set the IPv6 routes on each node to reach the appropriate Docker IPv6 prefixes:

Three routes are added on etcd-01, one for each Docker IPv6 prefix on manager-01, node-01 and node-02 (reference the diagram above).

etcd-01:

vagrant@etcd-01:~$ sudo ip -6 route add 2001:db8:cafe:1::/64 via 2001:db8:cafe:14::b
vagrant@etcd-01:~$ sudo ip -6 route add 2001:db8:cafe:2::/64 via 2001:db8:cafe:14::c
vagrant@etcd-01:~$ sudo ip -6 route add 2001:db8:cafe:3::/64 via 2001:db8:cafe:14::d

Here is the IPv6 route table for etcd-01:

vagrant@etcd-01:~$ sudo ip -6 route
2001:db8:cafe:1::/64 via 2001:db8:cafe:14::b dev eth2  metric 1024
2001:db8:cafe:2::/64 via 2001:db8:cafe:14::c dev eth2  metric 1024
2001:db8:cafe:3::/64 via 2001:db8:cafe:14::d dev eth2  metric 1024
2001:db8:cafe:14::/64 dev eth2  proto kernel  metric 256
fe80::/64 dev eth0  proto kernel  metric 256
fe80::/64 dev eth1  proto kernel  metric 256
fe80::/64 dev eth2  proto kernel  metric 256

manager-01 (Note: only routes to the node-01 and node-02 prefixes are needed, as the etcd service on etcd-01 is not running in a container in this example):

vagrant@manager-01:~$ sudo ip -6 route add 2001:db8:cafe:2::/64 via 2001:db8:cafe:14::c
vagrant@manager-01:~$ sudo ip -6 route add 2001:db8:cafe:3::/64 via 2001:db8:cafe:14::d

Here is the IPv6 route table for manager-01:

vagrant@manager-01:~$ sudo ip -6 route
2001:db8:cafe:2::/64 via 2001:db8:cafe:14::c dev eth2  metric 1024
2001:db8:cafe:3::/64 via 2001:db8:cafe:14::d dev eth2  metric 1024
2001:db8:cafe:14::/64 dev eth2  proto kernel  metric 256
fe80::/64 dev eth0  proto kernel  metric 256
fe80::/64 dev eth1  proto kernel  metric 256
fe80::/64 dev eth2  proto kernel  metric 256

node-01:

vagrant@node-01:~$ sudo ip -6 route add 2001:db8:cafe:1::/64 via 2001:db8:cafe:14::b
vagrant@node-01:~$ sudo ip -6 route add 2001:db8:cafe:3::/64 via 2001:db8:cafe:14::d

node-02:

vagrant@node-02:~$ sudo ip -6 route add 2001:db8:cafe:1::/64 via 2001:db8:cafe:14::b
vagrant@node-02:~$ sudo ip -6 route add 2001:db8:cafe:2::/64 via 2001:db8:cafe:14::c
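
Keep in mind that routes added with “ip -6 route add” do not survive a reboot. One option on these Ubuntu hosts is to hang the routes off the IPv6 interface stanza in /etc/network/interfaces; the snippet below is illustrative only (shown for etcd-01; your interface names and addressing may differ, and Vagrant may manage this file for you):

iface eth2 inet6 static
    address 2001:db8:cafe:14::a
    netmask 64
    up ip -6 route add 2001:db8:cafe:1::/64 via 2001:db8:cafe:14::b
    up ip -6 route add 2001:db8:cafe:2::/64 via 2001:db8:cafe:14::c
    up ip -6 route add 2001:db8:cafe:3::/64 via 2001:db8:cafe:14::d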

Test reachability between the nodes (e.g. verify node-01 can reach etcd-01 at 2001:db8:cafe:14::a):

vagrant@node-01:~$ ping6 2001:db8:cafe:14::a
PING 2001:db8:cafe:14::a(2001:db8:cafe:14::a) 56 data bytes
64 bytes from 2001:db8:cafe:14::a: icmp_seq=1 ttl=64 time=0.638 ms
64 bytes from 2001:db8:cafe:14::a: icmp_seq=2 ttl=64 time=0.421 ms
64 bytes from 2001:db8:cafe:14::a: icmp_seq=3 ttl=64 time=0.290 ms
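
One more sanity check worth doing on the Docker hosts (manager-01, node-01 and node-02): cross-node container traffic depends on the kernel forwarding IPv6 between docker0 and the outside interface. The daemon is supposed to enable this when “--fixed-cidr-v6” is set (see the Docker IPv6 guide linked above), but it is worth verifying; if it comes back as 0, container-to-container traffic across nodes will fail even though node-to-node pings work:

vagrant@node-01:~$ sysctl net.ipv6.conf.all.forwarding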

The basic setup of each node is complete and it is time to set up etcd and Docker Swarm.

Disclaimer: I am not recommending a specific approach for deploying etcd or Docker Swarm. Please check the documentation for each one of those for the recommended deployment options.

On the etcd-01 node, download and untar etcd:

curl -L  https://github.com/coreos/etcd/releases/download/v2.2.5/etcd-v2.2.5-linux-amd64.tar.gz -o etcd-v2.2.5-linux-amd64.tar.gz

tar xzvf etcd-v2.2.5-linux-amd64.tar.gz

rm etcd-v2.2.5-linux-amd64.tar.gz

cd etcd-v2.2.5-linux-amd64/

In the example below, I am setting a shell variable to the bracketed IPv6 address of the etcd-01 node and then running etcd in the foreground on the console:

vagrant@etcd-01:~/etcd-v2.2.5-linux-amd64$ MY_IPv6="[2001:db8:cafe:14::a]"
vagrant@etcd-01:~/etcd-v2.2.5-linux-amd64$
vagrant@etcd-01:~/etcd-v2.2.5-linux-amd64$ ./etcd \
> -initial-advertise-peer-urls http://$MY_IPv6:2380 \
> -listen-peer-urls="http://0.0.0.0:2380,http://0.0.0.0:7001" \
> -listen-client-urls="http://0.0.0.0:2379,http://0.0.0.0:4001" \
> -advertise-client-urls="http://$MY_IPv6:2379" \
> -initial-cluster-token etcd-01 \
> -initial-cluster="default=http://$MY_IPv6:2380" \
> -initial-cluster-state new
2016-04-01 00:34:41.915304 I | etcdmain: etcd Version: 2.2.5
2016-04-01 00:34:41.915508 I | etcdmain: Git SHA: bc9ddf2
2016-04-01 00:34:41.915922 I | etcdmain: Go Version: go1.5.3
2016-04-01 00:34:41.916287 I | etcdmain: Go OS/Arch: linux/amd64
2016-04-01 00:34:41.916676 I | etcdmain: setting maximum number of CPUs to 1, total number of available CPUs is 1
2016-04-01 00:34:41.917671 W | etcdmain: no data-dir provided, using default data-dir ./default.etcd
2016-04-01 00:34:41.917858 N | etcdmain: the server is already initialized as member before, starting as etcd member...
2016-04-01 00:34:41.918340 I | etcdmain: listening for peers on http://0.0.0.0:2380
2016-04-01 00:34:41.918809 I | etcdmain: listening for peers on http://0.0.0.0:7001
2016-04-01 00:34:41.919324 I | etcdmain: listening for client requests on http://0.0.0.0:2379
2016-04-01 00:34:41.919644 I | etcdmain: listening for client requests on http://0.0.0.0:4001
2016-04-01 00:34:41.920224 I | etcdserver: name = default
2016-04-01 00:34:41.920416 I | etcdserver: data dir = default.etcd
2016-04-01 00:34:41.921540 I | etcdserver: member dir = default.etcd/member
2016-04-01 00:34:41.921949 I | etcdserver: heartbeat = 100ms
2016-04-01 00:34:41.922321 I | etcdserver: election = 1000ms
2016-04-01 00:34:41.922612 I | etcdserver: snapshot count = 10000
2016-04-01 00:34:41.923036 I | etcdserver: advertise client URLs = http://[2001:db8:cafe:14::a]:2379
2016-04-01 00:34:41.923664 I | etcdserver: restarting member d68162a449565404 in cluster 89b5c84d35f7a1e at commit index 10
2016-04-01 00:34:41.923919 I | raft: d68162a449565404 became follower at term 2
2016-04-01 00:34:41.924187 I | raft: newRaft d68162a449565404 [peers: [], term: 2, commit: 10, applied: 0, lastindex: 10, lastterm: 2]
2016-04-01 00:34:41.924780 I | etcdserver: starting server... [version: 2.2.5, cluster version: to_be_decided]
2016-04-01 00:34:41.926767 N | etcdserver: added local member d68162a449565404 [http://[2001:db8:cafe:14::a]:2380] to cluster 89b5c84d35f7a1e
2016-04-01 00:34:41.926885 N | etcdserver: set the initial cluster version to 2.2
2016-04-01 00:34:43.325015 I | raft: d68162a449565404 is starting a new election at term 2
2016-04-01 00:34:43.325357 I | raft: d68162a449565404 became candidate at term 3
2016-04-01 00:34:43.326087 I | raft: d68162a449565404 received vote from d68162a449565404 at term 3
2016-04-01 00:34:43.326594 I | raft: d68162a449565404 became leader at term 3
2016-04-01 00:34:43.327107 I | raft: raft.node: d68162a449565404 elected leader d68162a449565404 at term 3
2016-04-01 00:34:43.328579 I | etcdserver: published {Name:default ClientURLs:[http://[2001:db8:cafe:14::a]:2379]} to cluster 89b5c84d35f7a1e

Note: You can run etcd with the “-debug” flag as well, which is very helpful for seeing the GETs and PUTs from the Swarm nodes.
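
Before moving on to Swarm, a quick way to confirm etcd is reachable over IPv6 from another node is to hit its version endpoint with curl (the “-g” flag keeps curl from interpreting the square brackets as globbing characters):

vagrant@manager-01:~$ curl -g http://[2001:db8:cafe:14::a]:2379/version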

On the manager-01 node, fire up the Swarm manager. I am doing a ‘docker run’ and publishing a user-defined manager port (4000) on the host, which maps to port 2375 inside the Swarm container (the port the manager listens on). I am specifically calling for the Swarm 1.2.0-rc1 image that I referenced before, which has the latest IPv6 bug fixes. The manager role is launched in the container and references etcd at etcd-01’s IPv6 address and port (see “client URLs” in the etcd output above). (Note: I am running it without the ‘-d’ flag so that it stays in the foreground):

vagrant@manager-01:~$ docker run -p 4000:2375 swarm:1.2.0-rc1 manage etcd://[2001:db8:cafe:14::a]:2379
time="2016-04-01T00:36:01Z" level=info msg="Initializing discovery without TLS"
time="2016-04-01T00:36:01Z" level=info msg="Listening for HTTP" addr=":2375" proto=tcp

On each Swarm node, kick off a “swarm join”. Similar to the manager example above, a “docker run” is used to launch a container from the 1.2.0-rc1 Swarm image. The node does a “join” (participating in the discovery process), advertises its own IPv6 address, and references etcd at etcd-01’s IPv6 address and port (see why testing reachability from the node AND its containers was required?) 😉

node-01:

vagrant@node-01:~$ docker run swarm:1.2.0-rc1 join --advertise=[2001:db8:cafe:14::c]:2375 etcd://[2001:db8:cafe:14::a]:2379
time="2016-04-01T00:36:36Z" level=info msg="Initializing discovery without TLS"
time="2016-04-01T00:36:36Z" level=info msg="Registering on the discovery service every 1m0s..." addr="[2001:db8:cafe:14::c]:2375" discovery="etcd://[2001:db8:cafe:14::a]:2379"

node-02:

vagrant@node-02:~$ docker run swarm:1.2.0-rc1 join --advertise=[2001:db8:cafe:14::d]:2375 etcd://[2001:db8:cafe:14::a]:2379
time="2016-04-01T00:36:57Z" level=info msg="Initializing discovery without TLS"
time="2016-04-01T00:36:57Z" level=info msg="Registering on the discovery service every 1m0s..." addr="[2001:db8:cafe:14::d]:2375" discovery="etcd://[2001:db8:cafe:14::a]:2379"

Back on the Manager node you will see some messages like these:

time="2016-04-01T00:37:39Z" level=info msg="Registered Engine node-01 at [2001:db8:cafe:14::c]:2375"
time="2016-04-01T00:38:00Z" level=info msg="Registered Engine node-02 at [2001:db8:cafe:14::d]:2375"

SWEET!

Now, do some Docker-looking stuff. Take a look at the running containers on the Swarm cluster by pointing the Docker client at the manager-01 IPv6 address and published port. The “docker ps -a” output shows that the swarm containers (running the “join --advertise”) are running on node-01 and node-02.

vagrant@node-01:~$ docker -H tcp://[2001:db8:cafe:14::b]:4000 ps -a
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
335952181ea4        swarm:1.2.0-rc1     "/swarm join --advert"   5 minutes ago       Up 5 minutes        2375/tcp            node-02/compassionate_wilson
057159f355b0        swarm:1.2.0-rc1     "/swarm join --advert"   5 minutes ago       Up 5 minutes        2375/tcp            node-01/adoring_mirzakhani
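
As a side note, if you get tired of passing -H on every command, you can export DOCKER_HOST instead and the client will pick it up; with the IPv6 URL parsing fix mentioned earlier, the bracketed address should work here the same way it does with -H. I will keep showing the full -H form below so the commands are unambiguous.

vagrant@node-01:~$ export DOCKER_HOST=tcp://[2001:db8:cafe:14::b]:4000
vagrant@node-01:~$ docker ps -a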

“docker images” shows the swarm image:

vagrant@node-01:~$ docker -H tcp://[2001:db8:cafe:14::b]:4000 images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
swarm               1.2.0-rc1           2fe11064a124        8 days ago          18.68 MB

“docker info” shows basic info about the Swarm cluster, including the two nodes (node-01 and node-02):

vagrant@node-01:~$ docker -H tcp://[2001:db8:cafe:14::b]:4000 info
Containers: 2
 Running: 2
 Paused: 0
 Stopped: 0
Images: 2
Server Version: swarm/1.2.0
Role: primary
Strategy: spread
Filters: health, port, dependency, affinity, constraint
Nodes: 2
 node-01: [2001:db8:cafe:14::c]:2375
  └ Status: Healthy
  └ Containers: 1
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 1.019 GiB
  └ Labels: executiondriver=, kernelversion=3.13.0-83-generic, operatingsystem=Ubuntu 14.04.4 LTS, storagedriver=aufs
  └ Error: (none)
  └ UpdatedAt: 2016-04-01T00:42:33Z
 node-02: [2001:db8:cafe:14::d]:2375
  └ Status: Healthy
  └ Containers: 1
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 1.019 GiB
  └ Labels: executiondriver=, kernelversion=3.13.0-83-generic, operatingsystem=Ubuntu 14.04.4 LTS, storagedriver=aufs
  └ Error: (none)
  └ UpdatedAt: 2016-04-01T00:42:47Z
Plugins:
 Volume:
 Network:
Kernel Version: 3.13.0-83-generic
Operating System: linux
Architecture: amd64
CPUs: 2
Total Memory: 2.038 GiB
Name: 39ed14412c1f
Docker Root Dir:
Debug Mode (client): false
Debug Mode (server): false
WARNING: No kernel memory limit support

Run a container via the Swarm manager:

vagrant@node-01:~$ docker -H tcp://[2001:db8:cafe:14::b]:4000 run -it ubuntu /bin/bash
root@61324b3d7117:/#

Check that the container shows up under “docker ps -a” and check which Swarm node it is running on:

vagrant@node-01:~$ docker -H tcp://[2001:db8:cafe:14::b]:4000 ps -a
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
61324b3d7117        ubuntu              "/bin/bash"              19 seconds ago      Up 18 seconds                           node-01/tender_blackwell
335952181ea4        swarm:1.2.0-rc1     "/swarm join --advert"   7 minutes ago       Up 7 minutes        2375/tcp            node-02/compassionate_wilson
057159f355b0        swarm:1.2.0-rc1     "/swarm join --advert"   8 minutes ago       Up 8 minutes        2375/tcp            node-01/adoring_mirzakhani

Run another container:

vagrant@node-01:~$ docker -H tcp://[2001:db8:cafe:14::b]:4000 run -itd ubuntu /bin/bash

Check that the container shows up and that it is running on the other Swarm node (because Swarm scheduled it there):

vagrant@node-01:~$ docker -H tcp://[2001:db8:cafe:14::b]:4000 ps -a
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
650d6678eba0        ubuntu              "/bin/bash"              12 seconds ago      Up 11 seconds                           node-02/pedantic_ardinghelli
61324b3d7117        ubuntu              "/bin/bash"              2 minutes ago       Up 2 minutes                            node-01/tender_blackwell
335952181ea4        swarm:1.2.0-rc1     "/swarm join --advert"   9 minutes ago       Up 9 minutes        2375/tcp            node-02/compassionate_wilson
057159f355b0        swarm:1.2.0-rc1     "/swarm join --advert"   9 minutes ago       Up 9 minutes        2375/tcp            node-01/adoring_mirzakhani

Check the IPv6 address on each container (Hint: Each container running on a different Swarm node should have a different IPv6 prefix):

vagrant@node-01:~$ docker -H tcp://[2001:db8:cafe:14::b]:4000 attach 6132
root@61324b3d7117:/# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
8: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.3/16 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 2001:db8:cafe:2:0:242:ac11:3/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::42:acff:fe11:3/64 scope link
       valid_lft forever preferred_lft forever
vagrant@node-01:~$ docker -H tcp://[2001:db8:cafe:14::b]:4000 attach 650d
root@650d6678eba0:/# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
8: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.3/16 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 2001:db8:cafe:3:0:242:ac11:3/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::42:acff:fe11:3/64 scope link
       valid_lft forever preferred_lft forever
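
If you would rather not attach to each container just to read its address, “docker inspect” with a Go template is a quicker way to pull the global IPv6 address (using the container IDs from the ps output above; on the default bridge network this field should be populated):

vagrant@node-01:~$ docker -H tcp://[2001:db8:cafe:14::b]:4000 inspect -f '{{.NetworkSettings.GlobalIPv6Address}}' 61324b3d7117
vagrant@node-01:~$ docker -H tcp://[2001:db8:cafe:14::b]:4000 inspect -f '{{.NetworkSettings.GlobalIPv6Address}}' 650d6678eba0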

Check IPv6 reachability between the two containers that are running on different Swarm nodes:

root@61324b3d7117:/# ping6 2001:db8:cafe:3:0:242:ac11:3
PING 2001:db8:cafe:3:0:242:ac11:3(2001:db8:cafe:3:0:242:ac11:3) 56 data bytes
64 bytes from 2001:db8:cafe:3:0:242:ac11:3: icmp_seq=1 ttl=62 time=0.993 ms
64 bytes from 2001:db8:cafe:3:0:242:ac11:3: icmp_seq=2 ttl=62 time=0.493 ms
64 bytes from 2001:db8:cafe:3:0:242:ac11:3: icmp_seq=3 ttl=62 time=0.362 ms

Very nice! We have one container on node-01 with an IPv6 address from the IPv6 prefix that is set in the DOCKER_OPTS line on node-01 and we have another container running on node-02 that has an IPv6 address from a different IPv6 prefix from the DOCKER_OPTS line on node-02. The routes we created earlier are allowing these nodes and containers to communicate with each other over IPv6.

TROUBLESHOOTING:

Here is a quick summary of troubleshooting tips:

  • Make sure you are on Docker Engine 1.11 or above
  • Make sure you are on Docker Swarm 1.2.0 or above
  • Make sure that you can ping6 between every node and container from every other node. The routes created on each node (or on the first hop router) are critical in ensuring the containers can reach each other. This is the #1 issue with making this work correctly.
  • Run etcd with the “-debug” flag for debugging
  • Run the Swarm manager with debug logging enabled (pass “--debug” before the “manage” command)
  • Check the etcd node to make sure the Swarm nodes are registered in the K/V store:
vagrant@node-01:~$ curl -L -g http://[2001:db8:cafe:14::a]:2379/v2/keys/?recursive=true | json_pp
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   661  100   661    0     0  80560      0 --:--:-- --:--:-- --:--:-- 94428
{
   "action" : "get",
   "node" : {
      "nodes" : [
         {
            "dir" : true,
            "key" : "/docker",
            "modifiedIndex" : 5,
            "createdIndex" : 5,
            "nodes" : [
               {
                  "createdIndex" : 5,
                  "nodes" : [
                     {
                        "createdIndex" : 5,
                        "nodes" : [
                           {
                              "key" : "/docker/swarm/nodes/[2001:db8:cafe:14::c]:2375",
                              "modifiedIndex" : 50,
                              "ttl" : 131,
                              "value" : "[2001:db8:cafe:14::c]:2375",
                              "expiration" : "2016-04-01T01:02:39.198366301Z",
                              "createdIndex" : 50
                           },
                           {
                              "ttl" : 153,
                              "modifiedIndex" : 51,
                              "key" : "/docker/swarm/nodes/[2001:db8:cafe:14::d]:2375",
                              "value" : "[2001:db8:cafe:14::d]:2375",
                              "expiration" : "2016-04-01T01:03:00.867453746Z",
                              "createdIndex" : 51
                           }
                        ],
                        "dir" : true,
                        "key" : "/docker/swarm/nodes",
                        "modifiedIndex" : 5
                     }
                  ],
                  "key" : "/docker/swarm",
                  "modifiedIndex" : 5,
                  "dir" : true
               }
            ]
         }
      ],
      "dir" : true
   }
}
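
If you prefer not to eyeball raw JSON, etcdctl (which ships in the same tarball you extracted on etcd-01) can list the same keys. Assuming it handles the bracketed IPv6 address in the --peers URL the same way etcd itself does, something like this shows the registered Swarm nodes:

vagrant@etcd-01:~/etcd-v2.2.5-linux-amd64$ ./etcdctl --peers http://[2001:db8:cafe:14::a]:2379 ls --recursive /docker/swarm/nodes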

Enjoy!