Docker network types with focus on Bridged networks
How do you create and manage networks with Docker? We will create bridge networks and use them with our containers.
Introduction
Last time we talked about networking with Docker Compose. Compose did the heavy lifting for us in the background, so it may not be apparent what exactly had to be configured for the two containers to be able to communicate with each other. This time we will take a closer look at the networking capabilities of Docker. There’s even more to Docker Compose and its networking, but today our focus is on plain Docker, so we can learn the basics first.
In this article we will create custom bridge networks, see how to use them with our containers, and learn how to make them configurable on the operating system level.
What are Docker networks?
As we already discussed in the previous post Docker Compose networking, we can list currently existing networks with the following command:
docker network ls
This outputs for example:
NETWORK ID     NAME      DRIVER    SCOPE
56a19d16e12f   bridge    bridge    local
a26d7826640e   host      host      local
7e93d3c8fbc2   none      null      local
There are three networks by default. But what are they, and why do they exist? What are the different types of network drivers we see in the list? Are there more than just those three? Let’s take a look.
These are the network driver types Docker currently supports out of the box:
Bridge - This is the default network type. When you create a network without specifying a type, you create a bridge network. Bridge networks connect containers running on the same Docker engine. The most common use case is to connect multiple containers to the same bridge network so they can communicate with each other on the same host. Containers connected to a bridge network are not exposed to the host machine’s network unless their ports are published with the -p/--publish option (see the example after this list).
Host - Containers attached to the host network are fully exposed to the host’s network. The container’s ports do not need to be exposed individually (nor can they be). The containers do not get their own IP address - they share the host’s IP address. This is generally not recommended, since every port the container listens on is reachable from the host’s network, which can pose a security risk.
None (null driver) - Only the loopback interface is available to the container; it has no external network connectivity.
Macvlan - Assigns each container its own MAC address, so it appears as a physical device on the network. When creating this kind of network, you can define which physical interface on the host the traffic goes through.
IPvlan - In the default L2 mode (OSI Layer 2), containers join the same sub-network as the host. In L3 mode (OSI Layer 3), containers must join a different sub-network than the host’s interface. Each container can be in its own subnet, and all containers can still reach each other; in L3 mode the host acts as a router between the container subnets. In both modes containers get their own IP addresses, and you can select which host interface to use when creating the network.
Overlay - Creates a distributed network spanning multiple Docker engine hosts.
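To make the difference between the bridge and host drivers concrete, here is a hedged sketch (the container names web-bridge and web-host are made up for illustration): on a bridge network a port must be published to be reachable from the host’s network, while with the host driver the container simply shares the host’s ports.
# Bridge: publish container port 80 on host port 8080 explicitly
docker run -d --name web-bridge -p 8080:80 nginx:latest
# Host: no publishing needed (or possible); the container shares the host's network stack (Linux only)
docker run -d --name web-host --network host nginx:latest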
How do you create networks?
To create a network, you use the Docker CLI like this:
docker network create net
If successful, Docker outputs the new network’s identifier. You can then list all the networks again:
docker network ls
You will see that your new network appears on the list.
NETWORK ID     NAME      DRIVER    SCOPE
43bf90c457f1   net       bridge    local
...
Because we didn’t specify a driver, the new network became a bridge network (the default type).
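You can also inspect the new network to see which driver it uses and which subnet Docker assigned to it:
docker network inspect net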
Bridge networks
The default bridge network
When you first start Docker, a default bridge network already exists with the name bridge. There are some reasons why you should consider creating your own custom bridge network instead:
All containers join the default bridge network by default. If you do not explicitly define which network your container should join, it will join this default one.
Because of point 1, unrelated containers may end up on the same network even when that is not intended. This can pose a security issue.
The default bridge network does not provide DNS resolution. You must use IP addresses to connect to other containers.
For these reasons it is better to create a custom bridge network, since none of these issues apply to one. With a custom bridge you have to explicitly join containers to it, so you know for sure which containers can communicate with each other. DNS resolution also works, so you can reference other containers by name instead of only by IP address.
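As a quick, hedged illustration of the missing name resolution on the default bridge (the container names default1 and default2 are made up, and the IP address below is only an example):
# Two throwaway containers on the default bridge (no --network flag given)
docker run -dit --name default1 alpine:latest
docker run -dit --name default2 alpine:latest
# Pinging by name is expected to fail on the default bridge...
docker exec default1 ping -c 1 default2
# ...but pinging by IP address works; find the address with docker inspect default2
docker exec default1 ping -c 1 172.17.0.3   # example address, yours may differ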
Why use bridge networks?
If your use case is to run multiple containers on one host that need to communicate with each other and with the host, a bridge network is for you. This is probably the most common use case.
Let’s take a look at another example. We have three containers:
Database container (for example PostgreSQL)
Custom data service container that communicates with the database
Custom reporting service that creates reports from the data it reads from the data service
We could connect all of these to the same bridge network - but we don’t have to. We could make two bridge networks:
For connecting database and data service
For connecting data service and reporting service
The reporting service is not interested in the database at all. It should not even need to know what the database is, since only the data service connects to it. Why should the reporting service be in the same network as the database? This is a simplified version of a real-life scenario. Having two networks is a bit more work for us to define, but Docker Compose in particular makes it really simple (a command-line sketch follows the list below). The added benefits for us are:
More isolation → More secure
Clearer responsibilities for networks → Clearer design for future work
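Here is a minimal sketch of how this two-network setup could look with plain Docker commands. The network names and the service image names (my-data-service, my-reporting-service) are made up for the example; docker network connect attaches an already-running container to an additional network.
# Two isolated bridge networks
docker network create db_net
docker network create report_net
# The database only joins db_net
docker run -d --name db --network db_net -e POSTGRES_PASSWORD=example postgres:latest
# The data service starts on db_net and is then also connected to report_net
docker run -d --name data-service --network db_net my-data-service:latest
docker network connect report_net data-service
# The reporting service only joins report_net and never sees the database
docker run -d --name reporting-service --network report_net my-reporting-service:latest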
How to create a bridge network
Here’s the full command:
docker network create --driver bridge bridge1
There, we have our bridge network created with the name bridge1.
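If you need control over the addressing, the network could instead be created with an explicit subnet and gateway via the standard --subnet and --gateway options (the address range below is just an example):
docker network create --driver bridge --subnet 172.25.0.0/16 --gateway 172.25.0.1 bridge1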
Join a container to the network
To make a container join the network we want, we provide the --network parameter to the docker run command, like this:
docker run -dit --name alpine1 --network bridge1 alpine:latest
Here we run an Alpine Linux container with the --network parameter to join our bridge1 network. Also notice the -dit parameters: -d for detached, -i for interactive, and -t for tty (a pseudo-terminal). These make the container run in the background and allow us to attach to its shell later.
Now if you inspect the container:
docker inspect alpine1
You will see that it has an IP address assigned on the bridge1 network.
...
"Networks": {
"bridge1": {
"IPAMConfig": null,
"Links": null,
"Aliases": [
"f3a8b6b916b9"
],
"NetworkID": "f3f92b82e62421466...",
"EndpointID": "66f6b3ab66168d84...",
"Gateway": "172.18.0.1",
"IPAddress": "172.18.0.2",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:ac:12:00:02",
"DriverOpts": null
}
}
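If you only want the IP address instead of the whole JSON document, docker inspect accepts a Go template via the -f/--format flag:
# Print only the container's IP address on each of its networks
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' alpine1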
Now let’s create another container and join the same network:
docker run -dit --name alpine2 --network bridge1 alpine:latest
Then attach to it:
docker attach alpine2
And ping the other container:
/ # ping alpine1
PING alpine1 (172.18.0.2): 56 data bytes
64 bytes from 172.18.0.2: seq=0 ttl=64 time=0.188 ms
64 bytes from 172.18.0.2: seq=1 ttl=64 time=0.089 ms
^C
--- alpine1 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
The connection works, and we can reference the other container by its name. Nice!
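By the way, you can detach from alpine2 without stopping it by pressing Ctrl-P followed by Ctrl-Q. Networks can also be joined and left after a container is already running, which is handy when a design like the two-network example above evolves (other_net here is just an assumed, pre-existing network):
# Attach the already-running alpine2 to another network, then detach it again
docker network connect other_net alpine2
docker network disconnect other_net alpine2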
Network interface on the operating system level
When we created the network, Docker created a network interface for us in the operating system. On Ubuntu, for example, we can list all network interfaces with the ifconfig command. Below is a snippet of what it might look like:
...
br-f3f92b82e624: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.18.0.1  netmask 255.255.0.0  broadcast 172.18.255.255
        inet6 fe80::42:3eff:fe05:c4ab  prefixlen 64  scopeid 0x20<link>
        ether 02:42:3e:05:c4:ab  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 33  bytes 4446 (4.4 KB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
...
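On systems where ifconfig is not installed, the newer iproute2 tools show the same information:
# Brief overview of all interfaces and their addresses
ip -br addr show
# Only the bridge interfaces (including those Docker created)
ip link show type bridge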
As you can see, the interface name is generated. Sometimes we might need to, for example, control firewall settings for the network we create, and a generated name is no good for that. Luckily we can set the interface’s name too, with the option com.docker.network.bridge.name:
docker network create bridge2 -o com.docker.network.bridge.name=ex_bridge2
Now we have the interface with the name we defined:
ex_bridge2: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.19.0.1  netmask 255.255.0.0  broadcast 172.19.255.255
        ether 02:42:5f:a9:14:c8  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
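With a stable interface name, host-level configuration such as firewall rules becomes much more readable. For example, with ufw (assuming it is installed; note that Docker also manages its own iptables rules, so always verify the effective behaviour):
# Allow incoming traffic to port 5432 only on the ex_bridge2 interface
sudo ufw allow in on ex_bridge2 to any port 5432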
There we have it: our custom bridge networks up and running, and configurable on the operating system level. Next week we will look at MacVLAN networks.
Happy coding!