Compare commits

84 Commits

Author SHA1 Message Date
Tibor Vass
363e9a88a1 Merge pull request #42061 from thaJeztah/20.10_backport_bump_buildkit
[20.10 backport] vendor: github.com/moby/buildkit v0.8.2
2021-02-24 22:27:18 -08:00
Tibor Vass
35f5f9e624 builder: fix incorrect cache match for inline cache with empty layers
See https://github.com/moby/buildkit/pull/1993

Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 9bf93e90fa)
Signed-off-by: Tibor Vass <tibor@docker.com>
2021-02-25 01:50:37 +00:00
Tibor Vass
035cb276d9 Merge pull request #42070 from thaJeztah/20.10_backport_rootless_typo_guard
[20.10 backport] dockerd-rootless.sh: add typo guard
2021-02-24 17:42:50 -08:00
Sebastiaan van Stijn
3ce37a6aa4 vendor: github.com/moby/buildkit v0.8.2
full diff: 68bb095353...9065b18ba4

- fix seccomp compatibility in 32bit arm
    - fixes "Unable to build alpine:edge containers for armv7"
    - fixes "Buildx failing to build for arm/v7 platform on arm64 machine"
- resolver: avoid error caching on token fetch
    - fixes "Error: i/o timeout should not be cached"
- fileop: fix checksum to contain indexes of inputs
- frontend/dockerfile: add RunCommand.FlagsUsed field
    - relates to "[20.10] Classic builder silently ignores unsupported Dockerfile command flags"
- update qemu emulators
    - relates to "Impossible to run git clone inside buildx with non x86 architecture"
- Fix reference count issues on typed errors with mount references
    - fixes errors on releasing mounts with typed execerror refs
    - fixes / addresses invalid mutable ref when using shared cache mounts
- dockerfile/docs: fix frontend image tags
- git: set token only for main remote access
    - fixes "Loading repositories with submodules is repeated. Failed to clone submodule from googlesource"
- allow skipping empty layer detection on cache export

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 9962a3f74e)
Signed-off-by: Tibor Vass <tibor@docker.com>
2021-02-25 01:41:11 +00:00
Akihiro Suda
5e8c1b4f7d dockerd-rootless.sh: add typo guard
`dockerd-rootless.sh install` is a common typo of `dockerd-rootless-setuptool.sh install`.

Now `dockerd-rootless.sh install` shows a human-readable error.

Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
(cherry picked from commit 8dc6c109b5)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-02-24 22:13:50 +01:00
Tibor Vass
7f547e15c7 Merge pull request #42060 from thaJeztah/20.10_backport_bump_swarmkit
[20.10 backport] Update Swarmkit to pick up fixes to heartbeat period and stalled tasks
2021-02-24 12:46:03 -08:00
Tibor Vass
830471acf5 Merge pull request #42066 from thaJeztah/20.10_backport_check_config
[20.10 backport] check-config.sh: add NETFILTER_XT_MARK
2021-02-24 12:45:33 -08:00
Tibor Vass
7ae42f5797 Merge pull request #42065 from thaJeztah/20.10_backport_lease_blobs_fixes
[20.10 backport] builder: fix blobs releasing via leases after pull
2021-02-24 12:44:51 -08:00
Sebastiaan van Stijn
f3d130d743 Merge pull request #42049 from thaJeztah/20.10_backport_builder_pull_fix
[20.10 backport] builder: fix pull synchronization regression
2021-02-23 21:15:32 +01:00
Piotr Karbowski
a24d92f95b check-config.sh: add NETFILTER_XT_MARK
Points out another symbol that Docker might need. In this case, Docker's
mesh network in swarm mode does not route Virtual IPs if it is unset.

From /var/logs/docker.log:
time="2021-02-19T18:15:39+01:00" level=error msg="set up rule failed, [-t mangle -A INPUT -d 10.0.1.2/32 -j MARK --set-mark 257]:  (iptables failed: iptables --wait -t mangle -A INPUT -d 10.0.1.2/32 -j MARK --set-mark 257: iptables v1.8.7 (legacy): unknown option \"--set-mark\"\nTry `iptables -h' or 'iptables --help' for more information.\n (exit status 2))"

Bug: https://github.com/moby/libnetwork/issues/2227
Bug: https://github.com/docker/for-linux/issues/644
Bug: https://github.com/docker/for-linux/issues/525
Signed-off-by: Piotr Karbowski <piotr.karbowski@protonmail.ch>
(cherry picked from commit e8ceb97646)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-02-23 19:25:47 +01:00
Tonis Tiigi
80019e1b0e builder: fix blobs releasing via leases after pull
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit 5c01d06f72)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-02-23 19:22:35 +01:00
Tibor Vass
dc1606ad79 Merge pull request #42046 from thaJeztah/20.10_labels_regex_length_check
[20.10 backport] Check the length of the correct variable #42039
2021-02-23 10:00:58 -08:00
Adam Williams
2a220f1f3d Update Swarmkit to pick up fixes to heartbeat period and stalled tasks
Signed-off-by: Adam Williams <awilliams@mirantis.com>
(cherry picked from commit cbd2f726bf)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-02-23 09:57:37 +01:00
Sebastiaan van Stijn
148e6c9514 Merge pull request #42017 from thaJeztah/20.10_backport_build_fixes
[20.10 backport]: avoid creating parent dirs for XGlobalHeader, and fix permissions
2021-02-22 20:04:04 +01:00
Tonis Tiigi
da1a672102 builder: fix pull synchronization regression
Config resolution was synchronized on the wrong key, as the ref
variable is initialized only later in the same function. Using
the right key isn't fully correct either, as the synchronized method
changes properties of the puller instance and can't simply be skipped.
Added better error handling for the same case as well.
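
For illustration, a minimal Go sketch of deduplicating concurrent config
resolution with golang.org/x/sync/singleflight, keyed on the resolved
reference. This is not the actual fix (which, per the message above, could
not simply skip the synchronized call); resolveConfig is a hypothetical
stand-in for the puller's method.

```go
package sketch

import (
	"context"

	"golang.org/x/sync/singleflight"
)

var resolveGroup singleflight.Group

// resolveConfig is a hypothetical stand-in for the puller's config
// resolution. The bug class described above is keying Do() on a variable
// that is only initialized later in the function, so concurrent callers
// never actually share (or serialize) the work.
func resolveConfig(ctx context.Context, ref string) ([]byte, error) {
	v, err, _ := resolveGroup.Do(ref, func() (interface{}, error) {
		// Expensive registry round-trip would go here.
		return []byte("image config for " + ref), nil
	})
	if err != nil {
		return nil, err
	}
	return v.([]byte), nil
}
```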

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit b53ea19c49)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-02-19 10:08:04 +01:00
Nathan Carlson
0e001154f9 Check the length of the correct variable #42039
Signed-off-by: Nathan Carlson <carl4403@umn.edu>
(cherry picked from commit 8d73c1ad68)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-02-18 22:23:34 +01:00
Sebastiaan van Stijn
df2cfb4d33 Merge pull request #42045 from cpuguy83/20.10_fallback_manifest_on_bad_plat
[20.10] Fallback to manifest list when no platform match
2021-02-18 21:37:34 +01:00
Tibor Vass
7f6776fb5e Merge pull request #41971 from thaJeztah/20.10_backport_seccomp_update
[20.10 backport] profiles: seccomp: update to Linux 5.11 syscall list
2021-02-18 12:36:47 -08:00
Tibor Vass
caa48de224 Merge pull request #41974 from thaJeztah/20.10_backport_for_linux_1169_plugins_custom_runtime-panic
[20.10 backport] Add shim config for custom runtimes for plugins
2021-02-18 12:36:21 -08:00
Tibor Vass
6a86c25cf0 Merge pull request #41972 from thaJeztah/20.10_backport_net_leak_fix
[20.10 backport] builder: ensure libnetwork state files do not leak
2021-02-18 12:34:14 -08:00
Tibor Vass
ff486ae873 Merge pull request #41973 from thaJeztah/20.10_backport_fix_builder_inconsisent_platform
[20.10 backport] Fix builder inconsistent error on buggy platform
2021-02-18 12:32:53 -08:00
Tibor Vass
b55d9e1b91 Merge pull request #41976 from thaJeztah/20.10_backport_reuse
[20.10 backport] replace json.Unmarshal with NewFromJSON in Create
2021-02-18 12:30:18 -08:00
Tibor Vass
b81e649d2b Merge pull request #41977 from thaJeztah/20.10_backport_minor_fixes
[20.10 backport] assorted small fixes, docs changes, and contrib
2021-02-18 12:29:07 -08:00
Tibor Vass
5bb85a962a Merge pull request #42001 from thaJeztah/20.10_backport_fix_cgroup_rule_panic
[20.10 backport] Fix daemon panic when starting container with invalid device cgroup rule
2021-02-18 12:27:38 -08:00
Tibor Vass
6de7dbd225 Merge pull request #42012 from thaJeztah/20.10_backport_fix_nanocpus_casing
[20.10 backport] api/docs: fix NanoCPUs casing in swagger
2021-02-18 12:26:04 -08:00
Tibor Vass
8e2c5fc178 Merge pull request #42013 from thaJeztah/20.10_backport_42003_fix_userns_uid_username_match
[20.10 backport] Fix userns-remap option when username & UID match
2021-02-18 12:25:13 -08:00
Tibor Vass
f88c4aeaa0 Merge pull request #42014 from thaJeztah/20.10_backport_bump_runc_binary
[20.10 backport] update runc binary to v1.0.0-rc93
2021-02-18 12:24:02 -08:00
Tibor Vass
c981698f9a Merge pull request #42025 from thaJeztah/20.10_backport_bump_rootlesskit
[20.10 backport] Update rootlesskit to v0.13.1 to fix handling of IPv6 addresses
2021-02-18 12:17:55 -08:00
Tibor Vass
d6ae06a70a Merge pull request #42042 from thaJeztah/20.10_backport_docker_dind_integration_test_fix_subnet_missmatch
[20.10 backport] Update TestDaemonRestartWithLiveRestore: fix docker0 subnet mismatch
2021-02-18 12:15:05 -08:00
Brian Goff
3beb2e4422 Move cpu variant checks into platform matcher
Wrap platforms.Only and fall back to ignoring mismatches caused by empty
CPU variants. This just cleans things up and makes the logic reusable
in other places.
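
As a rough Go sketch (type names are mine, not moby's), a matcher that wraps
containerd's platforms.Only and additionally accepts platforms that differ
only in an empty CPU variant:

```go
package sketch

import (
	"github.com/containerd/containerd/platforms"
	ocispec "github.com/opencontainers/image-spec/specs-go/v1"
)

// onlyWithVariantFallback wraps platforms.Only, but tolerates a missing
// CPU variant on either side, since many images in the wild omit it.
type onlyWithVariantFallback struct {
	want  ocispec.Platform
	inner platforms.MatchComparer
}

func newMatcher(want ocispec.Platform) platforms.Matcher {
	return onlyWithVariantFallback{want: want, inner: platforms.Only(want)}
}

func (m onlyWithVariantFallback) Match(p ocispec.Platform) bool {
	if m.inner.Match(p) {
		return true
	}
	// Ignore the variant when one side left it empty; p is a copy, so
	// mutating it here does not affect the caller.
	if p.Variant == "" || m.want.Variant == "" {
		p.Variant = m.want.Variant
		return m.inner.Match(p)
	}
	return false
}
```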

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit 50f39e7247)
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
2021-02-18 20:12:07 +00:00
Brian Goff
0caf485abb Fallback to manifest list when no platform match
In some cases, in fact many in the wild, an image may have the incorrect
platform on the image config.
This can lead to failures to run an image, particularly when a user
specifies a `--platform`.
Typically what we see in the wild is a manifest list with an entry
for, as an example, linux/arm64 pointing to an image config that has
linux/amd64 on it.

This change falls back to looking up the manifest list for an image to
see if the manifest list shows the image as the correct one for that
platform.

In order to accomplish this we need to traverse the leases associated
with an image. Each image, if pulled with Docker 20.10, will have the
manifest list stored in the containerd content store with the resource
assigned to a lease keyed on the image ID.
So we look up the lease for the image, then look up the associated
resources to find the manifest list, then check the manifest list for a
platform match, then ensure that manifest refers to our image config.

This is only used as a fallback when a user has requested a
particular platform and the image config that we have does not match
that platform.
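
A hedged Go sketch of just the final check, given an OCI index already
recovered from the image's lease (the lease traversal itself is omitted;
the helper name is mine):

```go
package sketch

import (
	"github.com/containerd/containerd/platforms"
	"github.com/opencontainers/go-digest"
	ocispec "github.com/opencontainers/image-spec/specs-go/v1"
)

// manifestListMatches reports whether the manifest list has an entry for
// the requested platform whose digest is the manifest we already resolved
// for this image, i.e. the manifest list vouches for the image despite
// its buggy config.
func manifestListMatches(index ocispec.Index, want ocispec.Platform, have digest.Digest) bool {
	matcher := platforms.Only(want)
	for _, desc := range index.Manifests {
		if desc.Platform == nil || !matcher.Match(*desc.Platform) {
			continue
		}
		if desc.Digest == have {
			return true
		}
	}
	return false
}
```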

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit 4be5453215)
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
2021-02-18 20:12:00 +00:00
Brian Goff
24e1d7fa59 Merge pull request #41975 from thaJeztah/20.10_backport_41794_sized_logger
[20.10 backport] Handle long log messages correctly on SizedLogger
2021-02-17 16:51:24 -08:00
Aleksa Sarai
a6a88b3145 profiles: seccomp: update to Linux 5.11 syscall list
These syscalls (some of which have been in Linux for a while but were
missing from the profile) fall into a few buckets:

 * close_range(2), epoll_pwait2(2) are just extensions of existing "safe
   for everyone" syscalls.

 * The mountv2 API syscalls (fs*(2), move_mount(2), open_tree(2)) are
   all equivalent to aspects of mount(2) and thus go into the
   CAP_SYS_ADMIN category.

 * process_madvise(2) is similar to the other process_*(2) syscalls and
   thus goes in the CAP_SYS_PTRACE category.

Signed-off-by: Aleksa Sarai <asarai@suse.de>
(cherry picked from commit 54eff4354b)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-02-17 21:22:12 +01:00
Tonis Tiigi
e3750357a5 builder: ensure libnetwork state files do not leak
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit 7c7e168902)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-02-17 21:21:25 +01:00
Brian Goff
ab5711e619 Fix builder inconsistent error on buggy platform
When pulling an image by platform, it is possible for the image's
configured platform to not match what was in the manifest list.
The image itself is buggy because either the manifest list is incorrect
or the image config is incorrect. In any case, this is preventing people
from upgrading because many times users do not have control over these
buggy images.

This was not a problem in 19.03 because we did not compare on platform
before. It just assumed if we had the image it was the one we wanted
regardless of platform, which has its own problems.

Example Dockerfile that has this problem:

```Dockerfile
FROM --platform=linux/arm64 k8s.gcr.io/build-image/debian-iptables:buster-v1.3.0
RUN echo hello
```

This fails the first time you try to build after it finishes pulling but
before performing the `RUN` command.
On the second attempt it works because the image is already there and
does not hit the code that errors out on platform mismatch (actually, it
ignores errors if an image is returned at all).

Must be run with the classic builder (DOCKER_BUILDKIT=0).

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit 399695305c)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-02-17 21:20:46 +01:00
Brian Goff
df2a989769 Add shim config for custom runtimes for plugins
This fixes a panic when an admin specifies a custom default runtime:
when a plugin is started, the shim config is nil.

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit 2903863a1d)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-02-17 21:20:03 +01:00
Kazuyoshi Kato
d13e162a63 Handle long log messages correctly on SizedLogger
Loggers that implement BufSize() (e.g. awslogs) use the method to
tell Copier about the maximum log line length. However, loggerWithCache
and RingBuffer hide the method by wrapping loggers.

As a result, Copier uses its default 16KB limit, which breaks log
lines > 16KB even when the destinations can handle that.

This change implements BufSize() on loggerWithCache and RingBuffer to
make sure these logger wrappers don't hide the method on the underlying
loggers.
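
The wrapper pattern, as a trimmed Go sketch (the interfaces are simplified
stand-ins for moby's logger package, and the -1 "no preference" sentinel is
an assumption):

```go
package sketch

// Logger is a trimmed stand-in for moby's logger interface.
type Logger interface {
	Log(msg []byte) error
}

// bufSizer is the optional interface Copier probes for; drivers such as
// awslogs implement it to advertise their maximum line length.
type bufSizer interface {
	BufSize() int
}

// ringLogger wraps another Logger. Before the fix, wrappers like this hid
// the wrapped logger's BufSize(), so Copier fell back to its 16KB default.
type ringLogger struct{ l Logger }

func (r *ringLogger) Log(msg []byte) error { return r.l.Log(msg) }

// BufSize forwards to the wrapped logger when it implements bufSizer and
// reports "no preference" (-1 here) otherwise.
func (r *ringLogger) BufSize() int {
	if bs, ok := r.l.(bufSizer); ok {
		return bs.BufSize()
	}
	return -1
}
```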

Fixes #41794.

Signed-off-by: Kazuyoshi Kato <katokazu@amazon.com>
(cherry picked from commit bb11365e96)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-02-17 21:19:02 +01:00
Jim Lin
34446d0343 replace json.Unmarshal with NewFromJSON in Create
Signed-off-by: Jim Lin <b04705003@ntu.edu.tw>
(cherry picked from commit c9ec21e17a)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-02-17 21:18:19 +01:00
Sebastiaan van Stijn
c00fb1383f docs: fix double "the" in existing API versions
Backport of 2db5676c6e to the swagger files
used in the documentation

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 240d0b37bb)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-02-17 21:17:42 +01:00
Frederico F. de Oliveira
b7e6803ec4 swagger.yaml: Remove extra 'the' wrapped by newline
This PR was originally proposed by @phillc here: https://github.com/docker/engine/pull/456

Signed-off-by: FreddieOliveira <fredf_oliveira@ufu.br>
(cherry picked from commit 2db5676c6e)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-02-17 21:17:40 +01:00
Kir Kolyshkin
420de4c569 contrib/check-config.sh: fix INET_XFRM_MODE_TRANSPORT
This parameter was removed by kernel commit 4c145dce260137,
which made its way to kernel v5.3-rc1. Since that commit,
the functionality is built-in (i.e. it is available as long
as CONFIG_XFRM is on).

Make the check conditional.

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit 06d9020fac)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-02-17 21:17:39 +01:00
Kir Kolyshkin
8412078b1e contrib/check-config.sh: fix IOSCHED_CFQ CFQ_GROUP_IOSCHED
These config options are removed by kernel commit f382fb0bcef4,
which made its way into kernel v5.0-rc1.

Make the check conditional.

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit 18e0543587)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-02-17 21:17:37 +01:00
Kir Kolyshkin
bb0866f04e contrib/check-config.sh: fix MEMCG_SWAP_ENABLED
Kernel commit 2d1c498072de69e (which made its way into kernel v5.8-rc1)
removed CONFIG_MEMCG_SWAP_ENABLED Kconfig option, making swap accounting
always enabled (unless swapaccount=0 boot option is provided).

Make the check conditional.

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit 070f9d9dd3)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-02-17 21:17:35 +01:00
Kir Kolyshkin
db47bec3c7 contrib/check-config.sh: fix NF_NAT_NEEDED
CONFIG_NF_NAT_NEEDED was removed in kernel commit 4806e975729f99c7,
which made its way into v5.2-rc1. The functionality is now under
NF_NAT which we already check for.

Make the check for NF_NAT_NEEDED conditional.

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit 03da41152a)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-02-17 21:17:33 +01:00
Kir Kolyshkin
6bc47ca4b4 contrib/check-config.sh: fix NF_NAT_IPV4
CONFIG_NF_NAT_IPV4 was removed in kernel commit 3bf195ae6037e310,
which made its way into v5.1-rc1. The functionality is now under
NF_NAT which we already check for.

Make the check for NF_NAT_IPV4 conditional.

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit eeb53c1f22)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-02-17 21:17:31 +01:00
Kir Kolyshkin
491642e696 contrib/check-config.sh: support for cgroupv2
Before:

> Generally Necessary:
> - cgroup hierarchy: nonexistent??
>     (see https://github.com/tianon/cgroupfs-mount)

After:

> Generally Necessary:
> - cgroup hierarchy: cgroupv2
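
check-config.sh does this detection in shell; a hedged Go equivalent checks
the filesystem magic of /sys/fs/cgroup:

```go
//go:build linux

package sketch

import "golang.org/x/sys/unix"

// cgroupHierarchy reports "cgroupv2" when /sys/fs/cgroup is a pure
// cgroup2 mount, mirroring what the updated script prints.
func cgroupHierarchy() (string, error) {
	var st unix.Statfs_t
	if err := unix.Statfs("/sys/fs/cgroup", &st); err != nil {
		return "", err
	}
	if st.Type == unix.CGROUP2_SUPER_MAGIC {
		return "cgroupv2", nil
	}
	return "cgroupv1 (or hybrid)", nil
}
```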

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit 76b59065ae)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-02-17 21:17:30 +01:00
gunadhya
cda6988478 Fix Error in daemon_unix.go and docker_cli_run_unit_test.go
Signed-off-by: gunadhya <6939749+gunadhya@users.noreply.github.com>
(cherry picked from commit 64465f3b5f)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-02-17 21:17:28 +01:00
Sebastiaan van Stijn
18543cd8c8 Merge pull request #42000 from thaJeztah/20.10_backport_fix_dockerfile_simple
[20.10 backport] Dockerfile.simple: Fix compile docker binary error with btrfs
2021-02-17 21:17:02 +01:00
Sebastiaan van Stijn
1640d7b986 Fix daemon panic when starting container with invalid device cgroup rule
This fixes a panic when an invalid "device cgroup rule" is passed, resulting
in an "index out of range".

This bug was introduced in the original implementation in 1756af6faf,
but was not reproducible when using the CLI, because the same commit also added
client-side validation on the flag before making an API request. The following
example uses an invalid rule (`c *:*  rwm` - two spaces before the permissions);

```console
$ docker run --rm --network=host --device-cgroup-rule='c *:*  rwm' busybox
invalid argument "c *:*  rwm" for "--device-cgroup-rule" flag: invalid device cgroup format 'c *:*  rwm'
```

Doing the same, but using the API results in a daemon panic when starting the container;

Create a container with an invalid device cgroup rule:

```console
curl -v \
  --unix-socket /var/run/docker.sock \
  "http://localhost/v1.41/containers/create?name=foobar" \
  -H "Content-Type: application/json" \
  -d '{"Image":"busybox:latest", "HostConfig":{"DeviceCgroupRules": ["c *:*  rwm"]}}'
```

Start the container:

```console
curl -v \
  --unix-socket /var/run/docker.sock \
  -X POST \
  "http://localhost/v1.41/containers/foobar/start"
```

Observe the daemon logs:

```
2021-01-22 12:53:03.313806 I | http: panic serving @: runtime error: index out of range [0] with length 0
goroutine 571 [running]:
net/http.(*conn).serve.func1(0xc000cb2d20)
	/usr/local/go/src/net/http/server.go:1795 +0x13b
panic(0x2f32380, 0xc000aebfc0)
	/usr/local/go/src/runtime/panic.go:679 +0x1b6
github.com/docker/docker/oci.AppendDevicePermissionsFromCgroupRules(0xc000175c00, 0x8, 0x8, 0xc0000bd380, 0x1, 0x4, 0x0, 0x0, 0xc0000e69c0, 0x0, ...)
	/go/src/github.com/docker/docker/oci/oci.go:34 +0x64f
```

This patch:

- fixes the panic, allowing the daemon to return an error on container start
- adds a unit-test to validate various permutations
- adds a "todo" to verify the regular expression (and handling) of the "a" (all) value

We should also consider performing this validation when _creating_ the container,
so that an error is produced early.
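
The panic-free parsing boils down to checking the submatch count before
indexing; a minimal Go sketch (the regexp approximates the accepted
"type major:minor perms" format and is not copied from the daemon):

```go
package sketch

import (
	"fmt"
	"regexp"
)

// deviceCgroupRule approximates the accepted format: a device type
// (a, b, or c), "major:minor" (numbers or '*'), and permissions built
// from r, w, and m, separated by single spaces.
var deviceCgroupRule = regexp.MustCompile(`^([acb]) ([0-9]+|\*):([0-9]+|\*) ([rwm]{1,3})$`)

// parseDeviceCgroupRule returns an error instead of indexing into an
// empty match; `c *:*  rwm` (two spaces) produces no submatches at all,
// which is exactly the case that used to panic.
func parseDeviceCgroupRule(rule string) (typ, major, minor, perms string, err error) {
	ss := deviceCgroupRule.FindStringSubmatch(rule)
	if len(ss) != 5 {
		return "", "", "", "", fmt.Errorf("invalid device cgroup rule format: %q", rule)
	}
	return ss[1], ss[2], ss[3], ss[4], nil
}
```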

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 5cc1753f2c)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-02-17 21:16:01 +01:00
Sebastiaan van Stijn
6e3f2acdac docs: fix NanoCPUs casing
While the field in the Go struct is named `NanoCPUs`, it has a JSON label that
uses `NanoCpus`, which was added in the original pull request (it is not clear
what the reason was); 846baf1fd3

Some notes:

- Go matches JSON field names case-insensitively, so when *using* the API,
  both casings should work, but when inspecting a container, the field is
  returned as `NanoCpus`.
- This only affects Containers.Resources. The `Limits` and `Reservation`
  for SwarmKit services and SwarmKit "nodes" do not override the name
  for JSON, so have the canonical (`NanoCPUs`) casing.
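
The quirk in isolation, as a small Go sketch (the field is shown alone, not
the full Resources struct):

```go
package sketch

import "encoding/json"

// Resources isolates the casing quirk: the Go field is NanoCPUs, but the
// JSON tag serializes it as "NanoCpus", which is what inspect returns.
type Resources struct {
	NanoCPUs int64 `json:"NanoCpus"`
}

// Both input casings decode into the same field, because encoding/json
// matches field names case-insensitively on unmarshal.
func decodeBothCasings() (a, b Resources, err error) {
	if err = json.Unmarshal([]byte(`{"NanoCpus": 2000000000}`), &a); err != nil {
		return
	}
	err = json.Unmarshal([]byte(`{"NanoCPUs": 2000000000}`), &b)
	return
}
```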

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 8e2343ffd4)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-02-17 21:15:21 +01:00
Sebastiaan van Stijn
ad777ff3bc api: fix NanoCPUs casing in swagger
While the field in the Go struct is named `NanoCPUs`, it has a JSON label that
uses `NanoCpus`, which was added in the original pull request (it is not clear
what the reason was); 846baf1fd3

Some notes:

- Go matches JSON field names case-insensitively, so when *using* the API,
  both casings should work, but when inspecting a container, the field is
  returned as `NanoCpus`.
- This only affects Containers.Resources. The `Limits` and `Reservation`
  for SwarmKit services and SwarmKit "nodes" do not override the name
  for JSON, so have the canonical (`NanoCPUs`) casing.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 2bd46ed7e5)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-02-17 21:15:19 +01:00
Grant Millar
94d2467613 Fix userns-remap option when username & UID match
Signed-off-by: Grant Millar <rid@cylo.io>
(cherry picked from commit 2ad187fd4a)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-02-17 21:14:40 +01:00
Sebastiaan van Stijn
acb8a48a3c update runc binary to v1.0.0-rc93
full diff: https://github.com/opencontainers/runc/compare/v1.0.0-rc92...v1.0.0-rc93
release notes: https://github.com/opencontainers/runc/releases/tag/v1.0.0-rc93

Release notes for runc v1.0.0-rc93
-------------------------------------------------

This is the last feature-rich RC release and we are in a feature-freeze until
1.0. 1.0.0~rc94 will be released in a few weeks with minimal bug fixes only,
and 1.0.0 will be released soon afterwards.

- runc's cgroupv2 support is no longer considered experimental. It is now
  believed to be fully ready for production deployments. In addition, runc's
  cgroup code has been improved:
    - The systemd cgroup driver has been improved to be more resilient and
      handle more systemd properties correctly.
    - We now make use of openat2(2) when possible to improve the security of
      cgroup operations (in future runc will be wholesale ported to libpathrs to
      get this protection in all codepaths).
- runc's mountinfo parsing code has been reworked significantly, making
  container startup times significantly faster and less wasteful in general.
- runc now has special handling for seccomp profiles to avoid making new
  syscalls unusable for glibc. This is done by installing a custom prefix to
  all seccomp filters which returns -ENOSYS for syscalls that are newer than
  any syscall in the profile (meaning they have a larger syscall number; a
  minimal sketch of this idea follows this list).

  This should not cause any regressions (because previously users would simply
  get -EPERM rather than -ENOSYS, and the rule applied above is the most
  conservative rule possible) but please report any regressions you find as a
  result of this change -- in particular, programs which have special fallback
  code that is only run in the case of -EPERM.
- runc now supports the following new runtime-spec features:
    - The umask of a container can now be specified.
    - The new Linux 5.9 capabilities (CAP_PERFMON, CAP_BPF, and
      CAP_CHECKPOINT_RESTORE) are now supported.
    - The "unified" cgroup configuration option, which allows users to explicitly
      specify the limits based on the cgroup file names rather than abstracting
      them through OCI configuration. This is currently limited in scope to
      cgroupv2.
- Various rootless containers improvements:
    - runc will no longer cause conflicts if a user specifies a custom device
      which conflicts with an automatically-configured device -- the user
      device takes precedence.
    - runc no longer panics if /sys/fs/cgroup is missing in rootless mode.
- runc --root is now always treated as local to the current working directory.
- The --no-pivot-root hardening was improved to handle nested mounts properly
  (please note that we still strongly recommend that users do not use
  --no-pivot-root -- it is still an insecure option).
- A large number of code cleanliness and other various cleanups, including
  fairly large changes to our tests and CI to make them all run more
  efficiently.
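
The -ENOSYS prefix mentioned above, modeled in Go (a conceptual toy sketch,
not libseccomp code; the profile representation is invented):

```go
package sketch

// Possible filter actions in this toy model.
const (
	actAllow = iota
	actErrnoEPERM
	actErrnoENOSYS
)

// maxProfileSyscall returns the largest syscall number mentioned in a
// hypothetical profile mapping syscall number to action.
func maxProfileSyscall(profile map[int]int) int {
	max := -1
	for nr := range profile {
		if nr > max {
			max = nr
		}
	}
	return max
}

// evaluate applies the prefix rule before the profile's own rules:
// syscalls newer than anything in the profile get ENOSYS (so glibc's
// feature probes degrade gracefully), everything else keeps the
// profile's configured action, with EPERM as the default as before.
func evaluate(profile map[int]int, nr int) int {
	if nr > maxProfileSyscall(profile) {
		return actErrnoENOSYS
	}
	if act, ok := profile[nr]; ok {
		return act
	}
	return actErrnoEPERM
}
```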

For packagers the following changes have been made which will have impact on
your packaging of runc:

- The "selinux" and "apparmor" buildtags have been removed, and now all runc
  builds will have SELinux and AppArmor support enabled. Note that "seccomp"
  is still optional (though we very highly recommend you enable it).
- make install DESTDIR= now functions correctly.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 28e5a3c5a4)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-02-17 21:13:50 +01:00
Sebastiaan van Stijn
5d442b1cb7 pkg/archive: Unpack() use 0755 permissions for missing directories
Commit edb62a3ace fixed a bug in MkdirAllAndChown()
that caused the specified permissions to not be applied correctly. As a result
of that bug, the configured umask would be applied.

When extracting archives, Unpack() used 0777 permissions when creating missing
parent directories for files that were extracted.
Before edb62a3ace, this resulted in the actual
permissions of those directories being 0755 on most configurations (using a
default 022 umask).

Creating these directories should not depend on the host's umask configuration.
This patch changes the permissions to 0755 to match the previous behavior,
and to reflect the original intent of using 0755 as default.
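
One detail worth spelling out: in Go, the mode passed to os.MkdirAll is
filtered by the process umask at the mkdir(2) call, so guaranteeing 0755
takes an explicit chmod. A minimal illustration (moby's MkdirAllAndChown
does the equivalent for every directory it creates):

```go
package sketch

import "os"

// mkdirAll0755 creates missing parents with permissions that do not
// depend on the daemon's umask: MkdirAll's mode argument is subject to
// umask, so the directory is chmodded explicitly afterwards. Only the
// leaf is fixed up here.
func mkdirAll0755(path string) error {
	if err := os.MkdirAll(path, 0o755); err != nil {
		return err
	}
	return os.Chmod(path, 0o755)
}
```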

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 25ada76437)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-02-17 21:12:57 +01:00
Tonis Tiigi
5db18e0aba archive: avoid creating parent dirs for XGlobalHeader
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit ba7906aef3)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-02-17 21:12:55 +01:00
Sebastiaan van Stijn
94feac18d2 Update rootlesskit to v0.13.1 to fix handling of IPv6 addresses
v0.13.1

- Refactor `ParsePortSpec` to handle IPv6 addresses, and improve validation

v0.13.0

- `rootlesskit --pidns`: fix propagating exit status
- Support cgroup2 evacuation, e.g., `systemd-run -p Delegate=yes --user -t rootlesskit --cgroupns --pidns --evacuate-cgroup2=evac --net=slirp4netns bash`

v0.12.0

- Port forwarding API now supports setting `ChildIP`
- The `vendor` directory is no longer included in this repo. Run `go mod vendor` if you need it

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit e32ae1973a)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-02-17 21:11:17 +01:00
Alexis Ries
cc377d27ac Update TestDaemonRestartWithLiveRestore: fix docker0 subnet mismatch
Fix docker0 subnet mismatch when running docker-in-docker (dind)

Signed-off-by: Alexis Ries <ries.alexis@gmail.com>
(cherry picked from commit 96e103feb1)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-02-17 21:07:36 +01:00
Brian Goff
fae366b323 Merge pull request #41970 from thaJeztah/20.10_backport_testing_fixes
2021-02-17 09:37:19 -08:00
Brian Goff
dfce527001 Merge pull request #42030 from thaJeztah/20.10_backport_cgroup2ci_jenkins
2021-02-16 09:37:37 -08:00
Sebastiaan van Stijn
fc07fecfb5 TestBuildUserNamespaceValidateCapabilitiesAreV2: verify build completed
Check if the `docker build` completed successfully before continuing.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit fa480403c7)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-02-16 14:20:02 +01:00
Sebastiaan van Stijn
f7893961de TestBuildUserNamespaceValidateCapabilitiesAreV2: use correct image name
This currently doesn't make a difference, because load.FrozenImagesLinux()
currently loads all frozen images, not just the specified one, but this
prepares for the case where that is fixed/implemented at some point.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 26965fbfa0)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-02-16 14:20:00 +01:00
Akihiro Suda
d31b2141ae Jenkinsfile: add cgroup2
Thanks to Stefan Scherer for setting up the Jenkins nodes.

Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
(cherry picked from commit c23b99f4db)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-02-16 09:22:30 +01:00
Akihiro Suda
5de9bc7e01 TestInspectOomKilledTrue: skip on cgroup v2
The test fails intermittently on cgroup v2.

```
=== FAIL: amd64.integration.container TestInspectOomKilledTrue (0.53s)
    kill_test.go:171: assertion failed: true (true bool) != false (inspect.State.OOMKilled bool)
```

Tracked in issue 41929

Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
(cherry picked from commit c316dd7cc5)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-02-16 09:22:27 +01:00
Tibor Vass
11443bc681 Merge pull request #41957 from AkihiroSuda/cherrypick-41892-2010
[20.10 backport] pkg/archive: allow mknodding FIFO inside userns
2021-02-11 11:56:54 -08:00
Lei Jiang
ff49cb3e33 Dockerfile.simple: Fix compile docker binary error with btrfs
Using the image built from Dockerfile.simple to build the docker binary fails
because <btrfs/ioctl.h> is not found; we need to install libbtrfs-dev to fix this.
```
Building: bundles/dynbinary-daemon/dockerd-dev
GOOS="" GOARCH="" GOARM=""
.gopath/src/github.com/docker/docker/daemon/graphdriver/btrfs/btrfs.go:8:10: fatal error: btrfs/ioctl.h: No such file or directory
 #include <btrfs/ioctl.h>

```

Signed-off-by: Lei Jitang <leijitang@outlook.com>
(cherry picked from commit dd7ee8ea3e)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-02-08 17:47:03 +01:00
Sebastiaan van Stijn
49e706e14c Dockerfile.buildx: update buildx to v0.5.1
full diff: https://github.com/docker/buildx/compare/v0.3.1...v0.5.1

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 30b20a6bdd)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-02-03 13:54:27 +01:00
Sebastiaan van Stijn
0211909bde testing: update docker-py 4.4.1
run docker-py integration tests of the latest release;

full diff: https://github.com/docker/docker-py/compare/4.3.0...4.4.1

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 14fb165085)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-02-03 13:54:22 +01:00
Sebastiaan van Stijn
faf6442f80 integration: fix TestBuildUserNamespaceValidateCapabilitiesAreV2 not using frozen image
Commit f2f5106c92 added this test to verify loading
of images that were built with user-namespaces enabled.

However, this test spins up a new daemon, instead of using the daemon that's
set up by the test-suite's `TestMain()` (which loads the frozen images).

As a result, the `debian:bullseye` image was pulled from Docker Hub when running
the test;

    Calling POST /v1.41/images/load?quiet=1
    Applying tar in /go/src/github.com/docker/docker/bundles/test-integration/TestBuildUserNamespaceValidateCapabilitiesAreV2/d4d366b15997b/root/165536.165536/overlay2/3f7f9375197667acaf7bc810b34689c21f8fed9c52c6765c032497092ca023d6/diff" storage-driver=overlay
    Applied tar sha256:845f0e5159140e9dbcad00c0326c2a506fbe375aa1c229c43f082867d283149c to 3f7f9375197667acaf7bc810b34689c21f8fed9c52c6765c032497092ca023d6, size: 5922359
    Calling POST /v1.41/build?buildargs=null&cachefrom=null&cgroupparent=&cpuperiod=0&cpuquota=0&cpusetcpus=&cpusetmems=&cpushares=0&dockerfile=&labels=null&memory=0&memswap=0&networkmode=&rm=0&shmsize=0&t=capabilities%3A1.0&target=&ulimits=null&version=
    Trying to pull debian from https://registry-1.docker.io v2
    Fetching manifest from remote" digest="sha256:f169dbadc9021fc0b08e371d50a772809286a167f62a8b6ae86e4745878d283d" error="<nil>" remote="docker.io/library/debian:bullseye
    Pulling ref from V2 registry: debian:bullseye
    ...

This patch updates `TestBuildUserNamespaceValidateCapabilitiesAreV2` to load the
frozen image. `StartWithBusybox` is also changed to `Start`, because the test
is not using the busybox image, so there's no need to load it.

In a followup, we should probably add some utilities to make this easier to set up
(and to allow passing the list of frozen images that we want to load, without having
to "hard-code" the image name to load).

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 46dfc31342)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-02-03 13:54:16 +01:00
Brian Goff
f0e526f43e Make test work with rootless mode
Using `d.Kill()` with rootless mode causes the restarted daemon to not
be able to start containerd (it times out).

Originally this was SIGKILLing the daemon because we were hoping to not
have to manipulate on-disk state, but since we need to anyway, we can
shut it down normally.

I also tested this to ensure the test fails correctly without the fix
that the test was added to check for.

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit e6591a9c7a)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-02-03 13:54:09 +01:00
Brian Goff
11ecfe8a81 Merge pull request #41959 from AkihiroSuda/cherrypick-41917-2010
[20.10 backport] TestCgroupNamespacesRunOlderClient: support cgroup v2
2021-02-02 10:54:01 -08:00
Brian Goff
49df387b71 Merge pull request #41958 from AkihiroSuda/cherrypick-41894-2010
2021-02-02 10:52:42 -08:00
Brian Goff
54f561aeb9 Merge pull request #41956 from AkihiroSuda/cherrypick-41947-2010
[20.10 backport] rootless: prevent the service hanging when stopping (set systemd KillMode to mixed)
2021-02-02 10:51:08 -08:00
Akihiro Suda
519a55f491 TestCgroupNamespacesRunOlderClient: support cgroup v2
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
2021-02-02 14:34:08 +09:00
Akihiro Suda
b6a6a35684 docker info: adjust warning strings for cgroup v2
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
(cherry picked from commit 00225e220f)
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
2021-02-02 14:32:13 +09:00
Akihiro Suda
25bd941ae4 docker info: silence unhandleable warnings
The following warnings in `docker info` are now discarded,
because there is no action the user can actually take.

On cgroup v1:
- "WARNING: No blkio weight support"
- "WARNING: No blkio weight_device support"

On cgroup v2:
- "WARNING: No kernel memory TCP limit support"
- "WARNING: No oom kill disable support"

`docker run` still prints warnings when an attempt is made to use the missing feature.

Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
(cherry picked from commit 8086443a44)
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
2021-02-02 14:32:00 +09:00
Akihiro Suda
a287e76e15 pkg/archive: allow mknodding FIFO inside userns
Fix #41803

Also attempt to mknod devices.
Mknodding devices is likely to fail, but it is still worth trying when
running with a seccomp user notification.
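
A hedged Go sketch of that special-file handling, using x/sys/unix (not
moby's exact code): always create FIFOs, and still attempt block/char
devices, which a seccomp user notification handler may service on our
behalf.

```go
//go:build linux

package sketch

import (
	"archive/tar"

	"golang.org/x/sys/unix"
)

// createSpecialFile creates FIFOs unconditionally (mknod of a FIFO is
// permitted inside a user namespace) and still attempts device nodes,
// which usually fail with EPERM in a userns unless a seccomp user
// notification handler intercepts mknod.
func createSpecialFile(hdr *tar.Header, path string) error {
	mode := uint32(hdr.Mode & 0o7777)
	switch hdr.Typeflag {
	case tar.TypeFifo:
		return unix.Mkfifo(path, mode)
	case tar.TypeBlock:
		mode |= unix.S_IFBLK
	case tar.TypeChar:
		mode |= unix.S_IFCHR
	}
	dev := unix.Mkdev(uint32(hdr.Devmajor), uint32(hdr.Devminor))
	return unix.Mknod(path, mode, int(dev))
}
```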

Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
(cherry picked from commit d5d5cccb7e)
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
2021-02-02 14:29:49 +09:00
Akihiro Suda
58283298d7 rootless: set systemd KillMode to mixed
Now `systemctl --user stop docker` completes within just 1 or 2 seconds.

Fix issue 41944 ("Docker rootless does not exit properly if containers are running")

See systemd.kill(5) https://www.freedesktop.org/software/systemd/man/systemd.kill.html

Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
(cherry picked from commit 05566adf71)
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
2021-02-02 14:26:47 +09:00
Tibor Vass
46229ca1d8 Use golang.org/x/sys/execabs
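
The swap is mechanical: execabs.Command has the same shape as os/exec's
Command, but refuses to run a binary that PATH lookup resolved from the
current directory. A one-function sketch:

```go
package sketch

import "golang.org/x/sys/execabs"

// gitVersion runs "git version". Unlike os/exec, execabs fails instead
// of executing a "git" binary found in the current working directory.
func gitVersion() ([]byte, error) {
	return execabs.Command("git", "version").CombinedOutput()
}
```
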
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 7ca0cb7ffa)
Signed-off-by: Tibor Vass <tibor@docker.com>
2021-01-28 21:33:12 +00:00
Brian Goff
a7d4af84bd pull: Validate layer digest format
Otherwise a malformed or empty digest may cause a panic.
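
The engine vendors github.com/opencontainers/go-digest, whose Parse rejects
malformed or empty digests up front; a minimal sketch of validating before
use (the exact call site differs):

```go
package sketch

import (
	"fmt"

	"github.com/opencontainers/go-digest"
)

// validateLayerDigest rejects malformed or empty digests before any
// lookup can index into an invalid value and panic.
func validateLayerDigest(s string) (digest.Digest, error) {
	dgst, err := digest.Parse(s)
	if err != nil {
		return "", fmt.Errorf("invalid layer digest %q: %w", s, err)
	}
	return dgst, nil
}
```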

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
2021-01-28 21:33:12 +00:00
Brian Goff
611eb6ffb3 buildkit: Apply apparmor profile
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
2021-01-28 21:33:12 +00:00
Tibor Vass
4afe620fac vendor buildkit 68bb095353c65bc3993fd534c26cf77fe05e61b1
Signed-off-by: Tibor Vass <tibor@docker.com>
2021-01-28 20:20:56 +00:00
Brian Goff
e908cc3901 Use real root with 0701 perms
Various dirs in /var/lib/docker contain data that needs to be mounted
into a container. For this reason, these dirs are set to be owned by the
remapped root user, otherwise there can be permissions issues.
However, this unnecessarily exposes these dirs to an unprivileged user
on the host.

Instead, set the ownership of these dirs to the real root (or rather the
UID/GID of dockerd) with 0701 permissions, which allows the remapped
root to enter the directories but not read/write to them.
The remapped root needs to enter these dirs so the container's rootfs
can be configured, e.g. to mount /etc/resolv.conf.

This prevents an unprivileged user from having read/write access to
these dirs on the host.
The flip side of this is that now any user can enter these directories.
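
For concreteness, 0701 is owner rwx plus execute-only for group and others:
anyone may traverse into the directory, but only the owner can list or
create entries. A tiny sketch of creating such a dir (the chmod, as in the
earlier sketch, makes the mode umask-proof; the helper name is mine):

```go
package sketch

import "os"

// ensureTraversalOnlyDir creates a directory the remapped root can
// *enter* (execute bit for others) but cannot read or write.
func ensureTraversalOnlyDir(path string) error {
	if err := os.MkdirAll(path, 0o701); err != nil {
		return err
	}
	return os.Chmod(path, 0o701)
}
```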

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
2021-01-26 17:23:32 +00:00
Brian Goff
bfedd27259 Do not set DOCKER_TMP to be owned by remapped root
The remapped root does not need access to this dir.
Having this owned by the remapped root opens the host up to an
unprivileged user on the host being able to escalate privileges.

While it would not be normal for the remapped UID to be used outside of
the container context, it could happen.

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
2021-01-26 17:23:32 +00:00
Brian Goff
edb62a3ace Ensure MkdirAllAndChown also sets perms
Generally, if we ever need to change perms of a dir between versions,
this ensures the permissions actually change when we think they should,
without having to handle special cases if the dir already existed.

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
2021-01-26 17:23:32 +00:00
11367 changed files with 697882 additions and 1596059 deletions

.DEREK.yml (new file, +22)

@@ -0,0 +1,22 @@
curators:
- aboch
- alexellis
- andrewhsu
- anonymuse
- arkodg
- chanwit
- ehazlett
- fntlnz
- gianarb
- kolyshkin
- mgoelzer
- olljanat
- programmerq
- rheinwein
- ripcurld0
- thajeztah
features:
- comments
- pr_description_required

@@ -1,21 +0,0 @@
{
"name": "moby",
"build": {
"context": "..",
"dockerfile": "../Dockerfile",
"target": "devcontainer"
},
"workspaceFolder": "/go/src/github.com/docker/docker",
"workspaceMount": "source=${localWorkspaceFolder},target=/go/src/github.com/docker/docker,type=bind,consistency=cached",
"remoteUser": "root",
"runArgs": ["--privileged"],
"customizations": {
"vscode": {
"extensions": [
"golang.go"
]
}
}
}

@@ -1,6 +1,6 @@
/.git
.git
.go-pkg-cache
.gopath
bundles
vendor/pkg
# build artifacts
/bundles/
/cli/winresources/dockerd/winres.json
/cli/winresources/dockerd/*.syso

.gitattributes (vendored, -3)

@@ -1,3 +0,0 @@
Dockerfile* linguist-language=Dockerfile
vendor.mod linguist-language=Go-Module
vendor.sum linguist-language=Go-Checksums

.github/CODEOWNERS (vendored, +3)

@@ -5,6 +5,9 @@
builder/** @tonistiigi
contrib/mkimage/** @tianon
daemon/graphdriver/devmapper/** @rhvgoyal
daemon/graphdriver/lcow/** @johnstep
daemon/graphdriver/overlay/** @dmcgowan
daemon/graphdriver/overlay2/** @dmcgowan
daemon/graphdriver/windows/** @johnstep
daemon/logger/awslogs/** @samuelkarp

.github/ISSUE_TEMPLATE.md (vendored, new file, +70)

@@ -0,0 +1,70 @@
<!--
If you are reporting a new issue, make sure that we do not have any duplicates
already open. You can ensure this by searching the issue list for this
repository. If there is a duplicate, please close your issue and add a comment
to the existing issue instead.
If you suspect your issue is a bug, please edit your issue description to
include the BUG REPORT INFORMATION shown below. If you fail to provide this
information within 7 days, we cannot debug your issue and will close it. We
will, however, reopen it if you later provide the information.
For more information about reporting issues, see
https://github.com/moby/moby/blob/master/CONTRIBUTING.md#reporting-other-issues
---------------------------------------------------
GENERAL SUPPORT INFORMATION
---------------------------------------------------
The GitHub issue tracker is for bug reports and feature requests.
General support for **docker** can be found at the following locations:
- Docker Support Forums - https://forums.docker.com
- Slack - community.docker.com #general channel
- Post a question on StackOverflow, using the Docker tag
General support for **moby** can be found at the following locations:
- Moby Project Forums - https://forums.mobyproject.org
- Slack - community.docker.com #moby-project channel
- Post a question on StackOverflow, using the Moby tag
---------------------------------------------------
BUG REPORT INFORMATION
---------------------------------------------------
Use the commands below to provide key information from your environment:
You do NOT have to include this information if this is a FEATURE REQUEST
-->
**Description**
<!--
Briefly describe the problem you are having in a few paragraphs.
-->
**Steps to reproduce the issue:**
1.
2.
3.
**Describe the results you received:**
**Describe the results you expected:**
**Additional information you deem important (e.g. issue happens only occasionally):**
**Output of `docker version`:**
```
(paste your output here)
```
**Output of `docker info`:**
```
(paste your output here)
```
**Additional environment details (AWS, VirtualBox, physical, etc.):**

@@ -1,146 +0,0 @@
name: Bug report
description: Create a report to help us improve
labels:
- kind/bug
- status/0-triage
body:
- type: markdown
attributes:
value: |
Thank you for taking the time to report a bug!
If this is a security issue please report it to the [Docker Security team](mailto:security@docker.com).
- type: textarea
id: description
attributes:
label: Description
description: Please give a clear and concise description of the bug
validations:
required: true
- type: textarea
id: repro
attributes:
label: Reproduce
description: Steps to reproduce the bug
placeholder: |
1. docker run ...
2. docker kill ...
3. docker rm ...
validations:
required: true
- type: textarea
id: expected
attributes:
label: Expected behavior
description: What is the expected behavior?
placeholder: |
E.g. "`docker rm` should remove the container and cleanup all associated data"
- type: textarea
id: version
attributes:
label: docker version
description: Output of `docker version`
render: bash
placeholder: |
Client:
Version: 20.10.17
API version: 1.41
Go version: go1.17.11
Git commit: 100c70180fde3601def79a59cc3e996aa553c9b9
Built: Mon Jun 6 21:36:39 UTC 2022
OS/Arch: linux/amd64
Context: default
Experimental: true
Server:
Engine:
Version: 20.10.17
API version: 1.41 (minimum version 1.12)
Go version: go1.17.11
Git commit: a89b84221c8560e7a3dee2a653353429e7628424
Built: Mon Jun 6 22:32:38 2022
OS/Arch: linux/amd64
Experimental: true
containerd:
Version: 1.6.6
GitCommit: 10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1
runc:
Version: 1.1.2
GitCommit: a916309fff0f838eb94e928713dbc3c0d0ac7aa4
docker-init:
Version: 0.19.0
GitCommit:
validations:
required: true
- type: textarea
id: info
attributes:
label: docker info
description: Output of `docker info`
render: bash
placeholder: |
Client:
Context: default
Debug Mode: false
Plugins:
buildx: Docker Buildx (Docker Inc., 0.8.2)
compose: Docker Compose (Docker Inc., 2.6.0)
Server:
Containers: 4
Running: 2
Paused: 0
Stopped: 2
Images: 80
Server Version: 20.10.17
Storage Driver: overlay2
Backing Filesystem: xfs
Supports d_type: true
Native Overlay Diff: false
userxattr: false
Logging Driver: local
Cgroup Driver: cgroupfs
Cgroup Version: 1
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc io.containerd.runc.v2 io.containerd.runtime.v1.linux
Default Runtime: runc
Init Binary: docker-init
containerd version: 10c12954828e7c7c9b6e0ea9b0c02b01407d3ae1
runc version: a916309fff0f838eb94e928713dbc3c0d0ac7aa4
init version:
Security Options:
apparmor
seccomp
Profile: default
Kernel Version: 5.13.0-1031-azure
Operating System: Ubuntu 20.04.4 LTS
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 15.63GiB
Name: dev
ID: UC44:2RFL:7NQ5:GGFW:34O5:DYRE:CLOH:VLGZ:64AZ:GFXC:PY6H:SAHY
Docker Root Dir: /var/lib/docker
Debug Mode: true
File Descriptors: 46
Goroutines: 134
System Time: 2022-07-06T18:07:54.812439392Z
EventsListeners: 0
Registry: https://index.docker.io/v1/
Labels:
Experimental: true
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: true
validations:
required: true
- type: textarea
id: additional
attributes:
label: Additional Info
description: Additional info you want to provide such as logs, system info, environment, etc.
validations:
required: false

@@ -1,8 +0,0 @@
blank_issues_enabled: false
contact_links:
- name: Security and Vulnerabilities
url: https://github.com/moby/moby/blob/master/SECURITY.md
about: Please report any security issues or vulnerabilities responsibly to the Docker security team. Please do not use the public issue tracker.
- name: Questions and Discussions
url: https://github.com/moby/moby/discussions/new
about: Use Github Discussions to ask questions and/or open discussion topics.

@@ -1,13 +0,0 @@
name: Feature request
description: Missing functionality? Come tell us about it!
labels:
- kind/feature
- status/0-triage
body:
- type: textarea
id: description
attributes:
label: Description
description: What is the feature you want to see?
validations:
required: true

@@ -19,18 +19,12 @@ Please provide the following information:
**- How to verify it**
**- Human readable description for the release notes**
**- Description for the changelog**
<!--
Write a short (one line) summary that describes the changes in this
pull request for inclusion in the changelog.
It must be placed inside the below triple backticks section.
NOTE: Only fill this section if changes introduced in this PR are user-facing.
The PR must have a relevant impact/ label.
pull request for inclusion in the changelog:
-->
```markdown changelog
```
**- A picture of a cute animal (not mandatory but encouraged)**

@@ -1,27 +0,0 @@
name: 'Setup Runner'
description: 'Composite action to set up the GitHub Runner for jobs in the test.yml workflow'
runs:
using: composite
steps:
- run: |
sudo modprobe ip_vs
sudo modprobe ipv6
sudo modprobe ip6table_filter
sudo modprobe -r overlay
sudo modprobe overlay redirect_dir=off
shell: bash
- run: |
if [ ! -e /etc/docker/daemon.json ]; then
echo '{}' | sudo tee /etc/docker/daemon.json >/dev/null
fi
DOCKERD_CONFIG=$(jq '.+{"experimental":true,"live-restore":true,"ipv6":true,"fixed-cidr-v6":"2001:db8:1::/64"}' /etc/docker/daemon.json)
sudo tee /etc/docker/daemon.json <<<"$DOCKERD_CONFIG" >/dev/null
sudo service docker restart
shell: bash
- run: |
./contrib/check-config.sh || true
shell: bash
- run: |
docker info
shell: bash

@@ -1,14 +0,0 @@
name: 'Setup Tracing'
description: 'Composite action to set up the tracing for test jobs'
runs:
using: composite
steps:
- run: |
set -e
# Jaeger is set up on Windows through an inline run step. If you update Jaeger here, don't forget to update
# the version set in .github/workflows/.windows.yml.
docker run -d --net=host --name jaeger -e COLLECTOR_OTLP_ENABLED=true jaegertracing/all-in-one:1.46
docker0_ip="$(ip -f inet addr show docker0 | grep -Po 'inet \K[\d.]+')"
echo "OTEL_EXPORTER_OTLP_ENDPOINT=http://${docker0_ip}:4318" >> "${GITHUB_ENV}"
shell: bash

@@ -1,60 +0,0 @@
# reusable workflow
name: .dco
# TODO: hide reusable workflow from the UI. Tracked in https://github.com/community/community/discussions/12025
# Default to 'contents: read', which grants actions to read commits.
#
# If any permission is set, any permission not included in the list is
# implicitly set to "none".
#
# see https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#permissions
permissions:
contents: read
on:
workflow_call:
env:
ALPINE_VERSION: "3.21"
jobs:
run:
runs-on: ubuntu-24.04
timeout-minutes: 10 # guardrails timeout for the whole job
steps:
-
name: Checkout
uses: actions/checkout@v4
with:
fetch-depth: 0
-
name: Dump context
uses: actions/github-script@v7
with:
script: |
console.log(JSON.stringify(context, null, 2));
-
name: Get base ref
id: base-ref
uses: actions/github-script@v7
with:
result-encoding: string
script: |
if (/^refs\/pull\//.test(context.ref) && context.payload?.pull_request?.base?.ref != undefined) {
return context.payload.pull_request.base.ref;
}
return context.ref.replace(/^refs\/heads\//g, '');
-
name: Validate
run: |
docker run --rm \
--quiet \
-v ./:/workspace \
-w /workspace \
-e VALIDATE_REPO \
-e VALIDATE_BRANCH \
alpine:${{ env.ALPINE_VERSION }} sh -c 'apk add --no-cache -q bash git openssh-client && git config --system --add safe.directory /workspace && hack/validate/dco'
env:
VALIDATE_REPO: ${{ github.server_url }}/${{ github.repository }}.git
VALIDATE_BRANCH: ${{ steps.base-ref.outputs.result }}

@@ -1,45 +0,0 @@
# reusable workflow
name: .test-prepare
# TODO: hide reusable workflow from the UI. Tracked in https://github.com/community/community/discussions/12025
# Default to 'contents: read', which grants actions to read commits.
#
# If any permission is set, any permission not included in the list is
# implicitly set to "none".
#
# see https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#permissions
permissions:
contents: read
on:
workflow_call:
outputs:
matrix:
description: Test matrix
value: ${{ jobs.run.outputs.matrix }}
jobs:
run:
runs-on: ubuntu-24.04
timeout-minutes: 120 # guardrails timeout for the whole job
outputs:
matrix: ${{ steps.set.outputs.matrix }}
steps:
-
name: Checkout
uses: actions/checkout@v4
-
name: Create matrix
id: set
uses: actions/github-script@v7
with:
script: |
let matrix = ['graphdriver'];
if ("${{ contains(github.event.pull_request.labels.*.name, 'containerd-integration') || github.event_name != 'pull_request' }}" == "true") {
matrix.push('snapshotter');
}
await core.group(`Set matrix`, async () => {
core.info(`matrix: ${JSON.stringify(matrix)}`);
core.setOutput('matrix', JSON.stringify(matrix));
});

@@ -1,123 +0,0 @@
# reusable workflow
name: .test-unit
# TODO: hide reusable workflow from the UI. Tracked in https://github.com/community/community/discussions/12025
# Default to 'contents: read', which grants actions to read commits.
#
# If any permission is set, any permission not included in the list is
# implicitly set to "none".
#
# see https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#permissions
permissions:
contents: read
on:
workflow_call:
env:
GO_VERSION: "1.24.9"
GOTESTLIST_VERSION: v0.3.1
TESTSTAT_VERSION: v0.1.25
SETUP_BUILDX_VERSION: edge
SETUP_BUILDKIT_IMAGE: moby/buildkit:latest
jobs:
unit:
runs-on: ubuntu-24.04
timeout-minutes: 120 # guardrails timeout for the whole job
continue-on-error: ${{ github.event_name != 'pull_request' }}
strategy:
fail-fast: false
matrix:
mode:
- ""
- firewalld
steps:
-
name: Checkout
uses: actions/checkout@v4
-
name: Set up runner
uses: ./.github/actions/setup-runner
-
name: Prepare
run: |
CACHE_DEV_SCOPE=dev
if [[ "${{ matrix.mode }}" == *"firewalld"* ]]; then
echo "FIREWALLD=true" >> $GITHUB_ENV
CACHE_DEV_SCOPE="${CACHE_DEV_SCOPE}firewalld"
fi
echo "CACHE_DEV_SCOPE=${CACHE_DEV_SCOPE}" >> $GITHUB_ENV
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
with:
version: ${{ env.SETUP_BUILDX_VERSION }}
driver-opts: image=${{ env.SETUP_BUILDKIT_IMAGE }}
buildkitd-flags: --debug
-
name: Build dev image
uses: docker/bake-action@v6
with:
targets: dev
set: |
dev.cache-from=type=gha,scope=${{ env.CACHE_DEV_SCOPE }}
-
name: Test
run: |
make -o build test-unit
-
name: Prepare reports
if: always()
run: |
mkdir -p bundles /tmp/reports
find bundles -type f \( -name '*-report.json' -o -name '*.log' -o -name '*.out' -o -name '*.prof' -o -name '*-report.xml' \) -print | xargs sudo tar -czf /tmp/reports.tar.gz
tar -xzf /tmp/reports.tar.gz -C /tmp/reports
sudo chown -R $(id -u):$(id -g) /tmp/reports
tree -nh /tmp/reports
-
name: Send to Codecov
uses: codecov/codecov-action@v4
with:
directory: ./bundles
env_vars: RUNNER_OS
flags: unit
token: ${{ secrets.CODECOV_TOKEN }} # used to upload coverage reports: https://github.com/moby/buildkit/pull/4660#issue-2142122533
-
name: Upload reports
if: always()
uses: actions/upload-artifact@v4
with:
name: test-reports-unit--${{ matrix.mode }}
path: /tmp/reports/*
retention-days: 1
unit-report:
runs-on: ubuntu-24.04
timeout-minutes: 10
continue-on-error: ${{ github.event_name != 'pull_request' }}
if: always()
needs:
- unit
steps:
-
name: Set up Go
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
cache: false
-
name: Download reports
uses: actions/download-artifact@v4
with:
pattern: test-reports-unit-*
path: /tmp/reports
-
name: Install teststat
run: |
go install github.com/vearutop/teststat@${{ env.TESTSTAT_VERSION }}
-
name: Create summary
run: |
find /tmp/reports -type f -name '*-go-test-report.json' -exec teststat -markdown {} \+ >> $GITHUB_STEP_SUMMARY

@@ -1,472 +0,0 @@
# reusable workflow
name: .test
# TODO: hide reusable workflow from the UI. Tracked in https://github.com/community/community/discussions/12025
# Default to 'contents: read', which grants actions to read commits.
#
# If any permission is set, any permission not included in the list is
# implicitly set to "none".
#
# see https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#permissions
permissions:
contents: read
on:
workflow_call:
inputs:
storage:
required: true
type: string
default: "graphdriver"
env:
GO_VERSION: "1.24.9"
GOTESTLIST_VERSION: v0.3.1
TESTSTAT_VERSION: v0.1.25
ITG_CLI_MATRIX_SIZE: 6
DOCKER_EXPERIMENTAL: 1
DOCKER_GRAPHDRIVER: ${{ inputs.storage == 'snapshotter' && 'overlayfs' || 'overlay2' }}
TEST_INTEGRATION_USE_SNAPSHOTTER: ${{ inputs.storage == 'snapshotter' && '1' || '' }}
SETUP_BUILDX_VERSION: edge
SETUP_BUILDKIT_IMAGE: moby/buildkit:latest
jobs:
docker-py:
runs-on: ubuntu-24.04
timeout-minutes: 120 # guardrails timeout for the whole job
continue-on-error: ${{ github.event_name != 'pull_request' }}
steps:
-
name: Checkout
uses: actions/checkout@v4
-
name: Set up runner
uses: ./.github/actions/setup-runner
-
name: Set up tracing
uses: ./.github/actions/setup-tracing
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
with:
version: ${{ env.SETUP_BUILDX_VERSION }}
driver-opts: image=${{ env.SETUP_BUILDKIT_IMAGE }}
buildkitd-flags: --debug
-
name: Build dev image
uses: docker/bake-action@v6
with:
targets: dev
set: |
dev.cache-from=type=gha,scope=dev
-
name: Test
run: |
make TEST_SKIP_INTEGRATION_CLI=1 -o build test-docker-py
-
name: Prepare reports
if: always()
run: |
mkdir -p bundles /tmp/reports
find bundles -path '*/root/*overlay2' -prune -o -type f \( -name '*-report.json' -o -name '*.log' -o -name '*.out' -o -name '*.prof' -o -name '*-report.xml' \) -print | xargs sudo tar -czf /tmp/reports.tar.gz
tar -xzf /tmp/reports.tar.gz -C /tmp/reports
sudo chown -R $(id -u):$(id -g) /tmp/reports
tree -nh /tmp/reports
curl -sSLf localhost:16686/api/traces?service=integration-test-client > /tmp/reports/jaeger-trace.json
-
name: Test daemon logs
if: always()
run: |
cat bundles/test-docker-py/docker.log
-
name: Upload reports
if: always()
uses: actions/upload-artifact@v4
with:
name: test-reports-docker-py-${{ inputs.storage }}
path: /tmp/reports/*
retention-days: 1
integration-flaky:
runs-on: ubuntu-24.04
timeout-minutes: 120 # guardrails timeout for the whole job
continue-on-error: ${{ github.event_name != 'pull_request' }}
steps:
-
name: Checkout
uses: actions/checkout@v4
-
name: Set up runner
uses: ./.github/actions/setup-runner
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
with:
version: ${{ env.SETUP_BUILDX_VERSION }}
driver-opts: image=${{ env.SETUP_BUILDKIT_IMAGE }}
buildkitd-flags: --debug
-
name: Build dev image
uses: docker/bake-action@v6
with:
targets: dev
set: |
dev.cache-from=type=gha,scope=dev
-
name: Test
run: |
make -o build test-integration-flaky
env:
TEST_SKIP_INTEGRATION_CLI: 1
integration-prepare:
runs-on: ubuntu-24.04
timeout-minutes: 10 # guardrails timeout for the whole job
continue-on-error: ${{ github.event_name != 'pull_request' }}
outputs:
includes: ${{ steps.set.outputs.includes }}
steps:
-
name: Create matrix includes
id: set
uses: actions/github-script@v7
with:
script: |
let includes = [
{ os: 'ubuntu-22.04', mode: '' },
{ os: 'ubuntu-22.04', mode: 'rootless' },
{ os: 'ubuntu-22.04', mode: 'systemd' },
{ os: 'ubuntu-24.04', mode: '' },
// { os: 'ubuntu-24.04', mode: 'rootless' }, // FIXME: https://github.com/moby/moby/pull/49579#issuecomment-2698622223
{ os: 'ubuntu-24.04', mode: 'systemd' },
// { os: 'ubuntu-24.04', mode: 'rootless-systemd' }, // FIXME: https://github.com/moby/moby/issues/44084
];
if ("${{ inputs.storage }}" == "snapshotter") {
includes.push({ os: 'ubuntu-24.04', mode: 'firewalld' });
}
await core.group(`Set matrix`, async () => {
core.info(`matrix: ${JSON.stringify(includes)}`);
core.setOutput('includes', JSON.stringify(includes));
});
-
name: Show matrix
run: |
echo ${{ steps.set.outputs.includes }}
integration:
runs-on: ${{ matrix.os }}
timeout-minutes: 120 # guardrails timeout for the whole job
continue-on-error: ${{ github.event_name != 'pull_request' }}
needs:
- integration-prepare
strategy:
fail-fast: false
matrix:
include: ${{ fromJson(needs.integration-prepare.outputs.includes) }}
steps:
-
name: Checkout
uses: actions/checkout@v4
-
name: Set up runner
uses: ./.github/actions/setup-runner
-
name: Set up tracing
uses: ./.github/actions/setup-tracing
-
name: Prepare
run: |
CACHE_DEV_SCOPE=dev
if [[ "${{ matrix.mode }}" == *"rootless"* ]]; then
echo "DOCKER_ROOTLESS=1" >> $GITHUB_ENV
fi
if [[ "${{ matrix.mode }}" == *"systemd"* ]]; then
echo "SYSTEMD=true" >> $GITHUB_ENV
CACHE_DEV_SCOPE="${CACHE_DEV_SCOPE}systemd"
fi
if [[ "${{ matrix.mode }}" == *"firewalld"* ]]; then
echo "FIREWALLD=true" >> $GITHUB_ENV
CACHE_DEV_SCOPE="${CACHE_DEV_SCOPE}firewalld"
fi
echo "CACHE_DEV_SCOPE=${CACHE_DEV_SCOPE}" >> $GITHUB_ENV
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
with:
version: ${{ env.SETUP_BUILDX_VERSION }}
driver-opts: image=${{ env.SETUP_BUILDKIT_IMAGE }}
buildkitd-flags: --debug
-
name: Build dev image
uses: docker/bake-action@v6
with:
targets: dev
set: |
dev.cache-from=type=gha,scope=${{ env.CACHE_DEV_SCOPE }}
-
name: Test
run: |
make -o build test-integration
env:
TEST_SKIP_INTEGRATION_CLI: 1
TESTCOVERAGE: 1
-
name: Prepare reports
if: always()
run: |
reportsName=${{ matrix.os }}
if [ -n "${{ matrix.mode }}" ]; then
reportsName="$reportsName-${{ matrix.mode }}"
fi
reportsPath="/tmp/reports/$reportsName"
echo "TESTREPORTS_NAME=$reportsName" >> $GITHUB_ENV
mkdir -p bundles $reportsPath
find bundles -path '*/root/*overlay2' -prune -o -type f \( -name '*-report.json' -o -name '*.log' -o -name '*.out' -o -name '*.prof' -o -name '*-report.xml' \) -print | xargs sudo tar -czf /tmp/reports.tar.gz
tar -xzf /tmp/reports.tar.gz -C $reportsPath
sudo chown -R $(id -u):$(id -g) $reportsPath
tree -nh $reportsPath
curl -sSLf localhost:16686/api/traces?service=integration-test-client > $reportsPath/jaeger-trace.json
-
name: Send to Codecov
uses: codecov/codecov-action@v4
with:
directory: ./bundles/test-integration
env_vars: RUNNER_OS
flags: integration,${{ matrix.mode }}
token: ${{ secrets.CODECOV_TOKEN }} # used to upload coverage reports: https://github.com/moby/buildkit/pull/4660#issue-2142122533
-
name: Test daemon logs
if: always()
run: |
cat bundles/test-integration/docker.log
-
name: Upload reports
if: always()
uses: actions/upload-artifact@v4
with:
name: test-reports-integration-${{ inputs.storage }}-${{ env.TESTREPORTS_NAME }}
path: /tmp/reports/*
retention-days: 1
integration-report:
runs-on: ubuntu-24.04
timeout-minutes: 10
continue-on-error: ${{ github.event_name != 'pull_request' }}
if: always()
needs:
- integration
steps:
-
name: Set up Go
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
cache: false
-
name: Download reports
uses: actions/download-artifact@v4
with:
path: /tmp/reports
pattern: test-reports-integration-${{ inputs.storage }}-*
merge-multiple: true
-
name: Install teststat
run: |
go install github.com/vearutop/teststat@${{ env.TESTSTAT_VERSION }}
-
name: Create summary
run: |
find /tmp/reports -type f -name '*-go-test-report.json' -exec teststat -markdown {} \+ >> $GITHUB_STEP_SUMMARY
integration-cli-prepare:
runs-on: ubuntu-24.04
timeout-minutes: 120 # guardrails timeout for the whole job
continue-on-error: ${{ github.event_name != 'pull_request' }}
outputs:
matrix: ${{ steps.set.outputs.matrix }}
steps:
-
name: Checkout
uses: actions/checkout@v4
-
name: Set up Go
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
cache: false
-
name: Install gotestlist
run: |
go install github.com/crazy-max/gotestlist/cmd/gotestlist@${{ env.GOTESTLIST_VERSION }}
-
name: Create test matrix
id: tests
working-directory: ./integration-cli
run: |
# This step creates a matrix for integration-cli tests. Test suites
# are distributed across the integration-cli job through a matrix.
# Overrides such as "./..." (to run the "Test integration" step
# exclusively) and specific test suites that take a long time to run
# are also added to the matrix.
matrix="$(gotestlist -d ${{ env.ITG_CLI_MATRIX_SIZE }} -o "./..." -o "DockerSwarmSuite" -o "DockerNetworkSuite|DockerExternalVolumeSuite" ./...)"
echo "matrix=$matrix" >> $GITHUB_OUTPUT
-
name: Create gha matrix
id: set
uses: actions/github-script@v7
with:
script: |
let matrix = {
test: ${{ steps.tests.outputs.matrix }},
include: [],
};
// For some reason, GHA doesn't combine a dynamically defined
// 'include' with other matrix variables that aren't part of the
// include items.
// Moreover, since the goal is to run only relevant tests with
// firewalld enabled to minimize the number of CI jobs, we
// statically define the list of test suites that we want to run.
if ("${{ inputs.storage }}" == "snapshotter") {
matrix.include.push({
'mode': 'firewalld',
'test': 'DockerCLINetworkSuite|DockerCLIPortSuite|DockerDaemonSuite'
});
matrix.include.push({
'mode': 'firewalld',
'test': 'DockerSwarmSuite'
});
matrix.include.push({
'mode': 'firewalld',
'test': 'DockerNetworkSuite'
});
}
await core.group(`Set matrix`, async () => {
core.info(`matrix: ${JSON.stringify(matrix)}`);
core.setOutput('matrix', JSON.stringify(matrix));
});
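// Illustrative only (suite names assumed): with storage=snapshotter the
// resulting matrix looks roughly like
//   { "test": ["DockerCLIBuildSuite|...", "..."],
//     "include": [{ "mode": "firewalld", "test": "DockerSwarmSuite" }, ...] }
// Include entries that match no existing combination are simply added as
// extra combinations, which is why 'mode' only exists on the firewalld jobs.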
-
name: Show final gha matrix
run: |
echo ${{ steps.set.outputs.matrix }}
integration-cli:
runs-on: ubuntu-24.04
timeout-minutes: 120 # guardrails timeout for the whole job
continue-on-error: ${{ github.event_name != 'pull_request' }}
needs:
- integration-cli-prepare
strategy:
fail-fast: false
matrix: ${{ fromJson(needs.integration-cli-prepare.outputs.matrix) }}
steps:
-
name: Checkout
uses: actions/checkout@v4
-
name: Set up runner
uses: ./.github/actions/setup-runner
-
name: Set up tracing
uses: ./.github/actions/setup-tracing
-
name: Prepare
run: |
CACHE_DEV_SCOPE=dev
if [[ "${{ matrix.mode }}" == *"firewalld"* ]]; then
echo "FIREWALLD=true" >> $GITHUB_ENV
CACHE_DEV_SCOPE="${CACHE_DEV_SCOPE}firewalld"
fi
echo "CACHE_DEV_SCOPE=${CACHE_DEV_SCOPE}" >> $GITHUB_ENV
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
with:
version: ${{ env.SETUP_BUILDX_VERSION }}
driver-opts: image=${{ env.SETUP_BUILDKIT_IMAGE }}
buildkitd-flags: --debug
-
name: Build dev image
uses: docker/bake-action@v6
with:
targets: dev
set: |
dev.cache-from=type=gha,scope=dev
-
name: Test
run: |
make -o build test-integration
env:
TEST_SKIP_INTEGRATION: 1
TESTCOVERAGE: 1
TESTFLAGS: "-test.run (${{ matrix.test }})/"
-
name: Prepare reports
if: always()
run: |
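# Hash the test regex: it can contain characters such as '|' and '/' that
# are not safe in artifact names or directory paths.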
reportsName=$(echo -n "${{ matrix.test }}" | sha256sum | cut -d " " -f 1)
reportsPath=/tmp/reports/$reportsName
echo "TESTREPORTS_NAME=$reportsName" >> $GITHUB_ENV
mkdir -p bundles $reportsPath
echo "${{ matrix.test }}" | tr -s '|' '\n' | tee -a "$reportsPath/tests.txt"
find bundles -path '*/root/*overlay2' -prune -o -type f \( -name '*-report.json' -o -name '*.log' -o -name '*.out' -o -name '*.prof' -o -name '*-report.xml' \) -print | xargs sudo tar -czf /tmp/reports.tar.gz
tar -xzf /tmp/reports.tar.gz -C $reportsPath
sudo chown -R $(id -u):$(id -g) $reportsPath
tree -nh $reportsPath
curl -sSLf localhost:16686/api/traces?service=integration-test-client > $reportsPath/jaeger-trace.json
-
name: Send to Codecov
uses: codecov/codecov-action@v4
with:
directory: ./bundles/test-integration
env_vars: RUNNER_OS
flags: integration-cli
token: ${{ secrets.CODECOV_TOKEN }} # used to upload coverage reports: https://github.com/moby/buildkit/pull/4660#issue-2142122533
-
name: Test daemon logs
if: always()
run: |
cat bundles/test-integration/docker.log
-
name: Upload reports
if: always()
uses: actions/upload-artifact@v4
with:
name: test-reports-integration-cli-${{ inputs.storage }}-${{ env.TESTREPORTS_NAME }}
path: /tmp/reports/*
retention-days: 1
integration-cli-report:
runs-on: ubuntu-24.04
timeout-minutes: 10
continue-on-error: ${{ github.event_name != 'pull_request' }}
if: always()
needs:
- integration-cli
steps:
-
name: Set up Go
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
cache: false
-
name: Download reports
uses: actions/download-artifact@v4
with:
path: /tmp/reports
pattern: test-reports-integration-cli-${{ inputs.storage }}-*
merge-multiple: true
-
name: Install teststat
run: |
go install github.com/vearutop/teststat@${{ env.TESTSTAT_VERSION }}
-
name: Create summary
run: |
find /tmp/reports -type f -name '*-go-test-report.json' -exec teststat -markdown {} \+ >> $GITHUB_STEP_SUMMARY


@@ -1,515 +0,0 @@
# reusable workflow
name: .windows
# TODO: hide reusable workflow from the UI. Tracked in https://github.com/community/community/discussions/12025
# Default to 'contents: read', which grants actions permission to read commits.
#
# If any permission is set, any permission not included in the list is
# implicitly set to "none".
#
# see https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#permissions
permissions:
contents: read
on:
workflow_call:
inputs:
os:
required: true
type: string
storage:
required: true
type: string
default: "graphdriver"
send_coverage:
required: false
type: boolean
default: false
env:
GO_VERSION: "1.24.9"
GOTESTLIST_VERSION: v0.3.1
TESTSTAT_VERSION: v0.1.25
WINDOWS_BASE_IMAGE: mcr.microsoft.com/windows/servercore
WINDOWS_BASE_TAG_2022: ltsc2022
WINDOWS_BASE_TAG_2025: ltsc2025
TEST_IMAGE_NAME: moby:test
TEST_CTN_NAME: moby
DOCKER_BUILDKIT: 0
ITG_CLI_MATRIX_SIZE: 6
jobs:
build:
runs-on: ${{ inputs.os }}
timeout-minutes: 120 # guardrails timeout for the whole job
env:
GOPATH: ${{ github.workspace }}\go
GOBIN: ${{ github.workspace }}\go\bin
BIN_OUT: ${{ github.workspace }}\out
defaults:
run:
working-directory: ${{ env.GOPATH }}/src/github.com/docker/docker
steps:
-
name: Checkout
uses: actions/checkout@v4
with:
path: ${{ env.GOPATH }}/src/github.com/docker/docker
-
name: Env
run: |
Get-ChildItem Env: | Out-String
-
name: Init
run: |
New-Item -ItemType "directory" -Path "${{ github.workspace }}\go-build"
New-Item -ItemType "directory" -Path "${{ github.workspace }}\go\pkg\mod"
If ("${{ inputs.os }}" -eq "windows-2025") {
echo "WINDOWS_BASE_IMAGE_TAG=${{ env.WINDOWS_BASE_TAG_2025 }}" | Out-File -FilePath $Env:GITHUB_ENV -Encoding utf-8 -Append
} ElseIf ("${{ inputs.os }}" -eq "windows-2022") {
echo "WINDOWS_BASE_IMAGE_TAG=${{ env.WINDOWS_BASE_TAG_2022 }}" | Out-File -FilePath $Env:GITHUB_ENV -Encoding utf-8 -Append
}
-
name: Docker info
run: |
docker info
-
name: Build base image
run: |
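# Note: --build-arg without a value (e.g. --build-arg WINDOWS_BASE_IMAGE)
# forwards the variable of the same name from the build environment.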
& docker build `
--build-arg WINDOWS_BASE_IMAGE `
--build-arg WINDOWS_BASE_IMAGE_TAG `
-t ${{ env.TEST_IMAGE_NAME }} `
-f Dockerfile.windows .
-
name: Build binaries
run: |
& docker run --name ${{ env.TEST_CTN_NAME }} -e "DOCKER_GITCOMMIT=${{ github.sha }}" `
${{ env.TEST_IMAGE_NAME }} hack\make.ps1 -Daemon -Client
-
name: Copy artifacts
run: |
New-Item -ItemType "directory" -Path "${{ env.BIN_OUT }}"
docker cp "${{ env.TEST_CTN_NAME }}`:c`:\gopath\src\github.com\docker\docker\bundles\docker.exe" ${{ env.BIN_OUT }}\
docker cp "${{ env.TEST_CTN_NAME }}`:c`:\gopath\src\github.com\docker\docker\bundles\dockerd.exe" ${{ env.BIN_OUT }}\
docker cp "${{ env.TEST_CTN_NAME }}`:c`:\gopath\bin\gotestsum.exe" ${{ env.BIN_OUT }}\
docker cp "${{ env.TEST_CTN_NAME }}`:c`:\containerd\bin\containerd.exe" ${{ env.BIN_OUT }}\
docker cp "${{ env.TEST_CTN_NAME }}`:c`:\containerd\bin\containerd-shim-runhcs-v1.exe" ${{ env.BIN_OUT }}\
-
name: Upload artifacts
uses: actions/upload-artifact@v4
with:
name: build-${{ inputs.storage }}-${{ inputs.os }}
path: ${{ env.BIN_OUT }}/*
if-no-files-found: error
retention-days: 2
unit-test:
runs-on: ${{ inputs.os }}
timeout-minutes: 120 # guardrails timeout for the whole job
env:
GOPATH: ${{ github.workspace }}\go
GOBIN: ${{ github.workspace }}\go\bin
defaults:
run:
working-directory: ${{ env.GOPATH }}/src/github.com/docker/docker
steps:
-
name: Checkout
uses: actions/checkout@v4
with:
path: ${{ env.GOPATH }}/src/github.com/docker/docker
-
name: Env
run: |
Get-ChildItem Env: | Out-String
-
name: Init
run: |
New-Item -ItemType "directory" -Path "${{ github.workspace }}\go-build"
New-Item -ItemType "directory" -Path "${{ github.workspace }}\go\pkg\mod"
New-Item -ItemType "directory" -Path "bundles"
If ("${{ inputs.os }}" -eq "windows-2025") {
echo "WINDOWS_BASE_IMAGE_TAG=${{ env.WINDOWS_BASE_TAG_2025 }}" | Out-File -FilePath $Env:GITHUB_ENV -Encoding utf-8 -Append
} ElseIf ("${{ inputs.os }}" -eq "windows-2022") {
echo "WINDOWS_BASE_IMAGE_TAG=${{ env.WINDOWS_BASE_TAG_2022 }}" | Out-File -FilePath $Env:GITHUB_ENV -Encoding utf-8 -Append
}
-
name: Docker info
run: |
docker info
-
name: Build base image
run: |
& docker build `
--build-arg WINDOWS_BASE_IMAGE `
--build-arg WINDOWS_BASE_IMAGE_TAG `
-t ${{ env.TEST_IMAGE_NAME }} `
-f Dockerfile.windows .
-
name: Test
run: |
& docker run --name ${{ env.TEST_CTN_NAME }} -e "DOCKER_GITCOMMIT=${{ github.sha }}" `
-v "${{ env.GOPATH }}\src\github.com\docker\docker\bundles:C:\gopath\src\github.com\docker\docker\bundles" `
${{ env.TEST_IMAGE_NAME }} hack\make.ps1 -TestUnit
-
name: Send to Codecov
if: inputs.send_coverage
uses: codecov/codecov-action@v4
with:
working-directory: ${{ env.GOPATH }}\src\github.com\docker\docker
directory: bundles
env_vars: RUNNER_OS
flags: unit
token: ${{ secrets.CODECOV_TOKEN }} # used to upload coverage reports: https://github.com/moby/buildkit/pull/4660#issue-2142122533
-
name: Upload reports
if: always()
uses: actions/upload-artifact@v4
with:
name: ${{ inputs.os }}-${{ inputs.storage }}-unit-reports
path: ${{ env.GOPATH }}\src\github.com\docker\docker\bundles\*
retention-days: 1
unit-test-report:
runs-on: ubuntu-24.04
timeout-minutes: 120 # guardrails timeout for the whole job
if: always()
needs:
- unit-test
steps:
-
name: Set up Go
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
cache: false
-
name: Download artifacts
uses: actions/download-artifact@v4
with:
name: ${{ inputs.os }}-${{ inputs.storage }}-unit-reports
path: /tmp/artifacts
-
name: Install teststat
run: |
go install github.com/vearutop/teststat@${{ env.TESTSTAT_VERSION }}
-
name: Create summary
run: |
find /tmp/artifacts -type f -name '*-go-test-report.json' -exec teststat -markdown {} \+ >> $GITHUB_STEP_SUMMARY
integration-test-prepare:
runs-on: ubuntu-24.04
timeout-minutes: 120 # guardrails timeout for the whole job
outputs:
matrix: ${{ steps.tests.outputs.matrix }}
steps:
-
name: Checkout
uses: actions/checkout@v4
-
name: Set up Go
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
cache: false
-
name: Install gotestlist
run: |
go install github.com/crazy-max/gotestlist/cmd/gotestlist@${{ env.GOTESTLIST_VERSION }}
-
name: Create matrix
id: tests
working-directory: ./integration-cli
run: |
# This step creates a matrix for integration-cli tests. Test suites
# are distributed across the integration-test job through a matrix. An
# override, "./...", is also added to the matrix to run the
# "Test integration" step exclusively.
matrix="$(gotestlist -d ${{ env.ITG_CLI_MATRIX_SIZE }} -o "./..." ./...)"
echo "matrix=$matrix" >> $GITHUB_OUTPUT
-
name: Show matrix
run: |
echo ${{ steps.tests.outputs.matrix }}
integration-test:
runs-on: ${{ inputs.os }}
timeout-minutes: 120 # guardrails timeout for the whole job
continue-on-error: ${{ inputs.storage == 'snapshotter' && github.event_name != 'pull_request' }}
needs:
- build
- integration-test-prepare
strategy:
fail-fast: false
matrix:
storage:
- ${{ inputs.storage }}
runtime:
- builtin
- containerd
test: ${{ fromJson(needs.integration-test-prepare.outputs.matrix) }}
exclude:
- storage: snapshotter
runtime: builtin
env:
GOPATH: ${{ github.workspace }}\go
GOBIN: ${{ github.workspace }}\go\bin
BIN_OUT: ${{ github.workspace }}\out
defaults:
run:
working-directory: ${{ env.GOPATH }}/src/github.com/docker/docker
steps:
-
name: Checkout
uses: actions/checkout@v4
with:
path: ${{ env.GOPATH }}/src/github.com/docker/docker
-
name: Set up Go
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
cache: false
-
name: Set up Jaeger
run: |
# Jaeger is set up on Linux through the setup-tracing action. If you update Jaeger here, don't forget to
# update the version set in .github/actions/setup-tracing/action.yml.
Invoke-WebRequest -Uri "https://github.com/jaegertracing/jaeger/releases/download/v1.46.0/jaeger-1.46.0-windows-amd64.tar.gz" -OutFile ".\jaeger-1.46.0-windows-amd64.tar.gz"
tar -zxvf ".\jaeger-1.46.0-windows-amd64.tar.gz"
Start-Process '.\jaeger-1.46.0-windows-amd64\jaeger-all-in-one.exe'
echo "OTEL_EXPORTER_OTLP_ENDPOINT=http://127.0.0.1:4318" | Out-File -FilePath $Env:GITHUB_ENV -Encoding utf-8 -Append
shell: pwsh
-
name: Env
run: |
Get-ChildItem Env: | Out-String
-
name: Download artifacts
uses: actions/download-artifact@v4
with:
name: build-${{ inputs.storage }}-${{ inputs.os }}
path: ${{ env.BIN_OUT }}
-
name: Init
run: |
New-Item -ItemType "directory" -Path "bundles"
If ("${{ inputs.os }}" -eq "windows-2025") {
echo "WINDOWS_BASE_IMAGE_TAG=${{ env.WINDOWS_BASE_TAG_2025 }}" | Out-File -FilePath $Env:GITHUB_ENV -Encoding utf-8 -Append
} ElseIf ("${{ inputs.os }}" -eq "windows-2022") {
echo "WINDOWS_BASE_IMAGE_TAG=${{ env.WINDOWS_BASE_TAG_2022 }}" | Out-File -FilePath $Env:GITHUB_ENV -Encoding utf-8 -Append
}
Write-Output "${{ env.BIN_OUT }}" | Out-File -FilePath $env:GITHUB_PATH -Encoding utf8 -Append
$testName = ([System.BitConverter]::ToString((New-Object System.Security.Cryptography.SHA256Managed).ComputeHash([System.Text.Encoding]::UTF8.GetBytes("${{ matrix.test }}"))) -replace '-').ToLower()
echo "TESTREPORTS_NAME=$testName" | Out-File -FilePath $Env:GITHUB_ENV -Encoding utf-8 -Append
-
# Removes the Docker service that is currently installed on the runner. We
# could use Uninstall-Package, but it is not yet available on Windows runners.
# more info: https://github.com/actions/virtual-environments/blob/d3a5bad25f3b4326c5666bab0011ac7f1beec95e/images/win/scripts/Installers/Install-Docker.ps1#L11
name: Removing current daemon
run: |
if (Get-Service docker -ErrorAction SilentlyContinue) {
$dockerVersion = (docker version -f "{{.Server.Version}}")
Write-Host "Current installed Docker version: $dockerVersion"
# remove service
Stop-Service -Force -Name docker
Remove-Service -Name docker
# removes the event log entry. we could use "Remove-EventLog -LogName -Source docker"
# but this cmdlet is not available at the moment
$ErrorActionPreference = "SilentlyContinue"
& reg delete "HKLM\SYSTEM\CurrentControlSet\Services\EventLog\Application\docker" /f 2>&1 | Out-Null
$ErrorActionPreference = "Stop"
Write-Host "Service removed"
}
-
name: Starting test daemon
run: |
Write-Host "Creating service"
If ("${{ matrix.runtime }}" -eq "containerd") {
$runtimeArg="--default-runtime=io.containerd.runhcs.v1"
echo "DOCKER_WINDOWS_CONTAINERD_RUNTIME=1" | Out-File -FilePath $Env:GITHUB_ENV -Encoding utf-8 -Append
}
New-Item -ItemType Directory "$env:TEMP\moby-root" -ErrorAction SilentlyContinue | Out-Null
New-Item -ItemType Directory "$env:TEMP\moby-exec" -ErrorAction SilentlyContinue | Out-Null
Start-Process -Wait -NoNewWindow "${{ env.BIN_OUT }}\dockerd" `
-ArgumentList $runtimeArg, "--debug", `
"--host=npipe:////./pipe/docker_engine", `
"--data-root=$env:TEMP\moby-root", `
"--exec-root=$env:TEMP\moby-exec", `
"--pidfile=$env:TEMP\docker.pid", `
"--register-service"
If ("${{ inputs.storage }}" -eq "snapshotter") {
# Make the env-var visible to the service-managed dockerd, as there's no CLI flag for this option.
& reg add "HKLM\SYSTEM\CurrentControlSet\Services\docker" /v Environment /t REG_MULTI_SZ /s '@' /d TEST_INTEGRATION_USE_SNAPSHOTTER=1
echo "TEST_INTEGRATION_USE_SNAPSHOTTER=1" | Out-File -FilePath $Env:GITHUB_ENV -Encoding utf-8 -Append
}
Write-Host "Starting service"
Start-Service -Name docker
Write-Host "Service started successfully!"
-
name: Waiting for test daemon to start
run: |
$tries=20
Write-Host "Waiting for the test daemon to start..."
While ($true) {
$ErrorActionPreference = "SilentlyContinue"
& "${{ env.BIN_OUT }}\docker" version
$ErrorActionPreference = "Stop"
If ($LastExitCode -eq 0) {
break
}
$tries--
If ($tries -le 0) {
Throw "Failed to get a response from the daemon"
}
Write-Host -NoNewline "."
Start-Sleep -Seconds 1
}
Write-Host "Test daemon started and replied!"
If ("${{ matrix.runtime }}" -eq "containerd") {
$containerdProcesses = Get-Process -Name containerd -ErrorAction:SilentlyContinue
If (-not $containerdProcesses) {
Throw "containerd process is not running"
} else {
foreach ($process in $containerdProcesses) {
$processPath = (Get-Process -Id $process.Id -FileVersionInfo).FileName
Write-Output "Running containerd instance binary Path: $($processPath)"
}
}
}
env:
DOCKER_HOST: npipe:////./pipe/docker_engine
-
name: Docker info
run: |
& "${{ env.BIN_OUT }}\docker" info
env:
DOCKER_HOST: npipe:////./pipe/docker_engine
-
name: Building contrib/busybox
run: |
& "${{ env.BIN_OUT }}\docker" build -t busybox `
--build-arg WINDOWS_BASE_IMAGE `
--build-arg WINDOWS_BASE_IMAGE_TAG `
.\contrib\busybox\
env:
DOCKER_HOST: npipe:////./pipe/docker_engine
-
name: List images
run: |
& "${{ env.BIN_OUT }}\docker" images
env:
DOCKER_HOST: npipe:////./pipe/docker_engine
-
name: Test integration
if: matrix.test == './...'
run: |
.\hack\make.ps1 -TestIntegration
env:
DOCKER_HOST: npipe:////./pipe/docker_engine
GO111MODULE: "off"
TEST_CLIENT_BINARY: ${{ env.BIN_OUT }}\docker
-
name: Test integration-cli
if: matrix.test != './...'
run: |
.\hack\make.ps1 -TestIntegrationCli
env:
DOCKER_HOST: npipe:////./pipe/docker_engine
GO111MODULE: "off"
TEST_CLIENT_BINARY: ${{ env.BIN_OUT }}\docker
INTEGRATION_TESTRUN: ${{ matrix.test }}
-
name: Send to Codecov
if: inputs.send_coverage
uses: codecov/codecov-action@v4
with:
working-directory: ${{ env.GOPATH }}\src\github.com\docker\docker
directory: bundles
env_vars: RUNNER_OS
flags: integration,${{ matrix.runtime }}
token: ${{ secrets.CODECOV_TOKEN }} # used to upload coverage reports: https://github.com/moby/buildkit/pull/4660#issue-2142122533
-
name: Docker info
run: |
& "${{ env.BIN_OUT }}\docker" info
env:
DOCKER_HOST: npipe:////./pipe/docker_engine
-
name: Stop daemon
if: always()
run: |
$ErrorActionPreference = "SilentlyContinue"
Stop-Service -Force -Name docker
$ErrorActionPreference = "Stop"
-
# As the daemon is registered as a service, we have to check the event
# logs against the docker provider.
name: Daemon event logs
if: always()
run: |
Get-WinEvent -ea SilentlyContinue `
-FilterHashtable @{ProviderName= "docker"; LogName = "application"} |
Sort-Object @{Expression="TimeCreated";Descending=$false} |
ForEach-Object {"$($_.TimeCreated.ToUniversalTime().ToString("o")) [$($_.LevelDisplayName)] $($_.Message)"} |
Tee-Object -file ".\bundles\daemon.log"
-
name: Download Jaeger traces
if: always()
run: |
Invoke-WebRequest `
-Uri "http://127.0.0.1:16686/api/traces?service=integration-test-client" `
-OutFile ".\bundles\jaeger-trace.json"
-
name: Upload reports
if: always()
uses: actions/upload-artifact@v4
with:
name: ${{ inputs.os }}-${{ inputs.storage }}-integration-reports-${{ matrix.runtime }}-${{ env.TESTREPORTS_NAME }}
path: ${{ env.GOPATH }}\src\github.com\docker\docker\bundles\*
retention-days: 1
integration-test-report:
runs-on: ubuntu-24.04
timeout-minutes: 120 # guardrails timeout for the whole job
continue-on-error: ${{ inputs.storage == 'snapshotter' && github.event_name != 'pull_request' }}
if: always()
needs:
- integration-test
strategy:
fail-fast: false
matrix:
storage:
- ${{ inputs.storage }}
runtime:
- builtin
- containerd
exclude:
- storage: snapshotter
runtime: builtin
steps:
-
name: Set up Go
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
cache: false
-
name: Download reports
uses: actions/download-artifact@v4
with:
path: /tmp/reports
pattern: ${{ inputs.os }}-${{ inputs.storage }}-integration-reports-${{ matrix.runtime }}-*
merge-multiple: true
-
name: Install teststat
run: |
go install github.com/vearutop/teststat@${{ env.TESTSTAT_VERSION }}
-
name: Create summary
run: |
find /tmp/reports -type f -name '*-go-test-report.json' -exec teststat -markdown {} \+ >> $GITHUB_STEP_SUMMARY


@@ -1,286 +0,0 @@
name: arm64
# Default to 'contents: read', which grants actions permission to read commits.
#
# If any permission is set, any permission not included in the list is
# implicitly set to "none".
#
# see https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#permissions
permissions:
contents: read
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
on:
workflow_dispatch:
push:
branches:
- 'master'
- '[0-9]+.[0-9]+'
- '[0-9]+.x'
pull_request:
env:
GO_VERSION: "1.24.9"
TESTSTAT_VERSION: v0.1.25
DESTDIR: ./build
SETUP_BUILDX_VERSION: edge
SETUP_BUILDKIT_IMAGE: moby/buildkit:latest
DOCKER_EXPERIMENTAL: 1
jobs:
validate-dco:
uses: ./.github/workflows/.dco.yml
build:
runs-on: ubuntu-24.04-arm
timeout-minutes: 20 # guardrails timeout for the whole job
if: ${{ github.event_name != 'pull_request' || !contains(github.event.pull_request.labels.*.name, 'ci/validate-only') }}
needs:
- validate-dco
strategy:
fail-fast: false
matrix:
target:
- binary
- dynbinary
steps:
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
with:
version: ${{ env.SETUP_BUILDX_VERSION }}
driver-opts: image=${{ env.SETUP_BUILDKIT_IMAGE }}
buildkitd-flags: --debug
-
name: Build
uses: docker/bake-action@v6
with:
targets: ${{ matrix.target }}
-
name: List artifacts
run: |
tree -nh ${{ env.DESTDIR }}
-
name: Check artifacts
run: |
find ${{ env.DESTDIR }} -type f -exec file -e ascii -- {} +
build-dev:
runs-on: ubuntu-24.04-arm
timeout-minutes: 120 # guardrails timeout for the whole job
if: ${{ github.event_name != 'pull_request' || !contains(github.event.pull_request.labels.*.name, 'ci/validate-only') }}
needs:
- validate-dco
steps:
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
with:
version: ${{ env.SETUP_BUILDX_VERSION }}
driver-opts: image=${{ env.SETUP_BUILDKIT_IMAGE }}
buildkitd-flags: --debug
-
name: Build dev image
uses: docker/bake-action@v6
with:
targets: dev
set: |
*.cache-from=type=gha,scope=dev-arm64
*.cache-to=type=gha,scope=dev-arm64
*.output=type=cacheonly
test-unit:
runs-on: ubuntu-24.04-arm
timeout-minutes: 120 # guardrails timeout for the whole job
if: ${{ github.event_name != 'pull_request' || !contains(github.event.pull_request.labels.*.name, 'ci/validate-only') }}
needs:
- build-dev
steps:
-
name: Checkout
uses: actions/checkout@v4
-
name: Set up runner
uses: ./.github/actions/setup-runner
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
with:
version: ${{ env.SETUP_BUILDX_VERSION }}
driver-opts: image=${{ env.SETUP_BUILDKIT_IMAGE }}
buildkitd-flags: --debug
-
name: Set up QEMU
uses: docker/setup-qemu-action@v3
-
name: Build dev image
uses: docker/bake-action@v6
with:
targets: dev
set: |
dev.cache-from=type=gha,scope=dev-arm64
-
name: Test
run: |
make -o build test-unit
-
name: Prepare reports
if: always()
run: |
mkdir -p bundles /tmp/reports
find bundles -path '*/root/*overlay2' -prune -o -type f \( -name '*-report.json' -o -name '*.log' -o -name '*.out' -o -name '*.prof' -o -name '*-report.xml' \) -print | xargs sudo tar -czf /tmp/reports.tar.gz
tar -xzf /tmp/reports.tar.gz -C /tmp/reports
sudo chown -R $(id -u):$(id -g) /tmp/reports
tree -nh /tmp/reports
-
name: Send to Codecov
uses: codecov/codecov-action@v4
with:
directory: ./bundles
env_vars: RUNNER_OS
flags: unit
token: ${{ secrets.CODECOV_TOKEN }} # used to upload coverage reports: https://github.com/moby/buildkit/pull/4660#issue-2142122533
-
name: Upload reports
if: always()
uses: actions/upload-artifact@v4
with:
name: test-reports-unit-arm64-graphdriver
path: /tmp/reports/*
retention-days: 1
test-unit-report:
runs-on: ubuntu-24.04
timeout-minutes: 10
continue-on-error: ${{ github.event_name != 'pull_request' }}
if: always() && (github.event_name != 'pull_request' || !contains(github.event.pull_request.labels.*.name, 'ci/validate-only'))
needs:
- test-unit
steps:
-
name: Set up Go
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
cache: false
-
name: Download reports
uses: actions/download-artifact@v4
with:
pattern: test-reports-unit-arm64-*
path: /tmp/reports
-
name: Install teststat
run: |
go install github.com/vearutop/teststat@${{ env.TESTSTAT_VERSION }}
-
name: Create summary
run: |
find /tmp/reports -type f -name '*-go-test-report.json' -exec teststat -markdown {} \+ >> $GITHUB_STEP_SUMMARY
test-integration:
runs-on: ubuntu-24.04-arm
timeout-minutes: 120 # guardrails timeout for the whole job
continue-on-error: ${{ github.event_name != 'pull_request' }}
if: ${{ github.event_name != 'pull_request' || !contains(github.event.pull_request.labels.*.name, 'ci/validate-only') }}
needs:
- build-dev
steps:
-
name: Checkout
uses: actions/checkout@v4
-
name: Set up runner
uses: ./.github/actions/setup-runner
-
name: Set up tracing
uses: ./.github/actions/setup-tracing
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
with:
version: ${{ env.SETUP_BUILDX_VERSION }}
driver-opts: image=${{ env.SETUP_BUILDKIT_IMAGE }}
buildkitd-flags: --debug
-
name: Set up QEMU
uses: docker/setup-qemu-action@v3
-
name: Build dev image
uses: docker/bake-action@v6
with:
targets: dev
set: |
dev.cache-from=type=gha,scope=dev-arm64
-
name: Test
run: |
make -o build test-integration
env:
TEST_SKIP_INTEGRATION_CLI: 1
TESTCOVERAGE: 1
-
name: Prepare reports
if: always()
run: |
reportsPath="/tmp/reports/arm64-graphdriver"
mkdir -p bundles $reportsPath
find bundles -path '*/root/*overlay2' -prune -o -type f \( -name '*-report.json' -o -name '*.log' -o -name '*.out' -o -name '*.prof' -o -name '*-report.xml' \) -print | xargs sudo tar -czf /tmp/reports.tar.gz
tar -xzf /tmp/reports.tar.gz -C $reportsPath
sudo chown -R $(id -u):$(id -g) $reportsPath
tree -nh $reportsPath
curl -sSLf localhost:16686/api/traces?service=integration-test-client > $reportsPath/jaeger-trace.json
-
name: Send to Codecov
uses: codecov/codecov-action@v4
with:
directory: ./bundles/test-integration
env_vars: RUNNER_OS
flags: integration
token: ${{ secrets.CODECOV_TOKEN }} # used to upload coverage reports: https://github.com/moby/buildkit/pull/4660#issue-2142122533
-
name: Test daemon logs
if: always()
run: |
cat bundles/test-integration/docker.log
-
name: Upload reports
if: always()
uses: actions/upload-artifact@v4
with:
name: test-reports-integration-arm64-graphdriver
path: /tmp/reports/*
retention-days: 1
test-integration-report:
runs-on: ubuntu-24.04
timeout-minutes: 10
continue-on-error: ${{ github.event_name != 'pull_request' }}
if: always() && (github.event_name != 'pull_request' || !contains(github.event.pull_request.labels.*.name, 'ci/validate-only'))
needs:
- test-integration
steps:
-
name: Set up Go
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
cache: false
-
name: Download reports
uses: actions/download-artifact@v4
with:
path: /tmp/reports
pattern: test-reports-integration-arm64-*
merge-multiple: true
-
name: Install teststat
run: |
go install github.com/vearutop/teststat@${{ env.TESTSTAT_VERSION }}
-
name: Create summary
run: |
find /tmp/reports -type f -name '*-go-test-report.json' -exec teststat -markdown {} \+ >> $GITHUB_STEP_SUMMARY


@@ -1,215 +0,0 @@
name: bin-image
# Default to 'contents: read', which grants actions permission to read commits.
#
# If any permission is set, any permission not included in the list is
# implicitly set to "none".
#
# see https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#permissions
permissions:
contents: read
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
on:
workflow_dispatch:
push:
branches:
- 'master'
- '[0-9]+.[0-9]+'
- '[0-9]+.x'
tags:
- 'v*'
pull_request:
env:
MOBYBIN_REPO_SLUG: moby/moby-bin
DOCKER_GITCOMMIT: ${{ github.sha }}
VERSION: ${{ github.ref }}
PLATFORM: Moby Engine - Nightly
PRODUCT: moby-bin
PACKAGER_NAME: The Moby Project
SETUP_BUILDX_VERSION: edge
SETUP_BUILDKIT_IMAGE: moby/buildkit:latest
jobs:
validate-dco:
if: ${{ !startsWith(github.ref, 'refs/tags/v') }}
uses: ./.github/workflows/.dco.yml
prepare:
runs-on: ubuntu-24.04
timeout-minutes: 20 # guardrails timeout for the whole job
if: ${{ github.event_name != 'pull_request' || !contains(github.event.pull_request.labels.*.name, 'ci/validate-only') }}
outputs:
platforms: ${{ steps.platforms.outputs.matrix }}
steps:
-
name: Checkout
uses: actions/checkout@v4
-
name: Docker meta
id: meta
uses: docker/metadata-action@v5
with:
images: |
${{ env.MOBYBIN_REPO_SLUG }}
### versioning strategy
## push semver tag v23.0.0
# moby/moby-bin:23.0.0
# moby/moby-bin:latest
## push semver prerelease tag v23.0.0-beta.1
# moby/moby-bin:23.0.0-beta.1
## push on master
# moby/moby-bin:master
## push on 23.0 branch
# moby/moby-bin:23.0
tags: |
type=semver,pattern={{version}}
type=ref,event=branch
type=ref,event=pr
type=semver,pattern={{major}}
type=semver,pattern={{major}}.{{minor}}
-
name: Rename meta bake definition file
# see https://github.com/docker/metadata-action/issues/381#issuecomment-1918607161
run: |
bakeFile="${{ steps.meta.outputs.bake-file }}"
mv "${bakeFile#cwd://}" "/tmp/bake-meta.json"
-
name: Upload meta bake definition
uses: actions/upload-artifact@v4
with:
name: bake-meta
path: /tmp/bake-meta.json
if-no-files-found: error
retention-days: 1
-
name: Create platforms matrix
id: platforms
run: |
echo "matrix=$(docker buildx bake bin-image-cross --print | jq -cr '.target."bin-image-cross".platforms')" >>${GITHUB_OUTPUT}
build:
runs-on: ubuntu-24.04
timeout-minutes: 20 # guardrails timeout for the whole job
if: ${{ always() && !contains(needs.*.result, 'failure') && !contains(needs.*.result, 'cancelled') && (github.event_name != 'pull_request' || !contains(github.event.pull_request.labels.*.name, 'ci/validate-only')) }}
needs:
- validate-dco
- prepare
strategy:
fail-fast: false
matrix:
platform: ${{ fromJson(needs.prepare.outputs.platforms) }}
steps:
-
name: Checkout
uses: actions/checkout@v4
with:
fetch-depth: 0
-
name: Prepare
run: |
platform=${{ matrix.platform }}
echo "PLATFORM_PAIR=${platform//\//-}" >> $GITHUB_ENV
-
name: Download meta bake definition
uses: actions/download-artifact@v4
with:
name: bake-meta
path: /tmp
-
name: Set up QEMU
uses: docker/setup-qemu-action@v3
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
with:
version: ${{ env.SETUP_BUILDX_VERSION }}
driver-opts: image=${{ env.SETUP_BUILDKIT_IMAGE }}
buildkitd-flags: --debug
-
name: Login to Docker Hub
if: github.event_name != 'pull_request' && github.repository == 'moby/moby'
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_MOBYBIN_USERNAME }}
password: ${{ secrets.DOCKERHUB_MOBYBIN_TOKEN }}
-
name: Build
id: bake
uses: docker/bake-action@v6
with:
source: .
files: |
./docker-bake.hcl
/tmp/bake-meta.json
targets: bin-image
set: |
*.platform=${{ matrix.platform }}
*.output=type=image,name=${{ env.MOBYBIN_REPO_SLUG }},push-by-digest=true,name-canonical=true,push=${{ github.event_name != 'pull_request' && github.repository == 'moby/moby' }}
*.tags=
-
name: Export digest
if: github.event_name != 'pull_request' && github.repository == 'moby/moby'
run: |
mkdir -p /tmp/digests
digest="${{ fromJSON(steps.bake.outputs.metadata)['bin-image']['containerimage.digest'] }}"
touch "/tmp/digests/${digest#sha256:}"
-
name: Upload digest
if: github.event_name != 'pull_request' && github.repository == 'moby/moby'
uses: actions/upload-artifact@v4
with:
name: digests-${{ env.PLATFORM_PAIR }}
path: /tmp/digests/*
if-no-files-found: error
retention-days: 1
merge:
runs-on: ubuntu-24.04
timeout-minutes: 40 # guardrails timeout for the whole job
if: ${{ always() && !contains(needs.*.result, 'failure') && !contains(needs.*.result, 'cancelled') && github.event_name != 'pull_request' && github.repository == 'moby/moby' }}
needs:
- build
steps:
-
name: Download meta bake definition
uses: actions/download-artifact@v4
with:
name: bake-meta
path: /tmp
-
name: Download digests
uses: actions/download-artifact@v4
with:
path: /tmp/digests
pattern: digests-*
merge-multiple: true
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
with:
version: ${{ env.SETUP_BUILDX_VERSION }}
driver-opts: image=${{ env.SETUP_BUILDKIT_IMAGE }}
buildkitd-flags: --debug
-
name: Login to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_MOBYBIN_USERNAME }}
password: ${{ secrets.DOCKERHUB_MOBYBIN_TOKEN }}
-
name: Create manifest list and push
working-directory: /tmp/digests
run: |
set -x
docker buildx imagetools create $(jq -cr '.target."docker-metadata-action".tags | map("-t " + .) | join(" ")' /tmp/bake-meta.json) \
$(printf '${{ env.MOBYBIN_REPO_SLUG }}@sha256:%s ' *)
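# Illustrative only (tag and digests assumed): the command above expands to
# something like
#   docker buildx imagetools create -t moby/moby-bin:master \
#     moby/moby-bin@sha256:aaaa... moby/moby-bin@sha256:bbbb...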
-
name: Inspect image
run: |
set -x
docker buildx imagetools inspect ${{ env.MOBYBIN_REPO_SLUG }}:$(jq -cr '.target."docker-metadata-action".args.DOCKER_META_VERSION' /tmp/bake-meta.json)


@@ -1,373 +0,0 @@
name: buildkit
# Default to 'contents: read', which grants actions permission to read commits.
#
# If any permission is set, any permission not included in the list is
# implicitly set to "none".
#
# see https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#permissions
permissions:
contents: read
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
on:
workflow_dispatch:
push:
branches:
- 'master'
- '[0-9]+.[0-9]+'
- '[0-9]+.x'
pull_request:
env:
GO_VERSION: "1.24.9"
DESTDIR: ./build
SETUP_BUILDX_VERSION: edge
SETUP_BUILDKIT_IMAGE: moby/buildkit:latest
jobs:
validate-dco:
uses: ./.github/workflows/.dco.yml
build-linux:
runs-on: ubuntu-24.04
timeout-minutes: 120 # guardrails timeout for the whole job
if: ${{ github.event_name != 'pull_request' || !contains(github.event.pull_request.labels.*.name, 'ci/validate-only') }}
needs:
- validate-dco
steps:
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
with:
version: ${{ env.SETUP_BUILDX_VERSION }}
driver-opts: image=${{ env.SETUP_BUILDKIT_IMAGE }}
buildkitd-flags: --debug
-
name: Build
uses: docker/bake-action@v6
with:
targets: binary
-
name: Upload artifacts
uses: actions/upload-artifact@v4
with:
name: binary
path: ${{ env.DESTDIR }}
if-no-files-found: error
retention-days: 1
test-linux:
runs-on: ubuntu-24.04
timeout-minutes: 120 # guardrails timeout for the whole job
if: ${{ github.event_name != 'pull_request' || !contains(github.event.pull_request.labels.*.name, 'ci/validate-only') }}
needs:
- build-linux
env:
TEST_IMAGE_BUILD: "0"
TEST_IMAGE_ID: "buildkit-tests"
strategy:
fail-fast: false
matrix:
worker:
- dockerd
- dockerd-containerd
pkg:
- client
- cmd/buildctl
- solver
- frontend
- frontend/dockerfile
typ:
- integration
steps:
-
name: Prepare
run: |
disabledFeatures="cache_backend_azblob,cache_backend_s3"
if [ "${{ matrix.worker }}" = "dockerd" ]; then
disabledFeatures="${disabledFeatures},merge_diff"
fi
echo "BUILDKIT_TEST_DISABLE_FEATURES=${disabledFeatures}" >> $GITHUB_ENV
# Expose `ACTIONS_RUNTIME_TOKEN` and `ACTIONS_CACHE_URL`, which are used
# in BuildKit's test suite to skip/unskip cache exporters:
# https://github.com/moby/buildkit/blob/567a99433ca23402d5e9b9f9124005d2e59b8861/client/client_test.go#L5407-L5411
-
name: Expose GitHub Runtime
uses: crazy-max/ghaction-github-runtime@v3
-
name: Checkout
uses: actions/checkout@v4
with:
path: moby
-
name: Set up Go
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
cache: false
-
name: BuildKit ref
run: |
echo "$(./hack/buildkit-ref)" >> $GITHUB_ENV
working-directory: moby
-
name: Checkout BuildKit ${{ env.BUILDKIT_REF }}
uses: actions/checkout@v4
with:
repository: ${{ env.BUILDKIT_REPO }}
ref: ${{ env.BUILDKIT_REF }}
path: buildkit
-
name: Set up QEMU
uses: docker/setup-qemu-action@v3
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
with:
version: ${{ env.SETUP_BUILDX_VERSION }}
driver-opts: image=${{ env.SETUP_BUILDKIT_IMAGE }}
buildkitd-flags: --debug
-
name: Download binary artifacts
uses: actions/download-artifact@v4
with:
name: binary
path: ./buildkit/build/moby/
-
name: Update daemon.json
run: |
sudo rm -f /etc/docker/daemon.json
sudo service docker restart
docker version
docker info
-
name: Build test image
uses: docker/bake-action@v6
with:
source: .
workdir: ./buildkit
targets: integration-tests
set: |
*.output=type=docker,name=${{ env.TEST_IMAGE_ID }}
-
name: Test
run: |
./hack/test ${{ matrix.typ }}
env:
CONTEXT: "."
TEST_DOCKERD: "1"
TEST_DOCKERD_BINARY: "./build/moby/dockerd"
TESTPKGS: "./${{ matrix.pkg }}"
TESTFLAGS: "-v --parallel=1 --timeout=30m --run=//worker=${{ matrix.worker }}$"
working-directory: buildkit
build-windows:
runs-on: windows-2022
timeout-minutes: 120
if: ${{ github.event_name != 'pull_request' || !contains(github.event.pull_request.labels.*.name, 'ci/validate-only') }}
needs:
- validate-dco
env:
GOPATH: ${{ github.workspace }}\go
GOBIN: ${{ github.workspace }}\go\bin
BIN_OUT: ${{ github.workspace }}\out
WINDOWS_BASE_IMAGE: mcr.microsoft.com/windows/servercore
WINDOWS_BASE_TAG_2022: ltsc2022
TEST_IMAGE_NAME: moby:test
TEST_CTN_NAME: moby
defaults:
run:
working-directory: ${{ env.GOPATH }}/src/github.com/docker/docker
steps:
- name: Checkout
uses: actions/checkout@v4
with:
path: ${{ env.GOPATH }}/src/github.com/docker/docker
- name: Env
run: |
Get-ChildItem Env: | Out-String
- name: Moby - Init
run: |
New-Item -ItemType "directory" -Path "${{ github.workspace }}\go-build"
New-Item -ItemType "directory" -Path "${{ github.workspace }}\go\pkg\mod"
echo "WINDOWS_BASE_IMAGE_TAG=${{ env.WINDOWS_BASE_TAG_2022 }}" | Out-File -FilePath $Env:GITHUB_ENV -Encoding utf-8 -Append
- name: Set up Go
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
cache: false
- name: Docker info
run: |
docker info
- name: Build base image
run: |
& docker build `
--build-arg WINDOWS_BASE_IMAGE `
--build-arg WINDOWS_BASE_IMAGE_TAG `
-t ${{ env.TEST_IMAGE_NAME }} `
-f Dockerfile.windows .
- name: Build binaries
run: |
& docker run --name ${{ env.TEST_CTN_NAME }} -e "DOCKER_GITCOMMIT=${{ github.sha }}" `
-v "${{ github.workspace }}\go-build:C:\Users\ContainerAdministrator\AppData\Local\go-build" `
-v "${{ github.workspace }}\go\pkg\mod:C:\gopath\pkg\mod" `
${{ env.TEST_IMAGE_NAME }} hack\make.ps1 -Daemon -Client
go install github.com/distribution/distribution/v3/cmd/registry@latest
- name: Checkout BuildKit
uses: actions/checkout@v4
with:
repository: moby/buildkit
ref: master
path: buildkit
- name: Add buildctl to binaries
run: |
go install ./cmd/buildctl
working-directory: buildkit
- name: Copy artifacts
run: |
New-Item -ItemType "directory" -Path "${{ env.BIN_OUT }}"
docker cp "${{ env.TEST_CTN_NAME }}`:c`:\gopath\src\github.com\docker\docker\bundles\docker.exe" ${{ env.BIN_OUT }}\
docker cp "${{ env.TEST_CTN_NAME }}`:c`:\gopath\src\github.com\docker\docker\bundles\dockerd.exe" ${{ env.BIN_OUT }}\
docker cp "${{ env.TEST_CTN_NAME }}`:c`:\gopath\bin\gotestsum.exe" ${{ env.BIN_OUT }}\
docker cp "${{ env.TEST_CTN_NAME }}`:c`:\containerd\bin\containerd.exe" ${{ env.BIN_OUT }}\
docker cp "${{ env.TEST_CTN_NAME }}`:c`:\containerd\bin\containerd-shim-runhcs-v1.exe" ${{ env.BIN_OUT }}\
cp ${{ env.GOPATH }}\bin\registry.exe ${{ env.BIN_OUT }}
cp ${{ env.GOPATH }}\bin\buildctl.exe ${{ env.BIN_OUT }}
- name: Upload artifacts
uses: actions/upload-artifact@v4
with:
name: build-windows
path: ${{ env.BIN_OUT }}/*
if-no-files-found: error
retention-days: 2
test-windows:
runs-on: windows-2022
timeout-minutes: 120 # guardrails timeout for the whole job
if: ${{ github.event_name != 'pull_request' || !contains(github.event.pull_request.labels.*.name, 'ci/validate-only') }}
needs:
- build-windows
env:
TEST_IMAGE_BUILD: "0"
TEST_IMAGE_ID: "buildkit-tests"
GOPATH: ${{ github.workspace }}\go
GOBIN: ${{ github.workspace }}\go\bin
BIN_OUT: ${{ github.workspace }}\out
TESTFLAGS: "-v --timeout=90m"
TEST_DOCKERD: "1"
strategy:
fail-fast: false
matrix:
worker:
- dockerd-containerd
pkg:
- ./client#1-4
- ./client#2-4
- ./client#3-4
- ./client#4-4
- ./cmd/buildctl
- ./frontend
- ./frontend/dockerfile#1-12
- ./frontend/dockerfile#2-12
- ./frontend/dockerfile#3-12
- ./frontend/dockerfile#4-12
- ./frontend/dockerfile#5-12
- ./frontend/dockerfile#6-12
- ./frontend/dockerfile#7-12
- ./frontend/dockerfile#8-12
- ./frontend/dockerfile#9-12
- ./frontend/dockerfile#10-12
- ./frontend/dockerfile#11-12
- ./frontend/dockerfile#12-12
steps:
- name: Prepare
shell: bash
run: |
disabledFeatures="cache_backend_azblob,cache_backend_s3"
if [ "${{ matrix.worker }}" = "dockerd" ]; then
disabledFeatures="${disabledFeatures},merge_diff"
fi
echo "BUILDKIT_TEST_DISABLE_FEATURES=${disabledFeatures}" >> $GITHUB_ENV
- name: Expose GitHub Runtime
uses: crazy-max/ghaction-github-runtime@v3
- name: Checkout
uses: actions/checkout@v4
with:
path: moby
- name: Set up Go
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
cache: false
- name: BuildKit ref
shell: bash
run: |
echo "$(./hack/buildkit-ref)" >> $GITHUB_ENV
working-directory: moby
- name: Checkout BuildKit ${{ env.BUILDKIT_REF }}
uses: actions/checkout@v4
with:
repository: ${{ env.BUILDKIT_REPO }}
ref: ${{ env.BUILDKIT_REF }}
path: buildkit
- name: Download Moby artifacts
uses: actions/download-artifact@v4
with:
name: build-windows
path: ${{ env.BIN_OUT }}
- name: Add binaries to PATH
run: |
ls ${{ env.BIN_OUT }}
Write-Output "${{ env.BIN_OUT }}" | Out-File -FilePath $env:GITHUB_PATH -Encoding utf8 -Append
- name: Test Prep
shell: bash
run: |
TESTPKG=$(echo "${{ matrix.pkg }}" | awk '-F#' '{print $1}')
echo "TESTPKG=$TESTPKG" >> $GITHUB_ENV
echo "TEST_REPORT_NAME=${{ github.job }}-$(echo "${{ matrix.pkg }}-${{ matrix.worker }}" | tr -dc '[:alnum:]-\n\r' | tr '[:upper:]' '[:lower:]')" >> $GITHUB_ENV
testFlags="${{ env.TESTFLAGS }}"
testSlice=$(echo "${{ matrix.pkg }}" | awk '-F#' '{print $2}')
testSliceOffset=""
if [ -n "$testSlice" ]; then
testSliceOffset="slice=$testSlice/"
fi
if [ -n "${{ matrix.worker }}" ]; then
testFlags="${testFlags} --run=TestIntegration/$testSliceOffset.*/worker=${{ matrix.worker }}"
fi
echo "TESTFLAGS=${testFlags}" >> $GITHUB_ENV
- name: Test
shell: bash
run: |
mkdir -p ./bin/testreports
gotestsum \
--jsonfile="./bin/testreports/go-test-report-${{ env.TEST_REPORT_NAME }}.json" \
--junitfile="./bin/testreports/junit-report-${{ env.TEST_REPORT_NAME }}.xml" \
--packages="${{ env.TESTPKG }}" \
-- \
"-mod=vendor" \
"-coverprofile" "./bin/testreports/coverage-${{ env.TEST_REPORT_NAME }}.txt" \
"-covermode" "atomic" ${{ env.TESTFLAGS }}
working-directory: buildkit


@@ -1,180 +0,0 @@
name: ci
# Default to 'contents: read', which grants actions permission to read commits.
#
# If any permission is set, any permission not included in the list is
# implicitly set to "none".
#
# see https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#permissions
permissions:
contents: read
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
on:
workflow_dispatch:
push:
branches:
- 'master'
- '[0-9]+.[0-9]+'
- '[0-9]+.x'
pull_request:
env:
DESTDIR: ./build
SETUP_BUILDX_VERSION: edge
SETUP_BUILDKIT_IMAGE: moby/buildkit:latest
jobs:
validate-dco:
uses: ./.github/workflows/.dco.yml
build:
runs-on: ubuntu-24.04
timeout-minutes: 20 # guardrails timeout for the whole job
needs:
- validate-dco
strategy:
fail-fast: false
matrix:
target:
- binary
- dynbinary
steps:
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
with:
version: ${{ env.SETUP_BUILDX_VERSION }}
driver-opts: image=${{ env.SETUP_BUILDKIT_IMAGE }}
buildkitd-flags: --debug
-
name: Build
uses: docker/bake-action@v6
with:
targets: ${{ matrix.target }}
-
name: List artifacts
run: |
tree -nh ${{ env.DESTDIR }}
-
name: Check artifacts
run: |
find ${{ env.DESTDIR }} -type f -exec file -e ascii -- {} +
prepare-cross:
runs-on: ubuntu-24.04
timeout-minutes: 20 # guardrails timeout for the whole job
if: ${{ github.event_name != 'pull_request' || !contains(github.event.pull_request.labels.*.name, 'ci/validate-only') }}
needs:
- validate-dco
outputs:
matrix: ${{ steps.platforms.outputs.matrix }}
steps:
-
name: Checkout
uses: actions/checkout@v4
-
name: Create matrix
id: platforms
run: |
matrix="$(docker buildx bake binary-cross --print | jq -cr '.target."binary-cross".platforms')"
echo "matrix=$matrix" >> $GITHUB_OUTPUT
-
name: Show matrix
run: |
echo ${{ steps.platforms.outputs.matrix }}
cross:
runs-on: ubuntu-24.04
timeout-minutes: 20 # guardrails timeout for the whole job
if: ${{ github.event_name != 'pull_request' || !contains(github.event.pull_request.labels.*.name, 'ci/validate-only') }}
needs:
- validate-dco
- prepare-cross
strategy:
fail-fast: false
matrix:
platform: ${{ fromJson(needs.prepare-cross.outputs.matrix) }}
steps:
-
name: Prepare
run: |
platform=${{ matrix.platform }}
echo "PLATFORM_PAIR=${platform//\//-}" >> $GITHUB_ENV
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
with:
version: ${{ env.SETUP_BUILDX_VERSION }}
driver-opts: image=${{ env.SETUP_BUILDKIT_IMAGE }}
buildkitd-flags: --debug
-
name: Build
uses: docker/bake-action@v6
with:
targets: all
set: |
*.platform=${{ matrix.platform }}
-
name: List artifacts
run: |
tree -nh ${{ env.DESTDIR }}
-
name: Check artifacts
run: |
find ${{ env.DESTDIR }} -type f -exec file -e ascii -- {} +
govulncheck:
runs-on: ubuntu-24.04
timeout-minutes: 120 # guardrails timeout for the whole job
# Always run security checks, even with 'ci/validate-only' label
permissions:
# required to write sarif report
security-events: write
# required to check out the repository
contents: read
steps:
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
with:
version: ${{ env.SETUP_BUILDX_VERSION }}
driver-opts: image=${{ env.SETUP_BUILDKIT_IMAGE }}
buildkitd-flags: --debug
-
name: Run
uses: docker/bake-action@v6
with:
targets: govulncheck
env:
GOVULNCHECK_FORMAT: sarif
-
name: Upload SARIF report
if: ${{ github.event_name != 'pull_request' && github.repository == 'moby/moby' }}
uses: github/codeql-action/upload-sarif@v3
with:
sarif_file: ${{ env.DESTDIR }}/govulncheck.out
build-dind:
runs-on: ubuntu-24.04
if: ${{ github.event_name != 'pull_request' || !contains(github.event.pull_request.labels.*.name, 'ci/validate-only') }}
needs:
- validate-dco
steps:
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
with:
version: ${{ env.SETUP_BUILDX_VERSION }}
driver-opts: image=${{ env.SETUP_BUILDKIT_IMAGE }}
buildkitd-flags: --debug
-
name: Build dind image
uses: docker/bake-action@v6
with:
targets: dind
set: |
*.output=type=cacheonly


@@ -1,75 +0,0 @@
name: codeql
# Default to 'contents: read', which grants actions permission to read commits.
#
# If any permission is set, any permission not included in the list is
# implicitly set to "none".
#
# see https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#permissions
permissions:
contents: read
on:
push:
branches:
- 'master'
- '[0-9]+.[0-9]+'
- '[0-9]+.x'
tags:
- 'v*'
pull_request:
# The branches below must be a subset of the branches above
branches: ["master"]
schedule:
# ┌───────────── minute (0 - 59)
# │ ┌───────────── hour (0 - 23)
# │ │ ┌───────────── day of the month (1 - 31)
# │ │ │ ┌───────────── month (1 - 12)
# │ │ │ │ ┌───────────── day of the week (0 - 6) (Sunday to Saturday)
# │ │ │ │ │
# │ │ │ │ │
# │ │ │ │ │
# * * * * *
- cron: '0 9 * * 4'
env:
GO_VERSION: "1.24.9"
jobs:
codeql:
runs-on: ubuntu-24.04
timeout-minutes: 10
permissions:
actions: read
contents: read
security-events: write
steps:
- name: Checkout
uses: actions/checkout@v4
with:
fetch-depth: 2
# CodeQL 2.16.4's auto-build added support for multi-module repositories,
# and tries to be smart by searching for modules in every directory,
# including vendor directories. If no module is found, it creates one,
# which is ... not what we want, so let's give it a "go.mod".
# see: https://github.com/docker/cli/pull/4944#issuecomment-2002034698
- name: Create go.mod
run: |
ln -s vendor.mod go.mod
ln -s vendor.sum go.sum
- name: Set up Go
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
cache: false
- name: Initialize CodeQL
uses: github/codeql-action/init@v3
with:
languages: go
- name: Autobuild
uses: github/codeql-action/autobuild@v3
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@v3
with:
category: "/language:go"


@@ -1,211 +0,0 @@
name: test
# Default to 'contents: read', which grants actions permission to read commits.
#
# If any permission is set, any permission not included in the list is
# implicitly set to "none".
#
# see https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#permissions
permissions:
contents: read
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
on:
workflow_dispatch:
push:
branches:
- 'master'
- '[0-9]+.[0-9]+'
- '[0-9]+.x'
pull_request:
env:
GO_VERSION: "1.24.9"
GIT_PAGER: "cat"
PAGER: "cat"
SETUP_BUILDX_VERSION: edge
SETUP_BUILDKIT_IMAGE: moby/buildkit:latest
jobs:
validate-dco:
uses: ./.github/workflows/.dco.yml
build-dev:
runs-on: ubuntu-24.04
timeout-minutes: 120 # guardrails timeout for the whole job
needs:
- validate-dco
strategy:
fail-fast: false
matrix:
mode:
- ""
- systemd
- firewalld
steps:
-
name: Prepare
run: |
if [ "${{ matrix.mode }}" = "systemd" ]; then
echo "SYSTEMD=true" >> $GITHUB_ENV
fi
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
with:
version: ${{ env.SETUP_BUILDX_VERSION }}
driver-opts: image=${{ env.SETUP_BUILDKIT_IMAGE }}
buildkitd-flags: --debug
-
name: Build dev image
uses: docker/bake-action@v6
with:
targets: dev
set: |
*.cache-from=type=gha,scope=dev${{ matrix.mode }}
*.cache-to=type=gha,scope=dev${{ matrix.mode }}
*.output=type=cacheonly
test:
if: ${{ github.event_name != 'pull_request' || !contains(github.event.pull_request.labels.*.name, 'ci/validate-only') }}
needs:
- build-dev
- validate-dco
uses: ./.github/workflows/.test.yml
secrets: inherit
strategy:
fail-fast: false
matrix:
storage:
- graphdriver
- snapshotter
with:
storage: ${{ matrix.storage }}
test-unit:
if: ${{ github.event_name != 'pull_request' || !contains(github.event.pull_request.labels.*.name, 'ci/validate-only') }}
needs:
- build-dev
- validate-dco
uses: ./.github/workflows/.test-unit.yml
secrets: inherit
validate-prepare:
runs-on: ubuntu-24.04
timeout-minutes: 10 # guardrails timeout for the whole job
needs:
- validate-dco
outputs:
matrix: ${{ steps.scripts.outputs.matrix }}
steps:
-
name: Checkout
uses: actions/checkout@v4
-
name: Create matrix
id: scripts
run: |
scripts=$(cd ./hack/validate && jq -nc '$ARGS.positional - ["all", "default", "dco"] | map(select(test("[.]")|not)) + ["generate-files"]' --args *)
echo "matrix=$scripts" >> $GITHUB_OUTPUT
-
name: Show matrix
run: |
echo ${{ steps.scripts.outputs.matrix }}
validate:
runs-on: ubuntu-24.04
timeout-minutes: 30 # guardrails timeout for the whole job
needs:
- validate-prepare
- build-dev
strategy:
fail-fast: true
matrix:
script: ${{ fromJson(needs.validate-prepare.outputs.matrix) }}
steps:
-
name: Checkout
uses: actions/checkout@v4
with:
fetch-depth: 0
-
name: Set up runner
uses: ./.github/actions/setup-runner
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
with:
version: ${{ env.SETUP_BUILDX_VERSION }}
driver-opts: image=${{ env.SETUP_BUILDKIT_IMAGE }}
buildkitd-flags: --debug
-
name: Build dev image
uses: docker/bake-action@v6
with:
targets: dev
set: |
dev.cache-from=type=gha,scope=dev
-
name: Validate
run: |
make -o build validate-${{ matrix.script }}
smoke-prepare:
runs-on: ubuntu-24.04
timeout-minutes: 10 # guardrails timeout for the whole job
if: ${{ github.event_name != 'pull_request' || !contains(github.event.pull_request.labels.*.name, 'ci/validate-only') }}
needs:
- validate-dco
outputs:
matrix: ${{ steps.platforms.outputs.matrix }}
steps:
-
name: Checkout
uses: actions/checkout@v4
-
name: Create matrix
id: platforms
run: |
matrix="$(docker buildx bake binary-smoketest --print | jq -cr '.target."binary-smoketest".platforms')"
echo "matrix=$matrix" >> $GITHUB_OUTPUT
-
name: Show matrix
run: |
echo ${{ steps.platforms.outputs.matrix }}
smoke:
runs-on: ubuntu-24.04
timeout-minutes: 20 # guardrails timeout for the whole job
if: ${{ github.event_name != 'pull_request' || !contains(github.event.pull_request.labels.*.name, 'ci/validate-only') }}
needs:
- smoke-prepare
strategy:
fail-fast: false
matrix:
platform: ${{ fromJson(needs.smoke-prepare.outputs.matrix) }}
steps:
-
name: Prepare
run: |
platform=${{ matrix.platform }}
echo "PLATFORM_PAIR=${platform//\//-}" >> $GITHUB_ENV
-
name: Set up QEMU
uses: docker/setup-qemu-action@v3
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
with:
version: ${{ env.SETUP_BUILDX_VERSION }}
driver-opts: image=${{ env.SETUP_BUILDKIT_IMAGE }}
buildkitd-flags: --debug
-
name: Test
uses: docker/bake-action@v6
with:
targets: binary-smoketest
set: |
*.platform=${{ matrix.platform }}


@@ -1,88 +0,0 @@
name: validate-pr
# Default to 'contents: read', which allows actions to read commits.
#
# If any permission is set, any permission not included in the list is
# implicitly set to "none".
#
# see https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#permissions
permissions:
contents: read
on:
pull_request:
types: [opened, edited, labeled, unlabeled, synchronize]
jobs:
check-area-label:
runs-on: ubuntu-24.04
timeout-minutes: 120 # guardrails timeout for the whole job
steps:
- name: Missing `area/` label
if: contains(join(github.event.pull_request.labels.*.name, ','), 'impact/') && !contains(join(github.event.pull_request.labels.*.name, ','), 'area/')
run: |
echo "::error::Every PR with an 'impact/*' label should also have an 'area/*' label"
exit 1
- name: OK
run: exit 0
check-changelog:
runs-on: ubuntu-24.04
timeout-minutes: 120 # guardrails timeout for the whole job
env:
HAS_IMPACT_LABEL: ${{ contains(join(github.event.pull_request.labels.*.name, ','), 'impact/') }}
PR_BODY: |
${{ github.event.pull_request.body }}
steps:
- name: Check changelog description
run: |
# Extract the `markdown changelog` note code block
block=$(echo -n "$PR_BODY" | tr -d '\r' | awk '/^```markdown changelog$/{flag=1;next}/^```$/{flag=0}flag')
# Strip empty lines
desc=$(echo "$block" | awk NF)
if [ "$HAS_IMPACT_LABEL" = "true" ]; then
if [ -z "$desc" ]; then
echo "::error::Changelog section is empty. Please provide a description for the changelog."
exit 1
fi
len=$(echo -n "$desc" | wc -c)
if [[ $len -le 6 ]]; then
echo "::error::Description looks too short: $desc"
exit 1
fi
else
if [ -n "$desc" ]; then
echo "::error::PR has a changelog description, but no changelog label"
echo "::error::Please add the relevant 'impact/' label to the PR or remove the changelog description"
exit 1
fi
fi
echo "This PR will be included in the release notes with the following note:"
echo "$desc"
check-pr-branch:
runs-on: ubuntu-24.04
timeout-minutes: 120 # guardrails timeout for the whole job
env:
PR_TITLE: ${{ github.event.pull_request.title }}
steps:
# Backports or PRs that target a release branch directly should mention the target branch in the title, for example:
# [X.Y backport] Some change that needs backporting to X.Y
# [X.Y] Change directly targeting the X.Y branch
- name: Check release branch
id: title_branch
run: |
# get the intended major version prefix ("[27.1 backport]" -> "27.") from the PR title.
[[ "$PR_TITLE" =~ ^\[([0-9]*\.)[^]]*\] ]] && branch="${BASH_REMATCH[1]}"
# get major version prefix from the release branch ("27.x" -> "27.")
[[ "$GITHUB_BASE_REF" =~ ^([0-9]*\.) ]] && target_branch="${BASH_REMATCH[1]}" || target_branch="$GITHUB_BASE_REF"
if [[ "$target_branch" != "$branch" ]] && ! [[ "$GITHUB_BASE_REF" == "master" && "$branch" == "" ]]; then
echo "::error::PR is opened against the $GITHUB_BASE_REF branch, but its title suggests otherwise."
exit 1
fi


@@ -1,43 +0,0 @@
name: windows-2022
# Default to 'contents: read', which allows actions to read commits.
#
# If any permission is set, any permission not included in the list is
# implicitly set to "none".
#
# see https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#permissions
permissions:
contents: read
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
on:
schedule:
- cron: '0 10 * * *'
workflow_dispatch:
jobs:
validate-dco:
uses: ./.github/workflows/.dco.yml
test-prepare:
uses: ./.github/workflows/.test-prepare.yml
needs:
- validate-dco
run:
if: ${{ github.event_name != 'pull_request' || !contains(github.event.pull_request.labels.*.name, 'ci/validate-only') }}
needs:
- test-prepare
uses: ./.github/workflows/.windows.yml
secrets: inherit
strategy:
fail-fast: false
matrix:
storage: ${{ fromJson(needs.test-prepare.outputs.matrix) }}
with:
os: windows-2022
storage: ${{ matrix.storage }}
send_coverage: true


@@ -1,47 +0,0 @@
name: windows-2025
# Default to 'contents: read', which allows actions to read commits.
#
# If any permission is set, any permission not included in the list is
# implicitly set to "none".
#
# see https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#permissions
permissions:
contents: read
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
on:
workflow_dispatch:
push:
branches:
- 'master'
- '[0-9]+.[0-9]+'
- '[0-9]+.x'
pull_request:
jobs:
validate-dco:
uses: ./.github/workflows/.dco.yml
test-prepare:
uses: ./.github/workflows/.test-prepare.yml
needs:
- validate-dco
run:
if: ${{ github.event_name != 'pull_request' || !contains(github.event.pull_request.labels.*.name, 'ci/validate-only') }}
needs:
- test-prepare
uses: ./.github/workflows/.windows.yml
secrets: inherit
strategy:
fail-fast: false
matrix:
storage: ${{ fromJson(needs.test-prepare.outputs.matrix) }}
with:
os: windows-2025
storage: ${{ matrix.storage }}
send_coverage: false

.gitignore

@@ -1,27 +1,23 @@
# If you want to ignore files created by your editor/tools, please consider a
# [global .gitignore](https://help.github.com/articles/ignoring-files).
*~
*.bak
# Docker project generated files to ignore
# if you want to ignore files created by your editor/tools,
# please consider a global .gitignore https://help.github.com/articles/ignoring-files
*.exe
*.exe~
*.gz
*.orig
test.main
.*.swp
.DS_Store
thumbs.db
# local repository customization
.envrc
# a .bashrc may be added to customize the build environment
.bashrc
.editorconfig
# build artifacts
/bundles/
/cli/winresources/dockerd/*.syso
/cli/winresources/dockerd/winres.json
# ci artifacts
*.exe
*.gz
.gopath/
.go-pkg-cache/
autogen/
bundles/
cmd/dockerd/dockerd
contrib/builder/rpm/*/changelog
vendor/pkg/
go-test-report.json
junit-report.xml
profile.out
test.main
junit-report.xml


@@ -1,352 +0,0 @@
version: "2"
run:
# prevent golangci-lint from deducing the Go version to lint for from go.mod,
# which causes it to fall back to go1.17 semantics.
go: "1.24.9"
concurrency: 2
# Only supported with go modules enabled (build flag -mod=vendor only valid when using modules)
# modules-download-mode: vendor
formatters:
enable:
- gofmt
- goimports
linters:
enable:
- asasalint # Detects "[]any" used as argument for variadic "func(...any)".
- copyloopvar # Detects places where loop variables are copied.
- depguard
- dogsled # Detects assignments with too many blank identifiers.
- dupword # Detects duplicate words.
- durationcheck # Detects cases where two time.Duration values are being multiplied in possibly erroneous ways.
- errorlint # Detects code that will cause problems with the error wrapping scheme introduced in Go 1.13.
- errchkjson # Detects unsupported types passed to json encoding functions and reports if checks for the returned error can be omitted.
- exhaustive # Detects missing options in enum switch statements.
- exptostd # Detects functions from golang.org/x/exp/ that can be replaced by std functions.
- fatcontext # Detects nested contexts in loops and function literals.
- forbidigo
- gocheckcompilerdirectives # Detects invalid go compiler directive comments (//go:).
- gocritic # Detects bugs, performance issues, and style issues.
- gosec # Detects security problems.
- govet
- iface # Detects incorrect use of interfaces. Currently only used for "identical" interfaces in the same package.
- importas
- ineffassign
- makezero # Finds slice declarations with non-zero initial length.
- mirror # Detects wrong mirror patterns of bytes/strings usage.
- misspell # Detects commonly misspelled English words in comments.
- nakedret # Detects uses of naked returns.
- nilnesserr # Detects returning nil errors. It combines the features of nilness and nilerr.
- nosprintfhostport # Detects misuse of Sprintf to construct a host with port in a URL.
- reassign # Detects reassigning a top-level variable in another package.
- revive # Metalinter; drop-in replacement for golint.
- spancheck # Detects mistakes with OpenTelemetry/Census spans.
- staticcheck
- thelper
- unconvert # Detects unnecessary type conversions.
- unused
- usestdlibvars # Detects places where variables/constants from the Go standard library could be used.
- wastedassign # Detects wasted assignment statements.
disable:
- errcheck
- spancheck # FIXME
settings:
depguard:
rules:
main:
deny:
- pkg: "github.com/stretchr/testify/assert"
desc: Use "gotest.tools/v3/assert" instead
- pkg: "github.com/stretchr/testify/require"
desc: Use "gotest.tools/v3/assert" instead
- pkg: "github.com/stretchr/testify/suite"
desc: Do not use
- pkg: "github.com/containerd/containerd/pkg/userns"
desc: Use github.com/moby/sys/userns instead.
- pkg: "github.com/tonistiigi/fsutil"
desc: The fsutil module does not have a stable API, so we should not have a direct dependency unless necessary.
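For illustration, a minimal sketch of the test style the depguard rules above steer toward: gotest.tools/v3 assertions instead of testify (the test itself is invented for the example).
```
package example_test

import (
	"testing"

	"gotest.tools/v3/assert"
	is "gotest.tools/v3/assert/cmp"
)

func TestAdd(t *testing.T) {
	got := 2 + 2
	// gotest.tools/v3/assert replaces testify's assert and require.
	assert.Equal(t, got, 4)
	// cmp matchers are aliased as "is" (see the importas rule below).
	assert.Check(t, is.Len([]int{1, 2}, 2))
}
```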
dupword:
ignore:
- "true" # some tests use this as expected output
- "false" # some tests use this as expected output
- "root" # for tests using "ls" output with files owned by "root:root"
errorlint:
# Check whether fmt.Errorf uses the %w verb for formatting errors.
# See https://github.com/polyfloyd/go-errorlint for caveats.
errorf: false
# Check for plain type assertions and type switches.
asserts: false
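With `errorf` and `asserts` disabled, the plain-comparison check is what remains of errorlint here; a minimal sketch of what it catches (the wrapping function is invented):
```
package example

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

func open(name string) error {
	if _, err := os.Open(name); err != nil {
		return fmt.Errorf("open %s: %w", name, err) // wrapped with %w
	}
	return nil
}

func isNotExist(err error) bool {
	// Flagged: err == fs.ErrNotExist misses wrapped errors.
	// Preferred: errors.Is traverses the wrap chain.
	return errors.Is(err, fs.ErrNotExist)
}
```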
exhaustive:
# Program elements to check for exhaustiveness.
# Default: [ switch ]
check:
- switch
# - map # TODO(thaJeztah): also enable for maps
# Presence of "default" case in switch statements satisfies exhaustiveness,
# even if not all enum members are listed.
# Default: false
#
# TODO(thaJeztah): consider not allowing this to catch new values being added (and falling through to "default")
default-signifies-exhaustive: true
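A minimal sketch, with an invented enum, of how `default-signifies-exhaustive: true` plays out: the switch passes the linter despite not listing every member, because a `default` arm is present.
```
package example

type state int

const (
	running state = iota
	stopped
	paused
)

func describe(s state) string {
	switch s {
	case running:
		return "running"
	case stopped:
		return "stopped"
	default:
		// Satisfies exhaustiveness under default-signifies-exhaustive: true;
		// without that setting, the missing "paused" case would be reported.
		return "other"
	}
}
```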
forbidigo:
forbid:
- pkg: ^sync/atomic$
pattern: ^atomic\.(Add|CompareAndSwap|Load|Store|Swap).
msg: Go 1.19 atomic types should be used instead.
- pkg: ^regexp$
pattern: ^regexp\.MustCompile
msg: Use internal/lazyregexp.New instead.
- pkg: github.com/vishvananda/netlink$
pattern: ^netlink\.(Handle\.)?(AddrList|BridgeVlanList|ChainList|ClassList|ConntrackTableList|ConntrackDeleteFilter$|ConntrackDeleteFilters|DevLinkGetDeviceList|DevLinkGetAllPortList|DevlinkGetDeviceParams|FilterList|FouList|GenlFamilyList|GTPPDPList|LinkByName|LinkByAlias|LinkList|LinkSubscribeWithOptions|NeighList$|NeighProxyList|NeighListExecute|NeighSubscribeWithOptions|LinkGetProtinfo|QdiscList|RdmaLinkList|RdmaLinkByName|RdmaLinkDel|RouteList|RouteListFilteredIter|RuleListFiltered$|RouteSubscribeWithOptions|RuleList$|RuleListFiltered|SocketGet|SocketDiagTCPInfo|SocketDiagTCP|SocketDiagUDPInfo|SocketDiagUDP|UnixSocketDiagInfo|UnixSocketDiag|VDPAGetDevConfigList|VDPAGetDevList|VDPAGetMGMTDevList|XfrmPolicyList|XfrmStateList)
msg: Use internal nlwrap package for EINTR handling.
- pkg: github.com/docker/docker/internal/nlwrap$
pattern: ^nlwrap.Handle.(BridgeVlanList|ChainList|ClassList|ConntrackDeleteFilter$|DevLinkGetDeviceList|DevLinkGetAllPortList|DevlinkGetDeviceParams|FilterList|FouList|GenlFamilyList|GTPPDPList|LinkByAlias|LinkSubscribeWithOptions|NeighList$|NeighProxyList|NeighListExecute|NeighSubscribeWithOptions|LinkGetProtinfo|QdiscList|RdmaLinkList|RdmaLinkByName|RdmaLinkDel|RouteListFilteredIter|RuleListFiltered$|RouteSubscribeWithOptions|RuleList$|RuleListFiltered|SocketGet|SocketDiagTCPInfo|SocketDiagTCP|SocketDiagUDPInfo|SocketDiagUDP|UnixSocketDiagInfo|UnixSocketDiag|VDPAGetDevConfigList|VDPAGetDevList|VDPAGetMGMTDevList)
msg: Add a wrapper to nlwrap.Handle for EINTR handling and update the list in .golangci.yml.
analyze-types: true
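As a rough sketch of the first forbidigo rule above: the free functions in sync/atomic are forbidden in favor of the typed atomics added in Go 1.19 (the counter type is invented).
```
package example

import "sync/atomic"

type counter struct {
	// Old style, used as atomic.AddInt64(&c.n, 1): forbidden by the rule above.
	// n int64

	// Go 1.19 typed atomic: the type itself guarantees atomic access.
	n atomic.Int64
}

func (c *counter) inc() int64  { return c.n.Add(1) }
func (c *counter) load() int64 { return c.n.Load() }
```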
gocritic:
disabled-checks:
- appendAssign
- appendCombine
- assignOp
- builtinShadow
- builtinShadowDecl
- captLocal
- commentedOutCode
- deferInLoop
- dupImport
- dupSubExpr
- elseif
- emptyFallthrough
- equalFold
- evalOrder
- exitAfterDefer
- exposedSyncMutex
- filepathJoin
- hexLiteral
- hugeParam
- ifElseChain
- importShadow
- indexAlloc
- methodExprCall
- nestingReduce
- nilValReturn
- octalLiteral
- paramTypeCombine
- preferStringWriter
- ptrToRefParam
- rangeValCopy
- redundantSprint
- regexpMust
- regexpSimplify
- singleCaseSwitch
- sloppyReassign
- stringXbytes
- typeAssertChain
- typeDefFirst
- typeUnparen
- uncheckedInlineErr
- unlambda
- unnamedResult
- unnecessaryDefer
- unslice
- valSwap
- whyNoLint
enable-all: true
gosec:
excludes:
- G104 # G104: Errors unhandled; (TODO: reduce unhandled errors, or explicitly ignore)
- G115 # G115: integer overflow conversion; (TODO: verify these: https://github.com/moby/moby/issues/48358)
- G204 # G204: Subprocess launched with variable; too many false positives.
- G301 # G301: Expect directory permissions to be 0750 or less (also EXC0009); too restrictive
- G302 # G302: Expect file permissions to be 0600 or less (also EXC0009); too restrictive
- G304 # G304: Potential file inclusion via variable.
- G306 # G306: Expect WriteFile permissions to be 0600 or less (too restrictive; also flags "0o644" permissions)
- G307 # G307: Deferring unsafe method "Close" on type "*os.File" (also EXC0008); (TODO: evaluate these and fix where needed)
- G504 # G504: Blocklisted import net/http/cgi: Go versions < 1.6.3 are vulnerable to Httpoxy attack: (CVE-2016-5386); (only affects go < 1.6.3)
govet:
enable-all: true
disable:
- fieldalignment # TODO: evaluate which ones should be updated.
importas:
# Do not allow unaliased imports of aliased packages.
no-unaliased: true
alias:
# Enforce alias to prevent it accidentally being used instead of our
# own errdefs package (or vice-versa).
- pkg: github.com/containerd/errdefs
alias: cerrdefs
- pkg: github.com/containerd/containerd/images
alias: c8dimages
- pkg: github.com/opencontainers/image-spec/specs-go/v1
alias: ocispec
- pkg: go.etcd.io/bbolt
alias: bolt
# Enforce that gotest.tools/v3/assert/cmp is always aliased as "is"
- pkg: gotest.tools/v3/assert/cmp
alias: is
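The aliases above make an import block look roughly like the sketch below; the referenced symbols are only there to keep the example compilable and are assumptions about those packages' APIs.
```
package example

import (
	cerrdefs "github.com/containerd/errdefs"
	ocispec "github.com/opencontainers/image-spec/specs-go/v1"
	bolt "go.etcd.io/bbolt"
	is "gotest.tools/v3/assert/cmp"
)

// References so the aliased imports are used.
var (
	_ = cerrdefs.ErrNotFound
	_ ocispec.Descriptor
	_ *bolt.DB
	_ = is.Equal
)
```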
nakedret:
# Disallow naked returns if func has more lines of code than this setting.
# Default: 30
max-func-lines: 0
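With `max-func-lines: 0`, naked returns are flagged in functions of any length; an invented example:
```
package example

func splitBad(s string) (head, tail string) {
	head, tail = s[:1], s[1:]
	return // flagged: naked return, even in a short function
}

func splitGood(s string) (string, string) {
	return s[:1], s[1:] // explicit return values
}
```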
revive:
rules:
# FIXME make sure all packages have a description. Currently, there are many packages without one.
- name: package-comments
disabled: true
staticcheck:
checks:
- all
- -QF1008 # Omit embedded fields from selector expression; https://staticcheck.dev/docs/checks/#QF1008
- -ST1000 # Incorrect or missing package comment; https://staticcheck.dev/docs/checks/#ST1000
- -ST1003 # Poorly chosen identifier; https://staticcheck.dev/docs/checks/#ST1003
- -ST1005 # Incorrectly formatted error string; https://staticcheck.dev/docs/checks/#ST1005
spancheck:
# Default: ["end"]
checks:
- end # check that `span.End()` is called
- record-error # check that `span.RecordError(err)` is called when an error is returned
- set-status # check that `span.SetStatus(codes.Error, msg)` is called when an error is returned
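For context (the linter is disabled above with a FIXME), a sketch of code satisfying all three checks, assuming the standard OpenTelemetry API:
```
package example

import (
	"context"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/codes"
)

func do(ctx context.Context, work func(context.Context) error) error {
	ctx, span := otel.Tracer("example").Start(ctx, "do")
	defer span.End() // "end"
	if err := work(ctx); err != nil {
		span.RecordError(err)                    // "record-error"
		span.SetStatus(codes.Error, err.Error()) // "set-status"
		return err
	}
	return nil
}
```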
thelper:
test:
# Check *testing.T is first param (or after context.Context) of helper function.
first: false
# Check t.Helper() begins helper function.
begin: false
benchmark:
# Check *testing.B is first param (or after context.Context) of helper function.
first: false
# Check b.Helper() begins helper function.
begin: false
tb:
# Check *testing.TB is first param (or after context.Context) of helper function.
first: false
# Check *testing.TB param has name tb.
name: false
# Check tb.Helper() begins helper function.
begin: false
fuzz:
# Check *testing.F is first param (or after context.Context) of helper function.
first: false
# Check f.Helper() begins helper function.
begin: false
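Every thelper check is relaxed above; for contrast, an invented helper written the way the unrelaxed linter would want it: `t.Helper()` first, `*testing.T` as the first parameter.
```
package example_test

import (
	"strconv"
	"testing"
)

func mustAtoi(t *testing.T, s string) int {
	t.Helper() // attributes failures to the caller
	n, err := strconv.Atoi(s)
	if err != nil {
		t.Fatalf("Atoi(%q): %v", s, err)
	}
	return n
}

func TestMustAtoi(t *testing.T) {
	if got := mustAtoi(t, "42"); got != 42 {
		t.Errorf("got %d, want 42", got)
	}
}
```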
usestdlibvars:
# Suggest the use of http.MethodXX.
http-method: true
# Suggest the use of http.StatusXX.
http-status-code: true
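A small sketch of what these two settings flag (the function is invented):
```
package example

import "net/http"

func ping(url string) (bool, error) {
	req, err := http.NewRequest(http.MethodGet, url, nil) // not "GET"
	if err != nil {
		return false, err
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK, nil // not 200
}
```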
exclusions:
paths:
- volume/drivers/proxy.go # TODO: this is a generated file but with an invalid header, see https://github.com/moby/moby/pull/46274
rules:
# We prefer to use "linters.exclusions.rules" so that new "default" exclusions are not
# automatically inherited. We can decide whether or not to follow upstream
# defaults when updating golangci-lint versions.
# Unfortunately, this means we have to copy the whole exclusion pattern, as
# (unlike the "include" option), the "exclude" option does not take exclusion
# IDs.
#
# These exclusion patterns are copied from the default excludes at:
# https://github.com/golangci/golangci-lint/blob/v1.61.0/pkg/config/issues.go#L11-L104
#
# The default list of exclusions can be found at:
# https://golangci-lint.run/usage/false-positives/#default-exclusions
# Exclude some linters from running on tests files.
- path: _test\.go
linters:
- errcheck
- text: "G404: Use of weak random number generator"
path: _test\.go
linters:
- gosec
# Suppress golint complaining about generated types in api/types/
- text: "type name will be used as (container|volume)\\.(Container|Volume).* by other packages, and that stutters; consider calling this"
path: "api/types/(volume|container)/"
linters:
- revive
# FIXME: ignoring unused assigns to ctx for now; too many hits in libnetwork/xxx functions that set up traces
- text: "assigned to ctx, but never used afterwards"
linters:
- wastedassign
- text: "ineffectual assignment to ctx"
source: "ctx[, ].*=.*\\(ctx[,)]"
linters:
- ineffassign
- text: "SA4006: this value of ctx is never used"
source: "ctx[, ].*=.*\\(ctx[,)]"
linters:
- staticcheck
# FIXME(thaJeztah): ignoring these transitional utilities until BuildKit is vendored with https://github.com/moby/moby/pull/49743
- text: "SA1019: idtools\\.(ToUserIdentityMapping|FromUserIdentityMapping|IdentityMapping) is deprecated"
linters:
- staticcheck
# Ignore "nested context in function literal (fatcontext)" as we intentionally set up tracing on a base-context for tests.
# FIXME(thaJeztah): see if there's a more idiomatic way to do this.
- text: 'nested context in function literal'
path: '((main|check)_(linux_|)test\.go)|testutil/helpers\.go'
linters:
- fatcontext
- text: '^shadow: declaration of "(ctx|err|ok)" shadows declaration'
linters:
- govet
- text: '^shadow: declaration of "(out)" shadows declaration'
path: _test\.go
linters:
- govet
- text: 'use of `regexp.MustCompile` forbidden'
path: _test\.go
linters:
- forbidigo
- text: 'use of `regexp.MustCompile` forbidden'
path: "internal/lazyregexp"
linters:
- forbidigo
- text: 'use of `regexp.MustCompile` forbidden'
path: "libnetwork/cmd/networkdb-test/dbclient"
linters:
- forbidigo
# Ignore deprecated disk usage type warnings; these types will be moved into the daemon backend internals in the next major release.
- text: "SA1019: ((buildtypes|build).CacheDiskUsage|(container|containertypes|image|volume|volumetypes).DiskUsage) is deprecated"
linters:
- staticcheck
# Log a warning if an exclusion rule is unused.
# Default: false
warn-unused: true
issues:
# Maximum issues count per one linter. Set to 0 to disable. Default is 50.
max-issues-per-linter: 0
# Maximum count of issues with the same text. Set to 0 to disable. Default is 3.
max-same-issues: 0

.mailmap

@@ -1,23 +1,15 @@
# This file lists the canonical name and email of contributors, and is used to
# generate AUTHORS (in hack/generate-authors.sh).
#
# To find new duplicates, regenerate AUTHORS and scan for name duplicates, or
# run the following to find email duplicates:
# git log --format='%aE - %aN' | sort -uf | awk -v IGNORECASE=1 '$1 in a {print a[$1]; print}; {a[$1]=$0}'
#
# For an explanation of this file format, consult gitmailmap(5).
# Generate AUTHORS: hack/generate-authors.sh
Aaron Yoshitake <airandfingers@gmail.com>
# Tip for finding duplicates (besides scanning the output of AUTHORS for name
# duplicates that aren't also email duplicates): scan the output of:
# git log --format='%aE - %aN' | sort -uf
#
# For explanation on this file format: man git-shortlog
<21551195@zju.edu.cn> <hsinko@users.noreply.github.com>
<mr.wrfly@gmail.com> <wrfly@users.noreply.github.com>
Aaron L. Xu <liker.xu@foxmail.com>
Aaron L. Xu <liker.xu@foxmail.com> <likexu@harmonycloud.cn>
Aaron Lehmann <alehmann@netflix.com>
Aaron Lehmann <alehmann@netflix.com> <aaron.lehmann@docker.com>
Abhinandan Prativadi <aprativadi@gmail.com>
Abhinandan Prativadi <aprativadi@gmail.com> <abhi@docker.com>
Abhinandan Prativadi <aprativadi@gmail.com> abhi <user.email>
Abhishek Chanda <abhishek.becs@gmail.com>
Abhishek Chanda <abhishek.becs@gmail.com> <abhishek.chanda@emc.com>
Ada Mancini <ada@docker.com>
Abhinandan Prativadi <abhi@docker.com>
Adam Dobrawy <naczelnik@jawnosc.tk>
Adam Dobrawy <naczelnik@jawnosc.tk> <ad-m@users.noreply.github.com>
Adrien Gallouët <adrien@gallouet.fr> <angt@users.noreply.github.com>
@@ -30,42 +22,22 @@ Akihiro Matsushima <amatsusbit@gmail.com> <amatsus@users.noreply.github.com>
Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp> <suda.akihiro@lab.ntt.co.jp>
Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp> <suda.kyoto@gmail.com>
Akshay Moghe <akshay.moghe@gmail.com>
Alano Terblanche <alano.terblanche@docker.com>
Alano Terblanche <alano.terblanche@docker.com> <18033717+Benehiko@users.noreply.github.com>
Albin Kerouanton <albinker@gmail.com>
Albin Kerouanton <albinker@gmail.com> <557933+akerouanton@users.noreply.github.com>
Albin Kerouanton <albinker@gmail.com> <albin@akerouanton.name>
Aleksa Sarai <asarai@suse.de>
Aleksa Sarai <asarai@suse.de> <asarai@suse.com>
Aleksa Sarai <asarai@suse.de> <cyphar@cyphar.com>
Aleksandrs Fadins <aleks@s-ko.net>
Alessandro Boch <aboch@tetrationanalytics.com>
Alessandro Boch <aboch@tetrationanalytics.com> <aboch@docker.com>
Alessandro Boch <aboch@tetrationanalytics.com> <aboch@socketplane.io>
Alessandro Boch <aboch@tetrationanalytics.com> <aboch@users.noreply.github.com>
Alex Chan <alex@alexwlchan.net>
Alex Chan <alex@alexwlchan.net> <alex.chan@metaswitch.com>
Alex Chen <alexchenunix@gmail.com> <root@localhost.localdomain>
Alex Ellis <alexellis2@gmail.com>
Alex Goodman <wagoodman@gmail.com> <wagoodman@users.noreply.github.com>
Alexander Larsson <alexl@redhat.com> <alexander.larsson@gmail.com>
Alexander Morozov <lk4d4math@gmail.com>
Alexander Morozov <lk4d4math@gmail.com> <lk4d4@docker.com>
Alexander Morozov <lk4d4@docker.com>
Alexander Morozov <lk4d4@docker.com> <lk4d4math@gmail.com>
Alexandre Beslic <alexandre.beslic@gmail.com> <abronan@docker.com>
Alexandre González <agonzalezro@gmail.com>
Alexis Ries <ries.alexis@gmail.com>
Alexis Ries <ries.alexis@gmail.com> <alexis.ries.ext@orange.com>
Alexis Thomas <fr.alexisthomas@gmail.com>
Alicia Lauerman <alicia@eta.im> <allydevour@me.com>
Allen Sun <allensun.shl@alibaba-inc.com> <allen.sun@daocloud.io>
Allen Sun <allensun.shl@alibaba-inc.com> <shlallen1990@gmail.com>
Anca Iordache <anca.iordache@docker.com>
Andrea Denisse Gómez <crypto.andrea@protonmail.ch>
Andrew Baxter <423qpsxzhh8k3h@s.rendaw.me>
Andrew Baxter <423qpsxzhh8k3h@s.rendaw.me> andrew <>
Andrew Kim <taeyeonkim90@gmail.com>
Andrew Kim <taeyeonkim90@gmail.com> <akim01@fortinet.com>
Andrew Weiss <andrew.weiss@docker.com> <andrew.weiss@microsoft.com>
Andrew Weiss <andrew.weiss@docker.com> <andrew.weiss@outlook.com>
Andrey Kolomentsev <andrey.kolomentsev@docker.com>
@@ -73,8 +45,6 @@ Andrey Kolomentsev <andrey.kolomentsev@docker.com> <andrey.kolomentsev@gmail.com
André Martins <aanm90@gmail.com> <martins@noironetworks.com>
Andy Rothfusz <github@developersupport.net> <github@metaliveblog.com>
Andy Smith <github@anarkystic.com>
Andy Zhang <andy.zhangtao@hotmail.com>
Andy Zhang <andy.zhangtao@hotmail.com> <ztao@tibco-support.com>
Ankush Agarwal <ankushagarwal11@gmail.com> <ankushagarwal@users.noreply.github.com>
Antonio Murdaca <antonio.murdaca@gmail.com> <amurdaca@redhat.com>
Antonio Murdaca <antonio.murdaca@gmail.com> <me@runcom.ninja>
@@ -84,19 +54,11 @@ Antonio Murdaca <antonio.murdaca@gmail.com> <runcom@users.noreply.github.com>
Anuj Bahuguna <anujbahuguna.dev@gmail.com>
Anuj Bahuguna <anujbahuguna.dev@gmail.com> <abahuguna@fiberlink.com>
Anusha Ragunathan <anusha.ragunathan@docker.com> <anusha@docker.com>
Anyu Wang <wanganyu@outlook.com>
Arko Dasgupta <arko@tetrate.io>
Arko Dasgupta <arko@tetrate.io> <arko.dasgupta@docker.com>
Arko Dasgupta <arko@tetrate.io> <arkodg@users.noreply.github.com>
Arnaud Porterie <icecrime@gmail.com>
Arnaud Porterie <icecrime@gmail.com> <arnaud.porterie@docker.com>
Arnaud Rebillout <arnaud.rebillout@collabora.com>
Arnaud Rebillout <arnaud.rebillout@collabora.com> <elboulangero@gmail.com>
Arko Dasgupta <arko.dasgupta@docker.com>
Arko Dasgupta <arko.dasgupta@docker.com> <arkodg@users.noreply.github.com>
Arnaud Porterie <arnaud.porterie@docker.com>
Arnaud Porterie <arnaud.porterie@docker.com> <icecrime@gmail.com>
Arthur Gautier <baloo@gandi.net> <superbaloo+registrations.github@superbaloo.net>
Artur Meyster <arthurfbi@yahoo.com>
Austin Vazquez <austin.vazquez.dev@gmail.com>
Austin Vazquez <austin.vazquez.dev@gmail.com> <55906459+austinvazquez@users.noreply.github.com>
Austin Vazquez <austin.vazquez.dev@gmail.com> <macedonv@amazon.com>
Avi Miller <avi.miller@oracle.com> <avi.miller@gmail.com>
Ben Bonnefoy <frenchben@docker.com>
Ben Golub <ben.golub@dotcloud.com>
@@ -112,23 +74,14 @@ Bily Zhang <xcoder@tenxcloud.com>
Bin Liu <liubin0329@gmail.com>
Bin Liu <liubin0329@gmail.com> <liubin0329@users.noreply.github.com>
Bingshen Wang <bingshen.wbs@alibaba-inc.com>
Bjorn Neergaard <bjorn@neersighted.com>
Bjorn Neergaard <bjorn@neersighted.com> <bjorn.neergaard@docker.com>
Bjorn Neergaard <bjorn@neersighted.com> <bneergaard@mirantis.com>
Boaz Shuster <ripcurld.github@gmail.com>
Bojun Zhu <bojun.zhu@foxmail.com>
Boqin Qin <bobbqqin@gmail.com>
Boshi Lian <farmer1992@gmail.com>
Brandon Philips <brandon.philips@coreos.com> <brandon@ifup.co>
Brandon Philips <brandon.philips@coreos.com> <brandon@ifup.org>
Brent Salisbury <brent.salisbury@docker.com> <brent@docker.com>
Brian Goff <cpuguy83@gmail.com>
Brian Goff <cpuguy83@gmail.com> <bgoff@cpuguy83-mbp.home>
Brian Goff <cpuguy83@gmail.com> <bgoff@cpuguy83-mbp.local>
Brian Goff <cpuguy83@gmail.com> <brian.goff@microsoft.com>
Brian Goff <cpuguy83@gmail.com> <cpuguy@hey.com>
Calvin Liu <flycalvin@qq.com>
Cameron Sparr <gh@sparr.email>
Carlos de Paula <me@carlosedp.com>
Chander Govindarajan <chandergovind@gmail.com>
Chao Wang <wangchao.fnst@cn.fujitsu.com> <chaowang@localhost.localdomain>
@@ -139,18 +92,12 @@ Chen Mingjie <chenmingjie0828@163.com>
Chen Qiu <cheney-90@hotmail.com>
Chen Qiu <cheney-90@hotmail.com> <21321229@zju.edu.cn>
Chengfei Shang <cfshang@alauda.io>
Chengyu Zhu <hudson@cyzhu.com>
Chentianze <cmoman@126.com>
Chris Dias <cdias@microsoft.com>
Chris McKinnel <chris.mckinnel@tangentlabs.co.uk>
Chris Price <cprice@mirantis.com>
Chris Price <cprice@mirantis.com> <chris.price@docker.com>
Chris Telfer <ctelfer@docker.com>
Chris Telfer <ctelfer@docker.com> <ctelfer@users.noreply.github.com>
Christopher Biscardi <biscarch@sketcht.com>
Christopher Latham <sudosurootdev@gmail.com>
Christopher Petito <chrisjpetito@gmail.com>
Christopher Petito <chrisjpetito@gmail.com> <47751006+krissetto@users.noreply.github.com>
Christy Norman <christy@linux.vnet.ibm.com>
Chun Chen <ramichen@tencent.com> <chenchun.feed@gmail.com>
Corbin Coleman <corbin.coleman@docker.com>
@@ -158,8 +105,6 @@ Cristian Ariza <dev@cristianrz.com>
Cristian Staretu <cristian.staretu@gmail.com>
Cristian Staretu <cristian.staretu@gmail.com> <unclejack@users.noreply.github.com>
Cristian Staretu <cristian.staretu@gmail.com> <unclejacksons@gmail.com>
cui fliter <imcusg@gmail.com>
cui fliter <imcusg@gmail.com> cuishuang <imcusg@gmail.com>
CUI Wei <ghostplant@qq.com> cuiwei13 <cuiwei13@pku.edu.cn>
Daehyeok Mun <daehyeok@gmail.com>
Daehyeok Mun <daehyeok@gmail.com> <daehyeok@daehyeok-ui-MacBook-Air.local>
@@ -186,23 +131,16 @@ Dattatraya Kumbhar <dattatraya.kumbhar@gslab.com>
Dave Goodchild <buddhamagnet@gmail.com>
Dave Henderson <dhenderson@gmail.com> <Dave.Henderson@ca.ibm.com>
Dave Tucker <dt@docker.com> <dave@dtucker.co.uk>
David Dooling <dooling@gmail.com>
David Dooling <dooling@gmail.com> <david.dooling@docker.com>
David M. Karr <davidmichaelkarr@gmail.com>
David Sheets <dsheets@docker.com> <sheets@alum.mit.edu>
David Sissitka <me@dsissitka.com>
David Williamson <david.williamson@docker.com> <davidwilliamson@users.noreply.github.com>
Derek Ch <denc716@gmail.com>
Derek McGowan <derek@mcg.dev>
Derek McGowan <derek@mcg.dev> <derek@mcgstyle.net>
Deshi Xiao <dxiao@redhat.com> <dsxiao@dataman-inc.com>
Deshi Xiao <dxiao@redhat.com> <xiaods@gmail.com>
Dhilip Kumars <dhilip.kumar.s@huawei.com>
Diego Siqueira <dieg0@live.com>
Diogo Monica <diogo@docker.com> <diogo.monica@gmail.com>
Dmitry Sharshakov <d3dx12.xx@gmail.com>
Dmitry Sharshakov <d3dx12.xx@gmail.com> <sh7dm@outlook.com>
Dmytro Iakovliev <dmytro.iakovliev@zodiacsystems.com>
Dominic Yin <yindongchao@inspur.com>
Dominik Honnef <dominik@honnef.co> <dominikh@fork-bomb.org>
Doug Davis <dug@us.ibm.com> <duglin@users.noreply.github.com>
@@ -212,9 +150,6 @@ Drew Erny <derny@mirantis.com> <drew.erny@docker.com>
Elan Ruusamäe <glen@pld-linux.org>
Elan Ruusamäe <glen@pld-linux.org> <glen@delfi.ee>
Elango Sivanandam <elango.siva@docker.com>
Elango Sivanandam <elango.siva@docker.com> <elango@docker.com>
Eli Uriegas <seemethere101@gmail.com>
Eli Uriegas <seemethere101@gmail.com> <eli.uriegas@docker.com>
Eric G. Noriega <enoriega@vizuri.com> <egnoriega@users.noreply.github.com>
Eric Hanchrow <ehanchrow@ine.com> <eric.hanchrow@gmail.com>
Eric Rosenberg <ehaydenr@gmail.com> <ehaydenr@users.noreply.github.com>
@@ -234,14 +169,10 @@ Felix Hupfeld <felix@quobyte.com> <quofelix@users.noreply.github.com>
Felix Ruess <felix.ruess@gmail.com> <felix.ruess@roboception.de>
Feng Yan <fy2462@gmail.com>
Fengtu Wang <wangfengtu@huawei.com> <wangfengtu@huawei.com>
Filipe Pina <hzlu1ot0@duck.com>
Filipe Pina <hzlu1ot0@duck.com> <636320+fopina@users.noreply.github.com>
Francisco Carriedo <fcarriedo@gmail.com>
Frank Rosquin <frank.rosquin+github@gmail.com> <frank.rosquin@gmail.com>
Frank Yang <yyb196@gmail.com>
Frederick F. Kautz IV <fkautz@redhat.com> <fkautz@alumni.cmu.edu>
Fu JinLin <withlin@yeah.net>
Gabriel Goller <gabrielgoller123@gmail.com>
Gabriel Nicolas Avellaneda <avellaneda.gabriel@gmail.com>
Gaetan de Villele <gdevillele@gmail.com>
Gang Qiao <qiaohai8866@gmail.com> <1373319223@qq.com>
@@ -261,62 +192,42 @@ Guillaume J. Charmes <guillaume.charmes@docker.com> <guillaume.charmes@dotcloud.
Guillaume J. Charmes <guillaume.charmes@docker.com> <guillaume@charmes.net>
Guillaume J. Charmes <guillaume.charmes@docker.com> <guillaume@docker.com>
Guillaume J. Charmes <guillaume.charmes@docker.com> <guillaume@dotcloud.com>
Gunadhya S. <6939749+gunadhya@users.noreply.github.com>
Guoqiang QI <guoqiang.qi1@gmail.com>
Guri <odg0318@gmail.com>
Gurjeet Singh <gurjeet@singh.im> <singh.gurjeet@gmail.com>
Gustav Sinder <gustav.sinder@gmail.com>
Günther Jungbluth <gunther@gameslabs.net>
Hakan Özler <hakan.ozler@kodcu.com>
Hao Shu Wei <haoshuwei24@gmail.com>
Hao Shu Wei <haoshuwei24@gmail.com> <haoshuwei1989@163.com>
Hao Shu Wei <haoshuwei24@gmail.com> <haosw@cn.ibm.com>
Hao Shu Wei <haosw@cn.ibm.com>
Hao Shu Wei <haosw@cn.ibm.com> <haoshuwei1989@163.com>
Harald Albers <github@albersweb.de> <albers@users.noreply.github.com>
Harald Niesche <harald@niesche.de>
Harold Cooper <hrldcpr@gmail.com>
Harry Zhang <harryz@hyper.sh>
Harry Zhang <harryz@hyper.sh> <harryzhang@zju.edu.cn>
Harry Zhang <harryz@hyper.sh> <resouer@163.com>
Harry Zhang <harryz@hyper.sh> <resouer@gmail.com>
Harry Zhang <resouer@163.com>
Harshal Patil <harshal.patil@in.ibm.com> <harche@users.noreply.github.com>
He Simei <hesimei@zju.edu.cn>
Helen Xie <chenjg@harmonycloud.cn>
Hiroyuki Sasagawa <hs19870702@gmail.com>
Hollie Teal <hollie@docker.com>
Hollie Teal <hollie@docker.com> <hollie.teal@docker.com>
Hollie Teal <hollie@docker.com> <hollietealok@users.noreply.github.com>
hsinko <21551195@zju.edu.cn> <hsinko@users.noreply.github.com>
Hu Keping <hukeping@huawei.com>
Huajin Tong <fliterdashen@gmail.com>
Hui Kang <hkang.sunysb@gmail.com>
Hui Kang <hkang.sunysb@gmail.com> <kangh@us.ibm.com>
Huu Nguyen <huu@prismskylabs.com> <whoshuu@gmail.com>
Hyeongkyu Lee <hyeongkyu.lee@navercorp.com>
Hyzhou Zhy <hyzhou.zhy@alibaba-inc.com>
Hyzhou Zhy <hyzhou.zhy@alibaba-inc.com> <1187766782@qq.com>
Ian Campbell <ian.campbell@docker.com>
Ian Campbell <ian.campbell@docker.com> <ijc@docker.com>
Ilya Khlopotov <ilya.khlopotov@gmail.com>
Iskander Sharipov <quasilyte@gmail.com>
Ivan Babrou <ibobrik@gmail.com>
Ivan Markin <sw@nogoegst.net> <twim@riseup.net>
Jack Laxson <jackjrabbit@gmail.com>
Jacob Atzen <jacob@jacobatzen.dk> <jatzen@gmail.com>
Jacob Tomlinson <jacob@tom.linson.uk> <jacobtomlinson@users.noreply.github.com>
Jaivish Kothari <janonymous.codevulture@gmail.com>
Jake Moshenko <jake@devtable.com>
Jakub Drahos <jdrahos@pulsepoint.com>
Jakub Drahos <jdrahos@pulsepoint.com> <jack.drahos@gmail.com>
James Nesbitt <jnesbitt@mirantis.com>
James Nesbitt <jnesbitt@mirantis.com> <james.nesbitt@wunderkraut.com>
Jamie Hannaford <jamie@limetree.org> <jamie.hannaford@rackspace.com>
Jan Götte <jaseg@jaseg.net>
Jana Radhakrishnan <mrjana@docker.com>
Jana Radhakrishnan <mrjana@docker.com> <mrjana@socketplane.io>
Javier Bassi <javierbassi@gmail.com>
Javier Bassi <javierbassi@gmail.com> <CrimsonGlory@users.noreply.github.com>
Jay Lim <jay@imjching.com>
Jay Lim <jay@imjching.com> <imjching@hotmail.com>
Jean Rouge <rougej+github@gmail.com> <jer329@cornell.edu>
Jean-Baptiste Barth <jeanbaptiste.barth@gmail.com>
Jean-Baptiste Dalido <jeanbaptiste@appgratis.com>
@@ -346,16 +257,13 @@ Joffrey F <joffrey@docker.com> <joffrey@dotcloud.com>
Johan Euphrosine <proppy@google.com> <proppy@aminche.com>
John Harris <john@johnharris.io>
John Howard <github@lowenna.com>
John Howard <github@lowenna.com> <10522484+lowenna@users.noreply.github.com>
John Howard <github@lowenna.com> <jhoward@microsoft.com>
John Howard <github@lowenna.com> <jhoward@ntdev.microsoft.com>
John Howard <github@lowenna.com> <jhowardmsft@users.noreply.github.com>
John Howard <github@lowenna.com> <John.Howard@microsoft.com>
John Howard <github@lowenna.com> <john.howard@microsoft.com>
John Howard <github@lowenna.com> <john@lowenna.com>
John Stephens <johnstep@docker.com> <johnstep@users.noreply.github.com>
Jon Surrell <jon.surrell@gmail.com> <jon.surrell@automattic.com>
Jonathan A. Sternberg <jonathansternberg@gmail.com>
Jonathan A. Sternberg <jonathansternberg@gmail.com> <jonathan.sternberg@docker.com>
Jonathan Choy <jonathan.j.choy@gmail.com>
Jonathan Choy <jonathan.j.choy@gmail.com> <oni@tetsujinlabs.com>
Jordan Arentsen <blissdev@gmail.com>
@@ -373,12 +281,9 @@ Josh Wilson <josh.wilson@fivestars.com> <jcwilson@users.noreply.github.com>
Joyce Jang <mail@joycejang.com>
Julien Bordellier <julienbordellier@gmail.com> <git@julienbordellier.com>
Julien Bordellier <julienbordellier@gmail.com> <me@julienbordellier.com>
Jun Du <dujun5@huawei.com>
Justin Cormack <justin.cormack@docker.com>
Justin Cormack <justin.cormack@docker.com> <justin.cormack@unikernel.com>
Justin Cormack <justin.cormack@docker.com> <justin@specialbusservice.com>
Justin Keller <85903732+jk-vb@users.noreply.github.com>
Justin Keller <85903732+jk-vb@users.noreply.github.com> <jkeller@vb-jkeller-mbp.local>
Justin Simonelis <justin.p.simonelis@gmail.com> <justin.simonelis@PTS-JSIMON2.toronto.exclamation.com>
Justin Terry <juterry@microsoft.com>
Jérôme Petazzoni <jerome.petazzoni@docker.com> <jerome.petazzoni@dotcloud.com>
@@ -395,9 +300,6 @@ Ken Cochrane <kencochrane@gmail.com> <KenCochrane@gmail.com>
Ken Herner <kherner@progress.com> <chosenken@gmail.com>
Ken Reese <krrgithub@gmail.com>
Kenfe-Mickaël Laventure <mickael.laventure@gmail.com>
Kevin Alvarez <github@crazymax.dev>
Kevin Alvarez <github@crazymax.dev> <1951866+crazy-max@users.noreply.github.com>
Kevin Alvarez <github@crazymax.dev> <crazy-max@users.noreply.github.com>
Kevin Feyrer <kevin.feyrer@btinternet.com> <kevinfeyrer@users.noreply.github.com>
Kevin Kern <kaiwentan@harmonycloud.cn>
Kevin Meredith <kevin.m.meredith@gmail.com>
@@ -408,17 +310,11 @@ Konrad Kleine <konrad.wilhelm.kleine@gmail.com> <kwk@users.noreply.github.com>
Konstantin Gribov <grossws@gmail.com>
Konstantin Pelykh <kpelykh@zettaset.com>
Kotaro Yoshimatsu <kotaro.yoshimatsu@gmail.com>
Kunal Kushwaha <kushwaha_kunal_v7@lab.ntt.co.jp>
Kunal Kushwaha <kushwaha_kunal_v7@lab.ntt.co.jp> <btkushuwahak@KUNAL-PC.swh.swh.nttdata.co.jp>
Kunal Kushwaha <kushwaha_kunal_v7@lab.ntt.co.jp> <kunal.kushwaha@gmail.com>
Kyle Squizzato <ksquizz@gmail.com>
Kyle Squizzato <ksquizz@gmail.com> <kyle.squizzato@docker.com>
Lajos Papp <lajos.papp@sequenceiq.com> <lalyos@yahoo.com>
Lei Gong <lgong@alauda.io>
Lei Jitang <leijitang@huawei.com>
Lei Jitang <leijitang@huawei.com> <leijitang@gmail.com>
Lei Jitang <leijitang@huawei.com> <leijitang@outlook.com>
Leiiwang <u2takey@gmail.com>
Liang Mingqiang <mqliang.zju@gmail.com>
Liang-Chi Hsieh <viirya@gmail.com>
Liao Qingwei <liaoqingwei@huawei.com>
@@ -435,11 +331,8 @@ Lyn <energylyn@zju.edu.cn>
Lynda O'Leary <lyndaoleary29@gmail.com>
Lynda O'Leary <lyndaoleary29@gmail.com> <lyndaoleary@hotmail.com>
Ma Müller <mueller-ma@users.noreply.github.com>
Madhan Raj Mookkandy <MadhanRaj.Mookkandy@microsoft.com>
Madhan Raj Mookkandy <MadhanRaj.Mookkandy@microsoft.com> <madhanm@corp.microsoft.com>
Madhan Raj Mookkandy <MadhanRaj.Mookkandy@microsoft.com> <madhanm@microsoft.com>
Madhu Venugopal <mavenugo@gmail.com> <madhu@docker.com>
Madhu Venugopal <mavenugo@gmail.com> <madhu@socketplane.io>
Madhu Venugopal <madhu@socketplane.io> <madhu@docker.com>
Mageee <fangpuyi@foxmail.com> <21521230.zju.edu.cn>
Mansi Nahar <mmn4185@rit.edu> <mansi.nahar@macbookpro-mansinahar.local>
Mansi Nahar <mmn4185@rit.edu> <mansinahar@users.noreply.github.com>
@@ -452,12 +345,10 @@ Markan Patel <mpatel678@gmail.com>
Markus Kortlang <hyp3rdino@googlemail.com> <markus.kortlang@lhsystems.com>
Martin Redmond <redmond.martin@gmail.com> <martin@tinychat.com>
Martin Redmond <redmond.martin@gmail.com> <xgithub@redmond5.com>
Maru Newby <mnewby@thesprawl.net>
Mary Anthony <mary.anthony@docker.com> <mary@docker.com>
Mary Anthony <mary.anthony@docker.com> <moxieandmore@gmail.com>
Mary Anthony <mary.anthony@docker.com> moxiegirl <mary@docker.com>
Masato Ohba <over.rye@gmail.com>
Mathieu Paturel <mathieu.paturel@gmail.com>
Matt Bentley <matt.bentley@docker.com> <mbentley@mbentley.net>
Matt Schurenko <matt.schurenko@gmail.com>
Matt Williams <mattyw@me.com>
@@ -469,23 +360,15 @@ Matthias Kühnle <git.nivoc@neverbox.com> <kuehnle@online.de>
Mauricio Garavaglia <mauricio@medallia.com> <mauriciogaravaglia@gmail.com>
Maxwell <csuhp007@gmail.com>
Maxwell <csuhp007@gmail.com> <csuhqg@foxmail.com>
Menghui Chen <menghui.chen@alibaba-inc.com>
Michael Beskin <mrbeskin@gmail.com>
Michael Crosby <crosbymichael@gmail.com>
Michael Crosby <crosbymichael@gmail.com> <crosby.michael@gmail.com>
Michael Crosby <crosbymichael@gmail.com> <michael@crosbymichael.com>
Michael Crosby <crosbymichael@gmail.com> <michael@docker.com>
Michael Crosby <crosbymichael@gmail.com> <michael@thepasture.io>
Michael Crosby <michael@docker.com> <crosby.michael@gmail.com>
Michael Crosby <michael@docker.com> <crosbymichael@gmail.com>
Michael Crosby <michael@docker.com> <michael@crosbymichael.com>
Michael Hudson-Doyle <michael.hudson@canonical.com> <michael.hudson@linaro.org>
Michael Huettermann <michael@huettermann.net>
Michael Käufl <docker@c.michael-kaeufl.de> <michael-k@users.noreply.github.com>
Michael Nussbaum <michael.nussbaum@getbraintree.com>
Michael Nussbaum <michael.nussbaum@getbraintree.com> <code@getbraintree.com>
Michael Spetsiotis <michael_spets@hotmail.com>
Michael Stapelberg <michael+gh@stapelberg.de>
Michael Stapelberg <michael+gh@stapelberg.de> <stapelberg@google.com>
Michal Kostrzewa <michal.kostrzewa@codilime.com>
Michal Kostrzewa <michal.kostrzewa@codilime.com> <kostrzewa.michal@o2.pl>
Michal Minář <miminar@redhat.com>
Michał Gryko <github@odkurzacz.org>
Michiel de Jong <michiel@unhosted.org>
@@ -493,25 +376,14 @@ Mickaël Fortunato <morsi.morsicus@gmail.com>
Miguel Angel Alvarez Cabrerizo <doncicuto@gmail.com> <30386061+doncicuto@users.noreply.github.com>
Miguel Angel Fernández <elmendalerenda@gmail.com>
Mihai Borobocea <MihaiBorob@gmail.com> <MihaiBorobocea@gmail.com>
Mikael Davranche <mikael.davranche@corp.ovh.com>
Mikael Davranche <mikael.davranche@corp.ovh.com> <mikael.davranche@corp.ovh.net>
Mike Casas <mkcsas0@gmail.com> <mikecasas@users.noreply.github.com>
Mike Goelzer <mike.goelzer@docker.com> <mgoelzer@docker.com>
Milas Bowman <devnull@milas.dev>
Milas Bowman <devnull@milas.dev> <milas.bowman@docker.com>
Milas Bowman <devnull@milas.dev> <milasb@gmail.com>
Milind Chawre <milindchawre@gmail.com>
Misty Stanley-Jones <misty@docker.com> <misty@apache.org>
Mohammad Banikazemi <MBanikazemi@gmail.com>
Mohammad Banikazemi <MBanikazemi@gmail.com> <mb@us.ibm.com>
Mohd Sadiq <mohdsadiq058@gmail.com> <42430865+msadiq058@users.noreply.github.com>
Mohd Sadiq <mohdsadiq058@gmail.com> <mohdsadiq058@gmail.com>
Mohit Soni <mosoni@ebay.com> <mohitsoni1989@gmail.com>
Moorthy RS <rsmoorthy@gmail.com> <rsmoorthy@users.noreply.github.com>
Moysés Borges <moysesb@gmail.com>
Moysés Borges <moysesb@gmail.com> <moyses.furtado@wplex.com.br>
mrfly <mr.wrfly@gmail.com> <wrfly@users.noreply.github.com>
Myeongjoon Kim <kimmj8409@gmail.com>
Nace Oroz <orkica@gmail.com>
Natasha Jarus <linuxmercedes@gmail.com>
Nathan LeClaire <nathan.leclaire@docker.com> <nathan.leclaire@gmail.com>
@@ -528,14 +400,9 @@ Oh Jinkyun <tintypemolly@gmail.com> <tintypemolly@Ohui-MacBook-Pro.local>
Oliver Reason <oli@overrateddev.co>
Olli Janatuinen <olli.janatuinen@gmail.com>
Olli Janatuinen <olli.janatuinen@gmail.com> <olljanat@users.noreply.github.com>
Onur Filiz <onur.filiz@microsoft.com>
Onur Filiz <onur.filiz@microsoft.com> <ofiliz@users.noreply.github.com>
Ouyang Liduo <oyld0210@163.com>
Patrick St. laurent <patrick@saint-laurent.us>
Patrick Stapleton <github@gdi2290.com>
Paul Liljenberg <liljenberg.paul@gmail.com> <letters@paulnotcom.se>
Paweł Gronowski <pawel.gronowski@docker.com>
Paweł Gronowski <pawel.gronowski@docker.com> <me@woland.xyz>
Pavel Tikhomirov <ptikhomirov@virtuozzo.com> <ptikhomirov@parallels.com>
Pawel Konczalski <mail@konczalski.de>
Peter Choi <phkchoi89@gmail.com> <reikani@Peters-MacBook-Pro.local>
@@ -543,56 +410,32 @@ Peter Dave Hello <hsu@peterdavehello.org> <PeterDaveHello@users.noreply.github.c
Peter Jaffe <pjaffe@nevo.com>
Peter Nagy <xificurC@gmail.com> <pnagy@gratex.com>
Peter Waller <p@pwaller.net> <peter@scraperwiki.com>
Phil Estes <estesp@gmail.com>
Phil Estes <estesp@gmail.com> <estesp@amazon.com>
Phil Estes <estesp@gmail.com> <estesp@linux.vnet.ibm.com>
Phil Estes <estesp@linux.vnet.ibm.com> <estesp@gmail.com>
Philip Alexander Etling <paetling@gmail.com>
Philipp Gillé <philipp.gille@gmail.com> <philippgille@users.noreply.github.com>
Prasanna Gautam <prasannagautam@gmail.com>
Puneet Pruthi <puneet.pruthi@oracle.com>
Puneet Pruthi <puneet.pruthi@oracle.com> <puneetpruthi@gmail.com>
Qiang Huang <h.huangqiang@huawei.com>
Qiang Huang <h.huangqiang@huawei.com> <qhuang@10.0.2.15>
Qin TianHuan <tianhuan@bingotree.cn>
Ray Tsang <rayt@google.com> <saturnism@users.noreply.github.com>
Renaud Gaubert <rgaubert@nvidia.com> <renaud.gaubert@gmail.com>
Richard Scothern <richard.scothern@gmail.com>
Rob Murray <rob.murray@docker.com>
Rob Murray <rob.murray@docker.com> <148866618+robmry@users.noreply.github.com>
Robert Terhaar <rterhaar@atlanticdynamic.com> <robbyt@users.noreply.github.com>
Roberto G. Hashioka <roberto.hashioka@docker.com> <roberto_hashioka@hotmail.com>
Roberto Muñoz Fernández <robertomf@gmail.com> <roberto.munoz.fernandez.contractor@bbva.com>
Robin Thoni <robin@rthoni.com>
Rodrigo Campos <rodrigoca@microsoft.com>
Rodrigo Campos <rodrigoca@microsoft.com> <rodrigo@kinvolk.io>
Roman Dudin <katrmr@gmail.com> <decadent@users.noreply.github.com>
Rong Zhang <rongzhang@alauda.io>
Rongxiang Song <tinysong1226@gmail.com>
Rony Weng <ronyweng@synology.com>
Ross Boucher <rboucher@gmail.com>
Rui Cao <ruicao@alauda.io>
Rui JingAn <quiterace@gmail.com>
Runshen Zhu <runshen.zhu@gmail.com>
Ryan Stelly <ryan.stelly@live.com>
Ryoga Saito <contact@proelbtn.com>
Ryoga Saito <contact@proelbtn.com> <proelbtn@users.noreply.github.com>
Sainath Grandhi <sainath.grandhi@intel.com>
Sainath Grandhi <sainath.grandhi@intel.com> <saiallforums@gmail.com>
Sakeven Jiang <jc5930@sina.cn>
Samuel Karp <me@samuelkarp.com> <skarp@amazon.com>
Sandeep Bansal <sabansal@microsoft.com>
Sandeep Bansal <sabansal@microsoft.com> <msabansal@microsoft.com>
Santhosh Manohar <santhosh@docker.com>
Sargun Dhillon <sargun@netflix.com> <sargun@sargun.me>
Satoshi Tagomori <tagomoris@gmail.com>
Sean Lee <seanlee@tw.ibm.com> <scaleoutsean@users.noreply.github.com>
Sebastiaan van Stijn <github@gone.nl>
Sebastiaan van Stijn <github@gone.nl> <moby@example.com>
Sebastiaan van Stijn <github@gone.nl> <sebastiaan@ws-key-sebas3.dpi1.dpi>
Sebastiaan van Stijn <github@gone.nl> <thaJeztah@users.noreply.github.com>
Sebastian Thomschke <sebthom@users.noreply.github.com>
Seongyeol Lim <seongyeol37@gmail.com>
Serhii Nakon <serhii.n@thescimus.com>
Shaun Kaasten <shaunk@gmail.com>
Shawn Landden <shawn@churchofgit.com> <shawnlandden@gmail.com>
Shengbo Song <thomassong@tencent.com>
@@ -601,10 +444,10 @@ Shih-Yuan Lee <fourdollars@gmail.com>
Shishir Mahajan <shishir.mahajan@redhat.com> <smahajan@redhat.com>
Shu-Wai Chow <shu-wai.chow@seattlechildrens.org>
Shukui Yang <yangshukui@huawei.com>
Shuwei Hao <haosw@cn.ibm.com>
Shuwei Hao <haosw@cn.ibm.com> <haoshuwei24@gmail.com>
Sidhartha Mani <sidharthamn@gmail.com>
Sjoerd Langkemper <sjoerd-github@linuxonly.nl> <sjoerd@byte.nl>
Smark Meng <smark@freecoop.net>
Smark Meng <smark@freecoop.net> <smarkm@users.noreply.github.com>
Solomon Hykes <solomon@docker.com> <s@docker.com>
Solomon Hykes <solomon@docker.com> <solomon.hykes@dotcloud.com>
Solomon Hykes <solomon@docker.com> <solomon@dotcloud.com>
@@ -635,48 +478,29 @@ Sven Dowideit <SvenDowideit@home.org.au> <SvenDowideit@fosiki.com>
Sven Dowideit <SvenDowideit@home.org.au> <SvenDowideit@home.org.au>
Sven Dowideit <SvenDowideit@home.org.au> <SvenDowideit@users.noreply.github.com>
Sven Dowideit <SvenDowideit@home.org.au> <¨SvenDowideit@home.org.au¨>
Sylvain Baubeau <lebauce@gmail.com>
Sylvain Baubeau <lebauce@gmail.com> <sbaubeau@redhat.com>
Sylvain Bellemare <sylvain@ascribe.io>
Sylvain Bellemare <sylvain@ascribe.io> <sylvain.bellemare@ezeep.com>
Takuto Sato <tockn.jp@gmail.com>
Tangi Colin <tangicolin@gmail.com>
Tejesh Mehta <tejesh.mehta@gmail.com> <tj@init.me>
Terry Chu <zue.hterry@gmail.com>
Terry Chu <zue.hterry@gmail.com> <jubosh.tw@gmail.com>
Thatcher Peskens <thatcher@docker.com>
Thatcher Peskens <thatcher@docker.com> <thatcher@dotcloud.com>
Thatcher Peskens <thatcher@docker.com> <thatcher@gmx.net>
Thiago Alves Silva <thiago.alves@aurea.com>
Thiago Alves Silva <thiago.alves@aurea.com> <thiagoalves@users.noreply.github.com>
Thomas Gazagnaire <thomas@gazagnaire.org> <thomas@gazagnaire.com>
Thomas Ledos <thomas.ledos92@gmail.com>
Thomas Léveil <thomasleveil@gmail.com>
Thomas Léveil <thomasleveil@gmail.com> <thomasleveil@users.noreply.github.com>
Tibor Vass <teabee89@gmail.com> <tibor@docker.com>
Tibor Vass <teabee89@gmail.com> <tiborvass@users.noreply.github.com>
Till Claassen <pixelistik@users.noreply.github.com>
Tim Bart <tim@fewagainstmany.com>
Tim Bosse <taim@bosboot.org> <maztaim@users.noreply.github.com>
Tim Potter <tpot@hpe.com>
Tim Potter <tpot@hpe.com> <tpot@Tims-MacBook-Pro.local>
Tim Ruffles <oi@truffles.me.uk> <timruffles@googlemail.com>
Tim Terhorst <mynamewastaken+git@gmail.com>
Tim Wagner <tim.wagner@freenet.ag>
Tim Wagner <tim.wagner@freenet.ag> <33624860+herrwagner@users.noreply.github.com>
Tim Zju <21651152@zju.edu.cn>
Timothy Hobbs <timothyhobbs@seznam.cz>
Toli Kuznets <toli@docker.com>
Tom Barlow <tomwbarlow@gmail.com>
Tom Denham <tom@tomdee.co.uk>
Tom Denham <tom@tomdee.co.uk> <tom.denham@metaswitch.com>
Tom Sweeney <tsweeney@redhat.com>
Tom Wilkie <tom.wilkie@gmail.com>
Tom Wilkie <tom.wilkie@gmail.com> <tom@weave.works>
Tõnis Tiigi <tonistiigi@gmail.com>
Trace Andreason <tandreason@gmail.com>
Trapier Marshall <tmarshall@mirantis.com>
Trapier Marshall <tmarshall@mirantis.com> <trapier.marshall@docker.com>
Trishna Guha <trishnaguha17@gmail.com>
Tristan Carel <tristan@cogniteev.com>
Tristan Carel <tristan@cogniteev.com> <tristan.carel@gmail.com>
@@ -690,15 +514,12 @@ Victor Vieux <victor.vieux@docker.com> <victor@docker.com>
Victor Vieux <victor.vieux@docker.com> <victor@dotcloud.com>
Victor Vieux <victor.vieux@docker.com> <victorvieux@gmail.com>
Victor Vieux <victor.vieux@docker.com> <vieux@docker.com>
Vikas Choudhary <choudharyvikas16@gmail.com>
Vikram bir Singh <vsingh@mirantis.com>
Vikram bir Singh <vsingh@mirantis.com> <vikrambir.singh@docker.com>
Viktor Vojnovski <viktor.vojnovski@amadeus.com> <vojnovski@gmail.com>
Vincent Batts <vbatts@redhat.com> <vbatts@hashbangbash.com>
Vincent Bernat <vincent@bernat.ch>
Vincent Bernat <vincent@bernat.ch> <bernat@luffy.cx>
Vincent Bernat <vincent@bernat.ch> <Vincent.Bernat@exoscale.ch>
Vincent Bernat <vincent@bernat.ch> <vincent@bernat.im>
Vincent Bernat <Vincent.Bernat@exoscale.ch> <bernat@luffy.cx>
Vincent Bernat <Vincent.Bernat@exoscale.ch> <vincent@bernat.im>
Vincent Boulineau <vincent.boulineau@datadoghq.com>
Vincent Demeester <vincent.demeester@docker.com> <vincent+github@demeester.fr>
Vincent Demeester <vincent.demeester@docker.com> <vincent@demeester.fr>
@@ -707,8 +528,6 @@ Vishnu Kannan <vishnuk@google.com>
Vitaly Ostrosablin <vostrosablin@virtuozzo.com>
Vitaly Ostrosablin <vostrosablin@virtuozzo.com> <tmp6154@yandex.ru>
Vladimir Rutsky <altsysrq@gmail.com> <iamironbob@gmail.com>
Vladislav Kolesnikov <vkolesnikov@beget.ru>
Vladislav Kolesnikov <vkolesnikov@beget.ru> <prime@vladqa.ru>
Walter Stanish <walter@pratyeka.org>
Wang Chao <chao.wang@ucloud.cn>
Wang Chao <chao.wang@ucloud.cn> <wcwxyz@gmail.com>
@@ -720,27 +539,18 @@ Wang Yuexiao <wang.yuexiao@zte.com.cn>
Wayne Chang <wayne@neverfear.org>
Wayne Song <wsong@docker.com> <wsong@users.noreply.github.com>
Wei Wu <wuwei4455@gmail.com> cizixs <cizixs@163.com>
Wei-Ting Kuo <waitingkuo0527@gmail.com>
Wen Cheng Ma <wenchma@cn.ibm.com>
Wenjun Tang <tangwj2@lenovo.com> <dodia@163.com>
Wewang Xiaorenfine <wang.xiaoren@zte.com.cn>
Will Weaver <monkey@buildingbananas.com>
Wing-Kam Wong <wingkwong.code@gmail.com>
WuLonghui <wlh6666@qq.com>
Xian Chaobo <xianchaobo@huawei.com>
Xian Chaobo <xianchaobo@huawei.com> <jimmyxian2004@yahoo.com.cn>
Xianglin Gao <xlgao@zju.edu.cn>
Xianjie <guxianjie@gmail.com>
Xianjie <guxianjie@gmail.com> <datastream@datastream-laptop.local>
Xianlu Bird <xianlubird@gmail.com>
Xiao YongBiao <xyb4638@gmail.com>
Xiao Zhang <xiaozhang0210@hotmail.com>
Xiaodong Liu <liuxiaodong@loongson.cn>
Xiaodong Zhang <a4012017@sina.com>
Xiaohua Ding <xiao_hua_ding@sina.cn>
Xiaoyu Zhang <zhang.xiaoyu33@zte.com.cn>
Xinfeng Liu <XinfengLiu@icloud.com>
Xinfeng Liu <XinfengLiu@icloud.com> <xinfeng.liu@gmail.com>
Xuecong Liao <satorulogic@gmail.com>
Yamasaki Masahide <masahide.y@gmail.com>
Yao Zaiyong <yaozaiyong@hotmail.com>
@@ -757,15 +567,9 @@ Yu Changchun <yuchangchun1@huawei.com>
Yu Chengxia <yuchengxia@huawei.com>
Yu Peng <yu.peng36@zte.com.cn>
Yu Peng <yu.peng36@zte.com.cn> <yupeng36@zte.com.cn>
Yuan Sun <sunyuan3@huawei.com>
Yue Zhang <zy675793960@yeah.net>
Yufei Xiong <yufei.xiong@qq.com>
Zach Gershman <zachgersh@gmail.com>
Zach Gershman <zachgersh@gmail.com> <zachgersh@users.noreply.github.com>
Zachary Jaffee <zjaffee@us.ibm.com> <zij@case.edu>
Zachary Jaffee <zjaffee@us.ibm.com> <zjaffee@apache.org>
Zhang Kun <zkazure@gmail.com>
Zhang Wentao <zhangwentao234@huawei.com>
ZhangHang <stevezhang2014@gmail.com>
Zhenkun Bi <bi.zhenkun@zte.com.cn>
Zhou Hao <zhouhao@cn.fujitsu.com>
@@ -775,5 +579,3 @@ Ziheng Liu <lzhfromustc@gmail.com>
Zou Yu <zouyu7@huawei.com>
Zuhayr Elahi <zuhayr.elahi@docker.com>
Zuhayr Elahi <zuhayr.elahi@docker.com> <elahi.zuhayr@gmail.com>
정재영 <jjy600901@gmail.com>
정재영 <jjy600901@gmail.com> <43400316+J-jaeyoung@users.noreply.github.com>

AUTHORS

File diff suppressed because it is too large.

CHANGELOG.md

File diff suppressed because it is too large.


@@ -72,7 +72,7 @@ anybody starts working on it.
We are always thrilled to receive pull requests. We do our best to process them
quickly. If your pull request is not accepted on the first try,
don't get discouraged! Our contributor's guide explains [the review process we
use for simple changes](https://docs.docker.com/contribute/overview/).
use for simple changes](https://docs.docker.com/opensource/workflow/make-a-contribution/).
### Design and cleanup proposals
@@ -83,39 +83,6 @@ contributions, see [the advanced contribution
section](https://docs.docker.com/opensource/workflow/advanced-contributing/) in
the contributors guide.
### Where to put your changes
You can make changes to any Go package within Moby outside of the vendor directory. There are no
restrictions on packages, but there are a few guidelines to follow when deciding where changes belong.
When adding new packages, first consider putting them in an internal directory to prevent
unintended importing from other modules. Code changes should go under the `api`, `client`,
or `daemon` modules, or one of the integration test directories.
Try to put a new package under the appropriate directory. The root directory is reserved for
configuration and build files; no source files will be accepted in the root.
- `api` - All types shared by client and daemon along with swagger definitions.
- `client` - All Go files for the docker client
- `contrib` - Files, configurations, and packages related to external tools or libraries
- `daemon` - All Go files and packages for building the daemon
- `docs` - All Moby technical documentation using markdown
- `hack` - All scripts used for testing, development, and CI
- `integration` - Testing the integration of the API, client, and daemon
- `integration-cli` - Deprecated integration tests of the docker cli with the daemon, no new tests allowed
- `pkg` - Legacy Go packages used externally, no new packages should be added here
- `project` - All files related to Moby project governance
- `vendor` - Autogenerated vendor files from `make vendor` command, do not manually edit files here
The daemon module has many subpackages. Consider putting new packages under one of these
directories.
- `daemon/cmd` - All Go main packages and the packages used only for that main package
- `daemon/internal` - All utility packages used by daemon and not intended for external use
- `daemon/man` - All Moby reference manuals used for the `man` command
- `daemon/plugins` - All included daemon plugins which are intended to be registered via init
- `daemon/pkg` - All libraries used by daemon and for integration testing
- `daemon/version` - Version package with the current daemon version
### Connect with other Moby Project contributors
<table class="tg">
@@ -134,8 +101,9 @@ directories.
<td>
<p>
Register for the Docker Community Slack at
<a href="https://dockr.ly/comm-slack" target="_blank">https://dockr.ly/comm-slack</a>.
<a href="https://community.docker.com/registrations/groups/4316" target="_blank">https://community.docker.com/registrations/groups/4316</a>.
We use the #moby-project channel for general discussion, and there are separate channels for other Moby projects such as #containerd.
Archives are available at <a href="https://dockercommunity.slackarchive.io/" target="_blank">https://dockercommunity.slackarchive.io/</a>.
</p>
</td>
</tr>
@@ -342,6 +310,36 @@ Don't forget: being a maintainer is a time investment. Make sure you
will have time to make yourself available. You don't have to be a
maintainer to make a difference on the project!
### Manage issues and pull requests using the Derek bot
If you want to help label, assign, close or reopen issues or pull requests
without commit rights, ask a maintainer to add your GitHub handle to the
`.DEREK.yml` file. [Derek](https://github.com/alexellis/derek) is a bot that extends
GitHub's user permissions to help non-committers manage issues and pull requests simply by commenting.
For example:
* Labels
```
Derek add label: kind/question
Derek remove label: status/claimed
```
* Assign work
```
Derek assign: username
Derek unassign: me
```
* Manage issues and PRs
```
Derek close
Derek reopen
```
## Moby community guidelines
We want to keep the Moby community awesome, growing and collaborative. We need
@@ -455,6 +453,6 @@ The rules:
guidelines. Since you've read all the rules, you now know that.
If you are having trouble getting into the mood of idiomatic Go, we recommend
reading through [Effective Go](https://go.dev/doc/effective_go). The
[Go Blog](https://go.dev/blog/) is also a great resource. Drinking the
reading through [Effective Go](https://golang.org/doc/effective_go.html). The
[Go Blog](https://blog.golang.org) is also a great resource. Drinking the
kool-aid is a lot easier than going thirsty.

Dockerfile

@@ -1,113 +1,88 @@
# syntax=docker/dockerfile:1
ARG GO_VERSION=1.24.9
ARG BASE_DEBIAN_DISTRO="bookworm"
ARG GOLANG_IMAGE="golang:${GO_VERSION}-${BASE_DEBIAN_DISTRO}"
ARG XX_VERSION=1.6.1
# VPNKIT_VERSION is the version of the vpnkit binary which is used as a fallback
# network driver for rootless.
ARG VPNKIT_VERSION=0.6.0
# DOCKERCLI_VERSION is the version of the CLI to install in the dev-container.
ARG DOCKERCLI_VERSION=v28.2.2
ARG DOCKERCLI_REPOSITORY="https://github.com/docker/cli.git"
# cli version used for integration-cli tests
ARG DOCKERCLI_INTEGRATION_REPOSITORY="https://github.com/docker/cli.git"
ARG DOCKERCLI_INTEGRATION_VERSION=v18.06.3-ce
# BUILDX_VERSION is the version of buildx to install in the dev container.
ARG BUILDX_VERSION=0.24.0
# COMPOSE_VERSION is the version of compose to install in the dev container.
ARG COMPOSE_VERSION=v2.36.2
# syntax=docker/dockerfile:1.1.7-experimental
ARG CROSS="false"
ARG SYSTEMD="false"
ARG FIREWALLD="false"
ARG DOCKER_STATIC=1
# IMPORTANT: When updating this please note that stdlib archive/tar pkg is vendored
ARG GO_VERSION=1.13.15
ARG DEBIAN_FRONTEND=noninteractive
ARG VPNKIT_VERSION=0.4.0
ARG DOCKER_BUILDTAGS="apparmor seccomp"
# REGISTRY_VERSION specifies the version of the registry to download from
# https://hub.docker.com/r/distribution/distribution. This version of
# the registry is used to test schema 2 manifests. Generally, the version
# specified here should match a current release.
ARG REGISTRY_VERSION=3.0.0
ARG BASE_DEBIAN_DISTRO="buster"
ARG GOLANG_IMAGE="golang:${GO_VERSION}-${BASE_DEBIAN_DISTRO}"
# delve is currently only supported on linux/amd64 and linux/arm64;
# https://github.com/go-delve/delve/blob/v1.24.1/pkg/proc/native/support_sentinel.go#L1
# https://github.com/go-delve/delve/blob/v1.24.1/pkg/proc/native/support_sentinel_linux.go#L1
#
# ppc64le support was added in v1.21.1, but is still experimental, and requires
# the "-tags exp.linuxppc64le" build-tag to be set:
# https://github.com/go-delve/delve/commit/71f12207175a1cc09668f856340d8a543c87dcca
ARG DELVE_SUPPORTED=${TARGETPLATFORM#linux/amd64} DELVE_SUPPORTED=${DELVE_SUPPORTED#linux/arm64} DELVE_SUPPORTED=${DELVE_SUPPORTED#linux/ppc64le}
ARG DELVE_SUPPORTED=${DELVE_SUPPORTED:+"unsupported"}
ARG DELVE_SUPPORTED=${DELVE_SUPPORTED:-"supported"}
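The three ARG lines above implement a branch-free platform check: each supported platform prefix is stripped in turn, so the value collapses to an empty string exactly when TARGETPLATFORM is supported, and the final value selects either the delve-supported or the delve-unsupported (binary-dummy) stage further down. A minimal shell sketch of the same parameter-expansion logic, with an illustrative TARGETPLATFORM value:
```
# Illustrative value; BuildKit normally provides TARGETPLATFORM.
TARGETPLATFORM="linux/riscv64"

# Strip each supported platform prefix; a supported platform leaves "".
DELVE_SUPPORTED=${TARGETPLATFORM#linux/amd64}
DELVE_SUPPORTED=${DELVE_SUPPORTED#linux/arm64}
DELVE_SUPPORTED=${DELVE_SUPPORTED#linux/ppc64le}

# Non-empty (no prefix matched) becomes "unsupported" ...
DELVE_SUPPORTED=${DELVE_SUPPORTED:+"unsupported"}
# ... and empty (a prefix matched) falls back to "supported".
DELVE_SUPPORTED=${DELVE_SUPPORTED:-"supported"}

echo "$DELVE_SUPPORTED"   # prints "unsupported" for linux/riscv64
```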
# cross compilation helper
FROM --platform=$BUILDPLATFORM tonistiigi/xx:${XX_VERSION} AS xx
# dummy stage to make sure the image is built for deps that don't support some
# architectures
FROM --platform=$BUILDPLATFORM busybox AS build-dummy
RUN mkdir -p /build
FROM scratch AS binary-dummy
COPY --from=build-dummy /build /build
# base
FROM --platform=$BUILDPLATFORM ${GOLANG_IMAGE} AS base
COPY --from=xx / /
# Disable collecting local telemetry, as collected by Go and Delve;
#
# - https://github.com/go-delve/delve/blob/v1.24.1/CHANGELOG.md#1231-2024-09-23
# - https://go.dev/doc/telemetry#background
RUN go telemetry off && [ "$(go telemetry)" = "off" ] || { echo "Failed to disable Go telemetry"; exit 1; }
FROM ${GOLANG_IMAGE} AS base
RUN echo 'Binary::apt::APT::Keep-Downloaded-Packages "true";' > /etc/apt/apt.conf.d/keep-cache
RUN apt-get update && apt-get install --no-install-recommends -y file
ARG APT_MIRROR
RUN sed -ri "s/(httpredir|deb).debian.org/${APT_MIRROR:-deb.debian.org}/g" /etc/apt/sources.list \
&& sed -ri "s/(security).debian.org/${APT_MIRROR:-security.debian.org}/g" /etc/apt/sources.list
ENV GO111MODULE=off
ENV GOTOOLCHAIN=local
FROM base AS criu
ADD --chmod=0644 https://download.opensuse.org/repositories/devel:/tools:/criu/Debian_11/Release.key /etc/apt/trusted.gpg.d/criu.gpg.asc
ARG DEBIAN_FRONTEND
# Install dependency packages specific to criu
RUN --mount=type=cache,sharing=locked,id=moby-criu-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-criu-aptcache,target=/var/cache/apt \
echo 'deb https://download.opensuse.org/repositories/devel:/tools:/criu/Debian_12/ /' > /etc/apt/sources.list.d/criu.list \
&& apt-get update \
&& apt-get install -y --no-install-recommends criu \
&& install -D /usr/sbin/criu /build/criu \
&& /build/criu --version
apt-get update && apt-get install -y --no-install-recommends \
libcap-dev \
libnet-dev \
libnl-3-dev \
libprotobuf-c-dev \
libprotobuf-dev \
protobuf-c-compiler \
protobuf-compiler \
python-protobuf
# registry
FROM distribution/distribution:$REGISTRY_VERSION AS registry
RUN mkdir /build && mv /bin/registry /build/registry
# Install CRIU for checkpoint/restore support
ARG CRIU_VERSION=3.14
RUN mkdir -p /usr/src/criu \
&& curl -sSL https://github.com/checkpoint-restore/criu/archive/v${CRIU_VERSION}.tar.gz | tar -C /usr/src/criu/ -xz --strip-components=1 \
&& cd /usr/src/criu \
&& make \
&& make PREFIX=/build/ install-criu
# go-swagger
FROM base AS swagger-src
WORKDIR /usr/src/swagger
# Currently uses a fork from https://github.com/kolyshkin/go-swagger/tree/golang-1.13-fix
# TODO: move to under moby/ or fix upstream go-swagger to work for us.
RUN git init . && git remote add origin "https://github.com/kolyshkin/go-swagger.git"
# GO_SWAGGER_COMMIT specifies the version of the go-swagger binary to build and
# install. Go-swagger is used in CI for validating swagger.yaml in hack/validate/swagger-gen
ARG GO_SWAGGER_COMMIT=c56166c036004ba7a3a321e5951ba472b9ae298c
RUN git fetch -q --depth 1 origin "${GO_SWAGGER_COMMIT}" && git checkout -q FETCH_HEAD
FROM base AS registry
WORKDIR /go/src/github.com/docker/distribution
# Install two versions of the registry. The first one is a recent version that
# supports both schema 1 and 2 manifests. The second one is an older version that
# only supports schema1 manifests. This allows integration-cli tests to cover
# push/pull with both schema1 and schema2 manifests.
# The old version of the registry is not working on arm64, so installation is
# skipped on that architecture.
ENV REGISTRY_COMMIT_SCHEMA1 ec87e9b6971d831f0eff752ddb54fb64693e51cd
ENV REGISTRY_COMMIT 47a064d4195a9b56133891bbb13620c3ac83a827
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=cache,target=/go/pkg/mod \
--mount=type=tmpfs,target=/go/src/ \
set -x \
&& git clone https://github.com/docker/distribution.git . \
&& git checkout -q "$REGISTRY_COMMIT" \
&& GOPATH="/go/src/github.com/docker/distribution/Godeps/_workspace:$GOPATH" \
go build -buildmode=pie -o /build/registry-v2 github.com/docker/distribution/cmd/registry \
&& case $(dpkg --print-architecture) in \
amd64|armhf|ppc64*|s390x) \
git checkout -q "$REGISTRY_COMMIT_SCHEMA1"; \
GOPATH="/go/src/github.com/docker/distribution/Godeps/_workspace:$GOPATH"; \
go build -buildmode=pie -o /build/registry-v2-schema1 github.com/docker/distribution/cmd/registry; \
;; \
esac
FROM base AS swagger
WORKDIR /go/src/github.com/go-swagger/go-swagger
ARG TARGETPLATFORM
RUN --mount=from=swagger-src,src=/usr/src/swagger,rw \
--mount=type=cache,target=/root/.cache/go-build,id=swagger-build-$TARGETPLATFORM \
WORKDIR $GOPATH/src/github.com/go-swagger/go-swagger
# Install go-swagger for validating swagger.yaml
# This is https://github.com/kolyshkin/go-swagger/tree/golang-1.13-fix
# TODO: move to under moby/ or fix upstream go-swagger to work for us.
ENV GO_SWAGGER_COMMIT 5e6cb12f7c82ce78e45ba71fa6cb1928094db050
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=cache,target=/go/pkg/mod \
--mount=type=tmpfs,target=/go/src/ <<EOT
set -e
xx-go build -o /build/swagger ./cmd/swagger
xx-verify /build/swagger
EOT
--mount=type=tmpfs,target=/go/src/ \
set -x \
&& git clone https://github.com/kolyshkin/go-swagger.git . \
&& git checkout -q "$GO_SWAGGER_COMMIT" \
&& go build -o /build/swagger github.com/go-swagger/go-swagger/cmd/swagger
# frozen-images
# See also frozenImages in "testutil/environment/protect.go" (which needs to
# be updated when adding images to this list)
FROM debian:${BASE_DEBIAN_DISTRO} AS frozen-images
ARG DEBIAN_FRONTEND
RUN --mount=type=cache,sharing=locked,id=moby-frozen-images-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-frozen-images-aptcache,target=/var/cache/apt \
apt-get update && apt-get install -y --no-install-recommends \
@@ -117,346 +92,227 @@ RUN --mount=type=cache,sharing=locked,id=moby-frozen-images-aptlib,target=/var/l
# Get useful and necessary Hub images so we can "docker load" locally instead of pulling
COPY contrib/download-frozen-image-v2.sh /
ARG TARGETARCH
ARG TARGETVARIANT
RUN /download-frozen-image-v2.sh /build \
buildpack-deps:buster@sha256:d0abb4b1e5c664828b93e8b6ac84d10bce45ee469999bef88304be04a2709491 \
busybox:latest@sha256:95cf004f559831017cdf4628aaf1bb30133677be8702a8c5f2994629f637a209 \
busybox:glibc@sha256:1f81263701cddf6402afe9f33fca0266d9fff379e59b1748f33d3072da71ee85 \
debian:bookworm-slim@sha256:2bc5c236e9b262645a323e9088dfa3bb1ecb16cc75811daf40a23a824d665be9 \
debian:bullseye@sha256:7190e972ab16aefea4d758ebe42a293f4e5c5be63595f4d03a5b9bf6839a4344 \
hello-world:latest@sha256:d58e752213a51785838f9eed2b7a498ffa1cb3aa7f946dda11af39286c3db9a9 \
arm32v7/hello-world:latest@sha256:50b8560ad574c779908da71f7ce370c0a2471c098d44d1c8f6b513c5a55eeeb1 \
hello-world:amd64@sha256:90659bf80b44ce6be8234e6ff90a1ac34acbeb826903b02cfa0da11c82cbc042 \
hello-world:arm64@sha256:963612c5503f3f1674f315c67089dee577d8cc6afc18565e0b4183ae355fb343
arm32v7/hello-world:latest@sha256:50b8560ad574c779908da71f7ce370c0a2471c098d44d1c8f6b513c5a55eeeb1
# See also frozenImages in "testutil/environment/protect.go" (which needs to be updated when adding images to this list)
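As a side note, the directory written by download-frozen-image-v2.sh is meant to be streamed straight into `docker load`; a hedged sketch of how the dev container can consume it (the /docker-frozen-images path matches the COPY destination used by the dev stages below):
```
# Minimal sketch, assuming the layout written by download-frozen-image-v2.sh
# is loadable as a tar stream, as the script's own usage notes describe.
tar -cC /docker-frozen-images . | docker load
```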
# delve
FROM base AS delve-src
WORKDIR /usr/src/delve
RUN git init . && git remote add origin "https://github.com/go-delve/delve.git"
# DELVE_VERSION specifies the version of the Delve debugger binary
# from the https://github.com/go-delve/delve repository.
# It can be used to run Docker with a possibility of
# attaching debugger to it.
ARG DELVE_VERSION=v1.24.1
RUN git fetch -q --depth 1 origin "${DELVE_VERSION}" +refs/tags/*:refs/tags/* && git checkout -q FETCH_HEAD
FROM base AS cross-false
FROM base AS delve-supported
WORKDIR /usr/src/delve
ARG TARGETPLATFORM
RUN --mount=from=delve-src,src=/usr/src/delve,rw \
--mount=type=cache,target=/root/.cache/go-build,id=delve-build-$TARGETPLATFORM \
--mount=type=cache,target=/go/pkg/mod <<EOT
set -e
GO111MODULE=on xx-go build -o /build/dlv ./cmd/dlv
xx-verify /build/dlv
EOT
FROM binary-dummy AS delve-unsupported
FROM delve-${DELVE_SUPPORTED} AS delve
FROM base AS gowinres
# GOWINRES_VERSION defines go-winres tool version
ARG GOWINRES_VERSION=v0.3.1
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=cache,target=/go/pkg/mod \
GOBIN=/build/ GO111MODULE=on go install "github.com/tc-hib/go-winres@${GOWINRES_VERSION}" \
&& /build/go-winres --help
# containerd
FROM base AS containerd-src
WORKDIR /usr/src/containerd
RUN git init . && git remote add origin "https://github.com/containerd/containerd.git"
# CONTAINERD_VERSION is used to build containerd binaries, and used for the
# integration tests. The distributed docker .deb and .rpm packages depend on a
# separate (containerd.io) package, which may be a different version as is
# specified here. The containerd golang package is also pinned in vendor.mod.
# When updating the binary version you may also need to update the vendor
# version to pick up bug fixes or new APIs, however, usually the Go packages
# are built from a commit from the master branch.
ARG CONTAINERD_VERSION=v1.7.29
RUN git fetch -q --depth 1 origin "${CONTAINERD_VERSION}" +refs/tags/*:refs/tags/* && git checkout -q FETCH_HEAD
FROM base AS containerd-build
WORKDIR /go/src/github.com/containerd/containerd
ARG TARGETPLATFORM
RUN --mount=type=cache,sharing=locked,id=moby-containerd-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-containerd-aptcache,target=/var/cache/apt \
apt-get update && xx-apt-get install -y --no-install-recommends \
gcc \
pkg-config
ARG DOCKER_STATIC
RUN --mount=from=containerd-src,src=/usr/src/containerd,rw \
--mount=type=cache,target=/root/.cache/go-build,id=containerd-build-$TARGETPLATFORM <<EOT
set -e
export CC=$(xx-info)-gcc
export CGO_ENABLED=$([ "$DOCKER_STATIC" = "1" ] && echo "0" || echo "1")
xx-go --wrap
make $([ "$DOCKER_STATIC" = "1" ] && echo "STATIC=1") binaries
xx-verify $([ "$DOCKER_STATIC" = "1" ] && echo "--static") bin/containerd
xx-verify $([ "$DOCKER_STATIC" = "1" ] && echo "--static") bin/containerd-shim-runc-v2
xx-verify $([ "$DOCKER_STATIC" = "1" ] && echo "--static") bin/ctr
mkdir /build
mv bin/containerd bin/containerd-shim-runc-v2 bin/ctr /build
EOT
FROM containerd-build AS containerd-linux
FROM binary-dummy AS containerd-windows
FROM containerd-${TARGETOS} AS containerd
FROM base AS golangci_lint
ARG GOLANGCI_LINT_VERSION=v2.1.5
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=cache,target=/go/pkg/mod \
GOBIN=/build/ GO111MODULE=on go install "github.com/golangci/golangci-lint/v2/cmd/golangci-lint@${GOLANGCI_LINT_VERSION}" \
&& /build/golangci-lint --version
FROM base AS gotestsum
# GOTESTSUM_VERSION is the version of gotest.tools/gotestsum to install.
ARG GOTESTSUM_VERSION=v1.12.3
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=cache,target=/go/pkg/mod \
GOBIN=/build/ GO111MODULE=on go install "gotest.tools/gotestsum@${GOTESTSUM_VERSION}" \
&& /build/gotestsum --version
FROM base AS shfmt
ARG SHFMT_VERSION=v3.8.0
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=cache,target=/go/pkg/mod \
GOBIN=/build/ GO111MODULE=on go install "mvdan.cc/sh/v3/cmd/shfmt@${SHFMT_VERSION}" \
&& /build/shfmt --version
FROM base AS gopls
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=cache,target=/go/pkg/mod \
GOBIN=/build/ GO111MODULE=on go install "golang.org/x/tools/gopls@latest" \
&& /build/gopls version
FROM base AS dockercli
WORKDIR /go/src/github.com/docker/cli
ARG DOCKERCLI_REPOSITORY
ARG DOCKERCLI_VERSION
ARG TARGETPLATFORM
RUN --mount=source=hack/dockerfile/cli.sh,target=/download-or-build-cli.sh \
--mount=type=cache,id=dockercli-git-$TARGETPLATFORM,sharing=locked,target=./.git \
--mount=type=cache,target=/root/.cache/go-build,id=dockercli-build-$TARGETPLATFORM \
rm -f ./.git/*.lock \
&& /download-or-build-cli.sh ${DOCKERCLI_VERSION} ${DOCKERCLI_REPOSITORY} /build \
&& /build/docker --version \
&& /build/docker completion bash >/completion.bash
FROM base AS dockercli-integration
WORKDIR /go/src/github.com/docker/cli
ARG DOCKERCLI_INTEGRATION_REPOSITORY
ARG DOCKERCLI_INTEGRATION_VERSION
ARG TARGETPLATFORM
RUN --mount=source=hack/dockerfile/cli.sh,target=/download-or-build-cli.sh \
--mount=type=cache,id=dockercli-git-$TARGETPLATFORM,sharing=locked,target=./.git \
--mount=type=cache,target=/root/.cache/go-build,id=dockercli-build-$TARGETPLATFORM \
rm -f ./.git/*.lock \
&& /download-or-build-cli.sh ${DOCKERCLI_INTEGRATION_VERSION} ${DOCKERCLI_INTEGRATION_REPOSITORY} /build \
&& /build/docker --version
# runc
FROM base AS runc-src
WORKDIR /usr/src/runc
RUN git init . && git remote add origin "https://github.com/opencontainers/runc.git"
# RUNC_VERSION should match the version that is used by the containerd version
# that is used. If you need to update runc, open a pull request in the containerd
# project first, and update both after that is merged.
ARG RUNC_VERSION=v1.3.3
RUN git fetch -q --depth 1 origin "${RUNC_VERSION}" +refs/tags/*:refs/tags/* && git checkout -q FETCH_HEAD
FROM base AS runc-build
WORKDIR /go/src/github.com/opencontainers/runc
ARG TARGETPLATFORM
RUN --mount=type=cache,sharing=locked,id=moby-runc-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-runc-aptcache,target=/var/cache/apt \
apt-get update && xx-apt-get install -y --no-install-recommends \
gcc \
libc6-dev \
libseccomp-dev \
pkg-config
ARG DOCKER_STATIC
RUN --mount=from=runc-src,src=/usr/src/runc,rw \
--mount=type=cache,target=/root/.cache/go-build,id=runc-build-$TARGETPLATFORM <<EOT
set -e
xx-go --wrap
CGO_ENABLED=1 make "$([ "$DOCKER_STATIC" = "1" ] && echo "static" || echo "runc")"
xx-verify $([ "$DOCKER_STATIC" = "1" ] && echo "--static") runc
mkdir /build
mv runc /build/
EOT
FROM runc-build AS runc-linux
FROM binary-dummy AS runc-windows
FROM runc-${TARGETOS} AS runc
# tini
FROM base AS tini-src
WORKDIR /usr/src/tini
RUN git init . && git remote add origin "https://github.com/krallin/tini.git"
# TINI_VERSION specifies the version of tini (docker-init) to build. This
# binary is used when starting containers with the `--init` option.
ARG TINI_VERSION=v0.19.0
RUN git fetch -q --depth 1 origin "${TINI_VERSION}" +refs/tags/*:refs/tags/* && git checkout -q FETCH_HEAD
FROM base AS tini-build
WORKDIR /go/src/github.com/krallin/tini
RUN --mount=type=cache,sharing=locked,id=moby-tini-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-tini-aptcache,target=/var/cache/apt \
apt-get update && apt-get install -y --no-install-recommends cmake
ARG TARGETPLATFORM
RUN --mount=type=cache,sharing=locked,id=moby-tini-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-tini-aptcache,target=/var/cache/apt \
xx-apt-get install -y --no-install-recommends \
gcc \
libc6-dev \
pkg-config
RUN --mount=from=tini-src,src=/usr/src/tini,rw \
--mount=type=cache,target=/root/.cache/go-build,id=tini-build-$TARGETPLATFORM <<EOT
set -e
CC=$(xx-info)-gcc cmake .
make tini-static
xx-verify --static tini-static
mkdir /build
mv tini-static /build/docker-init
EOT
FROM tini-build AS tini-linux
FROM binary-dummy AS tini-windows
FROM tini-${TARGETOS} AS tini
# rootlesskit
FROM base AS rootlesskit-src
WORKDIR /usr/src/rootlesskit
RUN git init . && git remote add origin "https://github.com/rootless-containers/rootlesskit.git"
# When updating, also update vendor.mod and hack/dockerfile/install/rootlesskit.installer accordingly.
ARG ROOTLESSKIT_VERSION=v2.3.4
RUN git fetch -q --depth 1 origin "${ROOTLESSKIT_VERSION}" +refs/tags/*:refs/tags/* && git checkout -q FETCH_HEAD
FROM base AS rootlesskit-build
WORKDIR /go/src/github.com/rootless-containers/rootlesskit
ARG TARGETPLATFORM
RUN --mount=type=cache,sharing=locked,id=moby-rootlesskit-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-rootlesskit-aptcache,target=/var/cache/apt \
apt-get update && xx-apt-get install -y --no-install-recommends \
gcc \
libc6-dev \
pkg-config
ENV GO111MODULE=on
ARG DOCKER_STATIC
RUN --mount=from=rootlesskit-src,src=/usr/src/rootlesskit,rw \
--mount=type=cache,target=/go/pkg/mod \
--mount=type=cache,target=/root/.cache/go-build,id=rootlesskit-build-$TARGETPLATFORM <<EOT
set -e
export CGO_ENABLED=$([ "$DOCKER_STATIC" = "1" ] && echo "0" || echo "1")
xx-go build -o /build/rootlesskit -ldflags="$([ "$DOCKER_STATIC" != "1" ] && echo "-linkmode=external")" ./cmd/rootlesskit
xx-verify $([ "$DOCKER_STATIC" = "1" ] && echo "--static") /build/rootlesskit
EOT
COPY --link ./contrib/dockerd-rootless.sh /build/
COPY --link ./contrib/dockerd-rootless-setuptool.sh /build/
FROM rootlesskit-build AS rootlesskit-linux
FROM binary-dummy AS rootlesskit-windows
FROM rootlesskit-${TARGETOS} AS rootlesskit
FROM base AS crun
# CRUN_VERSION is the version of crun to install in the dev-container.
ARG CRUN_VERSION=1.21
RUN --mount=type=cache,sharing=locked,id=moby-crun-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-crun-aptcache,target=/var/cache/apt \
FROM --platform=linux/amd64 base AS cross-true
ARG DEBIAN_FRONTEND
RUN dpkg --add-architecture arm64
RUN dpkg --add-architecture armel
RUN dpkg --add-architecture armhf
RUN --mount=type=cache,sharing=locked,id=moby-cross-true-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-cross-true-aptcache,target=/var/cache/apt \
apt-get update && apt-get install -y --no-install-recommends \
autoconf \
automake \
build-essential \
libcap-dev \
libprotobuf-c-dev \
crossbuild-essential-arm64 \
crossbuild-essential-armel \
crossbuild-essential-armhf
FROM cross-${CROSS} as dev-base
FROM dev-base AS runtime-dev-cross-false
ARG DEBIAN_FRONTEND
RUN --mount=type=cache,sharing=locked,id=moby-cross-false-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-cross-false-aptcache,target=/var/cache/apt \
apt-get update && apt-get install -y --no-install-recommends \
binutils-mingw-w64 \
g++-mingw-w64-x86-64 \
libapparmor-dev \
libbtrfs-dev \
libdevmapper-dev \
libseccomp-dev \
libsystemd-dev \
libtool \
libyajl-dev \
python3 \
;
RUN --mount=type=tmpfs,target=/tmp/crun-build \
git clone https://github.com/containers/crun.git /tmp/crun-build && \
cd /tmp/crun-build && \
git checkout -q "${CRUN_VERSION}" && \
./autogen.sh && \
./configure --bindir=/build && \
make -j install
libudev-dev
# vpnkit
# use dummy scratch stage to avoid build to fail for unsupported platforms
FROM scratch AS vpnkit-windows
FROM scratch AS vpnkit-linux-386
FROM scratch AS vpnkit-linux-arm
FROM scratch AS vpnkit-linux-ppc64le
FROM scratch AS vpnkit-linux-riscv64
FROM scratch AS vpnkit-linux-s390x
FROM moby/vpnkit-bin:${VPNKIT_VERSION} AS vpnkit-linux-amd64
FROM moby/vpnkit-bin:${VPNKIT_VERSION} AS vpnkit-linux-arm64
FROM vpnkit-linux-${TARGETARCH} AS vpnkit-linux
FROM vpnkit-${TARGETOS} AS vpnkit
FROM --platform=linux/amd64 runtime-dev-cross-false AS runtime-dev-cross-true
ARG DEBIAN_FRONTEND
# These crossbuild packages rely on gcc-<arch>, which fails to install
# on non-amd64 systems.
# Additionally, crossbuild-amd64 is currently only available on debian:buster, so
# other architectures cannot crossbuild amd64.
RUN --mount=type=cache,sharing=locked,id=moby-cross-true-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-cross-true-aptcache,target=/var/cache/apt \
apt-get update && apt-get install -y --no-install-recommends \
libapparmor-dev:arm64 \
libapparmor-dev:armel \
libapparmor-dev:armhf \
libseccomp-dev:arm64 \
libseccomp-dev:armel \
libseccomp-dev:armhf
# containerutility
FROM base AS containerutil-src
WORKDIR /usr/src/containerutil
RUN git init . && git remote add origin "https://github.com/docker-archive/windows-container-utility.git"
ARG CONTAINERUTILITY_VERSION=aa1ba87e99b68e0113bd27ec26c60b88f9d4ccd9
RUN git fetch -q --depth 1 origin "${CONTAINERUTILITY_VERSION}" +refs/tags/*:refs/tags/* && git checkout -q FETCH_HEAD
FROM runtime-dev-cross-${CROSS} AS runtime-dev
FROM base AS containerutil-build
WORKDIR /usr/src/containerutil
ARG TARGETPLATFORM
RUN xx-apt-get install -y --no-install-recommends \
gcc \
g++ \
libc6-dev \
pkg-config
RUN --mount=from=containerutil-src,src=/usr/src/containerutil,rw \
--mount=type=cache,target=/root/.cache/go-build,id=containerutil-build-$TARGETPLATFORM <<EOT
set -e
CC="$(xx-info)-gcc" CXX="$(xx-info)-g++" make
xx-verify --static containerutility.exe
mkdir /build
mv containerutility.exe /build/
EOT
FROM base AS tomlv
ARG TOMLV_COMMIT
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=cache,target=/go/pkg/mod \
--mount=type=bind,src=hack/dockerfile/install,target=/tmp/install \
PREFIX=/build /tmp/install/install.sh tomlv
FROM binary-dummy AS containerutil-linux
FROM containerutil-build AS containerutil-windows-amd64
FROM containerutil-windows-${TARGETARCH} AS containerutil-windows
FROM containerutil-${TARGETOS} AS containerutil
FROM docker/buildx-bin:${BUILDX_VERSION} AS buildx
FROM docker/compose-bin:${COMPOSE_VERSION} AS compose
FROM base AS vndr
ARG VNDR_COMMIT
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=cache,target=/go/pkg/mod \
--mount=type=bind,src=hack/dockerfile/install,target=/tmp/install \
PREFIX=/build /tmp/install/install.sh vndr
FROM base AS dev-systemd-false
COPY --link --from=frozen-images /build/ /docker-frozen-images
COPY --link --from=swagger /build/ /usr/local/bin/
COPY --link --from=delve /build/ /usr/local/bin/
COPY --link --from=gowinres /build/ /usr/local/bin/
COPY --link --from=tini /build/ /usr/local/bin/
COPY --link --from=registry /build/ /usr/local/bin/
FROM dev-base AS containerd
ARG DEBIAN_FRONTEND
RUN --mount=type=cache,sharing=locked,id=moby-containerd-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-containerd-aptcache,target=/var/cache/apt \
apt-get update && apt-get install -y --no-install-recommends \
libbtrfs-dev
ARG CONTAINERD_COMMIT
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=cache,target=/go/pkg/mod \
--mount=type=bind,src=hack/dockerfile/install,target=/tmp/install \
PREFIX=/build /tmp/install/install.sh containerd
# Skip the CRIU stage for now, as the opensuse package repository is sometimes
# unstable, and we're currently not using it in CI.
#
# FIXME(thaJeztah): re-enable this stage when https://github.com/moby/moby/issues/38963 is resolved (see https://github.com/moby/moby/pull/38984)
# COPY --link --from=criu /build/ /usr/local/bin/
COPY --link --from=gotestsum /build/ /usr/local/bin/
COPY --link --from=golangci_lint /build/ /usr/local/bin/
COPY --link --from=shfmt /build/ /usr/local/bin/
COPY --link --from=runc /build/ /usr/local/bin/
COPY --link --from=containerd /build/ /usr/local/bin/
COPY --link --from=rootlesskit /build/ /usr/local/bin/
COPY --link --from=vpnkit / /usr/local/bin/
COPY --link --from=containerutil /build/ /usr/local/bin/
COPY --link --from=crun /build/ /usr/local/bin/
COPY --link hack/dockerfile/etc/docker/ /etc/docker/
COPY --link --from=buildx /buildx /usr/local/libexec/docker/cli-plugins/docker-buildx
COPY --link --from=compose /docker-compose /usr/libexec/docker/cli-plugins/docker-compose
FROM dev-base AS proxy
ARG LIBNETWORK_COMMIT
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=cache,target=/go/pkg/mod \
--mount=type=bind,src=hack/dockerfile/install,target=/tmp/install \
PREFIX=/build /tmp/install/install.sh proxy
FROM base AS golangci_lint
ARG GOLANGCI_LINT_COMMIT
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=cache,target=/go/pkg/mod \
--mount=type=bind,src=hack/dockerfile/install,target=/tmp/install \
PREFIX=/build /tmp/install/install.sh golangci_lint
FROM base AS gotestsum
ARG GOTESTSUM_COMMIT
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=cache,target=/go/pkg/mod \
--mount=type=bind,src=hack/dockerfile/install,target=/tmp/install \
PREFIX=/build /tmp/install/install.sh gotestsum
FROM base AS shfmt
ARG SHFMT_COMMIT
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=cache,target=/go/pkg/mod \
--mount=type=bind,src=hack/dockerfile/install,target=/tmp/install \
PREFIX=/build /tmp/install/install.sh shfmt
FROM dev-base AS dockercli
ARG DOCKERCLI_CHANNEL
ARG DOCKERCLI_VERSION
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=cache,target=/go/pkg/mod \
--mount=type=bind,src=hack/dockerfile/install,target=/tmp/install \
PREFIX=/build /tmp/install/install.sh dockercli
FROM runtime-dev AS runc
ARG RUNC_COMMIT
ARG RUNC_BUILDTAGS
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=cache,target=/go/pkg/mod \
--mount=type=bind,src=hack/dockerfile/install,target=/tmp/install \
PREFIX=/build /tmp/install/install.sh runc
FROM dev-base AS tini
ARG DEBIAN_FRONTEND
ARG TINI_COMMIT
RUN --mount=type=cache,sharing=locked,id=moby-tini-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-tini-aptcache,target=/var/cache/apt \
apt-get update && apt-get install -y --no-install-recommends \
cmake \
vim-common
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=cache,target=/go/pkg/mod \
--mount=type=bind,src=hack/dockerfile/install,target=/tmp/install \
PREFIX=/build /tmp/install/install.sh tini
FROM dev-base AS rootlesskit
ARG ROOTLESSKIT_COMMIT
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=cache,target=/go/pkg/mod \
--mount=type=bind,src=hack/dockerfile/install,target=/tmp/install \
PREFIX=/build /tmp/install/install.sh rootlesskit
COPY ./contrib/dockerd-rootless.sh /build
COPY ./contrib/dockerd-rootless-setuptool.sh /build
FROM djs55/vpnkit:${VPNKIT_VERSION} AS vpnkit
# TODO: Some of this is only really needed for testing, it would be nice to split this up
FROM runtime-dev AS dev-systemd-false
ARG DEBIAN_FRONTEND
RUN groupadd -r docker
RUN useradd --create-home --gid docker unprivilegeduser \
&& mkdir -p /home/unprivilegeduser/.local/share/docker \
&& chown -R unprivilegeduser /home/unprivilegeduser
# Let us use a .bashrc file
RUN ln -sfv /go/src/github.com/docker/docker/.bashrc ~/.bashrc
# Activate bash completion and include Docker's completion if mounted with DOCKER_BASH_COMPLETION_PATH
RUN echo "source /usr/share/bash-completion/bash_completion" >> /etc/bash.bashrc
RUN ln -s /usr/local/completion/bash/docker /etc/bash_completion.d/docker
RUN ldconfig
# This should only install packages that are specifically needed for the dev environment and nothing else
# Do you really need to add another package here? Can it be done in a different build stage?
RUN --mount=type=cache,sharing=locked,id=moby-dev-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-dev-aptcache,target=/var/cache/apt \
apt-get update && apt-get install -y --no-install-recommends \
apparmor \
aufs-tools \
bash-completion \
bzip2 \
iptables \
jq \
libcap2-bin \
libnet1 \
libnl-3-200 \
libprotobuf-c1 \
net-tools \
pigz \
python3-pip \
python3-setuptools \
python3-wheel \
sudo \
thin-provisioning-tools \
uidmap \
vim \
vim-common \
xfsprogs \
xz-utils \
zip
# Switch to use iptables instead of nftables (to match the CI hosts)
# TODO use some kind of runtime auto-detection instead if/when nftables is supported (https://github.com/moby/moby/issues/26824)
RUN update-alternatives --set iptables /usr/sbin/iptables-legacy || true \
&& update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy || true \
&& update-alternatives --set arptables /usr/sbin/arptables-legacy || true
RUN pip3 install yamllint==1.16.0
COPY --from=dockercli /build/ /usr/local/cli
COPY --from=frozen-images /build/ /docker-frozen-images
COPY --from=swagger /build/ /usr/local/bin/
COPY --from=tomlv /build/ /usr/local/bin/
COPY --from=tini /build/ /usr/local/bin/
COPY --from=registry /build/ /usr/local/bin/
COPY --from=criu /build/ /usr/local/
COPY --from=vndr /build/ /usr/local/bin/
COPY --from=gotestsum /build/ /usr/local/bin/
COPY --from=golangci_lint /build/ /usr/local/bin/
COPY --from=shfmt /build/ /usr/local/bin/
COPY --from=runc /build/ /usr/local/bin/
COPY --from=containerd /build/ /usr/local/bin/
COPY --from=rootlesskit /build/ /usr/local/bin/
COPY --from=vpnkit /vpnkit /usr/local/bin/vpnkit.x86_64
COPY --from=proxy /build/ /usr/local/bin/
ENV PATH=/usr/local/cli:$PATH
ENV TEST_CLIENT_BINARY=/usr/local/cli-integration/docker
ENV CONTAINERD_ADDRESS=/run/docker/containerd/containerd.sock
ENV CONTAINERD_NAMESPACE=moby
ARG DOCKER_BUILDTAGS
ENV DOCKER_BUILDTAGS="${DOCKER_BUILDTAGS}"
WORKDIR /go/src/github.com/docker/docker
VOLUME /var/lib/docker
VOLUME /home/unprivilegeduser/.local/share/docker
@@ -471,176 +327,63 @@ RUN --mount=type=cache,sharing=locked,id=moby-dev-aptlib,target=/var/lib/apt \
dbus-user-session \
systemd \
systemd-sysv
RUN mkdir -p hack \
&& curl -o hack/dind-systemd https://raw.githubusercontent.com/AkihiroSuda/containerized-systemd/b70bac0daeea120456764248164c21684ade7d0d/docker-entrypoint.sh \
&& chmod +x hack/dind-systemd
ENTRYPOINT ["hack/dind-systemd"]
FROM dev-systemd-${SYSTEMD} AS dev-firewalld-false
FROM dev-systemd-${SYSTEMD} AS dev
FROM dev-systemd-true AS dev-firewalld-true
RUN --mount=type=cache,sharing=locked,id=moby-dev-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-dev-aptcache,target=/var/cache/apt \
apt-get update && apt-get install -y --no-install-recommends \
firewalld
FROM dev-firewalld-${FIREWALLD} AS dev-base
RUN groupadd -r docker
RUN useradd --create-home --gid docker unprivilegeduser \
&& mkdir -p /home/unprivilegeduser/.local/share/docker \
&& chown -R unprivilegeduser /home/unprivilegeduser
# Let us use a .bashrc file
RUN ln -sfv /go/src/github.com/docker/docker/.bashrc ~/.bashrc
# Activate bash completion
RUN echo "source /usr/share/bash-completion/bash_completion" >> /etc/bash.bashrc
RUN ldconfig
# Set dev environment as safe git directory to prevent "dubious ownership" errors
# when bind-mounting the source into the dev-container. See https://github.com/moby/moby/pull/44930
RUN git config --global --add safe.directory $GOPATH/src/github.com/docker/docker
# This should only install packages that are specifically needed for the dev environment and nothing else
# Do you really need to add another package here? Can it be done in a different build stage?
RUN --mount=type=cache,sharing=locked,id=moby-dev-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-dev-aptcache,target=/var/cache/apt \
apt-get update && apt-get install -y --no-install-recommends \
apparmor \
bash-completion \
bzip2 \
inetutils-ping \
iproute2 \
iptables \
nftables \
jq \
libcap2-bin \
libnet1 \
libnl-3-200 \
libprotobuf-c1 \
libyajl2 \
nano \
net-tools \
netcat-openbsd \
patch \
pigz \
sudo \
systemd-journal-remote \
thin-provisioning-tools \
uidmap \
vim \
vim-common \
xfsprogs \
xz-utils \
zip \
zstd
RUN --mount=type=cache,sharing=locked,id=moby-dev-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-dev-aptcache,target=/var/cache/apt \
apt-get update && apt-get install --no-install-recommends -y \
gcc \
pkg-config \
libseccomp-dev \
libsystemd-dev \
yamllint
COPY --link --from=dockercli /build/ /usr/local/cli
COPY --link --from=dockercli /completion.bash /etc/bash_completion.d/docker
COPY --link --from=dockercli-integration /build/ /usr/local/cli-integration
FROM base AS build
COPY --from=gowinres /build/ /usr/local/bin/
WORKDIR /go/src/github.com/docker/docker
ENV GO111MODULE=off
ENV CGO_ENABLED=1
RUN --mount=type=cache,sharing=locked,id=moby-build-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-build-aptcache,target=/var/cache/apt \
apt-get update && apt-get install --no-install-recommends -y \
clang \
lld \
llvm
ARG TARGETPLATFORM
RUN --mount=type=cache,sharing=locked,id=moby-build-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-build-aptcache,target=/var/cache/apt \
xx-apt-get install --no-install-recommends -y \
gcc \
libc6-dev \
libseccomp-dev \
libsystemd-dev \
pkg-config
ARG DOCKER_BUILDTAGS
ARG DOCKER_DEBUG
FROM runtime-dev AS binary-base
ARG DOCKER_GITCOMMIT=HEAD
ARG DOCKER_LDFLAGS
ARG DOCKER_STATIC
ENV DOCKER_GITCOMMIT=${DOCKER_GITCOMMIT}
ARG VERSION
ENV VERSION=${VERSION}
ARG PLATFORM
ENV PLATFORM=${PLATFORM}
ARG PRODUCT
ENV PRODUCT=${PRODUCT}
ARG DEFAULT_PRODUCT_LICENSE
ARG PACKAGER_NAME
# PREFIX overrides the DEST dir in the make.sh script; otherwise the build fails
# because of the read-only mount in the current work dir
ENV PREFIX=/tmp
RUN <<EOT
# in bullseye arm64 target does not link with lld so configure it to use ld instead
if [ "$(xx-info arch)" = "arm64" ]; then
XX_CC_PREFER_LINKER=ld xx-clang --setup-target-triple
fi
EOT
RUN --mount=type=bind,target=.,rw \
--mount=type=tmpfs,target=cli/winresources/dockerd \
--mount=type=cache,target=/root/.cache/go-build,id=moby-build-$TARGETPLATFORM <<EOT
set -e
target=$([ "$DOCKER_STATIC" = "1" ] && echo "binary" || echo "dynbinary")
xx-go --wrap
PKG_CONFIG=$(xx-go env PKG_CONFIG) ./hack/make.sh $target
xx-verify $([ "$DOCKER_STATIC" = "1" ] && echo "--static") /tmp/bundles/${target}-daemon/dockerd$([ "$(xx-info os)" = "windows" ] && echo ".exe")
[ "$(xx-info os)" != "linux" ] || xx-verify $([ "$DOCKER_STATIC" = "1" ] && echo "--static") /tmp/bundles/${target}-daemon/docker-proxy
mkdir /build
mv /tmp/bundles/${target}-daemon/* /build/
EOT
ENV DEFAULT_PRODUCT_LICENSE=${DEFAULT_PRODUCT_LICENSE}
ARG DOCKER_BUILDTAGS
ENV DOCKER_BUILDTAGS="${DOCKER_BUILDTAGS}"
ENV PREFIX=/build
# TODO: This is here because hack/make.sh binary copies these extra binaries
# from $PATH into the bundles dir.
# It would be nice to handle this in a different way.
COPY --from=tini /build/ /usr/local/bin/
COPY --from=runc /build/ /usr/local/bin/
COPY --from=containerd /build/ /usr/local/bin/
COPY --from=rootlesskit /build/ /usr/local/bin/
COPY --from=proxy /build/ /usr/local/bin/
COPY --from=vpnkit /vpnkit /usr/local/bin/vpnkit.x86_64
WORKDIR /go/src/github.com/docker/docker
FROM binary-base AS build-binary
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=bind,target=/go/src/github.com/docker/docker \
hack/make.sh binary
FROM binary-base AS build-dynbinary
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=bind,target=/go/src/github.com/docker/docker \
hack/make.sh dynbinary
FROM binary-base AS build-cross
ARG DOCKER_CROSSPLATFORMS
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=bind,target=/go/src/github.com/docker/docker \
--mount=type=tmpfs,target=/go/src/github.com/docker/docker/autogen \
hack/make.sh cross
# usage:
# > docker buildx bake binary
# > DOCKER_STATIC=0 docker buildx bake binary
# or
# > make binary
# > make dynbinary
FROM scratch AS binary
COPY --from=build /build/ /
COPY --from=build-binary /build/bundles/ /
# usage:
# > docker buildx bake all
FROM scratch AS all
COPY --link --from=tini /build/ /
COPY --link --from=runc /build/ /
COPY --link --from=containerd /build/ /
COPY --link --from=rootlesskit /build/ /
COPY --link --from=containerutil /build/ /
COPY --link --from=vpnkit / /
COPY --link --from=build /build /
FROM scratch AS dynbinary
COPY --from=build-dynbinary /build/bundles/ /
# smoke tests
# usage:
# > docker buildx bake binary-smoketest
FROM base AS smoketest
WORKDIR /usr/local/bin
COPY --from=build /build .
RUN <<EOT
set -ex
file dockerd
dockerd --version
file docker-proxy
docker-proxy --version
EOT
FROM scratch AS cross
COPY --from=build-cross /build/bundles/ /
# devcontainer is a stage used by .devcontainer/devcontainer.json
FROM dev-base AS devcontainer
COPY --link . .
COPY --link --from=gopls /build/ /usr/local/bin/
# usage:
# > docker buildx bake dind
# > docker run -d --restart always --privileged --name devdind -p 12375:2375 docker-dind --debug --host=tcp://0.0.0.0:2375 --tlsverify=false
FROM docker:dind AS dind
COPY --link --from=dockercli /build/docker /usr/local/bin/
COPY --link --from=buildx /buildx /usr/local/libexec/docker/cli-plugins/docker-buildx
COPY --link --from=compose /docker-compose /usr/local/libexec/docker/cli-plugins/docker-compose
COPY --link --from=all / /usr/local/bin/
# usage:
# > make shell
# > SYSTEMD=true make shell
FROM dev-base AS dev
COPY --link . .
FROM dev AS final
COPY . /go/src/github.com/docker/docker

Dockerfile.buildx: new file, 26 lines

@@ -0,0 +1,26 @@
ARG GO_VERSION=1.13.15
ARG BUILDX_COMMIT=v0.5.1
ARG BUILDX_REPO=https://github.com/docker/buildx.git
FROM golang:${GO_VERSION}-buster AS build
ARG BUILDX_REPO
RUN git clone "${BUILDX_REPO}" /buildx
WORKDIR /buildx
ARG BUILDX_COMMIT
RUN git fetch origin "${BUILDX_COMMIT}":build && git checkout build
ARG GOOS
ARG GOARCH
# Keep these essentially no-op var settings for debug purposes.
# They let us see which GOOS/GOARCH the build is targeting.
RUN GOOS="${GOOS}" GOARCH="${GOARCH}" BUILDX_COMMIT="${BUILDX_COMMIT}"; \
pkg="github.com/docker/buildx"; \
ldflags="\
-X \"${pkg}/version.Version=$(git describe --tags)\" \
-X \"${pkg}/version.Revision=$(git rev-parse --short HEAD)\" \
-X \"${pkg}/version.Package=buildx\" \
"; \
go build -mod=vendor -ldflags "${ldflags}" -o /usr/bin/buildx ./cmd/buildx
FROM golang:${GO_VERSION}-buster
COPY --from=build /usr/bin/buildx /usr/bin/buildx
ENTRYPOINT ["/usr/bin/buildx"]
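Because the resulting image's entrypoint is the buildx binary itself, a quick smoke test might look like the following (the image tag is illustrative):
```
# Build the buildx image from this Dockerfile and print the version it embeds.
docker build -f Dockerfile.buildx -t moby-buildx .
docker run --rm moby-buildx version
```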

Dockerfile.e2e: new file, 84 lines

@@ -0,0 +1,84 @@
ARG GO_VERSION=1.13.15
FROM golang:${GO_VERSION}-alpine AS base
ENV GO111MODULE=off
RUN apk --no-cache add \
bash \
btrfs-progs-dev \
build-base \
curl \
lvm2-dev \
jq
RUN mkdir -p /build/
RUN mkdir -p /go/src/github.com/docker/docker/
WORKDIR /go/src/github.com/docker/docker/
FROM base AS frozen-images
# Get useful and necessary Hub images so we can "docker load" locally instead of pulling
COPY contrib/download-frozen-image-v2.sh /
RUN /download-frozen-image-v2.sh /build \
buildpack-deps:buster@sha256:d0abb4b1e5c664828b93e8b6ac84d10bce45ee469999bef88304be04a2709491 \
busybox:latest@sha256:95cf004f559831017cdf4628aaf1bb30133677be8702a8c5f2994629f637a209 \
busybox:glibc@sha256:1f81263701cddf6402afe9f33fca0266d9fff379e59b1748f33d3072da71ee85 \
debian:bullseye@sha256:7190e972ab16aefea4d758ebe42a293f4e5c5be63595f4d03a5b9bf6839a4344 \
hello-world:latest@sha256:d58e752213a51785838f9eed2b7a498ffa1cb3aa7f946dda11af39286c3db9a9
# See also frozenImages in "testutil/environment/protect.go" (which needs to be updated when adding images to this list)
FROM base AS dockercli
ENV INSTALL_BINARY_NAME=dockercli
COPY hack/dockerfile/install/install.sh ./install.sh
COPY hack/dockerfile/install/$INSTALL_BINARY_NAME.installer ./
RUN PREFIX=/build ./install.sh $INSTALL_BINARY_NAME
# Build DockerSuite.TestBuild* dependency
FROM base AS contrib
COPY contrib/syscall-test /build/syscall-test
COPY contrib/httpserver/Dockerfile /build/httpserver/Dockerfile
COPY contrib/httpserver contrib/httpserver
RUN CGO_ENABLED=0 go build -buildmode=pie -o /build/httpserver/httpserver github.com/docker/docker/contrib/httpserver
# Build the integration tests and copy the resulting binaries to /build/tests
FROM base AS builder
# Set tag and add sources
COPY . .
# Copy test sources so that tests that use assert can print errors
RUN mkdir -p /build${PWD} && find integration integration-cli -name \*_test.go -exec cp --parents '{}' /build${PWD} \;
# Build and install test binaries
ARG DOCKER_GITCOMMIT=undefined
RUN hack/make.sh build-integration-test-binary
RUN mkdir -p /build/tests && find . -name test.main -exec cp --parents '{}' /build/tests \;
## Generate testing image
FROM alpine:3.10 as runner
ENV DOCKER_REMOTE_DAEMON=1
ENV DOCKER_INTEGRATION_DAEMON_DEST=/
ENTRYPOINT ["/scripts/run.sh"]
# Add an unprivileged user to be used for tests which need it
RUN addgroup docker && adduser -D -G docker unprivilegeduser -s /bin/ash
# GNU tar is used for generating the emptyfs image
RUN apk --no-cache add \
bash \
ca-certificates \
g++ \
git \
iptables \
pigz \
tar \
xz
COPY hack/test/e2e-run.sh /scripts/run.sh
COPY hack/make/.ensure-emptyfs /scripts/ensure-emptyfs.sh
COPY integration/testdata /tests/integration/testdata
COPY integration/build/testdata /tests/integration/build/testdata
COPY integration-cli/fixtures /tests/integration-cli/fixtures
COPY --from=frozen-images /build/ /docker-frozen-images
COPY --from=dockercli /build/ /usr/bin/
COPY --from=contrib /build/ /tests/contrib/
COPY --from=builder /build/ /

Dockerfile.simple

@@ -5,14 +5,14 @@
# This represents the bare minimum required to build and test Docker.
ARG GO_VERSION=1.24.9
ARG GO_VERSION=1.13.15
ARG BASE_DEBIAN_DISTRO="bookworm"
ARG GOLANG_IMAGE="golang:${GO_VERSION}-${BASE_DEBIAN_DISTRO}"
FROM ${GOLANG_IMAGE}
FROM golang:${GO_VERSION}-buster
ENV GO111MODULE=off
ENV GOTOOLCHAIN=local
# allow replacing httpredir or deb mirror
ARG APT_MIRROR=deb.debian.org
RUN sed -ri "s/(httpredir|deb).debian.org/$APT_MIRROR/g" /etc/apt/sources.list
# Compile and runtime deps
# https://github.com/docker/docker/blob/master/project/PACKAGERS.md#build-dependencies
@@ -21,7 +21,11 @@ RUN apt-get update && apt-get install -y --no-install-recommends \
build-essential \
curl \
cmake \
gcc \
git \
libapparmor-dev \
libbtrfs-dev \
libdevmapper-dev \
libseccomp-dev \
ca-certificates \
e2fsprogs \
@@ -32,13 +36,14 @@ RUN apt-get update && apt-get install -y --no-install-recommends \
xfsprogs \
xz-utils \
\
aufs-tools \
vim-common \
&& rm -rf /var/lib/apt/lists/*
# Install runc, containerd, and tini
# Install runc, containerd, tini and docker-proxy
# Please edit hack/dockerfile/install/<name>.installer to update them.
COPY hack/dockerfile/install hack/dockerfile/install
RUN set -e; for i in runc containerd tini dockercli; \
RUN for i in runc containerd tini proxy dockercli; \
do hack/dockerfile/install/install.sh $i; \
done
ENV PATH=/usr/local/cli:$PATH

Dockerfile.windows

@@ -154,35 +154,29 @@
# The number of build steps below are explicitly minimised to improve performance.
ARG WINDOWS_BASE_IMAGE=mcr.microsoft.com/windows/servercore
ARG WINDOWS_BASE_IMAGE_TAG=ltsc2022
FROM ${WINDOWS_BASE_IMAGE}:${WINDOWS_BASE_IMAGE_TAG}
# Extremely important - do not change the following line to reference a "specific" image,
# such as `mcr.microsoft.com/windows/servercore:ltsc2019`. If using this Dockerfile in process
# isolated containers, the kernel of the host must match the container image, and hence
# would fail between Windows Server 2016 (aka RS1) and Windows Server 2019 (aka RS5).
# It is expected that the image `microsoft/windowsservercore:latest` is present, and matches
# the host's kernel version before doing a build.
FROM microsoft/windowsservercore
# Use PowerShell as the default shell
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';"]
ARG GO_VERSION=1.24.9
# GOTESTSUM_VERSION is the version of gotest.tools/gotestsum to install.
ARG GOTESTSUM_VERSION=v1.12.3
# GOWINRES_VERSION is the version of go-winres to install.
ARG GOWINRES_VERSION=v0.3.3
ARG CONTAINERD_VERSION=v1.7.29
ARG GO_VERSION=1.13.15
ARG GOTESTSUM_COMMIT=v0.5.3
# Environment variable notes:
# - GO_VERSION must be consistent with 'Dockerfile' used by Linux.
# - CONTAINERD_VERSION must be consistent with 'hack/dockerfile/install/containerd.installer' used by Linux.
# - FROM_DOCKERFILE is used for detection of building within a container.
ENV GO_VERSION=${GO_VERSION} `
CONTAINERD_VERSION=${CONTAINERD_VERSION} `
GIT_VERSION=2.11.1 `
GOPATH=C:\gopath `
GO111MODULE=off `
GOTOOLCHAIN=local `
FROM_DOCKERFILE=1 `
GOTESTSUM_VERSION=${GOTESTSUM_VERSION} `
GOWINRES_VERSION=${GOWINRES_VERSION}
GOTESTSUM_COMMIT=${GOTESTSUM_COMMIT}
RUN `
Function Test-Nano() { `
@@ -217,15 +211,15 @@ RUN `
} `
} `
`
setx /M PATH $('C:\git\cmd;C:\git\usr\bin;'+$Env:PATH+';C:\gcc\bin;C:\go\bin;C:\containerd\bin'); `
setx /M PATH $('C:\git\cmd;C:\git\usr\bin;'+$Env:PATH+';C:\gcc\bin;C:\go\bin'); `
`
Write-Host INFO: Downloading git...; `
$location='https://www.nuget.org/api/v2/package/GitForWindows/'+$Env:GIT_VERSION; `
Download-File $location C:\gitsetup.zip; `
`
Write-Host INFO: Downloading go...; `
$dlGoVersion=$Env:GO_VERSION; `
Download-File "https://go.dev/dl/go${dlGoVersion}.windows-amd64.zip" C:\go.zip; `
$dlGoVersion=$Env:GO_VERSION -replace '\.0$',''; `
Download-File "https://golang.org/dl/go${dlGoVersion}.windows-amd64.zip" C:\go.zip; `
`
Write-Host INFO: Downloading compiler 1 of 3...; `
Download-File https://raw.githubusercontent.com/moby/docker-tdmgcc/master/gcc.zip C:\gcc.zip; `
@@ -258,13 +252,6 @@ RUN `
Remove-Item C:\binutils.zip; `
Remove-Item C:\gitsetup.zip; `
`
Write-Host INFO: Downloading containerd; `
$location='https://github.com/containerd/containerd/releases/download/'+$Env:CONTAINERD_VERSION+'/containerd-'+$Env:CONTAINERD_VERSION.TrimStart('v')+'-windows-amd64.tar.gz'; `
Download-File $location C:\containerd.tar.gz; `
New-Item -Path C:\containerd -ItemType Directory; `
tar -xzf C:\containerd.tar.gz -C C:\containerd; `
Remove-Item C:\containerd.tar.gz; `
`
# Ensure all directories exist that we will require below....
$srcDir = """$Env:GOPATH`\src\github.com\docker\docker\bundles"""; `
Write-Host INFO: Ensuring existence of directory $srcDir...; `
@@ -274,36 +261,21 @@ RUN `
C:\git\cmd\git config --global core.autocrlf true;
RUN `
Function Install-GoTestSum() { `
Function Build-GoTestSum() { `
Write-Host "INFO: Building gotestsum version $Env:GOTESTSUM_COMMIT in $Env:GOPATH"; `
$Env:GO111MODULE = 'on'; `
$tmpGobin = "${Env:GOBIN_TMP}"; `
$Env:GOBIN = """${Env:GOPATH}`\bin"""; `
Write-Host "INFO: Installing gotestsum version $Env:GOTESTSUM_VERSION in $Env:GOBIN"; `
&go install "gotest.tools/gotestsum@${Env:GOTESTSUM_VERSION}"; `
&go get -buildmode=exe "gotest.tools/gotestsum@${Env:GOTESTSUM_COMMIT}"; `
$Env:GOBIN = "${tmpGobin}"; `
$Env:GO111MODULE = 'off'; `
if ($LASTEXITCODE -ne 0) { `
Throw '"gotestsum install failed..."'; `
Throw '"gotestsum build failed..."'; `
} `
Write-Host "INFO: Build done for gotestsum..."; `
} `
`
Install-GoTestSum
RUN `
Function Install-GoWinres() { `
$Env:GO111MODULE = 'on'; `
$tmpGobin = "${Env:GOBIN_TMP}"; `
$Env:GOBIN = """${Env:GOPATH}`\bin"""; `
Write-Host "INFO: Installing go-winres version $Env:GOWINRES_VERSION in $Env:GOBIN"; `
&go install "github.com/tc-hib/go-winres@${Env:GOWINRES_VERSION}"; `
$Env:GOBIN = "${tmpGobin}"; `
$Env:GO111MODULE = 'off'; `
if ($LASTEXITCODE -ne 0) { `
Throw '"go-winres install failed..."'; `
} `
} `
`
Install-GoWinres
Build-GoTestSum
# Make PowerShell the default entrypoint
ENTRYPOINT ["powershell.exe"]

Jenkinsfile: new vendored file, 1,175 lines (diff suppressed because it is too large)

MAINTAINERS

@@ -1,33 +1,494 @@
# Moby maintainers file
#
# See project/GOVERNANCE.md for committer versus reviewer roles
# This file describes the maintainer groups within the moby/moby project.
# More detail on Moby project governance is available in the
# project/GOVERNANCE.md file found in this repository.
#
# COMMITTERS
# GitHub ID, Name, Email address, GPG fingerprint
"akerouanton","Albin Kerouanton","albinker@gmail.com"
"AkihiroSuda","Akihiro Suda","akihiro.suda.cz@hco.ntt.co.jp"
"austinvazquez","Austin Vazquez","macedonv@amazon.com"
"corhere","Cory Snider","csnider@mirantis.com"
"cpuguy83","Brian Goff","cpuguy83@gmail.com"
"robmry","Rob Murray","rob.murray@docker.com"
"thaJeztah","Sebastiaan van Stijn","github@gone.nl"
"tianon","Tianon Gravi","admwiggin@gmail.com"
"tonistiigi","Tõnis Tiigi","tonis@docker.com"
"vvoland","Paweł Gronowski","pawel.gronowski@docker.com"
# It is structured to be consumable by both humans and programs.
# To extract its contents programmatically, use any TOML-compliant
# parser.
#
# REVIEWERS
# GitHub ID, Name, Email address, GPG fingerprint
"coolljt0725","Lei Jitang","leijitang@huawei.com"
"crazy-max","Kevin Alvarez","contact@crazymax.dev"
"dmcgowan","Derek McGowan","derek@mcgstyle.net"
"estesp","Phil Estes","estesp@linux.vnet.ibm.com"
"justincormack","Justin Cormack","justin.cormack@docker.com"
"kolyshkin","Kir Kolyshkin","kolyshkin@gmail.com"
"laurazard","Laura Brehm","laurabrehm@hey.com"
"neersighted","Bjorn Neergaard","bjorn@neersighted.com"
"rumpl","Djordje Lukic","djordje.lukic@docker.com"
"samuelkarp","Samuel Karp","me@samuelkarp.com"
"stevvooe","Stephen Day","stephen.day@docker.com"
"thompson-shaun","Shaun Thompson","shaun.thompson@docker.com"
"tiborvass","Tibor Vass","tibor@docker.com"
"unclejack","Cristian Staretu","cristian.staretu@gmail.com"
# TODO(estesp): This file should not necessarily depend on docker/opensource
# This file is compiled into the MAINTAINERS file in docker/opensource.
#
[Org]
[Org."Core maintainers"]
# The Core maintainers are the ghostbusters of the project: when there's a problem others
# can't solve, they show up and fix it with bizarre devices and weaponry.
# They have final say on technical implementation and coding style.
# They are ultimately responsible for quality in all its forms: usability polish,
# bugfixes, performance, stability, etc. When ownership can cleanly be passed to
# a subsystem, they are responsible for doing so and holding the
# subsystem maintainers accountable. If ownership is unclear, they are the de facto owners.
people = [
"akihirosuda",
"anusha",
"coolljt0725",
"cpuguy83",
"crosbymichael",
"estesp",
"johnstep",
"justincormack",
"kolyshkin",
"lowenna",
"mhbauer",
"runcom",
"stevvooe",
"thajeztah",
"tianon",
"tibor",
"tonistiigi",
"unclejack",
"vdemeester",
"vieux",
"yongtang"
]
[Org.Curators]
# The curators help ensure that incoming issues and pull requests are properly triaged and
# that our various contribution and reviewing processes are respected. With their knowledge of
# the repository activity, they can also guide contributors to relevant material or
# discussions.
#
# They are neither code nor docs reviewers, so they are never expected to merge. They can
# however:
# - close an issue or pull request when it's an exact duplicate
# - close an issue or pull request when it's inappropriate or off-topic
people = [
"alexellis",
"andrewhsu",
"fntlnz",
"gianarb",
"olljanat",
"programmerq",
"ripcurld",
"samwhited",
"thajeztah"
]
[Org.Alumni]
# This list contains maintainers that are no longer active on the project.
# It is thanks to these people that the project has become what it is today.
# Thank you!
people = [
# Aaron Lehmann was a maintainer for swarmkit, the registry, and the engine,
# and contributed many improvements, features, and bugfixes in those areas,
# among which "automated service rollbacks", templated secrets and configs,
# and resumable image layer downloads.
"aaronlehmann",
# Harald Albers is the mastermind behind the bash completion scripts for the
# Docker CLI. The completion scripts moved to the Docker CLI repository, so
# you can now find him perform his magic in the https://github.com/docker/cli repository.
"albers",
# Andrea Luzzardi started contributing to the Docker codebase in the "dotCloud"
# era, even before it was called "Docker". He is one of the architects of both
# Swarm and SwarmKit, and its integration into the Docker engine.
"aluzzardi",
# David Calavera contributed many features to Docker, such as an improved
# event system, dynamic configuration reloading, volume plugins, fancy
# new templating options, and an external client credential store. As a
# maintainer, David was release captain for Docker 1.8, and competing
# with Jess Frazelle to be "top dream killer".
# David is now doing amazing stuff as CTO for https://www.netlify.com,
# and tweets as @calavera.
"calavera",
# Before becoming a maintainer, Daniel Nephin was a core contributor
# to "Fig" (now known as Docker Compose). As a maintainer for both the
# Engine and Docker CLI, Daniel contributed many features, among which
# the `docker stack` commands, allowing users to deploy their Docker
# Compose projects as a Swarm service.
"dnephin",
# Doug Davis contributed many features and fixes for the classic builder,
# such as "wildcard" copy, the dockerignore file, custom paths/names
# for the Dockerfile, as well as enhancements to the API and documentation.
# Follow Doug on Twitter, where he tweets as @duginabox.
"duglin",
# As a maintainer, Erik was responsible for the "builder", and
# started the first designs for the new networking model in
# Docker. Erik is now working on all kinds of plugins for Docker
# (https://github.com/contiv) and various open source projects
# in his own repository https://github.com/erikh. You may
# still stumble into him in our issue tracker, or on IRC.
"erikh",
# Evan Hazlett is the creator of the Shipyard and Interlock open source projects,
# and the author of "Orca", which became the foundation of Docker Universal Control
# Plane (UCP). As a maintainer, Evan helped integrate SwarmKit (secrets, tasks)
# into the Docker engine.
"ehazlett",
# Arnaud Porterie (AKA "icecrime") was in charge of maintaining the maintainers.
# As a maintainer, he made life easier for contributors to the Docker open-source
# projects, bringing order in the chaos by designing a triage- and review workflow
# using labels (see https://icecrime.net/technology/a-structured-approach-to-labeling/),
# and automating the hell out of things with his buddies GordonTheTurtle and Poule
# (a chicken!).
#
# A lesser-known fact is that he created the first commit in the libnetwork repository
# even though he didn't know anything about it. Some say, he's now selling stuff on
# the internet ;-)
"icecrime",
# After a false start with his first PR being rejected, James Turnbull became a frequent
# contributor to the documentation, and became a docs maintainer on December 5, 2013. As
# a maintainer, James lifted the docs to a higher standard, and introduced the community
# guidelines ("three strikes"). James is currently changing the world as CTO of https://www.empatico.org,
# meanwhile authoring various books that are worth checking out. You can find him on Twitter,
# rambling as @kartar, and although no longer active as a maintainer, he's always "game" to
# help out reviewing docs PRs, so you may still see him around in the repository.
"jamtur01",
# Jessica Frazelle, also known as the "Keyser Söze of containers",
# runs *everything* in containers. She started contributing to
# Docker with a (fun fun) change involving both iptables and regular
# expressions (coz, YOLO!) on July 10, 2014
# https://github.com/docker/docker/pull/6950/commits/f3a68ffa390fb851115c77783fa4031f1d3b2995.
# Jess was Release Captain for Docker 1.4, 1.6 and 1.7, and contributed
# many features and improvements, among which "seccomp profiles" (making
# containers a lot more secure). Besides being a maintainer, she
# set up the CI infrastructure for the project, giving everyone
# something to shout at if a PR failed ("noooo Janky!").
# Be sure you don't miss her talks at a conference near you (a must-see),
# read her blog at https://blog.jessfraz.com (a must-read), and
# check out her open source projects on GitHub https://github.com/jessfraz (a must-try).
"jessfraz",
# Alexander Morozov contributed many features to Docker, worked on the precursor of
# what later became containerd (and worked on that too), and made a "stupid" Go
# vendor tool specifically for docker/docker needs: vndr (https://github.com/LK4D4/vndr).
# Not many know that Alexander is a master negotiator, being able to change course
# of action with a single "Nope, we're not gonna do that".
"lk4d4",
# Madhu Venugopal was part of the SocketPlane team that joined Docker.
# As a maintainer, he was working with Jana for the Container Network
# Model (CNM) implemented through libnetwork, and the "routing mesh" powering
# Swarm mode networking.
"mavenugo",
# As a maintainer, Kenfe-Mickaël Laventure worked on the container runtime,
# integrating containerd 1.0 with the daemon, and adding support for custom
# OCI runtimes, as well as implementing the `docker prune` subcommands,
# which were a welcome addition. You can keep up with Mickaël on
# Twitter (@kmlaventure).
"mlaventure",
# As a docs maintainer, Mary Anthony contributed greatly to the Docker
# docs. She wrote the Docker Contributor Guide and Getting Started
# Guides. She helped create a doc build system independent of
# the docker/docker project, and implemented a new docs.docker.com theme and
# nav for the 2015 DockerCon. Fun fact: the most inherited layer in Docker Hub
# public repositories was originally referenced in
# maryatdocker/docker-whale back in May 2015.
"moxiegirl",
# Jana Radhakrishnan was part of the SocketPlane team that joined Docker.
# As a maintainer, he was the lead architect for the Container Network
# Model (CNM) implemented through libnetwork, and the "routing mesh" powering
# Swarm mode networking.
#
# Jana started new adventures in networking, but you can find him tweeting as @mrjana,
# coding on GitHub https://github.com/mrjana, and he may be hiding on the Docker Community
# slack channel :-)
"mrjana",
# Sven Dowideit became a well-known person in the Docker ecosphere, building
# boot2docker, and became a regular contributor to the project, starting as
# early as October 2013 (https://github.com/docker/docker/pull/2119), to become
# a maintainer less than two months later (https://github.com/docker/docker/pull/3061).
#
# As a maintainer, Sven took on the task of converting the documentation from
# reStructuredText to Markdown, migrating to Hugo for generating the docs, and
# writing tooling for building, testing, and publishing them.
#
# If you don't have the occasion to visit "the Australian office", you
# can keep up with Sven on Twitter (@SvenDowideit), his blog http://fosiki.com,
# and of course on GitHub.
"sven",
# Vincent "vbatts!" Batts made his first contribution to the project
# in November 2013, to become a maintainer a few months later, on
# May 10, 2014 (https://github.com/docker/docker/commit/d6e666a87a01a5634c250358a94c814bf26cb778).
# As a maintainer, Vincent made important contributions to core elements
# of Docker, such as "distribution" (tarsum) and graphdrivers (btrfs, devicemapper).
# He also contributed the "tar-split" library, an important element
# for the content-addressable store.
# Vincent is currently a member of the Open Containers Initiative
# Technical Oversight Board (TOB), besides his work at Red Hat and
# Project Atomic. You can still find him regularly hanging out in
# our repository and the #docker-dev and #docker-maintainers IRC channels
# for a chat, as he's always a lot of fun.
"vbatts",
# Vishnu became a maintainer to help out on the daemon codebase and
# libcontainer integration. He's currently involved in the
# Open Containers Initiative, working on the specifications,
# besides his work on cAdvisor and Kubernetes for Google.
"vishh"
]
[people]
# A reference list of all people associated with the project.
# All other sections should refer to people by their canonical key
# in the people section.
# ADD YOURSELF HERE IN ALPHABETICAL ORDER
[people.aaronlehmann]
Name = "Aaron Lehmann"
Email = "aaron.lehmann@docker.com"
GitHub = "aaronlehmann"
[people.alexellis]
Name = "Alex Ellis"
Email = "alexellis2@gmail.com"
GitHub = "alexellis"
[people.akihirosuda]
Name = "Akihiro Suda"
Email = "akihiro.suda.cz@hco.ntt.co.jp"
GitHub = "AkihiroSuda"
[people.aluzzardi]
Name = "Andrea Luzzardi"
Email = "al@docker.com"
GitHub = "aluzzardi"
[people.albers]
Name = "Harald Albers"
Email = "github@albersweb.de"
GitHub = "albers"
[people.andrewhsu]
Name = "Andrew Hsu"
Email = "andrewhsu@docker.com"
GitHub = "andrewhsu"
[people.anusha]
Name = "Anusha Ragunathan"
Email = "anusha@docker.com"
GitHub = "anusha-ragunathan"
[people.calavera]
Name = "David Calavera"
Email = "david.calavera@gmail.com"
GitHub = "calavera"
[people.coolljt0725]
Name = "Lei Jitang"
Email = "leijitang@huawei.com"
GitHub = "coolljt0725"
[people.cpuguy83]
Name = "Brian Goff"
Email = "cpuguy83@gmail.com"
GitHub = "cpuguy83"
[people.crosbymichael]
Name = "Michael Crosby"
Email = "crosbymichael@gmail.com"
GitHub = "crosbymichael"
[people.dnephin]
Name = "Daniel Nephin"
Email = "dnephin@gmail.com"
GitHub = "dnephin"
[people.duglin]
Name = "Doug Davis"
Email = "dug@us.ibm.com"
GitHub = "duglin"
[people.ehazlett]
Name = "Evan Hazlett"
Email = "ejhazlett@gmail.com"
GitHub = "ehazlett"
[people.erikh]
Name = "Erik Hollensbe"
Email = "erik@docker.com"
GitHub = "erikh"
[people.estesp]
Name = "Phil Estes"
Email = "estesp@linux.vnet.ibm.com"
GitHub = "estesp"
[people.fntlnz]
Name = "Lorenzo Fontana"
Email = "fontanalorenz@gmail.com"
GitHub = "fntlnz"
[people.gianarb]
Name = "Gianluca Arbezzano"
Email = "ga@thumpflow.com"
GitHub = "gianarb"
[people.icecrime]
Name = "Arnaud Porterie"
Email = "icecrime@gmail.com"
GitHub = "icecrime"
[people.jamtur01]
Name = "James Turnbull"
Email = "james@lovedthanlost.net"
GitHub = "jamtur01"
[people.jessfraz]
Name = "Jessie Frazelle"
Email = "jess@linux.com"
GitHub = "jessfraz"
[people.johnstep]
Name = "John Stephens"
Email = "johnstep@docker.com"
GitHub = "johnstep"
[people.justincormack]
Name = "Justin Cormack"
Email = "justin.cormack@docker.com"
GitHub = "justincormack"
[people.kolyshkin]
Name = "Kir Kolyshkin"
Email = "kolyshkin@gmail.com"
GitHub = "kolyshkin"
[people.lk4d4]
Name = "Alexander Morozov"
Email = "lk4d4@docker.com"
GitHub = "lk4d4"
[people.lowenna]
Name = "John Howard"
Email = "github@lowenna.com"
GitHub = "lowenna"
[people.mavenugo]
Name = "Madhu Venugopal"
Email = "madhu@docker.com"
GitHub = "mavenugo"
[people.mhbauer]
Name = "Morgan Bauer"
Email = "mbauer@us.ibm.com"
GitHub = "mhbauer"
[people.mlaventure]
Name = "Kenfe-Mickaël Laventure"
Email = "mickael.laventure@gmail.com"
GitHub = "mlaventure"
[people.moxiegirl]
Name = "Mary Anthony"
Email = "mary.anthony@docker.com"
GitHub = "moxiegirl"
[people.mrjana]
Name = "Jana Radhakrishnan"
Email = "mrjana@docker.com"
GitHub = "mrjana"
[people.olljanat]
Name = "Olli Janatuinen"
Email = "olli.janatuinen@gmail.com"
GitHub = "olljanat"
[people.programmerq]
Name = "Jeff Anderson"
Email = "jeff@docker.com"
GitHub = "programmerq"
[people.ripcurld]
Name = "Boaz Shuster"
Email = "ripcurld.github@gmail.com"
GitHub = "ripcurld"
[people.runcom]
Name = "Antonio Murdaca"
Email = "runcom@redhat.com"
GitHub = "runcom"
[people.samwhited]
Name = "Sam Whited"
Email = "sam@samwhited.com"
GitHub = "samwhited"
[people.shykes]
Name = "Solomon Hykes"
Email = "solomon@docker.com"
GitHub = "shykes"
[people.stevvooe]
Name = "Stephen Day"
Email = "stephen.day@docker.com"
GitHub = "stevvooe"
[people.sven]
Name = "Sven Dowideit"
Email = "SvenDowideit@home.org.au"
GitHub = "SvenDowideit"
[people.thajeztah]
Name = "Sebastiaan van Stijn"
Email = "github@gone.nl"
GitHub = "thaJeztah"
[people.tianon]
Name = "Tianon Gravi"
Email = "admwiggin@gmail.com"
GitHub = "tianon"
[people.tibor]
Name = "Tibor Vass"
Email = "tibor@docker.com"
GitHub = "tiborvass"
[people.tonistiigi]
Name = "Tõnis Tiigi"
Email = "tonis@docker.com"
GitHub = "tonistiigi"
[people.unclejack]
Name = "Cristian Staretu"
Email = "cristian.staretu@gmail.com"
GitHub = "unclejack"
[people.vbatts]
Name = "Vincent Batts"
Email = "vbatts@redhat.com"
GitHub = "vbatts"
[people.vdemeester]
Name = "Vincent Demeester"
Email = "vincent@sbr.pm"
GitHub = "vdemeester"
[people.vieux]
Name = "Victor Vieux"
Email = "vieux@docker.com"
GitHub = "vieux"
[people.vishh]
Name = "Vishnu Kannan"
Email = "vishnuk@google.com"
GitHub = "vishh"
[people.yongtang]
Name = "Yong Tang"
Email = "yong.tang.github@outlook.com"
GitHub = "yongtang"

Makefile

@@ -1,7 +1,29 @@
DOCKER ?= docker
BUILDX ?= $(DOCKER) buildx
.PHONY: all binary dynbinary build cross help install manpages run shell test test-docker-py test-integration test-unit validate win
DOCKER_GITCOMMIT := $(shell git rev-parse HEAD)
ifdef USE_BUILDX
BUILDX ?= $(shell command -v buildx)
BUILDX ?= $(shell command -v docker-buildx)
DOCKER_BUILDX_CLI_PLUGIN_PATH ?= ~/.docker/cli-plugins/docker-buildx
BUILDX ?= $(shell if [ -x "$(DOCKER_BUILDX_CLI_PLUGIN_PATH)" ]; then echo $(DOCKER_BUILDX_CLI_PLUGIN_PATH); fi)
endif
ifndef USE_BUILDX
DOCKER_BUILDKIT := 1
export DOCKER_BUILDKIT
endif
BUILDX ?= bundles/buildx
DOCKER ?= docker
# set the graph driver as the current graphdriver if not set
DOCKER_GRAPHDRIVER := $(if $(DOCKER_GRAPHDRIVER),$(DOCKER_GRAPHDRIVER),$(shell docker info 2>&1 | grep "Storage Driver" | sed 's/.*: //'))
export DOCKER_GRAPHDRIVER
# get OS/Arch of docker engine
DOCKER_OSARCH := $(shell bash -c 'source hack/make/.detect-daemon-osarch && echo $${DOCKER_ENGINE_OSARCH}')
DOCKERFILE := $(shell bash -c 'source hack/make/.detect-daemon-osarch && echo $${DOCKERFILE}')
DOCKER_GITCOMMIT := $(shell git rev-parse --short HEAD || echo unsupported)
export DOCKER_GITCOMMIT
# allow overriding the repository and branch that validation scripts are running
@@ -10,9 +32,6 @@ export VALIDATE_REPO
export VALIDATE_BRANCH
export VALIDATE_ORIGIN_BRANCH
export PAGER
export GIT_PAGER
# env vars passed through directly to Docker's build scripts
# to allow things like `make KEEPBUNDLE=1 binary` easily
# `project/PACKAGERS.md` have some limited documentation of some of these
@@ -21,9 +40,11 @@ export GIT_PAGER
# option of "go build". For example, a built-in graphdriver priority list
# can be changed during build time like this:
#
# make DOCKER_LDFLAGS="-X github.com/docker/docker/daemon/graphdriver.priority=overlay2,zfs" dynbinary
# make DOCKER_LDFLAGS="-X github.com/docker/docker/daemon/graphdriver.priority=overlay2,devicemapper" dynbinary
#
DOCKER_ENVS := \
-e DOCKER_CROSSPLATFORMS \
-e BUILD_APT_MIRROR \
-e BUILDFLAGS \
-e KEEPBUNDLE \
-e DOCKER_BUILD_ARGS \
@@ -31,11 +52,8 @@ DOCKER_ENVS := \
-e DOCKER_BUILD_OPTS \
-e DOCKER_BUILD_PKGS \
-e DOCKER_BUILDKIT \
-e DOCKER_BASH_COMPLETION_PATH \
-e DOCKER_CLI_PATH \
-e DOCKERCLI_VERSION \
-e DOCKERCLI_REPOSITORY \
-e DOCKERCLI_INTEGRATION_VERSION \
-e DOCKERCLI_INTEGRATION_REPOSITORY \
-e DOCKER_DEBUG \
-e DOCKER_EXPERIMENTAL \
-e DOCKER_GITCOMMIT \
@@ -48,16 +66,10 @@ DOCKER_ENVS := \
-e DOCKER_TEST_HOST \
-e DOCKER_USERLANDPROXY \
-e DOCKERD_ARGS \
-e DELVE_PORT \
-e FIREWALLD \
-e GITHUB_ACTIONS \
-e TEST_FORCE_VALIDATE \
-e TEST_INTEGRATION_DIR \
-e TEST_INTEGRATION_USE_SNAPSHOTTER \
-e TEST_INTEGRATION_FAIL_FAST \
-e TEST_SKIP_INTEGRATION \
-e TEST_SKIP_INTEGRATION_CLI \
-e TESTCOVERAGE \
-e TESTDEBUG \
-e TESTDIRS \
-e TESTFLAGS \
@@ -68,26 +80,26 @@ DOCKER_ENVS := \
-e VALIDATE_REPO \
-e VALIDATE_BRANCH \
-e VALIDATE_ORIGIN_BRANCH \
-e HTTP_PROXY \
-e HTTPS_PROXY \
-e NO_PROXY \
-e http_proxy \
-e https_proxy \
-e no_proxy \
-e VERSION \
-e PLATFORM \
-e DEFAULT_PRODUCT_LICENSE \
-e PRODUCT \
-e PACKAGER_NAME \
-e PAGER \
-e GIT_PAGER \
-e OTEL_EXPORTER_OTLP_ENDPOINT \
-e OTEL_EXPORTER_OTLP_PROTOCOL \
-e OTEL_SERVICE_NAME
-e PRODUCT
# note: we _cannot_ add "-e DOCKER_BUILDTAGS" here because even if it's unset in the shell, that would shadow the "ENV DOCKER_BUILDTAGS" set in our Dockerfile, which is very important for our official builds
# to allow `make BIND_DIR=. shell` or `make BIND_DIR= test`
# (default to no bind mount if DOCKER_HOST is set)
# note: BINDDIR is supported for backwards-compatibility here
BIND_DIR := $(if $(BINDDIR),$(BINDDIR),$(if $(DOCKER_HOST),,.))
BIND_DIR := $(if $(BINDDIR),$(BINDDIR),$(if $(DOCKER_HOST),,bundles))
# DOCKER_MOUNT can be overridden, but use at your own risk!
# DOCKER_MOUNT can be overriden, but use at your own risk!
ifndef DOCKER_MOUNT
DOCKER_MOUNT := $(if $(BIND_DIR),-v "$(BIND_DIR):/go/src/github.com/docker/docker/$(BIND_DIR)")
DOCKER_MOUNT := $(if $(BIND_DIR),-v "$(CURDIR)/$(BIND_DIR):/go/src/github.com/docker/docker/$(BIND_DIR)")
DOCKER_MOUNT := $(if $(DOCKER_BINDDIR_MOUNT_OPTS),$(DOCKER_MOUNT):$(DOCKER_BINDDIR_MOUNT_OPTS),$(DOCKER_MOUNT))
# This allows the test suite to be able to run without worrying about the underlying fs used by the container running the daemon (e.g. aufs-on-aufs), so long as the host running the container is running a supported fs.
@@ -95,26 +107,23 @@ DOCKER_MOUNT := $(if $(DOCKER_BINDDIR_MOUNT_OPTS),$(DOCKER_MOUNT):$(DOCKER_BINDD
# Note that `BIND_DIR` will already be set to `bundles` if `DOCKER_HOST` is not set (see above BIND_DIR line), in such case this will do nothing since `DOCKER_MOUNT` will already be set.
DOCKER_MOUNT := $(if $(DOCKER_MOUNT),$(DOCKER_MOUNT),-v /go/src/github.com/docker/docker/bundles) -v "$(CURDIR)/.git:/go/src/github.com/docker/docker/.git"
DOCKER_MOUNT_CACHE := -v docker-dev-cache:/root/.cache -v docker-mod-cache:/go/pkg/mod/
DOCKER_MOUNT_CACHE := -v docker-dev-cache:/root/.cache
DOCKER_MOUNT_CLI := $(if $(DOCKER_CLI_PATH),-v $(shell dirname $(DOCKER_CLI_PATH)):/usr/local/cli,)
ifdef BIND_GIT
# Gets the common .git directory (even from inside a git worktree)
GITDIR := $(shell realpath $(shell git rev-parse --git-common-dir))
MOUNT_GITDIR := $(if $(GITDIR),-v "$(GITDIR):$(GITDIR)")
endif
DOCKER_MOUNT := $(DOCKER_MOUNT) $(DOCKER_MOUNT_CACHE) $(DOCKER_MOUNT_CLI) $(DOCKER_MOUNT_BASH_COMPLETION) $(MOUNT_GITDIR)
DOCKER_MOUNT_BASH_COMPLETION := $(if $(DOCKER_BASH_COMPLETION_PATH),-v $(shell dirname $(DOCKER_BASH_COMPLETION_PATH)):/usr/local/completion/bash,)
DOCKER_MOUNT := $(DOCKER_MOUNT) $(DOCKER_MOUNT_CACHE) $(DOCKER_MOUNT_CLI) $(DOCKER_MOUNT_BASH_COMPLETION)
endif # ifndef DOCKER_MOUNT
# This allows to set the docker-dev container name
DOCKER_CONTAINER_NAME := $(if $(CONTAINER_NAME),--name $(CONTAINER_NAME),)
DOCKER_IMAGE := docker-dev
GIT_BRANCH := $(shell git rev-parse --abbrev-ref HEAD 2>/dev/null)
GIT_BRANCH_CLEAN := $(shell echo $(GIT_BRANCH) | sed -e "s/[^[:alnum:]]/-/g")
DOCKER_IMAGE := docker-dev$(if $(GIT_BRANCH_CLEAN),:$(GIT_BRANCH_CLEAN))
DOCKER_PORT_FORWARD := $(if $(DOCKER_PORT),-p "$(DOCKER_PORT)",)
DELVE_PORT_FORWARD := $(if $(DELVE_PORT),-p "$(DELVE_PORT)",)
DOCKER_FLAGS := $(DOCKER) run --rm --privileged $(DOCKER_CONTAINER_NAME) $(DOCKER_ENVS) $(DOCKER_MOUNT) $(DOCKER_PORT_FORWARD) $(DELVE_PORT_FORWARD)
DOCKER_FLAGS := $(DOCKER) run --rm -i --privileged $(DOCKER_CONTAINER_NAME) $(DOCKER_ENVS) $(DOCKER_MOUNT) $(DOCKER_PORT_FORWARD)
BUILD_APT_MIRROR := $(if $(DOCKER_BUILD_APT_MIRROR),--build-arg APT_MIRROR=$(DOCKER_BUILD_APT_MIRROR))
export BUILD_APT_MIRROR
SWAGGER_DOCS_PORT ?= 9000
@@ -131,49 +140,53 @@ ifeq ($(INTERACTIVE), 1)
DOCKER_FLAGS += -t
endif
# on GitHub Runners input device is not a TTY but we allocate a pseudo-one,
# otherwise keep STDIN open even if not attached if not a GitHub Runner.
ifeq ($(GITHUB_ACTIONS),true)
DOCKER_FLAGS += -t
else
DOCKER_FLAGS += -i
endif
DOCKER_RUN_DOCKER := $(DOCKER_FLAGS) "$(DOCKER_IMAGE)"
DOCKER_BUILD_ARGS += --build-arg=GO_VERSION
DOCKER_BUILD_ARGS += --build-arg=DOCKERCLI_VERSION
DOCKER_BUILD_ARGS += --build-arg=DOCKERCLI_REPOSITORY
DOCKER_BUILD_ARGS += --build-arg=DOCKERCLI_INTEGRATION_VERSION
DOCKER_BUILD_ARGS += --build-arg=DOCKERCLI_INTEGRATION_REPOSITORY
ifdef DOCKER_SYSTEMD
DOCKER_BUILD_ARGS += --build-arg=SYSTEMD=true
endif
ifdef FIREWALLD
DOCKER_BUILD_ARGS += --build-arg=FIREWALLD=true
BUILD_OPTS := ${BUILD_APT_MIRROR} ${DOCKER_BUILD_ARGS} ${DOCKER_BUILD_OPTS} -f "$(DOCKERFILE)"
ifdef USE_BUILDX
BUILD_OPTS += $(BUILDX_BUILD_EXTRA_OPTS)
BUILD_CMD := $(BUILDX) build
else
BUILD_CMD := $(DOCKER) build
endif
BUILD_OPTS := ${DOCKER_BUILD_ARGS} ${DOCKER_BUILD_OPTS}
BUILD_CMD := $(BUILDX) build
BAKE_CMD := $(BUILDX) bake
# This is used for the legacy "build" target and anything still depending on it
BUILD_CROSS =
ifdef DOCKER_CROSS
BUILD_CROSS = --build-arg CROSS=$(DOCKER_CROSS)
endif
ifdef DOCKER_CROSSPLATFORMS
BUILD_CROSS = --build-arg CROSS=true
endif
VERSION_AUTOGEN_ARGS = --build-arg VERSION --build-arg DOCKER_GITCOMMIT --build-arg PRODUCT --build-arg PLATFORM --build-arg DEFAULT_PRODUCT_LICENSE
default: binary
.PHONY: all
all: build ## validate all checks, build linux binaries, run all tests,\ncross build non-linux binaries, and generate archives
$(DOCKER_RUN_DOCKER) bash -c 'hack/validate/default && hack/make.sh'
.PHONY: binary
binary: bundles ## build statically linked linux binaries
$(BAKE_CMD) binary
# This is only used to work around read-only bind mounts of the source code into
# binary build targets. We end up mounting a tmpfs over autogen which allows us
# to write build-time generated assets even though the source is mounted read-only
# ...But in order to do so, this dir needs to already exist.
autogen:
mkdir -p autogen
.PHONY: dynbinary
dynbinary: bundles ## build dynamically linked linux binaries
$(BAKE_CMD) dynbinary
binary: buildx autogen ## build statically linked linux binaries
$(BUILD_CMD) $(BUILD_OPTS) --output=bundles/ --target=$@ $(VERSION_AUTOGEN_ARGS) .
.PHONY: cross
cross: bundles ## cross build the binaries
$(BAKE_CMD) binary-cross
dynbinary: buildx autogen ## build dynamically linked linux binaries
$(BUILD_CMD) $(BUILD_OPTS) --output=bundles/ --target=$@ $(VERSION_AUTOGEN_ARGS) .
cross: BUILD_OPTS += --build-arg CROSS=true --build-arg DOCKER_CROSSPLATFORMS
cross: buildx autogen ## cross build the binaries for darwin, freebsd and\nwindows
$(BUILD_CMD) $(BUILD_OPTS) --output=bundles/ --target=$@ $(VERSION_AUTOGEN_ARGS) .
bundles:
mkdir bundles
@@ -182,46 +195,41 @@ bundles:
clean: clean-cache
.PHONY: clean-cache
clean-cache: ## remove the docker volumes that are used for caching in the dev-container
docker volume rm -f docker-dev-cache docker-mod-cache
clean-cache:
docker volume rm -f docker-dev-cache
.PHONY: help
help: ## this help
@awk 'BEGIN {FS = ":.*?## "} /^[a-zA-Z0-9_-]+:.*?## / {gsub("\\\\n",sprintf("\n%22c",""), $$2);printf "\033[36m%-20s\033[0m %s\n", $$1, $$2}' $(MAKEFILE_LIST)
.PHONY: install
install: ## install the linux binaries
KEEPBUNDLE=1 hack/make.sh install-binary
.PHONY: run
run: build ## run the docker daemon in a container
$(DOCKER_RUN_DOCKER) sh -c "KEEPBUNDLE=1 hack/make.sh install-binary run"
.PHONY: build
ifeq ($(BIND_DIR), .)
build: shell_target := --target=dev-base
else
build: shell_target := --target=dev
else
build: shell_target := --target=final
endif
build: validate-bind-dir bundles
$(BUILD_CMD) $(BUILD_OPTS) $(shell_target) --load -t "$(DOCKER_IMAGE)" .
ifdef USE_BUILDX
build: buildx_load := --load
endif
build: buildx
$(BUILD_CMD) $(BUILD_OPTS) $(shell_target) $(buildx_load) $(BUILD_CROSS) -t "$(DOCKER_IMAGE)" .
.PHONY: shell
shell: build ## start a shell inside the build env
$(DOCKER_RUN_DOCKER) bash
.PHONY: test
test: build test-unit ## run the unit, integration and docker-py tests
$(DOCKER_RUN_DOCKER) hack/make.sh dynbinary test-integration test-docker-py
$(DOCKER_RUN_DOCKER) hack/make.sh dynbinary cross test-integration test-docker-py
.PHONY: test-docker-py
test-docker-py: build ## run the docker-py tests
$(DOCKER_RUN_DOCKER) hack/make.sh dynbinary test-docker-py
.PHONY: test-integration-cli
test-integration-cli: test-integration ## (DEPRECATED) use test-integration
.PHONY: test-integration
ifneq ($(and $(TEST_SKIP_INTEGRATION),$(TEST_SKIP_INTEGRATION_CLI)),)
test-integration:
@echo Both integrations suites skipped per environment variables
@@ -230,31 +238,17 @@ test-integration: build ## run the integration tests
$(DOCKER_RUN_DOCKER) hack/make.sh dynbinary test-integration
endif
.PHONY: test-integration-flaky
test-integration-flaky: build ## run the stress test for all new integration tests
$(DOCKER_RUN_DOCKER) hack/make.sh dynbinary test-integration-flaky
.PHONY: test-unit
test-unit: build ## run the unit tests
$(DOCKER_RUN_DOCKER) hack/test/unit
.PHONY: validate
validate: build ## validate DCO, Seccomp profile generation, gofmt,\n./pkg/ isolation, golint, tests, tomls, go vet and vendor
$(DOCKER_RUN_DOCKER) hack/validate/all
.PHONY: validate-generate-files
validate-generate-files:
$(BUILD_CMD) --target "validate" \
--output "type=cacheonly" \
--file "./hack/dockerfiles/generate-files.Dockerfile" .
.PHONY: validate-%
validate-%: build ## validate specific check
$(DOCKER_RUN_DOCKER) hack/validate/$*
.PHONY: win
win: bundles ## cross build the binary for windows
$(BAKE_CMD) --set *.platform=windows/amd64 binary
win: build ## cross build the binary for windows
$(DOCKER_RUN_DOCKER) DOCKER_CROSSPLATFORMS=windows/amd64 hack/make.sh cross
.PHONY: swagger-gen
swagger-gen:
@@ -270,24 +264,31 @@ swagger-docs: ## preview the API documentation
@docker run --rm -v $(PWD)/api/swagger.yaml:/usr/share/nginx/html/swagger.yaml \
-e 'REDOC_OPTIONS=hide-hostname="true" lazy-rendering' \
-p $(SWAGGER_DOCS_PORT):80 \
bfirsh/redoc:1.14.0
bfirsh/redoc:1.6.2
.PHONY: generate-files
generate-files:
$(eval $@_TMP_OUT := $(shell mktemp -d -t moby-output.XXXXXXXXXX))
@if [ -z "$($@_TMP_OUT)" ]; then \
echo "Temp dir is not set"; \
exit 1; \
.PHONY: buildx
ifdef USE_BUILDX
ifeq ($(BUILDX), bundles/buildx)
buildx: bundles/buildx ## build buildx cli tool
endif
endif
# This intentionally is not using the `--output` flag from the docker CLI, which
# is a buildkit option. The idea here being that if buildx is being used, it's
# because buildkit is not supported natively
bundles/buildx: bundles ## build buildx CLI tool
docker build -f $${BUILDX_DOCKERFILE:-Dockerfile.buildx} -t "moby-buildx:$${BUILDX_COMMIT:-latest}" \
--build-arg BUILDX_COMMIT \
--build-arg BUILDX_REPO \
--build-arg GOOS=$$(if [ -n "$(GOOS)" ]; then echo $(GOOS); else go env GOHOSTOS || uname | awk '{print tolower($$0)}' || true; fi) \
--build-arg GOARCH=$$(if [ -n "$(GOARCH)" ]; then echo $(GOARCH); else go env GOHOSTARCH || true; fi) \
.
id=$$(docker create moby-buildx:$${BUILDX_COMMIT:-latest}); \
if [ -n "$${id}" ]; then \
docker cp $${id}:/usr/bin/buildx $@ \
&& touch $@; \
docker rm -f $${id}; \
fi
$(BUILD_CMD) --target "update" \
--output "type=local,dest=$($@_TMP_OUT)" \
--file "./hack/dockerfiles/generate-files.Dockerfile" .
cp -R "$($@_TMP_OUT)"/. .
rm -rf "$($@_TMP_OUT)"/*
.PHONY: validate-bind-dir
validate-bind-dir:
@case "$(BIND_DIR)" in \
".."*|"/"*) echo "Make needs to be run from the project-root directory, with BIND_DIR set to \".\" or a subdir"; \
exit 1 ;; \
esac


@@ -1,11 +1,6 @@
The Moby Project
================
[![PkgGoDev](https://pkg.go.dev/badge/github.com/docker/docker)](https://pkg.go.dev/github.com/docker/docker)
[![Go Report Card](https://goreportcard.com/badge/github.com/docker/docker)](https://goreportcard.com/report/github.com/docker/docker)
[![OpenSSF Scorecard](https://api.scorecard.dev/projects/github.com/moby/moby/badge)](https://scorecard.dev/viewer/?uri=github.com/moby/moby)
![Moby Project logo](docs/static_files/moby-project-logo.png "The Moby Project")
Moby is an open-source project created by Docker to enable and accelerate software containerization.
@@ -19,7 +14,7 @@ Moby is an open project guided by strong principles, aiming to be modular, flexi
It is open to the community to help set its direction.
- Modular: the project includes lots of components that have well-defined functions and APIs that work together.
- Batteries included but swappable: Moby includes enough components to build fully featured container systems, but its modular architecture ensures that most of the components can be swapped by different implementations.
- Batteries included but swappable: Moby includes enough components to build fully featured container system, but its modular architecture ensures that most of the components can be swapped by different implementations.
- Usable security: Moby provides secure defaults without compromising usability.
- Developer focused: The APIs are intended to be functional and useful to build powerful tools.
They are not necessarily intended as end user tools but as components aimed at developers.
@@ -37,7 +32,7 @@ New projects can be added if they fit with the community goals. Docker is commit
However, other projects are also encouraged to use Moby as an upstream, and to reuse the components in diverse ways, and all these uses will be treated in the same way. External maintainers and contributors are welcomed.
The Moby project is not intended as a location for support or feature requests for Docker products, but as a place for contributors to work on open source code, fix bugs, and make the code more useful.
The releases are supported by the maintainers, community and users, on a best efforts basis only. For customers who want enterprise or commercial support, [Docker Desktop](https://www.docker.com/products/docker-desktop/) and [Mirantis Container Runtime](https://www.mirantis.com/software/mirantis-container-runtime/) are the appropriate products for these use cases.
The releases are supported by the maintainers, community and users, on a best efforts basis only, and are not intended for customers who want enterprise or commercial support; Docker EE is the appropriate product for these use cases.
-----


@@ -1,42 +1,9 @@
# Security Policy
# Reporting security issues
The maintainers of the Moby project take security seriously. If you discover
a security issue, please bring it to their attention right away!
The Moby maintainers take security seriously. If you discover a security issue, please bring it to their attention right away!
## Reporting a Vulnerability
### Reporting a Vulnerability
Please **DO NOT** file a public issue, instead send your report privately
to [security@docker.com](mailto:security@docker.com).
Please **DO NOT** file a public issue, instead send your report privately to security@docker.com.
Reporter(s) can expect a response within 72 hours, acknowledging the issue was
received.
## Review Process
After receiving the report, an initial triage and technical analysis is
performed to confirm the report and determine its scope. We may request
additional information in this stage of the process.
Once a reviewer has confirmed the relevance of the report, a draft security
advisory will be created on GitHub. The draft advisory will be used to discuss
the issue with maintainers, the reporter(s), and where applicable, other
affected parties under embargo.
If the vulnerability is accepted, a timeline for developing a patch, public
disclosure, and patch release will be determined. If there is an embargo period
on public disclosure before the patch release, the reporter(s) are expected to
participate in the discussion of the timeline and abide by agreed upon dates
for public disclosure.
## Accreditation
Security reports are greatly appreciated and we will publicly thank you,
although we will keep your name confidential if you request it. We also like to
send gifts - if you're into swag, make sure to let us know. We do not currently
offer a paid security bounty program at this time.
## Supported Versions
This project uses long-lived branches to maintain releases. Refer to
[BRANCHES-AND-TAGS.md](project/BRANCHES-AND-TAGS.md) in the default branch to
learn about the current maintenance status of each branch.
Security reports are greatly appreciated and we will publicly thank you for it, although we keep your name confidential if you request it. We also like to send gifts—if you're into schwag, make sure to let us know. We currently do not offer a paid security bounty program, but are not ruling it out in the future.


@@ -121,6 +121,6 @@ automatically set the other above mentioned environment variables accordingly.
You can change a version of golang used for building stuff that is being tested
by setting `GO_VERSION` variable, for example:
```bash
make GO_VERSION=1.24.8 test
```
make GO_VERSION=1.12.8 test
```


@@ -37,6 +37,6 @@ There is hopefully enough example material in the file for you to copy a similar
When you make edits to `swagger.yaml`, you may want to check the generated API documentation to ensure it renders correctly.
Run `make swagger-docs` and a preview will be running at `http://localhost:9000`. Some of the styling may be incorrect, but you'll be able to ensure that it is generating the correct documentation.
Run `make swagger-docs` and a preview will be running at `http://localhost`. Some of the styling may be incorrect, but you'll be able to ensure that it is generating the correct documentation.
The production documentation is generated by vendoring `swagger.yaml` into [docker/docker.github.io](https://github.com/docker/docker.github.io).


@@ -1,18 +1,9 @@
package api
package api // import "github.com/docker/docker/api"
// Common constants for daemon and client.
const (
// DefaultVersion of the current REST API.
DefaultVersion = "1.51"
// MinSupportedAPIVersion is the minimum API version that can be supported
// by the API server, specified as "major.minor". Note that the daemon
// may be configured with a different minimum API version, as returned
// in [github.com/docker/docker/api/types.Version.MinAPIVersion].
//
// API requests for API versions lower than the configured version produce
// an error.
MinSupportedAPIVersion = "1.24"
// DefaultVersion of Current REST API
DefaultVersion = "1.41"
// NoBaseImageSpecifier is the symbol used by the FROM
// command to specify that no base image is to be used.
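The comment above documents the contract: API requests for versions lower than the configured minimum produce an error. A minimal sketch of such a check, using the comparison helpers from `github.com/docker/docker/api/types/versions` (the `checkVersion` function itself is a hypothetical illustration, not the daemon's actual middleware):

```go
package main

import (
	"fmt"

	"github.com/docker/docker/api/types/versions"
)

// checkVersion mirrors the documented rule: API requests for versions
// lower than the configured minimum produce an error.
func checkVersion(requested, minVersion string) error {
	if versions.LessThan(requested, minVersion) {
		return fmt.Errorf("client version %s is too old; minimum supported API version is %s", requested, minVersion)
	}
	return nil
}

func main() {
	fmt.Println(checkVersion("1.23", "1.24")) // rejected: below the minimum
	fmt.Println(checkVersion("1.41", "1.24")) // <nil>: accepted
}
```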

api/common_unix.go (new file)

@@ -0,0 +1,6 @@
// +build !windows
package api // import "github.com/docker/docker/api"
// MinVersion represents Minimum REST API version supported
const MinVersion = "1.12"

api/common_windows.go (new file)

@@ -0,0 +1,8 @@
package api // import "github.com/docker/docker/api"
// MinVersion represents Minimum REST API version supported
// Technically the first daemon API version released on Windows is v1.25 in
// engine version 1.13. However, some clients are explicitly using downlevel
// APIs (e.g. docker-compose v2.1 file format) and that is just too restrictive.
// Hence also allowing 1.24 on Windows.
const MinVersion string = "1.24"


@@ -1,13 +1,13 @@
package build
package build // import "github.com/docker/docker/api/server/backend/build"
import (
"context"
"fmt"
"strconv"
"github.com/distribution/reference"
"github.com/docker/distribution/reference"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/backend"
"github.com/docker/docker/api/types/build"
"github.com/docker/docker/api/types/events"
"github.com/docker/docker/builder"
buildkit "github.com/docker/docker/builder/builder-next"
@@ -21,7 +21,7 @@ import (
// ImageComponent provides an interface for working with images
type ImageComponent interface {
SquashImage(from string, to string) (string, error)
TagImage(context.Context, image.ID, reference.Named) error
TagImageWithReference(image.ID, reference.Named) error
}
// Builder defines interface for running a build
@@ -52,62 +52,64 @@ func (b *Backend) RegisterGRPC(s *grpc.Server) {
// Build builds an image from a Source
func (b *Backend) Build(ctx context.Context, config backend.BuildConfig) (string, error) {
options := config.Options
useBuildKit := options.Version == build.BuilderBuildKit
useBuildKit := options.Version == types.BuilderBuildKit
tags, err := sanitizeRepoAndTags(options.Tags)
tagger, err := NewTagger(b.imageComponent, config.ProgressWriter.StdoutFormatter, options.Tags)
if err != nil {
return "", err
}
var buildResult *builder.Result
var build *builder.Result
if useBuildKit {
buildResult, err = b.buildkit.Build(ctx, config)
build, err = b.buildkit.Build(ctx, config)
if err != nil {
return "", err
}
} else {
buildResult, err = b.builder.Build(ctx, config)
build, err = b.builder.Build(ctx, config)
if err != nil {
return "", err
}
}
if buildResult == nil {
if build == nil {
return "", nil
}
imageID := buildResult.ImageID
var imageID = build.ImageID
if options.Squash {
if imageID, err = squashBuild(buildResult, b.imageComponent); err != nil {
if imageID, err = squashBuild(build, b.imageComponent); err != nil {
return "", err
}
if config.ProgressWriter.AuxFormatter != nil {
if err = config.ProgressWriter.AuxFormatter.Emit("moby.image.id", build.Result{ID: imageID}); err != nil {
if err = config.ProgressWriter.AuxFormatter.Emit("moby.image.id", types.BuildResult{ID: imageID}); err != nil {
return "", err
}
}
}
if imageID != "" && !useBuildKit {
if !useBuildKit {
stdout := config.ProgressWriter.StdoutFormatter
_, _ = fmt.Fprintf(stdout, "Successfully built %s\n", stringid.TruncateID(imageID))
err = tagImages(ctx, b.imageComponent, config.ProgressWriter.StdoutFormatter, image.ID(imageID), tags)
fmt.Fprintf(stdout, "Successfully built %s\n", stringid.TruncateID(imageID))
}
if imageID != "" {
err = tagger.TagImages(image.ID(imageID))
}
return imageID, err
}
// PruneCache removes all cached build sources
func (b *Backend) PruneCache(ctx context.Context, opts build.CachePruneOptions) (*build.CachePruneReport, error) {
func (b *Backend) PruneCache(ctx context.Context, opts types.BuildCachePruneOptions) (*types.BuildCachePruneReport, error) {
buildCacheSize, cacheIDs, err := b.buildkit.Prune(ctx, opts)
if err != nil {
return nil, errors.Wrap(err, "failed to prune build cache")
}
b.eventsService.Log(events.ActionPrune, events.BuilderEventType, events.Actor{
b.eventsService.Log("prune", events.BuilderEventType, events.Actor{
Attributes: map[string]string{
"reclaimed": strconv.FormatInt(buildCacheSize, 10),
},
})
return &build.CachePruneReport{SpaceReclaimed: uint64(buildCacheSize), CachesDeleted: cacheIDs}, nil
return &types.BuildCachePruneReport{SpaceReclaimed: uint64(buildCacheSize), CachesDeleted: cacheIDs}, nil
}
// Cancel cancels the build by ID


@@ -1,31 +1,55 @@
package build
package build // import "github.com/docker/docker/api/server/backend/build"
import (
"context"
"fmt"
"io"
"github.com/distribution/reference"
"github.com/docker/distribution/reference"
"github.com/docker/docker/image"
"github.com/pkg/errors"
)
// tagImages creates image tags for the imageID.
func tagImages(ctx context.Context, ic ImageComponent, stdout io.Writer, imageID image.ID, repoAndTags []reference.Named) error {
for _, rt := range repoAndTags {
if err := ic.TagImage(ctx, imageID, rt); err != nil {
// Tagger is responsible for tagging an image created by a builder
type Tagger struct {
imageComponent ImageComponent
stdout io.Writer
repoAndTags []reference.Named
}
// NewTagger returns a new Tagger for tagging the images of a build.
// If any of the names are invalid tags an error is returned.
func NewTagger(backend ImageComponent, stdout io.Writer, names []string) (*Tagger, error) {
reposAndTags, err := sanitizeRepoAndTags(names)
if err != nil {
return nil, err
}
return &Tagger{
imageComponent: backend,
stdout: stdout,
repoAndTags: reposAndTags,
}, nil
}
// TagImages creates image tags for the imageID
func (bt *Tagger) TagImages(imageID image.ID) error {
for _, rt := range bt.repoAndTags {
if err := bt.imageComponent.TagImageWithReference(imageID, rt); err != nil {
return err
}
_, _ = fmt.Fprintln(stdout, "Successfully tagged", reference.FamiliarString(rt))
fmt.Fprintf(bt.stdout, "Successfully tagged %s\n", reference.FamiliarString(rt))
}
return nil
}
// sanitizeRepoAndTags parses the raw "t" parameter received from the client
// to a slice of repoAndTag. It removes duplicates, and validates each name
// to not contain a digest.
func sanitizeRepoAndTags(names []string) (repoAndTags []reference.Named, _ error) {
uniqNames := map[string]struct{}{}
// to a slice of repoAndTag.
// It also validates each repoName and tag.
func sanitizeRepoAndTags(names []string) ([]reference.Named, error) {
var (
repoAndTags []reference.Named
// This map is used for deduplicating the "-t" parameter.
uniqNames = make(map[string]struct{})
)
for _, repo := range names {
if repo == "" {
continue
@@ -36,12 +60,14 @@ func sanitizeRepoAndTags(names []string) (repoAndTags []reference.Named, _ error
return nil, err
}
if _, ok := ref.(reference.Digested); ok {
if _, isCanonical := ref.(reference.Canonical); isCanonical {
return nil, errors.New("build tag cannot contain a digest")
}
ref = reference.TagNameOnly(ref)
nameWithTag := ref.String()
if _, exists := uniqNames[nameWithTag]; !exists {
uniqNames[nameWithTag] = struct{}{}
repoAndTags = append(repoAndTags, ref)
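To make the sanitizing described in the comments above concrete, here is a standalone sketch under the same rules (normalize each name, reject digested references, apply the default "latest" tag, deduplicate); it uses `github.com/distribution/reference` and is an illustration, not the upstream implementation:

```go
package main

import (
	"errors"
	"fmt"

	"github.com/distribution/reference"
)

// sanitize mirrors the documented behaviour of sanitizeRepoAndTags.
func sanitize(names []string) ([]reference.Named, error) {
	seen := map[string]struct{}{}
	var out []reference.Named
	for _, name := range names {
		if name == "" {
			continue
		}
		ref, err := reference.ParseNormalizedNamed(name)
		if err != nil {
			return nil, err
		}
		if _, ok := ref.(reference.Digested); ok {
			return nil, errors.New("build tag cannot contain a digest")
		}
		// Apply the default "latest" tag if none was given.
		ref = reference.TagNameOnly(ref)
		if _, dup := seen[ref.String()]; dup {
			continue // deduplicate repeated "-t" values
		}
		seen[ref.String()] = struct{}{}
		out = append(out, ref)
	}
	return out, nil
}

func main() {
	refs, _ := sanitize([]string{"myimage", "myimage:latest", "myrepo/myimage:v1"})
	for _, r := range refs {
		fmt.Println(r.String()) // docker.io/library/myimage:latest appears only once
	}
}
```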


@@ -1,124 +0,0 @@
package httpstatus
import (
"context"
"fmt"
"net/http"
cerrdefs "github.com/containerd/errdefs"
"github.com/containerd/log"
"github.com/docker/distribution/registry/api/errcode"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"
)
// FromError retrieves status code from error message.
func FromError(err error) int {
if err == nil {
log.G(context.TODO()).WithError(err).Error("unexpected HTTP error handling")
return http.StatusInternalServerError
}
// Resolve the error to ensure status is chosen from the first outermost error
rerr := cerrdefs.Resolve(err)
// Note that the below functions are already checking the error causal chain for matches.
// Only check errors from the errdefs package, no new error type checking may be added
switch {
case cerrdefs.IsNotFound(rerr):
return http.StatusNotFound
case cerrdefs.IsInvalidArgument(rerr):
return http.StatusBadRequest
case cerrdefs.IsConflict(rerr):
return http.StatusConflict
case cerrdefs.IsUnauthorized(rerr):
return http.StatusUnauthorized
case cerrdefs.IsUnavailable(rerr):
return http.StatusServiceUnavailable
case cerrdefs.IsPermissionDenied(rerr):
return http.StatusForbidden
case cerrdefs.IsNotModified(rerr):
return http.StatusNotModified
case cerrdefs.IsNotImplemented(rerr):
return http.StatusNotImplemented
case cerrdefs.IsInternal(rerr) || cerrdefs.IsDataLoss(rerr) || cerrdefs.IsDeadlineExceeded(rerr) || cerrdefs.IsCanceled(rerr):
return http.StatusInternalServerError
default:
if statusCode := statusCodeFromGRPCError(err); statusCode != http.StatusInternalServerError {
return statusCode
}
if statusCode := statusCodeFromDistributionError(err); statusCode != http.StatusInternalServerError {
return statusCode
}
switch e := err.(type) {
case interface{ Unwrap() error }:
return FromError(e.Unwrap())
case interface{ Unwrap() []error }:
for _, ue := range e.Unwrap() {
if statusCode := FromError(ue); statusCode != http.StatusInternalServerError {
return statusCode
}
}
}
if !cerrdefs.IsUnknown(err) {
log.G(context.TODO()).WithFields(log.Fields{
"module": "api",
"error": err,
"error_type": fmt.Sprintf("%T", err),
}).Debug("FIXME: Got an API for which error does not match any expected type!!!")
}
return http.StatusInternalServerError
}
}
// statusCodeFromGRPCError returns status code according to gRPC error
func statusCodeFromGRPCError(err error) int {
switch status.Code(err) {
case codes.InvalidArgument: // code 3
return http.StatusBadRequest
case codes.NotFound: // code 5
return http.StatusNotFound
case codes.AlreadyExists: // code 6
return http.StatusConflict
case codes.PermissionDenied: // code 7
return http.StatusForbidden
case codes.FailedPrecondition: // code 9
return http.StatusBadRequest
case codes.Unauthenticated: // code 16
return http.StatusUnauthorized
case codes.OutOfRange: // code 11
return http.StatusBadRequest
case codes.Unimplemented: // code 12
return http.StatusNotImplemented
case codes.Unavailable: // code 14
return http.StatusServiceUnavailable
default:
// codes.Canceled(1)
// codes.Unknown(2)
// codes.DeadlineExceeded(4)
// codes.ResourceExhausted(8)
// codes.Aborted(10)
// codes.Internal(13)
// codes.DataLoss(15)
return http.StatusInternalServerError
}
}
// statusCodeFromDistributionError returns status code according to registry errcode
// code is loosely based on errcode.ServeJSON() in docker/distribution
func statusCodeFromDistributionError(err error) int {
switch errs := err.(type) {
case errcode.Errors:
if len(errs) < 1 {
return http.StatusInternalServerError
}
if _, ok := errs[0].(errcode.ErrorCoder); ok {
return statusCodeFromDistributionError(errs[0])
}
case errcode.ErrorCoder:
return errs.ErrorCode().Descriptor().HTTPStatusCode
}
return http.StatusInternalServerError
}
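The switch over error classes is the heart of this mapping; a trimmed, standalone sketch of the same pattern for a few representative cases, assuming `github.com/containerd/errdefs`:

```go
package main

import (
	"fmt"
	"net/http"

	cerrdefs "github.com/containerd/errdefs"
)

// statusOf resolves an error's causal chain to an HTTP status code,
// covering only a subset of the cases handled by FromError.
func statusOf(err error) int {
	switch {
	case cerrdefs.IsNotFound(err):
		return http.StatusNotFound
	case cerrdefs.IsInvalidArgument(err):
		return http.StatusBadRequest
	case cerrdefs.IsConflict(err):
		return http.StatusConflict
	default:
		return http.StatusInternalServerError
	}
}

func main() {
	err := fmt.Errorf("no such container: %w", cerrdefs.ErrNotFound)
	fmt.Println(statusOf(err)) // 404: IsNotFound walks the wrap chain
}
```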


@@ -1,4 +1,4 @@
package httputils
package httputils // import "github.com/docker/docker/api/server/httputils"
import (
"io"
@@ -12,4 +12,5 @@ import (
// container configuration.
type ContainerDecoder interface {
DecodeConfig(src io.Reader) (*container.Config, *container.HostConfig, *network.NetworkingConfig, error)
DecodeHostConfig(src io.Reader) (*container.HostConfig, error)
}
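A type satisfying this two-method interface needs nothing else; the sketch below is a toy implementation (the combined JSON wire format in `configWrapper` is a simplified assumption, not the daemon's real request shape):

```go
package main

import (
	"encoding/json"
	"io"

	"github.com/docker/docker/api/types/container"
	"github.com/docker/docker/api/types/network"
)

// jsonDecoder is a toy decoder satisfying the ContainerDecoder interface.
type jsonDecoder struct{}

// configWrapper is a simplified combined payload for illustration only.
type configWrapper struct {
	*container.Config
	HostConfig       *container.HostConfig
	NetworkingConfig *network.NetworkingConfig
}

func (jsonDecoder) DecodeConfig(src io.Reader) (*container.Config, *container.HostConfig, *network.NetworkingConfig, error) {
	var w configWrapper
	if err := json.NewDecoder(src).Decode(&w); err != nil {
		return nil, nil, nil, err
	}
	return w.Config, w.HostConfig, w.NetworkingConfig, nil
}

func (jsonDecoder) DecodeHostConfig(src io.Reader) (*container.HostConfig, error) {
	var hc container.HostConfig
	if err := json.NewDecoder(src).Decode(&hc); err != nil {
		return nil, err
	}
	return &hc, nil
}

func main() {
	// Compile-time check that jsonDecoder implements both methods.
	var _ interface {
		DecodeConfig(io.Reader) (*container.Config, *container.HostConfig, *network.NetworkingConfig, error)
		DecodeHostConfig(io.Reader) (*container.HostConfig, error)
	} = jsonDecoder{}
}
```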


@@ -0,0 +1,9 @@
package httputils // import "github.com/docker/docker/api/server/httputils"
import "github.com/docker/docker/errdefs"
// GetHTTPErrorStatusCode retrieves status code from error message.
//
// Deprecated: use errdefs.GetHTTPErrorStatusCode
func GetHTTPErrorStatusCode(err error) int {
return errdefs.GetHTTPErrorStatusCode(err)
}


@@ -1,27 +1,15 @@
package httputils
package httputils // import "github.com/docker/docker/api/server/httputils"
import (
"encoding/json"
"fmt"
"net/http"
"strconv"
"strings"
"github.com/distribution/reference"
"github.com/docker/docker/errdefs"
"github.com/pkg/errors"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
)
// BoolValue transforms a form value in different formats into a boolean type.
func BoolValue(r *http.Request, k string) bool {
switch strings.ToLower(strings.TrimSpace(r.FormValue(k))) {
case "", "0", "no", "false", "none":
return false
default:
return true
}
s := strings.ToLower(strings.TrimSpace(r.FormValue(k)))
return !(s == "" || s == "0" || s == "no" || s == "false" || s == "none")
}
// BoolValueOrDefault returns the default bool passed if the query param is
@@ -33,29 +21,6 @@ func BoolValueOrDefault(r *http.Request, k string, d bool) bool {
return BoolValue(r, k)
}
// Uint32Value parses a form value into an uint32 type. It returns an error
// if the field is not set, empty, incorrectly formatted, or out of range.
func Uint32Value(r *http.Request, field string) (uint32, error) {
// strconv.ParseUint returns an "strconv.ErrSyntax" for negative values,
// not an "out of range". Strip the prefix before parsing, and use it
// later to detect valid, but negative values.
v, isNeg := strings.CutPrefix(r.Form.Get(field), "-")
if v == "" || v[0] == '+' {
// Fast-path for invalid values.
return 0, strconv.ErrSyntax
}
i, err := strconv.ParseUint(v, 10, 32)
if err != nil {
// Unwrap to remove the 'strconv.ParseUint: parsing "some-invalid-value":' prefix.
return 0, errors.Unwrap(err)
}
if isNeg {
return 0, strconv.ErrRange
}
return uint32(i), nil
}
// Int64ValueOrZero parses a form value into an int64 type.
// It returns 0 if the parsing fails.
func Int64ValueOrZero(r *http.Request, k string) int64 {
@@ -76,38 +41,6 @@ func Int64ValueOrDefault(r *http.Request, field string, def int64) (int64, error
return def, nil
}
// RepoTagReference parses form values "repo" and "tag" and returns a valid
// reference with repository and tag.
// If repo is empty, then a nil reference is returned.
// If no tag is given, then the default "latest" tag is set.
func RepoTagReference(repo, tag string) (reference.NamedTagged, error) {
if repo == "" {
return nil, nil
}
ref, err := reference.ParseNormalizedNamed(repo)
if err != nil {
return nil, err
}
if _, isDigested := ref.(reference.Digested); isDigested {
return nil, fmt.Errorf("cannot import digest reference")
}
if tag != "" {
return reference.WithTag(ref, tag)
}
withDefaultTag := reference.TagNameOnly(ref)
namedTagged, ok := withDefaultTag.(reference.NamedTagged)
if !ok {
return nil, fmt.Errorf("unexpected reference: %q", ref.String())
}
return namedTagged, nil
}
// ArchiveOptions stores archive information for different operations.
type ArchiveOptions struct {
Name string
@@ -141,43 +74,3 @@ func ArchiveFormValues(r *http.Request, vars map[string]string) (ArchiveOptions,
}
return ArchiveOptions{name, path}, nil
}
// DecodePlatform decodes the OCI platform JSON string into a Platform struct.
func DecodePlatform(platformJSON string) (*ocispec.Platform, error) {
var p ocispec.Platform
if err := json.Unmarshal([]byte(platformJSON), &p); err != nil {
return nil, errdefs.InvalidParameter(errors.Wrap(err, "failed to parse platform"))
}
hasAnyOptional := (p.Variant != "" || p.OSVersion != "" || len(p.OSFeatures) > 0)
if p.OS == "" && p.Architecture == "" && hasAnyOptional {
return nil, errdefs.InvalidParameter(errors.New("optional platform fields provided, but OS and Architecture are missing"))
}
if p.OS == "" || p.Architecture == "" {
return nil, errdefs.InvalidParameter(errors.New("both OS and Architecture must be provided"))
}
return &p, nil
}
// DecodePlatforms decodes the OCI platform JSON string into a Platform struct.
//
// Typically, the argument is a value of: r.Form["platform"]
func DecodePlatforms(platformJSONs []string) ([]ocispec.Platform, error) {
if len(platformJSONs) == 0 {
return nil, nil
}
var output []ocispec.Platform
for _, platform := range platformJSONs {
p, err := DecodePlatform(platform)
if err != nil {
return nil, err
}
output = append(output, *p)
}
return output, nil
}
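For reference, this is roughly how a caller would exercise `DecodePlatform` on the wire format used in the tests that follow, assuming the helper is available from `github.com/docker/docker/api/server/httputils`:

```go
package main

import (
	"fmt"

	"github.com/docker/docker/api/server/httputils"
)

func main() {
	// A platform query parameter is a JSON-encoded OCI platform.
	p, err := httputils.DecodePlatform(`{"os":"linux","architecture":"arm64","variant":"v8"}`)
	if err != nil {
		panic(err)
	}
	fmt.Println(p.OS, p.Architecture, p.Variant) // linux arm64 v8

	// OS and Architecture must both be set; partial platforms are rejected.
	_, err = httputils.DecodePlatform(`{"os":"linux"}`)
	fmt.Println(err) // both OS and Architecture must be provided
}
```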


@@ -1,16 +1,9 @@
package httputils
package httputils // import "github.com/docker/docker/api/server/httputils"
import (
"math"
"net/http"
"net/url"
"strconv"
"testing"
cerrdefs "github.com/containerd/errdefs"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"gotest.tools/v3/assert"
is "gotest.tools/v3/assert/cmp"
)
func TestBoolValue(t *testing.T) {
@@ -30,7 +23,7 @@ func TestBoolValue(t *testing.T) {
for c, e := range cases {
v := url.Values{}
v.Set("test", c)
r, _ := http.NewRequest(http.MethodPost, "", http.NoBody)
r, _ := http.NewRequest(http.MethodPost, "", nil)
r.Form = v
a := BoolValue(r, "test")
@@ -41,14 +34,14 @@ func TestBoolValue(t *testing.T) {
}
func TestBoolValueOrDefault(t *testing.T) {
r, _ := http.NewRequest(http.MethodGet, "", http.NoBody)
r, _ := http.NewRequest(http.MethodGet, "", nil)
if !BoolValueOrDefault(r, "queryparam", true) {
t.Fatal("Expected to get true default value, got false")
}
v := url.Values{}
v.Set("param", "")
r, _ = http.NewRequest(http.MethodGet, "", http.NoBody)
r, _ = http.NewRequest(http.MethodGet, "", nil)
r.Form = v
if BoolValueOrDefault(r, "param", true) {
t.Fatal("Expected not to get true")
@@ -66,7 +59,7 @@ func TestInt64ValueOrZero(t *testing.T) {
for c, e := range cases {
v := url.Values{}
v.Set("test", c)
r, _ := http.NewRequest(http.MethodPost, "", http.NoBody)
r, _ := http.NewRequest(http.MethodPost, "", nil)
r.Form = v
a := Int64ValueOrZero(r, "test")
@@ -86,7 +79,7 @@ func TestInt64ValueOrDefault(t *testing.T) {
for c, e := range cases {
v := url.Values{}
v.Set("test", c)
r, _ := http.NewRequest(http.MethodPost, "", http.NoBody)
r, _ := http.NewRequest(http.MethodPost, "", nil)
r.Form = v
a, err := Int64ValueOrDefault(r, "test", -1)
@@ -102,7 +95,7 @@ func TestInt64ValueOrDefault(t *testing.T) {
func TestInt64ValueOrDefaultWithError(t *testing.T) {
v := url.Values{}
v.Set("test", "invalid")
r, _ := http.NewRequest(http.MethodPost, "", http.NoBody)
r, _ := http.NewRequest(http.MethodPost, "", nil)
r.Form = v
_, err := Int64ValueOrDefault(r, "test", -1)
@@ -110,126 +103,3 @@ func TestInt64ValueOrDefaultWithError(t *testing.T) {
t.Fatal("Expected an error.")
}
}
func TestUint32Value(t *testing.T) {
const valueNotSet = "unset"
tests := []struct {
value string
expected uint32
expectedErr error
}{
{
value: "0",
expected: 0,
},
{
value: strconv.FormatUint(math.MaxUint32, 10),
expected: math.MaxUint32,
},
{
value: valueNotSet,
expectedErr: strconv.ErrSyntax,
},
{
value: "",
expectedErr: strconv.ErrSyntax,
},
{
value: "-1",
expectedErr: strconv.ErrRange,
},
{
value: "4294967296", // MaxUint32+1
expectedErr: strconv.ErrRange,
},
{
value: "not-a-number",
expectedErr: strconv.ErrSyntax,
},
}
for _, tc := range tests {
t.Run(tc.value, func(t *testing.T) {
r, _ := http.NewRequest(http.MethodPost, "", http.NoBody)
r.Form = url.Values{}
if tc.value != valueNotSet {
r.Form.Set("field", tc.value)
}
out, err := Uint32Value(r, "field")
assert.Check(t, is.Equal(tc.expected, out))
assert.Check(t, is.ErrorIs(err, tc.expectedErr))
})
}
}
func TestDecodePlatform(t *testing.T) {
tests := []struct {
doc string
platformJSON string
expected *ocispec.Platform
expectedErr string
}{
{
doc: "empty platform",
expectedErr: `failed to parse platform: unexpected end of JSON input`,
},
{
doc: "not JSON",
platformJSON: `linux/ams64`,
expectedErr: `failed to parse platform: invalid character 'l' looking for beginning of value`,
},
{
doc: "malformed JSON",
platformJSON: `{"architecture"`,
expectedErr: `failed to parse platform: unexpected end of JSON input`,
},
{
doc: "missing os",
platformJSON: `{"architecture":"amd64","os":""}`,
expectedErr: `both OS and Architecture must be provided`,
},
{
doc: "variant without architecture",
platformJSON: `{"architecture":"","os":"","variant":"v7"}`,
expectedErr: `optional platform fields provided, but OS and Architecture are missing`,
},
{
doc: "missing architecture",
platformJSON: `{"architecture":"","os":"linux"}`,
expectedErr: `both OS and Architecture must be provided`,
},
{
doc: "os.version without os and architecture",
platformJSON: `{"architecture":"","os":"","os.version":"12.0"}`,
expectedErr: `optional platform fields provided, but OS and Architecture are missing`,
},
{
doc: "os.features without os and architecture",
platformJSON: `{"architecture":"","os":"","os.features":["a","b"]}`,
expectedErr: `optional platform fields provided, but OS and Architecture are missing`,
},
{
doc: "valid platform",
platformJSON: `{"architecture":"arm64","os":"linux","os.version":"12.0", "os.features":["a","b"], "variant": "v7"}`,
expected: &ocispec.Platform{
Architecture: "arm64",
OS: "linux",
OSVersion: "12.0",
OSFeatures: []string{"a", "b"},
Variant: "v7",
},
},
}
for _, tc := range tests {
t.Run(tc.doc, func(t *testing.T) {
p, err := DecodePlatform(tc.platformJSON)
assert.Check(t, is.DeepEqual(p, tc.expected))
if tc.expectedErr != "" {
assert.Check(t, cerrdefs.IsInvalidArgument(err))
assert.Check(t, is.Error(err, tc.expectedErr))
} else {
assert.Check(t, err)
}
})
}
}


@@ -1,15 +1,19 @@
package httputils
package httputils // import "github.com/docker/docker/api/server/httputils"
import (
"context"
"encoding/json"
"io"
"mime"
"net/http"
"strings"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/versions"
"github.com/docker/docker/errdefs"
"github.com/gorilla/mux"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"google.golang.org/grpc/status"
)
// APIVersionKey is the client's requested API version.
@@ -49,50 +53,17 @@ func CheckForJSON(r *http.Request) error {
ct := r.Header.Get("Content-Type")
// No Content-Type header is ok as long as there's no Body
if ct == "" && (r.Body == nil || r.ContentLength == 0) {
return nil
if ct == "" {
if r.Body == nil || r.ContentLength == 0 {
return nil
}
}
// Otherwise it better be json
return matchesContentType(ct, "application/json")
}
// ReadJSON validates the request to have the correct content-type, and decodes
// the request's Body into out.
func ReadJSON(r *http.Request, out interface{}) error {
err := CheckForJSON(r)
if err != nil {
return err
}
if r.Body == nil || r.ContentLength == 0 {
// an empty body is not invalid, so don't return an error; see
// https://lists.w3.org/Archives/Public/ietf-http-wg/2010JulSep/0272.html
if matchesContentType(ct, "application/json") {
return nil
}
dec := json.NewDecoder(r.Body)
err = dec.Decode(out)
defer r.Body.Close()
if err != nil {
if errors.Is(err, io.EOF) {
return errdefs.InvalidParameter(errors.New("invalid JSON: got EOF while reading request body"))
}
return errdefs.InvalidParameter(errors.Wrap(err, "invalid JSON"))
}
if dec.More() {
return errdefs.InvalidParameter(errors.New("unexpected content after JSON"))
}
return nil
}
// WriteJSON writes the value v to the http response stream as json with standard json encoding.
func WriteJSON(w http.ResponseWriter, code int, v interface{}) error {
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(code)
enc := json.NewEncoder(w)
enc.SetEscapeHTML(false)
return enc.Encode(v)
return errdefs.InvalidParameter(errors.Errorf("Content-Type specified (%s) must be 'application/json'", ct))
}
// ParseForm ensures the request form is parsed even with invalid content types.
@@ -121,14 +92,33 @@ func VersionFromContext(ctx context.Context) string {
return ""
}
// MakeErrorHandler makes an HTTP handler that decodes a Docker error and
// returns it in the response.
func MakeErrorHandler(err error) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
statusCode := errdefs.GetHTTPErrorStatusCode(err)
vars := mux.Vars(r)
if apiVersionSupportsJSONErrors(vars["version"]) {
response := &types.ErrorResponse{
Message: err.Error(),
}
_ = WriteJSON(w, statusCode, response)
} else {
http.Error(w, status.Convert(err).Message(), statusCode)
}
}
}
func apiVersionSupportsJSONErrors(version string) bool {
const firstAPIVersionWithJSONErrors = "1.23"
return version == "" || versions.GreaterThan(version, firstAPIVersionWithJSONErrors)
}
// matchesContentType validates the content type against the expected one
func matchesContentType(contentType, expectedType string) error {
func matchesContentType(contentType, expectedType string) bool {
mimetype, _, err := mime.ParseMediaType(contentType)
if err != nil {
return errdefs.InvalidParameter(errors.Wrapf(err, "malformed Content-Type header (%s)", contentType))
logrus.Errorf("Error parsing media type: %s error: %v", contentType, err)
}
if mimetype != expectedType {
return errdefs.InvalidParameter(errors.Errorf("unsupported Content-Type header (%s): must be '%s'", contentType, expectedType))
}
return nil
return err == nil && mimetype == expectedType
}
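Putting these helpers together, a request handler would look something like the following sketch (`renameRequest` and the route are made-up illustrations; `ReadJSON` and `WriteJSON` are the helpers shown in this diff):

```go
package main

import (
	"net/http"

	"github.com/docker/docker/api/server/httputils"
)

type renameRequest struct {
	Name string
}

func renameHandler(w http.ResponseWriter, r *http.Request) {
	var req renameRequest
	// ReadJSON enforces the application/json content type, tolerates an
	// empty body, and rejects trailing garbage after the JSON document.
	if err := httputils.ReadJSON(r, &req); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	_ = httputils.WriteJSON(w, http.StatusOK, map[string]string{"renamed": req.Name})
}

func main() {
	http.HandleFunc("/rename", renameHandler)
	_ = http.ListenAndServe("127.0.0.1:8080", nil)
}
```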


@@ -1,130 +1,18 @@
package httputils
package httputils // import "github.com/docker/docker/api/server/httputils"
import (
"net/http"
"strings"
"testing"
)
import "testing"
// matchesContentType
func TestJsonContentType(t *testing.T) {
err := matchesContentType("application/json", "application/json")
if err != nil {
t.Error(err)
if !matchesContentType("application/json", "application/json") {
t.Fail()
}
err = matchesContentType("application/json; charset=utf-8", "application/json")
if err != nil {
t.Error(err)
if !matchesContentType("application/json; charset=utf-8", "application/json") {
t.Fail()
}
expected := "unsupported Content-Type header (dockerapplication/json): must be 'application/json'"
err = matchesContentType("dockerapplication/json", "application/json")
if err == nil || err.Error() != expected {
t.Errorf(`expected "%s", got "%v"`, expected, err)
}
expected = "malformed Content-Type header (foo;;;bar): mime: invalid media parameter"
err = matchesContentType("foo;;;bar", "application/json")
if err == nil || err.Error() != expected {
t.Errorf(`expected "%s", got "%v"`, expected, err)
if matchesContentType("dockerapplication/json", "application/json") {
t.Fail()
}
}
func TestReadJSON(t *testing.T) {
t.Run("nil body", func(t *testing.T) {
req, err := http.NewRequest(http.MethodPost, "https://example.com/some/path", http.NoBody)
if err != nil {
t.Error(err)
}
foo := struct{}{}
err = ReadJSON(req, &foo)
if err != nil {
t.Error(err)
}
})
t.Run("empty body", func(t *testing.T) {
req, err := http.NewRequest(http.MethodPost, "https://example.com/some/path", strings.NewReader(""))
if err != nil {
t.Error(err)
}
foo := struct{ SomeField string }{}
err = ReadJSON(req, &foo)
if err != nil {
t.Error(err)
}
if foo.SomeField != "" {
t.Errorf("expected: '', got: %s", foo.SomeField)
}
})
t.Run("with valid request", func(t *testing.T) {
req, err := http.NewRequest(http.MethodPost, "https://example.com/some/path", strings.NewReader(`{"SomeField":"some value"}`))
if err != nil {
t.Error(err)
}
req.Header.Set("Content-Type", "application/json")
foo := struct{ SomeField string }{}
err = ReadJSON(req, &foo)
if err != nil {
t.Error(err)
}
if foo.SomeField != "some value" {
t.Errorf("expected: 'some value', got: %s", foo.SomeField)
}
})
t.Run("with whitespace", func(t *testing.T) {
req, err := http.NewRequest(http.MethodPost, "https://example.com/some/path", strings.NewReader(`
{"SomeField":"some value"}
`))
if err != nil {
t.Error(err)
}
req.Header.Set("Content-Type", "application/json")
foo := struct{ SomeField string }{}
err = ReadJSON(req, &foo)
if err != nil {
t.Error(err)
}
if foo.SomeField != "some value" {
t.Errorf("expected: 'some value', got: %s", foo.SomeField)
}
})
t.Run("with extra content", func(t *testing.T) {
req, err := http.NewRequest(http.MethodPost, "https://example.com/some/path", strings.NewReader(`{"SomeField":"some value"} and more content`))
if err != nil {
t.Error(err)
}
req.Header.Set("Content-Type", "application/json")
foo := struct{ SomeField string }{}
err = ReadJSON(req, &foo)
if err == nil {
t.Error("expected an error, got none")
}
expected := "unexpected content after JSON"
if err.Error() != expected {
t.Errorf("expected: '%s', got: %s", expected, err.Error())
}
})
t.Run("invalid JSON", func(t *testing.T) {
req, err := http.NewRequest(http.MethodPost, "https://example.com/some/path", strings.NewReader(`{invalid json`))
if err != nil {
t.Error(err)
}
req.Header.Set("Content-Type", "application/json")
foo := struct{ SomeField string }{}
err = ReadJSON(req, &foo)
if err == nil {
t.Error("expected an error, got none")
}
expected := "invalid JSON: invalid character 'i' looking for beginning of object key string"
if err.Error() != expected {
t.Errorf("expected: '%s', got: %s", expected, err.Error())
}
})
}

View File

@@ -0,0 +1,15 @@
package httputils // import "github.com/docker/docker/api/server/httputils"
import (
"encoding/json"
"net/http"
)
// WriteJSON writes the value v to the http response stream as json with standard json encoding.
func WriteJSON(w http.ResponseWriter, code int, v interface{}) error {
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(code)
enc := json.NewEncoder(w)
enc.SetEscapeHTML(false)
return enc.Encode(v)
}

View File

@@ -1,30 +1,22 @@
package httputils
package httputils // import "github.com/docker/docker/api/server/httputils"
import (
"context"
"fmt"
"io"
"net/http"
"net/url"
"sort"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/backend"
"github.com/docker/docker/api/types/container"
"github.com/docker/docker/pkg/ioutils"
"github.com/docker/docker/pkg/jsonmessage"
"github.com/docker/docker/pkg/stdcopy"
)
// rfc3339NanoFixed is time.RFC3339Nano with nanoseconds padded using zeros to
// ensure the formatted time is always the same number of characters.
const rfc3339NanoFixed = "2006-01-02T15:04:05.000000000Z07:00"
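time.RFC3339Nano uses "9" placeholders, which drop trailing zeros and produce timestamps of varying width; the zero-padded layout keeps every timestamp the same length, which matters when log lines are aligned or parsed by column. A standalone illustration:

package main

import (
	"fmt"
	"time"
)

const rfc3339NanoFixed = "2006-01-02T15:04:05.000000000Z07:00"

func main() {
	t := time.Date(2021, 2, 24, 12, 0, 0, 500000000, time.UTC)
	fmt.Println(t.Format(time.RFC3339Nano)) // 2021-02-24T12:00:00.5Z (width varies)
	fmt.Println(t.Format(rfc3339NanoFixed)) // 2021-02-24T12:00:00.500000000Z (fixed width)
}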
// WriteLogStream writes an encoded byte stream of log messages from the
// messages channel, multiplexing them with a stdcopy.Writer if mux is true
func WriteLogStream(_ context.Context, w http.ResponseWriter, msgs <-chan *backend.LogMessage, config *container.LogsOptions, mux bool) {
// See https://github.com/moby/moby/issues/47448
// Trigger headers to be written immediately.
w.WriteHeader(http.StatusOK)
func WriteLogStream(_ context.Context, w io.Writer, msgs <-chan *backend.LogMessage, config *types.ContainerLogsOptions, mux bool) {
wf := ioutils.NewWriteFlusher(w)
defer wf.Close()
@@ -56,7 +48,7 @@ func WriteLogStream(_ context.Context, w http.ResponseWriter, msgs <-chan *backe
logLine = append(logLine, msg.Line...)
}
if config.Timestamps {
logLine = append([]byte(msg.Timestamp.Format(rfc3339NanoFixed)+" "), logLine...)
logLine = append([]byte(msg.Timestamp.Format(jsonmessage.RFC3339NanoFixed)+" "), logLine...)
}
if msg.Source == "stdout" && config.ShowStdout {
_, _ = outStream.Write(logLine)
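When mux is true, stdout and stderr share one stream using stdcopy's frame format, and a client splits them back apart with stdcopy.StdCopy. A hedged sketch of the reading side; demux is an illustrative name, and src would typically be the body of a /containers/{id}/logs response for a non-TTY container:

package main

import (
	"io"
	"os"

	"github.com/docker/docker/pkg/stdcopy"
)

// demux routes stdout frames to os.Stdout and stderr frames to os.Stderr.
func demux(src io.Reader) error {
	_, err := stdcopy.StdCopy(os.Stdout, os.Stderr, src)
	return err
}

func main() {
	if err := demux(os.Stdin); err != nil {
		panic(err)
	}
}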

View File

@@ -1,9 +1,9 @@
package server
package server // import "github.com/docker/docker/api/server"
import (
"github.com/containerd/log"
"github.com/docker/docker/api/server/httputils"
"github.com/docker/docker/api/server/middleware"
"github.com/sirupsen/logrus"
)
// handlerWithGlobalMiddlewares wraps the handler function for a request with
@@ -16,7 +16,7 @@ func (s *Server) handlerWithGlobalMiddlewares(handler httputils.APIFunc) httputi
next = m.WrapHandler(next)
}
if log.GetLevel() == log.DebugLevel {
if s.cfg.Logging && logrus.GetLevel() == logrus.DebugLevel {
next = middleware.DebugRequestMiddleware(next)
}

View File

@@ -0,0 +1,37 @@
package middleware // import "github.com/docker/docker/api/server/middleware"
import (
"context"
"net/http"
"github.com/sirupsen/logrus"
)
// CORSMiddleware injects CORS headers into each request
// when it's configured.
type CORSMiddleware struct {
defaultHeaders string
}
// NewCORSMiddleware creates a new CORSMiddleware with default headers.
func NewCORSMiddleware(d string) CORSMiddleware {
return CORSMiddleware{defaultHeaders: d}
}
// WrapHandler returns a new handler function wrapping the previous one in the request chain.
func (c CORSMiddleware) WrapHandler(handler func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error) func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
return func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
// If "api-cors-header" is not given, but "api-enable-cors" is true, we set cors to "*"
// otherwise, all header values will be passed to the HTTP handler
corsHeaders := c.defaultHeaders
if corsHeaders == "" {
corsHeaders = "*"
}
logrus.Debugf("CORS header is enabled and set to: %s", corsHeaders)
w.Header().Add("Access-Control-Allow-Origin", corsHeaders)
w.Header().Add("Access-Control-Allow-Headers", "Origin, X-Requested-With, Content-Type, Accept, X-Registry-Auth")
w.Header().Add("Access-Control-Allow-Methods", "HEAD, GET, POST, DELETE, PUT, OPTIONS")
return handler(ctx, w, r, vars)
}
}
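A minimal sketch of how this middleware slots into the handler chain; the handler and wiring here are illustrative, not how the daemon actually assembles its server:

package main

import (
	"context"
	"net/http"

	"github.com/docker/docker/api/server/middleware"
)

func main() {
	cors := middleware.NewCORSMiddleware("") // "" falls back to "*" in WrapHandler above
	handler := func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
		w.WriteHeader(http.StatusOK)
		return nil
	}
	wrapped := cors.WrapHandler(handler) // responses now carry the Access-Control-Allow-* headers
	_ = wrapped
}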

View File

@@ -1,4 +1,4 @@
package middleware
package middleware // import "github.com/docker/docker/api/server/middleware"
import (
"bufio"
@@ -8,8 +8,6 @@ import (
"net/http"
"strings"
"github.com/containerd/log"
"github.com/docker/docker/api/server/httpstatus"
"github.com/docker/docker/api/server/httputils"
"github.com/docker/docker/pkg/ioutils"
"github.com/sirupsen/logrus"
@@ -18,37 +16,17 @@ import (
// DebugRequestMiddleware dumps the request to logger
func DebugRequestMiddleware(handler func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error) func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
return func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
logger := log.G(ctx)
// Use a variable for fields to prevent overhead of repeatedly
// calling WithFields.
fields := log.Fields{
"module": "api",
"method": r.Method,
"request-url": r.RequestURI,
"vars": vars,
}
handleWithLogs := func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
logger.WithFields(fields).Debugf("handling %s request", r.Method)
err := handler(ctx, w, r, vars)
if err != nil {
// TODO(thaJeztah): unify this with Server.makeHTTPHandler, which also logs internal server errors as error-log. See https://github.com/moby/moby/pull/48740#discussion_r1816675574
fields["error-response"] = err
fields["status"] = httpstatus.FromError(err)
logger.WithFields(fields).Debugf("error response for %s request", r.Method)
}
return err
}
logrus.Debugf("Calling %s %s", r.Method, r.RequestURI)
if r.Method != http.MethodPost {
return handleWithLogs(ctx, w, r, vars)
return handler(ctx, w, r, vars)
}
if err := httputils.CheckForJSON(r); err != nil {
return handleWithLogs(ctx, w, r, vars)
return handler(ctx, w, r, vars)
}
maxBodySize := 4096 // 4KB
if r.ContentLength > int64(maxBodySize) {
return handleWithLogs(ctx, w, r, vars)
return handler(ctx, w, r, vars)
}
body := r.Body
@@ -58,25 +36,21 @@ func DebugRequestMiddleware(handler func(ctx context.Context, w http.ResponseWri
b, err := bufReader.Peek(maxBodySize)
if err != io.EOF {
// either there was an error reading, or the buffer is full (in which case the request is too large)
return handleWithLogs(ctx, w, r, vars)
return handler(ctx, w, r, vars)
}
var postForm map[string]interface{}
if err := json.Unmarshal(b, &postForm); err == nil {
maskSecretKeys(postForm)
// TODO(thaJeztah): is there a better way to detect if we're using JSON-formatted logs?
if _, ok := logger.Logger.Formatter.(*logrus.JSONFormatter); ok {
fields["form-data"] = postForm
formStr, errMarshal := json.Marshal(postForm)
if errMarshal == nil {
logrus.Debugf("form data: %s", string(formStr))
} else {
if data, err := json.Marshal(postForm); err != nil {
fields["form-data"] = postForm
} else {
fields["form-data"] = string(data)
}
logrus.Debugf("form data: %q", postForm)
}
}
return handleWithLogs(ctx, w, r, vars)
return handler(ctx, w, r, vars)
}
}
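maskSecretKeys, called above but defined outside this hunk, scrubs credentials from the decoded form before it is logged. A minimal sketch of the idea; maskSecrets and its key list are illustrative, not the real implementation:

package main

import (
	"fmt"
	"strings"
)

// maskSecrets walks a decoded JSON object and blanks values under
// sensitive-looking keys, recursing into nested objects.
func maskSecrets(m map[string]interface{}) {
	for k, v := range m {
		switch strings.ToLower(k) {
		case "password", "secret", "authconfig": // illustrative list
			m[k] = "*****"
		default:
			if nested, ok := v.(map[string]interface{}); ok {
				maskSecrets(nested)
			}
		}
	}
}

func main() {
	form := map[string]interface{}{
		"Image":      "alpine",
		"HostConfig": map[string]interface{}{"Password": "hunter2"},
	}
	maskSecrets(form)
	fmt.Println(form) // map[HostConfig:map[Password:*****] Image:alpine]
}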

View File

@@ -1,4 +1,4 @@
package middleware
package middleware // import "github.com/docker/docker/api/server/middleware"
import (
"testing"

View File

@@ -1,4 +1,4 @@
package middleware
package middleware // import "github.com/docker/docker/api/server/middleware"
import (
"context"

View File

@@ -1,4 +1,4 @@
package middleware
package middleware // import "github.com/docker/docker/api/server/middleware"
import (
"context"

View File

@@ -1,4 +1,4 @@
package middleware
package middleware // import "github.com/docker/docker/api/server/middleware"
import (
"context"
@@ -6,7 +6,6 @@ import (
"net/http"
"runtime"
"github.com/docker/docker/api"
"github.com/docker/docker/api/server/httputils"
"github.com/docker/docker/api/types/versions"
)
@@ -14,40 +13,19 @@ import (
// VersionMiddleware is a middleware that
// validates the client and server versions.
type VersionMiddleware struct {
serverVersion string
// defaultAPIVersion is the default API version provided by the API server,
// specified as "major.minor". It is usually configured to the latest API
// version [github.com/docker/docker/api.DefaultVersion].
//
// API requests for API versions greater than this version are rejected by
// the server and produce a [versionUnsupportedError].
defaultAPIVersion string
// minAPIVersion is the minimum API version provided by the API server,
// specified as "major.minor".
//
// API requests for API versions lower than this version are rejected by
// the server and produce a [versionUnsupportedError].
minAPIVersion string
serverVersion string
defaultVersion string
minVersion string
}
// NewVersionMiddleware creates a VersionMiddleware with the given versions.
func NewVersionMiddleware(serverVersion, defaultAPIVersion, minAPIVersion string) (*VersionMiddleware, error) {
if versions.LessThan(defaultAPIVersion, api.MinSupportedAPIVersion) || versions.GreaterThan(defaultAPIVersion, api.DefaultVersion) {
return nil, fmt.Errorf("invalid default API version (%s): must be between %s and %s", defaultAPIVersion, api.MinSupportedAPIVersion, api.DefaultVersion)
// NewVersionMiddleware creates a new VersionMiddleware
// with the default versions.
func NewVersionMiddleware(s, d, m string) VersionMiddleware {
return VersionMiddleware{
serverVersion: s,
defaultVersion: d,
minVersion: m,
}
if versions.LessThan(minAPIVersion, api.MinSupportedAPIVersion) || versions.GreaterThan(minAPIVersion, api.DefaultVersion) {
return nil, fmt.Errorf("invalid minimum API version (%s): must be between %s and %s", minAPIVersion, api.MinSupportedAPIVersion, api.DefaultVersion)
}
if versions.GreaterThan(minAPIVersion, defaultAPIVersion) {
return nil, fmt.Errorf("invalid API version: the minimum API version (%s) is higher than the default version (%s)", minAPIVersion, defaultAPIVersion)
}
return &VersionMiddleware{
serverVersion: serverVersion,
defaultAPIVersion: defaultAPIVersion,
minAPIVersion: minAPIVersion,
}, nil
}
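Wiring for the newer constructor above, which validates its bounds and returns an error; the server version string here is illustrative:

package main

import (
	"log"

	"github.com/docker/docker/api"
	"github.com/docker/docker/api/server/middleware"
)

func main() {
	vm, err := middleware.NewVersionMiddleware("24.0.0", api.DefaultVersion, api.MinSupportedAPIVersion)
	if err != nil {
		log.Fatal(err) // out-of-range or misordered versions are rejected up front
	}
	_ = vm // would be appended to the middleware chain
}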
type versionUnsupportedError struct {
@@ -67,20 +45,21 @@ func (e versionUnsupportedError) InvalidParameter() {}
func (v VersionMiddleware) WrapHandler(handler func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error) func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
return func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
w.Header().Set("Server", fmt.Sprintf("Docker/%s (%s)", v.serverVersion, runtime.GOOS))
w.Header().Set("Api-Version", v.defaultAPIVersion)
w.Header().Set("Ostype", runtime.GOOS)
w.Header().Set("API-Version", v.defaultVersion)
w.Header().Set("OSType", runtime.GOOS)
apiVersion := vars["version"]
if apiVersion == "" {
apiVersion = v.defaultAPIVersion
apiVersion = v.defaultVersion
}
if versions.LessThan(apiVersion, v.minAPIVersion) {
return versionUnsupportedError{version: apiVersion, minVersion: v.minAPIVersion}
if versions.LessThan(apiVersion, v.minVersion) {
return versionUnsupportedError{version: apiVersion, minVersion: v.minVersion}
}
if versions.GreaterThan(apiVersion, v.defaultAPIVersion) {
return versionUnsupportedError{version: apiVersion, maxVersion: v.defaultAPIVersion}
if versions.GreaterThan(apiVersion, v.defaultVersion) {
return versionUnsupportedError{version: apiVersion, maxVersion: v.defaultVersion}
}
ctx = context.WithValue(ctx, httputils.APIVersionKey{}, apiVersion)
return handler(ctx, w, r, vars)
}
}

View File

@@ -1,85 +1,31 @@
package middleware
package middleware // import "github.com/docker/docker/api/server/middleware"
import (
"context"
"fmt"
"net/http"
"net/http/httptest"
"runtime"
"testing"
"github.com/docker/docker/api"
"github.com/docker/docker/api/server/httputils"
"gotest.tools/v3/assert"
is "gotest.tools/v3/assert/cmp"
)
func TestNewVersionMiddlewareValidation(t *testing.T) {
tests := []struct {
doc, defaultVersion, minVersion, expectedErr string
}{
{
doc: "defaults",
defaultVersion: api.DefaultVersion,
minVersion: api.MinSupportedAPIVersion,
},
{
doc: "invalid default lower than min",
defaultVersion: api.MinSupportedAPIVersion,
minVersion: api.DefaultVersion,
expectedErr: fmt.Sprintf("invalid API version: the minimum API version (%s) is higher than the default version (%s)", api.DefaultVersion, api.MinSupportedAPIVersion),
},
{
doc: "invalid default too low",
defaultVersion: "0.1",
minVersion: api.MinSupportedAPIVersion,
expectedErr: fmt.Sprintf("invalid default API version (0.1): must be between %s and %s", api.MinSupportedAPIVersion, api.DefaultVersion),
},
{
doc: "invalid default too high",
defaultVersion: "9999.9999",
minVersion: api.DefaultVersion,
expectedErr: fmt.Sprintf("invalid default API version (9999.9999): must be between %s and %s", api.MinSupportedAPIVersion, api.DefaultVersion),
},
{
doc: "invalid minimum too low",
defaultVersion: api.MinSupportedAPIVersion,
minVersion: "0.1",
expectedErr: fmt.Sprintf("invalid minimum API version (0.1): must be between %s and %s", api.MinSupportedAPIVersion, api.DefaultVersion),
},
{
doc: "invalid minimum too high",
defaultVersion: api.DefaultVersion,
minVersion: "9999.9999",
expectedErr: fmt.Sprintf("invalid minimum API version (9999.9999): must be between %s and %s", api.MinSupportedAPIVersion, api.DefaultVersion),
},
}
for _, tc := range tests {
t.Run(tc.doc, func(t *testing.T) {
_, err := NewVersionMiddleware("1.2.3", tc.defaultVersion, tc.minVersion)
if tc.expectedErr == "" {
assert.Check(t, err)
} else {
assert.Check(t, is.Error(err, tc.expectedErr))
}
})
}
}
func TestVersionMiddlewareVersion(t *testing.T) {
expectedVersion := "<not set>"
defaultVersion := "1.10.0"
minVersion := "1.2.0"
expectedVersion := defaultVersion
handler := func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
v := httputils.VersionFromContext(ctx)
assert.Check(t, is.Equal(expectedVersion, v))
return nil
}
m, err := NewVersionMiddleware("1.2.3", api.DefaultVersion, api.MinSupportedAPIVersion)
assert.NilError(t, err)
m := NewVersionMiddleware(defaultVersion, defaultVersion, minVersion)
h := m.WrapHandler(handler)
req, _ := http.NewRequest(http.MethodGet, "/containers/json", http.NoBody)
req, _ := http.NewRequest(http.MethodGet, "/containers/json", nil)
resp := httptest.NewRecorder()
ctx := context.Background()
@@ -89,19 +35,19 @@ func TestVersionMiddlewareVersion(t *testing.T) {
errString string
}{
{
expectedVersion: api.DefaultVersion,
expectedVersion: "1.10.0",
},
{
reqVersion: api.MinSupportedAPIVersion,
expectedVersion: api.MinSupportedAPIVersion,
reqVersion: "1.9.0",
expectedVersion: "1.9.0",
},
{
reqVersion: "0.1",
errString: fmt.Sprintf("client version 0.1 is too old. Minimum supported API version is %s, please upgrade your client to a newer version", api.MinSupportedAPIVersion),
errString: "client version 0.1 is too old. Minimum supported API version is 1.2.0, please upgrade your client to a newer version",
},
{
reqVersion: "9999.9999",
errString: fmt.Sprintf("client version 9999.9999 is too new. Maximum supported API version is %s", api.DefaultVersion),
errString: "client version 9999.9999 is too new. Maximum supported API version is 1.10.0",
},
}
@@ -121,25 +67,26 @@ func TestVersionMiddlewareVersion(t *testing.T) {
func TestVersionMiddlewareWithErrorsReturnsHeaders(t *testing.T) {
handler := func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
v := httputils.VersionFromContext(ctx)
assert.Check(t, v != "")
assert.Check(t, len(v) != 0)
return nil
}
m, err := NewVersionMiddleware("1.2.3", api.DefaultVersion, api.MinSupportedAPIVersion)
assert.NilError(t, err)
defaultVersion := "1.10.0"
minVersion := "1.2.0"
m := NewVersionMiddleware(defaultVersion, defaultVersion, minVersion)
h := m.WrapHandler(handler)
req, _ := http.NewRequest(http.MethodGet, "/containers/json", http.NoBody)
req, _ := http.NewRequest(http.MethodGet, "/containers/json", nil)
resp := httptest.NewRecorder()
ctx := context.Background()
vars := map[string]string{"version": "0.1"}
err = h(ctx, resp, req, vars)
err := h(ctx, resp, req, vars)
assert.Check(t, is.ErrorContains(err, ""))
hdr := resp.Result().Header
assert.Check(t, is.Contains(hdr.Get("Server"), "Docker/1.2.3"))
assert.Check(t, is.Contains(hdr.Get("Server"), "Docker/"+defaultVersion))
assert.Check(t, is.Contains(hdr.Get("Server"), runtime.GOOS))
assert.Check(t, is.Equal(hdr.Get("Api-Version"), api.DefaultVersion))
assert.Check(t, is.Equal(hdr.Get("Ostype"), runtime.GOOS))
assert.Check(t, is.Equal(hdr.Get("API-Version"), defaultVersion))
assert.Check(t, is.Equal(hdr.Get("OSType"), runtime.GOOS))
}

View File

@@ -1,10 +1,10 @@
package build
package build // import "github.com/docker/docker/api/server/router/build"
import (
"context"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/backend"
"github.com/docker/docker/api/types/build"
)
// Backend abstracts an image builder whose only purpose is to build an image referenced by an imageID.
@@ -13,8 +13,9 @@ type Backend interface {
// TODO: make this return a reference instead of string
Build(context.Context, backend.BuildConfig) (string, error)
// PruneCache prunes the build cache.
PruneCache(context.Context, build.CachePruneOptions) (*build.CachePruneReport, error)
// Prune build cache
PruneCache(context.Context, types.BuildCachePruneOptions) (*types.BuildCachePruneReport, error)
Cancel(context.Context, string) error
}

View File

@@ -1,66 +1,52 @@
package build
package build // import "github.com/docker/docker/api/server/router/build"
import (
"runtime"
"github.com/docker/docker/api/server/router"
"github.com/docker/docker/api/types/build"
"github.com/docker/docker/api/types"
)
// buildRouter is a router to talk with the build controller
type buildRouter struct {
backend Backend
daemon experimentalProvider
routes []router.Route
backend Backend
daemon experimentalProvider
routes []router.Route
features *map[string]bool
}
// NewRouter initializes a new build router
func NewRouter(b Backend, d experimentalProvider) router.Router {
func NewRouter(b Backend, d experimentalProvider, features *map[string]bool) router.Router {
r := &buildRouter{
backend: b,
daemon: d,
backend: b,
daemon: d,
features: features,
}
r.initRoutes()
return r
}
// Routes returns the available routers to the build controller
func (br *buildRouter) Routes() []router.Route {
return br.routes
func (r *buildRouter) Routes() []router.Route {
return r.routes
}
func (br *buildRouter) initRoutes() {
br.routes = []router.Route{
router.NewPostRoute("/build", br.postBuild),
router.NewPostRoute("/build/prune", br.postPrune),
router.NewPostRoute("/build/cancel", br.postCancel),
func (r *buildRouter) initRoutes() {
r.routes = []router.Route{
router.NewPostRoute("/build", r.postBuild),
router.NewPostRoute("/build/prune", r.postPrune),
router.NewPostRoute("/build/cancel", r.postCancel),
}
}
// BuilderVersion derives the default docker builder version from the config.
//
// The default on Linux is version "2" (BuildKit), but the daemon can be
// configured to recommend version "1" (classic Builder). Windows does not
// yet support BuildKit for native Windows images, and uses "1" (classic builder)
// as a default.
//
// This value is only a recommendation as advertised by the daemon, and it is
// up to the client to choose which builder to use.
func BuilderVersion(features map[string]bool) build.BuilderVersion {
// TODO(thaJeztah) move the default to daemon/config
bv := build.BuilderBuildKit
if runtime.GOOS == "windows" {
// BuildKit is not yet the default on Windows.
bv = build.BuilderV1
}
// Allow the features field in the daemon config to override the
// default builder to advertise.
if enable, ok := features["buildkit"]; ok {
if enable {
bv = build.BuilderBuildKit
// BuilderVersion derives the default docker builder version from the config
// Note: it is valid to have BuilderVersion unset, which means it is up to the
// client to choose which builder to use.
func BuilderVersion(features map[string]bool) types.BuilderVersion {
var bv types.BuilderVersion
if v, ok := features["buildkit"]; ok {
if v {
bv = types.BuilderBuildKit
} else {
bv = build.BuilderV1
bv = types.BuilderV1
}
}
return bv
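Calling the newer variant shows the interplay between the platform default and the features map; output shown for Linux:

package main

import (
	"fmt"

	buildrouter "github.com/docker/docker/api/server/router/build"
)

func main() {
	// No "buildkit" key: the platform default is advertised
	// (BuildKit, "2", on Linux; classic builder, "1", on Windows).
	fmt.Println(buildrouter.BuilderVersion(map[string]bool{})) // 2
	// An explicit feature flag overrides the default either way.
	fmt.Println(buildrouter.BuilderVersion(map[string]bool{"buildkit": false})) // 1
}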

View File

@@ -1,4 +1,4 @@
package build
package build // import "github.com/docker/docker/api/server/router/build"
import (
"bufio"
@@ -13,36 +13,37 @@ import (
"strconv"
"strings"
"sync"
"syscall"
"github.com/containerd/log"
"github.com/docker/docker/api/server/httputils"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/backend"
"github.com/docker/docker/api/types/build"
"github.com/docker/docker/api/types/container"
"github.com/docker/docker/api/types/filters"
"github.com/docker/docker/api/types/registry"
"github.com/docker/docker/api/types/versions"
"github.com/docker/docker/errdefs"
"github.com/docker/docker/pkg/ioutils"
"github.com/docker/docker/pkg/progress"
"github.com/docker/docker/pkg/streamformatter"
units "github.com/docker/go-units"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
type invalidParam struct {
error
}
type invalidIsolationError string
func (e invalidIsolationError) Error() string {
return fmt.Sprintf("Unsupported isolation: %q", string(e))
}
func (e invalidParam) InvalidParameter() {}
func (e invalidIsolationError) InvalidParameter() {}
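Both error types lean on errdefs' marker-interface convention: any error exposing an InvalidParameter() method is classified as a client error and surfaces as HTTP 400. A minimal sketch; badFlag is illustrative:

package main

import (
	"errors"
	"fmt"

	"github.com/docker/docker/errdefs"
)

type badFlag struct{ error }

// InvalidParameter marks the error so the API layer maps it to a 400.
func (badFlag) InvalidParameter() {}

func main() {
	err := badFlag{errors.New("unsupported isolation")}
	fmt.Println(errdefs.IsInvalidParameter(err)) // true
}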
func newImageBuildOptions(ctx context.Context, r *http.Request) (*build.ImageBuildOptions, error) {
options := &build.ImageBuildOptions{
Version: build.BuilderV1, // Builder V1 is the default, but can be overridden
func newImageBuildOptions(ctx context.Context, r *http.Request) (*types.ImageBuildOptions, error) {
options := &types.ImageBuildOptions{
Version: types.BuilderV1, // Builder V1 is the default, but can be overridden
Dockerfile: r.FormValue("dockerfile"),
SuppressOutput: httputils.BoolValue(r, "q"),
NoCache: httputils.BoolValue(r, "nocache"),
ForceRemove: httputils.BoolValue(r, "forcerm"),
PullParent: httputils.BoolValue(r, "pull"),
MemorySwap: httputils.Int64ValueOrZero(r, "memswap"),
Memory: httputils.Int64ValueOrZero(r, "memory"),
CPUShares: httputils.Int64ValueOrZero(r, "cpushares"),
@@ -63,27 +64,29 @@ func newImageBuildOptions(ctx context.Context, r *http.Request) (*build.ImageBui
}
if runtime.GOOS != "windows" && options.SecurityOpt != nil {
// SecurityOpt only supports "credentials-spec" on Windows, and is not used on other platforms.
return nil, invalidParam{errors.New("security options are not supported on " + runtime.GOOS)}
return nil, errdefs.InvalidParameter(errors.New("The daemon on this platform does not support setting security options on build"))
}
if httputils.BoolValue(r, "forcerm") {
version := httputils.VersionFromContext(ctx)
if httputils.BoolValue(r, "forcerm") && versions.GreaterThanOrEqualTo(version, "1.12") {
options.Remove = true
} else if r.FormValue("rm") == "" {
} else if r.FormValue("rm") == "" && versions.GreaterThanOrEqualTo(version, "1.12") {
options.Remove = true
} else {
options.Remove = httputils.BoolValue(r, "rm")
}
version := httputils.VersionFromContext(ctx)
if httputils.BoolValue(r, "pull") && versions.GreaterThanOrEqualTo(version, "1.16") {
options.PullParent = true
}
if versions.GreaterThanOrEqualTo(version, "1.32") {
options.Platform = r.FormValue("platform")
}
if versions.GreaterThanOrEqualTo(version, "1.40") {
outputsJSON := r.FormValue("outputs")
if outputsJSON != "" {
var outputs []build.ImageBuildOutput
var outputs []types.ImageBuildOutput
if err := json.Unmarshal([]byte(outputsJSON), &outputs); err != nil {
return nil, invalidParam{errors.Wrap(err, "invalid outputs specified")}
return nil, err
}
options.Outputs = outputs
}
@@ -100,14 +103,14 @@ func newImageBuildOptions(ctx context.Context, r *http.Request) (*build.ImageBui
if i := r.FormValue("isolation"); i != "" {
options.Isolation = container.Isolation(i)
if !options.Isolation.IsValid() {
return nil, invalidParam{errors.Errorf("unsupported isolation: %q", i)}
return nil, invalidIsolationError(options.Isolation)
}
}
if ulimitsJSON := r.FormValue("ulimits"); ulimitsJSON != "" {
buildUlimits := []*container.Ulimit{}
var buildUlimits = []*units.Ulimit{}
if err := json.Unmarshal([]byte(ulimitsJSON), &buildUlimits); err != nil {
return nil, invalidParam{errors.Wrap(err, "error reading ulimit settings")}
return nil, errors.Wrap(errdefs.InvalidParameter(err), "error reading ulimit settings")
}
options.Ulimits = buildUlimits
}
@@ -125,25 +128,25 @@ func newImageBuildOptions(ctx context.Context, r *http.Request) (*build.ImageBui
// so that it can print a warning about "foo" being unused if there is
// no "ARG foo" in the Dockerfile.
if buildArgsJSON := r.FormValue("buildargs"); buildArgsJSON != "" {
buildArgs := map[string]*string{}
var buildArgs = map[string]*string{}
if err := json.Unmarshal([]byte(buildArgsJSON), &buildArgs); err != nil {
return nil, invalidParam{errors.Wrap(err, "error reading build args")}
return nil, errors.Wrap(errdefs.InvalidParameter(err), "error reading build args")
}
options.BuildArgs = buildArgs
}
if labelsJSON := r.FormValue("labels"); labelsJSON != "" {
labels := map[string]string{}
var labels = map[string]string{}
if err := json.Unmarshal([]byte(labelsJSON), &labels); err != nil {
return nil, invalidParam{errors.Wrap(err, "error reading labels")}
return nil, errors.Wrap(errdefs.InvalidParameter(err), "error reading labels")
}
options.Labels = labels
}
if cacheFromJSON := r.FormValue("cachefrom"); cacheFromJSON != "" {
cacheFrom := []string{}
var cacheFrom = []string{}
if err := json.Unmarshal([]byte(cacheFromJSON), &cacheFrom); err != nil {
return nil, invalidParam{errors.Wrap(err, "error reading cache-from")}
return nil, err
}
options.CacheFrom = cacheFrom
}
@@ -159,14 +162,14 @@ func newImageBuildOptions(ctx context.Context, r *http.Request) (*build.ImageBui
return options, nil
}
func parseVersion(s string) (build.BuilderVersion, error) {
switch build.BuilderVersion(s) {
case build.BuilderV1:
return build.BuilderV1, nil
case build.BuilderBuildKit:
return build.BuilderBuildKit, nil
func parseVersion(s string) (types.BuilderVersion, error) {
switch types.BuilderVersion(s) {
case types.BuilderV1:
return types.BuilderV1, nil
case types.BuilderBuildKit:
return types.BuilderBuildKit, nil
default:
return "", invalidParam{errors.Errorf("invalid version %q", s)}
return "", errors.Errorf("invalid version %q", s)
}
}
@@ -176,58 +179,21 @@ func (br *buildRouter) postPrune(ctx context.Context, w http.ResponseWriter, r *
}
fltrs, err := filters.FromJSON(r.Form.Get("filters"))
if err != nil {
return err
return errors.Wrap(err, "could not parse filters")
}
ksfv := r.FormValue("keep-storage")
if ksfv == "" {
ksfv = "0"
}
ks, err := strconv.Atoi(ksfv)
if err != nil {
return errors.Wrapf(err, "keep-storage is in bytes and expects an integer, got %v", ksfv)
}
opts := build.CachePruneOptions{
All: httputils.BoolValue(r, "all"),
Filters: fltrs,
}
parseBytesFromFormValue := func(name string) (int64, error) {
if fv := r.FormValue(name); fv != "" {
bs, err := strconv.Atoi(fv)
if err != nil {
return 0, invalidParam{errors.Wrapf(err, "%s is in bytes and expects an integer, got %v", name, fv)}
}
return int64(bs), nil
}
return 0, nil
}
version := httputils.VersionFromContext(ctx)
if versions.GreaterThanOrEqualTo(version, "1.48") {
if bs, err := parseBytesFromFormValue("reserved-space"); err != nil {
return err
} else {
if bs == 0 {
// Deprecated parameter. Only checked if reserved-space is not used.
bs, err = parseBytesFromFormValue("keep-storage")
if err != nil {
return err
}
}
opts.ReservedSpace = bs
}
if bs, err := parseBytesFromFormValue("max-used-space"); err != nil {
return err
} else {
opts.MaxUsedSpace = bs
}
if bs, err := parseBytesFromFormValue("min-free-space"); err != nil {
return err
} else {
opts.MinFreeSpace = bs
}
} else {
// Only keep-storage was valid in pre-1.48 versions.
if bs, err := parseBytesFromFormValue("keep-storage"); err != nil {
return err
} else {
opts.ReservedSpace = bs
}
opts := types.BuildCachePruneOptions{
All: httputils.BoolValue(r, "all"),
Filters: fltrs,
KeepStorage: int64(ks),
}
report, err := br.backend.PruneCache(ctx, opts)
@@ -242,7 +208,7 @@ func (br *buildRouter) postCancel(ctx context.Context, w http.ResponseWriter, r
id := r.FormValue("id")
if id == "" {
return invalidParam{errors.New("build ID not provided")}
return errors.Errorf("build ID not provided")
}
return br.backend.Cancel(ctx, id)
@@ -272,6 +238,7 @@ func (br *buildRouter) postBuild(ctx context.Context, w http.ResponseWriter, r *
defer func() { _ = output.Close() }()
errf := func(err error) error {
if httputils.BoolValue(r, "q") && notVerboseBuffer.Len() > 0 {
_, _ = output.Write(notVerboseBuffer.Bytes())
}
@@ -282,9 +249,8 @@ func (br *buildRouter) postBuild(ctx context.Context, w http.ResponseWriter, r *
return err
}
_, err = output.Write(streamformatter.FormatError(err))
// don't log broken pipe errors as this is the normal case when a client aborts.
if err != nil && !errors.Is(err, syscall.EPIPE) {
log.G(ctx).WithError(err).Warn("could not write error response")
if err != nil {
logrus.Warnf("could not write error response: %v", err)
}
return nil
}
@@ -296,7 +262,7 @@ func (br *buildRouter) postBuild(ctx context.Context, w http.ResponseWriter, r *
buildOptions.AuthConfigs = getAuthConfigs(r.Header)
if buildOptions.Squash && !br.daemon.HasExperimental() {
return invalidParam{errors.New("squash is only supported with experimental mode")}
return errdefs.InvalidParameter(errors.New("squash is only supported with experimental mode"))
}
out := io.Writer(output)
@@ -319,9 +285,6 @@ func (br *buildRouter) postBuild(ctx context.Context, w http.ResponseWriter, r *
ProgressWriter: buildProgressWriter(out, wantAux, createProgressReader),
})
if err != nil {
if errors.Is(err, context.Canceled) {
log.G(ctx).Debug("build canceled")
}
return errf(err)
}
@@ -333,8 +296,8 @@ func (br *buildRouter) postBuild(ctx context.Context, w http.ResponseWriter, r *
return nil
}
func getAuthConfigs(header http.Header) map[string]registry.AuthConfig {
authConfigs := map[string]registry.AuthConfig{}
func getAuthConfigs(header http.Header) map[string]types.AuthConfig {
authConfigs := map[string]types.AuthConfig{}
authConfigsEncoded := header.Get("X-Registry-Config")
if authConfigsEncoded == "" {
@@ -353,14 +316,14 @@ type syncWriter struct {
mu sync.Mutex
}
func (s *syncWriter) Write(b []byte) (int, error) {
func (s *syncWriter) Write(b []byte) (count int, err error) {
s.mu.Lock()
defer s.mu.Unlock()
return s.w.Write(b)
count, err = s.w.Write(b)
s.mu.Unlock()
return
}
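The mutex exists because build progress is written from multiple goroutines (see the PR linked just below); without it, concurrent Write calls could interleave and corrupt the JSON stream. A toy demonstration of the same wrapper in isolation:

package main

import (
	"io"
	"os"
	"sync"
)

type syncWriter struct {
	w  io.Writer
	mu sync.Mutex
}

func (s *syncWriter) Write(b []byte) (int, error) {
	s.mu.Lock()
	defer s.mu.Unlock()
	return s.w.Write(b)
}

func main() {
	sw := &syncWriter{w: os.Stdout}
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			_, _ = sw.Write([]byte(`{"stream":"line"}` + "\n")) // each line stays intact
		}()
	}
	wg.Wait()
}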
func buildProgressWriter(out io.Writer, wantAux bool, createProgressReader func(io.ReadCloser) io.ReadCloser) backend.ProgressWriter {
// see https://github.com/moby/moby/pull/21406
out = &syncWriter{w: out}
var aux *streamformatter.AuxFormatter
@@ -381,12 +344,8 @@ type flusher interface {
Flush()
}
type nopFlusher struct{}
func (f *nopFlusher) Flush() {}
func wrapOutputBufferedUntilRequestRead(rc io.ReadCloser, out io.Writer) (io.ReadCloser, io.Writer) {
var fl flusher = &nopFlusher{}
var fl flusher = &ioutils.NopFlusher{}
if f, ok := out.(flusher); ok {
fl = f
}

View File

@@ -1,10 +1,10 @@
package checkpoint
package checkpoint // import "github.com/docker/docker/api/server/router/checkpoint"
import "github.com/docker/docker/api/types/checkpoint"
import "github.com/docker/docker/api/types"
// Backend for Checkpoint
type Backend interface {
CheckpointCreate(container string, config checkpoint.CreateOptions) error
CheckpointDelete(container string, config checkpoint.DeleteOptions) error
CheckpointList(container string, config checkpoint.ListOptions) ([]checkpoint.Summary, error)
CheckpointCreate(container string, config types.CheckpointCreateOptions) error
CheckpointDelete(container string, config types.CheckpointDeleteOptions) error
CheckpointList(container string, config types.CheckpointListOptions) ([]types.Checkpoint, error)
}

View File

@@ -1,4 +1,4 @@
package checkpoint
package checkpoint // import "github.com/docker/docker/api/server/router/checkpoint"
import (
"github.com/docker/docker/api/server/httputils"
@@ -23,14 +23,14 @@ func NewRouter(b Backend, decoder httputils.ContainerDecoder) router.Router {
}
// Routes returns the available routers to the checkpoint controller
func (cr *checkpointRouter) Routes() []router.Route {
return cr.routes
func (r *checkpointRouter) Routes() []router.Route {
return r.routes
}
func (cr *checkpointRouter) initRoutes() {
cr.routes = []router.Route{
router.NewGetRoute("/containers/{name:.*}/checkpoints", cr.getContainerCheckpoints, router.Experimental),
router.NewPostRoute("/containers/{name:.*}/checkpoints", cr.postContainerCheckpoint, router.Experimental),
router.NewDeleteRoute("/containers/{name}/checkpoints/{checkpoint}", cr.deleteContainerCheckpoint, router.Experimental),
func (r *checkpointRouter) initRoutes() {
r.routes = []router.Route{
router.NewGetRoute("/containers/{name:.*}/checkpoints", r.getContainerCheckpoints, router.Experimental),
router.NewPostRoute("/containers/{name:.*}/checkpoints", r.postContainerCheckpoint, router.Experimental),
router.NewDeleteRoute("/containers/{name}/checkpoints/{checkpoint}", r.deleteContainerCheckpoint, router.Experimental),
}
}

View File

@@ -1,24 +1,27 @@
package checkpoint
package checkpoint // import "github.com/docker/docker/api/server/router/checkpoint"
import (
"context"
"encoding/json"
"net/http"
"github.com/docker/docker/api/server/httputils"
"github.com/docker/docker/api/types/checkpoint"
"github.com/docker/docker/api/types"
)
func (cr *checkpointRouter) postContainerCheckpoint(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
func (s *checkpointRouter) postContainerCheckpoint(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
}
var options checkpoint.CreateOptions
if err := httputils.ReadJSON(r, &options); err != nil {
var options types.CheckpointCreateOptions
decoder := json.NewDecoder(r.Body)
if err := decoder.Decode(&options); err != nil {
return err
}
err := cr.backend.CheckpointCreate(vars["name"], options)
err := s.backend.CheckpointCreate(vars["name"], options)
if err != nil {
return err
}
@@ -27,33 +30,32 @@ func (cr *checkpointRouter) postContainerCheckpoint(ctx context.Context, w http.
return nil
}
func (cr *checkpointRouter) getContainerCheckpoints(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
func (s *checkpointRouter) getContainerCheckpoints(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
}
checkpoints, err := cr.backend.CheckpointList(vars["name"], checkpoint.ListOptions{
checkpoints, err := s.backend.CheckpointList(vars["name"], types.CheckpointListOptions{
CheckpointDir: r.Form.Get("dir"),
})
if err != nil {
return err
}
if checkpoints == nil {
checkpoints = []checkpoint.Summary{}
}
return httputils.WriteJSON(w, http.StatusOK, checkpoints)
}
func (cr *checkpointRouter) deleteContainerCheckpoint(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
func (s *checkpointRouter) deleteContainerCheckpoint(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
}
err := cr.backend.CheckpointDelete(vars["name"], checkpoint.DeleteOptions{
err := s.backend.CheckpointDelete(vars["name"], types.CheckpointDeleteOptions{
CheckpointDir: r.Form.Get("dir"),
CheckpointID: vars["checkpoint"],
})
if err != nil {
return err
}

View File

@@ -1,56 +1,60 @@
package container
package container // import "github.com/docker/docker/api/server/router/container"
import (
"context"
"io"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/backend"
"github.com/docker/docker/api/types/container"
"github.com/docker/docker/api/types/filters"
"github.com/moby/go-archive"
containerpkg "github.com/docker/docker/container"
"github.com/docker/docker/pkg/archive"
)
// execBackend includes functions to implement to provide exec functionality.
type execBackend interface {
ContainerExecCreate(name string, options *container.ExecOptions) (string, error)
ContainerExecCreate(name string, config *types.ExecConfig) (string, error)
ContainerExecInspect(id string) (*backend.ExecInspect, error)
ContainerExecResize(ctx context.Context, name string, height, width uint32) error
ContainerExecStart(ctx context.Context, name string, options backend.ExecStartConfig) error
ContainerExecResize(name string, height, width int) error
ContainerExecStart(ctx context.Context, name string, stdin io.Reader, stdout io.Writer, stderr io.Writer) error
ExecExists(name string) (bool, error)
}
// copyBackend includes functions to implement to provide container copy functionality.
type copyBackend interface {
ContainerArchivePath(name string, path string) (content io.ReadCloser, stat *container.PathStat, err error)
ContainerExport(ctx context.Context, name string, out io.Writer) error
ContainerArchivePath(name string, path string) (content io.ReadCloser, stat *types.ContainerPathStat, err error)
ContainerCopy(name string, res string) (io.ReadCloser, error)
ContainerExport(name string, out io.Writer) error
ContainerExtractToDir(name, path string, copyUIDGID, noOverwriteDirNonDir bool, content io.Reader) error
ContainerStatPath(name string, path string) (stat *container.PathStat, err error)
ContainerStatPath(name string, path string) (stat *types.ContainerPathStat, err error)
}
// stateBackend includes functions to implement to provide container state lifecycle functionality.
type stateBackend interface {
ContainerCreate(ctx context.Context, config backend.ContainerCreateConfig) (container.CreateResponse, error)
ContainerKill(name string, signal string) error
ContainerCreate(config types.ContainerCreateConfig) (container.ContainerCreateCreatedBody, error)
ContainerKill(name string, sig uint64) error
ContainerPause(name string) error
ContainerRename(oldName, newName string) error
ContainerResize(ctx context.Context, name string, height, width uint32) error
ContainerRestart(ctx context.Context, name string, options container.StopOptions) error
ContainerRm(name string, config *backend.ContainerRmConfig) error
ContainerStart(ctx context.Context, name string, checkpoint string, checkpointDir string) error
ContainerStop(ctx context.Context, name string, options container.StopOptions) error
ContainerResize(name string, height, width int) error
ContainerRestart(name string, seconds *int) error
ContainerRm(name string, config *types.ContainerRmConfig) error
ContainerStart(name string, hostConfig *container.HostConfig, checkpoint string, checkpointDir string) error
ContainerStop(name string, seconds *int) error
ContainerUnpause(name string) error
ContainerUpdate(name string, hostConfig *container.HostConfig) (container.UpdateResponse, error)
ContainerWait(ctx context.Context, name string, condition container.WaitCondition) (<-chan container.StateStatus, error)
ContainerUpdate(name string, hostConfig *container.HostConfig) (container.ContainerUpdateOKBody, error)
ContainerWait(ctx context.Context, name string, condition containerpkg.WaitCondition) (<-chan containerpkg.StateStatus, error)
}
// monitorBackend includes functions to implement to provide containers monitoring functionality.
type monitorBackend interface {
ContainerChanges(ctx context.Context, name string) ([]archive.Change, error)
ContainerInspect(ctx context.Context, name string, options backend.ContainerInspectOptions) (*container.InspectResponse, error)
ContainerLogs(ctx context.Context, name string, config *container.LogsOptions) (msgs <-chan *backend.LogMessage, tty bool, err error)
ContainerChanges(name string) ([]archive.Change, error)
ContainerInspect(name string, size bool, version string) (interface{}, error)
ContainerLogs(ctx context.Context, name string, config *types.ContainerLogsOptions) (msgs <-chan *backend.LogMessage, tty bool, err error)
ContainerStats(ctx context.Context, name string, config *backend.ContainerStatsConfig) error
ContainerTop(name string, psArgs string) (*container.TopResponse, error)
Containers(ctx context.Context, config *container.ListOptions) ([]*container.Summary, error)
ContainerTop(name string, psArgs string) (*container.ContainerTopOKBody, error)
Containers(config *types.ContainerListOptions) ([]*types.Container, error)
}
// attachBackend includes function to implement to provide container attaching functionality.
@@ -60,11 +64,11 @@ type attachBackend interface {
// systemBackend includes functions to implement to provide system wide containers functionality
type systemBackend interface {
ContainersPrune(ctx context.Context, pruneFilters filters.Args) (*container.PruneReport, error)
ContainersPrune(ctx context.Context, pruneFilters filters.Args) (*types.ContainersPruneReport, error)
}
type commitBackend interface {
CreateImageFromContainer(ctx context.Context, name string, config *backend.CreateImageConfig) (imageID string, err error)
CreateImageFromContainer(name string, config *backend.CreateImageConfig) (imageID string, err error)
}
// Backend is all the methods that need to be implemented to provide container specific functionality.

View File

@@ -1,4 +1,4 @@
package container
package container // import "github.com/docker/docker/api/server/router/container"
import (
"github.com/docker/docker/api/server/httputils"
@@ -25,47 +25,48 @@ func NewRouter(b Backend, decoder httputils.ContainerDecoder, cgroup2 bool) rout
}
// Routes returns the available routes to the container controller
func (c *containerRouter) Routes() []router.Route {
return c.routes
func (r *containerRouter) Routes() []router.Route {
return r.routes
}
// initRoutes initializes the routes in container router
func (c *containerRouter) initRoutes() {
c.routes = []router.Route{
func (r *containerRouter) initRoutes() {
r.routes = []router.Route{
// HEAD
router.NewHeadRoute("/containers/{name:.*}/archive", c.headContainersArchive),
router.NewHeadRoute("/containers/{name:.*}/archive", r.headContainersArchive),
// GET
router.NewGetRoute("/containers/json", c.getContainersJSON),
router.NewGetRoute("/containers/{name:.*}/export", c.getContainersExport),
router.NewGetRoute("/containers/{name:.*}/changes", c.getContainersChanges),
router.NewGetRoute("/containers/{name:.*}/json", c.getContainersByName),
router.NewGetRoute("/containers/{name:.*}/top", c.getContainersTop),
router.NewGetRoute("/containers/{name:.*}/logs", c.getContainersLogs),
router.NewGetRoute("/containers/{name:.*}/stats", c.getContainersStats),
router.NewGetRoute("/containers/{name:.*}/attach/ws", c.wsContainersAttach),
router.NewGetRoute("/exec/{id:.*}/json", c.getExecByID),
router.NewGetRoute("/containers/{name:.*}/archive", c.getContainersArchive),
router.NewGetRoute("/containers/json", r.getContainersJSON),
router.NewGetRoute("/containers/{name:.*}/export", r.getContainersExport),
router.NewGetRoute("/containers/{name:.*}/changes", r.getContainersChanges),
router.NewGetRoute("/containers/{name:.*}/json", r.getContainersByName),
router.NewGetRoute("/containers/{name:.*}/top", r.getContainersTop),
router.NewGetRoute("/containers/{name:.*}/logs", r.getContainersLogs),
router.NewGetRoute("/containers/{name:.*}/stats", r.getContainersStats),
router.NewGetRoute("/containers/{name:.*}/attach/ws", r.wsContainersAttach),
router.NewGetRoute("/exec/{id:.*}/json", r.getExecByID),
router.NewGetRoute("/containers/{name:.*}/archive", r.getContainersArchive),
// POST
router.NewPostRoute("/containers/create", c.postContainersCreate),
router.NewPostRoute("/containers/{name:.*}/kill", c.postContainersKill),
router.NewPostRoute("/containers/{name:.*}/pause", c.postContainersPause),
router.NewPostRoute("/containers/{name:.*}/unpause", c.postContainersUnpause),
router.NewPostRoute("/containers/{name:.*}/restart", c.postContainersRestart),
router.NewPostRoute("/containers/{name:.*}/start", c.postContainersStart),
router.NewPostRoute("/containers/{name:.*}/stop", c.postContainersStop),
router.NewPostRoute("/containers/{name:.*}/wait", c.postContainersWait),
router.NewPostRoute("/containers/{name:.*}/resize", c.postContainersResize),
router.NewPostRoute("/containers/{name:.*}/attach", c.postContainersAttach),
router.NewPostRoute("/containers/{name:.*}/exec", c.postContainerExecCreate),
router.NewPostRoute("/exec/{name:.*}/start", c.postContainerExecStart),
router.NewPostRoute("/exec/{name:.*}/resize", c.postContainerExecResize),
router.NewPostRoute("/containers/{name:.*}/rename", c.postContainerRename),
router.NewPostRoute("/containers/{name:.*}/update", c.postContainerUpdate),
router.NewPostRoute("/containers/prune", c.postContainersPrune),
router.NewPostRoute("/commit", c.postCommit),
router.NewPostRoute("/containers/create", r.postContainersCreate),
router.NewPostRoute("/containers/{name:.*}/kill", r.postContainersKill),
router.NewPostRoute("/containers/{name:.*}/pause", r.postContainersPause),
router.NewPostRoute("/containers/{name:.*}/unpause", r.postContainersUnpause),
router.NewPostRoute("/containers/{name:.*}/restart", r.postContainersRestart),
router.NewPostRoute("/containers/{name:.*}/start", r.postContainersStart),
router.NewPostRoute("/containers/{name:.*}/stop", r.postContainersStop),
router.NewPostRoute("/containers/{name:.*}/wait", r.postContainersWait),
router.NewPostRoute("/containers/{name:.*}/resize", r.postContainersResize),
router.NewPostRoute("/containers/{name:.*}/attach", r.postContainersAttach),
router.NewPostRoute("/containers/{name:.*}/copy", r.postContainersCopy), // Deprecated since 1.8, Errors out since 1.12
router.NewPostRoute("/containers/{name:.*}/exec", r.postContainerExecCreate),
router.NewPostRoute("/exec/{name:.*}/start", r.postContainerExecStart),
router.NewPostRoute("/exec/{name:.*}/resize", r.postContainerExecResize),
router.NewPostRoute("/containers/{name:.*}/rename", r.postContainerRename),
router.NewPostRoute("/containers/{name:.*}/update", r.postContainerUpdate),
router.NewPostRoute("/containers/prune", r.postContainersPrune),
router.NewPostRoute("/commit", r.postCommit),
// PUT
router.NewPutRoute("/containers/{name:.*}/archive", c.putContainersArchive),
router.NewPutRoute("/containers/{name:.*}/archive", r.putContainersArchive),
// DELETE
router.NewDeleteRoute("/containers/{name:.*}", c.deleteContainers),
router.NewDeleteRoute("/containers/{name:.*}", r.deleteContainers),
}
}

File diff suppressed because it is too large

View File

@@ -1,350 +0,0 @@
package container
import (
"strings"
"testing"
"github.com/docker/docker/api/types/container"
"github.com/docker/docker/api/types/network"
"github.com/docker/docker/libnetwork/netlabel"
"gotest.tools/v3/assert"
is "gotest.tools/v3/assert/cmp"
)
func TestHandleMACAddressBC(t *testing.T) {
testcases := []struct {
name string
apiVersion string
ctrWideMAC string
networkMode container.NetworkMode
epConfig map[string]*network.EndpointSettings
expEpWithCtrWideMAC string
expEpWithNoMAC string
expCtrWideMAC string
expWarning string
expError string
}{
{
name: "old api ctr-wide mac mix id and name",
apiVersion: "1.43",
ctrWideMAC: "11:22:33:44:55:66",
networkMode: "aNetId",
epConfig: map[string]*network.EndpointSettings{"aNetName": {}},
expEpWithCtrWideMAC: "aNetName",
expCtrWideMAC: "11:22:33:44:55:66",
},
{
name: "old api clear ep mac",
apiVersion: "1.43",
networkMode: "aNetId",
epConfig: map[string]*network.EndpointSettings{"aNetName": {MacAddress: "11:22:33:44:55:66"}},
expEpWithNoMAC: "aNetName",
},
{
name: "old api no-network ctr-wide mac",
apiVersion: "1.43",
networkMode: "none",
ctrWideMAC: "11:22:33:44:55:66",
expError: "conflicting options: mac-address and the network mode",
expCtrWideMAC: "11:22:33:44:55:66",
},
{
name: "old api create ep",
apiVersion: "1.43",
networkMode: "aNetId",
ctrWideMAC: "11:22:33:44:55:66",
epConfig: map[string]*network.EndpointSettings{},
expEpWithCtrWideMAC: "aNetId",
expCtrWideMAC: "11:22:33:44:55:66",
},
{
name: "old api migrate ctr-wide mac",
apiVersion: "1.43",
ctrWideMAC: "11:22:33:44:55:66",
networkMode: "aNetName",
epConfig: map[string]*network.EndpointSettings{"aNetName": {}},
expEpWithCtrWideMAC: "aNetName",
expCtrWideMAC: "11:22:33:44:55:66",
},
{
name: "new api no macs",
apiVersion: "1.44",
networkMode: "aNetId",
epConfig: map[string]*network.EndpointSettings{"aNetName": {}},
},
{
name: "new api ep specific mac",
apiVersion: "1.44",
networkMode: "aNetName",
epConfig: map[string]*network.EndpointSettings{"aNetName": {MacAddress: "11:22:33:44:55:66"}},
},
{
name: "new api migrate ctr-wide mac to new ep",
apiVersion: "1.44",
ctrWideMAC: "11:22:33:44:55:66",
networkMode: "aNetName",
epConfig: map[string]*network.EndpointSettings{},
expEpWithCtrWideMAC: "aNetName",
expWarning: "The container-wide MacAddress field is now deprecated",
expCtrWideMAC: "",
},
{
name: "new api migrate ctr-wide mac to existing ep",
apiVersion: "1.44",
ctrWideMAC: "11:22:33:44:55:66",
networkMode: "aNetName",
epConfig: map[string]*network.EndpointSettings{"aNetName": {}},
expEpWithCtrWideMAC: "aNetName",
expWarning: "The container-wide MacAddress field is now deprecated",
expCtrWideMAC: "",
},
{
name: "new api mode vs name mismatch",
apiVersion: "1.44",
ctrWideMAC: "11:22:33:44:55:66",
networkMode: "aNetId",
epConfig: map[string]*network.EndpointSettings{"aNetName": {}},
expError: "unable to migrate container-wide MAC address to a specific network: HostConfig.NetworkMode must match the identity of a network in NetworkSettings.Networks",
expCtrWideMAC: "11:22:33:44:55:66",
},
{
name: "new api mac mismatch",
apiVersion: "1.44",
ctrWideMAC: "11:22:33:44:55:66",
networkMode: "aNetName",
epConfig: map[string]*network.EndpointSettings{"aNetName": {MacAddress: "00:11:22:33:44:55"}},
expError: "the container-wide MAC address must match the endpoint-specific MAC address",
expCtrWideMAC: "11:22:33:44:55:66",
},
}
for _, tc := range testcases {
t.Run(tc.name, func(t *testing.T) {
cfg := &container.Config{
MacAddress: tc.ctrWideMAC, //nolint:staticcheck // ignore SA1019: field is deprecated, but still used on API < v1.44.
}
hostCfg := &container.HostConfig{
NetworkMode: tc.networkMode,
}
epConfig := make(map[string]*network.EndpointSettings, len(tc.epConfig))
for k, v := range tc.epConfig {
v := *v
epConfig[k] = &v
}
netCfg := &network.NetworkingConfig{
EndpointsConfig: epConfig,
}
warning, err := handleMACAddressBC(cfg, hostCfg, netCfg, tc.apiVersion)
if tc.expError == "" {
assert.Check(t, err)
} else {
assert.Check(t, is.ErrorContains(err, tc.expError))
}
if tc.expWarning == "" {
assert.Check(t, is.Equal(warning, ""))
} else {
assert.Check(t, is.Contains(warning, tc.expWarning))
}
if tc.expEpWithCtrWideMAC != "" {
got := netCfg.EndpointsConfig[tc.expEpWithCtrWideMAC].MacAddress
assert.Check(t, is.Equal(got, tc.ctrWideMAC))
}
if tc.expEpWithNoMAC != "" {
got := netCfg.EndpointsConfig[tc.expEpWithNoMAC].MacAddress
assert.Check(t, is.Equal(got, ""))
}
gotCtrWideMAC := cfg.MacAddress //nolint:staticcheck // ignore SA1019: field is deprecated, but still used on API < v1.44.
assert.Check(t, is.Equal(gotCtrWideMAC, tc.expCtrWideMAC))
})
}
}
func TestEpConfigForNetMode(t *testing.T) {
testcases := []struct {
name string
apiVersion string
networkMode string
epConfig map[string]*network.EndpointSettings
expEpId string
expNumEps int
expError bool
}{
{
name: "old api no eps",
apiVersion: "1.43",
networkMode: "mynet",
expNumEps: 1,
},
{
name: "new api no eps",
apiVersion: "1.44",
networkMode: "mynet",
expNumEps: 1,
},
{
name: "old api with ep",
apiVersion: "1.43",
networkMode: "mynet",
epConfig: map[string]*network.EndpointSettings{
"anything": {EndpointID: "epone"},
},
expEpId: "epone",
expNumEps: 1,
},
{
name: "new api with matching ep",
apiVersion: "1.44",
networkMode: "mynet",
epConfig: map[string]*network.EndpointSettings{
"mynet": {EndpointID: "epone"},
},
expEpId: "epone",
expNumEps: 1,
},
{
name: "new api with mismatched ep",
apiVersion: "1.44",
networkMode: "mynet",
epConfig: map[string]*network.EndpointSettings{
"shortid": {EndpointID: "epone"},
},
expError: true,
},
}
for _, tc := range testcases {
t.Run(tc.name, func(t *testing.T) {
netConfig := &network.NetworkingConfig{
EndpointsConfig: tc.epConfig,
}
ep, err := epConfigForNetMode(tc.apiVersion, container.NetworkMode(tc.networkMode), netConfig)
if tc.expError {
assert.Check(t, is.ErrorContains(err, "HostConfig.NetworkMode must match the identity of a network in NetworkSettings.Networks"))
} else {
assert.Assert(t, err)
assert.Check(t, is.Equal(ep.EndpointID, tc.expEpId))
assert.Check(t, is.Len(netConfig.EndpointsConfig, tc.expNumEps))
}
})
}
}
func TestHandleSysctlBC(t *testing.T) {
testcases := []struct {
name string
apiVersion string
networkMode string
sysctls map[string]string
epConfig map[string]*network.EndpointSettings
expEpSysctls []string
expSysctls map[string]string
expWarningContains []string
expError string
}{
{
name: "migrate to new ep",
apiVersion: "1.46",
networkMode: "mynet",
sysctls: map[string]string{
"net.ipv6.conf.all.disable_ipv6": "0",
"net.ipv6.conf.eth0.accept_ra": "2",
"net.ipv6.conf.eth0.forwarding": "1",
},
expSysctls: map[string]string{
"net.ipv6.conf.all.disable_ipv6": "0",
},
expEpSysctls: []string{"net.ipv6.conf.IFNAME.forwarding=1", "net.ipv6.conf.IFNAME.accept_ra=2"},
expWarningContains: []string{
"Migrated",
"net.ipv6.conf.eth0.accept_ra", "net.ipv6.conf.IFNAME.accept_ra=2",
"net.ipv6.conf.eth0.forwarding", "net.ipv6.conf.IFNAME.forwarding=1",
},
},
{
name: "migrate nothing",
apiVersion: "1.46",
networkMode: "mynet",
sysctls: map[string]string{
"net.ipv6.conf.all.disable_ipv6": "0",
},
expSysctls: map[string]string{
"net.ipv6.conf.all.disable_ipv6": "0",
},
},
{
name: "migration disabled for newer api",
apiVersion: "1.48",
networkMode: "mynet",
sysctls: map[string]string{
"net.ipv6.conf.eth0.accept_ra": "2",
},
expError: "must be supplied using driver option 'com.docker.network.endpoint.sysctls'",
},
{
name: "only migrate eth0",
apiVersion: "1.46",
networkMode: "mynet",
sysctls: map[string]string{
"net.ipv6.conf.eth1.accept_ra": "2",
},
expError: "unable to determine network endpoint",
},
{
name: "net name mismatch",
apiVersion: "1.46",
networkMode: "mynet",
epConfig: map[string]*network.EndpointSettings{
"shortid": {EndpointID: "epone"},
},
sysctls: map[string]string{
"net.ipv6.conf.eth1.accept_ra": "2",
},
expError: "unable to find a network for sysctl",
},
}
for _, tc := range testcases {
t.Run(tc.name, func(t *testing.T) {
hostCfg := &container.HostConfig{
NetworkMode: container.NetworkMode(tc.networkMode),
Sysctls: map[string]string{},
}
for k, v := range tc.sysctls {
hostCfg.Sysctls[k] = v
}
netCfg := &network.NetworkingConfig{
EndpointsConfig: tc.epConfig,
}
warnings, err := handleSysctlBC(hostCfg, netCfg, tc.apiVersion)
for _, s := range tc.expWarningContains {
assert.Check(t, is.Contains(warnings, s))
}
if tc.expError != "" {
assert.Check(t, is.ErrorContains(err, tc.expError))
} else {
assert.Check(t, err)
assert.Check(t, is.DeepEqual(hostCfg.Sysctls, tc.expSysctls))
ep := netCfg.EndpointsConfig[tc.networkMode]
if ep == nil {
assert.Check(t, is.Nil(tc.expEpSysctls))
} else {
got, ok := ep.DriverOpts[netlabel.EndpointSysctls]
assert.Check(t, ok)
// Check for expected ep-sysctls.
for _, want := range tc.expEpSysctls {
assert.Check(t, is.Contains(got, want))
}
// Check for unexpected ep-sysctls.
assert.Check(t, is.Len(got, len(strings.Join(tc.expEpSysctls, ","))))
}
}
})
}
}
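The sysctl cases encode the migration rule: interface-specific IPv6 sysctls move out of HostConfig.Sysctls into the endpoint's DriverOpts under netlabel.EndpointSysctls, with eth0 rewritten to the literal IFNAME and the values comma-joined, while sysctls naming any other interface are rejected. A sketch of just that rewrite, assuming eth0 is the only migratable interface (illustrative, not the daemon's handleSysctlBC, which also applies the API-version gates above):

package sketch

import (
	"fmt"
	"strings"
)

// migrateIfaceSysctls moves "net.ipv6.conf.eth0.*" entries out of the
// container-wide sysctl map and returns them in the comma-joined
// "net.ipv6.conf.IFNAME.<opt>=<val>" form expected in DriverOpts.
func migrateIfaceSysctls(sysctls map[string]string) (string, error) {
	var migrated []string
	for k, v := range sysctls {
		if !strings.HasPrefix(k, "net.ipv6.conf.") || strings.HasPrefix(k, "net.ipv6.conf.all.") {
			continue // container-wide sysctls stay in HostConfig.Sysctls
		}
		rest := strings.TrimPrefix(k, "net.ipv6.conf.")
		iface, opt, ok := strings.Cut(rest, ".")
		if !ok {
			continue
		}
		if iface != "eth0" {
			return "", fmt.Errorf("unable to determine network endpoint for sysctl %s", k)
		}
		migrated = append(migrated, "net.ipv6.conf.IFNAME."+opt+"="+v)
		delete(sysctls, k)
	}
	return strings.Join(migrated, ","), nil
}
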

View File

@@ -1,4 +1,4 @@
package container
package container // import "github.com/docker/docker/api/server/router/container"
import (
"compress/flate"
@@ -6,16 +6,62 @@ import (
"context"
"encoding/base64"
"encoding/json"
"errors"
"io"
"net/http"
"github.com/docker/docker/api/server/httputils"
"github.com/docker/docker/api/types/container"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/versions"
"github.com/docker/docker/errdefs"
gddohttputil "github.com/golang/gddo/httputil"
)
// setContainerPathStatHeader encodes the stat to JSON, base64-encodes it, and places it in a header.
func setContainerPathStatHeader(stat *container.PathStat, header http.Header) error {
type pathError struct{}
func (pathError) Error() string {
return "Path cannot be empty"
}
func (pathError) InvalidParameter() {}
// postContainersCopy is deprecated in favor of getContainersArchive.
func (s *containerRouter) postContainersCopy(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
// Deprecated since 1.8; errors out since 1.12
version := httputils.VersionFromContext(ctx)
if versions.GreaterThanOrEqualTo(version, "1.24") {
w.WriteHeader(http.StatusNotFound)
return nil
}
if err := httputils.CheckForJSON(r); err != nil {
return err
}
cfg := types.CopyConfig{}
if err := json.NewDecoder(r.Body).Decode(&cfg); err != nil {
if err == io.EOF {
return errdefs.InvalidParameter(errors.New("got EOF while reading request body"))
}
return errdefs.InvalidParameter(err)
}
if cfg.Resource == "" {
return pathError{}
}
data, err := s.backend.ContainerCopy(vars["name"], cfg.Resource)
if err != nil {
return err
}
defer data.Close()
w.Header().Set("Content-Type", "application/x-tar")
_, err = io.Copy(w, data)
return err
}
// Encode the stat to JSON, base64 encode, and place in a header.
func setContainerPathStatHeader(stat *types.ContainerPathStat, header http.Header) error {
statJSON, err := json.Marshal(stat)
if err != nil {
return err
@@ -29,13 +75,13 @@ func setContainerPathStatHeader(stat *container.PathStat, header http.Header) er
return nil
}
func (c *containerRouter) headContainersArchive(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
func (s *containerRouter) headContainersArchive(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
v, err := httputils.ArchiveFormValues(r, vars)
if err != nil {
return err
}
stat, err := c.backend.ContainerStatPath(v.Name, v.Path)
stat, err := s.backend.ContainerStatPath(v.Name, v.Path)
if err != nil {
return err
}
@@ -66,13 +112,13 @@ func writeCompressedResponse(w http.ResponseWriter, r *http.Request, body io.Rea
return err
}
func (c *containerRouter) getContainersArchive(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
func (s *containerRouter) getContainersArchive(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
v, err := httputils.ArchiveFormValues(r, vars)
if err != nil {
return err
}
tarArchive, stat, err := c.backend.ContainerArchivePath(v.Name, v.Path)
tarArchive, stat, err := s.backend.ContainerArchivePath(v.Name, v.Path)
if err != nil {
return err
}
@@ -86,7 +132,7 @@ func (c *containerRouter) getContainersArchive(ctx context.Context, w http.Respo
return writeCompressedResponse(w, r, tarArchive)
}
func (c *containerRouter) putContainersArchive(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
func (s *containerRouter) putContainersArchive(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
v, err := httputils.ArchiveFormValues(r, vars)
if err != nil {
return err
@@ -95,5 +141,5 @@ func (c *containerRouter) putContainersArchive(ctx context.Context, w http.Respo
noOverwriteDirNonDir := httputils.BoolValue(r, "noOverwriteDirNonDir")
copyUIDGID := httputils.BoolValue(r, "copyUIDGID")
return c.backend.ContainerExtractToDir(v.Name, v.Path, copyUIDGID, noOverwriteDirNonDir, r.Body)
return s.backend.ContainerExtractToDir(v.Name, v.Path, copyUIDGID, noOverwriteDirNonDir, r.Body)
}
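For reference, the stat produced by setContainerPathStatHeader travels as base64-encoded JSON in the X-Docker-Container-Path-Stat header of the archive endpoints. A client-side sketch reversing that encoding, using the newer container.PathStat type (the older code above uses types.ContainerPathStat):

package sketch

import (
	"encoding/base64"
	"encoding/json"
	"net/http"

	"github.com/docker/docker/api/types/container"
)

// decodePathStat reverses setContainerPathStatHeader: base64-decode the
// header value, then unmarshal the JSON into a PathStat.
func decodePathStat(h http.Header) (*container.PathStat, error) {
	raw, err := base64.StdEncoding.DecodeString(h.Get("X-Docker-Container-Path-Stat"))
	if err != nil {
		return nil, err
	}
	var stat container.PathStat
	if err := json.Unmarshal(raw, &stat); err != nil {
		return nil, err
	}
	return &stat, nil
}
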

View File

@@ -1,24 +1,24 @@
package container
package container // import "github.com/docker/docker/api/server/router/container"
import (
"context"
"encoding/json"
"errors"
"fmt"
"io"
"net/http"
"strconv"
"github.com/containerd/log"
"github.com/docker/docker/api/server/httputils"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/backend"
"github.com/docker/docker/api/types/container"
"github.com/docker/docker/api/types/versions"
"github.com/docker/docker/errdefs"
"github.com/docker/docker/pkg/stdcopy"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
func (c *containerRouter) getExecByID(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
eConfig, err := c.backend.ContainerExecInspect(vars["id"])
func (s *containerRouter) getExecByID(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
eConfig, err := s.backend.ContainerExecInspect(vars["id"])
if err != nil {
return err
}
@@ -34,74 +34,71 @@ func (execCommandError) Error() string {
func (execCommandError) InvalidParameter() {}
func (c *containerRouter) postContainerExecCreate(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
func (s *containerRouter) postContainerExecCreate(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
}
execConfig := &container.ExecOptions{}
if err := httputils.ReadJSON(r, execConfig); err != nil {
if err := httputils.CheckForJSON(r); err != nil {
return err
}
name := vars["name"]
execConfig := &types.ExecConfig{}
if err := json.NewDecoder(r.Body).Decode(execConfig); err != nil {
if err == io.EOF {
return errdefs.InvalidParameter(errors.New("got EOF while reading request body"))
}
return errdefs.InvalidParameter(err)
}
if len(execConfig.Cmd) == 0 {
return execCommandError{}
}
version := httputils.VersionFromContext(ctx)
if versions.LessThan(version, "1.42") {
// Not supported by API versions before 1.42
execConfig.ConsoleSize = nil
}
// Register an instance of Exec in container.
id, err := c.backend.ContainerExecCreate(vars["name"], execConfig)
id, err := s.backend.ContainerExecCreate(name, execConfig)
if err != nil {
log.G(ctx).Errorf("Error setting up exec command in container %s: %v", vars["name"], err)
logrus.Errorf("Error setting up exec command in container %s: %v", name, err)
return err
}
return httputils.WriteJSON(w, http.StatusCreated, &container.ExecCreateResponse{
return httputils.WriteJSON(w, http.StatusCreated, &types.IDResponse{
ID: id,
})
}
// TODO(vishh): Refactor the code to avoid having to specify stream config as part of both create and start.
func (c *containerRouter) postContainerExecStart(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
func (s *containerRouter) postContainerExecStart(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
}
version := httputils.VersionFromContext(ctx)
if versions.GreaterThan(version, "1.21") {
if err := httputils.CheckForJSON(r); err != nil {
return err
}
}
var (
execName = vars["name"]
stdin, inStream io.ReadCloser
stdout, stderr, outStream io.Writer
)
options := &container.ExecStartOptions{}
if err := httputils.ReadJSON(r, options); err != nil {
execStartCheck := &types.ExecStartCheck{}
if err := json.NewDecoder(r.Body).Decode(execStartCheck); err != nil {
if err == io.EOF {
return errdefs.InvalidParameter(errors.New("got EOF while reading request body"))
}
return errdefs.InvalidParameter(err)
}
if exists, err := s.backend.ExecExists(execName); !exists {
return err
}
if exists, err := c.backend.ExecExists(execName); !exists {
return err
}
if options.ConsoleSize != nil {
version := httputils.VersionFromContext(ctx)
// Not supported before 1.42
if versions.LessThan(version, "1.42") {
options.ConsoleSize = nil
}
// No console without tty
if !options.Tty {
options.ConsoleSize = nil
}
}
if !options.Detach {
if !execStartCheck.Detach {
var err error
// Setting up the streaming http interface.
inStream, outStream, err = httputils.HijackConnection(w)
@@ -111,61 +108,49 @@ func (c *containerRouter) postContainerExecStart(ctx context.Context, w http.Res
defer httputils.CloseStreams(inStream, outStream)
if _, ok := r.Header["Upgrade"]; ok {
contentType := types.MediaTypeRawStream
if !options.Tty && versions.GreaterThanOrEqualTo(httputils.VersionFromContext(ctx), "1.42") {
contentType = types.MediaTypeMultiplexedStream
}
_, _ = fmt.Fprint(outStream, "HTTP/1.1 101 UPGRADED\r\nContent-Type: "+contentType+"\r\nConnection: Upgrade\r\nUpgrade: tcp\r\n")
fmt.Fprint(outStream, "HTTP/1.1 101 UPGRADED\r\nContent-Type: application/vnd.docker.raw-stream\r\nConnection: Upgrade\r\nUpgrade: tcp\r\n")
} else {
_, _ = fmt.Fprint(outStream, "HTTP/1.1 200 OK\r\nContent-Type: application/vnd.docker.raw-stream\r\n")
fmt.Fprint(outStream, "HTTP/1.1 200 OK\r\nContent-Type: application/vnd.docker.raw-stream\r\n")
}
// copy headers that were removed as part of hijack
if err := w.Header().WriteSubset(outStream, nil); err != nil {
return err
}
_, _ = fmt.Fprint(outStream, "\r\n")
fmt.Fprint(outStream, "\r\n")
stdin = inStream
if options.Tty {
stdout = outStream
} else {
stdout = outStream
if !execStartCheck.Tty {
stderr = stdcopy.NewStdWriter(outStream, stdcopy.Stderr)
stdout = stdcopy.NewStdWriter(outStream, stdcopy.Stdout)
}
}
// Now run the user process in container.
//
// TODO: Maybe we should pass ctx here if we're not detaching?
err := c.backend.ContainerExecStart(context.Background(), execName, backend.ExecStartConfig{
Stdin: stdin,
Stdout: stdout,
Stderr: stderr,
ConsoleSize: options.ConsoleSize,
})
if err != nil {
if options.Detach {
// Maybe we should pass ctx here if we're not detaching?
if err := s.backend.ContainerExecStart(context.Background(), execName, stdin, stdout, stderr); err != nil {
if execStartCheck.Detach {
return err
}
_, _ = fmt.Fprintf(stdout, "%v\r\n", err)
log.G(ctx).Errorf("Error running exec %s in container: %v", execName, err)
stdout.Write([]byte(err.Error() + "\r\n"))
logrus.Errorf("Error running exec %s in container: %v", execName, err)
}
return nil
}
func (c *containerRouter) postContainerExecResize(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
func (s *containerRouter) postContainerExecResize(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
}
height, err := httputils.Uint32Value(r, "h")
height, err := strconv.Atoi(r.Form.Get("h"))
if err != nil {
return errdefs.InvalidParameter(errors.Wrapf(err, "invalid resize height %q", r.Form.Get("h")))
return errdefs.InvalidParameter(err)
}
width, err := httputils.Uint32Value(r, "w")
width, err := strconv.Atoi(r.Form.Get("w"))
if err != nil {
return errdefs.InvalidParameter(errors.Wrapf(err, "invalid resize width %q", r.Form.Get("w")))
return errdefs.InvalidParameter(err)
}
return c.backend.ContainerExecResize(ctx, vars["name"], height, width)
return s.backend.ContainerExecResize(vars["name"], height, width)
}
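When no TTY is allocated, the exec output above is multiplexed onto a single stream with stdcopy frame headers; with a TTY it is raw. A minimal client-side sketch that demultiplexes such a stream back into local stdout and stderr using the same pkg/stdcopy package:

package sketch

import (
	"io"
	"os"

	"github.com/docker/docker/pkg/stdcopy"
)

// demux splits a multiplexed (non-TTY) stream, as produced by
// stdcopy.NewStdWriter above, back into local stdout and stderr.
func demux(conn io.Reader) error {
	_, err := stdcopy.StdCopy(os.Stdout, os.Stderr, conn)
	return err
}
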

View File

@@ -1,41 +1,21 @@
// FIXME(thaJeztah): remove once we are a module; the go:build directive prevents go from downgrading language version to go1.16:
//go:build go1.23
package container
package container // import "github.com/docker/docker/api/server/router/container"
import (
"context"
"net/http"
"github.com/docker/docker/api/server/httputils"
"github.com/docker/docker/api/types/backend"
"github.com/docker/docker/api/types/container"
"github.com/docker/docker/api/types/versions"
"github.com/docker/docker/internal/sliceutil"
"github.com/docker/docker/pkg/stringid"
)
// getContainersByName inspects a container's configuration and serializes it as JSON.
func (c *containerRouter) getContainersByName(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
ctr, err := c.backend.ContainerInspect(ctx, vars["name"], backend.ContainerInspectOptions{
Size: httputils.BoolValue(r, "size"),
})
func (s *containerRouter) getContainersByName(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
displaySize := httputils.BoolValue(r, "size")
version := httputils.VersionFromContext(ctx)
json, err := s.backend.ContainerInspect(vars["name"], displaySize, version)
if err != nil {
return err
}
version := httputils.VersionFromContext(ctx)
if versions.LessThan(version, "1.45") {
shortCID := stringid.TruncateID(ctr.ID)
for nwName, ep := range ctr.NetworkSettings.Networks {
if container.NetworkMode(nwName).IsUserDefined() {
ep.Aliases = sliceutil.Dedup(append(ep.Aliases, shortCID, ctr.Config.Hostname))
}
}
}
if versions.LessThan(version, "1.48") {
ctr.ImageManifestDescriptor = nil
}
return httputils.WriteJSON(w, http.StatusOK, ctr)
return httputils.WriteJSON(w, http.StatusOK, json)
}

View File

@@ -1,54 +0,0 @@
package container
import (
"context"
"net"
"syscall"
"github.com/containerd/log"
"github.com/docker/docker/internal/unix_noeintr"
"golang.org/x/sys/unix"
)
func notifyClosed(ctx context.Context, conn net.Conn, notify func()) {
sc, ok := conn.(syscall.Conn)
if !ok {
log.G(ctx).Debug("notifyClosed: conn does not support close notifications")
return
}
rc, err := sc.SyscallConn()
if err != nil {
log.G(ctx).WithError(err).Warn("notifyClosed: failed get raw conn for close notifications")
return
}
epFd, err := unix_noeintr.EpollCreate()
if err != nil {
log.G(ctx).WithError(err).Warn("notifyClosed: failed to create epoll fd")
return
}
defer unix.Close(epFd)
err = rc.Control(func(fd uintptr) {
err := unix_noeintr.EpollCtl(epFd, unix.EPOLL_CTL_ADD, int(fd), &unix.EpollEvent{
Events: unix.EPOLLHUP,
Fd: int32(fd),
})
if err != nil {
log.G(ctx).WithError(err).Warn("notifyClosed: failed to register fd for close notifications")
return
}
events := make([]unix.EpollEvent, 1)
if _, err := unix_noeintr.EpollWait(epFd, events, -1); err != nil {
log.G(ctx).WithError(err).Warn("notifyClosed: failed to wait for close notifications")
return
}
notify()
})
if err != nil {
log.G(ctx).WithError(err).Warn("notifyClosed: failed to register for close notifications")
return
}
}
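notifyClosed blocks inside EpollWait until the peer hangs up (EPOLLHUP), so a caller would run it in its own goroutine. A hypothetical usage sketch, cancelling a context when the client connection closes (notifyClosed is passed in so the snippet is self-contained; its signature is the one defined above):

package sketch

import (
	"context"
	"net"
)

// watchConn derives a context that is cancelled when the peer closes the
// connection. notifyClosed blocks until EPOLLHUP, hence the goroutine.
func watchConn(ctx context.Context, conn net.Conn,
	notifyClosed func(context.Context, net.Conn, func()),
) (context.Context, context.CancelFunc) {
	ctx, cancel := context.WithCancel(ctx)
	go notifyClosed(ctx, conn, cancel)
	return ctx, cancel
}
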

View File

@@ -1,10 +0,0 @@
//go:build !linux
package container
import (
"context"
"net"
)
func notifyClosed(ctx context.Context, conn net.Conn, notify func()) {}

View File

@@ -1,4 +1,4 @@
package debug
package debug // import "github.com/docker/docker/api/server/router/debug"
import (
"context"
@@ -24,13 +24,13 @@ type debugRouter struct {
func (r *debugRouter) initRoutes() {
r.routes = []router.Route{
router.NewGetRoute("/debug/vars", frameworkAdaptHandler(expvar.Handler())),
router.NewGetRoute("/debug/pprof/", frameworkAdaptHandlerFunc(pprof.Index)),
router.NewGetRoute("/debug/pprof/cmdline", frameworkAdaptHandlerFunc(pprof.Cmdline)),
router.NewGetRoute("/debug/pprof/profile", frameworkAdaptHandlerFunc(pprof.Profile)),
router.NewGetRoute("/debug/pprof/symbol", frameworkAdaptHandlerFunc(pprof.Symbol)),
router.NewGetRoute("/debug/pprof/trace", frameworkAdaptHandlerFunc(pprof.Trace)),
router.NewGetRoute("/debug/pprof/{name}", handlePprof),
router.NewGetRoute("/vars", frameworkAdaptHandler(expvar.Handler())),
router.NewGetRoute("/pprof/", frameworkAdaptHandlerFunc(pprof.Index)),
router.NewGetRoute("/pprof/cmdline", frameworkAdaptHandlerFunc(pprof.Cmdline)),
router.NewGetRoute("/pprof/profile", frameworkAdaptHandlerFunc(pprof.Profile)),
router.NewGetRoute("/pprof/symbol", frameworkAdaptHandlerFunc(pprof.Symbol)),
router.NewGetRoute("/pprof/trace", frameworkAdaptHandlerFunc(pprof.Trace)),
router.NewGetRoute("/pprof/{name}", handlePprof),
}
}
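With debug enabled, these routes expose the standard net/http/pprof handlers on the daemon's API socket. A sketch of pulling a 30-second CPU profile over the Unix socket; the socket path, URL, and query parameter are assumptions about the local setup:

package sketch

import (
	"context"
	"io"
	"net"
	"net/http"
	"os"
)

// fetchProfile grabs a CPU profile from a daemon running with debug enabled
// and writes it to cpu.pprof for use with "go tool pprof".
func fetchProfile() error {
	c := &http.Client{Transport: &http.Transport{
		DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
			return (&net.Dialer{}).DialContext(ctx, "unix", "/var/run/docker.sock")
		},
	}}
	resp, err := c.Get("http://localhost/debug/pprof/profile?seconds=30")
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	f, err := os.Create("cpu.pprof")
	if err != nil {
		return err
	}
	defer f.Close()
	_, err = io.Copy(f, resp.Body)
	return err
}
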

View File

@@ -1,4 +1,4 @@
package debug
package debug // import "github.com/docker/docker/api/server/router/debug"
import (
"context"

View File

@@ -1,15 +1,15 @@
package distribution
package distribution // import "github.com/docker/docker/api/server/router/distribution"
import (
"context"
"github.com/distribution/reference"
"github.com/docker/distribution"
"github.com/docker/docker/api/types/registry"
"github.com/docker/distribution/reference"
"github.com/docker/docker/api/types"
)
// Backend is all the methods that need to be implemented
// to provide image specific functionality.
type Backend interface {
GetRepositories(context.Context, reference.Named, *registry.AuthConfig) ([]distribution.Repository, error)
GetRepository(context.Context, reference.Named, *types.AuthConfig) (distribution.Repository, bool, error)
}

View File

@@ -1,4 +1,4 @@
package distribution
package distribution // import "github.com/docker/docker/api/server/router/distribution"
import "github.com/docker/docker/api/server/router"
@@ -18,14 +18,14 @@ func NewRouter(backend Backend) router.Router {
}
// Routes returns the available routes
func (dr *distributionRouter) Routes() []router.Route {
return dr.routes
func (r *distributionRouter) Routes() []router.Route {
return r.routes
}
// initRoutes initializes the routes in the distribution router
func (dr *distributionRouter) initRoutes() {
dr.routes = []router.Route{
func (r *distributionRouter) initRoutes() {
r.routes = []router.Route{
// GET
router.NewGetRoute("/distribution/{name:.*}/json", dr.getDistributionInfo),
router.NewGetRoute("/distribution/{name:.*}/json", r.getDistributionInfo),
}
}

View File

@@ -1,34 +1,50 @@
package distribution
package distribution // import "github.com/docker/docker/api/server/router/distribution"
import (
"context"
"encoding/base64"
"encoding/json"
"net/http"
"strings"
"github.com/distribution/reference"
"github.com/docker/distribution"
"github.com/docker/distribution/manifest/manifestlist"
"github.com/docker/distribution/manifest/schema1"
"github.com/docker/distribution/manifest/schema2"
"github.com/docker/distribution/reference"
"github.com/docker/docker/api/server/httputils"
"github.com/docker/docker/api/types/registry"
distributionpkg "github.com/docker/docker/distribution"
"github.com/docker/docker/api/types"
registrytypes "github.com/docker/docker/api/types/registry"
"github.com/docker/docker/errdefs"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
v1 "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
)
func (dr *distributionRouter) getDistributionInfo(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
func (s *distributionRouter) getDistributionInfo(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
}
w.Header().Set("Content-Type", "application/json")
imgName := vars["name"]
var (
config = &types.AuthConfig{}
authEncoded = r.Header.Get("X-Registry-Auth")
distributionInspect registrytypes.DistributionInspect
)
if authEncoded != "" {
authJSON := base64.NewDecoder(base64.URLEncoding, strings.NewReader(authEncoded))
if err := json.NewDecoder(authJSON).Decode(&config); err != nil {
// For a search it is not an error if no auth was given; to increase
// compatibility with the existing API it defaults to empty.
config = &types.AuthConfig{}
}
}
image := vars["name"]
// TODO why is reference.ParseAnyReference() / reference.ParseNormalizedNamed() not using the reference.ErrTagInvalidFormat (and so on) errors?
ref, err := reference.ParseAnyReference(imgName)
ref, err := reference.ParseAnyReference(image)
if err != nil {
return errdefs.InvalidParameter(err)
}
@@ -38,58 +54,28 @@ func (dr *distributionRouter) getDistributionInfo(ctx context.Context, w http.Re
// full image ID
return errors.Errorf("no manifest found for full image ID")
}
return errdefs.InvalidParameter(errors.Errorf("unknown image reference format: %s", imgName))
return errdefs.InvalidParameter(errors.Errorf("unknown image reference format: %s", image))
}
// For a search it is not an error if no auth was given. Ignore invalid
// AuthConfig to increase compatibility with the existing API.
authConfig, _ := registry.DecodeAuthConfig(r.Header.Get(registry.AuthHeader))
repos, err := dr.backend.GetRepositories(ctx, namedRef, authConfig)
distrepo, _, err := s.backend.GetRepository(ctx, namedRef, config)
if err != nil {
return err
}
blobsrvc := distrepo.Blobs(ctx)
// Fetch the manifest; if a mirror is configured, try the mirror first,
// but continue with upstream on failure.
//
// FIXME(thaJeztah): construct "repositories" on-demand;
// GetRepositories() will attempt to connect to all endpoints (registries),
// but we may only need the first one if it contains the manifest we're
// looking for, or if the configured mirror is a pull-through mirror.
//
// Logic for this could be implemented similar to "distribution.Pull()",
// which uses the "pullEndpoints" utility to iterate over the list
// of endpoints;
//
// - https://github.com/moby/moby/blob/12c7411b6b7314bef130cd59f1c7384a7db06d0b/distribution/pull.go#L17-L31
// - https://github.com/moby/moby/blob/12c7411b6b7314bef130cd59f1c7384a7db06d0b/distribution/pull.go#L76-L152
var lastErr error
for _, repo := range repos {
distributionInspect, err := dr.fetchManifest(ctx, repo, namedRef)
if err != nil {
lastErr = err
continue
}
return httputils.WriteJSON(w, http.StatusOK, distributionInspect)
}
return lastErr
}
func (dr *distributionRouter) fetchManifest(ctx context.Context, distrepo distribution.Repository, namedRef reference.Named) (registry.DistributionInspect, error) {
var distributionInspect registry.DistributionInspect
if canonicalRef, ok := namedRef.(reference.Canonical); !ok {
namedRef = reference.TagNameOnly(namedRef)
taggedRef, ok := namedRef.(reference.NamedTagged)
if !ok {
return registry.DistributionInspect{}, errdefs.InvalidParameter(errors.Errorf("image reference not tagged: %s", namedRef))
return errdefs.InvalidParameter(errors.Errorf("image reference not tagged: %s", image))
}
descriptor, err := distrepo.Tags(ctx).Get(ctx, taggedRef.Tag())
if err != nil {
return registry.DistributionInspect{}, err
return err
}
distributionInspect.Descriptor = ocispec.Descriptor{
distributionInspect.Descriptor = v1.Descriptor{
MediaType: descriptor.MediaType,
Digest: descriptor.Digest,
Size: descriptor.Size,
@@ -104,26 +90,26 @@ func (dr *distributionRouter) fetchManifest(ctx context.Context, distrepo distri
// we have a digest, so we can retrieve the manifest
mnfstsrvc, err := distrepo.Manifests(ctx)
if err != nil {
return registry.DistributionInspect{}, err
return err
}
mnfst, err := mnfstsrvc.Get(ctx, distributionInspect.Descriptor.Digest)
if err != nil {
switch {
case errors.Is(err, reference.ErrReferenceInvalidFormat),
errors.Is(err, reference.ErrTagInvalidFormat),
errors.Is(err, reference.ErrDigestInvalidFormat),
errors.Is(err, reference.ErrNameContainsUppercase),
errors.Is(err, reference.ErrNameEmpty),
errors.Is(err, reference.ErrNameTooLong),
errors.Is(err, reference.ErrNameNotCanonical):
return registry.DistributionInspect{}, errdefs.InvalidParameter(err)
switch err {
case reference.ErrReferenceInvalidFormat,
reference.ErrTagInvalidFormat,
reference.ErrDigestInvalidFormat,
reference.ErrNameContainsUppercase,
reference.ErrNameEmpty,
reference.ErrNameTooLong,
reference.ErrNameNotCanonical:
return errdefs.InvalidParameter(err)
}
return registry.DistributionInspect{}, err
return err
}
mediaType, payload, err := mnfst.Payload()
if err != nil {
return registry.DistributionInspect{}, err
return err
}
// update MediaType because registry might return something incorrect
distributionInspect.Descriptor.MediaType = mediaType
@@ -135,7 +121,7 @@ func (dr *distributionRouter) fetchManifest(ctx context.Context, distrepo distri
switch mnfstObj := mnfst.(type) {
case *manifestlist.DeserializedManifestList:
for _, m := range mnfstObj.Manifests {
distributionInspect.Platforms = append(distributionInspect.Platforms, ocispec.Platform{
distributionInspect.Platforms = append(distributionInspect.Platforms, v1.Platform{
Architecture: m.Platform.Architecture,
OS: m.Platform.OS,
OSVersion: m.Platform.OSVersion,
@@ -144,19 +130,21 @@ func (dr *distributionRouter) fetchManifest(ctx context.Context, distrepo distri
})
}
case *schema2.DeserializedManifest:
blobStore := distrepo.Blobs(ctx)
configJSON, err := blobStore.Get(ctx, mnfstObj.Config.Digest)
var platform ocispec.Platform
configJSON, err := blobsrvc.Get(ctx, mnfstObj.Config.Digest)
var platform v1.Platform
if err == nil {
err := json.Unmarshal(configJSON, &platform)
if err == nil && (platform.OS != "" || platform.Architecture != "") {
distributionInspect.Platforms = append(distributionInspect.Platforms, platform)
}
}
// TODO(thaJeztah): we only use this to produce a nice error, but as a result we can't remove libtrust as a dependency; see if we can reduce the dependencies while still being able to detect a deprecated manifest
case *schema1.SignedManifest:
return registry.DistributionInspect{}, distributionpkg.DeprecatedSchema1ImageError(namedRef)
platform := v1.Platform{
Architecture: mnfstObj.Architecture,
OS: "linux",
}
distributionInspect.Platforms = append(distributionInspect.Platforms, platform)
}
return distributionInspect, nil
return httputils.WriteJSON(w, http.StatusOK, distributionInspect)
}
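Both sides of this diff decode X-Registry-Auth the same way: a base64url-encoded JSON AuthConfig, with missing or undecodable values silently treated as empty (the newer code wraps this in registry.DecodeAuthConfig). A condensed sketch of that handling, using the newer api/types/registry.AuthConfig (the older code decodes into types.AuthConfig):

package sketch

import (
	"encoding/base64"
	"encoding/json"
	"net/http"
	"strings"

	"github.com/docker/docker/api/types/registry"
)

// decodeAuthHeader mirrors the handling above: an invalid or absent header
// degrades to an empty config rather than an error.
func decodeAuthHeader(r *http.Request) *registry.AuthConfig {
	cfg := &registry.AuthConfig{}
	if enc := r.Header.Get("X-Registry-Auth"); enc != "" {
		dec := base64.NewDecoder(base64.URLEncoding, strings.NewReader(enc))
		if err := json.NewDecoder(dec).Decode(cfg); err != nil {
			cfg = &registry.AuthConfig{}
		}
	}
	return cfg
}
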

View File

@@ -1,4 +1,4 @@
package router
package router // import "github.com/docker/docker/api/server/router"
import (
"context"

View File

@@ -1,4 +1,4 @@
package grpc
package grpc // import "github.com/docker/docker/api/server/router/grpc"
import "google.golang.org/grpc"

View File

@@ -1,22 +1,7 @@
// FIXME(thaJeztah): remove once we are a module; the go:build directive prevents go from downgrading language version to go1.16:
//go:build go1.23
package grpc
package grpc // import "github.com/docker/docker/api/server/router/grpc"
import (
"context"
"fmt"
"os"
"strings"
"github.com/containerd/containerd/v2/defaults"
"github.com/containerd/log"
"github.com/docker/docker/api/server/router"
"github.com/docker/docker/internal/otelutil"
"github.com/moby/buildkit/util/grpcerrors"
"github.com/moby/buildkit/util/stack"
"github.com/moby/buildkit/util/tracing"
"go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc"
"golang.org/x/net/http2"
"google.golang.org/grpc"
)
@@ -29,18 +14,9 @@ type grpcRouter struct {
// NewRouter initializes a new grpc http router
func NewRouter(backends ...Backend) router.Router {
tp, _ := otelutil.NewTracerProvider(context.Background(), false)
opts := []grpc.ServerOption{
grpc.StatsHandler(tracing.ServerStatsHandler(otelgrpc.WithTracerProvider(tp))),
grpc.ChainUnaryInterceptor(unaryInterceptor, grpcerrors.UnaryServerInterceptor),
grpc.StreamInterceptor(grpcerrors.StreamServerInterceptor),
grpc.MaxRecvMsgSize(defaults.DefaultMaxRecvMsgSize),
grpc.MaxSendMsgSize(defaults.DefaultMaxSendMsgSize),
}
r := &grpcRouter{
h2Server: &http2.Server{},
grpcServer: grpc.NewServer(opts...),
grpcServer: grpc.NewServer(),
}
for _, b := range backends {
b.RegisterGRPC(r.grpcServer)
@@ -50,30 +26,12 @@ func NewRouter(backends ...Backend) router.Router {
}
// Routes returns the available routers to the session controller
func (gr *grpcRouter) Routes() []router.Route {
return gr.routes
func (r *grpcRouter) Routes() []router.Route {
return r.routes
}
func (gr *grpcRouter) initRoutes() {
gr.routes = []router.Route{
router.NewPostRoute("/grpc", gr.serveGRPC),
func (r *grpcRouter) initRoutes() {
r.routes = []router.Route{
router.NewPostRoute("/grpc", r.serveGRPC),
}
}
func unaryInterceptor(ctx context.Context, req any, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (resp any, _ error) {
// This method is used by the clients to send their traces to buildkit so they can be included
// in the daemon trace and stored in the build history record. This method cannot be traced because
// it would cause an infinite loop.
if strings.HasSuffix(info.FullMethod, "opentelemetry.proto.collector.trace.v1.TraceService/Export") {
return handler(ctx, req)
}
resp, err := handler(ctx, req)
if err != nil {
log.G(ctx).WithError(err).Error(info.FullMethod)
if log.GetLevel() >= log.DebugLevel {
_, _ = fmt.Fprintf(os.Stderr, "%+v", stack.Formatter(grpcerrors.FromGRPC(err)))
}
}
return resp, err
}

View File

@@ -1,4 +1,4 @@
package grpc
package grpc // import "github.com/docker/docker/api/server/router/grpc"
import (
"context"

View File

@@ -1,16 +1,14 @@
package image
package image // import "github.com/docker/docker/api/server/router/image"
import (
"context"
"io"
"github.com/distribution/reference"
"github.com/docker/docker/api/types/backend"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/filters"
"github.com/docker/docker/api/types/image"
"github.com/docker/docker/api/types/registry"
dockerimage "github.com/docker/docker/image"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
specs "github.com/opencontainers/image-spec/specs-go/v1"
)
// Backend is all the methods that need to be implemented
@@ -22,26 +20,22 @@ type Backend interface {
}
type imageBackend interface {
ImageDelete(ctx context.Context, imageRef string, options image.RemoveOptions) ([]image.DeleteResponse, error)
ImageHistory(ctx context.Context, imageName string, platform *ocispec.Platform) ([]*image.HistoryResponseItem, error)
Images(ctx context.Context, opts image.ListOptions) ([]*image.Summary, error)
GetImage(ctx context.Context, refOrID string, options backend.GetImageOpts) (*dockerimage.Image, error)
ImageInspect(ctx context.Context, refOrID string, options backend.ImageInspectOpts) (*image.InspectResponse, error)
TagImage(ctx context.Context, id dockerimage.ID, newRef reference.Named) error
ImagesPrune(ctx context.Context, pruneFilters filters.Args) (*image.PruneReport, error)
ImageDelete(imageRef string, force, prune bool) ([]types.ImageDeleteResponseItem, error)
ImageHistory(imageName string) ([]*image.HistoryResponseItem, error)
Images(imageFilters filters.Args, all bool, withExtraAttrs bool) ([]*types.ImageSummary, error)
LookupImage(name string) (*types.ImageInspect, error)
TagImage(imageName, repository, tag string) (string, error)
ImagesPrune(ctx context.Context, pruneFilters filters.Args) (*types.ImagesPruneReport, error)
}
type importExportBackend interface {
LoadImage(ctx context.Context, inTar io.ReadCloser, platform *ocispec.Platform, outStream io.Writer, quiet bool) error
ImportImage(ctx context.Context, ref reference.Named, platform *ocispec.Platform, msg string, layerReader io.Reader, changes []string) (dockerimage.ID, error)
ExportImage(ctx context.Context, names []string, platform *ocispec.Platform, outStream io.Writer) error
LoadImage(inTar io.ReadCloser, outStream io.Writer, quiet bool) error
ImportImage(src string, repository, platform string, tag string, msg string, inConfig io.ReadCloser, outStream io.Writer, changes []string) error
ExportImage(names []string, outStream io.Writer) error
}
type registryBackend interface {
PullImage(ctx context.Context, ref reference.Named, platform *ocispec.Platform, metaHeaders map[string][]string, authConfig *registry.AuthConfig, outStream io.Writer) error
PushImage(ctx context.Context, ref reference.Named, platform *ocispec.Platform, metaHeaders map[string][]string, authConfig *registry.AuthConfig, outStream io.Writer) error
}
type Searcher interface {
Search(ctx context.Context, searchFilters filters.Args, term string, limit int, authConfig *registry.AuthConfig, headers map[string][]string) ([]registry.SearchResult, error)
PullImage(ctx context.Context, image, tag string, platform *specs.Platform, metaHeaders map[string][]string, authConfig *types.AuthConfig, outStream io.Writer) error
PushImage(ctx context.Context, image, tag string, metaHeaders map[string][]string, authConfig *types.AuthConfig, outStream io.Writer) error
SearchRegistryForImages(ctx context.Context, filtersArgs string, term string, limit int, authConfig *types.AuthConfig, metaHeaders map[string][]string) (*registry.SearchResults, error)
}

View File

@@ -1,4 +1,4 @@
package image
package image // import "github.com/docker/docker/api/server/router/image"
import (
"github.com/docker/docker/api/server/router"
@@ -6,43 +6,39 @@ import (
// imageRouter is a router to talk with the image controller
type imageRouter struct {
backend Backend
searcher Searcher
routes []router.Route
backend Backend
routes []router.Route
}
// NewRouter initializes a new image router
func NewRouter(backend Backend, searcher Searcher) router.Router {
ir := &imageRouter{
backend: backend,
searcher: searcher,
}
ir.initRoutes()
return ir
func NewRouter(backend Backend) router.Router {
r := &imageRouter{backend: backend}
r.initRoutes()
return r
}
// Routes returns the available routes to the image controller
func (ir *imageRouter) Routes() []router.Route {
return ir.routes
func (r *imageRouter) Routes() []router.Route {
return r.routes
}
// initRoutes initializes the routes in the image router
func (ir *imageRouter) initRoutes() {
ir.routes = []router.Route{
func (r *imageRouter) initRoutes() {
r.routes = []router.Route{
// GET
router.NewGetRoute("/images/json", ir.getImagesJSON),
router.NewGetRoute("/images/search", ir.getImagesSearch),
router.NewGetRoute("/images/get", ir.getImagesGet),
router.NewGetRoute("/images/{name:.*}/get", ir.getImagesGet),
router.NewGetRoute("/images/{name:.*}/history", ir.getImagesHistory),
router.NewGetRoute("/images/{name:.*}/json", ir.getImagesByName),
router.NewGetRoute("/images/json", r.getImagesJSON),
router.NewGetRoute("/images/search", r.getImagesSearch),
router.NewGetRoute("/images/get", r.getImagesGet),
router.NewGetRoute("/images/{name:.*}/get", r.getImagesGet),
router.NewGetRoute("/images/{name:.*}/history", r.getImagesHistory),
router.NewGetRoute("/images/{name:.*}/json", r.getImagesByName),
// POST
router.NewPostRoute("/images/load", ir.postImagesLoad),
router.NewPostRoute("/images/create", ir.postImagesCreate),
router.NewPostRoute("/images/{name:.*}/push", ir.postImagesPush),
router.NewPostRoute("/images/{name:.*}/tag", ir.postImagesTag),
router.NewPostRoute("/images/prune", ir.postImagesPrune),
router.NewPostRoute("/images/load", r.postImagesLoad),
router.NewPostRoute("/images/create", r.postImagesCreate),
router.NewPostRoute("/images/{name:.*}/push", r.postImagesPush),
router.NewPostRoute("/images/{name:.*}/tag", r.postImagesTag),
router.NewPostRoute("/images/prune", r.postImagesPrune),
// DELETE
router.NewDeleteRoute("/images/{name:.*}", ir.deleteImages),
router.NewDeleteRoute("/images/{name:.*}", r.deleteImages),
}
}

View File

@@ -1,50 +1,42 @@
package image
package image // import "github.com/docker/docker/api/server/router/image"
import (
"context"
"fmt"
"io"
"encoding/base64"
"encoding/json"
"net/http"
"net/url"
"strconv"
"strings"
"time"
"github.com/containerd/platforms"
"github.com/distribution/reference"
"github.com/docker/docker/api"
"github.com/containerd/containerd/platforms"
"github.com/docker/docker/api/server/httputils"
"github.com/docker/docker/api/types/backend"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/filters"
imagetypes "github.com/docker/docker/api/types/image"
"github.com/docker/docker/api/types/registry"
"github.com/docker/docker/api/types/versions"
"github.com/docker/docker/builder/remotecontext"
"github.com/docker/docker/dockerversion"
"github.com/docker/docker/errdefs"
"github.com/docker/docker/image"
"github.com/docker/docker/pkg/ioutils"
"github.com/docker/docker/pkg/progress"
"github.com/docker/docker/pkg/streamformatter"
"github.com/opencontainers/go-digest"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/docker/docker/pkg/system"
"github.com/docker/docker/registry"
specs "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
)
// Creates an image from Pull or from Import
func (ir *imageRouter) postImagesCreate(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
func (s *imageRouter) postImagesCreate(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
}
var (
img = r.Form.Get("fromImage")
repo = r.Form.Get("repo")
tag = r.Form.Get("tag")
comment = r.Form.Get("message")
progressErr error
output = ioutils.NewWriteFlusher(w)
platform *ocispec.Platform
image = r.Form.Get("fromImage")
repo = r.Form.Get("repo")
tag = r.Form.Get("tag")
message = r.Form.Get("message")
err error
output = ioutils.NewWriteFlusher(w)
platform *specs.Platform
)
defer output.Close()
@@ -52,16 +44,20 @@ func (ir *imageRouter) postImagesCreate(ctx context.Context, w http.ResponseWrit
version := httputils.VersionFromContext(ctx)
if versions.GreaterThanOrEqualTo(version, "1.32") {
if p := r.FormValue("platform"); p != "" {
sp, err := platforms.Parse(p)
apiPlatform := r.FormValue("platform")
if apiPlatform != "" {
sp, err := platforms.Parse(apiPlatform)
if err != nil {
return errdefs.InvalidParameter(err)
return err
}
if err := system.ValidatePlatform(sp); err != nil {
return err
}
platform = &sp
}
}
if img != "" { // pull
if image != "" { // pull
metaHeaders := map[string][]string{}
for k, v := range r.Header {
if strings.HasPrefix(k, "X-Meta-") {
@@ -69,94 +65,39 @@ func (ir *imageRouter) postImagesCreate(ctx context.Context, w http.ResponseWrit
}
}
// Special case: "pull -a" may send an image name with a
// trailing :. This is ugly, but let's not break API
// compatibility.
imgName := strings.TrimSuffix(img, ":")
ref, err := reference.ParseNormalizedNamed(imgName)
if err != nil {
return errdefs.InvalidParameter(err)
}
// TODO(thaJeztah) this could use a WithTagOrDigest() utility
if tag != "" {
// The "tag" could actually be a digest.
var dgst digest.Digest
dgst, err = digest.Parse(tag)
if err == nil {
ref, err = reference.WithDigest(reference.TrimNamed(ref), dgst)
} else {
ref, err = reference.WithTag(ref, tag)
}
if err != nil {
return errdefs.InvalidParameter(err)
authEncoded := r.Header.Get("X-Registry-Auth")
authConfig := &types.AuthConfig{}
if authEncoded != "" {
authJSON := base64.NewDecoder(base64.URLEncoding, strings.NewReader(authEncoded))
if err := json.NewDecoder(authJSON).Decode(authConfig); err != nil {
// For a pull it is not an error if no auth was given; to increase
// compatibility with the existing API it defaults to empty.
authConfig = &types.AuthConfig{}
}
}
if err := validateRepoName(ref); err != nil {
return errdefs.Forbidden(err)
}
// For a pull it is not an error if no auth was given. Ignore invalid
// AuthConfig to increase compatibility with the existing API.
//
// TODO(thaJeztah): accept empty values but return an error when failing to decode.
authConfig, _ := registry.DecodeAuthConfig(r.Header.Get(registry.AuthHeader))
progressErr = ir.backend.PullImage(ctx, ref, platform, metaHeaders, authConfig, output)
err = s.backend.PullImage(ctx, image, tag, platform, metaHeaders, authConfig, output)
} else { // import
src := r.Form.Get("fromSrc")
tagRef, err := httputils.RepoTagReference(repo, tag)
if err != nil {
return errdefs.InvalidParameter(err)
}
if comment == "" {
comment = "Imported from " + src
}
var layerReader io.ReadCloser
defer r.Body.Close()
if src == "-" {
layerReader = r.Body
} else {
if len(strings.Split(src, "://")) == 1 {
src = "http://" + src
}
u, err := url.Parse(src)
if err != nil {
return errdefs.InvalidParameter(err)
}
resp, err := remotecontext.GetWithStatusError(u.String())
if err != nil {
return err
}
output.Write(streamformatter.FormatStatus("", "Downloading from %s", u))
progressOutput := streamformatter.NewJSONProgressOutput(output, true)
layerReader = progress.NewProgressReader(resp.Body, progressOutput, resp.ContentLength, "", "Importing")
defer layerReader.Close()
}
var id image.ID
id, progressErr = ir.backend.ImportImage(ctx, tagRef, platform, comment, layerReader, r.Form["changes"])
if progressErr == nil {
_, _ = output.Write(streamformatter.FormatStatus("", "%v", id.String()))
// 'err' MUST NOT be defined within this block; we need any error
// generated from the download to be available to the output
// stream processing below
os := ""
if platform != nil {
os = platform.OS
}
err = s.backend.ImportImage(src, repo, os, tag, message, r.Body, output, r.Form["changes"])
}
if progressErr != nil {
if err != nil {
if !output.Flushed() {
return progressErr
return err
}
_, _ = output.Write(streamformatter.FormatError(progressErr))
_, _ = output.Write(streamformatter.FormatError(err))
}
return nil
}
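The tag handling in the pull path above (which the TODO suggests extracting into a WithTagOrDigest() utility) condenses to a try-digest-then-tag parse. A minimal restatement of that pattern, using the newer github.com/distribution/reference import path:

package sketch

import (
	"github.com/distribution/reference"
	"github.com/opencontainers/go-digest"
)

// withTagOrDigest restates the pattern above: the "tag" form value may
// actually be a digest, so try parsing a digest first, then fall back to
// treating it as a tag.
func withTagOrDigest(ref reference.Named, tagOrDigest string) (reference.Named, error) {
	if dgst, err := digest.Parse(tagOrDigest); err == nil {
		return reference.WithDigest(reference.TrimNamed(ref), dgst)
	}
	return reference.WithTag(ref, tagOrDigest)
}
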
func (ir *imageRouter) postImagesPush(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
func (s *imageRouter) postImagesPush(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
metaHeaders := map[string][]string{}
for k, v := range r.Header {
if strings.HasPrefix(k, "X-Meta-") {
@@ -166,56 +107,32 @@ func (ir *imageRouter) postImagesPush(ctx context.Context, w http.ResponseWriter
if err := httputils.ParseForm(r); err != nil {
return err
}
authConfig := &types.AuthConfig{}
// Handle the authConfig as a header, but ignore invalid AuthConfig
// to increase compatibility with the existing API.
//
// TODO(thaJeztah): accept empty values but return an error when failing to decode.
authConfig, _ := registry.DecodeAuthConfig(r.Header.Get(registry.AuthHeader))
authEncoded := r.Header.Get("X-Registry-Auth")
if authEncoded != "" {
// the new format is to handle the authConfig as a header
authJSON := base64.NewDecoder(base64.URLEncoding, strings.NewReader(authEncoded))
if err := json.NewDecoder(authJSON).Decode(authConfig); err != nil {
// to increase compatibility with the existing API it defaults to empty
authConfig = &types.AuthConfig{}
}
} else {
// the old format is supported for compatibility if there was no authConfig header
if err := json.NewDecoder(r.Body).Decode(authConfig); err != nil {
return errors.Wrap(errdefs.InvalidParameter(err), "Bad parameters and missing X-Registry-Auth")
}
}
image := vars["name"]
tag := r.Form.Get("tag")
output := ioutils.NewWriteFlusher(w)
defer output.Close()
w.Header().Set("Content-Type", "application/json")
img := vars["name"]
tag := r.Form.Get("tag")
var ref reference.Named
// Tag is empty only when PushOptions.All is true.
if tag != "" {
r, err := httputils.RepoTagReference(img, tag)
if err != nil {
return errdefs.InvalidParameter(err)
}
ref = r
} else {
r, err := reference.ParseNormalizedNamed(img)
if err != nil {
return errdefs.InvalidParameter(err)
}
ref = r
}
var platform *ocispec.Platform
// Platform is optional, and only supported in API version 1.46 and later.
// However, the PushOptions struct was previously an alias for the PullOptions struct,
// which also contained a Platform field.
// This means that older clients may be sending a platform field, even
// though it wasn't really supported by the server.
// Don't break these clients and just ignore the platform field on older APIs.
if versions.GreaterThanOrEqualTo(httputils.VersionFromContext(ctx), "1.46") {
if formPlatform := r.Form.Get("platform"); formPlatform != "" {
p, err := httputils.DecodePlatform(formPlatform)
if err != nil {
return err
}
platform = p
}
}
if err := ir.backend.PushImage(ctx, ref, platform, metaHeaders, authConfig, output); err != nil {
if err := s.backend.PushImage(ctx, image, tag, metaHeaders, authConfig, output); err != nil {
if !output.Flushed() {
return err
}
@@ -224,7 +141,7 @@ func (ir *imageRouter) postImagesPush(ctx context.Context, w http.ResponseWriter
return nil
}
func (ir *imageRouter) getImagesGet(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
func (s *imageRouter) getImagesGet(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
}
@@ -240,22 +157,7 @@ func (ir *imageRouter) getImagesGet(ctx context.Context, w http.ResponseWriter,
names = r.Form["names"]
}
var platform *ocispec.Platform
if versions.GreaterThanOrEqualTo(httputils.VersionFromContext(ctx), "1.48") {
if formPlatforms := r.Form["platform"]; len(formPlatforms) > 1 {
// TODO(thaJeztah): remove once we support multiple platforms: see https://github.com/moby/moby/issues/48759
return errdefs.InvalidParameter(errors.New("multiple platform parameters not supported"))
}
if formPlatform := r.Form.Get("platform"); formPlatform != "" {
p, err := httputils.DecodePlatform(formPlatform)
if err != nil {
return err
}
platform = p
}
}
if err := ir.backend.ExportImage(ctx, names, platform, output); err != nil {
if err := s.backend.ExportImage(names, output); err != nil {
if !output.Flushed() {
return err
}
@@ -264,32 +166,17 @@ func (ir *imageRouter) getImagesGet(ctx context.Context, w http.ResponseWriter,
return nil
}
func (ir *imageRouter) postImagesLoad(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
func (s *imageRouter) postImagesLoad(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
}
var platform *ocispec.Platform
if versions.GreaterThanOrEqualTo(httputils.VersionFromContext(ctx), "1.48") {
if formPlatforms := r.Form["platform"]; len(formPlatforms) > 1 {
// TODO(thaJeztah): remove once we support multiple platforms: see https://github.com/moby/moby/issues/48759
return errdefs.InvalidParameter(errors.New("multiple platform parameters not supported"))
}
if formPlatform := r.Form.Get("platform"); formPlatform != "" {
p, err := httputils.DecodePlatform(formPlatform)
if err != nil {
return err
}
platform = p
}
}
quiet := httputils.BoolValueOrDefault(r, "quiet", true)
w.Header().Set("Content-Type", "application/json")
output := ioutils.NewWriteFlusher(w)
defer output.Close()
if err := ir.backend.LoadImage(ctx, r.Body, platform, output, quiet); err != nil {
if err := s.backend.LoadImage(r.Body, output, quiet); err != nil {
_, _ = output.Write(streamformatter.FormatError(err))
}
return nil
@@ -303,7 +190,7 @@ func (missingImageError) Error() string {
func (missingImageError) InvalidParameter() {}
func (ir *imageRouter) deleteImages(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
func (s *imageRouter) deleteImages(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
}
@@ -317,20 +204,7 @@ func (ir *imageRouter) deleteImages(ctx context.Context, w http.ResponseWriter,
force := httputils.BoolValue(r, "force")
prune := !httputils.BoolValue(r, "noprune")
var platforms []ocispec.Platform
if versions.GreaterThanOrEqualTo(httputils.VersionFromContext(ctx), "1.50") {
p, err := httputils.DecodePlatforms(r.Form["platforms"])
if err != nil {
return err
}
platforms = p
}
list, err := ir.backend.ImageDelete(ctx, name, imagetypes.RemoveOptions{
Force: force,
PruneChildren: prune,
Platforms: platforms,
})
list, err := s.backend.ImageDelete(name, force, prune)
if err != nil {
return err
}
@@ -338,79 +212,16 @@ func (ir *imageRouter) deleteImages(ctx context.Context, w http.ResponseWriter,
return httputils.WriteJSON(w, http.StatusOK, list)
}
func (ir *imageRouter) getImagesByName(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
}
var manifests bool
if r.Form.Get("manifests") != "" && versions.GreaterThanOrEqualTo(httputils.VersionFromContext(ctx), "1.48") {
manifests = httputils.BoolValue(r, "manifests")
}
var platform *ocispec.Platform
if r.Form.Get("platform") != "" && versions.GreaterThanOrEqualTo(httputils.VersionFromContext(ctx), "1.49") {
p, err := httputils.DecodePlatform(r.Form.Get("platform"))
if err != nil {
return errdefs.InvalidParameter(err)
}
platform = p
}
if manifests && platform != nil {
return errdefs.InvalidParameter(errors.New("conflicting options: manifests and platform options cannot both be set"))
}
resp, err := ir.backend.ImageInspect(ctx, vars["name"], backend.ImageInspectOpts{
Manifests: manifests,
Platform: platform,
})
func (s *imageRouter) getImagesByName(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
imageInspect, err := s.backend.LookupImage(vars["name"])
if err != nil {
return err
}
// inspectResponse preserves fields in the response that have an
// "omitempty" in the OCI spec, but didn't omit such fields in
// legacy responses before API v1.50.
imageInspect := &inspectCompatResponse{
InspectResponse: resp,
legacyConfig: legacyConfigFields["current"],
}
// Make sure we output empty arrays instead of nil. While a Go nil slice is functionally equivalent to an empty slice,
// it matters for the JSON representation.
if imageInspect.RepoTags == nil {
imageInspect.RepoTags = []string{}
}
if imageInspect.RepoDigests == nil {
imageInspect.RepoDigests = []string{}
}
version := httputils.VersionFromContext(ctx)
if versions.LessThan(version, "1.44") {
imageInspect.VirtualSize = imageInspect.Size //nolint:staticcheck // ignore SA1019: field is deprecated, but still set on API < v1.44.
if imageInspect.Created == "" {
// Backwards compatibility: when Created is not set, return "0001-01-01T00:00:00Z"
// https://github.com/moby/moby/issues/47368
imageInspect.Created = time.Time{}.Format(time.RFC3339Nano)
}
}
if versions.GreaterThanOrEqualTo(version, "1.45") {
imageInspect.Container = "" //nolint:staticcheck // ignore SA1019: field is deprecated, but still set on API < v1.45.
imageInspect.ContainerConfig = nil //nolint:staticcheck // ignore SA1019: field is deprecated, but still set on API < v1.45.
}
if versions.LessThan(version, "1.48") {
imageInspect.Descriptor = nil
}
if versions.LessThan(version, "1.50") {
imageInspect.legacyConfig = legacyConfigFields["v1.49"]
}
return httputils.WriteJSON(w, http.StatusOK, imageInspect)
}
func (ir *imageRouter) getImagesJSON(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
func (s *imageRouter) getImagesJSON(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
}
@@ -422,82 +233,23 @@ func (ir *imageRouter) getImagesJSON(ctx context.Context, w http.ResponseWriter,
version := httputils.VersionFromContext(ctx)
if versions.LessThan(version, "1.41") {
// NOTE: filter is a shell glob string applied to repository names.
filterParam := r.Form.Get("filter")
if filterParam != "" {
imageFilters.Add("reference", filterParam)
}
}
var sharedSize bool
if versions.GreaterThanOrEqualTo(version, "1.42") {
// NOTE: Support for the "shared-size" parameter was added in API 1.42.
sharedSize = httputils.BoolValue(r, "shared-size")
}
var manifests bool
if versions.GreaterThanOrEqualTo(version, "1.47") {
manifests = httputils.BoolValue(r, "manifests")
}
images, err := ir.backend.Images(ctx, imagetypes.ListOptions{
All: httputils.BoolValue(r, "all"),
Filters: imageFilters,
SharedSize: sharedSize,
Manifests: manifests,
})
images, err := s.backend.Images(imageFilters, httputils.BoolValue(r, "all"), false)
if err != nil {
return err
}
useNone := versions.LessThan(version, "1.43")
withVirtualSize := versions.LessThan(version, "1.44")
noDescriptor := versions.LessThan(version, "1.48")
noContainers := versions.LessThan(version, "1.51")
for _, img := range images {
if useNone {
if len(img.RepoTags) == 0 && len(img.RepoDigests) == 0 {
img.RepoTags = append(img.RepoTags, "<none>:<none>")
img.RepoDigests = append(img.RepoDigests, "<none>@<none>")
}
} else {
if img.RepoTags == nil {
img.RepoTags = []string{}
}
if img.RepoDigests == nil {
img.RepoDigests = []string{}
}
}
if withVirtualSize {
img.VirtualSize = img.Size //nolint:staticcheck // ignore SA1019: field is deprecated, but still set on API < v1.44.
}
if noDescriptor {
img.Descriptor = nil
}
if noContainers {
img.Containers = -1
}
}
return httputils.WriteJSON(w, http.StatusOK, images)
}
func (ir *imageRouter) getImagesHistory(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
}
var platform *ocispec.Platform
if versions.GreaterThanOrEqualTo(httputils.VersionFromContext(ctx), "1.48") {
if formPlatform := r.Form.Get("platform"); formPlatform != "" {
p, err := httputils.DecodePlatform(formPlatform)
if err != nil {
return err
}
platform = p
}
}
history, err := ir.backend.ImageHistory(ctx, vars["name"], platform)
func (s *imageRouter) getImagesHistory(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
name := vars["name"]
history, err := s.backend.ImageHistory(name)
if err != nil {
return err
}
@@ -505,71 +257,56 @@ func (ir *imageRouter) getImagesHistory(ctx context.Context, w http.ResponseWrit
return httputils.WriteJSON(w, http.StatusOK, history)
}
func (ir *imageRouter) postImagesTag(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
func (s *imageRouter) postImagesTag(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
}
ref, err := httputils.RepoTagReference(r.Form.Get("repo"), r.Form.Get("tag"))
if ref == nil || err != nil {
return errdefs.InvalidParameter(err)
}
refName := reference.FamiliarName(ref)
if refName == string(digest.Canonical) {
return errdefs.InvalidParameter(errors.New("refusing to create an ambiguous tag using digest algorithm as name"))
}
img, err := ir.backend.GetImage(ctx, vars["name"], backend.GetImageOpts{})
if err != nil {
return errdefs.NotFound(err)
}
if err := ir.backend.TagImage(ctx, img.ID(), ref); err != nil {
if _, err := s.backend.TagImage(vars["name"], r.Form.Get("repo"), r.Form.Get("tag")); err != nil {
return err
}
w.WriteHeader(http.StatusCreated)
return nil
}
func (ir *imageRouter) getImagesSearch(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
func (s *imageRouter) getImagesSearch(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
}
var (
config *types.AuthConfig
authEncoded = r.Header.Get("X-Registry-Auth")
headers = map[string][]string{}
)
var limit int
if r.Form.Get("limit") != "" {
var err error
limit, err = strconv.Atoi(r.Form.Get("limit"))
if err != nil || limit < 0 {
return errdefs.InvalidParameter(errors.Wrap(err, "invalid limit specified"))
if authEncoded != "" {
authJSON := base64.NewDecoder(base64.URLEncoding, strings.NewReader(authEncoded))
if err := json.NewDecoder(authJSON).Decode(&config); err != nil {
// For a search it is not an error if no auth was given; to increase
// compatibility with the existing API it defaults to empty.
config = &types.AuthConfig{}
}
}
searchFilters, err := filters.FromJSON(r.Form.Get("filters"))
if err != nil {
return err
}
// For a search it is not an error if no auth was given. Ignore invalid
// AuthConfig to increase compatibility with the existing API.
authConfig, _ := registry.DecodeAuthConfig(r.Header.Get(registry.AuthHeader))
headers := http.Header{}
for k, v := range r.Header {
k = http.CanonicalHeaderKey(k)
if strings.HasPrefix(k, "X-Meta-") {
headers[k] = v
}
}
headers.Set("User-Agent", dockerversion.DockerUserAgent(ctx))
res, err := ir.searcher.Search(ctx, searchFilters, r.Form.Get("term"), limit, authConfig, headers)
limit := registry.DefaultSearchLimit
if r.Form.Get("limit") != "" {
limitValue, err := strconv.Atoi(r.Form.Get("limit"))
if err != nil {
return err
}
limit = limitValue
}
query, err := s.backend.SearchRegistryForImages(ctx, r.Form.Get("filters"), r.Form.Get("term"), limit, config, headers)
if err != nil {
return err
}
return httputils.WriteJSON(w, http.StatusOK, res)
return httputils.WriteJSON(w, http.StatusOK, query.Results)
}
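
Both versions of the search handler prepare the request the same way: the base64 X-Registry-Auth header is decoded tolerantly (an invalid value is not an error for search), and only X-Meta-* headers are forwarded. A self-contained sketch, with authConfig standing in for the daemon's registry.AuthConfig:

```go
package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
	"net/http"
	"strings"
)

type authConfig struct {
	Username string
	Password string
}

// decodeAuth mirrors the tolerant behavior above: any decode failure
// falls back to empty credentials instead of failing the request.
func decodeAuth(header string) *authConfig {
	var cfg authConfig
	dec := json.NewDecoder(base64.NewDecoder(base64.URLEncoding, strings.NewReader(header)))
	if err := dec.Decode(&cfg); err != nil {
		return &authConfig{}
	}
	return &cfg
}

// metaHeaders keeps only X-Meta-* headers, as the handler does before
// handing them to the search backend.
func metaHeaders(in http.Header) http.Header {
	out := http.Header{}
	for k, v := range in {
		if strings.HasPrefix(http.CanonicalHeaderKey(k), "X-Meta-") {
			out[http.CanonicalHeaderKey(k)] = v
		}
	}
	return out
}

func main() {
	fmt.Println(decodeAuth("not-valid-auth")) // tolerated: empty credentials
	h := http.Header{"X-Meta-Foo": {"bar"}, "Accept": {"*/*"}}
	fmt.Println(metaHeaders(h)) // map[X-Meta-Foo:[bar]]
}
```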
func (ir *imageRouter) postImagesPrune(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
func (s *imageRouter) postImagesPrune(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
}
@@ -579,18 +316,9 @@ func (ir *imageRouter) postImagesPrune(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
return err
}
pruneReport, err := ir.backend.ImagesPrune(ctx, pruneFilters)
pruneReport, err := s.backend.ImagesPrune(ctx, pruneFilters)
if err != nil {
return err
}
return httputils.WriteJSON(w, http.StatusOK, pruneReport)
}
// validateRepoName validates the name of a repository.
func validateRepoName(name reference.Named) error {
familiarName := reference.FamiliarName(name)
if familiarName == api.NoBaseImageSpecifier {
return fmt.Errorf("'%s' is a reserved name", familiarName)
}
return nil
}


@@ -1,88 +0,0 @@
// FIXME(thaJeztah): remove once we are a module; the go:build directive prevents go from downgrading language version to go1.16:
//go:build go1.23
package image
import (
"encoding/json"
"maps"
"github.com/docker/docker/api/types/image"
)
// legacyConfigFields defines legacy image-config fields to include in
// API responses on older API versions.
var legacyConfigFields = map[string]map[string]any{
// Legacy fields for API v1.49 and lower. These fields are deprecated
// and omitted in newer API versions; see https://github.com/moby/moby/pull/48457
"v1.49": {
"AttachStderr": false,
"AttachStdin": false,
"AttachStdout": false,
"Cmd": nil,
"Domainname": "",
"Entrypoint": nil,
"Env": nil,
"Hostname": "",
"Image": "",
"Labels": nil,
"OnBuild": nil,
"OpenStdin": false,
"StdinOnce": false,
"Tty": false,
"User": "",
"Volumes": nil,
"WorkingDir": "",
},
// Legacy fields for current API versions (v1.50 and up). These fields
// did not have an "omitempty" and were always included in the response,
// even if not set; see https://github.com/moby/moby/issues/50134
"current": {
"Cmd": nil,
"Entrypoint": nil,
"Env": nil,
"Labels": nil,
"OnBuild": nil,
"User": "",
"Volumes": nil,
"WorkingDir": "",
},
}
// inspectCompatResponse is a wrapper around [image.InspectResponse] with a
// custom marshal function for legacy [api/types/container.Config] fields
// that have been removed, or did not have omitempty.
type inspectCompatResponse struct {
*image.InspectResponse
legacyConfig map[string]any
}
// MarshalJSON implements a custom marshaler to include legacy fields
// in API responses.
func (ir *inspectCompatResponse) MarshalJSON() ([]byte, error) {
type tmp *image.InspectResponse
base, err := json.Marshal((tmp)(ir.InspectResponse))
if err != nil {
return nil, err
}
if len(ir.legacyConfig) == 0 {
return base, nil
}
type resp struct {
*image.InspectResponse
Config map[string]any
}
var merged resp
err = json.Unmarshal(base, &merged)
if err != nil {
return base, nil
}
// prevent mutating legacyConfigFields.
cfg := maps.Clone(ir.legacyConfig)
maps.Copy(cfg, merged.Config)
merged.Config = cfg
return json.Marshal(merged)
}
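
The order of operations in the marshaler above is what makes the compatibility shim safe: the legacy defaults are cloned, then the real config is copied over them, so explicitly set fields always win. The same semantics in isolation:

```go
package main

import (
	"encoding/json"
	"fmt"
	"maps"
)

// mergeWithLegacy overlays the real config onto a clone of the legacy
// defaults, so fields that are actually set always take precedence.
func mergeWithLegacy(cfg, legacy map[string]any) ([]byte, error) {
	merged := maps.Clone(legacy) // never mutate the shared defaults
	maps.Copy(merged, cfg)       // real values override the legacy zeros
	return json.Marshal(merged)
}

func main() {
	legacy := map[string]any{"Cmd": nil, "User": "", "WorkingDir": ""}
	cfg := map[string]any{"Cmd": []string{"/bin/sh"}}
	out, _ := mergeWithLegacy(cfg, legacy)
	fmt.Println(string(out)) // {"Cmd":["/bin/sh"],"User":"","WorkingDir":""}
}
```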


@@ -1,74 +0,0 @@
package image
import (
"encoding/json"
"testing"
"github.com/docker/docker/api/types/image"
dockerspec "github.com/moby/docker-image-spec/specs-go/v1"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"gotest.tools/v3/assert"
is "gotest.tools/v3/assert/cmp"
)
func TestInspectResponse(t *testing.T) {
tests := []struct {
doc string
cfg *ocispec.ImageConfig
legacyConfig map[string]any
expected string
}{
{
doc: "empty",
expected: `null`,
},
{
doc: "no legacy config",
cfg: &ocispec.ImageConfig{
Cmd: []string{"/bin/sh"},
StopSignal: "SIGQUIT",
},
expected: `{"Cmd":["/bin/sh"],"StopSignal":"SIGQUIT"}`,
},
{
doc: "api < v1.50",
cfg: &ocispec.ImageConfig{
Cmd: []string{"/bin/sh"},
StopSignal: "SIGQUIT",
},
legacyConfig: legacyConfigFields["v1.49"],
expected: `{"AttachStderr":false,"AttachStdin":false,"AttachStdout":false,"Cmd":["/bin/sh"],"Domainname":"","Entrypoint":null,"Env":null,"Hostname":"","Image":"","Labels":null,"OnBuild":null,"OpenStdin":false,"StdinOnce":false,"StopSignal":"SIGQUIT","Tty":false,"User":"","Volumes":null,"WorkingDir":""}`,
},
{
doc: "api >= v1.50",
cfg: &ocispec.ImageConfig{
Cmd: []string{"/bin/sh"},
StopSignal: "SIGQUIT",
},
legacyConfig: legacyConfigFields["current"],
expected: `{"Cmd":["/bin/sh"],"Entrypoint":null,"Env":null,"Labels":null,"OnBuild":null,"StopSignal":"SIGQUIT","User":"","Volumes":null,"WorkingDir":""}`,
},
}
for _, tc := range tests {
t.Run(tc.doc, func(t *testing.T) {
imgInspect := &image.InspectResponse{}
if tc.cfg != nil {
// Verify that fields that are set override the legacy values,
// or are appended if not part of the legacy values.
imgInspect.Config = &dockerspec.DockerOCIImageConfig{
ImageConfig: *tc.cfg,
}
}
out, err := json.Marshal(&inspectCompatResponse{
InspectResponse: imgInspect,
legacyConfig: tc.legacyConfig,
})
assert.NilError(t, err)
var outMap struct{ Config json.RawMessage }
err = json.Unmarshal(out, &outMap)
assert.NilError(t, err)
assert.Check(t, is.Equal(string(outMap.Config), tc.expected))
})
}
}


@@ -1,4 +1,4 @@
package router
package router // import "github.com/docker/docker/api/server/router"
import (
"net/http"


@@ -1,30 +1,32 @@
package network
package network // import "github.com/docker/docker/api/server/router/network"
import (
"context"
"github.com/docker/docker/api/types/backend"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/filters"
"github.com/docker/docker/api/types/network"
"github.com/docker/libnetwork"
)
// Backend is all the methods that need to be implemented
// to provide network specific functionality.
type Backend interface {
GetNetworks(filters.Args, backend.NetworkListConfig) ([]network.Inspect, error)
CreateNetwork(ctx context.Context, nc network.CreateRequest) (*network.CreateResponse, error)
ConnectContainerToNetwork(ctx context.Context, containerName, networkName string, endpointConfig *network.EndpointSettings) error
FindNetwork(idName string) (libnetwork.Network, error)
GetNetworks(filters.Args, types.NetworkListConfig) ([]types.NetworkResource, error)
CreateNetwork(nc types.NetworkCreateRequest) (*types.NetworkCreateResponse, error)
ConnectContainerToNetwork(containerName, networkName string, endpointConfig *network.EndpointSettings) error
DisconnectContainerFromNetwork(containerName string, networkName string, force bool) error
DeleteNetwork(networkID string) error
NetworksPrune(ctx context.Context, pruneFilters filters.Args) (*network.PruneReport, error)
NetworksPrune(ctx context.Context, pruneFilters filters.Args) (*types.NetworksPruneReport, error)
}
// ClusterBackend is all the methods that need to be implemented
// to provide cluster network specific functionality.
type ClusterBackend interface {
GetNetworks(filters.Args) ([]network.Inspect, error)
GetNetwork(name string) (network.Inspect, error)
GetNetworksByName(name string) ([]network.Inspect, error)
CreateNetwork(nc network.CreateRequest) (string, error)
GetNetworks(filters.Args) ([]types.NetworkResource, error)
GetNetwork(name string) (types.NetworkResource, error)
GetNetworksByName(name string) ([]types.NetworkResource, error)
CreateNetwork(nc types.NetworkCreateRequest) (string, error)
RemoveNetwork(name string) error
}
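
The router is written against these interfaces rather than the daemon type itself, which is what lets tests substitute a fake backend. A toy illustration of that decoupling, using a hypothetical one-method slice of Backend:

```go
package main

import "fmt"

// lister is a hypothetical one-method slice of the Backend interface
// above; the real interface is wider, but the decoupling works the same.
type lister interface {
	GetNetworks() ([]string, error)
}

// fakeBackend is the kind of stub a router test could supply.
type fakeBackend struct{ names []string }

func (f fakeBackend) GetNetworks() ([]string, error) { return f.names, nil }

func main() {
	var b lister = fakeBackend{names: []string{"bridge", "host", "none"}}
	nws, err := b.GetNetworks()
	fmt.Println(nws, err)
}
```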


@@ -0,0 +1 @@
package network // import "github.com/docker/docker/api/server/router/network"


@@ -1,4 +1,4 @@
package network
package network // import "github.com/docker/docker/api/server/router/network"
import (
"github.com/docker/docker/api/server/router"
@@ -22,22 +22,22 @@ func NewRouter(b Backend, c ClusterBackend) router.Router {
}
// Routes returns the available routes to the network controller
func (n *networkRouter) Routes() []router.Route {
return n.routes
func (r *networkRouter) Routes() []router.Route {
return r.routes
}
func (n *networkRouter) initRoutes() {
n.routes = []router.Route{
func (r *networkRouter) initRoutes() {
r.routes = []router.Route{
// GET
router.NewGetRoute("/networks", n.getNetworksList),
router.NewGetRoute("/networks/", n.getNetworksList),
router.NewGetRoute("/networks/{id:.+}", n.getNetwork),
router.NewGetRoute("/networks", r.getNetworksList),
router.NewGetRoute("/networks/", r.getNetworksList),
router.NewGetRoute("/networks/{id:.+}", r.getNetwork),
// POST
router.NewPostRoute("/networks/create", n.postNetworkCreate),
router.NewPostRoute("/networks/{id:.*}/connect", n.postNetworkConnect),
router.NewPostRoute("/networks/{id:.*}/disconnect", n.postNetworkDisconnect),
router.NewPostRoute("/networks/prune", n.postNetworksPrune),
router.NewPostRoute("/networks/create", r.postNetworkCreate),
router.NewPostRoute("/networks/{id:.*}/connect", r.postNetworkConnect),
router.NewPostRoute("/networks/{id:.*}/disconnect", r.postNetworkDisconnect),
router.NewPostRoute("/networks/prune", r.postNetworksPrune),
// DELETE
router.NewDeleteRoute("/networks/{id:.*}", n.deleteNetwork),
router.NewDeleteRoute("/networks/{id:.*}", r.deleteNetwork),
}
}
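
initRoutes maps each method-and-path pattern to a handler method. For comparison, a standalone sketch of the same table on the standard library's method-aware ServeMux (Go 1.22+), not the daemon's router helpers:

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	mux := http.NewServeMux()
	// h returns a stub handler so each route is visibly distinct.
	h := func(name string) http.HandlerFunc {
		return func(w http.ResponseWriter, r *http.Request) {
			fmt.Fprintf(w, "%s id=%s\n", name, r.PathValue("id"))
		}
	}
	mux.HandleFunc("GET /networks", h("list"))
	mux.HandleFunc("GET /networks/{id}", h("inspect"))
	mux.HandleFunc("POST /networks/create", h("create"))
	mux.HandleFunc("POST /networks/{id}/connect", h("connect"))
	mux.HandleFunc("POST /networks/{id}/disconnect", h("disconnect"))
	mux.HandleFunc("POST /networks/prune", h("prune"))
	mux.HandleFunc("DELETE /networks/{id}", h("delete"))
	http.ListenAndServe(":8080", mux) // sketch only; error ignored
}
```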


@@ -1,19 +1,21 @@
package network
package network // import "github.com/docker/docker/api/server/router/network"
import (
"context"
"encoding/json"
"io"
"net/http"
"strconv"
"strings"
"github.com/docker/docker/api/server/httputils"
"github.com/docker/docker/api/types/backend"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/filters"
"github.com/docker/docker/api/types/network"
"github.com/docker/docker/api/types/versions"
"github.com/docker/docker/errdefs"
"github.com/docker/docker/libnetwork"
"github.com/docker/docker/libnetwork/scope"
"github.com/docker/libnetwork"
netconst "github.com/docker/libnetwork/datastore"
"github.com/pkg/errors"
)
@@ -28,10 +30,10 @@ func (n *networkRouter) getNetworksList(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
}
if err := network.ValidateFilters(filter); err != nil {
return err
return errdefs.InvalidParameter(err)
}
var list []network.Summary
var list []types.NetworkResource
nr, err := n.cluster.GetNetworks(filter)
if err == nil {
list = nr
@@ -39,7 +41,7 @@ func (n *networkRouter) getNetworksList(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
// Combine the network list returned by Docker daemon if it is not already
// returned by the cluster manager
localNetworks, err := n.backend.GetNetworks(filter, backend.NetworkListConfig{Detailed: versions.LessThan(httputils.VersionFromContext(ctx), "1.28")})
localNetworks, err := n.backend.GetNetworks(filter, types.NetworkListConfig{Detailed: versions.LessThan(httputils.VersionFromContext(ctx), "1.28")})
if err != nil {
return err
}
@@ -59,7 +61,7 @@ func (n *networkRouter) getNetworksList(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
}
if list == nil {
list = []network.Summary{}
list = []types.NetworkResource{}
}
return httputils.WriteJSON(w, http.StatusOK, list)
@@ -75,13 +77,17 @@ func (e invalidRequestError) Error() string {
func (e invalidRequestError) InvalidParameter() {}
type ambiguousResultsError string
type ambigousResultsError string
func (e ambiguousResultsError) Error() string {
func (e ambigousResultsError) Error() string {
return "network " + string(e) + " is ambiguous"
}
func (ambiguousResultsError) InvalidParameter() {}
func (ambigousResultsError) InvalidParameter() {}
func nameConflict(name string) error {
return errdefs.Conflict(libnetwork.NetworkNameError(name))
}
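
The error types in this file follow the errdefs convention: a bodyless marker method such as InvalidParameter() classifies the error, and the HTTP layer discovers the marker with an interface assertion. A self-contained sketch of that mechanism:

```go
package main

import (
	"errors"
	"fmt"
)

// ambiguousResultsError mirrors the router's error type: the string is
// the search term, and the marker method classifies the error.
type ambiguousResultsError string

func (e ambiguousResultsError) Error() string {
	return "network " + string(e) + " is ambiguous"
}

// InvalidParameter is a marker method with no body; the errdefs layer
// detects it through an interface assertion.
func (ambiguousResultsError) InvalidParameter() {}

// statusCode is a toy stand-in for the daemon's errdefs-to-HTTP mapping.
func statusCode(err error) int {
	var invalid interface{ InvalidParameter() }
	if errors.As(err, &invalid) {
		return 400
	}
	return 500
}

func main() {
	fmt.Println(statusCode(ambiguousResultsError("mynet"))) // 400
}
```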
func (n *networkRouter) getNetwork(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
@@ -98,7 +104,7 @@ func (n *networkRouter) getNetwork(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
return errors.Wrapf(invalidRequestError{err}, "invalid value for verbose: %s", v)
}
}
networkScope := r.URL.Query().Get("scope")
scope := r.URL.Query().Get("scope")
// In case multiple networks have duplicate names, return error.
// TODO (yongtang): should we wrap with version here for backward compatibility?
@@ -108,29 +114,29 @@ func (n *networkRouter) getNetwork(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
// For full name and partial ID, save the result first, and process later
// in case multiple records were found based on the same term
listByFullName := map[string]network.Inspect{}
listByPartialID := map[string]network.Inspect{}
listByFullName := map[string]types.NetworkResource{}
listByPartialID := map[string]types.NetworkResource{}
// TODO(@cpuguy83): All this logic for figuring out which network to return does not belong here
// Instead there should be a backend function to just get one network.
filter := filters.NewArgs(filters.Arg("idOrName", term))
if networkScope != "" {
filter.Add("scope", networkScope)
if scope != "" {
filter.Add("scope", scope)
}
networks, _ := n.backend.GetNetworks(filter, backend.NetworkListConfig{Detailed: true, Verbose: verbose})
for _, nw := range networks {
if nw.ID == term {
return httputils.WriteJSON(w, http.StatusOK, nw)
nw, _ := n.backend.GetNetworks(filter, types.NetworkListConfig{Detailed: true, Verbose: verbose})
for _, network := range nw {
if network.ID == term {
return httputils.WriteJSON(w, http.StatusOK, network)
}
if nw.Name == term {
if network.Name == term {
// No need to check the ID collision here as we are still in
// local scope and the network ID is unique in this scope.
listByFullName[nw.ID] = nw
listByFullName[network.ID] = network
}
if strings.HasPrefix(nw.ID, term) {
if strings.HasPrefix(network.ID, term) {
// No need to check the ID collision here as we are still in
// local scope and the network ID is unique in this scope.
listByPartialID[nw.ID] = nw
listByPartialID[network.ID] = network
}
}
@@ -140,37 +146,37 @@ func (n *networkRouter) getNetwork(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
// or if the network was requested by name with the scope set to swarm,
// return the network. We skip isMatchingScope because it is true if the scope
// is not set, which would be the case for client API v1.30.
if strings.HasPrefix(nwk.ID, term) || networkScope == scope.Swarm {
if strings.HasPrefix(nwk.ID, term) || (netconst.SwarmScope == scope) {
// If we have a previous match from the backend, return it; we need the
// verbose result when enabled, e.g. overlay/partial_ID or name/swarm_scope
if nwv, ok := listByPartialID[nwk.ID]; ok {
nwk = nwv
} else if nwv, ok = listByFullName[nwk.ID]; ok {
} else if nwv, ok := listByFullName[nwk.ID]; ok {
nwk = nwv
}
return httputils.WriteJSON(w, http.StatusOK, nwk)
}
}
networks, _ = n.cluster.GetNetworks(filter)
for _, nw := range networks {
if nw.ID == term {
return httputils.WriteJSON(w, http.StatusOK, nw)
nr, _ := n.cluster.GetNetworks(filter)
for _, network := range nr {
if network.ID == term {
return httputils.WriteJSON(w, http.StatusOK, network)
}
if nw.Name == term {
if network.Name == term {
// Check the ID collision as we are in swarm scope here, and
// the map (of the listByFullName) may have already had a
// network with the same ID (from local scope previously)
if _, ok := listByFullName[nw.ID]; !ok {
listByFullName[nw.ID] = nw
if _, ok := listByFullName[network.ID]; !ok {
listByFullName[network.ID] = network
}
}
if strings.HasPrefix(nw.ID, term) {
if strings.HasPrefix(network.ID, term) {
// Check the ID collision as we are in swarm scope here, and
// the map (of the listByPartialID) may have already had a
// network with the same ID (from local scope previously)
if _, ok := listByPartialID[nw.ID]; !ok {
listByPartialID[nw.ID] = nw
if _, ok := listByPartialID[network.ID]; !ok {
listByPartialID[network.ID] = network
}
}
}
@@ -182,7 +188,7 @@ func (n *networkRouter) getNetwork(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
}
}
if len(listByFullName) > 1 {
return errors.Wrapf(ambiguousResultsError(term), "%d matches found based on name", len(listByFullName))
return errors.Wrapf(ambigousResultsError(term), "%d matches found based on name", len(listByFullName))
}
// Find based on partial ID, returns true only if no duplicates
@@ -192,39 +198,46 @@ func (n *networkRouter) getNetwork(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
}
}
if len(listByPartialID) > 1 {
return errors.Wrapf(ambiguousResultsError(term), "%d matches found based on ID prefix", len(listByPartialID))
return errors.Wrapf(ambigousResultsError(term), "%d matches found based on ID prefix", len(listByPartialID))
}
return libnetwork.ErrNoSuchNetwork(term)
}
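
The resolution order in getNetwork is: an exact ID match wins immediately; otherwise a unique full-name match takes priority over a unique partial-ID match, and more than one match in either bucket is an error. The same algorithm distilled over a toy network type:

```go
package main

import (
	"fmt"
	"strings"
)

// nw is a toy stand-in for the network summary type.
type nw struct{ ID, Name string }

func findUnique(term string, networks []nw) (nw, error) {
	byName := map[string]nw{}
	byPrefix := map[string]nw{}
	for _, n := range networks {
		if n.ID == term {
			return n, nil // an exact ID match always wins
		}
		if n.Name == term {
			byName[n.ID] = n
		}
		if strings.HasPrefix(n.ID, term) {
			byPrefix[n.ID] = n
		}
	}
	// A unique full-name match is next in priority.
	if len(byName) == 1 {
		for _, n := range byName {
			return n, nil
		}
	}
	if len(byName) > 1 {
		return nw{}, fmt.Errorf("network %s is ambiguous (%d matches found based on name)", term, len(byName))
	}
	// Then a unique partial-ID match.
	if len(byPrefix) == 1 {
		for _, n := range byPrefix {
			return n, nil
		}
	}
	if len(byPrefix) > 1 {
		return nw{}, fmt.Errorf("network %s is ambiguous (%d matches found based on ID prefix)", term, len(byPrefix))
	}
	return nw{}, fmt.Errorf("no such network: %s", term)
}

func main() {
	nws := []nw{{ID: "abc123", Name: "web"}, {ID: "abd456", Name: "db"}}
	fmt.Println(findUnique("ab", nws)) // ambiguous: two ID-prefix matches
}
```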
func (n *networkRouter) postNetworkCreate(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
var create types.NetworkCreateRequest
if err := httputils.ParseForm(r); err != nil {
return err
}
var create network.CreateRequest
if err := httputils.ReadJSON(r, &create); err != nil {
if err := httputils.CheckForJSON(r); err != nil {
return err
}
if err := json.NewDecoder(r.Body).Decode(&create); err != nil {
if err == io.EOF {
return errdefs.InvalidParameter(errors.New("got EOF while reading request body"))
}
return errdefs.InvalidParameter(err)
}
if nws, err := n.cluster.GetNetworksByName(create.Name); err == nil && len(nws) > 0 {
return libnetwork.NetworkNameError(create.Name)
return nameConflict(create.Name)
}
version := httputils.VersionFromContext(ctx)
// EnableIPv4 was introduced in API 1.48.
if versions.LessThan(version, "1.48") {
create.EnableIPv4 = nil
}
// For a Swarm-scoped network, this call to backend.CreateNetwork is used to
// validate the configuration. The network will not be created but, if the
// configuration is valid, ManagerRedirectError will be returned and handled
// below.
nw, err := n.backend.CreateNetwork(ctx, create)
nw, err := n.backend.CreateNetwork(create)
if err != nil {
var warning string
if _, ok := err.(libnetwork.NetworkNameError); ok {
// Check whether the user set CheckDuplicate; if true, return the error,
// otherwise prepare a warning message
if create.CheckDuplicate {
return nameConflict(create.Name)
}
warning = libnetwork.NetworkNameError(create.Name).Error()
}
if _, ok := err.(libnetwork.ManagerRedirectError); !ok {
return err
}
@@ -232,8 +245,9 @@ func (n *networkRouter) postNetworkCreate(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err != nil {
return err
}
nw = &network.CreateResponse{
ID: id,
nw = &types.NetworkCreateResponse{
ID: id,
Warning: warning,
}
}
@@ -241,32 +255,46 @@ func (n *networkRouter) postNetworkCreate(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
}
func (n *networkRouter) postNetworkConnect(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
var connect types.NetworkConnect
if err := httputils.ParseForm(r); err != nil {
return err
}
var connect network.ConnectOptions
if err := httputils.ReadJSON(r, &connect); err != nil {
if err := httputils.CheckForJSON(r); err != nil {
return err
}
if err := json.NewDecoder(r.Body).Decode(&connect); err != nil {
if err == io.EOF {
return errdefs.InvalidParameter(errors.New("got EOF while reading request body"))
}
return errdefs.InvalidParameter(err)
}
// Unlike other operations, we do not check for ambiguity of the name/ID here.
// The reason is that, in the case of an attachable network in swarm scope, the actual
// local network may not be available at the time. Besides, the daemon's
// `ConnectContainerToNetwork` performs the ambiguity check anyway, so passing the name is enough.
return n.backend.ConnectContainerToNetwork(ctx, connect.Container, vars["id"], connect.EndpointConfig)
return n.backend.ConnectContainerToNetwork(connect.Container, vars["id"], connect.EndpointConfig)
}
func (n *networkRouter) postNetworkDisconnect(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
var disconnect types.NetworkDisconnect
if err := httputils.ParseForm(r); err != nil {
return err
}
var disconnect network.DisconnectOptions
if err := httputils.ReadJSON(r, &disconnect); err != nil {
if err := httputils.CheckForJSON(r); err != nil {
return err
}
if err := json.NewDecoder(r.Body).Decode(&disconnect); err != nil {
if err == io.EOF {
return errdefs.InvalidParameter(errors.New("got EOF while reading request body"))
}
return errdefs.InvalidParameter(err)
}
return n.backend.DisconnectContainerFromNetwork(disconnect.Container, vars["id"], disconnect.Force)
}
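
All three POST handlers read their JSON bodies with the same discipline, whether through the newer httputils.ReadJSON or the older inline decoder: an empty body (io.EOF) and malformed JSON are both surfaced as invalid-parameter errors. A minimal standalone version:

```go
package main

import (
	"encoding/json"
	"errors"
	"fmt"
	"io"
	"strings"
)

// readJSON approximates the handlers' behavior: an empty body and
// malformed JSON are both client errors, reported distinctly.
func readJSON(body io.Reader, out any) error {
	if err := json.NewDecoder(body).Decode(out); err != nil {
		if errors.Is(err, io.EOF) {
			return errors.New("invalid parameter: got EOF while reading request body")
		}
		return fmt.Errorf("invalid parameter: %w", err)
	}
	return nil
}

func main() {
	var req struct{ Container string }
	fmt.Println(readJSON(strings.NewReader(""), &req))                         // EOF case
	fmt.Println(readJSON(strings.NewReader(`{"Container":"web"}`), &req), req) // ok
}
```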
@@ -317,47 +345,47 @@ func (n *networkRouter) postNetworksPrune(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
// For full name and partial ID, save the result first, and process later
// in case multiple records were found based on the same term
// TODO (yongtang): should we wrap with version here for backward compatibility?
func (n *networkRouter) findUniqueNetwork(term string) (network.Inspect, error) {
listByFullName := map[string]network.Inspect{}
listByPartialID := map[string]network.Inspect{}
func (n *networkRouter) findUniqueNetwork(term string) (types.NetworkResource, error) {
listByFullName := map[string]types.NetworkResource{}
listByPartialID := map[string]types.NetworkResource{}
filter := filters.NewArgs(filters.Arg("idOrName", term))
networks, _ := n.backend.GetNetworks(filter, backend.NetworkListConfig{Detailed: true})
for _, nw := range networks {
if nw.ID == term {
return nw, nil
nw, _ := n.backend.GetNetworks(filter, types.NetworkListConfig{Detailed: true})
for _, network := range nw {
if network.ID == term {
return network, nil
}
if nw.Name == term && !nw.Ingress {
if network.Name == term && !network.Ingress {
// No need to check the ID collision here as we are still in
// local scope and the network ID is unique in this scope.
listByFullName[nw.ID] = nw
listByFullName[network.ID] = network
}
if strings.HasPrefix(nw.ID, term) {
if strings.HasPrefix(network.ID, term) {
// No need to check the ID collision here as we are still in
// local scope and the network ID is unique in this scope.
listByPartialID[nw.ID] = nw
listByPartialID[network.ID] = network
}
}
networks, _ = n.cluster.GetNetworks(filter)
for _, nw := range networks {
if nw.ID == term {
return nw, nil
nr, _ := n.cluster.GetNetworks(filter)
for _, network := range nr {
if network.ID == term {
return network, nil
}
if nw.Name == term {
if network.Name == term {
// Check the ID collision as we are in swarm scope here, and
// the map (of the listByFullName) may have already had a
// network with the same ID (from local scope previously)
if _, ok := listByFullName[nw.ID]; !ok {
listByFullName[nw.ID] = nw
if _, ok := listByFullName[network.ID]; !ok {
listByFullName[network.ID] = network
}
}
if strings.HasPrefix(nw.ID, term) {
if strings.HasPrefix(network.ID, term) {
// Check the ID collision as we are in swarm scope here, and
// the map (of the listByPartialID) may have already had a
// network with the same ID (from local scope previously)
if _, ok := listByPartialID[nw.ID]; !ok {
listByPartialID[nw.ID] = nw
if _, ok := listByPartialID[network.ID]; !ok {
listByPartialID[network.ID] = network
}
}
}
@@ -369,7 +397,7 @@ func (n *networkRouter) findUniqueNetwork(term string) (network.Inspect, error)
}
}
if len(listByFullName) > 1 {
return network.Inspect{}, errdefs.InvalidParameter(errors.Errorf("network %s is ambiguous (%d matches found based on name)", term, len(listByFullName)))
return types.NetworkResource{}, errdefs.InvalidParameter(errors.Errorf("network %s is ambiguous (%d matches found based on name)", term, len(listByFullName)))
}
// Find based on partial ID, returns true only if no duplicates
@@ -379,8 +407,8 @@ func (n *networkRouter) findUniqueNetwork(term string) (network.Inspect, error)
}
}
if len(listByPartialID) > 1 {
return network.Inspect{}, errdefs.InvalidParameter(errors.Errorf("network %s is ambiguous (%d matches found based on ID prefix)", term, len(listByPartialID)))
return types.NetworkResource{}, errdefs.InvalidParameter(errors.Errorf("network %s is ambiguous (%d matches found based on ID prefix)", term, len(listByPartialID)))
}
return network.Inspect{}, errdefs.NotFound(libnetwork.ErrNoSuchNetwork(term))
return types.NetworkResource{}, errdefs.NotFound(libnetwork.ErrNoSuchNetwork(term))
}

Some files were not shown because too many files have changed in this diff.