Compare commits


177 Commits

Author SHA1 Message Date
Brian Goff
b0f5bc36fe Merge pull request #42352 from AkihiroSuda/cherrypick-41724
[20.10 backport] Use v2 capabilities in layer archives
2021-06-01 15:34:42 -07:00
Brian Goff
497c50a575 Merge pull request #42448 from thaJeztah/20.10_backport_update_buildkit
[20.10 backport] vendor: github.com/moby/buildkit v0.8.3-3-g244e8cde
2021-06-01 15:13:27 -07:00
Sebastiaan van Stijn
6474dada20 vendor: github.com/moby/buildkit v0.8.3-3-g244e8cde
full diff: https://github.com/moby/buildkit/compare/v0.8.3...v0.8.3-3-g244e8cde

- Transform relative mountpoints for exec mounts in the executor
- Add test for handling relative mountpoints

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 61b04b3a02)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-05-31 16:08:36 +02:00
Sebastiaan van Stijn
895eaacdd4 vendor: github.com/moby/buildkit v0.8.3
full diff: https://github.com/moby/buildkit/compare/v0.8.2...v0.8.3

- vendor containerd (required for rootless overlayfs on kernel 5.11)
    - not included to avoid depending on a fork
- Add retry on image push 5xx errors
- contenthash: include basename in content checksum for wildcards
- Fix missing mounts in execOp cache map
- Add regression test for run cache not considering mounts
- Add hack to preserve Dockerfile RUN cache compatibility after mount cache bugfix

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 79ee285d76)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-05-31 16:08:33 +02:00
Sebastiaan van Stijn
eab37e5676 Merge pull request #42413 from thaJeztah/20.10_backport_bump_libnetwork
[20.10 backport] vendor: github.com/docker/libnetwork 64b7a4574d1426139437d20e81c0b6d391130ec8
2021-05-27 10:44:33 +02:00
Akihiro Suda
d844987b77 Merge pull request #42421 from thaJeztah/20.10_backport_more_ignore 2021-05-27 02:46:50 +09:00
Sebastiaan van Stijn
003e3c0551 pkg/signal: ignore SIGURG on all platforms
Other Unix platforms (e.g. Darwin) are also affected by the Go
runtime sending SIGURG.

This patch changes how we match the signal by just looking for the
"URG" name, which should handle any platform that has this signal
defined in the SignalMap.
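
A minimal Go sketch of the name-based check (the map below is a stand-in for pkg/signal's SignalMap; the entries are illustrative):

```go
package main

import (
	"fmt"
	"os"
	"os/signal"

	"golang.org/x/sys/unix"
)

// signalMap is a stand-in for pkg/signal.SignalMap: a name-to-signal table
// that only contains the signals the current platform defines.
var signalMap = map[string]os.Signal{
	"HUP":  unix.SIGHUP,
	"TERM": unix.SIGTERM,
	"URG":  unix.SIGURG,
}

func main() {
	sigc := make(chan os.Signal, 16)
	for name, sig := range signalMap {
		// Match on the "URG" name instead of a per-platform constant, so any
		// Unix platform whose SignalMap defines SIGURG skips it.
		if name == "URG" {
			continue
		}
		signal.Notify(sigc, sig)
		fmt.Printf("subscribed to SIG%s\n", name)
	}
}
```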

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 05f520dd3c)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-05-25 21:42:21 +02:00
Sebastiaan van Stijn
95551168ac vendor: github.com/ishidawataru/sctp f2269e66cdee387bd321445d5d300893449805be
full diff: 6e2cb13661...f2269e66cd

- support SO_SNDBUF/SO_RCVBUF handling
- Support Go Modules
- license clarification
- ci: drop 1.6, 1.7, 1.8 support
- Add support for SocketConfig
- support goarch mips64le architecture.
- fix possible socket leak when bind fails

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 22b9e2a7e5)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-05-25 11:39:46 +02:00
Sebastiaan van Stijn
d29a55c6c3 vendor: github.com/docker/libnetwork 64b7a4574d1426139437d20e81c0b6d391130ec8
Update libnetwork to make `docker run -p 80:80` functional again on environments
with kernel boot parameter `ipv6.disable=1`.

full diff: b3507428be...64b7a4574d

- fix port forwarding with ipv6.disable=1
    - fixes moby/moby/42288 Docker 20.10.6: all containers stopped and cannot start if ipv6 is disabled on host
    - fixes docker/libnetwork/2629 Network issue with IPv6 following update to version 20.10.6
    - fixes docker/for-linux/1233 Since 20.10.6 it's not possible to run docker on a machine with disabled IPv6 interfaces
- vendor: github.com/ishidawataru/sctp f2269e66cdee387bd321445d5d300893449805be
- Enforce order of lock acquisitions on network/controller, fixes #2632
    - fixes docker/libnetwork/2632 Name resolution stuck due to deadlock between different network struct methods
    - fixes moby/moby/42032 Docker daemon gets stuck, can't serve DNS requests

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit e4109b3b6b)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-05-25 11:39:44 +02:00
Brian Goff
3db8385b18 Merge pull request #42382 from thaJeztah/20.10_backport_schema1_cache_fix
[20.10 backport] builder-next: relax second cache key requirements for schema1
2021-05-20 12:08:53 -07:00
Brian Goff
df371af560 Merge pull request #42401 from thaJeztah/20.10_catch_almost_all
[20.10 backport] pkg/signal.CatchAll: ignore SIGURG on Linux
2021-05-20 11:08:47 -07:00
Sebastiaan van Stijn
c1ebb81514 Merge pull request #42398 from thaJeztah/20.10_update_containerd_1.4.6
[20.10] update containerd binary to v1.4.6
2021-05-20 12:33:05 +02:00
Sebastiaan van Stijn
41cf01fa93 pkg/signal.CatchAll: ignore SIGURG on Linux
Do not handle SIGURG on Linux: in go1.14+, the Go runtime issues
SIGURG as an interrupt to support preemptible system calls on Linux.

This issue was caught in TestCatchAll, which could fail when updating to Go 1.14 or above;

    === Failed
    === FAIL: pkg/signal TestCatchAll (0.01s)
        signal_linux_test.go:32: assertion failed: urgent I/O condition (string) != continued (string)
        signal_linux_test.go:32: assertion failed: continued (string) != hangup (string)
        signal_linux_test.go:32: assertion failed: hangup (string) != child exited (string)
        signal_linux_test.go:32: assertion failed: child exited (string) != illegal instruction (string)
        signal_linux_test.go:32: assertion failed: illegal instruction (string) != floating point exception (string)
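
A rough sketch of the resulting CatchAll behaviour; the real function iterates SignalMap, and the signature here is simplified:

```go
package main

import (
	"fmt"
	"os"
	"os/signal"
	"syscall"
)

// catchAll subscribes sigc to every given signal except SIGURG, which the
// Go runtime (1.14+) raises continuously for asynchronous preemption and is
// therefore not a user-meaningful signal to forward to the container.
func catchAll(sigc chan os.Signal, sigs ...os.Signal) {
	for _, s := range sigs {
		if s == syscall.SIGURG {
			continue
		}
		signal.Notify(sigc, s)
	}
}

func main() {
	sigc := make(chan os.Signal, 128)
	catchAll(sigc, syscall.SIGHUP, syscall.SIGURG, syscall.SIGCONT)
	fmt.Println("waiting for a real signal...")
	fmt.Println(<-sigc)
}
```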

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit b7ebf32ba3)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-05-20 10:11:39 +02:00
Sebastiaan van Stijn
a23a880c6e Merge pull request #42395 from thaJeztah/20.10_backport_runc_rc95
[20.10 backport] update runc binary to v1.0.0-rc95
2021-05-19 20:49:52 +02:00
Sebastiaan van Stijn
56541eca9a [20.10] update containerd binary to v1.4.6
full diff: https://github.com/containerd/containerd/compare/v1.4.5...v1.4.6

The sixth patch release for containerd 1.4 is a security release to update
runc for CVE-2021-30465

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-05-19 20:43:04 +02:00
Sebastiaan van Stijn
fb179ff098 update runc binary to v1.0.0-rc95
full diff: https://github.com/opencontainers/runc/compare/v1.0.0-rc94...v1.0.0-rc95

Release notes:

This release of runc contains a fix for CVE-2021-30465, and users are
strongly recommended to update (especially if you are providing
semi-limited access to spawn containers to untrusted users).

Aside from this security fix, only a few other changes were made since
v1.0.0-rc94 (the only user-visible change was the addition of support
for defaultErrnoRet in seccomp profiles).

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit efec2bb368)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-05-19 13:20:30 +02:00
Akihiro Suda
989c08c367 Merge pull request #42388 from thaJeztah/20.10_backport_update_runc
[20.10 backport] Update runc binary to v1.0.0-rc94
2021-05-19 12:26:26 +09:00
Akihiro Suda
4c801fdb7d integration: remove KernelMemory tests
Starting with runc v1.0.0-rc94, runc no longer supports KernelMemory.

52390d6804

Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
(cherry picked from commit 2f0d6664a1)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-05-18 09:25:38 +02:00
Jintao Zhang
6174e3cf22 Update runc binary to v1.0.0-rc94
Signed-off-by: Jintao Zhang <zhangjintao9020@gmail.com>
(cherry picked from commit 8c019e830a)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-05-18 09:25:36 +02:00
Akihiro Suda
afbb1277a3 Swarm config: use absolute paths for mount destination strings
Needed for runc >= 1.0.0-rc94.

See runc issue 2928.

Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 9303376242)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-05-18 09:25:34 +02:00
Tonis Tiigi
94c1890d39 builder-next: relax second cache key requirements for schema1
Schema1 images cannot have a config-based cache key
before the layers are pulled. Avoid validation and reuse the
manifest digest as a second key.

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit 85167fc634)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-05-17 14:58:16 +02:00
Akihiro Suda
e4b9915803 Merge pull request #42372 from thaJeztah/20.10_containerd_1.4.5
[20.10] update containerd binary to v1.4.5
2021-05-13 12:51:54 +09:00
Sebastiaan van Stijn
01f734cb4f [20.10] update containerd binary to v1.4.5
release notes: https://github.com/containerd/containerd/releases/tag/v1.4.5

- Update runc to rc94
- Fix leaking socket path in runc shim v2
- Fix cleanup logic in new container in runc shim v2
- Fix registry mirror authorization logic in CRI plugin
- Add support for userxattr in overlay snapshotter for kernel 5.11+

(Note that the update to runc is done separately)

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-05-12 10:46:44 +02:00
Sebastiaan van Stijn
e3cb5adc0d Merge pull request #42292 from cpuguy83/20.10_hcsshim_no_error_details
[20.10] Bump hcsshim for error details fix
2021-05-06 16:35:18 +02:00
Eric Mountain
2a0c446866 Use v2 capabilities in layer archives
When building images in a user-namespaced container, v3 capabilities are
stored including the root UID of the creator of the user-namespace.

This UID does not make sense outside the build environment however. If
the image is run in a non-user-namespaced runtime, or if a user-namespaced
runtime uses a different UID, the capabilities requested by the effective
bit will not be honoured by `execve(2)` due to this mismatch.

Instead, we convert v3 capabilities to v2, dropping the root UID on the
fly.
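
A sketch of the conversion, following the kernel's vfs_cap_data layout from include/uapi/linux/capability.h; the helper name is illustrative, not the actual function in the archive package:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

const (
	vfsCapRevisionMask = 0xff000000
	vfsCapRevision2    = 0x02000000
	vfsCapRevision3    = 0x03000000
)

// capsV3ToV2 converts a v3 security.capability xattr (24 bytes; the last 4
// bytes are the rootid of the userns creator) into the v2 format (20 bytes,
// no rootid), so the capability is honoured outside the build namespace.
func capsV3ToV2(xattr []byte) ([]byte, error) {
	if len(xattr) != 24 {
		return nil, fmt.Errorf("expected 24-byte v3 capability, got %d bytes", len(xattr))
	}
	magic := binary.LittleEndian.Uint32(xattr[:4])
	if magic&vfsCapRevisionMask != vfsCapRevision3 {
		return nil, fmt.Errorf("not a v3 capability (magic %#x)", magic)
	}
	out := make([]byte, 20)
	copy(out, xattr[:20]) // keep flags and permitted/inheritable sets, drop rootid
	binary.LittleEndian.PutUint32(out[:4], magic&^vfsCapRevisionMask|vfsCapRevision2)
	return out, nil
}

func main() {
	v3 := make([]byte, 24)
	binary.LittleEndian.PutUint32(v3[:4], vfsCapRevision3|0x01) // effective bit set
	fmt.Println(capsV3ToV2(v3))
}
```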

Signed-off-by: Eric Mountain <eric.mountain@datadoghq.com>
(cherry picked from commit 95eb490780)
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
2021-05-05 21:46:31 +09:00
Brian Goff
614a70a11b Merge pull request #42294 from AkihiroSuda/rootlesskit-0.14.2-2010
[20.10 backport] bump up rootlesskit to v0.14.2 (Fix `Timed out proxy starting the userland proxy.` error with `DOCKERD_ROOTLESS_ROOTLESSKIT_PORT_DRIVER=slirp4netns`)
2021-04-30 09:18:11 -07:00
Tianon Gravi
114310d76a Merge pull request #42342 from AkihiroSuda/dind-fix-cgroup2-evac-2010
[20.10 backport] hack/dind: fix cgroup v2 evacuation with `docker run --init`
2021-04-30 08:10:12 -07:00
Akihiro Suda
21391bb7f7 hack/dind: fix cgroup v2 evacuation with docker run --init
Evacuate all the processes in `/sys/fs/cgroup/cgroup.procs`, not just PID 1.

Before:
```console
$ docker run --rm --privileged --init $(docker build -q .) cat /sys/fs/cgroup/cgroup.subtree_control
sed: couldn't flush stdout: Device or resource busy
```

After:
```console
$ docker run --rm --privileged --init $(docker build -q .) cat /sys/fs/cgroup/cgroup.subtree_control
cpuset cpu io memory hugetlb pids rdma
```

Fix docker-library/docker issue 308

Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
(cherry picked from commit 42b1175eda)
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
2021-04-30 19:08:07 +09:00
Brian Goff
12b03bcb27 Error string match: do not match command path
Whether or not the command path is in the error message is an
implementation detail.
For example, on Windows the only reason this ever matched was because it
dumped the entire container config into the error message, but this had
nothing to do with the actual error.

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit 225e046d9d)
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
2021-04-27 18:46:33 +00:00
Tianon Gravi
ce82693823 Merge pull request #42324 from AkihiroSuda/whichless-2010
[20.10 backport] dockerd-rootless.sh: use `command -v` instead of `which`
2021-04-26 12:47:40 -07:00
Akihiro Suda
8a7f77cb2f dockerd-rootless.sh: use command -v instead of which
The `which` binary is often missing.

Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
(cherry picked from commit e928692c69)
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
2021-04-26 16:04:08 +09:00
Tianon Gravi
4af54f15ed Merge pull request #42297 from thaJeztah/20.10_backport_update_yamllint
[20.10 backport] Dockerfile: update yamllint to v1.26.1 to fix build
2021-04-23 11:47:44 -07:00
Akihiro Suda
9ca66776fa bump up rootlesskit to v0.14.2
Fix `Timed out proxy starting the userland proxy.` error with `DOCKERD_ROOTLESS_ROOTLESSKIT_PORT_DRIVER=slirp4netns`.
(https://github.com/rootless-containers/rootlesskit/issues/250)

Full changes: https://github.com/rootless-containers/rootlesskit/compare/v0.14.1...v0.14.2

Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
(cherry picked from commit 11bddf330d4fec818e17333c360c25e8641f221d)
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
2021-04-21 18:11:00 +09:00
Sebastiaan van Stijn
08b27e45d8 Dockerfile: update yamllint to v1.26.1 to fix build
Installation of yamllint started failing on non-amd64 builds, which could
happen if the version we were using wasn't specific enough about a dependency
to install.

    copying Cython/Utility/CppSupport.cpp -> build/lib.linux-aarch64-3.7/Cython/Utility
    running build_ext
    building 'Cython.Plex.Scanners' extension
    creating build/temp.linux-aarch64-3.7
    creating build/temp.linux-aarch64-3.7/tmp
    creating build/temp.linux-aarch64-3.7/tmp/pip-install-jasgbmp7
    creating build/temp.linux-aarch64-3.7/tmp/pip-install-jasgbmp7/Cython
    creating build/temp.linux-aarch64-3.7/tmp/pip-install-jasgbmp7/Cython/Cython
    creating build/temp.linux-aarch64-3.7/tmp/pip-install-jasgbmp7/Cython/Cython/Plex
    aarch64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/usr/include/python3.7m -c /tmp/pip-install-jasgbmp7/Cython/Cython/Plex/Scanners.c -o build/temp.linux-aarch64-3.7/tmp/pip-install-jasgbmp7/Cython/Cython/Plex/Scanners.o
         /tmp/pip-install-jasgbmp7/Cython/Cython/Plex/Scanners.c:21:10: fatal error: Python.h: No such file or directory
          #include "Python.h"
                   ^~~~~~~~~~
         compilation terminated.
         error: command 'aarch64-linux-gnu-gcc' failed with exit status 1

         ----------------------------------------
     Command "/usr/bin/python3 -u -c "import setuptools, tokenize;__file__='/tmp/pip-install-jasgbmp7/Cython/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-record-if5qclwe/install-record.txt --single-version-externally-managed --prefix /tmp/pip-build-env-_dtiuyfw --compile" failed with error code 1 in /tmp/pip-install-jasgbmp7/Cython/

      ----------------------------------------
    Command "/usr/bin/python3 -m pip install --ignore-installed --no-user --prefix /tmp/pip-build-env-_dtiuyfw --no-warn-script-location --no-binary :none: --only-binary :none: -i https://pypi.org/simple -- setuptools wheel Cython" failed with error code 1 in None
    #22 ERROR: executor failed running [/bin/sh -c pip3 install yamllint==1.16.0]: exit code: 1

Checking whether updating to the latest version fixes this.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit c35cefb489)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-04-15 22:27:44 +02:00
Brian Goff
404ede5737 Bump hcsshim for error details fix
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
2021-04-14 18:57:05 +00:00
Tibor Vass
8728dd246c Merge pull request #42263 from AkihiroSuda/move-cgroup2-out-of-experimental-20.10
[20.10 backport] Move cgroup v2 out of experimental
2021-04-09 15:06:18 -07:00
Tibor Vass
94854bcbd6 Merge pull request #42109 from AkihiroSuda/rootless-add-context-2010
[20.10 backport] dockerd-rootless-setuptool.sh: create CLI context "rootless"
2021-04-07 16:58:09 -07:00
Tibor Vass
ffd037de36 Merge pull request #42253 from AkihiroSuda/btrfs-allow-unprivileged-20.10
[20.10 backport] btrfs: Allow unprivileged user to delete subvolumes (kernel >= 4.18)
2021-04-07 16:55:56 -07:00
Sebastiaan van Stijn
419b3706ea Merge pull request #42256 from cpuguy83/20.10_plugin_layer_mediatype
[20.10 backport] Use docker media type for plugin layers
2021-04-07 21:26:18 +02:00
Tibor Vass
76b0df9b6e Merge pull request #42257 from thaJeztah/20.10_backport_fix_testinspect
[20.10 backport] Fix TestInspect(), and pin arm64 machines to a specific Ubuntu version
2021-04-07 01:23:47 -07:00
Akihiro Suda
255c79a1e8 Move cgroup v2 out of experimental
We have upgraded runc to rc93 and added CI for cgroup 2.
So we can move cgroup v2 out of experimental.

Fix issue 41916

Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
(cherry picked from commit 1d2a660093)
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
2021-04-07 13:55:48 +09:00
Stefan Scherer
f2c0b3688a Pin arm64 machines to a specific Ubuntu version
Signed-off-by: Stefan Scherer <stefan.scherer@docker.com>
(cherry picked from commit b7c3548c82)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-04-06 18:55:24 +02:00
Sebastiaan van Stijn
29ff2af2d3 Fix flaky TestInspect
This test has been flaky for a long time, failing with:

    --- FAIL: TestInspect (12.04s)
        inspect_test.go:39: timeout hit after 10s: waiting for tasks to enter run state. task failed with error: task: non-zero exit (1)

While looking through the logs, I noticed tasks were started, entered the RUNNING
state, and then exited, to be started again.

    state.transition="STARTING->RUNNING"
    ...
    msg="fatal task error" error="task: non-zero exit (1)"
    ...
    state.transition="RUNNING->FAILED"

Looking for possible reasons, I first considered network issues (possibly we ran
out of IP addresses, or networking was not cleaned up), and then I spotted the issue.

The service is started with;

    Command:         []string{"/bin/top"},
    Args:            []string{"-u", "root"},

The `-u root` is not an argument for the service, but for `/bin/top`. While the
Ubuntu/Debian/GNU version of `top` has a -u/-U option;

    docker run --rm ubuntu:20.04 top -h 2>&1 | grep '\-u'
      top -hv | -bcEHiOSs1 -d secs -n max -u|U user -p pid(s) -o field -w [cols]

The *busybox* version of top does not:

    docker run --rm busybox top --help 2>&1 | grep '\-u'

So running `top -u root` would cause the task to fail;

    docker run --rm busybox top -u root
    top: invalid option -- u
    ...

    echo $?
    1

As a result, the service went into a crash-loop, and because the `poll.WaitOn()`
was running with a short interval, in many cases would _just_ find the RUNNING
state, perform the `service inspect`, and pass, but in other cases, it would not
be that lucky, and continue polling until we reached the 10-second timeout,
and mark the test as failed.

Looking for history of this option (was it previously using a different image?) I
found this was added in 6cd6d8646a, but probably
just missed during review.

Given that the option is only set to have "something" to inspect, I replaced
the `-u root` with `-d 5`, which makes top refresh with a 5 second interval.

Note that there is another test (`TestServiceListWithStatuses`) that uses the same
spec; however, that test is skipped based on the API version of the test-daemon, and
(to be looked into) when performing that check, no API version is known, causing
the test to (always?) be skipped:

    === RUN   TestServiceListWithStatuses
        --- SKIP: TestServiceListWithStatuses (0.00s)
            list_test.go:34: versions.LessThan(testEnv.DaemonInfo.ServerVersion, "1.41")

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 00cb3073f4)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-04-06 18:55:06 +02:00
Brian Goff
60310e2409 Use docker media type for plugin layers
This was changed as part of a refactor to use containerd dist code. The
problem is the OCI media types are not compatible with older versions of
Docker.

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit a876ede24f)
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
2021-04-06 16:52:54 +00:00
Akihiro Suda
8088859bab btrfs: Allow unprivileged user to delete subvolumes (kernel >= 4.18)
Fix issue 41762

Cherry-pick "drivers: btrfs: Allow unprivileged user to delete subvolumes" from containers/storage
831e32b6bd

> In btrfs, subvolume can be deleted by IOC_SNAP_DESTROY ioctl but there
> is one catch: unprivileged IOC_SNAP_DESTROY call is restricted by default.
>
> This is because IOC_SNAP_DESTROY only performs permission checks on
> the top directory(subvolume) and unprivileged user might delete dirs/files
> which cannot be deleted otherwise. This restriction can be relaxed if
> user_subvol_rm_allowed mount option is used.
>
> Although the above ioctl had been the only way to delete a subvolume,
> btrfs now allows deletion of subvolume just like regular directory
> (i.e. rmdir syscall) since kernel 4.18.
>
> So if we fail to clean up the subvolume in subvolDelete(), just fall back to
> system.EnsureRemoveAll() to try to clean up subvolumes again.
> (Note: quota needs privilege, so if quota is enabled we do not fallback)
>
> This fix will allow non-privileged container works with btrfs backend.

Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
(cherry picked from commit 62b5194f62)
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
2021-04-06 14:45:01 +09:00
Tibor Vass
88bd96d6e5 Merge pull request #42233 from AkihiroSuda/fix-rootless-bind-EPERM-20.10
[20.10 backport] rootless: bind mount: fix "operation not permitted"
2021-04-01 07:41:54 -07:00
Tibor Vass
b6f4e8ba12 Merge pull request #42235 from AkihiroSuda/fix-overlay2-nativediff-2010
[20.10 backport] rootless: overlay2: fix "createDirWithOverlayOpaque(...) ... input/output error"
2021-04-01 06:23:42 -07:00
Tibor Vass
d6ca8a8e16 Merge pull request #42236 from thaJeztah/20.10_backport_specconv_fix_trimspace
[20.10 backport] rootless: fix getCurrentOOMScoreAdj
2021-04-01 05:31:04 -07:00
Tibor Vass
3499626899 Merge pull request #42232 from AkihiroSuda/rootlesskit-0.14.1-20.10
[20.10 backport] bump up rootlesskit to v0.14.1 (Fix `DOCKERD_ROOTLESS_ROOTLESSKIT_PORT_DRIVER=slirp4netns` regression)
2021-04-01 04:08:47 -07:00
Akihiro Suda
d22dde8eb1 rootless: fix getCurrentOOMScoreAdj
`getCurrentOOMScoreAdj()` was broken because `strconv.Atoi()` was called
without trimming "\n".

Fix issue 40068: `rootless docker in kubernetes: "getting the final child's pid from
pipe caused \"EOF\": unknown"`
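
A minimal sketch of the fixed helper, with error handling simplified; the function name mirrors the description above:

```go
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

// getCurrentOOMScoreAdj reads the daemon's own oom_score_adj. The file
// contents end in "\n", so strconv.Atoi("0\n") fails unless the value is
// trimmed first — that missing TrimSpace is what this commit fixes.
func getCurrentOOMScoreAdj() int {
	b, err := os.ReadFile("/proc/self/oom_score_adj")
	if err != nil {
		return 0
	}
	v, err := strconv.Atoi(strings.TrimSpace(string(b)))
	if err != nil {
		return 0 // before the fix, this path was always taken
	}
	return v
}

func main() {
	fmt.Println(getCurrentOOMScoreAdj())
}
```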

Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
(cherry picked from commit d6ddfb6118)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-04-01 12:11:27 +02:00
Akihiro Suda
c1e7924f7c archive: do not use overlayWhiteoutConverter for UserNS
overlay2 no longer sets `archive.OverlayWhiteoutFormat` when
running in UserNS, so we can remove the complicated logic in the
archive package.

Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
(cherry picked from commit 6322dfc217)
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
2021-04-01 19:00:42 +09:00
Akihiro Suda
22dc1597b9 overlay2: doesSupportNativeDiff: add fast path for userns
When running in userns, this returns an error (i.e. "use naive, not native")
immediately.

No substantial change to the logic.
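
A sketch of the fast path, using a self-contained /proc/self/uid_map check in place of the helper moby actually uses:

```go
package main

import (
	"errors"
	"fmt"
	"os"
	"strings"
)

// runningInUserNS is a self-contained stand-in for the helper moby uses: in
// the initial namespace, /proc/self/uid_map is exactly "0 0 4294967295".
func runningInUserNS() bool {
	b, err := os.ReadFile("/proc/self/uid_map")
	if err != nil {
		return false // no user-namespace support at all
	}
	f := strings.Fields(string(b))
	return !(len(f) == 3 && f[0] == "0" && f[1] == "0" && f[2] == "4294967295")
}

// doesSupportNativeDiff: in a userns the overlay xattrs are restricted, so
// return an error immediately and let the driver use the naive (copy) diff.
func doesSupportNativeDiff(home string) error {
	if runningInUserNS() {
		return errors.New("running in a user namespace: use naive diff")
	}
	// ... the slow path performs an actual mount-based feature probe ...
	return nil
}

func main() {
	fmt.Println(doesSupportNativeDiff("/var/lib/docker/overlay2"))
}
```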

Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
(cherry picked from commit 67aa418df2)
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
2021-04-01 19:00:37 +09:00
Akihiro Suda
daae27bfce overlay2: call d.naiveDiff.ApplyDiff when useNaiveDiff==true
Previously, `d.naiveDiff.ApplyDiff` was not used even when
`useNaiveDiff()==true`

Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
(cherry picked from commit dd97134232)
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
2021-04-01 19:00:32 +09:00
Akihiro Suda
e974cb638c rootless: bind mount: fix "operation not permitted"
The following was failing previously, because `getUnprivilegedMountFlags()` was not called:
```console
$ sudo mount -t tmpfs -o noexec none /tmp/foo
$ docker --context=rootless run -it --rm -v /tmp/foo:/mnt:ro alpine
docker: Error response from daemon: OCI runtime create failed: container_linux.go:367: starting container process caused: process_linux.go:520: container init caused: rootfs_linux.go:60: mounting "/tmp/foo" to rootfs at "/home/suda/.local/share/docker/overlay2/b8e7ea02f6ef51247f7f10c7fb26edbfb308d2af8a2c77915260408ed3b0a8ec/merged/mnt" caused: operation not permitted: unknown.
```
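
A sketch of the missing step, adapted from the runc/moby helper of the same name (Linux-only):

```go
package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

// getUnprivilegedMountFlags returns the locked-down flags (ro, nosuid, ...)
// already set on the mount containing path. Inside a user namespace the
// kernel refuses a bind mount that would clear any of these, so they must
// be carried over explicitly; skipping this is what caused the
// "operation not permitted" error above. (For these bits the statfs flag
// values coincide with the MS_* constants.)
func getUnprivilegedMountFlags(path string) ([]string, error) {
	var statfs unix.Statfs_t
	if err := unix.Statfs(path, &statfs); err != nil {
		return nil, err
	}
	flags := map[uint64]string{
		unix.MS_RDONLY:     "ro",
		unix.MS_NOSUID:     "nosuid",
		unix.MS_NODEV:      "nodev",
		unix.MS_NOEXEC:     "noexec",
		unix.MS_NOATIME:    "noatime",
		unix.MS_RELATIME:   "relatime",
		unix.MS_NODIRATIME: "nodiratime",
	}
	var out []string
	for mask, name := range flags {
		if uint64(statfs.Flags)&mask == mask {
			out = append(out, name)
		}
	}
	return out, nil
}

func main() {
	fmt.Println(getUnprivilegedMountFlags("/tmp/foo"))
}
```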

Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
(cherry picked from commit 248f98ef5e)
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
2021-04-01 18:45:23 +09:00
Akihiro Suda
7022b1e12e bump up rootlesskit to v0.14.1
Fix `DOCKERD_ROOTLESS_ROOTLESSKIT_PORT_DRIVER=slirp4netns` regression.

Full changes: https://github.com/rootless-containers/rootlesskit/compare/v0.14.0...v0.14.1

Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
(cherry picked from commit 45021ee354)
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
2021-04-01 18:39:07 +09:00
Tibor Vass
cdd71c6736 Merge pull request #42205 from thaJeztah/20.10_backport_bump_libnetwork
[20.10 backport] vendor: docker/libnetwork b3507428be5b458cb0e2b4086b13531fb0706e46
2021-04-01 02:37:21 -07:00
Akihiro Suda
4ccd7382e0 Merge pull request #42141 from thaJeztah/20.10_backport_vpnkit_arm64 2021-03-26 09:15:42 +09:00
Sebastiaan van Stijn
88470052e7 vendor: docker/libnetwork b3507428be5b458cb0e2b4086b13531fb0706e46
full diff: fa125a3512...b3507428be

- fixed IPv6 iptables rules for enabled firewalld (libnetwork#2609)
    - fixes "Docker uses 'iptables' instead of 'ip6tables' for IPv6 NAT rule, crashes"
- Fix regression in docker-proxy
    - introduced in "Fix IPv6 Port Forwarding for the Bridge Driver" (libnetwork#2604)
    - fixes/addresses: "IPv4 and IPv6 addresses are not bound by default anymore" (libnetwork#2607)
    - fixes/addresses "IPv6 is no longer proxied by default anymore" (moby#41858)
- Use hostIP to decide on Portmapper version
    - fixes docker-proxy not being stopped correctly

Port mappings of containers now contain separate mappings for IPv4 and IPv6 addresses
when listening on the "any" IP address. Various tests had to be updated to take
multiple mappings into account.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 0450728267)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-03-25 22:29:47 +01:00
Sebastiaan van Stijn
d26ed2c33b fix assertPortList normalizing being too strict
The normalizing was updated with the output of the "docker port" command
in mind, but we're normalizing the "expected" output, which is passed
without the "->" in front of the mapping, causing some tests to fail;

    === RUN   TestDockerSuite/TestPortHostBinding
        --- FAIL: TestDockerSuite/TestPortHostBinding (1.21s)
            docker_cli_port_test.go:324: assertion failed: error is not nil: |:::9876!=[::]:9876|
    === RUN   TestDockerSuite/TestPortList
        --- FAIL: TestDockerSuite/TestPortList (0.96s)
            docker_cli_port_test.go:25: assertion failed: error is not nil: |:::9876!=[::]:9876|

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit c8599a6537)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-03-25 22:29:44 +01:00
Sebastiaan van Stijn
8926328927 Merge pull request #42197 from thaJeztah/20.10_backport_improve_build_errors
[20.10 backport] builder: produce error when using unsupported Dockerfile option
2021-03-25 21:27:00 +01:00
Sebastiaan van Stijn
eef17706da Merge pull request #42194 from thaJeztah/20.10_backport_rootlesskit_goproxy
[20.10 backport] hack: use GOPROXY for rootlesskit to workaround issue with old git on CentOS/RHEL 7
2021-03-25 20:55:02 +01:00
Sebastiaan van Stijn
6b1da9492f Merge pull request #42186 from thaJeztah/20.10_backport_update_rootlesskit
[20.10 backport] update rootlesskit to v0.14.0
2021-03-25 20:49:46 +01:00
Brian Goff
60aa0f2f6b Merge pull request #42079 from thaJeztah/20.10_backport_update_docs_links
[20.10 backport] Update documentation links
2021-03-25 12:48:52 -07:00
Sebastiaan van Stijn
122ef5ce94 Merge pull request #42072 from AkihiroSuda/prohibit-rootless-as-root-2010
[20.10 backport] dockerd-rootless.sh: prohibit running as root
2021-03-25 20:46:59 +01:00
Akihiro Suda
7c84bf929b Merge pull request #42196 from thaJeztah/20.10_backport_ci_fixes
[20.10 backport] CI: update tests to be more resilient against CLI output format and for libnetwork changes
2021-03-25 13:23:58 +09:00
Sebastiaan van Stijn
915b239519 builder: produce error when using unsupported Dockerfile option
With the promotion of the experimental Dockerfile syntax to "stable", the Dockerfile
syntax now includes some options that are supported by BuildKit, but not (yet)
supported in the classic builder.

As a result, parsing a Dockerfile may succeed, but any flag that's known to BuildKit
and not supported by the classic builder is silently ignored;

    $ mkdir buildkit_flags && cd buildkit_flags
    $ touch foo.txt

For example, `RUN --mount`:

    DOCKER_BUILDKIT=0 docker build --no-cache -f- . <<EOF
    FROM busybox
    RUN --mount=type=cache,target=/foo echo hello
    EOF

    Sending build context to Docker daemon  2.095kB
    Step 1/2 : FROM busybox
     ---> 219ee5171f80
    Step 2/2 : RUN --mount=type=cache,target=/foo echo hello
     ---> Running in 022fdb856bc8
    hello
    Removing intermediate container 022fdb856bc8
     ---> e9f0988844d1
    Successfully built e9f0988844d1

Or `COPY --chmod` (same for `ADD --chmod`):

    DOCKER_BUILDKIT=0 docker build --no-cache -f- . <<EOF
    FROM busybox
    COPY --chmod=0777 /foo.txt /foo.txt
    EOF

    Sending build context to Docker daemon  2.095kB
    Step 1/2 : FROM busybox
     ---> 219ee5171f80
    Step 2/2 : COPY --chmod=0777 /foo.txt /foo.txt
     ---> 8b7117932a2a
    Successfully built 8b7117932a2a

Note that unknown flags still produce an error; for example, the below fails because `--hello` is an unknown flag;

    DOCKER_BUILDKIT=0 docker build -<<EOF
    FROM busybox
    RUN --hello echo hello
    EOF

    Sending build context to Docker daemon  2.048kB
    Error response from daemon: dockerfile parse error line 2: Unknown flag: hello

With this patch applied
----------------------------

With this patch applied, flags that are known in the Dockerfile spec, but are not
supported by the classic builder, produce an error, which includes a link to the
documentation how to enable BuildKit:

    DOCKER_BUILDKIT=0 docker build --no-cache -f- . <<EOF
    FROM busybox
    RUN --mount=type=cache,target=/foo echo hello
    EOF

    Sending build context to Docker daemon  2.048kB
    Step 1/2 : FROM busybox
     ---> b97242f89c8a
    Step 2/2 : RUN --mount=type=cache,target=/foo echo hello
    the --mount option requires BuildKit. Refer to https://docs.docker.com/go/buildkit/ to learn how to build images with BuildKit enabled

    DOCKER_BUILDKIT=0 docker build --no-cache -f- . <<EOF
    FROM busybox
    COPY --chmod=0777 /foo.txt /foo.txt
    EOF

    Sending build context to Docker daemon  2.095kB
    Step 1/2 : FROM busybox
     ---> b97242f89c8a
    Step 2/2 : COPY --chmod=0777 /foo.txt /foo.txt
    the --chmod option requires BuildKit. Refer to https://docs.docker.com/go/buildkit/ to learn how to build images with BuildKit enabled
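
A sketch of the check; the flag set and helper below are illustrative, while the real code consults BuildKit's RunCommand.FlagsUsed field:

```go
package main

import (
	"errors"
	"fmt"
)

// buildKitOnlyFlags lists flags the Dockerfile grammar accepts but the
// classic builder cannot honour. The set and helper are illustrative; the
// real check uses the RunCommand.FlagsUsed field that BuildKit's Dockerfile
// parser exposes (see the buildkit v0.8.2 vendor bump in this branch).
var buildKitOnlyFlags = map[string]struct{}{
	"mount":    {},
	"network":  {},
	"security": {},
	"chmod":    {},
}

func checkFlagsUsed(flagsUsed []string) error {
	for _, f := range flagsUsed {
		if _, ok := buildKitOnlyFlags[f]; ok {
			return errors.New("the --" + f + " option requires BuildKit. Refer to https://docs.docker.com/go/buildkit/ to learn how to build images with BuildKit enabled")
		}
	}
	return nil
}

func main() {
	fmt.Println(checkFlagsUsed([]string{"mount"}))
}
```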

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit a09c0276a2)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-03-24 23:57:54 +01:00
Sebastiaan van Stijn
ef2351b416 integration-cli: rely less on "docker port" output format
Also re-formatting some lines for readability.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit c6038b4884)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-03-24 22:04:10 +01:00
Sebastiaan van Stijn
86d98f5711 integration: update getExternalAddress to prefer IPv4
Rootlesskit doesn't currently handle IPv6 addresses, causing TestNetworkLoopbackNat
and TestNetworkNat to fail;

    Error starting userland proxy:
    error while calling PortManager.AddPort(): listen tcp: address :::8080: too many colons in address

This patch:

- Updates `getExternalAddress()` to pick an IPv4 address if both IPv6 and IPv4 are found
- Updates TestNetworkNat to use net.JoinHostPort(), so that square brackets are used for
  IPv6 addresses (e.g. `[::]:8080`); see the sketch below
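
For reference, the standard-library behaviour the second change relies on:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// net.JoinHostPort adds the square brackets IPv6 needs; naive
	// concatenation yields ":::8080", which listeners reject with
	// "too many colons in address".
	fmt.Println(net.JoinHostPort("::", "8080"))      // [::]:8080
	fmt.Println(net.JoinHostPort("0.0.0.0", "8080")) // 0.0.0.0:8080
}
```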

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit f845b98ca6)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-03-24 22:04:08 +01:00
Sebastiaan van Stijn
b41e2d4dc1 integration/container: wrap some long lines for readability
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 54ca929a70)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-03-24 22:04:06 +01:00
Tibor Vass
407a61cdb2 hack: use GOPROXY for rootlesskit to workaround issue with old git on CentOS/RHEL 7
Since rootlesskit removed its vendor folder, building it has to rely on go mod.

Dockerfile in docker-ce-packaging uses GOPROXY=direct, which makes "go mod"
commands use git to fetch modules. "go mod" in Go versions before 1.14.1 is
incompatible with older git versions, including the version of git that ships
with CentOS/RHEL 7 (git 1.8); see golang/go#38373

This patch switches rootlesskit install script to set GOPROXY to
https://proxy.golang.org so that git is not required for downloading modules.

Once all our code has upgraded to Go 1.14+, this workaround should be
removed.

Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit cbc6cefdcb)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-03-24 09:28:31 +01:00
Sebastiaan van Stijn
653b58cc8a Merge pull request #42111 from thaJeztah/20.10_backport_dockerfile_stable
[20.10 backport] Dockerfile: switch to "stable" dockerfile front-end
2021-03-22 22:53:03 +01:00
Sebastiaan van Stijn
a35e1f451e update rootlesskit to v0.14.0
full diff: https://github.com/rootless-containers/rootlesskit/compare/v0.13.1...v0.14.0

v0.14.0 Changes (since v0.13.2)
--------------------------------------

- CLI: improve --help output
- API: support GET /info
- Port API: support specifying IP version explicitly ("tcp4", "tcp6")
- rootlesskit-docker-proxy: support libnetwork >= 20201216 convention
- Allow vendoring with moby/sys/mountinfo@v0.1.3 as well as @v0.4.0
- Remove socat port driver
    - socat driver has been deprecated since v0.7.1 (Dec 2019)
- New experimental flag: --ipv6
    - Enables IPv6 routing (slirp4netns --enable-ipv6). Unrelated to port driver.

v0.13.2
--------------------------------------

- Fix cleaning up crashed state dir
- Update Go to 1.16
- Misc fixes

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit e166af959d)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-03-22 22:19:06 +01:00
Sebastiaan van Stijn
9f2f85c53b Merge pull request #42176 from cpuguy83/20.10_testPushMultipleTags
[20.10] TestPushMultipleTags: Add support for 20.10 CLI
2021-03-22 15:31:21 +01:00
Akihiro Suda
257cf10ee1 Merge pull request #42175 from cpuguy83/20.10_TestDockerNetworkFlagAlias 2021-03-21 01:11:32 +09:00
Sebastiaan van Stijn
5a697ae130 Merge pull request #42174 from thaJeztah/20.10_backport_41820_fix_json_unexpected_eof
[20.10 backport] Fix handling for json-file io.UnexpectedEOF
2021-03-20 10:10:24 +01:00
Akihiro Suda
cdb77eca0e Merge pull request #42168 from AkihiroSuda/ovl-k511-2010
[20.10 backport] overlay2: support "userxattr" option (kernel 5.11)
2021-03-20 12:31:02 +09:00
Brian Goff
9780942e20 Remove cli test for duplicate --net/--network opts
This seems to be testing a strange case, specifically that one can set
the `--net` and `--network` in the same command with the same network.

Indeed this used to work with older CLIs but newer ones error out when
validating the request before sending it to the daemon.

Opening this for discussion because:

1. This doesn't seem to be testing anything at all related to the rest
   of the test
2. Not really providing any value here.
3. Is testing that a technically invalid option is successful (whether
   the option should be valid as it relates to the CLI accepting it is
   debatable).
4. Such a case seems fringe and even a bug in whatever is calling the
   CLI with such options.

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit e31086320e)
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
2021-03-19 23:13:48 +00:00
Brian Goff
e1ee2823ec TestPushMultipleTags: Add support for 20.10 CLI
In 20.10 we no longer implicitly push all tags and require a
"--all-tags" flag, so add this to the test when the CLI is >= 20.10

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit 601707a655)
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
2021-03-19 23:08:40 +00:00
Brian Goff
969bde2009 jsonfile: more defensive reader implementation
Tonis mentioned that we can run into issues if more error handling is
added here. This adds a custom reader implementation which is
like io.MultiReader except it does not cache EOFs.
What got us into trouble in the first place is that `io.MultiReader`
always returns EOF once it has received an EOF; however, the error
handling that we are going for is to recover from an EOF, because the
underlying file can have more data added to it after
EOF.
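
A sketch of such a reader; the type below is illustrative, not the exact implementation:

```go
package main

import (
	"fmt"
	"io"
	"strings"
)

// multiReader behaves like io.MultiReader except it does not latch EOF on
// the final reader: a log file can grow after we first hit its end, so the
// next Read must consult it again instead of returning a cached EOF.
type multiReader struct {
	readers []io.Reader
	idx     int
}

func (m *multiReader) Read(p []byte) (int, error) {
	for m.idx < len(m.readers) {
		n, err := m.readers[m.idx].Read(p)
		if err == io.EOF {
			if m.idx == len(m.readers)-1 {
				// Report EOF but stay on this reader; a later Read
				// retries it in case the file gained more data.
				return n, io.EOF
			}
			m.idx++ // an earlier (rotated) file is done; move on
			if n > 0 {
				return n, nil
			}
			continue
		}
		return n, err
	}
	return 0, io.EOF
}

func main() {
	r := &multiReader{readers: []io.Reader{
		strings.NewReader("rotated log\n"),
		strings.NewReader("live log\n"),
	}}
	b, _ := io.ReadAll(r)
	fmt.Print(string(b))
	// A second Read still consults the live reader rather than a cached EOF.
	n, err := r.Read(make([]byte, 8))
	fmt.Println(n, err)
}
```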

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit 5a664dc87d)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-03-19 18:18:55 +01:00
Brian Goff
cb501700e8 Fix handling for json-file io.UnexpectedEOF
When the multireader hits EOF, we will always get EOF from it, so we
cannot store the multireader for later error handling, only use it for
the decoder.

Thanks @tobiasstadler for pointing this error out.

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit 4be98a38e7)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-03-19 18:18:52 +01:00
Sebastiaan van Stijn
6d72bb6acf Merge pull request #42147 from thaJeztah/20.10_backport_41704_update_libseccomp
[20.10 backport] Use buster backports to build with libseccomp-2.4.4
2021-03-19 13:51:56 +01:00
Akihiro Suda
2d39a44c1c overlayutils/userxattr.go: add "fast path" for kernel >= 5.11.0
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
(cherry picked from commit a8008f7313)
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
2021-03-19 03:36:07 +09:00
Akihiro Suda
95d2b686be overlay2: support "userxattr" option (kernel 5.11)
The "userxattr" option is needed for mounting overlayfs inside a user namespace with kernel >= 5.11.

The "userxattr" option is NOT needed for the initial user namespace (aka "the host").

Also, Ubuntu (since circa 2015) and Debian (since 10) with kernel < 5.11 can mount the overlayfs in a user namespace without the "userxattr" option.

The corresponding kernel commit: 2d2f2d7322ff43e0fe92bf8cccdc0b09449bf2e1
> **ovl: user xattr**
>
> Optionally allow using "user.overlay." namespace instead of "trusted.overlay."
> ...
> Disable redirect_dir and metacopy options, because these would allow privilege escalation through direct manipulation of the
> "user.overlay.redirect" or "user.overlay.metacopy" xattrs.

Fix issue 42055

Related to containerd/containerd PR 5076
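
A sketch of the decision; the inUserNS parameter and the naive uname parse are stand-ins for moby's actual probes:

```go
package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

// needsUserXattr sketches the decision: "userxattr" is required only inside
// a user namespace on kernel >= 5.11.
func needsUserXattr(inUserNS bool) (bool, error) {
	if !inUserNS {
		return false, nil // the host keeps using trusted.overlay.* xattrs
	}
	var uts unix.Utsname
	if err := unix.Uname(&uts); err != nil {
		return false, err
	}
	var major, minor int
	fmt.Sscanf(unix.ByteSliceToString(uts.Release[:]), "%d.%d", &major, &minor)
	// Ubuntu/Debian kernels < 5.11 carry an out-of-tree patch that works
	// without "userxattr"; upstream kernels >= 5.11 require it.
	return major > 5 || (major == 5 && minor >= 11), nil
}

func main() {
	fmt.Println(needsUserXattr(true))
}
```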

Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
(cherry picked from commit 11ef8d3ba9)
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
2021-03-19 03:35:59 +09:00
Jeremy Huntwork
074270703c Use buster backports to build with libseccomp-2.4.4
Fixes #41704

The latest released versions of the static binaries (20.10.3) are still unable
to use faccessat2 with musl-1.2.2 even though this was addressed in #41353 and
related issues. The underlying cause seems to be that the build system
here still uses the default version of libseccomp shipped with buster.
An updated version is available in buster backports:
https://packages.debian.org/buster-backports/libseccomp-dev

Signed-off-by: Jeremy Huntwork <jhuntwork@lightcubesolutions.com>
(cherry picked from commit 1600e851b5)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-03-15 08:58:30 +01:00
Akihiro Suda
fed6ba2790 Include VPNkit binary for arm64
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
(cherry picked from commit 088e6ee790)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-03-12 09:02:45 +01:00
Tibor Vass
59ad0b6a86 Merge pull request #42124 from thaJeztah/20.10_backport_bump_containerd
[20.10 backport] update containerd binary to v1.4.4
2021-03-08 16:23:30 -08:00
Sebastiaan van Stijn
2ab3cd8c9e update containerd binary to v1.4.4
full diff: https://github.com/containerd/containerd/compare/v1.4.3...v1.4.4

Release notes:

The fourth patch release for `containerd` 1.4 contains a fix for CVE-2021-21334
along with various other minor issues.
See [GHSA-36xw-fx78-c5r4](https://github.com/containerd/containerd/security/advisories/GHSA-36xw-fx78-c5r4)
for more details related to CVE-2021-21334.

Notable Updates

- Fix container create in CRI to prevent possible environment variable leak between containers
- Update shim server to return grpc NotFound error
- Add bounds on max `oom_score_adj` value for shim's AdjustOOMScore
- Update task manager to use fresh context when calling shim shutdown
- Update Docker resolver to avoid possible concurrent map access panic
- Update shim's log file open flags to avoid containerd hang on syscall open
- Fix incorrect usage calculation

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 1a49393403)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-03-08 18:58:05 +01:00
Akihiro Suda
bc795b19bf Merge pull request #42103 from thaJeztah/20.10_backport_skip_test 2021-03-04 00:23:01 +09:00
Sebastiaan van Stijn
d3188dc164 Dockerfile: switch to "stable" dockerfile front-end
The `RUN --mount` options have been promoted to the stable channel,
so we can switch from "experimental" to "stable".

Note that the syntax directive should no longer be needed now, but
it's good practice to add a syntax-directive, to allow building on
older versions of docker.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 083dbe9fcd)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-03-03 11:11:54 +01:00
Akihiro Suda
98273a606a dockerd-rootless-setuptool.sh: create CLI context "rootless"
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
(cherry picked from commit f2f1c0fe38)
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
2021-03-03 13:51:28 +09:00
Shengjing Zhu
a0670c6d3d pkg/archive: TestUntarParentPathPermissions requires root
=== RUN   TestUntarParentPathPermissions
    archive_unix_test.go:171: assertion failed: error is not nil: chown /tmp/TestUntarParentPathPermissions694189715/foo: operation not permitted

Signed-off-by: Shengjing Zhu <zhsj@debian.org>
(cherry picked from commit f23c1c297d)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-03-01 22:05:09 +01:00
Sebastiaan van Stijn
04d9b581e9 Update documentation links
- Using "/go/" redirects for some topics, which allows us to
  redirect to new locations if topics are moved around in the
  documentation.
- Updated some old URLs to their new location.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 328de0b8d9)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-02-25 21:54:39 +01:00
Tibor Vass
363e9a88a1 Merge pull request #42061 from thaJeztah/20.10_backport_bump_buildkit
[20.10 backport] vendor: github.com/moby/buildkit v0.8.2
2021-02-24 22:27:18 -08:00
Akihiro Suda
1015b5b438 dockerd-rootless.sh: prohibit running as root
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
(cherry picked from commit 9351e19658)
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
2021-02-25 13:33:15 +09:00
Tibor Vass
35f5f9e624 builder: fix incorrect cache match for inline cache with empty layers
See https://github.com/moby/buildkit/pull/1993

Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 9bf93e90fa)
Signed-off-by: Tibor Vass <tibor@docker.com>
2021-02-25 01:50:37 +00:00
Tibor Vass
035cb276d9 Merge pull request #42070 from thaJeztah/20.10_backport_rootless_typo_guard
[20.10 backport] dockerd-rootless.sh: add typo guard
2021-02-24 17:42:50 -08:00
Sebastiaan van Stijn
3ce37a6aa4 vendor: github.com/moby/buildkit v0.8.2
full diff: 68bb095353...9065b18ba4

- fix seccomp compatibility in 32bit arm
    - fixes Unable to build alpine:edge containers for armv7
    - fixes Buildx failing to build for arm/v7 platform on arm64 machine
- resolver: avoid error caching on token fetch
    - fixes "Error: i/o timeout should not be cached"
- fileop: fix checksum to contain indexes of inputs
- frontend/dockerfile: add RunCommand.FlagsUsed field
    - relates to [20.10] Classic builder silently ignores unsupported Dockerfile command flags
- update qemu emulators
    - relates to "Impossible to run git clone inside buildx with non x86 architecture"
- Fix reference count issues on typed errors with mount references
    - fixes errors on releasing mounts with typed execerror refs
    - fixes / addresses invalid mutable ref when using shared cache mounts
- dockerfile/docs: fix frontend image tags
- git: set token only for main remote access
    - fixes "Loading repositories with submodules is repeated. Failed to clone submodule from googlesource"
- allow skipping empty layer detection on cache export

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 9962a3f74e)
Signed-off-by: Tibor Vass <tibor@docker.com>
2021-02-25 01:41:11 +00:00
Akihiro Suda
5e8c1b4f7d dockerd-rootless.sh: add typo guard
`dockerd-rootless.sh install` is a common typo of `dockerd-rootless-setuptool.sh install`.

Now `dockerd-rootless.sh install` shows a human-readable error.

Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
(cherry picked from commit 8dc6c109b5)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-02-24 22:13:50 +01:00
Tibor Vass
7f547e15c7 Merge pull request #42060 from thaJeztah/20.10_backport_bump_swarmkit
[20.10 backport] Update Swarmkit to pick up fixes to heartbeat period and stalled tasks
2021-02-24 12:46:03 -08:00
Tibor Vass
830471acf5 Merge pull request #42066 from thaJeztah/20.10_backport_check_config
[20.10 backport] check-config.sh: add NETFILTER_XT_MARK
2021-02-24 12:45:33 -08:00
Tibor Vass
7ae42f5797 Merge pull request #42065 from thaJeztah/20.10_backport_lease_blobs_fixes
[20.10 backport] builder: fix blobs releasing via leases after pull
2021-02-24 12:44:51 -08:00
Sebastiaan van Stijn
f3d130d743 Merge pull request #42049 from thaJeztah/20.10_backport_builder_pull_fix
[20.10 backport] builder: fix pull synchronization regression
2021-02-23 21:15:32 +01:00
Piotr Karbowski
a24d92f95b check-config.sh: add NETFILTER_XT_MARK
Points out another symbol that Docker might need. In this case, Docker's
mesh network in swarm mode does not route Virtual IPs if it is unset.

From /var/logs/docker.log:

    time="2021-02-19T18:15:39+01:00" level=error msg="set up rule failed, [-t mangle -A INPUT -d 10.0.1.2/32 -j MARK --set-mark 257]:  (iptables failed: iptables --wait -t mangle -A INPUT -d 10.0.1.2/32 -j MARK --set-mark 257: iptables v1.8.7 (legacy): unknown option \"--set-mark\"\nTry `iptables -h' or 'iptables --help' for more information.\n (exit status 2))"

Bug: https://github.com/moby/libnetwork/issues/2227
Bug: https://github.com/docker/for-linux/issues/644
Bug: https://github.com/docker/for-linux/issues/525
Signed-off-by: Piotr Karbowski <piotr.karbowski@protonmail.ch>
(cherry picked from commit e8ceb97646)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-02-23 19:25:47 +01:00
Tonis Tiigi
80019e1b0e builder: fix blobs releasing via leases after pull
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit 5c01d06f72)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-02-23 19:22:35 +01:00
Tibor Vass
dc1606ad79 Merge pull request #42046 from thaJeztah/20.10_labels_regex_length_check
[20.10 backport] Check the length of the correct variable #42039
2021-02-23 10:00:58 -08:00
Adam Williams
2a220f1f3d Update Swarmkit to pick up fixes to heartbeat period and stalled tasks
Signed-off-by: Adam Williams <awilliams@mirantis.com>
(cherry picked from commit cbd2f726bf)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-02-23 09:57:37 +01:00
Sebastiaan van Stijn
148e6c9514 Merge pull request #42017 from thaJeztah/20.10_backport_build_fixes
[20.10 backport]: avoid creating parent dirs for XGlobalHeader, and fix permissions
2021-02-22 20:04:04 +01:00
Tonis Tiigi
da1a672102 builder: fix pull synchronization regression
Config resolution was synchronized based on a wrong key, as the ref
variable is initialized only later in the same function. Using
the right key isn't fully correct either, as the synchronized method
changes properties of the puller instance and can't just be skipped.
Added better error handling for the same case as well.

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit b53ea19c49)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-02-19 10:08:04 +01:00
Nathan Carlson
0e001154f9 Check the length of the correct variable #42039
Signed-off-by: Nathan Carlson <carl4403@umn.edu>
(cherry picked from commit 8d73c1ad68)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-02-18 22:23:34 +01:00
Sebastiaan van Stijn
df2cfb4d33 Merge pull request #42045 from cpuguy83/20.10_fallback_manifest_on_bad_plat
[20.10] Fallback to manifest list when no platform match
2021-02-18 21:37:34 +01:00
Tibor Vass
7f6776fb5e Merge pull request #41971 from thaJeztah/20.10_backport_seccomp_update
[20.10 backport] profiles: seccomp: update to Linux 5.11 syscall list
2021-02-18 12:36:47 -08:00
Tibor Vass
caa48de224 Merge pull request #41974 from thaJeztah/20.10_backport_for_linux_1169_plugins_custom_runtime-panic
[20.10 backport] Add shim config for custom runtimes for plugins
2021-02-18 12:36:21 -08:00
Tibor Vass
6a86c25cf0 Merge pull request #41972 from thaJeztah/20.10_backport_net_leak_fix
[20.10 backport] builder: ensure libnetwork state file do not leak
2021-02-18 12:34:14 -08:00
Tibor Vass
ff486ae873 Merge pull request #41973 from thaJeztah/20.10_backport_fix_builder_inconsisent_platform
[20.10 backport] Fix builder inconsistent error on buggy platform
2021-02-18 12:32:53 -08:00
Tibor Vass
b55d9e1b91 Merge pull request #41976 from thaJeztah/20.10_backport_reuse
[20.10 backport] replace json.Unmarshal with NewFromJSON in Create
2021-02-18 12:30:18 -08:00
Tibor Vass
b81e649d2b Merge pull request #41977 from thaJeztah/20.10_backport_minor_fixes
[20.10 backport] assorted small fixes, docs changes, and contrib
2021-02-18 12:29:07 -08:00
Tibor Vass
5bb85a962a Merge pull request #42001 from thaJeztah/20.10_backport_fix_cgroup_rule_panic
[20.10 backport] Fix daemon panic when starting container with invalid device cgroup rule
2021-02-18 12:27:38 -08:00
Tibor Vass
6de7dbd225 Merge pull request #42012 from thaJeztah/20.10_backport_fix_nanocpus_casing
[20.10 backport] api/docs: fix NanoCPUs casing in swagger
2021-02-18 12:26:04 -08:00
Tibor Vass
8e2c5fc178 Merge pull request #42013 from thaJeztah/20.10_backport_42003_fix_userns_uid_username_match
[20.10 backport] Fix userns-remap option when username & UID match
2021-02-18 12:25:13 -08:00
Tibor Vass
f88c4aeaa0 Merge pull request #42014 from thaJeztah/20.10_backport_bump_runc_binary
[20.10 backport] update runc binary to v1.0.0-rc93
2021-02-18 12:24:02 -08:00
Tibor Vass
c981698f9a Merge pull request #42025 from thaJeztah/20.10_backport_bump_rootlesskit
[20.10 backport] Update rootlesskit to v0.13.1 to fix handling of IPv6 addresses
2021-02-18 12:17:55 -08:00
Tibor Vass
d6ae06a70a Merge pull request #42042 from thaJeztah/20.10_backport_docker_dind_integration_test_fix_subnet_missmatch
[20.10 backport] Update TestDaemonRestartWithLiveRestore: fix docker0 subnet missmatch
2021-02-18 12:15:05 -08:00
Brian Goff
3beb2e4422 Move cpu variant checks into platform matcher
Wrap platforms.Only and fall back to ignoring mismatches due to empty
CPU variants. This just cleans things up and makes the logic re-usable
in other places.
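
A sketch of such a wrapper using containerd's platforms package; the type name is hypothetical and moby's real matcher differs in detail:

```go
package main

import (
	"fmt"

	"github.com/containerd/containerd/platforms"
	ocispec "github.com/opencontainers/image-spec/specs-go/v1"
)

// variantLenientMatcher wraps platforms.Only and, when the strict match
// fails, retries with the CPU variant cleared on both sides, since many
// images in the wild omit the variant.
type variantLenientMatcher struct {
	spec ocispec.Platform
	m    platforms.MatchComparer
}

func newMatcher(spec ocispec.Platform) variantLenientMatcher {
	return variantLenientMatcher{spec: spec, m: platforms.Only(spec)}
}

func (v variantLenientMatcher) Match(p ocispec.Platform) bool {
	if v.m.Match(p) {
		return true
	}
	if v.spec.Variant == "" || p.Variant == "" {
		a, b := v.spec, p
		a.Variant, b.Variant = "", ""
		return platforms.NewMatcher(platforms.Normalize(a)).Match(b)
	}
	return false
}

func main() {
	m := newMatcher(ocispec.Platform{OS: "linux", Architecture: "arm64", Variant: "v8"})
	fmt.Println(m.Match(ocispec.Platform{OS: "linux", Architecture: "arm64"}))
}
```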

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit 50f39e7247)
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
2021-02-18 20:12:07 +00:00
Brian Goff
0caf485abb Fallback to manifest list when no platform match
In some cases, in fact many in the wild, an image may have the incorrect
platform on the image config.
This can lead to failures to run an image, particularly when a user
specifies a `--platform`.
Typically what we see in the wild is a manifest list with an entry
for, as an example, linux/arm64 pointing to an image config that has
linux/amd64 on it.

This change falls back to looking up the manifest list for an image to
see if the manifest list shows the image as the correct one for that
platform.

In order to accomplish this we need to traverse the leases associated
with an image. Each image, if pulled with Docker 20.10, will have the
manifest list stored in the containerd content store with the resource
assigned to a lease keyed on the image ID.
So we look up the lease for the image, then look up the associated
resources to find the manifest list, then check the manifest list for a
platform match, then ensure that manifest refers to our image config.

This is only used as a fallback when a user specified they want a
particular platform and the image config that we have does not match
that platform.
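
A sketch of the final comparison step, with the lease/content-store plumbing elided; types are from the OCI image-spec, and the function name is illustrative:

```go
package main

import (
	"fmt"

	"github.com/containerd/containerd/platforms"
	"github.com/opencontainers/go-digest"
	ocispec "github.com/opencontainers/image-spec/specs-go/v1"
)

// manifestListMatches accepts the image for `want` if the retained manifest
// list has an entry for that platform whose digest is the manifest we
// already resolved, even though the image config carries the wrong platform.
func manifestListMatches(index ocispec.Index, resolved digest.Digest, want ocispec.Platform) bool {
	matcher := platforms.NewMatcher(platforms.Normalize(want))
	for _, desc := range index.Manifests {
		if desc.Platform == nil {
			continue
		}
		if matcher.Match(*desc.Platform) && desc.Digest == resolved {
			return true
		}
	}
	return false
}

func main() {
	d := digest.FromString("manifest")
	idx := ocispec.Index{Manifests: []ocispec.Descriptor{{
		Digest:   d,
		Platform: &ocispec.Platform{OS: "linux", Architecture: "arm64"},
	}}}
	fmt.Println(manifestListMatches(idx, d, ocispec.Platform{OS: "linux", Architecture: "arm64"}))
}
```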

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit 4be5453215)
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
2021-02-18 20:12:00 +00:00
Brian Goff
24e1d7fa59 Merge pull request #41975 from thaJeztah/20.10_backport_41794_sized_logger
[20.10 backport] Handle long log messages correctly on SizedLogger
2021-02-17 16:51:24 -08:00
Aleksa Sarai
a6a88b3145 profiles: seccomp: update to Linux 5.11 syscall list
These syscalls (some of which have been in Linux for a while but were
missing from the profile) fall into a few buckets (see the sketch after
this list):

 * close_range(2), epoll_pwait2(2) are just extensions of existing "safe
   for everyone" syscalls.

 * The mountv2 API syscalls (fs*(2), move_mount(2), open_tree(2)) are
   all equivalent to aspects of mount(2) and thus go into the
   CAP_SYS_ADMIN category.

 * process_madvise(2) is similar to the other process_*(2) syscalls and
   thus goes in the CAP_SYS_PTRACE category.
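
Sketched as data, with minimal stand-in types for the structs in profiles/seccomp:

```go
package main

import "fmt"

// Minimal stand-ins for the profile structs in profiles/seccomp; real
// entries also carry errno returns and architecture filters.
type filter struct{ Caps []string }

type syscallRule struct {
	Names    []string
	Action   string
	Includes filter
}

// additions mirrors the three buckets described above.
var additions = []syscallRule{
	{
		// Safe-for-everyone extensions of existing syscalls.
		Names:  []string{"close_range", "epoll_pwait2"},
		Action: "SCMP_ACT_ALLOW",
	},
	{
		// New mount API: equivalent in power to mount(2).
		Names:    []string{"fsconfig", "fsmount", "fsopen", "fspick", "move_mount", "open_tree"},
		Action:   "SCMP_ACT_ALLOW",
		Includes: filter{Caps: []string{"CAP_SYS_ADMIN"}},
	},
	{
		// Follows the other process_*(2) syscalls.
		Names:    []string{"process_madvise"},
		Action:   "SCMP_ACT_ALLOW",
		Includes: filter{Caps: []string{"CAP_SYS_PTRACE"}},
	},
}

func main() {
	for _, r := range additions {
		fmt.Println(r.Names, r.Includes.Caps)
	}
}
```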

Signed-off-by: Aleksa Sarai <asarai@suse.de>
(cherry picked from commit 54eff4354b)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-02-17 21:22:12 +01:00
Tonis Tiigi
e3750357a5 builder: ensure libnetwork state file do not leak
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit 7c7e168902)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-02-17 21:21:25 +01:00
Brian Goff
ab5711e619 Fix builder inconsistent error on buggy platform
When pulling an image by platform, it is possible for the image's
configured platform to not match what was in the manifest list.
The image itself is buggy because either the manifest list is incorrect
or the image config is incorrect. In any case, this is preventing people
from upgrading because many times users do not have control over these
buggy images.

This was not a problem in 19.03 because we did not compare on platform
before. It just assumed if we had the image it was the one we wanted
regardless of platform, which has its own problems.

Example Dockerfile that has this problem:

```Dockerfile
FROM --platform=linux/arm64 k8s.gcr.io/build-image/debian-iptables:buster-v1.3.0
RUN echo hello
```

This fails the first time you try to build after it finishes pulling but
before performing the `RUN` command.
On the second attempt it works because the image is already there and
does not hit the code that errors out on platform mismatch (Actually it
ignores errors if an image is returned at all).

Must be run with the classic builder (DOCKER_BUILDKIT=0).

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit 399695305c)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-02-17 21:20:46 +01:00
Brian Goff
df2a989769 Add shim config for custom runtimes for plugins
This fixes a panic when an admin specifies a custom default runtime,
when a plugin is started the shim config is nil.

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit 2903863a1d)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-02-17 21:20:03 +01:00
Kazuyoshi Kato
d13e162a63 Handle long log messages correctly on SizedLogger
Loggers that implement BufSize() (e.g. awslogs) uses the method to
tell Copier about the maximum log line length. However loggerWithCache
and RingBuffer hide the method by wrapping loggers.

As a result, Copier uses its default 16KB limit which breaks log
lines > 16kB even the destinations can handle that.

This change implements BufSize() on loggerWithCache and RingBuffer to
make sure these logger wrappes don't hide the method on the underlying
loggers.

Fixes #41794.

Signed-off-by: Kazuyoshi Kato <katokazu@amazon.com>
(cherry picked from commit bb11365e96)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-02-17 21:19:02 +01:00
Jim Lin
34446d0343 replace json.Unmarshal with NewFromJSON in Create
Signed-off-by: Jim Lin <b04705003@ntu.edu.tw>
(cherry picked from commit c9ec21e17a)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-02-17 21:18:19 +01:00
Sebastiaan van Stijn
c00fb1383f docs: fix double "the" in existing API versions
Backport of 2db5676c6e to the swagger files
used in the documentation

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 240d0b37bb)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-02-17 21:17:42 +01:00
Frederico F. de Oliveira
b7e6803ec4 swagger.yaml: Remove extra 'the' wrapped by newline
This PR was originally proposed by @phillc here: https://github.com/docker/engine/pull/456

Signed-off-by: FreddieOliveira <fredf_oliveira@ufu.br>
(cherry picked from commit 2db5676c6e)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-02-17 21:17:40 +01:00
Kir Kolyshkin
420de4c569 contrib/check-config.sh: fix INET_XFRM_MODE_TRANSPORT
This parameter was removed by kernel commit 4c145dce260137,
which made its way to kernel v5.3-rc1. Since that commit,
the functionality is built-in (i.e. it is available as long
as CONFIG_XFRM is on).

Make the check conditional.

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit 06d9020fac)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-02-17 21:17:39 +01:00
Kir Kolyshkin
8412078b1e contrib/check-config.sh: fix IOSCHED_CFQ CFQ_GROUP_IOSCHED
These config options are removed by kernel commit f382fb0bcef4,
which made its way into kernel v5.0-rc1.

Make the check conditional.

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit 18e0543587)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-02-17 21:17:37 +01:00
Kir Kolyshkin
bb0866f04e contrib/check-config.sh: fix MEMCG_SWAP_ENABLED
Kernel commit 2d1c498072de69e (which made its way into kernel v5.8-rc1)
removed CONFIG_MEMCG_SWAP_ENABLED Kconfig option, making swap accounting
always enabled (unless swapaccount=0 boot option is provided).

Make the check conditional.

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit 070f9d9dd3)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-02-17 21:17:35 +01:00
Kir Kolyshkin
db47bec3c7 contrib/check-config.sh: fix NF_NAT_NEEDED
CONFIG_NF_NAT_NEEDED was removed in kernel commit 4806e975729f99c7,
which made its way into v5.2-rc1. The functionality is now under
NF_NAT which we already check for.

Make the check for NF_NAT_NEEDED conditional.

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit 03da41152a)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-02-17 21:17:33 +01:00
Kir Kolyshkin
6bc47ca4b4 contrib/check-config.sh: fix NF_NAT_IPV4
CONFIG_NF_NAT_IPV4 was removed in kernel commit 3bf195ae6037e310,
which made its way into v5.1-rc1. The functionality is now under
NF_NAT which we already check for.

Make the check for NF_NAT_IPV4 conditional.

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit eeb53c1f22)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-02-17 21:17:31 +01:00
Kir Kolyshkin
491642e696 contrib/check-config.sh: support for cgroupv2
Before:

> Generally Necessary:
> - cgroup hierarchy: nonexistent??
>     (see https://github.com/tianon/cgroupfs-mount)

After:

> Generally Necessary:
> - cgroup hierarchy: cgroupv2

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit 76b59065ae)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-02-17 21:17:30 +01:00
gunadhya
cda6988478 Fix Error in daemon_unix.go and docker_cli_run_unit_test.go
Signed-off-by: gunadhya <6939749+gunadhya@users.noreply.github.com>
(cherry picked from commit 64465f3b5f)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-02-17 21:17:28 +01:00
Sebastiaan van Stijn
18543cd8c8 Merge pull request #42000 from thaJeztah/20.10_backport_fix_dockerfile_simple
[20.10 backport] Dockerfile.simple: Fix compile docker binary error with btrfs
2021-02-17 21:17:02 +01:00
Sebastiaan van Stijn
1640d7b986 Fix daemon panic when starting container with invalid device cgroup rule
This fixes a panic when an invalid "device cgroup rule" is passed, resulting
in an "index out of range".

This bug was introduced in the original implementation in 1756af6faf,
but was not reproducible when using the CLI, because the same commit also added
client-side validation on the flag before making an API request. The following
example, uses an invalid rule (`c *:*  rwm` - two spaces before the permissions);

```console
$ docker run --rm --network=host --device-cgroup-rule='c *:*  rwm' busybox
invalid argument "c *:*  rwm" for "--device-cgroup-rule" flag: invalid device cgroup format 'c *:*  rwm'
```

Doing the same, but using the API results in a daemon panic when starting the container;

Create a container with an invalid device cgroup rule:

```console
curl -v \
  --unix-socket /var/run/docker.sock \
  "http://localhost/v1.41/containers/create?name=foobar" \
  -H "Content-Type: application/json" \
  -d '{"Image":"busybox:latest", "HostConfig":{"DeviceCgroupRules": ["c *:*  rwm"]}}'
```

Start the container:

```console
curl -v \
  --unix-socket /var/run/docker.sock \
  -X POST \
  "http://localhost/v1.41/containers/foobar/start"
```

Observe the daemon logs:

```
2021-01-22 12:53:03.313806 I | http: panic serving @: runtime error: index out of range [0] with length 0
goroutine 571 [running]:
net/http.(*conn).serve.func1(0xc000cb2d20)
	/usr/local/go/src/net/http/server.go:1795 +0x13b
panic(0x2f32380, 0xc000aebfc0)
	/usr/local/go/src/runtime/panic.go:679 +0x1b6
github.com/docker/docker/oci.AppendDevicePermissionsFromCgroupRules(0xc000175c00, 0x8, 0x8, 0xc0000bd380, 0x1, 0x4, 0x0, 0x0, 0xc0000e69c0, 0x0, ...)
	/go/src/github.com/docker/docker/oci/oci.go:34 +0x64f
```

This patch:

- fixes the panic, allowing the daemon to return an error on container start
- adds a unit-test to validate various permutations
- adds a "todo" to verify the regular expression (and handling) of the "a" (all) value

We should also consider performing this validation when _creating_ the container,
so that an error is produced early.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 5cc1753f2c)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-02-17 21:16:01 +01:00
Sebastiaan van Stijn
6e3f2acdac docs: fix NanoCPUs casing
While the field in the Go struct is named `NanoCPUs`, it has a JSON label to
use `NanoCpus`, which was added in the original pull request (not clear what
the reason was); 846baf1fd3

Some notes:

- Golang processes field names case-insensitive, so when *using* the API,
  both cases should work, but when inspecting a container, the field is
  returned as `NanoCpus`.
- This only affects Containers.Resources. The `Limits` and `Reservation`
  for SwarmKit services and SwarmKit "nodes" do not override the name
  for JSON, so have the canonical (`NanoCPUs`) casing.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 8e2343ffd4)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-02-17 21:15:21 +01:00
Sebastiaan van Stijn
ad777ff3bc api: fix NanoCPUs casing in swagger
While the field in the Go struct is named `NanoCPUs`, it has a JSON label to
use `NanoCpus`, which was added in the original pull request (not clear what
the reason was); 846baf1fd3

Some notes:

- Golang processes field names case-insensitive, so when *using* the API,
  both cases should work, but when inspecting a container, the field is
  returned as `NanoCpus`.
- This only affects Containers.Resources. The `Limits` and `Reservation`
  for SwarmKit services and SwarmKit "nodes" do not override the name
  for JSON, so have the canonical (`NanoCPUs`) casing.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 2bd46ed7e5)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-02-17 21:15:19 +01:00
Grant Millar
94d2467613 Fix userns-remap option when username & UID match
Signed-off-by: Grant Millar <rid@cylo.io>
(cherry picked from commit 2ad187fd4a)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-02-17 21:14:40 +01:00
Sebastiaan van Stijn
acb8a48a3c update runc binary to v1.0.0-rc93
full diff: https://github.com/opencontainers/runc/compare/v1.0.0-rc92...v1.0.0-rc93
release notes: https://github.com/opencontainers/runc/releases/tag/v1.0.0-rc93

Release notes for runc v1.0.0-rc93
-------------------------------------------------

This is the last feature-rich RC release and we are in a feature-freeze until
1.0. 1.0.0~rc94 will be released in a few weeks with minimal bug fixes only,
and 1.0.0 will be released soon afterwards.

- runc's cgroupv2 support is no longer considered experimental. It is now
  believed to be fully ready for production deployments. In addition, runc's
  cgroup code has been improved:
    - The systemd cgroup driver has been improved to be more resilient and
      handle more systemd properties correctly.
    - We now make use of openat2(2) when possible to improve the security of
      cgroup operations (in future runc will be wholesale ported to libpathrs to
      get this protection in all codepaths).
- runc's mountinfo parsing code has been reworked significantly, making
  container startup times significantly faster and less wasteful in general.
- runc now has special handling for seccomp profiles to avoid making new
  syscalls unusable for glibc. This is done by installing a custom prefix to
  all seccomp filters which returns -ENOSYS for syscalls that are newer than
  any syscall in the profile (meaning they have a larger syscall number).

  This should not cause any regressions (because previously users would simply
  get -EPERM rather than -ENOSYS, and the rule applied above is the most
  conservative rule possible) but please report any regressions you find as a
  result of this change -- in particular, programs which have special fallback
  code that is only run in the case of -EPERM.
- runc now supports the following new runtime-spec features:
    - The umask of a container can now be specified.
    - The new Linux 5.9 capabilities (CAP_PERFMON, CAP_BPF, and
      CAP_CHECKPOINT_RESTORE) are now supported.
    - The "unified" cgroup configuration option, which allows users to explicitly
      specify the limits based on the cgroup file names rather than abstracting
      them through OCI configuration. This is currently limited in scope to
      cgroupv2.
- Various rootless containers improvements:
    - runc will no longer cause conflicts if a user specifies a custom device
      which conflicts with a user-configured device -- the user device takes
      precedence.
    - runc no longer panics if /sys/fs/cgroup is missing in rootless mode.
- runc --root is now always treated as local to the current working directory.
- The --no-pivot-root hardening was improved to handle nested mounts properly
  (please note that we still strongly recommend that users do not use
  --no-pivot-root -- it is still an insecure option).
- A large number of code cleanliness and other various cleanups, including
  fairly large changes to our tests and CI to make them all run more
  efficiently.

For packagers the following changes have been made which will have impact on
your packaging of runc:

- The "selinux" and "apparmor" buildtags have been removed, and now all runc
  builds will have SELinux and AppArmor support enabled. Note that "seccomp"
  is still optional (though we very highly recommend you enable it).
- make install DESTDIR= now functions correctly.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 28e5a3c5a4)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-02-17 21:13:50 +01:00
Sebastiaan van Stijn
5d442b1cb7 pkg/archive: Unpack() use 0755 permissions for missing directories
Commit edb62a3ace fixed a bug in MkdirAllAndChown()
that caused the specified permissions to not be applied correctly. As a result
of that bug, the configured umask would be applied.

When extracting archives, Unpack() used 0777 permissions when creating missing
parent directories for files that were extracted.
Before edb62a3ace, this resulted in actual
permissions of those directories to be 0755 on most configurations (using a
default 022 umask).

Creating these directories should not depend on the host's umask configuration.
This patch changes the permissions to 0755 to match the previous behavior,
and to reflect the original intent of using 0755 as default.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 25ada76437)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-02-17 21:12:57 +01:00
Tonis Tiigi
5db18e0aba archive: avoid creating parent dirs for XGlobalHeader
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit ba7906aef3)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-02-17 21:12:55 +01:00
Sebastiaan van Stijn
94feac18d2 Update rootlesskit to v0.13.1 to fix handling of IPv6 addresses
v0.13.1

- Refactor `ParsePortSpec` to handle IPv6 addresses, and improve validation

v0.13.0

- `rootlesskit --pidns`: fix propagating exit status
- Support cgroup2 evacuation, e.g., `systemd-run -p Delegate=yes --user -t rootlesskit --cgroupns --pidns --evacuate-cgroup2=evac --net=slirp4netns bash`

v0.12.0

- Port forwarding API now supports setting `ChildIP`
- The `vendor` directory is no longer included in this repo. Run `go mod vendor` if you need

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit e32ae1973a)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-02-17 21:11:17 +01:00
Alexis Ries
cc377d27ac Update TestDaemonRestartWithLiveRestore: fix docker0 subnet missmatch
Fix docker0 subnet missmatch when running from docker in docker (dind)

Signed-off-by: Alexis Ries <ries.alexis@gmail.com>
(cherry picked from commit 96e103feb1)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-02-17 21:07:36 +01:00
Brian Goff
fae366b323 Merge pull request #41970 from thaJeztah/20.10_backport_testing_fixes 2021-02-17 09:37:19 -08:00
Brian Goff
dfce527001 Merge pull request #42030 from thaJeztah/20.10_backport_cgroup2ci_jenkins 2021-02-16 09:37:37 -08:00
Sebastiaan van Stijn
fc07fecfb5 TestBuildUserNamespaceValidateCapabilitiesAreV2: verify build completed
Check if the `docker build` completed successfully before continuing.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit fa480403c7)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-02-16 14:20:02 +01:00
Sebastiaan van Stijn
f7893961de TestBuildUserNamespaceValidateCapabilitiesAreV2: use correct image name
This currently doesn't make a difference, because load.FrozenImagesLinux()
currently loads all frozen images, not just the specified one, but in case
that is fixed/implemented at some point.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 26965fbfa0)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-02-16 14:20:00 +01:00
Akihiro Suda
d31b2141ae Jenkinsfile: add cgroup2
Thanks to Stefan Scherer for setting up the Jenkins nodes.

Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
(cherry picked from commit c23b99f4db)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-02-16 09:22:30 +01:00
Akihiro Suda
5de9bc7e01 TestInspectOomKilledTrue: skip on cgroup v2
The test fails intermittently on cgroup v2.

```
=== FAIL: amd64.integration.container TestInspectOomKilledTrue (0.53s)
    kill_test.go:171: assertion failed: true (true bool) != false (inspect.State.OOMKilled bool)
```

Tracked in issue 41929

Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
(cherry picked from commit c316dd7cc5)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-02-16 09:22:27 +01:00
Tibor Vass
11443bc681 Merge pull request #41957 from AkihiroSuda/cherrypick-41892-2010
[20.10 backport] pkg/archive: allow mknodding FIFO inside userns
2021-02-11 11:56:54 -08:00
Lei Jiang
ff49cb3e33 Dockerfile.simple: Fix compile docker binary error with btrfs
Use the image build from Dockerfile.simple to build docker binary failed
with not find <brtfs/ioctl.h>, we need to install libbtrfs-dev to fix this.
```
Building: bundles/dynbinary-daemon/dockerd-dev
GOOS="" GOARCH="" GOARM=""
.gopath/src/github.com/docker/docker/daemon/graphdriver/btrfs/btrfs.go:8:10: fatal error: btrfs/ioctl.h: No such file or directory
 #include <btrfs/ioctl.h>

```

Signed-off-by: Lei Jitang <leijitang@outlook.com>
(cherry picked from commit dd7ee8ea3e)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-02-08 17:47:03 +01:00
Sebastiaan van Stijn
49e706e14c Dockerfile.buildx: update buildx to v0.5.1
full diff: https://github.com/docker/buildx/compare/v0.3.1...v0.5.1

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 30b20a6bdd)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-02-03 13:54:27 +01:00
Sebastiaan van Stijn
0211909bde testing: update docker-py 4.4.1
run docker-py integration tests of the latest release;

full diff: https://github.com/docker/docker-py/compare/4.3.0...4.4.1

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 14fb165085)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-02-03 13:54:22 +01:00
Sebastiaan van Stijn
faf6442f80 integration: fix TestBuildUserNamespaceValidateCapabilitiesAreV2 not using frozen image
Commit f2f5106c92 added this test to verify loading
of images that were built with user-namespaces enabled.

However, because this test spins up a new daemon, not the daemon that's set up by
the test-suite's `TestMain()` (which loads the frozen images).

As a result, the `debian:bullseye` image was pulled from Docker Hub when running
the test;

    Calling POST /v1.41/images/load?quiet=1
    Applying tar in /go/src/github.com/docker/docker/bundles/test-integration/TestBuildUserNamespaceValidateCapabilitiesAreV2/d4d366b15997b/root/165536.165536/overlay2/3f7f9375197667acaf7bc810b34689c21f8fed9c52c6765c032497092ca023d6/diff" storage-driver=overlay
    Applied tar sha256:845f0e5159140e9dbcad00c0326c2a506fbe375aa1c229c43f082867d283149c to 3f7f9375197667acaf7bc810b34689c21f8fed9c52c6765c032497092ca023d6, size: 5922359
    Calling POST /v1.41/build?buildargs=null&cachefrom=null&cgroupparent=&cpuperiod=0&cpuquota=0&cpusetcpus=&cpusetmems=&cpushares=0&dockerfile=&labels=null&memory=0&memswap=0&networkmode=&rm=0&shmsize=0&t=capabilities%3A1.0&target=&ulimits=null&version=
    Trying to pull debian from https://registry-1.docker.io v2
    Fetching manifest from remote" digest="sha256:f169dbadc9021fc0b08e371d50a772809286a167f62a8b6ae86e4745878d283d" error="<nil>" remote="docker.io/library/debian:bullseye
    Pulling ref from V2 registry: debian:bullseye
    ...

This patch updates `TestBuildUserNamespaceValidateCapabilitiesAreV2` to load the
frozen image. `StartWithBusybox` is also changed to `Start`, because the test
is not using the busybox image, so there's no need to load it.

In a followup, we should probably add some utilities to make this easier to set up
(and to allow passing the list frozen images that we want to load, without having
to "hard-code" the image name to load).

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 46dfc31342)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-02-03 13:54:16 +01:00
Brian Goff
f0e526f43e Make test work with rootless mode
Using `d.Kill()` with rootless mode causes the restarted daemon to not
be able to start containerd (it times out).

Originally this was SIGKILLing the daemon because we were hoping to not
have to manipulate on disk state, but since we need to anyway we can
shut it down normally.

I also tested this to ensure the test fails correctly without the fix
that the test was added to check for.

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit e6591a9c7a)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2021-02-03 13:54:09 +01:00
Brian Goff
11ecfe8a81 Merge pull request #41959 from AkihiroSuda/cherrypick-41917-2010
[20.10 backport] TestCgroupNamespacesRunOlderClient: support cgroup v2
2021-02-02 10:54:01 -08:00
Brian Goff
49df387b71 Merge pull request #41958 from AkihiroSuda/cherrypick-41894-2010 2021-02-02 10:52:42 -08:00
Brian Goff
54f561aeb9 Merge pull request #41956 from AkihiroSuda/cherrypick-41947-2010
[20.10 backport] rootless: prevent the service hanging when stopping (set systemd KillMode to mixed)
2021-02-02 10:51:08 -08:00
Akihiro Suda
519a55f491 TestCgroupNamespacesRunOlderClient: support cgroup v2
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
2021-02-02 14:34:08 +09:00
Akihiro Suda
b6a6a35684 docker info: adjust warning strings for cgroup v2
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
(cherry picked from commit 00225e220f)
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
2021-02-02 14:32:13 +09:00
Akihiro Suda
25bd941ae4 docker info: silence unhandleable warnings
The following warnings in `docker info` are now discarded,
because there is no action user can actually take.

On cgroup v1:
- "WARNING: No blkio weight support"
- "WARNING: No blkio weight_device support"

On cgroup v2:
- "WARNING: No kernel memory TCP limit support"
- "WARNING: No oom kill disable support"

`docker run` still prints warnings when the missing feature is being attempted to use.

Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
(cherry picked from commit 8086443a44)
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
2021-02-02 14:32:00 +09:00
Akihiro Suda
a287e76e15 pkg/archive: allow mknodding FIFO inside userns
Fix #41803

Also attempt to mknod devices.
Mknodding devices are likely to fail, but still worth trying when
running with a seccomp user notification.

Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
(cherry picked from commit d5d5cccb7e)
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
2021-02-02 14:29:49 +09:00
Akihiro Suda
58283298d7 rootless: set systemd KillMode to mixed
Now `systemctl --user stop docker` completes just with in 1 or 2 seconds.

Fix issue 41944 ("Docker rootless does not exit properly if containers are running")

See systemd.kill(5) https://www.freedesktop.org/software/systemd/man/systemd.kill.html

Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
(cherry picked from commit 05566adf71)
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
2021-02-02 14:26:47 +09:00
Tibor Vass
46229ca1d8 Use golang.org/x/sys/execabs
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 7ca0cb7ffa)
Signed-off-by: Tibor Vass <tibor@docker.com>
2021-01-28 21:33:12 +00:00
Brian Goff
a7d4af84bd pull: Validate layer digest format
Otherwise a malformed or empty digest may cause a panic.

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
2021-01-28 21:33:12 +00:00
Brian Goff
611eb6ffb3 buildkit: Apply apparmor profile
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
2021-01-28 21:33:12 +00:00
Tibor Vass
4afe620fac vendor buildkit 68bb095353c65bc3993fd534c26cf77fe05e61b1
Signed-off-by: Tibor Vass <tibor@docker.com>
2021-01-28 20:20:56 +00:00
Brian Goff
e908cc3901 Use real root with 0701 perms
Various dirs in /var/lib/docker contain data that needs to be mounted
into a container. For this reason, these dirs are set to be owned by the
remapped root user, otherwise there can be permissions issues.
However, this uneccessarily exposes these dirs to an unprivileged user
on the host.

Instead, set the ownership of these dirs to the real root (or rather the
UID/GID of dockerd) with 0701 permissions, which allows the remapped
root to enter the directories but not read/write to them.
The remapped root needs to enter these dirs so the container's rootfs
can be configured... e.g. to mount /etc/resolve.conf.

This prevents an unprivileged user from having read/write access to
these dirs on the host.
The flip side of this is now any user can enter these directories.

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
2021-01-26 17:23:32 +00:00
Brian Goff
bfedd27259 Do not set DOCKER_TMP to be owned by remapped root
The remapped root does not need access to this dir.
Having this owned by the remapped root opens the host up to an
uprivileged user on the host being able to escalate privileges.

While it would not be normal for the remapped UID to be used outside of
the container context, it could happen.

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
2021-01-26 17:23:32 +00:00
Brian Goff
edb62a3ace Ensure MkdirAllAndChown also sets perms
Generally if we ever need to change perms of a dir, between versions,
this ensures the permissions actually change when we think it should
change without having to handle special cases if it already existed.

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
2021-01-26 17:23:32 +00:00
6210 changed files with 350627 additions and 604142 deletions

22
.DEREK.yml Normal file
View File

@@ -0,0 +1,22 @@
curators:
- aboch
- alexellis
- andrewhsu
- anonymuse
- arkodg
- chanwit
- ehazlett
- fntlnz
- gianarb
- kolyshkin
- mgoelzer
- olljanat
- programmerq
- rheinwein
- ripcurld0
- thajeztah
features:
- comments
- pr_description_required

View File

@@ -1,4 +1,6 @@
.git
bundles/
cli/winresources/**/winres.json
cli/winresources/**/*.syso
.go-pkg-cache
.gopath
bundles
vendor/pkg

3
.gitattributes vendored
View File

@@ -1,3 +0,0 @@
Dockerfile* linguist-language=Dockerfile
vendor.mod linguist-language=Go-Module
vendor.sum linguist-language=Go-Checksums

1
.github/CODEOWNERS vendored
View File

@@ -6,6 +6,7 @@
builder/** @tonistiigi
contrib/mkimage/** @tianon
daemon/graphdriver/devmapper/** @rhvgoyal
daemon/graphdriver/lcow/** @johnstep
daemon/graphdriver/overlay/** @dmcgowan
daemon/graphdriver/overlay2/** @dmcgowan
daemon/graphdriver/windows/** @johnstep

View File

@@ -1,27 +0,0 @@
name: 'Setup Runner'
description: 'Composite action to set up the GitHub Runner for jobs in the test.yml workflow'
runs:
using: composite
steps:
- run: |
sudo modprobe ip_vs
sudo modprobe ipv6
sudo modprobe ip6table_filter
sudo modprobe -r overlay
sudo modprobe overlay redirect_dir=off
shell: bash
- run: |
if [ ! -e /etc/docker/daemon.json ]; then
echo '{}' | tee /etc/docker/daemon.json >/dev/null
fi
DOCKERD_CONFIG=$(jq '.+{"experimental":true,"live-restore":true,"ipv6":true,"fixed-cidr-v6":"2001:db8:1::/64"}' /etc/docker/daemon.json)
sudo tee /etc/docker/daemon.json <<<"$DOCKERD_CONFIG" >/dev/null
sudo service docker restart
shell: bash
- run: |
./contrib/check-config.sh || true
shell: bash
- run: |
docker info
shell: bash

View File

@@ -1,48 +0,0 @@
# reusable workflow
name: .dco
# TODO: hide reusable workflow from the UI. Tracked in https://github.com/community/community/discussions/12025
on:
workflow_call:
env:
ALPINE_VERSION: 3.16
jobs:
run:
runs-on: ubuntu-20.04
steps:
-
name: Checkout
uses: actions/checkout@v3
with:
fetch-depth: 0
-
name: Dump context
uses: actions/github-script@v6
with:
script: |
console.log(JSON.stringify(context, null, 2));
-
name: Get base ref
id: base-ref
uses: actions/github-script@v6
with:
result-encoding: string
script: |
if (/^refs\/pull\//.test(context.ref) && context.payload?.pull_request?.base?.ref != undefined) {
return context.payload.pull_request.base.ref;
}
return context.ref.replace(/^refs\/heads\//g, '');
-
name: Validate
run: |
docker run --rm \
-v "$(pwd):/workspace" \
-e VALIDATE_REPO \
-e VALIDATE_BRANCH \
alpine:${{ env.ALPINE_VERSION }} sh -c 'apk add --no-cache -q bash git openssh-client && git config --system --add safe.directory /workspace && cd /workspace && hack/validate/dco'
env:
VALIDATE_REPO: ${{ github.server_url }}/${{ github.repository }}.git
VALIDATE_BRANCH: ${{ steps.base-ref.outputs.result }}

View File

@@ -1,498 +0,0 @@
# reusable workflow
name: .windows
# TODO: hide reusable workflow from the UI. Tracked in https://github.com/community/community/discussions/12025
on:
workflow_call:
inputs:
os:
required: true
type: string
send_coverage:
required: false
type: boolean
default: false
env:
GO_VERSION: 1.19.3
GOTESTLIST_VERSION: v0.2.0
TESTSTAT_VERSION: v0.1.3
WINDOWS_BASE_IMAGE: mcr.microsoft.com/windows/servercore
WINDOWS_BASE_TAG_2019: ltsc2019
WINDOWS_BASE_TAG_2022: ltsc2022
TEST_IMAGE_NAME: moby:test
TEST_CTN_NAME: moby
DOCKER_BUILDKIT: 0
ITG_CLI_MATRIX_SIZE: 6
jobs:
build:
runs-on: ${{ inputs.os }}
env:
GOPATH: ${{ github.workspace }}\go
GOBIN: ${{ github.workspace }}\go\bin
BIN_OUT: ${{ github.workspace }}\out
defaults:
run:
working-directory: ${{ env.GOPATH }}/src/github.com/docker/docker
steps:
-
name: Checkout
uses: actions/checkout@v3
with:
path: ${{ env.GOPATH }}/src/github.com/docker/docker
-
name: Env
run: |
Get-ChildItem Env: | Out-String
-
name: Init
run: |
New-Item -ItemType "directory" -Path "${{ github.workspace }}\go-build"
New-Item -ItemType "directory" -Path "${{ github.workspace }}\go\pkg\mod"
If ("${{ inputs.os }}" -eq "windows-2019") {
echo "WINDOWS_BASE_IMAGE_TAG=${{ env.WINDOWS_BASE_TAG_2019 }}" | Out-File -FilePath $Env:GITHUB_ENV -Encoding utf-8 -Append
} ElseIf ("${{ inputs.os }}" -eq "windows-2022") {
echo "WINDOWS_BASE_IMAGE_TAG=${{ env.WINDOWS_BASE_TAG_2022 }}" | Out-File -FilePath $Env:GITHUB_ENV -Encoding utf-8 -Append
}
-
name: Cache
uses: actions/cache@v3
with:
path: |
~\AppData\Local\go-build
~\go\pkg\mod
${{ github.workspace }}\go-build
${{ env.GOPATH }}\pkg\mod
key: ${{ inputs.os }}-${{ github.job }}-${{ hashFiles('**/vendor.sum') }}
restore-keys: |
${{ inputs.os }}-${{ github.job }}-
-
name: Docker info
run: |
docker info
-
name: Build base image
run: |
docker pull ${{ env.WINDOWS_BASE_IMAGE }}:${{ env.WINDOWS_BASE_IMAGE_TAG }}
docker tag ${{ env.WINDOWS_BASE_IMAGE }}:${{ env.WINDOWS_BASE_IMAGE_TAG }} microsoft/windowsservercore
docker build --build-arg GO_VERSION -t ${{ env.TEST_IMAGE_NAME }} -f Dockerfile.windows .
-
name: Build binaries
run: |
& docker run --name ${{ env.TEST_CTN_NAME }} -e "DOCKER_GITCOMMIT=${{ github.sha }}" `
-v "${{ github.workspace }}\go-build:C:\Users\ContainerAdministrator\AppData\Local\go-build" `
-v "${{ github.workspace }}\go\pkg\mod:C:\gopath\pkg\mod" `
${{ env.TEST_IMAGE_NAME }} hack\make.ps1 -Daemon -Client
-
name: Copy artifacts
run: |
New-Item -ItemType "directory" -Path "${{ env.BIN_OUT }}"
docker cp "${{ env.TEST_CTN_NAME }}`:c`:\gopath\src\github.com\docker\docker\bundles\docker.exe" ${{ env.BIN_OUT }}\
docker cp "${{ env.TEST_CTN_NAME }}`:c`:\gopath\src\github.com\docker\docker\bundles\dockerd.exe" ${{ env.BIN_OUT }}\
docker cp "${{ env.TEST_CTN_NAME }}`:c`:\gopath\bin\gotestsum.exe" ${{ env.BIN_OUT }}\
docker cp "${{ env.TEST_CTN_NAME }}`:c`:\containerd\bin\containerd.exe" ${{ env.BIN_OUT }}\
docker cp "${{ env.TEST_CTN_NAME }}`:c`:\containerd\bin\containerd-shim-runhcs-v1.exe" ${{ env.BIN_OUT }}\
-
name: Upload artifacts
uses: actions/upload-artifact@v3
with:
name: build-${{ inputs.os }}
path: ${{ env.BIN_OUT }}/*
if-no-files-found: error
retention-days: 2
unit-test:
runs-on: ${{ inputs.os }}
timeout-minutes: 120
env:
GOPATH: ${{ github.workspace }}\go
GOBIN: ${{ github.workspace }}\go\bin
defaults:
run:
working-directory: ${{ env.GOPATH }}/src/github.com/docker/docker
steps:
-
name: Checkout
uses: actions/checkout@v3
with:
path: ${{ env.GOPATH }}/src/github.com/docker/docker
-
name: Env
run: |
Get-ChildItem Env: | Out-String
-
name: Init
run: |
New-Item -ItemType "directory" -Path "${{ github.workspace }}\go-build"
New-Item -ItemType "directory" -Path "${{ github.workspace }}\go\pkg\mod"
New-Item -ItemType "directory" -Path "bundles"
If ("${{ inputs.os }}" -eq "windows-2019") {
echo "WINDOWS_BASE_IMAGE_TAG=${{ env.WINDOWS_BASE_TAG_2019 }}" | Out-File -FilePath $Env:GITHUB_ENV -Encoding utf-8 -Append
} ElseIf ("${{ inputs.os }}" -eq "windows-2022") {
echo "WINDOWS_BASE_IMAGE_TAG=${{ env.WINDOWS_BASE_TAG_2022 }}" | Out-File -FilePath $Env:GITHUB_ENV -Encoding utf-8 -Append
}
-
name: Cache
uses: actions/cache@v3
with:
path: |
~\AppData\Local\go-build
~\go\pkg\mod
${{ github.workspace }}\go-build
${{ env.GOPATH }}\pkg\mod
key: ${{ inputs.os }}-${{ github.job }}-${{ hashFiles('**/vendor.sum') }}
restore-keys: |
${{ inputs.os }}-${{ github.job }}-
-
name: Docker info
run: |
docker info
-
name: Build base image
run: |
docker pull ${{ env.WINDOWS_BASE_IMAGE }}:${{ env.WINDOWS_BASE_IMAGE_TAG }}
docker tag ${{ env.WINDOWS_BASE_IMAGE }}:${{ env.WINDOWS_BASE_IMAGE_TAG }} microsoft/windowsservercore
docker build --build-arg GO_VERSION -t ${{ env.TEST_IMAGE_NAME }} -f Dockerfile.windows .
-
name: Test
run: |
& docker run --name ${{ env.TEST_CTN_NAME }} -e "DOCKER_GITCOMMIT=${{ github.sha }}" `
-v "${{ github.workspace }}\go-build:C:\Users\ContainerAdministrator\AppData\Local\go-build" `
-v "${{ github.workspace }}\go\pkg\mod:C:\gopath\pkg\mod" `
-v "${{ env.GOPATH }}\src\github.com\docker\docker\bundles:C:\gopath\src\github.com\docker\docker\bundles" `
${{ env.TEST_IMAGE_NAME }} hack\make.ps1 -TestUnit
-
name: Send to Codecov
if: inputs.send_coverage
uses: codecov/codecov-action@v3
with:
working-directory: ${{ env.GOPATH }}\src\github.com\docker\docker
directory: bundles
env_vars: RUNNER_OS
flags: unit
-
name: Upload reports
if: always()
uses: actions/upload-artifact@v3
with:
name: ${{ inputs.os }}-unit-reports
path: ${{ env.GOPATH }}\src\github.com\docker\docker\bundles\*
unit-test-report:
runs-on: ubuntu-latest
if: always()
needs:
- unit-test
steps:
-
name: Set up Go
uses: actions/setup-go@v3
with:
go-version: ${{ env.GO_VERSION }}
-
name: Download artifacts
uses: actions/download-artifact@v3
with:
name: ${{ inputs.os }}-unit-reports
path: /tmp/artifacts
-
name: Install teststat
run: |
go install github.com/vearutop/teststat@${{ env.TESTSTAT_VERSION }}
-
name: Create summary
run: |
teststat -markdown $(find /tmp/artifacts -type f -name '*.json' -print0 | xargs -0) >> $GITHUB_STEP_SUMMARY
integration-test-prepare:
runs-on: ubuntu-latest
outputs:
matrix: ${{ steps.tests.outputs.matrix }}
steps:
-
name: Checkout
uses: actions/checkout@v3
-
name: Set up Go
uses: actions/setup-go@v3
with:
go-version: ${{ env.GO_VERSION }}
-
name: Install gotestlist
run:
go install github.com/crazy-max/gotestlist/cmd/gotestlist@${{ env.GOTESTLIST_VERSION }}
-
name: Create matrix
id: tests
working-directory: ./integration-cli
run: |
# Distribute integration-cli tests for the matrix in integration-test job.
# Also prepend ./... to the matrix. This is a special case to run "Test integration" step exclusively.
matrix="$(gotestlist -d ${{ env.ITG_CLI_MATRIX_SIZE }} ./...)"
matrix="$(echo "$matrix" | jq -c '. |= ["./..."] + .')"
echo "matrix=$matrix" >> $GITHUB_OUTPUT
-
name: Show matrix
run: |
echo ${{ steps.tests.outputs.matrix }}
integration-test:
runs-on: ${{ inputs.os }}
timeout-minutes: 120
needs:
- build
- integration-test-prepare
strategy:
fail-fast: false
matrix:
runtime:
- builtin
- containerd
test: ${{ fromJson(needs.integration-test-prepare.outputs.matrix) }}
env:
GOPATH: ${{ github.workspace }}\go
GOBIN: ${{ github.workspace }}\go\bin
BIN_OUT: ${{ github.workspace }}\out
defaults:
run:
working-directory: ${{ env.GOPATH }}/src/github.com/docker/docker
steps:
-
name: Checkout
uses: actions/checkout@v3
with:
path: ${{ env.GOPATH }}/src/github.com/docker/docker
-
name: Env
run: |
Get-ChildItem Env: | Out-String
-
name: Download artifacts
uses: actions/download-artifact@v3
with:
name: build-${{ inputs.os }}
path: ${{ env.BIN_OUT }}
-
name: Init
run: |
New-Item -ItemType "directory" -Path "bundles"
If ("${{ inputs.os }}" -eq "windows-2019") {
echo "WINDOWS_BASE_IMAGE_TAG=${{ env.WINDOWS_BASE_TAG_2019 }}" | Out-File -FilePath $Env:GITHUB_ENV -Encoding utf-8 -Append
} ElseIf ("${{ inputs.os }}" -eq "windows-2022") {
echo "WINDOWS_BASE_IMAGE_TAG=${{ env.WINDOWS_BASE_TAG_2022 }}" | Out-File -FilePath $Env:GITHUB_ENV -Encoding utf-8 -Append
}
Write-Output "${{ env.BIN_OUT }}" | Out-File -FilePath $env:GITHUB_PATH -Encoding utf8 -Append
-
# removes docker service that is currently installed on the runner. we
# could use Uninstall-Package but not yet available on Windows runners.
# more info: https://github.com/actions/virtual-environments/blob/d3a5bad25f3b4326c5666bab0011ac7f1beec95e/images/win/scripts/Installers/Install-Docker.ps1#L11
name: Removing current daemon
run: |
if (Get-Service docker -ErrorAction SilentlyContinue) {
$dockerVersion = (docker version -f "{{.Server.Version}}")
Write-Host "Current installed Docker version: $dockerVersion"
# remove service
Stop-Service -Force -Name docker
Remove-Service -Name docker
# removes event log entry. we could use "Remove-EventLog -LogName -Source docker"
# but this cmd is not available atm
$ErrorActionPreference = "SilentlyContinue"
& reg delete "HKLM\SYSTEM\CurrentControlSet\Services\EventLog\Application\docker" /f 2>&1 | Out-Null
$ErrorActionPreference = "Stop"
Write-Host "Service removed"
}
-
name: Starting containerd
if: matrix.runtime == 'containerd'
run: |
Write-Host "Generating config"
& "${{ env.BIN_OUT }}\containerd.exe" config default | Out-File "$env:TEMP\ctn.toml" -Encoding ascii
Write-Host "Creating service"
New-Item -ItemType Directory "$env:TEMP\ctn-root" -ErrorAction SilentlyContinue | Out-Null
New-Item -ItemType Directory "$env:TEMP\ctn-state" -ErrorAction SilentlyContinue | Out-Null
Start-Process -Wait "${{ env.BIN_OUT }}\containerd.exe" `
-ArgumentList "--log-level=debug", `
"--config=$env:TEMP\ctn.toml", `
"--address=\\.\pipe\containerd-containerd", `
"--root=$env:TEMP\ctn-root", `
"--state=$env:TEMP\ctn-state", `
"--log-file=$env:TEMP\ctn.log", `
"--register-service"
Write-Host "Starting service"
Start-Service -Name containerd
Start-Sleep -Seconds 5
Write-Host "Service started successfully!"
-
name: Starting test daemon
run: |
Write-Host "Creating service"
If ("${{ matrix.runtime }}" -eq "containerd") {
$runtimeArg="--containerd=\\.\pipe\containerd-containerd"
echo "DOCKER_WINDOWS_CONTAINERD_RUNTIME=1" | Out-File -FilePath $Env:GITHUB_ENV -Encoding utf-8 -Append
}
New-Item -ItemType Directory "$env:TEMP\moby-root" -ErrorAction SilentlyContinue | Out-Null
New-Item -ItemType Directory "$env:TEMP\moby-exec" -ErrorAction SilentlyContinue | Out-Null
Start-Process -Wait -NoNewWindow "${{ env.BIN_OUT }}\dockerd" `
-ArgumentList $runtimeArg, "--debug", `
"--host=npipe:////./pipe/docker_engine", `
"--data-root=$env:TEMP\moby-root", `
"--exec-root=$env:TEMP\moby-exec", `
"--pidfile=$env:TEMP\docker.pid", `
"--register-service"
Write-Host "Starting service"
Start-Service -Name docker
Write-Host "Service started successfully!"
-
name: Waiting for test daemon to start
run: |
$tries=20
Write-Host "Waiting for the test daemon to start..."
While ($true) {
$ErrorActionPreference = "SilentlyContinue"
& "${{ env.BIN_OUT }}\docker" version
$ErrorActionPreference = "Stop"
If ($LastExitCode -eq 0) {
break
}
$tries--
If ($tries -le 0) {
Throw "Failed to get a response from the daemon"
}
Write-Host -NoNewline "."
Start-Sleep -Seconds 1
}
Write-Host "Test daemon started and replied!"
env:
DOCKER_HOST: npipe:////./pipe/docker_engine
-
name: Docker info
run: |
& "${{ env.BIN_OUT }}\docker" info
env:
DOCKER_HOST: npipe:////./pipe/docker_engine
-
name: Building contrib/busybox
run: |
& "${{ env.BIN_OUT }}\docker" build -t busybox `
--build-arg WINDOWS_BASE_IMAGE `
--build-arg WINDOWS_BASE_IMAGE_TAG `
.\contrib\busybox\
env:
DOCKER_HOST: npipe:////./pipe/docker_engine
-
name: List images
run: |
& "${{ env.BIN_OUT }}\docker" images
env:
DOCKER_HOST: npipe:////./pipe/docker_engine
-
name: Set up Go
uses: actions/setup-go@v3
with:
go-version: ${{ env.GO_VERSION }}
-
name: Test integration
if: matrix.test == './...'
run: |
.\hack\make.ps1 -TestIntegration
env:
DOCKER_HOST: npipe:////./pipe/docker_engine
GO111MODULE: "off"
TEST_CLIENT_BINARY: ${{ env.BIN_OUT }}\docker
-
name: Test integration-cli
if: matrix.test != './...'
run: |
.\hack\make.ps1 -TestIntegrationCli
env:
DOCKER_HOST: npipe:////./pipe/docker_engine
GO111MODULE: "off"
TEST_CLIENT_BINARY: ${{ env.BIN_OUT }}\docker
INTEGRATION_TESTRUN: ${{ matrix.test }}
-
name: Send to Codecov
if: inputs.send_coverage
uses: codecov/codecov-action@v3
with:
working-directory: ${{ env.GOPATH }}\src\github.com\docker\docker
directory: bundles
env_vars: RUNNER_OS
flags: integration,${{ matrix.runtime }}
-
name: Docker info
run: |
& "${{ env.BIN_OUT }}\docker" info
env:
DOCKER_HOST: npipe:////./pipe/docker_engine
-
name: Stop containerd
if: always() && matrix.runtime == 'containerd'
run: |
$ErrorActionPreference = "SilentlyContinue"
Stop-Service -Force -Name containerd
$ErrorActionPreference = "Stop"
-
name: Containerd logs
if: always() && matrix.runtime == 'containerd'
run: |
Copy-Item "$env:TEMP\ctn.log" -Destination ".\bundles\containerd.log"
Get-Content "$env:TEMP\ctn.log" | Out-Host
-
name: Stop daemon
if: always()
run: |
$ErrorActionPreference = "SilentlyContinue"
Stop-Service -Force -Name docker
$ErrorActionPreference = "Stop"
-
# as the daemon is registered as a service we have to check the event
# logs against the docker provider.
name: Daemon event logs
if: always()
run: |
Get-WinEvent -ea SilentlyContinue `
-FilterHashtable @{ProviderName= "docker"; LogName = "application"} |
Select-Object -Property TimeCreated, @{N='Detailed Message'; E={$_.Message}} |
Sort-Object @{Expression="TimeCreated";Descending=$false} |
Select-Object -ExpandProperty 'Detailed Message' | Tee-Object -file ".\bundles\daemon.log"
-
name: Upload reports
if: always()
uses: actions/upload-artifact@v3
with:
name: ${{ inputs.os }}-integration-reports-${{ matrix.runtime }}
path: ${{ env.GOPATH }}\src\github.com\docker\docker\bundles\*
integration-test-report:
runs-on: ubuntu-latest
if: always()
needs:
- integration-test
strategy:
fail-fast: false
matrix:
runtime:
- builtin
- containerd
steps:
-
name: Set up Go
uses: actions/setup-go@v3
with:
go-version: ${{ env.GO_VERSION }}
-
name: Download artifacts
uses: actions/download-artifact@v3
with:
name: ${{ inputs.os }}-integration-reports-${{ matrix.runtime }}
path: /tmp/artifacts
-
name: Install teststat
run: |
go install github.com/vearutop/teststat@${{ env.TESTSTAT_VERSION }}
-
name: Create summary
run: |
teststat -markdown $(find /tmp/artifacts -type f -name '*.json' -print0 | xargs -0) >> $GITHUB_STEP_SUMMARY

View File

@@ -1,113 +0,0 @@
name: buildkit
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
on:
workflow_dispatch:
push:
branches:
- 'master'
- '[0-9]+.[0-9]{2}'
pull_request:
env:
BUNDLES_OUTPUT: ./bundles
jobs:
validate-dco:
uses: ./.github/workflows/.dco.yml
build:
runs-on: ubuntu-20.04
needs:
- validate-dco
steps:
-
name: Checkout
uses: actions/checkout@v3
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
-
name: Build
uses: docker/bake-action@v2
with:
targets: binary
-
name: Upload artifacts
uses: actions/upload-artifact@v3
with:
name: binary
path: ${{ env.BUNDLES_OUTPUT }}
if-no-files-found: error
retention-days: 1
test:
runs-on: ubuntu-20.04
timeout-minutes: 120
needs:
- build
strategy:
fail-fast: false
matrix:
pkg:
- client
- cmd/buildctl
- solver
- frontend
- frontend/dockerfile
typ:
- integration
steps:
-
name: Checkout
uses: actions/checkout@v3
with:
path: moby
-
name: BuildKit ref
run: |
./hack/go-mod-prepare.sh
# FIXME(thaJeztah) temporarily overriding version to use for tests; remove with the next release of buildkit
# echo "BUILDKIT_REF=$(./hack/buildkit-ref)" >> $GITHUB_ENV
echo "BUILDKIT_REF=4febae4f874bd8ef52dec30e988c8fe0bc96b3b9" >> $GITHUB_ENV
working-directory: moby
-
name: Checkout BuildKit ${{ env.BUILDKIT_REF }}
uses: actions/checkout@v3
with:
repository: "moby/buildkit"
ref: ${{ env.BUILDKIT_REF }}
path: buildkit
-
name: Set up QEMU
uses: docker/setup-qemu-action@v2
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
-
name: Download binary artifacts
uses: actions/download-artifact@v3
with:
name: binary
path: ./buildkit/build/moby/
-
name: Update daemon.json
run: |
sudo rm /etc/docker/daemon.json
sudo service docker restart
docker version
docker info
-
name: Test
run: |
./hack/test ${{ matrix.typ }}
env:
CONTEXT: "."
TEST_DOCKERD: "1"
TEST_DOCKERD_BINARY: "./build/moby/binary-daemon/dockerd"
TESTPKGS: "./${{ matrix.pkg }}"
TESTFLAGS: "-v --parallel=1 --timeout=30m --run=//worker=dockerd$"
working-directory: buildkit

View File

@@ -1,102 +0,0 @@
name: ci
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
on:
workflow_dispatch:
push:
branches:
- 'master'
- '[0-9]+.[0-9]+'
tags:
- 'v*'
pull_request:
env:
BUNDLES_OUTPUT: ./bundles
jobs:
validate-dco:
uses: ./.github/workflows/.dco.yml
build:
runs-on: ubuntu-20.04
needs:
- validate-dco
strategy:
fail-fast: false
matrix:
target:
- binary
- dynbinary
steps:
-
name: Checkout
uses: actions/checkout@v3
with:
fetch-depth: 0
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
-
name: Build
uses: docker/bake-action@v2
with:
targets: ${{ matrix.target }}
-
name: Upload artifacts
uses: actions/upload-artifact@v3
with:
name: ${{ matrix.target }}
path: ${{ env.BUNDLES_OUTPUT }}
if-no-files-found: error
retention-days: 7
cross:
runs-on: ubuntu-20.04
needs:
- validate-dco
strategy:
fail-fast: false
matrix:
platform:
- linux/amd64
- linux/arm/v5
- linux/arm/v6
- linux/arm/v7
- linux/arm64
- linux/ppc64le
- linux/s390x
- windows/amd64
- windows/arm64
steps:
-
name: Checkout
uses: actions/checkout@v3
with:
fetch-depth: 0
-
name: Prepare
run: |
platform=${{ matrix.platform }}
echo "PLATFORM_PAIR=${platform//\//-}" >> $GITHUB_ENV
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
-
name: Build
uses: docker/bake-action@v2
with:
targets: cross
env:
DOCKER_CROSSPLATFORMS: ${{ matrix.platform }}
-
name: Upload artifacts
uses: actions/upload-artifact@v3
with:
name: cross-${{ env.PLATFORM_PAIR }}
path: ${{ env.BUNDLES_OUTPUT }}
if-no-files-found: error
retention-days: 7

View File

@@ -1,504 +0,0 @@
name: test
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
on:
workflow_dispatch:
push:
branches:
- 'master'
- '[0-9]+.[0-9]+'
tags:
- 'v*'
pull_request:
env:
GO_VERSION: 1.19.3
GOTESTLIST_VERSION: v0.2.0
TESTSTAT_VERSION: v0.1.3
ITG_CLI_MATRIX_SIZE: 6
DOCKER_EXPERIMENTAL: 1
DOCKER_GRAPHDRIVER: overlay2
jobs:
validate-dco:
uses: ./.github/workflows/.dco.yml
build-dev:
runs-on: ubuntu-20.04
needs:
- validate-dco
strategy:
fail-fast: false
matrix:
mode:
- ""
- systemd
steps:
-
name: Prepare
run: |
if [ "${{ matrix.mode }}" = "systemd" ]; then
echo "SYSTEMD=true" >> $GITHUB_ENV
fi
-
name: Checkout
uses: actions/checkout@v3
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
-
name: Build dev image
uses: docker/bake-action@v2
with:
targets: dev
set: |
*.cache-from=type=gha,scope=dev${{ matrix.mode }}
*.cache-to=type=gha,scope=dev${{ matrix.mode }},mode=max
*.output=type=cacheonly
validate-prepare:
runs-on: ubuntu-20.04
needs:
- validate-dco
outputs:
matrix: ${{ steps.scripts.outputs.matrix }}
steps:
-
name: Checkout
uses: actions/checkout@v3
-
name: Create matrix
id: scripts
run: |
scripts=$(jq -ncR '[inputs]' <<< "$(ls -I .validate -I all -I default -I dco -I golangci-lint.yml -I yamllint.yaml -A ./hack/validate/)")
echo "matrix=$scripts" >> $GITHUB_OUTPUT
-
name: Show matrix
run: |
echo ${{ steps.scripts.outputs.matrix }}
validate:
runs-on: ubuntu-20.04
needs:
- validate-prepare
- build-dev
strategy:
fail-fast: true
matrix:
script: ${{ fromJson(needs.validate-prepare.outputs.matrix) }}
steps:
-
name: Checkout
uses: actions/checkout@v3
with:
fetch-depth: 0
-
name: Set up runner
uses: ./.github/actions/setup-runner
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
-
name: Build dev image
uses: docker/bake-action@v2
with:
targets: dev
set: |
dev.cache-from=type=gha,scope=dev
-
name: Validate
run: |
make -o build validate-${{ matrix.script }}
unit:
runs-on: ubuntu-20.04
timeout-minutes: 120
needs:
- build-dev
steps:
-
name: Checkout
uses: actions/checkout@v3
-
name: Set up runner
uses: ./.github/actions/setup-runner
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
-
name: Build dev image
uses: docker/bake-action@v2
with:
targets: dev
set: |
dev.cache-from=type=gha,scope=dev
-
name: Test
run: |
make -o build test-unit
-
name: Prepare reports
if: always()
run: |
mkdir -p bundles /tmp/reports
find bundles -path '*/root/*overlay2' -prune -o -type f \( -name '*-report.json' -o -name '*.log' -o -name '*.out' -o -name '*.prof' -o -name '*-report.xml' \) -print | xargs sudo tar -czf /tmp/reports.tar.gz
tar -xzf /tmp/reports.tar.gz -C /tmp/reports
sudo chown -R $(id -u):$(id -g) /tmp/reports
tree -nh /tmp/reports
-
name: Send to Codecov
uses: codecov/codecov-action@v3
with:
directory: ./bundles
env_vars: RUNNER_OS
flags: unit
-
name: Upload reports
if: always()
uses: actions/upload-artifact@v3
with:
name: unit-reports
path: /tmp/reports/*
unit-report:
runs-on: ubuntu-20.04
if: always()
needs:
- unit
steps:
-
name: Set up Go
uses: actions/setup-go@v3
with:
go-version: ${{ env.GO_VERSION }}
-
name: Download reports
uses: actions/download-artifact@v3
with:
name: unit-reports
path: /tmp/reports
-
name: Install teststat
run: |
go install github.com/vearutop/teststat@${{ env.TESTSTAT_VERSION }}
-
name: Create summary
run: |
teststat -markdown $(find /tmp/reports -type f -name '*.json' -print0 | xargs -0) >> $GITHUB_STEP_SUMMARY
docker-py:
runs-on: ubuntu-20.04
timeout-minutes: 120
needs:
- build-dev
steps:
-
name: Checkout
uses: actions/checkout@v3
-
name: Set up runner
uses: ./.github/actions/setup-runner
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
-
name: Build dev image
uses: docker/bake-action@v2
with:
targets: dev
set: |
dev.cache-from=type=gha,scope=dev
-
name: Test
run: |
make -o build test-docker-py
-
name: Prepare reports
if: always()
run: |
mkdir -p bundles /tmp/reports
find bundles -path '*/root/*overlay2' -prune -o -type f \( -name '*-report.json' -o -name '*.log' -o -name '*.out' -o -name '*.prof' -o -name '*-report.xml' \) -print | xargs sudo tar -czf /tmp/reports.tar.gz
tar -xzf /tmp/reports.tar.gz -C /tmp/reports
sudo chown -R $(id -u):$(id -g) /tmp/reports
tree -nh /tmp/reports
-
name: Test daemon logs
if: always()
run: |
cat bundles/test-docker-py/docker.log
-
name: Upload reports
if: always()
uses: actions/upload-artifact@v3
with:
name: docker-py-reports
path: /tmp/reports/*
integration-flaky:
runs-on: ubuntu-20.04
timeout-minutes: 120
needs:
- build-dev
steps:
-
name: Checkout
uses: actions/checkout@v3
-
name: Set up runner
uses: ./.github/actions/setup-runner
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
-
name: Build dev image
uses: docker/bake-action@v2
with:
targets: dev
set: |
dev.cache-from=type=gha,scope=dev
-
name: Test
run: |
make -o build test-integration-flaky
env:
TEST_SKIP_INTEGRATION_CLI: 1
integration:
runs-on: ${{ matrix.os }}
timeout-minutes: 120
needs:
- build-dev
strategy:
fail-fast: false
matrix:
os:
- ubuntu-20.04
- ubuntu-22.04
mode:
- ""
- rootless
- systemd
#- rootless-systemd FIXME: https://github.com/moby/moby/issues/44084
steps:
-
name: Checkout
uses: actions/checkout@v3
-
name: Set up runner
uses: ./.github/actions/setup-runner
-
name: Prepare
run: |
CACHE_DEV_SCOPE=dev
if [[ "${{ matrix.mode }}" == *"rootless"* ]]; then
echo "DOCKER_ROOTLESS=1" >> $GITHUB_ENV
fi
if [[ "${{ matrix.mode }}" == *"systemd"* ]]; then
echo "SYSTEMD=true" >> $GITHUB_ENV
CACHE_DEV_SCOPE="${CACHE_DEV_SCOPE}systemd"
fi
echo "CACHE_DEV_SCOPE=${CACHE_DEV_SCOPE}" >> $GITHUB_ENV
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
-
name: Build dev image
uses: docker/bake-action@v2
with:
targets: dev
set: |
dev.cache-from=type=gha,scope=${{ env.CACHE_DEV_SCOPE }}
-
name: Test
run: |
make -o build test-integration
env:
TEST_SKIP_INTEGRATION_CLI: 1
TESTCOVERAGE: 1
-
name: Prepare reports
if: always()
run: |
reportsPath="/tmp/reports/${{ matrix.os }}"
if [ -n "${{ matrix.mode }}" ]; then
reportsPath="$reportsPath-${{ matrix.mode }}"
fi
mkdir -p bundles $reportsPath
find bundles -path '*/root/*overlay2' -prune -o -type f \( -name '*-report.json' -o -name '*.log' -o -name '*.out' -o -name '*.prof' -o -name '*-report.xml' \) -print | xargs sudo tar -czf /tmp/reports.tar.gz
tar -xzf /tmp/reports.tar.gz -C $reportsPath
sudo chown -R $(id -u):$(id -g) $reportsPath
tree -nh $reportsPath
-
name: Send to Codecov
uses: codecov/codecov-action@v3
with:
directory: ./bundles/test-integration
env_vars: RUNNER_OS
flags: integration,${{ matrix.mode }}
-
name: Test daemon logs
if: always()
run: |
cat bundles/test-integration/docker.log
-
name: Upload reports
if: always()
uses: actions/upload-artifact@v3
with:
name: integration-reports
path: /tmp/reports/*
integration-report:
runs-on: ubuntu-20.04
if: always()
needs:
- integration
steps:
-
name: Set up Go
uses: actions/setup-go@v3
with:
go-version: ${{ env.GO_VERSION }}
-
name: Download reports
uses: actions/download-artifact@v3
with:
name: integration-reports
path: /tmp/reports
-
name: Install teststat
run: |
go install github.com/vearutop/teststat@${{ env.TESTSTAT_VERSION }}
-
name: Create summary
run: |
teststat -markdown $(find /tmp/reports -type f -name '*.json' -print0 | xargs -0) >> $GITHUB_STEP_SUMMARY
integration-cli-prepare:
runs-on: ubuntu-20.04
needs:
- validate-dco
outputs:
matrix: ${{ steps.tests.outputs.matrix }}
steps:
-
name: Checkout
uses: actions/checkout@v3
-
name: Set up Go
uses: actions/setup-go@v3
with:
go-version: ${{ env.GO_VERSION }}
-
name: Install gotestlist
run:
go install github.com/crazy-max/gotestlist/cmd/gotestlist@${{ env.GOTESTLIST_VERSION }}
-
name: Create matrix
id: tests
working-directory: ./integration-cli
run: |
# Distribute integration-cli tests for the matrix in integration-test job.
# Also prepend ./... to the matrix. This is a special case to run "Test integration" step exclusively.
matrix="$(gotestlist -d ${{ env.ITG_CLI_MATRIX_SIZE }} ./...)"
matrix="$(echo "$matrix" | jq -c '. |= ["./..."] + .')"
echo "matrix=$matrix" >> $GITHUB_OUTPUT
-
name: Show matrix
run: |
echo ${{ steps.tests.outputs.matrix }}
integration-cli:
runs-on: ubuntu-20.04
timeout-minutes: 120
needs:
- build-dev
- integration-cli-prepare
strategy:
fail-fast: false
matrix:
test: ${{ fromJson(needs.integration-cli-prepare.outputs.matrix) }}
steps:
-
name: Checkout
uses: actions/checkout@v3
-
name: Set up runner
uses: ./.github/actions/setup-runner
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
-
name: Build dev image
uses: docker/bake-action@v2
with:
targets: dev
set: |
dev.cache-from=type=gha,scope=dev
-
name: Test
run: |
make -o build test-integration
env:
TEST_SKIP_INTEGRATION: 1
TESTCOVERAGE: 1
TESTFLAGS: "-test.run (${{ matrix.test }})/"
-
name: Prepare reports
if: always()
run: |
reportsPath=/tmp/reports/$(echo -n "${{ matrix.test }}" | sha256sum | cut -d " " -f 1)
mkdir -p bundles $reportsPath
echo "${{ matrix.test }}" | tr -s '|' '\n' | tee -a "$reportsPath/tests.txt"
find bundles -path '*/root/*overlay2' -prune -o -type f \( -name '*-report.json' -o -name '*.log' -o -name '*.out' -o -name '*.prof' -o -name '*-report.xml' \) -print | xargs sudo tar -czf /tmp/reports.tar.gz
tar -xzf /tmp/reports.tar.gz -C $reportsPath
sudo chown -R $(id -u):$(id -g) $reportsPath
tree -nh $reportsPath
-
name: Send to Codecov
uses: codecov/codecov-action@v3
with:
directory: ./bundles/test-integration
env_vars: RUNNER_OS
flags: integration-cli
-
name: Test daemon logs
if: always()
run: |
cat bundles/test-integration/docker.log
-
name: Upload reports
if: always()
uses: actions/upload-artifact@v3
with:
name: integration-cli-reports
path: /tmp/reports/*
integration-cli-report:
runs-on: ubuntu-20.04
if: always()
needs:
- integration-cli
steps:
-
name: Set up Go
uses: actions/setup-go@v3
with:
go-version: ${{ env.GO_VERSION }}
-
name: Download reports
uses: actions/download-artifact@v3
with:
name: integration-cli-reports
path: /tmp/reports
-
name: Install teststat
run: |
go install github.com/vearutop/teststat@${{ env.TESTSTAT_VERSION }}
-
name: Create summary
run: |
teststat -markdown $(find /tmp/reports -type f -name '*.json' -print0 | xargs -0) >> $GITHUB_STEP_SUMMARY


@@ -1,22 +0,0 @@
name: windows-2019
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
on:
schedule:
- cron: '0 10 * * *'
workflow_dispatch:
jobs:
validate-dco:
uses: ./.github/workflows/.dco.yml
run:
needs:
- validate-dco
uses: ./.github/workflows/.windows.yml
with:
os: windows-2019
send_coverage: false


@@ -1,25 +0,0 @@
name: windows-2022
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
on:
workflow_dispatch:
push:
branches:
- 'master'
- '[0-9]+.[0-9]+'
pull_request:
jobs:
validate-dco:
uses: ./.github/workflows/.dco.yml
run:
needs:
- validate-dco
uses: ./.github/workflows/.windows.yml
with:
os: windows-2022
send_coverage: true
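The two workflow files in the hunks above are thin wrappers: each runs DCO validation and then delegates to the shared reusable workflow .github/workflows/.windows.yml, passing the runner OS and a coverage flag as inputs. Because workflow_dispatch is declared, such a wrapper can also be triggered by hand, e.g. via the GitHub CLI (a sketch; assumes the workflow exists on the target ref):

gh workflow run windows-2022 --ref master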

.gitignore

@@ -1,30 +1,23 @@
# If you want to ignore files created by your editor/tools, please consider a
# [global .gitignore](https://help.github.com/articles/ignoring-files).
*~
*.bak
# Docker project generated files to ignore
# if you want to ignore files created by your editor/tools,
# please consider a global .gitignore https://help.github.com/articles/ignoring-files
*.exe
*.exe~
*.gz
*.orig
test.main
.*.swp
.DS_Store
thumbs.db
# local repository customization
.envrc
# a .bashrc may be added to customize the build environment
.bashrc
.editorconfig
# top-level go.mod is not meant to be checked in
/go.mod
# build artifacts
.gopath/
.go-pkg-cache/
autogen/
bundles/
cli/winresources/*/*.syso
cli/winresources/*/winres.json
cmd/dockerd/dockerd
contrib/builder/rpm/*/changelog
# ci artifacts
*.exe
*.gz
vendor/pkg/
go-test-report.json
junit-report.xml
profile.out
test.main
junit-report.xml

.mailmap

@@ -1,22 +1,15 @@
# This file lists the canonical name and email of contributors, and is used to
# generate AUTHORS (in hack/generate-authors.sh).
#
# To find new duplicates, regenerate AUTHORS and scan for name duplicates, or
# run the following to find email duplicates:
# git log --format='%aE - %aN' | sort -uf | awk -v IGNORECASE=1 '$1 in a {print a[$1]; print}; {a[$1]=$0}'
#
# For an explanation of this file format, consult gitmailmap(5).
# Generate AUTHORS: hack/generate-authors.sh
# Tip for finding duplicates (besides scanning the output of AUTHORS for name
# duplicates that aren't also email duplicates): scan the output of:
# git log --format='%aE - %aN' | sort -uf
#
# For explanation on this file format: man git-shortlog
<21551195@zju.edu.cn> <hsinko@users.noreply.github.com>
<mr.wrfly@gmail.com> <wrfly@users.noreply.github.com>
Aaron L. Xu <liker.xu@foxmail.com>
Aaron L. Xu <liker.xu@foxmail.com> <likexu@harmonycloud.cn>
Aaron Lehmann <alehmann@netflix.com>
Aaron Lehmann <alehmann@netflix.com> <aaron.lehmann@docker.com>
Abhinandan Prativadi <aprativadi@gmail.com>
Abhinandan Prativadi <aprativadi@gmail.com> <abhi@docker.com>
Abhinandan Prativadi <aprativadi@gmail.com> abhi <user.email>
Abhishek Chanda <abhishek.becs@gmail.com>
Abhishek Chanda <abhishek.becs@gmail.com> <abhishek.chanda@emc.com>
Ada Mancini <ada@docker.com>
Abhinandan Prativadi <abhi@docker.com>
Adam Dobrawy <naczelnik@jawnosc.tk>
Adam Dobrawy <naczelnik@jawnosc.tk> <ad-m@users.noreply.github.com>
Adrien Gallouët <adrien@gallouet.fr> <angt@users.noreply.github.com>
@@ -29,37 +22,22 @@ Akihiro Matsushima <amatsusbit@gmail.com> <amatsus@users.noreply.github.com>
Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp> <suda.akihiro@lab.ntt.co.jp>
Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp> <suda.kyoto@gmail.com>
Akshay Moghe <akshay.moghe@gmail.com>
Albin Kerouanton <albinker@gmail.com>
Albin Kerouanton <albinker@gmail.com> <albin@akerouanton.name>
Aleksa Sarai <asarai@suse.de>
Aleksa Sarai <asarai@suse.de> <asarai@suse.com>
Aleksa Sarai <asarai@suse.de> <cyphar@cyphar.com>
Aleksandrs Fadins <aleks@s-ko.net>
Alessandro Boch <aboch@tetrationanalytics.com>
Alessandro Boch <aboch@tetrationanalytics.com> <aboch@docker.com>
Alessandro Boch <aboch@tetrationanalytics.com> <aboch@socketplane.io>
Alessandro Boch <aboch@tetrationanalytics.com> <aboch@users.noreply.github.com>
Alex Chan <alex@alexwlchan.net>
Alex Chan <alex@alexwlchan.net> <alex.chan@metaswitch.com>
Alex Chen <alexchenunix@gmail.com> <root@localhost.localdomain>
Alex Ellis <alexellis2@gmail.com>
Alex Goodman <wagoodman@gmail.com> <wagoodman@users.noreply.github.com>
Alexander Larsson <alexl@redhat.com> <alexander.larsson@gmail.com>
Alexander Morozov <lk4d4math@gmail.com>
Alexander Morozov <lk4d4math@gmail.com> <lk4d4@docker.com>
Alexander Morozov <lk4d4@docker.com>
Alexander Morozov <lk4d4@docker.com> <lk4d4math@gmail.com>
Alexandre Beslic <alexandre.beslic@gmail.com> <abronan@docker.com>
Alexandre González <agonzalezro@gmail.com>
Alexis Ries <ries.alexis@gmail.com>
Alexis Ries <ries.alexis@gmail.com> <alexis.ries.ext@orange.com>
Alexis Thomas <fr.alexisthomas@gmail.com>
Alicia Lauerman <alicia@eta.im> <allydevour@me.com>
Allen Sun <allensun.shl@alibaba-inc.com> <allen.sun@daocloud.io>
Allen Sun <allensun.shl@alibaba-inc.com> <shlallen1990@gmail.com>
Anca Iordache <anca.iordache@docker.com>
Andrea Denisse Gómez <crypto.andrea@protonmail.ch>
Andrew Kim <taeyeonkim90@gmail.com>
Andrew Kim <taeyeonkim90@gmail.com> <akim01@fortinet.com>
Andrew Weiss <andrew.weiss@docker.com> <andrew.weiss@microsoft.com>
Andrew Weiss <andrew.weiss@docker.com> <andrew.weiss@outlook.com>
Andrey Kolomentsev <andrey.kolomentsev@docker.com>
@@ -67,8 +45,6 @@ Andrey Kolomentsev <andrey.kolomentsev@docker.com> <andrey.kolomentsev@gmail.com
André Martins <aanm90@gmail.com> <martins@noironetworks.com>
Andy Rothfusz <github@developersupport.net> <github@metaliveblog.com>
Andy Smith <github@anarkystic.com>
Andy Zhang <andy.zhangtao@hotmail.com>
Andy Zhang <andy.zhangtao@hotmail.com> <ztao@tibco-support.com>
Ankush Agarwal <ankushagarwal11@gmail.com> <ankushagarwal@users.noreply.github.com>
Antonio Murdaca <antonio.murdaca@gmail.com> <amurdaca@redhat.com>
Antonio Murdaca <antonio.murdaca@gmail.com> <me@runcom.ninja>
@@ -78,16 +54,11 @@ Antonio Murdaca <antonio.murdaca@gmail.com> <runcom@users.noreply.github.com>
Anuj Bahuguna <anujbahuguna.dev@gmail.com>
Anuj Bahuguna <anujbahuguna.dev@gmail.com> <abahuguna@fiberlink.com>
Anusha Ragunathan <anusha.ragunathan@docker.com> <anusha@docker.com>
Anyu Wang <wanganyu@outlook.com>
Arko Dasgupta <arko@tetrate.io>
Arko Dasgupta <arko@tetrate.io> <arko.dasgupta@docker.com>
Arko Dasgupta <arko@tetrate.io> <arkodg@users.noreply.github.com>
Arnaud Porterie <icecrime@gmail.com>
Arnaud Porterie <icecrime@gmail.com> <arnaud.porterie@docker.com>
Arnaud Rebillout <arnaud.rebillout@collabora.com>
Arnaud Rebillout <arnaud.rebillout@collabora.com> <elboulangero@gmail.com>
Arko Dasgupta <arko.dasgupta@docker.com>
Arko Dasgupta <arko.dasgupta@docker.com> <arkodg@users.noreply.github.com>
Arnaud Porterie <arnaud.porterie@docker.com>
Arnaud Porterie <arnaud.porterie@docker.com> <icecrime@gmail.com>
Arthur Gautier <baloo@gandi.net> <superbaloo+registrations.github@superbaloo.net>
Artur Meyster <arthurfbi@yahoo.com>
Avi Miller <avi.miller@oracle.com> <avi.miller@gmail.com>
Ben Bonnefoy <frenchben@docker.com>
Ben Golub <ben.golub@dotcloud.com>
@@ -104,18 +75,13 @@ Bin Liu <liubin0329@gmail.com>
Bin Liu <liubin0329@gmail.com> <liubin0329@users.noreply.github.com>
Bingshen Wang <bingshen.wbs@alibaba-inc.com>
Boaz Shuster <ripcurld.github@gmail.com>
Bojun Zhu <bojun.zhu@foxmail.com>
Boqin Qin <bobbqqin@gmail.com>
Boshi Lian <farmer1992@gmail.com>
Brandon Philips <brandon.philips@coreos.com> <brandon@ifup.co>
Brandon Philips <brandon.philips@coreos.com> <brandon@ifup.org>
Brent Salisbury <brent.salisbury@docker.com> <brent@docker.com>
Brian Goff <cpuguy83@gmail.com>
Brian Goff <cpuguy83@gmail.com> <bgoff@cpuguy83-mbp.home>
Brian Goff <cpuguy83@gmail.com> <bgoff@cpuguy83-mbp.local>
Brian Goff <cpuguy83@gmail.com> <brian.goff@microsoft.com>
Brian Goff <cpuguy83@gmail.com> <cpuguy@hey.com>
Cameron Sparr <gh@sparr.email>
Carlos de Paula <me@carlosedp.com>
Chander Govindarajan <chandergovind@gmail.com>
Chao Wang <wangchao.fnst@cn.fujitsu.com> <chaowang@localhost.localdomain>
@@ -130,8 +96,6 @@ Chris Dias <cdias@microsoft.com>
Chris McKinnel <chris.mckinnel@tangentlabs.co.uk>
Chris Price <cprice@mirantis.com>
Chris Price <cprice@mirantis.com> <chris.price@docker.com>
Chris Telfer <ctelfer@docker.com>
Chris Telfer <ctelfer@docker.com> <ctelfer@users.noreply.github.com>
Christopher Biscardi <biscarch@sketcht.com>
Christopher Latham <sudosurootdev@gmail.com>
Christy Norman <christy@linux.vnet.ibm.com>
@@ -171,17 +135,12 @@ David M. Karr <davidmichaelkarr@gmail.com>
David Sheets <dsheets@docker.com> <sheets@alum.mit.edu>
David Sissitka <me@dsissitka.com>
David Williamson <david.williamson@docker.com> <davidwilliamson@users.noreply.github.com>
Derek Ch <denc716@gmail.com>
Derek McGowan <derek@mcg.dev>
Derek McGowan <derek@mcg.dev> <derek@mcgstyle.net>
Deshi Xiao <dxiao@redhat.com> <dsxiao@dataman-inc.com>
Deshi Xiao <dxiao@redhat.com> <xiaods@gmail.com>
Dhilip Kumars <dhilip.kumar.s@huawei.com>
Diego Siqueira <dieg0@live.com>
Diogo Monica <diogo@docker.com> <diogo.monica@gmail.com>
Dmitry Sharshakov <d3dx12.xx@gmail.com>
Dmitry Sharshakov <d3dx12.xx@gmail.com> <sh7dm@outlook.com>
Dmytro Iakovliev <dmytro.iakovliev@zodiacsystems.com>
Dominic Yin <yindongchao@inspur.com>
Dominik Honnef <dominik@honnef.co> <dominikh@fork-bomb.org>
Doug Davis <dug@us.ibm.com> <duglin@users.noreply.github.com>
@@ -191,9 +150,6 @@ Drew Erny <derny@mirantis.com> <drew.erny@docker.com>
Elan Ruusamäe <glen@pld-linux.org>
Elan Ruusamäe <glen@pld-linux.org> <glen@delfi.ee>
Elango Sivanandam <elango.siva@docker.com>
Elango Sivanandam <elango.siva@docker.com> <elango@docker.com>
Eli Uriegas <seemethere101@gmail.com>
Eli Uriegas <seemethere101@gmail.com> <eli.uriegas@docker.com>
Eric G. Noriega <enoriega@vizuri.com> <egnoriega@users.noreply.github.com>
Eric Hanchrow <ehanchrow@ine.com> <eric.hanchrow@gmail.com>
Eric Rosenberg <ehaydenr@gmail.com> <ehaydenr@users.noreply.github.com>
@@ -215,10 +171,8 @@ Feng Yan <fy2462@gmail.com>
Fengtu Wang <wangfengtu@huawei.com> <wangfengtu@huawei.com>
Francisco Carriedo <fcarriedo@gmail.com>
Frank Rosquin <frank.rosquin+github@gmail.com> <frank.rosquin@gmail.com>
Frank Yang <yyb196@gmail.com>
Frederick F. Kautz IV <fkautz@redhat.com> <fkautz@alumni.cmu.edu>
Fu JinLin <withlin@yeah.net>
Gabriel Goller <gabrielgoller123@gmail.com>
Gabriel Nicolas Avellaneda <avellaneda.gabriel@gmail.com>
Gaetan de Villele <gdevillele@gmail.com>
Gang Qiao <qiaohai8866@gmail.com> <1373319223@qq.com>
@@ -238,61 +192,42 @@ Guillaume J. Charmes <guillaume.charmes@docker.com> <guillaume.charmes@dotcloud.
Guillaume J. Charmes <guillaume.charmes@docker.com> <guillaume@charmes.net>
Guillaume J. Charmes <guillaume.charmes@docker.com> <guillaume@docker.com>
Guillaume J. Charmes <guillaume.charmes@docker.com> <guillaume@dotcloud.com>
Gunadhya S. <6939749+gunadhya@users.noreply.github.com>
Guoqiang QI <guoqiang.qi1@gmail.com>
Guri <odg0318@gmail.com>
Gurjeet Singh <gurjeet@singh.im> <singh.gurjeet@gmail.com>
Gustav Sinder <gustav.sinder@gmail.com>
Günther Jungbluth <gunther@gameslabs.net>
Hakan Özler <hakan.ozler@kodcu.com>
Hao Shu Wei <haoshuwei24@gmail.com>
Hao Shu Wei <haoshuwei24@gmail.com> <haoshuwei1989@163.com>
Hao Shu Wei <haoshuwei24@gmail.com> <haosw@cn.ibm.com>
Hao Shu Wei <haosw@cn.ibm.com>
Hao Shu Wei <haosw@cn.ibm.com> <haoshuwei1989@163.com>
Harald Albers <github@albersweb.de> <albers@users.noreply.github.com>
Harald Niesche <harald@niesche.de>
Harold Cooper <hrldcpr@gmail.com>
Harry Zhang <harryz@hyper.sh>
Harry Zhang <harryz@hyper.sh> <harryzhang@zju.edu.cn>
Harry Zhang <harryz@hyper.sh> <resouer@163.com>
Harry Zhang <harryz@hyper.sh> <resouer@gmail.com>
Harry Zhang <resouer@163.com>
Harshal Patil <harshal.patil@in.ibm.com> <harche@users.noreply.github.com>
He Simei <hesimei@zju.edu.cn>
Helen Xie <chenjg@harmonycloud.cn>
Hiroyuki Sasagawa <hs19870702@gmail.com>
Hollie Teal <hollie@docker.com>
Hollie Teal <hollie@docker.com> <hollie.teal@docker.com>
Hollie Teal <hollie@docker.com> <hollietealok@users.noreply.github.com>
hsinko <21551195@zju.edu.cn> <hsinko@users.noreply.github.com>
Hu Keping <hukeping@huawei.com>
Hui Kang <hkang.sunysb@gmail.com>
Hui Kang <hkang.sunysb@gmail.com> <kangh@us.ibm.com>
Huu Nguyen <huu@prismskylabs.com> <whoshuu@gmail.com>
Hyeongkyu Lee <hyeongkyu.lee@navercorp.com>
Hyzhou Zhy <hyzhou.zhy@alibaba-inc.com>
Hyzhou Zhy <hyzhou.zhy@alibaba-inc.com> <1187766782@qq.com>
Ian Campbell <ian.campbell@docker.com>
Ian Campbell <ian.campbell@docker.com> <ijc@docker.com>
Ilya Khlopotov <ilya.khlopotov@gmail.com>
Iskander Sharipov <quasilyte@gmail.com>
Ivan Babrou <ibobrik@gmail.com>
Ivan Markin <sw@nogoegst.net> <twim@riseup.net>
Jack Laxson <jackjrabbit@gmail.com>
Jacob Atzen <jacob@jacobatzen.dk> <jatzen@gmail.com>
Jacob Tomlinson <jacob@tom.linson.uk> <jacobtomlinson@users.noreply.github.com>
Jaivish Kothari <janonymous.codevulture@gmail.com>
Jake Moshenko <jake@devtable.com>
Jakub Drahos <jdrahos@pulsepoint.com>
Jakub Drahos <jdrahos@pulsepoint.com> <jack.drahos@gmail.com>
James Nesbitt <jnesbitt@mirantis.com>
James Nesbitt <jnesbitt@mirantis.com> <james.nesbitt@wunderkraut.com>
Jamie Hannaford <jamie@limetree.org> <jamie.hannaford@rackspace.com>
Jan Götte <jaseg@jaseg.net>
Jana Radhakrishnan <mrjana@docker.com>
Jana Radhakrishnan <mrjana@docker.com> <mrjana@socketplane.io>
Javier Bassi <javierbassi@gmail.com>
Javier Bassi <javierbassi@gmail.com> <CrimsonGlory@users.noreply.github.com>
Jay Lim <jay@imjching.com>
Jay Lim <jay@imjching.com> <imjching@hotmail.com>
Jean Rouge <rougej+github@gmail.com> <jer329@cornell.edu>
Jean-Baptiste Barth <jeanbaptiste.barth@gmail.com>
Jean-Baptiste Dalido <jeanbaptiste@appgratis.com>
@@ -322,12 +257,11 @@ Joffrey F <joffrey@docker.com> <joffrey@dotcloud.com>
Johan Euphrosine <proppy@google.com> <proppy@aminche.com>
John Harris <john@johnharris.io>
John Howard <github@lowenna.com>
John Howard <github@lowenna.com> <10522484+lowenna@users.noreply.github.com>
John Howard <github@lowenna.com> <jhoward@microsoft.com>
John Howard <github@lowenna.com> <jhoward@ntdev.microsoft.com>
John Howard <github@lowenna.com> <jhowardmsft@users.noreply.github.com>
John Howard <github@lowenna.com> <John.Howard@microsoft.com>
John Howard <github@lowenna.com> <john.howard@microsoft.com>
John Howard <github@lowenna.com> <john@lowenna.com>
John Stephens <johnstep@docker.com> <johnstep@users.noreply.github.com>
Jon Surrell <jon.surrell@gmail.com> <jon.surrell@automattic.com>
Jonathan Choy <jonathan.j.choy@gmail.com>
@@ -347,12 +281,9 @@ Josh Wilson <josh.wilson@fivestars.com> <jcwilson@users.noreply.github.com>
Joyce Jang <mail@joycejang.com>
Julien Bordellier <julienbordellier@gmail.com> <git@julienbordellier.com>
Julien Bordellier <julienbordellier@gmail.com> <me@julienbordellier.com>
Jun Du <dujun5@huawei.com>
Justin Cormack <justin.cormack@docker.com>
Justin Cormack <justin.cormack@docker.com> <justin.cormack@unikernel.com>
Justin Cormack <justin.cormack@docker.com> <justin@specialbusservice.com>
Justin Keller <85903732+jk-vb@users.noreply.github.com>
Justin Keller <85903732+jk-vb@users.noreply.github.com> <jkeller@vb-jkeller-mbp.local>
Justin Simonelis <justin.p.simonelis@gmail.com> <justin.simonelis@PTS-JSIMON2.toronto.exclamation.com>
Justin Terry <juterry@microsoft.com>
Jérôme Petazzoni <jerome.petazzoni@docker.com> <jerome.petazzoni@dotcloud.com>
@@ -369,7 +300,6 @@ Ken Cochrane <kencochrane@gmail.com> <KenCochrane@gmail.com>
Ken Herner <kherner@progress.com> <chosenken@gmail.com>
Ken Reese <krrgithub@gmail.com>
Kenfe-Mickaël Laventure <mickael.laventure@gmail.com>
Kevin Alvarez <crazy-max@users.noreply.github.com>
Kevin Feyrer <kevin.feyrer@btinternet.com> <kevinfeyrer@users.noreply.github.com>
Kevin Kern <kaiwentan@harmonycloud.cn>
Kevin Meredith <kevin.m.meredith@gmail.com>
@@ -380,17 +310,11 @@ Konrad Kleine <konrad.wilhelm.kleine@gmail.com> <kwk@users.noreply.github.com>
Konstantin Gribov <grossws@gmail.com>
Konstantin Pelykh <kpelykh@zettaset.com>
Kotaro Yoshimatsu <kotaro.yoshimatsu@gmail.com>
Kunal Kushwaha <kushwaha_kunal_v7@lab.ntt.co.jp>
Kunal Kushwaha <kushwaha_kunal_v7@lab.ntt.co.jp> <btkushuwahak@KUNAL-PC.swh.swh.nttdata.co.jp>
Kunal Kushwaha <kushwaha_kunal_v7@lab.ntt.co.jp> <kunal.kushwaha@gmail.com>
Kyle Squizzato <ksquizz@gmail.com>
Kyle Squizzato <ksquizz@gmail.com> <kyle.squizzato@docker.com>
Lajos Papp <lajos.papp@sequenceiq.com> <lalyos@yahoo.com>
Lei Gong <lgong@alauda.io>
Lei Jitang <leijitang@huawei.com>
Lei Jitang <leijitang@huawei.com> <leijitang@gmail.com>
Lei Jitang <leijitang@huawei.com> <leijitang@outlook.com>
Leiiwang <u2takey@gmail.com>
Liang Mingqiang <mqliang.zju@gmail.com>
Liang-Chi Hsieh <viirya@gmail.com>
Liao Qingwei <liaoqingwei@huawei.com>
@@ -407,11 +331,8 @@ Lyn <energylyn@zju.edu.cn>
Lynda O'Leary <lyndaoleary29@gmail.com>
Lynda O'Leary <lyndaoleary29@gmail.com> <lyndaoleary@hotmail.com>
Ma Müller <mueller-ma@users.noreply.github.com>
Madhan Raj Mookkandy <MadhanRaj.Mookkandy@microsoft.com>
Madhan Raj Mookkandy <MadhanRaj.Mookkandy@microsoft.com> <madhanm@corp.microsoft.com>
Madhan Raj Mookkandy <MadhanRaj.Mookkandy@microsoft.com> <madhanm@microsoft.com>
Madhu Venugopal <mavenugo@gmail.com> <madhu@docker.com>
Madhu Venugopal <mavenugo@gmail.com> <madhu@socketplane.io>
Madhu Venugopal <madhu@socketplane.io> <madhu@docker.com>
Mageee <fangpuyi@foxmail.com> <21521230.zju.edu.cn>
Mansi Nahar <mmn4185@rit.edu> <mansi.nahar@macbookpro-mansinahar.local>
Mansi Nahar <mmn4185@rit.edu> <mansinahar@users.noreply.github.com>
@@ -424,12 +345,10 @@ Markan Patel <mpatel678@gmail.com>
Markus Kortlang <hyp3rdino@googlemail.com> <markus.kortlang@lhsystems.com>
Martin Redmond <redmond.martin@gmail.com> <martin@tinychat.com>
Martin Redmond <redmond.martin@gmail.com> <xgithub@redmond5.com>
Maru Newby <mnewby@thesprawl.net>
Mary Anthony <mary.anthony@docker.com> <mary@docker.com>
Mary Anthony <mary.anthony@docker.com> <moxieandmore@gmail.com>
Mary Anthony <mary.anthony@docker.com> moxiegirl <mary@docker.com>
Masato Ohba <over.rye@gmail.com>
Mathieu Paturel <mathieu.paturel@gmail.com>
Matt Bentley <matt.bentley@docker.com> <mbentley@mbentley.net>
Matt Schurenko <matt.schurenko@gmail.com>
Matt Williams <mattyw@me.com>
@@ -441,23 +360,15 @@ Matthias Kühnle <git.nivoc@neverbox.com> <kuehnle@online.de>
Mauricio Garavaglia <mauricio@medallia.com> <mauriciogaravaglia@gmail.com>
Maxwell <csuhp007@gmail.com>
Maxwell <csuhp007@gmail.com> <csuhqg@foxmail.com>
Menghui Chen <menghui.chen@alibaba-inc.com>
Michael Beskin <mrbeskin@gmail.com>
Michael Crosby <crosbymichael@gmail.com>
Michael Crosby <crosbymichael@gmail.com> <crosby.michael@gmail.com>
Michael Crosby <crosbymichael@gmail.com> <michael@crosbymichael.com>
Michael Crosby <crosbymichael@gmail.com> <michael@docker.com>
Michael Crosby <crosbymichael@gmail.com> <michael@thepasture.io>
Michael Crosby <michael@docker.com> <crosby.michael@gmail.com>
Michael Crosby <michael@docker.com> <crosbymichael@gmail.com>
Michael Crosby <michael@docker.com> <michael@crosbymichael.com>
Michael Hudson-Doyle <michael.hudson@canonical.com> <michael.hudson@linaro.org>
Michael Huettermann <michael@huettermann.net>
Michael Käufl <docker@c.michael-kaeufl.de> <michael-k@users.noreply.github.com>
Michael Nussbaum <michael.nussbaum@getbraintree.com>
Michael Nussbaum <michael.nussbaum@getbraintree.com> <code@getbraintree.com>
Michael Spetsiotis <michael_spets@hotmail.com>
Michael Stapelberg <michael+gh@stapelberg.de>
Michael Stapelberg <michael+gh@stapelberg.de> <stapelberg@google.com>
Michal Kostrzewa <michal.kostrzewa@codilime.com>
Michal Kostrzewa <michal.kostrzewa@codilime.com> <kostrzewa.michal@o2.pl>
Michal Minář <miminar@redhat.com>
Michał Gryko <github@odkurzacz.org>
Michiel de Jong <michiel@unhosted.org>
@@ -465,19 +376,14 @@ Mickaël Fortunato <morsi.morsicus@gmail.com>
Miguel Angel Alvarez Cabrerizo <doncicuto@gmail.com> <30386061+doncicuto@users.noreply.github.com>
Miguel Angel Fernández <elmendalerenda@gmail.com>
Mihai Borobocea <MihaiBorob@gmail.com> <MihaiBorobocea@gmail.com>
Mikael Davranche <mikael.davranche@corp.ovh.com>
Mikael Davranche <mikael.davranche@corp.ovh.com> <mikael.davranche@corp.ovh.net>
Mike Casas <mkcsas0@gmail.com> <mikecasas@users.noreply.github.com>
Mike Goelzer <mike.goelzer@docker.com> <mgoelzer@docker.com>
Milind Chawre <milindchawre@gmail.com>
Misty Stanley-Jones <misty@docker.com> <misty@apache.org>
Mohammad Banikazemi <MBanikazemi@gmail.com>
Mohammad Banikazemi <MBanikazemi@gmail.com> <mb@us.ibm.com>
Mohit Soni <mosoni@ebay.com> <mohitsoni1989@gmail.com>
Moorthy RS <rsmoorthy@gmail.com> <rsmoorthy@users.noreply.github.com>
Moysés Borges <moysesb@gmail.com>
Moysés Borges <moysesb@gmail.com> <moyses.furtado@wplex.com.br>
mrfly <mr.wrfly@gmail.com> <wrfly@users.noreply.github.com>
Nace Oroz <orkica@gmail.com>
Natasha Jarus <linuxmercedes@gmail.com>
Nathan LeClaire <nathan.leclaire@docker.com> <nathan.leclaire@gmail.com>
@@ -494,8 +400,6 @@ Oh Jinkyun <tintypemolly@gmail.com> <tintypemolly@Ohui-MacBook-Pro.local>
Oliver Reason <oli@overrateddev.co>
Olli Janatuinen <olli.janatuinen@gmail.com>
Olli Janatuinen <olli.janatuinen@gmail.com> <olljanat@users.noreply.github.com>
Onur Filiz <onur.filiz@microsoft.com>
Onur Filiz <onur.filiz@microsoft.com> <ofiliz@users.noreply.github.com>
Ouyang Liduo <oyld0210@163.com>
Patrick Stapleton <github@gdi2290.com>
Paul Liljenberg <liljenberg.paul@gmail.com> <letters@paulnotcom.se>
@@ -506,20 +410,14 @@ Peter Dave Hello <hsu@peterdavehello.org> <PeterDaveHello@users.noreply.github.c
Peter Jaffe <pjaffe@nevo.com>
Peter Nagy <xificurC@gmail.com> <pnagy@gratex.com>
Peter Waller <p@pwaller.net> <peter@scraperwiki.com>
Phil Estes <estesp@gmail.com>
Phil Estes <estesp@gmail.com> <estesp@amazon.com>
Phil Estes <estesp@gmail.com> <estesp@linux.vnet.ibm.com>
Phil Estes <estesp@linux.vnet.ibm.com> <estesp@gmail.com>
Philip Alexander Etling <paetling@gmail.com>
Philipp Gillé <philipp.gille@gmail.com> <philippgille@users.noreply.github.com>
Prasanna Gautam <prasannagautam@gmail.com>
Puneet Pruthi <puneet.pruthi@oracle.com>
Puneet Pruthi <puneet.pruthi@oracle.com> <puneetpruthi@gmail.com>
Qiang Huang <h.huangqiang@huawei.com>
Qiang Huang <h.huangqiang@huawei.com> <qhuang@10.0.2.15>
Qin TianHuan <tianhuan@bingotree.cn>
Ray Tsang <rayt@google.com> <saturnism@users.noreply.github.com>
Renaud Gaubert <rgaubert@nvidia.com> <renaud.gaubert@gmail.com>
Richard Scothern <richard.scothern@gmail.com>
Robert Terhaar <rterhaar@atlanticdynamic.com> <robbyt@users.noreply.github.com>
Roberto G. Hashioka <roberto.hashioka@docker.com> <roberto_hashioka@hotmail.com>
Roberto Muñoz Fernández <robertomf@gmail.com> <roberto.munoz.fernandez.contractor@bbva.com>
@@ -527,28 +425,17 @@ Robin Thoni <robin@rthoni.com>
Roman Dudin <katrmr@gmail.com> <decadent@users.noreply.github.com>
Rong Zhang <rongzhang@alauda.io>
Rongxiang Song <tinysong1226@gmail.com>
Rony Weng <ronyweng@synology.com>
Ross Boucher <rboucher@gmail.com>
Rui Cao <ruicao@alauda.io>
Runshen Zhu <runshen.zhu@gmail.com>
Ryan Stelly <ryan.stelly@live.com>
Ryoga Saito <contact@proelbtn.com>
Ryoga Saito <contact@proelbtn.com> <proelbtn@users.noreply.github.com>
Sainath Grandhi <sainath.grandhi@intel.com>
Sainath Grandhi <sainath.grandhi@intel.com> <saiallforums@gmail.com>
Sakeven Jiang <jc5930@sina.cn>
Samuel Karp <me@samuelkarp.com> <skarp@amazon.com>
Sandeep Bansal <sabansal@microsoft.com>
Sandeep Bansal <sabansal@microsoft.com> <msabansal@microsoft.com>
Santhosh Manohar <santhosh@docker.com>
Sargun Dhillon <sargun@netflix.com> <sargun@sargun.me>
Satoshi Tagomori <tagomoris@gmail.com>
Sean Lee <seanlee@tw.ibm.com> <scaleoutsean@users.noreply.github.com>
Sebastiaan van Stijn <github@gone.nl>
Sebastiaan van Stijn <github@gone.nl> <moby@example.com>
Sebastiaan van Stijn <github@gone.nl> <sebastiaan@ws-key-sebas3.dpi1.dpi>
Sebastiaan van Stijn <github@gone.nl> <thaJeztah@users.noreply.github.com>
Seongyeol Lim <seongyeol37@gmail.com>
Shaun Kaasten <shaunk@gmail.com>
Shawn Landden <shawn@churchofgit.com> <shawnlandden@gmail.com>
Shengbo Song <thomassong@tencent.com>
@@ -557,10 +444,10 @@ Shih-Yuan Lee <fourdollars@gmail.com>
Shishir Mahajan <shishir.mahajan@redhat.com> <smahajan@redhat.com>
Shu-Wai Chow <shu-wai.chow@seattlechildrens.org>
Shukui Yang <yangshukui@huawei.com>
Shuwei Hao <haosw@cn.ibm.com>
Shuwei Hao <haosw@cn.ibm.com> <haoshuwei24@gmail.com>
Sidhartha Mani <sidharthamn@gmail.com>
Sjoerd Langkemper <sjoerd-github@linuxonly.nl> <sjoerd@byte.nl>
Smark Meng <smark@freecoop.net>
Smark Meng <smark@freecoop.net> <smarkm@users.noreply.github.com>
Solomon Hykes <solomon@docker.com> <s@docker.com>
Solomon Hykes <solomon@docker.com> <solomon.hykes@dotcloud.com>
Solomon Hykes <solomon@docker.com> <solomon@dotcloud.com>
@@ -591,48 +478,29 @@ Sven Dowideit <SvenDowideit@home.org.au> <SvenDowideit@fosiki.com>
Sven Dowideit <SvenDowideit@home.org.au> <SvenDowideit@home.org.au>
Sven Dowideit <SvenDowideit@home.org.au> <SvenDowideit@users.noreply.github.com>
Sven Dowideit <SvenDowideit@home.org.au> <¨SvenDowideit@home.org.au¨>
Sylvain Baubeau <lebauce@gmail.com>
Sylvain Baubeau <lebauce@gmail.com> <sbaubeau@redhat.com>
Sylvain Bellemare <sylvain@ascribe.io>
Sylvain Bellemare <sylvain@ascribe.io> <sylvain.bellemare@ezeep.com>
Takuto Sato <tockn.jp@gmail.com>
Tangi Colin <tangicolin@gmail.com>
Tejesh Mehta <tejesh.mehta@gmail.com> <tj@init.me>
Terry Chu <zue.hterry@gmail.com>
Terry Chu <zue.hterry@gmail.com> <jubosh.tw@gmail.com>
Thatcher Peskens <thatcher@docker.com>
Thatcher Peskens <thatcher@docker.com> <thatcher@dotcloud.com>
Thatcher Peskens <thatcher@docker.com> <thatcher@gmx.net>
Thiago Alves Silva <thiago.alves@aurea.com>
Thiago Alves Silva <thiago.alves@aurea.com> <thiagoalves@users.noreply.github.com>
Thomas Gazagnaire <thomas@gazagnaire.org> <thomas@gazagnaire.com>
Thomas Ledos <thomas.ledos92@gmail.com>
Thomas Léveil <thomasleveil@gmail.com>
Thomas Léveil <thomasleveil@gmail.com> <thomasleveil@users.noreply.github.com>
Tibor Vass <teabee89@gmail.com> <tibor@docker.com>
Tibor Vass <teabee89@gmail.com> <tiborvass@users.noreply.github.com>
Till Claassen <pixelistik@users.noreply.github.com>
Tim Bart <tim@fewagainstmany.com>
Tim Bosse <taim@bosboot.org> <maztaim@users.noreply.github.com>
Tim Potter <tpot@hpe.com>
Tim Potter <tpot@hpe.com> <tpot@Tims-MacBook-Pro.local>
Tim Ruffles <oi@truffles.me.uk> <timruffles@googlemail.com>
Tim Terhorst <mynamewastaken+git@gmail.com>
Tim Wagner <tim.wagner@freenet.ag>
Tim Wagner <tim.wagner@freenet.ag> <33624860+herrwagner@users.noreply.github.com>
Tim Zju <21651152@zju.edu.cn>
Timothy Hobbs <timothyhobbs@seznam.cz>
Toli Kuznets <toli@docker.com>
Tom Barlow <tomwbarlow@gmail.com>
Tom Denham <tom@tomdee.co.uk>
Tom Denham <tom@tomdee.co.uk> <tom.denham@metaswitch.com>
Tom Sweeney <tsweeney@redhat.com>
Tom Wilkie <tom.wilkie@gmail.com>
Tom Wilkie <tom.wilkie@gmail.com> <tom@weave.works>
Tõnis Tiigi <tonistiigi@gmail.com>
Trace Andreason <tandreason@gmail.com>
Trapier Marshall <tmarshall@mirantis.com>
Trapier Marshall <tmarshall@mirantis.com> <trapier.marshall@docker.com>
Trishna Guha <trishnaguha17@gmail.com>
Tristan Carel <tristan@cogniteev.com>
Tristan Carel <tristan@cogniteev.com> <tristan.carel@gmail.com>
@@ -646,15 +514,12 @@ Victor Vieux <victor.vieux@docker.com> <victor@docker.com>
Victor Vieux <victor.vieux@docker.com> <victor@dotcloud.com>
Victor Vieux <victor.vieux@docker.com> <victorvieux@gmail.com>
Victor Vieux <victor.vieux@docker.com> <vieux@docker.com>
Vikas Choudhary <choudharyvikas16@gmail.com>
Vikram bir Singh <vsingh@mirantis.com>
Vikram bir Singh <vsingh@mirantis.com> <vikrambir.singh@docker.com>
Viktor Vojnovski <viktor.vojnovski@amadeus.com> <vojnovski@gmail.com>
Vincent Batts <vbatts@redhat.com> <vbatts@hashbangbash.com>
Vincent Bernat <vincent@bernat.ch>
Vincent Bernat <vincent@bernat.ch> <bernat@luffy.cx>
Vincent Bernat <vincent@bernat.ch> <Vincent.Bernat@exoscale.ch>
Vincent Bernat <vincent@bernat.ch> <vincent@bernat.im>
Vincent Bernat <Vincent.Bernat@exoscale.ch> <bernat@luffy.cx>
Vincent Bernat <Vincent.Bernat@exoscale.ch> <vincent@bernat.im>
Vincent Boulineau <vincent.boulineau@datadoghq.com>
Vincent Demeester <vincent.demeester@docker.com> <vincent+github@demeester.fr>
Vincent Demeester <vincent.demeester@docker.com> <vincent@demeester.fr>
@@ -663,8 +528,6 @@ Vishnu Kannan <vishnuk@google.com>
Vitaly Ostrosablin <vostrosablin@virtuozzo.com>
Vitaly Ostrosablin <vostrosablin@virtuozzo.com> <tmp6154@yandex.ru>
Vladimir Rutsky <altsysrq@gmail.com> <iamironbob@gmail.com>
Vladislav Kolesnikov <vkolesnikov@beget.ru>
Vladislav Kolesnikov <vkolesnikov@beget.ru> <prime@vladqa.ru>
Walter Stanish <walter@pratyeka.org>
Wang Chao <chao.wang@ucloud.cn>
Wang Chao <chao.wang@ucloud.cn> <wcwxyz@gmail.com>
@@ -676,24 +539,17 @@ Wang Yuexiao <wang.yuexiao@zte.com.cn>
Wayne Chang <wayne@neverfear.org>
Wayne Song <wsong@docker.com> <wsong@users.noreply.github.com>
Wei Wu <wuwei4455@gmail.com> cizixs <cizixs@163.com>
Wei-Ting Kuo <waitingkuo0527@gmail.com>
Wen Cheng Ma <wenchma@cn.ibm.com>
Wenjun Tang <tangwj2@lenovo.com> <dodia@163.com>
Wewang Xiaorenfine <wang.xiaoren@zte.com.cn>
Will Weaver <monkey@buildingbananas.com>
Wing-Kam Wong <wingkwong.code@gmail.com>
WuLonghui <wlh6666@qq.com>
Xian Chaobo <xianchaobo@huawei.com>
Xian Chaobo <xianchaobo@huawei.com> <jimmyxian2004@yahoo.com.cn>
Xianglin Gao <xlgao@zju.edu.cn>
Xianjie <guxianjie@gmail.com>
Xianjie <guxianjie@gmail.com> <datastream@datastream-laptop.local>
Xianlu Bird <xianlubird@gmail.com>
Xiao YongBiao <xyb4638@gmail.com>
Xiao Zhang <xiaozhang0210@hotmail.com>
Xiaodong Liu <liuxiaodong@loongson.cn>
Xiaodong Zhang <a4012017@sina.com>
Xiaohua Ding <xiao_hua_ding@sina.cn>
Xiaoyu Zhang <zhang.xiaoyu33@zte.com.cn>
Xuecong Liao <satorulogic@gmail.com>
Yamasaki Masahide <masahide.y@gmail.com>
@@ -711,15 +567,9 @@ Yu Changchun <yuchangchun1@huawei.com>
Yu Chengxia <yuchengxia@huawei.com>
Yu Peng <yu.peng36@zte.com.cn>
Yu Peng <yu.peng36@zte.com.cn> <yupeng36@zte.com.cn>
Yuan Sun <sunyuan3@huawei.com>
Yue Zhang <zy675793960@yeah.net>
Yufei Xiong <yufei.xiong@qq.com>
Zach Gershman <zachgersh@gmail.com>
Zach Gershman <zachgersh@gmail.com> <zachgersh@users.noreply.github.com>
Zachary Jaffee <zjaffee@us.ibm.com> <zij@case.edu>
Zachary Jaffee <zjaffee@us.ibm.com> <zjaffee@apache.org>
Zhang Kun <zkazure@gmail.com>
Zhang Wentao <zhangwentao234@huawei.com>
ZhangHang <stevezhang2014@gmail.com>
Zhenkun Bi <bi.zhenkun@zte.com.cn>
Zhou Hao <zhouhao@cn.fujitsu.com>
@@ -729,5 +579,3 @@ Ziheng Liu <lzhfromustc@gmail.com>
Zou Yu <zouyu7@huawei.com>
Zuhayr Elahi <zuhayr.elahi@docker.com>
Zuhayr Elahi <zuhayr.elahi@docker.com> <elahi.zuhayr@gmail.com>
정재영 <jjy600901@gmail.com>
정재영 <jjy600901@gmail.com> <43400316+J-jaeyoung@users.noreply.github.com>
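To see how these entries take effect, git's standard check-mailmap command resolves a commit identity to its canonical form; a quick sketch using the Brian Goff entry listed above:

git check-mailmap 'Brian Goff <cpuguy@hey.com>'
# -> Brian Goff <cpuguy83@gmail.com>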

AUTHORS (diff suppressed: file too large)

CHANGELOG.md (diff suppressed: file too large)

@@ -101,8 +101,9 @@ the contributors guide.
<td>
<p>
Register for the Docker Community Slack at
<a href="https://dockr.ly/slack" target="_blank">https://dockr.ly/slack</a>.
<a href="https://community.docker.com/registrations/groups/4316" target="_blank">https://community.docker.com/registrations/groups/4316</a>.
We use the #moby-project channel for general discussion, and there are separate channels for other Moby projects such as #containerd.
Archives are available at <a href="https://dockercommunity.slackarchive.io/" target="_blank">https://dockercommunity.slackarchive.io/</a>.
</p>
</td>
</tr>


@@ -1,12 +1,14 @@
# syntax=docker/dockerfile:1
# syntax=docker/dockerfile:1.2
ARG CROSS="false"
ARG SYSTEMD="false"
ARG GO_VERSION=1.19.3
# IMPORTANT: When updating this please note that stdlib archive/tar pkg is vendored
ARG GO_VERSION=1.13.15
ARG DEBIAN_FRONTEND=noninteractive
ARG VPNKIT_VERSION=0.5.0
ARG DOCKER_BUILDTAGS="apparmor seccomp"
ARG BASE_DEBIAN_DISTRO="bullseye"
ARG BASE_DEBIAN_DISTRO="buster"
ARG GOLANG_IMAGE="golang:${GO_VERSION}-${BASE_DEBIAN_DISTRO}"
FROM ${GOLANG_IMAGE} AS base
@@ -18,40 +20,48 @@ ENV GO111MODULE=off
FROM base AS criu
ARG DEBIAN_FRONTEND
ADD --chmod=0644 https://download.opensuse.org/repositories/devel:/tools:/criu/Debian_11/Release.key /etc/apt/trusted.gpg.d/criu.gpg.asc
# Install dependency packages specific to criu
RUN --mount=type=cache,sharing=locked,id=moby-criu-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-criu-aptcache,target=/var/cache/apt \
echo 'deb https://download.opensuse.org/repositories/devel:/tools:/criu/Debian_11/ /' > /etc/apt/sources.list.d/criu.list \
&& apt-get update \
&& apt-get install -y --no-install-recommends criu \
&& install -D /usr/sbin/criu /build/criu
apt-get update && apt-get install -y --no-install-recommends \
libcap-dev \
libnet-dev \
libnl-3-dev \
libprotobuf-c-dev \
libprotobuf-dev \
protobuf-c-compiler \
protobuf-compiler \
python-protobuf
# Install CRIU for checkpoint/restore support
ARG CRIU_VERSION=3.14
RUN mkdir -p /usr/src/criu \
&& curl -sSL https://github.com/checkpoint-restore/criu/archive/v${CRIU_VERSION}.tar.gz | tar -C /usr/src/criu/ -xz --strip-components=1 \
&& cd /usr/src/criu \
&& make \
&& make PREFIX=/build/ install-criu
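# (Editor's sketch, not part of the original file: either CRIU variant
# above can be sanity-checked in the resulting image with standard criu
# subcommands, e.g. `criu --version` to confirm the install and
# `criu check` to probe the running kernel for checkpoint/restore support.)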
FROM base AS registry
WORKDIR /go/src/github.com/docker/distribution
# REGISTRY_VERSION specifies the version of the registry to build and install
# from the https://github.com/docker/distribution repository. This version of
# the registry is used to test both schema 1 and schema 2 manifests. Generally,
# the version specified here should match a current release.
ARG REGISTRY_VERSION=v2.3.0
# REGISTRY_VERSION_SCHEMA1 specifies the version of the registry to build and
# install from the https://github.com/docker/distribution repository. This is
# an older (pre v2.3.0) version of the registry that only supports schema1
# manifests. This version of the registry is not working on arm64, so installation
# is skipped on that architecture.
ARG REGISTRY_VERSION_SCHEMA1=v2.1.0
# Install two versions of the registry. The first one is a recent version that
# supports both schema 1 and 2 manifests. The second one is an older version that
# only supports schema1 manifests. This allows integration-cli tests to cover
# push/pull with both schema1 and schema2 manifests.
# The old version of the registry is not working on arm64, so installation is
# skipped on that architecture.
ENV REGISTRY_COMMIT_SCHEMA1 ec87e9b6971d831f0eff752ddb54fb64693e51cd
ENV REGISTRY_COMMIT 47a064d4195a9b56133891bbb13620c3ac83a827
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=cache,target=/go/pkg/mod \
--mount=type=tmpfs,target=/go/src/ \
set -x \
&& git clone https://github.com/docker/distribution.git . \
&& git checkout -q "$REGISTRY_VERSION" \
&& git checkout -q "$REGISTRY_COMMIT" \
&& GOPATH="/go/src/github.com/docker/distribution/Godeps/_workspace:$GOPATH" \
go build -buildmode=pie -o /build/registry-v2 github.com/docker/distribution/cmd/registry \
&& case $(dpkg --print-architecture) in \
amd64|armhf|ppc64*|s390x) \
git checkout -q "$REGISTRY_VERSION_SCHEMA1"; \
git checkout -q "$REGISTRY_COMMIT_SCHEMA1"; \
GOPATH="/go/src/github.com/docker/distribution/Godeps/_workspace:$GOPATH"; \
go build -buildmode=pie -o /build/registry-v2-schema1 github.com/docker/distribution/cmd/registry; \
;; \
@@ -59,13 +69,10 @@ RUN --mount=type=cache,target=/root/.cache/go-build \
FROM base AS swagger
WORKDIR $GOPATH/src/github.com/go-swagger/go-swagger
# GO_SWAGGER_COMMIT specifies the version of the go-swagger binary to build and
# install. Go-swagger is used in CI for validating swagger.yaml in hack/validate/swagger-gen
#
# Currently uses a fork from https://github.com/kolyshkin/go-swagger/tree/golang-1.13-fix,
# Install go-swagger for validating swagger.yaml
# This is https://github.com/kolyshkin/go-swagger/tree/golang-1.13-fix
# TODO: move to under moby/ or fix upstream go-swagger to work for us.
ENV GO_SWAGGER_COMMIT c56166c036004ba7a3a321e5951ba472b9ae298c
ENV GO_SWAGGER_COMMIT 5e6cb12f7c82ce78e45ba71fa6cb1928094db050
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=cache,target=/go/pkg/mod \
--mount=type=tmpfs,target=/go/src/ \
@@ -74,9 +81,6 @@ RUN --mount=type=cache,target=/root/.cache/go-build \
&& git checkout -q "$GO_SWAGGER_COMMIT" \
&& go build -o /build/swagger github.com/go-swagger/go-swagger/cmd/swagger
# frozen-images
# See also frozenImages in "testutil/environment/protect.go" (which needs to
# be updated when adding images to this list)
FROM debian:${BASE_DEBIAN_DISTRO} AS frozen-images
ARG DEBIAN_FRONTEND
RUN --mount=type=cache,sharing=locked,id=moby-frozen-images-aptlib,target=/var/lib/apt \
@@ -88,13 +92,14 @@ RUN --mount=type=cache,sharing=locked,id=moby-frozen-images-aptlib,target=/var/l
# Get useful and necessary Hub images so we can "docker load" locally instead of pulling
COPY contrib/download-frozen-image-v2.sh /
ARG TARGETARCH
ARG TARGETVARIANT
RUN /download-frozen-image-v2.sh /build \
buildpack-deps:buster@sha256:d0abb4b1e5c664828b93e8b6ac84d10bce45ee469999bef88304be04a2709491 \
busybox:latest@sha256:95cf004f559831017cdf4628aaf1bb30133677be8702a8c5f2994629f637a209 \
busybox:glibc@sha256:1f81263701cddf6402afe9f33fca0266d9fff379e59b1748f33d3072da71ee85 \
debian:bullseye-slim@sha256:dacf278785a4daa9de07596ec739dbc07131e189942772210709c5c0777e8437 \
debian:bullseye@sha256:7190e972ab16aefea4d758ebe42a293f4e5c5be63595f4d03a5b9bf6839a4344 \
hello-world:latest@sha256:d58e752213a51785838f9eed2b7a498ffa1cb3aa7f946dda11af39286c3db9a9 \
arm32v7/hello-world:latest@sha256:50b8560ad574c779908da71f7ce370c0a2471c098d44d1c8f6b513c5a55eeeb1
# See also frozenImages in "testutil/environment/protect.go" (which needs to be updated when adding images to this list)
FROM base AS cross-false
@@ -103,21 +108,18 @@ ARG DEBIAN_FRONTEND
RUN dpkg --add-architecture arm64
RUN dpkg --add-architecture armel
RUN dpkg --add-architecture armhf
RUN dpkg --add-architecture ppc64el
RUN dpkg --add-architecture s390x
RUN --mount=type=cache,sharing=locked,id=moby-cross-true-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-cross-true-aptcache,target=/var/cache/apt \
apt-get update && apt-get install -y --no-install-recommends \
crossbuild-essential-arm64 \
crossbuild-essential-armel \
crossbuild-essential-armhf \
crossbuild-essential-ppc64el \
crossbuild-essential-s390x
crossbuild-essential-armhf
FROM cross-${CROSS} AS dev-base
FROM cross-${CROSS} as dev-base
FROM dev-base AS runtime-dev-cross-false
ARG DEBIAN_FRONTEND
RUN echo 'deb http://deb.debian.org/debian buster-backports main' > /etc/apt/sources.list.d/backports.list
RUN --mount=type=cache,sharing=locked,id=moby-cross-false-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-cross-false-aptcache,target=/var/cache/apt \
apt-get update && apt-get install -y --no-install-recommends \
@@ -126,71 +128,42 @@ RUN --mount=type=cache,sharing=locked,id=moby-cross-false-aptlib,target=/var/lib
libapparmor-dev \
libbtrfs-dev \
libdevmapper-dev \
libseccomp-dev \
libseccomp-dev/buster-backports \
libsystemd-dev \
libudev-dev
FROM --platform=linux/amd64 runtime-dev-cross-false AS runtime-dev-cross-true
ARG DEBIAN_FRONTEND
# These crossbuild packages rely on gcc-<arch>, which fails to install
# on non-amd64 systems, so other architectures cannot crossbuild amd64.
# on non-amd64 systems.
# Additionally, the crossbuild-amd64 is currently only on debian:buster, so
# other architectures cannot crossbuild amd64.
RUN echo 'deb http://deb.debian.org/debian buster-backports main' > /etc/apt/sources.list.d/backports.list
RUN --mount=type=cache,sharing=locked,id=moby-cross-true-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-cross-true-aptcache,target=/var/cache/apt \
apt-get update && apt-get install -y --no-install-recommends \
libapparmor-dev:arm64 \
libapparmor-dev:armel \
libapparmor-dev:armhf \
libapparmor-dev:ppc64el \
libapparmor-dev:s390x \
libseccomp-dev:arm64 \
libseccomp-dev:armel \
libseccomp-dev:armhf \
libseccomp-dev:ppc64el \
libseccomp-dev:s390x
libseccomp-dev:arm64/buster-backports \
libseccomp-dev:armel/buster-backports \
libseccomp-dev:armhf/buster-backports
FROM runtime-dev-cross-${CROSS} AS runtime-dev
FROM base AS delve
# DELVE_VERSION specifies the version of the Delve debugger binary
# from the https://github.com/go-delve/delve repository.
# It can be used to run Docker so that a debugger can be
# attached to it.
#
ARG DELVE_VERSION=v1.8.1
# Delve on Linux is currently only supported on amd64 and arm64;
# https://github.com/go-delve/delve/blob/v1.8.1/pkg/proc/native/support_sentinel.go#L1-L6
FROM base AS tomlv
ARG TOMLV_COMMIT
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=cache,target=/go/pkg/mod \
case $(dpkg --print-architecture) in \
amd64|arm64) \
GOBIN=/build/ GO111MODULE=on go install "github.com/go-delve/delve/cmd/dlv@${DELVE_VERSION}" \
&& /build/dlv --help \
;; \
*) \
mkdir -p /build/ \
;; \
esac
--mount=type=bind,src=hack/dockerfile/install,target=/tmp/install \
PREFIX=/build /tmp/install/install.sh tomlv
FROM base AS tomll
# GOTOML_VERSION specifies the version of the tomll binary to build and install
# from the https://github.com/pelletier/go-toml repository. This binary is used
# in CI in the hack/validate/toml script.
#
# When updating this version, consider updating the github.com/pelletier/go-toml
# dependency in vendor.mod accordingly.
ARG GOTOML_VERSION=v1.8.1
FROM base AS vndr
ARG VNDR_COMMIT
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=cache,target=/go/pkg/mod \
GOBIN=/build/ GO111MODULE=on go install "github.com/pelletier/go-toml/cmd/tomll@${GOTOML_VERSION}" \
&& /build/tomll --help
FROM base AS gowinres
# GOWINRES_VERSION defines go-winres tool version
ARG GOWINRES_VERSION=v0.3.0
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=cache,target=/go/pkg/mod \
GOBIN=/build/ GO111MODULE=on go install "github.com/tc-hib/go-winres@${GOWINRES_VERSION}" \
&& /build/go-winres --help
--mount=type=bind,src=hack/dockerfile/install,target=/tmp/install \
PREFIX=/build /tmp/install/install.sh vndr
FROM dev-base AS containerd
ARG DEBIAN_FRONTEND
@@ -198,112 +171,85 @@ RUN --mount=type=cache,sharing=locked,id=moby-containerd-aptlib,target=/var/lib/
--mount=type=cache,sharing=locked,id=moby-containerd-aptcache,target=/var/cache/apt \
apt-get update && apt-get install -y --no-install-recommends \
libbtrfs-dev
ARG CONTAINERD_VERSION
COPY /hack/dockerfile/install/install.sh /hack/dockerfile/install/containerd.installer /
ARG CONTAINERD_COMMIT
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=cache,target=/go/pkg/mod \
PREFIX=/build /install.sh containerd
--mount=type=bind,src=hack/dockerfile/install,target=/tmp/install \
PREFIX=/build /tmp/install/install.sh containerd
FROM dev-base AS proxy
ARG LIBNETWORK_COMMIT
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=cache,target=/go/pkg/mod \
--mount=type=bind,src=hack/dockerfile/install,target=/tmp/install \
PREFIX=/build /tmp/install/install.sh proxy
FROM base AS golangci_lint
# FIXME: when updating golangci-lint, remove the temporary "nolint" in https://github.com/moby/moby/blob/7860686a8df15eea9def9e6189c6f9eca031bb6f/libnetwork/networkdb/cluster.go#L246
ARG GOLANGCI_LINT_VERSION=v1.49.0
ARG GOLANGCI_LINT_COMMIT
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=cache,target=/go/pkg/mod \
GOBIN=/build/ GO111MODULE=on go install "github.com/golangci/golangci-lint/cmd/golangci-lint@${GOLANGCI_LINT_VERSION}" \
&& /build/golangci-lint --version
--mount=type=bind,src=hack/dockerfile/install,target=/tmp/install \
PREFIX=/build /tmp/install/install.sh golangci_lint
FROM base AS gotestsum
ARG GOTESTSUM_VERSION=v1.8.2
ARG GOTESTSUM_COMMIT
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=cache,target=/go/pkg/mod \
GOBIN=/build/ GO111MODULE=on go install "gotest.tools/gotestsum@${GOTESTSUM_VERSION}" \
&& /build/gotestsum --version
--mount=type=bind,src=hack/dockerfile/install,target=/tmp/install \
PREFIX=/build /tmp/install/install.sh gotestsum
FROM base AS shfmt
ARG SHFMT_VERSION=v3.0.2
ARG SHFMT_COMMIT
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=cache,target=/go/pkg/mod \
GOBIN=/build/ GO111MODULE=on go install "mvdan.cc/sh/v3/cmd/shfmt@${SHFMT_VERSION}" \
&& /build/shfmt --version
--mount=type=bind,src=hack/dockerfile/install,target=/tmp/install \
PREFIX=/build /tmp/install/install.sh shfmt
FROM dev-base AS dockercli
ARG DOCKERCLI_CHANNEL
ARG DOCKERCLI_VERSION
COPY /hack/dockerfile/install/install.sh /hack/dockerfile/install/dockercli.installer /
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=cache,target=/go/pkg/mod \
PREFIX=/build /install.sh dockercli
--mount=type=bind,src=hack/dockerfile/install,target=/tmp/install \
PREFIX=/build /tmp/install/install.sh dockercli
FROM runtime-dev AS runc
ARG RUNC_VERSION
ARG RUNC_COMMIT
ARG RUNC_BUILDTAGS
COPY /hack/dockerfile/install/install.sh /hack/dockerfile/install/runc.installer /
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=cache,target=/go/pkg/mod \
PREFIX=/build /install.sh runc
--mount=type=bind,src=hack/dockerfile/install,target=/tmp/install \
PREFIX=/build /tmp/install/install.sh runc
FROM dev-base AS tini
ARG DEBIAN_FRONTEND
ARG TINI_VERSION
ARG TINI_COMMIT
RUN --mount=type=cache,sharing=locked,id=moby-tini-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-tini-aptcache,target=/var/cache/apt \
apt-get update && apt-get install -y --no-install-recommends \
cmake \
vim-common
COPY /hack/dockerfile/install/install.sh /hack/dockerfile/install/tini.installer /
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=cache,target=/go/pkg/mod \
PREFIX=/build /install.sh tini
--mount=type=bind,src=hack/dockerfile/install,target=/tmp/install \
PREFIX=/build /tmp/install/install.sh tini
FROM dev-base AS rootlesskit
ARG ROOTLESSKIT_VERSION
ARG PREFIX=/build
COPY /hack/dockerfile/install/install.sh /hack/dockerfile/install/rootlesskit.installer /
ARG ROOTLESSKIT_COMMIT
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=cache,target=/go/pkg/mod \
/install.sh rootlesskit \
&& "${PREFIX}"/rootlesskit --version \
&& "${PREFIX}"/rootlesskit-docker-proxy --help
--mount=type=bind,src=hack/dockerfile/install,target=/tmp/install \
PREFIX=/build /tmp/install/install.sh rootlesskit
COPY ./contrib/dockerd-rootless.sh /build
COPY ./contrib/dockerd-rootless-setuptool.sh /build
FROM base AS crun
ARG CRUN_VERSION=1.4.5
RUN --mount=type=cache,sharing=locked,id=moby-crun-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-crun-aptcache,target=/var/cache/apt \
apt-get update && apt-get install -y --no-install-recommends \
autoconf \
automake \
build-essential \
libcap-dev \
libprotobuf-c-dev \
libseccomp-dev \
libsystemd-dev \
libtool \
libudev-dev \
libyajl-dev \
python3 \
;
RUN --mount=type=tmpfs,target=/tmp/crun-build \
git clone https://github.com/containers/crun.git /tmp/crun-build && \
cd /tmp/crun-build && \
git checkout -q "${CRUN_VERSION}" && \
./autogen.sh && \
./configure --bindir=/build && \
make -j install
FROM --platform=amd64 djs55/vpnkit:${VPNKIT_VERSION} AS vpnkit-amd64
# vpnkit
# use a dummy scratch stage so the build does not fail on unsupported platforms
FROM scratch AS vpnkit-windows
FROM scratch AS vpnkit-linux-386
FROM scratch AS vpnkit-linux-arm
FROM scratch AS vpnkit-linux-ppc64le
FROM scratch AS vpnkit-linux-riscv64
FROM scratch AS vpnkit-linux-s390x
FROM djs55/vpnkit:${VPNKIT_VERSION} AS vpnkit-linux-amd64
FROM djs55/vpnkit:${VPNKIT_VERSION} AS vpnkit-linux-arm64
FROM vpnkit-linux-${TARGETARCH} AS vpnkit-linux
FROM vpnkit-${TARGETOS} AS vpnkit
FROM --platform=arm64 djs55/vpnkit:${VPNKIT_VERSION} AS vpnkit-arm64
FROM scratch AS vpnkit
COPY --from=vpnkit-amd64 /vpnkit /build/vpnkit.x86_64
COPY --from=vpnkit-arm64 /vpnkit /build/vpnkit.aarch64
# TODO: Some of this is only really needed for testing; it would be nice to split this up
FROM runtime-dev AS dev-systemd-false
@@ -324,19 +270,16 @@ RUN --mount=type=cache,sharing=locked,id=moby-dev-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-dev-aptcache,target=/var/cache/apt \
apt-get update && apt-get install -y --no-install-recommends \
apparmor \
aufs-tools \
bash-completion \
bzip2 \
inetutils-ping \
iproute2 \
iptables \
jq \
libcap2-bin \
libnet1 \
libnl-3-200 \
libprotobuf-c1 \
libyajl2 \
net-tools \
patch \
pigz \
python3-pip \
python3-setuptools \
@@ -348,8 +291,7 @@ RUN --mount=type=cache,sharing=locked,id=moby-dev-aptlib,target=/var/lib/apt \
vim-common \
xfsprogs \
xz-utils \
zip \
zstd
zip
# Switch to use iptables instead of nftables (to match the CI hosts)
@@ -358,27 +300,24 @@ RUN update-alternatives --set iptables /usr/sbin/iptables-legacy || true \
&& update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy || true \
&& update-alternatives --set arptables /usr/sbin/arptables-legacy || true
ARG YAMLLINT_VERSION=1.27.1
RUN pip3 install yamllint==${YAMLLINT_VERSION}
RUN pip3 install yamllint==1.26.1
COPY --from=dockercli /build/ /usr/local/cli
COPY --from=frozen-images /build/ /docker-frozen-images
COPY --from=swagger /build/ /usr/local/bin/
COPY --from=delve /build/ /usr/local/bin/
COPY --from=tomll /build/ /usr/local/bin/
COPY --from=gowinres /build/ /usr/local/bin/
COPY --from=tomlv /build/ /usr/local/bin/
COPY --from=tini /build/ /usr/local/bin/
COPY --from=registry /build/ /usr/local/bin/
COPY --from=criu /build/ /usr/local/bin/
COPY --from=criu /build/ /usr/local/
COPY --from=vndr /build/ /usr/local/bin/
COPY --from=gotestsum /build/ /usr/local/bin/
COPY --from=golangci_lint /build/ /usr/local/bin/
COPY --from=shfmt /build/ /usr/local/bin/
COPY --from=runc /build/ /usr/local/bin/
COPY --from=containerd /build/ /usr/local/bin/
COPY --from=rootlesskit /build/ /usr/local/bin/
COPY --from=vpnkit / /usr/local/bin/
COPY --from=crun /build/ /usr/local/bin/
COPY hack/dockerfile/etc/docker/ /etc/docker/
COPY --from=vpnkit /build/ /usr/local/bin/
COPY --from=proxy /build/ /usr/local/bin/
ENV PATH=/usr/local/cli:$PATH
ARG DOCKER_BUILDTAGS
ENV DOCKER_BUILDTAGS="${DOCKER_BUILDTAGS}"
@@ -414,42 +353,35 @@ ARG PRODUCT
ENV PRODUCT=${PRODUCT}
ARG DEFAULT_PRODUCT_LICENSE
ENV DEFAULT_PRODUCT_LICENSE=${DEFAULT_PRODUCT_LICENSE}
ARG PACKAGER_NAME
ENV PACKAGER_NAME=${PACKAGER_NAME}
ARG DOCKER_BUILDTAGS
ENV DOCKER_BUILDTAGS="${DOCKER_BUILDTAGS}"
ENV PREFIX=/build
# TODO: This is here because hack/make.sh binary copies these extra binaries
# from $PATH into the bundles dir.
# It would be nice to handle this in a different way.
COPY --from=tini /build/ /usr/local/bin/
COPY --from=runc /build/ /usr/local/bin/
COPY --from=containerd /build/ /usr/local/bin/
COPY --from=rootlesskit /build/ /usr/local/bin/
COPY --from=vpnkit / /usr/local/bin/
COPY --from=gowinres /build/ /usr/local/bin/
COPY --from=tini /build/ /usr/local/bin/
COPY --from=runc /build/ /usr/local/bin/
COPY --from=containerd /build/ /usr/local/bin/
COPY --from=rootlesskit /build/ /usr/local/bin/
COPY --from=proxy /build/ /usr/local/bin/
COPY --from=vpnkit /build/ /usr/local/bin/
WORKDIR /go/src/github.com/docker/docker
FROM binary-base AS build-binary
RUN --mount=type=cache,target=/root/.cache \
--mount=type=bind,target=.,ro \
--mount=type=tmpfs,target=cli/winresources/dockerd \
--mount=type=tmpfs,target=cli/winresources/docker-proxy \
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=bind,target=/go/src/github.com/docker/docker \
hack/make.sh binary
FROM binary-base AS build-dynbinary
RUN --mount=type=cache,target=/root/.cache \
--mount=type=bind,target=.,ro \
--mount=type=tmpfs,target=cli/winresources/dockerd \
--mount=type=tmpfs,target=cli/winresources/docker-proxy \
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=bind,target=/go/src/github.com/docker/docker \
hack/make.sh dynbinary
FROM binary-base AS build-cross
ARG DOCKER_CROSSPLATFORMS
RUN --mount=type=cache,target=/root/.cache \
--mount=type=bind,target=.,ro \
--mount=type=tmpfs,target=cli/winresources/dockerd \
--mount=type=tmpfs,target=cli/winresources/docker-proxy \
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=bind,target=/go/src/github.com/docker/docker \
--mount=type=tmpfs,target=/go/src/github.com/docker/docker/autogen \
hack/make.sh cross
FROM scratch AS binary
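One side of the vpnkit hunk above relies on a small multi-arch trick worth spelling out: every platform vpnkit does not ship for is declared as an empty scratch stage, so the FROM vpnkit-linux-${TARGETARCH} indirection always resolves and the later COPY --from=vpnkit copies nothing instead of failing the build. A hypothetical cross-build that benefits (the dev target is the one the CI workflow above bakes):

docker buildx build --platform linux/riscv64 --target dev .
# vpnkit-linux-riscv64 resolves to scratch, so the build proceeds even
# though vpnkit publishes no riscv64 binary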

Dockerfile.buildx

@@ -0,0 +1,26 @@
ARG GO_VERSION=1.13.15
ARG BUILDX_COMMIT=v0.5.1
ARG BUILDX_REPO=https://github.com/docker/buildx.git
FROM golang:${GO_VERSION}-buster AS build
ARG BUILDX_REPO
RUN git clone "${BUILDX_REPO}" /buildx
WORKDIR /buildx
ARG BUILDX_COMMIT
RUN git fetch origin "${BUILDX_COMMIT}":build && git checkout build
ARG GOOS
ARG GOARCH
# Keep these essentially no-op var settings for debug purposes.
# They let us see which GOOS/GOARCH is being built for.
RUN GOOS="${GOOS}" GOARCH="${GOARCH}" BUILDX_COMMIT="${BUILDX_COMMIT}"; \
pkg="github.com/docker/buildx"; \
ldflags="\
-X \"${pkg}/version.Version=$(git describe --tags)\" \
-X \"${pkg}/version.Revision=$(git rev-parse --short HEAD)\" \
-X \"${pkg}/version.Package=buildx\" \
"; \
go build -mod=vendor -ldflags "${ldflags}" -o /usr/bin/buildx ./cmd/buildx
FROM golang:${GO_VERSION}-buster
COPY --from=build /usr/bin/buildx /usr/bin/buildx
ENTRYPOINT ["/usr/bin/buildx"]


@@ -1,4 +1,4 @@
ARG GO_VERSION=1.19.3
ARG GO_VERSION=1.13.15
FROM golang:${GO_VERSION}-alpine AS base
ENV GO111MODULE=off
@@ -18,19 +18,20 @@ FROM base AS frozen-images
# Get useful and necessary Hub images so we can "docker load" locally instead of pulling
COPY contrib/download-frozen-image-v2.sh /
RUN /download-frozen-image-v2.sh /build \
busybox:latest@sha256:95cf004f559831017cdf4628aaf1bb30133677be8702a8c5f2994629f637a209 \
busybox:latest@sha256:95cf004f559831017cdf4628aaf1bb30133677be8702a8c5f2994629f637a209 \
debian:bullseye-slim@sha256:dacf278785a4daa9de07596ec739dbc07131e189942772210709c5c0777e8437 \
hello-world:latest@sha256:d58e752213a51785838f9eed2b7a498ffa1cb3aa7f946dda11af39286c3db9a9 \
arm32v7/hello-world:latest@sha256:50b8560ad574c779908da71f7ce370c0a2471c098d44d1c8f6b513c5a55eeeb1
buildpack-deps:buster@sha256:d0abb4b1e5c664828b93e8b6ac84d10bce45ee469999bef88304be04a2709491 \
busybox:latest@sha256:95cf004f559831017cdf4628aaf1bb30133677be8702a8c5f2994629f637a209 \
busybox:glibc@sha256:1f81263701cddf6402afe9f33fca0266d9fff379e59b1748f33d3072da71ee85 \
debian:bullseye@sha256:7190e972ab16aefea4d758ebe42a293f4e5c5be63595f4d03a5b9bf6839a4344 \
hello-world:latest@sha256:d58e752213a51785838f9eed2b7a498ffa1cb3aa7f946dda11af39286c3db9a9
# See also frozenImages in "testutil/environment/protect.go" (which needs to be updated when adding images to this list)
FROM base AS dockercli
ENV INSTALL_BINARY_NAME=dockercli
COPY hack/dockerfile/install/install.sh ./install.sh
COPY hack/dockerfile/install/dockercli.installer ./
RUN PREFIX=/build ./install.sh dockercli
COPY hack/dockerfile/install/$INSTALL_BINARY_NAME.installer ./
RUN PREFIX=/build ./install.sh $INSTALL_BINARY_NAME
# TestDockerCLIBuildSuite dependency
# Build DockerSuite.TestBuild* dependency
FROM base AS contrib
COPY contrib/syscall-test /build/syscall-test
COPY contrib/httpserver/Dockerfile /build/httpserver/Dockerfile
@@ -65,9 +66,7 @@ RUN apk --no-cache add \
ca-certificates \
g++ \
git \
inetutils-ping \
iptables \
libcap2-bin \
pigz \
tar \
xz


@@ -5,12 +5,9 @@
# This represents the bare minimum required to build and test Docker.
ARG GO_VERSION=1.19.3
ARG GO_VERSION=1.13.15
ARG BASE_DEBIAN_DISTRO="bullseye"
ARG GOLANG_IMAGE="golang:${GO_VERSION}-${BASE_DEBIAN_DISTRO}"
FROM ${GOLANG_IMAGE}
FROM golang:${GO_VERSION}-buster
ENV GO111MODULE=off
# allow replacing httpredir or deb mirror
@@ -39,6 +36,7 @@ RUN apt-get update && apt-get install -y --no-install-recommends \
xfsprogs \
xz-utils \
\
aufs-tools \
vim-common \
&& rm -rf /var/lib/apt/lists/*
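Since this file describes a self-contained minimal build environment, a hedged usage sketch (the tag is arbitrary; the exact build entrypoint to run inside the container varies by branch):

docker build -t docker-dev:simple -f Dockerfile.simple .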


@@ -155,7 +155,7 @@
# The number of build steps below are explicitly minimised to improve performance.
# Extremely important - do not change the following line to reference a "specific" image,
# such as `mcr.microsoft.com/windows/servercore:ltsc2022`. If using this Dockerfile in process
# such as `mcr.microsoft.com/windows/servercore:ltsc2019`. If using this Dockerfile in process
# isolated containers, the kernel of the host must match the container image, and hence
# would fail between Windows Server 2016 (aka RS1) and Windows Server 2019 (aka RS5).
# It is expected that the image `microsoft/windowsservercore:latest` is present, and matches
@@ -165,23 +165,18 @@ FROM microsoft/windowsservercore
# Use PowerShell as the default shell
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';"]
ARG GO_VERSION=1.19.3
ARG GOTESTSUM_VERSION=v1.8.2
ARG GOWINRES_VERSION=v0.3.0
ARG CONTAINERD_VERSION=v1.6.10
ARG GO_VERSION=1.13.15
ARG GOTESTSUM_COMMIT=v0.5.3
# Environment variable notes:
# - GO_VERSION must be consistent with 'Dockerfile' used by Linux.
# - CONTAINERD_VERSION must be consistent with 'hack/dockerfile/install/containerd.installer' used by Linux.
# - FROM_DOCKERFILE is used for detection of building within a container.
ENV GO_VERSION=${GO_VERSION} `
CONTAINERD_VERSION=${CONTAINERD_VERSION} `
GIT_VERSION=2.11.1 `
GOPATH=C:\gopath `
GO111MODULE=off `
FROM_DOCKERFILE=1 `
GOTESTSUM_VERSION=${GOTESTSUM_VERSION} `
GOWINRES_VERSION=${GOWINRES_VERSION}
GOTESTSUM_COMMIT=${GOTESTSUM_COMMIT}
RUN `
Function Test-Nano() { `
@@ -216,7 +211,7 @@ RUN `
} `
} `
`
setx /M PATH $('C:\git\cmd;C:\git\usr\bin;'+$Env:PATH+';C:\gcc\bin;C:\go\bin;C:\containerd\bin'); `
setx /M PATH $('C:\git\cmd;C:\git\usr\bin;'+$Env:PATH+';C:\gcc\bin;C:\go\bin'); `
`
Write-Host INFO: Downloading git...; `
$location='https://www.nuget.org/api/v2/package/GitForWindows/'+$Env:GIT_VERSION; `
@@ -257,16 +252,6 @@ RUN `
Remove-Item C:\binutils.zip; `
Remove-Item C:\gitsetup.zip; `
`
Write-Host INFO: Downloading containerd; `
Install-Package -Force 7Zip4PowerShell; `
$location='https://github.com/containerd/containerd/releases/download/'+$Env:CONTAINERD_VERSION+'/containerd-'+$Env:CONTAINERD_VERSION.TrimStart('v')+'-windows-amd64.tar.gz'; `
Download-File $location C:\containerd.tar.gz; `
New-Item -Path C:\containerd -ItemType Directory; `
Expand-7Zip C:\containerd.tar.gz C:\; `
Expand-7Zip C:\containerd.tar C:\containerd; `
Remove-Item C:\containerd.tar.gz; `
Remove-Item C:\containerd.tar; `
`
# Ensure all directories exist that we will require below....
$srcDir = """$Env:GOPATH`\src\github.com\docker\docker\bundles"""; `
Write-Host INFO: Ensuring existence of directory $srcDir...; `
@@ -276,36 +261,21 @@ RUN `
C:\git\cmd\git config --global core.autocrlf true;
RUN `
Function Install-GoTestSum() { `
Function Build-GoTestSum() { `
Write-Host "INFO: Building gotestsum version $Env:GOTESTSUM_COMMIT in $Env:GOPATH"; `
$Env:GO111MODULE = 'on'; `
$tmpGobin = "${Env:GOBIN_TMP}"; `
$Env:GOBIN = """${Env:GOPATH}`\bin"""; `
Write-Host "INFO: Installing gotestsum version $Env:GOTESTSUM_VERSION in $Env:GOBIN"; `
&go install "gotest.tools/gotestsum@${Env:GOTESTSUM_VERSION}"; `
&go get -buildmode=exe "gotest.tools/gotestsum@${Env:GOTESTSUM_COMMIT}"; `
$Env:GOBIN = "${tmpGobin}"; `
$Env:GO111MODULE = 'off'; `
if ($LASTEXITCODE -ne 0) { `
Throw '"gotestsum install failed..."'; `
Throw '"gotestsum build failed..."'; `
} `
Write-Host "INFO: Build done for gotestsum..."; `
} `
`
Install-GoTestSum
RUN `
Function Install-GoWinres() { `
$Env:GO111MODULE = 'on'; `
$tmpGobin = "${Env:GOBIN_TMP}"; `
$Env:GOBIN = """${Env:GOPATH}`\bin"""; `
Write-Host "INFO: Installing go-winres version $Env:GOWINRES_VERSION in $Env:GOBIN"; `
&go install "github.com/tc-hib/go-winres@${Env:GOWINRES_VERSION}"; `
$Env:GOBIN = "${tmpGobin}"; `
$Env:GO111MODULE = 'off'; `
if ($LASTEXITCODE -ne 0) { `
Throw '"go-winres install failed..."'; `
} `
} `
`
Install-GoWinres
Build-GoTestSum
# Make PowerShell the default entrypoint
ENTRYPOINT ["powershell.exe"]

694
Jenkinsfile vendored
View File

@@ -8,9 +8,16 @@ pipeline {
timestamps()
}
parameters {
booleanParam(name: 'unit_validate', defaultValue: true, description: 'amd64 (x86_64) unit tests and vendor check')
booleanParam(name: 'validate_force', defaultValue: false, description: 'force validation steps to be run, even if no changes were detected')
booleanParam(name: 'amd64', defaultValue: true, description: 'amd64 (x86_64) Build/Test')
booleanParam(name: 'rootless', defaultValue: true, description: 'amd64 (x86_64) Build/Test (Rootless mode)')
booleanParam(name: 'cgroup2', defaultValue: true, description: 'amd64 (x86_64) Build/Test (cgroup v2)')
booleanParam(name: 'arm64', defaultValue: true, description: 'ARM (arm64) Build/Test')
booleanParam(name: 's390x', defaultValue: false, description: 'IBM Z (s390x) Build/Test')
booleanParam(name: 'ppc64le', defaultValue: false, description: 'PowerPC (ppc64le) Build/Test')
booleanParam(name: 's390x', defaultValue: true, description: 'IBM Z (s390x) Build/Test')
booleanParam(name: 'ppc64le', defaultValue: true, description: 'PowerPC (ppc64le) Build/Test')
booleanParam(name: 'windowsRS1', defaultValue: false, description: 'Windows 2016 (RS1) Build/Test')
booleanParam(name: 'windowsRS5', defaultValue: true, description: 'Windows 2019 (RS5) Build/Test')
booleanParam(name: 'dco', defaultValue: true, description: 'Run the DCO check')
}
environment {
@@ -18,7 +25,7 @@ pipeline {
DOCKER_EXPERIMENTAL = '1'
DOCKER_GRAPHDRIVER = 'overlay2'
APT_MIRROR = 'cdn-fastly.deb.debian.org'
CHECK_CONFIG_COMMIT = '33a3680e08d1007e72c3b3f1454f823d8e9948ee'
CHECK_CONFIG_COMMIT = '78405559cfe5987174aa2cb6463b9b2c1b917255'
TESTDEBUG = '0'
TIMEOUT = '120m'
}
@@ -39,29 +46,510 @@ pipeline {
beforeAgent true
expression { params.dco }
}
agent { label 'arm64 && ubuntu-2004' }
agent { label 'amd64 && ubuntu-1804 && overlay2' }
steps {
sh '''
docker run --rm \
-v "$WORKSPACE:/workspace" \
-e VALIDATE_REPO=${GIT_URL} \
-e VALIDATE_BRANCH=${CHANGE_TARGET} \
alpine sh -c 'apk add --no-cache -q bash git openssh-client && git config --system --add safe.directory /workspace && cd /workspace && hack/validate/dco'
alpine sh -c 'apk add --no-cache -q bash git openssh-client && cd /workspace && hack/validate/dco'
'''
}
}
stage('Build') {
parallel {
stage('unit-validate') {
when {
beforeAgent true
expression { params.unit_validate }
}
agent { label 'amd64 && ubuntu-1804 && overlay2' }
environment {
// On master ("non-pull-request"), force running some validation checks (vendor, swagger),
// even if no files were changed. This allows catching problems caused by pull-requests
// that were merged out-of-sequence.
TEST_FORCE_VALIDATE = sh returnStdout: true, script: 'if [ "${BRANCH_NAME%%-*}" != "PR" ] || [ "${CHANGE_TARGET:-master}" != "master" ] || [ "${validate_force}" = "true" ]; then echo "1"; fi'
}
stages {
stage("Print info") {
steps {
sh 'docker version'
sh 'docker info'
sh '''
echo "check-config.sh version: ${CHECK_CONFIG_COMMIT}"
curl -fsSL -o ${WORKSPACE}/check-config.sh "https://raw.githubusercontent.com/moby/moby/${CHECK_CONFIG_COMMIT}/contrib/check-config.sh" \
&& bash ${WORKSPACE}/check-config.sh || true
'''
}
}
stage("Build dev image") {
steps {
sh 'docker build --force-rm --build-arg APT_MIRROR -t docker:${GIT_COMMIT} .'
}
}
stage("Validate") {
steps {
sh '''
docker run --rm -t --privileged \
-v "$WORKSPACE/bundles:/go/src/github.com/docker/docker/bundles" \
-v "$WORKSPACE/.git:/go/src/github.com/docker/docker/.git" \
--name docker-pr$BUILD_NUMBER \
-e DOCKER_EXPERIMENTAL \
-e DOCKER_GITCOMMIT=${GIT_COMMIT} \
-e DOCKER_GRAPHDRIVER \
-e TEST_FORCE_VALIDATE \
-e VALIDATE_REPO=${GIT_URL} \
-e VALIDATE_BRANCH=${CHANGE_TARGET} \
docker:${GIT_COMMIT} \
hack/validate/default
'''
}
}
stage("Docker-py") {
steps {
sh '''
docker run --rm -t --privileged \
-v "$WORKSPACE/bundles:/go/src/github.com/docker/docker/bundles" \
--name docker-pr$BUILD_NUMBER \
-e DOCKER_EXPERIMENTAL \
-e DOCKER_GITCOMMIT=${GIT_COMMIT} \
-e DOCKER_GRAPHDRIVER \
-e VALIDATE_REPO=${GIT_URL} \
-e VALIDATE_BRANCH=${CHANGE_TARGET} \
docker:${GIT_COMMIT} \
hack/make.sh \
dynbinary-daemon \
test-docker-py
'''
}
post {
always {
junit testResults: 'bundles/test-docker-py/junit-report.xml', allowEmptyResults: true
sh '''
echo "Ensuring container killed."
docker rm -vf docker-pr$BUILD_NUMBER || true
'''
sh '''
echo 'Chowning /workspace to jenkins user'
docker run --rm -v "$WORKSPACE:/workspace" busybox chown -R "$(id -u):$(id -g)" /workspace
'''
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE', message: 'Failed to create bundles.tar.gz') {
sh '''
bundleName=docker-py
echo "Creating ${bundleName}-bundles.tar.gz"
tar -czf ${bundleName}-bundles.tar.gz bundles/test-docker-py/*.xml bundles/test-docker-py/*.log
'''
archiveArtifacts artifacts: '*-bundles.tar.gz', allowEmptyArchive: true
}
}
}
}
stage("Static") {
steps {
sh '''
docker run --rm -t --privileged \
-v "$WORKSPACE/bundles:/go/src/github.com/docker/docker/bundles" \
--name docker-pr$BUILD_NUMBER \
-e DOCKER_GITCOMMIT=${GIT_COMMIT} \
-e DOCKER_GRAPHDRIVER \
docker:${GIT_COMMIT} \
hack/make.sh binary-daemon
'''
}
}
stage("Cross") {
steps {
sh '''
docker run --rm -t --privileged \
-v "$WORKSPACE/bundles:/go/src/github.com/docker/docker/bundles" \
--name docker-pr$BUILD_NUMBER \
-e DOCKER_GITCOMMIT=${GIT_COMMIT} \
-e DOCKER_GRAPHDRIVER \
docker:${GIT_COMMIT} \
hack/make.sh cross
'''
}
}
// needs to be last stage that calls make.sh for the junit report to work
stage("Unit tests") {
steps {
sh '''
docker run --rm -t --privileged \
-v "$WORKSPACE/bundles:/go/src/github.com/docker/docker/bundles" \
--name docker-pr$BUILD_NUMBER \
-e DOCKER_EXPERIMENTAL \
-e DOCKER_GITCOMMIT=${GIT_COMMIT} \
-e DOCKER_GRAPHDRIVER \
-e VALIDATE_REPO=${GIT_URL} \
-e VALIDATE_BRANCH=${CHANGE_TARGET} \
docker:${GIT_COMMIT} \
hack/test/unit
'''
}
post {
always {
junit testResults: 'bundles/junit-report.xml', allowEmptyResults: true
}
}
}
stage("Validate vendor") {
steps {
sh '''
docker run --rm -t --privileged \
-v "$WORKSPACE/.git:/go/src/github.com/docker/docker/.git" \
--name docker-pr$BUILD_NUMBER \
-e DOCKER_EXPERIMENTAL \
-e DOCKER_GITCOMMIT=${GIT_COMMIT} \
-e DOCKER_GRAPHDRIVER \
-e TEST_FORCE_VALIDATE \
-e VALIDATE_REPO=${GIT_URL} \
-e VALIDATE_BRANCH=${CHANGE_TARGET} \
docker:${GIT_COMMIT} \
hack/validate/vendor
'''
}
}
}
post {
always {
sh '''
echo 'Ensuring container killed.'
docker rm -vf docker-pr$BUILD_NUMBER || true
'''
sh '''
echo 'Chowning /workspace to jenkins user'
docker run --rm -v "$WORKSPACE:/workspace" busybox chown -R "$(id -u):$(id -g)" /workspace
'''
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE', message: 'Failed to create bundles.tar.gz') {
sh '''
bundleName=unit
echo "Creating ${bundleName}-bundles.tar.gz"
tar -czvf ${bundleName}-bundles.tar.gz bundles/junit-report.xml bundles/go-test-report.json bundles/profile.out
'''
archiveArtifacts artifacts: '*-bundles.tar.gz', allowEmptyArchive: true
}
}
cleanup {
sh 'make clean'
deleteDir()
}
}
}
stage('amd64') {
when {
beforeAgent true
expression { params.amd64 }
}
agent { label 'amd64 && ubuntu-1804 && overlay2' }
stages {
stage("Print info") {
steps {
sh 'docker version'
sh 'docker info'
sh '''
echo "check-config.sh version: ${CHECK_CONFIG_COMMIT}"
curl -fsSL -o ${WORKSPACE}/check-config.sh "https://raw.githubusercontent.com/moby/moby/${CHECK_CONFIG_COMMIT}/contrib/check-config.sh" \
&& bash ${WORKSPACE}/check-config.sh || true
'''
}
}
stage("Build dev image") {
steps {
sh '''
# todo: include ip_vs in base image
sudo modprobe ip_vs
docker build --force-rm --build-arg APT_MIRROR -t docker:${GIT_COMMIT} .
'''
}
}
stage("Run tests") {
steps {
sh '''#!/bin/bash
# bash is needed so 'jobs -p' works properly
# it also accepts setting inline envvars for functions without explicitly exporting
set -x
run_tests() {
[ -n "$TESTDEBUG" ] && rm= || rm=--rm;
docker run $rm -t --privileged \
-v "$WORKSPACE/bundles/${TEST_INTEGRATION_DEST}:/go/src/github.com/docker/docker/bundles" \
-v "$WORKSPACE/bundles/dynbinary-daemon:/go/src/github.com/docker/docker/bundles/dynbinary-daemon" \
-v "$WORKSPACE/.git:/go/src/github.com/docker/docker/.git" \
--name "$CONTAINER_NAME" \
-e KEEPBUNDLE=1 \
-e TESTDEBUG \
-e TESTFLAGS \
-e TEST_SKIP_INTEGRATION \
-e TEST_SKIP_INTEGRATION_CLI \
-e DOCKER_GITCOMMIT=${GIT_COMMIT} \
-e DOCKER_GRAPHDRIVER \
-e TIMEOUT \
-e VALIDATE_REPO=${GIT_URL} \
-e VALIDATE_BRANCH=${CHANGE_TARGET} \
docker:${GIT_COMMIT} \
hack/make.sh \
"$1" \
test-integration
}
trap "exit" INT TERM
trap 'pids=$(jobs -p); echo "Remaining pids to kill: [$pids]"; [ -z "$pids" ] || kill $pids' EXIT
CONTAINER_NAME=docker-pr$BUILD_NUMBER
docker run --rm -t --privileged \
-v "$WORKSPACE/bundles:/go/src/github.com/docker/docker/bundles" \
-v "$WORKSPACE/.git:/go/src/github.com/docker/docker/.git" \
--name ${CONTAINER_NAME}-build \
-e DOCKER_EXPERIMENTAL \
-e DOCKER_GITCOMMIT=${GIT_COMMIT} \
-e DOCKER_GRAPHDRIVER \
docker:${GIT_COMMIT} \
hack/make.sh \
dynbinary-daemon
# flaky + integration
TEST_INTEGRATION_DEST=1 CONTAINER_NAME=${CONTAINER_NAME}-1 TEST_SKIP_INTEGRATION_CLI=1 run_tests test-integration-flaky &
# integration-cli first set
TEST_INTEGRATION_DEST=2 CONTAINER_NAME=${CONTAINER_NAME}-2 TEST_SKIP_INTEGRATION=1 TESTFLAGS="-test.run Test(DockerSuite|DockerNetworkSuite|DockerHubPullSuite|DockerRegistrySuite|DockerSchema1RegistrySuite|DockerRegistryAuthTokenSuite|DockerRegistryAuthHtpasswdSuite)/" run_tests &
# integration-cli second set
TEST_INTEGRATION_DEST=3 CONTAINER_NAME=${CONTAINER_NAME}-3 TEST_SKIP_INTEGRATION=1 TESTFLAGS="-test.run Test(DockerSwarmSuite|DockerDaemonSuite|DockerExternalVolumeSuite)/" run_tests &
c=0
for job in $(jobs -p); do
wait ${job} || c=$?
done
exit $c
'''
}
post {
always {
junit testResults: 'bundles/**/*-report.xml', allowEmptyResults: true
}
}
}
}
post {
always {
sh '''
echo "Ensuring container killed."
cids=$(docker ps -aq -f name=docker-pr${BUILD_NUMBER}-*)
[ -n "$cids" ] && docker rm -vf $cids || true
'''
sh '''
echo "Chowning /workspace to jenkins user"
docker run --rm -v "$WORKSPACE:/workspace" busybox chown -R "$(id -u):$(id -g)" /workspace
'''
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE', message: 'Failed to create bundles.tar.gz') {
sh '''
bundleName=amd64
echo "Creating ${bundleName}-bundles.tar.gz"
# exclude overlay2 directories
find bundles -path '*/root/*overlay2' -prune -o -type f \\( -name '*-report.json' -o -name '*.log' -o -name '*.prof' -o -name '*-report.xml' \\) -print | xargs tar -czf ${bundleName}-bundles.tar.gz
'''
archiveArtifacts artifacts: '*-bundles.tar.gz', allowEmptyArchive: true
}
}
cleanup {
sh 'make clean'
deleteDir()
}
}
}
stage('rootless') {
when {
beforeAgent true
expression { params.rootless }
}
agent { label 'amd64 && ubuntu-1804 && overlay2' }
stages {
stage("Print info") {
steps {
sh 'docker version'
sh 'docker info'
sh '''
echo "check-config.sh version: ${CHECK_CONFIG_COMMIT}"
curl -fsSL -o ${WORKSPACE}/check-config.sh "https://raw.githubusercontent.com/moby/moby/${CHECK_CONFIG_COMMIT}/contrib/check-config.sh" \
&& bash ${WORKSPACE}/check-config.sh || true
'''
}
}
stage("Build dev image") {
steps {
sh '''
docker build --force-rm --build-arg APT_MIRROR -t docker:${GIT_COMMIT} .
'''
}
}
stage("Integration tests") {
environment {
DOCKER_ROOTLESS = '1'
TEST_SKIP_INTEGRATION_CLI = '1'
}
steps {
sh '''
docker run --rm -t --privileged \
-v "$WORKSPACE/bundles:/go/src/github.com/docker/docker/bundles" \
--name docker-pr$BUILD_NUMBER \
-e DOCKER_GITCOMMIT=${GIT_COMMIT} \
-e DOCKER_GRAPHDRIVER \
-e DOCKER_EXPERIMENTAL \
-e DOCKER_ROOTLESS \
-e TEST_SKIP_INTEGRATION_CLI \
-e TIMEOUT \
-e VALIDATE_REPO=${GIT_URL} \
-e VALIDATE_BRANCH=${CHANGE_TARGET} \
docker:${GIT_COMMIT} \
hack/make.sh \
dynbinary \
test-integration
'''
}
post {
always {
junit testResults: 'bundles/**/*-report.xml', allowEmptyResults: true
}
}
}
}
post {
always {
sh '''
echo "Ensuring container killed."
docker rm -vf docker-pr$BUILD_NUMBER || true
'''
sh '''
echo "Chowning /workspace to jenkins user"
docker run --rm -v "$WORKSPACE:/workspace" busybox chown -R "$(id -u):$(id -g)" /workspace
'''
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE', message: 'Failed to create bundles.tar.gz') {
sh '''
bundleName=amd64-rootless
echo "Creating ${bundleName}-bundles.tar.gz"
# exclude overlay2 directories
find bundles -path '*/root/*overlay2' -prune -o -type f \\( -name '*-report.json' -o -name '*.log' -o -name '*.prof' -o -name '*-report.xml' \\) -print | xargs tar -czf ${bundleName}-bundles.tar.gz
'''
archiveArtifacts artifacts: '*-bundles.tar.gz', allowEmptyArchive: true
}
}
cleanup {
sh 'make clean'
deleteDir()
}
}
}
stage('cgroup2') {
when {
beforeAgent true
expression { params.cgroup2 }
}
agent { label 'amd64 && ubuntu-2004 && cgroup2' }
stages {
stage("Print info") {
steps {
sh 'docker version'
sh 'docker info'
}
}
stage("Build dev image") {
steps {
sh '''
docker build --force-rm --build-arg APT_MIRROR --build-arg SYSTEMD=true -t docker:${GIT_COMMIT} .
'''
}
}
stage("Integration tests") {
environment {
DOCKER_SYSTEMD = '1' // recommended cgroup driver for v2
TEST_SKIP_INTEGRATION_CLI = '1' // CLI tests do not support v2
}
steps {
sh '''
docker run --rm -t --privileged \
-v "$WORKSPACE/bundles:/go/src/github.com/docker/docker/bundles" \
--name docker-pr$BUILD_NUMBER \
-e DOCKER_GITCOMMIT=${GIT_COMMIT} \
-e DOCKER_GRAPHDRIVER \
-e DOCKER_EXPERIMENTAL \
-e DOCKER_SYSTEMD \
-e TEST_SKIP_INTEGRATION_CLI \
-e TIMEOUT \
-e VALIDATE_REPO=${GIT_URL} \
-e VALIDATE_BRANCH=${CHANGE_TARGET} \
docker:${GIT_COMMIT} \
hack/make.sh \
dynbinary \
test-integration
'''
}
post {
always {
junit testResults: 'bundles/**/*-report.xml', allowEmptyResults: true
}
}
}
}
post {
always {
sh '''
echo "Ensuring container killed."
docker rm -vf docker-pr$BUILD_NUMBER || true
'''
sh '''
echo "Chowning /workspace to jenkins user"
docker run --rm -v "$WORKSPACE:/workspace" busybox chown -R "$(id -u):$(id -g)" /workspace
'''
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE', message: 'Failed to create bundles.tar.gz') {
sh '''
bundleName=amd64-cgroup2
echo "Creating ${bundleName}-bundles.tar.gz"
# exclude overlay2 directories
find bundles -path '*/root/*overlay2' -prune -o -type f \\( -name '*-report.json' -o -name '*.log' -o -name '*.prof' -o -name '*-report.xml' \\) -print | xargs tar -czf ${bundleName}-bundles.tar.gz
'''
archiveArtifacts artifacts: '*-bundles.tar.gz', allowEmptyArchive: true
}
}
cleanup {
sh 'make clean'
deleteDir()
}
}
}
stage('s390x') {
when {
beforeAgent true
// Skip this stage on PRs unless the checkbox is selected
anyOf {
not { changeRequest() }
expression { params.s390x }
}
expression { params.s390x }
}
agent { label 's390x-ubuntu-2004' }
agent { label 's390x-ubuntu-1804' }
stages {
stage("Print info") {
@@ -84,9 +572,6 @@ pipeline {
}
stage("Unit tests") {
steps {
sh '''
sudo modprobe ip6table_filter
'''
sh '''
docker run --rm -t --privileged \
-v "$WORKSPACE/bundles:/go/src/github.com/docker/docker/bundles" \
@@ -102,7 +587,7 @@ pipeline {
}
post {
always {
junit testResults: 'bundles/junit-report*.xml', allowEmptyResults: true
junit testResults: 'bundles/junit-report.xml', allowEmptyResults: true
}
}
}
@@ -167,13 +652,10 @@ pipeline {
stage('s390x integration-cli') {
when {
beforeAgent true
// Skip this stage on PRs unless the checkbox is selected
anyOf {
not { changeRequest() }
expression { params.s390x }
}
not { changeRequest() }
expression { params.s390x }
}
agent { label 's390x-ubuntu-2004' }
agent { label 's390x-ubuntu-1804' }
stages {
stage("Print info") {
@@ -253,13 +735,15 @@ pipeline {
stage('ppc64le') {
when {
beforeAgent true
// Skip this stage on PRs unless the checkbox is selected
anyOf {
not { changeRequest() }
expression { params.ppc64le }
}
expression { params.ppc64le }
}
agent { label 'ppc64le-ubuntu-1604' }
// ppc64le machines run on Docker 18.06, and buildkit has some
// bugs on that version. Build and use buildx instead.
environment {
USE_BUILDX = '1'
DOCKER_BUILDKIT = '0'
}
stages {
stage("Print info") {
@@ -276,15 +760,13 @@ pipeline {
stage("Build dev image") {
steps {
sh '''
docker buildx build --load --force-rm --build-arg APT_MIRROR -t docker:${GIT_COMMIT} .
make bundles/buildx
bundles/buildx build --load --force-rm --build-arg APT_MIRROR -t docker:${GIT_COMMIT} .
'''
}
}
stage("Unit tests") {
steps {
sh '''
sudo modprobe ip6table_filter
'''
sh '''
docker run --rm -t --privileged \
-v "$WORKSPACE/bundles:/go/src/github.com/docker/docker/bundles" \
@@ -300,7 +782,7 @@ pipeline {
}
post {
always {
junit testResults: 'bundles/junit-report*.xml', allowEmptyResults: true
junit testResults: 'bundles/junit-report.xml', allowEmptyResults: true
}
}
}
@@ -365,13 +847,16 @@ pipeline {
stage('ppc64le integration-cli') {
when {
beforeAgent true
// Skip this stage on PRs unless the checkbox is selected
anyOf {
not { changeRequest() }
expression { params.ppc64le }
}
not { changeRequest() }
expression { params.ppc64le }
}
agent { label 'ppc64le-ubuntu-1604' }
// ppc64le machines run on Docker 18.06, and buildkit has some
// bugs on that version. Build and use buildx instead.
environment {
USE_BUILDX = '1'
DOCKER_BUILDKIT = '0'
}
stages {
stage("Print info") {
@@ -388,7 +873,8 @@ pipeline {
stage("Build dev image") {
steps {
sh '''
docker buildx build --load --force-rm --build-arg APT_MIRROR -t docker:${GIT_COMMIT} .
make bundles/buildx
bundles/buildx build --load --force-rm --build-arg APT_MIRROR -t docker:${GIT_COMMIT} .
'''
}
}
@@ -477,9 +963,6 @@ pipeline {
}
stage("Unit tests") {
steps {
sh '''
sudo modprobe ip6table_filter
'''
sh '''
docker run --rm -t --privileged \
-v "$WORKSPACE/bundles:/go/src/github.com/docker/docker/bundles" \
@@ -495,7 +978,7 @@ pipeline {
}
post {
always {
junit testResults: 'bundles/junit-report*.xml', allowEmptyResults: true
junit testResults: 'bundles/junit-report.xml', allowEmptyResults: true
}
}
}
@@ -557,6 +1040,135 @@ pipeline {
}
}
}
stage('win-RS1') {
when {
beforeAgent true
// Skip this stage on PRs unless the windowsRS1 checkbox is selected
anyOf {
not { changeRequest() }
expression { params.windowsRS1 }
}
}
environment {
DOCKER_BUILDKIT = '0'
DOCKER_DUT_DEBUG = '1'
SKIP_VALIDATION_TESTS = '1'
SOURCES_DRIVE = 'd'
SOURCES_SUBDIR = 'gopath'
TESTRUN_DRIVE = 'd'
TESTRUN_SUBDIR = "CI"
WINDOWS_BASE_IMAGE = 'mcr.microsoft.com/windows/servercore'
WINDOWS_BASE_IMAGE_TAG = 'ltsc2016'
}
agent {
node {
customWorkspace 'd:\\gopath\\src\\github.com\\docker\\docker'
label 'windows-2016'
}
}
stages {
stage("Print info") {
steps {
sh 'docker version'
sh 'docker info'
}
}
stage("Run tests") {
steps {
powershell '''
$ErrorActionPreference = 'Stop'
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
Invoke-WebRequest https://github.com/moby/docker-ci-zap/blob/master/docker-ci-zap.exe?raw=true -OutFile C:/Windows/System32/docker-ci-zap.exe
./hack/ci/windows.ps1
exit $LastExitCode
'''
}
}
}
post {
always {
junit testResults: 'bundles/junit-report-*.xml', allowEmptyResults: true
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE', message: 'Failed to create bundles.tar.gz') {
powershell '''
cd $env:WORKSPACE
$bundleName="windowsRS1-integration"
Write-Host -ForegroundColor Green "Creating ${bundleName}-bundles.zip"
# archiveArtifacts does not support env-vars to , so save the artifacts in a fixed location
Compress-Archive -Path "bundles/CIDUT.out", "bundles/CIDUT.err", "bundles/junit-report-*.xml" -CompressionLevel Optimal -DestinationPath "${bundleName}-bundles.zip"
'''
archiveArtifacts artifacts: '*-bundles.zip', allowEmptyArchive: true
}
}
cleanup {
sh 'make clean'
deleteDir()
}
}
}
stage('win-RS5') {
when {
beforeAgent true
expression { params.windowsRS5 }
}
environment {
DOCKER_BUILDKIT = '0'
DOCKER_DUT_DEBUG = '1'
SKIP_VALIDATION_TESTS = '1'
SOURCES_DRIVE = 'd'
SOURCES_SUBDIR = 'gopath'
TESTRUN_DRIVE = 'd'
TESTRUN_SUBDIR = "CI"
WINDOWS_BASE_IMAGE = 'mcr.microsoft.com/windows/servercore'
WINDOWS_BASE_IMAGE_TAG = 'ltsc2019'
}
agent {
node {
customWorkspace 'd:\\gopath\\src\\github.com\\docker\\docker'
label 'windows-2019'
}
}
stages {
stage("Print info") {
steps {
sh 'docker version'
sh 'docker info'
}
}
stage("Run tests") {
steps {
powershell '''
$ErrorActionPreference = 'Stop'
Invoke-WebRequest https://github.com/moby/docker-ci-zap/blob/master/docker-ci-zap.exe?raw=true -OutFile C:/Windows/System32/docker-ci-zap.exe
./hack/ci/windows.ps1
exit $LastExitCode
'''
}
}
}
post {
always {
junit testResults: 'bundles/junit-report-*.xml', allowEmptyResults: true
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE', message: 'Failed to create bundles.tar.gz') {
powershell '''
cd $env:WORKSPACE
$bundleName="windowsRS5-integration"
Write-Host -ForegroundColor Green "Creating ${bundleName}-bundles.zip"
# archiveArtifacts does not support env-vars to , so save the artifacts in a fixed location
Compress-Archive -Path "bundles/CIDUT.out", "bundles/CIDUT.err", "bundles/junit-report-*.xml" -CompressionLevel Optimal -DestinationPath "${bundleName}-bundles.zip"
'''
archiveArtifacts artifacts: '*-bundles.zip', allowEmptyArchive: true
}
}
cleanup {
sh 'make clean'
deleteDir()
}
}
}
}
}
}

View File

@@ -28,13 +28,14 @@
"anusha",
"coolljt0725",
"cpuguy83",
"crosbymichael",
"estesp",
"johnstep",
"justincormack",
"kolyshkin",
"lowenna",
"mhbauer",
"runcom",
"samuelkarp",
"stevvooe",
"thajeztah",
"tianon",
@@ -61,15 +62,11 @@
people = [
"alexellis",
"andrewhsu",
"corhere",
"fntlnz",
"gianarb",
"ndeloof",
"neersighted",
"olljanat",
"programmerq",
"ripcurld",
"rumpl",
"samwhited",
"thajeztah"
]
@@ -106,17 +103,6 @@
# and tweets as @calavera.
"calavera",
# Michael Crosby was "chief maintainer" of the Docker project.
# During his time as a maintainer, Michael contributed to many
# milestones of the project; he was release captain of Docker v1.0.0,
# started the development of "libcontainer" (what later became runc)
# and containerd, as well as demoing cool hacks such as live migrating
# a game server container with checkpoint/restore.
#
# Michael is currently a maintainer of containerd, but you may see
# him around in other projects on GitHub.
"crosbymichael",
# Before becoming a maintainer, Daniel Nephin was a core contributor
# to "Fig" (now known as Docker Compose). As a maintainer for both the
# Engine and Docker CLI, Daniel contributed many features, among which
@@ -180,16 +166,6 @@
# check out her open source projects on GitHub https://github.com/jessfraz (a must-try).
"jessfraz",
# As a maintainer, John Howard managed to make the impossible possible;
# to run Docker on Windows. After facing many challenges, teaching
# fellow-maintainers that 'Windows is not Linux', and many changes in
# Windows Server to facilitate containers, native Windows containers
# saw the light of day in 2015.
#
# John is now enjoying life without containers: playing piano, painting,
# and walking his dogs, but you may occasionally see him drop by on GitHub.
"lowenna",
# Alexander Morozov contributed many features to Docker, worked on the premise of
# what later became containerd (and worked on that too), and made a "stupid" Go
# vendor tool specifically for docker/docker needs: vndr (https://github.com/LK4D4/vndr).
@@ -317,11 +293,6 @@
Email = "leijitang@huawei.com"
GitHub = "coolljt0725"
[people.corhere]
Name = "Cory Snider"
Email = "csnider@mirantis.com"
GitHub = "corhere"
[people.cpuguy83]
Name = "Brian Goff"
Email = "cpuguy83@gmail.com"
@@ -432,16 +403,6 @@
Email = "mrjana@docker.com"
GitHub = "mrjana"
[people.ndeloof]
Name = "Nicolas De Loof"
Email = "nicolas.deloof@gmail.com"
GitHub = "ndeloof"
[people.neersighted]
Name = "Bjorn Neergaard"
Email = "bneergaard@mirantis.com"
GitHub = "neersighted"
[people.olljanat]
Name = "Olli Janatuinen"
Email = "olli.janatuinen@gmail.com"
@@ -457,21 +418,11 @@
Email = "ripcurld.github@gmail.com"
GitHub = "ripcurld"
[people.rumpl]
Name = "Djordje Lukic"
Email = "djordje.lukic@docker.com"
GitHub = "rumpl"
[people.runcom]
Name = "Antonio Murdaca"
Email = "runcom@redhat.com"
GitHub = "runcom"
[people.samuelkarp]
Name = "Samuel Karp"
Email = "me@samuelkarp.com"
GitHub = "samuelkarp"
[people.samwhited]
Name = "Sam Whited"
Email = "sam@samwhited.com"

108
Makefile
View File

@@ -1,7 +1,19 @@
.PHONY: all binary dynbinary build cross help install manpages run shell test test-docker-py test-integration test-unit validate validate-% win
.PHONY: all binary dynbinary build cross help install manpages run shell test test-docker-py test-integration test-unit validate win
ifdef USE_BUILDX
BUILDX ?= $(shell command -v buildx)
BUILDX ?= $(shell command -v docker-buildx)
DOCKER_BUILDX_CLI_PLUGIN_PATH ?= ~/.docker/cli-plugins/docker-buildx
BUILDX ?= $(shell if [ -x "$(DOCKER_BUILDX_CLI_PLUGIN_PATH)" ]; then echo $(DOCKER_BUILDX_CLI_PLUGIN_PATH); fi)
endif
ifndef USE_BUILDX
DOCKER_BUILDKIT := 1
export DOCKER_BUILDKIT
endif
BUILDX ?= bundles/buildx
DOCKER ?= docker
BUILDX ?= $(DOCKER) buildx
# set the graph driver as the current graphdriver if not set
DOCKER_GRAPHDRIVER := $(if $(DOCKER_GRAPHDRIVER),$(DOCKER_GRAPHDRIVER),$(shell docker info 2>&1 | grep "Storage Driver" | sed 's/.*: //'))
@@ -54,13 +66,10 @@ DOCKER_ENVS := \
-e DOCKER_TEST_HOST \
-e DOCKER_USERLANDPROXY \
-e DOCKERD_ARGS \
-e DELVE_PORT \
-e GITHUB_ACTIONS \
-e TEST_FORCE_VALIDATE \
-e TEST_INTEGRATION_DIR \
-e TEST_SKIP_INTEGRATION \
-e TEST_SKIP_INTEGRATION_CLI \
-e TESTCOVERAGE \
-e TESTDEBUG \
-e TESTDIRS \
-e TESTFLAGS \
@@ -71,11 +80,16 @@ DOCKER_ENVS := \
-e VALIDATE_REPO \
-e VALIDATE_BRANCH \
-e VALIDATE_ORIGIN_BRANCH \
-e HTTP_PROXY \
-e HTTPS_PROXY \
-e NO_PROXY \
-e http_proxy \
-e https_proxy \
-e no_proxy \
-e VERSION \
-e PLATFORM \
-e DEFAULT_PRODUCT_LICENSE \
-e PRODUCT \
-e PACKAGER_NAME
-e PRODUCT
# note: we _cannot_ add "-e DOCKER_BUILDTAGS" here because even if it's unset in the shell, that would shadow the "ENV DOCKER_BUILDTAGS" set in our Dockerfile, which is very important for our official builds
# to allow `make BIND_DIR=. shell` or `make BIND_DIR= test`
@@ -93,7 +107,7 @@ DOCKER_MOUNT := $(if $(DOCKER_BINDDIR_MOUNT_OPTS),$(DOCKER_MOUNT):$(DOCKER_BINDD
# Note that `BIND_DIR` will already be set to `bundles` if `DOCKER_HOST` is not set (see above BIND_DIR line), in such case this will do nothing since `DOCKER_MOUNT` will already be set.
DOCKER_MOUNT := $(if $(DOCKER_MOUNT),$(DOCKER_MOUNT),-v /go/src/github.com/docker/docker/bundles) -v "$(CURDIR)/.git:/go/src/github.com/docker/docker/.git"
DOCKER_MOUNT_CACHE := -v docker-dev-cache:/root/.cache -v docker-mod-cache:/go/pkg/mod/
DOCKER_MOUNT_CACHE := -v docker-dev-cache:/root/.cache
DOCKER_MOUNT_CLI := $(if $(DOCKER_CLI_PATH),-v $(shell dirname $(DOCKER_CLI_PATH)):/usr/local/cli,)
DOCKER_MOUNT_BASH_COMPLETION := $(if $(DOCKER_BASH_COMPLETION_PATH),-v $(shell dirname $(DOCKER_BASH_COMPLETION_PATH)):/usr/local/completion/bash,)
DOCKER_MOUNT := $(DOCKER_MOUNT) $(DOCKER_MOUNT_CACHE) $(DOCKER_MOUNT_CLI) $(DOCKER_MOUNT_BASH_COMPLETION)
@@ -102,11 +116,12 @@ endif # ifndef DOCKER_MOUNT
# This allows to set the docker-dev container name
DOCKER_CONTAINER_NAME := $(if $(CONTAINER_NAME),--name $(CONTAINER_NAME),)
DOCKER_IMAGE := docker-dev
GIT_BRANCH := $(shell git rev-parse --abbrev-ref HEAD 2>/dev/null)
GIT_BRANCH_CLEAN := $(shell echo $(GIT_BRANCH) | sed -e "s/[^[:alnum:]]/-/g")
DOCKER_IMAGE := docker-dev$(if $(GIT_BRANCH_CLEAN),:$(GIT_BRANCH_CLEAN))
DOCKER_PORT_FORWARD := $(if $(DOCKER_PORT),-p "$(DOCKER_PORT)",)
DELVE_PORT_FORWARD := $(if $(DELVE_PORT),-p "$(DELVE_PORT)",)
DOCKER_FLAGS := $(DOCKER) run --rm --privileged $(DOCKER_CONTAINER_NAME) $(DOCKER_ENVS) $(DOCKER_MOUNT) $(DOCKER_PORT_FORWARD) $(DELVE_PORT_FORWARD)
DOCKER_FLAGS := $(DOCKER) run --rm -i --privileged $(DOCKER_CONTAINER_NAME) $(DOCKER_ENVS) $(DOCKER_MOUNT) $(DOCKER_PORT_FORWARD)
BUILD_APT_MIRROR := $(if $(DOCKER_BUILD_APT_MIRROR),--build-arg APT_MIRROR=$(DOCKER_BUILD_APT_MIRROR))
export BUILD_APT_MIRROR
@@ -125,14 +140,6 @@ ifeq ($(INTERACTIVE), 1)
DOCKER_FLAGS += -t
endif
# on GitHub Runners input device is not a TTY but we allocate a pseudo-one,
# otherwise keep STDIN open even if not attached if not a GitHub Runner.
ifeq ($(GITHUB_ACTIONS),true)
DOCKER_FLAGS += -t
else
DOCKER_FLAGS += -i
endif
DOCKER_RUN_DOCKER := $(DOCKER_FLAGS) "$(DOCKER_IMAGE)"
DOCKER_BUILD_ARGS += --build-arg=GO_VERSION
@@ -141,7 +148,12 @@ DOCKER_BUILD_ARGS += --build-arg=SYSTEMD=true
endif
BUILD_OPTS := ${BUILD_APT_MIRROR} ${DOCKER_BUILD_ARGS} ${DOCKER_BUILD_OPTS} -f "$(DOCKERFILE)"
ifdef USE_BUILDX
BUILD_OPTS += $(BUILDX_BUILD_EXTRA_OPTS)
BUILD_CMD := $(BUILDX) build
else
BUILD_CMD := $(DOCKER) build
endif
# This is used for the legacy "build" target and anything still depending on it
BUILD_CROSS =
@@ -152,21 +164,28 @@ ifdef DOCKER_CROSSPLATFORMS
BUILD_CROSS = --build-arg CROSS=true
endif
VERSION_AUTOGEN_ARGS = --build-arg VERSION --build-arg DOCKER_GITCOMMIT --build-arg PRODUCT --build-arg PLATFORM --build-arg DEFAULT_PRODUCT_LICENSE --build-arg PACKAGER_NAME
VERSION_AUTOGEN_ARGS = --build-arg VERSION --build-arg DOCKER_GITCOMMIT --build-arg PRODUCT --build-arg PLATFORM --build-arg DEFAULT_PRODUCT_LICENSE
default: binary
all: build ## validate all checks, build linux binaries, run all tests,\ncross build non-linux binaries, and generate archives
$(DOCKER_RUN_DOCKER) bash -c 'hack/validate/default && hack/make.sh'
binary: bundles ## build statically linked linux binaries
# This is only used to work around read-only bind mounts of the source code into
# binary build targets. We end up mounting a tmpfs over autogen which allows us
# to write build-time generated assets even though the source is mounted read-only
# ...But in order to do so, this dir needs to already exist.
autogen:
mkdir -p autogen
binary: buildx autogen ## build statically linked linux binaries
$(BUILD_CMD) $(BUILD_OPTS) --output=bundles/ --target=$@ $(VERSION_AUTOGEN_ARGS) .
dynbinary: bundles ## build dynamically linked linux binaries
dynbinary: buildx autogen ## build dynamically linked linux binaries
$(BUILD_CMD) $(BUILD_OPTS) --output=bundles/ --target=$@ $(VERSION_AUTOGEN_ARGS) .
cross: BUILD_OPTS += --build-arg CROSS=true --build-arg DOCKER_CROSSPLATFORMS
cross: bundles ## cross build the binaries for darwin, freebsd and\nwindows
cross: buildx autogen ## cross build the binaries for darwin, freebsd and\nwindows
$(BUILD_CMD) $(BUILD_OPTS) --output=bundles/ --target=$@ $(VERSION_AUTOGEN_ARGS) .
bundles:
@@ -176,8 +195,8 @@ bundles:
clean: clean-cache
.PHONY: clean-cache
clean-cache: ## remove the docker volumes that are used for caching in the dev-container
docker volume rm -f docker-dev-cache docker-mod-cache
clean-cache:
docker volume rm -f docker-dev-cache
help: ## this help
@awk 'BEGIN {FS = ":.*?## "} /^[a-zA-Z0-9_-]+:.*?## / {gsub("\\\\n",sprintf("\n%22c",""), $$2);printf "\033[36m%-20s\033[0m %s\n", $$1, $$2}' $(MAKEFILE_LIST)
@@ -194,8 +213,11 @@ build: shell_target := --target=dev
else
build: shell_target := --target=final
endif
build: bundles
$(BUILD_CMD) $(BUILD_OPTS) $(shell_target) --load $(BUILD_CROSS) -t "$(DOCKER_IMAGE)" .
ifdef USE_BUILDX
build: buildx_load := --load
endif
build: buildx
$(BUILD_CMD) $(BUILD_OPTS) $(shell_target) $(buildx_load) $(BUILD_CROSS) -t "$(DOCKER_IMAGE)" .
shell: build ## start a shell inside the build env
$(DOCKER_RUN_DOCKER) bash
@@ -225,9 +247,6 @@ test-unit: build ## run the unit tests
validate: build ## validate DCO, Seccomp profile generation, gofmt,\n./pkg/ isolation, golint, tests, tomls, go vet and vendor
$(DOCKER_RUN_DOCKER) hack/validate/all
validate-%: build ## validate specific check
$(DOCKER_RUN_DOCKER) hack/validate/$*
win: build ## cross build the binary for windows
$(DOCKER_RUN_DOCKER) DOCKER_CROSSPLATFORMS=windows/amd64 hack/make.sh cross
@@ -245,4 +264,31 @@ swagger-docs: ## preview the API documentation
@docker run --rm -v $(PWD)/api/swagger.yaml:/usr/share/nginx/html/swagger.yaml \
-e 'REDOC_OPTIONS=hide-hostname="true" lazy-rendering' \
-p $(SWAGGER_DOCS_PORT):80 \
bfirsh/redoc:1.14.0
bfirsh/redoc:1.6.2
.PHONY: buildx
ifdef USE_BUILDX
ifeq ($(BUILDX), bundles/buildx)
buildx: bundles/buildx ## build buildx cli tool
endif
endif
# This intentionally is not using the `--output` flag from the docker CLI, which
# is a buildkit option. The idea here being that if buildx is being used, it's
# because buildkit is not supported natively
bundles/buildx: bundles ## build buildx CLI tool
docker build -f $${BUILDX_DOCKERFILE:-Dockerfile.buildx} -t "moby-buildx:$${BUILDX_COMMIT:-latest}" \
--build-arg BUILDX_COMMIT \
--build-arg BUILDX_REPO \
--build-arg GOOS=$$(if [ -n "$(GOOS)" ]; then echo $(GOOS); else go env GOHOSTOS || uname | awk '{print tolower($$0)}' || true; fi) \
--build-arg GOARCH=$$(if [ -n "$(GOARCH)" ]; then echo $(GOARCH); else go env GOHOSTARCH || true; fi) \
.
id=$$(docker create moby-buildx:$${BUILDX_COMMIT:-latest}); \
if [ -n "$${id}" ]; then \
docker cp $${id}:/usr/bin/buildx $@ \
&& touch $@; \
docker rm -f $${id}; \
fi
$@ version

View File

@@ -3,7 +3,7 @@ package api // import "github.com/docker/docker/api"
// Common constants for daemon and client.
const (
// DefaultVersion of Current REST API
DefaultVersion = "1.42"
DefaultVersion = "1.41"
// NoBaseImageSpecifier is the symbol used by the FROM
// command to specify that no base image is to be used.

View File

@@ -1,4 +1,3 @@
//go:build !windows
// +build !windows
package api // import "github.com/docker/docker/api"

View File

@@ -1,34 +0,0 @@
package server
import (
"net/http"
"github.com/docker/docker/api/server/httpstatus"
"github.com/docker/docker/api/server/httputils"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/versions"
"github.com/gorilla/mux"
"google.golang.org/grpc/status"
)
// makeErrorHandler makes an HTTP handler that decodes a Docker error and
// returns it in the response.
func makeErrorHandler(err error) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
statusCode := httpstatus.FromError(err)
vars := mux.Vars(r)
if apiVersionSupportsJSONErrors(vars["version"]) {
response := &types.ErrorResponse{
Message: err.Error(),
}
_ = httputils.WriteJSON(w, statusCode, response)
} else {
http.Error(w, status.Convert(err).Message(), statusCode)
}
}
}
func apiVersionSupportsJSONErrors(version string) bool {
const firstAPIVersionWithJSONErrors = "1.23"
return version == "" || versions.GreaterThan(version, firstAPIVersionWithJSONErrors)
}

View File

@@ -1,150 +0,0 @@
package httpstatus // import "github.com/docker/docker/api/server/httpstatus"
import (
"fmt"
"net/http"
containerderrors "github.com/containerd/containerd/errdefs"
"github.com/docker/distribution/registry/api/errcode"
"github.com/docker/docker/errdefs"
"github.com/sirupsen/logrus"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"
)
type causer interface {
Cause() error
}
// FromError retrieves status code from error message.
func FromError(err error) int {
if err == nil {
logrus.WithFields(logrus.Fields{"error": err}).Error("unexpected HTTP error handling")
return http.StatusInternalServerError
}
var statusCode int
// Stop right there
// Are you sure you should be adding a new error class here? Do one of the existing ones work?
// Note that the below functions are already checking the error causal chain for matches.
switch {
case errdefs.IsNotFound(err):
statusCode = http.StatusNotFound
case errdefs.IsInvalidParameter(err):
statusCode = http.StatusBadRequest
case errdefs.IsConflict(err):
statusCode = http.StatusConflict
case errdefs.IsUnauthorized(err):
statusCode = http.StatusUnauthorized
case errdefs.IsUnavailable(err):
statusCode = http.StatusServiceUnavailable
case errdefs.IsForbidden(err):
statusCode = http.StatusForbidden
case errdefs.IsNotModified(err):
statusCode = http.StatusNotModified
case errdefs.IsNotImplemented(err):
statusCode = http.StatusNotImplemented
case errdefs.IsSystem(err) || errdefs.IsUnknown(err) || errdefs.IsDataLoss(err) || errdefs.IsDeadline(err) || errdefs.IsCancelled(err):
statusCode = http.StatusInternalServerError
default:
statusCode = statusCodeFromGRPCError(err)
if statusCode != http.StatusInternalServerError {
return statusCode
}
statusCode = statusCodeFromContainerdError(err)
if statusCode != http.StatusInternalServerError {
return statusCode
}
statusCode = statusCodeFromDistributionError(err)
if statusCode != http.StatusInternalServerError {
return statusCode
}
if e, ok := err.(causer); ok {
return FromError(e.Cause())
}
logrus.WithFields(logrus.Fields{
"module": "api",
"error_type": fmt.Sprintf("%T", err),
}).Debugf("FIXME: Got an API for which error does not match any expected type!!!: %+v", err)
}
if statusCode == 0 {
statusCode = http.StatusInternalServerError
}
return statusCode
}
// statusCodeFromGRPCError returns status code according to gRPC error
func statusCodeFromGRPCError(err error) int {
switch status.Code(err) {
case codes.InvalidArgument: // code 3
return http.StatusBadRequest
case codes.NotFound: // code 5
return http.StatusNotFound
case codes.AlreadyExists: // code 6
return http.StatusConflict
case codes.PermissionDenied: // code 7
return http.StatusForbidden
case codes.FailedPrecondition: // code 9
return http.StatusBadRequest
case codes.Unauthenticated: // code 16
return http.StatusUnauthorized
case codes.OutOfRange: // code 11
return http.StatusBadRequest
case codes.Unimplemented: // code 12
return http.StatusNotImplemented
case codes.Unavailable: // code 14
return http.StatusServiceUnavailable
default:
// codes.Canceled(1)
// codes.Unknown(2)
// codes.DeadlineExceeded(4)
// codes.ResourceExhausted(8)
// codes.Aborted(10)
// codes.Internal(13)
// codes.DataLoss(15)
return http.StatusInternalServerError
}
}
// statusCodeFromDistributionError returns status code according to registry errcode
// code is loosely based on errcode.ServeJSON() in docker/distribution
func statusCodeFromDistributionError(err error) int {
switch errs := err.(type) {
case errcode.Errors:
if len(errs) < 1 {
return http.StatusInternalServerError
}
if _, ok := errs[0].(errcode.ErrorCoder); ok {
return statusCodeFromDistributionError(errs[0])
}
case errcode.ErrorCoder:
return errs.ErrorCode().Descriptor().HTTPStatusCode
}
return http.StatusInternalServerError
}
// statusCodeFromContainerdError returns status code for containerd errors when
// consumed directly (not through gRPC)
func statusCodeFromContainerdError(err error) int {
switch {
case containerderrors.IsInvalidArgument(err):
return http.StatusBadRequest
case containerderrors.IsNotFound(err):
return http.StatusNotFound
case containerderrors.IsAlreadyExists(err):
return http.StatusConflict
case containerderrors.IsFailedPrecondition(err):
return http.StatusPreconditionFailed
case containerderrors.IsUnavailable(err):
return http.StatusServiceUnavailable
case containerderrors.IsNotImplemented(err):
return http.StatusNotImplemented
default:
return http.StatusInternalServerError
}
}

View File

@@ -0,0 +1,9 @@
package httputils // import "github.com/docker/docker/api/server/httputils"
import "github.com/docker/docker/errdefs"
// GetHTTPErrorStatusCode retrieves status code from error message.
//
// Deprecated: use errdefs.GetHTTPErrorStatusCode
func GetHTTPErrorStatusCode(err error) int {
return errdefs.GetHTTPErrorStatusCode(err)
}

View File

@@ -2,14 +2,18 @@ package httputils // import "github.com/docker/docker/api/server/httputils"
import (
"context"
"encoding/json"
"io"
"mime"
"net/http"
"strings"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/versions"
"github.com/docker/docker/errdefs"
"github.com/gorilla/mux"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"google.golang.org/grpc/status"
)
// APIVersionKey is the client's requested API version.
@@ -49,50 +53,17 @@ func CheckForJSON(r *http.Request) error {
ct := r.Header.Get("Content-Type")
// No Content-Type header is ok as long as there's no Body
if ct == "" && (r.Body == nil || r.ContentLength == 0) {
return nil
if ct == "" {
if r.Body == nil || r.ContentLength == 0 {
return nil
}
}
// Otherwise it better be json
return matchesContentType(ct, "application/json")
}
// ReadJSON validates the request to have the correct content-type, and decodes
// the request's Body into out.
func ReadJSON(r *http.Request, out interface{}) error {
err := CheckForJSON(r)
if err != nil {
return err
}
if r.Body == nil || r.ContentLength == 0 {
// an empty body is not invalid, so don't return an error; see
// https://lists.w3.org/Archives/Public/ietf-http-wg/2010JulSep/0272.html
if matchesContentType(ct, "application/json") {
return nil
}
dec := json.NewDecoder(r.Body)
err = dec.Decode(out)
defer r.Body.Close()
if err != nil {
if err == io.EOF {
return errdefs.InvalidParameter(errors.New("invalid JSON: got EOF while reading request body"))
}
return errdefs.InvalidParameter(errors.Wrap(err, "invalid JSON"))
}
if dec.More() {
return errdefs.InvalidParameter(errors.New("unexpected content after JSON"))
}
return nil
}
// WriteJSON writes the value v to the http response stream as json with standard json encoding.
func WriteJSON(w http.ResponseWriter, code int, v interface{}) error {
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(code)
enc := json.NewEncoder(w)
enc.SetEscapeHTML(false)
return enc.Encode(v)
return errdefs.InvalidParameter(errors.Errorf("Content-Type specified (%s) must be 'application/json'", ct))
}
// ParseForm ensures the request form is parsed even with invalid content types.
@@ -121,14 +92,33 @@ func VersionFromContext(ctx context.Context) string {
return ""
}
// MakeErrorHandler makes an HTTP handler that decodes a Docker error and
// returns it in the response.
func MakeErrorHandler(err error) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
statusCode := errdefs.GetHTTPErrorStatusCode(err)
vars := mux.Vars(r)
if apiVersionSupportsJSONErrors(vars["version"]) {
response := &types.ErrorResponse{
Message: err.Error(),
}
_ = WriteJSON(w, statusCode, response)
} else {
http.Error(w, status.Convert(err).Message(), statusCode)
}
}
}
func apiVersionSupportsJSONErrors(version string) bool {
const firstAPIVersionWithJSONErrors = "1.23"
return version == "" || versions.GreaterThan(version, firstAPIVersionWithJSONErrors)
}
// matchesContentType validates the content type against the expected one
func matchesContentType(contentType, expectedType string) error {
func matchesContentType(contentType, expectedType string) bool {
mimetype, _, err := mime.ParseMediaType(contentType)
if err != nil {
return errdefs.InvalidParameter(errors.Wrapf(err, "malformed Content-Type header (%s)", contentType))
logrus.Errorf("Error parsing media type: %s error: %v", contentType, err)
}
if mimetype != expectedType {
return errdefs.InvalidParameter(errors.Errorf("unsupported Content-Type header (%s): must be '%s'", contentType, expectedType))
}
return nil
return err == nil && mimetype == expectedType
}

View File

@@ -1,130 +1,18 @@
package httputils // import "github.com/docker/docker/api/server/httputils"
import (
"net/http"
"strings"
"testing"
)
import "testing"
// matchesContentType
func TestJsonContentType(t *testing.T) {
err := matchesContentType("application/json", "application/json")
if err != nil {
t.Error(err)
if !matchesContentType("application/json", "application/json") {
t.Fail()
}
err = matchesContentType("application/json; charset=utf-8", "application/json")
if err != nil {
t.Error(err)
if !matchesContentType("application/json; charset=utf-8", "application/json") {
t.Fail()
}
expected := "unsupported Content-Type header (dockerapplication/json): must be 'application/json'"
err = matchesContentType("dockerapplication/json", "application/json")
if err == nil || err.Error() != expected {
t.Errorf(`expected "%s", got "%v"`, expected, err)
}
expected = "malformed Content-Type header (foo;;;bar): mime: invalid media parameter"
err = matchesContentType("foo;;;bar", "application/json")
if err == nil || err.Error() != expected {
t.Errorf(`expected "%s", got "%v"`, expected, err)
if matchesContentType("dockerapplication/json", "application/json") {
t.Fail()
}
}
func TestReadJSON(t *testing.T) {
t.Run("nil body", func(t *testing.T) {
req, err := http.NewRequest("POST", "https://example.com/some/path", nil)
if err != nil {
t.Error(err)
}
foo := struct{}{}
err = ReadJSON(req, &foo)
if err != nil {
t.Error(err)
}
})
t.Run("empty body", func(t *testing.T) {
req, err := http.NewRequest("POST", "https://example.com/some/path", strings.NewReader(""))
if err != nil {
t.Error(err)
}
foo := struct{ SomeField string }{}
err = ReadJSON(req, &foo)
if err != nil {
t.Error(err)
}
if foo.SomeField != "" {
t.Errorf("expected: '', got: %s", foo.SomeField)
}
})
t.Run("with valid request", func(t *testing.T) {
req, err := http.NewRequest("POST", "https://example.com/some/path", strings.NewReader(`{"SomeField":"some value"}`))
if err != nil {
t.Error(err)
}
req.Header.Set("Content-Type", "application/json")
foo := struct{ SomeField string }{}
err = ReadJSON(req, &foo)
if err != nil {
t.Error(err)
}
if foo.SomeField != "some value" {
t.Errorf("expected: 'some value', got: %s", foo.SomeField)
}
})
t.Run("with whitespace", func(t *testing.T) {
req, err := http.NewRequest("POST", "https://example.com/some/path", strings.NewReader(`
{"SomeField":"some value"}
`))
if err != nil {
t.Error(err)
}
req.Header.Set("Content-Type", "application/json")
foo := struct{ SomeField string }{}
err = ReadJSON(req, &foo)
if err != nil {
t.Error(err)
}
if foo.SomeField != "some value" {
t.Errorf("expected: 'some value', got: %s", foo.SomeField)
}
})
t.Run("with extra content", func(t *testing.T) {
req, err := http.NewRequest("POST", "https://example.com/some/path", strings.NewReader(`{"SomeField":"some value"} and more content`))
if err != nil {
t.Error(err)
}
req.Header.Set("Content-Type", "application/json")
foo := struct{ SomeField string }{}
err = ReadJSON(req, &foo)
if err == nil {
t.Error("expected an error, got none")
}
expected := "unexpected content after JSON"
if err.Error() != expected {
t.Errorf("expected: '%s', got: %s", expected, err.Error())
}
})
t.Run("invalid JSON", func(t *testing.T) {
req, err := http.NewRequest("POST", "https://example.com/some/path", strings.NewReader(`{invalid json`))
if err != nil {
t.Error(err)
}
req.Header.Set("Content-Type", "application/json")
foo := struct{ SomeField string }{}
err = ReadJSON(req, &foo)
if err == nil {
t.Error("expected an error, got none")
}
expected := "invalid JSON: invalid character 'i' looking for beginning of object key string"
if err.Error() != expected {
t.Errorf("expected: '%s', got: %s", expected, err.Error())
}
})
}

View File

@@ -0,0 +1,15 @@
package httputils // import "github.com/docker/docker/api/server/httputils"
import (
"encoding/json"
"net/http"
)
// WriteJSON writes the value v to the http response stream as json with standard json encoding.
func WriteJSON(w http.ResponseWriter, code int, v interface{}) error {
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(code)
enc := json.NewEncoder(w)
enc.SetEscapeHTML(false)
return enc.Encode(v)
}

View File

@@ -16,7 +16,7 @@ func (s *Server) handlerWithGlobalMiddlewares(handler httputils.APIFunc) httputi
next = m.WrapHandler(next)
}
if logrus.GetLevel() == logrus.DebugLevel {
if s.cfg.Logging && logrus.GetLevel() == logrus.DebugLevel {
next = middleware.DebugRequestMiddleware(next)
}

View File

@@ -61,4 +61,5 @@ func (v VersionMiddleware) WrapHandler(handler func(ctx context.Context, w http.
ctx = context.WithValue(ctx, httputils.APIVersionKey{}, apiVersion)
return handler(ctx, w, r, vars)
}
}

View File

@@ -1,8 +1,6 @@
package build // import "github.com/docker/docker/api/server/router/build"
import (
"runtime"
"github.com/docker/docker/api/server/router"
"github.com/docker/docker/api/types"
)
@@ -39,24 +37,17 @@ func (r *buildRouter) initRoutes() {
}
}
// BuilderVersion derives the default docker builder version from the config.
//
// The default on Linux is version "2" (BuildKit), but the daemon can be
// configured to recommend version "1" (classic Builder). Windows does not
// yet support BuildKit for native Windows images, and uses "1" (classic builder)
// as a default.
//
// This value is only a recommendation as advertised by the daemon, and it is
// up to the client to choose which builder to use.
// BuilderVersion derives the default docker builder version from the config
// Note: it is valid to have BuilderVersion unset which means it is up to the
// client to choose which builder to use.
func BuilderVersion(features map[string]bool) types.BuilderVersion {
// TODO(thaJeztah) move the default to daemon/config
if runtime.GOOS == "windows" {
return types.BuilderV1
}
bv := types.BuilderBuildKit
if v, ok := features["buildkit"]; ok && !v {
bv = types.BuilderV1
var bv types.BuilderVersion
if v, ok := features["buildkit"]; ok {
if v {
bv = types.BuilderBuildKit
} else {
bv = types.BuilderV1
}
}
return bv
}

View File

@@ -238,6 +238,7 @@ func (br *buildRouter) postBuild(ctx context.Context, w http.ResponseWriter, r *
defer func() { _ = output.Close() }()
errf := func(err error) error {
if httputils.BoolValue(r, "q") && notVerboseBuffer.Len() > 0 {
_, _ = output.Write(notVerboseBuffer.Bytes())
}

View File

@@ -2,6 +2,7 @@ package checkpoint // import "github.com/docker/docker/api/server/router/checkpo
import (
"context"
"encoding/json"
"net/http"
"github.com/docker/docker/api/server/httputils"
@@ -14,7 +15,9 @@ func (s *checkpointRouter) postContainerCheckpoint(ctx context.Context, w http.R
}
var options types.CheckpointCreateOptions
if err := httputils.ReadJSON(r, &options); err != nil {
decoder := json.NewDecoder(r.Body)
if err := decoder.Decode(&options); err != nil {
return err
}

View File

@@ -17,7 +17,7 @@ type execBackend interface {
ContainerExecCreate(name string, config *types.ExecConfig) (string, error)
ContainerExecInspect(id string) (*backend.ExecInspect, error)
ContainerExecResize(name string, height, width int) error
ContainerExecStart(ctx context.Context, name string, options container.ExecStartOptions) error
ContainerExecStart(ctx context.Context, name string, stdin io.Reader, stdout io.Writer, stderr io.Writer) error
ExecExists(name string) (bool, error)
}
@@ -32,15 +32,15 @@ type copyBackend interface {
// stateBackend includes functions to implement to provide container state lifecycle functionality.
type stateBackend interface {
ContainerCreate(config types.ContainerCreateConfig) (container.CreateResponse, error)
ContainerKill(name string, signal string) error
ContainerCreate(config types.ContainerCreateConfig) (container.ContainerCreateCreatedBody, error)
ContainerKill(name string, sig uint64) error
ContainerPause(name string) error
ContainerRename(oldName, newName string) error
ContainerResize(name string, height, width int) error
ContainerRestart(ctx context.Context, name string, options container.StopOptions) error
ContainerRestart(name string, seconds *int) error
ContainerRm(name string, config *types.ContainerRmConfig) error
ContainerStart(name string, hostConfig *container.HostConfig, checkpoint string, checkpointDir string) error
ContainerStop(ctx context.Context, name string, options container.StopOptions) error
ContainerStop(name string, seconds *int) error
ContainerUnpause(name string) error
ContainerUpdate(name string, hostConfig *container.HostConfig) (container.ContainerUpdateOKBody, error)
ContainerWait(ctx context.Context, name string, condition containerpkg.WaitCondition) (<-chan containerpkg.StateStatus, error)

View File

@@ -56,7 +56,7 @@ func (r *containerRouter) initRoutes() {
router.NewPostRoute("/containers/{name:.*}/wait", r.postContainersWait),
router.NewPostRoute("/containers/{name:.*}/resize", r.postContainersResize),
router.NewPostRoute("/containers/{name:.*}/attach", r.postContainersAttach),
router.NewPostRoute("/containers/{name:.*}/copy", r.postContainersCopy), // Deprecated since 1.8 (API v1.20), errors out since 1.12 (API v1.24)
router.NewPostRoute("/containers/{name:.*}/copy", r.postContainersCopy), // Deprecated since 1.8, Errors out since 1.12
router.NewPostRoute("/containers/{name:.*}/exec", r.postContainerExecCreate),
router.NewPostRoute("/exec/{name:.*}/start", r.postContainerExecStart),
router.NewPostRoute("/exec/{name:.*}/resize", r.postContainerExecResize),

View File

@@ -6,21 +6,20 @@ import (
"fmt"
"io"
"net/http"
"runtime"
"strconv"
"syscall"
"github.com/containerd/containerd/platforms"
"github.com/docker/docker/api/server/httpstatus"
"github.com/docker/docker/api/server/httputils"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/backend"
"github.com/docker/docker/api/types/container"
"github.com/docker/docker/api/types/filters"
"github.com/docker/docker/api/types/mount"
"github.com/docker/docker/api/types/versions"
containerpkg "github.com/docker/docker/container"
"github.com/docker/docker/errdefs"
"github.com/docker/docker/pkg/ioutils"
"github.com/docker/docker/pkg/signal"
specs "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
@@ -155,12 +154,6 @@ func (s *containerRouter) getContainersLogs(ctx context.Context, w http.Response
return err
}
contentType := types.MediaTypeRawStream
if !tty && versions.GreaterThanOrEqualTo(httputils.VersionFromContext(ctx), "1.42") {
contentType = types.MediaTypeMultiplexedStream
}
w.Header().Set("Content-Type", contentType)
// if has a tty, we're not muxing streams. if it doesn't, we are. simple.
// this is the point of no return for writing a response. once we call
// WriteLogStream, the response has been started and errors will be
@@ -227,26 +220,20 @@ func (s *containerRouter) postContainersStop(ctx context.Context, w http.Respons
return err
}
var (
options container.StopOptions
version = httputils.VersionFromContext(ctx)
)
if versions.GreaterThanOrEqualTo(version, "1.42") {
options.Signal = r.Form.Get("signal")
}
var seconds *int
if tmpSeconds := r.Form.Get("t"); tmpSeconds != "" {
valSeconds, err := strconv.Atoi(tmpSeconds)
if err != nil {
return err
}
options.Timeout = &valSeconds
seconds = &valSeconds
}
if err := s.backend.ContainerStop(ctx, vars["name"], options); err != nil {
if err := s.backend.ContainerStop(vars["name"], seconds); err != nil {
return err
}
w.WriteHeader(http.StatusNoContent)
return nil
}
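Either way the handler reads the timeout from the t query parameter; only the 1.42+ side also honors signal. A hypothetical client call against this endpoint (daemon address and container name invented):

    package main

    import (
        "log"
        "net/http"
    )

    func main() {
        // "t" is the stop timeout in seconds; "signal" is only honored by
        // daemons serving API 1.42 or newer.
        url := "http://localhost:2375/v1.42/containers/web/stop?t=10&signal=SIGINT"
        resp, err := http.Post(url, "", nil)
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        log.Println(resp.Status) // expect "204 No Content" on success
    }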
@@ -255,8 +242,18 @@ func (s *containerRouter) postContainersKill(ctx context.Context, w http.Respons
return err
}
var sig syscall.Signal
name := vars["name"]
if err := s.backend.ContainerKill(name, r.Form.Get("signal")); err != nil {
// If we have a signal, look at it. Otherwise, do nothing
if sigStr := r.Form.Get("signal"); sigStr != "" {
var err error
if sig, err = signal.ParseSignal(sigStr); err != nil {
return errdefs.InvalidParameter(err)
}
}
if err := s.backend.ContainerKill(name, uint64(sig)); err != nil {
var isStopped bool
if errdefs.IsConflict(err) {
isStopped = true
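For reference, the parser used above accepts signal names with or without the SIG prefix as well as raw numbers; the sketch below assumes that behavior from the pkg/signal package shown in the imports:

    package main

    import (
        "fmt"

        "github.com/docker/docker/pkg/signal"
    )

    func main() {
        // All three spellings are assumed to resolve to SIGTERM (15 on Linux).
        for _, raw := range []string{"SIGTERM", "term", "15"} {
            sig, err := signal.ParseSignal(raw)
            if err != nil {
                fmt.Println(raw, "->", err)
                continue
            }
            fmt.Println(raw, "->", int(sig))
        }
    }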
@@ -280,26 +277,21 @@ func (s *containerRouter) postContainersRestart(ctx context.Context, w http.Resp
return err
}
var (
options container.StopOptions
version = httputils.VersionFromContext(ctx)
)
if versions.GreaterThanOrEqualTo(version, "1.42") {
options.Signal = r.Form.Get("signal")
}
var seconds *int
if tmpSeconds := r.Form.Get("t"); tmpSeconds != "" {
valSeconds, err := strconv.Atoi(tmpSeconds)
if err != nil {
return err
}
options.Timeout = &valSeconds
seconds = &valSeconds
}
if err := s.backend.ContainerRestart(ctx, vars["name"], options); err != nil {
if err := s.backend.ContainerRestart(vars["name"], seconds); err != nil {
return err
}
w.WriteHeader(http.StatusNoContent)
return nil
}
@@ -344,18 +336,12 @@ func (s *containerRouter) postContainersWait(ctx context.Context, w http.Respons
if err := httputils.ParseForm(r); err != nil {
return err
}
if v := r.Form.Get("condition"); v != "" {
switch container.WaitCondition(v) {
case container.WaitConditionNotRunning:
waitCondition = containerpkg.WaitConditionNotRunning
case container.WaitConditionNextExit:
waitCondition = containerpkg.WaitConditionNextExit
case container.WaitConditionRemoved:
waitCondition = containerpkg.WaitConditionRemoved
legacyRemovalWaitPre134 = versions.LessThan(version, "1.34")
default:
return errdefs.InvalidParameter(errors.Errorf("invalid condition: %q", v))
}
switch container.WaitCondition(r.Form.Get("condition")) {
case container.WaitConditionNextExit:
waitCondition = containerpkg.WaitConditionNextExit
case container.WaitConditionRemoved:
waitCondition = containerpkg.WaitConditionRemoved
legacyRemovalWaitPre134 = versions.LessThan(version, "1.34")
}
}
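The condition strings switched on above come from api/types/container; note the removed side rejected unknown values with InvalidParameter, while the 20.10 side silently falls back to the default. A reference sketch (not part of the diff):

    // Wait conditions recognized by the handler; "not-running" is the default
    // when the query parameter is empty or, on the 20.10 side, unrecognized.
    const (
        waitNotRunning = string(container.WaitConditionNotRunning) // "not-running"
        waitNextExit   = string(container.WaitConditionNextExit)   // "next-exit"
        waitRemoved    = string(container.WaitConditionRemoved)    // "removed"
    )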
@@ -385,12 +371,12 @@ func (s *containerRouter) postContainersWait(ctx context.Context, w http.Respons
return nil
}
var waitError *container.WaitExitError
var waitError *container.ContainerWaitOKBodyError
if status.Err() != nil {
waitError = &container.WaitExitError{Message: status.Err().Error()}
waitError = &container.ContainerWaitOKBodyError{Message: status.Err().Error()}
}
return json.NewEncoder(w).Encode(&container.WaitResponse{
return json.NewEncoder(w).Encode(&container.ContainerWaitOKBody{
StatusCode: int64(status.ExitCode()),
Error: waitError,
})
@@ -436,20 +422,19 @@ func (s *containerRouter) postContainerUpdate(ctx context.Context, w http.Respon
if err := httputils.ParseForm(r); err != nil {
return err
}
if err := httputils.CheckForJSON(r); err != nil {
return err
}
var updateConfig container.UpdateConfig
if err := httputils.ReadJSON(r, &updateConfig); err != nil {
decoder := json.NewDecoder(r.Body)
if err := decoder.Decode(&updateConfig); err != nil {
return err
}
if versions.LessThan(httputils.VersionFromContext(ctx), "1.40") {
updateConfig.PidsLimit = nil
}
if versions.GreaterThanOrEqualTo(httputils.VersionFromContext(ctx), "1.42") {
// Ignore KernelMemory removed in API 1.42.
updateConfig.KernelMemory = 0
}
if updateConfig.PidsLimit != nil && *updateConfig.PidsLimit <= 0 {
// Both `0` and `-1` are accepted to set "unlimited" when updating.
// Historically, any negative value was accepted, so treat them as
@@ -506,59 +491,16 @@ func (s *containerRouter) postContainersCreate(ctx context.Context, w http.Respo
// Older clients (API < 1.40) expect the default to be shareable; make them happy
if hostConfig.IpcMode.IsEmpty() {
hostConfig.IpcMode = container.IPCModeShareable
hostConfig.IpcMode = container.IpcMode("shareable")
}
}
if hostConfig != nil && versions.LessThan(version, "1.41") && !s.cgroup2 {
// Older clients expect the default to be "host" on cgroup v1 hosts
if hostConfig.CgroupnsMode.IsEmpty() {
hostConfig.CgroupnsMode = container.CgroupnsModeHost
hostConfig.CgroupnsMode = container.CgroupnsMode("host")
}
}
if hostConfig != nil && versions.LessThan(version, "1.42") {
for _, m := range hostConfig.Mounts {
// Ignore BindOptions.CreateMountpoint because it was added in API 1.42.
if bo := m.BindOptions; bo != nil {
bo.CreateMountpoint = false
}
// These combinations are invalid, but weren't validated in API < 1.42.
// We reset them here, so that validation doesn't produce an error.
if o := m.VolumeOptions; o != nil && m.Type != mount.TypeVolume {
m.VolumeOptions = nil
}
if o := m.TmpfsOptions; o != nil && m.Type != mount.TypeTmpfs {
m.TmpfsOptions = nil
}
if bo := m.BindOptions; bo != nil {
// Ignore BindOptions.CreateMountpoint because it was added in API 1.42.
bo.CreateMountpoint = false
}
}
}
if hostConfig != nil && versions.GreaterThanOrEqualTo(version, "1.42") {
// Ignore KernelMemory removed in API 1.42.
hostConfig.KernelMemory = 0
for _, m := range hostConfig.Mounts {
if o := m.VolumeOptions; o != nil && m.Type != mount.TypeVolume {
return errdefs.InvalidParameter(fmt.Errorf("VolumeOptions must not be specified on mount type %q", m.Type))
}
if o := m.BindOptions; o != nil && m.Type != mount.TypeBind {
return errdefs.InvalidParameter(fmt.Errorf("BindOptions must not be specified on mount type %q", m.Type))
}
if o := m.TmpfsOptions; o != nil && m.Type != mount.TypeTmpfs {
return errdefs.InvalidParameter(fmt.Errorf("TmpfsOptions must not be specified on mount type %q", m.Type))
}
}
}
if hostConfig != nil && runtime.GOOS == "linux" && versions.LessThan(version, "1.42") {
// ConsoleSize is not respected by the Linux daemon before API 1.42
hostConfig.ConsoleSize = [2]uint{0, 0}
}
var platform *specs.Platform
if versions.GreaterThanOrEqualTo(version, "1.41") {
if v := r.Form.Get("platform"); v != "" {
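The mount handling in this hunk splits by API version: invalid option combinations are silently reset for pre-1.42 clients and rejected as InvalidParameter from 1.42 on. A sketch of an offending mount (paths and helper name hypothetical):

    // invalidBindMount returns a bind mount carrying TmpfsOptions: API < 1.42
    // resets TmpfsOptions to nil, API >= 1.42 rejects the request.
    func invalidBindMount() mount.Mount {
        return mount.Mount{
            Type:         mount.TypeBind,
            Source:       "/src", // hypothetical host path
            Target:       "/dst", // hypothetical container path
            TmpfsOptions: &mount.TmpfsOptions{SizeBytes: 64 << 20},
        }
    }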
@@ -646,8 +588,7 @@ func (s *containerRouter) postContainersAttach(ctx context.Context, w http.Respo
return errdefs.InvalidParameter(errors.Errorf("error attaching to container %s, hijack connection missing", containerName))
}
contentType := types.MediaTypeRawStream
setupStreams := func(multiplexed bool) (io.ReadCloser, io.Writer, io.Writer, error) {
setupStreams := func() (io.ReadCloser, io.Writer, io.Writer, error) {
conn, _, err := hijacker.Hijack()
if err != nil {
return nil, nil, nil, err
@@ -657,10 +598,7 @@ func (s *containerRouter) postContainersAttach(ctx context.Context, w http.Respo
conn.Write([]byte{})
if upgrade {
if multiplexed && versions.GreaterThanOrEqualTo(httputils.VersionFromContext(ctx), "1.42") {
contentType = types.MediaTypeMultiplexedStream
}
fmt.Fprintf(conn, "HTTP/1.1 101 UPGRADED\r\nContent-Type: "+contentType+"\r\nConnection: Upgrade\r\nUpgrade: tcp\r\n\r\n")
fmt.Fprintf(conn, "HTTP/1.1 101 UPGRADED\r\nContent-Type: application/vnd.docker.raw-stream\r\nConnection: Upgrade\r\nUpgrade: tcp\r\n\r\n")
} else {
fmt.Fprintf(conn, "HTTP/1.1 200 OK\r\nContent-Type: application/vnd.docker.raw-stream\r\n\r\n")
}
@@ -684,16 +622,16 @@ func (s *containerRouter) postContainersAttach(ctx context.Context, w http.Respo
}
if err = s.backend.ContainerAttach(containerName, attachConfig); err != nil {
logrus.WithError(err).Errorf("Handler for %s %s returned error", r.Method, r.URL.Path)
logrus.Errorf("Handler for %s %s returned error: %v", r.Method, r.URL.Path, err)
// Remember to close the stream if an error happens
conn, _, errHijack := hijacker.Hijack()
if errHijack != nil {
logrus.WithError(err).Errorf("Handler for %s %s: unable to close stream; error when hijacking connection", r.Method, r.URL.Path)
} else {
statusCode := httpstatus.FromError(err)
if errHijack == nil {
statusCode := errdefs.GetHTTPErrorStatusCode(err)
statusText := http.StatusText(statusCode)
fmt.Fprintf(conn, "HTTP/1.1 %d %s\r\nContent-Type: %s\r\n\r\n%s\r\n", statusCode, statusText, contentType, err.Error())
fmt.Fprintf(conn, "HTTP/1.1 %d %s\r\nContent-Type: application/vnd.docker.raw-stream\r\n\r\n%s\r\n", statusCode, statusText, err.Error())
httputils.CloseStreams(conn)
} else {
logrus.Errorf("Error Hijacking: %v", err)
}
}
return nil
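On the newer side, a non-TTY attach over an upgraded connection advertises the multiplexed media type, and the client separates stdout from stderr itself. A minimal consumer sketch using the stdcopy package; the dial is a stand-in for the hijacked connection produced by the 101 UPGRADED handshake above:

    package main

    import (
        "log"
        "net"
        "os"

        "github.com/docker/docker/pkg/stdcopy"
    )

    func main() {
        // Hypothetical transport; a real client performs the attach/upgrade
        // request first and then reads the multiplexed stream from it.
        conn, err := net.Dial("tcp", "localhost:2375")
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        // StdCopy demultiplexes the stream into distinct stdout/stderr writers.
        if _, err := stdcopy.StdCopy(os.Stdout, os.Stderr, conn); err != nil {
            log.Fatal(err)
        }
    }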
@@ -713,7 +651,7 @@ func (s *containerRouter) wsContainersAttach(ctx context.Context, w http.Respons
version := httputils.VersionFromContext(ctx)
setupStreams := func(multiplexed bool) (io.ReadCloser, io.Writer, io.Writer, error) {
setupStreams := func() (io.ReadCloser, io.Writer, io.Writer, error) {
wsChan := make(chan *websocket.Conn)
h := func(conn *websocket.Conn) {
wsChan <- conn
@@ -735,22 +673,15 @@ func (s *containerRouter) wsContainersAttach(ctx context.Context, w http.Respons
return conn, conn, conn, nil
}
useStdin, useStdout, useStderr := true, true, true
if versions.GreaterThanOrEqualTo(version, "1.42") {
useStdin = httputils.BoolValue(r, "stdin")
useStdout = httputils.BoolValue(r, "stdout")
useStderr = httputils.BoolValue(r, "stderr")
}
attachConfig := &backend.ContainerAttachConfig{
GetStreams: setupStreams,
UseStdin: useStdin,
UseStdout: useStdout,
UseStderr: useStderr,
Logs: httputils.BoolValue(r, "logs"),
Stream: httputils.BoolValue(r, "stream"),
DetachKeys: detachKeys,
MuxStreams: false, // never multiplex, as we rely on websocket to manage distinct streams
UseStdin: true,
UseStdout: true,
UseStderr: true,
MuxStreams: false, // TODO: this should be true since it's a single stream for both stdout and stderr
}
err = s.backend.ContainerAttach(containerName, attachConfig)
@@ -775,7 +706,7 @@ func (s *containerRouter) postContainersPrune(ctx context.Context, w http.Respon
pruneFilters, err := filters.FromJSON(r.Form.Get("filters"))
if err != nil {
return err
return errdefs.InvalidParameter(err)
}
pruneReport, err := s.backend.ContainersPrune(ctx, pruneFilters)


@@ -6,12 +6,14 @@ import (
"context"
"encoding/base64"
"encoding/json"
"errors"
"io"
"net/http"
"github.com/docker/docker/api/server/httputils"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/versions"
"github.com/docker/docker/errdefs"
gddohttputil "github.com/golang/gddo/httputil"
)
@@ -24,18 +26,23 @@ func (pathError) Error() string {
func (pathError) InvalidParameter() {}
// postContainersCopy is deprecated in favor of getContainersArchive.
//
// Deprecated since 1.8 (API v1.20), errors out since 1.12 (API v1.24)
func (s *containerRouter) postContainersCopy(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
// Deprecated since 1.8, Errors out since 1.12
version := httputils.VersionFromContext(ctx)
if versions.GreaterThanOrEqualTo(version, "1.24") {
w.WriteHeader(http.StatusNotFound)
return nil
}
if err := httputils.CheckForJSON(r); err != nil {
return err
}
cfg := types.CopyConfig{}
if err := httputils.ReadJSON(r, &cfg); err != nil {
return err
if err := json.NewDecoder(r.Body).Decode(&cfg); err != nil {
if err == io.EOF {
return errdefs.InvalidParameter(errors.New("got EOF while reading request body"))
}
return errdefs.InvalidParameter(err)
}
if cfg.Resource == "" {
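The inline decode-plus-EOF check above is the 20.10 expansion of the httputils.ReadJSON helper used on the newer side, and the same pattern repeats in nearly every handler in this diff. A generic sketch of the shape (readJSON is a hypothetical name):

    // readJSON mirrors the repeated inline pattern: an empty request body
    // becomes a clear InvalidParameter error instead of a bare io.EOF.
    func readJSON(r *http.Request, out interface{}) error {
        if err := json.NewDecoder(r.Body).Decode(out); err != nil {
            if err == io.EOF {
                return errdefs.InvalidParameter(errors.New("got EOF while reading request body"))
            }
            return errdefs.InvalidParameter(err)
        }
        return nil
    }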


@@ -2,6 +2,8 @@ package container // import "github.com/docker/docker/api/server/router/containe
import (
"context"
"encoding/json"
"errors"
"fmt"
"io"
"net/http"
@@ -9,7 +11,6 @@ import (
"github.com/docker/docker/api/server/httputils"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/container"
"github.com/docker/docker/api/types/versions"
"github.com/docker/docker/errdefs"
"github.com/docker/docker/pkg/stdcopy"
@@ -37,26 +38,27 @@ func (s *containerRouter) postContainerExecCreate(ctx context.Context, w http.Re
if err := httputils.ParseForm(r); err != nil {
return err
}
if err := httputils.CheckForJSON(r); err != nil {
return err
}
name := vars["name"]
execConfig := &types.ExecConfig{}
if err := httputils.ReadJSON(r, execConfig); err != nil {
return err
if err := json.NewDecoder(r.Body).Decode(execConfig); err != nil {
if err == io.EOF {
return errdefs.InvalidParameter(errors.New("got EOF while reading request body"))
}
return errdefs.InvalidParameter(err)
}
if len(execConfig.Cmd) == 0 {
return execCommandError{}
}
version := httputils.VersionFromContext(ctx)
if versions.LessThan(version, "1.42") {
// Not supported by API versions before 1.42
execConfig.ConsoleSize = nil
}
// Register an instance of Exec in container.
id, err := s.backend.ContainerExecCreate(vars["name"], execConfig)
id, err := s.backend.ContainerExecCreate(name, execConfig)
if err != nil {
logrus.Errorf("Error setting up exec command in container %s: %v", vars["name"], err)
logrus.Errorf("Error setting up exec command in container %s: %v", name, err)
return err
}
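For context, the decoded exec config must carry at least one Cmd element or the handler returns execCommandError. A hypothetical minimal payload:

    // Minimal valid exec request body for the handler above; values invented.
    func exampleExecConfig() *types.ExecConfig {
        return &types.ExecConfig{
            Cmd:          []string{"ls", "-l", "/"},
            AttachStdout: true,
            AttachStderr: true,
        }
    }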
@@ -72,11 +74,9 @@ func (s *containerRouter) postContainerExecStart(ctx context.Context, w http.Res
}
version := httputils.VersionFromContext(ctx)
if versions.LessThan(version, "1.22") {
// API versions before 1.22 did not enforce application/json content-type.
// Allow older clients to work by patching the content-type.
if r.Header.Get("Content-Type") != "application/json" {
r.Header.Set("Content-Type", "application/json")
if versions.GreaterThan(version, "1.21") {
if err := httputils.CheckForJSON(r); err != nil {
return err
}
}
@@ -87,26 +87,17 @@ func (s *containerRouter) postContainerExecStart(ctx context.Context, w http.Res
)
execStartCheck := &types.ExecStartCheck{}
if err := httputils.ReadJSON(r, execStartCheck); err != nil {
return err
if err := json.NewDecoder(r.Body).Decode(execStartCheck); err != nil {
if err == io.EOF {
return errdefs.InvalidParameter(errors.New("got EOF while reading request body"))
}
return errdefs.InvalidParameter(err)
}
if exists, err := s.backend.ExecExists(execName); !exists {
return err
}
if execStartCheck.ConsoleSize != nil {
// Not supported before 1.42
if versions.LessThan(version, "1.42") {
execStartCheck.ConsoleSize = nil
}
// No console without tty
if !execStartCheck.Tty {
execStartCheck.ConsoleSize = nil
}
}
if !execStartCheck.Detach {
var err error
// Setting up the streaming http interface.
@@ -117,11 +108,7 @@ func (s *containerRouter) postContainerExecStart(ctx context.Context, w http.Res
defer httputils.CloseStreams(inStream, outStream)
if _, ok := r.Header["Upgrade"]; ok {
contentType := types.MediaTypeRawStream
if !execStartCheck.Tty && versions.GreaterThanOrEqualTo(httputils.VersionFromContext(ctx), "1.42") {
contentType = types.MediaTypeMultiplexedStream
}
fmt.Fprint(outStream, "HTTP/1.1 101 UPGRADED\r\nContent-Type: "+contentType+"\r\nConnection: Upgrade\r\nUpgrade: tcp\r\n")
fmt.Fprint(outStream, "HTTP/1.1 101 UPGRADED\r\nContent-Type: application/vnd.docker.raw-stream\r\nConnection: Upgrade\r\nUpgrade: tcp\r\n")
} else {
fmt.Fprint(outStream, "HTTP/1.1 200 OK\r\nContent-Type: application/vnd.docker.raw-stream\r\n")
}
@@ -140,16 +127,9 @@ func (s *containerRouter) postContainerExecStart(ctx context.Context, w http.Res
}
}
options := container.ExecStartOptions{
Stdin: stdin,
Stdout: stdout,
Stderr: stderr,
ConsoleSize: execStartCheck.ConsoleSize,
}
// Now run the user process in the container.
// Maybe we should pass ctx here if we're not detaching?
if err := s.backend.ContainerExecStart(context.Background(), execName, options); err != nil {
if err := s.backend.ContainerExecStart(context.Background(), execName, stdin, stdout, stderr); err != nil {
if execStartCheck.Detach {
return err
}


@@ -11,5 +11,5 @@ import (
// Backend is all the methods that need to be implemented
// to provide image specific functionality.
type Backend interface {
GetRepository(context.Context, reference.Named, *types.AuthConfig) (distribution.Repository, error)
GetRepository(context.Context, reference.Named, *types.AuthConfig) (distribution.Repository, bool, error)
}


@@ -57,7 +57,7 @@ func (s *distributionRouter) getDistributionInfo(ctx context.Context, w http.Res
return errdefs.InvalidParameter(errors.Errorf("unknown image reference format: %s", image))
}
distrepo, err := s.backend.GetRepository(ctx, namedRef, config)
distrepo, _, err := s.backend.GetRepository(ctx, namedRef, config)
if err != nil {
return err
}


@@ -2,7 +2,6 @@ package grpc // import "github.com/docker/docker/api/server/router/grpc"
import (
"github.com/docker/docker/api/server/router"
"github.com/moby/buildkit/util/grpcerrors"
"golang.org/x/net/http2"
"google.golang.org/grpc"
)
@@ -16,11 +15,8 @@ type grpcRouter struct {
// NewRouter initializes a new grpc http router
func NewRouter(backends ...Backend) router.Router {
r := &grpcRouter{
h2Server: &http2.Server{},
grpcServer: grpc.NewServer(
grpc.UnaryInterceptor(grpcerrors.UnaryServerInterceptor),
grpc.StreamInterceptor(grpcerrors.StreamServerInterceptor),
),
h2Server: &http2.Server{},
grpcServer: grpc.NewServer(),
}
for _, b := range backends {
b.RegisterGRPC(r.grpcServer)
@@ -30,12 +26,12 @@ func NewRouter(backends ...Backend) router.Router {
}
// Routes returns the available routers to the session controller
func (gr *grpcRouter) Routes() []router.Route {
return gr.routes
func (r *grpcRouter) Routes() []router.Route {
return r.routes
}
func (gr *grpcRouter) initRoutes() {
gr.routes = []router.Route{
router.NewPostRoute("/grpc", gr.serveGRPC),
func (r *grpcRouter) initRoutes() {
r.routes = []router.Route{
router.NewPostRoute("/grpc", r.serveGRPC),
}
}


@@ -8,7 +8,6 @@ import (
"github.com/docker/docker/api/types/filters"
"github.com/docker/docker/api/types/image"
"github.com/docker/docker/api/types/registry"
dockerimage "github.com/docker/docker/image"
specs "github.com/opencontainers/image-spec/specs-go/v1"
)
@@ -23,20 +22,20 @@ type Backend interface {
type imageBackend interface {
ImageDelete(imageRef string, force, prune bool) ([]types.ImageDeleteResponseItem, error)
ImageHistory(imageName string) ([]*image.HistoryResponseItem, error)
Images(ctx context.Context, opts types.ImageListOptions) ([]*types.ImageSummary, error)
GetImage(refOrID string, platform *specs.Platform) (retImg *dockerimage.Image, retErr error)
Images(imageFilters filters.Args, all bool, withExtraAttrs bool) ([]*types.ImageSummary, error)
LookupImage(name string) (*types.ImageInspect, error)
TagImage(imageName, repository, tag string) (string, error)
ImagesPrune(ctx context.Context, pruneFilters filters.Args) (*types.ImagesPruneReport, error)
}
type importExportBackend interface {
LoadImage(inTar io.ReadCloser, outStream io.Writer, quiet bool) error
ImportImage(src string, repository string, platform *specs.Platform, tag string, msg string, inConfig io.ReadCloser, outStream io.Writer, changes []string) error
ImportImage(src string, repository, platform string, tag string, msg string, inConfig io.ReadCloser, outStream io.Writer, changes []string) error
ExportImage(names []string, outStream io.Writer) error
}
type registryBackend interface {
PullImage(ctx context.Context, image, tag string, platform *specs.Platform, metaHeaders map[string][]string, authConfig *types.AuthConfig, outStream io.Writer) error
PushImage(ctx context.Context, image, tag string, metaHeaders map[string][]string, authConfig *types.AuthConfig, outStream io.Writer) error
SearchRegistryForImages(ctx context.Context, searchFilters filters.Args, term string, limit int, authConfig *types.AuthConfig, metaHeaders map[string][]string) (*registry.SearchResults, error)
SearchRegistryForImages(ctx context.Context, filtersArgs string, term string, limit int, authConfig *types.AuthConfig, metaHeaders map[string][]string) (*registry.SearchResults, error)
}


@@ -2,28 +2,17 @@ package image // import "github.com/docker/docker/api/server/router/image"
import (
"github.com/docker/docker/api/server/router"
"github.com/docker/docker/image"
"github.com/docker/docker/layer"
"github.com/docker/docker/reference"
)
// imageRouter is a router to talk with the image controller
type imageRouter struct {
backend Backend
referenceBackend reference.Store
imageStore image.Store
layerStore layer.Store
routes []router.Route
backend Backend
routes []router.Route
}
// NewRouter initializes a new image router
func NewRouter(backend Backend, referenceBackend reference.Store, imageStore image.Store, layerStore layer.Store) router.Router {
r := &imageRouter{
backend: backend,
referenceBackend: referenceBackend,
imageStore: imageStore,
layerStore: layerStore,
}
func NewRouter(backend Backend) router.Router {
r := &imageRouter{backend: backend}
r.initRoutes()
return r
}


@@ -7,19 +7,17 @@ import (
"net/http"
"strconv"
"strings"
"time"
"github.com/containerd/containerd/platforms"
"github.com/docker/distribution/reference"
"github.com/docker/docker/api/server/httputils"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/filters"
"github.com/docker/docker/api/types/versions"
"github.com/docker/docker/errdefs"
"github.com/docker/docker/image"
"github.com/docker/docker/layer"
"github.com/docker/docker/pkg/ioutils"
"github.com/docker/docker/pkg/streamformatter"
"github.com/docker/docker/pkg/system"
"github.com/docker/docker/registry"
specs "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
)
@@ -32,13 +30,13 @@ func (s *imageRouter) postImagesCreate(ctx context.Context, w http.ResponseWrite
}
var (
image = r.Form.Get("fromImage")
repo = r.Form.Get("repo")
tag = r.Form.Get("tag")
message = r.Form.Get("message")
progressErr error
output = ioutils.NewWriteFlusher(w)
platform *specs.Platform
image = r.Form.Get("fromImage")
repo = r.Form.Get("repo")
tag = r.Form.Get("tag")
message = r.Form.Get("message")
err error
output = ioutils.NewWriteFlusher(w)
platform *specs.Platform
)
defer output.Close()
@@ -46,11 +44,15 @@ func (s *imageRouter) postImagesCreate(ctx context.Context, w http.ResponseWrite
version := httputils.VersionFromContext(ctx)
if versions.GreaterThanOrEqualTo(version, "1.32") {
if p := r.FormValue("platform"); p != "" {
sp, err := platforms.Parse(p)
apiPlatform := r.FormValue("platform")
if apiPlatform != "" {
sp, err := platforms.Parse(apiPlatform)
if err != nil {
return err
}
if err := system.ValidatePlatform(sp); err != nil {
return err
}
platform = &sp
}
}
@@ -73,16 +75,23 @@ func (s *imageRouter) postImagesCreate(ctx context.Context, w http.ResponseWrite
authConfig = &types.AuthConfig{}
}
}
progressErr = s.backend.PullImage(ctx, image, tag, platform, metaHeaders, authConfig, output)
err = s.backend.PullImage(ctx, image, tag, platform, metaHeaders, authConfig, output)
} else { // import
src := r.Form.Get("fromSrc")
progressErr = s.backend.ImportImage(src, repo, platform, tag, message, r.Body, output, r.Form["changes"])
}
if progressErr != nil {
if !output.Flushed() {
return progressErr
// 'err' MUST NOT be defined within this block; we need any error
// generated from the download to be available to the output
// stream processing below
os := ""
if platform != nil {
os = platform.OS
}
_, _ = output.Write(streamformatter.FormatError(progressErr))
err = s.backend.ImportImage(src, repo, os, tag, message, r.Body, output, r.Form["changes"])
}
if err != nil {
if !output.Flushed() {
return err
}
_, _ = output.Write(streamformatter.FormatError(err))
}
return nil
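The Flushed check above exists because once progress output has started, the HTTP status line is already on the wire, so later failures can only be reported in-band. A sketch of that path (error value invented), assuming streamformatter.FormatError wraps the error as a JSON progress message:

    // Once output.Flushed() is true, errors are written into the stream itself
    // so clients already consuming pull/import progress can surface them.
    _, _ = output.Write(streamformatter.FormatError(errors.New("example failure")))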
@@ -204,12 +213,7 @@ func (s *imageRouter) deleteImages(ctx context.Context, w http.ResponseWriter, r
}
func (s *imageRouter) getImagesByName(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
image, err := s.backend.GetImage(vars["name"], nil)
if err != nil {
return err
}
imageInspect, err := s.toImageInspect(image)
imageInspect, err := s.backend.LookupImage(vars["name"])
if err != nil {
return err
}
@@ -217,85 +221,6 @@ func (s *imageRouter) getImagesByName(ctx context.Context, w http.ResponseWriter
return httputils.WriteJSON(w, http.StatusOK, imageInspect)
}
func (s *imageRouter) toImageInspect(img *image.Image) (*types.ImageInspect, error) {
refs := s.referenceBackend.References(img.ID().Digest())
repoTags := []string{}
repoDigests := []string{}
for _, ref := range refs {
switch ref.(type) {
case reference.NamedTagged:
repoTags = append(repoTags, reference.FamiliarString(ref))
case reference.Canonical:
repoDigests = append(repoDigests, reference.FamiliarString(ref))
}
}
var size int64
var layerMetadata map[string]string
layerID := img.RootFS.ChainID()
if layerID != "" {
l, err := s.layerStore.Get(layerID)
if err != nil {
return nil, err
}
defer layer.ReleaseAndLog(s.layerStore, l)
size = l.Size()
layerMetadata, err = l.Metadata()
if err != nil {
return nil, err
}
}
comment := img.Comment
if len(comment) == 0 && len(img.History) > 0 {
comment = img.History[len(img.History)-1].Comment
}
lastUpdated, err := s.imageStore.GetLastUpdated(img.ID())
if err != nil {
return nil, err
}
return &types.ImageInspect{
ID: img.ID().String(),
RepoTags: repoTags,
RepoDigests: repoDigests,
Parent: img.Parent.String(),
Comment: comment,
Created: img.Created.Format(time.RFC3339Nano),
Container: img.Container,
ContainerConfig: &img.ContainerConfig,
DockerVersion: img.DockerVersion,
Author: img.Author,
Config: img.Config,
Architecture: img.Architecture,
Variant: img.Variant,
Os: img.OperatingSystem(),
OsVersion: img.OSVersion,
Size: size,
VirtualSize: size, // TODO: field unused, deprecate
GraphDriver: types.GraphDriverData{
Name: s.layerStore.DriverName(),
Data: layerMetadata,
},
RootFS: rootFSToAPIType(img.RootFS),
Metadata: types.ImageMetadata{
LastTagTime: lastUpdated,
},
}, nil
}
func rootFSToAPIType(rootfs *image.RootFS) types.RootFS {
var layers []string
for _, l := range rootfs.DiffIDs {
layers = append(layers, l.String())
}
return types.RootFS{
Type: rootfs.Type,
Layers: layers,
}
}
func (s *imageRouter) getImagesJSON(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
@@ -308,24 +233,13 @@ func (s *imageRouter) getImagesJSON(ctx context.Context, w http.ResponseWriter,
version := httputils.VersionFromContext(ctx)
if versions.LessThan(version, "1.41") {
// NOTE: filter is a shell glob string applied to repository names.
filterParam := r.Form.Get("filter")
if filterParam != "" {
imageFilters.Add("reference", filterParam)
}
}
var sharedSize bool
if versions.GreaterThanOrEqualTo(version, "1.42") {
// NOTE: Support for the "shared-size" parameter was added in API 1.42.
sharedSize = httputils.BoolValue(r, "shared-size")
}
images, err := s.backend.Images(ctx, types.ImageListOptions{
All: httputils.BoolValue(r, "all"),
Filters: imageFilters,
SharedSize: sharedSize,
})
images, err := s.backend.Images(imageFilters, httputils.BoolValue(r, "all"), false)
if err != nil {
return err
}
@@ -377,21 +291,15 @@ func (s *imageRouter) getImagesSearch(ctx context.Context, w http.ResponseWriter
headers[k] = v
}
}
var limit int
limit := registry.DefaultSearchLimit
if r.Form.Get("limit") != "" {
var err error
limit, err = strconv.Atoi(r.Form.Get("limit"))
if err != nil || limit < 0 {
return errdefs.InvalidParameter(errors.Wrap(err, "invalid limit specified"))
limitValue, err := strconv.Atoi(r.Form.Get("limit"))
if err != nil {
return err
}
limit = limitValue
}
searchFilters, err := filters.FromJSON(r.Form.Get("filters"))
if err != nil {
return err
}
query, err := s.backend.SearchRegistryForImages(ctx, searchFilters, r.Form.Get("term"), limit, config, headers)
query, err := s.backend.SearchRegistryForImages(ctx, r.Form.Get("filters"), r.Form.Get("term"), limit, config, headers)
if err != nil {
return err
}
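The newer side parses the filters query parameter via filters.FromJSON rather than handing the raw string to the backend. A sketch of the wire format (officialOnlyFilters is a hypothetical helper):

    // Wire format of the "filters" parameter for a search limited to official
    // images, e.g. `docker search --filter is-official=true`.
    func officialOnlyFilters() (filters.Args, error) {
        return filters.FromJSON(`{"is-official":{"true":true}}`)
    }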


@@ -6,7 +6,7 @@ import (
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/filters"
"github.com/docker/docker/api/types/network"
"github.com/docker/docker/libnetwork"
"github.com/docker/libnetwork"
)
// Backend is all the methods that need to be implemented


@@ -2,6 +2,8 @@ package network // import "github.com/docker/docker/api/server/router/network"
import (
"context"
"encoding/json"
"io"
"net/http"
"strconv"
"strings"
@@ -12,8 +14,8 @@ import (
"github.com/docker/docker/api/types/network"
"github.com/docker/docker/api/types/versions"
"github.com/docker/docker/errdefs"
"github.com/docker/docker/libnetwork"
netconst "github.com/docker/docker/libnetwork/datastore"
"github.com/docker/libnetwork"
netconst "github.com/docker/libnetwork/datastore"
"github.com/pkg/errors"
)
@@ -28,7 +30,7 @@ func (n *networkRouter) getNetworksList(ctx context.Context, w http.ResponseWrit
}
if err := network.ValidateFilters(filter); err != nil {
return err
return errdefs.InvalidParameter(err)
}
var list []types.NetworkResource
@@ -203,15 +205,23 @@ func (n *networkRouter) getNetwork(ctx context.Context, w http.ResponseWriter, r
}
func (n *networkRouter) postNetworkCreate(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
var create types.NetworkCreateRequest
if err := httputils.ParseForm(r); err != nil {
return err
}
var create types.NetworkCreateRequest
if err := httputils.ReadJSON(r, &create); err != nil {
if err := httputils.CheckForJSON(r); err != nil {
return err
}
if err := json.NewDecoder(r.Body).Decode(&create); err != nil {
if err == io.EOF {
return errdefs.InvalidParameter(errors.New("got EOF while reading request body"))
}
return errdefs.InvalidParameter(err)
}
if nws, err := n.cluster.GetNetworksByName(create.Name); err == nil && len(nws) > 0 {
return nameConflict(create.Name)
}
@@ -245,15 +255,22 @@ func (n *networkRouter) postNetworkCreate(ctx context.Context, w http.ResponseWr
}
func (n *networkRouter) postNetworkConnect(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
var connect types.NetworkConnect
if err := httputils.ParseForm(r); err != nil {
return err
}
var connect types.NetworkConnect
if err := httputils.ReadJSON(r, &connect); err != nil {
if err := httputils.CheckForJSON(r); err != nil {
return err
}
if err := json.NewDecoder(r.Body).Decode(&connect); err != nil {
if err == io.EOF {
return errdefs.InvalidParameter(errors.New("got EOF while reading request body"))
}
return errdefs.InvalidParameter(err)
}
// Unlike other operations, we do not check for ambiguity of the name/ID here.
// The reason is that, in the case of an attachable network in swarm scope, the actual local network
// may not be available at the time. At the same time, inside the daemon `ConnectContainerToNetwork`
@@ -262,15 +279,22 @@ func (n *networkRouter) postNetworkConnect(ctx context.Context, w http.ResponseW
}
func (n *networkRouter) postNetworkDisconnect(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
var disconnect types.NetworkDisconnect
if err := httputils.ParseForm(r); err != nil {
return err
}
var disconnect types.NetworkDisconnect
if err := httputils.ReadJSON(r, &disconnect); err != nil {
if err := httputils.CheckForJSON(r); err != nil {
return err
}
if err := json.NewDecoder(r.Body).Decode(&disconnect); err != nil {
if err == io.EOF {
return errdefs.InvalidParameter(errors.New("got EOF while reading request body"))
}
return errdefs.InvalidParameter(err)
}
return n.backend.DisconnectContainerFromNetwork(disconnect.Container, vars["id"], disconnect.Force)
}


@@ -4,6 +4,7 @@ import (
"context"
"encoding/base64"
"encoding/json"
"io"
"net/http"
"strconv"
"strings"
@@ -12,6 +13,7 @@ import (
"github.com/docker/docker/api/server/httputils"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/filters"
"github.com/docker/docker/errdefs"
"github.com/docker/docker/pkg/ioutils"
"github.com/docker/docker/pkg/streamformatter"
"github.com/pkg/errors"
@@ -94,8 +96,12 @@ func (pr *pluginRouter) upgradePlugin(ctx context.Context, w http.ResponseWriter
}
var privileges types.PluginPrivileges
if err := httputils.ReadJSON(r, &privileges); err != nil {
return err
dec := json.NewDecoder(r.Body)
if err := dec.Decode(&privileges); err != nil {
return errors.Wrap(err, "failed to parse privileges")
}
if dec.More() {
return errors.New("invalid privileges")
}
metaHeaders, authConfig := parseHeaders(r.Header)
@@ -129,8 +135,12 @@ func (pr *pluginRouter) pullPlugin(ctx context.Context, w http.ResponseWriter, r
}
var privileges types.PluginPrivileges
if err := httputils.ReadJSON(r, &privileges); err != nil {
return err
dec := json.NewDecoder(r.Body)
if err := dec.Decode(&privileges); err != nil {
return errors.Wrap(err, "failed to parse privileges")
}
if dec.More() {
return errors.New("invalid privileges")
}
metaHeaders, authConfig := parseHeaders(r.Header)
@@ -267,8 +277,11 @@ func (pr *pluginRouter) pushPlugin(ctx context.Context, w http.ResponseWriter, r
func (pr *pluginRouter) setPlugin(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
var args []string
if err := httputils.ReadJSON(r, &args); err != nil {
return err
if err := json.NewDecoder(r.Body).Decode(&args); err != nil {
if err == io.EOF {
return errdefs.InvalidParameter(errors.New("got EOF while reading request body"))
}
return errdefs.InvalidParameter(err)
}
if err := pr.backend.Set(vars["name"], args); err != nil {
return err


@@ -2,7 +2,9 @@ package swarm // import "github.com/docker/docker/api/server/router/swarm"
import (
"context"
"encoding/json"
"fmt"
"io"
"net/http"
"strconv"
@@ -19,8 +21,11 @@ import (
func (sr *swarmRouter) initCluster(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
var req types.InitRequest
if err := httputils.ReadJSON(r, &req); err != nil {
return err
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
if err == io.EOF {
return errdefs.InvalidParameter(errors.New("got EOF while reading request body"))
}
return errdefs.InvalidParameter(err)
}
version := httputils.VersionFromContext(ctx)
@@ -43,8 +48,11 @@ func (sr *swarmRouter) initCluster(ctx context.Context, w http.ResponseWriter, r
func (sr *swarmRouter) joinCluster(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
var req types.JoinRequest
if err := httputils.ReadJSON(r, &req); err != nil {
return err
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
if err == io.EOF {
return errdefs.InvalidParameter(errors.New("got EOF while reading request body"))
}
return errdefs.InvalidParameter(err)
}
return sr.backend.Join(req)
}
@@ -70,8 +78,11 @@ func (sr *swarmRouter) inspectCluster(ctx context.Context, w http.ResponseWriter
func (sr *swarmRouter) updateCluster(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
var swarm types.Spec
if err := httputils.ReadJSON(r, &swarm); err != nil {
return err
if err := json.NewDecoder(r.Body).Decode(&swarm); err != nil {
if err == io.EOF {
return errdefs.InvalidParameter(errors.New("got EOF while reading request body"))
}
return errdefs.InvalidParameter(err)
}
rawVersion := r.URL.Query().Get("version")
@@ -121,8 +132,11 @@ func (sr *swarmRouter) updateCluster(ctx context.Context, w http.ResponseWriter,
func (sr *swarmRouter) unlockCluster(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
var req types.UnlockRequest
if err := httputils.ReadJSON(r, &req); err != nil {
return err
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
if err == io.EOF {
return errdefs.InvalidParameter(errors.New("got EOF while reading request body"))
}
return errdefs.InvalidParameter(err)
}
if err := sr.backend.UnlockSwarm(req); err != nil {
@@ -150,7 +164,7 @@ func (sr *swarmRouter) getServices(ctx context.Context, w http.ResponseWriter, r
}
filter, err := filters.FromJSON(r.Form.Get("filters"))
if err != nil {
return err
return errdefs.InvalidParameter(err)
}
// the status query parameter is only supported in API versions >= 1.41. If
@@ -202,8 +216,11 @@ func (sr *swarmRouter) getService(ctx context.Context, w http.ResponseWriter, r
func (sr *swarmRouter) createService(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
var service types.ServiceSpec
if err := httputils.ReadJSON(r, &service); err != nil {
return err
if err := json.NewDecoder(r.Body).Decode(&service); err != nil {
if err == io.EOF {
return errdefs.InvalidParameter(errors.New("got EOF while reading request body"))
}
return errdefs.InvalidParameter(err)
}
// Get returns "" if the header does not exist
@@ -226,8 +243,11 @@ func (sr *swarmRouter) createService(ctx context.Context, w http.ResponseWriter,
func (sr *swarmRouter) updateService(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
var service types.ServiceSpec
if err := httputils.ReadJSON(r, &service); err != nil {
return err
if err := json.NewDecoder(r.Body).Decode(&service); err != nil {
if err == io.EOF {
return errdefs.InvalidParameter(errors.New("got EOF while reading request body"))
}
return errdefs.InvalidParameter(err)
}
rawVersion := r.URL.Query().Get("version")
@@ -321,8 +341,11 @@ func (sr *swarmRouter) getNode(ctx context.Context, w http.ResponseWriter, r *ht
func (sr *swarmRouter) updateNode(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
var node types.NodeSpec
if err := httputils.ReadJSON(r, &node); err != nil {
return err
if err := json.NewDecoder(r.Body).Decode(&node); err != nil {
if err == io.EOF {
return errdefs.InvalidParameter(errors.New("got EOF while reading request body"))
}
return errdefs.InvalidParameter(err)
}
rawVersion := r.URL.Query().Get("version")
@@ -400,8 +423,11 @@ func (sr *swarmRouter) getSecrets(ctx context.Context, w http.ResponseWriter, r
func (sr *swarmRouter) createSecret(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
var secret types.SecretSpec
if err := httputils.ReadJSON(r, &secret); err != nil {
return err
if err := json.NewDecoder(r.Body).Decode(&secret); err != nil {
if err == io.EOF {
return errdefs.InvalidParameter(errors.New("got EOF while reading request body"))
}
return errdefs.InvalidParameter(err)
}
version := httputils.VersionFromContext(ctx)
if secret.Templating != nil && versions.LessThan(version, "1.37") {
@@ -438,8 +464,11 @@ func (sr *swarmRouter) getSecret(ctx context.Context, w http.ResponseWriter, r *
func (sr *swarmRouter) updateSecret(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
var secret types.SecretSpec
if err := httputils.ReadJSON(r, &secret); err != nil {
return err
if err := json.NewDecoder(r.Body).Decode(&secret); err != nil {
if err == io.EOF {
return errdefs.InvalidParameter(errors.New("got EOF while reading request body"))
}
return errdefs.InvalidParameter(err)
}
rawVersion := r.URL.Query().Get("version")
@@ -471,8 +500,11 @@ func (sr *swarmRouter) getConfigs(ctx context.Context, w http.ResponseWriter, r
func (sr *swarmRouter) createConfig(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
var config types.ConfigSpec
if err := httputils.ReadJSON(r, &config); err != nil {
return err
if err := json.NewDecoder(r.Body).Decode(&config); err != nil {
if err == io.EOF {
return errdefs.InvalidParameter(errors.New("got EOF while reading request body"))
}
return errdefs.InvalidParameter(err)
}
version := httputils.VersionFromContext(ctx)
@@ -510,8 +542,11 @@ func (sr *swarmRouter) getConfig(ctx context.Context, w http.ResponseWriter, r *
func (sr *swarmRouter) updateConfig(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
var config types.ConfigSpec
if err := httputils.ReadJSON(r, &config); err != nil {
return err
if err := json.NewDecoder(r.Body).Decode(&config); err != nil {
if err == io.EOF {
return errdefs.InvalidParameter(errors.New("got EOF while reading request body"))
}
return errdefs.InvalidParameter(err)
}
rawVersion := r.URL.Query().Get("version")


@@ -3,6 +3,7 @@ package swarm // import "github.com/docker/docker/api/server/router/swarm"
import (
"context"
"fmt"
"io"
"net/http"
"github.com/docker/docker/api/server/httputils"
@@ -14,7 +15,7 @@ import (
// swarmLogs takes an http response, request, and selector, and writes the logs
// specified by the selector to the response
func (sr *swarmRouter) swarmLogs(ctx context.Context, w http.ResponseWriter, r *http.Request, selector *backend.LogSelector) error {
func (sr *swarmRouter) swarmLogs(ctx context.Context, w io.Writer, r *http.Request, selector *backend.LogSelector) error {
// Args are validated before the stream starts because when it starts we're
// sending HTTP 200 by writing an empty chunk of data to tell the client that
// the daemon is going to stream. By sending this initial HTTP 200 we can't report
@@ -62,11 +63,6 @@ func (sr *swarmRouter) swarmLogs(ctx context.Context, w http.ResponseWriter, r *
return err
}
contentType := basictypes.MediaTypeRawStream
if !tty && versions.GreaterThanOrEqualTo(httputils.VersionFromContext(ctx), "1.42") {
contentType = basictypes.MediaTypeMultiplexedStream
}
w.Header().Set("Content-Type", contentType)
httputils.WriteLogStream(ctx, w, msgs, logsConfig, !tty)
return nil
}


@@ -115,4 +115,5 @@ func TestAdjustForAPIVersion(t *testing.T) {
if len(spec.TaskTemplate.ContainerSpec.Ulimits) != 0 {
t.Error("Ulimits were not stripped from spec")
}
}


@@ -10,24 +10,12 @@ import (
"github.com/docker/docker/api/types/swarm"
)
// DiskUsageOptions holds parameters for the system disk usage query.
type DiskUsageOptions struct {
// Containers controls whether container disk usage should be computed.
Containers bool
// Images controls whether image disk usage should be computed.
Images bool
// Volumes controls whether volume disk usage should be computed.
Volumes bool
}
// Backend is the methods that need to be implemented to provide
// system specific functionality.
type Backend interface {
SystemInfo() *types.Info
SystemVersion() types.Version
SystemDiskUsage(ctx context.Context, opts DiskUsageOptions) (*types.DiskUsage, error)
SystemDiskUsage(ctx context.Context) (*types.DiskUsage, error)
SubscribeToEvents(since, until time.Time, ef filters.Args) ([]events.Message, chan interface{})
UnsubscribeFromEvents(chan interface{})
AuthenticateToRegistry(ctx context.Context, authConfig *types.AuthConfig) (string, string, error)
@@ -38,8 +26,3 @@ type Backend interface {
type ClusterBackend interface {
Info() swarm.Info
}
// StatusProvider provides methods to get the swarm status of the current node.
type StatusProvider interface {
Status() string
}


@@ -13,11 +13,10 @@ import (
"github.com/docker/docker/api/types/events"
"github.com/docker/docker/api/types/filters"
"github.com/docker/docker/api/types/registry"
"github.com/docker/docker/api/types/swarm"
timetypes "github.com/docker/docker/api/types/time"
"github.com/docker/docker/api/types/versions"
"github.com/docker/docker/pkg/ioutils"
"github.com/pkg/errors"
pkgerrors "github.com/pkg/errors"
"github.com/sirupsen/logrus"
"golang.org/x/sync/errgroup"
)
@@ -35,9 +34,6 @@ func (s *systemRouter) pingHandler(ctx context.Context, w http.ResponseWriter, r
if bv := builderVersion; bv != "" {
w.Header().Set("Builder-Version", string(bv))
}
w.Header().Set("Swarm", s.swarmStatus())
if r.Method == http.MethodHead {
w.Header().Set("Content-Type", "text/plain; charset=utf-8")
w.Header().Set("Content-Length", "0")
@@ -47,15 +43,6 @@ func (s *systemRouter) pingHandler(ctx context.Context, w http.ResponseWriter, r
return err
}
func (s *systemRouter) swarmStatus() string {
if s.cluster != nil {
if p, ok := s.cluster.(StatusProvider); ok {
return p.Status()
}
}
return string(swarm.LocalNodeStateInactive)
}
func (s *systemRouter) getInfo(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
info := s.backend.SystemInfo()
@@ -64,8 +51,7 @@ func (s *systemRouter) getInfo(ctx context.Context, w http.ResponseWriter, r *ht
info.Warnings = append(info.Warnings, info.Swarm.Warnings...)
}
version := httputils.VersionFromContext(ctx)
if versions.LessThan(version, "1.25") {
if versions.LessThan(httputils.VersionFromContext(ctx), "1.25") {
// TODO: handle this conversion in engine-api
type oldInfo struct {
*types.Info
@@ -86,7 +72,7 @@ func (s *systemRouter) getInfo(ctx context.Context, w http.ResponseWriter, r *ht
old.SecurityOptions = nameOnlySecurityOptions
return httputils.WriteJSON(w, http.StatusOK, old)
}
if versions.LessThan(version, "1.39") {
if versions.LessThan(httputils.VersionFromContext(ctx), "1.39") {
if info.KernelVersion == "" {
info.KernelVersion = "<unknown>"
}
@@ -94,9 +80,6 @@ func (s *systemRouter) getInfo(ctx context.Context, w http.ResponseWriter, r *ht
info.OperatingSystem = "<unknown>"
}
}
if versions.GreaterThanOrEqualTo(version, "1.42") {
info.KernelMemory = false
}
return httputils.WriteJSON(w, http.StatusOK, info)
}
@@ -107,95 +90,37 @@ func (s *systemRouter) getVersion(ctx context.Context, w http.ResponseWriter, r
}
func (s *systemRouter) getDiskUsage(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
}
version := httputils.VersionFromContext(ctx)
var getContainers, getImages, getVolumes, getBuildCache bool
typeStrs, ok := r.Form["type"]
if versions.LessThan(version, "1.42") || !ok {
getContainers, getImages, getVolumes, getBuildCache = true, true, true, true
} else {
for _, typ := range typeStrs {
switch types.DiskUsageObject(typ) {
case types.ContainerObject:
getContainers = true
case types.ImageObject:
getImages = true
case types.VolumeObject:
getVolumes = true
case types.BuildCacheObject:
getBuildCache = true
default:
return invalidRequestError{Err: fmt.Errorf("unknown object type: %s", typ)}
}
}
}
eg, ctx := errgroup.WithContext(ctx)
var systemDiskUsage *types.DiskUsage
if getContainers || getImages || getVolumes {
eg.Go(func() error {
var err error
systemDiskUsage, err = s.backend.SystemDiskUsage(ctx, DiskUsageOptions{
Containers: getContainers,
Images: getImages,
Volumes: getVolumes,
})
return err
})
}
var du *types.DiskUsage
eg.Go(func() error {
var err error
du, err = s.backend.SystemDiskUsage(ctx)
return err
})
var buildCache []*types.BuildCache
if getBuildCache {
eg.Go(func() error {
var err error
buildCache, err = s.builder.DiskUsage(ctx)
if err != nil {
return errors.Wrap(err, "error getting build cache usage")
}
if buildCache == nil {
// Ensure an empty `BuildCache` field is represented as an empty JSON array (`[]`)
// instead of `null` to be consistent with `Images`, `Containers` etc.
buildCache = []*types.BuildCache{}
}
return nil
})
}
eg.Go(func() error {
var err error
buildCache, err = s.builder.DiskUsage(ctx)
if err != nil {
return pkgerrors.Wrap(err, "error getting build cache usage")
}
return nil
})
if err := eg.Wait(); err != nil {
return err
}
var builderSize int64
if versions.LessThan(version, "1.42") {
for _, b := range buildCache {
builderSize += b.Size
// Parents field was added in API 1.42 to replace the Parent field.
b.Parents = nil
}
}
if versions.GreaterThanOrEqualTo(version, "1.42") {
for _, b := range buildCache {
// Parent field is deprecated in API v1.42 and up, as it is deprecated
// in BuildKit. Empty the field to omit it in the API response.
b.Parent = "" //nolint:staticcheck // ignore SA1019 (Parent field is deprecated)
}
for _, b := range buildCache {
builderSize += b.Size
}
du := types.DiskUsage{
BuildCache: buildCache,
BuilderSize: builderSize,
}
if systemDiskUsage != nil {
du.LayersSize = systemDiskUsage.LayersSize
du.Images = systemDiskUsage.Images
du.Containers = systemDiskUsage.Containers
du.Volumes = systemDiskUsage.Volumes
}
du.BuilderSize = builderSize
du.BuildCache = buildCache
return httputils.WriteJSON(w, http.StatusOK, du)
}
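On the newer side of this hunk, API 1.42 introduces a repeatable type parameter so clients can scope the disk usage computation to selected object types; older clients always get everything. A hypothetical 1.42 request (daemon address invented):

    package main

    import (
        "log"
        "net/http"
        "net/url"
    )

    func main() {
        // Scope /system/df to containers and volumes; each object type is
        // passed as its own "type" query parameter.
        q := url.Values{}
        q.Add("type", "container")
        q.Add("type", "volume")
        resp, err := http.Get("http://localhost:2375/v1.42/system/df?" + q.Encode())
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        log.Println(resp.Status)
    }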


@@ -7,28 +7,14 @@ import (
// TODO return types need to be refactored into pkg
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/filters"
"github.com/docker/docker/api/types/volume"
)
// Backend is the methods that need to be implemented to provide
// volume specific functionality
type Backend interface {
List(ctx context.Context, filter filters.Args) ([]*volume.Volume, []string, error)
Get(ctx context.Context, name string, opts ...opts.GetOption) (*volume.Volume, error)
Create(ctx context.Context, name, driverName string, opts ...opts.CreateOption) (*volume.Volume, error)
List(ctx context.Context, filter filters.Args) ([]*types.Volume, []string, error)
Get(ctx context.Context, name string, opts ...opts.GetOption) (*types.Volume, error)
Create(ctx context.Context, name, driverName string, opts ...opts.CreateOption) (*types.Volume, error)
Remove(ctx context.Context, name string, opts ...opts.RemoveOption) error
Prune(ctx context.Context, pruneFilters filters.Args) (*types.VolumesPruneReport, error)
}
// ClusterBackend is the backend used for Swarm Cluster Volumes. Regular
// volumes go through the volume service, but to avoid across-dependency
// between the cluster package and the volume package, we simply provide two
// backends here.
type ClusterBackend interface {
GetVolume(nameOrID string) (volume.Volume, error)
GetVolumes(options volume.ListOptions) ([]*volume.Volume, error)
CreateVolume(volume volume.CreateOptions) (*volume.Volume, error)
RemoveVolume(nameOrID string, force bool) error
UpdateVolume(nameOrID string, version uint64, volume volume.UpdateOptions) error
IsManager() bool
}


@@ -5,15 +5,13 @@ import "github.com/docker/docker/api/server/router"
// volumeRouter is a router to talk with the volumes controller
type volumeRouter struct {
backend Backend
cluster ClusterBackend
routes []router.Route
}
// NewRouter initializes a new volume router
func NewRouter(b Backend, cb ClusterBackend) router.Router {
func NewRouter(b Backend) router.Router {
r := &volumeRouter{
backend: b,
cluster: cb,
}
r.initRoutes()
return r
@@ -32,8 +30,6 @@ func (r *volumeRouter) initRoutes() {
// POST
router.NewPostRoute("/volumes/create", r.postVolumesCreate),
router.NewPostRoute("/volumes/prune", r.postVolumesPrune),
// PUT
router.NewPutRoute("/volumes/{name:.*}", r.putVolumesUpdate),
// DELETE
router.NewDeleteRoute("/volumes/{name:.*}", r.deleteVolumes),
}


@@ -2,24 +2,16 @@ package volume // import "github.com/docker/docker/api/server/router/volume"
import (
"context"
"fmt"
"encoding/json"
"io"
"net/http"
"strconv"
"github.com/docker/docker/api/server/httputils"
"github.com/docker/docker/api/types/filters"
"github.com/docker/docker/api/types/versions"
"github.com/docker/docker/api/types/volume"
volumetypes "github.com/docker/docker/api/types/volume"
"github.com/docker/docker/errdefs"
"github.com/docker/docker/volume/service/opts"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
const (
// clusterVolumesVersion defines the API version in which swarm cluster volume
// functionality was introduced. It avoids the use of magic numbers.
clusterVolumesVersion = "1.42"
)
func (v *volumeRouter) getVolumesList(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
@@ -29,62 +21,25 @@ func (v *volumeRouter) getVolumesList(ctx context.Context, w http.ResponseWriter
filters, err := filters.FromJSON(r.Form.Get("filters"))
if err != nil {
return errors.Wrap(err, "error reading volume filters")
return errdefs.InvalidParameter(errors.Wrap(err, "error reading volume filters"))
}
volumes, warnings, err := v.backend.List(ctx, filters)
if err != nil {
return err
}
version := httputils.VersionFromContext(ctx)
if versions.GreaterThanOrEqualTo(version, clusterVolumesVersion) && v.cluster.IsManager() {
clusterVolumes, swarmErr := v.cluster.GetVolumes(volume.ListOptions{Filters: filters})
if swarmErr != nil {
// if there is a swarm error, we may not want to error out right
// away. the local list probably worked. instead, let's do what we
// do if there's a bad driver while trying to list: add the error
// to the warnings. don't do this if swarm is not initialized.
warnings = append(warnings, swarmErr.Error())
}
// add the cluster volumes to the return
volumes = append(volumes, clusterVolumes...)
}
return httputils.WriteJSON(w, http.StatusOK, &volume.ListResponse{Volumes: volumes, Warnings: warnings})
return httputils.WriteJSON(w, http.StatusOK, &volumetypes.VolumeListOKBody{Volumes: volumes, Warnings: warnings})
}
func (v *volumeRouter) getVolumeByName(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
}
version := httputils.VersionFromContext(ctx)
// re: volume name duplication
//
// we prefer to get volumes locally before attempting to get them from the
// cluster. Local volumes can only be looked up by name, but cluster
// volumes can also be looked up by ID.
vol, err := v.backend.Get(ctx, vars["name"], opts.WithGetResolveStatus)
// if the volume is not found in the regular volume backend, and the client
// is using API version 1.42 or greater (when cluster volumes were
// introduced), then check if Swarm has the volume.
if errdefs.IsNotFound(err) && versions.GreaterThanOrEqualTo(version, clusterVolumesVersion) && v.cluster.IsManager() {
swarmVol, err := v.cluster.GetVolume(vars["name"])
// if swarm returns an error and that error indicates that swarm is not
// initialized, return original NotFound error. Otherwise, we'd return
// a weird swarm unavailable error on non-swarm engines.
if err != nil {
return err
}
vol = &swarmVol
} else if err != nil {
// otherwise, if this isn't NotFound, or this isn't a high enough version,
// just return the error by itself.
volume, err := v.backend.Get(ctx, vars["name"], opts.WithGetResolveStatus)
if err != nil {
return err
}
return httputils.WriteJSON(w, http.StatusOK, vol)
return httputils.WriteJSON(w, http.StatusOK, volume)
}
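The removed logic above performs a local-first lookup with a cluster fallback; the NotFound check is what keeps non-swarm engines from surfacing swarm errors. A condensed, hypothetical rendering (clusterEligible stands in for the version and manager checks):

    // getVolume sketches the removed fallback: try the local backend first,
    // and only consult the cluster on NotFound.
    func getVolume(ctx context.Context, v *volumeRouter, name string, clusterEligible bool) (*volume.Volume, error) {
        vol, err := v.backend.Get(ctx, name, opts.WithGetResolveStatus)
        if errdefs.IsNotFound(err) && clusterEligible {
            if swarmVol, swarmErr := v.cluster.GetVolume(name); swarmErr == nil {
                return &swarmVol, nil
            }
            // keep the original NotFound so non-swarm engines don't surface
            // a confusing "swarm unavailable" error
        }
        return vol, err
    }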
func (v *volumeRouter) postVolumesCreate(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
@@ -92,65 +47,23 @@ func (v *volumeRouter) postVolumesCreate(ctx context.Context, w http.ResponseWri
return err
}
var req volume.CreateOptions
if err := httputils.ReadJSON(r, &req); err != nil {
if err := httputils.CheckForJSON(r); err != nil {
return err
}
var (
vol *volume.Volume
err error
version = httputils.VersionFromContext(ctx)
)
// if the ClusterVolumeSpec is filled in, then this is a cluster volume
// and is created through the swarm cluster volume backend.
//
// re: volume name duplication
//
// As it happens, there is no good way to prevent duplication of a volume
// name between local and cluster volumes. This is because Swarm volumes
// can be created from any manager node, bypassing most of the protections
// we could put into the engine side.
//
// Instead, we will allow creating a volume with a duplicate name, which
// should not break anything.
if req.ClusterVolumeSpec != nil && versions.GreaterThanOrEqualTo(version, clusterVolumesVersion) {
logrus.Debug("using cluster volume")
vol, err = v.cluster.CreateVolume(req)
} else {
logrus.Debug("using regular volume")
vol, err = v.backend.Create(ctx, req.Name, req.Driver, opts.WithCreateOptions(req.DriverOpts), opts.WithCreateLabels(req.Labels))
}
if err != nil {
return err
}
return httputils.WriteJSON(w, http.StatusCreated, vol)
}
func (v *volumeRouter) putVolumesUpdate(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if !v.cluster.IsManager() {
return errdefs.Unavailable(errors.New("volume update only valid for cluster volumes, but swarm is unavailable"))
}
if err := httputils.ParseForm(r); err != nil {
return err
}
rawVersion := r.URL.Query().Get("version")
version, err := strconv.ParseUint(rawVersion, 10, 64)
if err != nil {
err = fmt.Errorf("invalid swarm object version '%s': %v", rawVersion, err)
var req volumetypes.VolumeCreateBody
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
if err == io.EOF {
return errdefs.InvalidParameter(errors.New("got EOF while reading request body"))
}
return errdefs.InvalidParameter(err)
}
var req volume.UpdateOptions
if err := httputils.ReadJSON(r, &req); err != nil {
volume, err := v.backend.Create(ctx, req.Name, req.Driver, opts.WithCreateOptions(req.DriverOpts), opts.WithCreateLabels(req.Labels))
if err != nil {
return err
}
return v.cluster.UpdateVolume(vars["name"], version, req)
return httputils.WriteJSON(w, http.StatusCreated, volume)
}
func (v *volumeRouter) deleteVolumes(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
@@ -158,26 +71,9 @@ func (v *volumeRouter) deleteVolumes(ctx context.Context, w http.ResponseWriter,
return err
}
force := httputils.BoolValue(r, "force")
version := httputils.VersionFromContext(ctx)
err := v.backend.Remove(ctx, vars["name"], opts.WithPurgeOnError(force))
// when a removal is forced, no error is returned if the volume does not
// exist. This means that, to ensure forcing works on swarm volumes as
// well, we should always also force-remove against the cluster.
if err != nil || force {
if versions.GreaterThanOrEqualTo(version, clusterVolumesVersion) && v.cluster.IsManager() {
if errdefs.IsNotFound(err) || force {
err := v.cluster.RemoveVolume(vars["name"], force)
if err != nil {
return err
}
}
} else {
return err
}
if err := v.backend.Remove(ctx, vars["name"], opts.WithPurgeOnError(force)); err != nil {
return err
}
w.WriteHeader(http.StatusNoContent)
return nil
}
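The force-removal rule in this hunk can be captured in a small predicate. A stand-in helper, not part of the source, that states when the cluster removal is also attempted:

// shouldTryCluster restates the rule above: attempt the Swarm removal when
// the local backend reported NotFound, or whenever the caller forced it
// (and only on a manager with a new enough API version).
func shouldTryCluster(localErr error, force, isManager bool, apiVersion string) bool {
    if !isManager || versions.LessThan(apiVersion, clusterVolumesVersion) {
        return false
    }
    return errdefs.IsNotFound(localErr) || force
}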
@@ -192,12 +88,6 @@ func (v *volumeRouter) postVolumesPrune(ctx context.Context, w http.ResponseWrit
return err
}
// API version 1.42 changes prune behavior to only prune anonymous volumes.
// To keep the older API behavior working, add this filter option so that all (local) volumes are considered for pruning, not just anonymous ones.
if versions.LessThan(httputils.VersionFromContext(ctx), "1.42") {
pruneFilters.Add("all", "true")
}
pruneReport, err := v.backend.Prune(ctx, pruneFilters)
if err != nil {
return err

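A minimal sketch of the compatibility shim described above, assuming the documented filters.Args API:

// legacyPruneFilters widens pruning for pre-1.42 callers, which expect all
// local volumes to be considered, not only anonymous ones (sketch).
func legacyPruneFilters(apiVersion string, pruneFilters filters.Args) filters.Args {
    if versions.LessThan(apiVersion, "1.42") {
        pruneFilters.Add("all", "true")
    }
    return pruneFilters
}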

@@ -1,760 +0,0 @@
package volume
import (
"bytes"
"context"
"encoding/json"
"fmt"
"net/http/httptest"
"testing"
"gotest.tools/v3/assert"
"github.com/docker/docker/api/server/httputils"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/filters"
"github.com/docker/docker/api/types/volume"
"github.com/docker/docker/errdefs"
"github.com/docker/docker/volume/service/opts"
)
func callGetVolume(v *volumeRouter, name string) (*httptest.ResponseRecorder, error) {
ctx := context.WithValue(context.Background(), httputils.APIVersionKey{}, clusterVolumesVersion)
vars := map[string]string{"name": name}
req := httptest.NewRequest("GET", fmt.Sprintf("/volumes/%s", name), nil)
resp := httptest.NewRecorder()
err := v.getVolumeByName(ctx, resp, req, vars)
return resp, err
}
func callListVolumes(v *volumeRouter) (*httptest.ResponseRecorder, error) {
ctx := context.WithValue(context.Background(), httputils.APIVersionKey{}, clusterVolumesVersion)
vars := map[string]string{}
req := httptest.NewRequest("GET", "/volumes", nil)
resp := httptest.NewRecorder()
err := v.getVolumesList(ctx, resp, req, vars)
return resp, err
}
func TestGetVolumeByNameNotFoundNoSwarm(t *testing.T) {
v := &volumeRouter{
backend: &fakeVolumeBackend{},
cluster: &fakeClusterBackend{},
}
_, err := callGetVolume(v, "notReal")
assert.Assert(t, err != nil)
assert.Assert(t, errdefs.IsNotFound(err))
}
func TestGetVolumeByNameNotFoundNotManager(t *testing.T) {
v := &volumeRouter{
backend: &fakeVolumeBackend{},
cluster: &fakeClusterBackend{swarm: true},
}
_, err := callGetVolume(v, "notReal")
assert.Assert(t, err != nil)
assert.Assert(t, errdefs.IsNotFound(err))
}
func TestGetVolumeByNameNotFound(t *testing.T) {
v := &volumeRouter{
backend: &fakeVolumeBackend{},
cluster: &fakeClusterBackend{swarm: true, manager: true},
}
_, err := callGetVolume(v, "notReal")
assert.Assert(t, err != nil)
assert.Assert(t, errdefs.IsNotFound(err))
}
func TestGetVolumeByNameFoundRegular(t *testing.T) {
v := &volumeRouter{
backend: &fakeVolumeBackend{
volumes: map[string]*volume.Volume{
"volume1": {
Name: "volume1",
},
},
},
cluster: &fakeClusterBackend{swarm: true, manager: true},
}
_, err := callGetVolume(v, "volume1")
assert.NilError(t, err)
}
func TestGetVolumeByNameFoundSwarm(t *testing.T) {
v := &volumeRouter{
backend: &fakeVolumeBackend{},
cluster: &fakeClusterBackend{
swarm: true,
manager: true,
volumes: map[string]*volume.Volume{
"volume1": {
Name: "volume1",
},
},
},
}
_, err := callGetVolume(v, "volume1")
assert.NilError(t, err)
}
func TestListVolumes(t *testing.T) {
v := &volumeRouter{
backend: &fakeVolumeBackend{
volumes: map[string]*volume.Volume{
"v1": {Name: "v1"},
"v2": {Name: "v2"},
},
},
cluster: &fakeClusterBackend{
swarm: true,
manager: true,
volumes: map[string]*volume.Volume{
"v3": {Name: "v3"},
"v4": {Name: "v4"},
},
},
}
resp, err := callListVolumes(v)
assert.NilError(t, err)
d := json.NewDecoder(resp.Result().Body)
respVols := volume.ListResponse{}
assert.NilError(t, d.Decode(&respVols))
assert.Assert(t, respVols.Volumes != nil)
assert.Equal(t, len(respVols.Volumes), 4, "volumes %v", respVols.Volumes)
}
func TestListVolumesNoSwarm(t *testing.T) {
v := &volumeRouter{
backend: &fakeVolumeBackend{
volumes: map[string]*volume.Volume{
"v1": {Name: "v1"},
"v2": {Name: "v2"},
},
},
cluster: &fakeClusterBackend{},
}
_, err := callListVolumes(v)
assert.NilError(t, err)
}
func TestListVolumesNoManager(t *testing.T) {
v := &volumeRouter{
backend: &fakeVolumeBackend{
volumes: map[string]*volume.Volume{
"v1": {Name: "v1"},
"v2": {Name: "v2"},
},
},
cluster: &fakeClusterBackend{swarm: true},
}
resp, err := callListVolumes(v)
assert.NilError(t, err)
d := json.NewDecoder(resp.Result().Body)
respVols := volume.ListResponse{}
assert.NilError(t, d.Decode(&respVols))
assert.Equal(t, len(respVols.Volumes), 2)
assert.Equal(t, len(respVols.Warnings), 0)
}
func TestCreateRegularVolume(t *testing.T) {
b := &fakeVolumeBackend{}
c := &fakeClusterBackend{
swarm: true,
manager: true,
}
v := &volumeRouter{
backend: b,
cluster: c,
}
volumeCreate := volume.CreateOptions{
Name: "vol1",
Driver: "foodriver",
}
buf := bytes.Buffer{}
e := json.NewEncoder(&buf)
e.Encode(volumeCreate)
ctx := context.WithValue(context.Background(), httputils.APIVersionKey{}, clusterVolumesVersion)
req := httptest.NewRequest("POST", "/volumes/create", &buf)
req.Header.Add("Content-Type", "application/json")
resp := httptest.NewRecorder()
err := v.postVolumesCreate(ctx, resp, req, nil)
assert.NilError(t, err)
respVolume := volume.Volume{}
assert.NilError(t, json.NewDecoder(resp.Result().Body).Decode(&respVolume))
assert.Equal(t, respVolume.Name, "vol1")
assert.Equal(t, respVolume.Driver, "foodriver")
assert.Equal(t, 1, len(b.volumes))
assert.Equal(t, 0, len(c.volumes))
}
func TestCreateSwarmVolumeNoSwarm(t *testing.T) {
b := &fakeVolumeBackend{}
c := &fakeClusterBackend{}
v := &volumeRouter{
backend: b,
cluster: c,
}
volumeCreate := volume.CreateOptions{
ClusterVolumeSpec: &volume.ClusterVolumeSpec{},
Name: "volCluster",
Driver: "someCSI",
}
buf := bytes.Buffer{}
json.NewEncoder(&buf).Encode(volumeCreate)
ctx := context.WithValue(context.Background(), httputils.APIVersionKey{}, clusterVolumesVersion)
req := httptest.NewRequest("POST", "/volumes/create", &buf)
req.Header.Add("Content-Type", "application/json")
resp := httptest.NewRecorder()
err := v.postVolumesCreate(ctx, resp, req, nil)
assert.Assert(t, err != nil)
assert.Assert(t, errdefs.IsUnavailable(err))
}
func TestCreateSwarmVolumeNotManager(t *testing.T) {
b := &fakeVolumeBackend{}
c := &fakeClusterBackend{swarm: true}
v := &volumeRouter{
backend: b,
cluster: c,
}
volumeCreate := volume.CreateOptions{
ClusterVolumeSpec: &volume.ClusterVolumeSpec{},
Name: "volCluster",
Driver: "someCSI",
}
buf := bytes.Buffer{}
json.NewEncoder(&buf).Encode(volumeCreate)
ctx := context.WithValue(context.Background(), httputils.APIVersionKey{}, clusterVolumesVersion)
req := httptest.NewRequest("POST", "/volumes/create", &buf)
req.Header.Add("Content-Type", "application/json")
resp := httptest.NewRecorder()
err := v.postVolumesCreate(ctx, resp, req, nil)
assert.Assert(t, err != nil)
assert.Assert(t, errdefs.IsUnavailable(err))
}
func TestCreateVolumeCluster(t *testing.T) {
b := &fakeVolumeBackend{}
c := &fakeClusterBackend{
swarm: true,
manager: true,
}
v := &volumeRouter{
backend: b,
cluster: c,
}
volumeCreate := volume.CreateOptions{
ClusterVolumeSpec: &volume.ClusterVolumeSpec{},
Name: "volCluster",
Driver: "someCSI",
}
buf := bytes.Buffer{}
json.NewEncoder(&buf).Encode(volumeCreate)
ctx := context.WithValue(context.Background(), httputils.APIVersionKey{}, clusterVolumesVersion)
req := httptest.NewRequest("POST", "/volumes/create", &buf)
req.Header.Add("Content-Type", "application/json")
resp := httptest.NewRecorder()
err := v.postVolumesCreate(ctx, resp, req, nil)
assert.NilError(t, err)
respVolume := volume.Volume{}
assert.NilError(t, json.NewDecoder(resp.Result().Body).Decode(&respVolume))
assert.Equal(t, respVolume.Name, "volCluster")
assert.Equal(t, respVolume.Driver, "someCSI")
assert.Equal(t, 0, len(b.volumes))
assert.Equal(t, 1, len(c.volumes))
}
func TestUpdateVolume(t *testing.T) {
b := &fakeVolumeBackend{}
c := &fakeClusterBackend{
swarm: true,
manager: true,
volumes: map[string]*volume.Volume{
"vol1": {
Name: "vo1",
ClusterVolume: &volume.ClusterVolume{
ID: "vol1",
},
},
},
}
v := &volumeRouter{
backend: b,
cluster: c,
}
volumeUpdate := volume.UpdateOptions{
Spec: &volume.ClusterVolumeSpec{},
}
buf := bytes.Buffer{}
json.NewEncoder(&buf).Encode(volumeUpdate)
ctx := context.WithValue(context.Background(), httputils.APIVersionKey{}, clusterVolumesVersion)
req := httptest.NewRequest("POST", "/volumes/vol1/update?version=0", &buf)
req.Header.Add("Content-Type", "application/json")
resp := httptest.NewRecorder()
err := v.putVolumesUpdate(ctx, resp, req, map[string]string{"name": "vol1"})
assert.NilError(t, err)
assert.Equal(t, c.volumes["vol1"].ClusterVolume.Meta.Version.Index, uint64(1))
}
func TestUpdateVolumeNoSwarm(t *testing.T) {
b := &fakeVolumeBackend{}
c := &fakeClusterBackend{}
v := &volumeRouter{
backend: b,
cluster: c,
}
volumeUpdate := volume.UpdateOptions{
Spec: &volume.ClusterVolumeSpec{},
}
buf := bytes.Buffer{}
json.NewEncoder(&buf).Encode(volumeUpdate)
ctx := context.WithValue(context.Background(), httputils.APIVersionKey{}, clusterVolumesVersion)
req := httptest.NewRequest("POST", "/volumes/vol1/update?version=0", &buf)
req.Header.Add("Content-Type", "application/json")
resp := httptest.NewRecorder()
err := v.putVolumesUpdate(ctx, resp, req, map[string]string{"name": "vol1"})
assert.Assert(t, err != nil)
assert.Assert(t, errdefs.IsUnavailable(err))
}
func TestUpdateVolumeNotFound(t *testing.T) {
b := &fakeVolumeBackend{}
c := &fakeClusterBackend{
swarm: true,
manager: true,
volumes: map[string]*volume.Volume{},
}
v := &volumeRouter{
backend: b,
cluster: c,
}
volumeUpdate := volume.UpdateOptions{
Spec: &volume.ClusterVolumeSpec{},
}
buf := bytes.Buffer{}
json.NewEncoder(&buf).Encode(volumeUpdate)
ctx := context.WithValue(context.Background(), httputils.APIVersionKey{}, clusterVolumesVersion)
req := httptest.NewRequest("POST", "/volumes/vol1/update?version=0", &buf)
req.Header.Add("Content-Type", "application/json")
resp := httptest.NewRecorder()
err := v.putVolumesUpdate(ctx, resp, req, map[string]string{"name": "vol1"})
assert.Assert(t, err != nil)
assert.Assert(t, errdefs.IsNotFound(err))
}
func TestVolumeRemove(t *testing.T) {
b := &fakeVolumeBackend{
volumes: map[string]*volume.Volume{
"vol1": {
Name: "vol1",
},
},
}
c := &fakeClusterBackend{swarm: true, manager: true}
v := &volumeRouter{
backend: b,
cluster: c,
}
ctx := context.WithValue(context.Background(), httputils.APIVersionKey{}, clusterVolumesVersion)
req := httptest.NewRequest("DELETE", "/volumes/vol1", nil)
resp := httptest.NewRecorder()
err := v.deleteVolumes(ctx, resp, req, map[string]string{"name": "vol1"})
assert.NilError(t, err)
assert.Equal(t, len(b.volumes), 0)
}
func TestVolumeRemoveSwarm(t *testing.T) {
b := &fakeVolumeBackend{}
c := &fakeClusterBackend{
swarm: true,
manager: true,
volumes: map[string]*volume.Volume{
"vol1": {
Name: "vol1",
ClusterVolume: &volume.ClusterVolume{},
},
},
}
v := &volumeRouter{
backend: b,
cluster: c,
}
ctx := context.WithValue(context.Background(), httputils.APIVersionKey{}, clusterVolumesVersion)
req := httptest.NewRequest("DELETE", "/volumes/vol1", nil)
resp := httptest.NewRecorder()
err := v.deleteVolumes(ctx, resp, req, map[string]string{"name": "vol1"})
assert.NilError(t, err)
assert.Equal(t, len(c.volumes), 0)
}
func TestVolumeRemoveNotFoundNoSwarm(t *testing.T) {
b := &fakeVolumeBackend{}
c := &fakeClusterBackend{}
v := &volumeRouter{
backend: b,
cluster: c,
}
ctx := context.WithValue(context.Background(), httputils.APIVersionKey{}, clusterVolumesVersion)
req := httptest.NewRequest("DELETE", "/volumes/vol1", nil)
resp := httptest.NewRecorder()
err := v.deleteVolumes(ctx, resp, req, map[string]string{"name": "vol1"})
assert.Assert(t, err != nil)
assert.Assert(t, errdefs.IsNotFound(err), err.Error())
}
func TestVolumeRemoveNotFoundNoManager(t *testing.T) {
b := &fakeVolumeBackend{}
c := &fakeClusterBackend{swarm: true}
v := &volumeRouter{
backend: b,
cluster: c,
}
ctx := context.WithValue(context.Background(), httputils.APIVersionKey{}, clusterVolumesVersion)
req := httptest.NewRequest("DELETE", "/volumes/vol1", nil)
resp := httptest.NewRecorder()
err := v.deleteVolumes(ctx, resp, req, map[string]string{"name": "vol1"})
assert.Assert(t, err != nil)
assert.Assert(t, errdefs.IsNotFound(err))
}
func TestVolumeRemoveFoundNoSwarm(t *testing.T) {
b := &fakeVolumeBackend{
volumes: map[string]*volume.Volume{
"vol1": {
Name: "vol1",
},
},
}
c := &fakeClusterBackend{}
v := &volumeRouter{
backend: b,
cluster: c,
}
ctx := context.WithValue(context.Background(), httputils.APIVersionKey{}, clusterVolumesVersion)
req := httptest.NewRequest("DELETE", "/volumes/vol1", nil)
resp := httptest.NewRecorder()
err := v.deleteVolumes(ctx, resp, req, map[string]string{"name": "vol1"})
assert.NilError(t, err)
assert.Equal(t, len(b.volumes), 0)
}
func TestVolumeRemoveNoSwarmInUse(t *testing.T) {
b := &fakeVolumeBackend{
volumes: map[string]*volume.Volume{
"inuse": {
Name: "inuse",
},
},
}
c := &fakeClusterBackend{}
v := &volumeRouter{
backend: b,
cluster: c,
}
ctx := context.WithValue(context.Background(), httputils.APIVersionKey{}, clusterVolumesVersion)
req := httptest.NewRequest("DELETE", "/volumes/inuse", nil)
resp := httptest.NewRecorder()
err := v.deleteVolumes(ctx, resp, req, map[string]string{"name": "inuse"})
assert.Assert(t, err != nil)
assert.Assert(t, errdefs.IsConflict(err))
}
func TestVolumeRemoveSwarmForce(t *testing.T) {
b := &fakeVolumeBackend{}
c := &fakeClusterBackend{
swarm: true,
manager: true,
volumes: map[string]*volume.Volume{
"vol1": {
Name: "vol1",
ClusterVolume: &volume.ClusterVolume{},
Options: map[string]string{"mustforce": "yes"},
},
},
}
v := &volumeRouter{
backend: b,
cluster: c,
}
ctx := context.WithValue(context.Background(), httputils.APIVersionKey{}, clusterVolumesVersion)
req := httptest.NewRequest("DELETE", "/volumes/vol1", nil)
resp := httptest.NewRecorder()
err := v.deleteVolumes(ctx, resp, req, map[string]string{"name": "vol1"})
assert.Assert(t, err != nil)
assert.Assert(t, errdefs.IsConflict(err))
ctx = context.WithValue(context.Background(), httputils.APIVersionKey{}, clusterVolumesVersion)
req = httptest.NewRequest("DELETE", "/volumes/vol1?force=1", nil)
resp = httptest.NewRecorder()
err = v.deleteVolumes(ctx, resp, req, map[string]string{"name": "vol1"})
assert.NilError(t, err)
assert.Equal(t, len(b.volumes), 0)
assert.Equal(t, len(c.volumes), 0)
}
type fakeVolumeBackend struct {
volumes map[string]*volume.Volume
}
func (b *fakeVolumeBackend) List(_ context.Context, _ filters.Args) ([]*volume.Volume, []string, error) {
volumes := []*volume.Volume{}
for _, v := range b.volumes {
volumes = append(volumes, v)
}
return volumes, nil, nil
}
func (b *fakeVolumeBackend) Get(_ context.Context, name string, _ ...opts.GetOption) (*volume.Volume, error) {
if v, ok := b.volumes[name]; ok {
return v, nil
}
return nil, errdefs.NotFound(fmt.Errorf("volume %s not found", name))
}
func (b *fakeVolumeBackend) Create(_ context.Context, name, driverName string, _ ...opts.CreateOption) (*volume.Volume, error) {
if _, ok := b.volumes[name]; ok {
// TODO(dperny): return appropriate error type
return nil, fmt.Errorf("already exists")
}
v := &volume.Volume{
Name: name,
Driver: driverName,
}
if b.volumes == nil {
b.volumes = map[string]*volume.Volume{
name: v,
}
} else {
b.volumes[name] = v
}
return v, nil
}
func (b *fakeVolumeBackend) Remove(_ context.Context, name string, o ...opts.RemoveOption) error {
removeOpts := &opts.RemoveConfig{}
for _, opt := range o {
opt(removeOpts)
}
if v, ok := b.volumes[name]; !ok {
if !removeOpts.PurgeOnError {
return errdefs.NotFound(fmt.Errorf("volume %s not found", name))
}
} else if v.Name == "inuse" {
return errdefs.Conflict(fmt.Errorf("volume in use"))
}
delete(b.volumes, name)
return nil
}
func (b *fakeVolumeBackend) Prune(_ context.Context, _ filters.Args) (*types.VolumesPruneReport, error) {
return nil, nil
}
type fakeClusterBackend struct {
swarm bool
manager bool
idCount int
volumes map[string]*volume.Volume
}
func (c *fakeClusterBackend) checkSwarm() error {
if !c.swarm {
return errdefs.Unavailable(fmt.Errorf("this node is not a swarm manager. Use \"docker swarm init\" or \"docker swarm join\" to connect this node to swarm and try again"))
} else if !c.manager {
return errdefs.Unavailable(fmt.Errorf("this node is not a swarm manager. Worker nodes can't be used to view or modify cluster state. Please run this command on a manager node or promote the current node to a manager"))
}
return nil
}
func (c *fakeClusterBackend) IsManager() bool {
return c.swarm && c.manager
}
func (c *fakeClusterBackend) GetVolume(nameOrID string) (volume.Volume, error) {
if err := c.checkSwarm(); err != nil {
return volume.Volume{}, err
}
if v, ok := c.volumes[nameOrID]; ok {
return *v, nil
}
return volume.Volume{}, errdefs.NotFound(fmt.Errorf("volume %s not found", nameOrID))
}
func (c *fakeClusterBackend) GetVolumes(options volume.ListOptions) ([]*volume.Volume, error) {
if err := c.checkSwarm(); err != nil {
return nil, err
}
volumes := []*volume.Volume{}
for _, v := range c.volumes {
volumes = append(volumes, v)
}
return volumes, nil
}
func (c *fakeClusterBackend) CreateVolume(volumeCreate volume.CreateOptions) (*volume.Volume, error) {
if err := c.checkSwarm(); err != nil {
return nil, err
}
if _, ok := c.volumes[volumeCreate.Name]; ok {
// TODO(dperny): return appropriate already exists error
return nil, fmt.Errorf("already exists")
}
v := &volume.Volume{
Name: volumeCreate.Name,
Driver: volumeCreate.Driver,
Labels: volumeCreate.Labels,
Options: volumeCreate.DriverOpts,
Scope: "global",
}
v.ClusterVolume = &volume.ClusterVolume{
ID: fmt.Sprintf("cluster_%d", c.idCount),
Spec: *volumeCreate.ClusterVolumeSpec,
}
c.idCount = c.idCount + 1
if c.volumes == nil {
c.volumes = map[string]*volume.Volume{
v.Name: v,
}
} else {
c.volumes[v.Name] = v
}
return v, nil
}
func (c *fakeClusterBackend) RemoveVolume(nameOrID string, force bool) error {
if err := c.checkSwarm(); err != nil {
return err
}
v, ok := c.volumes[nameOrID]
if !ok {
return errdefs.NotFound(fmt.Errorf("volume %s not found", nameOrID))
}
if _, mustforce := v.Options["mustforce"]; mustforce && !force {
return errdefs.Conflict(fmt.Errorf("volume %s must be force removed", nameOrID))
}
delete(c.volumes, nameOrID)
return nil
}
func (c *fakeClusterBackend) UpdateVolume(nameOrID string, version uint64, _ volume.UpdateOptions) error {
if err := c.checkSwarm(); err != nil {
return err
}
if v, ok := c.volumes[nameOrID]; ok {
if v.ClusterVolume.Meta.Version.Index != version {
return fmt.Errorf("wrong version")
}
v.ClusterVolume.Meta.Version.Index = v.ClusterVolume.Meta.Version.Index + 1
// for testing, we don't actually need to change anything about the
// volume object. let's just increment the version so we can see the
// call happened.
} else {
return errdefs.NotFound(fmt.Errorf("volume %q not found", nameOrID))
}
return nil
}


@@ -6,14 +6,13 @@ import (
"net"
"net/http"
"strings"
"time"
"github.com/docker/docker/api/server/httpstatus"
"github.com/docker/docker/api/server/httputils"
"github.com/docker/docker/api/server/middleware"
"github.com/docker/docker/api/server/router"
"github.com/docker/docker/api/server/router/debug"
"github.com/docker/docker/dockerversion"
"github.com/docker/docker/errdefs"
"github.com/gorilla/mux"
"github.com/sirupsen/logrus"
)
@@ -24,12 +23,11 @@ const versionMatcher = "/v{version:[0-9.]+}"
// Config provides the configuration for the API server
type Config struct {
Logging bool
CorsHeaders string
Version string
SocketGroup string
TLSConfig *tls.Config
// Hosts is a list of addresses for the API to listen on.
Hosts []string
}
// Server contains instance details for the server
@@ -59,8 +57,7 @@ func (s *Server) Accept(addr string, listeners ...net.Listener) {
for _, listener := range listeners {
httpServer := &HTTPServer{
srv: &http.Server{
Addr: addr,
ReadHeaderTimeout: 5 * time.Minute, // "G112: Potential Slowloris Attack (gosec)"; not a real concern for our use, so setting a long timeout.
Addr: addr,
},
l: listener,
}
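The G112 note refers to Slowloris-style attacks, where a client trickles header bytes to keep connections pinned; bounding header reads is the standard mitigation. A self-contained sketch:

package main

import (
    "net/http"
    "time"
)

func main() {
    srv := &http.Server{
        Addr: ":8080",
        // Bound how long a client may take to send request headers;
        // this mitigates Slowloris-style slow-header attacks (gosec G112).
        ReadHeaderTimeout: 5 * time.Minute,
    }
    _ = srv.ListenAndServe()
}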
@@ -142,11 +139,11 @@ func (s *Server) makeHTTPHandler(handler httputils.APIFunc) http.HandlerFunc {
}
if err := handlerFunc(ctx, w, r, vars); err != nil {
statusCode := httpstatus.FromError(err)
statusCode := errdefs.GetHTTPErrorStatusCode(err)
if statusCode >= 500 {
logrus.Errorf("Handler for %s %s returned error: %v", r.Method, r.URL.Path, err)
}
makeErrorHandler(err)(w, r)
httputils.MakeErrorHandler(err)(w, r)
}
}
}
@@ -187,7 +184,7 @@ func (s *Server) createMux() *mux.Router {
m.Path("/debug" + r.Path()).Handler(f)
}
notFoundHandler := makeErrorHandler(pageNotFoundError{})
notFoundHandler := httputils.MakeErrorHandler(pageNotFoundError{})
m.HandleFunc(versionMatcher+"/{path:.*}", notFoundHandler)
m.NotFoundHandler = notFoundHandler
m.MethodNotAllowedHandler = notFoundHandler

File diff suppressed because it is too large


@@ -10,15 +10,18 @@ import (
// ContainerAttachConfig holds the streams to use when connecting to a container to view logs.
type ContainerAttachConfig struct {
GetStreams func(multiplexed bool) (io.ReadCloser, io.Writer, io.Writer, error)
GetStreams func() (io.ReadCloser, io.Writer, io.Writer, error)
UseStdin bool
UseStdout bool
UseStderr bool
Logs bool
Stream bool
DetachKeys string
// Used to signify that streams must be multiplexed by the producer, as the endpoint can't manage multiple streams.
// This is typically set by the HTTP endpoint, while websockets can transport raw streams.
// Used to signify that streams are multiplexed and therefore need a StdWriter to encode stdout/stderr messages accordingly.
// TODO @cpuguy83: This shouldn't be needed. It was only added so that http and websocket endpoints can use the same function, and the websocket function was not using a stdwriter prior to this change...
// HOWEVER, the websocket endpoint is using a single stream and SHOULD be encoded with stdout/stderr as is done for HTTP since it is still just a single stream.
// Since such a change is an API change unrelated to the current changeset we'll keep it as is here and change separately.
MuxStreams bool
}
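When MuxStreams is set, the producer interleaves stdout and stderr on a single stream using the stdcopy framing from pkg/stdcopy. A runnable sketch of both sides of that framing:

package main

import (
    "bytes"
    "os"

    "github.com/docker/docker/pkg/stdcopy"
)

func main() {
    var mux bytes.Buffer
    // Producer: one framed writer per source stream, same destination.
    stdcopy.NewStdWriter(&mux, stdcopy.Stdout).Write([]byte("normal output\n"))
    stdcopy.NewStdWriter(&mux, stdcopy.Stderr).Write([]byte("error output\n"))

    // Consumer: demultiplex the frames back into separate destinations.
    if _, err := stdcopy.StdCopy(os.Stdout, os.Stderr, &mux); err != nil {
        panic(err)
    }
}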
@@ -35,6 +38,8 @@ type PartialLogMetaData struct {
// LogMessage is a data structure that represents a piece of output produced
// by some container. The Line member is a slice of an array whose contents
// can be changed after a log driver's Log() method returns.
// Changes to this struct need to be reflected in the reset method in
// daemon/logger/logger.go
type LogMessage struct {
Line []byte
Source string


@@ -59,6 +59,7 @@ type ContainerExecInspect struct {
// ContainerListOptions holds parameters to list containers with.
type ContainerListOptions struct {
Quiet bool
Size bool
All bool
Latest bool
@@ -112,16 +113,10 @@ type NetworkListOptions struct {
Filters filters.Args
}
// NewHijackedResponse initializes a HijackedResponse type
func NewHijackedResponse(conn net.Conn, mediaType string) HijackedResponse {
return HijackedResponse{Conn: conn, Reader: bufio.NewReader(conn), mediaType: mediaType}
}
// HijackedResponse holds connection information for a hijacked request.
type HijackedResponse struct {
mediaType string
Conn net.Conn
Reader *bufio.Reader
Conn net.Conn
Reader *bufio.Reader
}
// Close closes the hijacked connection and reader.
@@ -129,15 +124,6 @@ func (h *HijackedResponse) Close() {
h.Conn.Close()
}
// MediaType lets the client know whether the HijackedResponse holds a raw or a multiplexed stream.
// It returns false if the HTTP Content-Type is not relevant, and the container must be inspected instead.
func (h *HijackedResponse) MediaType() (string, bool) {
if h.mediaType == "" {
return "", false
}
return h.mediaType, true
}
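Client-side, MediaType lets a caller choose between raw copying and stdcopy demultiplexing without an extra container inspect. A sketch (assumes io, os, the types package, and pkg/stdcopy are imported):

// demuxAttach copies a hijacked attach stream to stdout/stderr, honoring
// the negotiated media type when the server provided one (sketch).
func demuxAttach(resp types.HijackedResponse) error {
    if mt, ok := resp.MediaType(); ok && mt == types.MediaTypeMultiplexedStream {
        // Server multiplexed stdout/stderr into one stream: demultiplex.
        _, err := stdcopy.StdCopy(os.Stdout, os.Stderr, resp.Reader)
        return err
    }
    // Raw TTY stream, or an older daemon that didn't set the header.
    _, err := io.Copy(os.Stdout, resp.Reader)
    return err
}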
// CloseWriter is an interface implemented by structs
// that can close their input streams to prevent further writing.
type CloseWriter interface {
@@ -250,20 +236,10 @@ type ImageImportOptions struct {
Platform string // Platform is the target platform of the image
}
// ImageListOptions holds parameters to list images with.
// ImageListOptions holds parameters to filter the list of images with.
type ImageListOptions struct {
// All controls whether all images in the graph are filtered, or just
// the heads.
All bool
// Filters is a JSON-encoded set of filter arguments.
All bool
Filters filters.Args
// SharedSize indicates whether the shared size of images should be computed.
SharedSize bool
// ContainerCount indicates whether container count should be computed.
ContainerCount bool
}
// ImageLoadResponse returns information to the client about a load process.


@@ -33,7 +33,6 @@ type ExecConfig struct {
User string // User that will run the command
Privileged bool // Is the container in privileged mode
Tty bool // Attach standard streams to a tty.
ConsoleSize *[2]uint `json:",omitempty"` // Initial console size [height, width]
AttachStdin bool // Attach the standard input, makes possible user interaction
AttachStderr bool // Attach the standard error
AttachStdout bool // Attach the standard output


@@ -1,7 +1,6 @@
package container // import "github.com/docker/docker/api/types/container"
import (
"io"
"time"
"github.com/docker/docker/api/types/strslice"
@@ -14,24 +13,6 @@ import (
// Docker interprets it as 3 nanoseconds.
const MinimumDuration = 1 * time.Millisecond
// StopOptions holds the options to stop or restart a container.
type StopOptions struct {
// Signal (optional) is the signal to send to the container to (gracefully)
// stop it before forcibly terminating the container with SIGKILL after the
// timeout expires. If no value is set, the default (SIGTERM) is used.
Signal string `json:",omitempty"`
// Timeout (optional) is the timeout (in seconds) to wait for the container
// to stop gracefully before forcibly terminating it with SIGKILL.
//
// - Use nil to use the default timeout (10 seconds).
// - Use '-1' to wait indefinitely.
// - Use '0' to not wait for the container to exit gracefully, and
// immediately proceed to forcibly terminating the container.
// - Other positive values are used as timeout (in seconds).
Timeout *int `json:",omitempty"`
}
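Because Timeout is a pointer, the three interesting cases are easy to get wrong. Spelled out (intPtr is a local helper, not part of the API):

func intPtr(i int) *int { return &i }

var (
    stopDefault = container.StopOptions{}                                       // nil Timeout: daemon default (10s), then SIGKILL
    stopForever = container.StopOptions{Signal: "SIGTERM", Timeout: intPtr(-1)} // -1: wait indefinitely
    stopNow     = container.StopOptions{Timeout: intPtr(0)}                     // 0: no grace period, SIGKILL immediately
)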
// HealthConfig holds configuration settings for the HEALTHCHECK feature.
type HealthConfig struct {
// Test is the test to perform to check that the container is healthy.
@@ -53,14 +34,6 @@ type HealthConfig struct {
Retries int `json:",omitempty"`
}
// ExecStartOptions holds the options to start container's exec.
type ExecStartOptions struct {
Stdin io.Reader
Stdout io.Writer
Stderr io.Writer
ConsoleSize *[2]uint `json:",omitempty"`
}
// Config contains the configuration data about a container.
// It should hold only portable information about the container.
// Here, "portable" means "independent from the host we are running on".


@@ -0,0 +1,20 @@
package container // import "github.com/docker/docker/api/types/container"
// ----------------------------------------------------------------------------
// Code generated by `swagger generate operation`. DO NOT EDIT.
//
// See hack/generate-swagger-api.sh
// ----------------------------------------------------------------------------
// ContainerCreateCreatedBody OK response to ContainerCreate operation
// swagger:model ContainerCreateCreatedBody
type ContainerCreateCreatedBody struct {
// The ID of the created container
// Required: true
ID string `json:"Id"`
// Warnings encountered when creating the container
// Required: true
Warnings []string `json:"Warnings"`
}


@@ -0,0 +1,28 @@
package container // import "github.com/docker/docker/api/types/container"
// ----------------------------------------------------------------------------
// Code generated by `swagger generate operation`. DO NOT EDIT.
//
// See hack/generate-swagger-api.sh
// ----------------------------------------------------------------------------
// ContainerWaitOKBodyError container waiting error, if any
// swagger:model ContainerWaitOKBodyError
type ContainerWaitOKBodyError struct {
// Details of an error
Message string `json:"Message,omitempty"`
}
// ContainerWaitOKBody OK response to ContainerWait operation
// swagger:model ContainerWaitOKBody
type ContainerWaitOKBody struct {
// error
// Required: true
Error *ContainerWaitOKBodyError `json:"Error"`
// Exit code of the container
// Required: true
StatusCode int64 `json:"StatusCode"`
}


@@ -1,19 +0,0 @@
package container
// This file was generated by the swagger tool.
// Editing this file might prove futile when you re-run the swagger generate command
// CreateResponse ContainerCreateResponse
//
// OK response to ContainerCreate operation
// swagger:model CreateResponse
type CreateResponse struct {
// The ID of the created container
// Required: true
ID string `json:"Id"`
// Warnings encountered when creating the container
// Required: true
Warnings []string `json:"Warnings"`
}


@@ -1,16 +0,0 @@
package container // import "github.com/docker/docker/api/types/container"
// ContainerCreateCreatedBody OK response to ContainerCreate operation
//
// Deprecated: use CreateResponse
type ContainerCreateCreatedBody = CreateResponse
// ContainerWaitOKBody OK response to ContainerWait operation
//
// Deprecated: use WaitResponse
type ContainerWaitOKBody = WaitResponse
// ContainerWaitOKBodyError container waiting error, if any
//
// Deprecated: use WaitExitError
type ContainerWaitOKBodyError = WaitExitError


@@ -13,26 +13,19 @@ import (
// CgroupnsMode represents the cgroup namespace mode of the container
type CgroupnsMode string
// cgroup namespace modes for containers
const (
CgroupnsModeEmpty CgroupnsMode = ""
CgroupnsModePrivate CgroupnsMode = "private"
CgroupnsModeHost CgroupnsMode = "host"
)
// IsPrivate indicates whether the container uses its own private cgroup namespace
func (c CgroupnsMode) IsPrivate() bool {
return c == CgroupnsModePrivate
return c == "private"
}
// IsHost indicates whether the container shares the host's cgroup namespace
func (c CgroupnsMode) IsHost() bool {
return c == CgroupnsModeHost
return c == "host"
}
// IsEmpty indicates whether the container cgroup namespace mode is unset
func (c CgroupnsMode) IsEmpty() bool {
return c == CgroupnsModeEmpty
return c == ""
}
// Valid indicates whether the cgroup namespace mode is valid
@@ -44,69 +37,60 @@ func (c CgroupnsMode) Valid() bool {
// values are platform specific
type Isolation string
// Isolation modes for containers
const (
IsolationEmpty Isolation = "" // IsolationEmpty is unspecified (same behavior as default)
IsolationDefault Isolation = "default" // IsolationDefault is the default isolation mode on current daemon
IsolationProcess Isolation = "process" // IsolationProcess is process isolation mode
IsolationHyperV Isolation = "hyperv" // IsolationHyperV is HyperV isolation mode
)
// IsDefault indicates the default isolation technology of a container. On Linux this
// is the native driver. On Windows, this is a Windows Server Container.
func (i Isolation) IsDefault() bool {
// TODO consider making isolation-mode strict (case-sensitive)
v := Isolation(strings.ToLower(string(i)))
return v == IsolationDefault || v == IsolationEmpty
return strings.ToLower(string(i)) == "default" || string(i) == ""
}
// IsHyperV indicates the use of a Hyper-V partition for isolation
func (i Isolation) IsHyperV() bool {
// TODO consider making isolation-mode strict (case-sensitive)
return Isolation(strings.ToLower(string(i))) == IsolationHyperV
return strings.ToLower(string(i)) == "hyperv"
}
// IsProcess indicates the use of process isolation
func (i Isolation) IsProcess() bool {
// TODO consider making isolation-mode strict (case-sensitive)
return Isolation(strings.ToLower(string(i))) == IsolationProcess
return strings.ToLower(string(i)) == "process"
}
const (
// IsolationEmpty is unspecified (same behavior as default)
IsolationEmpty = Isolation("")
// IsolationDefault is the default isolation mode on current daemon
IsolationDefault = Isolation("default")
// IsolationProcess is process isolation mode
IsolationProcess = Isolation("process")
// IsolationHyperV is HyperV isolation mode
IsolationHyperV = Isolation("hyperv")
)
// IpcMode represents the container ipc stack.
type IpcMode string
// IpcMode constants
const (
IPCModeNone IpcMode = "none"
IPCModeHost IpcMode = "host"
IPCModeContainer IpcMode = "container"
IPCModePrivate IpcMode = "private"
IPCModeShareable IpcMode = "shareable"
)
// IsPrivate indicates whether the container uses its own private ipc namespace which can not be shared.
func (n IpcMode) IsPrivate() bool {
return n == IPCModePrivate
return n == "private"
}
// IsHost indicates whether the container shares the host's ipc namespace.
func (n IpcMode) IsHost() bool {
return n == IPCModeHost
return n == "host"
}
// IsShareable indicates whether the container's ipc namespace can be shared with another container.
func (n IpcMode) IsShareable() bool {
return n == IPCModeShareable
return n == "shareable"
}
// IsContainer indicates whether the container uses another container's ipc namespace.
func (n IpcMode) IsContainer() bool {
return strings.HasPrefix(string(n), string(IPCModeContainer)+":")
parts := strings.SplitN(string(n), ":", 2)
return len(parts) > 1 && parts[0] == "container"
}
// IsNone indicates whether container IpcMode is set to "none".
func (n IpcMode) IsNone() bool {
return n == IPCModeNone
return n == "none"
}
// IsEmpty indicates whether container IpcMode is empty
@@ -121,8 +105,9 @@ func (n IpcMode) Valid() bool {
// Container returns the name of the container whose ipc stack is going to be used.
func (n IpcMode) Container() string {
if n.IsContainer() {
return strings.TrimPrefix(string(n), string(IPCModeContainer)+":")
parts := strings.SplitN(string(n), ":", 2)
if len(parts) > 1 && parts[0] == "container" {
return parts[1]
}
return ""
}
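Both variants above parse the same container:<name|id> form; the newer one just reuses the IPCModeContainer constant with TrimPrefix. Expected behavior, illustrated (assumes fmt and the container package):

func ExampleIpcMode() {
    m := container.IpcMode("container:db")
    fmt.Println(m.IsContainer()) // true
    fmt.Println(m.Container())   // db
    fmt.Println(container.IpcMode("host").Container() == "") // true
}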
@@ -341,7 +326,7 @@ type LogMode string
// Available logging modes
const (
LogModeUnset LogMode = ""
LogModeUnset = ""
LogModeBlocking LogMode = "blocking"
LogModeNonBlock LogMode = "non-blocking"
)
@@ -376,17 +361,14 @@ type Resources struct {
Devices []DeviceMapping // List of devices to map inside the container
DeviceCgroupRules []string // List of rule to be added to the device cgroup
DeviceRequests []DeviceRequest // List of device requests for device drivers
// KernelMemory specifies the kernel memory limit (in bytes) for the container.
// Deprecated: kernel 5.4 deprecated kmem.limit_in_bytes.
KernelMemory int64 `json:",omitempty"`
KernelMemoryTCP int64 `json:",omitempty"` // Hard limit for kernel TCP buffer memory (in bytes)
MemoryReservation int64 // Memory soft limit (in bytes)
MemorySwap int64 // Total memory usage (memory + swap); set `-1` to enable unlimited swap
MemorySwappiness *int64 // Tuning container memory swappiness behaviour
OomKillDisable *bool // Whether to disable OOM Killer or not
PidsLimit *int64 // Setting PIDs limit for a container; Set `0` or `-1` for unlimited, or `null` to not change.
Ulimits []*units.Ulimit // List of ulimits to be set in the container
KernelMemory int64 // Kernel memory limit (in bytes), Deprecated: kernel 5.4 deprecated kmem.limit_in_bytes
KernelMemoryTCP int64 // Hard limit for kernel TCP buffer memory (in bytes)
MemoryReservation int64 // Memory soft limit (in bytes)
MemorySwap int64 // Total memory usage (memory + swap); set `-1` to enable unlimited swap
MemorySwappiness *int64 // Tuning container memory swappiness behaviour
OomKillDisable *bool // Whether to disable OOM Killer or not
PidsLimit *int64 // Setting PIDs limit for a container; Set `0` or `-1` for unlimited, or `null` to not change.
Ulimits []*units.Ulimit // List of ulimits to be set in the container
// Applicable to Windows
CPUCount int64 `json:"CpuCount"` // CPU count
@@ -417,7 +399,6 @@ type HostConfig struct {
AutoRemove bool // Automatically remove container when it exits
VolumeDriver string // Name of the volume driver used to mount volumes
VolumesFrom []string // List of volumes to take from other container
ConsoleSize [2]uint // Initial console size (height,width)
// Applicable to UNIX platforms
CapAdd strslice.StrSlice // List of kernel capabilities to add to the container
@@ -446,7 +427,8 @@ type HostConfig struct {
Runtime string `json:",omitempty"` // Runtime to use with this container
// Applicable to Windows
Isolation Isolation // Isolation technology of the container (e.g. default, hyperv)
ConsoleSize [2]uint // Initial console size (height,width)
Isolation Isolation // Isolation technology of the container (e.g. default, hyperv)
// Contains container's resources (cgroups, ulimits)
Resources


@@ -1,4 +1,3 @@
//go:build !windows
// +build !windows
package container // import "github.com/docker/docker/api/types/container"


@@ -1,12 +0,0 @@
package container
// This file was generated by the swagger tool.
// Editing this file might prove futile when you re-run the swagger generate command
// WaitExitError container waiting error, if any
// swagger:model WaitExitError
type WaitExitError struct {
// Details of an error
Message string `json:"Message,omitempty"`
}


@@ -1,18 +0,0 @@
package container
// This file was generated by the swagger tool.
// Editing this file might prove futile when you re-run the swagger generate command
// WaitResponse ContainerWaitResponse
//
// OK response to ContainerWait operation
// swagger:model WaitResponse
type WaitResponse struct {
// error
Error *WaitExitError `json:"Error,omitempty"`
// Exit code of the container
// Required: true
StatusCode int64 `json:"StatusCode"`
}


@@ -1,14 +0,0 @@
package types // import "github.com/docker/docker/api/types"
import "github.com/docker/docker/api/types/volume"
// Volume volume
//
// Deprecated: use github.com/docker/docker/api/types/volume.Volume
type Volume = volume.Volume
// VolumeUsageData Usage details about the volume. This information is used by the
// `GET /system/df` endpoint, and omitted in other endpoints.
//
// Deprecated: use github.com/docker/docker/api/types/volume.UsageData
type VolumeUsageData = volume.UsageData


@@ -1,26 +1,33 @@
package events // import "github.com/docker/docker/api/types/events"
// Type is used for event-types.
type Type = string
// List of known event types.
const (
BuilderEventType Type = "builder" // BuilderEventType is the event type that the builder generates.
ConfigEventType Type = "config" // ConfigEventType is the event type that configs generate.
ContainerEventType Type = "container" // ContainerEventType is the event type that containers generate.
DaemonEventType Type = "daemon" // DaemonEventType is the event type that the daemon generates.
ImageEventType Type = "image" // ImageEventType is the event type that images generate.
NetworkEventType Type = "network" // NetworkEventType is the event type that networks generate.
NodeEventType Type = "node" // NodeEventType is the event type that nodes generate.
PluginEventType Type = "plugin" // PluginEventType is the event type that plugins generate.
SecretEventType Type = "secret" // SecretEventType is the event type that secrets generate.
ServiceEventType Type = "service" // ServiceEventType is the event type that services generate.
VolumeEventType Type = "volume" // VolumeEventType is the event type that volumes generate.
// BuilderEventType is the event type that the builder generates
BuilderEventType = "builder"
// ContainerEventType is the event type that containers generate
ContainerEventType = "container"
// DaemonEventType is the event type that the daemon generates
DaemonEventType = "daemon"
// ImageEventType is the event type that images generate
ImageEventType = "image"
// NetworkEventType is the event type that networks generate
NetworkEventType = "network"
// PluginEventType is the event type that plugins generate
PluginEventType = "plugin"
// VolumeEventType is the event type that volumes generate
VolumeEventType = "volume"
// ServiceEventType is the event type that services generate
ServiceEventType = "service"
// NodeEventType is the event type that nodes generate
NodeEventType = "node"
// SecretEventType is the event type that secrets generate
SecretEventType = "secret"
// ConfigEventType is the event type that configs generate
ConfigEventType = "config"
)
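Since Type aliases string, existing switches and comparisons keep compiling on either side of this change. A brief consumer sketch:

// handle reacts to daemon events by type; details live in msg.Action and
// msg.Actor.Attributes (sketch).
func handle(msg events.Message) {
    switch msg.Type {
    case events.ContainerEventType:
        // container lifecycle: start, stop, die, ...
    case events.VolumeEventType:
        // volume create, mount, destroy, ...
    }
}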
// Actor describes something that generates events,
// like a container, or a network, or a volume.
// It has a defined name and a set of attributes.
// It has a defined name and a set or attributes.
// The container attributes are its labels, other actors
// can generate these attributes from other properties.
type Actor struct {
@@ -32,11 +39,11 @@ type Actor struct {
type Message struct {
// Deprecated information from JSONMessage.
// With data only in container events.
Status string `json:"status,omitempty"` // Deprecated: use Action instead.
ID string `json:"id,omitempty"` // Deprecated: use Actor.ID instead.
From string `json:"from,omitempty"` // Deprecated: use Actor.Attributes["image"] instead.
Status string `json:"status,omitempty"`
ID string `json:"id,omitempty"`
From string `json:"from,omitempty"`
Type Type
Type string
Action string
Actor Actor
// Engine events are local scope. Cluster events are swarm scope.


@@ -1,5 +1,4 @@
/*
Package filters provides tools for encoding a mapping of keys to a set of
/*Package filters provides tools for encoding a mapping of keys to a set of
multiple values.
*/
package filters // import "github.com/docker/docker/api/types/filters"
@@ -10,7 +9,6 @@ import (
"strings"
"github.com/docker/docker/api/types/versions"
"github.com/pkg/errors"
)
// Args stores a mapping of keys to a set of multiple values.
@@ -99,7 +97,7 @@ func FromJSON(p string) (Args, error) {
// Fallback to parsing arguments in the legacy slice format
deprecated := map[string][]string{}
if legacyErr := json.Unmarshal(raw, &deprecated); legacyErr != nil {
return args, invalidFilter{errors.Wrap(err, "invalid filter")}
return args, err
}
args.fields = deprecatedArgs(deprecated)
@@ -249,10 +247,10 @@ func (args Args) Contains(field string) bool {
return ok
}
type invalidFilter struct{ error }
type invalidFilter string
func (e invalidFilter) Error() string {
return e.error.Error()
return "Invalid filter '" + string(e) + "'"
}
func (invalidFilter) InvalidParameter() {}
@@ -262,7 +260,7 @@ func (invalidFilter) InvalidParameter() {}
func (args Args) Validate(accepted map[string]bool) error {
for name := range args.fields {
if !accepted[name] {
return invalidFilter{errors.New("invalid filter '" + name + "'")}
return invalidFilter(name)
}
}
return nil
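Both representations of invalidFilter implement the InvalidParameter marker, so callers can stay agnostic and match via errdefs (sketch, assuming fmt and the errdefs package):

func validateExample() {
    args := filters.NewArgs()
    args.Add("bogus", "running")
    if err := args.Validate(map[string]bool{"status": true}); err != nil {
        fmt.Println(errdefs.IsInvalidParameter(err)) // true for either variant
    }
}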


@@ -69,14 +69,9 @@ func TestFromJSON(t *testing.T) {
}
for _, invalid := range invalids {
_, err := FromJSON(invalid)
if err == nil {
if _, err := FromJSON(invalid); err == nil {
t.Fatalf("Expected an error with %v, got nothing", invalid)
}
var invalidFilterError invalidFilter
if !errors.As(err, &invalidFilterError) {
t.Fatalf("Expected an invalidFilter error, got %T", err)
}
}
for expectedArgs, matchers := range valid {
@@ -332,14 +327,9 @@ func TestValidate(t *testing.T) {
}
f.Add("bogus", "running")
err := f.Validate(valid)
if err == nil {
if err := f.Validate(valid); err == nil {
t.Fatal("Expected to return an error, got nil")
}
var invalidFilterError invalidFilter
if !errors.As(err, &invalidFilterError) {
t.Fatalf("Expected an invalidFilter error, got %T", err)
}
}
func TestWalkValues(t *testing.T) {


@@ -3,21 +3,15 @@ package types
// This file was generated by the swagger tool.
// Editing this file might prove futile when you re-run the swagger generate command
// GraphDriverData Information about the storage driver used to store the container's and
// image's filesystem.
//
// GraphDriverData Information about a container's graph driver.
// swagger:model GraphDriverData
type GraphDriverData struct {
// Low-level storage metadata, provided as key/value pairs.
//
// This information is driver-specific, and depends on the storage-driver
// in use, and should be used for informational purposes only.
//
// data
// Required: true
Data map[string]string `json:"Data"`
// Name of the storage driver.
// name
// Required: true
Name string `json:"Name"`
}


@@ -7,91 +7,43 @@ package types
// swagger:model ImageSummary
type ImageSummary struct {
// Number of containers using this image. Includes both stopped and running
// containers.
//
// This count is not calculated by default, and depends on which API endpoint
// is used. `-1` indicates that the value has not been set / calculated.
//
// containers
// Required: true
Containers int64 `json:"Containers"`
// Date and time at which the image was created as a Unix timestamp
// (number of seconds since EPOCH).
//
// created
// Required: true
Created int64 `json:"Created"`
// ID is the content-addressable ID of an image.
//
// This identifier is a content-addressable digest calculated from the
// image's configuration (which includes the digests of layers used by
// the image).
//
// Note that this digest differs from the `RepoDigests` below, which
// holds digests of image manifests that reference the image.
//
// Id
// Required: true
ID string `json:"Id"`
// User-defined key/value metadata.
// labels
// Required: true
Labels map[string]string `json:"Labels"`
// ID of the parent image.
//
// Depending on how the image was created, this field may be empty and
// is only set for images that were built/created locally. This field
// is empty if the image was pulled from an image registry.
//
// parent Id
// Required: true
ParentID string `json:"ParentId"`
// List of content-addressable digests of locally available image manifests
// that the image is referenced from. Multiple manifests can refer to the
// same image.
//
// These digests are usually only available if the image was either pulled
// from a registry, or if the image was pushed to a registry, which is when
// the manifest is generated and its digest calculated.
//
// repo digests
// Required: true
RepoDigests []string `json:"RepoDigests"`
// List of image names/tags in the local image cache that reference this
// image.
//
// Multiple image tags can refer to the same image, and this list may be
// empty if no tags reference the image, in which case the image is
// "untagged", in which case it can still be referenced by its ID.
//
// repo tags
// Required: true
RepoTags []string `json:"RepoTags"`
// Total size of image layers that are shared between this image and other
// images.
//
// This size is not calculated by default. `-1` indicates that the value
// has not been set / calculated.
//
// shared size
// Required: true
SharedSize int64 `json:"SharedSize"`
// Total size of the image including all layers it is composed of.
//
// size
// Required: true
Size int64 `json:"Size"`
// Total size of the image including all layers it is composed of.
//
// In versions of Docker before v1.10, this field was calculated from
// the image itself and all of its parent images. Docker v1.10 and up
// store images self-contained, and no longer use a parent-chain, making
// this field an equivalent of the Size field.
//
// This field is kept for backward compatibility, but may be removed in
// a future version of the API.
//
// virtual size
// Required: true
VirtualSize int64 `json:"VirtualSize"`
}


@@ -17,8 +17,6 @@ const (
TypeTmpfs Type = "tmpfs"
// TypeNamedPipe is the type for mounting Windows named pipes
TypeNamedPipe Type = "npipe"
// TypeCluster is the type for Swarm Cluster Volumes.
TypeCluster Type = "cluster"
)
// Mount represents a mount (volume).
@@ -32,10 +30,9 @@ type Mount struct {
ReadOnly bool `json:",omitempty"`
Consistency Consistency `json:",omitempty"`
BindOptions *BindOptions `json:",omitempty"`
VolumeOptions *VolumeOptions `json:",omitempty"`
TmpfsOptions *TmpfsOptions `json:",omitempty"`
ClusterOptions *ClusterOptions `json:",omitempty"`
BindOptions *BindOptions `json:",omitempty"`
VolumeOptions *VolumeOptions `json:",omitempty"`
TmpfsOptions *TmpfsOptions `json:",omitempty"`
}
// Propagation represents the propagation of a mount.
@@ -82,9 +79,8 @@ const (
// BindOptions defines options specific to mounts of type "bind".
type BindOptions struct {
Propagation Propagation `json:",omitempty"`
NonRecursive bool `json:",omitempty"`
CreateMountpoint bool `json:",omitempty"`
Propagation Propagation `json:",omitempty"`
NonRecursive bool `json:",omitempty"`
}
// VolumeOptions represents the options for a mount of type volume.
@@ -133,8 +129,3 @@ type TmpfsOptions struct {
// Some of these may be straightforward to add, but others, such as
// uid/gid have implications in a clustered system.
}
// ClusterOptions specifies options for a Cluster volume.
type ClusterOptions struct {
// intentionally empty
}


@@ -45,32 +45,31 @@ func (ipnet *NetIPNet) UnmarshalJSON(b []byte) (err error) {
// IndexInfo contains information about a registry
//
// RepositoryInfo Examples:
// {
// "Index" : {
// "Name" : "docker.io",
// "Mirrors" : ["https://registry-2.docker.io/v1/", "https://registry-3.docker.io/v1/"],
// "Secure" : true,
// "Official" : true,
// },
// "RemoteName" : "library/debian",
// "LocalName" : "debian",
// "CanonicalName" : "docker.io/debian"
// "Official" : true,
// }
//
// {
// "Index" : {
// "Name" : "docker.io",
// "Mirrors" : ["https://registry-2.docker.io/v1/", "https://registry-3.docker.io/v1/"],
// "Secure" : true,
// "Official" : true,
// },
// "RemoteName" : "library/debian",
// "LocalName" : "debian",
// "CanonicalName" : "docker.io/debian"
// "Official" : true,
// }
//
// {
// "Index" : {
// "Name" : "127.0.0.1:5000",
// "Mirrors" : [],
// "Secure" : false,
// "Official" : false,
// },
// "RemoteName" : "user/repo",
// "LocalName" : "127.0.0.1:5000/user/repo",
// "CanonicalName" : "127.0.0.1:5000/user/repo",
// "Official" : false,
// }
// {
// "Index" : {
// "Name" : "127.0.0.1:5000",
// "Mirrors" : [],
// "Secure" : false,
// "Official" : false,
// },
// "RemoteName" : "user/repo",
// "LocalName" : "127.0.0.1:5000/user/repo",
// "CanonicalName" : "127.0.0.1:5000/user/repo",
// "Official" : false,
// }
type IndexInfo struct {
// Name is the name of the registry, such as "docker.io"
Name string


@@ -33,16 +33,17 @@ func TestStrSliceUnmarshalJSON(t *testing.T) {
"[]": {},
`["/bin/sh","-c","echo"]`: {"/bin/sh", "-c", "echo"},
}
for input, expected := range parts {
for json, expectedParts := range parts {
strs := StrSlice{"default", "values"}
if err := strs.UnmarshalJSON([]byte(input)); err != nil {
if err := strs.UnmarshalJSON([]byte(json)); err != nil {
t.Fatal(err)
}
actualParts := []string(strs)
if !reflect.DeepEqual(actualParts, expected) {
t.Fatalf("%#v: expected %v, got %v", input, expected, actualParts)
if !reflect.DeepEqual(actualParts, expectedParts) {
t.Fatalf("%#v: expected %v, got %v", json, expectedParts, actualParts)
}
}
}


@@ -1,20 +1,12 @@
package swarm // import "github.com/docker/docker/api/types/swarm"
import (
"strconv"
"time"
)
import "time"
// Version represents the internal object version.
type Version struct {
Index uint64 `json:",omitempty"`
}
// String implements fmt.Stringer interface.
func (v Version) String() string {
return strconv.FormatUint(v.Index, 10)
}
// Meta is a base object inherited by most of the other ones.
type Meta struct {
Version Version `json:",omitempty"`


@@ -53,7 +53,6 @@ type NodeDescription struct {
Resources Resources `json:",omitempty"`
Engine EngineDescription `json:",omitempty"`
TLSInfo TLSInfo `json:",omitempty"`
CSIInfo []NodeCSIInfo `json:",omitempty"`
}
// Platform represents the platform (Arch/OS).
@@ -69,21 +68,6 @@ type EngineDescription struct {
Plugins []PluginDescription `json:",omitempty"`
}
// NodeCSIInfo represents information about a CSI plugin available on the node
type NodeCSIInfo struct {
// PluginName is the name of the CSI plugin.
PluginName string `json:",omitempty"`
// NodeID is the ID of the node as reported by the CSI plugin. This is
// different from the swarm node ID.
NodeID string `json:",omitempty"`
// MaxVolumesPerNode is the maximum number of volumes that may be published
// to this node
MaxVolumesPerNode int64 `json:",omitempty"`
// AccessibleTopology indicates the location of this node in the CSI
// plugin's topology
AccessibleTopology *Topology `json:",omitempty"`
}
// PluginDescription represents the description of an engine plugin.
type PluginDescription struct {
Type string `json:",omitempty"`
@@ -129,11 +113,3 @@ const (
// NodeStateDisconnected DISCONNECTED
NodeStateDisconnected NodeState = "disconnected"
)
// Topology defines the CSI topology of this node. This type is a duplicate of
// github.com/docker/docker/api/types.Topology. Because the type definition
// is so simple and to avoid complicated structure or circular imports, we just
// duplicate it here. See that type for full documentation
type Topology struct {
Segments map[string]string `json:",omitempty"`
}


@@ -213,16 +213,6 @@ type Info struct {
Warnings []string `json:",omitempty"`
}
// Status provides information about the current swarm status and role,
// obtained from the "Swarm" header in the API response.
type Status struct {
// NodeState represents the state of the node.
NodeState LocalNodeState
// ControlAvailable indicates if the node is a swarm manager.
ControlAvailable bool
}
// Peer represents a peer.
type Peer struct {
NodeID string


@@ -62,11 +62,6 @@ type Task struct {
// used to determine which Tasks belong to which run of the job. This field
// is absent if the Service mode is Replicated or Global.
JobIteration *Version `json:",omitempty"`
// Volumes is the list of VolumeAttachments for this task. It specifies
// which particular volumes are to be used by this particular task, and
// fulfilling what mounts in the spec.
Volumes []VolumeAttachment
}
// TaskSpec represents the spec of a task.
@@ -209,17 +204,3 @@ type ContainerStatus struct {
type PortStatus struct {
Ports []PortConfig `json:",omitempty"`
}
// VolumeAttachment contains the data associating a Volume with a Task.
type VolumeAttachment struct {
// ID is the Swarmkit ID of the Volume. This is not the CSI VolumeId.
ID string `json:",omitempty"`
// Source, together with Target, indicates the Mount, as specified in the
// ContainerSpec, that this volume fulfills.
Source string `json:",omitempty"`
// Target, together with Source, indicates the Mount, as specified
// in the ContainerSpec, that this volume fulfills.
Target string `json:",omitempty"`
}


@@ -0,0 +1,12 @@
package time // import "github.com/docker/docker/api/types/time"
import (
"strconv"
"time"
)
// DurationToSecondsString converts the specified duration to the number of
// seconds it represents, formatted as a string.
func DurationToSecondsString(duration time.Duration) string {
return strconv.FormatFloat(duration.Seconds(), 'f', 0, 64)
}


@@ -0,0 +1,26 @@
package time // import "github.com/docker/docker/api/types/time"
import (
"testing"
"time"
)
func TestDurationToSecondsString(t *testing.T) {
cases := []struct {
in time.Duration
expected string
}{
{0 * time.Second, "0"},
{1 * time.Second, "1"},
{1 * time.Minute, "60"},
{24 * time.Hour, "86400"},
}
for _, c := range cases {
s := DurationToSecondsString(c.in)
if s != c.expected {
t.Errorf("wrong value for input `%v`: expected `%s`, got `%s`", c.in, c.expected, s)
t.Fail()
}
}
}


@@ -100,10 +100,8 @@ func GetTimestamp(value string, reference time.Time) (string, error) {
// if the incoming nanosecond portion is longer or shorter than 9 digits it is
// converted to nanoseconds. The expectation is that the seconds and
// nanoseconds will be used to create a time variable. For example:
//
// seconds, nanoseconds, err := ParseTimestamps("1136073600.000000001",0)
// if err == nil since := time.Unix(seconds, nanoseconds)
//
// seconds, nanoseconds, err := ParseTimestamps("1136073600.000000001",0)
// if err == nil since := time.Unix(seconds, nanoseconds)
// returns seconds as def(aultSeconds) if value == ""
func ParseTimestamps(value string, def int64) (int64, int64, error) {
if value == "" {


@@ -14,136 +14,43 @@ import (
"github.com/docker/docker/api/types/network"
"github.com/docker/docker/api/types/registry"
"github.com/docker/docker/api/types/swarm"
"github.com/docker/docker/api/types/volume"
"github.com/docker/go-connections/nat"
)
const (
// MediaTypeRawStream is vendor specific MIME-Type set for raw TTY streams
MediaTypeRawStream = "application/vnd.docker.raw-stream"
// MediaTypeMultiplexedStream is vendor specific MIME-Type set for stdin/stdout/stderr multiplexed streams
MediaTypeMultiplexedStream = "application/vnd.docker.multiplexed-stream"
)
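MediaTypeMultiplexedStream is the framing that pkg/stdcopy reads back apart. A hedged sketch of demultiplexing a log stream, assuming a container created without a TTY; "mycontainer" is a placeholder, and the options struct is the types.ContainerLogsOptions form used by clients of this vintage:

package main

import (
	"context"
	"os"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/client"
	"github.com/docker/docker/pkg/stdcopy"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}
	rc, err := cli.ContainerLogs(context.Background(), "mycontainer", types.ContainerLogsOptions{
		ShowStdout: true,
		ShowStderr: true,
	})
	if err != nil {
		panic(err)
	}
	defer rc.Close()
	// StdCopy splits the multiplexed stream back into stdout and stderr.
	if _, err := stdcopy.StdCopy(os.Stdout, os.Stderr, rc); err != nil {
		panic(err)
	}
}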
// RootFS returns Image's RootFS description including the layer IDs.
type RootFS struct {
Type string `json:",omitempty"`
Layers []string `json:",omitempty"`
Type string
Layers []string `json:",omitempty"`
BaseLayer string `json:",omitempty"`
}
// ImageInspect contains response of Engine API:
// GET "/images/{name:.*}/json"
type ImageInspect struct {
// ID is the content-addressable ID of an image.
//
// This identifier is a content-addressable digest calculated from the
// image's configuration (which includes the digests of layers used by
// the image).
//
// Note that this digest differs from the `RepoDigests` below, which
// holds digests of image manifests that reference the image.
ID string `json:"Id"`
// RepoTags is a list of image names/tags in the local image cache that
// reference this image.
//
// Multiple image tags can refer to the same image, and this list may be
// empty if no tags reference the image, in which case the image is
// "untagged", in which case it can still be referenced by its ID.
RepoTags []string
// RepoDigests is a list of content-addressable digests of locally available
// image manifests that the image is referenced from. Multiple manifests can
// refer to the same image.
//
// These digests are usually only available if the image was either pulled
// from a registry, or if the image was pushed to a registry, which is when
// the manifest is generated and its digest calculated.
RepoDigests []string
// Parent is the ID of the parent image.
//
// Depending on how the image was created, this field may be empty and
// is only set for images that were built/created locally. This field
// is empty if the image was pulled from an image registry.
Parent string
// Comment is an optional message that can be set when committing or
// importing the image.
Comment string
// Created is the date and time at which the image was created, formatted in
// RFC 3339 with nanosecond precision (time.RFC3339Nano).
Created string
// Container is the ID of the container that was used to create the image.
//
// Depending on how the image was created, this field may be empty.
Container string
// ContainerConfig is an optional field containing the configuration of the
// container that was last committed when creating the image.
//
// Previous versions of Docker builder used this field to store build cache,
// and it is not in active use anymore.
ID string `json:"Id"`
RepoTags []string
RepoDigests []string
Parent string
Comment string
Created string
Container string
ContainerConfig *container.Config
// DockerVersion is the version of Docker that was used to build the image.
//
// Depending on how the image was created, this field may be empty.
DockerVersion string
// Author is the name of the author that was specified when committing the
// image, or as specified through MAINTAINER (deprecated) in the Dockerfile.
Author string
Config *container.Config
// Architecture is the hardware CPU architecture that the image runs on.
Architecture string
// Variant is the CPU architecture variant (presently ARM-only).
Variant string `json:",omitempty"`
// OS is the Operating System the image is built to run on.
Os string
// OsVersion is the version of the Operating System the image is built to
// run on (especially for Windows).
OsVersion string `json:",omitempty"`
// Size is the total size of the image including all layers it is composed of.
Size int64
// VirtualSize is the total size of the image including all layers it is
// composed of.
//
// In versions of Docker before v1.10, this field was calculated from
// the image itself and all of its parent images. Docker v1.10 and up
// store images self-contained, and no longer use a parent-chain, making
// this field an equivalent of the Size field.
//
// This field is kept for backward compatibility, but may be removed in
// a future version of the API.
VirtualSize int64 // TODO(thaJeztah): deprecate this field
// GraphDriver holds information about the storage driver used to store the
// container's and image's filesystem.
GraphDriver GraphDriverData
// RootFS contains information about the image's RootFS, including the
// layer IDs.
RootFS RootFS
// Metadata of the image in the local cache.
//
// This information is local to the daemon, and not part of the image itself.
Metadata ImageMetadata
DockerVersion string
Author string
Config *container.Config
Architecture string
Variant string `json:",omitempty"`
Os string
OsVersion string `json:",omitempty"`
Size int64
VirtualSize int64
GraphDriver GraphDriverData
RootFS RootFS
Metadata ImageMetadata
}
// ImageMetadata contains engine-local data about the image
type ImageMetadata struct {
// LastTagTime is the date and time at which the image was last tagged.
LastTagTime time.Time `json:",omitempty"`
}
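For reference, a sketch of reading the types.ImageInspect defined above through the Go client; "alpine:latest" is only an example reference:

package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}
	// ImageInspectWithRaw returns the struct above plus the raw JSON body.
	info, _, err := cli.ImageInspectWithRaw(context.Background(), "alpine:latest")
	if err != nil {
		panic(err)
	}
	fmt.Println(info.ID, info.RepoTags, info.Os, info.Architecture, info.Size)
	fmt.Println("layers:", len(info.RootFS.Layers))
}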
@@ -200,15 +107,6 @@ type Ping struct {
OSType string
Experimental bool
BuilderVersion BuilderVersion
// SwarmStatus provides information about the current swarm status of the
// engine, obtained from the "Swarm" header in the API response.
//
// It can be a nil struct if the API version does not provide this header
// in the ping response, or if an error occurred, in which case the client
// should use other ways to get the current swarm status, such as the /swarm
// endpoint.
SwarmStatus *swarm.Status
}
// ComponentVersion describes the version information for a specific component.
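A hedged sketch of consuming the SwarmStatus field this hunk removes, assuming a client and daemon new enough to exchange the "Swarm" ping header:

package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}
	ping, err := cli.Ping(context.Background())
	if err != nil {
		panic(err)
	}
	// SwarmStatus is nil when the header is absent; fall back to the
	// /swarm endpoint in that case, as the doc comment advises.
	if ping.SwarmStatus != nil {
		fmt.Println("state:", ping.SwarmStatus.NodeState, "manager:", ping.SwarmStatus.ControlAvailable)
	} else {
		fmt.Println("swarm status not reported; query /swarm instead")
	}
}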
@@ -260,8 +158,8 @@ type Info struct {
Plugins PluginsInfo
MemoryLimit bool
SwapLimit bool
KernelMemory bool `json:",omitempty"` // Deprecated: kernel 5.4 deprecated kmem.limit_in_bytes
KernelMemoryTCP bool `json:",omitempty"` // KernelMemoryTCP is not supported on cgroups v2.
KernelMemory bool // Deprecated: kernel 5.4 deprecated kmem.limit_in_bytes
KernelMemoryTCP bool
CPUCfsPeriod bool `json:"CpuCfsPeriod"`
CPUCfsQuota bool `json:"CpuCfsQuota"`
CPUShares bool
@@ -314,12 +212,7 @@ type Info struct {
SecurityOptions []string
ProductLicense string `json:",omitempty"`
DefaultAddressPools []NetworkAddressPool `json:",omitempty"`
// Warnings contains a slice of warnings that occurred while collecting
// system information. These warnings are intended to be informational
// messages for the user, and are not intended to be parsed / used for
// other purposes, as they do not have a fixed format.
Warnings []string
Warnings []string
}
// KeyValue holds a key/value pair
@@ -390,8 +283,6 @@ type ExecStartCheck struct {
Detach bool
// Check if there's a tty
Tty bool
// Terminal size [height, width], unused if Tty == false
ConsoleSize *[2]uint `json:",omitempty"`
}
// HealthcheckResult stores information about a single run of a healthcheck probe
@@ -525,44 +416,13 @@ type DefaultNetworkSettings struct {
// MountPoint represents a mount point configuration inside the container.
// This is used for reporting the mountpoints in use by a container.
type MountPoint struct {
// Type is the type of mount, see `Type<foo>` definitions in
// github.com/docker/docker/api/types/mount.Type
Type mount.Type `json:",omitempty"`
// Name is the name reference to the underlying data defined by `Source`
// e.g., the volume name.
Name string `json:",omitempty"`
// Source is the source location of the mount.
//
// For volumes, this contains the storage location of the volume (within
// `/var/lib/docker/volumes/`). For bind-mounts, and `npipe`, this contains
// the source (host) part of the bind-mount. For `tmpfs` mount points, this
// field is empty.
Source string
// Destination is the path relative to the container root (`/`) where the
// Source is mounted inside the container.
Type mount.Type `json:",omitempty"`
Name string `json:",omitempty"`
Source string
Destination string
// Driver is the volume driver used to create the volume (if it is a volume).
Driver string `json:",omitempty"`
// Mode is a comma separated list of options supplied by the user when
// creating the bind/volume mount.
//
// The default is platform-specific (`"z"` on Linux, empty on Windows).
Mode string
// RW indicates whether the mount is mounted writable (read-write).
RW bool
// Propagation describes how mounts are propagated from the host into the
// mount point, and vice-versa. Refer to the Linux kernel documentation
// for details:
// https://www.kernel.org/doc/Documentation/filesystems/sharedsubtree.txt
//
// This field is not used on Windows.
Driver string `json:",omitempty"`
Mode string
RW bool
Propagation mount.Propagation
}
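A short sketch of how these MountPoint entries are typically read back via ContainerInspect; "mycontainer" is a placeholder name or ID:

package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}
	ctr, err := cli.ContainerInspect(context.Background(), "mycontainer")
	if err != nil {
		panic(err)
	}
	// ctr.Mounts is the []types.MountPoint defined above.
	for _, m := range ctr.Mounts {
		fmt.Printf("%s %s -> %s (rw=%v)\n", m.Type, m.Source, m.Destination, m.RW)
	}
}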
@@ -670,36 +530,15 @@ type ShimConfig struct {
Opts interface{}
}
// DiskUsageObject represents an object type used for disk usage query filtering.
type DiskUsageObject string
const (
// ContainerObject represents a container DiskUsageObject.
ContainerObject DiskUsageObject = "container"
// ImageObject represents an image DiskUsageObject.
ImageObject DiskUsageObject = "image"
// VolumeObject represents a volume DiskUsageObject.
VolumeObject DiskUsageObject = "volume"
// BuildCacheObject represents a build-cache DiskUsageObject.
BuildCacheObject DiskUsageObject = "build-cache"
)
// DiskUsageOptions holds parameters for system disk usage query.
type DiskUsageOptions struct {
// Types specifies what object types to include in the response. If empty,
// all object types are returned.
Types []DiskUsageObject
}
// DiskUsage contains response of Engine API:
// GET "/system/df"
type DiskUsage struct {
LayersSize int64
Images []*ImageSummary
Containers []*Container
Volumes []*volume.Volume
Volumes []*Volume
BuildCache []*BuildCache
BuilderSize int64 `json:",omitempty"` // Deprecated: deprecated in API 1.38, and no longer used since API 1.40.
BuilderSize int64 // deprecated
}
// ContainersPruneReport contains the response for Engine API:
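A sketch of a filtered disk-usage query using the options type this hunk removes; it assumes a client whose DiskUsage method accepts types.DiskUsageOptions (older clients take no options argument):

package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}
	// Restrict the response to volume usage only.
	du, err := cli.DiskUsage(context.Background(), types.DiskUsageOptions{
		Types: []types.DiskUsageObject{types.VolumeObject},
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("layers size:", du.LayersSize, "volumes reported:", len(du.Volumes))
}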
@@ -774,31 +613,18 @@ type BuildResult struct {
ID string
}
// BuildCache contains information about a build cache record.
// BuildCache contains information about a build cache record
type BuildCache struct {
// ID is the unique ID of the build cache record.
ID string
// Parent is the ID of the parent build cache record.
//
// Deprecated: deprecated in API v1.42 and up, as it was deprecated in BuildKit; use Parents instead.
Parent string `json:"Parent,omitempty"`
// Parents is the list of parent build cache record IDs.
Parents []string `json:"Parents,omitempty"`
// Type is the cache record type.
Type string
// Description is a description of the build-step that produced the build cache.
ID string
Parent string
Type string
Description string
// InUse indicates if the build cache is in use.
InUse bool
// Shared indicates if the build cache is shared.
Shared bool
// Size is the amount of disk space used by the build cache (in bytes).
Size int64
// CreatedAt is the date and time at which the build cache was created.
CreatedAt time.Time
// LastUsedAt is the date and time at which the build cache was last used.
LastUsedAt *time.Time
UsageCount int
InUse bool
Shared bool
Size int64
CreatedAt time.Time
LastUsedAt *time.Time
UsageCount int
}
// BuildCachePruneOptions holds parameters to prune the build cache


@@ -8,9 +8,6 @@ import (
// compare compares two version strings
// returns -1 if v1 < v2, 1 if v1 > v2, 0 otherwise.
func compare(v1, v2 string) int {
if v1 == v2 {
return 0
}
var (
currTab = strings.Split(v1, ".")
otherTab = strings.Split(v2, ".")

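The exported helpers in this package wrap compare; a small usage sketch (missing version components are treated as zero by the comparison):

package main

import (
	"fmt"

	"github.com/docker/docker/api/types/versions"
)

func main() {
	fmt.Println(versions.LessThan("1.40", "1.41"))             // true
	fmt.Println(versions.GreaterThanOrEqualTo("1.41", "1.41")) // true
	fmt.Println(versions.Equal("1.40", "1.40.0"))              // true
}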
api/types/volume.go (new file, +72 lines)

@@ -0,0 +1,72 @@
package types
// This file was generated by the swagger tool.
// Editing this file might prove futile when you re-run the swagger generate command
// Volume volume
// swagger:model Volume
type Volume struct {
// Date/Time the volume was created.
CreatedAt string `json:"CreatedAt,omitempty"`
// Name of the volume driver used by the volume.
// Required: true
Driver string `json:"Driver"`
// User-defined key/value metadata.
// Required: true
Labels map[string]string `json:"Labels"`
// Mount path of the volume on the host.
// Required: true
Mountpoint string `json:"Mountpoint"`
// Name of the volume.
// Required: true
Name string `json:"Name"`
// The driver specific options used when creating the volume.
//
// Required: true
Options map[string]string `json:"Options"`
// The level at which the volume exists. Either `global` for cluster-wide,
// or `local` for machine level.
//
// Required: true
Scope string `json:"Scope"`
// Low-level details about the volume, provided by the volume driver.
// Details are returned as a map with key/value pairs:
// `{"key":"value","key2":"value2"}`.
//
// The `Status` field is optional, and is omitted if the volume driver
// does not support this feature.
//
Status map[string]interface{} `json:"Status,omitempty"`
// usage data
UsageData *VolumeUsageData `json:"UsageData,omitempty"`
}
// VolumeUsageData Usage details about the volume. This information is used by the
// `GET /system/df` endpoint, and omitted in other endpoints.
//
// swagger:model VolumeUsageData
type VolumeUsageData struct {
// The number of containers referencing this volume. This field
// is set to `-1` if the reference-count is not available.
//
// Required: true
RefCount int64 `json:"RefCount"`
// Amount of disk space used by the volume (in bytes). This information
// is only available for volumes created with the `"local"` volume
// driver. For volumes created with other volume drivers, this field
// is set to `-1` ("not available").
//
// Required: true
Size int64 `json:"Size"`
}


@@ -1,420 +0,0 @@
package volume
import (
"github.com/docker/docker/api/types/swarm"
)
// ClusterVolume contains options and information specific to, and only present
// on, Swarm CSI cluster volumes.
type ClusterVolume struct {
// ID is the Swarm ID of the volume. Because cluster volumes are Swarm
// objects, they have an ID, unlike non-cluster volumes, which only have a
// Name. This ID can be used to refer to the cluster volume.
ID string
// Meta is the swarm metadata about this volume.
swarm.Meta
// Spec is the cluster-specific options from which this volume is derived.
Spec ClusterVolumeSpec
// PublishStatus contains the status of the volume as it pertains to its
// publishing on Nodes.
PublishStatus []*PublishStatus `json:",omitempty"`
// Info is information about the global status of the volume.
Info *Info `json:",omitempty"`
}
// ClusterVolumeSpec contains the spec used to create this volume.
type ClusterVolumeSpec struct {
// Group defines the volume group of this volume. Volumes belonging to the
// same group can be referred to by group name when creating Services.
// Referring to a volume by group instructs swarm to treat volumes in that
// group interchangeably for the purpose of scheduling. Volumes with an
// empty string for a group technically all belong to the same, empty-string
// group.
Group string `json:",omitempty"`
// AccessMode defines how the volume is used by tasks.
AccessMode *AccessMode `json:",omitempty"`
// AccessibilityRequirements specifies where in the cluster a volume must
// be accessible from.
//
// This field must be empty if the plugin does not support
// VOLUME_ACCESSIBILITY_CONSTRAINTS capabilities. If it is present but the
// plugin does not support it, the volume will not be created.
//
// If AccessibilityRequirements is empty, but the plugin does support
// VOLUME_ACCESSIBILITY_CONSTRAINTS, then Swarmkit will assume the entire
// cluster is a valid target for the volume.
AccessibilityRequirements *TopologyRequirement `json:",omitempty"`
// CapacityRange defines the desired capacity that the volume should be
// created with. If nil, the plugin will decide the capacity.
CapacityRange *CapacityRange `json:",omitempty"`
// Secrets defines Swarm Secrets that are passed to the CSI storage plugin
// when operating on this volume.
Secrets []Secret `json:",omitempty"`
// Availability is the Volume's desired availability. Analogous to Node
// Availability, this allows the user to take volumes offline in order to
// update or delete them.
Availability Availability `json:",omitempty"`
}
// Availability specifies the availability of the volume.
type Availability string
const (
// AvailabilityActive indicates that the volume is active and fully
// schedulable on the cluster.
AvailabilityActive Availability = "active"
// AvailabilityPause indicates that no new workloads should use the
// volume, but existing workloads can continue to use it.
AvailabilityPause Availability = "pause"
// AvailabilityDrain indicates that all workloads using this volume
// should be rescheduled, and the volume unpublished from all nodes.
AvailabilityDrain Availability = "drain"
)
// AccessMode defines the access mode of a volume.
type AccessMode struct {
// Scope defines the set of nodes this volume can be used on at one time.
Scope Scope `json:",omitempty"`
// Sharing defines the number and way that different tasks can use this
// volume at one time.
Sharing SharingMode `json:",omitempty"`
// MountVolume defines options for using this volume as a Mount-type
// volume.
//
// Either BlockVolume or MountVolume, but not both, must be present.
MountVolume *TypeMount `json:",omitempty"`
// BlockVolume defines options for using this volume as a Block-type
// volume.
//
// Either BlockVolume or MountVolume, but not both, must be present.
BlockVolume *TypeBlock `json:",omitempty"`
}
// Scope defines the Scope of a Cluster Volume. This is how many nodes a
// Volume can be accessed on simultaneously.
type Scope string
const (
// ScopeSingleNode indicates the volume can be used on one node at a
// time.
ScopeSingleNode Scope = "single"
// ScopeMultiNode indicates the volume can be used on many nodes at
// the same time.
ScopeMultiNode Scope = "multi"
)
// SharingMode defines the Sharing of a Cluster Volume. This is how Tasks using a
// Volume at the same time can use it.
type SharingMode string
const (
// SharingNone indicates that only one Task may use the Volume at a
// time.
SharingNone SharingMode = "none"
// SharingReadOnly indicates that the Volume may be shared by any
// number of Tasks, but they must be read-only.
SharingReadOnly SharingMode = "readonly"
// SharingOneWriter indicates that the Volume may be shared by any
// number of Tasks, but all after the first must be read-only.
SharingOneWriter SharingMode = "onewriter"
// SharingAll means that the Volume may be shared by any number of
// Tasks, as readers or writers.
SharingAll SharingMode = "all"
)
// TypeBlock defines options for using a volume as a block-type volume.
//
// Intentionally empty.
type TypeBlock struct{}
// TypeMount contains options for using a volume as a Mount-type
// volume.
type TypeMount struct {
// FsType specifies the filesystem type for the mount volume. Optional.
FsType string `json:",omitempty"`
// MountFlags defines flags to pass when mounting the volume. Optional.
MountFlags []string `json:",omitempty"`
}
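A sketch constructing an AccessMode under the constraint stated above, that exactly one of MountVolume or BlockVolume is set; the ext4/noatime values are illustrative only:

package main

import (
	"fmt"

	"github.com/docker/docker/api/types/volume"
)

func main() {
	// Single-node, single-writer mount volume with an explicit filesystem.
	mode := volume.AccessMode{
		Scope:   volume.ScopeSingleNode,
		Sharing: volume.SharingNone,
		MountVolume: &volume.TypeMount{
			FsType:     "ext4",
			MountFlags: []string{"noatime"},
		},
	}
	fmt.Printf("%+v\n", mode)
}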
// TopologyRequirement expresses the user's requirements for a volume's
// accessible topology.
type TopologyRequirement struct {
// Requisite specifies a list of Topologies, at least one of which the
// volume must be accessible from.
//
// Taken verbatim from the CSI Spec:
//
// Specifies the list of topologies the provisioned volume MUST be
// accessible from.
// This field is OPTIONAL. If TopologyRequirement is specified either
// requisite or preferred or both MUST be specified.
//
// If requisite is specified, the provisioned volume MUST be
// accessible from at least one of the requisite topologies.
//
// Given
// x = number of topologies provisioned volume is accessible from
// n = number of requisite topologies
// The CO MUST ensure n >= 1. The SP MUST ensure x >= 1
// If x==n, then the SP MUST make the provisioned volume available to
// all topologies from the list of requisite topologies. If it is
// unable to do so, the SP MUST fail the CreateVolume call.
// For example, if a volume should be accessible from a single zone,
// and requisite =
// {"region": "R1", "zone": "Z2"}
// then the provisioned volume MUST be accessible from the "region"
// "R1" and the "zone" "Z2".
// Similarly, if a volume should be accessible from two zones, and
// requisite =
// {"region": "R1", "zone": "Z2"},
// {"region": "R1", "zone": "Z3"}
// then the provisioned volume MUST be accessible from the "region"
// "R1" and both "zone" "Z2" and "zone" "Z3".
//
// If x<n, then the SP SHALL choose x unique topologies from the list
// of requisite topologies. If it is unable to do so, the SP MUST fail
// the CreateVolume call.
// For example, if a volume should be accessible from a single zone,
// and requisite =
// {"region": "R1", "zone": "Z2"},
// {"region": "R1", "zone": "Z3"}
// then the SP may choose to make the provisioned volume available in
// either the "zone" "Z2" or the "zone" "Z3" in the "region" "R1".
// Similarly, if a volume should be accessible from two zones, and
// requisite =
// {"region": "R1", "zone": "Z2"},
// {"region": "R1", "zone": "Z3"},
// {"region": "R1", "zone": "Z4"}
// then the provisioned volume MUST be accessible from any combination
// of two unique topologies: e.g. "R1/Z2" and "R1/Z3", or "R1/Z2" and
// "R1/Z4", or "R1/Z3" and "R1/Z4".
//
// If x>n, then the SP MUST make the provisioned volume available from
// all topologies from the list of requisite topologies and MAY choose
// the remaining x-n unique topologies from the list of all possible
// topologies. If it is unable to do so, the SP MUST fail the
// CreateVolume call.
// For example, if a volume should be accessible from two zones, and
// requisite =
// {"region": "R1", "zone": "Z2"}
// then the provisioned volume MUST be accessible from the "region"
// "R1" and the "zone" "Z2" and the SP may select the second zone
// independently, e.g. "R1/Z4".
Requisite []Topology `json:",omitempty"`
// Preferred is a list of Topologies that the volume should attempt to be
// provisioned in.
//
// Taken from the CSI spec:
//
// Specifies the list of topologies the CO would prefer the volume to
// be provisioned in.
//
// This field is OPTIONAL. If TopologyRequirement is specified either
// requisite or preferred or both MUST be specified.
//
// An SP MUST attempt to make the provisioned volume available using
// the preferred topologies in order from first to last.
//
// If requisite is specified, all topologies in preferred list MUST
// also be present in the list of requisite topologies.
//
// If the SP is unable to make the provisioned volume available
// from any of the preferred topologies, the SP MAY choose a topology
// from the list of requisite topologies.
// If the list of requisite topologies is not specified, then the SP
// MAY choose from the list of all possible topologies.
// If the list of requisite topologies is specified and the SP is
// unable to make the provisioned volume available from any of the
// requisite topologies it MUST fail the CreateVolume call.
//
// Example 1:
// Given a volume should be accessible from a single zone, and
// requisite =
// {"region": "R1", "zone": "Z2"},
// {"region": "R1", "zone": "Z3"}
// preferred =
// {"region": "R1", "zone": "Z3"}
// then the SP SHOULD first attempt to make the provisioned volume
// available from "zone" "Z3" in the "region" "R1" and fall back to
// "zone" "Z2" in the "region" "R1" if that is not possible.
//
// Example 2:
// Given a volume should be accessible from a single zone, and
// requisite =
// {"region": "R1", "zone": "Z2"},
// {"region": "R1", "zone": "Z3"},
// {"region": "R1", "zone": "Z4"},
// {"region": "R1", "zone": "Z5"}
// preferred =
// {"region": "R1", "zone": "Z4"},
// {"region": "R1", "zone": "Z2"}
// then the SP SHOULD first attempt to make the provisioned volume
// accessible from "zone" "Z4" in the "region" "R1" and fall back to
// "zone" "Z2" in the "region" "R1" if that is not possible. If that
// is not possible, the SP may choose between either the "zone"
// "Z3" or "Z5" in the "region" "R1".
//
// Example 3:
// Given a volume should be accessible from TWO zones (because an
// opaque parameter in CreateVolumeRequest, for example, specifies
// the volume is accessible from two zones, aka synchronously
// replicated), and
// requisite =
// {"region": "R1", "zone": "Z2"},
// {"region": "R1", "zone": "Z3"},
// {"region": "R1", "zone": "Z4"},
// {"region": "R1", "zone": "Z5"}
// preferred =
// {"region": "R1", "zone": "Z5"},
// {"region": "R1", "zone": "Z3"}
// then the SP SHOULD first attempt to make the provisioned volume
// accessible from the combination of the two "zones" "Z5" and "Z3" in
// the "region" "R1". If that's not possible, it should fall back to
// a combination of "Z5" and other possibilities from the list of
// requisite. If that's not possible, it should fall back to a
// combination of "Z3" and other possibilities from the list of
// requisite. If that's not possible, it should fall back to a
// combination of other possibilities from the list of requisite.
Preferred []Topology `json:",omitempty"`
}
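A literal mirroring "Example 1" above, expressed with the Go types from this file:

package main

import (
	"fmt"

	"github.com/docker/docker/api/types/volume"
)

func main() {
	// Require either zone Z2 or Z3 in region R1, preferring Z3.
	req := volume.TopologyRequirement{
		Requisite: []volume.Topology{
			{Segments: map[string]string{"region": "R1", "zone": "Z2"}},
			{Segments: map[string]string{"region": "R1", "zone": "Z3"}},
		},
		Preferred: []volume.Topology{
			{Segments: map[string]string{"region": "R1", "zone": "Z3"}},
		},
	}
	fmt.Printf("%+v\n", req)
}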
// Topology is a map of topological domains to topological segments.
//
// This description is taken verbatim from the CSI Spec:
//
// A topological domain is a sub-division of a cluster, like "region",
// "zone", "rack", etc.
// A topological segment is a specific instance of a topological domain,
// like "zone3", "rack3", etc.
// For example {"com.company/zone": "Z1", "com.company/rack": "R3"}
// Valid keys have two segments: an OPTIONAL prefix and name, separated
// by a slash (/), for example: "com.company.example/zone".
// The key name segment is REQUIRED. The prefix is OPTIONAL.
// The key name MUST be 63 characters or less, begin and end with an
// alphanumeric character ([a-z0-9A-Z]), and contain only dashes (-),
// underscores (_), dots (.), or alphanumerics in between, for example
// "zone".
// The key prefix MUST be 63 characters or less, begin and end with a
// lower-case alphanumeric character ([a-z0-9]), contain only
// dashes (-), dots (.), or lower-case alphanumerics in between, and
// follow domain name notation format
// (https://tools.ietf.org/html/rfc1035#section-2.3.1).
// The key prefix SHOULD include the plugin's host company name and/or
// the plugin name, to minimize the possibility of collisions with keys
// from other plugins.
// If a key prefix is specified, it MUST be identical across all
// topology keys returned by the SP (across all RPCs).
// Keys MUST be case-insensitive. Meaning the keys "Zone" and "zone"
// MUST not both exist.
// Each value (topological segment) MUST contain 1 or more strings.
// Each string MUST be 63 characters or less and begin and end with an
// alphanumeric character with '-', '_', '.', or alphanumerics in
// between.
type Topology struct {
Segments map[string]string `json:",omitempty"`
}
// CapacityRange describes the minimum and maximum capacity a volume should be
// created with
type CapacityRange struct {
// RequiredBytes specifies that a volume must be at least this big. The
// value of 0 indicates an unspecified minimum.
RequiredBytes int64
// LimitBytes specifies that a volume must not be bigger than this. The
// value of 0 indicates an unspecified maximum
LimitBytes int64
}
// Secret represents a Swarm Secret value that must be passed to the CSI
// storage plugin when operating on this Volume. It represents one key-value
// pair of possibly many.
type Secret struct {
// Key is the name of the key of the key-value pair passed to the plugin.
Key string
// Secret is the swarm Secret object from which to read data. This can be a
// Secret name or ID. The Secret data is retrieved by Swarm and used as the
// value of the key-value pair passed to the plugin.
Secret string
}
// PublishState represents the state of a Volume as it pertains to its
// use on a particular Node.
type PublishState string
const (
// StatePending indicates that the volume should be published on
// this node, but the call to ControllerPublishVolume has not been
// successfully completed yet and the result recorded by swarmkit.
StatePending PublishState = "pending-publish"
// StatePublished means the volume is published successfully to the node.
StatePublished PublishState = "published"
// StatePendingNodeUnpublish indicates that the Volume should be
// unpublished on the Node, and we're waiting for confirmation that it has
// done so. After the Node has confirmed that the Volume has been
// unpublished, the state will move to StatePendingUnpublish.
StatePendingNodeUnpublish PublishState = "pending-node-unpublish"
// StatePendingUnpublish means the volume is still published to the node
// by the controller, awaiting the operation to unpublish it.
StatePendingUnpublish PublishState = "pending-controller-unpublish"
)
// PublishStatus represents the status of the volume as published to an
// individual node
type PublishStatus struct {
// NodeID is the ID of the swarm node this Volume is published to.
NodeID string `json:",omitempty"`
// State is the publish state of the volume.
State PublishState `json:",omitempty"`
// PublishContext is the PublishContext returned by the CSI plugin when
// a volume is published.
PublishContext map[string]string `json:",omitempty"`
}
// Info contains information about the Volume as a whole as provided by
// the CSI storage plugin.
type Info struct {
// CapacityBytes is the capacity of the volume in bytes. A value of 0
// indicates that the capacity is unknown.
CapacityBytes int64 `json:",omitempty"`
// VolumeContext is the context originating from the CSI storage plugin
// when the Volume is created.
VolumeContext map[string]string `json:",omitempty"`
// VolumeID is the ID of the Volume as seen by the CSI storage plugin. This
// is distinct from the Volume's Swarm ID, which is the ID the Docker
// Engine uses to refer to the Volume. If this field is blank, then
// the Volume has not been successfully created yet.
VolumeID string `json:",omitempty"`
// AccessibleTopology is the topology this volume is actually accessible
// from.
AccessibleTopology []Topology `json:",omitempty"`
}


@@ -1,29 +0,0 @@
package volume
// This file was generated by the swagger tool.
// Editing this file might prove futile when you re-run the swagger generate command
// CreateOptions VolumeConfig
//
// Volume configuration
// swagger:model CreateOptions
type CreateOptions struct {
// cluster volume spec
ClusterVolumeSpec *ClusterVolumeSpec `json:"ClusterVolumeSpec,omitempty"`
// Name of the volume driver to use.
Driver string `json:"Driver,omitempty"`
// A mapping of driver options and values. These options are
// passed directly to the driver and are driver specific.
//
DriverOpts map[string]string `json:"DriverOpts,omitempty"`
// User-defined key/value metadata.
Labels map[string]string `json:"Labels,omitempty"`
// The new volume's name. If not specified, Docker generates a name.
//
Name string `json:"Name,omitempty"`
}
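A sketch of creating a volume with these options. Newer clients take volume.CreateOptions directly; older ones take it under the deprecated volume.VolumeCreateBody alias shown below. The name and label are placeholders:

package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/api/types/volume"
	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}
	vol, err := cli.VolumeCreate(context.Background(), volume.CreateOptions{
		Driver: "local",
		Name:   "mydata",
		Labels: map[string]string{"com.example.purpose": "demo"},
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("created:", vol.Name, "at", vol.Mountpoint)
}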


@@ -1,11 +0,0 @@
package volume // import "github.com/docker/docker/api/types/volume"
// VolumeCreateBody Volume configuration
//
// Deprecated: use CreateOptions
type VolumeCreateBody = CreateOptions
// VolumeListOKBody Volume list response
//
// Deprecated: use ListResponse
type VolumeListOKBody = ListResponse


@@ -1,18 +0,0 @@
package volume
// This file was generated by the swagger tool.
// Editing this file might prove futile when you re-run the swagger generate command
// ListResponse VolumeListResponse
//
// Volume list response
// swagger:model ListResponse
type ListResponse struct {
// List of volumes
Volumes []*Volume `json:"Volumes"`
// Warnings that occurred when fetching the list of volumes.
//
Warnings []string `json:"Warnings"`
}


@@ -1,8 +0,0 @@
package volume // import "github.com/docker/docker/api/types/volume"
import "github.com/docker/docker/api/types/filters"
// ListOptions holds parameters to list volumes.
type ListOptions struct {
Filters filters.Args
}
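A sketch of listing volumes with a driver filter; older clients take the filters.Args directly rather than a volume.ListOptions:

package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/api/types/filters"
	"github.com/docker/docker/api/types/volume"
	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}
	resp, err := cli.VolumeList(context.Background(), volume.ListOptions{
		Filters: filters.NewArgs(filters.Arg("driver", "local")),
	})
	if err != nil {
		panic(err)
	}
	for _, v := range resp.Volumes {
		fmt.Println(v.Name, v.Mountpoint)
	}
	for _, w := range resp.Warnings {
		fmt.Println("warning:", w)
	}
}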


@@ -1,75 +0,0 @@
package volume
// This file was generated by the swagger tool.
// Editing this file might prove futile when you re-run the swagger generate command
// Volume volume
// swagger:model Volume
type Volume struct {
// cluster volume
ClusterVolume *ClusterVolume `json:"ClusterVolume,omitempty"`
// Date/Time the volume was created.
CreatedAt string `json:"CreatedAt,omitempty"`
// Name of the volume driver used by the volume.
// Required: true
Driver string `json:"Driver"`
// User-defined key/value metadata.
// Required: true
Labels map[string]string `json:"Labels"`
// Mount path of the volume on the host.
// Required: true
Mountpoint string `json:"Mountpoint"`
// Name of the volume.
// Required: true
Name string `json:"Name"`
// The driver specific options used when creating the volume.
//
// Required: true
Options map[string]string `json:"Options"`
// The level at which the volume exists. Either `global` for cluster-wide,
// or `local` for machine level.
//
// Required: true
Scope string `json:"Scope"`
// Low-level details about the volume, provided by the volume driver.
// Details are returned as a map with key/value pairs:
// `{"key":"value","key2":"value2"}`.
//
// The `Status` field is optional, and is omitted if the volume driver
// does not support this feature.
//
Status map[string]interface{} `json:"Status,omitempty"`
// usage data
UsageData *UsageData `json:"UsageData,omitempty"`
}
// UsageData Usage details about the volume. This information is used by the
// `GET /system/df` endpoint, and omitted in other endpoints.
//
// swagger:model UsageData
type UsageData struct {
// The number of containers referencing this volume. This field
// is set to `-1` if the reference-count is not available.
//
// Required: true
RefCount int64 `json:"RefCount"`
// Amount of disk space used by the volume (in bytes). This information
// is only available for volumes created with the `"local"` volume
// driver. For volumes created with other volume drivers, this field
// is set to `-1` ("not available").
//
// Required: true
Size int64 `json:"Size"`
}
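A small sketch that respects the -1 "not available" sentinel documented above; totalVolumeSize is a hypothetical helper, not part of the API:

package main

import (
	"fmt"

	"github.com/docker/docker/api/types/volume"
)

// totalVolumeSize sums UsageData sizes, skipping the -1 sentinel.
func totalVolumeSize(vols []*volume.Volume) int64 {
	var total int64
	for _, v := range vols {
		if v.UsageData != nil && v.UsageData.Size >= 0 {
			total += v.UsageData.Size
		}
	}
	return total
}

func main() {
	vols := []*volume.Volume{
		{Name: "a", UsageData: &volume.UsageData{RefCount: 1, Size: 1024}},
		{Name: "b", UsageData: &volume.UsageData{RefCount: -1, Size: -1}},
	}
	fmt.Println(totalVolumeSize(vols)) // 1024
}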

Some files were not shown because too many files have changed in this diff.