Compare commits

247 Commits

Andrew Hsu
8c91e9672c Merge pull request #163 from thaJeztah/18.09_backport_busyboxstage2
[18.09 backport] Windows: Bump busybox to v1.1
2019-02-25 16:29:56 -08:00
John Howard
613c2f27ed Windows: Bump busybox to v1.1
Signed-off-by: John Howard <jhoward@microsoft.com>

This is a follow-on from https://github.com/moby/moby/pull/38277
but had to be done in a couple of stages to ensure that CI didn't
break. v1.1 of the busybox image is now based on a CMD of "sh"
rather than using an entrypoint. And it also uses the bin directory
rather than `c:\busybox`. This makes it look a lot closer to the
Linux busybox image, and means that a couple of Windows-isms in
CI tests can be reverted back to be identical to their Linux
equivalents.

(cherry picked from commit 561e0f6b7f)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-02-25 10:44:48 +01:00
Andrew Hsu
e4b8756784 Merge pull request #153 from thaJeztah/18.09_backport_update_containerd_1.2.4
[18.09 backport] update containerd 1.2.4, runc 6635b4f
2019-02-23 11:09:18 -08:00
Sebastiaan van Stijn
ffeebb217c Update containerd runtime 1.2.4
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 26413ede57)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-02-23 01:49:38 +01:00
Sebastiaan van Stijn
c7fca75c03 Update runc to 6635b4f (fix CVE-2019-5736)
- Fixes a vulnerability in runc that allows a container escape (CVE-2019-5736)
  6635b4f0c6,
- Includes security fix for `runc run --no-pivot` (`DOCKER_RAMDISK=1`):
  28a697cce3
  (NOTE: the vuln is attackable only when `DOCKER_RAMDISK=1` is set && seccomp is disabled)

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit f03698b69a)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-02-23 01:49:25 +01:00
Sebastiaan van Stijn
88330c9aac Revert "Merge pull request #240 from seemethere/bundle_me_up_1809"
This reverts commit eb137ff176, reversing
changes made to a79fabbfe8.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-02-23 01:49:12 +01:00
Andrew Hsu
ba8664cc22 Merge pull request #154 from thaJeztah/18.09_backport_fix_stale_container_on_start
[18.09 backport] Delete stale containerd object on start failure
2019-02-22 13:52:47 -08:00
Tibor Vass
24c6c3eb52 Merge pull request #162 from thaJeztah/18.09_backport_38636_fix_nil_pointer_dereference
[18.09 backport] Fix nil pointer dereference on failure to connect to containerd
2019-02-22 10:34:11 -08:00
Simão Reis
0841c61862 Fix nil pointer dereference on failure to connect to containerd
Signed-off-by: Simão Reis <smnrsti@gmail.com>
(cherry picked from commit 3134161be3)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-02-22 13:22:29 +01:00
Tibor Vass
2e4c5c57c3 Merge pull request #160 from thaJeztah/18.09_backport_add_missing_char_device_mode
[18.09 backport] Graphdriver: fix "device" mode not being detected if "character-device" bit is set
2019-02-21 17:01:40 -08:00
Tibor Vass
db7a8cb7ba Merge pull request #135 from thaJeztah/18.09_backport_xattr_fix
[18.09 backport] Ignore xattr ENOTSUP errors on copy (fixes #38155)
2019-02-21 15:00:20 -08:00
Andrew Hsu
6b0ba3745d Merge pull request #143 from thaJeztah/18.09_backport_skip_kmem_tests_on_rhel
[18.09 backport] Skip kernel-memory tests on RHEL/CentOS daemons
2019-02-20 18:23:09 -08:00
Andrew Hsu
5c15222f0f Merge pull request #158 from thaJeztah/18.09_backport_save_the_environment
[18.09 backport] Fix: plugin-tests discarding current environment
2019-02-20 18:11:25 -08:00
Andrew Hsu
f935add758 Merge pull request #157 from thaJeztah/18.09_backport_fix_test_int
[18.09 backport] make test-integration: use correct dockerd binary
2019-02-20 18:10:26 -08:00
Andrew Hsu
3c1fa928cb Merge pull request #159 from thaJeztah/18.09_backport_even_more_names_redux
[18.09 backport] Makes a few modifications to the name generator.
2019-02-20 18:08:52 -08:00
Andrew Hsu
37cf1cd68e Merge pull request #161 from kolyshkin/18.09-backport-38423
[18.09] Backport "Disabled these tests on s390x and ppc64le:"
2019-02-20 18:06:18 -08:00
Andrew Hsu
02c953cf36 Merge pull request #155 from thaJeztah/18.09_backport_override_validate
[18.09 backport] Allow overriding repository and branch in validate scripts, and no need to git fetch in CI
2019-02-20 18:05:25 -08:00
Andrew Hsu
9dc0488d1c Merge pull request #149 from thaJeztah/18.09_backport_fix_restart
[18.09 backport] keep old network ids
2019-02-20 18:03:40 -08:00
Olli Janatuinen
278f1a130b Disabled these tests on s390x and ppc64le:
- TestAPISwarmLeaderElection
- TestAPISwarmRaftQuorum
- TestSwarmClusterRotateUnlockKey

because they are known to be flaky.

Signed-off-by: Olli Janatuinen <olli.janatuinen@gmail.com>
(cherry picked from commit 02157c638b)
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
2019-02-20 13:51:17 -08:00
Sebastiaan van Stijn
3744b45ba8 Graphdriver: fix "device" mode not being detected if "character-device" bit is set
Due to a bug in Golang (golang/go#27640), the "character device"
bit was omitted when checking file-modes with `os.ModeType`.

This bug was resolved in Go 1.12, but as a result, graphdrivers
would no longer recognize "device" files, causing pulling of
images that have a file with this filemode to fail;

    failed to register layer:
    unknown file type for /var/lib/docker/vfs/dir/.../dev/console

The current code checked for an exact match of modes. The
`os.ModeCharDevice` and `os.ModeDevice` bits will always be set in
tandem; however, because the code was only looking for an exact
match, this detection broke once `os.ModeCharDevice` was also set.

This patch changes the code to be more defensive, and instead
check if the `os.ModeDevice` bit is set (either with, or without
the `os.ModeCharDevice` bit).

In addition, some information was added to the error message if
no type was matched, to assist debugging in case additional types
are added in the future.
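
A minimal sketch of the defensive check described above (hypothetical helper name and layout; the actual fix lives in the graphdriver archive code):

```go
package main

import (
	"fmt"
	"os"
)

// fileType classifies a file mode the defensive way: test the
// os.ModeDevice bit rather than requiring an exact match, so the check
// works whether or not os.ModeCharDevice is set in tandem (as it is on
// Go 1.12 and up).
func fileType(mode os.FileMode) (string, error) {
	switch {
	case mode.IsRegular():
		return "regular", nil
	case mode.IsDir():
		return "directory", nil
	case mode&os.ModeDevice != 0:
		return "device", nil // block or character device
	default:
		// Include the mode in the error to assist debugging if new
		// types show up in the future.
		return "", fmt.Errorf("unknown file type for mode %v", mode)
	}
}

func main() {
	t, _ := fileType(os.ModeDevice | os.ModeCharDevice)
	fmt.Println(t) // "device", even with the char-device bit set
}
```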

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit c7a38c2c06)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-02-20 14:31:18 +01:00
Debayan De
a818442de7 Makes a few modifications to the name generator.
* Replaces `cocks` with `cerf` as the former might be perceived as
offensive by some people (as pointed out by @jeking3
[here](https://github.com/moby/moby/pull/37157#commitcomment-31758059))
* Removes a duplicate entry for `burnell`
* Re-arranges the entry for `sutherland` to ensure that the names are in
sorted order
* Adds entries for `shamir` and `wilbur`

Signed-off-by: Debayan De <debayande@users.noreply.github.com>
(cherry picked from commit e50f791d42)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-02-20 13:31:11 +01:00
Sebastiaan van Stijn
19e733f89f Fix: plugin-tests discarding current environment
By default, exec uses the environment of the current process; however,
if `exec.Env` is not `nil`, the current environment is discarded:

e73f489494/src/os/exec/exec.go (L57-L60)

> If Env is nil, the new process uses the current process's environment.

When adding a new environment variable, prepend the current environment,
to make sure it is not discarded.
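
A minimal sketch of the fix (the command name is a throwaway for illustration); the point is prepending os.Environ() instead of assigning a fresh slice:

```go
package main

import (
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("env") // hypothetical command, for illustration
	// Wrong: cmd.Env = []string{"FOO=bar"} would discard the inherited
	// environment entirely, because Env is no longer nil.
	// Right: start from the current environment, then add to it.
	cmd.Env = append(os.Environ(), "FOO=bar")
	out, _ := cmd.Output()
	os.Stdout.Write(out) // full environment plus FOO=bar
}
```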

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit b84bff7f8a)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-02-20 11:27:07 +01:00
Kir Kolyshkin
e9ecd5e486 make test-integration: use correct dockerd binary
Here's what happens:
1. One runs `make binary` once
2. Days go by...
3. One makes changes to dockerd sources
4. One runs `make test-integration` to test the changes
5. One spends a long time figuring out why on Earth
   those changes in step 3 are ignored by step 4.
6. One writes this patch
7. ...
8. PROFIT!!

OK, so `make test-integration` builds a dockerd binary
in bundles/dynbinary-daemon/, then starts a daemon instance
for testing. The problem is, the script that starts the
daemon sets PATH to try `bundles/binary-daemon/` first,
and `bundles/dynbinary-daemon/` second.

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit 228bc35e82)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-02-19 22:35:50 +01:00
Tibor Vass
7b9ec00eec hack: no need to git fetch in CI
CIs are assumed to do a git fetch and git merge before running tests.
Therefore, no need for a git fetch inside our validate scripts in CI.

If VALIDATE_ORIGIN_BRANCH is set, then git fetch is skipped and
VALIDATE_ORIGIN_BRANCH is used in validate scripts.

Otherwise, behavior is unchanged.

Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit feb70fd5c9)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-02-18 11:19:49 +01:00
Sebastiaan van Stijn
748f37022d Allow overriding repository and branch in validate scripts
When running CI in other repositories (e.g. Docker's downstream
docker/engine repository), or other branches, the validation
scripts were calculating the list of changes based on the wrong
information.

This led to weird failures in CI in a branch where these values
were not updated :-) (CI on a pull request failed because it detected
that new tests were added to the deprecated `integration-cli` test-suite,
but the pull request did not actually make changes in that area).

This patch allows overriding the target repository (and branch)
to compare to (without having to edit the scripts).

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 2a08f33166)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-02-18 11:19:23 +01:00
Brian Goff
1d0353548a Delete stale containerd object on start failure
containerd has two objects with regard to containers.
There is a "container" object, which is metadata, and a "task", which is
managing the actual runtime state.

When docker starts a container, it creates both the container metadata
and the task at the same time. So when a container exits, docker deletes
both of these objects as well.

This ensures that if, on start, creating the container metadata object
in containerd fails due to a name conflict, we go ahead and clean up
the stale object and try again.
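
A sketch of the retry-on-conflict idea, assuming containerd's Go client and its errdefs helpers (illustrative, not the daemon's exact code):

```go
package daemon // illustrative package name

import (
	"context"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/errdefs"
)

// createContainer creates the containerd metadata object; on a name
// conflict it deletes the stale object left by a previous failed start
// and tries once more.
func createContainer(ctx context.Context, client *containerd.Client,
	id string, opts ...containerd.NewContainerOpts) (containerd.Container, error) {
	ctr, err := client.NewContainer(ctx, id, opts...)
	if err != nil && errdefs.IsAlreadyExists(err) {
		if old, lerr := client.LoadContainer(ctx, id); lerr == nil {
			_ = old.Delete(ctx) // best-effort cleanup of the stale object
		}
		ctr, err = client.NewContainer(ctx, id, opts...)
	}
	return ctr, err
}
```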

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit 5ba30cd1dc)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-02-15 01:01:52 +01:00
Andrew Hsu
02b07d4ede Merge pull request #147 from thaJeztah/18.09_bump_golang_1.10.8
[18.09] Bump Golang 1.10.8 (CVE-2019-6486)
2019-02-13 08:20:56 -08:00
Sebastiaan van Stijn
caabacdda5 Merge pull request #150 from thaJeztah/18.09_backport_fix_pkg_archive_xattr_test
[18.09 backport] pkg/archive: fix TestTarUntarWithXattr failure on recent kernel
2019-02-13 12:09:06 +01:00
Akihiro Suda
d158b9e74f pkg/archive: fix TestTarUntarWithXattr failure on recent kernel
Recent kernels have a strict check for the security.capability value.
Fix #38289

Signed-off-by: Akihiro Suda <suda.akihiro@lab.ntt.co.jp>
(cherry picked from commit 9ddd6e47a9)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-02-12 00:13:23 +01:00
akolomentsev
317e0acc4e keep old network ids
On Windows, all networks are re-populated in the store during network controller initialization. The current version also regenerates network IDs, which may be referenced by other components, and this can cause broken references to networks. This commit avoids regenerating network IDs.

Signed-off-by: Andrey Kolomentsev <andrey.kolomentsev@docker.com>
(cherry picked from commit e017717d96)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-02-11 22:12:52 +01:00
Sebastiaan van Stijn
325f6ee47a [18.09] Bump Golang 1.10.8 (CVE-2019-6486)
See the milestone for details;
https://github.com/golang/go/issues?q=milestone%3AGo1.10.8+label%3ACherryPickApproved

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-02-09 11:05:52 +01:00
Dimitris Mandalidis
c51d247f03 Ignore xattr ENOTSUP errors on copy (fixes #38155)
Signed-off-by: Dimitris Mandalidis <dimitris.mandalidis@gmail.com>
(cherry picked from commit d0192ae154)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-02-09 11:04:09 +01:00
Andrew Hsu
eb137ff176 Merge pull request #240 from seemethere/bundle_me_up_1809
[18.09-ce] [ENGSEC-30] CVE-2019-5736 apply fix via git bundle instead of patches
2019-02-06 15:39:49 -08:00
Eli Uriegas
03dfb0ba53 Apply git bundles for CVE-2019-5736
A git bundle allows us keep the same SHA, giving us the ability to
validate our patch against a known entity and allowing us to push
directly from our private forks to public forks without having to
re-apply any patches.

Signed-off-by: Eli Uriegas <eli.uriegas@docker.com>
2019-02-06 00:25:54 +00:00
Jameson Hyde
a79fabbfe8 If url includes scheme, urlPath will drop hostname, which would not match the auth check
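
To see the mismatch in isolation (a standalone net/url sketch, not the authz plugin code itself):

```go
package main

import (
	"fmt"
	"net/url"
)

func main() {
	// With a scheme present, Path carries neither scheme nor hostname,
	// so an auth check comparing against the full URL string won't match.
	u, _ := url.Parse("http://localhost/v1.30/containers/create")
	fmt.Println(u.Path) // "/v1.30/containers/create" (hostname dropped)
}
```
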
Signed-off-by: Jameson Hyde <jameson.hyde@docker.com>
(cherry picked from commit 754fb8d9d03895ae3ab60d2ad778152b0d835206)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Signed-off-by: Eli Uriegas <eli.uriegas@docker.com>
2019-01-09 17:31:53 +00:00
Jameson Hyde
fc274cd2ff Authz plugin security fixes for 0-length content and path validation
Signed-off-by: Jameson Hyde <jameson.hyde@docker.com>
fix comments

(cherry picked from commit 9659c3a52bac57e615b5fb49b0652baca448643e)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Signed-off-by: Eli Uriegas <eli.uriegas@docker.com>
2019-01-09 17:31:53 +00:00
Eli Uriegas
d4f336d8ef Merge pull request #144 from thaJeztah/18.09_backport_bump_containerd_v1.2.2
[18.09 backport] Bump containerd to v1.2.2
2019-01-08 10:05:03 -08:00
Sebastiaan van Stijn
f80c6d7ae1 Bump containerd to v1.2.2
- Fix a bug where a container can't be stopped or inspected when its corresponding image is deleted
- Fix a bug where the cri plugin handles containerd events outside of the k8s.io namespace

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 27cc170d28)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-01-08 02:45:06 +01:00
Sebastiaan van Stijn
e042692db1 Skip kernel-memory tests on RHEL/CentOS daemons
RHEL/CentOS 3.10 kernels report that kernel-memory accounting is supported,
but it actually does not work.

Runc (when compiled for those kernels) will be compiled without kernel-memory
support, so even though the daemon may be reporting that it's supported,
it actually is not.

This causes tests to fail when testing against a daemon that's using a runc
version without kmem support.

For now, skip these tests based on the kernel version reported by the daemon.

This should fix failures such as:

```
FAIL: /go/src/github.com/docker/docker/integration-cli/docker_cli_run_unix_test.go:499: DockerSuite.TestRunWithKernelMemory

assertion failed:
Command:  /usr/bin/docker run --kernel-memory 50M --name test1 busybox cat /sys/fs/cgroup/memory/memory.kmem.limit_in_bytes
ExitCode: 0
Error:    <nil>
Stdout:   9223372036854771712

Stderr:   WARNING: You specified a kernel memory limit on a kernel older than 4.0. Kernel memory limits are experimental on older kernels, it won't work as expected and can cause your system to be unstable.

Failures:
Expected stdout to contain "52428800"

FAIL: /go/src/github.com/docker/docker/integration-cli/docker_cli_update_unix_test.go:125: DockerSuite.TestUpdateKernelMemory

/go/src/github.com/docker/docker/integration-cli/docker_cli_update_unix_test.go:136:
    ...open /go/src/github.com/docker/docker/integration-cli/docker_cli_update_unix_test.go: no such file or directory
... obtained string = "9223372036854771712"
... expected string = "104857600"

----------------------------------------------------------------------
FAIL: /go/src/github.com/docker/docker/integration-cli/docker_cli_update_unix_test.go:139: DockerSuite.TestUpdateKernelMemoryUninitialized

/go/src/github.com/docker/docker/integration-cli/docker_cli_update_unix_test.go:149:
    ...open /go/src/github.com/docker/docker/integration-cli/docker_cli_update_unix_test.go: no such file or directory
... value = nil
```

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 1e1156cf67)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-01-05 09:53:31 +01:00
Sebastiaan van Stijn
ce8b8f1cf3 Merge pull request #142 from docker/revert-130-18.09_backport_fix_api_return_code
Revert "[18.09 backport] API: fix status code on conflicting service names"
2018-12-28 21:50:02 +01:00
Madhu Venugopal
24f71e3998 Revert "[18.09 backport] API: fix status code on conflicting service names"
Signed-off-by: Madhu Venugopal <madhu@docker.com>
2018-12-28 09:40:26 -08:00
Andrew Hsu
484a3c3ad0 Merge pull request #140 from andrewhsu/d
[18.09] libcontainerd: prevent exec delete locking
2018-12-17 16:15:27 +01:00
Tonis Tiigi
6646d08782 libcontainerd: prevent exec delete locking
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit 332f134890)
Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
2018-12-17 12:07:32 +00:00
Andrew Hsu
a9ae6c7547 Revert "Propagate context to exec delete"
This reverts commit b6430ba413.

Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
2018-12-17 12:06:35 +00:00
Andrew Hsu
cc7773c787 Merge pull request #139 from andrewhsu/ctxt
[18.09] Propagate context to exec delete
2018-12-16 11:14:58 +00:00
Andrew Hsu
b2185081d9 Merge pull request #138 from andrewhsu/cont
[18.09] Update containerd to aa5e000c963756778ab3ebd1a12c6
2018-12-16 11:14:33 +00:00
Andrew Hsu
a6d4103450 Merge pull request #137 from thaJeztah/18.09_bump_golang_1.10.6
[18.09] Bump Golang 1.10.6 (CVE-2018-16875)
2018-12-14 23:23:09 +00:00
Michael Crosby
b6430ba413 Propagate context to exec delete
Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
(cherry picked from commit 96e0ba1afb)
Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
2018-12-14 22:54:46 +00:00
Michael Crosby
d161dfe1a3 Update containerd to aa5e000c963756778ab3ebd1a12c6
This includes a patch on top of containerd 1.2.1 to handle fifo
timeouts.

Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
(cherry picked from commit e5d9d72162)
Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
2018-12-14 22:47:44 +00:00
Sebastiaan van Stijn
8afe9f422d Bump Golang 1.10.6 (CVE-2018-16875)
go1.10.6 (released 2018/12/14)

- crypto/x509: CPU denial of service in chain validation golang/go#29233
- cmd/go: directory traversal in "go get" via curly braces in import paths golang/go#29231
- cmd/go: remote command execution during "go get -u" golang/go#29230

See the Go 1.10.6 milestone on the issue tracker for details:
https://github.com/golang/go/issues?q=milestone%3AGo1.10.6

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-12-14 00:44:49 +01:00
Andrew Hsu
42b58273f6 Merge pull request #130 from thaJeztah/18.09_backport_fix_api_return_code
[18.09 backport] API: fix status code on conflicting service names
2018-12-13 10:54:52 +00:00
Andrew Hsu
a8572d3e8e Merge pull request #132 from thaJeztah/18.09_backport_idprefix
[18.09 backport] fixes display text in Multiple IDs found with provided prefix
2018-12-13 10:53:54 +00:00
Andrew Hsu
01c732d40a Merge pull request #136 from thaJeztah/18.09_backport_fix_panic
[18.09 engine] registry: use len(via)!=0 instead of via!=nil
2018-12-13 10:43:38 +00:00
Iskander (Alex) Sharipov
3482a3b14a registry: use len(via)!=0 instead of via!=nil
This avoids the corner case where `via` is not nil, but has a length of 0,
so the updated code does not panic in that situation.
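
A minimal sketch of the corner case, using the `via` parameter type from http.Client's CheckRedirect (illustrative only):

```go
package main

import (
	"fmt"
	"net/http"
)

// lastVia returns the most recent request in the redirect chain. A nil
// check alone is not enough: an empty-but-non-nil slice would make
// via[len(via)-1] panic, so test the length instead.
func lastVia(via []*http.Request) *http.Request {
	if len(via) != 0 {
		return via[len(via)-1]
	}
	return nil
}

func main() {
	fmt.Println(lastVia(nil))               // <nil>
	fmt.Println(lastVia([]*http.Request{})) // <nil>, no panic
}
```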

Signed-off-by: Iskander Sharipov <quasilyte@gmail.com>
(cherry picked from commit a5c185b994)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-12-12 18:12:01 +01:00
Andrew Hsu
1ffccb515a Merge pull request #133 from thaJeztah/18.09_backport_fix_ipam_swagger
[18.09 backport] Swagger: fix definition of IPAM driver options
2018-12-12 16:24:46 +00:00
Sebastiaan van Stijn
55a4be8cf5 Swagger: fix definition of IPAM driver options
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit a5dd68186c)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-12-12 01:59:01 +01:00
Lifubang
1043f40fb5 fixes display text in Multiple IDs found with provided prefix
Signed-off-by: Lifubang <lifubang@acmcoder.com>
(cherry picked from commit 00eb3480dc)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-12-11 15:15:20 +01:00
Andrew Hsu
d21754a3fb Merge pull request #131 from tonistiigi/1809-update-buildkit
[18.09 backport] vendor: update buildkit to d9f75920
2018-12-10 16:29:04 +00:00
Andrew Hsu
b54b6d145c Merge pull request #129 from thaJeztah/18.09_backport_bump_containerd_vendoring
[18.09 backport] update containerd vendoring to v1.2.1
2018-12-10 13:54:05 +00:00
Tonis Tiigi
43dedf3975 vendor: update buildkit to d9f75920
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit 32f4805815)
2018-12-10 13:03:13 +00:00
Sebastiaan van Stijn
a69626afb1 Add test for status code on conflicting service names
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit b0de11cf30)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-12-10 12:21:26 +01:00
Sebastiaan van Stijn
ad7105260f Update swarmkit to return correct error-codes on conflicting names
This updates the swarmkit vendoring to the latest version in the bump_v18.09
branch

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-12-10 12:18:32 +01:00
Andrew Hsu
b66c7ad62e use empty string as cgroup path to grab first find
Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 78045a5419)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-12-07 18:58:03 +01:00
Andrew Hsu
5cd4797c89 vndr libnetwork to adjust for updated runc
Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 75c4b74155)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-12-07 18:57:54 +01:00
Andrew Hsu
7dfd23acf1 update containerd to v1.2.1
Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 615eecf8ac)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-12-07 18:45:14 +01:00
Andrew Hsu
6c633fbe18 Merge pull request #128 from thaJeztah/18.09_backport_containerd_v1.2.1-GA
[18.09 backport] update just installer of containerd to 1.2.1
2018-12-07 06:18:48 -08:00
Andrew Hsu
2c64d7c858 update just installer of containerd to 1.2.1
Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
(cherry picked from commit 1014b2bb66)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-12-07 11:20:22 +01:00
Andrew Hsu
82a4418f57 Merge pull request #126 from thaJeztah/18.09_backport_mask_asound
[18.09 backport] Masked /proc/asound
2018-12-03 14:06:43 -08:00
Andrew Hsu
e7a4385e24 Merge pull request #123 from tonistiigi/1809-builder-net
[18.09] builder: set externalkey option for faster hook processing
2018-11-30 14:02:12 -08:00
Andrew Hsu
09251ef9ca Merge pull request #127 from thaJeztah/18.09_bump_go_to_1.10.5
[18.09] Bump Go to 1.10.5
2018-11-30 13:59:38 -08:00
Sebastiaan van Stijn
00ad8e7c57 Bump Go to 1.10.5
go1.10.5 (released 2018/11/02) includes fixes to the go command, linker,
runtime and the database/sql package. See the milestone on the issue
tracker for details:

List of changes; https://github.com/golang/go/issues?q=milestone%3AGo1.10.5

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-11-30 20:43:05 +01:00
Jonathan A. Schweder
5fffdb3226 Masked /proc/asound
@sw-pschmied originally posted this in #38285

While looking through the Moby source code, /proc/asound was found to be
shared with containers as read-only (as defined in
https://github.com/moby/moby/blob/master/oci/defaults.go#L128).

This can lead to two information leaks.

---

**Leak of media playback status of the host**

Steps to reproduce the issue:

 - Listen to music/Play a YouTube video/Do anything else that involves
sound output
 - Execute docker run --rm ubuntu:latest bash -c "sleep 7; cat
/proc/asound/card*/pcm*p/sub*/status | grep state | cut -d ' ' -f2 |
grep RUNNING || echo 'not running'"
 - See that the containerized process is able to check whether someone
on the host is playing music as it prints RUNNING
 - Stop the music output
 - Execute the command again (The sleep is delaying the output because
information regarding playback status isn't propagated instantly)
 - See that it outputs not running

**Describe the results you received:**

A containerized process is able to gather information on the playback
status of an audio device governed by the host. Therefore a process of a
container is able to check whether and what kind of user activity is
present on the host system. Also, this may indicate whether a container
runs on a desktop system or a server as media playback rarely happens on
server systems.

The description above is in regard to media playback - when examining
`/proc/asound/card*/pcm*c/sub*/status` (`pcm*c` instead of `pcm*p`) this
can also leak information regarding capturing sound, as in recording
audio or making calls on the host system.

Signed-off-by: Jonathan A. Schweder <jonathanschweder@gmail.com>

(cherry picked from commit 64e52ff3db)

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-11-30 14:57:51 +01:00
Andrew Hsu
e32fc16daa Merge pull request #125 from thaJeztah/18.09_backport_busybox
[18.09 backport] Windows: Tie busybox to specific version
2018-11-29 22:46:29 -08:00
John Howard
9c93de59da Windows: Tie busybox to version
Signed-off-by: John Howard <jhoward@microsoft.com>
(cherry picked from commit 14c8b67e51)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-11-30 01:38:11 +01:00
Tonis Tiigi
73911117b3 builder: delete sandbox in a goroutine for performance
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit bcf1967dd0)
2018-11-29 09:15:15 -08:00
Tonis Tiigi
8fe3b4d2ec builder: set externalkey option for faster hook processing
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit 1ad272c7e4)
2018-11-29 09:15:00 -08:00
Andrew Hsu
a1f6b04a8d Merge pull request #81 from thaJeztah/18.09_backport_processandiot
[18.09 backport] Windows:Allow process isolation
2018-11-27 09:58:22 -08:00
Andrew Hsu
7a566c0e4a Merge pull request #85 from thaJeztah/18.09_backport_deprecated_storagedrivers
[18.09 backport] deprecate legacy "overlay", and "devicemapper" storage drivers
2018-11-27 09:57:46 -08:00
Andrew Hsu
61a250fd23 Merge pull request #107 from thaJeztah/18.09_backport_bump_libnetwork
[18.09 backport] update libnetwork to fix iptables compatibility on debian
2018-11-27 09:55:12 -08:00
Andrew Hsu
8f18feabeb Merge pull request #110 from thaJeztah/18.09_backport_handle_invalid_json
[18.09 backport] API: properly handle invalid JSON to return a 400 status
2018-11-27 09:51:54 -08:00
Andrew Hsu
08a77f11a6 Merge pull request #112 from thaJeztah/18.09_backport_moby_37747
[18.09 backport] awslogs: account for UTF-8 normalization in limits
2018-11-27 09:48:39 -08:00
Andrew Hsu
4fd103ae26 Merge pull request #113 from thaJeztah/18.09_backport_detach
[18.09 backport] Windows: DetachVhd attempt in cleanup
2018-11-27 09:47:07 -08:00
Andrew Hsu
52a6fc02b1 Merge pull request #114 from thaJeztah/18.09_backport_limit_client_readall
[18.09 backport] client: use io.LimitedReader for reading HTTP error
2018-11-27 09:44:42 -08:00
Andrew Hsu
12b8ec42b6 Merge pull request #116 from thaJeztah/18.09_backport_apparmor_external_templates
[18.09 backport] apparmor: allow receiving of signals from 'docker kill'
2018-11-27 09:41:37 -08:00
Andrew Hsu
23122e4d52 Merge pull request #118 from thaJeztah/18.09_backport_fence_default_addr_pools
[18.09 backport] Ignore default address-pools on API < 1.39
2018-11-27 09:38:39 -08:00
Andrew Hsu
04a6b49a89 Merge pull request #119 from thaJeztah/18.09_backport_fix_default_addr_pools_swagger
[18.09 backport] Add missing default address pool fields to swagger
2018-11-27 09:36:52 -08:00
Andrew Hsu
c488cf7e95 Merge pull request #120 from thaJeztah/18.09_backport_check_for_more_ipvs_options
[18.09 backport] Add CONFIG_IP_VS_PROTO_TCP, CONFIG_IP_VS_PROTO_UDP, IP_NF_TARGET_REDIRECT to check-config.sh
2018-11-27 09:35:55 -08:00
Andrew Hsu
c95cf2a5d3 Merge pull request #121 from thaJeztah/18.09_backport_containerd_v1.2.1
[18.09 backport] Update containerd to v1.2.1-rc.0
2018-11-27 09:15:48 -08:00
Andrew Hsu
9606931393 Merge pull request #122 from tonistiigi/buildkit-18091
[18.09 backport] BuildKit fixes for 18.09.1
2018-11-26 15:56:38 -08:00
Tonis Tiigi
850fff5fc7 vendor: update buildkit to v0.3.3
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit 6204eb0645)
2018-11-21 14:10:01 -08:00
Tonis Tiigi
0d17f40994 builder: avoid unset credentials in containerd
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit effb2bd9d2)
2018-11-21 14:09:31 -08:00
Tibor Vass
34867646af builder: ignore label and label! prune filters
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 62923f29f5)
2018-11-21 14:08:18 -08:00
Tibor Vass
0b2d88d328 builder: deprecate prune filter unused-for in favor of until
This is to keep the UX consistent. `unused-for` is still accepted as a synonym.

Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 369da264ba)
2018-11-21 14:08:04 -08:00
Eli Uriegas
27b0fee846 Merge pull request #84 from thaJeztah/18.09_backport_ovr2_index
[18.09 backport] overlay2: use index=off if possible (fix EBUSY on mount)
2018-11-21 15:46:01 -06:00
Sebastiaan van Stijn
4cc45d91eb Ignore default address-pools on API < 1.39
These options were added in API 1.39, so should be ignored
when using an older version of the API.
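
A sketch of the version gate, assuming the comparison helpers from `github.com/docker/docker/api/types/versions` (illustrative, not the exact daemon code):

```go
package daemon // illustrative package name

import "github.com/docker/docker/api/types/versions"

// filterAddressPools drops the default-address-pools field for clients
// that negotiated an API version older than 1.39, where it did not exist.
func filterAddressPools(apiVersion string, pools []string) []string {
	if versions.LessThan(apiVersion, "1.39") {
		return nil
	}
	return pools
}
```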

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 7632ccbc66)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-11-21 22:15:18 +01:00
Aleksa Sarai
67c602c3fe apparmor: allow receiving of signals from 'docker kill'
In newer kernels, AppArmor will reject attempts to send signals to a
container because the signal originated from outside of that AppArmor
profile. Correct this by allowing all unconfined signals to be received.

Signed-off-by: Goldwyn Rodrigues <rgoldwyn@suse.com>
Signed-off-by: Aleksa Sarai <asarai@suse.de>
(cherry picked from commit 4822fb1e24)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-11-21 22:13:56 +01:00
Sebastiaan van Stijn
db7f375d6a Update containerd to v1.2.1-rc.0
The previous update used a commit from master. Now that
all the fixes are backported to the containerd 1.2 release
branch, we can switch back to that branch.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 2fb5de68a9)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-11-21 21:59:41 +01:00
Michael Crosby
7d6ec38402 wip: bump containerd and runc version
Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
(cherry picked from commit d13528c635)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-11-21 21:59:33 +01:00
Sebastiaan van Stijn
64a05e3d16 Bump containerd binary to fix shim hang
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 7af4c904b3)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-11-21 21:59:27 +01:00
Sebastiaan van Stijn
262abed3d2 Update runc to 58592df56734acf62e574865fe40b9e53e967910
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit fc0038a3ed)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-11-21 21:59:20 +01:00
Sebastiaan van Stijn
e137337fe6 Update containerd to v1.2.0
release notes: https://github.com/containerd/containerd/releases/tag/v1.2.0

- New V2 Runtime with a stable gRPC interface for managing containers through
  external shims.
- Updated CRI Plugin, validated against Kubernetes v1.11 and v1.12, but it is
  also compatible with Kubernetes v1.10.
- Support for Kubernetes Runtime Class, introduced in Kubernetes 1.12
- A new proxy plugin configuration has been added to allow external
  snapshotters to be connected to containerd using gRPC.
- A new Install method on the containerd client allows users to publish host
  level binaries using standard container build tooling and container
  distribution tooling to download containerd related binaries on their systems.
- Add support for cleaning up leases and content ingests to garbage collections.
- Improved multi-arch image support using more precise matching and ranking
- Added a runtime `options` field for shim v2 runtime. Use the `options` field to
  config runtime specific options, e.g. `NoPivotRoot` and `SystemdCgroup` for
  runtime type `io.containerd.runc.v1`.
- Some minor API additions
  - Add `ListStream` method to containers API. This allows listing a larger
    number of containers without hitting message size limits.
  - Add `Sync` flag to `Delete` in leases API. Setting this option will ensure
    a garbage collection completes before the removal call is returned. This can
    be used to guarantee unreferenced objects are removed from disk after a lease.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 8674930c84)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-11-21 21:59:13 +01:00
Sebastiaan van Stijn
c9c87d76d6 Add a note about updating runc / runc vendoring
Containerd should be "leading" when specifying which version of runc to use.
From the RUNC.MD document in the containerd repository
(https://github.com/containerd/containerd/blob/b1e202c32724e82779544365528a1a082b335553/RUNC.md);

> We depend on a specific runc version when dealing with advanced features. You
> should have a specific runc build for development. The current supported runc
> commit is described in vendor.conf. Please refer to the line that starts with
> github.com/opencontainers/runc.

This patch adds a note to vendor.conf and runc.installer to describe the order
in which runc should be updated.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit da3810d235)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-11-21 21:59:06 +01:00
Sebastiaan van Stijn
a4decd0c4c Update containerd to v1.1.4
Fixes a potential content store bug, backported from 1.2

- v1.1.3 release notes: https://github.com/containerd/containerd/releases/tag/v1.1.3
- v1.1.4 release notes: https://github.com/containerd/containerd/releases/tag/v1.1.4

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit b3c3c7a5a3)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-11-21 21:58:58 +01:00
Sebastiaan van Stijn
25bec4665b Add CONFIG_IP_VS_PROTO_TCP, CONFIG_IP_VS_PROTO_UDP, IP_NF_TARGET_REDIRECT to check-config.sh
On kernels without these options set, publishing ports for swarm
services does not work, making the published port not accessible
("connection refused")

Thanks to Wenbo Wang for reporting, and Tianon for finding this.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 44e1c6ce81)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-11-20 18:08:44 +01:00
Sebastiaan van Stijn
56cc26f927 Add missing default address pool fields to swagger
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 2e8c913dbd)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-11-20 15:50:46 +01:00
Andrew Hsu
4980e48e4b Merge pull request #109 from thaJeztah/18.09_backport_cleanup_volume_tests
[18.09 backport] Cleanup volume integration tests
2018-11-14 15:41:13 -08:00
Andrew Hsu
299385de7f Merge pull request #103 from thaJeztah/18.09_backport_fix_double_scheme
[18.09 backport] Fix double "unix://" scheme in TestInfoAPIWarnings
2018-11-14 15:39:54 -08:00
Kir Kolyshkin
8486ea11ae runc.installer: add nokmem build tag for rhel7 kernel
In case we're running on a RHEL7 kernel, which has a non-working,
broken kernel-memory controller, add the 'nokmem' build tag
so that runc never enables kmem accounting.

For more info, see the following runc commit:
https://github.com/opencontainers/runc/commit/6a2c1559684

This behavior can be overridden by having the `RUNC_NOKMEM` environment
variable set (e.g. to an empty value to disable setting nokmem).

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit 8972aa9350)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-11-12 15:51:52 +01:00
Kir Kolyshkin
5b8cee93b5 Bump runc
Changes: a00bf01908...9f1e94488e

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit 335736fb01)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-11-12 15:51:44 +01:00
Akihiro Suda
49556e0470 client: use io.LimitedReader for reading HTTP error
client.checkResponseErr() was hanging and consuming infinite memory
when the serverResp.Body io.Reader returned an infinite stream.

This commit prohibits reading more than 1MiB.
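
A minimal sketch of the cap, assuming the 1 MiB limit from the commit:

```go
package main

import (
	"crypto/rand"
	"fmt"
	"io"
	"io/ioutil"
)

func main() {
	endless := rand.Reader // stands in for a never-ending response body
	// Reads stop after 1 MiB, so a malicious or broken server can no
	// longer make the client hang and consume unbounded memory.
	body, err := ioutil.ReadAll(io.LimitReader(endless, 1<<20))
	fmt.Println(len(body), err) // 1048576 <nil>
}
```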

Signed-off-by: Akihiro Suda <suda.akihiro@lab.ntt.co.jp>
(cherry picked from commit 1db4be0c32)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-11-12 11:44:37 +01:00
John Howard
02fe71843e Windows: DetachVhd attempt in cleanup
Signed-off-by: John Howard <jhoward@microsoft.com>

This is a fix for a few related scenarios where it's impossible to remove layers or containers
until the host is rebooted. Generally (or at least easiest to repro) through a forced daemon kill
while a container is running.

Possibly worse, following a host reboot the scratch layer could be leaked and
left on disk under the dataroot\windowsfilter directory after the container is removed.

One such example of a failure:

1. run a long running container with the --rm flag
docker run --rm -d --name test microsoft/windowsservercore powershell sleep 30
2. Force kill the daemon not allowing it to cleanup. Simulates a crash or a host power-cycle.
3. (re-)Start daemon
4. docker ps -a
PS C:\control> docker ps -a
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS                PORTS               NAMES
7aff773d782b        malloc              "powershell start-sl…"   11 seconds ago      Removal In Progress                       malloc
5. Try to remove
PS C:\control> docker rm 7aff
Error response from daemon: container 7aff773d782bbf35d95095369ffcb170b7b8f0e6f8f65d5aff42abf61234855d: driver "windowsfilter" failed to remove root filesystem: rename C:\control\windowsfilter\7aff773d782bbf35d95095369ffcb170b7b8f0e6f8f65d5aff42abf61234855d C:\control\windowsfilter\7aff773d782bbf35d95095369ffcb170b7b8f0e6f8f65d5aff42abf61234855d-removing: Access is denied.
PS C:\control>

Step 5 fails.

(cherry picked from commit efdad53744)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-11-09 23:31:49 +01:00
Samuel Karp
757650e8dc awslogs: account for UTF-8 normalization in limits
The CloudWatch Logs API defines its limits in terms of bytes, but its
inputs in terms of UTF-8 encoded strings.  Byte-sequences which are not
valid UTF-8 encodings are normalized to the Unicode replacement
character U+FFFD, which is a 3-byte sequence in UTF-8.  This replacement
can cause the input to grow, exceeding the API limit and causing failed
API calls.

This commit adds logic for counting the effective byte length after
normalization and splitting input without splitting valid UTF-8
byte-sequences into two invalid byte-sequences.
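
A sketch of the counting logic (illustrative, not the driver's actual code): every invalid byte is counted as the 3-byte U+FFFD it will become after normalization.

```go
package main

import (
	"fmt"
	"unicode/utf8"
)

// effectiveBytes returns the UTF-8 length of b after normalization,
// counting each invalid byte as the 3-byte replacement character U+FFFD.
func effectiveBytes(b []byte) int {
	n := 0
	for i := 0; i < len(b); {
		r, size := utf8.DecodeRune(b[i:])
		if r == utf8.RuneError && size == 1 {
			n += 3 // invalid byte, normalizes to U+FFFD
		} else {
			n += size
		}
		i += size
	}
	return n
}

func main() {
	fmt.Println(effectiveBytes([]byte("abc")))      // 3
	fmt.Println(effectiveBytes([]byte{0xff, 0xfe})) // 6: input grew
}
```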

Fixes https://github.com/moby/moby/issues/37747

Signed-off-by: Samuel Karp <skarp@amazon.com>
(cherry picked from commit 1e8ef38627)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-11-08 15:26:01 +01:00
Sebastiaan van Stijn
9e06a42123 API: properly handle invalid JSON to return a 400 status
The API did not treat invalid JSON payloads as a 400 error, and as a
result returned a 500 error;

Before this change, an invalid JSON body would return a 500 error;

```bash
curl -v \
  --unix-socket /var/run/docker.sock \
  -X POST \
  "http://localhost/v1.30/networks/create" \
  -H "Content-Type: application/json" \
  -d '{invalid json'
```

```
> POST /v1.30/networks/create HTTP/1.1
> Host: localhost
> User-Agent: curl/7.52.1
> Accept: */*
> Content-Type: application/json
> Content-Length: 13
>
* upload completely sent off: 13 out of 13 bytes
< HTTP/1.1 500 Internal Server Error
< Api-Version: 1.40
< Content-Type: application/json
< Docker-Experimental: false
< Ostype: linux
< Server: Docker/dev (linux)
< Date: Mon, 05 Nov 2018 11:55:20 GMT
< Content-Length: 79
<
{"message":"invalid character 'i' looking for beginning of object key string"}
```

Empty request:

```bash
curl -v \
  --unix-socket /var/run/docker.sock \
  -X POST \
  "http://localhost/v1.30/networks/create" \
  -H "Content-Type: application/json"
```

```
> POST /v1.30/networks/create HTTP/1.1
> Host: localhost
> User-Agent: curl/7.54.0
> Accept: */*
> Content-Type: application/json
>
< HTTP/1.1 500 Internal Server Error
< Api-Version: 1.38
< Content-Length: 18
< Content-Type: application/json
< Date: Mon, 05 Nov 2018 12:00:18 GMT
< Docker-Experimental: true
< Ostype: linux
< Server: Docker/18.06.1-ce (linux)
<
{"message":"EOF"}
```

After this change, a 400 is returned;

```bash
curl -v \
  --unix-socket /var/run/docker.sock \
  -X POST \
  "http://localhost/v1.30/networks/create" \
  -H "Content-Type: application/json" \
  -d '{invalid json'
```

```
> POST /v1.30/networks/create HTTP/1.1
> Host: localhost
> User-Agent: curl/7.52.1
> Accept: */*
> Content-Type: application/json
> Content-Length: 13
>
* upload completely sent off: 13 out of 13 bytes
< HTTP/1.1 400 Bad Request
< Api-Version: 1.40
< Content-Type: application/json
< Docker-Experimental: false
< Ostype: linux
< Server: Docker/dev (linux)
< Date: Mon, 05 Nov 2018 11:57:15 GMT
< Content-Length: 79
<
{"message":"invalid character 'i' looking for beginning of object key string"}
```

Empty request:

```bash
curl -v \
  --unix-socket /var/run/docker.sock \
  -X POST \
  "http://localhost/v1.30/networks/create" \
  -H "Content-Type: application/json"
```

```
> POST /v1.30/networks/create HTTP/1.1
> Host: localhost
> User-Agent: curl/7.52.1
> Accept: */*
> Content-Type: application/json
>
< HTTP/1.1 400 Bad Request
< Api-Version: 1.40
< Content-Type: application/json
< Docker-Experimental: false
< Ostype: linux
< Server: Docker/dev (linux)
< Date: Mon, 05 Nov 2018 11:59:22 GMT
< Content-Length: 49
<
{"message":"got EOF while reading request body"}
```

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit c7b488fbc8)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-11-08 14:01:27 +01:00
Sebastiaan van Stijn
e8eb3ca4ee Enable volume tests on Windows
These tests don't seem to have anything Linux-specific,
so enable them on Windows

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit b334198e65)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-11-08 14:00:20 +01:00
Sebastiaan van Stijn
673f04f0b1 Integration test: use filepath.Join() to make path cross-platform
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 05e18429cf)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-11-08 14:00:14 +01:00
Sebastiaan van Stijn
65bf95f3df Some improvements to TestVolumesInspect
Some improvements in this test;

- use the volume-information that's returned by VolumeCreate as "expected"
- don't use an explicit name for the volume, as it was only used to reference
  the volume for inspection
- improve the test-output on failure, so that "expected" and "actual" values
  are printed

Without this patch applied;

    === RUN   TestVolumesInspect
    --- FAIL: TestVolumesInspect (0.02s)
     	volume_test.go:108: assertion failed: false (bool) != true (true bool): Time Volume is CreatedAt not equal to current time
    FAIL

With this patch applied;

    === RUN   TestVolumesInspect
    --- FAIL: TestVolumesInspect (0.02s)
        volume_test.go:95: assertion failed: expression is false: createdAt.Truncate(time.Minute).Equal(now.Truncate(time.Minute)): CreatedAt (2018-11-01 16:15:20 +0000 UTC) not equal to creation time (2018-11-01 16:15:20.2421166 +0000 UTC m=+13.733512701)
    FAIL

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 8e8cac8263)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-11-08 14:00:08 +01:00
Deep Debroy
9fc9c3099d Renamed windowsRS1.ps1 to windows.ps1
Signed-off-by: Deep Debroy <ddebroy@docker.com>
(cherry picked from commit 7d1c1a411b)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-11-08 13:56:04 +01:00
Salahuddin Khan
37cb9e7300 Enabling Windows integration tests
Signed-off-by: Salahuddin Khan <salah@docker.com>
(cherry picked from commit 4c8b1fd5a2)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-11-08 13:55:59 +01:00
Vincent Demeester
59be98043a Windows: Start of enabling tests under integration/
- Add windows CI entrypoint script.

Signed-off-by: John Howard <jhoward@microsoft.com>
Signed-off-by: Vincent Demeester <vincent@sbr.pm>
Signed-off-by: Daniel Nephin <dnephin@docker.com>
Signed-off-by: Vincent Demeester <vincent@sbr.pm>
(cherry picked from commit d3cc071bb9)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-11-08 13:55:48 +01:00
Andrew Hsu
f5749085e9 Merge pull request #74 from thaJeztah/18.09_backport_no_more_version_mismatch
[18.09 backport] remove version-checks for containerd and runc
2018-11-06 11:31:40 -08:00
Andrew Hsu
6236f7b8a4 Merge pull request #79 from thaJeztah/18.09_backport_bugfix_issue_37870
[18.09 backport] bugfix: wait for stdin creation before CloseIO
2018-11-06 11:27:58 -08:00
Andrew Hsu
9512677feb Merge pull request #108 from tonistiigi/copy-0.1.9
[18.09] builder: update copy to 0.1.9
2018-11-06 11:26:09 -08:00
Tibor Vass
5bb36e25ba Merge pull request #96 from thaJeztah/18.09_backport_fix-duplicate-release
[18.09 backport] builder: fix duplicate mount release
2018-11-06 11:22:47 -08:00
Tonis Tiigi
45654ed012 builder: update copy to 0.1.9
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2018-11-06 10:52:34 -08:00
Andrew Hsu
334099505f Merge pull request #105 from tiborvass/18.09-bk-fix-filters
[18.09] builder: fix bugs when pruning buildkit cache with filters
2018-11-06 09:23:25 -08:00
Sebastiaan van Stijn
e1783a72d1 [18.09 backport] update libnetwork to fix iptables compatibility on debian
Fixes a compatibility issue on recent debian versions, where iptables now uses
nft by default.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-11-06 12:39:04 +01:00
Sebastiaan van Stijn
c27094289a update containerd client and dependencies to v1.2.0
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit dd7799afd4)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-11-06 11:03:22 +01:00
Akihiro Suda
0afe0309bd bump up runc
Changes: 69663f0bd4...a00bf01908

Signed-off-by: Akihiro Suda <suda.akihiro@lab.ntt.co.jp>
(cherry picked from commit 275044bbc3)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-11-06 11:03:14 +01:00
John Howard
41f3cea42f Vendor Microsoft/hcsshim @ v0.7.9
Signed-off-by: John Howard <jhoward@microsoft.com>
(cherry picked from commit d03ab10662)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-11-06 11:03:06 +01:00
John Howard
9cf6464b63 LCOW: ApplyDiff() use tar2ext4, not SVM
Signed-off-by: John Howard <jhoward@microsoft.com>

This removes the need for an SVM in the LCOW driver to ApplyDiff.

This change relates to a fix for https://github.com/moby/moby/issues/36353

However, it found another issue, tracked by https://github.com/moby/moby/issues/37955

(cherry picked from commit bde9996065)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-11-06 11:03:00 +01:00
Tibor Vass
52a3c39506 builder: fix bugs when pruning buildkit cache with filters
Only the filters the user specified should be added as cache filters to buildkit.
Make an AND operation of the provided filters.
ID filter now does prefix-matching.
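
A sketch of the intended semantics (illustrative only): all provided filters must match (AND), and the ID filter matches by prefix.

```go
package main

import (
	"fmt"
	"strings"
)

// matches reports whether a cache record satisfies every provided filter;
// the "id" filter matches by prefix, the rest by equality.
func matches(record, filters map[string]string) bool {
	for key, want := range filters {
		if key == "id" {
			if !strings.HasPrefix(record["id"], want) {
				return false
			}
			continue
		}
		if record[key] != want {
			return false
		}
	}
	return true
}

func main() {
	rec := map[string]string{"id": "sha256:abcdef", "type": "regular"}
	fmt.Println(matches(rec, map[string]string{"id": "sha256:ab"}))  // true
	fmt.Println(matches(rec, map[string]string{"type": "internal"})) // false
}
```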

Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit b6137bebb83e886aef906b7ff277778b69616991)
Signed-off-by: Tibor Vass <tibor@docker.com>
2018-11-05 22:59:24 +00:00
Andrew Hsu
4fc9786f78 Merge pull request #104 from anshulpundir/1809
[18.09] Vendor swarmkit to 6186e40
2018-10-31 19:01:51 -07:00
Anshul Pundir
46dfcd83bf [18.09] Vendor swarmkit to 6186e40fb04a7681e25a9101dbc7418c37ef0c8b
Signed-off-by: Anshul Pundir <anshul.pundir@docker.com>
2018-10-31 16:04:51 -07:00
Sebastiaan van Stijn
c40a7d393b Fix double "unix://" scheme in TestInfoAPIWarnings
`d.Sock()` already returns the socket-path including the
`unix://` scheme.

Also removed `--iptables=false`, as it didn't really seem
necessary for this test.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 1434204647)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-10-31 14:25:40 +01:00
Andrew Hsu
fb51c760c4 Merge pull request #99 from andrewhsu/grpc
[18.09] cluster: set bigger grpc limit for array requests
2018-10-30 18:49:11 -07:00
Andrew Hsu
66bfae52bc Merge pull request #100 from thaJeztah/18.09_backport_log_error_spelling
[18.09 backport] Fix incorrect spelling in error message
2018-10-30 18:47:28 -07:00
Tonis Tiigi
6ca0546f25 cluster: set bigger grpc limit for array requests
A 4MB client-side limit was introduced when vendoring go-grpc#1165 (v1.4.0),
making these requests likely to produce errors
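
For reference, a sketch of how a grpc-go client raises that limit (assumed to mirror the change; the actual code lives in the vendored cluster/swarmkit dialer):

```go
package cluster // illustrative package name

import "google.golang.org/grpc"

// dial opens a connection whose default receive limit is far above the
// 4 MiB grpc-go default, so large array responses no longer error out.
func dial(addr string) (*grpc.ClientConn, error) {
	return grpc.Dial(addr,
		grpc.WithInsecure(),
		grpc.WithDefaultCallOptions(grpc.MaxCallRecvMsgSize(128<<20)),
	)
}
```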

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit 489b8eda66)
Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
2018-10-30 23:04:27 +00:00
Andrew Hsu
2822d49c10 Merge pull request #101 from thaJeztah/18.09_backport_document_service_version
[18.09 backport] Add more API doc details on service update version.
2018-10-30 13:14:04 -07:00
Brian Goff
64b0c76151 Add more API doc details on service update version.
Hopefully this removes some confusion as to what this version number
should be.

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit 5bdfa19b86)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-10-30 14:25:19 +01:00
Phil Estes
5591f0b1ee Fix incorrect spelling in error message
Signed-off-by: Phil Estes <estesp@linux.vnet.ibm.com>
(cherry picked from commit f962bd06ed)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-10-30 11:29:02 +01:00
Eli Uriegas
4594e70063 Merge pull request #38 from thaJeztah/18.09_backport_bump_golang_1.10.4
[18.09 backport] bump Go to 1.10.4
2018-10-26 10:03:38 -07:00
Sebastiaan van Stijn
7236817725 Bump Go to 1.10.4
Includes fixes to the go command, linker, and the net/http, mime/multipart,
ld/macho, bytes, and strings packages. See the Go 1.10.4 milestone on the
issue tracker for details:

https://github.com/golang/go/issues?q=milestone%3AGo1.10.4

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit fe1fb7417c)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-10-26 12:14:01 +02:00
Andrew Hsu
78746ca9e8 Merge pull request #95 from thaJeztah/add_note_about_branch
[18.09] Add note that we use the bump_v18.09 branch for SwarmKit
2018-10-24 16:57:02 -07:00
Tonis Tiigi
5853cd510c builder: fix duplicate mount release
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit 2732fe527f)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-10-24 20:11:51 +02:00
Sebastiaan van Stijn
6ee7d86a12 Add note that we use the bump_v18.09 branch for SwarmKit
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-10-23 13:37:15 +02:00
Wei Fu
ae6284a623 testing: add case for exec closeStdin
add a regression case for issue #37870

Signed-off-by: Wei Fu <fuweid89@gmail.com>
(cherry picked from commit 8e25f4ff6d)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-10-23 13:20:45 +02:00
Andrew Hsu
8d624c31dd Merge pull request #94 from dperny/18.09-bump-swarmkit
[18.09 Backport] Bump swarmkit to c82e409d
2018-10-22 16:47:21 -07:00
Drew Erny
1222a7081a Bump swarmkit
Signed-off-by: Drew Erny <drew.erny@docker.com>
2018-10-22 15:10:20 -05:00
Andrew Hsu
6f1145e740 Merge pull request #64 from thaJeztah/18.09_backport_syslog
[18.09 backport] move the syslog syscall to be gated by CAP_SYS_ADMIN or CAP_SYSLOG
2018-10-22 08:24:03 -07:00
Madhu Venugopal
ef87a664ef Merge pull request #93 from ctelfer/18.09-backport-dsr
[18.09] Bump libnetwork to 6da50d19 for DSR load balancing changes
2018-10-19 09:37:11 -07:00
Tibor Vass
3dc9802a83 Merge pull request #88 from tonistiigi/fix-private-pull-1809
[18.09 backport] builder: fix private pulls on buildkit
2018-10-18 10:57:46 -07:00
Chris Telfer
fd1fe0b702 Bump libnetwork to 6da50d19 for DSR changes
Bump libnetwork to 6da50d1978302f04c3e2089e29112ea24812f05b which
is the current tip of libnetwork's bump_18.09 branch to get the DSR load
balancing mode option changes for the 18.09 branch of Docker CE.

Signed-off-by: Chris Telfer <ctelfer@docker.com>
2018-10-18 10:52:57 -04:00
Tonis Tiigi
fdaf08a57b builder: fix private pulls on buildkit
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit c693d45acf)
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2018-10-17 17:54:13 -07:00
Andrew Hsu
4d0b8cc2d7 Merge pull request #86 from kolyshkin/18.09-backport-btrfs-prop
[18.09] backport Fix mount propagation for btrfs
2018-10-12 18:28:24 -07:00
Andrew Hsu
7c63f178e7 Merge pull request #82 from tiborvass/18.09-buildkit-cherry-picks
[18.09 backport] builder: treat unset keep-storage as 0
2018-10-12 11:01:20 -07:00
Andrew Hsu
b811212ccd Merge pull request #83 from thaJeztah/18.09_backport_bump_buildkit
[18.09 backport] bump buildkit to c7bb575343df0cbfeab8b5b28149630b8153fcc6
2018-10-12 10:43:01 -07:00
Kir Kolyshkin
fa8ac94616 btrfs: ensure graphdriver home is bind mount
For some reason, shared mount propagation between the host
and a container does not work for btrfs, unless the container
root directory (i.e. the graphdriver home) is a bind mount.

The above issue was reproduced on SLES 12sp3 + btrfs using
the following script:

	#!/bin/bash
	set -eux -o pipefail

	# DIR should not be under a subvolume
	DIR=${DIR:-/lib}
	MNT=$DIR/my-mnt
	FILE=$MNT/file

	ID=$(docker run -d --privileged -v $DIR:$DIR:rshared ubuntu sleep 24h)
	docker exec $ID mkdir -p $MNT
	docker exec $ID mount -t tmpfs tmpfs $MNT
	docker exec $ID touch $FILE
	ls -l $FILE
	umount $MNT
	docker rm -f $ID

which fails this way:

	+ ls -l /lib/my-mnt/file
	ls: cannot access '/lib/my-mnt/file': No such file or directory

meaning the mount performed inside a privileged container is not
propagated back to the host (even though all the mounts have "shared"
propagation mode).

The remedy to the above is to make graphdriver home a bind mount.

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit 16d822bba8)
2018-10-12 09:29:38 -07:00
Kir Kolyshkin
2199ada691 pkg/mount: add MakeMount()
This function ensures the argument is a mount point
(i.e. if it's not, it bind mounts it to itself).
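
A sketch of the idea (assumed semantics; Linux-only, and the real code decides "already mounted" by parsing /proc/self/mountinfo):

```go
package mount // illustrative package name

import "golang.org/x/sys/unix"

// MakeMount bind mounts path onto itself when it is not already a mount
// point, turning it into one so that mount propagation applies to it.
func MakeMount(path string, alreadyMounted bool) error {
	if alreadyMounted {
		return nil
	}
	// A self bind mount is the canonical trick for making an arbitrary
	// directory a mount point (requires CAP_SYS_ADMIN to actually run).
	return unix.Mount(path, path, "", unix.MS_BIND, "")
}
```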

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit 8abadb36fa)
2018-10-12 09:29:38 -07:00
Kir Kolyshkin
fd7611ff1f pkg/mount: simplify ensureMountedAs
1. There is no need to specify rw argument -- bind mounts are
   read-write by default.

2. There is no point in parsing /proc/self/mountinfo after performing
   a mount, especially if we don't check whether the fs is mounted or
   not -- the only outcome from it could be an error from our mountinfo
   parser, which makes no sense in this context.

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit f01297d1ae)
2018-10-12 09:29:38 -07:00
Sebastiaan van Stijn
c20e8dffbb Deprecate legacy overlay storage driver, and add warning
The `overlay` storage driver is deprecated in favor of the `overlay2` storage
driver, which has all the benefits of `overlay`, without its limitations (excessive
inode consumption). The legacy `overlay` storage driver will be removed in a future
release. Users of the `overlay` storage driver should migrate to the `overlay2`
storage driver.

The legacy `overlay` storage driver allowed using overlayFS-backed filesystems
on pre 4.x kernels. Now that all supported distributions are able to run `overlay2`
(as they are either on kernel 4.x, or have support for multiple lowerdirs
backported), there is no reason to keep maintaining the `overlay` storage driver.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 31be4e0ba1)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-10-12 02:26:17 +02:00
Sebastiaan van Stijn
734e7a8e55 Deprecate "devicemapper" storage driver, and add warning
The `devicemapper` storage driver is deprecated in favor of `overlay2`, and will
be removed in a future release. Users of the `devicemapper` storage driver are
recommended to migrate to a different storage driver, such as `overlay2`, which
is now the default storage driver.

The `devicemapper` storage driver facilitates running Docker on older (3.x) kernels
that have no support for other storage drivers (such as overlay2, or AUFS).

Now that support for `overlay2` is added to all supported distros (as they are
either on kernel 4.x, or have support for multiple lowerdirs backported), there
is no reason to continue maintenance of the `devicemapper` storage driver.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 06fcabbaa0)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-10-12 02:25:39 +02:00
Tibor Vass
dbfc648a94 builder: treat unset keep-storage as 0
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit d6ac2b0db0)
Signed-off-by: Tibor Vass <tibor@docker.com>
2018-10-11 20:35:43 +00:00
Tibor Vass
8e67dfab97 Merge pull request #75 from thaJeztah/18.09_backport_bump_containerd_client_1.2.0_rc.1
[18.09] backport update containerd client and dependencies to v1.2.0-rc.1
2018-10-11 13:27:48 -07:00
Tibor Vass
b38d454861 Merge pull request #73 from thaJeztah/18.09_backport_addr_pool
[18.09] backport default-addr-pool-mask-length param max value check
2018-10-11 13:27:22 -07:00
Tibor Vass
4b8336f7cf Merge pull request #70 from thaJeztah/18.09_backport_upstream_dos_fix
[18.09] backport fix denial of service with large numbers in cpuset-cpus and cpuset-mems
2018-10-11 13:25:55 -07:00
Tibor Vass
2697d2b687 Merge pull request #72 from thaJeztah/18.09_backport_esc-879
[18.09] backport masking credentials from proxy URL
2018-10-11 13:25:30 -07:00
Kir Kolyshkin
690e097fed overlay2: use index=off if possible
As pointed out in https://github.com/moby/moby/issues/37970,
the Docker overlay driver can't work with the index=on feature of
the Linux kernel "overlay" filesystem. If the global
default is set to "yes", Docker will fail with EBUSY when
trying to mount, like this:

> error creating overlay mount to ...../merged: device or resource busy

and the kernel log should contain something like:

> overlayfs: upperdir is in-use by another mount, mount with
> '-o index=off' to override exclusive upperdir protection.

A workaround is to set index=off in overlay kernel module
parameters, or even recompile the kernel with
CONFIG_OVERLAY_FS_INDEX=n in .config. Surely this is not
always practical or even possible.

The solution, as pointed out by Amir Goldstein (as well as by
the above kernel message :) is to use the 'index=off' option
when mounting.

NOTE since older (< 4.13rc1) kernels do not support the "index="
overlayfs parameter, we try to figure out whether the option
is supported. If this can't be determined, we
assume it is not.

NOTE the default can be changed anytime (by writing to
/sys/module/overlay/parameters/index) so we need to always
use index=off.

[v2: move the detection code to Init()]
[v3: don't set index=off if stat() failed]
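
A hedged sketch of that detection (not the exact moby code; the parameter path is the one named above):

```go
import "os"

// supportsIndexOff reports whether the running kernel understands the
// overlayfs "index=" mount option, judged by the presence of the
// module parameter file.
func supportsIndexOff() bool {
	_, err := os.Stat("/sys/module/overlay/parameters/index")
	return err == nil // if stat fails, assume the option is unsupported
}

// later, when building overlay mount options:
// if supportsIndexOff() { opts = "index=off," + opts }
```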

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit 8422d85087)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-10-11 22:09:38 +02:00
Kir Kolyshkin
dc0a4db7c9 overlay2: use global logger instance
This simplifies the code a lot.

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit a55d32546a)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-10-11 22:09:30 +02:00
Sebastiaan van Stijn
f58f842143 bump buildkit to c7bb575343df0cbfeab8b5b28149630b8153fcc6
Relevant changes:

- buildkit#667 gateway: check for `ReadDir` and `StatFile` caps on client side
- buildkit#668 dockerfile: fix ssh required option
- buildkit#669 dockerfile: update default copy image
- buildkit#670 solver: specify SSH key ID in error message when required key was not forwarded
- buildkit#673 solver: fix possible nil dereference
- buildkit#672 fix setting uncompressed label on content
- buildkit#680 dockerfile: fix empty dest directory panic

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 9cfce30214)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-10-11 21:55:49 +02:00
John Howard
7184074c08 Windows: Allow process isolation
Signed-off-by: John Howard <jhoward@microsoft.com>
(cherry picked from commit c907c2486c)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-10-11 16:04:45 +02:00
Wei Fu
6679a5faeb bugfix: wait for stdin creation before CloseIO
The stdin fifo of an exec process is created on the containerd side after
the client calls Start. If the client calls CloseIO before the Start call,
the stdin of the exec process is still open, waiting to be closed.

To handle this case, the client closes the stdinCloseSync channel only
after Start, as sketched below.
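
A rough sketch of that ordering using the containerd client API (`stdinCloseSync` follows the commit; the surrounding code is illustrative):

```go
stdinCloseSync := make(chan struct{})

go func() {
	// containerd creates the exec process's stdin fifo only on Start,
	// so CloseIO must not run before Start has been called.
	<-stdinCloseSync
	p.CloseIO(ctx, containerd.WithStdinCloser)
}()

if err := p.Start(ctx); err != nil {
	return err
}
close(stdinCloseSync) // the stdin fifo exists now; CloseIO may proceed
```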

Signed-off-by: Wei Fu <fuweid89@gmail.com>
(cherry picked from commit c7890f25a9)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-10-10 20:43:14 +02:00
Akihiro Suda
90c72824c3 bump up buildkit
Signed-off-by: Akihiro Suda <suda.akihiro@lab.ntt.co.jp>
(cherry picked from commit 837b9c6214)
Signed-off-by: Akihiro Suda <suda.akihiro@lab.ntt.co.jp>
2018-10-11 03:01:18 +09:00
Eli Uriegas
ad08dc12e0 Merge pull request #76 from seemethere/dockerfile_copy_1809
Switch copy image to a docker org based one
2018-10-08 14:10:53 -07:00
Eli Uriegas
7b54720ccb Switch copy image to a docker org based one
Signed-off-by: Eli Uriegas <eli.uriegas@docker.com>
(cherry picked from commit 5cfd110c30)
Signed-off-by: Eli Uriegas <eli.uriegas@docker.com>
2018-10-05 18:01:10 +00:00
Justin Cormack
0922d32bce Fix denial of service with large numbers in cpuset-cpus and cpuset-mems
Using a value such as `--cpuset-mems=1-9223372036854775807` would cause
`dockerd` to run out of memory allocating a map of the values in the
validation code. Set limits to the normal limit of the number of CPUs,
and improve the error handling.

Reported by Huawei PSIRT.
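
For illustration, the kind of bound that stops this (a sketch; the exact cap and error text in the fix differ):

```go
import (
	"fmt"
	"strconv"
	"strings"
)

const maxCPUs = 8192 // illustrative cap

// parseCpusetRange expands "lo" or "lo-hi" into a slice of CPU numbers,
// rejecting out-of-range input *before* allocating anything per-CPU.
func parseCpusetRange(r string) ([]int, error) {
	bounds := strings.SplitN(r, "-", 2)
	lo, err := strconv.Atoi(bounds[0])
	if err != nil {
		return nil, err
	}
	hi := lo
	if len(bounds) == 2 {
		if hi, err = strconv.Atoi(bounds[1]); err != nil {
			return nil, err
		}
	}
	if lo < 0 || hi < lo || hi >= maxCPUs {
		// this bound is what prevents the out-of-memory DoS
		return nil, fmt.Errorf("invalid cpuset range %q", r)
	}
	out := make([]int, 0, hi-lo+1)
	for i := lo; i <= hi; i++ {
		out = append(out, i)
	}
	return out, nil
}
```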

Signed-off-by: Justin Cormack <justin.cormack@docker.com>
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit f8e876d761)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-10-05 15:13:43 +02:00
Sebastiaan van Stijn
148d9f0e58 Update containerd client and dependencies to v1.2.0-rc.1
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit dd622c81a4)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-10-05 14:53:33 +02:00
Sebastiaan van Stijn
5070e418b8 Update containerd dependencies
This updates the containerd dependencies to match
the versions used by the vendored containerd version.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 31a9c9e791)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-10-05 14:38:34 +02:00
Sebastiaan van Stijn
054c3c2931 Remove version-checks for containerd and runc
With containerd reaching 1.0, the runtime now
has a stable API, so there's no need to check
whether the installed version matches the expected version.

Current versions of Docker now also package containerd
and runc separately, and can be _updated_ separately.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit c65f0bd13c)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-10-05 12:35:59 +02:00
selansen
9406f3622d Fix for default-addr-pool-mask-length param max value check
We used to allow a max value of 32 for the -default-addr-pool-mask-length
param, but with masks that long there won't be enough addresses on the
overlay network. Cap it at 29 instead, so that each network gets at least
2^(32-29) = 8 addresses.

Signed-off-by: selansen <elango.siva@docker.com>
(cherry picked from commit d25c5df80e)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-10-04 21:59:25 +02:00
selansen
9816bfcaf5 Global Default AddressPool - Update
Addressing a few review comments as part of code refactoring.
Also moved validation logic from the CLI to Moby.

Signed-off-by: selansen <elango.siva@docker.com>
(cherry picked from commit 148ff00a0a)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-10-04 21:59:09 +02:00
Sebastiaan van Stijn
52d6ad2a68 Merge pull request #66 from thaJeztah/18.09_backport_fix-dm-errmsg
[18.09] backport: gd/dm: fix error message
2018-10-04 21:28:22 +02:00
Dani Louca
58e5151270 Masking credentials from proxy URL
Signed-off-by: Dani Louca <dani.louca@docker.com>
(cherry picked from commit 78fd978454)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-10-04 21:20:54 +02:00
Sebastiaan van Stijn
6e5ed2ccce Merge pull request #67 from thaJeztah/18.09_backport_windows-network-plugin-miss-fix
[18.09] Fix long startup on windows, with non-hns governed Hyper-V networks
2018-10-03 23:27:28 +02:00
Simon Ferquel
54bd14a3fe Fix long startup on windows, with non-hns governed Hyper-V networks
Similar to a related issue where previously, private Hyper-V networks
would each add 15 secs to the daemon startup, non-hns governed internal
networks are reported by hns as network type "internal", which is not
mapped to any network plugin (and thus we get the same plugin-load retry
loop as before).

This issue hits Docker for Desktop because we set up such a network for
the Linux VM communication.

Signed-off-by: Simon Ferquel <simon.ferquel@docker.com>
(cherry picked from commit 6a1a4f9721)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-10-03 15:24:34 +02:00
Kir Kolyshkin
c9ddc6effc gd/dm: fix error message
The parameter name was wrong, which may mislead a user.

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit c378fb774e)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-10-03 14:01:13 +02:00
Justin Cormack
16836e60bc Move the syslog syscall to be gated by CAP_SYS_ADMIN or CAP_SYSLOG
This call is what is used to implement `dmesg` to get kernel messages
about the host. This can leak substantial information about the host.
It is normally available to unprivileged users on the host, unless
the sysctl `kernel.dmesg_restrict = 1` is set, but this is not set
by default on the majority of distributions. Blocking this to restrict
leaks about the configuration seems correct.

Fix #37897

See also https://googleprojectzero.blogspot.com/2018/09/a-cache-invalidation-bug-in-linux.html

Signed-off-by: Justin Cormack <justin.cormack@docker.com>
(cherry picked from commit ccd22ffcc8)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-10-02 20:33:38 +02:00
Andrew Hsu
e44436c31f Merge pull request #62 from thaJeztah/18.09_backport_tweak_error_message
[18.09] backport: tweak bind mount errors
2018-09-28 14:13:41 -07:00
Andrew Hsu
34b3cf4b0c Merge pull request #56 from thaJeztah/18.09_backport_more_permissive_daeon_conf_dir
[18.09] backport loosen permissions on /etc/docker directory
2018-09-28 11:42:01 -07:00
Andrew Hsu
51618f7a83 Merge pull request #63 from tiborvass/18.09-vndr-buildkit
[18.09] vendor buildkit to 8f4dff0d16ea91cb43315d5f5aa4b27f4fe4e1f2
2018-09-28 10:57:56 -07:00
Sebastiaan van Stijn
b499acc0e8 Tweak bind mount errors
These messages were enhanced to include the path that was
missing (in df6af282b9), but
also changed the first part of the message.

This change complicates running e2e tests with mixed versions
of the engine.

Looking at the full error message, "mount" is a bit redundant
as well, because the error message already indicates this is
about a "mount":

    docker run --rm --mount type=bind,source=/no-such-thing,target=/foo busybox
    docker: Error response from daemon: invalid mount config for type "bind": bind mount source path does not exist: /no-such-thing.

Remove the "mount" part from the error message: it was redundant,
and dropping it makes cross-version testing easier :)

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 574db7a537)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-09-28 14:35:55 +02:00
Tibor Vass
67541d5841 vendor buildkit to 8f4dff0d16ea91cb43315d5f5aa4b27f4fe4e1f2
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit e161a8d1e9)
Signed-off-by: Tibor Vass <tibor@docker.com>
2018-09-27 22:46:57 +00:00
Eli Uriegas
989fab3c71 Merge pull request #61 from tiborvass/18.09-remove-docker-prefix-containerd
[18.09] Remove 'docker-' prefix for containerd and runc binaries
2018-09-26 11:45:50 -07:00
Tibor Vass
6bf8dfc4d8 fix daemon tests that were using wrong containerd socket
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 52b60f705c)
Signed-off-by: Tibor Vass <tibor@docker.com>
2018-09-25 23:09:25 +00:00
Tibor Vass
e090646d47 hack/make: remove 'docker-' prefix when copying binaries
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 361412c79e)
Signed-off-by: Tibor Vass <tibor@docker.com>
2018-09-25 23:09:25 +00:00
Tibor Vass
b3bb2aabb8 Remove 'docker-' prefix for containerd and runc binaries
This allows running the daemon in environments that have upstream containerd installed.

Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 34eede0296)
Signed-off-by: Tibor Vass <tibor@docker.com>
2018-09-24 22:35:36 +00:00
Andrew Hsu
e69efe2ef5 Merge pull request #51 from thaJeztah/18.09_backport_fix-libcontainerd-startup-error
[18.09] backport: Add fail fast path when containerd fails on startup
2018-09-22 00:11:43 -07:00
Andrew Hsu
ccab609365 Merge pull request #60 from tiborvass/18.09-remove-boltdb
[18.09] Remove boltdb dependency
2018-09-22 00:11:17 -07:00
Andrew Hsu
0a6866b839 Merge pull request #59 from tonistiigi/buildkit-1809
[18.09] Backport Buildkit fixes for 18.09
2018-09-21 21:59:27 -07:00
Tibor Vass
cce1763d57 vendor: remove boltdb dependency which is superseded by bbolt
This also brings in these PRs from swarmkit:
- https://github.com/docker/swarmkit/pull/2691
- https://github.com/docker/swarmkit/pull/2744
- https://github.com/docker/swarmkit/pull/2732
- https://github.com/docker/swarmkit/pull/2729
- https://github.com/docker/swarmkit/pull/2748

Signed-off-by: Tibor Vass <tibor@docker.com>
2018-09-22 01:24:11 +00:00
Tibor Vass
3d67dd0465 builder: vendor buildkit to 39404586a50d1b9d0fb1c578cf0f4de7bdb7afe5
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit d0f00bc1fb)
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2018-09-21 17:06:25 -07:00
Tibor Vass
73e2f72a7c builder: use buildkit's GC for build cache
This allows users to configure the buildkit GC.

The following enables the default GC:
```
{
  "builder": {
    "gc": {
      "enabled": true
    }
  }
}
```

The default GC policy has a simple config:
```
{
  "builder": {
    "gc": {
      "enabled": true,
      "defaultKeepStorage": "30GB"
    }
  }
}
```

A custom GC policy can be used instead by specifying a list of cache prune rules:
```
{
  "builder": {
    "gc": {
      "enabled": true,
      "policy": [
        {"keepStorage": "512MB", "filter": ["unused-for=1400h"]]},
        {"keepStorage": "30GB", "all": true}
      ]
    }
  }
}
```

Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 4a776d0ca7)
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2018-09-21 17:06:25 -07:00
Anda Xu
2926a45be6 add support of registry-mirrors and insecure-registries to buildkit
Signed-off-by: Anda Xu <anda.xu@docker.com>
(cherry picked from commit 171d51c861)
(cherry picked from commit a72752b2f74467333b4ebe21c6c474eb0c2b99e0)
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2018-09-21 17:06:25 -07:00
Anda Xu
b73fd4d936 update vendor
Signed-off-by: Anda Xu <anda.xu@docker.com>
(cherry picked from commit 308701fac6)
(cherry picked from commit b48afc216f46c8e786560b807528699012e1627b)
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2018-09-21 17:06:25 -07:00
Tibor Vass
bb2adc4496 daemon/images: removed "found leaked image layer" warning, because it is expected now with buildkit
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 5aa222d0fe)
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2018-09-21 17:06:25 -07:00
Tonis Tiigi
b501aa82d5 vendor: update bolt to bbolt
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2018-09-21 17:06:25 -07:00
Tonis Tiigi
46a703bb3b vendor: add bbolt v1.3.1-etcd.8
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2018-09-21 17:06:25 -07:00
Andrew Hsu
ff9340ca2c Merge pull request #52 from thaJeztah/18.09_backport_fix-TestServiceWithDefaultAddressPoolInit
[18.09] backport TestServiceWithDefaultAddressPoolInit: avoid panic
2018-09-21 11:19:57 -07:00
Andrew Hsu
90a90ae2e1 Merge pull request #57 from AntaresS/cherry-37871
[18.09] backport fixing daemon won't start when "runtimes" option defined in both config file and cli
2018-09-21 09:54:31 -07:00
Anda Xu
66ed41aec8 fixed the bug where dockerd won't start when the 'runtimes' field is defined in both the daemon config file and cli flags
Signed-off-by: Anda Xu <anda.xu@docker.com>
(cherry picked from commit 8392d0930b)
2018-09-20 10:54:47 -07:00
Andrew Hsu
ea2e2c5427 Merge pull request #50 from AntaresS/cherry-pick-moby
[18.09] backport propagate the dockerd cgroup-parent config to buildkitd
2018-09-18 16:36:12 -07:00
Anda Xu
a5d731edec create newBuildKit function separately in daemon_unix.go and daemon_windows.go for cross-platform builds
Signed-off-by: Anda Xu <anda.xu@docker.com>
(cherry picked from commit 66ac92cdc6)
2018-09-18 11:19:51 -07:00
Sebastiaan van Stijn
fc576226b2 Loosen permissions on /etc/docker directory
The `/etc/docker` directory is used both by the dockerd daemon
and the docker cli (if installed on the same host as the daemon).

In situations where the `/etc/docker` directory does not exist,
and an initial `key.json` (legacy trust key) is generated (at the
default location), the `/etc/docker/` directory was created with
0700 permissions, making the directory only accessible by `root`.

Given that the `0600` permissions on the key itself already protect
it from being used by other users, the permissions of `/etc/docker`
can be less restrictive.

This patch changes the permissions for the directory to `0755`, so
that the CLI (if executed as non-root) can also access this directory.

> **NOTE**: "strictly", this patch is only needed for situations where no _custom_
> location for the trustkey is specified (not overridden with `--deprecated-key-path`),
> but setting the permissions only for the "default" case would make
> this more complicated.

```bash
make binary shell

make install

ls -la /etc/ | grep docker

dockerd
^C

ls -la /etc/ | grep docker
drwxr-xr-x 2 root root    4096 Sep 14 12:11 docker
```
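
In code, the change amounts to something like this (a sketch; `trustKeyPath` is an illustrative name, and the exact call site sits in the daemon's trust-key path):

```go
// before: only root could traverse the freshly-created directory
// os.MkdirAll(filepath.Dir(trustKeyPath), 0700)

// after: world-traversable; the 0600 key file itself still protects the key
if err := os.MkdirAll(filepath.Dir(trustKeyPath), 0755); err != nil {
	return err
}
```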

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit cecd981717)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-09-18 12:34:56 +02:00
Tibor Vass
c24fd7a2c3 Merge pull request #55 from thaJeztah/18.09_backport_fix-progress-panic
[18.09] backport pkg/progress: work around closing closed channel panic
2018-09-17 11:43:41 -07:00
Sebastiaan van Stijn
5fb0a7ced7 Merge pull request #53 from thaJeztah/18.09_backport_buildkit-cli-control
[18.09] backport always honor client side to choose which builder to use with DOCKER_…
2018-09-17 12:34:28 +02:00
Tibor Vass
2c26eac566 pkg/progress: work around closing closed channel panic
I could not reproduce the panic in #37735, so here's a bandaid.
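
The general shape of such a band-aid (not necessarily the exact patch) is to absorb the panic a send can raise when it races with close:

```go
// safeSend delivers v on ch, swallowing the "send on closed channel"
// panic until the underlying race is found and fixed properly.
func safeSend(ch chan<- interface{}, v interface{}) {
	defer func() {
		recover()
	}()
	ch <- v
}
```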

Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 7dac70324d)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-09-17 12:28:09 +02:00
Anda Xu
5badfb40eb always honor the client's choice of which builder to use with the DOCKER_BUILDKIT env var regardless of the server setup
Signed-off-by: Anda Xu <anda.xu@docker.com>
(cherry picked from commit 5d931705e3)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-09-14 17:29:47 +02:00
Kir Kolyshkin
f43fc6650c TestServiceWithDefaultAddressPoolInit: avoid panic
Saw this in moby ci:

> 00:22:07.582 === RUN   TestServiceWithDefaultAddressPoolInit
> 00:22:08.887 --- FAIL: TestServiceWithDefaultAddressPoolInit (1.30s)
> 00:22:08.887 	daemon.go:290: [d905878b35bb9] waiting for daemon to start
> 00:22:08.887 	daemon.go:322: [d905878b35bb9] daemon started
> 00:22:08.888 panic: runtime error: index out of range [recovered]
> 00:22:08.889 	panic: runtime error: index out of range
> 00:22:08.889
> 00:22:08.889 goroutine 360 [running]:
> 00:22:08.889 testing.tRunner.func1(0xc42069d770)
> 00:22:08.889 	/usr/local/go/src/testing/testing.go:742 +0x29d
> 00:22:08.890 panic(0x85d680, 0xb615f0)
> 00:22:08.890 	/usr/local/go/src/runtime/panic.go:502 +0x229
> 00:22:08.890 github.com/docker/docker/integration/network.TestServiceWithDefaultAddressPoolInit(0xc42069d770)
> 00:22:08.891 	/go/src/github.com/docker/docker/integration/network/service_test.go:348 +0xb53
> .....

Apparently `out.IPAM.Config[0]` is not there, so to avoid panic, let's
check the size of `out.IPAM.Config` first.

Fixes: f7ad95cab9

[v2: add logging of data returned by NetworkInspect()]
[v3: use assert.Assert to fail immediately]
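
Roughly, the guarded inspect then reads (a sketch; `name` is the test network's name, assertions via gotest.tools as the v3 note says):

```go
out, err := client.NetworkInspect(ctx, name, types.NetworkInspectOptions{})
assert.NilError(t, err)
t.Logf("%s: NetworkInspect: %+v", t.Name(), out) // v2: log what the daemon returned
assert.Assert(t, len(out.IPAM.Config) > 0)       // v3: fail immediately instead of panicking
subnet := out.IPAM.Config[0].Subnet
```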

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit 69d3a8936b)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-09-14 15:22:43 +02:00
Derek McGowan
85361af1f7 Add fail fast path when containerd fails on startup
Prevents looping of startup errors such as containerd
not being found on the path.

Signed-off-by: Derek McGowan <derek@mcgstyle.net>
(cherry picked from commit ce0b0b72bc)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-09-14 15:20:07 +02:00
Anda Xu
ee40a9ebcd update vendor
Signed-off-by: Anda Xu <anda.xu@docker.com>
(cherry picked from commit 54b3af4c7d)
2018-09-13 16:42:13 -07:00
Anda Xu
e8620110fc propagate the dockerd cgroup-parent config to buildkitd
Signed-off-by: Anda Xu <anda.xu@docker.com>
(cherry picked from commit d52485c2f9)
2018-09-13 16:36:57 -07:00
Andrew Hsu
e988001872 Merge pull request #46 from kolyshkin/18.09-backport-pr37771
[18.09] backport #37771 "vendor: update tar-split"
2018-09-12 18:16:16 -07:00
Andrew Hsu
6531bac59b Merge pull request #48 from kolyshkin/18.09-backport-logs-follow
[18.09] backport "daemon.ContainerLogs(): fix resource leak on follow"
2018-09-12 18:13:56 -07:00
Kir Kolyshkin
2a82480df9 TestFollowLogsProducerGone: add
This should test that
 - all the messages produced are delivered (i.e. not lost)
 - followLogs() exits

Loosely based on the test of the same name by Brian Goff, see
https://gist.github.com/cpuguy83/e538793de18c762608358ee0eaddc197

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit f845d76d04)
2018-09-06 18:39:22 -07:00
Kir Kolyshkin
84a5b528ae daemon.ContainerLogs(): fix resource leak on follow
When daemon.ContainerLogs() is called with options.follow=true
(as in "docker logs --follow"), the "loggerutils.followLogs()"
function never returns (even when the logs consumer is gone).
As a result, all the resources associated with it (including
an opened file descriptor for the log file being read, two FDs
for a pipe, and two FDs for an inotify watch) are never released.

If this is repeated (such as by running "docker logs --follow"
and pressing Ctrl-C a few times), this results in DoS caused by
either hitting the limit of inotify watches, or the limit of
opened files. The only cure is daemon restart.

Apparently, what happens is:

1. logs producer (a container) is gone, calling (*LogWatcher).Close()
for all its readers (daemon/logger/jsonfilelog/jsonfilelog.go:175).

2. WatchClose() is properly handled by a dedicated goroutine in
followLogs(), cancelling the context.

3. Upon receiving ctx.Done(), the code in followLogs()
(daemon/logger/loggerutils/logfile.go#L626-L638) keeps sending
messages _synchronously_ (which is OK for now).

4. The logs consumer is gone (Ctrl-C is pressed on a terminal running
"docker logs --follow"). Method (*LogWatcher).Close() is properly
called (see daemon/logs.go:114). Since it was already called before,
and due to once.Do(), nothing happens (which is kinda good, as
otherwise it would panic on closing a closed channel).

5. A goroutine (see item 3 above) keeps sending log messages
synchronously to the logWatcher.Msg channel. Since the
channel reader is gone, the channel send operation blocks forever,
and resource cleanup set up in defer statements at the beginning
of followLogs() never happens.

Alas, the fix is somewhat complicated:

1. Distinguish between close from logs producer and logs consumer.
To that effect,
 - yet another channel is added to LogWatcher();
 - {Watch,}Close() are renamed to {Watch,}ProducerGone();
 - {Watch,}ConsumerGone() are added;

*NOTE* that the ProducerGone()/WatchProducerGone() pair is ONLY needed
in order to stop ContainerLogs(follow=true) when a container is stopped;
otherwise we're not interested in it. In other words, we're only
using it in followLogs().

2. Code that was doing (logWatcher*).Close() is modified to either call
ProducerGone() or ConsumerGone(), depending on the context.

3. Code that was waiting for WatchClose() is modified to wait for
either ConsumerGone() or ProducerGone(), or both, depending on the
context.

4. followLogs() is modified accordingly:
 - context cancellation is happening on WatchProducerGone(),
and once it's received the FileWatcher is closed and waitRead()
returns errDone on EOF (i.e. log rotation handling logic is disabled);
 - due to this, code that was writing synchronously to logWatcher.Msg
can be and is removed as the code above it handles this case;
 - function returns once ConsumerGone is received, freeing all the
resources -- this is the bugfix itself.
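
Condensed, the send path now looks roughly like this (rotation handling, decoding and error paths elided; `decodeNext` is an illustrative stand-in):

```go
for {
	msg, err := decodeNext() // illustrative; the real loop decodes from the log file
	if err != nil {
		return // includes errDone after the producer is gone
	}
	select {
	case logWatcher.Msg <- msg:
	case <-logWatcher.WatchConsumerGone():
		// the reader hung up: return and let the deferred cleanup
		// release the file, pipe and inotify descriptors
		return
	}
}
```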

While at it,

1. Let's also remove the ctx usage to simplify the code a bit.
It was introduced by commit a69a59ffc7 ("Decouple removing the
fileWatcher from reading") in order to fix a bug. The bug was actually
a deadlock in fsnotify, and the fix was just a workaround. Since then
the fsnotify bug has been fixed, and a new fsnotify was vendored in.
For more details, please see
https://github.com/moby/moby/pull/27782#issuecomment-416794490

2. Since `(*filePoller).Close()` is fixed to remove all the files
being watched, there is no need to explicitly call
fileWatcher.Remove(name) anymore, so get rid of the extra code.

Should fix https://github.com/moby/moby/issues/37391

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit 916eabd459)
2018-09-06 18:39:22 -07:00
Brian Goff
511741735e daemon/logger/loggerutils: add TestFollowLogsClose
This test case checks that followLogs() exits once the reader is gone.
Currently it does not (i.e. this test is supposed to fail) due to #37391.

[kolyshkin@: test case by Brian Goff, changelog and all bugs are by me]
Source: https://gist.github.com/cpuguy83/e538793de18c762608358ee0eaddc197

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit d37a11bfba)
2018-09-06 18:39:22 -07:00
Kir Kolyshkin
2b8bc86679 daemon.ContainerLogs: minor debug logging cleanup
This code has many return statements; for some of them the
"end logs" or "end stream" message was not printed, giving
the impression that this "for" loop never ended.

Make sure that "begin logs" is always followed by "end logs".

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit 2e4c2a6bf9)
2018-09-06 18:39:22 -07:00
Kir Kolyshkin
4e2dbfa1af pkg/filenotify/poller: fix Close()
The code in Close() that removes the watches was not working,
because it first sets `w.closed = true` and then calls w.remove(),
which starts with
```
        if w.closed {
                return errPollerClosed
	}
```

Fix by setting w.closed only after calling w.remove() for all the
files being watched.

While at it, remove the duplicated `delete(w.watches, name)` code.
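
After the fix, Close() removes the watches first and flips the flag last; roughly (field names are a sketch of the poller's internals):

```go
func (w *filePoller) Close() error {
	w.mu.Lock()
	defer w.mu.Unlock()

	if w.closed {
		return nil
	}
	for name := range w.watches {
		w.remove(name) // actually stops watching; no early return anymore
	}
	w.closed = true // set last, so remove() above doesn't short-circuit
	return nil
}
```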

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit fffa8958d0)
2018-09-06 18:39:21 -07:00
Kir Kolyshkin
3a3bfcbf47 pkg/filenotify/poller: close file asap
There is no need to wait for up to 200ms in order to close
the file descriptor once the chClose is received.

This commit might reduce the chances for occasional "The process
cannot access the file because it is being used by another process"
error on Windows, where an opened file can't be removed.

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit dfbb64ea7d)
2018-09-06 18:39:21 -07:00
Kir Kolyshkin
7be43586af pkg/filenotify: poller.Add: fix fd leaks on err
In case of errors, the file descriptor is never closed. Fix it.
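
The shape of the fix, as a sketch (the `watch` bookkeeping is illustrative):

```go
func (w *filePoller) Add(name string) error {
	f, err := os.Open(name)
	if err != nil {
		return err
	}
	w.mu.Lock()
	defer w.mu.Unlock()
	if w.closed {
		f.Close() // previously leaked on this path
		return errPollerClosed
	}
	fi, err := os.Stat(name)
	if err != nil {
		f.Close() // previously leaked on this path too
		return err
	}
	w.watches[name] = watch{file: f, info: fi} // illustrative bookkeeping
	return nil
}
```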

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit 88bcf1573c)
2018-09-06 18:39:21 -07:00
Kir Kolyshkin
d7085abec2 vendor: update tar-split
To include https://github.com/vbatts/tar-split/pull/48 which
fixes the issue of creating an image with >8GB file in it.

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit 92e7543903)
2018-09-06 17:24:39 -07:00
Kir Kolyshkin
fc1d808c44 integration/build: add TestBuildHugeFile
Add a test case for creating an 8GB file inside a container.
Due to a bug in tar-split this was failing in Docker 18.06.

The file being created is sparse, so there's not much I/O
happening or disk space being used -- meaning the test is
fast and does not require a lot of disk space.
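
For reference, such a sparse file can be produced with a seek-past-the-end write: one real byte, 8GB of hole (a sketch, not necessarily the test's exact mechanism):

```go
import (
	"io"
	"log"
	"os"
)

func makeSparse() {
	f, err := os.Create("bigfile")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	// seek to offset 8GB-1 and write a single byte: the skipped range
	// becomes a hole, so almost no disk space or I/O is needed
	if _, err := f.Seek(8<<30-1, io.SeekStart); err != nil {
		log.Fatal(err)
	}
	if _, err := f.Write([]byte{0}); err != nil {
		log.Fatal(err)
	}
}
```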

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit b3165f5b2d)
2018-09-06 17:24:36 -07:00
Andrew Hsu
7485ef7e46 Merge pull request #44 from andrewhsu/sup
[18.09] Fix supervisor healthcheck throttling
2018-09-05 11:14:31 -07:00
Andrew Hsu
d2ecc7bad1 Merge pull request #43 from andrewhsu/tls
[18.09] client: dial tls on Dialer if tls config is set
2018-09-05 06:43:23 -07:00
Derek McGowan
f121eccf29 Fix supervisor healthcheck throttling
Fix the default case that caused the throttling not to be used.
Ensure that the nil-client condition is handled.

Signed-off-by: Derek McGowan <derek@mcgstyle.net>
(cherry picked from commit c3e3293843)
Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
2018-09-05 06:59:52 +00:00
Andrew Hsu
00a9cf39ed Merge pull request #42 from tiborvass/18.09-cp-buildkit
[18.09] Buildkit cherry-picks
2018-09-04 18:46:24 -07:00
Tonis Tiigi
c2d0053207 client: dial tls on Dialer if tls config is set
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit 5974fc2540)
Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
2018-09-05 01:17:31 +00:00
Tibor Vass
4c35d81147 vendor buildkit to fix a couple of bugs
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit effa24bf48)
Signed-off-by: Tibor Vass <tibor@docker.com>
2018-09-04 15:52:24 +00:00
Tonis Tiigi
28150fc70c builder: implement ref checker
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 354c241041)
Signed-off-by: Tibor Vass <tibor@docker.com>
2018-09-04 15:02:28 +00:00
Tibor Vass
d2c3163642 builder: fix pruning all cache
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit d47435a004)
Signed-off-by: Tibor Vass <tibor@docker.com>
2018-09-04 15:02:28 +00:00
Tibor Vass
3153708f13 builder: add prune options to the API
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 8ff7847d1c)
Signed-off-by: Tibor Vass <tibor@docker.com>
2018-09-04 15:02:28 +00:00
Anda Xu
2f94f10342 allow the features option to be live-reloadable
Signed-off-by: Anda Xu <anda.xu@docker.com>
(cherry picked from commit 58a75cebdd)
Signed-off-by: Tibor Vass <tibor@docker.com>
2018-09-04 15:01:46 +00:00
Andrew Hsu
b8a4fe5f8f Merge pull request #37 from thaJeztah/18.09_backport_fix_prefix_matching
[18.09] backport: fix regression when filtering container names using a leading slash
2018-08-31 21:15:00 -07:00
Madhu Venugopal
648704522b Merge pull request #41 from kolyshkin/18.09-backport-pr37739
[18.09] backport "fix relabeling local volume source dir"
2018-08-31 19:01:01 -07:00
Kir Kolyshkin
4032b6778d Fix relabeling local volume source dir
In case a volume is specified via the Mounts API, and SELinux is enabled,
the following error happens on container start:

> $ docker volume create testvol
> $ docker run --rm --mount source=testvol,target=/tmp busybox true
> docker: Error response from daemon: error setting label on mount
> source '': no such file or directory.

The functionality to relabel the source of a local mount specified via
the Mounts API was introduced in commit 5bbf5cc and later broken by commit
e4b6adc, which stopped setting the mp.Source field.

With the current data structures, the host dir is already available in
v.Mountpoint, so let's just use it.

Fixes: e4b6adc
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
2018-08-30 17:34:59 -07:00
Sebastiaan van Stijn
5fa80da2d3 Fix regression when filtering container names using a leading slash
Commit 5c8da2e967 updated the filtering behavior
to match container-names without having to specify the leading slash.

This change caused a regression in situations where a regex was provided as
filter, using an explicit leading slash (`--filter name=^/mycontainername`).

This fix changes the filters to match containers both with and without the
leading slash, effectively making the leading slash optional when filtering.

With this fix, filters with and without a leading slash produce the same result:

    $ docker ps --filter name=^a
    CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
    21afd6362b0c        busybox             "sh"                2 minutes ago       Up 2 minutes                            a2
    56e53770e316        busybox             "sh"                2 minutes ago       Up 2 minutes                            a1

    $ docker ps --filter name=^/a
    CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
    21afd6362b0c        busybox             "sh"                2 minutes ago       Up 2 minutes                            a2
    56e53770e316        busybox             "sh"                3 minutes ago       Up 3 minutes                            a1

    $ docker ps --filter name=^b
    CONTAINER ID        IMAGE               COMMAND             CREATED              STATUS              PORTS               NAMES
    b69003b6a6fe        busybox             "sh"                About a minute ago   Up About a minute                       b1

    $ docker ps --filter name=^/b
    CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
    b69003b6a6fe        busybox             "sh"                56 seconds ago      Up 54 seconds                           b1

    $ docker ps --filter name=/a
    CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
    21afd6362b0c        busybox             "sh"                3 minutes ago       Up 3 minutes                            a2
    56e53770e316        busybox             "sh"                4 minutes ago       Up 4 minutes                            a1

    $ docker ps --filter name=a
    CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
    21afd6362b0c        busybox             "sh"                3 minutes ago       Up 3 minutes                            a2
    56e53770e316        busybox             "sh"                4 minutes ago       Up 4 minutes                            a1
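
The matching change itself is small; conceptually (a sketch, not the exact filters code):

```go
import (
	"regexp"
	"strings"
)

// nameMatches treats the leading slash as optional: container names are
// stored as "/a1", so an anchored pattern like "^a" is also tried
// against the slash-stripped form.
func nameMatches(re *regexp.Regexp, name string) bool {
	return re.MatchString(name) ||
		re.MatchString(strings.TrimPrefix(name, "/"))
}
```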

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 6f9b5ba810)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-08-29 12:54:06 +02:00
Tibor Vass
be371291bc Merge pull request #35 from tiborvass/18.09-fix-network-buildkit
[18.09] builder: fix bridge networking when using buildkit
2018-08-23 06:21:34 -07:00
Tibor Vass
1d531ff64f builder: fix bridge networking when using buildkit
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit dc7e472db9)
Signed-off-by: Tibor Vass <tibor@docker.com>
2018-08-23 05:32:51 +00:00
5101 changed files with 285020 additions and 724970 deletions


@@ -3,14 +3,11 @@ curators:
- alexellis
- andrewhsu
- anonymuse
- arkodg
- chanwit
- ehazlett
- fntlnz
- gianarb
- kolyshkin
- mgoelzer
- olljanat
- programmerq
- rheinwein
- ripcurld0
@@ -18,5 +15,3 @@ curators:
features:
- comments
- pr_description_required


@@ -1,6 +1,7 @@
.git
.go-pkg-cache
.gopath
bundles
.gopath
vendor/pkg
.go-pkg-cache
.git
hack/integration-cli-on-swarm/integration-cli-on-swarm

.github/CODEOWNERS

@@ -4,13 +4,17 @@
# KEEP THIS FILE SORTED. Order is important. Last match takes precedence.
builder/** @tonistiigi
client/** @dnephin
contrib/mkimage/** @tianon
daemon/graphdriver/devmapper/** @rhvgoyal
daemon/graphdriver/lcow/** @johnstep
daemon/graphdriver/lcow/** @johnstep @jhowardmsft
daemon/graphdriver/overlay/** @dmcgowan
daemon/graphdriver/overlay2/** @dmcgowan
daemon/graphdriver/windows/** @johnstep
daemon/graphdriver/windows/** @johnstep @jhowardmsft
daemon/logger/awslogs/** @samuelkarp
hack/** @tianon
hack/integration-cli-on-swarm/** @AkihiroSuda
integration-cli/** @vdemeester
integration/** @vdemeester
plugin/** @cpuguy83
project/** @thaJeztah

.gitignore

@@ -3,7 +3,6 @@
# please consider a global .gitignore https://help.github.com/articles/ignoring-files
*.exe
*.exe~
*.gz
*.orig
test.main
.*.swp
@@ -17,7 +16,9 @@ autogen/
bundles/
cmd/dockerd/dockerd
contrib/builder/rpm/*/changelog
dockerversion/version_autogen.go
dockerversion/version_autogen_unix.go
vendor/pkg/
go-test-report.json
hack/integration-cli-on-swarm/integration-cli-on-swarm
coverage.txt
profile.out
junit-report.xml

.mailmap

@@ -10,8 +10,6 @@
<mr.wrfly@gmail.com> <wrfly@users.noreply.github.com>
Aaron L. Xu <liker.xu@foxmail.com>
Abhinandan Prativadi <abhi@docker.com>
Adam Dobrawy <naczelnik@jawnosc.tk>
Adam Dobrawy <naczelnik@jawnosc.tk> <ad-m@users.noreply.github.com>
Adrien Gallouët <adrien@gallouet.fr> <angt@users.noreply.github.com>
Ahmed Kamal <email.ahmedkamal@googlemail.com>
Ahmet Alp Balkan <ahmetb@microsoft.com> <ahmetalpbalkan@gmail.com>
@@ -19,9 +17,7 @@ AJ Bowen <aj@soulshake.net>
AJ Bowen <aj@soulshake.net> <aj@gandi.net>
AJ Bowen <aj@soulshake.net> <amy@gandi.net>
Akihiro Matsushima <amatsusbit@gmail.com> <amatsus@users.noreply.github.com>
Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp> <suda.akihiro@lab.ntt.co.jp>
Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp> <suda.kyoto@gmail.com>
Akihiro Suda <suda.akihiro@lab.ntt.co.jp> <suda.kyoto@gmail.com>
Aleksa Sarai <asarai@suse.de>
Aleksa Sarai <asarai@suse.de> <asarai@suse.com>
Aleksa Sarai <asarai@suse.de> <cyphar@cyphar.com>
@@ -37,11 +33,8 @@ Alexandre Beslic <alexandre.beslic@gmail.com> <abronan@docker.com>
Alicia Lauerman <alicia@eta.im> <allydevour@me.com>
Allen Sun <allensun.shl@alibaba-inc.com> <allen.sun@daocloud.io>
Allen Sun <allensun.shl@alibaba-inc.com> <shlallen1990@gmail.com>
Andrea Denisse Gómez <crypto.andrea@protonmail.ch>
Andrew Weiss <andrew.weiss@docker.com> <andrew.weiss@microsoft.com>
Andrew Weiss <andrew.weiss@docker.com> <andrew.weiss@outlook.com>
Andrey Kolomentsev <andrey.kolomentsev@docker.com>
Andrey Kolomentsev <andrey.kolomentsev@docker.com> <andrey.kolomentsev@gmail.com>
André Martins <aanm90@gmail.com> <martins@noironetworks.com>
Andy Rothfusz <github@developersupport.net> <github@metaliveblog.com>
Andy Smith <github@anarkystic.com>
@@ -54,8 +47,6 @@ Antonio Murdaca <antonio.murdaca@gmail.com> <runcom@users.noreply.github.com>
Anuj Bahuguna <anujbahuguna.dev@gmail.com>
Anuj Bahuguna <anujbahuguna.dev@gmail.com> <abahuguna@fiberlink.com>
Anusha Ragunathan <anusha.ragunathan@docker.com> <anusha@docker.com>
Arko Dasgupta <arko.dasgupta@docker.com>
Arko Dasgupta <arko.dasgupta@docker.com> <arkodg@users.noreply.github.com>
Arnaud Porterie <arnaud.porterie@docker.com>
Arnaud Porterie <arnaud.porterie@docker.com> <icecrime@gmail.com>
Arthur Gautier <baloo@gandi.net> <superbaloo+registrations.github@superbaloo.net>
@@ -63,26 +54,21 @@ Avi Miller <avi.miller@oracle.com> <avi.miller@gmail.com>
Ben Bonnefoy <frenchben@docker.com>
Ben Golub <ben.golub@dotcloud.com>
Ben Toews <mastahyeti@gmail.com> <mastahyeti@users.noreply.github.com>
Benny Ng <benny.tpng@gmail.com>
Benoit Chesneau <bchesneau@gmail.com>
Bevisy Zhang <binbin36520@gmail.com>
Bhiraj Butala <abhiraj.butala@gmail.com>
Bhumika Bayani <bhumikabayani@gmail.com>
Bilal Amarni <bilal.amarni@gmail.com> <bamarni@users.noreply.github.com>
Bill Wang <ozbillwang@gmail.com> <SydOps@users.noreply.github.com>
Bily Zhang <xcoder@tenxcloud.com>
Bin Liu <liubin0329@gmail.com>
Bin Liu <liubin0329@gmail.com> <liubin0329@users.noreply.github.com>
Bingshen Wang <bingshen.wbs@alibaba-inc.com>
Boaz Shuster <ripcurld.github@gmail.com>
Boqin Qin <bobbqqin@gmail.com>
Brandon Philips <brandon.philips@coreos.com> <brandon@ifup.co>
Brandon Philips <brandon.philips@coreos.com> <brandon@ifup.org>
Brent Salisbury <brent.salisbury@docker.com> <brent@docker.com>
Brian Goff <cpuguy83@gmail.com>
Brian Goff <cpuguy83@gmail.com> <bgoff@cpuguy83-mbp.home>
Brian Goff <cpuguy83@gmail.com> <bgoff@cpuguy83-mbp.local>
Carlos de Paula <me@carlosedp.com>
Chander Govindarajan <chandergovind@gmail.com>
Chao Wang <wangchao.fnst@cn.fujitsu.com> <chaowang@localhost.localdomain>
Charles Hooper <charles.hooper@dotcloud.com> <chooper@plumata.com>
@@ -91,17 +77,12 @@ Chen Chuanliang <chen.chuanliang@zte.com.cn>
Chen Mingjie <chenmingjie0828@163.com>
Chen Qiu <cheney-90@hotmail.com>
Chen Qiu <cheney-90@hotmail.com> <21321229@zju.edu.cn>
Chengfei Shang <cfshang@alauda.io>
Chris Dias <cdias@microsoft.com>
Chris McKinnel <chris.mckinnel@tangentlabs.co.uk>
Chris Price <cprice@mirantis.com>
Chris Price <cprice@mirantis.com> <chris.price@docker.com>
Christopher Biscardi <biscarch@sketcht.com>
Christopher Latham <sudosurootdev@gmail.com>
Christy Norman <christy@linux.vnet.ibm.com>
Chun Chen <ramichen@tencent.com> <chenchun.feed@gmail.com>
Corbin Coleman <corbin.coleman@docker.com>
Cristian Ariza <dev@cristianrz.com>
Cristian Staretu <cristian.staretu@gmail.com>
Cristian Staretu <cristian.staretu@gmail.com> <unclejack@users.noreply.github.com>
Cristian Staretu <cristian.staretu@gmail.com> <unclejacksons@gmail.com>
@@ -116,7 +97,6 @@ Daniel Garcia <daniel@danielgarcia.info>
Daniel Gasienica <daniel@gasienica.ch> <dgasienica@zynga.com>
Daniel Goosen <daniel.goosen@surveysampling.com> <djgoosen@users.noreply.github.com>
Daniel Grunwell <mwgrunny@gmail.com>
Daniel Hiltgen <daniel.hiltgen@docker.com> <dhiltgen@users.noreply.github.com>
Daniel J Walsh <dwalsh@redhat.com>
Daniel Mizyrycki <daniel.mizyrycki@dotcloud.com> <daniel@dotcloud.com>
Daniel Mizyrycki <daniel.mizyrycki@dotcloud.com> <mzdaniel@glidelink.net>
@@ -124,7 +104,6 @@ Daniel Mizyrycki <daniel.mizyrycki@dotcloud.com> <root@vagrant-ubuntu-12.10.vagr
Daniel Nephin <dnephin@docker.com> <dnephin@gmail.com>
Daniel Norberg <dano@spotify.com> <daniel.norberg@gmail.com>
Daniel Watkins <daniel@daniel-watkins.co.uk>
Daniel Zhang <jmzwcn@gmail.com>
Danny Yates <danny@codeaholics.org> <Danny.Yates@mailonline.co.uk>
Darren Shepherd <darren.s.shepherd@gmail.com> <darren@rancher.com>
Dattatraya Kumbhar <dattatraya.kumbhar@gslab.com>
@@ -139,14 +118,9 @@ Deshi Xiao <dxiao@redhat.com> <dsxiao@dataman-inc.com>
Deshi Xiao <dxiao@redhat.com> <xiaods@gmail.com>
Diego Siqueira <dieg0@live.com>
Diogo Monica <diogo@docker.com> <diogo.monica@gmail.com>
Dmitry Sharshakov <d3dx12.xx@gmail.com>
Dmitry Sharshakov <d3dx12.xx@gmail.com> <sh7dm@outlook.com>
Dominic Yin <yindongchao@inspur.com>
Dominik Honnef <dominik@honnef.co> <dominikh@fork-bomb.org>
Doug Davis <dug@us.ibm.com> <duglin@users.noreply.github.com>
Doug Tangren <d.tangren@gmail.com>
Drew Erny <derny@mirantis.com>
Drew Erny <derny@mirantis.com> <drew.erny@docker.com>
Elan Ruusamäe <glen@pld-linux.org>
Elan Ruusamäe <glen@pld-linux.org> <glen@delfi.ee>
Elango Sivanandam <elango.siva@docker.com>
@@ -172,20 +146,14 @@ Fengtu Wang <wangfengtu@huawei.com> <wangfengtu@huawei.com>
Francisco Carriedo <fcarriedo@gmail.com>
Frank Rosquin <frank.rosquin+github@gmail.com> <frank.rosquin@gmail.com>
Frederick F. Kautz IV <fkautz@redhat.com> <fkautz@alumni.cmu.edu>
Fu JinLin <withlin@yeah.net>
Gabriel Nicolas Avellaneda <avellaneda.gabriel@gmail.com>
Gaetan de Villele <gdevillele@gmail.com>
Gang Qiao <qiaohai8866@gmail.com> <1373319223@qq.com>
Geon Kim <geon0250@gmail.com>
George Kontridze <george@bugsnag.com>
Gerwim Feiken <g.feiken@tfe.nl> <gerwim@gmail.com>
Giampaolo Mancini <giampaolo@trampolineup.com>
Giovan Isa Musthofa <giovanism@outlook.co.id>
Gopikannan Venugopalsamy <gopikannan.venugopalsamy@gmail.com>
Gou Rao <gou@portworx.com> <gourao@users.noreply.github.com>
Grant Millar <rid@cylo.io>
Grant Millar <rid@cylo.io> <grant@cylo.io>
Grant Millar <rid@cylo.io> <grant@seednet.eu>
Greg Stephens <greg@udon.org>
Guillaume J. Charmes <guillaume.charmes@docker.com> <charmes.guillaume@gmail.com>
Guillaume J. Charmes <guillaume.charmes@docker.com> <guillaume.charmes@dotcloud.com>
@@ -200,7 +168,6 @@ Hakan Özler <hakan.ozler@kodcu.com>
Hao Shu Wei <haosw@cn.ibm.com>
Hao Shu Wei <haosw@cn.ibm.com> <haoshuwei1989@163.com>
Harald Albers <github@albersweb.de> <albers@users.noreply.github.com>
Harald Niesche <harald@niesche.de>
Harold Cooper <hrldcpr@gmail.com>
Harry Zhang <harryz@hyper.sh> <harryzhang@zju.edu.cn>
Harry Zhang <harryz@hyper.sh> <resouer@163.com>
@@ -208,7 +175,6 @@ Harry Zhang <harryz@hyper.sh> <resouer@gmail.com>
Harry Zhang <resouer@163.com>
Harshal Patil <harshal.patil@in.ibm.com> <harche@users.noreply.github.com>
Helen Xie <chenjg@harmonycloud.cn>
Hiroyuki Sasagawa <hs19870702@gmail.com>
Hollie Teal <hollie@docker.com>
Hollie Teal <hollie@docker.com> <hollie.teal@docker.com>
Hollie Teal <hollie@docker.com> <hollietealok@users.noreply.github.com>
@@ -216,38 +182,27 @@ Hu Keping <hukeping@huawei.com>
Huu Nguyen <huu@prismskylabs.com> <whoshuu@gmail.com>
Hyzhou Zhy <hyzhou.zhy@alibaba-inc.com>
Hyzhou Zhy <hyzhou.zhy@alibaba-inc.com> <1187766782@qq.com>
Ian Campbell <ian.campbell@docker.com>
Ian Campbell <ian.campbell@docker.com> <ijc@docker.com>
Ilya Khlopotov <ilya.khlopotov@gmail.com>
Iskander Sharipov <quasilyte@gmail.com>
Ivan Markin <sw@nogoegst.net> <twim@riseup.net>
Jack Laxson <jackjrabbit@gmail.com>
Jacob Atzen <jacob@jacobatzen.dk> <jatzen@gmail.com>
Jacob Tomlinson <jacob@tom.linson.uk> <jacobtomlinson@users.noreply.github.com>
Jaivish Kothari <janonymous.codevulture@gmail.com>
James Nesbitt <jnesbitt@mirantis.com>
James Nesbitt <jnesbitt@mirantis.com> <james.nesbitt@wunderkraut.com>
Jamie Hannaford <jamie@limetree.org> <jamie.hannaford@rackspace.com>
Jean Rouge <rougej+github@gmail.com> <jer329@cornell.edu>
Jean-Baptiste Barth <jeanbaptiste.barth@gmail.com>
Jean-Baptiste Dalido <jeanbaptiste@appgratis.com>
Jean-Tiare Le Bigot <jt@yadutaf.fr> <admin@jtlebi.fr>
Jeff Anderson <jeff@docker.com> <jefferya@programmerq.net>
Jeff Nickoloff <jeff.nickoloff@gmail.com> <jeff@allingeek.com>
Jeroen Franse <jeroenfranse@gmail.com>
Jessica Frazelle <jess@oxide.computer>
Jessica Frazelle <jess@oxide.computer> <acidburn@docker.com>
Jessica Frazelle <jess@oxide.computer> <acidburn@google.com>
Jessica Frazelle <jess@oxide.computer> <acidburn@microsoft.com>
Jessica Frazelle <jess@oxide.computer> <jess@docker.com>
Jessica Frazelle <jess@oxide.computer> <jess@mesosphere.com>
Jessica Frazelle <jess@oxide.computer> <jessfraz@google.com>
Jessica Frazelle <jess@oxide.computer> <jfrazelle@users.noreply.github.com>
Jessica Frazelle <jess@oxide.computer> <me@jessfraz.com>
Jessica Frazelle <jess@oxide.computer> <princess@docker.com>
Jian Liao <jliao@alauda.io>
Jiang Jinyang <jjyruby@gmail.com>
Jiang Jinyang <jjyruby@gmail.com> <jiangjinyang@outlook.com>
Jessica Frazelle <jessfraz@google.com>
Jessica Frazelle <jessfraz@google.com> <acidburn@docker.com>
Jessica Frazelle <jessfraz@google.com> <acidburn@google.com>
Jessica Frazelle <jessfraz@google.com> <jess@docker.com>
Jessica Frazelle <jessfraz@google.com> <jess@mesosphere.com>
Jessica Frazelle <jessfraz@google.com> <jfrazelle@users.noreply.github.com>
Jessica Frazelle <jessfraz@google.com> <me@jessfraz.com>
Jessica Frazelle <jessfraz@google.com> <princess@docker.com>
Jim Galasyn <jim.galasyn@docker.com>
Jiuyue Ma <majiuyue@huawei.com>
Joey Geiger <jgeiger@gmail.com>
@@ -256,22 +211,19 @@ Joffrey F <joffrey@docker.com> <f.joffrey@gmail.com>
Joffrey F <joffrey@docker.com> <joffrey@dotcloud.com>
Johan Euphrosine <proppy@google.com> <proppy@aminche.com>
John Harris <john@johnharris.io>
John Howard <github@lowenna.com>
John Howard <github@lowenna.com> <jhoward@microsoft.com>
John Howard <github@lowenna.com> <jhoward@ntdev.microsoft.com>
John Howard <github@lowenna.com> <jhowardmsft@users.noreply.github.com>
John Howard <github@lowenna.com> <John.Howard@microsoft.com>
John Howard <github@lowenna.com> <john.howard@microsoft.com>
John Howard (VM) <John.Howard@microsoft.com>
John Howard (VM) <John.Howard@microsoft.com> <jhoward@microsoft.com>
John Howard (VM) <John.Howard@microsoft.com> <jhoward@ntdev.microsoft.com>
John Howard (VM) <John.Howard@microsoft.com> <jhowardmsft@users.noreply.github.com>
John Howard (VM) <John.Howard@microsoft.com> <john.howard@microsoft.com>
John Stephens <johnstep@docker.com> <johnstep@users.noreply.github.com>
Jon Surrell <jon.surrell@gmail.com> <jon.surrell@automattic.com>
Jonathan Choy <jonathan.j.choy@gmail.com>
Jonathan Choy <jonathan.j.choy@gmail.com> <oni@tetsujinlabs.com>
Jon Surrell <jon.surrell@gmail.com> <jon.surrell@automattic.com>
Jordan Arentsen <blissdev@gmail.com>
Jordan Jennings <jjn2009@gmail.com> <jjn2009@users.noreply.github.com>
Jorit Kleine-Möllhoff <joppich@bricknet.de> <joppich@users.noreply.github.com>
Jose Diaz-Gonzalez <email@josediazgonzalez.com>
Jose Diaz-Gonzalez <email@josediazgonzalez.com> <jose@seatgeek.com>
Jose Diaz-Gonzalez <email@josediazgonzalez.com> <josegonzalez@users.noreply.github.com>
Jose Diaz-Gonzalez <jose@seatgeek.com> <josegonzalez@users.noreply.github.com>
Josh Bonczkowski <josh.bonczkowski@gmail.com>
Josh Eveleth <joshe@opendns.com> <jeveleth@users.noreply.github.com>
Josh Hawn <josh.hawn@docker.com> <jlhawn@berkeley.edu>
@@ -285,7 +237,6 @@ Justin Cormack <justin.cormack@docker.com>
Justin Cormack <justin.cormack@docker.com> <justin.cormack@unikernel.com>
Justin Cormack <justin.cormack@docker.com> <justin@specialbusservice.com>
Justin Simonelis <justin.p.simonelis@gmail.com> <justin.simonelis@PTS-JSIMON2.toronto.exclamation.com>
Justin Terry <juterry@microsoft.com>
Jérôme Petazzoni <jerome.petazzoni@docker.com> <jerome.petazzoni@dotcloud.com>
Jérôme Petazzoni <jerome.petazzoni@docker.com> <jerome.petazzoni@gmail.com>
Jérôme Petazzoni <jerome.petazzoni@docker.com> <jp@enix.org>
@@ -294,11 +245,8 @@ Kai Qiang Wu (Kennan) <wkq5325@gmail.com>
Kai Qiang Wu (Kennan) <wkq5325@gmail.com> <wkqwu@cn.ibm.com>
Kamil Domański <kamil@domanski.co>
Kamjar Gerami <kami.gerami@gmail.com>
Karthik Nayak <karthik.188@gmail.com>
Karthik Nayak <karthik.188@gmail.com> <Karthik.188@gmail.com>
Ken Cochrane <kencochrane@gmail.com> <KenCochrane@gmail.com>
Ken Herner <kherner@progress.com> <chosenken@gmail.com>
Ken Reese <krrgithub@gmail.com>
Kenfe-Mickaël Laventure <mickael.laventure@gmail.com>
Kevin Feyrer <kevin.feyrer@btinternet.com> <kevinfeyrer@users.noreply.github.com>
Kevin Kern <kaiwentan@harmonycloud.cn>
@@ -312,7 +260,6 @@ Konstantin Pelykh <kpelykh@zettaset.com>
Kotaro Yoshimatsu <kotaro.yoshimatsu@gmail.com>
Kunal Kushwaha <kushwaha_kunal_v7@lab.ntt.co.jp> <kunal.kushwaha@gmail.com>
Lajos Papp <lajos.papp@sequenceiq.com> <lalyos@yahoo.com>
Lei Gong <lgong@alauda.io>
Lei Jitang <leijitang@huawei.com>
Lei Jitang <leijitang@huawei.com> <leijitang@gmail.com>
Liang Mingqiang <mqliang.zju@gmail.com>
@@ -321,8 +268,7 @@ Liao Qingwei <liaoqingwei@huawei.com>
Linus Heckemann <lheckemann@twig-world.com>
Linus Heckemann <lheckemann@twig-world.com> <anonymouse2048@gmail.com>
Lokesh Mandvekar <lsm5@fedoraproject.org> <lsm5@redhat.com>
Lorenzo Fontana <fontanalorenz@gmail.com> <fontanalorenzo@me.com>
Lorenzo Fontana <fontanalorenz@gmail.com> <lo@linux.com>
Lorenzo Fontana <lo@linux.com> <fontanalorenzo@me.com>
Louis Opter <kalessin@kalessin.fr>
Louis Opter <kalessin@kalessin.fr> <louis@dotcloud.com>
Luca Favatella <luca.favatella@erlang-solutions.com> <lucafavatella@users.noreply.github.com>
@@ -358,11 +304,10 @@ Matthew Mosesohn <raytrac3r@gmail.com>
Matthew Mueller <mattmuelle@gmail.com>
Matthias Kühnle <git.nivoc@neverbox.com> <kuehnle@online.de>
Mauricio Garavaglia <mauricio@medallia.com> <mauriciogaravaglia@gmail.com>
Maxwell <csuhp007@gmail.com>
Maxwell <csuhp007@gmail.com> <csuhqg@foxmail.com>
Michael Crosby <michael@docker.com> <crosby.michael@gmail.com>
Michael Crosby <michael@docker.com> <crosbymichael@gmail.com>
Michael Crosby <michael@docker.com> <michael@crosbymichael.com>
Michał Gryko <github@odkurzacz.org>
Michael Hudson-Doyle <michael.hudson@canonical.com> <michael.hudson@linaro.org>
Michael Huettermann <michael@huettermann.net>
Michael Käufl <docker@c.michael-kaeufl.de> <michael-k@users.noreply.github.com>
@@ -370,9 +315,6 @@ Michael Nussbaum <michael.nussbaum@getbraintree.com>
Michael Nussbaum <michael.nussbaum@getbraintree.com> <code@getbraintree.com>
Michael Spetsiotis <michael_spets@hotmail.com>
Michal Minář <miminar@redhat.com>
Michał Gryko <github@odkurzacz.org>
Michiel de Jong <michiel@unhosted.org>
Mickaël Fortunato <morsi.morsicus@gmail.com>
Miguel Angel Alvarez Cabrerizo <doncicuto@gmail.com> <30386061+doncicuto@users.noreply.github.com>
Miguel Angel Fernández <elmendalerenda@gmail.com>
Mihai Borobocea <MihaiBorob@gmail.com> <MihaiBorobocea@gmail.com>
@@ -385,7 +327,6 @@ Moorthy RS <rsmoorthy@gmail.com> <rsmoorthy@users.noreply.github.com>
Moysés Borges <moysesb@gmail.com>
Moysés Borges <moysesb@gmail.com> <moyses.furtado@wplex.com.br>
Nace Oroz <orkica@gmail.com>
Natasha Jarus <linuxmercedes@gmail.com>
Nathan LeClaire <nathan.leclaire@docker.com> <nathan.leclaire@gmail.com>
Nathan LeClaire <nathan.leclaire@docker.com> <nathanleclaire@gmail.com>
Neil Horman <nhorman@tuxdriver.com> <nhorman@hmswarspite.think-freely.org>
@@ -397,9 +338,6 @@ Nolan Darilek <nolan@thewordnerd.info>
O.S. Tezer <ostezer@gmail.com>
O.S. Tezer <ostezer@gmail.com> <ostezer@users.noreply.github.com>
Oh Jinkyun <tintypemolly@gmail.com> <tintypemolly@Ohui-MacBook-Pro.local>
Oliver Reason <oli@overrateddev.co>
Olli Janatuinen <olli.janatuinen@gmail.com>
Olli Janatuinen <olli.janatuinen@gmail.com> <olljanat@users.noreply.github.com>
Ouyang Liduo <oyld0210@163.com>
Patrick Stapleton <github@gdi2290.com>
Paul Liljenberg <liljenberg.paul@gmail.com> <letters@paulnotcom.se>
@@ -413,7 +351,6 @@ Peter Waller <p@pwaller.net> <peter@scraperwiki.com>
Phil Estes <estesp@linux.vnet.ibm.com> <estesp@gmail.com>
Philip Alexander Etling <paetling@gmail.com>
Philipp Gillé <philipp.gille@gmail.com> <philippgille@users.noreply.github.com>
Prasanna Gautam <prasannagautam@gmail.com>
Qiang Huang <h.huangqiang@huawei.com>
Qiang Huang <h.huangqiang@huawei.com> <qhuang@10.0.2.15>
Ray Tsang <rayt@google.com> <saturnism@users.noreply.github.com>
@@ -421,12 +358,8 @@ Renaud Gaubert <rgaubert@nvidia.com> <renaud.gaubert@gmail.com>
Robert Terhaar <rterhaar@atlanticdynamic.com> <robbyt@users.noreply.github.com>
Roberto G. Hashioka <roberto.hashioka@docker.com> <roberto_hashioka@hotmail.com>
Roberto Muñoz Fernández <robertomf@gmail.com> <roberto.munoz.fernandez.contractor@bbva.com>
Robin Thoni <robin@rthoni.com>
Roman Dudin <katrmr@gmail.com> <decadent@users.noreply.github.com>
Rong Zhang <rongzhang@alauda.io>
Rongxiang Song <tinysong1226@gmail.com>
Ross Boucher <rboucher@gmail.com>
Rui Cao <ruicao@alauda.io>
Runshen Zhu <runshen.zhu@gmail.com>
Ryan Stelly <ryan.stelly@live.com>
Sakeven Jiang <jc5930@sina.cn>
@@ -442,7 +375,6 @@ Shengbo Song <thomassong@tencent.com>
Shengbo Song <thomassong@tencent.com> <mymneo@163.com>
Shih-Yuan Lee <fourdollars@gmail.com>
Shishir Mahajan <shishir.mahajan@redhat.com> <smahajan@redhat.com>
Shu-Wai Chow <shu-wai.chow@seattlechildrens.org>
Shukui Yang <yangshukui@huawei.com>
Shuwei Hao <haosw@cn.ibm.com>
Shuwei Hao <haosw@cn.ibm.com> <haoshuwei24@gmail.com>
@@ -461,12 +393,9 @@ Stefan Berger <stefanb@linux.vnet.ibm.com>
Stefan Berger <stefanb@linux.vnet.ibm.com> <stefanb@us.ibm.com>
Stefan J. Wernli <swernli@microsoft.com> <swernli@ntdev.microsoft.com>
Stefan S. <tronicum@user.github.com>
Stefan Scherer <stefan.scherer@docker.com>
Stefan Scherer <stefan.scherer@docker.com> <scherer_stefan@icloud.com>
Stephan Spindler <shutefan@gmail.com> <shutefan@users.noreply.github.com>
Stephen Day <stevvooe@gmail.com>
Stephen Day <stevvooe@gmail.com> <stephen.day@docker.com>
Stephen Day <stevvooe@gmail.com> <stevvooe@users.noreply.github.com>
Stephen Day <stephen.day@docker.com>
Stephen Day <stephen.day@docker.com> <stevvooe@users.noreply.github.com>
Steve Desmond <steve@vtsv.ca> <stevedesmond-ca@users.noreply.github.com>
Sun Gengze <690388648@qq.com>
Sun Jianbo <wonderflow.sun@gmail.com>
@@ -500,11 +429,9 @@ Toli Kuznets <toli@docker.com>
Tom Barlow <tomwbarlow@gmail.com>
Tom Sweeney <tsweeney@redhat.com>
Tõnis Tiigi <tonistiigi@gmail.com>
Trace Andreason <tandreason@gmail.com>
Trishna Guha <trishnaguha17@gmail.com>
Tristan Carel <tristan@cogniteev.com>
Tristan Carel <tristan@cogniteev.com> <tristan.carel@gmail.com>
Tyler Brown <tylers.pile@gmail.com>
Umesh Yadav <umesh4257@gmail.com>
Umesh Yadav <umesh4257@gmail.com> <dungeonmaster18@users.noreply.github.com>
Victor Lyuboslavsky <victor@victoreda.com>
@@ -514,19 +441,14 @@ Victor Vieux <victor.vieux@docker.com> <victor@docker.com>
Victor Vieux <victor.vieux@docker.com> <victor@dotcloud.com>
Victor Vieux <victor.vieux@docker.com> <victorvieux@gmail.com>
Victor Vieux <victor.vieux@docker.com> <vieux@docker.com>
Vikram bir Singh <vsingh@mirantis.com>
Vikram bir Singh <vsingh@mirantis.com> <vikrambir.singh@docker.com>
Viktor Vojnovski <viktor.vojnovski@amadeus.com> <vojnovski@gmail.com>
Vincent Batts <vbatts@redhat.com> <vbatts@hashbangbash.com>
Vincent Bernat <Vincent.Bernat@exoscale.ch> <bernat@luffy.cx>
Vincent Bernat <Vincent.Bernat@exoscale.ch> <vincent@bernat.im>
Vincent Boulineau <vincent.boulineau@datadoghq.com>
Vincent Demeester <vincent.demeester@docker.com> <vincent+github@demeester.fr>
Vincent Demeester <vincent.demeester@docker.com> <vincent@demeester.fr>
Vincent Demeester <vincent.demeester@docker.com> <vincent@sbr.pm>
Vishnu Kannan <vishnuk@google.com>
Vitaly Ostrosablin <vostrosablin@virtuozzo.com>
Vitaly Ostrosablin <vostrosablin@virtuozzo.com> <tmp6154@yandex.ru>
Vladimir Rutsky <altsysrq@gmail.com> <iamironbob@gmail.com>
Walter Stanish <walter@pratyeka.org>
Wang Chao <chao.wang@ucloud.cn>
@@ -542,14 +464,8 @@ Wei Wu <wuwei4455@gmail.com> cizixs <cizixs@163.com>
Wenjun Tang <tangwj2@lenovo.com> <dodia@163.com>
Wewang Xiaorenfine <wang.xiaoren@zte.com.cn>
Will Weaver <monkey@buildingbananas.com>
Wing-Kam Wong <wingkwong.code@gmail.com>
Xian Chaobo <xianchaobo@huawei.com>
Xian Chaobo <xianchaobo@huawei.com> <jimmyxian2004@yahoo.com.cn>
Xianglin Gao <xlgao@zju.edu.cn>
Xianlu Bird <xianlubird@gmail.com>
Xiao YongBiao <xyb4638@gmail.com>
Xiaodong Liu <liuxiaodong@loongson.cn>
Xiaodong Zhang <a4012017@sina.com>
Xiaoyu Zhang <zhang.xiaoyu33@zte.com.cn>
Xuecong Liao <satorulogic@gmail.com>
Yamasaki Masahide <masahide.y@gmail.com>
@@ -561,21 +477,15 @@ Yi EungJun <eungjun.yi@navercorp.com> <semtlenori@gmail.com>
Ying Li <ying.li@docker.com>
Ying Li <ying.li@docker.com> <cyli@twistedmatrix.com>
Yong Tang <yong.tang.github@outlook.com> <yongtang@users.noreply.github.com>
Yongxin Li <yxli@alauda.io>
Yosef Fertel <yfertel@gmail.com> <frosforever@users.noreply.github.com>
Yu Changchun <yuchangchun1@huawei.com>
Yu Chengxia <yuchengxia@huawei.com>
Yu Peng <yu.peng36@zte.com.cn>
Yu Peng <yu.peng36@zte.com.cn> <yupeng36@zte.com.cn>
Yue Zhang <zy675793960@yeah.net>
Zachary Jaffee <zjaffee@us.ibm.com> <zij@case.edu>
Zachary Jaffee <zjaffee@us.ibm.com> <zjaffee@apache.org>
ZhangHang <stevezhang2014@gmail.com>
Zhenkun Bi <bi.zhenkun@zte.com.cn>
Zhou Hao <zhouhao@cn.fujitsu.com>
Zhoulin Xie <zhoulin.xie@daocloud.io>
Zhu Kunjia <zhu.kunjia@zte.com.cn>
Ziheng Liu <lzhfromustc@gmail.com>
Zou Yu <zouyu7@huawei.com>
Zuhayr Elahi <zuhayr.elahi@docker.com>
Zuhayr Elahi <zuhayr.elahi@docker.com> <elahi.zuhayr@gmail.com>

AUTHORS: file diff suppressed (too large to display)

@@ -99,7 +99,7 @@ be found.
* Add `--format` option to `docker node ls` [#30424](https://github.com/docker/docker/pull/30424)
* Add `--prune` option to `docker stack deploy` to remove services that are no longer defined in the docker-compose file [#31302](https://github.com/docker/docker/pull/31302)
* Add `PORTS` column for `docker service ls` when using `ingress` mode [#30813](https://github.com/docker/docker/pull/30813)
- Fix unnecessary re-deploying of tasks when environment-variables are used [#32364](https://github.com/docker/docker/pull/32364)
- Fix unnescessary re-deploying of tasks when environment-variables are used [#32364](https://github.com/docker/docker/pull/32364)
- Fix `docker stack deploy` not supporting `endpoint_mode` when deploying from a docker compose file [#32333](https://github.com/docker/docker/pull/32333)
- Proceed with startup if cluster component cannot be created to allow recovering from a broken swarm setup [#31631](https://github.com/docker/docker/pull/31631)


@@ -27,10 +27,10 @@ issue, please bring it to their attention right away!
Please **DO NOT** file a public issue, instead send your report privately to
[security@docker.com](mailto:security@docker.com).
Security reports are greatly appreciated and we will publicly thank you for it,
although we keep your name confidential if you request it. We also like to send
gifts&mdash;if you're into schwag, make sure to let us know. We currently do not
offer a paid security bounty program, but are not ruling it out in the future.
Security reports are greatly appreciated and we will publicly thank you for it.
We also like to send gifts&mdash;if you're into schwag, make sure to let
us know. We currently do not offer a paid security bounty program, but are not
ruling it out in the future.
## Reporting other issues


@@ -1,408 +1,243 @@
# syntax=docker/dockerfile:1
# This file describes the standard way to build Docker, using docker
#
# Usage:
#
# # Use make to build a development environment image and run it in a container.
# # This is slow the first time.
# make BIND_DIR=. shell
#
# The following commands are executed inside the running container.
ARG CROSS="false"
ARG SYSTEMD="false"
ARG GO_VERSION=1.20.10
ARG DEBIAN_FRONTEND=noninteractive
ARG VPNKIT_VERSION=0.5.0
ARG DOCKER_BUILDTAGS="apparmor seccomp"
# # Make a dockerd binary.
# # hack/make.sh binary
#
# # Install dockerd to /usr/local/bin
# # make install
#
# # Run unit tests
# # hack/test/unit
#
# # Run tests e.g. integration, py
# # hack/make.sh binary test-integration test-docker-py
#
# Note: AppArmor used to mess with privileged mode, but this is no longer
# the case. Therefore, you don't have to disable it anymore.
#
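# # Alternatively, a single stage can be built directly without entering the
# # dev shell (a sketch, assuming BuildKit is enabled):
# # DOCKER_BUILDKIT=1 docker build --target binary --output type=local,dest=./out .
#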
ARG BASE_DEBIAN_DISTRO="bullseye"
ARG GOLANG_IMAGE="golang:${GO_VERSION}-${BASE_DEBIAN_DISTRO}"
FROM ${GOLANG_IMAGE} AS base
RUN echo 'Binary::apt::APT::Keep-Downloaded-Packages "true";' > /etc/apt/apt.conf.d/keep-cache
ARG APT_MIRROR
RUN test -n "$APT_MIRROR" && sed -ri "s/(httpredir|deb|security).debian.org/${APT_MIRROR}/g" /etc/apt/sources.list || true
ENV GO111MODULE=off
FROM golang:1.10.8 AS base
# FIXME(vdemeester) this is kept so other scripts depending on it do not fail right away
# Remove this once the other scripts use something else to detect the version
ENV GO_VERSION 1.10.8
# allow replacing httpredir or deb mirror
ARG APT_MIRROR=deb.debian.org
RUN sed -ri "s/(httpredir|deb).debian.org/$APT_MIRROR/g" /etc/apt/sources.list
FROM base AS criu
ARG DEBIAN_FRONTEND
# Install dependency packages specific to criu
RUN --mount=type=cache,sharing=locked,id=moby-criu-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-criu-aptcache,target=/var/cache/apt \
apt-get update && apt-get install -y --no-install-recommends \
libcap-dev \
libnet-dev \
libnl-3-dev \
libprotobuf-c-dev \
libprotobuf-dev \
protobuf-c-compiler \
protobuf-compiler \
python3-protobuf
# Install CRIU for checkpoint/restore support
ARG CRIU_VERSION=3.14
RUN mkdir -p /usr/src/criu \
&& curl -sSL https://github.com/checkpoint-restore/criu/archive/v${CRIU_VERSION}.tar.gz | tar -C /usr/src/criu/ -xz --strip-components=1 \
&& cd /usr/src/criu \
&& make \
&& make PREFIX=/build/ install-criu
ENV CRIU_VERSION 3.6
# Install dependency packages specific to criu
RUN apt-get update && apt-get install -y \
libnet-dev \
libprotobuf-c0-dev \
libprotobuf-dev \
libnl-3-dev \
libcap-dev \
protobuf-compiler \
protobuf-c-compiler \
python-protobuf \
&& mkdir -p /usr/src/criu \
&& curl -sSL https://github.com/checkpoint-restore/criu/archive/v${CRIU_VERSION}.tar.gz | tar -C /usr/src/criu/ -xz --strip-components=1 \
&& cd /usr/src/criu \
&& make \
&& make PREFIX=/build/ install-criu
FROM base AS registry
WORKDIR /go/src/github.com/docker/distribution
# Install two versions of the registry. The first one is a recent version that
# supports both schema 1 and 2 manifests. The second one is an older version that
# only supports schema1 manifests. This allows integration-cli tests to cover
# push/pull with both schema1 and schema2 manifests.
# The old version of the registry is not working on arm64, so installation is
# skipped on that architecture.
# Install two versions of the registry. The first is an older version that
# only supports schema1 manifests. The second is a newer version that supports
# both. This allows integration-cli tests to cover push/pull with both schema1
# and schema2 manifests.
ENV REGISTRY_COMMIT_SCHEMA1 ec87e9b6971d831f0eff752ddb54fb64693e51cd
ENV REGISTRY_COMMIT 47a064d4195a9b56133891bbb13620c3ac83a827
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=cache,target=/go/pkg/mod \
--mount=type=tmpfs,target=/go/src/ \
set -x \
&& git clone https://github.com/docker/distribution.git . \
&& git checkout -q "$REGISTRY_COMMIT" \
&& GOPATH="/go/src/github.com/docker/distribution/Godeps/_workspace:$GOPATH" \
go build -buildmode=pie -o /build/registry-v2 github.com/docker/distribution/cmd/registry \
&& case $(dpkg --print-architecture) in \
amd64|armhf|ppc64*|s390x) \
git checkout -q "$REGISTRY_COMMIT_SCHEMA1"; \
GOPATH="/go/src/github.com/docker/distribution/Godeps/_workspace:$GOPATH"; \
go build -buildmode=pie -o /build/registry-v2-schema1 github.com/docker/distribution/cmd/registry; \
;; \
esac
RUN set -x \
&& export GOPATH="$(mktemp -d)" \
&& git clone https://github.com/docker/distribution.git "$GOPATH/src/github.com/docker/distribution" \
&& (cd "$GOPATH/src/github.com/docker/distribution" && git checkout -q "$REGISTRY_COMMIT") \
&& GOPATH="$GOPATH/src/github.com/docker/distribution/Godeps/_workspace:$GOPATH" \
go build -buildmode=pie -o /build/registry-v2 github.com/docker/distribution/cmd/registry \
&& case $(dpkg --print-architecture) in \
amd64|ppc64*|s390x) \
(cd "$GOPATH/src/github.com/docker/distribution" && git checkout -q "$REGISTRY_COMMIT_SCHEMA1"); \
GOPATH="$GOPATH/src/github.com/docker/distribution/Godeps/_workspace:$GOPATH"; \
go build -buildmode=pie -o /build/registry-v2-schema1 github.com/docker/distribution/cmd/registry; \
;; \
esac \
&& rm -rf "$GOPATH"
FROM base AS docker-py
# Get the "docker-py" source so we can run their integration tests
ENV DOCKER_PY_COMMIT 8b246db271a85d6541dc458838627e89c683e42f
RUN git clone https://github.com/docker/docker-py.git /build \
&& cd /build \
&& git checkout -q $DOCKER_PY_COMMIT
FROM base AS swagger
WORKDIR $GOPATH/src/github.com/go-swagger/go-swagger
# Install go-swagger for validating swagger.yaml
# This is https://github.com/kolyshkin/go-swagger/tree/golang-1.13-fix
# TODO: move to under moby/ or fix upstream go-swagger to work for us.
ENV GO_SWAGGER_COMMIT c56166c036004ba7a3a321e5951ba472b9ae298c
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=cache,target=/go/pkg/mod \
--mount=type=tmpfs,target=/go/src/ \
set -x \
&& git clone https://github.com/kolyshkin/go-swagger.git . \
&& git checkout -q "$GO_SWAGGER_COMMIT" \
&& go build -o /build/swagger github.com/go-swagger/go-swagger/cmd/swagger
ENV GO_SWAGGER_COMMIT c28258affb0b6251755d92489ef685af8d4ff3eb
RUN set -x \
&& export GOPATH="$(mktemp -d)" \
&& git clone https://github.com/go-swagger/go-swagger.git "$GOPATH/src/github.com/go-swagger/go-swagger" \
&& (cd "$GOPATH/src/github.com/go-swagger/go-swagger" && git checkout -q "$GO_SWAGGER_COMMIT") \
&& go build -o /build/swagger github.com/go-swagger/go-swagger/cmd/swagger \
&& rm -rf "$GOPATH"
FROM debian:${BASE_DEBIAN_DISTRO} AS frozen-images
ARG DEBIAN_FRONTEND
RUN --mount=type=cache,sharing=locked,id=moby-frozen-images-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-frozen-images-aptcache,target=/var/cache/apt \
apt-get update && apt-get install -y --no-install-recommends \
ca-certificates \
curl \
jq
FROM base AS frozen-images
RUN apt-get update && apt-get install -y jq ca-certificates --no-install-recommends
# Get useful and necessary Hub images so we can "docker load" locally instead of pulling
COPY contrib/download-frozen-image-v2.sh /
ARG TARGETARCH
RUN /download-frozen-image-v2.sh /build \
busybox:latest@sha256:95cf004f559831017cdf4628aaf1bb30133677be8702a8c5f2994629f637a209 \
busybox:glibc@sha256:1f81263701cddf6402afe9f33fca0266d9fff379e59b1748f33d3072da71ee85 \
debian:bullseye-slim@sha256:dacf278785a4daa9de07596ec739dbc07131e189942772210709c5c0777e8437 \
hello-world:latest@sha256:d58e752213a51785838f9eed2b7a498ffa1cb3aa7f946dda11af39286c3db9a9 \
arm32v7/hello-world:latest@sha256:50b8560ad574c779908da71f7ce370c0a2471c098d44d1c8f6b513c5a55eeeb1
# See also frozenImages in "testutil/environment/protect.go" (which needs to be updated when adding images to this list)
buildpack-deps:jessie@sha256:dd86dced7c9cd2a724e779730f0a53f93b7ef42228d4344b25ce9a42a1486251 \
busybox:latest@sha256:bbc3a03235220b170ba48a157dd097dd1379299370e1ed99ce976df0355d24f0 \
busybox:glibc@sha256:0b55a30394294ab23b9afd58fab94e61a923f5834fba7ddbae7f8e0c11ba85e6 \
debian:jessie@sha256:287a20c5f73087ab406e6b364833e3fb7b3ae63ca0eb3486555dc27ed32c6e60 \
hello-world:latest@sha256:be0cd392e45be79ffeffa6b05338b98ebb16c87b255f48e297ec7f98e123905c
# See also ensureFrozenImagesLinux() in "integration-cli/fixtures_linux_daemon_test.go" (which needs to be updated when adding images to this list)
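# The frozen images are consumed by loading the directory into the test
# daemon; the CI helper scripts do roughly the following (a sketch):
# # tar -cC /docker-frozen-images . | docker load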
FROM base AS cross-false
# Just a little hack so we don't have to install these deps twice, once for runc and once for dockerd
FROM base AS runtime-dev
RUN apt-get update && apt-get install -y \
libapparmor-dev \
libseccomp-dev
FROM --platform=linux/amd64 base AS cross-true
ARG DEBIAN_FRONTEND
RUN dpkg --add-architecture arm64
RUN dpkg --add-architecture armel
RUN dpkg --add-architecture armhf
RUN dpkg --add-architecture ppc64el
RUN dpkg --add-architecture s390x
RUN --mount=type=cache,sharing=locked,id=moby-cross-true-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-cross-true-aptcache,target=/var/cache/apt \
apt-get update && apt-get install -y --no-install-recommends \
crossbuild-essential-arm64 \
crossbuild-essential-armel \
crossbuild-essential-armhf \
crossbuild-essential-ppc64el \
crossbuild-essential-s390x
FROM cross-${CROSS} as dev-base
FROM dev-base AS runtime-dev-cross-false
ARG DEBIAN_FRONTEND
RUN --mount=type=cache,sharing=locked,id=moby-cross-false-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-cross-false-aptcache,target=/var/cache/apt \
apt-get update && apt-get install -y --no-install-recommends \
binutils-mingw-w64 \
g++-mingw-w64-x86-64 \
libapparmor-dev \
libbtrfs-dev \
libdevmapper-dev \
libseccomp-dev \
libsystemd-dev \
libudev-dev
FROM --platform=linux/amd64 runtime-dev-cross-false AS runtime-dev-cross-true
ARG DEBIAN_FRONTEND
# These crossbuild packages rely on gcc-<arch>, but gcc-<arch> fails to install
# on non-amd64 systems, so other architectures cannot crossbuild amd64.
RUN --mount=type=cache,sharing=locked,id=moby-cross-true-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-cross-true-aptcache,target=/var/cache/apt \
apt-get update && apt-get install -y --no-install-recommends \
libapparmor-dev:arm64 \
libapparmor-dev:armel \
libapparmor-dev:armhf \
libapparmor-dev:ppc64el \
libapparmor-dev:s390x \
libseccomp-dev:arm64 \
libseccomp-dev:armel \
libseccomp-dev:armhf \
libseccomp-dev:ppc64el \
libseccomp-dev:s390x
FROM runtime-dev-cross-${CROSS} AS runtime-dev
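# Cross-compiled binaries for the architectures enabled above can then be
# produced via the "cross" stage defined further down (a sketch, assuming
# BuildKit is enabled):
# # DOCKER_BUILDKIT=1 docker build --build-arg CROSS=true --target cross --output type=local,dest=./bundles .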
FROM base AS tomlv
ARG TOMLV_COMMIT
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=cache,target=/go/pkg/mod \
--mount=type=bind,src=hack/dockerfile/install,target=/tmp/install \
PREFIX=/build /tmp/install/install.sh tomlv
ENV INSTALL_BINARY_NAME=tomlv
COPY hack/dockerfile/install/install.sh ./install.sh
COPY hack/dockerfile/install/$INSTALL_BINARY_NAME.installer ./
RUN PREFIX=/build/ ./install.sh $INSTALL_BINARY_NAME
FROM base AS vndr
ARG VNDR_VERSION
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=cache,target=/go/pkg/mod \
--mount=type=bind,src=hack/dockerfile/install,target=/tmp/install \
PREFIX=/build /tmp/install/install.sh vndr
ENV INSTALL_BINARY_NAME=vndr
COPY hack/dockerfile/install/install.sh ./install.sh
COPY hack/dockerfile/install/$INSTALL_BINARY_NAME.installer ./
RUN PREFIX=/build/ ./install.sh $INSTALL_BINARY_NAME
FROM dev-base AS containerd
ARG DEBIAN_FRONTEND
RUN --mount=type=cache,sharing=locked,id=moby-containerd-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-containerd-aptcache,target=/var/cache/apt \
apt-get update && apt-get install -y --no-install-recommends \
libbtrfs-dev
ARG CONTAINERD_VERSION
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=cache,target=/go/pkg/mod \
--mount=type=bind,src=hack/dockerfile/install,target=/tmp/install \
PREFIX=/build /tmp/install/install.sh containerd
FROM base AS containerd
RUN apt-get update && apt-get install -y btrfs-tools
ENV INSTALL_BINARY_NAME=containerd
COPY hack/dockerfile/install/install.sh ./install.sh
COPY hack/dockerfile/install/$INSTALL_BINARY_NAME.installer ./
RUN PREFIX=/build/ ./install.sh $INSTALL_BINARY_NAME
FROM dev-base AS proxy
ARG LIBNETWORK_COMMIT
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=cache,target=/go/pkg/mod \
--mount=type=bind,src=hack/dockerfile/install,target=/tmp/install \
PREFIX=/build /tmp/install/install.sh proxy
FROM base AS proxy
ENV INSTALL_BINARY_NAME=proxy
COPY hack/dockerfile/install/install.sh ./install.sh
COPY hack/dockerfile/install/$INSTALL_BINARY_NAME.installer ./
RUN PREFIX=/build/ ./install.sh $INSTALL_BINARY_NAME
FROM base AS golangci_lint
ARG GOLANGCI_LINT_VERSION
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=cache,target=/go/pkg/mod \
--mount=type=bind,src=hack/dockerfile/install,target=/tmp/install \
PREFIX=/build /tmp/install/install.sh golangci_lint
FROM base AS gometalinter
ENV INSTALL_BINARY_NAME=gometalinter
COPY hack/dockerfile/install/install.sh ./install.sh
COPY hack/dockerfile/install/$INSTALL_BINARY_NAME.installer ./
RUN PREFIX=/build/ ./install.sh $INSTALL_BINARY_NAME
FROM base AS gotestsum
ARG GOTESTSUM_VERSION
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=cache,target=/go/pkg/mod \
--mount=type=bind,src=hack/dockerfile/install,target=/tmp/install \
PREFIX=/build /tmp/install/install.sh gotestsum
FROM base AS shfmt
ARG SHFMT_VERSION
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=cache,target=/go/pkg/mod \
--mount=type=bind,src=hack/dockerfile/install,target=/tmp/install \
PREFIX=/build /tmp/install/install.sh shfmt
FROM dev-base AS dockercli
ARG DOCKERCLI_CHANNEL
ARG DOCKERCLI_VERSION
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=cache,target=/go/pkg/mod \
--mount=type=bind,src=hack/dockerfile/install,target=/tmp/install \
PREFIX=/build /tmp/install/install.sh dockercli
FROM base AS dockercli
ENV INSTALL_BINARY_NAME=dockercli
COPY hack/dockerfile/install/install.sh ./install.sh
COPY hack/dockerfile/install/$INSTALL_BINARY_NAME.installer ./
RUN PREFIX=/build/ ./install.sh $INSTALL_BINARY_NAME
FROM runtime-dev AS runc
ARG RUNC_VERSION
ARG RUNC_BUILDTAGS
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=cache,target=/go/pkg/mod \
--mount=type=bind,src=hack/dockerfile/install,target=/tmp/install \
PREFIX=/build /tmp/install/install.sh runc
ENV INSTALL_BINARY_NAME=runc
COPY hack/dockerfile/install/install.sh ./install.sh
COPY hack/dockerfile/install/$INSTALL_BINARY_NAME.installer ./
RUN PREFIX=/build/ ./install.sh $INSTALL_BINARY_NAME
FROM dev-base AS tini
ARG DEBIAN_FRONTEND
ARG TINI_VERSION
RUN --mount=type=cache,sharing=locked,id=moby-tini-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-tini-aptcache,target=/var/cache/apt \
apt-get update && apt-get install -y --no-install-recommends \
cmake \
vim-common
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=cache,target=/go/pkg/mod \
--mount=type=bind,src=hack/dockerfile/install,target=/tmp/install \
PREFIX=/build /tmp/install/install.sh tini
FROM base AS tini
RUN apt-get update && apt-get install -y cmake vim-common
COPY hack/dockerfile/install/install.sh ./install.sh
ENV INSTALL_BINARY_NAME=tini
COPY hack/dockerfile/install/$INSTALL_BINARY_NAME.installer ./
RUN PREFIX=/build/ ./install.sh $INSTALL_BINARY_NAME
FROM dev-base AS rootlesskit
ARG ROOTLESSKIT_VERSION
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=cache,target=/go/pkg/mod \
--mount=type=bind,src=hack/dockerfile/install,target=/tmp/install \
PREFIX=/build /tmp/install/install.sh rootlesskit
COPY ./contrib/dockerd-rootless.sh /build
COPY ./contrib/dockerd-rootless-setuptool.sh /build
FROM --platform=amd64 djs55/vpnkit:${VPNKIT_VERSION} AS vpnkit-amd64
FROM --platform=arm64 djs55/vpnkit:${VPNKIT_VERSION} AS vpnkit-arm64
FROM scratch AS vpnkit
COPY --from=vpnkit-amd64 /vpnkit /build/vpnkit.x86_64
COPY --from=vpnkit-arm64 /vpnkit /build/vpnkit.aarch64
# TODO: Some of this is only really needed for testing; it would be nice to split this up
FROM runtime-dev AS dev-systemd-false
ARG DEBIAN_FRONTEND
FROM runtime-dev AS dev
RUN groupadd -r docker
RUN useradd --create-home --gid docker unprivilegeduser \
&& mkdir -p /home/unprivilegeduser/.local/share/docker \
&& chown -R unprivilegeduser /home/unprivilegeduser
RUN useradd --create-home --gid docker unprivilegeduser
# Let us use a .bashrc file
RUN ln -sfv /go/src/github.com/docker/docker/.bashrc ~/.bashrc
# Activate bash completion and include Docker's completion if mounted with DOCKER_BASH_COMPLETION_PATH
RUN echo "source /usr/share/bash-completion/bash_completion" >> /etc/bash.bashrc
RUN ln -s /usr/local/completion/bash/docker /etc/bash_completion.d/docker
RUN ldconfig
# Set dev environment as safe git directory to prevent "dubious ownership" errors
# when bind-mounting the source into the dev-container. See https://github.com/moby/moby/pull/44930
RUN git config --global --add safe.directory $GOPATH/src/github.com/docker/docker
# This should only install packages that are specifically needed for the dev environment and nothing else
# Do you really need to add another package here? Can it be done in a different build stage?
RUN --mount=type=cache,sharing=locked,id=moby-dev-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-dev-aptcache,target=/var/cache/apt \
apt-get update && apt-get install -y --no-install-recommends \
apparmor \
bash-completion \
bzip2 \
inetutils-ping \
iproute2 \
iptables \
jq \
libcap2-bin \
libnet1 \
libnl-3-200 \
libprotobuf-c1 \
net-tools \
patch \
pigz \
python3-pip \
python3-setuptools \
python3-wheel \
sudo \
thin-provisioning-tools \
uidmap \
vim \
vim-common \
xfsprogs \
xz-utils \
zip
# Switch to use iptables instead of nftables (to match the CI hosts)
# TODO use some kind of runtime auto-detection instead if/when nftables is supported (https://github.com/moby/moby/issues/26824)
RUN update-alternatives --set iptables /usr/sbin/iptables-legacy || true \
&& update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy || true \
&& update-alternatives --set arptables /usr/sbin/arptables-legacy || true
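# (The active variant can be checked with "update-alternatives --display iptables".)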
RUN pip3 install yamllint==1.26.1
COPY --from=dockercli /build/ /usr/local/cli
RUN apt-get update && apt-get install -y \
apparmor \
aufs-tools \
bash-completion \
btrfs-tools \
iptables \
jq \
libcap2-bin \
libdevmapper-dev \
libudev-dev \
libsystemd-dev \
binutils-mingw-w64 \
g++-mingw-w64-x86-64 \
net-tools \
pigz \
python-backports.ssl-match-hostname \
python-dev \
python-mock \
python-pip \
python-requests \
python-setuptools \
python-websocket \
python-wheel \
thin-provisioning-tools \
vim \
vim-common \
xfsprogs \
zip \
bzip2 \
xz-utils \
--no-install-recommends
COPY --from=swagger /build/swagger* /usr/local/bin/
COPY --from=frozen-images /build/ /docker-frozen-images
COPY --from=swagger /build/ /usr/local/bin/
COPY --from=tomlv /build/ /usr/local/bin/
COPY --from=tini /build/ /usr/local/bin/
COPY --from=registry /build/ /usr/local/bin/
COPY --from=gometalinter /build/ /usr/local/bin/
COPY --from=tomlv /build/ /usr/local/bin/
COPY --from=vndr /build/ /usr/local/bin/
COPY --from=tini /build/ /usr/local/bin/
COPY --from=runc /build/ /usr/local/bin/
COPY --from=containerd /build/ /usr/local/bin/
COPY --from=proxy /build/ /usr/local/bin/
COPY --from=dockercli /build/ /usr/local/cli
COPY --from=registry /build/registry* /usr/local/bin/
COPY --from=criu /build/ /usr/local/
COPY --from=docker-py /build/ /docker-py
# TODO: This is for the docker-py tests, which shouldn't really be needed for
# this image, but currently CI is expecting to run this image. This should be
# split out into a separate image, including all the `python-*` deps installed
# above.
RUN cd /docker-py \
&& pip install docker-pycreds==0.2.1 \
&& pip install yamllint==1.5.0 \
&& pip install -r test-requirements.txt
# Skip the CRIU stage for now, as the opensuse package repository is sometimes
# unstable, and we're currently not using it in CI.
#
# FIXME(thaJeztah): re-enable this stage when https://github.com/moby/moby/issues/38963 is resolved (see https://github.com/moby/moby/pull/38984)
# COPY --from=criu /build/ /usr/local/
COPY --from=vndr /build/ /usr/local/bin/
COPY --from=gotestsum /build/ /usr/local/bin/
COPY --from=golangci_lint /build/ /usr/local/bin/
COPY --from=shfmt /build/ /usr/local/bin/
COPY --from=runc /build/ /usr/local/bin/
COPY --from=containerd /build/ /usr/local/bin/
COPY --from=rootlesskit /build/ /usr/local/bin/
COPY --from=vpnkit /build/ /usr/local/bin/
COPY --from=proxy /build/ /usr/local/bin/
ENV PATH=/usr/local/cli:$PATH
ARG DOCKER_BUILDTAGS
ENV DOCKER_BUILDTAGS="${DOCKER_BUILDTAGS}"
ENV DOCKER_BUILDTAGS apparmor seccomp selinux
# Options for hack/validate/gometalinter
ENV GOMETALINTER_OPTS="--deadline=2m"
WORKDIR /go/src/github.com/docker/docker
VOLUME /var/lib/docker
VOLUME /home/unprivilegeduser/.local/share/docker
# Wrap all commands in the "docker-in-docker" script to allow nested containers
ENTRYPOINT ["hack/dind"]
FROM dev-systemd-false AS dev-systemd-true
RUN --mount=type=cache,sharing=locked,id=moby-dev-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-dev-aptcache,target=/var/cache/apt \
apt-get update && apt-get install -y --no-install-recommends \
dbus \
dbus-user-session \
systemd \
systemd-sysv
RUN mkdir -p hack \
&& curl -o hack/dind-systemd https://raw.githubusercontent.com/AkihiroSuda/containerized-systemd/b70bac0daeea120456764248164c21684ade7d0d/docker-entrypoint.sh \
&& chmod +x hack/dind-systemd
ENTRYPOINT ["hack/dind-systemd"]
FROM dev-systemd-${SYSTEMD} AS dev
FROM runtime-dev AS binary-base
ARG DOCKER_GITCOMMIT=HEAD
ENV DOCKER_GITCOMMIT=${DOCKER_GITCOMMIT}
ARG VERSION
ENV VERSION=${VERSION}
ARG PLATFORM
ENV PLATFORM=${PLATFORM}
ARG PRODUCT
ENV PRODUCT=${PRODUCT}
ARG DEFAULT_PRODUCT_LICENSE
ENV DEFAULT_PRODUCT_LICENSE=${DEFAULT_PRODUCT_LICENSE}
ARG DOCKER_BUILDTAGS
ENV DOCKER_BUILDTAGS="${DOCKER_BUILDTAGS}"
ENV PREFIX=/build
# TODO: This is here because hack/make.sh binary copies these extra binaries
# from $PATH into the bundles dir.
# It would be nice to handle this in a different way.
COPY --from=tini /build/ /usr/local/bin/
COPY --from=runc /build/ /usr/local/bin/
COPY --from=containerd /build/ /usr/local/bin/
COPY --from=rootlesskit /build/ /usr/local/bin/
COPY --from=proxy /build/ /usr/local/bin/
COPY --from=vpnkit /build/ /usr/local/bin/
WORKDIR /go/src/github.com/docker/docker
FROM binary-base AS build-binary
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=bind,target=/go/src/github.com/docker/docker \
hack/make.sh binary
FROM binary-base AS build-dynbinary
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=bind,target=/go/src/github.com/docker/docker \
hack/make.sh dynbinary
FROM binary-base AS build-cross
ARG DOCKER_CROSSPLATFORMS
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=bind,target=/go/src/github.com/docker/docker \
--mount=type=tmpfs,target=/go/src/github.com/docker/docker/autogen \
hack/make.sh cross
FROM scratch AS binary
COPY --from=build-binary /build/bundles/ /
FROM scratch AS dynbinary
COPY --from=build-dynbinary /build/bundles/ /
FROM scratch AS cross
COPY --from=build-cross /build/bundles/ /
FROM dev AS final
# Upload docker source
COPY . /go/src/github.com/docker/docker


@@ -1,67 +1,49 @@
ARG GO_VERSION=1.20.10
## Step 1: Build tests
FROM golang:1.10.8-alpine3.7 as builder
FROM golang:${GO_VERSION}-alpine AS base
ENV GO111MODULE=off
RUN apk --no-cache add \
RUN apk add --update \
bash \
btrfs-progs-dev \
build-base \
curl \
lvm2-dev \
jq
jq \
&& rm -rf /var/cache/apk/*
RUN mkdir -p /build/
RUN mkdir -p /go/src/github.com/docker/docker/
WORKDIR /go/src/github.com/docker/docker/
FROM base AS frozen-images
# Get useful and necessary Hub images so we can "docker load" locally instead of pulling
COPY contrib/download-frozen-image-v2.sh /
RUN /download-frozen-image-v2.sh /build \
busybox:latest@sha256:95cf004f559831017cdf4628aaf1bb30133677be8702a8c5f2994629f637a209 \
busybox:glibc@sha256:1f81263701cddf6402afe9f33fca0266d9fff379e59b1748f33d3072da71ee85 \
debian:bullseye-slim@sha256:dacf278785a4daa9de07596ec739dbc07131e189942772210709c5c0777e8437 \
hello-world:latest@sha256:d58e752213a51785838f9eed2b7a498ffa1cb3aa7f946dda11af39286c3db9a9
# See also frozenImages in "testutil/environment/protect.go" (which needs to be updated when adding images to this list)
# Generate frozen images
COPY contrib/download-frozen-image-v2.sh contrib/download-frozen-image-v2.sh
RUN contrib/download-frozen-image-v2.sh /output/docker-frozen-images \
buildpack-deps:jessie@sha256:dd86dced7c9cd2a724e779730f0a53f93b7ef42228d4344b25ce9a42a1486251 \
busybox:latest@sha256:bbc3a03235220b170ba48a157dd097dd1379299370e1ed99ce976df0355d24f0 \
busybox:glibc@sha256:0b55a30394294ab23b9afd58fab94e61a923f5834fba7ddbae7f8e0c11ba85e6 \
debian:jessie@sha256:287a20c5f73087ab406e6b364833e3fb7b3ae63ca0eb3486555dc27ed32c6e60 \
hello-world:latest@sha256:be0cd392e45be79ffeffa6b05338b98ebb16c87b255f48e297ec7f98e123905c
FROM base AS dockercli
ENV INSTALL_BINARY_NAME=dockercli
COPY hack/dockerfile/install/install.sh ./install.sh
COPY hack/dockerfile/install/$INSTALL_BINARY_NAME.installer ./
RUN PREFIX=/build ./install.sh $INSTALL_BINARY_NAME
# Build DockerSuite.TestBuild* dependency
FROM base AS contrib
COPY contrib/syscall-test /build/syscall-test
COPY contrib/httpserver/Dockerfile /build/httpserver/Dockerfile
COPY contrib/httpserver contrib/httpserver
RUN CGO_ENABLED=0 go build -buildmode=pie -o /build/httpserver/httpserver github.com/docker/docker/contrib/httpserver
# Build the integration tests and copy the resulting binaries to /build/tests
FROM base AS builder
# Install dockercli
# Please edit hack/dockerfile/install/<name>.installer to update them.
COPY hack/dockerfile/install hack/dockerfile/install
RUN ./hack/dockerfile/install/install.sh dockercli
# Set tag and add sources
COPY . .
# Copy test sources so that tests that use assert can print errors
RUN mkdir -p /build${PWD} && find integration integration-cli -name \*_test.go -exec cp --parents '{}' /build${PWD} \;
# Build and install test binaries
ARG DOCKER_GITCOMMIT=undefined
ARG DOCKER_GITCOMMIT
ENV DOCKER_GITCOMMIT=${DOCKER_GITCOMMIT:-undefined}
ADD . .
# Build DockerSuite.TestBuild* dependency
RUN CGO_ENABLED=0 go build -buildmode=pie -o /output/httpserver github.com/docker/docker/contrib/httpserver
# Build the integration tests and copy the resulting binaries to /output/tests
RUN hack/make.sh build-integration-test-binary
RUN mkdir -p /build/tests && find . -name test.main -exec cp --parents '{}' /build/tests \;
RUN mkdir -p /output/tests && find . -name test.main -exec cp --parents '{}' /output/tests \;
## Generate testing image
FROM alpine:3.10 as runner
ENV DOCKER_REMOTE_DAEMON=1
ENV DOCKER_INTEGRATION_DAEMON_DEST=/
ENTRYPOINT ["/scripts/run.sh"]
# Add an unprivileged user to be used for tests which need it
RUN addgroup docker && adduser -D -G docker unprivilegeduser -s /bin/ash
## Step 2: Generate testing image
FROM alpine:3.7 as runner
# GNU tar is used for generating the emptyfs image
RUN apk --no-cache add \
RUN apk add --update \
bash \
ca-certificates \
g++ \
@@ -69,16 +51,24 @@ RUN apk --no-cache add \
iptables \
pigz \
tar \
xz
xz \
&& rm -rf /var/cache/apk/*
COPY hack/test/e2e-run.sh /scripts/run.sh
COPY hack/make/.ensure-emptyfs /scripts/ensure-emptyfs.sh
# Add an unprivileged user to be used for tests which need it
RUN addgroup docker && adduser -D -G docker unprivilegeduser -s /bin/ash
COPY integration/testdata /tests/integration/testdata
COPY integration/build/testdata /tests/integration/build/testdata
COPY integration-cli/fixtures /tests/integration-cli/fixtures
COPY contrib/httpserver/Dockerfile /tests/contrib/httpserver/Dockerfile
COPY contrib/syscall-test /tests/contrib/syscall-test
COPY integration-cli/fixtures /tests/integration-cli/fixtures
COPY --from=frozen-images /build/ /docker-frozen-images
COPY --from=dockercli /build/ /usr/bin/
COPY --from=contrib /build/ /tests/contrib/
COPY --from=builder /build/ /
COPY hack/test/e2e-run.sh /scripts/run.sh
COPY hack/make/.ensure-emptyfs /scripts/ensure-emptyfs.sh
COPY --from=builder /output/docker-frozen-images /docker-frozen-images
COPY --from=builder /output/httpserver /tests/contrib/httpserver/httpserver
COPY --from=builder /output/tests /tests
COPY --from=builder /usr/local/bin/docker /usr/bin/docker
ENV DOCKER_REMOTE_DAEMON=1 DOCKER_INTEGRATION_DAEMON_DEST=/
ENTRYPOINT ["/scripts/run.sh"]


@@ -5,10 +5,7 @@
# This represents the bare minimum required to build and test Docker.
ARG GO_VERSION=1.20.10
FROM golang:${GO_VERSION}-buster
ENV GO111MODULE=off
FROM debian:stretch
# allow replacing httpredir or deb mirror
ARG APT_MIRROR=deb.debian.org
@@ -18,13 +15,13 @@ RUN sed -ri "s/(httpredir|deb).debian.org/$APT_MIRROR/g" /etc/apt/sources.list
# https://github.com/docker/docker/blob/master/project/PACKAGERS.md#build-dependencies
# https://github.com/docker/docker/blob/master/project/PACKAGERS.md#runtime-dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
btrfs-tools \
build-essential \
curl \
cmake \
gcc \
git \
libapparmor-dev \
libbtrfs-dev \
libdevmapper-dev \
libseccomp-dev \
ca-certificates \
@@ -40,6 +37,18 @@ RUN apt-get update && apt-get install -y --no-install-recommends \
vim-common \
&& rm -rf /var/lib/apt/lists/*
# Install Go
# IMPORTANT: If the version of Go is updated, the Windows to Linux CI machines
# will need updating, to avoid errors. Ping #docker-maintainers on IRC
# with a heads-up.
# IMPORTANT: When updating this please note that stdlib archive/tar pkg is vendored
ENV GO_VERSION 1.10.8
RUN curl -fsSL "https://golang.org/dl/go${GO_VERSION}.linux-amd64.tar.gz" \
| tar -xzC /usr/local
ENV PATH /go/bin:/usr/local/go/bin:$PATH
ENV GOPATH /go
ENV CGO_LDFLAGS -L/lib
# Install runc, containerd, tini and docker-proxy
# Please edit hack/dockerfile/install/<name>.installer to update them.
COPY hack/dockerfile/install hack/dockerfile/install
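# Build this image from the repository root with, for example:
# # docker build -f Dockerfile.simple -t docker-simple .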


@@ -45,8 +45,8 @@
#
# 1. Clone the sources from github.com:
#
# >> git clone https://github.com/docker/docker.git C:\gopath\src\github.com\docker\docker
# >> Cloning into 'C:\gopath\src\github.com\docker\docker'...
# >> git clone https://github.com/docker/docker.git C:\go\src\github.com\docker\docker
# >> Cloning into 'C:\go\src\github.com\docker\docker'...
# >> remote: Counting objects: 186216, done.
# >> remote: Compressing objects: 100% (21/21), done.
# >> remote: Total 186216 (delta 5), reused 0 (delta 0), pack-reused 186195
@@ -59,7 +59,7 @@
#
# 2. Change directory to the cloned docker sources:
#
# >> cd C:\gopath\src\github.com\docker\docker
# >> cd C:\go\src\github.com\docker\docker
#
#
# 3. Build a docker image with the components required to build the docker binaries from source
@@ -79,8 +79,8 @@
# 5. Copy the binaries out of the container, replacing HostPath with an appropriate destination
# folder on the host system where you want the binaries to be located.
#
# >> docker cp binaries:C:\gopath\src\github.com\docker\docker\bundles\docker.exe C:\HostPath\docker.exe
# >> docker cp binaries:C:\gopath\src\github.com\docker\docker\bundles\dockerd.exe C:\HostPath\dockerd.exe
# >> docker cp binaries:C:\go\src\github.com\docker\docker\bundles\docker.exe C:\HostPath\docker.exe
# >> docker cp binaries:C:\go\src\github.com\docker\docker\bundles\dockerd.exe C:\HostPath\dockerd.exe
#
#
# 6. (Optional) Remove the interim container holding the built executable binaries:
@@ -147,36 +147,24 @@
# The docker integration tests do not currently run in a container on Windows, predominantly
# due to Windows not supporting privileged mode, so anything using a volume would fail.
# They (along with the rest of the docker CI suite) can be run using
# https://github.com/kevpar/docker-w2wCIScripts/blob/master/runCI/Invoke-DockerCI.ps1.
# https://github.com/jhowardmsft/docker-w2wCIScripts/blob/master/runCI/Invoke-DockerCI.ps1.
#
# -----------------------------------------------------------------------------------------
# The number of build steps below is explicitly minimised to improve performance.
# Extremely important - do not change the following line to reference a "specific" image,
# such as `mcr.microsoft.com/windows/servercore:ltsc2019`. If using this Dockerfile in process
# isolated containers, the kernel of the host must match the container image, and hence
# would fail between Windows Server 2016 (aka RS1) and Windows Server 2019 (aka RS5).
# It is expected that the image `microsoft/windowsservercore:latest` is present, and matches
# the host's kernel version before doing a build.
FROM microsoft/windowsservercore
# Use PowerShell as the default shell
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';"]
ARG GO_VERSION=1.20.10
ARG GOTESTSUM_VERSION=v1.8.2
# Environment variable notes:
# - GO_VERSION must be consistent with 'Dockerfile' used by Linux.
# - FROM_DOCKERFILE is used for detection of building within a container.
ENV GO_VERSION=${GO_VERSION} `
ENV GO_VERSION=1.10.8 `
GIT_VERSION=2.11.1 `
GOPATH=C:\gopath `
GO111MODULE=off `
FROM_DOCKERFILE=1 `
GOTESTSUM_VERSION=${GOTESTSUM_VERSION}
GOPATH=C:\go `
FROM_DOCKERFILE=1
RUN `
Function Test-Nano() { `
@@ -205,7 +193,6 @@ RUN `
Throw ("Failed to download " + $source) `
}`
} else { `
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12; `
$webClient = New-Object System.Net.WebClient; `
$webClient.DownloadFile($source, $target); `
} `
@@ -218,17 +205,16 @@ RUN `
Download-File $location C:\gitsetup.zip; `
`
Write-Host INFO: Downloading go...; `
$dlGoVersion=$Env:GO_VERSION -replace '\.0$',''; `
Download-File "https://golang.org/dl/go${dlGoVersion}.windows-amd64.zip" C:\go.zip; `
Download-File $('https://golang.org/dl/go'+$Env:GO_VERSION+'.windows-amd64.zip') C:\go.zip; `
`
Write-Host INFO: Downloading compiler 1 of 3...; `
Download-File https://raw.githubusercontent.com/moby/docker-tdmgcc/master/gcc.zip C:\gcc.zip; `
Download-File https://raw.githubusercontent.com/jhowardmsft/docker-tdmgcc/master/gcc.zip C:\gcc.zip; `
`
Write-Host INFO: Downloading compiler 2 of 3...; `
Download-File https://raw.githubusercontent.com/moby/docker-tdmgcc/master/runtime.zip C:\runtime.zip; `
Download-File https://raw.githubusercontent.com/jhowardmsft/docker-tdmgcc/master/runtime.zip C:\runtime.zip; `
`
Write-Host INFO: Downloading compiler 3 of 3...; `
Download-File https://raw.githubusercontent.com/moby/docker-tdmgcc/master/binutils.zip C:\binutils.zip; `
Download-File https://raw.githubusercontent.com/jhowardmsft/docker-tdmgcc/master/binutils.zip C:\binutils.zip; `
`
Write-Host INFO: Extracting git...; `
Expand-Archive C:\gitsetup.zip C:\git-tmp; `
@@ -252,35 +238,19 @@ RUN `
Remove-Item C:\binutils.zip; `
Remove-Item C:\gitsetup.zip; `
`
# Ensure all directories exist that we will require below....
$srcDir = """$Env:GOPATH`\src\github.com\docker\docker\bundles"""; `
Write-Host INFO: Ensuring existence of directory $srcDir...; `
New-Item -Force -ItemType Directory -Path $srcDir | Out-Null; `
Write-Host INFO: Creating source directory...; `
New-Item -ItemType Directory -Path C:\go\src\github.com\docker\docker | Out-Null; `
`
Write-Host INFO: Configuring git core.autocrlf...; `
C:\git\cmd\git config --global core.autocrlf true;
RUN `
Function Install-GoTestSum() { `
$Env:GO111MODULE = 'on'; `
$tmpGobin = "${Env:GOBIN_TMP}"; `
$Env:GOBIN = """${Env:GOPATH}`\bin"""; `
Write-Host "INFO: Installing gotestsum version $Env:GOTESTSUM_VERSION in $Env:GOBIN"; `
&go install "gotest.tools/gotestsum@${Env:GOTESTSUM_VERSION}"; `
$Env:GOBIN = "${tmpGobin}"; `
$Env:GO111MODULE = 'off'; `
if ($LASTEXITCODE -ne 0) { `
Throw '"gotestsum install failed..."'; `
} `
} `
C:\git\cmd\git config --global core.autocrlf true; `
`
Install-GoTestSum
Write-Host INFO: Completed
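# (gotestsum wraps "go test" for the CI runs; e.g. "gotestsum -- ./..." runs the
# tests with summarised output.)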
# Make PowerShell the default entrypoint
ENTRYPOINT ["powershell.exe"]
# Set the working directory to the location of the sources
WORKDIR ${GOPATH}\src\github.com\docker\docker
WORKDIR C:\go\src\github.com\docker\docker
# Copy the sources into the container
COPY . .

Jenkinsfile

@@ -1,713 +0,0 @@
#!groovy
pipeline {
agent none
options {
buildDiscarder(logRotator(daysToKeepStr: '30'))
timeout(time: 2, unit: 'HOURS')
timestamps()
}
parameters {
booleanParam(name: 'unit_validate', defaultValue: true, description: 'amd64 (x86_64) unit tests and vendor check')
booleanParam(name: 'validate_force', defaultValue: false, description: 'force validation steps to be run, even if no changes were detected')
booleanParam(name: 'amd64', defaultValue: true, description: 'amd64 (x86_64) Build/Test')
booleanParam(name: 'rootless', defaultValue: true, description: 'amd64 (x86_64) Build/Test (Rootless mode)')
booleanParam(name: 'cgroup2', defaultValue: true, description: 'amd64 (x86_64) Build/Test (cgroup v2)')
booleanParam(name: 'arm64', defaultValue: true, description: 'ARM (arm64) Build/Test')
booleanParam(name: 'windowsRS5', defaultValue: true, description: 'Windows 2019 (RS5) Build/Test')
booleanParam(name: 'dco', defaultValue: true, description: 'Run the DCO check')
}
environment {
DOCKER_BUILDKIT = '1'
DOCKER_EXPERIMENTAL = '1'
DOCKER_GRAPHDRIVER = 'overlay2'
CHECK_CONFIG_COMMIT = '78405559cfe5987174aa2cb6463b9b2c1b917255'
TESTDEBUG = '0'
TIMEOUT = '120m'
}
stages {
stage('pr-hack') {
when { changeRequest() }
steps {
script {
echo "Workaround for PR auto-cancel feature. Borrowed from https://issues.jenkins-ci.org/browse/JENKINS-43353"
def buildNumber = env.BUILD_NUMBER as int
if (buildNumber > 1) milestone(buildNumber - 1)
milestone(buildNumber)
}
}
}
stage('DCO-check') {
when {
beforeAgent true
expression { params.dco }
}
agent { label 'arm64 && ubuntu-2004' }
steps {
sh '''
docker run --rm \
-v "$WORKSPACE:/workspace" \
-e VALIDATE_REPO=${GIT_URL} \
-e VALIDATE_BRANCH=${CHANGE_TARGET} \
alpine sh -c 'apk add --no-cache -q bash git openssh-client && git config --system --add safe.directory /workspace && cd /workspace && hack/validate/dco'
'''
}
}
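// The DCO check verifies that every commit carries a Signed-off-by line,
// which contributors add with "git commit -s".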
stage('Build') {
parallel {
stage('unit-validate') {
when {
beforeAgent true
expression { params.unit_validate }
}
agent { label 'amd64 && ubuntu-2004 && overlay2' }
environment {
// On master ("non-pull-request"), force running some validation checks (vendor, swagger),
// even if no files were changed. This allows catching problems caused by pull-requests
// that were merged out-of-sequence.
TEST_FORCE_VALIDATE = sh returnStdout: true, script: 'if [ "${BRANCH_NAME%%-*}" != "PR" ] || [ "${CHANGE_TARGET:-master}" != "master" ] || [ "${validate_force}" = "true" ]; then echo "1"; fi'
}
stages {
stage("Print info") {
steps {
sh 'docker version'
sh 'docker info'
sh '''
echo "check-config.sh version: ${CHECK_CONFIG_COMMIT}"
curl -fsSL -o ${WORKSPACE}/check-config.sh "https://raw.githubusercontent.com/moby/moby/${CHECK_CONFIG_COMMIT}/contrib/check-config.sh" \
&& bash ${WORKSPACE}/check-config.sh || true
'''
}
}
stage("Build dev image") {
steps {
sh 'docker build --force-rm --build-arg APT_MIRROR -t docker:${GIT_COMMIT} .'
}
}
stage("Validate") {
steps {
sh '''
docker run --rm -t --privileged \
-v "$WORKSPACE/bundles:/go/src/github.com/docker/docker/bundles" \
-v "$WORKSPACE/.git:/go/src/github.com/docker/docker/.git" \
--name docker-pr$BUILD_NUMBER \
-e DOCKER_EXPERIMENTAL \
-e DOCKER_GITCOMMIT=${GIT_COMMIT} \
-e DOCKER_GRAPHDRIVER \
-e TEST_FORCE_VALIDATE \
-e VALIDATE_REPO=${GIT_URL} \
-e VALIDATE_BRANCH=${CHANGE_TARGET} \
docker:${GIT_COMMIT} \
hack/validate/default
'''
}
}
stage("Docker-py") {
steps {
sh '''
docker run --rm -t --privileged \
-v "$WORKSPACE/bundles:/go/src/github.com/docker/docker/bundles" \
--name docker-pr$BUILD_NUMBER \
-e DOCKER_EXPERIMENTAL \
-e DOCKER_GITCOMMIT=${GIT_COMMIT} \
-e DOCKER_GRAPHDRIVER \
-e VALIDATE_REPO=${GIT_URL} \
-e VALIDATE_BRANCH=${CHANGE_TARGET} \
docker:${GIT_COMMIT} \
hack/make.sh \
dynbinary-daemon \
test-docker-py
'''
}
post {
always {
junit testResults: 'bundles/test-docker-py/junit-report.xml', allowEmptyResults: true
sh '''
echo "Ensuring container killed."
docker rm -vf docker-pr$BUILD_NUMBER || true
'''
sh '''
echo 'Chowning /workspace to jenkins user'
docker run --rm -v "$WORKSPACE:/workspace" busybox chown -R "$(id -u):$(id -g)" /workspace
'''
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE', message: 'Failed to create bundles.tar.gz') {
sh '''
bundleName=docker-py
echo "Creating ${bundleName}-bundles.tar.gz"
tar -czf ${bundleName}-bundles.tar.gz bundles/test-docker-py/*.xml bundles/test-docker-py/*.log
'''
archiveArtifacts artifacts: '*-bundles.tar.gz', allowEmptyArchive: true
}
}
}
}
stage("Static") {
steps {
sh '''
docker run --rm -t --privileged \
-v "$WORKSPACE/bundles:/go/src/github.com/docker/docker/bundles" \
--name docker-pr$BUILD_NUMBER \
-e DOCKER_GITCOMMIT=${GIT_COMMIT} \
-e DOCKER_GRAPHDRIVER \
docker:${GIT_COMMIT} \
hack/make.sh binary-daemon
'''
}
}
stage("Cross") {
steps {
sh '''
docker run --rm -t --privileged \
-v "$WORKSPACE/bundles:/go/src/github.com/docker/docker/bundles" \
--name docker-pr$BUILD_NUMBER \
-e DOCKER_GITCOMMIT=${GIT_COMMIT} \
-e DOCKER_GRAPHDRIVER \
docker:${GIT_COMMIT} \
hack/make.sh cross
'''
}
}
// needs to be the last stage that calls make.sh for the junit report to work
stage("Unit tests") {
steps {
sh '''
docker run --rm -t --privileged \
-v "$WORKSPACE/bundles:/go/src/github.com/docker/docker/bundles" \
--name docker-pr$BUILD_NUMBER \
-e DOCKER_EXPERIMENTAL \
-e DOCKER_GITCOMMIT=${GIT_COMMIT} \
-e DOCKER_GRAPHDRIVER \
-e VALIDATE_REPO=${GIT_URL} \
-e VALIDATE_BRANCH=${CHANGE_TARGET} \
docker:${GIT_COMMIT} \
hack/test/unit
'''
}
post {
always {
junit testResults: 'bundles/junit-report.xml', allowEmptyResults: true
}
}
}
stage("Validate vendor") {
steps {
sh '''
docker run --rm -t --privileged \
-v "$WORKSPACE/.git:/go/src/github.com/docker/docker/.git" \
--name docker-pr$BUILD_NUMBER \
-e DOCKER_EXPERIMENTAL \
-e DOCKER_GITCOMMIT=${GIT_COMMIT} \
-e DOCKER_GRAPHDRIVER \
-e TEST_FORCE_VALIDATE \
-e VALIDATE_REPO=${GIT_URL} \
-e VALIDATE_BRANCH=${CHANGE_TARGET} \
docker:${GIT_COMMIT} \
hack/validate/vendor
'''
}
}
}
post {
always {
sh '''
echo 'Ensuring container killed.'
docker rm -vf docker-pr$BUILD_NUMBER || true
'''
sh '''
echo 'Chowning /workspace to jenkins user'
docker run --rm -v "$WORKSPACE:/workspace" busybox chown -R "$(id -u):$(id -g)" /workspace
'''
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE', message: 'Failed to create bundles.tar.gz') {
sh '''
bundleName=unit
echo "Creating ${bundleName}-bundles.tar.gz"
tar -czvf ${bundleName}-bundles.tar.gz bundles/junit-report.xml bundles/go-test-report.json bundles/profile.out
'''
archiveArtifacts artifacts: '*-bundles.tar.gz', allowEmptyArchive: true
}
}
cleanup {
sh 'make clean'
deleteDir()
}
}
}
stage('amd64') {
when {
beforeAgent true
expression { params.amd64 }
}
agent { label 'amd64 && ubuntu-2004 && overlay2' }
stages {
stage("Print info") {
steps {
sh 'docker version'
sh 'docker info'
sh '''
echo "check-config.sh version: ${CHECK_CONFIG_COMMIT}"
curl -fsSL -o ${WORKSPACE}/check-config.sh "https://raw.githubusercontent.com/moby/moby/${CHECK_CONFIG_COMMIT}/contrib/check-config.sh" \
&& bash ${WORKSPACE}/check-config.sh || true
'''
}
}
stage("Build dev image") {
steps {
sh '''
# todo: include ip_vs in base image
sudo modprobe ip_vs
docker build --force-rm --build-arg APT_MIRROR -t docker:${GIT_COMMIT} .
'''
}
}
stage("Run tests") {
steps {
sh '''#!/bin/bash
# bash is needed so 'jobs -p' works properly
# it also accepts setting inline envvars for functions without explicitly exporting
set -x
run_tests() {
[ -n "$TESTDEBUG" ] && rm= || rm=--rm;
docker run $rm -t --privileged \
-v "$WORKSPACE/bundles/${TEST_INTEGRATION_DEST}:/go/src/github.com/docker/docker/bundles" \
-v "$WORKSPACE/bundles/dynbinary-daemon:/go/src/github.com/docker/docker/bundles/dynbinary-daemon" \
-v "$WORKSPACE/.git:/go/src/github.com/docker/docker/.git" \
--name "$CONTAINER_NAME" \
-e KEEPBUNDLE=1 \
-e TESTDEBUG \
-e TESTFLAGS \
-e TEST_SKIP_INTEGRATION \
-e TEST_SKIP_INTEGRATION_CLI \
-e DOCKER_GITCOMMIT=${GIT_COMMIT} \
-e DOCKER_GRAPHDRIVER \
-e TIMEOUT \
-e VALIDATE_REPO=${GIT_URL} \
-e VALIDATE_BRANCH=${CHANGE_TARGET} \
docker:${GIT_COMMIT} \
hack/make.sh \
"$1" \
test-integration
}
trap "exit" INT TERM
trap 'pids=$(jobs -p); echo "Remaining pids to kill: [$pids]"; [ -z "$pids" ] || kill $pids' EXIT
CONTAINER_NAME=docker-pr$BUILD_NUMBER
docker run --rm -t --privileged \
-v "$WORKSPACE/bundles:/go/src/github.com/docker/docker/bundles" \
-v "$WORKSPACE/.git:/go/src/github.com/docker/docker/.git" \
--name ${CONTAINER_NAME}-build \
-e DOCKER_EXPERIMENTAL \
-e DOCKER_GITCOMMIT=${GIT_COMMIT} \
-e DOCKER_GRAPHDRIVER \
docker:${GIT_COMMIT} \
hack/make.sh \
dynbinary-daemon
# flaky + integration
TEST_INTEGRATION_DEST=1 CONTAINER_NAME=${CONTAINER_NAME}-1 TEST_SKIP_INTEGRATION_CLI=1 run_tests test-integration-flaky &
# integration-cli first set
TEST_INTEGRATION_DEST=2 CONTAINER_NAME=${CONTAINER_NAME}-2 TEST_SKIP_INTEGRATION=1 TESTFLAGS="-test.run Test(DockerSuite|DockerNetworkSuite|DockerHubPullSuite|DockerRegistrySuite|DockerSchema1RegistrySuite|DockerRegistryAuthTokenSuite|DockerRegistryAuthHtpasswdSuite)/" run_tests &
# integration-cli second set
TEST_INTEGRATION_DEST=3 CONTAINER_NAME=${CONTAINER_NAME}-3 TEST_SKIP_INTEGRATION=1 TESTFLAGS="-test.run Test(DockerSwarmSuite|DockerDaemonSuite|DockerExternalVolumeSuite)/" run_tests &
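# wait for all three background test runs; c keeps the last non-zero exit code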
c=0
for job in $(jobs -p); do
wait ${job} || c=$?
done
exit $c
'''
}
post {
always {
junit testResults: 'bundles/**/*-report.xml', allowEmptyResults: true
}
}
}
}
post {
always {
sh '''
echo "Ensuring container killed."
cids=$(docker ps -aq -f name=docker-pr${BUILD_NUMBER}-*)
[ -n "$cids" ] && docker rm -vf $cids || true
'''
sh '''
echo "Chowning /workspace to jenkins user"
docker run --rm -v "$WORKSPACE:/workspace" busybox chown -R "$(id -u):$(id -g)" /workspace
'''
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE', message: 'Failed to create bundles.tar.gz') {
sh '''
bundleName=amd64
echo "Creating ${bundleName}-bundles.tar.gz"
# exclude overlay2 directories
find bundles -path '*/root/*overlay2' -prune -o -type f \\( -name '*-report.json' -o -name '*.log' -o -name '*.prof' -o -name '*-report.xml' \\) -print | xargs tar -czf ${bundleName}-bundles.tar.gz
'''
archiveArtifacts artifacts: '*-bundles.tar.gz', allowEmptyArchive: true
}
}
cleanup {
sh 'make clean'
deleteDir()
}
}
}
stage('rootless') {
when {
beforeAgent true
expression { params.rootless }
}
agent { label 'amd64 && ubuntu-2004 && overlay2' }
stages {
stage("Print info") {
steps {
sh 'docker version'
sh 'docker info'
sh '''
echo "check-config.sh version: ${CHECK_CONFIG_COMMIT}"
curl -fsSL -o ${WORKSPACE}/check-config.sh "https://raw.githubusercontent.com/moby/moby/${CHECK_CONFIG_COMMIT}/contrib/check-config.sh" \
&& bash ${WORKSPACE}/check-config.sh || true
'''
}
}
stage("Build dev image") {
steps {
sh '''
docker build --force-rm --build-arg APT_MIRROR -t docker:${GIT_COMMIT} .
'''
}
}
stage("Integration tests") {
environment {
DOCKER_ROOTLESS = '1'
TEST_SKIP_INTEGRATION_CLI = '1'
}
steps {
sh '''
docker run --rm -t --privileged \
-v "$WORKSPACE/bundles:/go/src/github.com/docker/docker/bundles" \
--name docker-pr$BUILD_NUMBER \
-e DOCKER_GITCOMMIT=${GIT_COMMIT} \
-e DOCKER_GRAPHDRIVER \
-e DOCKER_EXPERIMENTAL \
-e DOCKER_ROOTLESS \
-e TEST_SKIP_INTEGRATION_CLI \
-e TIMEOUT \
-e VALIDATE_REPO=${GIT_URL} \
-e VALIDATE_BRANCH=${CHANGE_TARGET} \
docker:${GIT_COMMIT} \
hack/make.sh \
dynbinary \
test-integration
'''
}
post {
always {
junit testResults: 'bundles/**/*-report.xml', allowEmptyResults: true
}
}
}
}
post {
always {
sh '''
echo "Ensuring container killed."
docker rm -vf docker-pr$BUILD_NUMBER || true
'''
sh '''
echo "Chowning /workspace to jenkins user"
docker run --rm -v "$WORKSPACE:/workspace" busybox chown -R "$(id -u):$(id -g)" /workspace
'''
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE', message: 'Failed to create bundles.tar.gz') {
sh '''
bundleName=amd64-rootless
echo "Creating ${bundleName}-bundles.tar.gz"
# exclude overlay2 directories
find bundles -path '*/root/*overlay2' -prune -o -type f \\( -name '*-report.json' -o -name '*.log' -o -name '*.prof' -o -name '*-report.xml' \\) -print | xargs tar -czf ${bundleName}-bundles.tar.gz
'''
archiveArtifacts artifacts: '*-bundles.tar.gz', allowEmptyArchive: true
}
}
cleanup {
sh 'make clean'
deleteDir()
}
}
}
stage('cgroup2') {
when {
beforeAgent true
expression { params.cgroup2 }
}
agent { label 'amd64 && ubuntu-2004 && cgroup2' }
stages {
stage("Print info") {
steps {
sh 'docker version'
sh 'docker info'
}
}
stage("Build dev image") {
steps {
sh '''
docker build --force-rm --build-arg APT_MIRROR --build-arg SYSTEMD=true -t docker:${GIT_COMMIT} .
'''
}
}
stage("Integration tests") {
environment {
DOCKER_SYSTEMD = '1' // recommended cgroup driver for v2
TEST_SKIP_INTEGRATION_CLI = '1' // CLI tests do not support v2
}
steps {
sh '''
docker run --rm -t --privileged \
-v "$WORKSPACE/bundles:/go/src/github.com/docker/docker/bundles" \
--name docker-pr$BUILD_NUMBER \
-e DOCKER_GITCOMMIT=${GIT_COMMIT} \
-e DOCKER_GRAPHDRIVER \
-e DOCKER_EXPERIMENTAL \
-e DOCKER_SYSTEMD \
-e TEST_SKIP_INTEGRATION_CLI \
-e TIMEOUT \
-e VALIDATE_REPO=${GIT_URL} \
-e VALIDATE_BRANCH=${CHANGE_TARGET} \
docker:${GIT_COMMIT} \
hack/make.sh \
dynbinary \
test-integration
'''
}
post {
always {
junit testResults: 'bundles/**/*-report.xml', allowEmptyResults: true
}
}
}
}
post {
always {
sh '''
echo "Ensuring container killed."
docker rm -vf docker-pr$BUILD_NUMBER || true
'''
sh '''
echo "Chowning /workspace to jenkins user"
docker run --rm -v "$WORKSPACE:/workspace" busybox chown -R "$(id -u):$(id -g)" /workspace
'''
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE', message: 'Failed to create bundles.tar.gz') {
sh '''
bundleName=amd64-cgroup2
echo "Creating ${bundleName}-bundles.tar.gz"
# exclude overlay2 directories
find bundles -path '*/root/*overlay2' -prune -o -type f \\( -name '*-report.json' -o -name '*.log' -o -name '*.prof' -o -name '*-report.xml' \\) -print | xargs tar -czf ${bundleName}-bundles.tar.gz
'''
archiveArtifacts artifacts: '*-bundles.tar.gz', allowEmptyArchive: true
}
}
cleanup {
sh 'make clean'
deleteDir()
}
}
}
stage('arm64') {
when {
beforeAgent true
expression { params.arm64 }
}
agent { label 'arm64 && ubuntu-2004' }
environment {
TEST_SKIP_INTEGRATION_CLI = '1'
}
stages {
stage("Print info") {
steps {
sh 'docker version'
sh 'docker info'
sh '''
echo "check-config.sh version: ${CHECK_CONFIG_COMMIT}"
curl -fsSL -o ${WORKSPACE}/check-config.sh "https://raw.githubusercontent.com/moby/moby/${CHECK_CONFIG_COMMIT}/contrib/check-config.sh" \
&& bash ${WORKSPACE}/check-config.sh || true
'''
}
}
stage("Build dev image") {
steps {
sh 'docker build --force-rm -t docker:${GIT_COMMIT} .'
}
}
stage("Unit tests") {
steps {
sh '''
docker run --rm -t --privileged \
-v "$WORKSPACE/bundles:/go/src/github.com/docker/docker/bundles" \
--name docker-pr$BUILD_NUMBER \
-e DOCKER_EXPERIMENTAL \
-e DOCKER_GITCOMMIT=${GIT_COMMIT} \
-e DOCKER_GRAPHDRIVER \
-e VALIDATE_REPO=${GIT_URL} \
-e VALIDATE_BRANCH=${CHANGE_TARGET} \
docker:${GIT_COMMIT} \
hack/test/unit
'''
}
post {
always {
junit testResults: 'bundles/junit-report.xml', allowEmptyResults: true
}
}
}
stage("Integration tests") {
environment { TEST_SKIP_INTEGRATION_CLI = '1' }
steps {
sh '''
docker run --rm -t --privileged \
-v "$WORKSPACE/bundles:/go/src/github.com/docker/docker/bundles" \
--name docker-pr$BUILD_NUMBER \
-e DOCKER_EXPERIMENTAL \
-e DOCKER_GITCOMMIT=${GIT_COMMIT} \
-e DOCKER_GRAPHDRIVER \
-e TESTDEBUG \
-e TEST_SKIP_INTEGRATION_CLI \
-e TIMEOUT \
-e VALIDATE_REPO=${GIT_URL} \
-e VALIDATE_BRANCH=${CHANGE_TARGET} \
docker:${GIT_COMMIT} \
hack/make.sh \
dynbinary \
test-integration
'''
}
post {
always {
junit testResults: 'bundles/**/*-report.xml', allowEmptyResults: true
}
}
}
}
post {
always {
sh '''
echo "Ensuring container killed."
docker rm -vf docker-pr$BUILD_NUMBER || true
'''
sh '''
echo "Chowning /workspace to jenkins user"
docker run --rm -v "$WORKSPACE:/workspace" busybox chown -R "$(id -u):$(id -g)" /workspace
'''
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE', message: 'Failed to create bundles.tar.gz') {
sh '''
bundleName=arm64-integration
echo "Creating ${bundleName}-bundles.tar.gz"
# exclude overlay2 directories
find bundles -path '*/root/*overlay2' -prune -o -type f \\( -name '*-report.json' -o -name '*.log' -o -name '*.prof' -o -name '*-report.xml' \\) -print | xargs tar -czf ${bundleName}-bundles.tar.gz
'''
archiveArtifacts artifacts: '*-bundles.tar.gz', allowEmptyArchive: true
}
}
cleanup {
sh 'make clean'
deleteDir()
}
}
}
stage('win-RS5') {
when {
beforeAgent true
expression { params.windowsRS5 }
}
environment {
DOCKER_BUILDKIT = '0'
DOCKER_DUT_DEBUG = '1'
SKIP_VALIDATION_TESTS = '1'
SOURCES_DRIVE = 'd'
SOURCES_SUBDIR = 'gopath'
TESTRUN_DRIVE = 'd'
TESTRUN_SUBDIR = "CI"
WINDOWS_BASE_IMAGE = 'mcr.microsoft.com/windows/servercore'
WINDOWS_BASE_IMAGE_TAG = 'ltsc2019'
}
agent {
node {
customWorkspace 'd:\\gopath\\src\\github.com\\docker\\docker'
label 'windows-2019'
}
}
stages {
stage("Print info") {
steps {
sh 'docker version'
sh 'docker info'
}
}
stage("Run tests") {
steps {
powershell '''
$ErrorActionPreference = 'Stop'
Invoke-WebRequest https://github.com/moby/docker-ci-zap/blob/master/docker-ci-zap.exe?raw=true -OutFile C:/Windows/System32/docker-ci-zap.exe
./hack/ci/windows.ps1
exit $LastExitCode
'''
}
}
}
post {
always {
junit testResults: 'bundles/junit-report-*.xml', allowEmptyResults: true
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE', message: 'Failed to create bundles.tar.gz') {
powershell '''
cd $env:WORKSPACE
$bundleName="windowsRS5-integration"
Write-Host -ForegroundColor Green "Creating ${bundleName}-bundles.zip"
# archiveArtifacts does not support env-vars, so save the artifacts in a fixed location
Compress-Archive -Path "bundles/CIDUT.out", "bundles/CIDUT.err", "bundles/junit-report-*.xml" -CompressionLevel Optimal -DestinationPath "${bundleName}-bundles.zip"
'''
archiveArtifacts artifacts: '*-bundles.zip', allowEmptyArchive: true
}
}
cleanup {
sh 'make clean'
deleteDir()
}
}
}
}
}
}
}


@@ -176,7 +176,7 @@
END OF TERMS AND CONDITIONS
Copyright 2013-2018 Docker, Inc.
Copyright 2013-2017 Docker, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.


@@ -24,17 +24,20 @@
# subsystem maintainers accountable. If ownership is unclear, they are the de facto owners.
people = [
"aaronlehmann",
"akihirosuda",
"anusha",
"coolljt0725",
"cpuguy83",
"crosbymichael",
"dnephin",
"duglin",
"estesp",
"jhowardmsft",
"johnstep",
"justincormack",
"kolyshkin",
"lowenna",
"mhbauer",
"mlaventure",
"runcom",
"stevvooe",
"thajeztah",
@@ -47,6 +50,15 @@
"yongtang"
]
[Org."Docs maintainers"]
# TODO Describe the docs maintainers role.
people = [
"misty",
"thajeztah"
]
[Org.Curators]
# The curators help ensure that incoming issues and pull requests are properly triaged and
@@ -62,12 +74,13 @@
people = [
"alexellis",
"andrewhsu",
"anonymuse",
"chanwit",
"fntlnz",
"gianarb",
"olljanat",
"programmerq",
"rheinwein",
"ripcurld",
"samwhited",
"thajeztah"
]
@@ -78,12 +91,6 @@
# Thank you!
people = [
# Aaron Lehmann was a maintainer for swarmkit, the registry, and the engine,
# and contributed many improvements, features, and bugfixes in those areas,
# among which "automated service rollbacks", templated secrets and configs,
# and resumable image layer downloads.
"aaronlehmann",
# Harald Albers is the mastermind behind the bash completion scripts for the
# Docker CLI. The completion scripts moved to the Docker CLI repository, so
# you can now find him perform his magic in the https://github.com/docker/cli repository.
@@ -103,19 +110,6 @@
# and tweets as @calavera.
"calavera",
# Before becoming a maintainer, Daniel Nephin was a core contributor
# to "Fig" (now known as Docker Compose). As a maintainer for both the
# Engine and Docker CLI, Daniel contributed many features, among which
# the `docker stack` commands, allowing users to deploy their Docker
# Compose projects as a Swarm service.
"dnephin",
# Doug Davis contributed many features and fixes for the classic builder,
# such as "wildcard" copy, the dockerignore file, custom paths/names
# for the Dockerfile, as well as enhancements to the API and documentation.
# Follow Doug on Twitter, where he tweets as @duginabox.
"duglin",
# As a maintainer, Erik was responsible for the "builder", and
# started the first designs for the new networking model in
# Docker. Erik is now working on all kinds of plugins for Docker
@@ -124,7 +118,7 @@
# still stumble into him in our issue tracker, or on IRC.
"erikh",
# Evan Hazlett is the creator of the Shipyard and Interlock open source projects,
# Evan Hazlett is the creator of of the Shipyard and Interlock open source projects,
# and the author of "Orca", which became the foundation of Docker Universal Control
# Plane (UCP). As a maintainer, Evan helped integrate SwarmKit (secrets, tasks)
# into the Docker engine.
@@ -179,13 +173,6 @@
# Swarm mode networking.
"mavenugo",
# As a maintainer, Kenfe-Mickaël Laventure worked on the container runtime,
# integrating containerd 1.0 with the daemon, and adding support for custom
# OCI runtimes, as well as implementing the `docker prune` subcommands,
# which was a welcome feature to be added. You can keep up with Mickaël on
# Twitter (@kmlaventure).
"mlaventure",
# As a docs maintainer, Mary Anthony contributed greatly to the Docker
# docs. She wrote the Docker Contributor Guide and Getting Started
# Guides. She helped create a doc build system independent of
@@ -260,7 +247,7 @@
[people.akihirosuda]
Name = "Akihiro Suda"
Email = "akihiro.suda.cz@hco.ntt.co.jp"
Email = "suda.akihiro@lab.ntt.co.jp"
GitHub = "AkihiroSuda"
[people.aluzzardi]
@@ -278,6 +265,11 @@
Email = "andrewhsu@docker.com"
GitHub = "andrewhsu"
[people.anonymuse]
Name = "Jesse White"
Email = "anonymuse@gmail.com"
GitHub = "anonymuse"
[people.anusha]
Name = "Anusha Ragunathan"
Email = "anusha@docker.com"
@@ -298,6 +290,11 @@
Email = "cpuguy83@gmail.com"
GitHub = "cpuguy83"
[people.chanwit]
Name = "Chanwit Kaewkasi"
Email = "chanwit@gmail.com"
GitHub = "chanwit"
[people.crosbymichael]
Name = "Michael Crosby"
Email = "crosbymichael@gmail.com"
@@ -348,6 +345,11 @@
Email = "james@lovedthanlost.net"
GitHub = "jamtur01"
[people.jhowardmsft]
Name = "John Howard"
Email = "jhoward@microsoft.com"
GitHub = "jhowardmsft"
[people.jessfraz]
Name = "Jessie Frazelle"
Email = "jess@linux.com"
@@ -363,21 +365,11 @@
Email = "justin.cormack@docker.com"
GitHub = "justincormack"
[people.kolyshkin]
Name = "Kir Kolyshkin"
Email = "kolyshkin@gmail.com"
GitHub = "kolyshkin"
[people.lk4d4]
Name = "Alexander Morozov"
Email = "lk4d4@docker.com"
GitHub = "lk4d4"
[people.lowenna]
Name = "John Howard"
Email = "github@lowenna.com"
GitHub = "lowenna"
[people.mavenugo]
Name = "Madhu Venugopal"
Email = "madhu@docker.com"
@@ -388,6 +380,11 @@
Email = "mbauer@us.ibm.com"
GitHub = "mhbauer"
[people.misty]
Name = "Misty Stanley-Jones"
Email = "misty@docker.com"
GitHub = "mistyhacks"
[people.mlaventure]
Name = "Kenfe-Mickaël Laventure"
Email = "mickael.laventure@gmail.com"
@@ -403,16 +400,16 @@
Email = "mrjana@docker.com"
GitHub = "mrjana"
[people.olljanat]
Name = "Olli Janatuinen"
Email = "olli.janatuinen@gmail.com"
GitHub = "olljanat"
[people.programmerq]
Name = "Jeff Anderson"
Email = "jeff@docker.com"
GitHub = "programmerq"
[people.rheinwein]
Name = "Laura Frank"
Email = "laura@codeship.com"
GitHub = "rheinwein"
[people.ripcurld]
Name = "Boaz Shuster"
Email = "ripcurld.github@gmail.com"
@@ -423,11 +420,6 @@
Email = "runcom@redhat.com"
GitHub = "runcom"
[people.samwhited]
Name = "Sam Whited"
Email = "sam@samwhited.com"
GitHub = "samwhited"
[people.shykes]
Name = "Solomon Hykes"
Email = "solomon@docker.com"

Makefile

@@ -1,25 +1,10 @@
.PHONY: all binary dynbinary build cross help install manpages run shell test test-docker-py test-integration test-unit validate win
BUILDX_VERSION ?= v0.8.2
ifdef USE_BUILDX
BUILDX ?= $(shell command -v buildx)
BUILDX ?= $(shell command -v docker-buildx)
DOCKER_BUILDX_CLI_PLUGIN_PATH ?= ~/.docker/cli-plugins/docker-buildx
BUILDX ?= $(shell if [ -x "$(DOCKER_BUILDX_CLI_PLUGIN_PATH)" ]; then echo $(DOCKER_BUILDX_CLI_PLUGIN_PATH); fi)
endif
ifndef USE_BUILDX
DOCKER_BUILDKIT := 1
export DOCKER_BUILDKIT
endif
BUILDX ?= bundles/buildx
DOCKER ?= docker
.PHONY: all binary dynbinary build cross help init-go-pkg-cache install manpages run shell test test-docker-py test-integration test-unit validate win
# set the graph driver as the current graphdriver if not set
DOCKER_GRAPHDRIVER := $(if $(DOCKER_GRAPHDRIVER),$(DOCKER_GRAPHDRIVER),$(shell docker info 2>&1 | grep "Storage Driver" | sed 's/.*: //'))
export DOCKER_GRAPHDRIVER
DOCKER_INCREMENTAL_BINARY := $(if $(DOCKER_INCREMENTAL_BINARY),$(DOCKER_INCREMENTAL_BINARY),1)
export DOCKER_INCREMENTAL_BINARY
# get OS/Arch of docker engine
DOCKER_OSARCH := $(shell bash -c 'source hack/make/.detect-daemon-osarch && echo $${DOCKER_ENGINE_OSARCH}')
@@ -46,11 +31,11 @@ export VALIDATE_ORIGIN_BRANCH
#
DOCKER_ENVS := \
-e DOCKER_CROSSPLATFORMS \
-e BUILD_APT_MIRROR \
-e BUILDFLAGS \
-e KEEPBUNDLE \
-e DOCKER_BUILD_ARGS \
-e DOCKER_BUILD_GOGC \
-e DOCKER_BUILD_OPTS \
-e DOCKER_BUILD_PKGS \
-e DOCKER_BUILDKIT \
-e DOCKER_BASH_COMPLETION_PATH \
@@ -59,24 +44,16 @@ DOCKER_ENVS := \
-e DOCKER_EXPERIMENTAL \
-e DOCKER_GITCOMMIT \
-e DOCKER_GRAPHDRIVER \
-e DOCKER_INCREMENTAL_BINARY \
-e DOCKER_LDFLAGS \
-e DOCKER_PORT \
-e DOCKER_REMAP_ROOT \
-e DOCKER_ROOTLESS \
-e DOCKER_STORAGE_OPTS \
-e DOCKER_TEST_HOST \
-e DOCKER_USERLANDPROXY \
-e DOCKERD_ARGS \
-e TEST_FORCE_VALIDATE \
-e TEST_INTEGRATION_DIR \
-e TEST_SKIP_INTEGRATION \
-e TEST_SKIP_INTEGRATION_CLI \
-e TESTDEBUG \
-e TESTDIRS \
-e TESTFLAGS \
-e TESTFLAGS_INTEGRATION \
-e TESTFLAGS_INTEGRATION_CLI \
-e TEST_FILTER \
-e TIMEOUT \
-e VALIDATE_REPO \
-e VALIDATE_BRANCH \
@@ -97,35 +74,39 @@ DOCKER_ENVS := \
# (default to no bind mount if DOCKER_HOST is set)
# note: BINDDIR is supported for backwards-compatibility here
BIND_DIR := $(if $(BINDDIR),$(BINDDIR),$(if $(DOCKER_HOST),,bundles))
# DOCKER_MOUNT can be overridden, but use at your own risk!
ifndef DOCKER_MOUNT
DOCKER_MOUNT := $(if $(BIND_DIR),-v "$(CURDIR)/$(BIND_DIR):/go/src/github.com/docker/docker/$(BIND_DIR)")
DOCKER_MOUNT := $(if $(DOCKER_BINDDIR_MOUNT_OPTS),$(DOCKER_MOUNT):$(DOCKER_BINDDIR_MOUNT_OPTS),$(DOCKER_MOUNT))
# This allows the test suite to be able to run without worrying about the underlying fs used by the container running the daemon (e.g. aufs-on-aufs), so long as the host running the container is running a supported fs.
# The volume will be cleaned up when the container is removed due to `--rm`.
# Note that `BIND_DIR` will already be set to `bundles` if `DOCKER_HOST` is not set (see above BIND_DIR line), in such case this will do nothing since `DOCKER_MOUNT` will already be set.
DOCKER_MOUNT := $(if $(DOCKER_MOUNT),$(DOCKER_MOUNT),-v /go/src/github.com/docker/docker/bundles) -v "$(CURDIR)/.git:/go/src/github.com/docker/docker/.git"
DOCKER_MOUNT_CACHE := -v docker-dev-cache:/root/.cache
DOCKER_MOUNT_CLI := $(if $(DOCKER_CLI_PATH),-v $(shell dirname $(DOCKER_CLI_PATH)):/usr/local/cli,)
DOCKER_MOUNT_BASH_COMPLETION := $(if $(DOCKER_BASH_COMPLETION_PATH),-v $(shell dirname $(DOCKER_BASH_COMPLETION_PATH)):/usr/local/completion/bash,)
DOCKER_MOUNT := $(DOCKER_MOUNT) $(DOCKER_MOUNT_CACHE) $(DOCKER_MOUNT_CLI) $(DOCKER_MOUNT_BASH_COMPLETION)
endif # ifndef DOCKER_MOUNT
# This allows setting the docker-dev container name
DOCKER_CONTAINER_NAME := $(if $(CONTAINER_NAME),--name $(CONTAINER_NAME),)
# enable package cache if DOCKER_INCREMENTAL_BINARY and DOCKER_MOUNT (i.e. DOCKER_HOST) are set
PKGCACHE_MAP := gopath:/go/pkg goroot-linux_amd64:/usr/local/go/pkg/linux_amd64 goroot-linux_amd64_netgo:/usr/local/go/pkg/linux_amd64_netgo
PKGCACHE_VOLROOT := dockerdev-go-pkg-cache
PKGCACHE_VOL := $(if $(PKGCACHE_DIR),$(CURDIR)/$(PKGCACHE_DIR)/,$(PKGCACHE_VOLROOT)-)
DOCKER_MOUNT_PKGCACHE := $(if $(DOCKER_INCREMENTAL_BINARY),$(shell echo $(PKGCACHE_MAP) | sed -E 's@([^ ]*)@-v "$(PKGCACHE_VOL)\1"@g'),)
DOCKER_MOUNT_CLI := $(if $(DOCKER_CLI_PATH),-v $(shell dirname $(DOCKER_CLI_PATH)):/usr/local/cli,)
DOCKER_MOUNT_BASH_COMPLETION := $(if $(DOCKER_BASH_COMPLETION_PATH),-v $(shell dirname $(DOCKER_BASH_COMPLETION_PATH)):/usr/local/completion/bash,)
DOCKER_MOUNT := $(DOCKER_MOUNT) $(DOCKER_MOUNT_PKGCACHE) $(DOCKER_MOUNT_CLI) $(DOCKER_MOUNT_BASH_COMPLETION)
GIT_BRANCH := $(shell git rev-parse --abbrev-ref HEAD 2>/dev/null)
GIT_BRANCH_CLEAN := $(shell echo $(GIT_BRANCH) | sed -e "s/[^[:alnum:]]/-/g")
DOCKER_IMAGE := docker-dev$(if $(GIT_BRANCH_CLEAN),:$(GIT_BRANCH_CLEAN))
DOCKER_PORT_FORWARD := $(if $(DOCKER_PORT),-p "$(DOCKER_PORT)",)
DOCKER_FLAGS := $(DOCKER) run --rm -i --privileged $(DOCKER_CONTAINER_NAME) $(DOCKER_ENVS) $(DOCKER_MOUNT) $(DOCKER_PORT_FORWARD)
DOCKER_FLAGS := docker run --rm -i --privileged $(DOCKER_CONTAINER_NAME) $(DOCKER_ENVS) $(DOCKER_MOUNT) $(DOCKER_PORT_FORWARD)
BUILD_APT_MIRROR := $(if $(DOCKER_BUILD_APT_MIRROR),--build-arg APT_MIRROR=$(DOCKER_BUILD_APT_MIRROR))
export BUILD_APT_MIRROR
SWAGGER_DOCS_PORT ?= 9000
INTEGRATION_CLI_MASTER_IMAGE := $(if $(INTEGRATION_CLI_MASTER_IMAGE), $(INTEGRATION_CLI_MASTER_IMAGE), integration-cli-master)
INTEGRATION_CLI_WORKER_IMAGE := $(if $(INTEGRATION_CLI_WORKER_IMAGE), $(INTEGRATION_CLI_WORKER_IMAGE), integration-cli-worker)
define \n
@@ -141,84 +122,47 @@ endif
DOCKER_RUN_DOCKER := $(DOCKER_FLAGS) "$(DOCKER_IMAGE)"
DOCKER_BUILD_ARGS += --build-arg=GO_VERSION
ifdef DOCKER_SYSTEMD
DOCKER_BUILD_ARGS += --build-arg=SYSTEMD=true
endif
BUILD_OPTS := ${DOCKER_BUILD_ARGS} ${DOCKER_BUILD_OPTS} -f "$(DOCKERFILE)"
ifdef USE_BUILDX
BUILD_OPTS += $(BUILDX_BUILD_EXTRA_OPTS)
BUILD_CMD := $(BUILDX) build
else
BUILD_CMD := $(DOCKER) build
endif
# This is used for the legacy "build" target and anything still depending on it
BUILD_CROSS =
ifdef DOCKER_CROSS
BUILD_CROSS = --build-arg CROSS=$(DOCKER_CROSS)
endif
ifdef DOCKER_CROSSPLATFORMS
BUILD_CROSS = --build-arg CROSS=true
endif
VERSION_AUTOGEN_ARGS = --build-arg VERSION --build-arg DOCKER_GITCOMMIT --build-arg PRODUCT --build-arg PLATFORM --build-arg DEFAULT_PRODUCT_LICENSE
default: binary
all: build ## validate all checks, build linux binaries, run all tests,\ncross build non-linux binaries, and generate archives
all: build ## validate all checks, build linux binaries, run all tests\ncross build non-linux binaries and generate archives
$(DOCKER_RUN_DOCKER) bash -c 'hack/validate/default && hack/make.sh'
# This is only used to work around read-only bind mounts of the source code into
# binary build targets. We end up mounting a tmpfs over autogen which allows us
# to write build-time generated assets even though the source is mounted read-only
# ...But in order to do so, this dir needs to already exist.
autogen:
mkdir -p autogen
binary: build ## build the linux binaries
$(DOCKER_RUN_DOCKER) hack/make.sh binary
binary: buildx autogen ## build statically linked linux binaries
$(BUILD_CMD) $(BUILD_OPTS) --output=bundles/ --target=$@ $(VERSION_AUTOGEN_ARGS) .
dynbinary: build ## build the linux dynbinaries
$(DOCKER_RUN_DOCKER) hack/make.sh dynbinary
dynbinary: buildx autogen ## build dynamically linked linux binaries
$(BUILD_CMD) $(BUILD_OPTS) --output=bundles/ --target=$@ $(VERSION_AUTOGEN_ARGS) .
cross: BUILD_OPTS += --build-arg CROSS=true --build-arg DOCKER_CROSSPLATFORMS
cross: buildx autogen ## cross build the binaries for darwin, freebsd and\nwindows
$(BUILD_CMD) $(BUILD_OPTS) --output=bundles/ --target=$@ $(VERSION_AUTOGEN_ARGS) .
build: bundles init-go-pkg-cache
$(warning The docker client CLI has moved to github.com/docker/cli. For a dev-test cycle involving the CLI, run:${\n} DOCKER_CLI_PATH=/host/path/to/cli/binary make shell ${\n} then change the cli and compile into a binary at the same location.${\n})
docker build ${BUILD_APT_MIRROR} ${DOCKER_BUILD_ARGS} -t "$(DOCKER_IMAGE)" -f "$(DOCKERFILE)" .
bundles:
mkdir bundles
.PHONY: clean
clean: clean-cache
clean: clean-pkg-cache-vol ## clean up cached resources
.PHONY: clean-cache
clean-cache:
docker volume rm -f docker-dev-cache
clean-pkg-cache-vol:
@- $(foreach mapping,$(PKGCACHE_MAP), \
$(shell docker volume rm $(PKGCACHE_VOLROOT)-$(shell echo $(mapping) | awk -F':/' '{ print $$1 }') > /dev/null 2>&1) \
)
cross: build ## cross build the binaries for darwin, freebsd and\nwindows
$(DOCKER_RUN_DOCKER) hack/make.sh dynbinary binary cross
help: ## this help
@awk 'BEGIN {FS = ":.*?## "} /^[a-zA-Z0-9_-]+:.*?## / {gsub("\\\\n",sprintf("\n%22c",""), $$2);printf "\033[36m%-20s\033[0m %s\n", $$1, $$2}' $(MAKEFILE_LIST)
@awk 'BEGIN {FS = ":.*?## "} /^[a-zA-Z_-]+:.*?## / {sub("\\\\n",sprintf("\n%22c"," "), $$2);printf "\033[36m%-20s\033[0m %s\n", $$1, $$2}' $(MAKEFILE_LIST)
init-go-pkg-cache:
$(if $(PKGCACHE_DIR), mkdir -p $(shell echo $(PKGCACHE_MAP) | sed -E 's@([^: ]*):[^ ]*@$(PKGCACHE_DIR)/\1@g'))
install: ## install the linux binaries
KEEPBUNDLE=1 hack/make.sh install-binary
run: build ## run the docker daemon in a container
$(DOCKER_RUN_DOCKER) sh -c "KEEPBUNDLE=1 hack/make.sh install-binary run"
.PHONY: build
ifeq ($(BIND_DIR), .)
build: shell_target := --target=dev
else
build: shell_target := --target=final
endif
ifdef USE_BUILDX
build: buildx_load := --load
endif
build: buildx
$(BUILD_CMD) $(BUILD_OPTS) $(shell_target) $(buildx_load) $(BUILD_CROSS) -t "$(DOCKER_IMAGE)" .
shell: build ## start a shell inside the build env
$(DOCKER_RUN_DOCKER) bash
test: build test-unit ## run the unit, integration and docker-py tests
@@ -229,16 +173,8 @@ test-docker-py: build ## run the docker-py tests
test-integration-cli: test-integration ## (DEPRECATED) use test-integration
ifneq ($(and $(TEST_SKIP_INTEGRATION),$(TEST_SKIP_INTEGRATION_CLI)),)
test-integration:
@echo Both integration suites skipped per environment variables
else
test-integration: build ## run the integration tests
$(DOCKER_RUN_DOCKER) hack/make.sh dynbinary test-integration
endif
test-integration-flaky: build ## run the stress test for all new integration tests
$(DOCKER_RUN_DOCKER) hack/make.sh dynbinary test-integration-flaky
test-unit: build ## run the unit tests
$(DOCKER_RUN_DOCKER) hack/test/unit
@@ -247,7 +183,7 @@ validate: build ## validate DCO, Seccomp profile generation, gofmt,\n./pkg/ isol
$(DOCKER_RUN_DOCKER) hack/validate/all
win: build ## cross build the binary for windows
$(DOCKER_RUN_DOCKER) DOCKER_CROSSPLATFORMS=windows/amd64 hack/make.sh cross
$(DOCKER_RUN_DOCKER) hack/make.sh win
.PHONY: swagger-gen
swagger-gen:
@@ -265,13 +201,18 @@ swagger-docs: ## preview the API documentation
-p $(SWAGGER_DOCS_PORT):80 \
bfirsh/redoc:1.6.2
.PHONY: buildx
ifdef USE_BUILDX
ifeq ($(BUILDX), bundles/buildx)
buildx: bundles/buildx ## build buildx cli tool
endif
endif
bundles/buildx: bundles ## build buildx CLI tool
curl -fsSL https://raw.githubusercontent.com/moby/buildkit/70deac12b5857a1aa4da65e90b262368e2f71500/hack/install-buildx | VERSION="$(BUILDX_VERSION)" BINDIR="$(@D)" bash
$@ version
build-integration-cli-on-swarm: build ## build images and binary for running integration-cli on Swarm in parallel
@echo "Building hack/integration-cli-on-swarm (if build fails, please refer to hack/integration-cli-on-swarm/README.md)"
go build -buildmode=pie -o ./hack/integration-cli-on-swarm/integration-cli-on-swarm ./hack/integration-cli-on-swarm/host
@echo "Building $(INTEGRATION_CLI_MASTER_IMAGE)"
docker build -t $(INTEGRATION_CLI_MASTER_IMAGE) hack/integration-cli-on-swarm/agent
# For worker, we don't use `docker build` so as to enable DOCKER_INCREMENTAL_BINARY and so on
@echo "Building $(INTEGRATION_CLI_WORKER_IMAGE) from $(DOCKER_IMAGE)"
$(eval tmp := integration-cli-worker-tmp)
# We mount pkgcache, but not bundle (bundle needs to be baked into the image)
# To avoid baking DOCKER_GRAPHDRIVER and so on into the image, we cannot use $(DOCKER_ENVS) here
docker run -t -d --name $(tmp) -e DOCKER_GITCOMMIT -e BUILDFLAGS -e DOCKER_INCREMENTAL_BINARY --privileged $(DOCKER_MOUNT_PKGCACHE) $(DOCKER_IMAGE) top
docker exec $(tmp) hack/make.sh build-integration-test-binary dynbinary
docker exec $(tmp) go build -buildmode=pie -o /worker github.com/docker/docker/hack/integration-cli-on-swarm/agent/worker
docker commit -c 'ENTRYPOINT ["/worker"]' $(tmp) $(INTEGRATION_CLI_WORKER_IMAGE)
docker rm -f $(tmp)

NOTICE

@@ -3,7 +3,7 @@ Copyright 2012-2017 Docker, Inc.
This product includes software developed at Docker, Inc. (https://www.docker.com).
This product contains software (https://github.com/creack/pty) developed
This product contains software (https://github.com/kr/pty) developed
by Keith Rarick, licensed under the MIT License.
The following is courtesy of our legal counsel:


@@ -35,83 +35,34 @@ issue, in the Slack channel, or in person at the Moby Summits that happen every
## 1.1 Runtime improvements
Over time we have accumulated a lot of functionality in the container runtime
aspect of Moby while also growing in other areas. Many of the container runtime
pieces are now duplicated work, available in other, lower-level components such
as [containerd](https://containerd.io).
We introduced [`runC`](https://runc.io) as a standalone low-level tool for container
execution in 2015, the first stage in spinning out parts of the Engine into standalone tools.
Moby currently only utilizes containerd for basic runtime state management, e.g. starting
and stopping a container, which is what the pre-containerd 1.0 daemon provided.
Now that containerd is a full-fledged container runtime which supports full
container life-cycle management, we would like to start relying more on containerd
and removing the bits in Moby which are now duplicated. This will necessitate
a significant effort to refactor and even remove large parts of Moby's codebase.
As runC continued evolving, and the OCI specification along with it, we created
[`containerd`](https://github.com/containerd/containerd), a daemon to control and monitor `runC`.
In late 2016 this was relaunched as the `containerd` 1.0 track, aiming to provide a common runtime
for the whole spectrum of container systems, including Kubernetes, with wide community support.
This change meant that there was an increased scope for `containerd`, including image management
and storage drivers.
Tracking issues:
Moby will rely on a long-running `containerd` companion daemon for all container execution
related operations. This could open the door in the future for Engine restarts without interrupting
running containers. The switch over to containerd 1.0 is an important goal for the project, and
will result in a significant simplification of the functions implemented in this repository.
- [#38043](https://github.com/moby/moby/issues/38043) Proposal: containerd image integration
## 1.2 Image Builder
Work is ongoing to integrate [BuildKit](https://github.com/moby/buildkit) into
Moby and replace the "v0" build implementation. Buildkit offers better cache
management, parallelizable build steps, and better extensibility while also
keeping builds portable, a chief tenet of Moby's builder.
Upon completion of this effort, users will have a builder that performs better
while also being more extensible, enabling users to provide their own custom
syntax which can be either Dockerfile-like or something completely different.
See [buildpacks on buildkit](https://github.com/tonistiigi/buildkit-pack) as an
example of this extensibility.
New features for the builder and Dockerfile should be implemented first in the
BuildKit backend using an external Dockerfile implementation from the container
images. This allows everyone to test and evaluate the feature without upgrading
their daemon. New features should go to the experimental channel first, and can be
part of the `docker/dockerfile:experimental` image. From there they graduate to
`docker/dockerfile:latest` and binary releases. The Dockerfile frontend source
code is temporarily located at
[https://github.com/moby/buildkit/tree/master/frontend/dockerfile](https://github.com/moby/buildkit/tree/master/frontend/dockerfile)
with separate new features defined with go build tags.
Tracking issues:
- [#32925](https://github.com/moby/moby/issues/32925) discussion: builder future: buildkit
## 1.3 Rootless Mode
Running the daemon requires elevated privileges for many tasks. We would like to
support running the daemon as a normal, unprivileged user without requiring `suid`
binaries.
Tracking issues:
- [#37375](https://github.com/moby/moby/issues/37375) Proposal: allow running `dockerd` as an unprivileged user (aka rootless mode)
## 1.4 Testing
Moby has many tests, both unit and integration. Moby needs more tests which can
cover the full spectrum of functionality and edge cases out there.
Tests in the `integration-cli` folder should also be migrated (both in
location and style) into the `integration` folder. These newer tests are simpler to
run in isolation, simpler to read, simpler to write, and more fully exercise the
API. Meanwhile tests of the docker CLI should generally live in docker/cli.
Tracking issues:
- [#32866](https://github.com/moby/moby/issues/32866) Replace integration-cli suite with API test suite
## 1.5 Internal decoupling
## 1.2 Internal decoupling
A lot of work has been done in trying to decouple Moby internals. This process of creating
standalone projects with a well-defined function that attract a dedicated community should continue.
As well as integrating `containerd` we would like to integrate [BuildKit](https://github.com/moby/buildkit)
as the next standalone component.
We see gRPC as the natural communication layer between decoupled components.
In addition to pushing out large components into other projects, much of the
internal code structure, and in particular the
["Daemon"](https://godoc.org/github.com/docker/docker/daemon#Daemon) object,
should be split into smaller, more manageable, and more testable components.
## 1.3 Custom assembly tooling
We have been prototyping the Moby [assembly tool](https://github.com/moby/tool) which was originally
developed for LinuxKit and intend to turn it into a more generic packaging and assembly mechanism
that can build not only the default version of Moby, as distribution packages or other useful forms,
but can also build very different container systems, themselves built of cooperating daemons built in
and running in containers. We intend to merge this functionality into this repo.


@@ -1,9 +0,0 @@
# Reporting security issues
The Moby maintainers take security seriously. If you discover a security issue, please bring it to their attention right away!
### Reporting a Vulnerability
Please **DO NOT** file a public issue; instead, send your report privately to security@docker.com.
Security reports are greatly appreciated and we will publicly thank you for it, although we keep your name confidential if you request it. We also like to send gifts—if you're into schwag, make sure to let us know. We currently do not offer a paid security bounty program, but are not ruling it out in the future.


@@ -28,7 +28,7 @@ Most code changes will fall into one of the following categories.
### Writing tests for new features
New code should be covered by unit tests. If the code is difficult to test with
unit tests, then that is a good sign that it should be refactored to make it
a unit tests then that is a good sign that it should be refactored to make it
easier to reuse and maintain. Consider accepting unexported interfaces instead
of structs so that fakes can be provided for dependencies.
@@ -44,23 +44,16 @@ case. Error cases should be handled by unit tests.
Bug fixes should include a unit test case which exercises the bug.
A bug fix may also include new assertions in existing integration tests for the
A bug fix may also include new assertions in an existing integration tests for the
API endpoint.
### Writing new integration tests
Note the `integration-cli` tests are deprecated; new tests will be rejected by
the CI.
Instead, implement new tests under `integration/`.
### Integration tests environment considerations
When adding new tests or modifying existing tests under `integration/`, testing
When adding new tests or modifying existing test under `integration/`, testing
environment should be properly considered. `skip.If` from
[gotest.tools/skip](https://godoc.org/gotest.tools/skip) can be used to make the
test run conditionally. Full testing environment conditions can be found at
[environment.go](https://github.com/moby/moby/blob/6b6eeed03b963a27085ea670f40cd5ff8a61f32e/testutil/environment/environment.go)
[environment.go](https://github.com/moby/moby/blob/cb37987ee11655ed6bbef663d245e55922354c68/internal/test/environment/environment.go)
Here is a quick example. If the test needs to interact with a docker daemon on
the same host, the following condition should be checked within the test code
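A minimal sketch of such a check, assuming the `skip` package shown above; `isRemoteDaemon` is a stand-in for the `testEnv.IsRemoteDaemon()` accessor from the environment package linked above, whose exact shape may differ between branches:

```
package integration

import (
	"testing"

	"gotest.tools/skip"
)

// isRemoteDaemon stands in for testEnv.IsRemoteDaemon() from the
// environment package referenced above; assumed for this sketch.
func isRemoteDaemon() bool { return false }

func TestNeedsLocalDaemon(t *testing.T) {
	// Skip the test when the daemon under test is not on this host.
	skip.If(t, isRemoteDaemon(), "test requires a local daemon")
	// ... test body ...
}
```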
@@ -74,8 +67,6 @@ If a remote daemon is detected, the test will be skipped.
## Running tests
### Unit Tests
To run the unit test suite:
```
make test-unit
```
@@ -91,36 +82,8 @@ The following environment variables may be used to run a subset of tests:
* `TESTFLAGS` - flags passed to `go test`, to run tests which match a pattern
use `TESTFLAGS="-test.run TestNameOrPrefix"`
### Integration Tests
To run the integration test suite:
```
make test-integration
```
This make target runs both the "integration" suite and the "integration-cli"
suite.
You can specify which integration test dirs to build and run by specifying
the list of dirs in the TEST_INTEGRATION_DIR environment variable.
You can also explicitly skip either suite by setting the TEST_SKIP_INTEGRATION
and/or TEST_SKIP_INTEGRATION_CLI environment variables (any value will do).
Flags specific to each suite can be set in the TESTFLAGS_INTEGRATION and
TESTFLAGS_INTEGRATION_CLI environment variables.
If all you want is to specify a test filter to run, you can set the
`TEST_FILTER` environment variable. This ends up getting passed directly to `go
test -run` (or `go test -check-f`, depending on the test suite). It will also
automatically set the other environment variables mentioned above accordingly.
### Go Version
You can change the version of Go used to build the code under test
by setting the `GO_VERSION` variable, for example:
```
make GO_VERSION=1.12.8 test
```


@@ -3,7 +3,7 @@ package api // import "github.com/docker/docker/api"
// Common constants for daemon and client.
const (
// DefaultVersion of Current REST API
DefaultVersion = "1.41"
DefaultVersion = "1.39"
// NoBaseImageSpecifier is the symbol used by the FROM
// command to specify that no base image is to be used.


@@ -1,4 +1,3 @@
//go:build !windows
// +build !windows
package api // import "github.com/docker/docker/api"


@@ -3,19 +3,17 @@ package build // import "github.com/docker/docker/api/server/backend/build"
import (
"context"
"fmt"
"strconv"
"github.com/docker/distribution/reference"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/backend"
"github.com/docker/docker/api/types/events"
"github.com/docker/docker/builder"
buildkit "github.com/docker/docker/builder/builder-next"
daemonevents "github.com/docker/docker/daemon/events"
"github.com/docker/docker/builder/fscache"
"github.com/docker/docker/image"
"github.com/docker/docker/pkg/stringid"
"github.com/pkg/errors"
"google.golang.org/grpc"
"golang.org/x/sync/errgroup"
)
// ImageComponent provides an interface for working with images
@@ -32,21 +30,14 @@ type Builder interface {
// Backend provides build functionality to the API router
type Backend struct {
builder Builder
fsCache *fscache.FSCache
imageComponent ImageComponent
buildkit *buildkit.Builder
eventsService *daemonevents.Events
}
// NewBackend creates a new build backend from components
func NewBackend(components ImageComponent, builder Builder, buildkit *buildkit.Builder, es *daemonevents.Events) (*Backend, error) {
return &Backend{imageComponent: components, builder: builder, buildkit: buildkit, eventsService: es}, nil
}
// RegisterGRPC registers buildkit controller to the grpc server.
func (b *Backend) RegisterGRPC(s *grpc.Server) {
if b.buildkit != nil {
b.buildkit.RegisterGRPC(s)
}
func NewBackend(components ImageComponent, builder Builder, fsCache *fscache.FSCache, buildkit *buildkit.Builder) (*Backend, error) {
return &Backend{imageComponent: components, builder: builder, fsCache: fsCache, buildkit: buildkit}, nil
}
// Build builds an image from a Source
@@ -91,8 +82,6 @@ func (b *Backend) Build(ctx context.Context, config backend.BuildConfig) (string
if !useBuildKit {
stdout := config.ProgressWriter.StdoutFormatter
fmt.Fprintf(stdout, "Successfully built %s\n", stringid.TruncateID(imageID))
}
if imageID != "" {
err = tagger.TagImages(image.ID(imageID))
}
return imageID, err
@@ -100,16 +89,34 @@ func (b *Backend) Build(ctx context.Context, config backend.BuildConfig) (string
// PruneCache removes all cached build sources
func (b *Backend) PruneCache(ctx context.Context, opts types.BuildCachePruneOptions) (*types.BuildCachePruneReport, error) {
buildCacheSize, cacheIDs, err := b.buildkit.Prune(ctx, opts)
if err != nil {
return nil, errors.Wrap(err, "failed to prune build cache")
}
b.eventsService.Log("prune", events.BuilderEventType, events.Actor{
Attributes: map[string]string{
"reclaimed": strconv.FormatInt(buildCacheSize, 10),
},
eg, ctx := errgroup.WithContext(ctx)
var fsCacheSize uint64
eg.Go(func() error {
var err error
fsCacheSize, err = b.fsCache.Prune(ctx)
if err != nil {
return errors.Wrap(err, "failed to prune fscache")
}
return nil
})
return &types.BuildCachePruneReport{SpaceReclaimed: uint64(buildCacheSize), CachesDeleted: cacheIDs}, nil
var buildCacheSize int64
var cacheIDs []string
eg.Go(func() error {
var err error
buildCacheSize, cacheIDs, err = b.buildkit.Prune(ctx, opts)
if err != nil {
return errors.Wrap(err, "failed to prune build cache")
}
return nil
})
if err := eg.Wait(); err != nil {
return nil, err
}
return &types.BuildCachePruneReport{SpaceReclaimed: fsCacheSize + uint64(buildCacheSize), CachesDeleted: cacheIDs}, nil
}
// Cancel cancels the build by ID


@@ -1,34 +0,0 @@
package server
import (
"net/http"
"github.com/docker/docker/api/server/httpstatus"
"github.com/docker/docker/api/server/httputils"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/versions"
"github.com/gorilla/mux"
"google.golang.org/grpc/status"
)
// makeErrorHandler makes an HTTP handler that decodes a Docker error and
// returns it in the response.
func makeErrorHandler(err error) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
statusCode := httpstatus.FromError(err)
vars := mux.Vars(r)
if apiVersionSupportsJSONErrors(vars["version"]) {
response := &types.ErrorResponse{
Message: err.Error(),
}
_ = httputils.WriteJSON(w, statusCode, response)
} else {
http.Error(w, status.Convert(err).Message(), statusCode)
}
}
}
func apiVersionSupportsJSONErrors(version string) bool {
const firstAPIVersionWithJSONErrors = "1.23"
return version == "" || versions.GreaterThan(version, firstAPIVersionWithJSONErrors)
}


@@ -1,23 +1,24 @@
package httpstatus // import "github.com/docker/docker/api/server/httpstatus"
package httputils // import "github.com/docker/docker/api/server/httputils"
import (
"fmt"
"net/http"
containerderrors "github.com/containerd/containerd/errdefs"
"github.com/docker/distribution/registry/api/errcode"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/versions"
"github.com/docker/docker/errdefs"
"github.com/gorilla/mux"
"github.com/sirupsen/logrus"
"google.golang.org/grpc"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"
)
type causer interface {
Cause() error
}
// FromError retrieves status code from error message.
func FromError(err error) int {
// GetHTTPErrorStatusCode retrieves status code from error message.
func GetHTTPErrorStatusCode(err error) int {
if err == nil {
logrus.WithFields(logrus.Fields{"error": err}).Error("unexpected HTTP error handling")
return http.StatusInternalServerError
@@ -34,7 +35,7 @@ func FromError(err error) int {
statusCode = http.StatusNotFound
case errdefs.IsInvalidParameter(err):
statusCode = http.StatusBadRequest
case errdefs.IsConflict(err):
case errdefs.IsConflict(err) || errdefs.IsAlreadyExists(err):
statusCode = http.StatusConflict
case errdefs.IsUnauthorized(err):
statusCode = http.StatusUnauthorized
@@ -53,16 +54,9 @@ func FromError(err error) int {
if statusCode != http.StatusInternalServerError {
return statusCode
}
statusCode = statusCodeFromContainerdError(err)
if statusCode != http.StatusInternalServerError {
return statusCode
}
statusCode = statusCodeFromDistributionError(err)
if statusCode != http.StatusInternalServerError {
return statusCode
}
if e, ok := err.(causer); ok {
return FromError(e.Cause())
return GetHTTPErrorStatusCode(e.Cause())
}
logrus.WithFields(logrus.Fields{
@@ -78,9 +72,31 @@ func FromError(err error) int {
return statusCode
}
func apiVersionSupportsJSONErrors(version string) bool {
const firstAPIVersionWithJSONErrors = "1.23"
return version == "" || versions.GreaterThan(version, firstAPIVersionWithJSONErrors)
}
// MakeErrorHandler makes an HTTP handler that decodes a Docker error and
// returns it in the response.
func MakeErrorHandler(err error) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
statusCode := GetHTTPErrorStatusCode(err)
vars := mux.Vars(r)
if apiVersionSupportsJSONErrors(vars["version"]) {
response := &types.ErrorResponse{
Message: err.Error(),
}
WriteJSON(w, statusCode, response)
} else {
http.Error(w, grpc.ErrorDesc(err), statusCode)
}
}
}
// statusCodeFromGRPCError returns status code according to gRPC error
func statusCodeFromGRPCError(err error) int {
switch status.Code(err) {
switch grpc.Code(err) {
case codes.InvalidArgument: // code 3
return http.StatusBadRequest
case codes.NotFound: // code 5
@@ -100,6 +116,9 @@ func statusCodeFromGRPCError(err error) int {
case codes.Unavailable: // code 14
return http.StatusServiceUnavailable
default:
if e, ok := err.(causer); ok {
return statusCodeFromGRPCError(e.Cause())
}
// codes.Canceled(1)
// codes.Unknown(2)
// codes.DeadlineExceeded(4)
@@ -110,41 +129,3 @@ func statusCodeFromGRPCError(err error) int {
return http.StatusInternalServerError
}
}
// statusCodeFromDistributionError returns status code according to registry errcode
// code is loosely based on errcode.ServeJSON() in docker/distribution
func statusCodeFromDistributionError(err error) int {
switch errs := err.(type) {
case errcode.Errors:
if len(errs) < 1 {
return http.StatusInternalServerError
}
if _, ok := errs[0].(errcode.ErrorCoder); ok {
return statusCodeFromDistributionError(errs[0])
}
case errcode.ErrorCoder:
return errs.ErrorCode().Descriptor().HTTPStatusCode
}
return http.StatusInternalServerError
}
// statusCodeFromContainerdError returns status code for containerd errors when
// consumed directly (not through gRPC)
func statusCodeFromContainerdError(err error) int {
switch {
case containerderrors.IsInvalidArgument(err):
return http.StatusBadRequest
case containerderrors.IsNotFound(err):
return http.StatusNotFound
case containerderrors.IsAlreadyExists(err):
return http.StatusConflict
case containerderrors.IsFailedPrecondition(err):
return http.StatusPreconditionFailed
case containerderrors.IsUnavailable(err):
return http.StatusServiceUnavailable
case containerderrors.IsNotImplemented(err):
return http.StatusNotImplemented
default:
return http.StatusInternalServerError
}
}


@@ -23,7 +23,7 @@ func TestBoolValue(t *testing.T) {
for c, e := range cases {
v := url.Values{}
v.Set("test", c)
r, _ := http.NewRequest(http.MethodPost, "", nil)
r, _ := http.NewRequest("POST", "", nil)
r.Form = v
a := BoolValue(r, "test")
@@ -34,14 +34,14 @@ func TestBoolValue(t *testing.T) {
}
func TestBoolValueOrDefault(t *testing.T) {
r, _ := http.NewRequest(http.MethodGet, "", nil)
r, _ := http.NewRequest("GET", "", nil)
if !BoolValueOrDefault(r, "queryparam", true) {
t.Fatal("Expected to get true default value, got false")
}
v := url.Values{}
v.Set("param", "")
r, _ = http.NewRequest(http.MethodGet, "", nil)
r, _ = http.NewRequest("GET", "", nil)
r.Form = v
if BoolValueOrDefault(r, "param", true) {
t.Fatal("Expected not to get true")
@@ -59,7 +59,7 @@ func TestInt64ValueOrZero(t *testing.T) {
for c, e := range cases {
v := url.Values{}
v.Set("test", c)
r, _ := http.NewRequest(http.MethodPost, "", nil)
r, _ := http.NewRequest("POST", "", nil)
r.Form = v
a := Int64ValueOrZero(r, "test")
@@ -79,7 +79,7 @@ func TestInt64ValueOrDefault(t *testing.T) {
for c, e := range cases {
v := url.Values{}
v.Set("test", c)
r, _ := http.NewRequest(http.MethodPost, "", nil)
r, _ := http.NewRequest("POST", "", nil)
r.Form = v
a, err := Int64ValueOrDefault(r, "test", -1)
@@ -95,7 +95,7 @@ func TestInt64ValueOrDefault(t *testing.T) {
func TestInt64ValueOrDefaultWithError(t *testing.T) {
v := url.Values{}
v.Set("test", "invalid")
r, _ := http.NewRequest(http.MethodPost, "", nil)
r, _ := http.NewRequest("POST", "", nil)
r.Form = v
_, err := Int64ValueOrDefault(r, "test", -1)


@@ -12,8 +12,10 @@ import (
"github.com/sirupsen/logrus"
)
type contextKey string
// APIVersionKey is the client's requested API version.
type APIVersionKey struct{}
const APIVersionKey contextKey = "api-version"
// APIFunc is an adapter to allow the use of ordinary functions as Docker API endpoints.
// Any function that has the appropriate signature can be registered as an API endpoint (e.g. getVersion).
@@ -27,7 +29,7 @@ func HijackConnection(w http.ResponseWriter) (io.ReadCloser, io.Writer, error) {
return nil, nil, err
}
// Flush the options to make sure the client sets the raw mode
_, _ = conn.Write([]byte{})
conn.Write([]byte{})
return conn, conn, nil
}
@@ -37,9 +39,9 @@ func CloseStreams(streams ...interface{}) {
if tcpc, ok := stream.(interface {
CloseWrite() error
}); ok {
_ = tcpc.CloseWrite()
tcpc.CloseWrite()
} else if closer, ok := stream.(io.Closer); ok {
_ = closer.Close()
closer.Close()
}
}
}
@@ -81,7 +83,7 @@ func VersionFromContext(ctx context.Context) string {
return ""
}
if val := ctx.Value(APIVersionKey{}); val != nil {
if val := ctx.Value(APIVersionKey); val != nil {
return val.(string)
}
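Both sides of this diff rely on the same Go idiom: the context key is a type only this package can construct (an empty struct on one side, a defined string type on the other), so values stored under it cannot collide with keys from other packages. A standalone sketch of the pattern, not tied to this codebase:

```
package main

import (
	"context"
	"fmt"
)

// An unexported key type means no other package can build the same key,
// so ctx.Value lookups cannot collide across packages.
type apiVersionKey struct{}

func withAPIVersion(ctx context.Context, v string) context.Context {
	return context.WithValue(ctx, apiVersionKey{}, v)
}

func apiVersion(ctx context.Context) string {
	v, _ := ctx.Value(apiVersionKey{}).(string)
	return v
}

func main() {
	ctx := withAPIVersion(context.Background(), "1.39")
	fmt.Println(apiVersion(ctx)) // prints: 1.39
}
```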


@@ -51,10 +51,10 @@ func WriteLogStream(_ context.Context, w io.Writer, msgs <-chan *backend.LogMess
logLine = append([]byte(msg.Timestamp.Format(jsonmessage.RFC3339NanoFixed)+" "), logLine...)
}
if msg.Source == "stdout" && config.ShowStdout {
_, _ = outStream.Write(logLine)
outStream.Write(logLine)
}
if msg.Source == "stderr" && config.ShowStderr {
_, _ = errStream.Write(logLine)
errStream.Write(logLine)
}
}
}


@@ -18,7 +18,7 @@ func DebugRequestMiddleware(handler func(ctx context.Context, w http.ResponseWri
return func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
logrus.Debugf("Calling %s %s", r.Method, r.RequestURI)
if r.Method != http.MethodPost {
if r.Method != "POST" {
return handler(ctx, w, r, vars)
}
if err := httputils.CheckForJSON(r); err != nil {
@@ -41,7 +41,7 @@ func DebugRequestMiddleware(handler func(ctx context.Context, w http.ResponseWri
var postForm map[string]interface{}
if err := json.Unmarshal(b, &postForm); err == nil {
maskSecretKeys(postForm)
maskSecretKeys(postForm, r.RequestURI)
formStr, errMarshal := json.Marshal(postForm)
if errMarshal == nil {
logrus.Debugf("form data: %s", string(formStr))
@@ -54,37 +54,41 @@ func DebugRequestMiddleware(handler func(ctx context.Context, w http.ResponseWri
}
}
func maskSecretKeys(inp interface{}) {
func maskSecretKeys(inp interface{}, path string) {
// Remove any query string from the path
idx := strings.Index(path, "?")
if idx != -1 {
path = path[:idx]
}
// Remove trailing / characters
path = strings.TrimRight(path, "/")
if arr, ok := inp.([]interface{}); ok {
for _, f := range arr {
maskSecretKeys(f)
maskSecretKeys(f, path)
}
return
}
if form, ok := inp.(map[string]interface{}); ok {
scrub := []string{
// Note: The Data field contains the base64-encoded secret in 'secret'
// and 'config' create and update requests. Currently, no other POST
// API endpoints use a data field, so we scrub this field unconditionally.
// Change this handling to be conditional if a new endpoint is added
// in future where this field should not be scrubbed.
"data",
"jointoken",
"password",
"secret",
"signingcakey",
"unlockkey",
}
loop0:
for k, v := range form {
for _, m := range scrub {
for _, m := range []string{"password", "secret", "jointoken", "unlockkey", "signingcakey"} {
if strings.EqualFold(m, k) {
form[k] = "*****"
continue loop0
}
}
maskSecretKeys(v)
maskSecretKeys(v, path)
}
// Route-specific redactions
if strings.HasSuffix(path, "/secrets/create") {
for k := range form {
if k == "Data" {
form[k] = "*****"
}
}
}
}
}
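For readers skimming the diff: stripped of the route-specific `Data` redaction (which the two versions handle differently), the recursive masking reduces to the following runnable sketch:

```
package main

import (
	"fmt"
	"strings"
)

// maskSecrets is a trimmed-down, standalone variant of the middleware's
// recursive masking shown above.
func maskSecrets(inp interface{}) {
	if arr, ok := inp.([]interface{}); ok {
		for _, f := range arr {
			maskSecrets(f)
		}
		return
	}
	form, ok := inp.(map[string]interface{})
	if !ok {
		return
	}
loop:
	for k, v := range form {
		for _, m := range []string{"password", "secret", "jointoken", "unlockkey", "signingcakey"} {
			if strings.EqualFold(m, k) {
				form[k] = "*****"
				continue loop
			}
		}
		maskSecrets(v)
	}
}

func main() {
	body := map[string]interface{}{
		"Name":     "my-service",
		"Password": "hunter2",
		"Other":    map[string]interface{}{"Secret": "s3cr3t"},
	}
	maskSecrets(body)
	fmt.Println(body) // map[Name:my-service Other:map[Secret:*****] Password:*****]
}
```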


@@ -3,31 +3,37 @@ package middleware // import "github.com/docker/docker/api/server/middleware"
import (
"testing"
"gotest.tools/v3/assert"
is "gotest.tools/v3/assert/cmp"
"gotest.tools/assert"
is "gotest.tools/assert/cmp"
)
func TestMaskSecretKeys(t *testing.T) {
tests := []struct {
doc string
path string
input map[string]interface{}
expected map[string]interface{}
}{
{
doc: "secret/config create and update requests",
path: "/v1.30/secrets/create",
input: map[string]interface{}{"Data": "foo", "Name": "name", "Labels": map[string]interface{}{}},
expected: map[string]interface{}{"Data": "*****", "Name": "name", "Labels": map[string]interface{}{}},
},
{
doc: "masking other fields (recursively)",
path: "/v1.30/secrets/create//",
input: map[string]interface{}{"Data": "foo", "Name": "name", "Labels": map[string]interface{}{}},
expected: map[string]interface{}{"Data": "*****", "Name": "name", "Labels": map[string]interface{}{}},
},
{
path: "/secrets/create?key=val",
input: map[string]interface{}{"Data": "foo", "Name": "name", "Labels": map[string]interface{}{}},
expected: map[string]interface{}{"Data": "*****", "Name": "name", "Labels": map[string]interface{}{}},
},
{
path: "/v1.30/some/other/path",
input: map[string]interface{}{
"password": "pass",
"secret": "secret",
"jointoken": "jointoken",
"unlockkey": "unlockkey",
"signingcakey": "signingcakey",
"password": "pass",
"other": map[string]interface{}{
"password": "pass",
"secret": "secret",
"jointoken": "jointoken",
"unlockkey": "unlockkey",
@@ -35,13 +41,8 @@ func TestMaskSecretKeys(t *testing.T) {
},
},
expected: map[string]interface{}{
"password": "*****",
"secret": "*****",
"jointoken": "*****",
"unlockkey": "*****",
"signingcakey": "*****",
"password": "*****",
"other": map[string]interface{}{
"password": "*****",
"secret": "*****",
"jointoken": "*****",
"unlockkey": "*****",
@@ -49,27 +50,10 @@ func TestMaskSecretKeys(t *testing.T) {
},
},
},
{
doc: "case insensitive field matching",
input: map[string]interface{}{
"PASSWORD": "pass",
"other": map[string]interface{}{
"PASSWORD": "pass",
},
},
expected: map[string]interface{}{
"PASSWORD": "*****",
"other": map[string]interface{}{
"PASSWORD": "*****",
},
},
},
}
for _, testcase := range tests {
t.Run(testcase.doc, func(t *testing.T) {
maskSecretKeys(testcase.input)
assert.Check(t, is.DeepEqual(testcase.expected, testcase.input))
})
maskSecretKeys(testcase.input, testcase.path)
assert.Check(t, is.DeepEqual(testcase.expected, testcase.input))
}
}


@@ -58,7 +58,7 @@ func (v VersionMiddleware) WrapHandler(handler func(ctx context.Context, w http.
if versions.GreaterThan(apiVersion, v.defaultVersion) {
return versionUnsupportedError{version: apiVersion, maxVersion: v.defaultVersion}
}
ctx = context.WithValue(ctx, httputils.APIVersionKey{}, apiVersion)
ctx = context.WithValue(ctx, httputils.APIVersionKey, apiVersion)
return handler(ctx, w, r, vars)
}


@@ -8,8 +8,8 @@ import (
"testing"
"github.com/docker/docker/api/server/httputils"
"gotest.tools/v3/assert"
is "gotest.tools/v3/assert/cmp"
"gotest.tools/assert"
is "gotest.tools/assert/cmp"
)
func TestVersionMiddlewareVersion(t *testing.T) {
@@ -25,7 +25,7 @@ func TestVersionMiddlewareVersion(t *testing.T) {
m := NewVersionMiddleware(defaultVersion, defaultVersion, minVersion)
h := m.WrapHandler(handler)
req, _ := http.NewRequest(http.MethodGet, "/containers/json", nil)
req, _ := http.NewRequest("GET", "/containers/json", nil)
resp := httptest.NewRecorder()
ctx := context.Background()
@@ -76,7 +76,7 @@ func TestVersionMiddlewareWithErrorsReturnsHeaders(t *testing.T) {
m := NewVersionMiddleware(defaultVersion, defaultVersion, minVersion)
h := m.WrapHandler(handler)
req, _ := http.NewRequest(http.MethodGet, "/containers/json", nil)
req, _ := http.NewRequest("GET", "/containers/json", nil)
resp := httptest.NewRecorder()
ctx := context.Background()


@@ -31,8 +31,8 @@ func (r *buildRouter) Routes() []router.Route {
func (r *buildRouter) initRoutes() {
r.routes = []router.Route{
router.NewPostRoute("/build", r.postBuild),
router.NewPostRoute("/build/prune", r.postPrune),
router.NewPostRoute("/build", r.postBuild, router.WithCancel),
router.NewPostRoute("/build/prune", r.postPrune, router.WithCancel),
router.NewPostRoute("/build/cancel", r.postCancel),
}
}


@@ -38,36 +38,8 @@ func (e invalidIsolationError) Error() string {
func (e invalidIsolationError) InvalidParameter() {}
func newImageBuildOptions(ctx context.Context, r *http.Request) (*types.ImageBuildOptions, error) {
options := &types.ImageBuildOptions{
Version: types.BuilderV1, // Builder V1 is the default, but can be overridden
Dockerfile: r.FormValue("dockerfile"),
SuppressOutput: httputils.BoolValue(r, "q"),
NoCache: httputils.BoolValue(r, "nocache"),
ForceRemove: httputils.BoolValue(r, "forcerm"),
MemorySwap: httputils.Int64ValueOrZero(r, "memswap"),
Memory: httputils.Int64ValueOrZero(r, "memory"),
CPUShares: httputils.Int64ValueOrZero(r, "cpushares"),
CPUPeriod: httputils.Int64ValueOrZero(r, "cpuperiod"),
CPUQuota: httputils.Int64ValueOrZero(r, "cpuquota"),
CPUSetCPUs: r.FormValue("cpusetcpus"),
CPUSetMems: r.FormValue("cpusetmems"),
CgroupParent: r.FormValue("cgroupparent"),
NetworkMode: r.FormValue("networkmode"),
Tags: r.Form["t"],
ExtraHosts: r.Form["extrahosts"],
SecurityOpt: r.Form["securityopt"],
Squash: httputils.BoolValue(r, "squash"),
Target: r.FormValue("target"),
RemoteContext: r.FormValue("remote"),
SessionID: r.FormValue("session"),
BuildID: r.FormValue("buildid"),
}
if runtime.GOOS != "windows" && options.SecurityOpt != nil {
return nil, errdefs.InvalidParameter(errors.New("The daemon on this platform does not support setting security options on build"))
}
version := httputils.VersionFromContext(ctx)
options := &types.ImageBuildOptions{}
if httputils.BoolValue(r, "forcerm") && versions.GreaterThanOrEqualTo(version, "1.12") {
options.Remove = true
} else if r.FormValue("rm") == "" && versions.GreaterThanOrEqualTo(version, "1.12") {
@@ -78,37 +50,52 @@ func newImageBuildOptions(ctx context.Context, r *http.Request) (*types.ImageBui
if httputils.BoolValue(r, "pull") && versions.GreaterThanOrEqualTo(version, "1.16") {
options.PullParent = true
}
options.Dockerfile = r.FormValue("dockerfile")
options.SuppressOutput = httputils.BoolValue(r, "q")
options.NoCache = httputils.BoolValue(r, "nocache")
options.ForceRemove = httputils.BoolValue(r, "forcerm")
options.MemorySwap = httputils.Int64ValueOrZero(r, "memswap")
options.Memory = httputils.Int64ValueOrZero(r, "memory")
options.CPUShares = httputils.Int64ValueOrZero(r, "cpushares")
options.CPUPeriod = httputils.Int64ValueOrZero(r, "cpuperiod")
options.CPUQuota = httputils.Int64ValueOrZero(r, "cpuquota")
options.CPUSetCPUs = r.FormValue("cpusetcpus")
options.CPUSetMems = r.FormValue("cpusetmems")
options.CgroupParent = r.FormValue("cgroupparent")
options.NetworkMode = r.FormValue("networkmode")
options.Tags = r.Form["t"]
options.ExtraHosts = r.Form["extrahosts"]
options.SecurityOpt = r.Form["securityopt"]
options.Squash = httputils.BoolValue(r, "squash")
options.Target = r.FormValue("target")
options.RemoteContext = r.FormValue("remote")
if versions.GreaterThanOrEqualTo(version, "1.32") {
options.Platform = r.FormValue("platform")
}
if versions.GreaterThanOrEqualTo(version, "1.40") {
outputsJSON := r.FormValue("outputs")
if outputsJSON != "" {
var outputs []types.ImageBuildOutput
if err := json.Unmarshal([]byte(outputsJSON), &outputs); err != nil {
return nil, err
}
options.Outputs = outputs
}
}
if s := r.Form.Get("shmsize"); s != "" {
shmSize, err := strconv.ParseInt(s, 10, 64)
if r.Form.Get("shmsize") != "" {
shmSize, err := strconv.ParseInt(r.Form.Get("shmsize"), 10, 64)
if err != nil {
return nil, err
}
options.ShmSize = shmSize
}
if i := r.FormValue("isolation"); i != "" {
options.Isolation = container.Isolation(i)
if !options.Isolation.IsValid() {
return nil, invalidIsolationError(options.Isolation)
if i := container.Isolation(r.FormValue("isolation")); i != "" {
if !container.Isolation.IsValid(i) {
return nil, invalidIsolationError(i)
}
options.Isolation = i
}
if ulimitsJSON := r.FormValue("ulimits"); ulimitsJSON != "" {
var buildUlimits = []*units.Ulimit{}
if runtime.GOOS != "windows" && options.SecurityOpt != nil {
return nil, errdefs.InvalidParameter(errors.New("The daemon on this platform does not support setting security options on build"))
}
var buildUlimits = []*units.Ulimit{}
ulimitsJSON := r.FormValue("ulimits")
if ulimitsJSON != "" {
if err := json.Unmarshal([]byte(ulimitsJSON), &buildUlimits); err != nil {
return nil, errors.Wrap(errdefs.InvalidParameter(err), "error reading ulimit settings")
}
@@ -127,7 +114,8 @@ func newImageBuildOptions(ctx context.Context, r *http.Request) (*types.ImageBui
// the fact they mentioned it, we need to pass that along to the builder
// so that it can print a warning about "foo" being unused if there is
// no "ARG foo" in the Dockerfile.
if buildArgsJSON := r.FormValue("buildargs"); buildArgsJSON != "" {
buildArgsJSON := r.FormValue("buildargs")
if buildArgsJSON != "" {
var buildArgs = map[string]*string{}
if err := json.Unmarshal([]byte(buildArgsJSON), &buildArgs); err != nil {
return nil, errors.Wrap(errdefs.InvalidParameter(err), "error reading build args")
@@ -135,7 +123,8 @@ func newImageBuildOptions(ctx context.Context, r *http.Request) (*types.ImageBui
options.BuildArgs = buildArgs
}
if labelsJSON := r.FormValue("labels"); labelsJSON != "" {
labelsJSON := r.FormValue("labels")
if labelsJSON != "" {
var labels = map[string]string{}
if err := json.Unmarshal([]byte(labelsJSON), &labels); err != nil {
return nil, errors.Wrap(errdefs.InvalidParameter(err), "error reading labels")
@@ -143,41 +132,40 @@ func newImageBuildOptions(ctx context.Context, r *http.Request) (*types.ImageBui
options.Labels = labels
}
if cacheFromJSON := r.FormValue("cachefrom"); cacheFromJSON != "" {
cacheFromJSON := r.FormValue("cachefrom")
if cacheFromJSON != "" {
var cacheFrom = []string{}
if err := json.Unmarshal([]byte(cacheFromJSON), &cacheFrom); err != nil {
return nil, err
}
options.CacheFrom = cacheFrom
}
if bv := r.FormValue("version"); bv != "" {
v, err := parseVersion(bv)
if err != nil {
return nil, err
}
options.Version = v
options.SessionID = r.FormValue("session")
options.BuildID = r.FormValue("buildid")
builderVersion, err := parseVersion(r.FormValue("version"))
if err != nil {
return nil, err
}
options.Version = builderVersion
return options, nil
}
func parseVersion(s string) (types.BuilderVersion, error) {
switch types.BuilderVersion(s) {
case types.BuilderV1:
if s == "" || s == string(types.BuilderV1) {
return types.BuilderV1, nil
case types.BuilderBuildKit:
return types.BuilderBuildKit, nil
default:
return "", errors.Errorf("invalid version %q", s)
}
if s == string(types.BuilderBuildKit) {
return types.BuilderBuildKit, nil
}
return "", errors.Errorf("invalid version %s", s)
}
func (br *buildRouter) postPrune(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
}
fltrs, err := filters.FromJSON(r.Form.Get("filters"))
filters, err := filters.FromJSON(r.Form.Get("filters"))
if err != nil {
return errors.Wrap(err, "could not parse filters")
}
@@ -192,7 +180,7 @@ func (br *buildRouter) postPrune(ctx context.Context, w http.ResponseWriter, r *
opts := types.BuildCachePruneOptions{
All: httputils.BoolValue(r, "all"),
Filters: fltrs,
Filters: filters,
KeepStorage: int64(ks),
}
@@ -235,12 +223,12 @@ func (br *buildRouter) postBuild(ctx context.Context, w http.ResponseWriter, r *
}
output := ioutils.NewWriteFlusher(ww)
defer func() { _ = output.Close() }()
defer output.Close()
errf := func(err error) error {
if httputils.BoolValue(r, "q") && notVerboseBuffer.Len() > 0 {
_, _ = output.Write(notVerboseBuffer.Bytes())
output.Write(notVerboseBuffer.Bytes())
}
// Do not write the error in the http output if it's still empty.
@@ -291,7 +279,7 @@ func (br *buildRouter) postBuild(ctx context.Context, w http.ResponseWriter, r *
// Everything worked so if -q was provided the output from the daemon
// should be just the image ID and we'll print that to stdout.
if buildOptions.SuppressOutput {
_, _ = fmt.Fprintln(streamformatter.NewStdoutWriter(output), imgID)
fmt.Fprintln(streamformatter.NewStdoutWriter(output), imgID)
}
return nil
}
@@ -307,7 +295,7 @@ func getAuthConfigs(header http.Header) map[string]types.AuthConfig {
authConfigsJSON := base64.NewDecoder(base64.URLEncoding, strings.NewReader(authConfigsEncoded))
// Pulling an image does not error when no auth is provided, so to remain
// consistent with the existing API, decode errors are ignored
_ = json.NewDecoder(authConfigsJSON).Decode(&authConfigs)
json.NewDecoder(authConfigsJSON).Decode(&authConfigs)
return authConfigs
}
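
The decoder chain here (a base64url reader feeding a JSON decoder) is compact enough to miss. A self-contained sketch with a stand-in AuthConfig type and an assumed header payload; as in the handler, the decode error is deliberately ignored so a malformed header simply yields an empty map:

package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
	"strings"
)

// Stand-in for types.AuthConfig with only the fields this sketch needs.
type AuthConfig struct {
	Username string `json:"username"`
	Password string `json:"password"`
}

func main() {
	// An assumed header value: base64url-encoded JSON keyed by registry.
	encoded := base64.URLEncoding.EncodeToString(
		[]byte(`{"registry.example.com":{"username":"u","password":"p"}}`))

	authConfigs := map[string]AuthConfig{}
	body := base64.NewDecoder(base64.URLEncoding, strings.NewReader(encoded))
	_ = json.NewDecoder(body).Decode(&authConfigs) // error deliberately ignored

	fmt.Printf("%+v\n", authConfigs)
}
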
@@ -429,7 +417,7 @@ func (w *wcf) notify() {
w.mu.Lock()
if !w.ready {
if w.buf.Len() > 0 {
_, _ = io.Copy(w.Writer, w.buf)
io.Copy(w.Writer, w.buf)
}
if w.flushed {
w.flusher.Flush()


@@ -10,15 +10,13 @@ type containerRouter struct {
backend Backend
decoder httputils.ContainerDecoder
routes []router.Route
cgroup2 bool
}
// NewRouter initializes a new container router
func NewRouter(b Backend, decoder httputils.ContainerDecoder, cgroup2 bool) router.Router {
func NewRouter(b Backend, decoder httputils.ContainerDecoder) router.Router {
r := &containerRouter{
backend: b,
decoder: decoder,
cgroup2: cgroup2,
}
r.initRoutes()
return r
@@ -40,8 +38,8 @@ func (r *containerRouter) initRoutes() {
router.NewGetRoute("/containers/{name:.*}/changes", r.getContainersChanges),
router.NewGetRoute("/containers/{name:.*}/json", r.getContainersByName),
router.NewGetRoute("/containers/{name:.*}/top", r.getContainersTop),
router.NewGetRoute("/containers/{name:.*}/logs", r.getContainersLogs),
router.NewGetRoute("/containers/{name:.*}/stats", r.getContainersStats),
router.NewGetRoute("/containers/{name:.*}/logs", r.getContainersLogs, router.WithCancel),
router.NewGetRoute("/containers/{name:.*}/stats", r.getContainersStats, router.WithCancel),
router.NewGetRoute("/containers/{name:.*}/attach/ws", r.wsContainersAttach),
router.NewGetRoute("/exec/{id:.*}/json", r.getExecByID),
router.NewGetRoute("/containers/{name:.*}/archive", r.getContainersArchive),
@@ -53,7 +51,7 @@ func (r *containerRouter) initRoutes() {
router.NewPostRoute("/containers/{name:.*}/restart", r.postContainersRestart),
router.NewPostRoute("/containers/{name:.*}/start", r.postContainersStart),
router.NewPostRoute("/containers/{name:.*}/stop", r.postContainersStop),
router.NewPostRoute("/containers/{name:.*}/wait", r.postContainersWait),
router.NewPostRoute("/containers/{name:.*}/wait", r.postContainersWait, router.WithCancel),
router.NewPostRoute("/containers/{name:.*}/resize", r.postContainersResize),
router.NewPostRoute("/containers/{name:.*}/attach", r.postContainersAttach),
router.NewPostRoute("/containers/{name:.*}/copy", r.postContainersCopy), // Deprecated since 1.8, Errors out since 1.12
@@ -62,7 +60,7 @@ func (r *containerRouter) initRoutes() {
router.NewPostRoute("/exec/{name:.*}/resize", r.postContainerExecResize),
router.NewPostRoute("/containers/{name:.*}/rename", r.postContainerRename),
router.NewPostRoute("/containers/{name:.*}/update", r.postContainerUpdate),
router.NewPostRoute("/containers/prune", r.postContainersPrune),
router.NewPostRoute("/containers/prune", r.postContainersPrune, router.WithCancel),
router.NewPostRoute("/commit", r.postCommit),
// PUT
router.NewPutRoute("/containers/{name:.*}/archive", r.putContainersArchive),


@@ -9,8 +9,6 @@ import (
"strconv"
"syscall"
"github.com/containerd/containerd/platforms"
"github.com/docker/docker/api/server/httpstatus"
"github.com/docker/docker/api/server/httputils"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/backend"
@@ -21,7 +19,6 @@ import (
"github.com/docker/docker/errdefs"
"github.com/docker/docker/pkg/ioutils"
"github.com/docker/docker/pkg/signal"
specs "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"golang.org/x/net/websocket"
@@ -44,7 +41,7 @@ func (s *containerRouter) postCommit(ctx context.Context, w http.ResponseWriter,
}
config, _, _, err := s.decoder.DecodeConfig(r.Body)
if err != nil && err != io.EOF { // Do not fail if body is empty.
if err != nil && err != io.EOF { //Do not fail if body is empty.
return err
}
@@ -108,14 +105,9 @@ func (s *containerRouter) getContainersStats(ctx context.Context, w http.Respons
if !stream {
w.Header().Set("Content-Type", "application/json")
}
var oneShot bool
if versions.GreaterThanOrEqualTo(httputils.VersionFromContext(ctx), "1.41") {
oneShot = httputils.BoolValueOrDefault(r, "one-shot", false)
}
config := &backend.ContainerStatsConfig{
Stream: stream,
OneShot: oneShot,
OutStream: w,
Version: httputils.VersionFromContext(ctx),
}
@@ -346,6 +338,9 @@ func (s *containerRouter) postContainersWait(ctx context.Context, w http.Respons
}
}
// Note: the context should get canceled if the client closes the
// connection since this handler has been wrapped by the
// router.WithCancel() wrapper.
waitC, err := s.backend.ContainerWait(ctx, vars["name"], waitCondition)
if err != nil {
return err
@@ -433,16 +428,6 @@ func (s *containerRouter) postContainerUpdate(ctx context.Context, w http.Respon
if err := decoder.Decode(&updateConfig); err != nil {
return err
}
if versions.LessThan(httputils.VersionFromContext(ctx), "1.40") {
updateConfig.PidsLimit = nil
}
if updateConfig.PidsLimit != nil && *updateConfig.PidsLimit <= 0 {
// Both `0` and `-1` are accepted to set "unlimited" when updating.
// Historically, any negative value was accepted, so treat them as
// "unlimited" as well.
var unlimited int64
updateConfig.PidsLimit = &unlimited
}
hostConfig := &container.HostConfig{
Resources: updateConfig.Resources,
@@ -480,54 +465,12 @@ func (s *containerRouter) postContainersCreate(ctx context.Context, w http.Respo
hostConfig.AutoRemove = false
}
if hostConfig != nil && versions.LessThan(version, "1.40") {
// Ignore BindOptions.NonRecursive because it was added in API 1.40.
for _, m := range hostConfig.Mounts {
if bo := m.BindOptions; bo != nil {
bo.NonRecursive = false
}
}
// Ignore KernelMemoryTCP because it was added in API 1.40.
hostConfig.KernelMemoryTCP = 0
// Older clients (API < 1.40) expects the default to be shareable, make them happy
if hostConfig.IpcMode.IsEmpty() {
hostConfig.IpcMode = container.IpcMode("shareable")
}
}
if hostConfig != nil && versions.LessThan(version, "1.41") && !s.cgroup2 {
// Older clients expect the default to be "host" on cgroup v1 hosts
if hostConfig.CgroupnsMode.IsEmpty() {
hostConfig.CgroupnsMode = container.CgroupnsMode("host")
}
}
var platform *specs.Platform
if versions.GreaterThanOrEqualTo(version, "1.41") {
if v := r.Form.Get("platform"); v != "" {
p, err := platforms.Parse(v)
if err != nil {
return errdefs.InvalidParameter(err)
}
platform = &p
}
}
if hostConfig != nil && hostConfig.PidsLimit != nil && *hostConfig.PidsLimit <= 0 {
// Don't set a limit if either no limit was specified, or "unlimited" was
// explicitly set.
// Both `0` and `-1` are accepted as "unlimited", and historically any
// negative value was accepted, so treat those as "unlimited" as well.
hostConfig.PidsLimit = nil
}
ccr, err := s.backend.ContainerCreate(types.ContainerCreateConfig{
Name: name,
Config: config,
HostConfig: hostConfig,
NetworkingConfig: networkingConfig,
AdjustCPUShares: adjustCPUShares,
Platform: platform,
})
if err != nil {
return err
@@ -627,7 +570,7 @@ func (s *containerRouter) postContainersAttach(ctx context.Context, w http.Respo
// Remember to close stream if error happens
conn, _, errHijack := hijacker.Hijack()
if errHijack == nil {
statusCode := httpstatus.FromError(err)
statusCode := httputils.GetHTTPErrorStatusCode(err)
statusText := http.StatusText(statusCode)
fmt.Fprintf(conn, "HTTP/1.1 %d %s\r\nContent-Type: application/vnd.docker.raw-stream\r\n\r\n%s\r\n", statusCode, statusText, err.Error())
httputils.CloseStreams(conn)


@@ -14,8 +14,7 @@ import (
"github.com/docker/docker/api/server/httputils"
"github.com/docker/docker/api/types"
registrytypes "github.com/docker/docker/api/types/registry"
"github.com/docker/docker/errdefs"
v1 "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
)
@@ -43,10 +42,9 @@ func (s *distributionRouter) getDistributionInfo(ctx context.Context, w http.Res
image := vars["name"]
// TODO why is reference.ParseAnyReference() / reference.ParseNormalizedNamed() not using the reference.ErrTagInvalidFormat (and so on) errors?
ref, err := reference.ParseAnyReference(image)
if err != nil {
return errdefs.InvalidParameter(err)
return err
}
namedRef, ok := ref.(reference.Named)
if !ok {
@@ -54,7 +52,7 @@ func (s *distributionRouter) getDistributionInfo(ctx context.Context, w http.Res
// full image ID
return errors.Errorf("no manifest found for full image ID")
}
return errdefs.InvalidParameter(errors.Errorf("unknown image reference format: %s", image))
return errors.Errorf("unknown image reference format: %s", image)
}
distrepo, _, err := s.backend.GetRepository(ctx, namedRef, config)
@@ -68,7 +66,7 @@ func (s *distributionRouter) getDistributionInfo(ctx context.Context, w http.Res
taggedRef, ok := namedRef.(reference.NamedTagged)
if !ok {
return errdefs.InvalidParameter(errors.Errorf("image reference not tagged: %s", image))
return errors.Errorf("image reference not tagged: %s", image)
}
descriptor, err := distrepo.Tags(ctx).Get(ctx, taggedRef.Tag())
@@ -94,16 +92,6 @@ func (s *distributionRouter) getDistributionInfo(ctx context.Context, w http.Res
}
mnfst, err := mnfstsrvc.Get(ctx, distributionInspect.Descriptor.Digest)
if err != nil {
switch err {
case reference.ErrReferenceInvalidFormat,
reference.ErrTagInvalidFormat,
reference.ErrDigestInvalidFormat,
reference.ErrNameContainsUppercase,
reference.ErrNameEmpty,
reference.ErrNameTooLong,
reference.ErrNameNotCanonical:
return errdefs.InvalidParameter(err)
}
return err
}


@@ -44,7 +44,7 @@ func experimentalHandler(ctx context.Context, w http.ResponseWriter, r *http.Req
return notImplementedError{}
}
// Handler returns the APIFunc to let the server wrap it in middlewares.
// Handler returns returns the APIFunc to let the server wrap it in middlewares.
func (r *experimentalRoute) Handler() httputils.APIFunc {
return r.handler
}


@@ -1,8 +0,0 @@
package grpc // import "github.com/docker/docker/api/server/router/grpc"
import "google.golang.org/grpc"
// Backend abstracts a registerable GRPC service.
type Backend interface {
RegisterGRPC(*grpc.Server)
}


@@ -1,41 +0,0 @@
package grpc // import "github.com/docker/docker/api/server/router/grpc"
import (
"github.com/docker/docker/api/server/router"
"github.com/moby/buildkit/util/grpcerrors"
"golang.org/x/net/http2"
"google.golang.org/grpc"
)
type grpcRouter struct {
routes []router.Route
grpcServer *grpc.Server
h2Server *http2.Server
}
// NewRouter initializes a new grpc http router
func NewRouter(backends ...Backend) router.Router {
r := &grpcRouter{
h2Server: &http2.Server{},
grpcServer: grpc.NewServer(
grpc.UnaryInterceptor(grpcerrors.UnaryServerInterceptor),
grpc.StreamInterceptor(grpcerrors.StreamServerInterceptor),
),
}
for _, b := range backends {
b.RegisterGRPC(r.grpcServer)
}
r.initRoutes()
return r
}
// Routes returns the available routers to the session controller
func (gr *grpcRouter) Routes() []router.Route {
return gr.routes
}
func (gr *grpcRouter) initRoutes() {
gr.routes = []router.Route{
router.NewPostRoute("/grpc", gr.serveGRPC),
}
}


@@ -1,45 +0,0 @@
package grpc // import "github.com/docker/docker/api/server/router/grpc"
import (
"context"
"net/http"
"github.com/pkg/errors"
"golang.org/x/net/http2"
)
func (gr *grpcRouter) serveGRPC(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
h, ok := w.(http.Hijacker)
if !ok {
return errors.New("handler does not support hijack")
}
proto := r.Header.Get("Upgrade")
if proto == "" {
return errors.New("no upgrade proto in request")
}
if proto != "h2c" {
return errors.Errorf("protocol %s not supported", proto)
}
conn, _, err := h.Hijack()
if err != nil {
return err
}
resp := &http.Response{
StatusCode: http.StatusSwitchingProtocols,
ProtoMajor: 1,
ProtoMinor: 1,
Header: http.Header{},
}
resp.Header.Set("Connection", "Upgrade")
resp.Header.Set("Upgrade", proto)
// set raw mode
conn.Write([]byte{})
resp.Write(conn)
// https://godoc.org/golang.org/x/net/http2#Server.ServeConn
// TODO: is it a problem that conn has already been written to?
gr.h2Server.ServeConn(conn, &http2.ServeConnOpts{Handler: gr.grpcServer})
return nil
}
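
The hijack-then-ServeConn sequence above is a hand-rolled cleartext HTTP/2 upgrade. For comparison, outside this codebase the x/net helper h2c.NewHandler performs the same "Upgrade: h2c" negotiation; a minimal sketch:

package main

import (
	"fmt"
	"net/http"

	"golang.org/x/net/http2"
	"golang.org/x/net/http2/h2c"
)

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintf(w, "served over %s\n", r.Proto)
	})
	// Accepts prior-knowledge HTTP/2 and the "Upgrade: h2c" handshake,
	// falling back to plain HTTP/1.1 for everything else.
	handler := h2c.NewHandler(mux, &http2.Server{})
	_ = http.ListenAndServe(":8080", handler)
}
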


@@ -34,10 +34,10 @@ func (r *imageRouter) initRoutes() {
router.NewGetRoute("/images/{name:.*}/json", r.getImagesByName),
// POST
router.NewPostRoute("/images/load", r.postImagesLoad),
router.NewPostRoute("/images/create", r.postImagesCreate),
router.NewPostRoute("/images/{name:.*}/push", r.postImagesPush),
router.NewPostRoute("/images/create", r.postImagesCreate, router.WithCancel),
router.NewPostRoute("/images/{name:.*}/push", r.postImagesPush, router.WithCancel),
router.NewPostRoute("/images/{name:.*}/tag", r.postImagesTag),
router.NewPostRoute("/images/prune", r.postImagesPrune),
router.NewPostRoute("/images/prune", r.postImagesPrune, router.WithCancel),
// DELETE
router.NewDeleteRoute("/images/{name:.*}", r.deleteImages),
}


@@ -57,41 +57,43 @@ func (s *imageRouter) postImagesCreate(ctx context.Context, w http.ResponseWrite
}
}
if image != "" { // pull
metaHeaders := map[string][]string{}
for k, v := range r.Header {
if strings.HasPrefix(k, "X-Meta-") {
metaHeaders[k] = v
if err == nil {
if image != "" { //pull
metaHeaders := map[string][]string{}
for k, v := range r.Header {
if strings.HasPrefix(k, "X-Meta-") {
metaHeaders[k] = v
}
}
}
authEncoded := r.Header.Get("X-Registry-Auth")
authConfig := &types.AuthConfig{}
if authEncoded != "" {
authJSON := base64.NewDecoder(base64.URLEncoding, strings.NewReader(authEncoded))
if err := json.NewDecoder(authJSON).Decode(authConfig); err != nil {
// for a pull it is not an error if no auth was given
// to increase compatibility with the existing API it defaults to empty
authConfig = &types.AuthConfig{}
authEncoded := r.Header.Get("X-Registry-Auth")
authConfig := &types.AuthConfig{}
if authEncoded != "" {
authJSON := base64.NewDecoder(base64.URLEncoding, strings.NewReader(authEncoded))
if err := json.NewDecoder(authJSON).Decode(authConfig); err != nil {
// for a pull it is not an error if no auth was given
// to increase compatibility with the existing API it defaults to empty
authConfig = &types.AuthConfig{}
}
}
err = s.backend.PullImage(ctx, image, tag, platform, metaHeaders, authConfig, output)
} else { //import
src := r.Form.Get("fromSrc")
// 'err' MUST NOT be defined within this block; we need any error
// generated from the download to be available to the output
// stream processing below
os := ""
if platform != nil {
os = platform.OS
}
err = s.backend.ImportImage(src, repo, os, tag, message, r.Body, output, r.Form["changes"])
}
err = s.backend.PullImage(ctx, image, tag, platform, metaHeaders, authConfig, output)
} else { // import
src := r.Form.Get("fromSrc")
// 'err' MUST NOT be defined within this block; we need any error
// generated from the download to be available to the output
// stream processing below
os := ""
if platform != nil {
os = platform.OS
}
err = s.backend.ImportImage(src, repo, os, tag, message, r.Body, output, r.Form["changes"])
}
if err != nil {
if !output.Flushed() {
return err
}
_, _ = output.Write(streamformatter.FormatError(err))
output.Write(streamformatter.FormatError(err))
}
return nil
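
The shouty "'err' MUST NOT be defined within this block" comment is guarding against Go's := shadowing: declaring err inside the block would leave the outer err nil, and the error-flushing code after the block would silently miss the failure. A toy reproduction:

package main

import (
	"errors"
	"fmt"
)

func importImage() error { return errors.New("import failed") }

func main() {
	var err error
	{
		err := importImage() // BUG: ':=' declares a new, block-scoped err
		_ = err
	}
	fmt.Println(err) // <nil>; the failure is lost

	{
		err = importImage() // '=' assigns to the outer err, as intended
	}
	fmt.Println(err) // import failed
}
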
@@ -136,7 +138,7 @@ func (s *imageRouter) postImagesPush(ctx context.Context, w http.ResponseWriter,
if !output.Flushed() {
return err
}
_, _ = output.Write(streamformatter.FormatError(err))
output.Write(streamformatter.FormatError(err))
}
return nil
}
@@ -161,7 +163,7 @@ func (s *imageRouter) getImagesGet(ctx context.Context, w http.ResponseWriter, r
if !output.Flushed() {
return err
}
_, _ = output.Write(streamformatter.FormatError(err))
output.Write(streamformatter.FormatError(err))
}
return nil
}
@@ -177,7 +179,7 @@ func (s *imageRouter) postImagesLoad(ctx context.Context, w http.ResponseWriter,
output := ioutils.NewWriteFlusher(w)
defer output.Close()
if err := s.backend.LoadImage(r.Body, output, quiet); err != nil {
_, _ = output.Write(streamformatter.FormatError(err))
output.Write(streamformatter.FormatError(err))
}
return nil
}
@@ -231,12 +233,10 @@ func (s *imageRouter) getImagesJSON(ctx context.Context, w http.ResponseWriter,
return err
}
version := httputils.VersionFromContext(ctx)
if versions.LessThan(version, "1.41") {
filterParam := r.Form.Get("filter")
if filterParam != "" {
imageFilters.Add("reference", filterParam)
}
filterParam := r.Form.Get("filter")
// FIXME(vdemeester) This has been deprecated in 1.13, and is target for removal for v17.12
if filterParam != "" {
imageFilters.Add("reference", filterParam)
}
images, err := s.backend.Images(imageFilters, httputils.BoolValue(r, "all"), false)


@@ -1,6 +1,7 @@
package router // import "github.com/docker/docker/api/server/router"
import (
"context"
"net/http"
"github.com/docker/docker/api/server/httputils"
@@ -44,30 +45,60 @@ func NewRoute(method, path string, handler httputils.APIFunc, opts ...RouteWrapp
// NewGetRoute initializes a new route with the http method GET.
func NewGetRoute(path string, handler httputils.APIFunc, opts ...RouteWrapper) Route {
return NewRoute(http.MethodGet, path, handler, opts...)
return NewRoute("GET", path, handler, opts...)
}
// NewPostRoute initializes a new route with the http method POST.
func NewPostRoute(path string, handler httputils.APIFunc, opts ...RouteWrapper) Route {
return NewRoute(http.MethodPost, path, handler, opts...)
return NewRoute("POST", path, handler, opts...)
}
// NewPutRoute initializes a new route with the http method PUT.
func NewPutRoute(path string, handler httputils.APIFunc, opts ...RouteWrapper) Route {
return NewRoute(http.MethodPut, path, handler, opts...)
return NewRoute("PUT", path, handler, opts...)
}
// NewDeleteRoute initializes a new route with the http method DELETE.
func NewDeleteRoute(path string, handler httputils.APIFunc, opts ...RouteWrapper) Route {
return NewRoute(http.MethodDelete, path, handler, opts...)
return NewRoute("DELETE", path, handler, opts...)
}
// NewOptionsRoute initializes a new route with the http method OPTIONS.
func NewOptionsRoute(path string, handler httputils.APIFunc, opts ...RouteWrapper) Route {
return NewRoute(http.MethodOptions, path, handler, opts...)
return NewRoute("OPTIONS", path, handler, opts...)
}
// NewHeadRoute initializes a new route with the http method HEAD.
func NewHeadRoute(path string, handler httputils.APIFunc, opts ...RouteWrapper) Route {
return NewRoute(http.MethodHead, path, handler, opts...)
return NewRoute("HEAD", path, handler, opts...)
}
func cancellableHandler(h httputils.APIFunc) httputils.APIFunc {
return func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if notifier, ok := w.(http.CloseNotifier); ok {
notify := notifier.CloseNotify()
notifyCtx, cancel := context.WithCancel(ctx)
finished := make(chan struct{})
defer close(finished)
ctx = notifyCtx
go func() {
select {
case <-notify:
cancel()
case <-finished:
}
}()
}
return h(ctx, w, r, vars)
}
}
// WithCancel makes new route which embeds http.CloseNotifier feature to
// context.Context of handler.
func WithCancel(r Route) Route {
return localRoute{
method: r.Method(),
path: r.Path(),
handler: cancellableHandler(r.Handler()),
}
}
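
WithCancel is built on http.CloseNotifier, the pre-context mechanism for noticing that a client hung up; on current Go the request's own context already carries that signal, so the equivalent handler needs no wrapper. A sketch of the modern form:

package main

import (
	"fmt"
	"net/http"
	"time"
)

func slow(w http.ResponseWriter, r *http.Request) {
	select {
	case <-time.After(10 * time.Second):
		fmt.Fprintln(w, "done")
	case <-r.Context().Done():
		// Fires when the client disconnects, like the notify branch above.
		fmt.Println("canceled:", r.Context().Err())
	}
}

func main() {
	http.HandleFunc("/slow", slow)
	_ = http.ListenAndServe(":8080", nil)
}
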


@@ -36,7 +36,7 @@ func (r *networkRouter) initRoutes() {
router.NewPostRoute("/networks/create", r.postNetworkCreate),
router.NewPostRoute("/networks/{id:.*}/connect", r.postNetworkConnect),
router.NewPostRoute("/networks/{id:.*}/disconnect", r.postNetworkDisconnect),
router.NewPostRoute("/networks/prune", r.postNetworksPrune),
router.NewPostRoute("/networks/prune", r.postNetworksPrune, router.WithCancel),
// DELETE
router.NewDeleteRoute("/networks/{id:.*}", r.deleteNetwork),
}


@@ -30,7 +30,7 @@ func (n *networkRouter) getNetworksList(ctx context.Context, w http.ResponseWrit
}
if err := network.ValidateFilters(filter); err != nil {
return errdefs.InvalidParameter(err)
return err
}
var list []types.NetworkResource


@@ -28,11 +28,11 @@ func (r *pluginRouter) initRoutes() {
router.NewGetRoute("/plugins/{name:.*}/json", r.inspectPlugin),
router.NewGetRoute("/plugins/privileges", r.getPrivileges),
router.NewDeleteRoute("/plugins/{name:.*}", r.removePlugin),
router.NewPostRoute("/plugins/{name:.*}/enable", r.enablePlugin),
router.NewPostRoute("/plugins/{name:.*}/enable", r.enablePlugin), // PATCH?
router.NewPostRoute("/plugins/{name:.*}/disable", r.disablePlugin),
router.NewPostRoute("/plugins/pull", r.pullPlugin),
router.NewPostRoute("/plugins/{name:.*}/push", r.pushPlugin),
router.NewPostRoute("/plugins/{name:.*}/upgrade", r.upgradePlugin),
router.NewPostRoute("/plugins/pull", r.pullPlugin, router.WithCancel),
router.NewPostRoute("/plugins/{name:.*}/push", r.pushPlugin, router.WithCancel),
router.NewPostRoute("/plugins/{name:.*}/upgrade", r.upgradePlugin, router.WithCancel),
router.NewPostRoute("/plugins/{name:.*}/set", r.setPlugin),
router.NewPostRoute("/plugins/create", r.createPlugin),
}


@@ -123,7 +123,7 @@ func (pr *pluginRouter) upgradePlugin(ctx context.Context, w http.ResponseWriter
if !output.Flushed() {
return err
}
_, _ = output.Write(streamformatter.FormatError(err))
output.Write(streamformatter.FormatError(err))
}
return nil
@@ -162,7 +162,7 @@ func (pr *pluginRouter) pullPlugin(ctx context.Context, w http.ResponseWriter, r
if !output.Flushed() {
return err
}
_, _ = output.Write(streamformatter.FormatError(err))
output.Write(streamformatter.FormatError(err))
}
return nil
@@ -211,7 +211,7 @@ func (pr *pluginRouter) createPlugin(ctx context.Context, w http.ResponseWriter,
if err := pr.backend.CreateFromContext(ctx, r.Body, options); err != nil {
return err
}
// TODO: send progress bar
//TODO: send progress bar
w.WriteHeader(http.StatusNoContent)
return nil
}
@@ -270,7 +270,7 @@ func (pr *pluginRouter) pushPlugin(ctx context.Context, w http.ResponseWriter, r
if !output.Flushed() {
return err
}
_, _ = output.Write(streamformatter.FormatError(err))
output.Write(streamformatter.FormatError(err))
}
return nil
}


@@ -37,7 +37,7 @@ func (sr *swarmRouter) initRoutes() {
router.NewPostRoute("/services/create", sr.createService),
router.NewPostRoute("/services/{id}/update", sr.updateService),
router.NewDeleteRoute("/services/{id}", sr.removeService),
router.NewGetRoute("/services/{id}/logs", sr.getServiceLogs),
router.NewGetRoute("/services/{id}/logs", sr.getServiceLogs, router.WithCancel),
router.NewGetRoute("/nodes", sr.getNodes),
router.NewGetRoute("/nodes/{id}", sr.getNode),
@@ -46,7 +46,7 @@ func (sr *swarmRouter) initRoutes() {
router.NewGetRoute("/tasks", sr.getTasks),
router.NewGetRoute("/tasks/{id}", sr.getTask),
router.NewGetRoute("/tasks/{id}/logs", sr.getTaskLogs),
router.NewGetRoute("/tasks/{id}/logs", sr.getTaskLogs, router.WithCancel),
router.NewGetRoute("/secrets", sr.getSecrets),
router.NewPostRoute("/secrets/create", sr.createSecret),


@@ -28,16 +28,11 @@ func (sr *swarmRouter) initCluster(ctx context.Context, w http.ResponseWriter, r
return errdefs.InvalidParameter(err)
}
version := httputils.VersionFromContext(ctx)
// DefaultAddrPool and SubnetSize were added in API 1.39. Ignore on older API versions.
if versions.LessThan(version, "1.39") {
req.DefaultAddrPool = nil
req.SubnetSize = 0
}
// DataPathPort was added in API 1.40. Ignore this option on older API versions.
if versions.LessThan(version, "1.40") {
req.DataPathPort = 0
}
nodeID, err := sr.backend.Init(req)
if err != nil {
logrus.Errorf("Error initializing swarm: %v", err)
@@ -167,19 +162,7 @@ func (sr *swarmRouter) getServices(ctx context.Context, w http.ResponseWriter, r
return errdefs.InvalidParameter(err)
}
// the status query parameter is only support in API versions >= 1.41. If
// the client is using a lesser version, ignore the parameter.
cliVersion := httputils.VersionFromContext(ctx)
var status bool
if value := r.URL.Query().Get("status"); value != "" && !versions.LessThan(cliVersion, "1.41") {
var err error
status, err = strconv.ParseBool(value)
if err != nil {
return errors.Wrapf(errdefs.InvalidParameter(err), "invalid value for status: %s", value)
}
}
services, err := sr.backend.GetServices(basictypes.ServiceListOptions{Filters: filter, Status: status})
services, err := sr.backend.GetServices(basictypes.ServiceListOptions{Filters: filter})
if err != nil {
logrus.Errorf("Error getting services: %v", err)
return err
@@ -190,21 +173,15 @@ func (sr *swarmRouter) getServices(ctx context.Context, w http.ResponseWriter, r
func (sr *swarmRouter) getService(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
var insertDefaults bool
if value := r.URL.Query().Get("insertDefaults"); value != "" {
var err error
insertDefaults, err = strconv.ParseBool(value)
if err != nil {
err := fmt.Errorf("invalid value for insertDefaults: %s", value)
return errors.Wrapf(errdefs.InvalidParameter(err), "invalid value for insertDefaults: %s", value)
}
}
// you may note that there is no code here to handle the "status" query
// parameter, as in getServices. the Status field is not supported when
// retrieving an individual service because the Backend API changes
// required to accommodate it would be too disruptive, and because that
// field is so rarely needed as part of an individual service inspection.
service, err := sr.backend.GetService(vars["id"], insertDefaults)
if err != nil {
logrus.Errorf("Error getting service %s: %v", vars["id"], err)
@@ -225,13 +202,12 @@ func (sr *swarmRouter) createService(ctx context.Context, w http.ResponseWriter,
// Get returns "" if the header does not exist
encodedAuth := r.Header.Get("X-Registry-Auth")
cliVersion := r.Header.Get("version")
queryRegistry := false
if v := httputils.VersionFromContext(ctx); v != "" {
if versions.LessThan(v, "1.30") {
queryRegistry = true
}
adjustForAPIVersion(v, &service)
if cliVersion != "" && versions.LessThan(cliVersion, "1.30") {
queryRegistry = true
}
resp, err := sr.backend.CreateService(service, encodedAuth, queryRegistry)
if err != nil {
logrus.Errorf("Error creating service %s: %v", service.Name, err)
@@ -263,12 +239,10 @@ func (sr *swarmRouter) updateService(ctx context.Context, w http.ResponseWriter,
flags.EncodedRegistryAuth = r.Header.Get("X-Registry-Auth")
flags.RegistryAuthFrom = r.URL.Query().Get("registryAuthFrom")
flags.Rollback = r.URL.Query().Get("rollback")
cliVersion := r.Header.Get("version")
queryRegistry := false
if v := httputils.VersionFromContext(ctx); v != "" {
if versions.LessThan(v, "1.30") {
queryRegistry = true
}
adjustForAPIVersion(v, &service)
if cliVersion != "" && versions.LessThan(cliVersion, "1.30") {
queryRegistry = true
}
resp, err := sr.backend.UpdateService(vars["id"], version, service, flags, queryRegistry)


@@ -9,8 +9,6 @@ import (
"github.com/docker/docker/api/server/httputils"
basictypes "github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/backend"
"github.com/docker/docker/api/types/swarm"
"github.com/docker/docker/api/types/versions"
)
// swarmLogs takes an http response, request, and selector, and writes the logs
@@ -66,52 +64,3 @@ func (sr *swarmRouter) swarmLogs(ctx context.Context, w io.Writer, r *http.Reque
httputils.WriteLogStream(ctx, w, msgs, logsConfig, !tty)
return nil
}
// adjustForAPIVersion takes a version and service spec and removes fields to
// make the spec compatible with the specified version.
func adjustForAPIVersion(cliVersion string, service *swarm.ServiceSpec) {
if cliVersion == "" {
return
}
if versions.LessThan(cliVersion, "1.40") {
if service.TaskTemplate.ContainerSpec != nil {
// Sysctls for docker swarm services weren't supported before
// API version 1.40
service.TaskTemplate.ContainerSpec.Sysctls = nil
if service.TaskTemplate.ContainerSpec.Privileges != nil && service.TaskTemplate.ContainerSpec.Privileges.CredentialSpec != nil {
// Support for setting credential-spec through configs was added in API 1.40
service.TaskTemplate.ContainerSpec.Privileges.CredentialSpec.Config = ""
}
for _, config := range service.TaskTemplate.ContainerSpec.Configs {
// support for the Runtime target was added in API 1.40
config.Runtime = nil
}
}
if service.TaskTemplate.Placement != nil {
// MaxReplicas for docker swarm services weren't supported before
// API version 1.40
service.TaskTemplate.Placement.MaxReplicas = 0
}
}
if versions.LessThan(cliVersion, "1.41") {
if service.TaskTemplate.ContainerSpec != nil {
// Capabilities and Ulimits for docker swarm services weren't
// supported before API version 1.41
service.TaskTemplate.ContainerSpec.CapabilityAdd = nil
service.TaskTemplate.ContainerSpec.CapabilityDrop = nil
service.TaskTemplate.ContainerSpec.Ulimits = nil
}
if service.TaskTemplate.Resources != nil && service.TaskTemplate.Resources.Limits != nil {
// Limits.Pids not supported before API version 1.41
service.TaskTemplate.Resources.Limits.Pids = 0
}
// jobs were only introduced in API version 1.41. Nil out both Job
// modes; if the service is one of these modes and subsequently has no
// mode, then something down the pipe will throw an error.
service.Mode.ReplicatedJob = nil
service.Mode.GlobalJob = nil
}
}
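
adjustForAPIVersion is an instance of a general downgrade pattern: compare the negotiated API version and zero out every field the client predates, so old clients never see data they cannot represent. A reduced sketch against the same versions helper, with a stand-in spec type:

package main

import (
	"fmt"

	"github.com/docker/docker/api/types/versions"
)

// Stand-in spec with two fields that (per the hunk above) arrived in API 1.40.
type spec struct {
	Sysctls     map[string]string
	MaxReplicas uint64
}

func adjustForAPIVersion(cliVersion string, s *spec) {
	if cliVersion == "" {
		return
	}
	if versions.LessThan(cliVersion, "1.40") {
		s.Sysctls = nil
		s.MaxReplicas = 0
	}
}

func main() {
	s := &spec{Sysctls: map[string]string{"net.core.somaxconn": "1024"}, MaxReplicas: 3}
	adjustForAPIVersion("1.39", s)
	fmt.Printf("%+v\n", *s) // both fields stripped for a 1.39 client
}
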


@@ -1,119 +0,0 @@
package swarm // import "github.com/docker/docker/api/server/router/swarm"
import (
"reflect"
"testing"
"github.com/docker/docker/api/types/swarm"
"github.com/docker/go-units"
)
func TestAdjustForAPIVersion(t *testing.T) {
var (
expectedSysctls = map[string]string{"foo": "bar"}
)
// testing the negative -- does this leave everything else alone? -- is
// prohibitively time-consuming to write, because it would need an object
// with literally every field filled in.
spec := &swarm.ServiceSpec{
TaskTemplate: swarm.TaskSpec{
ContainerSpec: &swarm.ContainerSpec{
Sysctls: expectedSysctls,
Privileges: &swarm.Privileges{
CredentialSpec: &swarm.CredentialSpec{
Config: "someconfig",
},
},
Configs: []*swarm.ConfigReference{
{
File: &swarm.ConfigReferenceFileTarget{
Name: "foo",
UID: "bar",
GID: "baz",
},
ConfigID: "configFile",
ConfigName: "configFile",
},
{
Runtime: &swarm.ConfigReferenceRuntimeTarget{},
ConfigID: "configRuntime",
ConfigName: "configRuntime",
},
},
Ulimits: []*units.Ulimit{
{
Name: "nofile",
Soft: 100,
Hard: 200,
},
},
},
Placement: &swarm.Placement{
MaxReplicas: 222,
},
Resources: &swarm.ResourceRequirements{
Limits: &swarm.Limit{
Pids: 300,
},
},
},
}
// first, does calling this with a later version correctly NOT strip
// fields? do the later version first, so we can reuse this spec in the
// next test.
adjustForAPIVersion("1.41", spec)
if !reflect.DeepEqual(spec.TaskTemplate.ContainerSpec.Sysctls, expectedSysctls) {
t.Error("Sysctls was stripped from spec")
}
if spec.TaskTemplate.Resources.Limits.Pids == 0 {
t.Error("PidsLimit was stripped from spec")
}
if spec.TaskTemplate.Resources.Limits.Pids != 300 {
t.Error("PidsLimit did not preserve the value from spec")
}
if spec.TaskTemplate.ContainerSpec.Privileges.CredentialSpec.Config != "someconfig" {
t.Error("CredentialSpec.Config field was stripped from spec")
}
if spec.TaskTemplate.ContainerSpec.Configs[1].Runtime == nil {
t.Error("ConfigReferenceRuntimeTarget was stripped from spec")
}
if spec.TaskTemplate.Placement.MaxReplicas != 222 {
t.Error("MaxReplicas was stripped from spec")
}
if len(spec.TaskTemplate.ContainerSpec.Ulimits) == 0 {
t.Error("Ulimits were stripped from spec")
}
// next, does calling this with an earlier version correctly strip fields?
adjustForAPIVersion("1.29", spec)
if spec.TaskTemplate.ContainerSpec.Sysctls != nil {
t.Error("Sysctls was not stripped from spec")
}
if spec.TaskTemplate.Resources.Limits.Pids != 0 {
t.Error("PidsLimit was not stripped from spec")
}
if spec.TaskTemplate.ContainerSpec.Privileges.CredentialSpec.Config != "" {
t.Error("CredentialSpec.Config field was not stripped from spec")
}
if spec.TaskTemplate.ContainerSpec.Configs[1].Runtime != nil {
t.Error("ConfigReferenceRuntimeTarget was not stripped from spec")
}
if spec.TaskTemplate.Placement.MaxReplicas != 0 {
t.Error("MaxReplicas was not stripped from spec")
}
if len(spec.TaskTemplate.ContainerSpec.Ulimits) != 0 {
t.Error("Ulimits were not stripped from spec")
}
}


@@ -13,7 +13,7 @@ import (
// Backend is the methods that need to be implemented to provide
// system specific functionality.
type Backend interface {
SystemInfo() *types.Info
SystemInfo() (*types.Info, error)
SystemVersion() types.Version
SystemDiskUsage(ctx context.Context) (*types.DiskUsage, error)
SubscribeToEvents(since, until time.Time, ef filters.Args) ([]events.Message, chan interface{})


@@ -2,7 +2,8 @@ package system // import "github.com/docker/docker/api/server/router/system"
import (
"github.com/docker/docker/api/server/router"
buildkit "github.com/docker/docker/builder/builder-next"
"github.com/docker/docker/builder/builder-next"
"github.com/docker/docker/builder/fscache"
)
// systemRouter provides information about the Docker system overall.
@@ -11,15 +12,17 @@ type systemRouter struct {
backend Backend
cluster ClusterBackend
routes []router.Route
fscache *fscache.FSCache // legacy
builder *buildkit.Builder
features *map[string]bool
}
// NewRouter initializes a new system router
func NewRouter(b Backend, c ClusterBackend, builder *buildkit.Builder, features *map[string]bool) router.Router {
func NewRouter(b Backend, c ClusterBackend, fscache *fscache.FSCache, builder *buildkit.Builder, features *map[string]bool) router.Router {
r := &systemRouter{
backend: b,
cluster: c,
fscache: fscache,
builder: builder,
features: features,
}
@@ -27,11 +30,10 @@ func NewRouter(b Backend, c ClusterBackend, builder *buildkit.Builder, features
r.routes = []router.Route{
router.NewOptionsRoute("/{anyroute:.*}", optionsHandler),
router.NewGetRoute("/_ping", r.pingHandler),
router.NewHeadRoute("/_ping", r.pingHandler),
router.NewGetRoute("/events", r.getEvents),
router.NewGetRoute("/events", r.getEvents, router.WithCancel),
router.NewGetRoute("/info", r.getInfo),
router.NewGetRoute("/version", r.getVersion),
router.NewGetRoute("/system/df", r.getDiskUsage),
router.NewGetRoute("/system/df", r.getDiskUsage, router.WithCancel),
router.NewPostRoute("/auth", r.postAuth),
}


@@ -27,28 +27,21 @@ func optionsHandler(ctx context.Context, w http.ResponseWriter, r *http.Request,
}
func (s *systemRouter) pingHandler(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
w.Header().Add("Cache-Control", "no-cache, no-store, must-revalidate")
w.Header().Add("Pragma", "no-cache")
builderVersion := build.BuilderVersion(*s.features)
if bv := builderVersion; bv != "" {
w.Header().Set("Builder-Version", string(bv))
}
if r.Method == http.MethodHead {
w.Header().Set("Content-Type", "text/plain; charset=utf-8")
w.Header().Set("Content-Length", "0")
return nil
}
_, err := w.Write([]byte{'O', 'K'})
return err
}
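
Condensed, the ping contract shown here is: forbid intermediary caching, answer HEAD with headers only (including an explicit zero Content-Length), and answer GET with a literal OK body. A self-contained sketch:

package main

import (
	"fmt"
	"net/http"
)

func ping(w http.ResponseWriter, r *http.Request) {
	w.Header().Add("Cache-Control", "no-cache, no-store, must-revalidate")
	w.Header().Add("Pragma", "no-cache")
	if r.Method == http.MethodHead {
		w.Header().Set("Content-Type", "text/plain; charset=utf-8")
		w.Header().Set("Content-Length", "0")
		return
	}
	fmt.Fprint(w, "OK")
}

func main() {
	http.HandleFunc("/_ping", ping)
	_ = http.ListenAndServe(":8080", nil)
}
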
func (s *systemRouter) getInfo(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
info := s.backend.SystemInfo()
info, err := s.backend.SystemInfo()
if err != nil {
return err
}
if s.cluster != nil {
info.Swarm = s.cluster.Info()
info.Warnings = append(info.Warnings, info.Swarm.Warnings...)
}
if versions.LessThan(httputils.VersionFromContext(ctx), "1.25") {
@@ -99,6 +92,16 @@ func (s *systemRouter) getDiskUsage(ctx context.Context, w http.ResponseWriter,
return err
})
var builderSize int64 // legacy
eg.Go(func() error {
var err error
builderSize, err = s.fscache.DiskUsage(ctx)
if err != nil {
return pkgerrors.Wrap(err, "error getting fscache build cache usage")
}
return nil
})
var buildCache []*types.BuildCache
eg.Go(func() error {
var err error
@@ -113,7 +116,6 @@ func (s *systemRouter) getDiskUsage(ctx context.Context, w http.ResponseWriter,
return err
}
var builderSize int64
for _, b := range buildCache {
builderSize += b.Size
}
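
The eg.Go calls fan the disk-usage probes out to goroutines and collect the first failure. A minimal sketch of the pattern, assuming eg is a golang.org/x/sync/errgroup.Group as elsewhere in the daemon:

package main

import (
	"context"
	"fmt"

	"golang.org/x/sync/errgroup"
)

func main() {
	eg, ctx := errgroup.WithContext(context.Background())

	var layersSize, builderSize int64
	eg.Go(func() error {
		layersSize = 1 << 20 // pretend this probed the layer store
		return ctx.Err()
	})
	eg.Go(func() error {
		builderSize = 1 << 10 // pretend this probed the build cache
		return nil
	})

	// Wait blocks for both probes and surfaces the first non-nil error.
	if err := eg.Wait(); err != nil {
		panic(err)
	}
	fmt.Println("total:", layersSize+builderSize)
}
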
@@ -163,9 +165,7 @@ func (s *systemRouter) getEvents(ctx context.Context, w http.ResponseWriter, r *
if !onlyPastEvents {
dur := until.Sub(now)
timer := time.NewTimer(dur)
defer timer.Stop()
timeout = timer.C
timeout = time.After(dur)
}
}
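
The two timeout forms in this hunk are not equivalent in cost: time.After allocates a timer that cannot be stopped and lives until it fires, whereas NewTimer with a deferred Stop is reclaimed as soon as the handler returns early. A sketch of the stoppable form:

package main

import (
	"fmt"
	"time"
)

func wait(dur time.Duration, done <-chan struct{}) {
	timer := time.NewTimer(dur)
	defer timer.Stop() // reclaim the timer even on early return

	select {
	case <-timer.C:
		fmt.Println("timed out")
	case <-done:
		fmt.Println("finished early; timer stopped by the defer")
	}
}

func main() {
	done := make(chan struct{})
	close(done)
	wait(24*time.Hour, done) // returns immediately without leaking a timer
}
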


@@ -29,7 +29,7 @@ func (r *volumeRouter) initRoutes() {
router.NewGetRoute("/volumes/{name:.*}", r.getVolumeByName),
// POST
router.NewPostRoute("/volumes/create", r.postVolumesCreate),
router.NewPostRoute("/volumes/prune", r.postVolumesPrune),
router.NewPostRoute("/volumes/prune", r.postVolumesPrune, router.WithCancel),
// DELETE
router.NewDeleteRoute("/volumes/{name:.*}", r.deleteVolumes),
}


@@ -0,0 +1,30 @@
package server // import "github.com/docker/docker/api/server"
import (
"net/http"
"sync"
"github.com/gorilla/mux"
)
// routerSwapper is an http.Handler that allows you to swap
// mux routers.
type routerSwapper struct {
mu sync.Mutex
router *mux.Router
}
// Swap changes the old router with the new one.
func (rs *routerSwapper) Swap(newRouter *mux.Router) {
rs.mu.Lock()
rs.router = newRouter
rs.mu.Unlock()
}
// ServeHTTP makes the routerSwapper implement the http.Handler interface.
func (rs *routerSwapper) ServeHTTP(w http.ResponseWriter, r *http.Request) {
rs.mu.Lock()
router := rs.router
rs.mu.Unlock()
router.ServeHTTP(w, r)
}
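
routerSwapper is a copy-on-swap handler: every request takes the current *mux.Router under the mutex, so Swap never races an in-flight ServeHTTP. A usage sketch that repeats the type so it runs standalone:

package main

import (
	"fmt"
	"net/http"
	"sync"

	"github.com/gorilla/mux"
)

type routerSwapper struct {
	mu     sync.Mutex
	router *mux.Router
}

func (rs *routerSwapper) Swap(newRouter *mux.Router) {
	rs.mu.Lock()
	rs.router = newRouter
	rs.mu.Unlock()
}

func (rs *routerSwapper) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	rs.mu.Lock()
	router := rs.router
	rs.mu.Unlock()
	router.ServeHTTP(w, r)
}

func newMux(msg string) *mux.Router {
	m := mux.NewRouter()
	m.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, msg)
	})
	return m
}

func main() {
	rs := &routerSwapper{router: newMux("v1")}
	go func() { _ = http.ListenAndServe(":8080", rs) }() // the swapper, not a mux, is the handler
	rs.Swap(newMux("v2"))                                // later requests hit the new mux
	select {}
}
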


@@ -6,9 +6,7 @@ import (
"net"
"net/http"
"strings"
"time"
"github.com/docker/docker/api/server/httpstatus"
"github.com/docker/docker/api/server/httputils"
"github.com/docker/docker/api/server/middleware"
"github.com/docker/docker/api/server/router"
@@ -33,10 +31,11 @@ type Config struct {
// Server contains instance details for the server
type Server struct {
cfg *Config
servers []*HTTPServer
routers []router.Router
middlewares []middleware.Middleware
cfg *Config
servers []*HTTPServer
routers []router.Router
routerSwapper *routerSwapper
middlewares []middleware.Middleware
}
// New returns a new instance of the server based on the specified configuration.
@@ -58,8 +57,7 @@ func (s *Server) Accept(addr string, listeners ...net.Listener) {
for _, listener := range listeners {
httpServer := &HTTPServer{
srv: &http.Server{
Addr: addr,
ReadHeaderTimeout: 5 * time.Minute, // "G112: Potential Slowloris Attack (gosec)"; not a real concern for our use, so setting a long timeout.
Addr: addr,
},
l: listener,
}
@@ -81,7 +79,7 @@ func (s *Server) Close() {
func (s *Server) serveAPI() error {
var chErrors = make(chan error, len(s.servers))
for _, srv := range s.servers {
srv.srv.Handler = s.createMux()
srv.srv.Handler = s.routerSwapper
go func(srv *HTTPServer) {
var err error
logrus.Infof("API listen on %s", srv.l.Addr())
@@ -131,8 +129,8 @@ func (s *Server) makeHTTPHandler(handler httputils.APIFunc) http.HandlerFunc {
// use intermediate variable to prevent "should not use basic type
// string as key in context.WithValue" golint errors
ctx := context.WithValue(r.Context(), dockerversion.UAStringKey{}, r.Header.Get("User-Agent"))
r = r.WithContext(ctx)
var ki interface{} = dockerversion.UAStringKey
ctx := context.WithValue(context.Background(), ki, r.Header.Get("User-Agent"))
handlerFunc := s.handlerWithGlobalMiddlewares(handler)
vars := mux.Vars(r)
@@ -141,11 +139,11 @@ func (s *Server) makeHTTPHandler(handler httputils.APIFunc) http.HandlerFunc {
}
if err := handlerFunc(ctx, w, r, vars); err != nil {
statusCode := httpstatus.FromError(err)
statusCode := httputils.GetHTTPErrorStatusCode(err)
if statusCode >= 500 {
logrus.Errorf("Handler for %s %s returned error: %v", r.Method, r.URL.Path, err)
}
makeErrorHandler(err)(w, r)
httputils.MakeErrorHandler(err)(w, r)
}
}
}
@@ -154,6 +152,11 @@ func (s *Server) makeHTTPHandler(handler httputils.APIFunc) http.HandlerFunc {
// This method also enables the Go profiler.
func (s *Server) InitRouter(routers ...router.Router) {
s.routers = append(s.routers, routers...)
m := s.createMux()
s.routerSwapper = &routerSwapper{
router: m,
}
}
type pageNotFoundError struct{}
@@ -186,10 +189,9 @@ func (s *Server) createMux() *mux.Router {
m.Path("/debug" + r.Path()).Handler(f)
}
notFoundHandler := makeErrorHandler(pageNotFoundError{})
notFoundHandler := httputils.MakeErrorHandler(pageNotFoundError{})
m.HandleFunc(versionMatcher+"/{path:.*}", notFoundHandler)
m.NotFoundHandler = notFoundHandler
m.MethodNotAllowedHandler = notFoundHandler
return m
}


@@ -22,7 +22,7 @@ func TestMiddlewares(t *testing.T) {
srv.UseMiddleware(middleware.NewVersionMiddleware("0.1omega2", api.DefaultVersion, api.MinVersion))
req, _ := http.NewRequest(http.MethodGet, "/containers/json", nil)
req, _ := http.NewRequest("GET", "/containers/json", nil)
resp := httptest.NewRecorder()
ctx := context.Background()

File diff suppressed because it is too large


@@ -1,7 +1,8 @@
package {{ .Package }} // import "github.com/docker/docker/api/types/{{ .Package }}"
package {{ .Package }}
// ----------------------------------------------------------------------------
// Code generated by `swagger generate operation`. DO NOT EDIT.
// DO NOT EDIT THIS FILE
// This file was generated by `swagger generate operation`
//
// See hack/generate-swagger-api.sh
// ----------------------------------------------------------------------------


@@ -30,7 +30,7 @@ type ContainerAttachConfig struct {
// expectation is for the logger endpoints to assemble the chunks using this
// metadata.
type PartialLogMetaData struct {
Last bool // true if this message is last of a partial
Last bool //true if this message is last of a partial
ID string // identifies group of messages comprising a single record
Ordinal int // ordering of message in partial group
}
@@ -73,7 +73,6 @@ type LogSelector struct {
// behavior of a backend.ContainerStats() call.
type ContainerStatsConfig struct {
Stream bool
OneShot bool
OutStream io.Writer
Version string
}


@@ -50,7 +50,7 @@ type ContainerCommitOptions struct {
// ContainerExecInspect holds information returned by exec inspect.
type ContainerExecInspect struct {
ExecID string `json:"ID"`
ExecID string
ContainerID string
Running bool
ExitCode int
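
The json:"ID" tag being toggled here is purely wire compatibility: it lets the Go field carry the clearer name ExecID while the API keeps emitting ID. Sketch:

package main

import (
	"encoding/json"
	"fmt"
)

type ContainerExecInspect struct {
	ExecID string `json:"ID"` // wire name stays "ID"
}

func main() {
	b, _ := json.Marshal(ContainerExecInspect{ExecID: "abc123"})
	fmt.Println(string(b)) // {"ID":"abc123"}
}
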
@@ -187,15 +187,6 @@ type ImageBuildOptions struct {
// build request. The same identifier can be used to gracefully cancel the
// build with the cancel request.
BuildID string
// Outputs defines configurations for exporting build results. Only supported
// in BuildKit mode
Outputs []ImageBuildOutput
}
// ImageBuildOutput defines configuration for exporting a build result
type ImageBuildOutput struct {
Type string
Attrs map[string]string
}
// BuilderVersion sets the version of underlying builder to use
@@ -205,7 +196,7 @@ const (
// BuilderV1 is the first generation builder in docker daemon
BuilderV1 BuilderVersion = "1"
// BuilderBuildKit is builder based on moby/buildkit project
BuilderBuildKit BuilderVersion = "2"
BuilderBuildKit = "2"
)
// ImageBuildResponse holds information
@@ -265,7 +256,7 @@ type ImagePullOptions struct {
// if the privilege request fails.
type RequestPrivilegeFunc func() (string, error)
// ImagePushOptions holds information to push images.
//ImagePushOptions holds information to push images.
type ImagePushOptions ImagePullOptions
// ImageRemoveOptions holds parameters to remove images.
@@ -363,10 +354,6 @@ type ServiceUpdateOptions struct {
// ServiceListOptions holds parameters to list services with.
type ServiceListOptions struct {
Filters filters.Args
// Status indicates whether the server should include the service task
// count of running and desired tasks.
Status bool
}
// ServiceInspectOptions holds parameters related to the "service inspect"


@@ -3,7 +3,6 @@ package types // import "github.com/docker/docker/api/types"
import (
"github.com/docker/docker/api/types/container"
"github.com/docker/docker/api/types/network"
specs "github.com/opencontainers/image-spec/specs-go/v1"
)
// configs holds structs used for internal communication between the
@@ -16,7 +15,6 @@ type ContainerCreateConfig struct {
Config *container.Config
HostConfig *container.HostConfig
NetworkingConfig *network.NetworkingConfig
Platform *specs.Platform
AdjustCPUShares bool
}


@@ -54,7 +54,7 @@ type Config struct {
Env []string // List of environment variable to set in the container
Cmd strslice.StrSlice // Command to run when starting the container
Healthcheck *HealthConfig `json:",omitempty"` // Healthcheck describes how to check the container is healthy
ArgsEscaped bool `json:",omitempty"` // True if command is already escaped (meaning treat as a command line) (Windows specific).
ArgsEscaped bool `json:",omitempty"` // True if command is already escaped (Windows specific)
Image string // Name of the image as it was passed by the operator (e.g. could be symbolic)
Volumes map[string]struct{} // List of volumes (mounts) used for the container
WorkingDir string // Current directory (PWD) in the command will be launched


@@ -1,7 +1,8 @@
package container // import "github.com/docker/docker/api/types/container"
package container
// ----------------------------------------------------------------------------
// Code generated by `swagger generate operation`. DO NOT EDIT.
// DO NOT EDIT THIS FILE
// This file was generated by `swagger generate operation`
//
// See hack/generate-swagger-api.sh
// ----------------------------------------------------------------------------


@@ -1,7 +1,8 @@
package container // import "github.com/docker/docker/api/types/container"
package container
// ----------------------------------------------------------------------------
// Code generated by `swagger generate operation`. DO NOT EDIT.
// DO NOT EDIT THIS FILE
// This file was generated by `swagger generate operation`
//
// See hack/generate-swagger-api.sh
// ----------------------------------------------------------------------------


@@ -1,7 +1,8 @@
package container // import "github.com/docker/docker/api/types/container"
package container
// ----------------------------------------------------------------------------
// Code generated by `swagger generate operation`. DO NOT EDIT.
// DO NOT EDIT THIS FILE
// This file was generated by `swagger generate operation`
//
// See hack/generate-swagger-api.sh
// ----------------------------------------------------------------------------
@@ -10,9 +11,7 @@ package container // import "github.com/docker/docker/api/types/container"
// swagger:model ContainerTopOKBody
type ContainerTopOKBody struct {
// Each process running in the container, where each process
// is an array of values corresponding to the titles.
//
// Each process running in the container, where each process is an array of values corresponding to the titles
// Required: true
Processes [][]string `json:"Processes"`


@@ -1,7 +1,8 @@
package container // import "github.com/docker/docker/api/types/container"
package container
// ----------------------------------------------------------------------------
// Code generated by `swagger generate operation`. DO NOT EDIT.
// DO NOT EDIT THIS FILE
// This file was generated by `swagger generate operation`
//
// See hack/generate-swagger-api.sh
// ----------------------------------------------------------------------------


@@ -1,7 +1,8 @@
package container // import "github.com/docker/docker/api/types/container"
package container
// ----------------------------------------------------------------------------
// Code generated by `swagger generate operation`. DO NOT EDIT.
// DO NOT EDIT THIS FILE
// This file was generated by `swagger generate operation`
//
// See hack/generate-swagger-api.sh
// ----------------------------------------------------------------------------


@@ -7,32 +7,9 @@ import (
"github.com/docker/docker/api/types/mount"
"github.com/docker/docker/api/types/strslice"
"github.com/docker/go-connections/nat"
units "github.com/docker/go-units"
"github.com/docker/go-units"
)
// CgroupnsMode represents the cgroup namespace mode of the container
type CgroupnsMode string
// IsPrivate indicates whether the container uses its own private cgroup namespace
func (c CgroupnsMode) IsPrivate() bool {
return c == "private"
}
// IsHost indicates whether the container shares the host's cgroup namespace
func (c CgroupnsMode) IsHost() bool {
return c == "host"
}
// IsEmpty indicates whether the container cgroup namespace mode is unset
func (c CgroupnsMode) IsEmpty() bool {
return c == ""
}
// Valid indicates whether the cgroup namespace mode is valid
func (c CgroupnsMode) Valid() bool {
return c.IsEmpty() || c.IsPrivate() || c.IsHost()
}
// Isolation represents the isolation technology of a container. The supported
// values are platform specific
type Isolation string
@@ -145,7 +122,7 @@ func (n NetworkMode) ConnectedContainer() string {
return ""
}
// UserDefined indicates user-created network
//UserDefined indicates user-created network
func (n NetworkMode) UserDefined() string {
if n.IsUserDefined() {
return string(n)
@@ -267,16 +244,6 @@ func (n PidMode) Container() string {
return ""
}
// DeviceRequest represents a request for devices from a device driver.
// Used by GPU device drivers.
type DeviceRequest struct {
Driver string // Name of device driver
Count int // Number of devices to request (-1 = All)
DeviceIDs []string // List of device IDs as recognizable by the device driver
Capabilities [][]string // An OR list of AND lists of device capabilities (e.g. "gpu")
Options map[string]string // Options to pass onto the device driver
}
// DeviceMapping represents the device mapping between the host and the container.
type DeviceMapping struct {
PathOnHost string
@@ -360,14 +327,13 @@ type Resources struct {
CpusetMems string // CpusetMems 0-2, 0,1
Devices []DeviceMapping // List of devices to map inside the container
DeviceCgroupRules []string // List of rule to be added to the device cgroup
DeviceRequests []DeviceRequest // List of device requests for device drivers
KernelMemory int64 // Kernel memory limit (in bytes), Deprecated: kernel 5.4 deprecated kmem.limit_in_bytes
KernelMemoryTCP int64 // Hard limit for kernel TCP buffer memory (in bytes)
DiskQuota int64 // Disk limit (in bytes)
KernelMemory int64 // Kernel memory limit (in bytes)
MemoryReservation int64 // Memory soft limit (in bytes)
MemorySwap int64 // Total memory usage (memory + swap); set `-1` to enable unlimited swap
MemorySwappiness *int64 // Tuning container memory swappiness behaviour
OomKillDisable *bool // Whether to disable OOM Killer or not
PidsLimit *int64 // Setting PIDs limit for a container; Set `0` or `-1` for unlimited, or `null` to not change.
PidsLimit int64 // Setting pids limit for a container
Ulimits []*units.Ulimit // List of ulimits to be set in the container
// Applicable to Windows
@@ -403,7 +369,6 @@ type HostConfig struct {
// Applicable to UNIX platforms
CapAdd strslice.StrSlice // List of kernel capabilities to add to the container
CapDrop strslice.StrSlice // List of kernel capabilities to remove from the container
CgroupnsMode CgroupnsMode // Cgroup namespace mode to use for the container
DNS []string `json:"Dns"` // List of DNS server to lookup
DNSOptions []string `json:"DnsOptions"` // List of DNSOption to look for
DNSSearch []string `json:"DnsSearch"` // List of DNSSearch to look for
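
The move between PidsLimit int64 and PidsLimit *int64 in this hunk is about JSON round-trips: only the pointer form can distinguish "field absent, leave the limit alone" from "explicitly 0, meaning unlimited". Sketch:

package main

import (
	"encoding/json"
	"fmt"
)

type resources struct {
	PidsLimit *int64 `json:",omitempty"`
}

func main() {
	var unset resources
	_ = json.Unmarshal([]byte(`{}`), &unset)
	fmt.Println(unset.PidsLimit) // <nil>: leave the existing limit alone

	var zero resources
	_ = json.Unmarshal([]byte(`{"PidsLimit":0}`), &zero)
	fmt.Println(*zero.PidsLimit) // 0: explicitly requested "unlimited"
}
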


@@ -1,4 +1,3 @@
//go:build !windows
// +build !windows
package container // import "github.com/docker/docker/api/types/container"


@@ -1,6 +0,0 @@
package types
// Error returns the error message
func (e ErrorResponse) Error() string {
return e.Message
}


@@ -1,8 +1,6 @@
package events // import "github.com/docker/docker/api/types/events"
const (
// BuilderEventType is the event type that the builder generates
BuilderEventType = "builder"
// ContainerEventType is the event type that containers generate
ContainerEventType = "container"
// DaemonEventType is the event type that daemon generate


@@ -1,11 +1,11 @@
/*
Package filters provides tools for encoding a mapping of keys to a set of
/*Package filters provides tools for encoding a mapping of keys to a set of
multiple values.
*/
package filters // import "github.com/docker/docker/api/types/filters"
import (
"encoding/json"
"errors"
"regexp"
"strings"
@@ -37,19 +37,45 @@ func NewArgs(initialArgs ...KeyValuePair) Args {
return args
}
// Keys returns all the keys in list of Args
func (args Args) Keys() []string {
keys := make([]string, 0, len(args.fields))
for k := range args.fields {
keys = append(keys, k)
// ParseFlag parses a key=value string and adds it to an Args.
//
// Deprecated: Use Args.Add()
func ParseFlag(arg string, prev Args) (Args, error) {
filters := prev
if len(arg) == 0 {
return filters, nil
}
return keys
if !strings.Contains(arg, "=") {
return filters, ErrBadFormat
}
f := strings.SplitN(arg, "=", 2)
name := strings.ToLower(strings.TrimSpace(f[0]))
value := strings.TrimSpace(f[1])
filters.Add(name, value)
return filters, nil
}
// ErrBadFormat is an error returned when a filter is not in the form key=value
//
// Deprecated: this error will be removed in a future version
var ErrBadFormat = errors.New("bad format of filter (expected name=value)")
// ToParam encodes the Args as args JSON encoded string
//
// Deprecated: use ToJSON
func ToParam(a Args) (string, error) {
return ToJSON(a)
}
// MarshalJSON returns a JSON byte representation of the Args
func (args Args) MarshalJSON() ([]byte, error) {
if len(args.fields) == 0 {
return []byte("{}"), nil
return []byte{}, nil
}
return json.Marshal(args.fields)
}
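
The one-line change in MarshalJSON is not cosmetic: an empty byte slice is not valid JSON, so any caller that embeds Args in a larger document fails to marshal, while {} stays valid. A toy reproduction of the failure mode:

package main

import (
	"encoding/json"
	"fmt"
)

type emptyArgs struct{}

// Mimics the variant returning an empty byte slice, which is not valid JSON.
func (emptyArgs) MarshalJSON() ([]byte, error) { return []byte{}, nil }

type payload struct {
	Filters emptyArgs `json:"filters"`
}

func main() {
	_, err := json.Marshal(payload{})
	fmt.Println(err) // json: error calling MarshalJSON ... unexpected end of JSON input
}
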
@@ -67,7 +93,7 @@ func ToJSON(a Args) (string, error) {
// then the encoded format will use an older legacy format where the values are a
// list of strings, instead of a set.
//
// Deprecated: do not use in any new code; use ToJSON instead
// Deprecated: Use ToJSON
func ToParamWithVersion(version string, a Args) (string, error) {
if a.Len() == 0 {
return "", nil
@@ -81,6 +107,13 @@ func ToParamWithVersion(version string, a Args) (string, error) {
return ToJSON(a)
}
// FromParam decodes a JSON encoded string into Args
//
// Deprecated: use FromJSON
func FromParam(p string) (Args, error) {
return FromJSON(p)
}
// FromJSON decodes a JSON encoded string into Args
func FromJSON(p string) (Args, error) {
args := NewArgs()
@@ -107,6 +140,9 @@ func FromJSON(p string) (Args, error) {
// UnmarshalJSON populates the Args from JSON encode bytes
func (args Args) UnmarshalJSON(raw []byte) error {
if len(raw) == 0 {
return nil
}
return json.Unmarshal(raw, &args.fields)
}
@@ -152,7 +188,7 @@ func (args Args) Len() int {
func (args Args) MatchKVList(key string, sources map[string]string) bool {
fieldValues := args.fields[key]
// do not filter if there is no filter set or cannot determine filter
//do not filter if there is no filter set or cannot determine filter
if len(fieldValues) == 0 {
return true
}
@@ -198,7 +234,7 @@ func (args Args) Match(field, source string) bool {
// ExactMatch returns true if the source matches exactly one of the values.
func (args Args) ExactMatch(key, source string) bool {
fieldValues, ok := args.fields[key]
// do not filter if there is no filter set or cannot determine filter
//do not filter if there is no filter set or cannot determine filter
if !ok || len(fieldValues) == 0 {
return true
}
@@ -211,7 +247,7 @@ func (args Args) ExactMatch(key, source string) bool {
// matches exactly the value.
func (args Args) UniqueExactMatch(key, source string) bool {
fieldValues := args.fields[key]
// do not filter if there is no filter set or cannot determine filter
//do not filter if there is no filter set or cannot determine filter
if len(fieldValues) == 0 {
return true
}
@@ -239,6 +275,14 @@ func (args Args) FuzzyMatch(key, source string) bool {
return false
}
// Include returns true if the key exists in the mapping
//
// Deprecated: use Contains
func (args Args) Include(field string) bool {
_, ok := args.fields[field]
return ok
}
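A short sketch of the matching semantics above: Contains supersedes the deprecated Include, and the matchers treat an unset key as "do not filter" (same assumed filters import):

package main

import (
	"fmt"

	"github.com/docker/docker/api/types/filters"
)

func main() {
	args := filters.NewArgs()
	args.Add("status", "running")

	fmt.Println(args.Contains("status")) // true
	fmt.Println(args.Contains("label"))  // false

	fmt.Println(args.ExactMatch("status", "running")) // true
	fmt.Println(args.ExactMatch("status", "paused"))  // false
	fmt.Println(args.ExactMatch("label", "anything")) // true: no filter set for "label"
}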
// Contains returns true if the key exists in the mapping
func (args Args) Contains(field string) bool {
_, ok := args.fields[field]
@@ -279,22 +323,6 @@ func (args Args) WalkValues(field string, op func(value string) error) error {
return nil
}
// Clone returns a copy of args.
func (args Args) Clone() (newArgs Args) {
newArgs.fields = make(map[string]map[string]bool, len(args.fields))
for k, m := range args.fields {
var mm map[string]bool
if m != nil {
mm = make(map[string]bool, len(m))
for kk, v := range m {
mm[kk] = v
}
}
newArgs.fields[k] = mm
}
return newArgs
}
func deprecatedArgs(d map[string][]string) map[string]map[string]bool {
m := map[string]map[string]bool{}
for k, v := range d {

View File

@@ -1,31 +1,44 @@
package filters // import "github.com/docker/docker/api/types/filters"
import (
"encoding/json"
"errors"
"testing"
"gotest.tools/v3/assert"
is "gotest.tools/v3/assert/cmp"
"gotest.tools/assert"
is "gotest.tools/assert/cmp"
)
func TestMarshalJSON(t *testing.T) {
fields := map[string]map[string]bool{
"created": {"today": true},
"image.name": {"ubuntu*": true, "*untu": true},
func TestParseArgs(t *testing.T) {
// equivalent of `docker ps -f 'created=today' -f 'image.name=ubuntu*' -f 'image.name=*untu'`
flagArgs := []string{
"created=today",
"image.name=ubuntu*",
"image.name=*untu",
}
a := Args{fields: fields}
var (
args = NewArgs()
err error
)
_, err := a.MarshalJSON()
if err != nil {
t.Errorf("failed to marshal the filters: %s", err)
for i := range flagArgs {
args, err = ParseFlag(flagArgs[i], args)
assert.NilError(t, err)
}
assert.Check(t, is.Len(args.Get("created"), 1))
assert.Check(t, is.Len(args.Get("image.name"), 2))
}
func TestMarshalJSONWithEmpty(t *testing.T) {
_, err := json.Marshal(NewArgs())
func TestParseArgsEdgeCase(t *testing.T) {
var args Args
args, err := ParseFlag("", args)
if err != nil {
t.Errorf("failed to marshal the filters: %s", err)
t.Fatal(err)
}
if args.Len() != 0 {
t.Fatalf("Expected an empty Args (map), got %v", args)
}
if args, err = ParseFlag("anything", args); err == nil || err != ErrBadFormat {
t.Fatalf("Expected ErrBadFormat, got %v", err)
}
}
@@ -334,6 +347,17 @@ func TestContains(t *testing.T) {
}
}
func TestInclude(t *testing.T) {
f := NewArgs()
if f.Include("status") {
t.Fatal("Expected to not include a status key, got true")
}
f.Add("status", "running")
if !f.Include("status") {
t.Fatal("Expected to include a status key, got false")
}
}
func TestValidate(t *testing.T) {
f := NewArgs()
f.Add("status", "running")
@@ -358,17 +382,14 @@ func TestWalkValues(t *testing.T) {
f.Add("status", "running")
f.Add("status", "paused")
err := f.WalkValues("status", func(value string) error {
f.WalkValues("status", func(value string) error {
if value != "running" && value != "paused" {
t.Fatalf("Unexpected value %s", value)
}
return nil
})
if err != nil {
t.Fatalf("Expected no error, got %v", err)
}
err = f.WalkValues("status", func(value string) error {
err := f.WalkValues("status", func(value string) error {
return errors.New("return")
})
if err == nil {
@@ -400,11 +421,3 @@ func TestFuzzyMatch(t *testing.T) {
}
}
}
func TestClone(t *testing.T) {
f := NewArgs()
f.Add("foo", "bar")
f2 := f.Clone()
f2.Add("baz", "qux")
assert.Check(t, is.Len(f.Get("baz"), 0))
}

View File

@@ -1,7 +1,8 @@
package image // import "github.com/docker/docker/api/types/image"
package image
// ----------------------------------------------------------------------------
// Code generated by `swagger generate operation`. DO NOT EDIT.
// DO NOT EDIT THIS FILE
// This file was generated by `swagger generate operation`
//
// See hack/generate-swagger-api.sh
// ----------------------------------------------------------------------------

View File

@@ -79,8 +79,7 @@ const (
// BindOptions defines options specific to mounts of type "bind".
type BindOptions struct {
Propagation Propagation `json:",omitempty"`
NonRecursive bool `json:",omitempty"`
Propagation Propagation `json:",omitempty"`
}
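A sketch of a bind mount using these options (assuming the github.com/docker/docker/api/types/mount import path; NonRecursive exists only on the side of the hunk that declares it):

package main

import (
	"fmt"

	"github.com/docker/docker/api/types/mount"
)

func main() {
	m := mount.Mount{
		Type:   mount.TypeBind,
		Source: "/src/data",
		Target: "/data",
		BindOptions: &mount.BindOptions{
			Propagation:  mount.PropagationRPrivate,
			NonRecursive: true, // do not bind-mount submounts of Source
		},
	}
	fmt.Printf("%+v\n", m)
}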
// VolumeOptions represents the options for a mount of type volume.
@@ -113,7 +112,7 @@ type TmpfsOptions struct {
// TODO(stevvooe): There are several more tmpfs flags, specified in the
// daemon, that are accepted. Only the most basic are added for now.
//
// From https://github.com/moby/sys/blob/mount/v0.1.1/mount/flags.go#L47-L56
// From docker/docker/pkg/mount/flags.go:
//
// var validFlags = map[string]bool{
// "": true,

View File

@@ -1,6 +1,7 @@
package network // import "github.com/docker/docker/api/types/network"
import (
"github.com/docker/docker/api/types/filters"
"github.com/docker/docker/errdefs"
)
// Address represents an IP address
@@ -12,7 +13,7 @@ type Address struct {
// IPAM represents IP Address Management
type IPAM struct {
Driver string
Options map[string]string // Per network IPAM driver options
Options map[string]string //Per network IPAM driver options
Config []IPAMConfig
}
@@ -111,16 +112,15 @@ type ConfigReference struct {
}
var acceptedFilters = map[string]bool{
"dangling": true,
"driver": true,
"id": true,
"label": true,
"name": true,
"scope": true,
"type": true,
"driver": true,
"type": true,
"name": true,
"id": true,
"label": true,
"scope": true,
}
// ValidateFilters validates the list of filter args with the available filters.
func ValidateFilters(filter filters.Args) error {
return filter.Validate(acceptedFilters)
return errdefs.InvalidParameter(filter.Validate(acceptedFilters))
}
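A sketch of how this validation behaves for supported and unsupported keys (assuming the filters and network import paths; the exact error wording is illustrative):

package main

import (
	"fmt"

	"github.com/docker/docker/api/types/filters"
	"github.com/docker/docker/api/types/network"
)

func main() {
	ok := filters.NewArgs()
	ok.Add("driver", "bridge")
	fmt.Println(network.ValidateFilters(ok)) // <nil>

	bad := filters.NewArgs()
	bad.Add("bogus", "x")
	fmt.Println(network.ValidateFilters(bad)) // non-nil: "bogus" is not an accepted filter
}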

View File

@@ -4,7 +4,7 @@ import (
"encoding/json"
"net"
v1 "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/opencontainers/image-spec/specs-go/v1"
)
// ServiceConfig stores daemon registry services configuration.
@@ -45,32 +45,31 @@ func (ipnet *NetIPNet) UnmarshalJSON(b []byte) (err error) {
// IndexInfo contains information about a registry
//
// RepositoryInfo Examples:
// {
// "Index" : {
// "Name" : "docker.io",
// "Mirrors" : ["https://registry-2.docker.io/v1/", "https://registry-3.docker.io/v1/"],
// "Secure" : true,
// "Official" : true,
// },
// "RemoteName" : "library/debian",
// "LocalName" : "debian",
// "CanonicalName" : "docker.io/debian"
// "Official" : true,
// }
//
// {
// "Index" : {
// "Name" : "docker.io",
// "Mirrors" : ["https://registry-2.docker.io/v1/", "https://registry-3.docker.io/v1/"],
// "Secure" : true,
// "Official" : true,
// },
// "RemoteName" : "library/debian",
// "LocalName" : "debian",
// "CanonicalName" : "docker.io/debian"
// "Official" : true,
// }
//
// {
// "Index" : {
// "Name" : "127.0.0.1:5000",
// "Mirrors" : [],
// "Secure" : false,
// "Official" : false,
// },
// "RemoteName" : "user/repo",
// "LocalName" : "127.0.0.1:5000/user/repo",
// "CanonicalName" : "127.0.0.1:5000/user/repo",
// "Official" : false,
// }
// {
// "Index" : {
// "Name" : "127.0.0.1:5000",
// "Mirrors" : [],
// "Secure" : false,
// "Official" : false,
// },
// "RemoteName" : "user/repo",
// "LocalName" : "127.0.0.1:5000/user/repo",
// "CanonicalName" : "127.0.0.1:5000/user/repo",
// "Official" : false,
// }
type IndexInfo struct {
// Name is the name of the registry, such as "docker.io"
Name string

93
api/types/seccomp.go Normal file
View File

@@ -0,0 +1,93 @@
package types // import "github.com/docker/docker/api/types"
// Seccomp represents the config for a seccomp profile for syscall restriction.
type Seccomp struct {
DefaultAction Action `json:"defaultAction"`
// Architectures is kept to maintain backward compatibility with the old
// seccomp profile.
Architectures []Arch `json:"architectures,omitempty"`
ArchMap []Architecture `json:"archMap,omitempty"`
Syscalls []*Syscall `json:"syscalls"`
}
// Architecture is used to represent a specific architecture
// and its sub-architectures
type Architecture struct {
Arch Arch `json:"architecture"`
SubArches []Arch `json:"subArchitectures"`
}
// Arch used for architectures
type Arch string
// Additional architectures permitted to be used for system calls
// By default only the native architecture of the kernel is permitted
const (
ArchX86 Arch = "SCMP_ARCH_X86"
ArchX86_64 Arch = "SCMP_ARCH_X86_64"
ArchX32 Arch = "SCMP_ARCH_X32"
ArchARM Arch = "SCMP_ARCH_ARM"
ArchAARCH64 Arch = "SCMP_ARCH_AARCH64"
ArchMIPS Arch = "SCMP_ARCH_MIPS"
ArchMIPS64 Arch = "SCMP_ARCH_MIPS64"
ArchMIPS64N32 Arch = "SCMP_ARCH_MIPS64N32"
ArchMIPSEL Arch = "SCMP_ARCH_MIPSEL"
ArchMIPSEL64 Arch = "SCMP_ARCH_MIPSEL64"
ArchMIPSEL64N32 Arch = "SCMP_ARCH_MIPSEL64N32"
ArchPPC Arch = "SCMP_ARCH_PPC"
ArchPPC64 Arch = "SCMP_ARCH_PPC64"
ArchPPC64LE Arch = "SCMP_ARCH_PPC64LE"
ArchS390 Arch = "SCMP_ARCH_S390"
ArchS390X Arch = "SCMP_ARCH_S390X"
)
// Action taken upon Seccomp rule match
type Action string
// Define actions for Seccomp rules
const (
ActKill Action = "SCMP_ACT_KILL"
ActTrap Action = "SCMP_ACT_TRAP"
ActErrno Action = "SCMP_ACT_ERRNO"
ActTrace Action = "SCMP_ACT_TRACE"
ActAllow Action = "SCMP_ACT_ALLOW"
)
// Operator used to match syscall arguments in Seccomp
type Operator string
// Define operators for syscall arguments in Seccomp
const (
OpNotEqual Operator = "SCMP_CMP_NE"
OpLessThan Operator = "SCMP_CMP_LT"
OpLessEqual Operator = "SCMP_CMP_LE"
OpEqualTo Operator = "SCMP_CMP_EQ"
OpGreaterEqual Operator = "SCMP_CMP_GE"
OpGreaterThan Operator = "SCMP_CMP_GT"
OpMaskedEqual Operator = "SCMP_CMP_MASKED_EQ"
)
// Arg used for matching specific syscall arguments in Seccomp
type Arg struct {
Index uint `json:"index"`
Value uint64 `json:"value"`
ValueTwo uint64 `json:"valueTwo"`
Op Operator `json:"op"`
}
// Filter is used to conditionally apply Seccomp rules
type Filter struct {
Caps []string `json:"caps,omitempty"`
Arches []string `json:"arches,omitempty"`
}
// Syscall is used to match a group of syscalls in Seccomp
type Syscall struct {
Name string `json:"name,omitempty"`
Names []string `json:"names,omitempty"`
Action Action `json:"action"`
Args []*Arg `json:"args"`
Comment string `json:"comment"`
Includes Filter `json:"includes"`
Excludes Filter `json:"excludes"`
}
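A sketch composing the types above into a tiny profile that denies every syscall except exit (illustrative only, not a usable production profile; assumes the github.com/docker/docker/api/types import path):

package main

import (
	"encoding/json"
	"fmt"

	"github.com/docker/docker/api/types"
)

func main() {
	profile := types.Seccomp{
		DefaultAction: types.ActErrno, // deny by default with an errno
		Architectures: []types.Arch{types.ArchX86_64, types.ArchX32},
		Syscalls: []*types.Syscall{
			{
				Names:  []string{"exit", "exit_group"},
				Action: types.ActAllow,
			},
		},
	}
	out, _ := json.MarshalIndent(profile, "", "  ")
	fmt.Println(string(out))
}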

View File

@@ -120,7 +120,7 @@ type NetworkStats struct {
RxBytes uint64 `json:"rx_bytes"`
// Packets received. Windows and Linux.
RxPackets uint64 `json:"rx_packets"`
// Received errors. Not used on Windows. Note that we don't `omitempty` this
// Received errors. Not used on Windows. Note that we dont `omitempty` this
// field as it is expected in the >=v1.21 API stats structure.
RxErrors uint64 `json:"rx_errors"`
// Incoming packets dropped. Windows and Linux.
@@ -129,7 +129,7 @@ type NetworkStats struct {
TxBytes uint64 `json:"tx_bytes"`
// Packets sent. Windows and Linux.
TxPackets uint64 `json:"tx_packets"`
// Sent errors. Not used on Windows. Note that we don't `omitempty` this
// Sent errors. Not used on Windows. Note that we dont `omitempty` this
// field as it is expected in the >=v1.21 API stats structure.
TxErrors uint64 `json:"tx_errors"`
// Outgoing packets dropped. Windows and Linux.

View File

@@ -29,8 +29,8 @@ func TestStrSliceMarshalJSON(t *testing.T) {
func TestStrSliceUnmarshalJSON(t *testing.T) {
parts := map[string][]string{
"": {"default", "values"},
"[]": {},
"": {"default", "values"},
"[]": {},
`["/bin/sh","-c","echo"]`: {"/bin/sh", "-c", "echo"},
}
for json, expectedParts := range parts {

View File

@@ -27,14 +27,9 @@ type ConfigReferenceFileTarget struct {
Mode os.FileMode
}
// ConfigReferenceRuntimeTarget is a target for a config specifying that it
// isn't mounted into the container but instead has some other purpose.
type ConfigReferenceRuntimeTarget struct{}
// ConfigReference is a reference to a config in swarm
type ConfigReference struct {
File *ConfigReferenceFileTarget `json:",omitempty"`
Runtime *ConfigReferenceRuntimeTarget `json:",omitempty"`
File *ConfigReferenceFileTarget
ConfigID string
ConfigName string
}

View File

@@ -5,7 +5,6 @@ import (
"github.com/docker/docker/api/types/container"
"github.com/docker/docker/api/types/mount"
"github.com/docker/go-units"
)
// DNSConfig specifies DNS related configurations in resolver configuration file (resolv.conf)
@@ -34,7 +33,6 @@ type SELinuxContext struct {
// CredentialSpec for managed service account (Windows only)
type CredentialSpec struct {
Config string
File string
Registry string
}
@@ -68,13 +66,9 @@ type ContainerSpec struct {
// The format of extra hosts on swarmkit is specified in:
// http://man7.org/linux/man-pages/man5/hosts.5.html
// IP_address canonical_hostname [aliases...]
Hosts []string `json:",omitempty"`
DNSConfig *DNSConfig `json:",omitempty"`
Secrets []*SecretReference `json:",omitempty"`
Configs []*ConfigReference `json:",omitempty"`
Isolation container.Isolation `json:",omitempty"`
Sysctls map[string]string `json:",omitempty"`
CapabilityAdd []string `json:",omitempty"`
CapabilityDrop []string `json:",omitempty"`
Ulimits []*units.Ulimit `json:",omitempty"`
Hosts []string `json:",omitempty"`
DNSConfig *DNSConfig `json:",omitempty"`
Secrets []*SecretReference `json:",omitempty"`
Configs []*ConfigReference `json:",omitempty"`
Isolation container.Isolation `json:",omitempty"`
}

View File

@@ -1,5 +1,6 @@
// Code generated by protoc-gen-gogo. DO NOT EDIT.
// Code generated by protoc-gen-gogo.
// source: plugin.proto
// DO NOT EDIT!
/*
Package runtime is a generated protocol buffer package.
@@ -37,7 +38,6 @@ type PluginSpec struct {
Remote string `protobuf:"bytes,2,opt,name=remote,proto3" json:"remote,omitempty"`
Privileges []*PluginPrivilege `protobuf:"bytes,3,rep,name=privileges" json:"privileges,omitempty"`
Disabled bool `protobuf:"varint,4,opt,name=disabled,proto3" json:"disabled,omitempty"`
Env []string `protobuf:"bytes,5,rep,name=env" json:"env,omitempty"`
}
func (m *PluginSpec) Reset() { *m = PluginSpec{} }
@@ -73,13 +73,6 @@ func (m *PluginSpec) GetDisabled() bool {
return false
}
func (m *PluginSpec) GetEnv() []string {
if m != nil {
return m.Env
}
return nil
}
// PluginPrivilege describes a permission the user has to accept
// upon installing a plugin.
type PluginPrivilege struct {
@@ -167,21 +160,6 @@ func (m *PluginSpec) MarshalTo(dAtA []byte) (int, error) {
}
i++
}
if len(m.Env) > 0 {
for _, s := range m.Env {
dAtA[i] = 0x2a
i++
l = len(s)
for l >= 1<<7 {
dAtA[i] = uint8(uint64(l)&0x7f | 0x80)
l >>= 7
i++
}
dAtA[i] = uint8(l)
i++
i += copy(dAtA[i:], s)
}
}
return i, nil
}
@@ -230,6 +208,24 @@ func (m *PluginPrivilege) MarshalTo(dAtA []byte) (int, error) {
return i, nil
}
func encodeFixed64Plugin(dAtA []byte, offset int, v uint64) int {
dAtA[offset] = uint8(v)
dAtA[offset+1] = uint8(v >> 8)
dAtA[offset+2] = uint8(v >> 16)
dAtA[offset+3] = uint8(v >> 24)
dAtA[offset+4] = uint8(v >> 32)
dAtA[offset+5] = uint8(v >> 40)
dAtA[offset+6] = uint8(v >> 48)
dAtA[offset+7] = uint8(v >> 56)
return offset + 8
}
func encodeFixed32Plugin(dAtA []byte, offset int, v uint32) int {
dAtA[offset] = uint8(v)
dAtA[offset+1] = uint8(v >> 8)
dAtA[offset+2] = uint8(v >> 16)
dAtA[offset+3] = uint8(v >> 24)
return offset + 4
}
func encodeVarintPlugin(dAtA []byte, offset int, v uint64) int {
for v >= 1<<7 {
dAtA[offset] = uint8(v&0x7f | 0x80)
@@ -259,12 +255,6 @@ func (m *PluginSpec) Size() (n int) {
if m.Disabled {
n += 2
}
if len(m.Env) > 0 {
for _, s := range m.Env {
l = len(s)
n += 1 + l + sovPlugin(uint64(l))
}
}
return n
}
@@ -439,35 +429,6 @@ func (m *PluginSpec) Unmarshal(dAtA []byte) error {
}
}
m.Disabled = bool(v != 0)
case 5:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field Env", wireType)
}
var stringLen uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowPlugin
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
intStringLen := int(stringLen)
if intStringLen < 0 {
return ErrInvalidLengthPlugin
}
postIndex := iNdEx + intStringLen
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.Env = append(m.Env, string(dAtA[iNdEx:postIndex]))
iNdEx = postIndex
default:
iNdEx = preIndex
skippy, err := skipPlugin(dAtA[iNdEx:])
@@ -734,21 +695,18 @@ var (
func init() { proto.RegisterFile("plugin.proto", fileDescriptorPlugin) }
var fileDescriptorPlugin = []byte{
// 256 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x6c, 0x90, 0x4d, 0x4b, 0xc3, 0x30,
0x18, 0xc7, 0x89, 0xdd, 0xc6, 0xfa, 0x4c, 0x70, 0x04, 0x91, 0xe2, 0xa1, 0x94, 0x9d, 0x7a, 0x6a,
0x45, 0x2f, 0x82, 0x37, 0x0f, 0x9e, 0x47, 0xbc, 0x09, 0x1e, 0xd2, 0xf6, 0xa1, 0x06, 0x9b, 0x17,
0x92, 0xb4, 0xe2, 0x37, 0xf1, 0x23, 0x79, 0xf4, 0x23, 0x48, 0x3f, 0x89, 0x98, 0x75, 0x32, 0x64,
0xa7, 0xff, 0x4b, 0xc2, 0x9f, 0x1f, 0x0f, 0x9c, 0x9a, 0xae, 0x6f, 0x85, 0x2a, 0x8c, 0xd5, 0x5e,
0x6f, 0x3e, 0x08, 0xc0, 0x36, 0x14, 0x8f, 0x06, 0x6b, 0x4a, 0x61, 0xa6, 0xb8, 0xc4, 0x84, 0x64,
0x24, 0x8f, 0x59, 0xf0, 0xf4, 0x02, 0x16, 0x16, 0xa5, 0xf6, 0x98, 0x9c, 0x84, 0x76, 0x4a, 0xf4,
0x0a, 0xc0, 0x58, 0x31, 0x88, 0x0e, 0x5b, 0x74, 0x49, 0x94, 0x45, 0xf9, 0xea, 0x7a, 0x5d, 0xec,
0xc6, 0xb6, 0xfb, 0x07, 0x76, 0xf0, 0x87, 0x5e, 0xc2, 0xb2, 0x11, 0x8e, 0x57, 0x1d, 0x36, 0xc9,
0x2c, 0x23, 0xf9, 0x92, 0xfd, 0x65, 0xba, 0x86, 0x08, 0xd5, 0x90, 0xcc, 0xb3, 0x28, 0x8f, 0xd9,
0xaf, 0xdd, 0x3c, 0xc3, 0xd9, 0xbf, 0xb1, 0xa3, 0x78, 0x19, 0xac, 0x1a, 0x74, 0xb5, 0x15, 0xc6,
0x0b, 0xad, 0x26, 0xc6, 0xc3, 0x8a, 0x9e, 0xc3, 0x7c, 0xe0, 0x5d, 0x8f, 0x81, 0x31, 0x66, 0xbb,
0x70, 0xff, 0xf0, 0x39, 0xa6, 0xe4, 0x6b, 0x4c, 0xc9, 0xf7, 0x98, 0x92, 0xa7, 0xdb, 0x56, 0xf8,
0x97, 0xbe, 0x2a, 0x6a, 0x2d, 0xcb, 0x46, 0xd7, 0xaf, 0x68, 0xf7, 0xc2, 0x8d, 0x28, 0xfd, 0xbb,
0x41, 0x57, 0xba, 0x37, 0x6e, 0x65, 0x69, 0x7b, 0xe5, 0x85, 0xc4, 0xbb, 0x49, 0xab, 0x45, 0x38,
0xe4, 0xcd, 0x4f, 0x00, 0x00, 0x00, 0xff, 0xff, 0x99, 0xa8, 0xd9, 0x9b, 0x58, 0x01, 0x00, 0x00,
// 196 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0xe2, 0xe2, 0x29, 0xc8, 0x29, 0x4d,
0xcf, 0xcc, 0xd3, 0x2b, 0x28, 0xca, 0x2f, 0xc9, 0x57, 0x6a, 0x63, 0xe4, 0xe2, 0x0a, 0x00, 0x0b,
0x04, 0x17, 0xa4, 0x26, 0x0b, 0x09, 0x71, 0xb1, 0xe4, 0x25, 0xe6, 0xa6, 0x4a, 0x30, 0x2a, 0x30,
0x6a, 0x70, 0x06, 0x81, 0xd9, 0x42, 0x62, 0x5c, 0x6c, 0x45, 0xa9, 0xb9, 0xf9, 0x25, 0xa9, 0x12,
0x4c, 0x60, 0x51, 0x28, 0x4f, 0xc8, 0x80, 0x8b, 0xab, 0xa0, 0x28, 0xb3, 0x2c, 0x33, 0x27, 0x35,
0x3d, 0xb5, 0x58, 0x82, 0x59, 0x81, 0x59, 0x83, 0xdb, 0x48, 0x40, 0x0f, 0x62, 0x58, 0x00, 0x4c,
0x22, 0x08, 0x49, 0x8d, 0x90, 0x14, 0x17, 0x47, 0x4a, 0x66, 0x71, 0x62, 0x52, 0x4e, 0x6a, 0x8a,
0x04, 0x8b, 0x02, 0xa3, 0x06, 0x47, 0x10, 0x9c, 0xaf, 0x14, 0xcb, 0xc5, 0x8f, 0xa6, 0x15, 0xab,
0x63, 0x14, 0xb8, 0xb8, 0x53, 0x52, 0x8b, 0x93, 0x8b, 0x32, 0x0b, 0x4a, 0x32, 0xf3, 0xf3, 0xa0,
0x2e, 0x42, 0x16, 0x12, 0x12, 0xe1, 0x62, 0x2d, 0x4b, 0xcc, 0x29, 0x4d, 0x05, 0xbb, 0x88, 0x33,
0x08, 0xc2, 0x71, 0xe2, 0x39, 0xf1, 0x48, 0x8e, 0xf1, 0xc2, 0x23, 0x39, 0xc6, 0x07, 0x8f, 0xe4,
0x18, 0x93, 0xd8, 0xc0, 0x9e, 0x37, 0x06, 0x04, 0x00, 0x00, 0xff, 0xff, 0xb8, 0x84, 0xad, 0x79,
0x0c, 0x01, 0x00, 0x00,
}

View File

@@ -9,7 +9,6 @@ message PluginSpec {
string remote = 2;
repeated PluginPrivilege privileges = 3;
bool disabled = 4;
repeated string env = 5;
}
// PluginPrivilege describes a permission the user has to accept

View File

@@ -10,17 +10,6 @@ type Service struct {
PreviousSpec *ServiceSpec `json:",omitempty"`
Endpoint Endpoint `json:",omitempty"`
UpdateStatus *UpdateStatus `json:",omitempty"`
// ServiceStatus is an optional, extra field indicating the number of
// desired and running tasks. It is provided primarily as a shortcut to
// calculating these values client-side, which otherwise would require
// listing all tasks for a service, an operation that could be
// computationally and network expensive.
ServiceStatus *ServiceStatus `json:",omitempty"`
// JobStatus is the status of a Service which is in one of ReplicatedJob or
// GlobalJob modes. It is absent on Replicated and Global services.
JobStatus *JobStatus `json:",omitempty"`
}
// ServiceSpec represents the spec of a service.
@@ -43,10 +32,8 @@ type ServiceSpec struct {
// ServiceMode represents the mode of a service.
type ServiceMode struct {
Replicated *ReplicatedService `json:",omitempty"`
Global *GlobalService `json:",omitempty"`
ReplicatedJob *ReplicatedJob `json:",omitempty"`
GlobalJob *GlobalJob `json:",omitempty"`
Replicated *ReplicatedService `json:",omitempty"`
Global *GlobalService `json:",omitempty"`
}
// UpdateState is the state of a service update.
@@ -83,32 +70,6 @@ type ReplicatedService struct {
// GlobalService is a kind of ServiceMode.
type GlobalService struct{}
// ReplicatedJob is a type of Service which executes defined Tasks in
// parallel until the specified number of Tasks have succeeded.
type ReplicatedJob struct {
// MaxConcurrent indicates the maximum number of Tasks that should be
// executing simultaneously for this job at any given time. There may be
// fewer Tasks than MaxConcurrent executing simultaneously; for example, if
// there are fewer than MaxConcurrent tasks needed to reach
// TotalCompletions.
//
// If this field is empty, it will default to a max concurrency of 1.
MaxConcurrent *uint64 `json:",omitempty"`
// TotalCompletions is the total number of Tasks desired to run to
// completion.
//
// If this field is empty, the value of MaxConcurrent will be used.
TotalCompletions *uint64 `json:",omitempty"`
}
// GlobalJob is the type of a Service which executes a Task on every Node
// matching the Service's placement constraints. These tasks run to completion
// and then exit.
//
// This type is deliberately empty.
type GlobalJob struct{}
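A sketch of the job-mode wiring described above, per the side of the hunk that defines the job types: run ten completions with at most two tasks in flight (assuming the github.com/docker/docker/api/types/swarm import path):

package main

import (
	"fmt"

	"github.com/docker/docker/api/types/swarm"
)

func main() {
	maxConcurrent := uint64(2)
	total := uint64(10)
	mode := swarm.ServiceMode{
		ReplicatedJob: &swarm.ReplicatedJob{
			MaxConcurrent:    &maxConcurrent, // defaults to 1 when nil
			TotalCompletions: &total,         // defaults to MaxConcurrent when nil
		},
	}
	fmt.Printf("%+v\n", mode.ReplicatedJob)
}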
const (
// UpdateFailureActionPause PAUSE
UpdateFailureActionPause = "pause"
@@ -161,42 +122,3 @@ type UpdateConfig struct {
// started, or the new task is started before the old task is shut down.
Order string
}
// ServiceStatus represents the number of running tasks in a service and the
// number of tasks desired to be running.
type ServiceStatus struct {
// RunningTasks is the number of tasks for the service actually in the
// Running state
RunningTasks uint64
// DesiredTasks is the number of tasks desired to be running by the
// service. For replicated services, this is the replica count. For global
// services, this is computed by taking the number of tasks with desired
// state of not-Shutdown.
DesiredTasks uint64
// CompletedTasks is the number of tasks in the state Completed, if this
// service is in ReplicatedJob or GlobalJob mode. This field must be
// cross-referenced with the service type, because the default value of 0
// may mean that a service is not in a job mode, or it may mean that the
// job has yet to complete any tasks.
CompletedTasks uint64
}
// JobStatus is the status of a job-type service.
type JobStatus struct {
// JobIteration is a value increased each time a Job is executed,
// successfully or otherwise. "Executed", in this case, means the job as a
// whole has been started, not that an individual Task has been launched. A
// job is "Executed" when its ServiceSpec is updated. JobIteration can be
// used to disambiguate Tasks belonging to different executions of a job.
//
// Though JobIteration will increase with each subsequent execution, it may
// not necessarily increase by 1, and so JobIteration should not be used to
// keep track of the number of times a job has been executed.
JobIteration Version
// LastExecution is the time that the job was last executed, as observed by
// Swarm manager.
LastExecution time.Time `json:",omitempty"`
}

View File

@@ -14,7 +14,6 @@ type ClusterInfo struct {
RootRotationInProgress bool
DefaultAddrPool []string
SubnetSize uint32
DataPathPort uint32
}
// Swarm represents a swarm.
@@ -154,7 +153,6 @@ type InitRequest struct {
ListenAddr string
AdvertiseAddr string
DataPathAddr string
DataPathPort uint32
ForceNewCluster bool
Spec Spec
AutoLockManagers bool
@@ -209,8 +207,6 @@ type Info struct {
Managers int `json:",omitempty"`
Cluster *ClusterInfo `json:",omitempty"`
Warnings []string `json:",omitempty"`
}
// Peer represents a peer.

View File

@@ -56,12 +56,6 @@ type Task struct {
DesiredState TaskState `json:",omitempty"`
NetworksAttachments []NetworkAttachment `json:",omitempty"`
GenericResources []GenericResource `json:",omitempty"`
// JobIteration is the JobIteration of the Service that this Task was
// spawned from, if the Service is a ReplicatedJob or GlobalJob. This is
// used to determine which Tasks belong to which run of the job. This field
// is absent if the Service mode is Replicated or Global.
JobIteration *Version `json:",omitempty"`
}
// TaskSpec represents the spec of a task.
@@ -91,21 +85,13 @@ type TaskSpec struct {
Runtime RuntimeType `json:",omitempty"`
}
// Resources represents resources (CPU/Memory) which can be advertised by a
// node and requested to be reserved for a task.
// Resources represents resources (CPU/Memory).
type Resources struct {
NanoCPUs int64 `json:",omitempty"`
MemoryBytes int64 `json:",omitempty"`
GenericResources []GenericResource `json:",omitempty"`
}
// Limit describes limits on resources which can be requested by a task.
type Limit struct {
NanoCPUs int64 `json:",omitempty"`
MemoryBytes int64 `json:",omitempty"`
Pids int64 `json:",omitempty"`
}
// GenericResource represents a "user defined" resource which can
// be either an integer (e.g: SSD=3) or a string (e.g: SSD=sda1)
type GenericResource struct {
@@ -133,7 +119,7 @@ type DiscreteGenericResource struct {
// ResourceRequirements represents resources requirements.
type ResourceRequirements struct {
Limits *Limit `json:",omitempty"`
Limits *Resources `json:",omitempty"`
Reservations *Resources `json:",omitempty"`
}
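A sketch distinguishing Limit from Resources, per the newer side of this hunk (NanoCPUs are billionths of a CPU, so 1e9 equals one full CPU; assumes the swarm import path):

package main

import (
	"fmt"

	"github.com/docker/docker/api/types/swarm"
)

func main() {
	req := swarm.ResourceRequirements{
		Limits: &swarm.Limit{
			NanoCPUs:    500000000,         // half a CPU
			MemoryBytes: 256 * 1024 * 1024, // 256 MiB
			Pids:        100,               // Limit-only field
		},
		Reservations: &swarm.Resources{
			NanoCPUs: 250000000, // a quarter CPU reserved
		},
	}
	fmt.Printf("%+v %+v\n", req.Limits, req.Reservations)
}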
@@ -141,7 +127,6 @@ type ResourceRequirements struct {
type Placement struct {
Constraints []string `json:",omitempty"`
Preferences []PlacementPreference `json:",omitempty"`
MaxReplicas uint64 `json:",omitempty"`
// Platforms stores all the platforms that the image can run on.
// This field is used in the platform filter for scheduling. If empty,

View File

@@ -100,10 +100,8 @@ func GetTimestamp(value string, reference time.Time) (string, error) {
// if the incoming nanosecond portion is longer or shorter than 9 digits it is
// converted to nanoseconds. The expectation is that the seconds and
// nanoseconds will be used to create a time variable. For example:
//
// seconds, nanoseconds, err := ParseTimestamp("1136073600.000000001",0)
// if err == nil since := time.Unix(seconds, nanoseconds)
//
// seconds, nanoseconds, err := ParseTimestamp("1136073600.000000001",0)
// if err == nil since := time.Unix(seconds, nanoseconds)
// returns seconds as def(aultSeconds) if value == ""
func ParseTimestamps(value string, def int64) (int64, int64, error) {
if value == "" {

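A runnable sketch of the example in the comment above (the import path github.com/docker/docker/api/types/time is an assumption based on this file's package):

package main

import (
	"fmt"
	"time"

	timetypes "github.com/docker/docker/api/types/time" // assumed import path
)

func main() {
	seconds, nanoseconds, err := timetypes.ParseTimestamps("1136073600.000000001", 0)
	if err != nil {
		panic(err)
	}
	since := time.Unix(seconds, nanoseconds)
	fmt.Println(since.UTC()) // 2006-01-01 00:00:00.000000001 +0000 UTC
}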
View File

@@ -39,7 +39,6 @@ type ImageInspect struct {
Author string
Config *container.Config
Architecture string
Variant string `json:",omitempty"`
Os string
OsVersion string `json:",omitempty"`
Size int64
@@ -154,17 +153,15 @@ type Info struct {
Images int
Driver string
DriverStatus [][2]string
SystemStatus [][2]string `json:",omitempty"` // SystemStatus is only propagated by the Swarm standalone API
SystemStatus [][2]string
Plugins PluginsInfo
MemoryLimit bool
SwapLimit bool
KernelMemory bool // Deprecated: kernel 5.4 deprecated kmem.limit_in_bytes
KernelMemoryTCP bool
KernelMemory bool
CPUCfsPeriod bool `json:"CpuCfsPeriod"`
CPUCfsQuota bool `json:"CpuCfsQuota"`
CPUShares bool
CPUSet bool
PidsLimit bool
IPv4Forwarding bool
BridgeNfIptables bool
BridgeNfIP6tables bool `json:"BridgeNfIp6tables"`
@@ -175,11 +172,9 @@ type Info struct {
SystemTime string
LoggingDriver string
CgroupDriver string
CgroupVersion string `json:",omitempty"`
NEventsListener int
KernelVersion string
OperatingSystem string
OSVersion string
OSType string
Architecture string
IndexServerAddress string
@@ -195,24 +190,23 @@ type Info struct {
Labels []string
ExperimentalBuild bool
ServerVersion string
ClusterStore string `json:",omitempty"` // Deprecated: host-discovery and overlay networks with external k/v stores are deprecated
ClusterAdvertise string `json:",omitempty"` // Deprecated: host-discovery and overlay networks with external k/v stores are deprecated
ClusterStore string
ClusterAdvertise string
Runtimes map[string]Runtime
DefaultRuntime string
Swarm swarm.Info
// LiveRestoreEnabled determines whether containers should be kept
// running when the daemon is shutdown or upon daemon start if
// running containers are detected
LiveRestoreEnabled bool
Isolation container.Isolation
InitBinary string
ContainerdCommit Commit
RuncCommit Commit
InitCommit Commit
SecurityOptions []string
ProductLicense string `json:",omitempty"`
DefaultAddressPools []NetworkAddressPool `json:",omitempty"`
Warnings []string
LiveRestoreEnabled bool
Isolation container.Isolation
InitBinary string
ContainerdCommit Commit
RuncCommit Commit
InitCommit Commit
SecurityOptions []string
ProductLicense string `json:",omitempty"`
Warnings []string
}
// KeyValue holds a key/value pair
@@ -220,12 +214,6 @@ type KeyValue struct {
Key, Value string
}
// NetworkAddressPool is a temporary struct used by the Info struct
type NetworkAddressPool struct {
Base string
Size int
}
// SecurityOpt contains the name and options of a security option
type SecurityOpt struct {
Name string
@@ -326,7 +314,7 @@ type ContainerState struct {
}
// ContainerNode stores information about the node that a container
// is running on. It's only used by the Docker Swarm standalone API
// is running on. It's only available in Docker Swarm
type ContainerNode struct {
ID string
IPAddress string `json:"IP"`
@@ -350,7 +338,7 @@ type ContainerJSONBase struct {
HostnamePath string
HostsPath string
LogPath string
Node *ContainerNode `json:",omitempty"` // Node is only propagated by Docker Swarm standalone API
Node *ContainerNode `json:",omitempty"`
Name string
RestartCount int
Driver string
@@ -518,16 +506,6 @@ type Checkpoint struct {
type Runtime struct {
Path string `json:"path"`
Args []string `json:"runtimeArgs,omitempty"`
// This is exposed here only for internal use
// It is not currently supported to specify custom shim configs
Shim *ShimConfig `json:"-"`
}
// ShimConfig is used by runtime to configure containerd shims
type ShimConfig struct {
Binary string
Opts interface{}
}
// DiskUsage contains response of Engine API:

View File

@@ -27,13 +27,10 @@ type Volume struct {
Name string `json:"Name"`
// The driver specific options used when creating the volume.
//
// Required: true
Options map[string]string `json:"Options"`
// The level at which the volume exists. Either `global` for cluster-wide,
// or `local` for machine level.
//
// The level at which the volume exists. Either `global` for cluster-wide, or `local` for machine level.
// Required: true
Scope string `json:"Scope"`

View File

@@ -1,7 +1,8 @@
package volume // import "github.com/docker/docker/api/types/volume"
package volume
// ----------------------------------------------------------------------------
// Code generated by `swagger generate operation`. DO NOT EDIT.
// DO NOT EDIT THIS FILE
// This file was generated by `swagger generate operation`
//
// See hack/generate-swagger-api.sh
// ----------------------------------------------------------------------------
@@ -14,9 +15,7 @@ type VolumeCreateBody struct {
// Required: true
Driver string `json:"Driver"`
// A mapping of driver options and values. These options are
// passed directly to the driver and are driver specific.
//
// A mapping of driver options and values. These options are passed directly to the driver and are driver specific.
// Required: true
DriverOpts map[string]string `json:"DriverOpts"`
@@ -25,7 +24,6 @@ type VolumeCreateBody struct {
Labels map[string]string `json:"Labels"`
// The new volume's name. If not specified, Docker generates a name.
//
// Required: true
Name string `json:"Name"`
}

View File

@@ -1,7 +1,8 @@
package volume // import "github.com/docker/docker/api/types/volume"
package volume
// ----------------------------------------------------------------------------
// Code generated by `swagger generate operation`. DO NOT EDIT.
// DO NOT EDIT THIS FILE
// This file was generated by `swagger generate operation`
//
// See hack/generate-swagger-api.sh
// ----------------------------------------------------------------------------
@@ -16,8 +17,7 @@ type VolumeListOKBody struct {
// Required: true
Volumes []*types.Volume `json:"Volumes"`
// Warnings that occurred when fetching the list of volumes.
//
// Warnings that occurred when fetching the list of volumes
// Required: true
Warnings []string `json:"Warnings"`
}

View File

@@ -5,23 +5,20 @@ import (
"encoding/json"
"fmt"
"io"
"path"
"io/ioutil"
"runtime"
"sync"
"time"
"github.com/containerd/containerd/content"
containerderrors "github.com/containerd/containerd/errdefs"
"github.com/containerd/containerd/gc"
"github.com/containerd/containerd/errdefs"
"github.com/containerd/containerd/images"
"github.com/containerd/containerd/leases"
"github.com/containerd/containerd/platforms"
ctdreference "github.com/containerd/containerd/reference"
"github.com/containerd/containerd/remotes"
"github.com/containerd/containerd/remotes/docker"
"github.com/containerd/containerd/remotes/docker/schema1"
distreference "github.com/docker/distribution/reference"
dimages "github.com/docker/docker/daemon/images"
"github.com/docker/docker/distribution"
"github.com/docker/docker/distribution/metadata"
"github.com/docker/docker/distribution/xfer"
@@ -30,54 +27,86 @@ import (
pkgprogress "github.com/docker/docker/pkg/progress"
"github.com/docker/docker/reference"
"github.com/moby/buildkit/cache"
"github.com/moby/buildkit/client/llb"
gw "github.com/moby/buildkit/frontend/gateway/client"
"github.com/moby/buildkit/session"
"github.com/moby/buildkit/solver"
"github.com/moby/buildkit/session/auth"
"github.com/moby/buildkit/source"
"github.com/moby/buildkit/util/flightcontrol"
"github.com/moby/buildkit/util/imageutil"
"github.com/moby/buildkit/util/leaseutil"
"github.com/moby/buildkit/util/progress"
"github.com/moby/buildkit/util/resolver"
"github.com/moby/buildkit/util/tracing"
digest "github.com/opencontainers/go-digest"
"github.com/opencontainers/image-spec/identity"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"golang.org/x/time/rate"
)
// SourceOpt is options for creating the image source
type SourceOpt struct {
SessionManager *session.Manager
ContentStore content.Store
CacheAccessor cache.Accessor
ReferenceStore reference.Store
DownloadManager distribution.RootFSDownloadManager
MetadataStore metadata.V2MetadataService
ImageStore image.Store
RegistryHosts docker.RegistryHosts
LayerStore layer.Store
LeaseManager leases.Manager
GarbageCollect func(ctx context.Context) (gc.Stats, error)
ResolverOpt resolver.ResolveOptionsFunc
}
// Source is the source implementation for accessing container images
type Source struct {
type imageSource struct {
SourceOpt
g flightcontrol.Group
}
// NewSource creates a new image source
func NewSource(opt SourceOpt) (*Source, error) {
return &Source{SourceOpt: opt}, nil
func NewSource(opt SourceOpt) (source.Source, error) {
is := &imageSource{
SourceOpt: opt,
}
return is, nil
}
// ID returns image scheme identifier
func (is *Source) ID() string {
func (is *imageSource) ID() string {
return source.DockerImageScheme
}
func (is *Source) resolveLocal(refStr string) (*image.Image, error) {
func (is *imageSource) getResolver(ctx context.Context, rfn resolver.ResolveOptionsFunc, ref string) remotes.Resolver {
opt := docker.ResolverOptions{
Client: tracing.DefaultClient,
}
if rfn != nil {
opt = rfn(ref)
}
opt.Credentials = is.getCredentialsFromSession(ctx)
r := docker.NewResolver(opt)
return r
}
func (is *imageSource) getCredentialsFromSession(ctx context.Context) func(string) (string, string, error) {
id := session.FromContext(ctx)
if id == "" {
// can be removed after containerd/containerd#2812
return func(string) (string, string, error) {
return "", "", nil
}
}
return func(host string) (string, string, error) {
timeoutCtx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
caller, err := is.SessionManager.Get(timeoutCtx, id)
if err != nil {
return "", "", err
}
return auth.CredentialsFunc(tracing.ContextWithSpanFromContext(context.TODO(), ctx), caller)(host)
}
}
func (is *imageSource) resolveLocal(refStr string) ([]byte, error) {
ref, err := distreference.ParseNormalizedNamed(refStr)
if err != nil {
return nil, err
@@ -90,23 +119,16 @@ func (is *Source) resolveLocal(refStr string) (*image.Image, error) {
if err != nil {
return nil, err
}
return img, nil
return img.RawJSON(), nil
}
func (is *Source) resolveRemote(ctx context.Context, ref string, platform *ocispec.Platform, sm *session.Manager, g session.Group) (digest.Digest, []byte, error) {
func (is *imageSource) resolveRemote(ctx context.Context, ref string, platform *ocispec.Platform) (digest.Digest, []byte, error) {
type t struct {
dgst digest.Digest
dt []byte
}
p := platforms.DefaultSpec()
if platform != nil {
p = *platform
}
// key is used to synchronize resolutions that can happen in parallel when doing multi-stage.
key := "getconfig::" + ref + "::" + platforms.Format(p)
res, err := is.g.Do(ctx, key, func(ctx context.Context) (interface{}, error) {
res := resolver.DefaultPool.GetResolver(is.RegistryHosts, ref, "pull", sm, g)
dgst, dt, err := imageutil.Config(ctx, ref, res, is.ContentStore, is.LeaseManager, platform)
res, err := is.g.Do(ctx, ref, func(ctx context.Context) (interface{}, error) {
dgst, dt, err := imageutil.Config(ctx, ref, is.getResolver(ctx, is.ResolverOpt, ref), is.ContentStore, platform)
if err != nil {
return nil, err
}
@@ -120,15 +142,14 @@ func (is *Source) resolveRemote(ctx context.Context, ref string, platform *ocisp
return typed.dgst, typed.dt, nil
}
// ResolveImageConfig returns image config for an image
func (is *Source) ResolveImageConfig(ctx context.Context, ref string, opt llb.ResolveImageConfigOpt, sm *session.Manager, g session.Group) (digest.Digest, []byte, error) {
func (is *imageSource) ResolveImageConfig(ctx context.Context, ref string, opt gw.ResolveImageConfigOpt) (digest.Digest, []byte, error) {
resolveMode, err := source.ParseImageResolveMode(opt.ResolveMode)
if err != nil {
return "", nil, err
}
switch resolveMode {
case source.ResolveModeForcePull:
dgst, dt, err := is.resolveRemote(ctx, ref, opt.Platform, sm, g)
dgst, dt, err := is.resolveRemote(ctx, ref, opt.Platform)
// TODO: pull should fallback to local in case of failure to allow offline behavior
// the fallback doesn't work currently
return dgst, dt, err
@@ -145,26 +166,18 @@ func (is *Source) ResolveImageConfig(ctx context.Context, ref string, opt llb.Re
// default == prefer local, but in the future could be smarter
fallthrough
case source.ResolveModePreferLocal:
img, err := is.resolveLocal(ref)
dt, err := is.resolveLocal(ref)
if err == nil {
if opt.Platform != nil && !platformMatches(img, opt.Platform) {
logrus.WithField("ref", ref).Debugf("Requested build platform %s does not match local image platform %s, checking remote",
path.Join(opt.Platform.OS, opt.Platform.Architecture, opt.Platform.Variant),
path.Join(img.OS, img.Architecture, img.Variant),
)
} else {
return "", img.RawJSON(), err
}
return "", dt, err
}
// fallback to remote
return is.resolveRemote(ctx, ref, opt.Platform, sm, g)
return is.resolveRemote(ctx, ref, opt.Platform)
}
// should never happen
return "", nil, fmt.Errorf("builder cannot resolve image %s: invalid mode %q", ref, opt.ResolveMode)
}
// Resolve returns access to pulling for an identifier
func (is *Source) Resolve(ctx context.Context, id source.Identifier, sm *session.Manager, vtx solver.Vertex) (source.SourceInstance, error) {
func (is *imageSource) Resolve(ctx context.Context, id source.Identifier) (source.SourceInstance, error) {
imageIdentifier, ok := id.(*source.ImageIdentifier)
if !ok {
return nil, errors.Errorf("invalid image identifier %v", id)
@@ -176,32 +189,28 @@ func (is *Source) Resolve(ctx context.Context, id source.Identifier, sm *session
}
p := &puller{
src: imageIdentifier,
is: is,
//resolver: is.getResolver(is.RegistryHosts, imageIdentifier.Reference.String(), sm, g),
src: imageIdentifier,
is: is,
resolver: is.getResolver(ctx, is.ResolverOpt, imageIdentifier.Reference.String()),
platform: platform,
sm: sm,
}
return p, nil
}
type puller struct {
is *Source
is *imageSource
resolveOnce sync.Once
resolveLocalOnce sync.Once
g flightcontrol.Group
src *source.ImageIdentifier
desc ocispec.Descriptor
ref string
resolveErr error
resolver remotes.Resolver
config []byte
platform ocispec.Platform
sm *session.Manager
}
func (p *puller) resolver(g session.Group) remotes.Resolver {
return resolver.DefaultPool.GetResolver(p.is.RegistryHosts, p.src.Reference.String(), "pull", p.sm, g)
}
func (p *puller) mainManifestKey(platform ocispec.Platform) (digest.Digest, error) {
func (p *puller) mainManifestKey(dgst digest.Digest, platform ocispec.Platform) (digest.Digest, error) {
dt, err := json.Marshal(struct {
Digest digest.Digest
OS string
@@ -242,38 +251,31 @@ func (p *puller) resolveLocal() {
}
if p.src.ResolveMode == source.ResolveModeDefault || p.src.ResolveMode == source.ResolveModePreferLocal {
ref := p.src.Reference.String()
img, err := p.is.resolveLocal(ref)
dt, err := p.is.resolveLocal(p.src.Reference.String())
if err == nil {
if !platformMatches(img, &p.platform) {
logrus.WithField("ref", ref).Debugf("Requested build platform %s does not match local image platform %s, not resolving",
path.Join(p.platform.OS, p.platform.Architecture, p.platform.Variant),
path.Join(img.OS, img.Architecture, img.Variant),
)
} else {
p.config = img.RawJSON()
}
p.config = dt
}
}
})
}
func (p *puller) resolve(ctx context.Context, g session.Group) error {
_, err := p.g.Do(ctx, "", func(ctx context.Context) (_ interface{}, err error) {
func (p *puller) resolve(ctx context.Context) error {
p.resolveOnce.Do(func() {
resolveProgressDone := oneOffProgress(ctx, "resolve "+p.src.Reference.String())
defer func() {
resolveProgressDone(err)
}()
ref, err := distreference.ParseNormalizedNamed(p.src.Reference.String())
if err != nil {
return nil, err
p.resolveErr = err
resolveProgressDone(err)
return
}
if p.desc.Digest == "" && p.config == nil {
origRef, desc, err := p.resolver(g).Resolve(ctx, ref.String())
origRef, desc, err := p.resolver.Resolve(ctx, ref.String())
if err != nil {
return nil, err
p.resolveErr = err
resolveProgressDone(err)
return
}
p.desc = desc
@@ -288,90 +290,58 @@ func (p *puller) resolve(ctx context.Context, g session.Group) error {
if p.config == nil && p.desc.MediaType != images.MediaTypeDockerSchema1Manifest {
ref, err := distreference.WithDigest(ref, p.desc.Digest)
if err != nil {
return nil, err
p.resolveErr = err
resolveProgressDone(err)
return
}
_, dt, err := p.is.ResolveImageConfig(ctx, ref.String(), llb.ResolveImageConfigOpt{Platform: &p.platform, ResolveMode: resolveModeToString(p.src.ResolveMode)}, p.sm, g)
_, dt, err := p.is.ResolveImageConfig(ctx, ref.String(), gw.ResolveImageConfigOpt{Platform: &p.platform, ResolveMode: resolveModeToString(p.src.ResolveMode)})
if err != nil {
return nil, err
p.resolveErr = err
resolveProgressDone(err)
return
}
p.config = dt
}
return nil, nil
resolveProgressDone(nil)
})
return err
return p.resolveErr
}
func (p *puller) CacheKey(ctx context.Context, g session.Group, index int) (string, solver.CacheOpts, bool, error) {
func (p *puller) CacheKey(ctx context.Context, index int) (string, bool, error) {
p.resolveLocal()
if p.desc.Digest != "" && index == 0 {
dgst, err := p.mainManifestKey(p.platform)
dgst, err := p.mainManifestKey(p.desc.Digest, p.platform)
if err != nil {
return "", nil, false, err
return "", false, err
}
return dgst.String(), nil, false, nil
return dgst.String(), false, nil
}
if p.config != nil {
k := cacheKeyFromConfig(p.config).String()
if k == "" {
return digest.FromBytes(p.config).String(), nil, true, nil
}
return k, nil, true, nil
return cacheKeyFromConfig(p.config).String(), true, nil
}
if err := p.resolve(ctx, g); err != nil {
return "", nil, false, err
if err := p.resolve(ctx); err != nil {
return "", false, err
}
if p.desc.Digest != "" && index == 0 {
dgst, err := p.mainManifestKey(p.platform)
dgst, err := p.mainManifestKey(p.desc.Digest, p.platform)
if err != nil {
return "", nil, false, err
return "", false, err
}
return dgst.String(), nil, false, nil
return dgst.String(), false, nil
}
if len(p.config) == 0 && p.desc.MediaType != images.MediaTypeDockerSchema1Manifest {
return "", nil, false, errors.Errorf("invalid empty config file resolved for %s", p.src.Reference.String())
}
k := cacheKeyFromConfig(p.config).String()
if k == "" || p.desc.MediaType == images.MediaTypeDockerSchema1Manifest {
dgst, err := p.mainManifestKey(p.platform)
if err != nil {
return "", nil, false, err
}
return dgst.String(), nil, true, nil
}
return k, nil, true, nil
return cacheKeyFromConfig(p.config).String(), true, nil
}
func (p *puller) getRef(ctx context.Context, diffIDs []layer.DiffID, opts ...cache.RefOption) (cache.ImmutableRef, error) {
var parent cache.ImmutableRef
if len(diffIDs) > 1 {
var err error
parent, err = p.getRef(ctx, diffIDs[:len(diffIDs)-1], opts...)
if err != nil {
return nil, err
}
defer parent.Release(context.TODO())
}
return p.is.CacheAccessor.GetByBlob(ctx, ocispec.Descriptor{
Annotations: map[string]string{
"containerd.io/uncompressed": diffIDs[len(diffIDs)-1].String(),
},
}, parent, opts...)
}
func (p *puller) Snapshot(ctx context.Context, g session.Group) (cache.ImmutableRef, error) {
func (p *puller) Snapshot(ctx context.Context) (cache.ImmutableRef, error) {
p.resolveLocal()
if len(p.config) == 0 {
if err := p.resolve(ctx, g); err != nil {
return nil, err
}
if err := p.resolve(ctx); err != nil {
return nil, err
}
if p.config != nil {
@@ -380,31 +350,16 @@ func (p *puller) Snapshot(ctx context.Context, g session.Group) (cache.Immutable
if len(img.RootFS.DiffIDs) == 0 {
return nil, nil
}
l, err := p.is.LayerStore.Get(img.RootFS.ChainID())
if err == nil {
layer.ReleaseAndLog(p.is.LayerStore, l)
ref, err := p.getRef(ctx, img.RootFS.DiffIDs, cache.WithDescription(fmt.Sprintf("from local %s", p.ref)))
if err != nil {
return nil, err
}
return ref, nil
ref, err := p.is.CacheAccessor.GetFromSnapshotter(ctx, string(img.RootFS.ChainID()), cache.WithDescription(fmt.Sprintf("from local %s", p.ref)))
if err != nil {
return nil, err
}
return ref, nil
}
}
ongoing := newJobs(p.ref)
ctx, done, err := leaseutil.WithLease(ctx, p.is.LeaseManager, leases.WithExpiration(5*time.Minute), leaseutil.MakeTemporary)
if err != nil {
return nil, err
}
defer func() {
done(context.TODO())
if p.is.GarbageCollect != nil {
go p.is.GarbageCollect(context.TODO())
}
}()
pctx, stopProgress := context.WithCancel(ctx)
pw, _, ctx := progress.FromContext(ctx)
@@ -419,16 +374,12 @@ func (p *puller) Snapshot(ctx context.Context, g session.Group) (cache.Immutable
<-progressDone
}()
fetcher, err := p.resolver(g).Fetcher(ctx, p.ref)
fetcher, err := p.resolver.Fetcher(ctx, p.ref)
if err != nil {
stopProgress()
return nil, err
}
platform := platforms.Only(p.platform)
var nonLayers []digest.Digest
var (
schema1Converter *schema1.Converter
handlers []images.Handler
@@ -449,7 +400,6 @@ func (p *puller) Snapshot(ctx context.Context, g session.Group) (cache.Immutable
case images.MediaTypeDockerSchema2Manifest, ocispec.MediaTypeImageManifest,
images.MediaTypeDockerSchema2ManifestList, ocispec.MediaTypeImageIndex,
images.MediaTypeDockerSchema2Config, ocispec.MediaTypeImageConfig:
nonLayers = append(nonLayers, desc.Digest)
default:
return nil, images.ErrSkipDesc
}
@@ -459,10 +409,10 @@ func (p *puller) Snapshot(ctx context.Context, g session.Group) (cache.Immutable
// Get all the children for a descriptor
childrenHandler := images.ChildrenHandler(p.is.ContentStore)
// Set any children labels for that content
childrenHandler = images.SetChildrenLabels(p.is.ContentStore, childrenHandler)
// Filter the children by the platform
childrenHandler = images.FilterPlatforms(childrenHandler, platform)
// Limit manifests pulled to the best match in an index
childrenHandler = images.LimitManifests(childrenHandler, platform, 1)
childrenHandler = images.FilterPlatforms(childrenHandler, platforms.Default())
handlers = append(handlers,
remotes.FetchHandler(p.is.ContentStore, fetcher),
@@ -470,7 +420,7 @@ func (p *puller) Snapshot(ctx context.Context, g session.Group) (cache.Immutable
)
}
if err := images.Dispatch(ctx, images.Handlers(handlers...), nil, p.desc); err != nil {
if err := images.Dispatch(ctx, images.Handlers(handlers...), p.desc); err != nil {
stopProgress()
return nil, err
}
@@ -483,12 +433,12 @@ func (p *puller) Snapshot(ctx context.Context, g session.Group) (cache.Immutable
}
}
mfst, err := images.Manifest(ctx, p.is.ContentStore, p.desc, platform)
mfst, err := images.Manifest(ctx, p.is.ContentStore, p.desc, platforms.Default())
if err != nil {
return nil, err
}
config, err := images.Config(ctx, p.is.ContentStore, p.desc, platform)
config, err := images.Config(ctx, p.is.ContentStore, p.desc, platforms.Default())
if err != nil {
return nil, err
}
@@ -529,7 +479,7 @@ func (p *puller) Snapshot(ctx context.Context, g session.Group) (cache.Immutable
tm := time.Now()
end = &tm
}
_ = pw.Write("extracting "+p.ID, progress.Status{
pw.Write("extracting "+p.ID, progress.Status{
Action: "extract",
Started: &st.st,
Completed: end,
@@ -546,9 +496,6 @@ func (p *puller) Snapshot(ctx context.Context, g session.Group) (cache.Immutable
layers := make([]xfer.DownloadDescriptor, 0, len(mfst.Layers))
for i, desc := range mfst.Layers {
if err := desc.Digest.Validate(); err != nil {
return nil, errors.Wrap(err, "layer digest could not be validated")
}
ongoing.add(desc)
layers = append(layers, &layerDescriptor{
desc: desc,
@@ -561,31 +508,24 @@ func (p *puller) Snapshot(ctx context.Context, g session.Group) (cache.Immutable
defer func() {
<-progressDone
for _, desc := range mfst.Layers {
p.is.ContentStore.Delete(context.TODO(), desc.Digest)
}
}()
r := image.NewRootFS()
rootFS, release, err := p.is.DownloadManager.Download(ctx, *r, runtime.GOOS, layers, pkgprogress.ChanOutput(pchan))
stopProgress()
if err != nil {
return nil, err
}
stopProgress()
ref, err := p.getRef(ctx, rootFS.DiffIDs, cache.WithDescription(fmt.Sprintf("pulled from %s", p.ref)))
ref, err := p.is.CacheAccessor.GetFromSnapshotter(ctx, string(rootFS.ChainID()), cache.WithDescription(fmt.Sprintf("pulled from %s", p.ref)))
release()
if err != nil {
return nil, err
}
// keep manifest blobs until ref is alive for cache
for _, nl := range nonLayers {
if err := p.is.LeaseManager.AddResource(ctx, leases.Lease{ID: ref.ID()}, leases.Resource{
ID: nl.String(),
Type: "content",
}); err != nil {
return nil, err
}
}
// TODO: handle windows layers for cross platform builds
if p.src.RecordType != "" && cache.GetRecordType(ref) == "" {
@@ -600,7 +540,7 @@ func (p *puller) Snapshot(ctx context.Context, g session.Group) (cache.Immutable
// Fetch(ctx context.Context, desc ocispec.Descriptor) (io.ReadCloser, error)
type layerDescriptor struct {
is *Source
is *imageSource
fetcher remotes.Fetcher
desc ocispec.Descriptor
diffID layer.DiffID
@@ -640,7 +580,7 @@ func (ld *layerDescriptor) Download(ctx context.Context, progressOutput pkgprogr
return nil, 0, err
}
return io.NopCloser(content.NewReader(ra)), ld.desc.Size, nil
return ioutil.NopCloser(content.NewReader(ra)), ld.desc.Size, nil
}
func (ld *layerDescriptor) Close() {
@@ -702,7 +642,7 @@ func showProgress(ctx context.Context, ongoing *jobs, cs content.Store, pw progr
refKey := remotes.MakeRefKey(ctx, j.Descriptor)
if a, ok := actives[refKey]; ok {
started := j.started
_ = pw.Write(j.Digest.String(), progress.Status{
pw.Write(j.Digest.String(), progress.Status{
Action: a.Status,
Total: int(a.Total),
Current: int(a.Offset),
@@ -714,8 +654,8 @@ func showProgress(ctx context.Context, ongoing *jobs, cs content.Store, pw progr
if !j.done {
info, err := cs.Info(context.TODO(), j.Digest)
if err != nil {
if containerderrors.IsNotFound(err) {
// _ = pw.Write(j.Digest.String(), progress.Status{
if errdefs.IsNotFound(err) {
// pw.Write(j.Digest.String(), progress.Status{
// Action: "waiting",
// })
continue
@@ -727,7 +667,7 @@ func showProgress(ctx context.Context, ongoing *jobs, cs content.Store, pw progr
if done || j.done {
started := j.started
createdAt := info.CreatedAt
_ = pw.Write(j.Digest.String(), progress.Status{
pw.Write(j.Digest.String(), progress.Status{
Action: "done",
Current: int(info.Size),
Total: int(info.Size),
@@ -813,13 +753,13 @@ func oneOffProgress(ctx context.Context, id string) func(err error) error {
st := progress.Status{
Started: &now,
}
_ = pw.Write(id, st)
pw.Write(id, st)
return func(err error) error {
// TODO: set error on status
now := time.Now()
st.Completed = &now
_ = pw.Write(id, st)
_ = pw.Close()
pw.Write(id, st)
pw.Close()
return err
}
}
@@ -830,11 +770,10 @@ func cacheKeyFromConfig(dt []byte) digest.Digest {
var img ocispec.Image
err := json.Unmarshal(dt, &img)
if err != nil {
logrus.WithError(err).Errorf("failed to unmarshal image config for cache key %v", err)
return digest.FromBytes(dt)
}
if img.RootFS.Type != "layers" || len(img.RootFS.DiffIDs) == 0 {
return ""
if img.RootFS.Type != "layers" {
return digest.FromBytes(dt)
}
return identity.ChainID(img.RootFS.DiffIDs)
}
@@ -852,13 +791,3 @@ func resolveModeToString(rm source.ResolveMode) string {
}
return ""
}
func platformMatches(img *image.Image, p *ocispec.Platform) bool {
return dimages.OnlyPlatformWithFallback(*p).Match(ocispec.Platform{
Architecture: img.Architecture,
OS: img.OS,
OSVersion: img.OSVersion,
OSFeatures: img.OSFeatures,
Variant: img.Variant,
})
}

View File

@@ -1,168 +0,0 @@
package localinlinecache
import (
"context"
"encoding/json"
"time"
"github.com/containerd/containerd/content"
"github.com/containerd/containerd/images"
"github.com/containerd/containerd/remotes/docker"
distreference "github.com/docker/distribution/reference"
imagestore "github.com/docker/docker/image"
"github.com/docker/docker/reference"
"github.com/moby/buildkit/cache/remotecache"
registryremotecache "github.com/moby/buildkit/cache/remotecache/registry"
v1 "github.com/moby/buildkit/cache/remotecache/v1"
"github.com/moby/buildkit/session"
"github.com/moby/buildkit/solver"
"github.com/moby/buildkit/worker"
digest "github.com/opencontainers/go-digest"
specs "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
)
func init() {
// See https://github.com/moby/buildkit/pull/1993.
v1.EmptyLayerRemovalSupported = false
}
// ResolveCacheImporterFunc returns a resolver function for local inline cache
func ResolveCacheImporterFunc(sm *session.Manager, resolverFunc docker.RegistryHosts, cs content.Store, rs reference.Store, is imagestore.Store) remotecache.ResolveCacheImporterFunc {
upstream := registryremotecache.ResolveCacheImporterFunc(sm, cs, resolverFunc)
return func(ctx context.Context, group session.Group, attrs map[string]string) (remotecache.Importer, specs.Descriptor, error) {
if dt, err := tryImportLocal(rs, is, attrs["ref"]); err == nil {
return newLocalImporter(dt), specs.Descriptor{}, nil
}
return upstream(ctx, group, attrs)
}
}
func tryImportLocal(rs reference.Store, is imagestore.Store, refStr string) ([]byte, error) {
ref, err := distreference.ParseNormalizedNamed(refStr)
if err != nil {
return nil, err
}
dgst, err := rs.Get(ref)
if err != nil {
return nil, err
}
img, err := is.Get(imagestore.ID(dgst))
if err != nil {
return nil, err
}
return img.RawJSON(), nil
}
func newLocalImporter(dt []byte) remotecache.Importer {
return &localImporter{dt: dt}
}
type localImporter struct {
dt []byte
}
func (li *localImporter) Resolve(ctx context.Context, _ specs.Descriptor, id string, w worker.Worker) (solver.CacheManager, error) {
cc := v1.NewCacheChains()
if err := li.importInlineCache(ctx, li.dt, cc); err != nil {
return nil, err
}
keysStorage, resultStorage, err := v1.NewCacheKeyStorage(cc, w)
if err != nil {
return nil, err
}
return solver.NewCacheManager(id, keysStorage, resultStorage), nil
}
func (li *localImporter) importInlineCache(ctx context.Context, dt []byte, cc solver.CacheExporterTarget) error {
var img image
if err := json.Unmarshal(dt, &img); err != nil {
return err
}
if img.Cache == nil {
return nil
}
var config v1.CacheConfig
if err := json.Unmarshal(img.Cache, &config.Records); err != nil {
return err
}
createdDates, createdMsg, err := parseCreatedLayerInfo(img)
if err != nil {
return err
}
layers := v1.DescriptorProvider{}
for i, diffID := range img.Rootfs.DiffIDs {
dgst := digest.Digest(diffID.String())
desc := specs.Descriptor{
Digest: dgst,
Size: -1,
MediaType: images.MediaTypeDockerSchema2Layer,
Annotations: map[string]string{},
}
if createdAt := createdDates[i]; createdAt != "" {
desc.Annotations["buildkit/createdat"] = createdAt
}
if createdBy := createdMsg[i]; createdBy != "" {
desc.Annotations["buildkit/description"] = createdBy
}
desc.Annotations["containerd.io/uncompressed"] = img.Rootfs.DiffIDs[i].String()
layers[dgst] = v1.DescriptorProviderPair{
Descriptor: desc,
Provider: &emptyProvider{},
}
config.Layers = append(config.Layers, v1.CacheLayer{
Blob: dgst,
ParentIndex: i - 1,
})
}
return v1.ParseConfig(config, layers, cc)
}
type image struct {
Rootfs struct {
DiffIDs []digest.Digest `json:"diff_ids"`
} `json:"rootfs"`
Cache []byte `json:"moby.buildkit.cache.v0"`
History []struct {
Created *time.Time `json:"created,omitempty"`
CreatedBy string `json:"created_by,omitempty"`
EmptyLayer bool `json:"empty_layer,omitempty"`
} `json:"history,omitempty"`
}
func parseCreatedLayerInfo(img image) ([]string, []string, error) {
dates := make([]string, 0, len(img.Rootfs.DiffIDs))
createdBy := make([]string, 0, len(img.Rootfs.DiffIDs))
for _, h := range img.History {
if !h.EmptyLayer {
str := ""
if h.Created != nil {
dt, err := h.Created.MarshalText()
if err != nil {
return nil, nil, err
}
str = string(dt)
}
dates = append(dates, str)
createdBy = append(createdBy, h.CreatedBy)
}
}
return dates, createdBy, nil
}
type emptyProvider struct {
}
func (p *emptyProvider) ReaderAt(ctx context.Context, dec specs.Descriptor) (content.ReaderAt, error) {
return nil, errors.Errorf("ReaderAt not implemented for empty provider")
}
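
The deleted importer above short-circuits the registry: when the cache ref resolves to a locally stored image, its raw config JSON becomes the inline-cache source, and only otherwise does the stock registry importer run. That fallback shape reduces to the following sketch, with simplified stand-in types rather than the real buildkit interfaces:

package main

import (
	"errors"
	"fmt"
)

// importer is a simplified stand-in for buildkit's ResolveCacheImporterFunc.
type importer func(ref string) ([]byte, error)

// withLocalFallback consults the local store first and only falls back to
// the upstream importer when the ref is not known locally, mirroring the
// tryImportLocal short-circuit above.
func withLocalFallback(local, upstream importer) importer {
	return func(ref string) ([]byte, error) {
		if dt, err := local(ref); err == nil {
			return dt, nil
		}
		return upstream(ref)
	}
}

func main() {
	local := func(ref string) ([]byte, error) {
		if ref == "cached:latest" {
			return []byte(`{"moby.buildkit.cache.v0":null}`), nil
		}
		return nil, errors.New("not found locally")
	}
	upstream := func(ref string) ([]byte, error) {
		return []byte("{}"), nil // pretend this hit the registry
	}
	dt, _ := withLocalFallback(local, upstream)("cached:latest")
	fmt.Printf("%s\n", dt)
}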

View File

@@ -12,22 +12,12 @@ import (
"golang.org/x/sync/errgroup"
)
func (s *snapshotter) GetDiffIDs(ctx context.Context, key string) ([]layer.DiffID, error) {
func (s *snapshotter) EnsureLayer(ctx context.Context, key string) ([]layer.DiffID, error) {
if l, err := s.getLayer(key, true); err != nil {
return nil, err
} else if l != nil {
return getDiffChain(l), nil
}
return nil, nil
}
func (s *snapshotter) EnsureLayer(ctx context.Context, key string) ([]layer.DiffID, error) {
diffIDs, err := s.GetDiffIDs(ctx, key)
if err != nil {
return nil, err
} else if diffIDs != nil {
return diffIDs, nil
}
id, committed := s.getGraphDriverID(key)
if !committed {
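
This hunk folds the old GetDiffIDs lookup into EnsureLayer itself: return the existing diff chain when the layer is already registered, otherwise fall through to materialize it from the graph driver (the hunk is truncated here). The check-then-build shape, reduced to a runnable sketch in which lookup and build are hypothetical stand-ins for s.getLayer and the graph-driver path:

package main

import "fmt"

// ensureLayer returns the cached diff chain when the layer is already
// registered and otherwise builds it.
func ensureLayer(key string, lookup, build func(string) ([]string, error)) ([]string, error) {
	if chain, err := lookup(key); err != nil {
		return nil, err
	} else if chain != nil {
		return chain, nil // fast path: layer already registered
	}
	return build(key) // slow path: materialize the layer first
}

func main() {
	lookup := func(key string) ([]string, error) { return nil, nil } // cache miss
	build := func(key string) ([]string, error) { return []string{"sha256:base"}, nil }
	chain, _ := ensureLayer("snap-1", lookup, build)
	fmt.Println(chain)
}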

View File

@@ -1,133 +0,0 @@
package snapshot
import (
"context"
"sync"
"github.com/containerd/containerd/leases"
"github.com/sirupsen/logrus"
bolt "go.etcd.io/bbolt"
)
type sLM struct {
manager leases.Manager
s *snapshotter
mu sync.Mutex
byLease map[string]map[string]struct{}
bySnapshot map[string]map[string]struct{}
}
func newLeaseManager(s *snapshotter, lm leases.Manager) *sLM {
return &sLM{
s: s,
manager: lm,
byLease: map[string]map[string]struct{}{},
bySnapshot: map[string]map[string]struct{}{},
}
}
func (l *sLM) Create(ctx context.Context, opts ...leases.Opt) (leases.Lease, error) {
return l.manager.Create(ctx, opts...)
}
func (l *sLM) Delete(ctx context.Context, lease leases.Lease, opts ...leases.DeleteOpt) error {
if err := l.manager.Delete(ctx, lease, opts...); err != nil {
return err
}
l.mu.Lock()
if snaps, ok := l.byLease[lease.ID]; ok {
for sID := range snaps {
l.delRef(lease.ID, sID)
}
}
l.mu.Unlock()
return nil
}
func (l *sLM) List(ctx context.Context, filters ...string) ([]leases.Lease, error) {
return l.manager.List(ctx, filters...)
}
func (l *sLM) AddResource(ctx context.Context, lease leases.Lease, resource leases.Resource) error {
if err := l.manager.AddResource(ctx, lease, resource); err != nil {
return err
}
if resource.Type == "snapshots/default" {
l.mu.Lock()
l.addRef(lease.ID, resource.ID)
l.mu.Unlock()
}
return nil
}
func (l *sLM) DeleteResource(ctx context.Context, lease leases.Lease, resource leases.Resource) error {
if err := l.manager.DeleteResource(ctx, lease, resource); err != nil {
return err
}
if resource.Type == "snapshots/default" {
l.mu.Lock()
l.delRef(lease.ID, resource.ID)
l.mu.Unlock()
}
return nil
}
func (l *sLM) ListResources(ctx context.Context, lease leases.Lease) ([]leases.Resource, error) {
return l.manager.ListResources(ctx, lease)
}
func (l *sLM) addRef(lID, sID string) {
load := false
snapshots, ok := l.byLease[lID]
if !ok {
snapshots = map[string]struct{}{}
l.byLease[lID] = snapshots
}
if _, ok := snapshots[sID]; !ok {
snapshots[sID] = struct{}{}
}
leases, ok := l.bySnapshot[sID]
if !ok {
leases = map[string]struct{}{}
l.bySnapshot[sID] = leases
load = true
}
if _, ok := leases[lID]; !ok {
leases[lID] = struct{}{}
}
if load {
l.s.getLayer(sID, true)
if _, ok := l.s.chainID(sID); ok {
l.s.db.Update(func(tx *bolt.Tx) error {
b, err := tx.CreateBucketIfNotExists([]byte(lID))
if err != nil {
return err
}
return b.Put(keyChainID, []byte(sID))
})
}
}
}
func (l *sLM) delRef(lID, sID string) {
snapshots, ok := l.byLease[lID]
if ok {
delete(snapshots, sID)
if len(snapshots) == 0 {
delete(l.byLease, lID)
}
}
leases, ok := l.bySnapshot[sID]
if ok {
delete(leases, lID)
if len(leases) == 0 {
delete(l.bySnapshot, sID)
if err := l.s.remove(context.TODO(), sID); err != nil {
logrus.Warnf("failed to remove snapshot %v: %v", sID, err)
}
}
}
}
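
The deleted lease manager keeps a two-way index, leases to snapshots and snapshots to leases, and removes a snapshot once its last lease reference is gone. A self-contained sketch of that bookkeeping with plain maps and no locking (the real type guards these maps with a mutex; all names here are illustrative only):

package main

import "fmt"

type index struct {
	byLease    map[string]map[string]struct{}
	bySnapshot map[string]map[string]struct{}
}

func newIndex() *index {
	return &index{
		byLease:    map[string]map[string]struct{}{},
		bySnapshot: map[string]map[string]struct{}{},
	}
}

// addRef records that lease lID holds snapshot sID, in both directions.
func (ix *index) addRef(lID, sID string) {
	if ix.byLease[lID] == nil {
		ix.byLease[lID] = map[string]struct{}{}
	}
	ix.byLease[lID][sID] = struct{}{}
	if ix.bySnapshot[sID] == nil {
		ix.bySnapshot[sID] = map[string]struct{}{}
	}
	ix.bySnapshot[sID][lID] = struct{}{}
}

// delRef drops one lease->snapshot reference; it reports true when the
// snapshot has no remaining leases and can be removed.
func (ix *index) delRef(lID, sID string) bool {
	if snaps, ok := ix.byLease[lID]; ok {
		delete(snaps, sID)
		if len(snaps) == 0 {
			delete(ix.byLease, lID)
		}
	}
	if ls, ok := ix.bySnapshot[sID]; ok {
		delete(ls, lID)
		if len(ls) == 0 {
			delete(ix.bySnapshot, sID)
			return true
		}
	}
	return false
}

func main() {
	ix := newIndex()
	ix.addRef("lease-1", "snap-a")
	ix.addRef("lease-2", "snap-a")
	fmt.Println(ix.delRef("lease-1", "snap-a")) // false: lease-2 still holds it
	fmt.Println(ix.delRef("lease-2", "snap-a")) // true: last reference gone
}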

View File

@@ -7,12 +7,10 @@ import (
"strings"
"sync"
"github.com/containerd/containerd/leases"
"github.com/containerd/containerd/mount"
"github.com/containerd/containerd/snapshots"
"github.com/docker/docker/daemon/graphdriver"
"github.com/docker/docker/layer"
"github.com/docker/docker/pkg/idtools"
"github.com/moby/buildkit/identity"
"github.com/moby/buildkit/snapshot"
digest "github.com/opencontainers/go-digest"
@@ -22,16 +20,14 @@ import (
var keyParent = []byte("parent")
var keyCommitted = []byte("committed")
var keyIsCommitted = []byte("iscommitted")
var keyChainID = []byte("chainid")
var keySize = []byte("size")
// Opt defines options for creating the snapshotter
type Opt struct {
GraphDriver graphdriver.Driver
LayerStore layer.Store
Root string
IdentityMapping *idtools.IdentityMapping
GraphDriver graphdriver.Driver
LayerStore layer.Store
Root string
}
type graphIDRegistrar interface {
@@ -53,17 +49,19 @@ type snapshotter struct {
reg graphIDRegistrar
}
var _ snapshot.SnapshotterBase = &snapshotter{}
// NewSnapshotter creates a new snapshotter
func NewSnapshotter(opt Opt, prevLM leases.Manager) (snapshot.Snapshotter, leases.Manager, error) {
func NewSnapshotter(opt Opt) (snapshot.SnapshotterBase, error) {
dbPath := filepath.Join(opt.Root, "snapshots.db")
db, err := bolt.Open(dbPath, 0600, nil)
if err != nil {
return nil, nil, errors.Wrapf(err, "failed to open database file %s", dbPath)
return nil, errors.Wrapf(err, "failed to open database file %s", dbPath)
}
reg, ok := opt.LayerStore.(graphIDRegistrar)
if !ok {
return nil, nil, errors.Errorf("layerstore doesn't support graphID registration")
return nil, errors.Errorf("layerstore doesn't support graphID registration")
}
s := &snapshotter{
@@ -72,45 +70,18 @@ func NewSnapshotter(opt Opt, prevLM leases.Manager) (snapshot.Snapshotter, lease
refs: map[string]layer.Layer{},
reg: reg,
}
lm := newLeaseManager(s, prevLM)
ll, err := lm.List(context.TODO())
if err != nil {
return nil, nil, err
}
for _, l := range ll {
rr, err := lm.ListResources(context.TODO(), l)
if err != nil {
return nil, nil, err
}
for _, r := range rr {
if r.Type == "snapshots/default" {
lm.addRef(l.ID, r.ID)
}
}
}
return s, lm, nil
}
func (s *snapshotter) Name() string {
return "default"
}
func (s *snapshotter) IdentityMapping() *idtools.IdentityMapping {
return s.opt.IdentityMapping
return s, nil
}
func (s *snapshotter) Prepare(ctx context.Context, key, parent string, opts ...snapshots.Opt) error {
origParent := parent
if parent != "" {
if l, err := s.getLayer(parent, false); err != nil {
return errors.Wrapf(err, "failed to get parent layer %s", parent)
return err
} else if l != nil {
parent, err = getGraphID(l)
if err != nil {
return errors.Wrapf(err, "failed to get parent graphid %s", l.ChainID())
return err
}
} else {
parent, _ = s.getGraphDriverID(parent)
@@ -165,24 +136,23 @@ func (s *snapshotter) getLayer(key string, withCommitted bool) (layer.Layer, err
return nil
}); err != nil {
s.mu.Unlock()
return nil, errors.WithStack(err)
return nil, err
}
s.mu.Unlock()
if id == "" {
s.mu.Unlock()
return nil, nil
}
return s.getLayer(string(id), withCommitted)
}
var err error
l, err = s.opt.LayerStore.Get(id)
if err != nil {
s.mu.Unlock()
return nil, errors.WithStack(err)
return nil, err
}
s.refs[key] = l
if err := s.db.Update(func(tx *bolt.Tx) error {
_, err := tx.CreateBucketIfNotExists([]byte(key))
return errors.WithStack(err)
return err
}); err != nil {
s.mu.Unlock()
return nil, err
@@ -274,24 +244,24 @@ func (s *snapshotter) Mounts(ctx context.Context, key string) (snapshot.Mountabl
id := identity.NewID()
var rwlayer layer.RWLayer
return &mountable{
idmap: s.opt.IdentityMapping,
acquire: func() ([]mount.Mount, func() error, error) {
acquire: func() ([]mount.Mount, error) {
rwlayer, err = s.opt.LayerStore.CreateRWLayer(id, l.ChainID(), nil)
if err != nil {
return nil, nil, err
return nil, err
}
rootfs, err := rwlayer.Mount("")
if err != nil {
return nil, nil, err
return nil, err
}
return []mount.Mount{{
Source: rootfs.Path(),
Type: "bind",
Options: []string{"rbind"},
}}, func() error {
_, err := s.opt.LayerStore.ReleaseRWLayer(rwlayer)
return err
}, nil
Source: rootfs.Path(),
Type: "bind",
Options: []string{"rbind"},
}}, nil
},
release: func() error {
_, err := s.opt.LayerStore.ReleaseRWLayer(rwlayer)
return err
},
}, nil
}
@@ -299,28 +269,24 @@ func (s *snapshotter) Mounts(ctx context.Context, key string) (snapshot.Mountabl
id, _ := s.getGraphDriverID(key)
return &mountable{
idmap: s.opt.IdentityMapping,
acquire: func() ([]mount.Mount, func() error, error) {
acquire: func() ([]mount.Mount, error) {
rootfs, err := s.opt.GraphDriver.Get(id, "")
if err != nil {
return nil, nil, err
return nil, err
}
return []mount.Mount{{
Source: rootfs.Path(),
Type: "bind",
Options: []string{"rbind"},
}}, func() error {
return s.opt.GraphDriver.Put(id)
}, nil
Source: rootfs.Path(),
Type: "bind",
Options: []string{"rbind"},
}}, nil
},
release: func() error {
return s.opt.GraphDriver.Put(id)
},
}, nil
}
func (s *snapshotter) Remove(ctx context.Context, key string) error {
return errors.Errorf("calling snapshot.remove is forbidden")
}
func (s *snapshotter) remove(ctx context.Context, key string) error {
l, err := s.getLayer(key, true)
if err != nil {
return err
@@ -329,17 +295,8 @@ func (s *snapshotter) remove(ctx context.Context, key string) error {
id, _ := s.getGraphDriverID(key)
var found bool
var alreadyCommitted bool
if err := s.db.Update(func(tx *bolt.Tx) error {
b := tx.Bucket([]byte(key))
found = b != nil
if b != nil {
if b.Get(keyIsCommitted) != nil {
alreadyCommitted = true
return nil
}
}
found = tx.Bucket([]byte(key)) != nil
if found {
tx.DeleteBucket([]byte(key))
if id != key {
@@ -351,10 +308,6 @@ func (s *snapshotter) remove(ctx context.Context, key string) error {
return err
}
if alreadyCommitted {
return nil
}
if l != nil {
s.mu.Lock()
delete(s.refs, key)
@@ -376,14 +329,7 @@ func (s *snapshotter) Commit(ctx context.Context, name, key string, opts ...snap
if err != nil {
return err
}
if err := b.Put(keyCommitted, []byte(key)); err != nil {
return err
}
b, err = tx.CreateBucketIfNotExists([]byte(key))
if err != nil {
return err
}
return b.Put(keyIsCommitted, []byte{})
return b.Put(keyCommitted, []byte(key))
})
}
@@ -391,8 +337,8 @@ func (s *snapshotter) View(ctx context.Context, key, parent string, opts ...snap
return s.Mounts(ctx, parent)
}
func (s *snapshotter) Walk(context.Context, snapshots.WalkFunc, ...string) error {
return nil
func (s *snapshotter) Walk(ctx context.Context, fn func(context.Context, snapshots.Info) error) error {
return errors.Errorf("not-implemented")
}
func (s *snapshotter) Update(ctx context.Context, info snapshots.Info, fieldpaths ...string) (snapshots.Info, error) {
@@ -482,33 +428,31 @@ func (s *snapshotter) Close() error {
type mountable struct {
mu sync.Mutex
mounts []mount.Mount
acquire func() ([]mount.Mount, func() error, error)
acquire func() ([]mount.Mount, error)
release func() error
refCount int
idmap *idtools.IdentityMapping
}
func (m *mountable) Mount() ([]mount.Mount, func() error, error) {
func (m *mountable) Mount() ([]mount.Mount, error) {
m.mu.Lock()
defer m.mu.Unlock()
if m.mounts != nil {
m.refCount++
return m.mounts, m.releaseMount, nil
return m.mounts, nil
}
mounts, release, err := m.acquire()
mounts, err := m.acquire()
if err != nil {
return nil, nil, err
return nil, err
}
m.mounts = mounts
m.release = release
m.refCount = 1
return m.mounts, m.releaseMount, nil
return m.mounts, nil
}
func (m *mountable) releaseMount() error {
func (m *mountable) Release() error {
m.mu.Lock()
defer m.mu.Unlock()
@@ -523,12 +467,5 @@ func (m *mountable) releaseMount() error {
}
m.mounts = nil
defer func() {
m.release = nil
}()
return m.release()
}
func (m *mountable) IdentityMapping() *idtools.IdentityMapping {
return m.idmap
}
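
The mountable changes above move from returning a per-call release func to a refcounted Mount/Release pair on the type itself: the first Mount acquires the RW layer, later calls reuse it, and the last Release tears it down. A condensed, runnable sketch of that pattern, assuming simplified stand-in types in place of the layer store and mount.Mount:

package main

import (
	"fmt"
	"sync"
)

type refMount struct {
	mu       sync.Mutex
	acquired bool
	refCount int
}

func (m *refMount) Mount() (string, error) {
	m.mu.Lock()
	defer m.mu.Unlock()
	if m.acquired {
		m.refCount++ // reuse the existing mount
		return "/mnt/example", nil
	}
	// First caller pays the acquisition cost (e.g. CreateRWLayer + Mount).
	m.acquired = true
	m.refCount = 1
	return "/mnt/example", nil
}

func (m *refMount) Release() error {
	m.mu.Lock()
	defer m.mu.Unlock()
	if m.refCount == 0 {
		return fmt.Errorf("release without mount")
	}
	m.refCount--
	if m.refCount == 0 {
		m.acquired = false // last reference: tear down (e.g. ReleaseRWLayer)
	}
	return nil
}

func main() {
	var m refMount
	m.Mount()
	m.Mount()
	m.Release()
	fmt.Println(m.Release()) // <nil>: fully released
}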

Some files were not shown because too many files have changed in this diff.