Compare commits


178 Commits

Paweł Gronowski
3ab5c7d003 Merge pull request #48383 from vvoland/48382-27.x
[27.x backport] Dockerfile/vendor: update containerd to v1.7.21
2024-08-27 16:00:14 +02:00
Paweł Gronowski
875e8aeef2 vendor: github.com/containerd/containerd v1.7.21
full diff: https://github.com/containerd/containerd/compare/v1.7.20...v1.7.21

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit a88efd7359)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-08-27 14:07:56 +02:00
Paweł Gronowski
1900e4d8eb Dockerfile: update containerd binary to v1.7.21 (static binaries and CI only)
Update the containerd binary that's used in CI and static binaries

- full diff: https://github.com/containerd/containerd/compare/v1.7.20...v1.7.21
- release notes: https://github.com/containerd/containerd/releases/tag/v1.7.21

```markdown changelog
Update containerd (static binaries only) to [v1.7.21](https://github.com/containerd/containerd/releases/tag/v1.7.21)
```

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit de4fc1c927)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-08-27 13:43:36 +02:00
Sebastiaan van Stijn
cd7746d30b Merge pull request #48380 from vvoland/48374-27.x
[27.x backport] c8d/pull: Keep the replaced image as dangling
2024-08-27 13:08:51 +02:00
Paweł Gronowski
2a13a384b8 Merge pull request #48376 from vvoland/48293-27.x
[27.x backport] c8d/load: Multi-platform fixes
2024-08-27 11:39:06 +02:00
Sebastiaan van Stijn
9fd71f5d0e Merge pull request #48378 from corhere/backport-27.x/dockerd-manpage
[27.x backport] Move dockerd man page back from docker/cli
2024-08-27 10:43:54 +02:00
Paweł Gronowski
ecd2b6ff09 c8d/image: Add hostPlatformMatcher
A subset of 842c5c584e that only adds the
`hostPlatformMatcher` method.

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-08-27 10:17:50 +02:00
Cory Snider
d5b03423d1 man: support bringing your own go-md2man
Set the GO_MD2MAN make variable to elide building go-md2man from
vendored sources and use the specified command instead.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit edfde78355)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-08-26 16:22:55 -04:00
Cory Snider
56c5c23114 man: build dockerd man pages using make
Vendor the go-md2man tool used to generate the man pages so that the
only dependency is a Go toolchain.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 05d7008419)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-08-26 16:22:55 -04:00
Grace Choi
77b2eb5734 Removed all mentions of "please" from docs and messages
Signed-off-by: Grace Choi <gracechoi@utexas.edu>
Signed-off-by: Pranjal Rai <pranjalrai@utexas.edu>
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit b4cee5c3ee)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-08-26 16:22:55 -04:00
David Karlsson
805becdc7e docs: add default-network-opt daemon option
Signed-off-by: David Karlsson <35727626+dvdksn@users.noreply.github.com>
(cherry picked from commit f1ec84314d)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-08-26 16:22:55 -04:00
Sebastiaan van Stijn
a5828ac742 docs: remove devicemapper
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 23812190c3)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-08-26 16:22:55 -04:00
Kir Kolyshkin
f7246a0e2c man/dockerd.8: assorted formatting fixes
Mostly, this makes sure that literals (such as true, false, host,
private, examples of options usage etc.) are typeset in bold, except for
filenames, which are typeset in italic.

While at it,
 - remove some default values from the synopsis, as they should not
   be there;
 - fix man page references (page name in bold, volume number in
   regular type).

This is not a complete fix, but a step in the right direction.

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit 690d166632)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-08-26 16:22:55 -04:00
Kir Kolyshkin
f110e779f6 man/dockerd.8: escape asterisks and underscores
1. Escape asterisks and underscores, that have special meaning in
   Markdown. While most markdown processors are smart enough to
   distinguish whether it's a literal * or _ or a formatting directive,
   escaping makes things more explicit.

2. Fix the use of the wrong heading level in some dm options (most are ####,
   but some were #####).

3. Do not use sub-heading for examples in some dm options (this is how
   it's done in the rest of the man page).

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit 374b779dd1)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-08-26 16:22:55 -04:00
Sebastiaan van Stijn
edbcbf8da7 docs: update dockerd usage output for new proxy-options
Adds documentation for the options that were added in
427c7cc5f8

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 386d0c0fbc)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-08-26 16:22:55 -04:00
Ashly Mathew
b7cc16b0b0 Fix styling of arguments
Signed-off-by: Ashly Mathew <ashlymathew93@gmail.com>
(cherry picked from commit 54971ac807)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-08-26 16:22:55 -04:00
Luis Henrique Mulinari
ecbc4f05bb Fix the max-concurrent-downloads and max-concurrent-uploads configs documentation
This fix tries to address issues raised in moby/moby#44346.
The max-concurrent-downloads and max-concurrent-uploads limits apply to the whole engine, not to each individual pull/push command.

Signed-off-by: Luis Henrique Mulinari <luis.mulinari@gmail.com>
(cherry picked from commit a8b8f9b288)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-08-26 16:22:55 -04:00
Sebastiaan van Stijn
43298ad298 docs: remove documentation about deprecated cluster-store
This removes documentation related to legacy overlay networks using
an external k/v store.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 68e9223289)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-08-26 16:22:55 -04:00
Anca Iordache
8cc7f26f56 Document --validate daemon option
Signed-off-by: Anca Iordache <anca.iordache@docker.com>
(cherry picked from commit 6c702167bf)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-08-26 16:22:55 -04:00
Sebastiaan van Stijn
255eaa6647 Update man-page source MarkDown to work with go-md2man v2
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit af45195a21)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-08-26 16:22:55 -04:00
Akihiro Suda
ee27f4cd7f docs: update for cgroup v2 and rootless
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
(cherry picked from commit 562a6d2b13)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-08-26 16:22:55 -04:00
Rob Gulewich
c1d3c952e7 docker run: specify cgroup namespace mode with --cgroupns
Signed-off-by: Rob Gulewich <rgulewich@netflix.com>
(cherry picked from commit 7cf2132655)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-08-26 16:22:55 -04:00
Lukas Heeren
fc9029a2e2 daemon: document --max-download-attempts option
update docs based on PR 39949

Signed-off-by: Lukas Heeren <lukas-heeren@hotmail.com>
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 1cbcd5d47a)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-08-26 16:22:55 -04:00
taiji-tech
115b10a467 Update document links and title.
Signed-off-by: taiji-tech <csuhqg@foxmail.com>
(cherry picked from commit 3cfa74724c)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-08-26 16:22:55 -04:00
selansen
488872bcb4 Allow user to specify default address pools for docker networks. This is a separate commit for CLI files to address PR 36054
Signed-off-by: selansen <elango.siva@docker.com>
(cherry picked from commit 462f38bd8b)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-08-26 16:22:55 -04:00
Sebastiaan van Stijn
f623030fac Update docs and completion-scripts for deprecated features
- the `--disable-legacy-registry` daemon flag was removed
- duplicate keys with conflicting values for engine labels
  now produce an error instead of a warning.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 13ff896b38)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-08-26 16:22:55 -04:00
Renaud Gaubert
ac7032bff9 Added docs for dockerd
Signed-off-by: Renaud Gaubert <renaud.gaubert@gmail.com>
(cherry picked from commit f3c3b05b50)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-08-26 16:22:55 -04:00
Abdur Rehman
161006302f fix a number of minor typos
Fix 19 typos, grammatical errors and duplicated words.

These fixes have minimal impact on the code, as they are either in the
doc files or in comments inside the code files.

Signed-off-by: Abdur Rehman <abdur_rehman@mentor.com>
(cherry picked from commit 20f8455562)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-08-26 16:22:55 -04:00
Kir Kolyshkin
a6de17d230 Introduce/document new IPC modes
This builds (and depends) on https://github.com/moby/moby/pull/34087

Version 2:
 - remove --ipc argument validation (it is now done by daemon)
 - add/document 'none' value
 - docs/reference/run.md: add a table with better modes description
 - dockerd(8) typesetting fixes

Version 3:
 - remove ipc mode tests from cli/command/container/opts_test.go

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit c23d4b017a)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-08-26 16:22:55 -04:00
Aleksa Sarai
6513e80c84 docs: add documentation for dm.libdm_log_level
This is a new option added specifically to allow for debugging of bugs
in Docker's storage drivers or libdm itself.

Signed-off-by: Aleksa Sarai <asarai@suse.de>
(cherry picked from commit 25baee8ab9)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-08-26 16:22:55 -04:00
Cory Snider
3d917f3fd6 Restore dockerd man page
Prepare to move the dockerd man page back to this repository from
docker/cli, retaining history.

This partially reverts commit b5579a4ce3.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 7d3f09a9c3)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-08-26 16:22:55 -04:00
Paweł Gronowski
e854a5c201 c8d/pull: Replace pointer to interface with interface
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 0afe684685)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-08-26 18:39:47 +02:00
Paweł Gronowski
ea58dab95e c8d/pull: Keep the replaced image as dangling
With graphdrivers, the old image was still kept as a dangling image.
Keep the same behavior with containerd.

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit db40a6132b)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-08-26 18:39:43 +02:00
Paweł Gronowski
0a38589add c8d/load: Only unpack host platform images
When loading a multi-platform image, it's not necessary to unpack all
platforms, especially those which have a completely different OS.

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 6ebe6a7353)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-08-26 18:39:10 +02:00
Paweł Gronowski
7c069d3021 c8d/load: Don't fail whole operation if unpack failed
Log the error to the progress output instead.
The image is still loaded into the content store and image service even
if the unpacking failed, so don't error out the whole operation to avoid
missing the load events for other image names loaded from the same
archive.

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 728894b7d0)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-08-26 18:39:08 +02:00
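
The load behavior described in the commit above boils down to a log-and-continue loop; the sketch below only illustrates that pattern under hypothetical names, not the actual c8d load code:

```go
package main

import (
	"errors"
	"fmt"
)

type loadedImage struct{ name string }

// loadAll keeps going when unpack fails: the content is already in the
// content store and image service, so the error is reported to the
// progress stream instead of aborting and losing the load events for
// the remaining image names from the same archive.
func loadAll(images []loadedImage, unpack func(loadedImage) error, progress func(string)) {
	for _, img := range images {
		if err := unpack(img); err != nil {
			progress(fmt.Sprintf("failed to unpack %s: %v", img.name, err))
			continue
		}
		progress("unpacked " + img.name)
	}
}

func main() {
	imgs := []loadedImage{{"a:latest"}, {"b:latest"}}
	unpack := func(img loadedImage) error {
		if img.name == "a:latest" {
			return errors.New("unsupported platform")
		}
		return nil
	}
	loadAll(imgs, unpack, func(s string) { fmt.Println(s) })
}
```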
Sebastiaan van Stijn
b27de4ef16 Merge pull request #48369 from vvoland/48367-27.x
[27.x backport] c8d/list: Fix race condition when traversing containers
2024-08-26 13:04:26 +02:00
Paweł Gronowski
5002faebe8 integration/TestAPIImagesListManifests: Check Containers
Verify that the ImageData.Containers contains the ID of the container
using that image.

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 55f693e7b7)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-08-26 10:41:22 +02:00
Paweł Gronowski
a15a309832 c8d/list: Update benchmark to also have containers
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 29a2f6d339)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-08-26 10:41:21 +02:00
Paweł Gronowski
fd5cede287 c8d/list: Fix race condition when traversing containers
Use a regular for loop instead of ApplyAll which spawns a separate
goroutine for each separate container.

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit a5d75f6d27)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-08-26 10:41:19 +02:00
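
The race fix above amounts to iterating sequentially instead of fanning out one goroutine per container; a minimal sketch with hypothetical names (not the actual daemon/containerd code):

```go
package main

import "fmt"

type summary struct {
	Containers int64
}

// countUsers walks the container list sequentially, so the write to the
// shared summary needs no synchronization. The replaced approach spawned
// one goroutine per container (ApplyAll-style), which raced on
// s.Containers unless every write was guarded.
func countUsers(containerImageIDs []string, imageID string) summary {
	var s summary
	for _, id := range containerImageIDs {
		if id == imageID {
			s.Containers++
		}
	}
	return s
}

func main() {
	ids := []string{"sha256:aaa", "sha256:bbb", "sha256:aaa"}
	fmt.Println(countUsers(ids, "sha256:aaa")) // {2}
}
```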
Sebastiaan van Stijn
c050bc3739 Merge pull request #48364 from austinvazquez/cherry-pick-3cd28504dec017ef38f1a7abc141a493b9319757-to-27.x
[27.x backport] govulncheck to report known vulnerabilities
2024-08-23 22:18:00 +02:00
Sebastiaan van Stijn
de22458d0f Merge pull request #48363 from austinvazquez/cherry-pick-c4ba1f47187fb77646d906c512084a185036fd51-to-27.x
[27.x backport] Dockerfile: update xx to v1.5.0
2024-08-23 22:17:23 +02:00
CrazyMax
65c4e49aff govulncheck to report known vulnerabilities
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
(cherry picked from commit 3cd28504de)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-08-22 13:57:15 +00:00
Sebastiaan van Stijn
7ebb277873 Dockerfile: update xx to v1.5.0
full diff: https://github.com/tonistiigi/xx/compare/v1.4.0...v1.5.0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit c4ba1f4718)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-08-22 13:46:45 +00:00
Paweł Gronowski
9942d656ba Merge pull request #48346 from vvoland/47526-27.x
[27.x backport] c8d: Multi-platform image list
2024-08-16 18:47:36 +02:00
Paweł Gronowski
ad5eb875d4 c8d/list: Don't require opts.ContainerCount for manifest containers
The `GET /images/json` endpoint accepts an optional `container-count`
parameter which sets the `Containers` property in the ImageSummary to the
number of containers using that image.

This was also propagated to the new manifest list property which
includes a list of all the container IDs that are using this specific
image manifest.

Disconnect the `ImageData.Containers` property from this option and
always include it by default without an explicit opt-in.

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit b93cf37dcd)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-08-16 17:28:44 +02:00
Paweł Gronowski
3d845e0e8c c8d/list: Add test for total and content size
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 495fab8e66)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-08-16 17:28:42 +02:00
Paweł Gronowski
3563a707d0 c8d/list: Fix Total size calculation
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 469c2ef3ec)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-08-16 17:28:41 +02:00
Paweł Gronowski
89757f83ff api/list: Expose manifests
Add `Manifests` field to `ImageSummary` which exposes all image
manifests (which includes other blobs using the image media type, like
buildkit attestations).

There's also a new `manifests` query field that needs to be set in order
for the response to contain the new information.

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 050afe1e1a)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-08-16 17:28:40 +02:00
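
A sketch of how a client can opt in to the new field, assuming the daemon listens on the default unix socket; the `manifests` query parameter is the one named above, and `1` is used as its boolean value:

```go
package main

import (
	"context"
	"fmt"
	"io"
	"net"
	"net/http"
)

func main() {
	// Talk to the local daemon over its default unix socket; the
	// "manifests=1" query parameter opts in to the new
	// ImageSummary.Manifests field.
	client := &http.Client{
		Transport: &http.Transport{
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				return (&net.Dialer{}).DialContext(ctx, "unix", "/var/run/docker.sock")
			},
		},
	}
	resp, err := client.Get("http://localhost/v1.47/images/json?manifests=1")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
}
```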
Paweł Gronowski
bb2fec6425 api: Bump default version to 1.47
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 85e9102dc9)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-08-16 17:28:39 +02:00
Paweł Gronowski
0f8fcec1d9 swagger: Disable ImageSummary model generation
Our version of go-swagger doesn't handle the `omitempty` correctly for
the new field.

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit efb3c50799)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-08-16 17:28:37 +02:00
Paweł Gronowski
1a342adda7 Merge pull request #48344 from vvoland/48324-27.x
[27.x backport] fix deprecation comments, and update some godoc
2024-08-16 16:59:01 +02:00
Sebastiaan van Stijn
1ec5e86154 api/types/registry: fix godoc, and add some doc-links
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit e77e543b58)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-08-16 12:56:26 +02:00
Sebastiaan van Stijn
62f32e9a97 plugin: fix deprecation comments
These must have whitespace before them, otherwise they are ignored.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 218c08b283)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-08-16 12:56:24 +02:00
Sebastiaan van Stijn
68484b732a Merge pull request #48341 from tonistiigi/v27-update-buildkit-v0.15.2
[27.x] vendor: update buildkit to v0.15.2
2024-08-15 21:50:09 +02:00
Tonis Tiigi
830c76c6f2 vendor: update buildkit to v0.15.2
Also brings in fix for moby/buildkit#5242

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit c459986399)
2024-08-15 18:42:29 +03:00
Sebastiaan van Stijn
8f969bf61c Merge pull request #48332 from vvoland/48281-27.x
[27.x backport] Migrate per-endpoint sysctls until 28.0.0
2024-08-15 10:39:38 +02:00
Sebastiaan van Stijn
290663ede5 Merge pull request #48333 from vvoland/48081-27.x
[27.x backport] do another run of gofumpt
2024-08-15 10:16:52 +02:00
Sebastiaan van Stijn
354bf75675 libcontainerd: gofumpt
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 080a8e1b6b)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-08-14 19:33:42 +02:00
Sebastiaan van Stijn
4ab7d90669 pkg/plugins: gofumpt
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 56fa45773f)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-08-14 19:33:41 +02:00
Sebastiaan van Stijn
c11b2d9c7d pkg/archive: gofumpt
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 0e2d40c24a)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-08-14 19:33:40 +02:00
Sebastiaan van Stijn
ccdc79d55a libnetwork: gofumpt
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 84e43da752)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-08-14 19:33:39 +02:00
Sebastiaan van Stijn
35b1a30028 layer: gofumpt
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit bb1b766ddb)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-08-14 19:33:37 +02:00
Sebastiaan van Stijn
9f63aa7435 internal: gofumpt
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 07469b4509)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-08-14 19:33:36 +02:00
Sebastiaan van Stijn
4d16ac993e integration: gofumpt
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 8e50a96a78)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-08-14 19:33:34 +02:00
Sebastiaan van Stijn
6d5266a650 integration-cli: gofumpt
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit c3ac7fee26)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-08-14 19:33:32 +02:00
Sebastiaan van Stijn
4084dac566 daemon: gofumpt
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit e765dd90ee)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-08-14 19:33:30 +02:00
Sebastiaan van Stijn
c36ab4c2ca daemon/containerd: gofumpt
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 181101c4a8)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-08-14 19:33:29 +02:00
Sebastiaan van Stijn
904867593b daemon/config: gofumpt
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 46b0102da4)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-08-14 19:33:11 +02:00
Sebastiaan van Stijn
72876770d0 builder: gofumpt
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 4a89963f1e)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-08-14 19:33:10 +02:00
Sebastiaan van Stijn
e8109ee4da api/types: gofumpt
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 8768145519)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-08-14 19:33:08 +02:00
Sebastiaan van Stijn
ed65e1224e api/server: gofumpt
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 92346bcec6)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-08-14 19:33:06 +02:00
Rob Murray
d54aff9312 API 1.46: end per-interface sysctl migration in major release
Rather than in API 1.47.

Signed-off-by: Rob Murray <rob.murray@docker.com>
(cherry picked from commit a86a9e3aa4)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-08-14 19:29:23 +02:00
Rob Murray
8f12906274 Migrate per-endpoint sysctls until 28.0.0
Commit 0071832226 introduced
per-endpoint sysctls, and migration to them from the top-level
'--sysctl' option.

The migration was intended to be short-term and disabled in the
next major release, so code was added to check for the next
API version. But now the API version will be bumped in a
minor release - this breaking change needs to wait until the
next major release, and we don't yet know the API version
number for that.

Signed-off-by: Rob Murray <rob.murray@docker.com>
(cherry picked from commit 17adc1478b)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-08-14 19:29:19 +02:00
Albin Kerouanton
5955778fe3 Merge pull request #48326 from robmry/backport-27.2/64bit_iprange_fix
[27.2 backport] Allow 64-bit --ip-range
2024-08-14 09:24:04 +02:00
Rob Murray
c53be2f3d5 Allow --ip-range ending on a 64-bit boundary
When defaultipam.newPoolData is asked for a pool of 64 bits
or more, the pool size overflows a uint64 - so, it just
subtracts one to get a nearly-big-enough range (for a 64-bit
subnet).

When defaultipam.getAddress is called with an ipr (sub-pool
range), the range it passes to bitmask.SetAnyInRange is
exclusive of end. So, its end param can't be MaxUint64,
because that is already the maximum value for the top end of
the range, and the range check in SetAnyInRange fails.

Once fixed-cidr-v6 behaves more like fixed-cidr, it will ask
for a 64-bit range when that's what it needs. So, it hits the
bug when allocating an address for, for example:

  docker network create --ipv6 --subnet fddd::/64 --ip-range fddd::/64 b46

The additional check for "ipr == base" avoids the issue in
this case, by ignoring the ipr/sub-pool range if ipr is the
same as the pool itself (not really a sub-pool).

But, it still fails when ipr!=base. For example:

  docker network create --ipv6 --subnet fddd::/56 --ip-range fddd::/64 b46

So, also subtract one from 'end' if it's going to hit the max
value allowed by the Bitmap.

Signed-off-by: Rob Murray <rob.murray@docker.com>
(cherry picked from commit 496b457ad8)
Signed-off-by: Rob Murray <rob.murray@docker.com>
2024-08-13 15:40:48 +01:00
Sebastiaan van Stijn
f9522e5e96 Merge pull request #48315 from vvoland/48169-27.x
[27.x backport] rm regexp use
2024-08-10 15:59:46 +02:00
Sebastiaan van Stijn
a037b7250c Merge pull request #48314 from vvoland/48275-27.x
[27.x backport] api/swagger: fix x-nullable for SystemInfo.Containerd (api v1.46)
2024-08-09 17:18:14 +02:00
Kir Kolyshkin
fc0150b962 daemon/containerd: rm use of regexp
Replace the regexp check with a function.

Keep the use of regexp.QuoteMeta.

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit 508939821b)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-08-09 10:08:05 +02:00
Kir Kolyshkin
73c01d0b6a image/v1: rm regexp use
Replace the regexp checking ID validity with a for loop.

The benefits are:
 - faster (up to 10x faster, with fewer allocations);
 - no init overhead to compile the regexp.

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit b66d4b567a)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-08-09 10:08:04 +02:00
Kir Kolyshkin
c93fe4a27d layer: rm regexp use
Replace the regexp checking ID validity with a function. The benefits
are:

 - the function is faster (up to 10x faster, with fewer allocations);
 - no init overhead to compile the regexp.

Add a test case.

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit 1c0dc8a94f)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-08-09 10:08:02 +02:00
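
The three "rm regexp use" commits above share the same shape; a sketch of such a check, assuming a 64-character lowercase-hex ID (the exact rules in image/v1 and layer may differ):

```go
package main

import "fmt"

// validateID sketches the kind of check these commits describe: a plain
// loop over the string replaces a `^[a-f0-9]{64}$`-style regexp, avoiding
// both the package-init compilation cost and per-call allocations.
// (Illustrative only; not the actual daemon code.)
func validateID(id string) bool {
	if len(id) != 64 {
		return false
	}
	for _, c := range id {
		if (c < '0' || c > '9') && (c < 'a' || c > 'f') {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(validateID("0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef")) // true
	fmt.Println(validateID("not-an-id"))                                                        // false
}
```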
Sebastiaan van Stijn
31459c8268 docs/api: swagger: fix x-nullable for SystemInfo.Containerd (api v1.46)
This field was added in 812f319a57, but it
looks like redoc doesn't like the field in this location, producing a
warning.

Rendering the docs (`make swagger-docs`) showed a warning:

> Warning: Other properties are defined at the same level as $ref at
> "#/definitions/SystemInfo/properties/Containerd". They are IGNORED
> according to the JsonSchema spec

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit c7dec1c67a)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-08-09 10:00:24 +02:00
Sebastiaan van Stijn
35d430c62e api/swagger: fix x-nullable for SystemInfo.Containerd
This field was added in 812f319a57, but it
looks like redoc doesn't like the field in this location, producing a
warning.

Rendering the docs (`make swagger-docs`) showed a warning:

> Warning: Other properties are defined at the same level as $ref at
> "#/definitions/SystemInfo/properties/Containerd". They are IGNORED
> according to the JsonSchema spec

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 66b5b8bfa8)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-08-09 10:00:21 +02:00
Sebastiaan van Stijn
f5fa0908ef Merge pull request #48308 from thaJeztah/27.x_backport_migrate_userns
[27.x backport] migrate to github.com/moby/sys/userns
2024-08-08 12:34:48 +02:00
Sebastiaan van Stijn
a17f5d4f10 Merge pull request #48294 from austinvazquez/cherry-pick-2b5ffa0b63c76e8bb4ebb253d7e4db5c7af918c0-to-27.x
[27.x backport] gha: set permissions to read-only by default
2024-08-08 11:59:30 +02:00
Sebastiaan van Stijn
80a59c2f1a migrate to github.com/moby/sys/userns
Commit 2ce811e632 migrated the use of the
userns package to the github.com/moby/sys/user module.

After further discussion with maintainers, it was decided to move the
userns package to a separate module, as it has no direct relation with
"user" operations (other than having "user" in its name).

This patch migrates our code to use the new module.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 7b0ef10a9a)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-08-08 11:04:52 +02:00
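
For consumers, the migration is essentially an import change; a minimal sketch using the new module:

```go
package main

import (
	"fmt"

	"github.com/moby/sys/userns"
)

func main() {
	// Callers switch their import from runc/libcontainer (or containerd)
	// to the standalone module; the check itself stays the same.
	if userns.RunningInUserNS() {
		fmt.Println("running inside a user namespace")
	} else {
		fmt.Println("not in a user namespace")
	}
}
```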
Tianon Gravi
95db7055cc Merge pull request #48301 from vvoland/48300-27.x
[27.x backport] update to go1.21.13
2024-08-07 16:16:41 -07:00
Paweł Gronowski
e7fe276c00 update to go1.21.13
- https://github.com/golang/go/issues?q=milestone%3AGo1.21.13+label%3ACherryPickApproved
- full diff: https://github.com/golang/go/compare/go1.21.12...go1.21.13

go1.21.13 (released 2024-08-06) includes fixes to the go command, the
covdata command, and the bytes package. See the Go 1.21.13 milestone on
our issue tracker for details.

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit b24c2e95e5)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-08-07 11:45:54 +02:00
Sebastiaan van Stijn
e8cd19e810 gha: set permissions to read-only by default
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 2b5ffa0b63)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-08-06 15:55:59 +00:00
Paweł Gronowski
45d37a0ca9 Merge pull request #48246 from vvoland/48239-27.x
[27.1 backport] vendor: update buildkit to v0.15.1
2024-07-26 18:11:02 +02:00
CrazyMax
e0c52e0ba6 vendor: update buildkit to v0.15.1
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
(cherry picked from commit 1baf8f9e60)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-07-26 16:29:21 +02:00
Paweł Gronowski
b9be986e35 Merge pull request #48245 from thaJeztah/27.1_backport_buildkit_fix_grpc_control_api_sizes
[27.1 backport] api/server/router/grpc: NewRouter: set correct MaxRecvMsgSize, MaxSendMsgSize
2024-07-26 16:10:33 +02:00
Sebastiaan van Stijn
efb67b16b0 api/server/router/grpc: NewRouter: set correct MaxRecvMsgSize, MaxSendMsgSize
[buildkit@29b4b1a537][1] applied changes to `buildkitd` to set the correct
defaults (16MB) where it previously used the library defaults. Without that
change, builds using large Dockerfiles would fail with a `ResourceExhausted`
error;

    => [internal] load build definition from Dockerfile
     => => transferring dockerfile: 896.44kB
    ERROR: failed to receive status: rpc error: code = ResourceExhausted desc = grpc: received message larger than max (44865299 vs. 16777216)

However those changes were applied to the `buildkitd` code, which is the
daemon when running BuildKit standalone (or in a container through the
`container` driver). When running a build with the BuildKit builder compiled
into the Docker Engine, that code is not used, so the BuildKit changes did
not fix the issue.

This patch applies the same changes as were made in [buildkit@29b4b1a537][1]
to the gRPC endpoint provided by the dockerd daemon.

[1]: 29b4b1a537

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit cdbfae1d3e)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-07-26 14:28:27 +02:00
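
A sketch of the kind of gRPC server options involved (not the actual router code); 16 MiB matches the 16777216 limit in the error message above:

```go
package main

import "google.golang.org/grpc"

func main() {
	// Raise the send/receive limits from the gRPC library default (4 MiB)
	// to 16 MiB so large Dockerfiles don't trigger ResourceExhausted.
	const maxMsgSize = 16 * 1024 * 1024
	srv := grpc.NewServer(
		grpc.MaxRecvMsgSize(maxMsgSize),
		grpc.MaxSendMsgSize(maxMsgSize),
	)
	_ = srv
}
```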
Paweł Gronowski
741e23b913 Merge pull request #48238 from thaJeztah/27.1_backport_migrate_userns
[27.1 backport] migrate to github.com/moby/sys/user/userns
2024-07-26 10:05:53 +02:00
Sebastiaan van Stijn
f96e26f68d migrate to github.com/moby/sys/user/userns
The userns package in libcontainer was integrated into the moby/sys/user
module at commit [3778ae603c706494fd1e2c2faf83b406e38d687d][1].

The userns package is used in many places, and currently either depends
on runc/libcontainer, or on containerd, both of which have a complex
dependency tree. This patch is part of a series of patches to unify the
implementations, and to migrate toward that implementation to simplify
the dependency tree.

[1]: 3778ae603c

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 2ce811e632)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-07-25 14:15:19 +02:00
Sebastiaan van Stijn
78b59867f2 vendor: github.com/moby/sys/user v0.2.0
full diff: https://github.com/moby/sys/compare/user/v0.1.0...user/v0.2.0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 91dfc326cf)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-07-25 14:15:14 +02:00
Akihiro Suda
7d861e889c Merge pull request #48235 from thaJeztah/27.1_backport_vendor_flock
[27.1 backport] vendor: github.com/gofrs/flock v0.12.1
2024-07-25 17:49:46 +09:00
Sebastiaan van Stijn
17e1108324 Merge pull request #48228 from vvoland/47789-27.0
[27.0 backport] hack: explicitly control enabling the journald logging driver
2024-07-25 01:39:49 +02:00
Sebastiaan van Stijn
018137b01a vendor: github.com/gofrs/flock v0.12.1
- fix: missing read-write flag in reopenFDOnError
  fixes a regression that could result in an `ERROR: bad file descriptor`.

b659e1e00a
introduced a regression where `f.flag` would not be in read-write mode
[1] but read-only [2], which breaks people using the NFS protocol.

[1]: b659e1e00a (diff-87c2c4fe0fb43f4b38b4bee45c1b54cfb694c61e311f93b369caa44f6c1323ffR192)
[2]: b659e1e00a (diff-22145325dded38eb5288ed3321a113d8260ccc70747ee04d4551bfd2fba975fdR69)

full diff: https://github.com/gofrs/flock/compare/v0.12.0...v0.12.1

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 1e2ccf8046)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-07-25 00:45:40 +02:00
Sebastiaan van Stijn
650e06ac75 vendor: golang.org/x/sys v0.22.0
full diff: https://github.com/golang/sys/compare/v0.21.0...v0.22.0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 077b32ac4e)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-07-25 00:45:40 +02:00
Akihiro Suda
7f5494dc97 Merge pull request #48233 from AkihiroSuda/cherrypick-48216-27
[27.x backport] dockerd-rootless-setuptool.sh: move RootlessKit smoke test
2024-07-25 02:53:28 +09:00
Akihiro Suda
bfe5339c7e dockerd-rootless-setuptool.sh: move RootlessKit smoke test
`dockerd-rootless-setuptool.sh check` now skips the smoke test for
running RootlessKit.

Fix docker/docker-install issue 417

Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
(cherry picked from commit e2237240f5)
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
2024-07-25 00:37:02 +09:00
Paweł Gronowski
a4046c4ca0 Merge pull request #48221 from thaJeztah/27.1_backport_readme_commercial_support
[27.1 backport] README: replace obsolete Docker EE mention
2024-07-24 11:48:07 +02:00
William Hubbs
99471ac2fe hack: explicitly control enabling the journald logging driver
Without this, the dependency on systemd is "automagic", which
can lead to breakage: for example, if a binary package of docker is
built on a system that has systemd installed and then installed on a
system that does not have systemd.

for example: https://bugs.gentoo.org/914076

Signed-off-by: William Hubbs <w.d.hubbs@gmail.com>
(cherry picked from commit 499c842c52)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-07-24 10:56:45 +02:00
Cory Snider
b9b43b3bdf README: replace obsolete Docker EE mention
Docker EE is no more. Point users looking for commercial support at the
currently-maintained commercial products based on the Moby project:
Docker Desktop and Mirantis Container Runtime.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit b37c983d31)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-07-23 22:05:49 +02:00
Sebastiaan van Stijn
cc13f95251 Merge commit from fork
[27.0] AuthZ plugin security fixes
2024-07-23 21:36:28 +02:00
Sebastiaan van Stijn
a21b1a2d12 Merge pull request #48196 from thaJeztah/27.1_backport_vendor_containerd_1.7.20
[27.1 backport] vendor: github.com/containerd/containerd v1.7.20
2024-07-19 16:42:08 +02:00
Sebastiaan van Stijn
1bc907c97c vendor: github.com/containerd/containerd v1.7.20
full diff: https://github.com/containerd/containerd/compare/v1.7.19...v1.7.20

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 55a5f3fcaa)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-07-19 15:40:23 +02:00
Sebastiaan van Stijn
4bb4575ffb Merge pull request #48191 from thaJeztah/27.1_backport_update_containerd_binary_1.7.20
[27.1 backport] update containerd binary to v1.7.20
2024-07-19 13:44:08 +02:00
Sebastiaan van Stijn
df7f275db6 Merge pull request #48195 from thaJeztah/27.1_backport_fix_pr_title_check
[27.1 backport] gha: check-pr-branch: fix branch check regression
2024-07-19 12:34:25 +02:00
Sebastiaan van Stijn
1c0885d60d gha: check-pr-branch: fix branch check regression
This check was updated in f460110ef5, but
introduced some bugs;

- the regular expressions were meant to define a capturing group, but
  the parentheses (`(`, `)`) were escaped (they were previously used by
  `sed`, which requires different escaping), so no value was captured.
- the check itself was not updated to use the resulting `$target_branch`
  env-var, so it was comparing against `$GITHUB_BASE_REF` (which is
  the branch name before stripping minor versions).

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit e0b98a3222)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-07-19 12:29:22 +02:00
Paweł Gronowski
fb3ec9fc96 Merge pull request #48187 from thaJeztah/27.1_backport_bump_buildx_compose
[27.0 backport] Dockerfile: update buildx to v0.16.1, compose to v2.29.0
2024-07-19 10:05:14 +02:00
Sebastiaan van Stijn
ed83a9e3a1 update containerd binary to v1.7.20
Update the containerd binary that's used in CI and for the static packages.

release notes: https://github.com/containerd/containerd/releases/tag/v1.7.20
full diff: https://github.com/containerd/containerd/compare/v1.7.19...v1.7.20

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit fbbda057ac)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-07-19 02:35:39 +02:00
Sebastiaan van Stijn
71b59bf442 Merge pull request #48178 from thaJeztah/27.1_backport_relax_pr_check
[27.1 backport] gha: check-pr-branch: verify major version only
2024-07-19 02:26:06 +02:00
Sebastiaan van Stijn
f8f926b719 Merge pull request #48185 from thaJeztah/27.1_backport_internalize_pkg_directory
[27.0 backport] deprecate packages that are to be removed in the next release
2024-07-19 02:06:21 +02:00
Sebastiaan van Stijn
422ef48c2f gha: check-pr-branch: verify major version only
We'll be using release branches for minor version updates, so instead
of (e.g.) a 27.0 branch, we'll use a 27.x branch and keep using it for
subsequent minor releases.

This patch changes the validation step to only compare against the
major version.

Co-authored-by: Cory Snider <corhere@gmail.com>
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit f460110ef5)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-07-19 01:21:30 +02:00
Sebastiaan van Stijn
c9d37a9198 [27.1] pkg/rootless/specconv: deprecate, and add temporary aliases
There are no (known) external consumers of this, but let's add a
deprecation for the 27.1 release.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-07-19 00:23:23 +02:00
Sebastiaan van Stijn
1f16a44b3d pkg/rootless/specconv: move to internal
This package is only used by the daemon, so move it to the internal
rootless package instead.

Note that technically this could be in daemon/internal, but as there's
already an existing internal/rootless package (which needs to be in the
top-level internal package because it's also used by /plugin), I'm moving
it there.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit efdaca2792)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-07-19 00:23:23 +02:00
Sebastiaan van Stijn
c8f1317585 pkg/directory: deprecate, and move to internal
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 3a3bb1cb50)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-07-19 00:23:23 +02:00
Sebastiaan van Stijn
68587c38fe pkg/directory: fix comment, and remove import comments
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 80900bdbcd)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-07-19 00:23:23 +02:00
Sebastiaan van Stijn
d1ea2b1fec [27.1] pkg/containerfs: deprecate, and add temporary aliases
There are no (known) external consumers of this, but let's add a
deprecation for the 27.1 release.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-07-19 00:23:23 +02:00
Sebastiaan van Stijn
31c1b7dc17 pkg/containerfs: move to internal
The only external consumers are the `graphdriver` and `graphdriver/shim`
packages in github.com/docker/go-plugins-helpers, which depended on
[ContainerFS][1], which was removed in 9ce2b30b81.

graphdriver-plugins were deprecated in 6da604aa6a,
and support for them removed in 555dac5e14,
so removing this should not be an issue.

Ideally this package would've been moved inside `daemon/internal`, but it's used
by the `daemon` (cleanupContainer) and `plugin` packages, and by `graphdrivers`,
so it needs to be in the top-level `internal/` package.

[1]: 6eecb7beb6/graphdriver/api.go (L218)

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit f2970e5358)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-07-19 00:23:22 +02:00
Sebastiaan van Stijn
6231ea4a34 pkg/containerfs: cleanup GoDoc, and make Windows a proper wrapper
- Improve some GoDoc to use docs links
- Change the Windows stub to an actual wrapper function, as we don't
  want it to be updateable, and it currently shows as "variable" on
  pkg.go.dev, which is confusing.
- Remove "import" comments in preparation of moving this package

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit a3e6ce95c4)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-07-19 00:23:22 +02:00
Sebastiaan van Stijn
dc33eb81d8 pkg/containerfs: remove CleanScopedPath and make it internal
The container package is the only consumer of this function in our code,
and there are no known external users:
https://grep.app/search?q=.CleanScopedPath%28&filter[lang][0]=Go

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit e2ae6907c6)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-07-19 00:23:22 +02:00
Sebastiaan van Stijn
51433d65c0 Merge pull request #48184 from thaJeztah/27.1_backport_rm_deprecated_executiondriver
[27.0 backport] api/types/system: remove Info.ExecutionDriver
2024-07-19 00:11:22 +02:00
Sebastiaan van Stijn
f3bd9da62a Merge pull request #48183 from thaJeztah/27.1_backport_bump_google_deps
[27.0 backport] vendor: cloud.google.com/go/logging v1.9.0
2024-07-19 00:10:49 +02:00
Sebastiaan van Stijn
bc6ae42031 Dockerfile: update compose to v2.29.0
This is the version used in the dev-container, and for testing.

release notes: https://github.com/docker/compose/releases/tag/v2.29.0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit a42f7fd717)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-07-19 00:10:16 +02:00
Sebastiaan van Stijn
af8866f324 Dockerfile: update buildx to v0.16.1
This is the version used in the dev-container, and for testing.

release notes:
https://github.com/docker/buildx/releases/tag/v0.16.1

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 97b51c6b72)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-07-19 00:10:15 +02:00
Sebastiaan van Stijn
5e4ddd81a2 api/types/system: remove Info.ExecutionDriver
The execution-driver was replaced with containerd in docker 1.11 (API
v1.23) in 9c4570a958, after which the value
was no longer set. The field was left in the type definition.
Commit 1fb1136fec removed its use from the
CLI and [docker/engine-api@39c7d7e] removed it from the API type, followed
by an update to the API docs in 3c6ef4c29d.

Changes to the API types were not pulled into the engine until v1.13, and
probably because of that it was gated on API version < 1.25 instead of < 1.24
(see 6d98e344c7), setting a "not supported"
value for older versions.

Based on the above, this field was deprecated in API v1.23 and empty
since then. Given that the minimum API version supported by the engine
is now v1.24, we can safely remove it.

[docker/engine-api@39c7d7e]: 39c7d7ec19

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit e4d792a06d)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-07-18 23:08:18 +02:00
Sebastiaan van Stijn
147eaae6b7 Merge pull request #48181 from vvoland/48156-27.0
[27.0 backport] Fix API version in TestSetInterfaceSysctl
2024-07-18 22:58:35 +02:00
Sebastiaan van Stijn
c7e4d181a1 vendor: cloud.google.com/go/logging v1.9.0
removes dependency on appengine, among others

full diff: https://github.com/googleapis/google-cloud-go/compare/logging/v1.0.1...logging/v1.9.0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 0fa71a4cfc)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-07-18 22:32:11 +02:00
Sebastiaan van Stijn
3d7e824bc2 vendor: golang.org/x/oauth2 v0.21.0
removes dependency on appengine, among others

full diff: https://github.com/golang/oauth2/compare/v0.11.0...v0.21.0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit eafad2cb86)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-07-18 22:32:11 +02:00
Sebastiaan van Stijn
d66b76d2e6 vendor: cloud.google.com/go/compute/metadata v0.3.0
full diff: https://github.com/googleapis/google-cloud-go/compare/compute/metadata/v0.2.3...compute/metadata/v0.3.0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 9b782b8ff7)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-07-18 22:32:11 +02:00
Sebastiaan van Stijn
0e678a85d7 Merge pull request #48182 from vvoland/48078-27.0
[27.0 backport] c8d/build: Log `image tag` event when image was built with Buildkit
2024-07-18 21:47:07 +02:00
Brian Goff
3db1544179 Merge pull request #48175 from thaJeztah/27.1_backports
[27.0 backport] vendor: update buildkit to v0.15.0
2024-07-18 19:46:04 +00:00
Paweł Gronowski
03dc388f63 c8d/build: Log image tag event when image was built with Buildkit
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 53bc396ef4)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-07-18 17:46:34 +02:00
Paweł Gronowski
5ee23b6050 builder-next: Add ImageNamedByBuildkit callback
When an image is built with BuildKit with the containerd integration
enabled, the image service has no way of knowing that the image was
tagged, because BuildKit creates the image directly in the containerd
image store.

Add a callback that is called by the exporter wrapper.

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 1506bbcfe8)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-07-18 17:46:32 +02:00
Paweł Gronowski
53c521a6b2 builder-next: Don't return error from exported callback
This is only a callback that notifies about an event, so there is no way
to react to the error.

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit bce76d486e)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-07-18 17:46:29 +02:00
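
A minimal sketch of the callback wiring described in the two commits above; all names are hypothetical stand-ins for the image service and the exporter wrapper:

```go
package main

import "fmt"

// imageService stands in for the containerd-backed image service: it only
// learns about tags that BuildKit creates directly in the containerd image
// store through a notification callback.
type imageService struct{}

func (s *imageService) logTagEvent(ref string) {
	fmt.Println("image tag event:", ref)
}

// exporterWrapper stands in for the exporter wrapper that invokes the
// callback after the image has been written. The callback returns nothing,
// because the caller has no way to react to an error anyway.
type exporterWrapper struct {
	imageNamed func(ref string)
}

func (e *exporterWrapper) export(ref string) {
	// ... image is created in the containerd image store here ...
	if e.imageNamed != nil {
		e.imageNamed(ref)
	}
}

func main() {
	svc := &imageService{}
	w := &exporterWrapper{imageNamed: svc.logTagEvent}
	w.export("example:latest")
}
```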
Rob Murray
eccccd7577 Fix API version in TestSetInterfaceSysctl
The test checks that it's possible to set a per-interface sysctl
using '--sysctl' - but, after API v1.46, it's not (and driver option
'com.docker.network.endpoint.sysctls' must be used instead).

Test added in commit fde80fe2.
Per-interface sysctls were added, with API changes, in commit 00718322.

Signed-off-by: Rob Murray <rob.murray@docker.com>
(cherry picked from commit f649fd0c97)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-07-18 17:34:58 +02:00
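
A sketch of the replacement from the API side, using the driver-option key named in the commit above; the sysctl value and the `IFNAME` placeholder are illustrative assumptions:

```go
package main

import (
	"fmt"

	"github.com/docker/docker/api/types/network"
)

func main() {
	// From API v1.46, per-interface sysctls are passed as an endpoint
	// driver option rather than via the container-wide --sysctl flag.
	// The key is the one referenced in the test above; the value format
	// (including the IFNAME placeholder) is an illustrative assumption.
	ep := &network.EndpointSettings{
		DriverOpts: map[string]string{
			"com.docker.network.endpoint.sysctls": "net.ipv4.conf.IFNAME.forwarding=1",
		},
	}
	fmt.Println(ep.DriverOpts)
}
```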
Sebastiaan van Stijn
d9e3d1b815 update containerd binary to v1.7.19
Update the containerd binary that's used in CI and for the static packages.

- release notes: https://github.com/containerd/containerd/releases/tag/v1.7.19
- full diff: https://github.com/containerd/containerd/compare/v1.7.18...v1.7.19

Welcome to the v1.7.19 release of containerd!

The nineteenth patch release for containerd 1.7 contains various updates and
splits the main module from the api module in preparation for the same change
in containerd 2.0. Splitting the modules will allow 1.7 and 2.x to both exist
as transitive dependencies without running into API registration errors.
Projects should use this version as the minimum 1.7 version in preparing to
use containerd 2.0 or to be imported alongside it.

Highlights

- Fix support for OTLP config
- Add API go module
- Remove overlayfs volatile option on temp mounts
- Update runc binary to v1.1.13
- Migrate platforms package to github.com/containerd/platforms
- Migrate reference/docker package to github.com/distribution/reference

Container Runtime Interface (CRI)

- Fix panic in NRI from nil CRI reference
- Fix Windows HPC working directory

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 398e15b7de)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-07-17 23:12:42 +02:00
Tonis Tiigi
b91e20cc2e vendor: update buildkit to v0.15.0
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit 68bd630830)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-07-17 23:00:07 +02:00
Tonis Tiigi
505be6557b vendor: update buildkit to v0.15.0-rc2
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit 89781912c1)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-07-17 22:59:33 +02:00
Tonis Tiigi
b1613dc2a1 vendor: update buildkit to v0.15.0-rc1
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit 1787c364e0)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-07-17 22:58:28 +02:00
Sebastiaan van Stijn
52f6163746 vendor: golang.org/x/net v0.25.0
full diff: https://github.com/golang/net/compare/v0.24.0...v0.25.0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 066b7fa83c)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-07-17 22:57:41 +02:00
Sebastiaan van Stijn
c70e404e9e vendor: golang.org/x/crypto v0.23.0
full diff: https://github.com/golang/crypto/compare/v0.22.0...v0.23.0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 7721408db7)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-07-17 22:57:34 +02:00
Sebastiaan van Stijn
d7a3f01421 vendor: golang.org/x/text v0.15.0
no changes in vendored files

full diff: https://github.com/golang/text/compare/v0.14.0...v0.15.0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit f43436e6b8)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-07-17 22:57:27 +02:00
Sebastiaan van Stijn
0f2f9e0049 vendor: golang.org/x/sys v0.21.0
full diff: https://github.com/golang/sys/compare/v0.19.0...v0.21.0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 342ce515ab)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-07-17 22:57:17 +02:00
Sebastiaan van Stijn
45a1c34202 vendor: github.com/klauspost/compress v1.17.9
full diff: https://github.com/klauspost/compress/compare/v1.17.4...v1.17.9

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 2e58a29023)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-07-17 22:57:10 +02:00
Sebastiaan van Stijn
7b31435cf8 Migrate to github.com/containerd/platforms module
Switch to use github.com/containerd/platforms module, because containerd's
platforms package has moved to a separate module. This allows updating the
platforms parsing independent of the containerd module itself.

The package in containerd is deprecated, but kept as an alias to provide
compatibility between codebases.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit d0aa3eaccf)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-07-17 22:52:58 +02:00
Sebastiaan van Stijn
99df4fee0b vendor: github.com/containerd/containerd v1.7.19
Highlights

- Fix support for OTLP config
- Add API go module
- Remove overlayfs volatile option on temp mounts
- Update runc binary to v1.1.13
- Migrate platforms package to github.com/containerd/platforms
- Migrate reference/docker package to github.com/distribution/reference

Container Runtime Interface (CRI)

- Fix panic in NRI from nil CRI reference
- Fix Windows HPC working directory

full diff: https://github.com/containerd/containerd/compare/v1.7.18...v1.7.19

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 8983957ac5)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-07-17 22:52:50 +02:00
Sebastiaan van Stijn
9f08d1e357 vendor: github.com/microsoft/hcsshim v0.11.7
- Fix process handle leak when launching a job container
- Add EndpointState attribute to the HNSEndpoint struct to support
  hyperv containers for k8s
- Add support for loadbalancer policy update in hns
- Changes for checking the global version for modify policy version support
- OutBoundNATPolicy Schema changes (add MaxPortPoolUsage to OutboundNatPolicySetting)

full diff: https://github.com/microsoft/hcsshim/compare/v0.11.5...v0.11.7

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit a2fe103f0d)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-07-17 22:52:28 +02:00
Jameson Hyde
d1bbb61844 If the URL includes a scheme, urlPath will drop the hostname, which would not match the auth check
Signed-off-by: Jameson Hyde <jameson.hyde@docker.com>
(cherry picked from commit 754fb8d9d03895ae3ab60d2ad778152b0d835206)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 5282cb25d0)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-07-15 18:46:29 +02:00
Jameson Hyde
0835eaa5a1 Authz plugin security fixes for 0-length content and path validation
Signed-off-by: Jameson Hyde <jameson.hyde@docker.com>

fix comments

(cherry picked from commit 9659c3a52bac57e615b5fb49b0652baca448643e)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 2ac8a479c5)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-07-15 18:46:27 +02:00
Akihiro Suda
73ce798d3b Merge pull request #48155 from vvoland/v27.0-48154
[27.0 backport] docs/api: Add missing `
2024-07-11 23:22:30 +09:00
Paweł Gronowski
b039de78d7 docs/api: Add missing `
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 7f04a603f6)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-07-11 14:28:18 +02:00
Sebastiaan van Stijn
7fa33a539a Merge pull request #48141 from AkihiroSuda/cherrypick-48134-27
[27.0 backport] rootless: add `Requires=dbus.socket`
2024-07-08 15:05:47 -05:00
Akihiro Suda
7d99ebe418 rootless: add Requires=dbus.socket
On a cgroup v2 host with systemd, dbus is needed to avoid the following error:
```
docker: Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed
: unable to start container process: unable to apply cgroup configuration: unable to start unit "docker-170a4183e351e69835b82cc3134b97c8cbb0e6d3a6
16d5a0fb0ea473075062ad.scope" (properties [{Name:Description Value:"libcontainer container 170a4183e351e69835b82cc3134b97c8cbb0e6d3a616d5a0fb0ea47
3075062ad"} {Name:Slice Value:"user.slice"} {Name:Delegate Value:true} {Name:PIDs Value:@au [2872]} {Name:MemoryAccounting Value:true} {Name:CPUAc
counting Value:true} {Name:IOAccounting Value:true} {Name:TasksAccounting Value:true} {Name:DefaultDependencies Value:false}]): Interactive authen
tication required.: unknown.
```

Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
(cherry picked from commit 206445fa4f)
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
2024-07-08 17:41:41 +09:00
Akihiro Suda
e7e0428218 Merge pull request #48122 from vvoland/v27.0-48120
[27.0 backport] update to go1.21.12
2024-07-04 10:57:45 +09:00
Paweł Gronowski
540b29c0c6 update to go1.21.12
- https://github.com/golang/go/issues?q=milestone%3AGo1.21.12+label%3ACherryPickApproved
- full diff: https://github.com/golang/go/compare/go1.21.11...go1.21.12

These minor releases include 1 security fix following the security policy:

net/http: denial of service due to improper 100-continue handling

The net/http HTTP/1.1 client mishandled the case where a server responds to a request with an "Expect: 100-continue" header with a non-informational (200 or higher) status. This mishandling could leave a client connection in an invalid state, where the next request sent on the connection will fail.

An attacker sending a request to a net/http/httputil.ReverseProxy proxy can exploit this mishandling to cause a denial of service by sending "Expect: 100-continue" requests which elicit a non-informational response from the backend. Each such request leaves the proxy with an invalid connection, and causes one subsequent request using that connection to fail.

Thanks to Geoff Franks for reporting this issue.

This is CVE-2024-24791 and Go issue https://go.dev/issue/67555.

View the release notes for more information:
https://go.dev/doc/devel/release#go1.21.12

**- Description for the changelog**

```markdown changelog
Update Go runtime to 1.21.12
```

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 4d1d7c3ebe)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-07-03 14:20:05 +02:00
Sebastiaan van Stijn
662f78c0b1 Merge pull request #48090 from thaJeztah/27.0_backport_48067_fix_specific_ipv6_portmap_proxy_to_ipv4
[27.0 backport] Fix incorrect validation of port mapping
2024-06-28 23:16:49 +02:00
Sebastiaan van Stijn
b86d9bdaf3 Merge pull request #48086 from thaJeztah/27.0_backport_fix_rootless_pull
[27.0 backport] daemon/graphdriver/overlay2: set TarOptions.InUserNS for native differ (fix "failed to Lchown "/dev/console")
2024-06-28 22:40:07 +02:00
Sebastiaan van Stijn
0dbc3ac59e Merge pull request #48087 from thaJeztah/27.0_backport_gofmt
[27.0 backport] fix some gofmt issues reported by goreportcard
2024-06-28 21:11:01 +02:00
Rob Murray
276a648ec3 Fix incorrect validation of port mapping
Regression introduced in 01eecb6.

A port mapping from a specific IPv6 host address can be used
by a container on an IPv4-only network, docker-proxy makes the
connection.
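
As a hypothetical illustration (not from the commit), using the github.com/docker/go-connections/nat types with made-up addresses, this is the shape of binding the fix re-enables: container port 80 published on a specific IPv6 host address, even when the container is only attached to an IPv4 network, with docker-proxy bridging the address families.

```go
package main

import (
	"fmt"

	"github.com/docker/go-connections/nat"
)

func main() {
	// A specific IPv6 host address mapped to a container port; the container
	// itself may sit on an IPv4-only network.
	bindings := nat.PortMap{
		"80/tcp": []nat.PortBinding{
			{HostIP: "2001:db8::1", HostPort: "8080"},
		},
	}
	fmt.Println(bindings)
}
```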

Signed-off-by: Rob Murray <rob.murray@docker.com>
(cherry picked from commit dfbcddb9f5)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-06-28 21:07:29 +02:00
Sebastiaan van Stijn
22aa07b28f Merge pull request #48089 from robmry/backport-27.0/48069_fix_overlapping_subnets
[27.0 backport] Fix duplicate subnet allocations
2024-06-28 18:26:59 +02:00
Rob Murray
23b8b023dd Fix duplicate subnet allocations
Keep allocated subnets in order, so that they're not mistakenly
reallocated due to a gap in the list where misplaced subnets should
have been.

Introduced in 9d288b5.

The iterator over allocated subnets was incremented too early; this
change moves the increment past three clauses in addrSpace.allocatePredefinedPool().
Each of the three new unit tests corresponds to a separate failure caused
by incrementing before one of those clauses.
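
A conceptual sketch (not the libnetwork code; names and CIDRs are made up) of the invariant this change restores: new allocations are inserted in sorted position, so a later scan for a free predefined pool never sees a misleading gap and never hands out a subnet that is already in use further down the list.

```go
package main

import (
	"fmt"
	"sort"
)

func main() {
	// Already-allocated subnets, kept sorted.
	allocated := []string{"10.0.1.0/24", "10.0.3.0/24"}

	// Insert a new allocation at its sorted position instead of appending,
	// so the list never contains a gap followed by a misplaced entry.
	add := "10.0.2.0/24"
	i := sort.SearchStrings(allocated, add)
	allocated = append(allocated[:i], append([]string{add}, allocated[i:]...)...)

	fmt.Println(allocated) // [10.0.1.0/24 10.0.2.0/24 10.0.3.0/24]
}
```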

(cherry picked from commit 4de54ee14c)
Signed-off-by: Rob Murray <rob.murray@docker.com>
2024-06-28 16:24:47 +01:00
Sebastiaan van Stijn
bf222d635b fix some gofmt issues reported by goreportcard
https://goreportcard.com/report/github.com/docker/docker

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 6ada1cff02)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-06-28 16:48:01 +02:00
Sebastiaan van Stijn
f8231b52d3 daemon/graphdriver/overlay2: set TarOptions.InUserNS for native differ
Commits b2fd67de77 (and the follow-up commit
f6b80253b8) updated doesSupportNativeDiff to
detect whether the host can support native overlay diffing with userns
enabled.

As a result, [useNaiveDiff] would now return "false" in cases where it
previously would return "true" (and thus skip). However, [overlay2],
unlike [fuse-overlay], did not take user namespaces into account when
using the native differ, and it therefore did not set the InUserNS option
in TarOptions.

As a result, pkg/archive.createTarFile would attempt to create [device-nodes]
through [handleTarTypeBlockCharFifo], which would fail; the resulting
`EPERM` error would be discarded, and `createTarFile` would not return
early, therefore attempting to [os.LChown] the missing file, ultimately
resulting in an error:

    failed to Lchown "/dev/console" for UID 0, GID 0: lchown /dev/console: no such file or directory

This patch fixes the missing option in overlay.

[useNaiveDiff]: 47eebd718f/daemon/graphdriver/overlay2/overlay.go (L248-L256)
[overlay2]: 47eebd718f/daemon/graphdriver/overlay2/overlay.go (L684-L689)
[fuse-overlay]: 47eebd718f/daemon/graphdriver/fuse-overlayfs/fuseoverlayfs.go (L456-L462)
[device-nodes]: ff1e2c0de7/pkg/archive/archive.go (L713-L720)
[handleTarTypeBlockCharFifo]: 47eebd718f/pkg/archive/archive_unix.go (L110-L114)
[os.LChown]: ff1e2c0de7/pkg/archive/archive.go (L762-L773)
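
A minimal sketch of what the fix amounts to, assuming moby's vendored pkg/archive and github.com/moby/sys/userns packages (this is not the literal patch): propagate the user-namespace state into the tar options, so the native differ skips device nodes instead of failing on them and then tripping over the missing file during Lchown.

```go
package main

import (
	"fmt"

	"github.com/docker/docker/pkg/archive"
	"github.com/moby/sys/userns"
)

func main() {
	// fuse-overlayfs already set this; overlay2 did not, which is what this
	// patch corrects.
	opts := &archive.TarOptions{
		InUserNS: userns.RunningInUserNS(),
	}
	fmt.Println("InUserNS:", opts.InUserNS)
}
```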

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 6521057bb2)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-06-28 16:31:54 +02:00
Sebastiaan van Stijn
b951474404 pkg/archive: createTarFile: consistently use the same value for userns
createTarFile accepts an opts (TarOptions) argument to specify whether
userns is enabled; we should consider always detecting it locally, but
at least make sure we're consistently working with the same value within
this function.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 969993a729)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-06-28 16:31:53 +02:00
Sebastiaan van Stijn
c5794e23ec pkg/archive: handleTarTypeBlockCharFifo: don't discard EPERM errors
This function was discarding EPERM errors if it detected that userns was
enabled; move such checks to the call site, so that callers can decide
how to handle the error (which, in the case of userns, may be to log and ignore it).
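
An illustrative sketch of the caller-side pattern this enables (not the moby code; the path and device numbers are made up), using golang.org/x/sys/unix and github.com/moby/sys/userns: the low-level call returns EPERM unchanged, and the caller decides that inside a user namespace it is safe to log and skip.

```go
package main

import (
	"errors"
	"log"

	"github.com/moby/sys/userns"
	"golang.org/x/sys/unix"
)

func main() {
	// Creating a character device node fails with EPERM inside a user
	// namespace (and for unprivileged users in general).
	err := unix.Mknod("/tmp/example-console", unix.S_IFCHR|0o600, int(unix.Mkdev(5, 1)))
	if err != nil {
		if errors.Is(err, unix.EPERM) && userns.RunningInUserNS() {
			// The caller, not the helper, chooses to log and continue.
			log.Printf("skipping device node: %v", err)
			return
		}
		log.Fatal(err)
	}
}
```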

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 379ce56cd8)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-06-28 16:31:53 +02:00
Sebastiaan van Stijn
02e24483be pkg/archive: getWhiteoutConverter: don't error with userns enabled
Since 838047a1f5, the overlayWhiteoutConverter
is supported with userns enabled, so we no longer need this check.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit af85e47343)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-06-28 16:31:53 +02:00
Sebastiaan van Stijn
b70040a8fc Merge pull request #48074 from vvoland/v27.0-48073
[27.0 backport] Dockerfile: update compose to v2.28.1, update cli to v27.0.2
2024-06-27 18:00:44 +02:00
Paweł Gronowski
838330bac3 Dockerfile: update docker CLI to v27.0.2
Update the Docker CLI used in the dev-container

full diff: https://github.com/docker/cli/compare/v26.1.0...v27.0.2

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 3928165cf7)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-06-27 14:55:28 +02:00
Paweł Gronowski
e419e22f29 Dockerfile: update compose to v2.28.1
Update the compose cli plugin used in the dev-container

full diff: https://github.com/docker/cli/compare/v2.27.1...v2.28.1

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 790035f754)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-06-27 14:55:26 +02:00
Paweł Gronowski
e953d76450 Merge pull request #48060 from thaJeztah/27.0_backport_api_deprecate_ContainerJSONBase_Node
[27.0 backport] api/types: deprecate ContainerJSONBase.Node, ContainerNode
2024-06-26 20:30:43 +02:00
Paweł Gronowski
861fde8cc9 Merge pull request #48061 from thaJeztah/27_backport_bump_golangci_lint
[27.0 backport] update golangci-lint to v1.59.1
2024-06-26 19:14:38 +02:00
Sebastiaan van Stijn
3557077867 update golangci-lint to v1.59.1
full diff: https://github.com/golangci/golangci-lint/compare/v1.55.2...v1.59.1

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 95fae036ae)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-06-26 14:09:41 +02:00
Sebastiaan van Stijn
c95b917167 pkg/archive: reformat code to make #nosec comment work again
Looks like the way it picks up #nosec comments changed, causing the
linter error to re-appear;

    pkg/archive/archive_linux.go:57:17: G305: File traversal when extracting zip/tar archive (gosec)
                    Name:       filepath.Join(hdr.Name, WhiteoutOpaqueDir),
                                ^
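
For context, a minimal sketch (not the actual change) of how a scoped #nosec suppression is written: gosec looks for the annotation on, or enclosing, the flagged code, so reformatting can detach it from the expression it was meant to cover.

```go
package main

import (
	"archive/tar"
	"fmt"
	"path/filepath"
)

func main() {
	hdr := &tar.Header{Name: "layer/dir"}
	// Keeping the annotation on the same line as the flagged expression is
	// what lets the linter pick it up.
	whiteout := tar.Header{
		Name: filepath.Join(hdr.Name, ".wh..wh..opq"), // #nosec G305 -- illustrative suppression
	}
	fmt.Println(whiteout.Name)
}
```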

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit d4160d5aa7)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-06-26 14:09:41 +02:00
Sebastiaan van Stijn
c0ff08acbd builder/remotecontext: reformat code to make #nosec comment work again
Looks like the way it picks up #nosec comments changed, causing the
linter error to re-appear;

    builder/remotecontext/remote.go:48:17: G107: Potential HTTP request made with variable url (gosec)
        if resp, err = http.Get(address); err != nil {
                       ^

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 04bf0e3d69)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-06-26 14:09:41 +02:00
Sebastiaan van Stijn
4587688258 api/types: deprecate ContainerJSONBase.Node, ContainerNode
The `Node` field and related `ContainerNode` type were used by the classic
(standalone) Swarm API. API documentation for this field was already removed
in 234d5a78fe (API 1.41 / docker 20.10), and
as the Docker Engine didn't implement these fields for the Swarm API, they
would always have been unset / nil.

Let's do a quick deprecation, and remove it on the next release.
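
A sketch of the Go deprecation convention being applied here, with an abbreviated, illustrative set of fields (the real type lives in api/types): a "Deprecated:" paragraph in the doc comment is what tooling and linters pick up.

```go
package types

// ContainerNode was only ever meaningful for the classic (standalone) Swarm API.
//
// Deprecated: the Docker Engine API never populates this information, and the
// type will be removed in the next release.
type ContainerNode struct {
	ID   string // illustrative subset of fields
	Name string
	Addr string
}
```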

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 1fc9236119)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-06-26 14:05:47 +02:00
4051 changed files with 142147 additions and 279984 deletions

View File

@@ -19,14 +19,11 @@ Please provide the following information:
**- How to verify it**
**- Human readable description for the release notes**
**- Description for the changelog**
<!--
Write a short (one line) summary that describes the changes in this
pull request for inclusion in the changelog.
It must be placed inside the below triple backticks section.
NOTE: Only fill this section if changes introduced in this PR are user-facing.
The PR must have a relevant impact/ label.
It must be placed inside the below triple backticks section:
-->
```markdown changelog

View File

@@ -16,12 +16,11 @@ on:
workflow_call:
env:
ALPINE_VERSION: "3.21"
ALPINE_VERSION: 3.16
jobs:
run:
runs-on: ubuntu-24.04
timeout-minutes: 10 # guardrails timeout for the whole job
runs-on: ubuntu-20.04
steps:
-
name: Checkout
@@ -49,12 +48,10 @@ jobs:
name: Validate
run: |
docker run --rm \
--quiet \
-v ./:/workspace \
-w /workspace \
-v "$(pwd):/workspace" \
-e VALIDATE_REPO \
-e VALIDATE_BRANCH \
alpine:${{ env.ALPINE_VERSION }} sh -c 'apk add --no-cache -q bash git openssh-client && git config --system --add safe.directory /workspace && hack/validate/dco'
alpine:${{ env.ALPINE_VERSION }} sh -c 'apk add --no-cache -q bash git openssh-client && git config --system --add safe.directory /workspace && cd /workspace && hack/validate/dco'
env:
VALIDATE_REPO: ${{ github.server_url }}/${{ github.repository }}.git
VALIDATE_BRANCH: ${{ steps.base-ref.outputs.result }}

View File

@@ -21,8 +21,7 @@ on:
jobs:
run:
runs-on: ubuntu-24.04
timeout-minutes: 120 # guardrails timeout for the whole job
runs-on: ubuntu-20.04
outputs:
matrix: ${{ steps.set.outputs.matrix }}
steps:

View File

@@ -21,57 +21,21 @@ on:
default: "graphdriver"
env:
GO_VERSION: "1.23.7"
GO_VERSION: "1.21.13"
GOTESTLIST_VERSION: v0.3.1
TESTSTAT_VERSION: v0.1.25
ITG_CLI_MATRIX_SIZE: 6
DOCKER_EXPERIMENTAL: 1
DOCKER_GRAPHDRIVER: ${{ inputs.storage == 'snapshotter' && 'overlayfs' || 'overlay2' }}
TEST_INTEGRATION_USE_SNAPSHOTTER: ${{ inputs.storage == 'snapshotter' && '1' || '' }}
SETUP_BUILDX_VERSION: edge
SETUP_BUILDX_VERSION: latest
SETUP_BUILDKIT_IMAGE: moby/buildkit:latest
jobs:
unit-prepare:
runs-on: ubuntu-24.04
timeout-minutes: 10 # guardrails timeout for the whole job
continue-on-error: ${{ github.event_name != 'pull_request' }}
outputs:
includes: ${{ steps.set.outputs.includes }}
steps:
-
name: Create matrix includes
id: set
uses: actions/github-script@v7
with:
script: |
let includes = [
{ mode: '' },
{ mode: 'rootless' },
{ mode: 'systemd' },
];
if ("${{ inputs.storage }}" == "snapshotter") {
includes.push({ mode: 'firewalld' });
}
await core.group(`Set matrix`, async () => {
core.info(`matrix: ${JSON.stringify(includes)}`);
core.setOutput('includes', JSON.stringify(includes));
});
-
name: Show matrix
run: |
echo ${{ steps.set.outputs.includes }}
unit:
runs-on: ubuntu-24.04
timeout-minutes: 120 # guardrails timeout for the whole job
runs-on: ubuntu-20.04
continue-on-error: ${{ github.event_name != 'pull_request' }}
needs:
- unit-prepare
strategy:
fail-fast: false
matrix:
include: ${{ fromJson(needs.unit-prepare.outputs.includes) }}
timeout-minutes: 120
steps:
-
name: Checkout
@@ -79,15 +43,6 @@ jobs:
-
name: Set up runner
uses: ./.github/actions/setup-runner
-
name: Prepare
run: |
CACHE_DEV_SCOPE=dev
if [[ "${{ matrix.mode }}" == *"firewalld"* ]]; then
echo "DOCKER_FIREWALLD=true" >> $GITHUB_ENV
CACHE_DEV_SCOPE="${CACHE_DEV_SCOPE}firewalld"
fi
echo "CACHE_DEV_SCOPE=${CACHE_DEV_SCOPE}" >> $GITHUB_ENV
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
@@ -97,7 +52,7 @@ jobs:
buildkitd-flags: --debug
-
name: Build dev image
uses: docker/bake-action@v6
uses: docker/bake-action@v4
with:
targets: dev
set: |
@@ -128,14 +83,14 @@ jobs:
if: always()
uses: actions/upload-artifact@v4
with:
name: test-reports-unit-${{ inputs.storage }}-${{ matrix.mode }}
name: test-reports-unit-${{ inputs.storage }}
path: /tmp/reports/*
retention-days: 1
unit-report:
runs-on: ubuntu-24.04
timeout-minutes: 10
runs-on: ubuntu-20.04
continue-on-error: ${{ github.event_name != 'pull_request' }}
timeout-minutes: 10
if: always()
needs:
- unit
@@ -145,12 +100,11 @@ jobs:
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
cache-dependency-path: vendor.sum
-
name: Download reports
uses: actions/download-artifact@v4
with:
pattern: test-reports-unit-${{ inputs.storage }}-*
name: test-reports-unit-${{ inputs.storage }}
path: /tmp/reports
-
name: Install teststat
@@ -162,9 +116,9 @@ jobs:
find /tmp/reports -type f -name '*-go-test-report.json' -exec teststat -markdown {} \+ >> $GITHUB_STEP_SUMMARY
docker-py:
runs-on: ubuntu-24.04
timeout-minutes: 120 # guardrails timeout for the whole job
runs-on: ubuntu-20.04
continue-on-error: ${{ github.event_name != 'pull_request' }}
timeout-minutes: 120
steps:
-
name: Checkout
@@ -184,7 +138,7 @@ jobs:
buildkitd-flags: --debug
-
name: Build dev image
uses: docker/bake-action@v6
uses: docker/bake-action@v4
with:
targets: dev
set: |
@@ -192,7 +146,7 @@ jobs:
-
name: Test
run: |
make TEST_SKIP_INTEGRATION_CLI=1 -o build test-docker-py
make -o build test-docker-py
-
name: Prepare reports
if: always()
@@ -220,8 +174,8 @@ jobs:
integration-flaky:
runs-on: ubuntu-20.04
timeout-minutes: 120 # guardrails timeout for the whole job
continue-on-error: ${{ github.event_name != 'pull_request' }}
timeout-minutes: 120
steps:
-
name: Checkout
@@ -238,7 +192,7 @@ jobs:
buildkitd-flags: --debug
-
name: Build dev image
uses: docker/bake-action@v6
uses: docker/bake-action@v4
with:
targets: dev
set: |
@@ -250,52 +204,21 @@ jobs:
env:
TEST_SKIP_INTEGRATION_CLI: 1
integration-prepare:
runs-on: ubuntu-24.04
timeout-minutes: 10 # guardrails timeout for the whole job
continue-on-error: ${{ github.event_name != 'pull_request' }}
outputs:
includes: ${{ steps.set.outputs.includes }}
steps:
-
name: Create matrix includes
id: set
uses: actions/github-script@v7
with:
script: |
let includes = [
{ os: 'ubuntu-20.04', mode: '' },
{ os: 'ubuntu-20.04', mode: 'rootless' },
{ os: 'ubuntu-20.04', mode: 'systemd' },
{ os: 'ubuntu-24.04', mode: '' },
{ os: 'ubuntu-22.04', mode: 'rootless' },
// { os: 'ubuntu-24.04', mode: 'rootless' }, // FIXME: https://github.com/moby/moby/pull/49579#issuecomment-2698622223
{ os: 'ubuntu-24.04', mode: 'systemd' },
// { os: 'ubuntu-20.04', mode: 'rootless-systemd' }, // FIXME: https://github.com/moby/moby/issues/44084
// { os: 'ubuntu-24.04', mode: 'rootless-systemd' }, // FIXME: https://github.com/moby/moby/issues/44084
];
if ("${{ inputs.storage }}" == "snapshotter") {
includes.push({ os: 'ubuntu-24.04', mode: 'firewalld' });
}
await core.group(`Set matrix`, async () => {
core.info(`matrix: ${JSON.stringify(includes)}`);
core.setOutput('includes', JSON.stringify(includes));
});
-
name: Show matrix
run: |
echo ${{ steps.set.outputs.includes }}
integration:
runs-on: ${{ matrix.os }}
timeout-minutes: 120 # guardrails timeout for the whole job
continue-on-error: ${{ github.event_name != 'pull_request' }}
needs:
- integration-prepare
timeout-minutes: 120
strategy:
fail-fast: false
matrix:
include: ${{ fromJson(needs.integration-prepare.outputs.includes) }}
os:
- ubuntu-20.04
- ubuntu-22.04
mode:
- ""
- rootless
- systemd
#- rootless-systemd FIXME: https://github.com/moby/moby/issues/44084
steps:
-
name: Checkout
@@ -317,10 +240,6 @@ jobs:
echo "SYSTEMD=true" >> $GITHUB_ENV
CACHE_DEV_SCOPE="${CACHE_DEV_SCOPE}systemd"
fi
if [[ "${{ matrix.mode }}" == *"firewalld"* ]]; then
echo "DOCKER_FIREWALLD=true" >> $GITHUB_ENV
CACHE_DEV_SCOPE="${CACHE_DEV_SCOPE}firewalld"
fi
echo "CACHE_DEV_SCOPE=${CACHE_DEV_SCOPE}" >> $GITHUB_ENV
-
name: Set up Docker Buildx
@@ -331,7 +250,7 @@ jobs:
buildkitd-flags: --debug
-
name: Build dev image
uses: docker/bake-action@v6
uses: docker/bake-action@v4
with:
targets: dev
set: |
@@ -384,9 +303,9 @@ jobs:
retention-days: 1
integration-report:
runs-on: ubuntu-24.04
timeout-minutes: 10
runs-on: ubuntu-20.04
continue-on-error: ${{ github.event_name != 'pull_request' }}
timeout-minutes: 10
if: always()
needs:
- integration
@@ -396,7 +315,6 @@ jobs:
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
cache-dependency-path: vendor.sum
-
name: Download reports
uses: actions/download-artifact@v4
@@ -414,11 +332,10 @@ jobs:
find /tmp/reports -type f -name '*-go-test-report.json' -exec teststat -markdown {} \+ >> $GITHUB_STEP_SUMMARY
integration-cli-prepare:
runs-on: ubuntu-24.04
timeout-minutes: 120 # guardrails timeout for the whole job
runs-on: ubuntu-20.04
continue-on-error: ${{ github.event_name != 'pull_request' }}
outputs:
matrix: ${{ steps.set.outputs.matrix }}
matrix: ${{ steps.tests.outputs.matrix }}
steps:
-
name: Checkout
@@ -428,13 +345,12 @@ jobs:
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
cache-dependency-path: vendor.sum
-
name: Install gotestlist
run:
go install github.com/crazy-max/gotestlist/cmd/gotestlist@${{ env.GOTESTLIST_VERSION }}
-
name: Create test matrix
name: Create matrix
id: tests
working-directory: ./integration-cli
run: |
@@ -446,53 +362,20 @@ jobs:
matrix="$(gotestlist -d ${{ env.ITG_CLI_MATRIX_SIZE }} -o "./..." -o "DockerSwarmSuite" -o "DockerNetworkSuite|DockerExternalVolumeSuite" ./...)"
echo "matrix=$matrix" >> $GITHUB_OUTPUT
-
name: Create gha matrix
id: set
uses: actions/github-script@v7
with:
script: |
let matrix = {
test: ${{ steps.tests.outputs.matrix }},
include: [],
};
// For some reasons, GHA doesn't combine a dynamically defined
// 'include' with other matrix variables that aren't part of the
// include items.
// Moreover, since the goal is to run only relevant tests with
// firewalld enabled to minimize the number of CI jobs, we
// statically define the list of test suites that we want to run.
if ("${{ inputs.storage }}" == "snapshotter") {
matrix.include.push({
'mode': 'firewalld',
'test': 'DockerCLINetworkSuite|DockerCLIPortSuite|DockerDaemonSuite'
});
matrix.include.push({
'mode': 'firewalld',
'test': 'DockerSwarmSuite'
});
matrix.include.push({
'mode': 'firewalld',
'test': 'DockerNetworkSuite'
});
}
await core.group(`Set matrix`, async () => {
core.info(`matrix: ${JSON.stringify(matrix)}`);
core.setOutput('matrix', JSON.stringify(matrix));
});
-
name: Show final gha matrix
name: Show matrix
run: |
echo ${{ steps.set.outputs.matrix }}
echo ${{ steps.tests.outputs.matrix }}
integration-cli:
runs-on: ubuntu-20.04
timeout-minutes: 120 # guardrails timeout for the whole job
continue-on-error: ${{ github.event_name != 'pull_request' }}
timeout-minutes: 120
needs:
- integration-cli-prepare
strategy:
fail-fast: false
matrix: ${{ fromJson(needs.integration-cli-prepare.outputs.matrix) }}
matrix:
test: ${{ fromJson(needs.integration-cli-prepare.outputs.matrix) }}
steps:
-
name: Checkout
@@ -503,15 +386,6 @@ jobs:
-
name: Set up tracing
uses: ./.github/actions/setup-tracing
-
name: Prepare
run: |
CACHE_DEV_SCOPE=dev
if [[ "${{ matrix.mode }}" == *"firewalld"* ]]; then
echo "DOCKER_FIREWALLD=true" >> $GITHUB_ENV
CACHE_DEV_SCOPE="${CACHE_DEV_SCOPE}firewalld"
fi
echo "CACHE_DEV_SCOPE=${CACHE_DEV_SCOPE}" >> $GITHUB_ENV
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
@@ -521,7 +395,7 @@ jobs:
buildkitd-flags: --debug
-
name: Build dev image
uses: docker/bake-action@v6
uses: docker/bake-action@v4
with:
targets: dev
set: |
@@ -573,9 +447,9 @@ jobs:
retention-days: 1
integration-cli-report:
runs-on: ubuntu-24.04
timeout-minutes: 10
runs-on: ubuntu-20.04
continue-on-error: ${{ github.event_name != 'pull_request' }}
timeout-minutes: 10
if: always()
needs:
- integration-cli
@@ -585,7 +459,6 @@ jobs:
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
cache-dependency-path: vendor.sum
-
name: Download reports
uses: actions/download-artifact@v4

View File

@@ -28,7 +28,7 @@ on:
default: false
env:
GO_VERSION: "1.23.7"
GO_VERSION: "1.21.13"
GOTESTLIST_VERSION: v0.3.1
TESTSTAT_VERSION: v0.1.25
WINDOWS_BASE_IMAGE: mcr.microsoft.com/windows/servercore
@@ -42,7 +42,6 @@ env:
jobs:
build:
runs-on: ${{ inputs.os }}
timeout-minutes: 120 # guardrails timeout for the whole job
env:
GOPATH: ${{ github.workspace }}\go
GOBIN: ${{ github.workspace }}\go\bin
@@ -122,7 +121,7 @@ jobs:
unit-test:
runs-on: ${{ inputs.os }}
timeout-minutes: 120 # guardrails timeout for the whole job
timeout-minutes: 120
env:
GOPATH: ${{ github.workspace }}\go
GOBIN: ${{ github.workspace }}\go\bin
@@ -203,8 +202,7 @@ jobs:
retention-days: 1
unit-test-report:
runs-on: ubuntu-24.04
timeout-minutes: 120 # guardrails timeout for the whole job
runs-on: ubuntu-latest
if: always()
needs:
- unit-test
@@ -214,7 +212,6 @@ jobs:
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
cache-dependency-path: vendor.sum
-
name: Download artifacts
uses: actions/download-artifact@v4
@@ -231,8 +228,7 @@ jobs:
find /tmp/artifacts -type f -name '*-go-test-report.json' -exec teststat -markdown {} \+ >> $GITHUB_STEP_SUMMARY
integration-test-prepare:
runs-on: ubuntu-24.04
timeout-minutes: 120 # guardrails timeout for the whole job
runs-on: ubuntu-latest
outputs:
matrix: ${{ steps.tests.outputs.matrix }}
steps:
@@ -244,7 +240,6 @@ jobs:
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
cache-dependency-path: vendor.sum
-
name: Install gotestlist
run:
@@ -267,8 +262,8 @@ jobs:
integration-test:
runs-on: ${{ inputs.os }}
timeout-minutes: 120 # guardrails timeout for the whole job
continue-on-error: ${{ inputs.storage == 'snapshotter' && github.event_name != 'pull_request' }}
timeout-minutes: 120
needs:
- build
- integration-test-prepare
@@ -349,12 +344,33 @@ jobs:
$ErrorActionPreference = "Stop"
Write-Host "Service removed"
}
-
name: Starting containerd
if: matrix.runtime == 'containerd'
run: |
Write-Host "Generating config"
& "${{ env.BIN_OUT }}\containerd.exe" config default | Out-File "$env:TEMP\ctn.toml" -Encoding ascii
Write-Host "Creating service"
New-Item -ItemType Directory "$env:TEMP\ctn-root" -ErrorAction SilentlyContinue | Out-Null
New-Item -ItemType Directory "$env:TEMP\ctn-state" -ErrorAction SilentlyContinue | Out-Null
Start-Process -Wait "${{ env.BIN_OUT }}\containerd.exe" `
-ArgumentList "--log-level=debug", `
"--config=$env:TEMP\ctn.toml", `
"--address=\\.\pipe\containerd-containerd", `
"--root=$env:TEMP\ctn-root", `
"--state=$env:TEMP\ctn-state", `
"--log-file=$env:TEMP\ctn.log", `
"--register-service"
Write-Host "Starting service"
Start-Service -Name containerd
Start-Sleep -Seconds 5
Write-Host "Service started successfully!"
-
name: Starting test daemon
run: |
Write-Host "Creating service"
If ("${{ matrix.runtime }}" -eq "containerd") {
$runtimeArg="--default-runtime=io.containerd.runhcs.v1"
$runtimeArg="--containerd=\\.\pipe\containerd-containerd"
echo "DOCKER_WINDOWS_CONTAINERD_RUNTIME=1" | Out-File -FilePath $Env:GITHUB_ENV -Encoding utf-8 -Append
}
New-Item -ItemType Directory "$env:TEMP\moby-root" -ErrorAction SilentlyContinue | Out-Null
@@ -394,17 +410,6 @@ jobs:
Start-Sleep -Seconds 1
}
Write-Host "Test daemon started and replied!"
If ("${{ matrix.runtime }}" -eq "containerd") {
$containerdProcesses = Get-Process -Name containerd -ErrorAction:SilentlyContinue
If (-not $containerdProcesses) {
Throw "containerd process is not running"
} else {
foreach ($process in $containerdProcesses) {
$processPath = (Get-Process -Id $process.Id -FileVersionInfo).FileName
Write-Output "Running containerd instance binary Path: $($processPath)"
}
}
}
env:
DOCKER_HOST: npipe:////./pipe/docker_engine
-
@@ -433,7 +438,6 @@ jobs:
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
cache-dependency-path: vendor.sum
-
name: Test integration
if: matrix.test == './...'
@@ -469,6 +473,19 @@ jobs:
& "${{ env.BIN_OUT }}\docker" info
env:
DOCKER_HOST: npipe:////./pipe/docker_engine
-
name: Stop containerd
if: always() && matrix.runtime == 'containerd'
run: |
$ErrorActionPreference = "SilentlyContinue"
Stop-Service -Force -Name containerd
$ErrorActionPreference = "Stop"
-
name: Containerd logs
if: always() && matrix.runtime == 'containerd'
run: |
Copy-Item "$env:TEMP\ctn.log" -Destination ".\bundles\containerd.log"
Get-Content "$env:TEMP\ctn.log" | Out-Host
-
name: Stop daemon
if: always()
@@ -504,8 +521,7 @@ jobs:
retention-days: 1
integration-test-report:
runs-on: ubuntu-24.04
timeout-minutes: 120 # guardrails timeout for the whole job
runs-on: ubuntu-latest
continue-on-error: ${{ inputs.storage == 'snapshotter' && github.event_name != 'pull_request' }}
if: always()
needs:
@@ -527,7 +543,6 @@ jobs:
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
cache-dependency-path: vendor.sum
-
name: Download reports
uses: actions/download-artifact@v4

View File

@@ -1,276 +0,0 @@
name: arm64
# Default to 'contents: read', which grants actions to read commits.
#
# If any permission is set, any permission not included in the list is
# implicitly set to "none".
#
# see https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#permissions
permissions:
contents: read
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
on:
workflow_dispatch:
push:
branches:
- 'master'
- '[0-9]+.[0-9]+'
- '[0-9]+.x'
pull_request:
env:
GO_VERSION: "1.23.7"
TESTSTAT_VERSION: v0.1.25
DESTDIR: ./build
SETUP_BUILDX_VERSION: edge
SETUP_BUILDKIT_IMAGE: moby/buildkit:latest
DOCKER_EXPERIMENTAL: 1
jobs:
validate-dco:
uses: ./.github/workflows/.dco.yml
build:
runs-on: ubuntu-24.04-arm
timeout-minutes: 20 # guardrails timeout for the whole job
needs:
- validate-dco
strategy:
fail-fast: false
matrix:
target:
- binary
- dynbinary
steps:
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
with:
version: ${{ env.SETUP_BUILDX_VERSION }}
driver-opts: image=${{ env.SETUP_BUILDKIT_IMAGE }}
buildkitd-flags: --debug
-
name: Build
uses: docker/bake-action@v6
with:
targets: ${{ matrix.target }}
-
name: List artifacts
run: |
tree -nh ${{ env.DESTDIR }}
-
name: Check artifacts
run: |
find ${{ env.DESTDIR }} -type f -exec file -e ascii -- {} +
build-dev:
runs-on: ubuntu-24.04-arm
timeout-minutes: 120 # guardrails timeout for the whole job
needs:
- validate-dco
steps:
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
with:
version: ${{ env.SETUP_BUILDX_VERSION }}
driver-opts: image=${{ env.SETUP_BUILDKIT_IMAGE }}
buildkitd-flags: --debug
-
name: Build dev image
uses: docker/bake-action@v6
with:
targets: dev
set: |
*.cache-from=type=gha,scope=dev-arm64
*.cache-to=type=gha,scope=dev-arm64,mode=max
*.output=type=cacheonly
test-unit:
runs-on: ubuntu-24.04-arm
timeout-minutes: 120 # guardrails timeout for the whole job
needs:
- build-dev
steps:
-
name: Checkout
uses: actions/checkout@v4
-
name: Set up runner
uses: ./.github/actions/setup-runner
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
with:
version: ${{ env.SETUP_BUILDX_VERSION }}
driver-opts: image=${{ env.SETUP_BUILDKIT_IMAGE }}
buildkitd-flags: --debug
-
name: Build dev image
uses: docker/bake-action@v6
with:
targets: dev
set: |
dev.cache-from=type=gha,scope=dev-arm64
-
name: Test
run: |
make -o build test-unit
-
name: Prepare reports
if: always()
run: |
mkdir -p bundles /tmp/reports
find bundles -path '*/root/*overlay2' -prune -o -type f \( -name '*-report.json' -o -name '*.log' -o -name '*.out' -o -name '*.prof' -o -name '*-report.xml' \) -print | xargs sudo tar -czf /tmp/reports.tar.gz
tar -xzf /tmp/reports.tar.gz -C /tmp/reports
sudo chown -R $(id -u):$(id -g) /tmp/reports
tree -nh /tmp/reports
-
name: Send to Codecov
uses: codecov/codecov-action@v4
with:
directory: ./bundles
env_vars: RUNNER_OS
flags: unit
token: ${{ secrets.CODECOV_TOKEN }} # used to upload coverage reports: https://github.com/moby/buildkit/pull/4660#issue-2142122533
-
name: Upload reports
if: always()
uses: actions/upload-artifact@v4
with:
name: test-reports-unit-arm64-graphdriver
path: /tmp/reports/*
retention-days: 1
test-unit-report:
runs-on: ubuntu-24.04
timeout-minutes: 10
continue-on-error: ${{ github.event_name != 'pull_request' }}
if: always()
needs:
- test-unit
steps:
-
name: Set up Go
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
cache-dependency-path: vendor.sum
-
name: Download reports
uses: actions/download-artifact@v4
with:
pattern: test-reports-unit-arm64-*
path: /tmp/reports
-
name: Install teststat
run: |
go install github.com/vearutop/teststat@${{ env.TESTSTAT_VERSION }}
-
name: Create summary
run: |
find /tmp/reports -type f -name '*-go-test-report.json' -exec teststat -markdown {} \+ >> $GITHUB_STEP_SUMMARY
test-integration:
runs-on: ubuntu-24.04-arm
timeout-minutes: 120 # guardrails timeout for the whole job
continue-on-error: ${{ github.event_name != 'pull_request' }}
needs:
- build-dev
steps:
-
name: Checkout
uses: actions/checkout@v4
-
name: Set up runner
uses: ./.github/actions/setup-runner
-
name: Set up tracing
uses: ./.github/actions/setup-tracing
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
with:
version: ${{ env.SETUP_BUILDX_VERSION }}
driver-opts: image=${{ env.SETUP_BUILDKIT_IMAGE }}
buildkitd-flags: --debug
-
name: Build dev image
uses: docker/bake-action@v6
with:
targets: dev
set: |
dev.cache-from=type=gha,scope=dev-arm64
-
name: Test
run: |
make -o build test-integration
env:
TEST_SKIP_INTEGRATION_CLI: 1
TESTCOVERAGE: 1
-
name: Prepare reports
if: always()
run: |
reportsPath="/tmp/reports/arm64-graphdriver"
mkdir -p bundles $reportsPath
find bundles -path '*/root/*overlay2' -prune -o -type f \( -name '*-report.json' -o -name '*.log' -o -name '*.out' -o -name '*.prof' -o -name '*-report.xml' \) -print | xargs sudo tar -czf /tmp/reports.tar.gz
tar -xzf /tmp/reports.tar.gz -C $reportsPath
sudo chown -R $(id -u):$(id -g) $reportsPath
tree -nh $reportsPath
curl -sSLf localhost:16686/api/traces?service=integration-test-client > $reportsPath/jaeger-trace.json
-
name: Send to Codecov
uses: codecov/codecov-action@v4
with:
directory: ./bundles/test-integration
env_vars: RUNNER_OS
flags: integration
token: ${{ secrets.CODECOV_TOKEN }} # used to upload coverage reports: https://github.com/moby/buildkit/pull/4660#issue-2142122533
-
name: Test daemon logs
if: always()
run: |
cat bundles/test-integration/docker.log
-
name: Upload reports
if: always()
uses: actions/upload-artifact@v4
with:
name: test-reports-integration-arm64-graphdriver
path: /tmp/reports/*
retention-days: 1
test-integration-report:
runs-on: ubuntu-24.04
timeout-minutes: 10
continue-on-error: ${{ github.event_name != 'pull_request' }}
if: always()
needs:
- test-integration
steps:
-
name: Set up Go
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
cache-dependency-path: vendor.sum
-
name: Download reports
uses: actions/download-artifact@v4
with:
path: /tmp/reports
pattern: test-reports-integration-arm64-*
merge-multiple: true
-
name: Install teststat
run: |
go install github.com/vearutop/teststat@${{ env.TESTSTAT_VERSION }}
-
name: Create summary
run: |
find /tmp/reports -type f -name '*-go-test-report.json' -exec teststat -markdown {} \+ >> $GITHUB_STEP_SUMMARY

View File

@@ -19,7 +19,6 @@ on:
branches:
- 'master'
- '[0-9]+.[0-9]+'
- '[0-9]+.x'
tags:
- 'v*'
pull_request:
@@ -31,7 +30,7 @@ env:
PLATFORM: Moby Engine - Nightly
PRODUCT: moby-bin
PACKAGER_NAME: The Moby Project
SETUP_BUILDX_VERSION: edge
SETUP_BUILDX_VERSION: latest
SETUP_BUILDKIT_IMAGE: moby/buildkit:latest
jobs:
@@ -40,8 +39,7 @@ jobs:
uses: ./.github/workflows/.dco.yml
prepare:
runs-on: ubuntu-24.04
timeout-minutes: 20 # guardrails timeout for the whole job
runs-on: ubuntu-20.04
outputs:
platforms: ${{ steps.platforms.outputs.matrix }}
steps:
@@ -59,7 +57,7 @@ jobs:
## push semver tag v23.0.0
# moby/moby-bin:23.0.0
# moby/moby-bin:latest
## push semver prerelease tag v23.0.0-beta.1
## push semver prelease tag v23.0.0-beta.1
# moby/moby-bin:23.0.0-beta.1
## push on master
# moby/moby-bin:master
@@ -93,8 +91,7 @@ jobs:
echo "matrix=$(docker buildx bake bin-image-cross --print | jq -cr '.target."bin-image-cross".platforms')" >>${GITHUB_OUTPUT}
build:
runs-on: ubuntu-24.04
timeout-minutes: 120 # guardrails timeout for the whole job
runs-on: ubuntu-20.04
needs:
- validate-dco
- prepare
@@ -104,16 +101,16 @@ jobs:
matrix:
platform: ${{ fromJson(needs.prepare.outputs.platforms) }}
steps:
-
name: Checkout
uses: actions/checkout@v4
with:
fetch-depth: 0
-
name: Prepare
run: |
platform=${{ matrix.platform }}
echo "PLATFORM_PAIR=${platform//\//-}" >> $GITHUB_ENV
-
name: Checkout
uses: actions/checkout@v4
with:
fetch-depth: 0
-
name: Download meta bake definition
uses: actions/download-artifact@v4
@@ -140,9 +137,8 @@ jobs:
-
name: Build
id: bake
uses: docker/bake-action@v6
uses: docker/bake-action@v4
with:
source: .
files: |
./docker-bake.hcl
/tmp/bake-meta.json
@@ -169,8 +165,7 @@ jobs:
retention-days: 1
merge:
runs-on: ubuntu-24.04
timeout-minutes: 120 # guardrails timeout for the whole job
runs-on: ubuntu-20.04
needs:
- build
if: always() && !contains(needs.*.result, 'failure') && !contains(needs.*.result, 'cancelled') && github.event_name != 'pull_request' && github.repository == 'moby/moby'

View File

@@ -19,13 +19,12 @@ on:
branches:
- 'master'
- '[0-9]+.[0-9]+'
- '[0-9]+.x'
pull_request:
env:
GO_VERSION: "1.23.7"
GO_VERSION: "1.21.13"
DESTDIR: ./build
SETUP_BUILDX_VERSION: edge
SETUP_BUILDX_VERSION: latest
SETUP_BUILDKIT_IMAGE: moby/buildkit:latest
jobs:
@@ -33,11 +32,13 @@ jobs:
uses: ./.github/workflows/.dco.yml
build:
runs-on: ubuntu-24.04
timeout-minutes: 120 # guardrails timeout for the whole job
runs-on: ubuntu-20.04
needs:
- validate-dco
steps:
-
name: Checkout
uses: actions/checkout@v4
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
@@ -47,7 +48,7 @@ jobs:
buildkitd-flags: --debug
-
name: Build
uses: docker/bake-action@v6
uses: docker/bake-action@v4
with:
targets: binary
-
@@ -60,8 +61,8 @@ jobs:
retention-days: 1
test:
runs-on: ubuntu-24.04
timeout-minutes: 120 # guardrails timeout for the whole job
runs-on: ubuntu-20.04
timeout-minutes: 120
needs:
- build
env:
@@ -101,12 +102,6 @@ jobs:
uses: actions/checkout@v4
with:
path: moby
-
name: Set up Go
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
cache-dependency-path: vendor.sum
-
name: BuildKit ref
run: |
@@ -144,9 +139,8 @@ jobs:
docker info
-
name: Build test image
uses: docker/bake-action@v6
uses: docker/bake-action@v4
with:
source: .
workdir: ./buildkit
targets: integration-tests
set: |

View File

@@ -19,12 +19,11 @@ on:
branches:
- 'master'
- '[0-9]+.[0-9]+'
- '[0-9]+.x'
pull_request:
env:
DESTDIR: ./build
SETUP_BUILDX_VERSION: edge
SETUP_BUILDX_VERSION: latest
SETUP_BUILDKIT_IMAGE: moby/buildkit:latest
jobs:
@@ -32,8 +31,7 @@ jobs:
uses: ./.github/workflows/.dco.yml
build:
runs-on: ubuntu-24.04
timeout-minutes: 20 # guardrails timeout for the whole job
runs-on: ubuntu-20.04
needs:
- validate-dco
strategy:
@@ -43,6 +41,11 @@ jobs:
- binary
- dynbinary
steps:
-
name: Checkout
uses: actions/checkout@v4
with:
fetch-depth: 0
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
@@ -52,7 +55,7 @@ jobs:
buildkitd-flags: --debug
-
name: Build
uses: docker/bake-action@v6
uses: docker/bake-action@v4
with:
targets: ${{ matrix.target }}
-
@@ -65,8 +68,7 @@ jobs:
find ${{ env.DESTDIR }} -type f -exec file -e ascii -- {} +
prepare-cross:
runs-on: ubuntu-24.04
timeout-minutes: 20 # guardrails timeout for the whole job
runs-on: ubuntu-latest
needs:
- validate-dco
outputs:
@@ -87,8 +89,7 @@ jobs:
echo ${{ steps.platforms.outputs.matrix }}
cross:
runs-on: ubuntu-24.04
timeout-minutes: 20 # guardrails timeout for the whole job
runs-on: ubuntu-20.04
needs:
- validate-dco
- prepare-cross
@@ -97,6 +98,11 @@ jobs:
matrix:
platform: ${{ fromJson(needs.prepare-cross.outputs.matrix) }}
steps:
-
name: Checkout
uses: actions/checkout@v4
with:
fetch-depth: 0
-
name: Prepare
run: |
@@ -111,7 +117,7 @@ jobs:
buildkitd-flags: --debug
-
name: Build
uses: docker/bake-action@v6
uses: docker/bake-action@v4
with:
targets: all
set: |
@@ -127,13 +133,15 @@ jobs:
govulncheck:
runs-on: ubuntu-24.04
timeout-minutes: 120 # guardrails timeout for the whole job
permissions:
# required to write sarif report
security-events: write
# required to check out the repository
contents: read
steps:
-
name: Checkout
uses: actions/checkout@v4
with:
fetch-depth: 0
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
@@ -143,7 +151,7 @@ jobs:
buildkitd-flags: --debug
-
name: Run
uses: docker/bake-action@v6
uses: docker/bake-action@v5
with:
targets: govulncheck
env:

View File

@@ -1,71 +0,0 @@
name: codeql
# Default to 'contents: read', which grants actions to read commits.
#
# If any permission is set, any permission not included in the list is
# implicitly set to "none".
#
# see https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#permissions
permissions:
contents: read
on:
push:
branches:
- 'master'
- '[0-9]+.[0-9]+'
- '[0-9]+.x'
tags:
- 'v*'
pull_request:
# The branches below must be a subset of the branches above
branches: ["master"]
schedule:
# ┌───────────── minute (0 - 59)
# │ ┌───────────── hour (0 - 23)
# │ │ ┌───────────── day of the month (1 - 31)
# │ │ │ ┌───────────── month (1 - 12)
# │ │ │ │ ┌───────────── day of the week (0 - 6) (Sunday to Saturday)
# │ │ │ │ │
# │ │ │ │ │
# │ │ │ │ │
# * * * * *
- cron: '0 9 * * 4'
jobs:
codeql:
runs-on: ubuntu-24.04
timeout-minutes: 10
permissions:
actions: read
contents: read
security-events: write
steps:
- name: Checkout
uses: actions/checkout@v4
with:
fetch-depth: 2
# CodeQL 2.16.4's auto-build added support for multi-module repositories,
# and is trying to be smart by searching for modules in every directory,
# including vendor directories. If no module is found, it's creating one
# which is ... not what we want, so let's give it a "go.mod".
# see: https://github.com/docker/cli/pull/4944#issuecomment-2002034698
- name: Create go.mod
run: |
ln -s vendor.mod go.mod
ln -s vendor.sum go.sum
- name: Update Go
uses: actions/setup-go@v5
with:
go-version: "1.23.7"
- name: Initialize CodeQL
uses: github/codeql-action/init@v3
with:
languages: go
- name: Autobuild
uses: github/codeql-action/autobuild@v3
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@v3
with:
category: "/language:go"

View File

@@ -19,14 +19,13 @@ on:
branches:
- 'master'
- '[0-9]+.[0-9]+'
- '[0-9]+.x'
pull_request:
env:
GO_VERSION: "1.23.7"
GO_VERSION: "1.21.13"
GIT_PAGER: "cat"
PAGER: "cat"
SETUP_BUILDX_VERSION: edge
SETUP_BUILDX_VERSION: latest
SETUP_BUILDKIT_IMAGE: moby/buildkit:latest
jobs:
@@ -34,8 +33,7 @@ jobs:
uses: ./.github/workflows/.dco.yml
build-dev:
runs-on: ubuntu-24.04
timeout-minutes: 120 # guardrails timeout for the whole job
runs-on: ubuntu-20.04
needs:
- validate-dco
strategy:
@@ -51,6 +49,9 @@ jobs:
if [ "${{ matrix.mode }}" = "systemd" ]; then
echo "SYSTEMD=true" >> $GITHUB_ENV
fi
-
name: Checkout
uses: actions/checkout@v4
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
@@ -60,7 +61,7 @@ jobs:
buildkitd-flags: --debug
-
name: Build dev image
uses: docker/bake-action@v6
uses: docker/bake-action@v4
with:
targets: dev
set: |
@@ -84,8 +85,7 @@ jobs:
storage: ${{ matrix.storage }}
validate-prepare:
runs-on: ubuntu-24.04
timeout-minutes: 10 # guardrails timeout for the whole job
runs-on: ubuntu-20.04
needs:
- validate-dco
outputs:
@@ -106,8 +106,8 @@ jobs:
echo ${{ steps.scripts.outputs.matrix }}
validate:
runs-on: ubuntu-24.04
timeout-minutes: 30 # guardrails timeout for the whole job
runs-on: ubuntu-20.04
timeout-minutes: 120
needs:
- validate-prepare
- build-dev
@@ -133,7 +133,7 @@ jobs:
buildkitd-flags: --debug
-
name: Build dev image
uses: docker/bake-action@v6
uses: docker/bake-action@v4
with:
targets: dev
set: |
@@ -144,8 +144,7 @@ jobs:
make -o build validate-${{ matrix.script }}
smoke-prepare:
runs-on: ubuntu-24.04
timeout-minutes: 10 # guardrails timeout for the whole job
runs-on: ubuntu-20.04
needs:
- validate-dco
outputs:
@@ -166,8 +165,7 @@ jobs:
echo ${{ steps.platforms.outputs.matrix }}
smoke:
runs-on: ubuntu-24.04
timeout-minutes: 20 # guardrails timeout for the whole job
runs-on: ubuntu-20.04
needs:
- smoke-prepare
strategy:
@@ -175,6 +173,9 @@ jobs:
matrix:
platform: ${{ fromJson(needs.smoke-prepare.outputs.matrix) }}
steps:
-
name: Checkout
uses: actions/checkout@v4
-
name: Prepare
run: |
@@ -192,7 +193,7 @@ jobs:
buildkitd-flags: --debug
-
name: Test
uses: docker/bake-action@v6
uses: docker/bake-action@v4
with:
targets: binary-smoketest
set: |

View File

@@ -11,12 +11,11 @@ permissions:
on:
pull_request:
types: [opened, edited, labeled, unlabeled, synchronize]
types: [opened, edited, labeled, unlabeled]
jobs:
check-area-label:
runs-on: ubuntu-24.04
timeout-minutes: 120 # guardrails timeout for the whole job
runs-on: ubuntu-20.04
steps:
- name: Missing `area/` label
if: contains(join(github.event.pull_request.labels.*.name, ','), 'impact/') && !contains(join(github.event.pull_request.labels.*.name, ','), 'area/')
@@ -27,10 +26,9 @@ jobs:
run: exit 0
check-changelog:
runs-on: ubuntu-24.04
timeout-minutes: 120 # guardrails timeout for the whole job
if: contains(join(github.event.pull_request.labels.*.name, ','), 'impact/')
runs-on: ubuntu-20.04
env:
HAS_IMPACT_LABEL: ${{ contains(join(github.event.pull_request.labels.*.name, ','), 'impact/') }}
PR_BODY: |
${{ github.event.pull_request.body }}
steps:
@@ -42,31 +40,22 @@ jobs:
# Strip empty lines
desc=$(echo "$block" | awk NF)
if [ "$HAS_IMPACT_LABEL" = "true" ]; then
if [ -z "$desc" ]; then
echo "::error::Changelog section is empty. Please provide a description for the changelog."
exit 1
fi
if [ -z "$desc" ]; then
echo "::error::Changelog section is empty. Please provide a description for the changelog."
exit 1
fi
len=$(echo -n "$desc" | wc -c)
if [[ $len -le 6 ]]; then
echo "::error::Description looks too short: $desc"
exit 1
fi
else
if [ -n "$desc" ]; then
echo "::error::PR has a changelog description, but no changelog label"
echo "::error::Please add the relevant 'impact/' label to the PR or remove the changelog description"
exit 1
fi
len=$(echo -n "$desc" | wc -c)
if [[ $len -le 6 ]]; then
echo "::error::Description looks too short: $desc"
exit 1
fi
echo "This PR will be included in the release notes with the following note:"
echo "$desc"
check-pr-branch:
runs-on: ubuntu-24.04
timeout-minutes: 120 # guardrails timeout for the whole job
runs-on: ubuntu-20.04
env:
PR_TITLE: ${{ github.event.pull_request.title }}
steps:

View File

@@ -19,7 +19,6 @@ on:
branches:
- 'master'
- '[0-9]+.[0-9]+'
- '[0-9]+.x'
pull_request:
jobs:

View File

@@ -1,51 +1,50 @@
linters:
enable:
- asasalint # Detects "[]any" used as argument for variadic "func(...any)".
- copyloopvar # Detects places where loop variables are copied.
- depguard
- dogsled # Detects assignments with too many blank identifiers.
- dupword # Detects duplicate words.
- durationcheck # Detect cases where two time.Duration values are being multiplied in possibly erroneous ways.
- errchkjson # Detects unsupported types passed to json encoding functions and reports if checks for the returned error can be omitted.
- exhaustive # Detects missing options in enum switch statements.
- exptostd # Detects functions from golang.org/x/exp/ that can be replaced by std functions.
- fatcontext # Detects nested contexts in loops and function literals.
- gocheckcompilerdirectives # Detects invalid go compiler directive comments (//go:).
- dupword # Checks for duplicate words in the source code.
- goimports
- gosec # Detects security problems.
- gosec
- gosimple
- govet
- forbidigo
- iface # Detects incorrect use of interfaces. Currently only used for "identical" interfaces in the same package.
- importas
- ineffassign
- makezero # Finds slice declarations with non-zero initial length.
- mirror # Detects wrong mirror patterns of bytes/strings usage.
- misspell # Detects commonly misspelled English words in comments.
- nakedret # Detects uses of naked returns.
- nilnesserr # Detects returning nil errors. It combines the features of nilness and nilerr,
- nosprintfhostport # Detects misuse of Sprintf to construct a host with port in a URL.
- reassign # Detects reassigning a top-level variable in another package.
- revive # Metalinter; drop-in replacement for golint.
- spancheck # Detects mistakes with OpenTelemetry/Census spans.
- misspell
- revive
- staticcheck
- typecheck
- unconvert # Detects unnecessary type conversions.
- unconvert
- unused
- wastedassign # Detects wasted assignment statements.
disable:
- errcheck
run:
# prevent golangci-lint from deducting the go version to lint for through go.mod,
# which causes it to fallback to go1.17 semantics.
go: "1.23.7"
concurrency: 2
# Only supported with go modules enabled (build flag -mod=vendor only valid when using modules)
# modules-download-mode: vendor
run:
concurrency: 2
modules-download-mode: vendor
skip-dirs:
- docs
linters-settings:
dupword:
ignore:
- "true" # some tests use this as expected output
- "false" # some tests use this as expected output
- "root" # for tests using "ls" output with files owned by "root:root"
importas:
# Do not allow unaliased imports of aliased packages.
no-unaliased: true
alias:
# Enforce alias to prevent it accidentally being used instead of our
# own errdefs package (or vice-versa).
- pkg: github.com/containerd/errdefs
alias: cerrdefs
- pkg: github.com/opencontainers/image-spec/specs-go/v1
alias: ocispec
govet:
check-shadowing: false
depguard:
rules:
main:
@@ -64,104 +63,17 @@ linters-settings:
desc: The logs package has moved to a separate module, https://github.com/containerd/log
- pkg: "github.com/containerd/containerd/pkg/userns"
desc: Use github.com/moby/sys/userns instead.
- pkg: "github.com/tonistiigi/fsutil"
desc: The fsutil module does not have a stable API, so we should not have a direct dependency unless necessary.
dupword:
ignore:
- "true" # some tests use this as expected output
- "false" # some tests use this as expected output
- "root" # for tests using "ls" output with files owned by "root:root"
exhaustive:
# Program elements to check for exhaustiveness.
# Default: [ switch ]
check:
- switch
# - map # TODO(thaJeztah): also enable for maps
# Presence of "default" case in switch statements satisfies exhaustiveness,
# even if all enum members are not listed.
# Default: false
#
# TODO(thaJeztah): consider not allowing this to catch new values being added (and falling through to "default")
default-signifies-exhaustive: true
forbidigo:
forbid:
- pkg: ^sync/atomic$
p: ^atomic\.(Add|CompareAndSwap|Load|Store|Swap).
msg: Go 1.19 atomic types should be used instead.
- pkg: ^regexp$
p: ^regexp\.MustCompile
msg: Use internal/lazyregexp.New instead.
- pkg: github.com/vishvananda/netlink$
p: ^netlink\.(Handle\.)?(AddrList|BridgeVlanList|ChainList|ClassList|ConntrackTableList|ConntrackDeleteFilter$|ConntrackDeleteFilters|DevLinkGetDeviceList|DevLinkGetAllPortList|DevlinkGetDeviceParams|FilterList|FouList|GenlFamilyList|GTPPDPList|LinkByName|LinkByAlias|LinkList|LinkSubscribeWithOptions|NeighList$|NeighProxyList|NeighListExecute|NeighSubscribeWithOptions|LinkGetProtinfo|QdiscList|RdmaLinkList|RdmaLinkByName|RdmaLinkDel|RouteList|RouteListFilteredIter|RuleListFiltered$|RouteSubscribeWithOptions|RuleList$|RuleListFiltered|SocketGet|SocketDiagTCPInfo|SocketDiagTCP|SocketDiagUDPInfo|SocketDiagUDP|UnixSocketDiagInfo|UnixSocketDiag|VDPAGetDevConfigList|VDPAGetDevList|VDPAGetMGMTDevList|XfrmPolicyList|XfrmStateList)
msg: Use internal nlwrap package for EINTR handling.
- pkg: github.com/docker/docker/internal/nlwrap$
p: ^nlwrap.Handle.(BridgeVlanList|ChainList|ClassList|ConntrackDeleteFilter$|DevLinkGetDeviceList|DevLinkGetAllPortList|DevlinkGetDeviceParams|FilterList|FouList|GenlFamilyList|GTPPDPList|LinkByAlias|LinkSubscribeWithOptions|NeighList$|NeighProxyList|NeighListExecute|NeighSubscribeWithOptions|LinkGetProtinfo|QdiscList|RdmaLinkList|RdmaLinkByName|RdmaLinkDel|RouteListFilteredIter|RuleListFiltered$|RouteSubscribeWithOptions|RuleList$|RuleListFiltered|SocketGet|SocketDiagTCPInfo|SocketDiagTCP|SocketDiagUDPInfo|SocketDiagUDP|UnixSocketDiagInfo|UnixSocketDiag|VDPAGetDevConfigList|VDPAGetDevList|VDPAGetMGMTDevList)
msg: Add a wrapper to nlwrap.Handle for EINTR handling and update the list in .golangci.yml.
analyze-types: true
gosec:
excludes:
- G104 # G104: Errors unhandled; (TODO: reduce unhandled errors, or explicitly ignore)
- G113 # G113: Potential uncontrolled memory consumption in Rat.SetString (CVE-2022-23772); (only affects go < 1.16.14. and go < 1.17.7)
- G115 # G115: integer overflow conversion; (TODO: verify these: https://github.com/moby/moby/issues/48358)
- G204 # G204: Subprocess launched with variable; too many false positives.
- G301 # G301: Expect directory permissions to be 0750 or less (also EXC0009); too restrictive
- G302 # G302: Expect file permissions to be 0600 or less (also EXC0009); too restrictive
- G304 # G304: Potential file inclusion via variable.
- G306 # G306: Expect WriteFile permissions to be 0600 or less (too restrictive; also flags "0o644" permissions)
- G307 # G307: Deferring unsafe method "*os.File" on type "Close" (also EXC0008); (TODO: evaluate these and fix where needed: G307: Deferring unsafe method "*os.File" on type "Close")
- G504 # G504: Blocklisted import net/http/cgi: Go versions < 1.6.3 are vulnerable to Httpoxy attack: (CVE-2016-5386); (only affects go < 1.6.3)
govet:
enable-all: true
disable:
- fieldalignment # TODO: evaluate which ones should be updated.
importas:
# Do not allow unaliased imports of aliased packages.
no-unaliased: true
alias:
# Enforce alias to prevent it accidentally being used instead of our
# own errdefs package (or vice-versa).
- pkg: github.com/containerd/errdefs
alias: cerrdefs
- pkg: github.com/containerd/containerd/images
alias: c8dimages
- pkg: github.com/opencontainers/image-spec/specs-go/v1
alias: ocispec
# Enforce that gotest.tools/v3/assert/cmp is always aliased as "is"
- pkg: gotest.tools/v3/assert/cmp
alias: is
nakedret:
# Disallow naked returns if func has more lines of code than this setting.
# Default: 30
max-func-lines: 0
- pkg: "github.com/opencontainers/runc/libcontainer/userns"
desc: Use github.com/moby/sys/userns instead.
revive:
rules:
# FIXME make sure all packages have a description. Currently, there's many packages without.
- name: package-comments
disabled: true
spancheck:
# Default: ["end"]
checks:
- end # check that `span.End()` is called
- record-error # check that `span.RecordError(err)` is called when an error is returned
- set-status # check that `span.SetStatus(codes.Error, msg)` is called when an error is returned
issues:
# The default exclusion rules are a bit too permissive, so copying the relevant ones below
exclude-use-default: false
exclude-dirs:
- docs
exclude-rules:
# We prefer to use an "exclude-list" so that new "default" exclusions are not
# automatically inherited. We can decide whether or not to follow upstream
@@ -170,25 +82,50 @@ issues:
# (unlike the "include" option), the "exclude" option does not take exclusion
# ID's.
#
# These exclusion patterns are copied from the default excludes at:
# https://github.com/golangci/golangci-lint/blob/v1.61.0/pkg/config/issues.go#L11-L104
#
# The default list of exclusions can be found at:
# https://golangci-lint.run/usage/false-positives/#default-exclusions
# These exclusion patterns are copied from the default excluses at:
# https://github.com/golangci/golangci-lint/blob/v1.46.2/pkg/config/issues.go#L10-L104
# EXC0001
- text: "Error return value of .((os\\.)?std(out|err)\\..*|.*Close|.*Flush|os\\.Remove(All)?|.*print(f|ln)?|os\\.(Un)?Setenv). is not checked"
linters:
- errcheck
# EXC0006
- text: "Use of unsafe calls should be audited"
linters:
- gosec
# EXC0007
- text: "Subprocess launch(ed with variable|ing should be audited)"
linters:
- gosec
# EXC0008
# TODO: evaluate these and fix where needed: G307: Deferring unsafe method "*os.File" on type "Close" (gosec)
- text: "(G104|G307)"
linters:
- gosec
# EXC0009
- text: "(Expect directory permissions to be 0750 or less|Expect file permissions to be 0600 or less)"
linters:
- gosec
# EXC0010
- text: "Potential file inclusion via variable"
linters:
- gosec
# Looks like the match in "EXC0007" above doesn't catch this one
# TODO: consider upstreaming this to golangci-lint's default exclusion rules
- text: "G204: Subprocess launched with a potential tainted input or cmd arguments"
linters:
- gosec
# Looks like the match in "EXC0009" above doesn't catch this one
# TODO: consider upstreaming this to golangci-lint's default exclusion rules
- text: "G306: Expect WriteFile permissions to be 0600 or less"
linters:
- gosec
# Exclude some linters from running on tests files.
- path: _test\.go
linters:
- errcheck
- text: "G404: Use of weak random number generator"
path: _test\.go
linters:
- gosec
# Suppress golint complaining about generated types in api/types/
@@ -196,11 +133,10 @@ issues:
path: "api/types/(volume|container)/"
linters:
- revive
# FIXME: ignoring unused assigns to ctx for now; too many hits in libnetwork/xxx functions that setup traces
- text: "assigned to ctx, but never used afterwards"
# FIXME temporarily suppress these (see https://github.com/gotestyourself/gotest.tools/issues/272)
- text: "SA1019: (assert|cmp|is)\\.ErrorType is deprecated"
linters:
- wastedassign
- staticcheck
- text: "ineffectual assignment to ctx"
source: "ctx[, ].*=.*\\(ctx[,)]"
@@ -212,33 +148,6 @@ issues:
linters:
- staticcheck
# Ignore "nested context in function literal (fatcontext)" as we intentionally set up tracing on a base-context for tests.
# FIXME(thaJeztah): see if there's a more iodiomatic way to do this.
- text: 'nested context in function literal'
path: '((main|check)_(linux_|)test\.go)|testutil/helpers\.go'
linters:
- fatcontext
- text: '^shadow: declaration of "(ctx|err|ok)" shadows declaration'
linters:
- govet
- text: '^shadow: declaration of "(out)" shadows declaration'
path: _test\.go
linters:
- govet
- text: 'use of `regexp.MustCompile` forbidden'
path: _test\.go
linters:
- forbidigo
- text: 'use of `regexp.MustCompile` forbidden'
path: "internal/lazyregexp"
linters:
- forbidigo
- text: 'use of `regexp.MustCompile` forbidden'
path: "libnetwork/cmd/networkdb-test/dbclient"
linters:
- forbidigo
# Maximum issues count per one linter. Set to 0 to disable. Default is 50.
max-issues-per-linter: 0

View File

@@ -94,8 +94,6 @@ Arnaud Rebillout <arnaud.rebillout@collabora.com>
Arnaud Rebillout <arnaud.rebillout@collabora.com> <elboulangero@gmail.com>
Arthur Gautier <baloo@gandi.net> <superbaloo+registrations.github@superbaloo.net>
Artur Meyster <arthurfbi@yahoo.com>
Austin Vazquez <macedonv@amazon.com>
Austin Vazquez <macedonv@amazon.com> <55906459+austinvazquez@users.noreply.github.com>
Avi Miller <avi.miller@oracle.com> <avi.miller@gmail.com>
Ben Bonnefoy <frenchben@docker.com>
Ben Golub <ben.golub@dotcloud.com>
@@ -509,7 +507,6 @@ Moorthy RS <rsmoorthy@gmail.com> <rsmoorthy@users.noreply.github.com>
Moysés Borges <moysesb@gmail.com>
Moysés Borges <moysesb@gmail.com> <moyses.furtado@wplex.com.br>
mrfly <mr.wrfly@gmail.com> <wrfly@users.noreply.github.com>
Myeongjoon Kim <kimmj8409@gmail.com>
Nace Oroz <orkica@gmail.com>
Natasha Jarus <linuxmercedes@gmail.com>
Nathan LeClaire <nathan.leclaire@docker.com> <nathan.leclaire@gmail.com>
@@ -532,8 +529,6 @@ Ouyang Liduo <oyld0210@163.com>
Patrick St. laurent <patrick@saint-laurent.us>
Patrick Stapleton <github@gdi2290.com>
Paul Liljenberg <liljenberg.paul@gmail.com> <letters@paulnotcom.se>
Paweł Gronowski <pawel.gronowski@docker.com>
Paweł Gronowski <pawel.gronowski@docker.com> <me@woland.xyz>
Pavel Tikhomirov <ptikhomirov@virtuozzo.com> <ptikhomirov@parallels.com>
Pawel Konczalski <mail@konczalski.de>
Peter Choi <phkchoi89@gmail.com> <reikani@Peters-MacBook-Pro.local>
@@ -561,8 +556,6 @@ Robert Terhaar <rterhaar@atlanticdynamic.com> <robbyt@users.noreply.github.com>
Roberto G. Hashioka <roberto.hashioka@docker.com> <roberto_hashioka@hotmail.com>
Roberto Muñoz Fernández <robertomf@gmail.com> <roberto.munoz.fernandez.contractor@bbva.com>
Robin Thoni <robin@rthoni.com>
Rodrigo Campos <rodrigoca@microsoft.com>
Rodrigo Campos <rodrigoca@microsoft.com> <rodrigo@kinvolk.io>
Roman Dudin <katrmr@gmail.com> <decadent@users.noreply.github.com>
Rong Zhang <rongzhang@alauda.io>
Rongxiang Song <tinysong1226@gmail.com>

AUTHORS

@@ -2,9 +2,7 @@
# This file lists all contributors to the repository.
# See hack/generate-authors.sh to make modifications.
7sunarni <710720732@qq.com>
Aanand Prasad <aanand.prasad@gmail.com>
Aarni Koskela <akx@iki.fi>
Aaron Davidson <aaron@databricks.com>
Aaron Feng <aaron.feng@gmail.com>
Aaron Hnatiw <aaron@griddio.com>
@@ -13,7 +11,6 @@ Aaron L. Xu <liker.xu@foxmail.com>
Aaron Lehmann <alehmann@netflix.com>
Aaron Welch <welch@packet.net>
Aaron Yoshitake <airandfingers@gmail.com>
Abdur Rehman <abdur_rehman@mentor.com>
Abel Muiño <amuino@gmail.com>
Abhijeet Kasurde <akasurde@redhat.com>
Abhinandan Prativadi <aprativadi@gmail.com>
@@ -27,11 +24,9 @@ Adam Avilla <aavilla@yp.com>
Adam Dobrawy <naczelnik@jawnosc.tk>
Adam Eijdenberg <adam.eijdenberg@gmail.com>
Adam Kunk <adam.kunk@tiaa-cref.org>
Adam Lamers <adam.lamers@wmsdev.pl>
Adam Miller <admiller@redhat.com>
Adam Mills <adam@armills.info>
Adam Pointer <adam.pointer@skybettingandgaming.com>
Adam Simon <adamsimon85100@gmail.com>
Adam Singer <financeCoding@gmail.com>
Adam Thornton <adam.thornton@maryville.com>
Adam Walz <adam@adamwalz.net>
@@ -124,7 +119,6 @@ amangoel <amangoel@gmail.com>
Amen Belayneh <amenbelayneh@gmail.com>
Ameya Gawde <agawde@mirantis.com>
Amir Goldstein <amir73il@aquasec.com>
AmirBuddy <badinlu.amirhossein@gmail.com>
Amit Bakshi <ambakshi@gmail.com>
Amit Krishnan <amit.krishnan@oracle.com>
Amit Shukla <amit.shukla@docker.com>
@@ -174,7 +168,6 @@ Andrey Kolomentsev <andrey.kolomentsev@docker.com>
Andrey Petrov <andrey.petrov@shazow.net>
Andrey Stolbovsky <andrey.stolbovsky@gmail.com>
André Martins <aanm90@gmail.com>
Andrés Maldonado <maldonado@codelutin.com>
Andy Chambers <anchambers@paypal.com>
andy diller <dillera@gmail.com>
Andy Goldstein <agoldste@redhat.com>
@@ -226,7 +219,6 @@ Artur Meyster <arthurfbi@yahoo.com>
Arun Gupta <arun.gupta@gmail.com>
Asad Saeeduddin <masaeedu@gmail.com>
Asbjørn Enge <asbjorn@hanafjedle.net>
Ashly Mathew <ashly.mathew@sap.com>
Austin Vazquez <macedonv@amazon.com>
averagehuman <averagehuman@users.noreply.github.com>
Avi Das <andas222@gmail.com>
@@ -353,7 +345,6 @@ Chance Zibolski <chance.zibolski@gmail.com>
Chander Govindarajan <chandergovind@gmail.com>
Chanhun Jeong <keyolk@gmail.com>
Chao Wang <wangchao.fnst@cn.fujitsu.com>
Charity Kathure <ckathure@microsoft.com>
Charles Chan <charleswhchan@users.noreply.github.com>
Charles Hooper <charles.hooper@dotcloud.com>
Charles Law <claw@conduce.com>
@@ -489,7 +480,6 @@ Daniel Farrell <dfarrell@redhat.com>
Daniel Garcia <daniel@danielgarcia.info>
Daniel Gasienica <daniel@gasienica.ch>
Daniel Grunwell <mwgrunny@gmail.com>
Daniel Guns <danbguns@gmail.com>
Daniel Helfand <helfand.4@gmail.com>
Daniel Hiltgen <daniel.hiltgen@docker.com>
Daniel J Walsh <dwalsh@redhat.com>
@@ -773,7 +763,6 @@ Frank Macreery <frank@macreery.com>
Frank Rosquin <frank.rosquin+github@gmail.com>
Frank Villaro-Dixon <frank.villarodixon@merkle.com>
Frank Yang <yyb196@gmail.com>
François Scala <github@arcenik.net>
Fred Lifton <fred.lifton@docker.com>
Frederick F. Kautz IV <fkautz@redhat.com>
Frederico F. de Oliveira <FreddieOliveira@users.noreply.github.com>
@@ -809,7 +798,6 @@ GennadySpb <lipenkov@gmail.com>
Geoff Levand <geoff@infradead.org>
Geoffrey Bachelet <grosfrais@gmail.com>
Geon Kim <geon0250@gmail.com>
George Adams <georgeadams1995@gmail.com>
George Kontridze <george@bugsnag.com>
George Ma <mayangang@outlook.com>
George MacRorie <gmacr31@gmail.com>
@@ -838,7 +826,6 @@ Gopikannan Venugopalsamy <gopikannan.venugopalsamy@gmail.com>
Gosuke Miyashita <gosukenator@gmail.com>
Gou Rao <gou@portworx.com>
Govinda Fichtner <govinda.fichtner@googlemail.com>
Grace Choi <grace.54109@gmail.com>
Grant Millar <rid@cylo.io>
Grant Reaber <grant.reaber@gmail.com>
Graydon Hoare <graydon@pobox.com>
@@ -979,7 +966,6 @@ James Nugent <james@jen20.com>
James Sanders <james3sanders@gmail.com>
James Turnbull <james@lovedthanlost.net>
James Watkins-Harvey <jwatkins@progi-media.com>
Jameson Hyde <jameson.hyde@docker.com>
Jamie Hannaford <jamie@limetree.org>
Jamshid Afshar <jafshar@yahoo.com>
Jan Breig <git@pygos.space>
@@ -1078,16 +1064,13 @@ Jim Perrin <jperrin@centos.org>
Jimmy Cuadra <jimmy@jimmycuadra.com>
Jimmy Puckett <jimmy.puckett@spinen.com>
Jimmy Song <rootsongjc@gmail.com>
jinjiadu <jinjiadu@aliyun.com>
Jinsoo Park <cellpjs@gmail.com>
Jintao Zhang <zhangjintao9020@gmail.com>
Jiri Appl <jiria@microsoft.com>
Jiri Popelka <jpopelka@redhat.com>
Jiuyue Ma <majiuyue@huawei.com>
Jiří Župka <jzupka@redhat.com>
jjimbo137 <115816493+jjimbo137@users.noreply.github.com>
Joakim Roubert <joakim.roubert@axis.com>
Joan Grau <grautxo.dev@proton.me>
Joao Fernandes <joao.fernandes@docker.com>
Joao Trindade <trindade.joao@gmail.com>
Joe Beda <joe.github@bedafamily.com>
@@ -1172,7 +1155,6 @@ Josiah Kiehl <jkiehl@riotgames.com>
José Tomás Albornoz <jojo@eljojo.net>
Joyce Jang <mail@joycejang.com>
JP <jpellerin@leapfrogonline.com>
JSchltggr <jschltggr@gmail.com>
Julian Taylor <jtaylor.debian@googlemail.com>
Julien Barbier <write0@gmail.com>
Julien Bisconti <veggiemonk@users.noreply.github.com>
@@ -1307,7 +1289,6 @@ Laura Brehm <laurabrehm@hey.com>
Laura Frank <ljfrank@gmail.com>
Laurent Bernaille <laurent.bernaille@datadoghq.com>
Laurent Erignoux <lerignoux@gmail.com>
Laurent Goderre <laurent.goderre@docker.com>
Laurie Voss <github@seldo.com>
Leandro Motta Barros <lmb@stackedboxes.org>
Leandro Siqueira <leandro.siqueira@gmail.com>
@@ -1388,7 +1369,6 @@ Madhan Raj Mookkandy <MadhanRaj.Mookkandy@microsoft.com>
Madhav Puri <madhav.puri@gmail.com>
Madhu Venugopal <mavenugo@gmail.com>
Mageee <fangpuyi@foxmail.com>
maggie44 <64841595+maggie44@users.noreply.github.com>
Mahesh Tiyyagura <tmahesh@gmail.com>
malnick <malnick@gmail..com>
Malte Janduda <mail@janduda.net>
@@ -1599,7 +1579,6 @@ Muayyad Alsadi <alsadi@gmail.com>
Muhammad Zohaib Aslam <zohaibse011@gmail.com>
Mustafa Akın <mustafa91@gmail.com>
Muthukumar R <muthur@gmail.com>
Myeongjoon Kim <kimmj8409@gmail.com>
Máximo Cuadros <mcuadros@gmail.com>
Médi-Rémi Hashim <medimatrix@users.noreply.github.com>
Nace Oroz <orkica@gmail.com>
@@ -1614,7 +1593,6 @@ Natasha Jarus <linuxmercedes@gmail.com>
Nate Brennand <nate.brennand@clever.com>
Nate Eagleson <nate@nateeag.com>
Nate Jones <nate@endot.org>
Nathan Baulch <nathan.baulch@gmail.com>
Nathan Carlson <carl4403@umn.edu>
Nathan Herald <me@nathanherald.com>
Nathan Hsieh <hsieh.nathan@gmail.com>
@@ -1677,7 +1655,6 @@ Nuutti Kotivuori <naked@iki.fi>
nzwsch <hi@nzwsch.com>
O.S. Tezer <ostezer@gmail.com>
objectified <objectified@gmail.com>
Octol1ttle <l1ttleofficial@outlook.com>
Odin Ugedal <odin@ugedal.com>
Oguz Bilgic <fisyonet@gmail.com>
Oh Jinkyun <tintypemolly@gmail.com>
@@ -1786,7 +1763,6 @@ Pierre Carrier <pierre@meteor.com>
Pierre Dal-Pra <dalpra.pierre@gmail.com>
Pierre Wacrenier <pierre.wacrenier@gmail.com>
Pierre-Alain RIVIERE <pariviere@ippon.fr>
pinglanlu <pinglanlu@outlook.com>
Piotr Bogdan <ppbogdan@gmail.com>
Piotr Karbowski <piotr.karbowski@protonmail.ch>
Porjo <porjo38@yahoo.com.au>
@@ -1814,7 +1790,6 @@ Quentin Tayssier <qtayssier@gmail.com>
r0n22 <cameron.regan@gmail.com>
Rachit Sharma <rachitsharma613@gmail.com>
Radostin Stoyanov <rstoyanov1@gmail.com>
Rafael Fernández López <ereslibre@ereslibre.es>
Rafal Jeczalik <rjeczalik@gmail.com>
Rafe Colton <rafael.colton@gmail.com>
Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
@@ -1881,7 +1856,7 @@ Robin Speekenbrink <robin@kingsquare.nl>
Robin Thoni <robin@rthoni.com>
robpc <rpcann@gmail.com>
Rodolfo Carvalho <rhcarvalho@gmail.com>
Rodrigo Campos <rodrigoca@microsoft.com>
Rodrigo Campos <rodrigo@kinvolk.io>
Rodrigo Vaz <rodrigo.vaz@gmail.com>
Roel Van Nyen <roel.vannyen@gmail.com>
Roger Peppe <rogpeppe@gmail.com>
@@ -2020,7 +1995,6 @@ Sevki Hasirci <s@sevki.org>
Shane Canon <scanon@lbl.gov>
Shane da Silva <shane@dasilva.io>
Shaun Kaasten <shaunk@gmail.com>
Shaun Thompson <shaun.thompson@docker.com>
shaunol <shaunol@gmail.com>
Shawn Landden <shawn@churchofgit.com>
Shawn Siefkas <shawn.siefkas@meredith.com>
@@ -2039,7 +2013,6 @@ Shijun Qin <qinshijun16@mails.ucas.ac.cn>
Shishir Mahajan <shishir.mahajan@redhat.com>
Shoubhik Bose <sbose78@gmail.com>
Shourya Sarcar <shourya.sarcar@gmail.com>
Shreenidhi Shedi <shreenidhi.shedi@broadcom.com>
Shu-Wai Chow <shu-wai.chow@seattlechildrens.org>
shuai-z <zs.broccoli@gmail.com>
Shukui Yang <yangshukui@huawei.com>
@@ -2127,7 +2100,6 @@ Sébastien Stormacq <sebsto@users.noreply.github.com>
Sören Tempel <soeren+git@soeren-tempel.net>
Tabakhase <mail@tabakhase.com>
Tadej Janež <tadej.j@nez.si>
Tadeusz Dudkiewicz <tadeusz.dudkiewicz@rtbhouse.com>
Takuto Sato <tockn.jp@gmail.com>
tang0th <tang0th@gmx.com>
Tangi Colin <tangicolin@gmail.com>
@@ -2135,7 +2107,6 @@ Tatsuki Sugiura <sugi@nemui.org>
Tatsushi Inagaki <e29253@jp.ibm.com>
Taylan Isikdemir <taylani@google.com>
Taylor Jones <monitorjbl@gmail.com>
tcpdumppy <847462026@qq.com>
Ted M. Young <tedyoung@gmail.com>
Tehmasp Chaudhri <tehmasp@gmail.com>
Tejaswini Duggaraju <naduggar@microsoft.com>
@@ -2420,7 +2391,6 @@ You-Sheng Yang (楊有勝) <vicamo@gmail.com>
youcai <omegacoleman@gmail.com>
Youcef YEKHLEF <yyekhlef@gmail.com>
Youfu Zhang <zhangyoufu@gmail.com>
YR Chen <stevapple@icloud.com>
Yu Changchun <yuchangchun1@huawei.com>
Yu Chengxia <yuchengxia@huawei.com>
Yu Peng <yu.peng36@zte.com.cn>


@@ -1,32 +1,28 @@
# syntax=docker/dockerfile:1.7
ARG GO_VERSION=1.23.7
ARG GO_VERSION=1.21.13
ARG BASE_DEBIAN_DISTRO="bookworm"
ARG GOLANG_IMAGE="golang:${GO_VERSION}-${BASE_DEBIAN_DISTRO}"
ARG XX_VERSION=1.6.1
ARG XX_VERSION=1.5.0
ARG VPNKIT_VERSION=0.5.0
# DOCKERCLI_VERSION is the version of the CLI to install in the dev-container.
ARG DOCKERCLI_VERSION=v28.0.1
ARG DOCKERCLI_REPOSITORY="https://github.com/docker/cli.git"
ARG DOCKERCLI_VERSION=v27.0.2
# cli version used for integration-cli tests
ARG DOCKERCLI_INTEGRATION_REPOSITORY="https://github.com/docker/cli.git"
ARG DOCKERCLI_INTEGRATION_VERSION=v17.06.2-ce
# BUILDX_VERSION is the version of buildx to install in the dev container.
ARG BUILDX_VERSION=0.20.1
ARG COMPOSE_VERSION=v2.33.1
ARG BUILDX_VERSION=0.16.1
ARG COMPOSE_VERSION=v2.29.0
ARG SYSTEMD="false"
ARG FIREWALLD="false"
ARG DOCKER_STATIC=1
# REGISTRY_VERSION specifies the version of the registry to download from
# https://hub.docker.com/r/distribution/distribution. This version of
# the registry is used to test schema 2 manifests. Generally, the version
# specified here should match a current release.
ARG REGISTRY_VERSION=3.0.0-rc.1
ARG REGISTRY_VERSION=2.8.3
# delve is currently only supported on linux/amd64 and linux/arm64;
# https://github.com/go-delve/delve/blob/v1.8.1/pkg/proc/native/support_sentinel.go#L1-L6
@@ -141,9 +137,7 @@ RUN /download-frozen-image-v2.sh /build \
busybox:glibc@sha256:1f81263701cddf6402afe9f33fca0266d9fff379e59b1748f33d3072da71ee85 \
debian:bookworm-slim@sha256:2bc5c236e9b262645a323e9088dfa3bb1ecb16cc75811daf40a23a824d665be9 \
hello-world:latest@sha256:d58e752213a51785838f9eed2b7a498ffa1cb3aa7f946dda11af39286c3db9a9 \
arm32v7/hello-world:latest@sha256:50b8560ad574c779908da71f7ce370c0a2471c098d44d1c8f6b513c5a55eeeb1 \
hello-world:amd64@sha256:90659bf80b44ce6be8234e6ff90a1ac34acbeb826903b02cfa0da11c82cbc042 \
hello-world:arm64@sha256:963612c5503f3f1674f315c67089dee577d8cc6afc18565e0b4183ae355fb343
arm32v7/hello-world:latest@sha256:50b8560ad574c779908da71f7ce370c0a2471c098d44d1c8f6b513c5a55eeeb1
# delve
FROM base AS delve-src
@@ -153,7 +147,7 @@ RUN git init . && git remote add origin "https://github.com/go-delve/delve.git"
# from the https://github.com/go-delve/delve repository.
# It can be used to run Docker with a possibility of
# attaching debugger to it.
ARG DELVE_VERSION=v1.23.0
ARG DELVE_VERSION=v1.21.1
RUN git fetch -q --depth 1 origin "${DELVE_VERSION}" +refs/tags/*:refs/tags/* && git checkout -q FETCH_HEAD
FROM base AS delve-supported
@@ -170,6 +164,19 @@ EOT
FROM binary-dummy AS delve-unsupported
FROM delve-${DELVE_SUPPORTED} AS delve
FROM base AS tomll
# GOTOML_VERSION specifies the version of the tomll binary to build and install
# from the https://github.com/pelletier/go-toml repository. This binary is used
# in CI in the hack/validate/toml script.
#
# When updating this version, consider updating the github.com/pelletier/go-toml
# dependency in vendor.mod accordingly.
ARG GOTOML_VERSION=v1.8.1
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=cache,target=/go/pkg/mod \
GOBIN=/build/ GO111MODULE=on go install "github.com/pelletier/go-toml/cmd/tomll@${GOTOML_VERSION}" \
&& /build/tomll --help
FROM base AS gowinres
# GOWINRES_VERSION defines go-winres tool version
ARG GOWINRES_VERSION=v0.3.1
@@ -189,7 +196,7 @@ RUN git init . && git remote add origin "https://github.com/containerd/container
# When updating the binary version you may also need to update the vendor
# version to pick up bug fixes or new APIs, however, usually the Go packages
# are built from a commit from the master branch.
ARG CONTAINERD_VERSION=v1.7.27
ARG CONTAINERD_VERSION=v1.7.21
RUN git fetch -q --depth 1 origin "${CONTAINERD_VERSION}" +refs/tags/*:refs/tags/* && git checkout -q FETCH_HEAD
FROM base AS containerd-build
@@ -199,6 +206,8 @@ RUN --mount=type=cache,sharing=locked,id=moby-containerd-aptlib,target=/var/lib/
--mount=type=cache,sharing=locked,id=moby-containerd-aptcache,target=/var/cache/apt \
apt-get update && xx-apt-get install -y --no-install-recommends \
gcc \
libbtrfs-dev \
libsecret-1-dev \
pkg-config
ARG DOCKER_STATIC
RUN --mount=from=containerd-src,src=/usr/src/containerd,rw \
@@ -220,14 +229,14 @@ FROM binary-dummy AS containerd-windows
FROM containerd-${TARGETOS} AS containerd
FROM base AS golangci_lint
ARG GOLANGCI_LINT_VERSION=v1.64.5
ARG GOLANGCI_LINT_VERSION=v1.59.1
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=cache,target=/go/pkg/mod \
GOBIN=/build/ GO111MODULE=on go install "github.com/golangci/golangci-lint/cmd/golangci-lint@${GOLANGCI_LINT_VERSION}" \
&& /build/golangci-lint --version
FROM base AS gotestsum
ARG GOTESTSUM_VERSION=v1.12.0
ARG GOTESTSUM_VERSION=v1.8.2
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=cache,target=/go/pkg/mod \
GOBIN=/build/ GO111MODULE=on go install "gotest.tools/gotestsum@${GOTESTSUM_VERSION}" \
@@ -256,8 +265,7 @@ RUN --mount=source=hack/dockerfile/cli.sh,target=/download-or-build-cli.sh \
--mount=type=cache,target=/root/.cache/go-build,id=dockercli-build-$TARGETPLATFORM \
rm -f ./.git/*.lock \
&& /download-or-build-cli.sh ${DOCKERCLI_VERSION} ${DOCKERCLI_REPOSITORY} /build \
&& /build/docker --version \
&& /build/docker completion bash >/completion.bash
&& /build/docker --version
FROM base AS dockercli-integration
WORKDIR /go/src/github.com/docker/cli
@@ -279,7 +287,7 @@ RUN git init . && git remote add origin "https://github.com/opencontainers/runc.
# that is used. If you need to update runc, open a pull request in the containerd
# project first, and update both after that is merged. When updating RUNC_VERSION,
# consider updating runc in vendor.mod accordingly.
ARG RUNC_VERSION=v1.2.5
ARG RUNC_VERSION=v1.1.13
RUN git fetch -q --depth 1 origin "${RUNC_VERSION}" +refs/tags/*:refs/tags/* && git checkout -q FETCH_HEAD
FROM base AS runc-build
@@ -288,6 +296,7 @@ ARG TARGETPLATFORM
RUN --mount=type=cache,sharing=locked,id=moby-runc-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-runc-aptcache,target=/var/cache/apt \
apt-get update && xx-apt-get install -y --no-install-recommends \
dpkg-dev \
gcc \
libc6-dev \
libseccomp-dev \
@@ -347,7 +356,7 @@ FROM base AS rootlesskit-src
WORKDIR /usr/src/rootlesskit
RUN git init . && git remote add origin "https://github.com/rootless-containers/rootlesskit.git"
# When updating, also update vendor.mod and hack/dockerfile/install/rootlesskit.installer accordingly.
ARG ROOTLESSKIT_VERSION=v2.3.4
ARG ROOTLESSKIT_VERSION=v2.0.2
RUN git fetch -q --depth 1 origin "${ROOTLESSKIT_VERSION}" +refs/tags/*:refs/tags/* && git checkout -q FETCH_HEAD
FROM base AS rootlesskit-build
@@ -368,6 +377,8 @@ RUN --mount=from=rootlesskit-src,src=/usr/src/rootlesskit,rw \
export CGO_ENABLED=$([ "$DOCKER_STATIC" = "1" ] && echo "0" || echo "1")
xx-go build -o /build/rootlesskit -ldflags="$([ "$DOCKER_STATIC" != "1" ] && echo "-linkmode=external")" ./cmd/rootlesskit
xx-verify $([ "$DOCKER_STATIC" = "1" ] && echo "--static") /build/rootlesskit
xx-go build -o /build/rootlesskit-docker-proxy -ldflags="$([ "$DOCKER_STATIC" != "1" ] && echo "-linkmode=external")" ./cmd/rootlesskit-docker-proxy
xx-verify $([ "$DOCKER_STATIC" = "1" ] && echo "--static") /build/rootlesskit-docker-proxy
EOT
COPY --link ./contrib/dockerd-rootless.sh /build/
COPY --link ./contrib/dockerd-rootless-setuptool.sh /build/
@@ -389,6 +400,7 @@ RUN --mount=type=cache,sharing=locked,id=moby-crun-aptlib,target=/var/lib/apt \
libseccomp-dev \
libsystemd-dev \
libtool \
libudev-dev \
libyajl-dev \
python3 \
;
@@ -441,13 +453,14 @@ FROM binary-dummy AS containerutil-linux
FROM containerutil-build AS containerutil-windows-amd64
FROM containerutil-windows-${TARGETARCH} AS containerutil-windows
FROM containerutil-${TARGETOS} AS containerutil
FROM docker/buildx-bin:${BUILDX_VERSION} AS buildx
FROM docker/compose-bin:${COMPOSE_VERSION} AS compose
FROM docker/buildx-bin:${BUILDX_VERSION} as buildx
FROM docker/compose-bin:${COMPOSE_VERSION} as compose
FROM base AS dev-systemd-false
COPY --link --from=frozen-images /build/ /docker-frozen-images
COPY --link --from=swagger /build/ /usr/local/bin/
COPY --link --from=delve /build/ /usr/local/bin/
COPY --link --from=tomll /build/ /usr/local/bin/
COPY --link --from=gowinres /build/ /usr/local/bin/
COPY --link --from=tini /build/ /usr/local/bin/
COPY --link --from=registry /build/ /usr/local/bin/
@@ -491,24 +504,16 @@ RUN --mount=type=cache,sharing=locked,id=moby-dev-aptlib,target=/var/lib/apt \
systemd-sysv
ENTRYPOINT ["hack/dind-systemd"]
FROM dev-systemd-${SYSTEMD} AS dev-firewalld-false
FROM dev-systemd-true AS dev-firewalld-true
RUN --mount=type=cache,sharing=locked,id=moby-dev-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-dev-aptcache,target=/var/cache/apt \
apt-get update && apt-get install -y --no-install-recommends \
firewalld
RUN sed -i 's/FirewallBackend=nftables/FirewallBackend=iptables/' /etc/firewalld/firewalld.conf
FROM dev-firewalld-${FIREWALLD} AS dev-base
FROM dev-systemd-${SYSTEMD} AS dev-base
RUN groupadd -r docker
RUN useradd --create-home --gid docker unprivilegeduser \
&& mkdir -p /home/unprivilegeduser/.local/share/docker \
&& chown -R unprivilegeduser /home/unprivilegeduser
# Let us use a .bashrc file
RUN ln -sfv /go/src/github.com/docker/docker/.bashrc ~/.bashrc
# Activate bash completion
# Activate bash completion and include Docker's completion if mounted with DOCKER_BASH_COMPLETION_PATH
RUN echo "source /usr/share/bash-completion/bash_completion" >> /etc/bash.bashrc
RUN ln -s /usr/local/completion/bash/docker /etc/bash_completion.d/docker
RUN ldconfig
# Set dev environment as safe git directory to prevent "dubious ownership" errors
# when bind-mounting the source into the dev-container. See https://github.com/moby/moby/pull/44930
@@ -524,7 +529,6 @@ RUN --mount=type=cache,sharing=locked,id=moby-dev-aptlib,target=/var/lib/apt \
inetutils-ping \
iproute2 \
iptables \
nftables \
jq \
libcap2-bin \
libnet1 \
@@ -532,7 +536,6 @@ RUN --mount=type=cache,sharing=locked,id=moby-dev-aptlib,target=/var/lib/apt \
libprotobuf-c1 \
libyajl2 \
net-tools \
netcat-openbsd \
patch \
pigz \
sudo \
@@ -545,16 +548,24 @@ RUN --mount=type=cache,sharing=locked,id=moby-dev-aptlib,target=/var/lib/apt \
xz-utils \
zip \
zstd
# Switch to use iptables instead of nftables (to match the CI hosts)
# TODO use some kind of runtime auto-detection instead if/when nftables is supported (https://github.com/moby/moby/issues/26824)
RUN update-alternatives --set iptables /usr/sbin/iptables-legacy || true \
&& update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy || true \
&& update-alternatives --set arptables /usr/sbin/arptables-legacy || true
RUN --mount=type=cache,sharing=locked,id=moby-dev-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-dev-aptcache,target=/var/cache/apt \
apt-get update && apt-get install --no-install-recommends -y \
gcc \
pkg-config \
dpkg-dev \
libapparmor-dev \
libseccomp-dev \
libsecret-1-dev \
libsystemd-dev \
libudev-dev \
yamllint
COPY --link --from=dockercli /build/ /usr/local/cli
COPY --link --from=dockercli /completion.bash /etc/bash_completion.d/docker
COPY --link --from=dockercli-integration /build/ /usr/local/cli-integration
FROM base AS build
@@ -572,10 +583,14 @@ ARG TARGETPLATFORM
RUN --mount=type=cache,sharing=locked,id=moby-build-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-build-aptcache,target=/var/cache/apt \
xx-apt-get install --no-install-recommends -y \
dpkg-dev \
gcc \
libapparmor-dev \
libc6-dev \
libseccomp-dev \
libsecret-1-dev \
libsystemd-dev \
libudev-dev \
pkg-config
ARG DOCKER_BUILDTAGS
ARG DOCKER_DEBUG
@@ -598,13 +613,14 @@ RUN <<EOT
EOT
RUN --mount=type=bind,target=.,rw \
--mount=type=tmpfs,target=cli/winresources/dockerd \
--mount=type=tmpfs,target=cli/winresources/docker-proxy \
--mount=type=cache,target=/root/.cache/go-build,id=moby-build-$TARGETPLATFORM <<EOT
set -e
target=$([ "$DOCKER_STATIC" = "1" ] && echo "binary" || echo "dynbinary")
xx-go --wrap
PKG_CONFIG=$(xx-go env PKG_CONFIG) ./hack/make.sh $target
xx-verify $([ "$DOCKER_STATIC" = "1" ] && echo "--static") /tmp/bundles/${target}-daemon/dockerd$([ "$(xx-info os)" = "windows" ] && echo ".exe")
[ "$(xx-info os)" != "linux" ] || xx-verify $([ "$DOCKER_STATIC" = "1" ] && echo "--static") /tmp/bundles/${target}-daemon/docker-proxy
xx-verify $([ "$DOCKER_STATIC" = "1" ] && echo "--static") /tmp/bundles/${target}-daemon/docker-proxy$([ "$(xx-info os)" = "windows" ] && echo ".exe")
mkdir /build
mv /tmp/bundles/${target}-daemon/* /build/
EOT
@@ -632,7 +648,7 @@ COPY --link --from=build /build /
# smoke tests
# usage:
# > docker buildx bake binary-smoketest
FROM base AS smoketest
FROM --platform=$TARGETPLATFORM base AS smoketest
WORKDIR /usr/local/bin
COPY --from=build /build .
RUN <<EOT


@@ -5,7 +5,7 @@
# This represents the bare minimum required to build and test Docker.
ARG GO_VERSION=1.23.7
ARG GO_VERSION=1.21.13
ARG BASE_DEBIAN_DISTRO="bookworm"
ARG GOLANG_IMAGE="golang:${GO_VERSION}-${BASE_DEBIAN_DISTRO}"
@@ -22,6 +22,7 @@ RUN apt-get update && apt-get install -y --no-install-recommends \
curl \
cmake \
git \
libapparmor-dev \
libseccomp-dev \
ca-certificates \
e2fsprogs \


@@ -161,10 +161,10 @@ FROM ${WINDOWS_BASE_IMAGE}:${WINDOWS_BASE_IMAGE_TAG}
# Use PowerShell as the default shell
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';"]
ARG GO_VERSION=1.23.7
ARG GOTESTSUM_VERSION=v1.12.0
ARG GO_VERSION=1.21.13
ARG GOTESTSUM_VERSION=v1.8.2
ARG GOWINRES_VERSION=v0.3.1
ARG CONTAINERD_VERSION=v1.7.27
ARG CONTAINERD_VERSION=v1.7.21
# Environment variable notes:
# - GO_VERSION must be consistent with 'Dockerfile' used by Linux.

Jenkinsfile

@@ -0,0 +1,165 @@
#!groovy
pipeline {
agent none
options {
buildDiscarder(logRotator(daysToKeepStr: '30'))
timeout(time: 2, unit: 'HOURS')
timestamps()
}
parameters {
booleanParam(name: 'arm64', defaultValue: true, description: 'ARM (arm64) Build/Test')
booleanParam(name: 'dco', defaultValue: true, description: 'Run the DCO check')
}
environment {
DOCKER_BUILDKIT = '1'
DOCKER_EXPERIMENTAL = '1'
DOCKER_GRAPHDRIVER = 'overlay2'
CHECK_CONFIG_COMMIT = '33a3680e08d1007e72c3b3f1454f823d8e9948ee'
TESTDEBUG = '0'
TIMEOUT = '120m'
}
stages {
stage('pr-hack') {
when { changeRequest() }
steps {
script {
echo "Workaround for PR auto-cancel feature. Borrowed from https://issues.jenkins-ci.org/browse/JENKINS-43353"
def buildNumber = env.BUILD_NUMBER as int
if (buildNumber > 1) milestone(buildNumber - 1)
milestone(buildNumber)
}
}
}
stage('DCO-check') {
when {
beforeAgent true
expression { params.dco }
}
agent { label 'arm64 && ubuntu-2004' }
steps {
sh '''
docker run --rm \
-v "$WORKSPACE:/workspace" \
-e VALIDATE_REPO=${GIT_URL} \
-e VALIDATE_BRANCH=${CHANGE_TARGET} \
alpine sh -c 'apk add --no-cache -q bash git openssh-client && git config --system --add safe.directory /workspace && cd /workspace && hack/validate/dco'
'''
}
}
stage('Build') {
parallel {
stage('arm64') {
when {
beforeAgent true
expression { params.arm64 }
}
agent { label 'arm64 && ubuntu-2004' }
environment {
TEST_SKIP_INTEGRATION_CLI = '1'
}
stages {
stage("Print info") {
steps {
sh 'docker version'
sh 'docker info'
sh '''
echo "check-config.sh version: ${CHECK_CONFIG_COMMIT}"
curl -fsSL -o ${WORKSPACE}/check-config.sh "https://raw.githubusercontent.com/moby/moby/${CHECK_CONFIG_COMMIT}/contrib/check-config.sh" \
&& bash ${WORKSPACE}/check-config.sh || true
'''
}
}
stage("Build dev image") {
steps {
sh 'docker build --force-rm -t docker:${GIT_COMMIT} .'
}
}
stage("Unit tests") {
steps {
sh '''
sudo modprobe ip6table_filter
'''
sh '''
docker run --rm -t --privileged \
-v "$WORKSPACE/bundles:/go/src/github.com/docker/docker/bundles" \
--name docker-pr$BUILD_NUMBER \
-e DOCKER_EXPERIMENTAL \
-e DOCKER_GITCOMMIT=${GIT_COMMIT} \
-e DOCKER_GRAPHDRIVER \
-e VALIDATE_REPO=${GIT_URL} \
-e VALIDATE_BRANCH=${CHANGE_TARGET} \
docker:${GIT_COMMIT} \
hack/test/unit
'''
}
post {
always {
junit testResults: 'bundles/junit-report*.xml', allowEmptyResults: true
}
}
}
stage("Integration tests") {
environment { TEST_SKIP_INTEGRATION_CLI = '1' }
steps {
sh '''
docker run --rm -t --privileged \
-v "$WORKSPACE/bundles:/go/src/github.com/docker/docker/bundles" \
--name docker-pr$BUILD_NUMBER \
-e DOCKER_EXPERIMENTAL \
-e DOCKER_GITCOMMIT=${GIT_COMMIT} \
-e DOCKER_GRAPHDRIVER \
-e TESTDEBUG \
-e TEST_INTEGRATION_USE_SNAPSHOTTER \
-e TEST_SKIP_INTEGRATION_CLI \
-e TIMEOUT \
-e VALIDATE_REPO=${GIT_URL} \
-e VALIDATE_BRANCH=${CHANGE_TARGET} \
docker:${GIT_COMMIT} \
hack/make.sh \
dynbinary \
test-integration
'''
}
post {
always {
junit testResults: 'bundles/**/*-report.xml', allowEmptyResults: true
}
}
}
}
post {
always {
sh '''
echo "Ensuring container killed."
docker rm -vf docker-pr$BUILD_NUMBER || true
'''
sh '''
echo "Chowning /workspace to jenkins user"
docker run --rm -v "$WORKSPACE:/workspace" busybox chown -R "$(id -u):$(id -g)" /workspace
'''
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE', message: 'Failed to create bundles.tar.gz') {
sh '''
bundleName=arm64-integration
echo "Creating ${bundleName}-bundles.tar.gz"
# exclude overlay2 directories
find bundles -path '*/root/*overlay2' -prune -o -type f \\( -name '*-report.json' -o -name '*.log' -o -name '*.prof' -o -name '*-report.xml' \\) -print | xargs tar -czf ${bundleName}-bundles.tar.gz
'''
archiveArtifacts artifacts: '*-bundles.tar.gz', allowEmptyArchive: true
}
}
cleanup {
sh 'make clean'
deleteDir()
}
}
}
}
}
}
}


@@ -1,33 +1,585 @@
# Moby maintainers file
#
# See project/GOVERNANCE.md for committer versus reviewer roles
# This file describes the maintainer groups within the moby/moby project.
# More detail on Moby project governance is available in the
# project/GOVERNANCE.md file found in this repository.
#
# COMMITTERS
# GitHub ID, Name, Email address, GPG fingerprint
"akerouanton","Albin Kerouanton","albinker@gmail.com"
"AkihiroSuda","Akihiro Suda","akihiro.suda.cz@hco.ntt.co.jp"
"austinvazquez","Austin Vazquez","macedonv@amazon.com"
"cpuguy83","Brian Goff","cpuguy83@gmail.com"
"robmry","Rob Murray","rob.murray@docker.com"
"thaJeztah","Sebastiaan van Stijn","github@gone.nl"
"tianon","Tianon Gravi","admwiggin@gmail.com"
"tonistiigi","Tõnis Tiigi","tonis@docker.com"
"vvoland","Paweł Gronowski","pawel.gronowski@docker.com"
# It is structured to be consumable by both humans and programs.
# To extract its contents programmatically, use any TOML-compliant
# parser.
#
# REVIEWERS
# GitHub ID, Name, Email address, GPG fingerprint
"coolljt0725","Lei Jitang","leijitang@huawei.com"
"corhere","Cory Snider","csnider@mirantis.com"
"crazy-max","Kevin Alvarez","contact@crazymax.dev"
"dmcgowan","Derek McGowan","derek@mcgstyle.net"
"estesp","Phil Estes","estesp@linux.vnet.ibm.com"
"justincormack","Justin Cormack","justin.cormack@docker.com"
"kolyshkin","Kir Kolyshkin","kolyshkin@gmail.com"
"laurazard","Laura Brehm","laurabrehm@hey.com"
"neersighted","Bjorn Neergaard","bjorn@neersighted.com"
"rumpl","Djordje Lukic","djordje.lukic@docker.com"
"samuelkarp","Samuel Karp","me@samuelkarp.com"
"stevvooe","Stephen Day","stephen.day@docker.com"
"thompson-shaun","Shaun Thompson","shaun.thompson@docker.com"
"tiborvass","Tibor Vass","tibor@docker.com"
"unclejack","Cristian Staretu","cristian.staretu@gmail.com"
# TODO(estesp): This file should not necessarily depend on docker/opensource
# This file is compiled into the MAINTAINERS file in docker/opensource.
#
[Org]
[Org."Core maintainers"]
# The Core maintainers are the ghostbusters of the project: when there's a problem others
# can't solve, they show up and fix it with bizarre devices and weaponry.
# They have final say on technical implementation and coding style.
# They are ultimately responsible for quality in all its forms: usability polish,
# bugfixes, performance, stability, etc. When ownership can cleanly be passed to
# a subsystem, they are responsible for doing so and holding the
# subsystem maintainers accountable. If ownership is unclear, they are the de facto owners.
people = [
"akerouanton",
"akihirosuda",
"anusha",
"coolljt0725",
"corhere",
"cpuguy83",
"crazy-max",
"estesp",
"johnstep",
"justincormack",
"kolyshkin",
"laurazard",
"mhbauer",
"neersighted",
"robmry",
"rumpl",
"runcom",
"samuelkarp",
"stevvooe",
"thajeztah",
"tianon",
"tibor",
"tonistiigi",
"unclejack",
"vdemeester",
"vieux",
"vvoland",
"yongtang"
]
[Org.Curators]
# The curators help ensure that incoming issues and pull requests are properly triaged and
# that our various contribution and reviewing processes are respected. With their knowledge of
# the repository activity, they can also guide contributors to relevant material or
# discussions.
#
# They are neither code nor docs reviewers, so they are never expected to merge. They can
# however:
# - close an issue or pull request when it's an exact duplicate
# - close an issue or pull request when it's inappropriate or off-topic
people = [
"alexellis",
"andrewhsu",
"bsousaa",
"dmcgowan",
"fntlnz",
"gianarb",
"olljanat",
"programmerq",
"ripcurld",
"sam-thibault",
"samwhited",
"thajeztah"
]
[Org.Alumni]
# This list contains maintainers that are no longer active on the project.
# It is thanks to these people that the project has become what it is today.
# Thank you!
people = [
# Aaron Lehmann was a maintainer for swarmkit, the registry, and the engine,
# and contributed many improvements, features, and bugfixes in those areas,
# among which "automated service rollbacks", templated secrets and configs,
# and resumable image layer downloads.
"aaronlehmann",
# Harald Albers is the mastermind behind the bash completion scripts for the
# Docker CLI. The completion scripts moved to the Docker CLI repository, so
# you can now find him perform his magic in the https://github.com/docker/cli repository.
"albers",
# Andrea Luzzardi started contributing to the Docker codebase in the "dotCloud"
# era, even before it was called "Docker". He is one of the architects of both
# Swarm and SwarmKit, and its integration into the Docker engine.
"aluzzardi",
# David Calavera contributed many features to Docker, such as an improved
# event system, dynamic configuration reloading, volume plugins, fancy
# new templating options, and an external client credential store. As a
# maintainer, David was release captain for Docker 1.8, and competing
# with Jess Frazelle to be "top dream killer".
# David is now doing amazing stuff as CTO for https://www.netlify.com,
# and tweets as @calavera.
"calavera",
# Michael Crosby was "chief maintainer" of the Docker project.
# During his time as a maintainer, Michael contributed to many
# milestones of the project; he was release captain of Docker v1.0.0,
# started the development of "libcontainer" (what later became runc)
# and containerd, as well as demoing cool hacks such as live migrating
# a game server container with checkpoint/restore.
#
# Michael is currently a maintainer of containerd, but you may see
# him around in other projects on GitHub.
"crosbymichael",
# Before becoming a maintainer, Daniel Nephin was a core contributor
# to "Fig" (now known as Docker Compose). As a maintainer for both the
# Engine and Docker CLI, Daniel contributed many features, among which
# the `docker stack` commands, allowing users to deploy their Docker
# Compose projects as a Swarm service.
"dnephin",
# Doug Davis contributed many features and fixes for the classic builder,
# such as "wildcard" copy, the dockerignore file, custom paths/names
# for the Dockerfile, as well as enhancements to the API and documentation.
# Follow Doug on Twitter, where he tweets as @duginabox.
"duglin",
# As a maintainer, Erik was responsible for the "builder", and
# started the first designs for the new networking model in
# Docker. Erik is now working on all kinds of plugins for Docker
# (https://github.com/contiv) and various open source projects
# in his own repository https://github.com/erikh. You may
# still stumble into him in our issue tracker, or on IRC.
"erikh",
# Evan Hazlett is the creator of the Shipyard and Interlock open source projects,
# and the author of "Orca", which became the foundation of Docker Universal Control
# Plane (UCP). As a maintainer, Evan helped integrating SwarmKit (secrets, tasks)
# into the Docker engine.
"ehazlett",
# Arnaud Porterie (AKA "icecrime") was in charge of maintaining the maintainers.
# As a maintainer, he made life easier for contributors to the Docker open-source
# projects, bringing order in the chaos by designing a triage- and review workflow
# using labels (see https://icecrime.net/technology/a-structured-approach-to-labeling/),
# and automating the hell out of things with his buddies GordonTheTurtle and Poule
# (a chicken!).
#
# A lesser-known fact is that he created the first commit in the libnetwork repository
# even though he didn't know anything about it. Some say, he's now selling stuff on
# the internet ;-)
"icecrime",
# After a false start with his first PR being rejected, James Turnbull became a frequent
# contributor to the documentation, and became a docs maintainer on December 5, 2013. As
# a maintainer, James lifted the docs to a higher standard, and introduced the community
# guidelines ("three strikes"). James is currently changing the world as CTO of https://www.empatico.org,
# meanwhile authoring various books that are worth checking out. You can find him on Twitter,
# rambling as @kartar, and although no longer active as a maintainer, he's always "game" to
# help out reviewing docs PRs, so you may still see him around in the repository.
"jamtur01",
# Jessica Frazelle, also known as the "Keyser Söze of containers",
# runs *everything* in containers. She started contributing to
# Docker with a (fun fun) change involving both iptables and regular
# expressions (coz, YOLO!) on July 10, 2014
# https://github.com/docker/docker/pull/6950/commits/f3a68ffa390fb851115c77783fa4031f1d3b2995.
# Jess was Release Captain for Docker 1.4, 1.6 and 1.7, and contributed
# many features and improvement, among which "seccomp profiles" (making
# containers a lot more secure). Besides being a maintainer, she
# set up the CI infrastructure for the project, giving everyone
# something to shout at if a PR failed ("noooo Janky!").
# Be sure you don't miss her talks at a conference near you (a must-see),
# read her blog at https://blog.jessfraz.com (a must-read), and
# check out her open source projects on GitHub https://github.com/jessfraz (a must-try).
"jessfraz",
# As a maintainer, John Howard managed to make the impossible possible;
# to run Docker on Windows. After facing many challenges, teaching
# fellow-maintainers that 'Windows is not Linux', and many changes in
# Windows Server to facilitate containers, native Windows containers
# saw the light of day in 2015.
#
# John is now enjoying life without containers: playing piano, painting,
# and walking his dogs, but you may occasionally see him drop by on GitHub.
"lowenna",
# Alexander Morozov contributed many features to Docker, worked on the premise of
# what later became containerd (and worked on that too), and made a "stupid" Go
# vendor tool specifically for docker/docker needs: vndr (https://github.com/LK4D4/vndr).
# Not many know that Alexander is a master negotiator, being able to change course
# of action with a single "Nope, we're not gonna do that".
"lk4d4",
# Madhu Venugopal was part of the SocketPlane team that joined Docker.
# As a maintainer, he was working with Jana for the Container Network
# Model (CNM) implemented through libnetwork, and the "routing mesh" powering
# Swarm mode networking.
"mavenugo",
# As a maintainer, Kenfe-Mickaël Laventure worked on the container runtime,
# integrating containerd 1.0 with the daemon, and adding support for custom
# OCI runtimes, as well as implementing the `docker prune` subcommands,
# which was a welcome feature to be added. You can keep up with Mickaél on
# Twitter (@kmlaventure).
"mlaventure",
# As a docs maintainer, Mary Anthony contributed greatly to the Docker
# docs. She wrote the Docker Contributor Guide and Getting Started
# Guides. She helped create a doc build system independent of
# docker/docker project, and implemented a new docs.docker.com theme and
# nav for 2015 Dockercon. Fun fact: the most inherited layer in DockerHub
# public repositories was originally referenced in
# maryatdocker/docker-whale back in May 2015.
"moxiegirl",
# Jana Radhakrishnan was part of the SocketPlane team that joined Docker.
# As a maintainer, he was the lead architect for the Container Network
# Model (CNM) implemented through libnetwork, and the "routing mesh" powering
# Swarm mode networking.
#
# Jana started new adventures in networking, but you can find him tweeting as @mrjana,
# coding on GitHub https://github.com/mrjana, and he may be hiding on the Docker Community
# slack channel :-)
"mrjana",
# Sven Dowideit became a well known person in the Docker ecosphere, building
# boot2docker, and became a regular contributor to the project, starting as
# early as October 2013 (https://github.com/docker/docker/pull/2119), to become
# a maintainer less than two months later (https://github.com/docker/docker/pull/3061).
#
# As a maintainer, Sven took on the task to convert the documentation from
# ReStructuredText to Markdown, migrate to Hugo for generating the docs, and
# writing tooling for building, testing, and publishing them.
#
# If you're not in the occasion to visit "the Australian office", you
# can keep up with Sven on Twitter (@SvenDowideit), his blog http://fosiki.com,
# and of course on GitHub.
"sven",
# Vincent "vbatts!" Batts made his first contribution to the project
# in November 2013, to become a maintainer a few months later, on
# May 10, 2014 (https://github.com/docker/docker/commit/d6e666a87a01a5634c250358a94c814bf26cb778).
# As a maintainer, Vincent made important contributions to core elements
# of Docker, such as "distribution" (tarsum) and graphdrivers (btrfs, devicemapper).
# He also contributed the "tar-split" library, an important element
# for the content-addressable store.
# Vincent is currently a member of the Open Containers Initiative
# Technical Oversight Board (TOB), besides his work at Red Hat and
# Project Atomic. You can still find him regularly hanging out in
# our repository and the #docker-dev and #docker-maintainers IRC channels
# for a chat, as he's always a lot of fun.
"vbatts",
# Vishnu became a maintainer to help out on the daemon codebase and
# libcontainer integration. He's currently involved in the
# Open Containers Initiative, working on the specifications,
# besides his work on cAdvisor and Kubernetes for Google.
"vishh"
]
[people]
# A reference list of all people associated with the project.
# All other sections should refer to people by their canonical key
# in the people section.
# ADD YOURSELF HERE IN ALPHABETICAL ORDER
[people.aaronlehmann]
Name = "Aaron Lehmann"
Email = "aaron.lehmann@docker.com"
GitHub = "aaronlehmann"
[people.akerouanton]
Name = "Albin Kerouanton"
Email = "albinker@gmail.com"
GitHub = "akerouanton"
[people.alexellis]
Name = "Alex Ellis"
Email = "alexellis2@gmail.com"
GitHub = "alexellis"
[people.akihirosuda]
Name = "Akihiro Suda"
Email = "akihiro.suda.cz@hco.ntt.co.jp"
GitHub = "AkihiroSuda"
[people.aluzzardi]
Name = "Andrea Luzzardi"
Email = "al@docker.com"
GitHub = "aluzzardi"
[people.albers]
Name = "Harald Albers"
Email = "github@albersweb.de"
GitHub = "albers"
[people.andrewhsu]
Name = "Andrew Hsu"
Email = "andrewhsu@docker.com"
GitHub = "andrewhsu"
[people.anusha]
Name = "Anusha Ragunathan"
Email = "anusha@docker.com"
GitHub = "anusha-ragunathan"
[people.bsousaa]
Name = "Bruno de Sousa"
Email = "bruno.sousa@docker.com"
GitHub = "bsousaa"
[people.calavera]
Name = "David Calavera"
Email = "david.calavera@gmail.com"
GitHub = "calavera"
[people.coolljt0725]
Name = "Lei Jitang"
Email = "leijitang@huawei.com"
GitHub = "coolljt0725"
[people.corhere]
Name = "Cory Snider"
Email = "csnider@mirantis.com"
GitHub = "corhere"
[people.cpuguy83]
Name = "Brian Goff"
Email = "cpuguy83@gmail.com"
GitHub = "cpuguy83"
[people.crazy-max]
Name = "Kevin Alvarez"
Email = "contact@crazymax.dev"
GitHub = "crazy-max"
[people.crosbymichael]
Name = "Michael Crosby"
Email = "crosbymichael@gmail.com"
GitHub = "crosbymichael"
[people.dnephin]
Name = "Daniel Nephin"
Email = "dnephin@gmail.com"
GitHub = "dnephin"
[people.dmcgowan]
Name = "Derek McGowan"
Email = "derek@mcgstyle.net"
GitHub = "dmcgowan"
[people.duglin]
Name = "Doug Davis"
Email = "dug@us.ibm.com"
GitHub = "duglin"
[people.ehazlett]
Name = "Evan Hazlett"
Email = "ejhazlett@gmail.com"
GitHub = "ehazlett"
[people.erikh]
Name = "Erik Hollensbe"
Email = "erik@docker.com"
GitHub = "erikh"
[people.estesp]
Name = "Phil Estes"
Email = "estesp@linux.vnet.ibm.com"
GitHub = "estesp"
[people.fntlnz]
Name = "Lorenzo Fontana"
Email = "fontanalorenz@gmail.com"
GitHub = "fntlnz"
[people.gianarb]
Name = "Gianluca Arbezzano"
Email = "ga@thumpflow.com"
GitHub = "gianarb"
[people.icecrime]
Name = "Arnaud Porterie"
Email = "icecrime@gmail.com"
GitHub = "icecrime"
[people.jamtur01]
Name = "James Turnbull"
Email = "james@lovedthanlost.net"
GitHub = "jamtur01"
[people.jessfraz]
Name = "Jessie Frazelle"
Email = "jess@linux.com"
GitHub = "jessfraz"
[people.johnstep]
Name = "John Stephens"
Email = "johnstep@docker.com"
GitHub = "johnstep"
[people.justincormack]
Name = "Justin Cormack"
Email = "justin.cormack@docker.com"
GitHub = "justincormack"
[people.kolyshkin]
Name = "Kir Kolyshkin"
Email = "kolyshkin@gmail.com"
GitHub = "kolyshkin"
[people.laurazard]
Name = "Laura Brehm"
Email = "laura.brehm@docker.com"
GitHub = "laurazard"
[people.lk4d4]
Name = "Alexander Morozov"
Email = "lk4d4@docker.com"
GitHub = "lk4d4"
[people.lowenna]
Name = "John Howard"
Email = "github@lowenna.com"
GitHub = "lowenna"
[people.mavenugo]
Name = "Madhu Venugopal"
Email = "madhu@docker.com"
GitHub = "mavenugo"
[people.mhbauer]
Name = "Morgan Bauer"
Email = "mbauer@us.ibm.com"
GitHub = "mhbauer"
[people.mlaventure]
Name = "Kenfe-Mickaël Laventure"
Email = "mickael.laventure@gmail.com"
GitHub = "mlaventure"
[people.moxiegirl]
Name = "Mary Anthony"
Email = "mary.anthony@docker.com"
GitHub = "moxiegirl"
[people.mrjana]
Name = "Jana Radhakrishnan"
Email = "mrjana@docker.com"
GitHub = "mrjana"
[people.neersighted]
Name = "Bjorn Neergaard"
Email = "bjorn@neersighted.com"
GitHub = "neersighted"
[people.olljanat]
Name = "Olli Janatuinen"
Email = "olli.janatuinen@gmail.com"
GitHub = "olljanat"
[people.programmerq]
Name = "Jeff Anderson"
Email = "jeff@docker.com"
GitHub = "programmerq"
[people.robmry]
Name = "Rob Murray"
Email = "rob.murray@docker.com"
GitHub = "robmry"
[people.ripcurld]
Name = "Boaz Shuster"
Email = "ripcurld.github@gmail.com"
GitHub = "ripcurld"
[people.rumpl]
Name = "Djordje Lukic"
Email = "djordje.lukic@docker.com"
GitHub = "rumpl"
[people.runcom]
Name = "Antonio Murdaca"
Email = "runcom@redhat.com"
GitHub = "runcom"
[people.sam-thibault]
Name = "Sam Thibault"
Email = "sam.thibault@docker.com"
GitHub = "sam-thibault"
[people.samuelkarp]
Name = "Samuel Karp"
Email = "me@samuelkarp.com"
GitHub = "samuelkarp"
[people.samwhited]
Name = "Sam Whited"
Email = "sam@samwhited.com"
GitHub = "samwhited"
[people.shykes]
Name = "Solomon Hykes"
Email = "solomon@docker.com"
GitHub = "shykes"
[people.stevvooe]
Name = "Stephen Day"
Email = "stephen.day@docker.com"
GitHub = "stevvooe"
[people.sven]
Name = "Sven Dowideit"
Email = "SvenDowideit@home.org.au"
GitHub = "SvenDowideit"
[people.thajeztah]
Name = "Sebastiaan van Stijn"
Email = "github@gone.nl"
GitHub = "thaJeztah"
[people.tianon]
Name = "Tianon Gravi"
Email = "admwiggin@gmail.com"
GitHub = "tianon"
[people.tibor]
Name = "Tibor Vass"
Email = "tibor@docker.com"
GitHub = "tiborvass"
[people.tonistiigi]
Name = "Tõnis Tiigi"
Email = "tonis@docker.com"
GitHub = "tonistiigi"
[people.unclejack]
Name = "Cristian Staretu"
Email = "cristian.staretu@gmail.com"
GitHub = "unclejack"
[people.vbatts]
Name = "Vincent Batts"
Email = "vbatts@redhat.com"
GitHub = "vbatts"
[people.vdemeester]
Name = "Vincent Demeester"
Email = "vincent@sbr.pm"
GitHub = "vdemeester"
[people.vieux]
Name = "Victor Vieux"
Email = "vieux@docker.com"
GitHub = "vieux"
[people.vishh]
Name = "Vishnu Kannan"
Email = "vishnuk@google.com"
GitHub = "vishh"
[people.vvoland]
Name = "Paweł Gronowski"
Email = "pawel.gronowski@docker.com"
GitHub = "vvoland"
[people.yongtang]
Name = "Yong Tang"
Email = "yong.tang.github@outlook.com"
GitHub = "yongtang"


@@ -1,6 +1,10 @@
DOCKER ?= docker
BUILDX ?= $(DOCKER) buildx
# set the graph driver as the current graphdriver if not set
DOCKER_GRAPHDRIVER := $(if $(DOCKER_GRAPHDRIVER),$(DOCKER_GRAPHDRIVER),$(shell docker info -f '{{ .Driver }}' 2>&1))
export DOCKER_GRAPHDRIVER
DOCKER_GITCOMMIT := $(shell git rev-parse HEAD)
export DOCKER_GITCOMMIT
@@ -31,6 +35,7 @@ DOCKER_ENVS := \
-e DOCKER_BUILD_OPTS \
-e DOCKER_BUILD_PKGS \
-e DOCKER_BUILDKIT \
-e DOCKER_BASH_COMPLETION_PATH \
-e DOCKER_CLI_PATH \
-e DOCKERCLI_VERSION \
-e DOCKERCLI_REPOSITORY \
@@ -38,7 +43,6 @@ DOCKER_ENVS := \
-e DOCKERCLI_INTEGRATION_REPOSITORY \
-e DOCKER_DEBUG \
-e DOCKER_EXPERIMENTAL \
-e DOCKER_FIREWALLD \
-e DOCKER_GITCOMMIT \
-e DOCKER_GRAPHDRIVER \
-e DOCKER_LDFLAGS \
@@ -86,7 +90,7 @@ DOCKER_ENVS := \
# note: BINDDIR is supported for backwards-compatibility here
BIND_DIR := $(if $(BINDDIR),$(BINDDIR),$(if $(DOCKER_HOST),,bundles))
# DOCKER_MOUNT can be overridden, but use at your own risk!
# DOCKER_MOUNT can be overriden, but use at your own risk!
ifndef DOCKER_MOUNT
DOCKER_MOUNT := $(if $(BIND_DIR),-v "$(CURDIR)/$(BIND_DIR):/go/src/github.com/docker/docker/$(BIND_DIR)")
DOCKER_MOUNT := $(if $(DOCKER_BINDDIR_MOUNT_OPTS),$(DOCKER_MOUNT):$(DOCKER_BINDDIR_MOUNT_OPTS),$(DOCKER_MOUNT))
@@ -98,14 +102,8 @@ DOCKER_MOUNT := $(if $(DOCKER_MOUNT),$(DOCKER_MOUNT),-v /go/src/github.com/docke
DOCKER_MOUNT_CACHE := -v docker-dev-cache:/root/.cache -v docker-mod-cache:/go/pkg/mod/
DOCKER_MOUNT_CLI := $(if $(DOCKER_CLI_PATH),-v $(shell dirname $(DOCKER_CLI_PATH)):/usr/local/cli,)
ifdef BIND_GIT
# Gets the common .git directory (even from inside a git worktree)
GITDIR := $(shell realpath $(shell git rev-parse --git-common-dir))
MOUNT_GITDIR := $(if $(GITDIR),-v "$(GITDIR):$(GITDIR)")
endif
DOCKER_MOUNT := $(DOCKER_MOUNT) $(DOCKER_MOUNT_CACHE) $(DOCKER_MOUNT_CLI) $(DOCKER_MOUNT_BASH_COMPLETION) $(MOUNT_GITDIR)
DOCKER_MOUNT_BASH_COMPLETION := $(if $(DOCKER_BASH_COMPLETION_PATH),-v $(shell dirname $(DOCKER_BASH_COMPLETION_PATH)):/usr/local/completion/bash,)
DOCKER_MOUNT := $(DOCKER_MOUNT) $(DOCKER_MOUNT_CACHE) $(DOCKER_MOUNT_CLI) $(DOCKER_MOUNT_BASH_COMPLETION)
endif # ifndef DOCKER_MOUNT
# This allows to set the docker-dev container name
@@ -150,9 +148,6 @@ DOCKER_BUILD_ARGS += --build-arg=DOCKERCLI_INTEGRATION_REPOSITORY
ifdef DOCKER_SYSTEMD
DOCKER_BUILD_ARGS += --build-arg=SYSTEMD=true
endif
ifdef DOCKER_FIREWALLD
DOCKER_BUILD_ARGS += --build-arg=FIREWALLD=true
endif
BUILD_OPTS := ${DOCKER_BUILD_ARGS} ${DOCKER_BUILD_OPTS}
BUILD_CMD := $(BUILDX) build


@@ -1,11 +1,6 @@
The Moby Project
================
[![PkgGoDev](https://pkg.go.dev/badge/github.com/docker/docker)](https://pkg.go.dev/github.com/docker/docker)
[![Go Report Card](https://goreportcard.com/badge/github.com/docker/docker)](https://goreportcard.com/report/github.com/docker/docker)
[![OpenSSF Scorecard](https://api.scorecard.dev/projects/github.com/moby/moby/badge)](https://scorecard.dev/viewer/?uri=github.com/moby/moby)
![Moby Project logo](docs/static_files/moby-project-logo.png "The Moby Project")
Moby is an open-source project created by Docker to enable and accelerate software containerization.


@@ -1,42 +1,9 @@
# Security Policy
# Reporting security issues
The maintainers of the Moby project take security seriously. If you discover
a security issue, please bring it to their attention right away!
The Moby maintainers take security seriously. If you discover a security issue, please bring it to their attention right away!
## Reporting a Vulnerability
### Reporting a Vulnerability
Please **DO NOT** file a public issue, instead send your report privately
to [security@docker.com](mailto:security@docker.com).
Please **DO NOT** file a public issue, instead send your report privately to security@docker.com.
Reporter(s) can expect a response within 72 hours, acknowledging the issue was
received.
## Review Process
After receiving the report, an initial triage and technical analysis is
performed to confirm the report and determine its scope. We may request
additional information in this stage of the process.
Once a reviewer has confirmed the relevance of the report, a draft security
advisory will be created on GitHub. The draft advisory will be used to discuss
the issue with maintainers, the reporter(s), and where applicable, other
affected parties under embargo.
If the vulnerability is accepted, a timeline for developing a patch, public
disclosure, and patch release will be determined. If there is an embargo period
on public disclosure before the patch release, the reporter(s) are expected to
participate in the discussion of the timeline and abide by agreed upon dates
for public disclosure.
## Accreditation
Security reports are greatly appreciated and we will publicly thank you,
although we will keep your name confidential if you request it. We also like to
send gifts - if you're into swag, make sure to let us know. We do not currently
offer a paid security bounty program at this time.
## Supported Versions
This project uses long-lived branches to maintain releases. Refer to
[BRANCHES-AND-TAGS.md](project/BRANCHES-AND-TAGS.md) in the default branch to
learn about the current maintenance status of each branch.
Security reports are greatly appreciated and we will publicly thank you for it, although we keep your name confidential if you request it. We also like to send gifts—if you're into schwag, make sure to let us know. We currently do not offer a paid security bounty program, but are not ruling it out in the future.


@@ -3,7 +3,7 @@ package api // import "github.com/docker/docker/api"
// Common constants for daemon and client.
const (
// DefaultVersion of the current REST API.
DefaultVersion = "1.48"
DefaultVersion = "1.47"
// MinSupportedAPIVersion is the minimum API version that can be supported
// by the API server, specified as "major.minor". Note that the daemon


@@ -29,29 +29,6 @@ func BoolValueOrDefault(r *http.Request, k string, d bool) bool {
return BoolValue(r, k)
}
// Uint32Value parses a form value into an uint32 type. It returns an error
// if the field is not set, empty, incorrectly formatted, or out of range.
func Uint32Value(r *http.Request, field string) (uint32, error) {
// strconv.ParseUint returns an "strconv.ErrSyntax" for negative values,
// not an "out of range". Strip the prefix before parsing, and use it
// later to detect valid, but negative values.
v, isNeg := strings.CutPrefix(r.Form.Get(field), "-")
if v == "" || v[0] == '+' {
// Fast-path for invalid values.
return 0, strconv.ErrSyntax
}
i, err := strconv.ParseUint(v, 10, 32)
if err != nil {
// Unwrap to remove the 'strconv.ParseUint: parsing "some-invalid-value":' prefix.
return 0, errors.Unwrap(err)
}
if isNeg {
return 0, strconv.ErrRange
}
return uint32(i), nil
}
// Int64ValueOrZero parses a form value into an int64 type.
// It returns 0 if the parsing fails.
func Int64ValueOrZero(r *http.Request, k string) int64 {


@@ -1,16 +1,16 @@
package httputils // import "github.com/docker/docker/api/server/httputils"
import (
"math"
"encoding/json"
"net/http"
"net/url"
"strconv"
"testing"
"github.com/containerd/platforms"
"github.com/docker/docker/errdefs"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"gotest.tools/v3/assert"
is "gotest.tools/v3/assert/cmp"
)
func TestBoolValue(t *testing.T) {
@@ -111,125 +111,22 @@ func TestInt64ValueOrDefaultWithError(t *testing.T) {
}
}
func TestUint32Value(t *testing.T) {
const valueNotSet = "unset"
func TestParsePlatformInvalid(t *testing.T) {
for _, tc := range []ocispec.Platform{
{
OSVersion: "1.2.3",
OSFeatures: []string{"a", "b"},
},
{OSVersion: "12.0"},
{OS: "linux"},
{Architecture: "amd64"},
} {
t.Run(platforms.Format(tc), func(t *testing.T) {
js, err := json.Marshal(tc)
assert.NilError(t, err)
tests := []struct {
value string
expected uint32
expectedErr error
}{
{
value: "0",
expected: 0,
},
{
value: strconv.FormatUint(math.MaxUint32, 10),
expected: math.MaxUint32,
},
{
value: valueNotSet,
expectedErr: strconv.ErrSyntax,
},
{
value: "",
expectedErr: strconv.ErrSyntax,
},
{
value: "-1",
expectedErr: strconv.ErrRange,
},
{
value: "4294967296", // MaxUint32+1
expectedErr: strconv.ErrRange,
},
{
value: "not-a-number",
expectedErr: strconv.ErrSyntax,
},
}
for _, tc := range tests {
t.Run(tc.value, func(t *testing.T) {
r, _ := http.NewRequest(http.MethodPost, "", nil)
r.Form = url.Values{}
if tc.value != valueNotSet {
r.Form.Set("field", tc.value)
}
out, err := Uint32Value(r, "field")
assert.Check(t, is.Equal(tc.expected, out))
assert.Check(t, is.ErrorIs(err, tc.expectedErr))
})
}
}
func TestDecodePlatform(t *testing.T) {
tests := []struct {
doc string
platformJSON string
expected *ocispec.Platform
expectedErr string
}{
{
doc: "empty platform",
expectedErr: `failed to parse platform: unexpected end of JSON input`,
},
{
doc: "not JSON",
platformJSON: `linux/ams64`,
expectedErr: `failed to parse platform: invalid character 'l' looking for beginning of value`,
},
{
doc: "malformed JSON",
platformJSON: `{"architecture"`,
expectedErr: `failed to parse platform: unexpected end of JSON input`,
},
{
doc: "missing os",
platformJSON: `{"architecture":"amd64","os":""}`,
expectedErr: `both OS and Architecture must be provided`,
},
{
doc: "variant without architecture",
platformJSON: `{"architecture":"","os":"","variant":"v7"}`,
expectedErr: `optional platform fields provided, but OS and Architecture are missing`,
},
{
doc: "missing architecture",
platformJSON: `{"architecture":"","os":"linux"}`,
expectedErr: `both OS and Architecture must be provided`,
},
{
doc: "os.version without os and architecture",
platformJSON: `{"architecture":"","os":"","os.version":"12.0"}`,
expectedErr: `optional platform fields provided, but OS and Architecture are missing`,
},
{
doc: "os.features without os and architecture",
platformJSON: `{"architecture":"","os":"","os.features":["a","b"]}`,
expectedErr: `optional platform fields provided, but OS and Architecture are missing`,
},
{
doc: "valid platform",
platformJSON: `{"architecture":"arm64","os":"linux","os.version":"12.0", "os.features":["a","b"], "variant": "v7"}`,
expected: &ocispec.Platform{
Architecture: "arm64",
OS: "linux",
OSVersion: "12.0",
OSFeatures: []string{"a", "b"},
Variant: "v7",
},
},
}
for _, tc := range tests {
t.Run(tc.doc, func(t *testing.T) {
p, err := DecodePlatform(tc.platformJSON)
assert.Check(t, is.DeepEqual(p, tc.expected))
if tc.expectedErr != "" {
assert.Check(t, errdefs.IsInvalidParameter(err))
assert.Check(t, is.Error(err, tc.expectedErr))
} else {
assert.Check(t, err)
}
_, err = DecodePlatform(string(js))
assert.Check(t, errdefs.IsInvalidParameter(err))
})
}
}
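These tests exercise `DecodePlatform`, which parses the JSON-encoded `platform` parameter accepted by some API endpoints and rejects values missing OS or Architecture as invalid-parameter errors. A rough sketch of calling it directly, assuming the exported function in the `httputils` package shown above; the JSON values are illustrative:

```go
package main

import (
	"fmt"

	"github.com/docker/docker/api/server/httputils"
)

func main() {
	// A well-formed platform: OS and Architecture are mandatory, the rest optional.
	p, err := httputils.DecodePlatform(`{"os":"linux","architecture":"arm64","variant":"v8"}`)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s/%s/%s\n", p.OS, p.Architecture, p.Variant) // linux/arm64/v8

	// Missing Architecture is rejected, per the table-driven test above.
	_, err = httputils.DecodePlatform(`{"os":"linux","architecture":""}`)
	fmt.Println(err) // both OS and Architecture must be provided
}
```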


@@ -0,0 +1,42 @@
package middleware // import "github.com/docker/docker/api/server/middleware"
import (
"context"
"net/http"
"github.com/containerd/log"
"github.com/docker/docker/api/types/registry"
)
// CORSMiddleware injects CORS headers to each request
// when it's configured.
//
// Deprecated: CORS headers should not be set on the API. This feature will be removed in the next release.
type CORSMiddleware struct {
defaultHeaders string
}
// NewCORSMiddleware creates a new CORSMiddleware with default headers.
//
// Deprecated: CORS headers should not be set on the API. This feature will be removed in the next release.
func NewCORSMiddleware(d string) CORSMiddleware {
return CORSMiddleware{defaultHeaders: d}
}
// WrapHandler returns a new handler function wrapping the previous one in the request chain.
func (c CORSMiddleware) WrapHandler(handler func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error) func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
return func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
// If "api-cors-header" is not given, but "api-enable-cors" is true, we set cors to "*"
// otherwise, all header values will be passed to the HTTP handler
corsHeaders := c.defaultHeaders
if corsHeaders == "" {
corsHeaders = "*"
}
log.G(ctx).Debugf("CORS header is enabled and set to: %s", corsHeaders)
w.Header().Add("Access-Control-Allow-Origin", corsHeaders)
w.Header().Add("Access-Control-Allow-Headers", "Origin, X-Requested-With, Content-Type, Accept, "+registry.AuthHeader)
w.Header().Add("Access-Control-Allow-Methods", "HEAD, GET, POST, DELETE, PUT, OPTIONS")
return handler(ctx, w, r, vars)
}
}
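
A minimal sketch of how this (deprecated) middleware is applied, assuming only the exported NewCORSMiddleware and WrapHandler shown above; the ping-style handler and the recorder are illustrative, not daemon code:

```go
package main

import (
	"context"
	"fmt"
	"net/http"
	"net/http/httptest"

	"github.com/docker/docker/api/server/middleware"
)

func main() {
	ping := func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
		_, err := w.Write([]byte("OK"))
		return err
	}

	// An empty default falls back to "*", as in the WrapHandler body above.
	cors := middleware.NewCORSMiddleware("")
	wrapped := cors.WrapHandler(ping)

	rec := httptest.NewRecorder()
	req := httptest.NewRequest(http.MethodGet, "/_ping", nil)
	_ = wrapped(context.Background(), rec, req, nil)

	fmt.Println(rec.Header().Get("Access-Control-Allow-Origin")) // *
}
```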

View File

@@ -9,34 +9,14 @@ import (
"strings"
"github.com/containerd/log"
"github.com/docker/docker/api/server/httpstatus"
"github.com/docker/docker/api/server/httputils"
"github.com/docker/docker/pkg/ioutils"
"github.com/sirupsen/logrus"
)
// DebugRequestMiddleware dumps the request to logger
func DebugRequestMiddleware(handler func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error) func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
return func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) (retErr error) {
logger := log.G(ctx)
// Use a variable for fields to prevent overhead of repeatedly
// calling WithFields.
fields := log.Fields{
"module": "api",
"method": r.Method,
"request-url": r.RequestURI,
"vars": vars,
}
logger.WithFields(fields).Debugf("handling %s request", r.Method)
defer func() {
if retErr != nil {
// TODO(thaJeztah): unify this with Server.makeHTTPHandler, which also logs internal server errors as error-log. See https://github.com/moby/moby/pull/48740#discussion_r1816675574
fields["error-response"] = retErr
fields["status"] = httpstatus.FromError(retErr)
logger.WithFields(fields).Debugf("error response for %s request", r.Method)
}
}()
return func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
log.G(ctx).Debugf("Calling %s %s", r.Method, r.RequestURI)
if r.Method != http.MethodPost {
return handler(ctx, w, r, vars)
@@ -62,15 +42,11 @@ func DebugRequestMiddleware(handler func(ctx context.Context, w http.ResponseWri
var postForm map[string]interface{}
if err := json.Unmarshal(b, &postForm); err == nil {
maskSecretKeys(postForm)
// TODO(thaJeztah): is there a better way to detect if we're using JSON-formatted logs?
if _, ok := logger.Logger.Formatter.(*logrus.JSONFormatter); ok {
fields["form-data"] = postForm
formStr, errMarshal := json.Marshal(postForm)
if errMarshal == nil {
log.G(ctx).Debugf("form data: %s", string(formStr))
} else {
if data, err := json.Marshal(postForm); err != nil {
fields["form-data"] = postForm
} else {
fields["form-data"] = string(data)
}
log.G(ctx).Debugf("form data: %q", postForm)
}
}
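
Likewise, a hedged sketch of wiring DebugRequestMiddleware around a handler; only the middleware function itself comes from this file, while the handler, request, and log level are illustrative:

```go
package main

import (
	"context"
	"net/http"
	"net/http/httptest"
	"strings"

	"github.com/containerd/log"
	"github.com/docker/docker/api/server/middleware"
)

func main() {
	// At debug level the middleware dumps POST request bodies (with known
	// secret fields masked) before calling the wrapped handler.
	_ = log.SetLevel("debug")

	noop := func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
		w.WriteHeader(http.StatusNoContent)
		return nil
	}
	wrapped := middleware.DebugRequestMiddleware(noop)

	body := strings.NewReader(`{"Image":"busybox"}`)
	req := httptest.NewRequest(http.MethodPost, "/containers/create", body)
	req.Header.Set("Content-Type", "application/json")
	_ = wrapped(context.Background(), httptest.NewRecorder(), req, nil)
}
```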

View File

@@ -67,8 +67,8 @@ func (e versionUnsupportedError) InvalidParameter() {}
func (v VersionMiddleware) WrapHandler(handler func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error) func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
return func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
w.Header().Set("Server", fmt.Sprintf("Docker/%s (%s)", v.serverVersion, runtime.GOOS))
w.Header().Set("Api-Version", v.defaultAPIVersion)
w.Header().Set("Ostype", runtime.GOOS)
w.Header().Set("API-Version", v.defaultAPIVersion)
w.Header().Set("OSType", runtime.GOOS)
apiVersion := vars["version"]
if apiVersion == "" {

View File

@@ -56,6 +56,7 @@ func TestNewVersionMiddlewareValidation(t *testing.T) {
}
for _, tc := range tests {
tc := tc
t.Run(tc.doc, func(t *testing.T) {
_, err := NewVersionMiddleware("1.2.3", tc.defaultVersion, tc.minVersion)
if tc.expectedErr == "" {
@@ -140,6 +141,6 @@ func TestVersionMiddlewareWithErrorsReturnsHeaders(t *testing.T) {
hdr := resp.Result().Header
assert.Check(t, is.Contains(hdr.Get("Server"), "Docker/1.2.3"))
assert.Check(t, is.Contains(hdr.Get("Server"), runtime.GOOS))
assert.Check(t, is.Equal(hdr.Get("Api-Version"), api.DefaultVersion))
assert.Check(t, is.Equal(hdr.Get("Ostype"), runtime.GOOS))
assert.Check(t, is.Equal(hdr.Get("API-Version"), api.DefaultVersion))
assert.Check(t, is.Equal(hdr.Get("OSType"), runtime.GOOS))
}
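
One detail behind the Api-Version/API-Version change above: Go's net/http canonicalizes header keys, so both spellings name the same header on the wire and lookups are spelling-insensitive. A standard-library-only check:

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	h := http.Header{}
	h.Set("API-Version", "1.47")
	h.Set("OSType", "linux")

	// Header.Set and Header.Get both canonicalize the key via
	// textproto.CanonicalMIMEHeaderKey, so either spelling resolves to the
	// same entry ("Api-Version", "Ostype").
	fmt.Println(h.Get("Api-Version")) // 1.47
	fmt.Println(h.Get("Ostype"))      // linux
	for k := range h {
		fmt.Println(k) // Api-Version and Ostype
	}
}
```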

View File

@@ -13,7 +13,6 @@ import (
"strconv"
"strings"
"sync"
"syscall"
"github.com/containerd/log"
"github.com/docker/docker/api/server/httputils"
@@ -178,55 +177,19 @@ func (br *buildRouter) postPrune(ctx context.Context, w http.ResponseWriter, r *
if err != nil {
return err
}
ksfv := r.FormValue("keep-storage")
if ksfv == "" {
ksfv = "0"
}
ks, err := strconv.Atoi(ksfv)
if err != nil {
return invalidParam{errors.Wrapf(err, "keep-storage is in bytes and expects an integer, got %v", ksfv)}
}
opts := types.BuildCachePruneOptions{
All: httputils.BoolValue(r, "all"),
Filters: fltrs,
}
parseBytesFromFormValue := func(name string) (int64, error) {
if fv := r.FormValue(name); fv != "" {
bs, err := strconv.Atoi(fv)
if err != nil {
return 0, invalidParam{errors.Wrapf(err, "%s is in bytes and expects an integer, got %v", name, fv)}
}
return int64(bs), nil
}
return 0, nil
}
version := httputils.VersionFromContext(ctx)
if versions.GreaterThanOrEqualTo(version, "1.48") {
bs, err := parseBytesFromFormValue("reserved-space")
if err != nil {
return err
} else if bs == 0 {
// Deprecated parameter. Only checked if reserved-space is not used.
bs, err = parseBytesFromFormValue("keep-storage")
if err != nil {
return err
}
}
opts.ReservedSpace = bs
if bs, err := parseBytesFromFormValue("max-used-space"); err != nil {
return err
} else {
opts.MaxUsedSpace = bs
}
if bs, err := parseBytesFromFormValue("min-free-space"); err != nil {
return err
} else {
opts.MinFreeSpace = bs
}
} else {
// Only keep-storage was valid in pre-1.48 versions.
bs, err := parseBytesFromFormValue("keep-storage")
if err != nil {
return err
}
opts.ReservedSpace = bs
All: httputils.BoolValue(r, "all"),
Filters: fltrs,
KeepStorage: int64(ks),
}
report, err := br.backend.PruneCache(ctx, opts)
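
A standalone sketch of the parameter precedence implemented above for API 1.48 and newer: reserved-space is read first, and the deprecated keep-storage is only consulted when it is absent or zero. The helper mirrors parseBytesFromFormValue but is illustrative:

```go
package main

import (
	"fmt"
	"net/url"
	"strconv"
)

// bytesFromForm parses a byte-size form value, returning 0 when it is unset.
func bytesFromForm(form url.Values, name string) (int64, error) {
	fv := form.Get(name)
	if fv == "" {
		return 0, nil
	}
	n, err := strconv.Atoi(fv)
	if err != nil {
		return 0, fmt.Errorf("%s is in bytes and expects an integer, got %v: %w", name, fv, err)
	}
	return int64(n), nil
}

func main() {
	form := url.Values{"keep-storage": {"1048576"}}

	reserved, err := bytesFromForm(form, "reserved-space")
	if err == nil && reserved == 0 {
		// Deprecated parameter, only checked when reserved-space is unset.
		reserved, err = bytesFromForm(form, "keep-storage")
	}
	fmt.Println(reserved, err) // 1048576 <nil>
}
```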
@@ -281,9 +244,8 @@ func (br *buildRouter) postBuild(ctx context.Context, w http.ResponseWriter, r *
return err
}
_, err = output.Write(streamformatter.FormatError(err))
// don't log broken pipe errors as this is the normal case when a client aborts.
if err != nil && !errors.Is(err, syscall.EPIPE) {
log.G(ctx).WithError(err).Warn("could not write error response")
if err != nil {
log.G(ctx).Warnf("could not write error response: %v", err)
}
return nil
}
@@ -318,9 +280,6 @@ func (br *buildRouter) postBuild(ctx context.Context, w http.ResponseWriter, r *
ProgressWriter: buildProgressWriter(out, wantAux, createProgressReader),
})
if err != nil {
if errors.Is(err, context.Canceled) {
log.G(ctx).Debug("build canceled")
}
return errf(err)
}
@@ -352,14 +311,14 @@ type syncWriter struct {
mu sync.Mutex
}
func (s *syncWriter) Write(b []byte) (int, error) {
func (s *syncWriter) Write(b []byte) (count int, err error) {
s.mu.Lock()
defer s.mu.Unlock()
return s.w.Write(b)
count, err = s.w.Write(b)
s.mu.Unlock()
return
}
func buildProgressWriter(out io.Writer, wantAux bool, createProgressReader func(io.ReadCloser) io.ReadCloser) backend.ProgressWriter {
// see https://github.com/moby/moby/pull/21406
out = &syncWriter{w: out}
var aux *streamformatter.AuxFormatter
@@ -380,12 +339,8 @@ type flusher interface {
Flush()
}
type nopFlusher struct{}
func (f *nopFlusher) Flush() {}
func wrapOutputBufferedUntilRequestRead(rc io.ReadCloser, out io.Writer) (io.ReadCloser, io.Writer) {
var fl flusher = &nopFlusher{}
var fl flusher = &ioutils.NopFlusher{}
if f, ok := out.(flusher); ok {
fl = f
}
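
The syncWriter earlier in this hunk exists because progress and aux messages can be written from more than one goroutine; a minimal, self-contained illustration of the same mutex-guarded writer pattern:

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"sync"
)

// syncWriter serializes Write calls from multiple goroutines so interleaved
// progress messages do not corrupt each other; bytes.Buffer alone is not safe
// for concurrent writes.
type syncWriter struct {
	w  io.Writer
	mu sync.Mutex
}

func (s *syncWriter) Write(b []byte) (int, error) {
	s.mu.Lock()
	defer s.mu.Unlock()
	return s.w.Write(b)
}

func main() {
	var buf bytes.Buffer
	out := &syncWriter{w: &buf}

	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			fmt.Fprintf(out, `{"stream":"step %d"}`+"\n", i)
		}(i)
	}
	wg.Wait()
	fmt.Print(buf.String()) // four intact lines, in some order
}
```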

View File

@@ -4,6 +4,7 @@ import (
"context"
"io"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/backend"
"github.com/docker/docker/api/types/container"
"github.com/docker/docker/api/types/filters"
@@ -15,7 +16,7 @@ import (
type execBackend interface {
ContainerExecCreate(name string, options *container.ExecOptions) (string, error)
ContainerExecInspect(id string) (*backend.ExecInspect, error)
ContainerExecResize(ctx context.Context, name string, height, width uint32) error
ContainerExecResize(name string, height, width int) error
ContainerExecStart(ctx context.Context, name string, options backend.ExecStartConfig) error
ExecExists(name string) (bool, error)
}
@@ -34,24 +35,24 @@ type stateBackend interface {
ContainerKill(name string, signal string) error
ContainerPause(name string) error
ContainerRename(oldName, newName string) error
ContainerResize(ctx context.Context, name string, height, width uint32) error
ContainerResize(name string, height, width int) error
ContainerRestart(ctx context.Context, name string, options container.StopOptions) error
ContainerRm(name string, config *backend.ContainerRmConfig) error
ContainerStart(ctx context.Context, name string, checkpoint string, checkpointDir string) error
ContainerStop(ctx context.Context, name string, options container.StopOptions) error
ContainerUnpause(name string) error
ContainerUpdate(name string, hostConfig *container.HostConfig) (container.UpdateResponse, error)
ContainerUpdate(name string, hostConfig *container.HostConfig) (container.ContainerUpdateOKBody, error)
ContainerWait(ctx context.Context, name string, condition containerpkg.WaitCondition) (<-chan containerpkg.StateStatus, error)
}
// monitorBackend includes functions to implement to provide containers monitoring functionality.
type monitorBackend interface {
ContainerChanges(ctx context.Context, name string) ([]archive.Change, error)
ContainerInspect(ctx context.Context, name string, options backend.ContainerInspectOptions) (*container.InspectResponse, error)
ContainerInspect(ctx context.Context, name string, size bool, version string) (interface{}, error)
ContainerLogs(ctx context.Context, name string, config *container.LogsOptions) (msgs <-chan *backend.LogMessage, tty bool, err error)
ContainerStats(ctx context.Context, name string, config *backend.ContainerStatsConfig) error
ContainerTop(name string, psArgs string) (*container.TopResponse, error)
Containers(ctx context.Context, config *container.ListOptions) ([]*container.Summary, error)
ContainerTop(name string, psArgs string) (*container.ContainerTopOKBody, error)
Containers(ctx context.Context, config *container.ListOptions) ([]*types.Container, error)
}
// attachBackend includes function to implement to provide container attaching functionality.

View File

@@ -25,47 +25,47 @@ func NewRouter(b Backend, decoder httputils.ContainerDecoder, cgroup2 bool) rout
}
// Routes returns the available routes to the container controller
func (c *containerRouter) Routes() []router.Route {
return c.routes
func (r *containerRouter) Routes() []router.Route {
return r.routes
}
// initRoutes initializes the routes in container router
func (c *containerRouter) initRoutes() {
c.routes = []router.Route{
func (r *containerRouter) initRoutes() {
r.routes = []router.Route{
// HEAD
router.NewHeadRoute("/containers/{name:.*}/archive", c.headContainersArchive),
router.NewHeadRoute("/containers/{name:.*}/archive", r.headContainersArchive),
// GET
router.NewGetRoute("/containers/json", c.getContainersJSON),
router.NewGetRoute("/containers/{name:.*}/export", c.getContainersExport),
router.NewGetRoute("/containers/{name:.*}/changes", c.getContainersChanges),
router.NewGetRoute("/containers/{name:.*}/json", c.getContainersByName),
router.NewGetRoute("/containers/{name:.*}/top", c.getContainersTop),
router.NewGetRoute("/containers/{name:.*}/logs", c.getContainersLogs),
router.NewGetRoute("/containers/{name:.*}/stats", c.getContainersStats),
router.NewGetRoute("/containers/{name:.*}/attach/ws", c.wsContainersAttach),
router.NewGetRoute("/exec/{id:.*}/json", c.getExecByID),
router.NewGetRoute("/containers/{name:.*}/archive", c.getContainersArchive),
router.NewGetRoute("/containers/json", r.getContainersJSON),
router.NewGetRoute("/containers/{name:.*}/export", r.getContainersExport),
router.NewGetRoute("/containers/{name:.*}/changes", r.getContainersChanges),
router.NewGetRoute("/containers/{name:.*}/json", r.getContainersByName),
router.NewGetRoute("/containers/{name:.*}/top", r.getContainersTop),
router.NewGetRoute("/containers/{name:.*}/logs", r.getContainersLogs),
router.NewGetRoute("/containers/{name:.*}/stats", r.getContainersStats),
router.NewGetRoute("/containers/{name:.*}/attach/ws", r.wsContainersAttach),
router.NewGetRoute("/exec/{id:.*}/json", r.getExecByID),
router.NewGetRoute("/containers/{name:.*}/archive", r.getContainersArchive),
// POST
router.NewPostRoute("/containers/create", c.postContainersCreate),
router.NewPostRoute("/containers/{name:.*}/kill", c.postContainersKill),
router.NewPostRoute("/containers/{name:.*}/pause", c.postContainersPause),
router.NewPostRoute("/containers/{name:.*}/unpause", c.postContainersUnpause),
router.NewPostRoute("/containers/{name:.*}/restart", c.postContainersRestart),
router.NewPostRoute("/containers/{name:.*}/start", c.postContainersStart),
router.NewPostRoute("/containers/{name:.*}/stop", c.postContainersStop),
router.NewPostRoute("/containers/{name:.*}/wait", c.postContainersWait),
router.NewPostRoute("/containers/{name:.*}/resize", c.postContainersResize),
router.NewPostRoute("/containers/{name:.*}/attach", c.postContainersAttach),
router.NewPostRoute("/containers/{name:.*}/exec", c.postContainerExecCreate),
router.NewPostRoute("/exec/{name:.*}/start", c.postContainerExecStart),
router.NewPostRoute("/exec/{name:.*}/resize", c.postContainerExecResize),
router.NewPostRoute("/containers/{name:.*}/rename", c.postContainerRename),
router.NewPostRoute("/containers/{name:.*}/update", c.postContainerUpdate),
router.NewPostRoute("/containers/prune", c.postContainersPrune),
router.NewPostRoute("/commit", c.postCommit),
router.NewPostRoute("/containers/create", r.postContainersCreate),
router.NewPostRoute("/containers/{name:.*}/kill", r.postContainersKill),
router.NewPostRoute("/containers/{name:.*}/pause", r.postContainersPause),
router.NewPostRoute("/containers/{name:.*}/unpause", r.postContainersUnpause),
router.NewPostRoute("/containers/{name:.*}/restart", r.postContainersRestart),
router.NewPostRoute("/containers/{name:.*}/start", r.postContainersStart),
router.NewPostRoute("/containers/{name:.*}/stop", r.postContainersStop),
router.NewPostRoute("/containers/{name:.*}/wait", r.postContainersWait),
router.NewPostRoute("/containers/{name:.*}/resize", r.postContainersResize),
router.NewPostRoute("/containers/{name:.*}/attach", r.postContainersAttach),
router.NewPostRoute("/containers/{name:.*}/exec", r.postContainerExecCreate),
router.NewPostRoute("/exec/{name:.*}/start", r.postContainerExecStart),
router.NewPostRoute("/exec/{name:.*}/resize", r.postContainerExecResize),
router.NewPostRoute("/containers/{name:.*}/rename", r.postContainerRename),
router.NewPostRoute("/containers/{name:.*}/update", r.postContainerUpdate),
router.NewPostRoute("/containers/prune", r.postContainersPrune),
router.NewPostRoute("/commit", r.postCommit),
// PUT
router.NewPutRoute("/containers/{name:.*}/archive", c.putContainersArchive),
router.NewPutRoute("/containers/{name:.*}/archive", r.putContainersArchive),
// DELETE
router.NewDeleteRoute("/containers/{name:.*}", c.deleteContainers),
router.NewDeleteRoute("/containers/{name:.*}", r.deleteContainers),
}
}

View File

@@ -33,7 +33,7 @@ import (
"golang.org/x/net/websocket"
)
func (c *containerRouter) postCommit(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
func (s *containerRouter) postCommit(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
}
@@ -49,7 +49,7 @@ func (c *containerRouter) postCommit(ctx context.Context, w http.ResponseWriter,
// are discarded here, but decoder.DecodeConfig also performs validation,
// so a request containing those additional fields would result in a
// validation error.
config, _, _, err := c.decoder.DecodeConfig(r.Body)
config, _, _, err := s.decoder.DecodeConfig(r.Body)
if err != nil && !errors.Is(err, io.EOF) { // Do not fail if body is empty.
return err
}
@@ -59,7 +59,7 @@ func (c *containerRouter) postCommit(ctx context.Context, w http.ResponseWriter,
return errdefs.InvalidParameter(err)
}
imgID, err := c.backend.CreateImageFromContainer(ctx, r.Form.Get("container"), &backend.CreateImageConfig{
imgID, err := s.backend.CreateImageFromContainer(ctx, r.Form.Get("container"), &backend.CreateImageConfig{
Pause: httputils.BoolValueOrDefault(r, "pause", true), // TODO(dnephin): remove pause arg, and always pause in backend
Tag: ref,
Author: r.Form.Get("author"),
@@ -71,10 +71,10 @@ func (c *containerRouter) postCommit(ctx context.Context, w http.ResponseWriter,
return err
}
return httputils.WriteJSON(w, http.StatusCreated, &container.CommitResponse{ID: imgID})
return httputils.WriteJSON(w, http.StatusCreated, &types.IDResponse{ID: imgID})
}
func (c *containerRouter) getContainersJSON(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
func (s *containerRouter) getContainersJSON(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
}
@@ -99,7 +99,7 @@ func (c *containerRouter) getContainersJSON(ctx context.Context, w http.Response
config.Limit = limit
}
containers, err := c.backend.Containers(ctx, config)
containers, err := s.backend.Containers(ctx, config)
if err != nil {
return err
}
@@ -113,17 +113,10 @@ func (c *containerRouter) getContainersJSON(ctx context.Context, w http.Response
}
}
if versions.LessThan(version, "1.48") {
// ImageManifestDescriptor information was added in API 1.48
for _, c := range containers {
c.ImageManifestDescriptor = nil
}
}
return httputils.WriteJSON(w, http.StatusOK, containers)
}
func (c *containerRouter) getContainersStats(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
func (s *containerRouter) getContainersStats(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
}
@@ -137,7 +130,7 @@ func (c *containerRouter) getContainersStats(ctx context.Context, w http.Respons
oneShot = httputils.BoolValueOrDefault(r, "one-shot", false)
}
return c.backend.ContainerStats(ctx, vars["name"], &backend.ContainerStatsConfig{
return s.backend.ContainerStats(ctx, vars["name"], &backend.ContainerStatsConfig{
Stream: stream,
OneShot: oneShot,
OutStream: func() io.Writer {
@@ -153,7 +146,7 @@ func (c *containerRouter) getContainersStats(ctx context.Context, w http.Respons
})
}
func (c *containerRouter) getContainersLogs(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
func (s *containerRouter) getContainersLogs(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
}
@@ -180,7 +173,7 @@ func (c *containerRouter) getContainersLogs(ctx context.Context, w http.Response
Details: httputils.BoolValue(r, "details"),
}
msgs, tty, err := c.backend.ContainerLogs(ctx, containerName, logsConfig)
msgs, tty, err := s.backend.ContainerLogs(ctx, containerName, logsConfig)
if err != nil {
return err
}
@@ -199,11 +192,11 @@ func (c *containerRouter) getContainersLogs(ctx context.Context, w http.Response
return nil
}
func (c *containerRouter) getContainersExport(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
return c.backend.ContainerExport(ctx, vars["name"], w)
func (s *containerRouter) getContainersExport(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
return s.backend.ContainerExport(ctx, vars["name"], w)
}
func (c *containerRouter) postContainersStart(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
func (s *containerRouter) postContainersStart(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
ctx, span := otel.Tracer("").Start(ctx, "containerRouter.postContainersStart")
defer span.End()
@@ -223,7 +216,7 @@ func (c *containerRouter) postContainersStart(ctx context.Context, w http.Respon
return err
}
if err := c.backend.ContainerStart(ctx, vars["name"], r.Form.Get("checkpoint"), r.Form.Get("checkpoint-dir")); err != nil {
if err := s.backend.ContainerStart(ctx, vars["name"], r.Form.Get("checkpoint"), r.Form.Get("checkpoint-dir")); err != nil {
return err
}
@@ -231,7 +224,7 @@ func (c *containerRouter) postContainersStart(ctx context.Context, w http.Respon
return nil
}
func (c *containerRouter) postContainersStop(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
func (s *containerRouter) postContainersStop(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
}
@@ -251,7 +244,7 @@ func (c *containerRouter) postContainersStop(ctx context.Context, w http.Respons
options.Timeout = &valSeconds
}
if err := c.backend.ContainerStop(ctx, vars["name"], options); err != nil {
if err := s.backend.ContainerStop(ctx, vars["name"], options); err != nil {
return err
}
@@ -259,13 +252,13 @@ func (c *containerRouter) postContainersStop(ctx context.Context, w http.Respons
return nil
}
func (c *containerRouter) postContainersKill(_ context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
func (s *containerRouter) postContainersKill(_ context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
}
name := vars["name"]
if err := c.backend.ContainerKill(name, r.Form.Get("signal")); err != nil {
if err := s.backend.ContainerKill(name, r.Form.Get("signal")); err != nil {
return errors.Wrapf(err, "cannot kill container: %s", name)
}
@@ -273,7 +266,7 @@ func (c *containerRouter) postContainersKill(_ context.Context, w http.ResponseW
return nil
}
func (c *containerRouter) postContainersRestart(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
func (s *containerRouter) postContainersRestart(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
}
@@ -293,7 +286,7 @@ func (c *containerRouter) postContainersRestart(ctx context.Context, w http.Resp
options.Timeout = &valSeconds
}
if err := c.backend.ContainerRestart(ctx, vars["name"], options); err != nil {
if err := s.backend.ContainerRestart(ctx, vars["name"], options); err != nil {
return err
}
@@ -301,12 +294,12 @@ func (c *containerRouter) postContainersRestart(ctx context.Context, w http.Resp
return nil
}
func (c *containerRouter) postContainersPause(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
func (s *containerRouter) postContainersPause(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
}
if err := c.backend.ContainerPause(vars["name"]); err != nil {
if err := s.backend.ContainerPause(vars["name"]); err != nil {
return err
}
@@ -315,12 +308,12 @@ func (c *containerRouter) postContainersPause(ctx context.Context, w http.Respon
return nil
}
func (c *containerRouter) postContainersUnpause(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
func (s *containerRouter) postContainersUnpause(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
}
if err := c.backend.ContainerUnpause(vars["name"]); err != nil {
if err := s.backend.ContainerUnpause(vars["name"]); err != nil {
return err
}
@@ -329,7 +322,7 @@ func (c *containerRouter) postContainersUnpause(ctx context.Context, w http.Resp
return nil
}
func (c *containerRouter) postContainersWait(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
func (s *containerRouter) postContainersWait(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
// Behavior changed in version 1.30 to handle wait condition and to
// return headers immediately.
version := httputils.VersionFromContext(ctx)
@@ -357,7 +350,7 @@ func (c *containerRouter) postContainersWait(ctx context.Context, w http.Respons
}
}
waitC, err := c.backend.ContainerWait(ctx, vars["name"], waitCondition)
waitC, err := s.backend.ContainerWait(ctx, vars["name"], waitCondition)
if err != nil {
return err
}
@@ -394,8 +387,8 @@ func (c *containerRouter) postContainersWait(ctx context.Context, w http.Respons
})
}
func (c *containerRouter) getContainersChanges(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
changes, err := c.backend.ContainerChanges(ctx, vars["name"])
func (s *containerRouter) getContainersChanges(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
changes, err := s.backend.ContainerChanges(ctx, vars["name"])
if err != nil {
return err
}
@@ -403,12 +396,12 @@ func (c *containerRouter) getContainersChanges(ctx context.Context, w http.Respo
return httputils.WriteJSON(w, http.StatusOK, changes)
}
func (c *containerRouter) getContainersTop(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
func (s *containerRouter) getContainersTop(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
}
procList, err := c.backend.ContainerTop(vars["name"], r.Form.Get("ps_args"))
procList, err := s.backend.ContainerTop(vars["name"], r.Form.Get("ps_args"))
if err != nil {
return err
}
@@ -416,21 +409,21 @@ func (c *containerRouter) getContainersTop(ctx context.Context, w http.ResponseW
return httputils.WriteJSON(w, http.StatusOK, procList)
}
func (c *containerRouter) postContainerRename(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
func (s *containerRouter) postContainerRename(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
}
name := vars["name"]
newName := r.Form.Get("name")
if err := c.backend.ContainerRename(name, newName); err != nil {
if err := s.backend.ContainerRename(name, newName); err != nil {
return err
}
w.WriteHeader(http.StatusNoContent)
return nil
}
func (c *containerRouter) postContainerUpdate(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
func (s *containerRouter) postContainerUpdate(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
}
@@ -462,7 +455,7 @@ func (c *containerRouter) postContainerUpdate(ctx context.Context, w http.Respon
}
name := vars["name"]
resp, err := c.backend.ContainerUpdate(name, hostConfig)
resp, err := s.backend.ContainerUpdate(name, hostConfig)
if err != nil {
return err
}
@@ -470,7 +463,7 @@ func (c *containerRouter) postContainerUpdate(ctx context.Context, w http.Respon
return httputils.WriteJSON(w, http.StatusOK, resp)
}
func (c *containerRouter) postContainersCreate(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
func (s *containerRouter) postContainersCreate(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
}
@@ -480,7 +473,7 @@ func (c *containerRouter) postContainersCreate(ctx context.Context, w http.Respo
name := r.Form.Get("name")
config, hostConfig, networkingConfig, err := c.decoder.DecodeConfig(r.Body)
config, hostConfig, networkingConfig, err := s.decoder.DecodeConfig(r.Body)
if err != nil {
if errors.Is(err, io.EOF) {
return errdefs.InvalidParameter(errors.New("invalid JSON: got EOF while reading request body"))
@@ -542,7 +535,7 @@ func (c *containerRouter) postContainersCreate(ctx context.Context, w http.Respo
if versions.LessThan(version, "1.41") {
// Older clients expect the default to be "host" on cgroup v1 hosts
if !c.cgroup2 && hostConfig.CgroupnsMode.IsEmpty() {
if !s.cgroup2 && hostConfig.CgroupnsMode.IsEmpty() {
hostConfig.CgroupnsMode = container.CgroupnsModeHost
}
}
@@ -650,23 +643,7 @@ func (c *containerRouter) postContainersCreate(ctx context.Context, w http.Respo
}
}
if versions.LessThan(version, "1.48") {
for _, epConfig := range networkingConfig.EndpointsConfig {
// Before 1.48, all endpoints had the same priority, so
// reinitialize this field.
epConfig.GwPriority = 0
}
for _, m := range hostConfig.Mounts {
if m.Type == mount.TypeImage {
return errdefs.InvalidParameter(errors.New(`Mount type "Image" needs API v1.48 or newer`))
}
}
}
var warnings []string
if warn := handleVolumeDriverBC(version, hostConfig); warn != "" {
warnings = append(warnings, warn)
}
if warn, err := handleMACAddressBC(config, hostConfig, networkingConfig, version); err != nil {
return err
} else if warn != "" {
@@ -687,7 +664,7 @@ func (c *containerRouter) postContainersCreate(ctx context.Context, w http.Respo
hostConfig.PidsLimit = nil
}
ccr, err := c.backend.ContainerCreate(ctx, backend.ContainerCreateConfig{
ccr, err := s.backend.ContainerCreate(ctx, backend.ContainerCreateConfig{
Name: name,
Config: config,
HostConfig: hostConfig,
@@ -695,12 +672,6 @@ func (c *containerRouter) postContainersCreate(ctx context.Context, w http.Respo
Platform: platform,
DefaultReadOnlyNonRecursive: defaultReadOnlyNonRecursive,
})
// Log warnings for debugging, regardless if the request was successful or not.
if len(ccr.Warnings) > 0 {
log.G(ctx).WithField("warnings", ccr.Warnings).Debug("warnings encountered during container create request")
}
if err != nil {
return err
}
@@ -708,27 +679,6 @@ func (c *containerRouter) postContainersCreate(ctx context.Context, w http.Respo
return httputils.WriteJSON(w, http.StatusCreated, ccr)
}
// handleVolumeDriverBC handles the use of the container-wide "VolumeDriver"
// option when the Mounts API is used for volumes. It produces a warning
// on API 1.48 and up. Older versions of the API did not produce a warning,
// but the CLI would do so.
func handleVolumeDriverBC(version string, hostConfig *container.HostConfig) (warning string) {
if hostConfig.VolumeDriver == "" || versions.LessThan(version, "1.48") {
return ""
}
for _, m := range hostConfig.Mounts {
if m.Type != mount.TypeVolume {
continue
}
if m.VolumeOptions != nil && m.VolumeOptions.DriverConfig != nil && m.VolumeOptions.DriverConfig.Name != "" {
// Driver was configured for this mount, so no ambiguity.
continue
}
return "WARNING: the container-wide volume-driver configuration is ignored for volumes specified via 'mount'. Use '--mount type=volume,volume-driver=...' instead"
}
return ""
}
// handleMACAddressBC takes care of backward-compatibility for the container-wide MAC address by mutating the
// networkingConfig to set the endpoint-specific MACAddress field introduced in API v1.44. It returns a warning message
// or an error if the container-wide field was specified for API >= v1.44.
@@ -819,12 +769,14 @@ func handleSysctlBC(
netIfSysctl := fmt.Sprintf("net.%s.%s.IFNAME.%s=%s", spl[1], spl[2], spl[4], v)
// Find the EndpointConfig to migrate settings to, if not already found.
if ep == nil {
/* TODO(robmry) - apply this to the API version used in 28.0.0
// Per-endpoint sysctls were introduced in API version 1.46. Migration is
// needed, but refuse to do it automatically for API 1.48 and newer.
if versions.GreaterThan(version, "1.47") {
// needed, but refuse to do it automatically for newer versions of the API.
if versions.GreaterThan(version, "1.??") {
return "", fmt.Errorf("interface specific sysctl setting %q must be supplied using driver option '%s'",
k, netlabel.EndpointSysctls)
}
*/
var err error
ep, err = epConfigForNetMode(version, hostConfig.NetworkMode, netConfig)
if err != nil {
@@ -921,7 +873,7 @@ func epConfigForNetMode(
return ep, nil
}
func (c *containerRouter) deleteContainers(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
func (s *containerRouter) deleteContainers(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
}
@@ -933,7 +885,7 @@ func (c *containerRouter) deleteContainers(ctx context.Context, w http.ResponseW
RemoveLink: httputils.BoolValue(r, "link"),
}
if err := c.backend.ContainerRm(name, config); err != nil {
if err := s.backend.ContainerRm(name, config); err != nil {
return err
}
@@ -942,24 +894,24 @@ func (c *containerRouter) deleteContainers(ctx context.Context, w http.ResponseW
return nil
}
func (c *containerRouter) postContainersResize(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
func (s *containerRouter) postContainersResize(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
}
height, err := httputils.Uint32Value(r, "h")
height, err := strconv.Atoi(r.Form.Get("h"))
if err != nil {
return errdefs.InvalidParameter(errors.Wrapf(err, "invalid resize height %q", r.Form.Get("h")))
return errdefs.InvalidParameter(err)
}
width, err := httputils.Uint32Value(r, "w")
width, err := strconv.Atoi(r.Form.Get("w"))
if err != nil {
return errdefs.InvalidParameter(errors.Wrapf(err, "invalid resize width %q", r.Form.Get("w")))
return errdefs.InvalidParameter(err)
}
return c.backend.ContainerResize(ctx, vars["name"], height, width)
return s.backend.ContainerResize(vars["name"], height, width)
}
func (c *containerRouter) postContainersAttach(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
func (s *containerRouter) postContainersAttach(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
err := httputils.ParseForm(r)
if err != nil {
return err
@@ -988,11 +940,9 @@ func (c *containerRouter) postContainersAttach(ctx context.Context, w http.Respo
if multiplexed && versions.GreaterThanOrEqualTo(httputils.VersionFromContext(ctx), "1.42") {
contentType = types.MediaTypeMultiplexedStream
}
// FIXME(thaJeztah): we should not ignore errors here; see https://github.com/moby/moby/pull/48359#discussion_r1725562802
fmt.Fprintf(conn, "HTTP/1.1 101 UPGRADED\r\nContent-Type: %v\r\nConnection: Upgrade\r\nUpgrade: tcp\r\n\r\n", contentType)
fmt.Fprintf(conn, "HTTP/1.1 101 UPGRADED\r\nContent-Type: "+contentType+"\r\nConnection: Upgrade\r\nUpgrade: tcp\r\n\r\n")
} else {
// FIXME(thaJeztah): we should not ignore errors here; see https://github.com/moby/moby/pull/48359#discussion_r1725562802
fmt.Fprint(conn, "HTTP/1.1 200 OK\r\nContent-Type: application/vnd.docker.raw-stream\r\n\r\n")
fmt.Fprintf(conn, "HTTP/1.1 200 OK\r\nContent-Type: application/vnd.docker.raw-stream\r\n\r\n")
}
go notifyClosed(ctx, conn, cancel)
@@ -1015,7 +965,7 @@ func (c *containerRouter) postContainersAttach(ctx context.Context, w http.Respo
MuxStreams: true,
}
if err = c.backend.ContainerAttach(containerName, attachConfig); err != nil {
if err = s.backend.ContainerAttach(containerName, attachConfig); err != nil {
log.G(ctx).WithError(err).Errorf("Handler for %s %s returned error", r.Method, r.URL.Path)
// Remember to close stream if error happens
conn, _, errHijack := hijacker.Hijack()
@@ -1031,7 +981,7 @@ func (c *containerRouter) postContainersAttach(ctx context.Context, w http.Respo
return nil
}
func (c *containerRouter) wsContainersAttach(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
func (s *containerRouter) wsContainersAttach(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
}
@@ -1087,7 +1037,7 @@ func (c *containerRouter) wsContainersAttach(ctx context.Context, w http.Respons
MuxStreams: false, // never multiplex, as we rely on websocket to manage distinct streams
}
err = c.backend.ContainerAttach(containerName, attachConfig)
err = s.backend.ContainerAttach(containerName, attachConfig)
close(done)
select {
case <-started:
@@ -1102,7 +1052,7 @@ func (c *containerRouter) wsContainersAttach(ctx context.Context, w http.Respons
return err
}
func (c *containerRouter) postContainersPrune(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
func (s *containerRouter) postContainersPrune(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
}
@@ -1112,7 +1062,7 @@ func (c *containerRouter) postContainersPrune(ctx context.Context, w http.Respon
return err
}
pruneReport, err := c.backend.ContainersPrune(ctx, pruneFilters)
pruneReport, err := s.backend.ContainersPrune(ctx, pruneFilters)
if err != nil {
return err
}

View File

@@ -273,15 +273,17 @@ func TestHandleSysctlBC(t *testing.T) {
"net.ipv6.conf.all.disable_ipv6": "0",
},
},
/* TODO(robmry) - enable this test for the API version used in 28.0.0
{
name: "migration disabled for newer api",
apiVersion: "1.48",
apiVersion: "1.??",
networkMode: "mynet",
sysctls: map[string]string{
"net.ipv6.conf.eth0.accept_ra": "2",
},
expError: "must be supplied using driver option 'com.docker.network.endpoint.sysctls'",
},
*/
{
name: "only migrate eth0",
apiVersion: "1.46",

View File

@@ -29,13 +29,13 @@ func setContainerPathStatHeader(stat *container.PathStat, header http.Header) er
return nil
}
func (c *containerRouter) headContainersArchive(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
func (s *containerRouter) headContainersArchive(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
v, err := httputils.ArchiveFormValues(r, vars)
if err != nil {
return err
}
stat, err := c.backend.ContainerStatPath(v.Name, v.Path)
stat, err := s.backend.ContainerStatPath(v.Name, v.Path)
if err != nil {
return err
}
@@ -66,13 +66,13 @@ func writeCompressedResponse(w http.ResponseWriter, r *http.Request, body io.Rea
return err
}
func (c *containerRouter) getContainersArchive(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
func (s *containerRouter) getContainersArchive(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
v, err := httputils.ArchiveFormValues(r, vars)
if err != nil {
return err
}
tarArchive, stat, err := c.backend.ContainerArchivePath(v.Name, v.Path)
tarArchive, stat, err := s.backend.ContainerArchivePath(v.Name, v.Path)
if err != nil {
return err
}
@@ -86,7 +86,7 @@ func (c *containerRouter) getContainersArchive(ctx context.Context, w http.Respo
return writeCompressedResponse(w, r, tarArchive)
}
func (c *containerRouter) putContainersArchive(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
func (s *containerRouter) putContainersArchive(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
v, err := httputils.ArchiveFormValues(r, vars)
if err != nil {
return err
@@ -95,5 +95,5 @@ func (c *containerRouter) putContainersArchive(ctx context.Context, w http.Respo
noOverwriteDirNonDir := httputils.BoolValue(r, "noOverwriteDirNonDir")
copyUIDGID := httputils.BoolValue(r, "copyUIDGID")
return c.backend.ContainerExtractToDir(v.Name, v.Path, copyUIDGID, noOverwriteDirNonDir, r.Body)
return s.backend.ContainerExtractToDir(v.Name, v.Path, copyUIDGID, noOverwriteDirNonDir, r.Body)
}

View File

@@ -5,6 +5,7 @@ import (
"fmt"
"io"
"net/http"
"strconv"
"github.com/containerd/log"
"github.com/docker/docker/api/server/httputils"
@@ -14,11 +15,10 @@ import (
"github.com/docker/docker/api/types/versions"
"github.com/docker/docker/errdefs"
"github.com/docker/docker/pkg/stdcopy"
"github.com/pkg/errors"
)
func (c *containerRouter) getExecByID(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
eConfig, err := c.backend.ContainerExecInspect(vars["id"])
func (s *containerRouter) getExecByID(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
eConfig, err := s.backend.ContainerExecInspect(vars["id"])
if err != nil {
return err
}
@@ -34,7 +34,7 @@ func (execCommandError) Error() string {
func (execCommandError) InvalidParameter() {}
func (c *containerRouter) postContainerExecCreate(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
func (s *containerRouter) postContainerExecCreate(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
}
@@ -55,19 +55,19 @@ func (c *containerRouter) postContainerExecCreate(ctx context.Context, w http.Re
}
// Register an instance of Exec in container.
id, err := c.backend.ContainerExecCreate(vars["name"], execConfig)
id, err := s.backend.ContainerExecCreate(vars["name"], execConfig)
if err != nil {
log.G(ctx).Errorf("Error setting up exec command in container %s: %v", vars["name"], err)
return err
}
return httputils.WriteJSON(w, http.StatusCreated, &container.ExecCreateResponse{
return httputils.WriteJSON(w, http.StatusCreated, &types.IDResponse{
ID: id,
})
}
// TODO(vishh): Refactor the code to avoid having to specify stream config as part of both create and start.
func (c *containerRouter) postContainerExecStart(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
func (s *containerRouter) postContainerExecStart(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
}
@@ -83,7 +83,7 @@ func (c *containerRouter) postContainerExecStart(ctx context.Context, w http.Res
return err
}
if exists, err := c.backend.ExecExists(execName); !exists {
if exists, err := s.backend.ExecExists(execName); !exists {
return err
}
@@ -138,7 +138,7 @@ func (c *containerRouter) postContainerExecStart(ctx context.Context, w http.Res
// Now run the user process in container.
//
// TODO: Maybe we should pass ctx here if we're not detaching?
err := c.backend.ContainerExecStart(context.Background(), execName, backend.ExecStartConfig{
err := s.backend.ContainerExecStart(context.Background(), execName, backend.ExecStartConfig{
Stdin: stdin,
Stdout: stdout,
Stderr: stderr,
@@ -154,18 +154,18 @@ func (c *containerRouter) postContainerExecStart(ctx context.Context, w http.Res
return nil
}
func (c *containerRouter) postContainerExecResize(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
func (s *containerRouter) postContainerExecResize(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
}
height, err := httputils.Uint32Value(r, "h")
height, err := strconv.Atoi(r.Form.Get("h"))
if err != nil {
return errdefs.InvalidParameter(errors.Wrapf(err, "invalid resize height %q", r.Form.Get("h")))
return errdefs.InvalidParameter(err)
}
width, err := httputils.Uint32Value(r, "w")
width, err := strconv.Atoi(r.Form.Get("w"))
if err != nil {
return errdefs.InvalidParameter(errors.Wrapf(err, "invalid resize width %q", r.Form.Get("w")))
return errdefs.InvalidParameter(err)
}
return c.backend.ContainerExecResize(ctx, vars["name"], height, width)
return s.backend.ContainerExecResize(vars["name"], height, width)
}
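
The motivation for swapping strconv.Atoi for an unsigned parse in the resize handlers above, shown as a quick standard-library comparison:

```go
package main

import (
	"fmt"
	"strconv"
)

func main() {
	v := "-1"

	// Atoi happily returns a negative value, which a resize height or width
	// can never be, so the caller must range-check it separately.
	h, err := strconv.Atoi(v)
	fmt.Println(h, err) // -1 <nil>

	// Parsing as an unsigned 32-bit value rejects it up front.
	u, err := strconv.ParseUint(v, 10, 32)
	fmt.Println(u, err) // 0 strconv.ParseUint: parsing "-1": invalid syntax
}
```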

View File

@@ -1,6 +1,3 @@
// FIXME(thaJeztah): remove once we are a module; the go:build directive prevents go from downgrading language version to go1.16:
//go:build go1.22
package container // import "github.com/docker/docker/api/server/router/container"
import (
@@ -8,34 +5,17 @@ import (
"net/http"
"github.com/docker/docker/api/server/httputils"
"github.com/docker/docker/api/types/backend"
"github.com/docker/docker/api/types/container"
"github.com/docker/docker/api/types/versions"
"github.com/docker/docker/internal/sliceutil"
"github.com/docker/docker/pkg/stringid"
)
// getContainersByName inspects container's configuration and serializes it as json.
func (c *containerRouter) getContainersByName(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
ctr, err := c.backend.ContainerInspect(ctx, vars["name"], backend.ContainerInspectOptions{
Size: httputils.BoolValue(r, "size"),
})
func (s *containerRouter) getContainersByName(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
displaySize := httputils.BoolValue(r, "size")
version := httputils.VersionFromContext(ctx)
json, err := s.backend.ContainerInspect(ctx, vars["name"], displaySize, version)
if err != nil {
return err
}
version := httputils.VersionFromContext(ctx)
if versions.LessThan(version, "1.45") {
shortCID := stringid.TruncateID(ctr.ID)
for nwName, ep := range ctr.NetworkSettings.Networks {
if container.NetworkMode(nwName).IsUserDefined() {
ep.Aliases = sliceutil.Dedup(append(ep.Aliases, shortCID, ctr.Config.Hostname))
}
}
}
if versions.LessThan(version, "1.48") {
ctr.ImageManifestDescriptor = nil
}
return httputils.WriteJSON(w, http.StatusOK, ctr)
return httputils.WriteJSON(w, http.StatusOK, json)
}
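
The field trimming above relies on numeric API-version comparison; a small sketch using the versions package already imported in this file:

```go
package main

import (
	"fmt"

	"github.com/docker/docker/api/types/versions"
)

func main() {
	// versions.LessThan compares version numbers component-wise, which is why
	// it is used instead of a plain string comparison.
	fmt.Println(versions.LessThan("1.45", "1.48")) // true
	fmt.Println(versions.LessThan("1.9", "1.48"))  // true (numeric, not lexical)
	fmt.Println("1.9" < "1.48")                    // false: a lexical compare gets it wrong
}
```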

View File

@@ -24,13 +24,13 @@ type debugRouter struct {
func (r *debugRouter) initRoutes() {
r.routes = []router.Route{
router.NewGetRoute("/debug/vars", frameworkAdaptHandler(expvar.Handler())),
router.NewGetRoute("/debug/pprof/", frameworkAdaptHandlerFunc(pprof.Index)),
router.NewGetRoute("/debug/pprof/cmdline", frameworkAdaptHandlerFunc(pprof.Cmdline)),
router.NewGetRoute("/debug/pprof/profile", frameworkAdaptHandlerFunc(pprof.Profile)),
router.NewGetRoute("/debug/pprof/symbol", frameworkAdaptHandlerFunc(pprof.Symbol)),
router.NewGetRoute("/debug/pprof/trace", frameworkAdaptHandlerFunc(pprof.Trace)),
router.NewGetRoute("/debug/pprof/{name}", handlePprof),
router.NewGetRoute("/vars", frameworkAdaptHandler(expvar.Handler())),
router.NewGetRoute("/pprof/", frameworkAdaptHandlerFunc(pprof.Index)),
router.NewGetRoute("/pprof/cmdline", frameworkAdaptHandlerFunc(pprof.Cmdline)),
router.NewGetRoute("/pprof/profile", frameworkAdaptHandlerFunc(pprof.Profile)),
router.NewGetRoute("/pprof/symbol", frameworkAdaptHandlerFunc(pprof.Symbol)),
router.NewGetRoute("/pprof/trace", frameworkAdaptHandlerFunc(pprof.Trace)),
router.NewGetRoute("/pprof/{name}", handlePprof),
}
}
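
The handlers re-homed above are plain http.HandlerFuncs from net/http/pprof, so they can be mounted under whatever prefix the enclosing router provides; a standalone sketch with an illustrative prefix and address:

```go
package main

import (
	"net/http"
	"net/http/pprof"
)

func main() {
	// The "/debug/pprof/" prefix and the listen address are illustrative.
	mux := http.NewServeMux()
	mux.HandleFunc("/debug/pprof/", pprof.Index)
	mux.HandleFunc("/debug/pprof/cmdline", pprof.Cmdline)
	mux.HandleFunc("/debug/pprof/profile", pprof.Profile)
	mux.HandleFunc("/debug/pprof/symbol", pprof.Symbol)
	mux.HandleFunc("/debug/pprof/trace", pprof.Trace)

	_ = http.ListenAndServe("127.0.0.1:6060", mux)
}
```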

View File

@@ -1,5 +1,5 @@
// FIXME(thaJeztah): remove once we are a module; the go:build directive prevents go from downgrading language version to go1.16:
//go:build go1.22
//go:build go1.21
package grpc // import "github.com/docker/docker/api/server/router/grpc"
@@ -9,14 +9,14 @@ import (
"os"
"strings"
"github.com/containerd/containerd/v2/defaults"
"github.com/containerd/containerd/defaults"
"github.com/containerd/log"
"github.com/docker/docker/api/server/router"
"github.com/docker/docker/internal/otelutil"
"github.com/moby/buildkit/util/grpcerrors"
"github.com/moby/buildkit/util/stack"
"github.com/moby/buildkit/util/tracing"
"go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc"
"go.opentelemetry.io/otel"
"golang.org/x/net/http2"
"google.golang.org/grpc"
)
@@ -29,9 +29,8 @@ type grpcRouter struct {
// NewRouter initializes a new grpc http router
func NewRouter(backends ...Backend) router.Router {
tp, _ := otelutil.NewTracerProvider(context.Background(), false)
opts := []grpc.ServerOption{
grpc.StatsHandler(tracing.ServerStatsHandler(otelgrpc.WithTracerProvider(tp))),
grpc.StatsHandler(tracing.ServerStatsHandler(otelgrpc.WithTracerProvider(otel.GetTracerProvider()))),
grpc.ChainUnaryInterceptor(unaryInterceptor, grpcerrors.UnaryServerInterceptor),
grpc.StreamInterceptor(grpcerrors.StreamServerInterceptor),
grpc.MaxRecvMsgSize(defaults.DefaultMaxRecvMsgSize),

View File

@@ -23,18 +23,17 @@ type Backend interface {
type imageBackend interface {
ImageDelete(ctx context.Context, imageRef string, force, prune bool) ([]image.DeleteResponse, error)
ImageHistory(ctx context.Context, imageName string, platform *ocispec.Platform) ([]*image.HistoryResponseItem, error)
ImageHistory(ctx context.Context, imageName string) ([]*image.HistoryResponseItem, error)
Images(ctx context.Context, opts image.ListOptions) ([]*image.Summary, error)
GetImage(ctx context.Context, refOrID string, options backend.GetImageOpts) (*dockerimage.Image, error)
ImageInspect(ctx context.Context, refOrID string, options backend.ImageInspectOpts) (*image.InspectResponse, error)
TagImage(ctx context.Context, id dockerimage.ID, newRef reference.Named) error
ImagesPrune(ctx context.Context, pruneFilters filters.Args) (*image.PruneReport, error)
}
type importExportBackend interface {
LoadImage(ctx context.Context, inTar io.ReadCloser, platform *ocispec.Platform, outStream io.Writer, quiet bool) error
LoadImage(ctx context.Context, inTar io.ReadCloser, outStream io.Writer, quiet bool) error
ImportImage(ctx context.Context, ref reference.Named, platform *ocispec.Platform, msg string, layerReader io.Reader, changes []string) (dockerimage.ID, error)
ExportImage(ctx context.Context, names []string, platform *ocispec.Platform, outStream io.Writer) error
ExportImage(ctx context.Context, names []string, outStream io.Writer) error
}
type registryBackend interface {

View File

@@ -2,6 +2,8 @@ package image // import "github.com/docker/docker/api/server/router/image"
import (
"github.com/docker/docker/api/server/router"
"github.com/docker/docker/image"
"github.com/docker/docker/layer"
"github.com/docker/docker/reference"
)
@@ -10,15 +12,19 @@ type imageRouter struct {
backend Backend
searcher Searcher
referenceBackend reference.Store
imageStore image.Store
layerStore layer.Store
routes []router.Route
}
// NewRouter initializes a new image router
func NewRouter(backend Backend, searcher Searcher, referenceBackend reference.Store) router.Router {
func NewRouter(backend Backend, searcher Searcher, referenceBackend reference.Store, imageStore image.Store, layerStore layer.Store) router.Router {
ir := &imageRouter{
backend: backend,
searcher: searcher,
referenceBackend: referenceBackend,
imageStore: imageStore,
layerStore: layerStore,
}
ir.initRoutes()
return ir

View File

@@ -14,6 +14,7 @@ import (
"github.com/distribution/reference"
"github.com/docker/docker/api"
"github.com/docker/docker/api/server/httputils"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/backend"
"github.com/docker/docker/api/types/filters"
imagetypes "github.com/docker/docker/api/types/image"
@@ -141,7 +142,7 @@ func (ir *imageRouter) postImagesCreate(ctx context.Context, w http.ResponseWrit
id, progressErr = ir.backend.ImportImage(ctx, tagRef, platform, comment, layerReader, r.Form["changes"])
if progressErr == nil {
_, _ = output.Write(streamformatter.FormatStatus("", "%v", id.String()))
output.Write(streamformatter.FormatStatus("", id.String()))
}
}
if progressErr != nil {
@@ -246,22 +247,7 @@ func (ir *imageRouter) getImagesGet(ctx context.Context, w http.ResponseWriter,
names = r.Form["names"]
}
var platform *ocispec.Platform
if versions.GreaterThanOrEqualTo(httputils.VersionFromContext(ctx), "1.48") {
if formPlatforms := r.Form["platform"]; len(formPlatforms) > 1 {
// TODO(thaJeztah): remove once we support multiple platforms: see https://github.com/moby/moby/issues/48759
return errdefs.InvalidParameter(errors.New("multiple platform parameters not supported"))
}
if formPlatform := r.Form.Get("platform"); formPlatform != "" {
p, err := httputils.DecodePlatform(formPlatform)
if err != nil {
return err
}
platform = p
}
}
if err := ir.backend.ExportImage(ctx, names, platform, output); err != nil {
if err := ir.backend.ExportImage(ctx, names, output); err != nil {
if !output.Flushed() {
return err
}
@@ -274,28 +260,13 @@ func (ir *imageRouter) postImagesLoad(ctx context.Context, w http.ResponseWriter
if err := httputils.ParseForm(r); err != nil {
return err
}
var platform *ocispec.Platform
if versions.GreaterThanOrEqualTo(httputils.VersionFromContext(ctx), "1.48") {
if formPlatforms := r.Form["platform"]; len(formPlatforms) > 1 {
// TODO(thaJeztah): remove once we support multiple platforms: see https://github.com/moby/moby/issues/48759
return errdefs.InvalidParameter(errors.New("multiple platform parameters not supported"))
}
if formPlatform := r.Form.Get("platform"); formPlatform != "" {
p, err := httputils.DecodePlatform(formPlatform)
if err != nil {
return err
}
platform = p
}
}
quiet := httputils.BoolValueOrDefault(r, "quiet", true)
w.Header().Set("Content-Type", "application/json")
output := ioutils.NewWriteFlusher(w)
defer output.Close()
if err := ir.backend.LoadImage(ctx, r.Body, platform, output, quiet); err != nil {
if err := ir.backend.LoadImage(ctx, r.Body, output, quiet); err != nil {
_, _ = output.Write(streamformatter.FormatError(err))
}
return nil
@@ -332,29 +303,14 @@ func (ir *imageRouter) deleteImages(ctx context.Context, w http.ResponseWriter,
}
func (ir *imageRouter) getImagesByName(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
}
var manifests bool
if r.Form.Get("manifests") != "" && versions.GreaterThanOrEqualTo(httputils.VersionFromContext(ctx), "1.48") {
manifests = httputils.BoolValue(r, "manifests")
}
imageInspect, err := ir.backend.ImageInspect(ctx, vars["name"], backend.ImageInspectOpts{
Manifests: manifests,
})
img, err := ir.backend.GetImage(ctx, vars["name"], backend.GetImageOpts{Details: true})
if err != nil {
return err
}
// Make sure we output empty arrays instead of nil. While Go nil slice is functionally equivalent to an empty slice,
// it matters for the JSON representation.
if imageInspect.RepoTags == nil {
imageInspect.RepoTags = []string{}
}
if imageInspect.RepoDigests == nil {
imageInspect.RepoDigests = []string{}
imageInspect, err := ir.toImageInspect(img)
if err != nil {
return err
}
version := httputils.VersionFromContext(ctx)
@@ -371,12 +327,77 @@ func (ir *imageRouter) getImagesByName(ctx context.Context, w http.ResponseWrite
imageInspect.Container = "" //nolint:staticcheck // ignore SA1019: field is deprecated, but still set on API < v1.45.
imageInspect.ContainerConfig = nil //nolint:staticcheck // ignore SA1019: field is deprecated, but still set on API < v1.45.
}
if versions.LessThan(version, "1.48") {
imageInspect.Descriptor = nil
}
return httputils.WriteJSON(w, http.StatusOK, imageInspect)
}
func (ir *imageRouter) toImageInspect(img *image.Image) (*types.ImageInspect, error) {
var repoTags, repoDigests []string
for _, ref := range img.Details.References {
switch ref.(type) {
case reference.NamedTagged:
repoTags = append(repoTags, reference.FamiliarString(ref))
case reference.Canonical:
repoDigests = append(repoDigests, reference.FamiliarString(ref))
}
}
comment := img.Comment
if len(comment) == 0 && len(img.History) > 0 {
comment = img.History[len(img.History)-1].Comment
}
// Make sure we output empty arrays instead of nil.
if repoTags == nil {
repoTags = []string{}
}
if repoDigests == nil {
repoDigests = []string{}
}
var created string
if img.Created != nil {
created = img.Created.Format(time.RFC3339Nano)
}
return &types.ImageInspect{
ID: img.ID().String(),
RepoTags: repoTags,
RepoDigests: repoDigests,
Parent: img.Parent.String(),
Comment: comment,
Created: created,
Container: img.Container, //nolint:staticcheck // ignore SA1019: field is deprecated, but still set on API < v1.45.
ContainerConfig: &img.ContainerConfig, //nolint:staticcheck // ignore SA1019: field is deprecated, but still set on API < v1.45.
DockerVersion: img.DockerVersion,
Author: img.Author,
Config: img.Config,
Architecture: img.Architecture,
Variant: img.Variant,
Os: img.OperatingSystem(),
OsVersion: img.OSVersion,
Size: img.Details.Size,
GraphDriver: types.GraphDriverData{
Name: img.Details.Driver,
Data: img.Details.Metadata,
},
RootFS: rootFSToAPIType(img.RootFS),
Metadata: imagetypes.Metadata{
LastTagTime: img.Details.LastUpdated,
},
}, nil
}
func rootFSToAPIType(rootfs *image.RootFS) types.RootFS {
var layers []string
for _, l := range rootfs.DiffIDs {
layers = append(layers, l.String())
}
return types.RootFS{
Type: rootfs.Type,
Layers: layers,
}
}
func (ir *imageRouter) getImagesJSON(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
@@ -419,7 +440,6 @@ func (ir *imageRouter) getImagesJSON(ctx context.Context, w http.ResponseWriter,
useNone := versions.LessThan(version, "1.43")
withVirtualSize := versions.LessThan(version, "1.44")
noDescriptor := versions.LessThan(version, "1.48")
for _, img := range images {
if useNone {
if len(img.RepoTags) == 0 && len(img.RepoDigests) == 0 {
@@ -437,30 +457,13 @@ func (ir *imageRouter) getImagesJSON(ctx context.Context, w http.ResponseWriter,
if withVirtualSize {
img.VirtualSize = img.Size //nolint:staticcheck // ignore SA1019: field is deprecated, but still set on API < v1.44.
}
if noDescriptor {
img.Descriptor = nil
}
}
return httputils.WriteJSON(w, http.StatusOK, images)
}
func (ir *imageRouter) getImagesHistory(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
}
var platform *ocispec.Platform
if versions.GreaterThanOrEqualTo(httputils.VersionFromContext(ctx), "1.48") {
if formPlatform := r.Form.Get("platform"); formPlatform != "" {
p, err := httputils.DecodePlatform(formPlatform)
if err != nil {
return err
}
platform = p
}
}
history, err := ir.backend.ImageHistory(ctx, vars["name"], platform)
history, err := ir.backend.ImageHistory(ctx, vars["name"])
if err != nil {
return err
}

View File

@@ -75,13 +75,13 @@ func (e invalidRequestError) Error() string {
func (e invalidRequestError) InvalidParameter() {}
type ambiguousResultsError string
type ambigousResultsError string
func (e ambiguousResultsError) Error() string {
func (e ambigousResultsError) Error() string {
return "network " + string(e) + " is ambiguous"
}
func (ambiguousResultsError) InvalidParameter() {}
func (ambigousResultsError) InvalidParameter() {}
func (n *networkRouter) getNetwork(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
@@ -182,7 +182,7 @@ func (n *networkRouter) getNetwork(ctx context.Context, w http.ResponseWriter, r
}
}
if len(listByFullName) > 1 {
return errors.Wrapf(ambiguousResultsError(term), "%d matches found based on name", len(listByFullName))
return errors.Wrapf(ambigousResultsError(term), "%d matches found based on name", len(listByFullName))
}
// Find based on partial ID, returns true only if no duplicates
@@ -192,7 +192,7 @@ func (n *networkRouter) getNetwork(ctx context.Context, w http.ResponseWriter, r
}
}
if len(listByPartialID) > 1 {
return errors.Wrapf(ambiguousResultsError(term), "%d matches found based on ID prefix", len(listByPartialID))
return errors.Wrapf(ambigousResultsError(term), "%d matches found based on ID prefix", len(listByPartialID))
}
return libnetwork.ErrNoSuchNetwork(term)
@@ -212,13 +212,6 @@ func (n *networkRouter) postNetworkCreate(ctx context.Context, w http.ResponseWr
return libnetwork.NetworkNameError(create.Name)
}
version := httputils.VersionFromContext(ctx)
// EnableIPv4 was introduced in API 1.48.
if versions.LessThan(version, "1.48") {
create.EnableIPv4 = nil
}
// For a Swarm-scoped network, this call to backend.CreateNetwork is used to
// validate the configuration. The network will not be created but, if the
// configuration is valid, ManagerRedirectError will be returned and handled

View File

@@ -209,6 +209,10 @@ func (sr *swarmRouter) createService(ctx context.Context, w http.ResponseWriter,
if err := httputils.ReadJSON(r, &service); err != nil {
return err
}
// TODO(thaJeztah): remove logentries check and migration code in release v26.0.0.
if service.TaskTemplate.LogDriver != nil && service.TaskTemplate.LogDriver.Name == "logentries" {
return errdefs.InvalidParameter(errors.New("the logentries logging driver has been deprecated and removed"))
}
// Get returns "" if the header does not exist
encodedAuth := r.Header.Get(registry.AuthHeader)
@@ -237,6 +241,10 @@ func (sr *swarmRouter) updateService(ctx context.Context, w http.ResponseWriter,
if err := httputils.ReadJSON(r, &service); err != nil {
return err
}
// TODO(thaJeztah): remove logentries check and migration code in release v26.0.0.
if service.TaskTemplate.LogDriver != nil && service.TaskTemplate.LogDriver.Name == "logentries" {
return errdefs.InvalidParameter(errors.New("the logentries logging driver has been deprecated and removed"))
}
rawVersion := r.URL.Query().Get("version")
version, err := strconv.ParseUint(rawVersion, 10, 64)

View File

@@ -1,5 +1,5 @@
// FIXME(thaJeztah): remove once we are a module; the go:build directive prevents go from downgrading language version to go1.16:
//go:build go1.22
//go:build go1.21
package system // import "github.com/docker/docker/api/server/router/system"

View File

@@ -100,18 +100,6 @@ func (s *systemRouter) getInfo(ctx context.Context, w http.ResponseWriter, r *ht
// Containerd field introduced in API v1.46.
info.Containerd = nil
}
if versions.LessThan(version, "1.47") {
// Field is omitted in API 1.48 and up, but should still be included
// in older versions, even if no values are set.
info.RegistryConfig.AllowNondistributableArtifactsCIDRs = []*registry.NetIPNet{}
info.RegistryConfig.AllowNondistributableArtifactsHostnames = []string{}
}
// TODO(thaJeztah): Expected commits are deprecated, and should no longer be set in API 1.49.
info.ContainerdCommit.Expected = info.ContainerdCommit.ID //nolint:staticcheck // ignore SA1019: field is deprecated, but still used on API < v1.49.
info.RuncCommit.Expected = info.RuncCommit.ID //nolint:staticcheck // ignore SA1019: field is deprecated, but still used on API < v1.49.
info.InitCommit.Expected = info.InitCommit.ID //nolint:staticcheck // ignore SA1019: field is deprecated, but still used on API < v1.49.
if versions.GreaterThanOrEqualTo(version, "1.42") {
info.KernelMemory = false
}

View File

@@ -188,16 +188,17 @@ func TestCreateRegularVolume(t *testing.T) {
Driver: "foodriver",
}
var buf bytes.Buffer
err := json.NewEncoder(&buf).Encode(volumeCreate)
assert.NilError(t, err)
buf := bytes.Buffer{}
e := json.NewEncoder(&buf)
e.Encode(volumeCreate)
ctx := context.WithValue(context.Background(), httputils.APIVersionKey{}, clusterVolumesVersion)
req := httptest.NewRequest("POST", "/volumes/create", &buf)
req.Header.Add("Content-Type", "application/json")
resp := httptest.NewRecorder()
err = v.postVolumesCreate(ctx, resp, req, nil)
err := v.postVolumesCreate(ctx, resp, req, nil)
assert.NilError(t, err)
respVolume := volume.Volume{}
@@ -226,16 +227,16 @@ func TestCreateSwarmVolumeNoSwarm(t *testing.T) {
Driver: "someCSI",
}
var buf bytes.Buffer
err := json.NewEncoder(&buf).Encode(volumeCreate)
assert.NilError(t, err)
buf := bytes.Buffer{}
json.NewEncoder(&buf).Encode(volumeCreate)
ctx := context.WithValue(context.Background(), httputils.APIVersionKey{}, clusterVolumesVersion)
req := httptest.NewRequest("POST", "/volumes/create", &buf)
req.Header.Add("Content-Type", "application/json")
resp := httptest.NewRecorder()
err = v.postVolumesCreate(ctx, resp, req, nil)
err := v.postVolumesCreate(ctx, resp, req, nil)
assert.Assert(t, err != nil)
assert.Assert(t, errdefs.IsUnavailable(err))
}
@@ -255,16 +256,16 @@ func TestCreateSwarmVolumeNotManager(t *testing.T) {
Driver: "someCSI",
}
var buf bytes.Buffer
err := json.NewEncoder(&buf).Encode(volumeCreate)
assert.NilError(t, err)
buf := bytes.Buffer{}
json.NewEncoder(&buf).Encode(volumeCreate)
ctx := context.WithValue(context.Background(), httputils.APIVersionKey{}, clusterVolumesVersion)
req := httptest.NewRequest("POST", "/volumes/create", &buf)
req.Header.Add("Content-Type", "application/json")
resp := httptest.NewRecorder()
err = v.postVolumesCreate(ctx, resp, req, nil)
err := v.postVolumesCreate(ctx, resp, req, nil)
assert.Assert(t, err != nil)
assert.Assert(t, errdefs.IsUnavailable(err))
}
@@ -287,16 +288,16 @@ func TestCreateVolumeCluster(t *testing.T) {
Driver: "someCSI",
}
var buf bytes.Buffer
err := json.NewEncoder(&buf).Encode(volumeCreate)
assert.NilError(t, err)
buf := bytes.Buffer{}
json.NewEncoder(&buf).Encode(volumeCreate)
ctx := context.WithValue(context.Background(), httputils.APIVersionKey{}, clusterVolumesVersion)
req := httptest.NewRequest("POST", "/volumes/create", &buf)
req.Header.Add("Content-Type", "application/json")
resp := httptest.NewRecorder()
err = v.postVolumesCreate(ctx, resp, req, nil)
err := v.postVolumesCreate(ctx, resp, req, nil)
assert.NilError(t, err)
respVolume := volume.Volume{}
@@ -334,17 +335,15 @@ func TestUpdateVolume(t *testing.T) {
Spec: &volume.ClusterVolumeSpec{},
}
var buf bytes.Buffer
err := json.NewEncoder(&buf).Encode(volumeUpdate)
assert.NilError(t, err)
buf := bytes.Buffer{}
json.NewEncoder(&buf).Encode(volumeUpdate)
ctx := context.WithValue(context.Background(), httputils.APIVersionKey{}, clusterVolumesVersion)
req := httptest.NewRequest("POST", "/volumes/vol1/update?version=0", &buf)
req.Header.Add("Content-Type", "application/json")
resp := httptest.NewRecorder()
err = v.putVolumesUpdate(ctx, resp, req, map[string]string{"name": "vol1"})
err := v.putVolumesUpdate(ctx, resp, req, map[string]string{"name": "vol1"})
assert.NilError(t, err)
assert.Equal(t, c.volumes["vol1"].ClusterVolume.Meta.Version.Index, uint64(1))
@@ -363,17 +362,15 @@ func TestUpdateVolumeNoSwarm(t *testing.T) {
Spec: &volume.ClusterVolumeSpec{},
}
var buf bytes.Buffer
err := json.NewEncoder(&buf).Encode(volumeUpdate)
assert.NilError(t, err)
buf := bytes.Buffer{}
json.NewEncoder(&buf).Encode(volumeUpdate)
ctx := context.WithValue(context.Background(), httputils.APIVersionKey{}, clusterVolumesVersion)
req := httptest.NewRequest("POST", "/volumes/vol1/update?version=0", &buf)
req.Header.Add("Content-Type", "application/json")
resp := httptest.NewRecorder()
err = v.putVolumesUpdate(ctx, resp, req, map[string]string{"name": "vol1"})
err := v.putVolumesUpdate(ctx, resp, req, map[string]string{"name": "vol1"})
assert.Assert(t, err != nil)
assert.Assert(t, errdefs.IsUnavailable(err))
}
@@ -395,17 +392,15 @@ func TestUpdateVolumeNotFound(t *testing.T) {
Spec: &volume.ClusterVolumeSpec{},
}
var buf bytes.Buffer
err := json.NewEncoder(&buf).Encode(volumeUpdate)
assert.NilError(t, err)
buf := bytes.Buffer{}
json.NewEncoder(&buf).Encode(volumeUpdate)
ctx := context.WithValue(context.Background(), httputils.APIVersionKey{}, clusterVolumesVersion)
req := httptest.NewRequest("POST", "/volumes/vol1/update?version=0", &buf)
req.Header.Add("Content-Type", "application/json")
resp := httptest.NewRecorder()
err = v.putVolumesUpdate(ctx, resp, req, map[string]string{"name": "vol1"})
err := v.putVolumesUpdate(ctx, resp, req, map[string]string{"name": "vol1"})
assert.Assert(t, err != nil)
assert.Assert(t, errdefs.IsNotFound(err))
}
@@ -586,7 +581,7 @@ type fakeVolumeBackend struct {
}
func (b *fakeVolumeBackend) List(_ context.Context, _ filters.Args) ([]*volume.Volume, []string, error) {
var volumes []*volume.Volume
volumes := []*volume.Volume{}
for _, v := range b.volumes {
volumes = append(volumes, v)
}
@@ -676,12 +671,13 @@ func (c *fakeClusterBackend) GetVolume(nameOrID string) (volume.Volume, error) {
return volume.Volume{}, errdefs.NotFound(fmt.Errorf("volume %s not found", nameOrID))
}
func (c *fakeClusterBackend) GetVolumes(_ volume.ListOptions) ([]*volume.Volume, error) {
func (c *fakeClusterBackend) GetVolumes(options volume.ListOptions) ([]*volume.Volume, error) {
if err := c.checkSwarm(); err != nil {
return nil, err
}
var volumes []*volume.Volume
volumes := []*volume.Volume{}
for _, v := range c.volumes {
volumes = append(volumes, v)
}

View File

@@ -9,6 +9,7 @@ import (
"github.com/docker/docker/api/server/httputils"
"github.com/docker/docker/api/server/middleware"
"github.com/docker/docker/api/server/router"
"github.com/docker/docker/api/server/router/debug"
"github.com/docker/docker/api/types"
"github.com/docker/docker/dockerversion"
"github.com/gorilla/mux"
@@ -65,22 +66,26 @@ func (s *Server) makeHTTPHandler(handler httputils.APIFunc, operation string) ht
}
// CreateMux returns a new mux with all the routers registered.
func (s *Server) CreateMux(ctx context.Context, routers ...router.Router) *mux.Router {
log.G(ctx).Debug("Registering routers")
func (s *Server) CreateMux(routers ...router.Router) *mux.Router {
m := mux.NewRouter()
log.G(context.TODO()).Debug("Registering routers")
for _, apiRouter := range routers {
for _, r := range apiRouter.Routes() {
if ctx.Err() != nil {
return m
}
log.G(ctx).WithFields(log.Fields{"method": r.Method(), "path": r.Path()}).Debug("Registering route")
f := s.makeHTTPHandler(r.Handler(), r.Method()+" "+r.Path())
log.G(context.TODO()).Debugf("Registering %s, %s", r.Method(), r.Path())
m.Path(versionMatcher + r.Path()).Methods(r.Method()).Handler(f)
m.Path(r.Path()).Methods(r.Method()).Handler(f)
}
}
// Setup handlers for undefined paths and methods
debugRouter := debug.NewRouter()
for _, r := range debugRouter.Routes() {
f := s.makeHTTPHandler(r.Handler(), r.Method()+" "+r.Path())
m.Path("/debug" + r.Path()).Handler(f)
}
notFoundHandler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
_ = httputils.WriteJSON(w, http.StatusNotFound, &types.ErrorResponse{
Message: "page not found",

File diff suppressed because it is too large

View File

@@ -92,13 +92,6 @@ type ContainerStatsConfig struct {
OutStream func() io.Writer
}
// ContainerInspectOptions defines options for the backend.ContainerInspect
// call.
type ContainerInspectOptions struct {
// Size controls whether to propagate the container's size fields.
Size bool
}
// ExecStartConfig holds the options to start container's exec.
type ExecStartConfig struct {
Stdin io.Reader
@@ -148,11 +141,7 @@ type CreateImageConfig struct {
// from the backend.
type GetImageOpts struct {
Platform *ocispec.Platform
}
// ImageInspectOpts holds parameters to inspect an image.
type ImageInspectOpts struct {
Manifests bool
Details bool
}
// CommitConfig is the configuration for creating an image as part of a build.

View File

@@ -11,7 +11,7 @@ import (
"github.com/docker/docker/api/types/registry"
)
// NewHijackedResponse initializes a [HijackedResponse] type.
// NewHijackedResponse intializes a HijackedResponse type
func NewHijackedResponse(conn net.Conn, mediaType string) HijackedResponse {
return HijackedResponse{Conn: conn, Reader: bufio.NewReader(conn), mediaType: mediaType}
}
@@ -129,6 +129,14 @@ type ImageBuildResponse struct {
OSType string
}
// RequestPrivilegeFunc is a function interface that
// clients can supply to retry operations after
// getting an authorization error.
// This function returns the registry authentication
// header value in base 64 format, or an error
// if the privilege request fails.
type RequestPrivilegeFunc func(context.Context) (string, error)
// NodeListOptions holds parameters to list nodes with.
type NodeListOptions struct {
Filters filters.Args
@@ -227,18 +235,11 @@ type PluginDisableOptions struct {
// PluginInstallOptions holds parameters to install a plugin.
type PluginInstallOptions struct {
Disabled bool
AcceptAllPermissions bool
RegistryAuth string // RegistryAuth is the base64 encoded credentials for the registry
RemoteRef string // RemoteRef is the plugin name on the registry
// PrivilegeFunc is a function that clients can supply to retry operations
// after getting an authorization error. This function returns the registry
// authentication header value in base64 encoded format, or an error if the
// privilege request fails.
//
// For details, refer to [github.com/docker/docker/api/types/registry.RequestAuthConfig].
PrivilegeFunc func(context.Context) (string, error)
Disabled bool
AcceptAllPermissions bool
RegistryAuth string // RegistryAuth is the base64 encoded credentials for the registry
RemoteRef string // RemoteRef is the plugin name on the registry
PrivilegeFunc RequestPrivilegeFunc
AcceptPermissionsFunc func(context.Context, PluginPrivileges) (bool, error)
Args []string
}
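For context, the `PrivilegeFunc` field above expects a callback of shape `func(context.Context) (string, error)` that returns an `X-Registry-Auth` header value. A minimal sketch of such a callback, using `registry.EncodeAuthConfig` from `api/types/registry` — the credentials and names below are illustrative placeholders, not taken from this diff:

```go
package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/api/types/registry"
)

func main() {
	// Illustrative PrivilegeFunc: re-encode credentials when the registry
	// reports an authorization error. Username/password are placeholders.
	privilegeFunc := func(ctx context.Context) (string, error) {
		return registry.EncodeAuthConfig(registry.AuthConfig{
			Username: "example-user",
			Password: "example-secret",
		})
	}

	header, err := privilegeFunc(context.Background())
	fmt.Println(header, err) // base64url-encoded AuthConfig JSON, <nil>
}
```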

View File

@@ -1,7 +0,0 @@
package container
import "github.com/docker/docker/api/types/common"
// CommitResponse response for the commit API call, containing the ID of the
// image that was produced.
type CommitResponse = common.IDResponse

View File

@@ -4,22 +4,8 @@ import (
"io"
"os"
"time"
"github.com/docker/docker/api/types/mount"
"github.com/docker/docker/api/types/storage"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
)
// ContainerUpdateOKBody OK response to ContainerUpdate operation
//
// Deprecated: use [UpdateResponse]. This alias will be removed in the next release.
type ContainerUpdateOKBody = UpdateResponse
// ContainerTopOKBody OK response to ContainerTop operation
//
// Deprecated: use [TopResponse]. This alias will be removed in the next release.
type ContainerTopOKBody = TopResponse
// PruneReport contains the response for Engine API:
// POST "/containers/prune"
type PruneReport struct {
@@ -56,133 +42,3 @@ type StatsResponseReader struct {
Body io.ReadCloser `json:"body"`
OSType string `json:"ostype"`
}
// MountPoint represents a mount point configuration inside the container.
// This is used for reporting the mountpoints in use by a container.
type MountPoint struct {
// Type is the type of mount, see `Type<foo>` definitions in
// github.com/docker/docker/api/types/mount.Type
Type mount.Type `json:",omitempty"`
// Name is the name reference to the underlying data defined by `Source`
// e.g., the volume name.
Name string `json:",omitempty"`
// Source is the source location of the mount.
//
// For volumes, this contains the storage location of the volume (within
// `/var/lib/docker/volumes/`). For bind-mounts, and `npipe`, this contains
// the source (host) part of the bind-mount. For `tmpfs` mount points, this
// field is empty.
Source string
// Destination is the path relative to the container root (`/`) where the
// Source is mounted inside the container.
Destination string
// Driver is the volume driver used to create the volume (if it is a volume).
Driver string `json:",omitempty"`
// Mode is a comma separated list of options supplied by the user when
// creating the bind/volume mount.
//
// The default is platform-specific (`"z"` on Linux, empty on Windows).
Mode string
// RW indicates whether the mount is mounted writable (read-write).
RW bool
// Propagation describes how mounts are propagated from the host into the
// mount point, and vice-versa. Refer to the Linux kernel documentation
// for details:
// https://www.kernel.org/doc/Documentation/filesystems/sharedsubtree.txt
//
// This field is not used on Windows.
Propagation mount.Propagation
}
// State stores container's running state
// it's part of ContainerJSONBase and returned by "inspect" command
type State struct {
Status string // String representation of the container state. Can be one of "created", "running", "paused", "restarting", "removing", "exited", or "dead"
Running bool
Paused bool
Restarting bool
OOMKilled bool
Dead bool
Pid int
ExitCode int
Error string
StartedAt string
FinishedAt string
Health *Health `json:",omitempty"`
}
// Summary contains response of Engine API:
// GET "/containers/json"
type Summary struct {
ID string `json:"Id"`
Names []string
Image string
ImageID string
ImageManifestDescriptor *ocispec.Descriptor `json:"ImageManifestDescriptor,omitempty"`
Command string
Created int64
Ports []Port
SizeRw int64 `json:",omitempty"`
SizeRootFs int64 `json:",omitempty"`
Labels map[string]string
State string
Status string
HostConfig struct {
NetworkMode string `json:",omitempty"`
Annotations map[string]string `json:",omitempty"`
}
NetworkSettings *NetworkSettingsSummary
Mounts []MountPoint
}
// ContainerJSONBase contains response of Engine API GET "/containers/{name:.*}/json"
// for API version 1.18 and older.
//
// TODO(thaJeztah): combine ContainerJSONBase and InspectResponse into a single struct.
// The split between ContainerJSONBase (ContainerJSONBase) and InspectResponse (InspectResponse)
// was done in commit 6deaa58ba5f051039643cedceee97c8695e2af74 (https://github.com/moby/moby/pull/13675).
// ContainerJSONBase contained all fields for API < 1.19, and InspectResponse
// held fields that were added in API 1.19 and up. Given that the minimum
// supported API version is now 1.24, we no longer use the separate type.
type ContainerJSONBase struct {
ID string `json:"Id"`
Created string
Path string
Args []string
State *State
Image string
ResolvConfPath string
HostnamePath string
HostsPath string
LogPath string
Name string
RestartCount int
Driver string
Platform string
MountLabel string
ProcessLabel string
AppArmorProfile string
ExecIDs []string
HostConfig *HostConfig
GraphDriver storage.DriverData
SizeRw *int64 `json:",omitempty"`
SizeRootFs *int64 `json:",omitempty"`
}
// InspectResponse is the response for the GET "/containers/{name:.*}/json"
// endpoint.
type InspectResponse struct {
*ContainerJSONBase
Mounts []MountPoint
Config *Config
NetworkSettings *NetworkSettings
// ImageManifestDescriptor is the descriptor of a platform-specific manifest of the image used to create the container.
ImageManifestDescriptor *ocispec.Descriptor `json:"ImageManifestDescriptor,omitempty"`
}

View File

@@ -0,0 +1,22 @@
package container // import "github.com/docker/docker/api/types/container"
// ----------------------------------------------------------------------------
// Code generated by `swagger generate operation`. DO NOT EDIT.
//
// See hack/generate-swagger-api.sh
// ----------------------------------------------------------------------------
// ContainerTopOKBody OK response to ContainerTop operation
// swagger:model ContainerTopOKBody
type ContainerTopOKBody struct {
// Each process running in the container, where each is process
// is an array of values corresponding to the titles.
//
// Required: true
Processes [][]string `json:"Processes"`
// The ps column titles
// Required: true
Titles []string `json:"Titles"`
}

View File

@@ -0,0 +1,16 @@
package container // import "github.com/docker/docker/api/types/container"
// ----------------------------------------------------------------------------
// Code generated by `swagger generate operation`. DO NOT EDIT.
//
// See hack/generate-swagger-api.sh
// ----------------------------------------------------------------------------
// ContainerUpdateOKBody OK response to ContainerUpdate operation
// swagger:model ContainerUpdateOKBody
type ContainerUpdateOKBody struct {
// warnings
// Required: true
Warnings []string `json:"Warnings"`
}

View File

@@ -1,13 +1,5 @@
package container
import "github.com/docker/docker/api/types/common"
// ExecCreateResponse is the response for a successful exec-create request.
// It holds the ID of the exec that was created.
//
// TODO(thaJeztah): make this a distinct type.
type ExecCreateResponse = common.IDResponse
// ExecOptions is a small subset of the Config struct that holds the configuration
// for the exec feature of docker.
type ExecOptions struct {

View File

@@ -1,26 +0,0 @@
package container
import "time"
// Health states
const (
NoHealthcheck = "none" // Indicates there is no healthcheck
Starting = "starting" // Starting indicates that the container is not yet ready
Healthy = "healthy" // Healthy indicates that the container is running correctly
Unhealthy = "unhealthy" // Unhealthy indicates that the container has a problem
)
// Health stores information about the container's healthcheck results
type Health struct {
Status string // Status is one of [Starting], [Healthy] or [Unhealthy].
FailingStreak int // FailingStreak is the number of consecutive failures
Log []*HealthcheckResult // Log contains the last few results (oldest first)
}
// HealthcheckResult stores information about a single run of a healthcheck probe
type HealthcheckResult struct {
Start time.Time // Start is the time this check started
End time.Time // End is the time this check ended
ExitCode int // ExitCode meanings: 0=healthy, 1=unhealthy, 2=reserved (considered unhealthy), else=error running probe
Output string // Output from last check
}

View File

@@ -1,7 +1,6 @@
package container // import "github.com/docker/docker/api/types/container"
import (
"errors"
"fmt"
"strings"
@@ -10,7 +9,7 @@ import (
"github.com/docker/docker/api/types/network"
"github.com/docker/docker/api/types/strslice"
"github.com/docker/go-connections/nat"
"github.com/docker/go-units"
units "github.com/docker/go-units"
)
// CgroupnsMode represents the cgroup namespace mode of the container
@@ -326,12 +325,12 @@ func ValidateRestartPolicy(policy RestartPolicy) error {
if policy.MaximumRetryCount < 0 {
msg += " and cannot be negative"
}
return &errInvalidParameter{errors.New(msg)}
return &errInvalidParameter{fmt.Errorf(msg)}
}
return nil
case RestartPolicyOnFailure:
if policy.MaximumRetryCount < 0 {
return &errInvalidParameter{errors.New("invalid restart policy: maximum retry count cannot be negative")}
return &errInvalidParameter{fmt.Errorf("invalid restart policy: maximum retry count cannot be negative")}
}
return nil
case "":

View File

@@ -3,6 +3,7 @@ package container
import (
"testing"
"github.com/docker/docker/errdefs"
"gotest.tools/v3/assert"
is "gotest.tools/v3/assert/cmp"
)
@@ -90,24 +91,15 @@ func TestValidateRestartPolicy(t *testing.T) {
}
for _, tc := range tests {
tc := tc
t.Run(tc.name, func(t *testing.T) {
err := ValidateRestartPolicy(tc.input)
if tc.expectedErr == "" {
assert.Check(t, err)
} else {
assert.Check(t, is.ErrorType(err, isInvalidParameter))
assert.Check(t, is.ErrorType(err, errdefs.IsInvalidParameter))
assert.Check(t, is.Error(err, tc.expectedErr))
}
})
}
}
// isInvalidParameter is a minimal implementation of [github.com/docker/docker/errdefs.IsInvalidParameter],
// because this was the only import of that package in api/types, which is the
// package imported by external users.
func isInvalidParameter(err error) bool {
_, ok := err.(interface {
InvalidParameter()
})
return ok
}

View File

@@ -1,56 +0,0 @@
package container
import (
"github.com/docker/docker/api/types/network"
"github.com/docker/go-connections/nat"
)
// NetworkSettings exposes the network settings in the api
type NetworkSettings struct {
NetworkSettingsBase
DefaultNetworkSettings
Networks map[string]*network.EndpointSettings
}
// NetworkSettingsBase holds networking state for a container when inspecting it.
type NetworkSettingsBase struct {
Bridge string // Bridge contains the name of the default bridge interface iff it was set through the daemon --bridge flag.
SandboxID string // SandboxID uniquely represents a container's network stack
SandboxKey string // SandboxKey identifies the sandbox
Ports nat.PortMap // Ports is a collection of PortBinding indexed by Port
// HairpinMode specifies if hairpin NAT should be enabled on the virtual interface
//
// Deprecated: This field is never set and will be removed in a future release.
HairpinMode bool
// LinkLocalIPv6Address is an IPv6 unicast address using the link-local prefix
//
// Deprecated: This field is never set and will be removed in a future release.
LinkLocalIPv6Address string
// LinkLocalIPv6PrefixLen is the prefix length of an IPv6 unicast address
//
// Deprecated: This field is never set and will be removed in a future release.
LinkLocalIPv6PrefixLen int
SecondaryIPAddresses []network.Address // Deprecated: This field is never set and will be removed in a future release.
SecondaryIPv6Addresses []network.Address // Deprecated: This field is never set and will be removed in a future release.
}
// DefaultNetworkSettings holds network information
// during the 2 release deprecation period.
// It will be removed in Docker 1.11.
type DefaultNetworkSettings struct {
EndpointID string // EndpointID uniquely represents a service endpoint in a Sandbox
Gateway string // Gateway holds the gateway address for the network
GlobalIPv6Address string // GlobalIPv6Address holds network's global IPv6 address
GlobalIPv6PrefixLen int // GlobalIPv6PrefixLen represents mask length of network's global IPv6 address
IPAddress string // IPAddress holds the IPv4 address for the network
IPPrefixLen int // IPPrefixLen represents mask length of network's IPv4 address
IPv6Gateway string // IPv6Gateway holds gateway address specific for IPv6
MacAddress string // MacAddress holds the MAC address for the network
}
// NetworkSettingsSummary provides a summary of container's networks
// in /containers/json
type NetworkSettingsSummary struct {
Networks map[string]*network.EndpointSettings
}

View File

@@ -148,15 +148,7 @@ type PidsStats struct {
}
// Stats is Ultimate struct aggregating all types of stats of one container
//
// Deprecated: use [StatsResponse] instead. This type will be removed in the next release.
type Stats = StatsResponse
// StatsResponse aggregates all types of stats of one container.
type StatsResponse struct {
Name string `json:"name,omitempty"`
ID string `json:"id,omitempty"`
type Stats struct {
// Common stats
Read time.Time `json:"read"`
PreRead time.Time `json:"preread"`
@@ -170,8 +162,20 @@ type StatsResponse struct {
StorageStats StorageStats `json:"storage_stats,omitempty"`
// Shared stats
CPUStats CPUStats `json:"cpu_stats,omitempty"`
PreCPUStats CPUStats `json:"precpu_stats,omitempty"` // "Pre"="Previous"
MemoryStats MemoryStats `json:"memory_stats,omitempty"`
Networks map[string]NetworkStats `json:"networks,omitempty"`
CPUStats CPUStats `json:"cpu_stats,omitempty"`
PreCPUStats CPUStats `json:"precpu_stats,omitempty"` // "Pre"="Previous"
MemoryStats MemoryStats `json:"memory_stats,omitempty"`
}
// StatsResponse is newly used Networks.
//
// TODO(thaJeztah): unify with [Stats]. This wrapper was to account for pre-api v1.21 changes, see https://github.com/moby/moby/commit/d3379946ec96fb6163cb8c4517d7d5a067045801
type StatsResponse struct {
Stats
Name string `json:"name,omitempty"`
ID string `json:"id,omitempty"`
// Networks request version >=1.21
Networks map[string]NetworkStats `json:"networks,omitempty"`
}
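A side note on the hunk above: whether the `Stats` fields are embedded in `StatsResponse` or declared inline, the embedded fields are promoted, so the JSON wire shape is identical in both layouts. A minimal decoding sketch — the payload below is a made-up fragment for illustration only:

```go
package main

import (
	"encoding/json"
	"fmt"

	"github.com/docker/docker/api/types/container"
)

func main() {
	// Made-up stats fragment; only a few fields, for illustration.
	payload := []byte(`{"name":"/web","id":"abc123","cpu_stats":{"cpu_usage":{"total_usage":42}}}`)

	var resp container.StatsResponse
	if err := json.Unmarshal(payload, &resp); err != nil {
		panic(err)
	}
	// Embedded/flattened fields decode identically, so callers are unaffected.
	fmt.Println(resp.Name, resp.CPUStats.CPUUsage.TotalUsage) // /web 42
}
```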

View File

@@ -1,18 +0,0 @@
package container
// This file was generated by the swagger tool.
// Editing this file might prove futile when you re-run the swagger generate command
// TopResponse ContainerTopResponse
//
// Container "top" response.
// swagger:model TopResponse
type TopResponse struct {
// Each process running in the container, where each process
// is an array of values corresponding to the titles.
Processes [][]string `json:"Processes"`
// The ps column titles
Titles []string `json:"Titles"`
}

View File

@@ -1,14 +0,0 @@
package container
// This file was generated by the swagger tool.
// Editing this file might prove futile when you re-run the swagger generate command
// UpdateResponse ContainerUpdateResponse
//
// Response for a successful container-update.
// swagger:model UpdateResponse
type UpdateResponse struct {
// Warnings encountered when updating the container.
Warnings []string `json:"Warnings"`
}

View File

@@ -22,3 +22,16 @@ func (e invalidFilter) Error() string {
// InvalidParameter marks this error as ErrInvalidParameter
func (e invalidFilter) InvalidParameter() {}
// unreachableCode is an error indicating that the code path was not expected to be reached.
type unreachableCode struct {
Filter string
Value []string
}
// System marks this error as ErrSystem
func (e unreachableCode) System() {}
func (e unreachableCode) Error() string {
return fmt.Sprintf("unreachable code reached for filter: %q with values: %s", e.Filter, e.Value)
}

View File

@@ -196,10 +196,11 @@ func (args Args) Match(field, source string) bool {
}
// GetBoolOrDefault returns a boolean value of the key if the key is present
// and is interpretable as a boolean value. Otherwise the default value is returned.
// and is intepretable as a boolean value. Otherwise the default value is returned.
// Error is not nil only if the filter values are not valid boolean or are conflicting.
func (args Args) GetBoolOrDefault(key string, defaultValue bool) (bool, error) {
fieldValues, ok := args.fields[key]
if !ok {
return defaultValue, nil
}
@@ -210,11 +211,20 @@ func (args Args) GetBoolOrDefault(key string, defaultValue bool) (bool, error) {
isFalse := fieldValues["0"] || fieldValues["false"]
isTrue := fieldValues["1"] || fieldValues["true"]
if isFalse == isTrue {
// Either no or conflicting truthy/falsy value were provided
conflicting := isFalse && isTrue
invalid := !isFalse && !isTrue
if conflicting || invalid {
return defaultValue, &invalidFilter{key, args.Get(key)}
} else if isFalse {
return false, nil
} else if isTrue {
return true, nil
}
return isTrue, nil
// This code shouldn't be reached.
return defaultValue, &unreachableCode{Filter: key, Value: args.Get(key)}
}
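To make the boolean-filter semantics in the hunk above concrete, here is a small usage sketch (the filter key is arbitrary): a lone truthy or falsy value is returned as-is, while conflicting or non-boolean values fall back to the default and report an invalid-filter error.

```go
package main

import (
	"fmt"

	"github.com/docker/docker/api/types/filters"
)

func main() {
	f := filters.NewArgs(filters.Arg("dangling", "true"))

	v, err := f.GetBoolOrDefault("dangling", false)
	fmt.Println(v, err) // true <nil>

	// Conflicting values: fall back to the default and return an invalid-filter error.
	f.Add("dangling", "false")
	v, err = f.GetBoolOrDefault("dangling", false)
	fmt.Println(v, err) // false, non-nil error
}
```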
// ExactMatch returns true if the source matches exactly one of the values.

View File

@@ -2,6 +2,7 @@ package filters // import "github.com/docker/docker/api/types/filters"
import (
"encoding/json"
"errors"
"fmt"
"sort"
"testing"
@@ -11,49 +12,53 @@ import (
)
func TestMarshalJSON(t *testing.T) {
a := NewArgs(
Arg("created", "today"),
Arg("image.name", "ubuntu*"),
Arg("image.name", "*untu"),
)
fields := map[string]map[string]bool{
"created": {"today": true},
"image.name": {"ubuntu*": true, "*untu": true},
}
a := Args{fields: fields}
s, err := a.MarshalJSON()
assert.Check(t, err)
const expected = `{"created":{"today":true},"image.name":{"*untu":true,"ubuntu*":true}}`
assert.Check(t, is.Equal(string(s), expected))
_, err := a.MarshalJSON()
if err != nil {
t.Errorf("failed to marshal the filters: %s", err)
}
}
func TestMarshalJSONWithEmpty(t *testing.T) {
s, err := json.Marshal(NewArgs())
assert.Check(t, err)
const expected = `{}`
assert.Check(t, is.Equal(string(s), expected))
_, err := json.Marshal(NewArgs())
if err != nil {
t.Errorf("failed to marshal the filters: %s", err)
}
}
func TestToJSON(t *testing.T) {
a := NewArgs(
Arg("created", "today"),
Arg("image.name", "ubuntu*"),
Arg("image.name", "*untu"),
)
fields := map[string]map[string]bool{
"created": {"today": true},
"image.name": {"ubuntu*": true, "*untu": true},
}
a := Args{fields: fields}
s, err := ToJSON(a)
assert.Check(t, err)
const expected = `{"created":{"today":true},"image.name":{"*untu":true,"ubuntu*":true}}`
assert.Check(t, is.Equal(s, expected))
_, err := ToJSON(a)
if err != nil {
t.Errorf("failed to marshal the filters: %s", err)
}
}
func TestToParamWithVersion(t *testing.T) {
a := NewArgs(
Arg("created", "today"),
Arg("image.name", "ubuntu*"),
Arg("image.name", "*untu"),
)
fields := map[string]map[string]bool{
"created": {"today": true},
"image.name": {"ubuntu*": true, "*untu": true},
}
a := Args{fields: fields}
str1, err := ToParamWithVersion("1.21", a)
assert.Check(t, err)
if err != nil {
t.Errorf("failed to marshal the filters with version < 1.22: %s", err)
}
str2, err := ToParamWithVersion("1.22", a)
assert.Check(t, err)
if err != nil {
t.Errorf("failed to marshal the filters with version >= 1.22: %s", err)
}
if str1 != `{"created":["today"],"image.name":["*untu","ubuntu*"]}` &&
str1 != `{"created":["today"],"image.name":["ubuntu*","*untu"]}` {
t.Errorf("incorrectly marshaled the filters: %s", str1)
@@ -87,30 +92,39 @@ func TestFromJSON(t *testing.T) {
}
for _, invalid := range invalids {
t.Run(invalid, func(t *testing.T) {
_, err := FromJSON(invalid)
if err == nil {
t.Fatalf("Expected an error with %v, got nothing", invalid)
}
var invalidFilterError *invalidFilter
assert.Check(t, is.ErrorType(err, invalidFilterError))
wrappedErr := fmt.Errorf("something went wrong: %w", err)
assert.Check(t, is.ErrorIs(wrappedErr, err))
})
_, err := FromJSON(invalid)
if err == nil {
t.Fatalf("Expected an error with %v, got nothing", invalid)
}
var invalidFilterError *invalidFilter
if !errors.As(err, &invalidFilterError) {
t.Fatalf("Expected an invalidFilter error, got %T", err)
}
wrappedErr := fmt.Errorf("something went wrong: %w", err)
if !errors.Is(wrappedErr, err) {
t.Errorf("Expected a wrapped error to be detected as invalidFilter")
}
}
for expectedArgs, matchers := range valid {
for _, jsonString := range matchers {
args, err := FromJSON(jsonString)
assert.Check(t, err)
assert.Check(t, is.Equal(args.Len(), expectedArgs.Len()))
if err != nil {
t.Fatal(err)
}
if args.Len() != expectedArgs.Len() {
t.Fatalf("Expected %v, go %v", expectedArgs, args)
}
for key, expectedValues := range expectedArgs.fields {
values := args.Get(key)
assert.Check(t, is.Len(values, len(expectedValues)), expectedArgs)
if len(values) != len(expectedValues) {
t.Fatalf("Expected %v, go %v", expectedArgs, args)
}
for _, v := range values {
if !expectedValues[v] {
t.Errorf("Expected %v, go %v", expectedArgs, args)
t.Fatalf("Expected %v, go %v", expectedArgs, args)
}
}
}
@@ -120,12 +134,17 @@ func TestFromJSON(t *testing.T) {
func TestEmpty(t *testing.T) {
a := Args{}
assert.Check(t, is.Equal(a.Len(), 0))
v, err := ToJSON(a)
assert.Check(t, err)
if err != nil {
t.Errorf("failed to marshal the filters: %s", err)
}
v1, err := FromJSON(v)
assert.Check(t, err)
assert.Check(t, is.Equal(v1.Len(), 0))
if err != nil {
t.Errorf("%s", err)
}
if a.Len() != v1.Len() {
t.Error("these should both be empty sets")
}
}
func TestArgsMatchKVListEmptySources(t *testing.T) {
@@ -166,7 +185,7 @@ func TestArgsMatchKVList(t *testing.T) {
for args, field := range matches {
if !args.MatchKVList(field, sources) {
t.Errorf("Expected true for %v on %v, got false", sources, args)
t.Fatalf("Expected true for %v on %v, got false", sources, args)
}
}
@@ -192,7 +211,7 @@ func TestArgsMatchKVList(t *testing.T) {
for args, field := range differs {
if args.MatchKVList(field, sources) {
t.Errorf("Expected false for %v on %v, got true", sources, args)
t.Fatalf("Expected false for %v on %v, got true", sources, args)
}
}
}
@@ -230,7 +249,8 @@ func TestArgsMatch(t *testing.T) {
}
for args, field := range matches {
assert.Check(t, args.Match(field, source), "Expected field %s to match %s", field, source)
assert.Check(t, args.Match(field, source),
"Expected field %s to match %s", field, source)
}
differs := map[*Args]string{
@@ -270,62 +290,89 @@ func TestArgsMatch(t *testing.T) {
func TestAdd(t *testing.T) {
f := NewArgs()
f.Add("status", "running")
v := f.Get("status")
assert.Check(t, is.DeepEqual(v, []string{"running"}))
v := f.fields["status"]
if len(v) != 1 || !v["running"] {
t.Fatalf("Expected to include a running status, got %v", v)
}
f.Add("status", "paused")
v = f.Get("status")
assert.Check(t, is.Len(v, 2))
assert.Check(t, is.Contains(v, "running"))
assert.Check(t, is.Contains(v, "paused"))
if len(v) != 2 || !v["paused"] {
t.Fatalf("Expected to include a paused status, got %v", v)
}
}
func TestDel(t *testing.T) {
f := NewArgs()
f.Add("status", "running")
f.Del("status", "running")
assert.Check(t, is.Equal(f.Len(), 0))
assert.Check(t, is.DeepEqual(f.Get("status"), []string{}))
v := f.fields["status"]
if v["running"] {
t.Fatal("Expected to not include a running status filter, got true")
}
}
func TestLen(t *testing.T) {
f := NewArgs()
assert.Check(t, is.Equal(f.Len(), 0))
if f.Len() != 0 {
t.Fatal("Expected to not include any field")
}
f.Add("status", "running")
assert.Check(t, is.Equal(f.Len(), 1))
if f.Len() != 1 {
t.Fatal("Expected to include one field")
}
}
func TestExactMatch(t *testing.T) {
f := NewArgs()
assert.Check(t, f.ExactMatch("status", "running"), "Expected to match `running` when there are no filters")
if !f.ExactMatch("status", "running") {
t.Fatal("Expected to match `running` when there are no filters, got false")
}
f.Add("status", "running")
f.Add("status", "pause*")
assert.Check(t, f.ExactMatch("status", "running"), "Expected to match `running` with one of the filters")
assert.Check(t, !f.ExactMatch("status", "paused"), "Expected to not match `paused` with one of the filters")
if !f.ExactMatch("status", "running") {
t.Fatal("Expected to match `running` with one of the filters, got false")
}
if f.ExactMatch("status", "paused") {
t.Fatal("Expected to not match `paused` with one of the filters, got true")
}
}
func TestOnlyOneExactMatch(t *testing.T) {
f := NewArgs()
assert.Check(t, f.ExactMatch("status", "running"), "Expected to match `running` when there are no filters")
if !f.UniqueExactMatch("status", "running") {
t.Fatal("Expected to match `running` when there are no filters, got false")
}
f.Add("status", "running")
assert.Check(t, f.ExactMatch("status", "running"), "Expected to match `running` with one of the filters")
assert.Check(t, !f.UniqueExactMatch("status", "paused"), "Expected to not match `paused` with one of the filters")
if !f.UniqueExactMatch("status", "running") {
t.Fatal("Expected to match `running` with one of the filters, got false")
}
if f.UniqueExactMatch("status", "paused") {
t.Fatal("Expected to not match `paused` with one of the filters, got true")
}
f.Add("status", "pause")
assert.Check(t, !f.UniqueExactMatch("status", "running"), "Expected to not match only `running` with two filters")
if f.UniqueExactMatch("status", "running") {
t.Fatal("Expected to not match only `running` with two filters, got true")
}
}
func TestContains(t *testing.T) {
f := NewArgs()
assert.Check(t, !f.Contains("status"))
if f.Contains("status") {
t.Fatal("Expected to not contain a status key, got true")
}
f.Add("status", "running")
assert.Check(t, f.Contains("status"))
if !f.Contains("status") {
t.Fatal("Expected to contain a status key, got false")
}
}
func TestValidate(t *testing.T) {
@@ -337,14 +384,23 @@ func TestValidate(t *testing.T) {
"dangling": true,
}
assert.Check(t, f.Validate(valid))
if err := f.Validate(valid); err != nil {
t.Fatal(err)
}
f.Add("bogus", "running")
err := f.Validate(valid)
if err == nil {
t.Fatal("Expected to return an error, got nil")
}
var invalidFilterError *invalidFilter
assert.Check(t, is.ErrorType(err, invalidFilterError))
if !errors.As(err, &invalidFilterError) {
t.Errorf("Expected an invalidFilter error, got %T", err)
}
wrappedErr := fmt.Errorf("something went wrong: %w", err)
assert.Check(t, is.ErrorIs(wrappedErr, err))
if !errors.Is(wrappedErr, err) {
t.Errorf("Expected a wrapped error to be detected as invalidFilter")
}
}
func TestWalkValues(t *testing.T) {
@@ -358,23 +414,23 @@ func TestWalkValues(t *testing.T) {
}
return nil
})
assert.Check(t, err)
if err != nil {
t.Fatalf("Expected no error, got %v", err)
}
loops1 := 0
err = f.WalkValues("status", func(value string) error {
loops1++
return nil
return errors.New("return")
})
assert.Check(t, err)
assert.Check(t, is.Equal(loops1, 2), "Expected to not iterate when the field doesn't exist")
if err == nil {
t.Fatal("Expected to get an error, got nil")
}
loops2 := 0
err = f.WalkValues("unknown-key", func(value string) error {
loops2++
return nil
err = f.WalkValues("foo", func(value string) error {
return errors.New("return")
})
assert.Check(t, err)
assert.Check(t, is.Equal(loops2, 0), "Expected to not iterate when the field doesn't exist")
if err != nil {
t.Fatalf("Expected to not iterate when the field doesn't exist, got %v", err)
}
}
func TestFuzzyMatch(t *testing.T) {
@@ -390,7 +446,7 @@ func TestFuzzyMatch(t *testing.T) {
for source, match := range cases {
got := f.FuzzyMatch("container", source)
if got != match {
t.Errorf("Expected %v, got %v: %s", match, got, source)
t.Fatalf("Expected %v, got %v: %s", match, got, source)
}
}
}
@@ -484,6 +540,7 @@ func TestGetBoolOrDefault(t *testing.T) {
expectedValue: false,
},
} {
tc := tc
t.Run(tc.name, func(t *testing.T) {
a := NewArgs()
@@ -511,7 +568,7 @@ func TestGetBoolOrDefault(t *testing.T) {
assert.Check(t, is.DeepEqual(expected.Value, actual.Value))
wrappedErr := fmt.Errorf("something went wrong: %w", err)
assert.Check(t, is.ErrorIs(wrappedErr, err), "Expected a wrapped error to be detected as invalidFilter")
assert.Check(t, errors.Is(wrappedErr, err), "Expected a wrapped error to be detected as invalidFilter")
}
assert.Check(t, is.Equal(tc.expectedValue, value))

View File

@@ -1,13 +1,13 @@
package storage
package types
// This file was generated by the swagger tool.
// Editing this file might prove futile when you re-run the swagger generate command
// DriverData Information about the storage driver used to store the container's and
// GraphDriverData Information about the storage driver used to store the container's and
// image's filesystem.
//
// swagger:model DriverData
type DriverData struct {
// swagger:model GraphDriverData
type GraphDriverData struct {
// Low-level storage metadata, provided as key/value pairs.
//

View File

@@ -1,10 +1,10 @@
package common
package types
// This file was generated by the swagger tool.
// Editing this file might prove futile when you re-run the swagger generate command
// IDResponse Response to an API call that returns just an Id
// swagger:model IDResponse
// swagger:model IdResponse
type IDResponse struct {
// The id of the newly created object.

View File

@@ -1,140 +0,0 @@
package image
import (
"github.com/docker/docker/api/types/container"
"github.com/docker/docker/api/types/storage"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
)
// RootFS returns Image's RootFS description including the layer IDs.
type RootFS struct {
Type string `json:",omitempty"`
Layers []string `json:",omitempty"`
}
// InspectResponse contains response of Engine API:
// GET "/images/{name:.*}/json"
type InspectResponse struct {
// ID is the content-addressable ID of an image.
//
// This identifier is a content-addressable digest calculated from the
// image's configuration (which includes the digests of layers used by
// the image).
//
// Note that this digest differs from the `RepoDigests` below, which
// holds digests of image manifests that reference the image.
ID string `json:"Id"`
// RepoTags is a list of image names/tags in the local image cache that
// reference this image.
//
// Multiple image tags can refer to the same image, and this list may be
// empty if no tags reference the image, in which case the image is
// "untagged", in which case it can still be referenced by its ID.
RepoTags []string
// RepoDigests is a list of content-addressable digests of locally available
// image manifests that the image is referenced from. Multiple manifests can
// refer to the same image.
//
// These digests are usually only available if the image was either pulled
// from a registry, or if the image was pushed to a registry, which is when
// the manifest is generated and its digest calculated.
RepoDigests []string
// Parent is the ID of the parent image.
//
// Depending on how the image was created, this field may be empty and
// is only set for images that were built/created locally. This field
// is empty if the image was pulled from an image registry.
Parent string
// Comment is an optional message that can be set when committing or
// importing the image.
Comment string
// Created is the date and time at which the image was created, formatted in
// RFC 3339 nano-seconds (time.RFC3339Nano).
//
// This information is only available if present in the image,
// and omitted otherwise.
Created string `json:",omitempty"`
// Container is the ID of the container that was used to create the image.
//
// Depending on how the image was created, this field may be empty.
//
// Deprecated: this field is omitted in API v1.45, but kept for backward compatibility.
Container string `json:",omitempty"`
// ContainerConfig is an optional field containing the configuration of the
// container that was last committed when creating the image.
//
// Previous versions of Docker builder used this field to store build cache,
// and it is not in active use anymore.
//
// Deprecated: this field is omitted in API v1.45, but kept for backward compatibility.
ContainerConfig *container.Config `json:",omitempty"`
// DockerVersion is the version of Docker that was used to build the image.
//
// Depending on how the image was created, this field may be empty.
DockerVersion string
// Author is the name of the author that was specified when committing the
// image, or as specified through MAINTAINER (deprecated) in the Dockerfile.
Author string
Config *container.Config
// Architecture is the hardware CPU architecture that the image runs on.
Architecture string
// Variant is the CPU architecture variant (presently ARM-only).
Variant string `json:",omitempty"`
// OS is the Operating System the image is built to run on.
Os string
// OsVersion is the version of the Operating System the image is built to
// run on (especially for Windows).
OsVersion string `json:",omitempty"`
// Size is the total size of the image including all layers it is composed of.
Size int64
// VirtualSize is the total size of the image including all layers it is
// composed of.
//
// Deprecated: this field is omitted in API v1.44, but kept for backward compatibility. Use Size instead.
VirtualSize int64 `json:"VirtualSize,omitempty"`
// GraphDriver holds information about the storage driver used to store the
// container's and image's filesystem.
GraphDriver storage.DriverData
// RootFS contains information about the image's RootFS, including the
// layer IDs.
RootFS RootFS
// Metadata of the image in the local cache.
//
// This information is local to the daemon, and not part of the image itself.
Metadata Metadata
// Descriptor is the OCI descriptor of the image target.
// It's only set if the daemon provides a multi-platform image store.
//
// WARNING: This is experimental and may change at any time without any backward
// compatibility.
Descriptor *ocispec.Descriptor `json:"Descriptor,omitempty"`
// Manifests is a list of image manifests available in this image. It
// provides a more detailed view of the platform-specific image manifests or
// other image-attached data like build attestations.
//
// Only available if the daemon provides a multi-platform image store.
//
// WARNING: This is experimental and may change at any time without any backward
// compatibility.
Manifests []ManifestSummary `json:"Manifests,omitempty"`
}

View File

@@ -38,7 +38,7 @@ type PullOptions struct {
// authentication header value in base64 encoded format, or an error if the
// privilege request fails.
//
// For details, refer to [github.com/docker/docker/api/types/registry.RequestAuthConfig].
// Also see [github.com/docker/docker/api/types.RequestPrivilegeFunc].
PrivilegeFunc func(context.Context) (string, error)
Platform string
}
@@ -53,7 +53,7 @@ type PushOptions struct {
// authentication header value in base64 encoded format, or an error if the
// privilege request fails.
//
// For details, refer to [github.com/docker/docker/api/types/registry.RequestAuthConfig].
// Also see [github.com/docker/docker/api/types.RequestPrivilegeFunc].
PrivilegeFunc func(context.Context) (string, error)
// Platform is an optional field that selects a specific platform to push
@@ -86,31 +86,3 @@ type RemoveOptions struct {
Force bool
PruneChildren bool
}
// HistoryOptions holds parameters to get image history.
type HistoryOptions struct {
// Platform from the manifest list to use for history.
Platform *ocispec.Platform
}
// LoadOptions holds parameters to load images.
type LoadOptions struct {
// Quiet suppresses progress output
Quiet bool
// Platforms selects the platforms to load if the image is a
// multi-platform image and has multiple variants.
Platforms []ocispec.Platform
}
type InspectOptions struct {
// Manifests returns the image manifests.
Manifests bool
}
// SaveOptions holds parameters to save images.
type SaveOptions struct {
// Platforms selects the platforms to save if the image is a
// multi-platform image and has multiple variants.
Platforms []ocispec.Platform
}

View File

@@ -1,7 +1,5 @@
package image
import ocispec "github.com/opencontainers/image-spec/specs-go/v1"
type Summary struct {
// Number of containers using this image. Includes both stopped and running
@@ -14,7 +12,7 @@ type Summary struct {
Containers int64 `json:"Containers"`
// Date and time at which the image was created as a Unix timestamp
// (number of seconds since EPOCH).
// (number of seconds sinds EPOCH).
//
// Required: true
Created int64 `json:"Created"`
@@ -44,13 +42,6 @@ type Summary struct {
// Required: true
ParentID string `json:"ParentId"`
// Descriptor is the OCI descriptor of the image target.
// It's only set if the daemon provides a multi-platform image store.
//
// WARNING: This is experimental and may change at any time without any backward
// compatibility.
Descriptor *ocispec.Descriptor `json:"Descriptor,omitempty"`
// Manifests is a list of image manifests available in this image. It
// provides a more detailed view of the platform-specific image manifests or
// other image-attached data like build attestations.

View File

@@ -19,8 +19,6 @@ const (
TypeNamedPipe Type = "npipe"
// TypeCluster is the type for Swarm Cluster Volumes.
TypeCluster Type = "cluster"
// TypeImage is the type for mounting another image's filesystem
TypeImage Type = "image"
)
// Mount represents a mount (volume).
@@ -36,7 +34,6 @@ type Mount struct {
BindOptions *BindOptions `json:",omitempty"`
VolumeOptions *VolumeOptions `json:",omitempty"`
ImageOptions *ImageOptions `json:",omitempty"`
TmpfsOptions *TmpfsOptions `json:",omitempty"`
ClusterOptions *ClusterOptions `json:",omitempty"`
}
@@ -103,10 +100,6 @@ type VolumeOptions struct {
DriverConfig *Driver `json:",omitempty"`
}
type ImageOptions struct {
Subpath string `json:",omitempty"`
}
// Driver represents a volume driver.
type Driver struct {
Name string `json:",omitempty"`

View File

@@ -19,12 +19,6 @@ type EndpointSettings struct {
// generated address).
MacAddress string
DriverOpts map[string]string
// GwPriority determines which endpoint will provide the default gateway
// for the container. The endpoint with the highest priority will be used.
// If multiple endpoints have the same priority, they are lexicographically
// sorted based on their network name, and the one that sorts first is picked.
GwPriority int
// Operational data
NetworkID string
EndpointID string

View File

@@ -23,7 +23,7 @@ func (stub subnetStub) Contains(addr net.IP) bool {
}
func TestEndpointIPAMConfigWithOutOfRangeAddrs(t *testing.T) {
tests := []struct {
testcases := []struct {
name string
ipamConfig *EndpointIPAMConfig
v4Subnets []NetworkSubnet
@@ -80,7 +80,8 @@ func TestEndpointIPAMConfigWithOutOfRangeAddrs(t *testing.T) {
},
}
for _, tc := range tests {
for _, tc := range testcases {
tc := tc
t.Run(tc.name, func(t *testing.T) {
t.Parallel()
@@ -104,7 +105,7 @@ func TestEndpointIPAMConfigWithOutOfRangeAddrs(t *testing.T) {
}
func TestEndpointIPAMConfigWithInvalidConfig(t *testing.T) {
tests := []struct {
testcases := []struct {
name string
ipamConfig *EndpointIPAMConfig
expectedErrors []string
@@ -161,7 +162,8 @@ func TestEndpointIPAMConfigWithInvalidConfig(t *testing.T) {
},
}
for _, tc := range tests {
for _, tc := range testcases {
tc := tc
t.Run(tc.name, func(t *testing.T) {
t.Parallel()

View File

@@ -8,7 +8,7 @@ import (
)
func TestNetworkWithInvalidIPAM(t *testing.T) {
tests := []struct {
testcases := []struct {
name string
ipam IPAM
ipv6 bool
@@ -123,7 +123,8 @@ func TestNetworkWithInvalidIPAM(t *testing.T) {
},
}
for _, tc := range tests {
for _, tc := range testcases {
tc := tc
t.Run(tc.name, func(t *testing.T) {
t.Parallel()

View File

@@ -33,7 +33,6 @@ type CreateRequest struct {
type CreateOptions struct {
Driver string // Driver is the driver-name used to create the network (e.g. `bridge`, `overlay`)
Scope string // Scope describes the level at which the network exists (e.g. `swarm` for cluster-wide or `local` for machine level).
EnableIPv4 *bool `json:",omitempty"` // EnableIPv4 represents whether to enable IPv4.
EnableIPv6 *bool `json:",omitempty"` // EnableIPv6 represents whether to enable IPv6.
IPAM *IPAM // IPAM is the network's IP Address Management.
Internal bool // Internal represents if the network is used internal only.
@@ -77,8 +76,7 @@ type Inspect struct {
Created time.Time // Created is the time the network created
Scope string // Scope describes the level at which the network exists (e.g. `swarm` for cluster-wide or `local` for machine level)
Driver string // Driver is the Driver name used to create the network (e.g. `bridge`, `overlay`)
EnableIPv4 bool // EnableIPv4 represents whether IPv4 is enabled
EnableIPv6 bool // EnableIPv6 represents whether IPv6 is enabled
EnableIPv6 bool // EnableIPv6 represents whether to enable IPv6
IPAM IPAM // IPAM is the network's IP Address Management
Internal bool // Internal represents if the network is used internal only
Attachable bool // Attachable represents if the global scope is manually attachable by regular containers from workers in swarm mode.
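
For reference, a hedged sketch of how the `EnableIPv4`/`EnableIPv6` options might be populated on network creation (`EnableIPv4` is the newer field this backport drops). Both are `*bool` so that "unset" (nil) can be distinguished from an explicit false:

```go
package main

import (
	"fmt"

	"github.com/docker/docker/api/types/network"
)

func main() {
	// Sketch only, assuming the newer CreateOptions shown above.
	enableIPv4 := true
	enableIPv6 := false
	opts := network.CreateOptions{
		Driver:     "bridge",
		EnableIPv4: &enableIPv4,
		EnableIPv6: &enableIPv6,
	}
	fmt.Printf("%+v\n", opts)
}
```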

View File

@@ -1,4 +1,4 @@
package container
package types
// This file was generated by the swagger tool.
// Editing this file might prove futile when you re-run the swagger generate command

View File

@@ -1,29 +1,17 @@
package registry // import "github.com/docker/docker/api/types/registry"
import (
"context"
"encoding/base64"
"encoding/json"
"fmt"
"io"
"strings"
"github.com/pkg/errors"
)
// AuthHeader is the name of the header used to send encoded registry
// authorization credentials for registry operations (push/pull).
const AuthHeader = "X-Registry-Auth"
// RequestAuthConfig is a function interface that clients can supply
// to retry operations after getting an authorization error.
//
// The function must return the [AuthHeader] value ([AuthConfig]), encoded
// in base64url format ([RFC4648, section 5]), which can be decoded by
// [DecodeAuthConfig].
//
// It must return an error if the privilege request fails.
//
// [RFC4648, section 5]: https://tools.ietf.org/html/rfc4648#section-5
type RequestAuthConfig func(context.Context) (string, error)
// AuthConfig contains authorization information for connecting to a Registry.
type AuthConfig struct {
Username string `json:"username,omitempty"`
@@ -97,7 +85,7 @@ func decodeAuthConfigFromReader(rdr io.Reader) (*AuthConfig, error) {
}
func invalid(err error) error {
return errInvalidParameter{fmt.Errorf("invalid X-Registry-Auth header: %w", err)}
return errInvalidParameter{errors.Wrap(err, "invalid X-Registry-Auth header")}
}
type errInvalidParameter struct{ error }
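
The comments above describe the `X-Registry-Auth` contract: the header carries an `AuthConfig` encoded as base64url JSON. A small sketch of the round trip, assuming the `EncodeAuthConfig`/`DecodeAuthConfig` helpers that live alongside these types in the current `api/types/registry` package:

```go
package main

import (
	"fmt"

	"github.com/docker/docker/api/types/registry"
)

func main() {
	auth := registry.AuthConfig{
		Username: "someuser", // placeholder credentials
		Password: "supersecret",
	}

	// Encode to the base64url payload expected in the X-Registry-Auth header.
	encoded, err := registry.EncodeAuthConfig(auth)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s: %s\n", registry.AuthHeader, encoded)

	// The daemon decodes it back, as decodeAuthConfigFromReader above does.
	decoded, err := registry.DecodeAuthConfig(encoded)
	if err != nil {
		panic(err)
	}
	fmt.Println(decoded.Username)
}
```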

View File

@@ -9,29 +9,11 @@ import (
// ServiceConfig stores daemon registry services configuration.
type ServiceConfig struct {
AllowNondistributableArtifactsCIDRs []*NetIPNet `json:"AllowNondistributableArtifactsCIDRs,omitempty"` // Deprecated: non-distributable artifacts are deprecated and enabled by default. This field will be removed in the next release.
AllowNondistributableArtifactsHostnames []string `json:"AllowNondistributableArtifactsHostnames,omitempty"` // Deprecated: non-distributable artifacts are deprecated and enabled by default. This field will be removed in the next release.
InsecureRegistryCIDRs []*NetIPNet `json:"InsecureRegistryCIDRs"`
IndexConfigs map[string]*IndexInfo `json:"IndexConfigs"`
Mirrors []string
}
// MarshalJSON implements a custom marshaler to include legacy fields
// in API responses.
func (sc ServiceConfig) MarshalJSON() ([]byte, error) {
tmp := map[string]interface{}{
"InsecureRegistryCIDRs": sc.InsecureRegistryCIDRs,
"IndexConfigs": sc.IndexConfigs,
"Mirrors": sc.Mirrors,
}
if sc.AllowNondistributableArtifactsCIDRs != nil {
tmp["AllowNondistributableArtifactsCIDRs"] = nil
}
if sc.AllowNondistributableArtifactsHostnames != nil {
tmp["AllowNondistributableArtifactsHostnames"] = nil
}
return json.Marshal(tmp)
AllowNondistributableArtifactsCIDRs []*NetIPNet
AllowNondistributableArtifactsHostnames []string
InsecureRegistryCIDRs []*NetIPNet `json:"InsecureRegistryCIDRs"`
IndexConfigs map[string]*IndexInfo `json:"IndexConfigs"`
Mirrors []string
}
// NetIPNet is the net.IPNet type, which can be marshalled and
@@ -49,17 +31,15 @@ func (ipnet *NetIPNet) MarshalJSON() ([]byte, error) {
}
// UnmarshalJSON sets the IPNet from a byte array of JSON
func (ipnet *NetIPNet) UnmarshalJSON(b []byte) error {
func (ipnet *NetIPNet) UnmarshalJSON(b []byte) (err error) {
var ipnetStr string
if err := json.Unmarshal(b, &ipnetStr); err != nil {
return err
if err = json.Unmarshal(b, &ipnetStr); err == nil {
var cidr *net.IPNet
if _, cidr, err = net.ParseCIDR(ipnetStr); err == nil {
*ipnet = NetIPNet(*cidr)
}
}
_, cidr, err := net.ParseCIDR(ipnetStr)
if err != nil {
return err
}
*ipnet = NetIPNet(*cidr)
return nil
return
}
// IndexInfo contains information about a registry
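
Either variant of `UnmarshalJSON` above accepts a plain CIDR string; a quick sketch of the round trip:

```go
package main

import (
	"encoding/json"
	"fmt"

	"github.com/docker/docker/api/types/registry"
)

func main() {
	// NetIPNet unmarshals from a CIDR string...
	var n registry.NetIPNet
	if err := json.Unmarshal([]byte(`"192.168.1.0/24"`), &n); err != nil {
		panic(err)
	}

	// ...and marshals back to the same string form.
	out, err := json.Marshal(&n)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out)) // "192.168.1.0/24"
}
```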

View File

@@ -1,30 +0,0 @@
package registry
import (
"encoding/json"
"testing"
"gotest.tools/v3/assert"
is "gotest.tools/v3/assert/cmp"
)
func TestServiceConfigMarshalLegacyFields(t *testing.T) {
t.Run("without legacy fields", func(t *testing.T) {
b, err := json.Marshal(&ServiceConfig{})
assert.NilError(t, err)
const expected = `{"IndexConfigs":null,"InsecureRegistryCIDRs":null,"Mirrors":null}`
assert.Check(t, is.Equal(string(b), expected), "Legacy nondistributable-artifacts fields should be omitted in output")
})
// Legacy fields should be returned when set to an empty slice. This is
// used for API versions < 1.49.
t.Run("with legacy fields", func(t *testing.T) {
b, err := json.Marshal(&ServiceConfig{
AllowNondistributableArtifactsCIDRs: []*NetIPNet{},
AllowNondistributableArtifactsHostnames: []string{},
})
assert.NilError(t, err)
const expected = `{"AllowNondistributableArtifactsCIDRs":null,"AllowNondistributableArtifactsHostnames":null,"IndexConfigs":null,"InsecureRegistryCIDRs":null,"Mirrors":null}`
assert.Check(t, is.Equal(string(b), expected))
})
}

View File

@@ -10,12 +10,11 @@ import (
type SearchOptions struct {
RegistryAuth string
// PrivilegeFunc is a function that clients can supply to retry operations
// after getting an authorization error. This function returns the registry
// authentication header value in base64 encoded format, or an error if the
// privilege request fails.
// PrivilegeFunc is a [types.RequestPrivilegeFunc] the client can
// supply to retry operations after getting an authorization error.
//
// For details, refer to [github.com/docker/docker/api/types/registry.RequestAuthConfig].
// It must return the registry authentication header value in base64
// format, or an error if the privilege request fails.
PrivilegeFunc func(context.Context) (string, error)
Filters filters.Args
Limit int

View File

@@ -12,12 +12,6 @@ type Config struct {
// ConfigSpec represents a config specification from a config in swarm
type ConfigSpec struct {
Annotations
// Data is the data to store as a config.
//
// The maximum allowed size is 1000KB, as defined in [MaxConfigSize].
//
// [MaxConfigSize]: https://pkg.go.dev/github.com/moby/swarmkit/v2@v2.0.0-20250103191802-8c1959736554/manager/controlapi#MaxConfigSize
Data []byte `json:",omitempty"`
// Templating controls whether and how to evaluate the config payload as

View File

@@ -12,22 +12,8 @@ type Secret struct {
// SecretSpec represents a secret specification from a secret in swarm
type SecretSpec struct {
Annotations
// Data is the data to store as a secret. It must be empty if a
// [Driver] is used, in which case the data is loaded from an external
// secret store. The maximum allowed size is 500KB, as defined in
// [MaxSecretSize].
//
// This field is only used to create the secret, and is not returned
// by other endpoints.
//
// [MaxSecretSize]: https://pkg.go.dev/github.com/moby/swarmkit/v2@v2.0.0-20250103191802-8c1959736554/api/validation#MaxSecretSize
Data []byte `json:",omitempty"`
// Driver is the name of the secrets driver used to fetch the secret's
// value from an external secret store. If not set, the default built-in
// store is used.
Driver *Driver `json:",omitempty"`
Data []byte `json:",omitempty"`
Driver *Driver `json:",omitempty"` // name of the secrets driver used to fetch the secret's value from an external secret store
// Templating controls whether and how to evaluate the secret payload as
// a template. If it is not set, no templating is used.
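
The doc comments being removed here spell out the `Data`/`Driver` contract for secrets; a brief sketch of a spec using inline data (when an external `Driver` is set instead, `Data` must be left empty):

```go
package main

import (
	"fmt"

	"github.com/docker/docker/api/types/swarm"
)

func main() {
	// Sketch only: inline secret data, which must stay under the MaxSecretSize
	// limit (about 500KB) mentioned in the removed comment above.
	spec := swarm.SecretSpec{
		Annotations: swarm.Annotations{Name: "db-password"},
		Data:        []byte("s3cret"),
	}
	fmt.Println(spec.Name, len(spec.Data))
}
```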

View File

@@ -122,7 +122,7 @@ type CAConfig struct {
SigningCAKey string `json:",omitempty"`
// If this value changes, and there is no specified signing cert and key,
// then the swarm is forced to generate a new root certificate and key.
// then the swarm is forced to generate a new root certificate ane key.
ForceRotate uint64 `json:",omitempty"`
}

View File

@@ -29,8 +29,8 @@ type Info struct {
CPUSet bool
PidsLimit bool
IPv4Forwarding bool
BridgeNfIptables bool `json:"BridgeNfIptables"` // Deprecated: netfilter module is now loaded on-demand and no longer during daemon startup, making this field obsolete. This field is always false and will be removed in the next release.
BridgeNfIP6tables bool `json:"BridgeNfIp6tables"` // Deprecated: netfilter module is now loaded on-demand and no longer during daemon startup, making this field obsolete. This field is always false and will be removed in the next release.
BridgeNfIptables bool
BridgeNfIP6tables bool `json:"BridgeNfIp6tables"`
Debug bool
NFd int
OomKillDisable bool
@@ -137,13 +137,8 @@ type PluginsInfo struct {
// Commit holds the Git-commit (SHA1) that a binary was built from, as reported
// in the version-string of external tools, such as containerd, or runC.
type Commit struct {
// ID is the actual commit ID or version of external tool.
ID string
// Expected is the commit ID of external tool expected by dockerd as set at build time.
//
// Deprecated: this field is no longer used in API v1.49, but kept for backward-compatibility with older API versions.
Expected string
ID string // ID is the actual commit ID of external tool.
Expected string // Expected is the commit ID of external tool expected by dockerd as set at build time.
}
// NetworkAddressPool is a temp struct used by [Info] struct.

View File

@@ -6,8 +6,11 @@ import (
"github.com/docker/docker/api/types/container"
"github.com/docker/docker/api/types/filters"
"github.com/docker/docker/api/types/image"
"github.com/docker/docker/api/types/mount"
"github.com/docker/docker/api/types/network"
"github.com/docker/docker/api/types/swarm"
"github.com/docker/docker/api/types/volume"
"github.com/docker/go-connections/nat"
)
const (
@@ -18,6 +21,145 @@ const (
MediaTypeMultiplexedStream = "application/vnd.docker.multiplexed-stream"
)
// RootFS returns Image's RootFS description including the layer IDs.
type RootFS struct {
Type string `json:",omitempty"`
Layers []string `json:",omitempty"`
}
// ImageInspect contains response of Engine API:
// GET "/images/{name:.*}/json"
type ImageInspect struct {
// ID is the content-addressable ID of an image.
//
// This identifier is a content-addressable digest calculated from the
// image's configuration (which includes the digests of layers used by
// the image).
//
// Note that this digest differs from the `RepoDigests` below, which
// holds digests of image manifests that reference the image.
ID string `json:"Id"`
// RepoTags is a list of image names/tags in the local image cache that
// reference this image.
//
// Multiple image tags can refer to the same image, and this list may be
// empty if no tags reference the image, in which case the image is
// "untagged", in which case it can still be referenced by its ID.
RepoTags []string
// RepoDigests is a list of content-addressable digests of locally available
// image manifests that the image is referenced from. Multiple manifests can
// refer to the same image.
//
// These digests are usually only available if the image was either pulled
// from a registry, or if the image was pushed to a registry, which is when
// the manifest is generated and its digest calculated.
RepoDigests []string
// Parent is the ID of the parent image.
//
// Depending on how the image was created, this field may be empty and
// is only set for images that were built/created locally. This field
// is empty if the image was pulled from an image registry.
Parent string
// Comment is an optional message that can be set when committing or
// importing the image.
Comment string
// Created is the date and time at which the image was created, formatted in
// RFC 3339 nano-seconds (time.RFC3339Nano).
//
// This information is only available if present in the image,
// and omitted otherwise.
Created string `json:",omitempty"`
// Container is the ID of the container that was used to create the image.
//
// Depending on how the image was created, this field may be empty.
//
// Deprecated: this field is omitted in API v1.45, but kept for backward compatibility.
Container string `json:",omitempty"`
// ContainerConfig is an optional field containing the configuration of the
// container that was last committed when creating the image.
//
// Previous versions of Docker builder used this field to store build cache,
// and it is not in active use anymore.
//
// Deprecated: this field is omitted in API v1.45, but kept for backward compatibility.
ContainerConfig *container.Config `json:",omitempty"`
// DockerVersion is the version of Docker that was used to build the image.
//
// Depending on how the image was created, this field may be empty.
DockerVersion string
// Author is the name of the author that was specified when committing the
// image, or as specified through MAINTAINER (deprecated) in the Dockerfile.
Author string
Config *container.Config
// Architecture is the hardware CPU architecture that the image runs on.
Architecture string
// Variant is the CPU architecture variant (presently ARM-only).
Variant string `json:",omitempty"`
// OS is the Operating System the image is built to run on.
Os string
// OsVersion is the version of the Operating System the image is built to
// run on (especially for Windows).
OsVersion string `json:",omitempty"`
// Size is the total size of the image including all layers it is composed of.
Size int64
// VirtualSize is the total size of the image including all layers it is
// composed of.
//
// Deprecated: this field is omitted in API v1.44, but kept for backward compatibility. Use Size instead.
VirtualSize int64 `json:"VirtualSize,omitempty"`
// GraphDriver holds information about the storage driver used to store the
// container's and image's filesystem.
GraphDriver GraphDriverData
// RootFS contains information about the image's RootFS, including the
// layer IDs.
RootFS RootFS
// Metadata of the image in the local cache.
//
// This information is local to the daemon, and not part of the image itself.
Metadata image.Metadata
}
// Container contains response of Engine API:
// GET "/containers/json"
type Container struct {
ID string `json:"Id"`
Names []string
Image string
ImageID string
Command string
Created int64
Ports []Port
SizeRw int64 `json:",omitempty"`
SizeRootFs int64 `json:",omitempty"`
Labels map[string]string
State string
Status string
HostConfig struct {
NetworkMode string `json:",omitempty"`
Annotations map[string]string `json:",omitempty"`
}
NetworkSettings *SummaryNetworkSettings
Mounts []MountPoint
}
// Ping contains response of Engine API:
// GET "/_ping"
type Ping struct {
@@ -63,6 +205,176 @@ type Version struct {
BuildTime string `json:",omitempty"`
}
// HealthcheckResult stores information about a single run of a healthcheck probe
type HealthcheckResult struct {
Start time.Time // Start is the time this check started
End time.Time // End is the time this check ended
ExitCode int // ExitCode meanings: 0=healthy, 1=unhealthy, 2=reserved (considered unhealthy), else=error running probe
Output string // Output from last check
}
// Health states
const (
NoHealthcheck = "none" // Indicates there is no healthcheck
Starting = "starting" // Starting indicates that the container is not yet ready
Healthy = "healthy" // Healthy indicates that the container is running correctly
Unhealthy = "unhealthy" // Unhealthy indicates that the container has a problem
)
// Health stores information about the container's healthcheck results
type Health struct {
Status string // Status is one of Starting, Healthy or Unhealthy
FailingStreak int // FailingStreak is the number of consecutive failures
Log []*HealthcheckResult // Log contains the last few results (oldest first)
}
// ContainerState stores container's running state
// it's part of ContainerJSONBase and will return by "inspect" command
type ContainerState struct {
Status string // String representation of the container state. Can be one of "created", "running", "paused", "restarting", "removing", "exited", or "dead"
Running bool
Paused bool
Restarting bool
OOMKilled bool
Dead bool
Pid int
ExitCode int
Error string
StartedAt string
FinishedAt string
Health *Health `json:",omitempty"`
}
// ContainerJSONBase contains response of Engine API:
// GET "/containers/{name:.*}/json"
type ContainerJSONBase struct {
ID string `json:"Id"`
Created string
Path string
Args []string
State *ContainerState
Image string
ResolvConfPath string
HostnamePath string
HostsPath string
LogPath string
Node *ContainerNode `json:",omitempty"` // Deprecated: Node was only propagated by Docker Swarm standalone API. It will be removed in the next release.
Name string
RestartCount int
Driver string
Platform string
MountLabel string
ProcessLabel string
AppArmorProfile string
ExecIDs []string
HostConfig *container.HostConfig
GraphDriver GraphDriverData
SizeRw *int64 `json:",omitempty"`
SizeRootFs *int64 `json:",omitempty"`
}
// ContainerJSON is newly used struct along with MountPoint
type ContainerJSON struct {
*ContainerJSONBase
Mounts []MountPoint
Config *container.Config
NetworkSettings *NetworkSettings
}
// NetworkSettings exposes the network settings in the api
type NetworkSettings struct {
NetworkSettingsBase
DefaultNetworkSettings
Networks map[string]*network.EndpointSettings
}
// SummaryNetworkSettings provides a summary of container's networks
// in /containers/json
type SummaryNetworkSettings struct {
Networks map[string]*network.EndpointSettings
}
// NetworkSettingsBase holds networking state for a container when inspecting it.
type NetworkSettingsBase struct {
Bridge string // Bridge contains the name of the default bridge interface iff it was set through the daemon --bridge flag.
SandboxID string // SandboxID uniquely represents a container's network stack
SandboxKey string // SandboxKey identifies the sandbox
Ports nat.PortMap // Ports is a collection of PortBinding indexed by Port
// HairpinMode specifies if hairpin NAT should be enabled on the virtual interface
//
// Deprecated: This field is never set and will be removed in a future release.
HairpinMode bool
// LinkLocalIPv6Address is an IPv6 unicast address using the link-local prefix
//
// Deprecated: This field is never set and will be removed in a future release.
LinkLocalIPv6Address string
// LinkLocalIPv6PrefixLen is the prefix length of an IPv6 unicast address
//
// Deprecated: This field is never set and will be removed in a future release.
LinkLocalIPv6PrefixLen int
SecondaryIPAddresses []network.Address // Deprecated: This field is never set and will be removed in a future release.
SecondaryIPv6Addresses []network.Address // Deprecated: This field is never set and will be removed in a future release.
}
// DefaultNetworkSettings holds network information
// during the 2 release deprecation period.
// It will be removed in Docker 1.11.
type DefaultNetworkSettings struct {
EndpointID string // EndpointID uniquely represents a service endpoint in a Sandbox
Gateway string // Gateway holds the gateway address for the network
GlobalIPv6Address string // GlobalIPv6Address holds network's global IPv6 address
GlobalIPv6PrefixLen int // GlobalIPv6PrefixLen represents mask length of network's global IPv6 address
IPAddress string // IPAddress holds the IPv4 address for the network
IPPrefixLen int // IPPrefixLen represents mask length of network's IPv4 address
IPv6Gateway string // IPv6Gateway holds gateway address specific for IPv6
MacAddress string // MacAddress holds the MAC address for the network
}
// MountPoint represents a mount point configuration inside the container.
// This is used for reporting the mountpoints in use by a container.
type MountPoint struct {
// Type is the type of mount, see `Type<foo>` definitions in
// github.com/docker/docker/api/types/mount.Type
Type mount.Type `json:",omitempty"`
// Name is the name reference to the underlying data defined by `Source`
// e.g., the volume name.
Name string `json:",omitempty"`
// Source is the source location of the mount.
//
// For volumes, this contains the storage location of the volume (within
// `/var/lib/docker/volumes/`). For bind-mounts, and `npipe`, this contains
// the source (host) part of the bind-mount. For `tmpfs` mount points, this
// field is empty.
Source string
// Destination is the path relative to the container root (`/`) where the
// Source is mounted inside the container.
Destination string
// Driver is the volume driver used to create the volume (if it is a volume).
Driver string `json:",omitempty"`
// Mode is a comma separated list of options supplied by the user when
// creating the bind/volume mount.
//
// The default is platform-specific (`"z"` on Linux, empty on Windows).
Mode string
// RW indicates whether the mount is mounted writable (read-write).
RW bool
// Propagation describes how mounts are propagated from the host into the
// mount point, and vice-versa. Refer to the Linux kernel documentation
// for details:
// https://www.kernel.org/doc/Documentation/filesystems/sharedsubtree.txt
//
// This field is not used on Windows.
Propagation mount.Propagation
}
// DiskUsageObject represents an object type used for disk usage query filtering.
type DiskUsageObject string
@@ -89,7 +401,7 @@ type DiskUsageOptions struct {
type DiskUsage struct {
LayersSize int64
Images []*image.Summary
Containers []*container.Summary
Containers []*Container
Volumes []*volume.Volume
BuildCache []*BuildCache
BuilderSize int64 `json:",omitempty"` // Deprecated: deprecated in API 1.38, and no longer used since API 1.40.
@@ -169,11 +481,7 @@ type BuildCache struct {
// BuildCachePruneOptions hold parameters to prune the build cache
type BuildCachePruneOptions struct {
All bool
ReservedSpace int64
MaxUsedSpace int64
MinFreeSpace int64
Filters filters.Args
KeepStorage int64 // Deprecated: deprecated in API 1.48.
All bool
KeepStorage int64
Filters filters.Args
}
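
Several of the structs restored above (`ContainerState`, `Health`, `HealthcheckResult`) surface through container inspect. A hedged sketch of reading them with the Go client (the container name and client options are placeholders):

```go
package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	// The inspect response embeds the ContainerState shown above; its Health
	// field is nil when the container has no healthcheck configured.
	inspect, err := cli.ContainerInspect(context.Background(), "my-container")
	if err != nil {
		panic(err)
	}
	if inspect.State != nil && inspect.State.Health != nil {
		h := inspect.State.Health
		fmt.Printf("status=%s failing-streak=%d\n", h.Status, h.FailingStreak)
		for _, probe := range h.Log {
			// ExitCode 0=healthy, 1=unhealthy, 2=reserved, else=probe error,
			// per the HealthcheckResult comment above.
			fmt.Printf("  %s exit=%d\n", probe.Start.Format("15:04:05"), probe.ExitCode)
		}
	}
}
```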

View File

@@ -1,115 +1,210 @@
package types
import (
"context"
"github.com/docker/docker/api/types/common"
"github.com/docker/docker/api/types/container"
"github.com/docker/docker/api/types/events"
"github.com/docker/docker/api/types/image"
"github.com/docker/docker/api/types/storage"
"github.com/docker/docker/api/types/network"
"github.com/docker/docker/api/types/registry"
"github.com/docker/docker/api/types/volume"
)
// IDResponse Response to an API call that returns just an Id.
// ImagesPruneReport contains the response for Engine API:
// POST "/images/prune"
//
// Deprecated: use either [container.CommitResponse] or [container.ExecCreateResponse]. It will be removed in the next release.
type IDResponse = common.IDResponse
// Deprecated: use [image.PruneReport].
type ImagesPruneReport = image.PruneReport
// ContainerJSONBase contains response of Engine API GET "/containers/{name:.*}/json"
// for API version 1.18 and older.
// VolumesPruneReport contains the response for Engine API:
// POST "/volumes/prune".
//
// Deprecated: use [container.InspectResponse] or [container.ContainerJSONBase]. It will be removed in the next release.
type ContainerJSONBase = container.ContainerJSONBase
// Deprecated: use [volume.PruneReport].
type VolumesPruneReport = volume.PruneReport
// ContainerJSON is the response for the GET "/containers/{name:.*}/json"
// endpoint.
// NetworkCreateRequest is the request message sent to the server for network create call.
//
// Deprecated: use [container.InspectResponse]. It will be removed in the next release.
type ContainerJSON = container.InspectResponse
// Deprecated: use [network.CreateRequest].
type NetworkCreateRequest = network.CreateRequest
// Container contains response of Engine API:
// GET "/containers/json"
// NetworkCreate is the expected body of the "create network" http request message
//
// Deprecated: use [container.Summary].
type Container = container.Summary
// Deprecated: use [network.CreateOptions].
type NetworkCreate = network.CreateOptions
// ContainerState stores container's running state
// NetworkListOptions holds parameters to filter the list of networks with.
//
// Deprecated: use [container.State].
type ContainerState = container.State
// Deprecated: use [network.ListOptions].
type NetworkListOptions = network.ListOptions
// NetworkSettings exposes the network settings in the api.
// NetworkCreateResponse is the response message sent by the server for network create call.
//
// Deprecated: use [container.NetworkSettings].
type NetworkSettings = container.NetworkSettings
// Deprecated: use [network.CreateResponse].
type NetworkCreateResponse = network.CreateResponse
// NetworkSettingsBase holds networking state for a container when inspecting it.
// NetworkInspectOptions holds parameters to inspect network.
//
// Deprecated: use [container.NetworkSettingsBase].
type NetworkSettingsBase = container.NetworkSettingsBase
// Deprecated: use [network.InspectOptions].
type NetworkInspectOptions = network.InspectOptions
// DefaultNetworkSettings holds network information
// during the 2 release deprecation period.
// It will be removed in Docker 1.11.
// NetworkConnect represents the data to be used to connect a container to the network
//
// Deprecated: use [container.DefaultNetworkSettings].
type DefaultNetworkSettings = container.DefaultNetworkSettings
// Deprecated: use [network.ConnectOptions].
type NetworkConnect = network.ConnectOptions
// SummaryNetworkSettings provides a summary of container's networks
// in /containers/json.
// NetworkDisconnect represents the data to be used to disconnect a container from the network
//
// Deprecated: use [container.NetworkSettingsSummary].
type SummaryNetworkSettings = container.NetworkSettingsSummary
// Deprecated: use [network.DisconnectOptions].
type NetworkDisconnect = network.DisconnectOptions
// Health states
const (
NoHealthcheck = container.NoHealthcheck // Deprecated: use [container.NoHealthcheck].
Starting = container.Starting // Deprecated: use [container.Starting].
Healthy = container.Healthy // Deprecated: use [container.Healthy].
Unhealthy = container.Unhealthy // Deprecated: use [container.Unhealthy].
)
// Health stores information about the container's healthcheck results.
// EndpointResource contains network resources allocated and used for a container in a network.
//
// Deprecated: use [container.Health].
type Health = container.Health
// Deprecated: use [network.EndpointResource].
type EndpointResource = network.EndpointResource
// HealthcheckResult stores information about a single run of a healthcheck probe.
// NetworkResource is the body of the "get network" http response message.
//
// Deprecated: use [container.HealthcheckResult].
type HealthcheckResult = container.HealthcheckResult
// Deprecated: use [network.Inspect] or [network.Summary] (for list operations).
type NetworkResource = network.Inspect
// MountPoint represents a mount point configuration inside the container.
// This is used for reporting the mountpoints in use by a container.
// NetworksPruneReport contains the response for Engine API:
// POST "/networks/prune"
//
// Deprecated: use [container.MountPoint].
type MountPoint = container.MountPoint
// Deprecated: use [network.PruneReport].
type NetworksPruneReport = network.PruneReport
// Port An open port on a container
// ExecConfig is a small subset of the Config struct that holds the configuration
// for the exec feature of docker.
//
// Deprecated: use [container.Port].
type Port = container.Port
// Deprecated: use [container.ExecOptions].
type ExecConfig = container.ExecOptions
// GraphDriverData Information about the storage driver used to store the container's and
// image's filesystem.
// ExecStartCheck is a temp struct used by execStart
// Config fields is part of ExecConfig in runconfig package
//
// Deprecated: use [storage.DriverData].
type GraphDriverData = storage.DriverData
// Deprecated: use [container.ExecStartOptions] or [container.ExecAttachOptions].
type ExecStartCheck = container.ExecStartOptions
// RootFS returns Image's RootFS description including the layer IDs.
// ContainerExecInspect holds information returned by exec inspect.
//
// Deprecated: use [image.RootFS].
type RootFS = image.RootFS
// Deprecated: use [container.ExecInspect].
type ContainerExecInspect = container.ExecInspect
// ImageInspect contains response of Engine API:
// GET "/images/{name:.*}/json"
// ContainersPruneReport contains the response for Engine API:
// POST "/containers/prune"
//
// Deprecated: use [image.InspectResponse].
type ImageInspect = image.InspectResponse
// Deprecated: use [container.PruneReport].
type ContainersPruneReport = container.PruneReport
// RequestPrivilegeFunc is a function interface that clients can supply to
// retry operations after getting an authorization error.
// This function returns the registry authentication header value in base64
// format, or an error if the privilege request fails.
// ContainerPathStat is used to encode the header from
// GET "/containers/{name:.*}/archive"
// "Name" is the file or directory name.
//
// Deprecated: moved to [github.com/docker/docker/api/types/registry.RequestAuthConfig].
type RequestPrivilegeFunc func(context.Context) (string, error)
// Deprecated: use [container.PathStat].
type ContainerPathStat = container.PathStat
// CopyToContainerOptions holds information
// about files to copy into a container.
//
// Deprecated: use [container.CopyToContainerOptions],
type CopyToContainerOptions = container.CopyToContainerOptions
// ContainerStats contains response of Engine API:
// GET "/stats"
//
// Deprecated: use [container.StatsResponseReader].
type ContainerStats = container.StatsResponseReader
// ThrottlingData stores CPU throttling stats of one running container.
// Not used on Windows.
//
// Deprecated: use [container.ThrottlingData].
type ThrottlingData = container.ThrottlingData
// CPUUsage stores All CPU stats aggregated since container inception.
//
// Deprecated: use [container.CPUUsage].
type CPUUsage = container.CPUUsage
// CPUStats aggregates and wraps all CPU related info of container
//
// Deprecated: use [container.CPUStats].
type CPUStats = container.CPUStats
// MemoryStats aggregates all memory stats since container inception on Linux.
// Windows returns stats for commit and private working set only.
//
// Deprecated: use [container.MemoryStats].
type MemoryStats = container.MemoryStats
// BlkioStatEntry is one small entity to store a piece of Blkio stats
// Not used on Windows.
//
// Deprecated: use [container.BlkioStatEntry].
type BlkioStatEntry = container.BlkioStatEntry
// BlkioStats stores All IO service stats for data read and write.
// This is a Linux specific structure as the differences between expressing
// block I/O on Windows and Linux are sufficiently significant to make
// little sense attempting to morph into a combined structure.
//
// Deprecated: use [container.BlkioStats].
type BlkioStats = container.BlkioStats
// StorageStats is the disk I/O stats for read/write on Windows.
//
// Deprecated: use [container.StorageStats].
type StorageStats = container.StorageStats
// NetworkStats aggregates the network stats of one container
//
// Deprecated: use [container.NetworkStats].
type NetworkStats = container.NetworkStats
// PidsStats contains the stats of a container's pids
//
// Deprecated: use [container.PidsStats].
type PidsStats = container.PidsStats
// Stats is Ultimate struct aggregating all types of stats of one container
//
// Deprecated: use [container.Stats].
type Stats = container.Stats
// StatsJSON is newly used Networks
//
// Deprecated: use [container.StatsResponse].
type StatsJSON = container.StatsResponse
// EventsOptions holds parameters to filter events with.
//
// Deprecated: use [events.ListOptions].
type EventsOptions = events.ListOptions
// ImageSearchOptions holds parameters to search images with.
//
// Deprecated: use [registry.SearchOptions].
type ImageSearchOptions = registry.SearchOptions
// ImageImportSource holds source information for ImageImport
//
// Deprecated: use [image.ImportSource].
type ImageImportSource image.ImportSource
// ImageLoadResponse returns information to the client about a load process.
//
// Deprecated: use [image.LoadResponse].
type ImageLoadResponse = image.LoadResponse
// ContainerNode stores information about the node that a container
// is running on. It's only used by the Docker Swarm standalone API.
//
// Deprecated: ContainerNode was used for the classic Docker Swarm standalone API. It will be removed in the next release.
type ContainerNode struct {
ID string
IPAddress string `json:"IP"`
Addr string
Name string
Cpus int
Memory int64
Labels map[string]string
}
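
Because the replacements listed above are declared as type aliases (`type X = pkg.Y`), callers can migrate incrementally: the old and new names denote the same type. A tiny sketch, assuming the newer tree where these aliases exist:

```go
package main

import (
	"fmt"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/api/types/container"
)

func main() {
	// Sketch only: types.Container is an alias for container.Summary in the
	// newer tree, so no conversion is needed between the two names.
	var deprecated types.Container
	deprecated.Names = []string{"/web"}

	var current container.Summary = deprecated // identical underlying type
	fmt.Println(current.Names)
}
```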

View File

@@ -414,7 +414,7 @@ type Info struct {
// the Volume has not been successfully created yet.
VolumeID string `json:",omitempty"`
// AccessibleTopology is the topology this volume is actually accessible
// AccessibleTopolgoy is the topology this volume is actually accessible
// from.
AccessibleTopology []Topology `json:",omitempty"`
}

View File

@@ -1,5 +1,5 @@
// FIXME(thaJeztah): remove once we are a module; the go:build directive prevents go from downgrading language version to go1.16:
//go:build go1.22
//go:build go1.21
package containerimage
@@ -14,15 +14,15 @@ import (
"sync"
"time"
"github.com/containerd/containerd/v2/core/content"
c8dimages "github.com/containerd/containerd/v2/core/images"
"github.com/containerd/containerd/v2/core/leases"
"github.com/containerd/containerd/v2/core/remotes"
"github.com/containerd/containerd/v2/core/remotes/docker"
"github.com/containerd/containerd/v2/core/remotes/docker/schema1" //nolint:staticcheck // Ignore SA1019: "github.com/containerd/containerd/remotes/docker/schema1" is deprecated: use images formatted in Docker Image Manifest v2, Schema 2, or OCI Image Spec v1.
"github.com/containerd/containerd/v2/pkg/gc"
cdreference "github.com/containerd/containerd/v2/pkg/reference"
ctdreference "github.com/containerd/containerd/v2/pkg/reference"
"github.com/containerd/containerd/content"
"github.com/containerd/containerd/gc"
"github.com/containerd/containerd/images"
"github.com/containerd/containerd/leases"
cdreference "github.com/containerd/containerd/reference"
ctdreference "github.com/containerd/containerd/reference"
"github.com/containerd/containerd/remotes"
"github.com/containerd/containerd/remotes/docker"
"github.com/containerd/containerd/remotes/docker/schema1" //nolint:staticcheck // Ignore SA1019: "github.com/containerd/containerd/remotes/docker/schema1" is deprecated: use images formatted in Docker Image Manifest v2, Schema 2, or OCI Image Spec v1.
cerrdefs "github.com/containerd/errdefs"
"github.com/containerd/log"
"github.com/containerd/platforms"
@@ -369,7 +369,7 @@ func (p *puller) resolve(ctx context.Context, g session.Group) error {
// has been read.
// It may be possible to have a mapping between schema 1 manifests
// and the schema 2 manifests they are converted to.
if p.config == nil && p.desc.MediaType != c8dimages.MediaTypeDockerSchema1Manifest {
if p.config == nil && p.desc.MediaType != images.MediaTypeDockerSchema1Manifest {
ref, err := distreference.WithDigest(ref, p.desc.Digest)
if err != nil {
return struct{}{}, err
@@ -423,12 +423,12 @@ func (p *puller) CacheKey(ctx context.Context, g session.Group, index int) (stri
return dgst.String(), p.desc.Digest.String(), nil, false, nil
}
if len(p.config) == 0 && p.desc.MediaType != c8dimages.MediaTypeDockerSchema1Manifest {
if len(p.config) == 0 && p.desc.MediaType != images.MediaTypeDockerSchema1Manifest {
return "", "", nil, false, errors.Errorf("invalid empty config file resolved for %s", p.src.Reference.String())
}
k := cacheKeyFromConfig(p.config).String()
if k == "" || p.desc.MediaType == c8dimages.MediaTypeDockerSchema1Manifest {
if k == "" || p.desc.MediaType == images.MediaTypeDockerSchema1Manifest {
dgst, err := p.mainManifestKey(p.platform)
if err != nil {
return "", "", nil, false, err
@@ -521,14 +521,10 @@ func (p *puller) Snapshot(ctx context.Context, g session.Group) (cache.Immutable
var (
schema1Converter *schema1.Converter
handlers []c8dimages.Handler
handlers []images.Handler
)
if p.desc.MediaType == c8dimages.MediaTypeDockerSchema1Manifest {
schema1Converter, err = schema1.NewConverter(p.is.ContentStore, fetcher)
if err != nil {
stopProgress()
return nil, err
}
if p.desc.MediaType == images.MediaTypeDockerSchema1Manifest {
schema1Converter = schema1.NewConverter(p.is.ContentStore, fetcher)
handlers = append(handlers, schema1Converter)
// TODO: Optimize to do dispatch and integrate pulling with download manager,
@@ -537,25 +533,25 @@ func (p *puller) Snapshot(ctx context.Context, g session.Group) (cache.Immutable
// TODO: need a wrapper snapshot interface that combines content
// and snapshots as 1) buildkit shouldn't have a dependency on contentstore
// or 2) cachemanager should manage the contentstore
handlers = append(handlers, c8dimages.HandlerFunc(func(ctx context.Context, desc ocispec.Descriptor) ([]ocispec.Descriptor, error) {
handlers = append(handlers, images.HandlerFunc(func(ctx context.Context, desc ocispec.Descriptor) ([]ocispec.Descriptor, error) {
switch desc.MediaType {
case c8dimages.MediaTypeDockerSchema2Manifest, ocispec.MediaTypeImageManifest,
c8dimages.MediaTypeDockerSchema2ManifestList, ocispec.MediaTypeImageIndex,
c8dimages.MediaTypeDockerSchema2Config, ocispec.MediaTypeImageConfig:
case images.MediaTypeDockerSchema2Manifest, ocispec.MediaTypeImageManifest,
images.MediaTypeDockerSchema2ManifestList, ocispec.MediaTypeImageIndex,
images.MediaTypeDockerSchema2Config, ocispec.MediaTypeImageConfig:
nonLayers = append(nonLayers, desc.Digest)
default:
return nil, c8dimages.ErrSkipDesc
return nil, images.ErrSkipDesc
}
ongoing.add(desc)
return nil, nil
}))
// Get all the children for a descriptor
childrenHandler := c8dimages.ChildrenHandler(p.is.ContentStore)
childrenHandler := images.ChildrenHandler(p.is.ContentStore)
// Filter the children by the platform
childrenHandler = c8dimages.FilterPlatforms(childrenHandler, platform)
childrenHandler = images.FilterPlatforms(childrenHandler, platform)
// Limit manifests pulled to the best match in an index
childrenHandler = c8dimages.LimitManifests(childrenHandler, platform, 1)
childrenHandler = images.LimitManifests(childrenHandler, platform, 1)
handlers = append(handlers,
remotes.FetchHandler(p.is.ContentStore, fetcher),
@@ -563,7 +559,7 @@ func (p *puller) Snapshot(ctx context.Context, g session.Group) (cache.Immutable
)
}
if err := c8dimages.Dispatch(ctx, c8dimages.Handlers(handlers...), nil, p.desc); err != nil {
if err := images.Dispatch(ctx, images.Handlers(handlers...), nil, p.desc); err != nil {
stopProgress()
return nil, err
}
@@ -576,12 +572,12 @@ func (p *puller) Snapshot(ctx context.Context, g session.Group) (cache.Immutable
}
}
mfst, err := c8dimages.Manifest(ctx, p.is.ContentStore, p.desc, platform)
mfst, err := images.Manifest(ctx, p.is.ContentStore, p.desc, platform)
if err != nil {
return nil, err
}
config, err := c8dimages.Config(ctx, p.is.ContentStore, p.desc, platform)
config, err := images.Config(ctx, p.is.ContentStore, p.desc, platform)
if err != nil {
return nil, err
}
@@ -957,11 +953,15 @@ func applySourcePolicies(ctx context.Context, str string, spls []*spb.Policy) (s
}
if mut {
var (
t string
ok bool
)
t, newRef, ok := strings.Cut(op.GetIdentifier(), "://")
if !ok {
return "", errors.Errorf("could not parse ref: %s", op.GetIdentifier())
}
if t != srctypes.DockerImageScheme {
if ok && t != srctypes.DockerImageScheme {
return "", &imageutil.ResolveToNonImageError{Ref: str, Updated: newRef}
}
ref, err = cdreference.Parse(newRef)
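
The source-policy hunk above hinges on splitting the mutated identifier into a scheme and a reference with `strings.Cut`; in isolation that parsing looks like this (the identifier value is illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	// Illustrative identifier; the real value comes from op.GetIdentifier().
	id := "docker-image://docker.io/library/alpine:latest"

	scheme, ref, ok := strings.Cut(id, "://")
	if !ok {
		panic("could not parse ref: " + id)
	}
	fmt.Println(scheme) // docker-image
	fmt.Println(ref)    // docker.io/library/alpine:latest
}
```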

View File

@@ -5,9 +5,9 @@ import (
"encoding/json"
"time"
"github.com/containerd/containerd/v2/core/content"
c8dimages "github.com/containerd/containerd/v2/core/images"
"github.com/containerd/containerd/v2/core/remotes/docker"
"github.com/containerd/containerd/content"
"github.com/containerd/containerd/images"
"github.com/containerd/containerd/remotes/docker"
distreference "github.com/distribution/reference"
imagestore "github.com/docker/docker/image"
"github.com/docker/docker/reference"
@@ -99,7 +99,7 @@ func (li *localImporter) importInlineCache(ctx context.Context, dt []byte, cc so
desc := ocispec.Descriptor{
Digest: dgst,
Size: -1,
MediaType: c8dimages.MediaTypeDockerSchema2Layer,
MediaType: images.MediaTypeDockerSchema2Layer,
Annotations: map[string]string{},
}
if createdAt := createdDates[i]; createdAt != "" {

View File

@@ -4,7 +4,7 @@ import (
"context"
"sync"
"github.com/containerd/containerd/v2/core/leases"
"github.com/containerd/containerd/leases"
"github.com/containerd/log"
bolt "go.etcd.io/bbolt"
)

View File

@@ -7,9 +7,9 @@ import (
"strings"
"sync"
"github.com/containerd/containerd/v2/core/leases"
"github.com/containerd/containerd/v2/core/mount"
"github.com/containerd/containerd/v2/core/snapshots"
"github.com/containerd/containerd/leases"
"github.com/containerd/containerd/mount"
"github.com/containerd/containerd/snapshots"
cerrdefs "github.com/containerd/errdefs"
"github.com/docker/docker/daemon/graphdriver"
"github.com/docker/docker/layer"

View File

@@ -4,18 +4,17 @@ import (
"context"
"fmt"
"io"
"net/netip"
"net"
"strconv"
"strings"
"sync"
"time"
"github.com/containerd/containerd/v2/core/remotes/docker"
"github.com/containerd/containerd/remotes/docker"
"github.com/containerd/platforms"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/backend"
"github.com/docker/docker/api/types/container"
"github.com/docker/docker/api/types/network"
timetypes "github.com/docker/docker/api/types/time"
"github.com/docker/docker/builder"
"github.com/docker/docker/builder/builder-next/exporter"
@@ -39,7 +38,6 @@ import (
"golang.org/x/sync/errgroup"
"google.golang.org/grpc"
grpcmetadata "google.golang.org/grpc/metadata"
"google.golang.org/protobuf/proto"
)
type errMultipleFilterValues struct{}
@@ -97,7 +95,6 @@ type Opt struct {
ContainerdAddress string
ContainerdNamespace string
Callbacks exporter.BuildkitCallbacks
CDISpecDirs []string
}
// Builder can build using BuildKit backend
@@ -165,27 +162,16 @@ func (b *Builder) DiskUsage(ctx context.Context) ([]*types.BuildCache, error) {
Description: r.Description,
InUse: r.InUse,
Shared: r.Shared,
Size: r.Size,
CreatedAt: func() time.Time {
if r.CreatedAt != nil {
return r.CreatedAt.AsTime()
}
return time.Time{}
}(),
LastUsedAt: func() *time.Time {
if r.LastUsedAt == nil {
return nil
}
t := r.LastUsedAt.AsTime()
return &t
}(),
UsageCount: int(r.UsageCount),
Size: r.Size_,
CreatedAt: r.CreatedAt,
LastUsedAt: r.LastUsedAt,
UsageCount: int(r.UsageCount),
})
}
return items, nil
}
// Prune clears all reclaimable build cache.
// Prune clears all reclaimable build cache
func (b *Builder) Prune(ctx context.Context, opts types.BuildCachePruneOptions) (int64, []string, error) {
ch := make(chan *controlapi.UsageRecord)
@@ -211,12 +197,10 @@ func (b *Builder) Prune(ctx context.Context, opts types.BuildCachePruneOptions)
eg.Go(func() error {
defer close(ch)
return b.controller.Prune(&controlapi.PruneRequest{
All: pi.All,
KeepDuration: int64(pi.KeepDuration),
ReservedSpace: pi.ReservedSpace,
MaxUsedSpace: pi.MaxUsedSpace,
MinFreeSpace: pi.MinFreeSpace,
Filter: pi.Filter,
All: pi.All,
KeepDuration: int64(pi.KeepDuration),
KeepBytes: pi.KeepBytes,
Filter: pi.Filter,
}, &pruneProxy{
streamProxy: streamProxy{ctx: ctx},
ch: ch,
@@ -227,7 +211,7 @@ func (b *Builder) Prune(ctx context.Context, opts types.BuildCachePruneOptions)
var cacheIDs []string
eg.Go(func() error {
for r := range ch {
size += r.Size
size += r.Size_
cacheIDs = append(cacheIDs, r.ID)
}
return nil
@@ -292,6 +276,8 @@ func (b *Builder) Build(ctx context.Context, opt backend.BuildConfig) (*builder.
var out builder.Result
id := identity.NewID()
frontendAttrs := map[string]string{}
if opt.Options.Target != "" {
@@ -348,14 +334,14 @@ func (b *Builder) Build(ctx context.Context, opt backend.BuildConfig) (*builder.
}
switch opt.Options.NetworkMode {
case network.NetworkHost, network.NetworkNone:
case "host", "none":
frontendAttrs["force-network-mode"] = opt.Options.NetworkMode
case "", network.NetworkDefault:
case "", "default":
default:
return nil, errors.Errorf("network mode %q not supported by buildkit", opt.Options.NetworkMode)
}
extraHosts, err := toBuildkitExtraHosts(opt.Options.ExtraHosts, b.dnsconfig.HostGatewayIPs)
extraHosts, err := toBuildkitExtraHosts(opt.Options.ExtraHosts, b.dnsconfig.HostGatewayIP)
if err != nil {
return nil, err
}
@@ -395,7 +381,7 @@ func (b *Builder) Build(ctx context.Context, opt backend.BuildConfig) (*builder.
exporterAttrs["name"] = strings.Join(nameAttr, ",")
}
cache := &controlapi.CacheOptions{}
cache := controlapi.CacheOptions{}
if inlineCache := opt.Options.BuildArgs["BUILDKIT_INLINE_CACHE"]; inlineCache != nil {
if b, err := strconv.ParseBool(*inlineCache); err == nil && b {
cache.Exports = append(cache.Exports, &controlapi.CacheOptionsEntry{
@@ -404,7 +390,6 @@ func (b *Builder) Build(ctx context.Context, opt backend.BuildConfig) (*builder.
}
}
id := identity.NewID()
req := &controlapi.SolveRequest{
Ref: id,
Exporters: []*controlapi.Exporter{
@@ -416,8 +401,8 @@ func (b *Builder) Build(ctx context.Context, opt backend.BuildConfig) (*builder.
Cache: cache,
}
if opt.Options.NetworkMode == network.NetworkHost {
req.Entitlements = append(req.Entitlements, string(entitlements.EntitlementNetworkHost))
if opt.Options.NetworkMode == "host" {
req.Entitlements = append(req.Entitlements, entitlements.EntitlementNetworkHost)
}
aux := streamformatter.AuxFormatter{Writer: opt.ProgressWriter.Output}
@@ -432,12 +417,12 @@ func (b *Builder) Build(ctx context.Context, opt backend.BuildConfig) (*builder.
if exporterName != exporter.Moby && exporterName != client.ExporterImage {
return nil
}
imgID, ok := resp.ExporterResponse["containerimage.digest"]
id, ok := resp.ExporterResponse["containerimage.digest"]
if !ok {
return errors.Errorf("missing image id")
}
out.ImageID = imgID
return aux.Emit("moby.image.id", types.BuildResult{ID: imgID})
out.ImageID = id
return aux.Emit("moby.image.id", types.BuildResult{ID: id})
})
ch := make(chan *controlapi.StatusResponse)
@@ -452,7 +437,7 @@ func (b *Builder) Build(ctx context.Context, opt backend.BuildConfig) (*builder.
eg.Go(func() error {
for sr := range ch {
dt, err := proto.Marshal(sr)
dt, err := sr.Marshal()
if err != nil {
return err
}
@@ -601,7 +586,7 @@ func (j *buildJob) SetUpload(ctx context.Context, rc io.ReadCloser) error {
}
// toBuildkitExtraHosts converts hosts from docker key:value format to buildkit's csv format
func toBuildkitExtraHosts(inp []string, hostGatewayIPs []netip.Addr) (string, error) {
func toBuildkitExtraHosts(inp []string, hostGatewayIP net.IP) (string, error) {
if len(inp) == 0 {
return "", nil
}
@@ -612,17 +597,17 @@ func toBuildkitExtraHosts(inp []string, hostGatewayIPs []netip.Addr) (string, er
return "", errors.Errorf("invalid host %s", h)
}
// If the IP Address is a "host-gateway", replace this value with the
// IP address(es) stored in the daemon level HostGatewayIPs config variable.
// IP address stored in the daemon level HostGatewayIP config variable.
if ip == opts.HostGatewayName {
if len(hostGatewayIPs) == 0 {
gateway := hostGatewayIP.String()
if gateway == "" {
return "", fmt.Errorf("unable to derive the IP value for host-gateway")
}
for _, gip := range hostGatewayIPs {
hosts = append(hosts, host+"="+gip.String())
}
} else {
hosts = append(hosts, host+"="+ip)
ip = gateway
} else if net.ParseIP(ip) == nil {
return "", fmt.Errorf("invalid host %s", h)
}
hosts = append(hosts, host+"="+ip)
}
return strings.Join(hosts, ","), nil
}
@@ -693,17 +678,10 @@ func toBuildkitPruneInfo(opts types.BuildCachePruneOptions) (client.PruneInfo, e
}
}
}
if opts.ReservedSpace == 0 && opts.KeepStorage != 0 {
opts.ReservedSpace = opts.KeepStorage
}
return client.PruneInfo{
All: opts.All,
KeepDuration: until,
ReservedSpace: opts.ReservedSpace,
MaxUsedSpace: opts.MaxUsedSpace,
MinFreeSpace: opts.MinFreeSpace,
Filter: []string{strings.Join(bkFilter, ",")},
All: opts.All,
KeepDuration: until,
KeepBytes: opts.KeepStorage,
Filter: []string{strings.Join(bkFilter, ",")},
}, nil
}
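
For orientation, the `toBuildkitExtraHosts` change above converts Docker's `host:ip` entries (with `host-gateway` resolved to the daemon's gateway address) into BuildKit's comma-separated `host=ip` form. A stripped-down sketch of that conversion, ignoring the host-gateway special case and IP validation:

```go
package main

import (
	"fmt"
	"strings"
)

// toCSV is a simplified sketch of the conversion shown above: Docker-style
// "host:ip" entries become BuildKit's comma-separated "host=ip" list. The
// real function also resolves the "host-gateway" placeholder and validates
// the IP address.
func toCSV(extraHosts []string) (string, error) {
	hosts := make([]string, 0, len(extraHosts))
	for _, h := range extraHosts {
		host, ip, ok := strings.Cut(h, ":")
		if !ok || host == "" || ip == "" {
			return "", fmt.Errorf("invalid host %s", h)
		}
		hosts = append(hosts, host+"="+ip)
	}
	return strings.Join(hosts, ","), nil
}

func main() {
	out, err := toCSV([]string{"foo:10.0.0.1", "bar:10.0.0.2"})
	if err != nil {
		panic(err)
	}
	fmt.Println(out) // foo=10.0.0.1,bar=10.0.0.2
}
```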

View File

@@ -2,18 +2,16 @@ package buildkit
import (
"context"
"fmt"
"net/http"
"os"
"path/filepath"
"runtime"
"strings"
"time"
ctd "github.com/containerd/containerd/v2/client"
ctdmetadata "github.com/containerd/containerd/v2/core/metadata"
"github.com/containerd/containerd/v2/core/snapshots"
"github.com/containerd/containerd/v2/plugins/content/local"
ctd "github.com/containerd/containerd"
"github.com/containerd/containerd/content/local"
ctdmetadata "github.com/containerd/containerd/metadata"
"github.com/containerd/containerd/snapshots"
"github.com/containerd/log"
"github.com/containerd/platforms"
"github.com/docker/docker/api/types"
@@ -27,7 +25,7 @@ import (
wlabel "github.com/docker/docker/builder/builder-next/worker/label"
"github.com/docker/docker/daemon/config"
"github.com/docker/docker/daemon/graphdriver"
"github.com/docker/go-units"
units "github.com/docker/go-units"
"github.com/moby/buildkit/cache"
"github.com/moby/buildkit/cache/metadata"
"github.com/moby/buildkit/cache/remotecache"
@@ -45,9 +43,6 @@ import (
containerdsnapshot "github.com/moby/buildkit/snapshot/containerd"
"github.com/moby/buildkit/solver"
"github.com/moby/buildkit/solver/bboltcachestorage"
"github.com/moby/buildkit/solver/llbsolver/cdidevices"
"github.com/moby/buildkit/solver/pb"
"github.com/moby/buildkit/util/apicaps"
"github.com/moby/buildkit/util/archutil"
"github.com/moby/buildkit/util/entitlements"
"github.com/moby/buildkit/util/network/netproviders"
@@ -60,7 +55,9 @@ import (
"go.etcd.io/bbolt"
bolt "go.etcd.io/bbolt"
"go.opentelemetry.io/otel/sdk/trace"
"tags.cncf.io/container-device-interface/pkg/cdi"
"github.com/moby/buildkit/solver/pb"
"github.com/moby/buildkit/util/apicaps"
)
func newController(ctx context.Context, rt http.RoundTripper, opt Opt) (*control.Controller, error) {
@@ -89,7 +86,7 @@ func newSnapshotterController(ctx context.Context, rt http.RoundTripper, opt Opt
return nil, err
}
historyDB, historyConf, err := openHistoryDB(opt.Root, "history_c8d.db", opt.BuilderConfig.History)
historyDB, historyConf, err := openHistoryDB(opt.Root, opt.BuilderConfig.History)
if err != nil {
return nil, err
}
@@ -112,11 +109,6 @@ func newSnapshotterController(ctx context.Context, rt http.RoundTripper, opt Opt
dns := getDNSConfig(opt.DNSConfig)
cdiManager, err := getCDIManager(opt.CDISpecDirs)
if err != nil {
return nil, err
}
workerOpts := containerd.WorkerOptions{
Root: opt.Root,
Address: opt.ContainerdAddress,
@@ -130,7 +122,6 @@ func newSnapshotterController(ctx context.Context, rt http.RoundTripper, opt Opt
NetworkOpt: nc,
ApparmorProfile: opt.ApparmorProfile,
Selinux: false,
CDIManager: cdiManager,
}
wo, err := containerd.NewWorkerOpt(workerOpts, ctd.WithTimeout(60*time.Second))
@@ -152,13 +143,13 @@ func newSnapshotterController(ctx context.Context, rt http.RoundTripper, opt Opt
wo.RegistryHosts = opt.RegistryHosts
wo.Labels = getLabels(opt, wo.Labels)
exec, err := newExecutor(opt.Root, opt.DefaultCgroupParent, opt.NetworkController, dns, opt.Rootless, opt.IdentityMapping, opt.ApparmorProfile, cdiManager)
exec, err := newExecutor(opt.Root, opt.DefaultCgroupParent, opt.NetworkController, dns, opt.Rootless, opt.IdentityMapping, opt.ApparmorProfile)
if err != nil {
return nil, err
}
wo.Executor = exec
w, err := mobyworker.NewContainerdWorker(ctx, wo, opt.Callbacks, rt)
w, err := mobyworker.NewContainerdWorker(ctx, wo, opt.Callbacks)
if err != nil {
return nil, err
}
@@ -203,12 +194,11 @@ func newSnapshotterController(ctx context.Context, rt http.RoundTripper, opt Opt
LeaseManager: wo.LeaseManager,
ContentStore: wo.ContentStore,
TraceCollector: getTraceExporter(ctx),
GarbageCollect: w.GarbageCollect,
})
}
func openHistoryDB(root string, fn string, cfg *config.BuilderHistoryConfig) (*bolt.DB, *bkconfig.HistoryConfig, error) {
db, err := bbolt.Open(filepath.Join(root, fn), 0o600, nil)
func openHistoryDB(root string, cfg *config.BuilderHistoryConfig) (*bolt.DB, *bkconfig.HistoryConfig, error) {
db, err := bbolt.Open(filepath.Join(root, "history.db"), 0o600, nil)
if err != nil {
return nil, nil, err
}
@@ -303,7 +293,6 @@ func newGraphDriverController(ctx context.Context, rt http.RoundTripper, opt Opt
LeaseManager: lm,
ContentStore: store,
GarbageCollect: mdb.GarbageCollect,
Root: root,
})
if err != nil {
return nil, err
@@ -327,12 +316,7 @@ func newGraphDriverController(ctx context.Context, rt http.RoundTripper, opt Opt
dns := getDNSConfig(opt.DNSConfig)
cdiManager, err := getCDIManager(opt.CDISpecDirs)
if err != nil {
return nil, err
}
exec, err := newExecutor(root, opt.DefaultCgroupParent, opt.NetworkController, dns, opt.Rootless, opt.IdentityMapping, opt.ApparmorProfile, cdiManager)
exec, err := newExecutor(root, opt.DefaultCgroupParent, opt.NetworkController, dns, opt.Rootless, opt.IdentityMapping, opt.ApparmorProfile)
if err != nil {
return nil, err
}
@@ -360,7 +344,7 @@ func newGraphDriverController(ctx context.Context, rt http.RoundTripper, opt Opt
return nil, err
}
historyDB, historyConf, err := openHistoryDB(opt.Root, "history.db", opt.BuilderConfig.History)
historyDB, historyConf, err := openHistoryDB(opt.Root, opt.BuilderConfig.History)
if err != nil {
return nil, err
}
@@ -398,9 +382,7 @@ func newGraphDriverController(ctx context.Context, rt http.RoundTripper, opt Opt
Layers: layers,
Platforms: archutil.SupportedPlatforms(true),
LeaseManager: lm,
GarbageCollect: mdb.GarbageCollect,
Labels: getLabels(opt, nil),
CDIManager: cdiManager,
}
wc := &worker.Controller{}
@@ -439,37 +421,40 @@ func newGraphDriverController(ctx context.Context, rt http.RoundTripper, opt Opt
HistoryDB: historyDB,
HistoryConfig: historyConf,
TraceCollector: getTraceExporter(ctx),
GarbageCollect: w.GarbageCollect,
})
}
func getGCPolicy(conf config.BuilderConfig, root string) ([]client.PruneInfo, error) {
var gcPolicy []client.PruneInfo
if conf.GC.Enabled {
if conf.GC.Policy == nil {
reservedSpace, maxUsedSpace, minFreeSpace, err := parseGCPolicy(config.BuilderGCRule{
ReservedSpace: conf.GC.DefaultReservedSpace,
MaxUsedSpace: conf.GC.DefaultMaxUsedSpace,
MinFreeSpace: conf.GC.DefaultMinFreeSpace,
}, "default")
var (
defaultKeepStorage int64
err error
)
if conf.GC.DefaultKeepStorage != "" {
defaultKeepStorage, err = units.RAMInBytes(conf.GC.DefaultKeepStorage)
if err != nil {
return nil, err
return nil, errors.Wrapf(err, "could not parse '%s' as Builder.GC.DefaultKeepStorage config", conf.GC.DefaultKeepStorage)
}
gcPolicy = mobyworker.DefaultGCPolicy(root, reservedSpace, maxUsedSpace, minFreeSpace)
}
if conf.GC.Policy == nil {
gcPolicy = mobyworker.DefaultGCPolicy(root, defaultKeepStorage)
} else {
gcPolicy = make([]client.PruneInfo, len(conf.GC.Policy))
for i, p := range conf.GC.Policy {
reservedSpace, maxUsedSpace, minFreeSpace, err := parseGCPolicy(p, "")
b, err := units.RAMInBytes(p.KeepStorage)
if err != nil {
return nil, err
}
if b == 0 {
b = defaultKeepStorage
}
gcPolicy[i], err = toBuildkitPruneInfo(types.BuildCachePruneOptions{
All: p.All,
ReservedSpace: reservedSpace,
MaxUsedSpace: maxUsedSpace,
MinFreeSpace: minFreeSpace,
Filters: filters.Args(p.Filter),
All: p.All,
KeepStorage: b,
Filters: filters.Args(p.Filter),
})
if err != nil {
return nil, err
@@ -480,41 +465,6 @@ func getGCPolicy(conf config.BuilderConfig, root string) ([]client.PruneInfo, er
return gcPolicy, nil
}
func parseGCPolicy(p config.BuilderGCRule, prefix string) (reservedSpace, maxUsedSpace, minFreeSpace int64, err error) {
errorString := func(key string) string {
if prefix != "" {
key = prefix + strings.ToTitle(key)
}
return fmt.Sprintf("failed to parse %s", key)
}
if p.ReservedSpace != "" {
b, err := units.RAMInBytes(p.ReservedSpace)
if err != nil {
return 0, 0, 0, errors.Wrap(err, errorString("reservedSpace"))
}
reservedSpace = b
}
if p.MaxUsedSpace != "" {
b, err := units.RAMInBytes(p.MaxUsedSpace)
if err != nil {
return 0, 0, 0, errors.Wrap(err, errorString("maxUsedSpace"))
}
maxUsedSpace = b
}
if p.MinFreeSpace != "" {
b, err := units.RAMInBytes(p.MinFreeSpace)
if err != nil {
return 0, 0, 0, errors.Wrap(err, errorString("minFreeSpace"))
}
minFreeSpace = b
}
return reservedSpace, maxUsedSpace, minFreeSpace, nil
}
func getEntitlements(conf config.BuilderConfig) []string {
var ents []string
// Incase of no config settings, NetworkHost should be enabled & SecurityInsecure must be disabled.
@@ -531,44 +481,6 @@ func getLabels(opt Opt, labels map[string]string) map[string]string {
if labels == nil {
labels = make(map[string]string)
}
if len(opt.DNSConfig.HostGatewayIPs) > 0 {
// TODO(robmry) - buildx has its own version of toBuildkitExtraHosts(), which
// needs to be updated to understand >1 address. For now, take the IPv4 address
// if there is one, else IPv6.
for _, gip := range opt.DNSConfig.HostGatewayIPs {
labels[wlabel.HostGatewayIP] = gip.String()
if gip.Is4() {
break
}
}
}
labels[wlabel.HostGatewayIP] = opt.DNSConfig.HostGatewayIP.String()
return labels
}
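
The host-gateway label loop above prefers an IPv4 address and otherwise falls back to the last IPv6 address it saw. A small standalone sketch of that selection, assuming the gateway addresses are net/netip Addr values (as the gip.Is4() call suggests); the sample addresses below are made up for illustration:

```go
package main

import (
	"fmt"
	"net/netip"
)

// pickHostGatewayIP mirrors the selection in getLabels above, under the
// assumption that the gateway addresses are netip.Addr values: keep the
// first IPv4 address if there is one, otherwise the last address seen.
func pickHostGatewayIP(gips []netip.Addr) (netip.Addr, bool) {
	var chosen netip.Addr
	for _, gip := range gips {
		chosen = gip
		if gip.Is4() {
			break // prefer IPv4 and stop looking
		}
	}
	return chosen, chosen.IsValid()
}

func main() {
	gips := []netip.Addr{
		netip.MustParseAddr("fd00::1"),        // IPv6 listed first
		netip.MustParseAddr("192.168.65.254"), // IPv4 still wins
	}
	ip, ok := pickHostGatewayIP(gips)
	fmt.Println(ip, ok) // 192.168.65.254 true
}
```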
func getCDIManager(specDirs []string) (*cdidevices.Manager, error) {
if len(specDirs) == 0 {
return nil, nil
}
cdiCache, err := func() (*cdi.Cache, error) {
cdiCache, err := cdi.NewCache(cdi.WithSpecDirs(specDirs...))
if err != nil {
return nil, err
}
if err := cdiCache.Refresh(); err != nil {
return nil, err
}
if errs := cdiCache.GetErrors(); len(errs) > 0 {
for dir, errs := range errs {
for _, err := range errs {
log.L.Warnf("CDI setup error %v: %+v", dir, err)
}
}
}
return cdiCache, nil
}()
if err != nil {
return nil, errors.Wrapf(err, "CDI registry initialization failure")
}
// TODO: add support for auto-allowed devices from config
return cdidevices.NewManager(cdiCache, nil), nil
}
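
getCDIManager above builds a CDI spec cache and only warns on per-directory spec errors instead of failing hard. A minimal standalone sketch of the same initialization, assuming the CNCF CDI Go package (tags.cncf.io/container-device-interface/pkg/cdi) and a hypothetical /etc/cdi spec directory; ListDevices is used here only to show what the cache discovered and is not part of the moby code:

```go
package main

import (
	"fmt"
	"log"

	"tags.cncf.io/container-device-interface/pkg/cdi"
)

func main() {
	// Hypothetical spec directory for illustration; the daemon takes these
	// from its CDI spec-dirs configuration.
	specDirs := []string{"/etc/cdi"}

	cache, err := cdi.NewCache(cdi.WithSpecDirs(specDirs...))
	if err != nil {
		log.Fatalf("CDI cache initialization failed: %v", err)
	}
	if err := cache.Refresh(); err != nil {
		log.Fatalf("CDI cache refresh failed: %v", err)
	}
	// Spec parsing problems are reported per directory rather than aborting,
	// mirroring the warning loop in getCDIManager above.
	for dir, errs := range cache.GetErrors() {
		for _, err := range errs {
			fmt.Printf("CDI setup error %v: %v\n", dir, err)
		}
	}
	// Fully-qualified device names (vendor.com/class=name) found in the specs.
	for _, dev := range cache.ListDevices() {
		fmt.Println("CDI device:", dev)
	}
}
```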


@@ -19,15 +19,14 @@ import (
resourcestypes "github.com/moby/buildkit/executor/resources/types"
"github.com/moby/buildkit/executor/runcexecutor"
"github.com/moby/buildkit/identity"
"github.com/moby/buildkit/solver/llbsolver/cdidevices"
"github.com/moby/buildkit/solver/pb"
"github.com/moby/buildkit/util/network"
"github.com/opencontainers/runtime-spec/specs-go"
specs "github.com/opencontainers/runtime-spec/specs-go"
)
const networkName = "bridge"
func newExecutor(root, cgroupParent string, net *libnetwork.Controller, dnsConfig *oci.DNSConfig, rootless bool, idmap idtools.IdentityMapping, apparmorProfile string, cdiManager *cdidevices.Manager) (executor.Executor, error) {
func newExecutor(root, cgroupParent string, net *libnetwork.Controller, dnsConfig *oci.DNSConfig, rootless bool, idmap idtools.IdentityMapping, apparmorProfile string) (executor.Executor, error) {
netRoot := filepath.Join(root, "net")
networkProviders := map[pb.NetMode]network.Provider{
pb.NetMode_UNSET: &bridgeProvider{Controller: net, Root: netRoot},
@@ -75,7 +74,6 @@ func newExecutor(root, cgroupParent string, net *libnetwork.Controller, dnsConfi
DNS: dnsConfig,
ApparmorProfile: apparmorProfile,
ResourceMonitor: rm,
CDIManager: cdiManager,
}, networkProviders)
}


@@ -13,10 +13,9 @@ import (
"github.com/moby/buildkit/executor"
"github.com/moby/buildkit/executor/oci"
resourcetypes "github.com/moby/buildkit/executor/resources/types"
"github.com/moby/buildkit/solver/llbsolver/cdidevices"
)
func newExecutor(_, _ string, _ *libnetwork.Controller, _ *oci.DNSConfig, _ bool, _ idtools.IdentityMapping, _ string, _ *cdidevices.Manager) (executor.Executor, error) {
func newExecutor(_, _ string, _ *libnetwork.Controller, _ *oci.DNSConfig, _ bool, _ idtools.IdentityMapping, _ string) (executor.Executor, error) {
return &stubExecutor{}, nil
}


@@ -6,8 +6,8 @@ import (
"fmt"
"strings"
"github.com/containerd/containerd/v2/core/content"
"github.com/containerd/containerd/v2/core/leases"
"github.com/containerd/containerd/content"
"github.com/containerd/containerd/leases"
"github.com/containerd/log"
distref "github.com/distribution/reference"
"github.com/docker/docker/image"


@@ -18,14 +18,14 @@ import (
func emptyImageConfig() ([]byte, error) {
pl := platforms.Normalize(platforms.DefaultSpec())
dt, err := json.Marshal(ocispec.Image{
Platform: pl,
Config: ocispec.ImageConfig{
WorkingDir: "/",
Env: []string{"PATH=" + system.DefaultPathEnv(pl.OS)},
},
RootFS: ocispec.RootFS{Type: "layers"},
})
img := ocispec.Image{}
img.Architecture = pl.Architecture
img.OS = pl.OS
img.Variant = pl.Variant
img.RootFS.Type = "layers"
img.Config.WorkingDir = "/"
img.Config.Env = []string{"PATH=" + system.DefaultPathEnv(pl.OS)}
dt, err := json.Marshal(img)
return dt, errors.Wrap(err, "failed to create empty image config")
}
@@ -80,7 +80,7 @@ func patchImageConfig(dt []byte, dps []digest.Digest, history []ocispec.History,
}
if cache != nil {
dt, err = json.Marshal(cache.Data)
dt, err := json.Marshal(cache.Data)
if err != nil {
return nil, err
}
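
The final hunk in patchImageConfig toggles between `dt, err = json.Marshal(cache.Data)` and `dt, err := json.Marshal(cache.Data)`. The difference matters because `:=` inside the `if cache != nil` block declares a new dt that shadows the outer `dt []byte` parameter, whereas `=` reassigns the outer variable. A tiny standalone Go sketch of that distinction, unrelated to the moby sources themselves:

```go
package main

import "fmt"

func main() {
	dt := []byte("outer")

	if true {
		// With ':=', a new dt is declared that exists only inside this
		// block and shadows the outer one.
		dt := []byte("inner, shadowed")
		fmt.Printf("inside:  %s\n", dt)
	}
	fmt.Printf("outside: %s\n", dt) // still "outer"

	// With '=', the outer dt is reassigned instead.
	dt = []byte("reassigned")
	fmt.Printf("outside: %s\n", dt) // now "reassigned"
}
```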

Some files were not shown because too many files have changed in this diff.