Compare commits

155 Commits

Author SHA1 Message Date
Paweł Gronowski
1328a0a61c Merge pull request #50940 from vvoland/50936-23.0
[23.0 backport] Dockerfile.windows: remove deprecated 7Zip4Powershell
2025-09-10 11:08:32 +02:00
Paweł Gronowski
e7cb7cfc60 Dockerfile.windows: remove deprecated 7Zip4Powershell
The `tar` utility is included in Windows 10 (17063+) and Windows Server
2019+, so we can use it directly.

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 8c8324b37f)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2025-09-09 19:36:23 +02:00
Paweł Gronowski
9e8243de60 Merge pull request #50941 from vvoland/50662-23.0
[23.0 backport] Fix download-frozen-image-v2
2025-09-09 19:34:57 +02:00
Paweł Gronowski
9e6bd0772d download-frozen-image-v2: Use curl -L
Passing the Auth to the redirected location was fixed in curl 7.58:
https://curl.se/changes.html#7_58_0 so we no longer need the extra
handling and can just use `-L` to let curl handle redirects.

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit b9b52d59b8)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2025-09-09 17:52:17 +02:00
Paweł Gronowski
0fb65cdc92 download-frozen-image-v2: handle 307 responses without decimal
Correctly parse an HTTP response whose status line doesn't contain an HTTP version with a decimal place:

```
< HTTP/2 307
```

The previous version would only match strings like `HTTP/2.0 307`.

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 359a881cea)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2025-09-09 17:52:15 +02:00
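
The fix itself lives in the bash script; as an illustration of the parsing problem described above, here is a minimal Go sketch of a status-line pattern that accepts both `HTTP/2 307` and `HTTP/2.0 307` (the pattern is an assumption for illustration, not the script's actual expression):

```go
package main

import (
	"fmt"
	"regexp"
)

// The minor version (".0") is optional, so both "HTTP/2 307" and
// "HTTP/2.0 307" are matched; the second capture group is the status code.
var statusLine = regexp.MustCompile(`^HTTP/[0-9]+(\.[0-9]+)? ([0-9]{3})`)

func main() {
	for _, line := range []string{"HTTP/2 307", "HTTP/2.0 307", "HTTP/1.1 200 OK"} {
		if m := statusLine.FindStringSubmatch(line); m != nil {
			fmt.Printf("%-16s -> status %s\n", line, m[2])
		}
	}
}
```
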
Cory Snider
e01bbe8173 Merge pull request #50052 from aepifanov/dev/go1.23.9/23.0
[23.0] Update to go1.23.9
2025-05-22 18:05:49 -04:00
Andrey Epifanov
40b0fcd12f update to go1.23.9
https://github.com/golang/go/issues?q=milestone%3AGo1.23.9
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
2025-05-22 11:40:09 -07:00
Cory Snider
eda5359d4c Merge pull request #49859 from aepifanov/backport-23.0/ubuntu-22.04-gha
[23.0] Update GHA, Containerd, CI image, and Golang to the actual status
2025-05-08 12:08:23 -04:00
Paweł Gronowski
a35f7ee1c2 libnetwork: Mark flaky tests
Mark the following tests as flaky:
- TestNetworkDBCRUDTableEntry
- TestNetworkDBCRUDTableEntries
- TestNetworkDBIslands
- TestNetworkDBNodeLeave

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 9893520c62)
2025-05-08 07:59:57 -07:00
Paweł Gronowski
af82d154e8 hack/unit: Rerun failed flaky libnetwork tests
libnetwork tests tend to be flaky (namely `TestNetworkDBIslands` and
`TestNetworkDBCRUDTableEntries`).

Move execution of tests whose name has the `TestFlaky` prefix to a separate
gotestsum pass, which allows them to be rerun 4 times.

On Windows, the libnetwork test execution is not split into a separate
pass.

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit d0d8d5d97d)
2025-05-08 07:59:57 -07:00
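
As an illustration of the naming convention described above (not the actual renamed tests), a known-flaky test gets the `TestFlaky` prefix so the dedicated gotestsum pass can select and retry it:

```go
package networkdb

import "testing"

// The "TestFlaky" prefix routes this test into the separate gotestsum pass
// described above, which retries failed runs up to 4 times.
// Hypothetical example; the real test bodies are unchanged by the rename.
func TestFlakyNetworkDBIslands(t *testing.T) {
	// ... original test body ...
}
```
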
Pavel Tikhomirov
0d4fb6cf12 integration-cli: Make service process live forever
- TestServiceLogsCompleteness runs a service with a command that writes 6 log
lines, but as the command exits immediately, the service is restarted and 6 more
lines are printed in the logs, which confuses the checker.Equals(6) check.

- TestServiceLogsSince runs a service with a command that writes 3 log lines,
and a service restart can also affect its checks.

Let's change from `tail`, which exits immediately, to `tail -f`, which runs
forever; this way the checks are not confused by more log lines than expected.

Signed-off-by: Pavel Tikhomirov <ptikhomirov@virtuozzo.com>
(cherry picked from commit f4c0ec8ffc)
2025-05-08 07:59:57 -07:00
Derek McGowan
9c43d5c05a Run CLI tests with cgroups v2
Signed-off-by: Derek McGowan <derek@mcg.dev>
(cherry picked from commit cd89a35ea0)
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>

# Conflicts:
#	integration-cli/docker_cli_update_unix_test.go

# Conflicts:
#	hack/make/.integration-daemon-start
#	integration-cli/docker_api_stats_test.go
#	integration-cli/requirements_windows_test.go
(cherry picked from commit 408b47c8a7d42f62b9be7b3807d81209221e97a4)
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
2025-05-08 07:59:54 -07:00
Andrey Epifanov
b5b32e9e71 gha: dco, buildkit, ci, test: update to Ubuntu 24.04
- supersedes, closes https://github.com/moby/moby/pull/49601
- follow-up to https://github.com/moby/moby/pull/49579
- relates to https://github.com/moby/moby/issues/49576
- relates to https://github.com/moby/moby/issues/44084

Updated GHAs from 20.04 to 24.04

Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
2025-05-08 00:44:43 -07:00
Sebastiaan van Stijn
6279f51613 gha: docker-py: set TEST_SKIP_INTEGRATION_CLI=1
These tests don't actually run the integration-cli suite, but
the global hack/xxx script errors because it's not set;

    ---> Making bundle: test-docker-py (in bundles/test-docker-py)
    ---> Making bundle: .integration-daemon-start (in bundles/test-docker-py)
    Using test binary /usr/local/cli-integration/docker
    # DOCKER_EXPERIMENTAL is set: starting daemon with experimental features enabled!
    # cgroup v2 requires TEST_SKIP_INTEGRATION_CLI to be set
    make: *** [Makefile:220: test-docker-py] Error 1
    Error: Process completed with exit code 2.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 06b87d80ee)
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>

# Conflicts:
#	.github/workflows/.test.yml
2025-05-08 00:44:43 -07:00
Andrey Epifanov
9b16b8eb86 cleanup: remove unused code and suppress staticcheck warning
Removed the Windows-specific check from TestMain as it is no longer necessary. Added a staticcheck ignore comment to bypass the SA1019 warning in the devicemapper wrapper for compatibility purposes. These changes ensure cleaner and more focused code.

Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
2025-05-08 00:44:43 -07:00
Andrey Epifanov
465e60b8a8 Ignore SA1019: memory.Kernel is deprecated
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
2025-05-08 00:44:43 -07:00
Sebastiaan van Stijn
598d40a75d integration-cli: use errors.New() instead of fmt.Errorf
integration-cli/benchmark_test.go:49:27: printf: non-constant format string in call to fmt.Errorf (govet)
                            chErr <- fmt.Errorf(out)
                                                ^
    integration-cli/benchmark_test.go:62:27: printf: non-constant format string in call to fmt.Errorf (govet)
                            chErr <- fmt.Errorf(out)
                                                ^
    integration-cli/benchmark_test.go:68:27: printf: non-constant format string in call to fmt.Errorf (govet)
                            chErr <- fmt.Errorf(out)
                                                ^
    integration-cli/benchmark_test.go:73:27: printf: non-constant format string in call to fmt.Errorf (govet)
                            chErr <- fmt.Errorf(out)
                                                ^
    integration-cli/benchmark_test.go:78:27: printf: non-constant format string in call to fmt.Errorf (govet)
                            chErr <- fmt.Errorf(out)
                                                ^
    integration-cli/benchmark_test.go:84:27: printf: non-constant format string in call to fmt.Errorf (govet)
                            chErr <- fmt.Errorf(out)
                                                ^
    integration-cli/benchmark_test.go:94:27: printf: non-constant format string in call to fmt.Errorf (govet)
                            chErr <- fmt.Errorf(out)
                                                ^

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 2b7a687554)
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>

# Conflicts:
#	integration-cli/benchmark_test.go
2025-05-08 00:44:42 -07:00
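
For context, the pattern govet flags and the replacement described above, as a small standalone Go sketch (the string is illustrative):

```go
package main

import (
	"errors"
	"fmt"
)

func main() {
	out := "container exited with: 100% CPU stall"

	// Before: a non-constant format string; govet flags it, and any '%'
	// in the output is misinterpreted as a formatting verb.
	bad := fmt.Errorf(out)

	// After: errors.New treats the dynamic string as plain text.
	good := errors.New(out)

	fmt.Println(bad)  // garbled around the '%'
	fmt.Println(good) // printed verbatim
}
```
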
Sebastiaan van Stijn
c0df2ee1fa Dockerfile: fix linting warnings
The 'as' keyword should match the case of the 'from' keyword
    FromAsCasing: 'as' and 'FROM' keywords' casing do not match
    More info: https://docs.docker.com/go/dockerfile/rule/from-as-casing/

    Setting platform to predefined $TARGETPLATFORM in FROM is redundant as this is the default behavior
    RedundantTargetPlatform: Setting platform to predefined $TARGETPLATFORM in FROM is redundant as this is the default behavior
    More info: https://docs.docker.com/go/dockerfile/rule/redundant-target-platform/

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit b2b55903d0)
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>

# Conflicts:
#	Dockerfile
2025-05-08 00:44:42 -07:00
Andrey Epifanov
9182ce9276 Dockerfile: update golangci-lint to v1.64.5
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
2025-05-08 00:44:42 -07:00
Andrey Epifanov
55b913bc54 bump golang to v1.23
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
2025-05-08 00:44:42 -07:00
Andrey Epifanov
86c2194cdf Update containerd version to v1.6.38
Bump the containerd version from v1.6.28 (and v1.6.22 in one instance) to v1.6.38 across Dockerfiles and installer scripts.

Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
2025-05-08 00:44:40 -07:00
Sebastiaan van Stijn
936337c8e2 Merge pull request #49517 from aepifanov/backport-23.0/update_actions
[23.0 backport] update actions
2025-02-25 17:58:07 +01:00
CrazyMax
63fe1ae04a ci: update to codecov/codecov-action@v4
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
(cherry picked from commit 5a3c463a37)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 6c5e5271c1)
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>

# Conflicts:
#	.github/workflows/.test.yml
2025-02-24 05:41:03 -08:00
CrazyMax
815555d4eb ci: update to actions/download-artifact@v4 and actions/upload-artifact@v4
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
(cherry picked from commit 9babc02283)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 693fca6199)
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>

# Conflicts:
#	.github/workflows/.test.yml
#	.github/workflows/.windows.yml
#	.github/workflows/bin-image.yml
#	.github/workflows/ci.yml
2025-02-24 05:41:03 -08:00
CrazyMax
322189a2c8 ci: update to actions/cache@v3
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
(cherry picked from commit a83557d747)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 49487e996a)
2025-02-21 08:46:07 -08:00
Sebastiaan van Stijn
6c105c5413 gha: update to docker/setup-qemu-action@v3
- Node 20 as default runtime (requires Actions Runner v2.308.0 or later)
- full diff https://github.com/docker/setup-qemu-action/compare/v2.2.0...v3.0.0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 5d396e0533)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 9de19554c7)
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>

# Conflicts:
#	.github/workflows/bin-image.yml
2025-02-21 08:46:07 -08:00
Sebastiaan van Stijn
540b931d36 gha: update to docker/bake-action@v5
- Node 20 as default runtime (requires Actions Runner v2.308.0 or later)
- full diff https://github.com/docker/bake-action/compare/v2.3.0...v5.13.0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 4a1839ef1d)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 2a80b8a7b2)
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>

# Conflicts:
#	.github/workflows/.test.yml
2025-02-21 08:46:04 -08:00
Sebastiaan van Stijn
7567b1c11b gha: update to docker/setup-buildx-action@v3
- Node 20 as default runtime (requires Actions Runner v2.308.0 or later)
- full diff: https://github.com/docker/setup-buildx-action/compare/v2.10.0...v3.0.0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit b7fd571b0a)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 61ffecfa3b)
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>

# Conflicts:
#	.github/workflows/.test.yml
#	.github/workflows/bin-image.yml
2025-02-21 07:49:29 -08:00
Sebastiaan van Stijn
3c0f804cb2 gha: update to actions/setup-go@v5
- full diff: https://github.com/actions/setup-go/compare/v3.5.0...v5.0.0

v5

In scope of this release, we change Nodejs runtime from node16 to node20.
Moreover, we update some dependencies to the latest versions.

Besides, this release contains such changes as:

- Fix hosted tool cache usage on windows
- Improve documentation regarding dependencies caching

V4

The V4 edition of the action offers:

- Enabled caching by default
- The action will try to enable caching unless the cache input is explicitly
  set to false.

Please see "Caching dependency files and build outputs" for more information:
https://github.com/actions/setup-go#caching-dependency-files-and-build-outputs

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit e27a785f43)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 1d7df5ecc0)
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>

# Conflicts:
#	.github/workflows/.test.yml
2025-02-21 07:49:28 -08:00
Sebastiaan van Stijn
03ca59364f gha: update to actions/github-script@v7
- full diff: https://github.com/actions/github-script/compare/v6.4.1...v7.0.1

breaking changes: https://github.com/actions/github-script?tab=readme-ov-file#v7

> Version 7 of this action updated the runtime to Node 20
> https://docs.github.com/en/actions/creating-actions/metadata-syntax-for-github-actions#runs-for-javascript-actions
>
> All scripts are now run with Node 20 instead of Node 16 and are affected
> by any breaking changes between Node 16 and 20
>
> The previews input now only applies to GraphQL API calls as REST API previews
> are no longer necessary
> https://github.blog/changelog/2021-10-14-rest-api-preview-promotions/.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit fb53ee6ba3)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 4e68a265ed)
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>

# Conflicts:
#	.github/workflows/.test-prepare.yml
2025-02-21 07:49:28 -08:00
Sebastiaan van Stijn
793e141745 gha: update to actions/checkout@v4
Release notes:

- https://github.com/actions/checkout/compare/v3.6.0...v4.1.1

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 0ffddc6bb8)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit e437f890ba)
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>

# Conflicts:
#	.github/workflows/.test-prepare.yml
#	.github/workflows/.test.yml
#	.github/workflows/bin-image.yml
2025-02-21 07:49:28 -08:00
Akihiro Suda
279b80d5cb Merge pull request #49083 from thaJeztah/23.0_backport_bump_xx
[23.0 backport] update xx to v1.6.1 for compatibility with alpine 3.21
2024-12-16 13:52:43 +09:00
Sebastiaan van Stijn
3bb688291c update xx to v1.6.1 for compatibility with alpine 3.21
This fixes compatibility with alpine 3.21

- Fix additional possible `xx-cc`/`xx-cargo` compatibility issue with Alpine 3.21
- Support for Alpine 3.21
- Fix `xx-verify` with `file` 5.46+
- Fix possible error taking lock in `xx-apk` in latest Alpine without `coreutils`

full diff: https://github.com/tonistiigi/xx/compare/v1.5.0...v1.6.1

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 89899b71a0)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-12-13 00:45:05 +01:00
Sebastiaan van Stijn
fa0400740a Dockerfile: update xx to v1.5.0
full diff: https://github.com/tonistiigi/xx/compare/v1.4.0...v1.5.0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit c4ba1f4718)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-12-13 00:45:04 +01:00
Sebastiaan van Stijn
311d95d180 Dockerfile: update xx to v1.4.0
full diff: https://github.com/tonistiigi/xx/compare/v1.2.1...v1.4.0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 4f46c44725)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-12-13 00:45:04 +01:00
Austin Vazquez
009d994c11 Merge pull request #48996 from thaJeztah/23.0_backport_modprobe_br_netfilter
[23.0 backport] Jenkinsfile: modprobe br_netfilter
2024-11-29 19:45:08 -08:00
Sebastiaan van Stijn
f312b5ad19 Jenkinsfile: modprobe br_netfilter
Make sure the module is loaded, as we're not able to load it from within
the dev-container;

    time="2024-11-29T20:40:42Z" level=error msg="Running modprobe br_netfilter failed with message: modprobe: WARNING: Module br_netfilter not found in directory /lib/modules/5.15.0-1072-aws\n" error="exit status 1"

Also moving these steps _before_ the "print info" step, so that docker info
doesn't show warnings that bridge-nf-call-iptables and bridge-nf-call-ip6tables
are not loaded.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit cce5dfe1e7)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-11-29 23:29:14 +01:00
Sebastiaan van Stijn
d08a1aeac6 Merge pull request #48649 from austinvazquez/cherry-pick-c68c9aed8cb3916669de6d7f2c564279ec83663f-to-23.0
[23.0 backport] gha: add guardrails timeouts on all jobs
2024-10-12 18:35:29 +02:00
Sebastiaan van Stijn
c60b124abb gha: remove stray double empty line
Accidentally introduced in 6b7e2783d1

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 037bac89fc)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-10-12 02:16:30 +00:00
Sebastiaan van Stijn
828f2121d4 gha: restrict cross and bin-image to 20 minutes
We had a couple of runs where these jobs got stuck and GitHub Actions
didn't allow terminating them, so they were only terminated after
120 minutes.

These jobs usually complete in 5 minutes, so let's give them
a shorter timeout. 20 minutes should be enough (don't @ me).

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit c68c9aed8c)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-10-12 02:15:02 +00:00
Sebastiaan van Stijn
ef6170a59f gha: add guardrails timeouts on all jobs
We had a few "runaway jobs" recently, where the job got stuck, and kept
running for 6 hours (in one case even 24 hours, probably due to some GitHub
outage). Some of those jobs could not be terminated.

While running these actions on public repositories doesn't cost us, it's
still not desirable to have jobs running for that long (as they can still
hold up the queue).

This patch adds a blanket "2 hours" time-limit to all jobs that didn't
have a limit set. We should look at tweaking those limits to actually
expected duration, but having a default at least is a start.

Also changed the position of some existing timeouts so that we have a
consistent order in which they're set, making it easier to spot locations
where no limit is defined.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 6b7e2783d1)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-10-12 02:14:19 +00:00
Sebastiaan van Stijn
554b1e5585 Merge pull request #48627 from thaJeztah/23.0_backport_fix_buildkit_go_version
[23.0 backport] gha: buildkit: make sure expected Go version is installed
2024-10-10 13:47:48 +02:00
Sebastiaan van Stijn
f9ada51dae gha: buildkit: make sure expected Go version is installed
The buildkit workflow uses Go to determine the version of Buildkit to run
integration-tests for. It currently relies on the default version that's
installed on the GitHub Actions runners (1.21.13 currently), but this fails
if the go.mod/vendor.mod files specify a higher version of Go as the required version.

If this fails, the BUILDKIT_REF and REPO env-vars are not set / empty,
resulting in the workflow checking out the current (moby) repository instead
of buildkit, which fails.

This patch adds a step to explicitly install the expected version of Go.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 02d4fc3234)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-10-10 11:44:29 +02:00
Cory Snider
37632e9e19 Merge pull request #48583 from austinvazquez/cherry-pick-ca4c68ab956993b47fd0046b4d96eceab8b9a261-to-23.0
[23.0 backport] update to go1.22.8
2024-10-07 12:31:54 -04:00
Sebastiaan van Stijn
b53e352970 update to go1.22.8
go1.22.8 (released 2024-10-01) includes fixes to cgo, and the maps and syscall
packages. See the Go 1.22.8 milestone on our issue tracker for details;

- https://github.com/golang/go/issues?q=milestone%3AGo1.22.8+label%3ACherryPickApproved
- full diff: https://github.com/golang/go/compare/go1.22.7...go1.22.8

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit ca4c68ab95)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-10-04 20:01:12 +00:00
Sebastiaan van Stijn
9cdf20b80f Merge pull request #48466 from gdams/23
[23.0 backport] seccomp: add riscv64 mapping to seccomp_linux.go
2024-09-10 17:16:30 +02:00
George Adams
3db5ae1e38 seccomp: add riscv64 mapping to seccomp_linux.go
Signed-off-by: George Adams <georgeadams1995@gmail.com>
(cherry picked from commit 1161b790cf)
Signed-off-by: George Adams <georgeadams1995@gmail.com>
2024-09-10 11:43:24 +01:00
Sebastiaan van Stijn
f26b1272b9 Merge pull request #48442 from austinvazquez/cherry-pick-go1.22.7-to-23.0
[23.0 backport] update to go1.22.7
2024-09-09 08:37:20 +02:00
Paweł Gronowski
5b4239b6db update to go1.22.7
- https://github.com/golang/go/issues?q=milestone%3AGo1.22.7+label%3ACherryPickApproved
- full diff: https://github.com/golang/go/compare/go1.22.6...go1.22.7

These minor releases include 3 security fixes following the security policy:

- go/parser: stack exhaustion in all Parse* functions

    Calling any of the Parse functions on Go source code which contains deeply nested literals can cause a panic due to stack exhaustion.

    This is CVE-2024-34155 and Go issue https://go.dev/issue/69138.

- encoding/gob: stack exhaustion in Decoder.Decode

    Calling Decoder.Decode on a message which contains deeply nested structures can cause a panic due to stack exhaustion.

    This is a follow-up to CVE-2022-30635.

    Thanks to Md Sakib Anwar of The Ohio State University (anwar.40@osu.edu) for reporting this issue.

    This is CVE-2024-34156 and Go issue https://go.dev/issue/69139.

- go/build/constraint: stack exhaustion in Parse

    Calling Parse on a "// +build" build tag line with deeply nested expressions can cause a panic due to stack exhaustion.

    This is CVE-2024-34158 and Go issue https://go.dev/issue/69141.

View the release notes for more information:
https://go.dev/doc/devel/release#go1.23.1

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit a2e14dd8bd)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-09-06 22:29:32 +00:00
Sebastiaan van Stijn
ac20cc9d94 Merge pull request #48397 from corhere/backport-23.0/update-go1.22
[23.0 backport] Update to go1.22.6
2024-09-05 20:10:37 +02:00
Sebastiaan van Stijn
037f5ec1fd hack/dind: update comments around AppArmor
Provide more context to the steps we're doing.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 65cfcc28ab)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-09-03 18:11:07 -04:00
Sebastiaan van Stijn
47f6acd997 hack/dind-systemd: make AppArmor work with systemd enabled
On bookworm, AppArmor failed to start inside the container, which can be
seen at startup of the dev-container:

    Created symlink /etc/systemd/system/systemd-firstboot.service → /dev/null.
    Created symlink /etc/systemd/system/systemd-udevd.service → /dev/null.
    Created symlink /etc/systemd/system/multi-user.target.wants/docker-entrypoint.service → /etc/systemd/system/docker-entrypoint.service.
    hack/dind-systemd: starting /lib/systemd/systemd --show-status=false --unit=docker-entrypoint.target
    systemd 252.17-1~deb12u1 running in system mode (+PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
    Detected virtualization docker.
    Detected architecture x86-64.
    modprobe@configfs.service: Deactivated successfully.
    modprobe@dm_mod.service: Deactivated successfully.
    modprobe@drm.service: Deactivated successfully.
    modprobe@efi_pstore.service: Deactivated successfully.
    modprobe@fuse.service: Deactivated successfully.
    modprobe@loop.service: Deactivated successfully.
    apparmor.service: Starting requested but asserts failed.
    proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 49 (systemd-binfmt)
    + source /etc/docker-entrypoint-cmd
    ++ hack/make.sh dynbinary test-integration

When checking "aa-status", an error was printed that the filesystem was
not mounted:

    aa-status
    apparmor filesystem is not mounted.
    apparmor module is loaded.

Checking if "local-fs.target" was loaded, that seemed to be the case;

    systemctl status local-fs.target
    ● local-fs.target - Local File Systems
         Loaded: loaded (/lib/systemd/system/local-fs.target; static)
         Active: active since Mon 2023-11-27 10:48:38 UTC; 18s ago
           Docs: man:systemd.special(7)

However, **on the host**, "/sys/kernel/security" has a mount, which was not
present inside the container:

    mount | grep securityfs
    securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)

Interestingly, on `debian:bullseye`, this was not the case either; no
`securityfs` mount was present inside the container, and apparmor did not
actually start, but the service reported success silently:

    mount | grep securityfs
    systemctl start apparmor
    systemctl status apparmor
    ● apparmor.service - Load AppArmor profiles
         Loaded: loaded (/lib/systemd/system/apparmor.service; enabled; vendor preset: enabled)
         Active: active (exited) since Mon 2023-11-27 11:59:09 UTC; 44s ago
           Docs: man:apparmor(7)
                 https://gitlab.com/apparmor/apparmor/wikis/home/
        Process: 43 ExecStart=/lib/apparmor/apparmor.systemd reload (code=exited, status=0/SUCCESS)
       Main PID: 43 (code=exited, status=0/SUCCESS)
            CPU: 10ms

    Nov 27 11:59:09 9519f89cade1 apparmor.systemd[43]: Not starting AppArmor in container

Same, using the `/etc/init.d/apparmor` script:

    /etc/init.d/apparmor start
    Starting apparmor (via systemctl): apparmor.service.
    echo $?
    0

And apparmor was not actually active:

    aa-status
    apparmor module is loaded.
    apparmor filesystem is not mounted.

    aa-enabled
    Maybe - policy interface not available.

After further investigation, I found that the non-systemd dind script
had a mount for AppArmor, which was added in 31638ab2ad

The systemd variant was missing this mount, which may have gone unnoticed
because `debian:bullseye` was silently ignoring this when starting the
apparmor service.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit cfb8ca520a)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-09-03 18:11:05 -04:00
Cory Snider
6a7fd30a85 Fix make BIND_DIR=. DOCKER_SYSTEMD=1 shell
Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 0e0b300a1c)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-09-03 18:08:03 -04:00
Sebastiaan van Stijn
ea852019b6 frozen images: update to debian:bookworm-slim
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 3bfb6a9420)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-09-03 17:51:52 -04:00
Sebastiaan van Stijn
b41516c070 Dockerfile: remove uses of DEBIAN_FRONTEND
We used DEBIAN_FRONTEND in some places to prevent installation of packages
from being blocked. However, debian bookworm now [includes a fix][1] for
situations like this (it was specifically reported for Docker situations <3),
so we can get rid of these.

Thanks to Tianon for noticing this, and for linking to the Debian ticket!

[1]: https://bugs.debian.org/929417

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit fd40dfaf58)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-09-03 17:48:36 -04:00
Sebastiaan van Stijn
687efd562a Dockerfile: update to Debian "bookworm" (current stable)
Also switch yamllint to be installed from debian's packages, which are
currently at v1.29.0.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit e72c4818c4)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-09-03 17:48:32 -04:00
Sebastiaan van Stijn
fe212b5963 integration-cli: fix TestDaemonICC tests for newer iptables versions
Debian Bookworm ships with a newer version of iptables, which caused two
tests to fail:

    === FAIL: amd64.integration-cli TestDockerDaemonSuite/TestDaemonICCLinkExpose (1.18s)
    docker_cli_daemon_test.go:841: assertion failed: false (matched bool) != true (true bool): iptables output should have contained "DROP.*all.*ext-bridge6.*ext-bridge6", but was "Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)\n pkts bytes target prot opt in out source destination \n 0 0 DOCKER-USER 0 -- * * 0.0.0.0/0 0.0.0.0/0 \n 0 0 DOCKER-ISOLATION-STAGE-1 0 -- * * 0.0.0.0/0 0.0.0.0/0 \n 0 0 ACCEPT 0 -- * ext-bridge6 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED\n 0 0 DOCKER 0 -- * ext-bridge6 0.0.0.0/0 0.0.0.0/0 \n 0 0 ACCEPT 0 -- ext-bridge6 !ext-bridge6 0.0.0.0/0 0.0.0.0/0 \n 0 0 DROP 0 -- ext-bridge6 ext-bridge6 0.0.0.0/0 0.0.0.0/0 \n"
    --- FAIL: TestDockerDaemonSuite/TestDaemonICCLinkExpose (1.18s)

    === FAIL: amd64.integration-cli TestDockerDaemonSuite/TestDaemonICCPing (1.19s)
    docker_cli_daemon_test.go:803: assertion failed: false (matched bool) != true (true bool): iptables output should have contained "DROP.*all.*ext-bridge5.*ext-bridge5", but was "Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)\n pkts bytes target prot opt in out source destination \n 0 0 DOCKER-USER 0 -- * * 0.0.0.0/0 0.0.0.0/0 \n 0 0 DOCKER-ISOLATION-STAGE-1 0 -- * * 0.0.0.0/0 0.0.0.0/0 \n 0 0 ACCEPT 0 -- * ext-bridge5 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED\n 0 0 DOCKER 0 -- * ext-bridge5 0.0.0.0/0 0.0.0.0/0 \n 0 0 ACCEPT 0 -- ext-bridge5 !ext-bridge5 0.0.0.0/0 0.0.0.0/0 \n 0 0 DROP 0 -- ext-bridge5 ext-bridge5 0.0.0.0/0 0.0.0.0/0 \n"
    --- FAIL: TestDockerDaemonSuite/TestDaemonICCPing (1.19s)

Both the `TestDaemonICCPing`, and `TestDaemonICCLinkExpose` test were introduced
in dd0666e64f. These tests called `iptables` with
the `-n` (`--numeric`) option, which prevents it from doing a reverse-DNS lookup
as an optimization.

However, the `-n` option did not have an effect on the `prot` column before
commit [da8ecc62dd765b15df84c3aa6b83dcb7a81d4ffa] (iptables < v1.8.9 or v1.8.8).
Newer versions, such as the iptables version shipping with Debian Bookworm, do,
so we need to update the expected output for this version.

This patch removes the `-n` option, to keep the test more portable, also when
run non-containerized, and removes the use of regular expressions to check the
result, as these regular expressions were quite permissive (using `.*` wildcard
matching). Instead, we're getting the

With this change;

    make DOCKER_GRAPHDRIVER=vfs TEST_FILTER=TestDaemonICC TEST_IGNORE_CGROUP_CHECK=1 test-integration
    ...
    --- PASS: TestDockerDaemonSuite (139.11s)
    --- PASS: TestDockerDaemonSuite/TestDaemonICCLinkExpose (54.62s)
    --- PASS: TestDockerDaemonSuite/TestDaemonICCPing (84.48s)

[da8ecc62dd765b15df84c3aa6b83dcb7a81d4ffa]: https://git.netfilter.org/iptables/commit/?id=da8ecc62dd765b15df84c3aa6b83dcb7a81d4ffa

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit c3eed9fa3e)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-09-03 17:40:27 -04:00
Sebastiaan van Stijn
1a92ba6573 quota: increase sparse test-image to 300MB
Starting with [6e0ed3d19c54603f0f7d628ea04b550151d8a262], the minimum
allowed size is now 300MB. Given that this is a sparse image, and
the size of the image is irrelevant to the test (we check for
limits defined through project-quotas, not the size of the
device itself), we can raise the size of this image.

[6e0ed3d19c54603f0f7d628ea04b550151d8a262]: https://git.kernel.org/pub/scm/fs/xfs/xfsprogs-dev.git/commit/?id=6e0ed3d19c54603f0f7d628ea04b550151d8a262

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 9709b7e458)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-09-03 17:39:19 -04:00
Cory Snider
5d3fbceff0 hack/make/.binary: set CGO_LDFLAGS=-latomic for arm/v5
cross-compiling for arm/v5 was failing;

    #56 84.12 /usr/bin/arm-linux-gnueabi-clang -marm -o $WORK/b001/exe/a.out -Wl,--export-dynamic-symbol=_cgo_panic -Wl,--export-dynamic-symbol=_cgo_topofstack -Wl,--export-dynamic-symbol=crosscall2 -Qunused-arguments -Wl,--compress-debug-sections=zlib /tmp/go-link-759578347/go.o /tmp/go-link-759578347/000000.o /tmp/go-link-759578347/000001.o /tmp/go-link-759578347/000002.o /tmp/go-link-759578347/000003.o /tmp/go-link-759578347/000004.o /tmp/go-link-759578347/000005.o /tmp/go-link-759578347/000006.o /tmp/go-link-759578347/000007.o /tmp/go-link-759578347/000008.o /tmp/go-link-759578347/000009.o /tmp/go-link-759578347/000010.o /tmp/go-link-759578347/000011.o /tmp/go-link-759578347/000012.o /tmp/go-link-759578347/000013.o /tmp/go-link-759578347/000014.o /tmp/go-link-759578347/000015.o /tmp/go-link-759578347/000016.o /tmp/go-link-759578347/000017.o /tmp/go-link-759578347/000018.o -O2 -g -O2 -g -O2 -g -lpthread -O2 -g -no-pie -static
    #56 84.12 ld.lld: error: undefined symbol: __atomic_load_4
    #56 84.12 >>> referenced by gcc_libinit.c
    #56 84.12 >>>               /tmp/go-link-759578347/000009.o:(_cgo_wait_runtime_init_done)
    #56 84.12 >>> referenced by gcc_libinit.c
    #56 84.12 >>>               /tmp/go-link-759578347/000009.o:(_cgo_wait_runtime_init_done)
    #56 84.12 >>> referenced by gcc_libinit.c
    #56 84.12 >>>               /tmp/go-link-759578347/000009.o:(_cgo_wait_runtime_init_done)
    #56 84.12 >>> referenced 2 more times
    #56 84.12
    #56 84.12 ld.lld: error: undefined symbol: __atomic_store_4
    #56 84.12 >>> referenced by gcc_libinit.c
    #56 84.12 >>>               /tmp/go-link-759578347/000009.o:(_cgo_wait_runtime_init_done)
    #56 84.12 >>> referenced by gcc_libinit.c
    #56 84.12 >>>               /tmp/go-link-759578347/000009.o:(x_cgo_notify_runtime_init_done)
    #56 84.12 >>> referenced by gcc_libinit.c
    #56 84.12 >>>               /tmp/go-link-759578347/000009.o:(x_cgo_set_context_function)
    #56 84.12 clang: error: linker command failed with exit code 1 (use -v to see invocation)

From discussion on GitHub;
https://github.com/moby/moby/pull/46982#issuecomment-2206992611

The arm/v5 build failure looks to be due to libatomic not being included
in the link. For reasons probably buried in mailing list archives,
[gcc](https://gcc.gnu.org/bugzilla/show_bug.cgi?id=81358) and clang don't
bother to implicitly auto-link libatomic. This is not a big deal on many
modern platforms with atomic intrinsics as the compiler generates inline
instruction sequences, avoiding any libcalls into libatomic. ARMv5 is not
one of those platforms: all atomic operations require a libcall.

In theory, adding `CGO_LDFLAGS=-latomic` should fix arm/v5 builds.

While it could be argued that cgo should automatically link against
libatomic in the same way that it automatically links against libpthread,
the Go maintainers would have a valid counter-argument that it should be
the C toolchain's responsibility to link against libatomic automatically,
just like it does with libgcc or compiler-rt.

Co-authored-by: Sebastiaan van Stijn <github@gone.nl>
Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 4cd5c2b643)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-09-03 16:48:15 -04:00
Cory Snider
cb0e56ff03 hack/make/.binary: set CGO_CFLAGS=-Wno-atomic-alignment for arm/v5
cross-compiling for arm/v5 fails on go1.22; a fix is included for this
in go1.23 (https://github.com/golang/go/issues/65290), but for go1.22
we can set the correct option manually.

    1.189 + go build -mod=vendor -modfile=vendor.mod -o /tmp/bundles/binary-daemon/dockerd -tags 'netgo osusergo static_build journald' -ldflags '-w -X "github.com/docker/docker/dockerversion.Version=dev" -X "github.com/docker/docker/dockerversion.GitCommit=HEAD" -X "github.com/docker/docker/dockerversion.BuildTime=2024-08-29T16:59:57.000000000+00:00" -X "github.com/docker/docker/dockerversion.PlatformName=" -X "github.com/docker/docker/dockerversion.ProductName=" -X "github.com/docker/docker/dockerversion.DefaultProductLicense=" -extldflags -static ' -gcflags= github.com/docker/docker/cmd/dockerd
    67.78 # runtime/cgo
    67.78 gcc_libinit.c:44:8: error: large atomic operation may incur significant performance penalty; the access size (4 bytes) exceeds the max lock-free size (0 bytes) [-Werror,-Watomic-alignment]
    67.78 gcc_libinit.c:47:6: error: large atomic operation may incur significant performance penalty; the access size (4 bytes) exceeds the max lock-free size (0 bytes) [-Werror,-Watomic-alignment]
    67.78 gcc_libinit.c:49:10: error: large atomic operation may incur significant performance penalty; the access size (4 bytes) exceeds the max lock-free size (0 bytes) [-Werror,-Watomic-alignment]
    67.78 gcc_libinit.c:69:9: error: large atomic operation may incur significant performance penalty; the access size (4 bytes) exceeds the max lock-free size (0 bytes) [-Werror,-Watomic-alignment]
    67.78 gcc_libinit.c:71:3: error: large atomic operation may incur significant performance penalty; the access size (4 bytes) exceeds the max lock-free size (0 bytes) [-Werror,-Watomic-alignment]
    78.20 + rm -f /go/src/github.com/docker/docker/go.mod

Co-authored-by: Sebastiaan van Stijn <github@gone.nl>
Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit e853c093bf)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-09-03 16:48:03 -04:00
Cory Snider
2bc990f6aa Update to go1.22.6
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-08-29 16:43:16 -04:00
Cory Snider
536302f223 vendor: github.com/Microsoft/go-winio v0.5.3
- fileinfo: internally fix FileBasicInfo memory alignment (fixes
  compatibility with go1.22)

full diff: https://github.com/microsoft/go-winio/compare/v0.5.2...v0.5.3

Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-08-29 16:43:16 -04:00
Sebastiaan van Stijn
20989cf1c1 Dockerfile: use GOTOOLCHAIN=local
Related discussion in https://github.com/docker-library/golang/issues/472

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit aa282973d4)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-08-28 18:09:57 -04:00
Sebastiaan van Stijn
4fea1a51c0 update golangci-lint to v1.59.1
full diff: https://github.com/golangci/golangci-lint/compare/v1.55.2...v1.59.1

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 95fae036ae)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-08-28 17:47:52 -04:00
Sebastiaan van Stijn
842783ebd2 pkg/archive: reformat code to make #nosec comment work again
Looks like the way it picks up #nosec comments changed, causing the
linter error to re-appear;

    pkg/archive/archive_linux.go:57:17: G305: File traversal when extracting zip/tar archive (gosec)
                    Name:       filepath.Join(hdr.Name, WhiteoutOpaqueDir),
                                ^

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit d4160d5aa7)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-08-28 17:47:46 -04:00
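
The gist, as an illustrative sketch rather than the actual pkg/archive code: gosec only honours a `#nosec` annotation that sits on the flagged line (or its enclosing statement), so reformatting code can silently detach the suppression and the G305 finding re-appears:

```go
package main

import (
	"fmt"
	"path/filepath"
)

func main() {
	hdrName := "dir/"                   // stand-in for a tar header name
	whiteoutOpaqueDir := ".wh..wh..opq" // stand-in for the overlay whiteout marker

	// gosec reports G305 (tar/zip path traversal) for joins on archive input.
	// The suppression comment must be attached to this line; if a reformat
	// moves it elsewhere, the warning comes back.
	name := filepath.Join(hdrName, whiteoutOpaqueDir) // #nosec G305 -- input validated upstream (illustrative)

	fmt.Println(name)
}
```
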
Sebastiaan van Stijn
eea623a1e1 builder/remotecontext: reformat code to make #nosec comment work again
Looks like the way it picks up #nosec comments changed, causing the
linter error to re-appear;

    builder/remotecontext/remote.go:48:17: G107: Potential HTTP request made with variable url (gosec)
        if resp, err = http.Get(address); err != nil {
                       ^

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 04bf0e3d69)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-08-28 16:45:17 -04:00
Sebastiaan van Stijn
ac6750e3e7 Merge pull request #48297 from austinvazquez/cherry-pick-2b5ffa0b63c76e8bb4ebb253d7e4db5c7af918c0-to-23.0
[23.0 backport] gha: set permissions to read-only by default
2024-08-08 12:00:25 +02:00
Sebastiaan van Stijn
da41730314 gha: set permissions to read-only by default
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 2b5ffa0b63)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-08-06 16:18:03 +00:00
Sebastiaan van Stijn
72d0c877bf Merge pull request #48226 from austinvazquez/backport-workflow-artifact-retention-policy-updates-to-23.0
[23.0 backport] ci: update workflow artifacts retention
2024-07-24 09:56:55 +02:00
CrazyMax
32d039ea96 ci: update workflow artifacts retention
Signed-off-by: CrazyMax <1951866+crazy-max@users.noreply.github.com>
(cherry picked from commit aff003139c)
Signed-off-by: Austin Vazquez <macedonv@amazon.com>
2024-07-24 04:36:58 +00:00
Sebastiaan van Stijn
ae2b3666c5 Merge commit from fork
[23.0] AuthZ plugin security fixes
2024-07-23 21:36:28 +02:00
Jameson Hyde
7e895c4888 If url includes scheme, urlPath will drop hostname, which would not match the auth check
Signed-off-by: Jameson Hyde <jameson.hyde@docker.com>
(cherry picked from commit 754fb8d9d03895ae3ab60d2ad778152b0d835206)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 5282cb25d0)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-07-17 13:10:39 +02:00
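
A small Go illustration of the mismatch described in the commit above (not the plugin's actual check): taking only the path of a request URL drops the hostname when the URL carries a scheme, so a comparison against the full URL no longer matches:

```go
package main

import (
	"fmt"
	"net/url"
)

func main() {
	// A request URI may be absolute (scheme + host) or just a path.
	abs, _ := url.Parse("http://docker.example/v1.43/containers/json")
	rel, _ := url.Parse("/v1.43/containers/json")

	// Path-only comparison throws away the hostname of the absolute form...
	fmt.Println(abs.Path == rel.Path) // true
	// ...while a check written against the full URL string does not match.
	fmt.Println(abs.String() == rel.String()) // false
}
```
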
Jameson Hyde
8798e4c0b6 Authz plugin security fixes for 0-length content and path validation
Signed-off-by: Jameson Hyde <jameson.hyde@docker.com>

fix comments

(cherry picked from commit 9659c3a52bac57e615b5fb49b0652baca448643e)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 2ac8a479c5)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-07-17 13:10:38 +02:00
Sebastiaan van Stijn
b449180f06 Merge pull request #47988 from vvoland/v23.0-47985
[23.0 backport] builder/mobyexporter: Add missing nil check
2024-06-15 15:01:58 +02:00
Paweł Gronowski
eb75e3ff8e builder/mobyexporter: Add missing nil check
Add a nil check to handle a case where the image config JSON would
deserialize into a nil map.

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 642242a26b)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-06-14 18:37:09 +02:00
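
A minimal sketch of the failure mode described above (struct and field names are illustrative, not the exporter's actual types): a JSON `null` deserializes into a nil map, so writes need a nil check first:

```go
package main

import (
	"encoding/json"
	"fmt"
)

type imageConfig struct {
	Labels map[string]string `json:"Labels"`
}

func main() {
	var cfg imageConfig
	// "Labels": null (or an absent field) leaves cfg.Labels as a nil map.
	_ = json.Unmarshal([]byte(`{"Labels": null}`), &cfg)

	// Reading a nil map is safe, but assigning to it panics, so allocate first.
	if cfg.Labels == nil {
		cfg.Labels = map[string]string{}
	}
	cfg.Labels["org.example.note"] = "added safely"
	fmt.Println(cfg.Labels)
}
```
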
Paweł Gronowski
907eb136e2 Merge pull request #47905 from thaJeztah/23.0_backport_bump_go1.21.11
[23.0 backport] update to go1.21.11
2024-06-05 10:14:26 +02:00
Sebastiaan van Stijn
6ce2dd6866 update to go1.21.11
go1.21.11 (released 2024-06-04) includes security fixes to the archive/zip
and net/netip packages, as well as bug fixes to the compiler, the go command,
the runtime, and the os package. See the Go 1.21.11 milestone on our issue
tracker for details;

- https://github.com/golang/go/issues?q=milestone%3AGo1.21.11+label%3ACherryPickApproved
- full diff: https://github.com/golang/go/compare/go1.21.10...go1.21.11

From the security announcement;

We have just released Go versions 1.22.4 and 1.21.11, minor point releases.
These minor releases include 2 security fixes following the security policy:

- archive/zip: mishandling of corrupt central directory record

  The archive/zip package's handling of certain types of invalid zip files
  differed from the behavior of most zip implementations. This misalignment
  could be exploited to create a zip file with contents that vary depending
  on the implementation reading the file. The archive/zip package now rejects
  files containing these errors.

  Thanks to Yufan You for reporting this issue.

  This is CVE-2024-24789 and Go issue https://go.dev/issue/66869.

- net/netip: unexpected behavior from Is methods for IPv4-mapped IPv6 addresses

  The various Is methods (IsPrivate, IsLoopback, etc) did not work as expected
  for IPv4-mapped IPv6 addresses, returning false for addresses which would
  return true in their traditional IPv4 forms.

  Thanks to Enze Wang of Alioth and Jianjun Chen of Zhongguancun Lab
  for reporting this issue.

  This is CVE-2024-24790 and Go issue https://go.dev/issue/67680.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 91e2c29865)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-06-04 23:33:59 +02:00
Sebastiaan van Stijn
0ad1672781 Merge pull request #47891 from thaJeztah/23.0_backport_platforms_err_handling
[23.0 backport] don't depend on containerd platform.Parse to return a typed error
2024-06-03 17:52:33 +02:00
Sebastiaan van Stijn
fca8ba8b0a don't depend on containerd platform.Parse to return a typed error
We currently depend on the containerd platform-parsing to return typed
errdefs errors; the new containerd platforms module does not return such
errors, and documents that errors returned should not be used as sentinel
errors; c1438e911a/errors.go (L21-L30)

Let's type these errors ourselves, so that we don't depend on the error-types
returned by containerd, and consider that any platform string that results in
an error is an invalid parameter.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit cd1ed46d73)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-06-03 13:07:39 +02:00
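
A minimal sketch of the approach described above, using a local stand-in for the daemon's invalid-parameter classification (not the actual errdefs package or containerd's parser): any parse failure is wrapped into an error type we control, instead of type-asserting the error the parser returns:

```go
package main

import (
	"errors"
	"fmt"
)

// invalidParameterError is an illustrative stand-in for the daemon's
// "invalid parameter" error classification.
type invalidParameterError struct{ error }

func (invalidParameterError) InvalidParameter() {}

// parsePlatform wraps whatever error the underlying parser returns, so we
// never rely on that parser producing a typed (sentinel) error itself.
func parsePlatform(s string, parse func(string) error) error {
	if err := parse(s); err != nil {
		return invalidParameterError{fmt.Errorf("invalid platform %q: %w", s, err)}
	}
	return nil
}

func main() {
	stubParse := func(string) error { return errors.New("unknown operating system") }
	err := parsePlatform("not-a-platform", stubParse)

	var ip interface{ InvalidParameter() }
	fmt.Println(err)                 // invalid platform "not-a-platform": unknown operating system
	fmt.Println(errors.As(err, &ip)) // true: classified by us, not by the parser
}
```
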
Sebastiaan van Stijn
67640713cc Merge pull request #47868 from dperny/23.0-47854
[23.0 backport] Fix issue where node promotion could fail
2024-05-29 13:52:45 +02:00
Drew Erny
25ee4b891c Fix issue where node promotion could fail
If a node is promoted right after another node is demoted, there exists
the possibility of a race, by which the newly promoted manager attempts
to connect to the newly demoted manager for its initial Raft membership.
This connection fails, and the whole swarm Node object exits.

At this point, the daemon nodeRunner sees the exit and restarts the
Node.

However, if the address of the no-longer-manager is recorded in the
nodeRunner's config.joinAddr, the Node again attempts to connect to the
no-longer-manager, and crashes again. This repeats. The solution is to
remove the node entirely and rejoin the Swarm as a new node.

This change erases config.joinAddr from the restart of the nodeRunner,
if the node has previously become Ready. The node becoming Ready
indicates that at some point, it did successfully join the cluster, in
some fashion. If it has successfully joined the cluster, then Swarm has
its own persistent record of known manager addresses. If no joinAddr is
provided, then Swarm will choose from its persisted list of managers to
join, and will join a functioning manager.

Signed-off-by: Drew Erny <derny@mirantis.com>
(cherry picked from commit 16e5c41591)
Signed-off-by: Drew Erny <derny@mirantis.com>
2024-05-24 12:28:47 -05:00
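
The gist of the change, as a hypothetical sketch (field and function names are assumptions, not the daemon's actual nodeRunner types): on restart, forget the remembered join address once the node has been Ready, so swarm's own persisted manager list is used instead of a possibly-demoted peer:

```go
package main

import "fmt"

// nodeRestartConfig is a hypothetical stand-in for the nodeRunner's config.
type nodeRestartConfig struct {
	JoinAddr string
}

// prepareRestart clears the join address if the node was ever Ready: having
// been Ready means it joined successfully before, so swarm already knows a
// set of managers to reconnect to.
func prepareRestart(cfg nodeRestartConfig, wasReady bool) nodeRestartConfig {
	if wasReady {
		cfg.JoinAddr = ""
	}
	return cfg
}

func main() {
	cfg := nodeRestartConfig{JoinAddr: "10.0.0.5:2377"} // possibly a demoted manager
	fmt.Printf("%+v\n", prepareRestart(cfg, true))      // {JoinAddr:}
	fmt.Printf("%+v\n", prepareRestart(cfg, false))     // {JoinAddr:10.0.0.5:2377}
}
```
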
Cory Snider
fba410bb66 Merge pull request #47844 from aepifanov/fix-cves
[23.0] Fix CVEs
2024-05-17 13:12:03 -04:00
Andrey Epifanov
f4e7362ba2 vendor: bump google.golang.org/grpc to v1.56.3 and google.golang.org/protobuf to v1.33.0
These vulnerabilities were found by govulncheck:

Vulnerability #1: GO-2024-2611
    Infinite loop in JSON unmarshaling in google.golang.org/protobuf
  More info: https://pkg.go.dev/vuln/GO-2024-2611
  Module: google.golang.org/protobuf
    Found in: google.golang.org/protobuf@v1.28.1
    Fixed in: google.golang.org/protobuf@v1.33.0
    Example traces found:
      #1: daemon/logger/gcplogs/gcplogging.go:154:18: gcplogs.New calls logging.Client.Ping, which eventually calls json.Decoder.Peek
      #2: daemon/logger/gcplogs/gcplogging.go:154:18: gcplogs.New calls logging.Client.Ping, which eventually calls json.Decoder.Read
      #3: daemon/logger/gcplogs/gcplogging.go:154:18: gcplogs.New calls logging.Client.Ping, which eventually calls protojson.Unmarshal

Vulnerability #2: GO-2023-2153
    Denial of service from HTTP/2 Rapid Reset in google.golang.org/grpc
  More info: https://pkg.go.dev/vuln/GO-2023-2153
  Module: google.golang.org/grpc
    Found in: google.golang.org/grpc@v1.50.1
    Fixed in: google.golang.org/grpc@v1.56.3
    Example traces found:
      #1: api/server/router/grpc/grpc.go:20:29: grpc.NewRouter calls grpc.NewServer
      #2: daemon/daemon.go:1477:23: daemon.Daemon.RawSysInfo calls sync.Once.Do, which eventually calls grpc.Server.Serve
      #3: daemon/daemon.go:1477:23: daemon.Daemon.RawSysInfo calls sync.Once.Do, which eventually calls transport.NewServerTransport

full diffs:
 - https://github.com/grpc/grpc-go/compare/v1.50.1..v1.56.3
 - https://github.com/protocolbuffers/protobuf-go/compare/v1.28.1..v1.33.0
 - https://github.com/googleapis/google-api-go-client/compare/v0.93.0..v0.114.0
 - https://github.com/golang/oauth2/compare/v0.1.0..v0.7.0
 - https://github.com/census-instrumentation/opencensus-go/compare/v0.23.0..v0.24.0
 - https://github.com/googleapis/gax-go/compare/v2.4.0..v2.7.1
 - https://github.com/googleapis/enterprise-certificate-proxy/compare/v0.1.0..v0.2.3
 - https://github.com/golang/protobuf/compare/v1.5.2..v1.5.4
 - https://github.com/cespare/xxhash/compare/v2.1.2..v2.2.0
 - https://github.com/googleapis/google-cloud-go/compare/v0.102.1..v0.110.0
 - https://github.com/googleapis/go-genproto v0.0.0-20230410155749-daa745c078e1
 - https://github.com/googleapis/google-cloud-go/compare/logging/v1.4.2..logging/v1.7.0
 - https://github.com/googleapis/google-cloud-go/compare/compute/v1.7.0..compute/v1.19.1

Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
2024-05-17 04:46:52 -07:00
Andrey Epifanov
0ee31c2bca vendor: bump golang.org/x/net to v0.23.0
Resolves GO-2024-2687, a.k.a. CVE-2023-45288.

    $ hack/with-go-mod.sh go get -modfile=vendor.mod golang.org/x/net@v0.23.0
    [...]
    full diffs:
        - https://github.com/golang/crypto/compare/v0.14.0..v0.21.0
        - https://github.com/golang/net/compare/v0.17.0..v0.23.0
        - https://github.com/golang/sys/compare/v0.13.0..v0.18.0
        - https://github.com/golang/text/compare/v0.13.0..v0.14.0

Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
2024-05-17 03:26:27 -07:00
Paweł Gronowski
116e9be754 update to go1.21.10
- https://github.com/golang/go/issues?q=milestone%3AGo1.21.10+label%3ACherryPickApproved
- full diff: https://github.com/golang/go/compare/go1.21.9...go1.21.10

These minor releases include 2 security fixes following the security policy:

- cmd/go: arbitrary code execution during build on darwin
On Darwin, building a Go module which contains CGO can trigger arbitrary code execution when using the Apple version of ld, due to usage of the -lto_library flag in a "#cgo LDFLAGS" directive.
Thanks to Juho Forsén of Mattermost for reporting this issue.
This is CVE-2024-24787 and Go issue https://go.dev/issue/67119.

- net: malformed DNS message can cause infinite loop
A malformed DNS message in response to a query can cause the Lookup functions to get stuck in an infinite loop.
Thanks to long-name-let-people-remember-you on GitHub for reporting this issue, and to Mateusz Poliwczak for bringing the issue to our attention.
This is CVE-2024-24788 and Go issue https://go.dev/issue/66754.

View the release notes for more information:
https://go.dev/doc/devel/release#go1.22.3

**- Description for the changelog**

```markdown changelog
Update Go runtime to 1.21.10
```

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 6c97e0e0b5)
Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>

# Conflicts:
#	.github/workflows/.windows.yml
#	.github/workflows/buildkit.yml
#	.github/workflows/test.yml
#	Dockerfile
#	Dockerfile.simple
#	Dockerfile.windows
#	hack/dockerfiles/generate-files.Dockerfile
2024-05-17 02:21:40 -07:00
Sebastiaan van Stijn
b0492570e7 Merge pull request #47831 from vvoland/v23.0-47749
[23.0 backport] apparmor: Allow confined runc to kill containers
2024-05-16 09:13:06 +02:00
Tomáš Virtus
07635aa60e apparmor: Allow confined runc to kill containers
/usr/sbin/runc is confined with "runc" profile[1] introduced in AppArmor
v4.0.0. This change breaks stopping of containers, because the profile
assigned to containers doesn't accept signals from the "runc" peer.
AppArmor >= v4.0.0 is currently part of Ubuntu Mantic (23.10) and later.

In the case of Docker, this regression is hidden by the fact that
dockerd itself sends SIGKILL to the running container after runc fails
to stop it. It is still a regression, because graceful shutdowns of
containers via "docker stop" are no longer possible, as SIGTERM from
runc is not delivered to them. This can be seen in logs from dockerd
when run with debug logging enabled and also from tracing signals with
killsnoop utility from bcc[2] (in bpfcc-tools package in Debian/Ubuntu):

  Test commands:

    root@cloudimg:~# docker run -d --name test redis
    ba04c137827df8468358c274bc719bf7fc291b1ed9acf4aaa128ccc52816fe46
    root@cloudimg:~# docker stop test

  Relevant syslog messages (with wrapped long lines):

    Apr 23 20:45:26 cloudimg kernel: audit:
      type=1400 audit(1713905126.444:253): apparmor="DENIED"
      operation="signal" class="signal" profile="docker-default" pid=9289
      comm="runc" requested_mask="receive" denied_mask="receive"
      signal=kill peer="runc"
    Apr 23 20:45:36 cloudimg dockerd[9030]:
      time="2024-04-23T20:45:36.447016467Z"
      level=warning msg="Container failed to exit within 10s of kill - trying direct SIGKILL"
      container=ba04c137827df8468358c274bc719bf7fc291b1ed9acf4aaa128ccc52816fe46
      error="context deadline exceeded"

  Killsnoop output after "docker stop ...":

    root@cloudimg:~# killsnoop-bpfcc
    TIME      PID      COMM             SIG  TPID     RESULT
    20:51:00  9631     runc             3    9581     -13
    20:51:02  9637     runc             9    9581     -13
    20:51:12  9030     dockerd          9    9581     0

This change extends the docker-default profile with rules that allow
receiving signals from processes that run confined with either runc or
crun profile (crun[4] is an alternative OCI runtime that's also confined
in AppArmor >= v4.0.0, see [1]). It is backward compatible because the
peer value is a regular expression (AARE) so the referenced profile
doesn't have to exist for this profile to successfully compile and load.

Note that the runc profile has an attachment to /usr/sbin/runc. This is
the path where the runc package in Debian/Ubuntu puts the binary. When
the docker-ce package is installed from the upstream repository[3], runc
is installed as part of the containerd.io package at /usr/bin/runc.
Therefore it's still running unconfined and has no issues sending
signals to containers.

[1] https://gitlab.com/apparmor/apparmor/-/commit/2594d936
[2] https://github.com/iovisor/bcc/blob/master/tools/killsnoop.py
[3] https://download.docker.com/linux/ubuntu
[4] https://github.com/containers/crun

Signed-off-by: Tomáš Virtus <nechtom@gmail.com>
(cherry picked from commit 5ebe2c0d6b)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-05-14 11:49:03 +02:00
Sebastiaan van Stijn
d839fdc508 Merge pull request #47780 from AkihiroSuda/cherrypick-createmountpoint-23
[23.0] mounts/validate: Don't check source exists with CreateMountpoint
2024-04-30 15:43:00 +02:00
Paweł Gronowski
18e21f82a9 mounts/validate: Don't check source exists with CreateMountpoint
Don't error out when the mount source doesn't exist and the mount has the
`CreateMountpoint` option enabled.

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 05b883bdc8)
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
2024-04-30 20:47:22 +09:00
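
A hypothetical sketch of the validation change (option and function names are assumptions, not moby's mount API): skip the source-exists check when the mount is configured to create its mountpoint:

```go
package main

import (
	"fmt"
	"os"
)

// validateMountSource skips the "source must exist" check when the mount
// asks for the mountpoint to be created (hypothetical helper for illustration).
func validateMountSource(source string, createMountpoint bool) error {
	if createMountpoint {
		return nil // missing source is acceptable; it will be created
	}
	if _, err := os.Stat(source); os.IsNotExist(err) {
		return fmt.Errorf("bind source path does not exist: %s", source)
	}
	return nil
}

func main() {
	fmt.Println(validateMountSource("/no/such/path", false)) // error
	fmt.Println(validateMountSource("/no/such/path", true))  // <nil>
}
```
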
Paweł Gronowski
ab34280ac3 Merge pull request #47699 from vvoland/v23.0-47658
[23.0 backport] Fix cases where we are wrapping a nil error
2024-04-09 13:55:11 +02:00
Brian Goff
8f67ed81aa Fix cases where we are wrapping a nil error
This was using `errors.Wrap` when there was no error to wrap, whereas
we are supposed to be creating a new error.

Found this while investigating some log corruption issues and
unexpectedly getting a nil reader and a nil error from `getTailReader`.

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit 0a48d26fbc)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-04-09 10:21:36 +02:00
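
The bug class described above in a minimal sketch (the message is illustrative): `errors.Wrap` returns nil when the wrapped error is nil, so the call silently produced no error where a new one was intended:

```go
package main

import (
	"fmt"

	"github.com/pkg/errors"
)

func main() {
	msg := "no tail reader available"

	var cause error // nil: there is nothing to wrap

	// Before: Wrap(nil, msg) returns nil, so the caller sees no error at all.
	bad := errors.Wrap(cause, msg)

	// After: create a new error, which is what was intended.
	good := errors.New(msg)

	fmt.Println(bad == nil, good) // true no tail reader available
}
```
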
Bjorn Neergaard
bed0abf9ca Merge pull request #47593 from corhere/backport-23.0/libnet-resolver-nxdomain
[23.0 backport] libnet: Don't forward to upstream resolvers on internal nw
2024-03-19 12:18:12 -06:00
Albin Kerouanton
f4657eae7d libnet: Don't forward to upstream resolvers on internal nw
This commit makes sure the embedded resolver doesn't try to forward to
upstream servers when a container is only attached to an internal
network.

Co-authored-by: Albin Kerouanton <albinker@gmail.com>
Signed-off-by: Rob Murray <rob.murray@docker.com>
(cherry picked from commit 790c3039d0)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-03-19 12:50:35 -04:00
Albin Kerouanton
a379e026c9 integration: Add a new networking integration test suite
This commit introduces a new integration test suite aimed at testing
networking features like inter-container communication, network
isolation, port mapping, etc... and how they interact with daemon-level
and network-level parameters.

So far, there are pretty much no tests making sure our networks are well
configured: 1. there are a few tests for port mapping, but they don't
cover all use cases; 2. there are a few tests that check whether a specific
iptables rule exists, but that doesn't prevent that specific iptables
rule from being wrong in the first place.

As we're planning to refactor how iptables rules are written, and change
some of them to fix known security issues, we need a way to test all
combinations of parameters. So far, this was done by hand, which is
particularly painful and time consuming. As such, this new test suite is
foundational to upcoming work.

Signed-off-by: Albin Kerouanton <albinker@gmail.com>
(cherry picked from commit 409ea700c7)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-03-19 12:50:35 -04:00
Albin Kerouanton
673020119f integration: Add RunAttach helper
Signed-off-by: Albin Kerouanton <albinker@gmail.com>
(cherry picked from commit 5bd8aa5246)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-03-18 16:38:35 -04:00
Sebastiaan van Stijn
1725cc5f92 Merge pull request #47535 from vvoland/v23.0-47530
[23.0 backport] volume: Don't decrement refcount below 0
2024-03-11 15:58:46 +01:00
Paweł Gronowski
d1aa20efb7 volume: Don't decrement refcount below 0
With both rootless and live restore enabled, there's some race condition
which causes the container to be `Unmount`ed before the refcount is
restored.

This makes sure we don't underflow the refcount (uint64) when
decrementing it.

The root cause of this race condition still needs to be investigated and
fixed, but at least this unflakies the `TestLiveRestore`.

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 294fc9762e)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-03-08 12:56:41 +01:00
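
A minimal sketch of the guard, with illustrative type and field names (the daemon's counted-volume type differs):

```
package main

import (
	"fmt"
	"sync"
)

// volumeWrapper is an illustrative stand-in for the daemon's
// reference-counted volume type.
type volumeWrapper struct {
	mu       sync.Mutex
	refCount uint64
}

func (v *volumeWrapper) decrement() {
	v.mu.Lock()
	defer v.mu.Unlock()
	if v.refCount == 0 {
		// An Unmount raced ahead of the refcount restore; returning here
		// avoids wrapping the uint64 around to its maximum value.
		return
	}
	v.refCount--
}

func main() {
	v := &volumeWrapper{}
	v.decrement() // would previously underflow; now a no-op
	fmt.Println(v.refCount)
}
```
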
Sebastiaan van Stijn
e2ccfc19cc Merge pull request #47529 from vvoland/v23.0-47523
[23.0 backport] builder-next: fix missing lock in ensurelayer
2024-03-07 13:24:55 +01:00
Tonis Tiigi
e35f6fbd08 builder-next: fix missing lock in ensurelayer
When this was called concurrently from the moby image
exporter there could be a data race where a layer was
written to the refs map when it was already there.

In that case the reference count got mixed up and on
release only one of these layers was actually released.

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit 37545cc644)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-03-07 12:33:26 +01:00
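
The pattern being fixed is a classic check-then-insert race on a shared map. A rough sketch with illustrative names, showing the lock held across both the lookup and the insert so concurrent exporters count the same layer once each:

```
package main

import (
	"fmt"
	"sync"
)

// layerRef and layerStore are illustrative; the real types live in
// builder-next. The point is the lock around the check-then-insert.
type layerRef struct{ count int }

type layerStore struct {
	mu   sync.Mutex
	refs map[string]*layerRef
}

func (s *layerStore) ensureLayer(key string) *layerRef {
	s.mu.Lock()
	defer s.mu.Unlock()
	if r, ok := s.refs[key]; ok {
		r.count++ // reuse the existing entry instead of overwriting it
		return r
	}
	r := &layerRef{count: 1}
	s.refs[key] = r
	return r
}

func main() {
	s := &layerStore{refs: map[string]*layerRef{}}
	var wg sync.WaitGroup
	for i := 0; i < 2; i++ {
		wg.Add(1)
		go func() { defer wg.Done(); s.ensureLayer("sha256:abc") }()
	}
	wg.Wait()
	fmt.Println(s.refs["sha256:abc"].count) // 2: counted once per caller
}
```
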
Sebastiaan van Stijn
64b5835e95 Merge pull request #47515 from vvoland/v23.0-47498
[23.0 backport] daemon: overlay2: remove world writable permission from the lower file
2024-03-06 15:19:26 +01:00
Jaroslav Jindrak
f2954d7622 daemon: overlay2: remove world writable permission from the lower file
In de2447c, the creation of the 'lower' file was changed from using
os.Create to using ioutils.AtomicWriteFile, which ignores the system's
umask. This means that even though the requested permission in the
source code was always 0666, it was 0644 on systems with default
umask of 0022 prior to de2447c, so the move to AtomicFile potentially
increased the file's permissions.

This is not a security issue because the parent directory does not
allow writes into the file, but it can confuse security scanners on
Linux-based systems into giving false positives.

Signed-off-by: Jaroslav Jindrak <dzejrou@gmail.com>
(cherry picked from commit cadb124ab6)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-03-06 13:14:18 +01:00
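
The distinction matters because `os.Create` is subject to the process umask while `ioutils.AtomicWriteFile` applies exactly the mode it is given. A small sketch (the file contents here are made up):

```
package main

import (
	"os"
	"path/filepath"

	"github.com/docker/docker/pkg/ioutils"
)

func main() {
	dir, err := os.MkdirTemp("", "overlay2-example")
	if err != nil {
		panic(err)
	}
	defer os.RemoveAll(dir)

	// os.Create requests 0666 but the umask (typically 0022) trims it to
	// 0644 on disk. AtomicWriteFile chmods to the given mode verbatim, so
	// the intended 0o644 must be passed explicitly.
	if err := ioutils.AtomicWriteFile(filepath.Join(dir, "lower"), []byte("l/ABC123"), 0o644); err != nil {
		panic(err)
	}
}
```
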
Sebastiaan van Stijn
548f37a132 Merge pull request #47339 from vvoland/cache-fix-older-windows-23
[23.0 backport] image/cache: Ignore Build and Revision on Windows
2024-02-16 17:00:24 +01:00
Akihiro Suda
5cc674870c Merge pull request #47346 from thaJeztah/23.0_backport_seccomp_updates
[23.0 backport] profiles/seccomp: add syscalls for kernel v5.17 - v6.6, match containerd's profile
2024-02-07 19:47:54 +09:00
Paweł Gronowski
d10756f356 image/cache: Require Major and Minor match for Windows OSVersion
The platform comparison was backported from the branch that vendors
containerd 1.7.

In this branch the vendored containerd version is older and doesn't have
the same comparison logic for Windows specific OSVersion.

Require both major and minor components of Windows OSVersion to match.

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit b3888ed899)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-02-06 17:52:16 +01:00
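
A rough sketch of the relaxed comparison (helper name and inputs are illustrative): two Windows OSVersion strings are treated as cache-compatible when their major and minor components match, regardless of build and revision:

```
package main

import (
	"fmt"
	"strings"
)

// osVersionsCompatible is illustrative only: compare just the major and
// minor components of "major.minor.build[.revision]" version strings.
func osVersionsCompatible(a, b string) bool {
	pa := strings.SplitN(a, ".", 3)
	pb := strings.SplitN(b, ".", 3)
	if len(pa) < 2 || len(pb) < 2 {
		return a == b
	}
	return pa[0] == pb[0] && pa[1] == pb[1]
}

func main() {
	fmt.Println(osVersionsCompatible("10.0.20348", "10.0.17763")) // true: build ignored
	fmt.Println(osVersionsCompatible("10.0.20348", "6.3.9600"))   // false: major differs
}
```
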
Sebastiaan van Stijn
20b867b732 seccomp: add futex_wake syscall (kernel v6.7, libseccomp v2.5.5)
Add this syscall to match the profile in containerd

containerd: a6e52c74fa
libseccomp: 53267af3fb
kernel: 9f6c532f59

    futex: Add sys_futex_wake()

    To complement sys_futex_waitv() add sys_futex_wake(). This syscall
    implements what was previously known as FUTEX_WAKE_BITSET except it
    uses 'unsigned long' for the bitmask and takes FUTEX2 flags.

    The 'unsigned long' allows FUTEX2_SIZE_U64 on 64bit platforms.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit d69729e053)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-02-06 15:29:53 +01:00
Sebastiaan van Stijn
fe0619e49f seccomp: add futex_wait syscall (kernel v6.7, libseccomp v2.5.5)
Add this syscall to match the profile in containerd

containerd: a6e52c74fa
libseccomp: 53267af3fb
kernel: cb8c4312af

    futex: Add sys_futex_wait()

    To complement sys_futex_waitv()/wake(), add sys_futex_wait(). This
    syscall implements what was previously known as FUTEX_WAIT_BITSET
    except it uses 'unsigned long' for the value and bitmask arguments,
    takes timespec and clockid_t arguments for the absolute timeout and
    uses FUTEX2 flags.

    The 'unsigned long' allows FUTEX2_SIZE_U64 on 64bit platforms.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 10d344d176)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-02-06 15:29:53 +01:00
Sebastiaan van Stijn
68e7b988b1 seccomp: add futex_requeue syscall (kernel v6.7, libseccomp v2.5.5)
Add this syscall to match the profile in containerd

containerd: a6e52c74fa
libseccomp: 53267af3fb
kernel: 0f4b5f9722

    futex: Add sys_futex_requeue()

    Finish off the 'simple' futex2 syscall group by adding
    sys_futex_requeue(). Unlike sys_futex_{wait,wake}() its arguments are
    too numerous to fit into a regular syscall. As such, use struct
    futex_waitv to pass the 'source' and 'destination' futexes to the
    syscall.

    This syscall implements what was previously known as FUTEX_CMP_REQUEUE
    and uses {val, uaddr, flags} for source and {uaddr, flags} for
    destination.

    This design explicitly allows requeueing between different types of
    futex by having a different flags word per uaddr.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit df57a080b6)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-02-06 15:29:53 +01:00
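
For the futex2 syscalls added above, allowing them boils down to another allow entry in the seccomp profile. An illustrative sketch using the OCI runtime-spec types that the JSON default profile maps onto (this is not the literal profile change):

```
package main

import (
	"encoding/json"
	"fmt"

	specs "github.com/opencontainers/runtime-spec/specs-go"
)

func main() {
	profile := specs.LinuxSeccomp{
		DefaultAction: specs.ActErrno,
		Syscalls: []specs.LinuxSyscall{{
			// allow the new futex2 family alongside the existing rules
			Names:  []string{"futex_wait", "futex_wake", "futex_requeue"},
			Action: specs.ActAllow,
		}},
	}
	out, _ := json.MarshalIndent(profile, "", "  ")
	fmt.Println(string(out))
}
```
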
Sebastiaan van Stijn
26d766450c seccomp: add map_shadow_stack syscall (kernel v6.6, libseccomp v2.5.5)
Add this syscall to match the profile in containerd

containerd: a6e52c74fa
libseccomp: 53267af3fb
kernel: c35559f94e

    x86/shstk: Introduce map_shadow_stack syscall

    When operating with shadow stacks enabled, the kernel will automatically
    allocate shadow stacks for new threads, however in some cases userspace
    will need additional shadow stacks. The main example of this is the
    ucontext family of functions, which require userspace allocating and
    pivoting to userspace managed stacks.

    Unlike most other user memory permissions, shadow stacks need to be
    provisioned with special data in order to be useful. They need to be setup
    with a restore token so that userspace can pivot to them via the RSTORSSP
    instruction. But, the security design of shadow stacks is that they
    should not be written to except in limited circumstances. This presents a
    problem for userspace, as to how userspace can provision this special
    data, without allowing for the shadow stack to be generally writable.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 8826f402f9)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-02-06 15:29:53 +01:00
Sebastiaan van Stijn
c98179d3c7 seccomp: add fchmodat2 syscall (kernel v6.6, libseccomp v2.5.5)
Add this syscall to match the profile in containerd

containerd: a6e52c74fa
libseccomp: 53267af3fb
kernel: 09da082b07

    fs: Add fchmodat2()

    On the userspace side fchmodat(3) is implemented as a wrapper
    function which implements the POSIX-specified interface. This
    interface differs from the underlying kernel system call, which does not
    have a flags argument. Most implementations require procfs [1][2].

    There doesn't appear to be a good userspace workaround for this issue
    but the implementation in the kernel is pretty straight-forward.

    The new fchmodat2() syscall allows passing the AT_SYMLINK_NOFOLLOW flag,
    unlike the existing fchmodat.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 6f242f1a28)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-02-06 15:29:52 +01:00
Sebastiaan van Stijn
573ebdba6e seccomp: add cachestat syscall (kernel v6.5, libseccomp v2.5.5)
Add this syscall to match the profile in containerd

containerd: a6e52c74fa
libseccomp: 53267af3fb
kernel: cf264e1329

    NAME
        cachestat - query the page cache statistics of a file.

    SYNOPSIS
        #include <sys/mman.h>

        struct cachestat_range {
            __u64 off;
            __u64 len;
        };

        struct cachestat {
            __u64 nr_cache;
            __u64 nr_dirty;
            __u64 nr_writeback;
            __u64 nr_evicted;
            __u64 nr_recently_evicted;
        };

        int cachestat(unsigned int fd, struct cachestat_range *cstat_range,
            struct cachestat *cstat, unsigned int flags);

    DESCRIPTION
        cachestat() queries the number of cached pages, number of dirty
        pages, number of pages marked for writeback, number of evicted
        pages, number of recently evicted pages, in the bytes range given by
        `off` and `len`.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 4d0d5ee10d)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-02-06 15:29:52 +01:00
Sebastiaan van Stijn
b9f01eea45 seccomp: add set_mempolicy_home_node syscall (kernel v5.17, libseccomp v2.5.4)
This syscall is gated by CAP_SYS_NICE, matching the profile in containerd.

containerd: a6e52c74fa
libseccomp: d83cb7ac25
kernel: c6018b4b25

    mm/mempolicy: add set_mempolicy_home_node syscall
    This syscall can be used to set a home node for the MPOL_BIND and
    MPOL_PREFERRED_MANY memory policy.  Users should use this syscall after
    setting up a memory policy for the specified range as shown below.

      mbind(p, nr_pages * page_size, MPOL_BIND, new_nodes->maskp,
            new_nodes->size + 1, 0);
      sys_set_mempolicy_home_node((unsigned long)p, nr_pages * page_size,
                    home_node, 0);

    The syscall allows specifying a home node/preferred node from which
    the kernel will fulfill memory allocation requests first.
    ...

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 1251982cf7)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-02-06 15:29:52 +01:00
Paweł Gronowski
a51e65b334 image/cache: Use Platform from ocispec
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 2c01d53d96)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-02-06 14:30:18 +01:00
Paweł Gronowski
347be7f928 image/cache: Ignore Build and Revision on Windows
The compatibility depends on whether `hyperv` or `process` container
isolation is used.
This fixes cache not being used when building images based on older
Windows versions on a newer Windows host.

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 91ea04089b)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-02-06 13:22:43 +01:00
Laura Brehm
58a26e4fda Merge pull request #47325 from thaJeztah/23.0_backport_plugin-install-digest
[23.0 backport] plugins: Fix panic when fetching by digest
2024-02-05 13:47:13 +00:00
Laura Brehm
d5c9f26e3d plugins: fix panic installing from repo w/ digest
Only print the tag when the received reference has a tag; if
we can't cast the received reference to a `reference.Tagged`,
skip printing the tag, as it's likely a digest.

Fixes panic when trying to install a plugin from a reference
with a digest such as
`vieux/sshfs@sha256:1d3c3e42c12138da5ef7873b97f7f32cf99fb6edde75fa4f0bcf9ed277855811`

Signed-off-by: Laura Brehm <laurabrehm@hey.com>
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-02-05 10:12:40 +01:00
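
The fix amounts to a checked type assertion on the parsed reference. A small sketch using the digest reference from the message above:

```
package main

import (
	"fmt"

	"github.com/docker/distribution/reference"
)

func main() {
	remote := "vieux/sshfs@sha256:1d3c3e42c12138da5ef7873b97f7f32cf99fb6edde75fa4f0bcf9ed277855811"

	ref, err := reference.ParseNormalizedNamed(remote)
	if err != nil {
		panic(err)
	}

	// Only assert to reference.Tagged when the reference really carries a
	// tag; an unchecked cast is what made digest-only references panic.
	if tagged, ok := ref.(reference.Tagged); ok {
		fmt.Println("installing tag:", tagged.Tag())
	} else {
		fmt.Println("installing:", reference.FamiliarString(ref))
	}
}
```
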
Laura Brehm
2dc6b50863 tests: add plugin install test w/ digest
Adds a test case for installing a plugin from a remote in the form
of `plugin-content-trust@sha256:d98f2f8061...`, which is currently
causing the daemon to panic, as we found while running the CLI e2e
tests:

```
docker plugin install registry:5000/plugin-content-trust@sha256:d98f2f806144bf4ba62d4ecaf78fec2f2fe350df5a001f6e3b491c393326aedb
```

Signed-off-by: Laura Brehm <laurabrehm@hey.com>
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-02-05 10:12:34 +01:00
Sebastiaan van Stijn
79de67d989 Merge pull request from GHSA-xw73-rw38-6vjc
[23.0 backport] image/cache: Restrict cache candidates to locally built images
2024-02-01 01:12:23 +01:00
Sebastiaan van Stijn
43e3116aa7 Merge pull request #47284 from thaJeztah/23.0_bump_containerd_binary_1.6.28
[23.0] update containerd binary to v1.6.28
2024-02-01 00:33:37 +01:00
Sebastiaan van Stijn
b523cec06b Merge pull request #47271 from thaJeztah/23.0_backport_bump_runc_binary_1.1.12
[23.0 backport] update runc binary to v1.1.12
2024-02-01 00:08:16 +01:00
Sebastiaan van Stijn
045dfe9e09 update containerd binary to v1.6.28
Update the containerd binary that's used in CI

- full diff: https://github.com/containerd/containerd/compare/v1.6.27...v1.6.28
- release notes: https://github.com/containerd/containerd/releases/tag/v1.6.28

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-01-31 23:42:13 +01:00
Sebastiaan van Stijn
1c867c02ef Merge pull request #47261 from thaJeztah/23.0_bump_containerd_windows_binary_1.6.27
[23.0] Dockerfile: update containerd binary for windows to 1.6.27
2024-01-31 23:41:19 +01:00
Sebastiaan van Stijn
e6ff860b96 Merge pull request #47282 from aepifanov/23.0_backport_bump_runc_1.1.12
[23.0 backport] vendor: github.com/opencontainers/runc v1.1.12
2024-01-31 23:40:13 +01:00
Andrey Epifanov
76894512ab vendor: github.com/opencontainers/runc v1.1.12
- release notes: https://github.com/opencontainers/runc/releases/tag/v1.1.12
- full diff: https://github.com/opencontainers/runc/compare/v1.1.5...v1.1.12

Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
2024-01-31 13:22:09 -08:00
Andrey Epifanov
e23be1cc51 vendor: github.com/cyphar/filepath-securejoin v0.2.4
full diff: https://github.com/cyphar/filepath-securejoin/compare/v0.2.3..v0.2.4

Signed-off-by: Andrey Epifanov <aepifanov@mirantis.com>
2024-01-31 13:19:19 -08:00
Sebastiaan van Stijn
46cfa2e34c Dockerfile: update containerd binary for windows to 1.6.27
Update the containerd binary that's used in CI to align with the version used
for Linux. This was missed in d6abda0710.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-01-31 21:43:42 +01:00
Sebastiaan van Stijn
5fbbb68dad Merge pull request #47267 from vvoland/ci-fix-makeps1-templatefail-23
[23.0 backport] hack/make.ps1: Fix go list pattern
2024-01-31 21:36:36 +01:00
Sebastiaan van Stijn
448e7bed7d update runc binary to v1.1.12
Update the runc binary that's used in CI and for the static packages, which
includes a fix for [CVE-2024-21626].

- release notes: https://github.com/opencontainers/runc/releases/tag/v1.1.12
- full diff: https://github.com/opencontainers/runc/compare/v1.1.11...v1.1.12

[CVE-2024-21626]: https://github.com/opencontainers/runc/security/advisories/GHSA-xr7r-f8xq-vfvv

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 44bf407d4d)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-01-31 21:06:43 +01:00
Paweł Gronowski
4bb779015a hack/make.ps1: Fix go list pattern
The double quotes inside a single quoted string don't need to be
escaped.
Looks like different PowerShell versions are treating this differently
and it started failing unexpectedly without any changes on our side.

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit ecb217cf69)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-01-31 19:58:08 +01:00
Sebastiaan van Stijn
42d92e8903 Merge pull request #47226 from thaJeztah/23.0_update_containerd_binary
[23.0] Dockerfile: update containerd binary to v1.6.27
2024-01-25 21:40:50 +01:00
Paweł Gronowski
d5de9f7779 builder/windows: Don't set ArgsEscaped for RUN cache probe
Previously this was done indirectly - the `compare` function didn't
check the `ArgsEscaped`.

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 96d461d27e)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 44e6f3da60)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-01-25 16:33:36 +01:00
Paweł Gronowski
bb03b5b86e image/cache: Check image platform
Make sure the cache candidate platform matches the requested one.

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 877ebbe038)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 8a19bb7193)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-01-25 16:33:33 +01:00
Sebastiaan van Stijn
d6abda0710 Dockerfile: update containerd binary to v1.6.27
Update the binary version used in CI and for the static packages.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-01-25 16:31:10 +01:00
Paweł Gronowski
75c70b08b5 image/cache: Restrict cache candidates to locally built images
Restrict cache candidates only to images that were built locally.
This doesn't affect builds using `--cache-from`.

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 96ac22768a)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 17af50f46b)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-01-25 16:23:08 +01:00
Paweł Gronowski
8617cd0570 daemon/imageStore: Mark images built locally
Store an additional image property which makes it possible to distinguish
whether an image was built locally.

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit c6156dc51b)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit ffb63c0bae)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-01-25 16:23:06 +01:00
Paweł Gronowski
e9e21b6bf1 image/cache: Compare all config fields
Add checks for some image config fields that were missing.

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 537348763f)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 593b754d8f)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-01-25 16:23:05 +01:00
Sebastiaan van Stijn
d363536dca Merge pull request #47223 from vvoland/pkg-pools-close-noop-23
[23.0 backport] pkg/ioutils: Make subsequent Close attempts noop
2024-01-25 16:16:18 +01:00
Paweł Gronowski
43c2952a31 pkg/ioutils: Make subsequent Close attempts noop
Turn subsequent `Close` calls into a no-op and produce a warning with an
optional stack trace (if debug mode is enabled).

Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
(cherry picked from commit 585d74bad1)
Signed-off-by: Paweł Gronowski <pawel.gronowski@docker.com>
2024-01-25 15:23:28 +01:00
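
A minimal sketch of the idea with illustrative names (the real wrapper lives in pkg/ioutils and can also capture a stack trace when debug mode is enabled):

```
package main

import (
	"io"
	"log"
	"strings"
	"sync"
)

// onceCloser is illustrative only: it remembers that Close already ran,
// warns on repeated calls, and does not forward them to the wrapped closer.
type onceCloser struct {
	mu     sync.Mutex
	closed bool
	inner  io.Closer
}

func (c *onceCloser) Close() error {
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.closed {
		log.Println("warning: Close called on an already-closed reader")
		return nil
	}
	c.closed = true
	return c.inner.Close()
}

func main() {
	c := &onceCloser{inner: io.NopCloser(strings.NewReader("x"))}
	_ = c.Close()
	_ = c.Close() // no-op, only logs a warning
}
```
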
Sebastiaan van Stijn
c0ee655462 Merge pull request #46900 from thaJeztah/23.0_update_golang_1.20.12
[23.0] update to go1.20.13
2024-01-20 14:52:48 +01:00
Sebastiaan van Stijn
751d19fd3c update to go1.20.13
go1.20.13 (released 2024-01-09) includes fixes to the runtime and the crypto/tls
package. See the Go 1.20.13 milestone on our issue tracker for details:

- https://github.com/golang/go/issues?q=milestone%3AGo1.20.13+label%3ACherryPickApproved
- full diff: https://github.com/golang/go/compare/go1.20.12...go1.20.13

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-01-19 12:12:23 +01:00
Sebastiaan van Stijn
583e29949e update to go1.20.12
go1.20.12 (released 2023-12-05) includes security fixes to the go command,
and the net/http and path/filepath packages, as well as bug fixes to the
compiler and the go command. See the Go 1.20.12 milestone on our issue
tracker for details.

- https://github.com/golang/go/issues?q=milestone%3AGo1.20.12+label%3ACherryPickApproved
- full diff: https://github.com/golang/go/compare/go1.20.11...go1.20.12

from the security mailing:

[security] Go 1.21.5 and Go 1.20.12 are released

Hello gophers,

We have just released Go versions 1.21.5 and 1.20.12, minor point releases.

These minor releases include 3 security fixes following the security policy:

- net/http: limit chunked data overhead

  A malicious HTTP sender can use chunk extensions to cause a receiver
  reading from a request or response body to read many more bytes from
  the network than are in the body.

  A malicious HTTP client can further exploit this to cause a server to
  automatically read a large amount of data (up to about 1GiB) when a
  handler fails to read the entire body of a request.

  Chunk extensions are a little-used HTTP feature which permit including
  additional metadata in a request or response body sent using the chunked
  encoding. The net/http chunked encoding reader discards this metadata.
  A sender can exploit this by inserting a large metadata segment with
  each byte transferred. The chunk reader now produces an error if the
  ratio of real body to encoded bytes grows too small.

  Thanks to Bartek Nowotarski for reporting this issue.

  This is CVE-2023-39326 and Go issue https://go.dev/issue/64433.

- cmd/go: go get may unexpectedly fallback to insecure git

  Using go get to fetch a module with the ".git" suffix may unexpectedly
  fallback to the insecure "git://" protocol if the module is unavailable
  via the secure "https://" and "git+ssh://" protocols, even if GOINSECURE
  is not set for said module. This only affects users who are not using
  the module proxy and are fetching modules directly (i.e. GOPROXY=off).

  Thanks to David Leadbeater for reporting this issue.

  This is CVE-2023-45285 and Go issue https://go.dev/issue/63845.

- path/filepath: retain trailing \ when cleaning paths like \\?\c:\

  Go 1.20.11 and Go 1.21.4 inadvertently changed the definition of the
  volume name in Windows paths starting with \\?\, resulting in
  filepath.Clean(\\?\c:\) returning \\?\c: rather than \\?\c:\ (among
  other effects). The previous behavior has been restored.

  This is an update to CVE-2023-45283 and Go issue https://go.dev/issue/64028.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-01-19 12:12:23 +01:00
Sebastiaan van Stijn
1dcd83acb4 update to go1.20.11
go1.20.11 (released 2023-11-07) includes security fixes to the path/filepath
package, as well as bug fixes to the linker and the net/http package. See the
Go 1.20.11 milestone on our issue tracker for details:

- https://github.com/golang/go/issues?q=milestone%3AGo1.20.11+label%3ACherryPickApproved
- full diff: https://github.com/golang/go/compare/go1.20.10...go1.20.11

from the security mailing:

[security] Go 1.21.4 and Go 1.20.11 are released

Hello gophers,

We have just released Go versions 1.21.4 and 1.20.11, minor point releases.

These minor releases include 2 security fixes following the security policy:

- path/filepath: recognize `\??\` as a Root Local Device path prefix.

  On Windows, a path beginning with `\??\` is a Root Local Device path equivalent
  to a path beginning with `\\?\`. Paths with a `\??\` prefix may be used to
  access arbitrary locations on the system. For example, the path `\??\c:\x`
  is equivalent to the more common path c:\x.

  The filepath package did not recognize paths with a `\??\` prefix as special.

  Clean could convert a rooted path such as `\a\..\??\b` into
  the root local device path `\??\b`. It will now convert this
  path into `.\??\b`.

  `IsAbs` did not report paths beginning with `\??\` as absolute.
  It now does so.

  VolumeName now reports the `\??\` prefix as a volume name.

  `Join(`\`, `??`, `b`)` could convert a seemingly innocent
  sequence of path elements into the root local device path
  `\??\b`. It will now convert this to `\.\??\b`.

  This is CVE-2023-45283 and https://go.dev/issue/63713.

- path/filepath: recognize device names with trailing spaces and superscripts

  The `IsLocal` function did not correctly detect reserved names in some cases:

  - reserved names followed by spaces, such as "COM1 ".
  - "COM" or "LPT" followed by a superscript 1, 2, or 3.

  `IsLocal` now correctly reports these names as non-local.

  This is CVE-2023-45284 and https://go.dev/issue/63713.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-01-19 12:12:23 +01:00
Sebastiaan van Stijn
97280b92b5 Merge pull request #46951 from thaJeztah/23.0_backport_local_logs_timezone
[23.0 backport] daemon/logger/local: always use UTC for timestamps
2024-01-19 12:11:37 +01:00
Sebastiaan van Stijn
190138d0ec Merge pull request #47110 from corhere/backport-23.0/lock-container-when-deleting-root
[23.0 backport] Lock container when deleting its root directory
2024-01-19 12:11:16 +01:00
Cory Snider
440d3b00fe Lock container when deleting its root directory
Attempting to delete the directory while another goroutine is
concurrently executing a CheckpointTo() can fail on Windows due to file
locking. As all callers of CheckpointTo() are required to hold the
container lock, holding the lock while deleting the directory ensures
that there will be no interference.

Signed-off-by: Cory Snider <csnider@mirantis.com>
(cherry picked from commit 18e322bc7c)
Signed-off-by: Cory Snider <csnider@mirantis.com>
2024-01-18 15:23:24 -05:00
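
A sketch of the ordering constraint with an illustrative container type (the daemon's container embeds its lock the same way, and CheckpointTo callers must hold it):

```
package main

import (
	"os"
	"sync"
)

// Container is an illustrative stand-in for the daemon's container type.
type Container struct {
	sync.Mutex
	Root string
}

// deleteRoot holds the container lock for the whole removal, so a
// concurrent CheckpointTo cannot have files open (and, on Windows,
// locked) inside the tree being deleted.
func deleteRoot(ctr *Container) error {
	ctr.Lock()
	defer ctr.Unlock()
	return os.RemoveAll(ctr.Root)
}

func main() {
	dir, _ := os.MkdirTemp("", "ctr-root")
	_ = deleteRoot(&Container{Root: dir})
}
```
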
Akihiro Suda
6aeb5db52a Merge pull request #47095 from thaJeztah/23.0_backport_bump_golangci_lint
[23.0 backport] update golangci-lint to v1.55.2
2024-01-18 18:11:37 +09:00
Sebastiaan van Stijn
5ccc9357db update golangci-lint to v1.55.2
- full diff: https://github.com/golangci/golangci-lint/compare/v1.54.2...v1.55.2

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit d5a3fccb06)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-01-17 22:21:47 +01:00
Sebastiaan van Stijn
b19284a8de daemon/logger/local: always use UTC for timestamps
When reading logs, timestamps should always be presented in UTC. Unlike
the "json-file" and other logging drivers, the "local" logging driver
was using local time.

Thanks to Roman Valov for reporting this issue, and locating the bug.

Before this change:

    echo $TZ
    Europe/Amsterdam

    docker run -d --log-driver=local nginx:alpine
    fc166c6b2c35c871a13247dddd95de94f5796459e2130553eee91cac82766af3

    docker logs --timestamps fc166c6b2c35c871a13247dddd95de94f5796459e2130553eee91cac82766af3
    2023-12-08T18:16:56.291023422+01:00 /docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
    2023-12-08T18:16:56.291056463+01:00 /docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
    2023-12-08T18:16:56.291890130+01:00 /docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
    ...

With this patch:

    echo $TZ
    Europe/Amsterdam

    docker run -d --log-driver=local nginx:alpine
    14e780cce4c827ce7861d7bc3ccf28b21f6e460b9bfde5cd39effaa73a42b4d5

    docker logs --timestamps 14e780cce4c827ce7861d7bc3ccf28b21f6e460b9bfde5cd39effaa73a42b4d5
    2023-12-08T17:18:46.635967625Z /docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
    2023-12-08T17:18:46.635989792Z /docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
    2023-12-08T17:18:46.636897417Z /docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
    ...

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit afe281964d)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-01-04 18:33:33 +01:00
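
The essence of the change is a UTC conversion before the timestamp is rendered, so the output no longer depends on the daemon's TZ:

```
package main

import (
	"fmt"
	"time"
)

func main() {
	// With TZ=Europe/Amsterdam the local rendering carries a +01:00/+02:00
	// offset; normalizing to UTC keeps "docker logs --timestamps" stable.
	now := time.Now()
	fmt.Println(now.Format(time.RFC3339Nano))       // e.g. ...+01:00
	fmt.Println(now.UTC().Format(time.RFC3339Nano)) // same instant, in Z
}
```
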
Sebastiaan van Stijn
14e65c6ff6 Merge pull request #47011 from thaJeztah/23.0_backport_bump_runc_binary
[23.0 backport] update runc binary to v1.1.11
2024-01-03 19:26:13 +01:00
Sebastiaan van Stijn
8c6145e6ba update runc binary to v1.1.11
This is the eleventh patch release in the 1.1.z release branch of runc.
It primarily fixes a few issues with runc's handling of containers that
are configured to join existing user namespaces, and brings improvements
to cgroupv2 support.

- Fix several issues with userns path handling.
- Support memory.peak and memory.swap.peak in cgroups v2.
  Add swapOnlyUsage in MemoryStats. This field reports swap-only usage.
  For cgroupv1, Usage and Failcnt are set by subtracting memory usage
  from memory+swap usage. For cgroupv2, Usage, Limit, and MaxUsage
  are set.
- build(deps): bump github.com/cyphar/filepath-securejoin.

- release notes: https://github.com/opencontainers/runc/releases/tag/v1.1.11
- full diff: https://github.com/opencontainers/runc/compare/v1.1.10...v1.1.11

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 5fa4cfcabf)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2024-01-02 23:44:36 +01:00
Brian Goff
d67439d7b6 Merge pull request #46994 from thaJeztah/23.0_backport_46621-container_wait
[23.0 backport] Ensure that non-JSON-parsing errors are returned to the caller
2024-01-02 10:36:25 -08:00
Stefan Gehrig
69f337c285 Ensure that non-JSON-parsing errors are returned to the caller
Signed-off-by: Stefan Gehrig <stefan.gehrig.hn@googlemail.com>
Co-authored-by: Cory Snider <corhere@gmail.com>
(cherry picked from commit 0d27579fc7)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2023-12-27 14:19:46 +01:00
Sebastiaan van Stijn
03bc5f4f1c Merge pull request #46948 from thaJeztah/23.0_backport_gc_time_filter
[23.0 backport] builder-next: fix timing filter for default policy
2023-12-17 18:01:03 +01:00
Tonis Tiigi
1fcbc7cf93 builder-next: fix timing filter for default policy
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit 49d088d9ce)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2023-12-15 19:57:57 +01:00
Sebastiaan van Stijn
cad80da86f Merge pull request #46807 from thaJeztah/23.0_backport_bump_runc_binary_1.1.10
[23.0 backport] update runc binary to v1.1.10
2023-11-13 20:51:01 +01:00
Sebastiaan van Stijn
d3acc23a0c update runc binary to v1.1.10
- full diff: https://github.com/opencontainers/runc/compare/v1.1.9...v1.1.10
- release notes: https://github.com/opencontainers/runc/releases/tag/v1.1.10

This is the tenth (and most likely final) patch release in the 1.1.z
release branch of runc. It mainly fixes a few issues in cgroups, and a
umask-related issue in tmpcopyup.

- Add support for `hugetlb.<pagesize>.rsvd` limiting and accounting.
  Fixes the issue of postgres failing when hugepage limits are set.
- Fixed permissions of newly created directories to not depend on the value
  of umask in the tmpcopyup feature implementation.
- libcontainer: cgroup v1 GetStats now ignores missing `kmem.limit_in_bytes`
  (fixes the compatibility with Linux kernel 6.1+).
- Fix a semi-arbitrary cgroup write bug when given a malicious hugetlb
  configuration. This issue is not a security issue because it requires a
  malicious config.json, which is outside of our threat model.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 15bcc707e6)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2023-11-13 16:08:52 +01:00
940 changed files with 24332 additions and 7328 deletions

View File

@@ -3,6 +3,15 @@ name: .dco
# TODO: hide reusable workflow from the UI. Tracked in https://github.com/community/community/discussions/12025
# Default to 'contents: read', which grants actions to read commits.
#
# If any permission is set, any permission not included in the list is
# implicitly set to "none".
#
# see https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#permissions
permissions:
contents: read
on:
workflow_call:
@@ -11,23 +20,24 @@ env:
jobs:
run:
runs-on: ubuntu-20.04
runs-on: ubuntu-24.04
timeout-minutes: 120 # guardrails timeout for the whole job
steps:
-
name: Checkout
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
fetch-depth: 0
-
name: Dump context
uses: actions/github-script@v6
uses: actions/github-script@v7
with:
script: |
console.log(JSON.stringify(context, null, 2));
-
name: Get base ref
id: base-ref
uses: actions/github-script@v6
uses: actions/github-script@v7
with:
result-encoding: string
script: |

View File

@@ -3,6 +3,15 @@ name: .windows
# TODO: hide reusable workflow from the UI. Tracked in https://github.com/community/community/discussions/12025
# Default to 'contents: read', which grants actions to read commits.
#
# If any permission is set, any permission not included in the list is
# implicitly set to "none".
#
# see https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#permissions
permissions:
contents: read
on:
workflow_call:
inputs:
@@ -15,7 +24,7 @@ on:
default: false
env:
GO_VERSION: "1.20.10"
GO_VERSION: "1.23.9"
GOTESTLIST_VERSION: v0.3.1
TESTSTAT_VERSION: v0.1.3
WINDOWS_BASE_IMAGE: mcr.microsoft.com/windows/servercore
@@ -29,6 +38,7 @@ env:
jobs:
build:
runs-on: ${{ inputs.os }}
timeout-minutes: 120 # guardrails timeout for the whole job
env:
GOPATH: ${{ github.workspace }}\go
GOBIN: ${{ github.workspace }}\go\bin
@@ -39,7 +49,7 @@ jobs:
steps:
-
name: Checkout
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
path: ${{ env.GOPATH }}/src/github.com/docker/docker
-
@@ -58,7 +68,7 @@ jobs:
}
-
name: Cache
uses: actions/cache@v3
uses: actions/cache@v4
with:
path: |
~\AppData\Local\go-build
@@ -96,7 +106,7 @@ jobs:
docker cp "${{ env.TEST_CTN_NAME }}`:c`:\containerd\bin\containerd-shim-runhcs-v1.exe" ${{ env.BIN_OUT }}\
-
name: Upload artifacts
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: build-${{ inputs.os }}
path: ${{ env.BIN_OUT }}/*
@@ -105,7 +115,7 @@ jobs:
unit-test:
runs-on: ${{ inputs.os }}
timeout-minutes: 120
timeout-minutes: 120 # guardrails timeout for the whole job
env:
GOPATH: ${{ github.workspace }}\go
GOBIN: ${{ github.workspace }}\go\bin
@@ -115,7 +125,7 @@ jobs:
steps:
-
name: Checkout
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
path: ${{ env.GOPATH }}/src/github.com/docker/docker
-
@@ -135,7 +145,7 @@ jobs:
}
-
name: Cache
uses: actions/cache@v3
uses: actions/cache@v4
with:
path: |
~\AppData\Local\go-build
@@ -166,7 +176,7 @@ jobs:
-
name: Send to Codecov
if: inputs.send_coverage
uses: codecov/codecov-action@v3
uses: codecov/codecov-action@v4
with:
working-directory: ${{ env.GOPATH }}\src\github.com\docker\docker
directory: bundles
@@ -175,25 +185,27 @@ jobs:
-
name: Upload reports
if: always()
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: ${{ inputs.os }}-unit-reports
path: ${{ env.GOPATH }}\src\github.com\docker\docker\bundles\*
retention-days: 1
unit-test-report:
runs-on: ubuntu-latest
timeout-minutes: 120 # guardrails timeout for the whole job
if: always()
needs:
- unit-test
steps:
-
name: Set up Go
uses: actions/setup-go@v3
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
-
name: Download artifacts
uses: actions/download-artifact@v3
uses: actions/download-artifact@v4
with:
name: ${{ inputs.os }}-unit-reports
path: /tmp/artifacts
@@ -208,15 +220,16 @@ jobs:
integration-test-prepare:
runs-on: ubuntu-latest
timeout-minutes: 120 # guardrails timeout for the whole job
outputs:
matrix: ${{ steps.tests.outputs.matrix }}
steps:
-
name: Checkout
uses: actions/checkout@v3
uses: actions/checkout@v4
-
name: Set up Go
uses: actions/setup-go@v3
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
-
@@ -241,7 +254,7 @@ jobs:
integration-test:
runs-on: ${{ inputs.os }}
timeout-minutes: 120
timeout-minutes: 120 # guardrails timeout for the whole job
needs:
- build
- integration-test-prepare
@@ -262,7 +275,7 @@ jobs:
steps:
-
name: Checkout
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
path: ${{ env.GOPATH }}/src/github.com/docker/docker
-
@@ -271,7 +284,7 @@ jobs:
Get-ChildItem Env: | Out-String
-
name: Download artifacts
uses: actions/download-artifact@v3
uses: actions/download-artifact@v4
with:
name: build-${{ inputs.os }}
path: ${{ env.BIN_OUT }}
@@ -285,6 +298,8 @@ jobs:
echo "WINDOWS_BASE_IMAGE_TAG=${{ env.WINDOWS_BASE_TAG_2022 }}" | Out-File -FilePath $Env:GITHUB_ENV -Encoding utf-8 -Append
}
Write-Output "${{ env.BIN_OUT }}" | Out-File -FilePath $env:GITHUB_PATH -Encoding utf8 -Append
$testName = ([System.BitConverter]::ToString((New-Object System.Security.Cryptography.SHA256Managed).ComputeHash([System.Text.Encoding]::UTF8.GetBytes("${{ matrix.test }}"))) -replace '-').ToLower()
echo "TESTREPORTS_NAME=$testName" | Out-File -FilePath $Env:GITHUB_ENV -Encoding utf-8 -Append
-
# removes docker service that is currently installed on the runner. we
# could use Uninstall-Package but not yet available on Windows runners.
@@ -390,7 +405,7 @@ jobs:
DOCKER_HOST: npipe:////./pipe/docker_engine
-
name: Set up Go
uses: actions/setup-go@v3
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
-
@@ -415,7 +430,7 @@ jobs:
-
name: Send to Codecov
if: inputs.send_coverage
uses: codecov/codecov-action@v3
uses: codecov/codecov-action@v4
with:
working-directory: ${{ env.GOPATH }}\src\github.com\docker\docker
directory: bundles
@@ -461,13 +476,15 @@ jobs:
-
name: Upload reports
if: always()
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: ${{ inputs.os }}-integration-reports-${{ matrix.runtime }}
name: ${{ inputs.os }}-${{ inputs.storage }}-integration-reports-${{ matrix.runtime }}-${{ env.TESTREPORTS_NAME }}
path: ${{ env.GOPATH }}\src\github.com\docker\docker\bundles\*
retention-days: 1
integration-test-report:
runs-on: ubuntu-latest
timeout-minutes: 120 # guardrails timeout for the whole job
if: always()
needs:
- integration-test
@@ -480,15 +497,16 @@ jobs:
steps:
-
name: Set up Go
uses: actions/setup-go@v3
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
-
name: Download artifacts
uses: actions/download-artifact@v3
name: Download reports
uses: actions/download-artifact@v4
with:
name: ${{ inputs.os }}-integration-reports-${{ matrix.runtime }}
path: /tmp/artifacts
path: /tmp/reports
pattern: ${{ inputs.os }}-${{ inputs.storage }}-integration-reports-${{ matrix.runtime }}-*
merge-multiple: true
-
name: Install teststat
run: |
@@ -496,4 +514,4 @@ jobs:
-
name: Create summary
run: |
teststat -markdown $(find /tmp/artifacts -type f -name '*.json' -print0 | xargs -0) >> $GITHUB_STEP_SUMMARY
teststat -markdown $(find /tmp/reports -type f -name '*.json' -print0 | xargs -0) >> $GITHUB_STEP_SUMMARY

View File

@@ -1,5 +1,14 @@
name: buildkit
# Default to 'contents: read', which grants actions to read commits.
#
# If any permission is set, any permission not included in the list is
# implicitly set to "none".
#
# see https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#permissions
permissions:
contents: read
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
@@ -20,24 +29,25 @@ jobs:
uses: ./.github/workflows/.dco.yml
build:
runs-on: ubuntu-20.04
runs-on: ubuntu-24.04
timeout-minutes: 120 # guardrails timeout for the whole job
needs:
- validate-dco
steps:
-
name: Checkout
uses: actions/checkout@v3
uses: actions/checkout@v4
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
uses: docker/setup-buildx-action@v3
-
name: Build
uses: docker/bake-action@v2
uses: docker/bake-action@v5
with:
targets: binary
-
name: Upload artifacts
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: binary
path: ${{ env.DESTDIR }}
@@ -45,8 +55,8 @@ jobs:
retention-days: 1
test:
runs-on: ubuntu-20.04
timeout-minutes: 120
runs-on: ubuntu-24.04
timeout-minutes: 120 # guardrails timeout for the whole job
needs:
- build
strategy:
@@ -63,9 +73,14 @@ jobs:
steps:
-
name: Checkout
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
path: moby
-
name: Set up Go
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
-
name: BuildKit ref
run: |
@@ -73,20 +88,20 @@ jobs:
working-directory: moby
-
name: Checkout BuildKit ${{ env.BUILDKIT_REF }}
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
repository: "moby/buildkit"
ref: ${{ env.BUILDKIT_REF }}
path: buildkit
-
name: Set up QEMU
uses: docker/setup-qemu-action@v2
uses: docker/setup-qemu-action@v3
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
uses: docker/setup-buildx-action@v3
-
name: Download binary artifacts
uses: actions/download-artifact@v3
uses: actions/download-artifact@v4
with:
name: binary
path: ./buildkit/build/moby/

View File

@@ -1,5 +1,14 @@
name: ci
# Default to 'contents: read', which grants actions to read commits.
#
# If any permission is set, any permission not included in the list is
# implicitly set to "none".
#
# see https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#permissions
permissions:
contents: read
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
@@ -22,7 +31,8 @@ jobs:
uses: ./.github/workflows/.dco.yml
build:
runs-on: ubuntu-20.04
runs-on: ubuntu-24.04
timeout-minutes: 120 # guardrails timeout for the whole job
needs:
- validate-dco
strategy:
@@ -34,15 +44,15 @@ jobs:
steps:
-
name: Checkout
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
fetch-depth: 0
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
uses: docker/setup-buildx-action@v3
-
name: Build
uses: docker/bake-action@v2
uses: docker/bake-action@v5
with:
targets: ${{ matrix.target }}
-
@@ -53,17 +63,10 @@ jobs:
name: Check artifacts
run: |
find ${{ env.DESTDIR }} -type f -exec file -e ascii -- {} +
-
name: Upload artifacts
uses: actions/upload-artifact@v3
with:
name: ${{ matrix.target }}
path: ${{ env.DESTDIR }}
if-no-files-found: error
retention-days: 7
prepare-cross:
runs-on: ubuntu-latest
timeout-minutes: 20 # guardrails timeout for the whole job
needs:
- validate-dco
outputs:
@@ -71,7 +74,7 @@ jobs:
steps:
-
name: Checkout
uses: actions/checkout@v3
uses: actions/checkout@v4
-
name: Create matrix
id: platforms
@@ -84,7 +87,8 @@ jobs:
echo ${{ steps.platforms.outputs.matrix }}
cross:
runs-on: ubuntu-20.04
runs-on: ubuntu-24.04
timeout-minutes: 20 # guardrails timeout for the whole job
needs:
- validate-dco
- prepare-cross
@@ -95,7 +99,7 @@ jobs:
steps:
-
name: Checkout
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
fetch-depth: 0
-
@@ -105,10 +109,10 @@ jobs:
echo "PLATFORM_PAIR=${platform//\//-}" >> $GITHUB_ENV
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
uses: docker/setup-buildx-action@v3
-
name: Build
uses: docker/bake-action@v2
uses: docker/bake-action@v5
with:
targets: all
set: |
@@ -121,11 +125,3 @@ jobs:
name: Check artifacts
run: |
find ${{ env.DESTDIR }} -type f -exec file -e ascii -- {} +
-
name: Upload artifacts
uses: actions/upload-artifact@v3
with:
name: cross-${{ env.PLATFORM_PAIR }}
path: ${{ env.DESTDIR }}
if-no-files-found: error
retention-days: 7

View File

@@ -1,5 +1,14 @@
name: test
# Default to 'contents: read', which grants actions to read commits.
#
# If any permission is set, any permission not included in the list is
# implicitly set to "none".
#
# see https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#permissions
permissions:
contents: read
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
@@ -15,7 +24,7 @@ on:
pull_request:
env:
GO_VERSION: "1.20.10"
GO_VERSION: "1.23.9"
GOTESTLIST_VERSION: v0.3.1
TESTSTAT_VERSION: v0.1.3
ITG_CLI_MATRIX_SIZE: 6
@@ -27,7 +36,8 @@ jobs:
uses: ./.github/workflows/.dco.yml
build-dev:
runs-on: ubuntu-20.04
runs-on: ubuntu-24.04
timeout-minutes: 120 # guardrails timeout for the whole job
needs:
- validate-dco
strategy:
@@ -45,13 +55,13 @@ jobs:
fi
-
name: Checkout
uses: actions/checkout@v3
uses: actions/checkout@v4
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
uses: docker/setup-buildx-action@v3
-
name: Build dev image
uses: docker/bake-action@v2
uses: docker/bake-action@v5
with:
targets: dev
set: |
@@ -60,7 +70,8 @@ jobs:
*.output=type=cacheonly
validate-prepare:
runs-on: ubuntu-20.04
runs-on: ubuntu-24.04
timeout-minutes: 120 # guardrails timeout for the whole job
needs:
- validate-dco
outputs:
@@ -68,7 +79,7 @@ jobs:
steps:
-
name: Checkout
uses: actions/checkout@v3
uses: actions/checkout@v4
-
name: Create matrix
id: scripts
@@ -81,8 +92,8 @@ jobs:
echo ${{ steps.scripts.outputs.matrix }}
validate:
runs-on: ubuntu-20.04
timeout-minutes: 120
runs-on: ubuntu-24.04
timeout-minutes: 120 # guardrails timeout for the whole job
needs:
- validate-prepare
- build-dev
@@ -93,7 +104,7 @@ jobs:
steps:
-
name: Checkout
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
fetch-depth: 0
-
@@ -101,10 +112,10 @@ jobs:
uses: ./.github/actions/setup-runner
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
uses: docker/setup-buildx-action@v3
-
name: Build dev image
uses: docker/bake-action@v2
uses: docker/bake-action@v5
with:
targets: dev
set: |
@@ -115,23 +126,23 @@ jobs:
make -o build validate-${{ matrix.script }}
unit:
runs-on: ubuntu-20.04
runs-on: ubuntu-24.04
timeout-minutes: 120
needs:
- build-dev
steps:
-
name: Checkout
uses: actions/checkout@v3
uses: actions/checkout@v4
-
name: Set up runner
uses: ./.github/actions/setup-runner
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
uses: docker/setup-buildx-action@v3
-
name: Build dev image
uses: docker/bake-action@v2
uses: docker/bake-action@v5
with:
targets: dev
set: |
@@ -151,7 +162,7 @@ jobs:
tree -nh /tmp/reports
-
name: Send to Codecov
uses: codecov/codecov-action@v3
uses: codecov/codecov-action@v4
with:
directory: ./bundles
env_vars: RUNNER_OS
@@ -159,13 +170,14 @@ jobs:
-
name: Upload reports
if: always()
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: unit-reports
name: test-reports-unit-${{ inputs.storage }}
path: /tmp/reports/*
retention-days: 1
unit-report:
runs-on: ubuntu-20.04
runs-on: ubuntu-24.04
timeout-minutes: 10
if: always()
needs:
@@ -173,14 +185,14 @@ jobs:
steps:
-
name: Set up Go
uses: actions/setup-go@v3
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
-
name: Download reports
uses: actions/download-artifact@v3
uses: actions/download-artifact@v4
with:
name: unit-reports
name: test-reports-unit-${{ inputs.storage }}
path: /tmp/reports
-
name: Install teststat
@@ -192,23 +204,23 @@ jobs:
teststat -markdown $(find /tmp/reports -type f -name '*.json' -print0 | xargs -0) >> $GITHUB_STEP_SUMMARY
docker-py:
runs-on: ubuntu-20.04
runs-on: ubuntu-24.04
timeout-minutes: 120
needs:
- build-dev
steps:
-
name: Checkout
uses: actions/checkout@v3
uses: actions/checkout@v4
-
name: Set up runner
uses: ./.github/actions/setup-runner
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
uses: docker/setup-buildx-action@v3
-
name: Build dev image
uses: docker/bake-action@v2
uses: docker/bake-action@v5
with:
targets: dev
set: |
@@ -216,7 +228,7 @@ jobs:
-
name: Test
run: |
make -o build test-docker-py
make TEST_SKIP_INTEGRATION_CLI=1 -o build test-docker-py
-
name: Prepare reports
if: always()
@@ -234,29 +246,29 @@ jobs:
-
name: Upload reports
if: always()
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: docker-py-reports
name: test-reports-docker-py-${{ inputs.storage }}
path: /tmp/reports/*
integration-flaky:
runs-on: ubuntu-20.04
runs-on: ubuntu-24.04
timeout-minutes: 120
needs:
- build-dev
steps:
-
name: Checkout
uses: actions/checkout@v3
uses: actions/checkout@v4
-
name: Set up runner
uses: ./.github/actions/setup-runner
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
uses: docker/setup-buildx-action@v3
-
name: Build dev image
uses: docker/bake-action@v2
uses: docker/bake-action@v5
with:
targets: dev
set: |
@@ -277,17 +289,20 @@ jobs:
fail-fast: false
matrix:
os:
- ubuntu-20.04
- ubuntu-22.04
- ubuntu-24.04
mode:
- ""
- rootless
- systemd
#- rootless-systemd FIXME: https://github.com/moby/moby/issues/44084
exclude:
- os: ubuntu-24.04
mode: rootless # FIXME: https://github.com/moby/moby/pull/49579#issuecomment-2698622223
steps:
-
name: Checkout
uses: actions/checkout@v3
uses: actions/checkout@v4
-
name: Set up runner
uses: ./.github/actions/setup-runner
@@ -305,10 +320,10 @@ jobs:
echo "CACHE_DEV_SCOPE=${CACHE_DEV_SCOPE}" >> $GITHUB_ENV
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
uses: docker/setup-buildx-action@v3
-
name: Build dev image
uses: docker/bake-action@v2
uses: docker/bake-action@v5
with:
targets: dev
set: |
@@ -324,10 +339,12 @@ jobs:
name: Prepare reports
if: always()
run: |
reportsPath="/tmp/reports/${{ matrix.os }}"
reportsName=${{ matrix.os }}
if [ -n "${{ matrix.mode }}" ]; then
reportsPath="$reportsPath-${{ matrix.mode }}"
reportsName="$reportsName-${{ matrix.mode }}"
fi
reportsPath="/tmp/reports/$reportsName"
echo "TESTREPORTS_NAME=$reportsName" >> $GITHUB_ENV
mkdir -p bundles $reportsPath
find bundles -path '*/root/*overlay2' -prune -o -type f \( -name '*-report.json' -o -name '*.log' -o -name '*.out' -o -name '*.prof' -o -name '*-report.xml' \) -print | xargs sudo tar -czf /tmp/reports.tar.gz
tar -xzf /tmp/reports.tar.gz -C $reportsPath
@@ -335,7 +352,7 @@ jobs:
tree -nh $reportsPath
-
name: Send to Codecov
uses: codecov/codecov-action@v3
uses: codecov/codecov-action@v4
with:
directory: ./bundles/test-integration
env_vars: RUNNER_OS
@@ -348,13 +365,13 @@ jobs:
-
name: Upload reports
if: always()
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: integration-reports
name: test-reports-integration-${{ inputs.storage }}-${{ env.TESTREPORTS_NAME }}
path: /tmp/reports/*
integration-report:
runs-on: ubuntu-20.04
runs-on: ubuntu-24.04
timeout-minutes: 10
if: always()
needs:
@@ -362,15 +379,16 @@ jobs:
steps:
-
name: Set up Go
uses: actions/setup-go@v3
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
-
name: Download reports
uses: actions/download-artifact@v3
uses: actions/download-artifact@v4
with:
name: integration-reports
path: /tmp/reports
pattern: test-reports-integration-${{ inputs.storage }}-*
merge-multiple: true
-
name: Install teststat
run: |
@@ -381,7 +399,7 @@ jobs:
teststat -markdown $(find /tmp/reports -type f -name '*.json' -print0 | xargs -0) >> $GITHUB_STEP_SUMMARY
integration-cli-prepare:
runs-on: ubuntu-20.04
runs-on: ubuntu-24.04
needs:
- validate-dco
outputs:
@@ -389,10 +407,10 @@ jobs:
steps:
-
name: Checkout
uses: actions/checkout@v3
uses: actions/checkout@v4
-
name: Set up Go
uses: actions/setup-go@v3
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
-
@@ -417,7 +435,7 @@ jobs:
echo ${{ steps.tests.outputs.matrix }}
integration-cli:
runs-on: ubuntu-20.04
runs-on: ubuntu-24.04
timeout-minutes: 120
needs:
- build-dev
@@ -429,16 +447,16 @@ jobs:
steps:
-
name: Checkout
uses: actions/checkout@v3
uses: actions/checkout@v4
-
name: Set up runner
uses: ./.github/actions/setup-runner
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
uses: docker/setup-buildx-action@v3
-
name: Build dev image
uses: docker/bake-action@v2
uses: docker/bake-action@v5
with:
targets: dev
set: |
@@ -455,7 +473,9 @@ jobs:
name: Prepare reports
if: always()
run: |
reportsPath=/tmp/reports/$(echo -n "${{ matrix.test }}" | sha256sum | cut -d " " -f 1)
reportsName=$(echo -n "${{ matrix.test }}" | sha256sum | cut -d " " -f 1)
reportsPath=/tmp/reports/$reportsName
echo "TESTREPORTS_NAME=$reportsName" >> $GITHUB_ENV
mkdir -p bundles $reportsPath
echo "${{ matrix.test }}" | tr -s '|' '\n' | tee -a "$reportsPath/tests.txt"
find bundles -path '*/root/*overlay2' -prune -o -type f \( -name '*-report.json' -o -name '*.log' -o -name '*.out' -o -name '*.prof' -o -name '*-report.xml' \) -print | xargs sudo tar -czf /tmp/reports.tar.gz
@@ -464,7 +484,7 @@ jobs:
tree -nh $reportsPath
-
name: Send to Codecov
uses: codecov/codecov-action@v3
uses: codecov/codecov-action@v4
with:
directory: ./bundles/test-integration
env_vars: RUNNER_OS
@@ -477,13 +497,13 @@ jobs:
-
name: Upload reports
if: always()
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: integration-cli-reports
name: test-reports-integration-cli-${{ inputs.storage }}-${{ env.TESTREPORTS_NAME }}
path: /tmp/reports/*
integration-cli-report:
runs-on: ubuntu-20.04
runs-on: ubuntu-24.04
timeout-minutes: 10
if: always()
needs:
@@ -491,15 +511,16 @@ jobs:
steps:
-
name: Set up Go
uses: actions/setup-go@v3
uses: actions/setup-go@v5
with:
go-version: ${{ env.GO_VERSION }}
-
name: Download reports
uses: actions/download-artifact@v3
uses: actions/download-artifact@v4
with:
name: integration-cli-reports
path: /tmp/reports
pattern: test-reports-integration-cli-${{ inputs.storage }}-*
merge-multiple: true
-
name: Install teststat
run: |
@@ -510,7 +531,8 @@ jobs:
teststat -markdown $(find /tmp/reports -type f -name '*.json' -print0 | xargs -0) >> $GITHUB_STEP_SUMMARY
prepare-smoke:
runs-on: ubuntu-20.04
runs-on: ubuntu-24.04
timeout-minutes: 120 # guardrails timeout for the whole job
needs:
- validate-dco
outputs:
@@ -518,7 +540,7 @@ jobs:
steps:
-
name: Checkout
uses: actions/checkout@v3
uses: actions/checkout@v4
-
name: Create matrix
id: platforms
@@ -531,7 +553,8 @@ jobs:
echo ${{ steps.platforms.outputs.matrix }}
smoke:
runs-on: ubuntu-20.04
runs-on: ubuntu-24.04
timeout-minutes: 120 # guardrails timeout for the whole job
needs:
- prepare-smoke
strategy:
@@ -541,7 +564,7 @@ jobs:
steps:
-
name: Checkout
uses: actions/checkout@v3
uses: actions/checkout@v4
-
name: Prepare
run: |
@@ -549,13 +572,13 @@ jobs:
echo "PLATFORM_PAIR=${platform//\//-}" >> $GITHUB_ENV
-
name: Set up QEMU
uses: docker/setup-qemu-action@v2
uses: docker/setup-qemu-action@v3
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
uses: docker/setup-buildx-action@v3
-
name: Test
uses: docker/bake-action@v2
uses: docker/bake-action@v5
with:
targets: binary-smoketest
set: |

View File

@@ -1,5 +1,14 @@
name: windows-2019
# Default to 'contents: read', which grants actions to read commits.
#
# If any permission is set, any permission not included in the list is
# implicitly set to "none".
#
# see https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#permissions
permissions:
contents: read
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true

View File

@@ -1,5 +1,14 @@
name: windows-2022
# Default to 'contents: read', which grants actions to read commits.
#
# If any permission is set, any permission not included in the list is
# implicitly set to "none".
#
# see https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#permissions
permissions:
contents: read
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true

View File

@@ -1,15 +1,14 @@
# syntax=docker/dockerfile:1
ARG GO_VERSION=1.20.10
ARG BASE_DEBIAN_DISTRO="bullseye"
ARG GO_VERSION=1.23.9
ARG BASE_DEBIAN_DISTRO="bookworm"
ARG GOLANG_IMAGE="golang:${GO_VERSION}-${BASE_DEBIAN_DISTRO}"
ARG XX_VERSION=1.2.1
ARG XX_VERSION=1.6.1
ARG VPNKIT_VERSION=0.5.0
ARG DOCKERCLI_VERSION=v17.06.2-ce
ARG SYSTEMD="false"
ARG DEBIAN_FRONTEND=noninteractive
ARG DOCKER_STATIC=1
# cross compilation helper
@@ -27,13 +26,12 @@ FROM --platform=$BUILDPLATFORM ${GOLANG_IMAGE} AS base
COPY --from=xx / /
RUN echo 'Binary::apt::APT::Keep-Downloaded-Packages "true";' > /etc/apt/apt.conf.d/keep-cache
ARG APT_MIRROR
RUN test -n "$APT_MIRROR" && sed -ri "s/(httpredir|deb|security).debian.org/${APT_MIRROR}/g" /etc/apt/sources.list || true
ARG DEBIAN_FRONTEND
RUN test -n "$APT_MIRROR" && sed -ri "s#(httpredir|deb|security).debian.org#${APT_MIRROR}#g" /etc/apt/sources.list.d/debian.sources || true
RUN apt-get update && apt-get install --no-install-recommends -y file
ENV GO111MODULE=off
ENV GOTOOLCHAIN=local
FROM base AS criu
ARG DEBIAN_FRONTEND
ADD --chmod=0644 https://download.opensuse.org/repositories/devel:/tools:/criu/Debian_11/Release.key /etc/apt/trusted.gpg.d/criu.gpg.asc
RUN --mount=type=cache,sharing=locked,id=moby-criu-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-criu-aptcache,target=/var/cache/apt \
@@ -108,7 +106,6 @@ EOT
# See also frozenImages in "testutil/environment/protect.go" (which needs to
# be updated when adding images to this list)
FROM debian:${BASE_DEBIAN_DISTRO} AS frozen-images
ARG DEBIAN_FRONTEND
RUN --mount=type=cache,sharing=locked,id=moby-frozen-images-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-frozen-images-aptcache,target=/var/cache/apt \
apt-get update && apt-get install -y --no-install-recommends \
@@ -122,7 +119,7 @@ ARG TARGETVARIANT
RUN /download-frozen-image-v2.sh /build \
busybox:latest@sha256:95cf004f559831017cdf4628aaf1bb30133677be8702a8c5f2994629f637a209 \
busybox:glibc@sha256:1f81263701cddf6402afe9f33fca0266d9fff379e59b1748f33d3072da71ee85 \
debian:bullseye-slim@sha256:dacf278785a4daa9de07596ec739dbc07131e189942772210709c5c0777e8437 \
debian:bookworm-slim@sha256:2bc5c236e9b262645a323e9088dfa3bb1ecb16cc75811daf40a23a824d665be9 \
hello-world:latest@sha256:d58e752213a51785838f9eed2b7a498ffa1cb3aa7f946dda11af39286c3db9a9 \
arm32v7/hello-world:latest@sha256:50b8560ad574c779908da71f7ce370c0a2471c098d44d1c8f6b513c5a55eeeb1
@@ -191,17 +188,19 @@ RUN git init . && git remote add origin "https://github.com/containerd/container
# When updating the binary version you may also need to update the vendor
# version to pick up bug fixes or new APIs, however, usually the Go packages
# are built from a commit from the master branch.
ARG CONTAINERD_VERSION=v1.6.22
ARG CONTAINERD_VERSION=v1.6.38
RUN git fetch -q --depth 1 origin "${CONTAINERD_VERSION}" +refs/tags/*:refs/tags/* && git checkout -q FETCH_HEAD
FROM base AS containerd-build
WORKDIR /go/src/github.com/containerd/containerd
ARG DEBIAN_FRONTEND
ARG TARGETPLATFORM
RUN --mount=type=cache,sharing=locked,id=moby-containerd-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-containerd-aptcache,target=/var/cache/apt \
apt-get update && xx-apt-get install -y --no-install-recommends \
gcc libbtrfs-dev libsecret-1-dev
gcc \
libbtrfs-dev \
libsecret-1-dev \
pkg-config
ARG DOCKER_STATIC
RUN --mount=from=containerd-src,src=/usr/src/containerd,rw \
--mount=type=cache,target=/root/.cache/go-build,id=containerd-build-$TARGETPLATFORM <<EOT
@@ -222,7 +221,7 @@ FROM binary-dummy AS containerd-windows
FROM containerd-${TARGETOS} AS containerd
FROM base AS golangci_lint
ARG GOLANGCI_LINT_VERSION=v1.54.2
ARG GOLANGCI_LINT_VERSION=v1.64.5
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=cache,target=/go/pkg/mod \
GOBIN=/build/ GO111MODULE=on go install "github.com/golangci/golangci-lint/cmd/golangci-lint@${GOLANGCI_LINT_VERSION}" \
@@ -279,17 +278,20 @@ RUN git init . && git remote add origin "https://github.com/opencontainers/runc.
# that is used. If you need to update runc, open a pull request in the containerd
# project first, and update both after that is merged. When updating RUNC_VERSION,
# consider updating runc in vendor.mod accordingly.
ARG RUNC_VERSION=v1.1.9
ARG RUNC_VERSION=v1.1.12
RUN git fetch -q --depth 1 origin "${RUNC_VERSION}" +refs/tags/*:refs/tags/* && git checkout -q FETCH_HEAD
FROM base AS runc-build
WORKDIR /go/src/github.com/opencontainers/runc
ARG DEBIAN_FRONTEND
ARG TARGETPLATFORM
RUN --mount=type=cache,sharing=locked,id=moby-runc-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-runc-aptcache,target=/var/cache/apt \
apt-get update && xx-apt-get install -y --no-install-recommends \
dpkg-dev gcc libc6-dev libseccomp-dev
dpkg-dev \
gcc \
libc6-dev \
libseccomp-dev \
pkg-config
ARG DOCKER_STATIC
RUN --mount=from=runc-src,src=/usr/src/runc,rw \
--mount=type=cache,target=/root/.cache/go-build,id=runc-build-$TARGETPLATFORM <<EOT
@@ -316,7 +318,6 @@ RUN git fetch -q --depth 1 origin "${TINI_VERSION}" +refs/tags/*:refs/tags/* &&
FROM base AS tini-build
WORKDIR /go/src/github.com/krallin/tini
ARG DEBIAN_FRONTEND
RUN --mount=type=cache,sharing=locked,id=moby-tini-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-tini-aptcache,target=/var/cache/apt \
apt-get update && apt-get install -y --no-install-recommends cmake
@@ -324,7 +325,9 @@ ARG TARGETPLATFORM
RUN --mount=type=cache,sharing=locked,id=moby-tini-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-tini-aptcache,target=/var/cache/apt \
xx-apt-get install -y --no-install-recommends \
gcc libc6-dev
gcc \
libc6-dev \
pkg-config
RUN --mount=from=tini-src,src=/usr/src/tini,rw \
--mount=type=cache,target=/root/.cache/go-build,id=tini-build-$TARGETPLATFORM <<EOT
set -e
@@ -349,12 +352,13 @@ RUN git fetch -q --depth 1 origin "${ROOTLESSKIT_VERSION}" +refs/tags/*:refs/tag
FROM base AS rootlesskit-build
WORKDIR /go/src/github.com/rootless-containers/rootlesskit
ARG DEBIAN_FRONTEND
ARG TARGETPLATFORM
RUN --mount=type=cache,sharing=locked,id=moby-rootlesskit-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-rootlesskit-aptcache,target=/var/cache/apt \
apt-get update && xx-apt-get install -y --no-install-recommends \
gcc libc6-dev
gcc \
libc6-dev \
pkg-config
ENV GO111MODULE=on
ARG DOCKER_STATIC
RUN --mount=from=rootlesskit-src,src=/usr/src/rootlesskit,rw \
@@ -422,7 +426,11 @@ RUN git fetch -q --depth 1 origin "${CONTAINERUTILITY_VERSION}" +refs/tags/*:ref
FROM base AS containerutil-build
WORKDIR /usr/src/containerutil
ARG TARGETPLATFORM
RUN xx-apt-get install -y --no-install-recommends gcc g++ libc6-dev
RUN xx-apt-get install -y --no-install-recommends \
gcc \
g++ \
libc6-dev \
pkg-config
RUN --mount=from=containerutil-src,src=/usr/src/containerutil,rw \
--mount=type=cache,target=/root/.cache/go-build,id=containerutil-build-$TARGETPLATFORM <<EOT
set -e
@@ -477,13 +485,9 @@ RUN --mount=type=cache,sharing=locked,id=moby-dev-aptlib,target=/var/lib/apt \
dbus-user-session \
systemd \
systemd-sysv
RUN mkdir -p hack \
&& curl -o hack/dind-systemd https://raw.githubusercontent.com/AkihiroSuda/containerized-systemd/b70bac0daeea120456764248164c21684ade7d0d/docker-entrypoint.sh \
&& chmod +x hack/dind-systemd
ENTRYPOINT ["hack/dind-systemd"]
FROM dev-systemd-${SYSTEMD} AS dev-base
ARG DEBIAN_FRONTEND
RUN groupadd -r docker
RUN useradd --create-home --gid docker unprivilegeduser \
&& mkdir -p /home/unprivilegeduser/.local/share/docker \
@@ -517,9 +521,6 @@ RUN --mount=type=cache,sharing=locked,id=moby-dev-aptlib,target=/var/lib/apt \
net-tools \
patch \
pigz \
python3-pip \
python3-setuptools \
python3-wheel \
sudo \
thin-provisioning-tools \
uidmap \
@@ -534,8 +535,6 @@ RUN --mount=type=cache,sharing=locked,id=moby-dev-aptlib,target=/var/lib/apt \
RUN update-alternatives --set iptables /usr/sbin/iptables-legacy || true \
&& update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy || true \
&& update-alternatives --set arptables /usr/sbin/arptables-legacy || true
ARG YAMLLINT_VERSION=1.27.1
RUN pip3 install yamllint==${YAMLLINT_VERSION}
RUN --mount=type=cache,sharing=locked,id=moby-dev-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-dev-aptcache,target=/var/cache/apt \
apt-get update && apt-get install --no-install-recommends -y \
@@ -547,14 +546,14 @@ RUN --mount=type=cache,sharing=locked,id=moby-dev-aptlib,target=/var/lib/apt \
libseccomp-dev \
libsecret-1-dev \
libsystemd-dev \
libudev-dev
libudev-dev \
yamllint
FROM base AS build
COPY --from=gowinres /build/ /usr/local/bin/
WORKDIR /go/src/github.com/docker/docker
ENV GO111MODULE=off
ENV CGO_ENABLED=1
ARG DEBIAN_FRONTEND
RUN --mount=type=cache,sharing=locked,id=moby-build-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-build-aptcache,target=/var/cache/apt \
apt-get update && apt-get install --no-install-recommends -y \
@@ -573,7 +572,8 @@ RUN --mount=type=cache,sharing=locked,id=moby-build-aptlib,target=/var/lib/apt \
libseccomp-dev \
libsecret-1-dev \
libsystemd-dev \
libudev-dev
libudev-dev \
pkg-config
ARG DOCKER_BUILDTAGS
ARG DOCKER_DEBUG
ARG DOCKER_GITCOMMIT=HEAD
@@ -630,7 +630,7 @@ COPY --from=build /build /
# smoke tests
# usage:
# > docker buildx bake binary-smoketest
FROM --platform=$TARGETPLATFORM base AS smoketest
FROM base AS smoketest
WORKDIR /usr/local/bin
COPY --from=build /build .
RUN <<EOT

View File

@@ -1,4 +1,4 @@
ARG GO_VERSION=1.20.5
ARG GO_VERSION=1.23.9
FROM golang:${GO_VERSION}-alpine AS base
ENV GO111MODULE=off

View File

@@ -5,17 +5,18 @@
# This represents the bare minimum required to build and test Docker.
ARG GO_VERSION=1.20.10
ARG GO_VERSION=1.23.9
ARG BASE_DEBIAN_DISTRO="bullseye"
ARG BASE_DEBIAN_DISTRO="bookworm"
ARG GOLANG_IMAGE="golang:${GO_VERSION}-${BASE_DEBIAN_DISTRO}"
FROM ${GOLANG_IMAGE}
ENV GO111MODULE=off
ENV GOTOOLCHAIN=local
# allow replacing httpredir or deb mirror
ARG APT_MIRROR=deb.debian.org
RUN sed -ri "s/(httpredir|deb).debian.org/$APT_MIRROR/g" /etc/apt/sources.list
# allow replacing debian mirror
ARG APT_MIRROR
RUN test -n "$APT_MIRROR" && sed -ri "s#(httpredir|deb|security).debian.org#${APT_MIRROR}#g" /etc/apt/sources.list.d/debian.sources || true
# Compile and runtime deps
# https://github.com/docker/docker/blob/master/project/PACKAGERS.md#build-dependencies

View File

@@ -165,10 +165,10 @@ FROM microsoft/windowsservercore
# Use PowerShell as the default shell
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';"]
ARG GO_VERSION=1.20.10
ARG GO_VERSION=1.23.9
ARG GOTESTSUM_VERSION=v1.8.2
ARG GOWINRES_VERSION=v0.3.0
ARG CONTAINERD_VERSION=v1.6.22
ARG CONTAINERD_VERSION=v1.6.38
# Environment variable notes:
# - GO_VERSION must be consistent with 'Dockerfile' used by Linux.
@@ -179,6 +179,7 @@ ENV GO_VERSION=${GO_VERSION} `
GIT_VERSION=2.11.1 `
GOPATH=C:\gopath `
GO111MODULE=off `
GOTOOLCHAIN=local `
FROM_DOCKERFILE=1 `
GOTESTSUM_VERSION=${GOTESTSUM_VERSION} `
GOWINRES_VERSION=${GOWINRES_VERSION}
@@ -258,14 +259,11 @@ RUN `
Remove-Item C:\gitsetup.zip; `
`
Write-Host INFO: Downloading containerd; `
Install-Package -Force 7Zip4PowerShell; `
$location='https://github.com/containerd/containerd/releases/download/'+$Env:CONTAINERD_VERSION+'/containerd-'+$Env:CONTAINERD_VERSION.TrimStart('v')+'-windows-amd64.tar.gz'; `
Download-File $location C:\containerd.tar.gz; `
New-Item -Path C:\containerd -ItemType Directory; `
Expand-7Zip C:\containerd.tar.gz C:\; `
Expand-7Zip C:\containerd.tar C:\containerd; `
tar -xzf C:\containerd.tar.gz -C C:\containerd; `
Remove-Item C:\containerd.tar.gz; `
Remove-Item C:\containerd.tar; `
`
# Ensure all directories exist that we will require below....
$srcDir = """$Env:GOPATH`\src\github.com\docker\docker\bundles"""; `

Jenkinsfile vendored
View File

@@ -60,6 +60,15 @@ pipeline {
}
stages {
stage("Load kernel modules") {
steps {
sh '''
sudo modprobe ip6table_filter
sudo modprobe -va br_netfilter
sudo systemctl restart docker.service
'''
}
}
stage("Print info") {
steps {
sh 'docker version'
@@ -78,9 +87,6 @@ pipeline {
}
stage("Unit tests") {
steps {
sh '''
sudo modprobe ip6table_filter
'''
sh '''
docker run --rm -t --privileged \
-v "$WORKSPACE/bundles:/go/src/github.com/docker/docker/bundles" \

View File

@@ -49,7 +49,7 @@ func (s *imageRouter) postImagesCreate(ctx context.Context, w http.ResponseWrite
if p := r.FormValue("platform"); p != "" {
sp, err := platforms.Parse(p)
if err != nil {
return err
return errdefs.InvalidParameter(err)
}
platform = &sp
}
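
The same errdefs.InvalidParameter wrapping recurs in several hunks below (builder-next, the classic builder, and the Windows NT-account lookup). A minimal sketch of the intent, using only the public platforms and errdefs packages; the helper name and sample string are illustrative:

```go
package main

import (
	"fmt"

	"github.com/containerd/containerd/platforms"
	"github.com/docker/docker/errdefs"
)

// parsePlatform mirrors the pattern above: a platform string supplied by the
// API caller that fails to parse is classified as an invalid parameter
// rather than bubbling up as an internal error.
func parsePlatform(p string) error {
	if _, err := platforms.Parse(p); err != nil {
		return errdefs.InvalidParameter(err)
	}
	return nil
}

func main() {
	err := parsePlatform("not/a/valid/platform/string")
	fmt.Println(err)
	// The HTTP layer maps this error class to 400 Bad Request instead of 500.
	fmt.Println(errdefs.IsInvalidParameter(err))
}
```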

View File

@@ -22,6 +22,9 @@ func (s *snapshotter) GetDiffIDs(ctx context.Context, key string) ([]layer.DiffI
}
func (s *snapshotter) EnsureLayer(ctx context.Context, key string) ([]layer.DiffID, error) {
s.layerCreateLocker.Lock(key)
defer s.layerCreateLocker.Unlock(key)
diffIDs, err := s.GetDiffIDs(ctx, key)
if err != nil {
return nil, err

View File

@@ -16,6 +16,7 @@ import (
"github.com/docker/docker/pkg/idtools"
"github.com/moby/buildkit/identity"
"github.com/moby/buildkit/snapshot"
"github.com/moby/locker"
"github.com/opencontainers/go-digest"
"github.com/pkg/errors"
bolt "go.etcd.io/bbolt"
@@ -48,10 +49,11 @@ type checksumCalculator interface {
type snapshotter struct {
opt Opt
refs map[string]layer.Layer
db *bolt.DB
mu sync.Mutex
reg graphIDRegistrar
refs map[string]layer.Layer
db *bolt.DB
mu sync.Mutex
reg graphIDRegistrar
layerCreateLocker *locker.Locker
}
// NewSnapshotter creates a new snapshotter
@@ -68,10 +70,11 @@ func NewSnapshotter(opt Opt, prevLM leases.Manager) (snapshot.Snapshotter, lease
}
s := &snapshotter{
opt: opt,
db: db,
refs: map[string]layer.Layer{},
reg: reg,
opt: opt,
db: db,
refs: map[string]layer.Layer{},
reg: reg,
layerCreateLocker: locker.New(),
}
lm := newLeaseManager(s, prevLM)
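
The layerCreateLocker added above serializes EnsureLayer calls that target the same cache key, so concurrent builds cannot race to create the same layer. A self-contained sketch of the per-key locking pattern with github.com/moby/locker; the key and messages are illustrative:

```go
package main

import (
	"fmt"
	"sync"

	"github.com/moby/locker"
)

func main() {
	keyLocks := locker.New()

	var wg sync.WaitGroup
	for i := 0; i < 3; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			// All three goroutines use the same key, so the critical section
			// runs one at a time; distinct keys would not block each other.
			keyLocks.Lock("layer-sha256-abc")
			defer keyLocks.Unlock("layer-sha256-abc")
			fmt.Println("ensuring layer, goroutine", i)
		}(i)
	}
	wg.Wait()
}
```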

View File

@@ -18,6 +18,7 @@ import (
containerimageexp "github.com/docker/docker/builder/builder-next/exporter"
"github.com/docker/docker/daemon/config"
"github.com/docker/docker/daemon/images"
"github.com/docker/docker/errdefs"
"github.com/docker/docker/libnetwork"
"github.com/docker/docker/opts"
"github.com/docker/docker/pkg/idtools"
@@ -304,7 +305,7 @@ func (b *Builder) Build(ctx context.Context, opt backend.BuildConfig) (*builder.
// TODO: remove once opt.Options.Platform is of type specs.Platform
_, err := platforms.Parse(opt.Options.Platform)
if err != nil {
return nil, err
return nil, errdefs.InvalidParameter(err)
}
frontendAttrs["platform"] = opt.Options.Platform
}

View File

@@ -49,6 +49,10 @@ func patchImageConfig(dt []byte, dps []digest.Digest, history []ocispec.History,
return nil, errors.Wrap(err, "failed to parse image config for patch")
}
if m == nil {
return nil, errors.New("null image config")
}
var rootFS ocispec.RootFS
rootFS.Type = "layers"
rootFS.DiffIDs = append(rootFS.DiffIDs, dps...)

View File

@@ -0,0 +1,42 @@
package containerimage
import (
"testing"
"gotest.tools/v3/assert"
)
func TestPatchImageConfig(t *testing.T) {
for _, tc := range []struct {
name string
cfgJSON string
err string
}{
{
name: "empty",
cfgJSON: "{}",
},
{
name: "history only",
cfgJSON: `{"history": []}`,
},
{
name: "rootfs only",
cfgJSON: `{"rootfs": {}}`,
},
{
name: "null",
cfgJSON: "null",
err: "null image config",
},
} {
t.Run(tc.name, func(t *testing.T) {
_, err := patchImageConfig([]byte(tc.cfgJSON), nil, nil, nil, nil)
if tc.err == "" {
assert.NilError(t, err)
} else {
assert.ErrorContains(t, err, tc.err)
}
})
}
}

View File

@@ -2,6 +2,7 @@ package worker
import (
"math"
"time"
"github.com/moby/buildkit/client"
)
@@ -30,12 +31,12 @@ func DefaultGCPolicy(p string, defaultKeepBytes int64) []client.PruneInfo {
// if build cache uses more than 512MB delete the most easily reproducible data after it has not been used for 2 days
{
Filter: []string{"type==source.local,type==exec.cachemount,type==source.git.checkout"},
KeepDuration: 48 * 3600, // 48h
KeepDuration: 48 * time.Hour,
KeepBytes: tempCacheKeepBytes,
},
// remove any data not used for 60 days
{
KeepDuration: 60 * 24 * 3600, // 60d
KeepDuration: 60 * 24 * time.Hour,
KeepBytes: keep,
},
// keep the unshared build cache under cap
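
The reason for swapping the literals: client.PruneInfo's KeepDuration field is a time.Duration, which counts nanoseconds, so the old bare integers amounted to microseconds rather than hours or days. A quick illustration:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// The old untyped literals were assigned to a time.Duration field,
	// i.e. interpreted as nanoseconds:
	oldTemp := time.Duration(48 * 3600)     // 172.8µs, not 48 hours
	oldMax := time.Duration(60 * 24 * 3600) // ~5.2ms, not 60 days
	fmt.Println(oldTemp, oldMax)

	// The corrected values:
	fmt.Println(48*time.Hour, 60*24*time.Hour) // 48h0m0s 1440h0m0s
}
```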

View File

@@ -15,6 +15,7 @@ import (
"github.com/docker/docker/image"
"github.com/docker/docker/layer"
"github.com/docker/docker/pkg/containerfs"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
)
const (
@@ -89,7 +90,7 @@ type ImageCacheBuilder interface {
type ImageCache interface {
// GetCache returns a reference to a cached image whose parent equals `parent`
// and runconfig equals `cfg`. A cache miss is expected to return an empty ID and a nil error.
GetCache(parentID string, cfg *container.Config) (imageID string, err error)
GetCache(parentID string, cfg *container.Config, platform ocispec.Platform) (imageID string, err error)
}
// Image represents a Docker image used by the builder.

View File

@@ -156,7 +156,7 @@ func newBuilder(clientCtx context.Context, options builderOptions) (*Builder, er
if config.Platform != "" {
sp, err := platforms.Parse(config.Platform)
if err != nil {
return nil, err
return nil, errdefs.InvalidParameter(err)
}
b.platform = &sp
}

View File

@@ -9,7 +9,6 @@ import (
"net/url"
"os"
"path/filepath"
"runtime"
"sort"
"strings"
"time"
@@ -25,7 +24,7 @@ import (
"github.com/docker/docker/pkg/streamformatter"
"github.com/docker/docker/pkg/system"
"github.com/moby/buildkit/frontend/dockerfile/instructions"
specs "github.com/opencontainers/image-spec/specs-go/v1"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
)
@@ -75,7 +74,7 @@ type copier struct {
source builder.Source
pathCache pathCache
download sourceDownloader
platform *specs.Platform
platform ocispec.Platform
// for cleanup. TODO: having copier.cleanup() is error prone and hard to
// follow. Code calling performCopy should manage the lifecycle of its params.
// Copier should take override source as input, not imageMount.
@@ -84,19 +83,7 @@ type copier struct {
}
func copierFromDispatchRequest(req dispatchRequest, download sourceDownloader, imageSource *imageMount) copier {
platform := req.builder.platform
if platform == nil {
// May be nil if not explicitly set in API/dockerfile
platform = &specs.Platform{}
}
if platform.OS == "" {
// Default to the dispatch requests operating system if not explicit in API/dockerfile
platform.OS = req.state.operatingSystem
}
if platform.OS == "" {
// This is a failsafe just in case. Shouldn't be hit.
platform.OS = runtime.GOOS
}
platform := req.builder.getPlatform(req.state)
return copier{
source: req.source,

View File

@@ -164,17 +164,17 @@ func initializeStage(d dispatchRequest, cmd *instructions.Stage) error {
p, err := platforms.Parse(v)
if err != nil {
return errors.Wrapf(err, "failed to parse platform %s", v)
return errors.Wrapf(errdefs.InvalidParameter(err), "failed to parse platform %s", v)
}
platform = &p
}
image, err := d.getFromImage(d.shlex, cmd.BaseName, platform)
img, err := d.getFromImage(d.shlex, cmd.BaseName, platform)
if err != nil {
return err
}
state := d.state
if err := state.beginStage(cmd.Name, image); err != nil {
if err := state.beginStage(cmd.Name, img); err != nil {
return err
}
if len(state.runConfig.OnBuild) > 0 {
@@ -345,9 +345,16 @@ func dispatchRun(d dispatchRequest, c *instructions.RunCommand) error {
saveCmd = prependEnvOnCmd(d.state.buildArgs, buildArgs, cmdFromArgs)
}
cacheArgsEscaped := argsEscaped
// ArgsEscaped is not persisted in the committed image on Windows.
// Use the original from previous build steps for cache probing.
if d.state.operatingSystem == "windows" {
cacheArgsEscaped = stateRunConfig.ArgsEscaped
}
runConfigForCacheProbe := copyRunConfig(stateRunConfig,
withCmd(saveCmd),
withArgsEscaped(argsEscaped),
withArgsEscaped(cacheArgsEscaped),
withEntrypointOverride(saveCmd, nil))
if hit, err := d.builder.probeCache(d.state, runConfigForCacheProbe); err != nil || hit {
return err

View File

@@ -3,6 +3,7 @@ package dockerfile // import "github.com/docker/docker/builder/dockerfile"
import (
"github.com/docker/docker/api/types/container"
"github.com/docker/docker/builder"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/sirupsen/logrus"
)
@@ -10,7 +11,7 @@ import (
// cache.
type ImageProber interface {
Reset()
Probe(parentID string, runConfig *container.Config) (string, error)
Probe(parentID string, runConfig *container.Config, platform ocispec.Platform) (string, error)
}
type imageProber struct {
@@ -37,11 +38,11 @@ func (c *imageProber) Reset() {
// Probe checks if cache match can be found for current build instruction.
// It returns the cachedID if there is a hit, and the empty string on miss
func (c *imageProber) Probe(parentID string, runConfig *container.Config) (string, error) {
func (c *imageProber) Probe(parentID string, runConfig *container.Config, platform ocispec.Platform) (string, error) {
if c.cacheBusted {
return "", nil
}
cacheID, err := c.cache.GetCache(parentID, runConfig)
cacheID, err := c.cache.GetCache(parentID, runConfig, platform)
if err != nil {
return "", err
}
@@ -58,6 +59,6 @@ type nopProber struct{}
func (c *nopProber) Reset() {}
func (c *nopProber) Probe(_ string, _ *container.Config) (string, error) {
func (c *nopProber) Probe(_ string, _ *container.Config, _ ocispec.Platform) (string, error) {
return "", nil
}

View File

@@ -10,6 +10,7 @@ import (
"io"
"strings"
"github.com/containerd/containerd/platforms"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/backend"
"github.com/docker/docker/api/types/container"
@@ -364,7 +365,7 @@ func getShell(c *container.Config, os string) []string {
}
func (b *Builder) probeCache(dispatchState *dispatchState, runConfig *container.Config) (bool, error) {
cachedID, err := b.imageProber.Probe(dispatchState.imageID, runConfig)
cachedID, err := b.imageProber.Probe(dispatchState.imageID, runConfig, b.getPlatform(dispatchState))
if cachedID == "" || err != nil {
return false, err
}
@@ -424,3 +425,17 @@ func hostConfigFromOptions(options *types.ImageBuildOptions) *container.HostConf
}
return hc
}
func (b *Builder) getPlatform(state *dispatchState) specs.Platform {
// May be nil if not explicitly set in API/dockerfile
out := platforms.DefaultSpec()
if b.platform != nil {
out = *b.platform
}
if state.operatingSystem != "" {
out.OS = state.operatingSystem
}
return out
}
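
A standalone sketch of the precedence getPlatform encodes (the free function and sample values here are illustrative): start from the daemon's default platform, let an explicit --platform override it, then let the current stage's operating system win.

```go
package main

import (
	"fmt"

	"github.com/containerd/containerd/platforms"
	ocispec "github.com/opencontainers/image-spec/specs-go/v1"
)

func effectivePlatform(override *ocispec.Platform, stageOS string) ocispec.Platform {
	out := platforms.DefaultSpec() // baseline: the daemon's own platform
	if override != nil {
		out = *override // explicit --platform from the API or Dockerfile
	}
	if stageOS != "" {
		out.OS = stageOS // the current stage's OS takes precedence
	}
	return out
}

func main() {
	// No --platform given, Windows stage: the OS follows the stage.
	fmt.Println(platforms.Format(effectivePlatform(nil, "windows")))
	// Explicit linux/arm64 request is kept.
	fmt.Println(platforms.Format(effectivePlatform(&ocispec.Platform{OS: "linux", Architecture: "arm64"}, "linux")))
}
```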

View File

@@ -9,6 +9,7 @@ import (
"github.com/containerd/containerd/platforms"
"github.com/docker/docker/api/types/container"
"github.com/docker/docker/api/types/mount"
"github.com/docker/docker/errdefs"
"github.com/docker/docker/pkg/idtools"
"github.com/docker/docker/pkg/jsonmessage"
"golang.org/x/sys/windows"
@@ -63,7 +64,7 @@ func lookupNTAccount(builder *Builder, accountName string, state *dispatchState)
optionsPlatform, err := platforms.Parse(builder.options.Platform)
if err != nil {
return idtools.Identity{}, err
return idtools.Identity{}, errdefs.InvalidParameter(err)
}
runConfig := copyRunConfig(state.runConfig,

View File

@@ -14,6 +14,7 @@ import (
"github.com/docker/docker/image"
"github.com/docker/docker/layer"
"github.com/docker/docker/pkg/containerfs"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
)
// MockBackend implements the builder.Backend interface for unit testing
@@ -111,7 +112,7 @@ type mockImageCache struct {
getCacheFunc func(parentID string, cfg *container.Config) (string, error)
}
func (mic *mockImageCache) GetCache(parentID string, cfg *container.Config) (string, error) {
func (mic *mockImageCache) GetCache(parentID string, cfg *container.Config, _ ocispec.Platform) (string, error) {
if mic.getCacheFunc != nil {
return mic.getCacheFunc(parentID, cfg)
}

View File

@@ -44,8 +44,8 @@ func downloadRemote(remoteURL string) (string, io.ReadCloser, error) {
// GetWithStatusError does an http.Get() and returns an error if the
// status code is 4xx or 5xx.
func GetWithStatusError(address string) (resp *http.Response, err error) {
// #nosec G107
if resp, err = http.Get(address); err != nil {
resp, err = http.Get(address) // #nosec G107 -- ignore G107: Potential HTTP request made with variable url
if err != nil {
if uerr, ok := err.(*url.Error); ok {
if derr, ok := uerr.Err.(*net.DNSError); ok && !derr.IsTimeout {
return nil, errdefs.NotFound(err)

View File

@@ -1,7 +1,7 @@
## Generate `event_messages.bin`
```console
$ docker run --rm -it -v "$(pwd):/winresources" debian:bullseye bash
$ docker run --rm -it -v "$(pwd):/winresources" debian:bookworm-slim bash
root@9ad2260f6ebc:/# apt-get update -y && apt-get install -y binutils-mingw-w64-x86-64
root@9ad2260f6ebc:/# x86_64-w64-mingw32-windmc -v /winresources/event_messages.mc
root@9ad2260f6ebc:/# mv MSG00001.bin /winresources/event_messages.bin

View File

@@ -66,8 +66,12 @@ func (cli *Client) ContainerWait(ctx context.Context, containerID string, condit
//
// If there's a JSON parsing error, read the real error message
// off the body and send it to the client.
_, _ = io.ReadAll(io.LimitReader(stream, containerWaitErrorMsgLimit))
errC <- errors.New(responseText.String())
if errors.As(err, new(*json.SyntaxError)) {
_, _ = io.ReadAll(io.LimitReader(stream, containerWaitErrorMsgLimit))
errC <- errors.New(responseText.String())
} else {
errC <- err
}
return
}
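
A minimal sketch of the new error discrimination (the helper and sample inputs are illustrative): only a *json.SyntaxError means the daemon returned a non-JSON body worth reading back; any other decode failure, such as a cancelled context or a reset connection, is forwarded unchanged.

```go
package main

import (
	"encoding/json"
	"errors"
	"fmt"
	"io"
	"strings"
)

func decodeWaitBody(r io.Reader) error {
	var body struct{ StatusCode int64 }
	if err := json.NewDecoder(r).Decode(&body); err != nil {
		if errors.As(err, new(*json.SyntaxError)) {
			// Not JSON at all: surface the raw body text instead of the
			// decoder error.
			return errors.New("daemon returned a non-JSON body")
		}
		return err // context/transport error, passed through unchanged
	}
	return nil
}

func main() {
	fmt.Println(decodeWaitBody(strings.NewReader("{]")))               // non-JSON body
	fmt.Println(decodeWaitBody(strings.NewReader(`{"StatusCode":0}`))) // <nil>
}
```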

View File

@@ -9,11 +9,14 @@ import (
"log"
"net/http"
"strings"
"syscall"
"testing"
"testing/iotest"
"time"
"github.com/docker/docker/api/types/container"
"github.com/docker/docker/errdefs"
"github.com/pkg/errors"
)
func TestContainerWaitError(t *testing.T) {
@@ -117,6 +120,46 @@ func TestContainerWaitProxyInterruptLong(t *testing.T) {
}
}
func TestContainerWaitErrorHandling(t *testing.T) {
for _, test := range []struct {
name string
rdr io.Reader
exp error
}{
{name: "invalid json", rdr: strings.NewReader(`{]`), exp: errors.New("{]")},
{name: "context canceled", rdr: iotest.ErrReader(context.Canceled), exp: context.Canceled},
{name: "context deadline exceeded", rdr: iotest.ErrReader(context.DeadlineExceeded), exp: context.DeadlineExceeded},
{name: "connection reset", rdr: iotest.ErrReader(syscall.ECONNRESET), exp: syscall.ECONNRESET},
} {
t.Run(test.name, func(t *testing.T) {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
client := &Client{
version: "1.30",
client: newMockClient(func(req *http.Request) (*http.Response, error) {
return &http.Response{
StatusCode: http.StatusOK,
Body: io.NopCloser(test.rdr),
}, nil
}),
}
resultC, errC := client.ContainerWait(ctx, "container_id", "")
select {
case err := <-errC:
if err.Error() != test.exp.Error() {
t.Fatalf("ContainerWait() errC = %v; want %v", err, test.exp)
}
return
case result := <-resultC:
t.Fatalf("expected to not get a wait result, got %d", result.StatusCode)
return
}
// Unexpected - we should not reach this line
})
}
}
func ExampleClient_ContainerWait_withTimeout() {
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()

View File

@@ -2,6 +2,7 @@ package stream // import "github.com/docker/docker/container/stream"
import (
"context"
"errors"
"fmt"
"io"
"strings"
@@ -91,24 +92,24 @@ func (c *Config) NewNopInputPipe() {
// CloseStreams ensures that the configured streams are properly closed.
func (c *Config) CloseStreams() error {
var errors []string
var errs []string
if c.stdin != nil {
if err := c.stdin.Close(); err != nil {
errors = append(errors, fmt.Sprintf("error close stdin: %s", err))
errs = append(errs, fmt.Sprintf("error close stdin: %s", err))
}
}
if err := c.stdout.Clean(); err != nil {
errors = append(errors, fmt.Sprintf("error close stdout: %s", err))
errs = append(errs, fmt.Sprintf("error close stdout: %s", err))
}
if err := c.stderr.Clean(); err != nil {
errors = append(errors, fmt.Sprintf("error close stderr: %s", err))
errs = append(errs, fmt.Sprintf("error close stderr: %s", err))
}
if len(errors) > 0 {
return fmt.Errorf(strings.Join(errors, "\n"))
if len(errs) > 0 {
return errors.New(strings.Join(errs, "\n"))
}
return nil
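
Beyond un-shadowing the errors package, switching from fmt.Errorf to errors.New stops treating the joined messages as a format string: a '%' inside an underlying error would be parsed as a verb and mangle the output, and static analysis flags the non-constant format. A small illustration with a made-up message:

```go
package main

import (
	"errors"
	"fmt"
)

func main() {
	msg := "error close stdout: device 90% full"
	// fmt.Errorf parses '%' as a formatting verb with no operand, so the
	// text comes out corrupted (vet-style checks also flag the
	// non-constant format string).
	fmt.Println(fmt.Errorf(msg))
	// errors.New keeps the message verbatim.
	fmt.Println(errors.New(msg))
}
```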

View File

@@ -59,30 +59,11 @@ fetch_blob() {
shift
local curlArgs=("$@")
local curlHeaders
curlHeaders="$(
curl -S "${curlArgs[@]}" \
-H "Authorization: Bearer $token" \
"$registryBase/v2/$image/blobs/$digest" \
-o "$targetFile" \
-D-
)"
curlHeaders="$(echo "$curlHeaders" | tr -d '\r')"
if grep -qE "^HTTP/[0-9].[0-9] 3" <<< "$curlHeaders"; then
rm -f "$targetFile"
local blobRedirect
blobRedirect="$(echo "$curlHeaders" | awk -F ': ' 'tolower($1) == "location" { print $2; exit }')"
if [ -z "$blobRedirect" ]; then
echo >&2 "error: failed fetching '$image' blob '$digest'"
echo "$curlHeaders" | head -1 >&2
return 1
fi
curl -fSL "${curlArgs[@]}" \
"$blobRedirect" \
-o "$targetFile"
fi
curl -L -S "${curlArgs[@]}" \
-H "Authorization: Bearer $token" \
"$registryBase/v2/$image/blobs/$digest" \
-o "$targetFile" \
-D-
}
# handle 'application/vnd.docker.distribution.manifest.v2+json' manifest

View File

@@ -1,4 +1,4 @@
FROM debian:bullseye-slim
FROM debian:bookworm-slim
RUN apt-get update && apt-get install -y gcc libc6-dev --no-install-recommends
COPY . /usr/src/

View File

@@ -1,4 +1,4 @@
FROM debian:bullseye-slim
FROM debian:bookworm-slim
RUN apt-get update && apt-get install -y gcc libc6-dev --no-install-recommends
COPY . /usr/src/

View File

@@ -279,6 +279,19 @@ func (n *nodeRunner) handleNodeExit(node *swarmnode.Node) {
close(n.done)
select {
case <-n.ready:
// there is a case where a node can be promoted to manager while
// another node is leaving the cluster. the node being promoted, by
// random chance, picks the IP of the node being demoted as the one it
// tries to connect to. in this case, the promotion will fail, and the
// whole swarm Node object packs it in.
//
// when the Node object is relaunched by this code, because it has
// joinAddr in the config, it attempts again to connect to the same
// no-longer-manager node, and crashes again. this continues forever.
//
// to avoid this case, in this block, we remove JoinAddr from the
// config.
n.config.joinAddr = ""
n.enableReconnectWatcher()
default:
if n.repeatedRun {

View File

@@ -384,6 +384,7 @@ func serviceDiscoveryOnDefaultNetwork() bool {
func (daemon *Daemon) setupPathsAndSandboxOptions(container *container.Container, sboxOptions *[]libnetwork.SandboxOption) error {
var err error
var originResolvConfPath string
// Set the correct paths for /etc/hosts and /etc/resolv.conf, based on the
// networking-mode of the container. Note that containers with "container"
@@ -397,8 +398,8 @@ func (daemon *Daemon) setupPathsAndSandboxOptions(container *container.Container
*sboxOptions = append(
*sboxOptions,
libnetwork.OptionOriginHostsPath("/etc/hosts"),
libnetwork.OptionOriginResolvConfPath("/etc/resolv.conf"),
)
originResolvConfPath = "/etc/resolv.conf"
case container.HostConfig.NetworkMode.IsUserDefined():
// The container uses a user-defined network. We use the embedded DNS
// server for container name resolution and to act as a DNS forwarder
@@ -411,10 +412,7 @@ func (daemon *Daemon) setupPathsAndSandboxOptions(container *container.Container
// If systemd-resolvd is used, the "upstream" DNS servers can be found in
// /run/systemd/resolve/resolv.conf. We do not query those DNS servers
// directly, as they can be dynamically reconfigured.
*sboxOptions = append(
*sboxOptions,
libnetwork.OptionOriginResolvConfPath("/etc/resolv.conf"),
)
originResolvConfPath = "/etc/resolv.conf"
default:
// For other situations, such as the default bridge network, container
// discovery / name resolution is handled through /etc/hosts, and no
@@ -427,12 +425,16 @@ func (daemon *Daemon) setupPathsAndSandboxOptions(container *container.Container
// DNS servers on the host can be dynamically updated.
//
// Copy the host's resolv.conf for the container (/run/systemd/resolve/resolv.conf or /etc/resolv.conf)
*sboxOptions = append(
*sboxOptions,
libnetwork.OptionOriginResolvConfPath(daemon.configStore.GetResolvConf()),
)
originResolvConfPath = daemon.configStore.GetResolvConf()
}
// Allow tests to point at their own resolv.conf file.
if envPath := os.Getenv("DOCKER_TEST_RESOLV_CONF_PATH"); envPath != "" {
logrus.Infof("Using OriginResolvConfPath from env: %s", envPath)
originResolvConfPath = envPath
}
*sboxOptions = append(*sboxOptions, libnetwork.OptionOriginResolvConfPath(originResolvConfPath))
container.HostsPath, err = container.GetRootResourcePath("hosts")
if err != nil {
return err

View File

@@ -101,8 +101,8 @@ func getMemoryResources(config containertypes.Resources) *specs.LinuxMemory {
memory.DisableOOMKiller = config.OomKillDisable
}
if config.KernelMemory != 0 {
memory.Kernel = &config.KernelMemory
if config.KernelMemory != 0 { //nolint:staticcheck // ignore SA1019: memory.Kernel is deprecated: kernel-memory limits are not supported in cgroups v2, and were obsoleted in [kernel v5.4]. This field should no longer be used, as it may be ignored by runtimes.
memory.Kernel = &config.KernelMemory //nolint:staticcheck // ignore SA1019: memory.Kernel is deprecated: kernel-memory limits are not supported in cgroups v2, and were obsoleted in [kernel v5.4]. This field should no longer be used, as it may be ignored by runtimes.
}
if config.KernelMemoryTCP != 0 {

View File

@@ -138,7 +138,14 @@ func (daemon *Daemon) cleanupContainer(container *container.Container, config ty
container.RWLayer = nil
}
if err := containerfs.EnsureRemoveAll(container.Root); err != nil {
// Hold the container lock while deleting the container root directory
// so that other goroutines don't attempt to concurrently open files
// within it. Having any file open on Windows (without the
// FILE_SHARE_DELETE flag) will block it from being deleted.
container.Lock()
err := containerfs.EnsureRemoveAll(container.Root)
container.Unlock()
if err != nil {
err = errors.Wrapf(err, "unable to remove filesystem for %s", container.ID)
container.SetRemovalError(err)
return err

View File

@@ -412,7 +412,7 @@ func (d *Driver) create(id, parent string, opts *graphdriver.CreateOpts) (retErr
return err
}
if lower != "" {
if err := ioutils.AtomicWriteFile(path.Join(dir, lowerFile), []byte(lower), 0o666); err != nil {
if err := ioutils.AtomicWriteFile(path.Join(dir, lowerFile), []byte(lower), 0o644); err != nil {
return err
}
}

View File

@@ -250,6 +250,9 @@ func (i *ImageService) CreateImage(config []byte, parent string) (builder.Image,
return nil, errors.Wrapf(err, "failed to set parent %s", parent)
}
}
if err := i.imageStore.SetBuiltLocally(id); err != nil {
return nil, errors.Wrapf(err, "failed to mark image %s as built locally", id)
}
return i.imageStore.Get(id)
}

View File

@@ -57,6 +57,9 @@ func (i *ImageService) CommitImage(c backend.CommitConfig) (image.ID, error) {
if err != nil {
return "", err
}
if err := i.imageStore.SetBuiltLocally(id); err != nil {
return "", err
}
if c.ParentImageID != "" {
if err := i.imageStore.SetParent(id, image.ID(c.ParentImageID)); err != nil {

View File

@@ -187,7 +187,7 @@ func messageToProto(msg *logger.Message, proto *logdriver.LogEntry, partial *log
func protoToMessage(proto *logdriver.LogEntry) *logger.Message {
msg := &logger.Message{
Source: proto.Source,
Timestamp: time.Unix(0, proto.TimeNano),
Timestamp: time.Unix(0, proto.TimeNano).UTC(),
}
if proto.Partial {
var md backend.PartialLogMetaData

View File

@@ -66,7 +66,7 @@ func getTailReader(ctx context.Context, r loggerutils.SizeReaderAt, req int) (io
}
if msgLen != binary.BigEndian.Uint32(buf) {
return nil, 0, errdefs.DataLoss(errors.Wrap(err, "log message header and footer indicate different message sizes"))
return nil, 0, errdefs.DataLoss(errors.New("log message header and footer indicate different message sizes"))
}
found++
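
The rewording matters because the wrapped error is nil on this path: github.com/pkg/errors documents that Wrap returns nil for a nil error, so the old code could report no error at all for the size mismatch. A two-line check:

```go
package main

import (
	"fmt"

	"github.com/pkg/errors"
)

func main() {
	// errors.Wrap is a no-op on a nil error, which is what the old code
	// passed in on this path.
	fmt.Println(errors.Wrap(nil, "log message header and footer indicate different message sizes") == nil) // true
	// errors.New always yields a usable error value.
	fmt.Println(errors.New("log message header and footer indicate different message sizes"))
}
```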

View File

@@ -83,7 +83,7 @@ func setNvidiaGPUs(s *specs.Spec, dev *deviceInstance) error {
if s.Hooks == nil {
s.Hooks = &specs.Hooks{}
}
s.Hooks.Prestart = append(s.Hooks.Prestart, specs.Hook{
s.Hooks.Prestart = append(s.Hooks.Prestart, specs.Hook{ //nolint:staticcheck // ignore SA1019
Path: path,
Args: []string{
nvidiaHook,

View File

@@ -72,7 +72,7 @@ func WithLibnetwork(daemon *Daemon, c *container.Container) coci.SpecOpts {
if ns.Type == "network" && ns.Path == "" && !c.Config.NetworkDisabled {
target := filepath.Join("/proc", strconv.Itoa(os.Getpid()), "exe")
shortNetCtlrID := stringid.TruncateID(daemon.netController.ID())
s.Hooks.Prestart = append(s.Hooks.Prestart, specs.Hook{
s.Hooks.Prestart = append(s.Hooks.Prestart, specs.Hook{ //nolint:staticcheck // ignore SA1019
Path: target,
Args: []string{
"libnetwork-setkey",

View File

@@ -59,8 +59,8 @@ func toContainerdResources(resources container.Resources) *libcontainerdtypes.Re
if resources.MemoryReservation != 0 {
memory.Reservation = &resources.MemoryReservation
}
if resources.KernelMemory != 0 {
memory.Kernel = &resources.KernelMemory
if resources.KernelMemory != 0 { //nolint:staticcheck // ignore SA1019: memory.Kernel is deprecated: kernel-memory limits are not supported in cgroups v2, and were obsoleted in [kernel v5.4]. This field should no longer be used, as it may be ignored by runtimes.
memory.Kernel = &resources.KernelMemory //nolint:staticcheck // ignore SA1019: memory.Kernel is deprecated: kernel-memory limits are not supported in cgroups v2, and were obsoleted in [kernel v5.4]. This field should no longer be used, as it may be ignored by runtimes.
}
if resources.MemorySwap > 0 {
memory.Swap = &resources.MemorySwap

View File

@@ -11,8 +11,39 @@ set -e
# Usage: dind CMD [ARG...]
# apparmor sucks and Docker needs to know that it's in a container (c) @tianon
#
# Set the container env-var, so that AppArmor is enabled in the daemon and
# containerd when running docker-in-docker.
#
# see: https://github.com/containerd/containerd/blob/787943dc1027a67f3b52631e084db0d4a6be2ccc/pkg/apparmor/apparmor_linux.go#L29-L45
# see: https://github.com/moby/moby/commit/de191e86321f7d3136ff42ff75826b8107399497
export container=docker
# Allow AppArmor to work inside the container;
#
# aa-status
# apparmor filesystem is not mounted.
# apparmor module is loaded.
#
# mount -t securityfs none /sys/kernel/security
#
# aa-status
# apparmor module is loaded.
# 30 profiles are loaded.
# 30 profiles are in enforce mode.
# /snap/snapd/18357/usr/lib/snapd/snap-confine
# ...
#
# Note: https://0xn3va.gitbook.io/cheat-sheets/container/escaping/sensitive-mounts#sys-kernel-security
#
# ## /sys/kernel/security
#
# In /sys/kernel/security the securityfs interface is mounted, which allows
# configuration of Linux Security Modules. This allows configuration of
# AppArmor policies, and so access to this may allow a container to disable
# its MAC system.
#
# Given that we're running privileged already, this should not be an issue.
if [ -d /sys/kernel/security ] && ! mountpoint -q /sys/kernel/security; then
mount -t securityfs none /sys/kernel/security || {
echo >&2 'Could not mount /sys/kernel/security.'

hack/dind-systemd Executable file
View File

@@ -0,0 +1,100 @@
#!/bin/bash
set -e
# Set the container env-var, so that AppArmor is enabled in the daemon and
# containerd when running docker-in-docker.
#
# see: https://github.com/containerd/containerd/blob/787943dc1027a67f3b52631e084db0d4a6be2ccc/pkg/apparmor/apparmor_linux.go#L29-L45
# see: https://github.com/moby/moby/commit/de191e86321f7d3136ff42ff75826b8107399497
container=docker
export container
if [ $# -eq 0 ]; then
echo >&2 'ERROR: No command specified. You probably want to run `journalctl -f`, or maybe `bash`?'
exit 1
fi
if [ ! -t 0 ]; then
echo >&2 'ERROR: TTY needs to be enabled (`docker run -t ...`).'
exit 1
fi
# Allow AppArmor to work inside the container;
#
# aa-status
# apparmor filesystem is not mounted.
# apparmor module is loaded.
#
# mount -t securityfs none /sys/kernel/security
#
# aa-status
# apparmor module is loaded.
# 30 profiles are loaded.
# 30 profiles are in enforce mode.
# /snap/snapd/18357/usr/lib/snapd/snap-confine
# ...
#
# Note: https://0xn3va.gitbook.io/cheat-sheets/container/escaping/sensitive-mounts#sys-kernel-security
#
# ## /sys/kernel/security
#
# In /sys/kernel/security the securityfs interface is mounted, which allows
# configuration of Linux Security Modules. This allows configuration of
# AppArmor policies, and so access to this may allow a container to disable
# its MAC system.
#
# Given that we're running privileged already, this should not be an issue.
if [ -d /sys/kernel/security ] && ! mountpoint -q /sys/kernel/security; then
mount -t securityfs none /sys/kernel/security || {
echo >&2 'Could not mount /sys/kernel/security.'
echo >&2 'AppArmor detection and --privileged mode might break.'
}
fi
env > /etc/docker-entrypoint-env
cat > /etc/systemd/system/docker-entrypoint.target << EOF
[Unit]
Description=the target for docker-entrypoint.service
Requires=docker-entrypoint.service systemd-logind.service systemd-user-sessions.service
EOF
quoted_args="$(printf " %q" "${@}")"
echo "${quoted_args}" > /etc/docker-entrypoint-cmd
cat > /etc/systemd/system/docker-entrypoint.service << EOF
[Unit]
Description=docker-entrypoint.service
[Service]
ExecStart=/bin/bash -exc "source /etc/docker-entrypoint-cmd"
# EXIT_STATUS is either an exit code integer or a signal name string, see systemd.exec(5)
ExecStopPost=/bin/bash -ec "if echo \${EXIT_STATUS} | grep [A-Z] > /dev/null; then echo >&2 \"got signal \${EXIT_STATUS}\"; systemctl exit \$(( 128 + \$( kill -l \${EXIT_STATUS} ) )); else systemctl exit \${EXIT_STATUS}; fi"
StandardInput=tty-force
StandardOutput=inherit
StandardError=inherit
WorkingDirectory=$(pwd)
EnvironmentFile=/etc/docker-entrypoint-env
[Install]
WantedBy=multi-user.target
EOF
systemctl mask systemd-firstboot.service systemd-udevd.service
systemctl unmask systemd-logind
systemctl enable docker-entrypoint.service
systemd=
if [ -x /lib/systemd/systemd ]; then
systemd=/lib/systemd/systemd
elif [ -x /usr/lib/systemd/systemd ]; then
systemd=/usr/lib/systemd/systemd
elif [ -x /sbin/init ]; then
systemd=/sbin/init
else
echo >&2 'ERROR: systemd is not installed'
exit 1
fi
systemd_args="--show-status=false --unit=docker-entrypoint.target"
echo "$0: starting $systemd $systemd_args"
exec $systemd $systemd_args

View File

@@ -15,7 +15,7 @@ set -e
# the binary version you may also need to update the vendor version to pick up
# bug fixes or new APIs, however, usually the Go packages are built from a
# commit from the master branch.
: "${CONTAINERD_VERSION:=v1.6.22}"
: "${CONTAINERD_VERSION:=v1.6.38}"
install_containerd() (
echo "Install containerd version $CONTAINERD_VERSION"

View File

@@ -9,7 +9,7 @@ set -e
# the containerd project first, and update both after that is merged.
#
# When updating RUNC_VERSION, consider updating runc in vendor.mod accordingly
: "${RUNC_VERSION:=v1.1.9}"
: "${RUNC_VERSION:=v1.1.12}"
install_runc() {
RUNC_BUILDTAGS="${RUNC_BUILDTAGS:-"seccomp"}"

View File

@@ -332,10 +332,26 @@ Function Run-UnitTests() {
$pkgList = $pkgList | Select-String -NotMatch "github.com/docker/docker/integration"
$pkgList = $pkgList -replace "`r`n", " "
$jsonFilePath = $bundlesDir + "\go-test-report-unit-flaky-tests.json"
$xmlFilePath = $bundlesDir + "\junit-report-unit-flaky-tests.xml"
$coverageFilePath = $bundlesDir + "\coverage-report-unit-flaky-tests.txt"
$goTestArg = "--rerun-fails=4 --format=standard-verbose --jsonfile=$jsonFilePath --junitfile=$xmlFilePath """ + "--packages=$pkgList" + """ -- " + $raceParm + " -coverprofile=$coverageFilePath -covermode=atomic -ldflags -w -a -test.timeout=10m -test.run=TestFlaky.*"
Write-Host "INFO: Invoking unit tests run with $GOTESTSUM_LOCATION\gotestsum.exe $goTestArg"
$pinfo = New-Object System.Diagnostics.ProcessStartInfo
$pinfo.FileName = "$GOTESTSUM_LOCATION\gotestsum.exe"
$pinfo.WorkingDirectory = "$($PWD.Path)"
$pinfo.UseShellExecute = $false
$pinfo.Arguments = $goTestArg
$p = New-Object System.Diagnostics.Process
$p.StartInfo = $pinfo
$p.Start() | Out-Null
$p.WaitForExit()
if ($p.ExitCode -ne 0) { Throw "Unit tests (flaky) failed" }
$jsonFilePath = $bundlesDir + "\go-test-report-unit-tests.json"
$xmlFilePath = $bundlesDir + "\junit-report-unit-tests.xml"
$coverageFilePath = $bundlesDir + "\coverage-report-unit-tests.txt"
$goTestArg = "--format=standard-verbose --jsonfile=$jsonFilePath --junitfile=$xmlFilePath -- " + $raceParm + " -coverprofile=$coverageFilePath -covermode=atomic -ldflags -w -a """ + "-test.timeout=10m" + """ $pkgList"
$goTestArg = "--format=standard-verbose --jsonfile=$jsonFilePath --junitfile=$xmlFilePath -- " + $raceParm + " -coverprofile=$coverageFilePath -covermode=atomic -ldflags -w -a -test.timeout=10m -test.skip=TestFlaky.*" + " $pkgList"
Write-Host "INFO: Invoking unit tests run with $GOTESTSUM_LOCATION\gotestsum.exe $goTestArg"
$pinfo = New-Object System.Diagnostics.ProcessStartInfo
$pinfo.FileName = "$GOTESTSUM_LOCATION\gotestsum.exe"
@@ -353,7 +369,7 @@ Function Run-UnitTests() {
Function Run-IntegrationTests() {
$escRoot = [Regex]::Escape($root)
$env:DOCKER_INTEGRATION_DAEMON_DEST = $bundlesDir + "\tmp"
$dirs = go list -test -f '{{- if ne .ForTest `"`" -}}{{- .Dir -}}{{- end -}}' .\integration\...
$dirs = go list -test -f '{{- if ne .ForTest "" -}}{{- .Dir -}}{{- end -}}' .\integration\...
ForEach($dir in $dirs) {
# Normalize directory name for using in the test results files.
$normDir = $dir.Trim()

View File

@@ -74,6 +74,18 @@ source "${MAKEDIR}/.go-autogen"
fi
fi
if [ "$(go env GOARCH)" = "arm" ] && [ "$(go env GOARM)" = "5" ]; then
# cross-compiling for arm/v5 fails on go1.22; a fix is included for this
# in go1.23 (https://github.com/golang/go/issues/65290), but for go1.22
# we can set the correct option manually.
CGO_CFLAGS+=" -Wno-atomic-alignment"
export CGO_CFLAGS
# Make sure libatomic is included on arm/v5, because clang does not auto-link it.
# see https://github.com/moby/moby/pull/46982#issuecomment-2206992611
export CGO_LDFLAGS="-latomic"
fi
echo "Building $([ "$DOCKER_STATIC" = "1" ] && echo "static" || echo "dynamic") $DEST/$BINARY_FULLNAME ($PLATFORM_NAME)..."
if [ -n "$DOCKER_DEBUG" ]; then
set -x

View File

@@ -67,12 +67,6 @@ if [ "$DOCKER_EXPERIMENTAL" ]; then
fi
dockerd="dockerd"
if [ -f "/sys/fs/cgroup/cgroup.controllers" ]; then
if [ -z "$TEST_SKIP_INTEGRATION_CLI" ]; then
echo >&2 '# cgroup v2 requires TEST_SKIP_INTEGRATION_CLI to be set'
exit 1
fi
fi
if [ -n "$DOCKER_ROOTLESS" ]; then
if [ -z "$TEST_SKIP_INTEGRATION_CLI" ]; then

View File

@@ -38,15 +38,38 @@ if [ -n "${base_pkg_list}" ]; then
${base_pkg_list}
fi
if [ -n "${libnetwork_pkg_list}" ]; then
rerun_flaky=1
gotest_extra_flags="-skip=TestFlaky.*"
# Custom -run passed, don't run flaky tests separately.
if echo "$TESTFLAGS" | grep -Eq '(-run|-test.run)[= ]'; then
rerun_flaky=0
gotest_extra_flags=""
fi
# libnetwork tests invoke iptables, and cannot be run in parallel. Execute
# tests within /libnetwork with '-p=1' to run them sequentially. See
# https://github.com/moby/moby/issues/42458#issuecomment-873216754 for details.
gotestsum --format=standard-quiet --jsonfile=bundles/go-test-report-libnetwork.json --junitfile=bundles/junit-report-libnetwork.xml -- \
"${BUILDFLAGS[@]}" \
gotestsum --format=standard-quiet --jsonfile=bundles/go-test-report-libnetwork.json --junitfile=bundles/junit-report-libnetwork.xml \
-- "${BUILDFLAGS[@]}" \
-cover \
-coverprofile=bundles/coverage-libnetwork.out \
-covermode=atomic \
-p=1 \
${gotest_extra_flags} \
${TESTFLAGS} \
${libnetwork_pkg_list}
if [ $rerun_flaky -eq 1 ]; then
gotestsum --format=standard-quiet --jsonfile=bundles/go-test-report-libnetwork-flaky.json --junitfile=bundles/junit-report-libnetwork-flaky.xml \
--packages "${libnetwork_pkg_list}" \
--rerun-fails=4 \
-- "${BUILDFLAGS[@]}" \
-cover \
-coverprofile=bundles/coverage-libnetwork-flaky.out \
-covermode=atomic \
-p=1 \
-test.run 'TestFlaky.*' \
${TESTFLAGS}
fi
fi
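
The convention the script keys on is just a name prefix: tests named TestFlaky* are excluded from the main pass (-skip=TestFlaky.*) and run in a separate gotestsum pass that allows up to four reruns. A hedged sketch of what an opted-in test looks like; the package and test name below are illustrative, not the actual renamed tests:

```go
package networkdb

import "testing"

// TestFlakyExample carries the TestFlaky prefix, so the unit-test script
// skips it in the main pass and reruns it (up to four times) in the
// dedicated flaky pass instead.
func TestFlakyExample(t *testing.T) {
	t.Log("test body unchanged; only the name routes it to the rerun pass")
}
```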

View File

@@ -37,6 +37,10 @@ linters-settings:
# FIXME make sure all packages have a description. Currently, there's many packages without.
- name: package-comments
disabled: true
gosec:
excludes:
- G115 # G115: integer overflow conversion; (TODO: verify these: https://github.com/moby/moby/issues/48358)
issues:
# The default exclusion rules are a bit too permissive, so copying the relevant ones below
exclude-use-default: false

View File

@@ -25,7 +25,7 @@ else
tee "${ROOTDIR}/go.mod" >&2 <<- EOF
module github.com/docker/docker
go 1.19
go 1.23
EOF
trap 'rm -f "${ROOTDIR}/go.mod"' EXIT
fi

image/cache/cache.go vendored
View File

@@ -10,7 +10,9 @@ import (
"github.com/docker/docker/dockerversion"
"github.com/docker/docker/image"
"github.com/docker/docker/layer"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
// NewLocal returns a local image cache, based on parent chain
@@ -26,8 +28,8 @@ type LocalImageCache struct {
}
// GetCache returns the image id found in the cache
func (lic *LocalImageCache) GetCache(imgID string, config *containertypes.Config) (string, error) {
return getImageIDAndError(getLocalCachedImage(lic.store, image.ID(imgID), config))
func (lic *LocalImageCache) GetCache(imgID string, config *containertypes.Config, platform ocispec.Platform) (string, error) {
return getImageIDAndError(getLocalCachedImage(lic.store, image.ID(imgID), config, platform))
}
// New returns an image cache, based on history objects
@@ -51,8 +53,8 @@ func (ic *ImageCache) Populate(image *image.Image) {
}
// GetCache returns the image id found in the cache
func (ic *ImageCache) GetCache(parentID string, cfg *containertypes.Config) (string, error) {
imgID, err := ic.localImageCache.GetCache(parentID, cfg)
func (ic *ImageCache) GetCache(parentID string, cfg *containertypes.Config, platform ocispec.Platform) (string, error) {
imgID, err := ic.localImageCache.GetCache(parentID, cfg, platform)
if err != nil {
return "", err
}
@@ -215,7 +217,23 @@ func getImageIDAndError(img *image.Image, err error) (string, error) {
// of the image with imgID, that had the same config when it was
// created. nil is returned if a child cannot be found. An error is
// returned if the parent image cannot be found.
func getLocalCachedImage(imageStore image.Store, imgID image.ID, config *containertypes.Config) (*image.Image, error) {
func getLocalCachedImage(imageStore image.Store, imgID image.ID, config *containertypes.Config, platform ocispec.Platform) (*image.Image, error) {
if config == nil {
return nil, nil
}
isBuiltLocally := func(id image.ID) bool {
builtLocally, err := imageStore.IsBuiltLocally(id)
if err != nil {
logrus.WithFields(logrus.Fields{
"error": err,
"id": id,
}).Warn("failed to check if image was built locally")
return false
}
return builtLocally
}
// Loop on the children of the given image and check the config
getMatch := func(siblings []image.ID) (*image.Image, error) {
var match *image.Image
@@ -225,6 +243,26 @@ func getLocalCachedImage(imageStore image.Store, imgID image.ID, config *contain
return nil, fmt.Errorf("unable to find image %q", id)
}
if !isBuiltLocally(id) {
continue
}
imgPlatform := ocispec.Platform{
Architecture: img.Architecture,
OS: img.OS,
OSVersion: img.OSVersion,
OSFeatures: img.OSFeatures,
Variant: img.Variant,
}
// Discard old linux/amd64 images with empty platform.
if imgPlatform.OS == "" && imgPlatform.Architecture == "" {
continue
}
if !comparePlatform(platform, imgPlatform) {
continue
}
if compare(&img.ContainerConfig, config) {
// check for the most up to date match
if match == nil || match.Created.Before(img.Created) {
@@ -238,11 +276,29 @@ func getLocalCachedImage(imageStore image.Store, imgID image.ID, config *contain
// In this case, this is `FROM scratch`, which isn't an actual image.
if imgID == "" {
images := imageStore.Map()
var siblings []image.ID
for id, img := range images {
if img.Parent == imgID {
siblings = append(siblings, id)
if img.Parent != "" {
continue
}
if !isBuiltLocally(id) {
continue
}
// Do a quick initial filter on the Cmd to avoid adding all
// non-local images with empty parent to the siblings slice and
// performing a full config compare.
//
// config.Cmd is set to the current Dockerfile instruction so we
// check it against the img.ContainerConfig.Cmd which is the
// command of the last layer.
if !strSliceEqual(img.ContainerConfig.Cmd, config.Cmd) {
continue
}
siblings = append(siblings, id)
}
return getMatch(siblings)
}
@@ -251,3 +307,15 @@ func getLocalCachedImage(imageStore image.Store, imgID image.ID, config *contain
siblings := imageStore.Children(imgID)
return getMatch(siblings)
}
func strSliceEqual(a, b []string) bool {
if len(a) != len(b) {
return false
}
for i := 0; i < len(a); i++ {
if a[i] != b[i] {
return false
}
}
return true
}

image/cache/compare.go vendored
View File

@@ -1,45 +1,106 @@
package cache // import "github.com/docker/docker/image/cache"
import (
"strings"
"github.com/containerd/containerd/platforms"
"github.com/docker/docker/api/types/container"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
)
// compare two Config struct. Do not compare the "Image" nor "Hostname" fields
// If OpenStdin is set, then it differs
func compare(a, b *container.Config) bool {
if a == nil || b == nil ||
a.OpenStdin || b.OpenStdin {
return false
}
if a.AttachStdout != b.AttachStdout ||
a.AttachStderr != b.AttachStderr ||
a.User != b.User ||
a.OpenStdin != b.OpenStdin ||
a.Tty != b.Tty {
return false
}
// TODO: Remove once containerd image service directly uses the ImageCache and
// LocalImageCache structs.
func CompareConfig(a, b *container.Config) bool {
return compare(a, b)
}
if len(a.Cmd) != len(b.Cmd) ||
len(a.Env) != len(b.Env) ||
len(a.Labels) != len(b.Labels) ||
len(a.ExposedPorts) != len(b.ExposedPorts) ||
len(a.Entrypoint) != len(b.Entrypoint) ||
len(a.Volumes) != len(b.Volumes) {
return false
}
func comparePlatform(builderPlatform, imagePlatform ocispec.Platform) bool {
// On Windows, only check the Major and Minor versions.
// The Build and Revision compatibility depends on whether `process` or
// `hyperv` isolation used.
//
// Fixes https://github.com/moby/moby/issues/47307
if builderPlatform.OS == "windows" && imagePlatform.OS == builderPlatform.OS {
// OSVersion format is:
// Major.Minor.Build.Revision
builderParts := strings.Split(builderPlatform.OSVersion, ".")
imageParts := strings.Split(imagePlatform.OSVersion, ".")
for i := 0; i < len(a.Cmd); i++ {
if a.Cmd[i] != b.Cmd[i] {
return false
// Major and minor must match.
for i := 0; i < 2; i++ {
if len(builderParts) > i && len(imageParts) > i && builderParts[i] != imageParts[i] {
return false
}
}
if len(builderParts) >= 3 && len(imageParts) >= 3 {
// Keep only Major & Minor.
builderParts[0] = imageParts[0]
builderParts[1] = imageParts[1]
imagePlatform.OSVersion = strings.Join(builderParts, ".")
}
}
return platforms.Only(builderPlatform).Match(imagePlatform)
}
// compare two Config structs. Do not compare container-specific fields:
// - Image
// - Hostname
// - Domainname
// - MacAddress
func compare(a, b *container.Config) bool {
if a == nil || b == nil {
return false
}
if len(a.Env) != len(b.Env) {
return false
}
if len(a.Cmd) != len(b.Cmd) {
return false
}
if len(a.Entrypoint) != len(b.Entrypoint) {
return false
}
if len(a.Shell) != len(b.Shell) {
return false
}
if len(a.ExposedPorts) != len(b.ExposedPorts) {
return false
}
if len(a.Volumes) != len(b.Volumes) {
return false
}
if len(a.Labels) != len(b.Labels) {
return false
}
if len(a.OnBuild) != len(b.OnBuild) {
return false
}
for i := 0; i < len(a.Env); i++ {
if a.Env[i] != b.Env[i] {
return false
}
}
for k, v := range a.Labels {
if v != b.Labels[k] {
for i := 0; i < len(a.OnBuild); i++ {
if a.OnBuild[i] != b.OnBuild[i] {
return false
}
}
for i := 0; i < len(a.Cmd); i++ {
if a.Cmd[i] != b.Cmd[i] {
return false
}
}
for i := 0; i < len(a.Entrypoint); i++ {
if a.Entrypoint[i] != b.Entrypoint[i] {
return false
}
}
for i := 0; i < len(a.Shell); i++ {
if a.Shell[i] != b.Shell[i] {
return false
}
}
@@ -48,16 +109,84 @@ func compare(a, b *container.Config) bool {
return false
}
}
for i := 0; i < len(a.Entrypoint); i++ {
if a.Entrypoint[i] != b.Entrypoint[i] {
return false
}
}
for key := range a.Volumes {
if _, exists := b.Volumes[key]; !exists {
return false
}
}
for k, v := range a.Labels {
if v != b.Labels[k] {
return false
}
}
if a.AttachStdin != b.AttachStdin {
return false
}
if a.AttachStdout != b.AttachStdout {
return false
}
if a.AttachStderr != b.AttachStderr {
return false
}
if a.NetworkDisabled != b.NetworkDisabled {
return false
}
if a.Tty != b.Tty {
return false
}
if a.OpenStdin != b.OpenStdin {
return false
}
if a.StdinOnce != b.StdinOnce {
return false
}
if a.ArgsEscaped != b.ArgsEscaped {
return false
}
if a.User != b.User {
return false
}
if a.WorkingDir != b.WorkingDir {
return false
}
if a.StopSignal != b.StopSignal {
return false
}
if (a.StopTimeout == nil) != (b.StopTimeout == nil) {
return false
}
if a.StopTimeout != nil && b.StopTimeout != nil {
if *a.StopTimeout != *b.StopTimeout {
return false
}
}
if (a.Healthcheck == nil) != (b.Healthcheck == nil) {
return false
}
if a.Healthcheck != nil && b.Healthcheck != nil {
if a.Healthcheck.Interval != b.Healthcheck.Interval {
return false
}
if a.Healthcheck.StartPeriod != b.Healthcheck.StartPeriod {
return false
}
if a.Healthcheck.Timeout != b.Healthcheck.Timeout {
return false
}
if a.Healthcheck.Retries != b.Healthcheck.Retries {
return false
}
if len(a.Healthcheck.Test) != len(b.Healthcheck.Test) {
return false
}
for i := 0; i < len(a.Healthcheck.Test); i++ {
if a.Healthcheck.Test[i] != b.Healthcheck.Test[i] {
return false
}
}
}
return true
}
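
A small usage sketch of the field-by-field comparison above, in particular the nil-vs-set handling of pointer fields like `StopTimeout`, assuming it sits next to `compare` in the same `image/cache` package (the helper name `exampleCompare` is made up):

```
package cache

import "github.com/docker/docker/api/types/container"

func exampleCompare() (bool, bool) {
	a := &container.Config{Env: []string{"FOO=bar"}}
	b := &container.Config{Env: []string{"FOO=bar"}}
	same := compare(a, b) // true: identical Env, both StopTimeout nil

	timeout := 10
	b.StopTimeout = &timeout
	differ := compare(a, b) // false: one StopTimeout is nil, the other is set
	return same, differ
}
```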

View File

@@ -1,11 +1,15 @@
package cache // import "github.com/docker/docker/image/cache"
import (
"runtime"
"testing"
"github.com/docker/docker/api/types/container"
"github.com/docker/docker/api/types/strslice"
"github.com/docker/go-connections/nat"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"gotest.tools/v3/assert"
is "gotest.tools/v3/assert/cmp"
)
// Just to make life easier
@@ -124,3 +128,73 @@ func TestCompare(t *testing.T) {
}
}
}
func TestPlatformCompare(t *testing.T) {
for _, tc := range []struct {
name string
builder ocispec.Platform
image ocispec.Platform
expected bool
}{
{
name: "same os and arch",
builder: ocispec.Platform{Architecture: "amd64", OS: runtime.GOOS},
image: ocispec.Platform{Architecture: "amd64", OS: runtime.GOOS},
expected: true,
},
{
name: "same os different arch",
builder: ocispec.Platform{Architecture: "amd64", OS: runtime.GOOS},
image: ocispec.Platform{Architecture: "arm64", OS: runtime.GOOS},
expected: false,
},
{
name: "same os smaller host variant",
builder: ocispec.Platform{Variant: "v7", Architecture: "arm", OS: runtime.GOOS},
image: ocispec.Platform{Variant: "v8", Architecture: "arm", OS: runtime.GOOS},
expected: false,
},
{
name: "same os higher host variant",
builder: ocispec.Platform{Variant: "v8", Architecture: "arm", OS: runtime.GOOS},
image: ocispec.Platform{Variant: "v7", Architecture: "arm", OS: runtime.GOOS},
expected: true,
},
{
// Test for https://github.com/moby/moby/issues/47307
name: "different build and revision",
builder: ocispec.Platform{Architecture: "amd64", OS: "windows", OSVersion: "10.0.22621"},
image: ocispec.Platform{Architecture: "amd64", OS: "windows", OSVersion: "10.0.17763.5329"},
expected: true,
},
{
name: "different revision",
builder: ocispec.Platform{Architecture: "amd64", OS: "windows", OSVersion: "10.0.17763.1234"},
image: ocispec.Platform{Architecture: "amd64", OS: "windows", OSVersion: "10.0.17763.5329"},
expected: true,
},
{
name: "different major",
builder: ocispec.Platform{Architecture: "amd64", OS: "windows", OSVersion: "11.0.17763.5329"},
image: ocispec.Platform{Architecture: "amd64", OS: "windows", OSVersion: "10.0.17763.5329"},
expected: false,
},
{
name: "different minor same osver",
builder: ocispec.Platform{Architecture: "amd64", OS: "windows", OSVersion: "10.0.17763.5329"},
image: ocispec.Platform{Architecture: "amd64", OS: "windows", OSVersion: "10.1.17763.5329"},
expected: false,
},
{
name: "different arch same osver",
builder: ocispec.Platform{Architecture: "arm64", OS: "windows", OSVersion: "10.0.17763.5329"},
image: ocispec.Platform{Architecture: "amd64", OS: "windows", OSVersion: "10.0.17763.5329"},
expected: false,
},
} {
tc := tc
t.Run(tc.name, func(t *testing.T) {
assert.Check(t, is.Equal(comparePlatform(tc.builder, tc.image), tc.expected))
})
}
}

View File

@@ -2,6 +2,7 @@ package image // import "github.com/docker/docker/image"
import (
"fmt"
"os"
"sync"
"time"
@@ -24,6 +25,8 @@ type Store interface {
GetParent(id ID) (ID, error)
SetLastUpdated(id ID) error
GetLastUpdated(id ID) (time.Time, error)
SetBuiltLocally(id ID) error
IsBuiltLocally(id ID) (bool, error)
Children(id ID) []ID
Map() map[ID]*Image
Heads() map[ID]*Image
@@ -291,6 +294,23 @@ func (is *store) GetLastUpdated(id ID) (time.Time, error) {
return time.Parse(time.RFC3339Nano, string(bytes))
}
// SetBuiltLocally sets whether the image can be used as a builder cache
func (is *store) SetBuiltLocally(id ID) error {
return is.fs.SetMetadata(id.Digest(), "builtLocally", []byte{1})
}
// IsBuiltLocally returns whether the image can be used as a builder cache
func (is *store) IsBuiltLocally(id ID) (bool, error) {
bytes, err := is.fs.GetMetadata(id.Digest(), "builtLocally")
if err != nil || len(bytes) == 0 {
if errors.Is(err, os.ErrNotExist) {
err = nil
}
return false, err
}
return bytes[0] == 1, nil
}
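
The pair above persists a single metadata byte per image and treats a missing key as "not built locally". A minimal sketch of the same treat-not-exist-as-false idiom against a stand-in store (the `metadataStore` type and key layout here are invented for illustration):

```
package main

import (
	"errors"
	"fmt"
	"os"
)

// metadataStore is a stand-in for the image store's fs metadata backend.
type metadataStore map[string][]byte

func (m metadataStore) Get(key string) ([]byte, error) {
	v, ok := m[key]
	if !ok {
		return nil, os.ErrNotExist
	}
	return v, nil
}

func isBuiltLocally(m metadataStore, id string) (bool, error) {
	b, err := m.Get(id + "/builtLocally")
	if err != nil || len(b) == 0 {
		if errors.Is(err, os.ErrNotExist) {
			err = nil // a missing flag is not an error, just "false"
		}
		return false, err
	}
	return b[0] == 1, nil
}

func main() {
	m := metadataStore{"sha256:abc/builtLocally": {1}}
	fmt.Println(isBuiltLocally(m, "sha256:abc")) // true <nil>
	fmt.Println(isBuiltLocally(m, "sha256:def")) // false <nil>
}
```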
func (is *store) Children(id ID) []ID {
is.RLock()
defer is.RUnlock()

View File

@@ -1,7 +1,7 @@
package main
import (
"fmt"
"errors"
"os"
"runtime"
"strings"
@@ -44,7 +44,7 @@ func (s *DockerBenchmarkSuite) BenchmarkConcurrentContainerActions(c *testing.B)
args = append(args, sleepCommandForDaemonPlatform()...)
out, _, err := dockerCmdWithError(args...)
if err != nil {
chErr <- fmt.Errorf(out)
chErr <- errors.New(out)
return
}
@@ -57,29 +57,29 @@ func (s *DockerBenchmarkSuite) BenchmarkConcurrentContainerActions(c *testing.B)
defer os.RemoveAll(tmpDir)
out, _, err = dockerCmdWithError("cp", id+":/tmp", tmpDir)
if err != nil {
chErr <- fmt.Errorf(out)
chErr <- errors.New(out)
return
}
out, _, err = dockerCmdWithError("kill", id)
if err != nil {
chErr <- fmt.Errorf(out)
chErr <- errors.New(out)
}
out, _, err = dockerCmdWithError("start", id)
if err != nil {
chErr <- fmt.Errorf(out)
chErr <- errors.New(out)
}
out, _, err = dockerCmdWithError("kill", id)
if err != nil {
chErr <- fmt.Errorf(out)
chErr <- errors.New(out)
}
// don't do an rm -f here since it can potentially ignore errors from the graphdriver
out, _, err = dockerCmdWithError("rm", id)
if err != nil {
chErr <- fmt.Errorf(out)
chErr <- errors.New(out)
}
}
}()
@@ -89,7 +89,7 @@ func (s *DockerBenchmarkSuite) BenchmarkConcurrentContainerActions(c *testing.B)
for i := 0; i < numIterations; i++ {
out, _, err := dockerCmdWithError("ps")
if err != nil {
chErr <- fmt.Errorf(out)
chErr <- errors.New(out)
}
}
}()
@@ -114,7 +114,7 @@ func (s *DockerBenchmarkSuite) BenchmarkLogsCLIRotateFollow(c *testing.B) {
ch <- nil
out, _, _ := dockerCmdWithError("logs", "-f", id)
// if this returns at all, it's an error
ch <- fmt.Errorf(out)
ch <- errors.New(out)
}()
<-ch

View File

@@ -25,8 +25,8 @@ var expectedNetworkInterfaceStats = strings.Split("rx_bytes rx_dropped rx_errors
func (s *DockerAPISuite) TestAPIStatsNoStreamGetCpu(c *testing.T) {
skip.If(c, RuntimeIsWindowsContainerd(), "FIXME: Broken on Windows + containerd combination")
skip.If(c, onlyCgroupsv2(), "FIXME: cgroupsV2 not supported yet")
out, _ := dockerCmd(c, "run", "-d", "busybox", "/bin/sh", "-c", "while true;usleep 100; do echo 'Hello'; done")
id := strings.TrimSpace(out)
assert.NilError(c, waitRun(id))
resp, body, err := request.Get(fmt.Sprintf("/containers/%s/stats?stream=false", id))

View File

@@ -29,6 +29,7 @@ import (
"gotest.tools/v3/assert"
"gotest.tools/v3/assert/cmp"
"gotest.tools/v3/icmd"
"gotest.tools/v3/skip"
)
type DockerCLIBuildSuite struct {
@@ -3949,6 +3950,7 @@ func (s *DockerCLIBuildSuite) TestBuildEmptyStringVolume(c *testing.T) {
func (s *DockerCLIBuildSuite) TestBuildContainerWithCgroupParent(c *testing.T) {
testRequires(c, testEnv.IsLocalDaemon, DaemonIsLinux)
skip.If(c, onlyCgroupsv2(), "FIXME: cgroupsV2 not supported yet")
cgroupParent := "test"
data, err := os.ReadFile("/proc/self/cgroup")

View File

@@ -41,6 +41,7 @@ import (
is "gotest.tools/v3/assert/cmp"
"gotest.tools/v3/icmd"
"gotest.tools/v3/poll"
"gotest.tools/v3/skip"
)
const containerdSocket = "/var/run/docker/containerd/containerd.sock"
@@ -861,8 +862,8 @@ func (s *DockerDaemonSuite) TestDaemonICCPing(c *testing.T) {
// which may happen if it was created with the same IP range.
deleteInterface(c, "docker0")
bridgeName := "ext-bridge5"
bridgeIP := "192.169.1.1/24"
const bridgeName = "ext-bridge5"
const bridgeIP = "192.169.1.1/24"
createInterface(c, "bridge", bridgeName, bridgeIP)
defer deleteInterface(c, bridgeName)
@@ -870,19 +871,30 @@ func (s *DockerDaemonSuite) TestDaemonICCPing(c *testing.T) {
d.StartWithBusybox(c, "--bridge", bridgeName, "--icc=false")
defer d.Restart(c)
result := icmd.RunCommand("iptables", "-nvL", "FORWARD")
result := icmd.RunCommand("sh", "-c", "iptables -vL FORWARD | grep DROP")
result.Assert(c, icmd.Success)
regex := fmt.Sprintf("DROP.*all.*%s.*%s", bridgeName, bridgeName)
matched, _ := regexp.MatchString(regex, result.Combined())
assert.Equal(c, matched, true, fmt.Sprintf("iptables output should have contained %q, but was %q", regex, result.Combined()))
// strip whitespace and newlines to verify we only found a single DROP
out := strings.TrimSpace(result.Stdout())
assert.Assert(c, is.Equal(strings.Count(out, "\n"), 0), "only expected a single DROP rule")
// Column headers are stripped because of grep-ing, but should be:
//
// pkts bytes target prot opt in out source destination
// 0 0 DROP all -- ext-bridge5 ext-bridge5 anywhere anywhere
cols := strings.Fields(out)
expected := []string{"0", "0", "DROP", "all", "--", bridgeName, bridgeName, "anywhere", "anywhere"}
assert.DeepEqual(c, cols, expected)
// Pinging another container must fail with --icc=false
pingContainers(c, d, true)
ipStr := "192.171.1.1/24"
ip, _, _ := net.ParseCIDR(ipStr)
ifName := "icc-dummy"
const cidr = "192.171.1.1/24"
ip, _, _ := net.ParseCIDR(cidr)
const ifName = "icc-dummy"
createInterface(c, "dummy", ifName, ipStr)
createInterface(c, "dummy", ifName, cidr)
defer deleteInterface(c, ifName)
// But, Pinging external or a Host interface must succeed
@@ -899,8 +911,8 @@ func (s *DockerDaemonSuite) TestDaemonICCLinkExpose(c *testing.T) {
// which may happen if it was created with the same IP range.
deleteInterface(c, "docker0")
bridgeName := "ext-bridge6"
bridgeIP := "192.169.1.1/24"
const bridgeName = "ext-bridge6"
const bridgeIP = "192.169.1.1/24"
createInterface(c, "bridge", bridgeName, bridgeIP)
defer deleteInterface(c, bridgeName)
@@ -908,11 +920,22 @@ func (s *DockerDaemonSuite) TestDaemonICCLinkExpose(c *testing.T) {
d.StartWithBusybox(c, "--bridge", bridgeName, "--icc=false")
defer d.Restart(c)
result := icmd.RunCommand("iptables", "-nvL", "FORWARD")
result := icmd.RunCommand("sh", "-c", "iptables -vL FORWARD | grep DROP")
result.Assert(c, icmd.Success)
regex := fmt.Sprintf("DROP.*all.*%s.*%s", bridgeName, bridgeName)
matched, _ := regexp.MatchString(regex, result.Combined())
assert.Equal(c, matched, true, fmt.Sprintf("iptables output should have contained %q, but was %q", regex, result.Combined()))
// strip whitespace and newlines to verify we only found a single DROP
out := strings.TrimSpace(result.Stdout())
assert.Assert(c, is.Equal(strings.Count(out, "\n"), 0), "only expected a single DROP rule")
// Column headers are stripped because of grep-ing, but should be:
//
// pkts bytes target prot opt in out source destination
// 0 0 DROP all -- ext-bridge6 ext-bridge6 anywhere anywhere
cols := strings.Fields(out)
expected := []string{"0", "0", "DROP", "all", "--", bridgeName, bridgeName, "anywhere", "anywhere"}
assert.DeepEqual(c, cols, expected)
out, err := d.Cmd("run", "-d", "--expose", "4567", "--name", "icc1", "busybox", "nc", "-l", "-p", "4567")
assert.NilError(c, err, out)
@@ -1700,8 +1723,11 @@ func (s *DockerDaemonSuite) TestDaemonNoSpaceLeftOnDeviceError(c *testing.T) {
defer mount.Unmount(testDir)
// create a 3MiB image (with a 2MiB ext4 fs) and mount it as graph root
// Why in a container? Because `mount` sometimes behaves weirdly and often fails outright on this test in debian:bullseye (which is what the test suite runs under if run from the Makefile)
dockerCmd(c, "run", "--rm", "-v", testDir+":/test", "busybox", "sh", "-c", "dd of=/test/testfs.img bs=1M seek=3 count=0")
//
// Why in a container? Because `mount` sometimes behaves weirdly and often
// fails outright on this test in debian:jessie (which is what the test suite
// runs under if run from the Makefile at the time this patch was added).
cli.DockerCmd(c, "run", "--rm", "-v", testDir+":/test", "busybox", "sh", "-c", "dd of=/test/testfs.img bs=1M seek=3 count=0")
icmd.RunCommand("mkfs.ext4", "-F", filepath.Join(testDir, "testfs.img")).Assert(c, icmd.Success)
dockerCmd(c, "run", "--privileged", "--rm", "-v", testDir+":/test:shared", "busybox", "sh", "-c", "mkdir -p /test/test-mount/vfs && mount -n -t ext4 /test/testfs.img /test/test-mount/vfs")
@@ -1711,7 +1737,7 @@ func (s *DockerDaemonSuite) TestDaemonNoSpaceLeftOnDeviceError(c *testing.T) {
defer s.d.Stop(c)
// pull a repository large enough to overfill the mounted filesystem
pullOut, err := s.d.Cmd("pull", "debian:bullseye-slim")
pullOut, err := s.d.Cmd("pull", "debian:bookworm-slim")
assert.Assert(c, err != nil, pullOut)
assert.Assert(c, strings.Contains(pullOut, "no space left on device"))
}
@@ -1777,6 +1803,7 @@ func (s *DockerDaemonSuite) TestDaemonRestartContainerLinksRestart(c *testing.T)
func (s *DockerDaemonSuite) TestDaemonCgroupParent(c *testing.T) {
testRequires(c, DaemonIsLinux)
skip.If(c, onlyCgroupsv2(), "FIXME: cgroupsV2 not supported yet")
cgroupParent := "test"
name := "cgroup-test"

View File

@@ -1571,7 +1571,7 @@ func (s *DockerCLINetworkSuite) TestEmbeddedDNSInvalidInput(c *testing.T) {
dockerCmd(c, "network", "create", "-d", "bridge", "nw1")
// Sending garbage to embedded DNS shouldn't crash the daemon
dockerCmd(c, "run", "-i", "--net=nw1", "--name=c1", "debian:bullseye-slim", "bash", "-c", "echo InvalidQuery > /dev/udp/127.0.0.11/53")
dockerCmd(c, "run", "-i", "--net=nw1", "--name=c1", "debian:bookworm-slim", "bash", "-c", "echo InvalidQuery > /dev/udp/127.0.0.11/53")
}
func (s *DockerCLINetworkSuite) TestDockerNetworkConnectFailsNoInspectChange(c *testing.T) {

View File

@@ -2913,7 +2913,7 @@ func (s *DockerCLIRunSuite) TestRunUnshareProc(c *testing.T) {
go func() {
name := "acidburn"
out, _, err := dockerCmdWithError("run", "--name", name, "--security-opt", "seccomp=unconfined", "debian:bullseye-slim", "unshare", "-p", "-m", "-f", "-r", "--mount-proc=/proc", "mount")
out, _, err := dockerCmdWithError("run", "--name", name, "--security-opt", "seccomp=unconfined", "debian:bookworm-slim", "unshare", "-p", "-m", "-f", "-r", "--mount-proc=/proc", "mount")
if err == nil ||
!(strings.Contains(strings.ToLower(out), "permission denied") ||
strings.Contains(strings.ToLower(out), "operation not permitted")) {
@@ -2925,7 +2925,7 @@ func (s *DockerCLIRunSuite) TestRunUnshareProc(c *testing.T) {
go func() {
name := "cereal"
out, _, err := dockerCmdWithError("run", "--name", name, "--security-opt", "seccomp=unconfined", "debian:bullseye-slim", "unshare", "-p", "-m", "-f", "-r", "mount", "-t", "proc", "none", "/proc")
out, _, err := dockerCmdWithError("run", "--name", name, "--security-opt", "seccomp=unconfined", "debian:bookworm-slim", "unshare", "-p", "-m", "-f", "-r", "mount", "-t", "proc", "none", "/proc")
if err == nil ||
!(strings.Contains(strings.ToLower(out), "mount: cannot mount none") ||
strings.Contains(strings.ToLower(out), "permission denied") ||
@@ -2939,7 +2939,7 @@ func (s *DockerCLIRunSuite) TestRunUnshareProc(c *testing.T) {
/* Ensure still fails if running privileged with the default policy */
go func() {
name := "crashoverride"
out, _, err := dockerCmdWithError("run", "--privileged", "--security-opt", "seccomp=unconfined", "--security-opt", "apparmor=docker-default", "--name", name, "debian:bullseye-slim", "unshare", "-p", "-m", "-f", "-r", "mount", "-t", "proc", "none", "/proc")
out, _, err := dockerCmdWithError("run", "--privileged", "--security-opt", "seccomp=unconfined", "--security-opt", "apparmor=docker-default", "--name", name, "debian:bookworm-slim", "unshare", "-p", "-m", "-f", "-r", "mount", "-t", "proc", "none", "/proc")
if err == nil ||
!(strings.Contains(strings.ToLower(out), "mount: cannot mount none") ||
strings.Contains(strings.ToLower(out), "permission denied") ||
@@ -3219,6 +3219,7 @@ func (s *DockerCLIRunSuite) TestRunWithUlimits(c *testing.T) {
func (s *DockerCLIRunSuite) TestRunContainerWithCgroupParent(c *testing.T) {
// Not applicable on Windows as uses Unix specific functionality
testRequires(c, DaemonIsLinux)
skip.If(c, onlyCgroupsv2(), "FIXME: cgroupsV2 not supported yet")
// cgroup-parent relative path
testRunContainerWithCgroupParent(c, "test", "cgroup-test")
@@ -3254,6 +3255,7 @@ func testRunContainerWithCgroupParent(c *testing.T, cgroupParent, name string) {
func (s *DockerCLIRunSuite) TestRunInvalidCgroupParent(c *testing.T) {
// Not applicable on Windows as uses Unix specific functionality
testRequires(c, DaemonIsLinux)
skip.If(c, onlyCgroupsv2(), "FIXME: cgroupsV2 not supported yet")
testRunInvalidCgroupParent(c, "../../../../../../../../SHOULD_NOT_EXIST", "SHOULD_NOT_EXIST", "cgroup-invalid-test")
@@ -3294,6 +3296,7 @@ func (s *DockerCLIRunSuite) TestRunContainerWithCgroupMountRO(c *testing.T) {
// Not applicable on Windows as uses Unix specific functionality
// --read-only + userns has remount issues
testRequires(c, DaemonIsLinux, NotUserNamespace)
skip.If(c, onlyCgroupsv2(), "FIXME: cgroupsV2 not supported yet")
filename := "/sys/fs/cgroup/devices/test123"
out, _, err := dockerCmdWithError("run", "busybox", "touch", filename)
@@ -4416,6 +4419,7 @@ func (s *DockerCLIRunSuite) TestRunHostnameInHostMode(c *testing.T) {
func (s *DockerCLIRunSuite) TestRunAddDeviceCgroupRule(c *testing.T) {
testRequires(c, DaemonIsLinux)
skip.If(c, onlyCgroupsv2(), "FIXME: cgroupsV2 not supported yet")
deviceRule := "c 7:128 rwm"

View File

@@ -28,6 +28,7 @@ import (
"github.com/moby/sys/mount"
"gotest.tools/v3/assert"
"gotest.tools/v3/icmd"
"gotest.tools/v3/skip"
)
// #6509
@@ -450,6 +451,7 @@ func (s *DockerCLIRunSuite) TestRunAttachInvalidDetachKeySequencePreserved(c *te
// "test" should be printed
func (s *DockerCLIRunSuite) TestRunWithCPUQuota(c *testing.T) {
testRequires(c, cpuCfsQuota)
skip.If(c, onlyCgroupsv2(), "FIXME: cgroupsV2 not supported yet")
file := "/sys/fs/cgroup/cpu/cpu.cfs_quota_us"
out, _ := dockerCmd(c, "run", "--cpu-quota", "8000", "--name", "test", "busybox", "cat", file)
@@ -461,6 +463,7 @@ func (s *DockerCLIRunSuite) TestRunWithCPUQuota(c *testing.T) {
func (s *DockerCLIRunSuite) TestRunWithCpuPeriod(c *testing.T) {
testRequires(c, cpuCfsPeriod)
skip.If(c, onlyCgroupsv2(), "FIXME: cgroupsV2 not supported yet")
file := "/sys/fs/cgroup/cpu/cpu.cfs_period_us"
out, _ := dockerCmd(c, "run", "--cpu-period", "50000", "--name", "test", "busybox", "cat", file)
@@ -491,6 +494,7 @@ func (s *DockerCLIRunSuite) TestRunWithInvalidCpuPeriod(c *testing.T) {
func (s *DockerCLIRunSuite) TestRunWithCPUShares(c *testing.T) {
testRequires(c, cpuShare)
skip.If(c, onlyCgroupsv2(), "FIXME: cgroupsV2 not supported yet")
file := "/sys/fs/cgroup/cpu/cpu.shares"
out, _ := dockerCmd(c, "run", "--cpu-shares", "1000", "--name", "test", "busybox", "cat", file)
@@ -511,6 +515,7 @@ func (s *DockerCLIRunSuite) TestRunEchoStdoutWithCPUSharesAndMemoryLimit(c *test
func (s *DockerCLIRunSuite) TestRunWithCpusetCpus(c *testing.T) {
testRequires(c, cgroupCpuset)
skip.If(c, onlyCgroupsv2(), "FIXME: cgroupsV2 not supported yet")
file := "/sys/fs/cgroup/cpuset/cpuset.cpus"
out, _ := dockerCmd(c, "run", "--cpuset-cpus", "0", "--name", "test", "busybox", "cat", file)
@@ -522,6 +527,7 @@ func (s *DockerCLIRunSuite) TestRunWithCpusetCpus(c *testing.T) {
func (s *DockerCLIRunSuite) TestRunWithCpusetMems(c *testing.T) {
testRequires(c, cgroupCpuset)
skip.If(c, onlyCgroupsv2(), "FIXME: cgroupsV2 not supported yet")
file := "/sys/fs/cgroup/cpuset/cpuset.mems"
out, _ := dockerCmd(c, "run", "--cpuset-mems", "0", "--name", "test", "busybox", "cat", file)
@@ -533,6 +539,7 @@ func (s *DockerCLIRunSuite) TestRunWithCpusetMems(c *testing.T) {
func (s *DockerCLIRunSuite) TestRunWithBlkioWeight(c *testing.T) {
testRequires(c, blkioWeight)
skip.If(c, onlyCgroupsv2(), "FIXME: cgroupsV2 not supported yet")
file := "/sys/fs/cgroup/blkio/blkio.weight"
out, _ := dockerCmd(c, "run", "--blkio-weight", "300", "--name", "test", "busybox", "cat", file)
@@ -544,6 +551,7 @@ func (s *DockerCLIRunSuite) TestRunWithBlkioWeight(c *testing.T) {
func (s *DockerCLIRunSuite) TestRunWithInvalidBlkioWeight(c *testing.T) {
testRequires(c, blkioWeight)
skip.If(c, onlyCgroupsv2(), "FIXME: cgroupsV2 not supported yet")
out, _, err := dockerCmdWithError("run", "--blkio-weight", "5", "busybox", "true")
assert.ErrorContains(c, err, "", out)
expected := "Range of blkio weight is from 10 to 1000"
@@ -602,6 +610,7 @@ func (s *DockerCLIRunSuite) TestRunOOMExitCode(c *testing.T) {
func (s *DockerCLIRunSuite) TestRunWithMemoryLimit(c *testing.T) {
testRequires(c, memoryLimitSupport)
skip.If(c, onlyCgroupsv2(), "FIXME: cgroupsV2 not supported yet")
file := "/sys/fs/cgroup/memory/memory.limit_in_bytes"
cli.DockerCmd(c, "run", "-m", "32M", "--name", "test", "busybox", "cat", file).Assert(c, icmd.Expected{
@@ -646,6 +655,7 @@ func (s *DockerCLIRunSuite) TestRunWithSwappinessInvalid(c *testing.T) {
func (s *DockerCLIRunSuite) TestRunWithMemoryReservation(c *testing.T) {
testRequires(c, testEnv.IsLocalDaemon, memoryReservationSupport)
skip.If(c, onlyCgroupsv2(), "FIXME: cgroupsV2 not supported yet")
file := "/sys/fs/cgroup/memory/memory.soft_limit_in_bytes"
out, _ := dockerCmd(c, "run", "--memory-reservation", "200M", "--name", "test", "busybox", "cat", file)
@@ -730,6 +740,8 @@ func (s *DockerCLIRunSuite) TestRunInvalidCpusetMemsFlagValue(c *testing.T) {
func (s *DockerCLIRunSuite) TestRunInvalidCPUShares(c *testing.T) {
testRequires(c, cpuShare, DaemonIsLinux)
skip.If(c, onlyCgroupsv2(), "FIXME: cgroupsV2 not supported yet")
out, _, err := dockerCmdWithError("run", "--cpu-shares", "1", "busybox", "echo", "test")
assert.ErrorContains(c, err, "", out)
expected := "minimum allowed cpu-shares is 2"
@@ -840,12 +852,12 @@ func (s *DockerCLIRunSuite) TestRunTmpfsMountsWithOptions(c *testing.T) {
assert.Assert(c, strings.Contains(out, option))
}
// We use debian:bullseye-slim as there is no findmnt in busybox. Also the output will be in the format of
// We use debian:bookworm-slim as there is no findmnt in busybox. Also the output will be in the format of
// TARGET PROPAGATION
// /tmp shared
// so we only capture `shared` here.
expectedOptions = []string{"shared"}
out, _ = dockerCmd(c, "run", "--tmpfs", "/tmp:shared", "debian:bullseye-slim", "findmnt", "-o", "TARGET,PROPAGATION", "/tmp")
out, _ = dockerCmd(c, "run", "--tmpfs", "/tmp:shared", "debian:bookworm-slim", "findmnt", "-o", "TARGET,PROPAGATION", "/tmp")
for _, option := range expectedOptions {
assert.Assert(c, strings.Contains(out, option))
}
@@ -881,7 +893,7 @@ func (s *DockerCLIRunSuite) TestRunSysctls(c *testing.T) {
})
}
// TestRunSeccompProfileDenyUnshare checks that 'docker run --security-opt seccomp=/tmp/profile.json debian:bullseye-slim unshare' exits with operation not permitted.
// TestRunSeccompProfileDenyUnshare checks that 'docker run --security-opt seccomp=/tmp/profile.json debian:bookworm-slim unshare' exits with operation not permitted.
func (s *DockerCLIRunSuite) TestRunSeccompProfileDenyUnshare(c *testing.T) {
testRequires(c, testEnv.IsLocalDaemon, seccompEnabled, NotArm, Apparmor)
jsonData := `{
@@ -904,7 +916,7 @@ func (s *DockerCLIRunSuite) TestRunSeccompProfileDenyUnshare(c *testing.T) {
}
icmd.RunCommand(dockerBinary, "run", "--security-opt", "apparmor=unconfined",
"--security-opt", "seccomp="+tmpFile.Name(),
"debian:bullseye-slim", "unshare", "-p", "-m", "-f", "-r", "mount", "-t", "proc", "none", "/proc").Assert(c, icmd.Expected{
"debian:bookworm-slim", "unshare", "-p", "-m", "-f", "-r", "mount", "-t", "proc", "none", "/proc").Assert(c, icmd.Expected{
ExitCode: 1,
Err: "Operation not permitted",
})
@@ -944,7 +956,7 @@ func (s *DockerCLIRunSuite) TestRunSeccompProfileDenyChmod(c *testing.T) {
})
}
// TestRunSeccompProfileDenyUnshareUserns checks that 'docker run debian:bullseye-slim unshare --map-root-user --user sh -c whoami' with a specific profile to
// TestRunSeccompProfileDenyUnshareUserns checks that 'docker run debian:bookworm-slim unshare --map-root-user --user sh -c whoami' with a specific profile to
// deny unshare of a userns exits with operation not permitted.
func (s *DockerCLIRunSuite) TestRunSeccompProfileDenyUnshareUserns(c *testing.T) {
testRequires(c, testEnv.IsLocalDaemon, seccompEnabled, NotArm, Apparmor)
@@ -976,7 +988,7 @@ func (s *DockerCLIRunSuite) TestRunSeccompProfileDenyUnshareUserns(c *testing.T)
}
icmd.RunCommand(dockerBinary, "run",
"--security-opt", "apparmor=unconfined", "--security-opt", "seccomp="+tmpFile.Name(),
"debian:bullseye-slim", "unshare", "--map-root-user", "--user", "sh", "-c", "whoami").Assert(c, icmd.Expected{
"debian:bookworm-slim", "unshare", "--map-root-user", "--user", "sh", "-c", "whoami").Assert(c, icmd.Expected{
ExitCode: 1,
Err: "Operation not permitted",
})
@@ -1028,12 +1040,12 @@ func (s *DockerCLIRunSuite) TestRunSeccompProfileAllow32Bit(c *testing.T) {
icmd.RunCommand(dockerBinary, "run", "syscall-test", "exit32-test").Assert(c, icmd.Success)
}
// TestRunSeccompAllowSetrlimit checks that 'docker run debian:bullseye-slim ulimit -v 1048510' succeeds.
// TestRunSeccompAllowSetrlimit checks that 'docker run debian:bookworm-slim ulimit -v 1048510' succeeds.
func (s *DockerCLIRunSuite) TestRunSeccompAllowSetrlimit(c *testing.T) {
testRequires(c, testEnv.IsLocalDaemon, seccompEnabled)
// ulimit uses setrlimit, so we want to make sure we don't break it
icmd.RunCommand(dockerBinary, "run", "debian:bullseye-slim", "bash", "-c", "ulimit -v 1048510").Assert(c, icmd.Success)
icmd.RunCommand(dockerBinary, "run", "debian:bookworm-slim", "bash", "-c", "ulimit -v 1048510").Assert(c, icmd.Success)
}
func (s *DockerCLIRunSuite) TestRunSeccompDefaultProfileAcct(c *testing.T) {
@@ -1329,7 +1341,7 @@ func (s *DockerCLIRunSuite) TestRunApparmorProcDirectory(c *testing.T) {
func (s *DockerCLIRunSuite) TestRunSeccompWithDefaultProfile(c *testing.T) {
testRequires(c, testEnv.IsLocalDaemon, seccompEnabled)
out, _, err := dockerCmdWithError("run", "--security-opt", "seccomp=../profiles/seccomp/default.json", "debian:bullseye-slim", "unshare", "--map-root-user", "--user", "sh", "-c", "whoami")
out, _, err := dockerCmdWithError("run", "--security-opt", "seccomp=../profiles/seccomp/default.json", "debian:bookworm-slim", "unshare", "--map-root-user", "--user", "sh", "-c", "whoami")
assert.ErrorContains(c, err, "", out)
assert.Equal(c, strings.TrimSpace(out), "unshare: unshare failed: Operation not permitted")
}
@@ -1384,6 +1396,7 @@ func (s *DockerCLIRunSuite) TestRunDeviceSymlink(c *testing.T) {
// TestRunPIDsLimit makes sure the pids cgroup is set with --pids-limit
func (s *DockerCLIRunSuite) TestRunPIDsLimit(c *testing.T) {
testRequires(c, testEnv.IsLocalDaemon, pidsLimit)
skip.If(c, onlyCgroupsv2(), "FIXME: cgroupsV2 not supported yet")
file := "/sys/fs/cgroup/pids/pids.max"
out, _ := dockerCmd(c, "run", "--name", "skittles", "--pids-limit", "4", "busybox", "cat", file)
@@ -1395,6 +1408,7 @@ func (s *DockerCLIRunSuite) TestRunPIDsLimit(c *testing.T) {
func (s *DockerCLIRunSuite) TestRunPrivilegedAllowedDevices(c *testing.T) {
testRequires(c, DaemonIsLinux, NotUserNamespace)
skip.If(c, onlyCgroupsv2(), "FIXME: cgroupsV2 not supported yet")
file := "/sys/fs/cgroup/devices/devices.list"
out, _ := dockerCmd(c, "run", "--privileged", "busybox", "cat", file)
@@ -1545,6 +1559,7 @@ func (s *DockerDaemonSuite) TestRunWithDaemonDefaultSeccompProfile(c *testing.T)
func (s *DockerCLIRunSuite) TestRunWithNanoCPUs(c *testing.T) {
testRequires(c, cpuCfsQuota, cpuCfsPeriod)
skip.If(c, onlyCgroupsv2(), "FIXME: cgroupsV2 not supported yet")
file1 := "/sys/fs/cgroup/cpu/cpu.cfs_quota_us"
file2 := "/sys/fs/cgroup/cpu/cpu.cfs_period_us"

View File

@@ -75,7 +75,7 @@ func (s *DockerSwarmSuite) TestServiceLogsCompleteness(c *testing.T) {
name := "TestServiceLogsCompleteness"
// make a service that prints 6 lines
out, err := d.Cmd("service", "create", "--detach", "--no-resolve-image", "--name", name, "busybox", "sh", "-c", "for line in $(seq 0 5); do echo log test $line; done; exec tail /dev/null")
out, err := d.Cmd("service", "create", "--detach", "--no-resolve-image", "--name", name, "busybox", "sh", "-c", "for line in $(seq 0 5); do echo log test $line; done; exec tail -f /dev/null")
assert.NilError(c, err)
assert.Assert(c, strings.TrimSpace(out) != "")
@@ -126,7 +126,7 @@ func (s *DockerSwarmSuite) TestServiceLogsSince(c *testing.T) {
name := "TestServiceLogsSince"
out, err := d.Cmd("service", "create", "--detach", "--no-resolve-image", "--name", name, "busybox", "sh", "-c", "for i in $(seq 1 3); do usleep 100000; echo log$i; done; exec tail /dev/null")
out, err := d.Cmd("service", "create", "--detach", "--no-resolve-image", "--name", name, "busybox", "sh", "-c", "for i in $(seq 1 3); do usleep 100000; echo log$i; done; exec tail -f /dev/null")
assert.NilError(c, err)
assert.Assert(c, strings.TrimSpace(out) != "")
poll.WaitOn(c, pollCheck(c, d.CheckActiveContainerCount, checker.Equals(1)), poll.WithTimeout(defaultReconciliationTimeout))

View File

@@ -17,6 +17,7 @@ import (
"github.com/docker/docker/client"
"github.com/docker/docker/testutil/request"
"gotest.tools/v3/assert"
"gotest.tools/v3/skip"
)
func (s *DockerCLIUpdateSuite) TearDownTest(c *testing.T) {
@@ -30,6 +31,7 @@ func (s *DockerCLIUpdateSuite) OnTimeout(c *testing.T) {
func (s *DockerCLIUpdateSuite) TestUpdateRunningContainer(c *testing.T) {
testRequires(c, DaemonIsLinux)
testRequires(c, memoryLimitSupport)
skip.If(c, onlyCgroupsv2(), "FIXME: cgroupsV2 not supported yet")
name := "test-update-container"
dockerCmd(c, "run", "-d", "--name", name, "-m", "300M", "busybox", "top")
@@ -45,6 +47,7 @@ func (s *DockerCLIUpdateSuite) TestUpdateRunningContainer(c *testing.T) {
func (s *DockerCLIUpdateSuite) TestUpdateRunningContainerWithRestart(c *testing.T) {
testRequires(c, DaemonIsLinux)
testRequires(c, memoryLimitSupport)
skip.If(c, onlyCgroupsv2(), "FIXME: cgroupsV2 not supported yet")
name := "test-update-container"
dockerCmd(c, "run", "-d", "--name", name, "-m", "300M", "busybox", "top")
@@ -61,6 +64,7 @@ func (s *DockerCLIUpdateSuite) TestUpdateRunningContainerWithRestart(c *testing.
func (s *DockerCLIUpdateSuite) TestUpdateStoppedContainer(c *testing.T) {
testRequires(c, DaemonIsLinux)
testRequires(c, memoryLimitSupport)
skip.If(c, onlyCgroupsv2(), "FIXME: cgroupsV2 not supported yet")
name := "test-update-container"
file := "/sys/fs/cgroup/memory/memory.limit_in_bytes"
@@ -76,6 +80,7 @@ func (s *DockerCLIUpdateSuite) TestUpdateStoppedContainer(c *testing.T) {
func (s *DockerCLIUpdateSuite) TestUpdatePausedContainer(c *testing.T) {
testRequires(c, DaemonIsLinux)
testRequires(c, cpuShare)
skip.If(c, onlyCgroupsv2(), "FIXME: cgroupsV2 not supported yet")
name := "test-update-container"
dockerCmd(c, "run", "-d", "--name", name, "--cpu-shares", "1000", "busybox", "top")
@@ -94,6 +99,7 @@ func (s *DockerCLIUpdateSuite) TestUpdateWithUntouchedFields(c *testing.T) {
testRequires(c, DaemonIsLinux)
testRequires(c, memoryLimitSupport)
testRequires(c, cpuShare)
skip.If(c, onlyCgroupsv2(), "FIXME: cgroupsV2 not supported yet")
name := "test-update-container"
dockerCmd(c, "run", "-d", "--name", name, "-m", "300M", "--cpu-shares", "800", "busybox", "top")
@@ -134,6 +140,7 @@ func (s *DockerCLIUpdateSuite) TestUpdateSwapMemoryOnly(c *testing.T) {
testRequires(c, DaemonIsLinux)
testRequires(c, memoryLimitSupport)
testRequires(c, swapMemorySupport)
skip.If(c, onlyCgroupsv2(), "FIXME: cgroupsV2 not supported yet")
name := "test-update-container"
dockerCmd(c, "run", "-d", "--name", name, "--memory", "300M", "--memory-swap", "500M", "busybox", "top")
@@ -150,6 +157,7 @@ func (s *DockerCLIUpdateSuite) TestUpdateInvalidSwapMemory(c *testing.T) {
testRequires(c, DaemonIsLinux)
testRequires(c, memoryLimitSupport)
testRequires(c, swapMemorySupport)
skip.If(c, onlyCgroupsv2(), "FIXME: cgroupsV2 not supported yet")
name := "test-update-container"
dockerCmd(c, "run", "-d", "--name", name, "--memory", "300M", "--memory-swap", "500M", "busybox", "top")
@@ -244,6 +252,7 @@ func (s *DockerCLIUpdateSuite) TestUpdateNotAffectMonitorRestartPolicy(c *testin
func (s *DockerCLIUpdateSuite) TestUpdateWithNanoCPUs(c *testing.T) {
testRequires(c, cpuCfsQuota, cpuCfsPeriod)
skip.If(c, onlyCgroupsv2(), "FIXME: cgroupsV2 not supported yet")
file1 := "/sys/fs/cgroup/cpu/cpu.cfs_quota_us"
file2 := "/sys/fs/cgroup/cpu/cpu.cfs_period_us"

View File

@@ -48,7 +48,7 @@ func ensureSyscallTest(c *testing.T) {
dockerFile := filepath.Join(tmp, "Dockerfile")
content := []byte(`
FROM debian:bullseye-slim
FROM debian:bookworm-slim
COPY . /usr/bin/
`)
err = os.WriteFile(dockerFile, content, 0600)
@@ -64,7 +64,7 @@ func ensureSyscallTest(c *testing.T) {
}
func ensureSyscallTestBuild(c *testing.T) {
err := load.FrozenImagesLinux(testEnv.APIClient(), "debian:bullseye-slim")
err := load.FrozenImagesLinux(testEnv.APIClient(), "debian:bookworm-slim")
assert.NilError(c, err)
var buildArgs []string
@@ -102,7 +102,7 @@ func ensureNNPTest(c *testing.T) {
dockerfile := filepath.Join(tmp, "Dockerfile")
content := `
FROM debian:bullseye-slim
FROM debian:bookworm-slim
COPY . /usr/bin
RUN chmod +s /usr/bin/nnp-test
`
@@ -119,7 +119,7 @@ func ensureNNPTest(c *testing.T) {
}
func ensureNNPTestBuild(c *testing.T) {
err := load.FrozenImagesLinux(testEnv.APIClient(), "debian:bullseye-slim")
err := load.FrozenImagesLinux(testEnv.APIClient(), "debian:bookworm-slim")
assert.NilError(c, err)
var buildArgs []string

View File

@@ -9,6 +9,7 @@ import (
"os/exec"
"strings"
"github.com/containerd/cgroups"
"github.com/docker/docker/pkg/sysinfo"
)
@@ -69,6 +70,11 @@ func bridgeNfIptables() bool {
return !SysInfo.BridgeNFCallIPTablesDisabled
}
func onlyCgroupsv2() bool {
// Only check for unified, cgroup v1 tests can run under other modes
return cgroups.Mode() == cgroups.Unified
}
func unprivilegedUsernsClone() bool {
content, err := os.ReadFile("/proc/sys/kernel/unprivileged_userns_clone")
return err != nil || !strings.Contains(string(content), "0")

View File

@@ -0,0 +1,5 @@
package main
func onlyCgroupsv2() bool {
return false
}

View File

@@ -42,7 +42,7 @@ func TestBuildUserNamespaceValidateCapabilitiesAreV2(t *testing.T) {
clientUserRemap := dUserRemap.NewClientT(t)
defer clientUserRemap.Close()
err = load.FrozenImagesLinux(clientUserRemap, "debian:bullseye-slim")
err = load.FrozenImagesLinux(clientUserRemap, "debian:bookworm-slim")
assert.NilError(t, err)
dUserRemapRunning := true
@@ -54,7 +54,7 @@ func TestBuildUserNamespaceValidateCapabilitiesAreV2(t *testing.T) {
}()
dockerfile := `
FROM debian:bullseye-slim
FROM debian:bookworm-slim
RUN apt-get update && apt-get install -y libcap2-bin --no-install-recommends
RUN setcap CAP_NET_BIND_SERVICE=+eip /bin/sleep
`

View File

@@ -11,10 +11,12 @@ import (
"testing"
"github.com/docker/docker/api/types"
"github.com/docker/docker/errdefs"
"github.com/docker/docker/image"
"github.com/docker/docker/testutil"
"github.com/docker/docker/testutil/daemon"
"gotest.tools/v3/assert"
is "gotest.tools/v3/assert/cmp"
"gotest.tools/v3/skip"
)
@@ -133,7 +135,8 @@ func TestImportWithCustomPlatform(t *testing.T) {
reference,
types.ImageImportOptions{Platform: tc.platform})
if tc.expectedErr != "" {
assert.ErrorContains(t, err, tc.expectedErr)
assert.Check(t, is.ErrorType(err, errdefs.IsInvalidParameter))
assert.Check(t, is.ErrorContains(err, tc.expectedErr))
} else {
assert.NilError(t, err)

View File

@@ -1,14 +1,18 @@
package container
import (
"bytes"
"context"
"errors"
"runtime"
"sync"
"testing"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/container"
"github.com/docker/docker/api/types/network"
"github.com/docker/docker/client"
"github.com/docker/docker/pkg/stdcopy"
specs "github.com/opencontainers/image-spec/specs-go/v1"
"gotest.tools/v3/assert"
)
@@ -71,3 +75,75 @@ func Run(ctx context.Context, t *testing.T, client client.APIClient, ops ...func
return id
}
type RunResult struct {
ContainerID string
ExitCode int
Stdout *bytes.Buffer
Stderr *bytes.Buffer
}
func RunAttach(ctx context.Context, t *testing.T, client client.APIClient, ops ...func(config *TestContainerConfig)) RunResult {
t.Helper()
ops = append(ops, func(c *TestContainerConfig) {
c.Config.AttachStdout = true
c.Config.AttachStderr = true
})
id := Create(ctx, t, client, ops...)
aresp, err := client.ContainerAttach(ctx, id, types.ContainerAttachOptions{
Stream: true,
Stdout: true,
Stderr: true,
})
assert.NilError(t, err)
err = client.ContainerStart(ctx, id, types.ContainerStartOptions{})
assert.NilError(t, err)
s, err := demultiplexStreams(ctx, aresp)
if !errors.Is(err, context.DeadlineExceeded) && !errors.Is(err, context.Canceled) {
assert.NilError(t, err)
}
// Inspect to get the exit code. A new context is used here to make sure that if the context passed as an argument
// has reached its timeout during the demultiplexStreams call, we still return a RunResult.
resp, err := client.ContainerInspect(context.Background(), id)
assert.NilError(t, err)
return RunResult{ContainerID: id, ExitCode: resp.State.ExitCode, Stdout: &s.stdout, Stderr: &s.stderr}
}
type streams struct {
stdout, stderr bytes.Buffer
}
// demultiplexStreams starts a goroutine to demultiplex stdout and stderr from the types.HijackedResponse resp and
// waits until either the multiplexed stream reaches EOF or the context expires. It unconditionally closes resp and waits
// until the demultiplexing goroutine has finished its work before returning.
func demultiplexStreams(ctx context.Context, resp types.HijackedResponse) (streams, error) {
var s streams
outputDone := make(chan error, 1)
var wg sync.WaitGroup
wg.Add(1)
go func() {
_, err := stdcopy.StdCopy(&s.stdout, &s.stderr, resp.Reader)
outputDone <- err
wg.Done()
}()
var err error
select {
case copyErr := <-outputDone:
err = copyErr
break
case <-ctx.Done():
err = ctx.Err()
}
resp.Close()
wg.Wait()
return s, err
}
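
For context, the attach stream multiplexes stdout and stderr onto one connection and `stdcopy.StdCopy` splits them back apart. A minimal sketch of that splitting on its own, assuming only the `stdcopy` package rather than the attach plumbing above (the buffers stand in for the hijacked connection):

```
package main

import (
	"bytes"
	"fmt"

	"github.com/docker/docker/pkg/stdcopy"
)

func main() {
	// Build a multiplexed stream the way the daemon would: stdout and stderr
	// frames written through stdcopy writers into the same buffer.
	var mux bytes.Buffer
	stdcopy.NewStdWriter(&mux, stdcopy.Stdout).Write([]byte("out line\n"))
	stdcopy.NewStdWriter(&mux, stdcopy.Stderr).Write([]byte("err line\n"))

	// Demultiplex it back into separate buffers, as demultiplexStreams does
	// with the hijacked attach response.
	var stdout, stderr bytes.Buffer
	if _, err := stdcopy.StdCopy(&stdout, &stderr, &mux); err != nil {
		panic(err)
	}
	fmt.Print(stdout.String()) // out line
	fmt.Print(stderr.String()) // err line
}
```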

View File

@@ -33,3 +33,10 @@ func CreateNoError(ctx context.Context, t *testing.T, client client.APIClient, n
assert.NilError(t, err)
return name
}
func RemoveNoError(ctx context.Context, t *testing.T, apiClient client.APIClient, name string) {
t.Helper()
err := apiClient.NetworkRemove(ctx, name)
assert.NilError(t, err)
}

View File

@@ -0,0 +1,34 @@
package networking
import (
"context"
"os"
"testing"
"github.com/docker/docker/testutil/environment"
)
var testEnv *environment.Execution
func TestMain(m *testing.M) {
var err error
testEnv, err = environment.New()
if err != nil {
panic(err)
}
err = environment.EnsureFrozenImagesLinux(testEnv)
if err != nil {
panic(err)
}
testEnv.Print()
code := m.Run()
os.Exit(code)
}
func setupTest(t *testing.T) context.Context {
environment.ProtectAll(t, testEnv)
t.Cleanup(func() { testEnv.Clean(t) })
return context.Background()
}

View File

@@ -0,0 +1,142 @@
package networking
import (
"net"
"os"
"testing"
"github.com/docker/docker/api/types"
"github.com/docker/docker/integration/internal/container"
"github.com/docker/docker/integration/internal/network"
"github.com/docker/docker/testutil/daemon"
"github.com/miekg/dns"
"gotest.tools/v3/assert"
is "gotest.tools/v3/assert/cmp"
"gotest.tools/v3/skip"
)
// writeTempResolvConf writes a resolv.conf that only contains a single
// nameserver line, with address addr.
// It returns the name of the temp file.
func writeTempResolvConf(t *testing.T, addr string) string {
t.Helper()
// Not using t.TempDir() here because in rootless mode, while the temporary
// directory gets mode 0777, it's a subdir of an 0700 directory owned by root.
// So, it's not accessible by the daemon.
f, err := os.CreateTemp("", "resolv.conf")
assert.NilError(t, err)
t.Cleanup(func() { os.Remove(f.Name()) })
err = f.Chmod(0644)
assert.NilError(t, err)
f.Write([]byte("nameserver " + addr + "\n"))
return f.Name()
}
const dnsRespAddr = "10.11.12.13"
// startDaftDNS starts and returns a really, really daft DNS server that only
// responds to type-A requests, and always with address dnsRespAddr.
func startDaftDNS(t *testing.T, addr string) *dns.Server {
serveDNS := func(w dns.ResponseWriter, query *dns.Msg) {
if query.Question[0].Qtype == dns.TypeA {
resp := &dns.Msg{}
resp.SetReply(query)
answer := &dns.A{
Hdr: dns.RR_Header{
Name: query.Question[0].Name,
Rrtype: dns.TypeA,
Class: dns.ClassINET,
Ttl: 600,
},
}
answer.A = net.ParseIP(dnsRespAddr)
resp.Answer = append(resp.Answer, answer)
_ = w.WriteMsg(resp)
}
}
conn, err := net.ListenUDP("udp", &net.UDPAddr{
IP: net.ParseIP(addr),
Port: 53,
})
assert.NilError(t, err)
server := &dns.Server{Handler: dns.HandlerFunc(serveDNS), PacketConn: conn}
go func() {
_ = server.ActivateAndServe()
}()
return server
}
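
A minimal sketch of exercising such a server with the same `miekg/dns` client API (the address and query name are illustrative, and binding port 53 needs the same privileges the test itself requires):

```
package main

import (
	"fmt"

	"github.com/miekg/dns"
)

func main() {
	// Ask the server listening on 127.0.0.1:53 for an A record; the handler
	// above answers every type-A query with dnsRespAddr.
	query := new(dns.Msg)
	query.SetQuestion(dns.Fqdn("test.example"), dns.TypeA)

	client := &dns.Client{}
	resp, _, err := client.Exchange(query, "127.0.0.1:53")
	if err != nil {
		panic(err)
	}
	if a, ok := resp.Answer[0].(*dns.A); ok {
		fmt.Println(a.A) // 10.11.12.13
	}
}
```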
// Check that when a container is connected to an internal network, DNS
// requests sent to daemon's internal DNS resolver are not forwarded to
// an upstream resolver listening on a localhost address.
// (Assumes the host does not already have a DNS server on 127.0.0.1.)
func TestInternalNetworkDNS(t *testing.T) {
skip.If(t, testEnv.DaemonInfo.OSType == "windows", "No resolv.conf on Windows")
skip.If(t, testEnv.IsRootless, "Can't use resolver on host in rootless mode")
ctx := setupTest(t)
// Start a DNS server on the loopback interface.
server := startDaftDNS(t, "127.0.0.1")
defer server.Shutdown()
// Set up a temp resolv.conf pointing at that DNS server, and a daemon using it.
tmpFileName := writeTempResolvConf(t, "127.0.0.1")
d := daemon.New(t, daemon.WithEnvVars("DOCKER_TEST_RESOLV_CONF_PATH="+tmpFileName))
d.StartWithBusybox(t, "--experimental", "--ip6tables")
defer d.Stop(t)
c := d.NewClientT(t)
defer c.Close()
intNetName := "intnet"
network.CreateNoError(ctx, t, c, intNetName,
network.WithDriver("bridge"),
network.WithInternal(),
)
defer network.RemoveNoError(ctx, t, c, intNetName)
extNetName := "extnet"
network.CreateNoError(ctx, t, c, extNetName,
network.WithDriver("bridge"),
)
defer network.RemoveNoError(ctx, t, c, extNetName)
// Create a container, initially with external connectivity.
// Expect the external DNS server to respond to a request from the container.
ctrId := container.Run(ctx, t, c, container.WithNetworkMode(extNetName))
defer c.ContainerRemove(ctx, ctrId, types.ContainerRemoveOptions{Force: true})
res, err := container.Exec(ctx, c, ctrId, []string{"nslookup", "test.example"})
assert.NilError(t, err)
assert.Check(t, is.Equal(res.ExitCode, 0))
assert.Check(t, is.Contains(res.Stdout(), dnsRespAddr))
// Connect the container to the internal network as well.
// External DNS should still be used.
err = c.NetworkConnect(ctx, intNetName, ctrId, nil)
assert.NilError(t, err)
res, err = container.Exec(ctx, c, ctrId, []string{"nslookup", "test.example"})
assert.NilError(t, err)
assert.Check(t, is.Equal(res.ExitCode, 0))
assert.Check(t, is.Contains(res.Stdout(), dnsRespAddr))
// Disconnect from the external network.
// Expect no access to the external DNS.
err = c.NetworkDisconnect(ctx, extNetName, ctrId, true)
assert.NilError(t, err)
res, err = container.Exec(ctx, c, ctrId, []string{"nslookup", "test.example"})
assert.NilError(t, err)
assert.Check(t, is.Equal(res.ExitCode, 1))
assert.Check(t, is.Contains(res.Stdout(), "SERVFAIL"))
// Reconnect the external network.
// Check that the external DNS server is used again.
err = c.NetworkConnect(ctx, extNetName, ctrId, nil)
assert.NilError(t, err)
res, err = container.Exec(ctx, c, ctrId, []string{"nslookup", "test.example"})
assert.NilError(t, err)
assert.Check(t, is.Equal(res.ExitCode, 0))
assert.Check(t, is.Contains(res.Stdout(), dnsRespAddr))
}

View File

@@ -117,6 +117,49 @@ func TestPluginInstall(t *testing.T) {
assert.NilError(t, err)
})
t.Run("with digest", func(t *testing.T) {
defer setupTest(t)()
reg := registry.NewV2(t)
defer reg.Close()
name := "test-" + strings.ToLower(t.Name())
repo := path.Join(registry.DefaultURL, name+":latest")
err := plugin.Create(ctx, client, repo)
assert.NilError(t, err)
rdr, err := client.PluginPush(ctx, repo, "")
assert.NilError(t, err)
defer rdr.Close()
buf := &strings.Builder{}
assert.NilError(t, err)
var digest string
assert.NilError(t, jsonmessage.DisplayJSONMessagesStream(rdr, buf, 0, false, func(j jsonmessage.JSONMessage) {
if j.Aux != nil {
var r types.PushResult
assert.NilError(t, json.Unmarshal(*j.Aux, &r))
digest = r.Digest
}
}), buf)
err = client.PluginRemove(ctx, repo, types.PluginRemoveOptions{Force: true})
assert.NilError(t, err)
rdr, err = client.PluginInstall(ctx, repo, types.PluginInstallOptions{
Disabled: true,
RemoteRef: repo + "@" + digest,
})
assert.NilError(t, err)
defer rdr.Close()
_, err = io.Copy(io.Discard, rdr)
assert.NilError(t, err)
_, _, err = client.PluginInspectWithRaw(ctx, repo)
assert.NilError(t, err)
})
t.Run("with htpasswd", func(t *testing.T) {
defer setupTest(t)()

View File

@@ -186,3 +186,27 @@ func (sb *sandbox) getGatewayEndpoint() *endpoint {
}
return nil
}
// hasExternalConnectivity returns true if the sandbox is connected to any
// endpoint which provides external connectivity.
//
// This function is only necessary on branches without
// https://github.com/moby/moby/pull/46603. With that PR applied, this function
// would be equivalent to sb.getGatewayEndpoint() != nil.
func (sb *sandbox) hasExternalConnectivity() bool {
for _, ep := range sb.getConnectedEndpoints() {
n := ep.getNetwork()
switch n.Type() {
case "null", "host":
continue
case "bridge":
if n.Internal() {
continue
}
}
if len(ep.Gateway()) != 0 {
return true
}
}
return false
}

View File

@@ -364,7 +364,7 @@ func setINC(version iptables.IPVersion, iface string, enable bool) error {
logrus.Warnf("Failed to rollback iptables rule after failure (%v): %v", err, err2)
}
}
return fmt.Errorf(msg)
return errors.New(msg)
}
logrus.Warn(msg)
}

View File

@@ -544,8 +544,12 @@ func (ep *endpoint) sbJoin(sb *sandbox, options ...EndpointOption) (err error) {
return sb.setupDefaultGW()
}
moveExtConn := sb.getGatewayEndpoint() != extEp
// Enable upstream forwarding if the sandbox gained external connectivity.
if sb.resolver != nil {
sb.resolver.SetForwardingPolicy(sb.hasExternalConnectivity())
}
moveExtConn := sb.getGatewayEndpoint() != extEp
if moveExtConn {
if extEp != nil {
logrus.Debugf("Revoking external connectivity on endpoint %s (%s)", extEp.Name(), extEp.ID())
@@ -777,6 +781,11 @@ func (ep *endpoint) sbLeave(sb *sandbox, force bool, options ...EndpointOption)
return sb.setupDefaultGW()
}
// Disable upstream forwarding if the sandbox lost external connectivity.
if sb.resolver != nil {
sb.resolver.SetForwardingPolicy(sb.hasExternalConnectivity())
}
// New endpoint providing external connectivity for the sandbox
extEp = sb.getGatewayEndpoint()
if moveExtConn && extEp != nil {

View File

@@ -11,7 +11,6 @@ import (
"net/http/httptest"
"os"
"path/filepath"
"runtime"
"testing"
"github.com/docker/docker/libnetwork"
@@ -31,10 +30,6 @@ import (
var controller libnetwork.NetworkController
func TestMain(m *testing.M) {
if runtime.GOOS == "windows" {
logrus.Info("Test suite does not currently support windows")
os.Exit(0)
}
if reexec.Init() {
return
}

View File

@@ -241,7 +241,7 @@ func TestNetworkDBJoinLeaveNetworks(t *testing.T) {
closeNetworkDBInstances(t, dbs)
}
func TestNetworkDBCRUDTableEntry(t *testing.T) {
func TestFlakyNetworkDBCRUDTableEntry(t *testing.T) {
dbs := createNetworkDBInstances(t, 3, "node", DefaultConfig())
err := dbs[0].JoinNetwork("network1")
@@ -271,7 +271,7 @@ func TestNetworkDBCRUDTableEntry(t *testing.T) {
closeNetworkDBInstances(t, dbs)
}
func TestNetworkDBCRUDTableEntries(t *testing.T) {
func TestFlakyNetworkDBCRUDTableEntries(t *testing.T) {
dbs := createNetworkDBInstances(t, 2, "node", DefaultConfig())
err := dbs[0].JoinNetwork("network1")
@@ -341,7 +341,7 @@ func TestNetworkDBCRUDTableEntries(t *testing.T) {
closeNetworkDBInstances(t, dbs)
}
func TestNetworkDBNodeLeave(t *testing.T) {
func TestFlakyNetworkDBNodeLeave(t *testing.T) {
dbs := createNetworkDBInstances(t, 2, "node", DefaultConfig())
err := dbs[0].JoinNetwork("network1")
@@ -835,7 +835,7 @@ func TestParallelDelete(t *testing.T) {
closeNetworkDBInstances(t, dbs)
}
func TestNetworkDBIslands(t *testing.T) {
func TestFlakyNetworkDBIslands(t *testing.T) {
pollTimeout := func() time.Duration {
const defaultTimeout = 120 * time.Second
dl, ok := t.Deadline()

View File

@@ -6,6 +6,7 @@ import (
"net"
"strings"
"sync"
"sync/atomic"
"time"
"github.com/docker/docker/libnetwork/types"
@@ -30,6 +31,9 @@ type Resolver interface {
// SetExtServers configures the external nameservers the resolver
// should use to forward queries
SetExtServers([]extDNSEntry)
// SetForwardingPolicy re-configures the embedded DNS resolver to either
// enable or disable forwarding DNS queries to external servers.
SetForwardingPolicy(policy bool)
// ResolverOptions returns resolv.conf options that should be set
ResolverOptions() []string
}
@@ -89,21 +93,23 @@ type resolver struct {
tStamp time.Time
queryLock sync.Mutex
listenAddress string
proxyDNS bool
proxyDNS atomic.Bool
resolverKey string
startCh chan struct{}
}
// NewResolver creates a new instance of the Resolver
func NewResolver(address string, proxyDNS bool, resolverKey string, backend DNSBackend) Resolver {
return &resolver{
r := &resolver{
backend: backend,
proxyDNS: proxyDNS,
listenAddress: address,
resolverKey: resolverKey,
err: fmt.Errorf("setup not done yet"),
startCh: make(chan struct{}, 1),
}
r.proxyDNS.Store(proxyDNS)
return r
}
func (r *resolver) SetupFunc(port int) func() {
@@ -196,6 +202,10 @@ func (r *resolver) SetExtServers(extDNS []extDNSEntry) {
}
}
func (r *resolver) SetForwardingPolicy(policy bool) {
r.proxyDNS.Store(policy)
}
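
The switch from a plain bool to `atomic.Bool` is what lets `SetForwardingPolicy` flip the flag while concurrent `ServeDNS` goroutines read it. A minimal sketch of that pattern in isolation (the `toyResolver` type is an invented stand-in):

```
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

type toyResolver struct {
	proxyDNS atomic.Bool
}

func (r *toyResolver) SetForwardingPolicy(policy bool) { r.proxyDNS.Store(policy) }
func (r *toyResolver) shouldForward() bool             { return r.proxyDNS.Load() }

func main() {
	r := &toyResolver{}
	r.proxyDNS.Store(true) // initial value, as NewResolver stores it

	var wg sync.WaitGroup
	wg.Add(2)
	go func() { defer wg.Done(); r.SetForwardingPolicy(false) }() // sandbox lost external connectivity
	go func() { defer wg.Done(); _ = r.shouldForward() }()        // concurrent query path
	wg.Wait()
	fmt.Println(r.shouldForward()) // false
}
```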
func (r *resolver) NameServer() string {
return r.listenAddress
}
@@ -407,7 +417,7 @@ func (r *resolver) ServeDNS(w dns.ResponseWriter, query *dns.Msg) {
if resp == nil {
// If the backend doesn't support proxying dns request
// fail the response
if !r.proxyDNS {
if !r.proxyDNS.Load() {
resp = new(dns.Msg)
resp.SetRcode(query, dns.RcodeServerFailure)
if err := w.WriteMsg(resp); err != nil {

View File

@@ -27,7 +27,11 @@ const (
func (sb *sandbox) startResolver(restore bool) {
sb.resolverOnce.Do(func() {
var err error
sb.resolver = NewResolver(resolverIPSandbox, true, sb.Key(), sb)
// The resolver is started with proxyDNS=false if the sandbox does not currently
// have a gateway. So, if the Sandbox is only connected to an 'internal' network,
// it will not forward DNS requests to external resolvers. The resolver's
// proxyDNS setting is then updated as network Endpoints are added/removed.
sb.resolver = NewResolver(resolverIPSandbox, sb.hasExternalConnectivity(), sb.Key(), sb)
defer func() {
if err != nil {
sb.resolver = nil

View File

@@ -5,6 +5,7 @@ package libnetwork
import (
"encoding/json"
"errors"
"flag"
"fmt"
"io"
@@ -107,7 +108,7 @@ func processReturn(r io.Reader) error {
return fmt.Errorf("failed to read buf in processReturn : %v", err)
}
if string(buf[0:n]) != success {
return fmt.Errorf(string(buf[0:n]))
return errors.New(string(buf[0:n]))
}
return nil
}

View File

@@ -51,7 +51,7 @@ func (overlayWhiteoutConverter) ConvertWrite(hdr *tar.Header, path string, fi os
wo = &tar.Header{
Typeflag: tar.TypeReg,
Mode: hdr.Mode & int64(os.ModePerm),
Name: filepath.Join(hdr.Name, WhiteoutOpaqueDir),
Name: filepath.Join(hdr.Name, WhiteoutOpaqueDir), // #nosec G305 -- An archive is being created, not extracted.
Size: 0,
Uid: hdr.Uid,
Uname: hdr.Uname,
@@ -59,7 +59,7 @@ func (overlayWhiteoutConverter) ConvertWrite(hdr *tar.Header, path string, fi os
Gname: hdr.Gname,
AccessTime: hdr.AccessTime,
ChangeTime: hdr.ChangeTime,
} //#nosec G305 -- An archive is being created, not extracted.
}
}
}

View File

@@ -7,6 +7,8 @@ import (
"io"
"mime"
"net/http"
"net/url"
"regexp"
"strings"
"github.com/docker/docker/pkg/ioutils"
@@ -52,10 +54,23 @@ type Ctx struct {
authReq *Request
}
func isChunked(r *http.Request) bool {
// RFC 7230 specifies that content length is to be ignored if Transfer-Encoding is chunked
if strings.EqualFold(r.Header.Get("Transfer-Encoding"), "chunked") {
return true
}
for _, v := range r.TransferEncoding {
if strings.EqualFold(v, "chunked") {
return true
}
}
return false
}
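
A minimal sketch of what this check sees for a plain request versus a chunked one, reusing the `isChunked` helper as written above (the request URL is illustrative):

```
package main

import (
	"fmt"
	"net/http"
	"strings"
)

func isChunked(r *http.Request) bool {
	if strings.EqualFold(r.Header.Get("Transfer-Encoding"), "chunked") {
		return true
	}
	for _, v := range r.TransferEncoding {
		if strings.EqualFold(v, "chunked") {
			return true
		}
	}
	return false
}

func main() {
	req, _ := http.NewRequest(http.MethodPost, "http://example.invalid/v1.43/containers/create", strings.NewReader("{}"))
	fmt.Println(isChunked(req)) // false: plain body with a known length

	req.TransferEncoding = []string{"chunked"}
	req.ContentLength = -1 // unknown length, as for a chunked body
	fmt.Println(isChunked(req)) // true
}
```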
// AuthZRequest authorized the request to the docker daemon using authZ plugins
func (ctx *Ctx) AuthZRequest(w http.ResponseWriter, r *http.Request) error {
var body []byte
if sendBody(ctx.requestURI, r.Header) && r.ContentLength > 0 && r.ContentLength < maxBodySize {
if sendBody(ctx.requestURI, r.Header) && (r.ContentLength > 0 || isChunked(r)) && r.ContentLength < maxBodySize {
var err error
body, r.Body, err = drainBody(r.Body)
if err != nil {
@@ -108,7 +123,6 @@ func (ctx *Ctx) AuthZResponse(rm ResponseModifier, r *http.Request) error {
if sendBody(ctx.requestURI, rm.Header()) {
ctx.authReq.ResponseBody = rm.RawBody()
}
for _, plugin := range ctx.plugins {
logrus.Debugf("AuthZ response using plugin %s", plugin.Name())
@@ -146,10 +160,26 @@ func drainBody(body io.ReadCloser) ([]byte, io.ReadCloser, error) {
return nil, newBody, err
}
func isAuthEndpoint(urlPath string) (bool, error) {
// eg www.test.com/v1.24/auth/optional?optional1=something&optional2=something (version optional)
matched, err := regexp.MatchString(`^[^\/]*\/(v\d[\d\.]*\/)?auth.*`, urlPath)
if err != nil {
return false, err
}
return matched, nil
}
// sendBody returns true when request/response body should be sent to AuthZPlugin
func sendBody(url string, header http.Header) bool {
func sendBody(inURL string, header http.Header) bool {
u, err := url.Parse(inURL)
// Assume no if the URL cannot be parsed - an empty request will still be forwarded to the plugin and should be rejected
if err != nil {
return false
}
// Skip body for auth endpoint
if strings.HasSuffix(url, "/auth") {
isAuth, err := isAuthEndpoint(u.Path)
if isAuth || err != nil {
return false
}

View File

@@ -175,8 +175,8 @@ func TestDrainBody(t *testing.T) {
func TestSendBody(t *testing.T) {
var (
url = "nothing.com"
testcases = []struct {
url string
contentType string
expected bool
}{
@@ -220,15 +220,93 @@ func TestSendBody(t *testing.T) {
contentType: "",
expected: false,
},
{
url: "nothing.com/auth",
contentType: "",
expected: false,
},
{
url: "nothing.com/auth",
contentType: "application/json;charset=UTF8",
expected: false,
},
{
url: "nothing.com/auth?p1=test",
contentType: "application/json;charset=UTF8",
expected: false,
},
{
url: "nothing.com/test?p1=/auth",
contentType: "application/json;charset=UTF8",
expected: true,
},
{
url: "nothing.com/something/auth",
contentType: "application/json;charset=UTF8",
expected: true,
},
{
url: "nothing.com/auth/test",
contentType: "application/json;charset=UTF8",
expected: false,
},
{
url: "nothing.com/v1.24/auth/test",
contentType: "application/json;charset=UTF8",
expected: false,
},
{
url: "nothing.com/v1/auth/test",
contentType: "application/json;charset=UTF8",
expected: false,
},
{
url: "www.nothing.com/v1.24/auth/test",
contentType: "application/json;charset=UTF8",
expected: false,
},
{
url: "https://www.nothing.com/v1.24/auth/test",
contentType: "application/json;charset=UTF8",
expected: false,
},
{
url: "http://nothing.com/v1.24/auth/test",
contentType: "application/json;charset=UTF8",
expected: false,
},
{
url: "www.nothing.com/test?p1=/auth",
contentType: "application/json;charset=UTF8",
expected: true,
},
{
url: "http://www.nothing.com/test?p1=/auth",
contentType: "application/json;charset=UTF8",
expected: true,
},
{
url: "www.nothing.com/something/auth",
contentType: "application/json;charset=UTF8",
expected: true,
},
{
url: "https://www.nothing.com/something/auth",
contentType: "application/json;charset=UTF8",
expected: true,
},
}
)
for _, testcase := range testcases {
header := http.Header{}
header.Set("Content-Type", testcase.contentType)
if testcase.url == "" {
testcase.url = "nothing.com"
}
if b := sendBody(url, header); b != testcase.expected {
t.Fatalf("Unexpected Content-Type; Expected: %t, Actual: %t", testcase.expected, b)
if b := sendBody(testcase.url, header); b != testcase.expected {
t.Fatalf("sendBody failed: url: %s, content-type: %s; Expected: %t, Actual: %t", testcase.url, testcase.contentType, testcase.expected, b)
}
}
}

View File

@@ -150,7 +150,7 @@ func dmTaskGetDepsFct(task *cdmTask) *Deps {
// golang issue: https://github.com/golang/go/issues/11925
var devices []C.uint64_t
devicesHdr := (*reflect.SliceHeader)(unsafe.Pointer(&devices))
devicesHdr := (*reflect.SliceHeader)(unsafe.Pointer(&devices)) //nolint:staticcheck // ignore SA1019
devicesHdr.Data = uintptr(unsafe.Pointer(uintptr(unsafe.Pointer(Cdeps)) + unsafe.Sizeof(*Cdeps)))
devicesHdr.Len = int(Cdeps.count)
devicesHdr.Cap = int(Cdeps.count)

View File

@@ -3,11 +3,15 @@ package ioutils // import "github.com/docker/docker/pkg/ioutils"
import (
"context"
"io"
"runtime/debug"
"sync/atomic"
// make sure crypto.SHA256, crypto.sha512 and crypto.SHA384 are registered
// TODO remove once https://github.com/opencontainers/go-digest/pull/64 is merged.
_ "crypto/sha256"
_ "crypto/sha512"
"github.com/sirupsen/logrus"
)
// ReadCloserWrapper wraps an io.Reader, and implements an io.ReadCloser
@@ -16,10 +20,15 @@ import (
type ReadCloserWrapper struct {
io.Reader
closer func() error
closed atomic.Bool
}
// Close calls back the passed closer function
func (r *ReadCloserWrapper) Close() error {
if !r.closed.CompareAndSwap(false, true) {
subsequentCloseWarn("ReadCloserWrapper")
return nil
}
return r.closer()
}
@@ -87,6 +96,7 @@ type cancelReadCloser struct {
cancel func()
pR *io.PipeReader // Stream to read from
pW *io.PipeWriter
closed atomic.Bool
}
// NewCancelReadCloser creates a wrapper that closes the ReadCloser when the
@@ -146,6 +156,17 @@ func (p *cancelReadCloser) closeWithError(err error) {
// Close closes the wrapper and its underlying reader. It will cause
// future calls to Read to return io.EOF.
func (p *cancelReadCloser) Close() error {
if !p.closed.CompareAndSwap(false, true) {
subsequentCloseWarn("cancelReadCloser")
return nil
}
p.closeWithError(io.EOF)
return nil
}
func subsequentCloseWarn(name string) {
logrus.Error("subsequent attempt to close " + name)
if logrus.GetLevel() >= logrus.DebugLevel {
logrus.Errorf("stack trace: %s", string(debug.Stack()))
}
}
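
The additions above guard Close with an atomic.Bool so that only the first call runs the wrapped closer, while later calls just log a warning instead of double-closing. A self-contained sketch of that pattern is shown below (hypothetical names, plain fmt instead of logrus); the same guard is applied to the write-side wrapper in the next file.

```
package main

import (
	"fmt"
	"sync/atomic"
)

// idempotentCloser is a stand-alone sketch of the guard added above:
// CompareAndSwap ensures the wrapped closer runs exactly once, and
// subsequent Close calls only emit a warning.
type idempotentCloser struct {
	closed atomic.Bool
	close  func() error
}

func (c *idempotentCloser) Close() error {
	if !c.closed.CompareAndSwap(false, true) {
		fmt.Println("warning: subsequent Close ignored")
		return nil
	}
	return c.close()
}

func main() {
	c := &idempotentCloser{close: func() error {
		fmt.Println("resources released")
		return nil
	}}
	_ = c.Close() // releases resources
	_ = c.Close() // no-op: only warns
}
```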

View File

@@ -1,6 +1,9 @@
package ioutils // import "github.com/docker/docker/pkg/ioutils"
import "io"
import (
"io"
"sync/atomic"
)
// NopWriter represents a type whose write operation is a no-op.
type NopWriter struct{}
@@ -29,9 +32,14 @@ func (f *NopFlusher) Flush() {}
type writeCloserWrapper struct {
io.Writer
closer func() error
closed atomic.Bool
}
func (r *writeCloserWrapper) Close() error {
if !r.closed.CompareAndSwap(false, true) {
subsequentCloseWarn("WriteCloserWrapper")
return nil
}
return r.closer()
}

View File

@@ -200,8 +200,13 @@ func withFetchProgress(cs content.Store, out progress.Output, ref reference.Name
switch desc.MediaType {
case specs.MediaTypeImageManifest, images.MediaTypeDockerSchema2Manifest:
tn := reference.TagNameOnly(ref)
tagged := tn.(reference.Tagged)
progress.Messagef(out, tagged.Tag(), "Pulling from %s", reference.FamiliarName(ref))
var tagOrDigest string
if tagged, ok := tn.(reference.Tagged); ok {
tagOrDigest = tagged.Tag()
} else {
tagOrDigest = tn.String()
}
progress.Messagef(out, tagOrDigest, "Pulling from %s", reference.FamiliarName(ref))
progress.Messagef(out, "", "Digest: %s", desc.Digest.String())
return nil, nil
case
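
The fix above replaces an unchecked type assertion: reference.TagNameOnly only yields a reference.Tagged value when the reference actually carries a tag, so digest-only references previously panicked here. A hedged sketch of the safe pattern follows; it uses the standalone github.com/distribution/reference module path, whereas this branch vendors the equivalent github.com/docker/distribution/reference package, and the example references are purely illustrative.

```
package main

import (
	"fmt"

	"github.com/distribution/reference"
)

// tagOrDigest mirrors the fix above: use the tag when the reference has
// one, otherwise fall back to the full reference string rather than
// panicking on a failed type assertion (digest-only references do not
// implement reference.Tagged).
func tagOrDigest(ref reference.Named) string {
	tn := reference.TagNameOnly(ref)
	if tagged, ok := tn.(reference.Tagged); ok {
		return tagged.Tag()
	}
	return tn.String()
}

func main() {
	byTag, _ := reference.ParseNormalizedNamed("alpine:3.19")
	byDigest, _ := reference.ParseNormalizedNamed(
		"alpine@sha256:4bcff63911fcb4448bd4fdacec207030997caf25e9bea4045fa6c8c44de311d1")
	fmt.Println(tagOrDigest(byTag))    // "3.19"
	fmt.Println(tagOrDigest(byDigest)) // full name including the digest
}
```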

View File

@@ -27,6 +27,10 @@ profile {{.Name}} flags=(attach_disconnected,mediate_deleted) {
{{if ge .Version 208096}}
# Host (privileged) processes may send signals to container processes.
signal (receive) peer=unconfined,
# runc may send signals to container processes (for "docker stop").
signal (receive) peer=runc,
# crun may send signals to container processes (for "docker stop" when used with crun OCI runtime).
signal (receive) peer=crun,
# dockerd may send signals to container processes (for "docker kill").
signal (receive) peer={{.DaemonProfile}},
# Container processes may send signals amongst themselves.

View File

@@ -64,6 +64,7 @@
"alarm",
"bind",
"brk",
"cachestat",
"capget",
"capset",
"chdir",
@@ -109,6 +110,7 @@
"fchdir",
"fchmod",
"fchmodat",
"fchmodat2",
"fchown",
"fchown32",
"fchownat",
@@ -130,8 +132,11 @@
"ftruncate",
"ftruncate64",
"futex",
"futex_requeue",
"futex_time64",
"futex_wait",
"futex_waitv",
"futex_wake",
"futimesat",
"getcpu",
"getcwd",
@@ -206,6 +211,7 @@
"lstat",
"lstat64",
"madvise",
"map_shadow_stack",
"membarrier",
"memfd_create",
"memfd_secret",
@@ -783,7 +789,8 @@
"names": [
"get_mempolicy",
"mbind",
"set_mempolicy"
"set_mempolicy",
"set_mempolicy_home_node"
],
"action": "SCMP_ACT_ALLOW",
"includes": {

View File

@@ -56,6 +56,7 @@ func DefaultProfile() *Seccomp {
"alarm",
"bind",
"brk",
"cachestat", // kernel v6.5, libseccomp v2.5.5
"capget",
"capset",
"chdir",
@@ -101,6 +102,7 @@ func DefaultProfile() *Seccomp {
"fchdir",
"fchmod",
"fchmodat",
"fchmodat2", // kernel v6.6, libseccomp v2.5.5
"fchown",
"fchown32",
"fchownat",
@@ -122,8 +124,11 @@ func DefaultProfile() *Seccomp {
"ftruncate",
"ftruncate64",
"futex",
"futex_requeue", // kernel v6.7, libseccomp v2.5.5
"futex_time64",
"futex_wait", // kernel v6.7, libseccomp v2.5.5
"futex_waitv",
"futex_wake", // kernel v6.7, libseccomp v2.5.5
"futimesat",
"getcpu",
"getcwd",
@@ -198,6 +203,7 @@ func DefaultProfile() *Seccomp {
"lstat",
"lstat64",
"madvise",
"map_shadow_stack", // kernel v6.6, libseccomp v2.5.5
"membarrier",
"memfd_create",
"memfd_secret",
@@ -771,6 +777,7 @@ func DefaultProfile() *Seccomp {
"get_mempolicy",
"mbind",
"set_mempolicy",
"set_mempolicy_home_node", // kernel v5.17, libseccomp v2.5.4
},
Action: specs.ActAllow,
},
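
The inline comments record the kernel and libseccomp versions each new syscall name requires; on hosts with an older libseccomp the names simply cannot be resolved to syscall numbers. The sketch below shows one way to check that locally using github.com/seccomp/libseccomp-golang; this is an illustration only, not part of this diff, and it needs cgo plus the libseccomp headers to build.

```
package main

import (
	"fmt"

	seccomp "github.com/seccomp/libseccomp-golang"
)

// Print whether the host's libseccomp can resolve the syscall names added
// in the profile above. Unresolvable names indicate a libseccomp older
// than the versions noted in the comments (v2.5.4/v2.5.5).
func main() {
	for _, name := range []string{"cachestat", "fchmodat2", "futex_wait", "set_mempolicy_home_node"} {
		if _, err := seccomp.GetSyscallFromName(name); err != nil {
			fmt.Printf("%-24s not known to this libseccomp: %v\n", name, err)
			continue
		}
		fmt.Printf("%-24s resolvable on this host\n", name)
	}
}
```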

View File

@@ -39,6 +39,7 @@ var nativeToSeccomp = map[string]specs.Arch{
"ppc": specs.ArchPPC,
"ppc64": specs.ArchPPC64,
"ppc64le": specs.ArchPPC64LE,
"riscv64": specs.ArchRISCV64,
"s390": specs.ArchS390,
"s390x": specs.ArchS390X,
}
@@ -57,6 +58,7 @@ var goToNative = map[string]string{
"ppc": "ppc",
"ppc64": "ppc64",
"ppc64le": "ppc64le",
"riscv64": "riscv64",
"s390": "s390",
"s390x": "s390x",
}

View File

@@ -11,8 +11,6 @@ import (
"golang.org/x/sys/unix"
)
const imageSize = 64 * 1024 * 1024
// CanTestQuota - checks if xfs prjquota can be tested
// returns a reason if not
func CanTestQuota() (string, bool) {
@@ -29,6 +27,12 @@ func CanTestQuota() (string, bool) {
// PrepareQuotaTestImage - prepares an xfs prjquota test image
// returns the path to the image on success
func PrepareQuotaTestImage(t *testing.T) (string, error) {
// imageSize is the size of the test-image. The minimum size allowed
// is 300MB.
//
// See https://git.kernel.org/pub/scm/fs/xfs/xfsprogs-dev.git/commit/?id=6e0ed3d19c54603f0f7d628ea04b550151d8a262
const imageSize = 300 * 1024 * 1024
mkfs, err := exec.LookPath("mkfs.xfs")
if err != nil {
return "", err

Some files were not shown because too many files have changed in this diff.