Compare commits

157 Commits

Author SHA1 Message Date
Andrew Hsu
705d9623b7 Merge pull request #299 from thaJeztah/19.03_backport_scrub
[19.03 backport] DebugRequestMiddleware: unconditionally scrub data field
2019-07-17 09:10:51 -07:00
Sebastiaan van Stijn
c687381870 DebugRequestMiddleware: Remove path handling
Path-specific rules were removed, so this is no longer used.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 530e63c1a61b105a6f7fc143c5acb9b5cd87f958)
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit f8a0f26843)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-07-17 17:33:29 +02:00
Sebastiaan van Stijn
1eadbf1bd0 DebugRequestMiddleware: unconditionally scrub data field
Commit 77b8465d7e added a secret update
endpoint to allow updating labels on existing secrets. However, when
implementing the endpoint, the DebugRequestMiddleware was not updated
to scrub the Data field (as is being done when creating a secret).

When updating a secret (to set labels), the Data field should be either
`nil` (not set), or contain the same value as the existing secret. In
situations where the Data field is set, and the `dockerd` daemon is
running with debugging enabled / log-level debug, the base64-encoded
value of the secret is printed to the daemon logs.

The docker cli does not have a `docker secret update` command, but
when using `docker stack deploy`, the docker cli sends the secret
data both when _creating_ a stack, and when _updating_ a stack, thus
leaking the secret data if the daemon runs with debug enabled:

1. Start the daemon in debug-mode

        dockerd --debug

2. Initialize swarm

        docker swarm init

3. Create a file containing a secret

        echo secret > my_secret.txt

4. Create a docker-compose file using that secret

        cat > docker-compose.yml <<'EOF'
        version: "3.3"
        services:
          web:
            image: nginx:alpine
            secrets:
              - my_secret
        secrets:
          my_secret:
            file: ./my_secret.txt
        EOF

5. Deploy the stack

        docker stack deploy -c docker-compose.yml test

6. Verify that the secret is scrubbed in the daemon logs

        DEBU[2019-07-01T22:36:08.170617400Z] Calling POST /v1.30/secrets/create
        DEBU[2019-07-01T22:36:08.171364900Z] form data: {"Data":"*****","Labels":{"com.docker.stack.namespace":"test"},"Name":"test_my_secret"}

7. Re-deploy the stack to trigger an "update"

        docker stack deploy -c docker-compose.yml test

8. Notice that this time, the Data field is not scrubbed, and the base64-encoded secret is logged

        DEBU[2019-07-01T22:37:35.828819400Z] Calling POST /v1.30/secrets/w3hgvwpzl8yooq5ctnyp71v52/update?version=34
        DEBU[2019-07-01T22:37:35.829993700Z] form data: {"Data":"c2VjcmV0Cg==","Labels":{"com.docker.stack.namespace":"test"},"Name":"test_my_secret"}

This patch modifies `maskSecretKeys` to unconditionally scrub `Data` fields.
Currently, only the `secrets` and `configs` endpoints use a field with this
name, and no other POST API endpoints use a data field, so scrubbing this
field unconditionally will only scrub requests for those endpoints.

If a new endpoint is added in future where this field should not be scrubbed,
we can re-introduce more fine-grained (path-specific) handling.

This patch introduces some change in behavior:

- In addition to secrets, requests to create or update _configs_ will
  now have their `Data` field scrubbed. Generally, the actual data should
  not be interesting for debugging, so likely will not be problematic.
  In addition, scrubbing this data for configs may actually be desirable,
  because (even though they are not explicitly designed for this purpose)
  configs may contain sensitive data (credentials inside a configuration
  file, e.g.).
- Requests that send key/value pairs as a "map" and that contain a
  key named "data" will see the value of that field scrubbed. This
  means that (e.g.) setting a `label` named `data` on a config will
  scrub/mask the value of that label.
- Note that this is already the case for any label named `jointoken`,
  `password`, `secret`, `signingcakey`, or `unlockkey`.
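
As a rough Go sketch of the behavior described above (illustrative, not the verbatim DebugRequestMiddleware code), the masking recurses through the decoded form data and compares keys case-insensitively:

```go
package middleware

import "strings"

// maskSecretKeys replaces the values of sensitive keys in a decoded
// JSON body with "*****". The "data" key is scrubbed unconditionally,
// i.e. without any path-specific rules.
func maskSecretKeys(inp interface{}) {
	if arr, ok := inp.([]interface{}); ok {
		for _, f := range arr {
			maskSecretKeys(f)
		}
		return
	}
	if form, ok := inp.(map[string]interface{}); ok {
		sensitive := []string{"data", "jointoken", "password", "secret", "signingcakey", "unlockkey"}
		for k := range form {
			for _, s := range sensitive {
				if strings.EqualFold(k, s) {
					form[k] = "*****"
					break
				}
			}
			maskSecretKeys(form[k]) // recurse into nested maps/arrays
		}
	}
}
```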

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit c7ce4be93ae8edd2da62a588e01c67313a4aba0c)
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 73db8c77bf)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-07-17 17:33:27 +02:00
Sebastiaan van Stijn
685f13f3fd TestMaskSecretKeys: use subtests
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 32d70c7e21631224674cd60021d3ec908c2d888c)
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit ebb542b3f8)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-07-17 17:33:25 +02:00
Sebastiaan van Stijn
638cf86cbe TestMaskSecretKeys: add more test-cases
Add tests for

- case-insensitive matching of fields
- recursive masking

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit db5f811216e70bcb4a10e477c1558d6c68f618c5)
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 18dac2cf32)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-07-17 17:33:22 +02:00
Andrew Hsu
a69cd8239f Merge pull request #287 from thaJeztah/19.03_backport_testForIpvlan
[19.03 backport] For ipvlan tests check that the ipvlan module is enabled
2019-06-25 11:48:57 -07:00
Kir Kolyshkin
8a2f96096a For ipvlan tests check that the ipvlan module is enabled (instead of just ensuring the kernel version is greater than 4.2)
Co-Authored-By: Jim Ehrismann <jim-docker@users.noreply.github.com>
Co-Authored-By: Sebastiaan van Stijn <thaJeztah@users.noreply.github.com>
Signed-off-by: Jim Ehrismann <jim.ehrismann@docker.com>
(cherry picked from commit a77e147d32)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-06-25 00:40:32 +02:00
Andrew Hsu
b07f53d0a4 Merge pull request #284 from tiborvass/19.03-revert-remove-legacy-registry
[19.03] Keep but deprecate registry v2 schema1 logic and revert to libtrust-key-based engine ID
2019-06-18 14:30:11 -07:00
Tibor Vass
e61e107040 validate: temporarily disable deprecate-integration-cli as part of a revert
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 3f1cdd5364)
Signed-off-by: Tibor Vass <tibor@docker.com>
2019-06-18 18:55:04 +00:00
Tibor Vass
023166b530 Add deprecation message for schema1
This will add a warning log in the daemon, and will send the message
to be displayed by the CLI.

Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit d35f8f4329)
Signed-off-by: Tibor Vass <tibor@docker.com>
2019-06-18 18:55:02 +00:00
Tibor Vass
884c9e268f Add test for keeping same daemon ID on upgrade
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit f923321aae)
Signed-off-by: Tibor Vass <tibor@docker.com>
2019-06-18 18:55:00 +00:00
Tibor Vass
99678a93ed Remove v1 manifest code
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 53dad9f027)
Signed-off-by: Tibor Vass <tibor@docker.com>
2019-06-18 18:54:59 +00:00
Tibor Vass
99cd23cefd Revert "Remove the rest of v1 manifest support"
This reverts commit 98fc09128b in order to
keep registry v2 schema1 handling and libtrust-key-based engine ID.

Because registry v2 schema1 was not officially deprecated and
registries are still relying on it, this patch puts its logic back.

However, registry v1 relics are not added back since v1 logic has been
removed a while ago.

This also fixes an engine upgrade issue in a swarm cluster. It was relying
on the Engine ID to be the same upon upgrade, but the mentioned commit
modified the logic to use UUID and from a different file.

Since the libtrust key is always needed to support v2 schema1 pushes,
the old engine ID is based on the libtrust key, and the engine ID
needs to be preserved across upgrades, adding UUID-based engine ID
logic adds more complexity than it solves.

Hence reverting the engine ID changes as well.

Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit f695e98cb7)
Signed-off-by: Tibor Vass <tibor@docker.com>
2019-06-18 18:54:57 +00:00
Tibor Vass
4d3dfd24ec use gotest.tools assertions in docker_cli_push_test.go
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 0811297608)
Signed-off-by: Tibor Vass <tibor@docker.com>
2019-06-18 18:54:55 +00:00
Tibor Vass
21ae66c664 Revert "Remove Schema1 integration test suite"
This reverts commit 13b7d11be1.

Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit f23a51a860)
Signed-off-by: Tibor Vass <tibor@docker.com>
2019-06-18 18:54:45 +00:00
Andrew Hsu
da6dddcd04 Merge pull request #279 from thaJeztah/19.03_backport_attach_to_existing_network_error
[19.03 backport] Handle the error case when a container reattaches to the same network
2019-06-18 10:30:28 -07:00
Andrew Hsu
42757e8794 Merge pull request #277 from thaJeztah/19.03_backport_enable_new_integration_tests_for_win
[19.03 backport] Enable integrations API tests for Windows CI
2019-06-17 12:05:26 -07:00
Andrew Hsu
3452f743ab Merge pull request #280 from tiborvass/19.03-chroot-tar-untar-and-cp-slash-fix
[19.03] Add chroot back to Tar/Untar without the previously introduced regression
2019-06-17 12:04:34 -07:00
Andrew Hsu
b9cd7b59b6 Merge pull request #261 from kolyshkin/19.03-aufs-lock
[19.03 backport ENGCORE-831] aufs optimizations #39107
2019-06-17 12:02:48 -07:00
Tibor Vass
8f4b96f19e integration: have container.Create call compile
For reference on why this is needed:
https://github.com/docker/engine/pull/280#issuecomment-502056661

Signed-off-by: Tibor Vass <tibor@docker.com>
2019-06-14 18:26:12 +00:00
Tibor Vass
186afe3ce3 pkg/archive: keep walkRoot clean if source is /
Previously, getWalkRoot("/", "foo") would return "//foo"
Now it returns "/foo"
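
A minimal sketch of such a helper, assuming Unix separators (the real helper is platform-specific):

```go
package archive

import (
	"path/filepath"
	"strings"
)

// getWalkRoot joins srcPath and include without producing a double
// separator when srcPath is the root directory "/".
func getWalkRoot(srcPath, include string) string {
	sep := string(filepath.Separator)
	return strings.TrimSuffix(srcPath, sep) + sep + include
}
```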

Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 7410f1a859)
Signed-off-by: Tibor Vass <tibor@docker.com>
2019-06-14 04:02:08 +00:00
Tibor Vass
a0063c534a daemon: fix docker cp when container source is /
Before 7a7357da, archive.TarResourceRebase was being used to copy files
and folders from the container. That function splits the source path
into a dirname + basename pair to support copying a file:
if you wanted to tar `dir/file` it would tar from `dir` the file `file`
(as part of the IncludedFiles option).

However, that path splitting logic was kept for folders as well, which
resulted in weird inputs to archive.TarWithOptions:
if you wanted to tar `dir1/dir2` it would tar from `dir1` the directory
`dir2` (as part of IncludedFiles option).

Although it was weird, it worked fine until we started chrooting into
the container rootfs when doing a `docker cp` with container source set
to `/` (cf 3029e765).

The fix is to only do the path splitting logic if the source is a file.
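
A hedged sketch of that logic, with illustrative names (`tarSource` is not the actual moby function):

```go
package archive

import "path/filepath"

// tarSource decides where to tar from and what to include. For a
// directory we tar the path itself; only for a file do we split it
// into its parent directory plus an IncludedFiles entry.
func tarSource(srcPath string, isDir bool) (sourceDir, sourceBase string) {
	if isDir {
		return srcPath, "."
	}
	return filepath.Split(srcPath)
}
```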

Unfortunately, 7a7357da added support for LCOW by duplicating some of
this subtle logic. Ideally we would need to do more refactoring of the
archive codebase to properly encapsulate these behaviors behind well-
documented APIs.

This fix does not do that. Instead, it fixes the issue inline.

Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 171538c190)
Signed-off-by: Tibor Vass <tibor@docker.com>
2019-06-14 01:37:57 +00:00
Tibor Vass
9b97965f22 add more tests
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 02f1eb89a4)
Signed-off-by: Tibor Vass <tibor@docker.com>
2019-06-14 01:37:57 +00:00
Brian Goff
e3f83e7aa7 Add test for copying entire container rootfs
        CID=$(docker create alpine)
        docker cp $CID:/ out

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 6db9f1c3d6)
Signed-off-by: Tibor Vass <tibor@docker.com>
2019-06-14 01:37:57 +00:00
Tibor Vass
44023afb7d Revert "Revert "Add chroot for tar packing operations""
This reverts commit 96df6d4d0b.

Signed-off-by: Tibor Vass <tibor@docker.com>
2019-06-14 01:37:32 +00:00
Tibor Vass
29ff2800c3 Revert "Revert "Pass root to chroot to for chroot Untar""
This reverts commit 60013ba69b.

Signed-off-by: Tibor Vass <tibor@docker.com>
2019-06-14 01:37:30 +00:00
Arko Dasgupta
d44a48835f Change Forbidden Error (403) to Conflict(409)
Signed-off-by: Arko Dasgupta <arko.dasgupta@docker.com>
(cherry picked from commit 31e8fcc678)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-06-12 20:00:59 +02:00
Arko Dasgupta
275bf7ec03 Gracefully take care of the error case when a container
retries to attach to a network it is already connected to

Fixes - https://github.com/docker/for-linux/issues/632

Signed-off-by: Arko Dasgupta <arko.dasgupta@docker.com>
(cherry picked from commit 871acb1c86)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-06-12 20:00:51 +02:00
Olli Janatuinen
de45ce73eb Enable integrations API tests for Windows CI
Signed-off-by: Olli Janatuinen <olli.janatuinen@gmail.com>
(cherry picked from commit 2f22247cad)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-06-12 10:16:04 +02:00
Andrew Hsu
ceb773e1ff Merge pull request #275 from tiborvass/19.03-revert-chroot-tar-untar
[19.03] Revert Pass root to chroot to for chroot Tar/Untar (CVE-2018-15664)
2019-06-11 23:42:09 -07:00
Tibor Vass
60013ba69b Revert "Pass root to chroot to for chroot Untar"
This reverts commit 9781cceb09.

Signed-off-by: Tibor Vass <tibor@docker.com>
2019-06-12 04:06:21 +00:00
Tibor Vass
96df6d4d0b Revert "Add chroot for tar packing operations"
This reverts commit 3e057d527d.

Signed-off-by: Tibor Vass <tibor@docker.com>
2019-06-12 04:06:14 +00:00
Andrew Hsu
a33a82b42f Merge pull request #267 from thaJeztah/19.03_backport_test_fixes
[19.03 backport] test-fixes and improvements
2019-06-11 15:15:40 -07:00
Andrew Hsu
367870a4d5 Merge pull request #274 from thaJeztah/19.03_backport_entropy_cannot_be_saved
[19.03 backport] Entropy cannot be saved
2019-06-11 13:53:11 -07:00
Andrew Hsu
175013d0cb Merge pull request #270 from thaJeztah/19.03_backport_fix_swagger_copy
[19.03 backport] fix: fix lack of copyUIDGID in swagger.yaml
2019-06-11 10:55:29 -07:00
Andrew Hsu
a6905fa2e5 Merge pull request #272 from thaJeztah/19.03_backport_align_libnetwork
[19.03 backport] Re-align proxy commit with libnetwork vendor
2019-06-11 10:52:38 -07:00
Justin Cormack
510e79ebe9 Entropy cannot be saved
Remove non-cryptographic randomness.

Signed-off-by: Justin Cormack <justin.cormack@docker.com>
(cherry picked from commit 2df693e533)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-06-11 17:40:09 +02:00
Sebastiaan van Stijn
a8d1b4a1ab Merge pull request #266 from thaJeztah/19.03_backport_do_not_order_uid_gid_mappings
[19.03 backport] Stop sorting uid and gid ranges in id maps
2019-06-11 09:43:14 +02:00
Sebastiaan van Stijn
88374fa982 Re-align proxy commit with libnetwork vendor
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 35069de3fd)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-06-09 11:18:39 +02:00
zhangyue
049a1090c3 fix: fix lack of copyUIDGID in swagger.yaml
Signed-off-by: Zhang Yue <zy675793960@yeah.net>
Signed-off-by: zhangyue <zy675793960@yeah.net>
(cherry picked from commit a4f828cb89)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-06-07 14:40:21 +02:00
Sebastiaan van Stijn
020bb75219 Harden TestPsListContainersFilterExited
This test runs on a daemon also used by other tests,
so make sure we don't get failures if another test
doesn't clean up or is running in parallel.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 915acffdb4)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-06-07 14:13:23 +02:00
Brian Goff
a24b9087ce Add log entries for daemon startup/shutdown
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit 595987fd08)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-06-07 14:10:45 +02:00
Brian Goff
48786ba842 Optimize test daemon startup
This adds some logs, handles timers better, and sets a request timeout
for the ping request.

I'm not sure the ticker in that loop is what we really want since the
ticker keeps ticking while we are (attempting) to make a request... but
I opted to not change that for now.

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit 20ea8942b8)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-06-07 14:10:37 +02:00
Sebastiaan van Stijn
dde48c6715 Merge pull request #264 from thaJeztah/19.03_backport_39290alternate
[19.03 backport] Windows: Don't attempt detach VHD for R/O layers
2019-06-06 14:19:47 +02:00
Jonas Dohse
e7c02a0508 Stop sorting uid and gid ranges in id maps
Moby currently sorts uid and gid ranges in id maps. This causes subuid
and subgid files to be interpreted wrongly.

The subuid file

```
> cat /etc/subuid
jonas:100000:1000
jonas:1000:1
```

configures that the container uids 0-999 are mapped to the host uids
100000-100999 and uid 1000 in the container is mapped to uid 1000 on the
host. The expected uid_map is:

```
> docker run ubuntu cat /proc/self/uid_map
         0     100000       1000
      1000       1000          1
```

Moby currently sorts the ranges by the first id in the range. Therefore
with the subuid file above, uid 0 in the container is mapped to uid
1000 on the host and the uids 1-1000 in the container are mapped to the
uids 100000-100999 on the host. The resulting uid_map is:

```
> docker run ubuntu cat /proc/self/uid_map
         0       1000          1
         1     100000       1000
```

The ordering was implemented to work around a limitation in Linux 3.8,
which has been fixed in Linux 3.9, as stated in the user_namespaces
manpage [1]:

> In the initial implementation (Linux 3.8), this requirement was
> satisfied by a simplistic implementation that imposed the further
> requirement that the values in both field 1 and field 2 of successive
> lines must be in ascending numerical order, which prevented some
> otherwise valid maps from being created.  Linux 3.9 and later fix this
> limitation, allowing any valid set of nonoverlapping maps.

This fix changes the interpretation of subuid and subgid files which do
not list the ids in numerical order for each individual user.
This breaks users that rely on the current behaviour.

The desired mapping above - mapping low user ids in the container to high
user ids on the host and some higher user ids in the container to lower
user ids on the host - can unfortunately not be achieved with the current
behaviour.

[1] http://man7.org/linux/man-pages/man7/user_namespaces.7.html
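
A sketch of building the map in file order (types are simplified; the essential point is the absence of any sort by host id, which Linux >= 3.9 permits for non-overlapping ranges):

```go
package idtools

// IDMap mirrors one line of /proc/self/uid_map.
type IDMap struct {
	ContainerID int
	HostID      int
	Size        int
}

type subIDRange struct {
	Start  int // first host id of the range
	Length int
}

// createIDMap assigns container ids 0..N to the host ranges in the
// order they appear in /etc/subuid -- it must NOT sort by HostID.
func createIDMap(subidRanges []subIDRange) []IDMap {
	idMap := []IDMap{}
	containerID := 0
	for _, r := range subidRanges {
		idMap = append(idMap, IDMap{
			ContainerID: containerID,
			HostID:      r.Start,
			Size:        r.Length,
		})
		containerID += r.Length
	}
	return idMap
}
```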

Signed-off-by: Jonas Dohse <jonas@dohse.ch>
(cherry picked from commit c4628d79d2)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-06-05 14:46:55 +02:00
John Howard
31722d3f5a Windows: Don't attempt detach VHD for R/O layers
Signed-off-by: John Howard <jhoward@microsoft.com>
(cherry picked from commit 293c74ba79)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-06-05 14:45:56 +02:00
Kir Kolyshkin
a81278befe aufs: retry auplink flush
Running a bundled aufs benchmark sometimes results in this warning:

> WARN[0001] Couldn't run auplink before unmount /tmp/aufs-tests/aufs/mnt/XXXXX  error="exit status 22" storage-driver=aufs

If we take a look at what the auplink utility produces on stderr, we'll see:

> auplink:proc_mnt.c:96: /tmp/aufs-tests/aufs/mnt/XXXXX: Invalid argument

and auplink exits with exit code of 22 (EINVAL).

Looking into the auplink source code, what happens is it tries to find a
record in /proc/self/mounts corresponding to the mount point (using the
setmntent()/getmntent_r() glibc functions), and it fails.

Some manual testing, as well as runtime testing with lots of printf
added on mount/unmount, as well as calls to check the superblock fs
magic on mount point (as in graphdriver.Mounted(graphdriver.FsMagicAufs, target)
confirmed that this record is in fact there, but sometimes auplink
can't find it. I was also able to reproduce the same error (inability
to find a mount in /proc/self/mounts that should definitely be there)
using a small C program, mocking what `auplink` does:

```c
#include <stdio.h>
#include <err.h>
#include <mntent.h>
#include <string.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
	FILE *fp;
	struct mntent m, *p;
	char a[4096];
	char buf[4096 + 1024];
	int found = 0, lines = 0;

	if (argc != 2) {
		fprintf(stderr, "Usage: %s <mountpoint>\n", argv[0]);
		exit(1);
	}

	fp = setmntent("/proc/self/mounts", "r");
	if (!fp) {
		err(1, "setmntent");
	}
	setvbuf(fp, a, _IOLBF, sizeof(a));
	while ((p = getmntent_r(fp, &m, buf, sizeof(buf)))) {
		lines++;
		if (!strcmp(p->mnt_dir, argv[1])) {
			found++;
		}
	}
	printf("found %d entries for %s (%d lines seen)\n", found, argv[1], lines);
	return !found;
}
```

I have also written a few other small C programs -- one that reads
/proc/self/mounts directly, and one that reads /proc/self/mountinfo instead.
They are also prone to the same occasional error.

It is not perfectly clear why this happens, but so far my best theory
is that when a lot of mounts/unmounts happen in parallel with reading
the contents of /proc/self/mounts, the kernel sometimes fails to provide
continuity (i.e. it skips some part of the file or mixes it up in some
other way). In other words, this is a kernel bug (which is probably
hard to fix unless some other interface to get a mount entry is added).

Now, there is no real fix, and the workaround I was able to come up
with is to retry when we get EINVAL. It usually works on the second
attempt, although I have once seen it take two retries to go through.
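
A sketch of the retry workaround (the retry count of 3 is an assumption, not necessarily the merged value):

```go
package aufs

import "os/exec"

// auplinkFlush runs "auplink <mnt> flush", retrying a couple of times
// because the EINVAL (exit status 22) failure described above is
// usually transient.
func auplinkFlush(mnt string) error {
	var err error
	for i := 0; i < 3; i++ {
		if err = exec.Command("auplink", mnt, "flush").Run(); err == nil {
			return nil
		}
	}
	return err
}
```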

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit ae431b10a9)
2019-06-04 15:07:53 -07:00
Kir Kolyshkin
cad766f6c7 aufs.Cleanup: optimize
Do not use filepath.Walk() as there's no requirement to recursively
go into every directory under mnt -- a (non-recursive) list of
directories in mnt is sufficient.

With filepath.Walk(), in case some container will fail to unmount,
it'll go through the whole container filesystem which is both
excessive and useless.

This is similar to commit f1a4592297 ("devmapper.shutdown:
optimize")

While at it, raise the priority of the "unmount error" message from debug
to warning. Note we don't have to explicitly add `m`, as the unmount error
(from pkg/mount) will already include it.
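
A sketch of the non-recursive cleanup, with assumed helper names (the real code lives in the aufs driver's Cleanup):

```go
package aufs

import (
	"io/ioutil"
	"path/filepath"

	"github.com/sirupsen/logrus"
)

// cleanupMounts unmounts every direct child of mntRoot; there is no
// need to descend into the container filesystems themselves.
func cleanupMounts(mntRoot string, unmount func(string) error) error {
	entries, err := ioutil.ReadDir(mntRoot)
	if err != nil {
		return err
	}
	for _, e := range entries {
		if !e.IsDir() {
			continue
		}
		if err := unmount(filepath.Join(mntRoot, e.Name())); err != nil {
			// raised from debug to warning, per the commit above
			logrus.WithError(err).Warn("unmount error")
		}
	}
	return nil
}
```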

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit 8fda12c607)
2019-06-04 15:07:53 -07:00
Kir Kolyshkin
f0f7020b5d aufs: optimize lots of layers case
In case there is a big number of layers, so that the mount data won't fit
into a single memory page (4096 bytes on most platforms, which is good
enough for about 40 layers, depending on how long the graphdriver root path
is), we supply the additional layers via MS_REMOUNT, as described in the
aufs documentation.

Problem is, the current implementation does that one layer at a time
(i.e. there is one mount syscall per each additional layer).

Optimize the code to supply as many layers as we can fit in one page
(basically reusing the same code as for the original mount).

Note, per aufs docs, "[a]t remount-time, the options are interpreted
in the given order, e.g. left to right" so we should be good.

Tested on an image with ~100 layers.

Before (35 syscalls):
> [pid 22756] 1556919088.686955 mount("none", "/mnt/volume_sfo2_09/docker-aufs/aufs/mnt/a86f8c9dd0ec2486293119c20b0ec026e19bbc4d51332c554f7cf05d777c9866", "aufs", 0, "br:/mnt/volume_sfo2_09/docker-au"...) = 0 <0.000504>
> [pid 22756] 1556919088.687643 mount("none", "/mnt/volume_sfo2_09/docker-aufs/aufs/mnt/a86f8c9dd0ec2486293119c20b0ec026e19bbc4d51332c554f7cf05d777c9866", 0xc000c451b0, MS_REMOUNT, "append:/mnt/volume_sfo2_09/docke"...) = 0 <0.000105>
> [pid 22756] 1556919088.687851 mount("none", "/mnt/volume_sfo2_09/docker-aufs/aufs/mnt/a86f8c9dd0ec2486293119c20b0ec026e19bbc4d51332c554f7cf05d777c9866", 0xc000c451ba, MS_REMOUNT, "append:/mnt/volume_sfo2_09/docke"...) = 0 <0.000098>
> ..... (~30 lines skipped for clarity)
> [pid 22756] 1556919088.696182 mount("none", "/mnt/volume_sfo2_09/docker-aufs/aufs/mnt/a86f8c9dd0ec2486293119c20b0ec026e19bbc4d51332c554f7cf05d777c9866", 0xc000c45310, MS_REMOUNT, "append:/mnt/volume_sfo2_09/docke"...) = 0 <0.000266>

After (2 syscalls):
> [pid 24352] 1556919361.799889 mount("none", "/mnt/volume_sfo2_09/docker-aufs/aufs/mnt/8e7ba189e347a834e99eea4ed568f95b86cec809c227516afdc7c70286ff9a20", "aufs", 0, "br:/mnt/volume_sfo2_09/docker-au"...) = 0 <0.001717>
> [pid 24352] 1556919361.801761 mount("none", "/mnt/volume_sfo2_09/docker-aufs/aufs/mnt/8e7ba189e347a834e99eea4ed568f95b86cec809c227516afdc7c70286ff9a20", 0xc000dbecb0, MS_REMOUNT, "append:/mnt/volume_sfo2_09/docke"...) = 0 <0.001358>
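
A simplified sketch of the batching, with assumed option syntax and names (not the exact merged code):

```go
package aufs

import "golang.org/x/sys/unix"

// remountExtraLayers appends the branches that did not fit into the
// initial mount, packing as many "append:" options as fit into one
// page (the kernel's limit for mount data) per remount syscall.
func remountExtraLayers(target string, branches []string) error {
	const pageSize = 4096
	data := ""
	flush := func() error {
		if data == "" {
			return nil
		}
		err := unix.Mount("none", target, "aufs", unix.MS_REMOUNT, data)
		data = ""
		return err
	}
	for _, b := range branches {
		opt := "append:" + b + "=ro"
		if len(data)+len(opt)+1 >= pageSize {
			if err := flush(); err != nil {
				return err
			}
		}
		if data != "" {
			data += ","
		}
		data += opt
	}
	return flush()
}
```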

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit d58c434bff)
2019-06-04 15:07:53 -07:00
Kir Kolyshkin
65ba452bb0 aufs: add lock around mount
Apparently there is some kind of race in aufs kernel module code,
which leads to the errors like:

[98221.158606] aufs au_xino_create2:186:dockerd[25801]: aufs.xino create err -17
[98221.162128] aufs au_xino_set:1229:dockerd[25801]: I/O Error, failed creating xino(-17).
[98362.239085] aufs au_xino_create2:186:dockerd[6348]: aufs.xino create err -17
[98362.243860] aufs au_xino_set:1229:dockerd[6348]: I/O Error, failed creating xino(-17).
[98373.775380] aufs au_xino_create:767:dockerd[27435]: open /dev/shm/aufs.xino(-17)
[98389.015640] aufs au_xino_create2:186:dockerd[26753]: aufs.xino create err -17
[98389.018776] aufs au_xino_set:1229:dockerd[26753]: I/O Error, failed creating xino(-17).
[98424.117584] aufs au_xino_create:767:dockerd[27105]: open /dev/shm/aufs.xino(-17)

So, we have to have a lock around mount syscall.

While at it, don't call the whole Unmount() on an error path, as
it leads to a bogus error from auplink flush.

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit 5cd62852fa)
2019-06-04 15:07:53 -07:00
Kir Kolyshkin
76d936ae76 aufs: aufsMount: better errors for unix.Mount()
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit 5873768dbe)
2019-06-04 15:07:53 -07:00
Kir Kolyshkin
d1eae89590 aufs: use mount.Unmount
1. Use mount.Unmount(), which ignores the EINVAL ("not mounted") error
and provides better error diagnostics (so we don't have to explicitly
add the target to error messages); see the sketch below.

2. Since we're ignoring the "not mounted" error, we can call
multiple unmounts without any locking -- but since "auplink flush"
is still involved and can produce an error in logs, let's keep
the check for the fs being mounted (it's just a statfs, so it should be fast).

3. While at it, improve the "can't unmount" error message in Put().
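
A sketch of such an unmount helper, treating EINVAL ("not mounted") as success (assumed to mirror pkg/mount's behavior, not its exact code):

```go
package mount

import (
	"github.com/pkg/errors"
	"golang.org/x/sys/unix"
)

// Unmount unmounts target, ignoring the "not mounted" error so that
// callers may safely unmount the same path more than once.
func Unmount(target string) error {
	err := unix.Unmount(target, 0)
	if err == nil || err == unix.EINVAL {
		return nil
	}
	return errors.Wrapf(err, "failed to unmount %s", target)
}
```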

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit 4beee98026)
2019-06-04 15:07:53 -07:00
Kir Kolyshkin
7d1414ec3e aufs: remove extra locking
Both mount and unmount calls are already protected by fine-grained
(per id) locks in Get()/Put() introduced in commit fc1cf1911b
("Add more locking to storage drivers"), so there's no point in
having a global lock in mount/unmount.

The only place from which unmount is called without any locking
is Cleanup() -- this is to be addressed in the next patch.

This reverts commit 824c24e680.

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit f93750b2c4)
2019-06-04 15:07:53 -07:00
Andrew Hsu
5fbc0a16e2 Merge pull request #260 from thaJeztah/19.03_backport_buildkit_systemd_resolvconf
[19.03 backport] build: buildkit now also uses systemd's resolv.conf
2019-06-04 11:44:18 -07:00
Andrew Hsu
0678d71038 Merge pull request #252 from thaJeztah/19.03_bump_swarmkit
[19.03 backport] Revert docker/swarmkit#2804
2019-06-04 10:33:22 -07:00
Sebastiaan van Stijn
746dce1994 Merge pull request #256 from thaJeztah/19.03_backport_increase_swarmkit_grpc
[19.03 backport] Increase max recv gRPC message size for nodes and secrets
2019-06-04 19:01:27 +02:00
Sebastiaan van Stijn
36f0fe6524 Merge pull request #247 from thaJeztah/19.03_aufs_backports
[19.03 backport] backport layer store optimizations
2019-06-04 18:43:43 +02:00
Sebastiaan van Stijn
737d57bad6 Merge pull request #251 from thaJeztah/19.03_backport_fix_fix_win_tmp
[19.03 backport] Windows CI - Corrected LOCALAPPDATA location
2019-06-04 18:42:37 +02:00
Sebastiaan van Stijn
287240a965 Merge pull request #255 from thaJeztah/19.03_backport_ro_none_cgroupdriver
[19.03 backport] info: report cgroup driver as "none" when running rootless
2019-06-04 18:41:58 +02:00
Sebastiaan van Stijn
ca602fa7c6 Merge pull request #249 from thaJeztah/19.03_backport_fix_api_operation_PutContainerArchive
[19.03 backport] API: Set format of body parameter in operation PutContainerArchive to "binary"
2019-06-04 18:41:19 +02:00
Andrew Hsu
36324c3bbd Merge pull request #254 from thaJeztah/19.03_backport_root_dir_on_copy
[19.03 backport] Pass root to chroot to for chroot Tar/Untar (CVE-2018-15664)
2019-06-04 09:33:40 -07:00
Andrew Hsu
21c33eb7e3 Merge pull request #259 from thaJeztah/19.03_backport_fix_build_panic
[19.03 backport] build: fix panic when exporting to tar
2019-06-04 09:16:37 -07:00
Tibor Vass
feb373a216 build: buildkit now also uses systemd's resolv.conf
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 8ff4ec98cf)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-06-04 18:11:14 +02:00
Andrew Hsu
d7080a7a2e Merge pull request #258 from thaJeztah/19.03_backport_update_buildkit
[19.03 backport] vendor: update buildkit to 37d53758
2019-06-04 09:08:14 -07:00
Tibor Vass
b915ec1e7b build: fix panic when exporting to tar
Fixes a panic on `docker build -t foo -o - . >/dev/null`

Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 6104eb1ae2)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-06-04 11:21:18 +02:00
Tonis Tiigi
2de4afdee5 vendor: update buildkit to 37d53758
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit 85bbbd4495)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-06-04 11:17:10 +02:00
Drew Erny
26a35ddcd1 Increase max recv gRPC message size for nodes and secrets
Increases the max received gRPC message size for Node and Secret list
operations. This has already been done for the other swarm types, but
was not done for these.
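
Roughly, the change amounts to attaching a larger receive-size call option to those list RPCs; a sketch with an assumed size constant:

```go
package swarm

import "google.golang.org/grpc"

// A larger receive limit for list responses; the default is 4 MiB,
// which large Node/Secret lists can exceed. 128 MiB is an assumed
// value for illustration.
var largeRecvOpt = grpc.MaxCallRecvMsgSize(128 << 20)

// Usage (illustrative):
//
//	resp, err := client.ListNodes(ctx, req, largeRecvOpt)
```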

Signed-off-by: Drew Erny <drew.erny@docker.com>
(cherry picked from commit a0903e1fa3)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-06-03 23:00:07 +02:00
Akihiro Suda
d575af39ac rootless: update docker info docs
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
(cherry picked from commit ca5aab19b4)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-06-03 22:54:22 +02:00
Akihiro Suda
57b59f876e info: report cgroup driver as "none" when running rootless
Previously `docker info` had reported "cgroupfs" as the cgroup driver
but the driver wasn't actually used at all.

This PR reports "none" as the cgroup driver so as to avoid confusion;
e.g. kubeadm/kubelet will detect cgroupless-ness by checking this docker
info field. https://github.com/rootless-containers/usernetes/pull/97

Note that the user still cannot specify `native.cgroupdriver=none` manually.
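
A minimal sketch of the reporting logic, assuming a simple rootless flag:

```go
package daemon

// cgroupDriver returns the value reported in `docker info`. When
// running rootless no driver is actually in use, so report "none".
func cgroupDriver(rootless bool, useSystemd bool) string {
	switch {
	case rootless:
		return "none"
	case useSystemd:
		return "systemd"
	default:
		return "cgroupfs"
	}
}
```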

Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
(cherry picked from commit 153466ba0a)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-06-03 22:48:36 +02:00
Brian Goff
3e057d527d Add chroot for tar packing operations
Previously only unpack operations were supported with chroot.
This adds chroot support for packing operations.
This prevents potential breakouts when copying data from a container.

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit 3029e765e2)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-06-03 18:55:45 +02:00
Brian Goff
9781cceb09 Pass root to chroot to for chroot Untar
This is useful for preventing CVE-2018-15664 where a malicious container
process can take advantage of a race on symlink resolution/sanitization.

Before this change chrootarchive would chroot to the destination
directory which is attacker controlled. With this patch we always chroot
to the container's root which is not attacker controlled.
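
A hedged sketch of the ordering (the real code re-execs a helper process before chrooting; `doUntar` here is a placeholder):

```go
package chrootarchive

import (
	"io"

	"golang.org/x/sys/unix"
)

// untarWithRoot chroots to the container's root -- a path the attacker
// cannot control -- before resolving dest, so a symlink inside the
// container can no longer redirect the operation outside of it.
func untarWithRoot(containerRoot, dest string, r io.Reader,
	doUntar func(dest string, r io.Reader) error) error {
	if err := unix.Chroot(containerRoot); err != nil {
		return err
	}
	if err := unix.Chdir("/"); err != nil {
		return err
	}
	return doUntar(dest, r) // dest now resolves inside containerRoot
}
```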

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit d089b63937)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-06-03 18:55:37 +02:00
Drew Erny
d0f4f42bd4 Revert docker/swarmkit#2804
Reverts the change to swarmkit that made all updates set UpdateStatus to
Completed

Signed-off-by: Drew Erny <drew.erny@docker.com>
(cherry picked from commit c7d9599e3d)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-06-03 15:52:54 +02:00
Olli Janatuinen
d59fb97c5b Windows CI - Corrected LOCALAPPDATA location
Signed-off-by: Olli Janatuinen <olli.janatuinen@gmail.com>
(cherry picked from commit 61815f6763)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-06-03 11:25:45 +02:00
Dominic Tubach
e1e47d090d API: Set format of body parameter in operation PutContainerArchive to "binary"
Signed-off-by: Dominic Tubach <dominic.tubach@to.com>
(cherry picked from commit fa6f63e79b)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-06-03 11:08:05 +02:00
Sebastiaan van Stijn
a62d9b9c21 Merge pull request #232 from thaJeztah/19.03_backport_lb_stale_force_leave
[19.03 backport] Network not deleted after stack is removed
2019-05-29 22:47:45 +03:00
Sebastiaan van Stijn
a004854097 Merge pull request #229 from thaJeztah/19.03_backport_windows_tag
[19.03 backport] Consider WINDOWS_BASE_IMAGE_TAG override when setting Windows base image for tests
2019-05-27 21:07:10 +03:00
Sebastiaan van Stijn
5925508b31 Merge pull request #222 from thaJeztah/19.03_backport_swarmnanocpu
[19.03 backport] Switch swarmmode services to NanoCpu
2019-05-27 21:04:31 +03:00
Sebastiaan van Stijn
5051fe047c Merge pull request #231 from AkihiroSuda/bk-ramdisk-1903
[19.03 backport] builder-next: support DOCKER_RAMDISK
2019-05-27 21:01:01 +03:00
Sebastiaan van Stijn
57a9697161 Merge pull request #241 from thaJeztah/19.03_swagger_fixes
[19.03 backport] swagger fixes
2019-05-27 20:54:35 +03:00
Kir Kolyshkin
936432326a layer: protect from same-name races
As pointed out by Tonis, there's a race between ReleaseRWLayer()
and GetRWLayer():

```
----- goroutine 1 -----               ----- goroutine 2 -----
ReleaseRWLayer()
  m := ls.mounts[l.Name()]
  ...
  m.deleteReference(l)
  m.hasReferences()
  ...                                 GetRWLayer()
  ...                                   mount := ls.mounts[id]
  ls.driver.Remove(m.mountID)
  ls.store.RemoveMount(m.name)          return mount.getReference()
  delete(ls.mounts, m.Name())
-----------------------               -----------------------
```

When something like this happens, GetRWLayer will return
an RWLayer without a storage. Oops.

There might be more races like this, and it seems the best
solution is to lock by layer id/name by using pkg/locker.

With this in place, a name collision cannot happen, so remove
the part of the previous commit that protected against it in
CreateRWLayer (the temporary nil assignment and associated rollback).

So, now we have
* layerStore.mountL sync.Mutex to protect the layerStore.mounts map[]
  (against concurrent access);
* mountedLayer's embedded `sync.Mutex` to protect its references map[];
* layerStore.layerL (which I haven't touched);
* per-id locker, to avoid name conflicts and concurrent operations
  on the same rw layer.

The whole rig seems to look more readable now (mutexes use is
straightforward, no nested locks).
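
An illustrative sketch of the per-name locking with pkg/locker (field names are assumptions):

```go
package layer

import (
	"sync"

	"github.com/docker/docker/pkg/locker"
)

type mountedLayer struct{} // refs, mountID, ... elided

type layerStore struct {
	mountL sync.Mutex               // protects the mounts map itself
	mounts map[string]*mountedLayer // name -> RW layer
	nameL  *locker.Locker           // serializes ops on the same name
}

// withName takes the per-name lock, so ReleaseRWLayer and GetRWLayer
// for the same name can no longer interleave as in the diagram above.
func (ls *layerStore) withName(name string, fn func()) {
	ls.nameL.Lock(name)
	defer ls.nameL.Unlock(name)
	fn()
}
```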

Reported-by: Tonis Tiigi <tonistiigi@gmail.com>
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit af433dd200)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-25 22:33:45 +02:00
Kir Kolyshkin
9eeb2b5ef0 layer/CreateRWLayerByGraphID: remove
This is an additon to commit 1fea38856a ("Remove v1.10 migrator")
aka PR #38265. Since that one, CreateRWLayerByGraphID() is not
used anywhere, so let's drop it.

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit b4e9b50765)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-25 22:33:37 +02:00
Xinfeng Liu
eaa3e69d14 layer: optimize layerStore mountL
Goroutine stack analysis showed some lock contention
while doing massively parallel image removal (100
instances of `docker rm`), with many goroutines waiting
for the mountL mutex. Optimize it.

With this commit, the above operation is about 3x
faster, with no noticeable change to container
creation times (tested on aufs and overlay2).

kolyshkin@:
- squashed commits
- added description
- protected CreateRWLayer against name collisions by
temporarily assigning nil to ls.mounts[name], and treating
nil as "non-existent" in all the other functions.

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit 05250a4f00)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-25 22:32:51 +02:00
Kir Kolyshkin
80a35e0bd4 layer: protect mountedLayer.references
Add a mutex to protect concurrent access to mountedLayer.references map.

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit f73b5cb4e8)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-25 22:32:43 +02:00
Adam Dobrawy
cdeef06801 Update docs to remove restriction of tty resize
Signed-off-by: Adam Dobrawy <naczelnik@jawnosc.tk>
(cherry picked from commit 4898f493d8)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-25 21:58:53 +02:00
Dominic Tubach
181a64a5aa API: Move "x-nullable: true" from type PortBinding to type PortMap
Currently the API spec would allow `"443/tcp": [null]`, but what should
be allowed is `"443/tcp": null`.

Signed-off-by: Dominic Tubach <dominic.tubach@to.com>
(cherry picked from commit 32b5d296ea)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-25 21:58:40 +02:00
Dominic Tubach
63eecadf82 API: Change type of RemoteAddrs to array of strings in operation SwarmJoin
Signed-off-by: Dominic Tubach <dominic.tubach@to.com>
(cherry picked from commit d5f6bdb027)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-25 21:58:28 +02:00
Arko Dasgupta
2b216674da Network not deleted after stack is removed
Make sure adapter.removeNetworks executes during task Remove:
adapter.removeNetworks was being skipped for cases when
isUnknownContainer(err) was true after adapter.remove was executed.

This fix eliminates the nil return case, forcing the function
to continue executing unless there is a true error.

Fixes https://github.com/moby/moby/issues/39225

Signed-off-by: Arko Dasgupta <arko.dasgupta@docker.com>
(cherry picked from commit 70fa7b6a3f)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-25 21:53:13 +02:00
Andrew Hsu
868d87b08e Merge pull request #224 from thaJeztah/19.03_backport_devno
[19.03 backport] bugfix: fetch the right device number which great than 255
2019-05-24 09:46:38 -07:00
Andrew Hsu
a7e03f69be Merge pull request #216 from AkihiroSuda/rootless-fix-kill-1903
[19.03 backport] rootless: fix killing daemon
2019-05-24 09:46:01 -07:00
Sebastiaan van Stijn
96daf37c83 Merge pull request #238 from thaJeztah/19.03_backport_remove_TestSearchCmdOptions
[19.03 backport] Remove TestSearchCmdOptions test
2019-05-23 22:52:11 +02:00
Sebastiaan van Stijn
3dec835d84 Merge pull request #217 from thaJeztah/19.03_backport_EDGE374_TestDaemonNoSpaceLeftOnDeviceError
[19.03 backport] explicitly set filesystem type for mount to avoid 'invalid argument' error on arm
2019-05-23 22:51:01 +02:00
Sebastiaan van Stijn
4da607559f Merge pull request #211 from thaJeztah/19.03_backport_api_fixes
[19.03 backport] backport small API fixes
2019-05-23 22:50:12 +02:00
Sebastiaan van Stijn
8dd7bd9981 Merge pull request #234 from thaJeztah/19.03_backport_update_seccomp_test_for_aarch64
[19.03 backport] Update TestRunWithDaemonDefaultSeccompProfile for ARM64
2019-05-23 22:49:21 +02:00
Sebastiaan van Stijn
7cc3681ad6 Merge pull request #206 from thaJeztah/19.03_backport_no_retry_ping_on_errconn
[19.03 backport] client: do not fallback to GET if HEAD on _ping fail to connect
2019-05-23 22:48:02 +02:00
Sebastiaan van Stijn
e205cd89cd Merge pull request #228 from thaJeztah/19.03_backport_bump_libnetwork
[19.03 backport] bump libnetwork 5ac07abef4eee176423fdc1b870d435258e2d381
2019-05-23 21:47:57 +02:00
Sebastiaan van Stijn
c56df1abf3 Merge pull request #235 from thaJeztah/19.03_backport_make_sure_to_hydrate_when_eating_pretzels
[19.03 backport] Fix error handling for bind mount spec parser.
2019-05-23 12:09:08 +02:00
Sebastiaan van Stijn
d8185417d9 Remove TestSearchCmdOptions test
This test is dependent on the search results returned by Docker Hub, which
can change at any moment, and causes this test to be unpredictable.

Removing this test instead of trying to catch up with Docker Hub any time
the results change, because it's effectively testing Docker Hub, and not
the daemon.

Unit tests are already in place to test the core functionality of the daemon,
so it should be safe to remove this test.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 21e662c774)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-23 10:58:06 +02:00
Brian Goff
7cb78b6259 Fix error handling for bind mount spec parser.
Errors were being ignored and always telling the user that the path
doesn't exist even if it was some other problem, such as a permission
error.

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit ebcef28834)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-23 01:09:37 +02:00
Sebastiaan van Stijn
79ac8f95af Update TestRunWithDaemonDefaultSeccompProfile for ARM64
`chmod` is a legacy syscall, and not present on arm64, which
caused this test to fail.

Add `fchmodat` to the profile so that this test can run both
on x64 and arm64.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 4bd8964b23)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-22 13:33:18 +02:00
Akihiro Suda
1c346f16a3 builder-next: support DOCKER_RAMDISK
For https://github.com/kubernetes/minikube/issues/4143

Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
(cherry picked from commit b4247b433e)
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
2019-05-22 06:58:44 +09:00
Deep Debroy
d347049802 Consider WINDOWS_BASE_IMAGE_TAG override when setting Windows base image for tests
Signed-off-by: Deep Debroy <ddebroy@docker.com>
(cherry picked from commit 15419d7ba0)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-21 17:00:07 +02:00
Sebastiaan van Stijn
939aa52465 bump libnetwork 5ac07abef4eee176423fdc1b870d435258e2d381
full diff: 9ff9b57c34...5ac07abef4

brings in:

- docker/libnetwork#2376 Forcing a nil IP specified in PortBindings to IPv4zero (0.0.0.0)

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit a66ddd8ab8)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-21 14:49:34 +02:00
Kir Kolyshkin
29c50668b3 int-cli/TestSearchCmdOptions: fail earlier
Sometimes this test fails (allegedly due to problems with Docker Hub),
but it fails later than it should, for example:

> 01:20:34.845 assertion failed: expression is false: strings.Count(outSearchCmdStars, "[OK]") <= strings.Count(outSearchCmd, "[OK]"): The quantity of images with stars should be less than that of all images: <...>

This, with non-empty list of images following, means that the initial
`docker search busybox` command returned not enough results. So, add
a check that `docker search busybox` returns something.

While at it,
 * raise the number of stars to 10;
 * simplify check for number of lines (no need to count [OK]'s);
 * improve error message.

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit 4f80a1953d)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-21 12:51:09 +02:00
Sebastiaan van Stijn
55c5381584 Merge pull request #220 from dperny/19.03-backport-constraintenforcer-fix
[19.03 backport] ConstraintEnforcer fix
2019-05-21 12:47:38 +02:00
frankyang
750e0ace06 bugfix: fetch the right device number which is greater than 255
Signed-off-by: frankyang <yyb196@gmail.com>
(cherry picked from commit b9f31912de)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-21 12:06:26 +02:00
Olly Pomeroy
29498693dd Switch swarmmode services to NanoCpu
Today `$ docker service create --limit-cpu` configures a container's
`CpuPeriod` and `CpuQuota` variables; this commit switches to
configuring a container's `NanoCpu` variable instead.
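
A sketch of the conversion change with assumed names; swarm expresses CPU limits in nano-CPUs (1e9 per CPU):

```go
package convert

// containerResources shows the switch: previously a CpuPeriod/CpuQuota
// pair was derived from the service's CPU limit; now the nano-CPU
// value is handed straight to the container's NanoCpus field.
// Field and function names here are illustrative.
func containerResources(limitNanoCPUs int64) (nanoCPUs, cpuPeriod, cpuQuota int64) {
	// Old behaviour, kept as a comment for comparison:
	//   cpuPeriod = 100000 // 100ms, the kernel default
	//   cpuQuota = limitNanoCPUs * cpuPeriod / 1e9
	return limitNanoCPUs, 0, 0
}
```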

Signed-off-by: Olly Pomeroy <olly@docker.com>
(cherry picked from commit 8a60a1e14a)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-21 11:17:37 +02:00
Drew Erny
56e92239a6 backport ConstraintEnforcer fix
Revendors docker/swarmkit to backport fixes made to the
ConstraintEnforcer (see docker/swarmkit#2857)

Signed-off-by: Drew Erny <drew.erny@docker.com>
2019-05-20 13:30:00 -05:00
Jim Ehrismann
11319732ab explicitly set filesystem type for mount to avoid 'invalid argument' error on arm
Signed-off-by: Jim Ehrismann <jim.ehrismann@docker.com>
(cherry picked from commit d7de1a8b9f)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-17 17:18:16 +02:00
Akihiro Suda
853816ae79 dockerd-rootless.sh: use exec
Killing the shell script process does not kill the forked process.

This commit switches to `exec` so that the executed process can be
easily killed.

Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
(cherry picked from commit 34cc5c24d0)
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
2019-05-16 22:06:01 +09:00
Akihiro Suda
8f61032ec4 bump up rootlesskit to v0.4.1
Now the child process is killed when the parent dies (rootless-containers/rootlesskit#66)

e92d5e7...27a0c7a

Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
(cherry picked from commit 00c92a6719)
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
2019-05-16 22:05:48 +09:00
Sebastiaan van Stijn
bff7e300e6 Merge pull request #215 from thaJeztah/19.03_backport_buildkit_fixes
[19.03 backport] BuildKit fixes
2019-05-13 20:16:34 -07:00
Andrew Hsu
ff44133643 Merge pull request #214 from thaJeztah/19.03_backport_log-daemon-exit-before-tests-finish
[19.03 backport] Ensure all integration daemon logging happens before test exit
2019-05-13 19:13:34 -07:00
Andrew Hsu
9fdccf6a47 Merge pull request #212 from thaJeztah/19.03_backport_gcr_fix
[19.03 backport] builder-next: fix gcr workaround token cache
2019-05-13 19:11:55 -07:00
Andrew Hsu
3f4657f6db Merge pull request #210 from thaJeztah/19.03_backport_bump_runc_1.0.0-rc.8
[19.03 backport] Bump runc 1.0.0-rc8, opencontainers/selinux v1.2.2
2019-05-13 19:06:00 -07:00
Sebastiaan van Stijn
dcc05fcf3e Merge pull request #213 from thaJeztah/19.03_backport_remove_stale_lb_ep
[19.03 backport] Remove a network during task SHUTDOWN instead of REMOVE to
2019-05-13 18:52:53 -07:00
Tibor Vass
03ce4080a4 Merge pull request #208 from thaJeztah/19.03_backport_rootless_fixes
[19.03 backport] backport rootless fixes
2019-05-13 18:38:17 -07:00
Andrew Hsu
61828453db Merge pull request #209 from thaJeztah/19.03_backport_bump_golang_1.12.5
[19.03 backport] Bump Golang 1.12.5
2019-05-13 18:05:15 -07:00
Sebastiaan van Stijn
d371b283c3 bump google.golang.org/grpc v1.20.1
full diff: https://github.com/grpc/grpc-go/compare/v1.12.2...v1.20.1

includes grpc/grpc-go#2695 transport: do not close channel that can lead to panic
addresses moby/moby#39053

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 28ad54d84f)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-13 17:59:40 -07:00
Tonis Tiigi
4784740273 builder-next: call stopprogress on download error
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit 91a57f3e7f)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-13 17:44:38 -07:00
Tonis Tiigi
31b0688de7 vendor: update buildkit to f238f1ef
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit a3cbd53ed2)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-13 17:44:23 -07:00
Sebastiaan van Stijn
6896305b57 bump golang.org/x/crypto 88737f569e3a9c7ab309cdc09a07fe7fc87233c3
no local changes, just syncing with containerd

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit d6d2b30fd2)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-13 17:43:56 -07:00
Sebastiaan van Stijn
931c4c1023 bump gogo/googleapis v1.2.0
full diff: 08a7655d27...v1.2.0

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 5d51ac544b)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-13 17:43:45 -07:00
Sebastiaan van Stijn
6cc14f5854 bump gogo/protobuf v1.2.1
full diff: https://github.com/gogo/protobuf/compare/v1.2.0...v1.2.1

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 647f31b7d0)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-13 17:43:25 -07:00
Sebastiaan van Stijn
1910607215 bump containerd/console 0650fd9eeb50bab4fc99dceb9f2e14cf58f36e7f
full diff: c12b1e7919...0650fd9eeb

- containerd/console#30 Add common project repo checks/README references

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 3d7d8a579f)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-13 17:43:14 -07:00
Sebastiaan van Stijn
dfa1031015 bump containerd 3a3f0aac8819165839a41fee77a4f4ac8b103097
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 25e6487fc2)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-13 17:42:40 -07:00
Sebastiaan van Stijn
790388a8c5 bump containerd/continuity aaeac12a7ffcd198ae25440a9dff125c2e2703a7
- containerd/continuity#140 Fix directory comparison in changes

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 447cbff50a)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-13 17:25:31 -07:00
Sebastiaan van Stijn
ea09008423 bump buildkit v0.5.0
full diff: 8818c67cff...v0.5.0

- moby/buildkit#909 exporter: support unpack opt for image exporter
- moby/buildkit#961 dockerfile: allow subdirs for remote contexts

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 3e4723cf33)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-13 17:25:19 -07:00
Daniel Sweet
8d8904f02b Ensure all integration daemon logging happens before test exit
As of Go 1.12, the `testing` package panics if a goroutine logs to a
`testing.T` after the relevant test has completed. This was not
documented as a change at all; see the commit
95d06ab6c982f58b127b14a52c3325acf0bd3926 in the Go repository for the
relevant change.

At any point in the integration tests, tests could panic with the
message "Log in goroutine after TEST_FUNCTION has completed". This was
exacerbated by less direct logging I/O, e.g. running `make test` with
its output piped instead of attached to a TTY.

The most common cause of panics was that there was a race condition
between an exit logging goroutine and the `StopWithError` method:
`StopWithError` could return, causing the calling test method to return,
causing the `testing.T` to be marked as finished, before the goroutine
could log that the test daemon had exited. The fix is simple: capture
the result of `cmd.Wait()`, _then_ log, _then_ send the captured
result over the `Wait` channel. This ensures that the message is
logged before `StopWithError` can return, blocking the test method
so that the target `testing.T` is not marked as finished.
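
A sketch of the capture-log-send ordering (names are illustrative, not the exact test-daemon helper):

```go
package daemon

import (
	"os/exec"
	"testing"
)

// startAndWatch captures the Wait() result, logs it, and only then
// signals the channel, so StopWithError (which receives from waitCh)
// cannot return -- and the test cannot finish -- before the log line
// is written.
func startAndWatch(t *testing.T, cmd *exec.Cmd) (<-chan error, error) {
	waitCh := make(chan error, 1)
	if err := cmd.Start(); err != nil {
		return nil, err
	}
	go func() {
		err := cmd.Wait()                // 1. capture the result
		t.Logf("daemon exited: %v", err) // 2. log before unblocking Stop
		waitCh <- err                    // 3. now StopWithError may return
		close(waitCh)
	}()
	return waitCh, nil
}
```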

Signed-off-by: Daniel Sweet <danieljsweet@icloud.com>
(cherry picked from commit 7546322e99)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-13 15:50:43 -07:00
Arko Dasgupta
2a7513a972 Remove a network during task SHUTDOWN instead of REMOVE to
make sure the LB sandbox is removed when a service is updated
with a --network-rm option

Signed-off-by: Arko Dasgupta <arko.dasgupta@docker.com>
(cherry picked from commit 680d0ba4ab)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-13 15:47:35 -07:00
Tonis Tiigi
c47f2a4a1a builder-next: fix gcr workaround token cache
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit cfce0acd33)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-13 15:45:27 -07:00
Yash Murty
526a72fd77 Remove DiskQuota field.
Signed-off-by: Yash Murty <yashmurty@gmail.com>
(cherry picked from commit a31a088665)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-13 15:42:41 -07:00
Sebastiaan van Stijn
f76879dd64 Add "import" statement to generated API types
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 93886fcc5a)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-13 15:42:22 -07:00
Sebastiaan van Stijn
e7a837120d bump opencontainers/selinux v1.2.2
full diff: https://github.com/opencontainers/selinux/compare/v1.2.1...v1.2.2

- opencontainers/selinux#51 Older kernels do not support keyring labeling

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 0d453115fe)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-13 15:40:02 -07:00
Sebastiaan van Stijn
04c51495da bump runc binary v1.0.0-rc8
full diff: 029124da7a...425e105d5a

- opencontainers/runc#2043 Vendor in latest selinux code for keycreate errors

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 4bc310c11b)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-13 15:39:54 -07:00
Sebastiaan van Stijn
02baf07d77 bump runc vendor v1.0.0-rc8
full diff: 029124da7a...425e105d5a

- opencontainers/runc#2043 Vendor in latest selinux code for keycreate errors

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 6df6fe6020)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-13 15:39:46 -07:00
Jintao Zhang
6d0823af0a Bump Golang 1.12.5
Signed-off-by: Jintao Zhang <zhangjintao9020@gmail.com>
(cherry picked from commit 3a4c5b6a0d)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-13 15:37:07 -07:00
Akihiro Suda
8493fb18ae dockerd: fix rootless detection (alternative to #39024)
The `--rootless` flag had a couple of issues:
* #38702: euid=0, $USER="root" but no access to cgroup ("rootful" Docker in rootless Docker)
* #39009: euid=0 but $USER="docker" (rootful boot2docker)

To fix #38702, XDG dirs are ignored as in rootful Docker, unless the
dockerd is directly running under RootlessKit namespaces.

RootlessKit detection is implemented by checking whether `$ROOTLESSKIT_STATE_DIR` is set.

To fix #39009, the non-robust `$USER` check is now completely removed.

The entire logic can be illustrated as follows:

```
withRootlessKit := getenv("ROOTLESSKIT_STATE_DIR")
rootlessMode := withRootlessKit || cliFlag("--rootless")
honorXDG := withRootlessKit
useRootlessKitDockerProxy := withRootlessKit
removeCgroupSpec := rootlessMode
adjustOOMScoreAdj := rootlessMode
```

Close #39024
Fix #38702 #39009

Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
(cherry picked from commit 3518383ed9)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-13 15:31:31 -07:00
Akihiro Suda
e8b9a752d3 rootless: optional support for lxc-user-nic SUID binary
lxc-user-nic can eliminate slirp overhead but needs /etc/lxc/lxc-usernet to be configured for the current user.

To use lxc-user-nic, $DOCKERD_ROOTLESS_ROOTLESSKIT_NET=lxc-user-nic also needs to be set.

This commit also bumps up RootlessKit from v0.3.0 to v0.4.0:
70e0502f32...e92d5e772e

Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
(cherry picked from commit 63a66b0eb0)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-13 15:31:11 -07:00
Kir Kolyshkin
14bb71d508 Dockerfile.e2e: fix DOCKER_GITCOMMIT handling
1. There is no need to persist DOCKER_GITCOMMIT,
as it's not needed for runtime, only for build.
So, remove ENV.

2. In case $GITCOMMIT is not defined during build time
(which happens if the .git directory is not present),
we still need to have some value set, so set it to
`undefined`. Otherwise we'll have something like

>  => ERROR [builder 2/3] RUN hack/make.sh build-integration-test-binary
> ------
>  > [builder 2/3] RUN hack/make.sh build-integration-test-binary:
> #32 0.488
> #32 0.505 error: .git directory missing and DOCKER_GITCOMMIT not specified
> #32 0.505   Please either build with the .git directory accessible, or specify the
> #32 0.505   exact (--short) commit hash you are building using DOCKER_GITCOMMIT for
> #32 0.505   future accountability in diagnosing build issues.  Thanks!

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit c3b24944ca)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-13 15:27:02 -07:00
Kir Kolyshkin
2e95499142 Dockerfile.e2e: copy test sources
Package "gotest.tools/assert" uses source introspection to
print more info in case of assertion failure. When source code
is not available, it prints an error instead.

In other words, before this commit:

> --- SKIP: TestCgroupDriverSystemdMemoryLimit (0.00s)
>     cgroupdriver_systemd_test.go:32: failed to parse source file: /go/src/github.com/docker/docker/integration/system/cgroupdriver_systemd_test.go: open /go/src/github.com/docker/docker/integration/system/cgroupdriver_systemd_test.go: no such file or directory
>     cgroupdriver_systemd_test.go:32:

and after:

> --- SKIP: TestCgroupDriverSystemdMemoryLimit (0.09s)
>    cgroupdriver_systemd_test.go:32: !hasSystemd()
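
For reference, a minimal test shows the introspection at work (hasSystemd here is a stand-in for the real helper, assuming the gotest.tools v2 import path used at the time):

```
package system

import (
	"testing"

	"gotest.tools/assert"
)

// hasSystemd is a stand-in for the real helper in the test suite.
func hasSystemd() bool { return false }

func TestIntrospectionExample(t *testing.T) {
	// With the test sources present in the image, a failure here is
	// reported using the source expression (e.g. "hasSystemd() is
	// false"); without them, assert prints the parse error shown above.
	assert.Assert(t, hasSystemd())
}
```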

This increases the resulting image size by about 2 MB
on my system (from 758 to 760 MB).

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit 0deb18ab42)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-13 15:26:53 -07:00
Kir Kolyshkin
8d428458a2 TestIpcModeOlderClient: skip if client < 1.40
This test case requires not just daemon >= 1.40, but also
client API >= 1.40. If an older client is used, we get a
failure from the very first check:

> ipcmode_linux_test.go:313: assertion failed: shareable (string) != private (string)
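
A hedged sketch of such a guard (versions.LessThan is from the moby API utilities; the exact check in the test may differ):

```
package example

import (
	"testing"

	"github.com/docker/docker/api/types/versions"
)

// skipIfOldClient skips the test when the negotiated client API
// version is too old to exercise the new behavior.
func skipIfOldClient(t *testing.T, clientAPIVersion string) {
	if versions.LessThan(clientAPIVersion, "1.40") {
		t.Skip("requires client API version >= 1.40")
	}
}
```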

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit 1ada1c8391)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-13 15:26:34 -07:00
Sebastiaan van Stijn
5f60a56544 Skip TestImagesFilterMultiReference on API < v1.40
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 83ac2b4c13)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-13 15:26:22 -07:00
Sebastiaan van Stijn
a3b4e92d66 Skip TestUUIDGeneration on API < v1.40
Older versions did not use a UUID as ID

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 05bd9958f2)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-13 15:26:10 -07:00
Sebastiaan van Stijn
cedf201aef Skip TestPingCacheHeaders on API < v1.40
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit d080a866cc)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-13 15:25:55 -07:00
Sebastiaan van Stijn
545bc6b4d8 Skip TestBuildWithEmptyDockerfile on API < v1.40
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 0e7b46aafe)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-13 15:25:47 -07:00
Sebastiaan van Stijn
620d9d3c75 Fix TestVolumesCreateAndList when running against a shared daemon
The daemon may already have other volumes, so filter those out
when running the test.
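
A minimal sketch of that filtering (names are illustrative, not the test's actual code):

```
package example

// ownVolumes keeps only the volumes this test created, so volumes
// that already exist on a shared daemon don't affect the assertions.
func ownVolumes(all []string, created map[string]bool) []string {
	var out []string
	for _, name := range all {
		if created[name] {
			out = append(out, name)
		}
	}
	return out
}
```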

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 566eea13e6)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-13 15:25:34 -07:00
Sebastiaan van Stijn
e1b045c25e Remove TestContainerAPICreateWithHostName
TestNISDomainname in the integration suite covers this

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 2b5880c2eb)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-13 15:25:25 -07:00
Sebastiaan van Stijn
11e2802015 Skip TestNISDomainname on API < 1.40
Older versions of the daemon would concatenate hostname and
domainname, so hostname "foobar" and domainname "baz.cyphar.com"
would produce `foobar.baz.cyphar.com` as the hostname.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit c91c3776ea)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-13 15:25:17 -07:00
Sebastiaan van Stijn
cb8d67505d Dockerfile.e2e: builder: change output directory to simplify copy
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit b73e3407e3)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-13 15:25:07 -07:00
Sebastiaan van Stijn
7d3405b4ba Dockerfile.e2e: move "contrib" to a separate build-stage
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 3ededb850f)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-13 15:24:58 -07:00
Sebastiaan van Stijn
d36c7de19e Dockerfile.e2e: move dockercli to a separate build-stage
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit e7784a6c7e)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-13 15:24:50 -07:00
Sebastiaan van Stijn
6605a26c75 Dockerfile.e2e: use /build to be consistent with main Dockerfile
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 045beed6c8)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-13 15:24:41 -07:00
Sebastiaan van Stijn
ce9cabf0f0 Dockerfile.e2e: re-order steps for caching
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 63aefbfbca)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-13 15:24:32 -07:00
Sebastiaan van Stijn
dc6d1ac663 Dockerfile.e2e: move frozen-images to a separate stage
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 5554bd1a7b)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-13 15:24:24 -07:00
Sebastiaan van Stijn
1fdd24579c Dockerfile.e2e: use alpine 3.9
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 20262688df)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-13 15:24:15 -07:00
Sebastiaan van Stijn
3afbf83cc5 Dockerfile.e2e fix TestBuildPreserveOwnership
The Dockerfile missed some fixtures, which caused this test
to fail when running from this image.

I also noticed some other fixtures missing in integration-cli,
where the image had symlinks to some certificates, but the
original files were not included:

```
|-- integration-cli
    |-- fixtures
    |   |-- auth
    |   |   `-- docker-credential-shell-test
    |   |-- credentialspecs
    |   |   `-- valid.json
    |   |-- https
    |   |   |-- ca.pem -> ../../../integration/testdata/https/ca.pem
    |   |   |-- client-cert.pem -> ../../../integration/testdata/https/client-cert.pem
    |   |   |-- client-key.pem -> ../../../integration/testdata/https/client-key.pem
    |   |   |-- client-rogue-cert.pem
    |   |   |-- client-rogue-key.pem
    |   |   |-- server-cert.pem -> ../../../integration/testdata/https/server-cert.pem
    |   |   |-- server-key.pem -> ../../../integration/testdata/https/server-key.pem
    |   |   |-- server-rogue-cert.pem
    |   |   `-- server-rogue-key.pem
```

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 48fd0e921c)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-13 15:23:48 -07:00
Ian Campbell
61a234d562 client: do not fallback to GET if HEAD on _ping fail to connect
When we see an `ECONNREFUSED` (or equivalent) from an attempted `HEAD` on the
`/_ping` endpoint, there is no point in trying again with `GET`, since the
server is not responding or available at all.

Once vendored into the cli this will partially mitigate https://github.com/docker/cli/issues/1739
("Docker commands take 1 minute to timeout if context endpoint is unreachable")
by cutting the effective timeout in half.
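
The idea, as a minimal Go sketch (the real client code differs; this only illustrates the decision):

```
package example

import (
	"errors"
	"net/http"
	"syscall"
)

// ping falls back from HEAD to GET only when the server answered at
// all; on a refused connection a GET cannot succeed either.
func ping(c *http.Client, url string) (*http.Response, error) {
	resp, err := c.Head(url)
	if err == nil {
		return resp, nil
	}
	if errors.Is(err, syscall.ECONNREFUSED) {
		return nil, err // server unreachable: fail fast
	}
	// e.g. the server or an intermediary rejected HEAD
	return c.Get(url)
}
```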

Signed-off-by: Ian Campbell <ijc@docker.com>
(cherry picked from commit 8c8457b0f2)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-13 15:17:51 -07:00
7316 changed files with 537341 additions and 1082454 deletions

.DEREK.yml Normal file

@@ -0,0 +1,21 @@
curators:
- aboch
- alexellis
- andrewhsu
- anonymuse
- chanwit
- ehazlett
- fntlnz
- gianarb
- kolyshkin
- mgoelzer
- olljanat
- programmerq
- rheinwein
- ripcurld0
- thajeztah
features:
- comments
- pr_description_required


@@ -1,4 +1,7 @@
bundles
.gopath
vendor/pkg
.go-pkg-cache
.git
bundles/
cli/winresources/**/winres.json
cli/winresources/**/*.syso
hack/integration-cli-on-swarm/integration-cli-on-swarm

.gitattributes vendored

@@ -1,3 +0,0 @@
Dockerfile* linguist-language=Dockerfile
vendor.mod linguist-language=Go-Module
vendor.sum linguist-language=Go-Checksums

.github/CODEOWNERS vendored

@@ -6,10 +6,12 @@
builder/** @tonistiigi
contrib/mkimage/** @tianon
daemon/graphdriver/devmapper/** @rhvgoyal
daemon/graphdriver/lcow/** @johnstep @jhowardmsft
daemon/graphdriver/overlay/** @dmcgowan
daemon/graphdriver/overlay2/** @dmcgowan
daemon/graphdriver/windows/** @johnstep
daemon/graphdriver/windows/** @johnstep @jhowardmsft
daemon/logger/awslogs/** @samuelkarp
hack/** @tianon
hack/integration-cli-on-swarm/** @AkihiroSuda
plugin/** @cpuguy83
project/** @thaJeztah


@@ -1,27 +0,0 @@
name: 'Setup Runner'
description: 'Composite action to set up the GitHub Runner for jobs in the test.yml workflow'
runs:
using: composite
steps:
- run: |
sudo modprobe ip_vs
sudo modprobe ipv6
sudo modprobe ip6table_filter
sudo modprobe -r overlay
sudo modprobe overlay redirect_dir=off
shell: bash
- run: |
if [ ! -e /etc/docker/daemon.json ]; then
echo '{}' | tee /etc/docker/daemon.json >/dev/null
fi
DOCKERD_CONFIG=$(jq '.+{"experimental":true,"live-restore":true,"ipv6":true,"fixed-cidr-v6":"2001:db8:1::/64"}' /etc/docker/daemon.json)
sudo tee /etc/docker/daemon.json <<<"$DOCKERD_CONFIG" >/dev/null
sudo service docker restart
shell: bash
- run: |
./contrib/check-config.sh || true
shell: bash
- run: |
docker info
shell: bash


@@ -1,48 +0,0 @@
# reusable workflow
name: .dco
# TODO: hide reusable workflow from the UI. Tracked in https://github.com/community/community/discussions/12025
on:
workflow_call:
env:
ALPINE_VERSION: 3.16
jobs:
run:
runs-on: ubuntu-20.04
steps:
-
name: Checkout
uses: actions/checkout@v3
with:
fetch-depth: 0
-
name: Dump context
uses: actions/github-script@v6
with:
script: |
console.log(JSON.stringify(context, null, 2));
-
name: Get base ref
id: base-ref
uses: actions/github-script@v6
with:
result-encoding: string
script: |
if (/^refs\/pull\//.test(context.ref) && context.payload?.pull_request?.base?.ref != undefined) {
return context.payload.pull_request.base.ref;
}
return context.ref.replace(/^refs\/heads\//g, '');
-
name: Validate
run: |
docker run --rm \
-v "$(pwd):/workspace" \
-e VALIDATE_REPO \
-e VALIDATE_BRANCH \
alpine:${{ env.ALPINE_VERSION }} sh -c 'apk add --no-cache -q bash git openssh-client && git config --system --add safe.directory /workspace && cd /workspace && hack/validate/dco'
env:
VALIDATE_REPO: ${{ github.server_url }}/${{ github.repository }}.git
VALIDATE_BRANCH: ${{ steps.base-ref.outputs.result }}


@@ -1,498 +0,0 @@
# reusable workflow
name: .windows
# TODO: hide reusable workflow from the UI. Tracked in https://github.com/community/community/discussions/12025
on:
workflow_call:
inputs:
os:
required: true
type: string
send_coverage:
required: false
type: boolean
default: false
env:
GO_VERSION: 1.19.3
GOTESTLIST_VERSION: v0.2.0
TESTSTAT_VERSION: v0.1.3
WINDOWS_BASE_IMAGE: mcr.microsoft.com/windows/servercore
WINDOWS_BASE_TAG_2019: ltsc2019
WINDOWS_BASE_TAG_2022: ltsc2022
TEST_IMAGE_NAME: moby:test
TEST_CTN_NAME: moby
DOCKER_BUILDKIT: 0
ITG_CLI_MATRIX_SIZE: 6
jobs:
build:
runs-on: ${{ inputs.os }}
env:
GOPATH: ${{ github.workspace }}\go
GOBIN: ${{ github.workspace }}\go\bin
BIN_OUT: ${{ github.workspace }}\out
defaults:
run:
working-directory: ${{ env.GOPATH }}/src/github.com/docker/docker
steps:
-
name: Checkout
uses: actions/checkout@v3
with:
path: ${{ env.GOPATH }}/src/github.com/docker/docker
-
name: Env
run: |
Get-ChildItem Env: | Out-String
-
name: Init
run: |
New-Item -ItemType "directory" -Path "${{ github.workspace }}\go-build"
New-Item -ItemType "directory" -Path "${{ github.workspace }}\go\pkg\mod"
If ("${{ inputs.os }}" -eq "windows-2019") {
echo "WINDOWS_BASE_IMAGE_TAG=${{ env.WINDOWS_BASE_TAG_2019 }}" | Out-File -FilePath $Env:GITHUB_ENV -Encoding utf-8 -Append
} ElseIf ("${{ inputs.os }}" -eq "windows-2022") {
echo "WINDOWS_BASE_IMAGE_TAG=${{ env.WINDOWS_BASE_TAG_2022 }}" | Out-File -FilePath $Env:GITHUB_ENV -Encoding utf-8 -Append
}
-
name: Cache
uses: actions/cache@v3
with:
path: |
~\AppData\Local\go-build
~\go\pkg\mod
${{ github.workspace }}\go-build
${{ env.GOPATH }}\pkg\mod
key: ${{ inputs.os }}-${{ github.job }}-${{ hashFiles('**/vendor.sum') }}
restore-keys: |
${{ inputs.os }}-${{ github.job }}-
-
name: Docker info
run: |
docker info
-
name: Build base image
run: |
docker pull ${{ env.WINDOWS_BASE_IMAGE }}:${{ env.WINDOWS_BASE_IMAGE_TAG }}
docker tag ${{ env.WINDOWS_BASE_IMAGE }}:${{ env.WINDOWS_BASE_IMAGE_TAG }} microsoft/windowsservercore
docker build --build-arg GO_VERSION -t ${{ env.TEST_IMAGE_NAME }} -f Dockerfile.windows .
-
name: Build binaries
run: |
& docker run --name ${{ env.TEST_CTN_NAME }} -e "DOCKER_GITCOMMIT=${{ github.sha }}" `
-v "${{ github.workspace }}\go-build:C:\Users\ContainerAdministrator\AppData\Local\go-build" `
-v "${{ github.workspace }}\go\pkg\mod:C:\gopath\pkg\mod" `
${{ env.TEST_IMAGE_NAME }} hack\make.ps1 -Daemon -Client
-
name: Copy artifacts
run: |
New-Item -ItemType "directory" -Path "${{ env.BIN_OUT }}"
docker cp "${{ env.TEST_CTN_NAME }}`:c`:\gopath\src\github.com\docker\docker\bundles\docker.exe" ${{ env.BIN_OUT }}\
docker cp "${{ env.TEST_CTN_NAME }}`:c`:\gopath\src\github.com\docker\docker\bundles\dockerd.exe" ${{ env.BIN_OUT }}\
docker cp "${{ env.TEST_CTN_NAME }}`:c`:\gopath\bin\gotestsum.exe" ${{ env.BIN_OUT }}\
docker cp "${{ env.TEST_CTN_NAME }}`:c`:\containerd\bin\containerd.exe" ${{ env.BIN_OUT }}\
docker cp "${{ env.TEST_CTN_NAME }}`:c`:\containerd\bin\containerd-shim-runhcs-v1.exe" ${{ env.BIN_OUT }}\
-
name: Upload artifacts
uses: actions/upload-artifact@v3
with:
name: build-${{ inputs.os }}
path: ${{ env.BIN_OUT }}/*
if-no-files-found: error
retention-days: 2
unit-test:
runs-on: ${{ inputs.os }}
timeout-minutes: 120
env:
GOPATH: ${{ github.workspace }}\go
GOBIN: ${{ github.workspace }}\go\bin
defaults:
run:
working-directory: ${{ env.GOPATH }}/src/github.com/docker/docker
steps:
-
name: Checkout
uses: actions/checkout@v3
with:
path: ${{ env.GOPATH }}/src/github.com/docker/docker
-
name: Env
run: |
Get-ChildItem Env: | Out-String
-
name: Init
run: |
New-Item -ItemType "directory" -Path "${{ github.workspace }}\go-build"
New-Item -ItemType "directory" -Path "${{ github.workspace }}\go\pkg\mod"
New-Item -ItemType "directory" -Path "bundles"
If ("${{ inputs.os }}" -eq "windows-2019") {
echo "WINDOWS_BASE_IMAGE_TAG=${{ env.WINDOWS_BASE_TAG_2019 }}" | Out-File -FilePath $Env:GITHUB_ENV -Encoding utf-8 -Append
} ElseIf ("${{ inputs.os }}" -eq "windows-2022") {
echo "WINDOWS_BASE_IMAGE_TAG=${{ env.WINDOWS_BASE_TAG_2022 }}" | Out-File -FilePath $Env:GITHUB_ENV -Encoding utf-8 -Append
}
-
name: Cache
uses: actions/cache@v3
with:
path: |
~\AppData\Local\go-build
~\go\pkg\mod
${{ github.workspace }}\go-build
${{ env.GOPATH }}\pkg\mod
key: ${{ inputs.os }}-${{ github.job }}-${{ hashFiles('**/vendor.sum') }}
restore-keys: |
${{ inputs.os }}-${{ github.job }}-
-
name: Docker info
run: |
docker info
-
name: Build base image
run: |
docker pull ${{ env.WINDOWS_BASE_IMAGE }}:${{ env.WINDOWS_BASE_IMAGE_TAG }}
docker tag ${{ env.WINDOWS_BASE_IMAGE }}:${{ env.WINDOWS_BASE_IMAGE_TAG }} microsoft/windowsservercore
docker build --build-arg GO_VERSION -t ${{ env.TEST_IMAGE_NAME }} -f Dockerfile.windows .
-
name: Test
run: |
& docker run --name ${{ env.TEST_CTN_NAME }} -e "DOCKER_GITCOMMIT=${{ github.sha }}" `
-v "${{ github.workspace }}\go-build:C:\Users\ContainerAdministrator\AppData\Local\go-build" `
-v "${{ github.workspace }}\go\pkg\mod:C:\gopath\pkg\mod" `
-v "${{ env.GOPATH }}\src\github.com\docker\docker\bundles:C:\gopath\src\github.com\docker\docker\bundles" `
${{ env.TEST_IMAGE_NAME }} hack\make.ps1 -TestUnit
-
name: Send to Codecov
if: inputs.send_coverage
uses: codecov/codecov-action@v3
with:
working-directory: ${{ env.GOPATH }}\src\github.com\docker\docker
directory: bundles
env_vars: RUNNER_OS
flags: unit
-
name: Upload reports
if: always()
uses: actions/upload-artifact@v3
with:
name: ${{ inputs.os }}-unit-reports
path: ${{ env.GOPATH }}\src\github.com\docker\docker\bundles\*
unit-test-report:
runs-on: ubuntu-latest
if: always()
needs:
- unit-test
steps:
-
name: Set up Go
uses: actions/setup-go@v3
with:
go-version: ${{ env.GO_VERSION }}
-
name: Download artifacts
uses: actions/download-artifact@v3
with:
name: ${{ inputs.os }}-unit-reports
path: /tmp/artifacts
-
name: Install teststat
run: |
go install github.com/vearutop/teststat@${{ env.TESTSTAT_VERSION }}
-
name: Create summary
run: |
teststat -markdown $(find /tmp/artifacts -type f -name '*.json' -print0 | xargs -0) >> $GITHUB_STEP_SUMMARY
integration-test-prepare:
runs-on: ubuntu-latest
outputs:
matrix: ${{ steps.tests.outputs.matrix }}
steps:
-
name: Checkout
uses: actions/checkout@v3
-
name: Set up Go
uses: actions/setup-go@v3
with:
go-version: ${{ env.GO_VERSION }}
-
name: Install gotestlist
run:
go install github.com/crazy-max/gotestlist/cmd/gotestlist@${{ env.GOTESTLIST_VERSION }}
-
name: Create matrix
id: tests
working-directory: ./integration-cli
run: |
# Distribute integration-cli tests for the matrix in integration-test job.
# Also prepend ./... to the matrix. This is a special case to run "Test integration" step exclusively.
matrix="$(gotestlist -d ${{ env.ITG_CLI_MATRIX_SIZE }} ./...)"
matrix="$(echo "$matrix" | jq -c '. |= ["./..."] + .')"
echo "matrix=$matrix" >> $GITHUB_OUTPUT
-
name: Show matrix
run: |
echo ${{ steps.tests.outputs.matrix }}
integration-test:
runs-on: ${{ inputs.os }}
timeout-minutes: 120
needs:
- build
- integration-test-prepare
strategy:
fail-fast: false
matrix:
runtime:
- builtin
- containerd
test: ${{ fromJson(needs.integration-test-prepare.outputs.matrix) }}
env:
GOPATH: ${{ github.workspace }}\go
GOBIN: ${{ github.workspace }}\go\bin
BIN_OUT: ${{ github.workspace }}\out
defaults:
run:
working-directory: ${{ env.GOPATH }}/src/github.com/docker/docker
steps:
-
name: Checkout
uses: actions/checkout@v3
with:
path: ${{ env.GOPATH }}/src/github.com/docker/docker
-
name: Env
run: |
Get-ChildItem Env: | Out-String
-
name: Download artifacts
uses: actions/download-artifact@v3
with:
name: build-${{ inputs.os }}
path: ${{ env.BIN_OUT }}
-
name: Init
run: |
New-Item -ItemType "directory" -Path "bundles"
If ("${{ inputs.os }}" -eq "windows-2019") {
echo "WINDOWS_BASE_IMAGE_TAG=${{ env.WINDOWS_BASE_TAG_2019 }}" | Out-File -FilePath $Env:GITHUB_ENV -Encoding utf-8 -Append
} ElseIf ("${{ inputs.os }}" -eq "windows-2022") {
echo "WINDOWS_BASE_IMAGE_TAG=${{ env.WINDOWS_BASE_TAG_2022 }}" | Out-File -FilePath $Env:GITHUB_ENV -Encoding utf-8 -Append
}
Write-Output "${{ env.BIN_OUT }}" | Out-File -FilePath $env:GITHUB_PATH -Encoding utf8 -Append
-
# removes docker service that is currently installed on the runner. we
# could use Uninstall-Package but not yet available on Windows runners.
# more info: https://github.com/actions/virtual-environments/blob/d3a5bad25f3b4326c5666bab0011ac7f1beec95e/images/win/scripts/Installers/Install-Docker.ps1#L11
name: Removing current daemon
run: |
if (Get-Service docker -ErrorAction SilentlyContinue) {
$dockerVersion = (docker version -f "{{.Server.Version}}")
Write-Host "Current installed Docker version: $dockerVersion"
# remove service
Stop-Service -Force -Name docker
Remove-Service -Name docker
# removes event log entry. we could use "Remove-EventLog -LogName -Source docker"
# but this cmd is not available atm
$ErrorActionPreference = "SilentlyContinue"
& reg delete "HKLM\SYSTEM\CurrentControlSet\Services\EventLog\Application\docker" /f 2>&1 | Out-Null
$ErrorActionPreference = "Stop"
Write-Host "Service removed"
}
-
name: Starting containerd
if: matrix.runtime == 'containerd'
run: |
Write-Host "Generating config"
& "${{ env.BIN_OUT }}\containerd.exe" config default | Out-File "$env:TEMP\ctn.toml" -Encoding ascii
Write-Host "Creating service"
New-Item -ItemType Directory "$env:TEMP\ctn-root" -ErrorAction SilentlyContinue | Out-Null
New-Item -ItemType Directory "$env:TEMP\ctn-state" -ErrorAction SilentlyContinue | Out-Null
Start-Process -Wait "${{ env.BIN_OUT }}\containerd.exe" `
-ArgumentList "--log-level=debug", `
"--config=$env:TEMP\ctn.toml", `
"--address=\\.\pipe\containerd-containerd", `
"--root=$env:TEMP\ctn-root", `
"--state=$env:TEMP\ctn-state", `
"--log-file=$env:TEMP\ctn.log", `
"--register-service"
Write-Host "Starting service"
Start-Service -Name containerd
Start-Sleep -Seconds 5
Write-Host "Service started successfully!"
-
name: Starting test daemon
run: |
Write-Host "Creating service"
If ("${{ matrix.runtime }}" -eq "containerd") {
$runtimeArg="--containerd=\\.\pipe\containerd-containerd"
echo "DOCKER_WINDOWS_CONTAINERD_RUNTIME=1" | Out-File -FilePath $Env:GITHUB_ENV -Encoding utf-8 -Append
}
New-Item -ItemType Directory "$env:TEMP\moby-root" -ErrorAction SilentlyContinue | Out-Null
New-Item -ItemType Directory "$env:TEMP\moby-exec" -ErrorAction SilentlyContinue | Out-Null
Start-Process -Wait -NoNewWindow "${{ env.BIN_OUT }}\dockerd" `
-ArgumentList $runtimeArg, "--debug", `
"--host=npipe:////./pipe/docker_engine", `
"--data-root=$env:TEMP\moby-root", `
"--exec-root=$env:TEMP\moby-exec", `
"--pidfile=$env:TEMP\docker.pid", `
"--register-service"
Write-Host "Starting service"
Start-Service -Name docker
Write-Host "Service started successfully!"
-
name: Waiting for test daemon to start
run: |
$tries=20
Write-Host "Waiting for the test daemon to start..."
While ($true) {
$ErrorActionPreference = "SilentlyContinue"
& "${{ env.BIN_OUT }}\docker" version
$ErrorActionPreference = "Stop"
If ($LastExitCode -eq 0) {
break
}
$tries--
If ($tries -le 0) {
Throw "Failed to get a response from the daemon"
}
Write-Host -NoNewline "."
Start-Sleep -Seconds 1
}
Write-Host "Test daemon started and replied!"
env:
DOCKER_HOST: npipe:////./pipe/docker_engine
-
name: Docker info
run: |
& "${{ env.BIN_OUT }}\docker" info
env:
DOCKER_HOST: npipe:////./pipe/docker_engine
-
name: Building contrib/busybox
run: |
& "${{ env.BIN_OUT }}\docker" build -t busybox `
--build-arg WINDOWS_BASE_IMAGE `
--build-arg WINDOWS_BASE_IMAGE_TAG `
.\contrib\busybox\
env:
DOCKER_HOST: npipe:////./pipe/docker_engine
-
name: List images
run: |
& "${{ env.BIN_OUT }}\docker" images
env:
DOCKER_HOST: npipe:////./pipe/docker_engine
-
name: Set up Go
uses: actions/setup-go@v3
with:
go-version: ${{ env.GO_VERSION }}
-
name: Test integration
if: matrix.test == './...'
run: |
.\hack\make.ps1 -TestIntegration
env:
DOCKER_HOST: npipe:////./pipe/docker_engine
GO111MODULE: "off"
TEST_CLIENT_BINARY: ${{ env.BIN_OUT }}\docker
-
name: Test integration-cli
if: matrix.test != './...'
run: |
.\hack\make.ps1 -TestIntegrationCli
env:
DOCKER_HOST: npipe:////./pipe/docker_engine
GO111MODULE: "off"
TEST_CLIENT_BINARY: ${{ env.BIN_OUT }}\docker
INTEGRATION_TESTRUN: ${{ matrix.test }}
-
name: Send to Codecov
if: inputs.send_coverage
uses: codecov/codecov-action@v3
with:
working-directory: ${{ env.GOPATH }}\src\github.com\docker\docker
directory: bundles
env_vars: RUNNER_OS
flags: integration,${{ matrix.runtime }}
-
name: Docker info
run: |
& "${{ env.BIN_OUT }}\docker" info
env:
DOCKER_HOST: npipe:////./pipe/docker_engine
-
name: Stop containerd
if: always() && matrix.runtime == 'containerd'
run: |
$ErrorActionPreference = "SilentlyContinue"
Stop-Service -Force -Name containerd
$ErrorActionPreference = "Stop"
-
name: Containerd logs
if: always() && matrix.runtime == 'containerd'
run: |
Copy-Item "$env:TEMP\ctn.log" -Destination ".\bundles\containerd.log"
Get-Content "$env:TEMP\ctn.log" | Out-Host
-
name: Stop daemon
if: always()
run: |
$ErrorActionPreference = "SilentlyContinue"
Stop-Service -Force -Name docker
$ErrorActionPreference = "Stop"
-
# as the daemon is registered as a service we have to check the event
# logs against the docker provider.
name: Daemon event logs
if: always()
run: |
Get-WinEvent -ea SilentlyContinue `
-FilterHashtable @{ProviderName= "docker"; LogName = "application"} |
Select-Object -Property TimeCreated, @{N='Detailed Message'; E={$_.Message}} |
Sort-Object @{Expression="TimeCreated";Descending=$false} |
Select-Object -ExpandProperty 'Detailed Message' | Tee-Object -file ".\bundles\daemon.log"
-
name: Upload reports
if: always()
uses: actions/upload-artifact@v3
with:
name: ${{ inputs.os }}-integration-reports-${{ matrix.runtime }}
path: ${{ env.GOPATH }}\src\github.com\docker\docker\bundles\*
integration-test-report:
runs-on: ubuntu-latest
if: always()
needs:
- integration-test
strategy:
fail-fast: false
matrix:
runtime:
- builtin
- containerd
steps:
-
name: Set up Go
uses: actions/setup-go@v3
with:
go-version: ${{ env.GO_VERSION }}
-
name: Download artifacts
uses: actions/download-artifact@v3
with:
name: ${{ inputs.os }}-integration-reports-${{ matrix.runtime }}
path: /tmp/artifacts
-
name: Install teststat
run: |
go install github.com/vearutop/teststat@${{ env.TESTSTAT_VERSION }}
-
name: Create summary
run: |
teststat -markdown $(find /tmp/artifacts -type f -name '*.json' -print0 | xargs -0) >> $GITHUB_STEP_SUMMARY


@@ -1,113 +0,0 @@
name: buildkit
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
on:
workflow_dispatch:
push:
branches:
- 'master'
- '[0-9]+.[0-9]{2}'
pull_request:
env:
BUNDLES_OUTPUT: ./bundles
jobs:
validate-dco:
uses: ./.github/workflows/.dco.yml
build:
runs-on: ubuntu-20.04
needs:
- validate-dco
steps:
-
name: Checkout
uses: actions/checkout@v3
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
-
name: Build
uses: docker/bake-action@v2
with:
targets: binary
-
name: Upload artifacts
uses: actions/upload-artifact@v3
with:
name: binary
path: ${{ env.BUNDLES_OUTPUT }}
if-no-files-found: error
retention-days: 1
test:
runs-on: ubuntu-20.04
timeout-minutes: 120
needs:
- build
strategy:
fail-fast: false
matrix:
pkg:
- client
- cmd/buildctl
- solver
- frontend
- frontend/dockerfile
typ:
- integration
steps:
-
name: Checkout
uses: actions/checkout@v3
with:
path: moby
-
name: BuildKit ref
run: |
./hack/go-mod-prepare.sh
# FIXME(thaJeztah) temporarily overriding version to use for tests; remove with the next release of buildkit
# echo "BUILDKIT_REF=$(./hack/buildkit-ref)" >> $GITHUB_ENV
echo "BUILDKIT_REF=4febae4f874bd8ef52dec30e988c8fe0bc96b3b9" >> $GITHUB_ENV
working-directory: moby
-
name: Checkout BuildKit ${{ env.BUILDKIT_REF }}
uses: actions/checkout@v3
with:
repository: "moby/buildkit"
ref: ${{ env.BUILDKIT_REF }}
path: buildkit
-
name: Set up QEMU
uses: docker/setup-qemu-action@v2
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
-
name: Download binary artifacts
uses: actions/download-artifact@v3
with:
name: binary
path: ./buildkit/build/moby/
-
name: Update daemon.json
run: |
sudo rm /etc/docker/daemon.json
sudo service docker restart
docker version
docker info
-
name: Test
run: |
./hack/test ${{ matrix.typ }}
env:
CONTEXT: "."
TEST_DOCKERD: "1"
TEST_DOCKERD_BINARY: "./build/moby/binary-daemon/dockerd"
TESTPKGS: "./${{ matrix.pkg }}"
TESTFLAGS: "-v --parallel=1 --timeout=30m --run=//worker=dockerd$"
working-directory: buildkit


@@ -1,102 +0,0 @@
name: ci
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
on:
workflow_dispatch:
push:
branches:
- 'master'
- '[0-9]+.[0-9]+'
tags:
- 'v*'
pull_request:
env:
BUNDLES_OUTPUT: ./bundles
jobs:
validate-dco:
uses: ./.github/workflows/.dco.yml
build:
runs-on: ubuntu-20.04
needs:
- validate-dco
strategy:
fail-fast: false
matrix:
target:
- binary
- dynbinary
steps:
-
name: Checkout
uses: actions/checkout@v3
with:
fetch-depth: 0
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
-
name: Build
uses: docker/bake-action@v2
with:
targets: ${{ matrix.target }}
-
name: Upload artifacts
uses: actions/upload-artifact@v3
with:
name: ${{ matrix.target }}
path: ${{ env.BUNDLES_OUTPUT }}
if-no-files-found: error
retention-days: 7
cross:
runs-on: ubuntu-20.04
needs:
- validate-dco
strategy:
fail-fast: false
matrix:
platform:
- linux/amd64
- linux/arm/v5
- linux/arm/v6
- linux/arm/v7
- linux/arm64
- linux/ppc64le
- linux/s390x
- windows/amd64
- windows/arm64
steps:
-
name: Checkout
uses: actions/checkout@v3
with:
fetch-depth: 0
-
name: Prepare
run: |
platform=${{ matrix.platform }}
echo "PLATFORM_PAIR=${platform//\//-}" >> $GITHUB_ENV
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
-
name: Build
uses: docker/bake-action@v2
with:
targets: cross
env:
DOCKER_CROSSPLATFORMS: ${{ matrix.platform }}
-
name: Upload artifacts
uses: actions/upload-artifact@v3
with:
name: cross-${{ env.PLATFORM_PAIR }}
path: ${{ env.BUNDLES_OUTPUT }}
if-no-files-found: error
retention-days: 7


@@ -1,504 +0,0 @@
name: test
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
on:
workflow_dispatch:
push:
branches:
- 'master'
- '[0-9]+.[0-9]+'
tags:
- 'v*'
pull_request:
env:
GO_VERSION: 1.19.3
GOTESTLIST_VERSION: v0.2.0
TESTSTAT_VERSION: v0.1.3
ITG_CLI_MATRIX_SIZE: 6
DOCKER_EXPERIMENTAL: 1
DOCKER_GRAPHDRIVER: overlay2
jobs:
validate-dco:
uses: ./.github/workflows/.dco.yml
build-dev:
runs-on: ubuntu-20.04
needs:
- validate-dco
strategy:
fail-fast: false
matrix:
mode:
- ""
- systemd
steps:
-
name: Prepare
run: |
if [ "${{ matrix.mode }}" = "systemd" ]; then
echo "SYSTEMD=true" >> $GITHUB_ENV
fi
-
name: Checkout
uses: actions/checkout@v3
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
-
name: Build dev image
uses: docker/bake-action@v2
with:
targets: dev
set: |
*.cache-from=type=gha,scope=dev${{ matrix.mode }}
*.cache-to=type=gha,scope=dev${{ matrix.mode }},mode=max
*.output=type=cacheonly
validate-prepare:
runs-on: ubuntu-20.04
needs:
- validate-dco
outputs:
matrix: ${{ steps.scripts.outputs.matrix }}
steps:
-
name: Checkout
uses: actions/checkout@v3
-
name: Create matrix
id: scripts
run: |
scripts=$(jq -ncR '[inputs]' <<< "$(ls -I .validate -I all -I default -I dco -I golangci-lint.yml -I yamllint.yaml -A ./hack/validate/)")
echo "matrix=$scripts" >> $GITHUB_OUTPUT
-
name: Show matrix
run: |
echo ${{ steps.scripts.outputs.matrix }}
validate:
runs-on: ubuntu-20.04
needs:
- validate-prepare
- build-dev
strategy:
fail-fast: true
matrix:
script: ${{ fromJson(needs.validate-prepare.outputs.matrix) }}
steps:
-
name: Checkout
uses: actions/checkout@v3
with:
fetch-depth: 0
-
name: Set up runner
uses: ./.github/actions/setup-runner
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
-
name: Build dev image
uses: docker/bake-action@v2
with:
targets: dev
set: |
dev.cache-from=type=gha,scope=dev
-
name: Validate
run: |
make -o build validate-${{ matrix.script }}
unit:
runs-on: ubuntu-20.04
timeout-minutes: 120
needs:
- build-dev
steps:
-
name: Checkout
uses: actions/checkout@v3
-
name: Set up runner
uses: ./.github/actions/setup-runner
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
-
name: Build dev image
uses: docker/bake-action@v2
with:
targets: dev
set: |
dev.cache-from=type=gha,scope=dev
-
name: Test
run: |
make -o build test-unit
-
name: Prepare reports
if: always()
run: |
mkdir -p bundles /tmp/reports
find bundles -path '*/root/*overlay2' -prune -o -type f \( -name '*-report.json' -o -name '*.log' -o -name '*.out' -o -name '*.prof' -o -name '*-report.xml' \) -print | xargs sudo tar -czf /tmp/reports.tar.gz
tar -xzf /tmp/reports.tar.gz -C /tmp/reports
sudo chown -R $(id -u):$(id -g) /tmp/reports
tree -nh /tmp/reports
-
name: Send to Codecov
uses: codecov/codecov-action@v3
with:
directory: ./bundles
env_vars: RUNNER_OS
flags: unit
-
name: Upload reports
if: always()
uses: actions/upload-artifact@v3
with:
name: unit-reports
path: /tmp/reports/*
unit-report:
runs-on: ubuntu-20.04
if: always()
needs:
- unit
steps:
-
name: Set up Go
uses: actions/setup-go@v3
with:
go-version: ${{ env.GO_VERSION }}
-
name: Download reports
uses: actions/download-artifact@v3
with:
name: unit-reports
path: /tmp/reports
-
name: Install teststat
run: |
go install github.com/vearutop/teststat@${{ env.TESTSTAT_VERSION }}
-
name: Create summary
run: |
teststat -markdown $(find /tmp/reports -type f -name '*.json' -print0 | xargs -0) >> $GITHUB_STEP_SUMMARY
docker-py:
runs-on: ubuntu-20.04
timeout-minutes: 120
needs:
- build-dev
steps:
-
name: Checkout
uses: actions/checkout@v3
-
name: Set up runner
uses: ./.github/actions/setup-runner
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
-
name: Build dev image
uses: docker/bake-action@v2
with:
targets: dev
set: |
dev.cache-from=type=gha,scope=dev
-
name: Test
run: |
make -o build test-docker-py
-
name: Prepare reports
if: always()
run: |
mkdir -p bundles /tmp/reports
find bundles -path '*/root/*overlay2' -prune -o -type f \( -name '*-report.json' -o -name '*.log' -o -name '*.out' -o -name '*.prof' -o -name '*-report.xml' \) -print | xargs sudo tar -czf /tmp/reports.tar.gz
tar -xzf /tmp/reports.tar.gz -C /tmp/reports
sudo chown -R $(id -u):$(id -g) /tmp/reports
tree -nh /tmp/reports
-
name: Test daemon logs
if: always()
run: |
cat bundles/test-docker-py/docker.log
-
name: Upload reports
if: always()
uses: actions/upload-artifact@v3
with:
name: docker-py-reports
path: /tmp/reports/*
integration-flaky:
runs-on: ubuntu-20.04
timeout-minutes: 120
needs:
- build-dev
steps:
-
name: Checkout
uses: actions/checkout@v3
-
name: Set up runner
uses: ./.github/actions/setup-runner
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
-
name: Build dev image
uses: docker/bake-action@v2
with:
targets: dev
set: |
dev.cache-from=type=gha,scope=dev
-
name: Test
run: |
make -o build test-integration-flaky
env:
TEST_SKIP_INTEGRATION_CLI: 1
integration:
runs-on: ${{ matrix.os }}
timeout-minutes: 120
needs:
- build-dev
strategy:
fail-fast: false
matrix:
os:
- ubuntu-20.04
- ubuntu-22.04
mode:
- ""
- rootless
- systemd
#- rootless-systemd FIXME: https://github.com/moby/moby/issues/44084
steps:
-
name: Checkout
uses: actions/checkout@v3
-
name: Set up runner
uses: ./.github/actions/setup-runner
-
name: Prepare
run: |
CACHE_DEV_SCOPE=dev
if [[ "${{ matrix.mode }}" == *"rootless"* ]]; then
echo "DOCKER_ROOTLESS=1" >> $GITHUB_ENV
fi
if [[ "${{ matrix.mode }}" == *"systemd"* ]]; then
echo "SYSTEMD=true" >> $GITHUB_ENV
CACHE_DEV_SCOPE="${CACHE_DEV_SCOPE}systemd"
fi
echo "CACHE_DEV_SCOPE=${CACHE_DEV_SCOPE}" >> $GITHUB_ENV
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
-
name: Build dev image
uses: docker/bake-action@v2
with:
targets: dev
set: |
dev.cache-from=type=gha,scope=${{ env.CACHE_DEV_SCOPE }}
-
name: Test
run: |
make -o build test-integration
env:
TEST_SKIP_INTEGRATION_CLI: 1
TESTCOVERAGE: 1
-
name: Prepare reports
if: always()
run: |
reportsPath="/tmp/reports/${{ matrix.os }}"
if [ -n "${{ matrix.mode }}" ]; then
reportsPath="$reportsPath-${{ matrix.mode }}"
fi
mkdir -p bundles $reportsPath
find bundles -path '*/root/*overlay2' -prune -o -type f \( -name '*-report.json' -o -name '*.log' -o -name '*.out' -o -name '*.prof' -o -name '*-report.xml' \) -print | xargs sudo tar -czf /tmp/reports.tar.gz
tar -xzf /tmp/reports.tar.gz -C $reportsPath
sudo chown -R $(id -u):$(id -g) $reportsPath
tree -nh $reportsPath
-
name: Send to Codecov
uses: codecov/codecov-action@v3
with:
directory: ./bundles/test-integration
env_vars: RUNNER_OS
flags: integration,${{ matrix.mode }}
-
name: Test daemon logs
if: always()
run: |
cat bundles/test-integration/docker.log
-
name: Upload reports
if: always()
uses: actions/upload-artifact@v3
with:
name: integration-reports
path: /tmp/reports/*
integration-report:
runs-on: ubuntu-20.04
if: always()
needs:
- integration
steps:
-
name: Set up Go
uses: actions/setup-go@v3
with:
go-version: ${{ env.GO_VERSION }}
-
name: Download reports
uses: actions/download-artifact@v3
with:
name: integration-reports
path: /tmp/reports
-
name: Install teststat
run: |
go install github.com/vearutop/teststat@${{ env.TESTSTAT_VERSION }}
-
name: Create summary
run: |
teststat -markdown $(find /tmp/reports -type f -name '*.json' -print0 | xargs -0) >> $GITHUB_STEP_SUMMARY
integration-cli-prepare:
runs-on: ubuntu-20.04
needs:
- validate-dco
outputs:
matrix: ${{ steps.tests.outputs.matrix }}
steps:
-
name: Checkout
uses: actions/checkout@v3
-
name: Set up Go
uses: actions/setup-go@v3
with:
go-version: ${{ env.GO_VERSION }}
-
name: Install gotestlist
run:
go install github.com/crazy-max/gotestlist/cmd/gotestlist@${{ env.GOTESTLIST_VERSION }}
-
name: Create matrix
id: tests
working-directory: ./integration-cli
run: |
# Distribute integration-cli tests for the matrix in integration-test job.
# Also prepend ./... to the matrix. This is a special case to run "Test integration" step exclusively.
matrix="$(gotestlist -d ${{ env.ITG_CLI_MATRIX_SIZE }} ./...)"
matrix="$(echo "$matrix" | jq -c '. |= ["./..."] + .')"
echo "matrix=$matrix" >> $GITHUB_OUTPUT
-
name: Show matrix
run: |
echo ${{ steps.tests.outputs.matrix }}
integration-cli:
runs-on: ubuntu-20.04
timeout-minutes: 120
needs:
- build-dev
- integration-cli-prepare
strategy:
fail-fast: false
matrix:
test: ${{ fromJson(needs.integration-cli-prepare.outputs.matrix) }}
steps:
-
name: Checkout
uses: actions/checkout@v3
-
name: Set up runner
uses: ./.github/actions/setup-runner
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
-
name: Build dev image
uses: docker/bake-action@v2
with:
targets: dev
set: |
dev.cache-from=type=gha,scope=dev
-
name: Test
run: |
make -o build test-integration
env:
TEST_SKIP_INTEGRATION: 1
TESTCOVERAGE: 1
TESTFLAGS: "-test.run (${{ matrix.test }})/"
-
name: Prepare reports
if: always()
run: |
reportsPath=/tmp/reports/$(echo -n "${{ matrix.test }}" | sha256sum | cut -d " " -f 1)
mkdir -p bundles $reportsPath
echo "${{ matrix.test }}" | tr -s '|' '\n' | tee -a "$reportsPath/tests.txt"
find bundles -path '*/root/*overlay2' -prune -o -type f \( -name '*-report.json' -o -name '*.log' -o -name '*.out' -o -name '*.prof' -o -name '*-report.xml' \) -print | xargs sudo tar -czf /tmp/reports.tar.gz
tar -xzf /tmp/reports.tar.gz -C $reportsPath
sudo chown -R $(id -u):$(id -g) $reportsPath
tree -nh $reportsPath
-
name: Send to Codecov
uses: codecov/codecov-action@v3
with:
directory: ./bundles/test-integration
env_vars: RUNNER_OS
flags: integration-cli
-
name: Test daemon logs
if: always()
run: |
cat bundles/test-integration/docker.log
-
name: Upload reports
if: always()
uses: actions/upload-artifact@v3
with:
name: integration-cli-reports
path: /tmp/reports/*
integration-cli-report:
runs-on: ubuntu-20.04
if: always()
needs:
- integration-cli
steps:
-
name: Set up Go
uses: actions/setup-go@v3
with:
go-version: ${{ env.GO_VERSION }}
-
name: Download reports
uses: actions/download-artifact@v3
with:
name: integration-cli-reports
path: /tmp/reports
-
name: Install teststat
run: |
go install github.com/vearutop/teststat@${{ env.TESTSTAT_VERSION }}
-
name: Create summary
run: |
teststat -markdown $(find /tmp/reports -type f -name '*.json' -print0 | xargs -0) >> $GITHUB_STEP_SUMMARY


@@ -1,22 +0,0 @@
name: windows-2019
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
on:
schedule:
- cron: '0 10 * * *'
workflow_dispatch:
jobs:
validate-dco:
uses: ./.github/workflows/.dco.yml
run:
needs:
- validate-dco
uses: ./.github/workflows/.windows.yml
with:
os: windows-2019
send_coverage: false


@@ -1,25 +0,0 @@
name: windows-2022
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
on:
workflow_dispatch:
push:
branches:
- 'master'
- '[0-9]+.[0-9]+'
pull_request:
jobs:
validate-dco:
uses: ./.github/workflows/.dco.yml
run:
needs:
- validate-dco
uses: ./.github/workflows/.windows.yml
with:
os: windows-2022
send_coverage: true

.gitignore vendored

@@ -1,30 +1,24 @@
# If you want to ignore files created by your editor/tools, please consider a
# [global .gitignore](https://help.github.com/articles/ignoring-files).
*~
*.bak
# Docker project generated files to ignore
# if you want to ignore files created by your editor/tools,
# please consider a global .gitignore https://help.github.com/articles/ignoring-files
*.exe
*.exe~
*.orig
test.main
.*.swp
.DS_Store
thumbs.db
# local repository customization
.envrc
# a .bashrc may be added to customize the build environment
.bashrc
.editorconfig
# top-level go.mod is not meant to be checked in
/go.mod
# build artifacts
.gopath/
.go-pkg-cache/
autogen/
bundles/
cli/winresources/*/*.syso
cli/winresources/*/winres.json
cmd/dockerd/dockerd
contrib/builder/rpm/*/changelog
# ci artifacts
*.exe
*.gz
go-test-report.json
junit-report.xml
dockerversion/version_autogen.go
dockerversion/version_autogen_unix.go
vendor/pkg/
hack/integration-cli-on-swarm/integration-cli-on-swarm
coverage.txt
profile.out
test.main

.mailmap

@@ -1,24 +1,15 @@
# This file lists the canonical name and email of contributors, and is used to
# generate AUTHORS (in hack/generate-authors.sh).
#
# To find new duplicates, regenerate AUTHORS and scan for name duplicates, or
# run the following to find email duplicates:
# git log --format='%aE - %aN' | sort -uf | awk -v IGNORECASE=1 '$1 in a {print a[$1]; print}; {a[$1]=$0}'
#
# For an explanation of this file format, consult gitmailmap(5).
# Generate AUTHORS: hack/generate-authors.sh
# Tip for finding duplicates (besides scanning the output of AUTHORS for name
# duplicates that aren't also email duplicates): scan the output of:
# git log --format='%aE - %aN' | sort -uf
#
# For explanation on this file format: man git-shortlog
<21551195@zju.edu.cn> <hsinko@users.noreply.github.com>
<mr.wrfly@gmail.com> <wrfly@users.noreply.github.com>
Aaron L. Xu <liker.xu@foxmail.com>
Aaron L. Xu <liker.xu@foxmail.com> <likexu@harmonycloud.cn>
Aaron Lehmann <alehmann@netflix.com>
Aaron Lehmann <alehmann@netflix.com> <aaron.lehmann@docker.com>
Abhinandan Prativadi <aprativadi@gmail.com>
Abhinandan Prativadi <aprativadi@gmail.com> <abhi@docker.com>
Abhinandan Prativadi <aprativadi@gmail.com> abhi <user.email>
Abhishek Chanda <abhishek.becs@gmail.com>
Abhishek Chanda <abhishek.becs@gmail.com> <abhishek.chanda@emc.com>
Ada Mancini <ada@docker.com>
Adam Dobrawy <naczelnik@jawnosc.tk>
Adam Dobrawy <naczelnik@jawnosc.tk> <ad-m@users.noreply.github.com>
Abhinandan Prativadi <abhi@docker.com>
Adrien Gallouët <adrien@gallouet.fr> <angt@users.noreply.github.com>
Ahmed Kamal <email.ahmedkamal@googlemail.com>
Ahmet Alp Balkan <ahmetb@microsoft.com> <ahmetalpbalkan@gmail.com>
@@ -27,39 +18,23 @@ AJ Bowen <aj@soulshake.net> <aj@gandi.net>
AJ Bowen <aj@soulshake.net> <amy@gandi.net>
Akihiro Matsushima <amatsusbit@gmail.com> <amatsus@users.noreply.github.com>
Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp> <suda.akihiro@lab.ntt.co.jp>
Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp> <suda.kyoto@gmail.com>
Akshay Moghe <akshay.moghe@gmail.com>
Albin Kerouanton <albinker@gmail.com>
Albin Kerouanton <albinker@gmail.com> <albin@akerouanton.name>
Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp> <suda.akihiro@lab.ntt.co.jp>
Aleksa Sarai <asarai@suse.de>
Aleksa Sarai <asarai@suse.de> <asarai@suse.com>
Aleksa Sarai <asarai@suse.de> <cyphar@cyphar.com>
Aleksandrs Fadins <aleks@s-ko.net>
Alessandro Boch <aboch@tetrationanalytics.com>
Alessandro Boch <aboch@tetrationanalytics.com> <aboch@docker.com>
Alessandro Boch <aboch@tetrationanalytics.com> <aboch@socketplane.io>
Alessandro Boch <aboch@tetrationanalytics.com> <aboch@users.noreply.github.com>
Alex Chan <alex@alexwlchan.net>
Alex Chan <alex@alexwlchan.net> <alex.chan@metaswitch.com>
Alex Chen <alexchenunix@gmail.com> <root@localhost.localdomain>
Alex Ellis <alexellis2@gmail.com>
Alex Goodman <wagoodman@gmail.com> <wagoodman@users.noreply.github.com>
Alexander Larsson <alexl@redhat.com> <alexander.larsson@gmail.com>
Alexander Morozov <lk4d4math@gmail.com>
Alexander Morozov <lk4d4math@gmail.com> <lk4d4@docker.com>
Alexander Morozov <lk4d4@docker.com>
Alexander Morozov <lk4d4@docker.com> <lk4d4math@gmail.com>
Alexandre Beslic <alexandre.beslic@gmail.com> <abronan@docker.com>
Alexandre González <agonzalezro@gmail.com>
Alexis Ries <ries.alexis@gmail.com>
Alexis Ries <ries.alexis@gmail.com> <alexis.ries.ext@orange.com>
Alexis Thomas <fr.alexisthomas@gmail.com>
Alicia Lauerman <alicia@eta.im> <allydevour@me.com>
Allen Sun <allensun.shl@alibaba-inc.com> <allen.sun@daocloud.io>
Allen Sun <allensun.shl@alibaba-inc.com> <shlallen1990@gmail.com>
Anca Iordache <anca.iordache@docker.com>
Andrea Denisse Gómez <crypto.andrea@protonmail.ch>
Andrew Kim <taeyeonkim90@gmail.com>
Andrew Kim <taeyeonkim90@gmail.com> <akim01@fortinet.com>
Andrew Weiss <andrew.weiss@docker.com> <andrew.weiss@microsoft.com>
Andrew Weiss <andrew.weiss@docker.com> <andrew.weiss@outlook.com>
Andrey Kolomentsev <andrey.kolomentsev@docker.com>
@@ -67,8 +42,6 @@ Andrey Kolomentsev <andrey.kolomentsev@docker.com> <andrey.kolomentsev@gmail.com
André Martins <aanm90@gmail.com> <martins@noironetworks.com>
Andy Rothfusz <github@developersupport.net> <github@metaliveblog.com>
Andy Smith <github@anarkystic.com>
Andy Zhang <andy.zhangtao@hotmail.com>
Andy Zhang <andy.zhangtao@hotmail.com> <ztao@tibco-support.com>
Ankush Agarwal <ankushagarwal11@gmail.com> <ankushagarwal@users.noreply.github.com>
Antonio Murdaca <antonio.murdaca@gmail.com> <amurdaca@redhat.com>
Antonio Murdaca <antonio.murdaca@gmail.com> <me@runcom.ninja>
@@ -78,45 +51,30 @@ Antonio Murdaca <antonio.murdaca@gmail.com> <runcom@users.noreply.github.com>
Anuj Bahuguna <anujbahuguna.dev@gmail.com>
Anuj Bahuguna <anujbahuguna.dev@gmail.com> <abahuguna@fiberlink.com>
Anusha Ragunathan <anusha.ragunathan@docker.com> <anusha@docker.com>
Anyu Wang <wanganyu@outlook.com>
Arko Dasgupta <arko@tetrate.io>
Arko Dasgupta <arko@tetrate.io> <arko.dasgupta@docker.com>
Arko Dasgupta <arko@tetrate.io> <arkodg@users.noreply.github.com>
Arnaud Porterie <icecrime@gmail.com>
Arnaud Porterie <icecrime@gmail.com> <arnaud.porterie@docker.com>
Arnaud Rebillout <arnaud.rebillout@collabora.com>
Arnaud Rebillout <arnaud.rebillout@collabora.com> <elboulangero@gmail.com>
Arnaud Porterie <arnaud.porterie@docker.com>
Arnaud Porterie <arnaud.porterie@docker.com> <icecrime@gmail.com>
Arthur Gautier <baloo@gandi.net> <superbaloo+registrations.github@superbaloo.net>
Artur Meyster <arthurfbi@yahoo.com>
Avi Miller <avi.miller@oracle.com> <avi.miller@gmail.com>
Ben Bonnefoy <frenchben@docker.com>
Ben Golub <ben.golub@dotcloud.com>
Ben Toews <mastahyeti@gmail.com> <mastahyeti@users.noreply.github.com>
Benny Ng <benny.tpng@gmail.com>
Benoit Chesneau <bchesneau@gmail.com>
Bevisy Zhang <binbin36520@gmail.com>
Bhiraj Butala <abhiraj.butala@gmail.com>
Bhumika Bayani <bhumikabayani@gmail.com>
Bilal Amarni <bilal.amarni@gmail.com> <bamarni@users.noreply.github.com>
Bill Wang <ozbillwang@gmail.com> <SydOps@users.noreply.github.com>
Bily Zhang <xcoder@tenxcloud.com>
Bill Wang <ozbillwang@gmail.com> <SydOps@users.noreply.github.com>
Bin Liu <liubin0329@gmail.com>
Bin Liu <liubin0329@gmail.com> <liubin0329@users.noreply.github.com>
Bingshen Wang <bingshen.wbs@alibaba-inc.com>
Boaz Shuster <ripcurld.github@gmail.com>
Bojun Zhu <bojun.zhu@foxmail.com>
Boqin Qin <bobbqqin@gmail.com>
Boshi Lian <farmer1992@gmail.com>
Brandon Philips <brandon.philips@coreos.com> <brandon@ifup.co>
Brandon Philips <brandon.philips@coreos.com> <brandon@ifup.org>
Brent Salisbury <brent.salisbury@docker.com> <brent@docker.com>
Brian Goff <cpuguy83@gmail.com>
Brian Goff <cpuguy83@gmail.com> <bgoff@cpuguy83-mbp.home>
Brian Goff <cpuguy83@gmail.com> <bgoff@cpuguy83-mbp.local>
Brian Goff <cpuguy83@gmail.com> <brian.goff@microsoft.com>
Brian Goff <cpuguy83@gmail.com> <cpuguy@hey.com>
Cameron Sparr <gh@sparr.email>
Carlos de Paula <me@carlosedp.com>
Chander Govindarajan <chandergovind@gmail.com>
Chao Wang <wangchao.fnst@cn.fujitsu.com> <chaowang@localhost.localdomain>
Charles Hooper <charles.hooper@dotcloud.com> <chooper@plumata.com>
@@ -128,16 +86,10 @@ Chen Qiu <cheney-90@hotmail.com> <21321229@zju.edu.cn>
Chengfei Shang <cfshang@alauda.io>
Chris Dias <cdias@microsoft.com>
Chris McKinnel <chris.mckinnel@tangentlabs.co.uk>
Chris Price <cprice@mirantis.com>
Chris Price <cprice@mirantis.com> <chris.price@docker.com>
Chris Telfer <ctelfer@docker.com>
Chris Telfer <ctelfer@docker.com> <ctelfer@users.noreply.github.com>
Christopher Biscardi <biscarch@sketcht.com>
Christopher Latham <sudosurootdev@gmail.com>
Christy Norman <christy@linux.vnet.ibm.com>
Chun Chen <ramichen@tencent.com> <chenchun.feed@gmail.com>
Corbin Coleman <corbin.coleman@docker.com>
Cristian Ariza <dev@cristianrz.com>
Cristian Staretu <cristian.staretu@gmail.com>
Cristian Staretu <cristian.staretu@gmail.com> <unclejack@users.noreply.github.com>
Cristian Staretu <cristian.staretu@gmail.com> <unclejacksons@gmail.com>
@@ -171,29 +123,18 @@ David M. Karr <davidmichaelkarr@gmail.com>
David Sheets <dsheets@docker.com> <sheets@alum.mit.edu>
David Sissitka <me@dsissitka.com>
David Williamson <david.williamson@docker.com> <davidwilliamson@users.noreply.github.com>
Derek Ch <denc716@gmail.com>
Derek McGowan <derek@mcg.dev>
Derek McGowan <derek@mcg.dev> <derek@mcgstyle.net>
Deshi Xiao <dxiao@redhat.com> <dsxiao@dataman-inc.com>
Deshi Xiao <dxiao@redhat.com> <xiaods@gmail.com>
Dhilip Kumars <dhilip.kumar.s@huawei.com>
Diego Siqueira <dieg0@live.com>
Diogo Monica <diogo@docker.com> <diogo.monica@gmail.com>
Dmitry Sharshakov <d3dx12.xx@gmail.com>
Dmitry Sharshakov <d3dx12.xx@gmail.com> <sh7dm@outlook.com>
Dmytro Iakovliev <dmytro.iakovliev@zodiacsystems.com>
Dominic Yin <yindongchao@inspur.com>
Dominik Honnef <dominik@honnef.co> <dominikh@fork-bomb.org>
Doug Davis <dug@us.ibm.com> <duglin@users.noreply.github.com>
Doug Tangren <d.tangren@gmail.com>
Drew Erny <derny@mirantis.com>
Drew Erny <derny@mirantis.com> <drew.erny@docker.com>
Elan Ruusamäe <glen@pld-linux.org>
Elan Ruusamäe <glen@pld-linux.org> <glen@delfi.ee>
Elango Sivanandam <elango.siva@docker.com>
Elango Sivanandam <elango.siva@docker.com> <elango@docker.com>
Eli Uriegas <seemethere101@gmail.com>
Eli Uriegas <seemethere101@gmail.com> <eli.uriegas@docker.com>
Eric G. Noriega <enoriega@vizuri.com> <egnoriega@users.noreply.github.com>
Eric Hanchrow <ehanchrow@ine.com> <eric.hanchrow@gmail.com>
Eric Rosenberg <ehaydenr@gmail.com> <ehaydenr@users.noreply.github.com>
@@ -215,10 +156,8 @@ Feng Yan <fy2462@gmail.com>
Fengtu Wang <wangfengtu@huawei.com> <wangfengtu@huawei.com>
Francisco Carriedo <fcarriedo@gmail.com>
Frank Rosquin <frank.rosquin+github@gmail.com> <frank.rosquin@gmail.com>
Frank Yang <yyb196@gmail.com>
Frederick F. Kautz IV <fkautz@redhat.com> <fkautz@alumni.cmu.edu>
Fu JinLin <withlin@yeah.net>
Gabriel Goller <gabrielgoller123@gmail.com>
Gabriel Nicolas Avellaneda <avellaneda.gabriel@gmail.com>
Gaetan de Villele <gdevillele@gmail.com>
Gang Qiao <qiaohai8866@gmail.com> <1373319223@qq.com>
@@ -229,70 +168,43 @@ Giampaolo Mancini <giampaolo@trampolineup.com>
Giovan Isa Musthofa <giovanism@outlook.co.id>
Gopikannan Venugopalsamy <gopikannan.venugopalsamy@gmail.com>
Gou Rao <gou@portworx.com> <gourao@users.noreply.github.com>
Grant Millar <rid@cylo.io>
Grant Millar <rid@cylo.io> <grant@cylo.io>
Grant Millar <rid@cylo.io> <grant@seednet.eu>
Greg Stephens <greg@udon.org>
Guillaume J. Charmes <guillaume.charmes@docker.com> <charmes.guillaume@gmail.com>
Guillaume J. Charmes <guillaume.charmes@docker.com> <guillaume.charmes@dotcloud.com>
Guillaume J. Charmes <guillaume.charmes@docker.com> <guillaume@charmes.net>
Guillaume J. Charmes <guillaume.charmes@docker.com> <guillaume@docker.com>
Guillaume J. Charmes <guillaume.charmes@docker.com> <guillaume@dotcloud.com>
Gunadhya S. <6939749+gunadhya@users.noreply.github.com>
Guoqiang QI <guoqiang.qi1@gmail.com>
Guri <odg0318@gmail.com>
Gurjeet Singh <gurjeet@singh.im> <singh.gurjeet@gmail.com>
Gustav Sinder <gustav.sinder@gmail.com>
Günther Jungbluth <gunther@gameslabs.net>
Hakan Özler <hakan.ozler@kodcu.com>
Hao Shu Wei <haoshuwei24@gmail.com>
Hao Shu Wei <haoshuwei24@gmail.com> <haoshuwei1989@163.com>
Hao Shu Wei <haoshuwei24@gmail.com> <haosw@cn.ibm.com>
Hao Shu Wei <haosw@cn.ibm.com>
Hao Shu Wei <haosw@cn.ibm.com> <haoshuwei1989@163.com>
Harald Albers <github@albersweb.de> <albers@users.noreply.github.com>
Harald Niesche <harald@niesche.de>
Harold Cooper <hrldcpr@gmail.com>
Harry Zhang <harryz@hyper.sh>
Harry Zhang <harryz@hyper.sh> <harryzhang@zju.edu.cn>
Harry Zhang <harryz@hyper.sh> <resouer@163.com>
Harry Zhang <harryz@hyper.sh> <resouer@gmail.com>
Harry Zhang <resouer@163.com>
Harshal Patil <harshal.patil@in.ibm.com> <harche@users.noreply.github.com>
He Simei <hesimei@zju.edu.cn>
Helen Xie <chenjg@harmonycloud.cn>
Hiroyuki Sasagawa <hs19870702@gmail.com>
Hollie Teal <hollie@docker.com>
Hollie Teal <hollie@docker.com> <hollie.teal@docker.com>
Hollie Teal <hollie@docker.com> <hollietealok@users.noreply.github.com>
hsinko <21551195@zju.edu.cn> <hsinko@users.noreply.github.com>
Hu Keping <hukeping@huawei.com>
Hui Kang <hkang.sunysb@gmail.com>
Hui Kang <hkang.sunysb@gmail.com> <kangh@us.ibm.com>
Huu Nguyen <huu@prismskylabs.com> <whoshuu@gmail.com>
Hyeongkyu Lee <hyeongkyu.lee@navercorp.com>
Hyzhou Zhy <hyzhou.zhy@alibaba-inc.com>
Hyzhou Zhy <hyzhou.zhy@alibaba-inc.com> <1187766782@qq.com>
Ian Campbell <ian.campbell@docker.com>
Ian Campbell <ian.campbell@docker.com> <ijc@docker.com>
Ilya Khlopotov <ilya.khlopotov@gmail.com>
Iskander Sharipov <quasilyte@gmail.com>
Ivan Babrou <ibobrik@gmail.com>
Ivan Markin <sw@nogoegst.net> <twim@riseup.net>
Jack Laxson <jackjrabbit@gmail.com>
Jacob Atzen <jacob@jacobatzen.dk> <jatzen@gmail.com>
Jacob Tomlinson <jacob@tom.linson.uk> <jacobtomlinson@users.noreply.github.com>
Jaivish Kothari <janonymous.codevulture@gmail.com>
Jake Moshenko <jake@devtable.com>
Jakub Drahos <jdrahos@pulsepoint.com>
Jakub Drahos <jdrahos@pulsepoint.com> <jack.drahos@gmail.com>
James Nesbitt <jnesbitt@mirantis.com>
James Nesbitt <jnesbitt@mirantis.com> <james.nesbitt@wunderkraut.com>
Jamie Hannaford <jamie@limetree.org> <jamie.hannaford@rackspace.com>
Jan Götte <jaseg@jaseg.net>
Jana Radhakrishnan <mrjana@docker.com>
Jana Radhakrishnan <mrjana@docker.com> <mrjana@socketplane.io>
Javier Bassi <javierbassi@gmail.com>
Javier Bassi <javierbassi@gmail.com> <CrimsonGlory@users.noreply.github.com>
Jay Lim <jay@imjching.com>
Jay Lim <jay@imjching.com> <imjching@hotmail.com>
Jean Rouge <rougej+github@gmail.com> <jer329@cornell.edu>
Jean-Baptiste Barth <jeanbaptiste.barth@gmail.com>
Jean-Baptiste Dalido <jeanbaptiste@appgratis.com>
@@ -300,16 +212,15 @@ Jean-Tiare Le Bigot <jt@yadutaf.fr> <admin@jtlebi.fr>
Jeff Anderson <jeff@docker.com> <jefferya@programmerq.net>
Jeff Nickoloff <jeff.nickoloff@gmail.com> <jeff@allingeek.com>
Jeroen Franse <jeroenfranse@gmail.com>
Jessica Frazelle <jess@oxide.computer>
Jessica Frazelle <jess@oxide.computer> <acidburn@docker.com>
Jessica Frazelle <jess@oxide.computer> <acidburn@google.com>
Jessica Frazelle <jess@oxide.computer> <acidburn@microsoft.com>
Jessica Frazelle <jess@oxide.computer> <jess@docker.com>
Jessica Frazelle <jess@oxide.computer> <jess@mesosphere.com>
Jessica Frazelle <jess@oxide.computer> <jessfraz@google.com>
Jessica Frazelle <jess@oxide.computer> <jfrazelle@users.noreply.github.com>
Jessica Frazelle <jess@oxide.computer> <me@jessfraz.com>
Jessica Frazelle <jess@oxide.computer> <princess@docker.com>
Jessica Frazelle <acidburn@microsoft.com>
Jessica Frazelle <acidburn@microsoft.com> <acidburn@docker.com>
Jessica Frazelle <acidburn@microsoft.com> <acidburn@google.com>
Jessica Frazelle <acidburn@microsoft.com> <jess@docker.com>
Jessica Frazelle <acidburn@microsoft.com> <jess@mesosphere.com>
Jessica Frazelle <acidburn@microsoft.com> <jessfraz@google.com>
Jessica Frazelle <acidburn@microsoft.com> <jfrazelle@users.noreply.github.com>
Jessica Frazelle <acidburn@microsoft.com> <me@jessfraz.com>
Jessica Frazelle <acidburn@microsoft.com> <princess@docker.com>
Jian Liao <jliao@alauda.io>
Jiang Jinyang <jjyruby@gmail.com>
Jiang Jinyang <jjyruby@gmail.com> <jiangjinyang@outlook.com>
@@ -321,17 +232,15 @@ Joffrey F <joffrey@docker.com> <f.joffrey@gmail.com>
Joffrey F <joffrey@docker.com> <joffrey@dotcloud.com>
Johan Euphrosine <proppy@google.com> <proppy@aminche.com>
John Harris <john@johnharris.io>
John Howard <github@lowenna.com>
John Howard <github@lowenna.com> <10522484+lowenna@users.noreply.github.com>
John Howard <github@lowenna.com> <jhoward@microsoft.com>
John Howard <github@lowenna.com> <jhoward@ntdev.microsoft.com>
John Howard <github@lowenna.com> <jhowardmsft@users.noreply.github.com>
John Howard <github@lowenna.com> <john.howard@microsoft.com>
John Howard <github@lowenna.com> <john@lowenna.com>
John Howard (VM) <John.Howard@microsoft.com>
John Howard (VM) <John.Howard@microsoft.com> <jhoward@microsoft.com>
John Howard (VM) <John.Howard@microsoft.com> <jhoward@ntdev.microsoft.com>
John Howard (VM) <John.Howard@microsoft.com> <jhowardmsft@users.noreply.github.com>
John Howard (VM) <John.Howard@microsoft.com> <john.howard@microsoft.com>
John Stephens <johnstep@docker.com> <johnstep@users.noreply.github.com>
Jon Surrell <jon.surrell@gmail.com> <jon.surrell@automattic.com>
Jonathan Choy <jonathan.j.choy@gmail.com>
Jonathan Choy <jonathan.j.choy@gmail.com> <oni@tetsujinlabs.com>
Jon Surrell <jon.surrell@gmail.com> <jon.surrell@automattic.com>
Jordan Arentsen <blissdev@gmail.com>
Jordan Jennings <jjn2009@gmail.com> <jjn2009@users.noreply.github.com>
Jorit Kleine-Möllhoff <joppich@bricknet.de> <joppich@users.noreply.github.com>
@@ -347,12 +256,9 @@ Josh Wilson <josh.wilson@fivestars.com> <jcwilson@users.noreply.github.com>
Joyce Jang <mail@joycejang.com>
Julien Bordellier <julienbordellier@gmail.com> <git@julienbordellier.com>
Julien Bordellier <julienbordellier@gmail.com> <me@julienbordellier.com>
Jun Du <dujun5@huawei.com>
Justin Cormack <justin.cormack@docker.com>
Justin Cormack <justin.cormack@docker.com> <justin.cormack@unikernel.com>
Justin Cormack <justin.cormack@docker.com> <justin@specialbusservice.com>
Justin Keller <85903732+jk-vb@users.noreply.github.com>
Justin Keller <85903732+jk-vb@users.noreply.github.com> <jkeller@vb-jkeller-mbp.local>
Justin Simonelis <justin.p.simonelis@gmail.com> <justin.simonelis@PTS-JSIMON2.toronto.exclamation.com>
Justin Terry <juterry@microsoft.com>
Jérôme Petazzoni <jerome.petazzoni@docker.com> <jerome.petazzoni@dotcloud.com>
@@ -369,7 +275,6 @@ Ken Cochrane <kencochrane@gmail.com> <KenCochrane@gmail.com>
Ken Herner <kherner@progress.com> <chosenken@gmail.com>
Ken Reese <krrgithub@gmail.com>
Kenfe-Mickaël Laventure <mickael.laventure@gmail.com>
Kevin Alvarez <crazy-max@users.noreply.github.com>
Kevin Feyrer <kevin.feyrer@btinternet.com> <kevinfeyrer@users.noreply.github.com>
Kevin Kern <kaiwentan@harmonycloud.cn>
Kevin Meredith <kevin.m.meredith@gmail.com>
@@ -380,17 +285,11 @@ Konrad Kleine <konrad.wilhelm.kleine@gmail.com> <kwk@users.noreply.github.com>
Konstantin Gribov <grossws@gmail.com>
Konstantin Pelykh <kpelykh@zettaset.com>
Kotaro Yoshimatsu <kotaro.yoshimatsu@gmail.com>
Kunal Kushwaha <kushwaha_kunal_v7@lab.ntt.co.jp>
Kunal Kushwaha <kushwaha_kunal_v7@lab.ntt.co.jp> <btkushuwahak@KUNAL-PC.swh.swh.nttdata.co.jp>
Kunal Kushwaha <kushwaha_kunal_v7@lab.ntt.co.jp> <kunal.kushwaha@gmail.com>
Kyle Squizzato <ksquizz@gmail.com>
Kyle Squizzato <ksquizz@gmail.com> <kyle.squizzato@docker.com>
Lajos Papp <lajos.papp@sequenceiq.com> <lalyos@yahoo.com>
Lei Gong <lgong@alauda.io>
Lei Jitang <leijitang@huawei.com>
Lei Jitang <leijitang@huawei.com> <leijitang@gmail.com>
Lei Jitang <leijitang@huawei.com> <leijitang@outlook.com>
Leiiwang <u2takey@gmail.com>
Liang Mingqiang <mqliang.zju@gmail.com>
Liang-Chi Hsieh <viirya@gmail.com>
Liao Qingwei <liaoqingwei@huawei.com>
@@ -407,11 +306,8 @@ Lyn <energylyn@zju.edu.cn>
Lynda O'Leary <lyndaoleary29@gmail.com>
Lynda O'Leary <lyndaoleary29@gmail.com> <lyndaoleary@hotmail.com>
Ma Müller <mueller-ma@users.noreply.github.com>
Madhan Raj Mookkandy <MadhanRaj.Mookkandy@microsoft.com>
Madhan Raj Mookkandy <MadhanRaj.Mookkandy@microsoft.com> <madhanm@corp.microsoft.com>
Madhan Raj Mookkandy <MadhanRaj.Mookkandy@microsoft.com> <madhanm@microsoft.com>
Madhu Venugopal <mavenugo@gmail.com> <madhu@docker.com>
Madhu Venugopal <mavenugo@gmail.com> <madhu@socketplane.io>
Madhu Venugopal <madhu@socketplane.io> <madhu@docker.com>
Mageee <fangpuyi@foxmail.com> <21521230.zju.edu.cn>
Mansi Nahar <mmn4185@rit.edu> <mansi.nahar@macbookpro-mansinahar.local>
Mansi Nahar <mmn4185@rit.edu> <mansinahar@users.noreply.github.com>
@@ -424,12 +320,10 @@ Markan Patel <mpatel678@gmail.com>
Markus Kortlang <hyp3rdino@googlemail.com> <markus.kortlang@lhsystems.com>
Martin Redmond <redmond.martin@gmail.com> <martin@tinychat.com>
Martin Redmond <redmond.martin@gmail.com> <xgithub@redmond5.com>
Maru Newby <mnewby@thesprawl.net>
Mary Anthony <mary.anthony@docker.com> <mary@docker.com>
Mary Anthony <mary.anthony@docker.com> <moxieandmore@gmail.com>
Mary Anthony <mary.anthony@docker.com> moxiegirl <mary@docker.com>
Masato Ohba <over.rye@gmail.com>
Mathieu Paturel <mathieu.paturel@gmail.com>
Matt Bentley <matt.bentley@docker.com> <mbentley@mbentley.net>
Matt Schurenko <matt.schurenko@gmail.com>
Matt Williams <mattyw@me.com>
@@ -441,43 +335,30 @@ Matthias Kühnle <git.nivoc@neverbox.com> <kuehnle@online.de>
Mauricio Garavaglia <mauricio@medallia.com> <mauriciogaravaglia@gmail.com>
Maxwell <csuhp007@gmail.com>
Maxwell <csuhp007@gmail.com> <csuhqg@foxmail.com>
Menghui Chen <menghui.chen@alibaba-inc.com>
Michael Beskin <mrbeskin@gmail.com>
Michael Crosby <crosbymichael@gmail.com>
Michael Crosby <crosbymichael@gmail.com> <crosby.michael@gmail.com>
Michael Crosby <crosbymichael@gmail.com> <michael@crosbymichael.com>
Michael Crosby <crosbymichael@gmail.com> <michael@docker.com>
Michael Crosby <crosbymichael@gmail.com> <michael@thepasture.io>
Michael Crosby <michael@docker.com> <crosby.michael@gmail.com>
Michael Crosby <michael@docker.com> <crosbymichael@gmail.com>
Michael Crosby <michael@docker.com> <michael@crosbymichael.com>
Michał Gryko <github@odkurzacz.org>
Michael Hudson-Doyle <michael.hudson@canonical.com> <michael.hudson@linaro.org>
Michael Huettermann <michael@huettermann.net>
Michael Käufl <docker@c.michael-kaeufl.de> <michael-k@users.noreply.github.com>
Michael Nussbaum <michael.nussbaum@getbraintree.com>
Michael Nussbaum <michael.nussbaum@getbraintree.com> <code@getbraintree.com>
Michael Spetsiotis <michael_spets@hotmail.com>
Michael Stapelberg <michael+gh@stapelberg.de>
Michael Stapelberg <michael+gh@stapelberg.de> <stapelberg@google.com>
Michal Kostrzewa <michal.kostrzewa@codilime.com>
Michal Kostrzewa <michal.kostrzewa@codilime.com> <kostrzewa.michal@o2.pl>
Michal Minář <miminar@redhat.com>
Michał Gryko <github@odkurzacz.org>
Michiel de Jong <michiel@unhosted.org>
Mickaël Fortunato <morsi.morsicus@gmail.com>
Miguel Angel Alvarez Cabrerizo <doncicuto@gmail.com> <30386061+doncicuto@users.noreply.github.com>
Miguel Angel Fernández <elmendalerenda@gmail.com>
Mihai Borobocea <MihaiBorob@gmail.com> <MihaiBorobocea@gmail.com>
Mikael Davranche <mikael.davranche@corp.ovh.com>
Mikael Davranche <mikael.davranche@corp.ovh.com> <mikael.davranche@corp.ovh.net>
Mike Casas <mkcsas0@gmail.com> <mikecasas@users.noreply.github.com>
Mike Goelzer <mike.goelzer@docker.com> <mgoelzer@docker.com>
Milind Chawre <milindchawre@gmail.com>
Misty Stanley-Jones <misty@docker.com> <misty@apache.org>
Mohammad Banikazemi <MBanikazemi@gmail.com>
Mohammad Banikazemi <MBanikazemi@gmail.com> <mb@us.ibm.com>
Mohit Soni <mosoni@ebay.com> <mohitsoni1989@gmail.com>
Moorthy RS <rsmoorthy@gmail.com> <rsmoorthy@users.noreply.github.com>
Moysés Borges <moysesb@gmail.com>
Moysés Borges <moysesb@gmail.com> <moyses.furtado@wplex.com.br>
mrfly <mr.wrfly@gmail.com> <wrfly@users.noreply.github.com>
Nace Oroz <orkica@gmail.com>
Natasha Jarus <linuxmercedes@gmail.com>
Nathan LeClaire <nathan.leclaire@docker.com> <nathan.leclaire@gmail.com>
@@ -494,8 +375,6 @@ Oh Jinkyun <tintypemolly@gmail.com> <tintypemolly@Ohui-MacBook-Pro.local>
Oliver Reason <oli@overrateddev.co>
Olli Janatuinen <olli.janatuinen@gmail.com>
Olli Janatuinen <olli.janatuinen@gmail.com> <olljanat@users.noreply.github.com>
Onur Filiz <onur.filiz@microsoft.com>
Onur Filiz <onur.filiz@microsoft.com> <ofiliz@users.noreply.github.com>
Ouyang Liduo <oyld0210@163.com>
Patrick Stapleton <github@gdi2290.com>
Paul Liljenberg <liljenberg.paul@gmail.com> <letters@paulnotcom.se>
@@ -506,61 +385,41 @@ Peter Dave Hello <hsu@peterdavehello.org> <PeterDaveHello@users.noreply.github.c
Peter Jaffe <pjaffe@nevo.com>
Peter Nagy <xificurC@gmail.com> <pnagy@gratex.com>
Peter Waller <p@pwaller.net> <peter@scraperwiki.com>
Phil Estes <estesp@gmail.com>
Phil Estes <estesp@gmail.com> <estesp@amazon.com>
Phil Estes <estesp@gmail.com> <estesp@linux.vnet.ibm.com>
Phil Estes <estesp@linux.vnet.ibm.com> <estesp@gmail.com>
Philip Alexander Etling <paetling@gmail.com>
Philipp Gillé <philipp.gille@gmail.com> <philippgille@users.noreply.github.com>
Prasanna Gautam <prasannagautam@gmail.com>
Puneet Pruthi <puneet.pruthi@oracle.com>
Puneet Pruthi <puneet.pruthi@oracle.com> <puneetpruthi@gmail.com>
Qiang Huang <h.huangqiang@huawei.com>
Qiang Huang <h.huangqiang@huawei.com> <qhuang@10.0.2.15>
Qin TianHuan <tianhuan@bingotree.cn>
Ray Tsang <rayt@google.com> <saturnism@users.noreply.github.com>
Renaud Gaubert <rgaubert@nvidia.com> <renaud.gaubert@gmail.com>
Richard Scothern <richard.scothern@gmail.com>
Robert Terhaar <rterhaar@atlanticdynamic.com> <robbyt@users.noreply.github.com>
Roberto G. Hashioka <roberto.hashioka@docker.com> <roberto_hashioka@hotmail.com>
Roberto Muñoz Fernández <robertomf@gmail.com> <roberto.munoz.fernandez.contractor@bbva.com>
Robin Thoni <robin@rthoni.com>
Roman Dudin <katrmr@gmail.com> <decadent@users.noreply.github.com>
Rong Zhang <rongzhang@alauda.io>
Rongxiang Song <tinysong1226@gmail.com>
Rony Weng <ronyweng@synology.com>
Ross Boucher <rboucher@gmail.com>
Rui Cao <ruicao@alauda.io>
Runshen Zhu <runshen.zhu@gmail.com>
Ryan Stelly <ryan.stelly@live.com>
Ryoga Saito <contact@proelbtn.com>
Ryoga Saito <contact@proelbtn.com> <proelbtn@users.noreply.github.com>
Sainath Grandhi <sainath.grandhi@intel.com>
Sainath Grandhi <sainath.grandhi@intel.com> <saiallforums@gmail.com>
Sakeven Jiang <jc5930@sina.cn>
Samuel Karp <me@samuelkarp.com> <skarp@amazon.com>
Sandeep Bansal <sabansal@microsoft.com>
Sandeep Bansal <sabansal@microsoft.com> <msabansal@microsoft.com>
Santhosh Manohar <santhosh@docker.com>
Sargun Dhillon <sargun@netflix.com> <sargun@sargun.me>
Satoshi Tagomori <tagomoris@gmail.com>
Sean Lee <seanlee@tw.ibm.com> <scaleoutsean@users.noreply.github.com>
Sebastiaan van Stijn <github@gone.nl>
Sebastiaan van Stijn <github@gone.nl> <moby@example.com>
Sebastiaan van Stijn <github@gone.nl> <sebastiaan@ws-key-sebas3.dpi1.dpi>
Sebastiaan van Stijn <github@gone.nl> <thaJeztah@users.noreply.github.com>
Seongyeol Lim <seongyeol37@gmail.com>
Shaun Kaasten <shaunk@gmail.com>
Shawn Landden <shawn@churchofgit.com> <shawnlandden@gmail.com>
Shengbo Song <thomassong@tencent.com>
Shengbo Song <thomassong@tencent.com> <mymneo@163.com>
Shih-Yuan Lee <fourdollars@gmail.com>
Shishir Mahajan <shishir.mahajan@redhat.com> <smahajan@redhat.com>
Shu-Wai Chow <shu-wai.chow@seattlechildrens.org>
Shukui Yang <yangshukui@huawei.com>
Shuwei Hao <haosw@cn.ibm.com>
Shuwei Hao <haosw@cn.ibm.com> <haoshuwei24@gmail.com>
Sidhartha Mani <sidharthamn@gmail.com>
Sjoerd Langkemper <sjoerd-github@linuxonly.nl> <sjoerd@byte.nl>
Smark Meng <smark@freecoop.net>
Smark Meng <smark@freecoop.net> <smarkm@users.noreply.github.com>
Solomon Hykes <solomon@docker.com> <s@docker.com>
Solomon Hykes <solomon@docker.com> <solomon.hykes@dotcloud.com>
Solomon Hykes <solomon@docker.com> <solomon@dotcloud.com>
@@ -574,12 +433,9 @@ Stefan Berger <stefanb@linux.vnet.ibm.com>
Stefan Berger <stefanb@linux.vnet.ibm.com> <stefanb@us.ibm.com>
Stefan J. Wernli <swernli@microsoft.com> <swernli@ntdev.microsoft.com>
Stefan S. <tronicum@user.github.com>
Stefan Scherer <stefan.scherer@docker.com>
Stefan Scherer <stefan.scherer@docker.com> <scherer_stefan@icloud.com>
Stephan Spindler <shutefan@gmail.com> <shutefan@users.noreply.github.com>
Stephen Day <stevvooe@gmail.com>
Stephen Day <stevvooe@gmail.com> <stephen.day@docker.com>
Stephen Day <stevvooe@gmail.com> <stevvooe@users.noreply.github.com>
Stephen Day <stephen.day@docker.com>
Stephen Day <stephen.day@docker.com> <stevvooe@users.noreply.github.com>
Steve Desmond <steve@vtsv.ca> <stevedesmond-ca@users.noreply.github.com>
Sun Gengze <690388648@qq.com>
Sun Jianbo <wonderflow.sun@gmail.com>
@@ -591,48 +447,28 @@ Sven Dowideit <SvenDowideit@home.org.au> <SvenDowideit@fosiki.com>
Sven Dowideit <SvenDowideit@home.org.au> <SvenDowideit@home.org.au>
Sven Dowideit <SvenDowideit@home.org.au> <SvenDowideit@users.noreply.github.com>
Sven Dowideit <SvenDowideit@home.org.au> <¨SvenDowideit@home.org.au¨>
Sylvain Baubeau <lebauce@gmail.com>
Sylvain Baubeau <lebauce@gmail.com> <sbaubeau@redhat.com>
Sylvain Bellemare <sylvain@ascribe.io>
Sylvain Bellemare <sylvain@ascribe.io> <sylvain.bellemare@ezeep.com>
Takuto Sato <tockn.jp@gmail.com>
Tangi Colin <tangicolin@gmail.com>
Tejesh Mehta <tejesh.mehta@gmail.com> <tj@init.me>
Terry Chu <zue.hterry@gmail.com>
Terry Chu <zue.hterry@gmail.com> <jubosh.tw@gmail.com>
Thatcher Peskens <thatcher@docker.com>
Thatcher Peskens <thatcher@docker.com> <thatcher@dotcloud.com>
Thatcher Peskens <thatcher@docker.com> <thatcher@gmx.net>
Thiago Alves Silva <thiago.alves@aurea.com>
Thiago Alves Silva <thiago.alves@aurea.com> <thiagoalves@users.noreply.github.com>
Thomas Gazagnaire <thomas@gazagnaire.org> <thomas@gazagnaire.com>
Thomas Ledos <thomas.ledos92@gmail.com>
Thomas Léveil <thomasleveil@gmail.com>
Thomas Léveil <thomasleveil@gmail.com> <thomasleveil@users.noreply.github.com>
Tibor Vass <teabee89@gmail.com> <tibor@docker.com>
Tibor Vass <teabee89@gmail.com> <tiborvass@users.noreply.github.com>
Till Claassen <pixelistik@users.noreply.github.com>
Tim Bart <tim@fewagainstmany.com>
Tim Bosse <taim@bosboot.org> <maztaim@users.noreply.github.com>
Tim Potter <tpot@hpe.com>
Tim Potter <tpot@hpe.com> <tpot@Tims-MacBook-Pro.local>
Tim Ruffles <oi@truffles.me.uk> <timruffles@googlemail.com>
Tim Terhorst <mynamewastaken+git@gmail.com>
Tim Wagner <tim.wagner@freenet.ag>
Tim Wagner <tim.wagner@freenet.ag> <33624860+herrwagner@users.noreply.github.com>
Tim Zju <21651152@zju.edu.cn>
Timothy Hobbs <timothyhobbs@seznam.cz>
Toli Kuznets <toli@docker.com>
Tom Barlow <tomwbarlow@gmail.com>
Tom Denham <tom@tomdee.co.uk>
Tom Denham <tom@tomdee.co.uk> <tom.denham@metaswitch.com>
Tom Sweeney <tsweeney@redhat.com>
Tom Wilkie <tom.wilkie@gmail.com>
Tom Wilkie <tom.wilkie@gmail.com> <tom@weave.works>
Tõnis Tiigi <tonistiigi@gmail.com>
Trace Andreason <tandreason@gmail.com>
Trapier Marshall <tmarshall@mirantis.com>
Trapier Marshall <tmarshall@mirantis.com> <trapier.marshall@docker.com>
Trishna Guha <trishnaguha17@gmail.com>
Tristan Carel <tristan@cogniteev.com>
Tristan Carel <tristan@cogniteev.com> <tristan.carel@gmail.com>
@@ -646,25 +482,15 @@ Victor Vieux <victor.vieux@docker.com> <victor@docker.com>
Victor Vieux <victor.vieux@docker.com> <victor@dotcloud.com>
Victor Vieux <victor.vieux@docker.com> <victorvieux@gmail.com>
Victor Vieux <victor.vieux@docker.com> <vieux@docker.com>
Vikas Choudhary <choudharyvikas16@gmail.com>
Vikram bir Singh <vsingh@mirantis.com>
Vikram bir Singh <vsingh@mirantis.com> <vikrambir.singh@docker.com>
Viktor Vojnovski <viktor.vojnovski@amadeus.com> <vojnovski@gmail.com>
Vincent Batts <vbatts@redhat.com> <vbatts@hashbangbash.com>
Vincent Bernat <vincent@bernat.ch>
Vincent Bernat <vincent@bernat.ch> <bernat@luffy.cx>
Vincent Bernat <vincent@bernat.ch> <Vincent.Bernat@exoscale.ch>
Vincent Bernat <vincent@bernat.ch> <vincent@bernat.im>
Vincent Boulineau <vincent.boulineau@datadoghq.com>
Vincent Bernat <Vincent.Bernat@exoscale.ch> <bernat@luffy.cx>
Vincent Bernat <Vincent.Bernat@exoscale.ch> <vincent@bernat.im>
Vincent Demeester <vincent.demeester@docker.com> <vincent+github@demeester.fr>
Vincent Demeester <vincent.demeester@docker.com> <vincent@demeester.fr>
Vincent Demeester <vincent.demeester@docker.com> <vincent@sbr.pm>
Vishnu Kannan <vishnuk@google.com>
Vitaly Ostrosablin <vostrosablin@virtuozzo.com>
Vitaly Ostrosablin <vostrosablin@virtuozzo.com> <tmp6154@yandex.ru>
Vladimir Rutsky <altsysrq@gmail.com> <iamironbob@gmail.com>
Vladislav Kolesnikov <vkolesnikov@beget.ru>
Vladislav Kolesnikov <vkolesnikov@beget.ru> <prime@vladqa.ru>
Walter Stanish <walter@pratyeka.org>
Wang Chao <chao.wang@ucloud.cn>
Wang Chao <chao.wang@ucloud.cn> <wcwxyz@gmail.com>
@@ -676,24 +502,15 @@ Wang Yuexiao <wang.yuexiao@zte.com.cn>
Wayne Chang <wayne@neverfear.org>
Wayne Song <wsong@docker.com> <wsong@users.noreply.github.com>
Wei Wu <wuwei4455@gmail.com> cizixs <cizixs@163.com>
Wei-Ting Kuo <waitingkuo0527@gmail.com>
Wen Cheng Ma <wenchma@cn.ibm.com>
Wenjun Tang <tangwj2@lenovo.com> <dodia@163.com>
Wewang Xiaorenfine <wang.xiaoren@zte.com.cn>
Will Weaver <monkey@buildingbananas.com>
Wing-Kam Wong <wingkwong.code@gmail.com>
WuLonghui <wlh6666@qq.com>
Xian Chaobo <xianchaobo@huawei.com>
Xian Chaobo <xianchaobo@huawei.com> <jimmyxian2004@yahoo.com.cn>
Xianglin Gao <xlgao@zju.edu.cn>
Xianjie <guxianjie@gmail.com>
Xianjie <guxianjie@gmail.com> <datastream@datastream-laptop.local>
Xianlu Bird <xianlubird@gmail.com>
Xiao YongBiao <xyb4638@gmail.com>
Xiao Zhang <xiaozhang0210@hotmail.com>
Xiaodong Liu <liuxiaodong@loongson.cn>
Xiaodong Zhang <a4012017@sina.com>
Xiaohua Ding <xiao_hua_ding@sina.cn>
Xiaoyu Zhang <zhang.xiaoyu33@zte.com.cn>
Xuecong Liao <satorulogic@gmail.com>
Yamasaki Masahide <masahide.y@gmail.com>
@@ -711,23 +528,12 @@ Yu Changchun <yuchangchun1@huawei.com>
Yu Chengxia <yuchengxia@huawei.com>
Yu Peng <yu.peng36@zte.com.cn>
Yu Peng <yu.peng36@zte.com.cn> <yupeng36@zte.com.cn>
Yuan Sun <sunyuan3@huawei.com>
Yue Zhang <zy675793960@yeah.net>
Yufei Xiong <yufei.xiong@qq.com>
Zach Gershman <zachgersh@gmail.com>
Zach Gershman <zachgersh@gmail.com> <zachgersh@users.noreply.github.com>
Zachary Jaffee <zjaffee@us.ibm.com> <zij@case.edu>
Zachary Jaffee <zjaffee@us.ibm.com> <zjaffee@apache.org>
Zhang Kun <zkazure@gmail.com>
Zhang Wentao <zhangwentao234@huawei.com>
ZhangHang <stevezhang2014@gmail.com>
Zhenkun Bi <bi.zhenkun@zte.com.cn>
Zhou Hao <zhouhao@cn.fujitsu.com>
Zhoulin Xie <zhoulin.xie@daocloud.io>
Zhou Hao <zhouhao@cn.fujitsu.com>
Zhu Kunjia <zhu.kunjia@zte.com.cn>
Ziheng Liu <lzhfromustc@gmail.com>
Zou Yu <zouyu7@huawei.com>
Zuhayr Elahi <zuhayr.elahi@docker.com>
Zuhayr Elahi <zuhayr.elahi@docker.com> <elahi.zuhayr@gmail.com>
정재영 <jjy600901@gmail.com>
정재영 <jjy600901@gmail.com> <43400316+J-jaeyoung@users.noreply.github.com>

394
AUTHORS

File diff suppressed because it is too large

3609
CHANGELOG.md

File diff suppressed because it is too large

View File

@@ -27,10 +27,10 @@ issue, please bring it to their attention right away!
Please **DO NOT** file a public issue, instead send your report privately to
[security@docker.com](mailto:security@docker.com).
Security reports are greatly appreciated and we will publicly thank you for it,
although we keep your name confidential if you request it. We also like to send
gifts&mdash;if you're into schwag, make sure to let us know. We currently do not
offer a paid security bounty program, but are not ruling it out in the future.
Security reports are greatly appreciated and we will publicly thank you for it.
We also like to send gifts&mdash;if you're into schwag, make sure to let
us know. We currently do not offer a paid security bounty program, but are not
ruling it out in the future.
## Reporting other issues
@@ -101,8 +101,9 @@ the contributors guide.
<td>
<p>
Register for the Docker Community Slack at
<a href="https://dockr.ly/slack" target="_blank">https://dockr.ly/slack</a>.
<a href="https://community.docker.com/registrations/groups/4316" target="_blank">https://community.docker.com/registrations/groups/4316</a>.
We use the #moby-project channel for general discussion, and there are separate channels for other Moby projects such as #containerd.
Archives are available at <a href="https://dockercommunity.slackarchive.io/" target="_blank">https://dockercommunity.slackarchive.io/</a>.
</p>
</td>
</tr>

View File

@@ -1,317 +1,215 @@
# syntax=docker/dockerfile:1
# This file describes the standard way to build Docker, using docker
#
# Usage:
#
# # Use make to build a development environment image and run it in a container.
# # This is slow the first time.
# make BIND_DIR=. shell
#
# The following commands are executed inside the running container.
# # Make a dockerd binary.
# # hack/make.sh binary
#
# # Install dockerd to /usr/local/bin
# # make install
#
# # Run unit tests
# # hack/test/unit
#
# # Run tests e.g. integration, py
# # hack/make.sh binary test-integration test-docker-py
#
# Note: AppArmor used to mess with privileged mode, but this is no longer
# the case. Therefore, you don't have to disable it anymore.
#
ARG CROSS="false"
ARG SYSTEMD="false"
ARG GO_VERSION=1.19.3
ARG DEBIAN_FRONTEND=noninteractive
ARG VPNKIT_VERSION=0.5.0
ARG BASE_DEBIAN_DISTRO="bullseye"
ARG GOLANG_IMAGE="golang:${GO_VERSION}-${BASE_DEBIAN_DISTRO}"
FROM ${GOLANG_IMAGE} AS base
RUN echo 'Binary::apt::APT::Keep-Downloaded-Packages "true";' > /etc/apt/apt.conf.d/keep-cache
ARG APT_MIRROR
RUN sed -ri "s/(httpredir|deb).debian.org/${APT_MIRROR:-deb.debian.org}/g" /etc/apt/sources.list \
&& sed -ri "s/(security).debian.org/${APT_MIRROR:-security.debian.org}/g" /etc/apt/sources.list
ENV GO111MODULE=off
FROM golang:1.12.5 AS base
# allow replacing httpredir or deb mirror
ARG APT_MIRROR=deb.debian.org
RUN sed -ri "s/(httpredir|deb).debian.org/$APT_MIRROR/g" /etc/apt/sources.list
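# A hedged example of overriding the mirror at build time (the mirror host
# below is borrowed from the Jenkinsfile's APT_MIRROR setting later in this
# comparison, purely as an illustration):
#   docker build --build-arg APT_MIRROR=cdn-fastly.deb.debian.org .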
FROM base AS criu
ARG DEBIAN_FRONTEND
ADD --chmod=0644 https://download.opensuse.org/repositories/devel:/tools:/criu/Debian_11/Release.key /etc/apt/trusted.gpg.d/criu.gpg.asc
RUN --mount=type=cache,sharing=locked,id=moby-criu-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-criu-aptcache,target=/var/cache/apt \
echo 'deb https://download.opensuse.org/repositories/devel:/tools:/criu/Debian_11/ /' > /etc/apt/sources.list.d/criu.list \
&& apt-get update \
&& apt-get install -y --no-install-recommends criu \
&& install -D /usr/sbin/criu /build/criu
# Install CRIU for checkpoint/restore support
ENV CRIU_VERSION 3.11
# Install dependency packages specific to criu
RUN apt-get update && apt-get install -y \
libnet-dev \
libprotobuf-c0-dev \
libprotobuf-dev \
libnl-3-dev \
libcap-dev \
protobuf-compiler \
protobuf-c-compiler \
python-protobuf \
&& mkdir -p /usr/src/criu \
&& curl -sSL https://github.com/checkpoint-restore/criu/archive/v${CRIU_VERSION}.tar.gz | tar -C /usr/src/criu/ -xz --strip-components=1 \
&& cd /usr/src/criu \
&& make \
&& make PREFIX=/build/ install-criu
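# CRIU backs the daemon's experimental checkpoint/restore support. A minimal
# sketch of exercising it (container and checkpoint names are illustrative):
#   docker checkpoint create mycontainer checkpoint1
#   docker start --checkpoint checkpoint1 mycontainer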
FROM base AS registry
WORKDIR /go/src/github.com/docker/distribution
# Install two versions of the registry. The first is an older version that
# only supports schema1 manifests. The second is a newer version that supports
# both. This allows integration-cli tests to cover push/pull with both schema1
# and schema2 manifests.
ENV REGISTRY_COMMIT_SCHEMA1 ec87e9b6971d831f0eff752ddb54fb64693e51cd
ENV REGISTRY_COMMIT 47a064d4195a9b56133891bbb13620c3ac83a827
RUN set -x \
&& export GOPATH="$(mktemp -d)" \
&& git clone https://github.com/docker/distribution.git "$GOPATH/src/github.com/docker/distribution" \
&& (cd "$GOPATH/src/github.com/docker/distribution" && git checkout -q "$REGISTRY_COMMIT") \
&& GOPATH="$GOPATH/src/github.com/docker/distribution/Godeps/_workspace:$GOPATH" \
go build -buildmode=pie -o /build/registry-v2 github.com/docker/distribution/cmd/registry \
&& case $(dpkg --print-architecture) in \
amd64|ppc64*|s390x) \
(cd "$GOPATH/src/github.com/docker/distribution" && git checkout -q "$REGISTRY_COMMIT_SCHEMA1"); \
GOPATH="$GOPATH/src/github.com/docker/distribution/Godeps/_workspace:$GOPATH"; \
go build -buildmode=pie -o /build/registry-v2-schema1 github.com/docker/distribution/cmd/registry; \
;; \
esac \
&& rm -rf "$GOPATH"
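# Illustrative only: the integration suite can then serve either manifest
# flavour with the standard registry invocation (the config path here is an
# assumption, not taken from this file):
#   registry-v2 serve /etc/docker/registry/config.yml
#   registry-v2-schema1 serve /etc/docker/registry/config.yml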
FROM base AS docker-py
# Get the "docker-py" source so we can run their integration tests
ENV DOCKER_PY_COMMIT ac922192959870774ad8428344d9faa0555f7ba6
RUN git clone https://github.com/docker/docker-py.git /build \
&& cd /build \
&& git checkout -q $DOCKER_PY_COMMIT
# REGISTRY_VERSION specifies the version of the registry to build and install
# from the https://github.com/docker/distribution repository. This version of
# the registry is used to test both schema 1 and schema 2 manifests. Generally,
# the version specified here should match a current release.
ARG REGISTRY_VERSION=v2.3.0
# REGISTRY_VERSION_SCHEMA1 specifies the version of the registry to build and
# install from the https://github.com/docker/distribution repository. This is
# an older (pre v2.3.0) version of the registry that only supports schema1
# manifests. This version of the registry does not work on arm64, so installation
# is skipped on that architecture.
ARG REGISTRY_VERSION_SCHEMA1=v2.1.0
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=cache,target=/go/pkg/mod \
--mount=type=tmpfs,target=/go/src/ \
set -x \
&& git clone https://github.com/docker/distribution.git . \
&& git checkout -q "$REGISTRY_VERSION" \
&& GOPATH="/go/src/github.com/docker/distribution/Godeps/_workspace:$GOPATH" \
go build -buildmode=pie -o /build/registry-v2 github.com/docker/distribution/cmd/registry \
&& case $(dpkg --print-architecture) in \
amd64|armhf|ppc64*|s390x) \
git checkout -q "$REGISTRY_VERSION_SCHEMA1"; \
GOPATH="/go/src/github.com/docker/distribution/Godeps/_workspace:$GOPATH"; \
go build -buildmode=pie -o /build/registry-v2-schema1 github.com/docker/distribution/cmd/registry; \
;; \
esac
FROM base AS swagger
WORKDIR $GOPATH/src/github.com/go-swagger/go-swagger
# Install go-swagger for validating swagger.yaml
ENV GO_SWAGGER_COMMIT c28258affb0b6251755d92489ef685af8d4ff3eb
RUN set -x \
&& export GOPATH="$(mktemp -d)" \
&& git clone https://github.com/go-swagger/go-swagger.git "$GOPATH/src/github.com/go-swagger/go-swagger" \
&& (cd "$GOPATH/src/github.com/go-swagger/go-swagger" && git checkout -q "$GO_SWAGGER_COMMIT") \
&& go build -o /build/swagger github.com/go-swagger/go-swagger/cmd/swagger \
&& rm -rf "$GOPATH"
# GO_SWAGGER_COMMIT specifies the version of the go-swagger binary to build and
# install. Go-swagger is used in CI for validating swagger.yaml in hack/validate/swagger-gen
#
# Currently uses a fork from https://github.com/kolyshkin/go-swagger/tree/golang-1.13-fix.
# TODO: move it under moby/ or fix upstream go-swagger to work for us.
ENV GO_SWAGGER_COMMIT c56166c036004ba7a3a321e5951ba472b9ae298c
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=cache,target=/go/pkg/mod \
--mount=type=tmpfs,target=/go/src/ \
set -x \
&& git clone https://github.com/kolyshkin/go-swagger.git . \
&& git checkout -q "$GO_SWAGGER_COMMIT" \
&& go build -o /build/swagger github.com/go-swagger/go-swagger/cmd/swagger
# frozen-images
# See also frozenImages in "testutil/environment/protect.go" (which needs to
# be updated when adding images to this list)
FROM debian:${BASE_DEBIAN_DISTRO} AS frozen-images
ARG DEBIAN_FRONTEND
RUN --mount=type=cache,sharing=locked,id=moby-frozen-images-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-frozen-images-aptcache,target=/var/cache/apt \
apt-get update && apt-get install -y --no-install-recommends \
ca-certificates \
curl \
jq
FROM base AS frozen-images
RUN apt-get update && apt-get install -y jq ca-certificates --no-install-recommends
# Get useful and necessary Hub images so we can "docker load" locally instead of pulling
COPY contrib/download-frozen-image-v2.sh /
ARG TARGETARCH
ARG TARGETVARIANT
RUN /download-frozen-image-v2.sh /build \
busybox:latest@sha256:95cf004f559831017cdf4628aaf1bb30133677be8702a8c5f2994629f637a209 \
busybox:glibc@sha256:1f81263701cddf6402afe9f33fca0266d9fff379e59b1748f33d3072da71ee85 \
debian:bullseye-slim@sha256:dacf278785a4daa9de07596ec739dbc07131e189942772210709c5c0777e8437 \
hello-world:latest@sha256:d58e752213a51785838f9eed2b7a498ffa1cb3aa7f946dda11af39286c3db9a9 \
arm32v7/hello-world:latest@sha256:50b8560ad574c779908da71f7ce370c0a2471c098d44d1c8f6b513c5a55eeeb1
buildpack-deps:jessie@sha256:dd86dced7c9cd2a724e779730f0a53f93b7ef42228d4344b25ce9a42a1486251 \
busybox:latest@sha256:bbc3a03235220b170ba48a157dd097dd1379299370e1ed99ce976df0355d24f0 \
busybox:glibc@sha256:0b55a30394294ab23b9afd58fab94e61a923f5834fba7ddbae7f8e0c11ba85e6 \
debian:jessie@sha256:287a20c5f73087ab406e6b364833e3fb7b3ae63ca0eb3486555dc27ed32c6e60 \
hello-world:latest@sha256:be0cd392e45be79ffeffa6b05338b98ebb16c87b255f48e297ec7f98e123905c
# See also ensureFrozenImagesLinux() in "integration-cli/fixtures_linux_daemon_test.go" (which needs to be updated when adding images to this list)
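# A hedged sketch of loading the frozen images inside the dev container
# (assumes a running daemon; this mirrors the usual "docker load" flow the
# comments above refer to):
#   tar -cC /docker-frozen-images . | docker load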
FROM base AS cross-false
FROM --platform=linux/amd64 base AS cross-true
ARG DEBIAN_FRONTEND
FROM base AS cross-true
RUN dpkg --add-architecture armhf
RUN dpkg --add-architecture arm64
RUN dpkg --add-architecture armel
RUN dpkg --add-architecture armhf
RUN dpkg --add-architecture ppc64el
RUN dpkg --add-architecture s390x
RUN --mount=type=cache,sharing=locked,id=moby-cross-true-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-cross-true-aptcache,target=/var/cache/apt \
apt-get update && apt-get install -y --no-install-recommends \
crossbuild-essential-arm64 \
crossbuild-essential-armel \
crossbuild-essential-armhf \
crossbuild-essential-ppc64el \
crossbuild-essential-s390x
RUN if [ "$(go env GOHOSTARCH)" = "amd64" ]; then \
apt-get update \
&& apt-get install -y --no-install-recommends \
crossbuild-essential-armhf \
crossbuild-essential-arm64 \
crossbuild-essential-armel; \
fi
FROM cross-${CROSS} AS dev-base
FROM cross-${CROSS} as dev-base
FROM dev-base AS runtime-dev-cross-false
ARG DEBIAN_FRONTEND
RUN --mount=type=cache,sharing=locked,id=moby-cross-false-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-cross-false-aptcache,target=/var/cache/apt \
apt-get update && apt-get install -y --no-install-recommends \
binutils-mingw-w64 \
g++-mingw-w64-x86-64 \
libapparmor-dev \
libbtrfs-dev \
libdevmapper-dev \
libseccomp-dev \
libsystemd-dev \
libudev-dev
RUN apt-get update && apt-get install -y \
libapparmor-dev \
libseccomp-dev
FROM --platform=linux/amd64 runtime-dev-cross-false AS runtime-dev-cross-true
ARG DEBIAN_FRONTEND
FROM cross-true AS runtime-dev-cross-true
# These crossbuild packages rely on gcc-<arch>, which will not install
# on non-amd64 systems, so other architectures cannot crossbuild amd64.
RUN --mount=type=cache,sharing=locked,id=moby-cross-true-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-cross-true-aptcache,target=/var/cache/apt \
apt-get update && apt-get install -y --no-install-recommends \
libapparmor-dev:arm64 \
libapparmor-dev:armel \
libapparmor-dev:armhf \
libapparmor-dev:ppc64el \
libapparmor-dev:s390x \
libseccomp-dev:arm64 \
libseccomp-dev:armel \
libseccomp-dev:armhf \
libseccomp-dev:ppc64el \
libseccomp-dev:s390x
# on non-amd64 systems.
# Additionally, the crossbuild-amd64 is currently only on debian:buster, so
# other architectures cannot crossbuild amd64.
RUN if [ "$(go env GOHOSTARCH)" = "amd64" ]; then \
apt-get update \
&& apt-get install -y \
libseccomp-dev:armhf \
libseccomp-dev:arm64 \
libseccomp-dev:armel \
libapparmor-dev:armhf \
libapparmor-dev:arm64 \
libapparmor-dev:armel \
# install this arch's seccomp here due to compat issues with the v0 builder
# This is as opposed to inheriting from runtime-dev-cross-false
libapparmor-dev \
libseccomp-dev; \
fi
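# Hedged example of selecting the cross stages via the CROSS build-arg
# declared at the top of this file (the image tag is illustrative):
#   docker build --build-arg CROSS=true -t docker-dev:cross .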
FROM runtime-dev-cross-${CROSS} AS runtime-dev
FROM base AS delve
# DELVE_VERSION specifies the version of the Delve debugger binary
# from the https://github.com/go-delve/delve repository.
# It can be used to run Docker with the possibility of
# attaching a debugger to it.
#
ARG DELVE_VERSION=v1.8.1
# Delve on Linux is currently only supported on amd64 and arm64;
# https://github.com/go-delve/delve/blob/v1.8.1/pkg/proc/native/support_sentinel.go#L1-L6
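# A minimal sketch of attaching the debugger inside the dev container (the
# target process and listen port are illustrative assumptions):
#   dlv attach "$(pidof dockerd)" --headless --listen=:2345 --api-version=2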
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=cache,target=/go/pkg/mod \
case $(dpkg --print-architecture) in \
amd64|arm64) \
GOBIN=/build/ GO111MODULE=on go install "github.com/go-delve/delve/cmd/dlv@${DELVE_VERSION}" \
&& /build/dlv --help \
;; \
*) \
mkdir -p /build/ \
;; \
esac
FROM base AS tomlv
ENV INSTALL_BINARY_NAME=tomlv
COPY hack/dockerfile/install/install.sh ./install.sh
COPY hack/dockerfile/install/$INSTALL_BINARY_NAME.installer ./
RUN PREFIX=/build ./install.sh $INSTALL_BINARY_NAME
FROM base AS tomll
# GOTOML_VERSION specifies the version of the tomll binary to build and install
# from the https://github.com/pelletier/go-toml repository. This binary is used
# in CI in the hack/validate/toml script.
#
# When updating this version, consider updating the github.com/pelletier/go-toml
# dependency in vendor.mod accordingly.
ARG GOTOML_VERSION=v1.8.1
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=cache,target=/go/pkg/mod \
GOBIN=/build/ GO111MODULE=on go install "github.com/pelletier/go-toml/cmd/tomll@${GOTOML_VERSION}" \
&& /build/tomll --help
FROM base AS gowinres
# GOWINRES_VERSION defines go-winres tool version
ARG GOWINRES_VERSION=v0.3.0
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=cache,target=/go/pkg/mod \
GOBIN=/build/ GO111MODULE=on go install "github.com/tc-hib/go-winres@${GOWINRES_VERSION}" \
&& /build/go-winres --help
FROM base AS vndr
ENV INSTALL_BINARY_NAME=vndr
COPY hack/dockerfile/install/install.sh ./install.sh
COPY hack/dockerfile/install/$INSTALL_BINARY_NAME.installer ./
RUN PREFIX=/build ./install.sh $INSTALL_BINARY_NAME
FROM dev-base AS containerd
ARG DEBIAN_FRONTEND
RUN --mount=type=cache,sharing=locked,id=moby-containerd-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-containerd-aptcache,target=/var/cache/apt \
apt-get update && apt-get install -y --no-install-recommends \
libbtrfs-dev
ARG CONTAINERD_VERSION
COPY /hack/dockerfile/install/install.sh /hack/dockerfile/install/containerd.installer /
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=cache,target=/go/pkg/mod \
PREFIX=/build /install.sh containerd
RUN apt-get update && apt-get install -y btrfs-tools
ENV INSTALL_BINARY_NAME=containerd
COPY hack/dockerfile/install/install.sh ./install.sh
COPY hack/dockerfile/install/$INSTALL_BINARY_NAME.installer ./
RUN PREFIX=/build ./install.sh $INSTALL_BINARY_NAME
FROM base AS golangci_lint
# FIXME: when updating golangci-lint, remove the temporary "nolint" in https://github.com/moby/moby/blob/7860686a8df15eea9def9e6189c6f9eca031bb6f/libnetwork/networkdb/cluster.go#L246
ARG GOLANGCI_LINT_VERSION=v1.49.0
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=cache,target=/go/pkg/mod \
GOBIN=/build/ GO111MODULE=on go install "github.com/golangci/golangci-lint/cmd/golangci-lint@${GOLANGCI_LINT_VERSION}" \
&& /build/golangci-lint --version
FROM dev-base AS proxy
ENV INSTALL_BINARY_NAME=proxy
COPY hack/dockerfile/install/install.sh ./install.sh
COPY hack/dockerfile/install/$INSTALL_BINARY_NAME.installer ./
RUN PREFIX=/build ./install.sh $INSTALL_BINARY_NAME
FROM base AS gotestsum
ARG GOTESTSUM_VERSION=v1.8.2
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=cache,target=/go/pkg/mod \
GOBIN=/build/ GO111MODULE=on go install "gotest.tools/gotestsum@${GOTESTSUM_VERSION}" \
&& /build/gotestsum --version
FROM base AS shfmt
ARG SHFMT_VERSION=v3.0.2
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=cache,target=/go/pkg/mod \
GOBIN=/build/ GO111MODULE=on go install "mvdan.cc/sh/v3/cmd/shfmt@${SHFMT_VERSION}" \
&& /build/shfmt --version
FROM base AS gometalinter
ENV INSTALL_BINARY_NAME=gometalinter
COPY hack/dockerfile/install/install.sh ./install.sh
COPY hack/dockerfile/install/$INSTALL_BINARY_NAME.installer ./
RUN PREFIX=/build ./install.sh $INSTALL_BINARY_NAME
FROM dev-base AS dockercli
ARG DOCKERCLI_CHANNEL
ARG DOCKERCLI_VERSION
COPY /hack/dockerfile/install/install.sh /hack/dockerfile/install/dockercli.installer /
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=cache,target=/go/pkg/mod \
PREFIX=/build /install.sh dockercli
ENV INSTALL_BINARY_NAME=dockercli
COPY hack/dockerfile/install/install.sh ./install.sh
COPY hack/dockerfile/install/$INSTALL_BINARY_NAME.installer ./
RUN PREFIX=/build ./install.sh $INSTALL_BINARY_NAME
FROM runtime-dev AS runc
ARG RUNC_VERSION
ARG RUNC_BUILDTAGS
COPY /hack/dockerfile/install/install.sh /hack/dockerfile/install/runc.installer /
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=cache,target=/go/pkg/mod \
PREFIX=/build /install.sh runc
ENV INSTALL_BINARY_NAME=runc
COPY hack/dockerfile/install/install.sh ./install.sh
COPY hack/dockerfile/install/$INSTALL_BINARY_NAME.installer ./
RUN PREFIX=/build ./install.sh $INSTALL_BINARY_NAME
FROM dev-base AS tini
ARG DEBIAN_FRONTEND
ARG TINI_VERSION
RUN --mount=type=cache,sharing=locked,id=moby-tini-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-tini-aptcache,target=/var/cache/apt \
apt-get update && apt-get install -y --no-install-recommends \
cmake \
vim-common
COPY /hack/dockerfile/install/install.sh /hack/dockerfile/install/tini.installer /
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=cache,target=/go/pkg/mod \
PREFIX=/build /install.sh tini
RUN apt-get update && apt-get install -y cmake vim-common
COPY hack/dockerfile/install/install.sh ./install.sh
ENV INSTALL_BINARY_NAME=tini
COPY hack/dockerfile/install/$INSTALL_BINARY_NAME.installer ./
RUN PREFIX=/build ./install.sh $INSTALL_BINARY_NAME
FROM dev-base AS rootlesskit
ARG ROOTLESSKIT_VERSION
ARG PREFIX=/build
COPY /hack/dockerfile/install/install.sh /hack/dockerfile/install/rootlesskit.installer /
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=cache,target=/go/pkg/mod \
/install.sh rootlesskit \
&& "${PREFIX}"/rootlesskit --version \
&& "${PREFIX}"/rootlesskit-docker-proxy --help
ENV INSTALL_BINARY_NAME=rootlesskit
COPY hack/dockerfile/install/install.sh ./install.sh
COPY hack/dockerfile/install/$INSTALL_BINARY_NAME.installer ./
RUN PREFIX=/build/ ./install.sh $INSTALL_BINARY_NAME
COPY ./contrib/dockerd-rootless.sh /build
COPY ./contrib/dockerd-rootless-setuptool.sh /build
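# Hedged usage sketch for the rootless helpers copied above (run as an
# unprivileged user; see the scripts themselves for the authoritative flow):
#   dockerd-rootless-setuptool.sh install
#   systemctl --user start docker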
FROM base AS crun
ARG CRUN_VERSION=1.4.5
RUN --mount=type=cache,sharing=locked,id=moby-crun-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-crun-aptcache,target=/var/cache/apt \
apt-get update && apt-get install -y --no-install-recommends \
autoconf \
automake \
build-essential \
libcap-dev \
libprotobuf-c-dev \
libseccomp-dev \
libsystemd-dev \
libtool \
libudev-dev \
libyajl-dev \
python3 \
;
RUN --mount=type=tmpfs,target=/tmp/crun-build \
git clone https://github.com/containers/crun.git /tmp/crun-build && \
cd /tmp/crun-build && \
git checkout -q "${CRUN_VERSION}" && \
./autogen.sh && \
./configure --bindir=/build && \
make -j install
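# crun can be registered as an additional OCI runtime; a hedged example
# (flag per dockerd's --add-runtime option, runtime name is illustrative):
#   dockerd --add-runtime crun=/usr/local/bin/crun
#   docker run --runtime crun hello-world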
# vpnkit
# use a dummy scratch stage to avoid the build failing on unsupported platforms
FROM scratch AS vpnkit-windows
FROM scratch AS vpnkit-linux-386
FROM scratch AS vpnkit-linux-arm
FROM scratch AS vpnkit-linux-ppc64le
FROM scratch AS vpnkit-linux-riscv64
FROM scratch AS vpnkit-linux-s390x
FROM djs55/vpnkit:${VPNKIT_VERSION} AS vpnkit-linux-amd64
FROM djs55/vpnkit:${VPNKIT_VERSION} AS vpnkit-linux-arm64
FROM vpnkit-linux-${TARGETARCH} AS vpnkit-linux
FROM vpnkit-${TARGETOS} AS vpnkit
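# For example, on linux/amd64 the chain above resolves as
# vpnkit -> vpnkit-linux -> vpnkit-linux-amd64 (the djs55/vpnkit image),
# while unsupported platforms fall through to an empty scratch stage.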
# TODO: Some of this is only really needed for testing; it would be nice to split this up
FROM runtime-dev AS dev-systemd-false
ARG DEBIAN_FRONTEND
FROM runtime-dev AS dev
RUN groupadd -r docker
RUN useradd --create-home --gid docker unprivilegeduser \
&& mkdir -p /home/unprivilegeduser/.local/share/docker \
&& chown -R unprivilegeduser /home/unprivilegeduser
RUN useradd --create-home --gid docker unprivilegeduser
# Let us use a .bashrc file
RUN ln -sfv /go/src/github.com/docker/docker/.bashrc ~/.bashrc
# Activate bash completion and include Docker's completion if mounted with DOCKER_BASH_COMPLETION_PATH
@@ -320,146 +218,79 @@ RUN ln -s /usr/local/completion/bash/docker /etc/bash_completion.d/docker
RUN ldconfig
# This should only install packages that are specifically needed for the dev environment and nothing else
# Do you really need to add another package here? Can it be done in a different build stage?
RUN --mount=type=cache,sharing=locked,id=moby-dev-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-dev-aptcache,target=/var/cache/apt \
apt-get update && apt-get install -y --no-install-recommends \
apparmor \
bash-completion \
bzip2 \
inetutils-ping \
iproute2 \
iptables \
jq \
libcap2-bin \
libnet1 \
libnl-3-200 \
libprotobuf-c1 \
libyajl2 \
net-tools \
patch \
pigz \
python3-pip \
python3-setuptools \
python3-wheel \
sudo \
thin-provisioning-tools \
uidmap \
vim \
vim-common \
xfsprogs \
xz-utils \
zip \
zstd
# Switch to use iptables instead of nftables (to match the CI hosts)
# TODO use some kind of runtime auto-detection instead if/when nftables is supported (https://github.com/moby/moby/issues/26824)
RUN update-alternatives --set iptables /usr/sbin/iptables-legacy || true \
&& update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy || true \
&& update-alternatives --set arptables /usr/sbin/arptables-legacy || true
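# Quick check that the switch took effect (the output shape is an assumption
# about this iptables build):
#   iptables --version   # should report "(legacy)"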
ARG YAMLLINT_VERSION=1.27.1
RUN pip3 install yamllint==${YAMLLINT_VERSION}
COPY --from=dockercli /build/ /usr/local/cli
RUN apt-get update && apt-get install -y \
apparmor \
aufs-tools \
bash-completion \
btrfs-tools \
iptables \
jq \
libcap2-bin \
libdevmapper-dev \
# libffi-dev and libssl-dev appear to be required for compiling paramiko on s390x/ppc64le
libffi-dev \
libssl-dev \
libudev-dev \
libsystemd-dev \
binutils-mingw-w64 \
g++-mingw-w64-x86-64 \
net-tools \
pigz \
python-backports.ssl-match-hostname \
python-dev \
# python-cffi appears to be required for compiling paramiko on s390x/ppc64le
python-cffi \
python-mock \
python-pip \
python-requests \
python-setuptools \
python-websocket \
python-wheel \
thin-provisioning-tools \
vim \
vim-common \
xfsprogs \
zip \
bzip2 \
xz-utils \
libprotobuf-c1 \
libnet1 \
libnl-3-200 \
--no-install-recommends
COPY --from=swagger /build/swagger* /usr/local/bin/
COPY --from=frozen-images /build/ /docker-frozen-images
COPY --from=swagger /build/ /usr/local/bin/
COPY --from=delve /build/ /usr/local/bin/
COPY --from=tomll /build/ /usr/local/bin/
COPY --from=gowinres /build/ /usr/local/bin/
COPY --from=tini /build/ /usr/local/bin/
COPY --from=registry /build/ /usr/local/bin/
COPY --from=criu /build/ /usr/local/bin/
COPY --from=gotestsum /build/ /usr/local/bin/
COPY --from=golangci_lint /build/ /usr/local/bin/
COPY --from=shfmt /build/ /usr/local/bin/
COPY --from=runc /build/ /usr/local/bin/
COPY --from=containerd /build/ /usr/local/bin/
COPY --from=rootlesskit /build/ /usr/local/bin/
COPY --from=vpnkit / /usr/local/bin/
COPY --from=crun /build/ /usr/local/bin/
COPY hack/dockerfile/etc/docker/ /etc/docker/
COPY --from=gometalinter /build/ /usr/local/bin/
COPY --from=tomlv /build/ /usr/local/bin/
COPY --from=vndr /build/ /usr/local/bin/
COPY --from=tini /build/ /usr/local/bin/
COPY --from=runc /build/ /usr/local/bin/
COPY --from=containerd /build/ /usr/local/bin/
COPY --from=proxy /build/ /usr/local/bin/
COPY --from=dockercli /build/ /usr/local/cli
COPY --from=registry /build/registry* /usr/local/bin/
COPY --from=criu /build/ /usr/local/
COPY --from=docker-py /build/ /docker-py
# TODO: This is for the docker-py tests, which shouldn't really be needed for
# this image, but currently CI is expecting to run this image. This should be
# split out into a separate image, including all the `python-*` deps installed
# above.
RUN cd /docker-py \
&& pip install docker-pycreds==0.4.0 \
&& pip install paramiko==2.4.2 \
&& pip install yamllint==1.5.0 \
&& pip install -r test-requirements.txt
COPY --from=rootlesskit /build/ /usr/local/bin/
COPY --from=djs55/vpnkit@sha256:e508a17cfacc8fd39261d5b4e397df2b953690da577e2c987a47630cd0c42f8e /vpnkit /usr/local/bin/vpnkit.x86_64
ENV PATH=/usr/local/cli:$PATH
ARG DOCKER_BUILDTAGS
ENV DOCKER_BUILDTAGS="${DOCKER_BUILDTAGS}"
ENV DOCKER_BUILDTAGS apparmor seccomp selinux
# Options for hack/validate/gometalinter
ENV GOMETALINTER_OPTS="--deadline=2m"
WORKDIR /go/src/github.com/docker/docker
VOLUME /var/lib/docker
VOLUME /home/unprivilegeduser/.local/share/docker
# Wrap all commands in the "docker-in-docker" script to allow nested containers
ENTRYPOINT ["hack/dind"]
FROM dev-systemd-false AS dev-systemd-true
RUN --mount=type=cache,sharing=locked,id=moby-dev-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-dev-aptcache,target=/var/cache/apt \
apt-get update && apt-get install -y --no-install-recommends \
dbus \
dbus-user-session \
systemd \
systemd-sysv
RUN mkdir -p hack \
&& curl -o hack/dind-systemd https://raw.githubusercontent.com/AkihiroSuda/containerized-systemd/b70bac0daeea120456764248164c21684ade7d0d/docker-entrypoint.sh \
&& chmod +x hack/dind-systemd
ENTRYPOINT ["hack/dind-systemd"]
FROM dev-systemd-${SYSTEMD} AS dev
FROM runtime-dev AS binary-base
ARG DOCKER_GITCOMMIT=HEAD
ENV DOCKER_GITCOMMIT=${DOCKER_GITCOMMIT}
ARG VERSION
ENV VERSION=${VERSION}
ARG PLATFORM
ENV PLATFORM=${PLATFORM}
ARG PRODUCT
ENV PRODUCT=${PRODUCT}
ARG DEFAULT_PRODUCT_LICENSE
ENV DEFAULT_PRODUCT_LICENSE=${DEFAULT_PRODUCT_LICENSE}
ARG PACKAGER_NAME
ENV PACKAGER_NAME=${PACKAGER_NAME}
ARG DOCKER_BUILDTAGS
ENV DOCKER_BUILDTAGS="${DOCKER_BUILDTAGS}"
ENV PREFIX=/build
# TODO: This is here because hack/make.sh binary copies these extra binaries
# from $PATH into the bundles dir.
# It would be nice to handle this in a different way.
COPY --from=tini /build/ /usr/local/bin/
COPY --from=runc /build/ /usr/local/bin/
COPY --from=containerd /build/ /usr/local/bin/
COPY --from=rootlesskit /build/ /usr/local/bin/
COPY --from=vpnkit / /usr/local/bin/
COPY --from=gowinres /build/ /usr/local/bin/
WORKDIR /go/src/github.com/docker/docker
FROM binary-base AS build-binary
RUN --mount=type=cache,target=/root/.cache \
--mount=type=bind,target=.,ro \
--mount=type=tmpfs,target=cli/winresources/dockerd \
--mount=type=tmpfs,target=cli/winresources/docker-proxy \
hack/make.sh binary
FROM binary-base AS build-dynbinary
RUN --mount=type=cache,target=/root/.cache \
--mount=type=bind,target=.,ro \
--mount=type=tmpfs,target=cli/winresources/dockerd \
--mount=type=tmpfs,target=cli/winresources/docker-proxy \
hack/make.sh dynbinary
FROM binary-base AS build-cross
ARG DOCKER_CROSSPLATFORMS
RUN --mount=type=cache,target=/root/.cache \
--mount=type=bind,target=.,ro \
--mount=type=tmpfs,target=cli/winresources/dockerd \
--mount=type=tmpfs,target=cli/winresources/docker-proxy \
hack/make.sh cross
FROM scratch AS binary
COPY --from=build-binary /build/bundles/ /
FROM scratch AS dynbinary
COPY --from=build-dynbinary /build/bundles/ /
FROM scratch AS cross
COPY --from=build-cross /build/bundles/ /
FROM dev AS final
# Upload docker source
COPY . /go/src/github.com/docker/docker
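
A hedged example of consuming the scratch-based "binary" stage defined above
(requires BuildKit; the output directory name is illustrative):

        docker build --target binary --output type=local,dest=./bundles .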

View File

@@ -1,7 +1,5 @@
ARG GO_VERSION=1.19.3
FROM golang:1.12.5-alpine as base
FROM golang:${GO_VERSION}-alpine AS base
ENV GO111MODULE=off
RUN apk --no-cache add \
bash \
btrfs-progs-dev \
@@ -18,19 +16,20 @@ FROM base AS frozen-images
# Get useful and necessary Hub images so we can "docker load" locally instead of pulling
COPY contrib/download-frozen-image-v2.sh /
RUN /download-frozen-image-v2.sh /build \
busybox:latest@sha256:95cf004f559831017cdf4628aaf1bb30133677be8702a8c5f2994629f637a209 \
busybox:glibc@sha256:1f81263701cddf6402afe9f33fca0266d9fff379e59b1748f33d3072da71ee85 \
debian:bullseye-slim@sha256:dacf278785a4daa9de07596ec739dbc07131e189942772210709c5c0777e8437 \
hello-world:latest@sha256:d58e752213a51785838f9eed2b7a498ffa1cb3aa7f946dda11af39286c3db9a9 \
arm32v7/hello-world:latest@sha256:50b8560ad574c779908da71f7ce370c0a2471c098d44d1c8f6b513c5a55eeeb1
# See also frozenImages in "testutil/environment/protect.go" (which needs to be updated when adding images to this list)
buildpack-deps:jessie@sha256:dd86dced7c9cd2a724e779730f0a53f93b7ef42228d4344b25ce9a42a1486251 \
busybox:latest@sha256:bbc3a03235220b170ba48a157dd097dd1379299370e1ed99ce976df0355d24f0 \
busybox:glibc@sha256:0b55a30394294ab23b9afd58fab94e61a923f5834fba7ddbae7f8e0c11ba85e6 \
debian:jessie@sha256:287a20c5f73087ab406e6b364833e3fb7b3ae63ca0eb3486555dc27ed32c6e60 \
hello-world:latest@sha256:be0cd392e45be79ffeffa6b05338b98ebb16c87b255f48e297ec7f98e123905c
# See also ensureFrozenImagesLinux() in "integration-cli/fixtures_linux_daemon_test.go" (which needs to be updated when adding images to this list)
FROM base AS dockercli
ENV INSTALL_BINARY_NAME=dockercli
COPY hack/dockerfile/install/install.sh ./install.sh
COPY hack/dockerfile/install/dockercli.installer ./
RUN PREFIX=/build ./install.sh dockercli
COPY hack/dockerfile/install/$INSTALL_BINARY_NAME.installer ./
RUN PREFIX=/build ./install.sh $INSTALL_BINARY_NAME
# TestDockerCLIBuildSuite dependency
# Build DockerSuite.TestBuild* dependency
FROM base AS contrib
COPY contrib/syscall-test /build/syscall-test
COPY contrib/httpserver/Dockerfile /build/httpserver/Dockerfile
@@ -50,7 +49,7 @@ RUN hack/make.sh build-integration-test-binary
RUN mkdir -p /build/tests && find . -name test.main -exec cp --parents '{}' /build/tests \;
## Generate testing image
FROM alpine:3.10 as runner
FROM alpine:3.9 as runner
ENV DOCKER_REMOTE_DAEMON=1
ENV DOCKER_INTEGRATION_DAEMON_DEST=/
@@ -65,9 +64,7 @@ RUN apk --no-cache add \
ca-certificates \
g++ \
git \
inetutils-ping \
iptables \
libcap2-bin \
pigz \
tar \
xz

View File

@@ -5,13 +5,7 @@
# This represents the bare minimum required to build and test Docker.
ARG GO_VERSION=1.19.3
ARG BASE_DEBIAN_DISTRO="bullseye"
ARG GOLANG_IMAGE="golang:${GO_VERSION}-${BASE_DEBIAN_DISTRO}"
FROM ${GOLANG_IMAGE}
ENV GO111MODULE=off
FROM golang:1.12.5
# allow replacing httpredir or deb mirror
ARG APT_MIRROR=deb.debian.org
@@ -21,13 +15,13 @@ RUN sed -ri "s/(httpredir|deb).debian.org/$APT_MIRROR/g" /etc/apt/sources.list
# https://github.com/docker/docker/blob/master/project/PACKAGERS.md#build-dependencies
# https://github.com/docker/docker/blob/master/project/PACKAGERS.md#runtime-dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
btrfs-tools \
build-essential \
curl \
cmake \
gcc \
git \
libapparmor-dev \
libbtrfs-dev \
libdevmapper-dev \
libseccomp-dev \
ca-certificates \
@@ -39,6 +33,7 @@ RUN apt-get update && apt-get install -y --no-install-recommends \
xfsprogs \
xz-utils \
\
aufs-tools \
vim-common \
&& rm -rf /var/lib/apt/lists/*

View File

@@ -45,8 +45,8 @@
#
# 1. Clone the sources from github.com:
#
# >> git clone https://github.com/docker/docker.git C:\gopath\src\github.com\docker\docker
# >> Cloning into 'C:\gopath\src\github.com\docker\docker'...
# >> git clone https://github.com/docker/docker.git C:\go\src\github.com\docker\docker
# >> Cloning into 'C:\go\src\github.com\docker\docker'...
# >> remote: Counting objects: 186216, done.
# >> remote: Compressing objects: 100% (21/21), done.
# >> remote: Total 186216 (delta 5), reused 0 (delta 0), pack-reused 186195
@@ -59,7 +59,7 @@
#
# 2. Change directory to the cloned docker sources:
#
# >> cd C:\gopath\src\github.com\docker\docker
# >> cd C:\go\src\github.com\docker\docker
#
#
# 3. Build a docker image with the components required to build the docker binaries from source
@@ -79,8 +79,8 @@
# 5. Copy the binaries out of the container, replacing HostPath with an appropriate destination
# folder on the host system where you want the binaries to be located.
#
# >> docker cp binaries:C:\gopath\src\github.com\docker\docker\bundles\docker.exe C:\HostPath\docker.exe
# >> docker cp binaries:C:\gopath\src\github.com\docker\docker\bundles\dockerd.exe C:\HostPath\dockerd.exe
# >> docker cp binaries:C:\go\src\github.com\docker\docker\bundles\docker.exe C:\HostPath\docker.exe
# >> docker cp binaries:C:\go\src\github.com\docker\docker\bundles\dockerd.exe C:\HostPath\dockerd.exe
#
#
# 6. (Optional) Remove the interim container holding the built executable binaries:
@@ -147,7 +147,7 @@
# The docker integration tests do not currently run in a container on Windows, predominantly
# due to Windows not supporting privileged mode, so anything using a volume would fail.
# They (along with the rest of the docker CI suite) can be run using
# https://github.com/kevpar/docker-w2wCIScripts/blob/master/runCI/Invoke-DockerCI.ps1.
# https://github.com/jhowardmsft/docker-w2wCIScripts/blob/master/runCI/Invoke-DockerCI.ps1.
#
# -----------------------------------------------------------------------------------------
@@ -155,7 +155,7 @@
# The number of build steps below are explicitly minimised to improve performance.
# Extremely important - do not change the following line to reference a "specific" image,
# such as `mcr.microsoft.com/windows/servercore:ltsc2022`. If using this Dockerfile in process
# such as `mcr.microsoft.com/windows/servercore:ltsc2019`. If using this Dockerfile in process
# isolated containers, the kernel of the host must match the container image, and hence
# would fail between Windows Server 2016 (aka RS1) and Windows Server 2019 (aka RS5).
# It is expected that the image `microsoft/windowsservercore:latest` is present, and matches
@@ -165,23 +165,13 @@ FROM microsoft/windowsservercore
# Use PowerShell as the default shell
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';"]
ARG GO_VERSION=1.19.3
ARG GOTESTSUM_VERSION=v1.8.2
ARG GOWINRES_VERSION=v0.3.0
ARG CONTAINERD_VERSION=v1.6.10
# Environment variable notes:
# - GO_VERSION must be consistent with 'Dockerfile' used by Linux.
# - CONTAINERD_VERSION must be consistent with 'hack/dockerfile/install/containerd.installer' used by Linux.
# - FROM_DOCKERFILE is used for detection of building within a container.
ENV GO_VERSION=${GO_VERSION} `
CONTAINERD_VERSION=${CONTAINERD_VERSION} `
ENV GO_VERSION=1.12.5 `
GIT_VERSION=2.11.1 `
GOPATH=C:\gopath `
GO111MODULE=off `
FROM_DOCKERFILE=1 `
GOTESTSUM_VERSION=${GOTESTSUM_VERSION} `
GOWINRES_VERSION=${GOWINRES_VERSION}
GOPATH=C:\go `
FROM_DOCKERFILE=1
RUN `
Function Test-Nano() { `
@@ -210,30 +200,28 @@ RUN `
Throw ("Failed to download " + $source) `
}`
} else { `
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12; `
$webClient = New-Object System.Net.WebClient; `
$webClient.DownloadFile($source, $target); `
} `
} `
`
setx /M PATH $('C:\git\cmd;C:\git\usr\bin;'+$Env:PATH+';C:\gcc\bin;C:\go\bin;C:\containerd\bin'); `
setx /M PATH $('C:\git\cmd;C:\git\usr\bin;'+$Env:PATH+';C:\gcc\bin;C:\go\bin'); `
`
Write-Host INFO: Downloading git...; `
$location='https://www.nuget.org/api/v2/package/GitForWindows/'+$Env:GIT_VERSION; `
Download-File $location C:\gitsetup.zip; `
`
Write-Host INFO: Downloading go...; `
$dlGoVersion=$Env:GO_VERSION -replace '\.0$',''; `
Download-File "https://golang.org/dl/go${dlGoVersion}.windows-amd64.zip" C:\go.zip; `
Download-File $('https://golang.org/dl/go'+$Env:GO_VERSION+'.windows-amd64.zip') C:\go.zip; `
`
Write-Host INFO: Downloading compiler 1 of 3...; `
Download-File https://raw.githubusercontent.com/moby/docker-tdmgcc/master/gcc.zip C:\gcc.zip; `
Download-File https://raw.githubusercontent.com/jhowardmsft/docker-tdmgcc/master/gcc.zip C:\gcc.zip; `
`
Write-Host INFO: Downloading compiler 2 of 3...; `
Download-File https://raw.githubusercontent.com/moby/docker-tdmgcc/master/runtime.zip C:\runtime.zip; `
Download-File https://raw.githubusercontent.com/jhowardmsft/docker-tdmgcc/master/runtime.zip C:\runtime.zip; `
`
Write-Host INFO: Downloading compiler 3 of 3...; `
Download-File https://raw.githubusercontent.com/moby/docker-tdmgcc/master/binutils.zip C:\binutils.zip; `
Download-File https://raw.githubusercontent.com/jhowardmsft/docker-tdmgcc/master/binutils.zip C:\binutils.zip; `
`
Write-Host INFO: Extracting git...; `
Expand-Archive C:\gitsetup.zip C:\git-tmp; `
@@ -257,61 +245,19 @@ RUN `
Remove-Item C:\binutils.zip; `
Remove-Item C:\gitsetup.zip; `
`
Write-Host INFO: Downloading containerd; `
Install-Package -Force 7Zip4PowerShell; `
$location='https://github.com/containerd/containerd/releases/download/'+$Env:CONTAINERD_VERSION+'/containerd-'+$Env:CONTAINERD_VERSION.TrimStart('v')+'-windows-amd64.tar.gz'; `
Download-File $location C:\containerd.tar.gz; `
New-Item -Path C:\containerd -ItemType Directory; `
Expand-7Zip C:\containerd.tar.gz C:\; `
Expand-7Zip C:\containerd.tar C:\containerd; `
Remove-Item C:\containerd.tar.gz; `
Remove-Item C:\containerd.tar; `
`
# Ensure all directories we will require below exist...
$srcDir = """$Env:GOPATH`\src\github.com\docker\docker\bundles"""; `
Write-Host INFO: Ensuring existence of directory $srcDir...; `
New-Item -Force -ItemType Directory -Path $srcDir | Out-Null; `
Write-Host INFO: Creating source directory...; `
New-Item -ItemType Directory -Path C:\go\src\github.com\docker\docker | Out-Null; `
`
Write-Host INFO: Configuring git core.autocrlf...; `
C:\git\cmd\git config --global core.autocrlf true;
RUN `
Function Install-GoTestSum() { `
$Env:GO111MODULE = 'on'; `
$tmpGobin = "${Env:GOBIN_TMP}"; `
$Env:GOBIN = """${Env:GOPATH}`\bin"""; `
Write-Host "INFO: Installing gotestsum version $Env:GOTESTSUM_VERSION in $Env:GOBIN"; `
&go install "gotest.tools/gotestsum@${Env:GOTESTSUM_VERSION}"; `
$Env:GOBIN = "${tmpGobin}"; `
$Env:GO111MODULE = 'off'; `
if ($LASTEXITCODE -ne 0) { `
Throw '"gotestsum install failed..."'; `
} `
} `
C:\git\cmd\git config --global core.autocrlf true; `
`
Install-GoTestSum
RUN `
Function Install-GoWinres() { `
$Env:GO111MODULE = 'on'; `
$tmpGobin = "${Env:GOBIN_TMP}"; `
$Env:GOBIN = """${Env:GOPATH}`\bin"""; `
Write-Host "INFO: Installing go-winres version $Env:GOWINRES_VERSION in $Env:GOBIN"; `
&go install "github.com/tc-hib/go-winres@${Env:GOWINRES_VERSION}"; `
$Env:GOBIN = "${tmpGobin}"; `
$Env:GO111MODULE = 'off'; `
if ($LASTEXITCODE -ne 0) { `
Throw '"go-winres install failed..."'; `
} `
} `
`
Install-GoWinres
Write-Host INFO: Completed
# Make PowerShell the default entrypoint
ENTRYPOINT ["powershell.exe"]
# Set the working directory to the location of the sources
WORKDIR ${GOPATH}\src\github.com\docker\docker
WORKDIR C:\go\src\github.com\docker\docker
# Copy the sources into the container
COPY . .

Jenkinsfile

@@ -1,563 +1,325 @@
#!groovy
pipeline {
agent none
options {
buildDiscarder(logRotator(daysToKeepStr: '30'))
timeout(time: 2, unit: 'HOURS')
timestamps()
def withGithubStatus(String context, Closure cl) {
def setGithubStatus = { String state ->
try {
def backref = "${BUILD_URL}flowGraphTable/"
def reposSourceURL = scm.repositories[0].getURIs()[0].toString()
step(
$class: 'GitHubCommitStatusSetter',
contextSource: [$class: "ManuallyEnteredCommitContextSource", context: context],
errorHandlers: [[$class: 'ShallowAnyErrorHandler']],
reposSource: [$class: 'ManuallyEnteredRepositorySource', url: reposSourceURL],
statusBackrefSource: [$class: 'ManuallyEnteredBackrefSource', backref: backref],
statusResultSource: [$class: 'ConditionalStatusResultSource', results: [[$class: 'AnyBuildResult', state: state]]],
)
} catch (err) {
echo "Exception from GitHubCommitStatusSetter for $context: $err"
}
parameters {
booleanParam(name: 'arm64', defaultValue: true, description: 'ARM (arm64) Build/Test')
booleanParam(name: 's390x', defaultValue: false, description: 'IBM Z (s390x) Build/Test')
booleanParam(name: 'ppc64le', defaultValue: false, description: 'PowerPC (ppc64le) Build/Test')
booleanParam(name: 'dco', defaultValue: true, description: 'Run the DCO check')
}
environment {
DOCKER_BUILDKIT = '1'
DOCKER_EXPERIMENTAL = '1'
DOCKER_GRAPHDRIVER = 'overlay2'
APT_MIRROR = 'cdn-fastly.deb.debian.org'
CHECK_CONFIG_COMMIT = '33a3680e08d1007e72c3b3f1454f823d8e9948ee'
TESTDEBUG = '0'
TIMEOUT = '120m'
}
stages {
stage('pr-hack') {
when { changeRequest() }
steps {
script {
echo "Workaround for PR auto-cancel feature. Borrowed from https://issues.jenkins-ci.org/browse/JENKINS-43353"
def buildNumber = env.BUILD_NUMBER as int
if (buildNumber > 1) milestone(buildNumber - 1)
milestone(buildNumber)
}
}
}
stage('DCO-check') {
when {
beforeAgent true
expression { params.dco }
}
agent { label 'arm64 && ubuntu-2004' }
steps {
sh '''
docker run --rm \
-v "$WORKSPACE:/workspace" \
-e VALIDATE_REPO=${GIT_URL} \
-e VALIDATE_BRANCH=${CHANGE_TARGET} \
alpine sh -c 'apk add --no-cache -q bash git openssh-client && git config --system --add safe.directory /workspace && cd /workspace && hack/validate/dco'
'''
}
}
stage('Build') {
parallel {
stage('s390x') {
when {
beforeAgent true
// Skip this stage on PRs unless the checkbox is selected
anyOf {
not { changeRequest() }
expression { params.s390x }
}
}
agent { label 's390x-ubuntu-2004' }
}
stages {
stage("Print info") {
steps {
sh 'docker version'
sh 'docker info'
sh '''
echo "check-config.sh version: ${CHECK_CONFIG_COMMIT}"
curl -fsSL -o ${WORKSPACE}/check-config.sh "https://raw.githubusercontent.com/moby/moby/${CHECK_CONFIG_COMMIT}/contrib/check-config.sh" \
&& bash ${WORKSPACE}/check-config.sh || true
'''
}
}
stage("Build dev image") {
steps {
sh '''
docker build --force-rm --build-arg APT_MIRROR -t docker:${GIT_COMMIT} .
'''
}
}
stage("Unit tests") {
steps {
sh '''
sudo modprobe ip6table_filter
'''
sh '''
docker run --rm -t --privileged \
-v "$WORKSPACE/bundles:/go/src/github.com/docker/docker/bundles" \
--name docker-pr$BUILD_NUMBER \
-e DOCKER_EXPERIMENTAL \
-e DOCKER_GITCOMMIT=${GIT_COMMIT} \
-e DOCKER_GRAPHDRIVER \
-e VALIDATE_REPO=${GIT_URL} \
-e VALIDATE_BRANCH=${CHANGE_TARGET} \
docker:${GIT_COMMIT} \
hack/test/unit
'''
}
post {
always {
junit testResults: 'bundles/junit-report*.xml', allowEmptyResults: true
}
}
}
stage("Integration tests") {
environment { TEST_SKIP_INTEGRATION_CLI = '1' }
steps {
sh '''
docker run --rm -t --privileged \
-v "$WORKSPACE/bundles:/go/src/github.com/docker/docker/bundles" \
--name docker-pr$BUILD_NUMBER \
-e DOCKER_EXPERIMENTAL \
-e DOCKER_GITCOMMIT=${GIT_COMMIT} \
-e DOCKER_GRAPHDRIVER \
-e TESTDEBUG \
-e TEST_SKIP_INTEGRATION_CLI \
-e TIMEOUT \
-e VALIDATE_REPO=${GIT_URL} \
-e VALIDATE_BRANCH=${CHANGE_TARGET} \
docker:${GIT_COMMIT} \
hack/make.sh \
dynbinary \
test-integration
'''
}
post {
always {
junit testResults: 'bundles/**/*-report.xml', allowEmptyResults: true
}
}
}
}
setGithubStatus 'PENDING'
post {
always {
sh '''
echo "Ensuring container killed."
docker rm -vf docker-pr$BUILD_NUMBER || true
'''
sh '''
echo "Chowning /workspace to jenkins user"
docker run --rm -v "$WORKSPACE:/workspace" busybox chown -R "$(id -u):$(id -g)" /workspace
'''
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE', message: 'Failed to create bundles.tar.gz') {
sh '''
bundleName=s390x-integration
echo "Creating ${bundleName}-bundles.tar.gz"
# exclude overlay2 directories
find bundles -path '*/root/*overlay2' -prune -o -type f \\( -name '*-report.json' -o -name '*.log' -o -name '*.prof' -o -name '*-report.xml' \\) -print | xargs tar -czf ${bundleName}-bundles.tar.gz
'''
archiveArtifacts artifacts: '*-bundles.tar.gz', allowEmptyArchive: true
}
}
cleanup {
sh 'make clean'
deleteDir()
}
}
}
stage('s390x integration-cli') {
when {
beforeAgent true
// Skip this stage on PRs unless the checkbox is selected
anyOf {
not { changeRequest() }
expression { params.s390x }
}
}
agent { label 's390x-ubuntu-2004' }
stages {
stage("Print info") {
steps {
sh 'docker version'
sh 'docker info'
sh '''
echo "check-config.sh version: ${CHECK_CONFIG_COMMIT}"
curl -fsSL -o ${WORKSPACE}/check-config.sh "https://raw.githubusercontent.com/moby/moby/${CHECK_CONFIG_COMMIT}/contrib/check-config.sh" \
&& bash ${WORKSPACE}/check-config.sh || true
'''
}
}
stage("Build dev image") {
steps {
sh '''
docker build --force-rm --build-arg APT_MIRROR -t docker:${GIT_COMMIT} .
'''
}
}
stage("Integration-cli tests") {
environment { TEST_SKIP_INTEGRATION = '1' }
steps {
sh '''
docker run --rm -t --privileged \
-v "$WORKSPACE/bundles:/go/src/github.com/docker/docker/bundles" \
--name docker-pr$BUILD_NUMBER \
-e DOCKER_GITCOMMIT=${GIT_COMMIT} \
-e DOCKER_GRAPHDRIVER \
-e TEST_SKIP_INTEGRATION \
-e TIMEOUT \
-e VALIDATE_REPO=${GIT_URL} \
-e VALIDATE_BRANCH=${CHANGE_TARGET} \
docker:${GIT_COMMIT} \
hack/make.sh \
dynbinary \
test-integration
'''
}
post {
always {
junit testResults: 'bundles/**/*-report.xml', allowEmptyResults: true
}
}
}
}
post {
always {
sh '''
echo "Ensuring container killed."
docker rm -vf docker-pr$BUILD_NUMBER || true
'''
sh '''
echo "Chowning /workspace to jenkins user"
docker run --rm -v "$WORKSPACE:/workspace" busybox chown -R "$(id -u):$(id -g)" /workspace
'''
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE', message: 'Failed to create bundles.tar.gz') {
sh '''
bundleName=s390x-integration-cli
echo "Creating ${bundleName}-bundles.tar.gz"
# exclude overlay2 directories
find bundles -path '*/root/*overlay2' -prune -o -type f \\( -name '*-report.json' -o -name '*.log' -o -name '*.prof' -o -name '*-report.xml' \\) -print | xargs tar -czf ${bundleName}-bundles.tar.gz
'''
archiveArtifacts artifacts: '*-bundles.tar.gz', allowEmptyArchive: true
}
}
cleanup {
sh 'make clean'
deleteDir()
}
}
}
stage('ppc64le') {
when {
beforeAgent true
// Skip this stage on PRs unless the checkbox is selected
anyOf {
not { changeRequest() }
expression { params.ppc64le }
}
}
agent { label 'ppc64le-ubuntu-1604' }
stages {
stage("Print info") {
steps {
sh 'docker version'
sh 'docker info'
sh '''
echo "check-config.sh version: ${CHECK_CONFIG_COMMIT}"
curl -fsSL -o ${WORKSPACE}/check-config.sh "https://raw.githubusercontent.com/moby/moby/${CHECK_CONFIG_COMMIT}/contrib/check-config.sh" \
&& bash ${WORKSPACE}/check-config.sh || true
'''
}
}
stage("Build dev image") {
steps {
sh '''
docker buildx build --load --force-rm --build-arg APT_MIRROR -t docker:${GIT_COMMIT} .
'''
}
}
stage("Unit tests") {
steps {
sh '''
sudo modprobe ip6table_filter
'''
sh '''
docker run --rm -t --privileged \
-v "$WORKSPACE/bundles:/go/src/github.com/docker/docker/bundles" \
--name docker-pr$BUILD_NUMBER \
-e DOCKER_EXPERIMENTAL \
-e DOCKER_GITCOMMIT=${GIT_COMMIT} \
-e DOCKER_GRAPHDRIVER \
-e VALIDATE_REPO=${GIT_URL} \
-e VALIDATE_BRANCH=${CHANGE_TARGET} \
docker:${GIT_COMMIT} \
hack/test/unit
'''
}
post {
always {
junit testResults: 'bundles/junit-report*.xml', allowEmptyResults: true
}
}
}
stage("Integration tests") {
environment { TEST_SKIP_INTEGRATION_CLI = '1' }
steps {
sh '''
docker run --rm -t --privileged \
-v "$WORKSPACE/bundles:/go/src/github.com/docker/docker/bundles" \
--name docker-pr$BUILD_NUMBER \
-e DOCKER_EXPERIMENTAL \
-e DOCKER_GITCOMMIT=${GIT_COMMIT} \
-e DOCKER_GRAPHDRIVER \
-e TESTDEBUG \
-e TEST_SKIP_INTEGRATION_CLI \
-e TIMEOUT \
-e VALIDATE_REPO=${GIT_URL} \
-e VALIDATE_BRANCH=${CHANGE_TARGET} \
docker:${GIT_COMMIT} \
hack/make.sh \
dynbinary \
test-integration
'''
}
post {
always {
junit testResults: 'bundles/**/*-report.xml', allowEmptyResults: true
}
}
}
}
post {
always {
sh '''
echo "Ensuring container killed."
docker rm -vf docker-pr$BUILD_NUMBER || true
'''
sh '''
echo "Chowning /workspace to jenkins user"
docker run --rm -v "$WORKSPACE:/workspace" busybox chown -R "$(id -u):$(id -g)" /workspace
'''
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE', message: 'Failed to create bundles.tar.gz') {
sh '''
bundleName=ppc64le-integration
echo "Creating ${bundleName}-bundles.tar.gz"
# exclude overlay2 directories
find bundles -path '*/root/*overlay2' -prune -o -type f \\( -name '*-report.json' -o -name '*.log' -o -name '*.prof' -o -name '*-report.xml' \\) -print | xargs tar -czf ${bundleName}-bundles.tar.gz
'''
archiveArtifacts artifacts: '*-bundles.tar.gz', allowEmptyArchive: true
}
}
cleanup {
sh 'make clean'
deleteDir()
}
}
}
stage('ppc64le integration-cli') {
when {
beforeAgent true
// Skip this stage on PRs unless the checkbox is selected
anyOf {
not { changeRequest() }
expression { params.ppc64le }
}
}
agent { label 'ppc64le-ubuntu-1604' }
stages {
stage("Print info") {
steps {
sh 'docker version'
sh 'docker info'
sh '''
echo "check-config.sh version: ${CHECK_CONFIG_COMMIT}"
curl -fsSL -o ${WORKSPACE}/check-config.sh "https://raw.githubusercontent.com/moby/moby/${CHECK_CONFIG_COMMIT}/contrib/check-config.sh" \
&& bash ${WORKSPACE}/check-config.sh || true
'''
}
}
stage("Build dev image") {
steps {
sh '''
docker buildx build --load --force-rm --build-arg APT_MIRROR -t docker:${GIT_COMMIT} .
'''
}
}
stage("Integration-cli tests") {
environment { TEST_SKIP_INTEGRATION = '1' }
steps {
sh '''
docker run --rm -t --privileged \
-v "$WORKSPACE/bundles:/go/src/github.com/docker/docker/bundles" \
--name docker-pr$BUILD_NUMBER \
-e DOCKER_GITCOMMIT=${GIT_COMMIT} \
-e DOCKER_GRAPHDRIVER \
-e TEST_SKIP_INTEGRATION \
-e TIMEOUT \
-e VALIDATE_REPO=${GIT_URL} \
-e VALIDATE_BRANCH=${CHANGE_TARGET} \
docker:${GIT_COMMIT} \
hack/make.sh \
dynbinary \
test-integration
'''
}
post {
always {
junit testResults: 'bundles/**/*-report.xml', allowEmptyResults: true
}
}
}
}
post {
always {
sh '''
echo "Ensuring container killed."
docker rm -vf docker-pr$BUILD_NUMBER || true
'''
sh '''
echo "Chowning /workspace to jenkins user"
docker run --rm -v "$WORKSPACE:/workspace" busybox chown -R "$(id -u):$(id -g)" /workspace
'''
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE', message: 'Failed to create bundles.tar.gz') {
sh '''
bundleName=ppc64le-integration-cli
echo "Creating ${bundleName}-bundles.tar.gz"
# exclude overlay2 directories
find bundles -path '*/root/*overlay2' -prune -o -type f \\( -name '*-report.json' -o -name '*.log' -o -name '*.prof' -o -name '*-report.xml' \\) -print | xargs tar -czf ${bundleName}-bundles.tar.gz
'''
archiveArtifacts artifacts: '*-bundles.tar.gz', allowEmptyArchive: true
}
}
cleanup {
sh 'make clean'
deleteDir()
}
}
}
stage('arm64') {
when {
beforeAgent true
expression { params.arm64 }
}
agent { label 'arm64 && ubuntu-2004' }
environment {
TEST_SKIP_INTEGRATION_CLI = '1'
}
stages {
stage("Print info") {
steps {
sh 'docker version'
sh 'docker info'
sh '''
echo "check-config.sh version: ${CHECK_CONFIG_COMMIT}"
curl -fsSL -o ${WORKSPACE}/check-config.sh "https://raw.githubusercontent.com/moby/moby/${CHECK_CONFIG_COMMIT}/contrib/check-config.sh" \
&& bash ${WORKSPACE}/check-config.sh || true
'''
}
}
stage("Build dev image") {
steps {
sh 'docker build --force-rm --build-arg APT_MIRROR -t docker:${GIT_COMMIT} .'
}
}
stage("Unit tests") {
steps {
sh '''
sudo modprobe ip6table_filter
'''
sh '''
docker run --rm -t --privileged \
-v "$WORKSPACE/bundles:/go/src/github.com/docker/docker/bundles" \
--name docker-pr$BUILD_NUMBER \
-e DOCKER_EXPERIMENTAL \
-e DOCKER_GITCOMMIT=${GIT_COMMIT} \
-e DOCKER_GRAPHDRIVER \
-e VALIDATE_REPO=${GIT_URL} \
-e VALIDATE_BRANCH=${CHANGE_TARGET} \
docker:${GIT_COMMIT} \
hack/test/unit
'''
}
post {
always {
junit testResults: 'bundles/junit-report*.xml', allowEmptyResults: true
}
}
}
stage("Integration tests") {
environment { TEST_SKIP_INTEGRATION_CLI = '1' }
steps {
sh '''
docker run --rm -t --privileged \
-v "$WORKSPACE/bundles:/go/src/github.com/docker/docker/bundles" \
--name docker-pr$BUILD_NUMBER \
-e DOCKER_EXPERIMENTAL \
-e DOCKER_GITCOMMIT=${GIT_COMMIT} \
-e DOCKER_GRAPHDRIVER \
-e TESTDEBUG \
-e TEST_SKIP_INTEGRATION_CLI \
-e TIMEOUT \
-e VALIDATE_REPO=${GIT_URL} \
-e VALIDATE_BRANCH=${CHANGE_TARGET} \
docker:${GIT_COMMIT} \
hack/make.sh \
dynbinary \
test-integration
'''
}
post {
always {
junit testResults: 'bundles/**/*-report.xml', allowEmptyResults: true
}
}
}
}
post {
always {
sh '''
echo "Ensuring container killed."
docker rm -vf docker-pr$BUILD_NUMBER || true
'''
sh '''
echo "Chowning /workspace to jenkins user"
docker run --rm -v "$WORKSPACE:/workspace" busybox chown -R "$(id -u):$(id -g)" /workspace
'''
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE', message: 'Failed to create bundles.tar.gz') {
sh '''
bundleName=arm64-integration
echo "Creating ${bundleName}-bundles.tar.gz"
# exclude overlay2 directories
find bundles -path '*/root/*overlay2' -prune -o -type f \\( -name '*-report.json' -o -name '*.log' -o -name '*.prof' -o -name '*-report.xml' \\) -print | xargs tar -czf ${bundleName}-bundles.tar.gz
'''
archiveArtifacts artifacts: '*-bundles.tar.gz', allowEmptyArchive: true
}
}
cleanup {
sh 'make clean'
deleteDir()
}
}
}
}
}
}
try {
cl()
} catch (err) {
// AbortException signals a "normal" build failure.
if (!(err instanceof hudson.AbortException)) {
echo "Exception in withGithubStatus for $context: $err"
}
setGithubStatus 'FAILURE'
throw err
}
setGithubStatus 'SUCCESS'
}
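The `withGithubStatus` helper above wraps a build body with commit-status reporting: PENDING before the work runs, FAILURE on error (with Jenkins' own `AbortException` treated as a normal build failure), SUCCESS otherwise. A minimal Go sketch of that wrap-and-report pattern, with hypothetical names standing in for the GitHub status call:

```
package main

import "fmt"

// withStatus mirrors the Jenkinsfile helper: report PENDING, run the body,
// then report SUCCESS or FAILURE depending on the outcome.
func withStatus(context string, body func() error) error {
	setStatus(context, "PENDING")
	if err := body(); err != nil {
		setStatus(context, "FAILURE")
		return err
	}
	setStatus(context, "SUCCESS")
	return nil
}

// setStatus is a placeholder for the GitHub commit-status API call.
func setStatus(context, state string) { fmt.Printf("%s -> %s\n", context, state) }

func main() {
	_ = withStatus("janky", func() error { return nil })
}
```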
pipeline {
agent none
options {
buildDiscarder(logRotator(daysToKeepStr: '30'))
timeout(time: 3, unit: 'HOURS')
}
parameters {
booleanParam(name: 'janky', defaultValue: true, description: 'x86 Build/Test')
booleanParam(name: 'experimental', defaultValue: true, description: 'x86 Experimental Build/Test ')
booleanParam(name: 'z', defaultValue: true, description: 'IBM Z (s390x) Build/Test')
booleanParam(name: 'powerpc', defaultValue: true, description: 'PowerPC (ppc64le) Build/Test')
booleanParam(name: 'vendor', defaultValue: true, description: 'Vendor')
booleanParam(name: 'windowsRS1', defaultValue: true, description: 'Windows 2016 (RS1) Build/Test')
booleanParam(name: 'windowsRS5', defaultValue: true, description: 'Windows 2019 (RS5) Build/Test')
}
stages {
stage('Build') {
parallel {
stage('janky') {
when {
beforeAgent true
expression { params.janky }
}
agent {
node {
label 'ubuntu-1604-overlay2-stable'
}
}
steps {
withCredentials([string(credentialsId: '52af932f-f13f-429e-8467-e7ff8b965cdb', variable: 'CODECOV_TOKEN')]) {
withGithubStatus('janky') {
sh '''
# todo: include ip_vs in base image
sudo modprobe ip_vs
GITCOMMIT=$(git rev-parse --short HEAD)
docker build --rm --force-rm --build-arg APT_MIRROR=cdn-fastly.deb.debian.org -t docker:$GITCOMMIT .
docker run --rm -t --privileged \
-v "$WORKSPACE/bundles:/go/src/github.com/docker/docker/bundles" \
-v "$WORKSPACE/.git:/go/src/github.com/docker/docker/.git" \
--name docker-pr$BUILD_NUMBER \
-e DOCKER_GITCOMMIT=${GITCOMMIT} \
-e DOCKER_GRAPHDRIVER=vfs \
-e DOCKER_EXECDRIVER=native \
-e CODECOV_TOKEN \
-e GIT_SHA1=${GIT_COMMIT} \
docker:$GITCOMMIT \
hack/ci/janky
'''
sh '''
GITCOMMIT=$(git rev-parse --short HEAD)
echo "Building e2e image"
docker build --build-arg DOCKER_GITCOMMIT=$GITCOMMIT -t moby-e2e-test -f Dockerfile.e2e .
'''
}
}
}
post {
always {
sh '''
echo "Ensuring container killed."
docker rm -vf docker-pr$BUILD_NUMBER || true
echo "Chowning /workspace to jenkins user"
docker run --rm -v "$WORKSPACE:/workspace" busybox chown -R "$(id -u):$(id -g)" /workspace
'''
sh '''
echo "Creating bundles.tar.gz"
(find bundles -name '*.log' -o -name '*.prof' -o -name integration.test | xargs tar -czf bundles.tar.gz) || true
'''
archiveArtifacts artifacts: 'bundles.tar.gz'
}
}
}
stage('experimental') {
when {
beforeAgent true
expression { params.experimental }
}
agent {
node {
label 'ubuntu-1604-aufs-stable'
}
}
steps {
withGithubStatus('experimental') {
sh '''
GITCOMMIT=$(git rev-parse --short HEAD)
docker build --rm --force-rm --build-arg APT_MIRROR=cdn-fastly.deb.debian.org -t docker:${GITCOMMIT}-exp .
docker run --rm -t --privileged \
-v "$WORKSPACE/bundles:/go/src/github.com/docker/docker/bundles" \
-e DOCKER_EXPERIMENTAL=y \
--name docker-pr-exp$BUILD_NUMBER \
-e DOCKER_GITCOMMIT=${GITCOMMIT} \
-e DOCKER_GRAPHDRIVER=vfs \
-e DOCKER_EXECDRIVER=native \
docker:${GITCOMMIT}-exp \
hack/ci/experimental
'''
}
}
post {
always {
sh '''
echo "Ensuring container killed."
docker rm -vf docker-pr-exp$BUILD_NUMBER || true
echo "Chowning /workspace to jenkins user"
docker run --rm -v "$WORKSPACE:/workspace" busybox chown -R "$(id -u):$(id -g)" /workspace
'''
sh '''
echo "Creating bundles.tar.gz"
(find bundles -name '*.log' -o -name '*.prof' -o -name integration.test | xargs tar -czf bundles.tar.gz) || true
'''
archiveArtifacts artifacts: 'bundles.tar.gz'
}
}
}
stage('z') {
when {
beforeAgent true
expression { params.z }
}
agent {
node {
label 's390x-ubuntu-1604'
}
}
steps {
withGithubStatus('z') {
sh '''
GITCOMMIT=$(git rev-parse --short HEAD)
test -f Dockerfile.s390x && \
docker build --rm --force-rm --build-arg APT_MIRROR=cdn-fastly.deb.debian.org -t docker-s390x:$GITCOMMIT -f Dockerfile.s390x . || \
docker build --rm --force-rm --build-arg APT_MIRROR=cdn-fastly.deb.debian.org -t docker-s390x:$GITCOMMIT -f Dockerfile .
docker run --rm -t --privileged \
-v "$WORKSPACE/bundles:/go/src/github.com/docker/docker/bundles" \
--name docker-pr-s390x$BUILD_NUMBER \
-e DOCKER_GRAPHDRIVER=vfs \
-e DOCKER_EXECDRIVER=native \
-e TIMEOUT="300m" \
-e DOCKER_GITCOMMIT=${GITCOMMIT} \
docker-s390x:$GITCOMMIT \
hack/ci/z
'''
}
}
post {
always {
sh '''
echo "Ensuring container killed."
docker rm -vf docker-pr-s390x$BUILD_NUMBER || true
echo "Chowning /workspace to jenkins user"
docker run --rm -v "$WORKSPACE:/workspace" s390x/busybox chown -R "$(id -u):$(id -g)" /workspace
'''
sh '''
echo "Creating bundles.tar.gz"
find bundles -name '*.log' | xargs tar -czf bundles.tar.gz
'''
archiveArtifacts artifacts: 'bundles.tar.gz'
}
}
}
stage('powerpc') {
when {
beforeAgent true
expression { params.powerpc }
}
agent {
node {
label 'ppc64le-ubuntu-1604'
}
}
steps {
withGithubStatus('powerpc') {
sh '''
GITCOMMIT=$(git rev-parse --short HEAD)
test -f Dockerfile.ppc64le && \
docker build --rm --force-rm --build-arg APT_MIRROR=cdn-fastly.deb.debian.org -t docker-powerpc:$GITCOMMIT -f Dockerfile.ppc64le . || \
docker build --rm --force-rm --build-arg APT_MIRROR=cdn-fastly.deb.debian.org -t docker-powerpc:$GITCOMMIT -f Dockerfile .
docker run --rm -t --privileged \
-v "$WORKSPACE/bundles:/go/src/github.com/docker/docker/bundles" \
--name docker-pr-power$BUILD_NUMBER \
-e DOCKER_GRAPHDRIVER=vfs \
-e DOCKER_EXECDRIVER=native \
-e DOCKER_GITCOMMIT=${GITCOMMIT} \
-e TIMEOUT="180m" \
docker-powerpc:$GITCOMMIT \
hack/ci/powerpc
'''
}
}
post {
always {
sh '''
echo "Ensuring container killed."
docker rm -vf docker-pr-power$BUILD_NUMBER || true
echo "Chowning /workspace to jenkins user"
docker run --rm -v "$WORKSPACE:/workspace" ppc64le/busybox chown -R "$(id -u):$(id -g)" /workspace
'''
sh '''
echo "Creating bundles.tar.gz"
find bundles -name '*.log' | xargs tar -czf bundles.tar.gz
'''
archiveArtifacts artifacts: 'bundles.tar.gz'
}
}
}
stage('vendor') {
when {
beforeAgent true
expression { params.vendor }
}
agent {
node {
label 'ubuntu-1604-aufs-stable'
}
}
steps {
withGithubStatus('vendor') {
sh '''
GITCOMMIT=$(git rev-parse --short HEAD)
docker build --rm --force-rm --build-arg APT_MIRROR=cdn-fastly.deb.debian.org -t dockerven:$GITCOMMIT .
docker run --rm -t --privileged \
--name dockerven-pr$BUILD_NUMBER \
-e DOCKER_GRAPHDRIVER=vfs \
-e DOCKER_EXECDRIVER=native \
-v "$WORKSPACE/.git:/go/src/github.com/docker/docker/.git" \
-e DOCKER_GITCOMMIT=${GITCOMMIT} \
-e TIMEOUT=120m dockerven:$GITCOMMIT \
hack/validate/vendor
'''
}
}
}
stage('windowsRS1') {
when {
beforeAgent true
expression { params.windowsRS1 }
}
agent {
node {
label 'windows-rs1'
customWorkspace 'c:\\gopath\\src\\github.com\\docker\\docker'
}
}
steps {
withGithubStatus('windowsRS1') {
powershell '''
$ErrorActionPreference = 'Stop'
.\\hack\\ci\\windows.ps1
exit $LastExitCode
'''
}
}
}
stage('windowsRS5-process') {
when {
beforeAgent true
expression { params.windowsRS5 }
}
agent {
node {
label 'windows-rs5'
customWorkspace 'c:\\gopath\\src\\github.com\\docker\\docker'
}
}
steps {
withGithubStatus('windowsRS5-process') {
powershell '''
$ErrorActionPreference = 'Stop'
.\\hack\\ci\\windows.ps1
exit $LastExitCode
'''
}
}
}
}
}
}
}


@@ -24,17 +24,22 @@
# subsystem maintainers accountable. If ownership is unclear, they are the de facto owners.
people = [
"aaronlehmann",
"akihirosuda",
"anusha",
"coolljt0725",
"cpuguy83",
"crosbymichael",
"dnephin",
"duglin",
"estesp",
"jhowardmsft",
"johnstep",
"justincormack",
"kolyshkin",
"mhbauer",
"mlaventure",
"runcom",
"samuelkarp",
"stevvooe",
"thajeztah",
"tianon",
@@ -61,16 +66,14 @@
people = [
"alexellis",
"andrewhsu",
"corhere",
"anonymuse",
"chanwit",
"fntlnz",
"gianarb",
"ndeloof",
"neersighted",
"olljanat",
"programmerq",
"rheinwein",
"ripcurld",
"rumpl",
"samwhited",
"thajeztah"
]
@@ -81,12 +84,6 @@
# Thank you!
people = [
# Aaron Lehmann was a maintainer for swarmkit, the registry, and the engine,
# and contributed many improvements, features, and bugfixes in those areas,
# among which "automated service rollbacks", templated secrets and configs,
# and resumable image layer downloads.
"aaronlehmann",
# Harald Albers is the mastermind behind the bash completion scripts for the
# Docker CLI. The completion scripts moved to the Docker CLI repository, so
# you can now find him perform his magic in the https://github.com/docker/cli repository.
@@ -106,30 +103,6 @@
# and tweets as @calavera.
"calavera",
# Michael Crosby was "chief maintainer" of the Docker project.
# During his time as a maintainer, Michael contributed to many
# milestones of the project; he was release captain of Docker v1.0.0,
# started the development of "libcontainer" (what later became runc)
# and containerd, as well as demoing cool hacks such as live migrating
# a game server container with checkpoint/restore.
#
# Michael is currently a maintainer of containerd, but you may see
# him around in other projects on GitHub.
"crosbymichael",
# Before becoming a maintainer, Daniel Nephin was a core contributor
# to "Fig" (now known as Docker Compose). As a maintainer for both the
# Engine and Docker CLI, Daniel contributed many features, among which
# the `docker stack` commands, allowing users to deploy their Docker
# Compose projects as a Swarm service.
"dnephin",
# Doug Davis contributed many features and fixes for the classic builder,
# such as "wildcard" copy, the dockerignore file, custom paths/names
# for the Dockerfile, as well as enhancements to the API and documentation.
# Follow Doug on Twitter, where he tweets as @duginabox.
"duglin",
# As a maintainer, Erik was responsible for the "builder", and
# started the first designs for the new networking model in
# Docker. Erik is now working on all kinds of plugins for Docker
@@ -138,7 +111,7 @@
# still stumble into him in our issue tracker, or on IRC.
"erikh",
# Evan Hazlett is the creator of the Shipyard and Interlock open source projects,
# Evan Hazlett is the creator of of the Shipyard and Interlock open source projects,
# and the author of "Orca", which became the foundation of Docker Universal Control
# Plane (UCP). As a maintainer, Evan helped integrating SwarmKit (secrets, tasks)
# into the Docker engine.
@@ -180,16 +153,6 @@
# check out her open source projects on GitHub https://github.com/jessfraz (a must-try).
"jessfraz",
# As a maintainer, John Howard managed to make the impossible possible;
# to run Docker on Windows. After facing many challenges, teaching
# fellow-maintainers that 'Windows is not Linux', and many changes in
# Windows Server to facilitate containers, native Windows containers
# saw the light of day in 2015.
#
# John is now enjoying life without containers: playing piano, painting,
# and walking his dogs, but you may occasionally see him drop by on GitHub.
"lowenna",
# Alexander Morozov contributed many features to Docker, worked on the premise of
# what later became containerd (and worked on that too), and made a "stupid" Go
# vendor tool specifically for docker/docker needs: vndr (https://github.com/LK4D4/vndr).
@@ -203,13 +166,6 @@
# Swarm mode networking.
"mavenugo",
# As a maintainer, Kenfe-Mickaël Laventure worked on the container runtime,
# integrating containerd 1.0 with the daemon, and adding support for custom
# OCI runtimes, as well as implementing the `docker prune` subcommands,
# which was a welcome feature to be added. You can keep up with Mickaél on
# Twitter (@kmlaventure).
"mlaventure",
# As a docs maintainer, Mary Anthony contributed greatly to the Docker
# docs. She wrote the Docker Contributor Guide and Getting Started
# Guides. She helped create a doc build system independent of
@@ -302,6 +258,11 @@
Email = "andrewhsu@docker.com"
GitHub = "andrewhsu"
[people.anonymuse]
Name = "Jesse White"
Email = "anonymuse@gmail.com"
GitHub = "anonymuse"
[people.anusha]
Name = "Anusha Ragunathan"
Email = "anusha@docker.com"
@@ -317,16 +278,16 @@
Email = "leijitang@huawei.com"
GitHub = "coolljt0725"
[people.corhere]
Name = "Cory Snider"
Email = "csnider@mirantis.com"
GitHub = "corhere"
[people.cpuguy83]
Name = "Brian Goff"
Email = "cpuguy83@gmail.com"
GitHub = "cpuguy83"
[people.chanwit]
Name = "Chanwit Kaewkasi"
Email = "chanwit@gmail.com"
GitHub = "chanwit"
[people.crosbymichael]
Name = "Michael Crosby"
Email = "crosbymichael@gmail.com"
@@ -377,6 +338,11 @@
Email = "james@lovedthanlost.net"
GitHub = "jamtur01"
[people.jhowardmsft]
Name = "John Howard"
Email = "jhoward@microsoft.com"
GitHub = "jhowardmsft"
[people.jessfraz]
Name = "Jessie Frazelle"
Email = "jess@linux.com"
@@ -402,11 +368,6 @@
Email = "lk4d4@docker.com"
GitHub = "lk4d4"
[people.lowenna]
Name = "John Howard"
Email = "github@lowenna.com"
GitHub = "lowenna"
[people.mavenugo]
Name = "Madhu Venugopal"
Email = "madhu@docker.com"
@@ -432,16 +393,6 @@
Email = "mrjana@docker.com"
GitHub = "mrjana"
[people.ndeloof]
Name = "Nicolas De Loof"
Email = "nicolas.deloof@gmail.com"
GitHub = "ndeloof"
[people.neersighted]
Name = "Bjorn Neergaard"
Email = "bneergaard@mirantis.com"
GitHub = "neersighted"
[people.olljanat]
Name = "Olli Janatuinen"
Email = "olli.janatuinen@gmail.com"
@@ -452,31 +403,21 @@
Email = "jeff@docker.com"
GitHub = "programmerq"
[people.rheinwein]
Name = "Laura Frank"
Email = "laura@codeship.com"
GitHub = "rheinwein"
[people.ripcurld]
Name = "Boaz Shuster"
Email = "ripcurld.github@gmail.com"
GitHub = "ripcurld"
[people.rumpl]
Name = "Djordje Lukic"
Email = "djordje.lukic@docker.com"
GitHub = "rumpl"
[people.runcom]
Name = "Antonio Murdaca"
Email = "runcom@redhat.com"
GitHub = "runcom"
[people.samuelkarp]
Name = "Samuel Karp"
Email = "me@samuelkarp.com"
GitHub = "samuelkarp"
[people.samwhited]
Name = "Sam Whited"
Email = "sam@samwhited.com"
GitHub = "samwhited"
[people.shykes]
Name = "Solomon Hykes"
Email = "solomon@docker.com"

Makefile

@@ -1,12 +1,12 @@
.PHONY: all binary dynbinary build cross help install manpages run shell test test-docker-py test-integration test-unit validate validate-% win
DOCKER ?= docker
BUILDX ?= $(DOCKER) buildx
.PHONY: all binary dynbinary build cross help install manpages run shell test test-docker-py test-integration test-unit validate win
# set the graph driver as the current graphdriver if not set
DOCKER_GRAPHDRIVER := $(if $(DOCKER_GRAPHDRIVER),$(DOCKER_GRAPHDRIVER),$(shell docker info 2>&1 | grep "Storage Driver" | sed 's/.*: //'))
export DOCKER_GRAPHDRIVER
# enable/disable cross-compile
DOCKER_CROSS ?= false
# get OS/Arch of docker engine
DOCKER_OSARCH := $(shell bash -c 'source hack/make/.detect-daemon-osarch && echo $${DOCKER_ENGINE_OSARCH}')
DOCKERFILE := $(shell bash -c 'source hack/make/.detect-daemon-osarch && echo $${DOCKERFILE}')
@@ -49,33 +49,26 @@ DOCKER_ENVS := \
-e DOCKER_LDFLAGS \
-e DOCKER_PORT \
-e DOCKER_REMAP_ROOT \
-e DOCKER_ROOTLESS \
-e DOCKER_STORAGE_OPTS \
-e DOCKER_TEST_HOST \
-e DOCKER_USERLANDPROXY \
-e DOCKERD_ARGS \
-e DELVE_PORT \
-e GITHUB_ACTIONS \
-e TEST_FORCE_VALIDATE \
-e TEST_INTEGRATION_DIR \
-e TEST_SKIP_INTEGRATION \
-e TEST_SKIP_INTEGRATION_CLI \
-e TESTCOVERAGE \
-e TESTDEBUG \
-e TESTDIRS \
-e TESTFLAGS \
-e TESTFLAGS_INTEGRATION \
-e TESTFLAGS_INTEGRATION_CLI \
-e TEST_FILTER \
-e TIMEOUT \
-e VALIDATE_REPO \
-e VALIDATE_BRANCH \
-e VALIDATE_ORIGIN_BRANCH \
-e HTTP_PROXY \
-e HTTPS_PROXY \
-e NO_PROXY \
-e http_proxy \
-e https_proxy \
-e no_proxy \
-e VERSION \
-e PLATFORM \
-e DEFAULT_PRODUCT_LICENSE \
-e PRODUCT \
-e PACKAGER_NAME
-e PRODUCT
# note: we _cannot_ add "-e DOCKER_BUILDTAGS" here because even if it's unset in the shell, that would shadow the "ENV DOCKER_BUILDTAGS" set in our Dockerfile, which is very important for our official builds
# to allow `make BIND_DIR=. shell` or `make BIND_DIR= test`
@@ -86,14 +79,13 @@ BIND_DIR := $(if $(BINDDIR),$(BINDDIR),$(if $(DOCKER_HOST),,bundles))
# DOCKER_MOUNT can be overridden, but use at your own risk!
ifndef DOCKER_MOUNT
DOCKER_MOUNT := $(if $(BIND_DIR),-v "$(CURDIR)/$(BIND_DIR):/go/src/github.com/docker/docker/$(BIND_DIR)")
DOCKER_MOUNT := $(if $(DOCKER_BINDDIR_MOUNT_OPTS),$(DOCKER_MOUNT):$(DOCKER_BINDDIR_MOUNT_OPTS),$(DOCKER_MOUNT))
# This allows the test suite to be able to run without worrying about the underlying fs used by the container running the daemon (e.g. aufs-on-aufs), so long as the host running the container is running a supported fs.
# The volume will be cleaned up when the container is removed due to `--rm`.
# Note that `BIND_DIR` will already be set to `bundles` if `DOCKER_HOST` is not set (see above BIND_DIR line), in such case this will do nothing since `DOCKER_MOUNT` will already be set.
DOCKER_MOUNT := $(if $(DOCKER_MOUNT),$(DOCKER_MOUNT),-v /go/src/github.com/docker/docker/bundles) -v "$(CURDIR)/.git:/go/src/github.com/docker/docker/.git"
DOCKER_MOUNT_CACHE := -v docker-dev-cache:/root/.cache -v docker-mod-cache:/go/pkg/mod/
DOCKER_MOUNT_CACHE := -v docker-dev-cache:/root/.cache
DOCKER_MOUNT_CLI := $(if $(DOCKER_CLI_PATH),-v $(shell dirname $(DOCKER_CLI_PATH)):/usr/local/cli,)
DOCKER_MOUNT_BASH_COMPLETION := $(if $(DOCKER_BASH_COMPLETION_PATH),-v $(shell dirname $(DOCKER_BASH_COMPLETION_PATH)):/usr/local/completion/bash,)
DOCKER_MOUNT := $(DOCKER_MOUNT) $(DOCKER_MOUNT_CACHE) $(DOCKER_MOUNT_CLI) $(DOCKER_MOUNT_BASH_COMPLETION)
@@ -102,16 +94,20 @@ endif # ifndef DOCKER_MOUNT
# This allows to set the docker-dev container name
DOCKER_CONTAINER_NAME := $(if $(CONTAINER_NAME),--name $(CONTAINER_NAME),)
DOCKER_IMAGE := docker-dev
GIT_BRANCH := $(shell git rev-parse --abbrev-ref HEAD 2>/dev/null)
GIT_BRANCH_CLEAN := $(shell echo $(GIT_BRANCH) | sed -e "s/[^[:alnum:]]/-/g")
DOCKER_IMAGE := docker-dev$(if $(GIT_BRANCH_CLEAN),:$(GIT_BRANCH_CLEAN))
DOCKER_PORT_FORWARD := $(if $(DOCKER_PORT),-p "$(DOCKER_PORT)",)
DELVE_PORT_FORWARD := $(if $(DELVE_PORT),-p "$(DELVE_PORT)",)
DOCKER_FLAGS := $(DOCKER) run --rm --privileged $(DOCKER_CONTAINER_NAME) $(DOCKER_ENVS) $(DOCKER_MOUNT) $(DOCKER_PORT_FORWARD) $(DELVE_PORT_FORWARD)
DOCKER_FLAGS := docker run --rm -i --privileged $(DOCKER_CONTAINER_NAME) $(DOCKER_ENVS) $(DOCKER_MOUNT) $(DOCKER_PORT_FORWARD)
BUILD_APT_MIRROR := $(if $(DOCKER_BUILD_APT_MIRROR),--build-arg APT_MIRROR=$(DOCKER_BUILD_APT_MIRROR))
export BUILD_APT_MIRROR
SWAGGER_DOCS_PORT ?= 9000
INTEGRATION_CLI_MASTER_IMAGE := $(if $(INTEGRATION_CLI_MASTER_IMAGE), $(INTEGRATION_CLI_MASTER_IMAGE), integration-cli-master)
INTEGRATION_CLI_WORKER_IMAGE := $(if $(INTEGRATION_CLI_WORKER_IMAGE), $(INTEGRATION_CLI_WORKER_IMAGE), integration-cli-worker)
define \n
@@ -125,49 +121,36 @@ ifeq ($(INTERACTIVE), 1)
DOCKER_FLAGS += -t
endif
# on GitHub Runners the input device is not a TTY, so we allocate a pseudo-one;
# otherwise keep STDIN open even if not attached when not on a GitHub Runner.
ifeq ($(GITHUB_ACTIONS),true)
DOCKER_FLAGS += -t
else
DOCKER_FLAGS += -i
endif
DOCKER_RUN_DOCKER := $(DOCKER_FLAGS) "$(DOCKER_IMAGE)"
DOCKER_BUILD_ARGS += --build-arg=GO_VERSION
ifdef DOCKER_SYSTEMD
DOCKER_BUILD_ARGS += --build-arg=SYSTEMD=true
endif
BUILD_OPTS := ${BUILD_APT_MIRROR} ${DOCKER_BUILD_ARGS} ${DOCKER_BUILD_OPTS} -f "$(DOCKERFILE)"
BUILD_CMD := $(BUILDX) build
# This is used for the legacy "build" target and anything still depending on it
BUILD_CROSS =
ifdef DOCKER_CROSS
BUILD_CROSS = --build-arg CROSS=$(DOCKER_CROSS)
endif
ifdef DOCKER_CROSSPLATFORMS
BUILD_CROSS = --build-arg CROSS=true
endif
VERSION_AUTOGEN_ARGS = --build-arg VERSION --build-arg DOCKER_GITCOMMIT --build-arg PRODUCT --build-arg PLATFORM --build-arg DEFAULT_PRODUCT_LICENSE --build-arg PACKAGER_NAME
default: binary
all: build ## validate all checks, build linux binaries, run all tests,\ncross build non-linux binaries, and generate archives
all: build ## validate all checks, build linux binaries, run all tests\ncross build non-linux binaries and generate archives
$(DOCKER_RUN_DOCKER) bash -c 'hack/validate/default && hack/make.sh'
binary: bundles ## build statically linked linux binaries
$(BUILD_CMD) $(BUILD_OPTS) --output=bundles/ --target=$@ $(VERSION_AUTOGEN_ARGS) .
binary: build ## build the linux binaries
$(DOCKER_RUN_DOCKER) hack/make.sh binary
dynbinary: bundles ## build dynamically linked linux binaries
$(BUILD_CMD) $(BUILD_OPTS) --output=bundles/ --target=$@ $(VERSION_AUTOGEN_ARGS) .
dynbinary: build ## build the linux dynbinaries
$(DOCKER_RUN_DOCKER) hack/make.sh dynbinary
cross: BUILD_OPTS += --build-arg CROSS=true --build-arg DOCKER_CROSSPLATFORMS
cross: bundles ## cross build the binaries for darwin, freebsd and\nwindows
$(BUILD_CMD) $(BUILD_OPTS) --output=bundles/ --target=$@ $(VERSION_AUTOGEN_ARGS) .
cross: DOCKER_CROSS := true
cross: build ## cross build the binaries for darwin, freebsd and\nwindows
$(DOCKER_RUN_DOCKER) hack/make.sh dynbinary binary cross
ifdef DOCKER_CROSSPLATFORMS
build: DOCKER_CROSS := true
endif
ifeq ($(BIND_DIR), .)
build: DOCKER_BUILD_OPTS += --target=dev
endif
build: DOCKER_BUILD_ARGS += --build-arg=CROSS=$(DOCKER_CROSS)
build: DOCKER_BUILDKIT ?= 1
build: bundles
$(warning The docker client CLI has moved to github.com/docker/cli. For a dev-test cycle involving the CLI, run:${\n} DOCKER_CLI_PATH=/host/path/to/cli/binary make shell ${\n} then change the cli and compile into a binary at the same location.${\n})
DOCKER_BUILDKIT="${DOCKER_BUILDKIT}" docker build ${BUILD_APT_MIRROR} ${DOCKER_BUILD_ARGS} ${DOCKER_BUILD_OPTS} -t "$(DOCKER_IMAGE)" -f "$(DOCKERFILE)" .
bundles:
mkdir bundles
@@ -176,8 +159,8 @@ bundles:
clean: clean-cache
.PHONY: clean-cache
clean-cache: ## remove the docker volumes that are used for caching in the dev-container
docker volume rm -f docker-dev-cache docker-mod-cache
clean-cache:
docker volume rm -f docker-dev-cache
help: ## this help
@awk 'BEGIN {FS = ":.*?## "} /^[a-zA-Z0-9_-]+:.*?## / {gsub("\\\\n",sprintf("\n%22c",""), $$2);printf "\033[36m%-20s\033[0m %s\n", $$1, $$2}' $(MAKEFILE_LIST)
@@ -187,17 +170,8 @@ install: ## install the linux binaries
run: build ## run the docker daemon in a container
$(DOCKER_RUN_DOCKER) sh -c "KEEPBUNDLE=1 hack/make.sh install-binary run"
.PHONY: build
ifeq ($(BIND_DIR), .)
build: shell_target := --target=dev
else
build: shell_target := --target=final
endif
build: bundles
$(BUILD_CMD) $(BUILD_OPTS) $(shell_target) --load $(BUILD_CROSS) -t "$(DOCKER_IMAGE)" .
shell: build ## start a shell inside the build env
shell: build ## start a shell inside the build env
$(DOCKER_RUN_DOCKER) bash
test: build test-unit ## run the unit, integration and docker-py tests
@@ -208,13 +182,8 @@ test-docker-py: build ## run the docker-py tests
test-integration-cli: test-integration ## (DEPRECATED) use test-integration
ifneq ($(and $(TEST_SKIP_INTEGRATION),$(TEST_SKIP_INTEGRATION_CLI)),)
test-integration:
@echo Both integration suites skipped per environment variables
else
test-integration: build ## run the integration tests
$(DOCKER_RUN_DOCKER) hack/make.sh dynbinary test-integration
endif
test-integration-flaky: build ## run the stress test for all new integration tests
$(DOCKER_RUN_DOCKER) hack/make.sh dynbinary test-integration-flaky
@@ -225,9 +194,6 @@ test-unit: build ## run the unit tests
validate: build ## validate DCO, Seccomp profile generation, gofmt,\n./pkg/ isolation, golint, tests, tomls, go vet and vendor
$(DOCKER_RUN_DOCKER) hack/validate/all
validate-%: build ## validate specific check
$(DOCKER_RUN_DOCKER) hack/validate/$*
win: build ## cross build the binary for windows
$(DOCKER_RUN_DOCKER) DOCKER_CROSSPLATFORMS=windows/amd64 hack/make.sh cross
@@ -245,4 +211,19 @@ swagger-docs: ## preview the API documentation
@docker run --rm -v $(PWD)/api/swagger.yaml:/usr/share/nginx/html/swagger.yaml \
-e 'REDOC_OPTIONS=hide-hostname="true" lazy-rendering' \
-p $(SWAGGER_DOCS_PORT):80 \
bfirsh/redoc:1.14.0
bfirsh/redoc:1.6.2
build-integration-cli-on-swarm: build ## build images and binary for running integration-cli on Swarm in parallel
@echo "Building hack/integration-cli-on-swarm (if build fails, please refer to hack/integration-cli-on-swarm/README.md)"
go build -buildmode=pie -o ./hack/integration-cli-on-swarm/integration-cli-on-swarm ./hack/integration-cli-on-swarm/host
@echo "Building $(INTEGRATION_CLI_MASTER_IMAGE)"
docker build -t $(INTEGRATION_CLI_MASTER_IMAGE) hack/integration-cli-on-swarm/agent
@echo "Building $(INTEGRATION_CLI_WORKER_IMAGE) from $(DOCKER_IMAGE)"
$(eval tmp := integration-cli-worker-tmp)
# We mount pkgcache, but not bundle (bundle needs to be baked into the image)
# To avoid baking DOCKER_GRAPHDRIVER and so on into the image, we cannot use $(DOCKER_ENVS) here
docker run -t -d --name $(tmp) -e DOCKER_GITCOMMIT -e BUILDFLAGS --privileged $(DOCKER_IMAGE) top
docker exec $(tmp) hack/make.sh build-integration-test-binary dynbinary
docker exec $(tmp) go build -buildmode=pie -o /worker github.com/docker/docker/hack/integration-cli-on-swarm/agent/worker
docker commit -c 'ENTRYPOINT ["/worker"]' $(tmp) $(INTEGRATION_CLI_WORKER_IMAGE)
docker rm -f $(tmp)

NOTICE

@@ -3,7 +3,7 @@ Copyright 2012-2017 Docker, Inc.
This product includes software developed at Docker, Inc. (https://www.docker.com).
This product contains software (https://github.com/creack/pty) developed
This product contains software (https://github.com/kr/pty) developed
by Keith Rarick, licensed under the MIT License.
The following is courtesy of our legal counsel:


@@ -1,9 +0,0 @@
# Reporting security issues
The Moby maintainers take security seriously. If you discover a security issue, please bring it to their attention right away!
### Reporting a Vulnerability
Please **DO NOT** file a public issue, instead send your report privately to security@docker.com.
Security reports are greatly appreciated and we will publicly thank you for it, although we keep your name confidential if you request it. We also like to send gifts—if you're into schwag, make sure to let us know. We currently do not offer a paid security bounty program, but are not ruling it out in the future.


@@ -28,7 +28,7 @@ Most code changes will fall into one of the following categories.
### Writing tests for new features
New code should be covered by unit tests. If the code is difficult to test with
unit tests, then that is a good sign that it should be refactored to make it
a unit tests then that is a good sign that it should be refactored to make it
easier to reuse and maintain. Consider accepting unexported interfaces instead
of structs so that fakes can be provided for dependencies.
@@ -44,23 +44,16 @@ case. Error cases should be handled by unit tests.
Bugs fixes should include a unit test case which exercises the bug.
A bug fix may also include new assertions in existing integration tests for the
A bug fix may also include new assertions in an existing integration tests for the
API endpoint.
### Writing new integration tests
Note the `integration-cli` tests are deprecated; new tests will be rejected by
the CI.
Instead, implement new tests under `integration/`.
### Integration tests environment considerations
When adding new tests or modifying existing tests under `integration/`, testing
When adding new tests or modifying existing test under `integration/`, testing
environment should be properly considered. `skip.If` from
[gotest.tools/skip](https://godoc.org/gotest.tools/skip) can be used to make the
test run conditionally. Full testing environment conditions can be found at
[environment.go](https://github.com/moby/moby/blob/6b6eeed03b963a27085ea670f40cd5ff8a61f32e/testutil/environment/environment.go)
[environment.go](https://github.com/moby/moby/blob/cb37987ee11655ed6bbef663d245e55922354c68/internal/test/environment/environment.go)
Here is a quick example. If the test needs to interact with a docker daemon on
the same host, the following condition should be checked within the test code
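The example itself is elided from this hunk; a minimal sketch of the pattern, assuming the suite's shared `testEnv` helper (from `testutil/environment`) and the `gotest.tools/skip` API:

```
package integration

import (
	"testing"

	"github.com/docker/docker/testutil/environment"
	"gotest.tools/skip"
)

// testEnv is normally created once in TestMain by the integration suite;
// it is declared here only to keep the sketch self-contained.
var testEnv *environment.Execution

func TestLocalDaemonOnly(t *testing.T) {
	// Skip when the daemon under test does not run on the same host.
	skip.If(t, testEnv.IsRemoteDaemon, "cannot run against a remote daemon")
	// ... test body ...
}
```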
@@ -74,8 +67,6 @@ If a remote daemon is detected, the test will be skipped.
## Running tests
### Unit Tests
To run the unit test suite:
```
@@ -91,36 +82,8 @@ The following environment variables may be used to run a subset of tests:
* `TESTFLAGS` - flags passed to `go test`, to run tests which match a pattern
use `TESTFLAGS="-test.run TestNameOrPrefix"`
### Integration Tests
To run the integration test suite:
```
make test-integration
```
This make target runs both the "integration" suite and the "integration-cli"
suite.
You can specify which integration test dirs to build and run by specifying
the list of dirs in the TEST_INTEGRATION_DIR environment variable.
You can also explicitly skip either suite by setting (any value) in
TEST_SKIP_INTEGRATION and/or TEST_SKIP_INTEGRATION_CLI environment variables.
Flags specific to each suite can be set in the TESTFLAGS_INTEGRATION and
TESTFLAGS_INTEGRATION_CLI environment variables.
If all you want is to specify a test filter to run, you can set the
`TEST_FILTER` environment variable. This ends up getting passed directly to `go
test -run` (or `go test -check.f`, depending on the test suite). It will also
automatically set the other above mentioned environment variables accordingly.
### Go Version
You can change the version of Go used to build the artifacts under test
by setting the `GO_VERSION` variable, for example:
```
make GO_VERSION=1.12.8 test
```


@@ -3,7 +3,7 @@ package api // import "github.com/docker/docker/api"
// Common constants for daemon and client.
const (
// DefaultVersion of Current REST API
DefaultVersion = "1.42"
DefaultVersion = "1.40"
// NoBaseImageSpecifier is the symbol used by the FROM
// command to specify that no base image is to be used.


@@ -1,4 +1,3 @@
//go:build !windows
// +build !windows
package api // import "github.com/docker/docker/api"


@@ -3,18 +3,17 @@ package build // import "github.com/docker/docker/api/server/backend/build"
import (
"context"
"fmt"
"strconv"
"github.com/docker/distribution/reference"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/backend"
"github.com/docker/docker/api/types/events"
"github.com/docker/docker/builder"
buildkit "github.com/docker/docker/builder/builder-next"
daemonevents "github.com/docker/docker/daemon/events"
"github.com/docker/docker/builder/fscache"
"github.com/docker/docker/image"
"github.com/docker/docker/pkg/stringid"
"github.com/pkg/errors"
"golang.org/x/sync/errgroup"
"google.golang.org/grpc"
)
@@ -32,14 +31,14 @@ type Builder interface {
// Backend provides build functionality to the API router
type Backend struct {
builder Builder
fsCache *fscache.FSCache
imageComponent ImageComponent
buildkit *buildkit.Builder
eventsService *daemonevents.Events
}
// NewBackend creates a new build backend from components
func NewBackend(components ImageComponent, builder Builder, buildkit *buildkit.Builder, es *daemonevents.Events) (*Backend, error) {
return &Backend{imageComponent: components, builder: builder, buildkit: buildkit, eventsService: es}, nil
func NewBackend(components ImageComponent, builder Builder, fsCache *fscache.FSCache, buildkit *buildkit.Builder) (*Backend, error) {
return &Backend{imageComponent: components, builder: builder, fsCache: fsCache, buildkit: buildkit}, nil
}
// RegisterGRPC registers buildkit controller to the grpc server.
@@ -100,16 +99,34 @@ func (b *Backend) Build(ctx context.Context, config backend.BuildConfig) (string
// PruneCache removes all cached build sources
func (b *Backend) PruneCache(ctx context.Context, opts types.BuildCachePruneOptions) (*types.BuildCachePruneReport, error) {
buildCacheSize, cacheIDs, err := b.buildkit.Prune(ctx, opts)
if err != nil {
return nil, errors.Wrap(err, "failed to prune build cache")
}
b.eventsService.Log("prune", events.BuilderEventType, events.Actor{
Attributes: map[string]string{
"reclaimed": strconv.FormatInt(buildCacheSize, 10),
},
eg, ctx := errgroup.WithContext(ctx)
var fsCacheSize uint64
eg.Go(func() error {
var err error
fsCacheSize, err = b.fsCache.Prune(ctx)
if err != nil {
return errors.Wrap(err, "failed to prune fscache")
}
return nil
})
return &types.BuildCachePruneReport{SpaceReclaimed: uint64(buildCacheSize), CachesDeleted: cacheIDs}, nil
var buildCacheSize int64
var cacheIDs []string
eg.Go(func() error {
var err error
buildCacheSize, cacheIDs, err = b.buildkit.Prune(ctx, opts)
if err != nil {
return errors.Wrap(err, "failed to prune build cache")
}
return nil
})
if err := eg.Wait(); err != nil {
return nil, err
}
return &types.BuildCachePruneReport{SpaceReclaimed: fsCacheSize + uint64(buildCacheSize), CachesDeleted: cacheIDs}, nil
}
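One side of this hunk fans the two prune operations out with `errgroup`, so the fscache and buildkit prunes run concurrently and the first error cancels the rest. The pattern in isolation, as a self-contained sketch with placeholder prune calls:

```
package main

import (
	"context"
	"fmt"

	"golang.org/x/sync/errgroup"
)

func main() {
	eg, ctx := errgroup.WithContext(context.Background())

	var fsReclaimed, buildReclaimed uint64
	eg.Go(func() error {
		fsReclaimed = pruneA(ctx) // stand-in for fsCache.Prune
		return nil
	})
	eg.Go(func() error {
		buildReclaimed = pruneB(ctx) // stand-in for buildkit.Prune
		return nil
	})

	// Wait blocks until both goroutines return and yields the first error.
	if err := eg.Wait(); err != nil {
		fmt.Println("prune failed:", err)
		return
	}
	fmt.Println("space reclaimed:", fsReclaimed+buildReclaimed)
}

// pruneA and pruneB are placeholders for the real prune calls.
func pruneA(context.Context) uint64 { return 1 }
func pruneB(context.Context) uint64 { return 2 }
```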
// Cancel cancels the build by ID


@@ -1,34 +0,0 @@
package server
import (
"net/http"
"github.com/docker/docker/api/server/httpstatus"
"github.com/docker/docker/api/server/httputils"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/versions"
"github.com/gorilla/mux"
"google.golang.org/grpc/status"
)
// makeErrorHandler makes an HTTP handler that decodes a Docker error and
// returns it in the response.
func makeErrorHandler(err error) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
statusCode := httpstatus.FromError(err)
vars := mux.Vars(r)
if apiVersionSupportsJSONErrors(vars["version"]) {
response := &types.ErrorResponse{
Message: err.Error(),
}
_ = httputils.WriteJSON(w, statusCode, response)
} else {
http.Error(w, status.Convert(err).Message(), statusCode)
}
}
}
func apiVersionSupportsJSONErrors(version string) bool {
const firstAPIVersionWithJSONErrors = "1.23"
return version == "" || versions.GreaterThan(version, firstAPIVersionWithJSONErrors)
}


@@ -1,150 +0,0 @@
package httpstatus // import "github.com/docker/docker/api/server/httpstatus"
import (
"fmt"
"net/http"
containerderrors "github.com/containerd/containerd/errdefs"
"github.com/docker/distribution/registry/api/errcode"
"github.com/docker/docker/errdefs"
"github.com/sirupsen/logrus"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"
)
type causer interface {
Cause() error
}
// FromError retrieves status code from error message.
func FromError(err error) int {
if err == nil {
logrus.WithFields(logrus.Fields{"error": err}).Error("unexpected HTTP error handling")
return http.StatusInternalServerError
}
var statusCode int
// Stop right there
// Are you sure you should be adding a new error class here? Does one of the existing ones work?
// Note that the below functions are already checking the error causal chain for matches.
switch {
case errdefs.IsNotFound(err):
statusCode = http.StatusNotFound
case errdefs.IsInvalidParameter(err):
statusCode = http.StatusBadRequest
case errdefs.IsConflict(err):
statusCode = http.StatusConflict
case errdefs.IsUnauthorized(err):
statusCode = http.StatusUnauthorized
case errdefs.IsUnavailable(err):
statusCode = http.StatusServiceUnavailable
case errdefs.IsForbidden(err):
statusCode = http.StatusForbidden
case errdefs.IsNotModified(err):
statusCode = http.StatusNotModified
case errdefs.IsNotImplemented(err):
statusCode = http.StatusNotImplemented
case errdefs.IsSystem(err) || errdefs.IsUnknown(err) || errdefs.IsDataLoss(err) || errdefs.IsDeadline(err) || errdefs.IsCancelled(err):
statusCode = http.StatusInternalServerError
default:
statusCode = statusCodeFromGRPCError(err)
if statusCode != http.StatusInternalServerError {
return statusCode
}
statusCode = statusCodeFromContainerdError(err)
if statusCode != http.StatusInternalServerError {
return statusCode
}
statusCode = statusCodeFromDistributionError(err)
if statusCode != http.StatusInternalServerError {
return statusCode
}
if e, ok := err.(causer); ok {
return FromError(e.Cause())
}
logrus.WithFields(logrus.Fields{
"module": "api",
"error_type": fmt.Sprintf("%T", err),
}).Debugf("FIXME: Got an API for which error does not match any expected type!!!: %+v", err)
}
if statusCode == 0 {
statusCode = http.StatusInternalServerError
}
return statusCode
}
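`FromError` only classifies; the classes themselves are attached by the `errdefs` wrap helpers. An illustrative round trip, assuming the wrap/check helpers referenced in the switch above:

```
package main

import (
	"errors"
	"fmt"

	"github.com/docker/docker/errdefs"
)

func main() {
	// Wrap a plain error with a conflict classification...
	err := errdefs.Conflict(errors.New("name already in use"))
	// ...and the check FromError switches on reports true, mapping it to 409.
	fmt.Println(errdefs.IsConflict(err))
}
```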
// statusCodeFromGRPCError returns status code according to gRPC error
func statusCodeFromGRPCError(err error) int {
switch status.Code(err) {
case codes.InvalidArgument: // code 3
return http.StatusBadRequest
case codes.NotFound: // code 5
return http.StatusNotFound
case codes.AlreadyExists: // code 6
return http.StatusConflict
case codes.PermissionDenied: // code 7
return http.StatusForbidden
case codes.FailedPrecondition: // code 9
return http.StatusBadRequest
case codes.Unauthenticated: // code 16
return http.StatusUnauthorized
case codes.OutOfRange: // code 11
return http.StatusBadRequest
case codes.Unimplemented: // code 12
return http.StatusNotImplemented
case codes.Unavailable: // code 14
return http.StatusServiceUnavailable
default:
// codes.Canceled(1)
// codes.Unknown(2)
// codes.DeadlineExceeded(4)
// codes.ResourceExhausted(8)
// codes.Aborted(10)
// codes.Internal(13)
// codes.DataLoss(15)
return http.StatusInternalServerError
}
}
// statusCodeFromDistributionError returns status code according to registry errcode
// code is loosely based on errcode.ServeJSON() in docker/distribution
func statusCodeFromDistributionError(err error) int {
switch errs := err.(type) {
case errcode.Errors:
if len(errs) < 1 {
return http.StatusInternalServerError
}
if _, ok := errs[0].(errcode.ErrorCoder); ok {
return statusCodeFromDistributionError(errs[0])
}
case errcode.ErrorCoder:
return errs.ErrorCode().Descriptor().HTTPStatusCode
}
return http.StatusInternalServerError
}
// statusCodeFromContainerdError returns status code for containerd errors when
// consumed directly (not through gRPC)
func statusCodeFromContainerdError(err error) int {
switch {
case containerderrors.IsInvalidArgument(err):
return http.StatusBadRequest
case containerderrors.IsNotFound(err):
return http.StatusNotFound
case containerderrors.IsAlreadyExists(err):
return http.StatusConflict
case containerderrors.IsFailedPrecondition(err):
return http.StatusPreconditionFailed
case containerderrors.IsUnavailable(err):
return http.StatusServiceUnavailable
case containerderrors.IsNotImplemented(err):
return http.StatusNotImplemented
default:
return http.StatusInternalServerError
}
}


@@ -0,0 +1,9 @@
package httputils // import "github.com/docker/docker/api/server/httputils"
import "github.com/docker/docker/errdefs"
// GetHTTPErrorStatusCode retrieves status code from error message.
//
// Deprecated: use errdefs.GetHTTPErrorStatusCode
func GetHTTPErrorStatusCode(err error) int {
return errdefs.GetHTTPErrorStatusCode(err)
}
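Migrating off the deprecated wrapper is mechanical; a hypothetical sketch (the error value is illustrative):

```
package main

import (
	"errors"
	"fmt"

	"github.com/docker/docker/errdefs"
)

func main() {
	err := errdefs.NotFound(errors.New("no such container"))
	// Was: httputils.GetHTTPErrorStatusCode(err); call errdefs directly instead.
	fmt.Println(errdefs.GetHTTPErrorStatusCode(err)) // 404
}
```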


@@ -23,7 +23,7 @@ func TestBoolValue(t *testing.T) {
for c, e := range cases {
v := url.Values{}
v.Set("test", c)
r, _ := http.NewRequest(http.MethodPost, "", nil)
r, _ := http.NewRequest("POST", "", nil)
r.Form = v
a := BoolValue(r, "test")
@@ -34,14 +34,14 @@ func TestBoolValue(t *testing.T) {
}
func TestBoolValueOrDefault(t *testing.T) {
r, _ := http.NewRequest(http.MethodGet, "", nil)
r, _ := http.NewRequest("GET", "", nil)
if !BoolValueOrDefault(r, "queryparam", true) {
t.Fatal("Expected to get true default value, got false")
}
v := url.Values{}
v.Set("param", "")
r, _ = http.NewRequest(http.MethodGet, "", nil)
r, _ = http.NewRequest("GET", "", nil)
r.Form = v
if BoolValueOrDefault(r, "param", true) {
t.Fatal("Expected not to get true")
@@ -59,7 +59,7 @@ func TestInt64ValueOrZero(t *testing.T) {
for c, e := range cases {
v := url.Values{}
v.Set("test", c)
r, _ := http.NewRequest(http.MethodPost, "", nil)
r, _ := http.NewRequest("POST", "", nil)
r.Form = v
a := Int64ValueOrZero(r, "test")
@@ -79,7 +79,7 @@ func TestInt64ValueOrDefault(t *testing.T) {
for c, e := range cases {
v := url.Values{}
v.Set("test", c)
r, _ := http.NewRequest(http.MethodPost, "", nil)
r, _ := http.NewRequest("POST", "", nil)
r.Form = v
a, err := Int64ValueOrDefault(r, "test", -1)
@@ -95,7 +95,7 @@ func TestInt64ValueOrDefault(t *testing.T) {
func TestInt64ValueOrDefaultWithError(t *testing.T) {
v := url.Values{}
v.Set("test", "invalid")
r, _ := http.NewRequest(http.MethodPost, "", nil)
r, _ := http.NewRequest("POST", "", nil)
r.Form = v
_, err := Int64ValueOrDefault(r, "test", -1)


@@ -2,14 +2,18 @@ package httputils // import "github.com/docker/docker/api/server/httputils"
import (
"context"
"encoding/json"
"io"
"mime"
"net/http"
"strings"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/versions"
"github.com/docker/docker/errdefs"
"github.com/gorilla/mux"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"google.golang.org/grpc/status"
)
// APIVersionKey is the client's requested API version.
@@ -27,7 +31,7 @@ func HijackConnection(w http.ResponseWriter) (io.ReadCloser, io.Writer, error) {
return nil, nil, err
}
// Flush the options to make sure the client sets the raw mode
_, _ = conn.Write([]byte{})
conn.Write([]byte{})
return conn, conn, nil
}
@@ -37,9 +41,9 @@ func CloseStreams(streams ...interface{}) {
if tcpc, ok := stream.(interface {
CloseWrite() error
}); ok {
_ = tcpc.CloseWrite()
tcpc.CloseWrite()
} else if closer, ok := stream.(io.Closer); ok {
_ = closer.Close()
closer.Close()
}
}
}
@@ -49,50 +53,17 @@ func CheckForJSON(r *http.Request) error {
ct := r.Header.Get("Content-Type")
// No Content-Type header is ok as long as there's no Body
if ct == "" && (r.Body == nil || r.ContentLength == 0) {
return nil
if ct == "" {
if r.Body == nil || r.ContentLength == 0 {
return nil
}
}
// Otherwise it better be json
return matchesContentType(ct, "application/json")
}
// ReadJSON validates the request to have the correct content-type, and decodes
// the request's Body into out.
func ReadJSON(r *http.Request, out interface{}) error {
err := CheckForJSON(r)
if err != nil {
return err
}
if r.Body == nil || r.ContentLength == 0 {
// an empty body is not invalid, so don't return an error; see
// https://lists.w3.org/Archives/Public/ietf-http-wg/2010JulSep/0272.html
if matchesContentType(ct, "application/json") {
return nil
}
dec := json.NewDecoder(r.Body)
err = dec.Decode(out)
defer r.Body.Close()
if err != nil {
if err == io.EOF {
return errdefs.InvalidParameter(errors.New("invalid JSON: got EOF while reading request body"))
}
return errdefs.InvalidParameter(errors.Wrap(err, "invalid JSON"))
}
if dec.More() {
return errdefs.InvalidParameter(errors.New("unexpected content after JSON"))
}
return nil
}
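The `dec.More()` guard is what rejects trailing bytes after the first JSON value; the check in isolation, as a self-contained sketch:

```
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

func main() {
	dec := json.NewDecoder(strings.NewReader(`{"a":1} trailing`))
	var v map[string]int
	if err := dec.Decode(&v); err != nil {
		fmt.Println("invalid JSON:", err)
		return
	}
	// More reports whether data remains after the first JSON value.
	if dec.More() {
		fmt.Println("unexpected content after JSON")
		return
	}
	fmt.Println("ok:", v)
}
```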
// WriteJSON writes the value v to the http response stream as json with standard json encoding.
func WriteJSON(w http.ResponseWriter, code int, v interface{}) error {
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(code)
enc := json.NewEncoder(w)
enc.SetEscapeHTML(false)
return enc.Encode(v)
return errdefs.InvalidParameter(errors.Errorf("Content-Type specified (%s) must be 'application/json'", ct))
}
// ParseForm ensures the request form is parsed even with invalid content types.
@@ -121,14 +92,33 @@ func VersionFromContext(ctx context.Context) string {
return ""
}
// MakeErrorHandler makes an HTTP handler that decodes a Docker error and
// returns it in the response.
func MakeErrorHandler(err error) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
statusCode := errdefs.GetHTTPErrorStatusCode(err)
vars := mux.Vars(r)
if apiVersionSupportsJSONErrors(vars["version"]) {
response := &types.ErrorResponse{
Message: err.Error(),
}
WriteJSON(w, statusCode, response)
} else {
http.Error(w, status.Convert(err).Message(), statusCode)
}
}
}
func apiVersionSupportsJSONErrors(version string) bool {
const firstAPIVersionWithJSONErrors = "1.23"
return version == "" || versions.GreaterThan(version, firstAPIVersionWithJSONErrors)
}
// matchesContentType validates the content type against the expected one
func matchesContentType(contentType, expectedType string) error {
func matchesContentType(contentType, expectedType string) bool {
mimetype, _, err := mime.ParseMediaType(contentType)
if err != nil {
return errdefs.InvalidParameter(errors.Wrapf(err, "malformed Content-Type header (%s)", contentType))
logrus.Errorf("Error parsing media type: %s error: %v", contentType, err)
}
if mimetype != expectedType {
return errdefs.InvalidParameter(errors.Errorf("unsupported Content-Type header (%s): must be '%s'", contentType, expectedType))
}
return nil
return err == nil && mimetype == expectedType
}
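
Both variants are consumed the same way by route handlers; a hedged sketch of the typical call sequence (the handler body is illustrative and assumes it sits in a package with access to these helpers):

    // postExample validates the content type, decodes the body, and replies
    // in JSON. CheckForJSON and WriteJSON are the helpers diffed above;
    // errdefs.InvalidParameter maps the error to an HTTP 400.
    func postExample(w http.ResponseWriter, r *http.Request) error {
        if err := CheckForJSON(r); err != nil {
            return err
        }
        var in struct{ Name string }
        if err := json.NewDecoder(r.Body).Decode(&in); err != nil {
            return errdefs.InvalidParameter(err)
        }
        return WriteJSON(w, http.StatusOK, in)
    }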

View File

@@ -1,130 +1,18 @@
package httputils // import "github.com/docker/docker/api/server/httputils"
import (
"net/http"
"strings"
"testing"
)
import "testing"
// matchesContentType
func TestJsonContentType(t *testing.T) {
err := matchesContentType("application/json", "application/json")
if err != nil {
t.Error(err)
if !matchesContentType("application/json", "application/json") {
t.Fail()
}
err = matchesContentType("application/json; charset=utf-8", "application/json")
if err != nil {
t.Error(err)
if !matchesContentType("application/json; charset=utf-8", "application/json") {
t.Fail()
}
expected := "unsupported Content-Type header (dockerapplication/json): must be 'application/json'"
err = matchesContentType("dockerapplication/json", "application/json")
if err == nil || err.Error() != expected {
t.Errorf(`expected "%s", got "%v"`, expected, err)
}
expected = "malformed Content-Type header (foo;;;bar): mime: invalid media parameter"
err = matchesContentType("foo;;;bar", "application/json")
if err == nil || err.Error() != expected {
t.Errorf(`expected "%s", got "%v"`, expected, err)
if matchesContentType("dockerapplication/json", "application/json") {
t.Fail()
}
}
func TestReadJSON(t *testing.T) {
t.Run("nil body", func(t *testing.T) {
req, err := http.NewRequest("POST", "https://example.com/some/path", nil)
if err != nil {
t.Error(err)
}
foo := struct{}{}
err = ReadJSON(req, &foo)
if err != nil {
t.Error(err)
}
})
t.Run("empty body", func(t *testing.T) {
req, err := http.NewRequest("POST", "https://example.com/some/path", strings.NewReader(""))
if err != nil {
t.Error(err)
}
foo := struct{ SomeField string }{}
err = ReadJSON(req, &foo)
if err != nil {
t.Error(err)
}
if foo.SomeField != "" {
t.Errorf("expected: '', got: %s", foo.SomeField)
}
})
t.Run("with valid request", func(t *testing.T) {
req, err := http.NewRequest("POST", "https://example.com/some/path", strings.NewReader(`{"SomeField":"some value"}`))
if err != nil {
t.Error(err)
}
req.Header.Set("Content-Type", "application/json")
foo := struct{ SomeField string }{}
err = ReadJSON(req, &foo)
if err != nil {
t.Error(err)
}
if foo.SomeField != "some value" {
t.Errorf("expected: 'some value', got: %s", foo.SomeField)
}
})
t.Run("with whitespace", func(t *testing.T) {
req, err := http.NewRequest("POST", "https://example.com/some/path", strings.NewReader(`
{"SomeField":"some value"}
`))
if err != nil {
t.Error(err)
}
req.Header.Set("Content-Type", "application/json")
foo := struct{ SomeField string }{}
err = ReadJSON(req, &foo)
if err != nil {
t.Error(err)
}
if foo.SomeField != "some value" {
t.Errorf("expected: 'some value', got: %s", foo.SomeField)
}
})
t.Run("with extra content", func(t *testing.T) {
req, err := http.NewRequest("POST", "https://example.com/some/path", strings.NewReader(`{"SomeField":"some value"} and more content`))
if err != nil {
t.Error(err)
}
req.Header.Set("Content-Type", "application/json")
foo := struct{ SomeField string }{}
err = ReadJSON(req, &foo)
if err == nil {
t.Error("expected an error, got none")
}
expected := "unexpected content after JSON"
if err.Error() != expected {
t.Errorf("expected: '%s', got: %s", expected, err.Error())
}
})
t.Run("invalid JSON", func(t *testing.T) {
req, err := http.NewRequest("POST", "https://example.com/some/path", strings.NewReader(`{invalid json`))
if err != nil {
t.Error(err)
}
req.Header.Set("Content-Type", "application/json")
foo := struct{ SomeField string }{}
err = ReadJSON(req, &foo)
if err == nil {
t.Error("expected an error, got none")
}
expected := "invalid JSON: invalid character 'i' looking for beginning of object key string"
if err.Error() != expected {
t.Errorf("expected: '%s', got: %s", expected, err.Error())
}
})
}

View File

@@ -0,0 +1,15 @@
package httputils // import "github.com/docker/docker/api/server/httputils"
import (
"encoding/json"
"net/http"
)
// WriteJSON writes the value v to the http response stream as json with standard json encoding.
func WriteJSON(w http.ResponseWriter, code int, v interface{}) error {
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(code)
enc := json.NewEncoder(w)
enc.SetEscapeHTML(false)
return enc.Encode(v)
}
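
A small usage sketch (as a Go example test in the same package) showing why the encoder disables HTML escaping: with default settings, "<" and ">" would be emitted as \u003c and \u003e:

    func ExampleWriteJSON() {
        rec := httptest.NewRecorder() // net/http/httptest
        _ = WriteJSON(rec, http.StatusOK, map[string]string{"Warning": "<none>"})
        fmt.Print(rec.Body.String())
        // Output: {"Warning":"<none>"}
    }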

View File

@@ -51,10 +51,10 @@ func WriteLogStream(_ context.Context, w io.Writer, msgs <-chan *backend.LogMess
logLine = append([]byte(msg.Timestamp.Format(jsonmessage.RFC3339NanoFixed)+" "), logLine...)
}
if msg.Source == "stdout" && config.ShowStdout {
_, _ = outStream.Write(logLine)
outStream.Write(logLine)
}
if msg.Source == "stderr" && config.ShowStderr {
_, _ = errStream.Write(logLine)
errStream.Write(logLine)
}
}
}
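
For context, outStream and errStream above are typically the raw response writer for tty containers, or stdcopy-multiplexed writers otherwise; a hedged sketch of that setup:

    outStream := io.Writer(w)
    errStream := outStream
    if !tty {
        // multiplex both streams over one connection; the client demuxes
        // with pkg/stdcopy.StdCopy
        errStream = stdcopy.NewStdWriter(w, stdcopy.Stderr)
        outStream = stdcopy.NewStdWriter(w, stdcopy.Stdout)
    }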

View File

@@ -16,7 +16,7 @@ func (s *Server) handlerWithGlobalMiddlewares(handler httputils.APIFunc) httputi
next = m.WrapHandler(next)
}
if logrus.GetLevel() == logrus.DebugLevel {
if s.cfg.Logging && logrus.GetLevel() == logrus.DebugLevel {
next = middleware.DebugRequestMiddleware(next)
}

View File

@@ -18,7 +18,7 @@ func DebugRequestMiddleware(handler func(ctx context.Context, w http.ResponseWri
return func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
logrus.Debugf("Calling %s %s", r.Method, r.RequestURI)
if r.Method != http.MethodPost {
if r.Method != "POST" {
return handler(ctx, w, r, vars)
}
if err := httputils.CheckForJSON(r); err != nil {

View File

@@ -3,8 +3,8 @@ package middleware // import "github.com/docker/docker/api/server/middleware"
import (
"testing"
"gotest.tools/v3/assert"
is "gotest.tools/v3/assert/cmp"
"gotest.tools/assert"
is "gotest.tools/assert/cmp"
)
func TestMaskSecretKeys(t *testing.T) {

View File

@@ -61,4 +61,5 @@ func (v VersionMiddleware) WrapHandler(handler func(ctx context.Context, w http.
ctx = context.WithValue(ctx, httputils.APIVersionKey{}, apiVersion)
return handler(ctx, w, r, vars)
}
}

View File

@@ -8,8 +8,8 @@ import (
"testing"
"github.com/docker/docker/api/server/httputils"
"gotest.tools/v3/assert"
is "gotest.tools/v3/assert/cmp"
"gotest.tools/assert"
is "gotest.tools/assert/cmp"
)
func TestVersionMiddlewareVersion(t *testing.T) {
@@ -25,7 +25,7 @@ func TestVersionMiddlewareVersion(t *testing.T) {
m := NewVersionMiddleware(defaultVersion, defaultVersion, minVersion)
h := m.WrapHandler(handler)
req, _ := http.NewRequest(http.MethodGet, "/containers/json", nil)
req, _ := http.NewRequest("GET", "/containers/json", nil)
resp := httptest.NewRecorder()
ctx := context.Background()
@@ -76,7 +76,7 @@ func TestVersionMiddlewareWithErrorsReturnsHeaders(t *testing.T) {
m := NewVersionMiddleware(defaultVersion, defaultVersion, minVersion)
h := m.WrapHandler(handler)
req, _ := http.NewRequest(http.MethodGet, "/containers/json", nil)
req, _ := http.NewRequest("GET", "/containers/json", nil)
resp := httptest.NewRecorder()
ctx := context.Background()
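
A condensed sketch of what these tests exercise, assuming the constructor and handler signature shown above (version strings are illustrative):

    m := NewVersionMiddleware("1.40", "1.40", "1.12") // default, default, min
    h := m.WrapHandler(func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
        fmt.Fprint(w, httputils.VersionFromContext(ctx))
        return nil
    })
    req, _ := http.NewRequest(http.MethodGet, "/v1.38/containers/json", nil)
    rec := httptest.NewRecorder()
    _ = h(context.Background(), rec, req, map[string]string{"version": "1.38"})
    // rec.Body now reads "1.38"; an empty vars entry falls back to the default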

View File

@@ -1,8 +1,6 @@
package build // import "github.com/docker/docker/api/server/router/build"
import (
"runtime"
"github.com/docker/docker/api/server/router"
"github.com/docker/docker/api/types"
)
@@ -39,24 +37,17 @@ func (r *buildRouter) initRoutes() {
}
}
// BuilderVersion derives the default docker builder version from the config.
//
// The default on Linux is version "2" (BuildKit), but the daemon can be
// configured to recommend version "1" (classic Builder). Windows does not
// yet support BuildKit for native Windows images, and uses "1" (classic builder)
// as a default.
//
// This value is only a recommendation as advertised by the daemon, and it is
// up to the client to choose which builder to use.
// BuilderVersion derives the default docker builder version from the config
// Note: it is valid to have BuilderVersion unset which means it is up to the
// client to choose which builder to use.
func BuilderVersion(features map[string]bool) types.BuilderVersion {
// TODO(thaJeztah) move the default to daemon/config
if runtime.GOOS == "windows" {
return types.BuilderV1
}
bv := types.BuilderBuildKit
if v, ok := features["buildkit"]; ok && !v {
bv = types.BuilderV1
var bv types.BuilderVersion
if v, ok := features["buildkit"]; ok {
if v {
bv = types.BuilderBuildKit
} else {
bv = types.BuilderV1
}
}
return bv
}
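
The practical difference between the two variants, sketched (output comments assume types.BuilderV1 == "1" and types.BuilderBuildKit == "2"):

    // Explicit override: both variants agree.
    fmt.Println(BuilderVersion(map[string]bool{"buildkit": false})) // "1"
    fmt.Println(BuilderVersion(map[string]bool{"buildkit": true}))  // "2"

    // Key absent: the variants diverge. One defaults to "2" (BuildKit) on
    // Linux and "1" on Windows; the other returns "" and leaves the choice
    // to the client.
    fmt.Println(BuilderVersion(map[string]bool{}))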

View File

@@ -38,36 +38,8 @@ func (e invalidIsolationError) Error() string {
func (e invalidIsolationError) InvalidParameter() {}
func newImageBuildOptions(ctx context.Context, r *http.Request) (*types.ImageBuildOptions, error) {
options := &types.ImageBuildOptions{
Version: types.BuilderV1, // Builder V1 is the default, but can be overridden
Dockerfile: r.FormValue("dockerfile"),
SuppressOutput: httputils.BoolValue(r, "q"),
NoCache: httputils.BoolValue(r, "nocache"),
ForceRemove: httputils.BoolValue(r, "forcerm"),
MemorySwap: httputils.Int64ValueOrZero(r, "memswap"),
Memory: httputils.Int64ValueOrZero(r, "memory"),
CPUShares: httputils.Int64ValueOrZero(r, "cpushares"),
CPUPeriod: httputils.Int64ValueOrZero(r, "cpuperiod"),
CPUQuota: httputils.Int64ValueOrZero(r, "cpuquota"),
CPUSetCPUs: r.FormValue("cpusetcpus"),
CPUSetMems: r.FormValue("cpusetmems"),
CgroupParent: r.FormValue("cgroupparent"),
NetworkMode: r.FormValue("networkmode"),
Tags: r.Form["t"],
ExtraHosts: r.Form["extrahosts"],
SecurityOpt: r.Form["securityopt"],
Squash: httputils.BoolValue(r, "squash"),
Target: r.FormValue("target"),
RemoteContext: r.FormValue("remote"),
SessionID: r.FormValue("session"),
BuildID: r.FormValue("buildid"),
}
if runtime.GOOS != "windows" && options.SecurityOpt != nil {
return nil, errdefs.InvalidParameter(errors.New("The daemon on this platform does not support setting security options on build"))
}
version := httputils.VersionFromContext(ctx)
options := &types.ImageBuildOptions{}
if httputils.BoolValue(r, "forcerm") && versions.GreaterThanOrEqualTo(version, "1.12") {
options.Remove = true
} else if r.FormValue("rm") == "" && versions.GreaterThanOrEqualTo(version, "1.12") {
@@ -78,37 +50,52 @@ func newImageBuildOptions(ctx context.Context, r *http.Request) (*types.ImageBui
if httputils.BoolValue(r, "pull") && versions.GreaterThanOrEqualTo(version, "1.16") {
options.PullParent = true
}
options.Dockerfile = r.FormValue("dockerfile")
options.SuppressOutput = httputils.BoolValue(r, "q")
options.NoCache = httputils.BoolValue(r, "nocache")
options.ForceRemove = httputils.BoolValue(r, "forcerm")
options.MemorySwap = httputils.Int64ValueOrZero(r, "memswap")
options.Memory = httputils.Int64ValueOrZero(r, "memory")
options.CPUShares = httputils.Int64ValueOrZero(r, "cpushares")
options.CPUPeriod = httputils.Int64ValueOrZero(r, "cpuperiod")
options.CPUQuota = httputils.Int64ValueOrZero(r, "cpuquota")
options.CPUSetCPUs = r.FormValue("cpusetcpus")
options.CPUSetMems = r.FormValue("cpusetmems")
options.CgroupParent = r.FormValue("cgroupparent")
options.NetworkMode = r.FormValue("networkmode")
options.Tags = r.Form["t"]
options.ExtraHosts = r.Form["extrahosts"]
options.SecurityOpt = r.Form["securityopt"]
options.Squash = httputils.BoolValue(r, "squash")
options.Target = r.FormValue("target")
options.RemoteContext = r.FormValue("remote")
if versions.GreaterThanOrEqualTo(version, "1.32") {
options.Platform = r.FormValue("platform")
}
if versions.GreaterThanOrEqualTo(version, "1.40") {
outputsJSON := r.FormValue("outputs")
if outputsJSON != "" {
var outputs []types.ImageBuildOutput
if err := json.Unmarshal([]byte(outputsJSON), &outputs); err != nil {
return nil, err
}
options.Outputs = outputs
}
}
if s := r.Form.Get("shmsize"); s != "" {
shmSize, err := strconv.ParseInt(s, 10, 64)
if r.Form.Get("shmsize") != "" {
shmSize, err := strconv.ParseInt(r.Form.Get("shmsize"), 10, 64)
if err != nil {
return nil, err
}
options.ShmSize = shmSize
}
if i := r.FormValue("isolation"); i != "" {
options.Isolation = container.Isolation(i)
if !options.Isolation.IsValid() {
return nil, invalidIsolationError(options.Isolation)
if i := container.Isolation(r.FormValue("isolation")); i != "" {
if !container.Isolation.IsValid(i) {
return nil, invalidIsolationError(i)
}
options.Isolation = i
}
if ulimitsJSON := r.FormValue("ulimits"); ulimitsJSON != "" {
var buildUlimits = []*units.Ulimit{}
if runtime.GOOS != "windows" && options.SecurityOpt != nil {
return nil, errdefs.InvalidParameter(errors.New("The daemon on this platform does not support setting security options on build"))
}
var buildUlimits = []*units.Ulimit{}
ulimitsJSON := r.FormValue("ulimits")
if ulimitsJSON != "" {
if err := json.Unmarshal([]byte(ulimitsJSON), &buildUlimits); err != nil {
return nil, errors.Wrap(errdefs.InvalidParameter(err), "error reading ulimit settings")
}
@@ -127,7 +114,8 @@ func newImageBuildOptions(ctx context.Context, r *http.Request) (*types.ImageBui
// the fact they mentioned it, we need to pass that along to the builder
// so that it can print a warning about "foo" being unused if there is
// no "ARG foo" in the Dockerfile.
if buildArgsJSON := r.FormValue("buildargs"); buildArgsJSON != "" {
buildArgsJSON := r.FormValue("buildargs")
if buildArgsJSON != "" {
var buildArgs = map[string]*string{}
if err := json.Unmarshal([]byte(buildArgsJSON), &buildArgs); err != nil {
return nil, errors.Wrap(errdefs.InvalidParameter(err), "error reading build args")
@@ -135,7 +123,8 @@ func newImageBuildOptions(ctx context.Context, r *http.Request) (*types.ImageBui
options.BuildArgs = buildArgs
}
if labelsJSON := r.FormValue("labels"); labelsJSON != "" {
labelsJSON := r.FormValue("labels")
if labelsJSON != "" {
var labels = map[string]string{}
if err := json.Unmarshal([]byte(labelsJSON), &labels); err != nil {
return nil, errors.Wrap(errdefs.InvalidParameter(err), "error reading labels")
@@ -143,41 +132,51 @@ func newImageBuildOptions(ctx context.Context, r *http.Request) (*types.ImageBui
options.Labels = labels
}
if cacheFromJSON := r.FormValue("cachefrom"); cacheFromJSON != "" {
cacheFromJSON := r.FormValue("cachefrom")
if cacheFromJSON != "" {
var cacheFrom = []string{}
if err := json.Unmarshal([]byte(cacheFromJSON), &cacheFrom); err != nil {
return nil, err
}
options.CacheFrom = cacheFrom
}
options.SessionID = r.FormValue("session")
options.BuildID = r.FormValue("buildid")
builderVersion, err := parseVersion(r.FormValue("version"))
if err != nil {
return nil, err
}
options.Version = builderVersion
if bv := r.FormValue("version"); bv != "" {
v, err := parseVersion(bv)
if err != nil {
return nil, err
if versions.GreaterThanOrEqualTo(version, "1.40") {
outputsJSON := r.FormValue("outputs")
if outputsJSON != "" {
var outputs []types.ImageBuildOutput
if err := json.Unmarshal([]byte(outputsJSON), &outputs); err != nil {
return nil, err
}
options.Outputs = outputs
}
options.Version = v
}
return options, nil
}
func parseVersion(s string) (types.BuilderVersion, error) {
switch types.BuilderVersion(s) {
case types.BuilderV1:
if s == "" || s == string(types.BuilderV1) {
return types.BuilderV1, nil
case types.BuilderBuildKit:
return types.BuilderBuildKit, nil
default:
return "", errors.Errorf("invalid version %q", s)
}
if s == string(types.BuilderBuildKit) {
return types.BuilderBuildKit, nil
}
return "", errors.Errorf("invalid version %s", s)
}
func (br *buildRouter) postPrune(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
}
fltrs, err := filters.FromJSON(r.Form.Get("filters"))
filters, err := filters.FromJSON(r.Form.Get("filters"))
if err != nil {
return errors.Wrap(err, "could not parse filters")
}
@@ -192,7 +191,7 @@ func (br *buildRouter) postPrune(ctx context.Context, w http.ResponseWriter, r *
opts := types.BuildCachePruneOptions{
All: httputils.BoolValue(r, "all"),
Filters: fltrs,
Filters: filters,
KeepStorage: int64(ks),
}
@@ -235,11 +234,12 @@ func (br *buildRouter) postBuild(ctx context.Context, w http.ResponseWriter, r *
}
output := ioutils.NewWriteFlusher(ww)
defer func() { _ = output.Close() }()
defer output.Close()
errf := func(err error) error {
if httputils.BoolValue(r, "q") && notVerboseBuffer.Len() > 0 {
_, _ = output.Write(notVerboseBuffer.Bytes())
output.Write(notVerboseBuffer.Bytes())
}
// Do not write the error in the http output if it's still empty.
@@ -290,7 +290,7 @@ func (br *buildRouter) postBuild(ctx context.Context, w http.ResponseWriter, r *
// Everything worked so if -q was provided the output from the daemon
// should be just the image ID and we'll print that to stdout.
if buildOptions.SuppressOutput {
_, _ = fmt.Fprintln(streamformatter.NewStdoutWriter(output), imgID)
fmt.Fprintln(streamformatter.NewStdoutWriter(output), imgID)
}
return nil
}
@@ -306,7 +306,7 @@ func getAuthConfigs(header http.Header) map[string]types.AuthConfig {
authConfigsJSON := base64.NewDecoder(base64.URLEncoding, strings.NewReader(authConfigsEncoded))
// Pulling an image does not error when no auth is provided, so to remain
// consistent with the existing API, decode errors are ignored
_ = json.NewDecoder(authConfigsJSON).Decode(&authConfigs)
json.NewDecoder(authConfigsJSON).Decode(&authConfigs)
return authConfigs
}
@@ -428,7 +428,7 @@ func (w *wcf) notify() {
w.mu.Lock()
if !w.ready {
if w.buf.Len() > 0 {
_, _ = io.Copy(w.Writer, w.buf)
io.Copy(w.Writer, w.buf)
}
if w.flushed {
w.flusher.Flush()
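
From the client side, several of the build options parsed above travel as JSON-encoded strings inside the query form; a hedged sketch of the encoding:

    v := url.Values{}
    v.Set("t", "myimage:latest")
    v.Set("shmsize", "67108864") // bytes; parsed with strconv.ParseInt above
    args, _ := json.Marshal(map[string]*string{"HTTP_PROXY": nil})
    v.Set("buildargs", string(args))
    cache, _ := json.Marshal([]string{"myimage:cache"})
    v.Set("cachefrom", string(cache))
    v.Set("version", "2") // parseVersion: "1" classic builder, "2" BuildKit
    // POST /build?<v.Encode()> with the build context tar as the body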

View File

@@ -2,6 +2,7 @@ package checkpoint // import "github.com/docker/docker/api/server/router/checkpo
import (
"context"
"encoding/json"
"net/http"
"github.com/docker/docker/api/server/httputils"
@@ -14,7 +15,9 @@ func (s *checkpointRouter) postContainerCheckpoint(ctx context.Context, w http.R
}
var options types.CheckpointCreateOptions
if err := httputils.ReadJSON(r, &options); err != nil {
decoder := json.NewDecoder(r.Body)
if err := decoder.Decode(&options); err != nil {
return err
}

View File

@@ -17,7 +17,7 @@ type execBackend interface {
ContainerExecCreate(name string, config *types.ExecConfig) (string, error)
ContainerExecInspect(id string) (*backend.ExecInspect, error)
ContainerExecResize(name string, height, width int) error
ContainerExecStart(ctx context.Context, name string, options container.ExecStartOptions) error
ContainerExecStart(ctx context.Context, name string, stdin io.Reader, stdout io.Writer, stderr io.Writer) error
ExecExists(name string) (bool, error)
}
@@ -32,15 +32,15 @@ type copyBackend interface {
// stateBackend includes functions to implement to provide container state lifecycle functionality.
type stateBackend interface {
ContainerCreate(config types.ContainerCreateConfig) (container.CreateResponse, error)
ContainerKill(name string, signal string) error
ContainerCreate(config types.ContainerCreateConfig) (container.ContainerCreateCreatedBody, error)
ContainerKill(name string, sig uint64) error
ContainerPause(name string) error
ContainerRename(oldName, newName string) error
ContainerResize(name string, height, width int) error
ContainerRestart(ctx context.Context, name string, options container.StopOptions) error
ContainerRestart(name string, seconds *int) error
ContainerRm(name string, config *types.ContainerRmConfig) error
ContainerStart(name string, hostConfig *container.HostConfig, checkpoint string, checkpointDir string) error
ContainerStop(ctx context.Context, name string, options container.StopOptions) error
ContainerStop(name string, seconds *int) error
ContainerUnpause(name string) error
ContainerUpdate(name string, hostConfig *container.HostConfig) (container.ContainerUpdateOKBody, error)
ContainerWait(ctx context.Context, name string, condition containerpkg.WaitCondition) (<-chan containerpkg.StateStatus, error)
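
The two stop signatures above, contrasted in use (a sketch; the calls are alternatives from different branches, not compilable side by side):

    ctx := context.Background()
    timeout := 10 // seconds

    // StopOptions variant: signal and timeout travel together.
    _ = b.ContainerStop(ctx, "web", container.StopOptions{Signal: "SIGTERM", Timeout: &timeout})

    // seconds variant: only an optional timeout; a nil pointer means
    // "use the daemon default".
    _ = b.ContainerStop("web", &timeout)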

View File

@@ -10,15 +10,13 @@ type containerRouter struct {
backend Backend
decoder httputils.ContainerDecoder
routes []router.Route
cgroup2 bool
}
// NewRouter initializes a new container router
func NewRouter(b Backend, decoder httputils.ContainerDecoder, cgroup2 bool) router.Router {
func NewRouter(b Backend, decoder httputils.ContainerDecoder) router.Router {
r := &containerRouter{
backend: b,
decoder: decoder,
cgroup2: cgroup2,
}
r.initRoutes()
return r
@@ -56,7 +54,7 @@ func (r *containerRouter) initRoutes() {
router.NewPostRoute("/containers/{name:.*}/wait", r.postContainersWait),
router.NewPostRoute("/containers/{name:.*}/resize", r.postContainersResize),
router.NewPostRoute("/containers/{name:.*}/attach", r.postContainersAttach),
router.NewPostRoute("/containers/{name:.*}/copy", r.postContainersCopy), // Deprecated since 1.8 (API v1.20), errors out since 1.12 (API v1.24)
router.NewPostRoute("/containers/{name:.*}/copy", r.postContainersCopy), // Deprecated since 1.8, Errors out since 1.12
router.NewPostRoute("/containers/{name:.*}/exec", r.postContainerExecCreate),
router.NewPostRoute("/exec/{name:.*}/start", r.postContainerExecStart),
router.NewPostRoute("/exec/{name:.*}/resize", r.postContainerExecResize),

View File

@@ -6,22 +6,19 @@ import (
"fmt"
"io"
"net/http"
"runtime"
"strconv"
"syscall"
"github.com/containerd/containerd/platforms"
"github.com/docker/docker/api/server/httpstatus"
"github.com/docker/docker/api/server/httputils"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/backend"
"github.com/docker/docker/api/types/container"
"github.com/docker/docker/api/types/filters"
"github.com/docker/docker/api/types/mount"
"github.com/docker/docker/api/types/versions"
containerpkg "github.com/docker/docker/container"
"github.com/docker/docker/errdefs"
"github.com/docker/docker/pkg/ioutils"
specs "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/docker/docker/pkg/signal"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"golang.org/x/net/websocket"
@@ -44,7 +41,7 @@ func (s *containerRouter) postCommit(ctx context.Context, w http.ResponseWriter,
}
config, _, _, err := s.decoder.DecodeConfig(r.Body)
if err != nil && err != io.EOF { // Do not fail if body is empty.
if err != nil && err != io.EOF { //Do not fail if body is empty.
return err
}
@@ -108,14 +105,9 @@ func (s *containerRouter) getContainersStats(ctx context.Context, w http.Respons
if !stream {
w.Header().Set("Content-Type", "application/json")
}
var oneShot bool
if versions.GreaterThanOrEqualTo(httputils.VersionFromContext(ctx), "1.41") {
oneShot = httputils.BoolValueOrDefault(r, "one-shot", false)
}
config := &backend.ContainerStatsConfig{
Stream: stream,
OneShot: oneShot,
OutStream: w,
Version: httputils.VersionFromContext(ctx),
}
@@ -155,12 +147,6 @@ func (s *containerRouter) getContainersLogs(ctx context.Context, w http.Response
return err
}
contentType := types.MediaTypeRawStream
if !tty && versions.GreaterThanOrEqualTo(httputils.VersionFromContext(ctx), "1.42") {
contentType = types.MediaTypeMultiplexedStream
}
w.Header().Set("Content-Type", contentType)
// if it has a tty, we're not muxing streams. if it doesn't, we are. simple.
// this is the point of no return for writing a response. once we call
// WriteLogStream, the response has been started and errors will be
@@ -227,26 +213,20 @@ func (s *containerRouter) postContainersStop(ctx context.Context, w http.Respons
return err
}
var (
options container.StopOptions
version = httputils.VersionFromContext(ctx)
)
if versions.GreaterThanOrEqualTo(version, "1.42") {
options.Signal = r.Form.Get("signal")
}
var seconds *int
if tmpSeconds := r.Form.Get("t"); tmpSeconds != "" {
valSeconds, err := strconv.Atoi(tmpSeconds)
if err != nil {
return err
}
options.Timeout = &valSeconds
seconds = &valSeconds
}
if err := s.backend.ContainerStop(ctx, vars["name"], options); err != nil {
if err := s.backend.ContainerStop(vars["name"], seconds); err != nil {
return err
}
w.WriteHeader(http.StatusNoContent)
return nil
}
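
Client-side view of the parameters handled above (a hedged sketch): both variants read "t" as the timeout; only the StopOptions variant consults "signal", and only on API >= 1.42.

    q := url.Values{}
    q.Set("t", "5")           // stop timeout in seconds, parsed via strconv.Atoi
    q.Set("signal", "SIGINT") // honored only by the StopOptions variant
    // POST /containers/{name}/stop?{q.Encode()}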
@@ -255,8 +235,18 @@ func (s *containerRouter) postContainersKill(ctx context.Context, w http.Respons
return err
}
var sig syscall.Signal
name := vars["name"]
if err := s.backend.ContainerKill(name, r.Form.Get("signal")); err != nil {
// If we have a signal, look at it. Otherwise, do nothing
if sigStr := r.Form.Get("signal"); sigStr != "" {
var err error
if sig, err = signal.ParseSignal(sigStr); err != nil {
return errdefs.InvalidParameter(err)
}
}
if err := s.backend.ContainerKill(name, uint64(sig)); err != nil {
var isStopped bool
if errdefs.IsConflict(err) {
isStopped = true
@@ -280,26 +270,21 @@ func (s *containerRouter) postContainersRestart(ctx context.Context, w http.Resp
return err
}
var (
options container.StopOptions
version = httputils.VersionFromContext(ctx)
)
if versions.GreaterThanOrEqualTo(version, "1.42") {
options.Signal = r.Form.Get("signal")
}
var seconds *int
if tmpSeconds := r.Form.Get("t"); tmpSeconds != "" {
valSeconds, err := strconv.Atoi(tmpSeconds)
if err != nil {
return err
}
options.Timeout = &valSeconds
seconds = &valSeconds
}
if err := s.backend.ContainerRestart(ctx, vars["name"], options); err != nil {
if err := s.backend.ContainerRestart(vars["name"], seconds); err != nil {
return err
}
w.WriteHeader(http.StatusNoContent)
return nil
}
@@ -344,18 +329,12 @@ func (s *containerRouter) postContainersWait(ctx context.Context, w http.Respons
if err := httputils.ParseForm(r); err != nil {
return err
}
if v := r.Form.Get("condition"); v != "" {
switch container.WaitCondition(v) {
case container.WaitConditionNotRunning:
waitCondition = containerpkg.WaitConditionNotRunning
case container.WaitConditionNextExit:
waitCondition = containerpkg.WaitConditionNextExit
case container.WaitConditionRemoved:
waitCondition = containerpkg.WaitConditionRemoved
legacyRemovalWaitPre134 = versions.LessThan(version, "1.34")
default:
return errdefs.InvalidParameter(errors.Errorf("invalid condition: %q", v))
}
switch container.WaitCondition(r.Form.Get("condition")) {
case container.WaitConditionNextExit:
waitCondition = containerpkg.WaitConditionNextExit
case container.WaitConditionRemoved:
waitCondition = containerpkg.WaitConditionRemoved
legacyRemovalWaitPre134 = versions.LessThan(version, "1.34")
}
}
@@ -385,12 +364,12 @@ func (s *containerRouter) postContainersWait(ctx context.Context, w http.Respons
return nil
}
var waitError *container.WaitExitError
var waitError *container.ContainerWaitOKBodyError
if status.Err() != nil {
waitError = &container.WaitExitError{Message: status.Err().Error()}
waitError = &container.ContainerWaitOKBodyError{Message: status.Err().Error()}
}
return json.NewEncoder(w).Encode(&container.WaitResponse{
return json.NewEncoder(w).Encode(&container.ContainerWaitOKBody{
StatusCode: int64(status.ExitCode()),
Error: waitError,
})
@@ -436,20 +415,19 @@ func (s *containerRouter) postContainerUpdate(ctx context.Context, w http.Respon
if err := httputils.ParseForm(r); err != nil {
return err
}
if err := httputils.CheckForJSON(r); err != nil {
return err
}
var updateConfig container.UpdateConfig
if err := httputils.ReadJSON(r, &updateConfig); err != nil {
decoder := json.NewDecoder(r.Body)
if err := decoder.Decode(&updateConfig); err != nil {
return err
}
if versions.LessThan(httputils.VersionFromContext(ctx), "1.40") {
updateConfig.PidsLimit = nil
}
if versions.GreaterThanOrEqualTo(httputils.VersionFromContext(ctx), "1.42") {
// Ignore KernelMemory removed in API 1.42.
updateConfig.KernelMemory = 0
}
if updateConfig.PidsLimit != nil && *updateConfig.PidsLimit <= 0 {
// Both `0` and `-1` are accepted to set "unlimited" when updating.
// Historically, any negative value was accepted, so treat them as
@@ -504,69 +482,12 @@ func (s *containerRouter) postContainersCreate(ctx context.Context, w http.Respo
// Ignore KernelMemoryTCP because it was added in API 1.40.
hostConfig.KernelMemoryTCP = 0
// Ignore Capabilities because it was added in API 1.40.
hostConfig.Capabilities = nil
// Older clients (API < 1.40) expects the default to be shareable, make them happy
if hostConfig.IpcMode.IsEmpty() {
hostConfig.IpcMode = container.IPCModeShareable
}
}
if hostConfig != nil && versions.LessThan(version, "1.41") && !s.cgroup2 {
// Older clients expect the default to be "host" on cgroup v1 hosts
if hostConfig.CgroupnsMode.IsEmpty() {
hostConfig.CgroupnsMode = container.CgroupnsModeHost
}
}
if hostConfig != nil && versions.LessThan(version, "1.42") {
for _, m := range hostConfig.Mounts {
// Ignore BindOptions.CreateMountpoint because it was added in API 1.42.
if bo := m.BindOptions; bo != nil {
bo.CreateMountpoint = false
}
// These combinations are invalid, but weren't validated in API < 1.42.
// We reset them here, so that validation doesn't produce an error.
if o := m.VolumeOptions; o != nil && m.Type != mount.TypeVolume {
m.VolumeOptions = nil
}
if o := m.TmpfsOptions; o != nil && m.Type != mount.TypeTmpfs {
m.TmpfsOptions = nil
}
if bo := m.BindOptions; bo != nil {
// Ignore BindOptions.CreateMountpoint because it was added in API 1.42.
bo.CreateMountpoint = false
}
}
}
if hostConfig != nil && versions.GreaterThanOrEqualTo(version, "1.42") {
// Ignore KernelMemory removed in API 1.42.
hostConfig.KernelMemory = 0
for _, m := range hostConfig.Mounts {
if o := m.VolumeOptions; o != nil && m.Type != mount.TypeVolume {
return errdefs.InvalidParameter(fmt.Errorf("VolumeOptions must not be specified on mount type %q", m.Type))
}
if o := m.BindOptions; o != nil && m.Type != mount.TypeBind {
return errdefs.InvalidParameter(fmt.Errorf("BindOptions must not be specified on mount type %q", m.Type))
}
if o := m.TmpfsOptions; o != nil && m.Type != mount.TypeTmpfs {
return errdefs.InvalidParameter(fmt.Errorf("TmpfsOptions must not be specified on mount type %q", m.Type))
}
}
}
if hostConfig != nil && runtime.GOOS == "linux" && versions.LessThan(version, "1.42") {
// ConsoleSize is not respected by Linux daemon before API 1.42
hostConfig.ConsoleSize = [2]uint{0, 0}
}
var platform *specs.Platform
if versions.GreaterThanOrEqualTo(version, "1.41") {
if v := r.Form.Get("platform"); v != "" {
p, err := platforms.Parse(v)
if err != nil {
return errdefs.InvalidParameter(err)
}
platform = &p
hostConfig.IpcMode = container.IpcMode("shareable")
}
}
@@ -584,7 +505,6 @@ func (s *containerRouter) postContainersCreate(ctx context.Context, w http.Respo
HostConfig: hostConfig,
NetworkingConfig: networkingConfig,
AdjustCPUShares: adjustCPUShares,
Platform: platform,
})
if err != nil {
return err
@@ -646,8 +566,7 @@ func (s *containerRouter) postContainersAttach(ctx context.Context, w http.Respo
return errdefs.InvalidParameter(errors.Errorf("error attaching to container %s, hijack connection missing", containerName))
}
contentType := types.MediaTypeRawStream
setupStreams := func(multiplexed bool) (io.ReadCloser, io.Writer, io.Writer, error) {
setupStreams := func() (io.ReadCloser, io.Writer, io.Writer, error) {
conn, _, err := hijacker.Hijack()
if err != nil {
return nil, nil, nil, err
@@ -657,10 +576,7 @@ func (s *containerRouter) postContainersAttach(ctx context.Context, w http.Respo
conn.Write([]byte{})
if upgrade {
if multiplexed && versions.GreaterThanOrEqualTo(httputils.VersionFromContext(ctx), "1.42") {
contentType = types.MediaTypeMultiplexedStream
}
fmt.Fprintf(conn, "HTTP/1.1 101 UPGRADED\r\nContent-Type: "+contentType+"\r\nConnection: Upgrade\r\nUpgrade: tcp\r\n\r\n")
fmt.Fprintf(conn, "HTTP/1.1 101 UPGRADED\r\nContent-Type: application/vnd.docker.raw-stream\r\nConnection: Upgrade\r\nUpgrade: tcp\r\n\r\n")
} else {
fmt.Fprintf(conn, "HTTP/1.1 200 OK\r\nContent-Type: application/vnd.docker.raw-stream\r\n\r\n")
}
@@ -684,16 +600,16 @@ func (s *containerRouter) postContainersAttach(ctx context.Context, w http.Respo
}
if err = s.backend.ContainerAttach(containerName, attachConfig); err != nil {
logrus.WithError(err).Errorf("Handler for %s %s returned error", r.Method, r.URL.Path)
logrus.Errorf("Handler for %s %s returned error: %v", r.Method, r.URL.Path, err)
// Remember to close stream if error happens
conn, _, errHijack := hijacker.Hijack()
if errHijack != nil {
logrus.WithError(err).Errorf("Handler for %s %s: unable to close stream; error when hijacking connection", r.Method, r.URL.Path)
} else {
statusCode := httpstatus.FromError(err)
if errHijack == nil {
statusCode := errdefs.GetHTTPErrorStatusCode(err)
statusText := http.StatusText(statusCode)
fmt.Fprintf(conn, "HTTP/1.1 %d %s\r\nContent-Type: %s\r\n\r\n%s\r\n", statusCode, statusText, contentType, err.Error())
fmt.Fprintf(conn, "HTTP/1.1 %d %s\r\nContent-Type: application/vnd.docker.raw-stream\r\n\r\n%s\r\n", statusCode, statusText, err.Error())
httputils.CloseStreams(conn)
} else {
logrus.Errorf("Error Hijacking: %v", err)
}
}
return nil
@@ -713,7 +629,7 @@ func (s *containerRouter) wsContainersAttach(ctx context.Context, w http.Respons
version := httputils.VersionFromContext(ctx)
setupStreams := func(multiplexed bool) (io.ReadCloser, io.Writer, io.Writer, error) {
setupStreams := func() (io.ReadCloser, io.Writer, io.Writer, error) {
wsChan := make(chan *websocket.Conn)
h := func(conn *websocket.Conn) {
wsChan <- conn
@@ -735,22 +651,15 @@ func (s *containerRouter) wsContainersAttach(ctx context.Context, w http.Respons
return conn, conn, conn, nil
}
useStdin, useStdout, useStderr := true, true, true
if versions.GreaterThanOrEqualTo(version, "1.42") {
useStdin = httputils.BoolValue(r, "stdin")
useStdout = httputils.BoolValue(r, "stdout")
useStderr = httputils.BoolValue(r, "stderr")
}
attachConfig := &backend.ContainerAttachConfig{
GetStreams: setupStreams,
UseStdin: useStdin,
UseStdout: useStdout,
UseStderr: useStderr,
Logs: httputils.BoolValue(r, "logs"),
Stream: httputils.BoolValue(r, "stream"),
DetachKeys: detachKeys,
MuxStreams: false, // never multiplex, as we rely on websocket to manage distinct streams
UseStdin: true,
UseStdout: true,
UseStderr: true,
MuxStreams: false, // TODO: this should be true since it's a single stream for both stdout and stderr
}
err = s.backend.ContainerAttach(containerName, attachConfig)
@@ -775,7 +684,7 @@ func (s *containerRouter) postContainersPrune(ctx context.Context, w http.Respon
pruneFilters, err := filters.FromJSON(r.Form.Get("filters"))
if err != nil {
return err
return errdefs.InvalidParameter(err)
}
pruneReport, err := s.backend.ContainersPrune(ctx, pruneFilters)
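
On the signal parsing used in the kill handler earlier in this file: pkg/signal.ParseSignal accepts names with or without the SIG prefix as well as numbers; a sketch (numeric values are platform-specific):

    sig, err := signal.ParseSignal("KILL") // "SIGKILL" and "9" also work
    if err != nil {
        // unknown names surface as errdefs.InvalidParameter (HTTP 400)
    }
    fmt.Println(uint64(sig)) // 9 on Linux; the value handed to ContainerKill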

View File

@@ -6,12 +6,14 @@ import (
"context"
"encoding/base64"
"encoding/json"
"errors"
"io"
"net/http"
"github.com/docker/docker/api/server/httputils"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/versions"
"github.com/docker/docker/errdefs"
gddohttputil "github.com/golang/gddo/httputil"
)
@@ -24,18 +26,23 @@ func (pathError) Error() string {
func (pathError) InvalidParameter() {}
// postContainersCopy is deprecated in favor of getContainersArchive.
//
// Deprecated since 1.8 (API v1.20), errors out since 1.12 (API v1.24)
func (s *containerRouter) postContainersCopy(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
// Deprecated since 1.8, Errors out since 1.12
version := httputils.VersionFromContext(ctx)
if versions.GreaterThanOrEqualTo(version, "1.24") {
w.WriteHeader(http.StatusNotFound)
return nil
}
if err := httputils.CheckForJSON(r); err != nil {
return err
}
cfg := types.CopyConfig{}
if err := httputils.ReadJSON(r, &cfg); err != nil {
return err
if err := json.NewDecoder(r.Body).Decode(&cfg); err != nil {
if err == io.EOF {
return errdefs.InvalidParameter(errors.New("got EOF while reading request body"))
}
return errdefs.InvalidParameter(err)
}
if cfg.Resource == "" {

View File

@@ -2,6 +2,8 @@ package container // import "github.com/docker/docker/api/server/router/containe
import (
"context"
"encoding/json"
"errors"
"fmt"
"io"
"net/http"
@@ -9,7 +11,6 @@ import (
"github.com/docker/docker/api/server/httputils"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/container"
"github.com/docker/docker/api/types/versions"
"github.com/docker/docker/errdefs"
"github.com/docker/docker/pkg/stdcopy"
@@ -37,26 +38,27 @@ func (s *containerRouter) postContainerExecCreate(ctx context.Context, w http.Re
if err := httputils.ParseForm(r); err != nil {
return err
}
if err := httputils.CheckForJSON(r); err != nil {
return err
}
name := vars["name"]
execConfig := &types.ExecConfig{}
if err := httputils.ReadJSON(r, execConfig); err != nil {
return err
if err := json.NewDecoder(r.Body).Decode(execConfig); err != nil {
if err == io.EOF {
return errdefs.InvalidParameter(errors.New("got EOF while reading request body"))
}
return errdefs.InvalidParameter(err)
}
if len(execConfig.Cmd) == 0 {
return execCommandError{}
}
version := httputils.VersionFromContext(ctx)
if versions.LessThan(version, "1.42") {
// Not supported by API versions before 1.42
execConfig.ConsoleSize = nil
}
// Register an instance of Exec in container.
id, err := s.backend.ContainerExecCreate(vars["name"], execConfig)
id, err := s.backend.ContainerExecCreate(name, execConfig)
if err != nil {
logrus.Errorf("Error setting up exec command in container %s: %v", vars["name"], err)
logrus.Errorf("Error setting up exec command in container %s: %v", name, err)
return err
}
@@ -72,11 +74,9 @@ func (s *containerRouter) postContainerExecStart(ctx context.Context, w http.Res
}
version := httputils.VersionFromContext(ctx)
if versions.LessThan(version, "1.22") {
// API versions before 1.22 did not enforce application/json content-type.
// Allow older clients to work by patching the content-type.
if r.Header.Get("Content-Type") != "application/json" {
r.Header.Set("Content-Type", "application/json")
if versions.GreaterThan(version, "1.21") {
if err := httputils.CheckForJSON(r); err != nil {
return err
}
}
@@ -87,26 +87,17 @@ func (s *containerRouter) postContainerExecStart(ctx context.Context, w http.Res
)
execStartCheck := &types.ExecStartCheck{}
if err := httputils.ReadJSON(r, execStartCheck); err != nil {
return err
if err := json.NewDecoder(r.Body).Decode(execStartCheck); err != nil {
if err == io.EOF {
return errdefs.InvalidParameter(errors.New("got EOF while reading request body"))
}
return errdefs.InvalidParameter(err)
}
if exists, err := s.backend.ExecExists(execName); !exists {
return err
}
if execStartCheck.ConsoleSize != nil {
// Not supported before 1.42
if versions.LessThan(version, "1.42") {
execStartCheck.ConsoleSize = nil
}
// No console without tty
if !execStartCheck.Tty {
execStartCheck.ConsoleSize = nil
}
}
if !execStartCheck.Detach {
var err error
// Setting up the streaming http interface.
@@ -117,11 +108,7 @@ func (s *containerRouter) postContainerExecStart(ctx context.Context, w http.Res
defer httputils.CloseStreams(inStream, outStream)
if _, ok := r.Header["Upgrade"]; ok {
contentType := types.MediaTypeRawStream
if !execStartCheck.Tty && versions.GreaterThanOrEqualTo(httputils.VersionFromContext(ctx), "1.42") {
contentType = types.MediaTypeMultiplexedStream
}
fmt.Fprint(outStream, "HTTP/1.1 101 UPGRADED\r\nContent-Type: "+contentType+"\r\nConnection: Upgrade\r\nUpgrade: tcp\r\n")
fmt.Fprint(outStream, "HTTP/1.1 101 UPGRADED\r\nContent-Type: application/vnd.docker.raw-stream\r\nConnection: Upgrade\r\nUpgrade: tcp\r\n")
} else {
fmt.Fprint(outStream, "HTTP/1.1 200 OK\r\nContent-Type: application/vnd.docker.raw-stream\r\n")
}
@@ -140,16 +127,9 @@ func (s *containerRouter) postContainerExecStart(ctx context.Context, w http.Res
}
}
options := container.ExecStartOptions{
Stdin: stdin,
Stdout: stdout,
Stderr: stderr,
ConsoleSize: execStartCheck.ConsoleSize,
}
// Now run the user process in container.
// Maybe we should pass ctx here if we're not detaching?
if err := s.backend.ContainerExecStart(context.Background(), execName, options); err != nil {
if err := s.backend.ContainerExecStart(context.Background(), execName, stdin, stdout, stderr); err != nil {
if execStartCheck.Detach {
return err
}
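
A sketch of the ExecStartOptions variant shown above, with the field shape taken from the diff:

    options := container.ExecStartOptions{
        Stdin:  stdin,
        Stdout: stdout,
        Stderr: stderr,
        // ConsoleSize is only forwarded for tty sessions on API >= 1.42;
        // the other branch takes the three streams as plain arguments.
    }
    err := s.backend.ContainerExecStart(context.Background(), execName, options)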

View File

@@ -11,5 +11,5 @@ import (
// Backend is all the methods that need to be implemented
// to provide image specific functionality.
type Backend interface {
GetRepository(context.Context, reference.Named, *types.AuthConfig) (distribution.Repository, error)
GetRepository(context.Context, reference.Named, *types.AuthConfig) (distribution.Repository, bool, error)
}

View File

@@ -15,7 +15,7 @@ import (
"github.com/docker/docker/api/types"
registrytypes "github.com/docker/docker/api/types/registry"
"github.com/docker/docker/errdefs"
v1 "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
)
@@ -57,7 +57,7 @@ func (s *distributionRouter) getDistributionInfo(ctx context.Context, w http.Res
return errdefs.InvalidParameter(errors.Errorf("unknown image reference format: %s", image))
}
distrepo, err := s.backend.GetRepository(ctx, namedRef, config)
distrepo, _, err := s.backend.GetRepository(ctx, namedRef, config)
if err != nil {
return err
}

View File

@@ -2,7 +2,6 @@ package grpc // import "github.com/docker/docker/api/server/router/grpc"
import (
"github.com/docker/docker/api/server/router"
"github.com/moby/buildkit/util/grpcerrors"
"golang.org/x/net/http2"
"google.golang.org/grpc"
)
@@ -16,11 +15,8 @@ type grpcRouter struct {
// NewRouter initializes a new grpc http router
func NewRouter(backends ...Backend) router.Router {
r := &grpcRouter{
h2Server: &http2.Server{},
grpcServer: grpc.NewServer(
grpc.UnaryInterceptor(grpcerrors.UnaryServerInterceptor),
grpc.StreamInterceptor(grpcerrors.StreamServerInterceptor),
),
h2Server: &http2.Server{},
grpcServer: grpc.NewServer(),
}
for _, b := range backends {
b.RegisterGRPC(r.grpcServer)
@@ -30,12 +26,12 @@ func NewRouter(backends ...Backend) router.Router {
}
// Routes returns the available routers to the session controller
func (gr *grpcRouter) Routes() []router.Route {
return gr.routes
func (r *grpcRouter) Routes() []router.Route {
return r.routes
}
func (gr *grpcRouter) initRoutes() {
gr.routes = []router.Route{
router.NewPostRoute("/grpc", gr.serveGRPC),
func (r *grpcRouter) initRoutes() {
r.routes = []router.Route{
router.NewPostRoute("/grpc", r.serveGRPC),
}
}

View File

@@ -8,7 +8,6 @@ import (
"github.com/docker/docker/api/types/filters"
"github.com/docker/docker/api/types/image"
"github.com/docker/docker/api/types/registry"
dockerimage "github.com/docker/docker/image"
specs "github.com/opencontainers/image-spec/specs-go/v1"
)
@@ -23,20 +22,20 @@ type Backend interface {
type imageBackend interface {
ImageDelete(imageRef string, force, prune bool) ([]types.ImageDeleteResponseItem, error)
ImageHistory(imageName string) ([]*image.HistoryResponseItem, error)
Images(ctx context.Context, opts types.ImageListOptions) ([]*types.ImageSummary, error)
GetImage(refOrID string, platform *specs.Platform) (retImg *dockerimage.Image, retErr error)
Images(imageFilters filters.Args, all bool, withExtraAttrs bool) ([]*types.ImageSummary, error)
LookupImage(name string) (*types.ImageInspect, error)
TagImage(imageName, repository, tag string) (string, error)
ImagesPrune(ctx context.Context, pruneFilters filters.Args) (*types.ImagesPruneReport, error)
}
type importExportBackend interface {
LoadImage(inTar io.ReadCloser, outStream io.Writer, quiet bool) error
ImportImage(src string, repository string, platform *specs.Platform, tag string, msg string, inConfig io.ReadCloser, outStream io.Writer, changes []string) error
ImportImage(src string, repository, platform string, tag string, msg string, inConfig io.ReadCloser, outStream io.Writer, changes []string) error
ExportImage(names []string, outStream io.Writer) error
}
type registryBackend interface {
PullImage(ctx context.Context, image, tag string, platform *specs.Platform, metaHeaders map[string][]string, authConfig *types.AuthConfig, outStream io.Writer) error
PushImage(ctx context.Context, image, tag string, metaHeaders map[string][]string, authConfig *types.AuthConfig, outStream io.Writer) error
SearchRegistryForImages(ctx context.Context, searchFilters filters.Args, term string, limit int, authConfig *types.AuthConfig, metaHeaders map[string][]string) (*registry.SearchResults, error)
SearchRegistryForImages(ctx context.Context, filtersArgs string, term string, limit int, authConfig *types.AuthConfig, metaHeaders map[string][]string) (*registry.SearchResults, error)
}

View File

@@ -2,28 +2,17 @@ package image // import "github.com/docker/docker/api/server/router/image"
import (
"github.com/docker/docker/api/server/router"
"github.com/docker/docker/image"
"github.com/docker/docker/layer"
"github.com/docker/docker/reference"
)
// imageRouter is a router to talk with the image controller
type imageRouter struct {
backend Backend
referenceBackend reference.Store
imageStore image.Store
layerStore layer.Store
routes []router.Route
backend Backend
routes []router.Route
}
// NewRouter initializes a new image router
func NewRouter(backend Backend, referenceBackend reference.Store, imageStore image.Store, layerStore layer.Store) router.Router {
r := &imageRouter{
backend: backend,
referenceBackend: referenceBackend,
imageStore: imageStore,
layerStore: layerStore,
}
func NewRouter(backend Backend) router.Router {
r := &imageRouter{backend: backend}
r.initRoutes()
return r
}

View File

@@ -7,19 +7,17 @@ import (
"net/http"
"strconv"
"strings"
"time"
"github.com/containerd/containerd/platforms"
"github.com/docker/distribution/reference"
"github.com/docker/docker/api/server/httputils"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/filters"
"github.com/docker/docker/api/types/versions"
"github.com/docker/docker/errdefs"
"github.com/docker/docker/image"
"github.com/docker/docker/layer"
"github.com/docker/docker/pkg/ioutils"
"github.com/docker/docker/pkg/streamformatter"
"github.com/docker/docker/pkg/system"
"github.com/docker/docker/registry"
specs "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
)
@@ -32,13 +30,13 @@ func (s *imageRouter) postImagesCreate(ctx context.Context, w http.ResponseWrite
}
var (
image = r.Form.Get("fromImage")
repo = r.Form.Get("repo")
tag = r.Form.Get("tag")
message = r.Form.Get("message")
progressErr error
output = ioutils.NewWriteFlusher(w)
platform *specs.Platform
image = r.Form.Get("fromImage")
repo = r.Form.Get("repo")
tag = r.Form.Get("tag")
message = r.Form.Get("message")
err error
output = ioutils.NewWriteFlusher(w)
platform *specs.Platform
)
defer output.Close()
@@ -46,16 +44,20 @@ func (s *imageRouter) postImagesCreate(ctx context.Context, w http.ResponseWrite
version := httputils.VersionFromContext(ctx)
if versions.GreaterThanOrEqualTo(version, "1.32") {
if p := r.FormValue("platform"); p != "" {
sp, err := platforms.Parse(p)
apiPlatform := r.FormValue("platform")
if apiPlatform != "" {
sp, err := platforms.Parse(apiPlatform)
if err != nil {
return err
}
if err := system.ValidatePlatform(sp); err != nil {
return err
}
platform = &sp
}
}
if image != "" { // pull
if image != "" { //pull
metaHeaders := map[string][]string{}
for k, v := range r.Header {
if strings.HasPrefix(k, "X-Meta-") {
@@ -73,16 +75,23 @@ func (s *imageRouter) postImagesCreate(ctx context.Context, w http.ResponseWrite
authConfig = &types.AuthConfig{}
}
}
progressErr = s.backend.PullImage(ctx, image, tag, platform, metaHeaders, authConfig, output)
} else { // import
err = s.backend.PullImage(ctx, image, tag, platform, metaHeaders, authConfig, output)
} else { //import
src := r.Form.Get("fromSrc")
progressErr = s.backend.ImportImage(src, repo, platform, tag, message, r.Body, output, r.Form["changes"])
}
if progressErr != nil {
if !output.Flushed() {
return progressErr
// 'err' MUST NOT be defined within this block; we need any error
// generated from the download to be available to the output
// stream processing below
os := ""
if platform != nil {
os = platform.OS
}
_, _ = output.Write(streamformatter.FormatError(progressErr))
err = s.backend.ImportImage(src, repo, os, tag, message, r.Body, output, r.Form["changes"])
}
if err != nil {
if !output.Flushed() {
return err
}
output.Write(streamformatter.FormatError(err))
}
return nil
@@ -127,7 +136,7 @@ func (s *imageRouter) postImagesPush(ctx context.Context, w http.ResponseWriter,
if !output.Flushed() {
return err
}
_, _ = output.Write(streamformatter.FormatError(err))
output.Write(streamformatter.FormatError(err))
}
return nil
}
@@ -152,7 +161,7 @@ func (s *imageRouter) getImagesGet(ctx context.Context, w http.ResponseWriter, r
if !output.Flushed() {
return err
}
_, _ = output.Write(streamformatter.FormatError(err))
output.Write(streamformatter.FormatError(err))
}
return nil
}
@@ -168,7 +177,7 @@ func (s *imageRouter) postImagesLoad(ctx context.Context, w http.ResponseWriter,
output := ioutils.NewWriteFlusher(w)
defer output.Close()
if err := s.backend.LoadImage(r.Body, output, quiet); err != nil {
_, _ = output.Write(streamformatter.FormatError(err))
output.Write(streamformatter.FormatError(err))
}
return nil
}
@@ -204,12 +213,7 @@ func (s *imageRouter) deleteImages(ctx context.Context, w http.ResponseWriter, r
}
func (s *imageRouter) getImagesByName(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
image, err := s.backend.GetImage(vars["name"], nil)
if err != nil {
return err
}
imageInspect, err := s.toImageInspect(image)
imageInspect, err := s.backend.LookupImage(vars["name"])
if err != nil {
return err
}
@@ -217,85 +221,6 @@ func (s *imageRouter) getImagesByName(ctx context.Context, w http.ResponseWriter
return httputils.WriteJSON(w, http.StatusOK, imageInspect)
}
func (s *imageRouter) toImageInspect(img *image.Image) (*types.ImageInspect, error) {
refs := s.referenceBackend.References(img.ID().Digest())
repoTags := []string{}
repoDigests := []string{}
for _, ref := range refs {
switch ref.(type) {
case reference.NamedTagged:
repoTags = append(repoTags, reference.FamiliarString(ref))
case reference.Canonical:
repoDigests = append(repoDigests, reference.FamiliarString(ref))
}
}
var size int64
var layerMetadata map[string]string
layerID := img.RootFS.ChainID()
if layerID != "" {
l, err := s.layerStore.Get(layerID)
if err != nil {
return nil, err
}
defer layer.ReleaseAndLog(s.layerStore, l)
size = l.Size()
layerMetadata, err = l.Metadata()
if err != nil {
return nil, err
}
}
comment := img.Comment
if len(comment) == 0 && len(img.History) > 0 {
comment = img.History[len(img.History)-1].Comment
}
lastUpdated, err := s.imageStore.GetLastUpdated(img.ID())
if err != nil {
return nil, err
}
return &types.ImageInspect{
ID: img.ID().String(),
RepoTags: repoTags,
RepoDigests: repoDigests,
Parent: img.Parent.String(),
Comment: comment,
Created: img.Created.Format(time.RFC3339Nano),
Container: img.Container,
ContainerConfig: &img.ContainerConfig,
DockerVersion: img.DockerVersion,
Author: img.Author,
Config: img.Config,
Architecture: img.Architecture,
Variant: img.Variant,
Os: img.OperatingSystem(),
OsVersion: img.OSVersion,
Size: size,
VirtualSize: size, // TODO: field unused, deprecate
GraphDriver: types.GraphDriverData{
Name: s.layerStore.DriverName(),
Data: layerMetadata,
},
RootFS: rootFSToAPIType(img.RootFS),
Metadata: types.ImageMetadata{
LastTagTime: lastUpdated,
},
}, nil
}
func rootFSToAPIType(rootfs *image.RootFS) types.RootFS {
var layers []string
for _, l := range rootfs.DiffIDs {
layers = append(layers, l.String())
}
return types.RootFS{
Type: rootfs.Type,
Layers: layers,
}
}
func (s *imageRouter) getImagesJSON(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
@@ -306,26 +231,13 @@ func (s *imageRouter) getImagesJSON(ctx context.Context, w http.ResponseWriter,
return err
}
version := httputils.VersionFromContext(ctx)
if versions.LessThan(version, "1.41") {
// NOTE: filter is a shell glob string applied to repository names.
filterParam := r.Form.Get("filter")
if filterParam != "" {
imageFilters.Add("reference", filterParam)
}
filterParam := r.Form.Get("filter")
// FIXME(vdemeester) This has been deprecated in 1.13, and is targeted for removal in v17.12
if filterParam != "" {
imageFilters.Add("reference", filterParam)
}
var sharedSize bool
if versions.GreaterThanOrEqualTo(version, "1.42") {
// NOTE: Support for the "shared-size" parameter was added in API 1.42.
sharedSize = httputils.BoolValue(r, "shared-size")
}
images, err := s.backend.Images(ctx, types.ImageListOptions{
All: httputils.BoolValue(r, "all"),
Filters: imageFilters,
SharedSize: sharedSize,
})
images, err := s.backend.Images(imageFilters, httputils.BoolValue(r, "all"), false)
if err != nil {
return err
}
@@ -377,21 +289,15 @@ func (s *imageRouter) getImagesSearch(ctx context.Context, w http.ResponseWriter
headers[k] = v
}
}
var limit int
limit := registry.DefaultSearchLimit
if r.Form.Get("limit") != "" {
var err error
limit, err = strconv.Atoi(r.Form.Get("limit"))
if err != nil || limit < 0 {
return errdefs.InvalidParameter(errors.Wrap(err, "invalid limit specified"))
limitValue, err := strconv.Atoi(r.Form.Get("limit"))
if err != nil {
return err
}
limit = limitValue
}
searchFilters, err := filters.FromJSON(r.Form.Get("filters"))
if err != nil {
return err
}
query, err := s.backend.SearchRegistryForImages(ctx, searchFilters, r.Form.Get("term"), limit, config, headers)
query, err := s.backend.SearchRegistryForImages(ctx, r.Form.Get("filters"), r.Form.Get("term"), limit, config, headers)
if err != nil {
return err
}
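
The two limit branches above differ in defaults and validation: one starts from registry.DefaultSearchLimit and only rejects unparseable values, while the stricter variant defaults to 0 and also rejects negatives. A hedged sketch of the stricter behavior as a standalone helper (the helper name is illustrative):

    // parseSearchLimit rejects negative and unparseable values with a 400.
    func parseSearchLimit(s string) (int, error) {
        if s == "" {
            return 0, nil // no limit requested; the backend applies its default
        }
        limit, err := strconv.Atoi(s)
        if err != nil || limit < 0 {
            return 0, errdefs.InvalidParameter(fmt.Errorf("invalid limit specified: %q", s))
        }
        return limit, nil
    }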

View File

@@ -1,8 +1,6 @@
package router // import "github.com/docker/docker/api/server/router"
import (
"net/http"
"github.com/docker/docker/api/server/httputils"
)
@@ -44,30 +42,30 @@ func NewRoute(method, path string, handler httputils.APIFunc, opts ...RouteWrapp
// NewGetRoute initializes a new route with the http method GET.
func NewGetRoute(path string, handler httputils.APIFunc, opts ...RouteWrapper) Route {
return NewRoute(http.MethodGet, path, handler, opts...)
return NewRoute("GET", path, handler, opts...)
}
// NewPostRoute initializes a new route with the http method POST.
func NewPostRoute(path string, handler httputils.APIFunc, opts ...RouteWrapper) Route {
return NewRoute(http.MethodPost, path, handler, opts...)
return NewRoute("POST", path, handler, opts...)
}
// NewPutRoute initializes a new route with the http method PUT.
func NewPutRoute(path string, handler httputils.APIFunc, opts ...RouteWrapper) Route {
return NewRoute(http.MethodPut, path, handler, opts...)
return NewRoute("PUT", path, handler, opts...)
}
// NewDeleteRoute initializes a new route with the http method DELETE.
func NewDeleteRoute(path string, handler httputils.APIFunc, opts ...RouteWrapper) Route {
return NewRoute(http.MethodDelete, path, handler, opts...)
return NewRoute("DELETE", path, handler, opts...)
}
// NewOptionsRoute initializes a new route with the http method OPTIONS.
func NewOptionsRoute(path string, handler httputils.APIFunc, opts ...RouteWrapper) Route {
return NewRoute(http.MethodOptions, path, handler, opts...)
return NewRoute("OPTIONS", path, handler, opts...)
}
// NewHeadRoute initializes a new route with the http method HEAD.
func NewHeadRoute(path string, handler httputils.APIFunc, opts ...RouteWrapper) Route {
return NewRoute(http.MethodHead, path, handler, opts...)
return NewRoute("HEAD", path, handler, opts...)
}
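
Usage sketch for the constructors above (handler names are hypothetical):

    routes := []Route{
        NewGetRoute("/containers/json", listContainers),
        NewPostRoute("/containers/create", createContainer),
        NewDeleteRoute("/containers/{name:.*}", removeContainer),
    }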

View File

@@ -6,7 +6,7 @@ import (
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/filters"
"github.com/docker/docker/api/types/network"
"github.com/docker/docker/libnetwork"
"github.com/docker/libnetwork"
)
// Backend is all the methods that need to be implemented

View File

@@ -2,6 +2,8 @@ package network // import "github.com/docker/docker/api/server/router/network"
import (
"context"
"encoding/json"
"io"
"net/http"
"strconv"
"strings"
@@ -12,8 +14,8 @@ import (
"github.com/docker/docker/api/types/network"
"github.com/docker/docker/api/types/versions"
"github.com/docker/docker/errdefs"
"github.com/docker/docker/libnetwork"
netconst "github.com/docker/docker/libnetwork/datastore"
"github.com/docker/libnetwork"
netconst "github.com/docker/libnetwork/datastore"
"github.com/pkg/errors"
)
@@ -203,15 +205,23 @@ func (n *networkRouter) getNetwork(ctx context.Context, w http.ResponseWriter, r
}
func (n *networkRouter) postNetworkCreate(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
var create types.NetworkCreateRequest
if err := httputils.ParseForm(r); err != nil {
return err
}
var create types.NetworkCreateRequest
if err := httputils.ReadJSON(r, &create); err != nil {
if err := httputils.CheckForJSON(r); err != nil {
return err
}
if err := json.NewDecoder(r.Body).Decode(&create); err != nil {
if err == io.EOF {
return errdefs.InvalidParameter(errors.New("got EOF while reading request body"))
}
return errdefs.InvalidParameter(err)
}
if nws, err := n.cluster.GetNetworksByName(create.Name); err == nil && len(nws) > 0 {
return nameConflict(create.Name)
}
@@ -245,15 +255,22 @@ func (n *networkRouter) postNetworkCreate(ctx context.Context, w http.ResponseWr
}
func (n *networkRouter) postNetworkConnect(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
var connect types.NetworkConnect
if err := httputils.ParseForm(r); err != nil {
return err
}
var connect types.NetworkConnect
if err := httputils.ReadJSON(r, &connect); err != nil {
if err := httputils.CheckForJSON(r); err != nil {
return err
}
if err := json.NewDecoder(r.Body).Decode(&connect); err != nil {
if err == io.EOF {
return errdefs.InvalidParameter(errors.New("got EOF while reading request body"))
}
return errdefs.InvalidParameter(err)
}
// Unlike other operations, we do not check for ambiguity of the name/ID here.
// The reason is that, in the case of an attachable network in swarm scope, the actual local network
// may not be available at the time. At the same time, inside daemon `ConnectContainerToNetwork`
@@ -262,15 +279,22 @@ func (n *networkRouter) postNetworkConnect(ctx context.Context, w http.ResponseW
}
func (n *networkRouter) postNetworkDisconnect(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
var disconnect types.NetworkDisconnect
if err := httputils.ParseForm(r); err != nil {
return err
}
var disconnect types.NetworkDisconnect
if err := httputils.ReadJSON(r, &disconnect); err != nil {
if err := httputils.CheckForJSON(r); err != nil {
return err
}
if err := json.NewDecoder(r.Body).Decode(&disconnect); err != nil {
if err == io.EOF {
return errdefs.InvalidParameter(errors.New("got EOF while reading request body"))
}
return errdefs.InvalidParameter(err)
}
return n.backend.DisconnectContainerFromNetwork(disconnect.Container, vars["id"], disconnect.Force)
}
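
The hunks above (and several more below) trade the hand-rolled decode-plus-EOF dance for a single httputils.ReadJSON call. A plausible shape for such a helper — a sketch of what it centralizes, not the actual moby implementation:

    import (
        "encoding/json"
        "io"
        "net/http"

        "github.com/docker/docker/errdefs"
        "github.com/pkg/errors"
    )

    func readJSON(r *http.Request, out interface{}) error {
        // A Content-Type check (the old httputils.CheckForJSON) would sit here.
        if err := json.NewDecoder(r.Body).Decode(out); err != nil {
            if err == io.EOF {
                return errdefs.InvalidParameter(errors.New("got EOF while reading request body"))
            }
            return errdefs.InvalidParameter(err)
        }
        return nil
    }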

View File

@@ -4,6 +4,7 @@ import (
"context"
"encoding/base64"
"encoding/json"
"io"
"net/http"
"strconv"
"strings"
@@ -12,6 +13,7 @@ import (
"github.com/docker/docker/api/server/httputils"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/filters"
"github.com/docker/docker/errdefs"
"github.com/docker/docker/pkg/ioutils"
"github.com/docker/docker/pkg/streamformatter"
"github.com/pkg/errors"
@@ -94,8 +96,12 @@ func (pr *pluginRouter) upgradePlugin(ctx context.Context, w http.ResponseWriter
}
var privileges types.PluginPrivileges
if err := httputils.ReadJSON(r, &privileges); err != nil {
return err
dec := json.NewDecoder(r.Body)
if err := dec.Decode(&privileges); err != nil {
return errors.Wrap(err, "failed to parse privileges")
}
if dec.More() {
return errors.New("invalid privileges")
}
metaHeaders, authConfig := parseHeaders(r.Header)
@@ -117,7 +123,7 @@ func (pr *pluginRouter) upgradePlugin(ctx context.Context, w http.ResponseWriter
if !output.Flushed() {
return err
}
_, _ = output.Write(streamformatter.FormatError(err))
output.Write(streamformatter.FormatError(err))
}
return nil
@@ -129,8 +135,12 @@ func (pr *pluginRouter) pullPlugin(ctx context.Context, w http.ResponseWriter, r
}
var privileges types.PluginPrivileges
if err := httputils.ReadJSON(r, &privileges); err != nil {
return err
dec := json.NewDecoder(r.Body)
if err := dec.Decode(&privileges); err != nil {
return errors.Wrap(err, "failed to parse privileges")
}
if dec.More() {
return errors.New("invalid privileges")
}
metaHeaders, authConfig := parseHeaders(r.Header)
@@ -152,7 +162,7 @@ func (pr *pluginRouter) pullPlugin(ctx context.Context, w http.ResponseWriter, r
if !output.Flushed() {
return err
}
_, _ = output.Write(streamformatter.FormatError(err))
output.Write(streamformatter.FormatError(err))
}
return nil
@@ -201,7 +211,7 @@ func (pr *pluginRouter) createPlugin(ctx context.Context, w http.ResponseWriter,
if err := pr.backend.CreateFromContext(ctx, r.Body, options); err != nil {
return err
}
// TODO: send progress bar
//TODO: send progress bar
w.WriteHeader(http.StatusNoContent)
return nil
}
@@ -260,15 +270,18 @@ func (pr *pluginRouter) pushPlugin(ctx context.Context, w http.ResponseWriter, r
if !output.Flushed() {
return err
}
_, _ = output.Write(streamformatter.FormatError(err))
output.Write(streamformatter.FormatError(err))
}
return nil
}
func (pr *pluginRouter) setPlugin(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
var args []string
if err := httputils.ReadJSON(r, &args); err != nil {
return err
if err := json.NewDecoder(r.Body).Decode(&args); err != nil {
if err == io.EOF {
return errdefs.InvalidParameter(errors.New("got EOF while reading request body"))
}
return errdefs.InvalidParameter(err)
}
if err := pr.backend.Set(vars["name"], args); err != nil {
return err
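
One side of each privileges hunk goes further than a plain Decode: after reading one JSON value it calls dec.More() to reject trailing input. A standalone sketch of that idiom, with a hypothetical decodeOne helper:

    import (
        "encoding/json"
        "io"

        "github.com/pkg/errors"
    )

    // decodeOne parses exactly one JSON document and fails if anything follows,
    // so a valid value with trailing garbage is not silently accepted.
    func decodeOne(body io.Reader, out interface{}) error {
        dec := json.NewDecoder(body)
        if err := dec.Decode(out); err != nil {
            return errors.Wrap(err, "failed to parse body")
        }
        if dec.More() {
            return errors.New("unexpected trailing data after JSON document")
        }
        return nil
    }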

View File

@@ -2,7 +2,9 @@ package swarm // import "github.com/docker/docker/api/server/router/swarm"
import (
"context"
"encoding/json"
"fmt"
"io"
"net/http"
"strconv"
@@ -19,8 +21,11 @@ import (
func (sr *swarmRouter) initCluster(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
var req types.InitRequest
if err := httputils.ReadJSON(r, &req); err != nil {
return err
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
if err == io.EOF {
return errdefs.InvalidParameter(errors.New("got EOF while reading request body"))
}
return errdefs.InvalidParameter(err)
}
version := httputils.VersionFromContext(ctx)
@@ -43,8 +48,11 @@ func (sr *swarmRouter) initCluster(ctx context.Context, w http.ResponseWriter, r
func (sr *swarmRouter) joinCluster(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
var req types.JoinRequest
if err := httputils.ReadJSON(r, &req); err != nil {
return err
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
if err == io.EOF {
return errdefs.InvalidParameter(errors.New("got EOF while reading request body"))
}
return errdefs.InvalidParameter(err)
}
return sr.backend.Join(req)
}
@@ -70,8 +78,11 @@ func (sr *swarmRouter) inspectCluster(ctx context.Context, w http.ResponseWriter
func (sr *swarmRouter) updateCluster(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
var swarm types.Spec
if err := httputils.ReadJSON(r, &swarm); err != nil {
return err
if err := json.NewDecoder(r.Body).Decode(&swarm); err != nil {
if err == io.EOF {
return errdefs.InvalidParameter(errors.New("got EOF while reading request body"))
}
return errdefs.InvalidParameter(err)
}
rawVersion := r.URL.Query().Get("version")
@@ -121,8 +132,11 @@ func (sr *swarmRouter) updateCluster(ctx context.Context, w http.ResponseWriter,
func (sr *swarmRouter) unlockCluster(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
var req types.UnlockRequest
if err := httputils.ReadJSON(r, &req); err != nil {
return err
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
if err == io.EOF {
return errdefs.InvalidParameter(errors.New("got EOF while reading request body"))
}
return errdefs.InvalidParameter(err)
}
if err := sr.backend.UnlockSwarm(req); err != nil {
@@ -150,22 +164,10 @@ func (sr *swarmRouter) getServices(ctx context.Context, w http.ResponseWriter, r
}
filter, err := filters.FromJSON(r.Form.Get("filters"))
if err != nil {
return err
return errdefs.InvalidParameter(err)
}
// the status query parameter is only supported in API versions >= 1.41. If
// the client is using a lesser version, ignore the parameter.
cliVersion := httputils.VersionFromContext(ctx)
var status bool
if value := r.URL.Query().Get("status"); value != "" && !versions.LessThan(cliVersion, "1.41") {
var err error
status, err = strconv.ParseBool(value)
if err != nil {
return errors.Wrapf(errdefs.InvalidParameter(err), "invalid value for status: %s", value)
}
}
services, err := sr.backend.GetServices(basictypes.ServiceListOptions{Filters: filter, Status: status})
services, err := sr.backend.GetServices(basictypes.ServiceListOptions{Filters: filter})
if err != nil {
logrus.Errorf("Error getting services: %v", err)
return err
@@ -176,21 +178,15 @@ func (sr *swarmRouter) getServices(ctx context.Context, w http.ResponseWriter, r
func (sr *swarmRouter) getService(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
var insertDefaults bool
if value := r.URL.Query().Get("insertDefaults"); value != "" {
var err error
insertDefaults, err = strconv.ParseBool(value)
if err != nil {
err := fmt.Errorf("invalid value for insertDefaults: %s", value)
return errors.Wrapf(errdefs.InvalidParameter(err), "invalid value for insertDefaults: %s", value)
}
}
// you may note that there is no code here to handle the "status" query
// parameter, as in getServices. the Status field is not supported when
// retrieving an individual service because the Backend API changes
// required to accommodate it would be too disruptive, and because that
// field is so rarely needed as part of an individual service inspection.
service, err := sr.backend.GetService(vars["id"], insertDefaults)
if err != nil {
logrus.Errorf("Error getting service %s: %v", vars["id"], err)
@@ -202,19 +198,24 @@ func (sr *swarmRouter) getService(ctx context.Context, w http.ResponseWriter, r
func (sr *swarmRouter) createService(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
var service types.ServiceSpec
if err := httputils.ReadJSON(r, &service); err != nil {
return err
if err := json.NewDecoder(r.Body).Decode(&service); err != nil {
if err == io.EOF {
return errdefs.InvalidParameter(errors.New("got EOF while reading request body"))
}
return errdefs.InvalidParameter(err)
}
// Get returns "" if the header does not exist
encodedAuth := r.Header.Get("X-Registry-Auth")
cliVersion := r.Header.Get("version")
queryRegistry := false
if v := httputils.VersionFromContext(ctx); v != "" {
if versions.LessThan(v, "1.30") {
if cliVersion != "" {
if versions.LessThan(cliVersion, "1.30") {
queryRegistry = true
}
adjustForAPIVersion(v, &service)
adjustForAPIVersion(cliVersion, &service)
}
resp, err := sr.backend.CreateService(service, encodedAuth, queryRegistry)
if err != nil {
logrus.Errorf("Error creating service %s: %v", service.Name, err)
@@ -226,8 +227,11 @@ func (sr *swarmRouter) createService(ctx context.Context, w http.ResponseWriter,
func (sr *swarmRouter) updateService(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
var service types.ServiceSpec
if err := httputils.ReadJSON(r, &service); err != nil {
return err
if err := json.NewDecoder(r.Body).Decode(&service); err != nil {
if err == io.EOF {
return errdefs.InvalidParameter(errors.New("got EOF while reading request body"))
}
return errdefs.InvalidParameter(err)
}
rawVersion := r.URL.Query().Get("version")
@@ -243,12 +247,13 @@ func (sr *swarmRouter) updateService(ctx context.Context, w http.ResponseWriter,
flags.EncodedRegistryAuth = r.Header.Get("X-Registry-Auth")
flags.RegistryAuthFrom = r.URL.Query().Get("registryAuthFrom")
flags.Rollback = r.URL.Query().Get("rollback")
cliVersion := r.Header.Get("version")
queryRegistry := false
if v := httputils.VersionFromContext(ctx); v != "" {
if versions.LessThan(v, "1.30") {
if cliVersion != "" {
if versions.LessThan(cliVersion, "1.30") {
queryRegistry = true
}
adjustForAPIVersion(v, &service)
adjustForAPIVersion(cliVersion, &service)
}
resp, err := sr.backend.UpdateService(vars["id"], version, service, flags, queryRegistry)
@@ -321,8 +326,11 @@ func (sr *swarmRouter) getNode(ctx context.Context, w http.ResponseWriter, r *ht
func (sr *swarmRouter) updateNode(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
var node types.NodeSpec
if err := httputils.ReadJSON(r, &node); err != nil {
return err
if err := json.NewDecoder(r.Body).Decode(&node); err != nil {
if err == io.EOF {
return errdefs.InvalidParameter(errors.New("got EOF while reading request body"))
}
return errdefs.InvalidParameter(err)
}
rawVersion := r.URL.Query().Get("version")
@@ -400,8 +408,11 @@ func (sr *swarmRouter) getSecrets(ctx context.Context, w http.ResponseWriter, r
func (sr *swarmRouter) createSecret(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
var secret types.SecretSpec
if err := httputils.ReadJSON(r, &secret); err != nil {
return err
if err := json.NewDecoder(r.Body).Decode(&secret); err != nil {
if err == io.EOF {
return errdefs.InvalidParameter(errors.New("got EOF while reading request body"))
}
return errdefs.InvalidParameter(err)
}
version := httputils.VersionFromContext(ctx)
if secret.Templating != nil && versions.LessThan(version, "1.37") {
@@ -438,8 +449,11 @@ func (sr *swarmRouter) getSecret(ctx context.Context, w http.ResponseWriter, r *
func (sr *swarmRouter) updateSecret(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
var secret types.SecretSpec
if err := httputils.ReadJSON(r, &secret); err != nil {
return err
if err := json.NewDecoder(r.Body).Decode(&secret); err != nil {
if err == io.EOF {
return errdefs.InvalidParameter(errors.New("got EOF while reading request body"))
}
return errdefs.InvalidParameter(err)
}
rawVersion := r.URL.Query().Get("version")
@@ -471,8 +485,11 @@ func (sr *swarmRouter) getConfigs(ctx context.Context, w http.ResponseWriter, r
func (sr *swarmRouter) createConfig(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
var config types.ConfigSpec
if err := httputils.ReadJSON(r, &config); err != nil {
return err
if err := json.NewDecoder(r.Body).Decode(&config); err != nil {
if err == io.EOF {
return errdefs.InvalidParameter(errors.New("got EOF while reading request body"))
}
return errdefs.InvalidParameter(err)
}
version := httputils.VersionFromContext(ctx)
@@ -510,8 +527,11 @@ func (sr *swarmRouter) getConfig(ctx context.Context, w http.ResponseWriter, r *
func (sr *swarmRouter) updateConfig(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
var config types.ConfigSpec
if err := httputils.ReadJSON(r, &config); err != nil {
return err
if err := json.NewDecoder(r.Body).Decode(&config); err != nil {
if err == io.EOF {
return errdefs.InvalidParameter(errors.New("got EOF while reading request body"))
}
return errdefs.InvalidParameter(err)
}
rawVersion := r.URL.Query().Get("version")

View File

@@ -3,6 +3,7 @@ package swarm // import "github.com/docker/docker/api/server/router/swarm"
import (
"context"
"fmt"
"io"
"net/http"
"github.com/docker/docker/api/server/httputils"
@@ -14,7 +15,7 @@ import (
// swarmLogs takes an http response, request, and selector, and writes the logs
// specified by the selector to the response
func (sr *swarmRouter) swarmLogs(ctx context.Context, w http.ResponseWriter, r *http.Request, selector *backend.LogSelector) error {
func (sr *swarmRouter) swarmLogs(ctx context.Context, w io.Writer, r *http.Request, selector *backend.LogSelector) error {
// Args are validated before the stream starts because when it starts we're
// sending HTTP 200 by writing an empty chunk of data to tell the client that
// daemon is going to stream. By sending this initial HTTP 200 we can't report
@@ -62,11 +63,6 @@ func (sr *swarmRouter) swarmLogs(ctx context.Context, w http.ResponseWriter, r *
return err
}
contentType := basictypes.MediaTypeRawStream
if !tty && versions.GreaterThanOrEqualTo(httputils.VersionFromContext(ctx), "1.42") {
contentType = basictypes.MediaTypeMultiplexedStream
}
w.Header().Set("Content-Type", contentType)
httputils.WriteLogStream(ctx, w, msgs, logsConfig, !tty)
return nil
}
@@ -99,23 +95,4 @@ func adjustForAPIVersion(cliVersion string, service *swarm.ServiceSpec) {
service.TaskTemplate.Placement.MaxReplicas = 0
}
}
if versions.LessThan(cliVersion, "1.41") {
if service.TaskTemplate.ContainerSpec != nil {
// Capabilities and Ulimits for docker swarm services weren't
// supported before API version 1.41
service.TaskTemplate.ContainerSpec.CapabilityAdd = nil
service.TaskTemplate.ContainerSpec.CapabilityDrop = nil
service.TaskTemplate.ContainerSpec.Ulimits = nil
}
if service.TaskTemplate.Resources != nil && service.TaskTemplate.Resources.Limits != nil {
// Limits.Pids not supported before API version 1.41
service.TaskTemplate.Resources.Limits.Pids = 0
}
// jobs were only introduced in API version 1.41. Nil out both Job
// modes; if the service is one of these modes and subsequently has no
// mode, then something down the pipe will throw an error.
service.Mode.ReplicatedJob = nil
service.Mode.GlobalJob = nil
}
}
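
The block removed at the end of adjustForAPIVersion shows the function's general downgrade pattern: for each API version boundary, zero out fields that clients below that version never sent. A condensed sketch, reusing the field names from the hunk:

    import (
        "github.com/docker/docker/api/types/swarm"
        "github.com/docker/docker/api/types/versions"
    )

    func adjustForAPIVersionSketch(cliVersion string, service *swarm.ServiceSpec) {
        if versions.LessThan(cliVersion, "1.41") {
            if cs := service.TaskTemplate.ContainerSpec; cs != nil {
                // Capabilities and Ulimits were introduced in API 1.41.
                cs.CapabilityAdd = nil
                cs.CapabilityDrop = nil
                cs.Ulimits = nil
            }
            if res := service.TaskTemplate.Resources; res != nil && res.Limits != nil {
                res.Limits.Pids = 0 // Limits.Pids is also 1.41+
            }
        }
    }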

View File

@@ -5,7 +5,6 @@ import (
"testing"
"github.com/docker/docker/api/types/swarm"
"github.com/docker/go-units"
)
func TestAdjustForAPIVersion(t *testing.T) {
@@ -40,40 +39,21 @@ func TestAdjustForAPIVersion(t *testing.T) {
ConfigName: "configRuntime",
},
},
Ulimits: []*units.Ulimit{
{
Name: "nofile",
Soft: 100,
Hard: 200,
},
},
},
Placement: &swarm.Placement{
MaxReplicas: 222,
},
Resources: &swarm.ResourceRequirements{
Limits: &swarm.Limit{
Pids: 300,
},
},
},
}
// first, does calling this with a later version correctly NOT strip
// fields? do the later version first, so we can reuse this spec in the
// next test.
adjustForAPIVersion("1.41", spec)
adjustForAPIVersion("1.40", spec)
if !reflect.DeepEqual(spec.TaskTemplate.ContainerSpec.Sysctls, expectedSysctls) {
t.Error("Sysctls was stripped from spec")
}
if spec.TaskTemplate.Resources.Limits.Pids == 0 {
t.Error("PidsLimit was stripped from spec")
}
if spec.TaskTemplate.Resources.Limits.Pids != 300 {
t.Error("PidsLimit did not preserve the value from spec")
}
if spec.TaskTemplate.ContainerSpec.Privileges.CredentialSpec.Config != "someconfig" {
t.Error("CredentialSpec.Config field was stripped from spec")
}
@@ -86,20 +66,12 @@ func TestAdjustForAPIVersion(t *testing.T) {
t.Error("MaxReplicas was stripped from spec")
}
if len(spec.TaskTemplate.ContainerSpec.Ulimits) == 0 {
t.Error("Ulimits were stripped from spec")
}
// next, does calling this with an earlier version correctly strip fields?
adjustForAPIVersion("1.29", spec)
if spec.TaskTemplate.ContainerSpec.Sysctls != nil {
t.Error("Sysctls was not stripped from spec")
}
if spec.TaskTemplate.Resources.Limits.Pids != 0 {
t.Error("PidsLimit was not stripped from spec")
}
if spec.TaskTemplate.ContainerSpec.Privileges.CredentialSpec.Config != "" {
t.Error("CredentialSpec.Config field was not stripped from spec")
}
@@ -112,7 +84,4 @@ func TestAdjustForAPIVersion(t *testing.T) {
t.Error("MaxReplicas was not stripped from spec")
}
if len(spec.TaskTemplate.ContainerSpec.Ulimits) != 0 {
t.Error("Ulimits were not stripped from spec")
}
}

View File

@@ -10,24 +10,12 @@ import (
"github.com/docker/docker/api/types/swarm"
)
// DiskUsageOptions holds parameters for system disk usage query.
type DiskUsageOptions struct {
// Containers controls whether container disk usage should be computed.
Containers bool
// Images controls whether image disk usage should be computed.
Images bool
// Volumes controls whether volume disk usage should be computed.
Volumes bool
}
// Backend is the methods that need to be implemented to provide
// system specific functionality.
type Backend interface {
SystemInfo() *types.Info
SystemInfo() (*types.Info, error)
SystemVersion() types.Version
SystemDiskUsage(ctx context.Context, opts DiskUsageOptions) (*types.DiskUsage, error)
SystemDiskUsage(ctx context.Context) (*types.DiskUsage, error)
SubscribeToEvents(since, until time.Time, ef filters.Args) ([]events.Message, chan interface{})
UnsubscribeFromEvents(chan interface{})
AuthenticateToRegistry(ctx context.Context, authConfig *types.AuthConfig) (string, string, error)
@@ -38,8 +26,3 @@ type Backend interface {
type ClusterBackend interface {
Info() swarm.Info
}
// StatusProvider provides methods to get the swarm status of the current node.
type StatusProvider interface {
Status() string
}
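
A brief usage sketch for the newer Backend shape above: the router opts in to just the pieces it needs instead of always computing everything (backend here stands for any implementation of the interface):

    du, err := backend.SystemDiskUsage(ctx, DiskUsageOptions{
        Containers: true, // skip Images and Volumes when only container usage is wanted
    })
    if err != nil {
        return err
    }
    _ = du.Containers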

View File

@@ -2,7 +2,8 @@ package system // import "github.com/docker/docker/api/server/router/system"
import (
"github.com/docker/docker/api/server/router"
buildkit "github.com/docker/docker/builder/builder-next"
"github.com/docker/docker/builder/builder-next"
"github.com/docker/docker/builder/fscache"
)
// systemRouter provides information about the Docker system overall.
@@ -11,15 +12,17 @@ type systemRouter struct {
backend Backend
cluster ClusterBackend
routes []router.Route
fscache *fscache.FSCache // legacy
builder *buildkit.Builder
features *map[string]bool
}
// NewRouter initializes a new system router
func NewRouter(b Backend, c ClusterBackend, builder *buildkit.Builder, features *map[string]bool) router.Router {
func NewRouter(b Backend, c ClusterBackend, fscache *fscache.FSCache, builder *buildkit.Builder, features *map[string]bool) router.Router {
r := &systemRouter{
backend: b,
cluster: c,
fscache: fscache,
builder: builder,
features: features,
}

View File

@@ -13,11 +13,10 @@ import (
"github.com/docker/docker/api/types/events"
"github.com/docker/docker/api/types/filters"
"github.com/docker/docker/api/types/registry"
"github.com/docker/docker/api/types/swarm"
timetypes "github.com/docker/docker/api/types/time"
"github.com/docker/docker/api/types/versions"
"github.com/docker/docker/pkg/ioutils"
"github.com/pkg/errors"
pkgerrors "github.com/pkg/errors"
"github.com/sirupsen/logrus"
"golang.org/x/sync/errgroup"
)
@@ -35,9 +34,6 @@ func (s *systemRouter) pingHandler(ctx context.Context, w http.ResponseWriter, r
if bv := builderVersion; bv != "" {
w.Header().Set("Builder-Version", string(bv))
}
w.Header().Set("Swarm", s.swarmStatus())
if r.Method == http.MethodHead {
w.Header().Set("Content-Type", "text/plain; charset=utf-8")
w.Header().Set("Content-Length", "0")
@@ -47,25 +43,17 @@ func (s *systemRouter) pingHandler(ctx context.Context, w http.ResponseWriter, r
return err
}
func (s *systemRouter) swarmStatus() string {
if s.cluster != nil {
if p, ok := s.cluster.(StatusProvider); ok {
return p.Status()
}
}
return string(swarm.LocalNodeStateInactive)
}
func (s *systemRouter) getInfo(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
info := s.backend.SystemInfo()
info, err := s.backend.SystemInfo()
if err != nil {
return err
}
if s.cluster != nil {
info.Swarm = s.cluster.Info()
info.Warnings = append(info.Warnings, info.Swarm.Warnings...)
}
version := httputils.VersionFromContext(ctx)
if versions.LessThan(version, "1.25") {
if versions.LessThan(httputils.VersionFromContext(ctx), "1.25") {
// TODO: handle this conversion in engine-api
type oldInfo struct {
*types.Info
@@ -86,7 +74,7 @@ func (s *systemRouter) getInfo(ctx context.Context, w http.ResponseWriter, r *ht
old.SecurityOptions = nameOnlySecurityOptions
return httputils.WriteJSON(w, http.StatusOK, old)
}
if versions.LessThan(version, "1.39") {
if versions.LessThan(httputils.VersionFromContext(ctx), "1.39") {
if info.KernelVersion == "" {
info.KernelVersion = "<unknown>"
}
@@ -94,9 +82,6 @@ func (s *systemRouter) getInfo(ctx context.Context, w http.ResponseWriter, r *ht
info.OperatingSystem = "<unknown>"
}
}
if versions.GreaterThanOrEqualTo(version, "1.42") {
info.KernelMemory = false
}
return httputils.WriteJSON(w, http.StatusOK, info)
}
@@ -107,95 +92,46 @@ func (s *systemRouter) getVersion(ctx context.Context, w http.ResponseWriter, r
}
func (s *systemRouter) getDiskUsage(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
}
version := httputils.VersionFromContext(ctx)
var getContainers, getImages, getVolumes, getBuildCache bool
typeStrs, ok := r.Form["type"]
if versions.LessThan(version, "1.42") || !ok {
getContainers, getImages, getVolumes, getBuildCache = true, true, true, true
} else {
for _, typ := range typeStrs {
switch types.DiskUsageObject(typ) {
case types.ContainerObject:
getContainers = true
case types.ImageObject:
getImages = true
case types.VolumeObject:
getVolumes = true
case types.BuildCacheObject:
getBuildCache = true
default:
return invalidRequestError{Err: fmt.Errorf("unknown object type: %s", typ)}
}
}
}
eg, ctx := errgroup.WithContext(ctx)
var systemDiskUsage *types.DiskUsage
if getContainers || getImages || getVolumes {
eg.Go(func() error {
var err error
systemDiskUsage, err = s.backend.SystemDiskUsage(ctx, DiskUsageOptions{
Containers: getContainers,
Images: getImages,
Volumes: getVolumes,
})
return err
})
}
var du *types.DiskUsage
eg.Go(func() error {
var err error
du, err = s.backend.SystemDiskUsage(ctx)
return err
})
var builderSize int64 // legacy
eg.Go(func() error {
var err error
builderSize, err = s.fscache.DiskUsage(ctx)
if err != nil {
return pkgerrors.Wrap(err, "error getting fscache build cache usage")
}
return nil
})
var buildCache []*types.BuildCache
if getBuildCache {
eg.Go(func() error {
var err error
buildCache, err = s.builder.DiskUsage(ctx)
if err != nil {
return errors.Wrap(err, "error getting build cache usage")
}
if buildCache == nil {
// Ensure empty `BuildCache` field is represented as an empty JSON array (`[]`)
// instead of `null` to be consistent with `Images`, `Containers` etc.
buildCache = []*types.BuildCache{}
}
return nil
})
}
eg.Go(func() error {
var err error
buildCache, err = s.builder.DiskUsage(ctx)
if err != nil {
return pkgerrors.Wrap(err, "error getting build cache usage")
}
return nil
})
if err := eg.Wait(); err != nil {
return err
}
var builderSize int64
if versions.LessThan(version, "1.42") {
for _, b := range buildCache {
builderSize += b.Size
// Parents field was added in API 1.42 to replace the Parent field.
b.Parents = nil
}
}
if versions.GreaterThanOrEqualTo(version, "1.42") {
for _, b := range buildCache {
// Parent field is deprecated in API v1.42 and up, as it is deprecated
// in BuildKit. Empty the field to omit it in the API response.
b.Parent = "" //nolint:staticcheck // ignore SA1019 (Parent field is deprecated)
}
for _, b := range buildCache {
builderSize += b.Size
}
du := types.DiskUsage{
BuildCache: buildCache,
BuilderSize: builderSize,
}
if systemDiskUsage != nil {
du.LayersSize = systemDiskUsage.LayersSize
du.Images = systemDiskUsage.Images
du.Containers = systemDiskUsage.Containers
du.Volumes = systemDiskUsage.Volumes
}
du.BuilderSize = builderSize
du.BuildCache = buildCache
return httputils.WriteJSON(w, http.StatusOK, du)
}
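
The handler above fans its expensive probes out with golang.org/x/sync/errgroup: each goroutine fills its own variable, and eg.Wait() returns the first error while cancelling the shared context. A reduced sketch of the pattern (backend and builder stand for the router's dependencies):

    eg, ctx := errgroup.WithContext(ctx)

    var du *types.DiskUsage
    eg.Go(func() error {
        var err error
        du, err = backend.SystemDiskUsage(ctx) // ctx is cancelled if the sibling fails
        return err
    })

    var buildCache []*types.BuildCache
    eg.Go(func() error {
        var err error
        buildCache, err = builder.DiskUsage(ctx)
        return err
    })

    if err := eg.Wait(); err != nil {
        return err // first non-nil error from either goroutine
    }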
@@ -238,9 +174,7 @@ func (s *systemRouter) getEvents(ctx context.Context, w http.ResponseWriter, r *
if !onlyPastEvents {
dur := until.Sub(now)
timer := time.NewTimer(dur)
defer timer.Stop()
timeout = timer.C
timeout = time.After(dur)
}
}
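
The timer change in the final hunk is a general Go point rather than anything moby-specific: a <-time.After(dur) channel cannot be stopped, so on an early return its timer lingers until dur elapses, whereas NewTimer plus a deferred Stop releases it immediately. In isolation:

    timer := time.NewTimer(dur)
    defer timer.Stop() // released as soon as the handler returns

    select {
    case <-timer.C:
        // timeout reached
    case <-ctx.Done():
        // client went away; the deferred Stop frees the timer now, while
        // time.After would have kept it alive until it fired
    }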

View File

@@ -7,28 +7,14 @@ import (
// TODO return types need to be refactored into pkg
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/filters"
"github.com/docker/docker/api/types/volume"
)
// Backend is the methods that need to be implemented to provide
// volume specific functionality
type Backend interface {
List(ctx context.Context, filter filters.Args) ([]*volume.Volume, []string, error)
Get(ctx context.Context, name string, opts ...opts.GetOption) (*volume.Volume, error)
Create(ctx context.Context, name, driverName string, opts ...opts.CreateOption) (*volume.Volume, error)
List(ctx context.Context, filter filters.Args) ([]*types.Volume, []string, error)
Get(ctx context.Context, name string, opts ...opts.GetOption) (*types.Volume, error)
Create(ctx context.Context, name, driverName string, opts ...opts.CreateOption) (*types.Volume, error)
Remove(ctx context.Context, name string, opts ...opts.RemoveOption) error
Prune(ctx context.Context, pruneFilters filters.Args) (*types.VolumesPruneReport, error)
}
// ClusterBackend is the backend used for Swarm Cluster Volumes. Regular
// volumes go through the volume service, but to avoid across-dependency
// between the cluster package and the volume package, we simply provide two
// backends here.
type ClusterBackend interface {
GetVolume(nameOrID string) (volume.Volume, error)
GetVolumes(options volume.ListOptions) ([]*volume.Volume, error)
CreateVolume(volume volume.CreateOptions) (*volume.Volume, error)
RemoveVolume(nameOrID string, force bool) error
UpdateVolume(nameOrID string, version uint64, volume volume.UpdateOptions) error
IsManager() bool
}

View File

@@ -5,15 +5,13 @@ import "github.com/docker/docker/api/server/router"
// volumeRouter is a router to talk with the volumes controller
type volumeRouter struct {
backend Backend
cluster ClusterBackend
routes []router.Route
}
// NewRouter initializes a new volume router
func NewRouter(b Backend, cb ClusterBackend) router.Router {
func NewRouter(b Backend) router.Router {
r := &volumeRouter{
backend: b,
cluster: cb,
}
r.initRoutes()
return r
@@ -32,8 +30,6 @@ func (r *volumeRouter) initRoutes() {
// POST
router.NewPostRoute("/volumes/create", r.postVolumesCreate),
router.NewPostRoute("/volumes/prune", r.postVolumesPrune),
// PUT
router.NewPutRoute("/volumes/{name:.*}", r.putVolumesUpdate),
// DELETE
router.NewDeleteRoute("/volumes/{name:.*}", r.deleteVolumes),
}

View File

@@ -2,24 +2,16 @@ package volume // import "github.com/docker/docker/api/server/router/volume"
import (
"context"
"fmt"
"encoding/json"
"io"
"net/http"
"strconv"
"github.com/docker/docker/api/server/httputils"
"github.com/docker/docker/api/types/filters"
"github.com/docker/docker/api/types/versions"
"github.com/docker/docker/api/types/volume"
volumetypes "github.com/docker/docker/api/types/volume"
"github.com/docker/docker/errdefs"
"github.com/docker/docker/volume/service/opts"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
const (
// clusterVolumesVersion defines the API version in which swarm cluster volume
// functionality was introduced; avoids the use of magic numbers.
clusterVolumesVersion = "1.42"
)
func (v *volumeRouter) getVolumesList(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
@@ -29,62 +21,25 @@ func (v *volumeRouter) getVolumesList(ctx context.Context, w http.ResponseWriter
filters, err := filters.FromJSON(r.Form.Get("filters"))
if err != nil {
return errors.Wrap(err, "error reading volume filters")
return errdefs.InvalidParameter(errors.Wrap(err, "error reading volume filters"))
}
volumes, warnings, err := v.backend.List(ctx, filters)
if err != nil {
return err
}
version := httputils.VersionFromContext(ctx)
if versions.GreaterThanOrEqualTo(version, clusterVolumesVersion) && v.cluster.IsManager() {
clusterVolumes, swarmErr := v.cluster.GetVolumes(volume.ListOptions{Filters: filters})
if swarmErr != nil {
// if there is a swarm error, we may not want to error out right
// away. the local list probably worked. instead, let's do what we
// do if there's a bad driver while trying to list: add the error
// to the warnings. don't do this if swarm is not initialized.
warnings = append(warnings, swarmErr.Error())
}
// add the cluster volumes to the return
volumes = append(volumes, clusterVolumes...)
}
return httputils.WriteJSON(w, http.StatusOK, &volume.ListResponse{Volumes: volumes, Warnings: warnings})
return httputils.WriteJSON(w, http.StatusOK, &volumetypes.VolumeListOKBody{Volumes: volumes, Warnings: warnings})
}
func (v *volumeRouter) getVolumeByName(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
}
version := httputils.VersionFromContext(ctx)
// re: volume name duplication
//
// we prefer to get volumes locally before attempting to get them from the
// cluster. Local volumes can only be looked up by name, but cluster
// volumes can also be looked up by ID.
vol, err := v.backend.Get(ctx, vars["name"], opts.WithGetResolveStatus)
// if the volume is not found in the regular volume backend, and the client
// is using an API version greater than 1.42 (when cluster volumes were
// introduced), then check if Swarm has the volume.
if errdefs.IsNotFound(err) && versions.GreaterThanOrEqualTo(version, clusterVolumesVersion) && v.cluster.IsManager() {
swarmVol, err := v.cluster.GetVolume(vars["name"])
// if swarm returns an error and that error indicates that swarm is not
// initialized, return original NotFound error. Otherwise, we'd return
// a weird swarm unavailable error on non-swarm engines.
if err != nil {
return err
}
vol = &swarmVol
} else if err != nil {
// otherwise, if this isn't NotFound, or this isn't a high enough version,
// just return the error by itself.
volume, err := v.backend.Get(ctx, vars["name"], opts.WithGetResolveStatus)
if err != nil {
return err
}
return httputils.WriteJSON(w, http.StatusOK, vol)
return httputils.WriteJSON(w, http.StatusOK, volume)
}
func (v *volumeRouter) postVolumesCreate(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
@@ -92,65 +47,23 @@ func (v *volumeRouter) postVolumesCreate(ctx context.Context, w http.ResponseWri
return err
}
var req volume.CreateOptions
if err := httputils.ReadJSON(r, &req); err != nil {
if err := httputils.CheckForJSON(r); err != nil {
return err
}
var (
vol *volume.Volume
err error
version = httputils.VersionFromContext(ctx)
)
// if the ClusterVolumeSpec is filled in, then this is a cluster volume
// and is created through the swarm cluster volume backend.
//
// re: volume name duplication
//
// As it happens, there is no good way to prevent duplication of a volume
// name between local and cluster volumes. This is because Swarm volumes
// can be created from any manager node, bypassing most of the protections
// we could put into the engine side.
//
// Instead, we will allow creating a volume with a duplicate name, which
// should not break anything.
if req.ClusterVolumeSpec != nil && versions.GreaterThanOrEqualTo(version, clusterVolumesVersion) {
logrus.Debug("using cluster volume")
vol, err = v.cluster.CreateVolume(req)
} else {
logrus.Debug("using regular volume")
vol, err = v.backend.Create(ctx, req.Name, req.Driver, opts.WithCreateOptions(req.DriverOpts), opts.WithCreateLabels(req.Labels))
}
if err != nil {
return err
}
return httputils.WriteJSON(w, http.StatusCreated, vol)
}
func (v *volumeRouter) putVolumesUpdate(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if !v.cluster.IsManager() {
return errdefs.Unavailable(errors.New("volume update only valid for cluster volumes, but swarm is unavailable"))
}
if err := httputils.ParseForm(r); err != nil {
return err
}
rawVersion := r.URL.Query().Get("version")
version, err := strconv.ParseUint(rawVersion, 10, 64)
if err != nil {
err = fmt.Errorf("invalid swarm object version '%s': %v", rawVersion, err)
var req volumetypes.VolumeCreateBody
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
if err == io.EOF {
return errdefs.InvalidParameter(errors.New("got EOF while reading request body"))
}
return errdefs.InvalidParameter(err)
}
var req volume.UpdateOptions
if err := httputils.ReadJSON(r, &req); err != nil {
volume, err := v.backend.Create(ctx, req.Name, req.Driver, opts.WithCreateOptions(req.DriverOpts), opts.WithCreateLabels(req.Labels))
if err != nil {
return err
}
return v.cluster.UpdateVolume(vars["name"], version, req)
return httputils.WriteJSON(w, http.StatusCreated, volume)
}
func (v *volumeRouter) deleteVolumes(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
@@ -158,26 +71,9 @@ func (v *volumeRouter) deleteVolumes(ctx context.Context, w http.ResponseWriter,
return err
}
force := httputils.BoolValue(r, "force")
version := httputils.VersionFromContext(ctx)
err := v.backend.Remove(ctx, vars["name"], opts.WithPurgeOnError(force))
// when a removal is forced, if the volume does not exist, no error will be
// returned. this means that to ensure forcing works on swarm volumes as
// well, we should always also force remove against the cluster.
if err != nil || force {
if versions.GreaterThanOrEqualTo(version, clusterVolumesVersion) && v.cluster.IsManager() {
if errdefs.IsNotFound(err) || force {
err := v.cluster.RemoveVolume(vars["name"], force)
if err != nil {
return err
}
}
} else {
return err
}
if err := v.backend.Remove(ctx, vars["name"], opts.WithPurgeOnError(force)); err != nil {
return err
}
w.WriteHeader(http.StatusNoContent)
return nil
}
@@ -192,12 +88,6 @@ func (v *volumeRouter) postVolumesPrune(ctx context.Context, w http.ResponseWrit
return err
}
// API version 1.42 changes behavior where prune should only prune anonymous volumes.
// To keep older API behavior working, we need to add this filter option to consider all (local) volumes for pruning, not just anonymous ones.
if versions.LessThan(httputils.VersionFromContext(ctx), "1.42") {
pruneFilters.Add("all", "true")
}
pruneReport, err := v.backend.Prune(ctx, pruneFilters)
if err != nil {
return err
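
The lookup logic deleted from getVolumeByName above follows a local-first, cluster-fallback shape: prefer the local volume service, and only on NotFound, with a 1.42+ client on a manager node, consult Swarm. A condensed sketch, with names taken from the hunk and error handling simplified:

    vol, err := backend.Get(ctx, name, opts.WithGetResolveStatus)
    if errdefs.IsNotFound(err) &&
        versions.GreaterThanOrEqualTo(version, clusterVolumesVersion) &&
        cluster.IsManager() {
        // Not a local volume; for new clients on a manager it may be a cluster volume.
        swarmVol, cErr := cluster.GetVolume(name)
        if cErr != nil {
            return cErr
        }
        vol = &swarmVol
    } else if err != nil {
        return err
    }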

View File

@@ -1,760 +0,0 @@
package volume
import (
"bytes"
"context"
"encoding/json"
"fmt"
"net/http/httptest"
"testing"
"gotest.tools/v3/assert"
"github.com/docker/docker/api/server/httputils"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/filters"
"github.com/docker/docker/api/types/volume"
"github.com/docker/docker/errdefs"
"github.com/docker/docker/volume/service/opts"
)
func callGetVolume(v *volumeRouter, name string) (*httptest.ResponseRecorder, error) {
ctx := context.WithValue(context.Background(), httputils.APIVersionKey{}, clusterVolumesVersion)
vars := map[string]string{"name": name}
req := httptest.NewRequest("GET", fmt.Sprintf("/volumes/%s", name), nil)
resp := httptest.NewRecorder()
err := v.getVolumeByName(ctx, resp, req, vars)
return resp, err
}
func callListVolumes(v *volumeRouter) (*httptest.ResponseRecorder, error) {
ctx := context.WithValue(context.Background(), httputils.APIVersionKey{}, clusterVolumesVersion)
vars := map[string]string{}
req := httptest.NewRequest("GET", "/volumes", nil)
resp := httptest.NewRecorder()
err := v.getVolumesList(ctx, resp, req, vars)
return resp, err
}
func TestGetVolumeByNameNotFoundNoSwarm(t *testing.T) {
v := &volumeRouter{
backend: &fakeVolumeBackend{},
cluster: &fakeClusterBackend{},
}
_, err := callGetVolume(v, "notReal")
assert.Assert(t, err != nil)
assert.Assert(t, errdefs.IsNotFound(err))
}
func TestGetVolumeByNameNotFoundNotManager(t *testing.T) {
v := &volumeRouter{
backend: &fakeVolumeBackend{},
cluster: &fakeClusterBackend{swarm: true},
}
_, err := callGetVolume(v, "notReal")
assert.Assert(t, err != nil)
assert.Assert(t, errdefs.IsNotFound(err))
}
func TestGetVolumeByNameNotFound(t *testing.T) {
v := &volumeRouter{
backend: &fakeVolumeBackend{},
cluster: &fakeClusterBackend{swarm: true, manager: true},
}
_, err := callGetVolume(v, "notReal")
assert.Assert(t, err != nil)
assert.Assert(t, errdefs.IsNotFound(err))
}
func TestGetVolumeByNameFoundRegular(t *testing.T) {
v := &volumeRouter{
backend: &fakeVolumeBackend{
volumes: map[string]*volume.Volume{
"volume1": {
Name: "volume1",
},
},
},
cluster: &fakeClusterBackend{swarm: true, manager: true},
}
_, err := callGetVolume(v, "volume1")
assert.NilError(t, err)
}
func TestGetVolumeByNameFoundSwarm(t *testing.T) {
v := &volumeRouter{
backend: &fakeVolumeBackend{},
cluster: &fakeClusterBackend{
swarm: true,
manager: true,
volumes: map[string]*volume.Volume{
"volume1": {
Name: "volume1",
},
},
},
}
_, err := callGetVolume(v, "volume1")
assert.NilError(t, err)
}
func TestListVolumes(t *testing.T) {
v := &volumeRouter{
backend: &fakeVolumeBackend{
volumes: map[string]*volume.Volume{
"v1": {Name: "v1"},
"v2": {Name: "v2"},
},
},
cluster: &fakeClusterBackend{
swarm: true,
manager: true,
volumes: map[string]*volume.Volume{
"v3": {Name: "v3"},
"v4": {Name: "v4"},
},
},
}
resp, err := callListVolumes(v)
assert.NilError(t, err)
d := json.NewDecoder(resp.Result().Body)
respVols := volume.ListResponse{}
assert.NilError(t, d.Decode(&respVols))
assert.Assert(t, respVols.Volumes != nil)
assert.Equal(t, len(respVols.Volumes), 4, "volumes %v", respVols.Volumes)
}
func TestListVolumesNoSwarm(t *testing.T) {
v := &volumeRouter{
backend: &fakeVolumeBackend{
volumes: map[string]*volume.Volume{
"v1": {Name: "v1"},
"v2": {Name: "v2"},
},
},
cluster: &fakeClusterBackend{},
}
_, err := callListVolumes(v)
assert.NilError(t, err)
}
func TestListVolumesNoManager(t *testing.T) {
v := &volumeRouter{
backend: &fakeVolumeBackend{
volumes: map[string]*volume.Volume{
"v1": {Name: "v1"},
"v2": {Name: "v2"},
},
},
cluster: &fakeClusterBackend{swarm: true},
}
resp, err := callListVolumes(v)
assert.NilError(t, err)
d := json.NewDecoder(resp.Result().Body)
respVols := volume.ListResponse{}
assert.NilError(t, d.Decode(&respVols))
assert.Equal(t, len(respVols.Volumes), 2)
assert.Equal(t, len(respVols.Warnings), 0)
}
func TestCreateRegularVolume(t *testing.T) {
b := &fakeVolumeBackend{}
c := &fakeClusterBackend{
swarm: true,
manager: true,
}
v := &volumeRouter{
backend: b,
cluster: c,
}
volumeCreate := volume.CreateOptions{
Name: "vol1",
Driver: "foodriver",
}
buf := bytes.Buffer{}
e := json.NewEncoder(&buf)
e.Encode(volumeCreate)
ctx := context.WithValue(context.Background(), httputils.APIVersionKey{}, clusterVolumesVersion)
req := httptest.NewRequest("POST", "/volumes/create", &buf)
req.Header.Add("Content-Type", "application/json")
resp := httptest.NewRecorder()
err := v.postVolumesCreate(ctx, resp, req, nil)
assert.NilError(t, err)
respVolume := volume.Volume{}
assert.NilError(t, json.NewDecoder(resp.Result().Body).Decode(&respVolume))
assert.Equal(t, respVolume.Name, "vol1")
assert.Equal(t, respVolume.Driver, "foodriver")
assert.Equal(t, 1, len(b.volumes))
assert.Equal(t, 0, len(c.volumes))
}
func TestCreateSwarmVolumeNoSwarm(t *testing.T) {
b := &fakeVolumeBackend{}
c := &fakeClusterBackend{}
v := &volumeRouter{
backend: b,
cluster: c,
}
volumeCreate := volume.CreateOptions{
ClusterVolumeSpec: &volume.ClusterVolumeSpec{},
Name: "volCluster",
Driver: "someCSI",
}
buf := bytes.Buffer{}
json.NewEncoder(&buf).Encode(volumeCreate)
ctx := context.WithValue(context.Background(), httputils.APIVersionKey{}, clusterVolumesVersion)
req := httptest.NewRequest("POST", "/volumes/create", &buf)
req.Header.Add("Content-Type", "application/json")
resp := httptest.NewRecorder()
err := v.postVolumesCreate(ctx, resp, req, nil)
assert.Assert(t, err != nil)
assert.Assert(t, errdefs.IsUnavailable(err))
}
func TestCreateSwarmVolumeNotManager(t *testing.T) {
b := &fakeVolumeBackend{}
c := &fakeClusterBackend{swarm: true}
v := &volumeRouter{
backend: b,
cluster: c,
}
volumeCreate := volume.CreateOptions{
ClusterVolumeSpec: &volume.ClusterVolumeSpec{},
Name: "volCluster",
Driver: "someCSI",
}
buf := bytes.Buffer{}
json.NewEncoder(&buf).Encode(volumeCreate)
ctx := context.WithValue(context.Background(), httputils.APIVersionKey{}, clusterVolumesVersion)
req := httptest.NewRequest("POST", "/volumes/create", &buf)
req.Header.Add("Content-Type", "application/json")
resp := httptest.NewRecorder()
err := v.postVolumesCreate(ctx, resp, req, nil)
assert.Assert(t, err != nil)
assert.Assert(t, errdefs.IsUnavailable(err))
}
func TestCreateVolumeCluster(t *testing.T) {
b := &fakeVolumeBackend{}
c := &fakeClusterBackend{
swarm: true,
manager: true,
}
v := &volumeRouter{
backend: b,
cluster: c,
}
volumeCreate := volume.CreateOptions{
ClusterVolumeSpec: &volume.ClusterVolumeSpec{},
Name: "volCluster",
Driver: "someCSI",
}
buf := bytes.Buffer{}
json.NewEncoder(&buf).Encode(volumeCreate)
ctx := context.WithValue(context.Background(), httputils.APIVersionKey{}, clusterVolumesVersion)
req := httptest.NewRequest("POST", "/volumes/create", &buf)
req.Header.Add("Content-Type", "application/json")
resp := httptest.NewRecorder()
err := v.postVolumesCreate(ctx, resp, req, nil)
assert.NilError(t, err)
respVolume := volume.Volume{}
assert.NilError(t, json.NewDecoder(resp.Result().Body).Decode(&respVolume))
assert.Equal(t, respVolume.Name, "volCluster")
assert.Equal(t, respVolume.Driver, "someCSI")
assert.Equal(t, 0, len(b.volumes))
assert.Equal(t, 1, len(c.volumes))
}
func TestUpdateVolume(t *testing.T) {
b := &fakeVolumeBackend{}
c := &fakeClusterBackend{
swarm: true,
manager: true,
volumes: map[string]*volume.Volume{
"vol1": {
Name: "vo1",
ClusterVolume: &volume.ClusterVolume{
ID: "vol1",
},
},
},
}
v := &volumeRouter{
backend: b,
cluster: c,
}
volumeUpdate := volume.UpdateOptions{
Spec: &volume.ClusterVolumeSpec{},
}
buf := bytes.Buffer{}
json.NewEncoder(&buf).Encode(volumeUpdate)
ctx := context.WithValue(context.Background(), httputils.APIVersionKey{}, clusterVolumesVersion)
req := httptest.NewRequest("POST", "/volumes/vol1/update?version=0", &buf)
req.Header.Add("Content-Type", "application/json")
resp := httptest.NewRecorder()
err := v.putVolumesUpdate(ctx, resp, req, map[string]string{"name": "vol1"})
assert.NilError(t, err)
assert.Equal(t, c.volumes["vol1"].ClusterVolume.Meta.Version.Index, uint64(1))
}
func TestUpdateVolumeNoSwarm(t *testing.T) {
b := &fakeVolumeBackend{}
c := &fakeClusterBackend{}
v := &volumeRouter{
backend: b,
cluster: c,
}
volumeUpdate := volume.UpdateOptions{
Spec: &volume.ClusterVolumeSpec{},
}
buf := bytes.Buffer{}
json.NewEncoder(&buf).Encode(volumeUpdate)
ctx := context.WithValue(context.Background(), httputils.APIVersionKey{}, clusterVolumesVersion)
req := httptest.NewRequest("POST", "/volumes/vol1/update?version=0", &buf)
req.Header.Add("Content-Type", "application/json")
resp := httptest.NewRecorder()
err := v.putVolumesUpdate(ctx, resp, req, map[string]string{"name": "vol1"})
assert.Assert(t, err != nil)
assert.Assert(t, errdefs.IsUnavailable(err))
}
func TestUpdateVolumeNotFound(t *testing.T) {
b := &fakeVolumeBackend{}
c := &fakeClusterBackend{
swarm: true,
manager: true,
volumes: map[string]*volume.Volume{},
}
v := &volumeRouter{
backend: b,
cluster: c,
}
volumeUpdate := volume.UpdateOptions{
Spec: &volume.ClusterVolumeSpec{},
}
buf := bytes.Buffer{}
json.NewEncoder(&buf).Encode(volumeUpdate)
ctx := context.WithValue(context.Background(), httputils.APIVersionKey{}, clusterVolumesVersion)
req := httptest.NewRequest("POST", "/volumes/vol1/update?version=0", &buf)
req.Header.Add("Content-Type", "application/json")
resp := httptest.NewRecorder()
err := v.putVolumesUpdate(ctx, resp, req, map[string]string{"name": "vol1"})
assert.Assert(t, err != nil)
assert.Assert(t, errdefs.IsNotFound(err))
}
func TestVolumeRemove(t *testing.T) {
b := &fakeVolumeBackend{
volumes: map[string]*volume.Volume{
"vol1": {
Name: "vol1",
},
},
}
c := &fakeClusterBackend{swarm: true, manager: true}
v := &volumeRouter{
backend: b,
cluster: c,
}
ctx := context.WithValue(context.Background(), httputils.APIVersionKey{}, clusterVolumesVersion)
req := httptest.NewRequest("DELETE", "/volumes/vol1", nil)
resp := httptest.NewRecorder()
err := v.deleteVolumes(ctx, resp, req, map[string]string{"name": "vol1"})
assert.NilError(t, err)
assert.Equal(t, len(b.volumes), 0)
}
func TestVolumeRemoveSwarm(t *testing.T) {
b := &fakeVolumeBackend{}
c := &fakeClusterBackend{
swarm: true,
manager: true,
volumes: map[string]*volume.Volume{
"vol1": {
Name: "vol1",
ClusterVolume: &volume.ClusterVolume{},
},
},
}
v := &volumeRouter{
backend: b,
cluster: c,
}
ctx := context.WithValue(context.Background(), httputils.APIVersionKey{}, clusterVolumesVersion)
req := httptest.NewRequest("DELETE", "/volumes/vol1", nil)
resp := httptest.NewRecorder()
err := v.deleteVolumes(ctx, resp, req, map[string]string{"name": "vol1"})
assert.NilError(t, err)
assert.Equal(t, len(c.volumes), 0)
}
func TestVolumeRemoveNotFoundNoSwarm(t *testing.T) {
b := &fakeVolumeBackend{}
c := &fakeClusterBackend{}
v := &volumeRouter{
backend: b,
cluster: c,
}
ctx := context.WithValue(context.Background(), httputils.APIVersionKey{}, clusterVolumesVersion)
req := httptest.NewRequest("DELETE", "/volumes/vol1", nil)
resp := httptest.NewRecorder()
err := v.deleteVolumes(ctx, resp, req, map[string]string{"name": "vol1"})
assert.Assert(t, err != nil)
assert.Assert(t, errdefs.IsNotFound(err), err.Error())
}
func TestVolumeRemoveNotFoundNoManager(t *testing.T) {
b := &fakeVolumeBackend{}
c := &fakeClusterBackend{swarm: true}
v := &volumeRouter{
backend: b,
cluster: c,
}
ctx := context.WithValue(context.Background(), httputils.APIVersionKey{}, clusterVolumesVersion)
req := httptest.NewRequest("DELETE", "/volumes/vol1", nil)
resp := httptest.NewRecorder()
err := v.deleteVolumes(ctx, resp, req, map[string]string{"name": "vol1"})
assert.Assert(t, err != nil)
assert.Assert(t, errdefs.IsNotFound(err))
}
func TestVolumeRemoveFoundNoSwarm(t *testing.T) {
b := &fakeVolumeBackend{
volumes: map[string]*volume.Volume{
"vol1": {
Name: "vol1",
},
},
}
c := &fakeClusterBackend{}
v := &volumeRouter{
backend: b,
cluster: c,
}
ctx := context.WithValue(context.Background(), httputils.APIVersionKey{}, clusterVolumesVersion)
req := httptest.NewRequest("DELETE", "/volumes/vol1", nil)
resp := httptest.NewRecorder()
err := v.deleteVolumes(ctx, resp, req, map[string]string{"name": "vol1"})
assert.NilError(t, err)
assert.Equal(t, len(b.volumes), 0)
}
func TestVolumeRemoveNoSwarmInUse(t *testing.T) {
b := &fakeVolumeBackend{
volumes: map[string]*volume.Volume{
"inuse": {
Name: "inuse",
},
},
}
c := &fakeClusterBackend{}
v := &volumeRouter{
backend: b,
cluster: c,
}
ctx := context.WithValue(context.Background(), httputils.APIVersionKey{}, clusterVolumesVersion)
req := httptest.NewRequest("DELETE", "/volumes/inuse", nil)
resp := httptest.NewRecorder()
err := v.deleteVolumes(ctx, resp, req, map[string]string{"name": "inuse"})
assert.Assert(t, err != nil)
assert.Assert(t, errdefs.IsConflict(err))
}
func TestVolumeRemoveSwarmForce(t *testing.T) {
b := &fakeVolumeBackend{}
c := &fakeClusterBackend{
swarm: true,
manager: true,
volumes: map[string]*volume.Volume{
"vol1": {
Name: "vol1",
ClusterVolume: &volume.ClusterVolume{},
Options: map[string]string{"mustforce": "yes"},
},
},
}
v := &volumeRouter{
backend: b,
cluster: c,
}
ctx := context.WithValue(context.Background(), httputils.APIVersionKey{}, clusterVolumesVersion)
req := httptest.NewRequest("DELETE", "/volumes/vol1", nil)
resp := httptest.NewRecorder()
err := v.deleteVolumes(ctx, resp, req, map[string]string{"name": "vol1"})
assert.Assert(t, err != nil)
assert.Assert(t, errdefs.IsConflict(err))
ctx = context.WithValue(context.Background(), httputils.APIVersionKey{}, clusterVolumesVersion)
req = httptest.NewRequest("DELETE", "/volumes/vol1?force=1", nil)
resp = httptest.NewRecorder()
err = v.deleteVolumes(ctx, resp, req, map[string]string{"name": "vol1"})
assert.NilError(t, err)
assert.Equal(t, len(b.volumes), 0)
assert.Equal(t, len(c.volumes), 0)
}
type fakeVolumeBackend struct {
volumes map[string]*volume.Volume
}
func (b *fakeVolumeBackend) List(_ context.Context, _ filters.Args) ([]*volume.Volume, []string, error) {
volumes := []*volume.Volume{}
for _, v := range b.volumes {
volumes = append(volumes, v)
}
return volumes, nil, nil
}
func (b *fakeVolumeBackend) Get(_ context.Context, name string, _ ...opts.GetOption) (*volume.Volume, error) {
if v, ok := b.volumes[name]; ok {
return v, nil
}
return nil, errdefs.NotFound(fmt.Errorf("volume %s not found", name))
}
func (b *fakeVolumeBackend) Create(_ context.Context, name, driverName string, _ ...opts.CreateOption) (*volume.Volume, error) {
if _, ok := b.volumes[name]; ok {
// TODO(dperny): return appropriate error type
return nil, fmt.Errorf("already exists")
}
v := &volume.Volume{
Name: name,
Driver: driverName,
}
if b.volumes == nil {
b.volumes = map[string]*volume.Volume{
name: v,
}
} else {
b.volumes[name] = v
}
return v, nil
}
func (b *fakeVolumeBackend) Remove(_ context.Context, name string, o ...opts.RemoveOption) error {
removeOpts := &opts.RemoveConfig{}
for _, opt := range o {
opt(removeOpts)
}
if v, ok := b.volumes[name]; !ok {
if !removeOpts.PurgeOnError {
return errdefs.NotFound(fmt.Errorf("volume %s not found", name))
}
} else if v.Name == "inuse" {
return errdefs.Conflict(fmt.Errorf("volume in use"))
}
delete(b.volumes, name)
return nil
}
func (b *fakeVolumeBackend) Prune(_ context.Context, _ filters.Args) (*types.VolumesPruneReport, error) {
return nil, nil
}
type fakeClusterBackend struct {
swarm bool
manager bool
idCount int
volumes map[string]*volume.Volume
}
func (c *fakeClusterBackend) checkSwarm() error {
if !c.swarm {
return errdefs.Unavailable(fmt.Errorf("this node is not a swarm manager. Use \"docker swarm init\" or \"docker swarm join\" to connect this node to swarm and try again"))
} else if !c.manager {
return errdefs.Unavailable(fmt.Errorf("this node is not a swarm manager. Worker nodes can't be used to view or modify cluster state. Please run this command on a manager node or promote the current node to a manager"))
}
return nil
}
func (c *fakeClusterBackend) IsManager() bool {
return c.swarm && c.manager
}
func (c *fakeClusterBackend) GetVolume(nameOrID string) (volume.Volume, error) {
if err := c.checkSwarm(); err != nil {
return volume.Volume{}, err
}
if v, ok := c.volumes[nameOrID]; ok {
return *v, nil
}
return volume.Volume{}, errdefs.NotFound(fmt.Errorf("volume %s not found", nameOrID))
}
func (c *fakeClusterBackend) GetVolumes(options volume.ListOptions) ([]*volume.Volume, error) {
if err := c.checkSwarm(); err != nil {
return nil, err
}
volumes := []*volume.Volume{}
for _, v := range c.volumes {
volumes = append(volumes, v)
}
return volumes, nil
}
func (c *fakeClusterBackend) CreateVolume(volumeCreate volume.CreateOptions) (*volume.Volume, error) {
if err := c.checkSwarm(); err != nil {
return nil, err
}
if _, ok := c.volumes[volumeCreate.Name]; ok {
// TODO(dperny): return appropriate already exists error
return nil, fmt.Errorf("already exists")
}
v := &volume.Volume{
Name: volumeCreate.Name,
Driver: volumeCreate.Driver,
Labels: volumeCreate.Labels,
Options: volumeCreate.DriverOpts,
Scope: "global",
}
v.ClusterVolume = &volume.ClusterVolume{
ID: fmt.Sprintf("cluster_%d", c.idCount),
Spec: *volumeCreate.ClusterVolumeSpec,
}
c.idCount = c.idCount + 1
if c.volumes == nil {
c.volumes = map[string]*volume.Volume{
v.Name: v,
}
} else {
c.volumes[v.Name] = v
}
return v, nil
}
func (c *fakeClusterBackend) RemoveVolume(nameOrID string, force bool) error {
if err := c.checkSwarm(); err != nil {
return err
}
v, ok := c.volumes[nameOrID]
if !ok {
return errdefs.NotFound(fmt.Errorf("volume %s not found", nameOrID))
}
if _, mustforce := v.Options["mustforce"]; mustforce && !force {
return errdefs.Conflict(fmt.Errorf("volume %s must be force removed", nameOrID))
}
delete(c.volumes, nameOrID)
return nil
}
func (c *fakeClusterBackend) UpdateVolume(nameOrID string, version uint64, _ volume.UpdateOptions) error {
if err := c.checkSwarm(); err != nil {
return err
}
if v, ok := c.volumes[nameOrID]; ok {
if v.ClusterVolume.Meta.Version.Index != version {
return fmt.Errorf("wrong version")
}
v.ClusterVolume.Meta.Version.Index = v.ClusterVolume.Meta.Version.Index + 1
// for testing, we don't actually need to change anything about the
// volume object. let's just increment the version so we can see the
// call happened.
} else {
return errdefs.NotFound(fmt.Errorf("volume %q not found", nameOrID))
}
return nil
}

View File

@@ -0,0 +1,30 @@
package server // import "github.com/docker/docker/api/server"
import (
"net/http"
"sync"
"github.com/gorilla/mux"
)
// routerSwapper is an http.Handler that allows you to swap
// mux routers.
type routerSwapper struct {
mu sync.Mutex
router *mux.Router
}
// Swap changes the old router with the new one.
func (rs *routerSwapper) Swap(newRouter *mux.Router) {
rs.mu.Lock()
rs.router = newRouter
rs.mu.Unlock()
}
// ServeHTTP makes the routerSwapper implement the http.Handler interface.
func (rs *routerSwapper) ServeHTTP(w http.ResponseWriter, r *http.Request) {
rs.mu.Lock()
router := rs.router
rs.mu.Unlock()
router.ServeHTTP(w, r)
}
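
A small usage sketch for the new file above: requests keep flowing through ServeHTTP while a rebuilt gorilla/mux router is swapped in atomically.

    rs := &routerSwapper{router: mux.NewRouter()}
    srv := &http.Server{Handler: rs}

    // Later, after the route set changes, build a fresh router and swap it in;
    // in-flight requests finish on whichever router they started with.
    newMux := mux.NewRouter()
    // ... register routes on newMux ...
    rs.Swap(newMux)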

View File

@@ -6,14 +6,13 @@ import (
"net"
"net/http"
"strings"
"time"
"github.com/docker/docker/api/server/httpstatus"
"github.com/docker/docker/api/server/httputils"
"github.com/docker/docker/api/server/middleware"
"github.com/docker/docker/api/server/router"
"github.com/docker/docker/api/server/router/debug"
"github.com/docker/docker/dockerversion"
"github.com/docker/docker/errdefs"
"github.com/gorilla/mux"
"github.com/sirupsen/logrus"
)
@@ -24,20 +23,20 @@ const versionMatcher = "/v{version:[0-9.]+}"
// Config provides the configuration for the API server
type Config struct {
Logging bool
CorsHeaders string
Version string
SocketGroup string
TLSConfig *tls.Config
// Hosts is a list of addresses for the API to listen on.
Hosts []string
}
// Server contains instance details for the server
type Server struct {
cfg *Config
servers []*HTTPServer
routers []router.Router
middlewares []middleware.Middleware
cfg *Config
servers []*HTTPServer
routers []router.Router
routerSwapper *routerSwapper
middlewares []middleware.Middleware
}
// New returns a new instance of the server based on the specified configuration.
@@ -59,8 +58,7 @@ func (s *Server) Accept(addr string, listeners ...net.Listener) {
for _, listener := range listeners {
httpServer := &HTTPServer{
srv: &http.Server{
Addr: addr,
ReadHeaderTimeout: 5 * time.Minute, // "G112: Potential Slowloris Attack (gosec)"; not a real concern for our use, so setting a long timeout.
Addr: addr,
},
l: listener,
}
@@ -82,7 +80,7 @@ func (s *Server) Close() {
func (s *Server) serveAPI() error {
var chErrors = make(chan error, len(s.servers))
for _, srv := range s.servers {
srv.srv.Handler = s.createMux()
srv.srv.Handler = s.routerSwapper
go func(srv *HTTPServer) {
var err error
logrus.Infof("API listen on %s", srv.l.Addr())
@@ -142,11 +140,11 @@ func (s *Server) makeHTTPHandler(handler httputils.APIFunc) http.HandlerFunc {
}
if err := handlerFunc(ctx, w, r, vars); err != nil {
statusCode := httpstatus.FromError(err)
statusCode := errdefs.GetHTTPErrorStatusCode(err)
if statusCode >= 500 {
logrus.Errorf("Handler for %s %s returned error: %v", r.Method, r.URL.Path, err)
}
makeErrorHandler(err)(w, r)
httputils.MakeErrorHandler(err)(w, r)
}
}
}
@@ -155,6 +153,11 @@ func (s *Server) makeHTTPHandler(handler httputils.APIFunc) http.HandlerFunc {
// This method also enables the Go profiler.
func (s *Server) InitRouter(routers ...router.Router) {
s.routers = append(s.routers, routers...)
m := s.createMux()
s.routerSwapper = &routerSwapper{
router: m,
}
}
type pageNotFoundError struct{}
@@ -187,7 +190,7 @@ func (s *Server) createMux() *mux.Router {
m.Path("/debug" + r.Path()).Handler(f)
}
notFoundHandler := makeErrorHandler(pageNotFoundError{})
notFoundHandler := httputils.MakeErrorHandler(pageNotFoundError{})
m.HandleFunc(versionMatcher+"/{path:.*}", notFoundHandler)
m.NotFoundHandler = notFoundHandler
m.MethodNotAllowedHandler = notFoundHandler
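
Both status helpers seen in the hunk above resolve an errdefs-classified error to an HTTP status code. A hedged sketch (the error text is made up; the alternative helper name comes from the other side of the diff):

func exampleStatus() int {
	err := errdefs.NotFound(errors.New("no such container: abc"))
	// httpstatus.FromError(err) == 404 here; the other variant used
	// errdefs.GetHTTPErrorStatusCode(err) for the same mapping.
	return httpstatus.FromError(err)
}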

View File

@@ -22,7 +22,7 @@ func TestMiddlewares(t *testing.T) {
srv.UseMiddleware(middleware.NewVersionMiddleware("0.1omega2", api.DefaultVersion, api.MinVersion))
req, _ := http.NewRequest(http.MethodGet, "/containers/json", nil)
req, _ := http.NewRequest("GET", "/containers/json", nil)
resp := httptest.NewRecorder()
ctx := context.Background()

File diff suppressed because it is too large

View File

@@ -1,7 +1,8 @@
package {{ .Package }} // import "github.com/docker/docker/api/types/{{ .Package }}"
// ----------------------------------------------------------------------------
// Code generated by `swagger generate operation`. DO NOT EDIT.
// DO NOT EDIT THIS FILE
// This file was generated by `swagger generate operation`
//
// See hack/generate-swagger-api.sh
// ----------------------------------------------------------------------------

View File

@@ -10,15 +10,18 @@ import (
// ContainerAttachConfig holds the streams to use when connecting to a container to view logs.
type ContainerAttachConfig struct {
GetStreams func(multiplexed bool) (io.ReadCloser, io.Writer, io.Writer, error)
GetStreams func() (io.ReadCloser, io.Writer, io.Writer, error)
UseStdin bool
UseStdout bool
UseStderr bool
Logs bool
Stream bool
DetachKeys string
// Used to signify that streams must be multiplexed by the producer, as the endpoint can't manage multiple streams.
// This is typically set by the HTTP endpoint, while websocket can transport raw streams
// Used to signify that streams are multiplexed and therefore need a StdWriter to encode stdout/stderr messages accordingly.
// TODO @cpuguy83: This shouldn't be needed. It was only added so that http and websocket endpoints can use the same function, and the websocket function was not using a stdwriter prior to this change...
// HOWEVER, the websocket endpoint is using a single stream and SHOULD be encoded with stdout/stderr as is done for HTTP since it is still just a single stream.
// Since such a change is an API change unrelated to the current changeset we'll keep it as is here and change separately.
MuxStreams bool
}
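
When MuxStreams is set, the producer frames stdout and stderr onto a single connection; moby's stdcopy package implements that framing. A minimal sketch (conn is assumed to be the hijacked connection):

func muxExample(conn io.Writer) {
	// each write is prefixed with a header naming its source stream
	stdout := stdcopy.NewStdWriter(conn, stdcopy.Stdout)
	stderr := stdcopy.NewStdWriter(conn, stdcopy.Stderr)
	fmt.Fprintln(stdout, "framed as stdout")
	fmt.Fprintln(stderr, "framed as stderr")
}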
@@ -27,7 +30,7 @@ type ContainerAttachConfig struct {
// expectation is for the logger endpoints to assemble the chunks using this
// metadata.
type PartialLogMetaData struct {
Last bool // true if this message is last of a partial
Last bool //true if this message is last of a partial
ID string // identifies group of messages comprising a single record
Ordinal int // ordering of message in partial group
}
@@ -35,6 +38,8 @@ type PartialLogMetaData struct {
// LogMessage is a data structure that represents a piece of output produced by some
// container. The Line member is a slice of an array whose contents can be
// changed after a log driver's Log() method returns.
// changes to this struct need to be reflected in the reset method in
// daemon/logger/logger.go
type LogMessage struct {
Line []byte
Source string
@@ -68,7 +73,6 @@ type LogSelector struct {
// behavior of a backend.ContainerStats() call.
type ContainerStatsConfig struct {
Stream bool
OneShot bool
OutStream io.Writer
Version string
}

View File

@@ -50,7 +50,7 @@ type ContainerCommitOptions struct {
// ContainerExecInspect holds information returned by exec inspect.
type ContainerExecInspect struct {
ExecID string `json:"ID"`
ExecID string
ContainerID string
Running bool
ExitCode int
@@ -59,6 +59,7 @@ type ContainerExecInspect struct {
// ContainerListOptions holds parameters to list containers with.
type ContainerListOptions struct {
Quiet bool
Size bool
All bool
Latest bool
@@ -112,16 +113,10 @@ type NetworkListOptions struct {
Filters filters.Args
}
// NewHijackedResponse initializes a HijackedResponse type
func NewHijackedResponse(conn net.Conn, mediaType string) HijackedResponse {
return HijackedResponse{Conn: conn, Reader: bufio.NewReader(conn), mediaType: mediaType}
}
// HijackedResponse holds connection information for a hijacked request.
type HijackedResponse struct {
mediaType string
Conn net.Conn
Reader *bufio.Reader
Conn net.Conn
Reader *bufio.Reader
}
// Close closes the hijacked connection and reader.
@@ -129,15 +124,6 @@ func (h *HijackedResponse) Close() {
h.Conn.Close()
}
// MediaType lets the client know whether the HijackedResponse holds a raw or multiplexed stream.
// It returns false if the HTTP Content-Type is not relevant, and the container must be inspected
func (h *HijackedResponse) MediaType() (string, bool) {
if h.mediaType == "" {
return "", false
}
return h.mediaType, true
}
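
On the consuming side, a client can use MediaType (in the versions that expose it) to decide whether the stream needs demultiplexing; the string below is the documented multiplexed-stream media type, and stdcopy.StdCopy undoes the framing shown earlier:

func readHijacked(resp HijackedResponse) error {
	if mt, ok := resp.MediaType(); ok && mt == "application/vnd.docker.multiplexed-stream" {
		// framed stream: split it back into stdout and stderr
		_, err := stdcopy.StdCopy(os.Stdout, os.Stderr, resp.Reader)
		return err
	}
	// raw stream (for example a TTY session): copy through unchanged
	_, err := io.Copy(os.Stdout, resp.Reader)
	return err
}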
// CloseWriter is an interface that implements structs
// that close input streams to prevent from writing.
type CloseWriter interface {
@@ -219,7 +205,7 @@ const (
// BuilderV1 is the first generation builder in docker daemon
BuilderV1 BuilderVersion = "1"
// BuilderBuildKit is builder based on moby/buildkit project
BuilderBuildKit BuilderVersion = "2"
BuilderBuildKit = "2"
)
// ImageBuildResponse holds information
@@ -250,20 +236,10 @@ type ImageImportOptions struct {
Platform string // Platform is the target platform of the image
}
// ImageListOptions holds parameters to list images with.
// ImageListOptions holds parameters to filter the list of images with.
type ImageListOptions struct {
// All controls whether all images in the graph are filtered, or just
// the heads.
All bool
// Filters is a JSON-encoded set of filter arguments.
All bool
Filters filters.Args
// SharedSize indicates whether the shared size of images should be computed.
SharedSize bool
// ContainerCount indicates whether container count should be computed.
ContainerCount bool
}
// ImageLoadResponse returns information to the client about a load process.
@@ -289,7 +265,7 @@ type ImagePullOptions struct {
// if the privilege request fails.
type RequestPrivilegeFunc func() (string, error)
// ImagePushOptions holds information to push images.
//ImagePushOptions holds information to push images.
type ImagePushOptions ImagePullOptions
// ImageRemoveOptions holds parameters to remove images.
@@ -387,10 +363,6 @@ type ServiceUpdateOptions struct {
// ServiceListOptions holds parameters to list services with.
type ServiceListOptions struct {
Filters filters.Args
// Status indicates whether the server should include the service task
// count of running and desired tasks.
Status bool
}
// ServiceInspectOptions holds parameters related to the "service inspect"

View File

@@ -3,7 +3,6 @@ package types // import "github.com/docker/docker/api/types"
import (
"github.com/docker/docker/api/types/container"
"github.com/docker/docker/api/types/network"
specs "github.com/opencontainers/image-spec/specs-go/v1"
)
// configs holds structs used for internal communication between the
@@ -16,7 +15,6 @@ type ContainerCreateConfig struct {
Config *container.Config
HostConfig *container.HostConfig
NetworkingConfig *network.NetworkingConfig
Platform *specs.Platform
AdjustCPUShares bool
}
@@ -33,7 +31,6 @@ type ExecConfig struct {
User string // User that will run the command
Privileged bool // Is the container in privileged mode
Tty bool // Attach standard streams to a tty.
ConsoleSize *[2]uint `json:",omitempty"` // Initial console size [height, width]
AttachStdin bool // Attach the standard input, makes possible user interaction
AttachStderr bool // Attach the standard error
AttachStdout bool // Attach the standard output

View File

@@ -1,7 +1,6 @@
package container // import "github.com/docker/docker/api/types/container"
import (
"io"
"time"
"github.com/docker/docker/api/types/strslice"
@@ -14,24 +13,6 @@ import (
// Docker interprets it as 3 nanoseconds.
const MinimumDuration = 1 * time.Millisecond
// StopOptions holds the options to stop or restart a container.
type StopOptions struct {
// Signal (optional) is the signal to send to the container to (gracefully)
// stop it before forcibly terminating the container with SIGKILL after the
// timeout expires. If no value is set, the default (SIGTERM) is used.
Signal string `json:",omitempty"`
// Timeout (optional) is the timeout (in seconds) to wait for the container
// to stop gracefully before forcibly terminating it with SIGKILL.
//
// - Use nil to use the default timeout (10 seconds).
// - Use '-1' to wait indefinitely.
// - Use '0' to not wait for the container to exit gracefully, and
// immediately proceed to forcibly terminate the container.
// - Other positive values are used as timeout (in seconds).
Timeout *int `json:",omitempty"`
}
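
The pointer-typed Timeout lets callers distinguish "use the default" (nil) from an explicit value, including the special 0 and -1 cases. A sketch against a client version that accepts StopOptions (cli, ctx, and the container ID are assumed):

func stopExample(ctx context.Context, cli *client.Client, id string) error {
	timeout := 30 // seconds; a nil Timeout would mean the 10-second default
	return cli.ContainerStop(ctx, id, container.StopOptions{
		Signal:  "SIGTERM",
		Timeout: &timeout,
	})
}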
// HealthConfig holds configuration settings for the HEALTHCHECK feature.
type HealthConfig struct {
// Test is the test to perform to check that the container is healthy.
@@ -53,14 +34,6 @@ type HealthConfig struct {
Retries int `json:",omitempty"`
}
// ExecStartOptions holds the options to start container's exec.
type ExecStartOptions struct {
Stdin io.Reader
Stdout io.Writer
Stderr io.Writer
ConsoleSize *[2]uint `json:",omitempty"`
}
// Config contains the configuration data about a container.
// It should hold only portable information about the container.
// Here, "portable" means "independent from the host we are running on".

View File

@@ -1,7 +1,8 @@
package container // import "github.com/docker/docker/api/types/container"
// ----------------------------------------------------------------------------
// Code generated by `swagger generate operation`. DO NOT EDIT.
// DO NOT EDIT THIS FILE
// This file was generated by `swagger generate operation`
//
// See hack/generate-swagger-api.sh
// ----------------------------------------------------------------------------

View File

@@ -0,0 +1,21 @@
package container // import "github.com/docker/docker/api/types/container"
// ----------------------------------------------------------------------------
// DO NOT EDIT THIS FILE
// This file was generated by `swagger generate operation`
//
// See hack/generate-swagger-api.sh
// ----------------------------------------------------------------------------
// ContainerCreateCreatedBody OK response to ContainerCreate operation
// swagger:model ContainerCreateCreatedBody
type ContainerCreateCreatedBody struct {
// The ID of the created container
// Required: true
ID string `json:"Id"`
// Warnings encountered when creating the container
// Required: true
Warnings []string `json:"Warnings"`
}

View File

@@ -1,7 +1,8 @@
package container // import "github.com/docker/docker/api/types/container"
// ----------------------------------------------------------------------------
// Code generated by `swagger generate operation`. DO NOT EDIT.
// DO NOT EDIT THIS FILE
// This file was generated by `swagger generate operation`
//
// See hack/generate-swagger-api.sh
// ----------------------------------------------------------------------------
@@ -10,9 +11,7 @@ package container // import "github.com/docker/docker/api/types/container"
// swagger:model ContainerTopOKBody
type ContainerTopOKBody struct {
// Each process running in the container, where each process
// is an array of values corresponding to the titles.
//
// Each process running in the container, where each process is an array of values corresponding to the titles
// Required: true
Processes [][]string `json:"Processes"`

View File

@@ -1,7 +1,8 @@
package container // import "github.com/docker/docker/api/types/container"
// ----------------------------------------------------------------------------
// Code generated by `swagger generate operation`. DO NOT EDIT.
// DO NOT EDIT THIS FILE
// This file was generated by `swagger generate operation`
//
// See hack/generate-swagger-api.sh
// ----------------------------------------------------------------------------

View File

@@ -0,0 +1,29 @@
package container // import "github.com/docker/docker/api/types/container"
// ----------------------------------------------------------------------------
// DO NOT EDIT THIS FILE
// This file was generated by `swagger generate operation`
//
// See hack/generate-swagger-api.sh
// ----------------------------------------------------------------------------
// ContainerWaitOKBodyError container waiting error, if any
// swagger:model ContainerWaitOKBodyError
type ContainerWaitOKBodyError struct {
// Details of an error
Message string `json:"Message,omitempty"`
}
// ContainerWaitOKBody OK response to ContainerWait operation
// swagger:model ContainerWaitOKBody
type ContainerWaitOKBody struct {
// error
// Required: true
Error *ContainerWaitOKBodyError `json:"Error"`
// Exit code of the container
// Required: true
StatusCode int64 `json:"StatusCode"`
}
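
Clients receive this body on ContainerWait's result channel; a sketch against the client API of the same era (cli, ctx, and the container ID are assumed):

func waitExample(ctx context.Context, cli *client.Client, id string) (int64, error) {
	bodyC, errC := cli.ContainerWait(ctx, id, container.WaitConditionNotRunning)
	select {
	case body := <-bodyC:
		if body.Error != nil {
			return body.StatusCode, errors.New(body.Error.Message)
		}
		return body.StatusCode, nil
	case err := <-errC:
		return -1, err
	}
}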

View File

@@ -1,19 +0,0 @@
package container
// This file was generated by the swagger tool.
// Editing this file might prove futile when you re-run the swagger generate command
// CreateResponse ContainerCreateResponse
//
// OK response to ContainerCreate operation
// swagger:model CreateResponse
type CreateResponse struct {
// The ID of the created container
// Required: true
ID string `json:"Id"`
// Warnings encountered when creating the container
// Required: true
Warnings []string `json:"Warnings"`
}

View File

@@ -1,16 +0,0 @@
package container // import "github.com/docker/docker/api/types/container"
// ContainerCreateCreatedBody OK response to ContainerCreate operation
//
// Deprecated: use CreateResponse
type ContainerCreateCreatedBody = CreateResponse
// ContainerWaitOKBody OK response to ContainerWait operation
//
// Deprecated: use WaitResponse
type ContainerWaitOKBody = WaitResponse
// ContainerWaitOKBodyError container waiting error, if any
//
// Deprecated: use WaitExitError
type ContainerWaitOKBodyError = WaitExitError
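
Because these are type aliases (declared with =), not new type definitions, existing callers keep compiling and the old and new names are interchangeable; a quick sketch:

func aliasExample() {
	var legacy container.ContainerWaitOKBody    // alias of container.WaitResponse
	var current container.WaitResponse = legacy // assignable: one type, two names
	_ = current
}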

View File

@@ -7,106 +7,67 @@ import (
"github.com/docker/docker/api/types/mount"
"github.com/docker/docker/api/types/strslice"
"github.com/docker/go-connections/nat"
units "github.com/docker/go-units"
"github.com/docker/go-units"
)
// CgroupnsMode represents the cgroup namespace mode of the container
type CgroupnsMode string
// cgroup namespace modes for containers
const (
CgroupnsModeEmpty CgroupnsMode = ""
CgroupnsModePrivate CgroupnsMode = "private"
CgroupnsModeHost CgroupnsMode = "host"
)
// IsPrivate indicates whether the container uses its own private cgroup namespace
func (c CgroupnsMode) IsPrivate() bool {
return c == CgroupnsModePrivate
}
// IsHost indicates whether the container shares the host's cgroup namespace
func (c CgroupnsMode) IsHost() bool {
return c == CgroupnsModeHost
}
// IsEmpty indicates whether the container cgroup namespace mode is unset
func (c CgroupnsMode) IsEmpty() bool {
return c == CgroupnsModeEmpty
}
// Valid indicates whether the cgroup namespace mode is valid
func (c CgroupnsMode) Valid() bool {
return c.IsEmpty() || c.IsPrivate() || c.IsHost()
}
// Isolation represents the isolation technology of a container. The supported
// values are platform specific
type Isolation string
// Isolation modes for containers
const (
IsolationEmpty Isolation = "" // IsolationEmpty is unspecified (same behavior as default)
IsolationDefault Isolation = "default" // IsolationDefault is the default isolation mode on current daemon
IsolationProcess Isolation = "process" // IsolationProcess is process isolation mode
IsolationHyperV Isolation = "hyperv" // IsolationHyperV is HyperV isolation mode
)
// IsDefault indicates the default isolation technology of a container. On Linux this
// is the native driver. On Windows, this is a Windows Server Container.
func (i Isolation) IsDefault() bool {
// TODO consider making isolation-mode strict (case-sensitive)
v := Isolation(strings.ToLower(string(i)))
return v == IsolationDefault || v == IsolationEmpty
return strings.ToLower(string(i)) == "default" || string(i) == ""
}
// IsHyperV indicates the use of a Hyper-V partition for isolation
func (i Isolation) IsHyperV() bool {
// TODO consider making isolation-mode strict (case-sensitive)
return Isolation(strings.ToLower(string(i))) == IsolationHyperV
return strings.ToLower(string(i)) == "hyperv"
}
// IsProcess indicates the use of process isolation
func (i Isolation) IsProcess() bool {
// TODO consider making isolation-mode strict (case-sensitive)
return Isolation(strings.ToLower(string(i))) == IsolationProcess
return strings.ToLower(string(i)) == "process"
}
const (
// IsolationEmpty is unspecified (same behavior as default)
IsolationEmpty = Isolation("")
// IsolationDefault is the default isolation mode on current daemon
IsolationDefault = Isolation("default")
// IsolationProcess is process isolation mode
IsolationProcess = Isolation("process")
// IsolationHyperV is HyperV isolation mode
IsolationHyperV = Isolation("hyperv")
)
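
Both variants of these predicates lower-case before comparing, so mixed-case user input still matches; a quick sketch:

func isolationExample() {
	fmt.Println(Isolation("HyperV").IsHyperV())  // true: compared case-insensitively
	fmt.Println(Isolation("").IsDefault())       // true: empty means default
	fmt.Println(Isolation("process").IsHyperV()) // false
}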
// IpcMode represents the container ipc stack.
type IpcMode string
// IpcMode constants
const (
IPCModeNone IpcMode = "none"
IPCModeHost IpcMode = "host"
IPCModeContainer IpcMode = "container"
IPCModePrivate IpcMode = "private"
IPCModeShareable IpcMode = "shareable"
)
// IsPrivate indicates whether the container uses its own private ipc namespace which can not be shared.
func (n IpcMode) IsPrivate() bool {
return n == IPCModePrivate
return n == "private"
}
// IsHost indicates whether the container shares the host's ipc namespace.
func (n IpcMode) IsHost() bool {
return n == IPCModeHost
return n == "host"
}
// IsShareable indicates whether the container's ipc namespace can be shared with another container.
func (n IpcMode) IsShareable() bool {
return n == IPCModeShareable
return n == "shareable"
}
// IsContainer indicates whether the container uses another container's ipc namespace.
func (n IpcMode) IsContainer() bool {
return strings.HasPrefix(string(n), string(IPCModeContainer)+":")
parts := strings.SplitN(string(n), ":", 2)
return len(parts) > 1 && parts[0] == "container"
}
// IsNone indicates whether container IpcMode is set to "none".
func (n IpcMode) IsNone() bool {
return n == IPCModeNone
return n == "none"
}
// IsEmpty indicates whether container IpcMode is empty
@@ -121,8 +82,9 @@ func (n IpcMode) Valid() bool {
// Container returns the name of the container ipc stack is going to be used.
func (n IpcMode) Container() string {
if n.IsContainer() {
return strings.TrimPrefix(string(n), string(IPCModeContainer)+":")
parts := strings.SplitN(string(n), ":", 2)
if len(parts) > 1 && parts[0] == "container" {
return parts[1]
}
return ""
}
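
Either implementation of Container extracts the name from the "container:<name|id>" form, returning an empty string for any other mode; a quick sketch:

func ipcExample() {
	m := IpcMode("container:db")
	fmt.Println(m.IsContainer())             // true
	fmt.Println(m.Container())               // "db"
	fmt.Println(IpcMode("host").Container()) // "" (not a container mode)
}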
@@ -160,7 +122,7 @@ func (n NetworkMode) ConnectedContainer() string {
return ""
}
// UserDefined indicates user-created network
//UserDefined indicates user-created network
func (n NetworkMode) UserDefined() string {
if n.IsUserDefined() {
return string(n)
@@ -341,7 +303,7 @@ type LogMode string
// Available logging modes
const (
LogModeUnset LogMode = ""
LogModeUnset = ""
LogModeBlocking LogMode = "blocking"
LogModeNonBlock LogMode = "non-blocking"
)
@@ -376,17 +338,14 @@ type Resources struct {
Devices []DeviceMapping // List of devices to map inside the container
DeviceCgroupRules []string // List of rule to be added to the device cgroup
DeviceRequests []DeviceRequest // List of device requests for device drivers
// KernelMemory specifies the kernel memory limit (in bytes) for the container.
// Deprecated: kernel 5.4 deprecated kmem.limit_in_bytes.
KernelMemory int64 `json:",omitempty"`
KernelMemoryTCP int64 `json:",omitempty"` // Hard limit for kernel TCP buffer memory (in bytes)
MemoryReservation int64 // Memory soft limit (in bytes)
MemorySwap int64 // Total memory usage (memory + swap); set `-1` to enable unlimited swap
MemorySwappiness *int64 // Tuning container memory swappiness behaviour
OomKillDisable *bool // Whether to disable OOM Killer or not
PidsLimit *int64 // Setting PIDs limit for a container; Set `0` or `-1` for unlimited, or `null` to not change.
Ulimits []*units.Ulimit // List of ulimits to be set in the container
KernelMemory int64 // Kernel memory limit (in bytes)
KernelMemoryTCP int64 // Hard limit for kernel TCP buffer memory (in bytes)
MemoryReservation int64 // Memory soft limit (in bytes)
MemorySwap int64 // Total memory usage (memory + swap); set `-1` to enable unlimited swap
MemorySwappiness *int64 // Tuning container memory swappiness behaviour
OomKillDisable *bool // Whether to disable OOM Killer or not
PidsLimit *int64 // Setting PIDs limit for a container; Set `0` or `-1` for unlimited, or `null` to not change.
Ulimits []*units.Ulimit // List of ulimits to be set in the container
// Applicable to Windows
CPUCount int64 `json:"CpuCount"` // CPU count
@@ -417,15 +376,14 @@ type HostConfig struct {
AutoRemove bool // Automatically remove container when it exits
VolumeDriver string // Name of the volume driver used to mount volumes
VolumesFrom []string // List of volumes to take from other container
ConsoleSize [2]uint // Initial console size (height,width)
// Applicable to UNIX platforms
CapAdd strslice.StrSlice // List of kernel capabilities to add to the container
CapDrop strslice.StrSlice // List of kernel capabilities to remove from the container
CgroupnsMode CgroupnsMode // Cgroup namespace mode to use for the container
DNS []string `json:"Dns"` // List of DNS server to lookup
DNSOptions []string `json:"DnsOptions"` // List of DNSOption to look for
DNSSearch []string `json:"DnsSearch"` // List of DNSSearch to look for
Capabilities []string `json:"Capabilities"` // List of kernel capabilities to be available for container (this overrides the default set)
DNS []string `json:"Dns"` // List of DNS server to lookup
DNSOptions []string `json:"DnsOptions"` // List of DNSOption to look for
DNSSearch []string `json:"DnsSearch"` // List of DNSSearch to look for
ExtraHosts []string // List of extra hosts
GroupAdd []string // List of additional groups that the container process will run as
IpcMode IpcMode // IPC namespace to use for the container
@@ -446,7 +404,8 @@ type HostConfig struct {
Runtime string `json:",omitempty"` // Runtime to use with this container
// Applicable to Windows
Isolation Isolation // Isolation technology of the container (e.g. default, hyperv)
ConsoleSize [2]uint // Initial console size (height,width)
Isolation Isolation // Isolation technology of the container (e.g. default, hyperv)
// Contains container's resources (cgroups, ulimits)
Resources
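
The pointer-typed resource fields let callers distinguish "leave unchanged" (nil) from an explicit zero, as the PidsLimit comment above notes. A sketch building a HostConfig with illustrative values:

func resourcesExample() container.HostConfig {
	pids := int64(100)      // 0 or -1 would mean unlimited; nil means "do not change"
	swappiness := int64(60) // nil would leave swappiness untouched
	return container.HostConfig{
		Resources: container.Resources{
			PidsLimit:         &pids,
			MemorySwappiness:  &swappiness,
			MemoryReservation: 64 * 1024 * 1024, // soft limit, in bytes
		},
	}
}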

View File

@@ -1,4 +1,3 @@
//go:build !windows
// +build !windows
package container // import "github.com/docker/docker/api/types/container"

View File

@@ -1,12 +0,0 @@
package container
// This file was generated by the swagger tool.
// Editing this file might prove futile when you re-run the swagger generate command
// WaitExitError container waiting error, if any
// swagger:model WaitExitError
type WaitExitError struct {
// Details of an error
Message string `json:"Message,omitempty"`
}

View File

@@ -1,18 +0,0 @@
package container
// This file was generated by the swagger tool.
// Editing this file might prove futile when you re-run the swagger generate command
// WaitResponse ContainerWaitResponse
//
// OK response to ContainerWait operation
// swagger:model WaitResponse
type WaitResponse struct {
// error
Error *WaitExitError `json:"Error,omitempty"`
// Exit code of the container
// Required: true
StatusCode int64 `json:"StatusCode"`
}

View File

@@ -1,14 +0,0 @@
package types // import "github.com/docker/docker/api/types"
import "github.com/docker/docker/api/types/volume"
// Volume volume
//
// Deprecated: use github.com/docker/docker/api/types/volume.Volume
type Volume = volume.Volume
// VolumeUsageData Usage details about the volume. This information is used by the
// `GET /system/df` endpoint, and omitted in other endpoints.
//
// Deprecated: use github.com/docker/docker/api/types/volume.UsageData
type VolumeUsageData = volume.UsageData

View File

@@ -1,6 +0,0 @@
package types
// Error returns the error message
func (e ErrorResponse) Error() string {
return e.Message
}

View File

@@ -1,26 +1,31 @@
package events // import "github.com/docker/docker/api/types/events"
// Type is used for event-types.
type Type = string
// List of known event types.
const (
BuilderEventType Type = "builder" // BuilderEventType is the event type that the builder generates.
ConfigEventType Type = "config" // ConfigEventType is the event type that configs generate.
ContainerEventType Type = "container" // ContainerEventType is the event type that containers generate.
DaemonEventType Type = "daemon" // DaemonEventType is the event type that daemon generate.
ImageEventType Type = "image" // ImageEventType is the event type that images generate.
NetworkEventType Type = "network" // NetworkEventType is the event type that networks generate.
NodeEventType Type = "node" // NodeEventType is the event type that nodes generate.
PluginEventType Type = "plugin" // PluginEventType is the event type that plugins generate.
SecretEventType Type = "secret" // SecretEventType is the event type that secrets generate.
ServiceEventType Type = "service" // ServiceEventType is the event type that services generate.
VolumeEventType Type = "volume" // VolumeEventType is the event type that volumes generate.
// ContainerEventType is the event type that containers generate
ContainerEventType = "container"
// DaemonEventType is the event type that daemon generate
DaemonEventType = "daemon"
// ImageEventType is the event type that images generate
ImageEventType = "image"
// NetworkEventType is the event type that networks generate
NetworkEventType = "network"
// PluginEventType is the event type that plugins generate
PluginEventType = "plugin"
// VolumeEventType is the event type that volumes generate
VolumeEventType = "volume"
// ServiceEventType is the event type that services generate
ServiceEventType = "service"
// NodeEventType is the event type that nodes generate
NodeEventType = "node"
// SecretEventType is the event type that secrets generate
SecretEventType = "secret"
// ConfigEventType is the event type that configs generate
ConfigEventType = "config"
)
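
A sketch of subscribing to a subset of these event types from a client (cli and ctx are assumed; "type" and "event" are the documented filter keys):

func eventsExample(ctx context.Context, cli *client.Client) {
	f := filters.NewArgs(
		filters.Arg("type", events.ContainerEventType),
		filters.Arg("event", "start"),
	)
	msgs, errs := cli.Events(ctx, types.EventsOptions{Filters: f})
	for {
		select {
		case m := <-msgs:
			fmt.Println(m.Type, m.Action, m.Actor.ID)
		case err := <-errs:
			fmt.Println("event stream closed:", err) // includes context cancellation
			return
		}
	}
}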
// Actor describes something that generates events,
// like a container, or a network, or a volume.
// It has a defined name and a set of attributes.
// It has a defined name and a set or attributes.
// The container attributes are its labels, other actors
// can generate these attributes from other properties.
type Actor struct {
@@ -32,11 +37,11 @@ type Actor struct {
type Message struct {
// Deprecated information from JSONMessage.
// With data only in container events.
Status string `json:"status,omitempty"` // Deprecated: use Action instead.
ID string `json:"id,omitempty"` // Deprecated: use Actor.ID instead.
From string `json:"from,omitempty"` // Deprecated: use Actor.Attributes["image"] instead.
Status string `json:"status,omitempty"`
ID string `json:"id,omitempty"`
From string `json:"from,omitempty"`
Type Type
Type string
Action string
Actor Actor
// Engine events are local scope. Cluster events are swarm scope.

View File

@@ -1,5 +1,4 @@
/*
Package filters provides tools for encoding a mapping of keys to a set of
/*Package filters provides tools for encoding a mapping of keys to a set of
multiple values.
*/
package filters // import "github.com/docker/docker/api/types/filters"
@@ -10,7 +9,6 @@ import (
"strings"
"github.com/docker/docker/api/types/versions"
"github.com/pkg/errors"
)
// Args stores a mapping of keys to a set of multiple values.
@@ -38,15 +36,6 @@ func NewArgs(initialArgs ...KeyValuePair) Args {
return args
}
// Keys returns all the keys in list of Args
func (args Args) Keys() []string {
keys := make([]string, 0, len(args.fields))
for k := range args.fields {
keys = append(keys, k)
}
return keys
}
// MarshalJSON returns a JSON byte representation of the Args
func (args Args) MarshalJSON() ([]byte, error) {
if len(args.fields) == 0 {
@@ -68,7 +57,7 @@ func ToJSON(a Args) (string, error) {
// then the encoded format will use an older legacy format where the values are a
// list of strings, instead of a set.
//
// Deprecated: do not use in any new code; use ToJSON instead
// Deprecated: Use ToJSON
func ToParamWithVersion(version string, a Args) (string, error) {
if a.Len() == 0 {
return "", nil
@@ -99,7 +88,7 @@ func FromJSON(p string) (Args, error) {
// Fallback to parsing arguments in the legacy slice format
deprecated := map[string][]string{}
if legacyErr := json.Unmarshal(raw, &deprecated); legacyErr != nil {
return args, invalidFilter{errors.Wrap(err, "invalid filter")}
return args, err
}
args.fields = deprecatedArgs(deprecated)
@@ -156,7 +145,7 @@ func (args Args) Len() int {
func (args Args) MatchKVList(key string, sources map[string]string) bool {
fieldValues := args.fields[key]
// do not filter if there is no filter set or cannot determine filter
//do not filter if there is no filter set or cannot determine filter
if len(fieldValues) == 0 {
return true
}
@@ -202,7 +191,7 @@ func (args Args) Match(field, source string) bool {
// ExactMatch returns true if the source matches exactly one of the values.
func (args Args) ExactMatch(key, source string) bool {
fieldValues, ok := args.fields[key]
// do not filter if there is no filter set or cannot determine filter
//do not filter if there is no filter set or cannot determine filter
if !ok || len(fieldValues) == 0 {
return true
}
@@ -215,7 +204,7 @@ func (args Args) ExactMatch(key, source string) bool {
// matches exactly the value.
func (args Args) UniqueExactMatch(key, source string) bool {
fieldValues := args.fields[key]
// do not filter if there is no filter set or cannot determine filter
//do not filter if there is no filter set or cannot determine filter
if len(fieldValues) == 0 {
return true
}
@@ -249,10 +238,10 @@ func (args Args) Contains(field string) bool {
return ok
}
type invalidFilter struct{ error }
type invalidFilter string
func (e invalidFilter) Error() string {
return e.error.Error()
return "Invalid filter '" + string(e) + "'"
}
func (invalidFilter) InvalidParameter() {}
@@ -262,7 +251,7 @@ func (invalidFilter) InvalidParameter() {}
func (args Args) Validate(accepted map[string]bool) error {
for name := range args.fields {
if !accepted[name] {
return invalidFilter{errors.New("invalid filter '" + name + "'")}
return invalidFilter(name)
}
}
return nil
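
The struct-based variant of invalidFilter wraps an underlying error while still satisfying the InvalidParameter classification; callers can detect it with errors.As, mirroring the tests earlier in this diff. A package-internal sketch:

func validateExample() {
	args := NewArgs(Arg("status", "running"), Arg("bogus", "x"))
	err := args.Validate(map[string]bool{"status": true})
	var inv invalidFilter
	fmt.Println(errors.As(err, &inv)) // true: the "bogus" key is rejected
}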

View File

@@ -4,8 +4,8 @@ import (
"errors"
"testing"
"gotest.tools/v3/assert"
is "gotest.tools/v3/assert/cmp"
"gotest.tools/assert"
is "gotest.tools/assert/cmp"
)
func TestToJSON(t *testing.T) {
@@ -69,14 +69,9 @@ func TestFromJSON(t *testing.T) {
}
for _, invalid := range invalids {
_, err := FromJSON(invalid)
if err == nil {
if _, err := FromJSON(invalid); err == nil {
t.Fatalf("Expected an error with %v, got nothing", invalid)
}
var invalidFilterError invalidFilter
if !errors.As(err, &invalidFilterError) {
t.Fatalf("Expected an invalidFilter error, got %T", err)
}
}
for expectedArgs, matchers := range valid {
@@ -332,14 +327,9 @@ func TestValidate(t *testing.T) {
}
f.Add("bogus", "running")
err := f.Validate(valid)
if err == nil {
if err := f.Validate(valid); err == nil {
t.Fatal("Expected to return an error, got nil")
}
var invalidFilterError invalidFilter
if !errors.As(err, &invalidFilterError) {
t.Fatalf("Expected an invalidFilter error, got %T", err)
}
}
func TestWalkValues(t *testing.T) {
@@ -347,17 +337,14 @@ func TestWalkValues(t *testing.T) {
f.Add("status", "running")
f.Add("status", "paused")
err := f.WalkValues("status", func(value string) error {
f.WalkValues("status", func(value string) error {
if value != "running" && value != "paused" {
t.Fatalf("Unexpected value %s", value)
}
return nil
})
if err != nil {
t.Fatalf("Expected no error, got %v", err)
}
err = f.WalkValues("status", func(value string) error {
err := f.WalkValues("status", func(value string) error {
return errors.New("return")
})
if err == nil {

View File

@@ -3,21 +3,15 @@ package types
// This file was generated by the swagger tool.
// Editing this file might prove futile when you re-run the swagger generate command
// GraphDriverData Information about the storage driver used to store the container's and
// image's filesystem.
//
// GraphDriverData Information about a container's graph driver.
// swagger:model GraphDriverData
type GraphDriverData struct {
// Low-level storage metadata, provided as key/value pairs.
//
// This information is driver-specific, and depends on the storage-driver
// in use, and should be used for informational purposes only.
//
// data
// Required: true
Data map[string]string `json:"Data"`
// Name of the storage driver.
// name
// Required: true
Name string `json:"Name"`
}

View File

@@ -1,7 +1,8 @@
package image // import "github.com/docker/docker/api/types/image"
// ----------------------------------------------------------------------------
// Code generated by `swagger generate operation`. DO NOT EDIT.
// DO NOT EDIT THIS FILE
// This file was generated by `swagger generate operation`
//
// See hack/generate-swagger-api.sh
// ----------------------------------------------------------------------------

View File

@@ -7,91 +7,43 @@ package types
// swagger:model ImageSummary
type ImageSummary struct {
// Number of containers using this image. Includes both stopped and running
// containers.
//
// This count is not calculated by default, and depends on which API endpoint
// is used. `-1` indicates that the value has not been set / calculated.
//
// containers
// Required: true
Containers int64 `json:"Containers"`
// Date and time at which the image was created as a Unix timestamp
// (number of seconds since EPOCH).
//
// created
// Required: true
Created int64 `json:"Created"`
// ID is the content-addressable ID of an image.
//
// This identifier is a content-addressable digest calculated from the
// image's configuration (which includes the digests of layers used by
// the image).
//
// Note that this digest differs from the `RepoDigests` below, which
// holds digests of image manifests that reference the image.
//
// Id
// Required: true
ID string `json:"Id"`
// User-defined key/value metadata.
// labels
// Required: true
Labels map[string]string `json:"Labels"`
// ID of the parent image.
//
// Depending on how the image was created, this field may be empty and
// is only set for images that were built/created locally. This field
// is empty if the image was pulled from an image registry.
//
// parent Id
// Required: true
ParentID string `json:"ParentId"`
// List of content-addressable digests of locally available image manifests
// that the image is referenced from. Multiple manifests can refer to the
// same image.
//
// These digests are usually only available if the image was either pulled
// from a registry, or if the image was pushed to a registry, which is when
// the manifest is generated and its digest calculated.
//
// repo digests
// Required: true
RepoDigests []string `json:"RepoDigests"`
// List of image names/tags in the local image cache that reference this
// image.
//
// Multiple image tags can refer to the same image, and this list may be
// empty if no tags reference the image, in which case the image is
// "untagged", in which case it can still be referenced by its ID.
//
// repo tags
// Required: true
RepoTags []string `json:"RepoTags"`
// Total size of image layers that are shared between this image and other
// images.
//
// This size is not calculated by default. `-1` indicates that the value
// has not been set / calculated.
//
// shared size
// Required: true
SharedSize int64 `json:"SharedSize"`
// Total size of the image including all layers it is composed of.
//
// size
// Required: true
Size int64 `json:"Size"`
// Total size of the image including all layers it is composed of.
//
// In versions of Docker before v1.10, this field was calculated from
// the image itself and all of its parent images. Docker v1.10 and up
// store images self-contained, and no longer use a parent-chain, making
// this field an equivalent of the Size field.
//
// This field is kept for backward compatibility, but may be removed in
// a future version of the API.
//
// virtual size
// Required: true
VirtualSize int64 `json:"VirtualSize"`
}

View File

@@ -17,8 +17,6 @@ const (
TypeTmpfs Type = "tmpfs"
// TypeNamedPipe is the type for mounting Windows named pipes
TypeNamedPipe Type = "npipe"
// TypeCluster is the type for Swarm Cluster Volumes.
TypeCluster Type = "cluster"
)
// Mount represents a mount (volume).
@@ -32,10 +30,9 @@ type Mount struct {
ReadOnly bool `json:",omitempty"`
Consistency Consistency `json:",omitempty"`
BindOptions *BindOptions `json:",omitempty"`
VolumeOptions *VolumeOptions `json:",omitempty"`
TmpfsOptions *TmpfsOptions `json:",omitempty"`
ClusterOptions *ClusterOptions `json:",omitempty"`
BindOptions *BindOptions `json:",omitempty"`
VolumeOptions *VolumeOptions `json:",omitempty"`
TmpfsOptions *TmpfsOptions `json:",omitempty"`
}
// Propagation represents the propagation of a mount.
@@ -82,9 +79,8 @@ const (
// BindOptions defines options specific to mounts of type "bind".
type BindOptions struct {
Propagation Propagation `json:",omitempty"`
NonRecursive bool `json:",omitempty"`
CreateMountpoint bool `json:",omitempty"`
Propagation Propagation `json:",omitempty"`
NonRecursive bool `json:",omitempty"`
}
// VolumeOptions represents the options for a mount of type volume.
@@ -117,7 +113,7 @@ type TmpfsOptions struct {
// TODO(stevvooe): There are several more tmpfs flags, specified in the
// daemon, that are accepted. Only the most basic are added for now.
//
// From https://github.com/moby/sys/blob/mount/v0.1.1/mount/flags.go#L47-L56
// From docker/docker/pkg/mount/flags.go:
//
// var validFlags = map[string]bool{
// "": true,
@@ -133,8 +129,3 @@ type TmpfsOptions struct {
// Some of these may be straightforward to add, but others, such as
// uid/gid have implications in a clustered system.
}
// ClusterOptions specifies options for a Cluster volume.
type ClusterOptions struct {
// intentionally empty
}
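
A sketch constructing a tmpfs mount with these options (target path and size are illustrative):

func tmpfsExample() mount.Mount {
	return mount.Mount{
		Type:   mount.TypeTmpfs,
		Target: "/run",
		TmpfsOptions: &mount.TmpfsOptions{
			SizeBytes: 64 * 1024 * 1024, // 64 MiB
			Mode:      0o700,            // mode bits for the mounted filesystem
		},
	}
}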

Some files were not shown because too many files have changed in this diff.