Compare commits

..

65 Commits

Author SHA1 Message Date
Andrew Hsu
cdb0218236 Merge pull request #239 from seemethere/bundle_me_up_1806
[18.06-ce] [ENGSEC-28] CVE-2019-5736 apply fix via git bundle instead of patches
2019-02-06 15:30:06 -08:00
Eli Uriegas
d212dfee1a Switch from applying patches to a git bundle
A git bundle allows us to keep the same SHA, giving us the ability to
validate our patch against a known entity and to push directly from our
private forks to public forks without having to re-apply any patches.

Signed-off-by: Eli Uriegas <eli.uriegas@docker.com>
2019-02-06 22:13:34 +00:00
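
For readers unfamiliar with the mechanism: a bundle transports the actual commit objects, so the SHAs are identical on both sides of the transfer. A minimal sketch of the round-trip, driven from Go for illustration (the refs and file name are hypothetical, not the ones used in this release process):

```go
package main

import (
	"log"
	"os/exec"
)

// git runs a git subcommand and aborts on failure.
func git(args ...string) {
	if out, err := exec.Command("git", args...).CombinedOutput(); err != nil {
		log.Fatalf("git %v: %v\n%s", args, err, out)
	}
}

func main() {
	// Pack the fix commits reachable from fix-branch but not base.
	git("bundle", "create", "fix.bundle", "base..fix-branch")
	// Check that the receiving repository has the bundle's prerequisites.
	git("bundle", "verify", "fix.bundle")
	// Import the commits with their original SHAs; no patches re-applied.
	git("fetch", "fix.bundle", "fix-branch:fix-branch")
}
```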
Andrew Hsu
56ad4cdec6 Merge pull request #102 from andrewhsu/grpc1806
[18.06] cluster: set bigger grpc limit for array requests
2018-10-31 13:49:10 -07:00
Tonis Tiigi
cdabbbf33c cluster: set bigger grpc limit for array requests
A 4MB client-side limit was introduced when vendoring go-grpc#1165
(v1.4.0), making these requests likely to produce errors

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit 489b8eda66)
Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
2018-10-30 23:06:32 +00:00
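
A minimal sketch of the client-side knob involved, assuming a hypothetical dial target and a deliberately generous cap (neither is swarmkit's actual value):

```go
package main

import (
	"log"

	"google.golang.org/grpc"
)

func main() {
	const maxMsgSize = 128 << 20 // 128 MiB; illustrative, not swarmkit's limit

	conn, err := grpc.Dial(
		"unix:///var/run/swarm/control.sock", // hypothetical target
		grpc.WithInsecure(),
		// Raise the per-call receive limit; without this, replies larger
		// than the 4MB default introduced in grpc-go v1.4.0 fail.
		grpc.WithDefaultCallOptions(grpc.MaxCallRecvMsgSize(maxMsgSize)),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
}
```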
Sebastiaan van Stijn
320063a2ad Merge pull request #34 from thaJeztah/18.06-backport-logissue
[18.06] backport select polling based watcher for Windows log watcher
2018-08-16 10:14:46 +02:00
Sebastiaan van Stijn
29871648df Merge pull request #31 from thaJeztah/18.06-backport-jjh.37562
[18.06] backport "don't invoke HCS shutdown if terminate called"
2018-08-16 10:14:03 +02:00
Andrew Hsu
68a4625393 Merge pull request #33 from thaJeztah/18.06-update-containerd
[18.06] backport bump containerd daemon to v1.1.2
2018-08-14 15:07:14 -07:00
Tejaswini Duggaraju
a86643ba40 Select polling based watcher for Windows log watcher
Signed-off-by: Tejaswini Duggaraju <naduggar@microsoft.com>
(cherry picked from commit df84cdd091)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-08-14 14:43:52 +02:00
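
The general shape of a polling watcher, sketched generically rather than taken from the moby implementation (path and interval are illustrative):

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// pollFile reports file growth by polling size on a ticker instead of
// relying on filesystem-event notifications, which are unreliable for
// some Windows log files.
func pollFile(path string, interval time.Duration, stop <-chan struct{}) {
	var lastSize int64
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		select {
		case <-stop:
			return
		case <-ticker.C:
			info, err := os.Stat(path)
			if err != nil {
				continue // the file may not exist yet
			}
			if size := info.Size(); size != lastSize {
				fmt.Printf("file grew: %d -> %d bytes\n", lastSize, size)
				lastSize = size
			}
		}
	}
}

func main() {
	stop := make(chan struct{})
	go pollFile(`C:\logs\app.log`, 200*time.Millisecond, stop)
	time.Sleep(2 * time.Second)
	close(stop)
}
```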
Sebastiaan van Stijn
460fb308ed Bump containerd daemon to v1.1.2
Updates cri version to 1.0.4, to add `max-container-log-line-size`

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 9e773a12fb)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-08-08 10:48:31 +02:00
Andrew Hsu
c4a6ecfcd5 Merge pull request #25 from thaJeztah/18.06-backport-error_when_base_name_resolved_to_blank
[18.06] Return error if basename is expanded to blank
2018-08-07 16:00:37 -07:00
Andrew Hsu
576e49dd6a Merge pull request #21 from tiborvass/18.06-vendor-buildkit
[18.06] Set BuildKit's ExportedProduct variable to show useful errors in the future
2018-08-07 11:51:29 -07:00
Andrew Hsu
5150e8235c Merge pull request #32 from thaJeztah/18.06-bump_swarmkit
[18.06] Bump SwarmKit to 8852e88
2018-08-06 20:59:01 -07:00
Andrew Hsu
b974b42acf Merge pull request #29 from thaJeztah/18.06-disable-cri
[18.06] disable containerd CRI plugin
2018-08-06 17:39:11 -07:00
Yuichiro Kaneko
226fadc9fc Return error if basename is expanded to blank
Fix: https://github.com/moby/moby/issues/37325

Signed-off-by: Yuichiro Kaneko <spiketeika@gmail.com>
(cherry picked from commit c9542d313e)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-08-06 15:50:22 +02:00
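
A hypothetical helper illustrating the check (not the actual moby function): after variable expansion, a name whose basename is blank must be rejected.

```go
package main

import (
	"errors"
	"fmt"
	"path/filepath"
)

// validateExpanded rejects paths that expanded to nothing useful, e.g.
// COPY with an unset $VAR, instead of silently producing a blank name.
func validateExpanded(expanded string) error {
	base := filepath.Base(filepath.Clean(expanded))
	if base == "." || base == "/" || base == "" {
		return errors.New("cannot determine filename: name expanded to blank")
	}
	return nil
}

func main() {
	fmt.Println(validateExpanded(""))          // error
	fmt.Println(validateExpanded("/dir/name")) // <nil>
}
```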
Sebastiaan van Stijn
4f8777f6af Bump SwarmKit to 8852e8840e30d69db0b39a4a3d6447362e17c64f
Relevant changes:

- swarmkit #2593 agent: return error when failing to apply network key
- swarmkit #2645 Replace deprecated grpc functions
- swarmkit #2720 Test if error is nil before to log it
- swarmkit #2712 [orchestrator] Fix task sorting
- swarmkit #2677 [manager/orchestrator/reaper] Fix the condition used for skipping over running tasks

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 660fa129c0)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-08-06 15:05:35 +02:00
John Howard
777d535c23 Don't invoke HCS shutdown if terminate called
Signed-off-by: John Howard <jhoward@microsoft.com>
(cherry picked from commit 5cfededc7c)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-08-04 01:21:32 +02:00
Sebastiaan van Stijn
3dbf955771 18.06: disable containerd CRI plugin
Docker 18.06 does not have a configuration option to
disable the CRI plugin, and this plugin is not very
useful if containerd is not running standalone.

This patch disables the plugin if containerd is running
as a child process of dockerd.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-07-31 15:32:13 +02:00
Tibor Vass
0832d1e1d4 validate: please vet
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 81599222fc)
Signed-off-by: Tibor Vass <tibor@docker.com>
2018-07-20 23:48:44 +00:00
Tibor Vass
89a8e672e0 builder: set buildkit's exported product variable via PRODUCT
This introduces a PRODUCT environment variable that is used to set a constant
at dockerversion.ProductName.

That is then used to set BuildKit's ExportedProduct variable in order to show
useful error messages to users when a certain version of the product doesn't
support a BuildKit feature.

Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 195919d9d6)
Signed-off-by: Tibor Vass <tibor@docker.com>
2018-07-20 23:48:44 +00:00
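
A minimal sketch of the wiring, assuming BuildKit's exported-product variable in util/apicaps and a stand-in for the generated dockerversion constant:

```go
package main

import (
	"fmt"

	"github.com/moby/buildkit/util/apicaps"
)

// ProductName stands in for the generated dockerversion.ProductName
// constant; in the real build it is baked in from the PRODUCT env var.
const ProductName = "docker" // assumed value, for illustration only

func main() {
	// Tell BuildKit which product embeds it, so capability errors can
	// say "this version of <product> does not support <feature>".
	apicaps.ExportedProduct = ProductName
	fmt.Println(apicaps.ExportedProduct)
}
```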
Tibor Vass
918ea06954 vendor: buildkit to 98f1604134f945d48538ffca0e18662337b4a850
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 0ab7c1c5ba)
Signed-off-by: Tibor Vass <tibor@docker.com>
2018-07-20 23:48:44 +00:00
Andrew Hsu
a3ef7e9a9b Merge pull request #26 from thaJeztah/18.06-backport-fix_TestExternalGraphDriver_pull
[18.06] Fix flaky TestExternalGraphDriver/pull test
2018-07-18 08:09:40 -07:00
Andrew Hsu
fe1b4aa571 Merge pull request #22 from thaJeztah/18.06-backport-errorfix
[18.06] Fix error string in docker CLI test (Windows RS5)
2018-07-18 08:08:33 -07:00
Andrew Hsu
9475fe4f57 Merge pull request #27 from thaJeztah/18.06-backport-bump_swarmkit
[18.06] Update swarmkit to 68266392a176434d282760d2d6d0ab4c68edcae6
2018-07-18 08:07:57 -07:00
Sebastiaan van Stijn
2211a7dc42 Update swarmkit to 68266392a176434d282760d2d6d0ab4c68edcae6
changes included:

- swarmkit #2706 address unassigned task leak when service is removed
- swarmkit #2676 Fix racy batching on the dispatcher
- swarmkit #2693 Fix linting issues revealed by Go 1.11

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit c9377f4552)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-07-17 20:59:07 +02:00
Sebastiaan van Stijn
8fcc0d53cb Fix flaky TestExternalGraphDriver/pull test
This test occasionally fails on s390x and Power:

    03:16:04 --- FAIL: TestExternalGraphDriver/pull (1.08s)
    03:16:04 external_test.go:402: assertion failed: error is not nil: Error: No such image: busybox:latest

Most likely these failures are caused by Docker Hub updating
the busybox:latest image before all architectures are
available.

Instead of using `:latest`, pull an image by digest, so that
the test doesn't depend on Docker Hub having all architectures
available for `:latest`.

I selected the same digest as is currently used as "frozen image"
in the Dockerfile.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 352db26d5f)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-07-17 20:44:02 +02:00
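
A pull-by-digest sketched with the Docker Go client; the digest below is a placeholder, not the frozen-image digest the test actually uses:

```go
package main

import (
	"context"
	"io"
	"log"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/client"
)

func main() {
	ctx := context.Background()
	cli, err := client.NewClientWithOpts(client.FromEnv)
	if err != nil {
		log.Fatal(err)
	}
	// Pulling by digest is immune to Docker Hub re-publishing :latest
	// before every architecture is available.
	ref := "busybox@sha256:0000000000000000000000000000000000000000000000000000000000000000"
	rc, err := cli.ImagePull(ctx, ref, types.ImagePullOptions{})
	if err != nil {
		log.Fatal(err)
	}
	defer rc.Close()
	io.Copy(io.Discard, rc) // drain the progress stream until the pull completes
}
```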
Andrew Hsu
8308a17b6a Merge pull request #24 from thaJeztah/18.06-backport-fix-internal
[18.06] Fix flakiness in TestDockerNetworkInternalMode
2018-07-16 09:23:10 -07:00
Flavio Crisciani
b4c762831c Fix flakiness in TestDockerNetworkInternalMode
Instead of waiting for the DNS to fail, try to access
a specific external IP and verify that 100% of the packets
are being lost.

Signed-off-by: Flavio Crisciani <flavio.crisciani@docker.com>
(cherry picked from commit a2bb2144b3)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-07-15 21:42:00 +02:00
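
A sketch of that assertion, shelling out to docker exec (the container name and target IP are illustrative):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// allPacketsLost pings an external IP from inside a container attached to
// the internal network and checks the loss summary.
func allPacketsLost(container, ip string) bool {
	// ping exits non-zero when packets are lost, so inspect the summary
	// line rather than the exit code.
	out, _ := exec.Command("docker", "exec", container,
		"ping", "-c", "4", "-W", "1", ip).CombinedOutput()
	return strings.Contains(string(out), "100% packet loss")
}

func main() {
	fmt.Println("fully isolated:", allPacketsLost("c1", "8.8.8.8"))
}
```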
Sandeep Bansal
d025942c33 Fix error string in docker CLI test
Signed-off-by: Sandeep Bansal <sabansal@microsoft.com>
(cherry picked from commit 76ace9bb5e)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-07-12 13:52:32 +02:00
Andrew Hsu
371b590ace Merge pull request #20 from thaJeztah/18.06-backport-do_not_Healthcheck_RUN_command
[18.06] Ensure RUN instructions run without Healthcheck
2018-07-11 17:47:16 -07:00
Yuichiro Kaneko
160be68bbd Ensure RUN instructions run without Healthcheck
Before this commit, a Healthcheck would run during a RUN
instruction if a HEALTHCHECK instruction appeared before it.
By passing `withoutHealthcheck` to `copyRunConfig`,
RUN instructions now always run without a Healthcheck.

Fix: https://github.com/moby/moby/issues/37362

Signed-off-by: Yuichiro Kaneko <spiketeika@gmail.com>
(cherry picked from commit 44e08d8a7d)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-07-11 20:39:49 +02:00
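
A pared-down sketch of the modifier pattern the message describes; the real builder types are richer than these stand-ins:

```go
package main

import "fmt"

// runConfig is a simplified stand-in for the builder's container config.
type runConfig struct {
	Cmd         []string
	Healthcheck []string // the real field is a *container.HealthConfig
}

type runConfigModifier func(*runConfig)

// withoutHealthcheck strips any inherited HEALTHCHECK so it never applies
// to the intermediate container executing a build-time RUN command.
func withoutHealthcheck() runConfigModifier {
	return func(c *runConfig) { c.Healthcheck = nil }
}

// copyRunConfig clones the config and applies the given modifiers.
func copyRunConfig(base *runConfig, mods ...runConfigModifier) *runConfig {
	cfg := *base
	for _, m := range mods {
		m(&cfg)
	}
	return &cfg
}

func main() {
	base := &runConfig{
		Cmd:         []string{"make"},
		Healthcheck: []string{"CMD", "curl", "-f", "http://localhost/"},
	}
	run := copyRunConfig(base, withoutHealthcheck())
	fmt.Println(run.Healthcheck == nil) // true: RUN executes without a healthcheck
}
```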
Sebastiaan van Stijn
26055afbd4 Merge pull request #19 from thaJeztah/18.06-backport-bump_libnetwork
[18.06] Bump libnetwork to d00ceed44cc447c77f25cdf5d59e83163bdcb4c9
2018-07-11 19:55:11 +02:00
Sebastiaan van Stijn
f2dffe3b15 Merge pull request #18 from tiborvass/18.06-mountable-bugfix
[18.06] builder: fix duplicate calls to mountable
2018-07-11 19:54:44 +02:00
Sebastiaan van Stijn
d65a47f438 Bump libnetwork to d00ceed44cc447c77f25cdf5d59e83163bdcb4c9
The absence of the file /proc/sys/net/ipv6/conf/all/disable_ipv6
doesn't appear to affect functionality, at least at this time.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit d58c4cbe6c)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-07-11 12:46:51 +02:00
Tonis Tiigi
622743a9b5 builder: fix duplicate calls to mountable
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit ffa7233d15)
Signed-off-by: Tibor Vass <tibor@docker.com>
2018-07-11 02:34:47 +00:00
Andrew Hsu
4fc6e3ab7b Merge pull request #17 from thaJeztah/18.06-backport-bump_containerd_1.1.1
[18.06] Bump containerd daemon to v1.1.1
2018-07-10 15:25:14 -07:00
Andrew Hsu
6abf11d448 Merge pull request #15 from thaJeztah/18.06-backport-vendor-containerd
[18.06] vendor: update containerd to b41633746
2018-07-10 09:28:55 -07:00
Andrew Hsu
c863835e99 Merge pull request #16 from thaJeztah/18.06-backport-scalable-lb
[18.06] Improve scalability of the Linux load balancing
2018-07-10 09:26:50 -07:00
Brian Goff
df5aa018c6 Bump containerd daemon to v1.1.1
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit c083eb7595)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-07-10 01:50:02 +02:00
Andrew Hsu
22a77b0b92 Merge pull request #11 from thaJeztah/18.06-fix_bindmount_src_create_race
[18.06] Fix bindmount autocreate race
2018-07-09 14:53:50 -07:00
Chris Telfer
1c4232670a Bump libnetwork to 3ac297bc
Bump libnetwork to 3ac297bc7fd0afec9051bbb47024c9bc1d75bf5b in order to
get fix 0c3d9f00, which addresses a flaw that the scalable load balancing
code revealed: attempting to print sandbox IDs when the sandbox name was
too short resulted in a goroutine panic. In the previous code this could
occur with sandboxes whose names were 1 or 2 characters, but due to
naming updates in the scalable load balancing code it could now occur
for networks whose name was 3 characters, and at least one of the
integration tests employed such networks (named 'foo', 'bar' and 'baz').

This update also brings in several other changes:
 * 6c7c6017 - Fix error handling about bridgeSetup
 * 5ed38221 - Optimize networkDB queue
 * cfa9afdb - ndots: produce error on negative numbers
 * 5586e226 - improve error message for invalid ndots number
 * 449672e5 - Allows to set generic knobs on the Sandbox
 * 6b4c4af7 - do not ignore user-provided "ndots:0" option
 * 843a0e42 - Adjust corner case for reconnect logic

Signed-off-by: Chris Telfer <ctelfer@docker.com>
(cherry picked from commit 0e162d9923)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-07-09 20:27:41 +02:00
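
The failure class is ordinary slice truncation; an illustrative snippet, not the libnetwork code:

```go
package main

import "fmt"

// truncate shortens an ID for logging. The unguarded form id[:n] panics
// with "slice bounds out of range" whenever the name is shorter than n,
// which is exactly the class of bug described above.
func truncate(id string, n int) string {
	if len(id) < n {
		return id
	}
	return id[:n]
}

func main() {
	fmt.Println(truncate("foo", 7)) // "foo", instead of panicking on "foo"[:7]
}
```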
Chris Telfer
cd3596eabc Update moby to use scalable-lb libnetwork APIs
This patch is required for the updated version of libnetwork and entails
two minor changes.

First, it passes the new libnetwork.NetworkDeleteOptionRemoveLB option to
the network.Delete() method to automatically remove the load balancing
endpoint for ingress networks. This allows removal of the
deleteLoadBalancerSandbox() function, whose functionality is now within
libnetwork.

The second change is to allocate a load balancer endpoint IP address for
all overlay networks rather than just "ingress" and Windows overlay
networks.  Swarmkit is already performing this allocation, but moby was
not making use of these IP addresses for Linux overlay networks (except
ingress).  The current version of libnetwork makes use of these IP
addresses by creating a load balancing sandbox and endpoint, similar to
ingress's, for all overlay networks, and putting all load balancing state
for a given node in that sandbox only.  This reduces the amount of Linux
kernel state required per node.

In the prior scheme, libnetwork would program each container's network
namespace with every piece of load balancing state for every other
container that shared *any* network with the first container.  This
meant that the amount of kernel state on a given node scaled with the
square of the number of services in the cluster and with the square of
the number of containers per service.  With the new scheme, kernel state
at each node scales linearly with the number of services and the number
of containers per service.  This also reduces the number of system calls
required to add or remove tasks and containers.  Previously the number
of system calls required grew linearly with the number of other
tasks that shared a network with the container.  Now the number of
system calls grows linearly only with the number of networks that the
task/container is attached to.  This results in a significant
performance improvement when adding services to, and removing them
from, a cluster that is already heavily loaded.

The primary disadvantage to this scheme is that it requires the
allocation of an additional IP address per subnet for every node in
the cluster that has a task on the given subnet.  However, as
mentioned, swarmkit is already allocating these IP addresses for every
node and they are going unused.  Future swarmkit modifications should be
examined to only allocate said IP addresses when nodes actually require
them.

Signed-off-by: Chris Telfer <ctelfer@docker.com>
(cherry picked from commit 8e0f6bc903)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-07-09 20:09:14 +02:00
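
For a rough sense of scale (illustrative numbers, not from the patch): on a node whose containers span 100 services of 10 tasks each over shared networks, the old scheme programs on the order of (100 × 10)² = 1,000,000 load-balancing entries across that node's container namespaces, while the new scheme keeps on the order of 100 × 10 = 1,000 entries in a single per-network load-balancing sandbox.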
Chris Telfer
0cd9442c75 bump libnetwork to b0186632
Bump libnetwork to b0186632522c68f4e1222c4f6d7dbe518882024f.   This
includes the following changes:
 * Dockerize protocol buffer generation and update (78d9390a..e12dd44c)
 * Use new plugin interfaces provided by plugin pkg (be94e134)
 * Improve linux load-balancing scalability (5111c24e..366b9110)

Signed-off-by: Chris Telfer <ctelfer@docker.com>
(cherry picked from commit 92335eaef1)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-07-09 20:08:39 +02:00
Tonis Tiigi
5e14dc7cb7 vendor: update containerd to b41633746
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit f0e6158266)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-07-09 10:37:32 +02:00
Andrew Hsu
73ca638f39 Merge pull request #13 from thaJeztah/18.06-backport_bump-swarmkit
[18.06] Bump swarmkit to include task reaper fixes and more metrics.
2018-07-08 12:10:20 -07:00
Andrew Hsu
496e208a24 Merge pull request #14 from thaJeztah/18.06-backport-CVE-2018-10892
[18.06] Add /proc/acpi to masked paths
2018-07-06 16:50:13 -07:00
Antonio Murdaca
caf82772b7 Add /proc/acpi to masked paths
The default OCI Linux spec in oci/defaults{_linux}.go in Docker/Moby
from 1.11 to current upstream master does not block /proc/acpi
pathnames, allowing attackers to modify the host's hardware, such as
enabling or disabling Bluetooth or turning keyboard brightness up or
down. SELinux prevents all of this if enabled.

Signed-off-by: Antonio Murdaca <runcom@redhat.com>
(cherry picked from commit 569b9702a5)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-07-06 15:59:05 +02:00
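
Conceptually the fix is one more entry in the spec's masked paths. A minimal sketch with the OCI runtime-spec Go types (the neighboring entries are shown for context only):

```go
package main

import (
	"fmt"

	specs "github.com/opencontainers/runtime-spec/specs-go"
)

func main() {
	s := specs.Spec{
		Linux: &specs.Linux{
			// Paths the runtime masks inside the container so they read
			// as empty; /proc/acpi is the entry added by this fix.
			MaskedPaths: []string{
				"/proc/kcore",
				"/proc/keys",
				"/proc/acpi",
			},
		},
	}
	fmt.Println(s.Linux.MaskedPaths)
}
```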
Ying Li
ea4127cc0e Bump swarmkit to include task reaper fixes and more metrics.
This includes the following behavior-modifying PRs:

- docker/swarmkit#2673
- docker/swarmkit#2669
- docker/swarmkit#2675
- docker/swarmkit#2664

Signed-off-by: Ying Li <ying.li@docker.com>
(cherry picked from commit b322705750)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-07-06 00:43:16 +02:00
Brian Goff
8ade047f3c Fix bindmount autocreate race
When using the mounts API, bind mounts are not supposed to be
automatically created.

Before this patch there is a race condition between validating that a
bind path exists and actually setting up the bind mount: the bind path
may exist during validation but be removed before mountpoint setup.

This adds a field to the mountpoint struct to ensure that bind paths
requested over the mounts API are never automatically created.

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit 1caeb79963)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-07-05 21:01:17 +02:00
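
A sketch of the idea with stand-in types; the field name is illustrative, not necessarily the one the patch added:

```go
package main

import (
	"fmt"
	"os"
)

// MountPoint is a pared-down stand-in for moby's mount struct.
type MountPoint struct {
	Source string
	// SkipMountpointCreation records at validation time that this bind
	// came in through the mounts API and must never be auto-created.
	SkipMountpointCreation bool
}

// setup re-checks existence at mount time instead of trusting the earlier
// validation, closing the window in which the path could be removed.
func (m *MountPoint) setup() error {
	if m.SkipMountpointCreation {
		if _, err := os.Stat(m.Source); err != nil {
			return fmt.Errorf("bind source %s: %w", m.Source, err)
		}
		return nil
	}
	// Legacy binds (docker run -v) are still created on demand.
	return os.MkdirAll(m.Source, 0o755)
}

func main() {
	m := &MountPoint{Source: "/no/such/path", SkipMountpointCreation: true}
	fmt.Println(m.setup()) // fails instead of silently creating the directory
}
```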
Andrew Hsu
85b4dbd3db Merge pull request #4 from thaJeztah/18.06-backport-register-oci-mediatypes
[18.06] Register OCI image media types
2018-07-05 11:52:39 -07:00
Andrew Hsu
cba80832a6 Merge pull request #3 from thaJeztah/18.06-backport-update-windows-manifest-sorting
[18.06] LCOW: Prefer Windows over Linux in a manifest list
2018-07-05 11:52:18 -07:00
Andrew Hsu
0d029b0a42 Merge pull request #2 from thaJeztah/18.06-update_go_winio
[18.06] Update Microsoft/go-winio to 0.4.8
2018-07-05 10:21:43 -07:00
Andrew Hsu
c9bfc3c842 Merge pull request #10 from thaJeztah/18.06-update_buildkit
[18.06] vendor: update buildkit to 9acf51e491
2018-07-05 08:41:47 -07:00
Tibor Vass
12eee7ce0e Merge pull request #1 from thaJeztah/18.03-update-containerd-1.1.1-rc.2
[18.06] Update containerd to v1.1.1-rc.2
2018-07-05 08:00:22 -07:00
Andrew Hsu
394fdb711c Merge pull request #7 from tiborvass/18.06-cp
[18.06] cherry-pick set for rc2
2018-07-05 02:00:07 -07:00
Tonis Tiigi
c9757e3efc vendor: update buildkit to 9acf51e491
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit 6144f50e55)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-07-04 16:34:34 +02:00
Tonis Tiigi
3500f3f27e builder: do not send duplicate status for completed jobs
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit 6f7dd9428e)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-07-04 16:34:28 +02:00
Tibor Vass
f030a49747 api: Change Platform field back to string (temporary workaround)
This partially reverts https://github.com/moby/moby/pull/37350

Although specs.Platform is desirable in the API, there is more work
to be done on helper functions, namely containerd's platforms.Parse,
which assumes the default platform of the Go runtime.

That prevents a client from using the recommended Parse function to
retrieve a specs.Platform object.

With this change, no parsing is expected from the client.

Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit facad55744)
Signed-off-by: Tibor Vass <tibor@docker.com>
2018-07-04 00:28:41 +00:00
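
The limitation can be seen directly with containerd's platforms package; a small sketch:

```go
package main

import (
	"fmt"
	"log"

	"github.com/containerd/containerd/platforms"
)

func main() {
	// Parse fills unspecified fields from the Go runtime it was compiled
	// for, so a Linux/amd64 client parsing "windows" gets the client's
	// GOARCH, which is not necessarily what the daemon can run.
	p, err := platforms.Parse("windows")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("OS=%s Arch=%s\n", p.OS, p.Architecture)
}
```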
Andrew Hsu
d29c1fa7d1 Merge pull request #5 from thaJeztah/18.06-bump-libnetwork
[18.06] bump libnetwork to 430c00a
2018-07-03 16:15:29 -07:00
Tibor Vass
26dd527fab builder: return image ID in API when using buildkit
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit ca8022ec63)
Signed-off-by: Tibor Vass <tibor@docker.com>
2018-07-03 22:47:03 +00:00
Chris Telfer
cc1b68d5f2 Update tests w/ new libnetwork constraints
The TestDockerNetworkIPAMMultipleNetworks test allocates several
networks simultaneously with overlapping IP addresses.  Libnetwork now
forbids this.  Adjust the test case to use distinct IP ranges for the
networks it creates.

Signed-off-by: Chris Telfer <ctelfer@docker.com>
(cherry picked from commit efb7909bef)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-07-03 22:25:41 +02:00
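
Creating each test network with its own subnet looks roughly like this with the Docker Go client (network names and ranges are illustrative):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/api/types/network"
	"github.com/docker/docker/client"
)

func main() {
	ctx := context.Background()
	cli, err := client.NewClientWithOpts(client.FromEnv)
	if err != nil {
		log.Fatal(err)
	}
	// Distinct, non-overlapping subnets, as libnetwork now requires.
	for i, subnet := range []string{"10.10.1.0/24", "10.10.2.0/24"} {
		resp, err := cli.NetworkCreate(ctx, fmt.Sprintf("testnet%d", i), types.NetworkCreate{
			IPAM: &network.IPAM{
				Config: []network.IPAMConfig{{Subnet: subnet}},
			},
		})
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("created", resp.ID)
	}
}
```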
Derek McGowan
126b5bce9d Register OCI image media types
OCI types are backwards compatible with Docker manifest
types; however, the media types must be registered.

Signed-off-by: Derek McGowan <derek@mcgstyle.net>
(cherry picked from commit c4f0515837)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-07-03 22:25:07 +02:00
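
The media types in question, via the OCI image-spec Go constants (this lists them; it is not the moby registration code itself):

```go
package main

import (
	"fmt"

	ocispec "github.com/opencontainers/image-spec/specs-go/v1"
)

func main() {
	// OCI media types the puller must recognize; their payloads
	// deserialize along the same paths as the Docker schema2 equivalents.
	for _, mt := range []string{
		ocispec.MediaTypeImageIndex,     // application/vnd.oci.image.index.v1+json
		ocispec.MediaTypeImageManifest,  // application/vnd.oci.image.manifest.v1+json
		ocispec.MediaTypeImageConfig,    // application/vnd.oci.image.config.v1+json
		ocispec.MediaTypeImageLayerGzip, // application/vnd.oci.image.layer.v1.tar+gzip
	} {
		fmt.Println(mt)
	}
}
```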
John Stephens
605cc35dc6 LCOW: Prefer Windows over Linux in a manifest list
When a manifest list contains both Linux and Windows images, always
prefer Windows when the platform OS is unspecified. Also, filter out any
Windows images with a higher build than the host, since they cannot run.

Signed-off-by: John Stephens <johnstep@docker.com>
(cherry picked from commit ddcdb7255d)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-07-03 22:24:27 +02:00
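
A pared-down sketch of the selection rule; the types and the numeric build comparison are simplifications of the real manifest-list handling:

```go
package main

import (
	"fmt"
	"sort"
)

// entry is a simplified manifest-list entry.
type entry struct {
	OS    string
	Build int // simplified numeric Windows build
}

// pick drops Windows images newer than the host build (they cannot run),
// then prefers Windows over Linux when no OS was requested.
func pick(entries []entry, hostBuild int) *entry {
	usable := entries[:0]
	for _, e := range entries {
		if e.OS == "windows" && e.Build > hostBuild {
			continue
		}
		usable = append(usable, e)
	}
	sort.SliceStable(usable, func(i, j int) bool {
		return usable[i].OS == "windows" && usable[j].OS != "windows"
	})
	if len(usable) == 0 {
		return nil
	}
	return &usable[0]
}

func main() {
	got := pick([]entry{
		{OS: "linux"},
		{OS: "windows", Build: 17763}, // too new for the host below
		{OS: "windows", Build: 17134},
	}, 17134)
	fmt.Printf("selected: %+v\n", *got)
}
```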
Sebastiaan van Stijn
d3a04c2092 Update Microsoft/go-winio to 0.4.8
Fixes named pipe support for hyper-v isolated containers

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 74095588ba)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-07-03 22:23:55 +02:00
Derek McGowan
bcfbeb8998 Update containerd to v1.1.1-rc.2
Signed-off-by: Derek McGowan <derek@mcgstyle.net>
(cherry picked from commit 735517928b)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-07-03 22:23:01 +02:00
Chris Telfer
a0c213f90c bump libnetwork to 430c00a
Bump libnetwork to 430c00a6a6b3dfdd774f21e1abd4ad6b0216c629.  This
includes the following moby-affecting changes:

 * Update vendoring for go-sockaddr (8df9f31a)
 * Fix inconsistent subnet allocation by preventing allocation of
   overlapping subnets (8579c5d2)
 * Handle IPv6 literals correctly in port bindings (474fcaf4)
 * Update vendoring for miekg/dns (8f307ac8)
 * Avoid subnet reallocation until required (9756ff7ed)
 * Bump libnetwork build to use go version 1.10.2 (603d2c1a)
 * Unwrap error type returned by PluginGetter (aacec8e1)
 * Update vendored components to match moby (d768021dd)
 * Add retry field to cluster-peers probe (dbbd06a7)
 * Fix net driver response loss on createEndpoint (1ab6e506)
   (fixes https://github.com/docker/for-linux/issues/348)

Signed-off-by: Chris Telfer <ctelfer@docker.com>
(cherry picked from commit f155f828a2)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-07-03 19:17:51 +02:00
5658 changed files with 334615 additions and 843420 deletions

@@ -3,14 +3,11 @@ curators:
- alexellis
- andrewhsu
- anonymuse
- arkodg
- chanwit
- ehazlett
- fntlnz
- gianarb
- kolyshkin
- mgoelzer
- olljanat
- programmerq
- rheinwein
- ripcurld0
@@ -18,5 +15,3 @@ curators:
features:
- comments
- pr_description_required

.dockerignore

@@ -1,6 +1,7 @@
.git
.go-pkg-cache
.gopath
bundles
.gopath
vendor/pkg
.go-pkg-cache
.git
hack/integration-cli-on-swarm/integration-cli-on-swarm

.github/CODEOWNERS

@@ -4,13 +4,17 @@
# KEEP THIS FILE SORTED. Order is important. Last match takes precedence.
builder/** @tonistiigi
client/** @dnephin
contrib/mkimage/** @tianon
daemon/graphdriver/devmapper/** @rhvgoyal
daemon/graphdriver/lcow/** @johnstep
daemon/graphdriver/lcow/** @johnstep @jhowardmsft
daemon/graphdriver/overlay/** @dmcgowan
daemon/graphdriver/overlay2/** @dmcgowan
daemon/graphdriver/windows/** @johnstep
daemon/graphdriver/windows/** @johnstep @jhowardmsft
daemon/logger/awslogs/** @samuelkarp
hack/** @tianon
hack/integration-cli-on-swarm/** @AkihiroSuda
integration-cli/** @vdemeester
integration/** @vdemeester
plugin/** @cpuguy83
project/** @thaJeztah

.gitignore

@@ -3,7 +3,6 @@
# please consider a global .gitignore https://help.github.com/articles/ignoring-files
*.exe
*.exe~
*.gz
*.orig
test.main
.*.swp
@@ -17,7 +16,9 @@ autogen/
bundles/
cmd/dockerd/dockerd
contrib/builder/rpm/*/changelog
dockerversion/version_autogen.go
dockerversion/version_autogen_unix.go
vendor/pkg/
go-test-report.json
hack/integration-cli-on-swarm/integration-cli-on-swarm
coverage.txt
profile.out
junit-report.xml

.mailmap

@@ -10,8 +10,6 @@
<mr.wrfly@gmail.com> <wrfly@users.noreply.github.com>
Aaron L. Xu <liker.xu@foxmail.com>
Abhinandan Prativadi <abhi@docker.com>
Adam Dobrawy <naczelnik@jawnosc.tk>
Adam Dobrawy <naczelnik@jawnosc.tk> <ad-m@users.noreply.github.com>
Adrien Gallouët <adrien@gallouet.fr> <angt@users.noreply.github.com>
Ahmed Kamal <email.ahmedkamal@googlemail.com>
Ahmet Alp Balkan <ahmetb@microsoft.com> <ahmetalpbalkan@gmail.com>
@@ -19,9 +17,7 @@ AJ Bowen <aj@soulshake.net>
AJ Bowen <aj@soulshake.net> <aj@gandi.net>
AJ Bowen <aj@soulshake.net> <amy@gandi.net>
Akihiro Matsushima <amatsusbit@gmail.com> <amatsus@users.noreply.github.com>
Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp> <suda.akihiro@lab.ntt.co.jp>
Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp> <suda.kyoto@gmail.com>
Akihiro Suda <suda.akihiro@lab.ntt.co.jp> <suda.kyoto@gmail.com>
Aleksa Sarai <asarai@suse.de>
Aleksa Sarai <asarai@suse.de> <asarai@suse.com>
Aleksa Sarai <asarai@suse.de> <cyphar@cyphar.com>
@@ -37,11 +33,8 @@ Alexandre Beslic <alexandre.beslic@gmail.com> <abronan@docker.com>
Alicia Lauerman <alicia@eta.im> <allydevour@me.com>
Allen Sun <allensun.shl@alibaba-inc.com> <allen.sun@daocloud.io>
Allen Sun <allensun.shl@alibaba-inc.com> <shlallen1990@gmail.com>
Andrea Denisse Gómez <crypto.andrea@protonmail.ch>
Andrew Weiss <andrew.weiss@docker.com> <andrew.weiss@microsoft.com>
Andrew Weiss <andrew.weiss@docker.com> <andrew.weiss@outlook.com>
Andrey Kolomentsev <andrey.kolomentsev@docker.com>
Andrey Kolomentsev <andrey.kolomentsev@docker.com> <andrey.kolomentsev@gmail.com>
André Martins <aanm90@gmail.com> <martins@noironetworks.com>
Andy Rothfusz <github@developersupport.net> <github@metaliveblog.com>
Andy Smith <github@anarkystic.com>
@@ -54,8 +47,6 @@ Antonio Murdaca <antonio.murdaca@gmail.com> <runcom@users.noreply.github.com>
Anuj Bahuguna <anujbahuguna.dev@gmail.com>
Anuj Bahuguna <anujbahuguna.dev@gmail.com> <abahuguna@fiberlink.com>
Anusha Ragunathan <anusha.ragunathan@docker.com> <anusha@docker.com>
Arko Dasgupta <arko.dasgupta@docker.com>
Arko Dasgupta <arko.dasgupta@docker.com> <arkodg@users.noreply.github.com>
Arnaud Porterie <arnaud.porterie@docker.com>
Arnaud Porterie <arnaud.porterie@docker.com> <icecrime@gmail.com>
Arthur Gautier <baloo@gandi.net> <superbaloo+registrations.github@superbaloo.net>
@@ -63,26 +54,21 @@ Avi Miller <avi.miller@oracle.com> <avi.miller@gmail.com>
Ben Bonnefoy <frenchben@docker.com>
Ben Golub <ben.golub@dotcloud.com>
Ben Toews <mastahyeti@gmail.com> <mastahyeti@users.noreply.github.com>
Benny Ng <benny.tpng@gmail.com>
Benoit Chesneau <bchesneau@gmail.com>
Bevisy Zhang <binbin36520@gmail.com>
Bhiraj Butala <abhiraj.butala@gmail.com>
Bhumika Bayani <bhumikabayani@gmail.com>
Bilal Amarni <bilal.amarni@gmail.com> <bamarni@users.noreply.github.com>
Bill Wang <ozbillwang@gmail.com> <SydOps@users.noreply.github.com>
Bily Zhang <xcoder@tenxcloud.com>
Bin Liu <liubin0329@gmail.com>
Bin Liu <liubin0329@gmail.com> <liubin0329@users.noreply.github.com>
Bingshen Wang <bingshen.wbs@alibaba-inc.com>
Boaz Shuster <ripcurld.github@gmail.com>
Boqin Qin <bobbqqin@gmail.com>
Brandon Philips <brandon.philips@coreos.com> <brandon@ifup.co>
Brandon Philips <brandon.philips@coreos.com> <brandon@ifup.org>
Brent Salisbury <brent.salisbury@docker.com> <brent@docker.com>
Brian Goff <cpuguy83@gmail.com>
Brian Goff <cpuguy83@gmail.com> <bgoff@cpuguy83-mbp.home>
Brian Goff <cpuguy83@gmail.com> <bgoff@cpuguy83-mbp.local>
Carlos de Paula <me@carlosedp.com>
Chander Govindarajan <chandergovind@gmail.com>
Chao Wang <wangchao.fnst@cn.fujitsu.com> <chaowang@localhost.localdomain>
Charles Hooper <charles.hooper@dotcloud.com> <chooper@plumata.com>
@@ -91,17 +77,12 @@ Chen Chuanliang <chen.chuanliang@zte.com.cn>
Chen Mingjie <chenmingjie0828@163.com>
Chen Qiu <cheney-90@hotmail.com>
Chen Qiu <cheney-90@hotmail.com> <21321229@zju.edu.cn>
Chengfei Shang <cfshang@alauda.io>
Chris Dias <cdias@microsoft.com>
Chris McKinnel <chris.mckinnel@tangentlabs.co.uk>
Chris Price <cprice@mirantis.com>
Chris Price <cprice@mirantis.com> <chris.price@docker.com>
Christopher Biscardi <biscarch@sketcht.com>
Christopher Latham <sudosurootdev@gmail.com>
Christy Norman <christy@linux.vnet.ibm.com>
Chun Chen <ramichen@tencent.com> <chenchun.feed@gmail.com>
Corbin Coleman <corbin.coleman@docker.com>
Cristian Ariza <dev@cristianrz.com>
Cristian Staretu <cristian.staretu@gmail.com>
Cristian Staretu <cristian.staretu@gmail.com> <unclejack@users.noreply.github.com>
Cristian Staretu <cristian.staretu@gmail.com> <unclejacksons@gmail.com>
@@ -116,7 +97,6 @@ Daniel Garcia <daniel@danielgarcia.info>
Daniel Gasienica <daniel@gasienica.ch> <dgasienica@zynga.com>
Daniel Goosen <daniel.goosen@surveysampling.com> <djgoosen@users.noreply.github.com>
Daniel Grunwell <mwgrunny@gmail.com>
Daniel Hiltgen <daniel.hiltgen@docker.com> <dhiltgen@users.noreply.github.com>
Daniel J Walsh <dwalsh@redhat.com>
Daniel Mizyrycki <daniel.mizyrycki@dotcloud.com> <daniel@dotcloud.com>
Daniel Mizyrycki <daniel.mizyrycki@dotcloud.com> <mzdaniel@glidelink.net>
@@ -124,7 +104,6 @@ Daniel Mizyrycki <daniel.mizyrycki@dotcloud.com> <root@vagrant-ubuntu-12.10.vagr
Daniel Nephin <dnephin@docker.com> <dnephin@gmail.com>
Daniel Norberg <dano@spotify.com> <daniel.norberg@gmail.com>
Daniel Watkins <daniel@daniel-watkins.co.uk>
Daniel Zhang <jmzwcn@gmail.com>
Danny Yates <danny@codeaholics.org> <Danny.Yates@mailonline.co.uk>
Darren Shepherd <darren.s.shepherd@gmail.com> <darren@rancher.com>
Dattatraya Kumbhar <dattatraya.kumbhar@gslab.com>
@@ -139,14 +118,9 @@ Deshi Xiao <dxiao@redhat.com> <dsxiao@dataman-inc.com>
Deshi Xiao <dxiao@redhat.com> <xiaods@gmail.com>
Diego Siqueira <dieg0@live.com>
Diogo Monica <diogo@docker.com> <diogo.monica@gmail.com>
Dmitry Sharshakov <d3dx12.xx@gmail.com>
Dmitry Sharshakov <d3dx12.xx@gmail.com> <sh7dm@outlook.com>
Dominic Yin <yindongchao@inspur.com>
Dominik Honnef <dominik@honnef.co> <dominikh@fork-bomb.org>
Doug Davis <dug@us.ibm.com> <duglin@users.noreply.github.com>
Doug Tangren <d.tangren@gmail.com>
Drew Erny <derny@mirantis.com>
Drew Erny <derny@mirantis.com> <drew.erny@docker.com>
Elan Ruusamäe <glen@pld-linux.org>
Elan Ruusamäe <glen@pld-linux.org> <glen@delfi.ee>
Elango Sivanandam <elango.siva@docker.com>
@@ -172,20 +146,14 @@ Fengtu Wang <wangfengtu@huawei.com> <wangfengtu@huawei.com>
Francisco Carriedo <fcarriedo@gmail.com>
Frank Rosquin <frank.rosquin+github@gmail.com> <frank.rosquin@gmail.com>
Frederick F. Kautz IV <fkautz@redhat.com> <fkautz@alumni.cmu.edu>
Fu JinLin <withlin@yeah.net>
Gabriel Nicolas Avellaneda <avellaneda.gabriel@gmail.com>
Gaetan de Villele <gdevillele@gmail.com>
Gang Qiao <qiaohai8866@gmail.com> <1373319223@qq.com>
Geon Kim <geon0250@gmail.com>
George Kontridze <george@bugsnag.com>
Gerwim Feiken <g.feiken@tfe.nl> <gerwim@gmail.com>
Giampaolo Mancini <giampaolo@trampolineup.com>
Giovan Isa Musthofa <giovanism@outlook.co.id>
Gopikannan Venugopalsamy <gopikannan.venugopalsamy@gmail.com>
Gou Rao <gou@portworx.com> <gourao@users.noreply.github.com>
Grant Millar <rid@cylo.io>
Grant Millar <rid@cylo.io> <grant@cylo.io>
Grant Millar <rid@cylo.io> <grant@seednet.eu>
Greg Stephens <greg@udon.org>
Guillaume J. Charmes <guillaume.charmes@docker.com> <charmes.guillaume@gmail.com>
Guillaume J. Charmes <guillaume.charmes@docker.com> <guillaume.charmes@dotcloud.com>
@@ -200,7 +168,6 @@ Hakan Özler <hakan.ozler@kodcu.com>
Hao Shu Wei <haosw@cn.ibm.com>
Hao Shu Wei <haosw@cn.ibm.com> <haoshuwei1989@163.com>
Harald Albers <github@albersweb.de> <albers@users.noreply.github.com>
Harald Niesche <harald@niesche.de>
Harold Cooper <hrldcpr@gmail.com>
Harry Zhang <harryz@hyper.sh> <harryzhang@zju.edu.cn>
Harry Zhang <harryz@hyper.sh> <resouer@163.com>
@@ -208,7 +175,6 @@ Harry Zhang <harryz@hyper.sh> <resouer@gmail.com>
Harry Zhang <resouer@163.com>
Harshal Patil <harshal.patil@in.ibm.com> <harche@users.noreply.github.com>
Helen Xie <chenjg@harmonycloud.cn>
Hiroyuki Sasagawa <hs19870702@gmail.com>
Hollie Teal <hollie@docker.com>
Hollie Teal <hollie@docker.com> <hollie.teal@docker.com>
Hollie Teal <hollie@docker.com> <hollietealok@users.noreply.github.com>
@@ -216,38 +182,27 @@ Hu Keping <hukeping@huawei.com>
Huu Nguyen <huu@prismskylabs.com> <whoshuu@gmail.com>
Hyzhou Zhy <hyzhou.zhy@alibaba-inc.com>
Hyzhou Zhy <hyzhou.zhy@alibaba-inc.com> <1187766782@qq.com>
Ian Campbell <ian.campbell@docker.com>
Ian Campbell <ian.campbell@docker.com> <ijc@docker.com>
Ilya Khlopotov <ilya.khlopotov@gmail.com>
Iskander Sharipov <quasilyte@gmail.com>
Ivan Markin <sw@nogoegst.net> <twim@riseup.net>
Jack Laxson <jackjrabbit@gmail.com>
Jacob Atzen <jacob@jacobatzen.dk> <jatzen@gmail.com>
Jacob Tomlinson <jacob@tom.linson.uk> <jacobtomlinson@users.noreply.github.com>
Jaivish Kothari <janonymous.codevulture@gmail.com>
James Nesbitt <jnesbitt@mirantis.com>
James Nesbitt <jnesbitt@mirantis.com> <james.nesbitt@wunderkraut.com>
Jamie Hannaford <jamie@limetree.org> <jamie.hannaford@rackspace.com>
Jean Rouge <rougej+github@gmail.com> <jer329@cornell.edu>
Jean-Baptiste Barth <jeanbaptiste.barth@gmail.com>
Jean-Baptiste Dalido <jeanbaptiste@appgratis.com>
Jean-Tiare Le Bigot <jt@yadutaf.fr> <admin@jtlebi.fr>
Jeff Anderson <jeff@docker.com> <jefferya@programmerq.net>
Jeff Nickoloff <jeff.nickoloff@gmail.com> <jeff@allingeek.com>
Jeroen Franse <jeroenfranse@gmail.com>
Jessica Frazelle <jess@oxide.computer>
Jessica Frazelle <jess@oxide.computer> <acidburn@docker.com>
Jessica Frazelle <jess@oxide.computer> <acidburn@google.com>
Jessica Frazelle <jess@oxide.computer> <acidburn@microsoft.com>
Jessica Frazelle <jess@oxide.computer> <jess@docker.com>
Jessica Frazelle <jess@oxide.computer> <jess@mesosphere.com>
Jessica Frazelle <jess@oxide.computer> <jessfraz@google.com>
Jessica Frazelle <jess@oxide.computer> <jfrazelle@users.noreply.github.com>
Jessica Frazelle <jess@oxide.computer> <me@jessfraz.com>
Jessica Frazelle <jess@oxide.computer> <princess@docker.com>
Jian Liao <jliao@alauda.io>
Jiang Jinyang <jjyruby@gmail.com>
Jiang Jinyang <jjyruby@gmail.com> <jiangjinyang@outlook.com>
Jessica Frazelle <jessfraz@google.com>
Jessica Frazelle <jessfraz@google.com> <acidburn@docker.com>
Jessica Frazelle <jessfraz@google.com> <acidburn@google.com>
Jessica Frazelle <jessfraz@google.com> <jess@docker.com>
Jessica Frazelle <jessfraz@google.com> <jess@mesosphere.com>
Jessica Frazelle <jessfraz@google.com> <jfrazelle@users.noreply.github.com>
Jessica Frazelle <jessfraz@google.com> <me@jessfraz.com>
Jessica Frazelle <jessfraz@google.com> <princess@docker.com>
Jim Galasyn <jim.galasyn@docker.com>
Jiuyue Ma <majiuyue@huawei.com>
Joey Geiger <jgeiger@gmail.com>
@@ -256,22 +211,19 @@ Joffrey F <joffrey@docker.com> <f.joffrey@gmail.com>
Joffrey F <joffrey@docker.com> <joffrey@dotcloud.com>
Johan Euphrosine <proppy@google.com> <proppy@aminche.com>
John Harris <john@johnharris.io>
John Howard <github@lowenna.com>
John Howard <github@lowenna.com> <jhoward@microsoft.com>
John Howard <github@lowenna.com> <jhoward@ntdev.microsoft.com>
John Howard <github@lowenna.com> <jhowardmsft@users.noreply.github.com>
John Howard <github@lowenna.com> <John.Howard@microsoft.com>
John Howard <github@lowenna.com> <john.howard@microsoft.com>
John Howard (VM) <John.Howard@microsoft.com>
John Howard (VM) <John.Howard@microsoft.com> <jhoward@microsoft.com>
John Howard (VM) <John.Howard@microsoft.com> <jhoward@ntdev.microsoft.com>
John Howard (VM) <John.Howard@microsoft.com> <jhowardmsft@users.noreply.github.com>
John Howard (VM) <John.Howard@microsoft.com> <john.howard@microsoft.com>
John Stephens <johnstep@docker.com> <johnstep@users.noreply.github.com>
Jon Surrell <jon.surrell@gmail.com> <jon.surrell@automattic.com>
Jonathan Choy <jonathan.j.choy@gmail.com>
Jonathan Choy <jonathan.j.choy@gmail.com> <oni@tetsujinlabs.com>
Jon Surrell <jon.surrell@gmail.com> <jon.surrell@automattic.com>
Jordan Arentsen <blissdev@gmail.com>
Jordan Jennings <jjn2009@gmail.com> <jjn2009@users.noreply.github.com>
Jorit Kleine-Möllhoff <joppich@bricknet.de> <joppich@users.noreply.github.com>
Jose Diaz-Gonzalez <email@josediazgonzalez.com>
Jose Diaz-Gonzalez <email@josediazgonzalez.com> <jose@seatgeek.com>
Jose Diaz-Gonzalez <email@josediazgonzalez.com> <josegonzalez@users.noreply.github.com>
Jose Diaz-Gonzalez <jose@seatgeek.com> <josegonzalez@users.noreply.github.com>
Josh Bonczkowski <josh.bonczkowski@gmail.com>
Josh Eveleth <joshe@opendns.com> <jeveleth@users.noreply.github.com>
Josh Hawn <josh.hawn@docker.com> <jlhawn@berkeley.edu>
@@ -285,7 +237,6 @@ Justin Cormack <justin.cormack@docker.com>
Justin Cormack <justin.cormack@docker.com> <justin.cormack@unikernel.com>
Justin Cormack <justin.cormack@docker.com> <justin@specialbusservice.com>
Justin Simonelis <justin.p.simonelis@gmail.com> <justin.simonelis@PTS-JSIMON2.toronto.exclamation.com>
Justin Terry <juterry@microsoft.com>
Jérôme Petazzoni <jerome.petazzoni@docker.com> <jerome.petazzoni@dotcloud.com>
Jérôme Petazzoni <jerome.petazzoni@docker.com> <jerome.petazzoni@gmail.com>
Jérôme Petazzoni <jerome.petazzoni@docker.com> <jp@enix.org>
@@ -294,11 +245,8 @@ Kai Qiang Wu (Kennan) <wkq5325@gmail.com>
Kai Qiang Wu (Kennan) <wkq5325@gmail.com> <wkqwu@cn.ibm.com>
Kamil Domański <kamil@domanski.co>
Kamjar Gerami <kami.gerami@gmail.com>
Karthik Nayak <karthik.188@gmail.com>
Karthik Nayak <karthik.188@gmail.com> <Karthik.188@gmail.com>
Ken Cochrane <kencochrane@gmail.com> <KenCochrane@gmail.com>
Ken Herner <kherner@progress.com> <chosenken@gmail.com>
Ken Reese <krrgithub@gmail.com>
Kenfe-Mickaël Laventure <mickael.laventure@gmail.com>
Kevin Feyrer <kevin.feyrer@btinternet.com> <kevinfeyrer@users.noreply.github.com>
Kevin Kern <kaiwentan@harmonycloud.cn>
@@ -312,7 +260,6 @@ Konstantin Pelykh <kpelykh@zettaset.com>
Kotaro Yoshimatsu <kotaro.yoshimatsu@gmail.com>
Kunal Kushwaha <kushwaha_kunal_v7@lab.ntt.co.jp> <kunal.kushwaha@gmail.com>
Lajos Papp <lajos.papp@sequenceiq.com> <lalyos@yahoo.com>
Lei Gong <lgong@alauda.io>
Lei Jitang <leijitang@huawei.com>
Lei Jitang <leijitang@huawei.com> <leijitang@gmail.com>
Liang Mingqiang <mqliang.zju@gmail.com>
@@ -321,8 +268,7 @@ Liao Qingwei <liaoqingwei@huawei.com>
Linus Heckemann <lheckemann@twig-world.com>
Linus Heckemann <lheckemann@twig-world.com> <anonymouse2048@gmail.com>
Lokesh Mandvekar <lsm5@fedoraproject.org> <lsm5@redhat.com>
Lorenzo Fontana <fontanalorenz@gmail.com> <fontanalorenzo@me.com>
Lorenzo Fontana <fontanalorenz@gmail.com> <lo@linux.com>
Lorenzo Fontana <lo@linux.com> <fontanalorenzo@me.com>
Louis Opter <kalessin@kalessin.fr>
Louis Opter <kalessin@kalessin.fr> <louis@dotcloud.com>
Luca Favatella <luca.favatella@erlang-solutions.com> <lucafavatella@users.noreply.github.com>
@@ -358,11 +304,10 @@ Matthew Mosesohn <raytrac3r@gmail.com>
Matthew Mueller <mattmuelle@gmail.com>
Matthias Kühnle <git.nivoc@neverbox.com> <kuehnle@online.de>
Mauricio Garavaglia <mauricio@medallia.com> <mauriciogaravaglia@gmail.com>
Maxwell <csuhp007@gmail.com>
Maxwell <csuhp007@gmail.com> <csuhqg@foxmail.com>
Michael Crosby <michael@docker.com> <crosby.michael@gmail.com>
Michael Crosby <michael@docker.com> <crosbymichael@gmail.com>
Michael Crosby <michael@docker.com> <michael@crosbymichael.com>
Michał Gryko <github@odkurzacz.org>
Michael Hudson-Doyle <michael.hudson@canonical.com> <michael.hudson@linaro.org>
Michael Huettermann <michael@huettermann.net>
Michael Käufl <docker@c.michael-kaeufl.de> <michael-k@users.noreply.github.com>
@@ -370,9 +315,6 @@ Michael Nussbaum <michael.nussbaum@getbraintree.com>
Michael Nussbaum <michael.nussbaum@getbraintree.com> <code@getbraintree.com>
Michael Spetsiotis <michael_spets@hotmail.com>
Michal Minář <miminar@redhat.com>
Michał Gryko <github@odkurzacz.org>
Michiel de Jong <michiel@unhosted.org>
Mickaël Fortunato <morsi.morsicus@gmail.com>
Miguel Angel Alvarez Cabrerizo <doncicuto@gmail.com> <30386061+doncicuto@users.noreply.github.com>
Miguel Angel Fernández <elmendalerenda@gmail.com>
Mihai Borobocea <MihaiBorob@gmail.com> <MihaiBorobocea@gmail.com>
@@ -385,7 +327,6 @@ Moorthy RS <rsmoorthy@gmail.com> <rsmoorthy@users.noreply.github.com>
Moysés Borges <moysesb@gmail.com>
Moysés Borges <moysesb@gmail.com> <moyses.furtado@wplex.com.br>
Nace Oroz <orkica@gmail.com>
Natasha Jarus <linuxmercedes@gmail.com>
Nathan LeClaire <nathan.leclaire@docker.com> <nathan.leclaire@gmail.com>
Nathan LeClaire <nathan.leclaire@docker.com> <nathanleclaire@gmail.com>
Neil Horman <nhorman@tuxdriver.com> <nhorman@hmswarspite.think-freely.org>
@@ -397,9 +338,6 @@ Nolan Darilek <nolan@thewordnerd.info>
O.S. Tezer <ostezer@gmail.com>
O.S. Tezer <ostezer@gmail.com> <ostezer@users.noreply.github.com>
Oh Jinkyun <tintypemolly@gmail.com> <tintypemolly@Ohui-MacBook-Pro.local>
Oliver Reason <oli@overrateddev.co>
Olli Janatuinen <olli.janatuinen@gmail.com>
Olli Janatuinen <olli.janatuinen@gmail.com> <olljanat@users.noreply.github.com>
Ouyang Liduo <oyld0210@163.com>
Patrick Stapleton <github@gdi2290.com>
Paul Liljenberg <liljenberg.paul@gmail.com> <letters@paulnotcom.se>
@@ -413,7 +351,6 @@ Peter Waller <p@pwaller.net> <peter@scraperwiki.com>
Phil Estes <estesp@linux.vnet.ibm.com> <estesp@gmail.com>
Philip Alexander Etling <paetling@gmail.com>
Philipp Gillé <philipp.gille@gmail.com> <philippgille@users.noreply.github.com>
Prasanna Gautam <prasannagautam@gmail.com>
Qiang Huang <h.huangqiang@huawei.com>
Qiang Huang <h.huangqiang@huawei.com> <qhuang@10.0.2.15>
Ray Tsang <rayt@google.com> <saturnism@users.noreply.github.com>
@@ -421,12 +358,8 @@ Renaud Gaubert <rgaubert@nvidia.com> <renaud.gaubert@gmail.com>
Robert Terhaar <rterhaar@atlanticdynamic.com> <robbyt@users.noreply.github.com>
Roberto G. Hashioka <roberto.hashioka@docker.com> <roberto_hashioka@hotmail.com>
Roberto Muñoz Fernández <robertomf@gmail.com> <roberto.munoz.fernandez.contractor@bbva.com>
Robin Thoni <robin@rthoni.com>
Roman Dudin <katrmr@gmail.com> <decadent@users.noreply.github.com>
Rong Zhang <rongzhang@alauda.io>
Rongxiang Song <tinysong1226@gmail.com>
Ross Boucher <rboucher@gmail.com>
Rui Cao <ruicao@alauda.io>
Runshen Zhu <runshen.zhu@gmail.com>
Ryan Stelly <ryan.stelly@live.com>
Sakeven Jiang <jc5930@sina.cn>
@@ -442,7 +375,6 @@ Shengbo Song <thomassong@tencent.com>
Shengbo Song <thomassong@tencent.com> <mymneo@163.com>
Shih-Yuan Lee <fourdollars@gmail.com>
Shishir Mahajan <shishir.mahajan@redhat.com> <smahajan@redhat.com>
Shu-Wai Chow <shu-wai.chow@seattlechildrens.org>
Shukui Yang <yangshukui@huawei.com>
Shuwei Hao <haosw@cn.ibm.com>
Shuwei Hao <haosw@cn.ibm.com> <haoshuwei24@gmail.com>
@@ -461,12 +393,9 @@ Stefan Berger <stefanb@linux.vnet.ibm.com>
Stefan Berger <stefanb@linux.vnet.ibm.com> <stefanb@us.ibm.com>
Stefan J. Wernli <swernli@microsoft.com> <swernli@ntdev.microsoft.com>
Stefan S. <tronicum@user.github.com>
Stefan Scherer <stefan.scherer@docker.com>
Stefan Scherer <stefan.scherer@docker.com> <scherer_stefan@icloud.com>
Stephan Spindler <shutefan@gmail.com> <shutefan@users.noreply.github.com>
Stephen Day <stevvooe@gmail.com>
Stephen Day <stevvooe@gmail.com> <stephen.day@docker.com>
Stephen Day <stevvooe@gmail.com> <stevvooe@users.noreply.github.com>
Stephen Day <stephen.day@docker.com>
Stephen Day <stephen.day@docker.com> <stevvooe@users.noreply.github.com>
Steve Desmond <steve@vtsv.ca> <stevedesmond-ca@users.noreply.github.com>
Sun Gengze <690388648@qq.com>
Sun Jianbo <wonderflow.sun@gmail.com>
@@ -500,11 +429,9 @@ Toli Kuznets <toli@docker.com>
Tom Barlow <tomwbarlow@gmail.com>
Tom Sweeney <tsweeney@redhat.com>
Tõnis Tiigi <tonistiigi@gmail.com>
Trace Andreason <tandreason@gmail.com>
Trishna Guha <trishnaguha17@gmail.com>
Tristan Carel <tristan@cogniteev.com>
Tristan Carel <tristan@cogniteev.com> <tristan.carel@gmail.com>
Tyler Brown <tylers.pile@gmail.com>
Umesh Yadav <umesh4257@gmail.com>
Umesh Yadav <umesh4257@gmail.com> <dungeonmaster18@users.noreply.github.com>
Victor Lyuboslavsky <victor@victoreda.com>
@@ -514,19 +441,14 @@ Victor Vieux <victor.vieux@docker.com> <victor@docker.com>
Victor Vieux <victor.vieux@docker.com> <victor@dotcloud.com>
Victor Vieux <victor.vieux@docker.com> <victorvieux@gmail.com>
Victor Vieux <victor.vieux@docker.com> <vieux@docker.com>
Vikram bir Singh <vsingh@mirantis.com>
Vikram bir Singh <vsingh@mirantis.com> <vikrambir.singh@docker.com>
Viktor Vojnovski <viktor.vojnovski@amadeus.com> <vojnovski@gmail.com>
Vincent Batts <vbatts@redhat.com> <vbatts@hashbangbash.com>
Vincent Bernat <Vincent.Bernat@exoscale.ch> <bernat@luffy.cx>
Vincent Bernat <Vincent.Bernat@exoscale.ch> <vincent@bernat.im>
Vincent Boulineau <vincent.boulineau@datadoghq.com>
Vincent Demeester <vincent.demeester@docker.com> <vincent+github@demeester.fr>
Vincent Demeester <vincent.demeester@docker.com> <vincent@demeester.fr>
Vincent Demeester <vincent.demeester@docker.com> <vincent@sbr.pm>
Vishnu Kannan <vishnuk@google.com>
Vitaly Ostrosablin <vostrosablin@virtuozzo.com>
Vitaly Ostrosablin <vostrosablin@virtuozzo.com> <tmp6154@yandex.ru>
Vladimir Rutsky <altsysrq@gmail.com> <iamironbob@gmail.com>
Walter Stanish <walter@pratyeka.org>
Wang Chao <chao.wang@ucloud.cn>
@@ -542,14 +464,8 @@ Wei Wu <wuwei4455@gmail.com> cizixs <cizixs@163.com>
Wenjun Tang <tangwj2@lenovo.com> <dodia@163.com>
Wewang Xiaorenfine <wang.xiaoren@zte.com.cn>
Will Weaver <monkey@buildingbananas.com>
Wing-Kam Wong <wingkwong.code@gmail.com>
Xian Chaobo <xianchaobo@huawei.com>
Xian Chaobo <xianchaobo@huawei.com> <jimmyxian2004@yahoo.com.cn>
Xianglin Gao <xlgao@zju.edu.cn>
Xianlu Bird <xianlubird@gmail.com>
Xiao YongBiao <xyb4638@gmail.com>
Xiaodong Liu <liuxiaodong@loongson.cn>
Xiaodong Zhang <a4012017@sina.com>
Xiaoyu Zhang <zhang.xiaoyu33@zte.com.cn>
Xuecong Liao <satorulogic@gmail.com>
Yamasaki Masahide <masahide.y@gmail.com>
@@ -561,21 +477,15 @@ Yi EungJun <eungjun.yi@navercorp.com> <semtlenori@gmail.com>
Ying Li <ying.li@docker.com>
Ying Li <ying.li@docker.com> <cyli@twistedmatrix.com>
Yong Tang <yong.tang.github@outlook.com> <yongtang@users.noreply.github.com>
Yongxin Li <yxli@alauda.io>
Yosef Fertel <yfertel@gmail.com> <frosforever@users.noreply.github.com>
Yu Changchun <yuchangchun1@huawei.com>
Yu Chengxia <yuchengxia@huawei.com>
Yu Peng <yu.peng36@zte.com.cn>
Yu Peng <yu.peng36@zte.com.cn> <yupeng36@zte.com.cn>
Yue Zhang <zy675793960@yeah.net>
Zachary Jaffee <zjaffee@us.ibm.com> <zij@case.edu>
Zachary Jaffee <zjaffee@us.ibm.com> <zjaffee@apache.org>
ZhangHang <stevezhang2014@gmail.com>
Zhenkun Bi <bi.zhenkun@zte.com.cn>
Zhou Hao <zhouhao@cn.fujitsu.com>
Zhoulin Xie <zhoulin.xie@daocloud.io>
Zhu Kunjia <zhu.kunjia@zte.com.cn>
Ziheng Liu <lzhfromustc@gmail.com>
Zou Yu <zouyu7@huawei.com>
Zuhayr Elahi <zuhayr.elahi@docker.com>
Zuhayr Elahi <zuhayr.elahi@docker.com> <elahi.zuhayr@gmail.com>

AUTHORS

File diff suppressed because it is too large.

CHANGELOG.md

@@ -99,7 +99,7 @@ be found.
* Add `--format` option to `docker node ls` [#30424](https://github.com/docker/docker/pull/30424)
* Add `--prune` option to `docker stack deploy` to remove services that are no longer defined in the docker-compose file [#31302](https://github.com/docker/docker/pull/31302)
* Add `PORTS` column for `docker service ls` when using `ingress` mode [#30813](https://github.com/docker/docker/pull/30813)
- Fix unnecessary re-deploying of tasks when environment-variables are used [#32364](https://github.com/docker/docker/pull/32364)
- Fix unnescessary re-deploying of tasks when environment-variables are used [#32364](https://github.com/docker/docker/pull/32364)
- Fix `docker stack deploy` not supporting `endpoint_mode` when deploying from a docker compose file [#32333](https://github.com/docker/docker/pull/32333)
- Proceed with startup if cluster component cannot be created to allow recovering from a broken swarm setup [#31631](https://github.com/docker/docker/pull/31631)

CONTRIBUTING.md

@@ -27,10 +27,10 @@ issue, please bring it to their attention right away!
Please **DO NOT** file a public issue, instead send your report privately to
[security@docker.com](mailto:security@docker.com).
Security reports are greatly appreciated and we will publicly thank you for it,
although we keep your name confidential if you request it. We also like to send
gifts&mdash;if you're into schwag, make sure to let us know. We currently do not
offer a paid security bounty program, but are not ruling it out in the future.
Security reports are greatly appreciated and we will publicly thank you for it.
We also like to send gifts&mdash;if you're into schwag, make sure to let
us know. We currently do not offer a paid security bounty program, but are not
ruling it out in the future.
## Reporting other issues

Dockerfile

@@ -1,408 +1,241 @@
# syntax=docker/dockerfile:1
# This file describes the standard way to build Docker, using docker
#
# Usage:
#
# # Use make to build a development environment image and run it in a container.
# # This is slow the first time.
# make BIND_DIR=. shell
#
# The following commands are executed inside the running container.
ARG CROSS="false"
ARG SYSTEMD="false"
ARG GO_VERSION=1.20.10
ARG DEBIAN_FRONTEND=noninteractive
ARG VPNKIT_VERSION=0.5.0
ARG DOCKER_BUILDTAGS="apparmor seccomp"
# # Make a dockerd binary.
# # hack/make.sh binary
#
# # Install dockerd to /usr/local/bin
# # make install
#
# # Run unit tests
# # hack/test/unit
#
# # Run tests e.g. integration, py
# # hack/make.sh binary test-integration test-docker-py
#
# Note: AppArmor used to mess with privileged mode, but this is no longer
# the case. Therefore, you don't have to disable it anymore.
#
ARG BASE_DEBIAN_DISTRO="bullseye"
ARG GOLANG_IMAGE="golang:${GO_VERSION}-${BASE_DEBIAN_DISTRO}"
FROM ${GOLANG_IMAGE} AS base
RUN echo 'Binary::apt::APT::Keep-Downloaded-Packages "true";' > /etc/apt/apt.conf.d/keep-cache
ARG APT_MIRROR
RUN test -n "$APT_MIRROR" && sed -ri "s/(httpredir|deb|security).debian.org/${APT_MIRROR}/g" /etc/apt/sources.list || true
ENV GO111MODULE=off
FROM golang:1.10.3 AS base
# FIXME(vdemeester) this is kept for other script depending on it to not fail right away
# Remove this once the other scripts uses something else to detect the version
ENV GO_VERSION 1.10.3
# allow replacing httpredir or deb mirror
ARG APT_MIRROR=deb.debian.org
RUN sed -ri "s/(httpredir|deb).debian.org/$APT_MIRROR/g" /etc/apt/sources.list
FROM base AS criu
ARG DEBIAN_FRONTEND
# Install dependency packages specific to criu
RUN --mount=type=cache,sharing=locked,id=moby-criu-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-criu-aptcache,target=/var/cache/apt \
apt-get update && apt-get install -y --no-install-recommends \
libcap-dev \
libnet-dev \
libnl-3-dev \
libprotobuf-c-dev \
libprotobuf-dev \
protobuf-c-compiler \
protobuf-compiler \
python3-protobuf
# Install CRIU for checkpoint/restore support
ARG CRIU_VERSION=3.14
RUN mkdir -p /usr/src/criu \
&& curl -sSL https://github.com/checkpoint-restore/criu/archive/v${CRIU_VERSION}.tar.gz | tar -C /usr/src/criu/ -xz --strip-components=1 \
&& cd /usr/src/criu \
&& make \
&& make PREFIX=/build/ install-criu
ENV CRIU_VERSION 3.6
# Install dependancy packages specific to criu
RUN apt-get update && apt-get install -y \
libnet-dev \
libprotobuf-c0-dev \
libprotobuf-dev \
libnl-3-dev \
libcap-dev \
protobuf-compiler \
protobuf-c-compiler \
python-protobuf \
&& mkdir -p /usr/src/criu \
&& curl -sSL https://github.com/checkpoint-restore/criu/archive/v${CRIU_VERSION}.tar.gz | tar -C /usr/src/criu/ -xz --strip-components=1 \
&& cd /usr/src/criu \
&& make \
&& make PREFIX=/build/ install-criu
FROM base AS registry
WORKDIR /go/src/github.com/docker/distribution
# Install two versions of the registry. The first one is a recent version that
# supports both schema 1 and 2 manifests. The second one is an older version that
# only supports schema1 manifests. This allows integration-cli tests to cover
# push/pull with both schema1 and schema2 manifests.
# The old version of the registry is not working on arm64, so installation is
# skipped on that architecture.
# Install two versions of the registry. The first is an older version that
# only supports schema1 manifests. The second is a newer version that supports
# both. This allows integration-cli tests to cover push/pull with both schema1
# and schema2 manifests.
ENV REGISTRY_COMMIT_SCHEMA1 ec87e9b6971d831f0eff752ddb54fb64693e51cd
ENV REGISTRY_COMMIT 47a064d4195a9b56133891bbb13620c3ac83a827
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=cache,target=/go/pkg/mod \
--mount=type=tmpfs,target=/go/src/ \
set -x \
&& git clone https://github.com/docker/distribution.git . \
&& git checkout -q "$REGISTRY_COMMIT" \
&& GOPATH="/go/src/github.com/docker/distribution/Godeps/_workspace:$GOPATH" \
go build -buildmode=pie -o /build/registry-v2 github.com/docker/distribution/cmd/registry \
&& case $(dpkg --print-architecture) in \
amd64|armhf|ppc64*|s390x) \
git checkout -q "$REGISTRY_COMMIT_SCHEMA1"; \
GOPATH="/go/src/github.com/docker/distribution/Godeps/_workspace:$GOPATH"; \
go build -buildmode=pie -o /build/registry-v2-schema1 github.com/docker/distribution/cmd/registry; \
;; \
esac
RUN set -x \
&& export GOPATH="$(mktemp -d)" \
&& git clone https://github.com/docker/distribution.git "$GOPATH/src/github.com/docker/distribution" \
&& (cd "$GOPATH/src/github.com/docker/distribution" && git checkout -q "$REGISTRY_COMMIT") \
&& GOPATH="$GOPATH/src/github.com/docker/distribution/Godeps/_workspace:$GOPATH" \
go build -buildmode=pie -o /build/registry-v2 github.com/docker/distribution/cmd/registry \
&& case $(dpkg --print-architecture) in \
amd64|ppc64*|s390x) \
(cd "$GOPATH/src/github.com/docker/distribution" && git checkout -q "$REGISTRY_COMMIT_SCHEMA1"); \
GOPATH="$GOPATH/src/github.com/docker/distribution/Godeps/_workspace:$GOPATH"; \
go build -buildmode=pie -o /build/registry-v2-schema1 github.com/docker/distribution/cmd/registry; \
;; \
esac \
&& rm -rf "$GOPATH"
FROM base AS docker-py
# Get the "docker-py" source so we can run their integration tests
ENV DOCKER_PY_COMMIT 8b246db271a85d6541dc458838627e89c683e42f
RUN git clone https://github.com/docker/docker-py.git /build \
&& cd /build \
&& git checkout -q $DOCKER_PY_COMMIT
FROM base AS swagger
WORKDIR $GOPATH/src/github.com/go-swagger/go-swagger
# Install go-swagger for validating swagger.yaml
# This is https://github.com/kolyshkin/go-swagger/tree/golang-1.13-fix
# TODO: move to under moby/ or fix upstream go-swagger to work for us.
ENV GO_SWAGGER_COMMIT c56166c036004ba7a3a321e5951ba472b9ae298c
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=cache,target=/go/pkg/mod \
--mount=type=tmpfs,target=/go/src/ \
set -x \
&& git clone https://github.com/kolyshkin/go-swagger.git . \
&& git checkout -q "$GO_SWAGGER_COMMIT" \
&& go build -o /build/swagger github.com/go-swagger/go-swagger/cmd/swagger
ENV GO_SWAGGER_COMMIT c28258affb0b6251755d92489ef685af8d4ff3eb
RUN set -x \
&& export GOPATH="$(mktemp -d)" \
&& git clone https://github.com/go-swagger/go-swagger.git "$GOPATH/src/github.com/go-swagger/go-swagger" \
&& (cd "$GOPATH/src/github.com/go-swagger/go-swagger" && git checkout -q "$GO_SWAGGER_COMMIT") \
&& go build -o /build/swagger github.com/go-swagger/go-swagger/cmd/swagger \
&& rm -rf "$GOPATH"
FROM debian:${BASE_DEBIAN_DISTRO} AS frozen-images
ARG DEBIAN_FRONTEND
RUN --mount=type=cache,sharing=locked,id=moby-frozen-images-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-frozen-images-aptcache,target=/var/cache/apt \
apt-get update && apt-get install -y --no-install-recommends \
ca-certificates \
curl \
jq
FROM base AS frozen-images
RUN apt-get update && apt-get install -y jq ca-certificates --no-install-recommends
# Get useful and necessary Hub images so we can "docker load" locally instead of pulling
COPY contrib/download-frozen-image-v2.sh /
ARG TARGETARCH
RUN /download-frozen-image-v2.sh /build \
busybox:latest@sha256:95cf004f559831017cdf4628aaf1bb30133677be8702a8c5f2994629f637a209 \
busybox:glibc@sha256:1f81263701cddf6402afe9f33fca0266d9fff379e59b1748f33d3072da71ee85 \
debian:bullseye-slim@sha256:dacf278785a4daa9de07596ec739dbc07131e189942772210709c5c0777e8437 \
hello-world:latest@sha256:d58e752213a51785838f9eed2b7a498ffa1cb3aa7f946dda11af39286c3db9a9 \
arm32v7/hello-world:latest@sha256:50b8560ad574c779908da71f7ce370c0a2471c098d44d1c8f6b513c5a55eeeb1
# See also frozenImages in "testutil/environment/protect.go" (which needs to be updated when adding images to this list)
buildpack-deps:jessie@sha256:dd86dced7c9cd2a724e779730f0a53f93b7ef42228d4344b25ce9a42a1486251 \
busybox:latest@sha256:bbc3a03235220b170ba48a157dd097dd1379299370e1ed99ce976df0355d24f0 \
busybox:glibc@sha256:0b55a30394294ab23b9afd58fab94e61a923f5834fba7ddbae7f8e0c11ba85e6 \
debian:jessie@sha256:287a20c5f73087ab406e6b364833e3fb7b3ae63ca0eb3486555dc27ed32c6e60 \
hello-world:latest@sha256:be0cd392e45be79ffeffa6b05338b98ebb16c87b255f48e297ec7f98e123905c
# See also ensureFrozenImagesLinux() in "integration-cli/fixtures_linux_daemon_test.go" (which needs to be updated when adding images to this list)
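# In CI the pre-downloaded images are piped into the daemon instead of being
# pulled; a minimal sketch, assuming download-frozen-image-v2.sh writes a
# docker-save style layout into the target directory:
tar -cC /docker-frozen-images . | docker load
docker images   # the frozen tags should now be available locally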
FROM base AS cross-false
# Just a little hack so we don't have to install these deps twice, once for runc and once for dockerd
FROM base AS runtime-dev
RUN apt-get update && apt-get install -y \
libapparmor-dev \
libseccomp-dev
FROM --platform=linux/amd64 base AS cross-true
ARG DEBIAN_FRONTEND
RUN dpkg --add-architecture arm64
RUN dpkg --add-architecture armel
RUN dpkg --add-architecture armhf
RUN dpkg --add-architecture ppc64el
RUN dpkg --add-architecture s390x
RUN --mount=type=cache,sharing=locked,id=moby-cross-true-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-cross-true-aptcache,target=/var/cache/apt \
apt-get update && apt-get install -y --no-install-recommends \
crossbuild-essential-arm64 \
crossbuild-essential-armel \
crossbuild-essential-armhf \
crossbuild-essential-ppc64el \
crossbuild-essential-s390x
FROM cross-${CROSS} as dev-base
FROM dev-base AS runtime-dev-cross-false
ARG DEBIAN_FRONTEND
RUN --mount=type=cache,sharing=locked,id=moby-cross-false-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-cross-false-aptcache,target=/var/cache/apt \
apt-get update && apt-get install -y --no-install-recommends \
binutils-mingw-w64 \
g++-mingw-w64-x86-64 \
libapparmor-dev \
libbtrfs-dev \
libdevmapper-dev \
libseccomp-dev \
libsystemd-dev \
libudev-dev
FROM --platform=linux/amd64 runtime-dev-cross-false AS runtime-dev-cross-true
ARG DEBIAN_FRONTEND
# These crossbuild packages rely on gcc-<arch>, which fails to install
# on non-amd64 systems, so other architectures cannot cross-build amd64.
RUN --mount=type=cache,sharing=locked,id=moby-cross-true-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-cross-true-aptcache,target=/var/cache/apt \
apt-get update && apt-get install -y --no-install-recommends \
libapparmor-dev:arm64 \
libapparmor-dev:armel \
libapparmor-dev:armhf \
libapparmor-dev:ppc64el \
libapparmor-dev:s390x \
libseccomp-dev:arm64 \
libseccomp-dev:armel \
libseccomp-dev:armhf \
libseccomp-dev:ppc64el \
libseccomp-dev:s390x
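# With the crossbuild-essential-<arch> toolchains and the per-architecture
# -dev packages above installed, a cgo cross-build for one target might look
# like this (a sketch; the package path and build tags are illustrative):
export CC=aarch64-linux-gnu-gcc
GOOS=linux GOARCH=arm64 CGO_ENABLED=1 \
    go build -tags 'apparmor seccomp' -o /build/dockerd-arm64 ./cmd/dockerd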
FROM runtime-dev-cross-${CROSS} AS runtime-dev
FROM base AS tomlv
ARG TOMLV_COMMIT
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=cache,target=/go/pkg/mod \
--mount=type=bind,src=hack/dockerfile/install,target=/tmp/install \
PREFIX=/build /tmp/install/install.sh tomlv
ENV INSTALL_BINARY_NAME=tomlv
COPY hack/dockerfile/install/install.sh ./install.sh
COPY hack/dockerfile/install/$INSTALL_BINARY_NAME.installer ./
RUN PREFIX=/build/ ./install.sh $INSTALL_BINARY_NAME
FROM base AS vndr
ARG VNDR_VERSION
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=cache,target=/go/pkg/mod \
--mount=type=bind,src=hack/dockerfile/install,target=/tmp/install \
PREFIX=/build /tmp/install/install.sh vndr
ENV INSTALL_BINARY_NAME=vndr
COPY hack/dockerfile/install/install.sh ./install.sh
COPY hack/dockerfile/install/$INSTALL_BINARY_NAME.installer ./
RUN PREFIX=/build/ ./install.sh $INSTALL_BINARY_NAME
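# The tool stages above all drive hack/dockerfile/install/install.sh, which
# sources <name>.installer (where the pinned version lives) and invokes its
# install function with PREFIX as the output directory. A hypothetical,
# simplified sketch of that convention:
set -e
# usage: PREFIX=/build ./install.sh <name>
. "./${1}.installer"    # defines install_<name> plus the pinned commit/version
"install_${1}"          # builds the tool and drops the binary into $PREFIX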
FROM dev-base AS containerd
ARG DEBIAN_FRONTEND
RUN --mount=type=cache,sharing=locked,id=moby-containerd-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-containerd-aptcache,target=/var/cache/apt \
apt-get update && apt-get install -y --no-install-recommends \
libbtrfs-dev
ARG CONTAINERD_VERSION
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=cache,target=/go/pkg/mod \
--mount=type=bind,src=hack/dockerfile/install,target=/tmp/install \
PREFIX=/build /tmp/install/install.sh containerd
FROM base AS containerd
RUN apt-get update && apt-get install -y btrfs-tools
ENV INSTALL_BINARY_NAME=containerd
COPY hack/dockerfile/install/install.sh ./install.sh
COPY hack/dockerfile/install/$INSTALL_BINARY_NAME.installer ./
RUN PREFIX=/build/ ./install.sh $INSTALL_BINARY_NAME
FROM dev-base AS proxy
ARG LIBNETWORK_COMMIT
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=cache,target=/go/pkg/mod \
--mount=type=bind,src=hack/dockerfile/install,target=/tmp/install \
PREFIX=/build /tmp/install/install.sh proxy
FROM base AS proxy
ENV INSTALL_BINARY_NAME=proxy
COPY hack/dockerfile/install/install.sh ./install.sh
COPY hack/dockerfile/install/$INSTALL_BINARY_NAME.installer ./
RUN PREFIX=/build/ ./install.sh $INSTALL_BINARY_NAME
FROM base AS golangci_lint
ARG GOLANGCI_LINT_VERSION
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=cache,target=/go/pkg/mod \
--mount=type=bind,src=hack/dockerfile/install,target=/tmp/install \
PREFIX=/build /tmp/install/install.sh golangci_lint
FROM base AS gometalinter
ENV INSTALL_BINARY_NAME=gometalinter
COPY hack/dockerfile/install/install.sh ./install.sh
COPY hack/dockerfile/install/$INSTALL_BINARY_NAME.installer ./
RUN PREFIX=/build/ ./install.sh $INSTALL_BINARY_NAME
FROM base AS gotestsum
ARG GOTESTSUM_VERSION
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=cache,target=/go/pkg/mod \
--mount=type=bind,src=hack/dockerfile/install,target=/tmp/install \
PREFIX=/build /tmp/install/install.sh gotestsum
FROM base AS shfmt
ARG SHFMT_VERSION
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=cache,target=/go/pkg/mod \
--mount=type=bind,src=hack/dockerfile/install,target=/tmp/install \
PREFIX=/build /tmp/install/install.sh shfmt
FROM dev-base AS dockercli
ARG DOCKERCLI_CHANNEL
ARG DOCKERCLI_VERSION
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=cache,target=/go/pkg/mod \
--mount=type=bind,src=hack/dockerfile/install,target=/tmp/install \
PREFIX=/build /tmp/install/install.sh dockercli
FROM base AS dockercli
ENV INSTALL_BINARY_NAME=dockercli
COPY hack/dockerfile/install/install.sh ./install.sh
COPY hack/dockerfile/install/$INSTALL_BINARY_NAME.installer ./
RUN PREFIX=/build/ ./install.sh $INSTALL_BINARY_NAME
FROM runtime-dev AS runc
ARG RUNC_VERSION
ARG RUNC_BUILDTAGS
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=cache,target=/go/pkg/mod \
--mount=type=bind,src=hack/dockerfile/install,target=/tmp/install \
PREFIX=/build /tmp/install/install.sh runc
ENV INSTALL_BINARY_NAME=runc
COPY hack/dockerfile/install/install.sh ./install.sh
COPY hack/dockerfile/install/$INSTALL_BINARY_NAME.installer ./
COPY git-bundles git-bundles
RUN PREFIX=/build/ ./install.sh $INSTALL_BINARY_NAME
FROM dev-base AS tini
ARG DEBIAN_FRONTEND
ARG TINI_VERSION
RUN --mount=type=cache,sharing=locked,id=moby-tini-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-tini-aptcache,target=/var/cache/apt \
apt-get update && apt-get install -y --no-install-recommends \
cmake \
vim-common
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=cache,target=/go/pkg/mod \
--mount=type=bind,src=hack/dockerfile/install,target=/tmp/install \
PREFIX=/build /tmp/install/install.sh tini
FROM base AS tini
RUN apt-get update && apt-get install -y cmake vim-common
COPY hack/dockerfile/install/install.sh ./install.sh
ENV INSTALL_BINARY_NAME=tini
COPY hack/dockerfile/install/$INSTALL_BINARY_NAME.installer ./
RUN PREFIX=/build/ ./install.sh $INSTALL_BINARY_NAME
FROM dev-base AS rootlesskit
ARG ROOTLESSKIT_VERSION
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=cache,target=/go/pkg/mod \
--mount=type=bind,src=hack/dockerfile/install,target=/tmp/install \
PREFIX=/build /tmp/install/install.sh rootlesskit
COPY ./contrib/dockerd-rootless.sh /build
COPY ./contrib/dockerd-rootless-setuptool.sh /build
FROM --platform=amd64 djs55/vpnkit:${VPNKIT_VERSION} AS vpnkit-amd64
FROM --platform=arm64 djs55/vpnkit:${VPNKIT_VERSION} AS vpnkit-arm64
FROM scratch AS vpnkit
COPY --from=vpnkit-amd64 /vpnkit /build/vpnkit.x86_64
COPY --from=vpnkit-arm64 /vpnkit /build/vpnkit.aarch64
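# The scratch stage above collects one vpnkit binary per architecture under an
# arch suffix; a consumer can pick the right one at build or run time, since
# uname -m matches the x86_64/aarch64 suffixes used here (a sketch):
cp "/usr/local/bin/vpnkit.$(uname -m)" /usr/local/bin/vpnkit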
# TODO: Some of this is only really needed for testing; it would be nice to split this up
FROM runtime-dev AS dev-systemd-false
ARG DEBIAN_FRONTEND
FROM runtime-dev AS dev
RUN groupadd -r docker
RUN useradd --create-home --gid docker unprivilegeduser \
&& mkdir -p /home/unprivilegeduser/.local/share/docker \
&& chown -R unprivilegeduser /home/unprivilegeduser
# Let us use a .bashrc file
RUN ln -sfv /go/src/github.com/docker/docker/.bashrc ~/.bashrc
# Activate bash completion and include Docker's completion if mounted with DOCKER_BASH_COMPLETION_PATH
RUN echo "source /usr/share/bash-completion/bash_completion" >> /etc/bash.bashrc
RUN ln -s /usr/local/completion/bash/docker /etc/bash_completion.d/docker
RUN ldconfig
# Set dev environment as safe git directory to prevent "dubious ownership" errors
# when bind-mounting the source into the dev-container. See https://github.com/moby/moby/pull/44930
RUN git config --global --add safe.directory $GOPATH/src/github.com/docker/docker
# This should only install packages that are specifically needed for the dev environment and nothing else
# Do you really need to add another package here? Can it be done in a different build stage?
RUN --mount=type=cache,sharing=locked,id=moby-dev-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-dev-aptcache,target=/var/cache/apt \
apt-get update && apt-get install -y --no-install-recommends \
apparmor \
bash-completion \
bzip2 \
inetutils-ping \
iproute2 \
iptables \
jq \
libcap2-bin \
libnet1 \
libnl-3-200 \
libprotobuf-c1 \
net-tools \
patch \
pigz \
python3-pip \
python3-setuptools \
python3-wheel \
sudo \
thin-provisioning-tools \
uidmap \
vim \
vim-common \
xfsprogs \
xz-utils \
zip
# Switch to use iptables instead of nftables (to match the CI hosts)
# TODO use some kind of runtime auto-detection instead if/when nftables is supported (https://github.com/moby/moby/issues/26824)
RUN update-alternatives --set iptables /usr/sbin/iptables-legacy || true \
&& update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy || true \
&& update-alternatives --set arptables /usr/sbin/arptables-legacy || true
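# Whether the legacy backend actually took effect can be checked afterwards;
# a quick sanity check (output format varies by distro):
update-alternatives --display iptables | head -n 2
iptables --version    # should report "(legacy)" rather than "(nf_tables)"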
RUN pip3 install yamllint==1.26.1
COPY --from=dockercli /build/ /usr/local/cli
RUN apt-get update && apt-get install -y \
apparmor \
aufs-tools \
bash-completion \
btrfs-tools \
iptables \
jq \
libdevmapper-dev \
libudev-dev \
libsystemd-dev \
binutils-mingw-w64 \
g++-mingw-w64-x86-64 \
net-tools \
pigz \
python-backports.ssl-match-hostname \
python-dev \
python-mock \
python-pip \
python-requests \
python-setuptools \
python-websocket \
python-wheel \
thin-provisioning-tools \
vim \
vim-common \
xfsprogs \
zip \
bzip2 \
xz-utils \
--no-install-recommends
COPY --from=swagger /build/swagger* /usr/local/bin/
COPY --from=frozen-images /build/ /docker-frozen-images
COPY --from=swagger /build/ /usr/local/bin/
COPY --from=tomlv /build/ /usr/local/bin/
COPY --from=tini /build/ /usr/local/bin/
COPY --from=registry /build/ /usr/local/bin/
COPY --from=gometalinter /build/ /usr/local/bin/
COPY --from=vndr /build/ /usr/local/bin/
COPY --from=runc /build/ /usr/local/bin/
COPY --from=containerd /build/ /usr/local/bin/
COPY --from=proxy /build/ /usr/local/bin/
COPY --from=dockercli /build/ /usr/local/cli
COPY --from=registry /build/registry* /usr/local/bin/
COPY --from=criu /build/ /usr/local/
COPY --from=docker-py /build/ /docker-py
# TODO: This is for the docker-py tests, which shouldn't really be needed for
# this image, but currently CI is expecting to run this image. This should be
# split out into a separate image, including all the `python-*` deps installed
# above.
RUN cd /docker-py \
&& pip install docker-pycreds==0.2.1 \
&& pip install yamllint==1.5.0 \
&& pip install -r test-requirements.txt
# Skip the CRIU stage for now, as the opensuse package repository is sometimes
# unstable, and we're currently not using it in CI.
#
# FIXME(thaJeztah): re-enable this stage when https://github.com/moby/moby/issues/38963 is resolved (see https://github.com/moby/moby/pull/38984)
# COPY --from=criu /build/ /usr/local/
COPY --from=vndr /build/ /usr/local/bin/
COPY --from=gotestsum /build/ /usr/local/bin/
COPY --from=golangci_lint /build/ /usr/local/bin/
COPY --from=shfmt /build/ /usr/local/bin/
COPY --from=runc /build/ /usr/local/bin/
COPY --from=containerd /build/ /usr/local/bin/
COPY --from=rootlesskit /build/ /usr/local/bin/
COPY --from=vpnkit /build/ /usr/local/bin/
COPY --from=proxy /build/ /usr/local/bin/
ENV PATH=/usr/local/cli:$PATH
ARG DOCKER_BUILDTAGS
ENV DOCKER_BUILDTAGS="${DOCKER_BUILDTAGS}"
ENV DOCKER_BUILDTAGS apparmor seccomp selinux
# Options for hack/validate/gometalinter
ENV GOMETALINTER_OPTS="--deadline=2m"
WORKDIR /go/src/github.com/docker/docker
VOLUME /var/lib/docker
VOLUME /home/unprivilegeduser/.local/share/docker
# Wrap all commands in the "docker-in-docker" script to allow nested containers
ENTRYPOINT ["hack/dind"]
FROM dev-systemd-false AS dev-systemd-true
RUN --mount=type=cache,sharing=locked,id=moby-dev-aptlib,target=/var/lib/apt \
--mount=type=cache,sharing=locked,id=moby-dev-aptcache,target=/var/cache/apt \
apt-get update && apt-get install -y --no-install-recommends \
dbus \
dbus-user-session \
systemd \
systemd-sysv
RUN mkdir -p hack \
&& curl -o hack/dind-systemd https://raw.githubusercontent.com/AkihiroSuda/containerized-systemd/b70bac0daeea120456764248164c21684ade7d0d/docker-entrypoint.sh \
&& chmod +x hack/dind-systemd
ENTRYPOINT ["hack/dind-systemd"]
FROM dev-systemd-${SYSTEMD} AS dev
FROM runtime-dev AS binary-base
ARG DOCKER_GITCOMMIT=HEAD
ENV DOCKER_GITCOMMIT=${DOCKER_GITCOMMIT}
ARG VERSION
ENV VERSION=${VERSION}
ARG PLATFORM
ENV PLATFORM=${PLATFORM}
ARG PRODUCT
ENV PRODUCT=${PRODUCT}
ARG DEFAULT_PRODUCT_LICENSE
ENV DEFAULT_PRODUCT_LICENSE=${DEFAULT_PRODUCT_LICENSE}
ARG DOCKER_BUILDTAGS
ENV DOCKER_BUILDTAGS="${DOCKER_BUILDTAGS}"
ENV PREFIX=/build
# TODO: This is here because hack/make.sh binary copies these extra binaries
# from $PATH into the bundles dir.
# It would be nice to handle this in a different way.
COPY --from=tini /build/ /usr/local/bin/
COPY --from=runc /build/ /usr/local/bin/
COPY --from=containerd /build/ /usr/local/bin/
COPY --from=rootlesskit /build/ /usr/local/bin/
COPY --from=proxy /build/ /usr/local/bin/
COPY --from=vpnkit /build/ /usr/local/bin/
WORKDIR /go/src/github.com/docker/docker
FROM binary-base AS build-binary
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=bind,target=/go/src/github.com/docker/docker \
hack/make.sh binary
FROM binary-base AS build-dynbinary
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=bind,target=/go/src/github.com/docker/docker \
hack/make.sh dynbinary
FROM binary-base AS build-cross
ARG DOCKER_CROSSPLATFORMS
RUN --mount=type=cache,target=/root/.cache/go-build \
--mount=type=bind,target=/go/src/github.com/docker/docker \
--mount=type=tmpfs,target=/go/src/github.com/docker/docker/autogen \
hack/make.sh cross
FROM scratch AS binary
COPY --from=build-binary /build/bundles/ /
FROM scratch AS dynbinary
COPY --from=build-dynbinary /build/bundles/ /
FROM scratch AS cross
COPY --from=build-cross /build/bundles/ /
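# The scratch stages exist purely so the bundles can be exported straight to
# the host with BuildKit's local exporter; a sketch using the targets defined
# above:
DOCKER_BUILDKIT=1 docker build --target binary -o ./out .
ls ./out    # the binaries copied from /build/bundles/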
FROM dev AS final
# Upload docker source
COPY . /go/src/github.com/docker/docker


@@ -1,67 +1,49 @@
ARG GO_VERSION=1.20.10
## Step 1: Build tests
FROM golang:1.10.3-alpine3.7 as builder
FROM golang:${GO_VERSION}-alpine AS base
ENV GO111MODULE=off
RUN apk --no-cache add \
RUN apk add --update \
bash \
btrfs-progs-dev \
build-base \
curl \
lvm2-dev \
jq
jq \
&& rm -rf /var/cache/apk/*
RUN mkdir -p /build/
RUN mkdir -p /go/src/github.com/docker/docker/
WORKDIR /go/src/github.com/docker/docker/
FROM base AS frozen-images
# Get useful and necessary Hub images so we can "docker load" locally instead of pulling
COPY contrib/download-frozen-image-v2.sh /
RUN /download-frozen-image-v2.sh /build \
busybox:latest@sha256:95cf004f559831017cdf4628aaf1bb30133677be8702a8c5f2994629f637a209 \
busybox:glibc@sha256:1f81263701cddf6402afe9f33fca0266d9fff379e59b1748f33d3072da71ee85 \
debian:bullseye-slim@sha256:dacf278785a4daa9de07596ec739dbc07131e189942772210709c5c0777e8437 \
hello-world:latest@sha256:d58e752213a51785838f9eed2b7a498ffa1cb3aa7f946dda11af39286c3db9a9
# See also frozenImages in "testutil/environment/protect.go" (which needs to be updated when adding images to this list)
# Generate frozen images
COPY contrib/download-frozen-image-v2.sh contrib/download-frozen-image-v2.sh
RUN contrib/download-frozen-image-v2.sh /output/docker-frozen-images \
buildpack-deps:jessie@sha256:dd86dced7c9cd2a724e779730f0a53f93b7ef42228d4344b25ce9a42a1486251 \
busybox:latest@sha256:bbc3a03235220b170ba48a157dd097dd1379299370e1ed99ce976df0355d24f0 \
busybox:glibc@sha256:0b55a30394294ab23b9afd58fab94e61a923f5834fba7ddbae7f8e0c11ba85e6 \
debian:jessie@sha256:287a20c5f73087ab406e6b364833e3fb7b3ae63ca0eb3486555dc27ed32c6e60 \
hello-world:latest@sha256:be0cd392e45be79ffeffa6b05338b98ebb16c87b255f48e297ec7f98e123905c
FROM base AS dockercli
ENV INSTALL_BINARY_NAME=dockercli
COPY hack/dockerfile/install/install.sh ./install.sh
COPY hack/dockerfile/install/$INSTALL_BINARY_NAME.installer ./
RUN PREFIX=/build ./install.sh $INSTALL_BINARY_NAME
# Build DockerSuite.TestBuild* dependency
FROM base AS contrib
COPY contrib/syscall-test /build/syscall-test
COPY contrib/httpserver/Dockerfile /build/httpserver/Dockerfile
COPY contrib/httpserver contrib/httpserver
RUN CGO_ENABLED=0 go build -buildmode=pie -o /build/httpserver/httpserver github.com/docker/docker/contrib/httpserver
# Build the integration tests and copy the resulting binaries to /build/tests
FROM base AS builder
# Install dockercli
# Please edit hack/dockerfile/install/<name>.installer to update them.
COPY hack/dockerfile/install hack/dockerfile/install
RUN ./hack/dockerfile/install/install.sh dockercli
# Set tag and add sources
COPY . .
# Copy test sources (tests that use assert can print errors)
RUN mkdir -p /build${PWD} && find integration integration-cli -name \*_test.go -exec cp --parents '{}' /build${PWD} \;
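# The find/cp --parents combination mirrors only the *_test.go files into the
# destination while preserving their directory layout, so assert-based tests
# can print the offending source line later; the same idiom in isolation
# (/tmp/testsrc is an illustrative destination):
mkdir -p /tmp/testsrc
find integration integration-cli -name '*_test.go' \
    -exec cp --parents '{}' /tmp/testsrc \;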
# Build and install test binaries
ARG DOCKER_GITCOMMIT=undefined
ARG DOCKER_GITCOMMIT
ENV DOCKER_GITCOMMIT=${DOCKER_GITCOMMIT:-undefined}
ADD . .
# Build DockerSuite.TestBuild* dependency
RUN CGO_ENABLED=0 go build -buildmode=pie -o /output/httpserver github.com/docker/docker/contrib/httpserver
# Build the integration tests and copy the resulting binaries to /output/tests
RUN hack/make.sh build-integration-test-binary
RUN mkdir -p /build/tests && find . -name test.main -exec cp --parents '{}' /build/tests \;
RUN mkdir -p /output/tests && find . -name test.main -exec cp --parents '{}' /output/tests \;
## Generate testing image
FROM alpine:3.10 as runner
ENV DOCKER_REMOTE_DAEMON=1
ENV DOCKER_INTEGRATION_DAEMON_DEST=/
ENTRYPOINT ["/scripts/run.sh"]
# Add an unprivileged user to be used for tests which need it
RUN addgroup docker && adduser -D -G docker unprivilegeduser -s /bin/ash
## Step 2: Generate testing image
FROM alpine:3.7 as runner
# GNU tar is used for generating the emptyfs image
RUN apk --no-cache add \
RUN apk add --update \
bash \
ca-certificates \
g++ \
@@ -69,16 +51,24 @@ RUN apk --no-cache add \
iptables \
pigz \
tar \
xz
xz \
&& rm -rf /var/cache/apk/*
COPY hack/test/e2e-run.sh /scripts/run.sh
COPY hack/make/.ensure-emptyfs /scripts/ensure-emptyfs.sh
# Add an unprivileged user to be used for tests which need it
RUN addgroup docker && adduser -D -G docker unprivilegeduser -s /bin/ash
COPY integration/testdata /tests/integration/testdata
COPY integration/build/testdata /tests/integration/build/testdata
COPY integration-cli/fixtures /tests/integration-cli/fixtures
COPY contrib/httpserver/Dockerfile /tests/contrib/httpserver/Dockerfile
COPY contrib/syscall-test /tests/contrib/syscall-test
COPY integration-cli/fixtures /tests/integration-cli/fixtures
COPY --from=frozen-images /build/ /docker-frozen-images
COPY --from=dockercli /build/ /usr/bin/
COPY --from=contrib /build/ /tests/contrib/
COPY --from=builder /build/ /
COPY hack/test/e2e-run.sh /scripts/run.sh
COPY hack/make/.ensure-emptyfs /scripts/ensure-emptyfs.sh
COPY --from=builder /output/docker-frozen-images /docker-frozen-images
COPY --from=builder /output/httpserver /tests/contrib/httpserver/httpserver
COPY --from=builder /output/tests /tests
COPY --from=builder /usr/local/bin/docker /usr/bin/docker
ENV DOCKER_REMOTE_DAEMON=1 DOCKER_INTEGRATION_DAEMON_DEST=/
ENTRYPOINT ["/scripts/run.sh"]


@@ -5,10 +5,7 @@
# This represents the bare minimum required to build and test Docker.
ARG GO_VERSION=1.20.10
FROM golang:${GO_VERSION}-buster
ENV GO111MODULE=off
FROM debian:stretch
# allow replacing httpredir or deb mirror
ARG APT_MIRROR=deb.debian.org
@@ -18,13 +15,13 @@ RUN sed -ri "s/(httpredir|deb).debian.org/$APT_MIRROR/g" /etc/apt/sources.list
# https://github.com/docker/docker/blob/master/project/PACKAGERS.md#build-dependencies
# https://github.com/docker/docker/blob/master/project/PACKAGERS.md#runtime-dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
btrfs-tools \
build-essential \
curl \
cmake \
gcc \
git \
libapparmor-dev \
libbtrfs-dev \
libdevmapper-dev \
libseccomp-dev \
ca-certificates \
@@ -40,6 +37,18 @@ RUN apt-get update && apt-get install -y --no-install-recommends \
vim-common \
&& rm -rf /var/lib/apt/lists/*
# Install Go
# IMPORTANT: If the version of Go is updated, the Windows to Linux CI machines
# will need updating, to avoid errors. Ping #docker-maintainers on IRC
# with a heads-up.
# IMPORTANT: When updating this please note that stdlib archive/tar pkg is vendored
ENV GO_VERSION 1.10.3
RUN curl -fsSL "https://golang.org/dl/go${GO_VERSION}.linux-amd64.tar.gz" \
| tar -xzC /usr/local
ENV PATH /go/bin:/usr/local/go/bin:$PATH
ENV GOPATH /go
ENV CGO_LDFLAGS -L/lib
# Install runc, containerd, tini and docker-proxy
# Please edit hack/dockerfile/install/<name>.installer to update them.
COPY hack/dockerfile/install hack/dockerfile/install


@@ -45,8 +45,8 @@
#
# 1. Clone the sources from github.com:
#
# >> git clone https://github.com/docker/docker.git C:\gopath\src\github.com\docker\docker
# >> Cloning into 'C:\gopath\src\github.com\docker\docker'...
# >> git clone https://github.com/docker/docker.git C:\go\src\github.com\docker\docker
# >> Cloning into 'C:\go\src\github.com\docker\docker'...
# >> remote: Counting objects: 186216, done.
# >> remote: Compressing objects: 100% (21/21), done.
# >> remote: Total 186216 (delta 5), reused 0 (delta 0), pack-reused 186195
@@ -59,7 +59,7 @@
#
# 2. Change directory to the cloned docker sources:
#
# >> cd C:\gopath\src\github.com\docker\docker
# >> cd C:\go\src\github.com\docker\docker
#
#
# 3. Build a docker image with the components required to build the docker binaries from source
@@ -79,8 +79,8 @@
# 5. Copy the binaries out of the container, replacing HostPath with an appropriate destination
# folder on the host system where you want the binaries to be located.
#
# >> docker cp binaries:C:\gopath\src\github.com\docker\docker\bundles\docker.exe C:\HostPath\docker.exe
# >> docker cp binaries:C:\gopath\src\github.com\docker\docker\bundles\dockerd.exe C:\HostPath\dockerd.exe
# >> docker cp binaries:C:\go\src\github.com\docker\docker\bundles\docker.exe C:\HostPath\docker.exe
# >> docker cp binaries:C:\go\src\github.com\docker\docker\bundles\dockerd.exe C:\HostPath\dockerd.exe
#
#
# 6. (Optional) Remove the interim container holding the built executable binaries:
@@ -147,36 +147,24 @@
# The docker integration tests do not currently run in a container on Windows, predominantly
# due to Windows not supporting privileged mode, so anything using a volume would fail.
# They (along with the rest of the docker CI suite) can be run using
# https://github.com/kevpar/docker-w2wCIScripts/blob/master/runCI/Invoke-DockerCI.ps1.
# https://github.com/jhowardmsft/docker-w2wCIScripts/blob/master/runCI/Invoke-DockerCI.ps1.
#
# -----------------------------------------------------------------------------------------
# The number of build steps below is explicitly minimised to improve performance.
# Extremely important - do not change the following line to reference a "specific" image,
# such as `mcr.microsoft.com/windows/servercore:ltsc2019`. If using this Dockerfile in process
# isolated containers, the kernel of the host must match the container image, and hence
# would fail between Windows Server 2016 (aka RS1) and Windows Server 2019 (aka RS5).
# It is expected that the image `microsoft/windowsservercore:latest` is present, and matches
# the host's kernel version before doing a build.
FROM microsoft/windowsservercore
# Use PowerShell as the default shell
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';"]
ARG GO_VERSION=1.20.10
ARG GOTESTSUM_VERSION=v1.8.2
# Environment variable notes:
# - GO_VERSION must be consistent with 'Dockerfile' used by Linux.
# - FROM_DOCKERFILE is used for detection of building within a container.
ENV GO_VERSION=${GO_VERSION} `
ENV GO_VERSION=1.10.3 `
GIT_VERSION=2.11.1 `
GOPATH=C:\gopath `
GO111MODULE=off `
FROM_DOCKERFILE=1 `
GOTESTSUM_VERSION=${GOTESTSUM_VERSION}
GOPATH=C:\go `
FROM_DOCKERFILE=1
RUN `
Function Test-Nano() { `
@@ -205,7 +193,6 @@ RUN `
Throw ("Failed to download " + $source) `
}`
} else { `
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12; `
$webClient = New-Object System.Net.WebClient; `
$webClient.DownloadFile($source, $target); `
} `
@@ -218,17 +205,16 @@ RUN `
Download-File $location C:\gitsetup.zip; `
`
Write-Host INFO: Downloading go...; `
$dlGoVersion=$Env:GO_VERSION -replace '\.0$',''; `
Download-File "https://golang.org/dl/go${dlGoVersion}.windows-amd64.zip" C:\go.zip; `
Download-File $('https://golang.org/dl/go'+$Env:GO_VERSION+'.windows-amd64.zip') C:\go.zip; `
`
Write-Host INFO: Downloading compiler 1 of 3...; `
Download-File https://raw.githubusercontent.com/moby/docker-tdmgcc/master/gcc.zip C:\gcc.zip; `
Download-File https://raw.githubusercontent.com/jhowardmsft/docker-tdmgcc/master/gcc.zip C:\gcc.zip; `
`
Write-Host INFO: Downloading compiler 2 of 3...; `
Download-File https://raw.githubusercontent.com/moby/docker-tdmgcc/master/runtime.zip C:\runtime.zip; `
Download-File https://raw.githubusercontent.com/jhowardmsft/docker-tdmgcc/master/runtime.zip C:\runtime.zip; `
`
Write-Host INFO: Downloading compiler 3 of 3...; `
Download-File https://raw.githubusercontent.com/moby/docker-tdmgcc/master/binutils.zip C:\binutils.zip; `
Download-File https://raw.githubusercontent.com/jhowardmsft/docker-tdmgcc/master/binutils.zip C:\binutils.zip; `
`
Write-Host INFO: Extracting git...; `
Expand-Archive C:\gitsetup.zip C:\git-tmp; `
@@ -252,35 +238,19 @@ RUN `
Remove-Item C:\binutils.zip; `
Remove-Item C:\gitsetup.zip; `
`
# Ensure all directories exist that we will require below....
$srcDir = """$Env:GOPATH`\src\github.com\docker\docker\bundles"""; `
Write-Host INFO: Ensuring existence of directory $srcDir...; `
New-Item -Force -ItemType Directory -Path $srcDir | Out-Null; `
Write-Host INFO: Creating source directory...; `
New-Item -ItemType Directory -Path C:\go\src\github.com\docker\docker | Out-Null; `
`
Write-Host INFO: Configuring git core.autocrlf...; `
C:\git\cmd\git config --global core.autocrlf true;
RUN `
Function Install-GoTestSum() { `
$Env:GO111MODULE = 'on'; `
$tmpGobin = "${Env:GOBIN_TMP}"; `
$Env:GOBIN = """${Env:GOPATH}`\bin"""; `
Write-Host "INFO: Installing gotestsum version $Env:GOTESTSUM_VERSION in $Env:GOBIN"; `
&go install "gotest.tools/gotestsum@${Env:GOTESTSUM_VERSION}"; `
$Env:GOBIN = "${tmpGobin}"; `
$Env:GO111MODULE = 'off'; `
if ($LASTEXITCODE -ne 0) { `
Throw '"gotestsum install failed..."'; `
} `
} `
C:\git\cmd\git config --global core.autocrlf true; `
`
Install-GoTestSum
Write-Host INFO: Completed
# Make PowerShell the default entrypoint
ENTRYPOINT ["powershell.exe"]
# Set the working directory to the location of the sources
WORKDIR ${GOPATH}\src\github.com\docker\docker
WORKDIR C:\go\src\github.com\docker\docker
# Copy the sources into the container
COPY . .

Jenkinsfile

@@ -1,713 +0,0 @@
#!groovy
pipeline {
agent none
options {
buildDiscarder(logRotator(daysToKeepStr: '30'))
timeout(time: 2, unit: 'HOURS')
timestamps()
}
parameters {
booleanParam(name: 'unit_validate', defaultValue: true, description: 'amd64 (x86_64) unit tests and vendor check')
booleanParam(name: 'validate_force', defaultValue: false, description: 'force validation steps to be run, even if no changes were detected')
booleanParam(name: 'amd64', defaultValue: true, description: 'amd64 (x86_64) Build/Test')
booleanParam(name: 'rootless', defaultValue: true, description: 'amd64 (x86_64) Build/Test (Rootless mode)')
booleanParam(name: 'cgroup2', defaultValue: true, description: 'amd64 (x86_64) Build/Test (cgroup v2)')
booleanParam(name: 'arm64', defaultValue: true, description: 'ARM (arm64) Build/Test')
booleanParam(name: 'windowsRS5', defaultValue: true, description: 'Windows 2019 (RS5) Build/Test')
booleanParam(name: 'dco', defaultValue: true, description: 'Run the DCO check')
}
environment {
DOCKER_BUILDKIT = '1'
DOCKER_EXPERIMENTAL = '1'
DOCKER_GRAPHDRIVER = 'overlay2'
CHECK_CONFIG_COMMIT = '78405559cfe5987174aa2cb6463b9b2c1b917255'
TESTDEBUG = '0'
TIMEOUT = '120m'
}
stages {
stage('pr-hack') {
when { changeRequest() }
steps {
script {
echo "Workaround for PR auto-cancel feature. Borrowed from https://issues.jenkins-ci.org/browse/JENKINS-43353"
def buildNumber = env.BUILD_NUMBER as int
if (buildNumber > 1) milestone(buildNumber - 1)
milestone(buildNumber)
}
}
}
stage('DCO-check') {
when {
beforeAgent true
expression { params.dco }
}
agent { label 'arm64 && ubuntu-2004' }
steps {
sh '''
docker run --rm \
-v "$WORKSPACE:/workspace" \
-e VALIDATE_REPO=${GIT_URL} \
-e VALIDATE_BRANCH=${CHANGE_TARGET} \
alpine sh -c 'apk add --no-cache -q bash git openssh-client && git config --system --add safe.directory /workspace && cd /workspace && hack/validate/dco'
'''
}
}
stage('Build') {
parallel {
stage('unit-validate') {
when {
beforeAgent true
expression { params.unit_validate }
}
agent { label 'amd64 && ubuntu-2004 && overlay2' }
environment {
// On master ("non-pull-request"), force running some validation checks (vendor, swagger),
// even if no files were changed. This allows catching problems caused by pull-requests
// that were merged out-of-sequence.
TEST_FORCE_VALIDATE = sh returnStdout: true, script: 'if [ "${BRANCH_NAME%%-*}" != "PR" ] || [ "${CHANGE_TARGET:-master}" != "master" ] || [ "${validate_force}" = "true" ]; then echo "1"; fi'
}
stages {
stage("Print info") {
steps {
sh 'docker version'
sh 'docker info'
sh '''
echo "check-config.sh version: ${CHECK_CONFIG_COMMIT}"
curl -fsSL -o ${WORKSPACE}/check-config.sh "https://raw.githubusercontent.com/moby/moby/${CHECK_CONFIG_COMMIT}/contrib/check-config.sh" \
&& bash ${WORKSPACE}/check-config.sh || true
'''
}
}
stage("Build dev image") {
steps {
sh 'docker build --force-rm --build-arg APT_MIRROR -t docker:${GIT_COMMIT} .'
}
}
stage("Validate") {
steps {
sh '''
docker run --rm -t --privileged \
-v "$WORKSPACE/bundles:/go/src/github.com/docker/docker/bundles" \
-v "$WORKSPACE/.git:/go/src/github.com/docker/docker/.git" \
--name docker-pr$BUILD_NUMBER \
-e DOCKER_EXPERIMENTAL \
-e DOCKER_GITCOMMIT=${GIT_COMMIT} \
-e DOCKER_GRAPHDRIVER \
-e TEST_FORCE_VALIDATE \
-e VALIDATE_REPO=${GIT_URL} \
-e VALIDATE_BRANCH=${CHANGE_TARGET} \
docker:${GIT_COMMIT} \
hack/validate/default
'''
}
}
stage("Docker-py") {
steps {
sh '''
docker run --rm -t --privileged \
-v "$WORKSPACE/bundles:/go/src/github.com/docker/docker/bundles" \
--name docker-pr$BUILD_NUMBER \
-e DOCKER_EXPERIMENTAL \
-e DOCKER_GITCOMMIT=${GIT_COMMIT} \
-e DOCKER_GRAPHDRIVER \
-e VALIDATE_REPO=${GIT_URL} \
-e VALIDATE_BRANCH=${CHANGE_TARGET} \
docker:${GIT_COMMIT} \
hack/make.sh \
dynbinary-daemon \
test-docker-py
'''
}
post {
always {
junit testResults: 'bundles/test-docker-py/junit-report.xml', allowEmptyResults: true
sh '''
echo "Ensuring container killed."
docker rm -vf docker-pr$BUILD_NUMBER || true
'''
sh '''
echo 'Chowning /workspace to jenkins user'
docker run --rm -v "$WORKSPACE:/workspace" busybox chown -R "$(id -u):$(id -g)" /workspace
'''
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE', message: 'Failed to create bundles.tar.gz') {
sh '''
bundleName=docker-py
echo "Creating ${bundleName}-bundles.tar.gz"
tar -czf ${bundleName}-bundles.tar.gz bundles/test-docker-py/*.xml bundles/test-docker-py/*.log
'''
archiveArtifacts artifacts: '*-bundles.tar.gz', allowEmptyArchive: true
}
}
}
}
stage("Static") {
steps {
sh '''
docker run --rm -t --privileged \
-v "$WORKSPACE/bundles:/go/src/github.com/docker/docker/bundles" \
--name docker-pr$BUILD_NUMBER \
-e DOCKER_GITCOMMIT=${GIT_COMMIT} \
-e DOCKER_GRAPHDRIVER \
docker:${GIT_COMMIT} \
hack/make.sh binary-daemon
'''
}
}
stage("Cross") {
steps {
sh '''
docker run --rm -t --privileged \
-v "$WORKSPACE/bundles:/go/src/github.com/docker/docker/bundles" \
--name docker-pr$BUILD_NUMBER \
-e DOCKER_GITCOMMIT=${GIT_COMMIT} \
-e DOCKER_GRAPHDRIVER \
docker:${GIT_COMMIT} \
hack/make.sh cross
'''
}
}
// needs to be last stage that calls make.sh for the junit report to work
stage("Unit tests") {
steps {
sh '''
docker run --rm -t --privileged \
-v "$WORKSPACE/bundles:/go/src/github.com/docker/docker/bundles" \
--name docker-pr$BUILD_NUMBER \
-e DOCKER_EXPERIMENTAL \
-e DOCKER_GITCOMMIT=${GIT_COMMIT} \
-e DOCKER_GRAPHDRIVER \
-e VALIDATE_REPO=${GIT_URL} \
-e VALIDATE_BRANCH=${CHANGE_TARGET} \
docker:${GIT_COMMIT} \
hack/test/unit
'''
}
post {
always {
junit testResults: 'bundles/junit-report.xml', allowEmptyResults: true
}
}
}
stage("Validate vendor") {
steps {
sh '''
docker run --rm -t --privileged \
-v "$WORKSPACE/.git:/go/src/github.com/docker/docker/.git" \
--name docker-pr$BUILD_NUMBER \
-e DOCKER_EXPERIMENTAL \
-e DOCKER_GITCOMMIT=${GIT_COMMIT} \
-e DOCKER_GRAPHDRIVER \
-e TEST_FORCE_VALIDATE \
-e VALIDATE_REPO=${GIT_URL} \
-e VALIDATE_BRANCH=${CHANGE_TARGET} \
docker:${GIT_COMMIT} \
hack/validate/vendor
'''
}
}
}
post {
always {
sh '''
echo 'Ensuring container killed.'
docker rm -vf docker-pr$BUILD_NUMBER || true
'''
sh '''
echo 'Chowning /workspace to jenkins user'
docker run --rm -v "$WORKSPACE:/workspace" busybox chown -R "$(id -u):$(id -g)" /workspace
'''
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE', message: 'Failed to create bundles.tar.gz') {
sh '''
bundleName=unit
echo "Creating ${bundleName}-bundles.tar.gz"
tar -czvf ${bundleName}-bundles.tar.gz bundles/junit-report.xml bundles/go-test-report.json bundles/profile.out
'''
archiveArtifacts artifacts: '*-bundles.tar.gz', allowEmptyArchive: true
}
}
cleanup {
sh 'make clean'
deleteDir()
}
}
}
stage('amd64') {
when {
beforeAgent true
expression { params.amd64 }
}
agent { label 'amd64 && ubuntu-2004 && overlay2' }
stages {
stage("Print info") {
steps {
sh 'docker version'
sh 'docker info'
sh '''
echo "check-config.sh version: ${CHECK_CONFIG_COMMIT}"
curl -fsSL -o ${WORKSPACE}/check-config.sh "https://raw.githubusercontent.com/moby/moby/${CHECK_CONFIG_COMMIT}/contrib/check-config.sh" \
&& bash ${WORKSPACE}/check-config.sh || true
'''
}
}
stage("Build dev image") {
steps {
sh '''
# todo: include ip_vs in base image
sudo modprobe ip_vs
docker build --force-rm --build-arg APT_MIRROR -t docker:${GIT_COMMIT} .
'''
}
}
stage("Run tests") {
steps {
sh '''#!/bin/bash
# bash is needed so 'jobs -p' works properly
# it also accepts setting inline envvars for functions without explicitly exporting
set -x
run_tests() {
[ -n "$TESTDEBUG" ] && rm= || rm=--rm;
docker run $rm -t --privileged \
-v "$WORKSPACE/bundles/${TEST_INTEGRATION_DEST}:/go/src/github.com/docker/docker/bundles" \
-v "$WORKSPACE/bundles/dynbinary-daemon:/go/src/github.com/docker/docker/bundles/dynbinary-daemon" \
-v "$WORKSPACE/.git:/go/src/github.com/docker/docker/.git" \
--name "$CONTAINER_NAME" \
-e KEEPBUNDLE=1 \
-e TESTDEBUG \
-e TESTFLAGS \
-e TEST_SKIP_INTEGRATION \
-e TEST_SKIP_INTEGRATION_CLI \
-e DOCKER_GITCOMMIT=${GIT_COMMIT} \
-e DOCKER_GRAPHDRIVER \
-e TIMEOUT \
-e VALIDATE_REPO=${GIT_URL} \
-e VALIDATE_BRANCH=${CHANGE_TARGET} \
docker:${GIT_COMMIT} \
hack/make.sh \
"$1" \
test-integration
}
trap "exit" INT TERM
trap 'pids=$(jobs -p); echo "Remaining pids to kill: [$pids]"; [ -z "$pids" ] || kill $pids' EXIT
CONTAINER_NAME=docker-pr$BUILD_NUMBER
docker run --rm -t --privileged \
-v "$WORKSPACE/bundles:/go/src/github.com/docker/docker/bundles" \
-v "$WORKSPACE/.git:/go/src/github.com/docker/docker/.git" \
--name ${CONTAINER_NAME}-build \
-e DOCKER_EXPERIMENTAL \
-e DOCKER_GITCOMMIT=${GIT_COMMIT} \
-e DOCKER_GRAPHDRIVER \
docker:${GIT_COMMIT} \
hack/make.sh \
dynbinary-daemon
# flaky + integration
TEST_INTEGRATION_DEST=1 CONTAINER_NAME=${CONTAINER_NAME}-1 TEST_SKIP_INTEGRATION_CLI=1 run_tests test-integration-flaky &
# integration-cli first set
TEST_INTEGRATION_DEST=2 CONTAINER_NAME=${CONTAINER_NAME}-2 TEST_SKIP_INTEGRATION=1 TESTFLAGS="-test.run Test(DockerSuite|DockerNetworkSuite|DockerHubPullSuite|DockerRegistrySuite|DockerSchema1RegistrySuite|DockerRegistryAuthTokenSuite|DockerRegistryAuthHtpasswdSuite)/" run_tests &
# integration-cli second set
TEST_INTEGRATION_DEST=3 CONTAINER_NAME=${CONTAINER_NAME}-3 TEST_SKIP_INTEGRATION=1 TESTFLAGS="-test.run Test(DockerSwarmSuite|DockerDaemonSuite|DockerExternalVolumeSuite)/" run_tests &
c=0
for job in $(jobs -p); do
wait ${job} || c=$?
done
exit $c
'''
}
post {
always {
junit testResults: 'bundles/**/*-report.xml', allowEmptyResults: true
}
}
}
}
post {
always {
sh '''
echo "Ensuring container killed."
cids=$(docker ps -aq -f name=docker-pr${BUILD_NUMBER}-*)
[ -n "$cids" ] && docker rm -vf $cids || true
'''
sh '''
echo "Chowning /workspace to jenkins user"
docker run --rm -v "$WORKSPACE:/workspace" busybox chown -R "$(id -u):$(id -g)" /workspace
'''
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE', message: 'Failed to create bundles.tar.gz') {
sh '''
bundleName=amd64
echo "Creating ${bundleName}-bundles.tar.gz"
# exclude overlay2 directories
find bundles -path '*/root/*overlay2' -prune -o -type f \\( -name '*-report.json' -o -name '*.log' -o -name '*.prof' -o -name '*-report.xml' \\) -print | xargs tar -czf ${bundleName}-bundles.tar.gz
'''
archiveArtifacts artifacts: '*-bundles.tar.gz', allowEmptyArchive: true
}
}
cleanup {
sh 'make clean'
deleteDir()
}
}
}
stage('rootless') {
when {
beforeAgent true
expression { params.rootless }
}
agent { label 'amd64 && ubuntu-2004 && overlay2' }
stages {
stage("Print info") {
steps {
sh 'docker version'
sh 'docker info'
sh '''
echo "check-config.sh version: ${CHECK_CONFIG_COMMIT}"
curl -fsSL -o ${WORKSPACE}/check-config.sh "https://raw.githubusercontent.com/moby/moby/${CHECK_CONFIG_COMMIT}/contrib/check-config.sh" \
&& bash ${WORKSPACE}/check-config.sh || true
'''
}
}
stage("Build dev image") {
steps {
sh '''
docker build --force-rm --build-arg APT_MIRROR -t docker:${GIT_COMMIT} .
'''
}
}
stage("Integration tests") {
environment {
DOCKER_ROOTLESS = '1'
TEST_SKIP_INTEGRATION_CLI = '1'
}
steps {
sh '''
docker run --rm -t --privileged \
-v "$WORKSPACE/bundles:/go/src/github.com/docker/docker/bundles" \
--name docker-pr$BUILD_NUMBER \
-e DOCKER_GITCOMMIT=${GIT_COMMIT} \
-e DOCKER_GRAPHDRIVER \
-e DOCKER_EXPERIMENTAL \
-e DOCKER_ROOTLESS \
-e TEST_SKIP_INTEGRATION_CLI \
-e TIMEOUT \
-e VALIDATE_REPO=${GIT_URL} \
-e VALIDATE_BRANCH=${CHANGE_TARGET} \
docker:${GIT_COMMIT} \
hack/make.sh \
dynbinary \
test-integration
'''
}
post {
always {
junit testResults: 'bundles/**/*-report.xml', allowEmptyResults: true
}
}
}
}
post {
always {
sh '''
echo "Ensuring container killed."
docker rm -vf docker-pr$BUILD_NUMBER || true
'''
sh '''
echo "Chowning /workspace to jenkins user"
docker run --rm -v "$WORKSPACE:/workspace" busybox chown -R "$(id -u):$(id -g)" /workspace
'''
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE', message: 'Failed to create bundles.tar.gz') {
sh '''
bundleName=amd64-rootless
echo "Creating ${bundleName}-bundles.tar.gz"
# exclude overlay2 directories
find bundles -path '*/root/*overlay2' -prune -o -type f \\( -name '*-report.json' -o -name '*.log' -o -name '*.prof' -o -name '*-report.xml' \\) -print | xargs tar -czf ${bundleName}-bundles.tar.gz
'''
archiveArtifacts artifacts: '*-bundles.tar.gz', allowEmptyArchive: true
}
}
cleanup {
sh 'make clean'
deleteDir()
}
}
}
stage('cgroup2') {
when {
beforeAgent true
expression { params.cgroup2 }
}
agent { label 'amd64 && ubuntu-2004 && cgroup2' }
stages {
stage("Print info") {
steps {
sh 'docker version'
sh 'docker info'
}
}
stage("Build dev image") {
steps {
sh '''
docker build --force-rm --build-arg APT_MIRROR --build-arg SYSTEMD=true -t docker:${GIT_COMMIT} .
'''
}
}
stage("Integration tests") {
environment {
DOCKER_SYSTEMD = '1' // recommended cgroup driver for v2
TEST_SKIP_INTEGRATION_CLI = '1' // CLI tests do not support v2
}
steps {
sh '''
docker run --rm -t --privileged \
-v "$WORKSPACE/bundles:/go/src/github.com/docker/docker/bundles" \
--name docker-pr$BUILD_NUMBER \
-e DOCKER_GITCOMMIT=${GIT_COMMIT} \
-e DOCKER_GRAPHDRIVER \
-e DOCKER_EXPERIMENTAL \
-e DOCKER_SYSTEMD \
-e TEST_SKIP_INTEGRATION_CLI \
-e TIMEOUT \
-e VALIDATE_REPO=${GIT_URL} \
-e VALIDATE_BRANCH=${CHANGE_TARGET} \
docker:${GIT_COMMIT} \
hack/make.sh \
dynbinary \
test-integration
'''
}
post {
always {
junit testResults: 'bundles/**/*-report.xml', allowEmptyResults: true
}
}
}
}
post {
always {
sh '''
echo "Ensuring container killed."
docker rm -vf docker-pr$BUILD_NUMBER || true
'''
sh '''
echo "Chowning /workspace to jenkins user"
docker run --rm -v "$WORKSPACE:/workspace" busybox chown -R "$(id -u):$(id -g)" /workspace
'''
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE', message: 'Failed to create bundles.tar.gz') {
sh '''
bundleName=amd64-cgroup2
echo "Creating ${bundleName}-bundles.tar.gz"
# exclude overlay2 directories
find bundles -path '*/root/*overlay2' -prune -o -type f \\( -name '*-report.json' -o -name '*.log' -o -name '*.prof' -o -name '*-report.xml' \\) -print | xargs tar -czf ${bundleName}-bundles.tar.gz
'''
archiveArtifacts artifacts: '*-bundles.tar.gz', allowEmptyArchive: true
}
}
cleanup {
sh 'make clean'
deleteDir()
}
}
}
stage('arm64') {
when {
beforeAgent true
expression { params.arm64 }
}
agent { label 'arm64 && ubuntu-2004' }
environment {
TEST_SKIP_INTEGRATION_CLI = '1'
}
stages {
stage("Print info") {
steps {
sh 'docker version'
sh 'docker info'
sh '''
echo "check-config.sh version: ${CHECK_CONFIG_COMMIT}"
curl -fsSL -o ${WORKSPACE}/check-config.sh "https://raw.githubusercontent.com/moby/moby/${CHECK_CONFIG_COMMIT}/contrib/check-config.sh" \
&& bash ${WORKSPACE}/check-config.sh || true
'''
}
}
stage("Build dev image") {
steps {
sh 'docker build --force-rm -t docker:${GIT_COMMIT} .'
}
}
stage("Unit tests") {
steps {
sh '''
docker run --rm -t --privileged \
-v "$WORKSPACE/bundles:/go/src/github.com/docker/docker/bundles" \
--name docker-pr$BUILD_NUMBER \
-e DOCKER_EXPERIMENTAL \
-e DOCKER_GITCOMMIT=${GIT_COMMIT} \
-e DOCKER_GRAPHDRIVER \
-e VALIDATE_REPO=${GIT_URL} \
-e VALIDATE_BRANCH=${CHANGE_TARGET} \
docker:${GIT_COMMIT} \
hack/test/unit
'''
}
post {
always {
junit testResults: 'bundles/junit-report.xml', allowEmptyResults: true
}
}
}
stage("Integration tests") {
environment { TEST_SKIP_INTEGRATION_CLI = '1' }
steps {
sh '''
docker run --rm -t --privileged \
-v "$WORKSPACE/bundles:/go/src/github.com/docker/docker/bundles" \
--name docker-pr$BUILD_NUMBER \
-e DOCKER_EXPERIMENTAL \
-e DOCKER_GITCOMMIT=${GIT_COMMIT} \
-e DOCKER_GRAPHDRIVER \
-e TESTDEBUG \
-e TEST_SKIP_INTEGRATION_CLI \
-e TIMEOUT \
-e VALIDATE_REPO=${GIT_URL} \
-e VALIDATE_BRANCH=${CHANGE_TARGET} \
docker:${GIT_COMMIT} \
hack/make.sh \
dynbinary \
test-integration
'''
}
post {
always {
junit testResults: 'bundles/**/*-report.xml', allowEmptyResults: true
}
}
}
}
post {
always {
sh '''
echo "Ensuring container killed."
docker rm -vf docker-pr$BUILD_NUMBER || true
'''
sh '''
echo "Chowning /workspace to jenkins user"
docker run --rm -v "$WORKSPACE:/workspace" busybox chown -R "$(id -u):$(id -g)" /workspace
'''
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE', message: 'Failed to create bundles.tar.gz') {
sh '''
bundleName=arm64-integration
echo "Creating ${bundleName}-bundles.tar.gz"
# exclude overlay2 directories
find bundles -path '*/root/*overlay2' -prune -o -type f \\( -name '*-report.json' -o -name '*.log' -o -name '*.prof' -o -name '*-report.xml' \\) -print | xargs tar -czf ${bundleName}-bundles.tar.gz
'''
archiveArtifacts artifacts: '*-bundles.tar.gz', allowEmptyArchive: true
}
}
cleanup {
sh 'make clean'
deleteDir()
}
}
}
stage('win-RS5') {
when {
beforeAgent true
expression { params.windowsRS5 }
}
environment {
DOCKER_BUILDKIT = '0'
DOCKER_DUT_DEBUG = '1'
SKIP_VALIDATION_TESTS = '1'
SOURCES_DRIVE = 'd'
SOURCES_SUBDIR = 'gopath'
TESTRUN_DRIVE = 'd'
TESTRUN_SUBDIR = "CI"
WINDOWS_BASE_IMAGE = 'mcr.microsoft.com/windows/servercore'
WINDOWS_BASE_IMAGE_TAG = 'ltsc2019'
}
agent {
node {
customWorkspace 'd:\\gopath\\src\\github.com\\docker\\docker'
label 'windows-2019'
}
}
stages {
stage("Print info") {
steps {
sh 'docker version'
sh 'docker info'
}
}
stage("Run tests") {
steps {
powershell '''
$ErrorActionPreference = 'Stop'
Invoke-WebRequest https://github.com/moby/docker-ci-zap/blob/master/docker-ci-zap.exe?raw=true -OutFile C:/Windows/System32/docker-ci-zap.exe
./hack/ci/windows.ps1
exit $LastExitCode
'''
}
}
}
post {
always {
junit testResults: 'bundles/junit-report-*.xml', allowEmptyResults: true
catchError(buildResult: 'SUCCESS', stageResult: 'FAILURE', message: 'Failed to create bundles.tar.gz') {
powershell '''
cd $env:WORKSPACE
$bundleName="windowsRS5-integration"
Write-Host -ForegroundColor Green "Creating ${bundleName}-bundles.zip"
# archiveArtifacts does not support env-vars, so save the artifacts in a fixed location
Compress-Archive -Path "bundles/CIDUT.out", "bundles/CIDUT.err", "bundles/junit-report-*.xml" -CompressionLevel Optimal -DestinationPath "${bundleName}-bundles.zip"
'''
archiveArtifacts artifacts: '*-bundles.zip', allowEmptyArchive: true
}
}
cleanup {
sh 'make clean'
deleteDir()
}
}
}
}
}
}
}


@@ -176,7 +176,7 @@
END OF TERMS AND CONDITIONS
Copyright 2013-2018 Docker, Inc.
Copyright 2013-2017 Docker, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.


@@ -24,17 +24,20 @@
# subsystem maintainers accountable. If ownership is unclear, they are the de facto owners.
people = [
"aaronlehmann",
"akihirosuda",
"anusha",
"coolljt0725",
"cpuguy83",
"crosbymichael",
"dnephin",
"duglin",
"estesp",
"jhowardmsft",
"johnstep",
"justincormack",
"kolyshkin",
"lowenna",
"mhbauer",
"mlaventure",
"runcom",
"stevvooe",
"thajeztah",
@@ -47,6 +50,15 @@
"yongtang"
]
[Org."Docs maintainers"]
# TODO Describe the docs maintainers role.
people = [
"misty",
"thajeztah"
]
[Org.Curators]
# The curators help ensure that incoming issues and pull requests are properly triaged and
@@ -62,12 +74,13 @@
people = [
"alexellis",
"andrewhsu",
"anonymuse",
"chanwit",
"fntlnz",
"gianarb",
"olljanat",
"programmerq",
"rheinwein",
"ripcurld",
"samwhited",
"thajeztah"
]
@@ -78,12 +91,6 @@
# Thank you!
people = [
# Aaron Lehmann was a maintainer for swarmkit, the registry, and the engine,
# and contributed many improvements, features, and bugfixes in those areas,
# among which "automated service rollbacks", templated secrets and configs,
# and resumable image layer downloads.
"aaronlehmann",
# Harald Albers is the mastermind behind the bash completion scripts for the
# Docker CLI. The completion scripts moved to the Docker CLI repository, so
# you can now find him perform his magic in the https://github.com/docker/cli repository.
@@ -103,19 +110,6 @@
# and tweets as @calavera.
"calavera",
# Before becoming a maintainer, Daniel Nephin was a core contributor
# to "Fig" (now known as Docker Compose). As a maintainer for both the
# Engine and Docker CLI, Daniel contributed many features, among which
# the `docker stack` commands, allowing users to deploy their Docker
# Compose projects as a Swarm service.
"dnephin",
# Doug Davis contributed many features and fixes for the classic builder,
# such as "wildcard" copy, the dockerignore file, custom paths/names
# for the Dockerfile, as well as enhancements to the API and documentation.
# Follow Doug on Twitter, where he tweets as @duginabox.
"duglin",
# As a maintainer, Erik was responsible for the "builder", and
# started the first designs for the new networking model in
# Docker. Erik is now working on all kinds of plugins for Docker
@@ -124,7 +118,7 @@
# still stumble into him in our issue tracker, or on IRC.
"erikh",
# Evan Hazlett is the creator of the Shipyard and Interlock open source projects,
# Evan Hazlett is the creator of of the Shipyard and Interlock open source projects,
# and the author of "Orca", which became the foundation of Docker Universal Control
# Plane (UCP). As a maintainer, Evan helped integrating SwarmKit (secrets, tasks)
# into the Docker engine.
@@ -168,7 +162,7 @@
# Alexander Morozov contributed many features to Docker, worked on the premise of
# what later became containerd (and worked on that too), and made a "stupid" Go
# vendor tool specifically for docker/docker needs: vndr (https://github.com/LK4D4/vndr).
# vendor tool specificaly for docker/docker needs: vndr (https://github.com/LK4D4/vndr).
# Not many know that Alexander is a master negotiator, being able to change course
# of action with a single "Nope, we're not gonna do that".
"lk4d4",
@@ -179,13 +173,6 @@
# Swarm mode networking.
"mavenugo",
# As a maintainer, Kenfe-Mickaël Laventure worked on the container runtime,
# integrating containerd 1.0 with the daemon, and adding support for custom
# OCI runtimes, as well as implementing the `docker prune` subcommands,
# which was a welcome feature to be added. You can keep up with Mickaél on
# Twitter (@kmlaventure).
"mlaventure",
# As a docs maintainer, Mary Anthony contributed greatly to the Docker
# docs. She wrote the Docker Contributor Guide and Getting Started
# Guides. She helped create a doc build system independent of
@@ -260,7 +247,7 @@
[people.akihirosuda]
Name = "Akihiro Suda"
Email = "akihiro.suda.cz@hco.ntt.co.jp"
Email = "suda.akihiro@lab.ntt.co.jp"
GitHub = "AkihiroSuda"
[people.aluzzardi]
@@ -278,6 +265,11 @@
Email = "andrewhsu@docker.com"
GitHub = "andrewhsu"
[people.anonymuse]
Name = "Jesse White"
Email = "anonymuse@gmail.com"
GitHub = "anonymuse"
[people.anusha]
Name = "Anusha Ragunathan"
Email = "anusha@docker.com"
@@ -298,6 +290,11 @@
Email = "cpuguy83@gmail.com"
GitHub = "cpuguy83"
[people.chanwit]
Name = "Chanwit Kaewkasi"
Email = "chanwit@gmail.com"
GitHub = "chanwit"
[people.crosbymichael]
Name = "Michael Crosby"
Email = "crosbymichael@gmail.com"
@@ -348,6 +345,11 @@
Email = "james@lovedthanlost.net"
GitHub = "jamtur01"
[people.jhowardmsft]
Name = "John Howard"
Email = "jhoward@microsoft.com"
GitHub = "jhowardmsft"
[people.jessfraz]
Name = "Jessie Frazelle"
Email = "jess@linux.com"
@@ -363,21 +365,11 @@
Email = "justin.cormack@docker.com"
GitHub = "justincormack"
[people.kolyshkin]
Name = "Kir Kolyshkin"
Email = "kolyshkin@gmail.com"
GitHub = "kolyshkin"
[people.lk4d4]
Name = "Alexander Morozov"
Email = "lk4d4@docker.com"
GitHub = "lk4d4"
[people.lowenna]
Name = "John Howard"
Email = "github@lowenna.com"
GitHub = "lowenna"
[people.mavenugo]
Name = "Madhu Venugopal"
Email = "madhu@docker.com"
@@ -388,6 +380,11 @@
Email = "mbauer@us.ibm.com"
GitHub = "mhbauer"
[people.misty]
Name = "Misty Stanley-Jones"
Email = "misty@docker.com"
GitHub = "mistyhacks"
[people.mlaventure]
Name = "Kenfe-Mickaël Laventure"
Email = "mickael.laventure@gmail.com"
@@ -403,16 +400,16 @@
Email = "mrjana@docker.com"
GitHub = "mrjana"
[people.olljanat]
Name = "Olli Janatuinen"
Email = "olli.janatuinen@gmail.com"
GitHub = "olljanat"
[people.programmerq]
Name = "Jeff Anderson"
Email = "jeff@docker.com"
GitHub = "programmerq"
[people.rheinwein]
Name = "Laura Frank"
Email = "laura@codeship.com"
GitHub = "rheinwein"
[people.ripcurld]
Name = "Boaz Shuster"
Email = "ripcurld.github@gmail.com"
@@ -423,11 +420,6 @@
Email = "runcom@redhat.com"
GitHub = "runcom"
[people.samwhited]
Name = "Sam Whited"
Email = "sam@samwhited.com"
GitHub = "samwhited"
[people.shykes]
Name = "Solomon Hykes"
Email = "solomon@docker.com"

Makefile

@@ -1,25 +1,10 @@
.PHONY: all binary dynbinary build cross help install manpages run shell test test-docker-py test-integration test-unit validate win
BUILDX_VERSION ?= v0.8.2
ifdef USE_BUILDX
BUILDX ?= $(shell command -v buildx)
BUILDX ?= $(shell command -v docker-buildx)
DOCKER_BUILDX_CLI_PLUGIN_PATH ?= ~/.docker/cli-plugins/docker-buildx
BUILDX ?= $(shell if [ -x "$(DOCKER_BUILDX_CLI_PLUGIN_PATH)" ]; then echo $(DOCKER_BUILDX_CLI_PLUGIN_PATH); fi)
endif
ifndef USE_BUILDX
DOCKER_BUILDKIT := 1
export DOCKER_BUILDKIT
endif
BUILDX ?= bundles/buildx
DOCKER ?= docker
.PHONY: all binary dynbinary build cross help init-go-pkg-cache install manpages run shell test test-docker-py test-integration test-unit validate win
# set the graph driver as the current graphdriver if not set
DOCKER_GRAPHDRIVER := $(if $(DOCKER_GRAPHDRIVER),$(DOCKER_GRAPHDRIVER),$(shell docker info 2>&1 | grep "Storage Driver" | sed 's/.*: //'))
export DOCKER_GRAPHDRIVER
DOCKER_INCREMENTAL_BINARY := $(if $(DOCKER_INCREMENTAL_BINARY),$(DOCKER_INCREMENTAL_BINARY),1)
export DOCKER_INCREMENTAL_BINARY
# get OS/Arch of docker engine
DOCKER_OSARCH := $(shell bash -c 'source hack/make/.detect-daemon-osarch && echo $${DOCKER_ENGINE_OSARCH}')
@@ -28,12 +13,6 @@ DOCKERFILE := $(shell bash -c 'source hack/make/.detect-daemon-osarch && echo $$
DOCKER_GITCOMMIT := $(shell git rev-parse --short HEAD || echo unsupported)
export DOCKER_GITCOMMIT
# allow overriding the repository and branch that validation scripts are running
# against; these are used in hack/validate/.validate to check what changed in the PR.
export VALIDATE_REPO
export VALIDATE_BRANCH
export VALIDATE_ORIGIN_BRANCH
# env vars passed through directly to Docker's build scripts
# to allow things like `make KEEPBUNDLE=1 binary` easily
# `project/PACKAGERS.md` has some limited documentation of some of these
@@ -46,11 +25,11 @@ export VALIDATE_ORIGIN_BRANCH
#
DOCKER_ENVS := \
-e DOCKER_CROSSPLATFORMS \
-e BUILD_APT_MIRROR \
-e BUILDFLAGS \
-e KEEPBUNDLE \
-e DOCKER_BUILD_ARGS \
-e DOCKER_BUILD_GOGC \
-e DOCKER_BUILD_OPTS \
-e DOCKER_BUILD_PKGS \
-e DOCKER_BUILDKIT \
-e DOCKER_BASH_COMPLETION_PATH \
@@ -59,28 +38,17 @@ DOCKER_ENVS := \
-e DOCKER_EXPERIMENTAL \
-e DOCKER_GITCOMMIT \
-e DOCKER_GRAPHDRIVER \
-e DOCKER_INCREMENTAL_BINARY \
-e DOCKER_LDFLAGS \
-e DOCKER_PORT \
-e DOCKER_REMAP_ROOT \
-e DOCKER_ROOTLESS \
-e DOCKER_STORAGE_OPTS \
-e DOCKER_TEST_HOST \
-e DOCKER_USERLANDPROXY \
-e DOCKERD_ARGS \
-e TEST_FORCE_VALIDATE \
-e TEST_INTEGRATION_DIR \
-e TEST_SKIP_INTEGRATION \
-e TEST_SKIP_INTEGRATION_CLI \
-e TESTDEBUG \
-e TESTDIRS \
-e TESTFLAGS \
-e TESTFLAGS_INTEGRATION \
-e TESTFLAGS_INTEGRATION_CLI \
-e TEST_FILTER \
-e TIMEOUT \
-e VALIDATE_REPO \
-e VALIDATE_BRANCH \
-e VALIDATE_ORIGIN_BRANCH \
-e HTTP_PROXY \
-e HTTPS_PROXY \
-e NO_PROXY \
@@ -89,7 +57,6 @@ DOCKER_ENVS := \
-e no_proxy \
-e VERSION \
-e PLATFORM \
-e DEFAULT_PRODUCT_LICENSE \
-e PRODUCT
# note: we _cannot_ add "-e DOCKER_BUILDTAGS" here because even if it's unset in the shell, that would shadow the "ENV DOCKER_BUILDTAGS" set in our Dockerfile, which is very important for our official builds
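# DOCKER_ENVS forwards these variables from the caller's shell into the dev
# container, so runs can be tuned without editing the Makefile; a sketch,
# where the test filter is illustrative:
TESTFLAGS='-test.run TestVolumes' TEST_SKIP_INTEGRATION_CLI=1 make test-integration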
@@ -97,35 +64,39 @@ DOCKER_ENVS := \
# (default to no bind mount if DOCKER_HOST is set)
# note: BINDDIR is supported for backwards-compatibility here
BIND_DIR := $(if $(BINDDIR),$(BINDDIR),$(if $(DOCKER_HOST),,bundles))
# DOCKER_MOUNT can be overridden, but use at your own risk!
ifndef DOCKER_MOUNT
DOCKER_MOUNT := $(if $(BIND_DIR),-v "$(CURDIR)/$(BIND_DIR):/go/src/github.com/docker/docker/$(BIND_DIR)")
DOCKER_MOUNT := $(if $(DOCKER_BINDDIR_MOUNT_OPTS),$(DOCKER_MOUNT):$(DOCKER_BINDDIR_MOUNT_OPTS),$(DOCKER_MOUNT))
# This allows the test suite to run without worrying about the underlying fs used by the container running the daemon (e.g. aufs-on-aufs), so long as the host running the container is running a supported fs.
# The volume will be cleaned up when the container is removed due to `--rm`.
# Note that `BIND_DIR` will already be set to `bundles` if `DOCKER_HOST` is not set (see the BIND_DIR line above), in which case this does nothing since `DOCKER_MOUNT` will already be set.
DOCKER_MOUNT := $(if $(DOCKER_MOUNT),$(DOCKER_MOUNT),-v /go/src/github.com/docker/docker/bundles) -v "$(CURDIR)/.git:/go/src/github.com/docker/docker/.git"
DOCKER_MOUNT_CACHE := -v docker-dev-cache:/root/.cache
DOCKER_MOUNT_CLI := $(if $(DOCKER_CLI_PATH),-v $(shell dirname $(DOCKER_CLI_PATH)):/usr/local/cli,)
DOCKER_MOUNT_BASH_COMPLETION := $(if $(DOCKER_BASH_COMPLETION_PATH),-v $(shell dirname $(DOCKER_BASH_COMPLETION_PATH)):/usr/local/completion/bash,)
DOCKER_MOUNT := $(DOCKER_MOUNT) $(DOCKER_MOUNT_CACHE) $(DOCKER_MOUNT_CLI) $(DOCKER_MOUNT_BASH_COMPLETION)
endif # ifndef DOCKER_MOUNT
# This allows setting the docker-dev container name
DOCKER_CONTAINER_NAME := $(if $(CONTAINER_NAME),--name $(CONTAINER_NAME),)
# enable package cache if DOCKER_INCREMENTAL_BINARY and DOCKER_MOUNT (i.e. DOCKER_HOST) are set
PKGCACHE_MAP := gopath:/go/pkg goroot-linux_amd64:/usr/local/go/pkg/linux_amd64 goroot-linux_amd64_netgo:/usr/local/go/pkg/linux_amd64_netgo
PKGCACHE_VOLROOT := dockerdev-go-pkg-cache
PKGCACHE_VOL := $(if $(PKGCACHE_DIR),$(CURDIR)/$(PKGCACHE_DIR)/,$(PKGCACHE_VOLROOT)-)
DOCKER_MOUNT_PKGCACHE := $(if $(DOCKER_INCREMENTAL_BINARY),$(shell echo $(PKGCACHE_MAP) | sed -E 's@([^ ]*)@-v "$(PKGCACHE_VOL)\1"@g'),)
DOCKER_MOUNT_CLI := $(if $(DOCKER_CLI_PATH),-v $(shell dirname $(DOCKER_CLI_PATH)):/usr/local/cli,)
DOCKER_MOUNT_BASH_COMPLETION := $(if $(DOCKER_BASH_COMPLETION_PATH),-v $(shell dirname $(DOCKER_BASH_COMPLETION_PATH)):/usr/local/completion/bash,)
DOCKER_MOUNT := $(DOCKER_MOUNT) $(DOCKER_MOUNT_PKGCACHE) $(DOCKER_MOUNT_CLI) $(DOCKER_MOUNT_BASH_COMPLETION)
GIT_BRANCH := $(shell git rev-parse --abbrev-ref HEAD 2>/dev/null)
GIT_BRANCH_CLEAN := $(shell echo $(GIT_BRANCH) | sed -e "s/[^[:alnum:]]/-/g")
DOCKER_IMAGE := docker-dev$(if $(GIT_BRANCH_CLEAN),:$(GIT_BRANCH_CLEAN))
DOCKER_PORT_FORWARD := $(if $(DOCKER_PORT),-p "$(DOCKER_PORT)",)
DOCKER_FLAGS := $(DOCKER) run --rm -i --privileged $(DOCKER_CONTAINER_NAME) $(DOCKER_ENVS) $(DOCKER_MOUNT) $(DOCKER_PORT_FORWARD)
DOCKER_FLAGS := docker run --rm -i --privileged $(DOCKER_CONTAINER_NAME) $(DOCKER_ENVS) $(DOCKER_MOUNT) $(DOCKER_PORT_FORWARD)
BUILD_APT_MIRROR := $(if $(DOCKER_BUILD_APT_MIRROR),--build-arg APT_MIRROR=$(DOCKER_BUILD_APT_MIRROR))
export BUILD_APT_MIRROR
SWAGGER_DOCS_PORT ?= 9000
INTEGRATION_CLI_MASTER_IMAGE := $(if $(INTEGRATION_CLI_MASTER_IMAGE), $(INTEGRATION_CLI_MASTER_IMAGE), integration-cli-master)
INTEGRATION_CLI_WORKER_IMAGE := $(if $(INTEGRATION_CLI_WORKER_IMAGE), $(INTEGRATION_CLI_WORKER_IMAGE), integration-cli-worker)
define \n
@@ -141,84 +112,47 @@ endif
DOCKER_RUN_DOCKER := $(DOCKER_FLAGS) "$(DOCKER_IMAGE)"
DOCKER_BUILD_ARGS += --build-arg=GO_VERSION
ifdef DOCKER_SYSTEMD
DOCKER_BUILD_ARGS += --build-arg=SYSTEMD=true
endif
BUILD_OPTS := ${DOCKER_BUILD_ARGS} ${DOCKER_BUILD_OPTS} -f "$(DOCKERFILE)"
ifdef USE_BUILDX
BUILD_OPTS += $(BUILDX_BUILD_EXTRA_OPTS)
BUILD_CMD := $(BUILDX) build
else
BUILD_CMD := $(DOCKER) build
endif
# This is used for the legacy "build" target and anything still depending on it
BUILD_CROSS =
ifdef DOCKER_CROSS
BUILD_CROSS = --build-arg CROSS=$(DOCKER_CROSS)
endif
ifdef DOCKER_CROSSPLATFORMS
BUILD_CROSS = --build-arg CROSS=true
endif
VERSION_AUTOGEN_ARGS = --build-arg VERSION --build-arg DOCKER_GITCOMMIT --build-arg PRODUCT --build-arg PLATFORM --build-arg DEFAULT_PRODUCT_LICENSE
default: binary
all: build ## validate all checks, build linux binaries, run all tests,\ncross build non-linux binaries, and generate archives
all: build ## validate all checks, build linux binaries, run all tests\ncross build non-linux binaries and generate archives
$(DOCKER_RUN_DOCKER) bash -c 'hack/validate/default && hack/make.sh'
# This is only used to work around read-only bind mounts of the source code into
# binary build targets. We end up mounting a tmpfs over autogen which allows us
# to write build-time generated assets even though the source is mounted read-only
# ...But in order to do so, this dir needs to already exist.
autogen:
mkdir -p autogen
binary: build ## build the linux binaries
$(DOCKER_RUN_DOCKER) hack/make.sh binary
binary: buildx autogen ## build statically linked linux binaries
$(BUILD_CMD) $(BUILD_OPTS) --output=bundles/ --target=$@ $(VERSION_AUTOGEN_ARGS) .
dynbinary: build ## build the linux dynbinaries
$(DOCKER_RUN_DOCKER) hack/make.sh dynbinary
dynbinary: buildx autogen ## build dynamically linked linux binaries
$(BUILD_CMD) $(BUILD_OPTS) --output=bundles/ --target=$@ $(VERSION_AUTOGEN_ARGS) .
cross: BUILD_OPTS += --build-arg CROSS=true --build-arg DOCKER_CROSSPLATFORMS
cross: buildx autogen ## cross build the binaries for darwin, freebsd and\nwindows
$(BUILD_CMD) $(BUILD_OPTS) --output=bundles/ --target=$@ $(VERSION_AUTOGEN_ARGS) .
build: bundles init-go-pkg-cache
$(warning The docker client CLI has moved to github.com/docker/cli. For a dev-test cycle involving the CLI, run:${\n} DOCKER_CLI_PATH=/host/path/to/cli/binary make shell ${\n} then change the cli and compile into a binary at the same location.${\n})
docker build ${BUILD_APT_MIRROR} ${DOCKER_BUILD_ARGS} -t "$(DOCKER_IMAGE)" -f "$(DOCKERFILE)" .
bundles:
mkdir bundles
.PHONY: clean
clean: clean-cache
clean: clean-pkg-cache-vol ## clean up cached resources
.PHONY: clean-cache
clean-cache:
docker volume rm -f docker-dev-cache
clean-pkg-cache-vol:
@- $(foreach mapping,$(PKGCACHE_MAP), \
$(shell docker volume rm $(PKGCACHE_VOLROOT)-$(shell echo $(mapping) | awk -F':/' '{ print $$1 }') > /dev/null 2>&1) \
)
cross: build ## cross build the binaries for darwin, freebsd and\nwindows
$(DOCKER_RUN_DOCKER) hack/make.sh dynbinary binary cross
help: ## this help
@awk 'BEGIN {FS = ":.*?## "} /^[a-zA-Z0-9_-]+:.*?## / {gsub("\\\\n",sprintf("\n%22c",""), $$2);printf "\033[36m%-20s\033[0m %s\n", $$1, $$2}' $(MAKEFILE_LIST)
@awk 'BEGIN {FS = ":.*?## "} /^[a-zA-Z_-]+:.*?## / {sub("\\\\n",sprintf("\n%22c"," "), $$2);printf "\033[36m%-20s\033[0m %s\n", $$1, $$2}' $(MAKEFILE_LIST)
init-go-pkg-cache:
$(if $(PKGCACHE_DIR), mkdir -p $(shell echo $(PKGCACHE_MAP) | sed -E 's@([^: ]*):[^ ]*@$(PKGCACHE_DIR)/\1@g'))
install: ## install the linux binaries
KEEPBUNDLE=1 hack/make.sh install-binary
run: build ## run the docker daemon in a container
$(DOCKER_RUN_DOCKER) sh -c "KEEPBUNDLE=1 hack/make.sh install-binary run"
.PHONY: build
ifeq ($(BIND_DIR), .)
build: shell_target := --target=dev
else
build: shell_target := --target=final
endif
ifdef USE_BUILDX
build: buildx_load := --load
endif
build: buildx
$(BUILD_CMD) $(BUILD_OPTS) $(shell_target) $(buildx_load) $(BUILD_CROSS) -t "$(DOCKER_IMAGE)" .
shell: build ## start a shell inside the build env
$(DOCKER_RUN_DOCKER) bash
test: build test-unit ## run the unit, integration and docker-py tests
@@ -229,16 +163,8 @@ test-docker-py: build ## run the docker-py tests
test-integration-cli: test-integration ## (DEPRECATED) use test-integration
ifneq ($(and $(TEST_SKIP_INTEGRATION),$(TEST_SKIP_INTEGRATION_CLI)),)
test-integration:
@echo Both integration suites skipped per environment variables
else
test-integration: build ## run the integration tests
$(DOCKER_RUN_DOCKER) hack/make.sh dynbinary test-integration
endif
test-integration-flaky: build ## run the stress test for all new integration tests
$(DOCKER_RUN_DOCKER) hack/make.sh dynbinary test-integration-flaky
test-unit: build ## run the unit tests
$(DOCKER_RUN_DOCKER) hack/test/unit
@@ -247,7 +173,7 @@ validate: build ## validate DCO, Seccomp profile generation, gofmt,\n./pkg/ isol
$(DOCKER_RUN_DOCKER) hack/validate/all
win: build ## cross build the binary for windows
$(DOCKER_RUN_DOCKER) DOCKER_CROSSPLATFORMS=windows/amd64 hack/make.sh cross
$(DOCKER_RUN_DOCKER) hack/make.sh win
.PHONY: swagger-gen
swagger-gen:
@@ -265,13 +191,18 @@ swagger-docs: ## preview the API documentation
-p $(SWAGGER_DOCS_PORT):80 \
bfirsh/redoc:1.6.2
.PHONY: buildx
ifdef USE_BUILDX
ifeq ($(BUILDX), bundles/buildx)
buildx: bundles/buildx ## build buildx cli tool
endif
endif
bundles/buildx: bundles ## build buildx CLI tool
curl -fsSL https://raw.githubusercontent.com/moby/buildkit/70deac12b5857a1aa4da65e90b262368e2f71500/hack/install-buildx | VERSION="$(BUILDX_VERSION)" BINDIR="$(@D)" bash
$@ version
build-integration-cli-on-swarm: build ## build images and binary for running integration-cli on Swarm in parallel
@echo "Building hack/integration-cli-on-swarm (if build fails, please refer to hack/integration-cli-on-swarm/README.md)"
go build -buildmode=pie -o ./hack/integration-cli-on-swarm/integration-cli-on-swarm ./hack/integration-cli-on-swarm/host
@echo "Building $(INTEGRATION_CLI_MASTER_IMAGE)"
docker build -t $(INTEGRATION_CLI_MASTER_IMAGE) hack/integration-cli-on-swarm/agent
# For worker, we don't use `docker build` so as to enable DOCKER_INCREMENTAL_BINARY and so on
@echo "Building $(INTEGRATION_CLI_WORKER_IMAGE) from $(DOCKER_IMAGE)"
$(eval tmp := integration-cli-worker-tmp)
# We mount pkgcache, but not bundle (bundle needs to be baked into the image)
# To avoid baking DOCKER_GRAPHDRIVER and so on into the image, we cannot use $(DOCKER_ENVS) here
docker run -t -d --name $(tmp) -e DOCKER_GITCOMMIT -e BUILDFLAGS -e DOCKER_INCREMENTAL_BINARY --privileged $(DOCKER_MOUNT_PKGCACHE) $(DOCKER_IMAGE) top
docker exec $(tmp) hack/make.sh build-integration-test-binary dynbinary
docker exec $(tmp) go build -buildmode=pie -o /worker github.com/docker/docker/hack/integration-cli-on-swarm/agent/worker
docker commit -c 'ENTRYPOINT ["/worker"]' $(tmp) $(INTEGRATION_CLI_WORKER_IMAGE)
docker rm -f $(tmp)

NOTICE

@@ -3,7 +3,7 @@ Copyright 2012-2017 Docker, Inc.
This product includes software developed at Docker, Inc. (https://www.docker.com).
This product contains software (https://github.com/creack/pty) developed
This product contains software (https://github.com/kr/pty) developed
by Keith Rarick, licensed under the MIT License.
The following is courtesy of our legal counsel:


@@ -35,83 +35,34 @@ issue, in the Slack channel, or in person at the Moby Summits that happen every
## 1.1 Runtime improvements
Over time we have accumulated a lot of functionality in the container runtime
aspect of Moby while also growing in other areas. Many of the container runtime
pieces now duplicate work available in other, lower-level components such
as [containerd](https://containerd.io).
We introduced [`runC`](https://runc.io) as a standalone low-level tool for container
execution in 2015, the first stage in spinning out parts of the Engine into standalone tools.
Moby currently only utilizes containerd for basic runtime state management, e.g. starting
and stopping a container, which is what the pre-containerd 1.0 daemon provided.
Now that containerd is a full-fledged container runtime which supports full
container life-cycle management, we would like to start relying more on containerd
and removing the bits in Moby which are now duplicated. This will necessitate
a significant effort to refactor and even remove large parts of Moby's codebase.
As runC continued evolving, and the OCI specification along with it, we created
[`containerd`](https://github.com/containerd/containerd), a daemon to control and monitor `runC`.
In late 2016 this was relaunched as the `containerd` 1.0 track, aiming to provide a common runtime
for the whole spectrum of container systems, including Kubernetes, with wide community support.
This change meant that there was an increased scope for `containerd`, including image management
and storage drivers.
Tracking issues:
Moby will rely on a long-running `containerd` companion daemon for all container execution
related operations. This could open the door in the future for Engine restarts without interrupting
running containers. The switch over to containerd 1.0 is an important goal for the project, and
will result in a significant simplification of the functions implemented in this repository.
- [#38043](https://github.com/moby/moby/issues/38043) Proposal: containerd image integration
## 1.2 Image Builder
Work is ongoing to integrate [BuildKit](https://github.com/moby/buildkit) into
Moby and replace the "v0" build implementation. Buildkit offers better cache
management, parallelizable build steps, and better extensibility while also
keeping builds portable, a chief tenet of Moby's builder.
Upon completion of this effort, users will have a builder that performs better
while also being more extensible, enabling users to provide their own custom
syntax which can be either Dockerfile-like or something completely different.
See [buildpacks on buildkit](https://github.com/tonistiigi/buildkit-pack) as an
example of this extensibility.
New features for the builder and Dockerfile should be implemented first in the
BuildKit backend using an external Dockerfile implementation from the container
images. This allows everyone to test and evaluate the feature without upgrading
their daemon. New features should go to the experimental channel first, and can be
part of the `docker/dockerfile:experimental` image. From there they graduate to
`docker/dockerfile:latest` and binary releases. The Dockerfile frontend source
code is temporarily located at
[https://github.com/moby/buildkit/tree/master/frontend/dockerfile](https://github.com/moby/buildkit/tree/master/frontend/dockerfile)
with separate new features defined with go build tags.
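As a rough illustration of the build-tag gating mentioned above (the tag and package names here are hypothetical, not the frontend's actual identifiers):

```
// +build dockerfile_newfeature

package dockerfile

// This file is compiled only when built with
//   go build -tags dockerfile_newfeature
// so a default build leaves the experimental instruction out.

func init() {
	// register the experimental Dockerfile feature here
}
```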
Tracking issues:
- [#32925](https://github.com/moby/moby/issues/32925) discussion: builder future: buildkit
## 1.3 Rootless Mode
Running the daemon requires elevated privileges for many tasks. We would like to
support running the daemon as a normal, unprivileged user without requiring `suid`
binaries.
Tracking issues:
- [#37375](https://github.com/moby/moby/issues/37375) Proposal: allow running `dockerd` as an unprivileged user (aka rootless mode)
## 1.4 Testing
Moby has many tests, both unit and integration. Moby needs more tests which can
cover the full spectrum of functionality and edge cases out there.
Tests in the `integration-cli` folder should also be migrated into (both in
location and style) the `integration` folder. These newer tests are simpler to
run in isolation, simpler to read, simpler to write, and more fully exercise the
API. Meanwhile tests of the docker CLI should generally live in docker/cli.
Tracking issues:
- [#32866](https://github.com/moby/moby/issues/32866) Replace integration-cli suite with API test suite
## 1.5 Internal decoupling
## 1.2 Internal decoupling
A lot of work has been done in trying to decouple Moby internals. This process of creating
standalone projects with a well defined function that attract a dedicated community should continue.
As well as integrating `containerd` we would like to integrate [BuildKit](https://github.com/moby/buildkit)
as the next standalone component.
We see gRPC as the natural communication layer between decoupled components.
In addition to pushing out large components into other projects, much of the
internal code structure, and in particular the
["Daemon"](https://godoc.org/github.com/docker/docker/daemon#Daemon) object,
should be split into smaller, more manageable, and more testable components.
## 1.3 Custom assembly tooling
We have been prototyping the Moby [assembly tool](https://github.com/moby/tool) which was originally
developed for LinuxKit and intend to turn it into a more generic packaging and assembly mechanism
that can build not only the default version of Moby, as distribution packages or other useful forms,
but can also build very different container systems, themselves built of cooperating daemons built in
and running in containers. We intend to merge this functionality into this repo.


@@ -1,9 +0,0 @@
# Reporting security issues
The Moby maintainers take security seriously. If you discover a security issue, please bring it to their attention right away!
### Reporting a Vulnerability
Please **DO NOT** file a public issue, instead send your report privately to security@docker.com.
Security reports are greatly appreciated and we will publicly thank you for it, although we keep your name confidential if you request it. We also like to send gifts—if you're into schwag, make sure to let us know. We currently do not offer a paid security bounty program, but are not ruling it out in the future.


@@ -28,7 +28,7 @@ Most code changes will fall into one of the following categories.
### Writing tests for new features
New code should be covered by unit tests. If the code is difficult to test with
unit tests, then that is a good sign that it should be refactored to make it
a unit tests then that is a good sign that it should be refactored to make it
easier to reuse and maintain. Consider accepting unexported interfaces instead
of structs so that fakes can be provided for dependencies.
@@ -44,23 +44,16 @@ case. Error cases should be handled by unit tests.
Bugs fixes should include a unit test case which exercises the bug.
A bug fix may also include new assertions in existing integration tests for the
A bug fix may also include new assertions in an existing integration tests for the
API endpoint.
### Writing new integration tests
Note the `integration-cli` tests are deprecated; new tests will be rejected by
the CI.
Instead, implement new tests under `integration/`.
### Integration tests environment considerations
When adding new tests or modifying existing tests under `integration/`, testing
When adding new tests or modifying existing test under `integration/`, testing
environment should be properly considered. `skip.If` from
[gotest.tools/skip](https://godoc.org/gotest.tools/skip) can be used to make the
test run conditionally. Full testing environment conditions can be found at
[environment.go](https://github.com/moby/moby/blob/6b6eeed03b963a27085ea670f40cd5ff8a61f32e/testutil/environment/environment.go)
[environment.go](https://github.com/moby/moby/blob/cb37987ee11655ed6bbef663d245e55922354c68/internal/test/environment/environment.go)
Here is a quick example. If the test needs to interact with a docker daemon on
the same host, the following condition should be checked within the test code
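The example itself falls outside this diff's context lines; a minimal sketch of such a check, assuming the suite's `testEnv` (an `environment.Execution` initialized in `TestMain`):

```
package mysuite // hypothetical test package

import (
	"testing"

	"github.com/docker/docker/internal/test/environment"
	"gotest.tools/skip"
)

// testEnv is set up once in TestMain for the whole suite.
var testEnv *environment.Execution

func TestLocalDaemonOnly(t *testing.T) {
	// Skip when the daemon under test runs on a different host, since
	// this test needs to reach the daemon's local filesystem.
	skip.If(t, testEnv.IsRemoteDaemon, "cannot run against a remote daemon")

	// ... test body interacting with the local daemon ...
}
```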
@@ -74,8 +67,6 @@ If a remote daemon is detected, the test will be skipped.
## Running tests
### Unit Tests
To run the unit test suite:
```
@@ -91,36 +82,8 @@ The following environment variables may be used to run a subset of tests:
* `TESTFLAGS` - flags passed to `go test`, to run tests which match a pattern
use `TESTFLAGS="-test.run TestNameOrPrefix"`
### Integration Tests
To run the integration test suite:
```
make test-integration
```
This make target runs both the "integration" suite and the "integration-cli"
suite.
You can specify which integration test dirs to build and run by specifying
the list of dirs in the TEST_INTEGRATION_DIR environment variable.
You can also explicitly skip either suite by setting (to any value) the
TEST_SKIP_INTEGRATION and/or TEST_SKIP_INTEGRATION_CLI environment variables.
Flags specific to each suite can be set in the TESTFLAGS_INTEGRATION and
TESTFLAGS_INTEGRATION_CLI environment variables.
If all you want is to specify a test filter to run, you can set the
`TEST_FILTER` environment variable. This ends up getting passed directly to `go
test -run` (or `go test -check-f`, depending on the test suite). It will also
automatically set the other above-mentioned environment variables accordingly.
### Go Version
You can change the version of golang used to build what is being tested
by setting the `GO_VERSION` variable, for example:
```
make GO_VERSION=1.12.8 test
```


@@ -3,7 +3,7 @@ package api // import "github.com/docker/docker/api"
// Common constants for daemon and client.
const (
// DefaultVersion of Current REST API
DefaultVersion = "1.41"
DefaultVersion = "1.38"
// NoBaseImageSpecifier is the symbol used by the FROM
// command to specify that no base image is to be used.


@@ -1,4 +1,3 @@
//go:build !windows
// +build !windows
package api // import "github.com/docker/docker/api"


@@ -3,19 +3,17 @@ package build // import "github.com/docker/docker/api/server/backend/build"
import (
"context"
"fmt"
"strconv"
"github.com/docker/distribution/reference"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/backend"
"github.com/docker/docker/api/types/events"
"github.com/docker/docker/builder"
buildkit "github.com/docker/docker/builder/builder-next"
daemonevents "github.com/docker/docker/daemon/events"
"github.com/docker/docker/builder/fscache"
"github.com/docker/docker/image"
"github.com/docker/docker/pkg/stringid"
"github.com/pkg/errors"
"google.golang.org/grpc"
"golang.org/x/sync/errgroup"
)
// ImageComponent provides an interface for working with images
@@ -32,21 +30,14 @@ type Builder interface {
// Backend provides build functionality to the API router
type Backend struct {
builder Builder
fsCache *fscache.FSCache
imageComponent ImageComponent
buildkit *buildkit.Builder
eventsService *daemonevents.Events
}
// NewBackend creates a new build backend from components
func NewBackend(components ImageComponent, builder Builder, buildkit *buildkit.Builder, es *daemonevents.Events) (*Backend, error) {
return &Backend{imageComponent: components, builder: builder, buildkit: buildkit, eventsService: es}, nil
}
// RegisterGRPC registers buildkit controller to the grpc server.
func (b *Backend) RegisterGRPC(s *grpc.Server) {
if b.buildkit != nil {
b.buildkit.RegisterGRPC(s)
}
func NewBackend(components ImageComponent, builder Builder, fsCache *fscache.FSCache, buildkit *buildkit.Builder) (*Backend, error) {
return &Backend{imageComponent: components, builder: builder, fsCache: fsCache, buildkit: buildkit}, nil
}
// Build builds an image from a Source
@@ -91,25 +82,40 @@ func (b *Backend) Build(ctx context.Context, config backend.BuildConfig) (string
if !useBuildKit {
stdout := config.ProgressWriter.StdoutFormatter
fmt.Fprintf(stdout, "Successfully built %s\n", stringid.TruncateID(imageID))
}
if imageID != "" {
err = tagger.TagImages(image.ID(imageID))
}
return imageID, err
}
// PruneCache removes all cached build sources
func (b *Backend) PruneCache(ctx context.Context, opts types.BuildCachePruneOptions) (*types.BuildCachePruneReport, error) {
buildCacheSize, cacheIDs, err := b.buildkit.Prune(ctx, opts)
if err != nil {
return nil, errors.Wrap(err, "failed to prune build cache")
}
b.eventsService.Log("prune", events.BuilderEventType, events.Actor{
Attributes: map[string]string{
"reclaimed": strconv.FormatInt(buildCacheSize, 10),
},
func (b *Backend) PruneCache(ctx context.Context) (*types.BuildCachePruneReport, error) {
eg, ctx := errgroup.WithContext(ctx)
var fsCacheSize uint64
eg.Go(func() error {
var err error
fsCacheSize, err = b.fsCache.Prune(ctx)
if err != nil {
return errors.Wrap(err, "failed to prune fscache")
}
return nil
})
return &types.BuildCachePruneReport{SpaceReclaimed: uint64(buildCacheSize), CachesDeleted: cacheIDs}, nil
var buildCacheSize int64
eg.Go(func() error {
var err error
buildCacheSize, err = b.buildkit.Prune(ctx)
if err != nil {
return errors.Wrap(err, "failed to prune build cache")
}
return nil
})
if err := eg.Wait(); err != nil {
return nil, err
}
return &types.BuildCachePruneReport{SpaceReclaimed: fsCacheSize + uint64(buildCacheSize)}, nil
}
// Cancel cancels the build by ID


@@ -1,34 +0,0 @@
package server
import (
"net/http"
"github.com/docker/docker/api/server/httpstatus"
"github.com/docker/docker/api/server/httputils"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/versions"
"github.com/gorilla/mux"
"google.golang.org/grpc/status"
)
// makeErrorHandler makes an HTTP handler that decodes a Docker error and
// returns it in the response.
func makeErrorHandler(err error) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
statusCode := httpstatus.FromError(err)
vars := mux.Vars(r)
if apiVersionSupportsJSONErrors(vars["version"]) {
response := &types.ErrorResponse{
Message: err.Error(),
}
_ = httputils.WriteJSON(w, statusCode, response)
} else {
http.Error(w, status.Convert(err).Message(), statusCode)
}
}
}
func apiVersionSupportsJSONErrors(version string) bool {
const firstAPIVersionWithJSONErrors = "1.23"
return version == "" || versions.GreaterThan(version, firstAPIVersionWithJSONErrors)
}


@@ -1,150 +0,0 @@
package httpstatus // import "github.com/docker/docker/api/server/httpstatus"
import (
"fmt"
"net/http"
containerderrors "github.com/containerd/containerd/errdefs"
"github.com/docker/distribution/registry/api/errcode"
"github.com/docker/docker/errdefs"
"github.com/sirupsen/logrus"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"
)
type causer interface {
Cause() error
}
// FromError retrieves status code from error message.
func FromError(err error) int {
if err == nil {
logrus.WithFields(logrus.Fields{"error": err}).Error("unexpected HTTP error handling")
return http.StatusInternalServerError
}
var statusCode int
// Stop right there
// Are you sure you should be adding a new error class here? Do one of the existing ones work?
// Note that the below functions are already checking the error causal chain for matches.
switch {
case errdefs.IsNotFound(err):
statusCode = http.StatusNotFound
case errdefs.IsInvalidParameter(err):
statusCode = http.StatusBadRequest
case errdefs.IsConflict(err):
statusCode = http.StatusConflict
case errdefs.IsUnauthorized(err):
statusCode = http.StatusUnauthorized
case errdefs.IsUnavailable(err):
statusCode = http.StatusServiceUnavailable
case errdefs.IsForbidden(err):
statusCode = http.StatusForbidden
case errdefs.IsNotModified(err):
statusCode = http.StatusNotModified
case errdefs.IsNotImplemented(err):
statusCode = http.StatusNotImplemented
case errdefs.IsSystem(err) || errdefs.IsUnknown(err) || errdefs.IsDataLoss(err) || errdefs.IsDeadline(err) || errdefs.IsCancelled(err):
statusCode = http.StatusInternalServerError
default:
statusCode = statusCodeFromGRPCError(err)
if statusCode != http.StatusInternalServerError {
return statusCode
}
statusCode = statusCodeFromContainerdError(err)
if statusCode != http.StatusInternalServerError {
return statusCode
}
statusCode = statusCodeFromDistributionError(err)
if statusCode != http.StatusInternalServerError {
return statusCode
}
if e, ok := err.(causer); ok {
return FromError(e.Cause())
}
logrus.WithFields(logrus.Fields{
"module": "api",
"error_type": fmt.Sprintf("%T", err),
}).Debugf("FIXME: Got an API for which error does not match any expected type!!!: %+v", err)
}
if statusCode == 0 {
statusCode = http.StatusInternalServerError
}
return statusCode
}
// statusCodeFromGRPCError returns status code according to gRPC error
func statusCodeFromGRPCError(err error) int {
switch status.Code(err) {
case codes.InvalidArgument: // code 3
return http.StatusBadRequest
case codes.NotFound: // code 5
return http.StatusNotFound
case codes.AlreadyExists: // code 6
return http.StatusConflict
case codes.PermissionDenied: // code 7
return http.StatusForbidden
case codes.FailedPrecondition: // code 9
return http.StatusBadRequest
case codes.Unauthenticated: // code 16
return http.StatusUnauthorized
case codes.OutOfRange: // code 11
return http.StatusBadRequest
case codes.Unimplemented: // code 12
return http.StatusNotImplemented
case codes.Unavailable: // code 14
return http.StatusServiceUnavailable
default:
// codes.Canceled(1)
// codes.Unknown(2)
// codes.DeadlineExceeded(4)
// codes.ResourceExhausted(8)
// codes.Aborted(10)
// codes.Internal(13)
// codes.DataLoss(15)
return http.StatusInternalServerError
}
}
// statusCodeFromDistributionError returns status code according to registry errcode
// code is loosely based on errcode.ServeJSON() in docker/distribution
func statusCodeFromDistributionError(err error) int {
switch errs := err.(type) {
case errcode.Errors:
if len(errs) < 1 {
return http.StatusInternalServerError
}
if _, ok := errs[0].(errcode.ErrorCoder); ok {
return statusCodeFromDistributionError(errs[0])
}
case errcode.ErrorCoder:
return errs.ErrorCode().Descriptor().HTTPStatusCode
}
return http.StatusInternalServerError
}
// statusCodeFromContainerdError returns status code for containerd errors when
// consumed directly (not through gRPC)
func statusCodeFromContainerdError(err error) int {
switch {
case containerderrors.IsInvalidArgument(err):
return http.StatusBadRequest
case containerderrors.IsNotFound(err):
return http.StatusNotFound
case containerderrors.IsAlreadyExists(err):
return http.StatusConflict
case containerderrors.IsFailedPrecondition(err):
return http.StatusPreconditionFailed
case containerderrors.IsUnavailable(err):
return http.StatusServiceUnavailable
case containerderrors.IsNotImplemented(err):
return http.StatusNotImplemented
default:
return http.StatusInternalServerError
}
}


@@ -0,0 +1,131 @@
package httputils // import "github.com/docker/docker/api/server/httputils"
import (
"fmt"
"net/http"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/versions"
"github.com/docker/docker/errdefs"
"github.com/gorilla/mux"
"github.com/sirupsen/logrus"
"google.golang.org/grpc"
"google.golang.org/grpc/codes"
)
type causer interface {
Cause() error
}
// GetHTTPErrorStatusCode retrieves status code from error message.
func GetHTTPErrorStatusCode(err error) int {
if err == nil {
logrus.WithFields(logrus.Fields{"error": err}).Error("unexpected HTTP error handling")
return http.StatusInternalServerError
}
var statusCode int
// Stop right there
// Are you sure you should be adding a new error class here? Do one of the existing ones work?
// Note that the below functions are already checking the error causal chain for matches.
switch {
case errdefs.IsNotFound(err):
statusCode = http.StatusNotFound
case errdefs.IsInvalidParameter(err):
statusCode = http.StatusBadRequest
case errdefs.IsConflict(err) || errdefs.IsAlreadyExists(err):
statusCode = http.StatusConflict
case errdefs.IsUnauthorized(err):
statusCode = http.StatusUnauthorized
case errdefs.IsUnavailable(err):
statusCode = http.StatusServiceUnavailable
case errdefs.IsForbidden(err):
statusCode = http.StatusForbidden
case errdefs.IsNotModified(err):
statusCode = http.StatusNotModified
case errdefs.IsNotImplemented(err):
statusCode = http.StatusNotImplemented
case errdefs.IsSystem(err) || errdefs.IsUnknown(err) || errdefs.IsDataLoss(err) || errdefs.IsDeadline(err) || errdefs.IsCancelled(err):
statusCode = http.StatusInternalServerError
default:
statusCode = statusCodeFromGRPCError(err)
if statusCode != http.StatusInternalServerError {
return statusCode
}
if e, ok := err.(causer); ok {
return GetHTTPErrorStatusCode(e.Cause())
}
logrus.WithFields(logrus.Fields{
"module": "api",
"error_type": fmt.Sprintf("%T", err),
}).Debugf("FIXME: Got an API for which error does not match any expected type!!!: %+v", err)
}
if statusCode == 0 {
statusCode = http.StatusInternalServerError
}
return statusCode
}
func apiVersionSupportsJSONErrors(version string) bool {
const firstAPIVersionWithJSONErrors = "1.23"
return version == "" || versions.GreaterThan(version, firstAPIVersionWithJSONErrors)
}
// MakeErrorHandler makes an HTTP handler that decodes a Docker error and
// returns it in the response.
func MakeErrorHandler(err error) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
statusCode := GetHTTPErrorStatusCode(err)
vars := mux.Vars(r)
if apiVersionSupportsJSONErrors(vars["version"]) {
response := &types.ErrorResponse{
Message: err.Error(),
}
WriteJSON(w, statusCode, response)
} else {
http.Error(w, grpc.ErrorDesc(err), statusCode)
}
}
}
// statusCodeFromGRPCError returns status code according to gRPC error
func statusCodeFromGRPCError(err error) int {
switch grpc.Code(err) {
case codes.InvalidArgument: // code 3
return http.StatusBadRequest
case codes.NotFound: // code 5
return http.StatusNotFound
case codes.AlreadyExists: // code 6
return http.StatusConflict
case codes.PermissionDenied: // code 7
return http.StatusForbidden
case codes.FailedPrecondition: // code 9
return http.StatusBadRequest
case codes.Unauthenticated: // code 16
return http.StatusUnauthorized
case codes.OutOfRange: // code 11
return http.StatusBadRequest
case codes.Unimplemented: // code 12
return http.StatusNotImplemented
case codes.Unavailable: // code 14
return http.StatusServiceUnavailable
default:
if e, ok := err.(causer); ok {
return statusCodeFromGRPCError(e.Cause())
}
// codes.Canceled(1)
// codes.Unknown(2)
// codes.DeadlineExceeded(4)
// codes.ResourceExhausted(8)
// codes.Aborted(10)
// codes.Internal(13)
// codes.DataLoss(15)
return http.StatusInternalServerError
}
}


@@ -23,7 +23,7 @@ func TestBoolValue(t *testing.T) {
for c, e := range cases {
v := url.Values{}
v.Set("test", c)
r, _ := http.NewRequest(http.MethodPost, "", nil)
r, _ := http.NewRequest("POST", "", nil)
r.Form = v
a := BoolValue(r, "test")
@@ -34,14 +34,14 @@ func TestBoolValue(t *testing.T) {
}
func TestBoolValueOrDefault(t *testing.T) {
r, _ := http.NewRequest(http.MethodGet, "", nil)
r, _ := http.NewRequest("GET", "", nil)
if !BoolValueOrDefault(r, "queryparam", true) {
t.Fatal("Expected to get true default value, got false")
}
v := url.Values{}
v.Set("param", "")
r, _ = http.NewRequest(http.MethodGet, "", nil)
r, _ = http.NewRequest("GET", "", nil)
r.Form = v
if BoolValueOrDefault(r, "param", true) {
t.Fatal("Expected not to get true")
@@ -59,7 +59,7 @@ func TestInt64ValueOrZero(t *testing.T) {
for c, e := range cases {
v := url.Values{}
v.Set("test", c)
r, _ := http.NewRequest(http.MethodPost, "", nil)
r, _ := http.NewRequest("POST", "", nil)
r.Form = v
a := Int64ValueOrZero(r, "test")
@@ -79,7 +79,7 @@ func TestInt64ValueOrDefault(t *testing.T) {
for c, e := range cases {
v := url.Values{}
v.Set("test", c)
r, _ := http.NewRequest(http.MethodPost, "", nil)
r, _ := http.NewRequest("POST", "", nil)
r.Form = v
a, err := Int64ValueOrDefault(r, "test", -1)
@@ -95,7 +95,7 @@ func TestInt64ValueOrDefault(t *testing.T) {
func TestInt64ValueOrDefaultWithError(t *testing.T) {
v := url.Values{}
v.Set("test", "invalid")
r, _ := http.NewRequest(http.MethodPost, "", nil)
r, _ := http.NewRequest("POST", "", nil)
r.Form = v
_, err := Int64ValueOrDefault(r, "test", -1)


@@ -12,8 +12,10 @@ import (
"github.com/sirupsen/logrus"
)
type contextKey string
// APIVersionKey is the client's requested API version.
type APIVersionKey struct{}
const APIVersionKey contextKey = "api-version"
// APIFunc is an adapter to allow the use of ordinary functions as Docker API endpoints.
// Any function that has the appropriate signature can be registered as an API endpoint (e.g. getVersion).
@@ -27,7 +29,7 @@ func HijackConnection(w http.ResponseWriter) (io.ReadCloser, io.Writer, error) {
return nil, nil, err
}
// Flush the options to make sure the client sets the raw mode
_, _ = conn.Write([]byte{})
conn.Write([]byte{})
return conn, conn, nil
}
@@ -37,9 +39,9 @@ func CloseStreams(streams ...interface{}) {
if tcpc, ok := stream.(interface {
CloseWrite() error
}); ok {
_ = tcpc.CloseWrite()
tcpc.CloseWrite()
} else if closer, ok := stream.(io.Closer); ok {
_ = closer.Close()
closer.Close()
}
}
}
@@ -81,7 +83,7 @@ func VersionFromContext(ctx context.Context) string {
return ""
}
if val := ctx.Value(APIVersionKey{}); val != nil {
if val := ctx.Value(APIVersionKey); val != nil {
return val.(string)
}
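The hunk above swaps a string-typed context key (`contextKey`) for an empty struct type (`APIVersionKey struct{}`). A standalone sketch of why the struct-typed key is the safer pattern (names here are illustrative, not moby's actual identifiers):

```
package main

import "context"

// An unexported struct type makes the key unforgeable: no other
// package can construct a value that compares equal to it, so
// lookups cannot collide the way plain string keys can.
type apiVersionKey struct{}

func withAPIVersion(ctx context.Context, v string) context.Context {
	return context.WithValue(ctx, apiVersionKey{}, v)
}

func apiVersionFrom(ctx context.Context) string {
	if v, ok := ctx.Value(apiVersionKey{}).(string); ok {
		return v
	}
	return ""
}
```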


@@ -51,10 +51,10 @@ func WriteLogStream(_ context.Context, w io.Writer, msgs <-chan *backend.LogMess
logLine = append([]byte(msg.Timestamp.Format(jsonmessage.RFC3339NanoFixed)+" "), logLine...)
}
if msg.Source == "stdout" && config.ShowStdout {
_, _ = outStream.Write(logLine)
outStream.Write(logLine)
}
if msg.Source == "stderr" && config.ShowStderr {
_, _ = errStream.Write(logLine)
errStream.Write(logLine)
}
}
}


@@ -18,7 +18,7 @@ func DebugRequestMiddleware(handler func(ctx context.Context, w http.ResponseWri
return func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
logrus.Debugf("Calling %s %s", r.Method, r.RequestURI)
if r.Method != http.MethodPost {
if r.Method != "POST" {
return handler(ctx, w, r, vars)
}
if err := httputils.CheckForJSON(r); err != nil {
@@ -41,7 +41,7 @@ func DebugRequestMiddleware(handler func(ctx context.Context, w http.ResponseWri
var postForm map[string]interface{}
if err := json.Unmarshal(b, &postForm); err == nil {
maskSecretKeys(postForm)
maskSecretKeys(postForm, r.RequestURI)
formStr, errMarshal := json.Marshal(postForm)
if errMarshal == nil {
logrus.Debugf("form data: %s", string(formStr))
@@ -54,37 +54,41 @@ func DebugRequestMiddleware(handler func(ctx context.Context, w http.ResponseWri
}
}
func maskSecretKeys(inp interface{}) {
func maskSecretKeys(inp interface{}, path string) {
// Remove any query string from the path
idx := strings.Index(path, "?")
if idx != -1 {
path = path[:idx]
}
// Remove trailing / characters
path = strings.TrimRight(path, "/")
if arr, ok := inp.([]interface{}); ok {
for _, f := range arr {
maskSecretKeys(f)
maskSecretKeys(f, path)
}
return
}
if form, ok := inp.(map[string]interface{}); ok {
scrub := []string{
// Note: The Data field contains the base64-encoded secret in 'secret'
// and 'config' create and update requests. Currently, no other POST
// API endpoints use a data field, so we scrub this field unconditionally.
// Change this handling to be conditional if a new endpoint is added
// in future where this field should not be scrubbed.
"data",
"jointoken",
"password",
"secret",
"signingcakey",
"unlockkey",
}
loop0:
for k, v := range form {
for _, m := range scrub {
for _, m := range []string{"password", "secret", "jointoken", "unlockkey", "signingcakey"} {
if strings.EqualFold(m, k) {
form[k] = "*****"
continue loop0
}
}
maskSecretKeys(v)
maskSecretKeys(v, path)
}
// Route-specific redactions
if strings.HasSuffix(path, "/secrets/create") {
for k := range form {
if k == "Data" {
form[k] = "*****"
}
}
}
}
}


@@ -3,31 +3,37 @@ package middleware // import "github.com/docker/docker/api/server/middleware"
import (
"testing"
"gotest.tools/v3/assert"
is "gotest.tools/v3/assert/cmp"
"gotest.tools/assert"
is "gotest.tools/assert/cmp"
)
func TestMaskSecretKeys(t *testing.T) {
tests := []struct {
doc string
path string
input map[string]interface{}
expected map[string]interface{}
}{
{
doc: "secret/config create and update requests",
path: "/v1.30/secrets/create",
input: map[string]interface{}{"Data": "foo", "Name": "name", "Labels": map[string]interface{}{}},
expected: map[string]interface{}{"Data": "*****", "Name": "name", "Labels": map[string]interface{}{}},
},
{
doc: "masking other fields (recursively)",
path: "/v1.30/secrets/create//",
input: map[string]interface{}{"Data": "foo", "Name": "name", "Labels": map[string]interface{}{}},
expected: map[string]interface{}{"Data": "*****", "Name": "name", "Labels": map[string]interface{}{}},
},
{
path: "/secrets/create?key=val",
input: map[string]interface{}{"Data": "foo", "Name": "name", "Labels": map[string]interface{}{}},
expected: map[string]interface{}{"Data": "*****", "Name": "name", "Labels": map[string]interface{}{}},
},
{
path: "/v1.30/some/other/path",
input: map[string]interface{}{
"password": "pass",
"secret": "secret",
"jointoken": "jointoken",
"unlockkey": "unlockkey",
"signingcakey": "signingcakey",
"password": "pass",
"other": map[string]interface{}{
"password": "pass",
"secret": "secret",
"jointoken": "jointoken",
"unlockkey": "unlockkey",
@@ -35,13 +41,8 @@ func TestMaskSecretKeys(t *testing.T) {
},
},
expected: map[string]interface{}{
"password": "*****",
"secret": "*****",
"jointoken": "*****",
"unlockkey": "*****",
"signingcakey": "*****",
"password": "*****",
"other": map[string]interface{}{
"password": "*****",
"secret": "*****",
"jointoken": "*****",
"unlockkey": "*****",
@@ -49,27 +50,10 @@ func TestMaskSecretKeys(t *testing.T) {
},
},
},
{
doc: "case insensitive field matching",
input: map[string]interface{}{
"PASSWORD": "pass",
"other": map[string]interface{}{
"PASSWORD": "pass",
},
},
expected: map[string]interface{}{
"PASSWORD": "*****",
"other": map[string]interface{}{
"PASSWORD": "*****",
},
},
},
}
for _, testcase := range tests {
t.Run(testcase.doc, func(t *testing.T) {
maskSecretKeys(testcase.input)
assert.Check(t, is.DeepEqual(testcase.expected, testcase.input))
})
maskSecretKeys(testcase.input, testcase.path)
assert.Check(t, is.DeepEqual(testcase.expected, testcase.input))
}
}


@@ -58,7 +58,7 @@ func (v VersionMiddleware) WrapHandler(handler func(ctx context.Context, w http.
if versions.GreaterThan(apiVersion, v.defaultVersion) {
return versionUnsupportedError{version: apiVersion, maxVersion: v.defaultVersion}
}
ctx = context.WithValue(ctx, httputils.APIVersionKey{}, apiVersion)
ctx = context.WithValue(ctx, httputils.APIVersionKey, apiVersion)
return handler(ctx, w, r, vars)
}


@@ -8,8 +8,8 @@ import (
"testing"
"github.com/docker/docker/api/server/httputils"
"gotest.tools/v3/assert"
is "gotest.tools/v3/assert/cmp"
"gotest.tools/assert"
is "gotest.tools/assert/cmp"
)
func TestVersionMiddlewareVersion(t *testing.T) {
@@ -25,7 +25,7 @@ func TestVersionMiddlewareVersion(t *testing.T) {
m := NewVersionMiddleware(defaultVersion, defaultVersion, minVersion)
h := m.WrapHandler(handler)
req, _ := http.NewRequest(http.MethodGet, "/containers/json", nil)
req, _ := http.NewRequest("GET", "/containers/json", nil)
resp := httptest.NewRecorder()
ctx := context.Background()
@@ -76,7 +76,7 @@ func TestVersionMiddlewareWithErrorsReturnsHeaders(t *testing.T) {
m := NewVersionMiddleware(defaultVersion, defaultVersion, minVersion)
h := m.WrapHandler(handler)
req, _ := http.NewRequest(http.MethodGet, "/containers/json", nil)
req, _ := http.NewRequest("GET", "/containers/json", nil)
resp := httptest.NewRecorder()
ctx := context.Background()


@@ -14,7 +14,7 @@ type Backend interface {
Build(context.Context, backend.BuildConfig) (string, error)
// Prune build cache
PruneCache(context.Context, types.BuildCachePruneOptions) (*types.BuildCachePruneReport, error)
PruneCache(context.Context) (*types.BuildCachePruneReport, error)
Cancel(context.Context, string) error
}


@@ -1,25 +1,17 @@
package build // import "github.com/docker/docker/api/server/router/build"
import (
"github.com/docker/docker/api/server/router"
"github.com/docker/docker/api/types"
)
import "github.com/docker/docker/api/server/router"
// buildRouter is a router to talk with the build controller
type buildRouter struct {
backend Backend
daemon experimentalProvider
routes []router.Route
features *map[string]bool
backend Backend
daemon experimentalProvider
routes []router.Route
}
// NewRouter initializes a new build router
func NewRouter(b Backend, d experimentalProvider, features *map[string]bool) router.Router {
r := &buildRouter{
backend: b,
daemon: d,
features: features,
}
func NewRouter(b Backend, d experimentalProvider) router.Router {
r := &buildRouter{backend: b, daemon: d}
r.initRoutes()
return r
}
@@ -31,23 +23,8 @@ func (r *buildRouter) Routes() []router.Route {
func (r *buildRouter) initRoutes() {
r.routes = []router.Route{
router.NewPostRoute("/build", r.postBuild),
router.NewPostRoute("/build/prune", r.postPrune),
router.NewPostRoute("/build", r.postBuild, router.WithCancel),
router.NewPostRoute("/build/prune", r.postPrune, router.WithCancel),
router.NewPostRoute("/build/cancel", r.postCancel),
}
}
// BuilderVersion derives the default docker builder version from the config
// Note: it is valid to have BuilderVersion unset which means it is up to the
// client to choose which builder to use.
func BuilderVersion(features map[string]bool) types.BuilderVersion {
var bv types.BuilderVersion
if v, ok := features["buildkit"]; ok {
if v {
bv = types.BuilderBuildKit
} else {
bv = types.BuilderV1
}
}
return bv
}


@@ -18,7 +18,6 @@ import (
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/backend"
"github.com/docker/docker/api/types/container"
"github.com/docker/docker/api/types/filters"
"github.com/docker/docker/api/types/versions"
"github.com/docker/docker/errdefs"
"github.com/docker/docker/pkg/ioutils"
@@ -38,36 +37,8 @@ func (e invalidIsolationError) Error() string {
func (e invalidIsolationError) InvalidParameter() {}
func newImageBuildOptions(ctx context.Context, r *http.Request) (*types.ImageBuildOptions, error) {
options := &types.ImageBuildOptions{
Version: types.BuilderV1, // Builder V1 is the default, but can be overridden
Dockerfile: r.FormValue("dockerfile"),
SuppressOutput: httputils.BoolValue(r, "q"),
NoCache: httputils.BoolValue(r, "nocache"),
ForceRemove: httputils.BoolValue(r, "forcerm"),
MemorySwap: httputils.Int64ValueOrZero(r, "memswap"),
Memory: httputils.Int64ValueOrZero(r, "memory"),
CPUShares: httputils.Int64ValueOrZero(r, "cpushares"),
CPUPeriod: httputils.Int64ValueOrZero(r, "cpuperiod"),
CPUQuota: httputils.Int64ValueOrZero(r, "cpuquota"),
CPUSetCPUs: r.FormValue("cpusetcpus"),
CPUSetMems: r.FormValue("cpusetmems"),
CgroupParent: r.FormValue("cgroupparent"),
NetworkMode: r.FormValue("networkmode"),
Tags: r.Form["t"],
ExtraHosts: r.Form["extrahosts"],
SecurityOpt: r.Form["securityopt"],
Squash: httputils.BoolValue(r, "squash"),
Target: r.FormValue("target"),
RemoteContext: r.FormValue("remote"),
SessionID: r.FormValue("session"),
BuildID: r.FormValue("buildid"),
}
if runtime.GOOS != "windows" && options.SecurityOpt != nil {
return nil, errdefs.InvalidParameter(errors.New("The daemon on this platform does not support setting security options on build"))
}
version := httputils.VersionFromContext(ctx)
options := &types.ImageBuildOptions{}
if httputils.BoolValue(r, "forcerm") && versions.GreaterThanOrEqualTo(version, "1.12") {
options.Remove = true
} else if r.FormValue("rm") == "" && versions.GreaterThanOrEqualTo(version, "1.12") {
@@ -78,37 +49,52 @@ func newImageBuildOptions(ctx context.Context, r *http.Request) (*types.ImageBui
if httputils.BoolValue(r, "pull") && versions.GreaterThanOrEqualTo(version, "1.16") {
options.PullParent = true
}
options.Dockerfile = r.FormValue("dockerfile")
options.SuppressOutput = httputils.BoolValue(r, "q")
options.NoCache = httputils.BoolValue(r, "nocache")
options.ForceRemove = httputils.BoolValue(r, "forcerm")
options.MemorySwap = httputils.Int64ValueOrZero(r, "memswap")
options.Memory = httputils.Int64ValueOrZero(r, "memory")
options.CPUShares = httputils.Int64ValueOrZero(r, "cpushares")
options.CPUPeriod = httputils.Int64ValueOrZero(r, "cpuperiod")
options.CPUQuota = httputils.Int64ValueOrZero(r, "cpuquota")
options.CPUSetCPUs = r.FormValue("cpusetcpus")
options.CPUSetMems = r.FormValue("cpusetmems")
options.CgroupParent = r.FormValue("cgroupparent")
options.NetworkMode = r.FormValue("networkmode")
options.Tags = r.Form["t"]
options.ExtraHosts = r.Form["extrahosts"]
options.SecurityOpt = r.Form["securityopt"]
options.Squash = httputils.BoolValue(r, "squash")
options.Target = r.FormValue("target")
options.RemoteContext = r.FormValue("remote")
if versions.GreaterThanOrEqualTo(version, "1.32") {
options.Platform = r.FormValue("platform")
}
if versions.GreaterThanOrEqualTo(version, "1.40") {
outputsJSON := r.FormValue("outputs")
if outputsJSON != "" {
var outputs []types.ImageBuildOutput
if err := json.Unmarshal([]byte(outputsJSON), &outputs); err != nil {
return nil, err
}
options.Outputs = outputs
}
}
if s := r.Form.Get("shmsize"); s != "" {
shmSize, err := strconv.ParseInt(s, 10, 64)
if r.Form.Get("shmsize") != "" {
shmSize, err := strconv.ParseInt(r.Form.Get("shmsize"), 10, 64)
if err != nil {
return nil, err
}
options.ShmSize = shmSize
}
if i := r.FormValue("isolation"); i != "" {
options.Isolation = container.Isolation(i)
if !options.Isolation.IsValid() {
return nil, invalidIsolationError(options.Isolation)
if i := container.Isolation(r.FormValue("isolation")); i != "" {
if !container.Isolation.IsValid(i) {
return nil, invalidIsolationError(i)
}
options.Isolation = i
}
if ulimitsJSON := r.FormValue("ulimits"); ulimitsJSON != "" {
var buildUlimits = []*units.Ulimit{}
if runtime.GOOS != "windows" && options.SecurityOpt != nil {
return nil, errdefs.InvalidParameter(errors.New("The daemon on this platform does not support setting security options on build"))
}
var buildUlimits = []*units.Ulimit{}
ulimitsJSON := r.FormValue("ulimits")
if ulimitsJSON != "" {
if err := json.Unmarshal([]byte(ulimitsJSON), &buildUlimits); err != nil {
return nil, errors.Wrap(errdefs.InvalidParameter(err), "error reading ulimit settings")
}
@@ -127,7 +113,8 @@ func newImageBuildOptions(ctx context.Context, r *http.Request) (*types.ImageBui
// the fact they mentioned it, we need to pass that along to the builder
// so that it can print a warning about "foo" being unused if there is
// no "ARG foo" in the Dockerfile.
if buildArgsJSON := r.FormValue("buildargs"); buildArgsJSON != "" {
buildArgsJSON := r.FormValue("buildargs")
if buildArgsJSON != "" {
var buildArgs = map[string]*string{}
if err := json.Unmarshal([]byte(buildArgsJSON), &buildArgs); err != nil {
return nil, errors.Wrap(errdefs.InvalidParameter(err), "error reading build args")
@@ -135,7 +122,8 @@ func newImageBuildOptions(ctx context.Context, r *http.Request) (*types.ImageBui
options.BuildArgs = buildArgs
}
if labelsJSON := r.FormValue("labels"); labelsJSON != "" {
labelsJSON := r.FormValue("labels")
if labelsJSON != "" {
var labels = map[string]string{}
if err := json.Unmarshal([]byte(labelsJSON), &labels); err != nil {
return nil, errors.Wrap(errdefs.InvalidParameter(err), "error reading labels")
@@ -143,60 +131,37 @@ func newImageBuildOptions(ctx context.Context, r *http.Request) (*types.ImageBui
options.Labels = labels
}
if cacheFromJSON := r.FormValue("cachefrom"); cacheFromJSON != "" {
cacheFromJSON := r.FormValue("cachefrom")
if cacheFromJSON != "" {
var cacheFrom = []string{}
if err := json.Unmarshal([]byte(cacheFromJSON), &cacheFrom); err != nil {
return nil, err
}
options.CacheFrom = cacheFrom
}
if bv := r.FormValue("version"); bv != "" {
v, err := parseVersion(bv)
if err != nil {
return nil, err
}
options.Version = v
options.SessionID = r.FormValue("session")
options.BuildID = r.FormValue("buildid")
builderVersion, err := parseVersion(r.FormValue("version"))
if err != nil {
return nil, err
}
options.Version = builderVersion
return options, nil
}
func parseVersion(s string) (types.BuilderVersion, error) {
switch types.BuilderVersion(s) {
case types.BuilderV1:
if s == "" || s == string(types.BuilderV1) {
return types.BuilderV1, nil
case types.BuilderBuildKit:
return types.BuilderBuildKit, nil
default:
return "", errors.Errorf("invalid version %q", s)
}
if s == string(types.BuilderBuildKit) {
return types.BuilderBuildKit, nil
}
return "", errors.Errorf("invalid version %s", s)
}
func (br *buildRouter) postPrune(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
}
fltrs, err := filters.FromJSON(r.Form.Get("filters"))
if err != nil {
return errors.Wrap(err, "could not parse filters")
}
ksfv := r.FormValue("keep-storage")
if ksfv == "" {
ksfv = "0"
}
ks, err := strconv.Atoi(ksfv)
if err != nil {
return errors.Wrapf(err, "keep-storage is in bytes and expects an integer, got %v", ksfv)
}
opts := types.BuildCachePruneOptions{
All: httputils.BoolValue(r, "all"),
Filters: fltrs,
KeepStorage: int64(ks),
}
report, err := br.backend.PruneCache(ctx, opts)
report, err := br.backend.PruneCache(ctx)
if err != nil {
return err
}
@@ -235,12 +200,12 @@ func (br *buildRouter) postBuild(ctx context.Context, w http.ResponseWriter, r *
}
output := ioutils.NewWriteFlusher(ww)
defer func() { _ = output.Close() }()
defer output.Close()
errf := func(err error) error {
if httputils.BoolValue(r, "q") && notVerboseBuffer.Len() > 0 {
_, _ = output.Write(notVerboseBuffer.Bytes())
output.Write(notVerboseBuffer.Bytes())
}
// Do not write the error in the http output if it's still empty.
@@ -265,6 +230,10 @@ func (br *buildRouter) postBuild(ctx context.Context, w http.ResponseWriter, r *
return errdefs.InvalidParameter(errors.New("squash is only supported with experimental mode"))
}
if buildOptions.Version == types.BuilderBuildKit && !br.daemon.HasExperimental() {
return errdefs.InvalidParameter(errors.New("buildkit is only supported with experimental mode"))
}
out := io.Writer(output)
if buildOptions.SuppressOutput {
out = notVerboseBuffer
@@ -291,7 +260,7 @@ func (br *buildRouter) postBuild(ctx context.Context, w http.ResponseWriter, r *
// Everything worked so if -q was provided the output from the daemon
// should be just the image ID and we'll print that to stdout.
if buildOptions.SuppressOutput {
_, _ = fmt.Fprintln(streamformatter.NewStdoutWriter(output), imgID)
fmt.Fprintln(streamformatter.NewStdoutWriter(output), imgID)
}
return nil
}
@@ -307,7 +276,7 @@ func getAuthConfigs(header http.Header) map[string]types.AuthConfig {
authConfigsJSON := base64.NewDecoder(base64.URLEncoding, strings.NewReader(authConfigsEncoded))
// Pulling an image does not error when no auth is provided so to remain
// consistent with the existing api decode errors are ignored
_ = json.NewDecoder(authConfigsJSON).Decode(&authConfigs)
json.NewDecoder(authConfigsJSON).Decode(&authConfigs)
return authConfigs
}
@@ -429,7 +398,7 @@ func (w *wcf) notify() {
w.mu.Lock()
if !w.ready {
if w.buf.Len() > 0 {
_, _ = io.Copy(w.Writer, w.buf)
io.Copy(w.Writer, w.buf)
}
if w.flushed {
w.flusher.Flush()


@@ -10,15 +10,13 @@ type containerRouter struct {
backend Backend
decoder httputils.ContainerDecoder
routes []router.Route
cgroup2 bool
}
// NewRouter initializes a new container router
func NewRouter(b Backend, decoder httputils.ContainerDecoder, cgroup2 bool) router.Router {
func NewRouter(b Backend, decoder httputils.ContainerDecoder) router.Router {
r := &containerRouter{
backend: b,
decoder: decoder,
cgroup2: cgroup2,
}
r.initRoutes()
return r
@@ -40,8 +38,8 @@ func (r *containerRouter) initRoutes() {
router.NewGetRoute("/containers/{name:.*}/changes", r.getContainersChanges),
router.NewGetRoute("/containers/{name:.*}/json", r.getContainersByName),
router.NewGetRoute("/containers/{name:.*}/top", r.getContainersTop),
router.NewGetRoute("/containers/{name:.*}/logs", r.getContainersLogs),
router.NewGetRoute("/containers/{name:.*}/stats", r.getContainersStats),
router.NewGetRoute("/containers/{name:.*}/logs", r.getContainersLogs, router.WithCancel),
router.NewGetRoute("/containers/{name:.*}/stats", r.getContainersStats, router.WithCancel),
router.NewGetRoute("/containers/{name:.*}/attach/ws", r.wsContainersAttach),
router.NewGetRoute("/exec/{id:.*}/json", r.getExecByID),
router.NewGetRoute("/containers/{name:.*}/archive", r.getContainersArchive),
@@ -53,7 +51,7 @@ func (r *containerRouter) initRoutes() {
router.NewPostRoute("/containers/{name:.*}/restart", r.postContainersRestart),
router.NewPostRoute("/containers/{name:.*}/start", r.postContainersStart),
router.NewPostRoute("/containers/{name:.*}/stop", r.postContainersStop),
router.NewPostRoute("/containers/{name:.*}/wait", r.postContainersWait),
router.NewPostRoute("/containers/{name:.*}/wait", r.postContainersWait, router.WithCancel),
router.NewPostRoute("/containers/{name:.*}/resize", r.postContainersResize),
router.NewPostRoute("/containers/{name:.*}/attach", r.postContainersAttach),
router.NewPostRoute("/containers/{name:.*}/copy", r.postContainersCopy), // Deprecated since 1.8, Errors out since 1.12
@@ -62,7 +60,7 @@ func (r *containerRouter) initRoutes() {
router.NewPostRoute("/exec/{name:.*}/resize", r.postContainerExecResize),
router.NewPostRoute("/containers/{name:.*}/rename", r.postContainerRename),
router.NewPostRoute("/containers/{name:.*}/update", r.postContainerUpdate),
router.NewPostRoute("/containers/prune", r.postContainersPrune),
router.NewPostRoute("/containers/prune", r.postContainersPrune, router.WithCancel),
router.NewPostRoute("/commit", r.postCommit),
// PUT
router.NewPutRoute("/containers/{name:.*}/archive", r.putContainersArchive),


@@ -9,8 +9,6 @@ import (
"strconv"
"syscall"
"github.com/containerd/containerd/platforms"
"github.com/docker/docker/api/server/httpstatus"
"github.com/docker/docker/api/server/httputils"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/backend"
@@ -21,7 +19,6 @@ import (
"github.com/docker/docker/errdefs"
"github.com/docker/docker/pkg/ioutils"
"github.com/docker/docker/pkg/signal"
specs "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"golang.org/x/net/websocket"
@@ -44,7 +41,7 @@ func (s *containerRouter) postCommit(ctx context.Context, w http.ResponseWriter,
}
config, _, _, err := s.decoder.DecodeConfig(r.Body)
if err != nil && err != io.EOF { // Do not fail if body is empty.
if err != nil && err != io.EOF { //Do not fail if body is empty.
return err
}
@@ -108,14 +105,9 @@ func (s *containerRouter) getContainersStats(ctx context.Context, w http.Respons
if !stream {
w.Header().Set("Content-Type", "application/json")
}
var oneShot bool
if versions.GreaterThanOrEqualTo(httputils.VersionFromContext(ctx), "1.41") {
oneShot = httputils.BoolValueOrDefault(r, "one-shot", false)
}
config := &backend.ContainerStatsConfig{
Stream: stream,
OneShot: oneShot,
OutStream: w,
Version: httputils.VersionFromContext(ctx),
}
@@ -346,6 +338,9 @@ func (s *containerRouter) postContainersWait(ctx context.Context, w http.Respons
}
}
// Note: the context should get canceled if the client closes the
// connection since this handler has been wrapped by the
// router.WithCancel() wrapper.
waitC, err := s.backend.ContainerWait(ctx, vars["name"], waitCondition)
if err != nil {
return err
@@ -433,16 +428,6 @@ func (s *containerRouter) postContainerUpdate(ctx context.Context, w http.Respon
if err := decoder.Decode(&updateConfig); err != nil {
return err
}
if versions.LessThan(httputils.VersionFromContext(ctx), "1.40") {
updateConfig.PidsLimit = nil
}
if updateConfig.PidsLimit != nil && *updateConfig.PidsLimit <= 0 {
// Both `0` and `-1` are accepted to set "unlimited" when updating.
// Historically, any negative value was accepted, so treat them as
// "unlimited" as well.
var unlimited int64
updateConfig.PidsLimit = &unlimited
}
hostConfig := &container.HostConfig{
Resources: updateConfig.Resources,
@@ -480,54 +465,12 @@ func (s *containerRouter) postContainersCreate(ctx context.Context, w http.Respo
hostConfig.AutoRemove = false
}
if hostConfig != nil && versions.LessThan(version, "1.40") {
// Ignore BindOptions.NonRecursive because it was added in API 1.40.
for _, m := range hostConfig.Mounts {
if bo := m.BindOptions; bo != nil {
bo.NonRecursive = false
}
}
// Ignore KernelMemoryTCP because it was added in API 1.40.
hostConfig.KernelMemoryTCP = 0
// Older clients (API < 1.40) expect the default to be shareable; make them happy
if hostConfig.IpcMode.IsEmpty() {
hostConfig.IpcMode = container.IpcMode("shareable")
}
}
if hostConfig != nil && versions.LessThan(version, "1.41") && !s.cgroup2 {
// Older clients expect the default to be "host" on cgroup v1 hosts
if hostConfig.CgroupnsMode.IsEmpty() {
hostConfig.CgroupnsMode = container.CgroupnsMode("host")
}
}
var platform *specs.Platform
if versions.GreaterThanOrEqualTo(version, "1.41") {
if v := r.Form.Get("platform"); v != "" {
p, err := platforms.Parse(v)
if err != nil {
return errdefs.InvalidParameter(err)
}
platform = &p
}
}
if hostConfig != nil && hostConfig.PidsLimit != nil && *hostConfig.PidsLimit <= 0 {
// Don't set a limit if either no limit was specified, or "unlimited" was
// explicitly set.
// Both `0` and `-1` are accepted as "unlimited", and historically any
// negative value was accepted, so treat those as "unlimited" as well.
hostConfig.PidsLimit = nil
}
ccr, err := s.backend.ContainerCreate(types.ContainerCreateConfig{
Name: name,
Config: config,
HostConfig: hostConfig,
NetworkingConfig: networkingConfig,
AdjustCPUShares: adjustCPUShares,
Platform: platform,
})
if err != nil {
return err
@@ -627,7 +570,7 @@ func (s *containerRouter) postContainersAttach(ctx context.Context, w http.Respo
// Remember to close stream if error happens
conn, _, errHijack := hijacker.Hijack()
if errHijack == nil {
statusCode := httpstatus.FromError(err)
statusCode := httputils.GetHTTPErrorStatusCode(err)
statusText := http.StatusText(statusCode)
fmt.Fprintf(conn, "HTTP/1.1 %d %s\r\nContent-Type: application/vnd.docker.raw-stream\r\n\r\n%s\r\n", statusCode, statusText, err.Error())
httputils.CloseStreams(conn)
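
Once the connection has been hijacked for the attach stream, errors can no longer go through the ResponseWriter, which is why the handler above hand-writes a raw HTTP/1.1 response. A minimal, self-contained sketch of the same pattern (writeRawStreamError is an illustrative name, not part of the router):

package attachdemo

import (
	"fmt"
	"net"
	"net/http"
)

// writeRawStreamError reports an error over an already-hijacked connection:
// the status line, headers, and body are written by hand because the
// http.ResponseWriter became unusable at the moment of the hijack.
func writeRawStreamError(conn net.Conn, statusCode int, err error) {
	statusText := http.StatusText(statusCode)
	fmt.Fprintf(conn, "HTTP/1.1 %d %s\r\nContent-Type: application/vnd.docker.raw-stream\r\n\r\n%s\r\n",
		statusCode, statusText, err.Error())
	conn.Close()
}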


@@ -6,14 +6,12 @@ import (
"context"
"encoding/base64"
"encoding/json"
"errors"
"io"
"net/http"
"github.com/docker/docker/api/server/httputils"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/versions"
"github.com/docker/docker/errdefs"
gddohttputil "github.com/golang/gddo/httputil"
)
@@ -39,10 +37,7 @@ func (s *containerRouter) postContainersCopy(ctx context.Context, w http.Respons
cfg := types.CopyConfig{}
if err := json.NewDecoder(r.Body).Decode(&cfg); err != nil {
if err == io.EOF {
return errdefs.InvalidParameter(errors.New("got EOF while reading request body"))
}
return errdefs.InvalidParameter(err)
return err
}
if cfg.Resource == "" {


@@ -3,7 +3,6 @@ package container // import "github.com/docker/docker/api/server/router/containe
import (
"context"
"encoding/json"
"errors"
"fmt"
"io"
"net/http"
@@ -45,10 +44,7 @@ func (s *containerRouter) postContainerExecCreate(ctx context.Context, w http.Re
execConfig := &types.ExecConfig{}
if err := json.NewDecoder(r.Body).Decode(execConfig); err != nil {
if err == io.EOF {
return errdefs.InvalidParameter(errors.New("got EOF while reading request body"))
}
return errdefs.InvalidParameter(err)
return err
}
if len(execConfig.Cmd) == 0 {
@@ -88,10 +84,7 @@ func (s *containerRouter) postContainerExecStart(ctx context.Context, w http.Res
execStartCheck := &types.ExecStartCheck{}
if err := json.NewDecoder(r.Body).Decode(execStartCheck); err != nil {
if err == io.EOF {
return errdefs.InvalidParameter(errors.New("got EOF while reading request body"))
}
return errdefs.InvalidParameter(err)
return err
}
if exists, err := s.backend.ExecExists(execName); !exists {


@@ -14,8 +14,7 @@ import (
"github.com/docker/docker/api/server/httputils"
"github.com/docker/docker/api/types"
registrytypes "github.com/docker/docker/api/types/registry"
"github.com/docker/docker/errdefs"
v1 "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
)
@@ -43,10 +42,9 @@ func (s *distributionRouter) getDistributionInfo(ctx context.Context, w http.Res
image := vars["name"]
// TODO why is reference.ParseAnyReference() / reference.ParseNormalizedNamed() not using the reference.ErrTagInvalidFormat (and so on) errors?
ref, err := reference.ParseAnyReference(image)
if err != nil {
return errdefs.InvalidParameter(err)
return err
}
namedRef, ok := ref.(reference.Named)
if !ok {
@@ -54,7 +52,7 @@ func (s *distributionRouter) getDistributionInfo(ctx context.Context, w http.Res
// full image ID
return errors.Errorf("no manifest found for full image ID")
}
return errdefs.InvalidParameter(errors.Errorf("unknown image reference format: %s", image))
return errors.Errorf("unknown image reference format: %s", image)
}
distrepo, _, err := s.backend.GetRepository(ctx, namedRef, config)
@@ -68,7 +66,7 @@ func (s *distributionRouter) getDistributionInfo(ctx context.Context, w http.Res
taggedRef, ok := namedRef.(reference.NamedTagged)
if !ok {
return errdefs.InvalidParameter(errors.Errorf("image reference not tagged: %s", image))
return errors.Errorf("image reference not tagged: %s", image)
}
descriptor, err := distrepo.Tags(ctx).Get(ctx, taggedRef.Tag())
@@ -94,16 +92,6 @@ func (s *distributionRouter) getDistributionInfo(ctx context.Context, w http.Res
}
mnfst, err := mnfstsrvc.Get(ctx, distributionInspect.Descriptor.Digest)
if err != nil {
switch err {
case reference.ErrReferenceInvalidFormat,
reference.ErrTagInvalidFormat,
reference.ErrDigestInvalidFormat,
reference.ErrNameContainsUppercase,
reference.ErrNameEmpty,
reference.ErrNameTooLong,
reference.ErrNameNotCanonical:
return errdefs.InvalidParameter(err)
}
return err
}


@@ -44,7 +44,7 @@ func experimentalHandler(ctx context.Context, w http.ResponseWriter, r *http.Req
return notImplementedError{}
}
// Handler returns the APIFunc to let the server wrap it in middlewares.
// Handler returns returns the APIFunc to let the server wrap it in middlewares.
func (r *experimentalRoute) Handler() httputils.APIFunc {
return r.handler
}
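
The experimentalRoute above gates a handler behind the daemon's experimental flag, and the session router further down registers its route through router.Experimental(...). A hypothetical, self-contained sketch of that gating pattern (apiFunc and experimental are illustrative names, not the router package's API):

package expdemo

import (
	"context"
	"net/http"
)

type apiFunc func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error

// experimental wraps a handler so it answers 501 Not Implemented until
// experimental mode is enabled on the daemon.
func experimental(enabled func() bool, h apiFunc) apiFunc {
	return func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
		if !enabled() {
			http.Error(w, "this feature is experimental; enable experimental mode on the daemon", http.StatusNotImplemented)
			return nil
		}
		return h(ctx, w, r, vars)
	}
}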


@@ -1,8 +0,0 @@
package grpc // import "github.com/docker/docker/api/server/router/grpc"
import "google.golang.org/grpc"
// Backend abstracts a registerable GRPC service.
type Backend interface {
RegisterGRPC(*grpc.Server)
}


@@ -1,41 +0,0 @@
package grpc // import "github.com/docker/docker/api/server/router/grpc"
import (
"github.com/docker/docker/api/server/router"
"github.com/moby/buildkit/util/grpcerrors"
"golang.org/x/net/http2"
"google.golang.org/grpc"
)
type grpcRouter struct {
routes []router.Route
grpcServer *grpc.Server
h2Server *http2.Server
}
// NewRouter initializes a new grpc http router
func NewRouter(backends ...Backend) router.Router {
r := &grpcRouter{
h2Server: &http2.Server{},
grpcServer: grpc.NewServer(
grpc.UnaryInterceptor(grpcerrors.UnaryServerInterceptor),
grpc.StreamInterceptor(grpcerrors.StreamServerInterceptor),
),
}
for _, b := range backends {
b.RegisterGRPC(r.grpcServer)
}
r.initRoutes()
return r
}
// Routes returns the available routers to the session controller
func (gr *grpcRouter) Routes() []router.Route {
return gr.routes
}
func (gr *grpcRouter) initRoutes() {
gr.routes = []router.Route{
router.NewPostRoute("/grpc", gr.serveGRPC),
}
}


@@ -1,45 +0,0 @@
package grpc // import "github.com/docker/docker/api/server/router/grpc"
import (
"context"
"net/http"
"github.com/pkg/errors"
"golang.org/x/net/http2"
)
func (gr *grpcRouter) serveGRPC(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
h, ok := w.(http.Hijacker)
if !ok {
return errors.New("handler does not support hijack")
}
proto := r.Header.Get("Upgrade")
if proto == "" {
return errors.New("no upgrade proto in request")
}
if proto != "h2c" {
return errors.Errorf("protocol %s not supported", proto)
}
conn, _, err := h.Hijack()
if err != nil {
return err
}
resp := &http.Response{
StatusCode: http.StatusSwitchingProtocols,
ProtoMajor: 1,
ProtoMinor: 1,
Header: http.Header{},
}
resp.Header.Set("Connection", "Upgrade")
resp.Header.Set("Upgrade", proto)
// set raw mode
conn.Write([]byte{})
resp.Write(conn)
// https://godoc.org/golang.org/x/net/http2#Server.ServeConn
// TODO: is it a problem that conn has already been written to?
gr.h2Server.ServeConn(conn, &http2.ServeConnOpts{Handler: gr.grpcServer})
return nil
}
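
serveGRPC above hijacks the connection, answers the h2c upgrade by hand, and then hands the raw conn to an HTTP/2 server. A minimal sketch of the primitive it builds on, with a plain mux standing in for the *grpc.Server:

package main

import (
	"net"
	"net/http"

	"golang.org/x/net/http2"
)

// http2.Server.ServeConn speaks cleartext HTTP/2 (h2c) over any net.Conn,
// which is what lets the router serve gRPC on a hijacked, upgraded
// connection rather than on a dedicated listener.
func main() {
	ln, err := net.Listen("tcp", "127.0.0.1:8080")
	if err != nil {
		panic(err)
	}
	h2s := &http2.Server{}
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		_, _ = w.Write([]byte("hello over h2c\n"))
	})
	for {
		conn, err := ln.Accept()
		if err != nil {
			panic(err)
		}
		go h2s.ServeConn(conn, &http2.ServeConnOpts{Handler: mux})
	}
}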


@@ -34,10 +34,10 @@ func (r *imageRouter) initRoutes() {
router.NewGetRoute("/images/{name:.*}/json", r.getImagesByName),
// POST
router.NewPostRoute("/images/load", r.postImagesLoad),
router.NewPostRoute("/images/create", r.postImagesCreate),
router.NewPostRoute("/images/{name:.*}/push", r.postImagesPush),
router.NewPostRoute("/images/create", r.postImagesCreate, router.WithCancel),
router.NewPostRoute("/images/{name:.*}/push", r.postImagesPush, router.WithCancel),
router.NewPostRoute("/images/{name:.*}/tag", r.postImagesTag),
router.NewPostRoute("/images/prune", r.postImagesPrune),
router.NewPostRoute("/images/prune", r.postImagesPrune, router.WithCancel),
// DELETE
router.NewDeleteRoute("/images/{name:.*}", r.deleteImages),
}


@@ -57,41 +57,43 @@ func (s *imageRouter) postImagesCreate(ctx context.Context, w http.ResponseWrite
}
}
if image != "" { // pull
metaHeaders := map[string][]string{}
for k, v := range r.Header {
if strings.HasPrefix(k, "X-Meta-") {
metaHeaders[k] = v
if err == nil {
if image != "" { //pull
metaHeaders := map[string][]string{}
for k, v := range r.Header {
if strings.HasPrefix(k, "X-Meta-") {
metaHeaders[k] = v
}
}
}
authEncoded := r.Header.Get("X-Registry-Auth")
authConfig := &types.AuthConfig{}
if authEncoded != "" {
authJSON := base64.NewDecoder(base64.URLEncoding, strings.NewReader(authEncoded))
if err := json.NewDecoder(authJSON).Decode(authConfig); err != nil {
// for a pull it is not an error if no auth was given
// to increase compatibility with the existing api it is defaulting to be empty
authConfig = &types.AuthConfig{}
authEncoded := r.Header.Get("X-Registry-Auth")
authConfig := &types.AuthConfig{}
if authEncoded != "" {
authJSON := base64.NewDecoder(base64.URLEncoding, strings.NewReader(authEncoded))
if err := json.NewDecoder(authJSON).Decode(authConfig); err != nil {
// for a pull it is not an error if no auth was given
// to increase compatibility with the existing api it is defaulting to be empty
authConfig = &types.AuthConfig{}
}
}
err = s.backend.PullImage(ctx, image, tag, platform, metaHeaders, authConfig, output)
} else { //import
src := r.Form.Get("fromSrc")
// 'err' MUST NOT be defined within this block, we need any error
// generated from the download to be available to the output
// stream processing below
os := ""
if platform != nil {
os = platform.OS
}
err = s.backend.ImportImage(src, repo, os, tag, message, r.Body, output, r.Form["changes"])
}
err = s.backend.PullImage(ctx, image, tag, platform, metaHeaders, authConfig, output)
} else { // import
src := r.Form.Get("fromSrc")
// 'err' MUST NOT be defined within this block, we need any error
// generated from the download to be available to the output
// stream processing below
os := ""
if platform != nil {
os = platform.OS
}
err = s.backend.ImportImage(src, repo, os, tag, message, r.Body, output, r.Form["changes"])
}
if err != nil {
if !output.Flushed() {
return err
}
_, _ = output.Write(streamformatter.FormatError(err))
output.Write(streamformatter.FormatError(err))
}
return nil
@@ -136,7 +138,7 @@ func (s *imageRouter) postImagesPush(ctx context.Context, w http.ResponseWriter,
if !output.Flushed() {
return err
}
_, _ = output.Write(streamformatter.FormatError(err))
output.Write(streamformatter.FormatError(err))
}
return nil
}
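
postImagesCreate and postImagesPush above share one error-reporting convention, which the plugin router below repeats. A sketch of that convention factored into a hypothetical helper:

package streamdemo

import (
	"net/http"

	"github.com/docker/docker/pkg/ioutils"
	"github.com/docker/docker/pkg/streamformatter"
)

// runStreaming shows the two error paths: before anything is flushed the
// error can still become a normal HTTP error response; once bytes are on
// the wire it must be encoded into the progress stream itself.
func runStreaming(w http.ResponseWriter, run func(out *ioutils.WriteFlusher) error) error {
	output := ioutils.NewWriteFlusher(w)
	defer output.Close()
	if err := run(output); err != nil {
		if !output.Flushed() {
			return err // headers not sent yet: surface as an HTTP error
		}
		// mid-stream: emit an errorDetail message the client can decode
		_, _ = output.Write(streamformatter.FormatError(err))
	}
	return nil
}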
@@ -161,7 +163,7 @@ func (s *imageRouter) getImagesGet(ctx context.Context, w http.ResponseWriter, r
if !output.Flushed() {
return err
}
_, _ = output.Write(streamformatter.FormatError(err))
output.Write(streamformatter.FormatError(err))
}
return nil
}
@@ -177,7 +179,7 @@ func (s *imageRouter) postImagesLoad(ctx context.Context, w http.ResponseWriter,
output := ioutils.NewWriteFlusher(w)
defer output.Close()
if err := s.backend.LoadImage(r.Body, output, quiet); err != nil {
_, _ = output.Write(streamformatter.FormatError(err))
output.Write(streamformatter.FormatError(err))
}
return nil
}
@@ -231,12 +233,10 @@ func (s *imageRouter) getImagesJSON(ctx context.Context, w http.ResponseWriter,
return err
}
version := httputils.VersionFromContext(ctx)
if versions.LessThan(version, "1.41") {
filterParam := r.Form.Get("filter")
if filterParam != "" {
imageFilters.Add("reference", filterParam)
}
filterParam := r.Form.Get("filter")
// FIXME(vdemeester) This has been deprecated in 1.13, and is target for removal for v17.12
if filterParam != "" {
imageFilters.Add("reference", filterParam)
}
images, err := s.backend.Images(imageFilters, httputils.BoolValue(r, "all"), false)


@@ -1,6 +1,7 @@
package router // import "github.com/docker/docker/api/server/router"
import (
"context"
"net/http"
"github.com/docker/docker/api/server/httputils"
@@ -44,30 +45,60 @@ func NewRoute(method, path string, handler httputils.APIFunc, opts ...RouteWrapp
// NewGetRoute initializes a new route with the http method GET.
func NewGetRoute(path string, handler httputils.APIFunc, opts ...RouteWrapper) Route {
return NewRoute(http.MethodGet, path, handler, opts...)
return NewRoute("GET", path, handler, opts...)
}
// NewPostRoute initializes a new route with the http method POST.
func NewPostRoute(path string, handler httputils.APIFunc, opts ...RouteWrapper) Route {
return NewRoute(http.MethodPost, path, handler, opts...)
return NewRoute("POST", path, handler, opts...)
}
// NewPutRoute initializes a new route with the http method PUT.
func NewPutRoute(path string, handler httputils.APIFunc, opts ...RouteWrapper) Route {
return NewRoute(http.MethodPut, path, handler, opts...)
return NewRoute("PUT", path, handler, opts...)
}
// NewDeleteRoute initializes a new route with the http method DELETE.
func NewDeleteRoute(path string, handler httputils.APIFunc, opts ...RouteWrapper) Route {
return NewRoute(http.MethodDelete, path, handler, opts...)
return NewRoute("DELETE", path, handler, opts...)
}
// NewOptionsRoute initializes a new route with the http method OPTIONS.
func NewOptionsRoute(path string, handler httputils.APIFunc, opts ...RouteWrapper) Route {
return NewRoute(http.MethodOptions, path, handler, opts...)
return NewRoute("OPTIONS", path, handler, opts...)
}
// NewHeadRoute initializes a new route with the http method HEAD.
func NewHeadRoute(path string, handler httputils.APIFunc, opts ...RouteWrapper) Route {
return NewRoute(http.MethodHead, path, handler, opts...)
return NewRoute("HEAD", path, handler, opts...)
}
func cancellableHandler(h httputils.APIFunc) httputils.APIFunc {
return func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if notifier, ok := w.(http.CloseNotifier); ok {
notify := notifier.CloseNotify()
notifyCtx, cancel := context.WithCancel(ctx)
finished := make(chan struct{})
defer close(finished)
ctx = notifyCtx
go func() {
select {
case <-notify:
cancel()
case <-finished:
}
}()
}
return h(ctx, w, r, vars)
}
}
// WithCancel makes new route which embeds http.CloseNotifier feature to
// context.Context of handler.
func WithCancel(r Route) Route {
return localRoute{
method: r.Method(),
path: r.Path(),
handler: cancellableHandler(r.Handler()),
}
}
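
WithCancel above feeds client-disconnect notifications, obtained through the since-deprecated http.CloseNotifier, into the handler's context. A self-contained sketch of what a wrapped handler gains, using the modern equivalent, r.Context(), which the standard library cancels on disconnect:

package canceldemo

import (
	"net/http"
	"time"
)

// slowHandler abandons its work as soon as the client goes away instead of
// blocking a daemon goroutine for the full 30 seconds.
func slowHandler(w http.ResponseWriter, r *http.Request) {
	select {
	case <-time.After(30 * time.Second):
		_, _ = w.Write([]byte("done\n"))
	case <-r.Context().Done():
		// Client hung up; the context wired in by the wrapper is canceled.
	}
}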


@@ -13,7 +13,7 @@ import (
// to provide network specific functionality.
type Backend interface {
FindNetwork(idName string) (libnetwork.Network, error)
GetNetworks(filters.Args, types.NetworkListConfig) ([]types.NetworkResource, error)
GetNetworks() []libnetwork.Network
CreateNetwork(nc types.NetworkCreateRequest) (*types.NetworkCreateResponse, error)
ConnectContainerToNetwork(containerName, networkName string, endpointConfig *network.EndpointSettings) error
DisconnectContainerFromNetwork(containerName string, networkName string, force bool) error
@@ -24,7 +24,7 @@ type Backend interface {
// ClusterBackend is all the methods that need to be implemented
// to provide cluster network specific functionality.
type ClusterBackend interface {
GetNetworks(filters.Args) ([]types.NetworkResource, error)
GetNetworks() ([]types.NetworkResource, error)
GetNetwork(name string) (types.NetworkResource, error)
GetNetworksByName(name string) ([]types.NetworkResource, error)
CreateNetwork(nc types.NetworkCreateRequest) (string, error)


@@ -1 +1,93 @@
package network // import "github.com/docker/docker/api/server/router/network"
import (
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/filters"
"github.com/docker/docker/runconfig"
)
func filterNetworkByType(nws []types.NetworkResource, netType string) ([]types.NetworkResource, error) {
retNws := []types.NetworkResource{}
switch netType {
case "builtin":
for _, nw := range nws {
if runconfig.IsPreDefinedNetwork(nw.Name) {
retNws = append(retNws, nw)
}
}
case "custom":
for _, nw := range nws {
if !runconfig.IsPreDefinedNetwork(nw.Name) {
retNws = append(retNws, nw)
}
}
default:
return nil, invalidFilter(netType)
}
return retNws, nil
}
type invalidFilter string
func (e invalidFilter) Error() string {
return "Invalid filter: 'type'='" + string(e) + "'"
}
func (e invalidFilter) InvalidParameter() {}
// filterNetworks filters network list according to user specified filter
// and returns user chosen networks
func filterNetworks(nws []types.NetworkResource, filter filters.Args) ([]types.NetworkResource, error) {
// if filter is empty, return original network list
if filter.Len() == 0 {
return nws, nil
}
displayNet := []types.NetworkResource{}
for _, nw := range nws {
if filter.Contains("driver") {
if !filter.ExactMatch("driver", nw.Driver) {
continue
}
}
if filter.Contains("name") {
if !filter.Match("name", nw.Name) {
continue
}
}
if filter.Contains("id") {
if !filter.Match("id", nw.ID) {
continue
}
}
if filter.Contains("label") {
if !filter.MatchKVList("label", nw.Labels) {
continue
}
}
if filter.Contains("scope") {
if !filter.ExactMatch("scope", nw.Scope) {
continue
}
}
displayNet = append(displayNet, nw)
}
if filter.Contains("type") {
typeNet := []types.NetworkResource{}
errFilter := filter.WalkValues("type", func(fval string) error {
passList, err := filterNetworkByType(displayNet, fval)
if err != nil {
return err
}
typeNet = append(typeNet, passList...)
return nil
})
if errFilter != nil {
return nil, errFilter
}
displayNet = typeNet
}
return displayNet, nil
}
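
A usage sketch for filterNetworks, assuming it sits in the same package so the unexported helper is in scope. The accepted keys ("driver", "type", "name", "id", "label", "scope") match the acceptedNetworkFilters map in the list handler further down:

package network

import (
	"fmt"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/api/types/filters"
)

func exampleFilterNetworks() {
	nws := []types.NetworkResource{
		{Name: "bridge", Driver: "bridge", Scope: "local"},
		{Name: "myoverlay", Driver: "overlay", Scope: "swarm"},
	}
	f := filters.NewArgs()
	f.Add("scope", "swarm")
	filtered, err := filterNetworks(nws, f)
	if err != nil {
		panic(err) // only reachable for an invalid "type" filter value
	}
	fmt.Println(len(filtered)) // prints 1: only "myoverlay" matches scope=swarm
}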


@@ -0,0 +1,149 @@
// +build !windows
package network // import "github.com/docker/docker/api/server/router/network"
import (
"strings"
"testing"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/filters"
)
func TestFilterNetworks(t *testing.T) {
networks := []types.NetworkResource{
{
Name: "host",
Driver: "host",
Scope: "local",
},
{
Name: "bridge",
Driver: "bridge",
Scope: "local",
},
{
Name: "none",
Driver: "null",
Scope: "local",
},
{
Name: "myoverlay",
Driver: "overlay",
Scope: "swarm",
},
{
Name: "mydrivernet",
Driver: "mydriver",
Scope: "local",
},
{
Name: "mykvnet",
Driver: "mykvdriver",
Scope: "global",
},
}
bridgeDriverFilters := filters.NewArgs()
bridgeDriverFilters.Add("driver", "bridge")
overlayDriverFilters := filters.NewArgs()
overlayDriverFilters.Add("driver", "overlay")
nonameDriverFilters := filters.NewArgs()
nonameDriverFilters.Add("driver", "noname")
customDriverFilters := filters.NewArgs()
customDriverFilters.Add("type", "custom")
builtinDriverFilters := filters.NewArgs()
builtinDriverFilters.Add("type", "builtin")
invalidDriverFilters := filters.NewArgs()
invalidDriverFilters.Add("type", "invalid")
localScopeFilters := filters.NewArgs()
localScopeFilters.Add("scope", "local")
swarmScopeFilters := filters.NewArgs()
swarmScopeFilters.Add("scope", "swarm")
globalScopeFilters := filters.NewArgs()
globalScopeFilters.Add("scope", "global")
testCases := []struct {
filter filters.Args
resultCount int
err string
}{
{
filter: bridgeDriverFilters,
resultCount: 1,
err: "",
},
{
filter: overlayDriverFilters,
resultCount: 1,
err: "",
},
{
filter: nonameDriverFilters,
resultCount: 0,
err: "",
},
{
filter: customDriverFilters,
resultCount: 3,
err: "",
},
{
filter: builtinDriverFilters,
resultCount: 3,
err: "",
},
{
filter: invalidDriverFilters,
resultCount: 0,
err: "Invalid filter: 'type'='invalid'",
},
{
filter: localScopeFilters,
resultCount: 4,
err: "",
},
{
filter: swarmScopeFilters,
resultCount: 1,
err: "",
},
{
filter: globalScopeFilters,
resultCount: 1,
err: "",
},
}
for _, testCase := range testCases {
result, err := filterNetworks(networks, testCase.filter)
if testCase.err != "" {
if err == nil {
t.Fatalf("expect error '%s', got no error", testCase.err)
} else if !strings.Contains(err.Error(), testCase.err) {
t.Fatalf("expect error '%s', got '%s'", testCase.err, err)
}
} else {
if err != nil {
t.Fatalf("expect no error, got error '%s'", err)
}
// Make sure result is not nil
if result == nil {
t.Fatal("filterNetworks should not return nil")
}
if len(result) != testCase.resultCount {
t.Fatalf("expect '%d' networks, got '%d' networks", testCase.resultCount, len(result))
}
}
}
}


@@ -36,7 +36,7 @@ func (r *networkRouter) initRoutes() {
router.NewPostRoute("/networks/create", r.postNetworkCreate),
router.NewPostRoute("/networks/{id:.*}/connect", r.postNetworkConnect),
router.NewPostRoute("/networks/{id:.*}/disconnect", r.postNetworkDisconnect),
router.NewPostRoute("/networks/prune", r.postNetworksPrune),
router.NewPostRoute("/networks/prune", r.postNetworksPrune, router.WithCancel),
// DELETE
router.NewDeleteRoute("/networks/{id:.*}", r.deleteNetwork),
}


@@ -3,7 +3,6 @@ package network // import "github.com/docker/docker/api/server/router/network"
import (
"context"
"encoding/json"
"io"
"net/http"
"strconv"
"strings"
@@ -16,54 +15,70 @@ import (
"github.com/docker/docker/errdefs"
"github.com/docker/libnetwork"
netconst "github.com/docker/libnetwork/datastore"
"github.com/docker/libnetwork/networkdb"
"github.com/pkg/errors"
)
var (
// acceptedNetworkFilters is a list of acceptable filters
acceptedNetworkFilters = map[string]bool{
"driver": true,
"type": true,
"name": true,
"id": true,
"label": true,
"scope": true,
}
)
func (n *networkRouter) getNetworksList(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
}
filter, err := filters.FromJSON(r.Form.Get("filters"))
filter := r.Form.Get("filters")
netFilters, err := filters.FromJSON(filter)
if err != nil {
return err
}
if err := network.ValidateFilters(filter); err != nil {
return errdefs.InvalidParameter(err)
if err := netFilters.Validate(acceptedNetworkFilters); err != nil {
return err
}
var list []types.NetworkResource
nr, err := n.cluster.GetNetworks(filter)
if err == nil {
list = nr
list := []types.NetworkResource{}
if nr, err := n.cluster.GetNetworks(); err == nil {
list = append(list, nr...)
}
// Combine the network list returned by Docker daemon if it is not already
// returned by the cluster manager
localNetworks, err := n.backend.GetNetworks(filter, types.NetworkListConfig{Detailed: versions.LessThan(httputils.VersionFromContext(ctx), "1.28")})
SKIP:
for _, nw := range n.backend.GetNetworks() {
for _, nl := range list {
if nl.ID == nw.ID() {
continue SKIP
}
}
var nr *types.NetworkResource
// Versions < 1.28 fetch all the containers attached to a network
// in a network list api call. It is a heavy weight operation when
// run across all the networks. Starting API version 1.28, this detailed
// info is available for network specific GET API (equivalent to inspect)
if versions.LessThan(httputils.VersionFromContext(ctx), "1.28") {
nr = n.buildDetailedNetworkResources(nw, false)
} else {
nr = n.buildNetworkResource(nw)
}
list = append(list, *nr)
}
list, err = filterNetworks(list, netFilters)
if err != nil {
return err
}
var idx map[string]bool
if len(list) > 0 {
idx = make(map[string]bool, len(list))
for _, n := range list {
idx[n.ID] = true
}
}
for _, n := range localNetworks {
if idx[n.ID] {
continue
}
list = append(list, n)
}
if list == nil {
list = []types.NetworkResource{}
}
return httputils.WriteJSON(w, http.StatusOK, list)
}
@@ -106,6 +121,13 @@ func (n *networkRouter) getNetwork(ctx context.Context, w http.ResponseWriter, r
}
scope := r.URL.Query().Get("scope")
isMatchingScope := func(scope, term string) bool {
if term != "" {
return scope == term
}
return true
}
// In case multiple networks have duplicate names, return error.
// TODO (yongtang): should we wrap with version here for backward compatibility?
@@ -117,26 +139,20 @@ func (n *networkRouter) getNetwork(ctx context.Context, w http.ResponseWriter, r
listByFullName := map[string]types.NetworkResource{}
listByPartialID := map[string]types.NetworkResource{}
// TODO(@cpuguy83): All this logic for figuring out which network to return does not belong here
// Instead there should be a backend function to just get one network.
filter := filters.NewArgs(filters.Arg("idOrName", term))
if scope != "" {
filter.Add("scope", scope)
}
nw, _ := n.backend.GetNetworks(filter, types.NetworkListConfig{Detailed: true, Verbose: verbose})
nw := n.backend.GetNetworks()
for _, network := range nw {
if network.ID == term {
return httputils.WriteJSON(w, http.StatusOK, network)
if network.ID() == term && isMatchingScope(network.Info().Scope(), scope) {
return httputils.WriteJSON(w, http.StatusOK, *n.buildDetailedNetworkResources(network, verbose))
}
if network.Name == term {
if network.Name() == term && isMatchingScope(network.Info().Scope(), scope) {
// No need to check the ID collision here as we are still in
// local scope and the network ID is unique in this scope.
listByFullName[network.ID] = network
listByFullName[network.ID()] = *n.buildDetailedNetworkResources(network, verbose)
}
if strings.HasPrefix(network.ID, term) {
if strings.HasPrefix(network.ID(), term) && isMatchingScope(network.Info().Scope(), scope) {
// No need to check the ID collision here as we are still in
// local scope and the network ID is unique in this scope.
listByPartialID[network.ID] = network
listByPartialID[network.ID()] = *n.buildDetailedNetworkResources(network, verbose)
}
}
@@ -158,12 +174,12 @@ func (n *networkRouter) getNetwork(ctx context.Context, w http.ResponseWriter, r
}
}
nr, _ := n.cluster.GetNetworks(filter)
nr, _ := n.cluster.GetNetworks()
for _, network := range nr {
if network.ID == term {
if network.ID == term && isMatchingScope(network.Scope, scope) {
return httputils.WriteJSON(w, http.StatusOK, network)
}
if network.Name == term {
if network.Name == term && isMatchingScope(network.Scope, scope) {
// Check the ID collision as we are in swarm scope here, and
// the map (of the listByFullName) may have already had a
// network with the same ID (from local scope previously)
@@ -171,7 +187,7 @@ func (n *networkRouter) getNetwork(ctx context.Context, w http.ResponseWriter, r
listByFullName[network.ID] = network
}
}
if strings.HasPrefix(network.ID, term) {
if strings.HasPrefix(network.ID, term) && isMatchingScope(network.Scope, scope) {
// Check the ID collision as we are in swarm scope here, and
// the map (of the listByPartialID) may have already had a
// network with the same ID (from local scope previously)
@@ -216,10 +232,7 @@ func (n *networkRouter) postNetworkCreate(ctx context.Context, w http.ResponseWr
}
if err := json.NewDecoder(r.Body).Decode(&create); err != nil {
if err == io.EOF {
return errdefs.InvalidParameter(errors.New("got EOF while reading request body"))
}
return errdefs.InvalidParameter(err)
return err
}
if nws, err := n.cluster.GetNetworksByName(create.Name); err == nil && len(nws) > 0 {
@@ -265,10 +278,7 @@ func (n *networkRouter) postNetworkConnect(ctx context.Context, w http.ResponseW
}
if err := json.NewDecoder(r.Body).Decode(&connect); err != nil {
if err == io.EOF {
return errdefs.InvalidParameter(errors.New("got EOF while reading request body"))
}
return errdefs.InvalidParameter(err)
return err
}
// Unlike other operations, we do not check ambiguity of the name/ID here.
@@ -289,10 +299,7 @@ func (n *networkRouter) postNetworkDisconnect(ctx context.Context, w http.Respon
}
if err := json.NewDecoder(r.Body).Decode(&disconnect); err != nil {
if err == io.EOF {
return errdefs.InvalidParameter(errors.New("got EOF while reading request body"))
}
return errdefs.InvalidParameter(err)
return err
}
return n.backend.DisconnectContainerFromNetwork(disconnect.Container, vars["id"], disconnect.Force)
@@ -320,6 +327,182 @@ func (n *networkRouter) deleteNetwork(ctx context.Context, w http.ResponseWriter
return nil
}
func (n *networkRouter) buildNetworkResource(nw libnetwork.Network) *types.NetworkResource {
r := &types.NetworkResource{}
if nw == nil {
return r
}
info := nw.Info()
r.Name = nw.Name()
r.ID = nw.ID()
r.Created = info.Created()
r.Scope = info.Scope()
r.Driver = nw.Type()
r.EnableIPv6 = info.IPv6Enabled()
r.Internal = info.Internal()
r.Attachable = info.Attachable()
r.Ingress = info.Ingress()
r.Options = info.DriverOptions()
r.Containers = make(map[string]types.EndpointResource)
buildIpamResources(r, info)
r.Labels = info.Labels()
r.ConfigOnly = info.ConfigOnly()
if cn := info.ConfigFrom(); cn != "" {
r.ConfigFrom = network.ConfigReference{Network: cn}
}
peers := info.Peers()
if len(peers) != 0 {
r.Peers = buildPeerInfoResources(peers)
}
return r
}
func (n *networkRouter) buildDetailedNetworkResources(nw libnetwork.Network, verbose bool) *types.NetworkResource {
if nw == nil {
return &types.NetworkResource{}
}
r := n.buildNetworkResource(nw)
epl := nw.Endpoints()
for _, e := range epl {
ei := e.Info()
if ei == nil {
continue
}
sb := ei.Sandbox()
tmpID := e.ID()
key := "ep-" + tmpID
if sb != nil {
key = sb.ContainerID()
}
r.Containers[key] = buildEndpointResource(tmpID, e.Name(), ei)
}
if !verbose {
return r
}
services := nw.Info().Services()
r.Services = make(map[string]network.ServiceInfo)
for name, service := range services {
tasks := []network.Task{}
for _, t := range service.Tasks {
tasks = append(tasks, network.Task{
Name: t.Name,
EndpointID: t.EndpointID,
EndpointIP: t.EndpointIP,
Info: t.Info,
})
}
r.Services[name] = network.ServiceInfo{
VIP: service.VIP,
Ports: service.Ports,
Tasks: tasks,
LocalLBIndex: service.LocalLBIndex,
}
}
return r
}
func buildPeerInfoResources(peers []networkdb.PeerInfo) []network.PeerInfo {
peerInfo := make([]network.PeerInfo, 0, len(peers))
for _, peer := range peers {
peerInfo = append(peerInfo, network.PeerInfo{
Name: peer.Name,
IP: peer.IP,
})
}
return peerInfo
}
func buildIpamResources(r *types.NetworkResource, nwInfo libnetwork.NetworkInfo) {
id, opts, ipv4conf, ipv6conf := nwInfo.IpamConfig()
ipv4Info, ipv6Info := nwInfo.IpamInfo()
r.IPAM.Driver = id
r.IPAM.Options = opts
r.IPAM.Config = []network.IPAMConfig{}
for _, ip4 := range ipv4conf {
if ip4.PreferredPool == "" {
continue
}
iData := network.IPAMConfig{}
iData.Subnet = ip4.PreferredPool
iData.IPRange = ip4.SubPool
iData.Gateway = ip4.Gateway
iData.AuxAddress = ip4.AuxAddresses
r.IPAM.Config = append(r.IPAM.Config, iData)
}
if len(r.IPAM.Config) == 0 {
for _, ip4Info := range ipv4Info {
iData := network.IPAMConfig{}
iData.Subnet = ip4Info.IPAMData.Pool.String()
if ip4Info.IPAMData.Gateway != nil {
iData.Gateway = ip4Info.IPAMData.Gateway.IP.String()
}
r.IPAM.Config = append(r.IPAM.Config, iData)
}
}
hasIpv6Conf := false
for _, ip6 := range ipv6conf {
if ip6.PreferredPool == "" {
continue
}
hasIpv6Conf = true
iData := network.IPAMConfig{}
iData.Subnet = ip6.PreferredPool
iData.IPRange = ip6.SubPool
iData.Gateway = ip6.Gateway
iData.AuxAddress = ip6.AuxAddresses
r.IPAM.Config = append(r.IPAM.Config, iData)
}
if !hasIpv6Conf {
for _, ip6Info := range ipv6Info {
if ip6Info.IPAMData.Pool == nil {
continue
}
iData := network.IPAMConfig{}
iData.Subnet = ip6Info.IPAMData.Pool.String()
iData.Gateway = ip6Info.IPAMData.Gateway.String()
r.IPAM.Config = append(r.IPAM.Config, iData)
}
}
}
func buildEndpointResource(id string, name string, info libnetwork.EndpointInfo) types.EndpointResource {
er := types.EndpointResource{}
er.EndpointID = id
er.Name = name
ei := info
if ei == nil {
return er
}
if iface := ei.Iface(); iface != nil {
if mac := iface.MacAddress(); mac != nil {
er.MacAddress = mac.String()
}
if ip := iface.Address(); ip != nil && len(ip.IP) > 0 {
er.IPv4Address = ip.String()
}
if ipv6 := iface.AddressIPv6(); ipv6 != nil && len(ipv6.IP) > 0 {
er.IPv6Address = ipv6.String()
}
}
return er
}
func (n *networkRouter) postNetworksPrune(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := httputils.ParseForm(r); err != nil {
return err
@@ -349,25 +532,25 @@ func (n *networkRouter) findUniqueNetwork(term string) (types.NetworkResource, e
listByFullName := map[string]types.NetworkResource{}
listByPartialID := map[string]types.NetworkResource{}
filter := filters.NewArgs(filters.Arg("idOrName", term))
nw, _ := n.backend.GetNetworks(filter, types.NetworkListConfig{Detailed: true})
nw := n.backend.GetNetworks()
for _, network := range nw {
if network.ID == term {
return network, nil
if network.ID() == term {
return *n.buildDetailedNetworkResources(network, false), nil
}
if network.Name == term && !network.Ingress {
if network.Name() == term && !network.Info().Ingress() {
// No need to check the ID collision here as we are still in
// local scope and the network ID is unique in this scope.
listByFullName[network.ID] = network
listByFullName[network.ID()] = *n.buildDetailedNetworkResources(network, false)
}
if strings.HasPrefix(network.ID, term) {
if strings.HasPrefix(network.ID(), term) {
// No need to check the ID collision here as we are still in
// local scope and the network ID is unique in this scope.
listByPartialID[network.ID] = network
listByPartialID[network.ID()] = *n.buildDetailedNetworkResources(network, false)
}
}
nr, _ := n.cluster.GetNetworks(filter)
nr, _ := n.cluster.GetNetworks()
for _, network := range nr {
if network.ID == term {
return network, nil


@@ -28,11 +28,11 @@ func (r *pluginRouter) initRoutes() {
router.NewGetRoute("/plugins/{name:.*}/json", r.inspectPlugin),
router.NewGetRoute("/plugins/privileges", r.getPrivileges),
router.NewDeleteRoute("/plugins/{name:.*}", r.removePlugin),
router.NewPostRoute("/plugins/{name:.*}/enable", r.enablePlugin),
router.NewPostRoute("/plugins/{name:.*}/enable", r.enablePlugin), // PATCH?
router.NewPostRoute("/plugins/{name:.*}/disable", r.disablePlugin),
router.NewPostRoute("/plugins/pull", r.pullPlugin),
router.NewPostRoute("/plugins/{name:.*}/push", r.pushPlugin),
router.NewPostRoute("/plugins/{name:.*}/upgrade", r.upgradePlugin),
router.NewPostRoute("/plugins/pull", r.pullPlugin, router.WithCancel),
router.NewPostRoute("/plugins/{name:.*}/push", r.pushPlugin, router.WithCancel),
router.NewPostRoute("/plugins/{name:.*}/upgrade", r.upgradePlugin, router.WithCancel),
router.NewPostRoute("/plugins/{name:.*}/set", r.setPlugin),
router.NewPostRoute("/plugins/create", r.createPlugin),
}


@@ -4,7 +4,6 @@ import (
"context"
"encoding/base64"
"encoding/json"
"io"
"net/http"
"strconv"
"strings"
@@ -13,7 +12,6 @@ import (
"github.com/docker/docker/api/server/httputils"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/filters"
"github.com/docker/docker/errdefs"
"github.com/docker/docker/pkg/ioutils"
"github.com/docker/docker/pkg/streamformatter"
"github.com/pkg/errors"
@@ -123,7 +121,7 @@ func (pr *pluginRouter) upgradePlugin(ctx context.Context, w http.ResponseWriter
if !output.Flushed() {
return err
}
_, _ = output.Write(streamformatter.FormatError(err))
output.Write(streamformatter.FormatError(err))
}
return nil
@@ -162,7 +160,7 @@ func (pr *pluginRouter) pullPlugin(ctx context.Context, w http.ResponseWriter, r
if !output.Flushed() {
return err
}
_, _ = output.Write(streamformatter.FormatError(err))
output.Write(streamformatter.FormatError(err))
}
return nil
@@ -211,7 +209,7 @@ func (pr *pluginRouter) createPlugin(ctx context.Context, w http.ResponseWriter,
if err := pr.backend.CreateFromContext(ctx, r.Body, options); err != nil {
return err
}
// TODO: send progress bar
//TODO: send progress bar
w.WriteHeader(http.StatusNoContent)
return nil
}
@@ -270,7 +268,7 @@ func (pr *pluginRouter) pushPlugin(ctx context.Context, w http.ResponseWriter, r
if !output.Flushed() {
return err
}
_, _ = output.Write(streamformatter.FormatError(err))
output.Write(streamformatter.FormatError(err))
}
return nil
}
@@ -278,10 +276,7 @@ func (pr *pluginRouter) pushPlugin(ctx context.Context, w http.ResponseWriter, r
func (pr *pluginRouter) setPlugin(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
var args []string
if err := json.NewDecoder(r.Body).Decode(&args); err != nil {
if err == io.EOF {
return errdefs.InvalidParameter(errors.New("got EOF while reading request body"))
}
return errdefs.InvalidParameter(err)
return err
}
if err := pr.backend.Set(vars["name"], args); err != nil {
return err


@@ -24,6 +24,6 @@ func (r *sessionRouter) Routes() []router.Route {
func (r *sessionRouter) initRoutes() {
r.routes = []router.Route{
router.NewPostRoute("/session", r.startSession),
router.Experimental(router.NewPostRoute("/session", r.startSession)),
}
}


@@ -37,7 +37,7 @@ func (sr *swarmRouter) initRoutes() {
router.NewPostRoute("/services/create", sr.createService),
router.NewPostRoute("/services/{id}/update", sr.updateService),
router.NewDeleteRoute("/services/{id}", sr.removeService),
router.NewGetRoute("/services/{id}/logs", sr.getServiceLogs),
router.NewGetRoute("/services/{id}/logs", sr.getServiceLogs, router.WithCancel),
router.NewGetRoute("/nodes", sr.getNodes),
router.NewGetRoute("/nodes/{id}", sr.getNode),
@@ -46,7 +46,7 @@ func (sr *swarmRouter) initRoutes() {
router.NewGetRoute("/tasks", sr.getTasks),
router.NewGetRoute("/tasks/{id}", sr.getTask),
router.NewGetRoute("/tasks/{id}/logs", sr.getTaskLogs),
router.NewGetRoute("/tasks/{id}/logs", sr.getTaskLogs, router.WithCancel),
router.NewGetRoute("/secrets", sr.getSecrets),
router.NewPostRoute("/secrets/create", sr.createSecret),


@@ -4,7 +4,6 @@ import (
"context"
"encoding/json"
"fmt"
"io"
"net/http"
"strconv"
@@ -22,21 +21,7 @@ import (
func (sr *swarmRouter) initCluster(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
var req types.InitRequest
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
if err == io.EOF {
return errdefs.InvalidParameter(errors.New("got EOF while reading request body"))
}
return errdefs.InvalidParameter(err)
}
version := httputils.VersionFromContext(ctx)
// DefaultAddrPool and SubnetSize were added in API 1.39. Ignore on older API versions.
if versions.LessThan(version, "1.39") {
req.DefaultAddrPool = nil
req.SubnetSize = 0
}
// DataPathPort was added in API 1.40. Ignore this option on older API versions.
if versions.LessThan(version, "1.40") {
req.DataPathPort = 0
return err
}
nodeID, err := sr.backend.Init(req)
if err != nil {
@@ -49,10 +34,7 @@ func (sr *swarmRouter) initCluster(ctx context.Context, w http.ResponseWriter, r
func (sr *swarmRouter) joinCluster(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
var req types.JoinRequest
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
if err == io.EOF {
return errdefs.InvalidParameter(errors.New("got EOF while reading request body"))
}
return errdefs.InvalidParameter(err)
return err
}
return sr.backend.Join(req)
}
@@ -79,10 +61,7 @@ func (sr *swarmRouter) inspectCluster(ctx context.Context, w http.ResponseWriter
func (sr *swarmRouter) updateCluster(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
var swarm types.Spec
if err := json.NewDecoder(r.Body).Decode(&swarm); err != nil {
if err == io.EOF {
return errdefs.InvalidParameter(errors.New("got EOF while reading request body"))
}
return errdefs.InvalidParameter(err)
return err
}
rawVersion := r.URL.Query().Get("version")
@@ -133,10 +112,7 @@ func (sr *swarmRouter) updateCluster(ctx context.Context, w http.ResponseWriter,
func (sr *swarmRouter) unlockCluster(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
var req types.UnlockRequest
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
if err == io.EOF {
return errdefs.InvalidParameter(errors.New("got EOF while reading request body"))
}
return errdefs.InvalidParameter(err)
return err
}
if err := sr.backend.UnlockSwarm(req); err != nil {
@@ -167,19 +143,7 @@ func (sr *swarmRouter) getServices(ctx context.Context, w http.ResponseWriter, r
return errdefs.InvalidParameter(err)
}
// the status query parameter is only supported in API versions >= 1.41. If
// the client is using a lesser version, ignore the parameter.
cliVersion := httputils.VersionFromContext(ctx)
var status bool
if value := r.URL.Query().Get("status"); value != "" && !versions.LessThan(cliVersion, "1.41") {
var err error
status, err = strconv.ParseBool(value)
if err != nil {
return errors.Wrapf(errdefs.InvalidParameter(err), "invalid value for status: %s", value)
}
}
services, err := sr.backend.GetServices(basictypes.ServiceListOptions{Filters: filter, Status: status})
services, err := sr.backend.GetServices(basictypes.ServiceListOptions{Filters: filter})
if err != nil {
logrus.Errorf("Error getting services: %v", err)
return err
@@ -190,21 +154,15 @@ func (sr *swarmRouter) getServices(ctx context.Context, w http.ResponseWriter, r
func (sr *swarmRouter) getService(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
var insertDefaults bool
if value := r.URL.Query().Get("insertDefaults"); value != "" {
var err error
insertDefaults, err = strconv.ParseBool(value)
if err != nil {
err := fmt.Errorf("invalid value for insertDefaults: %s", value)
return errors.Wrapf(errdefs.InvalidParameter(err), "invalid value for insertDefaults: %s", value)
}
}
// you may note that there is no code here to handle the "status" query
// parameter, as in getServices. the Status field is not supported when
// retrieving an individual service because the Backend API changes
// required to accommodate it would be too disruptive, and because that
// field is so rarely needed as part of an individual service inspection.
service, err := sr.backend.GetService(vars["id"], insertDefaults)
if err != nil {
logrus.Errorf("Error getting service %s: %v", vars["id"], err)
@@ -217,21 +175,17 @@ func (sr *swarmRouter) getService(ctx context.Context, w http.ResponseWriter, r
func (sr *swarmRouter) createService(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
var service types.ServiceSpec
if err := json.NewDecoder(r.Body).Decode(&service); err != nil {
if err == io.EOF {
return errdefs.InvalidParameter(errors.New("got EOF while reading request body"))
}
return errdefs.InvalidParameter(err)
return err
}
// Get returns "" if the header does not exist
encodedAuth := r.Header.Get("X-Registry-Auth")
cliVersion := r.Header.Get("version")
queryRegistry := false
if v := httputils.VersionFromContext(ctx); v != "" {
if versions.LessThan(v, "1.30") {
queryRegistry = true
}
adjustForAPIVersion(v, &service)
if cliVersion != "" && versions.LessThan(cliVersion, "1.30") {
queryRegistry = true
}
resp, err := sr.backend.CreateService(service, encodedAuth, queryRegistry)
if err != nil {
logrus.Errorf("Error creating service %s: %v", service.Name, err)
@@ -244,10 +198,7 @@ func (sr *swarmRouter) createService(ctx context.Context, w http.ResponseWriter,
func (sr *swarmRouter) updateService(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
var service types.ServiceSpec
if err := json.NewDecoder(r.Body).Decode(&service); err != nil {
if err == io.EOF {
return errdefs.InvalidParameter(errors.New("got EOF while reading request body"))
}
return errdefs.InvalidParameter(err)
return err
}
rawVersion := r.URL.Query().Get("version")
@@ -263,12 +214,10 @@ func (sr *swarmRouter) updateService(ctx context.Context, w http.ResponseWriter,
flags.EncodedRegistryAuth = r.Header.Get("X-Registry-Auth")
flags.RegistryAuthFrom = r.URL.Query().Get("registryAuthFrom")
flags.Rollback = r.URL.Query().Get("rollback")
cliVersion := r.Header.Get("version")
queryRegistry := false
if v := httputils.VersionFromContext(ctx); v != "" {
if versions.LessThan(v, "1.30") {
queryRegistry = true
}
adjustForAPIVersion(v, &service)
if cliVersion != "" && versions.LessThan(cliVersion, "1.30") {
queryRegistry = true
}
resp, err := sr.backend.UpdateService(vars["id"], version, service, flags, queryRegistry)
@@ -342,10 +291,7 @@ func (sr *swarmRouter) getNode(ctx context.Context, w http.ResponseWriter, r *ht
func (sr *swarmRouter) updateNode(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
var node types.NodeSpec
if err := json.NewDecoder(r.Body).Decode(&node); err != nil {
if err == io.EOF {
return errdefs.InvalidParameter(errors.New("got EOF while reading request body"))
}
return errdefs.InvalidParameter(err)
return err
}
rawVersion := r.URL.Query().Get("version")
@@ -424,10 +370,7 @@ func (sr *swarmRouter) getSecrets(ctx context.Context, w http.ResponseWriter, r
func (sr *swarmRouter) createSecret(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
var secret types.SecretSpec
if err := json.NewDecoder(r.Body).Decode(&secret); err != nil {
if err == io.EOF {
return errdefs.InvalidParameter(errors.New("got EOF while reading request body"))
}
return errdefs.InvalidParameter(err)
return err
}
version := httputils.VersionFromContext(ctx)
if secret.Templating != nil && versions.LessThan(version, "1.37") {
@@ -465,9 +408,6 @@ func (sr *swarmRouter) getSecret(ctx context.Context, w http.ResponseWriter, r *
func (sr *swarmRouter) updateSecret(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
var secret types.SecretSpec
if err := json.NewDecoder(r.Body).Decode(&secret); err != nil {
if err == io.EOF {
return errdefs.InvalidParameter(errors.New("got EOF while reading request body"))
}
return errdefs.InvalidParameter(err)
}
@@ -501,10 +441,7 @@ func (sr *swarmRouter) getConfigs(ctx context.Context, w http.ResponseWriter, r
func (sr *swarmRouter) createConfig(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
var config types.ConfigSpec
if err := json.NewDecoder(r.Body).Decode(&config); err != nil {
if err == io.EOF {
return errdefs.InvalidParameter(errors.New("got EOF while reading request body"))
}
return errdefs.InvalidParameter(err)
return err
}
version := httputils.VersionFromContext(ctx)
@@ -543,9 +480,6 @@ func (sr *swarmRouter) getConfig(ctx context.Context, w http.ResponseWriter, r *
func (sr *swarmRouter) updateConfig(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
var config types.ConfigSpec
if err := json.NewDecoder(r.Body).Decode(&config); err != nil {
if err == io.EOF {
return errdefs.InvalidParameter(errors.New("got EOF while reading request body"))
}
return errdefs.InvalidParameter(err)
}


@@ -9,8 +9,6 @@ import (
"github.com/docker/docker/api/server/httputils"
basictypes "github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/backend"
"github.com/docker/docker/api/types/swarm"
"github.com/docker/docker/api/types/versions"
)
// swarmLogs takes an http response, request, and selector, and writes the logs
@@ -66,52 +64,3 @@ func (sr *swarmRouter) swarmLogs(ctx context.Context, w io.Writer, r *http.Reque
httputils.WriteLogStream(ctx, w, msgs, logsConfig, !tty)
return nil
}
// adjustForAPIVersion takes a version and service spec and removes fields to
// make the spec compatible with the specified version.
func adjustForAPIVersion(cliVersion string, service *swarm.ServiceSpec) {
if cliVersion == "" {
return
}
if versions.LessThan(cliVersion, "1.40") {
if service.TaskTemplate.ContainerSpec != nil {
// Sysctls for docker swarm services weren't supported before
// API version 1.40
service.TaskTemplate.ContainerSpec.Sysctls = nil
if service.TaskTemplate.ContainerSpec.Privileges != nil && service.TaskTemplate.ContainerSpec.Privileges.CredentialSpec != nil {
// Support for setting credential-spec through configs was added in API 1.40
service.TaskTemplate.ContainerSpec.Privileges.CredentialSpec.Config = ""
}
for _, config := range service.TaskTemplate.ContainerSpec.Configs {
// support for the Runtime target was added in API 1.40
config.Runtime = nil
}
}
if service.TaskTemplate.Placement != nil {
// MaxReplicas for docker swarm services weren't supported before
// API version 1.40
service.TaskTemplate.Placement.MaxReplicas = 0
}
}
if versions.LessThan(cliVersion, "1.41") {
if service.TaskTemplate.ContainerSpec != nil {
// Capabilities and Ulimits for docker swarm services weren't
// supported before API version 1.41
service.TaskTemplate.ContainerSpec.CapabilityAdd = nil
service.TaskTemplate.ContainerSpec.CapabilityDrop = nil
service.TaskTemplate.ContainerSpec.Ulimits = nil
}
if service.TaskTemplate.Resources != nil && service.TaskTemplate.Resources.Limits != nil {
// Limits.Pids not supported before API version 1.41
service.TaskTemplate.Resources.Limits.Pids = 0
}
// jobs were only introduced in API version 1.41. Nil out both Job
// modes; if the service is one of these modes and subsequently has no
// mode, then something down the pipe will throw an error.
service.Mode.ReplicatedJob = nil
service.Mode.GlobalJob = nil
}
}


@@ -1,119 +0,0 @@
package swarm // import "github.com/docker/docker/api/server/router/swarm"
import (
"reflect"
"testing"
"github.com/docker/docker/api/types/swarm"
"github.com/docker/go-units"
)
func TestAdjustForAPIVersion(t *testing.T) {
var (
expectedSysctls = map[string]string{"foo": "bar"}
)
// testing the negative -- does this leave everything else alone? -- is
// prohibitively time-consuming to write, because it would need an object
// with literally every field filled in.
spec := &swarm.ServiceSpec{
TaskTemplate: swarm.TaskSpec{
ContainerSpec: &swarm.ContainerSpec{
Sysctls: expectedSysctls,
Privileges: &swarm.Privileges{
CredentialSpec: &swarm.CredentialSpec{
Config: "someconfig",
},
},
Configs: []*swarm.ConfigReference{
{
File: &swarm.ConfigReferenceFileTarget{
Name: "foo",
UID: "bar",
GID: "baz",
},
ConfigID: "configFile",
ConfigName: "configFile",
},
{
Runtime: &swarm.ConfigReferenceRuntimeTarget{},
ConfigID: "configRuntime",
ConfigName: "configRuntime",
},
},
Ulimits: []*units.Ulimit{
{
Name: "nofile",
Soft: 100,
Hard: 200,
},
},
},
Placement: &swarm.Placement{
MaxReplicas: 222,
},
Resources: &swarm.ResourceRequirements{
Limits: &swarm.Limit{
Pids: 300,
},
},
},
}
// first, does calling this with a later version correctly NOT strip
// fields? do the later version first, so we can reuse this spec in the
// next test.
adjustForAPIVersion("1.41", spec)
if !reflect.DeepEqual(spec.TaskTemplate.ContainerSpec.Sysctls, expectedSysctls) {
t.Error("Sysctls was stripped from spec")
}
if spec.TaskTemplate.Resources.Limits.Pids == 0 {
t.Error("PidsLimit was stripped from spec")
}
if spec.TaskTemplate.Resources.Limits.Pids != 300 {
t.Error("PidsLimit did not preserve the value from spec")
}
if spec.TaskTemplate.ContainerSpec.Privileges.CredentialSpec.Config != "someconfig" {
t.Error("CredentialSpec.Config field was stripped from spec")
}
if spec.TaskTemplate.ContainerSpec.Configs[1].Runtime == nil {
t.Error("ConfigReferenceRuntimeTarget was stripped from spec")
}
if spec.TaskTemplate.Placement.MaxReplicas != 222 {
t.Error("MaxReplicas was stripped from spec")
}
if len(spec.TaskTemplate.ContainerSpec.Ulimits) == 0 {
t.Error("Ulimits were stripped from spec")
}
// next, does calling this with an earlier version correctly strip fields?
adjustForAPIVersion("1.29", spec)
if spec.TaskTemplate.ContainerSpec.Sysctls != nil {
t.Error("Sysctls was not stripped from spec")
}
if spec.TaskTemplate.Resources.Limits.Pids != 0 {
t.Error("PidsLimit was not stripped from spec")
}
if spec.TaskTemplate.ContainerSpec.Privileges.CredentialSpec.Config != "" {
t.Error("CredentialSpec.Config field was not stripped from spec")
}
if spec.TaskTemplate.ContainerSpec.Configs[1].Runtime != nil {
t.Error("ConfigReferenceRuntimeTarget was not stripped from spec")
}
if spec.TaskTemplate.Placement.MaxReplicas != 0 {
t.Error("MaxReplicas was not stripped from spec")
}
if len(spec.TaskTemplate.ContainerSpec.Ulimits) != 0 {
t.Error("Ulimits were not stripped from spec")
}
}


@@ -13,7 +13,7 @@ import (
// Backend is the methods that need to be implemented to provide
// system specific functionality.
type Backend interface {
SystemInfo() *types.Info
SystemInfo() (*types.Info, error)
SystemVersion() types.Version
SystemDiskUsage(ctx context.Context) (*types.DiskUsage, error)
SubscribeToEvents(since, until time.Time, ef filters.Args) ([]events.Message, chan interface{})


@@ -3,35 +3,35 @@ package system // import "github.com/docker/docker/api/server/router/system"
import (
"github.com/docker/docker/api/server/router"
buildkit "github.com/docker/docker/builder/builder-next"
"github.com/docker/docker/builder/fscache"
)
// systemRouter provides information about the Docker system overall.
// It gathers information about host, daemon and container events.
type systemRouter struct {
backend Backend
cluster ClusterBackend
routes []router.Route
builder *buildkit.Builder
features *map[string]bool
backend Backend
cluster ClusterBackend
routes []router.Route
fscache *fscache.FSCache // legacy
builder *buildkit.Builder
}
// NewRouter initializes a new system router
func NewRouter(b Backend, c ClusterBackend, builder *buildkit.Builder, features *map[string]bool) router.Router {
func NewRouter(b Backend, c ClusterBackend, fscache *fscache.FSCache, builder *buildkit.Builder) router.Router {
r := &systemRouter{
backend: b,
cluster: c,
builder: builder,
features: features,
backend: b,
cluster: c,
fscache: fscache,
builder: builder,
}
r.routes = []router.Route{
router.NewOptionsRoute("/{anyroute:.*}", optionsHandler),
router.NewGetRoute("/_ping", r.pingHandler),
router.NewHeadRoute("/_ping", r.pingHandler),
router.NewGetRoute("/events", r.getEvents),
router.NewGetRoute("/_ping", pingHandler),
router.NewGetRoute("/events", r.getEvents, router.WithCancel),
router.NewGetRoute("/info", r.getInfo),
router.NewGetRoute("/version", r.getVersion),
router.NewGetRoute("/system/df", r.getDiskUsage),
router.NewGetRoute("/system/df", r.getDiskUsage, router.WithCancel),
router.NewPostRoute("/auth", r.postAuth),
}


@@ -8,7 +8,6 @@ import (
"time"
"github.com/docker/docker/api/server/httputils"
"github.com/docker/docker/api/server/router/build"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/events"
"github.com/docker/docker/api/types/filters"
@@ -26,29 +25,18 @@ func optionsHandler(ctx context.Context, w http.ResponseWriter, r *http.Request,
return nil
}
func (s *systemRouter) pingHandler(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
w.Header().Add("Cache-Control", "no-cache, no-store, must-revalidate")
w.Header().Add("Pragma", "no-cache")
builderVersion := build.BuilderVersion(*s.features)
if bv := builderVersion; bv != "" {
w.Header().Set("Builder-Version", string(bv))
}
if r.Method == http.MethodHead {
w.Header().Set("Content-Type", "text/plain; charset=utf-8")
w.Header().Set("Content-Length", "0")
return nil
}
func pingHandler(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
_, err := w.Write([]byte{'O', 'K'})
return err
}
func (s *systemRouter) getInfo(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
info := s.backend.SystemInfo()
info, err := s.backend.SystemInfo()
if err != nil {
return err
}
if s.cluster != nil {
info.Swarm = s.cluster.Info()
info.Warnings = append(info.Warnings, info.Swarm.Warnings...)
}
if versions.LessThan(httputils.VersionFromContext(ctx), "1.25") {
@@ -72,14 +60,6 @@ func (s *systemRouter) getInfo(ctx context.Context, w http.ResponseWriter, r *ht
old.SecurityOptions = nameOnlySecurityOptions
return httputils.WriteJSON(w, http.StatusOK, old)
}
if versions.LessThan(httputils.VersionFromContext(ctx), "1.39") {
if info.KernelVersion == "" {
info.KernelVersion = "<unknown>"
}
if info.OperatingSystem == "" {
info.OperatingSystem = "<unknown>"
}
}
return httputils.WriteJSON(w, http.StatusOK, info)
}
@@ -99,6 +79,16 @@ func (s *systemRouter) getDiskUsage(ctx context.Context, w http.ResponseWriter,
return err
})
var builderSize int64 // legacy
eg.Go(func() error {
var err error
builderSize, err = s.fscache.DiskUsage(ctx)
if err != nil {
return pkgerrors.Wrap(err, "error getting fscache build cache usage")
}
return nil
})
var buildCache []*types.BuildCache
eg.Go(func() error {
var err error
@@ -113,7 +103,6 @@ func (s *systemRouter) getDiskUsage(ctx context.Context, w http.ResponseWriter,
return err
}
var builderSize int64
for _, b := range buildCache {
builderSize += b.Size
}
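
The getDiskUsage hunks above fan the individual usage queries out through an errgroup and join them with a single Wait. A minimal sketch of the pattern, assuming golang.org/x/sync/errgroup; layerSize and volumeSize are hypothetical stand-ins for the real backends:

package main

import (
	"context"
	"fmt"

	"golang.org/x/sync/errgroup"
)

func layerSize(ctx context.Context) (int64, error)  { return 42, nil } // stub
func volumeSize(ctx context.Context) (int64, error) { return 7, nil }  // stub

func diskUsage(ctx context.Context) (layers, volumes int64, err error) {
	eg, ctx := errgroup.WithContext(ctx)
	eg.Go(func() error {
		var err error
		layers, err = layerSize(ctx)
		return err
	})
	eg.Go(func() error {
		var err error
		volumes, err = volumeSize(ctx)
		return err
	})
	// Wait blocks until all goroutines finish and returns the first non-nil error.
	err = eg.Wait()
	return layers, volumes, err
}

func main() {
	l, v, err := diskUsage(context.Background())
	fmt.Println(l, v, err)
}
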
@@ -163,9 +152,7 @@ func (s *systemRouter) getEvents(ctx context.Context, w http.ResponseWriter, r *
if !onlyPastEvents {
dur := until.Sub(now)
timer := time.NewTimer(dur)
defer timer.Stop()
timeout = timer.C
timeout = time.After(dur)
}
}
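
The last hunk above replaces time.After with an explicit time.NewTimer plus a deferred Stop. time.After keeps its underlying timer alive until it fires, so a long-lived /events request with a far-future --until would pin memory even after the client left; Stop releases it on early return. A minimal sketch of the pattern:

package main

import (
	"context"
	"fmt"
	"time"
)

// waitUntil blocks until the deadline or until ctx is cancelled.
func waitUntil(ctx context.Context, until time.Time) error {
	timer := time.NewTimer(time.Until(until))
	defer timer.Stop() // frees the timer if we return before it fires
	select {
	case <-timer.C:
		return nil // deadline reached
	case <-ctx.Done():
		return ctx.Err() // a time.After channel here would linger until expiry
	}
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	go func() { time.Sleep(100 * time.Millisecond); cancel() }()
	fmt.Println(waitUntil(ctx, time.Now().Add(time.Hour))) // context canceled
}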

View File

@@ -29,7 +29,7 @@ func (r *volumeRouter) initRoutes() {
router.NewGetRoute("/volumes/{name:.*}", r.getVolumeByName),
// POST
router.NewPostRoute("/volumes/create", r.postVolumesCreate),
router.NewPostRoute("/volumes/prune", r.postVolumesPrune),
router.NewPostRoute("/volumes/prune", r.postVolumesPrune, router.WithCancel),
// DELETE
router.NewDeleteRoute("/volumes/{name:.*}", r.deleteVolumes),
}

View File

@@ -56,7 +56,7 @@ func (v *volumeRouter) postVolumesCreate(ctx context.Context, w http.ResponseWri
if err == io.EOF {
return errdefs.InvalidParameter(errors.New("got EOF while reading request body"))
}
return errdefs.InvalidParameter(err)
return err
}
volume, err := v.backend.Create(ctx, req.Name, req.Driver, opts.WithCreateOptions(req.DriverOpts), opts.WithCreateLabels(req.Labels))

View File

@@ -0,0 +1,30 @@
package server // import "github.com/docker/docker/api/server"
import (
"net/http"
"sync"
"github.com/gorilla/mux"
)
// routerSwapper is an http.Handler that allows you to swap
// mux routers.
type routerSwapper struct {
mu sync.Mutex
router *mux.Router
}
// Swap changes the old router with the new one.
func (rs *routerSwapper) Swap(newRouter *mux.Router) {
rs.mu.Lock()
rs.router = newRouter
rs.mu.Unlock()
}
// ServeHTTP makes the routerSwapper to implement the http.Handler interface.
func (rs *routerSwapper) ServeHTTP(w http.ResponseWriter, r *http.Request) {
rs.mu.Lock()
router := rs.router
rs.mu.Unlock()
router.ServeHTTP(w, r)
}
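
routerSwapper above gives the daemon one stable http.Handler whose underlying mux can be replaced atomically, for example when the route set changes after a plugin is enabled, without touching the listeners. A minimal usage sketch in the same package as routerSwapper; buildMux is a placeholder for whatever assembles the daemon's routes:

// buildMux stands in for the real route-assembly code.
func buildMux() *mux.Router { return mux.NewRouter() }

func serve() {
	rs := &routerSwapper{router: buildMux()}
	srv := &http.Server{Addr: ":2375", Handler: rs}
	go srv.ListenAndServe()

	// later, e.g. after the route set changes:
	rs.Swap(buildMux())
}

A plain sync.Mutex is a reasonable choice here because swaps are rare; if the per-request lock ever showed up in profiles, an atomic.Value holding the *mux.Router would remove it.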

View File

@@ -6,9 +6,7 @@ import (
"net"
"net/http"
"strings"
"time"
"github.com/docker/docker/api/server/httpstatus"
"github.com/docker/docker/api/server/httputils"
"github.com/docker/docker/api/server/middleware"
"github.com/docker/docker/api/server/router"
@@ -33,10 +31,11 @@ type Config struct {
// Server contains instance details for the server
type Server struct {
cfg *Config
servers []*HTTPServer
routers []router.Router
middlewares []middleware.Middleware
cfg *Config
servers []*HTTPServer
routers []router.Router
routerSwapper *routerSwapper
middlewares []middleware.Middleware
}
// New returns a new instance of the server based on the specified configuration.
@@ -58,8 +57,7 @@ func (s *Server) Accept(addr string, listeners ...net.Listener) {
for _, listener := range listeners {
httpServer := &HTTPServer{
srv: &http.Server{
Addr: addr,
ReadHeaderTimeout: 5 * time.Minute, // "G112: Potential Slowloris Attack (gosec)"; not a real concern for our use, so setting a long timeout.
Addr: addr,
},
l: listener,
}
@@ -81,7 +79,7 @@ func (s *Server) Close() {
func (s *Server) serveAPI() error {
var chErrors = make(chan error, len(s.servers))
for _, srv := range s.servers {
srv.srv.Handler = s.createMux()
srv.srv.Handler = s.routerSwapper
go func(srv *HTTPServer) {
var err error
logrus.Infof("API listen on %s", srv.l.Addr())
@@ -131,8 +129,8 @@ func (s *Server) makeHTTPHandler(handler httputils.APIFunc) http.HandlerFunc {
// use intermediate variable to prevent "should not use basic type
// string as key in context.WithValue" golint errors
ctx := context.WithValue(r.Context(), dockerversion.UAStringKey{}, r.Header.Get("User-Agent"))
r = r.WithContext(ctx)
var ki interface{} = dockerversion.UAStringKey
ctx := context.WithValue(context.Background(), ki, r.Header.Get("User-Agent"))
handlerFunc := s.handlerWithGlobalMiddlewares(handler)
vars := mux.Vars(r)
@@ -141,11 +139,11 @@ func (s *Server) makeHTTPHandler(handler httputils.APIFunc) http.HandlerFunc {
}
if err := handlerFunc(ctx, w, r, vars); err != nil {
statusCode := httpstatus.FromError(err)
statusCode := httputils.GetHTTPErrorStatusCode(err)
if statusCode >= 500 {
logrus.Errorf("Handler for %s %s returned error: %v", r.Method, r.URL.Path, err)
}
makeErrorHandler(err)(w, r)
httputils.MakeErrorHandler(err)(w, r)
}
}
}
@@ -154,6 +152,11 @@ func (s *Server) makeHTTPHandler(handler httputils.APIFunc) http.HandlerFunc {
// This method also enables the Go profiler.
func (s *Server) InitRouter(routers ...router.Router) {
s.routers = append(s.routers, routers...)
m := s.createMux()
s.routerSwapper = &routerSwapper{
router: m,
}
}
type pageNotFoundError struct{}
@@ -186,10 +189,9 @@ func (s *Server) createMux() *mux.Router {
m.Path("/debug" + r.Path()).Handler(f)
}
notFoundHandler := makeErrorHandler(pageNotFoundError{})
notFoundHandler := httputils.MakeErrorHandler(pageNotFoundError{})
m.HandleFunc(versionMatcher+"/{path:.*}", notFoundHandler)
m.NotFoundHandler = notFoundHandler
m.MethodNotAllowedHandler = notFoundHandler
return m
}
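
One hunk above swaps a basic-typed context key for the empty struct dockerversion.UAStringKey{}. Using a distinct (ideally unexported) key type makes collisions with other packages impossible and silences the "should not use basic type as key in context.WithValue" lint. A minimal sketch of the idiom:

package main

import (
	"context"
	"fmt"
)

type uaKey struct{} // distinct type: no other package can construct a colliding key

func withUserAgent(ctx context.Context, ua string) context.Context {
	return context.WithValue(ctx, uaKey{}, ua)
}

func userAgent(ctx context.Context) string {
	ua, _ := ctx.Value(uaKey{}).(string) // comma-ok: empty string when unset
	return ua
}

func main() {
	ctx := withUserAgent(context.Background(), "Docker-Client/18.06")
	fmt.Println(userAgent(ctx))
}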

View File

@@ -22,7 +22,7 @@ func TestMiddlewares(t *testing.T) {
srv.UseMiddleware(middleware.NewVersionMiddleware("0.1omega2", api.DefaultVersion, api.MinVersion))
req, _ := http.NewRequest(http.MethodGet, "/containers/json", nil)
req, _ := http.NewRequest("GET", "/containers/json", nil)
resp := httptest.NewRecorder()
ctx := context.Background()

File diff suppressed because it is too large.

View File

@@ -1,14 +1,17 @@
package {{ .Package }} // import "github.com/docker/docker/api/types/{{ .Package }}"
package {{ .Package }}
// ----------------------------------------------------------------------------
// Code generated by `swagger generate operation`. DO NOT EDIT.
// DO NOT EDIT THIS FILE
// This file was generated by `swagger generate operation`
//
// See hack/generate-swagger-api.sh
// ----------------------------------------------------------------------------
import (
"context"
"net/http"
context "golang.org/x/net/context"
{{ range .DefaultImports }}{{ printf "%q" . }}
{{ end }}
{{ range $key, $value := .Imports }}{{ $key }} {{ printf "%q" $value }}

View File

@@ -30,7 +30,7 @@ type ContainerAttachConfig struct {
// expectation is for the logger endpoints to assemble the chunks using this
// metadata.
type PartialLogMetaData struct {
Last bool // true if this message is last of a partial
Last bool //true if this message is last of a partial
ID string // identifies group of messages comprising a single record
Ordinal int // ordering of message in partial group
}
@@ -73,7 +73,6 @@ type LogSelector struct {
// behavior of a backend.ContainerStats() call.
type ContainerStatsConfig struct {
Stream bool
OneShot bool
OutStream io.Writer
Version string
}
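
PartialLogMetaData above lets a log driver emit an oversized record as chunks that the logging endpoint reassembles. A minimal sketch of that reassembly, keyed by ID, ordered by Ordinal, and flushed when Last is set; the types are simplified from the ones above and the buffering scheme is my own illustration, not the daemon's:

package main

import (
	"bytes"
	"fmt"
	"sort"
)

type chunk struct {
	Last    bool
	ID      string
	Ordinal int
	Line    []byte
}

// assemble buffers chunks per record ID and returns the full record on Last.
func assemble(pending map[string][]chunk, c chunk) ([]byte, bool) {
	pending[c.ID] = append(pending[c.ID], c)
	if !c.Last {
		return nil, false
	}
	parts := pending[c.ID]
	delete(pending, c.ID)
	sort.Slice(parts, func(i, j int) bool { return parts[i].Ordinal < parts[j].Ordinal })
	var buf bytes.Buffer
	for _, p := range parts {
		buf.Write(p.Line)
	}
	return buf.Bytes(), true
}

func main() {
	pending := map[string][]chunk{}
	assemble(pending, chunk{ID: "r1", Ordinal: 0, Line: []byte("hel")})
	rec, done := assemble(pending, chunk{ID: "r1", Ordinal: 1, Line: []byte("lo"), Last: true})
	fmt.Println(string(rec), done) // hello true
}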

View File

@@ -50,7 +50,7 @@ type ContainerCommitOptions struct {
// ContainerExecInspect holds information returned by exec inspect.
type ContainerExecInspect struct {
ExecID string `json:"ID"`
ExecID string
ContainerID string
Running bool
ExitCode int
@@ -187,15 +187,6 @@ type ImageBuildOptions struct {
// build request. The same identifier can be used to gracefully cancel the
// build with the cancel request.
BuildID string
// Outputs defines configurations for exporting build results. Only supported
// in BuildKit mode
Outputs []ImageBuildOutput
}
// ImageBuildOutput defines configuration for exporting a build result
type ImageBuildOutput struct {
Type string
Attrs map[string]string
}
// BuilderVersion sets the version of underlying builder to use
@@ -205,7 +196,7 @@ const (
// BuilderV1 is the first generation builder in docker daemon
BuilderV1 BuilderVersion = "1"
// BuilderBuildKit is builder based on moby/buildkit project
BuilderBuildKit BuilderVersion = "2"
BuilderBuildKit = "2"
)
// ImageBuildResponse holds information
@@ -265,7 +256,7 @@ type ImagePullOptions struct {
// if the privilege request fails.
type RequestPrivilegeFunc func() (string, error)
// ImagePushOptions holds information to push images.
//ImagePushOptions holds information to push images.
type ImagePushOptions ImagePullOptions
// ImageRemoveOptions holds parameters to remove images.
@@ -363,10 +354,6 @@ type ServiceUpdateOptions struct {
// ServiceListOptions holds parameters to list services with.
type ServiceListOptions struct {
Filters filters.Args
// Status indicates whether the server should include the service task
// count of running and desired tasks.
Status bool
}
// ServiceInspectOptions holds parameters related to the "service inspect"
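
The ServiceListOptions hunk above adds a Status flag so the daemon computes running/desired task counts server-side, sparing every client a per-service task listing. A minimal usage sketch with the docker Go client, assuming a client and daemon new enough to carry the field:

package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}
	services, err := cli.ServiceList(context.Background(), types.ServiceListOptions{Status: true})
	if err != nil {
		panic(err)
	}
	for _, s := range services {
		if s.ServiceStatus != nil { // nil when the daemon predates the field
			fmt.Printf("%s: %d/%d tasks\n", s.Spec.Name, s.ServiceStatus.RunningTasks, s.ServiceStatus.DesiredTasks)
		}
	}
}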

View File

@@ -3,7 +3,6 @@ package types // import "github.com/docker/docker/api/types"
import (
"github.com/docker/docker/api/types/container"
"github.com/docker/docker/api/types/network"
specs "github.com/opencontainers/image-spec/specs-go/v1"
)
// configs holds structs used for internal communication between the
@@ -16,7 +15,6 @@ type ContainerCreateConfig struct {
Config *container.Config
HostConfig *container.HostConfig
NetworkingConfig *network.NetworkingConfig
Platform *specs.Platform
AdjustCPUShares bool
}
@@ -57,10 +55,3 @@ type PluginEnableConfig struct {
type PluginDisableConfig struct {
ForceDisable bool
}
// NetworkListConfig stores the options available for listing networks
type NetworkListConfig struct {
// TODO(@cpuguy83): naming is hard, this is pulled from what was being used in the router before moving here
Detailed bool
Verbose bool
}

View File

@@ -54,7 +54,7 @@ type Config struct {
Env []string // List of environment variable to set in the container
Cmd strslice.StrSlice // Command to run when starting the container
Healthcheck *HealthConfig `json:",omitempty"` // Healthcheck describes how to check the container is healthy
ArgsEscaped bool `json:",omitempty"` // True if command is already escaped (meaning treat as a command line) (Windows specific).
ArgsEscaped bool `json:",omitempty"` // True if command is already escaped (Windows specific)
Image string // Name of the image as it was passed by the operator (e.g. could be symbolic)
Volumes map[string]struct{} // List of volumes (mounts) used for the container
WorkingDir string // Current directory (PWD) in the command will be launched

View File

@@ -1,7 +1,8 @@
package container // import "github.com/docker/docker/api/types/container"
package container
// ----------------------------------------------------------------------------
// Code generated by `swagger generate operation`. DO NOT EDIT.
// DO NOT EDIT THIS FILE
// This file was generated by `swagger generate operation`
//
// See hack/generate-swagger-api.sh
// ----------------------------------------------------------------------------

View File

@@ -1,7 +1,8 @@
package container // import "github.com/docker/docker/api/types/container"
package container
// ----------------------------------------------------------------------------
// Code generated by `swagger generate operation`. DO NOT EDIT.
// DO NOT EDIT THIS FILE
// This file was generated by `swagger generate operation`
//
// See hack/generate-swagger-api.sh
// ----------------------------------------------------------------------------

View File

@@ -1,7 +1,8 @@
package container // import "github.com/docker/docker/api/types/container"
package container
// ----------------------------------------------------------------------------
// Code generated by `swagger generate operation`. DO NOT EDIT.
// DO NOT EDIT THIS FILE
// This file was generated by `swagger generate operation`
//
// See hack/generate-swagger-api.sh
// ----------------------------------------------------------------------------
@@ -10,9 +11,7 @@ package container // import "github.com/docker/docker/api/types/container"
// swagger:model ContainerTopOKBody
type ContainerTopOKBody struct {
// Each process running in the container, where each is process
// is an array of values corresponding to the titles.
//
// Each process running in the container, where each is process is an array of values corresponding to the titles
// Required: true
Processes [][]string `json:"Processes"`

View File

@@ -1,7 +1,8 @@
package container // import "github.com/docker/docker/api/types/container"
package container
// ----------------------------------------------------------------------------
// Code generated by `swagger generate operation`. DO NOT EDIT.
// DO NOT EDIT THIS FILE
// This file was generated by `swagger generate operation`
//
// See hack/generate-swagger-api.sh
// ----------------------------------------------------------------------------

View File

@@ -1,7 +1,8 @@
package container // import "github.com/docker/docker/api/types/container"
package container
// ----------------------------------------------------------------------------
// Code generated by `swagger generate operation`. DO NOT EDIT.
// DO NOT EDIT THIS FILE
// This file was generated by `swagger generate operation`
//
// See hack/generate-swagger-api.sh
// ----------------------------------------------------------------------------

View File

@@ -7,32 +7,9 @@ import (
"github.com/docker/docker/api/types/mount"
"github.com/docker/docker/api/types/strslice"
"github.com/docker/go-connections/nat"
units "github.com/docker/go-units"
"github.com/docker/go-units"
)
// CgroupnsMode represents the cgroup namespace mode of the container
type CgroupnsMode string
// IsPrivate indicates whether the container uses its own private cgroup namespace
func (c CgroupnsMode) IsPrivate() bool {
return c == "private"
}
// IsHost indicates whether the container shares the host's cgroup namespace
func (c CgroupnsMode) IsHost() bool {
return c == "host"
}
// IsEmpty indicates whether the container cgroup namespace mode is unset
func (c CgroupnsMode) IsEmpty() bool {
return c == ""
}
// Valid indicates whether the cgroup namespace mode is valid
func (c CgroupnsMode) Valid() bool {
return c.IsEmpty() || c.IsPrivate() || c.IsHost()
}
// Isolation represents the isolation technology of a container. The supported
// values are platform specific
type Isolation string
@@ -145,7 +122,7 @@ func (n NetworkMode) ConnectedContainer() string {
return ""
}
// UserDefined indicates user-created network
//UserDefined indicates user-created network
func (n NetworkMode) UserDefined() string {
if n.IsUserDefined() {
return string(n)
@@ -267,16 +244,6 @@ func (n PidMode) Container() string {
return ""
}
// DeviceRequest represents a request for devices from a device driver.
// Used by GPU device drivers.
type DeviceRequest struct {
Driver string // Name of device driver
Count int // Number of devices to request (-1 = All)
DeviceIDs []string // List of device IDs as recognizable by the device driver
Capabilities [][]string // An OR list of AND lists of device capabilities (e.g. "gpu")
Options map[string]string // Options to pass onto the device driver
}
// DeviceMapping represents the device mapping between the host and the container.
type DeviceMapping struct {
PathOnHost string
@@ -360,14 +327,13 @@ type Resources struct {
CpusetMems string // CpusetMems 0-2, 0,1
Devices []DeviceMapping // List of devices to map inside the container
DeviceCgroupRules []string // List of rule to be added to the device cgroup
DeviceRequests []DeviceRequest // List of device requests for device drivers
KernelMemory int64 // Kernel memory limit (in bytes), Deprecated: kernel 5.4 deprecated kmem.limit_in_bytes
KernelMemoryTCP int64 // Hard limit for kernel TCP buffer memory (in bytes)
DiskQuota int64 // Disk limit (in bytes)
KernelMemory int64 // Kernel memory limit (in bytes)
MemoryReservation int64 // Memory soft limit (in bytes)
MemorySwap int64 // Total memory usage (memory + swap); set `-1` to enable unlimited swap
MemorySwappiness *int64 // Tuning container memory swappiness behaviour
OomKillDisable *bool // Whether to disable OOM Killer or not
PidsLimit *int64 // Setting PIDs limit for a container; Set `0` or `-1` for unlimited, or `null` to not change.
PidsLimit int64 // Setting pids limit for a container
Ulimits []*units.Ulimit // List of ulimits to be set in the container
// Applicable to Windows
@@ -403,7 +369,6 @@ type HostConfig struct {
// Applicable to UNIX platforms
CapAdd strslice.StrSlice // List of kernel capabilities to add to the container
CapDrop strslice.StrSlice // List of kernel capabilities to remove from the container
CgroupnsMode CgroupnsMode // Cgroup namespace mode to use for the container
DNS []string `json:"Dns"` // List of DNS server to lookup
DNSOptions []string `json:"DnsOptions"` // List of DNSOption to look for
DNSSearch []string `json:"DnsSearch"` // List of DNSSearch to look for
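
Among the HostConfig changes above, PidsLimit becomes *int64. The pointer encodes three states a plain int64 cannot: nil (leave the current limit untouched on update), 0 or -1 (unlimited), and a positive cap. A minimal sketch of interpreting it:

package main

import "fmt"

func describePidsLimit(p *int64) string {
	switch {
	case p == nil:
		return "unchanged" // field omitted: keep the container's current limit
	case *p <= 0:
		return "unlimited" // 0 or -1 disables the pids cgroup limit
	default:
		return fmt.Sprintf("capped at %d", *p)
	}
}

func main() {
	limit := int64(100)
	fmt.Println(describePidsLimit(nil), describePidsLimit(&limit))
}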

View File

@@ -1,4 +1,3 @@
//go:build !windows
// +build !windows
package container // import "github.com/docker/docker/api/types/container"

View File

@@ -1,6 +0,0 @@
package types
// Error returns the error message
func (e ErrorResponse) Error() string {
return e.Message
}

View File

@@ -1,8 +1,6 @@
package events // import "github.com/docker/docker/api/types/events"
const (
// BuilderEventType is the event type that the builder generates
BuilderEventType = "builder"
// ContainerEventType is the event type that containers generate
ContainerEventType = "container"
// DaemonEventType is the event type that daemon generate

View File

@@ -1,11 +1,11 @@
/*
Package filters provides tools for encoding a mapping of keys to a set of
/*Package filters provides tools for encoding a mapping of keys to a set of
multiple values.
*/
package filters // import "github.com/docker/docker/api/types/filters"
import (
"encoding/json"
"errors"
"regexp"
"strings"
@@ -37,19 +37,45 @@ func NewArgs(initialArgs ...KeyValuePair) Args {
return args
}
// Keys returns all the keys in list of Args
func (args Args) Keys() []string {
keys := make([]string, 0, len(args.fields))
for k := range args.fields {
keys = append(keys, k)
// ParseFlag parses a key=value string and adds it to an Args.
//
// Deprecated: Use Args.Add()
func ParseFlag(arg string, prev Args) (Args, error) {
filters := prev
if len(arg) == 0 {
return filters, nil
}
return keys
if !strings.Contains(arg, "=") {
return filters, ErrBadFormat
}
f := strings.SplitN(arg, "=", 2)
name := strings.ToLower(strings.TrimSpace(f[0]))
value := strings.TrimSpace(f[1])
filters.Add(name, value)
return filters, nil
}
// ErrBadFormat is an error returned when a filter is not in the form key=value
//
// Deprecated: this error will be removed in a future version
var ErrBadFormat = errors.New("bad format of filter (expected name=value)")
// ToParam encodes the Args as args JSON encoded string
//
// Deprecated: use ToJSON
func ToParam(a Args) (string, error) {
return ToJSON(a)
}
// MarshalJSON returns a JSON byte representation of the Args
func (args Args) MarshalJSON() ([]byte, error) {
if len(args.fields) == 0 {
return []byte("{}"), nil
return []byte{}, nil
}
return json.Marshal(args.fields)
}
@@ -67,7 +93,7 @@ func ToJSON(a Args) (string, error) {
// then the encoded format will use an older legacy format where the values are a
// list of strings, instead of a set.
//
// Deprecated: do not use in any new code; use ToJSON instead
// Deprecated: Use ToJSON
func ToParamWithVersion(version string, a Args) (string, error) {
if a.Len() == 0 {
return "", nil
@@ -81,6 +107,13 @@ func ToParamWithVersion(version string, a Args) (string, error) {
return ToJSON(a)
}
// FromParam decodes a JSON encoded string into Args
//
// Deprecated: use FromJSON
func FromParam(p string) (Args, error) {
return FromJSON(p)
}
// FromJSON decodes a JSON encoded string into Args
func FromJSON(p string) (Args, error) {
args := NewArgs()
@@ -107,6 +140,9 @@ func FromJSON(p string) (Args, error) {
// UnmarshalJSON populates the Args from JSON encode bytes
func (args Args) UnmarshalJSON(raw []byte) error {
if len(raw) == 0 {
return nil
}
return json.Unmarshal(raw, &args.fields)
}
@@ -152,7 +188,7 @@ func (args Args) Len() int {
func (args Args) MatchKVList(key string, sources map[string]string) bool {
fieldValues := args.fields[key]
// do not filter if there is no filter set or cannot determine filter
//do not filter if there is no filter set or cannot determine filter
if len(fieldValues) == 0 {
return true
}
@@ -198,7 +234,7 @@ func (args Args) Match(field, source string) bool {
// ExactMatch returns true if the source matches exactly one of the values.
func (args Args) ExactMatch(key, source string) bool {
fieldValues, ok := args.fields[key]
// do not filter if there is no filter set or cannot determine filter
//do not filter if there is no filter set or cannot determine filter
if !ok || len(fieldValues) == 0 {
return true
}
@@ -211,7 +247,7 @@ func (args Args) ExactMatch(key, source string) bool {
// matches exactly the value.
func (args Args) UniqueExactMatch(key, source string) bool {
fieldValues := args.fields[key]
// do not filter if there is no filter set or cannot determine filter
//do not filter if there is no filter set or cannot determine filter
if len(fieldValues) == 0 {
return true
}
@@ -239,6 +275,14 @@ func (args Args) FuzzyMatch(key, source string) bool {
return false
}
// Include returns true if the key exists in the mapping
//
// Deprecated: use Contains
func (args Args) Include(field string) bool {
_, ok := args.fields[field]
return ok
}
// Contains returns true if the key exists in the mapping
func (args Args) Contains(field string) bool {
_, ok := args.fields[field]
@@ -279,22 +323,6 @@ func (args Args) WalkValues(field string, op func(value string) error) error {
return nil
}
// Clone returns a copy of args.
func (args Args) Clone() (newArgs Args) {
newArgs.fields = make(map[string]map[string]bool, len(args.fields))
for k, m := range args.fields {
var mm map[string]bool
if m != nil {
mm = make(map[string]bool, len(m))
for kk, v := range m {
mm[kk] = v
}
}
newArgs.fields[k] = mm
}
return newArgs
}
func deprecatedArgs(d map[string][]string) map[string]map[string]bool {
m := map[string]map[string]bool{}
for k, v := range d {
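
The filters diff above adds Keys and Clone to Args and makes the empty Args marshal as "{}" so the encoded form is always valid JSON. A minimal usage sketch of the current filters API, assuming github.com/docker/docker/api/types/filters:

package main

import (
	"fmt"

	"github.com/docker/docker/api/types/filters"
)

func main() {
	args := filters.NewArgs(filters.Arg("label", "env=prod"))
	args.Add("status", "running")

	s, err := filters.ToJSON(args)
	if err != nil {
		panic(err)
	}
	fmt.Println(s) // {"label":{"env=prod":true},"status":{"running":true}}

	// Clone is a deep copy: mutating it leaves the original untouched.
	copied := args.Clone()
	copied.Add("status", "paused")
	fmt.Println(args.ExactMatch("status", "paused")) // false
}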

View File

@@ -1,31 +1,44 @@
package filters // import "github.com/docker/docker/api/types/filters"
import (
"encoding/json"
"errors"
"testing"
"gotest.tools/v3/assert"
is "gotest.tools/v3/assert/cmp"
"gotest.tools/assert"
is "gotest.tools/assert/cmp"
)
func TestMarshalJSON(t *testing.T) {
fields := map[string]map[string]bool{
"created": {"today": true},
"image.name": {"ubuntu*": true, "*untu": true},
func TestParseArgs(t *testing.T) {
// equivalent of `docker ps -f 'created=today' -f 'image.name=ubuntu*' -f 'image.name=*untu'`
flagArgs := []string{
"created=today",
"image.name=ubuntu*",
"image.name=*untu",
}
a := Args{fields: fields}
var (
args = NewArgs()
err error
)
_, err := a.MarshalJSON()
if err != nil {
t.Errorf("failed to marshal the filters: %s", err)
for i := range flagArgs {
args, err = ParseFlag(flagArgs[i], args)
assert.NilError(t, err)
}
assert.Check(t, is.Len(args.Get("created"), 1))
assert.Check(t, is.Len(args.Get("image.name"), 2))
}
func TestMarshalJSONWithEmpty(t *testing.T) {
_, err := json.Marshal(NewArgs())
func TestParseArgsEdgeCase(t *testing.T) {
var args Args
args, err := ParseFlag("", args)
if err != nil {
t.Errorf("failed to marshal the filters: %s", err)
t.Fatal(err)
}
if args.Len() != 0 {
t.Fatalf("Expected an empty Args (map), got %v", args)
}
if args, err = ParseFlag("anything", args); err == nil || err != ErrBadFormat {
t.Fatalf("Expected ErrBadFormat, got %v", err)
}
}
@@ -334,6 +347,17 @@ func TestContains(t *testing.T) {
}
}
func TestInclude(t *testing.T) {
f := NewArgs()
if f.Include("status") {
t.Fatal("Expected to not include a status key, got true")
}
f.Add("status", "running")
if !f.Include("status") {
t.Fatal("Expected to include a status key, got false")
}
}
func TestValidate(t *testing.T) {
f := NewArgs()
f.Add("status", "running")
@@ -358,17 +382,14 @@ func TestWalkValues(t *testing.T) {
f.Add("status", "running")
f.Add("status", "paused")
err := f.WalkValues("status", func(value string) error {
f.WalkValues("status", func(value string) error {
if value != "running" && value != "paused" {
t.Fatalf("Unexpected value %s", value)
}
return nil
})
if err != nil {
t.Fatalf("Expected no error, got %v", err)
}
err = f.WalkValues("status", func(value string) error {
err := f.WalkValues("status", func(value string) error {
return errors.New("return")
})
if err == nil {
@@ -400,11 +421,3 @@ func TestFuzzyMatch(t *testing.T) {
}
}
}
func TestClone(t *testing.T) {
f := NewArgs()
f.Add("foo", "bar")
f2 := f.Clone()
f2.Add("baz", "qux")
assert.Check(t, is.Len(f.Get("baz"), 0))
}

View File

@@ -1,7 +1,8 @@
package image // import "github.com/docker/docker/api/types/image"
package image
// ----------------------------------------------------------------------------
// Code generated by `swagger generate operation`. DO NOT EDIT.
// DO NOT EDIT THIS FILE
// This file was generated by `swagger generate operation`
//
// See hack/generate-swagger-api.sh
// ----------------------------------------------------------------------------

View File

@@ -79,8 +79,7 @@ const (
// BindOptions defines options specific to mounts of type "bind".
type BindOptions struct {
Propagation Propagation `json:",omitempty"`
NonRecursive bool `json:",omitempty"`
Propagation Propagation `json:",omitempty"`
}
// VolumeOptions represents the options for a mount of type volume.
@@ -113,7 +112,7 @@ type TmpfsOptions struct {
// TODO(stevvooe): There are several more tmpfs flags, specified in the
// daemon, that are accepted. Only the most basic are added for now.
//
// From https://github.com/moby/sys/blob/mount/v0.1.1/mount/flags.go#L47-L56
// From docker/docker/pkg/mount/flags.go:
//
// var validFlags = map[string]bool{
// "": true,

View File

@@ -1,7 +1,4 @@
package network // import "github.com/docker/docker/api/types/network"
import (
"github.com/docker/docker/api/types/filters"
)
// Address represents an IP address
type Address struct {
@@ -12,7 +9,7 @@ type Address struct {
// IPAM represents IP Address Management
type IPAM struct {
Driver string
Options map[string]string // Per network IPAM driver options
Options map[string]string //Per network IPAM driver options
Config []IPAMConfig
}
@@ -109,18 +106,3 @@ type NetworkingConfig struct {
type ConfigReference struct {
Network string
}
var acceptedFilters = map[string]bool{
"dangling": true,
"driver": true,
"id": true,
"label": true,
"name": true,
"scope": true,
"type": true,
}
// ValidateFilters validates the list of filter args with the available filters.
func ValidateFilters(filter filters.Args) error {
return filter.Validate(acceptedFilters)
}
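
The ValidateFilters helper above whitelists the filter keys the network list endpoint accepts before any filtering happens. A minimal sketch of the same pattern using filters.Args.Validate directly (the accepted set is illustrative):

package main

import (
	"fmt"

	"github.com/docker/docker/api/types/filters"
)

func main() {
	accepted := map[string]bool{"driver": true, "name": true, "label": true}

	args := filters.NewArgs(filters.Arg("name", "bridge"))
	fmt.Println(args.Validate(accepted)) // <nil>: every key is whitelisted

	args.Add("bogus", "x")
	fmt.Println(args.Validate(accepted)) // error naming the unaccepted key
}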

View File

@@ -10,7 +10,6 @@
It has these top-level messages:
LogEntry
PartialLogEntryMetadata
*/
package logdriver
@@ -32,11 +31,10 @@ var _ = math.Inf
const _ = proto.GoGoProtoPackageIsVersion2 // please upgrade the proto package
type LogEntry struct {
Source string `protobuf:"bytes,1,opt,name=source,proto3" json:"source,omitempty"`
TimeNano int64 `protobuf:"varint,2,opt,name=time_nano,json=timeNano,proto3" json:"time_nano,omitempty"`
Line []byte `protobuf:"bytes,3,opt,name=line,proto3" json:"line,omitempty"`
Partial bool `protobuf:"varint,4,opt,name=partial,proto3" json:"partial,omitempty"`
PartialLogMetadata *PartialLogEntryMetadata `protobuf:"bytes,5,opt,name=partial_log_metadata,json=partialLogMetadata" json:"partial_log_metadata,omitempty"`
Source string `protobuf:"bytes,1,opt,name=source,proto3" json:"source,omitempty"`
TimeNano int64 `protobuf:"varint,2,opt,name=time_nano,json=timeNano,proto3" json:"time_nano,omitempty"`
Line []byte `protobuf:"bytes,3,opt,name=line,proto3" json:"line,omitempty"`
Partial bool `protobuf:"varint,4,opt,name=partial,proto3" json:"partial,omitempty"`
}
func (m *LogEntry) Reset() { *m = LogEntry{} }
@@ -72,48 +70,8 @@ func (m *LogEntry) GetPartial() bool {
return false
}
func (m *LogEntry) GetPartialLogMetadata() *PartialLogEntryMetadata {
if m != nil {
return m.PartialLogMetadata
}
return nil
}
type PartialLogEntryMetadata struct {
Last bool `protobuf:"varint,1,opt,name=last,proto3" json:"last,omitempty"`
Id string `protobuf:"bytes,2,opt,name=id,proto3" json:"id,omitempty"`
Ordinal int32 `protobuf:"varint,3,opt,name=ordinal,proto3" json:"ordinal,omitempty"`
}
func (m *PartialLogEntryMetadata) Reset() { *m = PartialLogEntryMetadata{} }
func (m *PartialLogEntryMetadata) String() string { return proto.CompactTextString(m) }
func (*PartialLogEntryMetadata) ProtoMessage() {}
func (*PartialLogEntryMetadata) Descriptor() ([]byte, []int) { return fileDescriptorEntry, []int{1} }
func (m *PartialLogEntryMetadata) GetLast() bool {
if m != nil {
return m.Last
}
return false
}
func (m *PartialLogEntryMetadata) GetId() string {
if m != nil {
return m.Id
}
return ""
}
func (m *PartialLogEntryMetadata) GetOrdinal() int32 {
if m != nil {
return m.Ordinal
}
return 0
}
func init() {
proto.RegisterType((*LogEntry)(nil), "LogEntry")
proto.RegisterType((*PartialLogEntryMetadata)(nil), "PartialLogEntryMetadata")
}
func (m *LogEntry) Marshal() (dAtA []byte, err error) {
size := m.Size()
@@ -157,55 +115,6 @@ func (m *LogEntry) MarshalTo(dAtA []byte) (int, error) {
}
i++
}
if m.PartialLogMetadata != nil {
dAtA[i] = 0x2a
i++
i = encodeVarintEntry(dAtA, i, uint64(m.PartialLogMetadata.Size()))
n1, err := m.PartialLogMetadata.MarshalTo(dAtA[i:])
if err != nil {
return 0, err
}
i += n1
}
return i, nil
}
func (m *PartialLogEntryMetadata) Marshal() (dAtA []byte, err error) {
size := m.Size()
dAtA = make([]byte, size)
n, err := m.MarshalTo(dAtA)
if err != nil {
return nil, err
}
return dAtA[:n], nil
}
func (m *PartialLogEntryMetadata) MarshalTo(dAtA []byte) (int, error) {
var i int
_ = i
var l int
_ = l
if m.Last {
dAtA[i] = 0x8
i++
if m.Last {
dAtA[i] = 1
} else {
dAtA[i] = 0
}
i++
}
if len(m.Id) > 0 {
dAtA[i] = 0x12
i++
i = encodeVarintEntry(dAtA, i, uint64(len(m.Id)))
i += copy(dAtA[i:], m.Id)
}
if m.Ordinal != 0 {
dAtA[i] = 0x18
i++
i = encodeVarintEntry(dAtA, i, uint64(m.Ordinal))
}
return i, nil
}
@@ -253,26 +162,6 @@ func (m *LogEntry) Size() (n int) {
if m.Partial {
n += 2
}
if m.PartialLogMetadata != nil {
l = m.PartialLogMetadata.Size()
n += 1 + l + sovEntry(uint64(l))
}
return n
}
func (m *PartialLogEntryMetadata) Size() (n int) {
var l int
_ = l
if m.Last {
n += 2
}
l = len(m.Id)
if l > 0 {
n += 1 + l + sovEntry(uint64(l))
}
if m.Ordinal != 0 {
n += 1 + sovEntry(uint64(m.Ordinal))
}
return n
}
@@ -417,157 +306,6 @@ func (m *LogEntry) Unmarshal(dAtA []byte) error {
}
}
m.Partial = bool(v != 0)
case 5:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field PartialLogMetadata", wireType)
}
var msglen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowEntry
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
msglen |= (int(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
if msglen < 0 {
return ErrInvalidLengthEntry
}
postIndex := iNdEx + msglen
if postIndex > l {
return io.ErrUnexpectedEOF
}
if m.PartialLogMetadata == nil {
m.PartialLogMetadata = &PartialLogEntryMetadata{}
}
if err := m.PartialLogMetadata.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
return err
}
iNdEx = postIndex
default:
iNdEx = preIndex
skippy, err := skipEntry(dAtA[iNdEx:])
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthEntry
}
if (iNdEx + skippy) > l {
return io.ErrUnexpectedEOF
}
iNdEx += skippy
}
}
if iNdEx > l {
return io.ErrUnexpectedEOF
}
return nil
}
func (m *PartialLogEntryMetadata) Unmarshal(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
for iNdEx < l {
preIndex := iNdEx
var wire uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowEntry
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
wire |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
fieldNum := int32(wire >> 3)
wireType := int(wire & 0x7)
if wireType == 4 {
return fmt.Errorf("proto: PartialLogEntryMetadata: wiretype end group for non-group")
}
if fieldNum <= 0 {
return fmt.Errorf("proto: PartialLogEntryMetadata: illegal tag %d (wire type %d)", fieldNum, wire)
}
switch fieldNum {
case 1:
if wireType != 0 {
return fmt.Errorf("proto: wrong wireType = %d for field Last", wireType)
}
var v int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowEntry
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
v |= (int(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
m.Last = bool(v != 0)
case 2:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field Id", wireType)
}
var stringLen uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowEntry
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
intStringLen := int(stringLen)
if intStringLen < 0 {
return ErrInvalidLengthEntry
}
postIndex := iNdEx + intStringLen
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.Id = string(dAtA[iNdEx:postIndex])
iNdEx = postIndex
case 3:
if wireType != 0 {
return fmt.Errorf("proto: wrong wireType = %d for field Ordinal", wireType)
}
m.Ordinal = 0
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowEntry
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
m.Ordinal |= (int32(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
default:
iNdEx = preIndex
skippy, err := skipEntry(dAtA[iNdEx:])
@@ -697,20 +435,15 @@ var (
func init() { proto.RegisterFile("entry.proto", fileDescriptorEntry) }
var fileDescriptorEntry = []byte{
// 237 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0x74, 0x90, 0xbd, 0x4a, 0x04, 0x31,
0x14, 0x85, 0xb9, 0xb3, 0x3f, 0xce, 0xdc, 0x5d, 0x2c, 0x82, 0x68, 0x40, 0x18, 0xc2, 0x56, 0xa9,
0xb6, 0xd0, 0x37, 0x10, 0x6c, 0x44, 0x45, 0xd2, 0x58, 0x0e, 0x57, 0x27, 0x2c, 0x81, 0xd9, 0xdc,
0x21, 0x13, 0x0b, 0x1f, 0xcd, 0x37, 0xb0, 0xf4, 0x11, 0x64, 0x9e, 0x44, 0x26, 0x4e, 0xec, 0xec,
0xce, 0x39, 0x5f, 0x8a, 0x2f, 0x17, 0x37, 0xd6, 0xc7, 0xf0, 0xbe, 0xef, 0x03, 0x47, 0xde, 0x7d,
0x00, 0x96, 0xf7, 0x7c, 0xb8, 0x9d, 0x26, 0x71, 0x8e, 0xeb, 0x81, 0xdf, 0xc2, 0xab, 0x95, 0xa0,
0x40, 0x57, 0x66, 0x6e, 0xe2, 0x12, 0xab, 0xe8, 0x8e, 0xb6, 0xf1, 0xe4, 0x59, 0x16, 0x0a, 0xf4,
0xc2, 0x94, 0xd3, 0xf0, 0x48, 0x9e, 0x85, 0xc0, 0x65, 0xe7, 0xbc, 0x95, 0x0b, 0x05, 0x7a, 0x6b,
0x52, 0x16, 0x12, 0x4f, 0x7a, 0x0a, 0xd1, 0x51, 0x27, 0x97, 0x0a, 0x74, 0x69, 0x72, 0x15, 0x77,
0x78, 0x36, 0xc7, 0xa6, 0xe3, 0x43, 0x73, 0xb4, 0x91, 0x5a, 0x8a, 0x24, 0x57, 0x0a, 0xf4, 0xe6,
0x4a, 0xee, 0x9f, 0x7e, 0x61, 0x56, 0x7a, 0x98, 0xb9, 0x11, 0xfd, 0x1f, 0xc8, 0xdb, 0xee, 0x19,
0x2f, 0xfe, 0x79, 0x9e, 0xa4, 0x68, 0x88, 0xe9, 0x1f, 0xa5, 0x49, 0x59, 0x9c, 0x62, 0xe1, 0xda,
0xa4, 0x5f, 0x99, 0xc2, 0xb5, 0x93, 0x24, 0x87, 0xd6, 0x79, 0xea, 0x92, 0xfb, 0xca, 0xe4, 0x7a,
0xb3, 0xfd, 0x1c, 0x6b, 0xf8, 0x1a, 0x6b, 0xf8, 0x1e, 0x6b, 0x78, 0x59, 0xa7, 0x4b, 0x5d, 0xff,
0x04, 0x00, 0x00, 0xff, 0xff, 0x8f, 0xed, 0x9f, 0xb6, 0x38, 0x01, 0x00, 0x00,
// 149 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0xe2, 0xe2, 0x4e, 0xcd, 0x2b, 0x29,
0xaa, 0xd4, 0x2b, 0x28, 0xca, 0x2f, 0xc9, 0x57, 0xca, 0xe5, 0xe2, 0xf0, 0xc9, 0x4f, 0x77, 0x05,
0x89, 0x08, 0x89, 0x71, 0xb1, 0x15, 0xe7, 0x97, 0x16, 0x25, 0xa7, 0x4a, 0x30, 0x2a, 0x30, 0x6a,
0x70, 0x06, 0x41, 0x79, 0x42, 0xd2, 0x5c, 0x9c, 0x25, 0x99, 0xb9, 0xa9, 0xf1, 0x79, 0x89, 0x79,
0xf9, 0x12, 0x4c, 0x0a, 0x8c, 0x1a, 0xcc, 0x41, 0x1c, 0x20, 0x01, 0xbf, 0xc4, 0xbc, 0x7c, 0x21,
0x21, 0x2e, 0x96, 0x9c, 0xcc, 0xbc, 0x54, 0x09, 0x66, 0x05, 0x46, 0x0d, 0x9e, 0x20, 0x30, 0x5b,
0x48, 0x82, 0x8b, 0xbd, 0x20, 0xb1, 0xa8, 0x24, 0x33, 0x31, 0x47, 0x82, 0x45, 0x81, 0x51, 0x83,
0x23, 0x08, 0xc6, 0x75, 0xe2, 0x39, 0xf1, 0x48, 0x8e, 0xf1, 0xc2, 0x23, 0x39, 0xc6, 0x07, 0x8f,
0xe4, 0x18, 0x93, 0xd8, 0xc0, 0x6e, 0x30, 0x06, 0x04, 0x00, 0x00, 0xff, 0xff, 0x2d, 0x24, 0x5a,
0xd4, 0x92, 0x00, 0x00, 0x00,
}

View File

@@ -5,12 +5,4 @@ message LogEntry {
int64 time_nano = 2;
bytes line = 3;
bool partial = 4;
PartialLogEntryMetadata partial_log_metadata = 5;
}
message PartialLogEntryMetadata {
bool last = 1;
string id = 2;
int32 ordinal = 3;
}
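
The hand-rolled Marshal/Unmarshal code in entry.pb.go above is built on protobuf varints: seven payload bits per byte, with the high bit set on every byte except the last. A minimal sketch of the codec those shift loops implement, similar in spirit to encoding/binary's Uvarint but without its overflow checks:

package main

import "fmt"

func putUvarint(buf []byte, v uint64) int {
	i := 0
	for v >= 0x80 {
		buf[i] = byte(v) | 0x80 // low 7 bits, continuation bit set
		v >>= 7
		i++
	}
	buf[i] = byte(v) // final byte: continuation bit clear
	return i + 1
}

func uvarint(buf []byte) (v uint64, n int) {
	for i, b := range buf {
		v |= uint64(b&0x7F) << (7 * uint(i))
		if b < 0x80 {
			return v, i + 1
		}
	}
	return 0, 0 // truncated input
}

func main() {
	buf := make([]byte, 10)
	n := putUvarint(buf, 300)
	fmt.Println(buf[:n]) // [172 2]
	fmt.Println(uvarint(buf[:n]))
}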

View File

@@ -4,7 +4,7 @@ import (
"encoding/json"
"net"
v1 "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/opencontainers/image-spec/specs-go/v1"
)
// ServiceConfig stores daemon registry services configuration.
@@ -45,32 +45,31 @@ func (ipnet *NetIPNet) UnmarshalJSON(b []byte) (err error) {
// IndexInfo contains information about a registry
//
// RepositoryInfo Examples:
// {
// "Index" : {
// "Name" : "docker.io",
// "Mirrors" : ["https://registry-2.docker.io/v1/", "https://registry-3.docker.io/v1/"],
// "Secure" : true,
// "Official" : true,
// },
// "RemoteName" : "library/debian",
// "LocalName" : "debian",
// "CanonicalName" : "docker.io/debian"
// "Official" : true,
// }
//
// {
// "Index" : {
// "Name" : "docker.io",
// "Mirrors" : ["https://registry-2.docker.io/v1/", "https://registry-3.docker.io/v1/"],
// "Secure" : true,
// "Official" : true,
// },
// "RemoteName" : "library/debian",
// "LocalName" : "debian",
// "CanonicalName" : "docker.io/debian"
// "Official" : true,
// }
//
// {
// "Index" : {
// "Name" : "127.0.0.1:5000",
// "Mirrors" : [],
// "Secure" : false,
// "Official" : false,
// },
// "RemoteName" : "user/repo",
// "LocalName" : "127.0.0.1:5000/user/repo",
// "CanonicalName" : "127.0.0.1:5000/user/repo",
// "Official" : false,
// }
// {
// "Index" : {
// "Name" : "127.0.0.1:5000",
// "Mirrors" : [],
// "Secure" : false,
// "Official" : false,
// },
// "RemoteName" : "user/repo",
// "LocalName" : "127.0.0.1:5000/user/repo",
// "CanonicalName" : "127.0.0.1:5000/user/repo",
// "Official" : false,
// }
type IndexInfo struct {
// Name is the name of the registry, such as "docker.io"
Name string

api/types/seccomp.go (new file, +93 lines)
View File

@@ -0,0 +1,93 @@
package types // import "github.com/docker/docker/api/types"
// Seccomp represents the config for a seccomp profile for syscall restriction.
type Seccomp struct {
DefaultAction Action `json:"defaultAction"`
// Architectures is kept to maintain backward compatibility with the old
// seccomp profile.
Architectures []Arch `json:"architectures,omitempty"`
ArchMap []Architecture `json:"archMap,omitempty"`
Syscalls []*Syscall `json:"syscalls"`
}
// Architecture is used to represent a specific architecture
// and its sub-architectures
type Architecture struct {
Arch Arch `json:"architecture"`
SubArches []Arch `json:"subArchitectures"`
}
// Arch used for architectures
type Arch string
// Additional architectures permitted to be used for system calls
// By default only the native architecture of the kernel is permitted
const (
ArchX86 Arch = "SCMP_ARCH_X86"
ArchX86_64 Arch = "SCMP_ARCH_X86_64"
ArchX32 Arch = "SCMP_ARCH_X32"
ArchARM Arch = "SCMP_ARCH_ARM"
ArchAARCH64 Arch = "SCMP_ARCH_AARCH64"
ArchMIPS Arch = "SCMP_ARCH_MIPS"
ArchMIPS64 Arch = "SCMP_ARCH_MIPS64"
ArchMIPS64N32 Arch = "SCMP_ARCH_MIPS64N32"
ArchMIPSEL Arch = "SCMP_ARCH_MIPSEL"
ArchMIPSEL64 Arch = "SCMP_ARCH_MIPSEL64"
ArchMIPSEL64N32 Arch = "SCMP_ARCH_MIPSEL64N32"
ArchPPC Arch = "SCMP_ARCH_PPC"
ArchPPC64 Arch = "SCMP_ARCH_PPC64"
ArchPPC64LE Arch = "SCMP_ARCH_PPC64LE"
ArchS390 Arch = "SCMP_ARCH_S390"
ArchS390X Arch = "SCMP_ARCH_S390X"
)
// Action taken upon Seccomp rule match
type Action string
// Define actions for Seccomp rules
const (
ActKill Action = "SCMP_ACT_KILL"
ActTrap Action = "SCMP_ACT_TRAP"
ActErrno Action = "SCMP_ACT_ERRNO"
ActTrace Action = "SCMP_ACT_TRACE"
ActAllow Action = "SCMP_ACT_ALLOW"
)
// Operator used to match syscall arguments in Seccomp
type Operator string
// Define operators for syscall arguments in Seccomp
const (
OpNotEqual Operator = "SCMP_CMP_NE"
OpLessThan Operator = "SCMP_CMP_LT"
OpLessEqual Operator = "SCMP_CMP_LE"
OpEqualTo Operator = "SCMP_CMP_EQ"
OpGreaterEqual Operator = "SCMP_CMP_GE"
OpGreaterThan Operator = "SCMP_CMP_GT"
OpMaskedEqual Operator = "SCMP_CMP_MASKED_EQ"
)
// Arg used for matching specific syscall arguments in Seccomp
type Arg struct {
Index uint `json:"index"`
Value uint64 `json:"value"`
ValueTwo uint64 `json:"valueTwo"`
Op Operator `json:"op"`
}
// Filter is used to conditionally apply Seccomp rules
type Filter struct {
Caps []string `json:"caps,omitempty"`
Arches []string `json:"arches,omitempty"`
}
// Syscall is used to match a group of syscalls in Seccomp
type Syscall struct {
Name string `json:"name,omitempty"`
Names []string `json:"names,omitempty"`
Action Action `json:"action"`
Args []*Arg `json:"args"`
Comment string `json:"comment"`
Includes Filter `json:"includes"`
Excludes Filter `json:"excludes"`
}
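
The new seccomp.go above defines the JSON shape of the daemon's seccomp profiles. A minimal sketch of composing and serialising one with these types; the syscall list is illustrative only, nowhere near a usable profile:

package main

import (
	"encoding/json"
	"fmt"

	"github.com/docker/docker/api/types"
)

func main() {
	profile := types.Seccomp{
		DefaultAction: types.ActErrno, // deny everything not matched below
		Architectures: []types.Arch{types.ArchX86_64, types.ArchX32},
		Syscalls: []*types.Syscall{{
			Names:  []string{"read", "write", "exit_group"},
			Action: types.ActAllow,
		}},
	}
	data, err := json.MarshalIndent(profile, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(data)) // usable as: docker run --security-opt seccomp=profile.json
}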

View File

@@ -120,7 +120,7 @@ type NetworkStats struct {
RxBytes uint64 `json:"rx_bytes"`
// Packets received. Windows and Linux.
RxPackets uint64 `json:"rx_packets"`
// Received errors. Not used on Windows. Note that we don't `omitempty` this
// Received errors. Not used on Windows. Note that we dont `omitempty` this
// field as it is expected in the >=v1.21 API stats structure.
RxErrors uint64 `json:"rx_errors"`
// Incoming packets dropped. Windows and Linux.
@@ -129,7 +129,7 @@ type NetworkStats struct {
TxBytes uint64 `json:"tx_bytes"`
// Packets sent. Windows and Linux.
TxPackets uint64 `json:"tx_packets"`
// Sent errors. Not used on Windows. Note that we don't `omitempty` this
// Sent errors. Not used on Windows. Note that we dont `omitempty` this
// field as it is expected in the >=v1.21 API stats structure.
TxErrors uint64 `json:"tx_errors"`
// Outgoing packets dropped. Windows and Linux.

View File

@@ -29,8 +29,8 @@ func TestStrSliceMarshalJSON(t *testing.T) {
func TestStrSliceUnmarshalJSON(t *testing.T) {
parts := map[string][]string{
"": {"default", "values"},
"[]": {},
"": {"default", "values"},
"[]": {},
`["/bin/sh","-c","echo"]`: {"/bin/sh", "-c", "echo"},
}
for json, expectedParts := range parts {

View File

@@ -27,14 +27,9 @@ type ConfigReferenceFileTarget struct {
Mode os.FileMode
}
// ConfigReferenceRuntimeTarget is a target for a config specifying that it
// isn't mounted into the container but instead has some other purpose.
type ConfigReferenceRuntimeTarget struct{}
// ConfigReference is a reference to a config in swarm
type ConfigReference struct {
File *ConfigReferenceFileTarget `json:",omitempty"`
Runtime *ConfigReferenceRuntimeTarget `json:",omitempty"`
File *ConfigReferenceFileTarget
ConfigID string
ConfigName string
}

View File

@@ -5,7 +5,6 @@ import (
"github.com/docker/docker/api/types/container"
"github.com/docker/docker/api/types/mount"
"github.com/docker/go-units"
)
// DNSConfig specifies DNS related configurations in resolver configuration file (resolv.conf)
@@ -34,7 +33,6 @@ type SELinuxContext struct {
// CredentialSpec for managed service account (Windows only)
type CredentialSpec struct {
Config string
File string
Registry string
}
@@ -68,13 +66,9 @@ type ContainerSpec struct {
// The format of extra hosts on swarmkit is specified in:
// http://man7.org/linux/man-pages/man5/hosts.5.html
// IP_address canonical_hostname [aliases...]
Hosts []string `json:",omitempty"`
DNSConfig *DNSConfig `json:",omitempty"`
Secrets []*SecretReference `json:",omitempty"`
Configs []*ConfigReference `json:",omitempty"`
Isolation container.Isolation `json:",omitempty"`
Sysctls map[string]string `json:",omitempty"`
CapabilityAdd []string `json:",omitempty"`
CapabilityDrop []string `json:",omitempty"`
Ulimits []*units.Ulimit `json:",omitempty"`
Hosts []string `json:",omitempty"`
DNSConfig *DNSConfig `json:",omitempty"`
Secrets []*SecretReference `json:",omitempty"`
Configs []*ConfigReference `json:",omitempty"`
Isolation container.Isolation `json:",omitempty"`
}

View File

@@ -1,5 +1,6 @@
// Code generated by protoc-gen-gogo. DO NOT EDIT.
// Code generated by protoc-gen-gogo.
// source: plugin.proto
// DO NOT EDIT!
/*
Package runtime is a generated protocol buffer package.
@@ -37,7 +38,6 @@ type PluginSpec struct {
Remote string `protobuf:"bytes,2,opt,name=remote,proto3" json:"remote,omitempty"`
Privileges []*PluginPrivilege `protobuf:"bytes,3,rep,name=privileges" json:"privileges,omitempty"`
Disabled bool `protobuf:"varint,4,opt,name=disabled,proto3" json:"disabled,omitempty"`
Env []string `protobuf:"bytes,5,rep,name=env" json:"env,omitempty"`
}
func (m *PluginSpec) Reset() { *m = PluginSpec{} }
@@ -73,13 +73,6 @@ func (m *PluginSpec) GetDisabled() bool {
return false
}
func (m *PluginSpec) GetEnv() []string {
if m != nil {
return m.Env
}
return nil
}
// PluginPrivilege describes a permission the user has to accept
// upon installing a plugin.
type PluginPrivilege struct {
@@ -167,21 +160,6 @@ func (m *PluginSpec) MarshalTo(dAtA []byte) (int, error) {
}
i++
}
if len(m.Env) > 0 {
for _, s := range m.Env {
dAtA[i] = 0x2a
i++
l = len(s)
for l >= 1<<7 {
dAtA[i] = uint8(uint64(l)&0x7f | 0x80)
l >>= 7
i++
}
dAtA[i] = uint8(l)
i++
i += copy(dAtA[i:], s)
}
}
return i, nil
}
@@ -230,6 +208,24 @@ func (m *PluginPrivilege) MarshalTo(dAtA []byte) (int, error) {
return i, nil
}
func encodeFixed64Plugin(dAtA []byte, offset int, v uint64) int {
dAtA[offset] = uint8(v)
dAtA[offset+1] = uint8(v >> 8)
dAtA[offset+2] = uint8(v >> 16)
dAtA[offset+3] = uint8(v >> 24)
dAtA[offset+4] = uint8(v >> 32)
dAtA[offset+5] = uint8(v >> 40)
dAtA[offset+6] = uint8(v >> 48)
dAtA[offset+7] = uint8(v >> 56)
return offset + 8
}
func encodeFixed32Plugin(dAtA []byte, offset int, v uint32) int {
dAtA[offset] = uint8(v)
dAtA[offset+1] = uint8(v >> 8)
dAtA[offset+2] = uint8(v >> 16)
dAtA[offset+3] = uint8(v >> 24)
return offset + 4
}
func encodeVarintPlugin(dAtA []byte, offset int, v uint64) int {
for v >= 1<<7 {
dAtA[offset] = uint8(v&0x7f | 0x80)
@@ -259,12 +255,6 @@ func (m *PluginSpec) Size() (n int) {
if m.Disabled {
n += 2
}
if len(m.Env) > 0 {
for _, s := range m.Env {
l = len(s)
n += 1 + l + sovPlugin(uint64(l))
}
}
return n
}
@@ -439,35 +429,6 @@ func (m *PluginSpec) Unmarshal(dAtA []byte) error {
}
}
m.Disabled = bool(v != 0)
case 5:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field Env", wireType)
}
var stringLen uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowPlugin
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
break
}
}
intStringLen := int(stringLen)
if intStringLen < 0 {
return ErrInvalidLengthPlugin
}
postIndex := iNdEx + intStringLen
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.Env = append(m.Env, string(dAtA[iNdEx:postIndex]))
iNdEx = postIndex
default:
iNdEx = preIndex
skippy, err := skipPlugin(dAtA[iNdEx:])
@@ -734,21 +695,18 @@ var (
func init() { proto.RegisterFile("plugin.proto", fileDescriptorPlugin) }
var fileDescriptorPlugin = []byte{
// 256 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x6c, 0x90, 0x4d, 0x4b, 0xc3, 0x30,
0x18, 0xc7, 0x89, 0xdd, 0xc6, 0xfa, 0x4c, 0x70, 0x04, 0x91, 0xe2, 0xa1, 0x94, 0x9d, 0x7a, 0x6a,
0x45, 0x2f, 0x82, 0x37, 0x0f, 0x9e, 0x47, 0xbc, 0x09, 0x1e, 0xd2, 0xf6, 0xa1, 0x06, 0x9b, 0x17,
0x92, 0xb4, 0xe2, 0x37, 0xf1, 0x23, 0x79, 0xf4, 0x23, 0x48, 0x3f, 0x89, 0x98, 0x75, 0x32, 0x64,
0xa7, 0xff, 0x4b, 0xc2, 0x9f, 0x1f, 0x0f, 0x9c, 0x9a, 0xae, 0x6f, 0x85, 0x2a, 0x8c, 0xd5, 0x5e,
0x6f, 0x3e, 0x08, 0xc0, 0x36, 0x14, 0x8f, 0x06, 0x6b, 0x4a, 0x61, 0xa6, 0xb8, 0xc4, 0x84, 0x64,
0x24, 0x8f, 0x59, 0xf0, 0xf4, 0x02, 0x16, 0x16, 0xa5, 0xf6, 0x98, 0x9c, 0x84, 0x76, 0x4a, 0xf4,
0x0a, 0xc0, 0x58, 0x31, 0x88, 0x0e, 0x5b, 0x74, 0x49, 0x94, 0x45, 0xf9, 0xea, 0x7a, 0x5d, 0xec,
0xc6, 0xb6, 0xfb, 0x07, 0x76, 0xf0, 0x87, 0x5e, 0xc2, 0xb2, 0x11, 0x8e, 0x57, 0x1d, 0x36, 0xc9,
0x2c, 0x23, 0xf9, 0x92, 0xfd, 0x65, 0xba, 0x86, 0x08, 0xd5, 0x90, 0xcc, 0xb3, 0x28, 0x8f, 0xd9,
0xaf, 0xdd, 0x3c, 0xc3, 0xd9, 0xbf, 0xb1, 0xa3, 0x78, 0x19, 0xac, 0x1a, 0x74, 0xb5, 0x15, 0xc6,
0x0b, 0xad, 0x26, 0xc6, 0xc3, 0x8a, 0x9e, 0xc3, 0x7c, 0xe0, 0x5d, 0x8f, 0x81, 0x31, 0x66, 0xbb,
0x70, 0xff, 0xf0, 0x39, 0xa6, 0xe4, 0x6b, 0x4c, 0xc9, 0xf7, 0x98, 0x92, 0xa7, 0xdb, 0x56, 0xf8,
0x97, 0xbe, 0x2a, 0x6a, 0x2d, 0xcb, 0x46, 0xd7, 0xaf, 0x68, 0xf7, 0xc2, 0x8d, 0x28, 0xfd, 0xbb,
0x41, 0x57, 0xba, 0x37, 0x6e, 0x65, 0x69, 0x7b, 0xe5, 0x85, 0xc4, 0xbb, 0x49, 0xab, 0x45, 0x38,
0xe4, 0xcd, 0x4f, 0x00, 0x00, 0x00, 0xff, 0xff, 0x99, 0xa8, 0xd9, 0x9b, 0x58, 0x01, 0x00, 0x00,
// 196 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0xe2, 0xe2, 0x29, 0xc8, 0x29, 0x4d,
0xcf, 0xcc, 0xd3, 0x2b, 0x28, 0xca, 0x2f, 0xc9, 0x57, 0x6a, 0x63, 0xe4, 0xe2, 0x0a, 0x00, 0x0b,
0x04, 0x17, 0xa4, 0x26, 0x0b, 0x09, 0x71, 0xb1, 0xe4, 0x25, 0xe6, 0xa6, 0x4a, 0x30, 0x2a, 0x30,
0x6a, 0x70, 0x06, 0x81, 0xd9, 0x42, 0x62, 0x5c, 0x6c, 0x45, 0xa9, 0xb9, 0xf9, 0x25, 0xa9, 0x12,
0x4c, 0x60, 0x51, 0x28, 0x4f, 0xc8, 0x80, 0x8b, 0xab, 0xa0, 0x28, 0xb3, 0x2c, 0x33, 0x27, 0x35,
0x3d, 0xb5, 0x58, 0x82, 0x59, 0x81, 0x59, 0x83, 0xdb, 0x48, 0x40, 0x0f, 0x62, 0x58, 0x00, 0x4c,
0x22, 0x08, 0x49, 0x8d, 0x90, 0x14, 0x17, 0x47, 0x4a, 0x66, 0x71, 0x62, 0x52, 0x4e, 0x6a, 0x8a,
0x04, 0x8b, 0x02, 0xa3, 0x06, 0x47, 0x10, 0x9c, 0xaf, 0x14, 0xcb, 0xc5, 0x8f, 0xa6, 0x15, 0xab,
0x63, 0x14, 0xb8, 0xb8, 0x53, 0x52, 0x8b, 0x93, 0x8b, 0x32, 0x0b, 0x4a, 0x32, 0xf3, 0xf3, 0xa0,
0x2e, 0x42, 0x16, 0x12, 0x12, 0xe1, 0x62, 0x2d, 0x4b, 0xcc, 0x29, 0x4d, 0x05, 0xbb, 0x88, 0x33,
0x08, 0xc2, 0x71, 0xe2, 0x39, 0xf1, 0x48, 0x8e, 0xf1, 0xc2, 0x23, 0x39, 0xc6, 0x07, 0x8f, 0xe4,
0x18, 0x93, 0xd8, 0xc0, 0x9e, 0x37, 0x06, 0x04, 0x00, 0x00, 0xff, 0xff, 0xb8, 0x84, 0xad, 0x79,
0x0c, 0x01, 0x00, 0x00,
}

View File

@@ -9,7 +9,6 @@ message PluginSpec {
string remote = 2;
repeated PluginPrivilege privileges = 3;
bool disabled = 4;
repeated string env = 5;
}
// PluginPrivilege describes a permission the user has to accept

View File

@@ -10,17 +10,6 @@ type Service struct {
PreviousSpec *ServiceSpec `json:",omitempty"`
Endpoint Endpoint `json:",omitempty"`
UpdateStatus *UpdateStatus `json:",omitempty"`
// ServiceStatus is an optional, extra field indicating the number of
// desired and running tasks. It is provided primarily as a shortcut to
// calculating these values client-side, which otherwise would require
// listing all tasks for a service, an operation that could be
// computation and network expensive.
ServiceStatus *ServiceStatus `json:",omitempty"`
// JobStatus is the status of a Service which is in one of ReplicatedJob or
// GlobalJob modes. It is absent on Replicated and Global services.
JobStatus *JobStatus `json:",omitempty"`
}
// ServiceSpec represents the spec of a service.
@@ -43,10 +32,8 @@ type ServiceSpec struct {
// ServiceMode represents the mode of a service.
type ServiceMode struct {
Replicated *ReplicatedService `json:",omitempty"`
Global *GlobalService `json:",omitempty"`
ReplicatedJob *ReplicatedJob `json:",omitempty"`
GlobalJob *GlobalJob `json:",omitempty"`
Replicated *ReplicatedService `json:",omitempty"`
Global *GlobalService `json:",omitempty"`
}
// UpdateState is the state of a service update.
@@ -83,32 +70,6 @@ type ReplicatedService struct {
// GlobalService is a kind of ServiceMode.
type GlobalService struct{}
// ReplicatedJob is the a type of Service which executes a defined Tasks
// in parallel until the specified number of Tasks have succeeded.
type ReplicatedJob struct {
// MaxConcurrent indicates the maximum number of Tasks that should be
// executing simultaneously for this job at any given time. There may be
// fewer Tasks that MaxConcurrent executing simultaneously; for example, if
// there are fewer than MaxConcurrent tasks needed to reach
// TotalCompletions.
//
// If this field is empty, it will default to a max concurrency of 1.
MaxConcurrent *uint64 `json:",omitempty"`
// TotalCompletions is the total number of Tasks desired to run to
// completion.
//
// If this field is empty, the value of MaxConcurrent will be used.
TotalCompletions *uint64 `json:",omitempty"`
}
// GlobalJob is the type of a Service which executes a Task on every Node
// matching the Service's placement constraints. These tasks run to completion
// and then exit.
//
// This type is deliberately empty.
type GlobalJob struct{}
const (
// UpdateFailureActionPause PAUSE
UpdateFailureActionPause = "pause"
@@ -161,42 +122,3 @@ type UpdateConfig struct {
// started, or the new task is started before the old task is shut down.
Order string
}
// ServiceStatus represents the number of running tasks in a service and the
// number of tasks desired to be running.
type ServiceStatus struct {
// RunningTasks is the number of tasks for the service actually in the
// Running state
RunningTasks uint64
// DesiredTasks is the number of tasks desired to be running by the
// service. For replicated services, this is the replica count. For global
// services, this is computed by taking the number of tasks with desired
// state of not-Shutdown.
DesiredTasks uint64
// CompletedTasks is the number of tasks in the state Completed, if this
// service is in ReplicatedJob or GlobalJob mode. This field must be
// cross-referenced with the service type, because the default value of 0
// may mean that a service is not in a job mode, or it may mean that the
// job has yet to complete any tasks.
CompletedTasks uint64
}
// JobStatus is the status of a job-type service.
type JobStatus struct {
// JobIteration is a value increased each time a Job is executed,
// successfully or otherwise. "Executed", in this case, means the job as a
// whole has been started, not that an individual Task has been launched. A
// job is "Executed" when its ServiceSpec is updated. JobIteration can be
// used to disambiguate Tasks belonging to different executions of a job.
//
// Though JobIteration will increase with each subsequent execution, it may
// not necessarily increase by 1, and so JobIteration should not be used to
// keep track of the number of times a job has been executed.
JobIteration Version
// LastExecution is the time that the job was last executed, as observed by
// Swarm manager.
LastExecution time.Time `json:",omitempty"`
}
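
The service.go diff above adds the job-style service modes (ReplicatedJob, GlobalJob) next to the long-standing Replicated and Global ones, plus the read-side ServiceStatus and JobStatus fields. A minimal sketch of declaring a replicated job with these types; the values are illustrative:

package main

import (
	"fmt"

	"github.com/docker/docker/api/types/swarm"
)

func jobSpec() swarm.ServiceSpec {
	maxConcurrent := uint64(2)
	total := uint64(10)
	return swarm.ServiceSpec{
		Mode: swarm.ServiceMode{
			ReplicatedJob: &swarm.ReplicatedJob{
				MaxConcurrent:    &maxConcurrent, // at most two tasks in flight
				TotalCompletions: &total,         // run until ten tasks have succeeded
			},
		},
	}
}

func main() {
	fmt.Printf("%+v\n", jobSpec().Mode.ReplicatedJob)
}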

View File

@@ -1,8 +1,6 @@
package swarm // import "github.com/docker/docker/api/types/swarm"
import (
"time"
)
import "time"
// ClusterInfo represents info about the cluster for outputting in "info"
// it contains the same information as "Swarm", but without the JoinTokens
@@ -12,9 +10,6 @@ type ClusterInfo struct {
Spec Spec
TLSInfo TLSInfo
RootRotationInProgress bool
DefaultAddrPool []string
SubnetSize uint32
DataPathPort uint32
}
// Swarm represents a swarm.
@@ -154,13 +149,10 @@ type InitRequest struct {
ListenAddr string
AdvertiseAddr string
DataPathAddr string
DataPathPort uint32
ForceNewCluster bool
Spec Spec
AutoLockManagers bool
Availability NodeAvailability
DefaultAddrPool []string
SubnetSize uint32
}
// JoinRequest is the request used to join a swarm.
@@ -209,8 +201,6 @@ type Info struct {
Managers int `json:",omitempty"`
Cluster *ClusterInfo `json:",omitempty"`
Warnings []string `json:",omitempty"`
}
// Peer represents a peer.

Some files were not shown because too many files have changed in this diff.