Compare commits

...

68 Commits

Author SHA1 Message Date
Andrew Hsu
cbe11bdc6d Merge pull request #156 from dave-tucker/18.06.3
[18.06] Revert git-bundles and update runc commit
2019-02-19 13:45:28 -08:00
Dave Tucker
e8ab845eb1 runc: Update runc commit to fix CVE-2019-5736
Also update runc version

Signed-off-by: Dave Tucker <dt@docker.com>
2019-02-19 17:56:28 +00:00
Dave Tucker
58e728e06d Revert "Merge pull request #239 from seemethere/bundle_me_up_1806"
This reverts commit cdb0218236, reversing
changes made to 56ad4cdec6.

Signed-off-by: Dave Tucker <dt@docker.com>
2019-02-19 17:56:13 +00:00
Andrew Hsu
cdb0218236 Merge pull request #239 from seemethere/bundle_me_up_1806
[18.06-ce] [ENGSEC-28] CVE-2019-5736 apply fix via git bundle instead of patches
2019-02-06 15:30:06 -08:00
Eli Uriegas
d212dfee1a Switch from applying patches to a git bundle
A git bundle allows us to keep the same SHA, giving us the ability to
validate our patch against a known entity and allowing us to push
directly from our private forks to public forks without having to
re-apply any patches.

Signed-off-by: Eli Uriegas <eli.uriegas@docker.com>
2019-02-06 22:13:34 +00:00
Andrew Hsu
56ad4cdec6 Merge pull request #102 from andrewhsu/grpc1806
[18.06] cluster: set bigger grpc limit for array requests
2018-10-31 13:49:10 -07:00
Tonis Tiigi
cdabbbf33c cluster: set bigger grpc limit for array requests
A 4MB client-side limit was introduced when vendoring go-grpc#1165 (v1.4.0),
making these requests likely to produce errors

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit 489b8eda66)
Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
2018-10-30 23:06:32 +00:00
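
For readers unfamiliar with the grpc-go knob involved in the commit above, here is a minimal sketch of the pattern (the constant name mirrors the cluster diff further down; the client and request in the comment are placeholders, not the exact swarmkit types):

package cluster

import (
	"math"

	"google.golang.org/grpc"
)

// math.MaxInt32 restores the effectively unlimited receive size that grpc-go
// allowed before v1.4.0 introduced the 4MB default.
const defaultRecvSizeForListResponse = math.MaxInt32

// The limit is raised per call rather than globally, for example:
//
//	resp, err := client.ListServices(ctx, req,
//		grpc.MaxCallRecvMsgSize(defaultRecvSizeForListResponse))
var _ grpc.CallOption = grpc.MaxCallRecvMsgSize(defaultRecvSizeForListResponse)
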
Sebastiaan van Stijn
320063a2ad Merge pull request #34 from thaJeztah/18.06-backport-logissue
[18.06] backport select polling based watcher for Windows log watcher
2018-08-16 10:14:46 +02:00
Sebastiaan van Stijn
29871648df Merge pull request #31 from thaJeztah/18.06-backport-jjh.37562
[18.06] backport "don't invoke HCS shutdown if terminate called"
2018-08-16 10:14:03 +02:00
Andrew Hsu
68a4625393 Merge pull request #33 from thaJeztah/18.06-update-containerd
[18.06] backport bump containerd daemon to v1.1.2
2018-08-14 15:07:14 -07:00
Tejaswini Duggaraju
a86643ba40 Select polling based watcher for Windows log watcher
Signed-off-by: Tejaswini Duggaraju <naduggar@microsoft.com>
(cherry picked from commit df84cdd091)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-08-14 14:43:52 +02:00
Sebastiaan van Stijn
460fb308ed Bump containerd daemon to v1.1.2
Updates cri version to 1.0.4, to add `max-container-log-line-size`

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 9e773a12fb)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-08-08 10:48:31 +02:00
Andrew Hsu
c4a6ecfcd5 Merge pull request #25 from thaJeztah/18.06-backport-error_when_base_name_resolved_to_blank
[18.06] Return error if basename is expanded to blank
2018-08-07 16:00:37 -07:00
Andrew Hsu
576e49dd6a Merge pull request #21 from tiborvass/18.06-vendor-buildkit
[18.06] Set BuildKit's ExportedProduct variable to show useful errors in the future
2018-08-07 11:51:29 -07:00
Andrew Hsu
5150e8235c Merge pull request #32 from thaJeztah/18.06-bump_swarmkit
[18.06] Bump SwarmKit to 8852e88
2018-08-06 20:59:01 -07:00
Andrew Hsu
b974b42acf Merge pull request #29 from thaJeztah/18.06-disable-cri
[18.06] disable containerd CRI plugin
2018-08-06 17:39:11 -07:00
Yuichiro Kaneko
226fadc9fc Return error if basename is expanded to blank
Fix: https://github.com/moby/moby/issues/37325

Signed-off-by: Yuichiro Kaneko <spiketeika@gmail.com>
(cherry picked from commit c9542d313e)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-08-06 15:50:22 +02:00
Sebastiaan van Stijn
4f8777f6af Bump SwarmKit to 8852e8840e30d69db0b39a4a3d6447362e17c64f
Relevant changes:

- swarmkit #2593 agent: return error when failing to apply network key
- swarmkit #2645 Replace deprecated grpc functions
- swarmkit #2720 Test if error is nil before to log it
- swarmkit #2712 [orchestrator] Fix task sorting
- swarmkit #2677 [manager/orchestrator/reaper] Fix the condition used for skipping over running tasks

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 660fa129c0)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-08-06 15:05:35 +02:00
John Howard
777d535c23 Don't invoke HCS shutdown if terminate called
Signed-off-by: John Howard <jhoward@microsoft.com>
(cherry picked from commit 5cfededc7c)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-08-04 01:21:32 +02:00
Sebastiaan van Stijn
3dbf955771 18.06: disable containerd CRI plugin
Docker 18.06 does not have a configuration option to
disable the CRI plugin, and this plugin is not very
useful if containerd is not running standalone.

This patch disables the plugin if containerd is running
as a child process of dockerd.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-07-31 15:32:13 +02:00
Tibor Vass
0832d1e1d4 validate: please vet
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 81599222fc)
Signed-off-by: Tibor Vass <tibor@docker.com>
2018-07-20 23:48:44 +00:00
Tibor Vass
89a8e672e0 builder: set buildkit's exported product variable via PRODUCT
This introduces a PRODUCT environment variable that is used to set a constant
at dockerversion.ProductName.

That is then used to set BuildKit's ExportedProduct variable in order to show
useful error messages to users when a certain version of the product doesn't
support a BuildKit feature.

Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 195919d9d6)
Signed-off-by: Tibor Vass <tibor@docker.com>
2018-07-20 23:48:44 +00:00
Tibor Vass
918ea06954 vendor: buildkit to 98f1604134f945d48538ffca0e18662337b4a850
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 0ab7c1c5ba)
Signed-off-by: Tibor Vass <tibor@docker.com>
2018-07-20 23:48:44 +00:00
Andrew Hsu
a3ef7e9a9b Merge pull request #26 from thaJeztah/18.06-backport-fix_TestExternalGraphDriver_pull
[18.06] Fix flaky TestExternalGraphDriver/pull test
2018-07-18 08:09:40 -07:00
Andrew Hsu
fe1b4aa571 Merge pull request #22 from thaJeztah/18.06-backport-errorfix
[18.06] Fix error string in docker CLI test (Windows RS5)
2018-07-18 08:08:33 -07:00
Andrew Hsu
9475fe4f57 Merge pull request #27 from thaJeztah/18.06-backport-bump_swarmkit
[18.06] Update swarmkit to 68266392a176434d282760d2d6d0ab4c68edcae6
2018-07-18 08:07:57 -07:00
Sebastiaan van Stijn
2211a7dc42 Update swarmkit to 68266392a176434d282760d2d6d0ab4c68edcae6
changes included:

- swarmkit #2706 address unassigned task leak when service is removed
- swarmkit #2676 Fix racy batching on the dispatcher
- swarmkit #2693 Fix linting issues revealed by Go 1.11

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit c9377f4552)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-07-17 20:59:07 +02:00
Sebastiaan van Stijn
8fcc0d53cb Fix flaky TestExternalGraphDriver/pull test
This test occasionally fails on s390x and Power:

    03:16:04 --- FAIL: TestExternalGraphDriver/pull (1.08s)
    03:16:04 external_test.go:402: assertion failed: error is not nil: Error: No such image: busybox:latest

Most likely these failures are caused by Docker Hub updating
the busybox:latest image before all architectures are
available.

Instead of using `:latest`, pull an image by digest, so that
the test doesn't depend on Docker Hub having all architectures
available for `:latest`.

I selected the same digest as is currently used as "frozen image"
in the Dockerfile.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 352db26d5f)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-07-17 20:44:02 +02:00
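
As an illustration of the pull-by-digest approach described above, a sketch using the Go client (the digest below is a placeholder, not the frozen-image digest the test actually pins):

package main

import (
	"context"
	"io"
	"io/ioutil"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/client"
)

// pullByDigest pins the image content exactly, so the caller no longer depends
// on every architecture being published for busybox:latest.
func pullByDigest(ctx context.Context, cli *client.Client) error {
	// "<digest>" is a placeholder; the test uses the frozen-image digest
	// from the Dockerfile.
	rc, err := cli.ImagePull(ctx, "busybox@sha256:<digest>", types.ImagePullOptions{})
	if err != nil {
		return err
	}
	defer rc.Close()
	_, err = io.Copy(ioutil.Discard, rc)
	return err
}
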
Andrew Hsu
8308a17b6a Merge pull request #24 from thaJeztah/18.06-backport-fix-internal
[18.06] Fix flakyness in TestDockerNetworkInternalMode
2018-07-16 09:23:10 -07:00
Flavio Crisciani
b4c762831c Fix flakyness in TestDockerNetworkInternalMode
Instead of waiting for the DNS to fail, try to access
a specific external IP and verify that 100% of the packets
are being lost.

Signed-off-by: Flavio Crisciani <flavio.crisciani@docker.com>
(cherry picked from commit a2bb2144b3)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-07-15 21:42:00 +02:00
Sandeep Bansal
d025942c33 Fix error string in docker CLI test
Signed-off-by: Sandeep Bansal <sabansal@microsoft.com>
(cherry picked from commit 76ace9bb5e)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-07-12 13:52:32 +02:00
Andrew Hsu
371b590ace Merge pull request #20 from thaJeztah/18.06-backport-do_not_Healthcheck_RUN_command
[18.06] Ensure RUN instructions run without Healthcheck
2018-07-11 17:47:16 -07:00
Yuichiro Kaneko
160be68bbd Ensure RUN instructions run without Healthcheck
Before this commit, the healthcheck ran during a RUN
instruction if a HEALTHCHECK instruction appeared before it.
By passing `withoutHealthcheck` to `copyRunConfig`,
RUN instructions now always run without a healthcheck.

Fix: https://github.com/moby/moby/issues/37362

Signed-off-by: Yuichiro Kaneko <spiketeika@gmail.com>
(cherry picked from commit 44e08d8a7d)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-07-11 20:39:49 +02:00
Sebastiaan van Stijn
26055afbd4 Merge pull request #19 from thaJeztah/18.06-backport-bump_libnetwork
[18.06] Bump libnetwork to d00ceed44cc447c77f25cdf5d59e83163bdcb4c9
2018-07-11 19:55:11 +02:00
Sebastiaan van Stijn
f2dffe3b15 Merge pull request #18 from tiborvass/18.06-mountable-bugfix
[18.06] builder: fix duplicate calls to mountable
2018-07-11 19:54:44 +02:00
Sebastiaan van Stijn
d65a47f438 Bump libnetwork to d00ceed44cc447c77f25cdf5d59e83163bdcb4c9
The absence of the file /proc/sys/net/ipv6/conf/all/disable_ipv6
doesn't appear to affect functionality, at least at this time.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit d58c4cbe6c)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-07-11 12:46:51 +02:00
Tonis Tiigi
622743a9b5 builder: fix duplicate calls to mountable
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit ffa7233d15)
Signed-off-by: Tibor Vass <tibor@docker.com>
2018-07-11 02:34:47 +00:00
Andrew Hsu
4fc6e3ab7b Merge pull request #17 from thaJeztah/18.06-backport-bump_containerd_1.1.1
[18.06] Bump containerd daemon to v1.1.1
2018-07-10 15:25:14 -07:00
Andrew Hsu
6abf11d448 Merge pull request #15 from thaJeztah/18.06-backport-vendor-containerd
[18.06] vendor: update containerd to b41633746
2018-07-10 09:28:55 -07:00
Andrew Hsu
c863835e99 Merge pull request #16 from thaJeztah/18.06-backport-scalable-lb
[18.06] Improve scalability of the Linux load balancing
2018-07-10 09:26:50 -07:00
Brian Goff
df5aa018c6 Bump containerd daemon to v1.1.1
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit c083eb7595)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-07-10 01:50:02 +02:00
Andrew Hsu
22a77b0b92 Merge pull request #11 from thaJeztah/18.06-fix_bindmount_src_create_race
[18.06] Fix bindmount autocreate race
2018-07-09 14:53:50 -07:00
Chris Telfer
1c4232670a Bump libnetwork to 3ac297bc
Bump libnetwork to 3ac297bc7fd0afec9051bbb47024c9bc1d75bf5b in order to
get fix 0c3d9f00, which addresses a flaw that the scalable load balancing
code revealed: attempting to print sandbox IDs where the sandbox name
was too short resulted in a goroutine panic. In the previous code this
could only occur for sandboxes with names of 1 or 2 characters, but due
to naming updates in the scalable load balancing code it could now occur
for networks whose names were 3 characters, and at least one of the
integration tests employed such networks (named 'foo', 'bar' and 'baz').

This update also brings in several other changes:
 * 6c7c6017 - Fix error handling about bridgeSetup
 * 5ed38221 - Optimize networkDB queue
 * cfa9afdb - ndots: produce error on negative numbers
 * 5586e226 - improve error message for invalid ndots number
 * 449672e5 - Allows to set generic knobs on the Sandbox
 * 6b4c4af7 - do not ignore user-provided "ndots:0" option
 * 843a0e42 - Adjust corner case for reconnect logic

Signed-off-by: Chris Telfer <ctelfer@docker.com>
(cherry picked from commit 0e162d9923)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-07-09 20:27:41 +02:00
Chris Telfer
cd3596eabc Update moby to use scalable-lb libnetwork APIs
This patch is required for the updated version of libnetwork and entails
two minor changes.

First, it uses the new libnetwork.NetworkDeleteOptionRemoveLB option to
the network.Delete() method to automatically remove the load balancing
endpoint for ingress networks.   This allows removal of the
deleteLoadBalancerSandbox() function whose functionality is now within
libnetwork.

The second change is to allocate a load balancer endpoint IP address for
all overlay networks rather than just "ingress" and windows overlay
networks.  Swarmkit is already performing this allocation, but moby was
not making use of these IP addresses for Linux overlay networks (except
ingress).  The current version of libnetwork makes use of these IP
addresses by creating a load balancing sandbox and endpoint, similar to
ingress's, for all overlay networks and putting all load balancing state
for a given node in that sandbox only.  This reduces the amount of Linux
kernel state required per node.

In the prior scheme, libnetwork would program each container's network
namespace with every piece of load balancing state for every other
container that shared *any* network with the first container.  This
meant that the amount of kernel state on a given node scaled with the
square of the number of services in the cluster and with the square of
the number of containers per service.  With the new scheme, kernel state
at each node scales linearly with the number of services and the number
of containers per service.  This also reduces the number of system calls
required to add or remove tasks and containers.  Previously the number
of system calls required grew linearly with the number of other
tasks that shared a network with the container.  Now the number of
system calls grows linearly only with the number of networks that the
task/container is attached to.  This results in a significant
performance improvement when adding services to, and removing services
from, a cluster that is already heavily loaded.

The primary disadvantage of this scheme is that it requires the
allocation of an additional IP address per subnet for every
node in the cluster that has a task on the given subnet.
mentioned, swarmkit is already allocating these IP addresses for every
node and they are going unused.  Future swarmkit modifications should be
examined to only allocate said IP addresses when nodes actually require
them.

Signed-off-by: Chris Telfer <ctelfer@docker.com>
(cherry picked from commit 8e0f6bc903)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-07-09 20:09:14 +02:00
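
To make the scaling argument in the commit above concrete, under an assumed model of S services with C tasks each sharing a node's networks (illustrative only, not figures from the commit):

N_old(node) ∝ S^2 * C^2   (each container namespace is programmed with LB state for every other container sharing any network)
N_new(node) ∝ S * C       (a single per-node LB sandbox holds that state once)
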
Chris Telfer
0cd9442c75 bump libnetwork to b0186632
Bump libnetwork to b0186632522c68f4e1222c4f6d7dbe518882024f.   This
includes the following changes:
 * Dockerize protocol buffer generation and update (78d9390a..e12dd44c)
 * Use new plugin interfaces provided by plugin pkg (be94e134)
 * Improve linux load-balancing scalability (5111c24e..366b9110)

Signed-off-by: Chris Telfer <ctelfer@docker.com>
(cherry picked from commit 92335eaef1)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-07-09 20:08:39 +02:00
Tonis Tiigi
5e14dc7cb7 vendor: update containerd to b41633746
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit f0e6158266)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-07-09 10:37:32 +02:00
Andrew Hsu
73ca638f39 Merge pull request #13 from thaJeztah/18.06-backport_bump-swarmkit
[18.06] Bump swarmkit to include task reaper fixes and more metrics.
2018-07-08 12:10:20 -07:00
Andrew Hsu
496e208a24 Merge pull request #14 from thaJeztah/18.06-backport-CVE-2018-10892
[18.06] Add /proc/acpi to masked paths
2018-07-06 16:50:13 -07:00
Antonio Murdaca
caf82772b7 Add /proc/acpi to masked paths
The default OCI Linux spec in oci/defaults{_linux}.go in Docker/Moby
from 1.11 to current upstream master does not block /proc/acpi pathnames,
allowing attackers to modify the host's hardware, such as enabling or
disabling Bluetooth or turning keyboard backlight brightness up or down.
SELinux prevents all of this if enabled.

Signed-off-by: Antonio Murdaca <runcom@redhat.com>
(cherry picked from commit 569b9702a5)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-07-06 15:59:05 +02:00
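
For illustration of the fix described above, a minimal sketch of what masking the path amounts to in the generated spec (assuming the opencontainers runtime-spec Go types rather than the exact moby helper):

package oci

import specs "github.com/opencontainers/runtime-spec/specs-go"

// maskACPI appends /proc/acpi to the paths the runtime masks inside the
// container, so a containerized process cannot reach the host hardware
// controls exposed under it.
func maskACPI(s *specs.Spec) {
	if s.Linux == nil {
		s.Linux = &specs.Linux{}
	}
	s.Linux.MaskedPaths = append(s.Linux.MaskedPaths, "/proc/acpi")
}
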
Ying Li
ea4127cc0e Bump swarmkit to include task reaper fixes and more metrics.
This includes the following behavior-modifying PRs:

- docker/swarmkit#2673
- docker/swarmkit#2669
- docker/swarmkit#2675
- docker/swarmkit#2664

Signed-off-by: Ying Li <ying.li@docker.com>
(cherry picked from commit b322705750)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-07-06 00:43:16 +02:00
Brian Goff
8ade047f3c Fix bindmount autocreate race
When using the mounts API, bind mounts are not supposed to be
automatically created.

Before this patch there was a race condition between validating that a
bind path exists and actually setting up the bind mount: the bind path
may exist during validation but be removed by the time the mountpoint is
set up.

This adds a field to the mountpoint struct to ensure that bind paths for
mounts created via the mounts API are not accidentally created.

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit 1caeb79963)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-07-05 21:01:17 +02:00
Andrew Hsu
85b4dbd3db Merge pull request #4 from thaJeztah/18.06-backport-register-oci-mediatypes
[18.06] Register OCI image media types
2018-07-05 11:52:39 -07:00
Andrew Hsu
cba80832a6 Merge pull request #3 from thaJeztah/18.06-backport-update-windows-manifest-sorting
[18.06] LCOW: Prefer Windows over Linux in a manifest list
2018-07-05 11:52:18 -07:00
Andrew Hsu
0d029b0a42 Merge pull request #2 from thaJeztah/18.06-update_go_winio
[18.06] Update Microsoft/go-winio to 0.4.8
2018-07-05 10:21:43 -07:00
Andrew Hsu
c9bfc3c842 Merge pull request #10 from thaJeztah/18.06-update_buildkit
[18.06] vendor: update buildkit to 9acf51e491
2018-07-05 08:41:47 -07:00
Tibor Vass
12eee7ce0e Merge pull request #1 from thaJeztah/18.03-update-containerd-1.1.1-rc.2
[18.06] Update containerd to v1.1.1-rc.2
2018-07-05 08:00:22 -07:00
Andrew Hsu
394fdb711c Merge pull request #7 from tiborvass/18.06-cp
[18.06] cherry-pick set for rc2
2018-07-05 02:00:07 -07:00
Tonis Tiigi
c9757e3efc vendor: update buildkit to 9acf51e491
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit 6144f50e55)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-07-04 16:34:34 +02:00
Tonis Tiigi
3500f3f27e builder: do not send duplicate status for completed jobs
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit 6f7dd9428e)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-07-04 16:34:28 +02:00
Tibor Vass
f030a49747 api: Change Platform field back to string (temporary workaround)
This partially reverts https://github.com/moby/moby/pull/37350

Although specs.Platform is desirable in the API, there is more work
to be done on helper functions, namely containerd's platforms.Parse
that assumes the default platform of the Go runtime.

That prevents a client from using the recommended Parse function to
retrieve a specs.Platform object.

With this change, no parsing is expected from the client.

Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit facad55744)
Signed-off-by: Tibor Vass <tibor@docker.com>
2018-07-04 00:28:41 +00:00
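
For context on the helper mentioned above, a small sketch of containerd's platforms package (illustrative; the exact defaulting behaviour depends on the containerd version):

package main

import (
	"fmt"

	"github.com/containerd/containerd/platforms"
)

func main() {
	// Parse normalizes a specifier such as "windows/amd64" into a specs.Platform.
	// Partial specifiers are where the Go runtime's defaults creep in, which is
	// why the API keeps the raw string until the helpers are reworked.
	p, err := platforms.Parse("windows/amd64")
	if err != nil {
		panic(err)
	}
	fmt.Println(platforms.Format(p)) // prints "windows/amd64"
}
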
Andrew Hsu
d29c1fa7d1 Merge pull request #5 from thaJeztah/18.06-bump-libnetwork
[18.06] bump libnetwork to 430c00a
2018-07-03 16:15:29 -07:00
Tibor Vass
26dd527fab builder: return image ID in API when using buildkit
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit ca8022ec63)
Signed-off-by: Tibor Vass <tibor@docker.com>
2018-07-03 22:47:03 +00:00
Chris Telfer
cc1b68d5f2 Update tests w/ new libnetwork constraints
The TestDockerNetworkIPAMMultipleNetworks test allocates several
networks simultaneously with overlapping IP addresses.  Libnetwork now
forbids this.  Adjust the test case to use distinct IP ranges for the
networks it creates.

Signed-off-by: Chris Telfer <ctelfer@docker.com>
(cherry picked from commit efb7909bef)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-07-03 22:25:41 +02:00
Derek McGowan
126b5bce9d Register OCI image media types
OCI types are backwards compatible with Docker manifest
types; however, the media types must be registered.

Signed-off-by: Derek McGowan <derek@mcgstyle.net>
(cherry picked from commit c4f0515837)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-07-03 22:25:07 +02:00
John Stephens
605cc35dc6 LCOW: Prefer Windows over Linux in a manifest list
When a manifest list contains both Linux and Windows images, always
prefer Windows when the platform OS is unspecified. Also, filter out any
Windows images with a higher build than the host, since they cannot run.

Signed-off-by: John Stephens <johnstep@docker.com>
(cherry picked from commit ddcdb7255d)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-07-03 22:24:27 +02:00
Sebastiaan van Stijn
d3a04c2092 Update Microsoft/go-winio to 0.4.8
Fixes named pipe support for Hyper-V isolated containers

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 74095588ba)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-07-03 22:23:55 +02:00
Derek McGowan
bcfbeb8998 Update containerd to v1.1.1-rc.2
Signed-off-by: Derek McGowan <derek@mcgstyle.net>
(cherry picked from commit 735517928b)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-07-03 22:23:01 +02:00
Chris Telfer
a0c213f90c bump libnetwork to 430c00a
Bump libnetwork to 430c00a6a6b3dfdd774f21e1abd4ad6b0216c629.  This
includes the following moby-affecting changes:

 * Update vendoring for go-sockaddr (8df9f31a)
 * Fix inconsistent subnet allocation by preventing allocation of
   overlapping subnets (8579c5d2)
 * Handle IPv6 literals correctly in port bindings (474fcaf4)
 * Update vendoring for miekg/dns (8f307ac8)
 * Avoid subnet reallocation until required (9756ff7ed)
 * Bump libnetwork build to use go version 1.10.2 (603d2c1a)
 * Unwrap error type returned by PluginGetter (aacec8e1)
 * Update vendored components to match moby (d768021dd)
 * Add retry field to cluster-peers probe (dbbd06a7)
 * Fix net driver response loss on createEndpoint (1ab6e506)
   (fixes https://github.com/docker/for-linux/issues/348)

Signed-off-by: Chris Telfer <ctelfer@docker.com>
(cherry picked from commit f155f828a2)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2018-07-03 19:17:51 +02:00
197 changed files with 6928 additions and 2661 deletions

View File

@@ -56,7 +56,8 @@ DOCKER_ENVS := \
-e https_proxy \
-e no_proxy \
-e VERSION \
-e PLATFORM
-e PLATFORM \
-e PRODUCT
# note: we _cannot_ add "-e DOCKER_BUILDTAGS" here because even if it's unset in the shell, that would shadow the "ENV DOCKER_BUILDTAGS" set in our Dockerfile, which is very important for our official builds
# to allow `make BIND_DIR=. shell` or `make BIND_DIR= test`

View File

@@ -73,7 +73,7 @@ func (b *Backend) Build(ctx context.Context, config backend.BuildConfig) (string
return "", err
}
if config.ProgressWriter.AuxFormatter != nil {
if err = config.ProgressWriter.AuxFormatter.Emit(types.BuildResult{ID: imageID}); err != nil {
if err = config.ProgressWriter.AuxFormatter.Emit("moby.image.id", types.BuildResult{ID: imageID}); err != nil {
return "", err
}
}

View File

@@ -14,7 +14,6 @@ import (
"strings"
"sync"
"github.com/containerd/containerd/platforms"
"github.com/docker/docker/api/server/httputils"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/backend"
@@ -24,8 +23,7 @@ import (
"github.com/docker/docker/pkg/ioutils"
"github.com/docker/docker/pkg/progress"
"github.com/docker/docker/pkg/streamformatter"
"github.com/docker/docker/pkg/system"
"github.com/docker/go-units"
units "github.com/docker/go-units"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
@@ -72,17 +70,7 @@ func newImageBuildOptions(ctx context.Context, r *http.Request) (*types.ImageBui
options.Target = r.FormValue("target")
options.RemoteContext = r.FormValue("remote")
if versions.GreaterThanOrEqualTo(version, "1.32") {
apiPlatform := r.FormValue("platform")
if apiPlatform != "" {
sp, err := platforms.Parse(apiPlatform)
if err != nil {
return nil, err
}
if err := system.ValidatePlatform(sp); err != nil {
return nil, err
}
options.Platform = &sp
}
options.Platform = r.FormValue("platform")
}
if r.Form.Get("shmsize") != "" {
@@ -220,7 +208,6 @@ func (br *buildRouter) postBuild(ctx context.Context, w http.ResponseWriter, r *
output.Write(notVerboseBuffer.Bytes())
}
logrus.Debugf("isflushed %v", output.Flushed())
// Do not write the error in the http output if it's still empty.
// This prevents from writing a 200(OK) when there is an internal error.
if !output.Flushed() {
@@ -243,6 +230,10 @@ func (br *buildRouter) postBuild(ctx context.Context, w http.ResponseWriter, r *
return errdefs.InvalidParameter(errors.New("squash is only supported with experimental mode"))
}
if buildOptions.Version == types.BuilderBuildKit && !br.daemon.HasExperimental() {
return errdefs.InvalidParameter(errors.New("buildkit is only supported with experimental mode"))
}
out := io.Writer(output)
if buildOptions.SuppressOutput {
out = notVerboseBuffer
@@ -255,10 +246,6 @@ func (br *buildRouter) postBuild(ctx context.Context, w http.ResponseWriter, r *
return progress.NewProgressReader(in, progressOutput, r.ContentLength, "Downloading context", buildOptions.RemoteContext)
}
if buildOptions.Version == types.BuilderBuildKit && !br.daemon.HasExperimental() {
return errdefs.InvalidParameter(errors.New("buildkit is only supported with experimental mode"))
}
wantAux := versions.GreaterThanOrEqualTo(version, "1.30")
imgID, err := br.backend.Build(ctx, backend.BuildConfig{

View File

@@ -7,8 +7,7 @@ import (
"github.com/docker/docker/api/types/container"
"github.com/docker/docker/api/types/filters"
"github.com/docker/go-units"
specs "github.com/opencontainers/image-spec/specs-go/v1"
units "github.com/docker/go-units"
)
// CheckpointCreateOptions holds parameters to create a checkpoint from a container
@@ -181,7 +180,7 @@ type ImageBuildOptions struct {
ExtraHosts []string // List of extra hosts
Target string
SessionID string
Platform *specs.Platform
Platform string
// Version specifies the version of the unerlying builder to use
Version BuilderVersion
// BuildID is an optional identifier that can be passed together with the

View File

@@ -644,7 +644,7 @@ func showProgress(ctx context.Context, ongoing *jobs, cs content.Store, pw progr
// featured.
type jobs struct {
name string
added map[digest.Digest]job
added map[digest.Digest]*job
mu sync.Mutex
resolved bool
}
@@ -658,7 +658,7 @@ type job struct {
func newJobs(name string) *jobs {
return &jobs{
name: name,
added: make(map[digest.Digest]job),
added: make(map[digest.Digest]*job),
}
}
@@ -669,17 +669,17 @@ func (j *jobs) add(desc ocispec.Descriptor) {
if _, ok := j.added[desc.Digest]; ok {
return
}
j.added[desc.Digest] = job{
j.added[desc.Digest] = &job{
Descriptor: desc,
started: time.Now(),
}
}
func (j *jobs) jobs() []job {
func (j *jobs) jobs() []*job {
j.mu.Lock()
defer j.mu.Unlock()
descs := make([]job, 0, len(j.added))
descs := make([]*job, 0, len(j.added))
for _, j := range j.added {
descs = append(descs, j)
}

View File

@@ -245,21 +245,23 @@ func (s *snapshotter) Mounts(ctx context.Context, key string) (snapshot.Mountabl
}
if l != nil {
id := identity.NewID()
rwlayer, err := s.opt.LayerStore.CreateRWLayer(id, l.ChainID(), nil)
if err != nil {
return nil, err
}
rootfs, err := rwlayer.Mount("")
if err != nil {
return nil, err
}
mnt := []mount.Mount{{
Source: rootfs.Path(),
Type: "bind",
Options: []string{"rbind"},
}}
return &constMountable{
mounts: mnt,
var rwlayer layer.RWLayer
return &mountable{
acquire: func() ([]mount.Mount, error) {
rwlayer, err = s.opt.LayerStore.CreateRWLayer(id, l.ChainID(), nil)
if err != nil {
return nil, err
}
rootfs, err := rwlayer.Mount("")
if err != nil {
return nil, err
}
return []mount.Mount{{
Source: rootfs.Path(),
Type: "bind",
Options: []string{"rbind"},
}}, nil
},
release: func() error {
_, err := s.opt.LayerStore.ReleaseRWLayer(rwlayer)
return err
@@ -269,17 +271,18 @@ func (s *snapshotter) Mounts(ctx context.Context, key string) (snapshot.Mountabl
id, _ := s.getGraphDriverID(key)
rootfs, err := s.opt.GraphDriver.Get(id, "")
if err != nil {
return nil, err
}
mnt := []mount.Mount{{
Source: rootfs.Path(),
Type: "bind",
Options: []string{"rbind"},
}}
return &constMountable{
mounts: mnt,
return &mountable{
acquire: func() ([]mount.Mount, error) {
rootfs, err := s.opt.GraphDriver.Get(id, "")
if err != nil {
return nil, err
}
return []mount.Mount{{
Source: rootfs.Path(),
Type: "bind",
Options: []string{"rbind"},
}}, nil
},
release: func() error {
return s.opt.GraphDriver.Put(id)
},
@@ -428,18 +431,37 @@ func (s *snapshotter) Close() error {
return s.db.Close()
}
type constMountable struct {
type mountable struct {
mu sync.Mutex
mounts []mount.Mount
acquire func() ([]mount.Mount, error)
release func() error
}
func (m *constMountable) Mount() ([]mount.Mount, error) {
func (m *mountable) Mount() ([]mount.Mount, error) {
m.mu.Lock()
defer m.mu.Unlock()
if m.mounts != nil {
return m.mounts, nil
}
mounts, err := m.acquire()
if err != nil {
return nil, err
}
m.mounts = mounts
return m.mounts, nil
}
func (m *constMountable) Release() error {
func (m *mountable) Release() error {
m.mu.Lock()
defer m.mu.Unlock()
if m.release == nil {
return nil
}
m.mounts = nil
return m.release()
}

View File

@@ -2,7 +2,6 @@ package buildkit
import (
"context"
"encoding/json"
"io"
"strings"
"sync"
@@ -14,7 +13,8 @@ import (
"github.com/docker/docker/api/types/backend"
"github.com/docker/docker/builder"
"github.com/docker/docker/daemon/images"
"github.com/docker/docker/pkg/jsonmessage"
"github.com/docker/docker/pkg/streamformatter"
"github.com/docker/docker/pkg/system"
controlapi "github.com/moby/buildkit/api/services/control"
"github.com/moby/buildkit/control"
"github.com/moby/buildkit/identity"
@@ -209,8 +209,17 @@ func (b *Builder) Build(ctx context.Context, opt backend.BuildConfig) (*builder.
frontendAttrs["no-cache"] = ""
}
if opt.Options.Platform != nil {
frontendAttrs["platform"] = platforms.Format(*opt.Options.Platform)
if opt.Options.Platform != "" {
// same as in newBuilder in builder/dockerfile.builder.go
// TODO: remove once opt.Options.Platform is of type specs.Platform
sp, err := platforms.Parse(opt.Options.Platform)
if err != nil {
return nil, err
}
if err := system.ValidatePlatform(sp); err != nil {
return nil, err
}
frontendAttrs["platform"] = opt.Options.Platform
}
exporterAttrs := map[string]string{}
@@ -228,6 +237,8 @@ func (b *Builder) Build(ctx context.Context, opt backend.BuildConfig) (*builder.
Session: opt.Options.SessionID,
}
aux := streamformatter.AuxFormatter{Writer: opt.ProgressWriter.Output}
eg, ctx := errgroup.WithContext(ctx)
eg.Go(func() error {
@@ -240,7 +251,7 @@ func (b *Builder) Build(ctx context.Context, opt backend.BuildConfig) (*builder.
return errors.Errorf("missing image id")
}
out.ImageID = id
return nil
return aux.Emit("moby.image.id", types.BuildResult{ID: id})
})
ch := make(chan *controlapi.StatusResponse)
@@ -258,25 +269,9 @@ func (b *Builder) Build(ctx context.Context, opt backend.BuildConfig) (*builder.
if err != nil {
return err
}
auxJSONBytes, err := json.Marshal(dt)
if err != nil {
if err := aux.Emit("moby.buildkit.trace", dt); err != nil {
return err
}
auxJSON := new(json.RawMessage)
*auxJSON = auxJSONBytes
msgJSON, err := json.Marshal(&jsonmessage.JSONMessage{ID: "moby.buildkit.trace", Aux: auxJSON})
if err != nil {
return err
}
msgJSON = append(msgJSON, []byte("\r\n")...)
n, err := opt.ProgressWriter.Output.Write(msgJSON)
if err != nil {
return err
}
if n != len(msgJSON) {
return io.ErrShortWrite
}
}
return nil
})

View File

@@ -13,12 +13,13 @@ import (
"github.com/docker/docker/daemon/graphdriver"
"github.com/moby/buildkit/cache"
"github.com/moby/buildkit/cache/metadata"
"github.com/moby/buildkit/cache/remotecache"
registryremotecache "github.com/moby/buildkit/cache/remotecache/registry"
"github.com/moby/buildkit/control"
"github.com/moby/buildkit/exporter"
"github.com/moby/buildkit/frontend"
"github.com/moby/buildkit/frontend/dockerfile"
dockerfile "github.com/moby/buildkit/frontend/dockerfile/builder"
"github.com/moby/buildkit/frontend/gateway"
"github.com/moby/buildkit/frontend/gateway/forwarder"
"github.com/moby/buildkit/snapshot/blobmapping"
"github.com/moby/buildkit/solver/boltdbcachestorage"
"github.com/moby/buildkit/worker"
@@ -113,10 +114,6 @@ func newController(rt http.RoundTripper, opt Opt) (*control.Controller, error) {
return nil, err
}
frontends := map[string]frontend.Frontend{}
frontends["dockerfile.v0"] = dockerfile.NewDockerfileFrontend()
frontends["gateway.v0"] = gateway.NewGatewayFrontend()
wopt := mobyworker.Opt{
ID: "moby",
SessionManager: opt.SessionManager,
@@ -141,17 +138,17 @@ func newController(rt http.RoundTripper, opt Opt) (*control.Controller, error) {
}
wc.Add(w)
ci := remotecache.NewCacheImporter(remotecache.ImportOpt{
Worker: w,
SessionManager: opt.SessionManager,
})
frontends := map[string]frontend.Frontend{
"dockerfile.v0": forwarder.NewGatewayForwarder(wc, dockerfile.Build),
"gateway.v0": gateway.NewGatewayFrontend(wc),
}
return control.NewController(control.Opt{
SessionManager: opt.SessionManager,
WorkerController: wc,
Frontends: frontends,
CacheKeyStorage: cacheStorage,
// CacheExporter: ce,
CacheImporter: ci,
SessionManager: opt.SessionManager,
WorkerController: wc,
Frontends: frontends,
CacheKeyStorage: cacheStorage,
ResolveCacheImporterFunc: registryremotecache.ResolveCacheImporterFunc(opt.SessionManager),
// TODO: set ResolveCacheExporterFunc for exporting cache
})
}

View File

@@ -83,7 +83,7 @@ func (e *imageExporterInstance) Export(ctx context.Context, ref cache.ImmutableR
if ref != nil {
layersDone := oneOffProgress(ctx, "exporting layers")
if err := ref.Finalize(ctx); err != nil {
if err := ref.Finalize(ctx, true); err != nil {
return nil, err
}

View File

@@ -10,6 +10,7 @@ import (
"strings"
"time"
"github.com/containerd/containerd/platforms"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/backend"
"github.com/docker/docker/api/types/container"
@@ -25,6 +26,7 @@ import (
"github.com/moby/buildkit/frontend/dockerfile/parser"
"github.com/moby/buildkit/frontend/dockerfile/shell"
"github.com/moby/buildkit/session"
specs "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"golang.org/x/sync/syncmap"
@@ -111,7 +113,11 @@ func (bm *BuildManager) Build(ctx context.Context, config backend.BuildConfig) (
PathCache: bm.pathCache,
IDMappings: bm.idMappings,
}
return newBuilder(ctx, builderOptions).build(source, dockerfile)
b, err := newBuilder(ctx, builderOptions)
if err != nil {
return nil, err
}
return b.build(source, dockerfile)
}
func (bm *BuildManager) initializeClientSession(ctx context.Context, cancel func(), options *types.ImageBuildOptions) (builder.Source, error) {
@@ -175,10 +181,11 @@ type Builder struct {
pathCache pathCache
containerManager *containerManager
imageProber ImageProber
platform *specs.Platform
}
// newBuilder creates a new Dockerfile builder from an optional dockerfile and a Options.
func newBuilder(clientCtx context.Context, options builderOptions) *Builder {
func newBuilder(clientCtx context.Context, options builderOptions) (*Builder, error) {
config := options.Options
if config == nil {
config = new(types.ImageBuildOptions)
@@ -199,7 +206,20 @@ func newBuilder(clientCtx context.Context, options builderOptions) *Builder {
containerManager: newContainerManager(options.Backend),
}
return b
// same as in Builder.Build in builder/builder-next/builder.go
// TODO: remove once config.Platform is of type specs.Platform
if config.Platform != "" {
sp, err := platforms.Parse(config.Platform)
if err != nil {
return nil, err
}
if err := system.ValidatePlatform(sp); err != nil {
return nil, err
}
b.platform = &sp
}
return b, nil
}
// Build 'LABEL' command(s) from '--label' options and add to the last stage
@@ -257,7 +277,7 @@ func emitImageID(aux *streamformatter.AuxFormatter, state *dispatchState) error
if aux == nil || state.imageID == "" {
return nil
}
return aux.Emit(types.BuildResult{ID: state.imageID})
return aux.Emit("", types.BuildResult{ID: state.imageID})
}
func processMetaArg(meta instructions.ArgCommand, shlex *shell.Lex, args *BuildArgs) error {
@@ -365,9 +385,12 @@ func BuildFromConfig(config *container.Config, changes []string, os string) (*co
return nil, errdefs.InvalidParameter(err)
}
b := newBuilder(context.Background(), builderOptions{
b, err := newBuilder(context.Background(), builderOptions{
Options: &types.ImageBuildOptions{NoCache: true},
})
if err != nil {
return nil, err
}
// ensure that the commands are valid
for _, n := range dockerfile.AST.Children {

View File

@@ -87,7 +87,7 @@ func copierFromDispatchRequest(req dispatchRequest, download sourceDownloader, i
pathCache: req.builder.pathCache,
download: download,
imageSource: imageSource,
platform: req.builder.options.Platform,
platform: req.builder.platform,
}
}

View File

@@ -146,7 +146,7 @@ func (d *dispatchRequest) getImageMount(imageRefOrID string) (*imageMount, error
imageRefOrID = stage.Image
localOnly = true
}
return d.builder.imageSources.Get(imageRefOrID, localOnly, d.builder.options.Platform)
return d.builder.imageSources.Get(imageRefOrID, localOnly, d.builder.platform)
}
// FROM [--platform=platform] imagename[:tag | @digest] [AS build-stage-name]
@@ -238,7 +238,7 @@ func (d *dispatchRequest) getImageOrStage(name string, platform *specs.Platform)
}
if platform == nil {
platform = d.builder.options.Platform
platform = d.builder.platform
}
// Windows cannot support a container with no base image unless it is LCOW.
@@ -274,11 +274,17 @@ func (d *dispatchRequest) getImageOrStage(name string, platform *specs.Platform)
}
return imageMount.Image(), nil
}
func (d *dispatchRequest) getFromImage(shlex *shell.Lex, name string, platform *specs.Platform) (builder.Image, error) {
name, err := d.getExpandedString(shlex, name)
func (d *dispatchRequest) getFromImage(shlex *shell.Lex, basename string, platform *specs.Platform) (builder.Image, error) {
name, err := d.getExpandedString(shlex, basename)
if err != nil {
return nil, err
}
// Empty string is interpreted to FROM scratch by images.GetImageAndReleasableLayer,
// so validate expanded result is not empty.
if name == "" {
return nil, errors.Errorf("base name (%s) should not be blank", basename)
}
return d.getImageOrStage(name, platform)
}
@@ -365,7 +371,8 @@ func dispatchRun(d dispatchRequest, c *instructions.RunCommand) error {
runConfig := copyRunConfig(stateRunConfig,
withCmd(cmdFromArgs),
withEnv(append(stateRunConfig.Env, buildArgs...)),
withEntrypointOverride(saveCmd, strslice.StrSlice{""}))
withEntrypointOverride(saveCmd, strslice.StrSlice{""}),
withoutHealthcheck())
// set config as already being escaped, this prevents double escaping on windows
runConfig.ArgsEscaped = true

View File

@@ -6,7 +6,6 @@ import (
"runtime"
"testing"
"github.com/containerd/containerd/platforms"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/backend"
"github.com/docker/docker/api/types/container"
@@ -23,8 +22,7 @@ import (
func newBuilderWithMockBackend() *Builder {
mockBackend := &MockBackend{}
defaultPlatform := platforms.DefaultSpec()
opts := &types.ImageBuildOptions{Platform: &defaultPlatform}
opts := &types.ImageBuildOptions{}
ctx := context.Background()
b := &Builder{
options: opts,
@@ -116,7 +114,7 @@ func TestFromScratch(t *testing.T) {
err := initializeStage(sb, cmd)
if runtime.GOOS == "windows" && !system.LCOWSupported() {
assert.Check(t, is.Error(err, "Windows does not support FROM scratch"))
assert.Check(t, is.Error(err, "Linux containers are not supported on this system"))
return
}
@@ -139,10 +137,10 @@ func TestFromWithArg(t *testing.T) {
args := NewBuildArgs(make(map[string]*string))
val := "sometag"
metaArg := instructions.ArgCommand{
metaArg := instructions.ArgCommand{KeyValuePairOptional: instructions.KeyValuePairOptional{
Key: "THETAG",
Value: &val,
}
}}
cmd := &instructions.Stage{
BaseName: "alpine:${THETAG}",
}
@@ -159,6 +157,22 @@ func TestFromWithArg(t *testing.T) {
assert.Check(t, is.Len(sb.state.buildArgs.GetAllMeta(), 1))
}
func TestFromWithArgButBuildArgsNotGiven(t *testing.T) {
b := newBuilderWithMockBackend()
args := NewBuildArgs(make(map[string]*string))
metaArg := instructions.ArgCommand{}
cmd := &instructions.Stage{
BaseName: "${THETAG}",
}
err := processMetaArg(metaArg, shell.NewLex('\\'), args)
sb := newDispatchRequest(b, '\\', nil, args, newStagesBuildResults())
assert.NilError(t, err)
err = initializeStage(sb, cmd)
assert.Error(t, err, "base name (${THETAG}) should not be blank")
}
func TestFromWithUndefinedArg(t *testing.T) {
tag, expected := "sometag", "expectedthisid"
@@ -379,7 +393,7 @@ func TestArg(t *testing.T) {
argName := "foo"
argVal := "bar"
cmd := &instructions.ArgCommand{Key: argName, Value: &argVal}
cmd := &instructions.ArgCommand{KeyValuePairOptional: instructions.KeyValuePairOptional{Key: argName, Value: &argVal}}
err := dispatch(sb, cmd)
assert.NilError(t, err)
@@ -475,3 +489,64 @@ func TestRunWithBuildArgs(t *testing.T) {
// Check that runConfig.Cmd has not been modified by run
assert.Check(t, is.DeepEqual(origCmd, sb.state.runConfig.Cmd))
}
func TestRunIgnoresHealthcheck(t *testing.T) {
b := newBuilderWithMockBackend()
args := NewBuildArgs(make(map[string]*string))
sb := newDispatchRequest(b, '`', nil, args, newStagesBuildResults())
b.disableCommit = false
origCmd := strslice.StrSlice([]string{"cmd", "in", "from", "image"})
imageCache := &mockImageCache{
getCacheFunc: func(parentID string, cfg *container.Config) (string, error) {
return "", nil
},
}
mockBackend := b.docker.(*MockBackend)
mockBackend.makeImageCacheFunc = func(_ []string) builder.ImageCache {
return imageCache
}
b.imageProber = newImageProber(mockBackend, nil, false)
mockBackend.getImageFunc = func(_ string) (builder.Image, builder.ROLayer, error) {
return &mockImage{
id: "abcdef",
config: &container.Config{Cmd: origCmd},
}, nil, nil
}
mockBackend.containerCreateFunc = func(config types.ContainerCreateConfig) (container.ContainerCreateCreatedBody, error) {
return container.ContainerCreateCreatedBody{ID: "12345"}, nil
}
mockBackend.commitFunc = func(cfg backend.CommitConfig) (image.ID, error) {
return "", nil
}
from := &instructions.Stage{BaseName: "abcdef"}
err := initializeStage(sb, from)
assert.NilError(t, err)
expectedTest := []string{"CMD-SHELL", "curl -f http://localhost/ || exit 1"}
cmd := &instructions.HealthCheckCommand{
Health: &container.HealthConfig{
Test: expectedTest,
},
}
assert.NilError(t, dispatch(sb, cmd))
assert.Assert(t, sb.state.runConfig.Healthcheck != nil)
mockBackend.containerCreateFunc = func(config types.ContainerCreateConfig) (container.ContainerCreateCreatedBody, error) {
// Check the Healthcheck is disabled.
assert.Check(t, is.DeepEqual([]string{"NONE"}, config.Config.Healthcheck.Test))
return container.ContainerCreateCreatedBody{ID: "123456"}, nil
}
sb.state.buildArgs.AddArg("one", strPtr("two"))
run := &instructions.RunCommand{
ShellDependantCmdLine: instructions.ShellDependantCmdLine{
CmdLine: strslice.StrSlice{"echo foo"},
PrependShell: true,
},
}
assert.NilError(t, dispatch(sb, run))
assert.Check(t, is.DeepEqual(expectedTest, sb.state.runConfig.Healthcheck.Test))
}

View File

@@ -169,7 +169,7 @@ func (b *Builder) performCopy(req dispatchRequest, inst copyInstruction) error {
return err
}
imageMount, err := b.imageSources.Get(state.imageID, true, req.builder.options.Platform)
imageMount, err := b.imageSources.Get(state.imageID, true, req.builder.platform)
if err != nil {
return errors.Wrapf(err, "failed to get destination image %q", state.imageID)
}
@@ -343,6 +343,18 @@ func withEntrypointOverride(cmd []string, entrypoint []string) runConfigModifier
}
}
// withoutHealthcheck disables healthcheck.
//
// The dockerfile RUN instruction expect to run without healthcheck
// so the runConfig Healthcheck needs to be disabled.
func withoutHealthcheck() runConfigModifier {
return func(runConfig *container.Config) {
runConfig.Healthcheck = &container.HealthConfig{
Test: []string{"NONE"},
}
}
}
func copyRunConfig(runConfig *container.Config, modifiers ...runConfigModifier) *container.Config {
copy := *runConfig
copy.Cmd = copyStringSlice(runConfig.Cmd)
@@ -416,7 +428,9 @@ func (b *Builder) probeAndCreate(dispatchState *dispatchState, runConfig *contai
func (b *Builder) create(runConfig *container.Config) (string, error) {
logrus.Debugf("[BUILDER] Command to be executed: %v", runConfig.Cmd)
hostConfig := hostConfigFromOptions(b.options)
isWCOW := runtime.GOOS == "windows" && b.platform != nil && b.platform.OS == "windows"
hostConfig := hostConfigFromOptions(b.options, isWCOW)
container, err := b.containerManager.Create(runConfig, hostConfig)
if err != nil {
return "", err
@@ -429,7 +443,7 @@ func (b *Builder) create(runConfig *container.Config) (string, error) {
return container.ID, nil
}
func hostConfigFromOptions(options *types.ImageBuildOptions) *container.HostConfig {
func hostConfigFromOptions(options *types.ImageBuildOptions, isWCOW bool) *container.HostConfig {
resources := container.Resources{
CgroupParent: options.CgroupParent,
CPUShares: options.CPUShares,
@@ -457,7 +471,7 @@ func hostConfigFromOptions(options *types.ImageBuildOptions) *container.HostConf
// is too small for builder scenarios where many users are
// using RUN statements to install large amounts of data.
// Use 127GB as that's the default size of a VHD in Hyper-V.
if runtime.GOOS == "windows" && options.Platform != nil && options.Platform.OS == "windows" {
if isWCOW {
hc.StorageOpt = make(map[string]string)
hc.StorageOpt["size"] = "127GB"
}

View File

@@ -8,8 +8,8 @@ import (
"net/http"
"net/url"
"strconv"
"strings"
"github.com/containerd/containerd/platforms"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/container"
)
@@ -30,12 +30,6 @@ func (cli *Client) ImageBuild(ctx context.Context, buildContext io.Reader, optio
}
headers.Add("X-Registry-Config", base64.URLEncoding.EncodeToString(buf))
if options.Platform != nil {
if err := cli.NewVersionError("1.32", "platform"); err != nil {
return types.ImageBuildResponse{}, err
}
query.Set("platform", platforms.Format(*options.Platform))
}
headers.Set("Content-Type", "application/x-tar")
serverResp, err := cli.postRaw(ctx, "/build", query, buildContext, headers)
@@ -130,8 +124,11 @@ func (cli *Client) imageBuildOptionsToQuery(options types.ImageBuildOptions) (ur
if options.SessionID != "" {
query.Set("session", options.SessionID)
}
if options.Platform != nil {
query.Set("platform", platforms.Format(*options.Platform))
if options.Platform != "" {
if err := cli.NewVersionError("1.32", "platform"); err != nil {
return query, err
}
query.Set("platform", strings.ToLower(options.Platform))
}
if options.BuildID != "" {
query.Set("buildid", options.BuildID)

View File

@@ -10,6 +10,7 @@ import (
"github.com/docker/docker/dockerversion"
"github.com/docker/docker/pkg/reexec"
"github.com/docker/docker/pkg/term"
"github.com/moby/buildkit/util/apicaps"
"github.com/sirupsen/logrus"
"github.com/spf13/cobra"
)
@@ -42,6 +43,12 @@ func newDaemonCommand() *cobra.Command {
return cmd
}
func init() {
if dockerversion.ProductName != "" {
apicaps.ExportedProduct = dockerversion.ProductName
}
}
func main() {
if reexec.Init() {
return

View File

@@ -41,6 +41,7 @@ package cluster // import "github.com/docker/docker/daemon/cluster"
import (
"context"
"fmt"
"math"
"net"
"os"
"path/filepath"
@@ -67,9 +68,10 @@ const stateFile = "docker-state.json"
const defaultAddr = "0.0.0.0:2377"
const (
initialReconnectDelay = 100 * time.Millisecond
maxReconnectDelay = 30 * time.Second
contextPrefix = "com.docker.swarm"
initialReconnectDelay = 100 * time.Millisecond
maxReconnectDelay = 30 * time.Second
contextPrefix = "com.docker.swarm"
defaultRecvSizeForListResponse = math.MaxInt32 // the max recv limit grpc <1.4.0
)
// NetworkSubnetsProvider exposes functions for retrieving the subnets

View File

@@ -23,6 +23,7 @@ import (
gogotypes "github.com/gogo/protobuf/types"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"google.golang.org/grpc"
)
// GetServices returns all services of a managed swarm cluster.
@@ -67,7 +68,9 @@ func (c *Cluster) GetServices(options apitypes.ServiceListOptions) ([]types.Serv
r, err := state.controlClient.ListServices(
ctx,
&swarmapi.ListServicesRequest{Filters: filters})
&swarmapi.ListServicesRequest{Filters: filters},
grpc.MaxCallRecvMsgSize(defaultRecvSizeForListResponse),
)
if err != nil {
return nil, err
}

View File

@@ -8,6 +8,7 @@ import (
types "github.com/docker/docker/api/types/swarm"
"github.com/docker/docker/daemon/cluster/convert"
swarmapi "github.com/docker/swarmkit/api"
"google.golang.org/grpc"
)
// GetTasks returns a list of tasks matching the filter options.
@@ -53,7 +54,9 @@ func (c *Cluster) GetTasks(options apitypes.TaskListOptions) ([]types.Task, erro
r, err = state.controlClient.ListTasks(
ctx,
&swarmapi.ListTasksRequest{Filters: filters})
&swarmapi.ListTasksRequest{Filters: filters},
grpc.MaxCallRecvMsgSize(defaultRecvSizeForListResponse),
)
return err
}); err != nil {
return nil, err

View File

@@ -8,6 +8,7 @@ import (
"fmt"
"io"
"os"
"runtime"
"strconv"
"strings"
"sync"
@@ -641,9 +642,20 @@ func followLogs(f *os.File, logWatcher *logger.LogWatcher, notifyRotate chan int
}
func watchFile(name string) (filenotify.FileWatcher, error) {
fileWatcher, err := filenotify.New()
if err != nil {
return nil, err
var fileWatcher filenotify.FileWatcher
if runtime.GOOS == "windows" {
// FileWatcher on Windows files is based on the syscall notifications which has an issue becuase of file caching.
// It is based on ReadDirectoryChangesW() which doesn't detect writes to the cache. It detects writes to disk only.
// Becuase of the OS lazy writing, we don't get notifications for file writes and thereby the watcher
// doesn't work. Hence for Windows we will use poll based notifier.
fileWatcher = filenotify.NewPollingWatcher()
} else {
var err error
fileWatcher, err = filenotify.New()
if err != nil {
return nil, err
}
}
logger := logrus.WithFields(logrus.Fields{
@@ -652,6 +664,7 @@ func watchFile(name string) (filenotify.FileWatcher, error) {
})
if err := fileWatcher.Add(name); err != nil {
// we will retry using file poller.
logger.WithError(err).Warnf("falling back to file poller")
fileWatcher.Close()
fileWatcher = filenotify.NewPollingWatcher()
@@ -662,5 +675,6 @@ func watchFile(name string) (filenotify.FileWatcher, error) {
return nil, err
}
}
return fileWatcher, nil
}

View File

@@ -4,7 +4,6 @@ import (
"context"
"fmt"
"net"
"runtime"
"sort"
"strconv"
"strings"
@@ -230,9 +229,7 @@ func (daemon *Daemon) releaseIngress(id string) {
return
}
daemon.deleteLoadBalancerSandbox(n)
if err := n.Delete(); err != nil {
if err := n.Delete(libnetwork.NetworkDeleteOptionRemoveLB); err != nil {
logrus.Errorf("Failed to delete ingress network %s: %v", n.ID(), err)
return
}
@@ -349,7 +346,7 @@ func (daemon *Daemon) createNetwork(create types.NetworkCreateRequest, id string
nwOptions = append(nwOptions, libnetwork.NetworkOptionConfigFrom(create.ConfigFrom.Network))
}
if agent && driver == "overlay" && (create.Ingress || runtime.GOOS == "windows") {
if agent && driver == "overlay" {
nodeIP, exists := daemon.GetAttachmentStore().GetIPForNetwork(id)
if !exists {
return nil, fmt.Errorf("Failed to find a load balancer IP to use for network: %v", id)
@@ -512,37 +509,6 @@ func (daemon *Daemon) DeleteNetwork(networkID string) error {
return daemon.deleteNetwork(n, false)
}
func (daemon *Daemon) deleteLoadBalancerSandbox(n libnetwork.Network) {
controller := daemon.netController
//The only endpoint left should be the LB endpoint (nw.Name() + "-endpoint")
endpoints := n.Endpoints()
if len(endpoints) == 1 {
sandboxName := n.Name() + "-sbox"
info := endpoints[0].Info()
if info != nil {
sb := info.Sandbox()
if sb != nil {
if err := sb.DisableService(); err != nil {
logrus.Warnf("Failed to disable service on sandbox %s: %v", sandboxName, err)
//Ignore error and attempt to delete the load balancer endpoint
}
}
}
if err := endpoints[0].Delete(true); err != nil {
logrus.Warnf("Failed to delete endpoint %s (%s) in %s: %v", endpoints[0].Name(), endpoints[0].ID(), sandboxName, err)
//Ignore error and attempt to delete the sandbox.
}
if err := controller.SandboxDestroy(sandboxName); err != nil {
logrus.Warnf("Failed to delete %s sandbox: %v", sandboxName, err)
//Ignore error and attempt to delete the network.
}
}
}
func (daemon *Daemon) deleteNetwork(nw libnetwork.Network, dynamic bool) error {
if runconfig.IsPreDefinedNetwork(nw.Name()) && !dynamic {
err := fmt.Errorf("%s is a pre-defined network and cannot be removed", nw.Name())

View File

@@ -215,6 +215,10 @@ func (daemon *Daemon) registerMountPoints(container *container.Container, hostCo
}
}
if mp.Type == mounttypes.TypeBind {
mp.SkipMountpointCreation = true
}
binds[mp.Destination] = true
dereferenceIfExists(mp.Destination)
mountPoints[mp.Destination] = mp

distribution/oci.go (new file, 45 lines)
View File

@@ -0,0 +1,45 @@
package distribution
import (
"fmt"
"github.com/docker/distribution"
"github.com/docker/distribution/manifest/manifestlist"
"github.com/docker/distribution/manifest/schema2"
digest "github.com/opencontainers/go-digest"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
)
func init() {
// TODO: Remove this registration if distribution is included with OCI support
ocischemaFunc := func(b []byte) (distribution.Manifest, distribution.Descriptor, error) {
m := new(schema2.DeserializedManifest)
err := m.UnmarshalJSON(b)
if err != nil {
return nil, distribution.Descriptor{}, err
}
dgst := digest.FromBytes(b)
return m, distribution.Descriptor{Digest: dgst, Size: int64(len(b)), MediaType: ocispec.MediaTypeImageManifest}, err
}
err := distribution.RegisterManifestSchema(ocispec.MediaTypeImageManifest, ocischemaFunc)
if err != nil {
panic(fmt.Sprintf("Unable to register manifest: %s", err))
}
manifestListFunc := func(b []byte) (distribution.Manifest, distribution.Descriptor, error) {
m := new(manifestlist.DeserializedManifestList)
err := m.UnmarshalJSON(b)
if err != nil {
return nil, distribution.Descriptor{}, err
}
dgst := digest.FromBytes(b)
return m, distribution.Descriptor{Digest: dgst, Size: int64(len(b)), MediaType: ocispec.MediaTypeImageIndex}, err
}
err = distribution.RegisterManifestSchema(ocispec.MediaTypeImageIndex, manifestListFunc)
if err != nil {
panic(fmt.Sprintf("Unable to register manifest: %s", err))
}
}

View File

@@ -74,11 +74,14 @@ func filterManifests(manifests []manifestlist.ManifestDescriptor, p specs.Platfo
if (manifestDescriptor.Platform.Architecture == runtime.GOARCH) &&
((p.OS != "" && manifestDescriptor.Platform.OS == p.OS) || // Explicit user request for an OS we know we support
(p.OS == "" && system.IsOSSupported(manifestDescriptor.Platform.OS))) { // No user requested OS, but one we can support
matches = append(matches, manifestDescriptor)
logrus.Debugf("found match %s/%s %s with media type %s, digest %s", manifestDescriptor.Platform.OS, runtime.GOARCH, manifestDescriptor.Platform.OSVersion, manifestDescriptor.MediaType, manifestDescriptor.Digest.String())
if strings.EqualFold("windows", manifestDescriptor.Platform.OS) {
if err := checkImageCompatibility("windows", manifestDescriptor.Platform.OSVersion); err != nil {
continue
}
foundWindowsMatch = true
}
matches = append(matches, manifestDescriptor)
logrus.Debugf("found match %s/%s %s with media type %s, digest %s", manifestDescriptor.Platform.OS, runtime.GOARCH, manifestDescriptor.Platform.OSVersion, manifestDescriptor.MediaType, manifestDescriptor.Digest.String())
} else {
logrus.Debugf("ignoring %s/%s %s with media type %s, digest %s", manifestDescriptor.Platform.OS, manifestDescriptor.Platform.Architecture, manifestDescriptor.Platform.OSVersion, manifestDescriptor.MediaType, manifestDescriptor.Digest.String())
}
@@ -103,7 +106,8 @@ func (mbv manifestsByVersion) Less(i, j int) bool {
// TODO: Split version by parts and compare
// TODO: Prefer versions which have a greater version number
// Move compatible versions to the top, with no other ordering changes
return versionMatch(mbv.list[i].Platform.OSVersion, mbv.version) && !versionMatch(mbv.list[j].Platform.OSVersion, mbv.version)
return (strings.EqualFold("windows", mbv.list[i].Platform.OS) && !strings.EqualFold("windows", mbv.list[j].Platform.OS)) ||
(versionMatch(mbv.list[i].Platform.OSVersion, mbv.version) && !versionMatch(mbv.list[j].Platform.OSVersion, mbv.version))
}
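The intent of the new Less is easier to read with concrete values: Windows entries are moved ahead of non-Windows entries, and among those, entries whose OSVersion matches the host's version sort first. A standalone toy reproduction of that ordering (the real manifestsByVersion type is unexported, and versionMatch is approximated here with a prefix check):

package main

import (
	"fmt"
	"sort"
	"strings"
)

type entry struct{ os, osVersion string }

func main() {
	hostVersion := "10.0.17134"
	list := []entry{
		{"linux", ""},
		{"windows", "10.0.16299"},
		{"windows", "10.0.17134"},
	}
	sort.SliceStable(list, func(i, j int) bool {
		// Same shape as manifestsByVersion.Less above: Windows first, then
		// entries whose OSVersion matches the host version.
		return (strings.EqualFold("windows", list[i].os) && !strings.EqualFold("windows", list[j].os)) ||
			(strings.HasPrefix(list[i].osVersion, hostVersion) && !strings.HasPrefix(list[j].osVersion, hostVersion))
	})
	fmt.Println(list)
	// [{windows 10.0.17134} {windows 10.0.16299} {linux }]
}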
func (mbv manifestsByVersion) Len() int {

View File

@@ -17,11 +17,13 @@ import (
"github.com/docker/docker/dockerversion"
"github.com/docker/docker/registry"
"github.com/docker/go-connections/sockets"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
)
// ImageTypes represents the schema2 config types for images
var ImageTypes = []string{
schema2.MediaTypeImageConfig,
ocispec.MediaTypeImageConfig,
// Handle unexpected values from https://github.com/docker/distribution/issues/1621
// (see also https://github.com/docker/docker/issues/22378,
// https://github.com/docker/docker/issues/30083)

View File

@@ -14,4 +14,5 @@ const (
RuncCommitID = "library-import"
InitCommitID = "library-import"
PlatformName = ""
ProductName = ""
)

View File

@@ -4,7 +4,7 @@
# containerd is also pinned in vendor.conf. When updating the binary
# version you may also need to update the vendor version to pick up bug
# fixes or new APIs.
CONTAINERD_COMMIT=cbef57047e900aeb2bafe7a634919bec13f4a2a5 # v1.1.1-rc.1
CONTAINERD_COMMIT=468a545b9edcd5932818eb9de8e72413e616e86e # v1.1.2
install_containerd() {
echo "Install containerd version $CONTAINERD_COMMIT"

View File

@@ -3,7 +3,7 @@
# LIBNETWORK_COMMIT is used to build the docker-userland-proxy binary. When
# updating the binary version, consider updating github.com/docker/libnetwork
# in vendor.conf accordingly
LIBNETWORK_COMMIT=19279f0492417475b6bfbd0aa529f73e8f178fb5
LIBNETWORK_COMMIT=3ac297bc7fd0afec9051bbb47024c9bc1d75bf5b
install_proxy() {
case "$1" in

View File

@@ -1,14 +1,14 @@
#!/bin/sh
# When updating RUNC_COMMIT, also update runc in vendor.conf accordingly
RUNC_COMMIT=69663f0bd4b60df09991c08812a60108003fa340
RUNC_COMMIT=a592beb5bc4c4092b1b1bac971afed27687340c5
install_runc() {
# Do not build with ambient capabilities support
RUNC_BUILDTAGS="${RUNC_BUILDTAGS:-"seccomp apparmor selinux"}"
echo "Install runc version $RUNC_COMMIT"
git clone https://github.com/opencontainers/runc.git "$GOPATH/src/github.com/opencontainers/runc"
git clone https://github.com/docker/runc.git "$GOPATH/src/github.com/opencontainers/runc"
cd "$GOPATH/src/github.com/opencontainers/runc"
git checkout -q "$RUNC_COMMIT"
if [ -z "$1" ]; then
@@ -16,7 +16,8 @@ install_runc() {
else
target="$1"
fi
make BUILDTAGS="$RUNC_BUILDTAGS" "$target"
# TODO: Remove me before 18.06.4
make BUILDTAGS="$RUNC_BUILDTAGS" VERSION="1.0.0-rc5+dev.docker-18.06" "$target"
mkdir -p ${PREFIX}
cp runc ${PREFIX}/docker-runc
}

View File

@@ -365,7 +365,7 @@ Try {
# Run autogen if building binaries or running unit tests.
if ($Client -or $Daemon -or $TestUnit) {
Write-Host "INFO: Invoking autogen..."
Try { .\hack\make\.go-autogen.ps1 -CommitString $gitCommit -DockerVersion $dockerVersion -Platform "$env:PLATFORM" }
Try { .\hack\make\.go-autogen.ps1 -CommitString $gitCommit -DockerVersion $dockerVersion -Platform "$env:PLATFORM" -Product "$env:PRODUCT" }
Catch [Exception] { Throw $_ }
}

View File

@@ -21,6 +21,7 @@ const (
IAmStatic string = "${IAMSTATIC:-true}"
ContainerdCommitID string = "${CONTAINERD_COMMIT}"
PlatformName string = "${PLATFORM}"
ProductName string = "${PRODUCT}"
)
// AUTOGENERATED FILE; see /go/src/github.com/docker/docker/hack/make/.go-autogen

View File

@@ -15,7 +15,8 @@
param(
[Parameter(Mandatory=$true)][string]$CommitString,
[Parameter(Mandatory=$true)][string]$DockerVersion,
[Parameter(Mandatory=$false)][string]$Platform
[Parameter(Mandatory=$false)][string]$Platform,
[Parameter(Mandatory=$false)][string]$Product
)
$ErrorActionPreference = "Stop"
@@ -45,6 +46,7 @@ const (
Version string = "'+$DockerVersion+'"
BuildTime string = "'+$buildDateTime+'"
PlatformName string = "'+$Platform+'"
ProductName string = "'+$Product+'"
)
// AUTOGENERATED FILE; see hack\make\.go-autogen.ps1

View File

@@ -610,17 +610,17 @@ func (s *DockerNetworkSuite) TestDockerNetworkIPAMMultipleNetworks(c *check.C) {
// test network with multiple subnets
// bridge network doesn't support multiple subnets. hence, use a dummy driver that supports
dockerCmd(c, "network", "create", "-d", dummyNetworkDriver, "--subnet=192.168.0.0/16", "--subnet=192.170.0.0/16", "test6")
dockerCmd(c, "network", "create", "-d", dummyNetworkDriver, "--subnet=192.170.0.0/16", "--subnet=192.171.0.0/16", "test6")
assertNwIsAvailable(c, "test6")
// test network with multiple subnets with valid ipam combinations
// also check same subnet across networks when the driver supports it.
dockerCmd(c, "network", "create", "-d", dummyNetworkDriver,
"--subnet=192.168.0.0/16", "--subnet=192.170.0.0/16",
"--gateway=192.168.0.100", "--gateway=192.170.0.100",
"--ip-range=192.168.1.0/24",
"--aux-address", "a=192.168.1.5", "--aux-address", "b=192.168.1.6",
"--aux-address", "c=192.170.1.5", "--aux-address", "d=192.170.1.6",
"--subnet=192.172.0.0/16", "--subnet=192.173.0.0/16",
"--gateway=192.172.0.100", "--gateway=192.173.0.100",
"--ip-range=192.172.1.0/24",
"--aux-address", "a=192.172.1.5", "--aux-address", "b=192.172.1.6",
"--aux-address", "c=192.173.1.5", "--aux-address", "d=192.173.1.6",
"test7")
assertNwIsAvailable(c, "test7")
@@ -1646,9 +1646,9 @@ func (s *DockerSuite) TestDockerNetworkInternalMode(c *check.C) {
c.Assert(waitRun("first"), check.IsNil)
dockerCmd(c, "run", "-d", "--net=internal", "--name=second", "busybox:glibc", "top")
c.Assert(waitRun("second"), check.IsNil)
out, _, err := dockerCmdWithError("exec", "first", "ping", "-W", "4", "-c", "1", "www.google.com")
out, _, err := dockerCmdWithError("exec", "first", "ping", "-W", "4", "-c", "1", "8.8.8.8")
c.Assert(err, check.NotNil)
c.Assert(out, checker.Contains, "ping: bad address")
c.Assert(out, checker.Contains, "100% packet loss")
_, _, err = dockerCmdWithError("exec", "second", "ping", "-c", "1", "first")
c.Assert(err, check.IsNil)
}

View File

@@ -3948,6 +3948,7 @@ func (s *DockerSuite) TestRunAttachFailedNoLeak(c *check.C) {
// TODO Windows Post TP5. Fix the error message string
c.Assert(strings.Contains(string(out), "port is already allocated") ||
strings.Contains(string(out), "were not connected because a duplicate name exists") ||
strings.Contains(string(out), "The specified port already exists") ||
strings.Contains(string(out), "HNS failed with error : Failed to create endpoint") ||
strings.Contains(string(out), "HNS failed with error : The object already exists"), checker.Equals, true, check.Commentf("Output: %s", out))
dockerCmd(c, "rm", "-f", "test")

View File

@@ -394,12 +394,12 @@ func testGraphDriverPull(c client.APIClient, d *daemon.Daemon) func(*testing.T)
defer d.Stop(t)
ctx := context.Background()
r, err := c.ImagePull(ctx, "busybox:latest", types.ImagePullOptions{})
r, err := c.ImagePull(ctx, "busybox:latest@sha256:bbc3a03235220b170ba48a157dd097dd1379299370e1ed99ce976df0355d24f0", types.ImagePullOptions{})
assert.NilError(t, err)
_, err = io.Copy(ioutil.Discard, r)
assert.NilError(t, err)
container.Run(t, ctx, c, container.WithImage("busybox:latest"))
container.Run(t, ctx, c, container.WithImage("busybox:latest@sha256:bbc3a03235220b170ba48a157dd097dd1379299370e1ed99ce976df0355d24f0"))
}
}

View File

@@ -42,18 +42,17 @@ type container struct {
// have access to the Spec
ociSpec *specs.Spec
isWindows bool
manualStopRequested bool
hcsContainer hcsshim.Container
isWindows bool
hcsContainer hcsshim.Container
id string
status Status
exitedAt time.Time
exitCode uint32
waitCh chan struct{}
init *process
execs map[string]*process
updatePending bool
id string
status Status
exitedAt time.Time
exitCode uint32
waitCh chan struct{}
init *process
execs map[string]*process
terminateInvoked bool
}
// Win32 error codes that are used for various workarounds
@@ -324,15 +323,15 @@ func (c *client) createWindows(id string, spec *specs.Spec, runtimeOptions inter
logger.Debug("starting container")
if err = hcsContainer.Start(); err != nil {
c.logger.WithError(err).Error("failed to start container")
ctr.debugGCS()
ctr.Lock()
if err := c.terminateContainer(ctr); err != nil {
c.logger.WithError(err).Error("failed to cleanup after a failed Start")
} else {
c.logger.Debug("cleaned up after failed Start by calling Terminate")
}
ctr.Unlock()
return err
}
ctr.debugGCS()
c.Lock()
c.containers[id] = ctr
@@ -524,11 +523,13 @@ func (c *client) createLinux(id string, spec *specs.Spec, runtimeOptions interfa
if err = hcsContainer.Start(); err != nil {
c.logger.WithError(err).Error("failed to start container")
ctr.debugGCS()
ctr.Lock()
if err := c.terminateContainer(ctr); err != nil {
c.logger.WithError(err).Error("failed to cleanup after a failed Start")
} else {
c.logger.Debug("cleaned up after failed Start by calling Terminate")
}
ctr.Unlock()
return err
}
ctr.debugGCS()
@@ -848,8 +849,6 @@ func (c *client) SignalProcess(_ context.Context, containerID, processID string,
return err
}
ctr.manualStopRequested = true
logger := c.logger.WithFields(logrus.Fields{
"container": containerID,
"process": processID,
@@ -861,11 +860,14 @@ func (c *client) SignalProcess(_ context.Context, containerID, processID string,
if processID == InitProcessName {
if syscall.Signal(signal) == syscall.SIGKILL {
// Terminate the compute system
ctr.Lock()
ctr.terminateInvoked = true
if err := ctr.hcsContainer.Terminate(); err != nil {
if !hcsshim.IsPending(err) {
logger.WithError(err).Error("failed to terminate hccshim container")
}
}
ctr.Unlock()
} else {
// Shut down the container
if err := ctr.hcsContainer.Shutdown(); err != nil {
@@ -1167,12 +1169,17 @@ func (c *client) getProcess(containerID, processID string) (*container, *process
return ctr, p, nil
}
// ctr mutex must be held when calling this function.
func (c *client) shutdownContainer(ctr *container) error {
const shutdownTimeout = time.Minute * 5
err := ctr.hcsContainer.Shutdown()
var err error
const waitTimeout = time.Minute * 5
if hcsshim.IsPending(err) {
err = ctr.hcsContainer.WaitTimeout(shutdownTimeout)
if !ctr.terminateInvoked {
err = ctr.hcsContainer.Shutdown()
}
if hcsshim.IsPending(err) || ctr.terminateInvoked {
err = ctr.hcsContainer.WaitTimeout(waitTimeout)
} else if hcsshim.IsAlreadyStopped(err) {
err = nil
}
@@ -1192,8 +1199,10 @@ func (c *client) shutdownContainer(ctr *container) error {
return nil
}
// ctr mutex must be held when calling this function.
func (c *client) terminateContainer(ctr *container) error {
const terminateTimeout = time.Minute * 5
ctr.terminateInvoked = true
err := ctr.hcsContainer.Terminate()
if hcsshim.IsPending(err) {
@@ -1259,7 +1268,6 @@ func (c *client) reapProcess(ctr *container, p *process) int {
ctr.exitedAt = exitedAt
ctr.exitCode = uint32(exitCode)
close(ctr.waitCh)
ctr.Unlock()
if err := c.shutdownContainer(ctr); err != nil {
exitCode = -1
@@ -1273,6 +1281,7 @@ func (c *client) reapProcess(ctr *container, p *process) int {
} else {
logger.Debug("completed container shutdown")
}
ctr.Unlock()
if err := ctr.hcsContainer.Close(); err != nil {
exitCode = -1

View File

@@ -37,6 +37,10 @@ func (r *remote) setDefaults() {
if r.snapshotter == "" {
r.snapshotter = "overlay"
}
// Disable CRI plugin by default if containerd is managed as child-process
// of dockerd. See https://github.com/moby/moby/issues/37507
r.DisabledPlugins = append(r.DisabledPlugins, "cri")
delete(r.pluginConfs.Plugins, "cri")
}
func (r *remote) stopDaemon() {

View File

@@ -114,6 +114,7 @@ func DefaultLinuxSpec() specs.Spec {
s.Linux = &specs.Linux{
MaskedPaths: []string{
"/proc/acpi",
"/proc/kcore",
"/proc/keys",
"/proc/latency_stats",

View File

@@ -139,14 +139,14 @@ type AuxFormatter struct {
}
// Emit emits the given interface as an aux progress message
func (sf *AuxFormatter) Emit(aux interface{}) error {
func (sf *AuxFormatter) Emit(id string, aux interface{}) error {
auxJSONBytes, err := json.Marshal(aux)
if err != nil {
return err
}
auxJSON := new(json.RawMessage)
*auxJSON = auxJSONBytes
msgJSON, err := json.Marshal(&jsonmessage.JSONMessage{Aux: auxJSON})
msgJSON, err := json.Marshal(&jsonmessage.JSONMessage{ID: id, Aux: auxJSON})
if err != nil {
return err
}

View File

@@ -106,7 +106,7 @@ func TestAuxFormatterEmit(t *testing.T) {
sampleAux := &struct {
Data string
}{"Additional data"}
err := aux.Emit(sampleAux)
err := aux.Emit("", sampleAux)
assert.NilError(t, err)
assert.Check(t, is.Equal(`{"aux":{"Data":"Additional data"}}`+streamNewline, b.String()))
}

View File

@@ -1,7 +1,7 @@
# the following lines are in sorted order, FYI
github.com/Azure/go-ansiterm d6e3b3328b783f23731bc4d058875b0371ff8109
github.com/Microsoft/hcsshim v0.6.11
github.com/Microsoft/go-winio v0.4.7
github.com/Microsoft/go-winio v0.4.8
github.com/docker/libtrust 9cbd2a1374f46905c68a4eb3694a130610adc62a
github.com/go-check/check 4ed411733c5785b40214c70bce814c3a3a689609 https://github.com/cpuguy83/check.git
github.com/golang/gddo 9b12a26f3fbd7397dee4e20939ddca719d840d2a
@@ -26,7 +26,7 @@ github.com/imdario/mergo v0.3.5
golang.org/x/sync fd80eb99c8f653c847d294a001bdf2a3a6f768f5
# buildkit
github.com/moby/buildkit cce2080ddbe4698912f2290892b247c83627efa8
github.com/moby/buildkit 98f1604134f945d48538ffca0e18662337b4a850
github.com/tonistiigi/fsutil 8abad97ee3969cdf5e9c367f46adba2c212b3ddb
github.com/grpc-ecosystem/grpc-opentracing 8e809c8a86450a29b90dcc9efbf062d0fe6d9746
github.com/opentracing/opentracing-go 1361b9cd60be79c4c3a7fa9841b3c132e40066a7
@@ -37,14 +37,14 @@ github.com/mitchellh/hashstructure 2bca23e0e452137f789efbc8610126fd8b94f73b
#get libnetwork packages
# When updating, also update LIBNETWORK_COMMIT in hack/dockerfile/install/proxy accordingly
github.com/docker/libnetwork 19279f0492417475b6bfbd0aa529f73e8f178fb5
github.com/docker/libnetwork d00ceed44cc447c77f25cdf5d59e83163bdcb4c9
github.com/docker/go-events 9461782956ad83b30282bf90e31fa6a70c255ba9
github.com/armon/go-radix e39d623f12e8e41c7b5529e9a9dd67a1e2261f80
github.com/armon/go-metrics eb0af217e5e9747e41dd5303755356b62d28e3ec
github.com/hashicorp/go-msgpack 71c2886f5a673a35f909803f38ece5810165097b
github.com/hashicorp/memberlist 3d8438da9589e7b608a83ffac1ef8211486bcb7c
github.com/sean-/seed e2103e2c35297fb7e17febb81e49b312087a2372
github.com/hashicorp/go-sockaddr acd314c5781ea706c710d9ea70069fd2e110d61d
github.com/hashicorp/go-sockaddr 6d291a969b86c4b633730bfc6b8b9d64c3aafed9
github.com/hashicorp/go-multierror fcdddc395df1ddf4247c69bd436e84cfa0733f7e
github.com/hashicorp/serf 598c54895cc5a7b1a24a398d635e8c0ea0959870
github.com/docker/libkv 1d8431073ae03cdaedb198a89722f3aab6d418ef
@@ -114,18 +114,18 @@ github.com/googleapis/gax-go v2.0.0
google.golang.org/genproto 694d95ba50e67b2e363f3483057db5d4910c18f9
# containerd
github.com/containerd/containerd 08f7ee9828af1783dc98cc5cc1739e915697c667
github.com/containerd/containerd b41633746ed4833f52c3c071e8edcfa2713e5677
github.com/containerd/fifo 3d5202aec260678c48179c56f40e6f38a095738c
github.com/containerd/continuity d3c23511c1bf5851696cba83143d9cbcd666869b
github.com/containerd/cgroups fe281dd265766145e943a034aa41086474ea6130
github.com/containerd/console 9290d21dc56074581f619579c43d970b4514bc08
github.com/containerd/console 5d1b48d6114b8c9666f0c8b916f871af97b0a761
github.com/containerd/go-runc f271fa2021de855d4d918dbef83c5fe19db1bdd
github.com/containerd/typeurl f6943554a7e7e88b3c14aad190bf05932da84788
github.com/stevvooe/ttrpc d4528379866b0ce7e9d71f3eb96f0582fc374577
github.com/containerd/typeurl a93fcdb778cd272c6e9b3028b2f42d813e785d40
github.com/containerd/ttrpc 94dde388801693c54f88a6596f713b51a8b30b2d
github.com/gogo/googleapis 08a7655d27152912db7aaf4f983275eaf8d128ef
# cluster
github.com/docker/swarmkit edd5641391926a50bc5f7040e20b7efc05003c26
github.com/docker/swarmkit 8852e8840e30d69db0b39a4a3d6447362e17c64f
github.com/gogo/protobuf v1.0.0
github.com/cloudflare/cfssl 7fb22c8cba7ecaf98e4082d22d65800cf45e042a
github.com/fernet/fernet-go 1b2437bc582b3cfbb341ee5a29f8ef5b42912ff2

View File

@@ -121,6 +121,11 @@ func (f *win32MessageBytePipe) Read(b []byte) (int, error) {
// zero-byte message, ensure that all future Read() calls
// also return EOF.
f.readEOF = true
} else if err == syscall.ERROR_MORE_DATA {
// ERROR_MORE_DATA indicates that the pipe's read mode is message mode
// and the message still has more bytes. Treat this as a success, since
// this package presents all named pipes as byte streams.
err = nil
}
return n, err
}
@@ -175,16 +180,6 @@ func DialPipe(path string, timeout *time.Duration) (net.Conn, error) {
return nil, err
}
var state uint32
err = getNamedPipeHandleState(h, &state, nil, nil, nil, nil, 0)
if err != nil {
return nil, err
}
if state&cPIPE_READMODE_MESSAGE != 0 {
return nil, &os.PathError{Op: "open", Path: path, Err: errors.New("message readmode pipes not supported")}
}
f, err := makeWin32File(h)
if err != nil {
syscall.Close(h)
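The net effect of the two go-winio hunks above is that message-mode pipes no longer need to be rejected in DialPipe: ERROR_MORE_DATA is swallowed inside Read, so the connection behaves like any other byte stream. A hedged caller-side sketch (drainPipe and handle are illustrative names, not part of go-winio):

package pipeexample

import (
	"io"
	"net"
)

// drainPipe reads conn until EOF, treating the named pipe as a plain byte
// stream. With the change above this also works when the server end was
// created in message mode, since partial messages come back with a nil error.
func drainPipe(conn net.Conn, handle func([]byte)) error {
	buf := make([]byte, 4096)
	for {
		n, err := conn.Read(buf)
		if n > 0 {
			handle(buf[:n])
		}
		if err == io.EOF {
			return nil
		}
		if err != nil {
			return err
		}
	}
}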

View File

@@ -262,10 +262,14 @@ func (ec *EpollConsole) Shutdown(close func(int) error) error {
// signalRead signals that the console is readable.
func (ec *EpollConsole) signalRead() {
ec.readc.L.Lock()
ec.readc.Signal()
ec.readc.L.Unlock()
}
// signalWrite signals that the console is writable.
func (ec *EpollConsole) signalWrite() {
ec.writec.L.Lock()
ec.writec.Signal()
ec.writec.L.Unlock()
}

View File

@@ -255,3 +255,14 @@ func (l *logIO) Wait() {
func (l *logIO) Close() error {
return nil
}
// Load the io for a container but do not attach
//
// Allows io to be loaded on the task for deletion without
// starting copy routines
func Load(set *FIFOSet) (IO, error) {
return &cio{
config: set.Config,
closers: []io.Closer{set},
}, nil
}
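Load exists so that a task's FIFOs can be attached for cleanup without starting any stdio copy goroutines, which is exactly what a client wants when it only intends to delete the task. A sketch of the usual call site, assuming the standard containerd client API (deleteTask is an illustrative helper, not part of the library):

package cleanup

import (
	"context"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
)

// deleteTask looks up the container's task with cio.Load, so the existing
// FIFOs are closed on cleanup but no stdio copy routines are started, and
// then deletes the task.
func deleteTask(ctx context.Context, c containerd.Container) error {
	task, err := c.Task(ctx, cio.Load)
	if err != nil {
		return err
	}
	_, err = task.Delete(ctx)
	return err
}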

View File

@@ -307,6 +307,12 @@ func (c *container) get(ctx context.Context) (containers.Container, error) {
// get the existing fifo paths from the task information stored by the daemon
func attachExistingIO(response *tasks.GetResponse, ioAttach cio.Attach) (cio.IO, error) {
fifoSet := loadFifos(response)
return ioAttach(fifoSet)
}
// loadFifos loads the containers fifos
func loadFifos(response *tasks.GetResponse) *cio.FIFOSet {
path := getFifoDir([]string{
response.Process.Stdin,
response.Process.Stdout,
@@ -315,13 +321,12 @@ func attachExistingIO(response *tasks.GetResponse, ioAttach cio.Attach) (cio.IO,
closer := func() error {
return os.RemoveAll(path)
}
fifoSet := cio.NewFIFOSet(cio.Config{
return cio.NewFIFOSet(cio.Config{
Stdin: response.Process.Stdin,
Stdout: response.Process.Stdout,
Stderr: response.Process.Stderr,
Terminal: response.Process.Terminal,
}, closer)
return ioAttach(fifoSet)
}
// getFifoDir looks for any non-empty path for a stdio fifo

View File

@@ -153,7 +153,9 @@ func createDefaultSpec(ctx context.Context, id string) (*Spec, error) {
},
Linux: &specs.Linux{
MaskedPaths: []string{
"/proc/acpi",
"/proc/kcore",
"/proc/keys",
"/proc/latency_stats",
"/proc/timer_list",
"/proc/timer_stats",

View File

@@ -26,8 +26,8 @@ import (
"github.com/containerd/containerd/errdefs"
"github.com/containerd/containerd/runtime"
shim "github.com/containerd/containerd/runtime/shim/v1"
"github.com/containerd/ttrpc"
"github.com/pkg/errors"
"github.com/stevvooe/ttrpc"
)
// Process implements a linux process

View File

@@ -32,9 +32,9 @@ import (
"github.com/containerd/containerd/runtime/shim/client"
shim "github.com/containerd/containerd/runtime/shim/v1"
runc "github.com/containerd/go-runc"
"github.com/containerd/ttrpc"
"github.com/gogo/protobuf/types"
"github.com/pkg/errors"
"github.com/stevvooe/ttrpc"
)
// Task on a linux based system

View File

@@ -31,9 +31,9 @@ import (
"golang.org/x/sys/unix"
"github.com/containerd/ttrpc"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"github.com/stevvooe/ttrpc"
"github.com/containerd/containerd/events"
"github.com/containerd/containerd/log"

View File

@@ -50,7 +50,7 @@ import strings "strings"
import reflect "reflect"
import context "context"
import ttrpc "github.com/stevvooe/ttrpc"
import ttrpc "github.com/containerd/ttrpc"
import io "io"

View File

@@ -1,7 +1,7 @@
github.com/containerd/go-runc f271fa2021de855d4d918dbef83c5fe19db1bdd5
github.com/containerd/console 9290d21dc56074581f619579c43d970b4514bc08
github.com/containerd/console 5d1b48d6114b8c9666f0c8b916f871af97b0a761
github.com/containerd/cgroups fe281dd265766145e943a034aa41086474ea6130
github.com/containerd/typeurl f6943554a7e7e88b3c14aad190bf05932da84788
github.com/containerd/typeurl a93fcdb778cd272c6e9b3028b2f42d813e785d40
github.com/containerd/fifo 3d5202aec260678c48179c56f40e6f38a095738c
github.com/containerd/btrfs 2e1aa0ddf94f91fa282b6ed87c23bf0d64911244
github.com/containerd/continuity d3c23511c1bf5851696cba83143d9cbcd666869b
@@ -37,13 +37,14 @@ github.com/Microsoft/hcsshim v0.6.11
github.com/boltdb/bolt e9cf4fae01b5a8ff89d0ec6b32f0d9c9f79aefdd
google.golang.org/genproto d80a6e20e776b0b17a324d0ba1ab50a39c8e8944
golang.org/x/text 19e51611da83d6be54ddafce4a4af510cb3e9ea4
github.com/stevvooe/ttrpc d4528379866b0ce7e9d71f3eb96f0582fc374577
github.com/containerd/ttrpc 94dde388801693c54f88a6596f713b51a8b30b2d
github.com/syndtr/gocapability db04d3cc01c8b54962a58ec7e491717d06cfcc16
gotest.tools v2.1.0
github.com/google/go-cmp v0.1.0
github.com/containerd/cri 8bcb9a95394e8d7845da1d6a994d3ac2a86d22f0
github.com/containerd/go-cni f2d7272f12d045b16ed924f50e91f9f9cecc55a7
# cri dependencies
github.com/containerd/cri v1.11.0
github.com/containerd/go-cni 5882530828ecf62032409b298a3e8b19e08b6534
github.com/blang/semver v3.1.0
github.com/containernetworking/cni v0.6.0
github.com/containernetworking/plugins v0.7.0
@@ -57,22 +58,26 @@ github.com/golang/glog 44145f04b68cf362d9c4df2182967c2275eaefed
github.com/google/gofuzz 44d81051d367757e1c7c6a5a86423ece9afcf63c
github.com/hashicorp/errwrap 7554cd9344cec97297fa6649b055a8c98c2a1e55
github.com/hashicorp/go-multierror ed905158d87462226a13fe39ddf685ea65f1c11f
github.com/json-iterator/go 1.0.4
github.com/opencontainers/runtime-tools 6073aff4ac61897f75895123f7e24135204a404d
github.com/json-iterator/go f2b4162afba35581b6d4a50d3b8f34e33c144682
github.com/modern-go/reflect2 05fbef0ca5da472bbf96c9322b84a53edc03c9fd
github.com/modern-go/concurrent 1.0.3
github.com/opencontainers/runtime-tools v0.6.0
github.com/opencontainers/selinux 4a2974bf1ee960774ffd517717f1f45325af0206
github.com/seccomp/libseccomp-golang 32f571b70023028bd57d9288c20efbcb237f3ce0
github.com/spf13/pflag v1.0.0
github.com/tchap/go-patricia 5ad6cdb7538b0097d5598c7e57f0a24072adf7dc
github.com/xeipuuv/gojsonpointer 4e3ac2762d5f479393488629ee9370b50873b3a6
github.com/xeipuuv/gojsonreference bd5ef7bd5415a7ac448318e64f11a24cd21e594b
github.com/xeipuuv/gojsonschema 1d523034197ff1f222f6429836dd36a2457a1874
golang.org/x/crypto 49796115aa4b964c318aad4f3084fdb41e9aa067
golang.org/x/time f51c12702a4d776e4c1fa9b0fabab841babae631
gopkg.in/inf.v0 3887ee99ecf07df5b447e9b00d9c0b2adaa9f3e4
gopkg.in/yaml.v2 53feefa2559fb8dfa8d81baad31be332c97d6c77
k8s.io/api 7e796de92438aede7cb5d6bcf6c10f4fa65db560
k8s.io/apimachinery fcb9a12f7875d01f8390b28faedc37dcf2e713b9
k8s.io/apiserver 4a8377c547bbff4576a35b5b5bf4026d9b5aa763
k8s.io/client-go b9a0cf870f239c4a4ecfd3feb075a50e7cbe1473
k8s.io/kubernetes v1.10.0
k8s.io/utils 258e2a2fa64568210fbd6267cf1d8fd87c3cb86e
k8s.io/api 9e5ffd1f1320950b238cfce291b926411f0af722
k8s.io/apimachinery ed135c5b96450fd24e5e981c708114fbbd950697
k8s.io/apiserver a90e3a95c2e91b944bfca8225c4e0d12e42a9eb5
k8s.io/client-go 03bfb9bdcfe5482795b999f39ca3ed9ad42ce5bb
k8s.io/kubernetes v1.11.0
k8s.io/utils 733eca437aa39379e4bcc25e726439dfca40fcff
# zfs dependencies
github.com/containerd/zfs 9a0b8b8b5982014b729cd34eb7cd7a11062aa6ec

View File

@@ -1,6 +1,6 @@
# ttrpc
[![Build Status](https://travis-ci.org/stevvooe/ttrpc.svg?branch=master)](https://travis-ci.org/stevvooe/ttrpc)
[![Build Status](https://travis-ci.org/containerd/ttrpc.svg?branch=master)](https://travis-ci.org/containerd/ttrpc)
GRPC for low-memory environments.
@@ -25,7 +25,7 @@ Create a gogo vanity binary (see
[`cmd/protoc-gen-gogottrpc/main.go`](cmd/protoc-gen-gogottrpc/main.go) for an
example with the ttrpc plugin enabled.
It's recommended to use [`protobuild`](https://github.com/stevvooe/protobuild)
It's recommended to use [`protobuild`](https://github.com//stevvooe/protobuild)
to build the protobufs for this project, but this will work with protoc
directly, if required.

View File

@@ -1,3 +1,19 @@
/*
Copyright The containerd Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package ttrpc
import (

View File

@@ -1,3 +1,19 @@
/*
Copyright The containerd Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package ttrpc
import (
@@ -9,9 +25,9 @@ import (
"sync"
"syscall"
"github.com/containerd/containerd/log"
"github.com/gogo/protobuf/proto"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"google.golang.org/grpc/status"
)
@@ -180,7 +196,7 @@ func (c *Client) run() {
case msg := <-incoming:
call, ok := waiters[msg.StreamID]
if !ok {
log.L.Errorf("ttrpc: received message for unknown channel %v", msg.StreamID)
logrus.Errorf("ttrpc: received message for unknown channel %v", msg.StreamID)
continue
}

42  vendor/github.com/containerd/ttrpc/codec.go  generated vendored  Normal file  View File

@@ -0,0 +1,42 @@
/*
Copyright The containerd Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package ttrpc
import (
"github.com/gogo/protobuf/proto"
"github.com/pkg/errors"
)
type codec struct{}
func (c codec) Marshal(msg interface{}) ([]byte, error) {
switch v := msg.(type) {
case proto.Message:
return proto.Marshal(v)
default:
return nil, errors.Errorf("ttrpc: cannot marshal unknown type: %T", msg)
}
}
func (c codec) Unmarshal(p []byte, msg interface{}) error {
switch v := msg.(type) {
case proto.Message:
return proto.Unmarshal(p, v)
default:
return errors.Errorf("ttrpc: cannot unmarshal into unknown type: %T", msg)
}
}

39  vendor/github.com/containerd/ttrpc/config.go  generated vendored  Normal file  View File

@@ -0,0 +1,39 @@
/*
Copyright The containerd Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package ttrpc
import "github.com/pkg/errors"
type serverConfig struct {
handshaker Handshaker
}
type ServerOpt func(*serverConfig) error
// WithServerHandshaker can be passed to NewServer to ensure that the
// handshaker is called before every connection attempt.
//
// Only one handshaker is allowed per server.
func WithServerHandshaker(handshaker Handshaker) ServerOpt {
return func(c *serverConfig) error {
if c.handshaker != nil {
return errors.New("only one handshaker allowed per server")
}
c.handshaker = handshaker
return nil
}
}

View File

@@ -1,3 +1,19 @@
/*
Copyright The containerd Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package ttrpc
import (

View File

@@ -1,3 +1,19 @@
/*
Copyright The containerd Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package ttrpc
import (
@@ -9,8 +25,8 @@ import (
"sync/atomic"
"time"
"github.com/containerd/containerd/log"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"
)
@@ -51,12 +67,11 @@ func (s *Server) Register(name string, methods map[string]Method) {
s.services.register(name, methods)
}
func (s *Server) Serve(l net.Listener) error {
func (s *Server) Serve(ctx context.Context, l net.Listener) error {
s.addListener(l)
defer s.closeListener(l)
var (
ctx = context.Background()
backoff time.Duration
handshaker = s.config.handshaker
)
@@ -88,7 +103,7 @@ func (s *Server) Serve(l net.Listener) error {
}
sleep := time.Duration(rand.Int63n(int64(backoff)))
log.L.WithError(err).Errorf("ttrpc: failed accept; backoff %v", sleep)
logrus.WithError(err).Errorf("ttrpc: failed accept; backoff %v", sleep)
time.Sleep(sleep)
continue
}
@@ -100,7 +115,7 @@ func (s *Server) Serve(l net.Listener) error {
approved, handshake, err := handshaker.Handshake(ctx, conn)
if err != nil {
log.L.WithError(err).Errorf("ttrpc: refusing connection after handshake")
logrus.WithError(err).Errorf("ttrpc: refusing connection after handshake")
conn.Close()
continue
}
@@ -416,12 +431,12 @@ func (c *serverConn) run(sctx context.Context) {
case response := <-responses:
p, err := c.server.codec.Marshal(response.resp)
if err != nil {
log.L.WithError(err).Error("failed marshaling response")
logrus.WithError(err).Error("failed marshaling response")
return
}
if err := ch.send(ctx, response.id, messageTypeResponse, p); err != nil {
log.L.WithError(err).Error("failed sending message on channel")
logrus.WithError(err).Error("failed sending message on channel")
return
}
@@ -432,7 +447,7 @@ func (c *serverConn) run(sctx context.Context) {
// requests due to a terminal error.
recvErr = nil // connection is now "closing"
if err != nil && err != io.EOF {
log.L.WithError(err).Error("error receiving message")
logrus.WithError(err).Error("error receiving message")
}
case <-shutdown:
return
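Taken together, the changes above mean a ttrpc server is now driven by a caller-supplied context and can vet each connection through a handshaker registered at construction time. A hedged sketch of wiring both together; the passthrough handshaker is illustrative, and its signature is inferred from the Handshake call in Serve above:

package ttrpcexample

import (
	"context"
	"net"

	"github.com/containerd/ttrpc"
)

// passthroughHandshaker approves every connection unchanged.
type passthroughHandshaker struct{}

func (passthroughHandshaker) Handshake(ctx context.Context, conn net.Conn) (net.Conn, interface{}, error) {
	return conn, nil, nil
}

func serve(ctx context.Context, l net.Listener) error {
	// WithServerHandshaker comes from config.go above; only one is allowed per server.
	s, err := ttrpc.NewServer(ttrpc.WithServerHandshaker(passthroughHandshaker{}))
	if err != nil {
		return err
	}
	// Serve now takes the context explicitly instead of using context.Background().
	return s.Serve(ctx, l)
}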

View File

@@ -1,3 +1,19 @@
/*
Copyright The containerd Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package ttrpc
import (

View File

@@ -1,3 +1,19 @@
/*
Copyright The containerd Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package ttrpc
import (

View File

@@ -1,3 +1,19 @@
/*
Copyright The containerd Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package ttrpc
import (

View File

@@ -1,3 +1,19 @@
/*
Copyright The containerd Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package typeurl
import (

View File

@@ -69,6 +69,7 @@ import (
"github.com/docker/libnetwork/netlabel"
"github.com/docker/libnetwork/osl"
"github.com/docker/libnetwork/types"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
@@ -870,7 +871,7 @@ addToStore:
}
}()
if len(network.loadBalancerIP) != 0 {
if network.hasLoadBalancerEndpoint() {
if err = network.createLoadBalancerSandbox(); err != nil {
return nil, err
}
@@ -1143,6 +1144,11 @@ func (c *controller) NewSandbox(containerID string, options ...SandboxOption) (S
}
}
if sb.osSbox != nil {
// Apply operating specific knobs on the load balancer sandbox
sb.osSbox.ApplyOSTweaks(sb.oslTypes)
}
c.Lock()
c.sandboxes[sb.id] = sb
c.Unlock()
@@ -1252,7 +1258,7 @@ func (c *controller) loadDriver(networkType string) error {
}
if err != nil {
if err == plugins.ErrNotFound {
if errors.Cause(err) == plugins.ErrNotFound {
return types.NotFoundErrorf(err.Error())
}
return err

View File

@@ -49,9 +49,11 @@ func (sb *sandbox) setupDefaultGW() error {
createOptions := []EndpointOption{CreateOptionAnonymous()}
eplen := gwEPlen
if len(sb.containerID) < gwEPlen {
eplen = len(sb.containerID)
var gwName string
if len(sb.containerID) <= gwEPlen {
gwName = "gateway_" + sb.containerID
} else {
gwName = "gateway_" + sb.id[:gwEPlen]
}
sbLabels := sb.Labels()
@@ -69,7 +71,7 @@ func (sb *sandbox) setupDefaultGW() error {
createOptions = append(createOptions, epOption)
}
newEp, err := n.CreateEndpoint("gateway_"+sb.containerID[0:eplen], createOptions...)
newEp, err := n.CreateEndpoint(gwName, createOptions...)
if err != nil {
return fmt.Errorf("container %s: endpoint create on GW Network failed: %v", sb.containerID, err)
}

View File

@@ -120,3 +120,13 @@ type TablePeersResult struct {
TableObj
Elements []PeerEntryObj `json:"entries"`
}
// NetworkStatsResult network db stats related to entries and queue len for a network
type NetworkStatsResult struct {
Entries int `json:"entries"`
QueueLen int `json:"qlen"`
}
func (n *NetworkStatsResult) String() string {
return fmt.Sprintf("entries: %d, qlen: %d\n", n.Entries, n.QueueLen)
}

View File

@@ -614,9 +614,7 @@ func (d *driver) checkConflict(config *networkConfiguration) error {
return nil
}
func (d *driver) createNetwork(config *networkConfiguration) error {
var err error
func (d *driver) createNetwork(config *networkConfiguration) (err error) {
defer osl.InitOSContext()()
networkList := d.getNetworks()
@@ -775,7 +773,7 @@ func (d *driver) deleteNetwork(nid string) error {
}
if err := d.storeDelete(ep); err != nil {
logrus.Warnf("Failed to remove bridge endpoint %s from store: %v", ep.id[0:7], err)
logrus.Warnf("Failed to remove bridge endpoint %.7s from store: %v", ep.id, err)
}
}
@@ -1050,7 +1048,7 @@ func (d *driver) CreateEndpoint(nid, eid string, ifInfo driverapi.InterfaceInfo,
}
if err = d.storeUpdate(endpoint); err != nil {
return fmt.Errorf("failed to save bridge endpoint %s to store: %v", endpoint.id[0:7], err)
return fmt.Errorf("failed to save bridge endpoint %.7s to store: %v", endpoint.id, err)
}
return nil
@@ -1116,7 +1114,7 @@ func (d *driver) DeleteEndpoint(nid, eid string) error {
}
if err := d.storeDelete(ep); err != nil {
logrus.Warnf("Failed to remove bridge endpoint %s from store: %v", ep.id[0:7], err)
logrus.Warnf("Failed to remove bridge endpoint %.7s from store: %v", ep.id, err)
}
return nil
@@ -1290,7 +1288,7 @@ func (d *driver) ProgramExternalConnectivity(nid, eid string, options map[string
}()
if err = d.storeUpdate(endpoint); err != nil {
return fmt.Errorf("failed to update bridge endpoint %s to store: %v", endpoint.id[0:7], err)
return fmt.Errorf("failed to update bridge endpoint %.7s to store: %v", endpoint.id, err)
}
if !network.config.EnableICC {
@@ -1332,7 +1330,7 @@ func (d *driver) RevokeExternalConnectivity(nid, eid string) error {
clearEndpointConnections(d.nlh, endpoint)
if err = d.storeUpdate(endpoint); err != nil {
return fmt.Errorf("failed to update bridge endpoint %s to store: %v", endpoint.id[0:7], err)
return fmt.Errorf("failed to update bridge endpoint %.7s to store: %v", endpoint.id, err)
}
return nil
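The repeated swap of id[0:7] for a %.7s format verb throughout these drivers is more than cosmetic: slicing panics when the ID happens to be shorter than seven characters, while the precision verb simply prints at most seven. A minimal illustration:

package main

import "fmt"

func main() {
	long := "b1946ac92492d2347c6235b4d2611184"
	short := "abc"

	// "%.7s" truncates to at most 7 characters and is safe for short input,
	// whereas short[0:7] would panic with "slice bounds out of range".
	fmt.Printf("%.7s\n", long)  // b1946ac
	fmt.Printf("%.7s\n", short) // abc
}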

View File

@@ -62,7 +62,7 @@ func (d *driver) populateNetworks() error {
if err = d.createNetwork(ncfg); err != nil {
logrus.Warnf("could not create bridge network for id %s bridge name %s while booting up from persistent state: %v", ncfg.ID, ncfg.BridgeName, err)
}
logrus.Debugf("Network (%s) restored", ncfg.ID[0:7])
logrus.Debugf("Network (%.7s) restored", ncfg.ID)
}
return nil
@@ -82,16 +82,16 @@ func (d *driver) populateEndpoints() error {
ep := kvo.(*bridgeEndpoint)
n, ok := d.networks[ep.nid]
if !ok {
logrus.Debugf("Network (%s) not found for restored bridge endpoint (%s)", ep.nid[0:7], ep.id[0:7])
logrus.Debugf("Deleting stale bridge endpoint (%s) from store", ep.id[0:7])
logrus.Debugf("Network (%.7s) not found for restored bridge endpoint (%.7s)", ep.nid, ep.id)
logrus.Debugf("Deleting stale bridge endpoint (%.7s) from store", ep.id)
if err := d.storeDelete(ep); err != nil {
logrus.Debugf("Failed to delete stale bridge endpoint (%s) from store", ep.id[0:7])
logrus.Debugf("Failed to delete stale bridge endpoint (%.7s) from store", ep.id)
}
continue
}
n.endpoints[ep.id] = ep
n.restorePortAllocations(ep)
logrus.Debugf("Endpoint (%s) restored to network (%s)", ep.id[0:7], ep.nid[0:7])
logrus.Debugf("Endpoint (%.7s) restored to network (%.7s)", ep.id, ep.nid)
}
return nil
@@ -382,7 +382,7 @@ func (n *bridgeNetwork) restorePortAllocations(ep *bridgeEndpoint) {
ep.extConnConfig.PortBindings = ep.portMapping
_, err := n.allocatePorts(ep, n.config.DefaultBindingIP, n.driver.config.EnableUserlandProxy)
if err != nil {
logrus.Warnf("Failed to reserve existing port mapping for endpoint %s:%v", ep.id[0:7], err)
logrus.Warnf("Failed to reserve existing port mapping for endpoint %.7s:%v", ep.id, err)
}
ep.extConnConfig.PortBindings = tmp
}

View File

@@ -53,7 +53,7 @@ func (d *driver) CreateEndpoint(nid, eid string, ifInfo driverapi.InterfaceInfo,
}
if err := d.storeUpdate(ep); err != nil {
return fmt.Errorf("failed to save ipvlan endpoint %s to store: %v", ep.id[0:7], err)
return fmt.Errorf("failed to save ipvlan endpoint %.7s to store: %v", ep.id, err)
}
n.addEndpoint(ep)
@@ -82,7 +82,7 @@ func (d *driver) DeleteEndpoint(nid, eid string) error {
}
if err := d.storeDelete(ep); err != nil {
logrus.Warnf("Failed to remove ipvlan endpoint %s from store: %v", ep.id[0:7], err)
logrus.Warnf("Failed to remove ipvlan endpoint %.7s from store: %v", ep.id, err)
}
n.deleteEndpoint(ep.id)
return nil

View File

@@ -117,7 +117,7 @@ func (d *driver) Join(nid, eid string, sboxKey string, jinfo driverapi.JoinInfo,
return err
}
if err = d.storeUpdate(ep); err != nil {
return fmt.Errorf("failed to save ipvlan endpoint %s to store: %v", ep.id[0:7], err)
return fmt.Errorf("failed to save ipvlan endpoint %.7s to store: %v", ep.id, err)
}
return nil

View File

@@ -156,7 +156,7 @@ func (d *driver) DeleteNetwork(nid string) error {
}
if err := d.storeDelete(ep); err != nil {
logrus.Warnf("Failed to remove ipvlan endpoint %s from store: %v", ep.id[0:7], err)
logrus.Warnf("Failed to remove ipvlan endpoint %.7s from store: %v", ep.id, err)
}
}
// delete the *network

View File

@@ -95,15 +95,15 @@ func (d *driver) populateEndpoints() error {
ep := kvo.(*endpoint)
n, ok := d.networks[ep.nid]
if !ok {
logrus.Debugf("Network (%s) not found for restored ipvlan endpoint (%s)", ep.nid[0:7], ep.id[0:7])
logrus.Debugf("Deleting stale ipvlan endpoint (%s) from store", ep.id[0:7])
logrus.Debugf("Network (%.7s) not found for restored ipvlan endpoint (%.7s)", ep.nid, ep.id)
logrus.Debugf("Deleting stale ipvlan endpoint (%.7s) from store", ep.id)
if err := d.storeDelete(ep); err != nil {
logrus.Debugf("Failed to delete stale ipvlan endpoint (%s) from store", ep.id[0:7])
logrus.Debugf("Failed to delete stale ipvlan endpoint (%.7s) from store", ep.id)
}
continue
}
n.endpoints[ep.id] = ep
logrus.Debugf("Endpoint (%s) restored to network (%s)", ep.id[0:7], ep.nid[0:7])
logrus.Debugf("Endpoint (%.7s) restored to network (%.7s)", ep.id, ep.nid)
}
return nil

View File

@@ -58,7 +58,7 @@ func (d *driver) CreateEndpoint(nid, eid string, ifInfo driverapi.InterfaceInfo,
}
if err := d.storeUpdate(ep); err != nil {
return fmt.Errorf("failed to save macvlan endpoint %s to store: %v", ep.id[0:7], err)
return fmt.Errorf("failed to save macvlan endpoint %.7s to store: %v", ep.id, err)
}
n.addEndpoint(ep)
@@ -87,7 +87,7 @@ func (d *driver) DeleteEndpoint(nid, eid string) error {
}
if err := d.storeDelete(ep); err != nil {
logrus.Warnf("Failed to remove macvlan endpoint %s from store: %v", ep.id[0:7], err)
logrus.Warnf("Failed to remove macvlan endpoint %.7s from store: %v", ep.id, err)
}
n.deleteEndpoint(ep.id)

View File

@@ -78,7 +78,7 @@ func (d *driver) Join(nid, eid string, sboxKey string, jinfo driverapi.JoinInfo,
return err
}
if err := d.storeUpdate(ep); err != nil {
return fmt.Errorf("failed to save macvlan endpoint %s to store: %v", ep.id[0:7], err)
return fmt.Errorf("failed to save macvlan endpoint %.7s to store: %v", ep.id, err)
}
return nil
}

View File

@@ -160,7 +160,7 @@ func (d *driver) DeleteNetwork(nid string) error {
}
if err := d.storeDelete(ep); err != nil {
logrus.Warnf("Failed to remove macvlan endpoint %s from store: %v", ep.id[0:7], err)
logrus.Warnf("Failed to remove macvlan endpoint %.7s from store: %v", ep.id, err)
}
}
// delete the *network

View File

@@ -95,15 +95,15 @@ func (d *driver) populateEndpoints() error {
ep := kvo.(*endpoint)
n, ok := d.networks[ep.nid]
if !ok {
logrus.Debugf("Network (%s) not found for restored macvlan endpoint (%s)", ep.nid[0:7], ep.id[0:7])
logrus.Debugf("Deleting stale macvlan endpoint (%s) from store", ep.id[0:7])
logrus.Debugf("Network (%.7s) not found for restored macvlan endpoint (%.7s)", ep.nid, ep.id)
logrus.Debugf("Deleting stale macvlan endpoint (%.7s) from store", ep.id)
if err := d.storeDelete(ep); err != nil {
logrus.Debugf("Failed to delete stale macvlan endpoint (%s) from store", ep.id[0:7])
logrus.Debugf("Failed to delete stale macvlan endpoint (%.7s) from store", ep.id)
}
continue
}
n.endpoints[ep.id] = ep
logrus.Debugf("Endpoint (%s) restored to network (%s)", ep.id[0:7], ep.nid[0:7])
logrus.Debugf("Endpoint (%.7s) restored to network (%.7s)", ep.id, ep.nid)
}
return nil

View File

@@ -78,7 +78,7 @@ func (e *encrMap) String() string {
}
func (d *driver) checkEncryption(nid string, rIP net.IP, vxlanID uint32, isLocal, add bool) error {
logrus.Debugf("checkEncryption(%s, %v, %d, %t)", nid[0:7], rIP, vxlanID, isLocal)
logrus.Debugf("checkEncryption(%.7s, %v, %d, %t)", nid, rIP, vxlanID, isLocal)
n := d.network(nid)
if n == nil || !n.secure {
@@ -101,7 +101,7 @@ func (d *driver) checkEncryption(nid string, rIP net.IP, vxlanID uint32, isLocal
}
return false
}); err != nil {
logrus.Warnf("Failed to retrieve list of participating nodes in overlay network %s: %v", nid[0:5], err)
logrus.Warnf("Failed to retrieve list of participating nodes in overlay network %.5s: %v", nid, err)
}
default:
if len(d.network(nid).endpoints) > 0 {

View File

@@ -69,7 +69,7 @@ func (d *driver) Join(nid, eid string, sboxKey string, jinfo driverapi.JoinInfo,
ep.ifName = containerIfName
if err = d.writeEndpointToStore(ep); err != nil {
return fmt.Errorf("failed to update overlay endpoint %s to local data store: %v", ep.id[0:7], err)
return fmt.Errorf("failed to update overlay endpoint %.7s to local data store: %v", ep.id, err)
}
// Set the container interface and its peer MTU to 1450 to allow

View File

@@ -1,72 +1,23 @@
package overlay
import (
"io/ioutil"
"path"
"strconv"
"strings"
"github.com/sirupsen/logrus"
"github.com/docker/libnetwork/osl/kernel"
)
type conditionalCheck func(val1, val2 string) bool
type osValue struct {
value string
checkFn conditionalCheck
}
var osConfig = map[string]osValue{
var ovConfig = map[string]*kernel.OSValue{
"net.ipv4.neigh.default.gc_thresh1": {"8192", checkHigher},
"net.ipv4.neigh.default.gc_thresh2": {"49152", checkHigher},
"net.ipv4.neigh.default.gc_thresh3": {"65536", checkHigher},
}
func propertyIsValid(val1, val2 string, check conditionalCheck) bool {
if check == nil || check(val1, val2) {
return true
}
return false
}
func checkHigher(val1, val2 string) bool {
val1Int, _ := strconv.ParseInt(val1, 10, 32)
val2Int, _ := strconv.ParseInt(val2, 10, 32)
return val1Int < val2Int
}
// writeSystemProperty writes the value to a path under /proc/sys as determined from the key.
// For e.g. net.ipv4.ip_forward translated to /proc/sys/net/ipv4/ip_forward.
func writeSystemProperty(key, value string) error {
keyPath := strings.Replace(key, ".", "/", -1)
return ioutil.WriteFile(path.Join("/proc/sys", keyPath), []byte(value), 0644)
}
func readSystemProperty(key string) (string, error) {
keyPath := strings.Replace(key, ".", "/", -1)
value, err := ioutil.ReadFile(path.Join("/proc/sys", keyPath))
if err != nil {
return "", err
}
return string(value), nil
}
func applyOStweaks() {
for k, v := range osConfig {
// read the existing property from disk
oldv, err := readSystemProperty(k)
if err != nil {
logrus.Errorf("error reading the kernel parameter %s, error: %s", k, err)
continue
}
if propertyIsValid(oldv, v.value, v.checkFn) {
// write new prop value to disk
if err := writeSystemProperty(k, v.value); err != nil {
logrus.Errorf("error setting the kernel parameter %s = %s, (leaving as %s) error: %s", k, v.value, oldv, err)
continue
}
logrus.Debugf("updated kernel parameter %s = %s (was %s)", k, v.value, oldv)
}
}
kernel.ApplyOSTweaks(ovConfig)
}
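After this refactor the overlay driver only declares which sysctls it cares about; the read-compare-write loop lives in osl/kernel and can be reused elsewhere. A hedged sketch of calling the shared helper from another component (the conntrack knob is a hypothetical example, and the OSValue field order is inferred from the positional literals above):

package tuner

import (
	"strconv"

	"github.com/docker/libnetwork/osl/kernel"
)

// onlyRaise tells ApplyOSTweaks to write the desired value only when it is
// strictly higher than what is currently set under /proc/sys, mirroring
// checkHigher above.
func onlyRaise(current, desired string) bool {
	cur, _ := strconv.ParseInt(current, 10, 32)
	want, _ := strconv.ParseInt(desired, 10, 32)
	return cur < want
}

func applyConnTrackTweaks() {
	// Hypothetical knob; ApplyOSTweaks reads each key, runs the check
	// function, and writes the new value when the check passes.
	kernel.ApplyOSTweaks(map[string]*kernel.OSValue{
		"net.netfilter.nf_conntrack_max": {"131072", onlyRaise},
	})
}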

View File

@@ -90,7 +90,7 @@ func (d *driver) CreateEndpoint(nid, eid string, ifInfo driverapi.InterfaceInfo,
n.addEndpoint(ep)
if err := d.writeEndpointToStore(ep); err != nil {
return fmt.Errorf("failed to update overlay endpoint %s to local store: %v", ep.id[0:7], err)
return fmt.Errorf("failed to update overlay endpoint %.7s to local store: %v", ep.id, err)
}
return nil
@@ -116,7 +116,7 @@ func (d *driver) DeleteEndpoint(nid, eid string) error {
n.deleteEndpoint(eid)
if err := d.deleteEndpointFromStore(ep); err != nil {
logrus.Warnf("Failed to delete overlay endpoint %s from local store: %v", ep.id[0:7], err)
logrus.Warnf("Failed to delete overlay endpoint %.7s from local store: %v", ep.id, err)
}
if ep.ifName == "" {

View File

@@ -274,7 +274,7 @@ func (d *driver) DeleteNetwork(nid string) error {
}
if err := d.deleteEndpointFromStore(ep); err != nil {
logrus.Warnf("Failed to delete overlay endpoint %s from local store: %v", ep.id[0:7], err)
logrus.Warnf("Failed to delete overlay endpoint %.7s from local store: %v", ep.id, err)
}
}
// flush the peerDB entries

View File

@@ -137,10 +137,10 @@ func (d *driver) restoreEndpoints() error {
ep := kvo.(*endpoint)
n := d.network(ep.nid)
if n == nil {
logrus.Debugf("Network (%s) not found for restored endpoint (%s)", ep.nid[0:7], ep.id[0:7])
logrus.Debugf("Deleting stale overlay endpoint (%s) from store", ep.id[0:7])
logrus.Debugf("Network (%.7s) not found for restored endpoint (%.7s)", ep.nid, ep.id)
logrus.Debugf("Deleting stale overlay endpoint (%.7s) from store", ep.id)
if err := d.deleteEndpointFromStore(ep); err != nil {
logrus.Debugf("Failed to delete stale overlay endpoint (%s) from store", ep.id[0:7])
logrus.Debugf("Failed to delete stale overlay endpoint (%.7s) from store", ep.id)
}
continue
}

View File

@@ -1,12 +1,11 @@
// Code generated by protoc-gen-gogo.
// source: overlay.proto
// DO NOT EDIT!
// Code generated by protoc-gen-gogo. DO NOT EDIT.
// source: drivers/overlay/overlay.proto
/*
Package overlay is a generated protocol buffer package.
It is generated from these files:
overlay.proto
drivers/overlay/overlay.proto
It has these top-level messages:
PeerRecord
@@ -19,9 +18,6 @@ import math "math"
import _ "github.com/gogo/protobuf/gogoproto"
import strings "strings"
import github_com_gogo_protobuf_proto "github.com/gogo/protobuf/proto"
import sort "sort"
import strconv "strconv"
import reflect "reflect"
import io "io"
@@ -33,7 +29,9 @@ var _ = math.Inf
// This is a compile-time assertion to ensure that this generated file
// is compatible with the proto package it is being compiled against.
const _ = proto.GoGoProtoPackageIsVersion1
// A compilation error at this line likely means your copy of the
// proto package needs to be updated.
const _ = proto.GoGoProtoPackageIsVersion2 // please upgrade the proto package
// PeerRecord defines the information corresponding to a peer
// container in the overlay network.
@@ -54,6 +52,27 @@ func (m *PeerRecord) Reset() { *m = PeerRecord{} }
func (*PeerRecord) ProtoMessage() {}
func (*PeerRecord) Descriptor() ([]byte, []int) { return fileDescriptorOverlay, []int{0} }
func (m *PeerRecord) GetEndpointIP() string {
if m != nil {
return m.EndpointIP
}
return ""
}
func (m *PeerRecord) GetEndpointMAC() string {
if m != nil {
return m.EndpointMAC
}
return ""
}
func (m *PeerRecord) GetTunnelEndpointIP() string {
if m != nil {
return m.TunnelEndpointIP
}
return ""
}
func init() {
proto.RegisterType((*PeerRecord)(nil), "overlay.PeerRecord")
}
@@ -77,84 +96,49 @@ func valueToGoStringOverlay(v interface{}, typ string) string {
pv := reflect.Indirect(rv).Interface()
return fmt.Sprintf("func(v %v) *%v { return &v } ( %#v )", typ, typ, pv)
}
func extensionToGoStringOverlay(e map[int32]github_com_gogo_protobuf_proto.Extension) string {
if e == nil {
return "nil"
}
s := "map[int32]proto.Extension{"
keys := make([]int, 0, len(e))
for k := range e {
keys = append(keys, int(k))
}
sort.Ints(keys)
ss := []string{}
for _, k := range keys {
ss = append(ss, strconv.Itoa(k)+": "+e[int32(k)].GoString())
}
s += strings.Join(ss, ",") + "}"
return s
}
func (m *PeerRecord) Marshal() (data []byte, err error) {
func (m *PeerRecord) Marshal() (dAtA []byte, err error) {
size := m.Size()
data = make([]byte, size)
n, err := m.MarshalTo(data)
dAtA = make([]byte, size)
n, err := m.MarshalTo(dAtA)
if err != nil {
return nil, err
}
return data[:n], nil
return dAtA[:n], nil
}
func (m *PeerRecord) MarshalTo(data []byte) (int, error) {
func (m *PeerRecord) MarshalTo(dAtA []byte) (int, error) {
var i int
_ = i
var l int
_ = l
if len(m.EndpointIP) > 0 {
data[i] = 0xa
dAtA[i] = 0xa
i++
i = encodeVarintOverlay(data, i, uint64(len(m.EndpointIP)))
i += copy(data[i:], m.EndpointIP)
i = encodeVarintOverlay(dAtA, i, uint64(len(m.EndpointIP)))
i += copy(dAtA[i:], m.EndpointIP)
}
if len(m.EndpointMAC) > 0 {
data[i] = 0x12
dAtA[i] = 0x12
i++
i = encodeVarintOverlay(data, i, uint64(len(m.EndpointMAC)))
i += copy(data[i:], m.EndpointMAC)
i = encodeVarintOverlay(dAtA, i, uint64(len(m.EndpointMAC)))
i += copy(dAtA[i:], m.EndpointMAC)
}
if len(m.TunnelEndpointIP) > 0 {
data[i] = 0x1a
dAtA[i] = 0x1a
i++
i = encodeVarintOverlay(data, i, uint64(len(m.TunnelEndpointIP)))
i += copy(data[i:], m.TunnelEndpointIP)
i = encodeVarintOverlay(dAtA, i, uint64(len(m.TunnelEndpointIP)))
i += copy(dAtA[i:], m.TunnelEndpointIP)
}
return i, nil
}
func encodeFixed64Overlay(data []byte, offset int, v uint64) int {
data[offset] = uint8(v)
data[offset+1] = uint8(v >> 8)
data[offset+2] = uint8(v >> 16)
data[offset+3] = uint8(v >> 24)
data[offset+4] = uint8(v >> 32)
data[offset+5] = uint8(v >> 40)
data[offset+6] = uint8(v >> 48)
data[offset+7] = uint8(v >> 56)
return offset + 8
}
func encodeFixed32Overlay(data []byte, offset int, v uint32) int {
data[offset] = uint8(v)
data[offset+1] = uint8(v >> 8)
data[offset+2] = uint8(v >> 16)
data[offset+3] = uint8(v >> 24)
return offset + 4
}
func encodeVarintOverlay(data []byte, offset int, v uint64) int {
func encodeVarintOverlay(dAtA []byte, offset int, v uint64) int {
for v >= 1<<7 {
data[offset] = uint8(v&0x7f | 0x80)
dAtA[offset] = uint8(v&0x7f | 0x80)
v >>= 7
offset++
}
data[offset] = uint8(v)
dAtA[offset] = uint8(v)
return offset + 1
}
func (m *PeerRecord) Size() (n int) {
@@ -208,8 +192,8 @@ func valueToStringOverlay(v interface{}) string {
pv := reflect.Indirect(rv).Interface()
return fmt.Sprintf("*%v", pv)
}
func (m *PeerRecord) Unmarshal(data []byte) error {
l := len(data)
func (m *PeerRecord) Unmarshal(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
for iNdEx < l {
preIndex := iNdEx
@@ -221,7 +205,7 @@ func (m *PeerRecord) Unmarshal(data []byte) error {
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := data[iNdEx]
b := dAtA[iNdEx]
iNdEx++
wire |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
@@ -249,7 +233,7 @@ func (m *PeerRecord) Unmarshal(data []byte) error {
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := data[iNdEx]
b := dAtA[iNdEx]
iNdEx++
stringLen |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
@@ -264,7 +248,7 @@ func (m *PeerRecord) Unmarshal(data []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.EndpointIP = string(data[iNdEx:postIndex])
m.EndpointIP = string(dAtA[iNdEx:postIndex])
iNdEx = postIndex
case 2:
if wireType != 2 {
@@ -278,7 +262,7 @@ func (m *PeerRecord) Unmarshal(data []byte) error {
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := data[iNdEx]
b := dAtA[iNdEx]
iNdEx++
stringLen |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
@@ -293,7 +277,7 @@ func (m *PeerRecord) Unmarshal(data []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.EndpointMAC = string(data[iNdEx:postIndex])
m.EndpointMAC = string(dAtA[iNdEx:postIndex])
iNdEx = postIndex
case 3:
if wireType != 2 {
@@ -307,7 +291,7 @@ func (m *PeerRecord) Unmarshal(data []byte) error {
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := data[iNdEx]
b := dAtA[iNdEx]
iNdEx++
stringLen |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
@@ -322,11 +306,11 @@ func (m *PeerRecord) Unmarshal(data []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.TunnelEndpointIP = string(data[iNdEx:postIndex])
m.TunnelEndpointIP = string(dAtA[iNdEx:postIndex])
iNdEx = postIndex
default:
iNdEx = preIndex
skippy, err := skipOverlay(data[iNdEx:])
skippy, err := skipOverlay(dAtA[iNdEx:])
if err != nil {
return err
}
@@ -345,8 +329,8 @@ func (m *PeerRecord) Unmarshal(data []byte) error {
}
return nil
}
func skipOverlay(data []byte) (n int, err error) {
l := len(data)
func skipOverlay(dAtA []byte) (n int, err error) {
l := len(dAtA)
iNdEx := 0
for iNdEx < l {
var wire uint64
@@ -357,7 +341,7 @@ func skipOverlay(data []byte) (n int, err error) {
if iNdEx >= l {
return 0, io.ErrUnexpectedEOF
}
b := data[iNdEx]
b := dAtA[iNdEx]
iNdEx++
wire |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
@@ -375,7 +359,7 @@ func skipOverlay(data []byte) (n int, err error) {
return 0, io.ErrUnexpectedEOF
}
iNdEx++
if data[iNdEx-1] < 0x80 {
if dAtA[iNdEx-1] < 0x80 {
break
}
}
@@ -392,7 +376,7 @@ func skipOverlay(data []byte) (n int, err error) {
if iNdEx >= l {
return 0, io.ErrUnexpectedEOF
}
b := data[iNdEx]
b := dAtA[iNdEx]
iNdEx++
length |= (int(b) & 0x7F) << shift
if b < 0x80 {
@@ -415,7 +399,7 @@ func skipOverlay(data []byte) (n int, err error) {
if iNdEx >= l {
return 0, io.ErrUnexpectedEOF
}
b := data[iNdEx]
b := dAtA[iNdEx]
iNdEx++
innerWire |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
@@ -426,7 +410,7 @@ func skipOverlay(data []byte) (n int, err error) {
if innerWireType == 4 {
break
}
next, err := skipOverlay(data[start:])
next, err := skipOverlay(dAtA[start:])
if err != nil {
return 0, err
}
@@ -450,19 +434,22 @@ var (
ErrIntOverflowOverlay = fmt.Errorf("proto: integer overflow")
)
func init() { proto.RegisterFile("drivers/overlay/overlay.proto", fileDescriptorOverlay) }
var fileDescriptorOverlay = []byte{
// 195 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0xe2, 0xe2, 0xcd, 0x2f, 0x4b, 0x2d,
0xca, 0x49, 0xac, 0xd4, 0x2b, 0x28, 0xca, 0x2f, 0xc9, 0x17, 0x62, 0x87, 0x72, 0xa5, 0x44, 0xd2,
0xf3, 0xd3, 0xf3, 0xc1, 0x62, 0xfa, 0x20, 0x16, 0x44, 0x5a, 0x69, 0x2b, 0x23, 0x17, 0x57, 0x40,
0x6a, 0x6a, 0x51, 0x50, 0x6a, 0x72, 0x7e, 0x51, 0x8a, 0x90, 0x3e, 0x17, 0x77, 0x6a, 0x5e, 0x4a,
0x41, 0x7e, 0x66, 0x5e, 0x49, 0x7c, 0x66, 0x81, 0x04, 0xa3, 0x02, 0xa3, 0x06, 0xa7, 0x13, 0xdf,
0xa3, 0x7b, 0xf2, 0x5c, 0xae, 0x50, 0x61, 0xcf, 0x80, 0x20, 0x2e, 0x98, 0x12, 0xcf, 0x02, 0x21,
0x23, 0x2e, 0x1e, 0xb8, 0x86, 0xdc, 0xc4, 0x64, 0x09, 0x26, 0xb0, 0x0e, 0x7e, 0xa0, 0x0e, 0x6e,
0x98, 0x0e, 0x5f, 0x47, 0xe7, 0x20, 0xb8, 0xa9, 0xbe, 0x89, 0xc9, 0x42, 0x4e, 0x5c, 0x42, 0x25,
0xa5, 0x79, 0x79, 0xa9, 0x39, 0xf1, 0xc8, 0x76, 0x31, 0x83, 0x75, 0x8a, 0x00, 0x75, 0x0a, 0x84,
0x80, 0x65, 0x91, 0x6c, 0x14, 0x28, 0x41, 0x15, 0x29, 0x70, 0x92, 0xb8, 0xf1, 0x50, 0x8e, 0xe1,
0xc3, 0x43, 0x39, 0xc6, 0x86, 0x47, 0x72, 0x8c, 0x27, 0x80, 0xf8, 0x02, 0x10, 0x3f, 0x00, 0xe2,
0x24, 0x36, 0xb0, 0xc7, 0x8c, 0x01, 0x01, 0x00, 0x00, 0xff, 0xff, 0xbf, 0xd7, 0x7d, 0x7d, 0x08,
0x01, 0x00, 0x00,
// 212 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0x92, 0x4d, 0x29, 0xca, 0x2c,
0x4b, 0x2d, 0x2a, 0xd6, 0xcf, 0x2f, 0x4b, 0x2d, 0xca, 0x49, 0xac, 0x84, 0xd1, 0x7a, 0x05, 0x45,
0xf9, 0x25, 0xf9, 0x42, 0xec, 0x50, 0xae, 0x94, 0x48, 0x7a, 0x7e, 0x7a, 0x3e, 0x58, 0x4c, 0x1f,
0xc4, 0x82, 0x48, 0x2b, 0x6d, 0x65, 0xe4, 0xe2, 0x0a, 0x48, 0x4d, 0x2d, 0x0a, 0x4a, 0x4d, 0xce,
0x2f, 0x4a, 0x11, 0xd2, 0xe7, 0xe2, 0x4e, 0xcd, 0x4b, 0x29, 0xc8, 0xcf, 0xcc, 0x2b, 0x89, 0xcf,
0x2c, 0x90, 0x60, 0x54, 0x60, 0xd4, 0xe0, 0x74, 0xe2, 0x7b, 0x74, 0x4f, 0x9e, 0xcb, 0x15, 0x2a,
0xec, 0x19, 0x10, 0xc4, 0x05, 0x53, 0xe2, 0x59, 0x20, 0x64, 0xc4, 0xc5, 0x03, 0xd7, 0x90, 0x9b,
0x98, 0x2c, 0xc1, 0x04, 0xd6, 0xc1, 0xff, 0xe8, 0x9e, 0x3c, 0x37, 0x4c, 0x87, 0xaf, 0xa3, 0x73,
0x10, 0xdc, 0x54, 0xdf, 0xc4, 0x64, 0x21, 0x27, 0x2e, 0xa1, 0x92, 0xd2, 0xbc, 0xbc, 0xd4, 0x9c,
0x78, 0x64, 0xbb, 0x98, 0xc1, 0x3a, 0x45, 0x1e, 0xdd, 0x93, 0x17, 0x08, 0x01, 0xcb, 0x22, 0xd9,
0x28, 0x50, 0x82, 0x2a, 0x52, 0xe0, 0x24, 0x71, 0xe3, 0xa1, 0x1c, 0xc3, 0x87, 0x87, 0x72, 0x8c,
0x0d, 0x8f, 0xe4, 0x18, 0x4f, 0x3c, 0x92, 0x63, 0xbc, 0xf0, 0x48, 0x8e, 0xf1, 0xc1, 0x23, 0x39,
0xc6, 0x24, 0x36, 0xb0, 0xc7, 0x8c, 0x01, 0x01, 0x00, 0x00, 0xff, 0xff, 0x48, 0x07, 0xf6, 0xf3,
0x18, 0x01, 0x00, 0x00,
}
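
The regenerated marshaling code above relies on protobuf's base-128 varint wire format: encodeVarintOverlay emits 7 payload bits per byte with the continuation bit set on every byte except the last, and the shift loops in Unmarshal reverse the process. As a rough standalone sketch of that encoding (illustrative only; the function names here are invented, and the standard library's encoding/binary.PutUvarint/Uvarint do the same job):

package main

import "fmt"

// putUvarint writes v using base-128 varint encoding: 7 bits of
// payload per byte, continuation bit set on all but the last byte.
// It mirrors the loop in encodeVarintOverlay.
func putUvarint(buf []byte, v uint64) int {
    i := 0
    for v >= 1<<7 {
        buf[i] = uint8(v&0x7f | 0x80)
        v >>= 7
        i++
    }
    buf[i] = uint8(v)
    return i + 1
}

// uvarint is the matching decoder, mirroring the shift loops in the
// generated Unmarshal functions. It returns the value and the number
// of bytes consumed (0 if the input is truncated).
func uvarint(buf []byte) (uint64, int) {
    var v uint64
    var shift uint
    for i, b := range buf {
        v |= uint64(b&0x7f) << shift
        if b < 0x80 {
            return v, i + 1
        }
        shift += 7
    }
    return 0, 0
}

func main() {
    buf := make([]byte, 10)
    n := putUvarint(buf, 300)
    fmt.Printf("% x\n", buf[:n]) // ac 02
    v, _ := uvarint(buf[:n])
    fmt.Println(v) // 300
}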

@@ -1,16 +1,17 @@
package remote
import (
"errors"
"fmt"
"net"
"github.com/docker/docker/pkg/plugingetter"
"github.com/docker/docker/pkg/plugins"
"github.com/docker/libnetwork/datastore"
"github.com/docker/libnetwork/discoverapi"
"github.com/docker/libnetwork/driverapi"
"github.com/docker/libnetwork/drivers/remote/api"
"github.com/docker/libnetwork/types"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
@@ -49,7 +50,11 @@ func Init(dc driverapi.DriverCallback, config map[string]interface{}) error {
handleFunc = pg.Handle
activePlugins := pg.GetAllManagedPluginsByCap(driverapi.NetworkPluginEndpointType)
for _, ap := range activePlugins {
newPluginHandler(ap.Name(), ap.Client())
client, err := getPluginClient(ap)
if err != nil {
return err
}
newPluginHandler(ap.Name(), client)
}
}
handleFunc(driverapi.NetworkPluginEndpointType, newPluginHandler)
@@ -57,6 +62,28 @@ func Init(dc driverapi.DriverCallback, config map[string]interface{}) error {
return nil
}
func getPluginClient(p plugingetter.CompatPlugin) (*plugins.Client, error) {
if v1, ok := p.(plugingetter.PluginWithV1Client); ok {
return v1.Client(), nil
}
pa, ok := p.(plugingetter.PluginAddr)
if !ok {
return nil, errors.Errorf("unknown plugin type %T", p)
}
if pa.Protocol() != plugins.ProtocolSchemeHTTPV1 {
return nil, errors.Errorf("unsupported plugin protocol %s", pa.Protocol())
}
addr := pa.Addr()
client, err := plugins.NewClientWithTimeout(addr.Network()+"://"+addr.String(), nil, pa.Timeout())
if err != nil {
return nil, errors.Wrap(err, "error creating plugin client")
}
return client, nil
}
// Get capability from client
func (d *driver) getCapabilities() (*driverapi.Capability, error) {
var capResp api.GetCapabilityResponse
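
getPluginClient above picks a client based on which optional interface the plugin object implements: a legacy v1 client when available, otherwise an address-based plugin that must speak the v1 HTTP protocol. The general shape of that capability dispatch, reduced to hypothetical local interfaces purely for illustration (this is not the real plugingetter API):

package main

import "fmt"

// Hypothetical stand-ins for the capability interfaces checked above.
type withV1Client interface{ Client() string }
type withAddr interface{ Addr() string }

// clientFor prefers the richer capability and falls back to the
// address-based one, erroring out on unknown plugin types.
func clientFor(p interface{}) (string, error) {
    if v1, ok := p.(withV1Client); ok {
        return v1.Client(), nil
    }
    if pa, ok := p.(withAddr); ok {
        return "dial " + pa.Addr(), nil
    }
    return "", fmt.Errorf("unknown plugin type %T", p)
}

type v1Plugin struct{}

func (v1Plugin) Client() string { return "v1 client" }

func main() {
    c, err := clientFor(v1Plugin{})
    fmt.Println(c, err) // v1 client <nil>
}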

@@ -80,7 +80,7 @@ func (n *network) removeEndpointWithAddress(addr *net.IPNet) {
_, err := hcsshim.HNSEndpointRequest("DELETE", networkEndpoint.profileID, "")
if err != nil {
logrus.Debugf("Failed to delete stale overlay endpoint (%s) from hns", networkEndpoint.id[0:7])
logrus.Debugf("Failed to delete stale overlay endpoint (%.7s) from hns", networkEndpoint.id)
}
}
}
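
The logging change above, repeated in several of the drivers below, replaces manual truncation such as networkEndpoint.id[0:7] with the %.7s precision verb. Slicing panics when the ID happens to be shorter than seven characters, whereas the precision verb simply prints at most seven. A minimal illustration (standalone sketch, not part of the change itself):

package main

import "fmt"

func main() {
    long := "4e6f0bd2a1c3"
    short := "abc"

    // %.7s prints at most the first seven characters and never panics.
    fmt.Printf("%.7s\n", long)  // 4e6f0bd
    fmt.Printf("%.7s\n", short) // abc

    // Manual slicing only works when the string is long enough;
    // short[0:7] would panic with "slice bounds out of range".
    fmt.Println(long[0:7]) // 4e6f0bd
}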

@@ -1,12 +1,11 @@
// Code generated by protoc-gen-gogo.
// source: overlay.proto
// DO NOT EDIT!
// Code generated by protoc-gen-gogo. DO NOT EDIT.
// source: drivers/windows/overlay/overlay.proto
/*
Package overlay is a generated protocol buffer package.
It is generated from these files:
overlay.proto
drivers/windows/overlay/overlay.proto
It has these top-level messages:
PeerRecord
@@ -19,9 +18,6 @@ import math "math"
import _ "github.com/gogo/protobuf/gogoproto"
import strings "strings"
import github_com_gogo_protobuf_proto "github.com/gogo/protobuf/proto"
import sort "sort"
import strconv "strconv"
import reflect "reflect"
import io "io"
@@ -33,7 +29,9 @@ var _ = math.Inf
// This is a compile-time assertion to ensure that this generated file
// is compatible with the proto package it is being compiled against.
const _ = proto.GoGoProtoPackageIsVersion1
// A compilation error at this line likely means your copy of the
// proto package needs to be updated.
const _ = proto.GoGoProtoPackageIsVersion2 // please upgrade the proto package
// PeerRecord defines the information corresponding to a peer
// container in the overlay network.
@@ -54,6 +52,27 @@ func (m *PeerRecord) Reset() { *m = PeerRecord{} }
func (*PeerRecord) ProtoMessage() {}
func (*PeerRecord) Descriptor() ([]byte, []int) { return fileDescriptorOverlay, []int{0} }
func (m *PeerRecord) GetEndpointIP() string {
if m != nil {
return m.EndpointIP
}
return ""
}
func (m *PeerRecord) GetEndpointMAC() string {
if m != nil {
return m.EndpointMAC
}
return ""
}
func (m *PeerRecord) GetTunnelEndpointIP() string {
if m != nil {
return m.TunnelEndpointIP
}
return ""
}
func init() {
proto.RegisterType((*PeerRecord)(nil), "overlay.PeerRecord")
}
@@ -77,84 +96,49 @@ func valueToGoStringOverlay(v interface{}, typ string) string {
pv := reflect.Indirect(rv).Interface()
return fmt.Sprintf("func(v %v) *%v { return &v } ( %#v )", typ, typ, pv)
}
func extensionToGoStringOverlay(e map[int32]github_com_gogo_protobuf_proto.Extension) string {
if e == nil {
return "nil"
}
s := "map[int32]proto.Extension{"
keys := make([]int, 0, len(e))
for k := range e {
keys = append(keys, int(k))
}
sort.Ints(keys)
ss := []string{}
for _, k := range keys {
ss = append(ss, strconv.Itoa(k)+": "+e[int32(k)].GoString())
}
s += strings.Join(ss, ",") + "}"
return s
}
func (m *PeerRecord) Marshal() (data []byte, err error) {
func (m *PeerRecord) Marshal() (dAtA []byte, err error) {
size := m.Size()
data = make([]byte, size)
n, err := m.MarshalTo(data)
dAtA = make([]byte, size)
n, err := m.MarshalTo(dAtA)
if err != nil {
return nil, err
}
return data[:n], nil
return dAtA[:n], nil
}
func (m *PeerRecord) MarshalTo(data []byte) (int, error) {
func (m *PeerRecord) MarshalTo(dAtA []byte) (int, error) {
var i int
_ = i
var l int
_ = l
if len(m.EndpointIP) > 0 {
data[i] = 0xa
dAtA[i] = 0xa
i++
i = encodeVarintOverlay(data, i, uint64(len(m.EndpointIP)))
i += copy(data[i:], m.EndpointIP)
i = encodeVarintOverlay(dAtA, i, uint64(len(m.EndpointIP)))
i += copy(dAtA[i:], m.EndpointIP)
}
if len(m.EndpointMAC) > 0 {
data[i] = 0x12
dAtA[i] = 0x12
i++
i = encodeVarintOverlay(data, i, uint64(len(m.EndpointMAC)))
i += copy(data[i:], m.EndpointMAC)
i = encodeVarintOverlay(dAtA, i, uint64(len(m.EndpointMAC)))
i += copy(dAtA[i:], m.EndpointMAC)
}
if len(m.TunnelEndpointIP) > 0 {
data[i] = 0x1a
dAtA[i] = 0x1a
i++
i = encodeVarintOverlay(data, i, uint64(len(m.TunnelEndpointIP)))
i += copy(data[i:], m.TunnelEndpointIP)
i = encodeVarintOverlay(dAtA, i, uint64(len(m.TunnelEndpointIP)))
i += copy(dAtA[i:], m.TunnelEndpointIP)
}
return i, nil
}
func encodeFixed64Overlay(data []byte, offset int, v uint64) int {
data[offset] = uint8(v)
data[offset+1] = uint8(v >> 8)
data[offset+2] = uint8(v >> 16)
data[offset+3] = uint8(v >> 24)
data[offset+4] = uint8(v >> 32)
data[offset+5] = uint8(v >> 40)
data[offset+6] = uint8(v >> 48)
data[offset+7] = uint8(v >> 56)
return offset + 8
}
func encodeFixed32Overlay(data []byte, offset int, v uint32) int {
data[offset] = uint8(v)
data[offset+1] = uint8(v >> 8)
data[offset+2] = uint8(v >> 16)
data[offset+3] = uint8(v >> 24)
return offset + 4
}
func encodeVarintOverlay(data []byte, offset int, v uint64) int {
func encodeVarintOverlay(dAtA []byte, offset int, v uint64) int {
for v >= 1<<7 {
data[offset] = uint8(v&0x7f | 0x80)
dAtA[offset] = uint8(v&0x7f | 0x80)
v >>= 7
offset++
}
data[offset] = uint8(v)
dAtA[offset] = uint8(v)
return offset + 1
}
func (m *PeerRecord) Size() (n int) {
@@ -208,8 +192,8 @@ func valueToStringOverlay(v interface{}) string {
pv := reflect.Indirect(rv).Interface()
return fmt.Sprintf("*%v", pv)
}
func (m *PeerRecord) Unmarshal(data []byte) error {
l := len(data)
func (m *PeerRecord) Unmarshal(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
for iNdEx < l {
preIndex := iNdEx
@@ -221,7 +205,7 @@ func (m *PeerRecord) Unmarshal(data []byte) error {
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := data[iNdEx]
b := dAtA[iNdEx]
iNdEx++
wire |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
@@ -249,7 +233,7 @@ func (m *PeerRecord) Unmarshal(data []byte) error {
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := data[iNdEx]
b := dAtA[iNdEx]
iNdEx++
stringLen |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
@@ -264,7 +248,7 @@ func (m *PeerRecord) Unmarshal(data []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.EndpointIP = string(data[iNdEx:postIndex])
m.EndpointIP = string(dAtA[iNdEx:postIndex])
iNdEx = postIndex
case 2:
if wireType != 2 {
@@ -278,7 +262,7 @@ func (m *PeerRecord) Unmarshal(data []byte) error {
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := data[iNdEx]
b := dAtA[iNdEx]
iNdEx++
stringLen |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
@@ -293,7 +277,7 @@ func (m *PeerRecord) Unmarshal(data []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.EndpointMAC = string(data[iNdEx:postIndex])
m.EndpointMAC = string(dAtA[iNdEx:postIndex])
iNdEx = postIndex
case 3:
if wireType != 2 {
@@ -307,7 +291,7 @@ func (m *PeerRecord) Unmarshal(data []byte) error {
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := data[iNdEx]
b := dAtA[iNdEx]
iNdEx++
stringLen |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
@@ -322,11 +306,11 @@ func (m *PeerRecord) Unmarshal(data []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.TunnelEndpointIP = string(data[iNdEx:postIndex])
m.TunnelEndpointIP = string(dAtA[iNdEx:postIndex])
iNdEx = postIndex
default:
iNdEx = preIndex
skippy, err := skipOverlay(data[iNdEx:])
skippy, err := skipOverlay(dAtA[iNdEx:])
if err != nil {
return err
}
@@ -345,8 +329,8 @@ func (m *PeerRecord) Unmarshal(data []byte) error {
}
return nil
}
func skipOverlay(data []byte) (n int, err error) {
l := len(data)
func skipOverlay(dAtA []byte) (n int, err error) {
l := len(dAtA)
iNdEx := 0
for iNdEx < l {
var wire uint64
@@ -357,7 +341,7 @@ func skipOverlay(data []byte) (n int, err error) {
if iNdEx >= l {
return 0, io.ErrUnexpectedEOF
}
b := data[iNdEx]
b := dAtA[iNdEx]
iNdEx++
wire |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
@@ -375,7 +359,7 @@ func skipOverlay(data []byte) (n int, err error) {
return 0, io.ErrUnexpectedEOF
}
iNdEx++
if data[iNdEx-1] < 0x80 {
if dAtA[iNdEx-1] < 0x80 {
break
}
}
@@ -392,7 +376,7 @@ func skipOverlay(data []byte) (n int, err error) {
if iNdEx >= l {
return 0, io.ErrUnexpectedEOF
}
b := data[iNdEx]
b := dAtA[iNdEx]
iNdEx++
length |= (int(b) & 0x7F) << shift
if b < 0x80 {
@@ -415,7 +399,7 @@ func skipOverlay(data []byte) (n int, err error) {
if iNdEx >= l {
return 0, io.ErrUnexpectedEOF
}
b := data[iNdEx]
b := dAtA[iNdEx]
iNdEx++
innerWire |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
@@ -426,7 +410,7 @@ func skipOverlay(data []byte) (n int, err error) {
if innerWireType == 4 {
break
}
next, err := skipOverlay(data[start:])
next, err := skipOverlay(dAtA[start:])
if err != nil {
return 0, err
}
@@ -450,19 +434,22 @@ var (
ErrIntOverflowOverlay = fmt.Errorf("proto: integer overflow")
)
func init() { proto.RegisterFile("drivers/windows/overlay/overlay.proto", fileDescriptorOverlay) }
var fileDescriptorOverlay = []byte{
// 195 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0xe2, 0xe2, 0xcd, 0x2f, 0x4b, 0x2d,
0xca, 0x49, 0xac, 0xd4, 0x2b, 0x28, 0xca, 0x2f, 0xc9, 0x17, 0x62, 0x87, 0x72, 0xa5, 0x44, 0xd2,
0xf3, 0xd3, 0xf3, 0xc1, 0x62, 0xfa, 0x20, 0x16, 0x44, 0x5a, 0x69, 0x2b, 0x23, 0x17, 0x57, 0x40,
0x6a, 0x6a, 0x51, 0x50, 0x6a, 0x72, 0x7e, 0x51, 0x8a, 0x90, 0x3e, 0x17, 0x77, 0x6a, 0x5e, 0x4a,
0x41, 0x7e, 0x66, 0x5e, 0x49, 0x7c, 0x66, 0x81, 0x04, 0xa3, 0x02, 0xa3, 0x06, 0xa7, 0x13, 0xdf,
0xa3, 0x7b, 0xf2, 0x5c, 0xae, 0x50, 0x61, 0xcf, 0x80, 0x20, 0x2e, 0x98, 0x12, 0xcf, 0x02, 0x21,
0x23, 0x2e, 0x1e, 0xb8, 0x86, 0xdc, 0xc4, 0x64, 0x09, 0x26, 0xb0, 0x0e, 0x7e, 0xa0, 0x0e, 0x6e,
0x98, 0x0e, 0x5f, 0x47, 0xe7, 0x20, 0xb8, 0xa9, 0xbe, 0x89, 0xc9, 0x42, 0x4e, 0x5c, 0x42, 0x25,
0xa5, 0x79, 0x79, 0xa9, 0x39, 0xf1, 0xc8, 0x76, 0x31, 0x83, 0x75, 0x8a, 0x00, 0x75, 0x0a, 0x84,
0x80, 0x65, 0x91, 0x6c, 0x14, 0x28, 0x41, 0x15, 0x29, 0x70, 0x92, 0xb8, 0xf1, 0x50, 0x8e, 0xe1,
0xc3, 0x43, 0x39, 0xc6, 0x86, 0x47, 0x72, 0x8c, 0x27, 0x80, 0xf8, 0x02, 0x10, 0x3f, 0x00, 0xe2,
0x24, 0x36, 0xb0, 0xc7, 0x8c, 0x01, 0x01, 0x00, 0x00, 0xff, 0xff, 0xbf, 0xd7, 0x7d, 0x7d, 0x08,
0x01, 0x00, 0x00,
// 220 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0x52, 0x4d, 0x29, 0xca, 0x2c,
0x4b, 0x2d, 0x2a, 0xd6, 0x2f, 0xcf, 0xcc, 0x4b, 0xc9, 0x2f, 0x2f, 0xd6, 0xcf, 0x2f, 0x4b, 0x2d,
0xca, 0x49, 0xac, 0x84, 0xd1, 0x7a, 0x05, 0x45, 0xf9, 0x25, 0xf9, 0x42, 0xec, 0x50, 0xae, 0x94,
0x48, 0x7a, 0x7e, 0x7a, 0x3e, 0x58, 0x4c, 0x1f, 0xc4, 0x82, 0x48, 0x2b, 0x6d, 0x65, 0xe4, 0xe2,
0x0a, 0x48, 0x4d, 0x2d, 0x0a, 0x4a, 0x4d, 0xce, 0x2f, 0x4a, 0x11, 0xd2, 0xe7, 0xe2, 0x4e, 0xcd,
0x4b, 0x29, 0xc8, 0xcf, 0xcc, 0x2b, 0x89, 0xcf, 0x2c, 0x90, 0x60, 0x54, 0x60, 0xd4, 0xe0, 0x74,
0xe2, 0x7b, 0x74, 0x4f, 0x9e, 0xcb, 0x15, 0x2a, 0xec, 0x19, 0x10, 0xc4, 0x05, 0x53, 0xe2, 0x59,
0x20, 0x64, 0xc4, 0xc5, 0x03, 0xd7, 0x90, 0x9b, 0x98, 0x2c, 0xc1, 0x04, 0xd6, 0xc1, 0xff, 0xe8,
0x9e, 0x3c, 0x37, 0x4c, 0x87, 0xaf, 0xa3, 0x73, 0x10, 0xdc, 0x54, 0xdf, 0xc4, 0x64, 0x21, 0x27,
0x2e, 0xa1, 0x92, 0xd2, 0xbc, 0xbc, 0xd4, 0x9c, 0x78, 0x64, 0xbb, 0x98, 0xc1, 0x3a, 0x45, 0x1e,
0xdd, 0x93, 0x17, 0x08, 0x01, 0xcb, 0x22, 0xd9, 0x28, 0x50, 0x82, 0x2a, 0x52, 0xe0, 0x24, 0x71,
0xe3, 0xa1, 0x1c, 0xc3, 0x87, 0x87, 0x72, 0x8c, 0x0d, 0x8f, 0xe4, 0x18, 0x4f, 0x3c, 0x92, 0x63,
0xbc, 0xf0, 0x48, 0x8e, 0xf1, 0xc1, 0x23, 0x39, 0xc6, 0x24, 0x36, 0xb0, 0xc7, 0x8c, 0x01, 0x01,
0x00, 0x00, 0xff, 0xff, 0xc0, 0x48, 0xd1, 0xc0, 0x20, 0x01, 0x00, 0x00,
}

@@ -415,7 +415,7 @@ func (d *driver) DeleteNetwork(nid string) error {
// delele endpoints belong to this network
for _, ep := range n.endpoints {
if err := d.storeDelete(ep); err != nil {
logrus.Warnf("Failed to remove bridge endpoint %s from store: %v", ep.id[0:7], err)
logrus.Warnf("Failed to remove bridge endpoint %.7s from store: %v", ep.id, err)
}
}
@@ -704,7 +704,7 @@ func (d *driver) CreateEndpoint(nid, eid string, ifInfo driverapi.InterfaceInfo,
}
if err = d.storeUpdate(endpoint); err != nil {
logrus.Errorf("Failed to save endpoint %s to store: %v", endpoint.id[0:7], err)
logrus.Errorf("Failed to save endpoint %.7s to store: %v", endpoint.id, err)
}
return nil
@@ -731,7 +731,7 @@ func (d *driver) DeleteEndpoint(nid, eid string) error {
}
if err := d.storeDelete(ep); err != nil {
logrus.Warnf("Failed to remove bridge endpoint %s from store: %v", ep.id[0:7], err)
logrus.Warnf("Failed to remove bridge endpoint %.7s from store: %v", ep.id, err)
}
return nil
}

@@ -64,7 +64,7 @@ func (d *driver) populateNetworks() error {
if err = d.createNetwork(ncfg); err != nil {
logrus.Warnf("could not create windows network for id %s hnsid %s while booting up from persistent state: %v", ncfg.ID, ncfg.HnsID, err)
}
logrus.Debugf("Network %v (%s) restored", d.name, ncfg.ID[0:7])
logrus.Debugf("Network %v (%.7s) restored", d.name, ncfg.ID)
}
return nil
@@ -87,15 +87,15 @@ func (d *driver) populateEndpoints() error {
}
n, ok := d.networks[ep.nid]
if !ok {
logrus.Debugf("Network (%s) not found for restored endpoint (%s)", ep.nid[0:7], ep.id[0:7])
logrus.Debugf("Deleting stale endpoint (%s) from store", ep.id[0:7])
logrus.Debugf("Network (%.7s) not found for restored endpoint (%.7s)", ep.nid, ep.id)
logrus.Debugf("Deleting stale endpoint (%.7s) from store", ep.id)
if err := d.storeDelete(ep); err != nil {
logrus.Debugf("Failed to delete stale endpoint (%s) from store", ep.id[0:7])
logrus.Debugf("Failed to delete stale endpoint (%.7s) from store", ep.id)
}
continue
}
n.endpoints[ep.id] = ep
logrus.Debugf("Endpoint (%s) restored to network (%s)", ep.id[0:7], ep.nid[0:7])
logrus.Debugf("Endpoint (%.7s) restored to network (%.7s)", ep.id, ep.nid)
}
return nil

@@ -540,6 +540,12 @@ func (ep *endpoint) sbJoin(sb *sandbox, options ...EndpointOption) (err error) {
}
}()
// Load balancing endpoints should never have a default gateway nor
// should they alter the status of a network's default gateway
if ep.loadBalancer && !sb.ingress {
return nil
}
if sb.needDefaultGW() && sb.getEndpointInGWNetwork() == nil {
return sb.setupDefaultGW()
}

@@ -49,6 +49,9 @@ type InterfaceInfo interface {
// LinkLocalAddresses returns the list of link-local (IPv4/IPv6) addresses assigned to the endpoint.
LinkLocalAddresses() []*net.IPNet
// SrcName returns the name of the interface w/in the container
SrcName() string
}
type endpointInterface struct {
@@ -272,6 +275,10 @@ func (epi *endpointInterface) LinkLocalAddresses() []*net.IPNet {
return epi.llAddrs
}
func (epi *endpointInterface) SrcName() string {
return epi.srcName
}
func (epi *endpointInterface) SetNames(srcName string, dstPrefix string) error {
epi.srcName = srcName
epi.dstPrefix = dstPrefix

@@ -29,7 +29,10 @@ const (
// Allocator provides per address space ipv4/ipv6 book keeping
type Allocator struct {
// Predefined pools for default address spaces
predefined map[string][]*net.IPNet
// Separate from the addrSpace because they should not be serialized
predefined map[string][]*net.IPNet
predefinedStartIndices map[string]int
// The (potentially serialized) address spaces
addrSpaces map[string]*addrSpace
// stores []datastore.Datastore
// Allocated addresses in each address space's subnet
@@ -47,6 +50,9 @@ func NewAllocator(lcDs, glDs datastore.DataStore) (*Allocator, error) {
globalAddressSpace: ipamutils.PredefinedGranularNetworks,
}
// Initialize asIndices map
a.predefinedStartIndices = make(map[string]int)
// Initialize bitseq map
a.addresses = make(map[SubnetKey]*bitseq.Handle)
@@ -197,6 +203,10 @@ func (a *Allocator) GetDefaultAddressSpaces() (string, string, error) {
}
// RequestPool returns an address pool along with its unique id.
// addressSpace must be a valid address space name and must not be the empty string.
// If pool is the empty string then the default predefined pool for addressSpace will be used, otherwise pool must be a valid IP address and length in CIDR notation.
// If subPool is not empty, it must be a valid IP address and length in CIDR notation which is a sub-range of pool.
// subPool must be empty if pool is empty.
func (a *Allocator) RequestPool(addressSpace, pool, subPool string, options map[string]string, v6 bool) (string, *net.IPNet, map[string]string, error) {
logrus.Debugf("RequestPool(%s, %s, %s, %v, %t)", addressSpace, pool, subPool, options, v6)
@@ -277,8 +287,8 @@ retry:
return remove()
}
// Given the address space, returns the local or global PoolConfig based on the
// address space is local or global. AddressSpace locality is being registered with IPAM out of band.
// Given the address space, returns the local or global PoolConfig based on whether the
// address space is local or global. AddressSpace locality is registered with IPAM out of band.
func (a *Allocator) getAddrSpace(as string) (*addrSpace, error) {
a.Lock()
defer a.Unlock()
@@ -289,6 +299,8 @@ func (a *Allocator) getAddrSpace(as string) (*addrSpace, error) {
return aSpace, nil
}
// parsePoolRequest parses and validates a request to create a new pool under addressSpace and returns
// a SubnetKey, network and range describing the request.
func (a *Allocator) parsePoolRequest(addressSpace, pool, subPool string, v6 bool) (*SubnetKey, *net.IPNet, *AddressRange, error) {
var (
nw *net.IPNet
@@ -374,11 +386,24 @@ func (a *Allocator) retrieveBitmask(k SubnetKey, n *net.IPNet) (*bitseq.Handle,
func (a *Allocator) getPredefineds(as string) []*net.IPNet {
a.Lock()
defer a.Unlock()
l := make([]*net.IPNet, 0, len(a.predefined[as]))
for _, pool := range a.predefined[as] {
l = append(l, pool)
p := a.predefined[as]
i := a.predefinedStartIndices[as]
// defensive in case the list changed since last update
if i >= len(p) {
i = 0
}
return l
return append(p[i:], p[:i]...)
}
func (a *Allocator) updateStartIndex(as string, amt int) {
a.Lock()
i := a.predefinedStartIndices[as] + amt
if i < 0 || i >= len(a.predefined[as]) {
i = 0
}
a.predefinedStartIndices[as] = i
a.Unlock()
}
func (a *Allocator) getPredefinedPool(as string, ipV6 bool) (*net.IPNet, error) {
@@ -397,21 +422,26 @@ func (a *Allocator) getPredefinedPool(as string, ipV6 bool) (*net.IPNet, error)
return nil, err
}
for _, nw := range a.getPredefineds(as) {
predefined := a.getPredefineds(as)
aSpace.Lock()
for i, nw := range predefined {
if v != getAddressVersion(nw.IP) {
continue
}
aSpace.Lock()
// Checks whether pool has already been allocated
if _, ok := aSpace.subnets[SubnetKey{AddressSpace: as, Subnet: nw.String()}]; ok {
aSpace.Unlock()
continue
}
// Shouldn't be necessary, but check prevents IP collisions should
// predefined pools overlap for any reason.
if !aSpace.contains(as, nw) {
aSpace.Unlock()
a.updateStartIndex(as, i+1)
return nw, nil
}
aSpace.Unlock()
}
aSpace.Unlock()
return nil, types.NotFoundErrorf("could not find an available, non-overlapping IPv%d address pool among the defaults to assign to the network", v)
}
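
The allocator change above adds a per-address-space start index so that each successful getPredefinedPool call leaves the next scan starting just past the pool it handed out, rather than always walking the predefined list from the front. A rough standalone sketch of that rotation, with the locking and overlap checks left out (names invented for illustration):

package main

import "fmt"

type rotator struct {
    pools []string // stand-in for the predefined *net.IPNet list
    start int      // stand-in for predefinedStartIndices[as]
}

// next returns the predefined pools beginning at the saved start index
// and wrapping around, the way getPredefineds rotates its list
// (built into a fresh slice here for simplicity).
func (r *rotator) next() []string {
    i := r.start
    if i >= len(r.pools) {
        i = 0 // defensive, as in the original
    }
    out := make([]string, 0, len(r.pools))
    out = append(out, r.pools[i:]...)
    out = append(out, r.pools[:i]...)
    return out
}

// advance mirrors updateStartIndex: move the start index forward,
// wrapping to zero when it runs off the end.
func (r *rotator) advance(amt int) {
    i := r.start + amt
    if i < 0 || i >= len(r.pools) {
        i = 0
    }
    r.start = i
}

func main() {
    r := &rotator{pools: []string{"172.17.0.0/16", "172.18.0.0/16", "172.19.0.0/16"}}
    fmt.Println(r.next()) // [172.17.0.0/16 172.18.0.0/16 172.19.0.0/16]
    r.advance(1)
    fmt.Println(r.next()) // [172.18.0.0/16 172.19.0.0/16 172.17.0.0/16]
}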

@@ -257,17 +257,19 @@ func (aSpace *addrSpace) New() datastore.KVObject {
}
}
// updatePoolDBOnAdd returns a closure which will add the subnet k to the address space when executed.
func (aSpace *addrSpace) updatePoolDBOnAdd(k SubnetKey, nw *net.IPNet, ipr *AddressRange, pdf bool) (func() error, error) {
aSpace.Lock()
defer aSpace.Unlock()
// Check if already allocated
if p, ok := aSpace.subnets[k]; ok {
if _, ok := aSpace.subnets[k]; ok {
if pdf {
return nil, types.InternalMaskableErrorf("predefined pool %s is already reserved", nw)
}
aSpace.incRefCount(p, 1)
return func() error { return nil }, nil
// This means the same pool is already allocated. updatePoolDBOnAdd is called when there
// is request for a pool/subpool. It should ensure there is no overlap with existing pools
return nil, ipamapi.ErrPoolOverlap
}
// If master pool, check for overlap
@@ -280,7 +282,7 @@ func (aSpace *addrSpace) updatePoolDBOnAdd(k SubnetKey, nw *net.IPNet, ipr *Addr
return func() error { return aSpace.alloc.insertBitMask(k, nw) }, nil
}
// This is a new non-master pool
// This is a new non-master pool (subPool)
p := &PoolData{
ParentKey: SubnetKey{AddressSpace: k.AddressSpace, Subnet: k.Subnet},
Pool: nw,

@@ -4,11 +4,13 @@ import (
"fmt"
"net"
"github.com/docker/docker/pkg/plugingetter"
"github.com/docker/docker/pkg/plugins"
"github.com/docker/libnetwork/discoverapi"
"github.com/docker/libnetwork/ipamapi"
"github.com/docker/libnetwork/ipams/remote/api"
"github.com/docker/libnetwork/types"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
)
@@ -52,13 +54,39 @@ func Init(cb ipamapi.Callback, l, g interface{}) error {
handleFunc = pg.Handle
activePlugins := pg.GetAllManagedPluginsByCap(ipamapi.PluginEndpointType)
for _, ap := range activePlugins {
newPluginHandler(ap.Name(), ap.Client())
client, err := getPluginClient(ap)
if err != nil {
return err
}
newPluginHandler(ap.Name(), client)
}
}
handleFunc(ipamapi.PluginEndpointType, newPluginHandler)
return nil
}
func getPluginClient(p plugingetter.CompatPlugin) (*plugins.Client, error) {
if v1, ok := p.(plugingetter.PluginWithV1Client); ok {
return v1.Client(), nil
}
pa, ok := p.(plugingetter.PluginAddr)
if !ok {
return nil, errors.Errorf("unknown plugin type %T", p)
}
if pa.Protocol() != plugins.ProtocolSchemeHTTPV1 {
return nil, errors.Errorf("unsupported plugin protocol %s", pa.Protocol())
}
addr := pa.Addr()
client, err := plugins.NewClientWithTimeout(addr.Network()+"://"+addr.String(), nil, pa.Timeout())
if err != nil {
return nil, errors.Wrap(err, "error creating plugin client")
}
return client, nil
}
func (a *allocator) call(methodName string, arg interface{}, retVal PluginResponse) error {
method := ipamapi.PluginEndpointType + "." + methodName
err := a.endpoint.Call(method, arg, retVal)

@@ -40,7 +40,7 @@ type Network interface {
CreateEndpoint(name string, options ...EndpointOption) (Endpoint, error)
// Delete the network.
Delete() error
Delete(options ...NetworkDeleteOption) error
// Endpoints returns the list of Endpoint(s) in this network.
Endpoints() []Endpoint
@@ -875,6 +875,28 @@ func (n *network) processOptions(options ...NetworkOption) {
}
}
type networkDeleteParams struct {
rmLBEndpoint bool
}
// NetworkDeleteOption is a type for optional parameters to pass to the
// network.Delete() function.
type NetworkDeleteOption func(p *networkDeleteParams)
// NetworkDeleteOptionRemoveLB informs a network.Delete() operation that should
// remove the load balancer endpoint for this network. Note that the Delete()
// method will automatically remove a load balancing endpoint for most networks
// when the network is otherwise empty. However, this does not occur for some
// networks. In particular, networks marked as ingress (which are supposed to
// be more permanent than other overlay networks) won't automatically remove
// the LB endpoint on Delete(). This method allows for explicit removal of
// such networks provided there are no other endpoints present in the network.
// If the network still has non-LB endpoints present, Delete() will not
// remove the LB endpoint and will return an error.
func NetworkDeleteOptionRemoveLB(p *networkDeleteParams) {
p.rmLBEndpoint = true
}
func (n *network) resolveDriver(name string, load bool) (driverapi.Driver, *driverapi.Capability, error) {
c := n.getController()
@@ -938,11 +960,23 @@ func (n *network) driver(load bool) (driverapi.Driver, error) {
return d, nil
}
func (n *network) Delete() error {
return n.delete(false)
func (n *network) Delete(options ...NetworkDeleteOption) error {
var params networkDeleteParams
for _, opt := range options {
opt(&params)
}
return n.delete(false, params.rmLBEndpoint)
}
func (n *network) delete(force bool) error {
// This function gets called in 3 ways:
// * Delete() -- (false, false)
// remove if endpoint count == 0 or endpoint count == 1 and
// there is a load balancer IP
// * Delete(libnetwork.NetworkDeleteOptionRemoveLB) -- (false, true)
// remove load balancer and network if endpoint count == 1
// * controller.networkCleanup() -- (true, true)
// remove the network no matter what
func (n *network) delete(force bool, rmLBEndpoint bool) error {
n.Lock()
c := n.ctrlr
name := n.name
@@ -957,10 +991,32 @@ func (n *network) delete(force bool) error {
return &UnknownNetworkError{name: name, id: id}
}
if len(n.loadBalancerIP) != 0 {
endpoints := n.Endpoints()
if force || (len(endpoints) == 1 && !n.ingress) {
n.deleteLoadBalancerSandbox()
// Only remove ingress on force removal or explicit LB endpoint removal
if n.ingress && !force && !rmLBEndpoint {
return &ActiveEndpointsError{name: n.name, id: n.id}
}
// Check that the network is empty
var emptyCount uint64
if n.hasLoadBalancerEndpoint() {
emptyCount = 1
}
if !force && n.getEpCnt().EndpointCnt() > emptyCount {
if n.configOnly {
return types.ForbiddenErrorf("configuration network %q is in use", n.Name())
}
return &ActiveEndpointsError{name: n.name, id: n.id}
}
if n.hasLoadBalancerEndpoint() {
// If we got to this point, then the following must hold:
// * force is true OR endpoint count == 1
if err := n.deleteLoadBalancerSandbox(); err != nil {
if !force {
return err
}
// continue deletion when force is true even on error
logrus.Warnf("Error deleting load balancer sandbox: %v", err)
}
//Reload the network from the store to update the epcnt.
n, err = c.getNetworkFromStore(id)
@@ -969,12 +1025,10 @@ func (n *network) delete(force bool) error {
}
}
if !force && n.getEpCnt().EndpointCnt() != 0 {
if n.configOnly {
return types.ForbiddenErrorf("configuration network %q is in use", n.Name())
}
return &ActiveEndpointsError{name: n.name, id: n.id}
}
// Up to this point, errors that we returned were recoverable.
// From here on, any errors leave us in an inconsistent state.
// This is unfortunate, but there isn't a safe way to
// reconstitute a load-balancer endpoint after removing it.
// Mark the network for deletion
n.inDelete = true
@@ -1023,9 +1077,6 @@ func (n *network) delete(force bool) error {
// Cleanup the service discovery for this network
c.cleanupServiceDiscovery(n.ID())
// Cleanup the load balancer
c.cleanupServiceBindings(n.ID())
removeFromStore:
// deleteFromStore performs an atomic delete operation and the
// network.epCnt will help prevent any possible
@@ -1156,18 +1207,6 @@ func (n *network) createEndpoint(name string, options ...EndpointOption) (Endpoi
ep.releaseAddress()
}
}()
// Moving updateToSTore before calling addEndpoint so that we shall clean up VETH interfaces in case
// DockerD get killed between addEndpoint and updateSTore call
if err = n.getController().updateToStore(ep); err != nil {
return nil, err
}
defer func() {
if err != nil {
if e := n.getController().deleteFromStore(ep); e != nil {
logrus.Warnf("error rolling back endpoint %s from store: %v", name, e)
}
}
}()
if err = n.addEndpoint(ep); err != nil {
return nil, err
@@ -1180,6 +1219,19 @@ func (n *network) createEndpoint(name string, options ...EndpointOption) (Endpoi
}
}()
// We should perform updateToStore call right after addEndpoint
// in order to have iface properly configured
if err = n.getController().updateToStore(ep); err != nil {
return nil, err
}
defer func() {
if err != nil {
if e := n.getController().deleteFromStore(ep); e != nil {
logrus.Warnf("error rolling back endpoint %s from store: %v", name, e)
}
}
}()
if err = ep.assignAddress(ipam, false, n.enableIPv6 && n.postIPv6); err != nil {
return nil, err
}
@@ -1338,7 +1390,7 @@ func (n *network) addSvcRecords(eID, name, serviceID string, epIP, epIPv6 net.IP
return
}
logrus.Debugf("%s (%s).addSvcRecords(%s, %s, %s, %t) %s sid:%s", eID, n.ID()[0:7], name, epIP, epIPv6, ipMapUpdate, method, serviceID)
logrus.Debugf("%s (%.7s).addSvcRecords(%s, %s, %s, %t) %s sid:%s", eID, n.ID(), name, epIP, epIPv6, ipMapUpdate, method, serviceID)
c := n.getController()
c.Lock()
@@ -1374,7 +1426,7 @@ func (n *network) deleteSvcRecords(eID, name, serviceID string, epIP net.IP, epI
return
}
logrus.Debugf("%s (%s).deleteSvcRecords(%s, %s, %s, %t) %s sid:%s ", eID, n.ID()[0:7], name, epIP, epIPv6, ipMapUpdate, method, serviceID)
logrus.Debugf("%s (%.7s).deleteSvcRecords(%s, %s, %s, %t) %s sid:%s ", eID, n.ID(), name, epIP, epIPv6, ipMapUpdate, method, serviceID)
c := n.getController()
c.Lock()
@@ -1876,6 +1928,10 @@ func (n *network) hasSpecialDriver() bool {
return n.Type() == "host" || n.Type() == "null"
}
func (n *network) hasLoadBalancerEndpoint() bool {
return len(n.loadBalancerIP) != 0
}
func (n *network) ResolveName(req string, ipType int) ([]net.IP, bool) {
var ipv6Miss bool
@@ -2055,9 +2111,22 @@ func (c *controller) getConfigNetwork(name string) (*network, error) {
return n.(*network), nil
}
func (n *network) createLoadBalancerSandbox() error {
sandboxName := n.name + "-sbox"
sbOptions := []SandboxOption{}
func (n *network) lbSandboxName() string {
name := "lb-" + n.name
if n.ingress {
name = n.name + "-sbox"
}
return name
}
func (n *network) lbEndpointName() string {
return n.name + "-endpoint"
}
func (n *network) createLoadBalancerSandbox() (retErr error) {
sandboxName := n.lbSandboxName()
// Mark the sandbox to be a load balancer
sbOptions := []SandboxOption{OptionLoadBalancer()}
if n.ingress {
sbOptions = append(sbOptions, OptionIngress())
}
@@ -2066,26 +2135,30 @@ func (n *network) createLoadBalancerSandbox() error {
return err
}
defer func() {
if err != nil {
if retErr != nil {
if e := n.ctrlr.SandboxDestroy(sandboxName); e != nil {
logrus.Warnf("could not delete sandbox %s on failure on failure (%v): %v", sandboxName, err, e)
logrus.Warnf("could not delete sandbox %s on failure on failure (%v): %v", sandboxName, retErr, e)
}
}
}()
endpointName := n.name + "-endpoint"
endpointName := n.lbEndpointName()
epOptions := []EndpointOption{
CreateOptionIpam(n.loadBalancerIP, nil, nil, nil),
CreateOptionLoadBalancer(),
}
if n.hasLoadBalancerEndpoint() && !n.ingress {
// Mark LB endpoints as anonymous so they don't show up in DNS
epOptions = append(epOptions, CreateOptionAnonymous())
}
ep, err := n.createEndpoint(endpointName, epOptions...)
if err != nil {
return err
}
defer func() {
if err != nil {
if retErr != nil {
if e := ep.Delete(true); e != nil {
logrus.Warnf("could not delete endpoint %s on failure on failure (%v): %v", endpointName, err, e)
logrus.Warnf("could not delete endpoint %s on failure on failure (%v): %v", endpointName, retErr, e)
}
}
}()
@@ -2093,17 +2166,18 @@ func (n *network) createLoadBalancerSandbox() error {
if err := ep.Join(sb, nil); err != nil {
return err
}
return sb.EnableService()
}
func (n *network) deleteLoadBalancerSandbox() {
func (n *network) deleteLoadBalancerSandbox() error {
n.Lock()
c := n.ctrlr
name := n.name
n.Unlock()
endpointName := name + "-endpoint"
sandboxName := name + "-sbox"
sandboxName := n.lbSandboxName()
endpointName := n.lbEndpointName()
endpoint, err := n.EndpointByName(endpointName)
if err != nil {
@@ -2128,6 +2202,7 @@ func (n *network) deleteLoadBalancerSandbox() {
}
if err := c.SandboxDestroy(sandboxName); err != nil {
logrus.Warnf("Failed to delete %s sandbox: %v", sandboxName, err)
return fmt.Errorf("Failed to delete %s sandbox: %v", sandboxName, err)
}
return nil
}
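
The new Delete signature above uses the functional-options pattern: callers pass zero or more NetworkDeleteOption functions, each of which mutates a private networkDeleteParams value before the deletion logic runs, so an ingress network's load-balancer endpoint can be removed explicitly with n.Delete(NetworkDeleteOptionRemoveLB). A minimal generic sketch of the pattern, with invented names:

package main

import "fmt"

// deleteParams collects optional behaviour, like networkDeleteParams above.
type deleteParams struct {
    removeLB bool
}

// DeleteOption mutates deleteParams; NetworkDeleteOptionRemoveLB has this shape.
type DeleteOption func(p *deleteParams)

// WithRemoveLB is an invented option mirroring NetworkDeleteOptionRemoveLB.
func WithRemoveLB(p *deleteParams) { p.removeLB = true }

// Delete applies the options, then acts on the resulting parameters.
func Delete(options ...DeleteOption) {
    var params deleteParams
    for _, opt := range options {
        opt(&params)
    }
    fmt.Println("remove load-balancer endpoint:", params.removeLB)
}

func main() {
    Delete()             // remove load-balancer endpoint: false
    Delete(WithRemoveLB) // remove load-balancer endpoint: true
}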

@@ -110,7 +110,6 @@ type tableEventMessage struct {
tname string
key string
msg []byte
node string
}
func (m *tableEventMessage) Invalidates(other memberlist.Broadcast) bool {
@@ -168,7 +167,6 @@ func (nDB *NetworkDB) sendTableEvent(event TableEvent_Type, nid string, tname st
id: nid,
tname: tname,
key: key,
node: nDB.config.NodeID,
})
return nil
}

@@ -24,6 +24,9 @@ const (
retryInterval = 1 * time.Second
nodeReapInterval = 24 * time.Hour
nodeReapPeriod = 2 * time.Hour
// considering a cluster with > 20 nodes and a drain speed of 100 msg/s
// the following is roughly 1 minute
maxQueueLenBroadcastOnSync = 500
)
type logWriter struct{}
@@ -52,7 +55,7 @@ func (l *logWriter) Write(p []byte) (int, error) {
// SetKey adds a new key to the key ring
func (nDB *NetworkDB) SetKey(key []byte) {
logrus.Debugf("Adding key %s", hex.EncodeToString(key)[0:5])
logrus.Debugf("Adding key %.5s", hex.EncodeToString(key))
nDB.Lock()
defer nDB.Unlock()
for _, dbKey := range nDB.config.Keys {
@@ -69,7 +72,7 @@ func (nDB *NetworkDB) SetKey(key []byte) {
// SetPrimaryKey sets the given key as the primary key. This should have
// been added apriori through SetKey
func (nDB *NetworkDB) SetPrimaryKey(key []byte) {
logrus.Debugf("Primary Key %s", hex.EncodeToString(key)[0:5])
logrus.Debugf("Primary Key %.5s", hex.EncodeToString(key))
nDB.RLock()
defer nDB.RUnlock()
for _, dbKey := range nDB.config.Keys {
@@ -85,7 +88,7 @@ func (nDB *NetworkDB) SetPrimaryKey(key []byte) {
// RemoveKey removes a key from the key ring. The key being removed
// can't be the primary key
func (nDB *NetworkDB) RemoveKey(key []byte) {
logrus.Debugf("Remove Key %s", hex.EncodeToString(key)[0:5])
logrus.Debugf("Remove Key %.5s", hex.EncodeToString(key))
nDB.Lock()
defer nDB.Unlock()
for i, dbKey := range nDB.config.Keys {
@@ -123,7 +126,7 @@ func (nDB *NetworkDB) clusterInit() error {
var err error
if len(nDB.config.Keys) > 0 {
for i, key := range nDB.config.Keys {
logrus.Debugf("Encryption key %d: %s", i+1, hex.EncodeToString(key)[0:5])
logrus.Debugf("Encryption key %d: %.5s", i+1, hex.EncodeToString(key))
}
nDB.keyring, err = memberlist.NewKeyring(nDB.config.Keys, nDB.config.Keys[0])
if err != nil {
@@ -285,18 +288,35 @@ func (nDB *NetworkDB) rejoinClusterBootStrap() {
return
}
myself, _ := nDB.nodes[nDB.config.NodeID]
bootStrapIPs := make([]string, 0, len(nDB.bootStrapIP))
for _, bootIP := range nDB.bootStrapIP {
for _, node := range nDB.nodes {
if node.Addr.Equal(bootIP) {
// One of the bootstrap nodes is part of the cluster, return
nDB.RUnlock()
return
}
// botostrap IPs are usually IP:port from the Join
var bootstrapIP net.IP
ipStr, _, err := net.SplitHostPort(bootIP)
if err != nil {
// try to parse it as an IP with port
// Note this seems to be the case for swarm that do not specify any port
ipStr = bootIP
}
bootstrapIP = net.ParseIP(ipStr)
if bootstrapIP != nil {
for _, node := range nDB.nodes {
if node.Addr.Equal(bootstrapIP) && !node.Addr.Equal(myself.Addr) {
// One of the bootstrap nodes (and not myself) is part of the cluster, return
nDB.RUnlock()
return
}
}
bootStrapIPs = append(bootStrapIPs, bootIP)
}
bootStrapIPs = append(bootStrapIPs, bootIP.String())
}
nDB.RUnlock()
if len(bootStrapIPs) == 0 {
// this will also avoid to call the Join with an empty list erasing the current bootstrap ip list
logrus.Debug("rejoinClusterBootStrap did not find any valid IP")
return
}
// None of the bootStrap nodes are in the cluster, call memberlist join
logrus.Debugf("rejoinClusterBootStrap, calling cluster join with bootStrap %v", bootStrapIPs)
ctx, cancel := context.WithTimeout(nDB.ctx, rejoinClusterDuration)
@@ -555,6 +575,7 @@ func (nDB *NetworkDB) bulkSync(nodes []string, all bool) ([]string, error) {
var err error
var networks []string
var success bool
for _, node := range nodes {
if node == nDB.config.NodeID {
continue
@@ -562,21 +583,25 @@ func (nDB *NetworkDB) bulkSync(nodes []string, all bool) ([]string, error) {
logrus.Debugf("%v(%v): Initiating bulk sync with node %v", nDB.config.Hostname, nDB.config.NodeID, node)
networks = nDB.findCommonNetworks(node)
err = nDB.bulkSyncNode(networks, node, true)
// if its periodic bulksync stop after the first successful sync
if !all && err == nil {
break
}
if err != nil {
err = fmt.Errorf("bulk sync to node %s failed: %v", node, err)
logrus.Warn(err.Error())
} else {
// bulk sync succeeded
success = true
// if its periodic bulksync stop after the first successful sync
if !all {
break
}
}
}
if err != nil {
return nil, err
if success {
// if at least one node sync succeeded
return networks, nil
}
return networks, nil
return nil, err
}
// Bulk sync all the table entries belonging to a set of networks to a
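
rejoinClusterBootStrap above now keeps the bootstrap members as plain strings and, when matching them against known nodes, tries net.SplitHostPort first and falls back to treating the whole string as a bare IP. A small standalone sketch of that host:port-or-bare-IP parsing (illustrative only):

package main

import (
    "fmt"
    "net"
)

// parseBootstrapIP accepts either "ip:port" or a bare "ip" string and
// returns the parsed IP, or nil if the string is not an IP at all.
func parseBootstrapIP(member string) net.IP {
    ipStr, _, err := net.SplitHostPort(member)
    if err != nil {
        // No port present (or otherwise unsplittable): try the raw string.
        ipStr = member
    }
    return net.ParseIP(ipStr)
}

func main() {
    fmt.Println(parseBootstrapIP("192.168.1.10:7946")) // 192.168.1.10
    fmt.Println(parseBootstrapIP("192.168.1.10"))      // 192.168.1.10
    fmt.Println(parseBootstrapIP("not-an-ip"))         // <nil>
}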

@@ -41,7 +41,7 @@ func (nDB *NetworkDB) handleNodeEvent(nEvent *NodeEvent) bool {
// If the node is not known from memberlist we cannot process save any state of it else if it actually
// dies we won't receive any notification and we will remain stuck with it
if _, ok := nDB.nodes[nEvent.NodeName]; !ok {
logrus.Error("node: %s is unknown to memberlist", nEvent.NodeName)
logrus.Errorf("node: %s is unknown to memberlist", nEvent.NodeName)
return false
}
@@ -142,7 +142,7 @@ func (nDB *NetworkDB) handleNetworkEvent(nEvent *NetworkEvent) bool {
return true
}
func (nDB *NetworkDB) handleTableEvent(tEvent *TableEvent) bool {
func (nDB *NetworkDB) handleTableEvent(tEvent *TableEvent, isBulkSync bool) bool {
// Update our local clock if the received messages has newer time.
nDB.tableClock.Witness(tEvent.LTime)
@@ -175,6 +175,14 @@ func (nDB *NetworkDB) handleTableEvent(tEvent *TableEvent) bool {
nDB.Unlock()
return false
}
} else if tEvent.Type == TableEventTypeDelete && !isBulkSync {
nDB.Unlock()
// We don't know the entry, the entry is being deleted and the message is an async message
// In this case the safest approach is to ignore it, it is possible that the queue grew so much to
// exceed the garbage collection time (the residual reap time that is in the message is not being
// updated, to avoid inserting too many messages in the queue).
// Instead the messages coming from TCP bulk sync are safe with the latest value for the garbage collection time
return false
}
e = &entry{
@@ -197,11 +205,17 @@ func (nDB *NetworkDB) handleTableEvent(tEvent *TableEvent) bool {
nDB.Unlock()
if err != nil && tEvent.Type == TableEventTypeDelete {
// If it is a delete event and we did not have a state for it, don't propagate to the application
// Again we don't know the entry but this is coming from a TCP sync so the message body is up to date.
// We had saved the state so to speed up convergence and be able to avoid accepting create events.
// Now we will rebroadcast the message if 2 conditions are met:
// 1) we had already synced this network (during the network join)
// 2) the residual reapTime is higher than 1/6 of the total reapTime.
// If the residual reapTime is lower or equal to 1/6 of the total reapTime don't bother broadcasting it around
// most likely the cluster is already aware of it, if not who will sync with this node will catch the state too.
// This also avoids that deletion of entries close to their garbage collection ends up circuling around forever
return e.reapTime > nDB.config.reapEntryInterval/6
// most likely the cluster is already aware of it
// This also reduce the possibility that deletion of entries close to their garbage collection ends up circuling around
// forever
//logrus.Infof("exiting on delete not knowing the obj with rebroadcast:%t", network.inSync)
return network.inSync && e.reapTime > nDB.config.reapEntryInterval/6
}
var op opType
@@ -215,7 +229,7 @@ func (nDB *NetworkDB) handleTableEvent(tEvent *TableEvent) bool {
}
nDB.broadcaster.Write(makeEvent(op, tEvent.TableName, tEvent.NetworkID, tEvent.Key, tEvent.Value))
return true
return network.inSync
}
func (nDB *NetworkDB) handleCompound(buf []byte, isBulkSync bool) {
@@ -244,7 +258,7 @@ func (nDB *NetworkDB) handleTableMessage(buf []byte, isBulkSync bool) {
return
}
if rebroadcast := nDB.handleTableEvent(&tEvent); rebroadcast {
if rebroadcast := nDB.handleTableEvent(&tEvent, isBulkSync); rebroadcast {
var err error
buf, err = encodeRawMessage(MessageTypeTableEvent, buf)
if err != nil {
@@ -261,12 +275,16 @@ func (nDB *NetworkDB) handleTableMessage(buf []byte, isBulkSync bool) {
return
}
// if the queue is over the threshold, avoid distributing information coming from TCP sync
if isBulkSync && n.tableBroadcasts.NumQueued() > maxQueueLenBroadcastOnSync {
return
}
n.tableBroadcasts.QueueBroadcast(&tableEventMessage{
msg: buf,
id: tEvent.NetworkID,
tname: tEvent.TableName,
key: tEvent.Key,
node: tEvent.NodeName,
})
}
}

@@ -5,7 +5,6 @@ package networkdb
import (
"context"
"fmt"
"net"
"os"
"strings"
"sync"
@@ -96,7 +95,7 @@ type NetworkDB struct {
// bootStrapIP is the list of IPs that can be used to bootstrap
// the gossip.
bootStrapIP []net.IP
bootStrapIP []string
// lastStatsTimestamp is the last timestamp when the stats got printed
lastStatsTimestamp time.Time
@@ -131,6 +130,9 @@ type network struct {
// Lamport time for the latest state of the entry.
ltime serf.LamportTime
// Gets set to true after the first bulk sync happens
inSync bool
// Node leave is in progress.
leaving bool
@@ -268,10 +270,8 @@ func New(c *Config) (*NetworkDB, error) {
// instances passed by the caller in the form of addr:port
func (nDB *NetworkDB) Join(members []string) error {
nDB.Lock()
nDB.bootStrapIP = make([]net.IP, 0, len(members))
for _, m := range members {
nDB.bootStrapIP = append(nDB.bootStrapIP, net.ParseIP(m))
}
nDB.bootStrapIP = append([]string(nil), members...)
logrus.Infof("The new bootstrap node list is:%v", nDB.bootStrapIP)
nDB.Unlock()
return nDB.clusterJoin(members)
}
@@ -619,6 +619,7 @@ func (nDB *NetworkDB) JoinNetwork(nid string) error {
}
nDB.addNetworkNode(nid, nDB.config.NodeID)
networkNodes := nDB.networkNodes[nid]
n = nodeNetworks[nid]
nDB.Unlock()
if err := nDB.sendNetworkEvent(nid, NetworkEventTypeJoin, ltime); err != nil {
@@ -630,6 +631,12 @@ func (nDB *NetworkDB) JoinNetwork(nid string) error {
logrus.Errorf("Error bulk syncing while joining network %s: %v", nid, err)
}
// Mark the network as being synced
// note this is a best effort, we are not checking the result of the bulk sync
nDB.Lock()
n.inSync = true
nDB.Unlock()
return nil
}

@@ -1,11 +1,11 @@
// Code generated by protoc-gen-gogo. DO NOT EDIT.
// source: networkdb.proto
// source: networkdb/networkdb.proto
/*
Package networkdb is a generated protocol buffer package.
It is generated from these files:
networkdb.proto
networkdb/networkdb.proto
It has these top-level messages:
GossipMessage
@@ -476,7 +476,7 @@ func (m *CompoundMessage) GetMessages() []*CompoundMessage_SimpleMessage {
type CompoundMessage_SimpleMessage struct {
// Bytestring payload of a message constructed using
// other message type definitions.
Payload []byte `protobuf:"bytes,1,opt,name=Payload,proto3" json:"Payload,omitempty"`
Payload []byte `protobuf:"bytes,1,opt,name=Payload,json=payload,proto3" json:"Payload,omitempty"`
}
func (m *CompoundMessage_SimpleMessage) Reset() { *m = CompoundMessage_SimpleMessage{} }
@@ -997,24 +997,6 @@ func (m *CompoundMessage_SimpleMessage) MarshalTo(dAtA []byte) (int, error) {
return i, nil
}
func encodeFixed64Networkdb(dAtA []byte, offset int, v uint64) int {
dAtA[offset] = uint8(v)
dAtA[offset+1] = uint8(v >> 8)
dAtA[offset+2] = uint8(v >> 16)
dAtA[offset+3] = uint8(v >> 24)
dAtA[offset+4] = uint8(v >> 32)
dAtA[offset+5] = uint8(v >> 40)
dAtA[offset+6] = uint8(v >> 48)
dAtA[offset+7] = uint8(v >> 56)
return offset + 8
}
func encodeFixed32Networkdb(dAtA []byte, offset int, v uint32) int {
dAtA[offset] = uint8(v)
dAtA[offset+1] = uint8(v >> 8)
dAtA[offset+2] = uint8(v >> 16)
dAtA[offset+3] = uint8(v >> 24)
return offset + 4
}
func encodeVarintNetworkdb(dAtA []byte, offset int, v uint64) int {
for v >= 1<<7 {
dAtA[offset] = uint8(v&0x7f | 0x80)
@@ -2666,68 +2648,68 @@ var (
ErrIntOverflowNetworkdb = fmt.Errorf("proto: integer overflow")
)
func init() { proto.RegisterFile("networkdb.proto", fileDescriptorNetworkdb) }
func init() { proto.RegisterFile("networkdb/networkdb.proto", fileDescriptorNetworkdb) }
var fileDescriptorNetworkdb = []byte{
// 953 bytes of a gzipped FileDescriptorProto
// 955 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xcc, 0x96, 0xcd, 0x6e, 0xe3, 0x54,
0x14, 0xc7, 0x7b, 0xf3, 0xd5, 0xe4, 0x34, 0xa5, 0xe6, 0x4e, 0x67, 0xc6, 0xe3, 0x81, 0xc4, 0x98,
0x99, 0x2a, 0x53, 0x41, 0x8a, 0x3a, 0x4f, 0xd0, 0x24, 0x16, 0x64, 0x26, 0xe3, 0x44, 0x6e, 0x52,
0xc4, 0x2a, 0xba, 0xad, 0x2f, 0xa9, 0x55, 0xc7, 0xb6, 0x6c, 0x27, 0x28, 0x2b, 0x10, 0xab, 0x51,
0x16, 0xbc, 0x41, 0x56, 0xc3, 0x9a, 0x07, 0x40, 0x2c, 0x59, 0xcc, 0x82, 0x05, 0xec, 0x10, 0x8b,
0x88, 0xe6, 0x09, 0x78, 0x04, 0xe4, 0x6b, 0x3b, 0xb9, 0x49, 0xab, 0x91, 0x10, 0x23, 0xc1, 0x26,
0xb9, 0x1f, 0xbf, 0x1c, 0x9f, 0xf3, 0xf7, 0xff, 0xdc, 0x1b, 0xd8, 0xb3, 0x69, 0xf0, 0x95, 0xe3,
0x5d, 0x19, 0xe7, 0x55, 0xd7, 0x73, 0x02, 0x07, 0x17, 0x96, 0x0b, 0xd2, 0xfe, 0xc0, 0x19, 0x38,
0x6c, 0xf5, 0x28, 0x1c, 0x45, 0x80, 0xd2, 0x86, 0xdd, 0x4f, 0x1d, 0xdf, 0x37, 0xdd, 0x17, 0xd4,
0xf7, 0xc9, 0x80, 0xe2, 0x43, 0xc8, 0x04, 0x13, 0x97, 0x8a, 0x48, 0x46, 0x95, 0x77, 0x8e, 0xef,
0x55, 0x57, 0x11, 0x63, 0xa2, 0x3b, 0x71, 0xa9, 0xce, 0x18, 0x8c, 0x21, 0x63, 0x90, 0x80, 0x88,
0x29, 0x19, 0x55, 0x8a, 0x3a, 0x1b, 0x2b, 0xaf, 0x52, 0x50, 0xd0, 0x1c, 0x83, 0xaa, 0x63, 0x6a,
0x07, 0xf8, 0xe3, 0xb5, 0x68, 0x0f, 0xb8, 0x68, 0x4b, 0xa6, 0xca, 0x05, 0x6c, 0x42, 0xce, 0xea,
0x07, 0xe6, 0x90, 0xb2, 0x90, 0x99, 0xda, 0xf1, 0xeb, 0x79, 0x79, 0xeb, 0x8f, 0x79, 0xf9, 0x70,
0x60, 0x06, 0x97, 0xa3, 0xf3, 0xea, 0x85, 0x33, 0x3c, 0xba, 0x24, 0xfe, 0xa5, 0x79, 0xe1, 0x78,
0xee, 0x91, 0x4f, 0xbd, 0x2f, 0xd9, 0x47, 0xb5, 0x45, 0x86, 0xae, 0xe3, 0x05, 0x5d, 0x73, 0x48,
0xf5, 0xac, 0x15, 0x7e, 0xe1, 0x87, 0x50, 0xb0, 0x1d, 0x83, 0xf6, 0x6d, 0x32, 0xa4, 0x62, 0x5a,
0x46, 0x95, 0x82, 0x9e, 0x0f, 0x17, 0x34, 0x32, 0xa4, 0xca, 0xd7, 0x90, 0x09, 0x9f, 0x8a, 0x1f,
0xc3, 0x76, 0x53, 0x3b, 0x3b, 0x69, 0x35, 0x1b, 0xc2, 0x96, 0x24, 0x4e, 0x67, 0xf2, 0xfe, 0x32,
0xad, 0x70, 0xbf, 0x69, 0x8f, 0x89, 0x65, 0x1a, 0xb8, 0x0c, 0x99, 0x67, 0xed, 0xa6, 0x26, 0x20,
0xe9, 0xee, 0x74, 0x26, 0xbf, 0xbb, 0xc6, 0x3c, 0x73, 0x4c, 0x1b, 0x7f, 0x00, 0xd9, 0x96, 0x7a,
0x72, 0xa6, 0x0a, 0x29, 0xe9, 0xde, 0x74, 0x26, 0xe3, 0x35, 0xa2, 0x45, 0xc9, 0x98, 0x4a, 0xc5,
0x97, 0xaf, 0x4a, 0x5b, 0x3f, 0x7e, 0x5f, 0x62, 0x0f, 0x56, 0xae, 0x53, 0x50, 0xd4, 0x22, 0x2d,
0x22, 0xa1, 0x3e, 0x59, 0x13, 0xea, 0x3d, 0x5e, 0x28, 0x0e, 0xfb, 0x0f, 0xb4, 0xc2, 0x1f, 0x01,
0xc4, 0xc9, 0xf4, 0x4d, 0x43, 0xcc, 0x84, 0xbb, 0xb5, 0xdd, 0xc5, 0xbc, 0x5c, 0x88, 0x13, 0x6b,
0x36, 0xf4, 0xc4, 0x65, 0x4d, 0x43, 0x79, 0x89, 0x62, 0x69, 0x2b, 0xbc, 0xb4, 0x0f, 0xa7, 0x33,
0xf9, 0x3e, 0x5f, 0x08, 0xaf, 0xae, 0xb2, 0x54, 0x37, 0x7a, 0x03, 0x1b, 0x18, 0x13, 0xf8, 0xd1,
0x4a, 0xe0, 0x07, 0xd3, 0x99, 0x7c, 0x77, 0x13, 0xba, 0x4d, 0xe3, 0x5f, 0xd0, 0x4a, 0x63, 0x3b,
0xf0, 0x26, 0x1b, 0x95, 0xa0, 0x37, 0x57, 0xf2, 0x36, 0xf5, 0x7d, 0x72, 0x43, 0xdf, 0x5a, 0x71,
0x31, 0x2f, 0xe7, 0xb5, 0x58, 0x63, 0x4e, 0x6d, 0x11, 0xb6, 0x2d, 0x4a, 0xc6, 0xa6, 0x3d, 0x60,
0x52, 0xe7, 0xf5, 0x64, 0xaa, 0xfc, 0x84, 0x60, 0x2f, 0x4e, 0xb4, 0x33, 0xf2, 0x2f, 0x3b, 0x23,
0xcb, 0xe2, 0x72, 0x44, 0xff, 0x36, 0xc7, 0xa7, 0x90, 0x8f, 0x6b, 0xf7, 0xc5, 0x94, 0x9c, 0xae,
0xec, 0x1c, 0xdf, 0xbf, 0xc5, 0x84, 0xa1, 0x8e, 0xfa, 0x12, 0xfc, 0x07, 0x85, 0x29, 0xdf, 0x65,
0x00, 0xba, 0xe4, 0xdc, 0x8a, 0x0f, 0x86, 0xea, 0x9a, 0xdf, 0x25, 0xee, 0x51, 0x2b, 0xe8, 0x7f,
0xef, 0x76, 0xfc, 0x3e, 0x40, 0x10, 0xa6, 0x1b, 0xc5, 0xca, 0xb2, 0x58, 0x05, 0xb6, 0xc2, 0x82,
0x09, 0x90, 0xbe, 0xa2, 0x13, 0x31, 0xc7, 0xd6, 0xc3, 0x21, 0xde, 0x87, 0xec, 0x98, 0x58, 0x23,
0x2a, 0x6e, 0xb3, 0x23, 0x33, 0x9a, 0xe0, 0x1a, 0x60, 0x8f, 0xfa, 0xa6, 0x31, 0x22, 0x56, 0xdf,
0xa3, 0xc4, 0x8d, 0x0a, 0xcd, 0xcb, 0xa8, 0x92, 0xad, 0xed, 0x2f, 0xe6, 0x65, 0x41, 0x8f, 0x77,
0x75, 0x4a, 0x5c, 0x56, 0x8a, 0xe0, 0x6d, 0xac, 0x28, 0x3f, 0x24, 0x8d, 0x77, 0xc0, 0x37, 0x1e,
0x6b, 0x96, 0x95, 0xa2, 0x7c, 0xdb, 0x3d, 0x82, 0x5c, 0x5d, 0x57, 0x4f, 0xba, 0x6a, 0xd2, 0x78,
0xeb, 0x58, 0xdd, 0xa3, 0x24, 0xa0, 0x21, 0xd5, 0xeb, 0x34, 0x42, 0x2a, 0x75, 0x1b, 0xd5, 0x73,
0x8d, 0x98, 0x6a, 0xa8, 0x2d, 0xb5, 0xab, 0x0a, 0xe9, 0xdb, 0xa8, 0x06, 0xb5, 0x68, 0xb0, 0xd9,
0x9e, 0xbf, 0x21, 0xd8, 0xab, 0x8d, 0xac, 0xab, 0xd3, 0x89, 0x7d, 0x91, 0x5c, 0x3e, 0x6f, 0xd1,
0xcf, 0x32, 0xec, 0x8c, 0x6c, 0xdf, 0xb1, 0xcc, 0x0b, 0x33, 0xa0, 0x06, 0x73, 0x4d, 0x5e, 0xe7,
0x97, 0xde, 0xec, 0x03, 0x89, 0x6b, 0x87, 0x8c, 0x9c, 0x66, 0x7b, 0x89, 0xeb, 0x45, 0xd8, 0x76,
0xc9, 0xc4, 0x72, 0x88, 0xc1, 0x5e, 0x79, 0x51, 0x4f, 0xa6, 0xca, 0xb7, 0x08, 0xf6, 0xea, 0xce,
0xd0, 0x75, 0x46, 0xb6, 0x91, 0xd4, 0xd4, 0x80, 0xfc, 0x30, 0x1a, 0xfa, 0x22, 0x62, 0x8d, 0x55,
0xe1, 0xdc, 0xbe, 0x41, 0x57, 0x4f, 0xcd, 0xa1, 0x6b, 0xd1, 0x78, 0xa6, 0x2f, 0x7f, 0x29, 0x3d,
0x81, 0xdd, 0xb5, 0xad, 0x30, 0x89, 0x4e, 0x9c, 0x04, 0x8a, 0x92, 0x88, 0xa7, 0x87, 0x3f, 0xa7,
0x60, 0x87, 0xbb, 0xab, 0xf1, 0x87, 0xbc, 0x21, 0xd8, 0xf5, 0xc4, 0xed, 0x26, 0x6e, 0xa8, 0xc2,
0xae, 0xa6, 0x76, 0x3f, 0x6f, 0xeb, 0xcf, 0xfb, 0xea, 0x99, 0xaa, 0x75, 0x05, 0x14, 0x1d, 0xda,
0x1c, 0xba, 0x76, 0x5f, 0x1d, 0xc2, 0x4e, 0xf7, 0xa4, 0xd6, 0x52, 0x63, 0x3a, 0x3e, 0x96, 0x39,
0x9a, 0xeb, 0xf5, 0x03, 0x28, 0x74, 0x7a, 0xa7, 0x9f, 0xf5, 0x3b, 0xbd, 0x56, 0x4b, 0x48, 0x4b,
0xf7, 0xa7, 0x33, 0xf9, 0x0e, 0x47, 0x2e, 0x4f, 0xb3, 0x03, 0x28, 0xd4, 0x7a, 0xad, 0xe7, 0xfd,
0xd3, 0x2f, 0xb4, 0xba, 0x90, 0xb9, 0xc1, 0x25, 0x66, 0xc1, 0x8f, 0x21, 0x5f, 0x6f, 0xbf, 0xe8,
0xb4, 0x7b, 0x5a, 0x43, 0xc8, 0xde, 0xc0, 0x12, 0x45, 0x71, 0x05, 0x40, 0x6b, 0x37, 0x92, 0x0c,
0x73, 0x91, 0x31, 0xf9, 0x7a, 0x92, 0x4b, 0x5a, 0xba, 0x13, 0x1b, 0x93, 0x97, 0xad, 0x26, 0xfe,
0x7e, 0x5d, 0xda, 0xfa, 0xeb, 0xba, 0x84, 0xbe, 0x59, 0x94, 0xd0, 0xeb, 0x45, 0x09, 0xfd, 0xba,
0x28, 0xa1, 0x3f, 0x17, 0x25, 0x74, 0x9e, 0x63, 0x7f, 0x9d, 0x9e, 0xfe, 0x1d, 0x00, 0x00, 0xff,
0xff, 0x92, 0x82, 0xdb, 0x1a, 0x6e, 0x09, 0x00, 0x00,
0x14, 0xc7, 0x7b, 0xf3, 0xd1, 0x26, 0xa7, 0x29, 0x35, 0x77, 0x3a, 0x53, 0xd7, 0x03, 0x89, 0x31,
0x33, 0x55, 0xa6, 0x82, 0x14, 0x75, 0x9e, 0xa0, 0x49, 0x2c, 0xc8, 0x4c, 0xc6, 0x89, 0xdc, 0xa4,
0x88, 0x55, 0x74, 0x5b, 0x5f, 0x52, 0xab, 0x8e, 0x6d, 0xd9, 0x4e, 0x50, 0x56, 0x20, 0x56, 0xa3,
0x2c, 0x78, 0x83, 0xac, 0x86, 0x35, 0x0f, 0x80, 0x58, 0xb2, 0x98, 0x05, 0x0b, 0xd8, 0x21, 0x16,
0x11, 0xcd, 0x13, 0xf0, 0x08, 0xc8, 0xd7, 0x76, 0x72, 0x93, 0x56, 0x23, 0x21, 0x46, 0x82, 0x4d,
0x72, 0x3f, 0x7e, 0x39, 0x3e, 0xe7, 0xef, 0xff, 0xb9, 0x37, 0x70, 0x60, 0xd3, 0xe0, 0x2b, 0xc7,
0xbb, 0x36, 0x2e, 0x8e, 0x17, 0xa3, 0x8a, 0xeb, 0x39, 0x81, 0x83, 0xf3, 0x8b, 0x05, 0x69, 0xaf,
0xef, 0xf4, 0x1d, 0xb6, 0x7a, 0x1c, 0x8e, 0x22, 0x40, 0x69, 0xc1, 0xce, 0xa7, 0x8e, 0xef, 0x9b,
0xee, 0x0b, 0xea, 0xfb, 0xa4, 0x4f, 0xf1, 0x11, 0x64, 0x82, 0xb1, 0x4b, 0x45, 0x24, 0xa3, 0xf2,
0x3b, 0x27, 0x0f, 0x2a, 0xcb, 0x88, 0x31, 0xd1, 0x19, 0xbb, 0x54, 0x67, 0x0c, 0xc6, 0x90, 0x31,
0x48, 0x40, 0xc4, 0x94, 0x8c, 0xca, 0x05, 0x9d, 0x8d, 0x95, 0x57, 0x29, 0xc8, 0x6b, 0x8e, 0x41,
0xd5, 0x11, 0xb5, 0x03, 0xfc, 0xf1, 0x4a, 0xb4, 0x03, 0x2e, 0xda, 0x82, 0xa9, 0x70, 0x01, 0x1b,
0xb0, 0x69, 0xf5, 0x02, 0x73, 0x40, 0x59, 0xc8, 0x4c, 0xf5, 0xe4, 0xf5, 0xac, 0xb4, 0xf1, 0xc7,
0xac, 0x74, 0xd4, 0x37, 0x83, 0xab, 0xe1, 0x45, 0xe5, 0xd2, 0x19, 0x1c, 0x5f, 0x11, 0xff, 0xca,
0xbc, 0x74, 0x3c, 0xf7, 0xd8, 0xa7, 0xde, 0x97, 0xec, 0xa3, 0xd2, 0x24, 0x03, 0xd7, 0xf1, 0x82,
0x8e, 0x39, 0xa0, 0x7a, 0xd6, 0x0a, 0xbf, 0xf0, 0x43, 0xc8, 0xdb, 0x8e, 0x41, 0x7b, 0x36, 0x19,
0x50, 0x31, 0x2d, 0xa3, 0x72, 0x5e, 0xcf, 0x85, 0x0b, 0x1a, 0x19, 0x50, 0xe5, 0x6b, 0xc8, 0x84,
0x4f, 0xc5, 0x8f, 0x61, 0xab, 0xa1, 0x9d, 0x9f, 0x36, 0x1b, 0x75, 0x61, 0x43, 0x12, 0x27, 0x53,
0x79, 0x6f, 0x91, 0x56, 0xb8, 0xdf, 0xb0, 0x47, 0xc4, 0x32, 0x0d, 0x5c, 0x82, 0xcc, 0xb3, 0x56,
0x43, 0x13, 0x90, 0x74, 0x7f, 0x32, 0x95, 0xdf, 0x5d, 0x61, 0x9e, 0x39, 0xa6, 0x8d, 0x3f, 0x80,
0x6c, 0x53, 0x3d, 0x3d, 0x57, 0x85, 0x94, 0xf4, 0x60, 0x32, 0x95, 0xf1, 0x0a, 0xd1, 0xa4, 0x64,
0x44, 0xa5, 0xc2, 0xcb, 0x57, 0xc5, 0x8d, 0x1f, 0xbf, 0x2f, 0xb2, 0x07, 0x2b, 0x37, 0x29, 0x28,
0x68, 0x91, 0x16, 0x91, 0x50, 0x9f, 0xac, 0x08, 0xf5, 0x1e, 0x2f, 0x14, 0x87, 0xfd, 0x07, 0x5a,
0xe1, 0x8f, 0x00, 0xe2, 0x64, 0x7a, 0xa6, 0x21, 0x66, 0xc2, 0xdd, 0xea, 0xce, 0x7c, 0x56, 0xca,
0xc7, 0x89, 0x35, 0xea, 0x7a, 0xe2, 0xb2, 0x86, 0xa1, 0xbc, 0x44, 0xb1, 0xb4, 0x65, 0x5e, 0xda,
0x87, 0x93, 0xa9, 0xbc, 0xcf, 0x17, 0xc2, 0xab, 0xab, 0x2c, 0xd4, 0x8d, 0xde, 0xc0, 0x1a, 0xc6,
0x04, 0x7e, 0xb4, 0x14, 0xf8, 0x60, 0x32, 0x95, 0xef, 0xaf, 0x43, 0x77, 0x69, 0xfc, 0x0b, 0x5a,
0x6a, 0x6c, 0x07, 0xde, 0x78, 0xad, 0x12, 0xf4, 0xe6, 0x4a, 0xde, 0xa6, 0xbe, 0x4f, 0x6e, 0xe9,
0x5b, 0x2d, 0xcc, 0x67, 0xa5, 0x9c, 0x16, 0x6b, 0xcc, 0xa9, 0x2d, 0xc2, 0x96, 0x45, 0xc9, 0xc8,
0xb4, 0xfb, 0x4c, 0xea, 0x9c, 0x9e, 0x4c, 0x95, 0x9f, 0x10, 0xec, 0xc6, 0x89, 0xb6, 0x87, 0xfe,
0x55, 0x7b, 0x68, 0x59, 0x5c, 0x8e, 0xe8, 0xdf, 0xe6, 0xf8, 0x14, 0x72, 0x71, 0xed, 0xbe, 0x98,
0x92, 0xd3, 0xe5, 0xed, 0x93, 0xfd, 0x3b, 0x4c, 0x18, 0xea, 0xa8, 0x2f, 0xc0, 0x7f, 0x50, 0x98,
0xf2, 0x5d, 0x06, 0xa0, 0x43, 0x2e, 0xac, 0xf8, 0x60, 0xa8, 0xac, 0xf8, 0x5d, 0xe2, 0x1e, 0xb5,
0x84, 0xfe, 0xf7, 0x6e, 0xc7, 0xef, 0x03, 0x04, 0x61, 0xba, 0x51, 0xac, 0x2c, 0x8b, 0x95, 0x67,
0x2b, 0x2c, 0x98, 0x00, 0xe9, 0x6b, 0x3a, 0x16, 0x37, 0xd9, 0x7a, 0x38, 0xc4, 0x7b, 0x90, 0x1d,
0x11, 0x6b, 0x48, 0xc5, 0x2d, 0x76, 0x64, 0x46, 0x13, 0x5c, 0x05, 0xec, 0x51, 0xdf, 0x34, 0x86,
0xc4, 0xea, 0x79, 0x94, 0xb8, 0x51, 0xa1, 0x39, 0x19, 0x95, 0xb3, 0xd5, 0xbd, 0xf9, 0xac, 0x24,
0xe8, 0xf1, 0xae, 0x4e, 0x89, 0xcb, 0x4a, 0x11, 0xbc, 0xb5, 0x15, 0xe5, 0x87, 0xa4, 0xf1, 0x0e,
0xf9, 0xc6, 0x63, 0xcd, 0xb2, 0x54, 0x94, 0x6f, 0xbb, 0x47, 0xb0, 0x59, 0xd3, 0xd5, 0xd3, 0x8e,
0x9a, 0x34, 0xde, 0x2a, 0x56, 0xf3, 0x28, 0x09, 0x68, 0x48, 0x75, 0xdb, 0xf5, 0x90, 0x4a, 0xdd,
0x45, 0x75, 0x5d, 0x23, 0xa6, 0xea, 0x6a, 0x53, 0xed, 0xa8, 0x42, 0xfa, 0x2e, 0xaa, 0x4e, 0x2d,
0x1a, 0xac, 0xb7, 0xe7, 0x6f, 0x08, 0x76, 0xab, 0x43, 0xeb, 0xfa, 0x6c, 0x6c, 0x5f, 0x26, 0x97,
0xcf, 0x5b, 0xf4, 0xb3, 0x0c, 0xdb, 0x43, 0xdb, 0x77, 0x2c, 0xf3, 0xd2, 0x0c, 0xa8, 0xc1, 0x5c,
0x93, 0xd3, 0xf9, 0xa5, 0x37, 0xfb, 0x40, 0xe2, 0xda, 0x21, 0x23, 0xa7, 0xd9, 0x5e, 0xe2, 0x7a,
0x11, 0xb6, 0x5c, 0x32, 0xb6, 0x1c, 0x62, 0xb0, 0x57, 0x5e, 0xd0, 0x93, 0xa9, 0xf2, 0x2d, 0x82,
0xdd, 0x9a, 0x33, 0x70, 0x9d, 0xa1, 0x6d, 0x24, 0x35, 0xd5, 0x21, 0x37, 0x88, 0x86, 0xbe, 0x88,
0x58, 0x63, 0x95, 0x39, 0xb7, 0xaf, 0xd1, 0x95, 0x33, 0x73, 0xe0, 0x5a, 0x34, 0x9e, 0xe9, 0x8b,
0x5f, 0x4a, 0x4f, 0x60, 0x67, 0x65, 0x2b, 0x4c, 0xa2, 0x1d, 0x27, 0x81, 0x56, 0x92, 0x38, 0xfa,
0x39, 0x05, 0xdb, 0xdc, 0x5d, 0x8d, 0x3f, 0xe4, 0x0d, 0xc1, 0xae, 0x27, 0x6e, 0x37, 0x71, 0x43,
0x05, 0x76, 0x34, 0xb5, 0xf3, 0x79, 0x4b, 0x7f, 0xde, 0x53, 0xcf, 0x55, 0xad, 0x23, 0xa0, 0xe8,
0xd0, 0xe6, 0xd0, 0x95, 0xfb, 0xea, 0x08, 0xb6, 0x3b, 0xa7, 0xd5, 0xa6, 0x1a, 0xd3, 0xf1, 0xb1,
0xcc, 0xd1, 0x5c, 0xaf, 0x1f, 0x42, 0xbe, 0xdd, 0x3d, 0xfb, 0xac, 0xd7, 0xee, 0x36, 0x9b, 0x42,
0x5a, 0xda, 0x9f, 0x4c, 0xe5, 0x7b, 0x1c, 0xb9, 0x38, 0xcd, 0x0e, 0x21, 0x5f, 0xed, 0x36, 0x9f,
0xf7, 0xce, 0xbe, 0xd0, 0x6a, 0x42, 0xe6, 0x16, 0x97, 0x98, 0x05, 0x3f, 0x86, 0x5c, 0xad, 0xf5,
0xa2, 0xdd, 0xea, 0x6a, 0x75, 0x21, 0x7b, 0x0b, 0x4b, 0x14, 0xc5, 0x65, 0x00, 0xad, 0x55, 0x4f,
0x32, 0xdc, 0x8c, 0x8c, 0xc9, 0xd7, 0x93, 0x5c, 0xd2, 0xd2, 0xbd, 0xd8, 0x98, 0xbc, 0x6c, 0x55,
0xf1, 0xf7, 0x9b, 0xe2, 0xc6, 0x5f, 0x37, 0x45, 0xf4, 0xcd, 0xbc, 0x88, 0x5e, 0xcf, 0x8b, 0xe8,
0xd7, 0x79, 0x11, 0xfd, 0x39, 0x2f, 0xa2, 0x8b, 0x4d, 0xf6, 0xd7, 0xe9, 0xe9, 0xdf, 0x01, 0x00,
0x00, 0xff, 0xff, 0x02, 0x9d, 0x53, 0x72, 0x78, 0x09, 0x00, 0x00,
}


@@ -28,6 +28,7 @@ var NetDbPaths2Func = map[string]diagnostic.HTTPHandlerFunc{
	"/deleteentry":  dbDeleteEntry,
	"/getentry":     dbGetEntry,
	"/gettable":     dbGetTable,
	"/networkstats": dbNetworkStats,
}

func dbJoin(ctx interface{}, w http.ResponseWriter, r *http.Request) {
@@ -411,3 +412,41 @@ func dbGetTable(ctx interface{}, w http.ResponseWriter, r *http.Request) {
	}
	diagnostic.HTTPReply(w, diagnostic.FailCommand(fmt.Errorf("%s", dbNotAvailable)), json)
}

func dbNetworkStats(ctx interface{}, w http.ResponseWriter, r *http.Request) {
	r.ParseForm()
	diagnostic.DebugHTTPForm(r)
	_, json := diagnostic.ParseHTTPFormOptions(r)

	// audit logs
	log := logrus.WithFields(logrus.Fields{"component": "diagnostic", "remoteIP": r.RemoteAddr, "method": common.CallerName(0), "url": r.URL.String()})
	log.Info("network stats")

	if len(r.Form["nid"]) < 1 {
		rsp := diagnostic.WrongCommand(missingParameter, fmt.Sprintf("%s?nid=test", r.URL.Path))
		log.Error("network stats failed, wrong input")
		diagnostic.HTTPReply(w, rsp, json)
		return
	}

	nDB, ok := ctx.(*NetworkDB)
	if ok {
		nDB.RLock()
		networks := nDB.networks[nDB.config.NodeID]
		network, ok := networks[r.Form["nid"][0]]
		entries := -1
		qLen := -1
		if ok {
			entries = network.entriesNumber
			qLen = network.tableBroadcasts.NumQueued()
		}
		nDB.RUnlock()

		rsp := diagnostic.CommandSucceed(&diagnostic.NetworkStatsResult{Entries: entries, QueueLen: qLen})
		log.WithField("response", fmt.Sprintf("%+v", rsp)).Info("network stats done")
		diagnostic.HTTPReply(w, rsp, json)
		return
	}

	diagnostic.HTTPReply(w, diagnostic.FailCommand(fmt.Errorf("%s", dbNotAvailable)), json)
}
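
The hunk above registers a new /networkstats diagnostic endpoint and its handler: given a network ID (nid), it reports the number of table entries and the length of the table-broadcast queue for that network, returning -1 for both if this node is not participating in the network, and a failure reply if the NetworkDB context is unavailable. The following is a minimal client sketch, not part of this diff, assuming the daemon's network diagnostic server has been enabled and is reachable at 127.0.0.1:2000; the address, port, and network ID are placeholders, and the handler may additionally honor the output options parsed by diagnostic.ParseHTTPFormOptions.

// Hypothetical client for the /networkstats endpoint (not part of this change).
package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
)

func main() {
	// Placeholder values: diagnostic server address/port and the target network ID.
	nid := "<network-id>"
	url := "http://127.0.0.1:2000/networkstats?nid=" + nid

	resp, err := http.Get(url)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()

	body, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	// On success the reply carries the NetworkStatsResult (entry count and
	// broadcast queue length); otherwise a failure message is returned.
	fmt.Println(string(body))
}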

Some files were not shown because too many files have changed in this diff.