Compare commits


22 Commits

Author SHA1 Message Date
Sebastiaan van Stijn
56c18c6d94 Update libnetwork to fix stale HNS endpoints on Windows
Update libnetwork to 1b91bc94094ecfdae41daa465cc0c8df37dfb3dd to bring in a fix
for stale HNS endpoints on Windows:

When Windows Server 2016 is restarted with the Docker service running, it is
possible for endpoints to be deleted from the libnetwork store without being
deleted from HNS. This does not occur if the Docker service is stopped cleanly
first, or forcibly terminated (since the endpoints still exist in both). This
change works around the issue by removing any stale HNS endpoints for a network
when creating it.
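The workaround described above amounts to a reconciliation pass at network-creation time. A minimal sketch of that logic follows (illustrative only; the real change uses the Windows HNS API inside libnetwork, and the helper names and types below are hypothetical stand-ins):

```go
package main

import "fmt"

// removeStaleEndpoints deletes any endpoint the platform (e.g. HNS) still
// knows about but the libnetwork store does not. The store map, endpoint
// list, and delete callback are simplified stand-ins for the real APIs.
func removeStaleEndpoints(storeEndpoints map[string]bool, platformEndpoints []string, deleteFn func(id string) error) []string {
	var removed []string
	for _, id := range platformEndpoints {
		if !storeEndpoints[id] {
			if err := deleteFn(id); err == nil {
				removed = append(removed, id)
			}
		}
	}
	return removed
}

func main() {
	store := map[string]bool{"ep1": true}
	hns := []string{"ep1", "ep2"} // ep2 is stale: present in HNS, gone from the store
	removed := removeStaleEndpoints(store, hns, func(id string) error { return nil })
	fmt.Println(removed)
}
```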

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit fb364f0746)
Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
2020-03-23 10:09:19 +01:00
Sebastiaan van Stijn
9cfa94b327 Update libnetwork with fixes for duplicate IP addresses
This updates libnetwork to 8892d7537c67232591f1f3af60587e3e77e61d41 to bring in
IPAM fixes for duplicate IP addresses.

- IPAM tests (libnetwork PR 2104) (no changes in vendored files)
- Fix for duplicate IP issues (libnetwork PR 2105)

Also bump golang.org/x/sync to match libnetwork (no code changes other
than an updated README)

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 55e0fe24db)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-03-23 10:09:19 +01:00
Kir Kolyshkin
172d8cfb0f daemon/oci_linux_test: add TestIpcPrivateVsReadonly
The test case checks that in case of IpcMode: private and
ReadonlyRootfs: true (as in "docker run --ipc private --read-only")
the resulting /dev/shm mount is NOT made read-only.

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit 33dd562e3a)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-03-23 10:09:19 +01:00
Kir Kolyshkin
bde862d11e daemon/setMounts(): do not make /dev/shm ro
It has been pointed out that if the --read-only flag is given, /dev/shm
also becomes read-only in case of --ipc private.

This happens because in this case the mount comes from OCI spec
(since commit 7120976d74), and is a regression caused by that
commit.

The meaning of the --read-only flag is to make only the "main" container
filesystem read-only, not the auxiliary mounts (that includes /dev/shm,
other mounts and volumes, --tmpfs, /proc, /dev, and so on).

So, let's make sure /dev/shm that comes from OCI spec is not made
read-only.
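The intent of the fix can be sketched as follows (not the actual daemon code; the mount type below is a simplified stand-in for the OCI spec structures): when applying the read-only flag, /dev/shm is deliberately skipped so only the remaining spec mounts become read-only.

```go
package main

import "fmt"

// mount is a simplified stand-in for an OCI spec mount entry.
type mount struct {
	Destination string
	Options     []string
}

// applyReadOnly marks spec mounts read-only, but deliberately leaves
// /dev/shm writable: --read-only is meant to affect the "main" container
// filesystem, not auxiliary mounts such as the private IPC shm mount.
func applyReadOnly(mounts []mount) []mount {
	for i, m := range mounts {
		if m.Destination == "/dev/shm" {
			continue
		}
		mounts[i].Options = append(m.Options, "ro")
	}
	return mounts
}

func main() {
	out := applyReadOnly([]mount{
		{Destination: "/dev/shm", Options: []string{"rw"}},
		{Destination: "/proc"},
	})
	for _, m := range out {
		fmt.Println(m.Destination, m.Options)
	}
}
```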

Fixes: 7120976d74 ("Implement none, private, and shareable ipc modes")

Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
(cherry picked from commit cad74056c0)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-03-23 10:09:18 +01:00
Chris Telfer
49d487808f Update vendoring for libnetwork PR #2097
This PR prevents automatic removal of the load balancing sandbox
endpoint when the endpoint is the last one in the network but
the network is marked as ingress.

Signed-off-by: Chris Telfer <ctelfer@docker.com>
(cherry picked from commit bebad150c9)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-03-23 10:09:18 +01:00
Chris Telfer
dba33b041a Delete the load balancer endpoint in Ingress nets
Ingress networks will no longer automatically remove their
load-balancing endpoint (and sandbox) when the network is otherwise
unpopulated. This prevents automatic removal of ingress networks when
all the containers leave them. As a result, explicit removal of an
ingress network now also requires explicit removal of its
load-balancing endpoint.
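The new behaviour can be illustrated with a simplified check (the types are hypothetical; the real logic lives in libnetwork's endpoint-leave path):

```go
package main

import "fmt"

// network is a simplified stand-in for a libnetwork network.
type network struct {
	ingress       bool
	endpointCount int
}

// shouldRemoveLBEndpoint reports whether the load-balancing endpoint may
// be removed automatically: only when it is the last endpoint AND the
// network is not an ingress network. Ingress networks keep their LB
// endpoint until the network itself is explicitly removed.
func shouldRemoveLBEndpoint(n network) bool {
	return n.endpointCount == 1 && !n.ingress
}

func main() {
	fmt.Println(shouldRemoveLBEndpoint(network{ingress: true, endpointCount: 1}))
	fmt.Println(shouldRemoveLBEndpoint(network{ingress: false, endpointCount: 1}))
}
```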

Signed-off-by: Chris Telfer <ctelfer@docker.com>
(cherry picked from commit 3da4ebf355)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-03-23 10:09:18 +01:00
Chris Telfer
3db5356b58 Add test for ingress removal on service removal
The commit https://github.com/moby/moby/pull/35422 accidentally caused
the removal of the ingress network when the last member of a service
left the network. This did not surface in swarm instances because the
swarm manager would still maintain and return cluster state about the
network even though it had removed its sandbox and endpoint. This test
verifies that after a service is added and removed, the ingress sandbox
remains in a functional state.

Signed-off-by: Chris Telfer <ctelfer@docker.com>
(cherry picked from commit 805b6a7f74)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-03-23 10:09:18 +01:00
Christopher Jones
f833d6ccb1 [integration] skip ppc64le oom tests for now
These tests were enabled by changing a config option on the CI
machines rather than through a patch, so disable them on ppc64le for
now; a follow-up patch will re-enable them once the underlying issues
are identified.

Signed-off-by: Christopher Jones <tophj@linux.vnet.ibm.com>
(cherry picked from commit 620ddc78a1)
Signed-off-by: Eli Uriegas <eli.uriegas@docker.com>
2020-03-23 10:09:18 +01:00
Eli Uriegas
b8ae001c8f buildmod => buildmode
There was a typo in the buildmode flag for containerd

Signed-off-by: Eli Uriegas <eli.uriegas@docker.com>
(cherry picked from commit 5e4885b9af)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-03-23 10:09:18 +01:00
Eli Uriegas
9c23db048a Build containerd, runc, and proxy statically
These were originally static binaries in the first place; this changes
them back to that.

Signed-off-by: Eli Uriegas <eli.uriegas@docker.com>
(cherry picked from commit 63c7bb2463)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-03-23 10:09:18 +01:00
Sebastiaan van Stijn
cc19775363 Engine: bump swarmkit to 11d7b06f48bc1d73fc6d8776c3552a4b11c94301
Ingress network should not be attachable

The ingress network is a special network used only to expose ports.
For this reason, the network cannot be explicitly attached during
service create or service update.
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-03-23 10:09:18 +01:00
selansen
aa334bf751 Fix to address regression caused by PR 30897
With the inclusion of PR 30897, creating a service for the host network
fails in 18.02. Modified the IsPreDefinedNetwork check to return
NetworkNameError instead of errdefs.Forbidden to address this issue.

Signed-off-by: selansen <elango.siva@docker.com>
(cherry picked from commit 7cf8b20762)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-03-23 10:09:17 +01:00
Andrew Hsu
22482c7de4 engine: vndr swarmkit 49a9d7f
Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
2020-03-23 10:09:17 +01:00
Tonis Tiigi
99e9effcaf builder: fix layer lifecycle leak
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit 7ad41d53df)
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2020-03-23 10:09:17 +01:00
Sebastiaan van Stijn
c76f748e62 bump containerd/console to 2748ece16665b45a47f884001d5831ec79703880
Fix runc exec on big-endian, causing:

    container_linux.go:265: starting container process caused "open /dev/pts/4294967296: no such file or directory"

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit aab5eaddcc)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-03-23 10:09:17 +01:00
Sebastiaan van Stijn
120ae64d02 Bump Runc to 1.0.0-rc5 / 4fc53a81fb7c994640722ac585fa9ca548971871
Release notes: https://github.com/opencontainers/runc/releases/tag/v1.0.0-rc5

Possibly relevant changes included:

- chroot when no mount namespace is provided
- fix systemd slice expansion so that it could be consumed by cAdvisor
- libcontainer/capabilities_linux: Drop os.Getpid() call
- Update console dependency to fix runc exec on BE (causing: `container_linux.go:265: starting container process caused "open /dev/pts/4294967296: no such file or directory"`)
- libcontainer: setupUserNamespace is always called (fixes: Devices are mounted with wrong uid/gid)

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit a2f5a1a5b2)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-03-23 10:09:17 +01:00
Brian Goff
8232dc173d Split binary installers/commit scripts
Originally I worked on this for the multi-stage build Dockerfile
changes. Decided to split this out as we are still waiting for
multi-stage to be available on CI and rebasing these is pretty annoying.

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit b529d1b093)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-03-23 10:09:17 +01:00
Daniel Nephin
8bdf5fc525 Migrate some copy tests to integration
Signed-off-by: Daniel Nephin <dnephin@docker.com>
(cherry picked from commit 00d409f03e)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-03-23 10:09:17 +01:00
Vincent Demeester
385a42156c Clean some docker_cli_build_tests that are cli-only
Remove TestBuildRenamedDockerfile and TestBuildDockerfileOutsideContext,
which are cli-only tests (and already tested in the docker/cli
repository).

Also adds some comments on a few tests that could be migrated to
docker/cli.

Signed-off-by: Vincent Demeester <vincent@sbr.pm>
(cherry picked from commit 894c213b3b)
Signed-off-by: Vincent Demeester <vincent@sbr.pm>
2020-03-23 10:09:17 +01:00
Riyaz Faizullabhoy
b61d45bb41 update integration-cli tests for stderr output
Signed-off-by: Riyaz Faizullabhoy <riyaz.faizullabhoy@docker.com>
(cherry picked from commit 250b84ee8820d5ac28f223ef3affdffeff7ee026)
Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
(cherry picked from commit d256539bf422fe6dc84c720c9153823c05396a3e)
Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
(cherry picked from commit 5742bd3ccf41601e363cd0bc8d8e45cd54973eed)
Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
(cherry picked from commit 1a2098cecfcdf93f3d8b8e203ffb6480973b172f)
Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
2020-03-23 10:09:16 +01:00
Eli Uriegas
766fcb992c Blacklist tests, will be rewritten later on
Signed-off-by: Eli Uriegas <eli.uriegas@docker.com>
(cherry picked from commit 4e81e4fa4edce70d1ce4e96c2181fcdfb88241bb)
Signed-off-by: Riyaz Faizullabhoy <riyaz.faizullabhoy@docker.com>
(cherry picked from commit ec6b0a1a4a2e0d48f7338080f76b47fc3b022c74)
Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
(cherry picked from commit fbfecebc0a5a2c75212b2c2d2b53a00255ce479e)
Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
(cherry picked from commit e3571070d5502ac07e6c8a56867e21b2cfa3ae07)
Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
(cherry picked from commit 9d7b9c23f54a082b1aa2f1206473adc1a266fb0a)
Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
2020-03-23 10:09:16 +01:00
Daniel Nephin
27c443f09c Fix TestAttachAfterDetach to work with latest client
Signed-off-by: Daniel Nephin <dnephin@docker.com>
(cherry picked from commit 847b610620)
Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
2020-03-23 10:09:16 +01:00
69 changed files with 1071 additions and 858 deletions


@@ -187,11 +187,12 @@ RUN ./contrib/download-frozen-image-v2.sh /docker-frozen-images \
hello-world:latest@sha256:be0cd392e45be79ffeffa6b05338b98ebb16c87b255f48e297ec7f98e123905c
# See also ensureFrozenImagesLinux() in "integration-cli/fixtures_linux_daemon_test.go" (which needs to be updated when adding images to this list)
# Install tomlv, vndr, runc, containerd, tini, docker-proxy dockercli
# Please edit hack/dockerfile/install-binaries.sh to update them.
COPY hack/dockerfile/binaries-commits /tmp/binaries-commits
COPY hack/dockerfile/install-binaries.sh /tmp/install-binaries.sh
RUN /tmp/install-binaries.sh tomlv vndr runc containerd tini proxy dockercli gometalinter
# Install tomlv, vndr, runc, containerd, tini, proxy dockercli
# Please edit hack/dockerfile/install/<name>.installer to update them.
COPY hack/dockerfile/install hack/dockerfile/install
RUN for i in tomlv vndr tini gometalinter proxy dockercli runc containerd; \
do hack/dockerfile/install/install.sh $i; \
done
ENV PATH=/usr/local/cli:$PATH
# Activate bash completion and include Docker's completion if mounted with DOCKER_BASH_COMPLETION_PATH


@@ -151,14 +151,17 @@ RUN ./contrib/download-frozen-image-v2.sh /docker-frozen-images \
debian:jessie@sha256:287a20c5f73087ab406e6b364833e3fb7b3ae63ca0eb3486555dc27ed32c6e60 \
hello-world:latest@sha256:be0cd392e45be79ffeffa6b05338b98ebb16c87b255f48e297ec7f98e123905c
# See also ensureFrozenImagesLinux() in "integration-cli/fixtures_linux_daemon_test.go" (which needs to be updated when adding images to this list)
#
# Install tomlv, vndr, runc, containerd, tini, docker-proxy
# Please edit hack/dockerfile/install-binaries.sh to update them.
COPY hack/dockerfile/binaries-commits /tmp/binaries-commits
COPY hack/dockerfile/install-binaries.sh /tmp/install-binaries.sh
RUN /tmp/install-binaries.sh tomlv vndr runc containerd tini proxy dockercli gometalinter
# Install tomlv, vndr, runc, containerd, tini, proxy dockercli
# Please edit hack/dockerfile/install/<name>.installer to update them.
COPY hack/dockerfile/install hack/dockerfile/install
RUN for i in tomlv vndr tini gometalinter proxy dockercli runc containerd; \
do hack/dockerfile/install/install.sh $i; \
done
ENV PATH=/usr/local/cli:$PATH
# Wrap all commands in the "docker-in-docker" script to allow nested containers
ENTRYPOINT ["hack/dind"]


@@ -138,11 +138,12 @@ RUN ./contrib/download-frozen-image-v2.sh /docker-frozen-images \
hello-world:latest@sha256:be0cd392e45be79ffeffa6b05338b98ebb16c87b255f48e297ec7f98e123905c
# See also ensureFrozenImagesLinux() in "integration-cli/fixtures_linux_daemon_test.go" (which needs to be updated when adding images to this list)
# Install tomlv, vndr, runc, containerd, tini, docker-proxy
# Please edit hack/dockerfile/install-binaries.sh to update them.
COPY hack/dockerfile/binaries-commits /tmp/binaries-commits
COPY hack/dockerfile/install-binaries.sh /tmp/install-binaries.sh
RUN /tmp/install-binaries.sh tomlv vndr runc containerd tini proxy dockercli gometalinter
# Install tomlv, vndr, runc, containerd, tini, proxy dockercli
# Please edit hack/dockerfile/install/<name>.installer to update them.
COPY hack/dockerfile/install hack/dockerfile/install
RUN for i in tomlv vndr tini gometalinter proxy dockercli runc containerd; \
do hack/dockerfile/install/install.sh $i; \
done
ENV PATH=/usr/local/cli:$PATH
ENTRYPOINT ["hack/dind"]


@@ -23,7 +23,7 @@ RUN contrib/download-frozen-image-v2.sh /output/docker-frozen-images \
# Download Docker CLI binary
COPY hack/dockerfile hack/dockerfile
RUN hack/dockerfile/install-binaries.sh dockercli
RUN hack/dockerfile/install.sh dockercli
# Set tag and add sources
ARG DOCKER_GITCOMMIT


@@ -136,11 +136,12 @@ RUN ./contrib/download-frozen-image-v2.sh /docker-frozen-images \
hello-world:latest@sha256:be0cd392e45be79ffeffa6b05338b98ebb16c87b255f48e297ec7f98e123905c
# See also ensureFrozenImagesLinux() in "integration-cli/fixtures_linux_daemon_test.go" (which needs to be updated when adding images to this list)
# Install tomlv, vndr, runc, containerd, tini, docker-proxy
# Please edit hack/dockerfile/install-binaries.sh to update them.
COPY hack/dockerfile/binaries-commits /tmp/binaries-commits
COPY hack/dockerfile/install-binaries.sh /tmp/install-binaries.sh
RUN /tmp/install-binaries.sh tomlv vndr runc containerd tini proxy dockercli gometalinter
# Install tomlv, vndr, runc, containerd, tini, proxy dockercli
# Please edit hack/dockerfile/install/<name>.installer to update them.
COPY hack/dockerfile/install hack/dockerfile/install
RUN for i in tomlv vndr tini gometalinter proxy dockercli runc containerd; \
do hack/dockerfile/install/install.sh $i; \
done
ENV PATH=/usr/local/cli:$PATH
# Wrap all commands in the "docker-in-docker" script to allow nested containers


@@ -130,11 +130,12 @@ RUN ./contrib/download-frozen-image-v2.sh /docker-frozen-images \
hello-world:latest@sha256:be0cd392e45be79ffeffa6b05338b98ebb16c87b255f48e297ec7f98e123905c
# See also ensureFrozenImagesLinux() in "integration-cli/fixtures_linux_daemon_test.go" (which needs to be updated when adding images to this list)
# Install tomlv, vndr, runc, containerd, tini, docker-proxy
# Please edit hack/dockerfile/install-binaries.sh to update them.
COPY hack/dockerfile/binaries-commits /tmp/binaries-commits
COPY hack/dockerfile/install-binaries.sh /tmp/install-binaries.sh
RUN /tmp/install-binaries.sh tomlv vndr runc containerd tini proxy dockercli gometalinter
# Install tomlv, vndr, runc, containerd, tini, proxy dockercli
# Please edit hack/dockerfile/install/<name>.installer to update them.
COPY hack/dockerfile/install hack/dockerfile/install
RUN for i in tomlv vndr tini gometalinter proxy dockercli runc containerd; \
do hack/dockerfile/install/install.sh $i; \
done
ENV PATH=/usr/local/cli:$PATH
# Wrap all commands in the "docker-in-docker" script to allow nested containers


@@ -50,10 +50,11 @@ ENV GOPATH /go
ENV CGO_LDFLAGS -L/lib
# Install runc, containerd, tini and docker-proxy
# Please edit hack/dockerfile/install-binaries.sh to update them.
COPY hack/dockerfile/binaries-commits /tmp/binaries-commits
COPY hack/dockerfile/install-binaries.sh /tmp/install-binaries.sh
RUN /tmp/install-binaries.sh runc containerd tini proxy dockercli
# Please edit hack/dockerfile/install/<name>.installer to update them.
COPY hack/dockerfile/install hack/dockerfile/install
RUN for i in runc containerd tini proxy dockercli; \
do hack/dockerfile/install/install.sh $i; \
done
ENV PATH=/usr/local/cli:$PATH
ENV AUTO_GOPATH 1


@@ -53,7 +53,7 @@ type Backend interface {
// ImageBackend are the interface methods required from an image component
type ImageBackend interface {
GetImageAndReleasableLayer(ctx context.Context, refOrID string, opts backend.GetImageAndLayerOptions) (Image, ReleaseableLayer, error)
GetImageAndReleasableLayer(ctx context.Context, refOrID string, opts backend.GetImageAndLayerOptions) (Image, ROLayer, error)
}
// ExecBackend contains the interface methods required for executing containers
@@ -100,10 +100,16 @@ type Image interface {
OperatingSystem() string
}
// ReleaseableLayer is an image layer that can be mounted and released
type ReleaseableLayer interface {
// ROLayer is a reference to image rootfs layer
type ROLayer interface {
Release() error
Mount() (containerfs.ContainerFS, error)
Commit() (ReleaseableLayer, error)
NewRWLayer() (RWLayer, error)
DiffID() layer.DiffID
}
// RWLayer is active layer that can be read/modified
type RWLayer interface {
Release() error
Root() containerfs.ContainerFS
Commit() (ROLayer, error)
}


@@ -72,8 +72,12 @@ type copier struct {
source builder.Source
pathCache pathCache
download sourceDownloader
tmpPaths []string
platform string
// for cleanup. TODO: having copier.cleanup() is error prone and hard to
// follow. Code calling performCopy should manage the lifecycle of its params.
// Copier should take override source as input, not imageMount.
activeLayer builder.RWLayer
tmpPaths []string
}
func copierFromDispatchRequest(req dispatchRequest, download sourceDownloader, imageSource *imageMount) copier {
@@ -155,6 +159,10 @@ func (o *copier) Cleanup() {
os.RemoveAll(path)
}
o.tmpPaths = []string{}
if o.activeLayer != nil {
o.activeLayer.Release()
o.activeLayer = nil
}
}
// TODO: allowWildcards can probably be removed by refactoring this function further.
@@ -166,9 +174,15 @@ func (o *copier) calcCopyInfo(origPath string, allowWildcards bool) ([]copyInfo,
// done on image Source?
if imageSource != nil {
var err error
o.source, err = imageSource.Source()
rwLayer, err := imageSource.NewRWLayer()
if err != nil {
return nil, errors.Wrapf(err, "failed to copy from %s", imageSource.ImageID())
return nil, err
}
o.activeLayer = rwLayer
o.source, err = remotecontext.NewLazySource(rwLayer.Root())
if err != nil {
return nil, errors.Wrapf(err, "failed to create context for copy from %s", rwLayer.Root().Path())
}
}


@@ -127,7 +127,7 @@ func TestFromScratch(t *testing.T) {
func TestFromWithArg(t *testing.T) {
tag, expected := ":sometag", "expectedthisid"
getImage := func(name string) (builder.Image, builder.ReleaseableLayer, error) {
getImage := func(name string) (builder.Image, builder.ROLayer, error) {
assert.Equal(t, "alpine"+tag, name)
return &mockImage{id: "expectedthisid"}, nil, nil
}
@@ -159,7 +159,7 @@ func TestFromWithArg(t *testing.T) {
func TestFromWithUndefinedArg(t *testing.T) {
tag, expected := "sometag", "expectedthisid"
getImage := func(name string) (builder.Image, builder.ReleaseableLayer, error) {
getImage := func(name string) (builder.Image, builder.ROLayer, error) {
assert.Equal(t, "alpine", name)
return &mockImage{id: "expectedthisid"}, nil, nil
}
@@ -433,7 +433,7 @@ func TestRunWithBuildArgs(t *testing.T) {
return imageCache
}
b.imageProber = newImageProber(mockBackend, nil, false)
mockBackend.getImageFunc = func(_ string) (builder.Image, builder.ReleaseableLayer, error) {
mockBackend.getImageFunc = func(_ string) (builder.Image, builder.ROLayer, error) {
return &mockImage{
id: "abcdef",
config: &container.Config{Cmd: origCmd},


@@ -5,7 +5,6 @@ import (
"github.com/docker/docker/api/types/backend"
"github.com/docker/docker/builder"
"github.com/docker/docker/builder/remotecontext"
dockerimage "github.com/docker/docker/image"
"github.com/docker/docker/pkg/system"
"github.com/pkg/errors"
@@ -13,7 +12,7 @@ import (
"golang.org/x/net/context"
)
type getAndMountFunc func(string, bool) (builder.Image, builder.ReleaseableLayer, error)
type getAndMountFunc func(string, bool) (builder.Image, builder.ROLayer, error)
// imageSources mounts images and provides a cache for mounted images. It tracks
// all images so they can be unmounted at the end of the build.
@@ -24,7 +23,7 @@ type imageSources struct {
}
func newImageSources(ctx context.Context, options builderOptions) *imageSources {
getAndMount := func(idOrRef string, localOnly bool) (builder.Image, builder.ReleaseableLayer, error) {
getAndMount := func(idOrRef string, localOnly bool) (builder.Image, builder.ROLayer, error) {
pullOption := backend.PullOptionNoPull
if !localOnly {
if options.Options.PullParent {
@@ -92,32 +91,14 @@ func (m *imageSources) Add(im *imageMount) {
type imageMount struct {
image builder.Image
source builder.Source
layer builder.ReleaseableLayer
layer builder.ROLayer
}
func newImageMount(image builder.Image, layer builder.ReleaseableLayer) *imageMount {
func newImageMount(image builder.Image, layer builder.ROLayer) *imageMount {
im := &imageMount{image: image, layer: layer}
return im
}
func (im *imageMount) Source() (builder.Source, error) {
if im.source == nil {
if im.layer == nil {
return nil, errors.Errorf("empty context")
}
mountPath, err := im.layer.Mount()
if err != nil {
return nil, errors.Wrapf(err, "failed to mount %s", im.image.ImageID())
}
source, err := remotecontext.NewLazySource(mountPath)
if err != nil {
return nil, errors.Wrapf(err, "failed to create lazycontext for %s", mountPath)
}
im.source = source
}
return im.source, nil
}
func (im *imageMount) unmount() error {
if im.layer == nil {
return nil
@@ -133,8 +114,8 @@ func (im *imageMount) Image() builder.Image {
return im.image
}
func (im *imageMount) Layer() builder.ReleaseableLayer {
return im.layer
func (im *imageMount) NewRWLayer() (builder.RWLayer, error) {
return im.layer.NewRWLayer()
}
func (im *imageMount) ImageID() string {


@@ -17,6 +17,7 @@ import (
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/backend"
"github.com/docker/docker/api/types/container"
"github.com/docker/docker/builder"
"github.com/docker/docker/image"
"github.com/docker/docker/pkg/archive"
"github.com/docker/docker/pkg/chrootarchive"
@@ -114,8 +115,8 @@ func (b *Builder) commitContainer(dispatchState *dispatchState, id string, conta
return err
}
func (b *Builder) exportImage(state *dispatchState, imageMount *imageMount, runConfig *container.Config) error {
newLayer, err := imageMount.Layer().Commit()
func (b *Builder) exportImage(state *dispatchState, layer builder.RWLayer, parent builder.Image, runConfig *container.Config) error {
newLayer, err := layer.Commit()
if err != nil {
return err
}
@@ -124,7 +125,7 @@ func (b *Builder) exportImage(state *dispatchState, imageMount *imageMount, runC
// if there is an error before we can add the full mount with image
b.imageSources.Add(newImageMount(nil, newLayer))
parentImage, ok := imageMount.Image().(*image.Image)
parentImage, ok := parent.(*image.Image)
if !ok {
return errors.Errorf("unexpected image type")
}
@@ -177,7 +178,13 @@ func (b *Builder) performCopy(state *dispatchState, inst copyInstruction) error
return errors.Wrapf(err, "failed to get destination image %q", state.imageID)
}
destInfo, err := createDestInfo(state.runConfig.WorkingDir, inst, imageMount, b.options.Platform)
rwLayer, err := imageMount.NewRWLayer()
if err != nil {
return err
}
defer rwLayer.Release()
destInfo, err := createDestInfo(state.runConfig.WorkingDir, inst, rwLayer, b.options.Platform)
if err != nil {
return err
}
@@ -203,10 +210,10 @@ func (b *Builder) performCopy(state *dispatchState, inst copyInstruction) error
return errors.Wrapf(err, "failed to copy files")
}
}
return b.exportImage(state, imageMount, runConfigWithCommentCmd)
return b.exportImage(state, rwLayer, imageMount.Image(), runConfigWithCommentCmd)
}
func createDestInfo(workingDir string, inst copyInstruction, imageMount *imageMount, platform string) (copyInfo, error) {
func createDestInfo(workingDir string, inst copyInstruction, rwLayer builder.RWLayer, platform string) (copyInfo, error) {
// Twiddle the destination when it's a relative path - meaning, make it
// relative to the WORKINGDIR
dest, err := normalizeDest(workingDir, inst.dest, platform)
@@ -214,12 +221,7 @@ func createDestInfo(workingDir string, inst copyInstruction, imageMount *imageMo
return copyInfo{}, errors.Wrapf(err, "invalid %s", inst.cmdName)
}
destMount, err := imageMount.Source()
if err != nil {
return copyInfo{}, errors.Wrapf(err, "failed to mount copy source")
}
return newCopyInfoFromSource(destMount, dest, ""), nil
return copyInfo{root: rwLayer.Root(), path: dest}, nil
}
// normalizeDest normalises the destination of a COPY/ADD command in a


@@ -20,7 +20,7 @@ import (
type MockBackend struct {
containerCreateFunc func(config types.ContainerCreateConfig) (container.ContainerCreateCreatedBody, error)
commitFunc func(backend.CommitConfig) (image.ID, error)
getImageFunc func(string) (builder.Image, builder.ReleaseableLayer, error)
getImageFunc func(string) (builder.Image, builder.ROLayer, error)
makeImageCacheFunc func(cacheFrom []string) builder.ImageCache
}
@@ -66,7 +66,7 @@ func (m *MockBackend) CopyOnBuild(containerID string, destPath string, srcRoot s
return nil
}
func (m *MockBackend) GetImageAndReleasableLayer(ctx context.Context, refOrID string, opts backend.GetImageAndLayerOptions) (builder.Image, builder.ReleaseableLayer, error) {
func (m *MockBackend) GetImageAndReleasableLayer(ctx context.Context, refOrID string, opts backend.GetImageAndLayerOptions) (builder.Image, builder.ROLayer, error) {
if m.getImageFunc != nil {
return m.getImageFunc(refOrID)
}
@@ -124,14 +124,25 @@ func (l *mockLayer) Release() error {
return nil
}
func (l *mockLayer) Mount() (containerfs.ContainerFS, error) {
return containerfs.NewLocalContainerFS("mountPath"), nil
}
func (l *mockLayer) Commit() (builder.ReleaseableLayer, error) {
return nil, nil
func (l *mockLayer) NewRWLayer() (builder.RWLayer, error) {
return &mockRWLayer{}, nil
}
func (l *mockLayer) DiffID() layer.DiffID {
return layer.DiffID("abcdef")
}
type mockRWLayer struct {
}
func (l *mockRWLayer) Release() error {
return nil
}
func (l *mockRWLayer) Commit() (builder.ROLayer, error) {
return nil, nil
}
func (l *mockRWLayer) Root() containerfs.ContainerFS {
return nil
}


@@ -15,130 +15,126 @@ import (
"github.com/docker/docker/pkg/system"
"github.com/docker/docker/registry"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
"golang.org/x/net/context"
)
type releaseableLayer struct {
type roLayer struct {
released bool
layerStore layer.Store
roLayer layer.Layer
rwLayer layer.RWLayer
}
func (rl *releaseableLayer) Mount() (containerfs.ContainerFS, error) {
var err error
var mountPath containerfs.ContainerFS
func (l *roLayer) DiffID() layer.DiffID {
if l.roLayer == nil {
return layer.DigestSHA256EmptyTar
}
return l.roLayer.DiffID()
}
func (l *roLayer) Release() error {
if l.released {
return nil
}
if l.roLayer != nil {
metadata, err := l.layerStore.Release(l.roLayer)
layer.LogReleaseMetadata(metadata)
if err != nil {
return errors.Wrap(err, "failed to release ROLayer")
}
}
l.roLayer = nil
l.released = true
return nil
}
func (l *roLayer) NewRWLayer() (builder.RWLayer, error) {
var chainID layer.ChainID
if rl.roLayer != nil {
chainID = rl.roLayer.ChainID()
if l.roLayer != nil {
chainID = l.roLayer.ChainID()
}
mountID := stringid.GenerateRandomID()
rl.rwLayer, err = rl.layerStore.CreateRWLayer(mountID, chainID, nil)
newLayer, err := l.layerStore.CreateRWLayer(mountID, chainID, nil)
if err != nil {
return nil, errors.Wrap(err, "failed to create rwlayer")
}
mountPath, err = rl.rwLayer.Mount("")
rwLayer := &rwLayer{layerStore: l.layerStore, rwLayer: newLayer}
fs, err := newLayer.Mount("")
if err != nil {
// Clean up the layer if we fail to mount it here.
metadata, err := rl.layerStore.ReleaseRWLayer(rl.rwLayer)
layer.LogReleaseMetadata(metadata)
if err != nil {
logrus.Errorf("Failed to release RWLayer: %s", err)
}
rl.rwLayer = nil
rwLayer.Release()
return nil, err
}
return mountPath, nil
rwLayer.fs = fs
return rwLayer, nil
}
func (rl *releaseableLayer) Commit() (builder.ReleaseableLayer, error) {
var chainID layer.ChainID
if rl.roLayer != nil {
chainID = rl.roLayer.ChainID()
}
type rwLayer struct {
released bool
layerStore layer.Store
rwLayer layer.RWLayer
fs containerfs.ContainerFS
}
stream, err := rl.rwLayer.TarStream()
func (l *rwLayer) Root() containerfs.ContainerFS {
return l.fs
}
func (l *rwLayer) Commit() (builder.ROLayer, error) {
stream, err := l.rwLayer.TarStream()
if err != nil {
return nil, err
}
defer stream.Close()
newLayer, err := rl.layerStore.Register(stream, chainID)
var chainID layer.ChainID
if parent := l.rwLayer.Parent(); parent != nil {
chainID = parent.ChainID()
}
newLayer, err := l.layerStore.Register(stream, chainID)
if err != nil {
return nil, err
}
// TODO: An optimization would be to handle empty layers before returning
return &releaseableLayer{layerStore: rl.layerStore, roLayer: newLayer}, nil
return &roLayer{layerStore: l.layerStore, roLayer: newLayer}, nil
}
func (rl *releaseableLayer) DiffID() layer.DiffID {
if rl.roLayer == nil {
return layer.DigestSHA256EmptyTar
}
return rl.roLayer.DiffID()
}
func (rl *releaseableLayer) Release() error {
if rl.released {
func (l *rwLayer) Release() error {
if l.released {
return nil
}
if err := rl.releaseRWLayer(); err != nil {
// Best effort attempt at releasing read-only layer before returning original error.
rl.releaseROLayer()
return err
if l.fs != nil {
if err := l.rwLayer.Unmount(); err != nil {
return errors.Wrap(err, "failed to unmount RWLayer")
}
l.fs = nil
}
if err := rl.releaseROLayer(); err != nil {
return err
metadata, err := l.layerStore.ReleaseRWLayer(l.rwLayer)
layer.LogReleaseMetadata(metadata)
if err != nil {
return errors.Wrap(err, "failed to release RWLayer")
}
rl.released = true
l.released = true
return nil
}
func (rl *releaseableLayer) releaseRWLayer() error {
if rl.rwLayer == nil {
return nil
}
if err := rl.rwLayer.Unmount(); err != nil {
logrus.Errorf("Failed to unmount RWLayer: %s", err)
return err
}
metadata, err := rl.layerStore.ReleaseRWLayer(rl.rwLayer)
layer.LogReleaseMetadata(metadata)
if err != nil {
logrus.Errorf("Failed to release RWLayer: %s", err)
}
rl.rwLayer = nil
return err
}
func (rl *releaseableLayer) releaseROLayer() error {
if rl.roLayer == nil {
return nil
}
metadata, err := rl.layerStore.Release(rl.roLayer)
layer.LogReleaseMetadata(metadata)
if err != nil {
logrus.Errorf("Failed to release ROLayer: %s", err)
}
rl.roLayer = nil
return err
}
func newReleasableLayerForImage(img *image.Image, layerStore layer.Store) (builder.ReleaseableLayer, error) {
func newROLayerForImage(img *image.Image, layerStore layer.Store) (builder.ROLayer, error) {
if img == nil || img.RootFS.ChainID() == "" {
return &releaseableLayer{layerStore: layerStore}, nil
return &roLayer{layerStore: layerStore}, nil
}
// Hold a reference to the image layer so that it can't be removed before
// it is released
roLayer, err := layerStore.Get(img.RootFS.ChainID())
layer, err := layerStore.Get(img.RootFS.ChainID())
if err != nil {
return nil, errors.Wrapf(err, "failed to get layer for image %s", img.ImageID())
}
return &releaseableLayer{layerStore: layerStore, roLayer: roLayer}, nil
return &roLayer{layerStore: layerStore, roLayer: layer}, nil
}
// TODO: could this use the regular daemon PullImage ?
@@ -170,12 +166,12 @@ func (daemon *Daemon) pullForBuilder(ctx context.Context, name string, authConfi
// GetImageAndReleasableLayer returns an image and releaseable layer for a reference or ID.
// Every call to GetImageAndReleasableLayer MUST call releasableLayer.Release() to prevent
// leaking of layers.
func (daemon *Daemon) GetImageAndReleasableLayer(ctx context.Context, refOrID string, opts backend.GetImageAndLayerOptions) (builder.Image, builder.ReleaseableLayer, error) {
func (daemon *Daemon) GetImageAndReleasableLayer(ctx context.Context, refOrID string, opts backend.GetImageAndLayerOptions) (builder.Image, builder.ROLayer, error) {
if refOrID == "" {
if !system.IsOSSupported(opts.OS) {
return nil, nil, system.ErrNotSupportedOperatingSystem
}
layer, err := newReleasableLayerForImage(nil, daemon.layerStores[opts.OS])
layer, err := newROLayerForImage(nil, daemon.layerStores[opts.OS])
return nil, layer, err
}
@@ -189,7 +185,7 @@ func (daemon *Daemon) GetImageAndReleasableLayer(ctx context.Context, refOrID st
if !system.IsOSSupported(image.OperatingSystem()) {
return nil, nil, system.ErrNotSupportedOperatingSystem
}
layer, err := newReleasableLayerForImage(image, daemon.layerStores[image.OperatingSystem()])
layer, err := newROLayerForImage(image, daemon.layerStores[image.OperatingSystem()])
return image, layer, err
}
}
@@ -201,7 +197,7 @@ func (daemon *Daemon) GetImageAndReleasableLayer(ctx context.Context, refOrID st
if !system.IsOSSupported(image.OperatingSystem()) {
return nil, nil, system.ErrNotSupportedOperatingSystem
}
layer, err := newReleasableLayerForImage(image, daemon.layerStores[image.OperatingSystem()])
layer, err := newROLayerForImage(image, daemon.layerStores[image.OperatingSystem()])
return image, layer, err
}


@@ -18,6 +18,7 @@ import (
containertypes "github.com/docker/docker/api/types/container"
"github.com/docker/docker/api/types/events"
containerpkg "github.com/docker/docker/container"
"github.com/docker/docker/daemon"
"github.com/docker/docker/daemon/cluster/convert"
executorpkg "github.com/docker/docker/daemon/cluster/executor"
"github.com/docker/libnetwork"
@@ -153,7 +154,11 @@ func (c *containerAdapter) createNetworks(ctx context.Context) error {
if _, ok := err.(libnetwork.NetworkNameError); ok {
continue
}
// Continue if CreateManagedNetwork returns a PredefinedNetworkError;
// other callers can still treat it as an error.
if _, ok := err.(daemon.PredefinedNetworkError); ok {
continue
}
return err
}
}


@@ -24,6 +24,16 @@ import (
"golang.org/x/net/context"
)
// PredefinedNetworkError is returned when a user tries to create a predefined network that already exists.
type PredefinedNetworkError string
func (pnr PredefinedNetworkError) Error() string {
return fmt.Sprintf("operation is not permitted on predefined %s network", string(pnr))
}
// Forbidden denotes the type of this error
func (pnr PredefinedNetworkError) Forbidden() {}
// NetworkControllerEnabled checks if the networking stack is enabled.
// This feature depends on OS primitives and it's disabled in systems like Windows.
func (daemon *Daemon) NetworkControllerEnabled() bool {
@@ -212,6 +222,8 @@ func (daemon *Daemon) releaseIngress(id string) {
return
}
daemon.deleteLoadBalancerSandbox(n)
if err := n.Delete(); err != nil {
logrus.Errorf("Failed to delete ingress network %s: %v", n.ID(), err)
return
@@ -267,9 +279,8 @@ func (daemon *Daemon) CreateNetwork(create types.NetworkCreateRequest) (*types.N
}
func (daemon *Daemon) createNetwork(create types.NetworkCreateRequest, id string, agent bool) (*types.NetworkCreateResponse, error) {
if runconfig.IsPreDefinedNetwork(create.Name) && !agent {
err := fmt.Errorf("%s is a pre-defined network and cannot be created", create.Name)
return nil, errdefs.Forbidden(err)
if runconfig.IsPreDefinedNetwork(create.Name) {
return nil, PredefinedNetworkError(create.Name)
}
var warning string
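The typed error above lets callers branch on the error's concrete type instead of matching error strings, which is what the cluster executor's `createNetworks` does with its type assertion. A minimal standalone sketch of the pattern (the `createNetwork` stand-in and its network names are illustrative, not the daemon's actual implementation):

```go
package main

import "fmt"

// PredefinedNetworkError mirrors the daemon's string-backed error type,
// which also satisfies the errdefs "Forbidden" marker interface.
type PredefinedNetworkError string

func (pnr PredefinedNetworkError) Error() string {
	return fmt.Sprintf("operation is not permitted on predefined %s network", string(pnr))
}

// Forbidden denotes the type of this error.
func (pnr PredefinedNetworkError) Forbidden() {}

// createNetwork is a hypothetical stand-in for daemon.createNetwork.
func createNetwork(name string) error {
	if name == "bridge" || name == "host" || name == "none" {
		return PredefinedNetworkError(name)
	}
	return nil
}

func main() {
	err := createNetwork("bridge")
	// A caller can ignore this specific failure with a type assertion,
	// as the swarm executor does, rather than inspecting the message.
	if _, ok := err.(PredefinedNetworkError); ok {
		fmt.Println("skipping predefined network:", err)
	}
}
```

The marker method carries no behavior; it only tags the type so generic error-classification helpers can map it to an HTTP 403 without importing the daemon package's internals.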


@@ -667,7 +667,7 @@ func setMounts(daemon *Daemon, s *specs.Spec, c *container.Container, mounts []c
if s.Root.Readonly {
for i, m := range s.Mounts {
switch m.Destination {
case "/proc", "/dev/pts", "/dev/mqueue", "/dev":
case "/proc", "/dev/pts", "/dev/shm", "/dev/mqueue", "/dev":
continue
}
if _, ok := userMounts[m.Destination]; !ok {


@@ -48,3 +48,41 @@ func TestTmpfsDevShmNoDupMount(t *testing.T) {
err = setMounts(&d, &s, c, ms)
assert.NoError(t, err)
}
// TestIpcPrivateVsReadonly checks that in case of IpcMode: private
// and ReadonlyRootfs: true (as in "docker run --ipc private --read-only")
// the resulting /dev/shm mount is NOT made read-only.
// https://github.com/moby/moby/issues/36503
func TestIpcPrivateVsReadonly(t *testing.T) {
d := Daemon{
// some empty structs to avoid getting a panic
// caused by a nil pointer dereference
idMappings: &idtools.IDMappings{},
configStore: &config.Config{},
}
c := &container.Container{
HostConfig: &containertypes.HostConfig{
IpcMode: containertypes.IpcMode("private"),
ReadonlyRootfs: true,
},
}
// We can't call createSpec(), so mimic the minimal part
// of its code flow, just enough to reproduce the issue.
ms, err := d.setupMounts(c)
assert.NoError(t, err)
s := oci.DefaultSpec()
s.Root.Readonly = c.HostConfig.ReadonlyRootfs
err = setMounts(&d, &s, c, ms)
assert.NoError(t, err)
// Find the /dev/shm mount in s.Mounts and check it does not have the "ro" option
for _, m := range s.Mounts {
if m.Destination != "/dev/shm" {
continue
}
assert.Equal(t, false, inSlice(m.Options, "ro"))
}
}
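The assertion above relies on an `inSlice` helper from the daemon package. A minimal version, assuming the obvious `(slice []string, s string) bool` signature:

```go
package main

import "fmt"

// inSlice reports whether s is present in slice — the shape of the helper
// the test uses to check mount options (signature assumed).
func inSlice(slice []string, s string) bool {
	for _, ss := range slice {
		if ss == s {
			return true
		}
	}
	return false
}

func main() {
	// Typical tmpfs options for a private /dev/shm; the fix ensures
	// "ro" is never appended here when the rootfs is read-only.
	opts := []string{"nosuid", "noexec", "nodev", "mode=1777"}
	fmt.Println(inSlice(opts, "ro"))
}
```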


@@ -1,23 +0,0 @@
#!/bin/sh
# When updating TOMLV_COMMIT, consider updating github.com/BurntSushi/toml
# in vendor.conf accordingly
TOMLV_COMMIT=a368813c5e648fee92e5f6c30e3944ff9d5e8895
# When updating RUNC_COMMIT, also update runc in vendor.conf accordingly
RUNC_COMMIT=6c55f98695e902427906eed2c799e566e3d3dfb5
# containerd is also pinned in vendor.conf. When updating the binary
# version you may also need to update the vendor version to pick up bug
# fixes or new APIs.
CONTAINERD_COMMIT=cfd04396dc68220d1cecbe686a6cc3aa5ce3667c # v1.0.2
TINI_COMMIT=949e6facb77383876aeff8a6944dde66b3089574
# LIBNETWORK_COMMIT is used to build the docker-userland-proxy binary. When
# updating the binary version, consider updating github.com/docker/libnetwork
# in vendor.conf accordingly
LIBNETWORK_COMMIT=ed2130d117c11c542327b4d5216a5db36770bc65
VNDR_COMMIT=a6e196d8b4b0cbbdc29aebdb20c59ac6926bb384
# Linting
GOMETALINTER_COMMIT=bfcc1d6942136fd86eb6f1a6fb328de8398fbd80


@@ -1,176 +0,0 @@
#!/usr/bin/env bash
set -e
set -x
. $(dirname "$0")/binaries-commits
RM_GOPATH=0
TMP_GOPATH=${TMP_GOPATH:-""}
if [ -z "$TMP_GOPATH" ]; then
export GOPATH="$(mktemp -d)"
RM_GOPATH=1
else
export GOPATH="$TMP_GOPATH"
fi
# Do not build with ambient capabilities support
RUNC_BUILDTAGS="${RUNC_BUILDTAGS:-"seccomp apparmor selinux"}"
install_runc() {
echo "Install runc version $RUNC_COMMIT"
git clone https://github.com/opencontainers/runc.git "$GOPATH/src/github.com/opencontainers/runc"
cd "$GOPATH/src/github.com/opencontainers/runc"
git checkout -q "$RUNC_COMMIT"
make BUILDTAGS="$RUNC_BUILDTAGS" $1
cp runc /usr/local/bin/docker-runc
}
install_containerd() {
echo "Install containerd version $CONTAINERD_COMMIT"
git clone https://github.com/containerd/containerd.git "$GOPATH/src/github.com/containerd/containerd"
cd "$GOPATH/src/github.com/containerd/containerd"
git checkout -q "$CONTAINERD_COMMIT"
(
export GOPATH
make
)
cp bin/containerd /usr/local/bin/docker-containerd
cp bin/containerd-shim /usr/local/bin/docker-containerd-shim
cp bin/ctr /usr/local/bin/docker-containerd-ctr
}
install_containerd_static() {
echo "Install containerd version $CONTAINERD_COMMIT"
git clone https://github.com/containerd/containerd.git "$GOPATH/src/github.com/containerd/containerd"
cd "$GOPATH/src/github.com/containerd/containerd"
git checkout -q "$CONTAINERD_COMMIT"
(
export GOPATH
make BUILDTAGS='static_build netgo' EXTRA_FLAGS="-buildmode pie" EXTRA_LDFLAGS='-extldflags "-fno-PIC -static"'
)
cp bin/containerd /usr/local/bin/docker-containerd
cp bin/containerd-shim /usr/local/bin/docker-containerd-shim
cp bin/ctr /usr/local/bin/docker-containerd-ctr
}
install_proxy() {
echo "Install docker-proxy version $LIBNETWORK_COMMIT"
git clone https://github.com/docker/libnetwork.git "$GOPATH/src/github.com/docker/libnetwork"
cd "$GOPATH/src/github.com/docker/libnetwork"
git checkout -q "$LIBNETWORK_COMMIT"
go build -buildmode=pie -ldflags="$PROXY_LDFLAGS" -o /usr/local/bin/docker-proxy github.com/docker/libnetwork/cmd/proxy
}
install_dockercli() {
DOCKERCLI_CHANNEL=${DOCKERCLI_CHANNEL:-edge}
DOCKERCLI_VERSION=${DOCKERCLI_VERSION:-17.06.0-ce}
echo "Install docker/cli version $DOCKERCLI_VERSION from $DOCKERCLI_CHANNEL"
arch=$(uname -m)
# No official release for these platforms
if [[ "$arch" != "x86_64" ]] && [[ "$arch" != "s390x" ]]; then
build_dockercli
return
fi
url=https://download.docker.com/linux/static
curl -Ls $url/$DOCKERCLI_CHANNEL/$arch/docker-$DOCKERCLI_VERSION.tgz | \
tar -xz docker/docker
mv docker/docker /usr/local/bin/
rmdir docker
}
build_dockercli() {
DOCKERCLI_VERSION=${DOCKERCLI_VERSION:-17.06.0-ce}
git clone https://github.com/docker/docker-ce "$GOPATH/tmp/docker-ce"
cd "$GOPATH/tmp/docker-ce"
git checkout -q "v$DOCKERCLI_VERSION"
mkdir -p "$GOPATH/src/github.com/docker"
mv components/cli "$GOPATH/src/github.com/docker/cli"
go build -buildmode=pie -o /usr/local/bin/docker github.com/docker/cli/cmd/docker
}
install_gometalinter() {
echo "Installing gometalinter version $GOMETALINTER_COMMIT"
go get -d github.com/alecthomas/gometalinter
cd "$GOPATH/src/github.com/alecthomas/gometalinter"
git checkout -q "$GOMETALINTER_COMMIT"
go build -buildmode=pie -o /usr/local/bin/gometalinter github.com/alecthomas/gometalinter
GOBIN=/usr/local/bin gometalinter --install
}
for prog in "$@"
do
case $prog in
tomlv)
echo "Install tomlv version $TOMLV_COMMIT"
git clone https://github.com/BurntSushi/toml.git "$GOPATH/src/github.com/BurntSushi/toml"
cd "$GOPATH/src/github.com/BurntSushi/toml" && git checkout -q "$TOMLV_COMMIT"
go build -buildmode=pie -v -o /usr/local/bin/tomlv github.com/BurntSushi/toml/cmd/tomlv
;;
runc)
install_runc static
;;
runc-dynamic)
install_runc
;;
containerd)
install_containerd_static
;;
containerd-dynamic)
install_containerd
;;
gometalinter)
install_gometalinter
;;
tini)
echo "Install tini version $TINI_COMMIT"
git clone https://github.com/krallin/tini.git "$GOPATH/tini"
cd "$GOPATH/tini"
git checkout -q "$TINI_COMMIT"
cmake .
make tini-static
cp tini-static /usr/local/bin/docker-init
;;
proxy)
(
export CGO_ENABLED=0
install_proxy
)
;;
proxy-dynamic)
PROXY_LDFLAGS="-linkmode=external" install_proxy
;;
vndr)
echo "Install vndr version $VNDR_COMMIT"
git clone https://github.com/LK4D4/vndr.git "$GOPATH/src/github.com/LK4D4/vndr"
cd "$GOPATH/src/github.com/LK4D4/vndr"
git checkout -q "$VNDR_COMMIT"
go build -buildmode=pie -v -o /usr/local/bin/vndr .
;;
dockercli)
install_dockercli
;;
*)
echo "Usage: $0 [tomlv|runc|runc-dynamic|containerd|containerd-dynamic|tini|proxy|proxy-dynamic|vndr|dockercli|gometalinter]"
exit 1
esac
done
if [ $RM_GOPATH -eq 1 ]; then
rm -rf "$GOPATH"
fi


@@ -0,0 +1,36 @@
#!/bin/sh
# containerd is also pinned in vendor.conf. When updating the binary
# version you may also need to update the vendor version to pick up bug
# fixes or new APIs.
CONTAINERD_COMMIT=cfd04396dc68220d1cecbe686a6cc3aa5ce3667c # v1.0.2
install_containerd() {
echo "Install containerd version $CONTAINERD_COMMIT"
git clone https://github.com/containerd/containerd.git "$GOPATH/src/github.com/containerd/containerd"
cd "$GOPATH/src/github.com/containerd/containerd"
git checkout -q "$CONTAINERD_COMMIT"
(
export BUILDTAGS='static_build netgo'
export EXTRA_FLAGS='-buildmode=pie'
export EXTRA_LDFLAGS='-extldflags "-fno-PIC -static"'
# Reset build flags to nothing if we want a dynbinary
if [ "$1" = "dynamic" ]; then
export BUILDTAGS=''
export EXTRA_FLAGS=''
export EXTRA_LDFLAGS=''
fi
make
)
mkdir -p ${PREFIX}
cp bin/containerd ${PREFIX}/docker-containerd
cp bin/containerd-shim ${PREFIX}/docker-containerd-shim
cp bin/ctr ${PREFIX}/docker-containerd-ctr
}


@@ -0,0 +1,31 @@
#!/bin/sh
DOCKERCLI_CHANNEL=${DOCKERCLI_CHANNEL:-edge}
DOCKERCLI_VERSION=${DOCKERCLI_VERSION:-17.06.0-ce}
install_dockercli() {
echo "Install docker/cli version $DOCKERCLI_VERSION from $DOCKERCLI_CHANNEL"
arch=$(uname -m)
# No official release for these platforms
if [[ "$arch" != "x86_64" ]] && [[ "$arch" != "s390x" ]]; then
build_dockercli
return
fi
url=https://download.docker.com/linux/static
curl -Ls $url/$DOCKERCLI_CHANNEL/$arch/docker-$DOCKERCLI_VERSION.tgz | \
tar -xz docker/docker
mkdir -p ${PREFIX}
mv docker/docker ${PREFIX}/
rmdir docker
}
build_dockercli() {
git clone https://github.com/docker/docker-ce "$GOPATH/tmp/docker-ce"
cd "$GOPATH/tmp/docker-ce"
git checkout -q "v$DOCKERCLI_VERSION"
mkdir -p "$GOPATH/src/github.com/docker"
mv components/cli "$GOPATH/src/github.com/docker/cli"
go build -buildmode=pie -o ${PREFIX}/docker github.com/docker/cli/cmd/docker
}


@@ -0,0 +1,12 @@
#!/bin/sh
GOMETALINTER_COMMIT=bfcc1d6942136fd86eb6f1a6fb328de8398fbd80
install_gometalinter() {
echo "Installing gometalinter version $GOMETALINTER_COMMIT"
go get -d github.com/alecthomas/gometalinter
cd "$GOPATH/src/github.com/alecthomas/gometalinter"
git checkout -q "$GOMETALINTER_COMMIT"
go build -buildmode=pie -o ${PREFIX}/gometalinter github.com/alecthomas/gometalinter
GOBIN=${PREFIX} ${PREFIX}/gometalinter --install
}


@@ -0,0 +1,30 @@
#!/bin/bash
set -e
set -x
RM_GOPATH=0
TMP_GOPATH=${TMP_GOPATH:-""}
: ${PREFIX:="/usr/local/bin"}
if [ -z "$TMP_GOPATH" ]; then
export GOPATH="$(mktemp -d)"
RM_GOPATH=1
else
export GOPATH="$TMP_GOPATH"
fi
dir="$(dirname $0)"
bin=$1
shift
if [ ! -f "${dir}/${bin}.installer" ]; then
echo "Could not find installer for \"$bin\""
exit 1
fi
. $dir/$bin.installer
install_$bin "$@"
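The dispatcher's convention — each `<name>.installer` file defines an `install_<name>` function that the wrapper sources and then invokes by name — can be exercised in isolation. A self-contained demo (the `foo` installer and its message are made up for illustration):

```shell
#!/bin/sh
# Minimal demo of the install.sh dispatch convention: write a fake
# "foo.installer" defining install_foo, then source it and call the
# function whose name is derived from $bin.
set -e
dir="$(mktemp -d)"
cat > "$dir/foo.installer" <<'EOF'
install_foo() {
    echo "install_foo called with: $*"
}
EOF
bin=foo
. "$dir/$bin.installer"
result="$(install_$bin static)"
echo "$result"
rm -rf "$dir"
```

This is why adding a new tool only requires dropping a `<tool>.installer` file next to `install.sh`; the wrapper itself never changes.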


@@ -0,0 +1,38 @@
#!/bin/sh
# LIBNETWORK_COMMIT is used to build the docker-userland-proxy binary. When
# updating the binary version, consider updating github.com/docker/libnetwork
# in vendor.conf accordingly
LIBNETWORK_COMMIT=1b91bc94094ecfdae41daa465cc0c8df37dfb3dd
install_proxy() {
case "$1" in
"dynamic")
install_proxy_dynamic
return
;;
"")
export CGO_ENABLED=0
_install_proxy
;;
*)
echo "Usage: $0 [dynamic]"
;;
esac
}
install_proxy_dynamic() {
export PROXY_LDFLAGS="-linkmode=external"
export BUILD_MODE="-buildmode=pie"
_install_proxy
}
_install_proxy() {
echo "Install docker-proxy version $LIBNETWORK_COMMIT"
git clone https://github.com/docker/libnetwork.git "$GOPATH/src/github.com/docker/libnetwork"
cd "$GOPATH/src/github.com/docker/libnetwork"
git checkout -q "$LIBNETWORK_COMMIT"
go build $BUILD_MODE -ldflags="$PROXY_LDFLAGS" -o ${PREFIX}/docker-proxy github.com/docker/libnetwork/cmd/proxy
}


@@ -0,0 +1,22 @@
#!/bin/sh
# When updating RUNC_COMMIT, also update runc in vendor.conf accordingly
RUNC_COMMIT=4fc53a81fb7c994640722ac585fa9ca548971871
install_runc() {
# Do not build with ambient capabilities support
RUNC_BUILDTAGS="${RUNC_BUILDTAGS:-"seccomp apparmor selinux"}"
echo "Install runc version $RUNC_COMMIT"
git clone https://github.com/opencontainers/runc.git "$GOPATH/src/github.com/opencontainers/runc"
cd "$GOPATH/src/github.com/opencontainers/runc"
git checkout -q "$RUNC_COMMIT"
if [ -z "$1" ]; then
target=static
else
target="$1"
fi
make BUILDTAGS="$RUNC_BUILDTAGS" "$target"
mkdir -p ${PREFIX}
cp runc ${PREFIX}/docker-runc
}


@@ -0,0 +1,14 @@
#!/bin/sh
TINI_COMMIT=949e6facb77383876aeff8a6944dde66b3089574
install_tini() {
echo "Install tini version $TINI_COMMIT"
git clone https://github.com/krallin/tini.git "$GOPATH/tini"
cd "$GOPATH/tini"
git checkout -q "$TINI_COMMIT"
cmake .
make tini-static
mkdir -p ${PREFIX}
cp tini-static ${PREFIX}/docker-init
}


@@ -0,0 +1,12 @@
#!/bin/sh
# When updating TOMLV_COMMIT, consider updating github.com/BurntSushi/toml
# in vendor.conf accordingly
TOMLV_COMMIT=a368813c5e648fee92e5f6c30e3944ff9d5e8895
install_tomlv() {
echo "Install tomlv version $TOMLV_COMMIT"
git clone https://github.com/BurntSushi/toml.git "$GOPATH/src/github.com/BurntSushi/toml"
cd "$GOPATH/src/github.com/BurntSushi/toml" && git checkout -q "$TOMLV_COMMIT"
go build -v -buildmode=pie -o ${PREFIX}/tomlv github.com/BurntSushi/toml/cmd/tomlv
}


@@ -0,0 +1,11 @@
#!/bin/sh
VNDR_COMMIT=a6e196d8b4b0cbbdc29aebdb20c59ac6926bb384
install_vndr() {
echo "Install vndr version $VNDR_COMMIT"
git clone https://github.com/LK4D4/vndr.git "$GOPATH/src/github.com/LK4D4/vndr"
cd "$GOPATH/src/github.com/LK4D4/vndr"
git checkout -q "$VNDR_COMMIT"
go build -buildmode=pie -v -o ${PREFIX}/vndr .
}


@@ -2,7 +2,9 @@
rm -rf autogen
source hack/dockerfile/binaries-commits
source hack/dockerfile/install/runc.installer
source hack/dockerfile/install/tini.installer
source hack/dockerfile/install/containerd.installer
cat > dockerversion/version_autogen.go <<DVEOF
// +build autogen


@@ -69,10 +69,10 @@ func (s *DockerSuite) TestAttachAfterDetach(c *check.C) {
cmd.Stdout = tty
cmd.Stderr = tty
errChan := make(chan error)
cmdExit := make(chan error)
go func() {
errChan <- cmd.Run()
close(errChan)
cmdExit <- cmd.Run()
close(cmdExit)
}()
c.Assert(waitRun(name), check.IsNil)
@@ -82,12 +82,7 @@ func (s *DockerSuite) TestAttachAfterDetach(c *check.C) {
cpty.Write([]byte{17})
select {
case err := <-errChan:
if err != nil {
buff := make([]byte, 200)
tty.Read(buff)
c.Fatalf("%s: %s", err, buff)
}
case <-cmdExit:
case <-time.After(5 * time.Second):
c.Fatal("timeout while detaching")
}
@@ -102,6 +97,7 @@ func (s *DockerSuite) TestAttachAfterDetach(c *check.C) {
err = cmd.Start()
c.Assert(err, checker.IsNil)
defer cmd.Process.Kill()
bytes := make([]byte, 10)
var nBytes int
@@ -124,11 +120,7 @@ func (s *DockerSuite) TestAttachAfterDetach(c *check.C) {
c.Fatal("timeout waiting for attach read")
}
err = cmd.Wait()
c.Assert(err, checker.IsNil)
c.Assert(string(bytes[:nBytes]), checker.Contains, "/ #")
}
// TestAttachDetach checks that attach in tty mode can be detached using the long container ID


@@ -1439,6 +1439,7 @@ func (s *DockerSuite) TestBuildRelativeCopy(c *check.C) {
))
}
// FIXME(vdemeester) should be unit test
func (s *DockerSuite) TestBuildBlankName(c *check.C) {
name := "testbuildblankname"
testCases := []struct {
@@ -2066,6 +2067,7 @@ func (s *DockerSuite) TestBuildNoContext(c *check.C) {
}
}
// FIXME(vdemeester) migrate to docker/cli e2e
func (s *DockerSuite) TestBuildDockerfileStdin(c *check.C) {
name := "stdindockerfile"
tmpDir, err := ioutil.TempDir("", "fake-context")
@@ -2085,6 +2087,7 @@ CMD ["cat", "/foo"]`),
c.Assert(strings.TrimSpace(string(res)), checker.Equals, `[cat /foo]`)
}
// FIXME(vdemeester) migrate to docker/cli tests (unit or e2e)
func (s *DockerSuite) TestBuildDockerfileStdinConflict(c *check.C) {
name := "stdindockerfiletarcontext"
icmd.RunCmd(icmd.Cmd{
@@ -2401,6 +2404,7 @@ func (s *DockerSuite) TestBuildDockerignoringDockerfile(c *check.C) {
build.WithFile("Dockerfile", dockerfile),
build.WithFile(".dockerignore", "Dockerfile\n"),
))
// FIXME(vdemeester) why twice ?
buildImageSuccessfully(c, name, build.WithBuildContext(c,
build.WithFile("Dockerfile", dockerfile),
build.WithFile(".dockerignore", "./Dockerfile\n"),
@@ -2420,6 +2424,7 @@ func (s *DockerSuite) TestBuildDockerignoringRenamedDockerfile(c *check.C) {
build.WithFile("MyDockerfile", dockerfile),
build.WithFile(".dockerignore", "MyDockerfile\n"),
))
// FIXME(vdemeester) why twice ?
buildImageSuccessfully(c, name, cli.WithFlags("-f", "MyDockerfile"), build.WithBuildContext(c,
build.WithFile("Dockerfile", "Should not use me"),
build.WithFile("MyDockerfile", dockerfile),
@@ -3045,6 +3050,7 @@ func (s *DockerSuite) TestBuildAddTarXzGz(c *check.C) {
buildImageSuccessfully(c, name, build.WithExternalBuildContext(ctx))
}
// FIXME(vdemeester) most of the from git tests could be moved to `docker/cli` e2e tests
func (s *DockerSuite) TestBuildFromGit(c *check.C) {
name := "testbuildfromgit"
git := fakegit.New(c, "repo", map[string]string{
@@ -3422,6 +3428,7 @@ func (s *DockerSuite) TestBuildLabelsCache(c *check.C) {
}
// FIXME(vdemeester) port to docker/cli e2e tests (api tests should test suppressOutput option though)
func (s *DockerSuite) TestBuildNotVerboseSuccess(c *check.C) {
// This test makes sure that -q works correctly when build is successful:
// stdout has only the image ID (long image ID) and stderr is empty.
@@ -3472,6 +3479,7 @@ func (s *DockerSuite) TestBuildNotVerboseSuccess(c *check.C) {
}
// FIXME(vdemeester) migrate to docker/cli tests
func (s *DockerSuite) TestBuildNotVerboseFailureWithNonExistImage(c *check.C) {
// This test makes sure that -q works correctly when build fails by
// comparing between the stderr output in quiet mode and in stdout
@@ -3492,6 +3500,7 @@ func (s *DockerSuite) TestBuildNotVerboseFailureWithNonExistImage(c *check.C) {
}
}
// FIXME(vdemeester) migrate to docker/cli tests
func (s *DockerSuite) TestBuildNotVerboseFailure(c *check.C) {
// This test makes sure that -q works correctly when build fails by
// comparing between the stderr output in quiet mode and in stdout
@@ -3519,6 +3528,7 @@ func (s *DockerSuite) TestBuildNotVerboseFailure(c *check.C) {
}
}
// FIXME(vdemeester) migrate to docker/cli tests
func (s *DockerSuite) TestBuildNotVerboseFailureRemote(c *check.C) {
// This test ensures that when given a wrong URL, stderr in quiet mode and
// stderr in verbose mode are identical.
@@ -3548,6 +3558,7 @@ func (s *DockerSuite) TestBuildNotVerboseFailureRemote(c *check.C) {
}
}
// FIXME(vdemeester) migrate to docker/cli tests
func (s *DockerSuite) TestBuildStderr(c *check.C) {
// This test just makes sure that no non-error output goes
// to stderr
@@ -3688,67 +3699,6 @@ CMD cat /foo/file`),
}
// FIXME(vdemeester) part of this should be unit test, other part should be clearer
func (s *DockerSuite) TestBuildRenamedDockerfile(c *check.C) {
ctx := fakecontext.New(c, "", fakecontext.WithFiles(map[string]string{
"Dockerfile": "FROM busybox\nRUN echo from Dockerfile",
"files/Dockerfile": "FROM busybox\nRUN echo from files/Dockerfile",
"files/dFile": "FROM busybox\nRUN echo from files/dFile",
"dFile": "FROM busybox\nRUN echo from dFile",
"files/dFile2": "FROM busybox\nRUN echo from files/dFile2",
}))
defer ctx.Close()
cli.Docker(cli.Args("build", "-t", "test1", "."), cli.InDir(ctx.Dir)).Assert(c, icmd.Expected{
Out: "from Dockerfile",
})
cli.Docker(cli.Args("build", "-f", filepath.Join("files", "Dockerfile"), "-t", "test2", "."), cli.InDir(ctx.Dir)).Assert(c, icmd.Expected{
Out: "from files/Dockerfile",
})
cli.Docker(cli.Args("build", fmt.Sprintf("--file=%s", filepath.Join("files", "dFile")), "-t", "test3", "."), cli.InDir(ctx.Dir)).Assert(c, icmd.Expected{
Out: "from files/dFile",
})
cli.Docker(cli.Args("build", "--file=dFile", "-t", "test4", "."), cli.InDir(ctx.Dir)).Assert(c, icmd.Expected{
Out: "from dFile",
})
dirWithNoDockerfile, err := ioutil.TempDir(os.TempDir(), "test5")
c.Assert(err, check.IsNil)
nonDockerfileFile := filepath.Join(dirWithNoDockerfile, "notDockerfile")
if _, err = os.Create(nonDockerfileFile); err != nil {
c.Fatal(err)
}
cli.Docker(cli.Args("build", fmt.Sprintf("--file=%s", nonDockerfileFile), "-t", "test5", "."), cli.InDir(ctx.Dir)).Assert(c, icmd.Expected{
ExitCode: 1,
Err: fmt.Sprintf("unable to prepare context: the Dockerfile (%s) must be within the build context", nonDockerfileFile),
})
cli.Docker(cli.Args("build", "-f", filepath.Join("..", "Dockerfile"), "-t", "test6", ".."), cli.InDir(filepath.Join(ctx.Dir, "files"))).Assert(c, icmd.Expected{
Out: "from Dockerfile",
})
cli.Docker(cli.Args("build", "-f", filepath.Join(ctx.Dir, "files", "Dockerfile"), "-t", "test7", ".."), cli.InDir(filepath.Join(ctx.Dir, "files"))).Assert(c, icmd.Expected{
Out: "from files/Dockerfile",
})
cli.Docker(cli.Args("build", "-f", filepath.Join("..", "Dockerfile"), "-t", "test8", "."), cli.InDir(filepath.Join(ctx.Dir, "files"))).Assert(c, icmd.Expected{
ExitCode: 1,
Err: "must be within the build context",
})
tmpDir := os.TempDir()
cli.Docker(cli.Args("build", "-t", "test9", ctx.Dir), cli.InDir(tmpDir)).Assert(c, icmd.Expected{
Out: "from Dockerfile",
})
cli.Docker(cli.Args("build", "-f", "dFile2", "-t", "test10", "."), cli.InDir(filepath.Join(ctx.Dir, "files"))).Assert(c, icmd.Expected{
Out: "from files/dFile2",
})
}
func (s *DockerSuite) TestBuildFromMixedcaseDockerfile(c *check.C) {
testRequires(c, UnixCli) // Dockerfile overwrites dockerfile on windows
testRequires(c, DaemonIsLinux)
@@ -3772,6 +3722,7 @@ func (s *DockerSuite) TestBuildFromMixedcaseDockerfile(c *check.C) {
})
}
// FIXME(vdemeester) should migrate to docker/cli tests
func (s *DockerSuite) TestBuildFromURLWithF(c *check.C) {
server := fakestorage.New(c, "", fakecontext.WithFiles(map[string]string{"baz": `FROM busybox
RUN echo from baz
@@ -3798,6 +3749,7 @@ RUN find /tmp/`}))
}
// FIXME(vdemeester) should migrate to docker/cli tests
func (s *DockerSuite) TestBuildFromStdinWithF(c *check.C) {
testRequires(c, DaemonIsLinux) // TODO Windows: This test is flaky; no idea why
ctx := fakecontext.New(c, "", fakecontext.WithDockerfile(`FROM busybox
@@ -3840,61 +3792,6 @@ func (s *DockerSuite) TestBuildFromOfficialNames(c *check.C) {
}
}
func (s *DockerSuite) TestBuildDockerfileOutsideContext(c *check.C) {
testRequires(c, UnixCli, DaemonIsLinux) // uses os.Symlink: not implemented in windows at the time of writing (go-1.4.2)
name := "testbuilddockerfileoutsidecontext"
tmpdir, err := ioutil.TempDir("", name)
c.Assert(err, check.IsNil)
defer os.RemoveAll(tmpdir)
ctx := filepath.Join(tmpdir, "context")
if err := os.MkdirAll(ctx, 0755); err != nil {
c.Fatal(err)
}
if err := ioutil.WriteFile(filepath.Join(ctx, "Dockerfile"), []byte("FROM scratch\nENV X Y"), 0644); err != nil {
c.Fatal(err)
}
wd, err := os.Getwd()
if err != nil {
c.Fatal(err)
}
defer os.Chdir(wd)
if err := os.Chdir(ctx); err != nil {
c.Fatal(err)
}
if err := ioutil.WriteFile(filepath.Join(tmpdir, "outsideDockerfile"), []byte("FROM scratch\nENV x y"), 0644); err != nil {
c.Fatal(err)
}
if err := os.Symlink(filepath.Join("..", "outsideDockerfile"), filepath.Join(ctx, "dockerfile1")); err != nil {
c.Fatal(err)
}
if err := os.Symlink(filepath.Join(tmpdir, "outsideDockerfile"), filepath.Join(ctx, "dockerfile2")); err != nil {
c.Fatal(err)
}
for _, dockerfilePath := range []string{
filepath.Join("..", "outsideDockerfile"),
filepath.Join(ctx, "dockerfile1"),
filepath.Join(ctx, "dockerfile2"),
} {
result := dockerCmdWithResult("build", "-t", name, "--no-cache", "-f", dockerfilePath, ".")
result.Assert(c, icmd.Expected{
Err: "must be within the build context",
ExitCode: 1,
})
deleteImages(name)
}
os.Chdir(tmpdir)
// Path to Dockerfile should be resolved relative to working directory, not relative to context.
// There is a Dockerfile in the context, but since there is no Dockerfile in the current directory, the following should fail
out, _, err := dockerCmdWithError("build", "-t", name, "--no-cache", "-f", "Dockerfile", ctx)
if err == nil {
c.Fatalf("Expected error. Out: %s", out)
}
}
// FIXME(vdemeester) should be a unit test
func (s *DockerSuite) TestBuildSpaces(c *check.C) {
// Test to make sure that leading/trailing spaces on a command
@@ -4186,6 +4083,7 @@ func (s *DockerTrustSuite) TestTrustedBuildUntrustedTag(c *check.C) {
})
}
// FIXME(vdemeester) should migrate to docker/cli e2e tests
func (s *DockerTrustSuite) TestBuildContextDirIsSymlink(c *check.C) {
testRequires(c, DaemonIsLinux)
tempDir, err := ioutil.TempDir("", "test-build-dir-is-symlink-")
@@ -4222,6 +4120,7 @@ func (s *DockerTrustSuite) TestBuildContextDirIsSymlink(c *check.C) {
}
func (s *DockerTrustSuite) TestTrustedBuildTagFromReleasesRole(c *check.C) {
c.Skip("Blacklisting for Docker CE")
testRequires(c, NotaryHosting)
latestTag := s.setupTrustedImage(c, "trusted-build-releases-role")
@@ -4253,6 +4152,7 @@ func (s *DockerTrustSuite) TestTrustedBuildTagFromReleasesRole(c *check.C) {
}
func (s *DockerTrustSuite) TestTrustedBuildTagIgnoresOtherDelegationRoles(c *check.C) {
c.Skip("Blacklisting for Docker CE")
testRequires(c, NotaryHosting)
latestTag := s.setupTrustedImage(c, "trusted-build-releases-role")
@@ -5130,6 +5030,7 @@ func (s *DockerSuite) TestBuildCacheRootSource(c *check.C) {
}
// #19375
// FIXME(vdemeester) should migrate to docker/cli tests
func (s *DockerSuite) TestBuildFailsGitNotCallable(c *check.C) {
buildImage("gitnotcallable", cli.WithEnvironmentVariables("PATH="),
build.WithContextPath("github.com/docker/v1.10-migrator.git")).Assert(c, icmd.Expected{
@@ -6447,6 +6348,7 @@ CMD echo foo
c.Assert(strings.TrimSpace(out), checker.Equals, `["/bin/sh","-c","echo foo"]`)
}
// FIXME(vdemeester) should migrate to docker/cli tests
func (s *DockerSuite) TestBuildIidFile(c *check.C) {
tmpDir, err := ioutil.TempDir("", "TestBuildIidFile")
if err != nil {
@@ -6471,6 +6373,7 @@ ENV BAR BAZ`),
c.Assert(d.String(), checker.Equals, getIDByName(c, name))
}
// FIXME(vdemeester) should migrate to docker/cli tests
func (s *DockerSuite) TestBuildIidFileCleanupOnFail(c *check.C) {
tmpDir, err := ioutil.TempDir("", "TestBuildIidFileCleanupOnFail")
if err != nil {
@@ -6493,6 +6396,7 @@ func (s *DockerSuite) TestBuildIidFileCleanupOnFail(c *check.C) {
c.Assert(os.IsNotExist(err), check.Equals, true)
}
// FIXME(vdemeester) should migrate to docker/cli tests
func (s *DockerSuite) TestBuildIidFileSquash(c *check.C) {
testRequires(c, ExperimentalDaemon)
tmpDir, err := ioutil.TempDir("", "TestBuildIidFileSquash")


@@ -8,8 +8,6 @@ import (
"github.com/go-check/check"
)
// docker cp CONTAINER:PATH LOCALPATH
// Try all of the test cases from the archive package which implements the
// internals of `docker cp` and ensure that the behavior matches when actually
// copying to and from containers.
@@ -20,67 +18,9 @@ import (
// 3. DST parent directory must exist.
// 4. If DST exists as a file, it must not end with a trailing separator.
// First get these easy error cases out of the way.
// Test for error when SRC does not exist.
func (s *DockerSuite) TestCpFromErrSrcNotExists(c *check.C) {
containerID := makeTestContainer(c, testContainerOptions{})
tmpDir := getTestDir(c, "test-cp-from-err-src-not-exists")
defer os.RemoveAll(tmpDir)
err := runDockerCp(c, containerCpPath(containerID, "file1"), tmpDir, nil)
c.Assert(err, checker.NotNil)
c.Assert(isCpNotExist(err), checker.True, check.Commentf("expected IsNotExist error, but got %T: %s", err, err))
}
// Test for error when SRC ends in a trailing
// path separator but it exists as a file.
func (s *DockerSuite) TestCpFromErrSrcNotDir(c *check.C) {
testRequires(c, DaemonIsLinux)
containerID := makeTestContainer(c, testContainerOptions{addContent: true})
tmpDir := getTestDir(c, "test-cp-from-err-src-not-dir")
defer os.RemoveAll(tmpDir)
err := runDockerCp(c, containerCpPathTrailingSep(containerID, "file1"), tmpDir, nil)
c.Assert(err, checker.NotNil)
c.Assert(isCpNotDir(err), checker.True, check.Commentf("expected IsNotDir error, but got %T: %s", err, err))
}
// Test for error when DST ends in a trailing
// path separator but exists as a file.
func (s *DockerSuite) TestCpFromErrDstNotDir(c *check.C) {
testRequires(c, DaemonIsLinux)
containerID := makeTestContainer(c, testContainerOptions{addContent: true})
tmpDir := getTestDir(c, "test-cp-from-err-dst-not-dir")
defer os.RemoveAll(tmpDir)
makeTestContentInDir(c, tmpDir)
// Try with a file source.
srcPath := containerCpPath(containerID, "/file1")
dstPath := cpPathTrailingSep(tmpDir, "file1")
err := runDockerCp(c, srcPath, dstPath, nil)
c.Assert(err, checker.NotNil)
c.Assert(isCpNotDir(err), checker.True, check.Commentf("expected IsNotDir error, but got %T: %s", err, err))
// Try with a directory source.
srcPath = containerCpPath(containerID, "/dir1")
err = runDockerCp(c, srcPath, dstPath, nil)
c.Assert(err, checker.NotNil)
c.Assert(isCpNotDir(err), checker.True, check.Commentf("expected IsNotDir error, but got %T: %s", err, err))
}
// Check that copying from a container to a local symlink copies to the symlink
// target and does not overwrite the local symlink itself.
// TODO: move to docker/cli and/or integration/container/copy_test.go
func (s *DockerSuite) TestCpFromSymlinkDestination(c *check.C) {
testRequires(c, DaemonIsLinux)
containerID := makeTestContainer(c, testContainerOptions{addContent: true})


@@ -2,15 +2,11 @@ package main
import (
"os"
"runtime"
"strings"
"github.com/docker/docker/integration-cli/checker"
"github.com/go-check/check"
)
// docker cp LOCALPATH CONTAINER:PATH
// Try all of the test cases from the archive package which implements the
// internals of `docker cp` and ensure that the behavior matches when actually
// copying to and from containers.
@@ -21,124 +17,6 @@ import (
// 3. DST parent directory must exist.
// 4. If DST exists as a file, it must not end with a trailing separator.
// First get these easy error cases out of the way.
// Test for error when SRC does not exist.
func (s *DockerSuite) TestCpToErrSrcNotExists(c *check.C) {
containerID := makeTestContainer(c, testContainerOptions{})
tmpDir := getTestDir(c, "test-cp-to-err-src-not-exists")
defer os.RemoveAll(tmpDir)
srcPath := cpPath(tmpDir, "file1")
dstPath := containerCpPath(containerID, "file1")
_, srcStatErr := os.Stat(srcPath)
c.Assert(os.IsNotExist(srcStatErr), checker.True)
err := runDockerCp(c, srcPath, dstPath, nil)
if runtime.GOOS == "windows" {
// Go 1.9+ on Windows returns a different error for `os.Stat()`, see
// https://github.com/golang/go/commit/6144c7270e5812d9de8fb97456ee4e5ae657fcbb#diff-f63e1a4b4377b2fe0b05011db3df9599
//
// Go 1.8: CreateFile C:\not-exist: The system cannot find the file specified.
// Go 1.9: GetFileAttributesEx C:\not-exist: The system cannot find the file specified.
//
// Due to the CLI using a different version than the daemon, comparing the
// error message won't work, so just hard-code the common part here.
//
// TODO this should probably be a test in the CLI repository instead
c.Assert(strings.ToLower(err.Error()), checker.Contains, "cannot find the file specified")
c.Assert(strings.ToLower(err.Error()), checker.Contains, strings.ToLower(tmpDir))
} else {
c.Assert(strings.ToLower(err.Error()), checker.Contains, strings.ToLower(srcStatErr.Error()))
}
}
// Test for error when SRC ends in a trailing
// path separator but it exists as a file.
func (s *DockerSuite) TestCpToErrSrcNotDir(c *check.C) {
containerID := makeTestContainer(c, testContainerOptions{})
tmpDir := getTestDir(c, "test-cp-to-err-src-not-dir")
defer os.RemoveAll(tmpDir)
makeTestContentInDir(c, tmpDir)
srcPath := cpPathTrailingSep(tmpDir, "file1")
dstPath := containerCpPath(containerID, "testDir")
err := runDockerCp(c, srcPath, dstPath, nil)
c.Assert(err, checker.NotNil)
c.Assert(isCpNotDir(err), checker.True, check.Commentf("expected IsNotDir error, but got %T: %s", err, err))
}
// Test for error when SRC is a valid file or directory,
// but the DST parent directory does not exist.
func (s *DockerSuite) TestCpToErrDstParentNotExists(c *check.C) {
testRequires(c, DaemonIsLinux)
containerID := makeTestContainer(c, testContainerOptions{addContent: true})
tmpDir := getTestDir(c, "test-cp-to-err-dst-parent-not-exists")
defer os.RemoveAll(tmpDir)
makeTestContentInDir(c, tmpDir)
// Try with a file source.
srcPath := cpPath(tmpDir, "file1")
dstPath := containerCpPath(containerID, "/notExists", "file1")
err := runDockerCp(c, srcPath, dstPath, nil)
c.Assert(err, checker.NotNil)
c.Assert(isCpNotExist(err), checker.True, check.Commentf("expected IsNotExist error, but got %T: %s", err, err))
// Try with a directory source.
srcPath = cpPath(tmpDir, "dir1")
err = runDockerCp(c, srcPath, dstPath, nil)
c.Assert(err, checker.NotNil)
c.Assert(isCpNotExist(err), checker.True, check.Commentf("expected IsNotExist error, but got %T: %s", err, err))
}
// Test for error when DST ends in a trailing path separator but exists as a
// file. Also test that we cannot overwrite an existing directory with a
// non-directory and that we cannot overwrite an existing non-directory with a directory.
func (s *DockerSuite) TestCpToErrDstNotDir(c *check.C) {
testRequires(c, DaemonIsLinux)
containerID := makeTestContainer(c, testContainerOptions{addContent: true})
tmpDir := getTestDir(c, "test-cp-to-err-dst-not-dir")
defer os.RemoveAll(tmpDir)
makeTestContentInDir(c, tmpDir)
// Try with a file source.
srcPath := cpPath(tmpDir, "dir1/file1-1")
dstPath := containerCpPathTrailingSep(containerID, "file1")
// The client should encounter an error trying to stat the destination
// and then be unable to copy since the destination is asserted to be a
// directory but does not exist.
err := runDockerCp(c, srcPath, dstPath, nil)
c.Assert(err, checker.NotNil)
c.Assert(isCpDirNotExist(err), checker.True, check.Commentf("expected DirNotExist error, but got %T: %s", err, err))
// Try with a directory source.
srcPath = cpPath(tmpDir, "dir1")
// The client should encounter an error trying to stat the destination and
// then decide to extract to the parent directory instead with a rebased
// name in the source archive, but this directory would overwrite the
// existing file with the same name.
err = runDockerCp(c, srcPath, dstPath, nil)
c.Assert(err, checker.NotNil)
c.Assert(isCannotOverwriteNonDirWithDir(err), checker.True, check.Commentf("expected CannotOverwriteNonDirWithDir error, but got %T: %s", err, err))
}
// Check that copying from a local path to a symlink in a container copies to
// the symlink target and does not overwrite the container symlink itself.
func (s *DockerSuite) TestCpToSymlinkDestination(c *check.C) {


@@ -228,18 +228,10 @@ func getTestDir(c *check.C, label string) (tmpDir string) {
return
}
func isCpNotExist(err error) bool {
return strings.Contains(strings.ToLower(err.Error()), "could not find the file")
}
func isCpDirNotExist(err error) bool {
return strings.Contains(err.Error(), archive.ErrDirNotExists.Error())
}
func isCpNotDir(err error) bool {
return strings.Contains(err.Error(), archive.ErrNotDirectory.Error()) || strings.Contains(err.Error(), "filename, directory name, or volume label syntax is incorrect")
}
func isCpCannotCopyDir(err error) bool {
return strings.Contains(err.Error(), archive.ErrCannotCopyDir.Error())
}
@@ -248,10 +240,6 @@ func isCpCannotCopyReadOnly(err error) bool {
return strings.Contains(err.Error(), "marked read-only")
}
func isCannotOverwriteNonDirWithDir(err error) bool {
return strings.Contains(err.Error(), "cannot overwrite non-directory")
}
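The helpers above classify errors by substring because the message may originate from either the client or the daemon, on either platform. A minimal sketch of that matching, made case-insensitive the way the Windows branch of TestCpToErrSrcNotExists needs (`errContains` and `errNotDir` are hypothetical names for illustration):

```go
package main

import (
	"errors"
	"strings"
)

// errNotDir mirrors the kind of Windows message isCpNotDir looks for.
var errNotDir = errors.New("The filename, directory name, or volume label syntax is incorrect")

// errContains reports whether err's message contains substr, ignoring case.
// Lowercasing both sides tolerates platform and version differences in
// message casing, the same trick the tests above use.
func errContains(err error, substr string) bool {
	if err == nil {
		return false
	}
	return strings.Contains(strings.ToLower(err.Error()), strings.ToLower(substr))
}
```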
func fileContentEquals(c *check.C, filename, contents string) (err error) {
c.Logf("checking that file %q contains %q\n", filename, contents)


@@ -49,7 +49,7 @@ func (s *DockerSuite) TestEventsRedirectStdout(c *check.C) {
}
func (s *DockerSuite) TestEventsOOMDisableFalse(c *check.C) {
testRequires(c, DaemonIsLinux, oomControl, memoryLimitSupport, swapMemorySupport)
testRequires(c, DaemonIsLinux, oomControl, memoryLimitSupport, swapMemorySupport, NotPpc64le)
errChan := make(chan error)
go func() {
@@ -79,7 +79,7 @@ func (s *DockerSuite) TestEventsOOMDisableFalse(c *check.C) {
}
func (s *DockerSuite) TestEventsOOMDisableTrue(c *check.C) {
testRequires(c, DaemonIsLinux, oomControl, memoryLimitSupport, NotArm, swapMemorySupport)
testRequires(c, DaemonIsLinux, oomControl, memoryLimitSupport, NotArm, swapMemorySupport, NotPpc64le)
errChan := make(chan error)
observer, err := newEventObserver(c)


@@ -133,6 +133,7 @@ func (s *DockerTrustSuite) TestTrustedPullDelete(c *check.C) {
}
func (s *DockerTrustSuite) TestTrustedPullReadsFromReleasesRole(c *check.C) {
c.Skip("Blacklisting for Docker CE")
testRequires(c, NotaryHosting)
repoName := fmt.Sprintf("%v/dockerclireleasesdelegationpulling/trusted", privateRegistryURL)
targetName := fmt.Sprintf("%s:latest", repoName)
@@ -188,6 +189,7 @@ func (s *DockerTrustSuite) TestTrustedPullReadsFromReleasesRole(c *check.C) {
}
func (s *DockerTrustSuite) TestTrustedPullIgnoresOtherDelegationRoles(c *check.C) {
c.Skip("Blacklisting for Docker CE")
testRequires(c, NotaryHosting)
repoName := fmt.Sprintf("%v/dockerclipullotherdelegation/trusted", privateRegistryURL)
targetName := fmt.Sprintf("%s:latest", repoName)


@@ -282,6 +282,7 @@ func (s *DockerSchema1RegistrySuite) TestCrossRepositoryLayerPushNotSupported(c
}
func (s *DockerTrustSuite) TestTrustedPush(c *check.C) {
c.Skip("Blacklisting for Docker CE")
repoName := fmt.Sprintf("%v/dockerclitrusted/pushtest:latest", privateRegistryURL)
// tag the image and upload it to the private registry
cli.DockerCmd(c, "tag", "busybox", repoName)
@@ -366,6 +367,7 @@ func (s *DockerTrustSuite) TestTrustedPushWithExistingSignedTag(c *check.C) {
}
func (s *DockerTrustSuite) TestTrustedPushWithIncorrectPassphraseForNonRoot(c *check.C) {
c.Skip("Blacklisting for Docker CE")
repoName := fmt.Sprintf("%v/dockercliincorretpwd/trusted:latest", privateRegistryURL)
// tag the image and upload it to the private registry
cli.DockerCmd(c, "tag", "busybox", repoName)


@@ -615,7 +615,7 @@ func (s *DockerSuite) TestRunWithInvalidPathforBlkioDeviceWriteIOps(c *check.C)
}
func (s *DockerSuite) TestRunOOMExitCode(c *check.C) {
testRequires(c, memoryLimitSupport, swapMemorySupport)
testRequires(c, memoryLimitSupport, swapMemorySupport, NotPpc64le)
errChan := make(chan error)
go func() {
defer close(errChan)


@@ -40,7 +40,7 @@ const notaryHost = "localhost:4443"
const notaryURL = "https://" + notaryHost
var SuccessTagging = icmd.Expected{
Out: "Tagging",
Err: "Tagging",
}
var SuccessSigningAndPushing = icmd.Expected{


@@ -301,6 +301,46 @@ COPY bar /`
require.NotContains(t, out.String(), "Using cache")
}
// docker/for-linux#135
// #35641
func TestBuildMultiStageLayerLeak(t *testing.T) {
ctx := context.TODO()
defer setupTest(t)()
// all commands need to match until COPY
dockerfile := `FROM busybox
WORKDIR /foo
COPY foo .
FROM busybox
WORKDIR /foo
COPY bar .
RUN [ -f bar ]
RUN [ ! -f foo ]
`
source := fakecontext.New(t, "",
fakecontext.WithFile("foo", "0"),
fakecontext.WithFile("bar", "1"),
fakecontext.WithDockerfile(dockerfile))
defer source.Close()
apiclient := testEnv.APIClient()
resp, err := apiclient.ImageBuild(ctx,
source.AsTarReader(t),
types.ImageBuildOptions{
Remove: true,
ForceRemove: true,
})
out := bytes.NewBuffer(nil)
require.NoError(t, err)
_, err = io.Copy(out, resp.Body)
resp.Body.Close()
require.NoError(t, err)
assert.Contains(t, out.String(), "Successfully built")
}
func writeTarRecord(t *testing.T, w *tar.Writer, fn, contents string) {
err := w.WriteHeader(&tar.Header{
Name: fn,


@@ -0,0 +1,65 @@
package container // import "github.com/docker/docker/integration/container"
import (
"context"
"fmt"
"testing"
"github.com/docker/docker/api/types"
"github.com/docker/docker/client"
"github.com/docker/docker/integration/internal/container"
"github.com/docker/docker/internal/testutil"
"github.com/gotestyourself/gotestyourself/skip"
"github.com/stretchr/testify/require"
)
func TestCopyFromContainerPathDoesNotExist(t *testing.T) {
defer setupTest(t)()
ctx := context.Background()
apiclient := testEnv.APIClient()
cid := container.Create(t, ctx, apiclient)
_, _, err := apiclient.CopyFromContainer(ctx, cid, "/dne")
require.True(t, client.IsErrNotFound(err))
expected := fmt.Sprintf("No such container:path: %s:%s", cid, "/dne")
testutil.ErrorContains(t, err, expected)
}
func TestCopyFromContainerPathIsNotDir(t *testing.T) {
defer setupTest(t)()
skip.If(t, testEnv.OSType == "windows")
ctx := context.Background()
apiclient := testEnv.APIClient()
cid := container.Create(t, ctx, apiclient)
_, _, err := apiclient.CopyFromContainer(ctx, cid, "/etc/passwd/")
require.Contains(t, err.Error(), "not a directory")
}
func TestCopyToContainerPathDoesNotExist(t *testing.T) {
defer setupTest(t)()
skip.If(t, testEnv.OSType == "windows")
ctx := context.Background()
apiclient := testEnv.APIClient()
cid := container.Create(t, ctx, apiclient)
err := apiclient.CopyToContainer(ctx, cid, "/dne", nil, types.CopyToContainerOptions{})
require.True(t, client.IsErrNotFound(err))
expected := fmt.Sprintf("No such container:path: %s:%s", cid, "/dne")
testutil.ErrorContains(t, err, expected)
}
func TestCopyToContainerPathIsNotDir(t *testing.T) {
defer setupTest(t)()
skip.If(t, testEnv.OSType == "windows")
ctx := context.Background()
apiclient := testEnv.APIClient()
cid := container.Create(t, ctx, apiclient)
err := apiclient.CopyToContainer(ctx, cid, "/etc/passwd/", nil, types.CopyToContainerOptions{})
require.Contains(t, err.Error(), "not a directory")
}


@@ -0,0 +1,161 @@
package network // import "github.com/docker/docker/integration/network"
import (
"runtime"
"testing"
"time"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/filters"
"github.com/docker/docker/api/types/swarm"
"github.com/docker/docker/client"
"github.com/gotestyourself/gotestyourself/poll"
"github.com/stretchr/testify/require"
"golang.org/x/net/context"
)
func TestServiceWithPredefinedNetwork(t *testing.T) {
defer setupTest(t)()
d := newSwarm(t)
defer d.Stop(t)
client, err := client.NewClientWithOpts(client.WithHost(d.Sock()))
require.NoError(t, err)
hostName := "host"
var instances uint64 = 1
serviceName := "TestService"
serviceSpec := swarmServiceSpec(serviceName, instances)
serviceSpec.TaskTemplate.Networks = append(serviceSpec.TaskTemplate.Networks, swarm.NetworkAttachmentConfig{Target: hostName})
serviceResp, err := client.ServiceCreate(context.Background(), serviceSpec, types.ServiceCreateOptions{
QueryRegistry: false,
})
require.NoError(t, err)
pollSettings := func(config *poll.Settings) {
if runtime.GOARCH == "arm64" || runtime.GOARCH == "arm" {
config.Timeout = 50 * time.Second
config.Delay = 100 * time.Millisecond
} else {
config.Timeout = 30 * time.Second
config.Delay = 100 * time.Millisecond
}
}
serviceID := serviceResp.ID
poll.WaitOn(t, serviceRunningCount(client, serviceID, instances), pollSettings)
_, _, err = client.ServiceInspectWithRaw(context.Background(), serviceID, types.ServiceInspectOptions{})
require.NoError(t, err)
err = client.ServiceRemove(context.Background(), serviceID)
require.NoError(t, err)
poll.WaitOn(t, serviceIsRemoved(client, serviceID), pollSettings)
poll.WaitOn(t, noTasks(client), pollSettings)
}
const ingressNet = "ingress"
func TestServiceWithIngressNetwork(t *testing.T) {
defer setupTest(t)()
d := newSwarm(t)
defer d.Stop(t)
client, err := client.NewClientWithOpts(client.WithHost(d.Sock()))
require.NoError(t, err)
pollSettings := func(config *poll.Settings) {
if runtime.GOARCH == "arm64" || runtime.GOARCH == "arm" {
config.Timeout = 50 * time.Second
config.Delay = 100 * time.Millisecond
} else {
config.Timeout = 30 * time.Second
config.Delay = 100 * time.Millisecond
}
}
poll.WaitOn(t, swarmIngressReady(client), pollSettings)
var instances uint64 = 1
serviceName := "TestIngressService"
serviceSpec := swarmServiceSpec(serviceName, instances)
serviceSpec.TaskTemplate.Networks = append(serviceSpec.TaskTemplate.Networks, swarm.NetworkAttachmentConfig{Target: ingressNet})
serviceSpec.EndpointSpec = &swarm.EndpointSpec{
Ports: []swarm.PortConfig{
{
Protocol: swarm.PortConfigProtocolTCP,
TargetPort: 80,
PublishMode: swarm.PortConfigPublishModeIngress,
},
},
}
serviceResp, err := client.ServiceCreate(context.Background(), serviceSpec, types.ServiceCreateOptions{
QueryRegistry: false,
})
require.NoError(t, err)
serviceID := serviceResp.ID
poll.WaitOn(t, serviceRunningCount(client, serviceID, instances), pollSettings)
_, _, err = client.ServiceInspectWithRaw(context.Background(), serviceID, types.ServiceInspectOptions{})
require.NoError(t, err)
err = client.ServiceRemove(context.Background(), serviceID)
require.NoError(t, err)
poll.WaitOn(t, serviceIsRemoved(client, serviceID), pollSettings)
poll.WaitOn(t, noTasks(client), pollSettings)
// Ensure that "ingress" is not removed or corrupted
time.Sleep(10 * time.Second)
netInfo, err := client.NetworkInspect(context.Background(), ingressNet, types.NetworkInspectOptions{
Verbose: true,
Scope: "swarm",
})
require.NoError(t, err, "Ingress network was removed after removing service!")
require.NotZero(t, len(netInfo.Containers), "No load balancing endpoints in ingress network")
require.NotZero(t, len(netInfo.Peers), "No peers (including self) in ingress network")
_, ok := netInfo.Containers["ingress-sbox"]
require.True(t, ok, "ingress-sbox not present in ingress network")
}
func serviceRunningCount(client client.ServiceAPIClient, serviceID string, instances uint64) func(log poll.LogT) poll.Result {
return func(log poll.LogT) poll.Result {
filter := filters.NewArgs()
filter.Add("service", serviceID)
services, err := client.ServiceList(context.Background(), types.ServiceListOptions{})
if err != nil {
return poll.Error(err)
}
if len(services) != int(instances) {
return poll.Continue("Service count at %d waiting for %d", len(services), instances)
}
return poll.Success()
}
}
func swarmIngressReady(client client.NetworkAPIClient) func(log poll.LogT) poll.Result {
return func(log poll.LogT) poll.Result {
netInfo, err := client.NetworkInspect(context.Background(), ingressNet, types.NetworkInspectOptions{
Verbose: true,
Scope: "swarm",
})
if err != nil {
return poll.Error(err)
}
np := len(netInfo.Peers)
nc := len(netInfo.Containers)
if np == 0 || nc == 0 {
return poll.Continue("ingress not ready: %d peers and %d containers", np, nc)
}
_, ok := netInfo.Containers["ingress-sbox"]
if !ok {
return poll.Continue("ingress not ready: does not contain the ingress-sbox")
}
return poll.Success()
}
}


@@ -25,15 +25,15 @@ github.com/google/go-cmp v0.1.0
github.com/RackSec/srslog 456df3a81436d29ba874f3590eeeee25d666f8a5
github.com/imdario/mergo 0.2.1
golang.org/x/sync de49d9dcd27d4f764488181bea099dfe6179bcf0
golang.org/x/sync fd80eb99c8f653c847d294a001bdf2a3a6f768f5
github.com/moby/buildkit aaff9d591ef128560018433fe61beb802e149de8
github.com/tonistiigi/fsutil dea3a0da73aee887fc02142d995be764106ac5e2
#get libnetwork packages
# When updating, also update LIBNETWORK_COMMIT in hack/dockerfile/binaries-commits accordingly
github.com/docker/libnetwork ed2130d117c11c542327b4d5216a5db36770bc65
# When updating, also update LIBNETWORK_COMMIT in hack/dockerfile/install/proxy accordingly
github.com/docker/libnetwork 1b91bc94094ecfdae41daa465cc0c8df37dfb3dd
github.com/docker/go-events 9461782956ad83b30282bf90e31fa6a70c255ba9
github.com/armon/go-radix e39d623f12e8e41c7b5529e9a9dd67a1e2261f80
github.com/armon/go-metrics eb0af217e5e9747e41dd5303755356b62d28e3ec
@@ -47,7 +47,7 @@ github.com/docker/libkv 1d8431073ae03cdaedb198a89722f3aab6d418ef
github.com/vishvananda/netns 604eaf189ee867d8c147fafc28def2394e878d25
github.com/vishvananda/netlink b2de5d10e38ecce8607e6b438b6d174f389a004e
# When updating, consider updating TOMLV_COMMIT in hack/dockerfile/binaries-commits accordingly
# When updating, consider updating TOMLV_COMMIT in hack/dockerfile/install/tomlv accordingly
github.com/BurntSushi/toml a368813c5e648fee92e5f6c30e3944ff9d5e8895
github.com/samuel/go-zookeeper d0e0d8e11f318e000a8cc434616d69e329edc374
github.com/deckarep/golang-set ef32fa3046d9f249d399f98ebaf9be944430fd1d
@@ -70,8 +70,8 @@ github.com/pborman/uuid v1.0
google.golang.org/grpc v1.3.0
# When updating, also update RUNC_COMMIT in hack/dockerfile/binaries-commits accordingly
github.com/opencontainers/runc 6c55f98695e902427906eed2c799e566e3d3dfb5
# When updating, also update RUNC_COMMIT in hack/dockerfile/install/runc accordingly
github.com/opencontainers/runc 4fc53a81fb7c994640722ac585fa9ca548971871
github.com/opencontainers/runtime-spec v1.0.1
github.com/opencontainers/image-spec v1.0.1
github.com/seccomp/libseccomp-golang 32f571b70023028bd57d9288c20efbcb237f3ce0
@@ -113,14 +113,14 @@ github.com/containerd/containerd 3fa104f843ec92328912e042b767d26825f202aa
github.com/containerd/fifo fbfb6a11ec671efbe94ad1c12c2e98773f19e1e6
github.com/containerd/continuity d8fb8589b0e8e85b8c8bbaa8840226d0dfeb7371
github.com/containerd/cgroups c0710c92e8b3a44681d1321dcfd1360fc5c6c089
github.com/containerd/console 84eeaae905fa414d03e07bcd6c8d3f19e7cf180e
github.com/containerd/console 2748ece16665b45a47f884001d5831ec79703880
github.com/containerd/go-runc 4f6e87ae043f859a38255247b49c9abc262d002f
github.com/containerd/typeurl f6943554a7e7e88b3c14aad190bf05932da84788
github.com/dmcgowan/go-tar go1.10
github.com/stevvooe/ttrpc d4528379866b0ce7e9d71f3eb96f0582fc374577
# cluster
github.com/docker/swarmkit f74983e7c015a38a81c8642803a78b8322cf7eac
github.com/docker/swarmkit 11d7b06f48bc1d73fc6d8776c3552a4b11c94301
github.com/gogo/protobuf v0.4
github.com/cloudflare/cfssl 7fb22c8cba7ecaf98e4082d22d65800cf45e042a
github.com/google/certificate-transparency d90e65c3a07988180c5b1ece71791c0b6506826e


@@ -13,25 +13,21 @@ const (
cmdTcSet = unix.TCSETS
)
func ioctl(fd, flag, data uintptr) error {
if _, _, err := unix.Syscall(unix.SYS_IOCTL, fd, flag, data); err != 0 {
// unlockpt unlocks the slave pseudoterminal device corresponding to the master pseudoterminal referred to by f.
// unlockpt should be called before opening the slave side of a pty.
func unlockpt(f *os.File) error {
var u int32
if _, _, err := unix.Syscall(unix.SYS_IOCTL, f.Fd(), unix.TIOCSPTLCK, uintptr(unsafe.Pointer(&u))); err != 0 {
return err
}
return nil
}
// unlockpt unlocks the slave pseudoterminal device corresponding to the master pseudoterminal referred to by f.
// unlockpt should be called before opening the slave side of a pty.
func unlockpt(f *os.File) error {
var u int32
return ioctl(f.Fd(), unix.TIOCSPTLCK, uintptr(unsafe.Pointer(&u)))
}
// ptsname retrieves the name of the first available pts for the given master.
func ptsname(f *os.File) (string, error) {
n, err := unix.IoctlGetInt(int(f.Fd()), unix.TIOCGPTN)
if err != nil {
var u uint32
if _, _, err := unix.Syscall(unix.SYS_IOCTL, f.Fd(), unix.TIOCGPTN, uintptr(unsafe.Pointer(&u))); err != 0 {
return "", err
}
return fmt.Sprintf("/dev/pts/%d", n), nil
return fmt.Sprintf("/dev/pts/%d", u), nil
}


@@ -108,6 +108,12 @@ func (s *sequence) getAvailableBit(from uint64) (uint64, uint64, error) {
bitSel >>= 1
bits++
}
// Check if the loop exited because it could not
// find any available bit in the block starting from
// "from". Return invalid pos in that case.
if bitSel == 0 {
return invalidPos, invalidPos, ErrNoBitAvailable
}
return bits / 8, bits % 8, nil
}
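For reference, the scan that the new guard protects can be sketched in isolation (assuming the same 32-bit block layout as the vendored bitseq package, where a set bit marks an allocated ordinal):

```go
package main

// firstZero scans a 32-bit block from the most significant bit and returns
// the (byte, bit) position of the first cleared bit. When every bit is set,
// bitSel shifts down to 0 -- the case the added check above turns into
// ErrNoBitAvailable instead of returning an out-of-range position.
func firstZero(block uint32) (byteIdx, bitIdx uint64, ok bool) {
	bitSel := uint32(1 << 31)
	var bits uint64
	for bitSel > 0 && block&bitSel != 0 {
		bitSel >>= 1
		bits++
	}
	if bitSel == 0 {
		return 0, 0, false // block is full
	}
	return bits / 8, bits % 8, true
}
```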
@@ -313,14 +319,14 @@ func (h *Handle) set(ordinal, start, end uint64, any bool, release bool, serial
curr := uint64(0)
h.Lock()
store = h.store
h.Unlock()
if store != nil {
h.Unlock() // The lock is acquired in the GetObject
if err := store.GetObject(datastore.Key(h.Key()...), h); err != nil && err != datastore.ErrKeyNotFound {
return ret, err
}
h.Lock() // Acquire the lock back
}
h.Lock()
logrus.Debugf("Received set for ordinal %v, start %v, end %v, any %t, release %t, serial:%v curr:%d \n", ordinal, start, end, any, release, serial, h.curr)
if serial {
curr = h.curr
}
@@ -346,7 +352,6 @@ func (h *Handle) set(ordinal, start, end uint64, any bool, release bool, serial
// Create a private copy of h and work on it
nh := h.getCopy()
h.Unlock()
nh.head = pushReservation(bytePos, bitPos, nh.head, release)
if release {
@@ -355,22 +360,25 @@ func (h *Handle) set(ordinal, start, end uint64, any bool, release bool, serial
nh.unselected--
}
// Attempt to write private copy to store
if err := nh.writeToStore(); err != nil {
if _, ok := err.(types.RetryError); !ok {
return ret, fmt.Errorf("internal failure while setting the bit: %v", err)
if h.store != nil {
h.Unlock()
// Attempt to write private copy to store
if err := nh.writeToStore(); err != nil {
if _, ok := err.(types.RetryError); !ok {
return ret, fmt.Errorf("internal failure while setting the bit: %v", err)
}
// Retry
continue
}
// Retry
continue
h.Lock()
}
// Previous atomic push was successful. Save private copy to local copy
h.Lock()
defer h.Unlock()
h.unselected = nh.unselected
h.head = nh.head
h.dbExists = nh.dbExists
h.dbIndex = nh.dbIndex
h.Unlock()
return ret, nil
}
}
@@ -498,24 +506,40 @@ func (h *Handle) UnmarshalJSON(data []byte) error {
func getFirstAvailable(head *sequence, start uint64) (uint64, uint64, error) {
// Find sequence which contains the start bit
byteStart, bitStart := ordinalToPos(start)
current, _, _, inBlockBytePos := findSequence(head, byteStart)
current, _, precBlocks, inBlockBytePos := findSequence(head, byteStart)
// Derive this sequence's offsets
byteOffset := byteStart - inBlockBytePos
bitOffset := inBlockBytePos*8 + bitStart
var firstOffset uint64
if current == head {
firstOffset = byteOffset
}
for current != nil {
if current.block != blockMAX {
// If the current block is not full, check whether any bit is available
// from the current offset within this block. If not, before moving on to
// the next block node, check for an available bit in the next instance of
// the same block: due to run-length encoding, repeated identical block
// values are compressed into a single node.
retry:
bytePos, bitPos, err := current.getAvailableBit(bitOffset)
if err != nil && precBlocks == current.count-1 {
// This is the last instance in the same block node,
// so move to the next block.
goto next
}
if err != nil {
// There are some more instances of the same block, so add the offset
// and be optimistic that you will find the available bit in the next
// instance of the same block.
bitOffset = 0
byteOffset += blockBytes
precBlocks++
goto retry
}
return byteOffset + bytePos, bitPos, err
}
// Moving to next block: Reset bit offset.
next:
bitOffset = 0
byteOffset += (current.count * blockBytes) - firstOffset
firstOffset = 0
byteOffset += (current.count * blockBytes) - (precBlocks * blockBytes)
precBlocks = 0
current = current.next
}
return invalidPos, invalidPos, ErrNoBitAvailable
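The traversal above walks a run-length-encoded list of blocks. A minimal sketch of that representation (using the same field names as the vendored bitseq `sequence`, but simplified) shows why `precBlocks` must track the instance within a compressed node:

```go
package main

// sequence is a run-length-encoded list of 32-bit blocks: each node stores
// one block value and how many consecutive times it repeats. A free bit may
// therefore live in a later repetition of the same node, which is what the
// retry path in getFirstAvailable above handles.
type sequence struct {
	block uint32
	count uint64
	next  *sequence
}

// totalBits returns how many ordinals the list represents.
func totalBits(s *sequence) uint64 {
	var n uint64
	for ; s != nil; s = s.next {
		n += s.count * 32
	}
	return n
}
```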
@@ -526,19 +550,20 @@ func getFirstAvailable(head *sequence, start uint64) (uint64, uint64, error) {
// This can be further optimized to check from start till curr in case of a rollover
func getAvailableFromCurrent(head *sequence, start, curr, end uint64) (uint64, uint64, error) {
var bytePos, bitPos uint64
var err error
if curr != 0 && curr > start {
bytePos, bitPos, _ = getFirstAvailable(head, curr)
bytePos, bitPos, err = getFirstAvailable(head, curr)
ret := posToOrdinal(bytePos, bitPos)
if end < ret {
if end < ret || err != nil {
goto begin
}
return bytePos, bitPos, nil
}
begin:
bytePos, bitPos, _ = getFirstAvailable(head, start)
bytePos, bitPos, err = getFirstAvailable(head, start)
ret := posToOrdinal(bytePos, bitPos)
if end < ret {
if end < ret || err != nil {
return invalidPos, invalidPos, ErrNoBitAvailable
}
return bytePos, bitPos, nil


@@ -10,6 +10,7 @@ import (
"github.com/docker/libkv/store"
"github.com/docker/libnetwork/cluster"
"github.com/docker/libnetwork/datastore"
"github.com/docker/libnetwork/ipamutils"
"github.com/docker/libnetwork/netlabel"
"github.com/docker/libnetwork/osl"
"github.com/sirupsen/logrus"
@@ -40,6 +41,7 @@ type DaemonCfg struct {
DriverCfg map[string]interface{}
ClusterProvider cluster.Provider
NetworkControlPlaneMTU int
DefaultAddressPool []*ipamutils.NetworkToSplit
}
// ClusterCfg represents cluster configuration
@@ -110,6 +112,13 @@ func OptionDefaultDriver(dd string) Option {
}
}
// OptionDefaultAddressPoolConfig function returns an option setter for default address pool
func OptionDefaultAddressPoolConfig(addressPool []*ipamutils.NetworkToSplit) Option {
return func(c *Config) {
c.Daemon.DefaultAddressPool = addressPool
}
}
// OptionDriverConfig returns an option setter for driver configuration.
func OptionDriverConfig(networkType string, config map[string]interface{}) Option {
return func(c *Config) {


@@ -222,7 +222,7 @@ func New(cfgOptions ...config.Option) (NetworkController, error) {
}
}
if err = initIPAMDrivers(drvRegistry, nil, c.getStore(datastore.GlobalScope)); err != nil {
if err = initIPAMDrivers(drvRegistry, nil, c.getStore(datastore.GlobalScope), c.cfg.Daemon.DefaultAddressPool); err != nil {
return nil, err
}


@@ -782,7 +782,9 @@ func (d *driver) deleteNetwork(nid string) error {
logrus.Warn(err)
}
if link, err := d.nlh.LinkByName(ep.srcName); err == nil {
d.nlh.LinkDel(link)
if err := d.nlh.LinkDel(link); err != nil {
logrus.WithError(err).Errorf("Failed to delete interface (%s)'s link on endpoint (%s) delete", ep.srcName, ep.id)
}
}
if err := d.storeDelete(ep); err != nil {
@@ -969,7 +971,9 @@ func (d *driver) CreateEndpoint(nid, eid string, ifInfo driverapi.InterfaceInfo,
}
defer func() {
if err != nil {
d.nlh.LinkDel(host)
if err := d.nlh.LinkDel(host); err != nil {
logrus.WithError(err).Warnf("Failed to delete host side interface (%s)'s link", hostIfName)
}
}
}()
@@ -980,7 +984,9 @@ func (d *driver) CreateEndpoint(nid, eid string, ifInfo driverapi.InterfaceInfo,
}
defer func() {
if err != nil {
d.nlh.LinkDel(sbox)
if err := d.nlh.LinkDel(sbox); err != nil {
logrus.WithError(err).Warnf("Failed to delete sandbox side interface (%s)'s link", containerIfName)
}
}
}()
@@ -1117,7 +1123,9 @@ func (d *driver) DeleteEndpoint(nid, eid string) error {
// Try removal of link. Discard error: it is a best effort.
// Also make sure defer does not see this error either.
if link, err := d.nlh.LinkByName(ep.srcName); err == nil {
d.nlh.LinkDel(link)
if err := d.nlh.LinkDel(link); err != nil {
logrus.WithError(err).Errorf("Failed to delete interface (%s)'s link on endpoint (%s) delete", ep.srcName, ep.id)
}
}
if err := d.storeDelete(ep); err != nil {


@@ -76,7 +76,9 @@ func (d *driver) DeleteEndpoint(nid, eid string) error {
return fmt.Errorf("endpoint id %q not found", eid)
}
if link, err := ns.NlHandle().LinkByName(ep.srcName); err == nil {
ns.NlHandle().LinkDel(link)
if err := ns.NlHandle().LinkDel(link); err != nil {
logrus.WithError(err).Warnf("Failed to delete interface (%s)'s link on endpoint (%s) delete", ep.srcName, ep.id)
}
}
if err := d.storeDelete(ep); err != nil {


@@ -150,7 +150,9 @@ func (d *driver) DeleteNetwork(nid string) error {
}
for _, ep := range n.endpoints {
if link, err := ns.NlHandle().LinkByName(ep.srcName); err == nil {
ns.NlHandle().LinkDel(link)
if err := ns.NlHandle().LinkDel(link); err != nil {
logrus.WithError(err).Warnf("Failed to delete interface (%s)'s link on endpoint (%s) delete", ep.srcName, ep.id)
}
}
if err := d.storeDelete(ep); err != nil {


@@ -81,7 +81,9 @@ func (d *driver) DeleteEndpoint(nid, eid string) error {
return fmt.Errorf("endpoint id %q not found", eid)
}
if link, err := ns.NlHandle().LinkByName(ep.srcName); err == nil {
ns.NlHandle().LinkDel(link)
if err := ns.NlHandle().LinkDel(link); err != nil {
logrus.WithError(err).Warnf("Failed to delete interface (%s)'s link on endpoint (%s) delete", ep.srcName, ep.id)
}
}
if err := d.storeDelete(ep); err != nil {


@@ -154,7 +154,9 @@ func (d *driver) DeleteNetwork(nid string) error {
}
for _, ep := range n.endpoints {
if link, err := ns.NlHandle().LinkByName(ep.srcName); err == nil {
ns.NlHandle().LinkDel(link)
if err := ns.NlHandle().LinkDel(link); err != nil {
logrus.WithError(err).Warnf("Failed to delete interface (%s)'s link on endpoint (%s) delete", ep.srcName, ep.id)
}
}
if err := d.storeDelete(ep); err != nil {


@@ -242,8 +242,10 @@ func (d *driver) DeleteNetwork(nid string) error {
for _, ep := range n.endpoints {
if ep.ifName != "" {
if link, err := ns.NlHandle().LinkByName(ep.ifName); err != nil {
ns.NlHandle().LinkDel(link)
if link, err := ns.NlHandle().LinkByName(ep.ifName); err == nil {
if err := ns.NlHandle().LinkDel(link); err != nil {
logrus.WithError(err).Warnf("Failed to delete interface (%s)'s link on endpoint (%s) delete", ep.ifName, ep.id)
}
}
}


@@ -365,6 +365,22 @@ func (d *driver) CreateNetwork(id string, option map[string]interface{}, nInfo d
config.HnsID = hnsresponse.Id
genData[HNSID] = config.HnsID
} else {
// Delete any stale HNS endpoints for this network.
if endpoints, err := hcsshim.HNSListEndpointRequest(); err == nil {
for _, ep := range endpoints {
if ep.VirtualNetwork == config.HnsID {
logrus.Infof("Removing stale HNS endpoint %s", ep.Id)
_, err = hcsshim.HNSEndpointRequest("DELETE", ep.Id, "")
if err != nil {
logrus.Warnf("Error removing HNS endpoint %s", ep.Id)
}
}
}
} else {
logrus.Warnf("Error listing HNS endpoints for network %s", config.HnsID)
}
}
n, err := d.getNetwork(id)


@@ -6,9 +6,11 @@ import (
builtinIpam "github.com/docker/libnetwork/ipams/builtin"
nullIpam "github.com/docker/libnetwork/ipams/null"
remoteIpam "github.com/docker/libnetwork/ipams/remote"
"github.com/docker/libnetwork/ipamutils"
)
func initIPAMDrivers(r *drvregistry.DrvRegistry, lDs, gDs interface{}) error {
func initIPAMDrivers(r *drvregistry.DrvRegistry, lDs, gDs interface{}, addressPool []*ipamutils.NetworkToSplit) error {
builtinIpam.SetDefaultIPAddressPool(addressPool)
for _, fn := range [](func(ipamapi.Callback, interface{}, interface{}) error){
builtinIpam.Init,
remoteIpam.Init,


@@ -402,15 +402,15 @@ func (a *Allocator) getPredefinedPool(as string, ipV6 bool) (*net.IPNet, error)
continue
}
aSpace.Lock()
_, ok := aSpace.subnets[SubnetKey{AddressSpace: as, Subnet: nw.String()}]
aSpace.Unlock()
if ok {
if _, ok := aSpace.subnets[SubnetKey{AddressSpace: as, Subnet: nw.String()}]; ok {
aSpace.Unlock()
continue
}
if !aSpace.contains(as, nw) {
aSpace.Unlock()
return nw, nil
}
aSpace.Unlock()
}
return nil, types.NotFoundErrorf("could not find an available, non-overlapping IPv%d address pool among the defaults to assign to the network", v)


@@ -11,6 +11,11 @@ import (
"github.com/docker/libnetwork/ipamutils"
)
var (
// defaultAddressPool Stores user configured subnet list
defaultAddressPool []*ipamutils.NetworkToSplit
)
// Init registers the built-in ipam service with libnetwork
func Init(ic ipamapi.Callback, l, g interface{}) error {
var (
@@ -30,7 +35,7 @@ func Init(ic ipamapi.Callback, l, g interface{}) error {
}
}
ipamutils.InitNetworks()
ipamutils.InitNetworks(GetDefaultIPAddressPool())
a, err := ipam.NewAllocator(localDs, globalDs)
if err != nil {
@@ -41,3 +46,13 @@ func Init(ic ipamapi.Callback, l, g interface{}) error {
return ic.RegisterIpamDriverWithCapabilities(ipamapi.DefaultIPAM, a, cps)
}
// SetDefaultIPAddressPool stores default address pool.
func SetDefaultIPAddressPool(addressPool []*ipamutils.NetworkToSplit) {
defaultAddressPool = addressPool
}
// GetDefaultIPAddressPool returns default address pool.
func GetDefaultIPAddressPool() []*ipamutils.NetworkToSplit {
return defaultAddressPool
}


@@ -13,6 +13,11 @@ import (
windowsipam "github.com/docker/libnetwork/ipams/windowsipam"
)
var (
// defaultAddressPool Stores user configured subnet list
defaultAddressPool []*ipamutils.NetworkToSplit
)
// InitDockerDefault registers the built-in ipam service with libnetwork
func InitDockerDefault(ic ipamapi.Callback, l, g interface{}) error {
var (
@@ -32,7 +37,7 @@ func InitDockerDefault(ic ipamapi.Callback, l, g interface{}) error {
}
}
ipamutils.InitNetworks()
ipamutils.InitNetworks(nil)
a, err := ipam.NewAllocator(localDs, globalDs)
if err != nil {
@@ -55,3 +60,13 @@ func Init(ic ipamapi.Callback, l, g interface{}) error {
return initFunc(ic, l, g)
}
// SetDefaultIPAddressPool stores default address pool .
func SetDefaultIPAddressPool(addressPool []*ipamutils.NetworkToSplit) {
defaultAddressPool = addressPool
}
// GetDefaultIPAddressPool returns default address pool .
func GetDefaultIPAddressPool() []*ipamutils.NetworkToSplit {
return defaultAddressPool
}


@@ -2,8 +2,11 @@
package ipamutils
import (
"fmt"
"net"
"sync"
"github.com/sirupsen/logrus"
)
var (
@@ -13,38 +16,81 @@ var (
// PredefinedGranularNetworks contains a list of 64K IPv4 private networks with host size 8
// (10.x.x.x/24) which do not overlap with the networks in `PredefinedBroadNetworks`
PredefinedGranularNetworks []*net.IPNet
initNetworksOnce sync.Once
initNetworksOnce sync.Once
defaultBroadNetwork = []*NetworkToSplit{{"172.17.0.0/16", 16}, {"172.18.0.0/16", 16}, {"172.19.0.0/16", 16},
{"172.20.0.0/14", 16}, {"172.24.0.0/14", 16}, {"172.28.0.0/14", 16},
{"192.168.0.0/16", 20}}
defaultGranularNetwork = []*NetworkToSplit{{"10.0.0.0/8", 24}}
)
// InitNetworks initializes the pre-defined networks used by the built-in IP allocator
func InitNetworks() {
// NetworkToSplit represent a network that has to be split in chunks with mask length Size.
// Each subnet in the set is derived from the Base pool. Base is to be passed
// in CIDR format.
// Example: a Base "10.10.0.0/16 with Size 24 will define the set of 256
// 10.10.[0-255].0/24 address pools
type NetworkToSplit struct {
Base string `json:"base"`
Size int `json:"size"`
}
// InitNetworks initializes the broad network pool and the granular network pool
func InitNetworks(defaultAddressPool []*NetworkToSplit) {
initNetworksOnce.Do(func() {
PredefinedBroadNetworks = initBroadPredefinedNetworks()
PredefinedGranularNetworks = initGranularPredefinedNetworks()
// error ingnored should never fail
PredefinedGranularNetworks, _ = splitNetworks(defaultGranularNetwork)
if defaultAddressPool == nil {
defaultAddressPool = defaultBroadNetwork
}
var err error
if PredefinedBroadNetworks, err = splitNetworks(defaultAddressPool); err != nil {
logrus.WithError(err).Error("InitAddressPools failed to initialize the default address pool")
}
})
}
func initBroadPredefinedNetworks() []*net.IPNet {
pl := make([]*net.IPNet, 0, 31)
mask := []byte{255, 255, 0, 0}
for i := 17; i < 32; i++ {
pl = append(pl, &net.IPNet{IP: []byte{172, byte(i), 0, 0}, Mask: mask})
// splitNetworks takes a slice of networks, split them accordingly and returns them
func splitNetworks(list []*NetworkToSplit) ([]*net.IPNet, error) {
localPools := make([]*net.IPNet, 0, len(list))
for _, p := range list {
_, b, err := net.ParseCIDR(p.Base)
if err != nil {
return nil, fmt.Errorf("invalid base pool %q: %v", p.Base, err)
}
ones, _ := b.Mask.Size()
if p.Size <= 0 || p.Size < ones {
return nil, fmt.Errorf("invalid pools size: %d", p.Size)
}
localPools = append(localPools, splitNetwork(p.Size, b)...)
}
mask20 := []byte{255, 255, 240, 0}
for i := 0; i < 16; i++ {
pl = append(pl, &net.IPNet{IP: []byte{192, 168, byte(i << 4), 0}, Mask: mask20})
}
return pl
return localPools, nil
}
func initGranularPredefinedNetworks() []*net.IPNet {
pl := make([]*net.IPNet, 0, 256*256)
mask := []byte{255, 255, 255, 0}
for i := 0; i < 256; i++ {
for j := 0; j < 256; j++ {
pl = append(pl, &net.IPNet{IP: []byte{10, byte(i), byte(j), 0}, Mask: mask})
}
func splitNetwork(size int, base *net.IPNet) []*net.IPNet {
one, bits := base.Mask.Size()
mask := net.CIDRMask(size, bits)
n := 1 << uint(size-one)
s := uint(bits - size)
list := make([]*net.IPNet, 0, n)
for i := 0; i < n; i++ {
ip := copyIP(base.IP)
addIntToIP(ip, uint(i<<s))
list = append(list, &net.IPNet{IP: ip, Mask: mask})
}
return list
}
func copyIP(from net.IP) net.IP {
ip := make([]byte, len(from))
copy(ip, from)
return ip
}
func addIntToIP(array net.IP, ordinal uint) {
for i := len(array) - 1; i >= 0; i-- {
array[i] |= (byte)(ordinal & 0xff)
ordinal >>= 8
}
return pl
}

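The new `splitNetworks`/`splitNetwork` pair replaces the hard-coded loops: a `NetworkToSplit{Base: "10.10.0.0/16", Size: 24}` expands into the 256 pools `10.10.[0-255].0/24`. A self-contained sketch of that expansion, folding `splitNetwork`, `copyIP`, and `addIntToIP` into one function (assuming IPv4 inputs from `net.ParseCIDR`):

```go
package main

import (
	"fmt"
	"net"
)

// split divides base into subnets of the given prefix size, mirroring
// splitNetwork above: the subnet ordinal is shifted into the host bits
// and OR-ed into a copy of the base address, byte by byte from the right.
func split(baseCIDR string, size int) ([]string, error) {
	_, base, err := net.ParseCIDR(baseCIDR)
	if err != nil {
		return nil, fmt.Errorf("invalid base pool %q: %v", baseCIDR, err)
	}
	ones, bits := base.Mask.Size()
	if size <= 0 || size < ones || size > bits {
		return nil, fmt.Errorf("invalid pool size: %d", size)
	}
	n := 1 << uint(size-ones)  // number of subnets
	shift := uint(bits - size) // host bits per subnet
	out := make([]string, 0, n)
	for i := 0; i < n; i++ {
		ip := make(net.IP, len(base.IP))
		copy(ip, base.IP)
		ord := uint(i) << shift
		for j := len(ip) - 1; j >= 0; j-- {
			ip[j] |= byte(ord & 0xff)
			ord >>= 8
		}
		out = append(out, (&net.IPNet{IP: ip, Mask: net.CIDRMask(size, bits)}).String())
	}
	return out, nil
}

func main() {
	subs, err := split("10.10.0.0/16", 24)
	fmt.Println(len(subs), subs[0], subs[len(subs)-1], err)
}
```

With the defaults in the hunk, `defaultGranularNetwork` (`10.0.0.0/8` split at `/24`) reproduces the old 64K granular pools, and `defaultBroadNetwork` reproduces the old broad pools.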

@@ -959,7 +959,7 @@ func (n *network) delete(force bool) error {
if len(n.loadBalancerIP) != 0 {
endpoints := n.Endpoints()
if force || len(endpoints) == 1 {
if force || (len(endpoints) == 1 && !n.ingress) {
n.deleteLoadBalancerSandbox()
}
//Reload the network from the store to update the epcnt.


@@ -9,6 +9,8 @@ import (
"github.com/sirupsen/logrus"
)
const maxSetStringLen = 350
func (c *controller) addEndpointNameResolution(svcName, svcID, nID, eID, containerName string, vip net.IP, serviceAliases, taskAliases []string, ip net.IP, addService bool, method string) error {
n, err := c.NetworkByID(nID)
if err != nil {
@@ -285,7 +287,10 @@ func (c *controller) addServiceBinding(svcName, svcID, nID, eID, containerName s
ok, entries := s.assignIPToEndpoint(ip.String(), eID)
if !ok || entries > 1 {
setStr, b := s.printIPToEndpoint(ip.String())
logrus.Warnf("addServiceBinding %s possible trainsient state ok:%t entries:%d set:%t %s", eID, ok, entries, b, setStr)
if len(setStr) > maxSetStringLen {
setStr = setStr[:maxSetStringLen]
}
logrus.Warnf("addServiceBinding %s possible transient state ok:%t entries:%d set:%t %s", eID, ok, entries, b, setStr)
}
// Add loadbalancer service and backend in all sandboxes in
@@ -353,7 +358,10 @@ func (c *controller) rmServiceBinding(svcName, svcID, nID, eID, containerName st
ok, entries := s.removeIPToEndpoint(ip.String(), eID)
if !ok || entries > 0 {
setStr, b := s.printIPToEndpoint(ip.String())
logrus.Warnf("rmServiceBinding %s possible trainsient state ok:%t entries:%d set:%t %s", eID, ok, entries, b, setStr)
if len(setStr) > maxSetStringLen {
setStr = setStr[:maxSetStringLen]
}
logrus.Warnf("rmServiceBinding %s possible transient state ok:%t entries:%d set:%t %s", eID, ok, entries, b, setStr)
}
// Remove loadbalancer service(if needed) and backend in all

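Besides fixing the "trainsient" typo, this hunk caps the diagnostic set string at `maxSetStringLen` (350 bytes) before it reaches the log line, since `printIPToEndpoint` can return arbitrarily large output. The truncation, extracted as a helper (the `truncate` function is illustrative; the patch inlines the check at both call sites):

```go
package main

import "fmt"

const maxSetStringLen = 350

// truncate caps the diagnostic string before logging, as done in the
// addServiceBinding and rmServiceBinding warnings.
func truncate(s string) string {
	if len(s) > maxSetStringLen {
		return s[:maxSetStringLen]
	}
	return s
}

func main() {
	long := string(make([]byte, 500))
	fmt.Println(len(truncate(long)), truncate("short"))
}
```

Note the slice is by byte, not rune, which matches the patch; for pure ASCII diagnostics that distinction does not matter.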

@@ -1,5 +1,5 @@
github.com/Azure/go-ansiterm d6e3b3328b783f23731bc4d058875b0371ff8109
github.com/BurntSushi/toml f706d00e3de6abe700c994cdd545a1a4915af060
github.com/BurntSushi/toml a368813c5e648fee92e5f6c30e3944ff9d5e8895
github.com/Microsoft/go-winio v0.4.5
github.com/Microsoft/hcsshim v0.6.5
github.com/armon/go-metrics eb0af217e5e9747e41dd5303755356b62d28e3ec
@@ -50,5 +50,6 @@ github.com/vishvananda/netns 604eaf189ee867d8c147fafc28def2394e878d25
golang.org/x/crypto 558b6879de74bc843225cde5686419267ff707ca
golang.org/x/net 7dcfb8076726a3fdd9353b6b8a1f1b6be6811bd6
golang.org/x/sys 07c182904dbd53199946ba614a412c61d3c548f5
golang.org/x/sync fd80eb99c8f653c847d294a001bdf2a3a6f768f5
github.com/pkg/errors 839d9e913e063e28dfd0e6c7b7512793e0a48be9
github.com/ishidawataru/sctp 07191f837fedd2f13d1ec7b5f885f0f3ec54b1cb


@@ -644,7 +644,7 @@ func (s *Server) CreateService(ctx context.Context, request *api.CreateServiceRe
return nil, err
}
if err := s.validateNetworks(request.Spec.Networks); err != nil {
if err := s.validateNetworks(request.Spec.Task.Networks); err != nil {
return nil, err
}
@@ -727,6 +727,10 @@ func (s *Server) UpdateService(ctx context.Context, request *api.UpdateServiceRe
return nil, err
}
if err := s.validateNetworks(request.Spec.Task.Networks); err != nil {
return nil, err
}
var service *api.Service
s.store.View(func(tx store.ReadTx) {
service = store.GetService(tx, request.ServiceID)


@@ -125,12 +125,17 @@ type clusterUpdate struct {
// Dispatcher is responsible for dispatching tasks and tracking agent health.
type Dispatcher struct {
// mu is a lock to provide mutually exclusive access to dispatcher fields
// e.g. lastSeenManagers, networkBootstrapKeys, lastSeenRootCert etc.
// Mutex to synchronize access to dispatcher shared state e.g. nodes,
// lastSeenManagers, networkBootstrapKeys etc.
// TODO(anshul): This can potentially be removed and rpcRW used in its place.
mu sync.Mutex
// shutdownWait is used by stop() to wait for existing operations to finish.
shutdownWait sync.WaitGroup
// WaitGroup to handle the case when Stop() gets called before Run()
// has finished initializing the dispatcher.
wg sync.WaitGroup
// This RWMutex synchronizes RPC handlers and the dispatcher stop().
// The RPC handlers use the read lock while stop() uses the write lock
// and acts as a barrier to shutdown.
rpcRW sync.RWMutex
nodes *nodeStore
store *store.MemoryStore
lastSeenManagers []*api.WeightedPeer
@@ -253,11 +258,8 @@ func (d *Dispatcher) Run(ctx context.Context) error {
defer cancel()
d.ctx, d.cancel = context.WithCancel(ctx)
ctx = d.ctx
// If Stop() is called, it should wait
// for Run() to complete.
d.shutdownWait.Add(1)
defer d.shutdownWait.Done()
d.wg.Add(1)
defer d.wg.Done()
d.mu.Unlock()
publishManagers := func(peers []*api.Peer) {
@@ -320,15 +322,18 @@ func (d *Dispatcher) Stop() error {
return errors.New("dispatcher is already stopped")
}
// Cancel dispatcher context.
// This should also close the the streams in Tasks(), Assignments().
log := log.G(d.ctx).WithField("method", "(*Dispatcher).Stop")
log.Info("dispatcher stopping")
d.cancel()
d.mu.Unlock()
// Wait for the RPCs that are in-progress to finish.
d.shutdownWait.Wait()
// The active nodes list can be cleaned out only when all
// existing RPCs have finished.
// RPCs that start after rpcRW.Unlock() should find the context
// cancelled and should fail organically.
d.rpcRW.Lock()
d.nodes.Clean()
d.rpcRW.Unlock()
d.processUpdatesLock.Lock()
// In case there are any waiters. There is no chance of any starting
@@ -338,6 +343,14 @@ func (d *Dispatcher) Stop() error {
d.processUpdatesLock.Unlock()
d.clusterUpdateQueue.Close()
// TODO(anshul): This use of Wait() could be unsafe.
// According to go's documentation on WaitGroup,
// Add() with a positive delta that occur when the counter is zero
// must happen before a Wait().
// As is, dispatcher Stop() can race with Run().
d.wg.Wait()
return nil
}
@@ -485,13 +498,13 @@ func nodeIPFromContext(ctx context.Context) (string, error) {
// register is used for registration of node with particular dispatcher.
func (d *Dispatcher) register(ctx context.Context, nodeID string, description *api.NodeDescription) (string, error) {
logLocal := log.G(ctx).WithField("method", "(*Dispatcher).register")
// prevent register until we're ready to accept it
dctx, err := d.isRunningLocked()
if err != nil {
return "", err
}
logLocal := log.G(ctx).WithField("method", "(*Dispatcher).register")
if err := d.nodes.CheckRateLimit(nodeID); err != nil {
return "", err
}
@@ -539,15 +552,8 @@ func (d *Dispatcher) register(ctx context.Context, nodeID string, description *a
// UpdateTaskStatus updates status of task. Node should send such updates
// on every status change of its tasks.
func (d *Dispatcher) UpdateTaskStatus(ctx context.Context, r *api.UpdateTaskStatusRequest) (*api.UpdateTaskStatusResponse, error) {
// shutdownWait.Add() followed by isRunning() to ensures that
// if this rpc sees the dispatcher running,
// it will already have called Add() on the shutdownWait wait,
// which ensures that Stop() will wait for this rpc to complete.
// Note that Stop() first does Dispatcher.ctx.cancel() followed by
// shutdownWait.Wait() to make sure new rpc's don't start before waiting
// for existing ones to finish.
d.shutdownWait.Add(1)
defer d.shutdownWait.Done()
d.rpcRW.RLock()
defer d.rpcRW.RUnlock()
dctx, err := d.isRunningLocked()
if err != nil {
@@ -740,15 +746,8 @@ func (d *Dispatcher) processUpdates(ctx context.Context) {
// of tasks which should be run on node, if task is not present in that list,
// it should be terminated.
func (d *Dispatcher) Tasks(r *api.TasksRequest, stream api.Dispatcher_TasksServer) error {
// shutdownWait.Add() followed by isRunning() to ensures that
// if this rpc sees the dispatcher running,
// it will already have called Add() on the shutdownWait wait,
// which ensures that Stop() will wait for this rpc to complete.
// Note that Stop() first does Dispatcher.ctx.cancel() followed by
// shutdownWait.Wait() to make sure new rpc's don't start before waiting
// for existing ones to finish.
d.shutdownWait.Add(1)
defer d.shutdownWait.Done()
d.rpcRW.RLock()
defer d.rpcRW.RUnlock()
dctx, err := d.isRunningLocked()
if err != nil {
@@ -873,15 +872,8 @@ func (d *Dispatcher) Tasks(r *api.TasksRequest, stream api.Dispatcher_TasksServe
// Assignments is a stream of assignments for a node. Each message contains
// either full list of tasks and secrets for the node, or an incremental update.
func (d *Dispatcher) Assignments(r *api.AssignmentsRequest, stream api.Dispatcher_AssignmentsServer) error {
// shutdownWait.Add() followed by isRunning() to ensures that
// if this rpc sees the dispatcher running,
// it will already have called Add() on the shutdownWait wait,
// which ensures that Stop() will wait for this rpc to complete.
// Note that Stop() first does Dispatcher.ctx.cancel() followed by
// shutdownWait.Wait() to make sure new rpc's don't start before waiting
// for existing ones to finish.
d.shutdownWait.Add(1)
defer d.shutdownWait.Done()
d.rpcRW.RLock()
defer d.rpcRW.RUnlock()
dctx, err := d.isRunningLocked()
if err != nil {
@@ -1140,20 +1132,13 @@ func (d *Dispatcher) markNodeNotReady(id string, state api.NodeStatus_State, mes
// Node should send new heartbeat earlier than now + TTL, otherwise it will
// be deregistered from dispatcher and its status will be updated to NodeStatus_DOWN
func (d *Dispatcher) Heartbeat(ctx context.Context, r *api.HeartbeatRequest) (*api.HeartbeatResponse, error) {
// shutdownWait.Add() followed by isRunning() to ensures that
// if this rpc sees the dispatcher running,
// it will already have called Add() on the shutdownWait wait,
// which ensures that Stop() will wait for this rpc to complete.
// Note that Stop() first does Dispatcher.ctx.cancel() followed by
// shutdownWait.Wait() to make sure new rpc's don't start before waiting
// for existing ones to finish.
d.shutdownWait.Add(1)
defer d.shutdownWait.Done()
d.rpcRW.RLock()
defer d.rpcRW.RUnlock()
// isRunningLocked() is not needed since its OK if
// the dispatcher context is cancelled while this call is in progress
// since Stop() which cancels the dispatcher context will wait for
// Heartbeat() to complete.
// Its OK to call isRunning() here instead of isRunningLocked()
// because of the rpcRW readlock above.
// TODO(anshul) other uses of isRunningLocked() can probably
// also be removed.
if !d.isRunning() {
return nil, status.Errorf(codes.Aborted, "dispatcher is stopped")
}
@@ -1192,15 +1177,8 @@ func (d *Dispatcher) getRootCACert() []byte {
// a special boolean field Disconnect which if true indicates that node should
// reconnect to another Manager immediately.
func (d *Dispatcher) Session(r *api.SessionRequest, stream api.Dispatcher_SessionServer) error {
// shutdownWait.Add() followed by isRunning() to ensures that
// if this rpc sees the dispatcher running,
// it will already have called Add() on the shutdownWait wait,
// which ensures that Stop() will wait for this rpc to complete.
// Note that Stop() first does Dispatcher.ctx.cancel() followed by
// shutdownWait.Wait() to make sure new rpc's don't start before waiting
// for existing ones to finish.
d.shutdownWait.Add(1)
defer d.shutdownWait.Done()
d.rpcRW.RLock()
defer d.rpcRW.RUnlock()
dctx, err := d.isRunningLocked()
if err != nil {
@@ -1208,6 +1186,7 @@ func (d *Dispatcher) Session(r *api.SessionRequest, stream api.Dispatcher_Sessio
}
ctx := stream.Context()
nodeInfo, err := ca.RemoteNode(ctx)
if err != nil {
return err

View File
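The dispatcher change swaps the per-RPC `shutdownWait.Add(1)/Done()` bookkeeping for an `rpcRW sync.RWMutex`: every RPC handler holds the read lock for its whole duration, and `Stop()` takes the write lock as a barrier before cleaning the node list. A reduced sketch of that pattern (field and method names follow the hunk, but the bodies here are illustrative, not swarmkit's):

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

type dispatcher struct {
	mu      sync.Mutex   // guards running
	rpcRW   sync.RWMutex // RPC handlers read-lock; Stop() write-locks
	running bool
	nodes   map[string]bool
}

func (d *dispatcher) isRunning() bool {
	d.mu.Lock()
	defer d.mu.Unlock()
	return d.running
}

// Heartbeat stands in for the RPC handlers (Heartbeat, Session, Tasks,
// Assignments, UpdateTaskStatus).
func (d *dispatcher) Heartbeat(node string) error {
	d.rpcRW.RLock()
	defer d.rpcRW.RUnlock()
	// Plain isRunning() suffices here: holding the read lock means
	// Stop() has not yet passed its write-lock barrier.
	if !d.isRunning() {
		return errors.New("dispatcher is stopped")
	}
	d.nodes[node] = true
	return nil
}

func (d *dispatcher) Stop() {
	d.mu.Lock()
	d.running = false
	d.mu.Unlock()
	// Barrier: blocks until in-flight RPCs release the read lock;
	// RPCs that start afterwards observe running == false and fail.
	d.rpcRW.Lock()
	d.nodes = map[string]bool{} // safe: no RPC holds the read lock now
	d.rpcRW.Unlock()
}

func main() {
	d := &dispatcher{running: true, nodes: map[string]bool{}}
	fmt.Println(d.Heartbeat("node1"))
	d.Stop()
	fmt.Println(d.Heartbeat("node1"))
}
```

This is why the patch can drop `isRunningLocked()` in `Heartbeat()`: the read lock already orders the handler against the barrier. The remaining `wg.Wait()` only covers the Stop-before-Run race flagged in the TODO.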

@@ -41,8 +41,18 @@ make
sudo make install
```
You can also use `go get` to install to your `GOPATH`, assuming that you have a `github.com` parent folder already created under `src`:
```bash
go get github.com/opencontainers/runc
cd $GOPATH/src/github.com/opencontainers/runc
make
sudo make install
```
`runc` will be installed to `/usr/local/sbin/runc` on your system.
#### Build Tags
`runc` supports optional build tags for compiling support of various features.


@@ -21,5 +21,5 @@ github.com/urfave/cli d53eb991652b1d438abdd34ce4bfa3ef1539108e
golang.org/x/sys 7ddbeae9ae08c6a06a59597f0c9edbc5ff2444ce https://github.com/golang/sys
# console dependencies
github.com/containerd/console 84eeaae905fa414d03e07bcd6c8d3f19e7cf180e
github.com/containerd/console 2748ece16665b45a47f884001d5831ec79703880
github.com/pkg/errors v0.8.0

vendor/golang.org/x/sync/README generated vendored

@@ -1,2 +0,0 @@
This repository provides Go concurrency primitives in addition to the
ones provided by the language and "sync" and "sync/atomic" packages.

vendor/golang.org/x/sync/README.md generated vendored Normal file

@@ -0,0 +1,18 @@
# Go Sync
This repository provides Go concurrency primitives in addition to the
ones provided by the language and "sync" and "sync/atomic" packages.
## Download/Install
The easiest way to install is to run `go get -u golang.org/x/sync`. You can
also manually git clone the repository to `$GOPATH/src/golang.org/x/sync`.
## Report Issues / Send Patches
This repository uses Gerrit for code changes. To learn how to submit changes to
this repository, see https://golang.org/doc/contribute.html.
The main issue tracker for the sync repository is located at
https://github.com/golang/go/issues. Prefix your issue with "x/sync:" in the
subject line, so it is easy to find.