Compare commits

...

102 Commits

Author SHA1 Message Date
Eli Uriegas
acf06a5d60 bump version to 17.06.1-ce-rc2
Signed-off-by: Eli Uriegas <eli.uriegas@docker.com>
2020-03-23 09:41:38 +00:00
Fengtu Wang
8ff1920458 Keep pause state when restoring container's status
Do not change the pause state when restoring the container's
status; otherwise the status in Docker will differ from the
status in runc.

Signed-off-by: Fengtu Wang <wangfengtu@huawei.com>
(cherry picked from commit 977c4046fd)
Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
2020-03-23 09:41:38 +00:00
Wentao Zhang
1fd76f3838 Set unpaused state when receiving 'stateExit' event
Description:
 1. start a container with restart=always.
    `docker run -d --restart=always ubuntu sleep 3`
 2. container init process exits.
 3. use `docker pause <id>` to pause this container.

If the pause action happens after the init process has died but before the
cgroup data is removed, the `Pause` operation will succeed in writing
cgroup data, but will not actually freeze any process.

Docker then receives the pause event and stateExit event from
containerd, so the docker state will be Running(paused), but the container
is actually running unpaused.

Then we can not remove it, stop it, pause it or unpause it.

Signed-off-by: Wentao Zhang <zhangwentao234@huawei.com>
(cherry picked from commit fe1b4cfba6)
Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
2020-03-23 09:41:38 +00:00
Brian Goff
acbee943fd Fix error handling with not-exist errors on remove
Specifically, none of the graphdrivers are supposed to return a
not-exist type of error on remove (or at least that's how they are
currently handled).

Found that AUFS still had one case where a not-exist error could escape:
when checking whether the directory is mounted, we call `Statfs` on the
path.

This fixes AUFS to not return an error in this case, but also
double-checks at the daemon level on layer remove that the error is not
a `not-exist` type of error.

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit d42dbdd3d4)
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
2020-03-23 09:41:38 +00:00
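A minimal Go sketch of the daemon-level double-check described in the commit above: a not-exist error during removal is treated as success. The helper name is illustrative, not the actual graphdriver API.

```go
package main

import (
	"fmt"
	"os"
)

// removeLayerDir is a hypothetical helper: a path that is already gone
// should not turn layer removal into an error.
func removeLayerDir(dir string) error {
	if err := os.Remove(dir); err != nil && !os.IsNotExist(err) {
		return err
	}
	return nil
}

func main() {
	// Removing a path that does not exist is treated as success.
	fmt.Println(removeLayerDir("/tmp/layer-that-does-not-exist")) // <nil>
}
```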
Aaron Lehmann
8660f37d48 cluster: Avoid recursive RLock
GetTasks can call GetService and GetNode with the read lock held. These
methods try to acquire the read side of the same lock. According to the
sync package documentation, this is not safe:

> If a goroutine holds a RWMutex for reading, it must not expect this or
> any other goroutine to be able to also take the read lock until the
> first read lock is released. In particular, this prohibits recursive
> read locking. This is to ensure that the lock eventually becomes
> available; a blocked Lock call excludes new readers from acquiring the
> lock.

Fix GetTasks to use the lower-level getService and getNode methods
instead. Also, use lockedManagerAction to simplify GetTasks.

Signed-off-by: Aaron Lehmann <aaron.lehmann@docker.com>
(cherry picked from commit bd4f66c8f1)
2020-03-23 09:41:38 +00:00
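The hazard and the fix in the commit above can be illustrated with a small Go sketch (the types and method names are illustrative, not the actual moby cluster code): exported methods take the read lock, lower-level unexported helpers assume the caller already holds it, and a bulk operation takes the lock exactly once.

```go
package main

import (
	"fmt"
	"sync"
)

type cluster struct {
	mu       sync.RWMutex
	services map[string]string
}

// GetService acquires the read lock itself. Calling it while mu is already
// read-locked is the recursive read locking the sync docs warn about.
func (c *cluster) GetService(id string) string {
	c.mu.RLock()
	defer c.mu.RUnlock()
	return c.getService(id)
}

// getService is the lower-level accessor: the caller must hold the lock.
func (c *cluster) getService(id string) string {
	return c.services[id]
}

// GetTasks takes the read lock once and only calls the lock-free helpers,
// mirroring the fix (and the lockedManagerAction-style wrapper).
func (c *cluster) GetTasks(ids []string) []string {
	c.mu.RLock()
	defer c.mu.RUnlock()
	out := make([]string, 0, len(ids))
	for _, id := range ids {
		out = append(out, c.getService(id)) // not c.GetService(id)
	}
	return out
}

func main() {
	c := &cluster{services: map[string]string{"t1": "web", "t2": "db"}}
	fmt.Println(c.GetTasks([]string{"t1", "t2"}))
}
```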
Ying
8f9d67cb79 Update the swarmkit vendor to include the following changes:
- https://github.com/docker/swarmkit/pull/2309 (updating the service spec version when rolling back)
- https://github.com/docker/swarmkit/pull/2310 (fix for slow swarm shutdown)
- https://github.com/docker/swarmkit/pull/2323 (run watchapi server on all managers)

Signed-off-by: Ying <ying.li@docker.com>
2020-03-23 09:41:38 +00:00
Tonis Tiigi
c3d547c456 archive: add test for prefix header
With docker-17.06.0 some images pulled do not extract properly. Some files don't appear in correct directories. This may or may not cause the pull to fail. These images can't be pushed or saved. 17.06 is the first version of Docker built with go1.8.

Cause

There are multiple updates to the tar package in go1.8.

https://go-review.googlesource.com/c/32234/ disables using the "prefix" field when new tar archives are being written. The prefix field was previously set when a record in the archive used a path longer than 100 bytes.

Another change, https://go-review.googlesource.com/c/31444/, makes the reader ignore the "prefix" field value if the record is in GNU format. The GNU format defines that the same area is used for access and modification times. If the "prefix" field is not read, a file will only be extracted by its basename.

The problem is that with a previous version of the Go archive package, headers could be written that use the prefix field while at the same time setting the header format to GNU. This happens when numeric fields are big enough that they cannot be written as octal strings and need to be written in binary. Usually this shouldn't happen: uid, gid, devmajor and devminor can use up to 7 bytes, size and timestamp up to 11. If one of the records does overflow, it switches the whole writer to GNU mode and all subsequent files will be saved in GNU format.

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit 4a3cfda45e)
Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
2020-03-23 09:41:38 +00:00
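A small, self-contained round trip exercising the long-path case that the commit's test targets: with a correctly behaving archive/tar the full path survives, while the buggy prefix/GNU combination described above would leave only the basename. This is an illustrative sketch, not the vendored test itself.

```go
package main

import (
	"archive/tar"
	"bytes"
	"fmt"
	"log"
	"strings"
)

func main() {
	// A path longer than 100 bytes cannot fit the classic "name" field and
	// forces the writer onto the prefix / long-name mechanisms.
	longName := strings.Repeat("dir/", 30) + "file.txt"
	data := []byte("hello")

	var buf bytes.Buffer
	tw := tar.NewWriter(&buf)
	if err := tw.WriteHeader(&tar.Header{Name: longName, Mode: 0600, Size: int64(len(data))}); err != nil {
		log.Fatal(err)
	}
	if _, err := tw.Write(data); err != nil {
		log.Fatal(err)
	}
	if err := tw.Close(); err != nil {
		log.Fatal(err)
	}

	tr := tar.NewReader(&buf)
	hdr, err := tr.Next()
	if err != nil {
		log.Fatal(err)
	}
	// With the buggy writer/reader pairing, hdr.Name would only be "file.txt".
	fmt.Println(hdr.Name == longName, hdr.Name)
}
```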
Tonis Tiigi
89bacc278b vendor: add archive/tar
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit 72df48d1ad)
Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
2020-03-23 09:41:38 +00:00
Abhinandan Prativadi
853274ece3 Fixing issue with driver opt not passed to drivers
Signed-off-by: Abhinandan Prativadi <abhi@docker.com>
(cherry picked from commit bcb55c6202)
Signed-off-by: Abhinandan Prativadi <abhi@docker.com>
2020-03-23 09:41:38 +00:00
Sebastiaan van Stijn
2cddc993de Improve API docs for UsageData
The docs did not mention when this information
was set, and what the `-1` value indicated.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 194f635ce7)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-03-23 09:41:38 +00:00
Aaron Lehmann
fc5e100344 api: Update swagger.yaml for configs
Also fix bad reference to ServiceSpec.

Signed-off-by: Aaron Lehmann <aaron.lehmann@docker.com>
(cherry picked from commit ea1d14a189)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-03-23 09:41:38 +00:00
Sebastiaan van Stijn
7d99cdff8d Fix API docs for GET /secrets/{id}, GET /secrets
The swagger.yml defined these endpoints to return
a "ServiceSpec" instead of a "SecretSpec".

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit f6954bea9f)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-03-23 09:41:38 +00:00
allencloud
064f22822c add cluster events change in version_history.md
Signed-off-by: allencloud <allen.sun@daocloud.io>
(cherry picked from commit e9da15a660)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-03-23 09:41:38 +00:00
allencloud
e712f523e0 add cluster events details in swagger.yml
Signed-off-by: allencloud <allen.sun@daocloud.io>
(cherry picked from commit f596fb7683)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-03-23 09:41:38 +00:00
Sebastiaan van Stijn
f20ac0badb Service privileges: API docs
This documents the Service privileges
API changes, that were added in:
091b5e68ea

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit d0a8e73e7b)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-03-23 09:41:37 +00:00
Tobias Gesellchen
6fade192eb Fix typo in swagger doc
Signed-off-by: Tobias Gesellchen <tobias@gesellix.de>
(cherry picked from commit 56da4f2fb2)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-03-23 09:41:37 +00:00
Tonis Tiigi
e011810b80 builder: fix copy --from conflict with force pull
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit e430b58811813084df2b9f8b1a9e929114b2187a)
Signed-off-by: Tibor Vass <tibor@docker.com>
2020-03-23 09:41:37 +00:00
Justin Cormack
c32ca2f148 In the case of remounting with changed data, need to call mount
The case where we are trying to do a remount with changed filesystem-specific options was missing;
we need to call `mount` here as well to change those options.

See #33844 for where we need this, as we change `tmpfs` options.

Signed-off-by: Justin Cormack <justin.cormack@docker.com>
(cherry picked from commit 3a1ab5b479)
Signed-off-by: Ying <ying.li@docker.com>
2020-03-23 09:41:37 +00:00
Tibor Vass
993b6912fd [engine] Graceful upgrade of containerd and runc state files upon live-restore
Vendors new dependency github.com/crosbymichael/upgrade

Signed-off-by: Tibor Vass <tibor@docker.com>
2020-03-23 09:41:37 +00:00
Andrew Hsu
d27ffb2708 bump version to 17.06.1-ce-rc1
Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
2020-03-23 09:41:37 +00:00
Andrew Hsu
af9667841e import runtime so cherry-pick moby/moby@b9255e4 can work
Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
2020-03-23 09:41:37 +00:00
Andrew Hsu
9bb8dfa27e adjust cherry-pick moby/moby@b9255e4 to work without lcow
Because LCOW support is not part of this codebase, this removes the
lines that expect it to be there.

Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
2020-03-23 09:41:37 +00:00
John Stephens
b3fefc36b3 Stop trying to load images on an incompatible OS
Signed-off-by: John Stephens <johnstep@docker.com>
(cherry picked from commit b9255e4a53)
Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
2020-03-23 09:41:37 +00:00
Brian Goff
b9f1bb4e97 Set ping version even on error
In some cases a server may return an error on the ping response but
still provide version details. The client should use these values when
available.

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit 27ef09a46f)
Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
2020-03-23 09:41:37 +00:00
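A hedged sketch of the idea in the commit above: even when `/_ping` does not return a 2xx status, read whatever version header the daemon sent. The `API-Version` header name is an assumption based on the daemon's ping behaviour; the real logic lives in the docker API client, not in this helper.

```go
package main

import (
	"fmt"
	"net/http"
)

// pingAPIVersion returns the API version advertised by the daemon, if any,
// regardless of the ping status code. Transport errors still return nothing.
func pingAPIVersion(host string) (string, error) {
	resp, err := http.Get(host + "/_ping")
	if err != nil {
		return "", err // nothing usable came back
	}
	defer resp.Body.Close()
	// Deliberately ignore resp.StatusCode here: an error response may still
	// carry the version headers we want.
	return resp.Header.Get("API-Version"), nil
}

func main() {
	v, err := pingAPIVersion("http://localhost:2375")
	fmt.Println(v, err)
}
```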
Sebastiaan van Stijn
064ade21f2 Fix api-version history
Commit c79c16910c
inadvertently put these API changes under API 1.31,
but they were added in API 1.30.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit dd5e818fab)

Conflicts:
components/engine/docs/api/version-history.md
Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
2020-03-23 09:41:37 +00:00
Sebastiaan van Stijn
260246bb70 Add missing API documentation for DataPathAddr
Commit 0307fe1a0b added
a new `DataPathAddr` property to the swarm/init and swarm/join
endpoints. This property was not yet added to the
documentation.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit c79c16910c)

Conflicts:
components/engine/docs/api/version-history.md
Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
2020-03-23 09:41:37 +00:00
Kenfe-Mickael Laventure
c0e0575129 Prevent a goroutine leak when healthcheck gets stopped
Signed-off-by: Kenfe-Mickael Laventure <mickael.laventure@gmail.com>
(cherry picked from commit 67297ba005)
Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
2020-03-23 09:41:37 +00:00
Ying
482543a87c Bump swarmkit vendor in order to include the following changes:
- https://github.com/docker/swarmkit/pull/2281 - fixes an issue where some cluster updates
  could be missed if a manager receives a catch-up snapshot from another manager
- https://github.com/docker/swarmkit/pull/2300 - fixes a possible memory issue if an
  external CA sends an overlarge response

Signed-off-by: Ying <ying.li@docker.com>
2020-03-23 09:41:36 +00:00
Tibor Vass
98debbf694 [engine][vendor] forks of containerd + runc + runtime-spec
This vendors from the docker org:
- containerd to 6e23458c129b551d5c9871e5174f6b1b7f6d1170
- runc to 810190ceaa507aa2727d7ae6f4790c76ec150bd2
- runtime-spec to a45ba0989fc26c695fe166a49c45bb8b7618ab36

This fixes two issues:
- if the container is paused, it now responds properly to SIGKILL
- on buggy kernels such as RHEL 7.2, an int64->uint64 conversion bug
  prevented containers from starting when a memory cgroup was specified.

Signed-off-by: Tibor Vass <tibor@docker.com>
2020-03-23 09:41:36 +00:00
Andrew Hsu
bdd34e70b3 add go imports to enable specific cherry-pick of 913eb99
So that we don't have to bring in the entire commit d3d1aab

Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
2020-03-23 09:41:36 +00:00
Sebastiaan van Stijn
801c2a432c Fix handling of remote "git@" notation
`docker build` accepts remote repositories
using either the `git://` notation, or `git@`.

Docker attempted to parse both as a URL; however,
`git@` is not a URL, but an argument to `git clone`.

Go 1.7 silently ignored this, and managed to
extract the needed information from these
remotes; Go 1.8, however, does stricter
validation, and invalidates these.

This patch adds a different path for `git@` remotes,
to prevent them from being handled as a URL (and
invalidated).

A test is also added, because there were no
tests for handling of `git@` remotes.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 913eb99fdc)
Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
2020-03-23 09:41:36 +00:00
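A rough sketch of the distinction the commit above draws, with an illustrative helper (not the actual builder code): `git@host:repo.git` is scp-like syntax for `git clone` and must be recognized before any `url.Parse`-based validation, which rejects it.

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// isGitRemote reports whether s should be handed to `git clone` directly.
// The git@ prefix is checked first because url.Parse rejects that form
// ("first path segment in URL cannot contain colon").
func isGitRemote(s string) bool {
	if strings.HasPrefix(s, "git@") {
		return true
	}
	u, err := url.Parse(s)
	return err == nil && (u.Scheme == "git" || u.Scheme == "ssh")
}

func main() {
	for _, remote := range []string{
		"git@github.com:moby/moby.git",
		"git://github.com/moby/moby.git",
		"https://github.com/moby/moby.git",
	} {
		fmt.Println(remote, "->", isGitRemote(remote))
	}
}
```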
David Glasser
02d284bd2f Fix stderr logging for journald and syslog
logger.PutMessage, added in #28762 (v17.04.0-ce), clears msg.Source. So journald
and syslog were treating stderr messages as if they were stdout.

Signed-off-by: David Glasser <glasser@davidglasser.net>
(cherry picked from commit 917050c572)
Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
2020-03-23 09:41:36 +00:00
Aaron Lehmann
c6c8526b85 middleware: Redact secret data on "secret create"
With debug logging turned on, we currently log the base64-encoded secret
payload.

Change the middleware code to redact this. Since the field is called
"Data", it requires some context-sensitivity. The URI path is examined
to see which route is being invoked.

Signed-off-by: Aaron Lehmann <aaron.lehmann@docker.com>
(cherry picked from commit 3fbc352cbb)
Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
2020-03-23 09:41:36 +00:00
Tonis Tiigi
3ec56e0783 build: fix add from remote url
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit 2981667e11)
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2020-03-23 09:41:36 +00:00
Nishant Totla
a473cc5ef7 Do not add duplicate platform information to service spec
Signed-off-by: Nishant Totla <nishanttotla@gmail.com>
(cherry picked from commit da85b62001)
Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
2020-03-23 09:41:36 +00:00
Brian Goff
af2c3bd37f Do not error on relabel when relabel not supported
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit ebfdfc5768)
Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
2020-03-23 09:41:36 +00:00
Brian Goff
3673d2b668 Remove OSX cross stuff from main Dockerfile
This is no longer needed here. It was required for compiling the CLI
which we no longer do here.

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit 57f0e0c619)
Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
2020-03-23 09:41:36 +00:00
Andrew Hsu
6c7b043f77 use latestChunk instead of latestFile to get cherry-pick commit 7bd7bde working
Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
2020-03-23 09:41:36 +00:00
Brian Goff
791faef9e2 Fix log readers can block writes indefinitely
Before this patch, a log reader is able to block all log writes
indefinitely (and other operations) by simply opening the log stream and
not consuming all the messages.

The reason for this is we protect the read stream from corruption by
ensuring there are no new writes while the log stream is consumed (and
caught up with the live entries).

We can get around this issue because log files are append only, so we
can limit reads to only the section of the file that was written to when
the log stream was first requested.

Now logs are only blocked until all files are opened, rather than
streamed to the client.

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit e2209185ed)

Conflicts:
components/engine/daemon/logger/jsonfilelog/read.go
Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
2020-03-23 09:41:36 +00:00
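The append-only trick described in the commit above can be sketched in a few lines of Go (illustrative, not the jsonfilelog reader): snapshot the file size when the stream is requested and bound the reader to that range, so a slow consumer never holds up the writer.

```go
package main

import (
	"fmt"
	"io"
	"os"
)

// openLogSnapshot returns a reader limited to the bytes that existed when the
// log stream was requested; reads past that point hit io.EOF even while the
// writer keeps appending.
func openLogSnapshot(path string) (io.Reader, func() error, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, nil, err
	}
	fi, err := f.Stat()
	if err != nil {
		f.Close()
		return nil, nil, err
	}
	return io.NewSectionReader(f, 0, fi.Size()), f.Close, nil
}

func main() {
	r, closeFn, err := openLogSnapshot("/var/log/syslog")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer closeFn()
	n, _ := io.Copy(io.Discard, r)
	fmt.Println("read", n, "bytes of the snapshot")
}
```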
Wentao Zhang
5fe12d194d Limit max backoff delay to 2 seconds for GRPC connection
Docker uses the default gRPC backoff strategy to reconnect to containerd when
the connection is lost, and the delay time grows exponentially until it reaches 120s.

So change the max delay time to 2s to avoid prolonged docker and containerd
connection failures.

Signed-off-by: Wentao Zhang <zhangwentao234@huawei.com>
(cherry picked from commit d3d8c77d19)
Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
2020-03-23 09:41:35 +00:00
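A minimal gRPC dial sketch of the cap described in the commit above; the address and option set are illustrative rather than the exact libcontainerd code, with `grpc.WithBackoffMaxDelay` bounding the reconnect backoff.

```go
package main

import (
	"time"

	"google.golang.org/grpc"
)

// dialContainerd caps gRPC's exponential reconnect backoff at 2 seconds so a
// lost containerd connection is retried quickly instead of backing off
// towards the default 120s ceiling.
func dialContainerd(addr string) (*grpc.ClientConn, error) {
	return grpc.Dial(addr,
		grpc.WithInsecure(),
		grpc.WithBackoffMaxDelay(2*time.Second),
	)
}

func main() {
	conn, err := dialContainerd("unix:///run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer conn.Close()
}
```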
Brian Goff
e4e9921367 Fix plugin remove dir name after rename.
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit 4bf263c198)
Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
2020-03-23 09:41:35 +00:00
Andrew Hsu
b1323a1c8f import system for plugin/manager.go
To get the cherry-pick ba42966 to merge smoothly, the import of
system is expected. The import of system was added by
https://github.com/docker/docker-ce/commit/8508f49, but we are not
taking that entire commit.

Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
2020-03-23 09:41:35 +00:00
Brian Goff
27deeb1298 Make plugin removes more resilient to failure
Before this patch, if the plugin's `config.json` is successfully removed
but the main plugin state dir could not be removed for some reason (e.g. a
leaked mount), the daemon could no longer be restarted.

This patch changes this to atomically remove the plugin such that on
daemon restart we can detect that there was an error and re-try. It also
changes the logic so that it only logs errors on restore rather than
erroring out the daemon.

This also removes some code which is now duplicated elsewhere.

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit 11cf394e5e)
Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
2020-03-23 09:41:35 +00:00
Andrew Hsu
bc5e4e3ce3 vndr to update vendor for github.com/spf13/cobra
No changes to vendor.conf, just running vndr.

Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
2020-03-23 09:41:35 +00:00
Andrew Hsu
0915c2c848 vndr to update vendor for github.com/Microsoft/go-winio
No changes to vendor.conf, just running vndr.

Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
2020-03-23 09:41:35 +00:00
Tibor Vass
73c462b2e5 Remove docs (except docs/api), experimental/, contrib/completion, man/
They have been moved to github.com/docker/cli.

Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit b5579a4ce3)

Conflicts:
components/engine/docs/extend/legacy_plugins.md
components/engine/docs/reference/builder.md
components/engine/docs/reference/commandline/node_ls.md
components/engine/docs/reference/commandline/run.md
components/engine/docs/reference/run.md
Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
2020-03-23 09:41:35 +00:00
Eli Uriegas
9c5bb024df bump docker-ce VERSION to 17.06.0-ce
Signed-off-by: Eli Uriegas <eli.uriegas@docker.com>
2020-03-23 09:41:35 +00:00
Eli Uriegas
cc547fdd4b bump VERSION to 17.06.0-ce-rc5
Signed-off-by: Eli Uriegas <eli.uriegas@docker.com>
2020-03-23 09:41:35 +00:00
Tibor Vass
5c27bce649 [engine] Vendor libnetwork to 20bf4e23da1bb3f9d3dc70e4cf531b84c45947bd
Brings in this pull request:
- https://github.com/docker/libnetwork/pull/1808

Signed-off-by: Tibor Vass <tibor@docker.com>
2020-03-23 09:41:35 +00:00
Ying Li
14bc03dbb8 Redact the swarm's spec's signing CA cert when getting swarm info, because
otherwise if the user gets the info from the API, makes a non-CA related change,
then updates, swarm will interpret this as the user trying to remove the signing
key from the swarm.  We are redacting due to usability reasons, not because
the signing cert is secret.  The signing KEY is secret, hence it's redacted.

Signed-off-by: Ying Li <ying.li@docker.com>
(cherry picked from commit bdfbd22afb)
Signed-off-by: Tibor Vass <tibor@docker.com>
2020-03-23 09:41:35 +00:00
Tibor Vass
b055794618 [engine] Vendor swarmkit to 9edb625cfb4407da456cc7fc479db6d824fe81f3
Brings in these pull requests:
- https://github.com/docker/swarmkit/pull/2224
- https://github.com/docker/swarmkit/pull/2268
- https://github.com/docker/swarmkit/pull/2263

Signed-off-by: Tibor Vass <tibor@docker.com>
2020-03-23 09:41:35 +00:00
Andrew Hsu
a65742e314 bump VERSION files to 17.06.0-ce-rc4
Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
2020-03-23 09:41:35 +00:00
unclejack
9a727acc15 pkg/pools: add buffer32KPool & use it for copy
Signed-off-by: Cristian Staretu <cristian.staretu@gmail.com>
(cherry picked from commit ba40f4593f)
Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
2020-03-23 09:41:34 +00:00
unclejack
82e1487e72 container/stream/attach: use pools.Copy
The use of pools.Copy avoids io.Copy's internal buffer allocation.
This commit replaces io.Copy with pools.Copy to avoid the allocation of
buffers in io.Copy.

Signed-off-by: Cristian Staretu <cristian.staretu@gmail.com>
(cherry picked from commit 014095e6a0)
Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
2020-03-23 09:41:34 +00:00
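The idea behind pools.Copy in the two commits above can be sketched with the standard library alone (illustrative, not the pkg/pools implementation): keep 32KiB buffers in a sync.Pool and hand them to io.CopyBuffer so each copy does not allocate its own buffer.

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"strings"
	"sync"
)

// buf32K pools 32KiB buffers, roughly what buffer32KPool provides.
var buf32K = sync.Pool{
	New: func() interface{} { b := make([]byte, 32*1024); return &b },
}

// copyWithPool is a stand-in for pools.Copy: io.CopyBuffer reuses the pooled
// buffer instead of letting io.Copy allocate a fresh one per call.
func copyWithPool(dst io.Writer, src io.Reader) (int64, error) {
	bp := buf32K.Get().(*[]byte)
	defer buf32K.Put(bp)
	return io.CopyBuffer(dst, src, *bp)
}

func main() {
	var out bytes.Buffer
	n, err := copyWithPool(&out, strings.NewReader("hello pools"))
	fmt.Println(n, err, out.String())
}
```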
Aaron Lehmann
a11bca5891 Vendor swarmkit bf9b892
Signed-off-by: Aaron Lehmann <aaron.lehmann@docker.com>
2020-03-23 09:41:34 +00:00
Andrew Hsu
a67f442586 bump VERSION files to 17.06.0-ce-rc3
Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
2020-03-23 09:41:34 +00:00
Sebastiaan van Stijn
842c51c394 Update docs, completion scripts for disable-legacy-registry
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 2b8f0eef73)
Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
2020-03-23 09:41:34 +00:00
Sebastiaan van Stijn
a2f65ec4aa Disable legacy (v1) registries by default
Deprecation of interacting with v1 registries was
started in docker 1.8.3, which added a `--disable-legacy-registry`
flag.

This option was announced to be the default starting
with docker 17.06, with v1 registries completely
removed in docker 17.12.

This patch updates the default, and disables
interaction with v1 registries by default.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 128280013f)
Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
2020-03-23 09:41:34 +00:00
Andrew Hsu
dcbaad5448 revendor github.com/docker/swarmkit to 6083c76
To get the changes:
* https://github.com/docker/swarmkit/pull/2234
* https://github.com/docker/swarmkit/pull/2237
* https://github.com/docker/swarmkit/pull/2238

Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
2020-03-23 09:41:34 +00:00
Madhu Venugopal
5c0be74bb8 Vendoring libnetwork 4f5310be349d9299f6cab6d5822312f00cfa965c
This is a cherry-pick of https://github.com/moby/moby/pull/33634
that brings in https://github.com/docker/libnetwork/pull/1796.

Signed-off-by: Madhu Venugopal <madhu@docker.com>
2020-03-23 09:41:34 +00:00
Michael Crosby
91c49beea0 [engine] Revert ONLCR and OPOST changes
This reverts to a version of runc without ONLCR cleared, so as not to cause
a regression with different clients using --tty.

This also reverts the OPOST changes to the term package to support the
initial change.

Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
(cherry picked from commit a5e83836a4)
Signed-off-by: Tibor Vass <tibor@docker.com>
2020-03-23 09:41:34 +00:00
Brian Goff
9778382f97 Check signal is unset before using user stopsignal
This fixes an issue where if a stop signal is set, and a user sends
SIGKILL, `container.ExitOnNext()` is not set, thus causing the container
to restart.

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit 114652ab86)
Signed-off-by: Eli Uriegas <eli.uriegas@docker.com>
2020-03-23 09:41:34 +00:00
Sebastiaan van Stijn
70d059f024 Update deprecated.md for removal of --email flag
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 43239f62be)
Signed-off-by: Eli Uriegas <eli.uriegas@docker.com>
2020-03-23 09:41:34 +00:00
Sebastiaan van Stijn
ab046e94d5 Remove "-e" / "--email" from integration tests
This option is no longer supported in docker 17.06,
so should not be used.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-03-23 09:41:34 +00:00
Victor Vieux
5c5ac7eb9b Merge pull request #29418 from aboch/p66
[1.13.x] Fix buildIpamResources()
(cherry picked from commit 4d2be03b68)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 27498a3c60)
Signed-off-by: Eli Uriegas <eli.uriegas@docker.com>
2020-03-23 09:41:33 +00:00
Andrew Hsu
d62ed5f3ac revendoring netlink in engine component
bumping vendor of vishvananda/netlink to bd6d5de5ccef2d66b0a26177928d0d8895d7f969

Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
2020-03-23 09:41:33 +00:00
Andrew Hsu
182cd939be revendoring libnetwork in engine component
bumping vendor of docker/libnetwork to 411846172fb457a6459ea94eb536f91717ee0a04

Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
2020-03-23 09:41:33 +00:00
Felix Abecassis
f7feb663b7 Do not reuse a http.Request after a failure in callWithRetry
Closes: #33412

Signed-off-by: Felix Abecassis <fabecassis@nvidia.com>
(cherry picked from commit 62871ef2fa)
Signed-off-by: Eli Uriegas <eli.uriegas@docker.com>
2020-03-23 09:41:33 +00:00
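A hedged sketch of the principle behind the fix above: build a fresh *http.Request (and body reader) for every attempt instead of reusing one whose body may already have been consumed by a failed round trip. Names and retry policy are illustrative, not the plugin client's callWithRetry.

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
	"time"
)

// postWithRetry rebuilds the request on each attempt; reusing a single
// *http.Request across attempts is unsafe once its Body has been read.
func postWithRetry(c *http.Client, url string, payload []byte, attempts int) (*http.Response, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		req, err := http.NewRequest(http.MethodPost, url, bytes.NewReader(payload))
		if err != nil {
			return nil, err
		}
		req.Header.Set("Content-Type", "application/json")
		resp, err := c.Do(req)
		if err == nil {
			return resp, nil
		}
		lastErr = err
		time.Sleep(time.Duration(i+1) * 100 * time.Millisecond)
	}
	return nil, fmt.Errorf("all %d attempts failed: %v", attempts, lastErr)
}

func main() {
	_, err := postWithRetry(http.DefaultClient, "http://127.0.0.1:1/none", []byte(`{}`), 2)
	fmt.Println(err)
}
```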
Peter Bücker
a2b498e128 Logging driver should receive same file in start/stop request
Signed-off-by: Peter Bücker <peter.buecker@gmail.com>
(cherry picked from commit e908e1a357)
Signed-off-by: Eli Uriegas <eli.uriegas@docker.com>
2020-03-23 09:41:33 +00:00
Tonis Tiigi
f024e2d7eb Fix cancelling builder on chunked requests
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit 23628bd7ef)
Signed-off-by: Eli Uriegas <eli.uriegas@docker.com>
2020-03-23 09:41:33 +00:00
Aaron Lehmann
4e8af4c431 daemon: Remove daemon datastructure dump functionality
When sending SIGUSR1 to the daemon, it can crash because of a concurrent
map access panic, showing a stack trace involving dumpDaemon. It appears
it's not possible to recover from a concurrent map access panic. Since
it's important that SIGUSR1 not be a destructive operation, sadly the
best course of action I can think of is to remove this functionality.

Signed-off-by: Aaron Lehmann <aaron.lehmann@docker.com>
(cherry picked from commit a4c68ee857)
Signed-off-by: Eli Uriegas <eli.uriegas@docker.com>
2020-03-23 09:41:33 +00:00
Victor Vieux
e0ffd247bf bump to GA
Signed-off-by: Victor Vieux <victorvieux@gmail.com>
(cherry picked from commit 89658bed64)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit d94f281d78)
Signed-off-by: Eli Uriegas <eli.uriegas@docker.com>
2020-03-23 09:41:33 +00:00
Victor Vieux
33af086a0b bump to rc2
Signed-off-by: Victor Vieux <victorvieux@gmail.com>
(cherry picked from commit c57fdb2a14)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit b4d36e47c3)
Signed-off-by: Eli Uriegas <eli.uriegas@docker.com>
2020-03-23 09:41:33 +00:00
Nishant Totla
3c9dfed30e Fixing stack deploy tests to not contact registry
Because of cherry-pick from commit
f790e839fc7d669acafa6365ca7a83cbedfe9e2d

Signed-off-by: Nishant Totla <nishanttotla@gmail.com>
2020-03-23 09:41:33 +00:00
Nishant Totla
ee9a857558 Fixing e2e test
Because of cherry-pick from commit
5efcec7717 into
components/cli/vendor/github.com/docker/docker

Signed-off-by: Nishant Totla <nishanttotla@gmail.com>
2020-03-23 09:41:33 +00:00
Nishant Totla
9ab2df07fd Adding unit tests for pin by digest (client)
Signed-off-by: Nishant Totla <nishanttotla@gmail.com>
(cherry picked from commit 75c7536d2e)
2020-03-23 09:41:33 +00:00
Nishant Totla
752471c92f Ensure service images get default tag and print familiar strings
Signed-off-by: Nishant Totla <nishanttotla@gmail.com>
(cherry picked from commit 5efcec7717)
2020-03-23 09:41:33 +00:00
Ying
f8ecde191a Re-vendor swarmkit to include the following fixes:
- https://github.com/docker/swarmkit/pull/2218
- https://github.com/docker/swarmkit/pull/2215
- https://github.com/docker/swarmkit/pull/2233

Signed-off-by: Ying <ying.li@docker.com>
2020-03-23 09:41:32 +00:00
Antonio Murdaca
093c31c694 libcontainerd: fix reaper goroutine position
It has been observed that defunct containerd processes accumulate over
time while dockerd is permanently failing to restart containerd.
Due to a bug in the runContainerdDaemon() function, dockerd does not clean up
its child process if containerd already exits very soon after the (re)start.

The reproducer and analysis below comes from docker 1.12.x but bug
still applies on latest master.

- from libcontainerd/remote_linux.go:

  329 func (r *remote) runContainerdDaemon() error {
   :
   :      // start the containerd child process
   :
  403     if err := cmd.Start(); err != nil {
  404             return err
  405     }
   :
   :      // If containerd exits very soon after (re)start, it is possible
   :      // that containerd is already in defunct state at the time when
   :      // dockerd gets here. The setOOMScore() function tries to write
   :      // to /proc/PID_OF_CONTAINERD/oom_score_adj. However, this fails
   :      // with errno EINVAL because containerd is defunct. Please see
   :      // snippets of kernel source code and further explanation below.
   :
  407     if err := setOOMScore(cmd.Process.Pid, r.oomScore); err != nil {
  408             utils.KillProcess(cmd.Process.Pid)
   :
   :              // Due to the error from write() we return here. As the
   :              // goroutine that would clean up the child has not been
   :              // started yet, containerd remains in the defunct state
   :              // and never gets reaped.
   :
  409             return err
  410     }
   :
  417     go func() {
  418             cmd.Wait()
  419             close(r.daemonWaitCh)
  420     }() // Reap our child when needed
   :
  423 }

This is the kernel function that gets invoked when dockerd tries to write
to /proc/PID_OF_CONTAINERD/oom_score_adj.

- from fs/proc/base.c:

 1197 static ssize_t oom_score_adj_write(struct file *file, ...
 1198                                         size_t count, loff_t *ppos)
 1199 {
   :
 1223         task = get_proc_task(file_inode(file));
   :
   :          // The defunct containerd process does not have a virtual
   :          // address space anymore, i.e. task->mm is NULL. Thus the
   :          // following code returns errno EINVAL to dockerd.
   :
 1230         if (!task->mm) {
 1231                 err = -EINVAL;
 1232                 goto err_task_lock;
 1233         }
   :
 1253 err_task_lock:
   :
 1257         return err < 0 ? err : count;
 1258 }

The purpose of the following program is to demonstrate the behavior of
the oom_score_adj_write() function in connection with a defunct process.

$ cat defunct_test.c

#include <unistd.h>

int main(void)
{
    pid_t pid = fork();

    if (pid == 0)
        // child
        _exit(0);

    // parent
    pause();
}

$ make defunct_test
cc     defunct_test.c   -o defunct_test

$ ./defunct_test &
[1] 3142

$ ps -f | grep defunct_test | grep -v grep
root      3142  2956  0 13:04 pts/0    00:00:00 ./defunct_test
root      3143  3142  0 13:04 pts/0    00:00:00 [defunct_test] <defunct>

$ echo "ps 3143" | crash -s
  PID    PPID  CPU       TASK        ST  %MEM     VSZ    RSS  COMM
  3143   3142   2  ffff880035def300  ZO   0.0       0      0  defunct_test

$ echo "px ((struct task_struct *)0xffff880035def300)->mm" | crash -s
$1 = (struct mm_struct *) 0x0
                          ^^^ task->mm is NULL

$ cat /proc/3143/oom_score_adj
0

$ echo 0 > /proc/3143/oom_score_adj
-bash: echo: write error: Invalid argument

---

This patch fixes the above issue by making sure we start the reaper
goroutine as soon as possible.

Signed-off-by: Antonio Murdaca <runcom@redhat.com>

(cherry picked from commit 27087eacbf)

Signed-off-by: Eli Uriegas <eli.uriegas@docker.com>

Signed-off-by: Eli Uriegas <eli.uriegas@docker.com>
2020-03-23 09:41:32 +00:00
Andrew Hsu
5c6a466a84 Revert "Set OPOST on bsd"
This reverts commit fff42c853aedd3e15c731217058642d9859e7d3f.

Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
2020-03-23 09:41:32 +00:00
Dong Chen
4d988e9141 Event tests need to wait for events
Signed-off-by: Dong Chen <dongluo.chen@docker.com>
(cherry picked from commit 59b2d0473a)
Signed-off-by: Eli Uriegas <eli.uriegas@docker.com>
2020-03-23 09:41:32 +00:00
Ying Li
b214658e4c Do not log the CA config CA signing key in debug mode.
Signed-off-by: Ying Li <ying.li@docker.com>
(cherry picked from commit d60f182049)
Signed-off-by: Eli Uriegas <eli.uriegas@docker.com>
2020-03-23 09:41:32 +00:00
Daniel Nephin
f981cf94d0 Fix Cache with ONBUILD
Signed-off-by: Daniel Nephin <dnephin@docker.com>
(cherry picked from commit f1ade82d82)
Signed-off-by: Eli Uriegas <eli.uriegas@docker.com>
2020-03-23 09:41:32 +00:00
Lei Jitang
b9863603cd Don't create source directory while the daemon is being shutdown, fix #30348
If a container mounts the socket the daemon is listening on into the
container while the daemon is being shut down, the socket will
not exist on the host; the daemon will then assume it's a directory
and create it on the host, and this will prevent the daemon from
starting next time.

fix issue https://github.com/moby/moby/issues/30348

To reproduce this issue, you can add the following code:

```
--- a/daemon/oci_linux.go
+++ b/daemon/oci_linux.go
@@ -8,6 +8,7 @@ import (
        "sort"
        "strconv"
        "strings"
+       "time"

        "github.com/Sirupsen/logrus"
        "github.com/docker/docker/container"
@@ -666,7 +667,8 @@ func (daemon *Daemon) createSpec(c *container.Container) (*libcontainerd.Spec, e
        if err := daemon.setupIpcDirs(c); err != nil {
                return nil, err
        }
-
+       fmt.Printf("===please stop the daemon===\n")
+       time.Sleep(time.Second * 2)
        ms, err := daemon.setupMounts(c)
        if err != nil {
                return nil, err

```

step 1: run a container which has `--restart always` and `-v /var/run/docker.sock:/sock`
```
$ docker run -ti --restart always -v /var/run/docker.sock:/sock busybox
/ #

```
step 2: exit the container
```
/ # exit
```
and kill the daemon when you see
```
===please stop the daemon===
```
in the daemon log

The daemon can't restart again and fail with `can't create unix socket /var/run/docker.sock: is a directory`.

Signed-off-by: Lei Jitang <leijitang@huawei.com>

(cherry picked from commit 7318eba5b2)

Signed-off-by: Eli Uriegas <eli.uriegas@docker.com>

Signed-off-by: Eli Uriegas <eli.uriegas@docker.com>
2020-03-23 09:41:32 +00:00
Boaz Shuster
0f09cebc65 Add container environment variables correctly to the health check
The health check process doesn't have all the environment
variables of the container, or has them set incorrectly.

This patch should fix that problem.

Signed-off-by: Boaz Shuster <ripcurld.github@gmail.com>
(cherry picked from commit 5836d86ac4)
Signed-off-by: Eli Uriegas <eli.uriegas@docker.com>
2020-03-23 09:41:32 +00:00
Madhu Venugopal
870d659101 Service alias should not be copied to task alias
If a service alias is copied to the task, then DNS resolution of the
service name will resolve to the service VIP and all of the task IPs, and
that will break the concept of VIP-based load-balancing, resulting in all
the dns-rr caching issues.

This is a regression introduced in #33130

Signed-off-by: Madhu Venugopal <madhu@docker.com>
(cherry picked from commit 38c1553150)
Signed-off-by: Eli Uriegas <eli.uriegas@docker.com>
2020-03-23 09:41:32 +00:00
Kenfe-Mickael Laventure
5ee88ddc0d Update containerd to cfb82a876ecc11b5ca0977d1733adbe58599088a
Signed-off-by: Kenfe-Mickael Laventure <mickael.laventure@gmail.com>
2020-03-23 09:41:32 +00:00
Andrew Hsu
631cf6dc8d bump VERSION files to 17.06.0-ce-rc2
Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
2020-03-23 09:41:32 +00:00
Andrea Luzzardi
bd04a75392 Re-vendor SwarmKit to 4b872cfac8ffc0cc7fff434902cc05dbc7612da5
Includes:
- docker/swarmkit#2203
- docker/swarmkit#2210
- docker/swarmkit#2212

Signed-off-by: Andrea Luzzardi <aluzzardi@gmail.com>
Signed-off-by: Tibor Vass <tibor@docker.com>
2020-03-23 09:41:32 +00:00
Madhu Venugopal
7a11613e94 Vendoring libnetwork
This is a 17.06 equivalent cherry-pick of
https://github.com/moby/moby/pull/33463

Signed-off-by: Madhu Venugopal <madhu@docker.com>
Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
2020-03-23 09:41:31 +00:00
Michael Crosby
0570feee3d Set OPOST on bsd
Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
(cherry picked from commit 17ec46a243)
2020-03-23 09:41:27 +00:00
Daniel Nephin
daef057517 Fix ONBUILD COPY
the source was missing from the second dispatch

Signed-off-by: Daniel Nephin <dnephin@docker.com>
(cherry picked from commit 3f26041577)
2020-03-23 09:41:27 +00:00
Ying Li
400454cf9a Do not clear swarm directory at the beginning of swarm init and swarm join now.
However, do clear the directory if init or join fails, because we don't
want to leave it in a half-finished state.

Signed-off-by: Ying Li <ying.li@docker.com>
(cherry picked from commit bf3e9293a6)
2020-03-23 09:41:27 +00:00
Evan Hazlett
b66b1849e9 remove RuntimeData from cluster and types
Signed-off-by: Evan Hazlett <ejhazlett@gmail.com>
(cherry picked from commit 8eeba75198)
2020-03-23 09:41:27 +00:00
Sandeep Bansal
b01ed8895c Adding support for DNS search on RS1
Signed-off-by: Sandeep Bansal <sabansal@microsoft.com>
(cherry picked from commit b8e8dcd6e0)
2020-03-23 09:41:27 +00:00
Neil Horman
4a5fa1e147 Ensure that a device mapper task is referenced until task is complete
DeviceMapper tasks in Go use SetFinalizer to clean up the C construct
counterparts in the C LVM library.  While that's well and good, it relies
heavily on exactly when the Go garbage collector determines that an
object is unreachable and subject to reclamation.
While common sense would suggest that stack variables (which these DM
tasks always are) become unreachable when the stack frame in which they
are declared returns, that's not the case.  According to this:

https://golang.org/pkg/runtime/#SetFinalizer

The garbage collector decides that, if a function calls into a
system call (which task.run() always will in LVM), and there are no
subsequent references to the task variable within that stack frame, then
it can be reclaimed.  Those conditions are met in several devmapper.go
routines, and if the garbage collector runs in the middle of a
deviceMapper operation, then the task can be destroyed while the
operation is in progress, leading to crashes, failed operations and
other unpredictable behavior.

The fix is to use the KeepAlive interface:

https://golang.org/pkg/runtime/#KeepAlive

The KeepAlive method is effectively an empty reference that fools the
garbage collector into thinking that a variable is still reachable.  By
adding a call to KeepAlive in the task.run() method, we can ensure that
the garbage collector won't reclaim a task object until its execution
within the deviceMapper C library is complete.

Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
(cherry picked from commit d764d8b166)
2020-03-23 09:41:27 +00:00
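A compact Go illustration of the KeepAlive fix described in the commit above (the task type, handle and ioctl call are stand-ins, not the devicemapper bindings): without runtime.KeepAlive the finalizer could run while the C side is still using the handle.

```go
package main

import (
	"fmt"
	"runtime"
)

// task stands in for a devicemapper task wrapping a C-side handle.
type task struct{ handle uintptr }

func newTask() *task {
	t := &task{handle: 0xdead}
	// The real code attaches a finalizer that destroys the C object.
	runtime.SetFinalizer(t, func(t *task) {
		fmt.Println("finalizer: destroying C task", t.handle)
	})
	return t
}

// run models task.run(): once the long C call starts, nothing else references
// t, so the GC may otherwise finalize it before the call returns.
func (t *task) run() {
	doIoctl(t.handle) // long-running C / syscall work using the raw handle
	// Keep the Go wrapper reachable until the C library is done with it.
	runtime.KeepAlive(t)
}

func doIoctl(h uintptr) { fmt.Printf("ioctl on %#x\n", h) }

func main() {
	newTask().run()
	runtime.GC() // in the real bug, a GC during doIoctl triggered the crash
}
```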
Darren Stahl
393ea2d964 Update go-winio to v0.4.2
Signed-off-by: Darren Stahl <darst@microsoft.com>
(cherry picked from commit 3f13107223)
2020-03-23 09:41:27 +00:00
Brian Goff
90fd450182 Don't unmount entire plugin manager tree on remove
This was mistakenly unmounting everything under `plugins/*` instead of
just `plugins/<id>/*` anytime a plugin is removed.

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit db5f31732a)
2020-03-23 09:41:27 +00:00
Brian Goff
80cc4bc95f Bump go to go1.8.3
Note that go1.8.2 contains a security fix (CVE-2017-8932).

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
(cherry picked from commit 0c7c900e9e)
2020-03-23 09:41:27 +00:00
Alfred Landrum
9a3a4c0243 prevent image prune panic
Signed-off-by: Alfred Landrum <alfred.landrum@docker.com>
(cherry picked from commit 32da2a4234)
2020-03-23 09:41:27 +00:00
Kenfe-Mickael Laventure
312781c2e1 Use actual cli version for TestConfigHTTPHeader
Signed-off-by: Kenfe-Mickael Laventure <mickael.laventure@gmail.com>
(cherry picked from commit 0b90edc22f)
2020-03-23 09:41:26 +00:00
Andrew Hsu
0aefd9b0f8 bump VERSION files to 17.06.0-ce-rc1
Signed-off-by: Andrew Hsu <andrewhsu@docker.com>
2020-03-23 09:41:26 +00:00
492 changed files with 6635 additions and 39714 deletions

View File

@@ -5,6 +5,92 @@ information on the list of deprecated flags and APIs please have a look at
https://docs.docker.com/engine/deprecated/ where target removal dates can also
be found.
## 17.05.0-ce (2017-05-04)
### Builder
+ Add multi-stage build support [#31257](https://github.com/docker/docker/pull/31257) [#32063](https://github.com/docker/docker/pull/32063)
+ Allow using build-time args (`ARG`) in `FROM` [#31352](https://github.com/docker/docker/pull/31352)
+ Add an option for specifying build target [#32496](https://github.com/docker/docker/pull/32496)
* Accept `-f -` to read Dockerfile from `stdin`, but use local context for building [#31236](https://github.com/docker/docker/pull/31236)
* The values of default build time arguments (e.g `HTTP_PROXY`) are no longer displayed in docker image history unless a corresponding `ARG` instruction is written in the Dockerfile. [#31584](https://github.com/docker/docker/pull/31584)
- Fix setting command if a custom shell is used in a parent image [#32236](https://github.com/docker/docker/pull/32236)
- Fix `docker build --label` when the label includes single quotes and a space [#31750](https://github.com/docker/docker/pull/31750)
### Client
* Add `--mount` flag to `docker run` and `docker create` [#32251](https://github.com/docker/docker/pull/32251)
* Add `--type=secret` to `docker inspect` [#32124](https://github.com/docker/docker/pull/32124)
* Add `--format` option to `docker secret ls` [#31552](https://github.com/docker/docker/pull/31552)
* Add `--filter` option to `docker secret ls` [#30810](https://github.com/docker/docker/pull/30810)
* Add `--filter scope=<swarm|local>` to `docker network ls` [#31529](https://github.com/docker/docker/pull/31529)
* Add `--cpus` support to `docker update` [#31148](https://github.com/docker/docker/pull/31148)
* Add label filter to `docker system prune` and other `prune` commands [#30740](https://github.com/docker/docker/pull/30740)
* `docker stack rm` now accepts multiple stacks as input [#32110](https://github.com/docker/docker/pull/32110)
* Improve `docker version --format` option when the client has downgraded the API version [#31022](https://github.com/docker/docker/pull/31022)
* Prompt when using an encrypted client certificate to connect to a docker daemon [#31364](https://github.com/docker/docker/pull/31364)
* Display created tags on successful `docker build` [#32077](https://github.com/docker/docker/pull/32077)
* Cleanup compose convert error messages [#32087](https://github.com/moby/moby/pull/32087)
### Contrib
+ Add support for building docker debs for Ubuntu 17.04 Zesty on amd64 [#32435](https://github.com/docker/docker/pull/32435)
### Daemon
- Fix `--api-cors-header` being ignored if `--api-enable-cors` is not set [#32174](https://github.com/docker/docker/pull/32174)
- Cleanup docker tmp dir on start [#31741](https://github.com/docker/docker/pull/31741)
- Deprecate `--graph` flag in favor of `--data-root` [#28696](https://github.com/docker/docker/pull/28696)
### Logging
+ Add support for logging driver plugins [#28403](https://github.com/docker/docker/pull/28403)
* Add support for showing logs of individual tasks to `docker service logs`, and add `/task/{id}/logs` REST endpoint [#32015](https://github.com/docker/docker/pull/32015)
* Add `--log-opt env-regex` option to match environment variables using a regular expression [#27565](https://github.com/docker/docker/pull/27565)
### Networking
+ Allow user to replace, and customize the ingress network [#31714](https://github.com/docker/docker/pull/31714)
- Fix UDP traffic in containers not working after the container is restarted [#32505](https://github.com/docker/docker/pull/32505)
- Fix files being written to `/var/lib/docker` if a different data-root is set [#32505](https://github.com/docker/docker/pull/32505)
### Runtime
- Ensure health probe is stopped when a container exits [#32274](https://github.com/docker/docker/pull/32274)
### Swarm Mode
+ Add update/rollback order for services (`--update-order` / `--rollback-order`) [#30261](https://github.com/docker/docker/pull/30261)
+ Add support for synchronous `service create` and `service update` [#31144](https://github.com/docker/docker/pull/31144)
+ Add support for "grace periods" on healthchecks through the `HEALTHCHECK --start-period` and `--health-start-period` flag to
`docker service create`, `docker service update`, `docker create`, and `docker run` to support containers with an initial startup
time [#28938](https://github.com/docker/docker/pull/28938)
* `docker service create` now omits fields that are not specified by the user, when possible. This will allow defaults to be applied inside the manager [#32284](https://github.com/docker/docker/pull/32284)
* `docker service inspect` now shows default values for fields that are not specified by the user [#32284](https://github.com/docker/docker/pull/32284)
* Move `docker service logs` out of experimental [#32462](https://github.com/docker/docker/pull/32462)
* Add support for Credential Spec and SELinux to services to the API [#32339](https://github.com/docker/docker/pull/32339)
* Add `--entrypoint` flag to `docker service create` and `docker service update` [#29228](https://github.com/docker/docker/pull/29228)
* Add `--network-add` and `--network-rm` to `docker service update` [#32062](https://github.com/docker/docker/pull/32062)
* Add `--credential-spec` flag to `docker service create` and `docker service update` [#32339](https://github.com/docker/docker/pull/32339)
* Add `--filter mode=<global|replicated>` to `docker service ls` [#31538](https://github.com/docker/docker/pull/31538)
* Resolve network IDs on the client side, instead of in the daemon when creating services [#32062](https://github.com/docker/docker/pull/32062)
* Add `--format` option to `docker node ls` [#30424](https://github.com/docker/docker/pull/30424)
* Add `--prune` option to `docker stack deploy` to remove services that are no longer defined in the docker-compose file [#31302](https://github.com/docker/docker/pull/31302)
* Add `PORTS` column for `docker service ls` when using `ingress` mode [#30813](https://github.com/docker/docker/pull/30813)
- Fix unnecessary re-deploying of tasks when environment-variables are used [#32364](https://github.com/docker/docker/pull/32364)
- Fix `docker stack deploy` not supporting `endpoint_mode` when deploying from a docker compose file [#32333](https://github.com/docker/docker/pull/32333)
- Proceed with startup if cluster component cannot be created to allow recovering from a broken swarm setup [#31631](https://github.com/docker/docker/pull/31631)
### Security
* Allow setting SELinux type or MCS labels when using `--ipc=container:` or `--ipc=host` [#30652](https://github.com/docker/docker/pull/30652)
### Deprecation
- Deprecate `--api-enable-cors` daemon flag. This flag was marked deprecated in Docker 1.6.0 but not listed in deprecated features [#32352](https://github.com/docker/docker/pull/32352)
- Remove Ubuntu 12.04 (Precise Pangolin) as supported platform. Ubuntu 12.04 is EOL, and no longer receives updates [#32520](https://github.com/docker/docker/pull/32520)
## 17.04.0-ce (2017-04-05)
### Builder

View File

@@ -90,17 +90,6 @@ RUN cd /usr/local/lvm2 \
&& make install_device-mapper
# See https://git.fedorahosted.org/cgit/lvm2.git/tree/INSTALL
# Configure the container for OSX cross compilation
ENV OSX_SDK MacOSX10.11.sdk
ENV OSX_CROSS_COMMIT a9317c18a3a457ca0a657f08cc4d0d43c6cf8953
RUN set -x \
&& export OSXCROSS_PATH="/osxcross" \
&& git clone https://github.com/tpoechtrager/osxcross.git $OSXCROSS_PATH \
&& ( cd $OSXCROSS_PATH && git checkout -q $OSX_CROSS_COMMIT) \
&& curl -sSL https://s3.dockerproject.org/darwin/v2/${OSX_SDK}.tar.xz -o "${OSXCROSS_PATH}/tarballs/${OSX_SDK}.tar.xz" \
&& UNATTENDED=yes OSX_VERSION_MIN=10.6 ${OSXCROSS_PATH}/build.sh
ENV PATH /osxcross/target/bin:$PATH
# Install seccomp: the version shipped upstream is too old
ENV SECCOMP_VERSION 2.3.2
RUN set -x \
@@ -120,7 +109,8 @@ RUN set -x \
# IMPORTANT: If the version of Go is updated, the Windows to Linux CI machines
# will need updating, to avoid errors. Ping #docker-maintainers on IRC
# with a heads-up.
ENV GO_VERSION 1.8.1
# IMPORTANT: When updating this please note that stdlib archive/tar pkg is vendored
ENV GO_VERSION 1.8.3
RUN curl -fsSL "https://golang.org/dl/go${GO_VERSION}.linux-amd64.tar.gz" \
| tar -xzC /usr/local

View File

@@ -98,7 +98,8 @@ RUN set -x \
# bootstrap, so we use golang-go (1.6) as bootstrap to build Go from source code.
# We don't use the official ARMv6 released binaries as a GOROOT_BOOTSTRAP, because
# not all ARM64 platforms support 32-bit mode. 32-bit mode is optional for ARMv8.
ENV GO_VERSION 1.8.1
# IMPORTANT: When updating this please note that stdlib archive/tar pkg is vendored
ENV GO_VERSION 1.8.3
RUN mkdir /usr/src/go && curl -fsSL https://golang.org/dl/go${GO_VERSION}.src.tar.gz | tar -v -C /usr/src/go -xz --strip-components=1 \
&& cd /usr/src/go/src \
&& GOOS=linux GOARCH=arm64 GOROOT_BOOTSTRAP="$(go env GOROOT)" ./make.bash

View File

@@ -71,7 +71,8 @@ RUN cd /usr/local/lvm2 \
# See https://git.fedorahosted.org/cgit/lvm2.git/tree/INSTALL
# Install Go
ENV GO_VERSION 1.8.1
# IMPORTANT: When updating this please note that stdlib archive/tar pkg is vendored
ENV GO_VERSION 1.8.3
RUN curl -fsSL "https://golang.org/dl/go${GO_VERSION}.linux-armv6l.tar.gz" \
| tar -xzC /usr/local
ENV PATH /go/bin:/usr/local/go/bin:$PATH

View File

@@ -95,7 +95,8 @@ RUN set -x \
# Install Go
# NOTE: official ppc64le go binaries weren't available until go 1.6.4 and 1.7.4
ENV GO_VERSION 1.8.1
# IMPORTANT: When updating this please note that stdlib archive/tar pkg is vendored
ENV GO_VERSION 1.8.3
RUN curl -fsSL "https://golang.org/dl/go${GO_VERSION}.linux-ppc64le.tar.gz" \
| tar -xzC /usr/local

View File

@@ -88,7 +88,8 @@ RUN cd /usr/local/lvm2 \
&& make install_device-mapper
# See https://git.fedorahosted.org/cgit/lvm2.git/tree/INSTALL
ENV GO_VERSION 1.8.1
# IMPORTANT: When updating this please note that stdlib archive/tar pkg is vendored
ENV GO_VERSION 1.8.3
RUN curl -fsSL "https://golang.org/dl/go${GO_VERSION}.linux-s390x.tar.gz" \
| tar -xzC /usr/local

View File

@@ -53,7 +53,8 @@ RUN set -x \
# IMPORTANT: If the version of Go is updated, the Windows to Linux CI machines
# will need updating, to avoid errors. Ping #docker-maintainers on IRC
# with a heads-up.
ENV GO_VERSION 1.8.1
# IMPORTANT: When updating this please note that stdlib archive/tar pkg is vendored
ENV GO_VERSION 1.8.3
RUN curl -fsSL "https://golang.org/dl/go${GO_VERSION}.linux-amd64.tar.gz" \
| tar -xzC /usr/local
ENV PATH /go/bin:/usr/local/go/bin:$PATH

View File

@@ -161,7 +161,7 @@ SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPref
# Environment variable notes:
# - GO_VERSION must be consistent with 'Dockerfile' used by Linux.
# - FROM_DOCKERFILE is used for detection of building within a container.
ENV GO_VERSION=1.8.1 `
ENV GO_VERSION=1.8.3 `
GIT_VERSION=2.11.1 `
GOPATH=C:\go `
FROM_DOCKERFILE=1

View File

@@ -1 +1 @@
17.06.0-dev
17.06.1-ce-rc2

View File

@@ -41,7 +41,7 @@ func DebugRequestMiddleware(handler func(ctx context.Context, w http.ResponseWri
var postForm map[string]interface{}
if err := json.Unmarshal(b, &postForm); err == nil {
maskSecretKeys(postForm)
maskSecretKeys(postForm, r.RequestURI)
formStr, errMarshal := json.Marshal(postForm)
if errMarshal == nil {
logrus.Debugf("form data: %s", string(formStr))
@@ -54,23 +54,41 @@ func DebugRequestMiddleware(handler func(ctx context.Context, w http.ResponseWri
}
}
func maskSecretKeys(inp interface{}) {
func maskSecretKeys(inp interface{}, path string) {
// Remove any query string from the path
idx := strings.Index(path, "?")
if idx != -1 {
path = path[:idx]
}
// Remove trailing / characters
path = strings.TrimRight(path, "/")
if arr, ok := inp.([]interface{}); ok {
for _, f := range arr {
maskSecretKeys(f)
maskSecretKeys(f, path)
}
return
}
if form, ok := inp.(map[string]interface{}); ok {
loop0:
for k, v := range form {
for _, m := range []string{"password", "secret", "jointoken", "unlockkey"} {
for _, m := range []string{"password", "secret", "jointoken", "unlockkey", "signingcakey"} {
if strings.EqualFold(m, k) {
form[k] = "*****"
continue loop0
}
}
maskSecretKeys(v)
maskSecretKeys(v, path)
}
// Route-specific redactions
if strings.HasSuffix(path, "/secrets/create") {
for k := range form {
if k == "Data" {
form[k] = "*****"
}
}
}
}
}

View File

@@ -0,0 +1,58 @@
package middleware
import (
"testing"
"github.com/stretchr/testify/assert"
)
func TestMaskSecretKeys(t *testing.T) {
tests := []struct {
path string
input map[string]interface{}
expected map[string]interface{}
}{
{
path: "/v1.30/secrets/create",
input: map[string]interface{}{"Data": "foo", "Name": "name", "Labels": map[string]interface{}{}},
expected: map[string]interface{}{"Data": "*****", "Name": "name", "Labels": map[string]interface{}{}},
},
{
path: "/v1.30/secrets/create//",
input: map[string]interface{}{"Data": "foo", "Name": "name", "Labels": map[string]interface{}{}},
expected: map[string]interface{}{"Data": "*****", "Name": "name", "Labels": map[string]interface{}{}},
},
{
path: "/secrets/create?key=val",
input: map[string]interface{}{"Data": "foo", "Name": "name", "Labels": map[string]interface{}{}},
expected: map[string]interface{}{"Data": "*****", "Name": "name", "Labels": map[string]interface{}{}},
},
{
path: "/v1.30/some/other/path",
input: map[string]interface{}{
"password": "pass",
"other": map[string]interface{}{
"secret": "secret",
"jointoken": "jointoken",
"unlockkey": "unlockkey",
"signingcakey": "signingcakey",
},
},
expected: map[string]interface{}{
"password": "*****",
"other": map[string]interface{}{
"secret": "*****",
"jointoken": "*****",
"unlockkey": "*****",
"signingcakey": "*****",
},
},
},
}
for _, testcase := range tests {
maskSecretKeys(testcase.input, testcase.path)
assert.Equal(t, testcase.expected, testcase.input)
}
}

View File

@@ -410,6 +410,9 @@ func buildIpamResources(r *types.NetworkResource, nwInfo libnetwork.NetworkInfo)
if !hasIpv6Conf {
for _, ip6Info := range ipv6Info {
if ip6Info.IPAMData.Pool == nil {
continue
}
iData := network.IPAMConfig{}
iData.Subnet = ip6Info.IPAMData.Pool.String()
iData.Gateway = ip6Info.IPAMData.Gateway.String()

View File

@@ -711,7 +711,7 @@ definitions:
- "process"
- "hyperv"
Config:
ContainerConfig:
description: "Configuration for a container that is portable between hosts"
type: "object"
properties:
@@ -908,7 +908,7 @@ definitions:
type: "string"
x-nullable: false
ContainerConfig:
$ref: "#/definitions/Config"
$ref: "#/definitions/ContainerConfig"
DockerVersion:
type: "string"
x-nullable: false
@@ -916,7 +916,7 @@ definitions:
type: "string"
x-nullable: false
Config:
$ref: "#/definitions/Config"
$ref: "#/definitions/ContainerConfig"
Architecture:
type: "string"
x-nullable: false
@@ -1078,17 +1078,27 @@ definitions:
type: "string"
UsageData:
type: "object"
x-nullable: true
required: [Size, RefCount]
description: |
Usage details about the volume. This information is used by the
`GET /system/df` endpoint, and omitted in other endpoints.
properties:
Size:
type: "integer"
description: "The disk space used by the volume (local driver only)"
default: -1
description: |
Amount of disk space used by the volume (in bytes). This information
is only available for volumes created with the `"local"` volume
driver. For volumes created with other volume drivers, this field
is set to `-1` ("not available")
x-nullable: false
RefCount:
type: "integer"
default: -1
description: "The number of containers referencing this volume."
description: |
The number of containers referencing this volume. This field
is set to `-1` if the reference-count is not available.
x-nullable: false
example:
@@ -2003,6 +2013,57 @@ definitions:
description: "A list of additional groups that the container process will run as."
items:
type: "string"
Privileges:
type: "object"
description: "Security options for the container"
properties:
CredentialSpec:
type: "object"
description: "CredentialSpec for managed service account (Windows only)"
properties:
File:
type: "string"
description: |
Load credential spec from this file. The file is read by the daemon, and must be present in the
`CredentialSpecs` subdirectory in the docker data directory, which defaults to
`C:\ProgramData\Docker\` on Windows.
For example, specifying `spec.json` loads `C:\ProgramData\Docker\CredentialSpecs\spec.json`.
<p><br /></p>
> **Note**: `CredentialSpec.File` and `CredentialSpec.Registry` are mutually exclusive.
Registry:
type: "string"
description: |
Load credential spec from this value in the Windows registry. The specified registry value must be
located in:
`HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization\Containers\CredentialSpecs`
<p><br /></p>
> **Note**: `CredentialSpec.File` and `CredentialSpec.Registry` are mutually exclusive.
SELinuxContext:
type: "object"
description: "SELinux labels of the container"
properties:
Disable:
type: "boolean"
description: "Disable SELinux"
User:
type: "string"
description: "SELinux user label"
Role:
type: "string"
description: "SELinux role label"
Type:
type: "string"
description: "SELinux type label"
Level:
type: "string"
description: "SELinux level label"
TTY:
description: "Whether a pseudo-TTY should be allocated."
type: "boolean"
@@ -2085,6 +2146,37 @@ definitions:
SecretName is the name of the secret that this references, but this is just provided for
lookup/display purposes. The secret in the reference will be identified by its ID.
type: "string"
Configs:
description: "Configs contains references to zero or more configs that will be exposed to the service."
type: "array"
items:
type: "object"
properties:
File:
description: "File represents a specific target that is backed by a file."
type: "object"
properties:
Name:
description: "Name represents the final filename in the filesystem."
type: "string"
UID:
description: "UID represents the file UID."
type: "string"
GID:
description: "GID represents the file GID."
type: "string"
Mode:
description: "Mode represents the FileMode of the file."
type: "integer"
format: "uint32"
ConfigID:
description: "ConfigID represents the ID of the specific config that we're referencing."
type: "string"
ConfigName:
description: |
ConfigName is the name of the config that this references, but this is just provided for
lookup/display purposes. The config in the reference will be identified by its ID.
type: "string"
Resources:
description: "Resource requirements which apply to each individual container created as part of the service."
@@ -2174,9 +2266,6 @@ definitions:
Runtime:
description: "Runtime is the type of runtime specified for the task executor."
type: "string"
RuntimeData:
description: "RuntimeData is the payload sent to be used with the runtime for the executor."
type: "array"
Networks:
type: "array"
items:
@@ -2691,7 +2780,39 @@ definitions:
type: "string"
format: "dateTime"
Spec:
$ref: "#/definitions/ServiceSpec"
$ref: "#/definitions/SecretSpec"
ConfigSpec:
type: "object"
properties:
Name:
description: "User-defined name of the config."
type: "string"
Labels:
description: "User-defined key/value metadata."
type: "object"
additionalProperties:
type: "string"
Data:
description: "Base64-url-safe-encoded config data"
type: "array"
items:
type: "string"
Config:
type: "object"
properties:
ID:
type: "string"
Version:
$ref: "#/definitions/ObjectVersion"
CreatedAt:
type: "string"
format: "dateTime"
UpdatedAt:
type: "string"
format: "dateTime"
Spec:
$ref: "#/definitions/ConfigSpec"
paths:
/containers/json:
get:
@@ -2896,7 +3017,7 @@ paths:
description: "Container to create"
schema:
allOf:
- $ref: "#/definitions/Config"
- $ref: "#/definitions/ContainerConfig"
- type: "object"
properties:
HostConfig:
@@ -3122,7 +3243,7 @@ paths:
all processes in the container. Freezing the process requires the process to
be running. As a result, paused containers are both `Running` _and_ `Paused`.
Use the `Status` field instead to determin if a container's state is "running".
Use the `Status` field instead to determine if a container's state is "running".
type: "boolean"
Paused:
description: "Whether this container is paused."
@@ -3194,7 +3315,7 @@ paths:
items:
$ref: "#/definitions/MountPoint"
Config:
$ref: "#/definitions/Config"
$ref: "#/definitions/ContainerConfig"
NetworkSettings:
$ref: "#/definitions/NetworkConfig"
examples:
@@ -5560,7 +5681,7 @@ paths:
in: "body"
description: "The container configuration"
schema:
$ref: "#/definitions/Config"
$ref: "#/definitions/ContainerConfig"
- name: "container"
in: "query"
description: "The ID or name of the container to commit"
@@ -5599,16 +5720,22 @@ paths:
Various objects within Docker report events when something happens to them.
Containers report these events: `attach, commit, copy, create, destroy, detach, die, exec_create, exec_detach, exec_start, export, health_status, kill, oom, pause, rename, resize, restart, start, stop, top, unpause, update`
Containers report these events: `attach`, `commit`, `copy`, `create`, `destroy`, `detach`, `die`, `exec_create`, `exec_detach`, `exec_start`, `export`, `health_status`, `kill`, `oom`, `pause`, `rename`, `resize`, `restart`, `start`, `stop`, `top`, `unpause`, and `update`
Images report these events: `delete, import, load, pull, push, save, tag, untag`
Images report these events: `delete`, `import`, `load`, `pull`, `push`, `save`, `tag`, and `untag`
Volumes report these events: `create, mount, unmount, destroy`
Volumes report these events: `create`, `mount`, `unmount`, and `destroy`
Networks report these events: `create, connect, disconnect, destroy`
Networks report these events: `create`, `connect`, `disconnect`, `destroy`, `update`, and `remove`
The Docker daemon reports these events: `reload`
Services report these events: `create`, `update`, and `remove`
Nodes report these events: `create`, `update`, and `remove`
Secrets report these events: `create`, `update`, and `remove`
operationId: "SystemEvents"
produces:
- "application/json"
@@ -5682,7 +5809,8 @@ paths:
- `label=<string>` image or container label
- `network=<string>` network name or ID
- `plugin=<string>` plugin name or ID
- `type=<string>` object to filter by, one of `container`, `image`, `volume`, `network`, or `daemon`
- `scope=<string>` local or swarm
- `type=<string>` object to filter by, one of `container`, `image`, `volume`, `network`, `daemon`, `plugin`, `node`, `service` or `secret`
- `volume=<string>` volume name or ID
type: "string"
tags: ["System"]
@@ -5763,13 +5891,13 @@ paths:
-
Name: "my-volume"
Driver: "local"
Mountpoint: ""
Mountpoint: "/var/lib/docker/volumes/my-volume/_data"
Labels: null
Scope: ""
Scope: "local"
Options: null
UsageData:
Size: 0
RefCount: 0
Size: 10920104
RefCount: 2
500:
description: "server error"
schema:
@@ -7355,6 +7483,16 @@ paths:
AdvertiseAddr:
description: "Externally reachable address advertised to other nodes. This can either be an address/port combination in the form `192.168.1.1:4567`, or an interface followed by a port number, like `eth0:4567`. If the port number is omitted, the port number from the listen address is used. If `AdvertiseAddr` is not specified, it will be automatically detected when possible."
type: "string"
DataPathAddr:
description: |
Address or interface to use for data path traffic (format: `<ip|interface>`), for example, `192.168.1.1`,
or an interface, like `eth0`. If `DataPathAddr` is unspecified, the same address as `AdvertiseAddr`
is used.
The `DataPathAddr` specifies the address that global scope network drivers will publish towards other
nodes in order to reach the containers running on this node. Using this parameter it is possible to
separate the container data traffic from the management traffic of the cluster.
type: "string"
ForceNewCluster:
description: "Force creation of a new swarm."
type: "boolean"
@@ -7403,6 +7541,17 @@ paths:
type: "string"
AdvertiseAddr:
description: "Externally reachable address advertised to other nodes. This can either be an address/port combination in the form `192.168.1.1:4567`, or an interface followed by a port number, like `eth0:4567`. If the port number is omitted, the port number from the listen address is used. If `AdvertiseAddr` is not specified, it will be automatically detected when possible."
type: "string"
DataPathAddr:
description: |
Address or interface to use for data path traffic (format: `<ip|interface>`), for example, `192.168.1.1`,
or an interface, like `eth0`. If `DataPathAddr` is unspecified, the same address as `AdvertiseAddr`
is used.
The `DataPathAddr` specifies the address that global scope network drivers will publish towards other
nodes in order to reach the containers running on this node. Using this parameter it is possible to
separate the container data traffic from the management traffic of the cluster.
type: "string"
RemoteAddrs:
description: "Addresses of manager nodes already participating in the swarm."
@@ -8382,6 +8531,198 @@ paths:
format: "int64"
required: true
tags: ["Secret"]
/configs:
get:
summary: "List configs"
operationId: "ConfigList"
produces:
- "application/json"
responses:
200:
description: "no error"
schema:
type: "array"
items:
$ref: "#/definitions/Config"
example:
- ID: "ktnbjxoalbkvbvedmg1urrz8h"
Version:
Index: 11
CreatedAt: "2016-11-05T01:20:17.327670065Z"
UpdatedAt: "2016-11-05T01:20:17.327670065Z"
Spec:
Name: "server.conf"
500:
description: "server error"
schema:
$ref: "#/definitions/ErrorResponse"
503:
description: "node is not part of a swarm"
schema:
$ref: "#/definitions/ErrorResponse"
parameters:
- name: "filters"
in: "query"
type: "string"
description: |
A JSON encoded value of the filters (a `map[string][]string`) to process on the configs list. Available filters:
- `id=<config id>`
- `label=<key> or label=<key>=value`
- `name=<config name>`
- `names=<config name>`
tags: ["Config"]
/configs/create:
post:
summary: "Create a config"
operationId: "ConfigCreate"
consumes:
- "application/json"
produces:
- "application/json"
responses:
201:
description: "no error"
schema:
type: "object"
properties:
ID:
description: "The ID of the created config."
type: "string"
example:
ID: "ktnbjxoalbkvbvedmg1urrz8h"
409:
description: "name conflicts with an existing object"
schema:
$ref: "#/definitions/ErrorResponse"
500:
description: "server error"
schema:
$ref: "#/definitions/ErrorResponse"
503:
description: "node is not part of a swarm"
schema:
$ref: "#/definitions/ErrorResponse"
parameters:
- name: "body"
in: "body"
schema:
allOf:
- $ref: "#/definitions/ConfigSpec"
- type: "object"
example:
Name: "server.conf"
Labels:
foo: "bar"
Data: "VEhJUyBJUyBOT1QgQSBSRUFMIENFUlRJRklDQVRFCg=="
tags: ["Config"]
/configs/{id}:
get:
summary: "Inspect a config"
operationId: "ConfigInspect"
produces:
- "application/json"
responses:
200:
description: "no error"
schema:
$ref: "#/definitions/Config"
examples:
application/json:
ID: "ktnbjxoalbkvbvedmg1urrz8h"
Version:
Index: 11
CreatedAt: "2016-11-05T01:20:17.327670065Z"
UpdatedAt: "2016-11-05T01:20:17.327670065Z"
Spec:
Name: "app-dev.crt"
404:
description: "config not found"
schema:
$ref: "#/definitions/ErrorResponse"
500:
description: "server error"
schema:
$ref: "#/definitions/ErrorResponse"
503:
description: "node is not part of a swarm"
schema:
$ref: "#/definitions/ErrorResponse"
parameters:
- name: "id"
in: "path"
required: true
type: "string"
description: "ID of the config"
tags: ["Config"]
delete:
summary: "Delete a config"
operationId: "ConfigDelete"
produces:
- "application/json"
responses:
204:
description: "no error"
404:
description: "config not found"
schema:
$ref: "#/definitions/ErrorResponse"
500:
description: "server error"
schema:
$ref: "#/definitions/ErrorResponse"
503:
description: "node is not part of a swarm"
schema:
$ref: "#/definitions/ErrorResponse"
parameters:
- name: "id"
in: "path"
required: true
type: "string"
description: "ID of the config"
tags: ["Config"]
/configs/{id}/update:
post:
summary: "Update a Config"
operationId: "ConfigUpdate"
responses:
200:
description: "no error"
400:
description: "bad parameter"
schema:
$ref: "#/definitions/ErrorResponse"
404:
description: "no such config"
schema:
$ref: "#/definitions/ErrorResponse"
500:
description: "server error"
schema:
$ref: "#/definitions/ErrorResponse"
503:
description: "node is not part of a swarm"
schema:
$ref: "#/definitions/ErrorResponse"
parameters:
- name: "id"
in: "path"
description: "The ID or name of the config"
type: "string"
required: true
- name: "body"
in: "body"
schema:
$ref: "#/definitions/ConfigSpec"
description: "The spec of the config to update. Currently, only the Labels field can be updated. All other fields must remain unchanged from the [ConfigInspect endpoint](#operation/ConfigInspect) response values."
- name: "version"
in: "query"
description: "The version number of the config object being updated. This is required to avoid conflicting writes."
type: "integer"
format: "int64"
required: true
tags: ["Config"]
/distribution/{name}/json:
get:
summary: "Get image information from the registry"

View File

@@ -67,9 +67,6 @@ type TaskSpec struct {
ForceUpdate uint64
Runtime RuntimeType `json:",omitempty"`
// TODO (ehazlett): this should be removed and instead
// use struct tags (proto) for the runtimes
RuntimeData []byte `json:",omitempty"`
}
// Resources represents resources (CPU/Memory).

View File

@@ -44,15 +44,23 @@ type Volume struct {
UsageData *VolumeUsageData `json:"UsageData,omitempty"`
}
// VolumeUsageData volume usage data
// VolumeUsageData Usage details about the volume. This information is used by the
// `GET /system/df` endpoint, and omitted in other endpoints.
//
// swagger:model VolumeUsageData
type VolumeUsageData struct {
// The number of containers referencing this volume.
// The number of containers referencing this volume. This field
// is set to `-1` if the reference-count is not available.
//
// Required: true
RefCount int64 `json:"RefCount"`
// The disk space used by the volume (local driver only)
// Amount of disk space used by the volume (in bytes). This information
// is only available for volumes created with the `"local"` volume
// driver. For volumes created with other volume drivers, this field
// is set to `-1` ("not available")
//
// Required: true
Size int64 `json:"Size"`
}
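Given the `-1` "not available" sentinels documented above, callers need to distinguish a missing value from a real one; a minimal sketch of that handling is shown below (the formatting helper is illustrative and not part of the API types package).

// Illustrative sketch: interpret the -1 "not available" sentinels documented
// above when printing volume usage from a GET /system/df response.
package main

import (
	"fmt"

	"github.com/docker/docker/api/types"
)

func formatUsage(u *types.VolumeUsageData) string {
	if u == nil {
		// UsageData is omitted outside of GET /system/df responses.
		return "usage data not included"
	}
	size, refs := "not available", "not available"
	if u.Size >= 0 {
		size = fmt.Sprintf("%d bytes", u.Size)
	}
	if u.RefCount >= 0 {
		refs = fmt.Sprintf("%d containers", u.RefCount)
	}
	return fmt.Sprintf("size: %s, referenced by: %s", size, refs)
}

func main() {
	fmt.Println(formatUsage(&types.VolumeUsageData{Size: 10920104, RefCount: 2}))
	fmt.Println(formatUsage(&types.VolumeUsageData{Size: -1, RefCount: -1}))
}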

View File

@@ -281,7 +281,7 @@ func BuildFromConfig(config *container.Config, changes []string) (*container.Con
}
dispatchState := newDispatchState()
dispatchState.runConfig = config
return dispatchFromDockerfile(b, dockerfile, dispatchState)
return dispatchFromDockerfile(b, dockerfile, dispatchState, nil)
}
func checkDispatchDockerfile(dockerfile *parser.Node) error {
@@ -293,7 +293,7 @@ func checkDispatchDockerfile(dockerfile *parser.Node) error {
return nil
}
func dispatchFromDockerfile(b *Builder, result *parser.Result, dispatchState *dispatchState) (*container.Config, error) {
func dispatchFromDockerfile(b *Builder, result *parser.Result, dispatchState *dispatchState, source builder.Source) (*container.Config, error) {
shlex := NewShellLex(result.EscapeToken)
ast := result.AST
total := len(ast.Children)
@@ -304,6 +304,7 @@ func dispatchFromDockerfile(b *Builder, result *parser.Result, dispatchState *di
stepMsg: formatStep(i, total),
node: n,
shlex: shlex,
source: source,
}
if _, err := b.dispatch(opts); err != nil {
return nil, err

View File

@@ -30,9 +30,10 @@ type pathCache interface {
// copyInfo is a data object which stores the metadata about each source file in
// a copyInstruction
type copyInfo struct {
root string
path string
hash string
root string
path string
hash string
noDecompress bool
}
func newCopyInfoFromSource(source builder.Source, path string, hash string) copyInfo {
@@ -118,7 +119,9 @@ func (o *copier) getCopyInfoForSourcePath(orig string) ([]copyInfo, error) {
o.tmpPaths = append(o.tmpPaths, remote.Root())
hash, err := remote.Hash(path)
return newCopyInfos(newCopyInfoFromSource(remote, path, hash)), err
ci := newCopyInfoFromSource(remote, path, hash)
ci.noDecompress = true // data from http shouldn't be extracted even on ADD
return newCopyInfos(ci), err
}
// Cleanup removes any temporary directories created as part of downloading

View File

@@ -156,6 +156,11 @@ func add(req dispatchRequest) error {
return err
}
copyInstruction.allowLocalDecompression = true
for _, ci := range copyInstruction.infos {
if ci.noDecompress {
copyInstruction.allowLocalDecompression = false
}
}
return req.builder.performCopy(req.state, copyInstruction)
}
@@ -325,7 +330,7 @@ func processOnBuild(req dispatchRequest) error {
}
}
if _, err := dispatchFromDockerfile(req.builder, dockerfile, dispatchState); err != nil {
if _, err := dispatchFromDockerfile(req.builder, dockerfile, dispatchState, req.source); err != nil {
return err
}
}

View File

@@ -171,11 +171,9 @@ func (b *Builder) dispatch(options dispatchOptions) (*dispatchState, error) {
buildsFailed.WithValues(metricsUnknownInstructionError).Inc()
return nil, fmt.Errorf("unknown instruction: %s", upperCasedCmd)
}
if err := f(newDispatchRequestFromOptions(options, b, args)); err != nil {
return nil, err
}
options.state.updateRunConfig()
return options.state, nil
err = f(newDispatchRequestFromOptions(options, b, args))
return options.state, err
}
type dispatchOptions struct {

View File

@@ -41,6 +41,7 @@ func Detect(config backend.BuildConfig) (remote builder.Source, dockerfile *pars
}
func newArchiveRemote(rc io.ReadCloser, dockerfilePath string) (builder.Source, *parser.Result, error) {
defer rc.Close()
c, err := MakeTarSumContext(rc)
if err != nil {
return nil, nil, err

View File

@@ -20,13 +20,15 @@ func (cli *Client) Ping(ctx context.Context) (types.Ping, error) {
}
defer ensureReaderClosed(serverResp)
ping.APIVersion = serverResp.header.Get("API-Version")
if serverResp.header != nil {
ping.APIVersion = serverResp.header.Get("API-Version")
if serverResp.header.Get("Docker-Experimental") == "true" {
ping.Experimental = true
if serverResp.header.Get("Docker-Experimental") == "true" {
ping.Experimental = true
}
ping.OSType = serverResp.header.Get("OSType")
}
ping.OSType = serverResp.header.Get("OSType")
return ping, nil
err = cli.checkResponseErr(serverResp)
return ping, err
}
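With this change the client surfaces `API-Version`, `OSType` and the experimental flag even when the ping itself returns an error; a short usage sketch (assuming the standard client constructor) is below.

// Illustrative sketch: the ping headers are populated before the error check,
// so they can be inspected even when err is non-nil.
package main

import (
	"fmt"

	"github.com/docker/docker/client"
	"golang.org/x/net/context"
)

func main() {
	cli, err := client.NewEnvClient()
	if err != nil {
		panic(err)
	}

	ping, err := cli.Ping(context.Background())
	if err != nil {
		fmt.Println("ping failed:", err)
	}
	// Best-effort details, filled in whenever the daemon sent the headers.
	fmt.Println("API version:", ping.APIVersion)
	fmt.Println("OS type:", ping.OSType)
	fmt.Println("experimental:", ping.Experimental)
}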

82
client/ping_test.go Normal file
View File

@@ -0,0 +1,82 @@
package client
import (
"errors"
"io/ioutil"
"net/http"
"strings"
"testing"
"github.com/stretchr/testify/assert"
"golang.org/x/net/context"
)
// TestPingFail tests that, when a server sends a non-successful response, we
// can still grab API details when they are set.
// Some of this is just exercising the code paths to make sure there are no
// panics.
func TestPingFail(t *testing.T) {
var withHeader bool
client := &Client{
client: newMockClient(func(req *http.Request) (*http.Response, error) {
resp := &http.Response{StatusCode: http.StatusInternalServerError}
if withHeader {
resp.Header = http.Header{}
resp.Header.Set("API-Version", "awesome")
resp.Header.Set("Docker-Experimental", "true")
}
resp.Body = ioutil.NopCloser(strings.NewReader("some error with the server"))
return resp, nil
}),
}
ping, err := client.Ping(context.Background())
assert.Error(t, err)
assert.Equal(t, false, ping.Experimental)
assert.Equal(t, "", ping.APIVersion)
withHeader = true
ping2, err := client.Ping(context.Background())
assert.Error(t, err)
assert.Equal(t, true, ping2.Experimental)
assert.Equal(t, "awesome", ping2.APIVersion)
}
// TestPingWithError tests the case where there is a protocol error in the ping.
// This test is mostly just testing that there are no panics in this code path.
func TestPingWithError(t *testing.T) {
client := &Client{
client: newMockClient(func(req *http.Request) (*http.Response, error) {
resp := &http.Response{StatusCode: http.StatusInternalServerError}
resp.Header = http.Header{}
resp.Header.Set("API-Version", "awesome")
resp.Header.Set("Docker-Experimental", "true")
resp.Body = ioutil.NopCloser(strings.NewReader("some error with the server"))
return resp, errors.New("some error")
}),
}
ping, err := client.Ping(context.Background())
assert.Error(t, err)
assert.Equal(t, false, ping.Experimental)
assert.Equal(t, "", ping.APIVersion)
}
// TestPingSuccess tests that we are able to get the expected API headers/ping
// details on success.
func TestPingSuccess(t *testing.T) {
client := &Client{
client: newMockClient(func(req *http.Request) (*http.Response, error) {
resp := &http.Response{StatusCode: http.StatusInternalServerError}
resp.Header = http.Header{}
resp.Header.Set("API-Version", "awesome")
resp.Header.Set("Docker-Experimental", "true")
resp.Body = ioutil.NopCloser(strings.NewReader("some error with the server"))
return resp, nil
}),
}
ping, err := client.Ping(context.Background())
assert.Error(t, err)
assert.Equal(t, true, ping.Experimental)
assert.Equal(t, "awesome", ping.APIVersion)
}

View File

@@ -24,6 +24,7 @@ type serverResponse struct {
body io.ReadCloser
header http.Header
statusCode int
reqURL *url.URL
}
// head sends an http request to the docker API using the method HEAD.
@@ -118,11 +119,18 @@ func (cli *Client) sendRequest(ctx context.Context, method, path string, query u
if err != nil {
return serverResponse{}, err
}
return cli.doRequest(ctx, req)
resp, err := cli.doRequest(ctx, req)
if err != nil {
return resp, err
}
if err := cli.checkResponseErr(resp); err != nil {
return resp, err
}
return resp, nil
}
func (cli *Client) doRequest(ctx context.Context, req *http.Request) (serverResponse, error) {
serverResp := serverResponse{statusCode: -1}
serverResp := serverResponse{statusCode: -1, reqURL: req.URL}
resp, err := ctxhttp.Do(ctx, cli.client, req)
if err != nil {
@@ -179,37 +187,44 @@ func (cli *Client) doRequest(ctx context.Context, req *http.Request) (serverResp
if resp != nil {
serverResp.statusCode = resp.StatusCode
serverResp.body = resp.Body
serverResp.header = resp.Header
}
if serverResp.statusCode < 200 || serverResp.statusCode >= 400 {
body, err := ioutil.ReadAll(resp.Body)
if err != nil {
return serverResp, err
}
if len(body) == 0 {
return serverResp, fmt.Errorf("Error: request returned %s for API route and version %s, check if the server supports the requested API version", http.StatusText(serverResp.statusCode), req.URL)
}
var errorMessage string
if (cli.version == "" || versions.GreaterThan(cli.version, "1.23")) &&
resp.Header.Get("Content-Type") == "application/json" {
var errorResponse types.ErrorResponse
if err := json.Unmarshal(body, &errorResponse); err != nil {
return serverResp, fmt.Errorf("Error reading JSON: %v", err)
}
errorMessage = errorResponse.Message
} else {
errorMessage = string(body)
}
return serverResp, fmt.Errorf("Error response from daemon: %s", strings.TrimSpace(errorMessage))
}
serverResp.body = resp.Body
serverResp.header = resp.Header
return serverResp, nil
}
func (cli *Client) checkResponseErr(serverResp serverResponse) error {
if serverResp.statusCode >= 200 && serverResp.statusCode < 400 {
return nil
}
body, err := ioutil.ReadAll(serverResp.body)
if err != nil {
return err
}
if len(body) == 0 {
return fmt.Errorf("Error: request returned %s for API route and version %s, check if the server supports the requested API version", http.StatusText(serverResp.statusCode), serverResp.reqURL)
}
var ct string
if serverResp.header != nil {
ct = serverResp.header.Get("Content-Type")
}
var errorMessage string
if (cli.version == "" || versions.GreaterThan(cli.version, "1.23")) && ct == "application/json" {
var errorResponse types.ErrorResponse
if err := json.Unmarshal(body, &errorResponse); err != nil {
return fmt.Errorf("Error reading JSON: %v", err)
}
errorMessage = errorResponse.Message
} else {
errorMessage = string(body)
}
return fmt.Errorf("Error response from daemon: %s", strings.TrimSpace(errorMessage))
}
func (cli *Client) addHeaders(req *http.Request, headers headers) *http.Request {
// Add CLI Config's HTTP Headers BEFORE we set the Docker headers
// then the user can't change OUR headers
@@ -239,9 +254,9 @@ func encodeData(data interface{}) (*bytes.Buffer, error) {
}
func ensureReaderClosed(response serverResponse) {
if body := response.body; body != nil {
if response.body != nil {
// Drain up to 512 bytes and close the body to let the Transport reuse the connection
io.CopyN(ioutil.Discard, body, 512)
io.CopyN(ioutil.Discard, response.body, 512)
response.body.Close()
}
}

View File

@@ -24,18 +24,22 @@ func (cli *Client) ServiceCreate(ctx context.Context, service swarm.ServiceSpec,
headers["X-Registry-Auth"] = []string{options.EncodedRegistryAuth}
}
// ensure that the image is tagged
if taggedImg := imageWithTagString(service.TaskTemplate.ContainerSpec.Image); taggedImg != "" {
service.TaskTemplate.ContainerSpec.Image = taggedImg
}
// Contact the registry to retrieve digest and platform information
if options.QueryRegistry {
distributionInspect, err := cli.DistributionInspect(ctx, service.TaskTemplate.ContainerSpec.Image, options.EncodedRegistryAuth)
distErr = err
if err == nil {
// now pin by digest if the image doesn't already contain a digest
img := imageWithDigestString(service.TaskTemplate.ContainerSpec.Image, distributionInspect.Descriptor.Digest)
if img != "" {
if img := imageWithDigestString(service.TaskTemplate.ContainerSpec.Image, distributionInspect.Descriptor.Digest); img != "" {
service.TaskTemplate.ContainerSpec.Image = img
}
// add platforms that are compatible with the service
service.TaskTemplate.Placement = updateServicePlatforms(service.TaskTemplate.Placement, distributionInspect)
service.TaskTemplate.Placement = setServicePlatforms(service.TaskTemplate.Placement, distributionInspect)
}
}
var response types.ServiceCreateResponse
@@ -55,29 +59,42 @@ func (cli *Client) ServiceCreate(ctx context.Context, service swarm.ServiceSpec,
}
// imageWithDigestString takes an image string and a digest, and updates
// the image string if it didn't originally contain a digest. It assumes
// that the image string is not an image ID
// the image string if it didn't originally contain a digest. It returns
// an empty string if there are no updates.
func imageWithDigestString(image string, dgst digest.Digest) string {
ref, err := reference.ParseAnyReference(image)
namedRef, err := reference.ParseNormalizedNamed(image)
if err == nil {
if _, isCanonical := ref.(reference.Canonical); !isCanonical {
namedRef, _ := ref.(reference.Named)
if _, isCanonical := namedRef.(reference.Canonical); !isCanonical {
// ensure that image gets a default tag if none is provided
img, err := reference.WithDigest(namedRef, dgst)
if err == nil {
return img.String()
return reference.FamiliarString(img)
}
}
}
return ""
}
// updateServicePlatforms updates the Platforms in swarm.Placement to list
// all compatible platforms for the service, as found in distributionInspect
// and returns a pointer to the new or updated swarm.Placement struct
func updateServicePlatforms(placement *swarm.Placement, distributionInspect registrytypes.DistributionInspect) *swarm.Placement {
// imageWithTagString takes an image string, and returns a tagged image
// string, adding a 'latest' tag if one was not provided. It returns an
// empty string if a canonical reference was provided
func imageWithTagString(image string) string {
namedRef, err := reference.ParseNormalizedNamed(image)
if err == nil {
return reference.FamiliarString(reference.TagNameOnly(namedRef))
}
return ""
}
// setServicePlatforms sets Platforms in swarm.Placement to list all
// compatible platforms for the service, as found in distributionInspect
// and returns a pointer to the new or updated swarm.Placement struct.
func setServicePlatforms(placement *swarm.Placement, distributionInspect registrytypes.DistributionInspect) *swarm.Placement {
if placement == nil {
placement = &swarm.Placement{}
}
// reset any existing listed platforms
placement.Platforms = []swarm.Platform{}
for _, p := range distributionInspect.Platforms {
placement.Platforms = append(placement.Platforms, swarm.Platform{
Architecture: p.Architecture,

View File

@@ -13,6 +13,7 @@ import (
"github.com/docker/docker/api/types"
registrytypes "github.com/docker/docker/api/types/registry"
"github.com/docker/docker/api/types/swarm"
"github.com/opencontainers/go-digest"
"github.com/opencontainers/image-spec/specs-go/v1"
"golang.org/x/net/context"
)
@@ -121,3 +122,92 @@ func TestServiceCreateCompatiblePlatforms(t *testing.T) {
t.Fatalf("expected `service_amd64`, got %s", r.ID)
}
}
func TestServiceCreateDigestPinning(t *testing.T) {
dgst := "sha256:c0537ff6a5218ef531ece93d4984efc99bbf3f7497c0a7726c88e2bb7584dc96"
dgstAlt := "sha256:37ffbf3f7497c07584dc9637ffbf3f7497c0758c0537ffbf3f7497c0c88e2bb7"
serviceCreateImage := ""
pinByDigestTests := []struct {
img string // input image provided by the user
expected string // expected image after digest pinning
}{
// default registry returns familiar string
{"docker.io/library/alpine", "alpine:latest@" + dgst},
// provided tag is preserved and digest added
{"alpine:edge", "alpine:edge@" + dgst},
// image with provided alternative digest remains unchanged
{"alpine@" + dgstAlt, "alpine@" + dgstAlt},
// image with provided tag and alternative digest remains unchanged
{"alpine:edge@" + dgstAlt, "alpine:edge@" + dgstAlt},
// image on alternative registry does not result in familiar string
{"alternate.registry/library/alpine", "alternate.registry/library/alpine:latest@" + dgst},
// unresolvable image does not get a digest
{"cannotresolve", "cannotresolve:latest"},
}
client := &Client{
client: newMockClient(func(req *http.Request) (*http.Response, error) {
if strings.HasPrefix(req.URL.Path, "/services/create") {
// reset and set image received by the service create endpoint
serviceCreateImage = ""
var service swarm.ServiceSpec
if err := json.NewDecoder(req.Body).Decode(&service); err != nil {
return nil, fmt.Errorf("could not parse service create request")
}
serviceCreateImage = service.TaskTemplate.ContainerSpec.Image
b, err := json.Marshal(types.ServiceCreateResponse{
ID: "service_id",
})
if err != nil {
return nil, err
}
return &http.Response{
StatusCode: http.StatusOK,
Body: ioutil.NopCloser(bytes.NewReader(b)),
}, nil
} else if strings.HasPrefix(req.URL.Path, "/distribution/cannotresolve") {
// unresolvable image
return nil, fmt.Errorf("cannot resolve image")
} else if strings.HasPrefix(req.URL.Path, "/distribution/") {
// resolvable images
b, err := json.Marshal(registrytypes.DistributionInspect{
Descriptor: v1.Descriptor{
Digest: digest.Digest(dgst),
},
})
if err != nil {
return nil, err
}
return &http.Response{
StatusCode: http.StatusOK,
Body: ioutil.NopCloser(bytes.NewReader(b)),
}, nil
}
return nil, fmt.Errorf("unexpected URL '%s'", req.URL.Path)
}),
}
// run pin by digest tests
for _, p := range pinByDigestTests {
r, err := client.ServiceCreate(context.Background(), swarm.ServiceSpec{
TaskTemplate: swarm.TaskSpec{
ContainerSpec: swarm.ContainerSpec{
Image: p.img,
},
},
}, types.ServiceCreateOptions{QueryRegistry: true})
if err != nil {
t.Fatal(err)
}
if r.ID != "service_id" {
t.Fatalf("expected `service_id`, got %s", r.ID)
}
if p.expected != serviceCreateImage {
t.Fatalf("expected image %s, got %s", p.expected, serviceCreateImage)
}
}
}

View File

@@ -35,6 +35,11 @@ func (cli *Client) ServiceUpdate(ctx context.Context, serviceID string, version
query.Set("version", strconv.FormatUint(version.Index, 10))
// ensure that the image is tagged
if taggedImg := imageWithTagString(service.TaskTemplate.ContainerSpec.Image); taggedImg != "" {
service.TaskTemplate.ContainerSpec.Image = taggedImg
}
// Contact the registry to retrieve digest and platform information
// This happens only when the image has changed
if options.QueryRegistry {
@@ -42,12 +47,11 @@ func (cli *Client) ServiceUpdate(ctx context.Context, serviceID string, version
distErr = err
if err == nil {
// now pin by digest if the image doesn't already contain a digest
img := imageWithDigestString(service.TaskTemplate.ContainerSpec.Image, distributionInspect.Descriptor.Digest)
if img != "" {
if img := imageWithDigestString(service.TaskTemplate.ContainerSpec.Image, distributionInspect.Descriptor.Digest); img != "" {
service.TaskTemplate.ContainerSpec.Image = img
}
// add platforms that are compatible with the service
service.TaskTemplate.Placement = updateServicePlatforms(service.TaskTemplate.Placement, distributionInspect)
service.TaskTemplate.Placement = setServicePlatforms(service.TaskTemplate.Placement, distributionInspect)
}
}

View File

@@ -155,6 +155,8 @@ func (cli *DaemonCli) start(opts daemonOptions) (err error) {
api := apiserver.New(serverConfig)
cli.api = api
var hosts []string
for i := 0; i < len(cli.Config.Hosts); i++ {
var err error
if cli.Config.Hosts[i], err = dopts.ParseHost(cli.Config.TLS, cli.Config.Hosts[i]); err != nil {
@@ -186,6 +188,7 @@ func (cli *DaemonCli) start(opts daemonOptions) (err error) {
}
}
logrus.Debugf("Listener created for HTTP on %s (%s)", proto, addr)
hosts = append(hosts, protoAddrParts[1])
api.Accept(addr, ls...)
}
@@ -213,6 +216,8 @@ func (cli *DaemonCli) start(opts daemonOptions) (err error) {
return fmt.Errorf("Error starting daemon: %v", err)
}
d.StoreHosts(hosts)
// validate after NewDaemon has restored enabled plugins. Dont change order.
if err := validateAuthzPlugins(cli.Config.AuthorizationPlugins, pluginStore); err != nil {
return fmt.Errorf("Error validating authorization plugin: %v", err)
@@ -402,8 +407,12 @@ func loadDaemonCliConfig(opts daemonOptions) (*config.Config, error) {
return nil, err
}
if conf.V2Only == false {
logrus.Warnf(`The "disable-legacy-registry" option is deprecated and will be removed in Docker v17.12. Interacting with legacy (v1) registries will no longer be supported in Docker v17.12`)
}
if flags.Changed("graph") {
logrus.Warnf(`the "-g / --graph" flag is deprecated. Please use "--data-root" instead`)
logrus.Warnf(`The "-g / --graph" flag is deprecated. Please use "--data-root" instead`)
}
// Labels of the docker engine used to allow multiple values associated with the same key.

View File

@@ -102,7 +102,7 @@ func TestLoadDaemonConfigWithTrueDefaultValuesLeaveDefaults(t *testing.T) {
}
func TestLoadDaemonConfigWithLegacyRegistryOptions(t *testing.T) {
content := `{"disable-legacy-registry": true}`
content := `{"disable-legacy-registry": false}`
tempFile := tempfile.NewTempFile(t, "config", content)
defer tempFile.Remove()
@@ -110,5 +110,5 @@ func TestLoadDaemonConfigWithLegacyRegistryOptions(t *testing.T) {
loadedConfig, err := loadDaemonCliConfig(opts)
require.NoError(t, err)
require.NotNil(t, loadedConfig)
assert.True(t, loadedConfig.V2Only)
assert.False(t, loadedConfig.V2Only)
}

View File

@@ -702,6 +702,9 @@ func (container *Container) BuildCreateEndpointOptions(n libnetwork.Network, epC
for _, alias := range epConfig.Aliases {
createOptions = append(createOptions, libnetwork.CreateOptionMyAlias(alias))
}
for k, v := range epConfig.DriverOpts {
createOptions = append(createOptions, libnetwork.EndpointOptionGeneric(options.Generic{k: v}))
}
}
if container.NetworkSettings.Service != nil {
@@ -747,9 +750,6 @@ func (container *Container) BuildCreateEndpointOptions(n libnetwork.Network, epC
createOptions = append(createOptions, libnetwork.EndpointOptionGeneric(genericOption))
}
for k, v := range epConfig.DriverOpts {
createOptions = append(createOptions, libnetwork.EndpointOptionGeneric(options.Generic{k: v}))
}
}

View File

@@ -278,6 +278,9 @@ func (s *State) SetRunning(pid int, initial bool) {
s.ErrorMsg = ""
s.Running = true
s.Restarting = false
if initial {
s.Paused = false
}
s.ExitCodeValue = 0
s.Pid = pid
if initial {
@@ -304,6 +307,7 @@ func (s *State) SetRestarting(exitStatus *ExitStatus) {
// all the checks in docker around rm/stop/etc
s.Running = true
s.Restarting = true
s.Paused = false
s.Pid = 0
s.FinishedAt = time.Now().UTC()
s.setFromExitStatus(exitStatus)

View File

@@ -7,6 +7,7 @@ import (
"golang.org/x/net/context"
"github.com/Sirupsen/logrus"
"github.com/docker/docker/pkg/pools"
"github.com/docker/docker/pkg/promise"
"github.com/docker/docker/pkg/term"
)
@@ -86,7 +87,7 @@ func (c *Config) CopyStreams(ctx context.Context, cfg *AttachConfig) chan error
if cfg.TTY {
_, err = copyEscapable(cfg.CStdin, cfg.Stdin, cfg.DetachKeys)
} else {
_, err = io.Copy(cfg.CStdin, cfg.Stdin)
_, err = pools.Copy(cfg.CStdin, cfg.Stdin)
}
if err == io.ErrClosedPipe {
err = nil
@@ -116,7 +117,7 @@ func (c *Config) CopyStreams(ctx context.Context, cfg *AttachConfig) chan error
}
logrus.Debugf("attach: %s: begin", name)
_, err := io.Copy(stream, streamPipe)
_, err := pools.Copy(stream, streamPipe)
if err == io.ErrClosedPipe {
err = nil
}
@@ -174,5 +175,5 @@ func copyEscapable(dst io.Writer, src io.ReadCloser, keys []byte) (written int64
pr := term.NewEscapeProxy(src, keys)
defer src.Close()
return io.Copy(dst, pr)
return pools.Copy(dst, pr)
}
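The switch from `io.Copy` to `pools.Copy` reuses buffers from a shared pool rather than allocating a fresh intermediate buffer for every stream copy; as the hunks above show, the call is a drop-in replacement, sketched briefly below (assuming the `pkg/pools` import used in this file).

// Illustrative sketch: pools.Copy is signature-compatible with io.Copy but
// draws its intermediate buffer from a shared pool.
package main

import (
	"os"
	"strings"

	"github.com/docker/docker/pkg/pools"
)

func main() {
	src := strings.NewReader("hello from a pooled copy\n")
	if _, err := pools.Copy(os.Stdout, src); err != nil {
		panic(err)
	}
}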

View File

@@ -12,7 +12,7 @@ RUN update-alternatives --install /usr/bin/go go /usr/lib/go-1.6/bin/go 100
# Install Go
# aarch64 doesn't have official go binaries, so use the version of go installed from
# the image to build go from source.
ENV GO_VERSION 1.7.5
ENV GO_VERSION 1.8.3
RUN mkdir /usr/src/go && curl -fsSL https://golang.org/dl/go${GO_VERSION}.src.tar.gz | tar -v -C /usr/src/go -xz --strip-components=1 \
&& cd /usr/src/go/src \
&& GOOS=linux GOARCH=arm64 GOROOT_BOOTSTRAP="$(go env GOROOT)" ./make.bash

View File

@@ -9,7 +9,7 @@ RUN apt-get update && apt-get install -y apparmor bash-completion btrfs-tools bu
# Install Go
# aarch64 doesn't have official go binaries, so use the version of go installed from
# the image to build go from source.
ENV GO_VERSION 1.7.5
ENV GO_VERSION 1.8.3
RUN mkdir /usr/src/go && curl -fsSL https://golang.org/dl/go${GO_VERSION}.src.tar.gz | tar -v -C /usr/src/go -xz --strip-components=1 \
&& cd /usr/src/go/src \
&& GOOS=linux GOARCH=arm64 GOROOT_BOOTSTRAP="$(go env GOROOT)" ./make.bash

View File

@@ -11,7 +11,7 @@ RUN update-alternatives --install /usr/bin/go go /usr/lib/go-1.6/bin/go 100
# Install Go
# aarch64 doesn't have official go binaries, so use the version of go installed from
# the image to build go from source.
ENV GO_VERSION 1.8.1
ENV GO_VERSION 1.8.3
RUN mkdir /usr/src/go && curl -fsSL https://golang.org/dl/go${GO_VERSION}.src.tar.gz | tar -v -C /usr/src/go -xz --strip-components=1 \
&& cd /usr/src/go/src \
&& GOOS=linux GOARCH=arm64 GOROOT_BOOTSTRAP="$(go env GOROOT)" ./make.bash

View File

@@ -9,7 +9,7 @@ RUN apt-get update && apt-get install -y apparmor bash-completion btrfs-tools bu
# Install Go
# aarch64 doesn't have official go binaries, so use the version of go installed from
# the image to build go from source.
ENV GO_VERSION 1.8.1
ENV GO_VERSION 1.8.3
RUN mkdir /usr/src/go && curl -fsSL https://golang.org/dl/go${GO_VERSION}.src.tar.gz | tar -v -C /usr/src/go -xz --strip-components=1 \
&& cd /usr/src/go/src \
&& GOOS=linux GOARCH=arm64 GOROOT_BOOTSTRAP="$(go env GOROOT)" ./make.bash

View File

@@ -10,7 +10,7 @@ RUN sed -ri "s/(httpredir|deb).debian.org/$APT_MIRROR/g" /etc/apt/sources.list
RUN apt-get update && apt-get install -y apparmor bash-completion btrfs-tools build-essential cmake curl ca-certificates debhelper dh-apparmor dh-systemd git libapparmor-dev libdevmapper-dev libltdl-dev pkg-config vim-common libsystemd-journal-dev --no-install-recommends && rm -rf /var/lib/apt/lists/*
ENV GO_VERSION 1.8.1
ENV GO_VERSION 1.8.3
RUN curl -fSL "https://golang.org/dl/go${GO_VERSION}.linux-amd64.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin

View File

@@ -10,7 +10,7 @@ RUN sed -ri "s/(httpredir|deb).debian.org/$APT_MIRROR/g" /etc/apt/sources.list
RUN apt-get update && apt-get install -y apparmor bash-completion btrfs-tools build-essential cmake curl ca-certificates debhelper dh-apparmor dh-systemd git libapparmor-dev libdevmapper-dev libltdl-dev libseccomp-dev pkg-config vim-common libsystemd-dev --no-install-recommends && rm -rf /var/lib/apt/lists/*
ENV GO_VERSION 1.8.1
ENV GO_VERSION 1.8.3
RUN curl -fSL "https://golang.org/dl/go${GO_VERSION}.linux-amd64.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin

View File

@@ -12,7 +12,7 @@ RUN sed -ri "s/(httpredir|deb).debian.org/$APT_MIRROR/g" /etc/apt/sources.list.d
RUN apt-get update && apt-get install -y -t wheezy-backports btrfs-tools --no-install-recommends && rm -rf /var/lib/apt/lists/*
RUN apt-get update && apt-get install -y apparmor bash-completion build-essential cmake curl ca-certificates debhelper dh-apparmor dh-systemd git libapparmor-dev libdevmapper-dev libltdl-dev pkg-config vim-common --no-install-recommends && rm -rf /var/lib/apt/lists/*
ENV GO_VERSION 1.8.1
ENV GO_VERSION 1.8.3
RUN curl -fSL "https://golang.org/dl/go${GO_VERSION}.linux-amd64.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin

View File

@@ -6,7 +6,7 @@ FROM ubuntu:trusty
RUN apt-get update && apt-get install -y apparmor bash-completion btrfs-tools build-essential cmake curl ca-certificates debhelper dh-apparmor dh-systemd git libapparmor-dev libdevmapper-dev libltdl-dev pkg-config vim-common libsystemd-journal-dev --no-install-recommends && rm -rf /var/lib/apt/lists/*
ENV GO_VERSION 1.8.1
ENV GO_VERSION 1.8.3
RUN curl -fSL "https://golang.org/dl/go${GO_VERSION}.linux-amd64.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin

View File

@@ -6,7 +6,7 @@ FROM ubuntu:xenial
RUN apt-get update && apt-get install -y apparmor bash-completion btrfs-tools build-essential cmake curl ca-certificates debhelper dh-apparmor dh-systemd git libapparmor-dev libdevmapper-dev libltdl-dev libseccomp-dev pkg-config vim-common libsystemd-dev --no-install-recommends && rm -rf /var/lib/apt/lists/*
ENV GO_VERSION 1.8.1
ENV GO_VERSION 1.8.3
RUN curl -fSL "https://golang.org/dl/go${GO_VERSION}.linux-amd64.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin

View File

@@ -6,7 +6,7 @@ FROM ubuntu:yakkety
RUN apt-get update && apt-get install -y apparmor bash-completion btrfs-tools build-essential cmake curl ca-certificates debhelper dh-apparmor dh-systemd git libapparmor-dev libdevmapper-dev libltdl-dev libseccomp-dev pkg-config vim-common libsystemd-dev --no-install-recommends && rm -rf /var/lib/apt/lists/*
ENV GO_VERSION 1.8.1
ENV GO_VERSION 1.8.3
RUN curl -fSL "https://golang.org/dl/go${GO_VERSION}.linux-amd64.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin

View File

@@ -6,7 +6,7 @@ FROM ubuntu:zesty
RUN apt-get update && apt-get install -y apparmor bash-completion btrfs-tools build-essential cmake curl ca-certificates debhelper dh-apparmor dh-systemd git libapparmor-dev libdevmapper-dev libltdl-dev libseccomp-dev pkg-config vim-common libsystemd-dev --no-install-recommends && rm -rf /var/lib/apt/lists/*
ENV GO_VERSION 1.7.5
ENV GO_VERSION 1.8.3
RUN curl -fSL "https://golang.org/dl/go${GO_VERSION}.linux-amd64.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin

View File

@@ -10,7 +10,7 @@ RUN sed -ri "s/(httpredir|deb).debian.org/$APT_MIRROR/g" /etc/apt/sources.list
RUN apt-get update && apt-get install -y apparmor bash-completion btrfs-tools build-essential cmake curl ca-certificates debhelper dh-apparmor dh-systemd git libapparmor-dev libdevmapper-dev libltdl-dev pkg-config vim-common libsystemd-journal-dev --no-install-recommends && rm -rf /var/lib/apt/lists/*
ENV GO_VERSION 1.8.1
ENV GO_VERSION 1.8.3
RUN curl -fSL "https://golang.org/dl/go${GO_VERSION}.linux-armv6l.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin

View File

@@ -10,7 +10,7 @@ RUN sed -ri "s/(httpredir|deb).debian.org/$APT_MIRROR/g" /etc/apt/sources.list
RUN apt-get update && apt-get install -y apparmor bash-completion btrfs-tools build-essential cmake curl ca-certificates debhelper dh-apparmor dh-systemd git libapparmor-dev libdevmapper-dev libltdl-dev pkg-config vim-common libsystemd-journal-dev --no-install-recommends && rm -rf /var/lib/apt/lists/*
ENV GO_VERSION 1.8.1
ENV GO_VERSION 1.8.3
# GOARM is the ARM architecture version which is unrelated to the above Golang version
ENV GOARM 6
RUN curl -fSL "https://golang.org/dl/go${GO_VERSION}.linux-armv6l.tar.gz" | tar xzC /usr/local

View File

@@ -6,7 +6,7 @@ FROM armhf/ubuntu:trusty
RUN apt-get update && apt-get install -y apparmor bash-completion btrfs-tools build-essential cmake curl ca-certificates debhelper dh-apparmor dh-systemd git libapparmor-dev libdevmapper-dev libltdl-dev pkg-config vim-common libsystemd-journal-dev --no-install-recommends && rm -rf /var/lib/apt/lists/*
ENV GO_VERSION 1.8.1
ENV GO_VERSION 1.8.3
RUN curl -fSL "https://golang.org/dl/go${GO_VERSION}.linux-armv6l.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin

View File

@@ -6,7 +6,7 @@ FROM armhf/ubuntu:xenial
RUN apt-get update && apt-get install -y apparmor bash-completion btrfs-tools build-essential cmake curl ca-certificates debhelper dh-apparmor dh-systemd git libapparmor-dev libdevmapper-dev libltdl-dev libseccomp-dev pkg-config vim-common libsystemd-dev --no-install-recommends && rm -rf /var/lib/apt/lists/*
ENV GO_VERSION 1.8.1
ENV GO_VERSION 1.8.3
RUN curl -fSL "https://golang.org/dl/go${GO_VERSION}.linux-armv6l.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin

View File

@@ -6,7 +6,7 @@ FROM armhf/ubuntu:yakkety
RUN apt-get update && apt-get install -y apparmor bash-completion btrfs-tools build-essential cmake curl ca-certificates debhelper dh-apparmor dh-systemd git libapparmor-dev libdevmapper-dev libltdl-dev libseccomp-dev pkg-config vim-common libsystemd-dev --no-install-recommends && rm -rf /var/lib/apt/lists/*
ENV GO_VERSION 1.8.1
ENV GO_VERSION 1.8.3
RUN curl -fSL "https://golang.org/dl/go${GO_VERSION}.linux-armv6l.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin

View File

@@ -6,7 +6,7 @@ FROM ppc64le/ubuntu:trusty
RUN apt-get update && apt-get install -y apparmor bash-completion btrfs-tools build-essential cmake curl ca-certificates debhelper dh-apparmor dh-systemd git libapparmor-dev libdevmapper-dev libltdl-dev pkg-config vim-common libsystemd-journal-dev --no-install-recommends && rm -rf /var/lib/apt/lists/*
ENV GO_VERSION 1.8.1
ENV GO_VERSION 1.8.3
RUN curl -fsSL "https://golang.org/dl/go${GO_VERSION}.linux-ppc64le.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin

View File

@@ -6,7 +6,7 @@ FROM ppc64le/ubuntu:xenial
RUN apt-get update && apt-get install -y apparmor bash-completion btrfs-tools build-essential cmake curl ca-certificates debhelper dh-apparmor dh-systemd git libapparmor-dev libdevmapper-dev libltdl-dev pkg-config vim-common libseccomp-dev libsystemd-dev --no-install-recommends && rm -rf /var/lib/apt/lists/*
ENV GO_VERSION 1.8.1
ENV GO_VERSION 1.8.3
RUN curl -fsSL "https://golang.org/dl/go${GO_VERSION}.linux-ppc64le.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin

View File

@@ -6,7 +6,7 @@ FROM ppc64le/ubuntu:yakkety
RUN apt-get update && apt-get install -y apparmor bash-completion btrfs-tools build-essential cmake curl ca-certificates debhelper dh-apparmor dh-systemd git libapparmor-dev libdevmapper-dev libltdl-dev pkg-config vim-common libseccomp-dev libsystemd-dev --no-install-recommends && rm -rf /var/lib/apt/lists/*
ENV GO_VERSION 1.8.1
ENV GO_VERSION 1.8.3
RUN curl -fsSL "https://golang.org/dl/go${GO_VERSION}.linux-ppc64le.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin

View File

@@ -6,7 +6,7 @@ FROM s390x/ubuntu:xenial
RUN apt-get update && apt-get install -y apparmor bash-completion btrfs-tools build-essential cmake curl ca-certificates debhelper dh-apparmor dh-systemd git libapparmor-dev libdevmapper-dev libltdl-dev libseccomp-dev pkg-config libsystemd-dev vim-common --no-install-recommends && rm -rf /var/lib/apt/lists/*
ENV GO_VERSION 1.8.1
ENV GO_VERSION 1.8.3
RUN curl -fSL "https://golang.org/dl/go${GO_VERSION}.linux-s390x.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin

View File

@@ -6,7 +6,7 @@ FROM s390x/ubuntu:yakkety
RUN apt-get update && apt-get install -y apparmor bash-completion btrfs-tools build-essential cmake curl ca-certificates debhelper dh-apparmor dh-systemd git libapparmor-dev libdevmapper-dev libltdl-dev libseccomp-dev pkg-config libsystemd-dev vim-common --no-install-recommends && rm -rf /var/lib/apt/lists/*
ENV GO_VERSION 1.7.5
ENV GO_VERSION 1.8.3
RUN curl -fSL "https://golang.org/dl/go${GO_VERSION}.linux-s390x.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin

View File

@@ -7,7 +7,7 @@ FROM amazonlinux:latest
RUN yum groupinstall -y "Development Tools"
RUN yum install -y btrfs-progs-devel device-mapper-devel glibc-static libseccomp-devel libselinux-devel libtool-ltdl-devel pkgconfig selinux-policy selinux-policy-devel tar git cmake vim-common
ENV GO_VERSION 1.7.5
ENV GO_VERSION 1.8.3
RUN curl -fSL "https://golang.org/dl/go${GO_VERSION}.linux-amd64.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin

View File

@@ -8,7 +8,7 @@ RUN yum groupinstall -y "Development Tools"
RUN yum -y swap -- remove systemd-container systemd-container-libs -- install systemd systemd-libs
RUN yum install -y btrfs-progs-devel device-mapper-devel glibc-static libseccomp-devel libselinux-devel libtool-ltdl-devel pkgconfig selinux-policy selinux-policy-devel systemd-devel tar git cmake vim-common
ENV GO_VERSION 1.8.1
ENV GO_VERSION 1.8.3
RUN curl -fSL "https://golang.org/dl/go${GO_VERSION}.linux-amd64.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin

View File

@@ -8,7 +8,7 @@ RUN dnf -y upgrade
RUN dnf install -y @development-tools fedora-packager
RUN dnf install -y btrfs-progs-devel device-mapper-devel glibc-static libseccomp-devel libselinux-devel libtool-ltdl-devel pkgconfig selinux-policy selinux-policy-devel systemd-devel tar git cmake vim-common
ENV GO_VERSION 1.8.1
ENV GO_VERSION 1.8.3
RUN curl -fSL "https://golang.org/dl/go${GO_VERSION}.linux-amd64.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin

View File

@@ -8,7 +8,7 @@ RUN dnf -y upgrade
RUN dnf install -y @development-tools fedora-packager
RUN dnf install -y btrfs-progs-devel device-mapper-devel glibc-static libseccomp-devel libselinux-devel libtool-ltdl-devel pkgconfig selinux-policy selinux-policy-devel systemd-devel tar git cmake vim-common
ENV GO_VERSION 1.8.1
ENV GO_VERSION 1.8.3
RUN curl -fSL "https://golang.org/dl/go${GO_VERSION}.linux-amd64.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin

View File

@@ -7,7 +7,7 @@ FROM opensuse:13.2
RUN zypper --non-interactive install ca-certificates* curl gzip rpm-build
RUN zypper --non-interactive install libbtrfs-devel device-mapper-devel glibc-static libselinux-devel libtool-ltdl-devel pkg-config selinux-policy selinux-policy-devel systemd-devel tar git cmake vim systemd-rpm-macros
ENV GO_VERSION 1.8.1
ENV GO_VERSION 1.8.3
RUN curl -fSL "https://golang.org/dl/go${GO_VERSION}.linux-amd64.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin

View File

@@ -10,7 +10,7 @@ RUN yum install -y kernel-uek-devel-4.1.12-32.el6uek
RUN yum groupinstall -y "Development Tools"
RUN yum install -y btrfs-progs-devel device-mapper-devel glibc-static libselinux-devel libtool-ltdl-devel pkgconfig selinux-policy selinux-policy-devel tar git cmake vim-common
ENV GO_VERSION 1.8.1
ENV GO_VERSION 1.8.3
RUN curl -fSL "https://golang.org/dl/go${GO_VERSION}.linux-amd64.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin

View File

@@ -7,7 +7,7 @@ FROM oraclelinux:7
RUN yum groupinstall -y "Development Tools"
RUN yum install -y --enablerepo=ol7_optional_latest btrfs-progs-devel device-mapper-devel glibc-static libseccomp-devel libselinux-devel libtool-ltdl-devel pkgconfig selinux-policy selinux-policy-devel systemd-devel tar git cmake vim-common
ENV GO_VERSION 1.8.1
ENV GO_VERSION 1.8.3
RUN curl -fSL "https://golang.org/dl/go${GO_VERSION}.linux-amd64.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin

View File

@@ -7,7 +7,7 @@ FROM photon:1.0
RUN tdnf install -y wget curl ca-certificates gzip make rpm-build sed gcc linux-api-headers glibc-devel binutils libseccomp libltdl-devel elfutils
RUN tdnf install -y btrfs-progs-devel device-mapper-devel glibc-static libseccomp-devel libselinux-devel libtool-ltdl-devel pkg-config selinux-policy selinux-policy-devel systemd-devel tar git cmake vim-common
ENV GO_VERSION 1.8.1
ENV GO_VERSION 1.8.3
RUN curl -fSL "https://golang.org/dl/go${GO_VERSION}.linux-amd64.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin

View File

@@ -9,7 +9,7 @@ RUN yum groupinstall --skip-broken -y "Development Tools"
RUN yum -y swap -- remove systemd-container systemd-container-libs -- install systemd systemd-libs
RUN yum install -y btrfs-progs-devel device-mapper-devel glibc-static libseccomp-devel libselinux-devel libtool-ltdl-devel pkgconfig selinux-policy selinux-policy-devel sqlite-devel systemd-devel tar git cmake vim-common
ENV GO_VERSION 1.7.4
ENV GO_VERSION 1.8.3
RUN curl -fSL "https://golang.org/dl/go${GO_VERSION}.linux-armv6l.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin

View File

@@ -8,7 +8,7 @@ RUN dnf -y upgrade
RUN dnf install -y @development-tools fedora-packager
RUN dnf install -y btrfs-progs-devel device-mapper-devel glibc-static libseccomp-devel libselinux-devel libtool-ltdl-devel pkgconfig selinux-policy selinux-policy-devel systemd-devel tar git cmake
ENV GO_VERSION 1.8.1
ENV GO_VERSION 1.8.3
RUN curl -fsSL "https://golang.org/dl/go${GO_VERSION}.linux-ppc64le.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin

View File

@@ -1,2 +0,0 @@
Tianon Gravi <admwiggin@gmail.com> (@tianon)
Jessie Frazelle <jess@docker.com> (@jfrazelle)

File diff suppressed because it is too large

View File

@@ -1,409 +0,0 @@
# docker.fish - docker completions for fish shell
#
# This file is generated by gen_docker_fish_completions.py from:
# https://github.com/barnybug/docker-fish-completion
#
# To install the completions:
# mkdir -p ~/.config/fish/completions
# cp docker.fish ~/.config/fish/completions
#
# Completion supported:
# - parameters
# - commands
# - containers
# - images
# - repositories
function __fish_docker_no_subcommand --description 'Test if docker has yet to be given the subcommand'
for i in (commandline -opc)
if contains -- $i attach build commit cp create diff events exec export history images import info inspect kill load login logout logs pause port ps pull push rename restart rm rmi run save search start stop tag top unpause version wait stats
return 1
end
end
return 0
end
function __fish_print_docker_containers --description 'Print a list of docker containers' -a select
switch $select
case running
docker ps -a --no-trunc --filter status=running --format "{{.ID}}\n{{.Names}}" | tr ',' '\n'
case stopped
docker ps -a --no-trunc --filter status=exited --format "{{.ID}}\n{{.Names}}" | tr ',' '\n'
case all
docker ps -a --no-trunc --format "{{.ID}}\n{{.Names}}" | tr ',' '\n'
end
end
function __fish_print_docker_images --description 'Print a list of docker images'
docker images --format "{{.Repository}}:{{.Tag}}" | command grep -v '<none>'
end
function __fish_print_docker_repositories --description 'Print a list of docker repositories'
docker images --format "{{.Repository}}" | command grep -v '<none>' | command sort | command uniq
end
# common options
complete -c docker -f -n '__fish_docker_no_subcommand' -l api-cors-header -d "Set CORS headers in the Engine API. Default is cors disabled"
complete -c docker -f -n '__fish_docker_no_subcommand' -s b -l bridge -d 'Attach containers to a pre-existing network bridge'
complete -c docker -f -n '__fish_docker_no_subcommand' -l bip -d "Use this CIDR notation address for the network bridge's IP, not compatible with -b"
complete -c docker -f -n '__fish_docker_no_subcommand' -s D -l debug -d 'Enable debug mode'
complete -c docker -f -n '__fish_docker_no_subcommand' -s d -l daemon -d 'Enable daemon mode'
complete -c docker -f -n '__fish_docker_no_subcommand' -l dns -d 'Force Docker to use specific DNS servers'
complete -c docker -f -n '__fish_docker_no_subcommand' -l dns-opt -d 'Force Docker to use specific DNS options'
complete -c docker -f -n '__fish_docker_no_subcommand' -l dns-search -d 'Force Docker to use specific DNS search domains'
complete -c docker -f -n '__fish_docker_no_subcommand' -l exec-opt -d 'Set runtime execution options'
complete -c docker -f -n '__fish_docker_no_subcommand' -l fixed-cidr -d 'IPv4 subnet for fixed IPs (e.g. 10.20.0.0/16)'
complete -c docker -f -n '__fish_docker_no_subcommand' -l fixed-cidr-v6 -d 'IPv6 subnet for fixed IPs (e.g.: 2001:a02b/48)'
complete -c docker -f -n '__fish_docker_no_subcommand' -s G -l group -d 'Group to assign the unix socket specified by -H when running in daemon mode'
complete -c docker -f -n '__fish_docker_no_subcommand' -s g -l graph -d 'Path to use as the root of the Docker runtime'
complete -c docker -f -n '__fish_docker_no_subcommand' -s H -l host -d 'The socket(s) to bind to in daemon mode or connect to in client mode, specified using one or more tcp://host:port, unix:///path/to/socket, fd://* or fd://socketfd.'
complete -c docker -f -n '__fish_docker_no_subcommand' -s h -l help -d 'Print usage'
complete -c docker -f -n '__fish_docker_no_subcommand' -l icc -d 'Allow unrestricted inter-container and Docker daemon host communication'
complete -c docker -f -n '__fish_docker_no_subcommand' -l insecure-registry -d 'Enable insecure communication with specified registries (no certificate verification for HTTPS and enable HTTP fallback) (e.g., localhost:5000 or 10.20.0.0/16)'
complete -c docker -f -n '__fish_docker_no_subcommand' -l ip -d 'Default IP address to use when binding container ports'
complete -c docker -f -n '__fish_docker_no_subcommand' -l ip-forward -d 'Enable net.ipv4.ip_forward and IPv6 forwarding if --fixed-cidr-v6 is defined. IPv6 forwarding may interfere with your existing IPv6 configuration when using Router Advertisement.'
complete -c docker -f -n '__fish_docker_no_subcommand' -l ip-masq -d "Enable IP masquerading for bridge's IP range"
complete -c docker -f -n '__fish_docker_no_subcommand' -l iptables -d "Enable Docker's addition of iptables rules"
complete -c docker -f -n '__fish_docker_no_subcommand' -l ipv6 -d 'Enable IPv6 networking'
complete -c docker -f -n '__fish_docker_no_subcommand' -s l -l log-level -d 'Set the logging level ("debug", "info", "warn", "error", "fatal")'
complete -c docker -f -n '__fish_docker_no_subcommand' -l label -d 'Set key=value labels to the daemon (displayed in `docker info`)'
complete -c docker -f -n '__fish_docker_no_subcommand' -l mtu -d 'Set the containers network MTU'
complete -c docker -f -n '__fish_docker_no_subcommand' -s p -l pidfile -d 'Path to use for daemon PID file'
complete -c docker -f -n '__fish_docker_no_subcommand' -l registry-mirror -d 'Specify a preferred Docker registry mirror'
complete -c docker -f -n '__fish_docker_no_subcommand' -s s -l storage-driver -d 'Force the Docker runtime to use a specific storage driver'
complete -c docker -f -n '__fish_docker_no_subcommand' -l selinux-enabled -d 'Enable selinux support. SELinux does not presently support the BTRFS storage driver'
complete -c docker -f -n '__fish_docker_no_subcommand' -l storage-opt -d 'Set storage driver options'
complete -c docker -f -n '__fish_docker_no_subcommand' -l tls -d 'Use TLS; implied by --tlsverify'
complete -c docker -f -n '__fish_docker_no_subcommand' -l tlscacert -d 'Trust only remotes providing a certificate signed by the CA given here'
complete -c docker -f -n '__fish_docker_no_subcommand' -l tlscert -d 'Path to TLS certificate file'
complete -c docker -f -n '__fish_docker_no_subcommand' -l tlskey -d 'Path to TLS key file'
complete -c docker -f -n '__fish_docker_no_subcommand' -l tlsverify -d 'Use TLS and verify the remote (daemon: verify client, client: verify daemon)'
complete -c docker -f -n '__fish_docker_no_subcommand' -s v -l version -d 'Print version information and quit'
# subcommands
# attach
complete -c docker -f -n '__fish_docker_no_subcommand' -a attach -d 'Attach to a running container'
complete -c docker -A -f -n '__fish_seen_subcommand_from attach' -l help -d 'Print usage'
complete -c docker -A -f -n '__fish_seen_subcommand_from attach' -l no-stdin -d 'Do not attach STDIN'
complete -c docker -A -f -n '__fish_seen_subcommand_from attach' -l sig-proxy -d 'Proxy all received signals to the process (non-TTY mode only). SIGCHLD, SIGKILL, and SIGSTOP are not proxied.'
complete -c docker -A -f -n '__fish_seen_subcommand_from attach' -a '(__fish_print_docker_containers running)' -d "Container"
# build
complete -c docker -f -n '__fish_docker_no_subcommand' -a build -d 'Build an image from a Dockerfile'
complete -c docker -A -f -n '__fish_seen_subcommand_from build' -s f -l file -d "Name of the Dockerfile (Default is 'Dockerfile' at context root)"
complete -c docker -A -f -n '__fish_seen_subcommand_from build' -l force-rm -d 'Always remove intermediate containers, even after unsuccessful builds'
complete -c docker -A -f -n '__fish_seen_subcommand_from build' -l help -d 'Print usage'
complete -c docker -A -f -n '__fish_seen_subcommand_from build' -l no-cache -d 'Do not use cache when building the image'
complete -c docker -A -f -n '__fish_seen_subcommand_from build' -l pull -d 'Always attempt to pull a newer version of the image'
complete -c docker -A -f -n '__fish_seen_subcommand_from build' -s q -l quiet -d 'Suppress the build output and print image ID on success'
complete -c docker -A -f -n '__fish_seen_subcommand_from build' -l rm -d 'Remove intermediate containers after a successful build'
complete -c docker -A -f -n '__fish_seen_subcommand_from build' -s t -l tag -d 'Repository name (and optionally a tag) to be applied to the resulting image in case of success'
# commit
complete -c docker -f -n '__fish_docker_no_subcommand' -a commit -d "Create a new image from a container's changes"
complete -c docker -A -f -n '__fish_seen_subcommand_from commit' -s a -l author -d 'Author (e.g., "John Hannibal Smith <hannibal@a-team.com>")'
complete -c docker -A -f -n '__fish_seen_subcommand_from commit' -l help -d 'Print usage'
complete -c docker -A -f -n '__fish_seen_subcommand_from commit' -s m -l message -d 'Commit message'
complete -c docker -A -f -n '__fish_seen_subcommand_from commit' -s p -l pause -d 'Pause container during commit'
complete -c docker -A -f -n '__fish_seen_subcommand_from commit' -a '(__fish_print_docker_containers all)' -d "Container"
# cp
complete -c docker -f -n '__fish_docker_no_subcommand' -a cp -d "Copy files/folders between a container and the local filesystem"
complete -c docker -A -f -n '__fish_seen_subcommand_from cp' -l help -d 'Print usage'
# create
complete -c docker -f -n '__fish_docker_no_subcommand' -a create -d 'Create a new container'
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -s a -l attach -d 'Attach to STDIN, STDOUT or STDERR.'
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l add-host -d 'Add a custom host-to-IP mapping (host:ip)'
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l cpu-shares -d 'CPU shares (relative weight)'
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l cap-add -d 'Add Linux capabilities'
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l cap-drop -d 'Drop Linux capabilities'
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l cidfile -d 'Write the container ID to the file'
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l cpuset -d 'CPUs in which to allow execution (0-3, 0,1)'
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l device -d 'Add a host device to the container (e.g. --device=/dev/sdc:/dev/xvdc:rwm)'
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l device-cgroup-rule -d 'Add a rule to the cgroup allowed devices list (e.g. --device-cgroup-rule="c 13:37 rwm")'
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l dns -d 'Set custom DNS servers'
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l dns-opt -d "Set custom DNS options (Use --dns-opt='' if you don't wish to set options)"
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l dns-search -d "Set custom DNS search domains (Use --dns-search=. if you don't wish to set the search domain)"
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -s e -l env -d 'Set environment variables'
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l entrypoint -d 'Overwrite the default ENTRYPOINT of the image'
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l env-file -d 'Read in a line delimited file of environment variables'
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l expose -d 'Expose a port or a range of ports (e.g. --expose=3300-3310) from the container without publishing it to your host'
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l group-add -d 'Add additional groups to run as'
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -s h -l hostname -d 'Container host name'
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l help -d 'Print usage'
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -s i -l interactive -d 'Keep STDIN open even if not attached'
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l ipc -d 'Default is to create a private IPC namespace (POSIX SysV IPC) for the container'
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l link -d 'Add link to another container in the form of <name|id>:alias'
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -s m -l memory -d 'Memory limit (format: <number>[<unit>], where unit = b, k, m or g)'
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l mac-address -d 'Container MAC address (e.g., 92:d0:c6:0a:29:33)'
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l memory-swap -d "Total memory usage (memory + swap), set '-1' to disable swap (format: <number>[<unit>], where unit = b, k, m or g)"
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l mount -d 'Attach a filesystem mount to the container'
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l name -d 'Assign a name to the container'
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l net -d 'Set the Network mode for the container'
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -s P -l publish-all -d 'Publish all exposed ports to random ports on the host interfaces'
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -s p -l publish -d "Publish a container's port to the host"
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l pid -d 'Default is to create a private PID namespace for the container'
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l privileged -d 'Give extended privileges to this container'
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l read-only -d "Mount the container's root filesystem as read only"
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l restart -d 'Restart policy to apply when a container exits (no, on-failure[:max-retry], always)'
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l security-opt -d 'Security Options'
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -s t -l tty -d 'Allocate a pseudo-TTY'
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -s u -l user -d 'Username or UID'
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -s v -l volume -d 'Bind mount a volume (e.g., from the host: -v /host:/container, from Docker: -v /container)'
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l volumes-from -d 'Mount volumes from the specified container(s)'
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -s w -l workdir -d 'Working directory inside the container'
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -a '(__fish_print_docker_images)' -d "Image"
# diff
complete -c docker -f -n '__fish_docker_no_subcommand' -a diff -d "Inspect changes on a container's filesystem"
complete -c docker -A -f -n '__fish_seen_subcommand_from diff' -l help -d 'Print usage'
complete -c docker -A -f -n '__fish_seen_subcommand_from diff' -a '(__fish_print_docker_containers all)' -d "Container"
# events
complete -c docker -f -n '__fish_docker_no_subcommand' -a events -d 'Get real time events from the server'
complete -c docker -A -f -n '__fish_seen_subcommand_from events' -s f -l filter -d "Provide filter values (i.e., 'event=stop')"
complete -c docker -A -f -n '__fish_seen_subcommand_from events' -l help -d 'Print usage'
complete -c docker -A -f -n '__fish_seen_subcommand_from events' -l since -d 'Show all events created since timestamp'
complete -c docker -A -f -n '__fish_seen_subcommand_from events' -l until -d 'Stream events until this timestamp'
complete -c docker -A -f -n '__fish_seen_subcommand_from events' -l format -d 'Format the output using the given go template'
# exec
complete -c docker -f -n '__fish_docker_no_subcommand' -a exec -d 'Run a command in a running container'
complete -c docker -A -f -n '__fish_seen_subcommand_from exec' -s d -l detach -d 'Detached mode: run command in the background'
complete -c docker -A -f -n '__fish_seen_subcommand_from exec' -l help -d 'Print usage'
complete -c docker -A -f -n '__fish_seen_subcommand_from exec' -s i -l interactive -d 'Keep STDIN open even if not attached'
complete -c docker -A -f -n '__fish_seen_subcommand_from exec' -s t -l tty -d 'Allocate a pseudo-TTY'
complete -c docker -A -f -n '__fish_seen_subcommand_from exec' -a '(__fish_print_docker_containers running)' -d "Container"
# export
complete -c docker -f -n '__fish_docker_no_subcommand' -a export -d 'Stream the contents of a container as a tar archive'
complete -c docker -A -f -n '__fish_seen_subcommand_from export' -l help -d 'Print usage'
complete -c docker -A -f -n '__fish_seen_subcommand_from export' -a '(__fish_print_docker_containers all)' -d "Container"
# history
complete -c docker -f -n '__fish_docker_no_subcommand' -a history -d 'Show the history of an image'
complete -c docker -A -f -n '__fish_seen_subcommand_from history' -l help -d 'Print usage'
complete -c docker -A -f -n '__fish_seen_subcommand_from history' -l no-trunc -d "Don't truncate output"
complete -c docker -A -f -n '__fish_seen_subcommand_from history' -s q -l quiet -d 'Only show numeric IDs'
complete -c docker -A -f -n '__fish_seen_subcommand_from history' -a '(__fish_print_docker_images)' -d "Image"
# images
complete -c docker -f -n '__fish_docker_no_subcommand' -a images -d 'List images'
complete -c docker -A -f -n '__fish_seen_subcommand_from images' -s a -l all -d 'Show all images (by default filter out the intermediate image layers)'
complete -c docker -A -f -n '__fish_seen_subcommand_from images' -s f -l filter -d "Provide filter values (i.e., 'dangling=true')"
complete -c docker -A -f -n '__fish_seen_subcommand_from images' -l help -d 'Print usage'
complete -c docker -A -f -n '__fish_seen_subcommand_from images' -l no-trunc -d "Don't truncate output"
complete -c docker -A -f -n '__fish_seen_subcommand_from images' -s q -l quiet -d 'Only show numeric IDs'
complete -c docker -A -f -n '__fish_seen_subcommand_from images' -a '(__fish_print_docker_repositories)' -d "Repository"
# import
complete -c docker -f -n '__fish_docker_no_subcommand' -a import -d 'Create a new filesystem image from the contents of a tarball'
complete -c docker -A -f -n '__fish_seen_subcommand_from import' -l help -d 'Print usage'
# info
complete -c docker -f -n '__fish_docker_no_subcommand' -a info -d 'Display system-wide information'
complete -c docker -A -f -n '__fish_seen_subcommand_from info' -s f -l format -d 'Format the output using the given go template'
complete -c docker -A -f -n '__fish_seen_subcommand_from info' -l help -d 'Print usage'
# inspect
complete -c docker -f -n '__fish_docker_no_subcommand' -a inspect -d 'Return low-level information on a container or image'
complete -c docker -A -f -n '__fish_seen_subcommand_from inspect' -s f -l format -d 'Format the output using the given go template.'
complete -c docker -A -f -n '__fish_seen_subcommand_from inspect' -l help -d 'Print usage'
complete -c docker -A -f -n '__fish_seen_subcommand_from inspect' -s s -l size -d 'Display total file sizes if the type is container.'
complete -c docker -A -f -n '__fish_seen_subcommand_from inspect' -a '(__fish_print_docker_images)' -d "Image"
complete -c docker -A -f -n '__fish_seen_subcommand_from inspect' -a '(__fish_print_docker_containers all)' -d "Container"
# kill
complete -c docker -f -n '__fish_docker_no_subcommand' -a kill -d 'Kill a running container'
complete -c docker -A -f -n '__fish_seen_subcommand_from kill' -l help -d 'Print usage'
complete -c docker -A -f -n '__fish_seen_subcommand_from kill' -s s -l signal -d 'Signal to send to the container'
complete -c docker -A -f -n '__fish_seen_subcommand_from kill' -a '(__fish_print_docker_containers running)' -d "Container"
# load
complete -c docker -f -n '__fish_docker_no_subcommand' -a load -d 'Load an image from a tar archive'
complete -c docker -A -f -n '__fish_seen_subcommand_from load' -l help -d 'Print usage'
complete -c docker -A -f -n '__fish_seen_subcommand_from load' -s i -l input -d 'Read from a tar archive file, instead of STDIN'
# login
complete -c docker -f -n '__fish_docker_no_subcommand' -a login -d 'Log in to a Docker registry server'
complete -c docker -A -f -n '__fish_seen_subcommand_from login' -l help -d 'Print usage'
complete -c docker -A -f -n '__fish_seen_subcommand_from login' -s p -l password -d 'Password'
complete -c docker -A -f -n '__fish_seen_subcommand_from login' -s u -l username -d 'Username'
# logout
complete -c docker -f -n '__fish_docker_no_subcommand' -a logout -d 'Log out from a Docker registry server'
# logs
complete -c docker -f -n '__fish_docker_no_subcommand' -a logs -d 'Fetch the logs of a container'
complete -c docker -A -f -n '__fish_seen_subcommand_from logs' -s f -l follow -d 'Follow log output'
complete -c docker -A -f -n '__fish_seen_subcommand_from logs' -l help -d 'Print usage'
complete -c docker -A -f -n '__fish_seen_subcommand_from logs' -s t -l timestamps -d 'Show timestamps'
complete -c docker -A -f -n '__fish_seen_subcommand_from logs' -l since -d 'Show logs since timestamp'
complete -c docker -A -f -n '__fish_seen_subcommand_from logs' -l tail -d 'Output the specified number of lines at the end of logs (defaults to all logs)'
complete -c docker -A -f -n '__fish_seen_subcommand_from logs' -a '(__fish_print_docker_containers running)' -d "Container"
# port
complete -c docker -f -n '__fish_docker_no_subcommand' -a port -d 'Look up the public-facing port that is NAT-ed to PRIVATE_PORT'
complete -c docker -A -f -n '__fish_seen_subcommand_from port' -l help -d 'Print usage'
complete -c docker -A -f -n '__fish_seen_subcommand_from port' -a '(__fish_print_docker_containers running)' -d "Container"
# pause
complete -c docker -f -n '__fish_docker_no_subcommand' -a pause -d 'Pause all processes within a container'
complete -c docker -A -f -n '__fish_seen_subcommand_from pause' -a '(__fish_print_docker_containers running)' -d "Container"
# ps
complete -c docker -f -n '__fish_docker_no_subcommand' -a ps -d 'List containers'
complete -c docker -A -f -n '__fish_seen_subcommand_from ps' -s a -l all -d 'Show all containers. Only running containers are shown by default.'
complete -c docker -A -f -n '__fish_seen_subcommand_from ps' -l before -d 'Show only containers created before Id or Name, include non-running ones.'
complete -c docker -A -f -n '__fish_seen_subcommand_from ps' -s f -l filter -d 'Provide filter values. Valid filters:'
complete -c docker -A -f -n '__fish_seen_subcommand_from ps' -l help -d 'Print usage'
complete -c docker -A -f -n '__fish_seen_subcommand_from ps' -s l -l latest -d 'Show only the latest created container, include non-running ones.'
complete -c docker -A -f -n '__fish_seen_subcommand_from ps' -s n -d 'Show n last created containers, include non-running ones.'
complete -c docker -A -f -n '__fish_seen_subcommand_from ps' -l no-trunc -d "Don't truncate output"
complete -c docker -A -f -n '__fish_seen_subcommand_from ps' -s q -l quiet -d 'Only display numeric IDs'
complete -c docker -A -f -n '__fish_seen_subcommand_from ps' -s s -l size -d 'Display total file sizes'
complete -c docker -A -f -n '__fish_seen_subcommand_from ps' -l since -d 'Show only containers created since Id or Name, include non-running ones.'
# pull
complete -c docker -f -n '__fish_docker_no_subcommand' -a pull -d 'Pull an image or a repository from a Docker registry server'
complete -c docker -A -f -n '__fish_seen_subcommand_from pull' -s a -l all-tags -d 'Download all tagged images in the repository'
complete -c docker -A -f -n '__fish_seen_subcommand_from pull' -l help -d 'Print usage'
complete -c docker -A -f -n '__fish_seen_subcommand_from pull' -a '(__fish_print_docker_images)' -d "Image"
complete -c docker -A -f -n '__fish_seen_subcommand_from pull' -a '(__fish_print_docker_repositories)' -d "Repository"
# push
complete -c docker -f -n '__fish_docker_no_subcommand' -a push -d 'Push an image or a repository to a Docker registry server'
complete -c docker -A -f -n '__fish_seen_subcommand_from push' -l help -d 'Print usage'
complete -c docker -A -f -n '__fish_seen_subcommand_from push' -a '(__fish_print_docker_images)' -d "Image"
complete -c docker -A -f -n '__fish_seen_subcommand_from push' -a '(__fish_print_docker_repositories)' -d "Repository"
# rename
complete -c docker -f -n '__fish_docker_no_subcommand' -a rename -d 'Rename an existing container'
# restart
complete -c docker -f -n '__fish_docker_no_subcommand' -a restart -d 'Restart a container'
complete -c docker -A -f -n '__fish_seen_subcommand_from restart' -l help -d 'Print usage'
complete -c docker -A -f -n '__fish_seen_subcommand_from restart' -s t -l time -d 'Number of seconds to wait for the container to stop before killing it. Once killed it will then be restarted. Default is 10 seconds.'
complete -c docker -A -f -n '__fish_seen_subcommand_from restart' -a '(__fish_print_docker_containers running)' -d "Container"
# rm
complete -c docker -f -n '__fish_docker_no_subcommand' -a rm -d 'Remove one or more containers'
complete -c docker -A -f -n '__fish_seen_subcommand_from rm' -s f -l force -d 'Force the removal of a running container (uses SIGKILL)'
complete -c docker -A -f -n '__fish_seen_subcommand_from rm' -l help -d 'Print usage'
complete -c docker -A -f -n '__fish_seen_subcommand_from rm' -s l -l link -d 'Remove the specified link and not the underlying container'
complete -c docker -A -f -n '__fish_seen_subcommand_from rm' -s v -l volumes -d 'Remove the volumes associated with the container'
complete -c docker -A -f -n '__fish_seen_subcommand_from rm' -a '(__fish_print_docker_containers stopped)' -d "Container"
complete -c docker -A -f -n '__fish_seen_subcommand_from rm' -s f -l force -a '(__fish_print_docker_containers all)' -d "Container"
# rmi
complete -c docker -f -n '__fish_docker_no_subcommand' -a rmi -d 'Remove one or more images'
complete -c docker -A -f -n '__fish_seen_subcommand_from rmi' -s f -l force -d 'Force removal of the image'
complete -c docker -A -f -n '__fish_seen_subcommand_from rmi' -l help -d 'Print usage'
complete -c docker -A -f -n '__fish_seen_subcommand_from rmi' -l no-prune -d 'Do not delete untagged parents'
complete -c docker -A -f -n '__fish_seen_subcommand_from rmi' -a '(__fish_print_docker_images)' -d "Image"
# run
complete -c docker -f -n '__fish_docker_no_subcommand' -a run -d 'Run a command in a new container'
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -s a -l attach -d 'Attach to STDIN, STDOUT or STDERR.'
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l add-host -d 'Add a custom host-to-IP mapping (host:ip)'
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -s c -l cpu-shares -d 'CPU shares (relative weight)'
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l cap-add -d 'Add Linux capabilities'
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l cap-drop -d 'Drop Linux capabilities'
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l cidfile -d 'Write the container ID to the file'
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l cpuset -d 'CPUs in which to allow execution (0-3, 0,1)'
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -s d -l detach -d 'Detached mode: run the container in the background and print the new container ID'
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l device -d 'Add a host device to the container (e.g. --device=/dev/sdc:/dev/xvdc:rwm)'
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l device-cgroup-rule -d 'Add a rule to the cgroup allowed devices list (e.g. --device-cgroup-rule="c 13:37 rwm")'
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l dns -d 'Set custom DNS servers'
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l dns-opt -d "Set custom DNS options (Use --dns-opt='' if you don't wish to set options)"
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l dns-search -d "Set custom DNS search domains (Use --dns-search=. if you don't wish to set the search domain)"
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -s e -l env -d 'Set environment variables'
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l entrypoint -d 'Overwrite the default ENTRYPOINT of the image'
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l env-file -d 'Read in a line delimited file of environment variables'
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l expose -d 'Expose a port or a range of ports (e.g. --expose=3300-3310) from the container without publishing it to your host'
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l group-add -d 'Add additional groups to run as'
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -s h -l hostname -d 'Container host name'
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l help -d 'Print usage'
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -s i -l interactive -d 'Keep STDIN open even if not attached'
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l ipc -d 'Default is to create a private IPC namespace (POSIX SysV IPC) for the container'
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l link -d 'Add link to another container in the form of <name|id>:alias'
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -s m -l memory -d 'Memory limit (format: <number>[<unit>], where unit = b, k, m or g)'
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l mac-address -d 'Container MAC address (e.g., 92:d0:c6:0a:29:33)'
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l memory-swap -d "Total memory usage (memory + swap), set '-1' to disable swap (format: <number>[<unit>], where unit = b, k, m or g)"
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l mount -d 'Attach a filesystem mount to the container'
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l name -d 'Assign a name to the container'
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l net -d 'Set the Network mode for the container'
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -s P -l publish-all -d 'Publish all exposed ports to random ports on the host interfaces'
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -s p -l publish -d "Publish a container's port to the host"
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l pid -d 'Default is to create a private PID namespace for the container'
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l privileged -d 'Give extended privileges to this container'
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l read-only -d "Mount the container's root filesystem as read only"
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l restart -d 'Restart policy to apply when a container exits (no, on-failure[:max-retry], always)'
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l rm -d 'Automatically remove the container when it exits (incompatible with -d)'
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l security-opt -d 'Security Options'
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l sig-proxy -d 'Proxy received signals to the process (non-TTY mode only). SIGCHLD, SIGSTOP, and SIGKILL are not proxied.'
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l stop-signal -d 'Signal to kill a container'
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -s t -l tty -d 'Allocate a pseudo-TTY'
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -s u -l user -d 'Username or UID'
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l tmpfs -d 'Mount tmpfs on a directory'
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -s v -l volume -d 'Bind mount a volume (e.g., from the host: -v /host:/container, from Docker: -v /container)'
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l volumes-from -d 'Mount volumes from the specified container(s)'
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -s w -l workdir -d 'Working directory inside the container'
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -a '(__fish_print_docker_images)' -d "Image"
# save
complete -c docker -f -n '__fish_docker_no_subcommand' -a save -d 'Save an image to a tar archive'
complete -c docker -A -f -n '__fish_seen_subcommand_from save' -l help -d 'Print usage'
complete -c docker -A -f -n '__fish_seen_subcommand_from save' -s o -l output -d 'Write to a file, instead of STDOUT'
complete -c docker -A -f -n '__fish_seen_subcommand_from save' -a '(__fish_print_docker_images)' -d "Image"
# search
complete -c docker -f -n '__fish_docker_no_subcommand' -a search -d 'Search for an image on the registry (defaults to the Docker Hub)'
complete -c docker -A -f -n '__fish_seen_subcommand_from search' -l automated -d 'Only show automated builds'
complete -c docker -A -f -n '__fish_seen_subcommand_from search' -l help -d 'Print usage'
complete -c docker -A -f -n '__fish_seen_subcommand_from search' -l no-trunc -d "Don't truncate output"
complete -c docker -A -f -n '__fish_seen_subcommand_from search' -s s -l stars -d 'Only display results with at least x stars'
# start
complete -c docker -f -n '__fish_docker_no_subcommand' -a start -d 'Start a container'
complete -c docker -A -f -n '__fish_seen_subcommand_from start' -s a -l attach -d "Attach container's STDOUT and STDERR and forward all signals to the process"
complete -c docker -A -f -n '__fish_seen_subcommand_from start' -l help -d 'Print usage'
complete -c docker -A -f -n '__fish_seen_subcommand_from start' -s i -l interactive -d "Attach container's STDIN"
complete -c docker -A -f -n '__fish_seen_subcommand_from start' -a '(__fish_print_docker_containers stopped)' -d "Container"
# stats
complete -c docker -f -n '__fish_docker_no_subcommand' -a stats -d "Display a live stream of one or more containers' resource usage statistics"
complete -c docker -A -f -n '__fish_seen_subcommand_from stats' -l help -d 'Print usage'
complete -c docker -A -f -n '__fish_seen_subcommand_from stats' -l no-stream -d 'Disable streaming stats and only pull the first result'
complete -c docker -A -f -n '__fish_seen_subcommand_from stats' -a '(__fish_print_docker_containers running)' -d "Container"
# stop
complete -c docker -f -n '__fish_docker_no_subcommand' -a stop -d 'Stop a container'
complete -c docker -A -f -n '__fish_seen_subcommand_from stop' -l help -d 'Print usage'
complete -c docker -A -f -n '__fish_seen_subcommand_from stop' -s t -l time -d 'Number of seconds to wait for the container to stop before killing it. Default is 10 seconds.'
complete -c docker -A -f -n '__fish_seen_subcommand_from stop' -a '(__fish_print_docker_containers running)' -d "Container"
# tag
complete -c docker -f -n '__fish_docker_no_subcommand' -a tag -d 'Tag an image into a repository'
complete -c docker -A -f -n '__fish_seen_subcommand_from tag' -s f -l force -d 'Force'
complete -c docker -A -f -n '__fish_seen_subcommand_from tag' -l help -d 'Print usage'
# top
complete -c docker -f -n '__fish_docker_no_subcommand' -a top -d 'Look up the running processes of a container'
complete -c docker -A -f -n '__fish_seen_subcommand_from top' -l help -d 'Print usage'
complete -c docker -A -f -n '__fish_seen_subcommand_from top' -a '(__fish_print_docker_containers running)' -d "Container"
# unpause
complete -c docker -f -n '__fish_docker_no_subcommand' -a unpause -d 'Unpause a paused container'
complete -c docker -A -f -n '__fish_seen_subcommand_from unpause' -a '(__fish_print_docker_containers running)' -d "Container"
# version
complete -c docker -f -n '__fish_docker_no_subcommand' -a version -d 'Show the Docker version information'
complete -c docker -A -f -n '__fish_seen_subcommand_from version' -s f -l format -d 'Format the output using the given go template'
complete -c docker -A -f -n '__fish_seen_subcommand_from version' -l help -d 'Print usage'
# wait
complete -c docker -f -n '__fish_docker_no_subcommand' -a wait -d 'Block until a container stops, then print its exit code'
complete -c docker -A -f -n '__fish_seen_subcommand_from wait' -l help -d 'Print usage'
complete -c docker -A -f -n '__fish_seen_subcommand_from wait' -a '(__fish_print_docker_containers running)' -d "Container"

View File

@@ -1 +0,0 @@
See https://github.com/samneirinck/posh-docker

View File

@@ -1,2 +0,0 @@
Tianon Gravi <admwiggin@gmail.com> (@tianon)
Jessie Frazelle <jess@docker.com> (@jfrazelle)

File diff suppressed because it is too large

View File

@@ -1,6 +1,8 @@
package daemon
import (
"io"
"github.com/Sirupsen/logrus"
"github.com/docker/distribution/reference"
"github.com/docker/docker/api/types"
@@ -12,7 +14,6 @@ import (
"github.com/docker/docker/registry"
"github.com/pkg/errors"
"golang.org/x/net/context"
"io"
)
type releaseableLayer struct {
@@ -104,13 +105,19 @@ func (daemon *Daemon) pullForBuilder(ctx context.Context, name string, authConfi
// Every call to GetImageAndReleasableLayer MUST call releasableLayer.Release() to prevent
// leaking of layers.
func (daemon *Daemon) GetImageAndReleasableLayer(ctx context.Context, refOrID string, opts backend.GetImageAndLayerOptions) (builder.Image, builder.ReleaseableLayer, error) {
if !opts.ForcePull {
image, _ := daemon.GetImage(refOrID)
id, _ := daemon.GetImageID(refOrID)
refIsID := id.String() == refOrID // detect if ref is an ID to skip pulling
if refIsID || !opts.ForcePull {
image, err := daemon.GetImage(refOrID)
// TODO: shouldn't we error out if error is different from "not found" ?
if image != nil {
layer, err := newReleasableLayerForImage(image, daemon.layerStore)
return image, layer, err
}
if refIsID {
return nil, nil, err
}
}
image, err := daemon.pullForBuilder(ctx, refOrID, opts.AuthConfig, opts.Output)
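
The comment above states a contract rather than showing its caller side: every caller of GetImageAndReleasableLayer must release the layer it gets back. A minimal sketch of that caller-side pattern, using simplified stand-in types rather than the real builder interfaces:

```go
package main

import "fmt"

// releaseableLayer is a simplified stand-in for builder.ReleaseableLayer.
type releaseableLayer struct{ id string }

func (l *releaseableLayer) Release() error {
	fmt.Println("released layer", l.id)
	return nil
}

// getImageAndReleasableLayer mimics only the shape of the daemon method: the
// returned layer must be released by the caller.
func getImageAndReleasableLayer(ref string) (string, *releaseableLayer, error) {
	return "image:" + ref, &releaseableLayer{id: ref}, nil
}

func main() {
	img, layer, err := getImageAndReleasableLayer("busybox")
	if err != nil {
		return
	}
	defer layer.Release() // the MUST-call-Release part of the contract
	fmt.Println("building from", img)
}
```

The defer keeps the release tied to the call site even when the build path returns early.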

View File

@@ -100,7 +100,6 @@ func serviceSpecFromGRPC(spec *swarmapi.ServiceSpec) (*types.ServiceSpec, error)
return nil, fmt.Errorf("unknown task runtime type: %s", t.Generic.Payload.TypeUrl)
}
taskTemplate.RuntimeData = t.Generic.Payload.Value
default:
return nil, fmt.Errorf("error creating service; unsupported runtime %T", t)
}
@@ -176,7 +175,6 @@ func ServiceSpecToGRPC(s types.ServiceSpec) (swarmapi.ServiceSpec, error) {
Kind: string(types.RuntimePlugin),
Payload: &gogotypes.Any{
TypeUrl: string(types.RuntimeURLPlugin),
Value: s.TaskTemplate.RuntimeData,
},
},
}

View File

@@ -31,9 +31,10 @@ func SwarmFromGRPC(c swarmapi.Cluster) types.Swarm {
AutoLockManagers: c.Spec.EncryptionConfig.AutoLockManagers,
},
CAConfig: types.CAConfig{
// do not include the signing CA key (it should already be redacted via the swarm APIs)
SigningCACert: string(c.Spec.CAConfig.SigningCACert),
ForceRotate: c.Spec.CAConfig.ForceRotate,
// do not include the signing CA cert or key (it should already be redacted via the swarm APIs) -
// the key because it's secret, and the cert because otherwise doing a get + update on the spec
// can cause issues because the key would be missing and the cert wouldn't
ForceRotate: c.Spec.CAConfig.ForceRotate,
},
},
TLSInfo: types.TLSInfo{

View File

@@ -495,7 +495,6 @@ func getEndpointConfig(na *api.NetworkAttachment, b executorpkg.Backend) *networ
IPv4Address: ipv4,
IPv6Address: ipv6,
},
Aliases: na.Aliases,
DriverOpts: na.DriverAttachmentOpts,
}
if v, ok := na.Network.Spec.Annotations.Labels["com.docker.swarm.predefined"]; ok && v == "true" {

View File

@@ -88,10 +88,6 @@ func (c *Cluster) Init(req types.InitRequest) (string, error) {
}
}
if !req.ForceNewCluster {
clearPersistentState(c.root)
}
nr, err := c.newNodeRunner(nodeStartConfig{
forceNewCluster: req.ForceNewCluster,
autolock: req.AutoLockManagers,
@@ -109,16 +105,14 @@ func (c *Cluster) Init(req types.InitRequest) (string, error) {
c.mu.Unlock()
if err := <-nr.Ready(); err != nil {
c.mu.Lock()
c.nr = nil
c.mu.Unlock()
if !req.ForceNewCluster { // if failure on first attempt don't keep state
if err := clearPersistentState(c.root); err != nil {
return "", err
}
}
if err != nil {
c.mu.Lock()
c.nr = nil
c.mu.Unlock()
}
return "", err
}
state := nr.State()
@@ -166,8 +160,6 @@ func (c *Cluster) Join(req types.JoinRequest) error {
return err
}
clearPersistentState(c.root)
nr, err := c.newNodeRunner(nodeStartConfig{
RemoteAddr: req.RemoteAddrs[0],
ListenAddr: net.JoinHostPort(listenHost, listenPort),
@@ -193,6 +185,9 @@ func (c *Cluster) Join(req types.JoinRequest) error {
c.mu.Lock()
c.nr = nil
c.mu.Unlock()
if err := clearPersistentState(c.root); err != nil {
return err
}
}
return err
}

View File

@@ -11,52 +11,45 @@ import (
// GetTasks returns a list of tasks matching the filter options.
func (c *Cluster) GetTasks(options apitypes.TaskListOptions) ([]types.Task, error) {
c.mu.RLock()
defer c.mu.RUnlock()
var r *swarmapi.ListTasksResponse
state := c.currentNodeState()
if !state.IsActiveManager() {
return nil, c.errNoManager(state)
}
byName := func(filter filters.Args) error {
if filter.Include("service") {
serviceFilters := filter.Get("service")
for _, serviceFilter := range serviceFilters {
service, err := c.GetService(serviceFilter, false)
if err != nil {
return err
if err := c.lockedManagerAction(func(ctx context.Context, state nodeState) error {
byName := func(filter filters.Args) error {
if filter.Include("service") {
serviceFilters := filter.Get("service")
for _, serviceFilter := range serviceFilters {
service, err := getService(ctx, state.controlClient, serviceFilter, false)
if err != nil {
return err
}
filter.Del("service", serviceFilter)
filter.Add("service", service.ID)
}
filter.Del("service", serviceFilter)
filter.Add("service", service.ID)
}
}
if filter.Include("node") {
nodeFilters := filter.Get("node")
for _, nodeFilter := range nodeFilters {
node, err := c.GetNode(nodeFilter)
if err != nil {
return err
if filter.Include("node") {
nodeFilters := filter.Get("node")
for _, nodeFilter := range nodeFilters {
node, err := getNode(ctx, state.controlClient, nodeFilter)
if err != nil {
return err
}
filter.Del("node", nodeFilter)
filter.Add("node", node.ID)
}
filter.Del("node", nodeFilter)
filter.Add("node", node.ID)
}
return nil
}
return nil
}
filters, err := newListTasksFilters(options.Filters, byName)
if err != nil {
return nil, err
}
filters, err := newListTasksFilters(options.Filters, byName)
if err != nil {
return err
}
ctx, cancel := c.getRequestContext()
defer cancel()
r, err := state.controlClient.ListTasks(
ctx,
&swarmapi.ListTasksRequest{Filters: filters})
if err != nil {
r, err = state.controlClient.ListTasks(
ctx,
&swarmapi.ListTasksRequest{Filters: filters})
return err
}); err != nil {
return nil, err
}

View File

@@ -116,6 +116,17 @@ type Daemon struct {
diskUsageRunning int32
pruneRunning int32
hosts map[string]bool // hosts stores the addresses the daemon is listening on
}
// StoreHosts stores the addresses the daemon is listening on
func (daemon *Daemon) StoreHosts(hosts []string) {
if daemon.hosts == nil {
daemon.hosts = make(map[string]bool)
}
for _, h := range hosts {
daemon.hosts[h] = true
}
}
// HasExperimental returns whether the experimental features of the daemon are enabled or not

View File

@@ -68,17 +68,17 @@ func getMemoryResources(config containertypes.Resources) *specs.LinuxMemory {
memory := specs.LinuxMemory{}
if config.Memory > 0 {
limit := uint64(config.Memory)
limit := config.Memory
memory.Limit = &limit
}
if config.MemoryReservation > 0 {
reservation := uint64(config.MemoryReservation)
reservation := config.MemoryReservation
memory.Reservation = &reservation
}
if config.MemorySwap > 0 {
swap := uint64(config.MemorySwap)
swap := config.MemorySwap
memory.Swap = &swap
}
@@ -88,7 +88,7 @@ func getMemoryResources(config containertypes.Resources) *specs.LinuxMemory {
}
if config.KernelMemory != 0 {
kernelMemory := uint64(config.KernelMemory)
kernelMemory := config.KernelMemory
memory.Kernel = &kernelMemory
}

View File

@@ -1,62 +0,0 @@
package daemon
import (
"fmt"
"os"
"path/filepath"
"strings"
"time"
"github.com/davecgh/go-spew/spew"
"github.com/pkg/errors"
)
const dataStructuresLogNameTemplate = "daemon-data-%s.log"
// dumpDaemon appends the daemon datastructures into file in dir and returns full path
// to that file.
func (d *Daemon) dumpDaemon(dir string) (string, error) {
// Ensure we recover from a panic as we are doing this without any locking
defer func() {
recover()
}()
path := filepath.Join(dir, fmt.Sprintf(dataStructuresLogNameTemplate, strings.Replace(time.Now().Format(time.RFC3339), ":", "", -1)))
f, err := os.OpenFile(path, os.O_CREATE|os.O_WRONLY, 0666)
if err != nil {
return "", errors.Wrap(err, "failed to open file to write the daemon datastructure dump")
}
defer f.Close()
dump := struct {
containers interface{}
names interface{}
links interface{}
execs interface{}
volumes interface{}
images interface{}
layers interface{}
imageReferences interface{}
downloads interface{}
uploads interface{}
registry interface{}
plugins interface{}
}{
containers: d.containers,
execs: d.execCommands,
volumes: d.volumes,
images: d.imageStore,
layers: d.layerStore,
imageReferences: d.referenceStore,
downloads: d.downloadManager,
uploads: d.uploadManager,
registry: d.RegistryService,
plugins: d.PluginStore,
names: d.nameIndex,
links: d.linkIndex,
}
spew.Fdump(f, dump) // Does not return an error
f.Sync()
return path, nil
}

View File

@@ -22,12 +22,6 @@ func (d *Daemon) setupDumpStackTrap(root string) {
} else {
logrus.Infof("goroutine stacks written to %s", path)
}
path, err = d.dumpDaemon(root)
if err != nil {
logrus.WithError(err).Error("failed to write daemon datastructure dump")
} else {
logrus.Infof("daemon datastructure dump written to %s", path)
}
}
}()
}

View File

@@ -41,12 +41,6 @@ func (d *Daemon) setupDumpStackTrap(root string) {
} else {
logrus.Infof("goroutine stacks written to %s", path)
}
path, err = d.dumpDaemon(root)
if err != nil {
logrus.WithError(err).Error("failed to write daemon datastructure dump")
} else {
logrus.Infof("daemon datastructure dump written to %s", path)
}
}
}()
}

View File

@@ -117,7 +117,7 @@ func (daemon *Daemon) cleanupContainer(container *container.Container, forceRemo
if container.RWLayer != nil {
metadata, err := daemon.layerStore.ReleaseRWLayer(container.RWLayer)
layer.LogReleaseMetadata(metadata)
if err != nil && err != layer.ErrMountDoesNotExist {
if err != nil && err != layer.ErrMountDoesNotExist && !os.IsNotExist(errors.Cause(err)) {
return errors.Wrapf(err, "driver %q failed to remove root filesystem for %s", daemon.GraphDriverName(), container.ID)
}
}

View File

@@ -37,8 +37,6 @@ import (
"time"
"github.com/Sirupsen/logrus"
"github.com/vbatts/tar-split/tar/storage"
"github.com/docker/docker/daemon/graphdriver"
"github.com/docker/docker/pkg/archive"
"github.com/docker/docker/pkg/chrootarchive"
@@ -47,9 +45,11 @@ import (
"github.com/docker/docker/pkg/locker"
mountpk "github.com/docker/docker/pkg/mount"
"github.com/docker/docker/pkg/system"
"github.com/vbatts/tar-split/tar/storage"
rsystem "github.com/opencontainers/runc/libcontainer/system"
"github.com/opencontainers/selinux/go-selinux/label"
"github.com/pkg/errors"
)
var (
@@ -284,30 +284,41 @@ func (a *Driver) Remove(id string) error {
mountpoint = a.getMountpoint(id)
}
logger := logrus.WithFields(logrus.Fields{
"module": "graphdriver",
"driver": "aufs",
"layer": id,
})
var retries int
for {
mounted, err := a.mounted(mountpoint)
if err != nil {
if os.IsNotExist(err) {
break
}
return err
}
if !mounted {
break
}
if err := a.unmount(mountpoint); err != nil {
if err != syscall.EBUSY {
return fmt.Errorf("aufs: unmount error: %s: %v", mountpoint, err)
}
if retries >= 5 {
return fmt.Errorf("aufs: unmount error after retries: %s: %v", mountpoint, err)
}
// If unmount returns EBUSY, it could be a transient error. Sleep and retry.
retries++
logrus.Warnf("unmount failed due to EBUSY: retry count: %d", retries)
time.Sleep(100 * time.Millisecond)
continue
err = a.unmount(mountpoint)
if err == nil {
break
}
break
if err != syscall.EBUSY {
return errors.Wrapf(err, "aufs: unmount error: %s", mountpoint)
}
if retries >= 5 {
return errors.Wrapf(err, "aufs: unmount error after retries: %s", mountpoint)
}
// If unmount returns EBUSY, it could be a transient error. Sleep and retry.
retries++
logger.Warnf("unmount failed due to EBUSY: retry count: %d", retries)
time.Sleep(100 * time.Millisecond)
continue
}
// Atomically remove each directory in turn by first moving it out of the
@@ -316,21 +327,22 @@ func (a *Driver) Remove(id string) error {
tmpMntPath := path.Join(a.mntPath(), fmt.Sprintf("%s-removing", id))
if err := os.Rename(mountpoint, tmpMntPath); err != nil && !os.IsNotExist(err) {
if err == syscall.EBUSY {
logrus.Warn("os.Rename err due to EBUSY")
logger.WithField("dir", mountpoint).WithError(err).Warn("os.Rename err due to EBUSY")
}
return err
return errors.Wrapf(err, "error preparing atomic delete of aufs mountpoint for id: %s", id)
}
if err := system.EnsureRemoveAll(tmpMntPath); err != nil {
return errors.Wrapf(err, "error removing aufs layer %s", id)
}
defer system.EnsureRemoveAll(tmpMntPath)
tmpDiffpath := path.Join(a.diffPath(), fmt.Sprintf("%s-removing", id))
if err := os.Rename(a.getDiffPath(id), tmpDiffpath); err != nil && !os.IsNotExist(err) {
return err
return errors.Wrapf(err, "error preparing atomic delete of aufs diff dir for id: %s", id)
}
defer system.EnsureRemoveAll(tmpDiffpath)
// Remove the layers file for the id
if err := os.Remove(path.Join(a.rootPath(), "layers", id)); err != nil && !os.IsNotExist(err) {
return err
return errors.Wrapf(err, "error removing layers dir for %s", id)
}
a.pathCacheLock.Lock()
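
The rename-to-`<id>-removing`-then-delete sequence above is a general pattern for removing layer directories without ever leaving a half-deleted tree under the original name. A standalone sketch of the idea, assuming nothing from the driver (the helper below is hypothetical, not the AUFS code):

```go
package main

import (
	"fmt"
	"io/ioutil"
	"os"
	"path/filepath"
)

// atomicRemoveAll moves the directory aside first so that, if the removal is
// interrupted, the original path is already gone and a later retry only has
// the "-removing" leftover to clean up.
func atomicRemoveAll(dir string) error {
	tmp := dir + "-removing"
	if err := os.Rename(dir, tmp); err != nil && !os.IsNotExist(err) {
		return err
	}
	return os.RemoveAll(tmp)
}

func main() {
	dir, err := ioutil.TempDir("", "layer")
	if err != nil {
		panic(err)
	}
	if err := ioutil.WriteFile(filepath.Join(dir, "data"), []byte("x"), 0644); err != nil {
		panic(err)
	}
	fmt.Println(atomicRemoveAll(dir)) // <nil>
}
```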

View File

@@ -64,31 +64,35 @@ type cmdProbe struct {
// exec the healthcheck command in the container.
// Returns the exit code and probe output (if any)
func (p *cmdProbe) run(ctx context.Context, d *Daemon, container *container.Container) (*types.HealthcheckResult, error) {
cmdSlice := strslice.StrSlice(container.Config.Healthcheck.Test)[1:]
func (p *cmdProbe) run(ctx context.Context, d *Daemon, cntr *container.Container) (*types.HealthcheckResult, error) {
cmdSlice := strslice.StrSlice(cntr.Config.Healthcheck.Test)[1:]
if p.shell {
cmdSlice = append(getShell(container.Config), cmdSlice...)
cmdSlice = append(getShell(cntr.Config), cmdSlice...)
}
entrypoint, args := d.getEntrypointAndArgs(strslice.StrSlice{}, cmdSlice)
execConfig := exec.NewConfig()
execConfig.OpenStdin = false
execConfig.OpenStdout = true
execConfig.OpenStderr = true
execConfig.ContainerID = container.ID
execConfig.ContainerID = cntr.ID
execConfig.DetachKeys = []byte{}
execConfig.Entrypoint = entrypoint
execConfig.Args = args
execConfig.Tty = false
execConfig.Privileged = false
execConfig.User = container.Config.User
execConfig.Env = container.Config.Env
execConfig.User = cntr.Config.User
d.registerExecCommand(container, execConfig)
d.LogContainerEvent(container, "exec_create: "+execConfig.Entrypoint+" "+strings.Join(execConfig.Args, " "))
linkedEnv, err := d.setupLinkedContainers(cntr)
if err != nil {
return nil, err
}
execConfig.Env = container.ReplaceOrAppendEnvValues(cntr.CreateDaemonEnvironment(execConfig.Tty, linkedEnv), execConfig.Env)
d.registerExecCommand(cntr, execConfig)
d.LogContainerEvent(cntr, "exec_create: "+execConfig.Entrypoint+" "+strings.Join(execConfig.Args, " "))
output := &limitedBuffer{}
err := d.ContainerExecStart(ctx, execConfig.ID, nil, output, output)
err = d.ContainerExecStart(ctx, execConfig.ID, nil, output, output)
if err != nil {
return nil, err
}
@@ -97,7 +101,7 @@ func (p *cmdProbe) run(ctx context.Context, d *Daemon, container *container.Cont
return nil, err
}
if info.ExitCode == nil {
return nil, fmt.Errorf("Healthcheck for container %s has no exit code!", container.ID)
return nil, fmt.Errorf("Healthcheck for container %s has no exit code!", cntr.ID)
}
// Note: Go's json package will handle invalid UTF-8 for us
out := output.String()
@@ -182,7 +186,7 @@ func monitor(d *Daemon, c *container.Container, stop chan struct{}, probe probe)
logrus.Debugf("Running health check for container %s ...", c.ID)
startTime := time.Now()
ctx, cancelProbe := context.WithTimeout(context.Background(), probeTimeout)
results := make(chan *types.HealthcheckResult)
results := make(chan *types.HealthcheckResult, 1)
go func() {
healthChecksCounter.Inc()
result, err := probe.run(ctx, d, c)
@@ -205,8 +209,10 @@ func monitor(d *Daemon, c *container.Container, stop chan struct{}, probe probe)
select {
case <-stop:
logrus.Debugf("Stop healthcheck monitoring for container %s (received while probing)", c.ID)
// Stop timeout and kill probe, but don't wait for probe to exit.
cancelProbe()
// Wait for probe to exit (it might take a while to respond to the TERM
// signal and we don't want dying probes to pile up).
<-results
return
case result := <-results:
handleProbeResult(d, c, result, stop)
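
The hunk above makes the results channel buffered and has the stop path wait for the probe to exit. A self-contained sketch of why the one-slot buffer matters, with simplified types in place of the daemon's (runProbe here is a stand-in, not the real probe):

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// runProbe finishes on its own or when its context is cancelled.
func runProbe(ctx context.Context) string {
	select {
	case <-time.After(50 * time.Millisecond):
		return "healthy"
	case <-ctx.Done():
		return "cancelled"
	}
}

func main() {
	results := make(chan string, 1) // buffered: the probe's send never blocks
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)

	go func() { results <- runProbe(ctx) }()

	stop := make(chan struct{})
	close(stop) // simulate a stop request arriving before the probe finishes

	select {
	case <-stop:
		cancel()  // kill the probe...
		<-results // ...then wait for it so dying probes don't pile up
		fmt.Println("stopped; probe drained")
	case result := <-results:
		cancel()
		fmt.Println("probe result:", result)
	}
}
```

Because the send always completes, the probe goroutine never leaks, and the receiver is free to drain it after cancelling.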

View File

@@ -69,7 +69,7 @@ func (daemon *Daemon) killWithSignal(container *containerpkg.Container, sig int)
return errNotRunning{container.ID}
}
if container.Config.StopSignal != "" {
if container.Config.StopSignal != "" && syscall.Signal(sig) != syscall.SIGKILL {
containerStopSignal, err := signal.ParseSignal(container.Config.StopSignal)
if err != nil {
return err

View File

@@ -3,6 +3,7 @@ package logger
import (
"io"
"os"
"strings"
"sync"
"time"
@@ -18,6 +19,7 @@ type pluginAdapter struct {
driverName string
id string
plugin logPlugin
basePath string
fifoPath string
capabilities Capability
logInfo Info
@@ -56,7 +58,7 @@ func (a *pluginAdapter) Close() error {
a.mu.Lock()
defer a.mu.Unlock()
if err := a.plugin.StopLogging(a.fifoPath); err != nil {
if err := a.plugin.StopLogging(strings.TrimPrefix(a.fifoPath, a.basePath)); err != nil {
return err
}

View File

@@ -112,9 +112,10 @@ func (s *journald) Log(msg *logger.Message) error {
}
line := string(msg.Line)
source := msg.Source
logger.PutMessage(msg)
if msg.Source == "stderr" {
if source == "stderr" {
return journal.Send(line, journal.PriErr, vars)
}
return journal.Send(line, journal.PriInfo, vars)

View File

@@ -7,6 +7,7 @@ import (
"bytes"
"encoding/json"
"fmt"
"io"
"strconv"
"sync"
@@ -15,6 +16,7 @@ import (
"github.com/docker/docker/daemon/logger/loggerutils"
"github.com/docker/docker/pkg/jsonlog"
"github.com/docker/go-units"
"github.com/pkg/errors"
)
// Name is the name of the file that the jsonlogger logs to.
@@ -22,12 +24,13 @@ const Name = "json-file"
// JSONFileLogger is Logger implementation for default Docker logging.
type JSONFileLogger struct {
buf *bytes.Buffer
writer *loggerutils.RotateFileWriter
mu sync.Mutex
readers map[*logger.LogWatcher]struct{} // stores the active log followers
extra []byte // json-encoded extra attributes
extra []byte // json-encoded extra attributes
mu sync.RWMutex
buf *bytes.Buffer // avoids allocating a new buffer on each call to `Log()`
closed bool
writer *loggerutils.RotateFileWriter
readers map[*logger.LogWatcher]struct{} // stores the active log followers
}
func init() {
@@ -90,33 +93,45 @@ func New(info logger.Info) (logger.Logger, error) {
// Log converts logger.Message to jsonlog.JSONLog and serializes it to file.
func (l *JSONFileLogger) Log(msg *logger.Message) error {
l.mu.Lock()
err := writeMessageBuf(l.writer, msg, l.extra, l.buf)
l.buf.Reset()
l.mu.Unlock()
return err
}
func writeMessageBuf(w io.Writer, m *logger.Message, extra json.RawMessage, buf *bytes.Buffer) error {
if err := marshalMessage(m, extra, buf); err != nil {
logger.PutMessage(m)
return err
}
logger.PutMessage(m)
if _, err := w.Write(buf.Bytes()); err != nil {
return errors.Wrap(err, "error writing log entry")
}
return nil
}
func marshalMessage(msg *logger.Message, extra json.RawMessage, buf *bytes.Buffer) error {
timestamp, err := jsonlog.FastTimeMarshalJSON(msg.Timestamp)
if err != nil {
return err
}
l.mu.Lock()
logline := msg.Line
logLine := msg.Line
if !msg.Partial {
logline = append(msg.Line, '\n')
logLine = append(msg.Line, '\n')
}
err = (&jsonlog.JSONLogs{
Log: logline,
Log: logLine,
Stream: msg.Source,
Created: timestamp,
RawAttrs: l.extra,
}).MarshalJSONBuf(l.buf)
logger.PutMessage(msg)
RawAttrs: extra,
}).MarshalJSONBuf(buf)
if err != nil {
l.mu.Unlock()
return err
return errors.Wrap(err, "error writing log message to buffer")
}
l.buf.WriteByte('\n')
_, err = l.writer.Write(l.buf.Bytes())
l.buf.Reset()
l.mu.Unlock()
return err
err = buf.WriteByte('\n')
return errors.Wrap(err, "error finalizing log buffer")
}
// ValidateLogOpt looks for json specific log options max-file & max-size.

View File

@@ -3,7 +3,6 @@ package jsonfilelog
import (
"bytes"
"encoding/json"
"errors"
"fmt"
"io"
"os"
@@ -18,6 +17,7 @@ import (
"github.com/docker/docker/pkg/ioutils"
"github.com/docker/docker/pkg/jsonlog"
"github.com/docker/docker/pkg/tailfile"
"github.com/pkg/errors"
)
const maxJSONDecodeRetry = 20000
@@ -48,10 +48,11 @@ func (l *JSONFileLogger) ReadLogs(config logger.ReadConfig) *logger.LogWatcher {
func (l *JSONFileLogger) readLogs(logWatcher *logger.LogWatcher, config logger.ReadConfig) {
defer close(logWatcher.Msg)
// lock so the read stream doesn't get corrupted due to rotations or other log data written while we read
// lock so the read stream doesn't get corrupted due to rotations or other log data written while we open these files
// This will block writes!!!
l.mu.Lock()
l.mu.RLock()
// TODO it would be nice to move a lot of this reader implementation to the rotate logger object
pth := l.writer.LogPath()
var files []io.ReadSeeker
for i := l.writer.MaxFiles(); i > 1; i-- {
@@ -59,25 +60,36 @@ func (l *JSONFileLogger) readLogs(logWatcher *logger.LogWatcher, config logger.R
if err != nil {
if !os.IsNotExist(err) {
logWatcher.Err <- err
break
l.mu.RUnlock()
return
}
continue
}
defer f.Close()
files = append(files, f)
}
latestFile, err := os.Open(pth)
if err != nil {
logWatcher.Err <- err
l.mu.Unlock()
logWatcher.Err <- errors.Wrap(err, "error opening latest log file")
l.mu.RUnlock()
return
}
defer latestFile.Close()
latestChunk, err := newSectionReader(latestFile)
// Now we have the reader sectioned, all fd's opened, we can unlock.
// New writes/rotates will not affect seeking through these files
l.mu.RUnlock()
if err != nil {
logWatcher.Err <- err
return
}
if config.Tail != 0 {
tailer := ioutils.MultiReadSeeker(append(files, latestFile)...)
tailer := ioutils.MultiReadSeeker(append(files, latestChunk)...)
tailFile(tailer, logWatcher, config.Tail, config.Since)
}
@@ -89,19 +101,14 @@ func (l *JSONFileLogger) readLogs(logWatcher *logger.LogWatcher, config logger.R
}
if !config.Follow || l.closed {
l.mu.Unlock()
return
}
if config.Tail >= 0 {
latestFile.Seek(0, os.SEEK_END)
}
notifyRotate := l.writer.NotifyRotate()
defer l.writer.NotifyRotateEvict(notifyRotate)
l.mu.Lock()
l.readers[logWatcher] = struct{}{}
l.mu.Unlock()
followLogs(latestFile, logWatcher, notifyRotate, config.Since)
@@ -111,6 +118,16 @@ func (l *JSONFileLogger) readLogs(logWatcher *logger.LogWatcher, config logger.R
l.mu.Unlock()
}
func newSectionReader(f *os.File) (*io.SectionReader, error) {
// seek to the end to get the size
// we'll leave this at the end of the file since section reader does not advance the reader
size, err := f.Seek(0, os.SEEK_END)
if err != nil {
return nil, errors.Wrap(err, "error getting current file size")
}
return io.NewSectionReader(f, 0, size), nil
}
func tailFile(f io.ReadSeeker, logWatcher *logger.LogWatcher, tail int, since time.Time) {
var rdr io.Reader
rdr = f
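
newSectionReader above snapshots the file's current size so that log data appended after the readers are set up does not bleed into the tail being served. A small sketch of that io.SectionReader behaviour in isolation (the temp file and its contents are illustrative only):

```go
package main

import (
	"fmt"
	"io"
	"io/ioutil"
	"os"
)

func main() {
	f, err := ioutil.TempFile("", "log")
	if err != nil {
		panic(err)
	}
	defer os.Remove(f.Name())

	f.WriteString("line1\n")

	// Snapshot the current size; the SectionReader only ever reads [0, size).
	size, _ := f.Seek(0, io.SeekEnd)
	section := io.NewSectionReader(f, 0, size)

	// A write that lands after the snapshot, as a new log line would.
	f.WriteString("line2\n")

	snapshot, _ := ioutil.ReadAll(section)
	fmt.Printf("%q\n", snapshot) // "line1\n" only
}
```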

View File

@@ -59,6 +59,7 @@ func makePluginCreator(name string, l *logPluginProxy, basePath string) Creator
driverName: name,
id: id,
plugin: l,
basePath: basePath,
fifoPath: filepath.Join(root, id),
logInfo: logCtx,
}

View File

@@ -133,8 +133,9 @@ func New(info logger.Info) (logger.Logger, error) {
func (s *syslogger) Log(msg *logger.Message) error {
line := string(msg.Line)
source := msg.Source
logger.PutMessage(msg)
if msg.Source == "stderr" {
if source == "stderr" {
return s.writer.Err(line)
}
return s.writer.Info(line)

View File

@@ -46,7 +46,8 @@ func (daemon *Daemon) StateChanged(id string, e libcontainerd.StateInfo) error {
c.StreamConfig.Wait()
c.Reset(false)
restart, wait, err := c.RestartManager().ShouldRestart(e.ExitCode, c.HasBeenManuallyStopped, time.Since(c.StartedAt))
// If daemon is being shutdown, don't let the container restart
restart, wait, err := c.RestartManager().ShouldRestart(e.ExitCode, daemon.IsShuttingDown() || c.HasBeenManuallyStopped, time.Since(c.StartedAt))
if err == nil && restart {
c.RestartCount++
c.SetRestarting(platformConstructExitStatus(e))

View File

@@ -216,7 +216,7 @@ func (daemon *Daemon) ImagesPrune(ctx context.Context, pruneFilters filters.Args
if !until.IsZero() && img.Created.After(until) {
continue
}
if !matchLabels(pruneFilters, img.Config.Labels) {
if img.Config != nil && !matchLabels(pruneFilters, img.Config.Labels) {
continue
}
topImages[id] = img

View File

@@ -9,7 +9,6 @@ import (
"github.com/docker/docker/container"
"github.com/docker/docker/layer"
"github.com/docker/docker/libcontainerd"
"github.com/docker/docker/pkg/system"
"golang.org/x/sys/windows/registry"
)
@@ -32,12 +31,6 @@ func (daemon *Daemon) getLibcontainerdCreateOptions(container *container.Contain
}
dnsSearch := daemon.getDNSSearchSettings(container)
if dnsSearch != nil {
osv := system.GetOSVersion()
if osv.Build < 14997 {
return nil, fmt.Errorf("dns-search option is not supported on the current platform")
}
}
// Generate the layer folder of the layer options
layerOpts := &libcontainerd.LayerOption{}

View File

@@ -6,6 +6,7 @@ package daemon
import (
"encoding/json"
"fmt"
"os"
"path/filepath"
"sort"
@@ -42,8 +43,19 @@ func (daemon *Daemon) setupMounts(c *container.Container) ([]container.Mount, er
if err := daemon.lazyInitializeVolume(c.ID, m); err != nil {
return nil, err
}
// If the daemon is being shut down, we should not let a container start if it is trying to
// mount the socket the daemon is listening on. During daemon shutdown, the socket
// (/var/run/docker.sock by default) no longer exists, causing the call to m.Setup to
// create a directory instead. This in turn will prevent the daemon from restarting.
checkfunc := func(m *volume.MountPoint) error {
if _, exist := daemon.hosts[m.Source]; exist && daemon.IsShuttingDown() {
return fmt.Errorf("Could not mount %q to container while the daemon is shutting down", m.Source)
}
return nil
}
rootUID, rootGID := daemon.GetRemappedUIDGID()
path, err := m.Setup(c.MountLabel, rootUID, rootGID)
path, err := m.Setup(c.MountLabel, rootUID, rootGID, checkfunc)
if err != nil {
return nil, err
}
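The comment above explains the motivation; the mechanism is a veto callback that runs before `m.Setup` creates anything on disk. A hedged sketch of the pattern with simplified stand-in types (not the engine's actual `MountPoint`):

```go
package main

import (
	"errors"
	"fmt"
	"os"
)

type mountPoint struct {
	Source string
}

// setup creates the mount source if needed, but lets the caller veto the
// operation first via check; this mirrors the callback added in the diff.
func (m *mountPoint) setup(check func(*mountPoint) error) (string, error) {
	if check != nil {
		if err := check(m); err != nil {
			return "", err
		}
	}
	if err := os.MkdirAll(m.Source, 0o755); err != nil {
		return "", err
	}
	return m.Source, nil
}

func main() {
	shuttingDown := true
	daemonSockets := map[string]struct{}{"/var/run/docker.sock": {}}

	check := func(m *mountPoint) error {
		if _, ok := daemonSockets[m.Source]; ok && shuttingDown {
			return errors.New("refusing to mount the daemon socket while shutting down")
		}
		return nil
	}

	if _, err := (&mountPoint{Source: "/var/run/docker.sock"}).setup(check); err != nil {
		fmt.Println("vetoed:", err) // nothing is created on disk
	}
}
```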

View File

@@ -24,7 +24,7 @@ func (daemon *Daemon) setupMounts(c *container.Container) ([]container.Mount, er
if err := daemon.lazyInitializeVolume(c.ID, mount); err != nil {
return nil, err
}
s, err := mount.Setup(c.MountLabel, 0, 0)
s, err := mount.Setup(c.MountLabel, 0, 0, nil)
if err != nil {
return nil, err
}

View File

@@ -1,30 +0,0 @@
# The non-reference docs have been moved!
<!-- This file is maintained within the docker/docker Github
repository at https://github.com/docker/docker/. Make all
pull requests against that repo. If you see this file in
another repository, consider it read-only there, as it will
periodically be overwritten by the definitive file. Pull
requests which include edits to this file in other repositories
will be rejected.
-->
The documentation for Docker Engine has been merged into
[the general documentation repo](https://github.com/docker/docker.github.io).
See the [README](https://github.com/docker/docker.github.io/blob/master/README.md)
for instructions on contributing to and building the documentation.
If you'd like to edit the current published version of the Engine docs,
do it in the master branch here:
https://github.com/docker/docker.github.io/tree/master/engine
If you need to document the functionality of an upcoming Engine release,
use the `vnext-engine` branch:
https://github.com/docker/docker.github.io/tree/vnext-engine/engine
The reference docs have been left in docker/docker (this repo), which remains
the place to edit them.
The docs in the general repo are open-source and we appreciate
your feedback and pull requests!

View File

@@ -29,6 +29,11 @@ keywords: "API, Docker, rcli, REST, documentation"
generate and rotate to a new CA certificate/key pair.
* `POST /service/create` and `POST /services/(id or name)/update` now take the field `Platforms` as part of the service `Placement`, allowing the platforms supported by the service to be specified.
* `POST /containers/(name)/wait` now accepts a `condition` query parameter to indicate which state change condition to wait for. Also, response headers are now returned immediately to acknowledge that the server has registered a wait callback for the client.
* `POST /swarm/init` now accepts a `DataPathAddr` property to set the IP address or network interface to use for data traffic
* `POST /swarm/join` now accepts a `DataPathAddr` property to set the IP address or network interface to use for data traffic
* `GET /events` now supports service, node, and secret events, which are emitted when users create, update, and remove services, nodes, and secrets
* `GET /events` now supports the network remove event, which is emitted when users remove a swarm-scoped network
* `GET /events` now supports a `scope` filter; the supported values are `swarm` and `local` (see the sketch below)
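The `scope` filter mentioned above can be exercised directly against the API. A hedged Go sketch using only the standard library over the daemon's unix socket; the socket path, the `v1.30` version prefix, and the filter encoding are assumptions, and the official Go client library offers a higher-level interface:

```go
package main

import (
	"context"
	"fmt"
	"io"
	"net"
	"net/http"
	"net/url"
	"os"
)

func main() {
	// Dial the local daemon over its unix socket; adjust the path and
	// API version for your environment.
	tr := &http.Transport{
		DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
			return (&net.Dialer{}).DialContext(ctx, "unix", "/var/run/docker.sock")
		},
	}
	client := &http.Client{Transport: tr}

	// Only stream events with swarm scope.
	q := url.Values{}
	q.Set("filters", `{"scope":["swarm"]}`)

	resp, err := client.Get("http://docker/v1.30/events?" + q.Encode())
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer resp.Body.Close()

	// The endpoint streams JSON event objects until the connection closes.
	io.Copy(os.Stdout, resp.Body)
}
```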
## v1.29 API changes
@@ -41,6 +46,8 @@ keywords: "API, Docker, rcli, REST, documentation"
* `POST /containers/create`, `POST /service/create` and `POST /services/(id or name)/update` now take the field `StartPeriod` as part of `HealthConfig`, allowing a period to be specified during which the container should not be considered unhealthy even if health checks do not pass.
* `GET /services/(id)` now accepts an `insertDefaults` query-parameter to merge default values into the service inspect output.
* `POST /containers/prune`, `POST /images/prune`, `POST /volumes/prune`, and `POST /networks/prune` now support a `label` filter to filter containers, images, volumes, or networks based on the label. The format of the label filter could be `label=<key>`/`label=<key>=<value>` to remove those with the specified labels, or `label!=<key>`/`label!=<key>=<value>` to remove those without the specified labels.
* `POST /services/create` now accepts `Privileges` as part of `ContainerSpec`. Privileges currently include
`CredentialSpec` and `SELinuxContext`.
## v1.28 API changes

View File

@@ -1,321 +0,0 @@
---
aliases: ["/engine/misc/deprecated/"]
title: "Deprecated Engine Features"
description: "Deprecated Features."
keywords: "docker, documentation, about, technology, deprecate"
---
<!-- This file is maintained within the docker/docker Github
repository at https://github.com/docker/docker/. Make all
pull requests against that repo. If you see this file in
another repository, consider it read-only there, as it will
periodically be overwritten by the definitive file. Pull
requests which include edits to this file in other repositories
will be rejected.
-->
# Deprecated Engine Features
The following features are deprecated in Engine.
To learn more about Docker Engine's deprecation policy,
see [Feature Deprecation Policy](https://docs.docker.com/engine/#feature-deprecation-policy).
### Asynchronous `service create` and `service update`
**Deprecated In Release: v17.05.0**
**Disabled by default in release: v17.09**
Docker 17.05.0 added an optional `--detach=false` option to make the
`docker service create` and `docker service update` commands work synchronously. This
option will be enabled by default in Docker 17.09, at which point the `--detach`
flag can be used to restore the previous (asynchronous) behavior.
### `-g` and `--graph` flags on `dockerd`
**Deprecated In Release: v17.05.0**
The `-g` or `--graph` flag for the `dockerd` or `docker daemon` command was
used to indicate the directory in which to store persistent data and resource
configuration and has been replaced with the more descriptive `--data-root`
flag.
These flags were added before Docker 1.0, so will not be _removed_, only
_hidden_, to discourage their use.
### Top-level network properties in NetworkSettings
**Deprecated In Release: [v1.13.0](https://github.com/docker/docker/releases/tag/v1.13.0)**
**Target For Removal In Release: v17.12**
When inspecting a container, `NetworkSettings` contains top-level information
about the default ("bridge") network;
`EndpointID`, `Gateway`, `GlobalIPv6Address`, `GlobalIPv6PrefixLen`, `IPAddress`,
`IPPrefixLen`, `IPv6Gateway`, and `MacAddress`.
These properties are deprecated in favor of per-network properties in
`NetworkSettings.Networks`. These properties were already "deprecated" in
docker 1.9, but kept around for backward compatibility.
Refer to [#17538](https://github.com/docker/docker/pull/17538) for further
information.
### `filter` param for `/images/json` endpoint
**Deprecated In Release: [v1.13.0](https://github.com/docker/docker/releases/tag/v1.13.0)**
**Target For Removal In Release: v17.12**
The `filter` param to filter the list of images by reference (name or name:tag) is now implemented as a regular filter, named `reference`.
### `repository:shortid` image references
**Deprecated In Release: [v1.13.0](https://github.com/docker/docker/releases/tag/v1.13.0)**
**Target For Removal In Release: v17.12**
The `repository:shortid` syntax for referencing images is rarely used, collides with tag references, and can be confused with digest references.
### `docker daemon` subcommand
**Deprecated In Release: [v1.13.0](https://github.com/docker/docker/releases/tag/v1.13.0)**
**Target For Removal In Release: v17.12**
The daemon has been moved to a separate binary (`dockerd`), which should be used instead.
### Duplicate keys with conflicting values in engine labels
**Deprecated In Release: [v1.13.0](https://github.com/docker/docker/releases/tag/v1.13.0)**
**Target For Removal In Release: v17.12**
Duplicate keys with conflicting values have been deprecated. A warning is displayed
in the output, and an error will be returned in the future.
### `MAINTAINER` in Dockerfile
**Deprecated In Release: [v1.13.0](https://github.com/docker/docker/releases/tag/v1.13.0)**
`MAINTAINER` was an early, very limited form of `LABEL`; `LABEL` should be used instead.
### API calls without a version
**Deprecated In Release: [v1.13.0](https://github.com/docker/docker/releases/tag/v1.13.0)**
**Target For Removal In Release: v17.12**
API versions should be supplied to all API calls to ensure compatibility with
future Engine versions. Instead of just requesting, for example, the URL
`/containers/json`, you must now request `/v1.25/containers/json`.
### Backing filesystem without `d_type` support for overlay/overlay2
**Deprecated In Release: [v1.13.0](https://github.com/docker/docker/releases/tag/v1.13.0)**
**Target For Removal In Release: v17.12**
The overlay and overlay2 storage drivers do not work as expected if the backing
filesystem does not support `d_type`. For example, XFS does not support `d_type`
if it is formatted with the `ftype=0` option.
Please also refer to [#27358](https://github.com/docker/docker/issues/27358) for
further information.
### Three arguments form in `docker import`
**Deprecated In Release: [v0.6.7](https://github.com/docker/docker/releases/tag/v0.6.7)**
**Removed In Release: [v1.12.0](https://github.com/docker/docker/releases/tag/v1.12.0)**
The `docker import` command format `file|URL|- [REPOSITORY [TAG]]` has been deprecated since November 2013 and is no longer supported.
### `-h` shorthand for `--help`
**Deprecated In Release: [v1.12.0](https://github.com/docker/docker/releases/tag/v1.12.0)**
**Target For Removal In Release: v17.09**
The shorthand (`-h`) is less common than `--help` on Linux and cannot be used
on all subcommands (due to it conflicting with, e.g. `-h` / `--hostname` on
`docker create`). For this reason, the `-h` shorthand was not printed in the
"usage" output of subcommands, nor documented, and is now marked "deprecated".
### `-e` and `--email` flags on `docker login`
**Deprecated In Release: [v1.11.0](https://github.com/docker/docker/releases/tag/v1.11.0)**
**Target For Removal In Release: v17.06**
The docker login command is removing the ability to automatically register for an account with the target registry if the given username doesn't exist. Due to this change, the email flag is no longer required, and will be deprecated.
### Separator (`:`) of `--security-opt` flag on `docker run`
**Deprecated In Release: [v1.11.0](https://github.com/docker/docker/releases/tag/v1.11.0)**
**Target For Removal In Release: v17.06**
The `--security-opt` flag no longer uses the colon separator (`:`) to divide keys and values; it uses the equals sign (`=`) for consistency with other similar flags, such as `--storage-opt`.
### `/containers/(id or name)/copy` endpoint
**Deprecated In Release: [v1.8.0](https://github.com/docker/docker/releases/tag/v1.8.0)**
**Removed In Release: [v1.12.0](https://github.com/docker/docker/releases/tag/v1.12.0)**
The endpoint `/containers/(id or name)/copy` is deprecated in favor of `/containers/(id or name)/archive`.
### Ambiguous event fields in API
**Deprecated In Release: [v1.10.0](https://github.com/docker/docker/releases/tag/v1.10.0)**
The fields `ID`, `Status` and `From` in the events API have been deprecated in favor of a richer structure.
See the events API documentation for the new format.
### `-f` flag on `docker tag`
**Deprecated In Release: [v1.10.0](https://github.com/docker/docker/releases/tag/v1.10.0)**
**Removed In Release: [v1.12.0](https://github.com/docker/docker/releases/tag/v1.12.0)**
To make tagging consistent across the various `docker` commands, the `-f` flag on the `docker tag` command is deprecated. It is no longer necessary to specify `-f` to move a tag from one image to another, nor will `docker` generate an error if the `-f` flag is missing and the specified tag is already in use.
### HostConfig at API container start
**Deprecated In Release: [v1.10.0](https://github.com/docker/docker/releases/tag/v1.10.0)**
**Removed In Release: [v1.12.0](https://github.com/docker/docker/releases/tag/v1.12.0)**
Passing a `HostConfig` to `POST /containers/{name}/start` is deprecated in favor of
defining it at container creation (`POST /containers/create`).
### `--before` and `--since` flags on `docker ps`
**Deprecated In Release: [v1.10.0](https://github.com/docker/docker/releases/tag/v1.10.0)**
**Removed In Release: [v1.12.0](https://github.com/docker/docker/releases/tag/v1.12.0)**
The `docker ps --before` and `docker ps --since` options are deprecated.
Use `docker ps --filter=before=...` and `docker ps --filter=since=...` instead.
### `--automated` and `--stars` flags on `docker search`
**Deprecated in Release: [v1.12.0](https://github.com/docker/docker/releases/tag/v1.12.0)**
**Target For Removal In Release: v17.09**
The `docker search --automated` and `docker search --stars` options are deprecated.
Use `docker search --filter=is-automated=...` and `docker search --filter=stars=...` instead.
### Driver Specific Log Tags
**Deprecated In Release: [v1.9.0](https://github.com/docker/docker/releases/tag/v1.9.0)**
**Removed In Release: [v1.12.0](https://github.com/docker/docker/releases/tag/v1.12.0)**
Log tags are now generated in a standard way across different logging drivers.
As a result, the driver-specific log tag options `syslog-tag`, `gelf-tag` and
`fluentd-tag` have been deprecated in favor of the generic `tag` option.
```bash
{% raw %}
docker --log-driver=syslog --log-opt tag="{{.ImageName}}/{{.Name}}/{{.ID}}"
{% endraw %}
```
### LXC built-in exec driver
**Deprecated In Release: [v1.8.0](https://github.com/docker/docker/releases/tag/v1.8.0)**
**Removed In Release: [v1.10.0](https://github.com/docker/docker/releases/tag/v1.10.0)**
The built-in LXC execution driver, the lxc-conf flag, and API fields have been removed.
### Old Command Line Options
**Deprecated In Release: [v1.8.0](https://github.com/docker/docker/releases/tag/v1.8.0)**
**Removed In Release: [v1.10.0](https://github.com/docker/docker/releases/tag/v1.10.0)**
The flags `-d` and `--daemon` are deprecated in favor of the `daemon` subcommand:
    docker daemon -H ...
The following single-dash (`-opt`) variant of certain command line options
are deprecated and replaced with double-dash options (`--opt`):
    docker attach -nostdin
    docker attach -sig-proxy
    docker build -no-cache
    docker build -rm
    docker commit -author
    docker commit -run
    docker events -since
    docker history -notrunc
    docker images -notrunc
    docker inspect -format
    docker ps -beforeId
    docker ps -notrunc
    docker ps -sinceId
    docker rm -link
    docker run -cidfile
    docker run -dns
    docker run -entrypoint
    docker run -expose
    docker run -link
    docker run -lxc-conf
    docker run -n
    docker run -privileged
    docker run -volumes-from
    docker search -notrunc
    docker search -stars
    docker search -t
    docker search -trusted
    docker tag -force
The following double-dash options are deprecated and have no replacement:
    docker run --cpuset
    docker run --networking
    docker ps --since-id
    docker ps --before-id
    docker search --trusted
**Deprecated In Release: [v1.5.0](https://github.com/docker/docker/releases/tag/v1.5.0)**
**Removed In Release: [v1.12.0](https://github.com/docker/docker/releases/tag/v1.12.0)**
The single-dash (`-help`) was removed, in favor of the double-dash `--help`
    docker -help
    docker [COMMAND] -help
### `--run` flag on docker commit
**Deprecated In Release: [v0.10.0](https://github.com/docker/docker/releases/tag/v0.10.0)**
**Removed In Release: [v1.13.0](https://github.com/docker/docker/releases/tag/v1.13.0)**
The `--run` flag of `docker commit` (and its short form `-run`) was deprecated in favor
of the `--changes` flag, which allows `Dockerfile` commands to be passed.
### Interacting with V1 registries
**Disabled By Default In Release: v17.06**
**Target For Removal In Release: v17.12**
Version 1.9 added a flag (`--disable-legacy-registry=false`) which prevents the
docker daemon from performing `pull`, `push`, and `login` operations against v1
registries. Although interaction with v1 registries is still enabled by default, this
signals the intent to deprecate the v1 protocol.
Support for the v1 protocol to the public registry was removed in 1.13. Any
mirror configurations using v1 should be updated to use a
[v2 registry mirror](https://docs.docker.com/registry/recipes/mirror/).
### Docker Content Trust ENV passphrase variables name change
**Deprecated In Release: [v1.9.0](https://github.com/docker/docker/releases/tag/v1.9.0)**
**Removed In Release: [v1.12.0](https://github.com/docker/docker/releases/tag/v1.12.0)**
Since 1.9, the Docker Content Trust Offline key has been renamed to the Root key and the Tagging key has been renamed to the Repository key. Due to this renaming, the corresponding environment variables have also changed:
- `DOCKER_CONTENT_TRUST_OFFLINE_PASSPHRASE` is now named `DOCKER_CONTENT_TRUST_ROOT_PASSPHRASE`
- `DOCKER_CONTENT_TRUST_TAGGING_PASSPHRASE` is now named `DOCKER_CONTENT_TRUST_REPOSITORY_PASSPHRASE`
### `--api-enable-cors` flag on dockerd
**Deprecated In Release: [v1.6.0](https://github.com/docker/docker/releases/tag/v1.6.0)**
**Target For Removal In Release: v17.09**
The `--api-enable-cors` flag has been deprecated since v1.6.0. Use the
`--api-cors-header` flag instead.

View File

@@ -1,164 +0,0 @@
---
description: Volume plugin for Amazon EBS
keywords: "API, Usage, plugins, documentation, developer, amazon, ebs, rexray, volume"
title: Volume plugin for Amazon EBS
---
<!-- This file is maintained within the docker/docker Github
repository at https://github.com/docker/docker/. Make all
pull requests against that repo. If you see this file in
another repository, consider it read-only there, as it will
periodically be overwritten by the definitive file. Pull
requests which include edits to this file in other repositories
will be rejected.
-->
# A proof-of-concept Rexray plugin
In this example, a simple Rexray plugin will be created for the purposes of using
it on an Amazon EC2 instance with EBS. It is not meant to be a complete Rexray plugin.
The example source is available at [https://github.com/tiborvass/rexray-plugin](https://github.com/tiborvass/rexray-plugin).
To learn more about Rexray: [https://github.com/codedellemc/rexray](https://github.com/codedellemc/rexray)
## 1. Make a Docker image
The following is the Dockerfile used to containerize rexray.
```Dockerfile
FROM debian:jessie
RUN apt-get update && apt-get install -y --no-install-recommends wget ca-certificates
RUN wget https://dl.bintray.com/emccode/rexray/stable/0.6.4/rexray-Linux-x86_64-0.6.4.tar.gz -O rexray.tar.gz && tar -xvzf rexray.tar.gz -C /usr/bin && rm rexray.tar.gz
RUN mkdir -p /run/docker/plugins /var/lib/libstorage/volumes
ENTRYPOINT ["rexray"]
CMD ["--help"]
```
To build it you can run `image=$(cat Dockerfile | docker build -q -)` and `$image`
will reference the containerized rexray image.
## 2. Extract rootfs
```sh
$ TMPDIR=/tmp/rexray # for the purpose of this example
$ # create container without running it, to extract the rootfs from image
$ docker create --name rexray "$image"
$ # save the rootfs to a tar archive
$ docker export -o $TMPDIR/rexray.tar rexray
$ # extract rootfs from tar archive to a rootfs folder
$ ( mkdir -p $TMPDIR/rootfs; cd $TMPDIR/rootfs; tar xf ../rexray.tar )
```
## 3. Add plugin configuration
We have to put the following JSON into `$TMPDIR/config.json`:
```json
{
"Args": {
"Description": "",
"Name": "",
"Settable": null,
"Value": null
},
"Description": "A proof-of-concept EBS plugin (using rexray) for Docker",
"Documentation": "https://github.com/tiborvass/rexray-plugin",
"Entrypoint": [
"/usr/bin/rexray", "service", "start", "-f"
],
"Env": [
{
"Description": "",
"Name": "REXRAY_SERVICE",
"Settable": [
"value"
],
"Value": "ebs"
},
{
"Description": "",
"Name": "EBS_ACCESSKEY",
"Settable": [
"value"
],
"Value": ""
},
{
"Description": "",
"Name": "EBS_SECRETKEY",
"Settable": [
"value"
],
"Value": ""
}
],
"Interface": {
"Socket": "rexray.sock",
"Types": [
"docker.volumedriver/1.0"
]
},
"Linux": {
"AllowAllDevices": true,
"Capabilities": ["CAP_SYS_ADMIN"],
"Devices": null
},
"Mounts": [
{
"Source": "/dev",
"Destination": "/dev",
"Type": "bind",
"Options": ["rbind"]
}
],
"Network": {
"Type": "host"
},
"PropagatedMount": "/var/lib/libstorage/volumes",
"User": {},
"WorkDir": ""
}
```
Please note a couple of points:
- `PropagatedMount` is needed so that the docker daemon can see mounts done by the
rexray plugin from within the container; otherwise the docker daemon is not able
to mount a docker volume.
- The rexray plugin needs dynamic access to host devices. For that reason, we
have to give it access to all devices under `/dev` and set `AllowAllDevices` to
true for proper access.
- The user of this simple plugin can change only 3 settings: `REXRAY_SERVICE`,
`EBS_ACCESSKEY` and `EBS_SECRETKEY`. This is because of the reduced scope of this
plugin. Ideally other rexray parameters could also be set.
## 4. Create plugin
`docker plugin create tiborvass/rexray-plugin "$TMPDIR"` will create the plugin.
```sh
$ docker plugin ls
ID NAME DESCRIPTION ENABLED
2475a4bd0ca5 tiborvass/rexray-plugin:latest A rexray volume plugin for Docker false
```
## 5. Test plugin
```sh
$ docker plugin set tiborvass/rexray-plugin EBS_ACCESSKEY=$AWS_ACCESSKEY EBS_SECRETKEY=$AWS_SECRETKEY
$ docker plugin enable tiborvass/rexray-plugin
$ docker volume create -d tiborvass/rexray-plugin my-ebs-volume
$ docker volume ls
DRIVER VOLUME NAME
tiborvass/rexray-plugin:latest my-ebs-volume
$ docker run --rm -v my-ebs-volume:/volume busybox sh -c 'echo bye > /volume/hi'
$ docker run --rm -v my-ebs-volume:/volume busybox cat /volume/hi
bye
```
## 6. Push plugin
First, ensure you are logged in with `docker login`. Then you can run:
`docker plugin push tiborvass/rexray-plugin` to push it like a regular docker
image to a registry, to make it available for others to install via
`docker plugin install tiborvass/rexray-plugin EBS_ACCESSKEY=$AWS_ACCESSKEY EBS_SECRETKEY=$AWS_SECRETKEY`.

View File

@@ -1,238 +0,0 @@
---
title: "Plugin config"
description: "How develop and use a plugin with the managed plugin system"
keywords: "API, Usage, plugins, documentation, developer"
---
<!-- This file is maintained within the docker/docker Github
repository at https://github.com/docker/docker/. Make all
pull requests against that repo. If you see this file in
another repository, consider it read-only there, as it will
periodically be overwritten by the definitive file. Pull
requests which include edits to this file in other repositories
will be rejected.
-->
# Plugin Config Version 1 of Plugin V2
This document outlines the format of the V0 plugin configuration. The plugin
config described herein was introduced in the Docker daemon in the [v1.12.0
release](https://github.com/docker/docker/commit/f37117045c5398fd3dca8016ea8ca0cb47e7312b).
Plugin configs describe the various constituents of a docker plugin. Plugin
configs can be serialized to JSON format with the following media types:
Config Type | Media Type
------------- | -------------
config | "application/vnd.docker.plugin.v1+json"
## *Config* Field Descriptions
Config provides the base accessible fields for working with V0 plugin format
in the registry.
- **`description`** *string*
description of the plugin
- **`documentation`** *string*
link to the documentation about the plugin
- **`interface`** *PluginInterface*
interface implemented by the plugins, struct consisting of the following fields
- **`types`** *string array*
types indicate what interface(s) the plugin currently implements.
currently supported:
- **docker.volumedriver/1.0**
- **docker.networkdriver/1.0**
- **docker.ipamdriver/1.0**
- **docker.authz/1.0**
- **docker.logdriver/1.0**
- **docker.metricscollector/1.0**
- **`socket`** *string*
socket is the name of the socket the engine should use to communicate with the plugins.
the socket will be created in `/run/docker/plugins`.
- **`entrypoint`** *string array*
entrypoint of the plugin, see [`ENTRYPOINT`](../reference/builder.md#entrypoint)
- **`workdir`** *string*
workdir of the plugin, see [`WORKDIR`](../reference/builder.md#workdir)
- **`network`** *PluginNetwork*
network of the plugin, struct consisting of the following fields
- **`type`** *string*
network type.
currently supported:
- **bridge**
- **host**
- **none**
- **`mounts`** *PluginMount array*
mount of the plugin, struct consisting of the following fields, see [`MOUNTS`](https://github.com/opencontainers/runtime-spec/blob/master/config.md#mounts)
- **`name`** *string*
name of the mount.
- **`description`** *string*
description of the mount.
- **`source`** *string*
source of the mount.
- **`destination`** *string*
destination of the mount.
- **`type`** *string*
mount type.
- **`options`** *string array*
options of the mount.
- **`ipchost`** *boolean*
Access to host ipc namespace.
- **`pidhost`** *boolean*
Access to host pid namespace.
- **`propagatedMount`** *string*
path to be mounted as rshared, so that mounts under that path are visible to docker. This is useful for volume plugins.
This path will be bind-mounted outside of the plugin rootfs so its contents
are preserved on upgrade.
- **`env`** *PluginEnv array*
env of the plugin, struct consisting of the following fields
- **`name`** *string*
name of the env.
- **`description`** *string*
description of the env.
- **`value`** *string*
value of the env.
- **`args`** *PluginArgs*
args of the plugin, struct consisting of the following fields
- **`name`** *string*
name of the args.
- **`description`** *string*
description of the args.
- **`value`** *string array*
values of the args.
- **`linux`** *PluginLinux*
- **`capabilities`** *string array*
capabilities of the plugin (*Linux only*), see list [`here`](https://github.com/opencontainers/runc/blob/master/libcontainer/SPEC.md#security)
- **`allowAllDevices`** *boolean*
If `/dev` is bind mounted from the host, and allowAllDevices is set to true, the plugin will have `rwm` access to all devices on the host.
- **`devices`** *PluginDevice array*
device of the plugin, (*Linux only*), struct consisting of the following fields, see [`DEVICES`](https://github.com/opencontainers/runtime-spec/blob/master/config-linux.md#devices)
- **`name`** *string*
name of the device.
- **`description`** *string*
description of the device.
- **`path`** *string*
path of the device.
## Example Config
*Example showing the 'tiborvass/sample-volume-plugin' plugin config.*
```json
{
"Args": {
"Description": "",
"Name": "",
"Settable": null,
"Value": null
},
"Description": "A sample volume plugin for Docker",
"Documentation": "https://docs.docker.com/engine/extend/plugins/",
"Entrypoint": [
"/usr/bin/sample-volume-plugin",
"/data"
],
"Env": [
{
"Description": "",
"Name": "DEBUG",
"Settable": [
"value"
],
"Value": "0"
}
],
"Interface": {
"Socket": "plugin.sock",
"Types": [
"docker.volumedriver/1.0"
]
},
"Linux": {
"Capabilities": null,
"AllowAllDevices": false,
"Devices": null
},
"Mounts": null,
"Network": {
"Type": ""
},
"PropagatedMount": "/data",
"User": {},
"Workdir": ""
}
```
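The field descriptions above map naturally onto Go structs. A hedged sketch that decodes a small subset of the example config; the struct names and the handful of fields modeled here are illustrative, not the engine's actual types:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// pluginInterface and pluginConfig model only the fields used below; the
// real config carries many more (mounts, env, linux, args, ...).
type pluginInterface struct {
	Socket string   `json:"Socket"`
	Types  []string `json:"Types"`
}

type pluginConfig struct {
	Description     string          `json:"Description"`
	Entrypoint      []string        `json:"Entrypoint"`
	Interface       pluginInterface `json:"Interface"`
	PropagatedMount string          `json:"PropagatedMount"`
}

func main() {
	raw := `{
	  "Description": "A sample volume plugin for Docker",
	  "Entrypoint": ["/usr/bin/sample-volume-plugin", "/data"],
	  "Interface": {"Socket": "plugin.sock", "Types": ["docker.volumedriver/1.0"]},
	  "PropagatedMount": "/data"
	}`

	var cfg pluginConfig
	if err := json.Unmarshal([]byte(raw), &cfg); err != nil {
		panic(err)
	}
	fmt.Printf("%s implements %v via %s\n",
		cfg.Description, cfg.Interface.Types, cfg.Interface.Socket)
}
```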

Binary file not shown (image, 45 KiB).

Some files were not shown because too many files have changed in this diff.