Mirror of https://github.com/moby/moby.git (synced 2026-01-11 18:51:37 +00:00)

Compare commits: v17.10.0-c ... v17.06.1-c (102 commits)
| Author | SHA1 | Date |
|---|---|---|
| | acf06a5d60 | |
| | 8ff1920458 | |
| | 1fd76f3838 | |
| | acbee943fd | |
| | 8660f37d48 | |
| | 8f9d67cb79 | |
| | c3d547c456 | |
| | 89bacc278b | |
| | 853274ece3 | |
| | 2cddc993de | |
| | fc5e100344 | |
| | 7d99cdff8d | |
| | 064f22822c | |
| | e712f523e0 | |
| | f20ac0badb | |
| | 6fade192eb | |
| | e011810b80 | |
| | c32ca2f148 | |
| | 993b6912fd | |
| | d27ffb2708 | |
| | af9667841e | |
| | 9bb8dfa27e | |
| | b3fefc36b3 | |
| | b9f1bb4e97 | |
| | 064ade21f2 | |
| | 260246bb70 | |
| | c0e0575129 | |
| | 482543a87c | |
| | 98debbf694 | |
| | bdd34e70b3 | |
| | 801c2a432c | |
| | 02d284bd2f | |
| | c6c8526b85 | |
| | 3ec56e0783 | |
| | a473cc5ef7 | |
| | af2c3bd37f | |
| | 3673d2b668 | |
| | 6c7b043f77 | |
| | 791faef9e2 | |
| | 5fe12d194d | |
| | e4e9921367 | |
| | b1323a1c8f | |
| | 27deeb1298 | |
| | bc5e4e3ce3 | |
| | 0915c2c848 | |
| | 73c462b2e5 | |
| | 9c5bb024df | |
| | cc547fdd4b | |
| | 5c27bce649 | |
| | 14bc03dbb8 | |
| | b055794618 | |
| | a65742e314 | |
| | 9a727acc15 | |
| | 82e1487e72 | |
| | a11bca5891 | |
| | a67f442586 | |
| | 842c51c394 | |
| | a2f65ec4aa | |
| | dcbaad5448 | |
| | 5c0be74bb8 | |
| | 91c49beea0 | |
| | 9778382f97 | |
| | 70d059f024 | |
| | ab046e94d5 | |
| | 5c5ac7eb9b | |
| | d62ed5f3ac | |
| | 182cd939be | |
| | f7feb663b7 | |
| | a2b498e128 | |
| | f024e2d7eb | |
| | 4e8af4c431 | |
| | e0ffd247bf | |
| | 33af086a0b | |
| | 3c9dfed30e | |
| | ee9a857558 | |
| | 9ab2df07fd | |
| | 752471c92f | |
| | f8ecde191a | |
| | 093c31c694 | |
| | 5c6a466a84 | |
| | 4d988e9141 | |
| | b214658e4c | |
| | f981cf94d0 | |
| | b9863603cd | |
| | 0f09cebc65 | |
| | 870d659101 | |
| | 5ee88ddc0d | |
| | 631cf6dc8d | |
| | bd04a75392 | |
| | 7a11613e94 | |
| | 0570feee3d | |
| | daef057517 | |
| | 400454cf9a | |
| | b66b1849e9 | |
| | b01ed8895c | |
| | 4a5fa1e147 | |
| | 393ea2d964 | |
| | 90fd450182 | |
| | 80cc4bc95f | |
| | 9a3a4c0243 | |
| | 312781c2e1 | |
| | 0aefd9b0f8 | |
CHANGELOG.md (86 changed lines)
@@ -5,6 +5,92 @@ information on the list of deprecated flags and APIs please have a look at
https://docs.docker.com/engine/deprecated/ where target removal dates can also
be found.

## 17.05.0-ce (2017-05-04)

### Builder

+ Add multi-stage build support [#31257](https://github.com/docker/docker/pull/31257) [#32063](https://github.com/docker/docker/pull/32063)
+ Allow using build-time args (`ARG`) in `FROM` [#31352](https://github.com/docker/docker/pull/31352)
+ Add an option for specifying build target [#32496](https://github.com/docker/docker/pull/32496)
* Accept `-f -` to read Dockerfile from `stdin`, but use local context for building [#31236](https://github.com/docker/docker/pull/31236)
* The values of default build-time arguments (e.g. `HTTP_PROXY`) are no longer displayed in docker image history unless a corresponding `ARG` instruction is written in the Dockerfile. [#31584](https://github.com/docker/docker/pull/31584)
- Fix setting command if a custom shell is used in a parent image [#32236](https://github.com/docker/docker/pull/32236)
- Fix `docker build --label` when the label includes single quotes and a space [#31750](https://github.com/docker/docker/pull/31750)

### Client

* Add `--mount` flag to `docker run` and `docker create` [#32251](https://github.com/docker/docker/pull/32251)
* Add `--type=secret` to `docker inspect` [#32124](https://github.com/docker/docker/pull/32124)
* Add `--format` option to `docker secret ls` [#31552](https://github.com/docker/docker/pull/31552)
* Add `--filter` option to `docker secret ls` [#30810](https://github.com/docker/docker/pull/30810)
* Add `--filter scope=<swarm|local>` to `docker network ls` [#31529](https://github.com/docker/docker/pull/31529)
* Add `--cpus` support to `docker update` [#31148](https://github.com/docker/docker/pull/31148)
* Add label filter to `docker system prune` and other `prune` commands [#30740](https://github.com/docker/docker/pull/30740)
* `docker stack rm` now accepts multiple stacks as input [#32110](https://github.com/docker/docker/pull/32110)
* Improve `docker version --format` option when the client has downgraded the API version [#31022](https://github.com/docker/docker/pull/31022)
* Prompt when using an encrypted client certificate to connect to a docker daemon [#31364](https://github.com/docker/docker/pull/31364)
* Display created tags on successful `docker build` [#32077](https://github.com/docker/docker/pull/32077)
* Clean up compose convert error messages [#32087](https://github.com/moby/moby/pull/32087)

### Contrib

+ Add support for building docker debs for Ubuntu 17.04 Zesty on amd64 [#32435](https://github.com/docker/docker/pull/32435)

### Daemon

- Fix `--api-cors-header` being ignored if `--api-enable-cors` is not set [#32174](https://github.com/docker/docker/pull/32174)
- Clean up docker tmp dir on start [#31741](https://github.com/docker/docker/pull/31741)
- Deprecate `--graph` flag in favor of `--data-root` [#28696](https://github.com/docker/docker/pull/28696)

### Logging

+ Add support for logging driver plugins [#28403](https://github.com/docker/docker/pull/28403)
* Add support for showing logs of individual tasks to `docker service logs`, and add a `/task/{id}/logs` REST endpoint [#32015](https://github.com/docker/docker/pull/32015) (see the client sketch after this section)
* Add `--log-opt env-regex` option to match environment variables using a regular expression [#27565](https://github.com/docker/docker/pull/27565)
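The new `/task/{id}/logs` endpoint mentioned above is exposed in the Go client as `TaskLogs`. A minimal sketch is shown below; the task ID is a placeholder (a real one would come from `TaskList`), the option set is illustrative, and the constructor name reflects this release line rather than today's client:

```go
package main

import (
	"context"
	"fmt"
	"io"
	"os"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/client"
)

func main() {
	// NewEnvClient was the constructor in this release line; newer clients
	// use client.NewClientWithOpts instead.
	cli, err := client.NewEnvClient()
	if err != nil {
		panic(err)
	}

	// "sometaskid" is a placeholder; list tasks first to get a real ID.
	rc, err := cli.TaskLogs(context.Background(), "sometaskid", types.ContainerLogsOptions{
		ShowStdout: true,
		ShowStderr: true,
		Timestamps: true,
	})
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer rc.Close()

	// The stream is multiplexed like container logs; copying it raw is
	// enough for a quick look.
	io.Copy(os.Stdout, rc)
}
```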
### Networking

+ Allow users to replace and customize the ingress network [#31714](https://github.com/docker/docker/pull/31714)
- Fix UDP traffic in containers not working after the container is restarted [#32505](https://github.com/docker/docker/pull/32505)
- Fix files being written to `/var/lib/docker` if a different data-root is set [#32505](https://github.com/docker/docker/pull/32505)

### Runtime

- Ensure health probe is stopped when a container exits [#32274](https://github.com/docker/docker/pull/32274)

### Swarm Mode

+ Add update/rollback order for services (`--update-order` / `--rollback-order`) [#30261](https://github.com/docker/docker/pull/30261)
+ Add support for synchronous `service create` and `service update` [#31144](https://github.com/docker/docker/pull/31144)
+ Add support for "grace periods" on healthchecks through the `HEALTHCHECK --start-period` and `--health-start-period` flag to
`docker service create`, `docker service update`, `docker create`, and `docker run` to support containers with an initial startup
time [#28938](https://github.com/docker/docker/pull/28938)
* `docker service create` now omits fields that are not specified by the user, when possible. This will allow defaults to be applied inside the manager [#32284](https://github.com/docker/docker/pull/32284)
* `docker service inspect` now shows default values for fields that are not specified by the user [#32284](https://github.com/docker/docker/pull/32284)
* Move `docker service logs` out of experimental [#32462](https://github.com/docker/docker/pull/32462)
* Add support for Credential Spec and SELinux for services in the API [#32339](https://github.com/docker/docker/pull/32339)
* Add `--entrypoint` flag to `docker service create` and `docker service update` [#29228](https://github.com/docker/docker/pull/29228)
* Add `--network-add` and `--network-rm` to `docker service update` [#32062](https://github.com/docker/docker/pull/32062)
* Add `--credential-spec` flag to `docker service create` and `docker service update` [#32339](https://github.com/docker/docker/pull/32339)
* Add `--filter mode=<global|replicated>` to `docker service ls` [#31538](https://github.com/docker/docker/pull/31538)
* Resolve network IDs on the client side, instead of in the daemon, when creating services [#32062](https://github.com/docker/docker/pull/32062)
* Add `--format` option to `docker node ls` [#30424](https://github.com/docker/docker/pull/30424)
* Add `--prune` option to `docker stack deploy` to remove services that are no longer defined in the docker-compose file [#31302](https://github.com/docker/docker/pull/31302)
* Add `PORTS` column for `docker service ls` when using `ingress` mode [#30813](https://github.com/docker/docker/pull/30813)
- Fix unnecessary re-deploying of tasks when environment variables are used [#32364](https://github.com/docker/docker/pull/32364)
- Fix `docker stack deploy` not supporting `endpoint_mode` when deploying from a docker compose file [#32333](https://github.com/docker/docker/pull/32333)
- Proceed with startup if the cluster component cannot be created, to allow recovering from a broken swarm setup [#31631](https://github.com/docker/docker/pull/31631)

### Security

* Allow setting SELinux type or MCS labels when using `--ipc=container:` or `--ipc=host` [#30652](https://github.com/docker/docker/pull/30652)

### Deprecation

- Deprecate the `--api-enable-cors` daemon flag. This flag was marked deprecated in Docker 1.6.0 but not listed in deprecated features [#32352](https://github.com/docker/docker/pull/32352)
- Remove Ubuntu 12.04 (Precise Pangolin) as a supported platform. Ubuntu 12.04 is EOL, and no longer receives updates [#32520](https://github.com/docker/docker/pull/32520)

## 17.04.0-ce (2017-04-05)

### Builder
Dockerfile (14 changed lines)
@@ -90,17 +90,6 @@ RUN cd /usr/local/lvm2 \
&& make install_device-mapper
# See https://git.fedorahosted.org/cgit/lvm2.git/tree/INSTALL

# Configure the container for OSX cross compilation
ENV OSX_SDK MacOSX10.11.sdk
ENV OSX_CROSS_COMMIT a9317c18a3a457ca0a657f08cc4d0d43c6cf8953
RUN set -x \
&& export OSXCROSS_PATH="/osxcross" \
&& git clone https://github.com/tpoechtrager/osxcross.git $OSXCROSS_PATH \
&& ( cd $OSXCROSS_PATH && git checkout -q $OSX_CROSS_COMMIT) \
&& curl -sSL https://s3.dockerproject.org/darwin/v2/${OSX_SDK}.tar.xz -o "${OSXCROSS_PATH}/tarballs/${OSX_SDK}.tar.xz" \
&& UNATTENDED=yes OSX_VERSION_MIN=10.6 ${OSXCROSS_PATH}/build.sh
ENV PATH /osxcross/target/bin:$PATH

# Install seccomp: the version shipped upstream is too old
ENV SECCOMP_VERSION 2.3.2
RUN set -x \

@@ -120,7 +109,8 @@ RUN set -x \
# IMPORTANT: If the version of Go is updated, the Windows to Linux CI machines
# will need updating, to avoid errors. Ping #docker-maintainers on IRC
# with a heads-up.
ENV GO_VERSION 1.8.1
# IMPORTANT: When updating this please note that stdlib archive/tar pkg is vendored
ENV GO_VERSION 1.8.3
RUN curl -fsSL "https://golang.org/dl/go${GO_VERSION}.linux-amd64.tar.gz" \
| tar -xzC /usr/local

@@ -98,7 +98,8 @@ RUN set -x \
# bootstrap, so we use golang-go (1.6) as bootstrap to build Go from source code.
# We don't use the official ARMv6 released binaries as a GOROOT_BOOTSTRAP, because
# not all ARM64 platforms support 32-bit mode. 32-bit mode is optional for ARMv8.
ENV GO_VERSION 1.8.1
# IMPORTANT: When updating this please note that stdlib archive/tar pkg is vendored
ENV GO_VERSION 1.8.3
RUN mkdir /usr/src/go && curl -fsSL https://golang.org/dl/go${GO_VERSION}.src.tar.gz | tar -v -C /usr/src/go -xz --strip-components=1 \
&& cd /usr/src/go/src \
&& GOOS=linux GOARCH=arm64 GOROOT_BOOTSTRAP="$(go env GOROOT)" ./make.bash

@@ -71,7 +71,8 @@ RUN cd /usr/local/lvm2 \
# See https://git.fedorahosted.org/cgit/lvm2.git/tree/INSTALL

# Install Go
ENV GO_VERSION 1.8.1
# IMPORTANT: When updating this please note that stdlib archive/tar pkg is vendored
ENV GO_VERSION 1.8.3
RUN curl -fsSL "https://golang.org/dl/go${GO_VERSION}.linux-armv6l.tar.gz" \
| tar -xzC /usr/local
ENV PATH /go/bin:/usr/local/go/bin:$PATH

@@ -95,7 +95,8 @@ RUN set -x \

# Install Go
# NOTE: official ppc64le go binaries weren't available until go 1.6.4 and 1.7.4
ENV GO_VERSION 1.8.1
# IMPORTANT: When updating this please note that stdlib archive/tar pkg is vendored
ENV GO_VERSION 1.8.3
RUN curl -fsSL "https://golang.org/dl/go${GO_VERSION}.linux-ppc64le.tar.gz" \
| tar -xzC /usr/local

@@ -88,7 +88,8 @@ RUN cd /usr/local/lvm2 \
&& make install_device-mapper
# See https://git.fedorahosted.org/cgit/lvm2.git/tree/INSTALL

ENV GO_VERSION 1.8.1
# IMPORTANT: When updating this please note that stdlib archive/tar pkg is vendored
ENV GO_VERSION 1.8.3
RUN curl -fsSL "https://golang.org/dl/go${GO_VERSION}.linux-s390x.tar.gz" \
| tar -xzC /usr/local

@@ -53,7 +53,8 @@ RUN set -x \
# IMPORTANT: If the version of Go is updated, the Windows to Linux CI machines
# will need updating, to avoid errors. Ping #docker-maintainers on IRC
# with a heads-up.
ENV GO_VERSION 1.8.1
# IMPORTANT: When updating this please note that stdlib archive/tar pkg is vendored
ENV GO_VERSION 1.8.3
RUN curl -fsSL "https://golang.org/dl/go${GO_VERSION}.linux-amd64.tar.gz" \
| tar -xzC /usr/local
ENV PATH /go/bin:/usr/local/go/bin:$PATH

@@ -161,7 +161,7 @@ SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPref
# Environment variable notes:
# - GO_VERSION must be consistent with 'Dockerfile' used by Linux.
# - FROM_DOCKERFILE is used for detection of building within a container.
ENV GO_VERSION=1.8.1 `
ENV GO_VERSION=1.8.3 `
GIT_VERSION=2.11.1 `
GOPATH=C:\go `
FROM_DOCKERFILE=1
@@ -41,7 +41,7 @@ func DebugRequestMiddleware(handler func(ctx context.Context, w http.ResponseWri

var postForm map[string]interface{}
if err := json.Unmarshal(b, &postForm); err == nil {
maskSecretKeys(postForm)
maskSecretKeys(postForm, r.RequestURI)
formStr, errMarshal := json.Marshal(postForm)
if errMarshal == nil {
logrus.Debugf("form data: %s", string(formStr))

@@ -54,23 +54,41 @@ func DebugRequestMiddleware(handler func(ctx context.Context, w http.ResponseWri
}
}

func maskSecretKeys(inp interface{}) {
func maskSecretKeys(inp interface{}, path string) {
// Remove any query string from the path
idx := strings.Index(path, "?")
if idx != -1 {
path = path[:idx]
}
// Remove trailing / characters
path = strings.TrimRight(path, "/")

if arr, ok := inp.([]interface{}); ok {
for _, f := range arr {
maskSecretKeys(f)
maskSecretKeys(f, path)
}
return
}

if form, ok := inp.(map[string]interface{}); ok {
loop0:
for k, v := range form {
for _, m := range []string{"password", "secret", "jointoken", "unlockkey"} {
for _, m := range []string{"password", "secret", "jointoken", "unlockkey", "signingcakey"} {
if strings.EqualFold(m, k) {
form[k] = "*****"
continue loop0
}
}
maskSecretKeys(v)
maskSecretKeys(v, path)
}

// Route-specific redactions
if strings.HasSuffix(path, "/secrets/create") {
for k := range form {
if k == "Data" {
form[k] = "*****"
}
}
}
}
}
api/server/middleware/debug_test.go (new file, 58 lines)
@@ -0,0 +1,58 @@
package middleware

import (
"testing"

"github.com/stretchr/testify/assert"
)

func TestMaskSecretKeys(t *testing.T) {
tests := []struct {
path string
input map[string]interface{}
expected map[string]interface{}
}{
{
path: "/v1.30/secrets/create",
input: map[string]interface{}{"Data": "foo", "Name": "name", "Labels": map[string]interface{}{}},
expected: map[string]interface{}{"Data": "*****", "Name": "name", "Labels": map[string]interface{}{}},
},
{
path: "/v1.30/secrets/create//",
input: map[string]interface{}{"Data": "foo", "Name": "name", "Labels": map[string]interface{}{}},
expected: map[string]interface{}{"Data": "*****", "Name": "name", "Labels": map[string]interface{}{}},
},

{
path: "/secrets/create?key=val",
input: map[string]interface{}{"Data": "foo", "Name": "name", "Labels": map[string]interface{}{}},
expected: map[string]interface{}{"Data": "*****", "Name": "name", "Labels": map[string]interface{}{}},
},
{
path: "/v1.30/some/other/path",
input: map[string]interface{}{
"password": "pass",
"other": map[string]interface{}{
"secret": "secret",
"jointoken": "jointoken",
"unlockkey": "unlockkey",
"signingcakey": "signingcakey",
},
},
expected: map[string]interface{}{
"password": "*****",
"other": map[string]interface{}{
"secret": "*****",
"jointoken": "*****",
"unlockkey": "*****",
"signingcakey": "*****",
},
},
},
}

for _, testcase := range tests {
maskSecretKeys(testcase.input, testcase.path)
assert.Equal(t, testcase.expected, testcase.input)
}
}
@@ -410,6 +410,9 @@ func buildIpamResources(r *types.NetworkResource, nwInfo libnetwork.NetworkInfo)

if !hasIpv6Conf {
for _, ip6Info := range ipv6Info {
if ip6Info.IPAMData.Pool == nil {
continue
}
iData := network.IPAMConfig{}
iData.Subnet = ip6Info.IPAMData.Pool.String()
iData.Gateway = ip6Info.IPAMData.Gateway.String()
api/swagger.yaml (385 changed lines)
@@ -711,7 +711,7 @@ definitions:
- "process"
- "hyperv"

Config:
ContainerConfig:
description: "Configuration for a container that is portable between hosts"
type: "object"
properties:

@@ -908,7 +908,7 @@ definitions:
type: "string"
x-nullable: false
ContainerConfig:
$ref: "#/definitions/Config"
$ref: "#/definitions/ContainerConfig"
DockerVersion:
type: "string"
x-nullable: false

@@ -916,7 +916,7 @@ definitions:
type: "string"
x-nullable: false
Config:
$ref: "#/definitions/Config"
$ref: "#/definitions/ContainerConfig"
Architecture:
type: "string"
x-nullable: false

@@ -1078,17 +1078,27 @@ definitions:
type: "string"
UsageData:
type: "object"
x-nullable: true
required: [Size, RefCount]
description: |
Usage details about the volume. This information is used by the
`GET /system/df` endpoint, and omitted in other endpoints.
properties:
Size:
type: "integer"
description: "The disk space used by the volume (local driver only)"
default: -1
description: |
Amount of disk space used by the volume (in bytes). This information
is only available for volumes created with the `"local"` volume
driver. For volumes created with other volume drivers, this field
is set to `-1` ("not available")
x-nullable: false
RefCount:
type: "integer"
default: -1
description: "The number of containers referencing this volume."
description: |
The number of containers referencing this volume. This field
is set to `-1` if the reference-count is not available.
x-nullable: false

example:

@@ -2003,6 +2013,57 @@ definitions:
description: "A list of additional groups that the container process will run as."
items:
type: "string"
Privileges:
type: "object"
description: "Security options for the container"
properties:
CredentialSpec:
type: "object"
description: "CredentialSpec for managed service account (Windows only)"
properties:
File:
type: "string"
description: |
Load credential spec from this file. The file is read by the daemon, and must be present in the
`CredentialSpecs` subdirectory in the docker data directory, which defaults to
`C:\ProgramData\Docker\` on Windows.

For example, specifying `spec.json` loads `C:\ProgramData\Docker\CredentialSpecs\spec.json`.

<p><br /></p>

> **Note**: `CredentialSpec.File` and `CredentialSpec.Registry` are mutually exclusive.
Registry:
type: "string"
description: |
Load credential spec from this value in the Windows registry. The specified registry value must be
located in:

`HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization\Containers\CredentialSpecs`

<p><br /></p>

> **Note**: `CredentialSpec.File` and `CredentialSpec.Registry` are mutually exclusive.
SELinuxContext:
type: "object"
description: "SELinux labels of the container"
properties:
Disable:
type: "boolean"
description: "Disable SELinux"
User:
type: "string"
description: "SELinux user label"
Role:
type: "string"
description: "SELinux role label"
Type:
type: "string"
description: "SELinux type label"
Level:
type: "string"
description: "SELinux level label"
TTY:
description: "Whether a pseudo-TTY should be allocated."
type: "boolean"

@@ -2085,6 +2146,37 @@ definitions:
SecretName is the name of the secret that this references, but this is just provided for
lookup/display purposes. The secret in the reference will be identified by its ID.
type: "string"
Configs:
description: "Configs contains references to zero or more configs that will be exposed to the service."
type: "array"
items:
type: "object"
properties:
File:
description: "File represents a specific target that is backed by a file."
type: "object"
properties:
Name:
description: "Name represents the final filename in the filesystem."
type: "string"
UID:
description: "UID represents the file UID."
type: "string"
GID:
description: "GID represents the file GID."
type: "string"
Mode:
description: "Mode represents the FileMode of the file."
type: "integer"
format: "uint32"
ConfigID:
description: "ConfigID represents the ID of the specific config that we're referencing."
type: "string"
ConfigName:
description: |
ConfigName is the name of the config that this references, but this is just provided for
lookup/display purposes. The config in the reference will be identified by its ID.
type: "string"

Resources:
description: "Resource requirements which apply to each individual container created as part of the service."

@@ -2174,9 +2266,6 @@ definitions:
Runtime:
description: "Runtime is the type of runtime specified for the task executor."
type: "string"
RuntimeData:
description: "RuntimeData is the payload sent to be used with the runtime for the executor."
type: "array"
Networks:
type: "array"
items:

@@ -2691,7 +2780,39 @@ definitions:
type: "string"
format: "dateTime"
Spec:
$ref: "#/definitions/ServiceSpec"
$ref: "#/definitions/SecretSpec"
ConfigSpec:
type: "object"
properties:
Name:
description: "User-defined name of the config."
type: "string"
Labels:
description: "User-defined key/value metadata."
type: "object"
additionalProperties:
type: "string"
Data:
description: "Base64-url-safe-encoded config data"
type: "array"
items:
type: "string"
Config:
type: "object"
properties:
ID:
type: "string"
Version:
$ref: "#/definitions/ObjectVersion"
CreatedAt:
type: "string"
format: "dateTime"
UpdatedAt:
type: "string"
format: "dateTime"
Spec:
$ref: "#/definitions/ConfigSpec"

paths:
/containers/json:
get:

@@ -2896,7 +3017,7 @@ paths:
description: "Container to create"
schema:
allOf:
- $ref: "#/definitions/Config"
- $ref: "#/definitions/ContainerConfig"
- type: "object"
properties:
HostConfig:

@@ -3122,7 +3243,7 @@ paths:
all processes in the container. Freezing the process requires the process to
be running. As a result, paused containers are both `Running` _and_ `Paused`.

Use the `Status` field instead to determin if a container's state is "running".
Use the `Status` field instead to determine if a container's state is "running".
type: "boolean"
Paused:
description: "Whether this container is paused."

@@ -3194,7 +3315,7 @@ paths:
items:
$ref: "#/definitions/MountPoint"
Config:
$ref: "#/definitions/Config"
$ref: "#/definitions/ContainerConfig"
NetworkSettings:
$ref: "#/definitions/NetworkConfig"
examples:

@@ -5560,7 +5681,7 @@ paths:
in: "body"
description: "The container configuration"
schema:
$ref: "#/definitions/Config"
$ref: "#/definitions/ContainerConfig"
- name: "container"
in: "query"
description: "The ID or name of the container to commit"

@@ -5599,16 +5720,22 @@ paths:

Various objects within Docker report events when something happens to them.

Containers report these events: `attach, commit, copy, create, destroy, detach, die, exec_create, exec_detach, exec_start, export, health_status, kill, oom, pause, rename, resize, restart, start, stop, top, unpause, update`
Containers report these events: `attach`, `commit`, `copy`, `create`, `destroy`, `detach`, `die`, `exec_create`, `exec_detach`, `exec_start`, `export`, `health_status`, `kill`, `oom`, `pause`, `rename`, `resize`, `restart`, `start`, `stop`, `top`, `unpause`, and `update`

Images report these events: `delete, import, load, pull, push, save, tag, untag`
Images report these events: `delete`, `import`, `load`, `pull`, `push`, `save`, `tag`, and `untag`

Volumes report these events: `create, mount, unmount, destroy`
Volumes report these events: `create`, `mount`, `unmount`, and `destroy`

Networks report these events: `create, connect, disconnect, destroy`
Networks report these events: `create`, `connect`, `disconnect`, `destroy`, `update`, and `remove`

The Docker daemon reports these events: `reload`

Services report these events: `create`, `update`, and `remove`

Nodes report these events: `create`, `update`, and `remove`

Secrets report these events: `create`, `update`, and `remove`

operationId: "SystemEvents"
produces:
- "application/json"

@@ -5682,7 +5809,8 @@ paths:

- `label=<string>` image or container label
- `network=<string>` network name or ID
- `plugin`=<string> plugin name or ID
- `type=<string>` object to filter by, one of `container`, `image`, `volume`, `network`, or `daemon`
- `scope`=<string> local or swarm
- `type=<string>` object to filter by, one of `container`, `image`, `volume`, `network`, `daemon`, `plugin`, `node`, `service` or `secret`
- `volume=<string>` volume name or ID
type: "string"
tags: ["System"]
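On the Go side, the expanded event types and filters above map onto the client's `Events` call. A rough sketch, assuming a `*client.Client` has already been constructed and using the `type=service` filter that this release documents:

```go
package example

import (
	"context"
	"fmt"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/api/types/filters"
	"github.com/docker/docker/client"
)

// streamServiceEvents prints swarm service events until the context is
// cancelled or the event stream reports an error.
func streamServiceEvents(ctx context.Context, cli *client.Client) error {
	f := filters.NewArgs()
	f.Add("type", "service") // one of the newly documented filter values

	msgs, errs := cli.Events(ctx, types.EventsOptions{Filters: f})
	for {
		select {
		case m := <-msgs:
			fmt.Printf("%s %s %s\n", m.Type, m.Action, m.Actor.ID)
		case err := <-errs:
			// The error channel also delivers io.EOF when the stream ends.
			return err
		case <-ctx.Done():
			return ctx.Err()
		}
	}
}
```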
@@ -5763,13 +5891,13 @@ paths:
-
Name: "my-volume"
Driver: "local"
Mountpoint: ""
Mountpoint: "/var/lib/docker/volumes/my-volume/_data"
Labels: null
Scope: ""
Scope: "local"
Options: null
UsageData:
Size: 0
RefCount: 0
Size: 10920104
RefCount: 2
500:
description: "server error"
schema:

@@ -7355,6 +7483,16 @@ paths:
AdvertiseAddr:
description: "Externally reachable address advertised to other nodes. This can either be an address/port combination in the form `192.168.1.1:4567`, or an interface followed by a port number, like `eth0:4567`. If the port number is omitted, the port number from the listen address is used. If `AdvertiseAddr` is not specified, it will be automatically detected when possible."
type: "string"
DataPathAddr:
description: |
Address or interface to use for data path traffic (format: `<ip|interface>`), for example, `192.168.1.1`,
or an interface, like `eth0`. If `DataPathAddr` is unspecified, the same address as `AdvertiseAddr`
is used.

The `DataPathAddr` specifies the address that global scope network drivers will publish towards other
nodes in order to reach the containers running on this node. Using this parameter it is possible to
separate the container data traffic from the management traffic of the cluster.
type: "string"
ForceNewCluster:
description: "Force creation of a new swarm."
type: "boolean"

@@ -7403,6 +7541,17 @@ paths:
type: "string"
AdvertiseAddr:
description: "Externally reachable address advertised to other nodes. This can either be an address/port combination in the form `192.168.1.1:4567`, or an interface followed by a port number, like `eth0:4567`. If the port number is omitted, the port number from the listen address is used. If `AdvertiseAddr` is not specified, it will be automatically detected when possible."
type: "string"
DataPathAddr:
description: |
Address or interface to use for data path traffic (format: `<ip|interface>`), for example, `192.168.1.1`,
or an interface, like `eth0`. If `DataPathAddr` is unspecified, the same address as `AdvertiseAddr`
is used.

The `DataPathAddr` specifies the address that global scope network drivers will publish towards other
nodes in order to reach the containers running on this node. Using this parameter it is possible to
separate the container data traffic from the management traffic of the cluster.

type: "string"
RemoteAddrs:
description: "Addresses of manager nodes already participating in the swarm."
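The new `DataPathAddr` parameter also appears in the Go API types (`swarm.InitRequest` and `swarm.JoinRequest`). A hedged sketch of initialising a swarm that keeps the overlay data plane on a dedicated interface; the addresses below are placeholders, not values taken from this diff:

```go
package example

import (
	"context"

	"github.com/docker/docker/api/types/swarm"
	"github.com/docker/docker/client"
)

// initSwarmWithDataPath initialises a single-node swarm where cluster
// management traffic and container data traffic use different addresses.
// It returns the node ID reported by the daemon.
func initSwarmWithDataPath(ctx context.Context, cli *client.Client) (string, error) {
	return cli.SwarmInit(ctx, swarm.InitRequest{
		ListenAddr:    "0.0.0.0:2377",     // management/raft traffic
		AdvertiseAddr: "192.168.1.1:2377", // address other nodes dial
		DataPathAddr:  "10.0.0.1",         // overlay/VXLAN data plane
	})
}
```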
@@ -8382,6 +8531,198 @@ paths:
format: "int64"
required: true
tags: ["Secret"]
/configs:
get:
summary: "List configs"
operationId: "ConfigList"
produces:
- "application/json"
responses:
200:
description: "no error"
schema:
type: "array"
items:
$ref: "#/definitions/Config"
example:
- ID: "ktnbjxoalbkvbvedmg1urrz8h"
Version:
Index: 11
CreatedAt: "2016-11-05T01:20:17.327670065Z"
UpdatedAt: "2016-11-05T01:20:17.327670065Z"
Spec:
Name: "server.conf"
500:
description: "server error"
schema:
$ref: "#/definitions/ErrorResponse"
503:
description: "node is not part of a swarm"
schema:
$ref: "#/definitions/ErrorResponse"
parameters:
- name: "filters"
in: "query"
type: "string"
description: |
A JSON encoded value of the filters (a `map[string][]string`) to process on the configs list. Available filters:

- `id=<config id>`
- `label=<key> or label=<key>=value`
- `name=<config name>`
- `names=<config name>`
tags: ["Config"]
/configs/create:
post:
summary: "Create a config"
operationId: "ConfigCreate"
consumes:
- "application/json"
produces:
- "application/json"
responses:
201:
description: "no error"
schema:
type: "object"
properties:
ID:
description: "The ID of the created config."
type: "string"
example:
ID: "ktnbjxoalbkvbvedmg1urrz8h"
409:
description: "name conflicts with an existing object"
schema:
$ref: "#/definitions/ErrorResponse"
500:
description: "server error"
schema:
$ref: "#/definitions/ErrorResponse"
503:
description: "node is not part of a swarm"
schema:
$ref: "#/definitions/ErrorResponse"
parameters:
- name: "body"
in: "body"
schema:
allOf:
- $ref: "#/definitions/ConfigSpec"
- type: "object"
example:
Name: "server.conf"
Labels:
foo: "bar"
Data: "VEhJUyBJUyBOT1QgQSBSRUFMIENFUlRJRklDQVRFCg=="
tags: ["Config"]
/configs/{id}:
get:
summary: "Inspect a config"
operationId: "ConfigInspect"
produces:
- "application/json"
responses:
200:
description: "no error"
schema:
$ref: "#/definitions/Config"
examples:
application/json:
ID: "ktnbjxoalbkvbvedmg1urrz8h"
Version:
Index: 11
CreatedAt: "2016-11-05T01:20:17.327670065Z"
UpdatedAt: "2016-11-05T01:20:17.327670065Z"
Spec:
Name: "app-dev.crt"
404:
description: "config not found"
schema:
$ref: "#/definitions/ErrorResponse"
500:
description: "server error"
schema:
$ref: "#/definitions/ErrorResponse"
503:
description: "node is not part of a swarm"
schema:
$ref: "#/definitions/ErrorResponse"
parameters:
- name: "id"
in: "path"
required: true
type: "string"
description: "ID of the config"
tags: ["Config"]
delete:
summary: "Delete a config"
operationId: "ConfigDelete"
produces:
- "application/json"
responses:
204:
description: "no error"
404:
description: "config not found"
schema:
$ref: "#/definitions/ErrorResponse"
500:
description: "server error"
schema:
$ref: "#/definitions/ErrorResponse"
503:
description: "node is not part of a swarm"
schema:
$ref: "#/definitions/ErrorResponse"
parameters:
- name: "id"
in: "path"
required: true
type: "string"
description: "ID of the config"
tags: ["Config"]
/configs/{id}/update:
post:
summary: "Update a Config"
operationId: "ConfigUpdate"
responses:
200:
description: "no error"
400:
description: "bad parameter"
schema:
$ref: "#/definitions/ErrorResponse"
404:
description: "no such config"
schema:
$ref: "#/definitions/ErrorResponse"
500:
description: "server error"
schema:
$ref: "#/definitions/ErrorResponse"
503:
description: "node is not part of a swarm"
schema:
$ref: "#/definitions/ErrorResponse"
parameters:
- name: "id"
in: "path"
description: "The ID or name of the config"
type: "string"
required: true
- name: "body"
in: "body"
schema:
$ref: "#/definitions/ConfigSpec"
description: "The spec of the config to update. Currently, only the Labels field can be updated. All other fields must remain unchanged from the [ConfigInspect endpoint](#operation/ConfigInspect) response values."
- name: "version"
in: "query"
description: "The version number of the config object being updated. This is required to avoid conflicting writes."
type: "integer"
format: "int64"
required: true
tags: ["Config"]
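These `/configs` endpoints are mirrored by `ConfigCreate`, `ConfigList`, and related methods on the Go client. A small sketch of creating and then listing a config; the name, labels, and payload are made up, and note that the Go client takes raw bytes while the wire format above carries the data base64-encoded:

```go
package example

import (
	"context"
	"fmt"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/api/types/swarm"
	"github.com/docker/docker/client"
)

// createAndListConfigs stores a small config in the swarm and prints the
// configs the manager currently knows about.
func createAndListConfigs(ctx context.Context, cli *client.Client) error {
	spec := swarm.ConfigSpec{
		Annotations: swarm.Annotations{
			Name:   "server.conf",
			Labels: map[string]string{"env": "dev"},
		},
		Data: []byte("listen 8080;\n"), // sent base64-encoded on the wire
	}

	created, err := cli.ConfigCreate(ctx, spec)
	if err != nil {
		return err
	}
	fmt.Println("created config", created.ID)

	configs, err := cli.ConfigList(ctx, types.ConfigListOptions{})
	if err != nil {
		return err
	}
	for _, c := range configs {
		fmt.Println(c.ID, c.Spec.Name)
	}
	return nil
}
```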
/distribution/{name}/json:
get:
summary: "Get image information from the registry"
@@ -67,9 +67,6 @@ type TaskSpec struct {

ForceUpdate uint64

Runtime RuntimeType `json:",omitempty"`
// TODO (ehazlett): this should be removed and instead
// use struct tags (proto) for the runtimes
RuntimeData []byte `json:",omitempty"`
}

// Resources represents resources (CPU/Memory).
@@ -44,15 +44,23 @@ type Volume struct {
UsageData *VolumeUsageData `json:"UsageData,omitempty"`
}

// VolumeUsageData volume usage data
// VolumeUsageData Usage details about the volume. This information is used by the
// `GET /system/df` endpoint, and omitted in other endpoints.
//
// swagger:model VolumeUsageData
type VolumeUsageData struct {

// The number of containers referencing this volume.
// The number of containers referencing this volume. This field
// is set to `-1` if the reference-count is not available.
//
// Required: true
RefCount int64 `json:"RefCount"`

// The disk space used by the volume (local driver only)
// Amount of disk space used by the volume (in bytes). This information
// is only available for volumes created with the `"local"` volume
// driver. For volumes created with other volume drivers, this field
// is set to `-1` ("not available")
//
// Required: true
Size int64 `json:"Size"`
}
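The `-1` sentinel described in the regenerated comments matters to anyone consuming `GET /system/df`. A rough consumer-side sketch using the Go client's `DiskUsage`; the signature shown matches this release line (later clients add an options argument):

```go
package example

import (
	"context"
	"fmt"

	"github.com/docker/docker/client"
)

// printVolumeUsage lists volume sizes from /system/df, treating -1 as
// "not available" per the documented sentinel.
func printVolumeUsage(ctx context.Context, cli *client.Client) error {
	du, err := cli.DiskUsage(ctx)
	if err != nil {
		return err
	}
	for _, v := range du.Volumes {
		if v.UsageData == nil {
			continue // usage details are only filled in by /system/df
		}
		if v.UsageData.Size < 0 {
			fmt.Printf("%s: size not available (non-local driver)\n", v.Name)
			continue
		}
		fmt.Printf("%s: %d bytes, %d reference(s)\n", v.Name, v.UsageData.Size, v.UsageData.RefCount)
	}
	return nil
}
```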
@@ -281,7 +281,7 @@ func BuildFromConfig(config *container.Config, changes []string) (*container.Con
}
dispatchState := newDispatchState()
dispatchState.runConfig = config
return dispatchFromDockerfile(b, dockerfile, dispatchState)
return dispatchFromDockerfile(b, dockerfile, dispatchState, nil)
}

func checkDispatchDockerfile(dockerfile *parser.Node) error {

@@ -293,7 +293,7 @@ func checkDispatchDockerfile(dockerfile *parser.Node) error {
return nil
}

func dispatchFromDockerfile(b *Builder, result *parser.Result, dispatchState *dispatchState) (*container.Config, error) {
func dispatchFromDockerfile(b *Builder, result *parser.Result, dispatchState *dispatchState, source builder.Source) (*container.Config, error) {
shlex := NewShellLex(result.EscapeToken)
ast := result.AST
total := len(ast.Children)

@@ -304,6 +304,7 @@ func dispatchFromDockerfile(b *Builder, result *parser.Result, dispatchState *di
stepMsg: formatStep(i, total),
node: n,
shlex: shlex,
source: source,
}
if _, err := b.dispatch(opts); err != nil {
return nil, err

@@ -30,9 +30,10 @@ type pathCache interface {
// copyInfo is a data object which stores the metadata about each source file in
// a copyInstruction
type copyInfo struct {
root string
path string
hash string
root string
path string
hash string
noDecompress bool
}

func newCopyInfoFromSource(source builder.Source, path string, hash string) copyInfo {

@@ -118,7 +119,9 @@ func (o *copier) getCopyInfoForSourcePath(orig string) ([]copyInfo, error) {
o.tmpPaths = append(o.tmpPaths, remote.Root())

hash, err := remote.Hash(path)
return newCopyInfos(newCopyInfoFromSource(remote, path, hash)), err
ci := newCopyInfoFromSource(remote, path, hash)
ci.noDecompress = true // data from http shouldn't be extracted even on ADD
return newCopyInfos(ci), err
}

// Cleanup removes any temporary directories created as part of downloading

@@ -156,6 +156,11 @@ func add(req dispatchRequest) error {
return err
}
copyInstruction.allowLocalDecompression = true
for _, ci := range copyInstruction.infos {
if ci.noDecompress {
copyInstruction.allowLocalDecompression = false
}
}

return req.builder.performCopy(req.state, copyInstruction)
}

@@ -325,7 +330,7 @@ func processOnBuild(req dispatchRequest) error {
}
}

if _, err := dispatchFromDockerfile(req.builder, dockerfile, dispatchState); err != nil {
if _, err := dispatchFromDockerfile(req.builder, dockerfile, dispatchState, req.source); err != nil {
return err
}
}

@@ -171,11 +171,9 @@ func (b *Builder) dispatch(options dispatchOptions) (*dispatchState, error) {
buildsFailed.WithValues(metricsUnknownInstructionError).Inc()
return nil, fmt.Errorf("unknown instruction: %s", upperCasedCmd)
}
if err := f(newDispatchRequestFromOptions(options, b, args)); err != nil {
return nil, err
}
options.state.updateRunConfig()
return options.state, nil
err = f(newDispatchRequestFromOptions(options, b, args))
return options.state, err
}

type dispatchOptions struct {
@@ -41,6 +41,7 @@ func Detect(config backend.BuildConfig) (remote builder.Source, dockerfile *pars
}

func newArchiveRemote(rc io.ReadCloser, dockerfilePath string) (builder.Source, *parser.Result, error) {
defer rc.Close()
c, err := MakeTarSumContext(rc)
if err != nil {
return nil, nil, err

@@ -20,13 +20,15 @@ func (cli *Client) Ping(ctx context.Context) (types.Ping, error) {
}
defer ensureReaderClosed(serverResp)

ping.APIVersion = serverResp.header.Get("API-Version")
if serverResp.header != nil {
ping.APIVersion = serverResp.header.Get("API-Version")

if serverResp.header.Get("Docker-Experimental") == "true" {
ping.Experimental = true
if serverResp.header.Get("Docker-Experimental") == "true" {
ping.Experimental = true
}
ping.OSType = serverResp.header.Get("OSType")
}

ping.OSType = serverResp.header.Get("OSType")

return ping, nil
err = cli.checkResponseErr(serverResp)
return ping, err
}
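With this change, `Ping` reads the API headers even when the daemon answers with an error status, and any error from `checkResponseErr` is returned alongside them. A minimal usage sketch; the constructor name matches this release line and is not taken from the diff:

```go
package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewEnvClient() // later releases use NewClientWithOpts
	if err != nil {
		panic(err)
	}

	ping, err := cli.Ping(context.Background())
	if err != nil {
		// Even on error, the header-derived fields may now be populated.
		fmt.Println("ping failed:", err)
	}
	fmt.Printf("API version: %q, OS type: %q, experimental: %v\n",
		ping.APIVersion, ping.OSType, ping.Experimental)
}
```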
client/ping_test.go (new file, 82 lines)
@@ -0,0 +1,82 @@
package client

import (
"errors"
"io/ioutil"
"net/http"
"strings"
"testing"

"github.com/stretchr/testify/assert"
"golang.org/x/net/context"
)

// TestPingFail tests that when a server sends a non-successful response that we
// can still grab API details, when set.
// Some of this is just exercising the code paths to make sure there are no
// panics.
func TestPingFail(t *testing.T) {
var withHeader bool
client := &Client{
client: newMockClient(func(req *http.Request) (*http.Response, error) {
resp := &http.Response{StatusCode: http.StatusInternalServerError}
if withHeader {
resp.Header = http.Header{}
resp.Header.Set("API-Version", "awesome")
resp.Header.Set("Docker-Experimental", "true")
}
resp.Body = ioutil.NopCloser(strings.NewReader("some error with the server"))
return resp, nil
}),
}

ping, err := client.Ping(context.Background())
assert.Error(t, err)
assert.Equal(t, false, ping.Experimental)
assert.Equal(t, "", ping.APIVersion)

withHeader = true
ping2, err := client.Ping(context.Background())
assert.Error(t, err)
assert.Equal(t, true, ping2.Experimental)
assert.Equal(t, "awesome", ping2.APIVersion)
}

// TestPingWithError tests the case where there is a protocol error in the ping.
// This test is mostly just testing that there are no panics in this code path.
func TestPingWithError(t *testing.T) {
client := &Client{
client: newMockClient(func(req *http.Request) (*http.Response, error) {
resp := &http.Response{StatusCode: http.StatusInternalServerError}
resp.Header = http.Header{}
resp.Header.Set("API-Version", "awesome")
resp.Header.Set("Docker-Experimental", "true")
resp.Body = ioutil.NopCloser(strings.NewReader("some error with the server"))
return resp, errors.New("some error")
}),
}

ping, err := client.Ping(context.Background())
assert.Error(t, err)
assert.Equal(t, false, ping.Experimental)
assert.Equal(t, "", ping.APIVersion)
}

// TestPingSuccess tests that we are able to get the expected API headers/ping
// details on success.
func TestPingSuccess(t *testing.T) {
client := &Client{
client: newMockClient(func(req *http.Request) (*http.Response, error) {
resp := &http.Response{StatusCode: http.StatusInternalServerError}
resp.Header = http.Header{}
resp.Header.Set("API-Version", "awesome")
resp.Header.Set("Docker-Experimental", "true")
resp.Body = ioutil.NopCloser(strings.NewReader("some error with the server"))
return resp, nil
}),
}
ping, err := client.Ping(context.Background())
assert.Error(t, err)
assert.Equal(t, true, ping.Experimental)
assert.Equal(t, "awesome", ping.APIVersion)
}
@@ -24,6 +24,7 @@ type serverResponse struct {
body io.ReadCloser
header http.Header
statusCode int
reqURL *url.URL
}

// head sends an http request to the docker API using the method HEAD.

@@ -118,11 +119,18 @@ func (cli *Client) sendRequest(ctx context.Context, method, path string, query u
if err != nil {
return serverResponse{}, err
}
return cli.doRequest(ctx, req)
resp, err := cli.doRequest(ctx, req)
if err != nil {
return resp, err
}
if err := cli.checkResponseErr(resp); err != nil {
return resp, err
}
return resp, nil
}

func (cli *Client) doRequest(ctx context.Context, req *http.Request) (serverResponse, error) {
serverResp := serverResponse{statusCode: -1}
serverResp := serverResponse{statusCode: -1, reqURL: req.URL}

resp, err := ctxhttp.Do(ctx, cli.client, req)
if err != nil {

@@ -179,37 +187,44 @@ func (cli *Client) doRequest(ctx context.Context, req *http.Request) (serverResp

if resp != nil {
serverResp.statusCode = resp.StatusCode
serverResp.body = resp.Body
serverResp.header = resp.Header
}

if serverResp.statusCode < 200 || serverResp.statusCode >= 400 {
body, err := ioutil.ReadAll(resp.Body)
if err != nil {
return serverResp, err
}
if len(body) == 0 {
return serverResp, fmt.Errorf("Error: request returned %s for API route and version %s, check if the server supports the requested API version", http.StatusText(serverResp.statusCode), req.URL)
}

var errorMessage string
if (cli.version == "" || versions.GreaterThan(cli.version, "1.23")) &&
resp.Header.Get("Content-Type") == "application/json" {
var errorResponse types.ErrorResponse
if err := json.Unmarshal(body, &errorResponse); err != nil {
return serverResp, fmt.Errorf("Error reading JSON: %v", err)
}
errorMessage = errorResponse.Message
} else {
errorMessage = string(body)
}

return serverResp, fmt.Errorf("Error response from daemon: %s", strings.TrimSpace(errorMessage))
}

serverResp.body = resp.Body
serverResp.header = resp.Header
return serverResp, nil
}

func (cli *Client) checkResponseErr(serverResp serverResponse) error {
if serverResp.statusCode >= 200 && serverResp.statusCode < 400 {
return nil
}

body, err := ioutil.ReadAll(serverResp.body)
if err != nil {
return err
}
if len(body) == 0 {
return fmt.Errorf("Error: request returned %s for API route and version %s, check if the server supports the requested API version", http.StatusText(serverResp.statusCode), serverResp.reqURL)
}

var ct string
if serverResp.header != nil {
ct = serverResp.header.Get("Content-Type")
}

var errorMessage string
if (cli.version == "" || versions.GreaterThan(cli.version, "1.23")) && ct == "application/json" {
var errorResponse types.ErrorResponse
if err := json.Unmarshal(body, &errorResponse); err != nil {
return fmt.Errorf("Error reading JSON: %v", err)
}
errorMessage = errorResponse.Message
} else {
errorMessage = string(body)
}

return fmt.Errorf("Error response from daemon: %s", strings.TrimSpace(errorMessage))
}

func (cli *Client) addHeaders(req *http.Request, headers headers) *http.Request {
// Add CLI Config's HTTP Headers BEFORE we set the Docker headers
// then the user can't change OUR headers

@@ -239,9 +254,9 @@ func encodeData(data interface{}) (*bytes.Buffer, error) {
}

func ensureReaderClosed(response serverResponse) {
if body := response.body; body != nil {
if response.body != nil {
// Drain up to 512 bytes and close the body to let the Transport reuse the connection
io.CopyN(ioutil.Discard, body, 512)
io.CopyN(ioutil.Discard, response.body, 512)
response.body.Close()
}
}
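Because `checkResponseErr` now runs inside `sendRequest`, every client call gets the same `Error response from daemon: …` formatting for non-2xx replies. Callers normally don't parse that string; helper predicates classify the error instead. A hedged sketch under those assumptions (the container name is a placeholder, and the exact predicate behaviour may vary between client versions):

```go
package example

import (
	"context"
	"fmt"

	"github.com/docker/docker/client"
)

// describeContainer shows how a non-2xx daemon reply surfaces to callers:
// the response body is turned into an error, and helpers such as
// client.IsErrNotFound can classify it without string matching.
func describeContainer(ctx context.Context, cli *client.Client, name string) {
	info, err := cli.ContainerInspect(ctx, name)
	if err != nil {
		if client.IsErrNotFound(err) {
			fmt.Printf("no such container %q\n", name)
			return
		}
		fmt.Println("inspect failed:", err) // "Error response from daemon: ..."
		return
	}
	fmt.Println(info.ID, info.State.Status)
}
```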
@@ -24,18 +24,22 @@ func (cli *Client) ServiceCreate(ctx context.Context, service swarm.ServiceSpec,
|
||||
headers["X-Registry-Auth"] = []string{options.EncodedRegistryAuth}
|
||||
}
|
||||
|
||||
// ensure that the image is tagged
|
||||
if taggedImg := imageWithTagString(service.TaskTemplate.ContainerSpec.Image); taggedImg != "" {
|
||||
service.TaskTemplate.ContainerSpec.Image = taggedImg
|
||||
}
|
||||
|
||||
// Contact the registry to retrieve digest and platform information
|
||||
if options.QueryRegistry {
|
||||
distributionInspect, err := cli.DistributionInspect(ctx, service.TaskTemplate.ContainerSpec.Image, options.EncodedRegistryAuth)
|
||||
distErr = err
|
||||
if err == nil {
|
||||
// now pin by digest if the image doesn't already contain a digest
|
||||
img := imageWithDigestString(service.TaskTemplate.ContainerSpec.Image, distributionInspect.Descriptor.Digest)
|
||||
if img != "" {
|
||||
if img := imageWithDigestString(service.TaskTemplate.ContainerSpec.Image, distributionInspect.Descriptor.Digest); img != "" {
|
||||
service.TaskTemplate.ContainerSpec.Image = img
|
||||
}
|
||||
// add platforms that are compatible with the service
|
||||
service.TaskTemplate.Placement = updateServicePlatforms(service.TaskTemplate.Placement, distributionInspect)
|
||||
service.TaskTemplate.Placement = setServicePlatforms(service.TaskTemplate.Placement, distributionInspect)
|
||||
}
|
||||
}
|
||||
var response types.ServiceCreateResponse
|
||||
@@ -55,29 +59,42 @@ func (cli *Client) ServiceCreate(ctx context.Context, service swarm.ServiceSpec,
|
||||
}
|
||||
|
||||
// imageWithDigestString takes an image string and a digest, and updates
|
||||
// the image string if it didn't originally contain a digest. It assumes
|
||||
// that the image string is not an image ID
|
||||
// the image string if it didn't originally contain a digest. It returns
|
||||
// an empty string if there are no updates.
|
||||
func imageWithDigestString(image string, dgst digest.Digest) string {
|
||||
ref, err := reference.ParseAnyReference(image)
|
||||
namedRef, err := reference.ParseNormalizedNamed(image)
|
||||
if err == nil {
|
||||
if _, isCanonical := ref.(reference.Canonical); !isCanonical {
|
||||
namedRef, _ := ref.(reference.Named)
|
||||
if _, isCanonical := namedRef.(reference.Canonical); !isCanonical {
|
||||
// ensure that image gets a default tag if none is provided
|
||||
img, err := reference.WithDigest(namedRef, dgst)
|
||||
if err == nil {
|
||||
return img.String()
|
||||
return reference.FamiliarString(img)
|
||||
}
|
||||
}
|
||||
}
|
||||
return ""
|
||||
}
|
||||
|
||||
// updateServicePlatforms updates the Platforms in swarm.Placement to list
|
||||
// all compatible platforms for the service, as found in distributionInspect
|
||||
// and returns a pointer to the new or updated swarm.Placement struct
|
||||
func updateServicePlatforms(placement *swarm.Placement, distributionInspect registrytypes.DistributionInspect) *swarm.Placement {
|
||||
// imageWithTagString takes an image string, and returns a tagged image
|
||||
// string, adding a 'latest' tag if one was not provided. It returns an
|
||||
// emptry string if a canonical reference was provided
|
||||
func imageWithTagString(image string) string {
|
||||
namedRef, err := reference.ParseNormalizedNamed(image)
|
||||
if err == nil {
|
||||
return reference.FamiliarString(reference.TagNameOnly(namedRef))
|
||||
}
|
||||
return ""
|
||||
}
|
||||
|
||||
// setServicePlatforms sets Platforms in swarm.Placement to list all
|
||||
// compatible platforms for the service, as found in distributionInspect
|
||||
// and returns a pointer to the new or updated swarm.Placement struct.
|
||||
func setServicePlatforms(placement *swarm.Placement, distributionInspect registrytypes.DistributionInspect) *swarm.Placement {
|
||||
if placement == nil {
|
||||
placement = &swarm.Placement{}
|
||||
}
|
||||
// reset any existing listed platforms
|
||||
placement.Platforms = []swarm.Platform{}
|
||||
for _, p := range distributionInspect.Platforms {
|
||||
placement.Platforms = append(placement.Platforms, swarm.Platform{
|
||||
Architecture: p.Architecture,
|
||||
|
||||
@@ -13,6 +13,7 @@ import (
	"github.com/docker/docker/api/types"
	registrytypes "github.com/docker/docker/api/types/registry"
	"github.com/docker/docker/api/types/swarm"
	"github.com/opencontainers/go-digest"
	"github.com/opencontainers/image-spec/specs-go/v1"
	"golang.org/x/net/context"
)

@@ -121,3 +122,92 @@ func TestServiceCreateCompatiblePlatforms(t *testing.T) {
		t.Fatalf("expected `service_amd64`, got %s", r.ID)
	}
}

func TestServiceCreateDigestPinning(t *testing.T) {
	dgst := "sha256:c0537ff6a5218ef531ece93d4984efc99bbf3f7497c0a7726c88e2bb7584dc96"
	dgstAlt := "sha256:37ffbf3f7497c07584dc9637ffbf3f7497c0758c0537ffbf3f7497c0c88e2bb7"
	serviceCreateImage := ""
	pinByDigestTests := []struct {
		img      string // input image provided by the user
		expected string // expected image after digest pinning
	}{
		// default registry returns familiar string
		{"docker.io/library/alpine", "alpine:latest@" + dgst},
		// provided tag is preserved and digest added
		{"alpine:edge", "alpine:edge@" + dgst},
		// image with provided alternative digest remains unchanged
		{"alpine@" + dgstAlt, "alpine@" + dgstAlt},
		// image with provided tag and alternative digest remains unchanged
		{"alpine:edge@" + dgstAlt, "alpine:edge@" + dgstAlt},
		// image on alternative registry does not result in familiar string
		{"alternate.registry/library/alpine", "alternate.registry/library/alpine:latest@" + dgst},
		// unresolvable image does not get a digest
		{"cannotresolve", "cannotresolve:latest"},
	}

	client := &Client{
		client: newMockClient(func(req *http.Request) (*http.Response, error) {
			if strings.HasPrefix(req.URL.Path, "/services/create") {
				// reset and set image received by the service create endpoint
				serviceCreateImage = ""
				var service swarm.ServiceSpec
				if err := json.NewDecoder(req.Body).Decode(&service); err != nil {
					return nil, fmt.Errorf("could not parse service create request")
				}
				serviceCreateImage = service.TaskTemplate.ContainerSpec.Image

				b, err := json.Marshal(types.ServiceCreateResponse{
					ID: "service_id",
				})
				if err != nil {
					return nil, err
				}
				return &http.Response{
					StatusCode: http.StatusOK,
					Body:       ioutil.NopCloser(bytes.NewReader(b)),
				}, nil
			} else if strings.HasPrefix(req.URL.Path, "/distribution/cannotresolve") {
				// unresolvable image
				return nil, fmt.Errorf("cannot resolve image")
			} else if strings.HasPrefix(req.URL.Path, "/distribution/") {
				// resolvable images
				b, err := json.Marshal(registrytypes.DistributionInspect{
					Descriptor: v1.Descriptor{
						Digest: digest.Digest(dgst),
					},
				})
				if err != nil {
					return nil, err
				}
				return &http.Response{
					StatusCode: http.StatusOK,
					Body:       ioutil.NopCloser(bytes.NewReader(b)),
				}, nil
			}
			return nil, fmt.Errorf("unexpected URL '%s'", req.URL.Path)
		}),
	}

	// run pin by digest tests
	for _, p := range pinByDigestTests {
		r, err := client.ServiceCreate(context.Background(), swarm.ServiceSpec{
			TaskTemplate: swarm.TaskSpec{
				ContainerSpec: swarm.ContainerSpec{
					Image: p.img,
				},
			},
		}, types.ServiceCreateOptions{QueryRegistry: true})

		if err != nil {
			t.Fatal(err)
		}

		if r.ID != "service_id" {
			t.Fatalf("expected `service_id`, got %s", r.ID)
		}

		if p.expected != serviceCreateImage {
			t.Fatalf("expected image %s, got %s", p.expected, serviceCreateImage)
		}
	}
}
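The test above fakes the Engine API by handing the client a transport that fabricates responses per URL path. `newMockClient` is an unexported helper of the docker client package, but the general pattern is just a custom `http.RoundTripper`; here is a self-contained sketch of that pattern (names like `roundTripFunc` are illustrative, not the package's real helper):

```go
package main

import (
	"bytes"
	"fmt"
	"io/ioutil"
	"net/http"
	"strings"
)

// roundTripFunc lets a plain function act as an http.RoundTripper, which is
// essentially what the newMockClient helper does in the test above.
type roundTripFunc func(*http.Request) (*http.Response, error)

func (f roundTripFunc) RoundTrip(req *http.Request) (*http.Response, error) { return f(req) }

func main() {
	client := &http.Client{
		Transport: roundTripFunc(func(req *http.Request) (*http.Response, error) {
			// Fabricate a response for the distribution endpoint, fail anything else.
			if strings.HasPrefix(req.URL.Path, "/distribution/") {
				body := ioutil.NopCloser(bytes.NewReader([]byte(`{"Descriptor":{}}`)))
				return &http.Response{
					StatusCode: http.StatusOK,
					Header:     make(http.Header),
					Body:       body,
				}, nil
			}
			return nil, fmt.Errorf("unexpected URL %q", req.URL.Path)
		}),
	}

	resp, err := client.Get("http://example/distribution/alpine/json")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	b, _ := ioutil.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(b))
}
```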
@@ -35,6 +35,11 @@ func (cli *Client) ServiceUpdate(ctx context.Context, serviceID string, version

	query.Set("version", strconv.FormatUint(version.Index, 10))

+	// ensure that the image is tagged
+	if taggedImg := imageWithTagString(service.TaskTemplate.ContainerSpec.Image); taggedImg != "" {
+		service.TaskTemplate.ContainerSpec.Image = taggedImg
+	}

	// Contact the registry to retrieve digest and platform information
	// This happens only when the image has changed
	if options.QueryRegistry {

@@ -42,12 +47,11 @@ func (cli *Client) ServiceUpdate(ctx context.Context, serviceID string, version
		distErr = err
		if err == nil {
			// now pin by digest if the image doesn't already contain a digest
-			img := imageWithDigestString(service.TaskTemplate.ContainerSpec.Image, distributionInspect.Descriptor.Digest)
-			if img != "" {
+			if img := imageWithDigestString(service.TaskTemplate.ContainerSpec.Image, distributionInspect.Descriptor.Digest); img != "" {
				service.TaskTemplate.ContainerSpec.Image = img
			}
			// add platforms that are compatible with the service
-			service.TaskTemplate.Placement = updateServicePlatforms(service.TaskTemplate.Placement, distributionInspect)
+			service.TaskTemplate.Placement = setServicePlatforms(service.TaskTemplate.Placement, distributionInspect)
		}
	}

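The refactor in the second hunk folds the assignment into the `if` statement's init clause, so the pinned reference stays scoped to the conditional. A tiny illustration of that idiom; `pinByDigest` is only a stand-in for `imageWithDigestString`, and the image/digest values are placeholders:

```go
package main

import (
	"fmt"
	"strings"
)

// pinByDigest stands in for imageWithDigestString above: it returns "" when
// the reference already carries a digest, otherwise appends the given one.
func pinByDigest(image, dgst string) string {
	if strings.Contains(image, "@") {
		return ""
	}
	return image + "@" + dgst
}

func main() {
	image := "alpine:edge"
	// img only exists inside this if block, matching the refactored code.
	if img := pinByDigest(image, "sha256:c0537ff6a5218ef531ece93d4984efc99bbf3f7497c0a7726c88e2bb7584dc96"); img != "" {
		image = img
	}
	fmt.Println(image)
}
```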
@@ -155,6 +155,8 @@ func (cli *DaemonCli) start(opts daemonOptions) (err error) {
	api := apiserver.New(serverConfig)
	cli.api = api

+	var hosts []string

	for i := 0; i < len(cli.Config.Hosts); i++ {
		var err error
		if cli.Config.Hosts[i], err = dopts.ParseHost(cli.Config.TLS, cli.Config.Hosts[i]); err != nil {

@@ -186,6 +188,7 @@ func (cli *DaemonCli) start(opts daemonOptions) (err error) {
			}
		}
		logrus.Debugf("Listener created for HTTP on %s (%s)", proto, addr)
+		hosts = append(hosts, protoAddrParts[1])
		api.Accept(addr, ls...)
	}

@@ -213,6 +216,8 @@ func (cli *DaemonCli) start(opts daemonOptions) (err error) {
		return fmt.Errorf("Error starting daemon: %v", err)
	}

+	d.StoreHosts(hosts)

	// validate after NewDaemon has restored enabled plugins. Dont change order.
	if err := validateAuthzPlugins(cli.Config.AuthorizationPlugins, pluginStore); err != nil {
		return fmt.Errorf("Error validating authorization plugin: %v", err)

@@ -402,8 +407,12 @@ func loadDaemonCliConfig(opts daemonOptions) (*config.Config, error) {
		return nil, err
	}

+	if conf.V2Only == false {
+		logrus.Warnf(`The "disable-legacy-registry" option is deprecated and will be removed in Docker v17.12. Interacting with legacy (v1) registries will no longer be supported in Docker v17.12`)
+	}

	if flags.Changed("graph") {
-		logrus.Warnf(`the "-g / --graph" flag is deprecated. Please use "--data-root" instead`)
+		logrus.Warnf(`The "-g / --graph" flag is deprecated. Please use "--data-root" instead`)
	}

	// Labels of the docker engine used to allow multiple values associated with the same key.

@@ -102,7 +102,7 @@ func TestLoadDaemonConfigWithTrueDefaultValuesLeaveDefaults(t *testing.T) {
}

func TestLoadDaemonConfigWithLegacyRegistryOptions(t *testing.T) {
-	content := `{"disable-legacy-registry": true}`
+	content := `{"disable-legacy-registry": false}`
	tempFile := tempfile.NewTempFile(t, "config", content)
	defer tempFile.Remove()

@@ -110,5 +110,5 @@ func TestLoadDaemonConfigWithLegacyRegistryOptions(t *testing.T) {
	loadedConfig, err := loadDaemonCliConfig(opts)
	require.NoError(t, err)
	require.NotNil(t, loadedConfig)
-	assert.True(t, loadedConfig.V2Only)
+	assert.False(t, loadedConfig.V2Only)
}

@@ -702,6 +702,9 @@ func (container *Container) BuildCreateEndpointOptions(n libnetwork.Network, epC
		for _, alias := range epConfig.Aliases {
			createOptions = append(createOptions, libnetwork.CreateOptionMyAlias(alias))
		}
+		for k, v := range epConfig.DriverOpts {
+			createOptions = append(createOptions, libnetwork.EndpointOptionGeneric(options.Generic{k: v}))
+		}
	}

	if container.NetworkSettings.Service != nil {

@@ -747,9 +750,6 @@ func (container *Container) BuildCreateEndpointOptions(n libnetwork.Network, epC

		createOptions = append(createOptions, libnetwork.EndpointOptionGeneric(genericOption))
	}
-	for k, v := range epConfig.DriverOpts {
-		createOptions = append(createOptions, libnetwork.EndpointOptionGeneric(options.Generic{k: v}))
-	}

}

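The hunks above move the per-endpoint `DriverOpts` handling earlier in `BuildCreateEndpointOptions`, turning each driver option into its own generic endpoint option. libnetwork's `CreateOptionMyAlias` / `EndpointOptionGeneric` follow the functional-options pattern; a self-contained sketch of that shape (the types and option names here are illustrative, not libnetwork's real API):

```go
package main

import "fmt"

// endpoint and endpointOption sketch the functional-options shape used by
// libnetwork's endpoint options; they are stand-ins, not the real types.
type endpoint struct {
	aliases []string
	generic map[string]interface{}
}

type endpointOption func(*endpoint)

func withAlias(alias string) endpointOption {
	return func(ep *endpoint) { ep.aliases = append(ep.aliases, alias) }
}

func withGeneric(opts map[string]interface{}) endpointOption {
	return func(ep *endpoint) {
		if ep.generic == nil {
			ep.generic = map[string]interface{}{}
		}
		for k, v := range opts {
			ep.generic[k] = v
		}
	}
}

func main() {
	// Mirror of the added loop: each per-endpoint driver option becomes its
	// own generic option appended to createOptions.
	driverOpts := map[string]string{"com.example.driver.opt": "value"} // placeholder key/value
	var createOptions []endpointOption
	createOptions = append(createOptions, withAlias("web"))
	for k, v := range driverOpts {
		createOptions = append(createOptions, withGeneric(map[string]interface{}{k: v}))
	}

	ep := &endpoint{}
	for _, opt := range createOptions {
		opt(ep)
	}
	fmt.Println(ep.aliases, ep.generic)
}
```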
@@ -278,6 +278,9 @@ func (s *State) SetRunning(pid int, initial bool) {
	s.ErrorMsg = ""
	s.Running = true
	s.Restarting = false
+	if initial {
+		s.Paused = false
+	}
	s.ExitCodeValue = 0
	s.Pid = pid
	if initial {

@@ -304,6 +307,7 @@ func (s *State) SetRestarting(exitStatus *ExitStatus) {
	// all the checks in docker around rm/stop/etc
	s.Running = true
	s.Restarting = true
+	s.Paused = false
	s.Pid = 0
	s.FinishedAt = time.Now().UTC()
	s.setFromExitStatus(exitStatus)

@@ -7,6 +7,7 @@ import (
	"golang.org/x/net/context"

	"github.com/Sirupsen/logrus"
+	"github.com/docker/docker/pkg/pools"
	"github.com/docker/docker/pkg/promise"
	"github.com/docker/docker/pkg/term"
)

@@ -86,7 +87,7 @@ func (c *Config) CopyStreams(ctx context.Context, cfg *AttachConfig) chan error
		if cfg.TTY {
			_, err = copyEscapable(cfg.CStdin, cfg.Stdin, cfg.DetachKeys)
		} else {
-			_, err = io.Copy(cfg.CStdin, cfg.Stdin)
+			_, err = pools.Copy(cfg.CStdin, cfg.Stdin)
		}
		if err == io.ErrClosedPipe {
			err = nil

@@ -116,7 +117,7 @@ func (c *Config) CopyStreams(ctx context.Context, cfg *AttachConfig) chan error
		}

		logrus.Debugf("attach: %s: begin", name)
-		_, err := io.Copy(stream, streamPipe)
+		_, err := pools.Copy(stream, streamPipe)
		if err == io.ErrClosedPipe {
			err = nil
		}

@@ -174,5 +175,5 @@ func copyEscapable(dst io.Writer, src io.ReadCloser, keys []byte) (written int64
	pr := term.NewEscapeProxy(src, keys)
	defer src.Close()

-	return io.Copy(dst, pr)
+	return pools.Copy(dst, pr)
}

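The hunks above swap `io.Copy` for `pools.Copy` so that attach stream copies reuse pooled buffers instead of allocating fresh ones per copy. The sketch below shows only the general buffer-pooling idea with the standard library (`sync.Pool` + `io.CopyBuffer`); the actual `pkg/pools` implementation differs:

```go
package main

import (
	"io"
	"os"
	"strings"
	"sync"
)

// bufPool reuses fixed-size copy buffers across calls so that frequent
// stream copies (attach, logs, exec) don't allocate a new buffer each time.
// This illustrates the idea behind pools.Copy, not its actual code.
var bufPool = sync.Pool{
	New: func() interface{} {
		buf := make([]byte, 32*1024)
		return &buf
	},
}

func pooledCopy(dst io.Writer, src io.Reader) (int64, error) {
	bufp := bufPool.Get().(*[]byte)
	defer bufPool.Put(bufp)
	return io.CopyBuffer(dst, src, *bufp)
}

func main() {
	if _, err := pooledCopy(os.Stdout, strings.NewReader("hello from a pooled copy\n")); err != nil {
		panic(err)
	}
}
```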
@@ -12,7 +12,7 @@ RUN update-alternatives --install /usr/bin/go go /usr/lib/go-1.6/bin/go 100
# Install Go
# aarch64 doesn't have official go binaries, so use the version of go installed from
# the image to build go from source.
-ENV GO_VERSION 1.7.5
+ENV GO_VERSION 1.8.3
RUN mkdir /usr/src/go && curl -fsSL https://golang.org/dl/go${GO_VERSION}.src.tar.gz | tar -v -C /usr/src/go -xz --strip-components=1 \
	&& cd /usr/src/go/src \
	&& GOOS=linux GOARCH=arm64 GOROOT_BOOTSTRAP="$(go env GOROOT)" ./make.bash

@@ -9,7 +9,7 @@ RUN apt-get update && apt-get install -y apparmor bash-completion btrfs-tools bu
# Install Go
# aarch64 doesn't have official go binaries, so use the version of go installed from
# the image to build go from source.
-ENV GO_VERSION 1.7.5
+ENV GO_VERSION 1.8.3
RUN mkdir /usr/src/go && curl -fsSL https://golang.org/dl/go${GO_VERSION}.src.tar.gz | tar -v -C /usr/src/go -xz --strip-components=1 \
	&& cd /usr/src/go/src \
	&& GOOS=linux GOARCH=arm64 GOROOT_BOOTSTRAP="$(go env GOROOT)" ./make.bash

@@ -11,7 +11,7 @@ RUN update-alternatives --install /usr/bin/go go /usr/lib/go-1.6/bin/go 100
# Install Go
# aarch64 doesn't have official go binaries, so use the version of go installed from
# the image to build go from source.
-ENV GO_VERSION 1.8.1
+ENV GO_VERSION 1.8.3
RUN mkdir /usr/src/go && curl -fsSL https://golang.org/dl/go${GO_VERSION}.src.tar.gz | tar -v -C /usr/src/go -xz --strip-components=1 \
	&& cd /usr/src/go/src \
	&& GOOS=linux GOARCH=arm64 GOROOT_BOOTSTRAP="$(go env GOROOT)" ./make.bash

@@ -9,7 +9,7 @@ RUN apt-get update && apt-get install -y apparmor bash-completion btrfs-tools bu
# Install Go
# aarch64 doesn't have official go binaries, so use the version of go installed from
# the image to build go from source.
-ENV GO_VERSION 1.8.1
+ENV GO_VERSION 1.8.3
RUN mkdir /usr/src/go && curl -fsSL https://golang.org/dl/go${GO_VERSION}.src.tar.gz | tar -v -C /usr/src/go -xz --strip-components=1 \
	&& cd /usr/src/go/src \
	&& GOOS=linux GOARCH=arm64 GOROOT_BOOTSTRAP="$(go env GOROOT)" ./make.bash

@@ -10,7 +10,7 @@ RUN sed -ri "s/(httpredir|deb).debian.org/$APT_MIRROR/g" /etc/apt/sources.list

RUN apt-get update && apt-get install -y apparmor bash-completion btrfs-tools build-essential cmake curl ca-certificates debhelper dh-apparmor dh-systemd git libapparmor-dev libdevmapper-dev libltdl-dev pkg-config vim-common libsystemd-journal-dev --no-install-recommends && rm -rf /var/lib/apt/lists/*

-ENV GO_VERSION 1.8.1
+ENV GO_VERSION 1.8.3
RUN curl -fSL "https://golang.org/dl/go${GO_VERSION}.linux-amd64.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin

@@ -10,7 +10,7 @@ RUN sed -ri "s/(httpredir|deb).debian.org/$APT_MIRROR/g" /etc/apt/sources.list

RUN apt-get update && apt-get install -y apparmor bash-completion btrfs-tools build-essential cmake curl ca-certificates debhelper dh-apparmor dh-systemd git libapparmor-dev libdevmapper-dev libltdl-dev libseccomp-dev pkg-config vim-common libsystemd-dev --no-install-recommends && rm -rf /var/lib/apt/lists/*

-ENV GO_VERSION 1.8.1
+ENV GO_VERSION 1.8.3
RUN curl -fSL "https://golang.org/dl/go${GO_VERSION}.linux-amd64.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin

@@ -12,7 +12,7 @@ RUN sed -ri "s/(httpredir|deb).debian.org/$APT_MIRROR/g" /etc/apt/sources.list.d
RUN apt-get update && apt-get install -y -t wheezy-backports btrfs-tools --no-install-recommends && rm -rf /var/lib/apt/lists/*
RUN apt-get update && apt-get install -y apparmor bash-completion build-essential cmake curl ca-certificates debhelper dh-apparmor dh-systemd git libapparmor-dev libdevmapper-dev libltdl-dev pkg-config vim-common --no-install-recommends && rm -rf /var/lib/apt/lists/*

-ENV GO_VERSION 1.8.1
+ENV GO_VERSION 1.8.3
RUN curl -fSL "https://golang.org/dl/go${GO_VERSION}.linux-amd64.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin

@@ -6,7 +6,7 @@ FROM ubuntu:trusty

RUN apt-get update && apt-get install -y apparmor bash-completion btrfs-tools build-essential cmake curl ca-certificates debhelper dh-apparmor dh-systemd git libapparmor-dev libdevmapper-dev libltdl-dev pkg-config vim-common libsystemd-journal-dev --no-install-recommends && rm -rf /var/lib/apt/lists/*

-ENV GO_VERSION 1.8.1
+ENV GO_VERSION 1.8.3
RUN curl -fSL "https://golang.org/dl/go${GO_VERSION}.linux-amd64.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin

@@ -6,7 +6,7 @@ FROM ubuntu:xenial

RUN apt-get update && apt-get install -y apparmor bash-completion btrfs-tools build-essential cmake curl ca-certificates debhelper dh-apparmor dh-systemd git libapparmor-dev libdevmapper-dev libltdl-dev libseccomp-dev pkg-config vim-common libsystemd-dev --no-install-recommends && rm -rf /var/lib/apt/lists/*

-ENV GO_VERSION 1.8.1
+ENV GO_VERSION 1.8.3
RUN curl -fSL "https://golang.org/dl/go${GO_VERSION}.linux-amd64.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin

@@ -6,7 +6,7 @@ FROM ubuntu:yakkety

RUN apt-get update && apt-get install -y apparmor bash-completion btrfs-tools build-essential cmake curl ca-certificates debhelper dh-apparmor dh-systemd git libapparmor-dev libdevmapper-dev libltdl-dev libseccomp-dev pkg-config vim-common libsystemd-dev --no-install-recommends && rm -rf /var/lib/apt/lists/*

-ENV GO_VERSION 1.8.1
+ENV GO_VERSION 1.8.3
RUN curl -fSL "https://golang.org/dl/go${GO_VERSION}.linux-amd64.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin

@@ -6,7 +6,7 @@ FROM ubuntu:zesty

RUN apt-get update && apt-get install -y apparmor bash-completion btrfs-tools build-essential cmake curl ca-certificates debhelper dh-apparmor dh-systemd git libapparmor-dev libdevmapper-dev libltdl-dev libseccomp-dev pkg-config vim-common libsystemd-dev --no-install-recommends && rm -rf /var/lib/apt/lists/*

-ENV GO_VERSION 1.7.5
+ENV GO_VERSION 1.8.3
RUN curl -fSL "https://golang.org/dl/go${GO_VERSION}.linux-amd64.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin

@@ -10,7 +10,7 @@ RUN sed -ri "s/(httpredir|deb).debian.org/$APT_MIRROR/g" /etc/apt/sources.list

RUN apt-get update && apt-get install -y apparmor bash-completion btrfs-tools build-essential cmake curl ca-certificates debhelper dh-apparmor dh-systemd git libapparmor-dev libdevmapper-dev libltdl-dev pkg-config vim-common libsystemd-journal-dev --no-install-recommends && rm -rf /var/lib/apt/lists/*

-ENV GO_VERSION 1.8.1
+ENV GO_VERSION 1.8.3
RUN curl -fSL "https://golang.org/dl/go${GO_VERSION}.linux-armv6l.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin

@@ -10,7 +10,7 @@ RUN sed -ri "s/(httpredir|deb).debian.org/$APT_MIRROR/g" /etc/apt/sources.list

RUN apt-get update && apt-get install -y apparmor bash-completion btrfs-tools build-essential cmake curl ca-certificates debhelper dh-apparmor dh-systemd git libapparmor-dev libdevmapper-dev libltdl-dev pkg-config vim-common libsystemd-journal-dev --no-install-recommends && rm -rf /var/lib/apt/lists/*

-ENV GO_VERSION 1.8.1
+ENV GO_VERSION 1.8.3
# GOARM is the ARM architecture version which is unrelated to the above Golang version
ENV GOARM 6
RUN curl -fSL "https://golang.org/dl/go${GO_VERSION}.linux-armv6l.tar.gz" | tar xzC /usr/local

@@ -6,7 +6,7 @@ FROM armhf/ubuntu:trusty

RUN apt-get update && apt-get install -y apparmor bash-completion btrfs-tools build-essential cmake curl ca-certificates debhelper dh-apparmor dh-systemd git libapparmor-dev libdevmapper-dev libltdl-dev pkg-config vim-common libsystemd-journal-dev --no-install-recommends && rm -rf /var/lib/apt/lists/*

-ENV GO_VERSION 1.8.1
+ENV GO_VERSION 1.8.3
RUN curl -fSL "https://golang.org/dl/go${GO_VERSION}.linux-armv6l.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin

@@ -6,7 +6,7 @@ FROM armhf/ubuntu:xenial

RUN apt-get update && apt-get install -y apparmor bash-completion btrfs-tools build-essential cmake curl ca-certificates debhelper dh-apparmor dh-systemd git libapparmor-dev libdevmapper-dev libltdl-dev libseccomp-dev pkg-config vim-common libsystemd-dev --no-install-recommends && rm -rf /var/lib/apt/lists/*

-ENV GO_VERSION 1.8.1
+ENV GO_VERSION 1.8.3
RUN curl -fSL "https://golang.org/dl/go${GO_VERSION}.linux-armv6l.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin

@@ -6,7 +6,7 @@ FROM armhf/ubuntu:yakkety

RUN apt-get update && apt-get install -y apparmor bash-completion btrfs-tools build-essential cmake curl ca-certificates debhelper dh-apparmor dh-systemd git libapparmor-dev libdevmapper-dev libltdl-dev libseccomp-dev pkg-config vim-common libsystemd-dev --no-install-recommends && rm -rf /var/lib/apt/lists/*

-ENV GO_VERSION 1.8.1
+ENV GO_VERSION 1.8.3
RUN curl -fSL "https://golang.org/dl/go${GO_VERSION}.linux-armv6l.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin

@@ -6,7 +6,7 @@ FROM ppc64le/ubuntu:trusty

RUN apt-get update && apt-get install -y apparmor bash-completion btrfs-tools build-essential cmake curl ca-certificates debhelper dh-apparmor dh-systemd git libapparmor-dev libdevmapper-dev libltdl-dev pkg-config vim-common libsystemd-journal-dev --no-install-recommends && rm -rf /var/lib/apt/lists/*

-ENV GO_VERSION 1.8.1
+ENV GO_VERSION 1.8.3
RUN curl -fsSL "https://golang.org/dl/go${GO_VERSION}.linux-ppc64le.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin

@@ -6,7 +6,7 @@ FROM ppc64le/ubuntu:xenial

RUN apt-get update && apt-get install -y apparmor bash-completion btrfs-tools build-essential cmake curl ca-certificates debhelper dh-apparmor dh-systemd git libapparmor-dev libdevmapper-dev libltdl-dev pkg-config vim-common libseccomp-dev libsystemd-dev --no-install-recommends && rm -rf /var/lib/apt/lists/*

-ENV GO_VERSION 1.8.1
+ENV GO_VERSION 1.8.3
RUN curl -fsSL "https://golang.org/dl/go${GO_VERSION}.linux-ppc64le.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin

@@ -6,7 +6,7 @@ FROM ppc64le/ubuntu:yakkety

RUN apt-get update && apt-get install -y apparmor bash-completion btrfs-tools build-essential cmake curl ca-certificates debhelper dh-apparmor dh-systemd git libapparmor-dev libdevmapper-dev libltdl-dev pkg-config vim-common libseccomp-dev libsystemd-dev --no-install-recommends && rm -rf /var/lib/apt/lists/*

-ENV GO_VERSION 1.8.1
+ENV GO_VERSION 1.8.3
RUN curl -fsSL "https://golang.org/dl/go${GO_VERSION}.linux-ppc64le.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin

@@ -6,7 +6,7 @@ FROM s390x/ubuntu:xenial

RUN apt-get update && apt-get install -y apparmor bash-completion btrfs-tools build-essential cmake curl ca-certificates debhelper dh-apparmor dh-systemd git libapparmor-dev libdevmapper-dev libltdl-dev libseccomp-dev pkg-config libsystemd-dev vim-common --no-install-recommends && rm -rf /var/lib/apt/lists/*

-ENV GO_VERSION 1.8.1
+ENV GO_VERSION 1.8.3
RUN curl -fSL "https://golang.org/dl/go${GO_VERSION}.linux-s390x.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin

@@ -6,7 +6,7 @@ FROM s390x/ubuntu:yakkety

RUN apt-get update && apt-get install -y apparmor bash-completion btrfs-tools build-essential cmake curl ca-certificates debhelper dh-apparmor dh-systemd git libapparmor-dev libdevmapper-dev libltdl-dev libseccomp-dev pkg-config libsystemd-dev vim-common --no-install-recommends && rm -rf /var/lib/apt/lists/*

-ENV GO_VERSION 1.7.5
+ENV GO_VERSION 1.8.3
RUN curl -fSL "https://golang.org/dl/go${GO_VERSION}.linux-s390x.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin

@@ -7,7 +7,7 @@ FROM amazonlinux:latest
RUN yum groupinstall -y "Development Tools"
RUN yum install -y btrfs-progs-devel device-mapper-devel glibc-static libseccomp-devel libselinux-devel libtool-ltdl-devel pkgconfig selinux-policy selinux-policy-devel tar git cmake vim-common

-ENV GO_VERSION 1.7.5
+ENV GO_VERSION 1.8.3
RUN curl -fSL "https://golang.org/dl/go${GO_VERSION}.linux-amd64.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin

@@ -8,7 +8,7 @@ RUN yum groupinstall -y "Development Tools"
RUN yum -y swap -- remove systemd-container systemd-container-libs -- install systemd systemd-libs
RUN yum install -y btrfs-progs-devel device-mapper-devel glibc-static libseccomp-devel libselinux-devel libtool-ltdl-devel pkgconfig selinux-policy selinux-policy-devel systemd-devel tar git cmake vim-common

-ENV GO_VERSION 1.8.1
+ENV GO_VERSION 1.8.3
RUN curl -fSL "https://golang.org/dl/go${GO_VERSION}.linux-amd64.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin

@@ -8,7 +8,7 @@ RUN dnf -y upgrade
RUN dnf install -y @development-tools fedora-packager
RUN dnf install -y btrfs-progs-devel device-mapper-devel glibc-static libseccomp-devel libselinux-devel libtool-ltdl-devel pkgconfig selinux-policy selinux-policy-devel systemd-devel tar git cmake vim-common

-ENV GO_VERSION 1.8.1
+ENV GO_VERSION 1.8.3
RUN curl -fSL "https://golang.org/dl/go${GO_VERSION}.linux-amd64.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin

@@ -8,7 +8,7 @@ RUN dnf -y upgrade
RUN dnf install -y @development-tools fedora-packager
RUN dnf install -y btrfs-progs-devel device-mapper-devel glibc-static libseccomp-devel libselinux-devel libtool-ltdl-devel pkgconfig selinux-policy selinux-policy-devel systemd-devel tar git cmake vim-common

-ENV GO_VERSION 1.8.1
+ENV GO_VERSION 1.8.3
RUN curl -fSL "https://golang.org/dl/go${GO_VERSION}.linux-amd64.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin

@@ -7,7 +7,7 @@ FROM opensuse:13.2
RUN zypper --non-interactive install ca-certificates* curl gzip rpm-build
RUN zypper --non-interactive install libbtrfs-devel device-mapper-devel glibc-static libselinux-devel libtool-ltdl-devel pkg-config selinux-policy selinux-policy-devel systemd-devel tar git cmake vim systemd-rpm-macros

-ENV GO_VERSION 1.8.1
+ENV GO_VERSION 1.8.3
RUN curl -fSL "https://golang.org/dl/go${GO_VERSION}.linux-amd64.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin

@@ -10,7 +10,7 @@ RUN yum install -y kernel-uek-devel-4.1.12-32.el6uek
RUN yum groupinstall -y "Development Tools"
RUN yum install -y btrfs-progs-devel device-mapper-devel glibc-static libselinux-devel libtool-ltdl-devel pkgconfig selinux-policy selinux-policy-devel tar git cmake vim-common

-ENV GO_VERSION 1.8.1
+ENV GO_VERSION 1.8.3
RUN curl -fSL "https://golang.org/dl/go${GO_VERSION}.linux-amd64.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin

@@ -7,7 +7,7 @@ FROM oraclelinux:7
RUN yum groupinstall -y "Development Tools"
RUN yum install -y --enablerepo=ol7_optional_latest btrfs-progs-devel device-mapper-devel glibc-static libseccomp-devel libselinux-devel libtool-ltdl-devel pkgconfig selinux-policy selinux-policy-devel systemd-devel tar git cmake vim-common

-ENV GO_VERSION 1.8.1
+ENV GO_VERSION 1.8.3
RUN curl -fSL "https://golang.org/dl/go${GO_VERSION}.linux-amd64.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin

@@ -7,7 +7,7 @@ FROM photon:1.0
RUN tdnf install -y wget curl ca-certificates gzip make rpm-build sed gcc linux-api-headers glibc-devel binutils libseccomp libltdl-devel elfutils
RUN tdnf install -y btrfs-progs-devel device-mapper-devel glibc-static libseccomp-devel libselinux-devel libtool-ltdl-devel pkg-config selinux-policy selinux-policy-devel systemd-devel tar git cmake vim-common

-ENV GO_VERSION 1.8.1
+ENV GO_VERSION 1.8.3
RUN curl -fSL "https://golang.org/dl/go${GO_VERSION}.linux-amd64.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin

@@ -9,7 +9,7 @@ RUN yum groupinstall --skip-broken -y "Development Tools"
RUN yum -y swap -- remove systemd-container systemd-container-libs -- install systemd systemd-libs
RUN yum install -y btrfs-progs-devel device-mapper-devel glibc-static libseccomp-devel libselinux-devel libtool-ltdl-devel pkgconfig selinux-policy selinux-policy-devel sqlite-devel systemd-devel tar git cmake vim-common

-ENV GO_VERSION 1.7.4
+ENV GO_VERSION 1.8.3
RUN curl -fSL "https://golang.org/dl/go${GO_VERSION}.linux-armv6l.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin

@@ -8,7 +8,7 @@ RUN dnf -y upgrade
RUN dnf install -y @development-tools fedora-packager
RUN dnf install -y btrfs-progs-devel device-mapper-devel glibc-static libseccomp-devel libselinux-devel libtool-ltdl-devel pkgconfig selinux-policy selinux-policy-devel systemd-devel tar git cmake

-ENV GO_VERSION 1.8.1
+ENV GO_VERSION 1.8.3
RUN curl -fsSL "https://golang.org/dl/go${GO_VERSION}.linux-ppc64le.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin

@@ -1,2 +0,0 @@
-Tianon Gravi <admwiggin@gmail.com> (@tianon)
-Jessie Frazelle <jess@docker.com> (@jfrazelle)

(File diff suppressed because it is too large.)
@@ -1,409 +0,0 @@
|
||||
# docker.fish - docker completions for fish shell
|
||||
#
|
||||
# This file is generated by gen_docker_fish_completions.py from:
|
||||
# https://github.com/barnybug/docker-fish-completion
|
||||
#
|
||||
# To install the completions:
|
||||
# mkdir -p ~/.config/fish/completions
|
||||
# cp docker.fish ~/.config/fish/completions
|
||||
#
|
||||
# Completion supported:
|
||||
# - parameters
|
||||
# - commands
|
||||
# - containers
|
||||
# - images
|
||||
# - repositories
|
||||
|
||||
function __fish_docker_no_subcommand --description 'Test if docker has yet to be given the subcommand'
|
||||
for i in (commandline -opc)
|
||||
if contains -- $i attach build commit cp create diff events exec export history images import info inspect kill load login logout logs pause port ps pull push rename restart rm rmi run save search start stop tag top unpause version wait stats
|
||||
return 1
|
||||
end
|
||||
end
|
||||
return 0
|
||||
end
|
||||
|
||||
function __fish_print_docker_containers --description 'Print a list of docker containers' -a select
|
||||
switch $select
|
||||
case running
|
||||
docker ps -a --no-trunc --filter status=running --format "{{.ID}}\n{{.Names}}" | tr ',' '\n'
|
||||
case stopped
|
||||
docker ps -a --no-trunc --filter status=exited --format "{{.ID}}\n{{.Names}}" | tr ',' '\n'
|
||||
case all
|
||||
docker ps -a --no-trunc --format "{{.ID}}\n{{.Names}}" | tr ',' '\n'
|
||||
end
|
||||
end
|
||||
|
||||
function __fish_print_docker_images --description 'Print a list of docker images'
|
||||
docker images --format "{{.Repository}}:{{.Tag}}" | command grep -v '<none>'
|
||||
end
|
||||
|
||||
function __fish_print_docker_repositories --description 'Print a list of docker repositories'
|
||||
docker images --format "{{.Repository}}" | command grep -v '<none>' | command sort | command uniq
|
||||
end
|
||||
|
||||
# common options
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -l api-cors-header -d "Set CORS headers in the Engine API. Default is cors disabled"
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -s b -l bridge -d 'Attach containers to a pre-existing network bridge'
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -l bip -d "Use this CIDR notation address for the network bridge's IP, not compatible with -b"
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -s D -l debug -d 'Enable debug mode'
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -s d -l daemon -d 'Enable daemon mode'
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -l dns -d 'Force Docker to use specific DNS servers'
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -l dns-opt -d 'Force Docker to use specific DNS options'
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -l dns-search -d 'Force Docker to use specific DNS search domains'
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -l exec-opt -d 'Set runtime execution options'
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -l fixed-cidr -d 'IPv4 subnet for fixed IPs (e.g. 10.20.0.0/16)'
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -l fixed-cidr-v6 -d 'IPv6 subnet for fixed IPs (e.g.: 2001:a02b/48)'
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -s G -l group -d 'Group to assign the unix socket specified by -H when running in daemon mode'
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -s g -l graph -d 'Path to use as the root of the Docker runtime'
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -s H -l host -d 'The socket(s) to bind to in daemon mode or connect to in client mode, specified using one or more tcp://host:port, unix:///path/to/socket, fd://* or fd://socketfd.'
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -s h -l help -d 'Print usage'
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -l icc -d 'Allow unrestricted inter-container and Docker daemon host communication'
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -l insecure-registry -d 'Enable insecure communication with specified registries (no certificate verification for HTTPS and enable HTTP fallback) (e.g., localhost:5000 or 10.20.0.0/16)'
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -l ip -d 'Default IP address to use when binding container ports'
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -l ip-forward -d 'Enable net.ipv4.ip_forward and IPv6 forwarding if --fixed-cidr-v6 is defined. IPv6 forwarding may interfere with your existing IPv6 configuration when using Router Advertisement.'
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -l ip-masq -d "Enable IP masquerading for bridge's IP range"
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -l iptables -d "Enable Docker's addition of iptables rules"
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -l ipv6 -d 'Enable IPv6 networking'
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -s l -l log-level -d 'Set the logging level ("debug", "info", "warn", "error", "fatal")'
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -l label -d 'Set key=value labels to the daemon (displayed in `docker info`)'
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -l mtu -d 'Set the containers network MTU'
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -s p -l pidfile -d 'Path to use for daemon PID file'
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -l registry-mirror -d 'Specify a preferred Docker registry mirror'
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -s s -l storage-driver -d 'Force the Docker runtime to use a specific storage driver'
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -l selinux-enabled -d 'Enable selinux support. SELinux does not presently support the BTRFS storage driver'
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -l storage-opt -d 'Set storage driver options'
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -l tls -d 'Use TLS; implied by --tlsverify'
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -l tlscacert -d 'Trust only remotes providing a certificate signed by the CA given here'
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -l tlscert -d 'Path to TLS certificate file'
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -l tlskey -d 'Path to TLS key file'
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -l tlsverify -d 'Use TLS and verify the remote (daemon: verify client, client: verify daemon)'
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -s v -l version -d 'Print version information and quit'
|
||||
|
||||
# subcommands
|
||||
# attach
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -a attach -d 'Attach to a running container'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from attach' -l help -d 'Print usage'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from attach' -l no-stdin -d 'Do not attach STDIN'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from attach' -l sig-proxy -d 'Proxy all received signals to the process (non-TTY mode only). SIGCHLD, SIGKILL, and SIGSTOP are not proxied.'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from attach' -a '(__fish_print_docker_containers running)' -d "Container"
|
||||
|
||||
# build
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -a build -d 'Build an image from a Dockerfile'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from build' -s f -l file -d "Name of the Dockerfile(Default is 'Dockerfile' at context root)"
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from build' -l force-rm -d 'Always remove intermediate containers, even after unsuccessful builds'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from build' -l help -d 'Print usage'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from build' -l no-cache -d 'Do not use cache when building the image'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from build' -l pull -d 'Always attempt to pull a newer version of the image'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from build' -s q -l quiet -d 'Suppress the build output and print image ID on success'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from build' -l rm -d 'Remove intermediate containers after a successful build'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from build' -s t -l tag -d 'Repository name (and optionally a tag) to be applied to the resulting image in case of success'
|
||||
|
||||
# commit
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -a commit -d "Create a new image from a container's changes"
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from commit' -s a -l author -d 'Author (e.g., "John Hannibal Smith <hannibal@a-team.com>")'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from commit' -l help -d 'Print usage'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from commit' -s m -l message -d 'Commit message'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from commit' -s p -l pause -d 'Pause container during commit'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from commit' -a '(__fish_print_docker_containers all)' -d "Container"
|
||||
|
||||
# cp
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -a cp -d "Copy files/folders between a container and the local filesystem"
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from cp' -l help -d 'Print usage'
|
||||
|
||||
# create
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -a create -d 'Create a new container'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -s a -l attach -d 'Attach to STDIN, STDOUT or STDERR.'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l add-host -d 'Add a custom host-to-IP mapping (host:ip)'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l cpu-shares -d 'CPU shares (relative weight)'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l cap-add -d 'Add Linux capabilities'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l cap-drop -d 'Drop Linux capabilities'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l cidfile -d 'Write the container ID to the file'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l cpuset -d 'CPUs in which to allow execution (0-3, 0,1)'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l device -d 'Add a host device to the container (e.g. --device=/dev/sdc:/dev/xvdc:rwm)'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l device-cgroup-rule -d 'Add a rule to the cgroup allowed devices list (e.g. --device-cgroup-rule="c 13:37 rwm")'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l dns -d 'Set custom DNS servers'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l dns-opt -d "Set custom DNS options (Use --dns-opt='' if you don't wish to set options)"
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l dns-search -d "Set custom DNS search domains (Use --dns-search=. if you don't wish to set the search domain)"
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -s e -l env -d 'Set environment variables'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l entrypoint -d 'Overwrite the default ENTRYPOINT of the image'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l env-file -d 'Read in a line delimited file of environment variables'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l expose -d 'Expose a port or a range of ports (e.g. --expose=3300-3310) from the container without publishing it to your host'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l group-add -d 'Add additional groups to run as'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -s h -l hostname -d 'Container host name'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l help -d 'Print usage'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -s i -l interactive -d 'Keep STDIN open even if not attached'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l ipc -d 'Default is to create a private IPC namespace (POSIX SysV IPC) for the container'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l link -d 'Add link to another container in the form of <name|id>:alias'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -s m -l memory -d 'Memory limit (format: <number>[<unit>], where unit = b, k, m or g)'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l mac-address -d 'Container MAC address (e.g., 92:d0:c6:0a:29:33)'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l memory-swap -d "Total memory usage (memory + swap), set '-1' to disable swap (format: <number>[<unit>], where unit = b, k, m or g)"
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l mount -d 'Attach a filesystem mount to the container'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l name -d 'Assign a name to the container'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l net -d 'Set the Network mode for the container'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -s P -l publish-all -d 'Publish all exposed ports to random ports on the host interfaces'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -s p -l publish -d "Publish a container's port to the host"
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l pid -d 'Default is to create a private PID namespace for the container'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l privileged -d 'Give extended privileges to this container'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l read-only -d "Mount the container's root filesystem as read only"
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l restart -d 'Restart policy to apply when a container exits (no, on-failure[:max-retry], always)'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l security-opt -d 'Security Options'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -s t -l tty -d 'Allocate a pseudo-TTY'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -s u -l user -d 'Username or UID'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -s v -l volume -d 'Bind mount a volume (e.g., from the host: -v /host:/container, from Docker: -v /container)'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l volumes-from -d 'Mount volumes from the specified container(s)'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -s w -l workdir -d 'Working directory inside the container'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -a '(__fish_print_docker_images)' -d "Image"
|
||||
|
||||
# diff
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -a diff -d "Inspect changes on a container's filesystem"
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from diff' -l help -d 'Print usage'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from diff' -a '(__fish_print_docker_containers all)' -d "Container"
|
||||
|
||||
# events
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -a events -d 'Get real time events from the server'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from events' -s f -l filter -d "Provide filter values (i.e., 'event=stop')"
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from events' -l help -d 'Print usage'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from events' -l since -d 'Show all events created since timestamp'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from events' -l until -d 'Stream events until this timestamp'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from events' -l format -d 'Format the output using the given go template'
|
||||
|
||||
# exec
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -a exec -d 'Run a command in a running container'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from exec' -s d -l detach -d 'Detached mode: run command in the background'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from exec' -l help -d 'Print usage'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from exec' -s i -l interactive -d 'Keep STDIN open even if not attached'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from exec' -s t -l tty -d 'Allocate a pseudo-TTY'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from exec' -a '(__fish_print_docker_containers running)' -d "Container"
|
||||
|
||||
# export
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -a export -d 'Stream the contents of a container as a tar archive'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from export' -l help -d 'Print usage'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from export' -a '(__fish_print_docker_containers all)' -d "Container"
|
||||
|
||||
# history
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -a history -d 'Show the history of an image'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from history' -l help -d 'Print usage'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from history' -l no-trunc -d "Don't truncate output"
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from history' -s q -l quiet -d 'Only show numeric IDs'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from history' -a '(__fish_print_docker_images)' -d "Image"
|
||||
|
||||
# images
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -a images -d 'List images'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from images' -s a -l all -d 'Show all images (by default filter out the intermediate image layers)'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from images' -s f -l filter -d "Provide filter values (i.e., 'dangling=true')"
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from images' -l help -d 'Print usage'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from images' -l no-trunc -d "Don't truncate output"
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from images' -s q -l quiet -d 'Only show numeric IDs'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from images' -a '(__fish_print_docker_repositories)' -d "Repository"
|
||||
|
||||
# import
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -a import -d 'Create a new filesystem image from the contents of a tarball'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from import' -l help -d 'Print usage'
|
||||
|
||||
# info
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -a info -d 'Display system-wide information'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from info' -s f -l format -d 'Format the output using the given go template'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from info' -l help -d 'Print usage'
|
||||
|
||||
# inspect
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -a inspect -d 'Return low-level information on a container or image'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from inspect' -s f -l format -d 'Format the output using the given go template.'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from inspect' -l help -d 'Print usage'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from inspect' -s s -l size -d 'Display total file sizes if the type is container.'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from inspect' -a '(__fish_print_docker_images)' -d "Image"
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from inspect' -a '(__fish_print_docker_containers all)' -d "Container"
|
||||
|
||||
# kill
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -a kill -d 'Kill a running container'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from kill' -l help -d 'Print usage'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from kill' -s s -l signal -d 'Signal to send to the container'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from kill' -a '(__fish_print_docker_containers running)' -d "Container"
|
||||
|
||||
# load
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -a load -d 'Load an image from a tar archive'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from load' -l help -d 'Print usage'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from load' -s i -l input -d 'Read from a tar archive file, instead of STDIN'
|
||||
|
||||
# login
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -a login -d 'Log in to a Docker registry server'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from login' -l help -d 'Print usage'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from login' -s p -l password -d 'Password'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from login' -s u -l username -d 'Username'
|
||||
|
||||
# logout
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -a logout -d 'Log out from a Docker registry server'
|
||||
|
||||
# logs
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -a logs -d 'Fetch the logs of a container'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from logs' -s f -l follow -d 'Follow log output'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from logs' -l help -d 'Print usage'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from logs' -s t -l timestamps -d 'Show timestamps'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from logs' -l since -d 'Show logs since timestamp'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from logs' -l tail -d 'Output the specified number of lines at the end of logs (defaults to all logs)'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from logs' -a '(__fish_print_docker_containers running)' -d "Container"
|
||||
|
||||
# port
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -a port -d 'Lookup the public-facing port that is NAT-ed to PRIVATE_PORT'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from port' -l help -d 'Print usage'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from port' -a '(__fish_print_docker_containers running)' -d "Container"
|
||||
|
||||
# pause
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -a pause -d 'Pause all processes within a container'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from pause' -a '(__fish_print_docker_containers running)' -d "Container"
|
||||
|
||||
# ps
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -a ps -d 'List containers'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from ps' -s a -l all -d 'Show all containers. Only running containers are shown by default.'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from ps' -l before -d 'Show only container created before Id or Name, include non-running ones.'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from ps' -s f -l filter -d 'Provide filter values. Valid filters:'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from ps' -l help -d 'Print usage'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from ps' -s l -l latest -d 'Show only the latest created container, include non-running ones.'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from ps' -s n -d 'Show n last created containers, include non-running ones.'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from ps' -l no-trunc -d "Don't truncate output"
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from ps' -s q -l quiet -d 'Only display numeric IDs'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from ps' -s s -l size -d 'Display total file sizes'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from ps' -l since -d 'Show only containers created since Id or Name, include non-running ones.'
|
||||
|
||||
# pull
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -a pull -d 'Pull an image or a repository from a Docker registry server'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from pull' -s a -l all-tags -d 'Download all tagged images in the repository'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from pull' -l help -d 'Print usage'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from pull' -a '(__fish_print_docker_images)' -d "Image"
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from pull' -a '(__fish_print_docker_repositories)' -d "Repository"
|
||||
|
||||
# push
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -a push -d 'Push an image or a repository to a Docker registry server'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from push' -l help -d 'Print usage'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from push' -a '(__fish_print_docker_images)' -d "Image"
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from push' -a '(__fish_print_docker_repositories)' -d "Repository"
|
||||
|
||||
# rename
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -a rename -d 'Rename an existing container'
|
||||
|
||||
# restart
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -a restart -d 'Restart a container'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from restart' -l help -d 'Print usage'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from restart' -s t -l time -d 'Number of seconds to try to stop for before killing the container. Once killed it will then be restarted. Default is 10 seconds.'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from restart' -a '(__fish_print_docker_containers running)' -d "Container"
|
||||
|
||||
# rm
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -a rm -d 'Remove one or more containers'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from rm' -s f -l force -d 'Force the removal of a running container (uses SIGKILL)'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from rm' -l help -d 'Print usage'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from rm' -s l -l link -d 'Remove the specified link and not the underlying container'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from rm' -s v -l volumes -d 'Remove the volumes associated with the container'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from rm' -a '(__fish_print_docker_containers stopped)' -d "Container"
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from rm' -s f -l force -a '(__fish_print_docker_containers all)' -d "Container"
|
||||
|
||||
# rmi
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -a rmi -d 'Remove one or more images'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from rmi' -s f -l force -d 'Force removal of the image'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from rmi' -l help -d 'Print usage'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from rmi' -l no-prune -d 'Do not delete untagged parents'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from rmi' -a '(__fish_print_docker_images)' -d "Image"
|
||||
|
||||
# run
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -a run -d 'Run a command in a new container'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -s a -l attach -d 'Attach to STDIN, STDOUT or STDERR.'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l add-host -d 'Add a custom host-to-IP mapping (host:ip)'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -s c -l cpu-shares -d 'CPU shares (relative weight)'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l cap-add -d 'Add Linux capabilities'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l cap-drop -d 'Drop Linux capabilities'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l cidfile -d 'Write the container ID to the file'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l cpuset -d 'CPUs in which to allow execution (0-3, 0,1)'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -s d -l detach -d 'Detached mode: run the container in the background and print the new container ID'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l device -d 'Add a host device to the container (e.g. --device=/dev/sdc:/dev/xvdc:rwm)'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l device-cgroup-rule -d 'Add a rule to the cgroup allowed devices list (e.g. --device-cgroup-rule="c 13:37 rwm")'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l dns -d 'Set custom DNS servers'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l dns-opt -d "Set custom DNS options (Use --dns-opt='' if you don't wish to set options)"
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l dns-search -d "Set custom DNS search domains (Use --dns-search=. if you don't wish to set the search domain)"
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -s e -l env -d 'Set environment variables'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l entrypoint -d 'Overwrite the default ENTRYPOINT of the image'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l env-file -d 'Read in a line delimited file of environment variables'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l expose -d 'Expose a port or a range of ports (e.g. --expose=3300-3310) from the container without publishing it to your host'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l group-add -d 'Add additional groups to run as'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -s h -l hostname -d 'Container host name'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l help -d 'Print usage'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -s i -l interactive -d 'Keep STDIN open even if not attached'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l ipc -d 'Default is to create a private IPC namespace (POSIX SysV IPC) for the container'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l link -d 'Add link to another container in the form of <name|id>:alias'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -s m -l memory -d 'Memory limit (format: <number>[<unit>], where unit = b, k, m or g)'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l mac-address -d 'Container MAC address (e.g., 92:d0:c6:0a:29:33)'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l memory-swap -d "Total memory usage (memory + swap), set '-1' to disable swap (format: <number>[<unit>], where unit = b, k, m or g)"
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l mount -d 'Attach a filesystem mount to the container'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l name -d 'Assign a name to the container'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l net -d 'Set the Network mode for the container'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -s P -l publish-all -d 'Publish all exposed ports to random ports on the host interfaces'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -s p -l publish -d "Publish a container's port to the host"
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l pid -d 'Default is to create a private PID namespace for the container'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l privileged -d 'Give extended privileges to this container'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l read-only -d "Mount the container's root filesystem as read only"
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l restart -d 'Restart policy to apply when a container exits (no, on-failure[:max-retry], always)'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l rm -d 'Automatically remove the container when it exits (incompatible with -d)'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l security-opt -d 'Security Options'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l sig-proxy -d 'Proxy received signals to the process (non-TTY mode only). SIGCHLD, SIGSTOP, and SIGKILL are not proxied.'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l stop-signal -d 'Signal to kill a container'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -s t -l tty -d 'Allocate a pseudo-TTY'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -s u -l user -d 'Username or UID'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l tmpfs -d 'Mount tmpfs on a directory'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -s v -l volume -d 'Bind mount a volume (e.g., from the host: -v /host:/container, from Docker: -v /container)'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -l volumes-from -d 'Mount volumes from the specified container(s)'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -s w -l workdir -d 'Working directory inside the container'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from run' -a '(__fish_print_docker_images)' -d "Image"
|
||||
|
||||
# save
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -a save -d 'Save an image to a tar archive'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from save' -l help -d 'Print usage'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from save' -s o -l output -d 'Write to an file, instead of STDOUT'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from save' -a '(__fish_print_docker_images)' -d "Image"
|
||||
|
||||
# search
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -a search -d 'Search for an image on the registry (defaults to the Docker Hub)'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from search' -l automated -d 'Only show automated builds'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from search' -l help -d 'Print usage'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from search' -l no-trunc -d "Don't truncate output"
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from search' -s s -l stars -d 'Only displays with at least x stars'
|
||||
|
||||
# start
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -a start -d 'Start a container'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from start' -s a -l attach -d "Attach container's STDOUT and STDERR and forward all signals to the process"
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from start' -l help -d 'Print usage'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from start' -s i -l interactive -d "Attach container's STDIN"
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from start' -a '(__fish_print_docker_containers stopped)' -d "Container"
|
||||
|
||||
# stats
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -a stats -d "Display a live stream of one or more containers' resource usage statistics"
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from stats' -l help -d 'Print usage'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from stats' -l no-stream -d 'Disable streaming stats and only pull the first result'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from stats' -a '(__fish_print_docker_containers running)' -d "Container"
|
||||
|
||||
# stop
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -a stop -d 'Stop a container'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from stop' -l help -d 'Print usage'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from stop' -s t -l time -d 'Number of seconds to wait for the container to stop before killing it. Default is 10 seconds.'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from stop' -a '(__fish_print_docker_containers running)' -d "Container"
|
||||
|
||||
# tag
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -a tag -d 'Tag an image into a repository'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from tag' -s f -l force -d 'Force'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from tag' -l help -d 'Print usage'
|
||||
|
||||
# top
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -a top -d 'Lookup the running processes of a container'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from top' -l help -d 'Print usage'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from top' -a '(__fish_print_docker_containers running)' -d "Container"
|
||||
|
||||
# unpause
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -a unpause -d 'Unpause a paused container'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from unpause' -a '(__fish_print_docker_containers running)' -d "Container"
|
||||
|
||||
# version
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -a version -d 'Show the Docker version information'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from version' -s f -l format -d 'Format the output using the given go template'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from version' -l help -d 'Print usage'
|
||||
|
||||
# wait
|
||||
complete -c docker -f -n '__fish_docker_no_subcommand' -a wait -d 'Block until a container stops, then print its exit code'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from wait' -l help -d 'Print usage'
|
||||
complete -c docker -A -f -n '__fish_seen_subcommand_from wait' -a '(__fish_print_docker_containers running)' -d "Container"
|
||||
@@ -1 +0,0 @@
|
||||
See https://github.com/samneirinck/posh-docker
|
||||
@@ -1,2 +0,0 @@
|
||||
Tianon Gravi <admwiggin@gmail.com> (@tianon)
|
||||
Jessie Frazelle <jess@docker.com> (@jfrazelle)
|
||||
File diff suppressed because it is too large
Load Diff
@@ -1,6 +1,8 @@
|
||||
package daemon
|
||||
|
||||
import (
|
||||
"io"
|
||||
|
||||
"github.com/Sirupsen/logrus"
|
||||
"github.com/docker/distribution/reference"
|
||||
"github.com/docker/docker/api/types"
|
||||
@@ -12,7 +14,6 @@ import (
|
||||
"github.com/docker/docker/registry"
|
||||
"github.com/pkg/errors"
|
||||
"golang.org/x/net/context"
|
||||
"io"
|
||||
)
|
||||
|
||||
type releaseableLayer struct {
|
||||
@@ -104,13 +105,19 @@ func (daemon *Daemon) pullForBuilder(ctx context.Context, name string, authConfi
|
||||
// Every call to GetImageAndReleasableLayer MUST call releasableLayer.Release() to prevent
|
||||
// leaking of layers.
|
||||
func (daemon *Daemon) GetImageAndReleasableLayer(ctx context.Context, refOrID string, opts backend.GetImageAndLayerOptions) (builder.Image, builder.ReleaseableLayer, error) {
|
||||
if !opts.ForcePull {
|
||||
image, _ := daemon.GetImage(refOrID)
|
||||
id, _ := daemon.GetImageID(refOrID)
|
||||
refIsID := id.String() == refOrID // detect if ref is an ID to skip pulling
|
||||
|
||||
if refIsID || !opts.ForcePull {
|
||||
image, err := daemon.GetImage(refOrID)
|
||||
// TODO: shouldn't we error out if error is different from "not found" ?
|
||||
if image != nil {
|
||||
layer, err := newReleasableLayerForImage(image, daemon.layerStore)
|
||||
return image, layer, err
|
||||
}
|
||||
if refIsID {
|
||||
return nil, nil, err
|
||||
}
|
||||
}
|
||||
|
||||
image, err := daemon.pullForBuilder(ctx, refOrID, opts.AuthConfig, opts.Output)
|
||||
|
||||
@@ -100,7 +100,6 @@ func serviceSpecFromGRPC(spec *swarmapi.ServiceSpec) (*types.ServiceSpec, error)
|
||||
return nil, fmt.Errorf("unknown task runtime type: %s", t.Generic.Payload.TypeUrl)
|
||||
}
|
||||
|
||||
taskTemplate.RuntimeData = t.Generic.Payload.Value
|
||||
default:
|
||||
return nil, fmt.Errorf("error creating service; unsupported runtime %T", t)
|
||||
}
|
||||
@@ -176,7 +175,6 @@ func ServiceSpecToGRPC(s types.ServiceSpec) (swarmapi.ServiceSpec, error) {
|
||||
Kind: string(types.RuntimePlugin),
|
||||
Payload: &gogotypes.Any{
|
||||
TypeUrl: string(types.RuntimeURLPlugin),
|
||||
Value: s.TaskTemplate.RuntimeData,
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
@@ -31,9 +31,10 @@ func SwarmFromGRPC(c swarmapi.Cluster) types.Swarm {
|
||||
AutoLockManagers: c.Spec.EncryptionConfig.AutoLockManagers,
|
||||
},
|
||||
CAConfig: types.CAConfig{
|
||||
// do not include the signing CA key (it should already be redacted via the swarm APIs)
|
||||
SigningCACert: string(c.Spec.CAConfig.SigningCACert),
|
||||
ForceRotate: c.Spec.CAConfig.ForceRotate,
|
||||
// do not include the signing CA cert or key (it should already be redacted via the swarm APIs) -
|
||||
// the key because it's secret, and the cert because otherwise doing a get + update on the spec
|
||||
// can cause issues because the key would be missing and the cert wouldn't
|
||||
ForceRotate: c.Spec.CAConfig.ForceRotate,
|
||||
},
|
||||
},
|
||||
TLSInfo: types.TLSInfo{
|
||||
|
||||
@@ -495,7 +495,6 @@ func getEndpointConfig(na *api.NetworkAttachment, b executorpkg.Backend) *networ
|
||||
IPv4Address: ipv4,
|
||||
IPv6Address: ipv6,
|
||||
},
|
||||
Aliases: na.Aliases,
|
||||
DriverOpts: na.DriverAttachmentOpts,
|
||||
}
|
||||
if v, ok := na.Network.Spec.Annotations.Labels["com.docker.swarm.predefined"]; ok && v == "true" {
|
||||
|
||||
@@ -88,10 +88,6 @@ func (c *Cluster) Init(req types.InitRequest) (string, error) {
|
||||
}
|
||||
}
|
||||
|
||||
if !req.ForceNewCluster {
|
||||
clearPersistentState(c.root)
|
||||
}
|
||||
|
||||
nr, err := c.newNodeRunner(nodeStartConfig{
|
||||
forceNewCluster: req.ForceNewCluster,
|
||||
autolock: req.AutoLockManagers,
|
||||
@@ -109,16 +105,14 @@ func (c *Cluster) Init(req types.InitRequest) (string, error) {
|
||||
c.mu.Unlock()
|
||||
|
||||
if err := <-nr.Ready(); err != nil {
|
||||
c.mu.Lock()
|
||||
c.nr = nil
|
||||
c.mu.Unlock()
|
||||
if !req.ForceNewCluster { // if failure on first attempt don't keep state
|
||||
if err := clearPersistentState(c.root); err != nil {
|
||||
return "", err
|
||||
}
|
||||
}
|
||||
if err != nil {
|
||||
c.mu.Lock()
|
||||
c.nr = nil
|
||||
c.mu.Unlock()
|
||||
}
|
||||
return "", err
|
||||
}
|
||||
state := nr.State()
|
||||
@@ -166,8 +160,6 @@ func (c *Cluster) Join(req types.JoinRequest) error {
|
||||
return err
|
||||
}
|
||||
|
||||
clearPersistentState(c.root)
|
||||
|
||||
nr, err := c.newNodeRunner(nodeStartConfig{
|
||||
RemoteAddr: req.RemoteAddrs[0],
|
||||
ListenAddr: net.JoinHostPort(listenHost, listenPort),
|
||||
@@ -193,6 +185,9 @@ func (c *Cluster) Join(req types.JoinRequest) error {
|
||||
c.mu.Lock()
|
||||
c.nr = nil
|
||||
c.mu.Unlock()
|
||||
if err := clearPersistentState(c.root); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
return err
|
||||
}
|
||||
|
||||
@@ -11,52 +11,45 @@ import (
|
||||
|
||||
// GetTasks returns a list of tasks matching the filter options.
|
||||
func (c *Cluster) GetTasks(options apitypes.TaskListOptions) ([]types.Task, error) {
|
||||
c.mu.RLock()
|
||||
defer c.mu.RUnlock()
|
||||
var r *swarmapi.ListTasksResponse
|
||||
|
||||
state := c.currentNodeState()
|
||||
if !state.IsActiveManager() {
|
||||
return nil, c.errNoManager(state)
|
||||
}
|
||||
|
||||
byName := func(filter filters.Args) error {
|
||||
if filter.Include("service") {
|
||||
serviceFilters := filter.Get("service")
|
||||
for _, serviceFilter := range serviceFilters {
|
||||
service, err := c.GetService(serviceFilter, false)
|
||||
if err != nil {
|
||||
return err
|
||||
if err := c.lockedManagerAction(func(ctx context.Context, state nodeState) error {
|
||||
byName := func(filter filters.Args) error {
|
||||
if filter.Include("service") {
|
||||
serviceFilters := filter.Get("service")
|
||||
for _, serviceFilter := range serviceFilters {
|
||||
service, err := getService(ctx, state.controlClient, serviceFilter, false)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
filter.Del("service", serviceFilter)
|
||||
filter.Add("service", service.ID)
|
||||
}
|
||||
filter.Del("service", serviceFilter)
|
||||
filter.Add("service", service.ID)
|
||||
}
|
||||
}
|
||||
if filter.Include("node") {
|
||||
nodeFilters := filter.Get("node")
|
||||
for _, nodeFilter := range nodeFilters {
|
||||
node, err := c.GetNode(nodeFilter)
|
||||
if err != nil {
|
||||
return err
|
||||
if filter.Include("node") {
|
||||
nodeFilters := filter.Get("node")
|
||||
for _, nodeFilter := range nodeFilters {
|
||||
node, err := getNode(ctx, state.controlClient, nodeFilter)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
filter.Del("node", nodeFilter)
|
||||
filter.Add("node", node.ID)
|
||||
}
|
||||
filter.Del("node", nodeFilter)
|
||||
filter.Add("node", node.ID)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
filters, err := newListTasksFilters(options.Filters, byName)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
filters, err := newListTasksFilters(options.Filters, byName)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
ctx, cancel := c.getRequestContext()
|
||||
defer cancel()
|
||||
|
||||
r, err := state.controlClient.ListTasks(
|
||||
ctx,
|
||||
&swarmapi.ListTasksRequest{Filters: filters})
|
||||
if err != nil {
|
||||
r, err = state.controlClient.ListTasks(
|
||||
ctx,
|
||||
&swarmapi.ListTasksRequest{Filters: filters})
|
||||
return err
|
||||
}); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
|
||||
@@ -116,6 +116,17 @@ type Daemon struct {
|
||||
|
||||
diskUsageRunning int32
|
||||
pruneRunning int32
|
||||
hosts map[string]bool // hosts stores the addresses the daemon is listening on
|
||||
}
|
||||
|
||||
// StoreHosts stores the addresses the daemon is listening on
|
||||
func (daemon *Daemon) StoreHosts(hosts []string) {
|
||||
if daemon.hosts == nil {
|
||||
daemon.hosts = make(map[string]bool)
|
||||
}
|
||||
for _, h := range hosts {
|
||||
daemon.hosts[h] = true
|
||||
}
|
||||
}
|
||||
|
||||
// HasExperimental returns whether the experimental features of the daemon are enabled or not
|
||||
|
||||
@@ -68,17 +68,17 @@ func getMemoryResources(config containertypes.Resources) *specs.LinuxMemory {
|
||||
memory := specs.LinuxMemory{}
|
||||
|
||||
if config.Memory > 0 {
|
||||
limit := uint64(config.Memory)
|
||||
limit := config.Memory
|
||||
memory.Limit = &limit
|
||||
}
|
||||
|
||||
if config.MemoryReservation > 0 {
|
||||
reservation := uint64(config.MemoryReservation)
|
||||
reservation := config.MemoryReservation
|
||||
memory.Reservation = &reservation
|
||||
}
|
||||
|
||||
if config.MemorySwap > 0 {
|
||||
swap := uint64(config.MemorySwap)
|
||||
swap := config.MemorySwap
|
||||
memory.Swap = &swap
|
||||
}
|
||||
|
||||
@@ -88,7 +88,7 @@ func getMemoryResources(config containertypes.Resources) *specs.LinuxMemory {
|
||||
}
|
||||
|
||||
if config.KernelMemory != 0 {
|
||||
kernelMemory := uint64(config.KernelMemory)
|
||||
kernelMemory := config.KernelMemory
|
||||
memory.Kernel = &kernelMemory
|
||||
}
|
||||
|
||||
|
||||
@@ -1,62 +0,0 @@
|
||||
package daemon
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
"github.com/davecgh/go-spew/spew"
|
||||
"github.com/pkg/errors"
|
||||
)
|
||||
|
||||
const dataStructuresLogNameTemplate = "daemon-data-%s.log"
|
||||
|
||||
// dumpDaemon appends the daemon datastructures into file in dir and returns full path
|
||||
// to that file.
|
||||
func (d *Daemon) dumpDaemon(dir string) (string, error) {
|
||||
// Ensure we recover from a panic as we are doing this without any locking
|
||||
defer func() {
|
||||
recover()
|
||||
}()
|
||||
|
||||
path := filepath.Join(dir, fmt.Sprintf(dataStructuresLogNameTemplate, strings.Replace(time.Now().Format(time.RFC3339), ":", "", -1)))
|
||||
f, err := os.OpenFile(path, os.O_CREATE|os.O_WRONLY, 0666)
|
||||
if err != nil {
|
||||
return "", errors.Wrap(err, "failed to open file to write the daemon datastructure dump")
|
||||
}
|
||||
defer f.Close()
|
||||
|
||||
dump := struct {
|
||||
containers interface{}
|
||||
names interface{}
|
||||
links interface{}
|
||||
execs interface{}
|
||||
volumes interface{}
|
||||
images interface{}
|
||||
layers interface{}
|
||||
imageReferences interface{}
|
||||
downloads interface{}
|
||||
uploads interface{}
|
||||
registry interface{}
|
||||
plugins interface{}
|
||||
}{
|
||||
containers: d.containers,
|
||||
execs: d.execCommands,
|
||||
volumes: d.volumes,
|
||||
images: d.imageStore,
|
||||
layers: d.layerStore,
|
||||
imageReferences: d.referenceStore,
|
||||
downloads: d.downloadManager,
|
||||
uploads: d.uploadManager,
|
||||
registry: d.RegistryService,
|
||||
plugins: d.PluginStore,
|
||||
names: d.nameIndex,
|
||||
links: d.linkIndex,
|
||||
}
|
||||
|
||||
spew.Fdump(f, dump) // Does not return an error
|
||||
f.Sync()
|
||||
return path, nil
|
||||
}
|
||||
@@ -22,12 +22,6 @@ func (d *Daemon) setupDumpStackTrap(root string) {
|
||||
} else {
|
||||
logrus.Infof("goroutine stacks written to %s", path)
|
||||
}
|
||||
path, err = d.dumpDaemon(root)
|
||||
if err != nil {
|
||||
logrus.WithError(err).Error("failed to write daemon datastructure dump")
|
||||
} else {
|
||||
logrus.Infof("daemon datastructure dump written to %s", path)
|
||||
}
|
||||
}
|
||||
}()
|
||||
}
|
||||
|
||||
@@ -41,12 +41,6 @@ func (d *Daemon) setupDumpStackTrap(root string) {
|
||||
} else {
|
||||
logrus.Infof("goroutine stacks written to %s", path)
|
||||
}
|
||||
path, err = d.dumpDaemon(root)
|
||||
if err != nil {
|
||||
logrus.WithError(err).Error("failed to write daemon datastructure dump")
|
||||
} else {
|
||||
logrus.Infof("daemon datastructure dump written to %s", path)
|
||||
}
|
||||
}
|
||||
}()
|
||||
}
|
||||
|
||||
@@ -117,7 +117,7 @@ func (daemon *Daemon) cleanupContainer(container *container.Container, forceRemo
|
||||
if container.RWLayer != nil {
|
||||
metadata, err := daemon.layerStore.ReleaseRWLayer(container.RWLayer)
|
||||
layer.LogReleaseMetadata(metadata)
|
||||
if err != nil && err != layer.ErrMountDoesNotExist {
|
||||
if err != nil && err != layer.ErrMountDoesNotExist && !os.IsNotExist(errors.Cause(err)) {
|
||||
return errors.Wrapf(err, "driver %q failed to remove root filesystem for %s", daemon.GraphDriverName(), container.ID)
|
||||
}
|
||||
}
|
||||
|
||||
@@ -37,8 +37,6 @@ import (
|
||||
"time"
|
||||
|
||||
"github.com/Sirupsen/logrus"
|
||||
"github.com/vbatts/tar-split/tar/storage"
|
||||
|
||||
"github.com/docker/docker/daemon/graphdriver"
|
||||
"github.com/docker/docker/pkg/archive"
|
||||
"github.com/docker/docker/pkg/chrootarchive"
|
||||
@@ -47,9 +45,11 @@ import (
|
||||
"github.com/docker/docker/pkg/locker"
|
||||
mountpk "github.com/docker/docker/pkg/mount"
|
||||
"github.com/docker/docker/pkg/system"
|
||||
"github.com/vbatts/tar-split/tar/storage"
|
||||
|
||||
rsystem "github.com/opencontainers/runc/libcontainer/system"
|
||||
"github.com/opencontainers/selinux/go-selinux/label"
|
||||
"github.com/pkg/errors"
|
||||
)
|
||||
|
||||
var (
|
||||
@@ -284,30 +284,41 @@ func (a *Driver) Remove(id string) error {
|
||||
mountpoint = a.getMountpoint(id)
|
||||
}
|
||||
|
||||
logger := logrus.WithFields(logrus.Fields{
|
||||
"module": "graphdriver",
|
||||
"driver": "aufs",
|
||||
"layer": id,
|
||||
})
|
||||
|
||||
var retries int
|
||||
for {
|
||||
mounted, err := a.mounted(mountpoint)
|
||||
if err != nil {
|
||||
if os.IsNotExist(err) {
|
||||
break
|
||||
}
|
||||
return err
|
||||
}
|
||||
if !mounted {
|
||||
break
|
||||
}
|
||||
|
||||
if err := a.unmount(mountpoint); err != nil {
|
||||
if err != syscall.EBUSY {
|
||||
return fmt.Errorf("aufs: unmount error: %s: %v", mountpoint, err)
|
||||
}
|
||||
if retries >= 5 {
|
||||
return fmt.Errorf("aufs: unmount error after retries: %s: %v", mountpoint, err)
|
||||
}
|
||||
// If unmount returns EBUSY, it could be a transient error. Sleep and retry.
|
||||
retries++
|
||||
logrus.Warnf("unmount failed due to EBUSY: retry count: %d", retries)
|
||||
time.Sleep(100 * time.Millisecond)
|
||||
continue
|
||||
err = a.unmount(mountpoint)
|
||||
if err == nil {
|
||||
break
|
||||
}
|
||||
break
|
||||
|
||||
if err != syscall.EBUSY {
|
||||
return errors.Wrapf(err, "aufs: unmount error: %s", mountpoint)
|
||||
}
|
||||
if retries >= 5 {
|
||||
return errors.Wrapf(err, "aufs: unmount error after retries: %s", mountpoint)
|
||||
}
|
||||
// If unmount returns EBUSY, it could be a transient error. Sleep and retry.
|
||||
retries++
|
||||
logger.Warnf("unmount failed due to EBUSY: retry count: %d", retries)
|
||||
time.Sleep(100 * time.Millisecond)
|
||||
continue
|
||||
}
|
||||
|
||||
// Atomically remove each directory in turn by first moving it out of the
|
||||
@@ -316,21 +327,22 @@ func (a *Driver) Remove(id string) error {
|
||||
tmpMntPath := path.Join(a.mntPath(), fmt.Sprintf("%s-removing", id))
|
||||
if err := os.Rename(mountpoint, tmpMntPath); err != nil && !os.IsNotExist(err) {
|
||||
if err == syscall.EBUSY {
|
||||
logrus.Warn("os.Rename err due to EBUSY")
|
||||
logger.WithField("dir", mountpoint).WithError(err).Warn("os.Rename err due to EBUSY")
|
||||
}
|
||||
return err
|
||||
return errors.Wrapf(err, "error preparing atomic delete of aufs mountpoint for id: %s", id)
|
||||
}
|
||||
if err := system.EnsureRemoveAll(tmpMntPath); err != nil {
|
||||
return errors.Wrapf(err, "error removing aufs layer %s", id)
|
||||
}
|
||||
defer system.EnsureRemoveAll(tmpMntPath)
|
||||
|
||||
tmpDiffpath := path.Join(a.diffPath(), fmt.Sprintf("%s-removing", id))
|
||||
if err := os.Rename(a.getDiffPath(id), tmpDiffpath); err != nil && !os.IsNotExist(err) {
|
||||
return err
|
||||
return errors.Wrapf(err, "error preparing atomic delete of aufs diff dir for id: %s", id)
|
||||
}
|
||||
defer system.EnsureRemoveAll(tmpDiffpath)
|
||||
|
||||
// Remove the layers file for the id
|
||||
if err := os.Remove(path.Join(a.rootPath(), "layers", id)); err != nil && !os.IsNotExist(err) {
|
||||
return err
|
||||
return errors.Wrapf(err, "error removing layers dir for %s", id)
|
||||
}
|
||||
|
||||
a.pathCacheLock.Lock()
|
||||
|
||||
@@ -64,31 +64,35 @@ type cmdProbe struct {
|
||||
|
||||
// exec the healthcheck command in the container.
|
||||
// Returns the exit code and probe output (if any)
|
||||
func (p *cmdProbe) run(ctx context.Context, d *Daemon, container *container.Container) (*types.HealthcheckResult, error) {
|
||||
|
||||
cmdSlice := strslice.StrSlice(container.Config.Healthcheck.Test)[1:]
|
||||
func (p *cmdProbe) run(ctx context.Context, d *Daemon, cntr *container.Container) (*types.HealthcheckResult, error) {
|
||||
cmdSlice := strslice.StrSlice(cntr.Config.Healthcheck.Test)[1:]
|
||||
if p.shell {
|
||||
cmdSlice = append(getShell(container.Config), cmdSlice...)
|
||||
cmdSlice = append(getShell(cntr.Config), cmdSlice...)
|
||||
}
|
||||
entrypoint, args := d.getEntrypointAndArgs(strslice.StrSlice{}, cmdSlice)
|
||||
execConfig := exec.NewConfig()
|
||||
execConfig.OpenStdin = false
|
||||
execConfig.OpenStdout = true
|
||||
execConfig.OpenStderr = true
|
||||
execConfig.ContainerID = container.ID
|
||||
execConfig.ContainerID = cntr.ID
|
||||
execConfig.DetachKeys = []byte{}
|
||||
execConfig.Entrypoint = entrypoint
|
||||
execConfig.Args = args
|
||||
execConfig.Tty = false
|
||||
execConfig.Privileged = false
|
||||
execConfig.User = container.Config.User
|
||||
execConfig.Env = container.Config.Env
|
||||
execConfig.User = cntr.Config.User
|
||||
|
||||
d.registerExecCommand(container, execConfig)
|
||||
d.LogContainerEvent(container, "exec_create: "+execConfig.Entrypoint+" "+strings.Join(execConfig.Args, " "))
|
||||
linkedEnv, err := d.setupLinkedContainers(cntr)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
execConfig.Env = container.ReplaceOrAppendEnvValues(cntr.CreateDaemonEnvironment(execConfig.Tty, linkedEnv), execConfig.Env)
|
||||
|
||||
d.registerExecCommand(cntr, execConfig)
|
||||
d.LogContainerEvent(cntr, "exec_create: "+execConfig.Entrypoint+" "+strings.Join(execConfig.Args, " "))
|
||||
|
||||
output := &limitedBuffer{}
|
||||
err := d.ContainerExecStart(ctx, execConfig.ID, nil, output, output)
|
||||
err = d.ContainerExecStart(ctx, execConfig.ID, nil, output, output)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
@@ -97,7 +101,7 @@ func (p *cmdProbe) run(ctx context.Context, d *Daemon, container *container.Cont
|
||||
return nil, err
|
||||
}
|
||||
if info.ExitCode == nil {
|
||||
return nil, fmt.Errorf("Healthcheck for container %s has no exit code!", container.ID)
|
||||
return nil, fmt.Errorf("Healthcheck for container %s has no exit code!", cntr.ID)
|
||||
}
|
||||
// Note: Go's json package will handle invalid UTF-8 for us
|
||||
out := output.String()
|
||||
@@ -182,7 +186,7 @@ func monitor(d *Daemon, c *container.Container, stop chan struct{}, probe probe)
|
||||
logrus.Debugf("Running health check for container %s ...", c.ID)
|
||||
startTime := time.Now()
|
||||
ctx, cancelProbe := context.WithTimeout(context.Background(), probeTimeout)
|
||||
results := make(chan *types.HealthcheckResult)
|
||||
results := make(chan *types.HealthcheckResult, 1)
|
||||
go func() {
|
||||
healthChecksCounter.Inc()
|
||||
result, err := probe.run(ctx, d, c)
|
||||
@@ -205,8 +209,10 @@ func monitor(d *Daemon, c *container.Container, stop chan struct{}, probe probe)
|
||||
select {
|
||||
case <-stop:
|
||||
logrus.Debugf("Stop healthcheck monitoring for container %s (received while probing)", c.ID)
|
||||
// Stop timeout and kill probe, but don't wait for probe to exit.
|
||||
cancelProbe()
|
||||
// Wait for probe to exit (it might take a while to respond to the TERM
|
||||
// signal and we don't want dying probes to pile up).
|
||||
<-results
|
||||
return
|
||||
case result := <-results:
|
||||
handleProbeResult(d, c, result, stop)
|
||||
|
||||
@@ -69,7 +69,7 @@ func (daemon *Daemon) killWithSignal(container *containerpkg.Container, sig int)
|
||||
return errNotRunning{container.ID}
|
||||
}
|
||||
|
||||
if container.Config.StopSignal != "" {
|
||||
if container.Config.StopSignal != "" && syscall.Signal(sig) != syscall.SIGKILL {
|
||||
containerStopSignal, err := signal.ParseSignal(container.Config.StopSignal)
|
||||
if err != nil {
|
||||
return err
|
||||
|
||||
@@ -3,6 +3,7 @@ package logger
|
||||
import (
|
||||
"io"
|
||||
"os"
|
||||
"strings"
|
||||
"sync"
|
||||
"time"
|
||||
|
||||
@@ -18,6 +19,7 @@ type pluginAdapter struct {
|
||||
driverName string
|
||||
id string
|
||||
plugin logPlugin
|
||||
basePath string
|
||||
fifoPath string
|
||||
capabilities Capability
|
||||
logInfo Info
|
||||
@@ -56,7 +58,7 @@ func (a *pluginAdapter) Close() error {
|
||||
a.mu.Lock()
|
||||
defer a.mu.Unlock()
|
||||
|
||||
if err := a.plugin.StopLogging(a.fifoPath); err != nil {
|
||||
if err := a.plugin.StopLogging(strings.TrimPrefix(a.fifoPath, a.basePath)); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
|
||||
@@ -112,9 +112,10 @@ func (s *journald) Log(msg *logger.Message) error {
|
||||
}
|
||||
|
||||
line := string(msg.Line)
|
||||
source := msg.Source
|
||||
logger.PutMessage(msg)
|
||||
|
||||
if msg.Source == "stderr" {
|
||||
if source == "stderr" {
|
||||
return journal.Send(line, journal.PriErr, vars)
|
||||
}
|
||||
return journal.Send(line, journal.PriInfo, vars)
|
||||
|
||||
@@ -7,6 +7,7 @@ import (
|
||||
"bytes"
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"io"
|
||||
"strconv"
|
||||
"sync"
|
||||
|
||||
@@ -15,6 +16,7 @@ import (
|
||||
"github.com/docker/docker/daemon/logger/loggerutils"
|
||||
"github.com/docker/docker/pkg/jsonlog"
|
||||
"github.com/docker/go-units"
|
||||
"github.com/pkg/errors"
|
||||
)
|
||||
|
||||
// Name is the name of the file that the jsonlogger logs to.
|
||||
@@ -22,12 +24,13 @@ const Name = "json-file"
|
||||
|
||||
// JSONFileLogger is Logger implementation for default Docker logging.
|
||||
type JSONFileLogger struct {
|
||||
buf *bytes.Buffer
|
||||
writer *loggerutils.RotateFileWriter
|
||||
mu sync.Mutex
|
||||
readers map[*logger.LogWatcher]struct{} // stores the active log followers
|
||||
extra []byte // json-encoded extra attributes
|
||||
extra []byte // json-encoded extra attributes
|
||||
|
||||
mu sync.RWMutex
|
||||
buf *bytes.Buffer // avoids allocating a new buffer on each call to `Log()`
|
||||
closed bool
|
||||
writer *loggerutils.RotateFileWriter
|
||||
readers map[*logger.LogWatcher]struct{} // stores the active log followers
|
||||
}
|
||||
|
||||
func init() {
|
||||
@@ -90,33 +93,45 @@ func New(info logger.Info) (logger.Logger, error) {
|
||||
|
||||
// Log converts logger.Message to jsonlog.JSONLog and serializes it to file.
|
||||
func (l *JSONFileLogger) Log(msg *logger.Message) error {
|
||||
l.mu.Lock()
|
||||
err := writeMessageBuf(l.writer, msg, l.extra, l.buf)
|
||||
l.buf.Reset()
|
||||
l.mu.Unlock()
|
||||
return err
|
||||
}
|
||||
|
||||
func writeMessageBuf(w io.Writer, m *logger.Message, extra json.RawMessage, buf *bytes.Buffer) error {
|
||||
if err := marshalMessage(m, extra, buf); err != nil {
|
||||
logger.PutMessage(m)
|
||||
return err
|
||||
}
|
||||
logger.PutMessage(m)
|
||||
if _, err := w.Write(buf.Bytes()); err != nil {
|
||||
return errors.Wrap(err, "error writing log entry")
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func marshalMessage(msg *logger.Message, extra json.RawMessage, buf *bytes.Buffer) error {
|
||||
timestamp, err := jsonlog.FastTimeMarshalJSON(msg.Timestamp)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
l.mu.Lock()
|
||||
logline := msg.Line
|
||||
logLine := msg.Line
|
||||
if !msg.Partial {
|
||||
logline = append(msg.Line, '\n')
|
||||
logLine = append(msg.Line, '\n')
|
||||
}
|
||||
err = (&jsonlog.JSONLogs{
|
||||
Log: logline,
|
||||
Log: logLine,
|
||||
Stream: msg.Source,
|
||||
Created: timestamp,
|
||||
RawAttrs: l.extra,
|
||||
}).MarshalJSONBuf(l.buf)
|
||||
logger.PutMessage(msg)
|
||||
RawAttrs: extra,
|
||||
}).MarshalJSONBuf(buf)
|
||||
if err != nil {
|
||||
l.mu.Unlock()
|
||||
return err
|
||||
return errors.Wrap(err, "error writing log message to buffer")
|
||||
}
|
||||
|
||||
l.buf.WriteByte('\n')
|
||||
_, err = l.writer.Write(l.buf.Bytes())
|
||||
l.buf.Reset()
|
||||
l.mu.Unlock()
|
||||
|
||||
return err
|
||||
err = buf.WriteByte('\n')
|
||||
return errors.Wrap(err, "error finalizing log buffer")
|
||||
}
|
||||
|
||||
// ValidateLogOpt looks for json specific log options max-file & max-size.
|
||||
|
||||
@@ -3,7 +3,6 @@ package jsonfilelog
|
||||
import (
|
||||
"bytes"
|
||||
"encoding/json"
|
||||
"errors"
|
||||
"fmt"
|
||||
"io"
|
||||
"os"
|
||||
@@ -18,6 +17,7 @@ import (
|
||||
"github.com/docker/docker/pkg/ioutils"
|
||||
"github.com/docker/docker/pkg/jsonlog"
|
||||
"github.com/docker/docker/pkg/tailfile"
|
||||
"github.com/pkg/errors"
|
||||
)
|
||||
|
||||
const maxJSONDecodeRetry = 20000
|
||||
@@ -48,10 +48,11 @@ func (l *JSONFileLogger) ReadLogs(config logger.ReadConfig) *logger.LogWatcher {
|
||||
func (l *JSONFileLogger) readLogs(logWatcher *logger.LogWatcher, config logger.ReadConfig) {
|
||||
defer close(logWatcher.Msg)
|
||||
|
||||
// lock so the read stream doesn't get corrupted due to rotations or other log data written while we read
|
||||
// lock so the read stream doesn't get corrupted due to rotations or other log data written while we open these files
|
||||
// This will block writes!!!
|
||||
l.mu.Lock()
|
||||
l.mu.RLock()
|
||||
|
||||
// TODO it would be nice to move a lot of this reader implementation to the rotate logger object
|
||||
pth := l.writer.LogPath()
|
||||
var files []io.ReadSeeker
|
||||
for i := l.writer.MaxFiles(); i > 1; i-- {
|
||||
@@ -59,25 +60,36 @@ func (l *JSONFileLogger) readLogs(logWatcher *logger.LogWatcher, config logger.R
|
||||
if err != nil {
|
||||
if !os.IsNotExist(err) {
|
||||
logWatcher.Err <- err
|
||||
break
|
||||
l.mu.RUnlock()
|
||||
return
|
||||
}
|
||||
continue
|
||||
}
|
||||
defer f.Close()
|
||||
|
||||
files = append(files, f)
|
||||
}
|
||||
|
||||
latestFile, err := os.Open(pth)
|
||||
if err != nil {
|
||||
logWatcher.Err <- err
|
||||
l.mu.Unlock()
|
||||
logWatcher.Err <- errors.Wrap(err, "error opening latest log file")
|
||||
l.mu.RUnlock()
|
||||
return
|
||||
}
|
||||
defer latestFile.Close()
|
||||
|
||||
latestChunk, err := newSectionReader(latestFile)
|
||||
|
||||
// Now we have the reader sectioned, all fd's opened, we can unlock.
|
||||
// New writes/rotates will not affect seeking through these files
|
||||
l.mu.RUnlock()
|
||||
|
||||
if err != nil {
|
||||
logWatcher.Err <- err
|
||||
return
|
||||
}
|
||||
|
||||
if config.Tail != 0 {
|
||||
tailer := ioutils.MultiReadSeeker(append(files, latestFile)...)
|
||||
tailer := ioutils.MultiReadSeeker(append(files, latestChunk)...)
|
||||
tailFile(tailer, logWatcher, config.Tail, config.Since)
|
||||
}
|
||||
|
||||
@@ -89,19 +101,14 @@ func (l *JSONFileLogger) readLogs(logWatcher *logger.LogWatcher, config logger.R
|
||||
}
|
||||
|
||||
if !config.Follow || l.closed {
|
||||
l.mu.Unlock()
|
||||
return
|
||||
}
|
||||
|
||||
if config.Tail >= 0 {
|
||||
latestFile.Seek(0, os.SEEK_END)
|
||||
}
|
||||
|
||||
notifyRotate := l.writer.NotifyRotate()
|
||||
defer l.writer.NotifyRotateEvict(notifyRotate)
|
||||
|
||||
l.mu.Lock()
|
||||
l.readers[logWatcher] = struct{}{}
|
||||
|
||||
l.mu.Unlock()
|
||||
|
||||
followLogs(latestFile, logWatcher, notifyRotate, config.Since)
|
||||
@@ -111,6 +118,16 @@ func (l *JSONFileLogger) readLogs(logWatcher *logger.LogWatcher, config logger.R
|
||||
l.mu.Unlock()
|
||||
}
|
||||
|
||||
func newSectionReader(f *os.File) (*io.SectionReader, error) {
|
||||
// seek to the end to get the size
|
||||
// we'll leave this at the end of the file since section reader does not advance the reader
|
||||
size, err := f.Seek(0, os.SEEK_END)
|
||||
if err != nil {
|
||||
return nil, errors.Wrap(err, "error getting current file size")
|
||||
}
|
||||
return io.NewSectionReader(f, 0, size), nil
|
||||
}
|
||||
|
||||
func tailFile(f io.ReadSeeker, logWatcher *logger.LogWatcher, tail int, since time.Time) {
|
||||
var rdr io.Reader
|
||||
rdr = f
|
||||
|
||||
@@ -59,6 +59,7 @@ func makePluginCreator(name string, l *logPluginProxy, basePath string) Creator
|
||||
driverName: name,
|
||||
id: id,
|
||||
plugin: l,
|
||||
basePath: basePath,
|
||||
fifoPath: filepath.Join(root, id),
|
||||
logInfo: logCtx,
|
||||
}
|
||||
|
||||
@@ -133,8 +133,9 @@ func New(info logger.Info) (logger.Logger, error) {
|
||||
|
||||
func (s *syslogger) Log(msg *logger.Message) error {
|
||||
line := string(msg.Line)
|
||||
source := msg.Source
|
||||
logger.PutMessage(msg)
|
||||
if msg.Source == "stderr" {
|
||||
if source == "stderr" {
|
||||
return s.writer.Err(line)
|
||||
}
|
||||
return s.writer.Info(line)
|
||||
|
||||
@@ -46,7 +46,8 @@ func (daemon *Daemon) StateChanged(id string, e libcontainerd.StateInfo) error {
|
||||
c.StreamConfig.Wait()
|
||||
c.Reset(false)
|
||||
|
||||
restart, wait, err := c.RestartManager().ShouldRestart(e.ExitCode, c.HasBeenManuallyStopped, time.Since(c.StartedAt))
|
||||
// If daemon is being shutdown, don't let the container restart
|
||||
restart, wait, err := c.RestartManager().ShouldRestart(e.ExitCode, daemon.IsShuttingDown() || c.HasBeenManuallyStopped, time.Since(c.StartedAt))
|
||||
if err == nil && restart {
|
||||
c.RestartCount++
|
||||
c.SetRestarting(platformConstructExitStatus(e))
|
||||
|
||||
@@ -216,7 +216,7 @@ func (daemon *Daemon) ImagesPrune(ctx context.Context, pruneFilters filters.Args
|
||||
if !until.IsZero() && img.Created.After(until) {
|
||||
continue
|
||||
}
|
||||
if !matchLabels(pruneFilters, img.Config.Labels) {
|
||||
if img.Config != nil && !matchLabels(pruneFilters, img.Config.Labels) {
|
||||
continue
|
||||
}
|
||||
topImages[id] = img
|
||||
|
||||
@@ -9,7 +9,6 @@ import (
|
||||
"github.com/docker/docker/container"
|
||||
"github.com/docker/docker/layer"
|
||||
"github.com/docker/docker/libcontainerd"
|
||||
"github.com/docker/docker/pkg/system"
|
||||
"golang.org/x/sys/windows/registry"
|
||||
)
|
||||
|
||||
@@ -32,12 +31,6 @@ func (daemon *Daemon) getLibcontainerdCreateOptions(container *container.Contain
|
||||
}
|
||||
|
||||
dnsSearch := daemon.getDNSSearchSettings(container)
|
||||
if dnsSearch != nil {
|
||||
osv := system.GetOSVersion()
|
||||
if osv.Build < 14997 {
|
||||
return nil, fmt.Errorf("dns-search option is not supported on the current platform")
|
||||
}
|
||||
}
|
||||
|
||||
// Generate the layer folder of the layer options
|
||||
layerOpts := &libcontainerd.LayerOption{}
|
||||
|
||||
@@ -6,6 +6,7 @@ package daemon
|
||||
|
||||
import (
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"sort"
|
||||
@@ -42,8 +43,19 @@ func (daemon *Daemon) setupMounts(c *container.Container) ([]container.Mount, er
|
||||
if err := daemon.lazyInitializeVolume(c.ID, m); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
// If the daemon is being shutdown, we should not let a container start if it is trying to
|
||||
// mount the socket the daemon is listening on. During daemon shutdown, the socket
|
||||
// (/var/run/docker.sock by default) doesn't exist anymore causing the call to m.Setup to
|
||||
// create at directory instead. This in turn will prevent the daemon to restart.
|
||||
checkfunc := func(m *volume.MountPoint) error {
|
||||
if _, exist := daemon.hosts[m.Source]; exist && daemon.IsShuttingDown() {
|
||||
return fmt.Errorf("Could not mount %q to container while the daemon is shutting down", m.Source)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
rootUID, rootGID := daemon.GetRemappedUIDGID()
|
||||
path, err := m.Setup(c.MountLabel, rootUID, rootGID)
|
||||
path, err := m.Setup(c.MountLabel, rootUID, rootGID, checkfunc)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
@@ -24,7 +24,7 @@ func (daemon *Daemon) setupMounts(c *container.Container) ([]container.Mount, er
|
||||
if err := daemon.lazyInitializeVolume(c.ID, mount); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
s, err := mount.Setup(c.MountLabel, 0, 0)
|
||||
s, err := mount.Setup(c.MountLabel, 0, 0, nil)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
@@ -1,30 +0,0 @@
|
||||
# The non-reference docs have been moved!
|
||||
|
||||
<!-- This file is maintained within the docker/docker Github
|
||||
repository at https://github.com/docker/docker/. Make all
|
||||
pull requests against that repo. If you see this file in
|
||||
another repository, consider it read-only there, as it will
|
||||
periodically be overwritten by the definitive file. Pull
|
||||
requests which include edits to this file in other repositories
|
||||
will be rejected.
|
||||
-->
|
||||
|
||||
The documentation for Docker Engine has been merged into
|
||||
[the general documentation repo](https://github.com/docker/docker.github.io).
|
||||
|
||||
See the [README](https://github.com/docker/docker.github.io/blob/master/README.md)
|
||||
for instructions on contributing to and building the documentation.
|
||||
|
||||
If you'd like to edit the current published version of the Engine docs,
|
||||
do it in the master branch here:
|
||||
https://github.com/docker/docker.github.io/tree/master/engine
|
||||
|
||||
If you need to document the functionality of an upcoming Engine release,
|
||||
use the `vnext-engine` branch:
|
||||
https://github.com/docker/docker.github.io/tree/vnext-engine/engine
|
||||
|
||||
The reference docs have been left in docker/docker (this repo), which remains
|
||||
the place to edit them.
|
||||
|
||||
The docs in the general repo are open-source and we appreciate
|
||||
your feedback and pull requests!
|
||||
@@ -29,6 +29,11 @@ keywords: "API, Docker, rcli, REST, documentation"
|
||||
generate and rotate to a new CA certificate/key pair.
|
||||
* `POST /service/create` and `POST /services/(id or name)/update` now take the field `Platforms` as part of the service `Placement`, allowing to specify platforms supported by the service.
|
||||
* `POST /containers/(name)/wait` now accepts a `condition` query parameter to indicate which state change condition to wait for. Also, response headers are now returned immediately to acknowledge that the server has registered a wait callback for the client.
|
||||
* `POST /swarm/init` now accepts a `DataPathAddr` property to set the IP-address or network interface to use for data traffic
|
||||
* `POST /swarm/join` now accepts a `DataPathAddr` property to set the IP-address or network interface to use for data traffic
|
||||
* `GET /events` now supports service, node and secret events which are emmited when users create, update and remove service, node and secret
|
||||
* `GET /events` now supports network remove event which is emmitted when users remove a swarm scoped network
|
||||
* `GET /events` now supports a filter type `scope` in which supported value could be swarm and local
|
||||
|
||||
## v1.29 API changes
|
||||
|
||||
@@ -41,6 +46,8 @@ keywords: "API, Docker, rcli, REST, documentation"
|
||||
* `POST /containers/create`, `POST /service/create` and `POST /services/(id or name)/update` now takes the field `StartPeriod` as a part of the `HealthConfig` allowing for specification of a period during which the container should not be considered unhealthy even if health checks do not pass.
|
||||
* `GET /services/(id)` now accepts an `insertDefaults` query-parameter to merge default values into the service inspect output.
|
||||
* `POST /containers/prune`, `POST /images/prune`, `POST /volumes/prune`, and `POST /networks/prune` now support a `label` filter to filter containers, images, volumes, or networks based on the label. The format of the label filter could be `label=<key>`/`label=<key>=<value>` to remove those with the specified labels, or `label!=<key>`/`label!=<key>=<value>` to remove those without the specified labels.
|
||||
* `POST /services/create` now accepts `Privileges` as part of `ContainerSpec`. Privileges currently include
|
||||
`CredentialSpec` and `SELinuxContext`.
|
||||
|
||||
## v1.28 API changes
|
||||
|
||||
|
||||
@@ -1,321 +0,0 @@
|
||||
---
|
||||
aliases: ["/engine/misc/deprecated/"]
|
||||
title: "Deprecated Engine Features"
|
||||
description: "Deprecated Features."
|
||||
keywords: "docker, documentation, about, technology, deprecate"
|
||||
---
|
||||
|
||||
<!-- This file is maintained within the docker/docker Github
|
||||
repository at https://github.com/docker/docker/. Make all
|
||||
pull requests against that repo. If you see this file in
|
||||
another repository, consider it read-only there, as it will
|
||||
periodically be overwritten by the definitive file. Pull
|
||||
requests which include edits to this file in other repositories
|
||||
will be rejected.
|
||||
-->
|
||||
|
||||
# Deprecated Engine Features
|
||||
|
||||
The following list of features are deprecated in Engine.
|
||||
To learn more about Docker Engine's deprecation policy,
|
||||
see [Feature Deprecation Policy](https://docs.docker.com/engine/#feature-deprecation-policy).
|
||||
|
||||
### Asynchronous `service create` and `service update`
|
||||
|
||||
**Deprecated In Release: v17.05.0**
|
||||
|
||||
**Disabled by default in release: v17.09**
|
||||
|
||||
Docker 17.05.0 added an optional `--detach=false` option to make the
|
||||
`docker service create` and `docker service update` work synchronously. This
|
||||
option will be enable by default in Docker 17.09, at which point the `--detach`
|
||||
flag can be used to use the previous (asynchronous) behavior.
|
||||
|
||||
### `-g` and `--graph` flags on `dockerd`
|
||||
|
||||
**Deprecated In Release: v17.05.0**
|
||||
|
||||
The `-g` or `--graph` flag for the `dockerd` or `docker daemon` command was
|
||||
used to indicate the directory in which to store persistent data and resource
|
||||
configuration and has been replaced with the more descriptive `--data-root`
|
||||
flag.
|
||||
|
||||
These flags were added before Docker 1.0, so will not be _removed_, only
|
||||
_hidden_, to discourage their use.
|
||||
|
||||
### Top-level network properties in NetworkSettings
|
||||
|
||||
**Deprecated In Release: [v1.13.0](https://github.com/docker/docker/releases/tag/v1.13.0)**
|
||||
|
||||
**Target For Removal In Release: v17.12**
|
||||
|
||||
When inspecting a container, `NetworkSettings` contains top-level information
|
||||
about the default ("bridge") network;
|
||||
|
||||
`EndpointID`, `Gateway`, `GlobalIPv6Address`, `GlobalIPv6PrefixLen`, `IPAddress`,
|
||||
`IPPrefixLen`, `IPv6Gateway`, and `MacAddress`.
|
||||
|
||||
These properties are deprecated in favor of per-network properties in
|
||||
`NetworkSettings.Networks`. These properties were already "deprecated" in
|
||||
docker 1.9, but kept around for backward compatibility.
|
||||
|
||||
Refer to [#17538](https://github.com/docker/docker/pull/17538) for further
|
||||
information.
|
||||
|
||||
### `filter` param for `/images/json` endpoint
|
||||
**Deprecated In Release: [v1.13.0](https://github.com/docker/docker/releases/tag/v1.13.0)**
|
||||
|
||||
**Target For Removal In Release: v17.12**
|
||||
|
||||
The `filter` param to filter the list of image by reference (name or name:tag) is now implemented as a regular filter, named `reference`.
|
||||
|
||||
### `repository:shortid` image references
|
||||
**Deprecated In Release: [v1.13.0](https://github.com/docker/docker/releases/tag/v1.13.0)**
|
||||
|
||||
**Target For Removal In Release: v17.12**
|
||||
|
||||
`repository:shortid` syntax for referencing images is very little used, collides with tag references can be confused with digest references.
|
||||
|
||||
### `docker daemon` subcommand
|
||||
**Deprecated In Release: [v1.13.0](https://github.com/docker/docker/releases/tag/v1.13.0)**
|
||||
|
||||
**Target For Removal In Release: v17.12**
|
||||
|
||||
The daemon is moved to a separate binary (`dockerd`), and should be used instead.
|
||||
|
||||
### Duplicate keys with conflicting values in engine labels
|
||||
**Deprecated In Release: [v1.13.0](https://github.com/docker/docker/releases/tag/v1.13.0)**
|
||||
|
||||
**Target For Removal In Release: v17.12**
|
||||
|
||||
Duplicate keys with conflicting values have been deprecated. A warning is displayed
|
||||
in the output, and an error will be returned in the future.
|
||||
|
||||
### `MAINTAINER` in Dockerfile
|
||||
**Deprecated In Release: [v1.13.0](https://github.com/docker/docker/releases/tag/v1.13.0)**
|
||||
|
||||
`MAINTAINER` was an early very limited form of `LABEL` which should be used instead.
|
||||
|
||||
### API calls without a version
|
||||
**Deprecated In Release: [v1.13.0](https://github.com/docker/docker/releases/tag/v1.13.0)**
|
||||
|
||||
**Target For Removal In Release: v17.12**
|
||||
|
||||
API versions should be supplied to all API calls to ensure compatibility with
|
||||
future Engine versions. Instead of just requesting, for example, the URL
|
||||
`/containers/json`, you must now request `/v1.25/containers/json`.
|
||||
|
||||
### Backing filesystem without `d_type` support for overlay/overlay2
|
||||
**Deprecated In Release: [v1.13.0](https://github.com/docker/docker/releases/tag/v1.13.0)**
|
||||
|
||||
**Target For Removal In Release: v17.12**
|
||||
|
||||
The overlay and overlay2 storage driver does not work as expected if the backing
|
||||
filesystem does not support `d_type`. For example, XFS does not support `d_type`
|
||||
if it is formatted with the `ftype=0` option.
|
||||
|
||||
Please also refer to [#27358](https://github.com/docker/docker/issues/27358) for
|
||||
further information.
|
||||
|
||||
### Three arguments form in `docker import`
|
||||
**Deprecated In Release: [v0.6.7](https://github.com/docker/docker/releases/tag/v0.6.7)**
|
||||
|
||||
**Removed In Release: [v1.12.0](https://github.com/docker/docker/releases/tag/v1.12.0)**
|
||||
|
||||
The `docker import` command format `file|URL|- [REPOSITORY [TAG]]` is deprecated since November 2013. It's no more supported.
|
||||
|
||||
### `-h` shorthand for `--help`
|
||||
|
||||
**Deprecated In Release: [v1.12.0](https://github.com/docker/docker/releases/tag/v1.12.0)**
|
||||
|
||||
**Target For Removal In Release: v17.09**
|
||||
|
||||
The shorthand (`-h`) is less common than `--help` on Linux and cannot be used
|
||||
on all subcommands (due to it conflicting with, e.g. `-h` / `--hostname` on
|
||||
`docker create`). For this reason, the `-h` shorthand was not printed in the
|
||||
"usage" output of subcommands, nor documented, and is now marked "deprecated".
|
||||
|
||||
### `-e` and `--email` flags on `docker login`
|
||||
**Deprecated In Release: [v1.11.0](https://github.com/docker/docker/releases/tag/v1.11.0)**
|
||||
|
||||
**Target For Removal In Release: v17.06**
|
||||
|
||||
The docker login command is removing the ability to automatically register for an account with the target registry if the given username doesn't exist. Due to this change, the email flag is no longer required, and will be deprecated.
|
||||
|
||||
### Separator (`:`) of `--security-opt` flag on `docker run`
|
||||
**Deprecated In Release: [v1.11.0](https://github.com/docker/docker/releases/tag/v1.11.0)**
|
||||
|
||||
**Target For Removal In Release: v17.06**
|
||||
|
||||
The flag `--security-opt` doesn't use the colon separator(`:`) anymore to divide keys and values, it uses the equal symbol(`=`) for consistency with other similar flags, like `--storage-opt`.
|
||||
|
||||
### `/containers/(id or name)/copy` endpoint
|
||||
|
||||
**Deprecated In Release: [v1.8.0](https://github.com/docker/docker/releases/tag/v1.8.0)**
|
||||
|
||||
**Removed In Release: [v1.12.0](https://github.com/docker/docker/releases/tag/v1.12.0)**
|
||||
|
||||
The endpoint `/containers/(id or name)/copy` is deprecated in favor of `/containers/(id or name)/archive`.
|
||||
|
||||
### Ambiguous event fields in API
|
||||
**Deprecated In Release: [v1.10.0](https://github.com/docker/docker/releases/tag/v1.10.0)**
|
||||
|
||||
The fields `ID`, `Status` and `From` in the events API have been deprecated in favor of a more rich structure.
|
||||
See the events API documentation for the new format.
|
||||
|
||||
### `-f` flag on `docker tag`
|
||||
**Deprecated In Release: [v1.10.0](https://github.com/docker/docker/releases/tag/v1.10.0)**
|
||||
|
||||
**Removed In Release: [v1.12.0](https://github.com/docker/docker/releases/tag/v1.12.0)**
|
||||
|
||||
To make tagging consistent across the various `docker` commands, the `-f` flag on the `docker tag` command is deprecated. It is not longer necessary to specify `-f` to move a tag from one image to another. Nor will `docker` generate an error if the `-f` flag is missing and the specified tag is already in use.
|
||||
|
||||
### HostConfig at API container start
|
||||
**Deprecated In Release: [v1.10.0](https://github.com/docker/docker/releases/tag/v1.10.0)**
|
||||
|
||||
**Removed In Release: [v1.12.0](https://github.com/docker/docker/releases/tag/v1.12.0)**
|
||||
|
||||
Passing an `HostConfig` to `POST /containers/{name}/start` is deprecated in favor of
|
||||
defining it at container creation (`POST /containers/create`).
|
||||
|
||||
### `--before` and `--since` flags on `docker ps`
|
||||
|
||||
**Deprecated In Release: [v1.10.0](https://github.com/docker/docker/releases/tag/v1.10.0)**
|
||||
|
||||
**Removed In Release: [v1.12.0](https://github.com/docker/docker/releases/tag/v1.12.0)**
|
||||
|
||||
The `docker ps --before` and `docker ps --since` options are deprecated.
|
||||
Use `docker ps --filter=before=...` and `docker ps --filter=since=...` instead.
|
||||
|
||||
### `--automated` and `--stars` flags on `docker search`
|
||||
|
||||
**Deprecated in Release: [v1.12.0](https://github.com/docker/docker/releases/tag/v1.12.0)**
|
||||
|
||||
**Target For Removal In Release: v17.09**
|
||||
|
||||
The `docker search --automated` and `docker search --stars` options are deprecated.
|
||||
Use `docker search --filter=is-automated=...` and `docker search --filter=stars=...` instead.
|
||||
|
||||
### Driver Specific Log Tags
|
||||
**Deprecated In Release: [v1.9.0](https://github.com/docker/docker/releases/tag/v1.9.0)**
|
||||
|
||||
**Removed In Release: [v1.12.0](https://github.com/docker/docker/releases/tag/v1.12.0)**
|
||||
|
||||
Log tags are now generated in a standard way across different logging drivers.
|
||||
Because of which, the driver specific log tag options `syslog-tag`, `gelf-tag` and
|
||||
`fluentd-tag` have been deprecated in favor of the generic `tag` option.
|
||||
|
||||
```bash
|
||||
{% raw %}
|
||||
docker --log-driver=syslog --log-opt tag="{{.ImageName}}/{{.Name}}/{{.ID}}"
|
||||
{% endraw %}
|
||||
```
|
||||
|
||||
### LXC built-in exec driver
|
||||
**Deprecated In Release: [v1.8.0](https://github.com/docker/docker/releases/tag/v1.8.0)**
|
||||
|
||||
**Removed In Release: [v1.10.0](https://github.com/docker/docker/releases/tag/v1.10.0)**
|
||||
|
||||
The built-in LXC execution driver, the lxc-conf flag, and API fields have been removed.
|
||||
|
||||
### Old Command Line Options
**Deprecated In Release: [v1.8.0](https://github.com/docker/docker/releases/tag/v1.8.0)**

**Removed In Release: [v1.10.0](https://github.com/docker/docker/releases/tag/v1.10.0)**

The flags `-d` and `--daemon` are deprecated in favor of the `daemon` subcommand:

    docker daemon -H ...

The following single-dash (`-opt`) variants of certain command line options
are deprecated and replaced with double-dash options (`--opt`), as illustrated
after the list:

    docker attach -nostdin
    docker attach -sig-proxy
    docker build -no-cache
    docker build -rm
    docker commit -author
    docker commit -run
    docker events -since
    docker history -notrunc
    docker images -notrunc
    docker inspect -format
    docker ps -beforeId
    docker ps -notrunc
    docker ps -sinceId
    docker rm -link
    docker run -cidfile
    docker run -dns
    docker run -entrypoint
    docker run -expose
    docker run -link
    docker run -lxc-conf
    docker run -n
    docker run -privileged
    docker run -volumes-from
    docker search -notrunc
    docker search -stars
    docker search -t
    docker search -trusted
    docker tag -force

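For instance, each single-dash option maps directly onto its double-dash equivalent (a small sketch; the image name and file path are placeholders):

```bash
# removed single-dash form
docker run -cidfile /tmp/web.cid ubuntu true

# current double-dash form
docker run --cidfile /tmp/web.cid ubuntu true
```
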
The following double-dash options are deprecated and have no replacement:

    docker run --cpuset
    docker run --networking
    docker ps --since-id
    docker ps --before-id
    docker search --trusted

**Deprecated In Release: [v1.5.0](https://github.com/docker/docker/releases/tag/v1.5.0)**

**Removed In Release: [v1.12.0](https://github.com/docker/docker/releases/tag/v1.12.0)**

The single-dash form (`-help`) was removed in favor of the double-dash `--help`:

    docker -help
    docker [COMMAND] -help

### `--run` flag on `docker commit`

**Deprecated In Release: [v0.10.0](https://github.com/docker/docker/releases/tag/v0.10.0)**

**Removed In Release: [v1.13.0](https://github.com/docker/docker/releases/tag/v1.13.0)**

The `--run` flag of `docker commit` (and its short form `-run`) was deprecated in favor
of the `--changes` flag, which allows passing `Dockerfile` commands.

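A hedged illustration of the replacement (note that current CLI help lists the flag as `-c, --change`; the container and image names below are placeholders):

```bash
# apply a Dockerfile instruction while committing, instead of the removed --run flag
docker commit --change 'CMD ["nginx", "-g", "daemon off;"]' my-container my-image:latest
```
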
### Interacting with V1 registries

**Disabled By Default In Release: v17.06**

**Target For Removal In Release: v17.12**

Version 1.9 added a flag (`--disable-legacy-registry=false`) which prevents the
docker daemon from performing `pull`, `push`, and `login` operations against v1
registries. Although v1 interaction remained enabled by default at the time, the
flag signaled the intent to deprecate the v1 protocol.

Support for the v1 protocol to the public registry was removed in 1.13. Any
mirror configurations using v1 should be updated to use a
[v2 registry mirror](https://docs.docker.com/registry/recipes/mirror/).

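A sketch of how the opt-out can be expressed when starting the daemon directly (the `daemon.json` path is the conventional default and may differ on your system):

```bash
# refuse v1 registry interaction explicitly (the default behaviour as of v17.06)
dockerd --disable-legacy-registry

# or, equivalently, in /etc/docker/daemon.json:
# {
#   "disable-legacy-registry": true
# }
```
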
### Docker Content Trust ENV passphrase variables name change
**Deprecated In Release: [v1.9.0](https://github.com/docker/docker/releases/tag/v1.9.0)**

**Removed In Release: [v1.12.0](https://github.com/docker/docker/releases/tag/v1.12.0)**

Since 1.9, the Docker Content Trust Offline key has been renamed to the Root key and the Tagging key has been renamed to the Repository key. Due to this renaming, the corresponding environment variables have also changed:

- `DOCKER_CONTENT_TRUST_OFFLINE_PASSPHRASE` is now named `DOCKER_CONTENT_TRUST_ROOT_PASSPHRASE`
- `DOCKER_CONTENT_TRUST_TAGGING_PASSPHRASE` is now named `DOCKER_CONTENT_TRUST_REPOSITORY_PASSPHRASE`

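For example, scripts only need to switch to the new variable names (the passphrase values here are placeholders):

```bash
# old names (removed in v1.12.0)
export DOCKER_CONTENT_TRUST_OFFLINE_PASSPHRASE="example-passphrase"
export DOCKER_CONTENT_TRUST_TAGGING_PASSPHRASE="example-passphrase"

# current names
export DOCKER_CONTENT_TRUST_ROOT_PASSPHRASE="example-passphrase"
export DOCKER_CONTENT_TRUST_REPOSITORY_PASSPHRASE="example-passphrase"
```
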
### `--api-enable-cors` flag on dockerd

**Deprecated In Release: [v1.6.0](https://github.com/docker/docker/releases/tag/v1.6.0)**

**Target For Removal In Release: v17.09**

The flag `--api-enable-cors` has been deprecated since v1.6.0. Use the flag
`--api-cors-header` instead.

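A minimal sketch of the replacement (the origin value is only an example):

```bash
# deprecated
dockerd --api-enable-cors

# replacement: name the allowed origin explicitly
dockerd --api-cors-header="https://example.com"
```
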
@@ -1,164 +0,0 @@
---
description: Volume plugin for Amazon EBS
keywords: "API, Usage, plugins, documentation, developer, amazon, ebs, rexray, volume"
title: Volume plugin for Amazon EBS
---

<!-- This file is maintained within the docker/docker Github
     repository at https://github.com/docker/docker/. Make all
     pull requests against that repo. If you see this file in
     another repository, consider it read-only there, as it will
     periodically be overwritten by the definitive file. Pull
     requests which include edits to this file in other repositories
     will be rejected.
-->

# A proof-of-concept Rexray plugin

In this example, a simple Rexray plugin will be created for the purposes of using
it on an Amazon EC2 instance with EBS. It is not meant to be a complete Rexray plugin.

The example source is available at [https://github.com/tiborvass/rexray-plugin](https://github.com/tiborvass/rexray-plugin).

To learn more about Rexray: [https://github.com/codedellemc/rexray](https://github.com/codedellemc/rexray)

## 1. Make a Docker image

The following is the Dockerfile used to containerize rexray.

```Dockerfile
FROM debian:jessie
RUN apt-get update && apt-get install -y --no-install-recommends wget ca-certificates
RUN wget https://dl.bintray.com/emccode/rexray/stable/0.6.4/rexray-Linux-x86_64-0.6.4.tar.gz -O rexray.tar.gz && tar -xvzf rexray.tar.gz -C /usr/bin && rm rexray.tar.gz
RUN mkdir -p /run/docker/plugins /var/lib/libstorage/volumes
ENTRYPOINT ["rexray"]
CMD ["--help"]
```

To build it you can run `image=$(cat Dockerfile | docker build -q -)`, and `$image`
will reference the containerized rexray image.

## 2. Extract rootfs

```sh
$ TMPDIR=/tmp/rexray # for the purpose of this example
$ # create container without running it, to extract the rootfs from image
$ docker create --name rexray "$image"
$ # save the rootfs to a tar archive
$ docker export -o $TMPDIR/rexray.tar rexray
$ # extract rootfs from tar archive to a rootfs folder
$ ( mkdir -p $TMPDIR/rootfs; cd $TMPDIR/rootfs; tar xf ../rexray.tar )
```

## 3. Add plugin configuration

We have to put the following JSON into `$TMPDIR/config.json`:

```json
{
  "Args": {
    "Description": "",
    "Name": "",
    "Settable": null,
    "Value": null
  },
  "Description": "A proof-of-concept EBS plugin (using rexray) for Docker",
  "Documentation": "https://github.com/tiborvass/rexray-plugin",
  "Entrypoint": [
    "/usr/bin/rexray", "service", "start", "-f"
  ],
  "Env": [
    {
      "Description": "",
      "Name": "REXRAY_SERVICE",
      "Settable": [
        "value"
      ],
      "Value": "ebs"
    },
    {
      "Description": "",
      "Name": "EBS_ACCESSKEY",
      "Settable": [
        "value"
      ],
      "Value": ""
    },
    {
      "Description": "",
      "Name": "EBS_SECRETKEY",
      "Settable": [
        "value"
      ],
      "Value": ""
    }
  ],
  "Interface": {
    "Socket": "rexray.sock",
    "Types": [
      "docker.volumedriver/1.0"
    ]
  },
  "Linux": {
    "AllowAllDevices": true,
    "Capabilities": ["CAP_SYS_ADMIN"],
    "Devices": null
  },
  "Mounts": [
    {
      "Source": "/dev",
      "Destination": "/dev",
      "Type": "bind",
      "Options": ["rbind"]
    }
  ],
  "Network": {
    "Type": "host"
  },
  "PropagatedMount": "/var/lib/libstorage/volumes",
  "User": {},
  "WorkDir": ""
}
```

Please note a few points:
- `PropagatedMount` is needed so that the docker daemon can see mounts done by the
  rexray plugin from within the container; otherwise the docker daemon is not able
  to mount a docker volume.
- The rexray plugin needs dynamic access to host devices. For that reason, we
  have to give it access to all devices under `/dev` and set `AllowAllDevices` to
  true for proper access.
- The user of this simple plugin can change only 3 settings: `REXRAY_SERVICE`,
  `EBS_ACCESSKEY` and `EBS_SECRETKEY`. This is because of the reduced scope of this
  plugin. Ideally other rexray parameters could also be set.

## 4. Create plugin

`docker plugin create tiborvass/rexray-plugin "$TMPDIR"` will create the plugin.

```sh
$ docker plugin ls
ID                  NAME                             DESCRIPTION                         ENABLED
2475a4bd0ca5        tiborvass/rexray-plugin:latest   A rexray volume plugin for Docker   false
```

## 5. Test plugin

```sh
$ docker plugin set tiborvass/rexray-plugin EBS_ACCESSKEY=$AWS_ACCESSKEY EBS_SECRETKEY=$AWS_SECRETKEY
$ docker plugin enable tiborvass/rexray-plugin
$ docker volume create -d tiborvass/rexray-plugin my-ebs-volume
$ docker volume ls
DRIVER                           VOLUME NAME
tiborvass/rexray-plugin:latest   my-ebs-volume
$ docker run --rm -v my-ebs-volume:/volume busybox sh -c 'echo bye > /volume/hi'
$ docker run --rm -v my-ebs-volume:/volume busybox cat /volume/hi
bye
```

## 6. Push plugin

First, ensure you are logged in with `docker login`. Then run
`docker plugin push tiborvass/rexray-plugin` to push the plugin to a registry
like a regular docker image, making it available for others to install via
`docker plugin install tiborvass/rexray-plugin EBS_ACCESSKEY=$AWS_ACCESSKEY EBS_SECRETKEY=$AWS_SECRETKEY`.

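Putting the commands above together (a sketch; it assumes your `docker login` credentials and the AWS keys are already available in the environment):

```sh
$ docker login
$ docker plugin push tiborvass/rexray-plugin
$ # on another host:
$ docker plugin install tiborvass/rexray-plugin EBS_ACCESSKEY=$AWS_ACCESSKEY EBS_SECRETKEY=$AWS_SECRETKEY
```
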
@@ -1,238 +0,0 @@
---
title: "Plugin config"
description: "How to develop and use a plugin with the managed plugin system"
keywords: "API, Usage, plugins, documentation, developer"
---

<!-- This file is maintained within the docker/docker Github
     repository at https://github.com/docker/docker/. Make all
     pull requests against that repo. If you see this file in
     another repository, consider it read-only there, as it will
     periodically be overwritten by the definitive file. Pull
     requests which include edits to this file in other repositories
     will be rejected.
-->


# Plugin Config Version 1 of Plugin V2

This document outlines the format of the V0 plugin configuration. The plugin
config described herein was introduced in the Docker daemon in the [v1.12.0
release](https://github.com/docker/docker/commit/f37117045c5398fd3dca8016ea8ca0cb47e7312b).

Plugin configs describe the various constituents of a docker plugin. Plugin
configs can be serialized to JSON format with the following media types:

Config Type    | Media Type
------------- | -------------
config        | "application/vnd.docker.plugin.v1+json"


## *Config* Field Descriptions

Config provides the base accessible fields for working with V0 plugin format
in the registry.

- **`description`** *string*

  description of the plugin

- **`documentation`** *string*

  link to the documentation about the plugin

- **`interface`** *PluginInterface*

  interface implemented by the plugins, struct consisting of the following fields

  - **`types`** *string array*

    types indicate what interface(s) the plugin currently implements.

    currently supported:

    - **docker.volumedriver/1.0**

    - **docker.networkdriver/1.0**

    - **docker.ipamdriver/1.0**

    - **docker.authz/1.0**

    - **docker.logdriver/1.0**

    - **docker.metricscollector/1.0**

  - **`socket`** *string*

    socket is the name of the socket the engine should use to communicate with the plugins.
    the socket will be created in `/run/docker/plugins`.

- **`entrypoint`** *string array*

  entrypoint of the plugin, see [`ENTRYPOINT`](../reference/builder.md#entrypoint)

- **`workdir`** *string*

  workdir of the plugin, see [`WORKDIR`](../reference/builder.md#workdir)

- **`network`** *PluginNetwork*

  network of the plugin, struct consisting of the following fields

  - **`type`** *string*

    network type.

    currently supported:

    - **bridge**
    - **host**
    - **none**

- **`mounts`** *PluginMount array*

  mounts of the plugin, struct consisting of the following fields, see [`MOUNTS`](https://github.com/opencontainers/runtime-spec/blob/master/config.md#mounts)

  - **`name`** *string*

    name of the mount.

  - **`description`** *string*

    description of the mount.

  - **`source`** *string*

    source of the mount.

  - **`destination`** *string*

    destination of the mount.

  - **`type`** *string*

    mount type.

  - **`options`** *string array*

    options of the mount.

- **`ipchost`** *boolean*

  Access to host ipc namespace.

- **`pidhost`** *boolean*

  Access to host pid namespace.

- **`propagatedMount`** *string*

  path to be mounted as rshared, so that mounts under that path are visible to docker. This is useful for volume plugins.
  This path will be bind-mounted outside of the plugin rootfs so its contents
  are preserved on upgrade.

- **`env`** *PluginEnv array*

  env of the plugin, struct consisting of the following fields

  - **`name`** *string*

    name of the env.

  - **`description`** *string*

    description of the env.

  - **`value`** *string*

    value of the env.

- **`args`** *PluginArgs*

  args of the plugin, struct consisting of the following fields

  - **`name`** *string*

    name of the args.

  - **`description`** *string*

    description of the args.

  - **`value`** *string array*

    values of the args.

- **`linux`** *PluginLinux*

  - **`capabilities`** *string array*

    capabilities of the plugin (*Linux only*), see list [`here`](https://github.com/opencontainers/runc/blob/master/libcontainer/SPEC.md#security)

  - **`allowAllDevices`** *boolean*

    If `/dev` is bind mounted from the host, and allowAllDevices is set to true, the plugin will have `rwm` access to all devices on the host.

  - **`devices`** *PluginDevice array*

    devices of the plugin (*Linux only*), struct consisting of the following fields, see [`DEVICES`](https://github.com/opencontainers/runtime-spec/blob/master/config-linux.md#devices)

    - **`name`** *string*

      name of the device.

    - **`description`** *string*

      description of the device.

    - **`path`** *string*

      path of the device.

## Example Config

*Example showing the 'tiborvass/sample-volume-plugin' plugin config.*

```json
{
  "Args": {
    "Description": "",
    "Name": "",
    "Settable": null,
    "Value": null
  },
  "Description": "A sample volume plugin for Docker",
  "Documentation": "https://docs.docker.com/engine/extend/plugins/",
  "Entrypoint": [
    "/usr/bin/sample-volume-plugin",
    "/data"
  ],
  "Env": [
    {
      "Description": "",
      "Name": "DEBUG",
      "Settable": [
        "value"
      ],
      "Value": "0"
    }
  ],
  "Interface": {
    "Socket": "plugin.sock",
    "Types": [
      "docker.volumedriver/1.0"
    ]
  },
  "Linux": {
    "Capabilities": null,
    "AllowAllDevices": false,
    "Devices": null
  },
  "Mounts": null,
  "Network": {
    "Type": ""
  },
  "PropagatedMount": "/data",
  "User": {},
  "Workdir": ""
}
```