Compare commits

..

73 Commits

Author SHA1 Message Date
Arnaud Porterie
a1cae772c4 Bump to version v1.5.0
Signed-off-by: Arnaud Porterie <arnaud.porterie@docker.com>
2015-02-04 10:07:14 -08:00
Derek McGowan
d4c731ecd6 Limit push and pull to v2 official registry
No longer push to the official v2 registry by default when it is available. This allows pulling images from the v2 registry without also making it the default for push. Only pull official images from the official v2 registry.

Signed-off-by: Derek McGowan <derek@mcgstyle.net> (github: dmcgowan)
2015-02-04 10:05:16 -08:00
Arnaud Porterie
2dba4e1386 Fix client-side validation of Dockerfile path
Arguments to `filepath.Rel` were reversed, making all builder tests
fail.
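
For reference, `filepath.Rel(basepath, targpath)` returns `targpath` expressed relative to `basepath`, so the argument order matters. A minimal Go sketch with illustrative paths (not the actual builder code):

```go
package main

import (
	"fmt"
	"path/filepath"
)

func main() {
	contextDir := "/build/context"
	dockerfile := "/build/context/sub/Dockerfile"

	// Correct order: base path first, target path second.
	rel, err := filepath.Rel(contextDir, dockerfile)
	fmt.Println(rel, err) // sub/Dockerfile <nil>

	// Reversed arguments compute the inverse relation instead.
	rev, err := filepath.Rel(dockerfile, contextDir)
	fmt.Println(rev, err) // ../.. <nil>
}
```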

Signed-off-by: Arnaud Porterie <arnaud.porterie@docker.com>
2015-02-03 09:07:17 -08:00
Tibor Vass
06a7f471e0 builder: prevent Dockerfile from leaving the build context
Signed-off-by: Tibor Vass <teabee89@gmail.com>
2015-02-03 09:07:17 -08:00
Doug Davis
4683d01691 Add an API test for docker build -f Dockerfile
I noticed that while we have tests to make sure that people don't
specify a Dockerfile (via -f) that's outside of the build context
when using the docker cli, we don't perform the same check on the
server side for API users. This would be a security risk.

While in there I had to add a new util func for the tests to allow us to
send content to the server that isn't JSON-encoded - in this case a tarball.
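
A hedged sketch of what such a raw (non-JSON) request against the build endpoint could look like; the daemon address, query string, and helper shape are purely illustrative, not the actual test helper:

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

func main() {
	// In a real test the body would be a build-context tarball created with
	// archive/tar; an empty buffer stands in for it here.
	tarball := bytes.NewBuffer(nil)

	// Hypothetical daemon address; the Dockerfile path points outside the context.
	req, err := http.NewRequest("POST", "http://localhost:2375/build?dockerfile=../outside/Dockerfile", tarball)
	if err != nil {
		panic(err)
	}
	// The point of the helper: send raw tar content, not JSON.
	req.Header.Set("Content-Type", "application/x-tar")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	// The server is expected to reject a Dockerfile outside the build context.
	fmt.Println(resp.StatusCode)
}
```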

Signed-off-by: Doug Davis <dug@us.ibm.com>
2015-02-03 09:07:17 -08:00
Michael Crosby
6020a06399 Print zeros for initial stats collection on stopped container
When calling stats on stopped containers, print out zeros for all of the
values to populate the initial table. This signals to the user that the
operation completed and will not block.

Closes #10504

Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
2015-02-03 08:50:47 -08:00
Sebastiaan van Stijn
cc0bfccdf4 Replace "base" with "ubuntu" in documentation
The API documentation uses the "base" image in various
places. The "base" image is deprecated and it is no longer
possible to download this image.

This changes the API documentation to use "ubuntu" instead.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2015-02-02 13:34:12 -08:00
Chen Hanxiao
0c18ec62f3 docs: fix another typo in docker-build man page
s/arbtrary/arbitrary

Signed-off-by: Chen Hanxiao <chenhanxiao@cn.fujitsu.com>
2015-02-02 13:30:11 -08:00
John Tims
a9825c9bd8 Fix documentation typo
Signed-off-by: John Tims <john.k.tims@gmail.com>
2015-02-02 13:30:11 -08:00
Josh Hawn
908be50c44 Handle gorilla/mux route url bug
When getting the URL from a v2 registry url builder, it does not
honor the scheme from the endpoint object and will cause an https
endpoint to return urls starting with http.
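
A hedged sketch of the kind of fix described, assuming a gorilla/mux route and a known endpoint URL; the helper name and package are hypothetical:

```go
package registry

import (
	"net/url"

	"github.com/gorilla/mux"
)

// buildEndpointURL builds a route URL and then forces the scheme and host
// from the endpoint onto it, so an https endpoint never yields an http URL.
func buildEndpointURL(route *mux.Route, endpoint *url.URL, pairs ...string) (string, error) {
	u, err := route.URL(pairs...)
	if err != nil {
		return "", err
	}
	u.Scheme = endpoint.Scheme
	u.Host = endpoint.Host
	return u.String(), nil
}
```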

Docker-DCO-1.1-Signed-off-by: Josh Hawn <josh.hawn@docker.com> (github: jlhawn)
2015-02-02 13:30:11 -08:00
Josh Hawn
2a82dba34d Fix token basic auth header issue
When requesting a token, the basic auth header is always being set even
if there is no username value. This patch corrects this and does not set
the basic auth header if the username is empty.

Also fixes an issue where pulling all tags from a v2 registry succeeds
when the image does not actually exist on the registry.
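
A minimal sketch of the described behavior (function and variable names are hypothetical):

```go
package registry

import "net/http"

// newTokenRequest only sets the basic auth header when a username is present.
func newTokenRequest(tokenURL, username, password string) (*http.Request, error) {
	req, err := http.NewRequest("GET", tokenURL, nil)
	if err != nil {
		return nil, err
	}
	if username != "" {
		req.SetBasicAuth(username, password)
	}
	return req, nil
}
```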

Docker-DCO-1.1-Signed-off-by: Josh Hawn <josh.hawn@docker.com> (github: jlhawn)
2015-02-02 13:30:11 -08:00
Erik Hollensbe
13fd2a908c Remove "OMG IPV6" log message
Signed-off-by: Erik Hollensbe <erik+github@hollensbe.org>
2015-02-02 13:30:11 -08:00
Arnaud Porterie
464891aaf8 Fix race in test registry setup
Wait for the local registry-v2 test instance to become available to
avoid random test failures.

Signed-off-by: Arnaud Porterie <arnaud.porterie@docker.com>
2015-02-02 13:30:10 -08:00
Phil Estes
9974663ed7 Add missing $HOST in a couple places in HTTPS/TLS setup docs
Fix typos in setup docs where tcp://:2376 is used without the $HOST
parameter.

Docker-DCO-1.1-Signed-off-by: Phil Estes <estesp@linux.vnet.ibm.com>
2015-02-02 13:30:10 -08:00
Derek McGowan
76269e5c9d Add push fallback to v1 for the official registry
Signed-off-by: Derek McGowan <derek@mcgstyle.net> (github: dmcgowan)
2015-02-02 13:30:10 -08:00
Jessica Frazelle
1121d7c4fd Validate toml
Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <princess@docker.com> (github: jfrazelle)

Docker-DCO-1.1-Signed-off-by: Jessie Frazelle <hugs@docker.com> (github: jfrazelle)
2015-02-02 13:30:10 -08:00
Josh Hawn
7e197575a2 Remove Checksum field from image.Image struct
The checksum is now being stored in a separate file beside the image
JSON file.

Docker-DCO-1.1-Signed-off-by: Josh Hawn <josh.hawn@docker.com> (github: jlhawn)
2015-02-02 13:30:10 -08:00
Derek McGowan
3dc3059d94 Store tar checksum in separate file
Fixes #10432

Signed-off-by: Derek McGowan <derek@mcgstyle.net> (github: dmcgowan)
2015-02-02 13:30:10 -08:00
Derek McGowan
7b6de74c9a Revert client signature
Supports multiple tag push with daemon signature

Fixes #10444

Signed-off-by: Derek McGowan <derek@mcgstyle.net> (github: dmcgowan)
2015-02-02 13:30:10 -08:00
Phil Estes
cad8adacb8 Setup TCP keep-alive on hijacked HTTP(S) client <--> daemon sessions
Fixes #10387

Without TCP keep-alive set on socket connections to the daemon, any
long-running container with std{out,err,in} attached that doesn't
read/write for a minute or longer will end in ECONNTIMEDOUT (depending
on network settings/OS defaults, etc.), leaving the docker client side
believing it is still waiting on data with no actual underlying socket
connection.

This patch turns on TCP keep-alive for the underlying TCP connection
for both TLS and standard HTTP hijacked daemon connections from the
docker client, with a keep-alive timeout of 30 seconds.
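
A minimal sketch of enabling keep-alive on an established connection, assuming access to the underlying `*net.TCPConn` (the helper name is illustrative):

```go
package client

import (
	"net"
	"time"
)

// setTCPKeepAlive enables TCP keep-alive with a 30 second period on the
// connection backing a hijacked client <--> daemon session.
func setTCPKeepAlive(conn net.Conn) {
	if tcpConn, ok := conn.(*net.TCPConn); ok {
		tcpConn.SetKeepAlive(true)
		tcpConn.SetKeepAlivePeriod(30 * time.Second)
	}
}
```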

Docker-DCO-1.1-Signed-off-by: Phil Estes <estesp@linux.vnet.ibm.com>
2015-02-02 13:30:10 -08:00
Srini Brahmaroutu
6226deeaf4 Removing the check on Architecture to build and run Docker on IBM Power and Z platforms
Signed-off-by: Srini Brahmaroutu <srbrahma@us.ibm.com>
2015-02-02 13:30:10 -08:00
gdi2290
3ec19f56cf Update AUTHORS file and .mailmap
added `LC_ALL=C.UTF-8` due to osx
http://www.inmotionhosting.com/support/website/ssh/speed-up-grep-searches-with-lc-all

Signed-off-by: Patrick Stapleton <github@gdi2290.com>
2015-02-02 13:30:10 -08:00
Josh Hawn
48c71787ed No longer compute checksum when installing images.
While checksums are verified when a layer is pulled from v2 registries,
there are known issues where the checksum may change when the layer diff
is computed again. To avoid these issues, the checksum should no longer
be computed and stored until after it has been extracted to the docker
storage driver. The checksums are instead computed lazily before they
are pushed to a v2 registry.

Docker-DCO-1.1-Signed-off-by: Josh Hawn <josh.hawn@docker.com> (github: jlhawn)
2015-02-02 13:30:10 -08:00
Jessica Frazelle
604731a930 Some small updates to the dev env docs.
Docker-DCO-1.1-Signed-off-by: Jessica Frazelle <jess@docker.com> (github: jfrazelle)
2015-02-02 13:30:09 -08:00
Derek McGowan
e8650e01f8 Defer creation of trust key file until needed
Fixes #10442

Signed-off-by: Derek McGowan <derek@mcgstyle.net> (github: dmcgowan)
2015-02-02 13:30:09 -08:00
Sven Dowideit
817d04d992 DHE documentation placeholder and Navbar changes
Signed-off-by: Sven Dowideit <SvenDowideit@home.org.au>
2015-02-02 13:30:09 -08:00
Sven Dowideit
cdff91a01c comment out the docker and curl lines we'll run later
Docker-DCO-1.1-Signed-off-by: Sven Dowideit <SvenDowideit@docker.com> (github: SvenDowideit)
2015-02-02 13:30:09 -08:00
Mehul Kar
6f26bd0e16 Improve explanation of port mapping from containers
Signed-off-by: Mehul Kar <mehul.kar@gmail.com>
2015-02-02 13:30:09 -08:00
Tianon Gravi
3c090db4e9 Update .deb version numbers to be more sane
Example output:
```console
root@906b21a861fb:/go/src/github.com/docker/docker# ./hack/make.sh binary ubuntu
bundles/1.4.1-dev already exists. Removing.

---> Making bundle: binary (in bundles/1.4.1-dev/binary)
Created binary: /go/src/github.com/docker/docker/bundles/1.4.1-dev/binary/docker-1.4.1-dev

---> Making bundle: ubuntu (in bundles/1.4.1-dev/ubuntu)
Created package {:path=>"lxc-docker-1.4.1-dev_1.4.1~dev~git20150128.182847.0.17e840a_amd64.deb"}
Created package {:path=>"lxc-docker_1.4.1~dev~git20150128.182847.0.17e840a_amd64.deb"}

```

As noted in a comment in the code, this sums up the reasoning for this change (this is how APT and reprepro compare versions):
```console
$ dpkg --compare-versions 1.5.0 gt 1.5.0~rc1 && echo true || echo false
true
$ dpkg --compare-versions 1.5.0~rc1 gt 1.5.0~git20150128.112847.17e840a && echo true || echo false
true
$ dpkg --compare-versions 1.5.0~git20150128.112847.17e840a gt 1.5.0~dev~git20150128.112847.17e840a && echo true || echo false
true
```

ie, `1.5.0` > `1.5.0~rc1` > `1.5.0~git20150128.112847.17e840a` > `1.5.0~dev~git20150128.112847.17e840a`

Signed-off-by: Andrew "Tianon" Page <admwiggin@gmail.com>
2015-01-28 13:51:12 -08:00
Arnaud Porterie
b7c3fdfd0d Update fish completion for 1.5.0
Signed-off-by: Arnaud Porterie <arnaud.porterie@docker.com>
2015-01-28 10:29:29 -08:00
Phil Estes
aa682a845b Fix bridge initialization for IPv6 if IPv4-only docker0 exists
This fixes the daemon's failure to start when setting --ipv6=true for
the first time without deleting `docker0` bridge from a prior use with
only IPv4 addressing.

The addition of the IPv6 bridge address is factored out into a separate
initialization routine which is called even if the bridge exists but no
IPv6 addresses are found.

Docker-DCO-1.1-Signed-off-by: Phil Estes <estesp@linux.vnet.ibm.com> (github: estesp)
2015-01-28 10:29:29 -08:00
Jonathan Rudenberg
218d0dcc9d Fix missing err assignment in bridge creation
Signed-off-by: Jonathan Rudenberg <jonathan@titanous.com>
2015-01-28 10:29:29 -08:00
Stephen J Day
510d8f8634 Open up v2 http status code checks for put and head checks
In certain cases, such as when putting a manifest or checking for the existence
of a layer, the status code checks in session_v2.go were too narrow for their
purpose. In the case of putting a manifest, the handler only cares that an
error is not returned. Whether it is a 304 or 202 does not matter, as long as
the server reports success. Having the client only accept specific http codes
inhibits future protocol evolution.
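
A hedged sketch of the broader check (not the actual session_v2.go code): treat any non-error status as success instead of requiring one specific code:

```go
package registry

import (
	"fmt"
	"net/http"
)

// checkSuccess accepts any 2xx/3xx response, so a 202 or a 304 both count
// as success; only 4xx/5xx are reported as errors.
func checkSuccess(resp *http.Response) error {
	if resp.StatusCode >= 200 && resp.StatusCode < 400 {
		return nil
	}
	return fmt.Errorf("unexpected HTTP status %d", resp.StatusCode)
}
```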

Signed-off-by: Stephen J Day <stephen.day@docker.com>
2015-01-28 08:48:03 -08:00
Derek McGowan
b65600f6b6 Buffer tar file on v2 push
fixes #10312
fixes #10306

Signed-off-by: Derek McGowan <derek@mcgstyle.net> (github: dmcgowan)
2015-01-28 08:48:03 -08:00
Jessica Frazelle
79dcea718c Add completion for stats.
Docker-DCO-1.1-Signed-off-by: Jessica Frazelle <jess@docker.com> (github: jfrazelle)
2015-01-28 08:45:57 -08:00
Sven Dowideit
072b09c45d Add the registry mirror document to the menu
Signed-off-by: Sven Dowideit <SvenDowideit@docker.com>

Docker-DCO-1.1-Signed-off-by: Sven Dowideit <SvenDowideit@docker.com> (github: SvenDowideit)
2015-01-27 19:35:26 -08:00
Derek McGowan
c2d9837745 Use layer checksum if calculated during manifest creation
Signed-off-by: Derek McGowan <derek@mcgstyle.net> (github: dmcgowan)
2015-01-27 19:35:26 -08:00
Josh Hawn
fa5dfbb18b Fix premature close of build output on pull
The build job will sometimes trigger a pull job when the base image
does not exist. Now that engine jobs properly close their output by default
the pull job would also close the build job's stdout in a cascading close
upon completion of the pull.

This patch corrects this by wrapping the `pull` job's stdout with a
nopCloseWriter which will not close the stdout of the `build` job.
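
A sketch of such a wrapper; the type name comes from the commit message, the rest is illustrative:

```go
package engine

import "io"

// nopCloseWriter forwards writes to the wrapped writer but turns Close into
// a no-op, so closing the pull job's stdout does not also close the build
// job's stdout.
type nopCloseWriter struct {
	io.Writer
}

func (nopCloseWriter) Close() error { return nil }
```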

Docker-DCO-1.1-Signed-off-by: Josh Hawn <josh.hawn@docker.com> (github: jlhawn)
2015-01-27 19:35:26 -08:00
Sven Dowideit
6532a075f3 tell users what IP range Hub webhooks can come from so they can filter
Docker-DCO-1.1-Signed-off-by: Sven Dowideit <SvenDowideit@docker.com> (github: SvenDowideit)

Signed-off-by: Sven Dowideit <SvenDowideit@docker.com>
2015-01-27 19:35:26 -08:00
Chen Hanxiao
3b4a4bf809 docs: fix a typo in docker-build man page
s/Dockefile/Dockerfile

Signed-off-by: Chen Hanxiao <chenhanxiao@cn.fujitsu.com>
2015-01-27 19:35:26 -08:00
Tony Miller
4602909566 fix /etc/host typo in remote API docs
Signed-off-by: Tony Miller <mcfiredrill@gmail.com>
2015-01-27 19:35:25 -08:00
Sven Dowideit
588f350b61 as we're not using the search suggestion feature, only load the search_content when we have a search ?q= param
Signed-off-by: Sven Dowideit <SvenDowideit@home.org.au>

Docker-DCO-1.1-Signed-off-by: Sven Dowideit <SvenDowideit@home.org.au> (github: SvenDowideit)
2015-01-27 19:35:25 -08:00
Sven Dowideit
6e5ff509b2 set the content-type for the search_content.json.gz
Signed-off-by: Sven Dowideit <SvenDowideit@home.org.au>

Docker-DCO-1.1-Signed-off-by: Sven Dowideit <SvenDowideit@home.org.au> (github: SvenDowideit)
2015-01-27 19:35:25 -08:00
Sven Dowideit
61d341c2ca Change to load the json.gz file
Signed-off-by: Sven Dowideit <SvenDowideit@home.org.au>

Docker-DCO-1.1-Signed-off-by: Sven Dowideit <SvenDowideit@home.org.au> (github: SvenDowideit)
2015-01-27 19:35:25 -08:00
unclejack
b996d379a1 docs: compress search_content.json for release
Signed-off-by: Cristian Staretu <cristian.staretu@gmail.com>

Docker-DCO-1.1-Signed-off-by: unclejack <unclejacksons@gmail.com> (github: SvenDowideit)
2015-01-27 19:35:25 -08:00
Derek McGowan
b0935ea730 Better error messaging and logging for v2 registry requests
Signed-off-by: Derek McGowan <derek@mcgstyle.net> (github: dmcgowan)
2015-01-27 19:35:25 -08:00
Derek McGowan
96fe13b49b Add file path to errors loading the key file
Signed-off-by: Derek McGowan <derek@mcgstyle.net> (github: dmcgowan)
2015-01-27 19:35:25 -08:00
Brian Goff
12ccde442a Do not return err on symlink eval
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
2015-01-27 19:35:25 -08:00
Michael Crosby
4262cfe41f Remove omitempty json tags from structs
When unmarshaling the json response from the API in languages to a
dynamic object, having the omitempty field tag on types such as float64
causes the key to be omitted on 0.0 values. Various languages will
interpret this as a null when 0.0 is the actual value.

This patch removes the omitempty tags on fields that are not structs,
where they can be safely omitted.
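
A small self-contained example of the underlying encoding/json behavior (the struct is illustrative, not one of the API types):

```go
package main

import (
	"encoding/json"
	"fmt"
)

type cpuStats struct {
	// With omitempty a 0.0 value is dropped from the output entirely, which
	// some clients decode as null instead of zero.
	PercentOmitted float64 `json:"percent_omitted,omitempty"`
	Percent        float64 `json:"percent"`
}

func main() {
	out, _ := json.Marshal(cpuStats{})
	fmt.Println(string(out)) // {"percent":0}
}
```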

Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
2015-01-27 19:35:25 -08:00
Brian Goff
ddc2e25546 Fix bind-mounts only partially removed
When calling delete on a bind-mount volume, the config file was being
removed, but it was not actually being removed from the volume index.

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
2015-01-27 19:35:25 -08:00
unclejack
6646cff646 docs: shrink sprites-small_360.png
Signed-off-by: Cristian Staretu <cristian.staretu@gmail.com>
2015-01-27 19:35:25 -08:00
Euan
ac8fd856c0 Allow empty layer configs in manifests
Before the V2 registry changes, images with no config could be pushed.
This change fixes a regression that prevented those images from being
pushed to a registry.

Signed-off-by: Euan Kemp <euank@euank.com>
2015-01-27 19:35:25 -08:00
DiuDiugirl
48754d673c Fix a minor typo
Docker inspect can also be used on images; this patch fixes the
minor typo in docker/flags.go and docs/man/docker.1.md.

Signed-off-by:  DiuDiugirl <sophia.wang@pku.edu.cn>
2015-01-27 19:35:25 -08:00
unclejack
723684525a pkg/archive: remove tar autodetection log line
Signed-off-by: Cristian Staretu <cristian.staretu@gmail.com>
2015-01-27 19:35:24 -08:00
Derek McGowan
32aceadbe6 Revert progressreader to not defer close
When the progress reader closes, it overwrites the progress line with the full progress bar, replacing the completed message.

Signed-off-by: Derek McGowan <derek@mcgstyle.net> (github: dmcgowan)
2015-01-27 19:35:24 -08:00
Derek McGowan
c67d3e159c Use filepath instead of path
Currently loading the trust key uses path instead of filepath. This creates problems on some operating systems such as Windows.

Fixes #10319
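
A small example of the difference (the Windows-style path is illustrative): `path` always joins with forward slashes, while `path/filepath` uses the host OS separator:

```go
package main

import (
	"fmt"
	"path"
	"path/filepath"
)

func main() {
	fmt.Println(path.Join(`C:\ProgramData\docker`, "key.json"))
	// On Windows, filepath.Join uses backslashes and yields
	// C:\ProgramData\docker\key.json; path.Join mixes separators.
	fmt.Println(filepath.Join(`C:\ProgramData\docker`, "key.json"))
}
```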

Signed-off-by: Derek McGowan <derek@mcgstyle.net> (github: dmcgowan)
2015-01-27 19:35:24 -08:00
Jessica Frazelle
a080e2add7 Make debugs logs suck less.
Docker-DCO-1.1-Signed-off-by: Jessica Frazelle <jess@docker.com> (github: jfrazelle)
2015-01-27 19:35:24 -08:00
Josh Hawn
24d81b0ddb Always store images with tarsum.v1 checksum added
Updates `image.StoreImage()` to always ensure that images
that are installed in Docker have a tarsum.v1 checksum.

Docker-DCO-1.1-Signed-off-by: Josh Hawn <josh.hawn@docker.com> (github: jlhawn)
2015-01-27 19:35:24 -08:00
Tony Miller
08f2fad40b document the ExtraHosts parameter for /containers/create for the remote API
I think this was added in version 1.15.

Signed-off-by: Tony Miller <mcfiredrill@gmail.com>
2015-01-27 19:35:24 -08:00
GennadySpb
f91fbe39ce Update using_supervisord.md
Fix factual error

change made by: GennadySpb <lipenkov@gmail.com>

Signed-off-by: Sven Dowideit <SvenDowideit@docker.com>
2015-01-27 19:35:24 -08:00
Tianon Gravi
018ab080bb Remove windows from the list of supported platforms
Since it can still be tested natively without this, this won't cause any harm while we fix the tests to actually work on Windows.

Signed-off-by: Andrew "Tianon" Page <admwiggin@gmail.com>
2015-01-27 19:35:24 -08:00
Tibor Vass
fe94ecb2c1 integration-cli: wait for container before sending ^D
Signed-off-by: Tibor Vass <teabee89@gmail.com>
2015-01-27 19:35:24 -08:00
Lorenz Leutgeb
7b2e67036f Fix inconsistent formatting
Colon was bold, but regular at other occurrences.

Blame cf27b310c4

Signed-off-by: Lorenz Leutgeb <lorenz.leutgeb@gmail.com>
2015-01-27 19:35:24 -08:00
Lorenz Leutgeb
e130faea1b doc: Minor semantical/editorial fixes in HTTPS article
"read-only" vs. "only readable by you"

Refer to:
https://github.com/docker/docker/pull/9952#discussion_r22690266

Signed-off-by: Lorenz Leutgeb <lorenz.leutgeb@gmail.com>
2015-01-27 19:35:24 -08:00
Lorenz Leutgeb
38f09de334 doc: Editorial changes as suggested by @fredlf
Refer to:
 * https://github.com/docker/docker/pull/9952#discussion_r22686652
 * https://github.com/docker/docker/pull/9952#discussion_r22686804

Signed-off-by: Lorenz Leutgeb <lorenz.leutgeb@gmail.com>
2015-01-27 19:35:24 -08:00
Lorenz Leutgeb
f9ba68ddfb doc: Improve article on HTTPS
* Adjust header to match _page_title
 * Add instructions on deletion of CSRs and setting permissions
 * Simplify some path expressions and commands
 * Consequently use ~ instead of ${HOME}
 * Precise formulation ('key' vs. 'public key')
 * Fix wrong indentation of output of `openssl req`
 * Use dash ('--') instead of minus ('-')

Remark on permissions:

It's not a problem to `chmod 0400` the private keys, because the
Docker daemon runs as root (can read the file anyway) and the Docker
client runs as user.

Signed-off-by: Lorenz Leutgeb <lorenz.leutgeb@gmail.com>
2015-01-27 19:35:23 -08:00
Abin Shahab
16913455bd Fixes apparmor regression
Signed-off-by: Abin Shahab <ashahab@altiscale.com> (github: ashahab-altiscale)
Docker-DCO-1.1-Signed-off-by: Abin Shahab <ashahab@altiscale.com> (github: ashahab-altiscale)
2015-01-27 19:35:23 -08:00
Andrew C. Bodine
32f189cd08 Adds docs for /containers/(id)/attach/ws api endpoint
Signed-off-by: Andrew C. Bodine <acbodine@us.ibm.com>
2015-01-27 19:35:23 -08:00
Josh Hawn
526ca42282 Split API Version header when checking for v2
Since the Docker-Distribution-API-Version header value may contain multiple
space delimited versions as well as many instances of the header key, the
header value is now split on whitespace characters to iterate over all versions
that may be listed in one instance of the header.
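
A hedged sketch of that check (the function name is hypothetical; the header name comes from the message and `registry/2.0` is the usual v2 registry value):

```go
package registry

import (
	"net/http"
	"strings"
)

// supportsV2 walks every instance of the version header and splits each
// value on whitespace, so multiple space-delimited versions are all seen.
func supportsV2(header http.Header) bool {
	for _, value := range header[http.CanonicalHeaderKey("Docker-Distribution-API-Version")] {
		for _, version := range strings.Fields(value) {
			if version == "registry/2.0" {
				return true
			}
		}
	}
	return false
}
```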

Docker-DCO-1.1-Signed-off-by: Josh Hawn <josh.hawn@docker.com> (github: jlhawn)
2015-01-27 19:35:23 -08:00
Harald Albers
b98b42d843 Add bash completions for daemon flags, simplify with extglob
Implementing the daemon flags the traditional way introduced even more
redundancy than usual because the same list of options with flags
had to be added twice.

This can be avoided by using variables in the case statements when
using the extglob shell option.

Signed-off-by: Harald Albers <github@albersweb.de>
2015-01-27 19:35:23 -08:00
imre Fitos
7bf03dd132 fix typo 'setup/set up'
Signed-off-by: imre Fitos <imre.fitos+github@gmail.com>
2015-01-27 19:35:23 -08:00
imre Fitos
034aa3b2c4 start docker before checking for updated NAT rule
Signed-off-by: imre Fitos <imre.fitos+github@gmail.com>
2015-01-27 19:35:23 -08:00
imre Fitos
6da1e01e6c docs: remove NAT rule when removing bridge
Signed-off-by: imre Fitos <imre.fitos+github@gmail.com>
2015-01-27 19:35:23 -08:00
3940 changed files with 227073 additions and 609330 deletions


@@ -1,3 +1,2 @@
bundles
.gopath
vendor/pkg

14 .drone.yml Executable file

@@ -0,0 +1,14 @@
image: dockercore/docker
env:
- AUTO_GOPATH=1
- DOCKER_GRAPHDRIVER=vfs
- DOCKER_EXECDRIVER=native
script:
# Setup the DockerInDocker environment.
- hack/dind
# Tests relying on StartWithBusybox make Drone time out.
- rm integration-cli/docker_cli_daemon_test.go
- rm integration-cli/docker_cli_exec_test.go
# Validate and test.
- hack/make.sh validate-dco validate-gofmt validate-toml
- hack/make.sh binary cross test-unit test-integration-cli test-integration test-docker-py


@@ -1,51 +0,0 @@
<!--
If you are reporting a new issue, make sure that we do not have any duplicates
already open. You can ensure this by searching the issue list for this
repository. If there is a duplicate, please close your issue and add a comment
to the existing issue instead.
If you suspect your issue is a bug, please edit your issue description to
include the BUG REPORT INFORMATION shown below. If you fail to provide this
information within 7 days, we cannot debug your issue and will close it. We
will, however, reopen it if you later provide the information.
For more information about reporting issues, see
https://github.com/docker/docker/blob/master/CONTRIBUTING.md#reporting-other-issues
---------------------------------------------------
BUG REPORT INFORMATION
---------------------------------------------------
Use the commands below to provide key information from your environment:
You do NOT have to include this information if this is a FEATURE REQUEST
-->
**Output of `docker version`:**
```
(paste your output here)
```
**Output of `docker info`:**
```
(paste your output here)
```
**Additional environment details (AWS, VirtualBox, physical, etc.):**
**Steps to reproduce the issue:**
1.
2.
3.
**Describe the results you received:**
**Describe the results you expected:**
**Additional information you deem important (e.g. issue happens only occasionally):**


@@ -1,23 +0,0 @@
<!--
Please make sure you've read and understood our contributing guidelines;
https://github.com/docker/docker/blob/master/CONTRIBUTING.md
** Make sure all your commits include a signature generated with `git commit -s` **
For additional information on our contributing process, read our contributing
guide https://docs.docker.com/opensource/code/
If this is a bug fix, make sure your description includes "fixes #xxxx", or
"closes #xxxx"
Please provide the following information:
-->
**- What I did**
**- How I did it**
**- How to verify it**
**- A picture of a cute animal (not mandatory but encouraged)**

42 .gitignore vendored

@@ -1,27 +1,31 @@
# Docker project generated files to ignore
# if you want to ignore files created by your editor/tools,
# please consider a global .gitignore https://help.github.com/articles/ignoring-files
*.exe
*.exe~
*.orig
*.test
.*.swp
.DS_Store
.gopath/
autogen/
bundles/
.vagrant*
bin
docker/docker
dockerversion/version_autogen.go
docs/AWS_S3_BUCKET
docs/GITCOMMIT
docs/GIT_BRANCH
docs/VERSION
*.exe
.*.swp
a.out
*.orig
build_src
.flymake*
.idea
.DS_Store
docs/_build
docs/_static
docs/_templates
docs/changed-files
# generated by man/md2man-all.sh
man/man1
man/man5
man/man8
.gopath/
.dotcloud
*.test
bundles/
.hg/
.git/
vendor/pkg/
pyenv
Vagrantfile
docs/AWS_S3_BUCKET
docs/GIT_BRANCH
docs/VERSION
docs/GITCOMMIT
docs/changed-files

107 .mailmap

@@ -1,4 +1,4 @@
# Generate AUTHORS: hack/generate-authors.sh
# Generate AUTHORS: project/generate-authors.sh
# Tip for finding duplicates (besides scanning the output of AUTHORS for name
# duplicates that aren't also email duplicates): scan the output of:
@@ -93,8 +93,7 @@ Sven Dowideit <SvenDowideit@home.org.au> <¨SvenDowideit@home.org.au¨>
Sven Dowideit <SvenDowideit@home.org.au> <SvenDowideit@users.noreply.github.com>
Sven Dowideit <SvenDowideit@home.org.au> <sven@t440s.home.gateway>
<alexl@redhat.com> <alexander.larsson@gmail.com>
Alexander Morozov <lk4d4@docker.com> <lk4d4math@gmail.com>
Alexander Morozov <lk4d4@docker.com>
Alexandr Morozov <lk4d4math@gmail.com>
<git.nivoc@neverbox.com> <kuehnle@online.de>
O.S. Tezer <ostezer@gmail.com>
<ostezer@gmail.com> <ostezer@users.noreply.github.com>
@@ -107,9 +106,7 @@ Roberto G. Hashioka <roberto.hashioka@docker.com> <roberto_hashioka@hotmail.com>
Sridhar Ratnakumar <sridharr@activestate.com>
Sridhar Ratnakumar <sridharr@activestate.com> <github@srid.name>
Liang-Chi Hsieh <viirya@gmail.com>
Aleksa Sarai <asarai@suse.de>
Aleksa Sarai <asarai@suse.de> <asarai@suse.com>
Aleksa Sarai <asarai@suse.de> <cyphar@cyphar.com>
Aleksa Sarai <cyphar@cyphar.com>
Will Weaver <monkey@buildingbananas.com>
Timothy Hobbs <timothyhobbs@seznam.cz>
Nathan LeClaire <nathan.leclaire@docker.com> <nathan.leclaire@gmail.com>
@@ -120,27 +117,24 @@ Nathan LeClaire <nathan.leclaire@docker.com> <nathanleclaire@gmail.com>
<marc@marc-abramowitz.com> <msabramo@gmail.com>
Matthew Heon <mheon@redhat.com> <mheon@mheonlaptop.redhat.com>
<bernat@luffy.cx> <vincent@bernat.im>
<bernat@luffy.cx> <Vincent.Bernat@exoscale.ch>
<p@pwaller.net> <peter@scraperwiki.com>
<andrew.weiss@outlook.com> <andrew.weiss@microsoft.com>
Francisco Carriedo <fcarriedo@gmail.com>
<julienbordellier@gmail.com> <git@julienbordellier.com>
<ahmetb@microsoft.com> <ahmetalpbalkan@gmail.com>
<lk4d4@docker.com> <lk4d4math@gmail.com>
<arnaud.porterie@docker.com> <icecrime@gmail.com>
<baloo@gandi.net> <superbaloo+registrations.github@superbaloo.net>
Brian Goff <cpuguy83@gmail.com>
<cpuguy83@gmail.com> <bgoff@cpuguy83-mbp.home>
<eric@windisch.us> <ewindisch@docker.com>
<ewindisch@docker.com> <eric@windisch.us>
<frank.rosquin+github@gmail.com> <frank.rosquin@gmail.com>
Hollie Teal <hollie@docker.com>
<hollie@docker.com> <hollie.teal@docker.com>
<hollie@docker.com> <hollietealok@users.noreply.github.com>
<huu@prismskylabs.com> <whoshuu@gmail.com>
Jessica Frazelle <jess@mesosphere.com>
Jessica Frazelle <jess@mesosphere.com> <jfrazelle@users.noreply.github.com>
Jessica Frazelle <jess@mesosphere.com> <acidburn@docker.com>
Jessica Frazelle <jess@mesosphere.com> <jess@docker.com>
Jessica Frazelle <jess@mesosphere.com> <princess@docker.com>
Jessica Frazelle <jess@docker.com> Jessie Frazelle <jfrazelle@users.noreply.github.com>
<jess@docker.com> <jfrazelle@users.noreply.github.com>
<konrad.wilhelm.kleine@gmail.com> <kwk@users.noreply.github.com>
<tintypemolly@gmail.com> <tintypemolly@Ohui-MacBook-Pro.local>
<estesp@linux.vnet.ibm.com> <estesp@gmail.com>
@@ -148,90 +142,3 @@ Jessica Frazelle <jess@mesosphere.com> <princess@docker.com>
Thomas LEVEIL <thomasleveil@gmail.com> Thomas LÉVEIL <thomasleveil@users.noreply.github.com>
<oi@truffles.me.uk> <timruffles@googlemail.com>
<Vincent.Bernat@exoscale.ch> <bernat@luffy.cx>
Antonio Murdaca <antonio.murdaca@gmail.com> <amurdaca@redhat.com>
Antonio Murdaca <antonio.murdaca@gmail.com> <runcom@redhat.com>
Antonio Murdaca <antonio.murdaca@gmail.com> <me@runcom.ninja>
Antonio Murdaca <antonio.murdaca@gmail.com> <runcom@linux.com>
Antonio Murdaca <antonio.murdaca@gmail.com> <runcom@users.noreply.github.com>
Darren Shepherd <darren.s.shepherd@gmail.com> <darren@rancher.com>
Deshi Xiao <dxiao@redhat.com> <dsxiao@dataman-inc.com>
Deshi Xiao <dxiao@redhat.com> <xiaods@gmail.com>
Doug Davis <dug@us.ibm.com> <duglin@users.noreply.github.com>
Jacob Atzen <jacob@jacobatzen.dk> <jatzen@gmail.com>
Jeff Nickoloff <jeff.nickoloff@gmail.com> <jeff@allingeek.com>
John Howard (VM) <John.Howard@microsoft.com>
John Howard (VM) <John.Howard@microsoft.com> <john.howard@microsoft.com>
John Howard (VM) <John.Howard@microsoft.com> <jhoward@microsoft.com>
Madhu Venugopal <madhu@socketplane.io> <madhu@docker.com>
Mary Anthony <mary.anthony@docker.com> <mary@docker.com>
Mary Anthony <mary.anthony@docker.com> moxiegirl <mary@docker.com>
Mary Anthony <mary.anthony@docker.com> <moxieandmore@gmail.com>
mattyw <mattyw@me.com> <gh@mattyw.net>
resouer <resouer@163.com> <resouer@gmail.com>
AJ Bowen <aj@gandi.net> soulshake <amy@gandi.net>
AJ Bowen <aj@gandi.net> soulshake <aj@gandi.net>
Tibor Vass <teabee89@gmail.com> <tibor@docker.com>
Tibor Vass <teabee89@gmail.com> <tiborvass@users.noreply.github.com>
Vincent Bernat <bernat@luffy.cx> <Vincent.Bernat@exoscale.ch>
Yestin Sun <sunyi0804@gmail.com> <yestin.sun@polyera.com>
bin liu <liubin0329@users.noreply.github.com> <liubin0329@gmail.com>
John Howard (VM) <John.Howard@microsoft.com> jhowardmsft <jhoward@microsoft.com>
Ankush Agarwal <ankushagarwal11@gmail.com> <ankushagarwal@users.noreply.github.com>
Tangi COLIN <tangicolin@gmail.com> tangicolin <tangicolin@gmail.com>
Allen Sun <allen.sun@daocloud.io>
Adrien Gallouët <adrien@gallouet.fr> <angt@users.noreply.github.com>
<aanm90@gmail.com> <martins@noironetworks.com>
Anuj Bahuguna <anujbahuguna.dev@gmail.com>
Anusha Ragunathan <anusha.ragunathan@docker.com> <anusha@docker.com>
Avi Miller <avi.miller@oracle.com> <avi.miller@gmail.com>
Brent Salisbury <brent.salisbury@docker.com> <brent@docker.com>
Chander G <chandergovind@gmail.com>
Chun Chen <ramichen@tencent.com> <chenchun.feed@gmail.com>
Ying Li <cyli@twistedmatrix.com>
Daehyeok Mun <daehyeok@gmail.com> <daehyeok@daehyeok-ui-MacBook-Air.local>
<dqminh@cloudflare.com> <dqminh89@gmail.com>
Daniel, Dao Quang Minh <dqminh@cloudflare.com>
Daniel Nephin <dnephin@docker.com> <dnephin@gmail.com>
Dave Tucker <dt@docker.com> <dave@dtucker.co.uk>
Doug Tangren <d.tangren@gmail.com>
Frederick F. Kautz IV <fkautz@redhat.com> <fkautz@alumni.cmu.edu>
Ben Golub <ben.golub@dotcloud.com>
Harold Cooper <hrldcpr@gmail.com>
hsinko <21551195@zju.edu.cn> <hsinko@users.noreply.github.com>
Josh Hawn <josh.hawn@docker.com> <jlhawn@berkeley.edu>
Justin Cormack <justin.cormack@docker.com>
<justin.cormack@docker.com> <justin.cormack@unikernel.com>
<justin.cormack@docker.com> <justin@specialbusservice.com>
Kamil Domański <kamil@domanski.co>
Lei Jitang <leijitang@huawei.com>
<leijitang@huawei.com> <leijitang@gmail.com>
Linus Heckemann <lheckemann@twig-world.com>
<lheckemann@twig-world.com> <anonymouse2048@gmail.com>
Lynda O'Leary <lyndaoleary29@gmail.com>
<lyndaoleary29@gmail.com> <lyndaoleary@hotmail.com>
Marianna Tessel <mtesselh@gmail.com>
Michael Huettermann <michael@huettermann.net>
Moysés Borges <moysesb@gmail.com>
<moysesb@gmail.com> <moyses.furtado@wplex.com.br>
Nigel Poulton <nigelpoulton@hotmail.com>
Qiang Huang <h.huangqiang@huawei.com>
<h.huangqiang@huawei.com> <qhuang@10.0.2.15>
Boaz Shuster <ripcurld.github@gmail.com>
Shuwei Hao <haosw@cn.ibm.com>
<haosw@cn.ibm.com> <haoshuwei24@gmail.com>
Soshi Katsuta <soshi.katsuta@gmail.com>
<soshi.katsuta@gmail.com> <katsuta_soshi@cyberagent.co.jp>
Stefan Berger <stefanb@linux.vnet.ibm.com>
<stefanb@linux.vnet.ibm.com> <stefanb@us.ibm.com>
Stephen Day <stephen.day@docker.com>
<stephen.day@docker.com> <stevvooe@users.noreply.github.com>
Toli Kuznets <toli@docker.com>
Tristan Carel <tristan@cogniteev.com>
<tristan@cogniteev.com> <tristan.carel@gmail.com>
Vincent Demeester <vincent@sbr.pm>
<vincent@sbr.pm> <vincent+github@demeester.fr>
Vishnu Kannan <vishnuk@google.com>
xlgao-zju <xlgao@zju.edu.cn> xlgao <xlgao@zju.edu.cn>
yuchangchun <yuchangchun1@huawei.com> y00277921 <yuchangchun1@huawei.com>
<zij@case.edu> <zjaffee@us.ibm.com>

715 AUTHORS (file diff suppressed because it is too large)


@@ -1,765 +1,6 @@
# Changelog
Items starting with `DEPRECATE` are important deprecation notices. For more
information on the list of deprecated flags and APIs please have a look at
https://docs.docker.com/engine/deprecated/ where target removal dates can also
be found.
## 1.11.0 (2016-04-13)
**IMPORTANT**: With Docker 1.11, a Linux docker installation is now made of 4 binaries (`docker`, [`docker-containerd`](https://github.com/docker/containerd), [`docker-containerd-shim`](https://github.com/docker/containerd) and [`docker-runc`](https://github.com/opencontainers/runc)). If you have scripts relying on docker being a single static binary, please make sure to update them. Interaction with the daemon stays the same otherwise; the usage of the other binaries should be transparent. A Windows docker installation remains a single binary, `docker.exe`.
### Builder
- Fix a bug where Docker would not use the correct uid/gid when processing the `WORKDIR` command ([#21033](https://github.com/docker/docker/pull/21033))
- Fix a bug where copy operations with userns would not use the proper uid/gid ([#20782](https://github.com/docker/docker/pull/20782), [#21162](https://github.com/docker/docker/pull/21162))
### Client
* Usage of the `:` separator for security option has been deprecated. `=` should be used instead ([#21232](https://github.com/docker/docker/pull/21232))
+ The client user agent is now passed to the registry on `pull`, `build`, `push`, `login` and `search` operations ([#21306](https://github.com/docker/docker/pull/21306), [#21373](https://github.com/docker/docker/pull/21373))
* Allow setting the Domainname and Hostname separately through the API ([#20200](https://github.com/docker/docker/pull/20200))
* Docker info will now warn users if it can not detect the kernel version or the operating system ([#21128](https://github.com/docker/docker/pull/21128))
- Fix an issue where `docker stats --no-stream` output could be all 0s ([#20803](https://github.com/docker/docker/pull/20803))
- Fix a bug where some newly started container would not appear in a running `docker stats` command ([#20792](https://github.com/docker/docker/pull/20792))
* Post processing is no longer enabled for linux-cgo terminals ([#20587](https://github.com/docker/docker/pull/20587))
- Values to `--hostname` are now refused if they do not comply with [RFC1123](https://tools.ietf.org/html/rfc1123) ([#20566](https://github.com/docker/docker/pull/20566))
+ Docker learned how to use a SOCKS proxy ([#20366](https://github.com/docker/docker/pull/20366), [#18373](https://github.com/docker/docker/pull/18373))
+ Docker now supports external credential stores ([#20107](https://github.com/docker/docker/pull/20107))
* `docker ps` now supports displaying the list of volumes mounted inside a container ([#20017](https://github.com/docker/docker/pull/20017))
* `docker info` now also reports Docker's root directory location ([#19986](https://github.com/docker/docker/pull/19986))
- Docker now prohibits logging in with an empty username (spaces are trimmed) ([#19806](https://github.com/docker/docker/pull/19806))
* Docker events attributes are now sorted by key ([#19761](https://github.com/docker/docker/pull/19761))
* `docker ps` no longer show exported port for stopped containers ([#19483](https://github.com/docker/docker/pull/19483))
- Docker now cleans after itself if a save/export command fails ([#17849](https://github.com/docker/docker/pull/17849))
* Docker load learned how to display a progress bar ([#17329](https://github.com/docker/docker/pull/17329), [#120078](https://github.com/docker/docker/pull/20078))
### Distribution
- Fix a panic that occurred when pulling an image with 0 layers ([#21222](https://github.com/docker/docker/pull/21222))
- Fix a panic that could occur on error while pushing to a registry with a misconfigured token service ([#21212](https://github.com/docker/docker/pull/21212))
+ All first-level delegation roles are now signed when doing a trusted push ([#21046](https://github.com/docker/docker/pull/21046))
+ OAuth support for registries was added ([#20970](https://github.com/docker/docker/pull/20970))
* `docker login` now handles token using the implementation found in [docker/distribution](https://github.com/docker/distribution) ([#20832](https://github.com/docker/docker/pull/20832))
* `docker login` will no longer prompt for an email ([#20565](https://github.com/docker/docker/pull/20565))
* Docker will now fallback to registry V1 if no basic auth credentials are available ([#20241](https://github.com/docker/docker/pull/20241))
* Docker will now try to resume layer download where it left off after a network error/timeout ([#19840](https://github.com/docker/docker/pull/19840))
- Fix generated manifest mediaType when pushing cross-repository ([#19509](https://github.com/docker/docker/pull/19509))
- Fix docker requesting additional push credentials when pulling an image if Content Trust is enabled ([#20382](https://github.com/docker/docker/pull/20382))
### Logging
- Fix a race in the journald log driver ([#21311](https://github.com/docker/docker/pull/21311))
* Docker syslog driver now uses the RFC-5424 format when emitting logs ([#20121](https://github.com/docker/docker/pull/20121))
* Docker GELF log driver now allows to specify the compression algorithm and level via the `gelf-compression-type` and `gelf-compression-level` options ([#19831](https://github.com/docker/docker/pull/19831))
* Docker daemon learned to output uncolorized logs via the `--raw-logs` options ([#19794](https://github.com/docker/docker/pull/19794))
+ Docker, on Windows platform, now includes an ETW (Event Tracing in Windows) logging driver named `etwlogs` ([#19689](https://github.com/docker/docker/pull/19689))
* Journald log driver learned how to handle tags ([#19564](https://github.com/docker/docker/pull/19564))
+ The fluentd log driver learned the following options: `fluentd-address`, `fluentd-buffer-limit`, `fluentd-retry-wait`, `fluentd-max-retries` and `fluentd-async-connect` ([#19439](https://github.com/docker/docker/pull/19439))
+ Docker learned to send log to Google Cloud via the new `gcplogs` logging driver. ([#18766](https://github.com/docker/docker/pull/18766))
### Misc
+ When saving linked images together with `docker save` a subsequent `docker load` will correctly restore their parent/child relationship ([#21385](https://github.com/docker/docker/pull/21385))
+ Support for building the Docker cli for OpenBSD was added ([#21325](https://github.com/docker/docker/pull/21325))
+ Labels can now be applied at network, volume and image creation ([#21270](https://github.com/docker/docker/pull/21270))
* The `dockremap` is now created as a system user ([#21266](https://github.com/docker/docker/pull/21266))
- Fix a few response body leaks ([#21258](https://github.com/docker/docker/pull/21258))
- Docker, when run as a service with systemd, will now properly manage its processes cgroups ([#20633](https://github.com/docker/docker/pull/20633))
* Docker info now reports the value of cgroup KernelMemory or emits a warning if it is not supported ([#20863](https://github.com/docker/docker/pull/20863))
* Docker info now also reports the cgroup driver in use ([#20388](https://github.com/docker/docker/pull/20388))
* Docker completion is now available on PowerShell ([#19894](https://github.com/docker/docker/pull/19894))
* `dockerinit` is no more ([#19490](https://github.com/docker/docker/pull/19490),[#19851](https://github.com/docker/docker/pull/19851))
+ Support for building Docker on arm64 was added ([#19013](https://github.com/docker/docker/pull/19013))
+ Experimental support for building docker.exe in a native Windows Docker installation ([#18348](https://github.com/docker/docker/pull/18348))
### Networking
- Fix panic if a node is forcibly removed from the cluster ([#21671](https://github.com/docker/docker/pull/21671))
- Fix "error creating vxlan interface" when starting a container in a Swarm cluster ([#21671](https://github.com/docker/docker/pull/21671))
* `docker network inspect` will now report all endpoints whether they have an active container or not ([#21160](https://github.com/docker/docker/pull/21160))
+ Experimental support for the MacVlan and IPVlan network drivers have been added ([#21122](https://github.com/docker/docker/pull/21122))
* Output of `docker network ls` is now sorted by network name ([#20383](https://github.com/docker/docker/pull/20383))
- Fix a bug where Docker would allow a network to be created with the reserved `default` name ([#19431](https://github.com/docker/docker/pull/19431))
* `docker network inspect` returns whether a network is internal or not ([#19357](https://github.com/docker/docker/pull/19357))
+ Control IPv6 via explicit option when creating a network (`docker network create --ipv6`). This shows up as a new `EnableIPv6` field in `docker network inspect` ([#17513](https://github.com/docker/docker/pull/17513))
* Support for AAAA Records (aka IPv6 Service Discovery) in embedded DNS Server ([#21396](https://github.com/docker/docker/pull/21396))
- Fix to not forward docker domain IPv6 queries to external servers ([#21396](https://github.com/docker/docker/pull/21396))
* Multiple A/AAAA records from embedded DNS Server for DNS Round robin ([#21019](https://github.com/docker/docker/pull/21019))
- Fix endpoint count inconsistency after an ungraceful daemon restart ([#21261](https://github.com/docker/docker/pull/21261))
- Move the ownership of exposed ports and port-mapping options from Endpoint to Sandbox ([#21019](https://github.com/docker/docker/pull/21019))
- Fixed a bug which prevents docker reload when host is configured with ipv6.disable=1 ([#21019](https://github.com/docker/docker/pull/21019))
- Added inbuilt nil IPAM driver ([#21019](https://github.com/docker/docker/pull/21019))
- Fixed bug in iptables.Exists() logic [#21019](https://github.com/docker/docker/pull/21019)
- Fixed a Veth interface leak when using overlay network ([#21019](https://github.com/docker/docker/pull/21019))
- Fixed a bug which prevents docker reload after a network delete during shutdown ([#20214](https://github.com/docker/docker/pull/20214))
- Make sure iptables chains are recreated on firewalld reload ([#20419](https://github.com/docker/docker/pull/20419))
- Allow to pass global datastore during config reload ([#20419](https://github.com/docker/docker/pull/20419))
- For anonymous containers use the alias name for IP to name mapping, ie:DNS PTR record ([#21019](https://github.com/docker/docker/pull/21019))
- Fix a panic when deleting an entry from /etc/hosts file ([#21019](https://github.com/docker/docker/pull/21019))
- Source the forwarded DNS queries from the container net namespace ([#21019](https://github.com/docker/docker/pull/21019))
- Fix to retain the network internal mode config for bridge networks on daemon reload ([#21780](https://github.com/docker/docker/pull/21780))
- Fix to retain IPAM driver option configs on daemon reload ([#21914](https://github.com/docker/docker/pull/21914))
### Plugins
- Fix a file descriptor leak that would occur every time plugins were enumerated ([#20686](https://github.com/docker/docker/pull/20686))
- Fix an issue where Authz plugin would corrupt the payload body when faced with a large amount of data ([#20602](https://github.com/docker/docker/pull/20602))
### Runtime
- Fix a panic that could occur when cleanup after a container started with invalid parameters ([#21716](https://github.com/docker/docker/pull/21716))
- Fix a race with event timers stopping early ([#21692](https://github.com/docker/docker/pull/21692))
- Fix race conditions in the layer store, potentially corrupting the map and crashing the process ([#21677](https://github.com/docker/docker/pull/21677))
- Un-deprecate auto-creation of host directories for mounts. This feature was marked deprecated in Docker 1.9 ([#21666](https://github.com/docker/docker/pull/21666)),
but was considered too much of a backward-incompatible change, so it was decided to keep the feature.
+ It is now possible for containers to share the NET and IPC namespaces when `userns` is enabled ([#21383](https://github.com/docker/docker/pull/21383))
+ `docker inspect <image-id>` will now expose the rootfs layers ([#21370](https://github.com/docker/docker/pull/21370))
+ Docker Windows gained a minimal `top` implementation ([#21354](https://github.com/docker/docker/pull/21354))
* Docker learned to report the faulty exe when a container cannot be started due to its condition ([#21345](https://github.com/docker/docker/pull/21345))
* Docker with device mapper will now refuse to run if `udev sync` is not available ([#21097](https://github.com/docker/docker/pull/21097))
- Fix a bug where Docker would not validate the config file upon configuration reload ([#21089](https://github.com/docker/docker/pull/21089))
- Fix a hang that would happen on attach if initial start was to fail ([#21048](https://github.com/docker/docker/pull/21048))
- Fix an issue where registry service options in the daemon configuration file were not properly taken into account ([#21045](https://github.com/docker/docker/pull/21045))
- Fix a race between the exec and resize operations ([#21022](https://github.com/docker/docker/pull/21022))
- Fix an issue where nanoseconds were not correctly taken in account when filtering Docker events ([#21013](https://github.com/docker/docker/pull/21013))
- Fix the handling of Docker command when passed a 64 bytes id ([#21002](https://github.com/docker/docker/pull/21002))
* Docker will now return a `204` (i.e http.StatusNoContent) code when it successfully deleted a network ([#20977](https://github.com/docker/docker/pull/20977))
- Fix a bug where the daemon would wait indefinitely in case the process it was about to kill had already exited on its own ([#20967](https://github.com/docker/docker/pull/20967))
* The devmapper driver learned the `dm.min_free_space` option. If the mapped device free space reaches the passed value, new device creation will be prohibited. ([#20786](https://github.com/docker/docker/pull/20786))
+ Docker can now prevent processes in container to gain new privileges via the `--security-opt=no-new-privileges` flag ([#20727](https://github.com/docker/docker/pull/20727))
- Starting a container with the `--device` option will now correctly resolve symlinks ([#20684](https://github.com/docker/docker/pull/20684))
+ Docker now relies on [`containerd`](https://github.com/docker/containerd) and [`runc`](https://github.com/opencontainers/runc) to spawn containers. ([#20662](https://github.com/docker/docker/pull/20662))
- Fix docker configuration reloading to only alter value present in the given config file ([#20604](https://github.com/docker/docker/pull/20604))
+ Docker now allows setting a container hostname via the `--hostname` flag when `--net=host` ([#20177](https://github.com/docker/docker/pull/20177))
+ Docker now allows executing privileged container while running with `--userns-remap` if both `--privileged` and the new `--userns=host` flag are specified ([#20111](https://github.com/docker/docker/pull/20111))
- Fix Docker not cleaning up correctly old containers upon restarting after a crash ([#19679](https://github.com/docker/docker/pull/19679))
* Docker will now error out if it doesn't recognize a configuration key within the config file ([#19517](https://github.com/docker/docker/pull/19517))
- Fix container loading, on daemon startup, when they depend on a plugin running within a container ([#19500](https://github.com/docker/docker/pull/19500))
* `docker update` learned how to change a container restart policy ([#19116](https://github.com/docker/docker/pull/19116))
* `docker inspect` now also returns a new `State` field containing the container state in a human readable way (i.e. one of `created`, `restarting`, `running`, `paused`, `exited` or `dead`)([#18966](https://github.com/docker/docker/pull/18966))
+ Docker learned to limit the number of active pids (i.e. processes) within the container via the `pids-limit` flags. NOTE: This requires `CGROUP_PIDS=y` to be in the kernel configuration. ([#18697](https://github.com/docker/docker/pull/18697))
- `docker load` now has a `--quiet` option to suppress the load output ([#20078](https://github.com/docker/docker/pull/20078))
- Fix a bug in neighbor discovery for IPv6 peers ([#20842](https://github.com/docker/docker/pull/20842))
- Fix a panic during cleanup if a container was started with invalid options ([#21802](https://github.com/docker/docker/pull/21802))
- Fix a situation where a container cannot be stopped if the terminal is closed ([#21840](https://github.com/docker/docker/pull/21840))
### Security
* Object with the `pcp_pmcd_t` selinux type were given management access to `/var/lib/docker(/.*)?` ([#21370](https://github.com/docker/docker/pull/21370))
* `restart_syscall`, `copy_file_range`, `mlock2` joined the list of allowed calls in the default seccomp profile ([#21117](https://github.com/docker/docker/pull/21117), [#21262](https://github.com/docker/docker/pull/21262))
* `send`, `recv` and `x32` were added to the list of allowed syscalls and arch in the default seccomp profile ([#19432](https://github.com/docker/docker/pull/19432))
* Docker Content Trust now requests the server to perform snapshot signing ([#21046](https://github.com/docker/docker/pull/21046))
* Support for using YubiKeys for Content Trust signing has been moved out of experimental ([#21591](https://github.com/docker/docker/pull/21591))
### Volumes
* Output of `docker volume ls` is now sorted by volume name ([#20389](https://github.com/docker/docker/pull/20389))
* Local volumes can now accepts options similar to the unix `mount` tool ([#20262](https://github.com/docker/docker/pull/20262))
- Fix an issue where one letter directory name could not be used as source for volumes ([#21106](https://github.com/docker/docker/pull/21106))
+ `docker run -v` now accepts a new flag `nocopy`. This tells the runtime not to copy the container path content into the volume (which is the default behavior) ([#21223](https://github.com/docker/docker/pull/21223))
## 1.10.3 (2016-03-10)
### Runtime
- Fix Docker client exiting with an "Unrecognized input header" error [#20706](https://github.com/docker/docker/pull/20706)
- Fix Docker exiting if Exec is started with both `AttachStdin` and `Detach` [#20647](https://github.com/docker/docker/pull/20647)
### Distribution
- Fix a crash when pushing multiple images sharing the same layers to the same repository in parallel [#20831](https://github.com/docker/docker/pull/20831)
- Fix a panic when pushing images to a registry which uses a misconfigured token service [#21030](https://github.com/docker/docker/pull/21030)
### Plugin system
- Fix issue preventing volume plugins to start when SELinux is enabled [#20834](https://github.com/docker/docker/pull/20834)
- Prevent Docker from exiting if a volume plugin returns a null response for Get requests [#20682](https://github.com/docker/docker/pull/20682)
- Fix plugin system leaking file descriptors if a plugin has an error [#20680](https://github.com/docker/docker/pull/20680)
### Security
- Fix linux32 emulation to fail during docker build [#20672](https://github.com/docker/docker/pull/20672)
It was due to the `personality` syscall being blocked by the default seccomp profile.
- Fix Oracle XE 10g failing to start in a container [#20981](https://github.com/docker/docker/pull/20981)
It was due to the `ipc` syscall being blocked by the default seccomp profile.
- Fix user namespaces not working on Linux From Scratch [#20685](https://github.com/docker/docker/pull/20685)
- Fix issue preventing daemon to start if userns is enabled and the `subuid` or `subgid` files contain comments [#20725](https://github.com/docker/docker/pull/20725)
## 1.10.2 (2016-02-22)
### Runtime
- Prevent systemd from deleting containers' cgroups when its configuration is reloaded [#20518](https://github.com/docker/docker/pull/20518)
- Fix SELinux issues by disregarding `--read-only` when mounting `/dev/mqueue` [#20333](https://github.com/docker/docker/pull/20333)
- Fix chown permissions used during `docker cp` when userns is used [#20446](https://github.com/docker/docker/pull/20446)
- Fix configuration loading issue with all booleans defaulting to `true` [#20471](https://github.com/docker/docker/pull/20471)
- Fix occasional panic with `docker logs -f` [#20522](https://github.com/docker/docker/pull/20522)
### Distribution
- Keep layer reference if deletion failed to avoid a badly inconsistent state [#20513](https://github.com/docker/docker/pull/20513)
- Handle gracefully a corner case when canceling migration [#20372](https://github.com/docker/docker/pull/20372)
- Fix docker import on compressed data [#20367](https://github.com/docker/docker/pull/20367)
- Fix tar-split files corruption during migration that later cause docker push and docker save to fail [#20458](https://github.com/docker/docker/pull/20458)
### Networking
- Fix daemon crash if embedded DNS is sent garbage [#20510](https://github.com/docker/docker/pull/20510)
### Volumes
- Fix issue with multiple volume references with same name [#20381](https://github.com/docker/docker/pull/20381)
### Security
- Fix potential cache corruption and delegation conflict issues [#20523](https://github.com/docker/docker/pull/20523)
## 1.10.1 (2016-02-11)
### Runtime
* Do not stop daemon on migration hard failure [#20156](https://github.com/docker/docker/pull/20156)
- Fix various issues with migration to content-addressable images [#20058](https://github.com/docker/docker/pull/20058)
- Fix ZFS permission bug with user namespaces [#20045](https://github.com/docker/docker/pull/20045)
- Do not leak /dev/mqueue from the host to all containers, keep it container-specific [#19876](https://github.com/docker/docker/pull/19876) [#20133](https://github.com/docker/docker/pull/20133)
- Fix `docker ps --filter before=...` to not show stopped containers without providing `-a` flag [#20135](https://github.com/docker/docker/pull/20135)
### Security
- Fix issue preventing docker events to work properly with authorization plugin [#20002](https://github.com/docker/docker/pull/20002)
### Distribution
* Add additional verifications and prevent from uploading invalid data to registries [#20164](https://github.com/docker/docker/pull/20164)
- Fix regression preventing uppercase characters in image reference hostname [#20175](https://github.com/docker/docker/pull/20175)
### Networking
- Fix embedded DNS for user-defined networks in the presence of firewalld [#20060](https://github.com/docker/docker/pull/20060)
- Fix issue where removing a network during shutdown left Docker inoperable [#20181](https://github.com/docker/docker/issues/20181) [#20235](https://github.com/docker/docker/issues/20235)
- Embedded DNS is now able to return compressed results [#20181](https://github.com/docker/docker/issues/20181)
- Fix port-mapping issue with `userland-proxy=false` [#20181](https://github.com/docker/docker/issues/20181)
### Logging
- Fix bug where tcp+tls protocol would be rejected [#20109](https://github.com/docker/docker/pull/20109)
### Volumes
- Fix issue whereby older volume drivers would not receive volume options [#19983](https://github.com/docker/docker/pull/19983)
### Misc
- Remove TasksMax from Docker systemd service [#20167](https://github.com/docker/docker/pull/20167)
## 1.10.0 (2016-02-04)
**IMPORTANT**: Docker 1.10 uses a new content-addressable storage for images and layers.
A migration is performed the first time docker is run, and can take a significant amount of time depending on the number of images present.
Refer to this page on the wiki for more information: https://github.com/docker/docker/wiki/Engine-v1.10.0-content-addressability-migration
We also released a cool migration utility that enables you to perform the migration before updating to reduce downtime.
Engine 1.10 migrator can be found on Docker Hub: https://hub.docker.com/r/docker/v1.10-migrator/
### Runtime
+ New `docker update` command that allows updating resource constraints on running containers [#15078](https://github.com/docker/docker/pull/15078)
+ Add `--tmpfs` flag to `docker run` to create a tmpfs mount in a container [#13587](https://github.com/docker/docker/pull/13587)
+ Add `--format` flag to `docker images` command [#17692](https://github.com/docker/docker/pull/17692)
+ Allow to set daemon configuration in a file and hot-reload it with the `SIGHUP` signal [#18587](https://github.com/docker/docker/pull/18587)
+ Updated docker events to include more meta-data and event types [#18888](https://github.com/docker/docker/pull/18888)
This change is backward compatible in the API, but not on the CLI.
+ Add `--blkio-weight-device` flag to `docker run` [#13959](https://github.com/docker/docker/pull/13959)
+ Add `--device-read-bps` and `--device-write-bps` flags to `docker run` [#14466](https://github.com/docker/docker/pull/14466)
+ Add `--device-read-iops` and `--device-write-iops` flags to `docker run` [#15879](https://github.com/docker/docker/pull/15879)
+ Add `--oom-score-adj` flag to `docker run` [#16277](https://github.com/docker/docker/pull/16277)
+ Add `--detach-keys` flag to `attach`, `run`, `start` and `exec` commands to override the default key sequence that detaches from a container [#15666](https://github.com/docker/docker/pull/15666)
+ Add `--shm-size` flag to `run`, `create` and `build` to set the size of `/dev/shm` [#16168](https://github.com/docker/docker/pull/16168)
+ Show the number of running, stopped, and paused containers in `docker info` [#19249](https://github.com/docker/docker/pull/19249)
+ Show the `OSType` and `Architecture` in `docker info` [#17478](https://github.com/docker/docker/pull/17478)
+ Add `--cgroup-parent` flag on `daemon` to set cgroup parent for all containers [#19062](https://github.com/docker/docker/pull/19062)
+ Add `-L` flag to docker cp to follow symlinks [#16613](https://github.com/docker/docker/pull/16613)
+ New `status=dead` filter for `docker ps` [#17908](https://github.com/docker/docker/pull/17908)
* Change `docker run` exit codes to distinguish between runtime and application errors [#14012](https://github.com/docker/docker/pull/14012)
* Enhance `docker events --since` and `--until` to support nanoseconds and timezones [#17495](https://github.com/docker/docker/pull/17495)
* Add `--all`/`-a` flag to `stats` to include both running and stopped containers [#16742](https://github.com/docker/docker/pull/16742)
* Change the default cgroup-driver to `cgroupfs` [#17704](https://github.com/docker/docker/pull/17704)
* Emit a "tag" event when tagging an image with `build -t` [#17115](https://github.com/docker/docker/pull/17115)
* Best effort for linked containers' start order when starting the daemon [#18208](https://github.com/docker/docker/pull/18208)
* Add ability to add multiple tags on `build` [#15780](https://github.com/docker/docker/pull/15780)
* Permit `OPTIONS` request against any url, thus fixing issue with CORS [#19569](https://github.com/docker/docker/pull/19569)
- Fix the `--quiet` flag on `docker build` to actually be quiet [#17428](https://github.com/docker/docker/pull/17428)
- Fix `docker images --filter dangling=false` to now show all non-dangling images [#19326](https://github.com/docker/docker/pull/19326)
- Fix race condition causing autorestart turning off on restart [#17629](https://github.com/docker/docker/pull/17629)
- Recognize GPFS filesystems [#19216](https://github.com/docker/docker/pull/19216)
- Fix obscure bug preventing containers from starting [#19751](https://github.com/docker/docker/pull/19751)
- Forbid `exec` during container restart [#19722](https://github.com/docker/docker/pull/19722)
- devicemapper: Increasing `--storage-opt dm.basesize` will now increase the base device size on daemon restart [#19123](https://github.com/docker/docker/pull/19123)
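A minimal sketch of the daemon configuration reload mentioned above. The option names shown (`debug`, `labels`) are only examples of reloadable settings, and `/etc/docker/daemon.json` is the default location of the configuration file on Linux:

```bash
# Write (or edit) the daemon configuration file
cat > /etc/docker/daemon.json <<'EOF'
{
    "debug": true,
    "labels": ["environment=staging"]
}
EOF

# Ask the running daemon to re-read its configuration without a restart
kill -SIGHUP "$(cat /var/run/docker.pid)"
```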
### Security
+ Add `--userns-remap` flag to `daemon` to support user namespaces (previously in experimental) [#19187](https://github.com/docker/docker/pull/19187)
+ Add support for custom seccomp profiles in `--security-opt` [#17989](https://github.com/docker/docker/pull/17989)
+ Add default seccomp profile [#18780](https://github.com/docker/docker/pull/18780)
+ Add `--authorization-plugin` flag to `daemon` to customize ACLs [#15365](https://github.com/docker/docker/pull/15365)
+ Docker Content Trust now supports the ability to read and write user delegations [#18887](https://github.com/docker/docker/pull/18887)
This is an optional, opt-in feature that requires the explicit use of the Notary command-line utility in order to be enabled.
Enabling delegation support in a specific repository will break the ability of Docker 1.9 and 1.8 to pull from that repository, if content trust is enabled.
* Allow SELinux to run in a container when using the BTRFS storage driver [#16452](https://github.com/docker/docker/pull/16452)
### Distribution
* Use content-addressable storage for images and layers [#17924](https://github.com/docker/docker/pull/17924)
Note that a migration is performed the first time docker is run; it can take a significant amount of time depending on the number of images and containers present.
Images no longer depend on the parent chain but contain a list of layer references.
`docker load`/`docker save` tarballs now also contain content-addressable image configurations.
For more information: https://github.com/docker/docker/wiki/Engine-v1.10.0-content-addressability-migration
* Add support for the new [manifest format ("schema2")](https://github.com/docker/distribution/blob/master/docs/spec/manifest-v2-2.md) [#18785](https://github.com/docker/docker/pull/18785)
* Lots of improvements for push and pull: performance++, retries on failed downloads, cancelling on client disconnect [#18353](https://github.com/docker/docker/pull/18353), [#18418](https://github.com/docker/docker/pull/18418), [#19109](https://github.com/docker/docker/pull/19109), [#18353](https://github.com/docker/docker/pull/18353)
* Limit v1 protocol fallbacks [#18590](https://github.com/docker/docker/pull/18590)
- Fix issue where docker could hang indefinitely waiting for a nonexistent process to pull an image [#19743](https://github.com/docker/docker/pull/19743)
### Networking
+ Use DNS-based discovery instead of `/etc/hosts` [#19198](https://github.com/docker/docker/pull/19198)
+ Support for network-scoped alias using `--net-alias` on `run` and `--alias` on `network connect` [#19242](https://github.com/docker/docker/pull/19242) (example after this list)
+ Add `--ip` and `--ip6` on `run` and `network connect` to support custom IP addresses for a container in a network [#19001](https://github.com/docker/docker/pull/19001)
+ Add `--ipam-opt` to `network create` for passing custom IPAM options [#17316](https://github.com/docker/docker/pull/17316)
+ Add `--internal` flag to `network create` to restrict external access to and from the network [#19276](https://github.com/docker/docker/pull/19276)
+ Add `kv.path` option to `--cluster-store-opt` [#19167](https://github.com/docker/docker/pull/19167)
+ Add `discovery.heartbeat` and `discovery.ttl` options to `--cluster-store-opt` to configure discovery TTL and heartbeat timer [#18204](https://github.com/docker/docker/pull/18204)
+ Add `--format` flag to `network inspect` [#17481](https://github.com/docker/docker/pull/17481)
+ Add `--link` to `network connect` to provide a container-local alias [#19229](https://github.com/docker/docker/pull/19229)
+ Support for Capability exchange with remote IPAM plugins [#18775](https://github.com/docker/docker/pull/18775)
+ Add `--force` to `network disconnect` to force container to be disconnected from network [#19317](https://github.com/docker/docker/pull/19317)
* Support for multi-host networking using built-in overlay driver for all engine supported kernels: 3.10+ [#18775](https://github.com/docker/docker/pull/18775)
* `--link` is now supported on `docker run` for containers in user-defined network [#19229](https://github.com/docker/docker/pull/19229)
* Enhance `docker network rm` to allow removing multiple networks [#17489](https://github.com/docker/docker/pull/17489)
* Include container names in `network inspect` [#17615](https://github.com/docker/docker/pull/17615)
* Include auto-generated subnets for user-defined networks in `network inspect` [#17316](https://github.com/docker/docker/pull/17316)
* Add `--filter` flag to `network ls` to hide predefined networks [#17782](https://github.com/docker/docker/pull/17782)
* Add support for network connect/disconnect to stopped containers [#18906](https://github.com/docker/docker/pull/18906)
* Add network ID to container inspect [#19323](https://github.com/docker/docker/pull/19323)
- Fix MTU issue where Docker would not start with two or more default routes [#18108](https://github.com/docker/docker/pull/18108)
- Fix duplicate IP address for containers [#18106](https://github.com/docker/docker/pull/18106)
- Fix issue that sometimes prevented docker from creating the bridge network [#19338](https://github.com/docker/docker/pull/19338)
- Do not substitute 127.0.0.1 name server when using `--net=host` [#19573](https://github.com/docker/docker/pull/19573)
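A rough sketch of the user-defined network flags listed above (`--internal`, `--net-alias`, and `--alias` on `network connect`); the network, container, and alias names are made up:

```bash
# Create an isolated user-defined network and start a container with a network-scoped alias
docker network create --internal backend
docker run -d --name db --net backend --net-alias datastore redis

# Containers on the same network resolve the alias through the embedded DNS server
docker run --rm --net backend busybox nslookup datastore

# Attach the same container to a second network under an additional alias
docker network create frontend
docker network connect --alias store frontend db
```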
### Logging
+ New logging driver for Splunk [#16488](https://github.com/docker/docker/pull/16488)
+ Add support for syslog over TCP+TLS [#18998](https://github.com/docker/docker/pull/18998)
* Enhance `docker logs --since` and `--until` to support nanoseconds and time [#17495](https://github.com/docker/docker/pull/17495)
* Enhance AWS logs to auto-detect region [#16640](https://github.com/docker/docker/pull/16640)
### Volumes
+ Add support to set the mount propagation mode for a volume [#17034](https://github.com/docker/docker/pull/17034)
* Add `ls` and `inspect` endpoints to volume plugin API [#16534](https://github.com/docker/docker/pull/16534)
Existing plugins need to make use of these new APIs to satisfy users' expectations.
For that, please use the new MIME type `application/vnd.docker.plugins.v1.2+json` [#19549](https://github.com/docker/docker/pull/19549)
- Fix data not being copied to named volumes [#19175](https://github.com/docker/docker/pull/19175)
- Fix issues preventing volume drivers from being containerized [#19500](https://github.com/docker/docker/pull/19500)
- Fix `docker volume ls --dangling=false` to now show all non-dangling volumes [#19671](https://github.com/docker/docker/pull/19671)
- Do not remove named volumes on container removal [#19568](https://github.com/docker/docker/pull/19568)
- Allow external volume drivers to host anonymous volumes [#19190](https://github.com/docker/docker/pull/19190)
### Builder
+ Add support for `**` in `.dockerignore` to wildcard multiple levels of directories [#17090](https://github.com/docker/docker/pull/17090)
- Fix handling of UTF-8 characters in Dockerfiles [#17055](https://github.com/docker/docker/pull/17055)
- Fix permissions problem when reading from STDIN [#19283](https://github.com/docker/docker/pull/19283)
### Client
+ Add support for overriding the API version to use via a `DOCKER_API_VERSION` environment variable [#15964](https://github.com/docker/docker/pull/15964)
- Fix a bug preventing Windows clients from logging in to Docker Hub [#19891](https://github.com/docker/docker/pull/19891)
### Misc
* systemd: Set TasksMax in addition to LimitNPROC in systemd service file [#19391](https://github.com/docker/docker/pull/19391)
### Deprecations
* Remove LXC support. The LXC driver was deprecated in Docker 1.8, and has now been removed [#17700](https://github.com/docker/docker/pull/17700)
* Remove `--exec-driver` daemon flag, because it is no longer in use [#17700](https://github.com/docker/docker/pull/17700)
* Remove old deprecated single-dashed long CLI flags (such as `-rm`; use `--rm` instead) [#17724](https://github.com/docker/docker/pull/17724)
* Deprecate HostConfig at API container start [#17799](https://github.com/docker/docker/pull/17799)
* Deprecate docker packages for newly EOL'd Linux distributions: Fedora 21 and Ubuntu 15.04 (Vivid) [#18794](https://github.com/docker/docker/pull/18794), [#18809](https://github.com/docker/docker/pull/18809)
* Deprecate `-f` flag for docker tag [#18350](https://github.com/docker/docker/pull/18350)
## 1.9.1 (2015-11-21)
### Runtime
- Do not prevent daemon from booting if images could not be restored (#17695)
- Force IPC mount to unmount on daemon shutdown/init (#17539)
- Turn IPC unmount errors into warnings (#17554)
- Fix `docker stats` performance regression (#17638)
- Clarify cryptic error message upon `docker logs` if `--log-driver=none` (#17767)
- Fix rare panics (#17639, #17634, #17703)
- Fix opq whiteout problems for files with dot prefix (#17819)
- devicemapper: try defaulting to xfs instead of ext4 for performance reasons (#17903, #17918)
- devicemapper: fix displayed fs in docker info (#17974)
- selinux: only relabel if user requested so with the `z` option (#17450, #17834)
- Do not make network calls when normalizing names (#18014)
### Client
- Fix `docker login` on Windows (#17738)
- Fix bug with `docker inspect` output when not connected to daemon (#17715)
- Fix `docker inspect -f {{.HostConfig.Dns}} somecontainer` (#17680)
### Builder
- Fix regression with symlink behavior in ADD/COPY (#17710)
### Networking
- Allow passing a network ID as an argument for `--net` (#17558)
- Fix connect to host and prevent disconnect from host for `host` network (#17476)
- Fix `--fixed-cidr` issue when gateway ip falls in ip-range and ip-range is
not the first block in the network (#17853)
- Restore deterministic `IPv6` generation from `MAC` address on default `bridge` network (#17890)
- Allow port-mapping only for endpoints created on docker run (#17858)
- Fixed an endpoint delete issue with a possible stale sbox (#18102)
### Distribution
- Correct parent chain in v2 push when v1Compatibility files on the disk are inconsistent (#18047)
## 1.9.0 (2015-11-03)
### Runtime
+ `docker stats` now returns block IO metrics (#15005)
+ `docker stats` now details network stats per interface (#15786)
+ Add `ancestor=<image>` filter to `docker ps --filter` flag to filter
containers based on their ancestor images (#14570)
+ Add `label=<somelabel>` filter to `docker ps --filter` to filter containers
based on label (#16530)
+ Add `--kernel-memory` flag to `docker run` (#14006)
+ Add `--message` flag to `docker import` allowing an optional
message to be specified (#15711)
+ Add `--privileged` flag to `docker exec` (#14113)
+ Add `--stop-signal` flag to `docker run` allowing the signal used to stop the
container process to be replaced (#15307)
+ Add a new `unless-stopped` restart policy (#15348)
+ Inspecting an image now returns tags (#13185)
+ Add container size information to `docker inspect` (#15796)
+ Add `RepoTags` and `RepoDigests` field to `/images/{name:.*}/json` (#17275)
- Remove the deprecated `/container/ps` endpoint from the API (#15972)
- Send and document correct HTTP codes for `/exec/<name>/start` (#16250)
- Share shm and mqueue between containers sharing IPC namespace (#15862)
- Event stream now shows OOM status when `--oom-kill-disable` is set (#16235)
- Ensure special network files (/etc/hosts etc.) are read-only if bind-mounted
with `ro` option (#14965)
- Improve `rmi` performance (#16890)
- Do not update /etc/hosts for the default bridge network, except for links (#17325)
- Fix conflict with duplicate container names (#17389)
- Fix an issue with incorrect template execution in `docker inspect` (#17284)
- DEPRECATE `-c` short flag variant for `--cpu-shares` in docker run (#16271)
### Client
+ Allow `docker import` to import from local files (#11907)
### Builder
+ Add a `STOPSIGNAL` Dockerfile instruction that allows setting a different
stop-signal for the container process (#15307)
+ Add an `ARG` Dockerfile instruction and a `--build-arg` flag to `docker build`
that allow adding build-time environment variables (#15182) (see the sketch after this list)
- Improve cache miss performance (#16890)
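A minimal sketch combining the new `ARG`/`--build-arg` flag and the `STOPSIGNAL` instruction; the image name and variable are invented for illustration:

```bash
cat > Dockerfile <<'EOF'
FROM busybox
# Build-time variable with a default value; it is not persisted in the final image
ARG APP_VERSION=0.1
RUN echo "building version ${APP_VERSION}"
# Ask `docker stop` to send SIGINT instead of the default SIGTERM
STOPSIGNAL SIGINT
CMD ["sleep", "3600"]
EOF

# Override the build-time variable on the command line
docker build --build-arg APP_VERSION=0.2 -t arg-demo .
```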
### Storage
- devicemapper: Implement deferred deletion capability (#16381)
### Networking
+ `docker network` exits experimental and is part of standard release (#16645)
+ New network top-level concept, with associated subcommands and API (#16645)
WARNING: the API is different from the experimental API
+ Support for multiple isolated/micro-segmented networks (#16645)
+ Built-in multihost networking using VXLAN based overlay driver (#14071)
+ Support for third-party network plugins (#13424)
+ Ability to dynamically connect containers to multiple networks (#16645)
+ Support for user-defined IP address management via pluggable IPAM drivers (#16910)
+ Add daemon flags `--cluster-store` and `--cluster-advertise` for built-in nodes discovery (#16229)
+ Add `--cluster-store-opt` for setting up TLS settings (#16644)
+ Add `--dns-opt` to the daemon (#16031)
- DEPRECATE following container `NetworkSettings` fields in API v1.21: `EndpointID`, `Gateway`,
`GlobalIPv6Address`, `GlobalIPv6PrefixLen`, `IPAddress`, `IPPrefixLen`, `IPv6Gateway` and `MacAddress`.
Those are now specific to the `bridge` network. Use `NetworkSettings.Networks` to inspect
the networking settings of a container per network.
### Volumes
+ New top-level `volume` subcommand and API (#14242)
- Move API volume driver settings to host-specific config (#15798)
- Print an error message if volume name is not unique (#16009)
- Ensure volumes created from Dockerfiles always use the local volume driver
(#15507)
- DEPRECATE auto-creating missing host paths for bind mounts (#16349)
### Logging
+ Add `awslogs` logging driver for Amazon CloudWatch (#15495)
+ Add generic `tag` log option to allow customizing container/image
information passed to driver (e.g. show container names) (#15384)
- Implement the `docker logs` endpoint for the journald driver (#13707)
- DEPRECATE driver-specific log tags (e.g. `syslog-tag`, etc.) (#15384)
### Distribution
+ `docker search` now works with partial names (#16509)
- Push optimization: avoid buffering to file (#15493)
- The daemon will display progress for images that were already being pulled
by another client (#15489)
- Only permissions required for the current action being performed are requested (#)
+ Renaming trust keys (and respective environment variables) from `offline` to
`root` and `tagging` to `repository` (#16894)
- DEPRECATE trust key environment variables
`DOCKER_CONTENT_TRUST_OFFLINE_PASSPHRASE` and
`DOCKER_CONTENT_TRUST_TAGGING_PASSPHRASE` (#16894)
### Security
+ Add SELinux profiles to the rpm package (#15832)
- Fix various issues with AppArmor profiles provided in the deb package
(#14609)
- Add AppArmor policy that prevents writing to /proc (#15571)
## 1.8.3 (2015-10-12)
### Distribution
- Fix layer IDs leading to local graph poisoning (CVE-2014-8178)
- Fix manifest validation and parsing logic errors that allowed a pull-by-digest validation bypass (CVE-2014-8179)
+ Add `--disable-legacy-registry` to prevent a daemon from using a v1 registry
## 1.8.2 (2015-09-10)
### Distribution
- Fix rare edge case of handling GNU LongLink and LongName entries.
- Fix ^C on docker pull.
- Fix docker pull issues on client disconnection.
- Fix issue that caused the daemon to panic when loggers weren't configured properly.
- Fix goroutine leak pulling images from registry V2.
### Runtime
- Fix a bug mounting cgroups for docker daemons running inside docker containers.
- Initialize log configuration properly.
### Client
- Handle `-q` flag in `docker ps` properly when there is a default format.
### Networking
- Fix several corner cases with netlink.
### Contrib
- Fix several issues with bash completion.
## 1.8.1 (2015-08-12)
### Distribution
* Fix a bug where pushing multiple tags would result in invalid images
## 1.8.0 (2015-08-11)
### Distribution
+ Trusted pull, push and build, disabled by default
* Make tar layers deterministic between registries
* Don't allow deleting the image of running containers
* Check if a tag name to load is a valid digest
* Allow one character repository names
* Add a more accurate error description for invalid tag name
* Make build cache ignore mtime
### Cli
+ Add support for DOCKER_CONFIG/--config to specify config file dir
+ Add --type flag for docker inspect command
+ Add formatting options to `docker ps` with `--format`
+ Replace `docker -d` with new subcommand `docker daemon`
* Zsh completion updates and improvements
* Add some missing events to bash completion
* Support daemon urls with base paths in `docker -H`
* Validate status= filter to docker ps
* Display when a container is in --net=host in docker ps
* Extend docker inspect to export image metadata related to graph driver
* Restore --default-gateway{,-v6} daemon options
* Add missing unpublished ports in docker ps
* Allow duration strings in `docker events` as --since/--until
* Expose more mounts information in `docker inspect`
### Runtime
+ Add new Fluentd logging driver
+ Allow `docker import` to load from local files
+ Add logging driver for GELF via UDP
+ Allow copying files from the host to containers with `docker cp`
+ Promote volume drivers from experimental to master
+ Add rollover options to json-file log driver, and --log-driver-opts flag
+ Add memory swappiness tuning options
* Remove cgroup read-only flag when privileged
* Make /proc, /sys, & /dev readonly for readonly containers
* Add cgroup bind mount by default
* Overlay: Export metadata for container and image in `docker inspect`
* Devicemapper: external device activation
* Devicemapper: Compare uuid of base device on startup
* Remove RC4 from the list of registry cipher suites
* Add syslog-facility option
* LXC execdriver compatibility with recent LXC versions
* Mark LXC execdriver as deprecated (to be removed with the migration to runc)
### Plugins
* Separate plugin sockets and specs locations
* Allow TLS connections to plugins
### Bug fixes
- Add missing 'Names' field to /containers/json API output
- Make `docker rmi` of dangling images safe while pulling
- Devicemapper: Change default basesize to 100G
- Go Scheduler issue with sync.Mutex and gcc
- Fix issue where Search API endpoint would panic due to empty AuthConfig
- Set image canonical names correctly
- Check dockerinit only if lxc driver is used
- Fix ulimit usage of nproc
- Always attach STDIN if -i,--interactive is specified
- Show error messages when saving container state fails
- Fix incorrect assumption that `--bridge=none` disables networking
- Check for invalid port specifications in host configuration
- Fix endpoint leave failure for --net=host mode
- Fix goroutine leak in the stats API if the container is not running
- Check for apparmor file before reading it
- Fix DOCKER_TLS_VERIFY being ignored
- Set umask to the default on startup
- Correct the message of pause and unpause a non-running container
- Adjust disallowed CpuShares in container creation
- ZFS: correctly apply selinux context
- Display empty string instead of <nil> when IP opt is nil
- `docker kill` returns error when container is not running
- Fix COPY/ADD quoted/json form
- Fix goroutine leak on logs -f with no output
- Remove panic in nat package on invalid hostport
- Fix container linking in Fedora 22
- Fix error caused using default gateways outside of the allocated range
- Format times in inspect command with a template as RFC3339Nano
- Make the registry client accept 2xx and 3xx HTTP status responses as successful
- Fix race issue that caused the daemon to crash when certain layer downloads failed in a specific order.
- Fix error when the docker ps format was not valid.
- Remove redundant ip forward check.
- Fix issue trying to push images to repository mirrors.
- Fix error cleaning up network entrypoints when there is an initialization issue.
## 1.7.1 (2015-07-14)
#### Runtime
- Fix default user spawning exec process with `docker exec`
- Make `--bridge=none` not configure the network bridge
- Publish networking stats properly
- Fix implicit devicemapper selection with static binaries
- Fix socket connections that hung intermittently
- Fix bridge interface creation on CentOS/RHEL 6.6
- Fix local dns lookups added to resolv.conf
- Fix copy command mounting volumes
- Fix read/write privileges in volumes mounted with --volumes-from
#### Remote API
- Fix unmarshalling of Command and Entrypoint
- Set limit for minimum client version supported
- Validate port specification
- Return proper errors when attach/reattach fail
#### Distribution
- Fix pulling private images
- Fix fallback between registry V2 and V1
## 1.7.0 (2015-06-16)
#### Runtime
+ Experimental feature: support for out-of-process volume plugins
* The userland proxy can be disabled in favor of hairpin NAT using the daemon's `--userland-proxy=false` flag
* The `exec` command supports the `-u|--user` flag to specify the new process owner
+ Default gateway for containers can be specified daemon-wide using the `--default-gateway` and `--default-gateway-v6` flags
+ The CPU CFS (Completely Fair Scheduler) quota can be set in `docker run` using `--cpu-quota`
+ Container block IO can be controlled in `docker run` using `--blkio-weight`
+ ZFS support
+ The `docker logs` command supports a `--since` argument
+ UTS namespace can be shared with the host with `docker run --uts=host`
#### Quality
* Networking stack was entirely rewritten as part of the libnetwork effort
* Engine internals refactoring
* Volumes code was entirely rewritten to support the plugins effort
+ Sending SIGUSR1 to a daemon will dump all goroutines stacks without exiting
#### Build
+ Support `${variable:-value}` and `${variable:+value}` syntax for environment variables (see the example after this list)
+ Support resource management flags `--cgroup-parent`, `--cpu-period`, `--cpu-quota`, `--cpuset-cpus`, `--cpuset-mems`
+ git context changes with branches and directories
* The .dockerignore file supports exclusion rules
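A small sketch of that substitution syntax in a Dockerfile; the directory and image name are arbitrary:

```bash
cat > Dockerfile <<'EOF'
FROM busybox
# ${APP_DIR:-/opt/app} expands to /opt/app when APP_DIR is unset or empty;
# ${APP_DIR:+value} would expand to "value" only when APP_DIR is set
ENV APP_DIR=${APP_DIR:-/opt/app}
RUN mkdir -p ${APP_DIR} && echo "created ${APP_DIR}"
EOF
docker build -t default-demo .
```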
#### Distribution
+ Client support for v2 mirroring support for the official registry
#### Bugfixes
* Firewalld is now supported and will automatically be used when available
* mounting --device recursively
## 1.6.2 (2015-05-13)
#### Runtime
- Revert change prohibiting mounting into /sys
## 1.6.1 (2015-05-07)
#### Security
- Fix read/write /proc paths (CVE-2015-3630)
- Prohibit VOLUME /proc and VOLUME / (CVE-2015-3631)
- Fix opening of file-descriptor 1 (CVE-2015-3627)
- Fix symlink traversal on container respawn allowing local privilege escalation (CVE-2015-3629)
- Prohibit mount of /sys
#### Runtime
- Update AppArmor policy to not allow mounts
## 1.6.0 (2015-04-07)
#### Builder
+ Building images from an image ID
+ Build containers with resource constraints, ie `docker build --cpu-shares=100 --memory=1024m...`
+ `commit --change` to apply specified Dockerfile instructions while committing the image
+ `import --change` to apply specified Dockerfile instructions while importing the image
+ Builds no longer continue in the background when canceled with CTRL-C
#### Client
+ Windows Support
#### Runtime
+ Container and image Labels
+ `--cgroup-parent` for specifying a parent cgroup to place container cgroup within
+ Logging drivers, `json-file`, `syslog`, or `none`
+ Pulling images by ID
+ `--ulimit` to set the ulimit on a container
+ `--default-ulimit` option on the daemon which applies to all created containers (and overwritten by `--ulimit` on run)
## 1.5.0 (2015-02-10)
#### Builder
+ Dockerfile to use for a given `docker build` can be specified with the `-f` flag
+ Docker daemon has full IPv6 support
+ The `docker run` command can take the `--pid=host` flag to use the host PID namespace, which makes it possible for example to debug host processes using containerized debugging tools
+ The `docker run` command can take the `--read-only` flag to make the containers root filesystem mounted as readonly, which can be used in combination with volumes to force a containers processes to only write to locations that will be persisted
+ Container total memory usage can be limited for `docker run` using the `--memory-swap` flag
* Major stability improvements for devicemapper storage driver
* Better integration with host system: containers will reflect changes to the host's `/etc/resolv.conf` file when restarted
* Better integration with host system: per-container iptable rules are moved to the DOCKER chain
#### Notable Features since 1.3.0
+ Set key=value labels to the daemon (displayed in `docker info`), applied with
new `-label` daemon flag
+ Add support for `ENV` in Dockerfile of the form:
`ENV name=value name2=value2...`
+ New Overlayfs Storage Driver
+ `docker info` now returns an `ID` and `Name` field
#### Hack
* Clean up "go test" output from "make test" to be much more readable/scannable.
* Exclude more "definitely not unit tested Go source code" directories from hack/make/test.
+ Generate md5 and sha256 hashes when building, and upload them via hack/release.sh.
- Include contributed completions in Ubuntu PPA.
+ Add cli integration tests.
- Fix broken images API for version less than 1.7
- Use the right encoding for all API endpoints which return JSON
- Move remote api client to api/
- Queue calls to the API using generic socket wait
#### Runtime
* The ADD instruction now supports caching, which avoids unnecessarily re-uploading the same source content again and again when it hasn't changed
* The new ONBUILD instruction adds to your image a “trigger” instruction to be executed at a later time, when the image is used as the base for another build
* Docker now ships with an experimental storage driver which uses the BTRFS filesystem for copy-on-write
* Docker is officially supported on Mac OS X
* The Docker daemon supports systemd socket activation
## 0.7.6 (2014-01-14)
- Do not add hostname when networking is disabled
* Return most recent image from the cache by date
- Return all errors from docker wait
* Add Content-Type Header "application/json" to GET /version and /info responses
#### Other
- Fix ADD caching issue with . prefixed path
- Fix docker build on devicemapper by reverting sparse file tar option
- Fix issue with file caching and prevent wrong cache hit
* Use same error handling while unmarshalling CMD and ENTRYPOINT
#### Documentation
* Simplify and streamline Amazon Quickstart
* Install instructions use unprefixed Fedora image
* Update instructions for mtu flag for Docker on GCE
+ Add Ubuntu Saucy to installation
- Fix for wrong version warning on master instead of latest
#### Runtime
- Only get the image's rootfs when we need to calculate the image size
- Correctly handle unmapping UDP ports
* Make CopyFileWithTar use a pipe instead of a buffer to save memory on docker build
- Fix login message to say pull instead of push
- Fix "docker load" help by removing "SOURCE" prompt and mentioning STDIN
* Improve unit tests
* The test suite now runs all tests even if one fails
* Refactor C in Go (Devmapper)
- Fix OS X compilation
## 0.7.0 (2013-11-25)
#### Runtime
* Improve stability, fixes some race conditions
* Skip mounted volumes when deleting the volumes of a container.
* Fix layer size computation: handle hard links correctly
* Use the work Path for docker cp CONTAINER:PATH
+ Add -rm to docker run for removing a container on exit
- Remove error messages which are not actually errors
- Fix `docker rm` with volumes
- Fix some error cases where an HTTP body might not be closed
- Fix panic with wrong dockercfg file
- Fix the attach behavior with -i
* Record termination time in state.
+ Containers can expose public UDP ports (eg, '-p 123/udp')
+ Optionally specify an exact public port (eg. '-p 80:4500')
* 'docker login' supports additional options
- Don't save a container's hostname when committing an image.
#### Registry

# Contributing to Docker
Want to hack on Docker? Awesome! We have a contributor's guide that explains
[setting up a Docker development environment and the contribution
process](https://docs.docker.com/opensource/project/who-written-for/).
![Contributors guide](docs/static_files/contributors.png)
This page contains information about reporting issues as well as some tips and
guidelines useful to experienced open source contributors. Finally, make sure
you read our [community guidelines](#docker-community-guidelines) before you
start participating.
## Topics
* [Reporting Security Issues](#reporting-security-issues)
* [Design and Cleanup Proposals](#design-and-cleanup-proposals)
* [Reporting Issues](#reporting-issues)
* [Build Environment](#build-environment)
* [Quick Contribution Tips and Guidelines](#quick-contribution-tips-and-guidelines)
* [Community Guidelines](#docker-community-guidelines)
## Reporting security issues
The Docker maintainers take security seriously. If you discover a security
issue, please bring it to their attention right away!
Please **DO NOT** file a public issue, instead send your report privately to
[security@docker.com](mailto:security@docker.com).
Security reports are greatly appreciated and we will publicly thank you for it.
We also like to send gifts&mdash;if you're into Docker schwag, make sure to let
us know. We currently do not offer a paid security bounty program, but are not
ruling it out in the future.
## Design and Cleanup Proposals
When considering a design proposal, we are looking for:
* A description of the problem this design proposal solves
* A pull request, not an issue, that modifies the documentation describing
the feature you are proposing, adding new documentation if necessary.
* Please prefix your issue with `Proposal:` in the title
* Please review [the existing Proposals](https://github.com/docker/docker/pulls?q=is%3Aopen+is%3Apr+label%3AProposal)
before reporting a new one. You can always pair with someone if you both
have the same idea.
When considering a cleanup task, we are looking for:
* A description of the refactors made
* Please note any logic changes if necessary
* A pull request with the code
* Please prefix your PR's title with `Cleanup:` so we can quickly address it.
* Your pull request must remain up to date with master, so rebase as necessary.
## Reporting Issues
A great way to contribute to the project is to send a detailed report when you
encounter an issue. We always appreciate a well-written, thorough bug report,
and will thank you for it!
Check that [our issue database](https://github.com/docker/docker/issues)
doesn't already include that problem or suggestion before submitting an issue.
If you find a match, you can use the "subscribe" button to get notified on
updates. Do *not* leave random "+1" or "I have this too" comments, as they
only clutter the discussion, and don't help resolving it. However, if you
have ways to reproduce the issue or have additional information that may help
resolving the issue, please leave a comment.
When reporting issues, always include:
* The output of `uname -a`.
* The output of `docker version`.
* The output of `docker -D info`.
Also include the steps required to reproduce the problem if possible and
applicable. This information will help us review and fix your issue faster.
When sending lengthy log-files, consider posting them as a gist (https://gist.github.com).
Don't forget to remove sensitive data from your logfiles before posting (you can
replace those parts with "REDACTED").
**Issue Report Template**:
```
Description of problem:
Additional info:
```
## Build Environment
For instructions on setting up your development environment, please
see our dedicated [dev environment setup
docs](http://docs.docker.com/contributing/devenvironment/).
## Quick contribution tips and guidelines
This section gives the experienced contributor some tips and guidelines.
### Pull requests are always welcome
Not sure if that typo is worth a pull request? Found a bug and know how to fix
it? Do it! We will appreciate it. Any significant improvement should be
documented as [a GitHub issue](https://github.com/docker/docker/issues) before
anybody starts working on it.
We are always thrilled to receive pull requests. We do our best to process them
quickly. If your pull request is not accepted on the first try,
don't get discouraged! Our contributor's guide explains [the review process we
use for simple changes](https://docs.docker.com/opensource/workflow/make-a-contribution/).
### Design and cleanup proposals
You can propose new designs for existing Docker features. You can also design
entirely new features. We really appreciate contributors who want to refactor or
otherwise cleanup our project. For information on making these types of
contributions, see [the advanced contribution
section](https://docs.docker.com/opensource/workflow/advanced-contributing/) in
the contributors guide.
### Discuss your design on the mailing list
We try hard to keep Docker lean and focused. Docker can't do everything for
everybody. This means that we might decide against incorporating a new feature.
However, there might be a way to implement that feature *on top of* Docker.
We recommend discussing your plans [on the mailing
list](https://groups.google.com/forum/?fromgroups#!forum/docker-dev)
before starting to code - especially for more ambitious contributions.
This gives other contributors a chance to point you in the right
direction, give feedback on your design, and maybe point out if someone
else is working on the same thing.
### Talking to other Docker users and contributors
<table class="tg">
<col width="45%">
<col width="65%">
<tr>
<td>Internet&nbsp;Relay&nbsp;Chat&nbsp;(IRC)</td>
<td>
<p>
IRC is a direct line to our most knowledgeable Docker users; we have
both the <code>#docker</code> and <code>#docker-dev</code> group on
<strong>irc.freenode.net</strong>.
IRC is a rich chat protocol but it can overwhelm new users. You can search
<a href="https://botbot.me/freenode/docker/#" target="_blank">our chat archives</a>.
</p>
Read our <a href="https://docs.docker.com/opensource/get-help/#irc-quickstart" target="_blank">IRC quickstart guide</a> for an easy way to get started.
</td>
</tr>
<tr>
<td>Google Groups</td>
<td>
There are two groups.
<a href="https://groups.google.com/forum/#!forum/docker-user" target="_blank">Docker-user</a>
is for people using Docker containers.
The <a href="https://groups.google.com/forum/#!forum/docker-dev" target="_blank">docker-dev</a>
group is for contributors and other people contributing to the Docker project.
You can join them without a google account by sending an email to
<a href="mailto:docker-dev+subscribe@googlegroups.com">docker-dev+subscribe@googlegroups.com</a>.
After receiving the join-request message, you can simply reply to that to confirm the subscription.
</td>
</tr>
<tr>
<td>Twitter</td>
<td>
You can follow <a href="https://twitter.com/docker/" target="_blank">Docker's Twitter feed</a>
to get updates on our products. You can also tweet us questions or just
share blogs or stories.
</td>
</tr>
<tr>
<td>Stack Overflow</td>
<td>
Stack Overflow has over 17000 Docker questions listed. We regularly
monitor <a href="https://stackoverflow.com/search?tab=newest&q=docker" target="_blank">Docker questions</a>
and so do many other knowledgeable Docker users.
</td>
</tr>
</table>
### Conventions
Fork the repository and make changes on your fork in a feature branch:
- If it's a bug fix branch, name it XXXX-something where XXXX is the number of the
issue.
- If it's a feature branch, create an enhancement issue to announce your
intentions, and name it XXXX-something where XXXX is the number of the issue.
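For example, a branch for a bug fix tracked in a made-up issue 12345 might be created like this:

```bash
git checkout -b 12345-fix-volume-permissions
```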
Submit unit tests for your changes. Go has a great test framework built in; use
it! Take a look at existing tests for inspiration. [Run the full test
suite](https://docs.docker.com/opensource/project/test-and-docs/) on your branch before
submitting a pull request.
Update the documentation when creating or modifying features. Test your
documentation changes for clarity, concision, and correctness, as well as a
clean documentation build. See our contributors guide for [our style
guide](https://docs.docker.com/opensource/doc-style) and instructions on [building
the documentation](https://docs.docker.com/opensource/project/test-and-docs/#build-and-test-the-documentation).
Write clean code. Universally formatted code promotes ease of writing, reading,
and maintenance. Always run `gofmt -s -w file.go` on each changed file before
committing your changes. Most editors have plug-ins that do this automatically.
Pull request descriptions should be as clear as possible and include a reference
to all the issues that they address.
Commit messages must start with a capitalized and short summary (max. 50 chars)
written in the imperative, followed by an optional, more detailed explanatory
text which is separated from the summary by an empty line.
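A sketch of that format; the subject, body, and issue number are invented for illustration:

```
Fix volume permissions on container restart

Containers restarted by the daemon lost group write access to named
volumes. Reset ownership when the volume is re-mounted.

Closes #12345

Signed-off-by: Joe Smith <joe.smith@email.com>
```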
Code review comments may be added to your pull request. Discuss, then make the
suggested modifications and push additional commits to your feature branch. Post
a comment after pushing. New commits show up in the pull request automatically,
but the reviewers are notified only when you comment.
Pull requests must be cleanly rebased on top of master without multiple branches
mixed into the PR.
**Git tip**: If your PR no longer merges cleanly, use `rebase master` in your
feature branch to update your pull request rather than `merge master`.
Before you make a pull request, squash your commits into logical units of work
using `git rebase -i` and `git push -f`. A logical unit of work is a consistent
set of patches that should be reviewed together: for example, upgrading the
version of a vendored dependency and taking advantage of its now available new
feature constitute two separate units of work. Implementing a new function and
calling it in another file constitute a single logical unit of work. The very
high majority of submissions should have a single commit, so if in doubt: squash
down to one.
After every commit, [make sure the test suite passes]
(https://docs.docker.com/opensource/project/test-and-docs/). Include documentation
changes in the same pull request so that a revert would remove all traces of
the feature or fix.
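A rough sketch of that squash-and-update flow on a feature branch (the branch name is the made-up one from the earlier example):

```bash
# Replay the branch on top of the latest master, squashing commits as you go
git fetch origin
git rebase -i origin/master

# Update the already-opened pull request with the rewritten history
git push -f origin 12345-fix-volume-permissions
```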
Commits that fix or close an issue should include a reference like
`Closes #XXXX` or `Fixes #XXXX`, which will automatically close the
issue when merged.
Please do not add yourself to the `AUTHORS` file, as it is regenerated regularly
from the Git history.
Please see the [Coding Style](#coding-style) for further guidelines.
### Merge approval
Docker maintainers use LGTM (Looks Good To Me) in comments on the code review to
indicate acceptance.
A change requires LGTMs from an absolute majority of the maintainers of each
component affected. For example, if a change affects `docs/` and `registry/`, it
needs an absolute majority from the maintainers of `docs/` AND, separately, an
absolute majority of the maintainers of `registry/`.
For more details, see the [MAINTAINERS](MAINTAINERS) page.
### Sign your work
The sign-off is a simple line at the end of the explanation for the patch. Your
signature certifies that you wrote the patch or otherwise have the right to pass
it on as an open-source patch. The rules are pretty simple: if you can certify
the below (from [developercertificate.org](http://developercertificate.org/)):
```
Developer Certificate of Origin
```

Then you just add a line to every git commit message:
Signed-off-by: Joe Smith <joe.smith@email.com>
Use your real name (sorry, no pseudonyms or anonymous contributions.)
If you set your `user.name` and `user.email` git configs, you can sign your
commit automatically with `git commit -s`.
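For example, with the name and email configured once, `-s` adds the sign-off to each commit:

```bash
git config user.name "Joe Smith"
git config user.email joe.smith@email.com
git commit -s -m "Fix volume permissions on container restart"
```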
### How can I become a maintainer?
The procedures for adding new maintainers are explained in the
global [MAINTAINERS](https://github.com/docker/opensource/blob/master/MAINTAINERS)
file in the [https://github.com/docker/opensource/](https://github.com/docker/opensource/)
repository.
* Step 1: Learn the component inside out
* Step 2: Make yourself useful by contributing code, bug fixes, support etc.
* Step 3: Volunteer on the IRC channel (#docker at Freenode)
* Step 4: Propose yourself at a scheduled docker meeting in #docker-dev
Don't forget: being a maintainer is a time investment. Make sure you
will have time to make yourself available. You don't have to be a
maintainer to make a difference on the project!
## Docker community guidelines
We want to keep the Docker community awesome, growing and collaborative. We need
your help to keep it that way. To help with this we've come up with some general
guidelines for the community as a whole:
* Be nice: Be courteous, respectful and polite to fellow community members:
no regional, racial, gender, or other abuse will be tolerated. We like
nice people way better than mean ones!
* Encourage diversity and participation: Make everyone in our community
feel welcome, regardless of their background and the extent of their
contributions, and do everything possible to encourage participation in
our community.
* Keep it legal: Basically, don't get us in trouble. Share only content that
you own, do not share private or sensitive information, and don't break
the law.
* Stay on topic: Make sure that you are posting to the correct channel and
avoid off-topic discussions. Remember when you update an issue or respond
to an email you are potentially sending to a large number of people. Please
consider this before you update. Also remember that nobody likes spam.
* Don't send email to the maintainers: There's no need to send email to the
maintainers to ask them to investigate an issue or to take a look at a
pull request. Instead of sending an email, GitHub mentions should be
used to ping maintainers to review a pull request, a proposal or an
issue.
### IRC Meetings
There are two monthly meetings taking place on #docker-dev IRC to accommodate all timezones.
Anybody can ask for a topic to be discussed prior to the meeting.
If you feel the conversation is going off-topic, feel free to point it out.
For the exact dates and times, have a look at [the irc-minutes repo](https://github.com/docker/irc-minutes).
They also contain all the notes from previous meetings.
### Guideline violations — 3 strikes method
The point of this section is not to find opportunities to punish people, but we
do need a fair way to deal with people who are making our community suck.
* Obvious spammers are banned on first occurrence. If we don't do this, we'll
have spam all over the place.
* Violations are forgiven after 6 months of good behavior, and we won't hold a
grudge.
* People who commit minor infractions will get some education, rather than
hammering them in the 3 strikes process.
* The rules apply equally to everyone in the community, no matter how much
you've contributed.
* Extreme violations of a threatening, abusive, destructive or illegal nature
will be addressed immediately and are not subject to 3 strikes or forgiveness.
* Contact abuse@docker.com to report abuse or appeal violations. In the case of
appeals, we know that mistakes happen, and we'll work with you to come up with a
fair solution if there has been a misunderstanding.
## Coding Style
Unless explicitly stated, we follow all coding guidelines from the Go
community. While some of these standards may seem arbitrary, they somehow seem
to result in a solid, consistent codebase.
It is possible that the code base does not currently comply with these
guidelines. We are not looking for a massive PR that fixes this, since that
goes against the spirit of the guidelines. All new contributions should make a
best effort to clean up and make the code base better than they left it.
Obviously, apply your best judgement. Remember, the goal here is to make the
code base easier for humans to navigate and understand. Always keep that in
mind when nudging others to comply.
The rules:
1. All code should be formatted with `gofmt -s`.
2. All code should pass the default levels of
[`golint`](https://github.com/golang/lint).
3. All code should follow the guidelines covered in [Effective
Go](http://golang.org/doc/effective_go.html) and [Go Code Review
Comments](https://github.com/golang/go/wiki/CodeReviewComments).
4. Comment the code. Tell us the why, the history and the context.
5. Document _all_ declarations and methods, even private ones. Declare
expectations, caveats and anything else that may be important. If a type
gets exported, having the comments already there will ensure it's ready.
6. Variable name length should be proportional to its context and no longer.
`noCommaALongVariableNameLikeThisIsNotMoreClearWhenASimpleCommentWouldDo`.
In practice, short methods will have short variable names and globals will
have longer names.
7. No underscores in package names. If you need a compound name, step back,
and re-examine why you need a compound name. If you still think you need a
compound name, lose the underscore.
8. No utils or helpers packages. If a function is not general enough to
warrant its own package, it has not been written generally enough to be a
part of a util package. Just leave it unexported and well-documented.
9. All tests should run with `go test` and outside tooling should not be
required. No, we don't need another unit testing framework. Assertion
packages are acceptable if they provide _real_ incremental value.
10. Even though we call these "rules" above, they are actually just
guidelines. Since you've read all the rules, you now know that.
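One way to check the first two rules locally before pushing, assuming `golint` is installed:

```bash
# List any files that are not gofmt'd, then lint the whole tree
gofmt -s -l .
golint ./...
```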
If you are having trouble getting into the mood of idiomatic Go, we recommend
reading through [Effective Go](http://golang.org/doc/effective_go.html). The
[Go Blog](http://blog.golang.org/) is also a great resource. Drinking the
kool-aid is a lot easier than going thirsty.
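As a quick, hypothetical illustration of the rules above (the package and
identifier names here are made up for the example, not taken from the Docker
code base), a file that follows them might look like this:

```go
// Package counter is a tiny example package used only to illustrate the
// documentation and naming guidelines above.
package counter

import "sync"

// Counter accumulates a running total. The zero value is ready to use.
type Counter struct {
	mu    sync.Mutex // guards total
	total int
}

// Add increases the counter by n and returns the new total. Negative
// values of n are allowed and decrease the total.
func (c *Counter) Add(n int) int {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.total += n
	return c.total
}
```

Note the short receiver and parameter names in a short method, the comments on
both exported and unexported declarations, and the absence of a `util`-style
package; running `gofmt -s` and `golint` over a file like this should produce
no changes or warnings.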

View File

@@ -19,180 +19,122 @@
# -e GPG_PASSPHRASE=gloubiboulga \
# docker hack/release.sh
#
# Note: AppArmor used to mess with privileged mode, but this is no longer
# the case. Therefore, you don't have to disable it anymore.
#
FROM debian:jessie
# add zfs ppa
RUN apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys E871F18B51E0147C77796AC81196BA81F6B0FC61 \
|| apt-key adv --keyserver hkp://pgp.mit.edu:80 --recv-keys E871F18B51E0147C77796AC81196BA81F6B0FC61
RUN echo deb http://ppa.launchpad.net/zfs-native/stable/ubuntu trusty main > /etc/apt/sources.list.d/zfs.list
# add llvm repo
RUN apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 6084F3CF814B57C1CF12EFD515CF4D18AF4F7421 \
|| apt-key adv --keyserver hkp://pgp.mit.edu:80 --recv-keys 6084F3CF814B57C1CF12EFD515CF4D18AF4F7421
RUN echo deb http://llvm.org/apt/jessie/ llvm-toolchain-jessie-3.8 main > /etc/apt/sources.list.d/llvm.list
# allow replacing httpredir mirror
ARG APT_MIRROR=httpredir.debian.org
RUN sed -i s/httpredir.debian.org/$APT_MIRROR/g /etc/apt/sources.list
FROM ubuntu:14.04
MAINTAINER Tianon Gravi <admwiggin@gmail.com> (@tianon)
# Packaged dependencies
RUN apt-get update && apt-get install -y \
apparmor \
apt-utils \
aufs-tools \
automake \
bash-completion \
bsdmainutils \
btrfs-tools \
build-essential \
clang-3.8 \
createrepo \
curl \
dpkg-sig \
gcc-mingw-w64 \
git \
iptables \
jq \
libapparmor-dev \
libcap-dev \
libltdl-dev \
libsqlite3-dev \
libsystemd-journal-dev \
libtool \
mercurial \
net-tools \
pkg-config \
python-dev \
parallel \
python-mock \
python-pip \
python-websocket \
ubuntu-zfs \
xfsprogs \
libzfs-dev \
tar \
zip \
--no-install-recommends \
&& pip install awscli==1.10.15 \
&& ln -snf /usr/bin/clang-3.8 /usr/local/bin/clang \
&& ln -snf /usr/bin/clang++-3.8 /usr/local/bin/clang++
reprepro \
ruby1.9.1 \
ruby1.9.1-dev \
s3cmd=1.1.0* \
--no-install-recommends
# Get lvm2 source for compiling statically
ENV LVM2_VERSION 2.02.103
RUN mkdir -p /usr/local/lvm2 \
&& curl -fsSL "https://mirrors.kernel.org/sourceware/lvm2/LVM2.${LVM2_VERSION}.tgz" \
| tar -xzC /usr/local/lvm2 --strip-components=1
RUN git clone -b v2_02_103 https://git.fedorahosted.org/git/lvm2.git /usr/local/lvm2
# see https://git.fedorahosted.org/cgit/lvm2.git/refs/tags for release tags
# Compile and install lvm2
RUN cd /usr/local/lvm2 \
&& ./configure \
--build="$(gcc -print-multiarch)" \
--enable-static_link \
&& ./configure --enable-static_link \
&& make device-mapper \
&& make install_device-mapper
# see https://git.fedorahosted.org/cgit/lvm2.git/tree/INSTALL
# Configure the container for OSX cross compilation
ENV OSX_SDK MacOSX10.11.sdk
ENV OSX_CROSS_COMMIT 8aa9b71a394905e6c5f4b59e2b97b87a004658a4
RUN set -x \
&& export OSXCROSS_PATH="/osxcross" \
&& git clone https://github.com/tpoechtrager/osxcross.git $OSXCROSS_PATH \
&& ( cd $OSXCROSS_PATH && git checkout -q $OSX_CROSS_COMMIT) \
&& curl -sSL https://s3.dockerproject.org/darwin/v2/${OSX_SDK}.tar.xz -o "${OSXCROSS_PATH}/tarballs/${OSX_SDK}.tar.xz" \
&& UNATTENDED=yes OSX_VERSION_MIN=10.6 ${OSXCROSS_PATH}/build.sh
ENV PATH /osxcross/target/bin:$PATH
# install seccomp: the version shipped in trusty is too old
ENV SECCOMP_VERSION 2.3.0
RUN set -x \
&& export SECCOMP_PATH="$(mktemp -d)" \
&& curl -fsSL "https://github.com/seccomp/libseccomp/releases/download/v${SECCOMP_VERSION}/libseccomp-${SECCOMP_VERSION}.tar.gz" \
| tar -xzC "$SECCOMP_PATH" --strip-components=1 \
&& ( \
cd "$SECCOMP_PATH" \
&& ./configure --prefix=/usr/local \
&& make \
&& make install \
&& ldconfig \
) \
&& rm -rf "$SECCOMP_PATH"
# Install lxc
ENV LXC_VERSION 1.0.7
RUN mkdir -p /usr/src/lxc \
&& curl -sSL https://linuxcontainers.org/downloads/lxc/lxc-${LXC_VERSION}.tar.gz | tar -v -C /usr/src/lxc/ -xz --strip-components=1
RUN cd /usr/src/lxc \
&& ./configure \
&& make \
&& make install \
&& ldconfig
# Install Go
# IMPORTANT: If the version of Go is updated, the Windows to Linux CI machines
# will need updating, to avoid errors. Ping #docker-maintainers on IRC
# with a heads-up.
ENV GO_VERSION 1.5.4
RUN curl -fsSL "https://storage.googleapis.com/golang/go${GO_VERSION}.linux-amd64.tar.gz" \
| tar -xzC /usr/local
ENV GO_VERSION 1.4.1
RUN curl -sSL https://golang.org/dl/go${GO_VERSION}.src.tar.gz | tar -v -C /usr/local -xz \
&& mkdir -p /go/bin
ENV PATH /go/bin:/usr/local/go/bin:$PATH
ENV GOPATH /go:/go/src/github.com/docker/docker/vendor
RUN cd /usr/local/go/src && ./make.bash --no-clean 2>&1
# Compile Go for cross compilation
ENV DOCKER_CROSSPLATFORMS \
linux/386 linux/arm \
darwin/amd64 \
freebsd/amd64 freebsd/386 freebsd/arm \
windows/amd64 windows/386
darwin/amd64 darwin/386 \
freebsd/amd64 freebsd/386 freebsd/arm
# This has been commented out and kept as reference because we don't support compiling with older Go anymore.
# ENV GOFMT_VERSION 1.3.3
# RUN curl -sSL https://storage.googleapis.com/golang/go${GOFMT_VERSION}.$(go env GOOS)-$(go env GOARCH).tar.gz | tar -C /go/bin -xz --strip-components=2 go/bin/gofmt
# TODO when https://jenkins.dockerproject.com/job/Windows/ is green, add windows back to the list above
# windows/amd64 windows/386
# (set an explicit GOARM of 5 for maximum compatibility)
ENV GOARM 5
RUN cd /usr/local/go/src \
&& set -x \
&& for platform in $DOCKER_CROSSPLATFORMS; do \
GOOS=${platform%/*} \
GOARCH=${platform##*/} \
./make.bash --no-clean 2>&1; \
done
# We still support compiling with older Go, so we need to grab an older "gofmt"
ENV GOFMT_VERSION 1.3.3
RUN curl -sSL https://storage.googleapis.com/golang/go${GOFMT_VERSION}.$(go env GOOS)-$(go env GOARCH).tar.gz | tar -C /go/bin -xz --strip-components=2 go/bin/gofmt
ENV GO_TOOLS_COMMIT 823804e1ae08dbb14eb807afc7db9993bc9e3cc3
# Grab Go's cover tool for dead-simple code coverage testing
# Grab Go's vet tool for examining go code to find suspicious constructs
# and help prevent errors that the compiler might not catch
RUN git clone https://github.com/golang/tools.git /go/src/golang.org/x/tools \
&& (cd /go/src/golang.org/x/tools && git checkout -q $GO_TOOLS_COMMIT) \
&& go install -v golang.org/x/tools/cmd/cover \
&& go install -v golang.org/x/tools/cmd/vet
# Grab Go's lint tool
ENV GO_LINT_COMMIT 32a87160691b3c96046c0c678fe57c5bef761456
RUN git clone https://github.com/golang/lint.git /go/src/github.com/golang/lint \
&& (cd /go/src/github.com/golang/lint && git checkout -q $GO_LINT_COMMIT) \
&& go install -v github.com/golang/lint/golint
RUN go get golang.org/x/tools/cmd/cover
# Install two versions of the registry. The first is an older version that
# only supports schema1 manifests. The second is a newer version that supports
# both. This allows integration-cli tests to cover push/pull with both schema1
# and schema2 manifests.
ENV REGISTRY_COMMIT_SCHEMA1 ec87e9b6971d831f0eff752ddb54fb64693e51cd
ENV REGISTRY_COMMIT 47a064d4195a9b56133891bbb13620c3ac83a827
RUN set -x \
&& export GOPATH="$(mktemp -d)" \
&& git clone https://github.com/docker/distribution.git "$GOPATH/src/github.com/docker/distribution" \
&& (cd "$GOPATH/src/github.com/docker/distribution" && git checkout -q "$REGISTRY_COMMIT") \
&& GOPATH="$GOPATH/src/github.com/docker/distribution/Godeps/_workspace:$GOPATH" \
go build -o /usr/local/bin/registry-v2 github.com/docker/distribution/cmd/registry \
&& (cd "$GOPATH/src/github.com/docker/distribution" && git checkout -q "$REGISTRY_COMMIT_SCHEMA1") \
&& GOPATH="$GOPATH/src/github.com/docker/distribution/Godeps/_workspace:$GOPATH" \
go build -o /usr/local/bin/registry-v2-schema1 github.com/docker/distribution/cmd/registry \
&& rm -rf "$GOPATH"
# TODO replace FPM with some very minimal debhelper stuff
RUN gem install --no-rdoc --no-ri fpm --version 1.3.2
# Install notary server
ENV NOTARY_VERSION docker-v1.11-3
# Get the "busybox" image source so we can build locally instead of pulling
RUN git clone -b buildroot-2014.02 https://github.com/jpetazzo/docker-busybox.git /docker-busybox
# Get the "cirros" image source so we can import it instead of fetching it during tests
RUN curl -sSL -o /cirros.tar.gz https://github.com/ewindisch/docker-cirros/raw/1cded459668e8b9dbf4ef976c94c05add9bbd8e9/cirros-0.3.0-x86_64-lxc.tar.gz
# Install registry
ENV REGISTRY_COMMIT c448e0416925a9876d5576e412703c9b8b865e19
RUN set -x \
&& export GO15VENDOREXPERIMENT=1 \
&& export GOPATH="$(mktemp -d)" \
&& git clone https://github.com/docker/notary.git "$GOPATH/src/github.com/docker/notary" \
&& (cd "$GOPATH/src/github.com/docker/notary" && git checkout -q "$NOTARY_VERSION") \
&& GOPATH="$GOPATH/src/github.com/docker/notary/vendor:$GOPATH" \
go build -o /usr/local/bin/notary-server github.com/docker/notary/cmd/notary-server \
&& GOPATH="$GOPATH/src/github.com/docker/notary/vendor:$GOPATH" \
go build -o /usr/local/bin/notary github.com/docker/notary/cmd/notary \
&& rm -rf "$GOPATH"
&& git clone https://github.com/docker/distribution.git /go/src/github.com/docker/distribution \
&& (cd /go/src/github.com/docker/distribution && git checkout -q $REGISTRY_COMMIT) \
&& GOPATH=/go/src/github.com/docker/distribution/Godeps/_workspace:/go \
go build -o /go/bin/registry-v2 github.com/docker/distribution/cmd/registry
# Get the "docker-py" source so we can run their integration tests
ENV DOCKER_PY_COMMIT e2878cbcc3a7eef99917adc1be252800b0e41ece
ENV DOCKER_PY_COMMIT aa19d7b6609c6676e8258f6b900dea2eda1dbe95
RUN git clone https://github.com/docker/docker-py.git /docker-py \
&& cd /docker-py \
&& git checkout -q $DOCKER_PY_COMMIT \
&& pip install -r test-requirements.txt
&& git checkout -q $DOCKER_PY_COMMIT
# Setup s3cmd config
RUN { \
echo '[default]'; \
echo 'access_key=$AWS_ACCESS_KEY'; \
echo 'secret_key=$AWS_SECRET_KEY'; \
} > ~/.s3cfg
# Set user.email so crosbymichael's in-container merge commits go smoothly
RUN git config --global user.email 'docker-dummy@example.com'
@@ -203,71 +145,19 @@ RUN useradd --create-home --gid docker unprivilegeduser
VOLUME /var/lib/docker
WORKDIR /go/src/github.com/docker/docker
ENV DOCKER_BUILDTAGS apparmor pkcs11 seccomp selinux
ENV DOCKER_BUILDTAGS apparmor selinux btrfs_noversion
# Let us use a .bashrc file
RUN ln -sfv $PWD/.bashrc ~/.bashrc
# Register Docker's bash completion.
RUN ln -sv $PWD/contrib/completion/bash/docker /etc/bash_completion.d/docker
# Get useful and necessary Hub images so we can "docker load" locally instead of pulling
COPY contrib/download-frozen-image-v2.sh /go/src/github.com/docker/docker/contrib/
RUN ./contrib/download-frozen-image-v2.sh /docker-frozen-images \
buildpack-deps:jessie@sha256:25785f89240fbcdd8a74bdaf30dd5599a9523882c6dfc567f2e9ef7cf6f79db6 \
busybox:latest@sha256:e4f93f6ed15a0cdd342f5aae387886fba0ab98af0a102da6276eaf24d6e6ade0 \
debian:jessie@sha256:f968f10b4b523737e253a97eac59b0d1420b5c19b69928d35801a6373ffe330e \
hello-world:latest@sha256:8be990ef2aeb16dbcb9271ddfe2610fa6658d13f6dfb8bc72074cc1ca36966a7
# see also "hack/make/.ensure-frozen-images" (which needs to be updated any time this list is)
# Download man page generator
# Install man page generator
COPY vendor /go/src/github.com/docker/docker/vendor
# (copy vendor/ because go-md2man needs golang.org/x/net)
RUN set -x \
&& export GOPATH="$(mktemp -d)" \
&& git clone --depth 1 -b v1.0.4 https://github.com/cpuguy83/go-md2man.git "$GOPATH/src/github.com/cpuguy83/go-md2man" \
&& git clone --depth 1 -b v1.4 https://github.com/russross/blackfriday.git "$GOPATH/src/github.com/russross/blackfriday" \
&& go get -v -d github.com/cpuguy83/go-md2man \
&& go build -v -o /usr/local/bin/go-md2man github.com/cpuguy83/go-md2man \
&& rm -rf "$GOPATH"
&& git clone -b v1 https://github.com/cpuguy83/go-md2man.git /go/src/github.com/cpuguy83/go-md2man \
&& git clone -b v1.2 https://github.com/russross/blackfriday.git /go/src/github.com/russross/blackfriday \
&& go install -v github.com/cpuguy83/go-md2man
# Download toml validator
ENV TOMLV_COMMIT 9baf8a8a9f2ed20a8e54160840c492f937eeaf9a
RUN set -x \
&& export GOPATH="$(mktemp -d)" \
&& git clone https://github.com/BurntSushi/toml.git "$GOPATH/src/github.com/BurntSushi/toml" \
&& (cd "$GOPATH/src/github.com/BurntSushi/toml" && git checkout -q "$TOMLV_COMMIT") \
&& go build -v -o /usr/local/bin/tomlv github.com/BurntSushi/toml/cmd/tomlv \
&& rm -rf "$GOPATH"
# Build/install the tool for embedding resources in Windows binaries
ENV RSRC_COMMIT ba14da1f827188454a4591717fff29999010887f
RUN set -x \
&& export GOPATH="$(mktemp -d)" \
&& git clone https://github.com/akavel/rsrc.git "$GOPATH/src/github.com/akavel/rsrc" \
&& (cd "$GOPATH/src/github.com/akavel/rsrc" && git checkout -q "$RSRC_COMMIT") \
&& go build -v -o /usr/local/bin/rsrc github.com/akavel/rsrc \
&& rm -rf "$GOPATH"
# Install runc
ENV RUNC_COMMIT e87436998478d222be209707503c27f6f91be0c5
RUN set -x \
&& export GOPATH="$(mktemp -d)" \
&& git clone git://github.com/opencontainers/runc.git "$GOPATH/src/github.com/opencontainers/runc" \
&& cd "$GOPATH/src/github.com/opencontainers/runc" \
&& git checkout -q "$RUNC_COMMIT" \
&& make static BUILDTAGS="seccomp apparmor selinux" \
&& cp runc /usr/local/bin/docker-runc
# Install containerd
ENV CONTAINERD_COMMIT d2f03861c91edaafdcb3961461bf82ae83785ed7
RUN set -x \
&& export GOPATH="$(mktemp -d)" \
&& git clone git://github.com/docker/containerd.git "$GOPATH/src/github.com/docker/containerd" \
&& cd "$GOPATH/src/github.com/docker/containerd" \
&& git checkout -q "$CONTAINERD_COMMIT" \
&& make static \
&& cp bin/containerd /usr/local/bin/docker-containerd \
&& cp bin/containerd-shim /usr/local/bin/docker-containerd-shim \
&& cp bin/ctr /usr/local/bin/docker-containerd-ctr
# install toml validator
RUN git clone -b v0.1.0 https://github.com/BurntSushi/toml.git /go/src/github.com/BurntSushi/toml \
&& go install -v github.com/BurntSushi/toml/cmd/tomlv
# Wrap all commands in the "docker-in-docker" script to allow nested containers
ENTRYPOINT ["hack/dind"]

View File

@@ -1,209 +0,0 @@
# This file describes the standard way to build Docker on aarch64, using docker
#
# Usage:
#
# # Assemble the full dev environment. This is slow the first time.
# docker build -t docker -f Dockerfile.aarch64 .
#
# # Mount your source in an interactive container for quick testing:
# docker run -v `pwd`:/go/src/github.com/docker/docker --privileged -i -t docker bash
#
# # Run the test suite:
# docker run --privileged docker hack/make.sh test
#
# Note: AppArmor used to mess with privileged mode, but this is no longer
# the case. Therefore, you don't have to disable it anymore.
#
FROM aarch64/ubuntu:wily
# Packaged dependencies
RUN apt-get update && apt-get install -y \
apparmor \
aufs-tools \
automake \
bash-completion \
btrfs-tools \
build-essential \
createrepo \
curl \
dpkg-sig \
g++ \
gcc \
git \
iptables \
jq \
libapparmor-dev \
libc6-dev \
libcap-dev \
libsqlite3-dev \
libsystemd-dev \
mercurial \
net-tools \
parallel \
pkg-config \
python-dev \
python-mock \
python-pip \
python-websocket \
gccgo \
--no-install-recommends
# Install armhf loader to use armv6 binaries on armv8
RUN dpkg --add-architecture armhf \
&& apt-get update \
&& apt-get install -y libc6:armhf
# Get lvm2 source for compiling statically
ENV LVM2_VERSION 2.02.103
RUN mkdir -p /usr/local/lvm2 \
&& curl -fsSL "https://mirrors.kernel.org/sourceware/lvm2/LVM2.${LVM2_VERSION}.tgz" \
| tar -xzC /usr/local/lvm2 --strip-components=1
# see https://git.fedorahosted.org/cgit/lvm2.git/refs/tags for release tags
# fix platform enablement in lvm2 to support aarch64 properly
RUN set -e \
&& for f in config.guess config.sub; do \
curl -fsSL -o "/usr/local/lvm2/autoconf/$f" "http://git.savannah.gnu.org/gitweb/?p=config.git;a=blob_plain;f=$f;hb=HEAD"; \
done
# "arch.c:78:2: error: #error the arch code needs to know about your machine type"
# Compile and install lvm2
RUN cd /usr/local/lvm2 \
&& ./configure \
--build="$(gcc -print-multiarch)" \
--enable-static_link \
&& make device-mapper \
&& make install_device-mapper
# see https://git.fedorahosted.org/cgit/lvm2.git/tree/INSTALL
# install seccomp: the version shipped in trusty is too old
ENV SECCOMP_VERSION 2.3.0
RUN set -x \
&& export SECCOMP_PATH="$(mktemp -d)" \
&& curl -fsSL "https://github.com/seccomp/libseccomp/releases/download/v${SECCOMP_VERSION}/libseccomp-${SECCOMP_VERSION}.tar.gz" \
| tar -xzC "$SECCOMP_PATH" --strip-components=1 \
&& ( \
cd "$SECCOMP_PATH" \
&& ./configure --prefix=/usr/local \
&& make \
&& make install \
&& ldconfig \
) \
&& rm -rf "$SECCOMP_PATH"
# Install Go
# We don't have official binary tarballs for ARM64, either for Go or bootstrap,
# so we use the official armv6 released binaries as a GOROOT_BOOTSTRAP, and
# build Go from source code.
ENV GO_VERSION 1.5.4
RUN mkdir /usr/src/go && curl -fsSL https://storage.googleapis.com/golang/go${GO_VERSION}.src.tar.gz | tar -v -C /usr/src/go -xz --strip-components=1 \
&& cd /usr/src/go/src \
&& GOOS=linux GOARCH=arm64 GOROOT_BOOTSTRAP="$(go env GOROOT)" ./make.bash
ENV PATH /usr/src/go/bin:$PATH
ENV GOPATH /go:/go/src/github.com/docker/docker/vendor
# Only install one version of the registry, because the old version that
# supports schema1 manifests does not work on ARM64, so we should skip the
# integration-cli tests for schema1 manifests on ARM64.
ENV REGISTRY_COMMIT 47a064d4195a9b56133891bbb13620c3ac83a827
RUN set -x \
&& export GOPATH="$(mktemp -d)" \
&& git clone https://github.com/docker/distribution.git "$GOPATH/src/github.com/docker/distribution" \
&& (cd "$GOPATH/src/github.com/docker/distribution" && git checkout -q "$REGISTRY_COMMIT") \
&& GOPATH="$GOPATH/src/github.com/docker/distribution/Godeps/_workspace:$GOPATH" \
go build -o /usr/local/bin/registry-v2 github.com/docker/distribution/cmd/registry \
&& rm -rf "$GOPATH"
# Install notary server
ENV NOTARY_VERSION docker-v1.11-3
RUN set -x \
&& export GO15VENDOREXPERIMENT=1 \
&& export GOPATH="$(mktemp -d)" \
&& git clone https://github.com/docker/notary.git "$GOPATH/src/github.com/docker/notary" \
&& (cd "$GOPATH/src/github.com/docker/notary" && git checkout -q "$NOTARY_VERSION") \
&& GOPATH="$GOPATH/src/github.com/docker/notary/vendor:$GOPATH" \
go build -o /usr/local/bin/notary-server github.com/docker/notary/cmd/notary-server \
&& GOPATH="$GOPATH/src/github.com/docker/notary/vendor:$GOPATH" \
go build -o /usr/local/bin/notary github.com/docker/notary/cmd/notary \
&& rm -rf "$GOPATH"
# Get the "docker-py" source so we can run their integration tests
ENV DOCKER_PY_COMMIT e2878cbcc3a7eef99917adc1be252800b0e41ece
RUN git clone https://github.com/docker/docker-py.git /docker-py \
&& cd /docker-py \
&& git checkout -q $DOCKER_PY_COMMIT \
&& pip install -r test-requirements.txt
# Set user.email so crosbymichael's in-container merge commits go smoothly
RUN git config --global user.email 'docker-dummy@example.com'
# Add an unprivileged user to be used for tests which need it
RUN groupadd -r docker
RUN useradd --create-home --gid docker unprivilegeduser
VOLUME /var/lib/docker
WORKDIR /go/src/github.com/docker/docker
ENV DOCKER_BUILDTAGS apparmor pkcs11 seccomp selinux
# Let us use a .bashrc file
RUN ln -sfv $PWD/.bashrc ~/.bashrc
# Register Docker's bash completion.
RUN ln -sv $PWD/contrib/completion/bash/docker /etc/bash_completion.d/docker
# Get useful and necessary Hub images so we can "docker load" locally instead of pulling
COPY contrib/download-frozen-image-v2.sh /go/src/github.com/docker/docker/contrib/
RUN ./contrib/download-frozen-image-v2.sh /docker-frozen-images \
aarch64/buildpack-deps:jessie@sha256:6aa1d6910791b7ac78265fd0798e5abd6cb3f27ae992f6f960f6c303ec9535f2 \
aarch64/busybox:latest@sha256:b23a6a37cf269dff6e46d2473b6e227afa42b037e6d23435f1d2bc40fc8c2828 \
aarch64/debian:jessie@sha256:4be74a41a7c70ebe887b634b11ffe516cf4fcd56864a54941e56bb49883c3170 \
aarch64/hello-world:latest@sha256:65a4a158587b307bb02db4de41b836addb0c35175bdc801367b1ac1ddeb9afda
# see also "hack/make/.ensure-frozen-images" (which needs to be updated any time this list is)
# Download man page generator
RUN set -x \
&& export GOPATH="$(mktemp -d)" \
&& git clone --depth 1 -b v1.0.4 https://github.com/cpuguy83/go-md2man.git "$GOPATH/src/github.com/cpuguy83/go-md2man" \
&& git clone --depth 1 -b v1.4 https://github.com/russross/blackfriday.git "$GOPATH/src/github.com/russross/blackfriday" \
&& go get -v -d github.com/cpuguy83/go-md2man \
&& go build -v -o /usr/local/bin/go-md2man github.com/cpuguy83/go-md2man \
&& rm -rf "$GOPATH"
# Download toml validator
ENV TOMLV_COMMIT 9baf8a8a9f2ed20a8e54160840c492f937eeaf9a
RUN set -x \
&& export GOPATH="$(mktemp -d)" \
&& git clone https://github.com/BurntSushi/toml.git "$GOPATH/src/github.com/BurntSushi/toml" \
&& (cd "$GOPATH/src/github.com/BurntSushi/toml" && git checkout -q "$TOMLV_COMMIT") \
&& go build -v -o /usr/local/bin/tomlv github.com/BurntSushi/toml/cmd/tomlv \
&& rm -rf "$GOPATH"
# Install runc
ENV RUNC_COMMIT e87436998478d222be209707503c27f6f91be0c5
RUN set -x \
&& export GOPATH="$(mktemp -d)" \
&& git clone git://github.com/opencontainers/runc.git "$GOPATH/src/github.com/opencontainers/runc" \
&& cd "$GOPATH/src/github.com/opencontainers/runc" \
&& git checkout -q "$RUNC_COMMIT" \
&& make static BUILDTAGS="seccomp apparmor selinux" \
&& cp runc /usr/local/bin/docker-runc
# Install containerd
ENV CONTAINERD_COMMIT d2f03861c91edaafdcb3961461bf82ae83785ed7
RUN set -x \
&& export GOPATH="$(mktemp -d)" \
&& git clone git://github.com/docker/containerd.git "$GOPATH/src/github.com/docker/containerd" \
&& cd "$GOPATH/src/github.com/docker/containerd" \
&& git checkout -q "$CONTAINERD_COMMIT" \
&& make static \
&& cp bin/containerd /usr/local/bin/docker-containerd \
&& cp bin/containerd-shim /usr/local/bin/docker-containerd-shim \
&& cp bin/ctr /usr/local/bin/docker-containerd-ctr
# Wrap all commands in the "docker-in-docker" script to allow nested containers
ENTRYPOINT ["hack/dind"]
# Upload docker source
COPY . /go/src/github.com/docker/docker

View File

@@ -1,227 +0,0 @@
# This file describes the standard way to build Docker on ARMv7, using docker
#
# Usage:
#
# # Assemble the full dev environment. This is slow the first time.
# docker build -t docker -f Dockerfile.armhf .
#
# # Mount your source in an interactive container for quick testing:
# docker run -v `pwd`:/go/src/github.com/docker/docker --privileged -i -t docker bash
#
# # Run the test suite:
# docker run --privileged docker hack/make.sh test
#
# Note: AppArmor used to mess with privileged mode, but this is no longer
# the case. Therefore, you don't have to disable it anymore.
#
FROM armhf/debian:jessie
# Packaged dependencies
RUN apt-get update && apt-get install -y \
apparmor \
aufs-tools \
automake \
bash-completion \
btrfs-tools \
build-essential \
createrepo \
curl \
dpkg-sig \
git \
iptables \
jq \
net-tools \
libapparmor-dev \
libcap-dev \
libltdl-dev \
libsqlite3-dev \
libsystemd-journal-dev \
libtool \
mercurial \
pkg-config \
python-dev \
python-mock \
python-pip \
python-websocket \
xfsprogs \
tar \
--no-install-recommends
# Get lvm2 source for compiling statically
ENV LVM2_VERSION 2.02.103
RUN mkdir -p /usr/local/lvm2 \
&& curl -fsSL "https://mirrors.kernel.org/sourceware/lvm2/LVM2.${LVM2_VERSION}.tgz" \
| tar -xzC /usr/local/lvm2 --strip-components=1
# see https://git.fedorahosted.org/cgit/lvm2.git/refs/tags for release tags
# Compile and install lvm2
RUN cd /usr/local/lvm2 \
&& ./configure \
--build="$(gcc -print-multiarch)" \
--enable-static_link \
&& make device-mapper \
&& make install_device-mapper
# see https://git.fedorahosted.org/cgit/lvm2.git/tree/INSTALL
# Install Go
# TODO Update to 1.5.4 once available, or build from source, as these builds
# are marked "end of life", see http://dave.cheney.net/unofficial-arm-tarballs
ENV GO_VERSION 1.5.3
RUN curl -fsSL "http://dave.cheney.net/paste/go${GO_VERSION}.linux-arm.tar.gz" \
| tar -xzC /usr/local
ENV PATH /go/bin:/usr/local/go/bin:$PATH
ENV GOPATH /go:/go/src/github.com/docker/docker/vendor
# we're building for armhf, which is ARMv7, so let's be explicit about that
ENV GOARCH arm
ENV GOARM 7
# This has been commented out and kept as reference because we don't support compiling with older Go anymore.
# ENV GOFMT_VERSION 1.3.3
# RUN curl -sSL https://storage.googleapis.com/golang/go${GOFMT_VERSION}.$(go env GOOS)-$(go env GOARCH).tar.gz | tar -C /go/bin -xz --strip-components=2 go/bin/gofmt
ENV GO_TOOLS_COMMIT 823804e1ae08dbb14eb807afc7db9993bc9e3cc3
# Grab Go's cover tool for dead-simple code coverage testing
# Grab Go's vet tool for examining go code to find suspicious constructs
# and help prevent errors that the compiler might not catch
RUN git clone https://github.com/golang/tools.git /go/src/golang.org/x/tools \
&& (cd /go/src/golang.org/x/tools && git checkout -q $GO_TOOLS_COMMIT) \
&& go install -v golang.org/x/tools/cmd/cover \
&& go install -v golang.org/x/tools/cmd/vet
# Grab Go's lint tool
ENV GO_LINT_COMMIT 32a87160691b3c96046c0c678fe57c5bef761456
RUN git clone https://github.com/golang/lint.git /go/src/github.com/golang/lint \
&& (cd /go/src/github.com/golang/lint && git checkout -q $GO_LINT_COMMIT) \
&& go install -v github.com/golang/lint/golint
# install seccomp: the version shipped in trusty is too old
ENV SECCOMP_VERSION 2.3.0
RUN set -x \
&& export SECCOMP_PATH="$(mktemp -d)" \
&& curl -fsSL "https://github.com/seccomp/libseccomp/releases/download/v${SECCOMP_VERSION}/libseccomp-${SECCOMP_VERSION}.tar.gz" \
| tar -xzC "$SECCOMP_PATH" --strip-components=1 \
&& ( \
cd "$SECCOMP_PATH" \
&& ./configure --prefix=/usr/local \
&& make \
&& make install \
&& ldconfig \
) \
&& rm -rf "$SECCOMP_PATH"
# Install two versions of the registry. The first is an older version that
# only supports schema1 manifests. The second is a newer version that supports
# both. This allows integration-cli tests to cover push/pull with both schema1
# and schema2 manifests.
ENV REGISTRY_COMMIT_SCHEMA1 ec87e9b6971d831f0eff752ddb54fb64693e51cd
ENV REGISTRY_COMMIT cb08de17d74bef86ce6c5abe8b240e282f5750be
RUN set -x \
&& export GOPATH="$(mktemp -d)" \
&& git clone https://github.com/docker/distribution.git "$GOPATH/src/github.com/docker/distribution" \
&& (cd "$GOPATH/src/github.com/docker/distribution" && git checkout -q "$REGISTRY_COMMIT") \
&& GOPATH="$GOPATH/src/github.com/docker/distribution/Godeps/_workspace:$GOPATH" \
go build -o /usr/local/bin/registry-v2 github.com/docker/distribution/cmd/registry \
&& (cd "$GOPATH/src/github.com/docker/distribution" && git checkout -q "$REGISTRY_COMMIT_SCHEMA1") \
&& GOPATH="$GOPATH/src/github.com/docker/distribution/Godeps/_workspace:$GOPATH" \
go build -o /usr/local/bin/registry-v2-schema1 github.com/docker/distribution/cmd/registry \
&& rm -rf "$GOPATH"
# Install notary server
ENV NOTARY_VERSION docker-v1.11-3
RUN set -x \
&& export GO15VENDOREXPERIMENT=1 \
&& export GOPATH="$(mktemp -d)" \
&& git clone https://github.com/docker/notary.git "$GOPATH/src/github.com/docker/notary" \
&& (cd "$GOPATH/src/github.com/docker/notary" && git checkout -q "$NOTARY_VERSION") \
&& GOPATH="$GOPATH/src/github.com/docker/notary/vendor:$GOPATH" \
go build -o /usr/local/bin/notary-server github.com/docker/notary/cmd/notary-server \
&& GOPATH="$GOPATH/src/github.com/docker/notary/vendor:$GOPATH" \
go build -o /usr/local/bin/notary github.com/docker/notary/cmd/notary \
&& rm -rf "$GOPATH"
# Get the "docker-py" source so we can run their integration tests
ENV DOCKER_PY_COMMIT e2878cbcc3a7eef99917adc1be252800b0e41ece
RUN git clone https://github.com/docker/docker-py.git /docker-py \
&& cd /docker-py \
&& git checkout -q $DOCKER_PY_COMMIT \
&& pip install -r test-requirements.txt
# Set user.email so crosbymichael's in-container merge commits go smoothly
RUN git config --global user.email 'docker-dummy@example.com'
# Add an unprivileged user to be used for tests which need it
RUN groupadd -r docker
RUN useradd --create-home --gid docker unprivilegeduser
VOLUME /var/lib/docker
WORKDIR /go/src/github.com/docker/docker
ENV DOCKER_BUILDTAGS apparmor pkcs11 seccomp selinux
# Let us use a .bashrc file
RUN ln -sfv $PWD/.bashrc ~/.bashrc
# Register Docker's bash completion.
RUN ln -sv $PWD/contrib/completion/bash/docker /etc/bash_completion.d/docker
# Get useful and necessary Hub images so we can "docker load" locally instead of pulling
COPY contrib/download-frozen-image-v2.sh /go/src/github.com/docker/docker/contrib/
RUN ./contrib/download-frozen-image-v2.sh /docker-frozen-images \
armhf/buildpack-deps:jessie@sha256:ca6cce8e5bf5c952129889b5cc15cd6aa8d995d77e55e3749bbaadae50e476cb \
armhf/busybox:latest@sha256:d98a7343ac750ffe387e3d514f8521ba69846c216778919b01414b8617cfb3d4 \
armhf/debian:jessie@sha256:4a2187483f04a84f9830910fe3581d69b3c985cc045d9f01d8e2f3795b28107b \
armhf/hello-world:latest@sha256:161dcecea0225975b2ad5f768058212c1e0d39e8211098666ffa1ac74cfb7791
# see also "hack/make/.ensure-frozen-images" (which needs to be updated any time this list is)
# Download man page generator
RUN set -x \
&& export GOPATH="$(mktemp -d)" \
&& git clone --depth 1 -b v1.0.4 https://github.com/cpuguy83/go-md2man.git "$GOPATH/src/github.com/cpuguy83/go-md2man" \
&& git clone --depth 1 -b v1.4 https://github.com/russross/blackfriday.git "$GOPATH/src/github.com/russross/blackfriday" \
&& go get -v -d github.com/cpuguy83/go-md2man \
&& go build -v -o /usr/local/bin/go-md2man github.com/cpuguy83/go-md2man \
&& rm -rf "$GOPATH"
# Download toml validator
ENV TOMLV_COMMIT 9baf8a8a9f2ed20a8e54160840c492f937eeaf9a
RUN set -x \
&& export GOPATH="$(mktemp -d)" \
&& git clone https://github.com/BurntSushi/toml.git "$GOPATH/src/github.com/BurntSushi/toml" \
&& (cd "$GOPATH/src/github.com/BurntSushi/toml" && git checkout -q "$TOMLV_COMMIT") \
&& go build -v -o /usr/local/bin/tomlv github.com/BurntSushi/toml/cmd/tomlv \
&& rm -rf "$GOPATH"
# Build/install the tool for embedding resources in Windows binaries
ENV RSRC_VERSION v2
RUN set -x \
&& export GOPATH="$(mktemp -d)" \
&& git clone --depth 1 -b "$RSRC_VERSION" https://github.com/akavel/rsrc.git "$GOPATH/src/github.com/akavel/rsrc" \
&& go build -v -o /usr/local/bin/rsrc github.com/akavel/rsrc \
&& rm -rf "$GOPATH"
# Install runc
ENV RUNC_COMMIT e87436998478d222be209707503c27f6f91be0c5
RUN set -x \
&& export GOPATH="$(mktemp -d)" \
&& git clone git://github.com/opencontainers/runc.git "$GOPATH/src/github.com/opencontainers/runc" \
&& cd "$GOPATH/src/github.com/opencontainers/runc" \
&& git checkout -q "$RUNC_COMMIT" \
&& make static BUILDTAGS="seccomp apparmor selinux" \
&& cp runc /usr/local/bin/docker-runc
# Install containerd
ENV CONTAINERD_COMMIT d2f03861c91edaafdcb3961461bf82ae83785ed7
RUN set -x \
&& export GOPATH="$(mktemp -d)" \
&& git clone git://github.com/docker/containerd.git "$GOPATH/src/github.com/docker/containerd" \
&& cd "$GOPATH/src/github.com/docker/containerd" \
&& git checkout -q "$CONTAINERD_COMMIT" \
&& make static \
&& cp bin/containerd /usr/local/bin/docker-containerd \
&& cp bin/containerd-shim /usr/local/bin/docker-containerd-shim \
&& cp bin/ctr /usr/local/bin/docker-containerd-ctr
ENTRYPOINT ["hack/dind"]
# Upload docker source
COPY . /go/src/github.com/docker/docker

View File

@@ -1,102 +0,0 @@
# This file describes the standard way to build Docker, using docker
#
# Usage:
#
# # Assemble the full dev environment. This is slow the first time.
# docker build -t docker -f Dockerfile.gccgo .
#
FROM gcc:5.3
# Packaged dependencies
RUN apt-get update && apt-get install -y \
apparmor \
aufs-tools \
btrfs-tools \
build-essential \
curl \
git \
iptables \
jq \
net-tools \
libapparmor-dev \
libcap-dev \
libsqlite3-dev \
mercurial \
net-tools \
parallel \
python-dev \
python-mock \
python-pip \
python-websocket \
--no-install-recommends
# Get lvm2 source for compiling statically
RUN git clone -b v2_02_103 https://git.fedorahosted.org/git/lvm2.git /usr/local/lvm2
# see https://git.fedorahosted.org/cgit/lvm2.git/refs/tags for release tags
# Compile and install lvm2
RUN cd /usr/local/lvm2 \
&& ./configure --enable-static_link \
&& make device-mapper \
&& make install_device-mapper
# see https://git.fedorahosted.org/cgit/lvm2.git/tree/INSTALL
# install seccomp: the version shipped in jessie is too old
ENV SECCOMP_VERSION v2.3.0
RUN set -x \
&& export SECCOMP_PATH=$(mktemp -d) \
&& git clone https://github.com/seccomp/libseccomp.git "$SECCOMP_PATH" \
&& ( \
cd "$SECCOMP_PATH" \
&& git checkout "$SECCOMP_VERSION" \
&& ./autogen.sh \
&& ./configure --prefix=/usr \
&& make \
&& make install \
) \
&& rm -rf "$SECCOMP_PATH"
ENV GOPATH /go:/go/src/github.com/docker/docker/vendor
# Get the "docker-py" source so we can run their integration tests
ENV DOCKER_PY_COMMIT e2878cbcc3a7eef99917adc1be252800b0e41ece
RUN git clone https://github.com/docker/docker-py.git /docker-py \
&& cd /docker-py \
&& git checkout -q $DOCKER_PY_COMMIT
# Add an unprivileged user to be used for tests which need it
RUN groupadd -r docker
RUN useradd --create-home --gid docker unprivilegeduser
VOLUME /var/lib/docker
WORKDIR /go/src/github.com/docker/docker
ENV DOCKER_BUILDTAGS apparmor seccomp selinux
# Install runc
ENV RUNC_COMMIT e87436998478d222be209707503c27f6f91be0c5
RUN set -x \
&& export GOPATH="$(mktemp -d)" \
&& git clone git://github.com/opencontainers/runc.git "$GOPATH/src/github.com/opencontainers/runc" \
&& cd "$GOPATH/src/github.com/opencontainers/runc" \
&& git checkout -q "$RUNC_COMMIT" \
&& make static BUILDTAGS="seccomp apparmor selinux" \
&& cp runc /usr/local/bin/docker-runc
# Install containerd
ENV CONTAINERD_COMMIT d2f03861c91edaafdcb3961461bf82ae83785ed7
RUN set -x \
&& export GOPATH="$(mktemp -d)" \
&& git clone git://github.com/docker/containerd.git "$GOPATH/src/github.com/docker/containerd" \
&& cd "$GOPATH/src/github.com/docker/containerd" \
&& git checkout -q "$CONTAINERD_COMMIT" \
&& make static \
&& cp bin/containerd /usr/local/bin/docker-containerd \
&& cp bin/containerd-shim /usr/local/bin/docker-containerd-shim \
&& cp bin/ctr /usr/local/bin/docker-containerd-ctr
# Wrap all commands in the "docker-in-docker" script to allow nested containers
ENTRYPOINT ["hack/dind"]
# Upload docker source
COPY . /go/src/github.com/docker/docker

View File

@@ -1,227 +0,0 @@
# This file describes the standard way to build Docker on ppc64le, using docker
#
# Usage:
#
# # Assemble the full dev environment. This is slow the first time.
# docker build -t docker -f Dockerfile.ppc64le .
#
# # Mount your source in an interactive container for quick testing:
# docker run -v `pwd`:/go/src/github.com/docker/docker --privileged -i -t docker bash
#
# # Run the test suite:
# docker run --privileged docker hack/make.sh test
#
# Note: AppArmor used to mess with privileged mode, but this is no longer
# the case. Therefore, you don't have to disable it anymore.
#
FROM ppc64le/gcc:5.3
# Packaged dependencies
RUN apt-get update && apt-get install -y \
apparmor \
aufs-tools \
automake \
bash-completion \
btrfs-tools \
build-essential \
createrepo \
curl \
dpkg-sig \
git \
iptables \
jq \
net-tools \
libapparmor-dev \
libcap-dev \
libltdl-dev \
libsqlite3-dev \
libsystemd-journal-dev \
libtool \
mercurial \
pkg-config \
python-dev \
python-mock \
python-pip \
python-websocket \
xfsprogs \
tar \
--no-install-recommends
# Get lvm2 source for compiling statically
ENV LVM2_VERSION 2.02.103
RUN mkdir -p /usr/local/lvm2 \
&& curl -fsSL "https://mirrors.kernel.org/sourceware/lvm2/LVM2.${LVM2_VERSION}.tgz" \
| tar -xzC /usr/local/lvm2 --strip-components=1
# see https://git.fedorahosted.org/cgit/lvm2.git/refs/tags for release tags
# fix platform enablement in lvm2 to support ppc64le properly
RUN set -e \
&& for f in config.guess config.sub; do \
curl -fsSL -o "/usr/local/lvm2/autoconf/$f" "http://git.savannah.gnu.org/gitweb/?p=config.git;a=blob_plain;f=$f;hb=HEAD"; \
done
# "arch.c:78:2: error: #error the arch code needs to know about your machine type"
# Compile and install lvm2
RUN cd /usr/local/lvm2 \
&& ./configure \
--build="$(gcc -print-multiarch)" \
--enable-static_link \
&& make device-mapper \
&& make install_device-mapper
# see https://git.fedorahosted.org/cgit/lvm2.git/tree/INSTALL
# TODO install Go, using gccgo as GOROOT_BOOTSTRAP (Go 1.5+ supports ppc64le properly)
# possibly a ppc64le/golang image?
## BUILD GOLANG
ENV GO_VERSION 1.5.4
ENV GO_DOWNLOAD_URL https://golang.org/dl/go${GO_VERSION}.src.tar.gz
ENV GO_DOWNLOAD_SHA256 002acabce7ddc140d0d55891f9d4fcfbdd806b9332fb8b110c91bc91afb0bc93
ENV GOROOT_BOOTSTRAP /usr/local
RUN curl -fsSL "$GO_DOWNLOAD_URL" -o golang.tar.gz \
&& echo "$GO_DOWNLOAD_SHA256 golang.tar.gz" | sha256sum -c - \
&& tar -C /usr/src -xzf golang.tar.gz \
&& rm golang.tar.gz \
&& cd /usr/src/go/src && ./make.bash 2>&1
ENV GOROOT_BOOTSTRAP /usr/src/
ENV PATH /usr/src/go/bin/:/go/bin:$PATH
ENV GOPATH /go:/go/src/github.com/docker/docker/vendor
# This has been commented out and kept as reference because we don't support compiling with older Go anymore.
# ENV GOFMT_VERSION 1.3.3
# RUN curl -sSL https://storage.googleapis.com/golang/go${GOFMT_VERSION}.$(go env GOOS)-$(go env GOARCH).tar.gz | tar -C /go/bin -xz --strip-components=2 go/bin/gofmt
ENV GO_TOOLS_COMMIT 823804e1ae08dbb14eb807afc7db9993bc9e3cc3
# Grab Go's cover tool for dead-simple code coverage testing
# Grab Go's vet tool for examining go code to find suspicious constructs
# and help prevent errors that the compiler might not catch
RUN git clone https://github.com/golang/tools.git /go/src/golang.org/x/tools \
&& (cd /go/src/golang.org/x/tools && git checkout -q $GO_TOOLS_COMMIT) \
&& go install -v golang.org/x/tools/cmd/cover \
&& go install -v golang.org/x/tools/cmd/vet
# Grab Go's lint tool
ENV GO_LINT_COMMIT 32a87160691b3c96046c0c678fe57c5bef761456
RUN git clone https://github.com/golang/lint.git /go/src/github.com/golang/lint \
&& (cd /go/src/github.com/golang/lint && git checkout -q $GO_LINT_COMMIT) \
&& go install -v github.com/golang/lint/golint
# Install two versions of the registry. The first is an older version that
# only supports schema1 manifests. The second is a newer version that supports
# both. This allows integration-cli tests to cover push/pull with both schema1
# and schema2 manifests.
ENV REGISTRY_COMMIT_SCHEMA1 ec87e9b6971d831f0eff752ddb54fb64693e51cd
ENV REGISTRY_COMMIT 47a064d4195a9b56133891bbb13620c3ac83a827
RUN set -x \
&& export GOPATH="$(mktemp -d)" \
&& git clone https://github.com/docker/distribution.git "$GOPATH/src/github.com/docker/distribution" \
&& (cd "$GOPATH/src/github.com/docker/distribution" && git checkout -q "$REGISTRY_COMMIT") \
&& GOPATH="$GOPATH/src/github.com/docker/distribution/Godeps/_workspace:$GOPATH" \
go build -o /usr/local/bin/registry-v2 github.com/docker/distribution/cmd/registry \
&& (cd "$GOPATH/src/github.com/docker/distribution" && git checkout -q "$REGISTRY_COMMIT_SCHEMA1") \
&& GOPATH="$GOPATH/src/github.com/docker/distribution/Godeps/_workspace:$GOPATH" \
go build -o /usr/local/bin/registry-v2-schema1 github.com/docker/distribution/cmd/registry \
&& rm -rf "$GOPATH"
# Install notary and notary-server
ENV NOTARY_VERSION docker-v1.11-3
RUN set -x \
&& export GO15VENDOREXPERIMENT=1 \
&& export GOPATH="$(mktemp -d)" \
&& git clone https://github.com/docker/notary.git "$GOPATH/src/github.com/docker/notary" \
&& (cd "$GOPATH/src/github.com/docker/notary" && git checkout -q "$NOTARY_VERSION") \
&& GOPATH="$GOPATH/src/github.com/docker/notary/vendor:$GOPATH" \
go build -o /usr/local/bin/notary-server github.com/docker/notary/cmd/notary-server \
&& GOPATH="$GOPATH/src/github.com/docker/notary/vendor:$GOPATH" \
go build -o /usr/local/bin/notary github.com/docker/notary/cmd/notary \
&& rm -rf "$GOPATH"
# Get the "docker-py" source so we can run their integration tests
ENV DOCKER_PY_COMMIT e2878cbcc3a7eef99917adc1be252800b0e41ece
RUN git clone https://github.com/docker/docker-py.git /docker-py \
&& cd /docker-py \
&& git checkout -q $DOCKER_PY_COMMIT \
&& pip install -r test-requirements.txt
# Set user.email so crosbymichael's in-container merge commits go smoothly
RUN git config --global user.email 'docker-dummy@example.com'
# Add an unprivileged user to be used for tests which need it
RUN groupadd -r docker
RUN useradd --create-home --gid docker unprivilegeduser
VOLUME /var/lib/docker
WORKDIR /go/src/github.com/docker/docker
ENV DOCKER_BUILDTAGS apparmor pkcs11 selinux
# Let us use a .bashrc file
RUN ln -sfv $PWD/.bashrc ~/.bashrc
# Register Docker's bash completion.
RUN ln -sv $PWD/contrib/completion/bash/docker /etc/bash_completion.d/docker
# Get useful and necessary Hub images so we can "docker load" locally instead of pulling
COPY contrib/download-frozen-image-v2.sh /go/src/github.com/docker/docker/contrib/
RUN ./contrib/download-frozen-image-v2.sh /docker-frozen-images \
ppc64le/buildpack-deps:jessie@sha256:902bfe4ef1389f94d143d64516dd50a2de75bca2e66d4a44b1d73f63ddf05dda \
ppc64le/busybox:latest@sha256:38bb82085248d5a3c24bd7a5dc146f2f2c191e189da0441f1c2ca560e3fc6f1b \
ppc64le/debian:jessie@sha256:412845f51b6ab662afba71bc7a716e20fdb9b84f185d180d4c7504f8a75c4f91 \
ppc64le/hello-world:latest@sha256:186a40a9a02ca26df0b6c8acdfb8ac2f3ae6678996a838f977e57fac9d963974
# see also "hack/make/.ensure-frozen-images" (which needs to be updated any time this list is)
# Download man page generator
RUN set -x \
&& export GOPATH="$(mktemp -d)" \
&& git clone --depth 1 -b v1.0.4 https://github.com/cpuguy83/go-md2man.git "$GOPATH/src/github.com/cpuguy83/go-md2man" \
&& git clone --depth 1 -b v1.4 https://github.com/russross/blackfriday.git "$GOPATH/src/github.com/russross/blackfriday" \
&& go get -v -d github.com/cpuguy83/go-md2man \
&& go build -v -o /usr/local/bin/go-md2man github.com/cpuguy83/go-md2man \
&& rm -rf "$GOPATH"
# Download toml validator
ENV TOMLV_COMMIT 9baf8a8a9f2ed20a8e54160840c492f937eeaf9a
RUN set -x \
&& export GOPATH="$(mktemp -d)" \
&& git clone https://github.com/BurntSushi/toml.git "$GOPATH/src/github.com/BurntSushi/toml" \
&& (cd "$GOPATH/src/github.com/BurntSushi/toml" && git checkout -q "$TOMLV_COMMIT") \
&& go build -v -o /usr/local/bin/tomlv github.com/BurntSushi/toml/cmd/tomlv \
&& rm -rf "$GOPATH"
# Build/install the tool for embedding resources in Windows binaries
ENV RSRC_VERSION v2
RUN set -x \
&& export GOPATH="$(mktemp -d)" \
&& git clone --depth 1 -b "$RSRC_VERSION" https://github.com/akavel/rsrc.git "$GOPATH/src/github.com/akavel/rsrc" \
&& go build -v -o /usr/local/bin/rsrc github.com/akavel/rsrc \
&& rm -rf "$GOPATH"
# Install runc
ENV RUNC_COMMIT e87436998478d222be209707503c27f6f91be0c5
RUN set -x \
&& export GOPATH="$(mktemp -d)" \
&& git clone git://github.com/opencontainers/runc.git "$GOPATH/src/github.com/opencontainers/runc" \
&& cd "$GOPATH/src/github.com/opencontainers/runc" \
&& git checkout -q "$RUNC_COMMIT" \
&& make static BUILDTAGS="apparmor selinux" \
&& cp runc /usr/local/bin/docker-runc
# Install containerd
ENV CONTAINERD_COMMIT d2f03861c91edaafdcb3961461bf82ae83785ed7
RUN set -x \
&& export GOPATH="$(mktemp -d)" \
&& git clone git://github.com/docker/containerd.git "$GOPATH/src/github.com/docker/containerd" \
&& cd "$GOPATH/src/github.com/docker/containerd" \
&& git checkout -q "$CONTAINERD_COMMIT" \
&& make static \
&& cp bin/containerd /usr/local/bin/docker-containerd \
&& cp bin/containerd-shim /usr/local/bin/docker-containerd-shim \
&& cp bin/ctr /usr/local/bin/docker-containerd-ctr
# Wrap all commands in the "docker-in-docker" script to allow nested containers
ENTRYPOINT ["hack/dind"]
# Upload docker source
COPY . /go/src/github.com/docker/docker

View File

@@ -1,206 +0,0 @@
# This file describes the standard way to build Docker on s390x, using docker
#
# Usage:
#
# # Assemble the full dev environment. This is slow the first time.
# docker build -t docker -f Dockerfile.s390x .
#
# # Mount your source in an interactive container for quick testing:
# docker run -v `pwd`:/go/src/github.com/docker/docker --privileged -i -t docker bash
#
# # Run the test suite:
# docker run --privileged docker hack/make.sh test
#
# Note: AppArmor used to mess with privileged mode, but this is no longer
# the case. Therefore, you don't have to disable it anymore.
#
FROM s390x/gcc:5.3
# Packaged dependencies
RUN apt-get update && apt-get install -y \
apparmor \
aufs-tools \
automake \
bash-completion \
btrfs-tools \
build-essential \
createrepo \
curl \
dpkg-sig \
git \
iptables \
jq \
net-tools \
libapparmor-dev \
libcap-dev \
libltdl-dev \
libsqlite3-dev \
libsystemd-journal-dev \
libtool \
mercurial \
pkg-config \
python-dev \
python-mock \
python-pip \
python-websocket \
xfsprogs \
tar \
--no-install-recommends
# Get lvm2 source for compiling statically
ENV LVM2_VERSION 2.02.103
RUN mkdir -p /usr/local/lvm2 \
&& curl -fsSL "https://mirrors.kernel.org/sourceware/lvm2/LVM2.${LVM2_VERSION}.tgz" \
| tar -xzC /usr/local/lvm2 --strip-components=1
# see https://git.fedorahosted.org/cgit/lvm2.git/refs/tags for release tags
# fix platform enablement in lvm2 to support s390x properly
RUN set -e \
&& for f in config.guess config.sub; do \
curl -fsSL -o "/usr/local/lvm2/autoconf/$f" "http://git.savannah.gnu.org/gitweb/?p=config.git;a=blob_plain;f=$f;hb=HEAD"; \
done
# "arch.c:78:2: error: #error the arch code needs to know about your machine type"
# Compile and install lvm2
RUN cd /usr/local/lvm2 \
&& ./configure \
--build="$(gcc -print-multiarch)" \
--enable-static_link \
&& make device-mapper \
&& make install_device-mapper
# see https://git.fedorahosted.org/cgit/lvm2.git/tree/INSTALL
# Note: Go comes from the base image (gccgo, specifically)
# We can't compile Go proper because s390x isn't an officially supported architecture yet.
ENV PATH /go/bin:$PATH
ENV GOPATH /go:/go/src/github.com/docker/docker/vendor
# This has been commented out and kept as reference because we don't support compiling with older Go anymore.
# ENV GOFMT_VERSION 1.3.3
# RUN curl -sSL https://storage.googleapis.com/golang/go${GOFMT_VERSION}.$(go env GOOS)-$(go env GOARCH).tar.gz | tar -C /go/bin -xz --strip-components=2 go/bin/gofmt
# TODO update this sha when we upgrade to Go 1.5+
ENV GO_TOOLS_COMMIT 069d2f3bcb68257b627205f0486d6cc69a231ff9
# Grab Go's cover tool for dead-simple code coverage testing
# Grab Go's vet tool for examining go code to find suspicious constructs
# and help prevent errors that the compiler might not catch
RUN git clone https://github.com/golang/tools.git /go/src/golang.org/x/tools \
&& (cd /go/src/golang.org/x/tools && git checkout -q $GO_TOOLS_COMMIT) \
&& go install -v golang.org/x/tools/cmd/cover \
&& go install -v golang.org/x/tools/cmd/vet
# Grab Go's lint tool
ENV GO_LINT_COMMIT f42f5c1c440621302702cb0741e9d2ca547ae80f
RUN git clone https://github.com/golang/lint.git /go/src/github.com/golang/lint \
&& (cd /go/src/github.com/golang/lint && git checkout -q $GO_LINT_COMMIT) \
&& go install -v github.com/golang/lint/golint
# Install registry
ENV REGISTRY_COMMIT ec87e9b6971d831f0eff752ddb54fb64693e51cd
RUN set -x \
&& export GOPATH="$(mktemp -d)" \
&& git clone https://github.com/docker/distribution.git "$GOPATH/src/github.com/docker/distribution" \
&& (cd "$GOPATH/src/github.com/docker/distribution" && git checkout -q "$REGISTRY_COMMIT") \
&& GOPATH="$GOPATH/src/github.com/docker/distribution/Godeps/_workspace:$GOPATH" \
go build -o /usr/local/bin/registry-v2 github.com/docker/distribution/cmd/registry \
&& rm -rf "$GOPATH"
# Install notary server
ENV NOTARY_VERSION docker-v1.11-3
RUN set -x \
&& export GO15VENDOREXPERIMENT=1 \
&& export GOPATH="$(mktemp -d)" \
&& git clone https://github.com/docker/notary.git "$GOPATH/src/github.com/docker/notary" \
&& (cd "$GOPATH/src/github.com/docker/notary" && git checkout -q "$NOTARY_VERSION") \
&& GOPATH="$GOPATH/src/github.com/docker/notary/vendor:$GOPATH" \
go build -o /usr/local/bin/notary-server github.com/docker/notary/cmd/notary-server \
&& rm -rf "$GOPATH"
# Get the "docker-py" source so we can run their integration tests
ENV DOCKER_PY_COMMIT e2878cbcc3a7eef99917adc1be252800b0e41ece
RUN git clone https://github.com/docker/docker-py.git /docker-py \
&& cd /docker-py \
&& git checkout -q $DOCKER_PY_COMMIT \
&& pip install -r test-requirements.txt
# Set user.email so crosbymichael's in-container merge commits go smoothly
RUN git config --global user.email 'docker-dummy@example.com'
# Add an unprivileged user to be used for tests which need it
RUN groupadd -r docker
RUN useradd --create-home --gid docker unprivilegeduser
VOLUME /var/lib/docker
WORKDIR /go/src/github.com/docker/docker
ENV DOCKER_BUILDTAGS apparmor pkcs11 selinux
# Let us use a .bashrc file
RUN ln -sfv $PWD/.bashrc ~/.bashrc
# Register Docker's bash completion.
RUN ln -sv $PWD/contrib/completion/bash/docker /etc/bash_completion.d/docker
# Get useful and necessary Hub images so we can "docker load" locally instead of pulling
COPY contrib/download-frozen-image-v2.sh /go/src/github.com/docker/docker/contrib/
RUN ./contrib/download-frozen-image-v2.sh /docker-frozen-images \
s390x/buildpack-deps:jessie@sha256:4d1381224acaca6c4bfe3604de3af6972083a8558a99672cb6989c7541780099 \
s390x/busybox:latest@sha256:dd61522c983884a66ed72d60301925889028c6d2d5e0220a8fe1d9b4c6a4f01b \
s390x/debian:jessie@sha256:b74c863400909eff3c5e196cac9bfd1f6333ce47aae6a38398d87d5875da170a \
s390x/hello-world:latest@sha256:780d80b3a7677c3788c0d5cd9168281320c8d4a6d9183892d8ee5cdd610f5699
# see also "hack/make/.ensure-frozen-images" (which needs to be updated any time this list is)
# Download man page generator
RUN set -x \
&& export GOPATH="$(mktemp -d)" \
&& git clone --depth 1 -b v1.0.4 https://github.com/cpuguy83/go-md2man.git "$GOPATH/src/github.com/cpuguy83/go-md2man" \
&& git clone --depth 1 -b v1.4 https://github.com/russross/blackfriday.git "$GOPATH/src/github.com/russross/blackfriday" \
&& go get -v -d github.com/cpuguy83/go-md2man \
&& go build -v -o /usr/local/bin/go-md2man github.com/cpuguy83/go-md2man \
&& rm -rf "$GOPATH"
# Download toml validator
ENV TOMLV_COMMIT 9baf8a8a9f2ed20a8e54160840c492f937eeaf9a
RUN set -x \
&& export GOPATH="$(mktemp -d)" \
&& git clone https://github.com/BurntSushi/toml.git "$GOPATH/src/github.com/BurntSushi/toml" \
&& (cd "$GOPATH/src/github.com/BurntSushi/toml" && git checkout -q "$TOMLV_COMMIT") \
&& go build -v -o /usr/local/bin/tomlv github.com/BurntSushi/toml/cmd/tomlv \
&& rm -rf "$GOPATH"
# Build/install the tool for embedding resources in Windows binaries
ENV RSRC_VERSION v2
RUN set -x \
&& export GOPATH="$(mktemp -d)" \
&& git clone --depth 1 -b "$RSRC_VERSION" https://github.com/akavel/rsrc.git "$GOPATH/src/github.com/akavel/rsrc" \
&& go build -v -o /usr/local/bin/rsrc github.com/akavel/rsrc \
&& rm -rf "$GOPATH"
# Install runc
ENV RUNC_COMMIT e87436998478d222be209707503c27f6f91be0c5
RUN set -x \
&& export GOPATH="$(mktemp -d)" \
&& git clone git://github.com/opencontainers/runc.git "$GOPATH/src/github.com/opencontainers/runc" \
&& cd "$GOPATH/src/github.com/opencontainers/runc" \
&& git checkout -q "$RUNC_COMMIT" \
&& make static BUILDTAGS="seccomp apparmor selinux" \
&& cp runc /usr/local/bin/docker-runc
# Install containerd
ENV CONTAINERD_COMMIT d2f03861c91edaafdcb3961461bf82ae83785ed7
RUN set -x \
&& export GOPATH="$(mktemp -d)" \
&& git clone git://github.com/docker/containerd.git "$GOPATH/src/github.com/docker/containerd" \
&& cd "$GOPATH/src/github.com/docker/containerd" \
&& git checkout -q "$CONTAINERD_COMMIT" \
&& make static \
&& cp bin/containerd /usr/local/bin/docker-containerd \
&& cp bin/containerd-shim /usr/local/bin/docker-containerd-shim \
&& cp bin/ctr /usr/local/bin/docker-containerd-ctr
# Wrap all commands in the "docker-in-docker" script to allow nested containers
ENTRYPOINT ["hack/dind"]
# Upload docker source
COPY . /go/src/github.com/docker/docker

View File

@@ -1,56 +0,0 @@
# docker build -t docker:simple -f Dockerfile.simple .
# docker run --rm docker:simple hack/make.sh dynbinary
# docker run --rm --privileged docker:simple hack/dind hack/make.sh test-unit
# docker run --rm --privileged -v /var/lib/docker docker:simple hack/dind hack/make.sh dynbinary test-integration-cli
# This represents the bare minimum required to build and test Docker.
FROM debian:jessie
# compile and runtime deps
# https://github.com/docker/docker/blob/master/project/PACKAGERS.md#build-dependencies
# https://github.com/docker/docker/blob/master/project/PACKAGERS.md#runtime-dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
btrfs-tools \
curl \
gcc \
git \
golang \
libdevmapper-dev \
libsqlite3-dev \
\
ca-certificates \
e2fsprogs \
iptables \
procps \
xfsprogs \
xz-utils \
\
aufs-tools \
&& rm -rf /var/lib/apt/lists/*
# Install runc
ENV RUNC_COMMIT e87436998478d222be209707503c27f6f91be0c5
RUN set -x \
&& export GOPATH="$(mktemp -d)" \
&& git clone git://github.com/opencontainers/runc.git "$GOPATH/src/github.com/opencontainers/runc" \
&& cd "$GOPATH/src/github.com/opencontainers/runc" \
&& git checkout -q "$RUNC_COMMIT" \
&& make static BUILDTAGS="seccomp apparmor selinux" \
&& cp runc /usr/local/bin/docker-runc
# Install containerd
ENV CONTAINERD_COMMIT d2f03861c91edaafdcb3961461bf82ae83785ed7
RUN set -x \
&& export GOPATH="$(mktemp -d)" \
&& git clone git://github.com/docker/containerd.git "$GOPATH/src/github.com/docker/containerd" \
&& cd "$GOPATH/src/github.com/docker/containerd" \
&& git checkout -q "$CONTAINERD_COMMIT" \
&& make static \
&& cp bin/containerd /usr/local/bin/docker-containerd \
&& cp bin/containerd-shim /usr/local/bin/docker-containerd-shim \
&& cp bin/ctr /usr/local/bin/docker-containerd-ctr
ENV AUTO_GOPATH 1
WORKDIR /usr/src/docker
COPY . /usr/src/docker

View File

@@ -1,101 +0,0 @@
# This file describes the standard way to build Docker, using a docker container on Windows
# Server 2016
#
# Usage:
#
# # Assemble the full dev environment. This is slow the first time. Run this from
# # a directory containing the sources you are validating. For example from
# # c:\go\src\github.com\docker\docker
#
# docker build -t docker -f Dockerfile.windows .
#
#
# # Build docker in a container. Run the following from a Windows cmd command prompt,
# # replacing c:\built with the directory you want the binaries to be placed on the
# # host system.
#
# docker run --rm -v "c:\built:c:\target" docker sh -c 'cd /c/go/src/github.com/docker/docker; hack/make.sh binary; ec=$?; if [ $ec -eq 0 ]; then robocopy /c/go/src/github.com/docker/docker/bundles/$(cat VERSION)/binary /c/target/binary; fi; exit $ec'
#
# Important notes:
# ---------------
#
# 'Start-Sleep' is a deliberate workaround for a current problem on containers in Windows
# Server 2016. It ensures that the network is up and available for when the command is
# network related. This bug is being tracked internally at Microsoft and exists in TP4.
# Generally a sleep of 1 or 2 seconds is probably enough, but we make it 5 to keep the build
# file as bulletproof as possible. This isn't a big deal as it only runs the first time.
#
# The cygwin posix utilities from GIT aren't usable interactively as at January 2016. This
# is because they require a console window which isn't present in a container in Windows.
# See the example at the top of this file. Do NOT use -it in that docker run!!!
#
# Don't try to use a volume for passing the source through. The cygwin posix utilities will
# balk at reparse points. Again, see the example at the top of this file on how to use a volume
# to get the built binary out of the container.
#
# The steps are minimised dramatically to improve performance (TP4 is slow on commit)
FROM windowsservercore
# Environment variable notes:
# - GO_VERSION must be consistent with the 'Dockerfile' used by Linux.
# - FROM_DOCKERFILE is used for detection of building within a container.
ENV GO_VERSION=1.5.4 \
GIT_LOCATION=https://github.com/git-for-windows/git/releases/download/v2.7.2.windows.1/Git-2.7.2-64-bit.exe \
RSRC_COMMIT=ba14da1f827188454a4591717fff29999010887f \
GOPATH=C:/go;C:/go/src/github.com/docker/docker/vendor \
FROM_DOCKERFILE=1
WORKDIR c:/
# Everything downloaded/installed in one go (better performance, esp on TP4)
RUN \
setx /M Path "c:\git\cmd;c:\git\bin;c:\git\usr\bin;%Path%;c:\gcc\bin;c:\go\bin" && \
setx GOROOT "c:\go" && \
powershell -command \
$ErrorActionPreference = 'Stop'; \
Start-Sleep -Seconds 5; \
Function Download-File([string] $source, [string] $target) { \
$wc = New-Object net.webclient; $wc.Downloadfile($source, $target) \
} \
\
Write-Host INFO: Downloading git...; \
Download-File %GIT_LOCATION% gitsetup.exe; \
\
Write-Host INFO: Downloading go...; \
Download-File https://storage.googleapis.com/golang/go%GO_VERSION%.windows-amd64.msi go.msi; \
\
Write-Host INFO: Downloading compiler 1 of 3...; \
Download-File https://raw.githubusercontent.com/jhowardmsft/docker-tdmgcc/master/gcc.zip gcc.zip; \
\
Write-Host INFO: Downloading compiler 2 of 3...; \
Download-File https://raw.githubusercontent.com/jhowardmsft/docker-tdmgcc/master/runtime.zip runtime.zip; \
\
Write-Host INFO: Downloading compiler 3 of 3...; \
Download-File https://raw.githubusercontent.com/jhowardmsft/docker-tdmgcc/master/binutils.zip binutils.zip; \
\
Write-Host INFO: Installing git...; \
Start-Process gitsetup.exe -ArgumentList '/VERYSILENT /SUPPRESSMSGBOXES /CLOSEAPPLICATIONS /DIR=c:\git\' -Wait; \
\
Write-Host INFO: Installing go..."; \
Start-Process msiexec -ArgumentList '-i go.msi -quiet' -Wait; \
\
Write-Host INFO: Unzipping compiler...; \
c:\git\usr\bin\unzip.exe -q -o gcc.zip -d /c/gcc; \
c:\git\usr\bin\unzip.exe -q -o runtime.zip -d /c/gcc; \
c:\git\usr\bin\unzip.exe -q -o binutils.zip -d /c/gcc"; \
\
Write-Host INFO: Removing interim files; \
Remove-Item *.zip; \
Remove-Item go.msi; \
Remove-Item gitsetup.exe; \
\
Write-Host INFO: Cloning and installing RSRC; \
c:\git\bin\git.exe clone https://github.com/akavel/rsrc.git c:\go\src\github.com\akavel\rsrc; \
cd \go\src\github.com\akavel\rsrc; c:\git\bin\git.exe checkout -q %RSRC_COMMIT%; c:\go\bin\go.exe install -v; \
\
Write-Host INFO: Completed
# Prepare for building
COPY . /go/src/github.com/docker/docker


@@ -1,7 +1,7 @@
Apache License
Version 2.0, January 2004
https://www.apache.org/licenses/
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
@@ -176,13 +176,13 @@
END OF TERMS AND CONDITIONS
Copyright 2013-2016 Docker, Inc.
Copyright 2013-2015 Docker, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,


@@ -1,255 +1,9 @@
# Docker maintainers file
#
# This file describes who runs the docker/docker project and how.
# This is a living document - if you see something out of date or missing, speak up!
#
# It is structured to be consumable by both humans and programs.
# To extract its contents programmatically, use any TOML-compliant
# parser.
#
# This file is compiled into the MAINTAINERS file in docker/opensource.
#
[Org]
[Org."Core maintainers"]
# The Core maintainers are the ghostbusters of the project: when there's a problem others
# can't solve, they show up and fix it with bizarre devices and weaponry.
# They have final say on technical implementation and coding style.
# They are ultimately responsible for quality in all its forms: usability polish,
# bugfixes, performance, stability, etc. When ownership can cleanly be passed to
# a subsystem, they are responsible for doing so and holding the
# subsystem maintainers accountable. If ownership is unclear, they are the de facto owners.
# For each release (including minor releases), a "release captain" is assigned from the
# pool of core maintainers. Rotation is encouraged across all maintainers, to ensure
# the release process is clear and up-to-date.
people = [
"aaronlehmann",
"calavera",
"coolljt0725",
"cpuguy83",
"crosbymichael",
"duglin",
"estesp",
"icecrime",
"jhowardmsft",
"jfrazelle",
"lk4d4",
"mhbauer",
"runcom",
"tianon",
"tibor",
"tonistiigi",
"unclejack",
"vbatts",
"vdemeester"
]
[Org."Docs maintainers"]
# TODO Describe the docs maintainers role.
people = [
"jamtur01",
"moxiegirl",
"sven",
"thajeztah"
]
[Org.Curators]
# The curators help ensure that incoming issues and pull requests are properly triaged and
# that our various contribution and reviewing processes are respected. With their knowledge of
# the repository activity, they can also guide contributors to relevant material or
# discussions.
#
# They are neither code nor docs reviewers, so they are never expected to merge. They can
# however:
# - close an issue or pull request when it's an exact duplicate
# - close an issue or pull request when it's inappropriate or off-topic
people = [
"programmerq",
"thajeztah"
]
[Org.Alumni]
# This list contains maintainers that are no longer active on the project.
# It is thanks to these people that the project has become what it is today.
# Thank you!
people = [
# As a maintainer, Erik was responsible for the "builder", and
# started the first designs for the new networking model in
# Docker. Erik is now working on all kinds of plugins for Docker
# (https://github.com/contiv) and various open source projects
# in his own repository https://github.com/erikh. You may
# still stumble into him in our issue tracker, or on IRC.
"erikh",
# Victor is one of the earliest contributors to Docker, having worked on the
# project when it was still "dotCloud" in April 2013. He's been responsible
# for multiple releases (https://github.com/docker/docker/pulls?q=is%3Apr+bump+in%3Atitle+author%3Avieux),
# and up until today (2015), our number 2 contributor. Although he's no longer
# a maintainer for the Docker "Engine", he's still actively involved in other
# Docker projects, and most likely can be found in the Docker Swarm repository,
# for which he's a core maintainer.
"vieux",
# Vishnu became a maintainer to help out on the daemon codebase and
# libcontainer integration. He's currently involved in the
# Open Containers Initiative, working on the specifications,
# besides his work on cAdvisor and Kubernetes for Google.
"vishh"
]
[people]
# A reference list of all people associated with the project.
# All other sections should refer to people by their canonical key
# in the people section.
# ADD YOURSELF HERE IN ALPHABETICAL ORDER
[people.aaronlehmann]
Name = "Aaron Lehmann"
Email = "aaron.lehmann@docker.com"
GitHub = "aaronlehmann"
[people.calavera]
Name = "David Calavera"
Email = "david.calavera@gmail.com"
GitHub = "calavera"
[people.coolljt0725]
Name = "Lei Jitang"
Email = "leijitang@huawei.com"
GitHub = "coolljt0725"
[people.cpuguy83]
Name = "Brian Goff"
Email = "cpuguy83@gmail.com"
GitHub = "cpuguy83"
[people.crosbymichael]
Name = "Michael Crosby"
Email = "crosbymichael@gmail.com"
GitHub = "crosbymichael"
[people.duglin]
Name = "Doug Davis"
Email = "dug@us.ibm.com"
GitHub = "duglin"
[people.erikh]
Name = "Erik Hollensbe"
Email = "erik@docker.com"
GitHub = "erikh"
[people.estesp]
Name = "Phil Estes"
Email = "estesp@linux.vnet.ibm.com"
GitHub = "estesp"
[people.icecrime]
Name = "Arnaud Porterie"
Email = "arnaud@docker.com"
GitHub = "icecrime"
[people.jamtur01]
Name = "James Turnbull"
Email = "james@lovedthanlost.net"
GitHub = "jamtur01"
[people.jhowardmsft]
Name = "John Howard"
Email = "jhoward@microsoft.com"
GitHub = "jhowardmsft"
[people.jfrazelle]
Name = "Jessie Frazelle"
Email = "jess@linux.com"
GitHub = "jfrazelle"
[people.lk4d4]
Name = "Alexander Morozov"
Email = "lk4d4@docker.com"
GitHub = "lk4d4"
[people.mhbauer]
Name = "Morgan Bauer"
Email = "mbauer@us.ibm.com"
GitHub = "mhbauer"
[people.moxiegirl]
Name = "Mary Anthony"
Email = "mary.anthony@docker.com"
GitHub = "moxiegirl"
[people.programmerq]
Name = "Jeff Anderson"
Email = "jeff@docker.com"
GitHub = "programmerq"
[people.runcom]
Name = "Antonio Murdaca"
Email = "runcom@redhat.com"
GitHub = "runcom"
[people.shykes]
Name = "Solomon Hykes"
Email = "solomon@docker.com"
GitHub = "shykes"
[people.sven]
Name = "Sven Dowideit"
Email = "SvenDowideit@home.org.au"
GitHub = "SvenDowideit"
[people.thajeztah]
Name = "Sebastiaan van Stijn"
Email = "github@gone.nl"
GitHub = "thaJeztah"
[people.tianon]
Name = "Tianon Gravi"
Email = "admwiggin@gmail.com"
GitHub = "tianon"
[people.tibor]
Name = "Tibor Vass"
Email = "tibor@docker.com"
GitHub = "tiborvass"
[people.tonistiigi]
Name = "Tõnis Tiigi"
Email = "tonis@docker.com"
GitHub = "tonistiigi"
[people.unclejack]
Name = "Cristian Staretu"
Email = "cristian.staretu@gmail.com"
GitHub = "unclejack"
[people.vbatts]
Name = "Vincent Batts"
Email = "vbatts@redhat.com"
GitHub = "vbatts"
[people.vdemeester]
Name = "Vincent Demeester"
Email = "vincent@sbr.pm"
GitHub = "vdemeester"
[people.vieux]
Name = "Victor Vieux"
Email = "vieux@docker.com"
GitHub = "vieux"
[people.vishh]
Name = "Vishnu Kannan"
Email = "vishnuk@google.com"
GitHub = "vishh"
Solomon Hykes <solomon@docker.com> (@shykes)
Victor Vieux <vieux@docker.com> (@vieux)
Michael Crosby <michael@crosbymichael.com> (@crosbymichael)
.mailmap: Tianon Gravi <admwiggin@gmail.com> (@tianon)
.travis.yml: Tianon Gravi <admwiggin@gmail.com> (@tianon)
AUTHORS: Tianon Gravi <admwiggin@gmail.com> (@tianon)
Dockerfile: Tianon Gravi <admwiggin@gmail.com> (@tianon)
Makefile: Tianon Gravi <admwiggin@gmail.com> (@tianon)
.dockerignore: Tianon Gravi <admwiggin@gmail.com> (@tianon)
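
The header of the MAINTAINERS file above notes that it is structured to be consumable by programs through any TOML-compliant parser. A rough sketch of what that could look like in Go, assuming the github.com/BurntSushi/toml package; the struct shape is only inferred from the layout shown above and is not part of the repository:

package main

import (
	"fmt"
	"log"

	"github.com/BurntSushi/toml"
)

// maintainersFile mirrors the layout shown above: [Org.*] tables hold
// "people" lists, and [people.*] tables hold the canonical contact details.
type maintainersFile struct {
	Org map[string]struct {
		People []string `toml:"people"`
	} `toml:"Org"`
	People map[string]struct {
		Name   string `toml:"Name"`
		Email  string `toml:"Email"`
		GitHub string `toml:"GitHub"`
	} `toml:"people"`
}

func main() {
	var m maintainersFile
	if _, err := toml.DecodeFile("MAINTAINERS", &m); err != nil {
		log.Fatal(err)
	}
	for group, members := range m.Org {
		fmt.Printf("%s: %d people\n", group, len(members.People))
	}
	for key, p := range m.People {
		fmt.Printf("%-15s %s <%s> (GitHub: %s)\n", key, p.Name, p.Email, p.GitHub)
	}
}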

Makefile

@@ -1,56 +1,39 @@
.PHONY: all binary build cross default docs docs-build docs-shell shell test test-docker-py test-integration-cli test-unit validate
# get OS/Arch of docker engine
DOCKER_OSARCH := $(shell bash -c 'source hack/make/.detect-daemon-osarch && echo $${DOCKER_ENGINE_OSARCH:-$$DOCKER_CLIENT_OSARCH}')
DOCKERFILE := $(shell bash -c 'source hack/make/.detect-daemon-osarch && echo $${DOCKERFILE}')
.PHONY: all binary build cross default docs docs-build docs-shell shell test test-unit test-integration test-integration-cli test-docker-py validate
# env vars passed through directly to Docker's build scripts
# to allow things like `make DOCKER_CLIENTONLY=1 binary` easily
# `docs/sources/contributing/devenvironment.md ` and `project/PACKAGERS.md` have some limited documentation of some of these
DOCKER_ENVS := \
-e BUILDFLAGS \
-e KEEPBUNDLE \
-e DOCKER_BUILD_GOGC \
-e DOCKER_BUILD_PKGS \
-e DOCKER_CLIENTONLY \
-e DOCKER_DEBUG \
-e DOCKER_EXPERIMENTAL \
-e DOCKER_EXECDRIVER \
-e DOCKER_GRAPHDRIVER \
-e DOCKER_INCREMENTAL_BINARY \
-e DOCKER_REMAP_ROOT \
-e DOCKER_STORAGE_OPTS \
-e DOCKER_USERLANDPROXY \
-e TESTDIRS \
-e TESTFLAGS \
-e TIMEOUT
# note: we _cannot_ add "-e DOCKER_BUILDTAGS" here because even if it's unset in the shell, that would shadow the "ENV DOCKER_BUILDTAGS" set in our Dockerfile, which is very important for our official builds
# to allow `make BIND_DIR=. shell` or `make BIND_DIR= test`
# to allow `make BINDDIR=. shell` or `make BINDDIR= test`
# (default to no bind mount if DOCKER_HOST is set)
# note: BINDDIR is supported for backwards-compatibility here
BIND_DIR := $(if $(BINDDIR),$(BINDDIR),$(if $(DOCKER_HOST),,bundles))
DOCKER_MOUNT := $(if $(BIND_DIR),-v "$(CURDIR)/$(BIND_DIR):/go/src/github.com/docker/docker/$(BIND_DIR)")
BINDDIR := $(if $(DOCKER_HOST),,bundles)
DOCKER_MOUNT := $(if $(BINDDIR),-v "$(CURDIR)/$(BINDDIR):/go/src/github.com/docker/docker/$(BINDDIR)")
# This allows the test suite to be able to run without worrying about the underlying fs used by the container running the daemon (e.g. aufs-on-aufs), so long as the host running the container is running a supported fs.
# The volume will be cleaned up when the container is removed due to `--rm`.
# Note that `BIND_DIR` will already be set to `bundles` if `DOCKER_HOST` is not set (see above BIND_DIR line), in such case this will do nothing since `DOCKER_MOUNT` will already be set.
DOCKER_MOUNT := $(if $(DOCKER_MOUNT),$(DOCKER_MOUNT),-v "/go/src/github.com/docker/docker/bundles")
# to allow `make DOCSDIR=docs docs-shell` (to create a bind mount in docs)
DOCS_MOUNT := $(if $(DOCSDIR),-v $(CURDIR)/$(DOCSDIR):/$(DOCSDIR))
# to allow `make DOCSPORT=9000 docs`
DOCSPORT := 8000
GIT_BRANCH := $(shell git rev-parse --abbrev-ref HEAD 2>/dev/null)
DOCKER_IMAGE := docker-dev$(if $(GIT_BRANCH),:$(GIT_BRANCH))
DOCKER_IMAGE := docker$(if $(GIT_BRANCH),:$(GIT_BRANCH))
DOCKER_DOCS_IMAGE := docker-docs$(if $(GIT_BRANCH),:$(GIT_BRANCH))
DOCKER_FLAGS := docker run --rm -i --privileged $(DOCKER_ENVS) $(DOCKER_MOUNT)
DOCKER_RUN_DOCKER := docker run --rm -it --privileged $(DOCKER_ENVS) $(DOCKER_MOUNT) "$(DOCKER_IMAGE)"
# if this session isn't interactive, then we don't want to allocate a
# TTY, which would fail, but if it is interactive, we do want to attach
# so that the user can send e.g. ^C through.
INTERACTIVE := $(shell [ -t 0 ] && echo 1 || echo 0)
ifeq ($(INTERACTIVE), 1)
DOCKER_FLAGS += -t
endif
DOCKER_RUN_DOCS := docker run --rm -it $(DOCS_MOUNT) -e AWS_S3_BUCKET -e NOCACHE
DOCKER_RUN_DOCKER := $(DOCKER_FLAGS) "$(DOCKER_IMAGE)"
# for some docs workarounds (see below in "docs-build" target)
GITCOMMIT := $(shell git rev-parse --short HEAD 2>/dev/null)
default: binary
@@ -60,47 +43,52 @@ all: build
binary: build
$(DOCKER_RUN_DOCKER) hack/make.sh binary
build: bundles
docker build ${DOCKER_BUILD_ARGS} -t "$(DOCKER_IMAGE)" -f "$(DOCKERFILE)" .
bundles:
mkdir bundles
cross: build
$(DOCKER_RUN_DOCKER) hack/make.sh dynbinary binary cross
$(DOCKER_RUN_DOCKER) hack/make.sh binary cross
win: build
$(DOCKER_RUN_DOCKER) hack/make.sh win
docs: docs-build
$(DOCKER_RUN_DOCS) -p $(if $(DOCSPORT),$(DOCSPORT):)8000 "$(DOCKER_DOCS_IMAGE)" mkdocs serve
tgz: build
$(DOCKER_RUN_DOCKER) hack/make.sh dynbinary binary cross tgz
docs-shell: docs-build
$(DOCKER_RUN_DOCS) -p $(if $(DOCSPORT),$(DOCSPORT):)8000 "$(DOCKER_DOCS_IMAGE)" bash
deb: build
$(DOCKER_RUN_DOCKER) hack/make.sh dynbinary build-deb
docs-release: docs-build
$(DOCKER_RUN_DOCS) -e OPTIONS -e BUILD_ROOT -e DISTRIBUTION_ID "$(DOCKER_DOCS_IMAGE)" ./release.sh
docs:
$(MAKE) -C docs docs
gccgo: build
$(DOCKER_RUN_DOCKER) hack/make.sh gccgo
rpm: build
$(DOCKER_RUN_DOCKER) hack/make.sh dynbinary build-rpm
shell: build
$(DOCKER_RUN_DOCKER) bash
docs-test: docs-build
$(DOCKER_RUN_DOCS) "$(DOCKER_DOCS_IMAGE)" ./test.sh
test: build
$(DOCKER_RUN_DOCKER) hack/make.sh dynbinary cross test-unit test-integration-cli test-docker-py
test-docker-py: build
$(DOCKER_RUN_DOCKER) hack/make.sh dynbinary test-docker-py
test-integration-cli: build
$(DOCKER_RUN_DOCKER) hack/make.sh dynbinary test-integration-cli
$(DOCKER_RUN_DOCKER) hack/make.sh binary cross test-unit test-integration test-integration-cli test-docker-py
test-unit: build
$(DOCKER_RUN_DOCKER) hack/make.sh test-unit
test-integration: build
$(DOCKER_RUN_DOCKER) hack/make.sh test-integration
test-integration-cli: build
$(DOCKER_RUN_DOCKER) hack/make.sh binary test-integration-cli
test-docker-py: build
$(DOCKER_RUN_DOCKER) hack/make.sh binary test-docker-py
validate: build
$(DOCKER_RUN_DOCKER) hack/make.sh validate-dco validate-default-seccomp validate-gofmt validate-pkg validate-lint validate-test validate-toml validate-vet validate-vendor
$(DOCKER_RUN_DOCKER) hack/make.sh validate-gofmt validate-dco validate-toml
shell: build
$(DOCKER_RUN_DOCKER) bash
build: bundles
docker build -t "$(DOCKER_IMAGE)" .
docs-build:
( git remote | grep -v upstream ) || git diff --name-status upstream/release..upstream/docs docs/ > docs/changed-files
cp ./VERSION docs/VERSION
echo "$(GIT_BRANCH)" > docs/GIT_BRANCH
echo "$(AWS_S3_BUCKET)" > docs/AWS_S3_BUCKET
echo "$(GITCOMMIT)" > docs/GITCOMMIT
docker build -t "$(DOCKER_DOCS_IMAGE)" docs
bundles:
mkdir bundles

NOTICE

@@ -1,7 +1,7 @@
Docker
Copyright 2012-2016 Docker, Inc.
Copyright 2012-2014 Docker, Inc.
This product includes software developed at Docker, Inc. (https://www.docker.com).
This product includes software developed at Docker, Inc. (http://www.docker.com).
This product contains software (https://github.com/kr/pty) developed
by Keith Rarick, licensed under the MIT License.
@@ -10,10 +10,10 @@ The following is courtesy of our legal counsel:
Use and transfer of Docker may be subject to certain restrictions by the
United States and other governments.
United States and other governments.
It is your responsibility to ensure that your use and/or transfer does not
violate applicable laws.
violate applicable laws.
For more information, please see https://www.bis.doc.gov
For more information, please see http://www.bis.doc.gov
See also https://www.apache.org/dev/crypto.html and/or seek legal counsel.
See also http://www.apache.org/dev/crypto.html and/or seek legal counsel.

README.md

@@ -1,24 +1,24 @@
Docker: the container engine [![Release](https://img.shields.io/github/release/docker/docker.svg)](https://github.com/docker/docker/releases/latest)
============================
Docker: the Linux container engine
==================================
Docker is an open source project to pack, ship and run any application
as a lightweight container.
as a lightweight container
Docker containers are both *hardware-agnostic* and *platform-agnostic*.
This means they can run anywhere, from your laptop to the largest
cloud compute instance and everything in between - and they don't require
EC2 compute instance and everything in between - and they don't require
you to use a particular language, framework or packaging system. That
makes them great building blocks for deploying and scaling web apps,
databases, and backend services without depending on a particular stack
or provider.
Docker began as an open-source implementation of the deployment engine which
powers [dotCloud](https://www.dotcloud.com), a popular Platform-as-a-Service.
powers [dotCloud](http://dotcloud.com), a popular Platform-as-a-Service.
It benefits directly from the experience accumulated over several years
of large-scale operation and support of hundreds of thousands of
applications and databases.
![](docs/static_files/docker-logo-compressed.png "Docker")
![Docker L](docs/theme/mkdocs/images/docker-logo-compressed.png "Docker")
## Security Disclosure
@@ -30,7 +30,7 @@ security@docker.com and not by creating a github issue.
A common method for distributing applications and sandboxing their
execution is to use virtual machines, or VMs. Typical VM formats are
VMware's vmdk, Oracle VirtualBox's vdi, and Amazon EC2's ami. In theory
VMWare's vmdk, Oracle Virtualbox's vdi, and Amazon EC2's ami. In theory
these formats should allow every developer to automatically package
their application into a "machine" for easy distribution and deployment.
In practice, that almost never happens, for a few reasons:
@@ -56,12 +56,12 @@ By contrast, Docker relies on a different sandboxing method known as
*containerization*. Unlike traditional virtualization, containerization
takes place at the kernel level. Most modern operating system kernels
now support the primitives necessary for containerization, including
Linux with [openvz](https://openvz.org),
Linux with [openvz](http://openvz.org),
[vserver](http://linux-vserver.org) and more recently
[lxc](https://linuxcontainers.org/), Solaris with
[zones](https://docs.oracle.com/cd/E26502_01/html/E29024/preface-1.html#scrolltoc),
[lxc](http://lxc.sourceforge.net), Solaris with
[zones](http://docs.oracle.com/cd/E26502_01/html/E29024/preface-1.html#scrolltoc),
and FreeBSD with
[Jails](https://www.freebsd.org/doc/handbook/jails.html).
[Jails](http://www.freebsd.org/doc/handbook/jails.html).
Docker builds on top of these low-level primitives to offer developers a
portable format and runtime environment that solves all four problems.
@@ -105,7 +105,7 @@ This is usually difficult for several reasons:
these situations with various degrees of ease - but they all
handle them in different and incompatible ways, which again forces
the developer to do extra work.
* *Custom dependencies*. A developer may need to prepare a custom
version of their application's dependency. Some packaging systems
can handle custom versions of a dependency, others can't - and all
@@ -115,7 +115,7 @@ This is usually difficult for several reasons:
Docker solves the problem of dependency hell by giving the developer a simple
way to express *all* their application's dependencies in one place, while
streamlining the process of assembling them. If this makes you think of
[XKCD 927](https://xkcd.com/927/), don't worry. Docker doesn't
[XKCD 927](http://xkcd.com/927/), don't worry. Docker doesn't
*replace* your favorite packaging systems. It simply orchestrates
their use in a simple and repeatable way. How does it do that? With
layers.
@@ -143,22 +143,23 @@ as they can be built by running a Unix command in a container.
Getting started
===============
Docker can be installed either on your computer for building applications or
on servers for running them. To get started, [check out the installation
instructions in the
documentation](https://docs.docker.com/engine/installation/).
Docker can be installed on your local machine as well as servers - both
bare metal and virtualized. It is available as a binary on most modern
Linux systems, or as a VM on Windows, Mac and other systems.
We also offer an [interactive tutorial](https://www.docker.com/tryit/)
We also offer an [interactive tutorial](http://www.docker.com/tryit/)
for quickly learning the basics of using Docker.
For up-to-date install instructions, see the [Docs](http://docs.docker.com).
Usage examples
==============
Docker can be used to run short-lived commands, long-running daemons
(app servers, databases, etc.), interactive shell sessions, etc.
(app servers, databases etc.), interactive shell sessions, etc.
You can find a [list of real-world
examples](https://docs.docker.com/engine/examples/) in the
examples](http://docs.docker.com/examples/) in the
documentation.
Under the hood
@@ -167,107 +168,44 @@ Under the hood
Under the hood, Docker is built on the following components:
* The
[cgroups](https://www.kernel.org/doc/Documentation/cgroup-v1/cgroups.txt)
[cgroup](http://blog.dotcloud.com/kernel-secrets-from-the-paas-garage-part-24-c)
and
[namespaces](http://man7.org/linux/man-pages/man7/namespaces.7.html)
capabilities of the Linux kernel
* The [Go](https://golang.org) programming language
* The [Docker Image Specification](https://github.com/docker/docker/blob/master/image/spec/v1.md)
* The [Libcontainer Specification](https://github.com/opencontainers/runc/blob/master/libcontainer/SPEC.md)
[namespacing](http://blog.dotcloud.com/under-the-hood-linux-kernels-on-dotcloud-part)
capabilities of the Linux kernel;
* The [Go](http://golang.org) programming language.
Contributing to Docker [![GoDoc](https://godoc.org/github.com/docker/docker?status.svg)](https://godoc.org/github.com/docker/docker)
Contributing to Docker
======================
| **Master** (Linux) | **Experimental** (linux) | **Windows** | **FreeBSD** |
|------------------|----------------------|---------|---------|
| [![Jenkins Build Status](https://jenkins.dockerproject.org/view/Docker/job/Docker%20Master/badge/icon)](https://jenkins.dockerproject.org/view/Docker/job/Docker%20Master/) | [![Jenkins Build Status](https://jenkins.dockerproject.org/view/Docker/job/Docker%20Master%20%28experimental%29/badge/icon)](https://jenkins.dockerproject.org/view/Docker/job/Docker%20Master%20%28experimental%29/) | [![Build Status](http://jenkins.dockerproject.org/job/Docker%20Master%20(windows)/badge/icon)](http://jenkins.dockerproject.org/job/Docker%20Master%20(windows)/) | [![Build Status](http://jenkins.dockerproject.org/job/Docker%20Master%20(freebsd)/badge/icon)](http://jenkins.dockerproject.org/job/Docker%20Master%20(freebsd)/) |
[![GoDoc](https://godoc.org/github.com/docker/docker?status.png)](https://godoc.org/github.com/docker/docker)
[![Jenkins Build Status](https://jenkins.dockerproject.com/job/Docker%20Master/badge/icon)](https://jenkins.dockerproject.com/job/Docker%20Master/)
Want to hack on Docker? Awesome! We have [instructions to help you get
started contributing code or documentation](https://docs.docker.com/opensource/project/who-written-for/).
started](CONTRIBUTING.md). If you'd like to contribute to the
documentation, please take a look at this [README.md](https://github.com/docker/docker/blob/master/docs/README.md).
These instructions are probably not perfect; please let us know if anything
feels wrong or incomplete. Better yet, submit a PR and improve them yourself.
Getting the development builds
==============================
Want to run Docker from a master build? You can download
master builds at [master.dockerproject.org](https://master.dockerproject.org).
master builds at [master.dockerproject.com](https://master.dockerproject.com).
They are updated with each commit merged into the master branch.
Don't know how to use that super cool new feature in the master build? Check
out the master docs at
[docs.master.dockerproject.org](http://docs.master.dockerproject.org).
How the project is run
======================
Docker is a very, very active project. If you want to learn more about how it is run,
or want to get more involved, the best place to start is [the project directory](https://github.com/docker/docker/tree/master/project).
We are always open to suggestions on process improvements, and are always looking for more maintainers.
### Talking to other Docker users and contributors
<table class="tg">
<col width="45%">
<col width="65%">
<tr>
<td>Internet&nbsp;Relay&nbsp;Chat&nbsp;(IRC)</td>
<td>
<p>
IRC is a direct line to our most knowledgeable Docker users; we have
both the <code>#docker</code> and <code>#docker-dev</code> groups on
<strong>irc.freenode.net</strong>.
IRC is a rich chat protocol but it can overwhelm new users. You can search
<a href="https://botbot.me/freenode/docker/#" target="_blank">our chat archives</a>.
</p>
Read our <a href="https://docs.docker.com/project/get-help/#irc-quickstart" target="_blank">IRC quickstart guide</a> for an easy way to get started.
</td>
</tr>
<tr>
<td>Google Groups</td>
<td>
There are two groups.
<a href="https://groups.google.com/forum/#!forum/docker-user" target="_blank">Docker-user</a>
is for people using Docker containers.
The <a href="https://groups.google.com/forum/#!forum/docker-dev" target="_blank">docker-dev</a>
group is for contributors and other people contributing to the Docker
project.
You can join them without a Google account by sending an email to, e.g., "docker-user+subscribe@googlegroups.com".
After receiving the join-request message, you can simply reply to it to confirm the subscription.
</td>
</tr>
<tr>
<td>Twitter</td>
<td>
You can follow <a href="https://twitter.com/docker/" target="_blank">Docker's Twitter feed</a>
to get updates on our products. You can also tweet us questions or just
share blogs or stories.
</td>
</tr>
<tr>
<td>Stack Overflow</td>
<td>
Stack Overflow has over 7000 Docker questions listed. We regularly
monitor <a href="https://stackoverflow.com/search?tab=newest&q=docker" target="_blank">Docker questions</a>
and so do many other knowledgeable Docker users.
</td>
</tr>
</table>
[docs.master.dockerproject.com](http://docs.master.dockerproject.com).
### Legal
*Brought to you courtesy of our legal counsel. For more context,
please see the [NOTICE](https://github.com/docker/docker/blob/master/NOTICE) document in this repo.*
please see the "NOTICE" document in this repo.*
Use and transfer of Docker may be subject to certain restrictions by the
United States and other governments.
United States and other governments.
It is your responsibility to ensure that your use and/or transfer does not
violate applicable laws.
violate applicable laws.
For more information, please see https://www.bis.doc.gov
For more information, please see http://www.bis.doc.gov
Licensing
@@ -280,22 +218,17 @@ Other Docker Related Projects
=============================
There are a number of projects under development that are based on Docker's
core technology. These projects expand the tooling built around the
Docker platform to broaden its application and utility.
Docker platform to broaden its application and utility.
* [Docker Registry](https://github.com/docker/distribution): Registry
server for Docker (hosting/delivery of repositories and images)
* [Docker Machine](https://github.com/docker/machine): Machine management
for a container-centric world
* [Docker Swarm](https://github.com/docker/swarm): A Docker-native clustering
system
* [Docker Compose](https://github.com/docker/compose) (formerly Fig):
Define and run multi-container apps
* [Kitematic](https://github.com/docker/kitematic): The easiest way to use
Docker on Mac and Windows
If you know of another project underway that should be listed here, please help
If you know of another project underway that should be listed here, please help
us keep this list up-to-date by submitting a PR.
Awesome-Docker
==============
You can find more projects, tools and articles related to Docker on the [awesome-docker list](https://github.com/veggiemonk/awesome-docker). Add your project there.
* [Docker Registry](https://github.com/docker/docker-registry): Registry
server for Docker (hosting/delivering of repositories and images)
* [Docker Machine](https://github.com/docker/machine): Machine management
for a container-centric world
* [Docker Swarm](https://github.com/docker/swarm): A Docker-native clustering
system
* [Docker Compose, aka Fig](https://github.com/docker/fig):
Multi-container application management


@@ -1,140 +0,0 @@
Docker Engine Roadmap
=====================
### How should I use this document?
This document provides a description of the items the project has decided to prioritize. It should
serve as a reference point for Docker contributors to understand where the project is going, and
help determine whether a contribution could conflict with longer-term plans.
The fact that a feature isn't listed here doesn't mean that a patch for it will automatically be
refused (except for those mentioned as "frozen features" below)! We are always happy to receive
patches for cool new features we haven't thought about, or didn't judge a priority. Please
understand, however, that such patches might take longer for us to review.
### How can I help?
Short-term objectives are listed in the [wiki](https://github.com/docker/docker/wiki) and described
in [Issues](https://github.com/docker/docker/issues?q=is%3Aopen+is%3Aissue+label%3Aroadmap). Our
goal is to split up the workload in such a way that anybody can jump in and help. Please comment on
an issue if you want to take it, to avoid duplicating effort! Similarly, if a maintainer is already
assigned to an issue you'd like to participate in, pinging them on IRC or GitHub to offer your help is
the best way to go.
### How can I add something to the roadmap?
The roadmap process is new to the Docker Engine: we are only beginning to structure and document the
project objectives. Our immediate goal is to be more transparent, and work with our community to
focus our efforts on fewer prioritized topics.
We hope to offer in the near future a process allowing anyone to propose a topic to the roadmap, but
we are not quite there yet. For the time being, the BDFL remains the keeper of the roadmap, and we
won't be accepting pull requests adding or removing items from this file.
# 1. Features and refactoring
## 1.1 Runtime improvements
We recently introduced [`runC`](https://runc.io) as a standalone low-level tool for container
execution. The initial goal was to integrate runC as a replacement in the Engine for the traditional
default libcontainer `execdriver`, but the Engine internals were not ready for this.
As runC continued evolving, and the OCI specification along with it, we created
[`containerd`](https://containerd.tools/), a daemon to control and monitor multiple `runC` instances. This is
the new target for Engine integration, as it can entirely replace the whole `execdriver`
architecture, and container monitoring along with it.
Docker Engine will rely on a long-running `containerd` companion daemon for all container execution
related operations. This could open the door in the future for Engine restarts without interrupting
running containers.
## 1.2 Plugins improvements
Docker Engine 1.7.0 introduced plugin support, initially for the use cases of volumes and networks
extensions. The plugin infrastructure was kept minimal as we were collecting use cases and real
world feedback before optimizing for any particular workflow.
In the future, we'd like plugins to become first class citizens, and encourage an ecosystem of
plugins. This implies in particular making it trivially easy to distribute plugins as containers
through any Registry instance, as well as solving the commonly heard pain points of plugins needing
to be treated as somewhat special (being active at all times, started before any other user
containers, and not as easily dismissed).
## 1.3 Internal decoupling
A lot of work has been done in trying to decouple the Docker Engine's internals. In particular, the
API implementation has been refactored and ongoing work is happening to move the code to a separate
repository ([`docker/engine-api`](https://github.com/docker/engine-api)), and the Builder side of
the daemon is now [fully independent](https://github.com/docker/docker/tree/master/builder) while
still residing in the same repository.
We are exploring ways to go further with that decoupling, capitalizing on the work introduced by the
runtime renovation and plugins improvement efforts. Indeed, the combination of `containerd` support
with the concept of "special" containers opens the door for bootstrapping more Engine internals
using the same facilities.
## 1.4 Cluster capable Engine
The community has been pushing for a more cluster capable Docker Engine, and a huge effort was spent
adding features such as multihost networking, and node discovery down at the Engine level. Yet, the
Engine is currently incapable of making scheduling decisions alone, and continues to rely on Swarm
for that.
We plan to complete this effort and make the Engine fully cluster capable. Since multiple instances
of the Docker Engine are already capable of discovering each other and establishing overlay networking
for their containers to communicate, the next step is for a given Engine to gain the ability to dispatch
work to another node in the cluster. This will be introduced in a backward-compatible way, such that a
`docker run` invocation on a particular node remains fully deterministic.
# 2 Frozen features
## 2.1 Docker exec
We won't accept patches expanding the surface of `docker exec`, which we intend to keep as a
*debugging* feature, as well as being strongly dependent on the Runtime ingredient effort.
## 2.2 Dockerfile syntax
The Dockerfile syntax as we know it is simple, and has proven successful in supporting all our
[official images](https://github.com/docker-library/official-images). Although this is *not* a
definitive move, we temporarily won't accept more patches to the Dockerfile syntax for several
reasons:
- The long-term impact of syntax changes is a sensitive matter that requires an amount of attention
that the volume of the Engine codebase and activity today doesn't allow us to provide.
- Allowing the Builder to be implemented as a separate utility consuming the Engine's API will
open the door for many possibilities, such as offering alternate syntaxes or DSL for existing
languages without cluttering the Engine's codebase.
- A standalone Builder will also offer the opportunity for a better dedicated group of maintainers
to own the Dockerfile syntax and decide collectively on the direction to give it.
- Our experience with official images tends to show that no new instruction or syntax expansion is
*strictly* necessary for the majority of use cases, and although we are aware many things are
still lacking for many users, we cannot make this a priority yet for the above reasons.
Again, this is not about saying that the Dockerfile syntax is done, it's about making choices about
what we want to do first!
## 2.3 Remote Registry Operations
A large amount of work is ongoing in the area of image distribution and provenance. This includes
moving to the V2 Registry API and heavily refactoring the code that powers these features. The
desired result is more secure, reliable and easier to use image distribution.
Part of the problem with this part of the code base is the lack of a stable and flexible interface.
If new features are added that access the registry without solidifying these interfaces, achieving
feature parity will continue to be elusive. While we get a handle on this situation, we are imposing
a moratorium on new code that accesses the Registry API in commands that don't already make remote
calls.
Currently, only the following commands cause interaction with a remote registry:
- push
- pull
- run
- build
- search
- login
In the interest of stabilizing the registry access model during this ongoing work, we are not
accepting additions to other commands that will cause remote interaction with the Registry API. This
moratorium will lift when the goals of the distribution project have been met.


@@ -1,45 +0,0 @@
# Vendoring policies
This document outlines recommended vendoring policies for Docker repositories.
(For example, libnetwork is a Docker repo and logrus is not.)
## Vendoring using tags
Commit-ID-based vendoring provides little or no information about the updates
being vendored. To fix this, vendors will now require that repositories use annotated
tags along with commit IDs to snapshot commits. Annotated tags by themselves
are not sufficient, since the same tag can be force-updated to reference
different commits.
Each tag should:
- Follow Semantic Versioning rules (refer to section on "Semantic Versioning")
- Have a corresponding entry in the change tracking document.
Each repo should:
- Have a change tracking document between tags/releases. Ex: CHANGELOG.md,
github releases file.
The goal here is for consuming repos to be able to use the tag version and
changelog updates to determine whether the vendoring will cause any breaking or
backward-incompatible changes. This also means that repos can declare a
dependency on a package of a specific version or greater, up to the next major
release, without encountering breaking changes.
## Semantic Versioning
Annotated version tags should follow Semantic Versioning policies.
According to http://semver.org:
"Given a version number MAJOR.MINOR.PATCH, increment the:
MAJOR version when you make incompatible API changes,
MINOR version when you add functionality in a backwards-compatible manner, and
PATCH version when you make backwards-compatible bug fixes.
Additional labels for pre-release and build metadata are available as extensions
to the MAJOR.MINOR.PATCH format."
## Vendoring cadence
In order to avoid huge vendoring changes, it is recommended to have a regular
cadence for vendoring updates, e.g. monthly.
## Pre-merge vendoring tests
All related repos will be vendored into docker/docker.
CI on docker/docker should catch any breaking changes involving multiple repos.
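
As a worked illustration of the Semantic Versioning rule quoted above, a consuming repo can compare two annotated tags to decide whether a vendoring bump may contain breaking changes. This is only a sketch under the plain MAJOR.MINOR.PATCH assumption; the helper is hypothetical and ignores pre-release and build-metadata suffixes:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

type version struct{ major, minor, patch int }

// parse splits a "vMAJOR.MINOR.PATCH" tag; pre-release suffixes are not handled.
func parse(tag string) (version, error) {
	parts := strings.Split(strings.TrimPrefix(tag, "v"), ".")
	if len(parts) != 3 {
		return version{}, fmt.Errorf("not MAJOR.MINOR.PATCH: %q", tag)
	}
	nums := make([]int, 3)
	for i, p := range parts {
		n, err := strconv.Atoi(p)
		if err != nil {
			return version{}, err
		}
		nums[i] = n
	}
	return version{nums[0], nums[1], nums[2]}, nil
}

func main() {
	from, _ := parse("v1.4.2")
	to, _ := parse("v1.5.0")
	switch {
	case to.major != from.major:
		fmt.Println("major bump: may contain incompatible API changes")
	case to.minor != from.minor:
		fmt.Println("minor bump: backwards-compatible functionality added")
	default:
		fmt.Println("patch bump: backwards-compatible bug fixes only")
	}
}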


@@ -1 +1 @@
1.11.0
1.5.0-rc4

api/MAINTAINERS

@@ -0,0 +1,2 @@
Victor Vieux <vieux@docker.com> (@vieux)
Jessie Frazelle <jess@docker.com> (@jfrazelle)

api/api_unit_test.go

@@ -0,0 +1,19 @@
package api
import (
"testing"
)
func TestJsonContentType(t *testing.T) {
if !MatchesContentType("application/json", "application/json") {
t.Fail()
}
if !MatchesContentType("application/json; charset=utf-8", "application/json") {
t.Fail()
}
if MatchesContentType("dockerapplication/json", "application/json") {
t.Fail()
}
}
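
The test above exercises the package's MatchesContentType helper. A minimal sketch of a matcher with the behaviour the test expects, built on the standard library's mime package (illustrative only, not necessarily the package's actual implementation):

package api

import "mime"

// matchesContentType strips any parameters (such as "; charset=utf-8")
// from the Content-Type header value and compares the bare media type.
func matchesContentType(contentType, expectedType string) bool {
	mediaType, _, err := mime.ParseMediaType(contentType)
	return err == nil && mediaType == expectedType
}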


@@ -1,109 +0,0 @@
package client
import (
"fmt"
"io"
"golang.org/x/net/context"
"github.com/Sirupsen/logrus"
Cli "github.com/docker/docker/cli"
flag "github.com/docker/docker/pkg/mflag"
"github.com/docker/docker/pkg/signal"
"github.com/docker/engine-api/types"
)
// CmdAttach attaches to a running container.
//
// Usage: docker attach [OPTIONS] CONTAINER
func (cli *DockerCli) CmdAttach(args ...string) error {
cmd := Cli.Subcmd("attach", []string{"CONTAINER"}, Cli.DockerCommands["attach"].Description, true)
noStdin := cmd.Bool([]string{"-no-stdin"}, false, "Do not attach STDIN")
proxy := cmd.Bool([]string{"-sig-proxy"}, true, "Proxy all received signals to the process")
detachKeys := cmd.String([]string{"-detach-keys"}, "", "Override the key sequence for detaching a container")
cmd.Require(flag.Exact, 1)
cmd.ParseFlags(args, true)
c, err := cli.client.ContainerInspect(context.Background(), cmd.Arg(0))
if err != nil {
return err
}
if !c.State.Running {
return fmt.Errorf("You cannot attach to a stopped container, start it first")
}
if c.State.Paused {
return fmt.Errorf("You cannot attach to a paused container, unpause it first")
}
if err := cli.CheckTtyInput(!*noStdin, c.Config.Tty); err != nil {
return err
}
if *detachKeys != "" {
cli.configFile.DetachKeys = *detachKeys
}
options := types.ContainerAttachOptions{
ContainerID: cmd.Arg(0),
Stream: true,
Stdin: !*noStdin && c.Config.OpenStdin,
Stdout: true,
Stderr: true,
DetachKeys: cli.configFile.DetachKeys,
}
var in io.ReadCloser
if options.Stdin {
in = cli.in
}
if *proxy && !c.Config.Tty {
sigc := cli.forwardAllSignals(options.ContainerID)
defer signal.StopCatch(sigc)
}
resp, err := cli.client.ContainerAttach(context.Background(), options)
if err != nil {
return err
}
defer resp.Close()
if in != nil && c.Config.Tty {
if err := cli.setRawTerminal(); err != nil {
return err
}
defer cli.restoreTerminal(in)
}
if c.Config.Tty && cli.isTerminalOut {
height, width := cli.getTtySize()
// To handle the case where a user repeatedly attaches/detaches without resizing their
// terminal, the only way to get the shell prompt to display for attaches 2+ is to artificially
// resize it, then go back to normal. Without this, every attach after the first will
// require the user to manually resize or hit enter.
cli.resizeTtyTo(cmd.Arg(0), height+1, width+1, false)
// After the above resizing occurs, the call to monitorTtySize below will handle resetting back
// to the actual size.
if err := cli.monitorTtySize(cmd.Arg(0), false); err != nil {
logrus.Debugf("Error monitoring TTY size: %s", err)
}
}
if err := cli.holdHijackedConnection(c.Config.Tty, in, cli.out, cli.err, resp); err != nil {
return err
}
_, status, err := getExitCode(cli, options.ContainerID)
if err != nil {
return err
}
if status != 0 {
return Cli.StatusError{StatusCode: status}
}
return nil
}


@@ -1,399 +0,0 @@
package client
import (
"archive/tar"
"bufio"
"bytes"
"fmt"
"io"
"os"
"path/filepath"
"regexp"
"runtime"
"golang.org/x/net/context"
"github.com/docker/docker/api"
"github.com/docker/docker/builder"
"github.com/docker/docker/builder/dockerignore"
Cli "github.com/docker/docker/cli"
"github.com/docker/docker/opts"
"github.com/docker/docker/pkg/archive"
"github.com/docker/docker/pkg/fileutils"
"github.com/docker/docker/pkg/jsonmessage"
flag "github.com/docker/docker/pkg/mflag"
"github.com/docker/docker/pkg/progress"
"github.com/docker/docker/pkg/streamformatter"
"github.com/docker/docker/pkg/urlutil"
"github.com/docker/docker/reference"
runconfigopts "github.com/docker/docker/runconfig/opts"
"github.com/docker/engine-api/types"
"github.com/docker/engine-api/types/container"
"github.com/docker/go-units"
)
type translatorFunc func(reference.NamedTagged) (reference.Canonical, error)
// CmdBuild builds a new image from the source code at a given path.
//
// If '-' is provided instead of a path or URL, Docker will build an image from either a Dockerfile or tar archive read from STDIN.
//
// Usage: docker build [OPTIONS] PATH | URL | -
func (cli *DockerCli) CmdBuild(args ...string) error {
cmd := Cli.Subcmd("build", []string{"PATH | URL | -"}, Cli.DockerCommands["build"].Description, true)
flTags := opts.NewListOpts(validateTag)
cmd.Var(&flTags, []string{"t", "-tag"}, "Name and optionally a tag in the 'name:tag' format")
suppressOutput := cmd.Bool([]string{"q", "-quiet"}, false, "Suppress the build output and print image ID on success")
noCache := cmd.Bool([]string{"-no-cache"}, false, "Do not use cache when building the image")
rm := cmd.Bool([]string{"-rm"}, true, "Remove intermediate containers after a successful build")
forceRm := cmd.Bool([]string{"-force-rm"}, false, "Always remove intermediate containers")
pull := cmd.Bool([]string{"-pull"}, false, "Always attempt to pull a newer version of the image")
dockerfileName := cmd.String([]string{"f", "-file"}, "", "Name of the Dockerfile (Default is 'PATH/Dockerfile')")
flMemoryString := cmd.String([]string{"m", "-memory"}, "", "Memory limit")
flMemorySwap := cmd.String([]string{"-memory-swap"}, "", "Swap limit equal to memory plus swap: '-1' to enable unlimited swap")
flShmSize := cmd.String([]string{"-shm-size"}, "", "Size of /dev/shm, default value is 64MB")
flCPUShares := cmd.Int64([]string{"#c", "-cpu-shares"}, 0, "CPU shares (relative weight)")
flCPUPeriod := cmd.Int64([]string{"-cpu-period"}, 0, "Limit the CPU CFS (Completely Fair Scheduler) period")
flCPUQuota := cmd.Int64([]string{"-cpu-quota"}, 0, "Limit the CPU CFS (Completely Fair Scheduler) quota")
flCPUSetCpus := cmd.String([]string{"-cpuset-cpus"}, "", "CPUs in which to allow execution (0-3, 0,1)")
flCPUSetMems := cmd.String([]string{"-cpuset-mems"}, "", "MEMs in which to allow execution (0-3, 0,1)")
flCgroupParent := cmd.String([]string{"-cgroup-parent"}, "", "Optional parent cgroup for the container")
flBuildArg := opts.NewListOpts(runconfigopts.ValidateEnv)
cmd.Var(&flBuildArg, []string{"-build-arg"}, "Set build-time variables")
isolation := cmd.String([]string{"-isolation"}, "", "Container isolation technology")
flLabels := opts.NewListOpts(nil)
cmd.Var(&flLabels, []string{"-label"}, "Set metadata for an image")
ulimits := make(map[string]*units.Ulimit)
flUlimits := runconfigopts.NewUlimitOpt(&ulimits)
cmd.Var(flUlimits, []string{"-ulimit"}, "Ulimit options")
cmd.Require(flag.Exact, 1)
// For trusted pull on "FROM <image>" instruction.
addTrustedFlags(cmd, true)
cmd.ParseFlags(args, true)
var (
ctx io.ReadCloser
err error
)
specifiedContext := cmd.Arg(0)
var (
contextDir string
tempDir string
relDockerfile string
progBuff io.Writer
buildBuff io.Writer
)
progBuff = cli.out
buildBuff = cli.out
if *suppressOutput {
progBuff = bytes.NewBuffer(nil)
buildBuff = bytes.NewBuffer(nil)
}
switch {
case specifiedContext == "-":
ctx, relDockerfile, err = builder.GetContextFromReader(cli.in, *dockerfileName)
case urlutil.IsGitURL(specifiedContext):
tempDir, relDockerfile, err = builder.GetContextFromGitURL(specifiedContext, *dockerfileName)
case urlutil.IsURL(specifiedContext):
ctx, relDockerfile, err = builder.GetContextFromURL(progBuff, specifiedContext, *dockerfileName)
default:
contextDir, relDockerfile, err = builder.GetContextFromLocalDir(specifiedContext, *dockerfileName)
}
if err != nil {
if *suppressOutput && urlutil.IsURL(specifiedContext) {
fmt.Fprintln(cli.err, progBuff)
}
return fmt.Errorf("unable to prepare context: %s", err)
}
if tempDir != "" {
defer os.RemoveAll(tempDir)
contextDir = tempDir
}
if ctx == nil {
// And canonicalize dockerfile name to a platform-independent one
relDockerfile, err = archive.CanonicalTarNameForPath(relDockerfile)
if err != nil {
return fmt.Errorf("cannot canonicalize dockerfile path %s: %v", relDockerfile, err)
}
f, err := os.Open(filepath.Join(contextDir, ".dockerignore"))
if err != nil && !os.IsNotExist(err) {
return err
}
var excludes []string
if err == nil {
excludes, err = dockerignore.ReadAll(f)
if err != nil {
return err
}
}
if err := builder.ValidateContextDirectory(contextDir, excludes); err != nil {
return fmt.Errorf("Error checking context: '%s'.", err)
}
// If .dockerignore mentions .dockerignore or the Dockerfile
// then make sure we send both files over to the daemon
// because Dockerfile is, obviously, needed no matter what, and
// .dockerignore is needed to know if either one needs to be
// removed. The daemon will remove them for us, if needed, after it
// parses the Dockerfile. Ignore errors here, as they will have been
// caught by validateContextDirectory above.
var includes = []string{"."}
keepThem1, _ := fileutils.Matches(".dockerignore", excludes)
keepThem2, _ := fileutils.Matches(relDockerfile, excludes)
if keepThem1 || keepThem2 {
includes = append(includes, ".dockerignore", relDockerfile)
}
ctx, err = archive.TarWithOptions(contextDir, &archive.TarOptions{
Compression: archive.Uncompressed,
ExcludePatterns: excludes,
IncludeFiles: includes,
})
if err != nil {
return err
}
}
var resolvedTags []*resolvedTag
if isTrusted() {
// Wrap the tar archive to replace the Dockerfile entry with the rewritten
// Dockerfile which uses trusted pulls.
ctx = replaceDockerfileTarWrapper(ctx, relDockerfile, cli.trustedReference, &resolvedTags)
}
// Setup an upload progress bar
progressOutput := streamformatter.NewStreamFormatter().NewProgressOutput(progBuff, true)
var body io.Reader = progress.NewProgressReader(ctx, progressOutput, 0, "", "Sending build context to Docker daemon")
var memory int64
if *flMemoryString != "" {
parsedMemory, err := units.RAMInBytes(*flMemoryString)
if err != nil {
return err
}
memory = parsedMemory
}
var memorySwap int64
if *flMemorySwap != "" {
if *flMemorySwap == "-1" {
memorySwap = -1
} else {
parsedMemorySwap, err := units.RAMInBytes(*flMemorySwap)
if err != nil {
return err
}
memorySwap = parsedMemorySwap
}
}
var shmSize int64
if *flShmSize != "" {
shmSize, err = units.RAMInBytes(*flShmSize)
if err != nil {
return err
}
}
options := types.ImageBuildOptions{
Context: body,
Memory: memory,
MemorySwap: memorySwap,
Tags: flTags.GetAll(),
SuppressOutput: *suppressOutput,
NoCache: *noCache,
Remove: *rm,
ForceRemove: *forceRm,
PullParent: *pull,
Isolation: container.Isolation(*isolation),
CPUSetCPUs: *flCPUSetCpus,
CPUSetMems: *flCPUSetMems,
CPUShares: *flCPUShares,
CPUQuota: *flCPUQuota,
CPUPeriod: *flCPUPeriod,
CgroupParent: *flCgroupParent,
Dockerfile: relDockerfile,
ShmSize: shmSize,
Ulimits: flUlimits.GetList(),
BuildArgs: runconfigopts.ConvertKVStringsToMap(flBuildArg.GetAll()),
AuthConfigs: cli.retrieveAuthConfigs(),
Labels: runconfigopts.ConvertKVStringsToMap(flLabels.GetAll()),
}
response, err := cli.client.ImageBuild(context.Background(), options)
if err != nil {
return err
}
defer response.Body.Close()
err = jsonmessage.DisplayJSONMessagesStream(response.Body, buildBuff, cli.outFd, cli.isTerminalOut, nil)
if err != nil {
if jerr, ok := err.(*jsonmessage.JSONError); ok {
// If no error code is set, default to 1
if jerr.Code == 0 {
jerr.Code = 1
}
if *suppressOutput {
fmt.Fprintf(cli.err, "%s%s", progBuff, buildBuff)
}
return Cli.StatusError{Status: jerr.Message, StatusCode: jerr.Code}
}
}
// Windows: show error message about modified file permissions if the
// daemon isn't running Windows.
if response.OSType != "windows" && runtime.GOOS == "windows" {
fmt.Fprintln(cli.err, `SECURITY WARNING: You are building a Docker image from Windows against a non-Windows Docker host. All files and directories added to build context will have '-rwxr-xr-x' permissions. It is recommended to double check and reset permissions for sensitive files and directories.`)
}
// Everything worked so if -q was provided the output from the daemon
// should be just the image ID and we'll print that to stdout.
if *suppressOutput {
fmt.Fprintf(cli.out, "%s", buildBuff)
}
if isTrusted() {
// Since the build was successful, now we must tag any of the resolved
// images from the above Dockerfile rewrite.
for _, resolved := range resolvedTags {
if err := cli.tagTrusted(resolved.digestRef, resolved.tagRef); err != nil {
return err
}
}
}
return nil
}
// validateTag checks if the given image name can be resolved.
func validateTag(rawRepo string) (string, error) {
_, err := reference.ParseNamed(rawRepo)
if err != nil {
return "", err
}
return rawRepo, nil
}
var dockerfileFromLinePattern = regexp.MustCompile(`(?i)^[\s]*FROM[ \f\r\t\v]+(?P<image>[^ \f\r\t\v\n#]+)`)
// resolvedTag records the repository, tag, and resolved digest reference
// from a Dockerfile rewrite.
type resolvedTag struct {
digestRef reference.Canonical
tagRef reference.NamedTagged
}
// rewriteDockerfileFrom rewrites the given Dockerfile by resolving images in
// "FROM <image>" instructions to a digest reference. `translator` is a
// function that takes a repository name and tag reference and returns a
// trusted digest reference.
func rewriteDockerfileFrom(dockerfile io.Reader, translator translatorFunc) (newDockerfile []byte, resolvedTags []*resolvedTag, err error) {
scanner := bufio.NewScanner(dockerfile)
buf := bytes.NewBuffer(nil)
// Scan the lines of the Dockerfile, looking for a "FROM" line.
for scanner.Scan() {
line := scanner.Text()
matches := dockerfileFromLinePattern.FindStringSubmatch(line)
if matches != nil && matches[1] != api.NoBaseImageSpecifier {
// Replace the line with a resolved "FROM repo@digest"
ref, err := reference.ParseNamed(matches[1])
if err != nil {
return nil, nil, err
}
ref = reference.WithDefaultTag(ref)
if ref, ok := ref.(reference.NamedTagged); ok && isTrusted() {
trustedRef, err := translator(ref)
if err != nil {
return nil, nil, err
}
line = dockerfileFromLinePattern.ReplaceAllLiteralString(line, fmt.Sprintf("FROM %s", trustedRef.String()))
resolvedTags = append(resolvedTags, &resolvedTag{
digestRef: trustedRef,
tagRef: ref,
})
}
}
_, err := fmt.Fprintln(buf, line)
if err != nil {
return nil, nil, err
}
}
return buf.Bytes(), resolvedTags, scanner.Err()
}
// replaceDockerfileTarWrapper wraps the given input tar archive stream and
// replaces the entry with the given Dockerfile name with the contents of the
// new Dockerfile. Returns a new tar archive stream with the replaced
// Dockerfile.
func replaceDockerfileTarWrapper(inputTarStream io.ReadCloser, dockerfileName string, translator translatorFunc, resolvedTags *[]*resolvedTag) io.ReadCloser {
pipeReader, pipeWriter := io.Pipe()
go func() {
tarReader := tar.NewReader(inputTarStream)
tarWriter := tar.NewWriter(pipeWriter)
defer inputTarStream.Close()
for {
hdr, err := tarReader.Next()
if err == io.EOF {
// Signals end of archive.
tarWriter.Close()
pipeWriter.Close()
return
}
if err != nil {
pipeWriter.CloseWithError(err)
return
}
var content io.Reader = tarReader
if hdr.Name == dockerfileName {
// This entry is the Dockerfile. Since the tar archive was
// generated from a directory on the local filesystem, the
// Dockerfile will only appear once in the archive.
var newDockerfile []byte
newDockerfile, *resolvedTags, err = rewriteDockerfileFrom(content, translator)
if err != nil {
pipeWriter.CloseWithError(err)
return
}
hdr.Size = int64(len(newDockerfile))
content = bytes.NewBuffer(newDockerfile)
}
if err := tarWriter.WriteHeader(hdr); err != nil {
pipeWriter.CloseWithError(err)
return
}
if _, err := io.Copy(tarWriter, content); err != nil {
pipeWriter.CloseWithError(err)
return
}
}
}()
return pipeReader
}


@@ -1,66 +1,116 @@
package client
import (
"crypto/tls"
"encoding/json"
"errors"
"fmt"
"io"
"net"
"net/http"
"os"
"runtime"
"reflect"
"strings"
"text/template"
"time"
"github.com/docker/docker/api"
"github.com/docker/docker/cli"
"github.com/docker/docker/cliconfig"
"github.com/docker/docker/cliconfig/credentials"
"github.com/docker/docker/dockerversion"
"github.com/docker/docker/opts"
flag "github.com/docker/docker/pkg/mflag"
"github.com/docker/docker/pkg/term"
"github.com/docker/engine-api/client"
"github.com/docker/go-connections/sockets"
"github.com/docker/go-connections/tlsconfig"
"github.com/docker/docker/registry"
)
// DockerCli represents the docker command line client.
// Instances of the client can be returned from NewDockerCli.
type DockerCli struct {
// initializing closure
init func() error
// configFile has the client configuration file
configFile *cliconfig.ConfigFile
// in holds the input stream and closer (io.ReadCloser) for the client.
in io.ReadCloser
// out holds the output stream (io.Writer) for the client.
out io.Writer
// err holds the error stream (io.Writer) for the client.
err io.Writer
// keyFile holds the key file as a string.
keyFile string
// inFd holds the file descriptor of the client's STDIN (if valid).
proto string
addr string
configFile *registry.ConfigFile
in io.ReadCloser
out io.Writer
err io.Writer
keyFile string
tlsConfig *tls.Config
scheme string
// inFd holds file descriptor of the client's STDIN, if it's a valid file
inFd uintptr
// outFd holds file descriptor of the client's STDOUT (if valid).
// outFd holds file descriptor of the client's STDOUT, if it's a valid file
outFd uintptr
// isTerminalIn indicates whether the client's STDIN is a TTY
// isTerminalIn describes if client's STDIN is a TTY
isTerminalIn bool
// isTerminalOut indicates whether the client's STDOUT is a TTY
// isTerminalOut describes if client's STDOUT is a TTY
isTerminalOut bool
// client is the http client that performs all API operations
client client.APIClient
// state holds the terminal state
state *term.State
transport *http.Transport
}
// Initialize calls the init function that will setup the configuration for the client
// such as the TLS, tcp and other parameters used to run the client.
func (cli *DockerCli) Initialize() error {
if cli.init == nil {
return nil
var funcMap = template.FuncMap{
"json": func(v interface{}) string {
a, _ := json.Marshal(v)
return string(a)
},
}
func (cli *DockerCli) getMethod(args ...string) (func(...string) error, bool) {
camelArgs := make([]string, len(args))
for i, s := range args {
if len(s) == 0 {
return nil, false
}
camelArgs[i] = strings.ToUpper(s[:1]) + strings.ToLower(s[1:])
}
return cli.init()
methodName := "Cmd" + strings.Join(camelArgs, "")
method := reflect.ValueOf(cli).MethodByName(methodName)
if !method.IsValid() {
return nil, false
}
return method.Interface().(func(...string) error), true
}
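For reference, getMethod above maps subcommand words onto Cmd-prefixed methods via reflection. A standalone sketch of the same camel-casing lookup; demoCLI and lookup are hypothetical names used only for illustration:

package main

import (
	"fmt"
	"reflect"
	"strings"
)

// demoCLI mirrors the dispatch trick above: subcommand words are camel-cased,
// concatenated into a method name, and resolved via reflection.
type demoCLI struct{}

func (demoCLI) CmdPs(args ...string) error        { fmt.Println("ps", args); return nil }
func (demoCLI) CmdNetworkLs(args ...string) error { fmt.Println("network ls", args); return nil }

func lookup(cli interface{}, words ...string) (func(...string) error, bool) {
	camel := make([]string, len(words))
	for i, w := range words {
		if w == "" {
			return nil, false
		}
		camel[i] = strings.ToUpper(w[:1]) + strings.ToLower(w[1:])
	}
	m := reflect.ValueOf(cli).MethodByName("Cmd" + strings.Join(camel, ""))
	if !m.IsValid() {
		return nil, false
	}
	return m.Interface().(func(...string) error), true
}

func main() {
	if fn, ok := lookup(demoCLI{}, "network", "ls"); ok {
		fn("--no-trunc") // resolves to CmdNetworkLs
	}
}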
// Cmd executes the specified command
func (cli *DockerCli) Cmd(args ...string) error {
if len(args) > 1 {
method, exists := cli.getMethod(args[:2]...)
if exists {
return method(args[2:]...)
}
}
if len(args) > 0 {
method, exists := cli.getMethod(args[0])
if !exists {
fmt.Fprintf(cli.err, "docker: '%s' is not a docker command. See 'docker --help'.\n", args[0])
os.Exit(1)
}
return method(args[1:]...)
}
return cli.CmdHelp()
}
func (cli *DockerCli) Subcmd(name, signature, description string, exitOnError bool) *flag.FlagSet {
var errorHandling flag.ErrorHandling
if exitOnError {
errorHandling = flag.ExitOnError
} else {
errorHandling = flag.ContinueOnError
}
flags := flag.NewFlagSet(name, errorHandling)
flags.Usage = func() {
options := ""
if flags.FlagCountUndeprecated() > 0 {
options = "[OPTIONS] "
}
fmt.Fprintf(cli.out, "\nUsage: docker %s %s%s\n\n%s\n\n", name, options, signature, description)
flags.SetOutput(cli.out)
flags.PrintDefaults()
os.Exit(0)
}
return flags
}
func (cli *DockerCli) LoadConfigFile() (err error) {
cli.configFile, err = registry.LoadConfig(os.Getenv("HOME"))
if err != nil {
fmt.Fprintf(cli.err, "WARNING: %s\n", err)
}
return err
}
// CheckTtyInput checks if we are trying to attach to a container tty
// from a non-tty client input stream, and if so, returns an error.
func (cli *DockerCli) CheckTtyInput(attachStdin, ttyMode bool) error {
// In order to attach to a container tty, input stream for the client must
// be a tty itself: redirecting or piping the client standard input is
@@ -71,145 +121,68 @@ func (cli *DockerCli) CheckTtyInput(attachStdin, ttyMode bool) error {
return nil
}
// PsFormat returns the format string specified in the configuration.
// String contains columns and format specification, for example {{ID}}\t{{Name}}.
func (cli *DockerCli) PsFormat() string {
return cli.configFile.PsFormat
}
func NewDockerCli(in io.ReadCloser, out, err io.Writer, keyFile string, proto, addr string, tlsConfig *tls.Config) *DockerCli {
var (
inFd uintptr
outFd uintptr
isTerminalIn = false
isTerminalOut = false
scheme = "http"
)
// ImagesFormat returns the format string specified in the configuration.
// String contains columns and format specification, for example {{ID}}\t{{Name}}.
func (cli *DockerCli) ImagesFormat() string {
return cli.configFile.ImagesFormat
}
func (cli *DockerCli) setRawTerminal() error {
if cli.isTerminalIn && os.Getenv("NORAW") == "" {
state, err := term.SetRawTerminal(cli.inFd)
if err != nil {
return err
}
cli.state = state
}
return nil
}
func (cli *DockerCli) restoreTerminal(in io.Closer) error {
if cli.state != nil {
term.RestoreTerminal(cli.inFd, cli.state)
}
// WARNING: DO NOT REMOVE THE OS CHECK !!!
// For some reason this Close call blocks on darwin.
// As the client exits right after, simply discard the close
// until we find a better solution.
if in != nil && runtime.GOOS != "darwin" {
return in.Close()
}
return nil
}
// NewDockerCli returns a DockerCli instance with IO output and error streams set by in, out and err.
// The key file, protocol (i.e. unix) and address are passed in as strings, along with the tls.Config. If the tls.Config
// is set the client scheme will be set to https.
// The client will be given a 32-second timeout (see https://github.com/docker/docker/pull/8035).
func NewDockerCli(in io.ReadCloser, out, err io.Writer, clientFlags *cli.ClientFlags) *DockerCli {
cli := &DockerCli{
in: in,
out: out,
err: err,
keyFile: clientFlags.Common.TrustKey,
if tlsConfig != nil {
scheme = "https"
}
cli.init = func() error {
clientFlags.PostParse()
configFile, e := cliconfig.Load(cliconfig.ConfigDir())
if e != nil {
fmt.Fprintf(cli.err, "WARNING: Error loading config file:%v\n", e)
if in != nil {
if file, ok := in.(*os.File); ok {
inFd = file.Fd()
isTerminalIn = term.IsTerminal(inFd)
}
if !configFile.ContainsAuth() {
credentials.DetectDefaultStore(configFile)
}
cli.configFile = configFile
host, err := getServerHost(clientFlags.Common.Hosts, clientFlags.Common.TLSOptions)
if err != nil {
return err
}
customHeaders := cli.configFile.HTTPHeaders
if customHeaders == nil {
customHeaders = map[string]string{}
}
customHeaders["User-Agent"] = clientUserAgent()
verStr := api.DefaultVersion.String()
if tmpStr := os.Getenv("DOCKER_API_VERSION"); tmpStr != "" {
verStr = tmpStr
}
httpClient, err := newHTTPClient(host, clientFlags.Common.TLSOptions)
if err != nil {
return err
}
client, err := client.NewClient(host, verStr, httpClient, customHeaders)
if err != nil {
return err
}
cli.client = client
if cli.in != nil {
cli.inFd, cli.isTerminalIn = term.GetFdInfo(cli.in)
}
if cli.out != nil {
cli.outFd, cli.isTerminalOut = term.GetFdInfo(cli.out)
}
return nil
}
return cli
}
func getServerHost(hosts []string, tlsOptions *tlsconfig.Options) (host string, err error) {
switch len(hosts) {
case 0:
host = os.Getenv("DOCKER_HOST")
case 1:
host = hosts[0]
default:
return "", errors.New("Please specify only one -H")
if out != nil {
if file, ok := out.(*os.File); ok {
outFd = file.Fd()
isTerminalOut = term.IsTerminal(outFd)
}
}
host, err = opts.ParseHost(tlsOptions != nil, host)
return
}
func newHTTPClient(host string, tlsOptions *tlsconfig.Options) (*http.Client, error) {
if tlsOptions == nil {
// let the api client configure the default transport.
return nil, nil
if err == nil {
err = out
}
config, err := tlsconfig.Client(*tlsOptions)
if err != nil {
return nil, err
}
// The transport is created here for reuse during the client session
tr := &http.Transport{
TLSClientConfig: config,
}
proto, addr, _, err := client.ParseHost(host)
if err != nil {
return nil, err
Proxy: http.ProxyFromEnvironment,
TLSClientConfig: tlsConfig,
}
sockets.ConfigureTransport(tr, proto, addr)
// Why 32? See issue 8035
timeout := 32 * time.Second
if proto == "unix" {
// no need to compress for local communications
tr.DisableCompression = true
tr.Dial = func(_, _ string) (net.Conn, error) {
return net.DialTimeout(proto, addr, timeout)
}
} else {
tr.Dial = (&net.Dialer{Timeout: timeout}).Dial
}
return &http.Client{
Transport: tr,
}, nil
}
func clientUserAgent() string {
return "Docker-Client/" + dockerversion.Version + " (" + runtime.GOOS + ")"
return &DockerCli{
proto: proto,
addr: addr,
in: in,
out: out,
err: err,
keyFile: keyFile,
inFd: inFd,
outFd: outFd,
isTerminalIn: isTerminalIn,
isTerminalOut: isTerminalOut,
tlsConfig: tlsConfig,
scheme: scheme,
transport: tr,
}
}
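The transport setup above dials local sockets with a 32-second timeout (issue 8035) and disables compression for unix connections. A minimal sketch of an equivalent unix-socket http.Client; the /var/run/docker.sock path is the conventional default and is only illustrative here:

package main

import (
	"net"
	"net/http"
	"time"
)

// newUnixClient reproduces the same idea as newHTTPClient above for the
// unix-socket case: a 32-second dial timeout and compression disabled.
func newUnixClient(socketPath string) *http.Client {
	timeout := 32 * time.Second
	tr := &http.Transport{
		DisableCompression: true,
		Dial: func(network, addr string) (net.Conn, error) {
			return net.DialTimeout("unix", socketPath, timeout)
		},
	}
	return &http.Client{Transport: tr}
}

func main() {
	c := newUnixClient("/var/run/docker.sock")
	_ = c // e.g. c.Get("http://unix/_ping") against a local daemon
}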

View File

@@ -1,5 +0,0 @@
// Package client provides a command-line interface for Docker.
//
// Run "docker help SUBCOMMAND" or "docker SUBCOMMAND --help" to see more information on any Docker subcommand, including the full list of options supported for the subcommand.
// See https://docs.docker.com/installation/ for instructions on installing Docker.
package client

api/client/commands.go (new file, 2764 lines)

File diff suppressed because it is too large

View File

@@ -1,85 +0,0 @@
package client
import (
"encoding/json"
"errors"
"fmt"
"golang.org/x/net/context"
Cli "github.com/docker/docker/cli"
"github.com/docker/docker/opts"
flag "github.com/docker/docker/pkg/mflag"
"github.com/docker/docker/reference"
"github.com/docker/engine-api/types"
"github.com/docker/engine-api/types/container"
)
// CmdCommit creates a new image from a container's changes.
//
// Usage: docker commit [OPTIONS] CONTAINER [REPOSITORY[:TAG]]
func (cli *DockerCli) CmdCommit(args ...string) error {
cmd := Cli.Subcmd("commit", []string{"CONTAINER [REPOSITORY[:TAG]]"}, Cli.DockerCommands["commit"].Description, true)
flPause := cmd.Bool([]string{"p", "-pause"}, true, "Pause container during commit")
flComment := cmd.String([]string{"m", "-message"}, "", "Commit message")
flAuthor := cmd.String([]string{"a", "-author"}, "", "Author (e.g., \"John Hannibal Smith <hannibal@a-team.com>\")")
flChanges := opts.NewListOpts(nil)
cmd.Var(&flChanges, []string{"c", "-change"}, "Apply Dockerfile instruction to the created image")
// FIXME: --run is deprecated, it will be replaced with inline Dockerfile commands.
flConfig := cmd.String([]string{"#-run"}, "", "This option is deprecated and will be removed in a future version in favor of inline Dockerfile-compatible commands")
cmd.Require(flag.Max, 2)
cmd.Require(flag.Min, 1)
cmd.ParseFlags(args, true)
var (
name = cmd.Arg(0)
repositoryAndTag = cmd.Arg(1)
repositoryName string
tag string
)
// Check if the given image name can be resolved
if repositoryAndTag != "" {
ref, err := reference.ParseNamed(repositoryAndTag)
if err != nil {
return err
}
repositoryName = ref.Name()
switch x := ref.(type) {
case reference.Canonical:
return errors.New("cannot commit to digest reference")
case reference.NamedTagged:
tag = x.Tag()
}
}
var config *container.Config
if *flConfig != "" {
config = &container.Config{}
if err := json.Unmarshal([]byte(*flConfig), config); err != nil {
return err
}
}
options := types.ContainerCommitOptions{
ContainerID: name,
RepositoryName: repositoryName,
Tag: tag,
Comment: *flComment,
Author: *flAuthor,
Changes: flChanges.GetAll(),
Pause: *flPause,
Config: config,
}
response, err := cli.client.ContainerCommit(context.Background(), options)
if err != nil {
return err
}
fmt.Fprintln(cli.out, response.ID)
return nil
}
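CmdCommit's handling of REPOSITORY[:TAG] hinges on the reference helpers imported above: digest (Canonical) references are rejected, and a tag is extracted when present. A small hedged sketch using those same helpers; splitRepoTag and the sample input are illustrative only:

package main

import (
	"errors"
	"fmt"

	"github.com/docker/docker/reference"
)

// splitRepoTag resolves a REPOSITORY[:TAG] argument the way CmdCommit does
// above: digest references are rejected, a tag is returned when present.
func splitRepoTag(arg string) (repo, tag string, err error) {
	ref, err := reference.ParseNamed(arg)
	if err != nil {
		return "", "", err
	}
	switch x := ref.(type) {
	case reference.Canonical:
		return "", "", errors.New("cannot commit to digest reference")
	case reference.NamedTagged:
		return ref.Name(), x.Tag(), nil
	default:
		return ref.Name(), "", nil
	}
}

func main() {
	repo, tag, _ := splitRepoTag("myorg/app:v1")
	fmt.Println(repo, tag) // expected: myorg/app v1
}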

View File

@@ -1,298 +0,0 @@
package client
import (
"fmt"
"io"
"os"
"path/filepath"
"strings"
"golang.org/x/net/context"
Cli "github.com/docker/docker/cli"
"github.com/docker/docker/pkg/archive"
flag "github.com/docker/docker/pkg/mflag"
"github.com/docker/docker/pkg/system"
"github.com/docker/engine-api/types"
)
type copyDirection int
const (
fromContainer copyDirection = (1 << iota)
toContainer
acrossContainers = fromContainer | toContainer
)
type cpConfig struct {
followLink bool
}
// CmdCp copies files/folders to or from a path in a container.
//
// When copying from a container, if DEST_PATH is '-' the data is written as a
// tar archive file to STDOUT.
//
// When copying to a container, if SRC_PATH is '-' the data is read as a tar
// archive file from STDIN, and the destination CONTAINER:DEST_PATH, must specify
// a directory.
//
// Usage:
// docker cp CONTAINER:SRC_PATH DEST_PATH|-
// docker cp SRC_PATH|- CONTAINER:DEST_PATH
func (cli *DockerCli) CmdCp(args ...string) error {
cmd := Cli.Subcmd(
"cp",
[]string{"CONTAINER:SRC_PATH DEST_PATH|-", "SRC_PATH|- CONTAINER:DEST_PATH"},
strings.Join([]string{
Cli.DockerCommands["cp"].Description,
"\nUse '-' as the source to read a tar archive from stdin\n",
"and extract it to a directory destination in a container.\n",
"Use '-' as the destination to stream a tar archive of a\n",
"container source to stdout.",
}, ""),
true,
)
followLink := cmd.Bool([]string{"L", "-follow-link"}, false, "Always follow symbol link in SRC_PATH")
cmd.Require(flag.Exact, 2)
cmd.ParseFlags(args, true)
if cmd.Arg(0) == "" {
return fmt.Errorf("source can not be empty")
}
if cmd.Arg(1) == "" {
return fmt.Errorf("destination can not be empty")
}
srcContainer, srcPath := splitCpArg(cmd.Arg(0))
dstContainer, dstPath := splitCpArg(cmd.Arg(1))
var direction copyDirection
if srcContainer != "" {
direction |= fromContainer
}
if dstContainer != "" {
direction |= toContainer
}
cpParam := &cpConfig{
followLink: *followLink,
}
switch direction {
case fromContainer:
return cli.copyFromContainer(srcContainer, srcPath, dstPath, cpParam)
case toContainer:
return cli.copyToContainer(srcPath, dstContainer, dstPath, cpParam)
case acrossContainers:
// Copying between containers isn't supported.
return fmt.Errorf("copying between containers is not supported")
default:
// User didn't specify any container.
return fmt.Errorf("must specify at least one container source")
}
}
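The copyDirection bit flags let CmdCp classify an invocation with a single switch. A standalone sketch of that classification; classify and its boolean inputs are illustrative stand-ins for the splitCpArg results:

package main

import "fmt"

// direction mirrors the copyDirection flags above; classify reproduces the
// bitmask accumulation that CmdCp performs before its switch.
type direction int

const (
	fromContainer direction = 1 << iota
	toContainer
	acrossContainers = fromContainer | toContainer
)

func classify(srcIsContainer, dstIsContainer bool) direction {
	var d direction
	if srcIsContainer {
		d |= fromContainer
	}
	if dstIsContainer {
		d |= toContainer
	}
	return d
}

func main() {
	fmt.Println(classify(true, false) == fromContainer)   // container -> local
	fmt.Println(classify(false, true) == toContainer)     // local -> container
	fmt.Println(classify(true, true) == acrossContainers) // rejected by CmdCp
}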
// We use `:` as a delimiter between CONTAINER and PATH, but `:` could also be
// in a valid LOCALPATH, like `file:name.txt`. We can resolve this ambiguity by
// requiring a LOCALPATH with a `:` to be made explicit with a relative or
// absolute path:
// `/path/to/file:name.txt` or `./file:name.txt`
//
// This is apparently how `scp` handles this as well:
// http://www.cyberciti.biz/faq/rsync-scp-file-name-with-colon-punctuation-in-it/
//
// We can't simply check for a filepath separator because container names may
// have a separator, e.g., "host0/cname1" if the container is in a Docker cluster,
// so we have to check for a `/` or `.` prefix. Also, in the case of a Windows
// client, a `:` could be part of an absolute Windows path, in which case it
// is immediately followed by a backslash.
func splitCpArg(arg string) (container, path string) {
if system.IsAbs(arg) {
// Explicit local absolute path, e.g., `C:\foo` or `/foo`.
return "", arg
}
parts := strings.SplitN(arg, ":", 2)
if len(parts) == 1 || strings.HasPrefix(parts[0], ".") {
// Either there's no `:` in the arg
// OR it's an explicit local relative path like `./file:name.txt`.
return "", arg
}
return parts[0], parts[1]
}
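To make the disambiguation rules above concrete, here is a simplified, Unix-only re-implementation with a few sample inputs; the real splitCpArg additionally handles Windows drive letters via system.IsAbs:

package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// splitArg illustrates the CONTAINER:PATH split described above.
func splitArg(arg string) (container, path string) {
	if filepath.IsAbs(arg) {
		return "", arg // explicit absolute local path, e.g. /foo:bar.txt
	}
	parts := strings.SplitN(arg, ":", 2)
	if len(parts) == 1 || strings.HasPrefix(parts[0], ".") {
		return "", arg // no colon, or explicit relative path like ./file:name.txt
	}
	return parts[0], parts[1]
}

func main() {
	for _, arg := range []string{
		"web:/var/log/app.log",   // container "web", path /var/log/app.log
		"./file:name.txt",        // local file whose name contains a colon
		"/path/to/file:name.txt", // local absolute path, colon kept
		"notes.txt",              // plain local path
	} {
		c, p := splitArg(arg)
		fmt.Printf("%-26s -> container=%q path=%q\n", arg, c, p)
	}
}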
func (cli *DockerCli) statContainerPath(containerName, path string) (types.ContainerPathStat, error) {
return cli.client.ContainerStatPath(context.Background(), containerName, path)
}
func resolveLocalPath(localPath string) (absPath string, err error) {
if absPath, err = filepath.Abs(localPath); err != nil {
return
}
return archive.PreserveTrailingDotOrSeparator(absPath, localPath), nil
}
func (cli *DockerCli) copyFromContainer(srcContainer, srcPath, dstPath string, cpParam *cpConfig) (err error) {
if dstPath != "-" {
// Get an absolute destination path.
dstPath, err = resolveLocalPath(dstPath)
if err != nil {
return err
}
}
// if the client requests to follow the symbolic link, we must determine the target file to be copied
var rebaseName string
if cpParam.followLink {
srcStat, err := cli.statContainerPath(srcContainer, srcPath)
// If the source is a symbolic link, follow it to find the real file to copy.
if err == nil && srcStat.Mode&os.ModeSymlink != 0 {
linkTarget := srcStat.LinkTarget
if !system.IsAbs(linkTarget) {
// Join with the parent directory.
srcParent, _ := archive.SplitPathDirEntry(srcPath)
linkTarget = filepath.Join(srcParent, linkTarget)
}
linkTarget, rebaseName = archive.GetRebaseName(srcPath, linkTarget)
srcPath = linkTarget
}
}
content, stat, err := cli.client.CopyFromContainer(context.Background(), srcContainer, srcPath)
if err != nil {
return err
}
defer content.Close()
if dstPath == "-" {
// Send the response to STDOUT.
_, err = io.Copy(os.Stdout, content)
return err
}
// Prepare source copy info.
srcInfo := archive.CopyInfo{
Path: srcPath,
Exists: true,
IsDir: stat.Mode.IsDir(),
RebaseName: rebaseName,
}
preArchive := content
if len(srcInfo.RebaseName) != 0 {
_, srcBase := archive.SplitPathDirEntry(srcInfo.Path)
preArchive = archive.RebaseArchiveEntries(content, srcBase, srcInfo.RebaseName)
}
// See comments in the implementation of `archive.CopyTo` for exactly what
// goes into deciding how and whether the source archive needs to be
// altered for the correct copy behavior.
return archive.CopyTo(preArchive, srcInfo, dstPath)
}
func (cli *DockerCli) copyToContainer(srcPath, dstContainer, dstPath string, cpParam *cpConfig) (err error) {
if srcPath != "-" {
// Get an absolute source path.
srcPath, err = resolveLocalPath(srcPath)
if err != nil {
return err
}
}
// In order to get the copy behavior right, we need to know information
// about both the source and destination. The API is a simple tar
// archive/extract API but we can use the stat info header about the
// destination to be more informed about exactly what the destination is.
// Prepare destination copy info by stat-ing the container path.
dstInfo := archive.CopyInfo{Path: dstPath}
dstStat, err := cli.statContainerPath(dstContainer, dstPath)
// If the destination is a symbolic link, we should evaluate it.
if err == nil && dstStat.Mode&os.ModeSymlink != 0 {
linkTarget := dstStat.LinkTarget
if !system.IsAbs(linkTarget) {
// Join with the parent directory.
dstParent, _ := archive.SplitPathDirEntry(dstPath)
linkTarget = filepath.Join(dstParent, linkTarget)
}
dstInfo.Path = linkTarget
dstStat, err = cli.statContainerPath(dstContainer, linkTarget)
}
// Ignore any error and assume that the parent directory of the destination
// path exists, in which case the copy may still succeed. If there is any
// type of conflict (e.g., non-directory overwriting an existing directory
// or vice versa) the extraction will fail. If the destination simply did
// not exist, but the parent directory does, the extraction will still
// succeed.
if err == nil {
dstInfo.Exists, dstInfo.IsDir = true, dstStat.Mode.IsDir()
}
var (
content io.Reader
resolvedDstPath string
)
if srcPath == "-" {
// Use STDIN.
content = os.Stdin
resolvedDstPath = dstInfo.Path
if !dstInfo.IsDir {
return fmt.Errorf("destination %q must be a directory", fmt.Sprintf("%s:%s", dstContainer, dstPath))
}
} else {
// Prepare source copy info.
srcInfo, err := archive.CopyInfoSourcePath(srcPath, cpParam.followLink)
if err != nil {
return err
}
srcArchive, err := archive.TarResource(srcInfo)
if err != nil {
return err
}
defer srcArchive.Close()
// With the stat info about the local source as well as the
// destination, we have enough information to know whether we need to
// alter the archive that we upload so that when the server extracts
// it to the specified directory in the container we get the desired
// copy behavior.
// See comments in the implementation of `archive.PrepareArchiveCopy`
// for exactly what goes into deciding how and whether the source
// archive needs to be altered for the correct copy behavior when it is
// extracted. This function also infers from the source and destination
// info which directory to extract to, which may be the parent of the
// destination that the user specified.
dstDir, preparedArchive, err := archive.PrepareArchiveCopy(srcArchive, srcInfo, dstInfo)
if err != nil {
return err
}
defer preparedArchive.Close()
resolvedDstPath = dstDir
content = preparedArchive
}
options := types.CopyToContainerOptions{
ContainerID: dstContainer,
Path: resolvedDstPath,
Content: content,
AllowOverwriteDirWithFile: false,
}
return cli.client.CopyToContainer(context.Background(), options)
}

View File

@@ -1,180 +0,0 @@
package client
import (
"fmt"
"io"
"os"
"golang.org/x/net/context"
Cli "github.com/docker/docker/cli"
"github.com/docker/docker/pkg/jsonmessage"
"github.com/docker/docker/reference"
"github.com/docker/docker/registry"
runconfigopts "github.com/docker/docker/runconfig/opts"
"github.com/docker/engine-api/client"
"github.com/docker/engine-api/types"
"github.com/docker/engine-api/types/container"
networktypes "github.com/docker/engine-api/types/network"
)
func (cli *DockerCli) pullImage(image string) error {
return cli.pullImageCustomOut(image, cli.out)
}
func (cli *DockerCli) pullImageCustomOut(image string, out io.Writer) error {
ref, err := reference.ParseNamed(image)
if err != nil {
return err
}
var tag string
switch x := reference.WithDefaultTag(ref).(type) {
case reference.Canonical:
tag = x.Digest().String()
case reference.NamedTagged:
tag = x.Tag()
}
// Resolve the Repository name from fqn to RepositoryInfo
repoInfo, err := registry.ParseRepositoryInfo(ref)
if err != nil {
return err
}
authConfig := cli.resolveAuthConfig(repoInfo.Index)
encodedAuth, err := encodeAuthToBase64(authConfig)
if err != nil {
return err
}
options := types.ImageCreateOptions{
Parent: ref.Name(),
Tag: tag,
RegistryAuth: encodedAuth,
}
responseBody, err := cli.client.ImageCreate(context.Background(), options)
if err != nil {
return err
}
defer responseBody.Close()
return jsonmessage.DisplayJSONMessagesStream(responseBody, out, cli.outFd, cli.isTerminalOut, nil)
}
type cidFile struct {
path string
file *os.File
written bool
}
func newCIDFile(path string) (*cidFile, error) {
if _, err := os.Stat(path); err == nil {
return nil, fmt.Errorf("Container ID file found, make sure the other container isn't running or delete %s", path)
}
f, err := os.Create(path)
if err != nil {
return nil, fmt.Errorf("Failed to create the container ID file: %s", err)
}
return &cidFile{path: path, file: f}, nil
}
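The Write and Close methods used on containerIDFile below are not part of this excerpt. A plausible sketch, consistent with how newCIDFile and the written flag are used (clean up the file on Close if no ID was ever written); treat it as an assumption, not the actual implementation:

// Not shown in this diff: plausible helpers for the cidFile type above. The
// written flag guards cleanup, so an aborted create does not leave an empty
// container-ID file behind.
func (cid *cidFile) Write(id string) error {
	if _, err := cid.file.Write([]byte(id)); err != nil {
		return fmt.Errorf("failed to write the container ID to the file: %s", err)
	}
	cid.written = true
	return nil
}

func (cid *cidFile) Close() error {
	cid.file.Close()
	if !cid.written {
		return os.Remove(cid.path)
	}
	return nil
}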
func (cli *DockerCli) createContainer(config *container.Config, hostConfig *container.HostConfig, networkingConfig *networktypes.NetworkingConfig, cidfile, name string) (*types.ContainerCreateResponse, error) {
var containerIDFile *cidFile
if cidfile != "" {
var err error
if containerIDFile, err = newCIDFile(cidfile); err != nil {
return nil, err
}
defer containerIDFile.Close()
}
var trustedRef reference.Canonical
_, ref, err := reference.ParseIDOrReference(config.Image)
if err != nil {
return nil, err
}
if ref != nil {
ref = reference.WithDefaultTag(ref)
if ref, ok := ref.(reference.NamedTagged); ok && isTrusted() {
var err error
trustedRef, err = cli.trustedReference(ref)
if err != nil {
return nil, err
}
config.Image = trustedRef.String()
}
}
// Create the container.
response, err := cli.client.ContainerCreate(context.Background(), config, hostConfig, networkingConfig, name)
// If the image was not found, try to pull it.
if err != nil {
if client.IsErrImageNotFound(err) && ref != nil {
fmt.Fprintf(cli.err, "Unable to find image '%s' locally\n", ref.String())
// we don't want to write to stdout anything apart from container.ID
if err = cli.pullImageCustomOut(config.Image, cli.err); err != nil {
return nil, err
}
if ref, ok := ref.(reference.NamedTagged); ok && trustedRef != nil {
if err := cli.tagTrusted(trustedRef, ref); err != nil {
return nil, err
}
}
// Retry
var retryErr error
response, retryErr = cli.client.ContainerCreate(context.Background(), config, hostConfig, networkingConfig, name)
if retryErr != nil {
return nil, retryErr
}
} else {
return nil, err
}
}
for _, warning := range response.Warnings {
fmt.Fprintf(cli.err, "WARNING: %s\n", warning)
}
if containerIDFile != nil {
if err = containerIDFile.Write(response.ID); err != nil {
return nil, err
}
}
return &response, nil
}
// CmdCreate creates a new container from a given image.
//
// Usage: docker create [OPTIONS] IMAGE [COMMAND] [ARG...]
func (cli *DockerCli) CmdCreate(args ...string) error {
cmd := Cli.Subcmd("create", []string{"IMAGE [COMMAND] [ARG...]"}, Cli.DockerCommands["create"].Description, true)
addTrustedFlags(cmd, true)
// These are flags not stored in Config/HostConfig
var (
flName = cmd.String([]string{"-name"}, "", "Assign a name to the container")
)
config, hostConfig, networkingConfig, cmd, err := runconfigopts.Parse(cmd, args)
if err != nil {
cmd.ReportError(err.Error(), true)
os.Exit(1)
}
if config.Image == "" {
cmd.Usage()
return nil
}
response, err := cli.createContainer(config, hostConfig, networkingConfig, hostConfig.ContainerIDFile, *flName)
if err != nil {
return err
}
fmt.Fprintf(cli.out, "%s\n", response.ID)
return nil
}

View File

@@ -1,49 +0,0 @@
package client
import (
"fmt"
"golang.org/x/net/context"
Cli "github.com/docker/docker/cli"
"github.com/docker/docker/pkg/archive"
flag "github.com/docker/docker/pkg/mflag"
)
// CmdDiff shows changes on a container's filesystem.
//
// Each changed file is printed on a separate line, prefixed with a single
// character that indicates the status of the file: C (modified), A (added),
// or D (deleted).
//
// Usage: docker diff CONTAINER
func (cli *DockerCli) CmdDiff(args ...string) error {
cmd := Cli.Subcmd("diff", []string{"CONTAINER"}, Cli.DockerCommands["diff"].Description, true)
cmd.Require(flag.Exact, 1)
cmd.ParseFlags(args, true)
if cmd.Arg(0) == "" {
return fmt.Errorf("Container name cannot be empty")
}
changes, err := cli.client.ContainerDiff(context.Background(), cmd.Arg(0))
if err != nil {
return err
}
for _, change := range changes {
var kind string
switch change.Kind {
case archive.ChangeModify:
kind = "C"
case archive.ChangeAdd:
kind = "A"
case archive.ChangeDelete:
kind = "D"
}
fmt.Fprintf(cli.out, "%s %s\n", kind, change.Path)
}
return nil
}

View File

@@ -1,146 +0,0 @@
package client
import (
"encoding/json"
"fmt"
"io"
"sort"
"strings"
"sync"
"time"
"golang.org/x/net/context"
"github.com/Sirupsen/logrus"
Cli "github.com/docker/docker/cli"
"github.com/docker/docker/opts"
"github.com/docker/docker/pkg/jsonlog"
flag "github.com/docker/docker/pkg/mflag"
"github.com/docker/engine-api/types"
eventtypes "github.com/docker/engine-api/types/events"
"github.com/docker/engine-api/types/filters"
)
// CmdEvents prints a live stream of real time events from the server.
//
// Usage: docker events [OPTIONS]
func (cli *DockerCli) CmdEvents(args ...string) error {
cmd := Cli.Subcmd("events", nil, Cli.DockerCommands["events"].Description, true)
since := cmd.String([]string{"-since"}, "", "Show all events created since timestamp")
until := cmd.String([]string{"-until"}, "", "Stream events until this timestamp")
flFilter := opts.NewListOpts(nil)
cmd.Var(&flFilter, []string{"f", "-filter"}, "Filter output based on conditions provided")
cmd.Require(flag.Exact, 0)
cmd.ParseFlags(args, true)
eventFilterArgs := filters.NewArgs()
// Consolidate all filter flags, and sanity check them early.
// They'll get processed in the daemon/server.
for _, f := range flFilter.GetAll() {
var err error
eventFilterArgs, err = filters.ParseFlag(f, eventFilterArgs)
if err != nil {
return err
}
}
options := types.EventsOptions{
Since: *since,
Until: *until,
Filters: eventFilterArgs,
}
responseBody, err := cli.client.Events(context.Background(), options)
if err != nil {
return err
}
defer responseBody.Close()
return streamEvents(responseBody, cli.out)
}
// streamEvents decodes and prints the incoming events to the provided output.
func streamEvents(input io.Reader, output io.Writer) error {
return decodeEvents(input, func(event eventtypes.Message, err error) error {
if err != nil {
return err
}
printOutput(event, output)
return nil
})
}
type eventProcessor func(event eventtypes.Message, err error) error
func decodeEvents(input io.Reader, ep eventProcessor) error {
dec := json.NewDecoder(input)
for {
var event eventtypes.Message
err := dec.Decode(&event)
if err != nil && err == io.EOF {
break
}
if procErr := ep(event, err); procErr != nil {
return procErr
}
}
return nil
}
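A self-contained version of the decode loop above, run against a canned stream of two JSON objects; miniEvent keeps only the fields printed here:

package main

import (
	"encoding/json"
	"fmt"
	"io"
	"strings"
)

// miniEvent is a reduced stand-in for eventtypes.Message.
type miniEvent struct {
	Type   string `json:"Type"`
	Action string `json:"Action"`
}

func main() {
	input := strings.NewReader(
		`{"Type":"container","Action":"start"}` + "\n" +
			`{"Type":"container","Action":"die"}` + "\n")

	dec := json.NewDecoder(input)
	for {
		var ev miniEvent
		if err := dec.Decode(&ev); err == io.EOF {
			break
		} else if err != nil {
			fmt.Println("decode error:", err)
			return
		}
		fmt.Println(ev.Type, ev.Action)
	}
}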
// printOutput prints all types of event information.
// Each output includes the event type, actor id, name and action.
// Actor attributes are printed at the end if the actor has any.
func printOutput(event eventtypes.Message, output io.Writer) {
if event.TimeNano != 0 {
fmt.Fprintf(output, "%s ", time.Unix(0, event.TimeNano).Format(jsonlog.RFC3339NanoFixed))
} else if event.Time != 0 {
fmt.Fprintf(output, "%s ", time.Unix(event.Time, 0).Format(jsonlog.RFC3339NanoFixed))
}
fmt.Fprintf(output, "%s %s %s", event.Type, event.Action, event.Actor.ID)
if len(event.Actor.Attributes) > 0 {
var attrs []string
var keys []string
for k := range event.Actor.Attributes {
keys = append(keys, k)
}
sort.Strings(keys)
for _, k := range keys {
v := event.Actor.Attributes[k]
attrs = append(attrs, fmt.Sprintf("%s=%s", k, v))
}
fmt.Fprintf(output, " (%s)", strings.Join(attrs, ", "))
}
fmt.Fprint(output, "\n")
}
type eventHandler struct {
handlers map[string]func(eventtypes.Message)
mu sync.Mutex
}
func (w *eventHandler) Handle(action string, h func(eventtypes.Message)) {
w.mu.Lock()
w.handlers[action] = h
w.mu.Unlock()
}
// Watch ranges over the passed in event chan and processes the events based on the
// handlers created for a given action.
// To stop watching, close the event chan.
func (w *eventHandler) Watch(c <-chan eventtypes.Message) {
for e := range c {
w.mu.Lock()
h, exists := w.handlers[e.Action]
w.mu.Unlock()
if !exists {
continue
}
logrus.Debugf("event handler: received event: %v", e)
go h(e)
}
}
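A short usage sketch for the handler above, with the channel filled by hand instead of by decodeEvents; note that handlers run on their own goroutines (go h(e)), so a real caller would synchronize before exiting:

// exampleWatch is illustrative only; it wires one handler to the "start"
// action and feeds a single event through the channel.
func exampleWatch() {
	h := &eventHandler{handlers: make(map[string]func(eventtypes.Message))}
	h.Handle("start", func(e eventtypes.Message) {
		logrus.Infof("container %s started", e.Actor.ID)
	})

	c := make(chan eventtypes.Message, 1)
	c <- eventtypes.Message{Action: "start", Actor: eventtypes.Actor{ID: "abc123"}}
	close(c) // closing the channel ends Watch
	h.Watch(c)
}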

View File

@@ -1,166 +0,0 @@
package client
import (
"fmt"
"io"
"golang.org/x/net/context"
"github.com/Sirupsen/logrus"
Cli "github.com/docker/docker/cli"
flag "github.com/docker/docker/pkg/mflag"
"github.com/docker/docker/pkg/promise"
"github.com/docker/engine-api/types"
)
// CmdExec runs a command in a running container.
//
// Usage: docker exec [OPTIONS] CONTAINER COMMAND [ARG...]
func (cli *DockerCli) CmdExec(args ...string) error {
cmd := Cli.Subcmd("exec", []string{"CONTAINER COMMAND [ARG...]"}, Cli.DockerCommands["exec"].Description, true)
detachKeys := cmd.String([]string{"-detach-keys"}, "", "Override the key sequence for detaching a container")
execConfig, err := ParseExec(cmd, args)
// just in case ParseExec did not already exit on a usage error
if execConfig.Container == "" || err != nil {
return Cli.StatusError{StatusCode: 1}
}
if *detachKeys != "" {
cli.configFile.DetachKeys = *detachKeys
}
// Send client escape keys
execConfig.DetachKeys = cli.configFile.DetachKeys
response, err := cli.client.ContainerExecCreate(context.Background(), *execConfig)
if err != nil {
return err
}
execID := response.ID
if execID == "" {
fmt.Fprintf(cli.out, "exec ID empty")
return nil
}
// Temp struct for execStart so that we don't need to transfer all the execConfig
if !execConfig.Detach {
if err := cli.CheckTtyInput(execConfig.AttachStdin, execConfig.Tty); err != nil {
return err
}
} else {
execStartCheck := types.ExecStartCheck{
Detach: execConfig.Detach,
Tty: execConfig.Tty,
}
if err := cli.client.ContainerExecStart(context.Background(), execID, execStartCheck); err != nil {
return err
}
// For now don't print this - wait for when we support exec wait()
// fmt.Fprintf(cli.out, "%s\n", execID)
return nil
}
// Interactive exec requested.
var (
out, stderr io.Writer
in io.ReadCloser
errCh chan error
)
if execConfig.AttachStdin {
in = cli.in
}
if execConfig.AttachStdout {
out = cli.out
}
if execConfig.AttachStderr {
if execConfig.Tty {
stderr = cli.out
} else {
stderr = cli.err
}
}
resp, err := cli.client.ContainerExecAttach(context.Background(), execID, *execConfig)
if err != nil {
return err
}
defer resp.Close()
if in != nil && execConfig.Tty {
if err := cli.setRawTerminal(); err != nil {
return err
}
defer cli.restoreTerminal(in)
}
errCh = promise.Go(func() error {
return cli.holdHijackedConnection(execConfig.Tty, in, out, stderr, resp)
})
if execConfig.Tty && cli.isTerminalIn {
if err := cli.monitorTtySize(execID, true); err != nil {
fmt.Fprintf(cli.err, "Error monitoring TTY size: %s\n", err)
}
}
if err := <-errCh; err != nil {
logrus.Debugf("Error hijack: %s", err)
return err
}
var status int
if _, status, err = getExecExitCode(cli, execID); err != nil {
return err
}
if status != 0 {
return Cli.StatusError{StatusCode: status}
}
return nil
}
// ParseExec parses the specified args for the specified command and generates
// an ExecConfig from it.
// If the minimum number of args is not met, or the specified args are
// not valid, it will return an error.
func ParseExec(cmd *flag.FlagSet, args []string) (*types.ExecConfig, error) {
var (
flStdin = cmd.Bool([]string{"i", "-interactive"}, false, "Keep STDIN open even if not attached")
flTty = cmd.Bool([]string{"t", "-tty"}, false, "Allocate a pseudo-TTY")
flDetach = cmd.Bool([]string{"d", "-detach"}, false, "Detached mode: run command in the background")
flUser = cmd.String([]string{"u", "-user"}, "", "Username or UID (format: <name|uid>[:<group|gid>])")
flPrivileged = cmd.Bool([]string{"-privileged"}, false, "Give extended privileges to the command")
execCmd []string
container string
)
cmd.Require(flag.Min, 2)
if err := cmd.ParseFlags(args, true); err != nil {
return nil, err
}
container = cmd.Arg(0)
parsedArgs := cmd.Args()
execCmd = parsedArgs[1:]
execConfig := &types.ExecConfig{
User: *flUser,
Privileged: *flPrivileged,
Tty: *flTty,
Cmd: execCmd,
Container: container,
Detach: *flDetach,
}
// If -d is not set, attach to everything by default
if !*flDetach {
execConfig.AttachStdout = true
execConfig.AttachStderr = true
if *flStdin {
execConfig.AttachStdin = true
}
}
return execConfig, nil
}

View File

@@ -1,130 +0,0 @@
package client
import (
"fmt"
"io/ioutil"
"testing"
flag "github.com/docker/docker/pkg/mflag"
"github.com/docker/engine-api/types"
)
type arguments struct {
args []string
}
func TestParseExec(t *testing.T) {
invalids := map[*arguments]error{
&arguments{[]string{"-unknown"}}: fmt.Errorf("flag provided but not defined: -unknown"),
&arguments{[]string{"-u"}}: fmt.Errorf("flag needs an argument: -u"),
&arguments{[]string{"--user"}}: fmt.Errorf("flag needs an argument: --user"),
}
valids := map[*arguments]*types.ExecConfig{
&arguments{
[]string{"container", "command"},
}: {
Container: "container",
Cmd: []string{"command"},
AttachStdout: true,
AttachStderr: true,
},
&arguments{
[]string{"container", "command1", "command2"},
}: {
Container: "container",
Cmd: []string{"command1", "command2"},
AttachStdout: true,
AttachStderr: true,
},
&arguments{
[]string{"-i", "-t", "-u", "uid", "container", "command"},
}: {
User: "uid",
AttachStdin: true,
AttachStdout: true,
AttachStderr: true,
Tty: true,
Container: "container",
Cmd: []string{"command"},
},
&arguments{
[]string{"-d", "container", "command"},
}: {
AttachStdin: false,
AttachStdout: false,
AttachStderr: false,
Detach: true,
Container: "container",
Cmd: []string{"command"},
},
&arguments{
[]string{"-t", "-i", "-d", "container", "command"},
}: {
AttachStdin: false,
AttachStdout: false,
AttachStderr: false,
Detach: true,
Tty: true,
Container: "container",
Cmd: []string{"command"},
},
}
for invalid, expectedError := range invalids {
cmd := flag.NewFlagSet("exec", flag.ContinueOnError)
cmd.ShortUsage = func() {}
cmd.SetOutput(ioutil.Discard)
_, err := ParseExec(cmd, invalid.args)
if err == nil || err.Error() != expectedError.Error() {
t.Fatalf("Expected an error [%v] for %v, got %v", expectedError, invalid, err)
}
}
for valid, expectedExecConfig := range valids {
cmd := flag.NewFlagSet("exec", flag.ContinueOnError)
cmd.ShortUsage = func() {}
cmd.SetOutput(ioutil.Discard)
execConfig, err := ParseExec(cmd, valid.args)
if err != nil {
t.Fatal(err)
}
if !compareExecConfig(expectedExecConfig, execConfig) {
t.Fatalf("Expected [%v] for %v, got [%v]", expectedExecConfig, valid, execConfig)
}
}
}
func compareExecConfig(config1 *types.ExecConfig, config2 *types.ExecConfig) bool {
if config1.AttachStderr != config2.AttachStderr {
return false
}
if config1.AttachStdin != config2.AttachStdin {
return false
}
if config1.AttachStdout != config2.AttachStdout {
return false
}
if config1.Container != config2.Container {
return false
}
if config1.Detach != config2.Detach {
return false
}
if config1.Privileged != config2.Privileged {
return false
}
if config1.Tty != config2.Tty {
return false
}
if config1.User != config2.User {
return false
}
if len(config1.Cmd) != len(config2.Cmd) {
return false
}
for index, value := range config1.Cmd {
if value != config2.Cmd[index] {
return false
}
}
return true
}

View File

@@ -1,42 +0,0 @@
package client
import (
"errors"
"io"
"golang.org/x/net/context"
Cli "github.com/docker/docker/cli"
flag "github.com/docker/docker/pkg/mflag"
)
// CmdExport exports a filesystem as a tar archive.
//
// The tar archive is streamed to STDOUT by default or written to a file.
//
// Usage: docker export [OPTIONS] CONTAINER
func (cli *DockerCli) CmdExport(args ...string) error {
cmd := Cli.Subcmd("export", []string{"CONTAINER"}, Cli.DockerCommands["export"].Description, true)
outfile := cmd.String([]string{"o", "-output"}, "", "Write to a file, instead of STDOUT")
cmd.Require(flag.Exact, 1)
cmd.ParseFlags(args, true)
if *outfile == "" && cli.isTerminalOut {
return errors.New("Cowardly refusing to save to a terminal. Use the -o flag or redirect.")
}
responseBody, err := cli.client.ContainerExport(context.Background(), cmd.Arg(0))
if err != nil {
return err
}
defer responseBody.Close()
if *outfile == "" {
_, err := io.Copy(cli.out, responseBody)
return err
}
return copyToFile(*outfile, responseBody)
}

View File

@@ -1,242 +0,0 @@
package formatter
import (
"fmt"
"strconv"
"strings"
"time"
"github.com/docker/docker/api"
"github.com/docker/docker/pkg/stringid"
"github.com/docker/docker/pkg/stringutils"
"github.com/docker/engine-api/types"
"github.com/docker/go-units"
)
const (
tableKey = "table"
containerIDHeader = "CONTAINER ID"
imageHeader = "IMAGE"
namesHeader = "NAMES"
commandHeader = "COMMAND"
createdSinceHeader = "CREATED"
createdAtHeader = "CREATED AT"
runningForHeader = "CREATED"
statusHeader = "STATUS"
portsHeader = "PORTS"
sizeHeader = "SIZE"
labelsHeader = "LABELS"
imageIDHeader = "IMAGE ID"
repositoryHeader = "REPOSITORY"
tagHeader = "TAG"
digestHeader = "DIGEST"
mountsHeader = "MOUNTS"
)
type containerContext struct {
baseSubContext
trunc bool
c types.Container
}
func (c *containerContext) ID() string {
c.addHeader(containerIDHeader)
if c.trunc {
return stringid.TruncateID(c.c.ID)
}
return c.c.ID
}
func (c *containerContext) Names() string {
c.addHeader(namesHeader)
names := stripNamePrefix(c.c.Names)
if c.trunc {
for _, name := range names {
if len(strings.Split(name, "/")) == 1 {
names = []string{name}
break
}
}
}
return strings.Join(names, ",")
}
func (c *containerContext) Image() string {
c.addHeader(imageHeader)
if c.c.Image == "" {
return "<no image>"
}
if c.trunc {
if trunc := stringid.TruncateID(c.c.ImageID); trunc == stringid.TruncateID(c.c.Image) {
return trunc
}
}
return c.c.Image
}
func (c *containerContext) Command() string {
c.addHeader(commandHeader)
command := c.c.Command
if c.trunc {
command = stringutils.Truncate(command, 20)
}
return strconv.Quote(command)
}
func (c *containerContext) CreatedAt() string {
c.addHeader(createdAtHeader)
return time.Unix(int64(c.c.Created), 0).String()
}
func (c *containerContext) RunningFor() string {
c.addHeader(runningForHeader)
createdAt := time.Unix(int64(c.c.Created), 0)
return units.HumanDuration(time.Now().UTC().Sub(createdAt))
}
func (c *containerContext) Ports() string {
c.addHeader(portsHeader)
return api.DisplayablePorts(c.c.Ports)
}
func (c *containerContext) Status() string {
c.addHeader(statusHeader)
return c.c.Status
}
func (c *containerContext) Size() string {
c.addHeader(sizeHeader)
srw := units.HumanSize(float64(c.c.SizeRw))
sv := units.HumanSize(float64(c.c.SizeRootFs))
sf := srw
if c.c.SizeRootFs > 0 {
sf = fmt.Sprintf("%s (virtual %s)", srw, sv)
}
return sf
}
func (c *containerContext) Labels() string {
c.addHeader(labelsHeader)
if c.c.Labels == nil {
return ""
}
var joinLabels []string
for k, v := range c.c.Labels {
joinLabels = append(joinLabels, fmt.Sprintf("%s=%s", k, v))
}
return strings.Join(joinLabels, ",")
}
func (c *containerContext) Label(name string) string {
n := strings.Split(name, ".")
r := strings.NewReplacer("-", " ", "_", " ")
h := r.Replace(n[len(n)-1])
c.addHeader(h)
if c.c.Labels == nil {
return ""
}
return c.c.Labels[name]
}
func (c *containerContext) Mounts() string {
c.addHeader(mountsHeader)
var name string
var mounts []string
for _, m := range c.c.Mounts {
if m.Name == "" {
name = m.Source
} else {
name = m.Name
}
if c.trunc {
name = stringutils.Truncate(name, 15)
}
mounts = append(mounts, name)
}
return strings.Join(mounts, ",")
}
type imageContext struct {
baseSubContext
trunc bool
i types.Image
repo string
tag string
digest string
}
func (c *imageContext) ID() string {
c.addHeader(imageIDHeader)
if c.trunc {
return stringid.TruncateID(c.i.ID)
}
return c.i.ID
}
func (c *imageContext) Repository() string {
c.addHeader(repositoryHeader)
return c.repo
}
func (c *imageContext) Tag() string {
c.addHeader(tagHeader)
return c.tag
}
func (c *imageContext) Digest() string {
c.addHeader(digestHeader)
return c.digest
}
func (c *imageContext) CreatedSince() string {
c.addHeader(createdSinceHeader)
createdAt := time.Unix(int64(c.i.Created), 0)
return units.HumanDuration(time.Now().UTC().Sub(createdAt))
}
func (c *imageContext) CreatedAt() string {
c.addHeader(createdAtHeader)
return time.Unix(int64(c.i.Created), 0).String()
}
func (c *imageContext) Size() string {
c.addHeader(sizeHeader)
return units.HumanSize(float64(c.i.Size))
}
type subContext interface {
fullHeader() string
addHeader(header string)
}
type baseSubContext struct {
header []string
}
func (c *baseSubContext) fullHeader() string {
if c.header == nil {
return ""
}
return strings.Join(c.header, "\t")
}
func (c *baseSubContext) addHeader(header string) {
if c.header == nil {
c.header = []string{}
}
c.header = append(c.header, strings.ToUpper(header))
}
func stripNamePrefix(ss []string) []string {
for i, s := range ss {
ss[i] = s[1:]
}
return ss
}
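The sub-context types above rely on each template call doing double duty: returning a cell value and recording its column header. A reduced standalone model of that pattern using only text/template; row and its fields are illustrative:

package main

import (
	"fmt"
	"os"
	"strings"
	"text/template"
)

// row records which columns the user's format string actually touched, so the
// header line can be derived afterwards, exactly like addHeader/fullHeader.
type row struct {
	id, name string
	headers  []string
}

func (r *row) ID() string    { r.headers = append(r.headers, "CONTAINER ID"); return r.id }
func (r *row) Names() string { r.headers = append(r.headers, "NAMES"); return r.name }

func main() {
	tmpl := template.Must(template.New("").Parse("{{.ID}}\t{{.Names}}\n"))
	r := &row{id: "deadbeef1234", name: "web"}
	tmpl.Execute(os.Stdout, r)
	fmt.Println(strings.Join(r.headers, "\t")) // CONTAINER ID and NAMES, tab-separated
}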

View File

@@ -1,192 +0,0 @@
package formatter
import (
"reflect"
"strings"
"testing"
"time"
"github.com/docker/docker/pkg/stringid"
"github.com/docker/engine-api/types"
)
func TestContainerPsContext(t *testing.T) {
containerID := stringid.GenerateRandomID()
unix := time.Now().Add(-65 * time.Second).Unix()
var ctx containerContext
cases := []struct {
container types.Container
trunc bool
expValue string
expHeader string
call func() string
}{
{types.Container{ID: containerID}, true, stringid.TruncateID(containerID), containerIDHeader, ctx.ID},
{types.Container{ID: containerID}, false, containerID, containerIDHeader, ctx.ID},
{types.Container{Names: []string{"/foobar_baz"}}, true, "foobar_baz", namesHeader, ctx.Names},
{types.Container{Image: "ubuntu"}, true, "ubuntu", imageHeader, ctx.Image},
{types.Container{Image: "verylongimagename"}, true, "verylongimagename", imageHeader, ctx.Image},
{types.Container{Image: "verylongimagename"}, false, "verylongimagename", imageHeader, ctx.Image},
{types.Container{
Image: "a5a665ff33eced1e0803148700880edab4",
ImageID: "a5a665ff33eced1e0803148700880edab4269067ed77e27737a708d0d293fbf5",
},
true,
"a5a665ff33ec",
imageHeader,
ctx.Image,
},
{types.Container{
Image: "a5a665ff33eced1e0803148700880edab4",
ImageID: "a5a665ff33eced1e0803148700880edab4269067ed77e27737a708d0d293fbf5",
},
false,
"a5a665ff33eced1e0803148700880edab4",
imageHeader,
ctx.Image,
},
{types.Container{Image: ""}, true, "<no image>", imageHeader, ctx.Image},
{types.Container{Command: "sh -c 'ls -la'"}, true, `"sh -c 'ls -la'"`, commandHeader, ctx.Command},
{types.Container{Created: unix}, true, time.Unix(unix, 0).String(), createdAtHeader, ctx.CreatedAt},
{types.Container{Ports: []types.Port{{PrivatePort: 8080, PublicPort: 8080, Type: "tcp"}}}, true, "8080/tcp", portsHeader, ctx.Ports},
{types.Container{Status: "RUNNING"}, true, "RUNNING", statusHeader, ctx.Status},
{types.Container{SizeRw: 10}, true, "10 B", sizeHeader, ctx.Size},
{types.Container{SizeRw: 10, SizeRootFs: 20}, true, "10 B (virtual 20 B)", sizeHeader, ctx.Size},
{types.Container{}, true, "", labelsHeader, ctx.Labels},
{types.Container{Labels: map[string]string{"cpu": "6", "storage": "ssd"}}, true, "cpu=6,storage=ssd", labelsHeader, ctx.Labels},
{types.Container{Created: unix}, true, "About a minute", runningForHeader, ctx.RunningFor},
}
for _, c := range cases {
ctx = containerContext{c: c.container, trunc: c.trunc}
v := c.call()
if strings.Contains(v, ",") {
compareMultipleValues(t, v, c.expValue)
} else if v != c.expValue {
t.Fatalf("Expected %s, was %s\n", c.expValue, v)
}
h := ctx.fullHeader()
if h != c.expHeader {
t.Fatalf("Expected %s, was %s\n", c.expHeader, h)
}
}
c1 := types.Container{Labels: map[string]string{"com.docker.swarm.swarm-id": "33", "com.docker.swarm.node_name": "ubuntu"}}
ctx = containerContext{c: c1, trunc: true}
sid := ctx.Label("com.docker.swarm.swarm-id")
node := ctx.Label("com.docker.swarm.node_name")
if sid != "33" {
t.Fatalf("Expected 33, was %s\n", sid)
}
if node != "ubuntu" {
t.Fatalf("Expected ubuntu, was %s\n", node)
}
h := ctx.fullHeader()
if h != "SWARM ID\tNODE NAME" {
t.Fatalf("Expected %s, was %s\n", "SWARM ID\tNODE NAME", h)
}
c2 := types.Container{}
ctx = containerContext{c: c2, trunc: true}
label := ctx.Label("anything.really")
if label != "" {
t.Fatalf("Expected an empty string, was %s", label)
}
ctx = containerContext{c: c2, trunc: true}
fullHeader := ctx.fullHeader()
if fullHeader != "" {
t.Fatalf("Expected fullHeader to be empty, was %s", fullHeader)
}
}
func TestImagesContext(t *testing.T) {
imageID := stringid.GenerateRandomID()
unix := time.Now().Unix()
var ctx imageContext
cases := []struct {
imageCtx imageContext
expValue string
expHeader string
call func() string
}{
{imageContext{
i: types.Image{ID: imageID},
trunc: true,
}, stringid.TruncateID(imageID), imageIDHeader, ctx.ID},
{imageContext{
i: types.Image{ID: imageID},
trunc: false,
}, imageID, imageIDHeader, ctx.ID},
{imageContext{
i: types.Image{Size: 10},
trunc: true,
}, "10 B", sizeHeader, ctx.Size},
{imageContext{
i: types.Image{Created: unix},
trunc: true,
}, time.Unix(unix, 0).String(), createdAtHeader, ctx.CreatedAt},
// FIXME
// {imageContext{
// i: types.Image{Created: unix},
// trunc: true,
// }, units.HumanDuration(time.Unix(unix, 0)), createdSinceHeader, ctx.CreatedSince},
{imageContext{
i: types.Image{},
repo: "busybox",
}, "busybox", repositoryHeader, ctx.Repository},
{imageContext{
i: types.Image{},
tag: "latest",
}, "latest", tagHeader, ctx.Tag},
{imageContext{
i: types.Image{},
digest: "sha256:d149ab53f8718e987c3a3024bb8aa0e2caadf6c0328f1d9d850b2a2a67f2819a",
}, "sha256:d149ab53f8718e987c3a3024bb8aa0e2caadf6c0328f1d9d850b2a2a67f2819a", digestHeader, ctx.Digest},
}
for _, c := range cases {
ctx = c.imageCtx
v := c.call()
if strings.Contains(v, ",") {
compareMultipleValues(t, v, c.expValue)
} else if v != c.expValue {
t.Fatalf("Expected %s, was %s\n", c.expValue, v)
}
h := ctx.fullHeader()
if h != c.expHeader {
t.Fatalf("Expected %s, was %s\n", c.expHeader, h)
}
}
}
func compareMultipleValues(t *testing.T, value, expected string) {
// Comma-separated values probably come from a map input, whose iteration
// order is not guaranteed to match our expected value.
// We'll build maps from both and compare them with reflect.DeepEqual instead:
entriesMap := make(map[string]string)
expMap := make(map[string]string)
entries := strings.Split(value, ",")
expectedEntries := strings.Split(expected, ",")
for _, entry := range entries {
keyval := strings.Split(entry, "=")
entriesMap[keyval[0]] = keyval[1]
}
for _, expected := range expectedEntries {
keyval := strings.Split(expected, "=")
expMap[keyval[0]] = keyval[1]
}
if !reflect.DeepEqual(expMap, entriesMap) {
t.Fatalf("Expected entries: %v, got: %v", expected, value)
}
}

View File

@@ -1,255 +0,0 @@
package formatter
import (
"bytes"
"fmt"
"io"
"strings"
"text/tabwriter"
"text/template"
"github.com/docker/docker/reference"
"github.com/docker/docker/utils/templates"
"github.com/docker/engine-api/types"
)
const (
tableFormatKey = "table"
rawFormatKey = "raw"
defaultContainerTableFormat = "table {{.ID}}\t{{.Image}}\t{{.Command}}\t{{.RunningFor}} ago\t{{.Status}}\t{{.Ports}}\t{{.Names}}"
defaultImageTableFormat = "table {{.Repository}}\t{{.Tag}}\t{{.ID}}\t{{.CreatedSince}} ago\t{{.Size}}"
defaultImageTableFormatWithDigest = "table {{.Repository}}\t{{.Tag}}\t{{.Digest}}\t{{.ID}}\t{{.CreatedSince}} ago\t{{.Size}}"
defaultQuietFormat = "{{.ID}}"
)
// Context contains information required by the formatter to print the output as desired.
type Context struct {
// Output is the output stream to which the formatted string is written.
Output io.Writer
// Format is used to choose raw, table or custom format for the output.
Format string
// Quiet when set to true will simply print minimal information.
Quiet bool
// Trunc when set to true will truncate the output of certain fields such as Container ID.
Trunc bool
// internal element
table bool
finalFormat string
header string
buffer *bytes.Buffer
}
func (c *Context) preformat() {
c.finalFormat = c.Format
if strings.HasPrefix(c.Format, tableKey) {
c.table = true
c.finalFormat = c.finalFormat[len(tableKey):]
}
c.finalFormat = strings.Trim(c.finalFormat, " ")
r := strings.NewReplacer(`\t`, "\t", `\n`, "\n")
c.finalFormat = r.Replace(c.finalFormat)
}
func (c *Context) parseFormat() (*template.Template, error) {
tmpl, err := templates.Parse(c.finalFormat)
if err != nil {
c.buffer.WriteString(fmt.Sprintf("Template parsing error: %v\n", err))
c.buffer.WriteTo(c.Output)
}
return tmpl, err
}
func (c *Context) postformat(tmpl *template.Template, subContext subContext) {
if c.table {
if len(c.header) == 0 {
// if we still don't have a header, we didn't have any containers so we need to fake it to get the right headers from the template
tmpl.Execute(bytes.NewBufferString(""), subContext)
c.header = subContext.fullHeader()
}
t := tabwriter.NewWriter(c.Output, 20, 1, 3, ' ', 0)
t.Write([]byte(c.header))
t.Write([]byte("\n"))
c.buffer.WriteTo(t)
t.Flush()
} else {
c.buffer.WriteTo(c.Output)
}
}
func (c *Context) contextFormat(tmpl *template.Template, subContext subContext) error {
if err := tmpl.Execute(c.buffer, subContext); err != nil {
c.buffer = bytes.NewBufferString(fmt.Sprintf("Template parsing error: %v\n", err))
c.buffer.WriteTo(c.Output)
return err
}
if c.table && len(c.header) == 0 {
c.header = subContext.fullHeader()
}
c.buffer.WriteString("\n")
return nil
}
// ContainerContext contains container-specific information required by the formatter; it embeds a Context struct.
type ContainerContext struct {
Context
// Size when set to true will display the size of the output.
Size bool
// Containers
Containers []types.Container
}
// ImageContext contains image-specific information required by the formatter; it embeds a Context struct.
type ImageContext struct {
Context
Digest bool
// Images
Images []types.Image
}
func (ctx ContainerContext) Write() {
switch ctx.Format {
case tableFormatKey:
ctx.Format = defaultContainerTableFormat
if ctx.Quiet {
ctx.Format = defaultQuietFormat
}
case rawFormatKey:
if ctx.Quiet {
ctx.Format = `container_id: {{.ID}}`
} else {
ctx.Format = `container_id: {{.ID}}
image: {{.Image}}
command: {{.Command}}
created_at: {{.CreatedAt}}
status: {{.Status}}
names: {{.Names}}
labels: {{.Labels}}
ports: {{.Ports}}
`
if ctx.Size {
ctx.Format += `size: {{.Size}}
`
}
}
}
ctx.buffer = bytes.NewBufferString("")
ctx.preformat()
if ctx.table && ctx.Size {
ctx.finalFormat += "\t{{.Size}}"
}
tmpl, err := ctx.parseFormat()
if err != nil {
return
}
for _, container := range ctx.Containers {
containerCtx := &containerContext{
trunc: ctx.Trunc,
c: container,
}
err = ctx.contextFormat(tmpl, containerCtx)
if err != nil {
return
}
}
ctx.postformat(tmpl, &containerContext{})
}
func (ctx ImageContext) Write() {
switch ctx.Format {
case tableFormatKey:
ctx.Format = defaultImageTableFormat
if ctx.Digest {
ctx.Format = defaultImageTableFormatWithDigest
}
if ctx.Quiet {
ctx.Format = defaultQuietFormat
}
case rawFormatKey:
if ctx.Quiet {
ctx.Format = `image_id: {{.ID}}`
} else {
if ctx.Digest {
ctx.Format = `repository: {{ .Repository }}
tag: {{.Tag}}
digest: {{.Digest}}
image_id: {{.ID}}
created_at: {{.CreatedAt}}
virtual_size: {{.Size}}
`
} else {
ctx.Format = `repository: {{ .Repository }}
tag: {{.Tag}}
image_id: {{.ID}}
created_at: {{.CreatedAt}}
virtual_size: {{.Size}}
`
}
}
}
ctx.buffer = bytes.NewBufferString("")
ctx.preformat()
if ctx.table && ctx.Digest && !strings.Contains(ctx.Format, "{{.Digest}}") {
ctx.finalFormat += "\t{{.Digest}}"
}
tmpl, err := ctx.parseFormat()
if err != nil {
return
}
for _, image := range ctx.Images {
repoTags := image.RepoTags
repoDigests := image.RepoDigests
if len(repoTags) == 1 && repoTags[0] == "<none>:<none>" && len(repoDigests) == 1 && repoDigests[0] == "<none>@<none>" {
// dangling image - clear out either repoTags or repoDigests so we only show it once below
repoDigests = []string{}
}
// combine the tags and digests lists
tagsAndDigests := append(repoTags, repoDigests...)
for _, repoAndRef := range tagsAndDigests {
repo := "<none>"
tag := "<none>"
digest := "<none>"
if !strings.HasPrefix(repoAndRef, "<none>") {
ref, err := reference.ParseNamed(repoAndRef)
if err != nil {
continue
}
repo = ref.Name()
switch x := ref.(type) {
case reference.Canonical:
digest = x.Digest().String()
case reference.NamedTagged:
tag = x.Tag()
}
}
imageCtx := &imageContext{
trunc: ctx.Trunc,
i: image,
repo: repo,
tag: tag,
digest: digest,
}
err = ctx.contextFormat(tmpl, imageCtx)
if err != nil {
return
}
}
}
ctx.postformat(tmpl, &imageContext{})
}
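The table output produced by postformat is plain tab-separated text pushed through text/tabwriter, which is what aligns the columns. A minimal sketch with the same writer parameters; the repository rows and image IDs are made up:

package main

import (
	"os"
	"text/tabwriter"
)

func main() {
	// Same parameters as tabwriter.NewWriter(c.Output, 20, 1, 3, ' ', 0) above.
	w := tabwriter.NewWriter(os.Stdout, 20, 1, 3, ' ', 0)
	w.Write([]byte("REPOSITORY\tTAG\tIMAGE ID\n"))
	w.Write([]byte("busybox\tlatest\t47bcc53f74dc\n"))
	w.Write([]byte("ubuntu\t14.04\t8693db7e8a00\n"))
	w.Flush() // alignment happens on Flush
}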

View File

@@ -1,535 +0,0 @@
package formatter
import (
"bytes"
"fmt"
"testing"
"time"
"github.com/docker/engine-api/types"
)
func TestContainerContextWrite(t *testing.T) {
unixTime := time.Now().AddDate(0, 0, -1).Unix()
expectedTime := time.Unix(unixTime, 0).String()
contexts := []struct {
context ContainerContext
expected string
}{
// Errors
{
ContainerContext{
Context: Context{
Format: "{{InvalidFunction}}",
},
},
`Template parsing error: template: :1: function "InvalidFunction" not defined
`,
},
{
ContainerContext{
Context: Context{
Format: "{{nil}}",
},
},
`Template parsing error: template: :1:2: executing "" at <nil>: nil is not a command
`,
},
// Table Format
{
ContainerContext{
Context: Context{
Format: "table",
},
},
`CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
containerID1 ubuntu "" 24 hours ago foobar_baz
containerID2 ubuntu "" 24 hours ago foobar_bar
`,
},
{
ContainerContext{
Context: Context{
Format: "table {{.Image}}",
},
},
"IMAGE\nubuntu\nubuntu\n",
},
{
ContainerContext{
Context: Context{
Format: "table {{.Image}}",
},
Size: true,
},
"IMAGE SIZE\nubuntu 0 B\nubuntu 0 B\n",
},
{
ContainerContext{
Context: Context{
Format: "table {{.Image}}",
Quiet: true,
},
},
"IMAGE\nubuntu\nubuntu\n",
},
{
ContainerContext{
Context: Context{
Format: "table",
Quiet: true,
},
},
"containerID1\ncontainerID2\n",
},
// Raw Format
{
ContainerContext{
Context: Context{
Format: "raw",
},
},
fmt.Sprintf(`container_id: containerID1
image: ubuntu
command: ""
created_at: %s
status:
names: foobar_baz
labels:
ports:
container_id: containerID2
image: ubuntu
command: ""
created_at: %s
status:
names: foobar_bar
labels:
ports:
`, expectedTime, expectedTime),
},
{
ContainerContext{
Context: Context{
Format: "raw",
},
Size: true,
},
fmt.Sprintf(`container_id: containerID1
image: ubuntu
command: ""
created_at: %s
status:
names: foobar_baz
labels:
ports:
size: 0 B
container_id: containerID2
image: ubuntu
command: ""
created_at: %s
status:
names: foobar_bar
labels:
ports:
size: 0 B
`, expectedTime, expectedTime),
},
{
ContainerContext{
Context: Context{
Format: "raw",
Quiet: true,
},
},
"container_id: containerID1\ncontainer_id: containerID2\n",
},
// Custom Format
{
ContainerContext{
Context: Context{
Format: "{{.Image}}",
},
},
"ubuntu\nubuntu\n",
},
{
ContainerContext{
Context: Context{
Format: "{{.Image}}",
},
Size: true,
},
"ubuntu\nubuntu\n",
},
}
for _, context := range contexts {
containers := []types.Container{
{ID: "containerID1", Names: []string{"/foobar_baz"}, Image: "ubuntu", Created: unixTime},
{ID: "containerID2", Names: []string{"/foobar_bar"}, Image: "ubuntu", Created: unixTime},
}
out := bytes.NewBufferString("")
context.context.Output = out
context.context.Containers = containers
context.context.Write()
actual := out.String()
if actual != context.expected {
t.Fatalf("Expected \n%s, got \n%s", context.expected, actual)
}
// Clean buffer
out.Reset()
}
}
func TestContainerContextWriteWithNoContainers(t *testing.T) {
out := bytes.NewBufferString("")
containers := []types.Container{}
contexts := []struct {
context ContainerContext
expected string
}{
{
ContainerContext{
Context: Context{
Format: "{{.Image}}",
Output: out,
},
},
"",
},
{
ContainerContext{
Context: Context{
Format: "table {{.Image}}",
Output: out,
},
},
"IMAGE\n",
},
{
ContainerContext{
Context: Context{
Format: "{{.Image}}",
Output: out,
},
Size: true,
},
"",
},
{
ContainerContext{
Context: Context{
Format: "table {{.Image}}",
Output: out,
},
Size: true,
},
"IMAGE SIZE\n",
},
}
for _, context := range contexts {
context.context.Containers = containers
context.context.Write()
actual := out.String()
if actual != context.expected {
t.Fatalf("Expected \n%s, got \n%s", context.expected, actual)
}
// Clean buffer
out.Reset()
}
}
func TestImageContextWrite(t *testing.T) {
unixTime := time.Now().AddDate(0, 0, -1).Unix()
expectedTime := time.Unix(unixTime, 0).String()
contexts := []struct {
context ImageContext
expected string
}{
// Errors
{
ImageContext{
Context: Context{
Format: "{{InvalidFunction}}",
},
},
`Template parsing error: template: :1: function "InvalidFunction" not defined
`,
},
{
ImageContext{
Context: Context{
Format: "{{nil}}",
},
},
`Template parsing error: template: :1:2: executing "" at <nil>: nil is not a command
`,
},
// Table Format
{
ImageContext{
Context: Context{
Format: "table",
},
},
`REPOSITORY TAG IMAGE ID CREATED SIZE
image tag1 imageID1 24 hours ago 0 B
image <none> imageID1 24 hours ago 0 B
image tag2 imageID2 24 hours ago 0 B
<none> <none> imageID3 24 hours ago 0 B
`,
},
{
ImageContext{
Context: Context{
Format: "table {{.Repository}}",
},
},
"REPOSITORY\nimage\nimage\nimage\n<none>\n",
},
{
ImageContext{
Context: Context{
Format: "table {{.Repository}}",
},
Digest: true,
},
`REPOSITORY DIGEST
image <none>
image sha256:cbbf2f9a99b47fc460d422812b6a5adff7dfee951d8fa2e4a98caa0382cfbdbf
image <none>
<none> <none>
`,
},
{
ImageContext{
Context: Context{
Format: "table {{.Repository}}",
Quiet: true,
},
},
"REPOSITORY\nimage\nimage\nimage\n<none>\n",
},
{
ImageContext{
Context: Context{
Format: "table",
Quiet: true,
},
},
"imageID1\nimageID1\nimageID2\nimageID3\n",
},
{
ImageContext{
Context: Context{
Format: "table",
Quiet: false,
},
Digest: true,
},
`REPOSITORY TAG DIGEST IMAGE ID CREATED SIZE
image tag1 <none> imageID1 24 hours ago 0 B
image <none> sha256:cbbf2f9a99b47fc460d422812b6a5adff7dfee951d8fa2e4a98caa0382cfbdbf imageID1 24 hours ago 0 B
image tag2 <none> imageID2 24 hours ago 0 B
<none> <none> <none> imageID3 24 hours ago 0 B
`,
},
{
ImageContext{
Context: Context{
Format: "table",
Quiet: true,
},
Digest: true,
},
"imageID1\nimageID1\nimageID2\nimageID3\n",
},
// Raw Format
{
ImageContext{
Context: Context{
Format: "raw",
},
},
fmt.Sprintf(`repository: image
tag: tag1
image_id: imageID1
created_at: %s
virtual_size: 0 B
repository: image
tag: <none>
image_id: imageID1
created_at: %s
virtual_size: 0 B
repository: image
tag: tag2
image_id: imageID2
created_at: %s
virtual_size: 0 B
repository: <none>
tag: <none>
image_id: imageID3
created_at: %s
virtual_size: 0 B
`, expectedTime, expectedTime, expectedTime, expectedTime),
},
{
ImageContext{
Context: Context{
Format: "raw",
},
Digest: true,
},
fmt.Sprintf(`repository: image
tag: tag1
digest: <none>
image_id: imageID1
created_at: %s
virtual_size: 0 B
repository: image
tag: <none>
digest: sha256:cbbf2f9a99b47fc460d422812b6a5adff7dfee951d8fa2e4a98caa0382cfbdbf
image_id: imageID1
created_at: %s
virtual_size: 0 B
repository: image
tag: tag2
digest: <none>
image_id: imageID2
created_at: %s
virtual_size: 0 B
repository: <none>
tag: <none>
digest: <none>
image_id: imageID3
created_at: %s
virtual_size: 0 B
`, expectedTime, expectedTime, expectedTime, expectedTime),
},
{
ImageContext{
Context: Context{
Format: "raw",
Quiet: true,
},
},
`image_id: imageID1
image_id: imageID1
image_id: imageID2
image_id: imageID3
`,
},
// Custom Format
{
ImageContext{
Context: Context{
Format: "{{.Repository}}",
},
},
"image\nimage\nimage\n<none>\n",
},
{
ImageContext{
Context: Context{
Format: "{{.Repository}}",
},
Digest: true,
},
"image\nimage\nimage\n<none>\n",
},
}
for _, context := range contexts {
images := []types.Image{
{ID: "imageID1", RepoTags: []string{"image:tag1"}, RepoDigests: []string{"image@sha256:cbbf2f9a99b47fc460d422812b6a5adff7dfee951d8fa2e4a98caa0382cfbdbf"}, Created: unixTime},
{ID: "imageID2", RepoTags: []string{"image:tag2"}, Created: unixTime},
{ID: "imageID3", RepoTags: []string{"<none>:<none>"}, RepoDigests: []string{"<none>@<none>"}, Created: unixTime},
}
out := bytes.NewBufferString("")
context.context.Output = out
context.context.Images = images
context.context.Write()
actual := out.String()
if actual != context.expected {
t.Fatalf("Expected \n%s, got \n%s", context.expected, actual)
}
// Clean buffer
out.Reset()
}
}
func TestImageContextWriteWithNoImage(t *testing.T) {
out := bytes.NewBufferString("")
images := []types.Image{}
contexts := []struct {
context ImageContext
expected string
}{
{
ImageContext{
Context: Context{
Format: "{{.Repository}}",
Output: out,
},
},
"",
},
{
ImageContext{
Context: Context{
Format: "table {{.Repository}}",
Output: out,
},
},
"REPOSITORY\n",
},
{
ImageContext{
Context: Context{
Format: "{{.Repository}}",
Output: out,
},
Digest: true,
},
"",
},
{
ImageContext{
Context: Context{
Format: "table {{.Repository}}",
Output: out,
},
Digest: true,
},
"REPOSITORY DIGEST\n",
},
}
for _, context := range contexts {
context.context.Images = images
context.context.Write()
actual := out.String()
if actual != context.expected {
t.Fatalf("Expected \n%s, got \n%s", context.expected, actual)
}
// Clean buffer
out.Reset()
}
}
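
The custom and table format cases above boil down to rendering a Go template per element through a tabwriter. A minimal standalone sketch of that pattern (illustrative only, not the formatter package itself; the row type and the tabwriter widths are made up to mirror the tests):

package main

import (
    "os"
    "strings"
    "text/tabwriter"
    "text/template"
)

type row struct {
    Repository string
    Tag        string
}

func main() {
    // A "table ..." format keeps the part after the prefix as the per-row template.
    format := "table {{.Repository}}\t{{.Tag}}"
    rowTmpl := template.Must(template.New("row").Parse(strings.TrimPrefix(format, "table ") + "\n"))

    w := tabwriter.NewWriter(os.Stdout, 20, 1, 3, ' ', 0)
    defer w.Flush()

    // Header first, then one rendered line per element, mirroring the
    // header-plus-rows shape the tests above check for.
    w.Write([]byte("REPOSITORY\tTAG\n"))
    for _, r := range []row{{"image", "tag1"}, {"image", "tag2"}} {
        rowTmpl.Execute(w, r)
    }
}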

View File

@@ -1,56 +1,250 @@
package client
import (
"crypto/tls"
"errors"
"fmt"
"io"
"net"
"net/http"
"net/http/httputil"
"os"
"runtime"
"strings"
"time"
"github.com/Sirupsen/logrus"
log "github.com/Sirupsen/logrus"
"github.com/docker/docker/api"
"github.com/docker/docker/dockerversion"
"github.com/docker/docker/pkg/promise"
"github.com/docker/docker/pkg/stdcopy"
"github.com/docker/engine-api/types"
"github.com/docker/docker/pkg/term"
)
func (cli *DockerCli) holdHijackedConnection(tty bool, inputStream io.ReadCloser, outputStream, errorStream io.Writer, resp types.HijackedResponse) error {
var err error
receiveStdout := make(chan error, 1)
if outputStream != nil || errorStream != nil {
go func() {
// When TTY is ON, use regular copy
if tty && outputStream != nil {
_, err = io.Copy(outputStream, resp.Reader)
} else {
_, err = stdcopy.StdCopy(outputStream, errorStream, resp.Reader)
}
logrus.Debugf("[hijack] End of stdout")
receiveStdout <- err
}()
type tlsClientCon struct {
*tls.Conn
rawConn net.Conn
}
func (c *tlsClientCon) CloseWrite() error {
// Go standard tls.Conn doesn't provide the CloseWrite() method so we do it
// on its underlying connection.
if cwc, ok := c.rawConn.(interface {
CloseWrite() error
}); ok {
return cwc.CloseWrite()
}
return nil
}
func tlsDial(network, addr string, config *tls.Config) (net.Conn, error) {
return tlsDialWithDialer(new(net.Dialer), network, addr, config)
}
// We need to copy Go's implementation of tls.Dial (crypto/tls/tls.go) in
// order to return our custom tlsClientCon struct which holds both the tls.Conn
// object _and_ its underlying raw connection. The rationale for this is that
// we need to be able to close the write end of the connection when attaching,
// which tls.Conn does not provide.
func tlsDialWithDialer(dialer *net.Dialer, network, addr string, config *tls.Config) (net.Conn, error) {
// We want the Timeout and Deadline values from dialer to cover the
// whole process: TCP connection and TLS handshake. This means that we
// also need to start our own timers now.
timeout := dialer.Timeout
if !dialer.Deadline.IsZero() {
deadlineTimeout := dialer.Deadline.Sub(time.Now())
if timeout == 0 || deadlineTimeout < timeout {
timeout = deadlineTimeout
}
}
var errChannel chan error
if timeout != 0 {
errChannel = make(chan error, 2)
time.AfterFunc(timeout, func() {
errChannel <- errors.New("")
})
}
rawConn, err := dialer.Dial(network, addr)
if err != nil {
return nil, err
}
// When we set up a TCP connection for hijack, there could be long periods
// of inactivity (a long running command with no output) that in certain
// network setups may cause ECONNTIMEOUT, leaving the client in an unknown
// state. Setting TCP KeepAlive on the socket connection will prohibit
// ECONNTIMEOUT unless the socket connection truly is broken
if tcpConn, ok := rawConn.(*net.TCPConn); ok {
tcpConn.SetKeepAlive(true)
tcpConn.SetKeepAlivePeriod(30 * time.Second)
}
colonPos := strings.LastIndex(addr, ":")
if colonPos == -1 {
colonPos = len(addr)
}
hostname := addr[:colonPos]
// If no ServerName is set, infer the ServerName
// from the hostname we're connecting to.
if config.ServerName == "" {
// Make a copy to avoid polluting argument or default.
c := *config
c.ServerName = hostname
config = &c
}
conn := tls.Client(rawConn, config)
if timeout == 0 {
err = conn.Handshake()
} else {
go func() {
errChannel <- conn.Handshake()
}()
err = <-errChannel
}
if err != nil {
rawConn.Close()
return nil, err
}
// This is where Docker differs from the standard crypto/tls package: we return a
// wrapper which holds both the TLS and raw connections.
return &tlsClientCon{conn, rawConn}, nil
}
func (cli *DockerCli) dial() (net.Conn, error) {
if cli.tlsConfig != nil && cli.proto != "unix" {
// Note that this isn't Go's standard tls.Dial function
return tlsDial(cli.proto, cli.addr, cli.tlsConfig)
}
return net.Dial(cli.proto, cli.addr)
}
func (cli *DockerCli) hijack(method, path string, setRawTerminal bool, in io.ReadCloser, stdout, stderr io.Writer, started chan io.Closer, data interface{}) error {
defer func() {
if started != nil {
close(started)
}
}()
params, err := cli.encodeData(data)
if err != nil {
return err
}
req, err := http.NewRequest(method, fmt.Sprintf("/v%s%s", api.APIVERSION, path), params)
if err != nil {
return err
}
req.Header.Set("User-Agent", "Docker-Client/"+dockerversion.VERSION)
req.Header.Set("Content-Type", "text/plain")
req.Header.Set("Connection", "Upgrade")
req.Header.Set("Upgrade", "tcp")
req.Host = cli.addr
dial, err := cli.dial()
if err != nil {
if strings.Contains(err.Error(), "connection refused") {
return fmt.Errorf("Cannot connect to the Docker daemon. Is 'docker -d' running on this host?")
}
return err
}
// When we set up a TCP connection for hijack, there could be long periods
// of inactivity (a long running command with no output) that in certain
// network setups may cause ECONNTIMEOUT, leaving the client in an unknown
// state. Setting TCP KeepAlive on the socket connection will prohibit
// ECONNTIMEOUT unless the socket connection truly is broken
if tcpConn, ok := dial.(*net.TCPConn); ok {
tcpConn.SetKeepAlive(true)
tcpConn.SetKeepAlivePeriod(30 * time.Second)
}
clientconn := httputil.NewClientConn(dial, nil)
defer clientconn.Close()
// Server hijacks the connection, error 'connection closed' expected
clientconn.Do(req)
rwc, br := clientconn.Hijack()
defer rwc.Close()
if started != nil {
started <- rwc
}
var receiveStdout chan error
var oldState *term.State
if in != nil && setRawTerminal && cli.isTerminalIn && os.Getenv("NORAW") == "" {
oldState, err = term.SetRawTerminal(cli.inFd)
if err != nil {
return err
}
defer term.RestoreTerminal(cli.inFd, oldState)
}
if stdout != nil || stderr != nil {
receiveStdout = promise.Go(func() (err error) {
defer func() {
if in != nil {
if setRawTerminal && cli.isTerminalIn {
term.RestoreTerminal(cli.inFd, oldState)
}
// For some reason this Close call blocks on darwin...
// As the client exits right after, simply discard the close
// until we find a better solution.
if runtime.GOOS != "darwin" {
in.Close()
}
}
}()
// When TTY is ON, use regular copy
if setRawTerminal && stdout != nil {
_, err = io.Copy(stdout, br)
} else {
_, err = stdcopy.StdCopy(stdout, stderr, br)
}
log.Debugf("[hijack] End of stdout")
return err
})
}
sendStdin := promise.Go(func() error {
if in != nil {
io.Copy(rwc, in)
log.Debugf("[hijack] End of stdin")
}
if conn, ok := rwc.(interface {
CloseWrite() error
}); ok {
if err := conn.CloseWrite(); err != nil {
log.Debugf("Couldn't send EOF: %s", err)
}
}
// Discard errors due to pipe interruption
return nil
})
if stdout != nil || stderr != nil {
if err := <-receiveStdout; err != nil {
log.Debugf("Error receiveStdout: %s", err)
return err
}
}
if !cli.isTerminalIn {
if err := <-sendStdin; err != nil {
log.Debugf("Error sendStdin: %s", err)
return err
}
}
stdinDone := make(chan struct{})
go func() {
if inputStream != nil {
io.Copy(resp.Conn, inputStream)
logrus.Debugf("[hijack] End of stdin")
}
if err := resp.CloseWrite(); err != nil {
logrus.Debugf("Couldn't send EOF: %s", err)
}
close(stdinDone)
}()
select {
case err := <-receiveStdout:
if err != nil {
logrus.Debugf("Error receiveStdout: %s", err)
return err
}
case <-stdinDone:
if outputStream != nil || errorStream != nil {
if err := <-receiveStdout; err != nil {
logrus.Debugf("Error receiveStdout: %s", err)
return err
}
}
}
return nil
}
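
Both the old hijack() path and the new holdHijackedConnection() fall back to stdcopy.StdCopy when no TTY is allocated, because the daemon multiplexes stdout and stderr onto a single stream. A small, self-contained round-trip sketch of that demultiplexing (the payloads are made up):

package main

import (
    "bytes"
    "os"

    "github.com/docker/docker/pkg/stdcopy"
)

func main() {
    var muxed bytes.Buffer

    // Simulate the daemon side: write stdout and stderr frames onto one stream.
    stdcopy.NewStdWriter(&muxed, stdcopy.Stdout).Write([]byte("to stdout\n"))
    stdcopy.NewStdWriter(&muxed, stdcopy.Stderr).Write([]byte("to stderr\n"))

    // Client side: split the frames back onto the real stdout and stderr,
    // which is what StdCopy does with resp.Reader in the code above.
    if _, err := stdcopy.StdCopy(os.Stdout, os.Stderr, &muxed); err != nil {
        panic(err)
    }
}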

View File

@@ -1,76 +0,0 @@
package client
import (
"fmt"
"strconv"
"strings"
"text/tabwriter"
"time"
"golang.org/x/net/context"
Cli "github.com/docker/docker/cli"
flag "github.com/docker/docker/pkg/mflag"
"github.com/docker/docker/pkg/stringid"
"github.com/docker/docker/pkg/stringutils"
"github.com/docker/go-units"
)
// CmdHistory shows the history of an image.
//
// Usage: docker history [OPTIONS] IMAGE
func (cli *DockerCli) CmdHistory(args ...string) error {
cmd := Cli.Subcmd("history", []string{"IMAGE"}, Cli.DockerCommands["history"].Description, true)
human := cmd.Bool([]string{"H", "-human"}, true, "Print sizes and dates in human readable format")
quiet := cmd.Bool([]string{"q", "-quiet"}, false, "Only show numeric IDs")
noTrunc := cmd.Bool([]string{"-no-trunc"}, false, "Don't truncate output")
cmd.Require(flag.Exact, 1)
cmd.ParseFlags(args, true)
history, err := cli.client.ImageHistory(context.Background(), cmd.Arg(0))
if err != nil {
return err
}
w := tabwriter.NewWriter(cli.out, 20, 1, 3, ' ', 0)
if *quiet {
for _, entry := range history {
if *noTrunc {
fmt.Fprintf(w, "%s\n", entry.ID)
} else {
fmt.Fprintf(w, "%s\n", stringid.TruncateID(entry.ID))
}
}
w.Flush()
return nil
}
var imageID string
var createdBy string
var created string
var size string
fmt.Fprintln(w, "IMAGE\tCREATED\tCREATED BY\tSIZE\tCOMMENT")
for _, entry := range history {
imageID = entry.ID
createdBy = strings.Replace(entry.CreatedBy, "\t", " ", -1)
if !*noTrunc {
createdBy = stringutils.Truncate(createdBy, 45)
imageID = stringid.TruncateID(entry.ID)
}
if *human {
created = units.HumanDuration(time.Now().UTC().Sub(time.Unix(entry.Created, 0))) + " ago"
size = units.HumanSize(float64(entry.Size))
} else {
created = time.Unix(entry.Created, 0).Format(time.RFC3339)
size = strconv.FormatInt(entry.Size, 10)
}
fmt.Fprintf(w, "%s\t%s\t%s\t%s\t%s\n", imageID, created, createdBy, size, entry.Comment)
}
w.Flush()
return nil
}
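
The --human output above comes from the go-units helpers; a small sketch of what they produce for an illustrative timestamp and size (the values and exact rounding are examples, not guaranteed output):

package main

import (
    "fmt"
    "time"

    "github.com/docker/go-units"
)

func main() {
    created := time.Now().UTC().Add(-24 * time.Hour)
    var layerSize int64 = 1451724

    // "24 hours ago", matching the CREATED column when --human is left at its default.
    fmt.Println(units.HumanDuration(time.Now().UTC().Sub(created)) + " ago")
    // Roughly "1.452 MB"; the SIZE column uses the same helper.
    fmt.Println(units.HumanSize(float64(layerSize)))
}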

View File

@@ -1,81 +0,0 @@
package client
import (
"golang.org/x/net/context"
"github.com/docker/docker/api/client/formatter"
Cli "github.com/docker/docker/cli"
"github.com/docker/docker/opts"
flag "github.com/docker/docker/pkg/mflag"
"github.com/docker/engine-api/types"
"github.com/docker/engine-api/types/filters"
)
// CmdImages lists the images in a specified repository, or all top-level images if no repository is specified.
//
// Usage: docker images [OPTIONS] [REPOSITORY]
func (cli *DockerCli) CmdImages(args ...string) error {
cmd := Cli.Subcmd("images", []string{"[REPOSITORY[:TAG]]"}, Cli.DockerCommands["images"].Description, true)
quiet := cmd.Bool([]string{"q", "-quiet"}, false, "Only show numeric IDs")
all := cmd.Bool([]string{"a", "-all"}, false, "Show all images (default hides intermediate images)")
noTrunc := cmd.Bool([]string{"-no-trunc"}, false, "Don't truncate output")
showDigests := cmd.Bool([]string{"-digests"}, false, "Show digests")
format := cmd.String([]string{"-format"}, "", "Pretty-print images using a Go template")
flFilter := opts.NewListOpts(nil)
cmd.Var(&flFilter, []string{"f", "-filter"}, "Filter output based on conditions provided")
cmd.Require(flag.Max, 1)
cmd.ParseFlags(args, true)
// Consolidate all filter flags, and sanity check them early.
// They'll get processed in the daemon/server.
imageFilterArgs := filters.NewArgs()
for _, f := range flFilter.GetAll() {
var err error
imageFilterArgs, err = filters.ParseFlag(f, imageFilterArgs)
if err != nil {
return err
}
}
var matchName string
if cmd.NArg() == 1 {
matchName = cmd.Arg(0)
}
options := types.ImageListOptions{
MatchName: matchName,
All: *all,
Filters: imageFilterArgs,
}
images, err := cli.client.ImageList(context.Background(), options)
if err != nil {
return err
}
f := *format
if len(f) == 0 {
if len(cli.ImagesFormat()) > 0 && !*quiet {
f = cli.ImagesFormat()
} else {
f = "table"
}
}
imagesCtx := formatter.ImageContext{
Context: formatter.Context{
Output: cli.out,
Format: f,
Quiet: *quiet,
Trunc: !*noTrunc,
},
Digest: *showDigests,
Images: images,
}
imagesCtx.Write()
return nil
}
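
A short sketch of how the --filter flags collected above become the filters.Args carried in ImageListOptions; the dangling and label filters here are only examples:

package main

import (
    "fmt"

    "github.com/docker/engine-api/types/filters"
)

func main() {
    args := filters.NewArgs()
    for _, f := range []string{"dangling=true", "label=maintainer=me"} {
        var err error
        if args, err = filters.ParseFlag(f, args); err != nil {
            panic(err)
        }
    }
    // The resulting Args is what ends up in types.ImageListOptions{Filters: args}.
    fmt.Println(args.Include("dangling"), args.Include("label")) // true true
}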

View File

@@ -1,82 +0,0 @@
package client
import (
"fmt"
"io"
"os"
"golang.org/x/net/context"
Cli "github.com/docker/docker/cli"
"github.com/docker/docker/opts"
"github.com/docker/docker/pkg/jsonmessage"
flag "github.com/docker/docker/pkg/mflag"
"github.com/docker/docker/pkg/urlutil"
"github.com/docker/docker/reference"
"github.com/docker/engine-api/types"
)
// CmdImport creates an empty filesystem image, imports the contents of the tarball into the image, and optionally tags the image.
//
// The URL argument is the address of a tarball (.tar, .tar.gz, .tgz, .bzip, .tar.xz, .txz) file or a path to a local file relative to the docker client. If the URL is '-', the tar file is read from STDIN.
//
// Usage: docker import [OPTIONS] file|URL|- [REPOSITORY[:TAG]]
func (cli *DockerCli) CmdImport(args ...string) error {
cmd := Cli.Subcmd("import", []string{"file|URL|- [REPOSITORY[:TAG]]"}, Cli.DockerCommands["import"].Description, true)
flChanges := opts.NewListOpts(nil)
cmd.Var(&flChanges, []string{"c", "-change"}, "Apply Dockerfile instruction to the created image")
message := cmd.String([]string{"m", "-message"}, "", "Set commit message for imported image")
cmd.Require(flag.Min, 1)
cmd.ParseFlags(args, true)
var (
in io.Reader
tag string
src = cmd.Arg(0)
srcName = src
repository = cmd.Arg(1)
changes = flChanges.GetAll()
)
if cmd.NArg() == 3 {
fmt.Fprintf(cli.err, "[DEPRECATED] The format 'file|URL|- [REPOSITORY [TAG]]' has been deprecated. Please use file|URL|- [REPOSITORY[:TAG]]\n")
tag = cmd.Arg(2)
}
if repository != "" {
// Check if the given image name can be resolved
if _, err := reference.ParseNamed(repository); err != nil {
return err
}
}
if src == "-" {
in = cli.in
} else if !urlutil.IsURL(src) {
srcName = "-"
file, err := os.Open(src)
if err != nil {
return err
}
defer file.Close()
in = file
}
options := types.ImageImportOptions{
Source: in,
SourceName: srcName,
RepositoryName: repository,
Message: *message,
Tag: tag,
Changes: changes,
}
responseBody, err := cli.client.ImageImport(context.Background(), options)
if err != nil {
return err
}
defer responseBody.Close()
return jsonmessage.DisplayJSONMessagesStream(responseBody, cli.out, cli.outFd, cli.isTerminalOut, nil)
}

View File

@@ -1,155 +0,0 @@
package client
import (
"fmt"
"strings"
"golang.org/x/net/context"
Cli "github.com/docker/docker/cli"
"github.com/docker/docker/pkg/ioutils"
flag "github.com/docker/docker/pkg/mflag"
"github.com/docker/docker/utils"
"github.com/docker/go-units"
)
// CmdInfo displays system-wide information.
//
// Usage: docker info
func (cli *DockerCli) CmdInfo(args ...string) error {
cmd := Cli.Subcmd("info", nil, Cli.DockerCommands["info"].Description, true)
cmd.Require(flag.Exact, 0)
cmd.ParseFlags(args, true)
info, err := cli.client.Info(context.Background())
if err != nil {
return err
}
fmt.Fprintf(cli.out, "Containers: %d\n", info.Containers)
fmt.Fprintf(cli.out, " Running: %d\n", info.ContainersRunning)
fmt.Fprintf(cli.out, " Paused: %d\n", info.ContainersPaused)
fmt.Fprintf(cli.out, " Stopped: %d\n", info.ContainersStopped)
fmt.Fprintf(cli.out, "Images: %d\n", info.Images)
ioutils.FprintfIfNotEmpty(cli.out, "Server Version: %s\n", info.ServerVersion)
ioutils.FprintfIfNotEmpty(cli.out, "Storage Driver: %s\n", info.Driver)
if info.DriverStatus != nil {
for _, pair := range info.DriverStatus {
fmt.Fprintf(cli.out, " %s: %s\n", pair[0], pair[1])
// print a warning if devicemapper is using a loopback file
if pair[0] == "Data loop file" {
fmt.Fprintln(cli.err, " WARNING: Usage of loopback devices is strongly discouraged for production use. Either use `--storage-opt dm.thinpooldev` or use `--storage-opt dm.no_warn_on_loop_devices=true` to suppress this warning.")
}
}
}
if info.SystemStatus != nil {
for _, pair := range info.SystemStatus {
fmt.Fprintf(cli.out, "%s: %s\n", pair[0], pair[1])
}
}
ioutils.FprintfIfNotEmpty(cli.out, "Execution Driver: %s\n", info.ExecutionDriver)
ioutils.FprintfIfNotEmpty(cli.out, "Logging Driver: %s\n", info.LoggingDriver)
ioutils.FprintfIfNotEmpty(cli.out, "Cgroup Driver: %s\n", info.CgroupDriver)
fmt.Fprintf(cli.out, "Plugins: \n")
fmt.Fprintf(cli.out, " Volume:")
fmt.Fprintf(cli.out, " %s", strings.Join(info.Plugins.Volume, " "))
fmt.Fprintf(cli.out, "\n")
fmt.Fprintf(cli.out, " Network:")
fmt.Fprintf(cli.out, " %s", strings.Join(info.Plugins.Network, " "))
fmt.Fprintf(cli.out, "\n")
if len(info.Plugins.Authorization) != 0 {
fmt.Fprintf(cli.out, " Authorization:")
fmt.Fprintf(cli.out, " %s", strings.Join(info.Plugins.Authorization, " "))
fmt.Fprintf(cli.out, "\n")
}
ioutils.FprintfIfNotEmpty(cli.out, "Kernel Version: %s\n", info.KernelVersion)
ioutils.FprintfIfNotEmpty(cli.out, "Operating System: %s\n", info.OperatingSystem)
ioutils.FprintfIfNotEmpty(cli.out, "OSType: %s\n", info.OSType)
ioutils.FprintfIfNotEmpty(cli.out, "Architecture: %s\n", info.Architecture)
fmt.Fprintf(cli.out, "CPUs: %d\n", info.NCPU)
fmt.Fprintf(cli.out, "Total Memory: %s\n", units.BytesSize(float64(info.MemTotal)))
ioutils.FprintfIfNotEmpty(cli.out, "Name: %s\n", info.Name)
ioutils.FprintfIfNotEmpty(cli.out, "ID: %s\n", info.ID)
fmt.Fprintf(cli.out, "Docker Root Dir: %s\n", info.DockerRootDir)
fmt.Fprintf(cli.out, "Debug mode (client): %v\n", utils.IsDebugEnabled())
fmt.Fprintf(cli.out, "Debug mode (server): %v\n", info.Debug)
if info.Debug {
fmt.Fprintf(cli.out, " File Descriptors: %d\n", info.NFd)
fmt.Fprintf(cli.out, " Goroutines: %d\n", info.NGoroutines)
fmt.Fprintf(cli.out, " System Time: %s\n", info.SystemTime)
fmt.Fprintf(cli.out, " EventsListeners: %d\n", info.NEventsListener)
}
ioutils.FprintfIfNotEmpty(cli.out, "Http Proxy: %s\n", info.HTTPProxy)
ioutils.FprintfIfNotEmpty(cli.out, "Https Proxy: %s\n", info.HTTPSProxy)
ioutils.FprintfIfNotEmpty(cli.out, "No Proxy: %s\n", info.NoProxy)
if info.IndexServerAddress != "" {
u := cli.configFile.AuthConfigs[info.IndexServerAddress].Username
if len(u) > 0 {
fmt.Fprintf(cli.out, "Username: %v\n", u)
}
fmt.Fprintf(cli.out, "Registry: %v\n", info.IndexServerAddress)
}
// Only output these warnings if the server does not support these features
if info.OSType != "windows" {
if !info.MemoryLimit {
fmt.Fprintln(cli.err, "WARNING: No memory limit support")
}
if !info.SwapLimit {
fmt.Fprintln(cli.err, "WARNING: No swap limit support")
}
if !info.KernelMemory {
fmt.Fprintln(cli.err, "WARNING: No kernel memory limit support")
}
if !info.OomKillDisable {
fmt.Fprintln(cli.err, "WARNING: No oom kill disable support")
}
if !info.CPUCfsQuota {
fmt.Fprintln(cli.err, "WARNING: No cpu cfs quota support")
}
if !info.CPUCfsPeriod {
fmt.Fprintln(cli.err, "WARNING: No cpu cfs period support")
}
if !info.CPUShares {
fmt.Fprintln(cli.err, "WARNING: No cpu shares support")
}
if !info.CPUSet {
fmt.Fprintln(cli.err, "WARNING: No cpuset support")
}
if !info.IPv4Forwarding {
fmt.Fprintln(cli.err, "WARNING: IPv4 forwarding is disabled")
}
if !info.BridgeNfIptables {
fmt.Fprintln(cli.err, "WARNING: bridge-nf-call-iptables is disabled")
}
if !info.BridgeNfIP6tables {
fmt.Fprintln(cli.err, "WARNING: bridge-nf-call-ip6tables is disabled")
}
}
if info.Labels != nil {
fmt.Fprintln(cli.out, "Labels:")
for _, attribute := range info.Labels {
fmt.Fprintf(cli.out, " %s\n", attribute)
}
}
ioutils.FprintfIfTrue(cli.out, "Experimental: %v\n", info.ExperimentalBuild)
if info.ClusterStore != "" {
fmt.Fprintf(cli.out, "Cluster store: %s\n", info.ClusterStore)
}
if info.ClusterAdvertise != "" {
fmt.Fprintf(cli.out, "Cluster advertise: %s\n", info.ClusterAdvertise)
}
return nil
}
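
CmdInfo leans on ioutils.FprintfIfNotEmpty to drop lines whose value is empty. A self-contained sketch of that behaviour using a local helper (not the real ioutils implementation; the values are illustrative):

package main

import (
    "fmt"
    "io"
    "os"
)

// fprintfIfNotEmpty mirrors the behaviour CmdInfo relies on: print the line only
// when the value is non-empty, so optional fields simply disappear from the output.
func fprintfIfNotEmpty(w io.Writer, format, value string) (int, error) {
    if value == "" {
        return 0, nil
    }
    return fmt.Fprintf(w, format, value)
}

func main() {
    fprintfIfNotEmpty(os.Stdout, "Server Version: %s\n", "1.10.0")
    fprintfIfNotEmpty(os.Stdout, "Logging Driver: %s\n", "") // prints nothing
}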

View File

@@ -1,127 +0,0 @@
package client
import (
"fmt"
"golang.org/x/net/context"
"github.com/docker/docker/api/client/inspect"
Cli "github.com/docker/docker/cli"
flag "github.com/docker/docker/pkg/mflag"
"github.com/docker/docker/utils/templates"
"github.com/docker/engine-api/client"
)
// CmdInspect displays low-level information on one or more containers or images.
//
// Usage: docker inspect [OPTIONS] CONTAINER|IMAGE [CONTAINER|IMAGE...]
func (cli *DockerCli) CmdInspect(args ...string) error {
cmd := Cli.Subcmd("inspect", []string{"CONTAINER|IMAGE [CONTAINER|IMAGE...]"}, Cli.DockerCommands["inspect"].Description, true)
tmplStr := cmd.String([]string{"f", "-format"}, "", "Format the output using the given go template")
inspectType := cmd.String([]string{"-type"}, "", "Return JSON for specified type (e.g. image or container)")
size := cmd.Bool([]string{"s", "-size"}, false, "Display total file sizes if the type is container")
cmd.Require(flag.Min, 1)
cmd.ParseFlags(args, true)
if *inspectType != "" && *inspectType != "container" && *inspectType != "image" {
return fmt.Errorf("%q is not a valid value for --type", *inspectType)
}
var elementSearcher inspectSearcher
switch *inspectType {
case "container":
elementSearcher = cli.inspectContainers(*size)
case "image":
elementSearcher = cli.inspectImages(*size)
default:
elementSearcher = cli.inspectAll(*size)
}
return cli.inspectElements(*tmplStr, cmd.Args(), elementSearcher)
}
func (cli *DockerCli) inspectContainers(getSize bool) inspectSearcher {
return func(ref string) (interface{}, []byte, error) {
return cli.client.ContainerInspectWithRaw(context.Background(), ref, getSize)
}
}
func (cli *DockerCli) inspectImages(getSize bool) inspectSearcher {
return func(ref string) (interface{}, []byte, error) {
return cli.client.ImageInspectWithRaw(context.Background(), ref, getSize)
}
}
func (cli *DockerCli) inspectAll(getSize bool) inspectSearcher {
return func(ref string) (interface{}, []byte, error) {
c, rawContainer, err := cli.client.ContainerInspectWithRaw(context.Background(), ref, getSize)
if err != nil {
// Search for image with that id if a container doesn't exist.
if client.IsErrContainerNotFound(err) {
i, rawImage, err := cli.client.ImageInspectWithRaw(context.Background(), ref, getSize)
if err != nil {
if client.IsErrImageNotFound(err) {
return nil, nil, fmt.Errorf("Error: No such image or container: %s", ref)
}
return nil, nil, err
}
return i, rawImage, err
}
return nil, nil, err
}
return c, rawContainer, err
}
}
type inspectSearcher func(ref string) (interface{}, []byte, error)
func (cli *DockerCli) inspectElements(tmplStr string, references []string, searchByReference inspectSearcher) error {
elementInspector, err := cli.newInspectorWithTemplate(tmplStr)
if err != nil {
return Cli.StatusError{StatusCode: 64, Status: err.Error()}
}
var inspectErr error
for _, ref := range references {
element, raw, err := searchByReference(ref)
if err != nil {
inspectErr = err
break
}
if err := elementInspector.Inspect(element, raw); err != nil {
inspectErr = err
break
}
}
if err := elementInspector.Flush(); err != nil {
cli.inspectErrorStatus(err)
}
if status := cli.inspectErrorStatus(inspectErr); status != 0 {
return Cli.StatusError{StatusCode: status}
}
return nil
}
func (cli *DockerCli) inspectErrorStatus(err error) (status int) {
if err != nil {
fmt.Fprintf(cli.err, "%s\n", err)
status = 1
}
return
}
func (cli *DockerCli) newInspectorWithTemplate(tmplStr string) (inspect.Inspector, error) {
elementInspector := inspect.NewIndentedInspector(cli.out)
if tmplStr != "" {
tmpl, err := templates.Parse(tmplStr)
if err != nil {
return nil, fmt.Errorf("Template parsing error: %s", err)
}
elementInspector = inspect.NewTemplateInspector(cli.out, tmpl)
}
return elementInspector, nil
}
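
The inspectSearcher indirection lets any command plug its own lookup into inspectElements, as CmdNetworkInspect does further down. A stripped-down sketch of the same shape with a stubbed searcher (all names and the output format here are illustrative):

package main

import (
    "encoding/json"
    "fmt"
)

// searcher mirrors the inspectSearcher shape above: resolve a reference into a
// typed element plus (optionally) its raw JSON.
type searcher func(ref string) (interface{}, []byte, error)

func inspectRefs(refs []string, search searcher) error {
    for _, ref := range refs {
        element, _, err := search(ref)
        if err != nil {
            return err
        }
        out, err := json.MarshalIndent(element, "", "    ")
        if err != nil {
            return err
        }
        fmt.Println(string(out))
    }
    return nil
}

func main() {
    // A stubbed searcher standing in for the container/image/network lookups.
    stub := func(ref string) (interface{}, []byte, error) {
        return map[string]string{"Id": ref}, nil, nil
    }
    inspectRefs([]string{"abc123"}, stub)
}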

View File

@@ -1,119 +0,0 @@
package inspect
import (
"bytes"
"encoding/json"
"fmt"
"io"
"text/template"
)
// Inspector defines an interface to implement to process elements
type Inspector interface {
Inspect(typedElement interface{}, rawElement []byte) error
Flush() error
}
// TemplateInspector uses a text template to inspect elements.
type TemplateInspector struct {
outputStream io.Writer
buffer *bytes.Buffer
tmpl *template.Template
}
// NewTemplateInspector creates a new inspector with a template.
func NewTemplateInspector(outputStream io.Writer, tmpl *template.Template) Inspector {
return &TemplateInspector{
outputStream: outputStream,
buffer: new(bytes.Buffer),
tmpl: tmpl,
}
}
// Inspect executes the inspect template.
// It decodes the raw element into a map if the initial execution fails.
// This allows docker cli to parse inspect structs injected with Swarm fields.
func (i *TemplateInspector) Inspect(typedElement interface{}, rawElement []byte) error {
buffer := new(bytes.Buffer)
if err := i.tmpl.Execute(buffer, typedElement); err != nil {
if rawElement == nil {
return fmt.Errorf("Template parsing error: %v", err)
}
return i.tryRawInspectFallback(rawElement, err)
}
i.buffer.Write(buffer.Bytes())
i.buffer.WriteByte('\n')
return nil
}
// Flush writes the result of inspecting all elements into the output stream.
func (i *TemplateInspector) Flush() error {
if i.buffer.Len() == 0 {
_, err := io.WriteString(i.outputStream, "\n")
return err
}
_, err := io.Copy(i.outputStream, i.buffer)
return err
}
// IndentedInspector uses a buffer to store the indented representation of an element.
type IndentedInspector struct {
outputStream io.Writer
elements []interface{}
rawElements [][]byte
}
// NewIndentedInspector generates a new IndentedInspector.
func NewIndentedInspector(outputStream io.Writer) Inspector {
return &IndentedInspector{
outputStream: outputStream,
}
}
// Inspect writes the raw element with an indented json format.
func (i *IndentedInspector) Inspect(typedElement interface{}, rawElement []byte) error {
if rawElement != nil {
i.rawElements = append(i.rawElements, rawElement)
} else {
i.elements = append(i.elements, typedElement)
}
return nil
}
// Flush writes the result of inspecting all elements into the output stream.
func (i *IndentedInspector) Flush() error {
if len(i.elements) == 0 && len(i.rawElements) == 0 {
_, err := io.WriteString(i.outputStream, "[]\n")
return err
}
var buffer io.Reader
if len(i.rawElements) > 0 {
bytesBuffer := new(bytes.Buffer)
bytesBuffer.WriteString("[")
for idx, r := range i.rawElements {
bytesBuffer.Write(r)
if idx < len(i.rawElements)-1 {
bytesBuffer.WriteString(",")
}
}
bytesBuffer.WriteString("]")
indented := new(bytes.Buffer)
if err := json.Indent(indented, bytesBuffer.Bytes(), "", " "); err != nil {
return err
}
buffer = indented
} else {
b, err := json.MarshalIndent(i.elements, "", " ")
if err != nil {
return err
}
buffer = bytes.NewReader(b)
}
if _, err := io.Copy(i.outputStream, buffer); err != nil {
return err
}
_, err := io.WriteString(i.outputStream, "\n")
return err
}

View File

@@ -1,40 +0,0 @@
// +build !go1.5
package inspect
import (
"bytes"
"encoding/json"
"fmt"
"strings"
)
// tryRawInspectFallback executes the inspect template with a raw interface.
// This allows docker cli to parse inspect structs injected with Swarm fields.
// Unfortunately, go 1.4 doesn't fail when executing an invalid template against an interface input.
// It doesn't allow modifying this behavior either, sending <no value> messages to the output.
// We assume that the template is invalid when the output contains <no value>; if the template
// were valid we'd get <nil> or "" values. In that case we fail with the original error raised
// executing the template with the typed input.
func (i *TemplateInspector) tryRawInspectFallback(rawElement []byte, originalErr error) error {
var raw interface{}
buffer := new(bytes.Buffer)
rdr := bytes.NewReader(rawElement)
dec := json.NewDecoder(rdr)
if rawErr := dec.Decode(&raw); rawErr != nil {
return fmt.Errorf("unable to read inspect data: %v", rawErr)
}
if rawErr := i.tmpl.Execute(buffer, raw); rawErr != nil {
return fmt.Errorf("Template parsing error: %v", rawErr)
}
if strings.Contains(buffer.String(), "<no value>") {
return fmt.Errorf("Template parsing error: %v", originalErr)
}
i.buffer.Write(buffer.Bytes())
i.buffer.WriteByte('\n')
return nil
}

View File

@@ -1,29 +0,0 @@
// +build go1.5
package inspect
import (
"bytes"
"encoding/json"
"fmt"
)
func (i *TemplateInspector) tryRawInspectFallback(rawElement []byte, _ error) error {
var raw interface{}
buffer := new(bytes.Buffer)
rdr := bytes.NewReader(rawElement)
dec := json.NewDecoder(rdr)
if rawErr := dec.Decode(&raw); rawErr != nil {
return fmt.Errorf("unable to read inspect data: %v", rawErr)
}
tmplMissingKey := i.tmpl.Option("missingkey=error")
if rawErr := tmplMissingKey.Execute(buffer, raw); rawErr != nil {
return fmt.Errorf("Template parsing error: %v", rawErr)
}
i.buffer.Write(buffer.Bytes())
i.buffer.WriteByte('\n')
return nil
}
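
The go1.5 fallback above hinges on text/template's missingkey option turning a missing map key into a hard error instead of printing <no value>. A standalone sketch of that behaviour:

package main

import (
    "bytes"
    "fmt"
    "text/template"
)

func main() {
    tmpl := template.Must(template.New("").Parse("{{.Dns}}"))
    data := map[string]interface{}{"Foo": "0.0.0.0"} // no "Dns" key

    var b bytes.Buffer
    // Default behaviour: the missing key renders as "<no value>".
    tmpl.Execute(&b, data)
    fmt.Printf("%q\n", b.String()) // "<no value>"

    b.Reset()
    // With missingkey=error the execution fails, which is what the raw fallback wants.
    err := tmpl.Option("missingkey=error").Execute(&b, data)
    fmt.Println(err != nil) // true
}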

View File

@@ -1,221 +0,0 @@
package inspect
import (
"bytes"
"strings"
"testing"
"github.com/docker/docker/utils/templates"
)
type testElement struct {
DNS string `json:"Dns"`
}
func TestTemplateInspectorDefault(t *testing.T) {
b := new(bytes.Buffer)
tmpl, err := templates.Parse("{{.DNS}}")
if err != nil {
t.Fatal(err)
}
i := NewTemplateInspector(b, tmpl)
if err := i.Inspect(testElement{"0.0.0.0"}, nil); err != nil {
t.Fatal(err)
}
if err := i.Flush(); err != nil {
t.Fatal(err)
}
if b.String() != "0.0.0.0\n" {
t.Fatalf("Expected `0.0.0.0\\n`, got `%s`", b.String())
}
}
func TestTemplateInspectorEmpty(t *testing.T) {
b := new(bytes.Buffer)
tmpl, err := templates.Parse("{{.DNS}}")
if err != nil {
t.Fatal(err)
}
i := NewTemplateInspector(b, tmpl)
if err := i.Flush(); err != nil {
t.Fatal(err)
}
if b.String() != "\n" {
t.Fatalf("Expected `\\n`, got `%s`", b.String())
}
}
func TestTemplateInspectorTemplateError(t *testing.T) {
b := new(bytes.Buffer)
tmpl, err := templates.Parse("{{.Foo}}")
if err != nil {
t.Fatal(err)
}
i := NewTemplateInspector(b, tmpl)
err = i.Inspect(testElement{"0.0.0.0"}, nil)
if err == nil {
t.Fatal("Expected error got nil")
}
if !strings.HasPrefix(err.Error(), "Template parsing error") {
t.Fatalf("Expected template error, got %v", err)
}
}
func TestTemplateInspectorRawFallback(t *testing.T) {
b := new(bytes.Buffer)
tmpl, err := templates.Parse("{{.Dns}}")
if err != nil {
t.Fatal(err)
}
i := NewTemplateInspector(b, tmpl)
if err := i.Inspect(testElement{"0.0.0.0"}, []byte(`{"Dns": "0.0.0.0"}`)); err != nil {
t.Fatal(err)
}
if err := i.Flush(); err != nil {
t.Fatal(err)
}
if b.String() != "0.0.0.0\n" {
t.Fatalf("Expected `0.0.0.0\\n`, got `%s`", b.String())
}
}
func TestTemplateInspectorRawFallbackError(t *testing.T) {
b := new(bytes.Buffer)
tmpl, err := templates.Parse("{{.Dns}}")
if err != nil {
t.Fatal(err)
}
i := NewTemplateInspector(b, tmpl)
err = i.Inspect(testElement{"0.0.0.0"}, []byte(`{"Foo": "0.0.0.0"}`))
if err == nil {
t.Fatal("Expected error got nil")
}
if !strings.HasPrefix(err.Error(), "Template parsing error") {
t.Fatalf("Expected template error, got %v", err)
}
}
func TestTemplateInspectorMultiple(t *testing.T) {
b := new(bytes.Buffer)
tmpl, err := templates.Parse("{{.DNS}}")
if err != nil {
t.Fatal(err)
}
i := NewTemplateInspector(b, tmpl)
if err := i.Inspect(testElement{"0.0.0.0"}, nil); err != nil {
t.Fatal(err)
}
if err := i.Inspect(testElement{"1.1.1.1"}, nil); err != nil {
t.Fatal(err)
}
if err := i.Flush(); err != nil {
t.Fatal(err)
}
if b.String() != "0.0.0.0\n1.1.1.1\n" {
t.Fatalf("Expected `0.0.0.0\\n1.1.1.1\\n`, got `%s`", b.String())
}
}
func TestIndentedInspectorDefault(t *testing.T) {
b := new(bytes.Buffer)
i := NewIndentedInspector(b)
if err := i.Inspect(testElement{"0.0.0.0"}, nil); err != nil {
t.Fatal(err)
}
if err := i.Flush(); err != nil {
t.Fatal(err)
}
expected := `[
{
"Dns": "0.0.0.0"
}
]
`
if b.String() != expected {
t.Fatalf("Expected `%s`, got `%s`", expected, b.String())
}
}
func TestIndentedInspectorMultiple(t *testing.T) {
b := new(bytes.Buffer)
i := NewIndentedInspector(b)
if err := i.Inspect(testElement{"0.0.0.0"}, nil); err != nil {
t.Fatal(err)
}
if err := i.Inspect(testElement{"1.1.1.1"}, nil); err != nil {
t.Fatal(err)
}
if err := i.Flush(); err != nil {
t.Fatal(err)
}
expected := `[
{
"Dns": "0.0.0.0"
},
{
"Dns": "1.1.1.1"
}
]
`
if b.String() != expected {
t.Fatalf("Expected `%s`, got `%s`", expected, b.String())
}
}
func TestIndentedInspectorEmpty(t *testing.T) {
b := new(bytes.Buffer)
i := NewIndentedInspector(b)
if err := i.Flush(); err != nil {
t.Fatal(err)
}
expected := "[]\n"
if b.String() != expected {
t.Fatalf("Expected `%s`, got `%s`", expected, b.String())
}
}
func TestIndentedInspectorRawElements(t *testing.T) {
b := new(bytes.Buffer)
i := NewIndentedInspector(b)
if err := i.Inspect(testElement{"0.0.0.0"}, []byte(`{"Dns": "0.0.0.0", "Node": "0"}`)); err != nil {
t.Fatal(err)
}
if err := i.Inspect(testElement{"1.1.1.1"}, []byte(`{"Dns": "1.1.1.1", "Node": "1"}`)); err != nil {
t.Fatal(err)
}
if err := i.Flush(); err != nil {
t.Fatal(err)
}
expected := `[
{
"Dns": "0.0.0.0",
"Node": "0"
},
{
"Dns": "1.1.1.1",
"Node": "1"
}
]
`
if b.String() != expected {
t.Fatalf("Expected `%s`, got `%s`", expected, b.String())
}
}

View File

@@ -1,35 +0,0 @@
package client
import (
"fmt"
"strings"
"golang.org/x/net/context"
Cli "github.com/docker/docker/cli"
flag "github.com/docker/docker/pkg/mflag"
)
// CmdKill kills one or more running containers using SIGKILL or a specified signal.
//
// Usage: docker kill [OPTIONS] CONTAINER [CONTAINER...]
func (cli *DockerCli) CmdKill(args ...string) error {
cmd := Cli.Subcmd("kill", []string{"CONTAINER [CONTAINER...]"}, Cli.DockerCommands["kill"].Description, true)
signal := cmd.String([]string{"s", "-signal"}, "KILL", "Signal to send to the container")
cmd.Require(flag.Min, 1)
cmd.ParseFlags(args, true)
var errs []string
for _, name := range cmd.Args() {
if err := cli.client.ContainerKill(context.Background(), name, *signal); err != nil {
errs = append(errs, err.Error())
} else {
fmt.Fprintf(cli.out, "%s\n", name)
}
}
if len(errs) > 0 {
return fmt.Errorf("%s", strings.Join(errs, "\n"))
}
return nil
}

View File

@@ -1,50 +0,0 @@
package client
import (
"io"
"os"
"golang.org/x/net/context"
Cli "github.com/docker/docker/cli"
"github.com/docker/docker/pkg/jsonmessage"
flag "github.com/docker/docker/pkg/mflag"
)
// CmdLoad loads an image from a tar archive.
//
// The tar archive is read from STDIN by default, or from a tar archive file.
//
// Usage: docker load [OPTIONS]
func (cli *DockerCli) CmdLoad(args ...string) error {
cmd := Cli.Subcmd("load", nil, Cli.DockerCommands["load"].Description, true)
infile := cmd.String([]string{"i", "-input"}, "", "Read from a tar archive file, instead of STDIN")
quiet := cmd.Bool([]string{"q", "-quiet"}, false, "Suppress the load output")
cmd.Require(flag.Exact, 0)
cmd.ParseFlags(args, true)
var input io.Reader = cli.in
if *infile != "" {
file, err := os.Open(*infile)
if err != nil {
return err
}
defer file.Close()
input = file
}
if !cli.isTerminalOut {
*quiet = true
}
response, err := cli.client.ImageLoad(context.Background(), input, *quiet)
if err != nil {
return err
}
defer response.Body.Close()
if response.Body != nil && response.JSON {
return jsonmessage.DisplayJSONMessagesStream(response.Body, cli.out, cli.outFd, cli.isTerminalOut, nil)
}
_, err = io.Copy(cli.out, response.Body)
return err
}

View File

@@ -1,177 +0,0 @@
package client
import (
"bufio"
"fmt"
"io"
"os"
"runtime"
"strings"
"golang.org/x/net/context"
Cli "github.com/docker/docker/cli"
"github.com/docker/docker/cliconfig"
"github.com/docker/docker/cliconfig/credentials"
flag "github.com/docker/docker/pkg/mflag"
"github.com/docker/docker/pkg/term"
"github.com/docker/engine-api/types"
)
// CmdLogin logs in a user to a Docker registry service.
//
// If no server is specified, the user will be logged into or registered to the registry's index server.
//
// Usage: docker login SERVER
func (cli *DockerCli) CmdLogin(args ...string) error {
cmd := Cli.Subcmd("login", []string{"[SERVER]"}, Cli.DockerCommands["login"].Description+".\nIf no server is specified, the default is defined by the daemon.", true)
cmd.Require(flag.Max, 1)
flUser := cmd.String([]string{"u", "-username"}, "", "Username")
flPassword := cmd.String([]string{"p", "-password"}, "", "Password")
// Deprecated in 1.11: Should be removed in docker 1.13
cmd.String([]string{"#e", "#-email"}, "", "Email")
cmd.ParseFlags(args, true)
// On Windows, force the use of the regular OS stdin stream. Fixes #14336/#14210
if runtime.GOOS == "windows" {
cli.in = os.Stdin
}
var serverAddress string
var isDefaultRegistry bool
if len(cmd.Args()) > 0 {
serverAddress = cmd.Arg(0)
} else {
serverAddress = cli.electAuthServer()
isDefaultRegistry = true
}
authConfig, err := cli.configureAuth(*flUser, *flPassword, serverAddress, isDefaultRegistry)
if err != nil {
return err
}
response, err := cli.client.RegistryLogin(context.Background(), authConfig)
if err != nil {
return err
}
if response.IdentityToken != "" {
authConfig.Password = ""
authConfig.IdentityToken = response.IdentityToken
}
if err := storeCredentials(cli.configFile, authConfig); err != nil {
return fmt.Errorf("Error saving credentials: %v", err)
}
if response.Status != "" {
fmt.Fprintln(cli.out, response.Status)
}
return nil
}
func (cli *DockerCli) promptWithDefault(prompt string, configDefault string) {
if configDefault == "" {
fmt.Fprintf(cli.out, "%s: ", prompt)
} else {
fmt.Fprintf(cli.out, "%s (%s): ", prompt, configDefault)
}
}
func (cli *DockerCli) configureAuth(flUser, flPassword, serverAddress string, isDefaultRegistry bool) (types.AuthConfig, error) {
authconfig, err := getCredentials(cli.configFile, serverAddress)
if err != nil {
return authconfig, err
}
authconfig.Username = strings.TrimSpace(authconfig.Username)
if flUser = strings.TrimSpace(flUser); flUser == "" {
if isDefaultRegistry {
// if this is the default registry (Docker Hub), then display the following message.
fmt.Fprintln(cli.out, "Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.")
}
cli.promptWithDefault("Username", authconfig.Username)
flUser = readInput(cli.in, cli.out)
flUser = strings.TrimSpace(flUser)
if flUser == "" {
flUser = authconfig.Username
}
}
if flUser == "" {
return authconfig, fmt.Errorf("Error: Non-null Username Required")
}
if flPassword == "" {
oldState, err := term.SaveState(cli.inFd)
if err != nil {
return authconfig, err
}
fmt.Fprintf(cli.out, "Password: ")
term.DisableEcho(cli.inFd, oldState)
flPassword = readInput(cli.in, cli.out)
fmt.Fprint(cli.out, "\n")
term.RestoreTerminal(cli.inFd, oldState)
if flPassword == "" {
return authconfig, fmt.Errorf("Error: Password Required")
}
}
authconfig.Username = flUser
authconfig.Password = flPassword
authconfig.ServerAddress = serverAddress
authconfig.IdentityToken = ""
return authconfig, nil
}
func readInput(in io.Reader, out io.Writer) string {
reader := bufio.NewReader(in)
line, _, err := reader.ReadLine()
if err != nil {
fmt.Fprintln(out, err.Error())
os.Exit(1)
}
return string(line)
}
// getCredentials loads the user credentials from a credentials store.
// The store is determined by the config file settings.
func getCredentials(c *cliconfig.ConfigFile, serverAddress string) (types.AuthConfig, error) {
s := loadCredentialsStore(c)
return s.Get(serverAddress)
}
func getAllCredentials(c *cliconfig.ConfigFile) (map[string]types.AuthConfig, error) {
s := loadCredentialsStore(c)
return s.GetAll()
}
// storeCredentials saves the user credentials in a credentials store.
// The store is determined by the config file settings.
func storeCredentials(c *cliconfig.ConfigFile, auth types.AuthConfig) error {
s := loadCredentialsStore(c)
return s.Store(auth)
}
// eraseCredentials removes the user credentials from a credentials store.
// The store is determined by the config file settings.
func eraseCredentials(c *cliconfig.ConfigFile, serverAddress string) error {
s := loadCredentialsStore(c)
return s.Erase(serverAddress)
}
// loadCredentialsStore initializes a new credentials store based
// in the settings provided in the configuration file.
func loadCredentialsStore(c *cliconfig.ConfigFile) credentials.Store {
if c.CredentialsStore != "" {
return credentials.NewNativeStore(c)
}
return credentials.NewFileStore(c)
}
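
A short usage sketch of the credential helpers defined above, assuming it sits in the same package; the registry address and token are illustrative and error handling is collapsed to the bare minimum:

// exampleCredentialRoundTrip stores, reads back and erases one set of credentials
// through whichever store loadCredentialsStore picks (native helper or config file).
func exampleCredentialRoundTrip(cfg *cliconfig.ConfigFile) error {
    auth := types.AuthConfig{
        Username:      "someuser",
        IdentityToken: "token-from-the-registry",
        ServerAddress: "https://index.docker.io/v1/",
    }
    if err := storeCredentials(cfg, auth); err != nil {
        return err
    }
    if _, err := getCredentials(cfg, auth.ServerAddress); err != nil {
        return err
    }
    return eraseCredentials(cfg, auth.ServerAddress)
}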

View File

@@ -1,41 +0,0 @@
package client
import (
"fmt"
Cli "github.com/docker/docker/cli"
flag "github.com/docker/docker/pkg/mflag"
)
// CmdLogout logs a user out from a Docker registry.
//
// If no server is specified, the user will be logged out from the registry's index server.
//
// Usage: docker logout [SERVER]
func (cli *DockerCli) CmdLogout(args ...string) error {
cmd := Cli.Subcmd("logout", []string{"[SERVER]"}, Cli.DockerCommands["logout"].Description+".\nIf no server is specified, the default is defined by the daemon.", true)
cmd.Require(flag.Max, 1)
cmd.ParseFlags(args, true)
var serverAddress string
if len(cmd.Args()) > 0 {
serverAddress = cmd.Arg(0)
} else {
serverAddress = cli.electAuthServer()
}
// check if we're logged in based on the records in the config file,
// which means it couldn't have user/pass because they may be in the creds store
if _, ok := cli.configFile.AuthConfigs[serverAddress]; !ok {
fmt.Fprintf(cli.out, "Not logged in to %s\n", serverAddress)
return nil
}
fmt.Fprintf(cli.out, "Remove login credentials for %s\n", serverAddress)
if err := eraseCredentials(cli.configFile, serverAddress); err != nil {
fmt.Fprintf(cli.out, "WARNING: could not erase credentials: %v\n", err)
}
return nil
}

View File

@@ -1,65 +0,0 @@
package client
import (
"fmt"
"io"
"golang.org/x/net/context"
Cli "github.com/docker/docker/cli"
flag "github.com/docker/docker/pkg/mflag"
"github.com/docker/docker/pkg/stdcopy"
"github.com/docker/engine-api/types"
)
var validDrivers = map[string]bool{
"json-file": true,
"journald": true,
}
// CmdLogs fetches the logs of a given container.
//
// docker logs [OPTIONS] CONTAINER
func (cli *DockerCli) CmdLogs(args ...string) error {
cmd := Cli.Subcmd("logs", []string{"CONTAINER"}, Cli.DockerCommands["logs"].Description, true)
follow := cmd.Bool([]string{"f", "-follow"}, false, "Follow log output")
since := cmd.String([]string{"-since"}, "", "Show logs since timestamp")
times := cmd.Bool([]string{"t", "-timestamps"}, false, "Show timestamps")
tail := cmd.String([]string{"-tail"}, "all", "Number of lines to show from the end of the logs")
cmd.Require(flag.Exact, 1)
cmd.ParseFlags(args, true)
name := cmd.Arg(0)
c, err := cli.client.ContainerInspect(context.Background(), name)
if err != nil {
return err
}
if !validDrivers[c.HostConfig.LogConfig.Type] {
return fmt.Errorf("\"logs\" command is supported only for \"json-file\" and \"journald\" logging drivers (got: %s)", c.HostConfig.LogConfig.Type)
}
options := types.ContainerLogsOptions{
ContainerID: name,
ShowStdout: true,
ShowStderr: true,
Since: *since,
Timestamps: *times,
Follow: *follow,
Tail: *tail,
}
responseBody, err := cli.client.ContainerLogs(context.Background(), options)
if err != nil {
return err
}
defer responseBody.Close()
if c.Config.Tty {
_, err = io.Copy(cli.out, responseBody)
} else {
_, err = stdcopy.StdCopy(cli.out, cli.err, responseBody)
}
return err
}

View File

@@ -1,392 +0,0 @@
package client
import (
"fmt"
"net"
"sort"
"strings"
"text/tabwriter"
"golang.org/x/net/context"
Cli "github.com/docker/docker/cli"
"github.com/docker/docker/opts"
flag "github.com/docker/docker/pkg/mflag"
"github.com/docker/docker/pkg/stringid"
runconfigopts "github.com/docker/docker/runconfig/opts"
"github.com/docker/engine-api/types"
"github.com/docker/engine-api/types/filters"
"github.com/docker/engine-api/types/network"
)
// CmdNetwork is the parent subcommand for all network commands
//
// Usage: docker network <COMMAND> [OPTIONS]
func (cli *DockerCli) CmdNetwork(args ...string) error {
cmd := Cli.Subcmd("network", []string{"COMMAND [OPTIONS]"}, networkUsage(), false)
cmd.Require(flag.Min, 1)
err := cmd.ParseFlags(args, true)
cmd.Usage()
return err
}
// CmdNetworkCreate creates a new network with a given name
//
// Usage: docker network create [OPTIONS] <NETWORK-NAME>
func (cli *DockerCli) CmdNetworkCreate(args ...string) error {
cmd := Cli.Subcmd("network create", []string{"NETWORK-NAME"}, "Creates a new network with a name specified by the user", false)
flDriver := cmd.String([]string{"d", "-driver"}, "bridge", "Driver to manage the Network")
flOpts := opts.NewMapOpts(nil, nil)
flIpamDriver := cmd.String([]string{"-ipam-driver"}, "default", "IP Address Management Driver")
flIpamSubnet := opts.NewListOpts(nil)
flIpamIPRange := opts.NewListOpts(nil)
flIpamGateway := opts.NewListOpts(nil)
flIpamAux := opts.NewMapOpts(nil, nil)
flIpamOpt := opts.NewMapOpts(nil, nil)
flLabels := opts.NewListOpts(nil)
cmd.Var(&flIpamSubnet, []string{"-subnet"}, "subnet in CIDR format that represents a network segment")
cmd.Var(&flIpamIPRange, []string{"-ip-range"}, "allocate container ip from a sub-range")
cmd.Var(&flIpamGateway, []string{"-gateway"}, "ipv4 or ipv6 Gateway for the master subnet")
cmd.Var(flIpamAux, []string{"-aux-address"}, "auxiliary ipv4 or ipv6 addresses used by Network driver")
cmd.Var(flOpts, []string{"o", "-opt"}, "set driver specific options")
cmd.Var(flIpamOpt, []string{"-ipam-opt"}, "set IPAM driver specific options")
cmd.Var(&flLabels, []string{"-label"}, "set metadata on a network")
flInternal := cmd.Bool([]string{"-internal"}, false, "restricts external access to the network")
flIPv6 := cmd.Bool([]string{"-ipv6"}, false, "enable IPv6 networking")
cmd.Require(flag.Exact, 1)
err := cmd.ParseFlags(args, true)
if err != nil {
return err
}
// Set the default driver to "" if the user didn't set the value.
// That way we can know whether it was user input or not.
driver := *flDriver
if !cmd.IsSet("-driver") && !cmd.IsSet("d") {
driver = ""
}
ipamCfg, err := consolidateIpam(flIpamSubnet.GetAll(), flIpamIPRange.GetAll(), flIpamGateway.GetAll(), flIpamAux.GetAll())
if err != nil {
return err
}
// Construct network create request body
nc := types.NetworkCreate{
Name: cmd.Arg(0),
Driver: driver,
IPAM: network.IPAM{Driver: *flIpamDriver, Config: ipamCfg, Options: flIpamOpt.GetAll()},
Options: flOpts.GetAll(),
CheckDuplicate: true,
Internal: *flInternal,
EnableIPv6: *flIPv6,
Labels: runconfigopts.ConvertKVStringsToMap(flLabels.GetAll()),
}
resp, err := cli.client.NetworkCreate(context.Background(), nc)
if err != nil {
return err
}
fmt.Fprintf(cli.out, "%s\n", resp.ID)
return nil
}
// CmdNetworkRm deletes one or more networks
//
// Usage: docker network rm NETWORK-NAME|NETWORK-ID [NETWORK-NAME|NETWORK-ID...]
func (cli *DockerCli) CmdNetworkRm(args ...string) error {
cmd := Cli.Subcmd("network rm", []string{"NETWORK [NETWORK...]"}, "Deletes one or more networks", false)
cmd.Require(flag.Min, 1)
if err := cmd.ParseFlags(args, true); err != nil {
return err
}
status := 0
for _, net := range cmd.Args() {
if err := cli.client.NetworkRemove(context.Background(), net); err != nil {
fmt.Fprintf(cli.err, "%s\n", err)
status = 1
continue
}
}
if status != 0 {
return Cli.StatusError{StatusCode: status}
}
return nil
}
// CmdNetworkConnect connects a container to a network
//
// Usage: docker network connect [OPTIONS] <NETWORK> <CONTAINER>
func (cli *DockerCli) CmdNetworkConnect(args ...string) error {
cmd := Cli.Subcmd("network connect", []string{"NETWORK CONTAINER"}, "Connects a container to a network", false)
flIPAddress := cmd.String([]string{"-ip"}, "", "IP Address")
flIPv6Address := cmd.String([]string{"-ip6"}, "", "IPv6 Address")
flLinks := opts.NewListOpts(runconfigopts.ValidateLink)
cmd.Var(&flLinks, []string{"-link"}, "Add link to another container")
flAliases := opts.NewListOpts(nil)
cmd.Var(&flAliases, []string{"-alias"}, "Add network-scoped alias for the container")
cmd.Require(flag.Min, 2)
if err := cmd.ParseFlags(args, true); err != nil {
return err
}
epConfig := &network.EndpointSettings{
IPAMConfig: &network.EndpointIPAMConfig{
IPv4Address: *flIPAddress,
IPv6Address: *flIPv6Address,
},
Links: flLinks.GetAll(),
Aliases: flAliases.GetAll(),
}
return cli.client.NetworkConnect(context.Background(), cmd.Arg(0), cmd.Arg(1), epConfig)
}
// CmdNetworkDisconnect disconnects a container from a network
//
// Usage: docker network disconnect <NETWORK> <CONTAINER>
func (cli *DockerCli) CmdNetworkDisconnect(args ...string) error {
cmd := Cli.Subcmd("network disconnect", []string{"NETWORK CONTAINER"}, "Disconnects container from a network", false)
force := cmd.Bool([]string{"f", "-force"}, false, "Force the container to disconnect from a network")
cmd.Require(flag.Exact, 2)
if err := cmd.ParseFlags(args, true); err != nil {
return err
}
return cli.client.NetworkDisconnect(context.Background(), cmd.Arg(0), cmd.Arg(1), *force)
}
// CmdNetworkLs lists all the networks managed by docker daemon
//
// Usage: docker network ls [OPTIONS]
func (cli *DockerCli) CmdNetworkLs(args ...string) error {
cmd := Cli.Subcmd("network ls", nil, "Lists networks", true)
quiet := cmd.Bool([]string{"q", "-quiet"}, false, "Only display numeric IDs")
noTrunc := cmd.Bool([]string{"-no-trunc"}, false, "Do not truncate the output")
flFilter := opts.NewListOpts(nil)
cmd.Var(&flFilter, []string{"f", "-filter"}, "Filter output based on conditions provided")
cmd.Require(flag.Exact, 0)
err := cmd.ParseFlags(args, true)
if err != nil {
return err
}
// Consolidate all filter flags, and sanity check them early.
// They'll get processed after we get the response from the server.
netFilterArgs := filters.NewArgs()
for _, f := range flFilter.GetAll() {
if netFilterArgs, err = filters.ParseFlag(f, netFilterArgs); err != nil {
return err
}
}
options := types.NetworkListOptions{
Filters: netFilterArgs,
}
networkResources, err := cli.client.NetworkList(context.Background(), options)
if err != nil {
return err
}
wr := tabwriter.NewWriter(cli.out, 20, 1, 3, ' ', 0)
// unless quiet (-q) is specified, print field titles
if !*quiet {
fmt.Fprintln(wr, "NETWORK ID\tNAME\tDRIVER")
}
sort.Sort(byNetworkName(networkResources))
for _, networkResource := range networkResources {
ID := networkResource.ID
netName := networkResource.Name
if !*noTrunc {
ID = stringid.TruncateID(ID)
}
if *quiet {
fmt.Fprintln(wr, ID)
continue
}
driver := networkResource.Driver
fmt.Fprintf(wr, "%s\t%s\t%s\t",
ID,
netName,
driver)
fmt.Fprint(wr, "\n")
}
wr.Flush()
return nil
}
type byNetworkName []types.NetworkResource
func (r byNetworkName) Len() int { return len(r) }
func (r byNetworkName) Swap(i, j int) { r[i], r[j] = r[j], r[i] }
func (r byNetworkName) Less(i, j int) bool { return r[i].Name < r[j].Name }
// CmdNetworkInspect inspects the network object for more details
//
// Usage: docker network inspect [OPTIONS] <NETWORK> [NETWORK...]
func (cli *DockerCli) CmdNetworkInspect(args ...string) error {
cmd := Cli.Subcmd("network inspect", []string{"NETWORK [NETWORK...]"}, "Displays detailed information on one or more networks", false)
tmplStr := cmd.String([]string{"f", "-format"}, "", "Format the output using the given go template")
cmd.Require(flag.Min, 1)
if err := cmd.ParseFlags(args, true); err != nil {
return err
}
inspectSearcher := func(name string) (interface{}, []byte, error) {
i, err := cli.client.NetworkInspect(context.Background(), name)
return i, nil, err
}
return cli.inspectElements(*tmplStr, cmd.Args(), inspectSearcher)
}
// consolidateIpam groups the ipam configuration from the different related flags.
// A user can configure a network with multiple non-overlapping subnets, so it is
// possible to correlate the various related parameters and consolidate them.
// It consolidates subnets, ip-ranges, gateways and auxiliary addresses into
// structured ipam data.
func consolidateIpam(subnets, ranges, gateways []string, auxaddrs map[string]string) ([]network.IPAMConfig, error) {
if len(subnets) < len(ranges) || len(subnets) < len(gateways) {
return nil, fmt.Errorf("every ip-range or gateway must have a corresponding subnet")
}
iData := map[string]*network.IPAMConfig{}
// Populate non-overlapping subnets into consolidation map
for _, s := range subnets {
for k := range iData {
ok1, err := subnetMatches(s, k)
if err != nil {
return nil, err
}
ok2, err := subnetMatches(k, s)
if err != nil {
return nil, err
}
if ok1 || ok2 {
return nil, fmt.Errorf("multiple overlapping subnet configuration is not supported")
}
}
iData[s] = &network.IPAMConfig{Subnet: s, AuxAddress: map[string]string{}}
}
// Validate and add valid ip ranges
for _, r := range ranges {
match := false
for _, s := range subnets {
ok, err := subnetMatches(s, r)
if err != nil {
return nil, err
}
if !ok {
continue
}
if iData[s].IPRange != "" {
return nil, fmt.Errorf("cannot configure multiple ranges (%s, %s) on the same subnet (%s)", r, iData[s].IPRange, s)
}
d := iData[s]
d.IPRange = r
match = true
}
if !match {
return nil, fmt.Errorf("no matching subnet for range %s", r)
}
}
// Validate and add valid gateways
for _, g := range gateways {
match := false
for _, s := range subnets {
ok, err := subnetMatches(s, g)
if err != nil {
return nil, err
}
if !ok {
continue
}
if iData[s].Gateway != "" {
return nil, fmt.Errorf("cannot configure multiple gateways (%s, %s) for the same subnet (%s)", g, iData[s].Gateway, s)
}
d := iData[s]
d.Gateway = g
match = true
}
if !match {
return nil, fmt.Errorf("no matching subnet for gateway %s", g)
}
}
// Validate and add aux-addresses
for key, aa := range auxaddrs {
match := false
for _, s := range subnets {
ok, err := subnetMatches(s, aa)
if err != nil {
return nil, err
}
if !ok {
continue
}
iData[s].AuxAddress[key] = aa
match = true
}
if !match {
return nil, fmt.Errorf("no matching subnet for aux-address %s", aa)
}
}
idl := []network.IPAMConfig{}
for _, v := range iData {
idl = append(idl, *v)
}
return idl, nil
}
func subnetMatches(subnet, data string) (bool, error) {
var (
ip net.IP
)
_, s, err := net.ParseCIDR(subnet)
if err != nil {
return false, fmt.Errorf("Invalid subnet %s : %v", s, err)
}
if strings.Contains(data, "/") {
ip, _, err = net.ParseCIDR(data)
if err != nil {
return false, fmt.Errorf("Invalid cidr %s : %v", data, err)
}
} else {
ip = net.ParseIP(data)
}
return s.Contains(ip), nil
}
func networkUsage() string {
networkCommands := map[string]string{
"create": "Create a network",
"connect": "Connect container to a network",
"disconnect": "Disconnect container from a network",
"inspect": "Display detailed network information",
"ls": "List all networks",
"rm": "Remove a network",
}
help := "Commands:\n"
for cmd, description := range networkCommands {
help += fmt.Sprintf(" %-25.25s%s\n", cmd, description)
}
help += fmt.Sprintf("\nRun 'docker network COMMAND --help' for more information on a command.")
return help
}

View File

@@ -1,34 +0,0 @@
package client
import (
"fmt"
"strings"
"golang.org/x/net/context"
Cli "github.com/docker/docker/cli"
flag "github.com/docker/docker/pkg/mflag"
)
// CmdPause pauses all processes within one or more containers.
//
// Usage: docker pause CONTAINER [CONTAINER...]
func (cli *DockerCli) CmdPause(args ...string) error {
cmd := Cli.Subcmd("pause", []string{"CONTAINER [CONTAINER...]"}, Cli.DockerCommands["pause"].Description, true)
cmd.Require(flag.Min, 1)
cmd.ParseFlags(args, true)
var errs []string
for _, name := range cmd.Args() {
if err := cli.client.ContainerPause(context.Background(), name); err != nil {
errs = append(errs, err.Error())
} else {
fmt.Fprintf(cli.out, "%s\n", name)
}
}
if len(errs) > 0 {
return fmt.Errorf("%s", strings.Join(errs, "\n"))
}
return nil
}

View File

@@ -1,61 +0,0 @@
package client
import (
"fmt"
"strings"
"golang.org/x/net/context"
Cli "github.com/docker/docker/cli"
flag "github.com/docker/docker/pkg/mflag"
"github.com/docker/go-connections/nat"
)
// CmdPort lists port mappings for a container.
// If a private port is specified, it also shows the public-facing port that is NATed to the private port.
//
// Usage: docker port CONTAINER [PRIVATE_PORT[/PROTO]]
func (cli *DockerCli) CmdPort(args ...string) error {
cmd := Cli.Subcmd("port", []string{"CONTAINER [PRIVATE_PORT[/PROTO]]"}, Cli.DockerCommands["port"].Description, true)
cmd.Require(flag.Min, 1)
cmd.ParseFlags(args, true)
c, err := cli.client.ContainerInspect(context.Background(), cmd.Arg(0))
if err != nil {
return err
}
if cmd.NArg() == 2 {
var (
port = cmd.Arg(1)
proto = "tcp"
parts = strings.SplitN(port, "/", 2)
)
if len(parts) == 2 && len(parts[1]) != 0 {
port = parts[0]
proto = parts[1]
}
natPort := port + "/" + proto
newP, err := nat.NewPort(proto, port)
if err != nil {
return err
}
if frontends, exists := c.NetworkSettings.Ports[newP]; exists && frontends != nil {
for _, frontend := range frontends {
fmt.Fprintf(cli.out, "%s:%s\n", frontend.HostIP, frontend.HostPort)
}
return nil
}
return fmt.Errorf("Error: No public port '%s' published for %s", natPort, cmd.Arg(0))
}
for from, frontends := range c.NetworkSettings.Ports {
for _, frontend := range frontends {
fmt.Fprintf(cli.out, "%s -> %s:%s\n", from, frontend.HostIP, frontend.HostPort)
}
}
return nil
}
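// Illustrative sketch (not part of the original file): the argument parsing above means
// "docker port web 8080/udp" looks up the mapping for "8080/udp", while "docker port web 8080"
// defaults the protocol to tcp and looks up "8080/tcp"; with no PRIVATE_PORT argument, every
// mapping in NetworkSettings.Ports is printed as "PRIVATE -> HOSTIP:HOSTPORT".
// The container name "web" is hypothetical.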

View File

@@ -1,89 +0,0 @@
package client
import (
"golang.org/x/net/context"
"github.com/docker/docker/api/client/formatter"
Cli "github.com/docker/docker/cli"
"github.com/docker/docker/opts"
flag "github.com/docker/docker/pkg/mflag"
"github.com/docker/engine-api/types"
"github.com/docker/engine-api/types/filters"
)
// CmdPs outputs a list of Docker containers.
//
// Usage: docker ps [OPTIONS]
func (cli *DockerCli) CmdPs(args ...string) error {
var (
err error
psFilterArgs = filters.NewArgs()
cmd = Cli.Subcmd("ps", nil, Cli.DockerCommands["ps"].Description, true)
quiet = cmd.Bool([]string{"q", "-quiet"}, false, "Only display numeric IDs")
size = cmd.Bool([]string{"s", "-size"}, false, "Display total file sizes")
all = cmd.Bool([]string{"a", "-all"}, false, "Show all containers (default shows just running)")
noTrunc = cmd.Bool([]string{"-no-trunc"}, false, "Don't truncate output")
nLatest = cmd.Bool([]string{"l", "-latest"}, false, "Show the latest created container (includes all states)")
since = cmd.String([]string{"#-since"}, "", "Show containers created since Id or Name (includes all states)")
before = cmd.String([]string{"#-before"}, "", "Only show containers created before Id or Name")
last = cmd.Int([]string{"n"}, -1, "Show n last created containers (includes all states)")
format = cmd.String([]string{"-format"}, "", "Pretty-print containers using a Go template")
flFilter = opts.NewListOpts(nil)
)
cmd.Require(flag.Exact, 0)
cmd.Var(&flFilter, []string{"f", "-filter"}, "Filter output based on conditions provided")
cmd.ParseFlags(args, true)
if *last == -1 && *nLatest {
*last = 1
}
// Consolidate all filter flags, and sanity check them.
// They'll get processed in the daemon/server.
for _, f := range flFilter.GetAll() {
if psFilterArgs, err = filters.ParseFlag(f, psFilterArgs); err != nil {
return err
}
}
options := types.ContainerListOptions{
All: *all,
Limit: *last,
Since: *since,
Before: *before,
Size: *size,
Filter: psFilterArgs,
}
containers, err := cli.client.ContainerList(context.Background(), options)
if err != nil {
return err
}
f := *format
if len(f) == 0 {
if len(cli.PsFormat()) > 0 && !*quiet {
f = cli.PsFormat()
} else {
f = "table"
}
}
psCtx := formatter.ContainerContext{
Context: formatter.Context{
Output: cli.out,
Format: f,
Quiet: *quiet,
Trunc: !*noTrunc,
},
Size: *size,
Containers: containers,
}
psCtx.Write()
return nil
}
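// Illustrative note (not part of the original file): when --format is empty the code above
// falls back to cli.PsFormat() (unless -q was given) and finally to the built-in "table"
// layout; e.g. "docker ps --format '{{.ID}} {{.Names}}'" (hypothetical template) prints
// one line per container using that template.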

View File

@@ -1,89 +0,0 @@
package client
import (
"errors"
"fmt"
"golang.org/x/net/context"
Cli "github.com/docker/docker/cli"
"github.com/docker/docker/pkg/jsonmessage"
flag "github.com/docker/docker/pkg/mflag"
"github.com/docker/docker/reference"
"github.com/docker/docker/registry"
"github.com/docker/engine-api/client"
"github.com/docker/engine-api/types"
)
// CmdPull pulls an image or a repository from the registry.
//
// Usage: docker pull [OPTIONS] IMAGENAME[:TAG|@DIGEST]
func (cli *DockerCli) CmdPull(args ...string) error {
cmd := Cli.Subcmd("pull", []string{"NAME[:TAG|@DIGEST]"}, Cli.DockerCommands["pull"].Description, true)
allTags := cmd.Bool([]string{"a", "-all-tags"}, false, "Download all tagged images in the repository")
addTrustedFlags(cmd, true)
cmd.Require(flag.Exact, 1)
cmd.ParseFlags(args, true)
remote := cmd.Arg(0)
distributionRef, err := reference.ParseNamed(remote)
if err != nil {
return err
}
if *allTags && !reference.IsNameOnly(distributionRef) {
return errors.New("tag can't be used with --all-tags/-a")
}
if !*allTags && reference.IsNameOnly(distributionRef) {
distributionRef = reference.WithDefaultTag(distributionRef)
fmt.Fprintf(cli.out, "Using default tag: %s\n", reference.DefaultTag)
}
var tag string
switch x := distributionRef.(type) {
case reference.Canonical:
tag = x.Digest().String()
case reference.NamedTagged:
tag = x.Tag()
}
ref := registry.ParseReference(tag)
// Resolve the Repository name from fqn to RepositoryInfo
repoInfo, err := registry.ParseRepositoryInfo(distributionRef)
if err != nil {
return err
}
authConfig := cli.resolveAuthConfig(repoInfo.Index)
requestPrivilege := cli.registryAuthenticationPrivilegedFunc(repoInfo.Index, "pull")
if isTrusted() && !ref.HasDigest() {
// Check if tag is digest
return cli.trustedPull(repoInfo, ref, authConfig, requestPrivilege)
}
return cli.imagePullPrivileged(authConfig, distributionRef.String(), "", requestPrivilege)
}
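// Illustrative sketch (not part of the original file) of how the reference handling above
// resolves a few hypothetical arguments:
// "ubuntu"                       -> name only, default tag appended -> pulls "ubuntu:latest"
// "ubuntu:14.04"                 -> reference.NamedTagged           -> tag "14.04"
// "ubuntu@sha256:..."            -> reference.Canonical             -> tag is the digest string
// "ubuntu:14.04" with --all-tags -> rejected: "tag can't be used with --all-tags/-a"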
func (cli *DockerCli) imagePullPrivileged(authConfig types.AuthConfig, imageID, tag string, requestPrivilege client.RequestPrivilegeFunc) error {
encodedAuth, err := encodeAuthToBase64(authConfig)
if err != nil {
return err
}
options := types.ImagePullOptions{
ImageID: imageID,
Tag: tag,
RegistryAuth: encodedAuth,
}
responseBody, err := cli.client.ImagePull(context.Background(), options, requestPrivilege)
if err != nil {
return err
}
defer responseBody.Close()
return jsonmessage.DisplayJSONMessagesStream(responseBody, cli.out, cli.outFd, cli.isTerminalOut, nil)
}

View File

@@ -1,76 +0,0 @@
package client
import (
"errors"
"io"
"golang.org/x/net/context"
Cli "github.com/docker/docker/cli"
"github.com/docker/docker/pkg/jsonmessage"
flag "github.com/docker/docker/pkg/mflag"
"github.com/docker/docker/reference"
"github.com/docker/docker/registry"
"github.com/docker/engine-api/client"
"github.com/docker/engine-api/types"
)
// CmdPush pushes an image or repository to the registry.
//
// Usage: docker push NAME[:TAG]
func (cli *DockerCli) CmdPush(args ...string) error {
cmd := Cli.Subcmd("push", []string{"NAME[:TAG]"}, Cli.DockerCommands["push"].Description, true)
addTrustedFlags(cmd, false)
cmd.Require(flag.Exact, 1)
cmd.ParseFlags(args, true)
ref, err := reference.ParseNamed(cmd.Arg(0))
if err != nil {
return err
}
var tag string
switch x := ref.(type) {
case reference.Canonical:
return errors.New("cannot push a digest reference")
case reference.NamedTagged:
tag = x.Tag()
}
// Resolve the Repository name from fqn to RepositoryInfo
repoInfo, err := registry.ParseRepositoryInfo(ref)
if err != nil {
return err
}
// Resolve the Auth config relevant for this server
authConfig := cli.resolveAuthConfig(repoInfo.Index)
requestPrivilege := cli.registryAuthenticationPrivilegedFunc(repoInfo.Index, "push")
if isTrusted() {
return cli.trustedPush(repoInfo, tag, authConfig, requestPrivilege)
}
responseBody, err := cli.imagePushPrivileged(authConfig, ref.Name(), tag, requestPrivilege)
if err != nil {
return err
}
defer responseBody.Close()
return jsonmessage.DisplayJSONMessagesStream(responseBody, cli.out, cli.outFd, cli.isTerminalOut, nil)
}
func (cli *DockerCli) imagePushPrivileged(authConfig types.AuthConfig, imageID, tag string, requestPrivilege client.RequestPrivilegeFunc) (io.ReadCloser, error) {
encodedAuth, err := encodeAuthToBase64(authConfig)
if err != nil {
return nil, err
}
options := types.ImagePushOptions{
ImageID: imageID,
Tag: tag,
RegistryAuth: encodedAuth,
}
return cli.client.ImagePush(context.Background(), options, requestPrivilege)
}

View File

@@ -1,34 +0,0 @@
package client
import (
"fmt"
"strings"
"golang.org/x/net/context"
Cli "github.com/docker/docker/cli"
flag "github.com/docker/docker/pkg/mflag"
)
// CmdRename renames a container.
//
// Usage: docker rename OLD_NAME NEW_NAME
func (cli *DockerCli) CmdRename(args ...string) error {
cmd := Cli.Subcmd("rename", []string{"OLD_NAME NEW_NAME"}, Cli.DockerCommands["rename"].Description, true)
cmd.Require(flag.Exact, 2)
cmd.ParseFlags(args, true)
oldName := strings.TrimSpace(cmd.Arg(0))
newName := strings.TrimSpace(cmd.Arg(1))
if oldName == "" || newName == "" {
return fmt.Errorf("Error: Neither old nor new names may be empty")
}
if err := cli.client.ContainerRename(context.Background(), oldName, newName); err != nil {
fmt.Fprintf(cli.err, "%s\n", err)
return fmt.Errorf("Error: failed to rename container named %s", oldName)
}
return nil
}

View File

@@ -1,35 +0,0 @@
package client
import (
"fmt"
"strings"
"golang.org/x/net/context"
Cli "github.com/docker/docker/cli"
flag "github.com/docker/docker/pkg/mflag"
)
// CmdRestart restarts one or more containers.
//
// Usage: docker restart [OPTIONS] CONTAINER [CONTAINER...]
func (cli *DockerCli) CmdRestart(args ...string) error {
cmd := Cli.Subcmd("restart", []string{"CONTAINER [CONTAINER...]"}, Cli.DockerCommands["restart"].Description, true)
nSeconds := cmd.Int([]string{"t", "-time"}, 10, "Seconds to wait for stop before killing the container")
cmd.Require(flag.Min, 1)
cmd.ParseFlags(args, true)
var errs []string
for _, name := range cmd.Args() {
if err := cli.client.ContainerRestart(context.Background(), name, *nSeconds); err != nil {
errs = append(errs, err.Error())
} else {
fmt.Fprintf(cli.out, "%s\n", name)
}
}
if len(errs) > 0 {
return fmt.Errorf("%s", strings.Join(errs, "\n"))
}
return nil
}

View File

@@ -1,56 +0,0 @@
package client
import (
"fmt"
"strings"
"golang.org/x/net/context"
Cli "github.com/docker/docker/cli"
flag "github.com/docker/docker/pkg/mflag"
"github.com/docker/engine-api/types"
)
// CmdRm removes one or more containers.
//
// Usage: docker rm [OPTIONS] CONTAINER [CONTAINER...]
func (cli *DockerCli) CmdRm(args ...string) error {
cmd := Cli.Subcmd("rm", []string{"CONTAINER [CONTAINER...]"}, Cli.DockerCommands["rm"].Description, true)
v := cmd.Bool([]string{"v", "-volumes"}, false, "Remove the volumes associated with the container")
link := cmd.Bool([]string{"l", "-link"}, false, "Remove the specified link")
force := cmd.Bool([]string{"f", "-force"}, false, "Force the removal of a running container (uses SIGKILL)")
cmd.Require(flag.Min, 1)
cmd.ParseFlags(args, true)
var errs []string
for _, name := range cmd.Args() {
if name == "" {
return fmt.Errorf("Container name cannot be empty")
}
name = strings.Trim(name, "/")
if err := cli.removeContainer(name, *v, *link, *force); err != nil {
errs = append(errs, err.Error())
} else {
fmt.Fprintf(cli.out, "%s\n", name)
}
}
if len(errs) > 0 {
return fmt.Errorf("%s", strings.Join(errs, "\n"))
}
return nil
}
func (cli *DockerCli) removeContainer(containerID string, removeVolumes, removeLinks, force bool) error {
options := types.ContainerRemoveOptions{
ContainerID: containerID,
RemoveVolumes: removeVolumes,
RemoveLinks: removeLinks,
Force: force,
}
if err := cli.client.ContainerRemove(context.Background(), options); err != nil {
return err
}
return nil
}

View File

@@ -1,59 +0,0 @@
package client
import (
"fmt"
"net/url"
"strings"
"golang.org/x/net/context"
Cli "github.com/docker/docker/cli"
flag "github.com/docker/docker/pkg/mflag"
"github.com/docker/engine-api/types"
)
// CmdRmi removes all images with the specified name(s).
//
// Usage: docker rmi [OPTIONS] IMAGE [IMAGE...]
func (cli *DockerCli) CmdRmi(args ...string) error {
cmd := Cli.Subcmd("rmi", []string{"IMAGE [IMAGE...]"}, Cli.DockerCommands["rmi"].Description, true)
force := cmd.Bool([]string{"f", "-force"}, false, "Force removal of the image")
noprune := cmd.Bool([]string{"-no-prune"}, false, "Do not delete untagged parents")
cmd.Require(flag.Min, 1)
cmd.ParseFlags(args, true)
v := url.Values{}
if *force {
v.Set("force", "1")
}
if *noprune {
v.Set("noprune", "1")
}
var errs []string
for _, name := range cmd.Args() {
options := types.ImageRemoveOptions{
ImageID: name,
Force: *force,
PruneChildren: !*noprune,
}
dels, err := cli.client.ImageRemove(context.Background(), options)
if err != nil {
errs = append(errs, err.Error())
} else {
for _, del := range dels {
if del.Deleted != "" {
fmt.Fprintf(cli.out, "Deleted: %s\n", del.Deleted)
} else {
fmt.Fprintf(cli.out, "Untagged: %s\n", del.Untagged)
}
}
}
}
if len(errs) > 0 {
return fmt.Errorf("%s", strings.Join(errs, "\n"))
}
return nil
}

View File

@@ -1,287 +0,0 @@
package client
import (
"fmt"
"io"
"os"
"runtime"
"strings"
"golang.org/x/net/context"
"github.com/Sirupsen/logrus"
Cli "github.com/docker/docker/cli"
"github.com/docker/docker/opts"
"github.com/docker/docker/pkg/promise"
"github.com/docker/docker/pkg/signal"
runconfigopts "github.com/docker/docker/runconfig/opts"
"github.com/docker/engine-api/types"
"github.com/docker/libnetwork/resolvconf/dns"
)
const (
errCmdNotFound = "not found or does not exist."
errCmdCouldNotBeInvoked = "could not be invoked."
)
func (cid *cidFile) Close() error {
cid.file.Close()
if !cid.written {
if err := os.Remove(cid.path); err != nil {
return fmt.Errorf("failed to remove the CID file '%s': %s \n", cid.path, err)
}
}
return nil
}
func (cid *cidFile) Write(id string) error {
if _, err := cid.file.Write([]byte(id)); err != nil {
return fmt.Errorf("Failed to write the container ID to the file: %s", err)
}
cid.written = true
return nil
}
// if container start fails with 'command not found' error, return 127
// if container start fails with 'command cannot be invoked' error, return 126
// return 125 for generic docker daemon failures
func runStartContainerErr(err error) error {
trimmedErr := strings.Trim(err.Error(), "Error response from daemon: ")
statusError := Cli.StatusError{StatusCode: 125}
if strings.HasPrefix(trimmedErr, "Container command") {
if strings.Contains(trimmedErr, errCmdNotFound) {
statusError = Cli.StatusError{StatusCode: 127}
} else if strings.Contains(trimmedErr, errCmdCouldNotBeInvoked) {
statusError = Cli.StatusError{StatusCode: 126}
}
}
return statusError
}
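// Illustrative mapping (not part of the original file) of the exit codes chosen above for
// daemon errors that begin with "Container command":
// message containing "not found or does not exist." -> StatusCode 127
// message containing "could not be invoked."        -> StatusCode 126
// any other daemon failure                          -> StatusCode 125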
// CmdRun runs a command in a new container.
//
// Usage: docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
func (cli *DockerCli) CmdRun(args ...string) error {
cmd := Cli.Subcmd("run", []string{"IMAGE [COMMAND] [ARG...]"}, Cli.DockerCommands["run"].Description, true)
addTrustedFlags(cmd, true)
// These are flags not stored in Config/HostConfig
var (
flAutoRemove = cmd.Bool([]string{"-rm"}, false, "Automatically remove the container when it exits")
flDetach = cmd.Bool([]string{"d", "-detach"}, false, "Run container in background and print container ID")
flSigProxy = cmd.Bool([]string{"-sig-proxy"}, true, "Proxy received signals to the process")
flName = cmd.String([]string{"-name"}, "", "Assign a name to the container")
flDetachKeys = cmd.String([]string{"-detach-keys"}, "", "Override the key sequence for detaching a container")
flAttach *opts.ListOpts
ErrConflictAttachDetach = fmt.Errorf("Conflicting options: -a and -d")
ErrConflictRestartPolicyAndAutoRemove = fmt.Errorf("Conflicting options: --restart and --rm")
ErrConflictDetachAutoRemove = fmt.Errorf("Conflicting options: --rm and -d")
)
config, hostConfig, networkingConfig, cmd, err := runconfigopts.Parse(cmd, args)
// just in case the Parse does not exit
if err != nil {
cmd.ReportError(err.Error(), true)
os.Exit(125)
}
if hostConfig.OomKillDisable != nil && *hostConfig.OomKillDisable && hostConfig.Memory == 0 {
fmt.Fprintf(cli.err, "WARNING: Disabling the OOM killer on containers without setting a '-m/--memory' limit may be dangerous.\n")
}
if len(hostConfig.DNS) > 0 {
// check the DNS settings passed via --dns against
// localhost regexp to warn if they are trying to
// set a DNS to a localhost address
for _, dnsIP := range hostConfig.DNS {
if dns.IsLocalhost(dnsIP) {
fmt.Fprintf(cli.err, "WARNING: Localhost DNS setting (--dns=%s) may fail in containers.\n", dnsIP)
break
}
}
}
if config.Image == "" {
cmd.Usage()
return nil
}
config.ArgsEscaped = false
if !*flDetach {
if err := cli.CheckTtyInput(config.AttachStdin, config.Tty); err != nil {
return err
}
} else {
if fl := cmd.Lookup("-attach"); fl != nil {
flAttach = fl.Value.(*opts.ListOpts)
if flAttach.Len() != 0 {
return ErrConflictAttachDetach
}
}
if *flAutoRemove {
return ErrConflictDetachAutoRemove
}
config.AttachStdin = false
config.AttachStdout = false
config.AttachStderr = false
config.StdinOnce = false
}
// Disable flSigProxy when in TTY mode
sigProxy := *flSigProxy
if config.Tty {
sigProxy = false
}
// Telling the Windows daemon the initial size of the tty during start makes
// a far better user experience rather than relying on subsequent resizes
// to cause things to catch up.
if runtime.GOOS == "windows" {
hostConfig.ConsoleSize[0], hostConfig.ConsoleSize[1] = cli.getTtySize()
}
createResponse, err := cli.createContainer(config, hostConfig, networkingConfig, hostConfig.ContainerIDFile, *flName)
if err != nil {
cmd.ReportError(err.Error(), true)
return runStartContainerErr(err)
}
if sigProxy {
sigc := cli.forwardAllSignals(createResponse.ID)
defer signal.StopCatch(sigc)
}
var (
waitDisplayID chan struct{}
errCh chan error
)
if !config.AttachStdout && !config.AttachStderr {
// Make this asynchronous to allow the client to write to stdin before having to read the ID
waitDisplayID = make(chan struct{})
go func() {
defer close(waitDisplayID)
fmt.Fprintf(cli.out, "%s\n", createResponse.ID)
}()
}
if *flAutoRemove && (hostConfig.RestartPolicy.IsAlways() || hostConfig.RestartPolicy.IsOnFailure()) {
return ErrConflictRestartPolicyAndAutoRemove
}
if config.AttachStdin || config.AttachStdout || config.AttachStderr {
var (
out, stderr io.Writer
in io.ReadCloser
)
if config.AttachStdin {
in = cli.in
}
if config.AttachStdout {
out = cli.out
}
if config.AttachStderr {
if config.Tty {
stderr = cli.out
} else {
stderr = cli.err
}
}
if *flDetachKeys != "" {
cli.configFile.DetachKeys = *flDetachKeys
}
options := types.ContainerAttachOptions{
ContainerID: createResponse.ID,
Stream: true,
Stdin: config.AttachStdin,
Stdout: config.AttachStdout,
Stderr: config.AttachStderr,
DetachKeys: cli.configFile.DetachKeys,
}
resp, err := cli.client.ContainerAttach(context.Background(), options)
if err != nil {
return err
}
if in != nil && config.Tty {
if err := cli.setRawTerminal(); err != nil {
return err
}
defer cli.restoreTerminal(in)
}
errCh = promise.Go(func() error {
return cli.holdHijackedConnection(config.Tty, in, out, stderr, resp)
})
}
if *flAutoRemove {
defer func() {
if err := cli.removeContainer(createResponse.ID, true, false, false); err != nil {
fmt.Fprintf(cli.err, "%v\n", err)
}
}()
}
// start the container
if err := cli.client.ContainerStart(context.Background(), createResponse.ID); err != nil {
cmd.ReportError(err.Error(), false)
return runStartContainerErr(err)
}
if (config.AttachStdin || config.AttachStdout || config.AttachStderr) && config.Tty && cli.isTerminalOut {
if err := cli.monitorTtySize(createResponse.ID, false); err != nil {
fmt.Fprintf(cli.err, "Error monitoring TTY size: %s\n", err)
}
}
if errCh != nil {
if err := <-errCh; err != nil {
logrus.Debugf("Error hijack: %s", err)
return err
}
}
// Detached mode: wait for the id to be displayed and return.
if !config.AttachStdout && !config.AttachStderr {
// Detached mode
<-waitDisplayID
return nil
}
var status int
// Attached mode
if *flAutoRemove {
// Autoremove: wait for the container to finish, retrieve
// the exit code and remove the container
if status, err = cli.client.ContainerWait(context.Background(), createResponse.ID); err != nil {
return runStartContainerErr(err)
}
if _, status, err = getExitCode(cli, createResponse.ID); err != nil {
return err
}
} else {
// No Autoremove: Simply retrieve the exit code
if !config.Tty {
// In non-TTY mode, we can't detach, so we must wait for container exit
if status, err = cli.client.ContainerWait(context.Background(), createResponse.ID); err != nil {
return err
}
} else {
// In TTY mode, there is a race: if the process dies too slowly, the state could
// be updated after the getExitCode call and result in the wrong exit code being reported
if _, status, err = getExitCode(cli, createResponse.ID); err != nil {
return err
}
}
}
if status != 0 {
return Cli.StatusError{StatusCode: status}
}
return nil
}

View File

@@ -1,42 +0,0 @@
package client
import (
"errors"
"io"
"golang.org/x/net/context"
Cli "github.com/docker/docker/cli"
flag "github.com/docker/docker/pkg/mflag"
)
// CmdSave saves one or more images to a tar archive.
//
// The tar archive is written to STDOUT by default, or written to a file.
//
// Usage: docker save [OPTIONS] IMAGE [IMAGE...]
func (cli *DockerCli) CmdSave(args ...string) error {
cmd := Cli.Subcmd("save", []string{"IMAGE [IMAGE...]"}, Cli.DockerCommands["save"].Description+" (streamed to STDOUT by default)", true)
outfile := cmd.String([]string{"o", "-output"}, "", "Write to a file, instead of STDOUT")
cmd.Require(flag.Min, 1)
cmd.ParseFlags(args, true)
if *outfile == "" && cli.isTerminalOut {
return errors.New("Cowardly refusing to save to a terminal. Use the -o flag or redirect.")
}
responseBody, err := cli.client.ImageSave(context.Background(), cmd.Args())
if err != nil {
return err
}
defer responseBody.Close()
if *outfile == "" {
_, err := io.Copy(cli.out, responseBody)
return err
}
return copyToFile(*outfile, responseBody)
}

View File

@@ -1,93 +0,0 @@
package client
import (
"fmt"
"net/url"
"sort"
"strings"
"text/tabwriter"
"golang.org/x/net/context"
Cli "github.com/docker/docker/cli"
flag "github.com/docker/docker/pkg/mflag"
"github.com/docker/docker/pkg/stringutils"
"github.com/docker/docker/registry"
"github.com/docker/engine-api/types"
registrytypes "github.com/docker/engine-api/types/registry"
)
// CmdSearch searches the Docker Hub for images.
//
// Usage: docker search [OPTIONS] TERM
func (cli *DockerCli) CmdSearch(args ...string) error {
cmd := Cli.Subcmd("search", []string{"TERM"}, Cli.DockerCommands["search"].Description, true)
noTrunc := cmd.Bool([]string{"-no-trunc"}, false, "Don't truncate output")
automated := cmd.Bool([]string{"-automated"}, false, "Only show automated builds")
stars := cmd.Uint([]string{"s", "-stars"}, 0, "Only display results with at least x stars")
cmd.Require(flag.Exact, 1)
cmd.ParseFlags(args, true)
name := cmd.Arg(0)
v := url.Values{}
v.Set("term", name)
indexInfo, err := registry.ParseSearchIndexInfo(name)
if err != nil {
return err
}
authConfig := cli.resolveAuthConfig(indexInfo)
requestPrivilege := cli.registryAuthenticationPrivilegedFunc(indexInfo, "search")
encodedAuth, err := encodeAuthToBase64(authConfig)
if err != nil {
return err
}
options := types.ImageSearchOptions{
Term: name,
RegistryAuth: encodedAuth,
}
unorderedResults, err := cli.client.ImageSearch(context.Background(), options, requestPrivilege)
if err != nil {
return err
}
results := searchResultsByStars(unorderedResults)
sort.Sort(results)
w := tabwriter.NewWriter(cli.out, 10, 1, 3, ' ', 0)
fmt.Fprintf(w, "NAME\tDESCRIPTION\tSTARS\tOFFICIAL\tAUTOMATED\n")
for _, res := range results {
if (*automated && !res.IsAutomated) || (int(*stars) > res.StarCount) {
continue
}
desc := strings.Replace(res.Description, "\n", " ", -1)
desc = strings.Replace(desc, "\r", " ", -1)
if !*noTrunc && len(desc) > 45 {
desc = stringutils.Truncate(desc, 42) + "..."
}
fmt.Fprintf(w, "%s\t%s\t%d\t", res.Name, desc, res.StarCount)
if res.IsOfficial {
fmt.Fprint(w, "[OK]")
}
fmt.Fprint(w, "\t")
if res.IsAutomated || res.IsTrusted {
fmt.Fprint(w, "[OK]")
}
fmt.Fprint(w, "\n")
}
w.Flush()
return nil
}
// searchResultsByStars sorts search results in descending order by number of stars.
type searchResultsByStars []registrytypes.SearchResult
func (r searchResultsByStars) Len() int { return len(r) }
func (r searchResultsByStars) Swap(i, j int) { r[i], r[j] = r[j], r[i] }
func (r searchResultsByStars) Less(i, j int) bool { return r[j].StarCount < r[i].StarCount }
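// Illustrative note (not part of the original file): Less compares element j against element i,
// so sort.Sort(searchResultsByStars(results)) above orders results by StarCount in descending
// order; e.g. star counts [1, 50, 7] become [50, 7, 1].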

View File

@@ -1,157 +0,0 @@
package client
import (
"fmt"
"io"
"os"
"strings"
"golang.org/x/net/context"
"github.com/Sirupsen/logrus"
Cli "github.com/docker/docker/cli"
flag "github.com/docker/docker/pkg/mflag"
"github.com/docker/docker/pkg/promise"
"github.com/docker/docker/pkg/signal"
"github.com/docker/engine-api/types"
)
func (cli *DockerCli) forwardAllSignals(cid string) chan os.Signal {
sigc := make(chan os.Signal, 128)
signal.CatchAll(sigc)
go func() {
for s := range sigc {
if s == signal.SIGCHLD || s == signal.SIGPIPE {
continue
}
var sig string
for sigStr, sigN := range signal.SignalMap {
if sigN == s {
sig = sigStr
break
}
}
if sig == "" {
fmt.Fprintf(cli.err, "Unsupported signal: %v. Discarding.\n", s)
continue
}
if err := cli.client.ContainerKill(context.Background(), cid, sig); err != nil {
logrus.Debugf("Error sending signal: %s", err)
}
}
}()
return sigc
}
// CmdStart starts one or more containers.
//
// Usage: docker start [OPTIONS] CONTAINER [CONTAINER...]
func (cli *DockerCli) CmdStart(args ...string) error {
cmd := Cli.Subcmd("start", []string{"CONTAINER [CONTAINER...]"}, Cli.DockerCommands["start"].Description, true)
attach := cmd.Bool([]string{"a", "-attach"}, false, "Attach STDOUT/STDERR and forward signals")
openStdin := cmd.Bool([]string{"i", "-interactive"}, false, "Attach container's STDIN")
detachKeys := cmd.String([]string{"-detach-keys"}, "", "Override the key sequence for detaching a container")
cmd.Require(flag.Min, 1)
cmd.ParseFlags(args, true)
if *attach || *openStdin {
// We're going to attach to a container.
// 1. Ensure we only have one container.
if cmd.NArg() > 1 {
return fmt.Errorf("You cannot start and attach multiple containers at once.")
}
// 2. Attach to the container.
containerID := cmd.Arg(0)
c, err := cli.client.ContainerInspect(context.Background(), containerID)
if err != nil {
return err
}
if !c.Config.Tty {
sigc := cli.forwardAllSignals(containerID)
defer signal.StopCatch(sigc)
}
if *detachKeys != "" {
cli.configFile.DetachKeys = *detachKeys
}
options := types.ContainerAttachOptions{
ContainerID: containerID,
Stream: true,
Stdin: *openStdin && c.Config.OpenStdin,
Stdout: true,
Stderr: true,
DetachKeys: cli.configFile.DetachKeys,
}
var in io.ReadCloser
if options.Stdin {
in = cli.in
}
resp, err := cli.client.ContainerAttach(context.Background(), options)
if err != nil {
return err
}
defer resp.Close()
if in != nil && c.Config.Tty {
if err := cli.setRawTerminal(); err != nil {
return err
}
defer cli.restoreTerminal(in)
}
cErr := promise.Go(func() error {
return cli.holdHijackedConnection(c.Config.Tty, in, cli.out, cli.err, resp)
})
// 3. Start the container.
if err := cli.client.ContainerStart(context.Background(), containerID); err != nil {
return err
}
// 4. Wait for attachment to break.
if c.Config.Tty && cli.isTerminalOut {
if err := cli.monitorTtySize(containerID, false); err != nil {
fmt.Fprintf(cli.err, "Error monitoring TTY size: %s\n", err)
}
}
if attachErr := <-cErr; attachErr != nil {
return attachErr
}
_, status, err := getExitCode(cli, containerID)
if err != nil {
return err
}
if status != 0 {
return Cli.StatusError{StatusCode: status}
}
} else {
// We're not going to attach to anything.
// Start as many containers as we want.
return cli.startContainersWithoutAttachments(cmd.Args())
}
return nil
}
func (cli *DockerCli) startContainersWithoutAttachments(containerIDs []string) error {
var failedContainers []string
for _, containerID := range containerIDs {
if err := cli.client.ContainerStart(context.Background(), containerID); err != nil {
fmt.Fprintf(cli.err, "%s\n", err)
failedContainers = append(failedContainers, containerID)
} else {
fmt.Fprintf(cli.out, "%s\n", containerID)
}
}
if len(failedContainers) > 0 {
return fmt.Errorf("Error: failed to start containers: %v", strings.Join(failedContainers, ", "))
}
return nil
}

View File

@@ -1,208 +0,0 @@
package client
import (
"fmt"
"io"
"strings"
"sync"
"text/tabwriter"
"time"
"golang.org/x/net/context"
Cli "github.com/docker/docker/cli"
"github.com/docker/engine-api/types"
"github.com/docker/engine-api/types/events"
"github.com/docker/engine-api/types/filters"
)
// CmdStats displays a live stream of resource usage statistics for one or more containers.
//
// This shows real-time information on CPU usage, memory usage, and network I/O.
//
// Usage: docker stats [OPTIONS] [CONTAINER...]
func (cli *DockerCli) CmdStats(args ...string) error {
cmd := Cli.Subcmd("stats", []string{"[CONTAINER...]"}, Cli.DockerCommands["stats"].Description, true)
all := cmd.Bool([]string{"a", "-all"}, false, "Show all containers (default shows just running)")
noStream := cmd.Bool([]string{"-no-stream"}, false, "Disable streaming stats and only pull the first result")
cmd.ParseFlags(args, true)
names := cmd.Args()
showAll := len(names) == 0
closeChan := make(chan error)
// monitorContainerEvents watches for container creation and removal (only
// used when calling `docker stats` without arguments).
monitorContainerEvents := func(started chan<- struct{}, c chan events.Message) {
f := filters.NewArgs()
f.Add("type", "container")
options := types.EventsOptions{
Filters: f,
}
resBody, err := cli.client.Events(context.Background(), options)
// Whether we successfully subscribed to events or not, we can now
// unblock the main goroutine.
close(started)
if err != nil {
closeChan <- err
return
}
defer resBody.Close()
decodeEvents(resBody, func(event events.Message, err error) error {
if err != nil {
closeChan <- err
return nil
}
c <- event
return nil
})
}
// waitFirst is a WaitGroup used to wait until the first stats sample has arrived for each container
waitFirst := &sync.WaitGroup{}
cStats := stats{}
// getContainerList simulates creation event for all previously existing
// containers (only used when calling `docker stats` without arguments).
getContainerList := func() {
options := types.ContainerListOptions{
All: *all,
}
cs, err := cli.client.ContainerList(context.Background(), options)
if err != nil {
closeChan <- err
}
for _, container := range cs {
s := &containerStats{Name: container.ID[:12]}
if cStats.add(s) {
waitFirst.Add(1)
go s.Collect(cli.client, !*noStream, waitFirst)
}
}
}
if showAll {
// If no names were specified, start a long running goroutine which
// monitors container events. We make sure we're subscribed before
// retrieving the list of running containers to avoid a race where we
// would "miss" a creation.
started := make(chan struct{})
eh := eventHandler{handlers: make(map[string]func(events.Message))}
eh.Handle("create", func(e events.Message) {
if *all {
s := &containerStats{Name: e.ID[:12]}
if cStats.add(s) {
waitFirst.Add(1)
go s.Collect(cli.client, !*noStream, waitFirst)
}
}
})
eh.Handle("start", func(e events.Message) {
s := &containerStats{Name: e.ID[:12]}
if cStats.add(s) {
waitFirst.Add(1)
go s.Collect(cli.client, !*noStream, waitFirst)
}
})
eh.Handle("die", func(e events.Message) {
if !*all {
cStats.remove(e.ID[:12])
}
})
eventChan := make(chan events.Message)
go eh.Watch(eventChan)
go monitorContainerEvents(started, eventChan)
defer close(eventChan)
<-started
// Start a short-lived goroutine to retrieve the initial list of
// containers.
getContainerList()
} else {
// Artificially send creation events for the containers we were asked to
// monitor (same code path as the one used when monitoring all containers).
for _, name := range names {
s := &containerStats{Name: name}
if cStats.add(s) {
waitFirst.Add(1)
go s.Collect(cli.client, !*noStream, waitFirst)
}
}
// We don't expect any asynchronous errors: closeChan can be closed.
close(closeChan)
// Do a quick pause to detect any error with the provided list of
// container names.
time.Sleep(1500 * time.Millisecond)
var errs []string
cStats.mu.Lock()
for _, c := range cStats.cs {
c.mu.Lock()
if c.err != nil {
errs = append(errs, fmt.Sprintf("%s: %v", c.Name, c.err))
}
c.mu.Unlock()
}
cStats.mu.Unlock()
if len(errs) > 0 {
return fmt.Errorf("%s", strings.Join(errs, ", "))
}
}
// Before printing to screen, make sure each container has received at least one valid stats sample.
waitFirst.Wait()
w := tabwriter.NewWriter(cli.out, 20, 1, 3, ' ', 0)
printHeader := func() {
if !*noStream {
fmt.Fprint(cli.out, "\033[2J")
fmt.Fprint(cli.out, "\033[H")
}
io.WriteString(w, "CONTAINER\tCPU %\tMEM USAGE / LIMIT\tMEM %\tNET I/O\tBLOCK I/O\tPIDS\n")
}
for range time.Tick(500 * time.Millisecond) {
printHeader()
toRemove := []int{}
cStats.mu.Lock()
for i, s := range cStats.cs {
if err := s.Display(w); err != nil && !*noStream {
toRemove = append(toRemove, i)
}
}
for j := len(toRemove) - 1; j >= 0; j-- {
i := toRemove[j]
cStats.cs = append(cStats.cs[:i], cStats.cs[i+1:]...)
}
if len(cStats.cs) == 0 && !showAll {
return nil
}
cStats.mu.Unlock()
w.Flush()
if *noStream {
break
}
select {
case err, ok := <-closeChan:
if ok {
if err != nil {
// this suppresses "unexpected EOF" in the cli when the
// daemon restarts, so that the cli shuts down cleanly
if err == io.ErrUnexpectedEOF {
return nil
}
return err
}
}
default:
// just skip
}
}
return nil
}
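// Illustrative note (not part of the original file): "docker stats" with no arguments
// subscribes to container events and keeps the table in sync as containers start and die,
// while "docker stats web db" (hypothetical names) only collects stats for the listed
// containers and closes closeChan up front, since no asynchronous events are expected.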

View File

@@ -1,219 +0,0 @@
package client
import (
"encoding/json"
"fmt"
"io"
"strings"
"sync"
"time"
"github.com/docker/engine-api/client"
"github.com/docker/engine-api/types"
"github.com/docker/go-units"
"golang.org/x/net/context"
)
type containerStats struct {
Name string
CPUPercentage float64
Memory float64
MemoryLimit float64
MemoryPercentage float64
NetworkRx float64
NetworkTx float64
BlockRead float64
BlockWrite float64
PidsCurrent uint64
mu sync.RWMutex
err error
}
type stats struct {
mu sync.Mutex
cs []*containerStats
}
func (s *stats) add(cs *containerStats) bool {
s.mu.Lock()
defer s.mu.Unlock()
if _, exists := s.isKnownContainer(cs.Name); !exists {
s.cs = append(s.cs, cs)
return true
}
return false
}
func (s *stats) remove(id string) {
s.mu.Lock()
if i, exists := s.isKnownContainer(id); exists {
s.cs = append(s.cs[:i], s.cs[i+1:]...)
}
s.mu.Unlock()
}
func (s *stats) isKnownContainer(cid string) (int, bool) {
for i, c := range s.cs {
if c.Name == cid {
return i, true
}
}
return -1, false
}
func (s *containerStats) Collect(cli client.APIClient, streamStats bool, waitFirst *sync.WaitGroup) {
var (
getFirst bool
previousCPU uint64
previousSystem uint64
u = make(chan error, 1)
)
defer func() {
// if an error happens and we received no stats at all, release the wait group regardless
if !getFirst {
getFirst = true
waitFirst.Done()
}
}()
responseBody, err := cli.ContainerStats(context.Background(), s.Name, streamStats)
if err != nil {
s.mu.Lock()
s.err = err
s.mu.Unlock()
return
}
defer responseBody.Close()
dec := json.NewDecoder(responseBody)
go func() {
for {
var v *types.StatsJSON
if err := dec.Decode(&v); err != nil {
u <- err
return
}
var memPercent = 0.0
var cpuPercent = 0.0
// MemoryStats.Limit will never be 0 unless the container is not running and we haven't
// received any data from the cgroup yet
if v.MemoryStats.Limit != 0 {
memPercent = float64(v.MemoryStats.Usage) / float64(v.MemoryStats.Limit) * 100.0
}
previousCPU = v.PreCPUStats.CPUUsage.TotalUsage
previousSystem = v.PreCPUStats.SystemUsage
cpuPercent = calculateCPUPercent(previousCPU, previousSystem, v)
blkRead, blkWrite := calculateBlockIO(v.BlkioStats)
s.mu.Lock()
s.CPUPercentage = cpuPercent
s.Memory = float64(v.MemoryStats.Usage)
s.MemoryLimit = float64(v.MemoryStats.Limit)
s.MemoryPercentage = memPercent
s.NetworkRx, s.NetworkTx = calculateNetwork(v.Networks)
s.BlockRead = float64(blkRead)
s.BlockWrite = float64(blkWrite)
s.PidsCurrent = v.PidsStats.Current
s.mu.Unlock()
u <- nil
if !streamStats {
return
}
}
}()
for {
select {
case <-time.After(2 * time.Second):
// zero out the values if we have not received an update within
// the specified duration.
s.mu.Lock()
s.CPUPercentage = 0
s.Memory = 0
s.MemoryPercentage = 0
s.MemoryLimit = 0
s.NetworkRx = 0
s.NetworkTx = 0
s.BlockRead = 0
s.BlockWrite = 0
s.PidsCurrent = 0
s.mu.Unlock()
// if this is the first stats sample received, release the WaitGroup
if !getFirst {
getFirst = true
waitFirst.Done()
}
case err := <-u:
if err != nil {
s.mu.Lock()
s.err = err
s.mu.Unlock()
return
}
// if this is the first stats sample received, release the WaitGroup
if !getFirst {
getFirst = true
waitFirst.Done()
}
}
if !streamStats {
return
}
}
}
func (s *containerStats) Display(w io.Writer) error {
s.mu.RLock()
defer s.mu.RUnlock()
if s.err != nil {
return s.err
}
fmt.Fprintf(w, "%s\t%.2f%%\t%s / %s\t%.2f%%\t%s / %s\t%s / %s\t%d\n",
s.Name,
s.CPUPercentage,
units.HumanSize(s.Memory), units.HumanSize(s.MemoryLimit),
s.MemoryPercentage,
units.HumanSize(s.NetworkRx), units.HumanSize(s.NetworkTx),
units.HumanSize(s.BlockRead), units.HumanSize(s.BlockWrite),
s.PidsCurrent)
return nil
}
func calculateCPUPercent(previousCPU, previousSystem uint64, v *types.StatsJSON) float64 {
var (
cpuPercent = 0.0
// calculate the change for the cpu usage of the container in between readings
cpuDelta = float64(v.CPUStats.CPUUsage.TotalUsage) - float64(previousCPU)
// calculate the change for the entire system between readings
systemDelta = float64(v.CPUStats.SystemUsage) - float64(previousSystem)
)
if systemDelta > 0.0 && cpuDelta > 0.0 {
cpuPercent = (cpuDelta / systemDelta) * float64(len(v.CPUStats.CPUUsage.PercpuUsage)) * 100.0
}
return cpuPercent
}
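// Worked example (illustrative, not part of the original file), assuming a container with
// 4 entries in PercpuUsage:
// previousCPU = 100e6, previousSystem = 1000e6
// current TotalUsage = 200e6, current SystemUsage = 2000e6
// cpuDelta = 100e6, systemDelta = 1000e6
// cpuPercent = (100e6 / 1000e6) * 4 * 100 = 40.0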
func calculateBlockIO(blkio types.BlkioStats) (blkRead uint64, blkWrite uint64) {
for _, bioEntry := range blkio.IoServiceBytesRecursive {
switch strings.ToLower(bioEntry.Op) {
case "read":
blkRead = blkRead + bioEntry.Value
case "write":
blkWrite = blkWrite + bioEntry.Value
}
}
return
}
func calculateNetwork(network map[string]types.NetworkStats) (float64, float64) {
var rx, tx float64
for _, v := range network {
rx += float64(v.RxBytes)
tx += float64(v.TxBytes)
}
return rx, tx
}

View File

@@ -1,47 +0,0 @@
package client
import (
"bytes"
"sync"
"testing"
"github.com/docker/engine-api/types"
)
func TestDisplay(t *testing.T) {
c := &containerStats{
Name: "app",
CPUPercentage: 30.0,
Memory: 100 * 1024 * 1024.0,
MemoryLimit: 2048 * 1024 * 1024.0,
MemoryPercentage: 100.0 / 2048.0 * 100.0,
NetworkRx: 100 * 1024 * 1024,
NetworkTx: 800 * 1024 * 1024,
BlockRead: 100 * 1024 * 1024,
BlockWrite: 800 * 1024 * 1024,
PidsCurrent: 1,
mu: sync.RWMutex{},
}
var b bytes.Buffer
if err := c.Display(&b); err != nil {
t.Fatalf("c.Display() gave error: %s", err)
}
got := b.String()
want := "app\t30.00%\t104.9 MB / 2.147 GB\t4.88%\t104.9 MB / 838.9 MB\t104.9 MB / 838.9 MB\t1\n"
if got != want {
t.Fatalf("c.Display() = %q, want %q", got, want)
}
}
func TestCalculBlockIO(t *testing.T) {
blkio := types.BlkioStats{
IoServiceBytesRecursive: []types.BlkioStatEntry{{8, 0, "read", 1234}, {8, 1, "read", 4567}, {8, 0, "write", 123}, {8, 1, "write", 456}},
}
blkRead, blkWrite := calculateBlockIO(blkio)
if blkRead != 5801 {
t.Fatalf("blkRead = %d, want 5801", blkRead)
}
if blkWrite != 579 {
t.Fatalf("blkWrite = %d, want 579", blkWrite)
}
}

View File

@@ -1,37 +0,0 @@
package client
import (
"fmt"
"strings"
"golang.org/x/net/context"
Cli "github.com/docker/docker/cli"
flag "github.com/docker/docker/pkg/mflag"
)
// CmdStop stops one or more containers.
//
// A running container is stopped by first sending SIGTERM and then SIGKILL if the container fails to stop within a grace period (the default is 10 seconds).
//
// Usage: docker stop [OPTIONS] CONTAINER [CONTAINER...]
func (cli *DockerCli) CmdStop(args ...string) error {
cmd := Cli.Subcmd("stop", []string{"CONTAINER [CONTAINER...]"}, Cli.DockerCommands["stop"].Description+".\nSending SIGTERM and then SIGKILL after a grace period", true)
nSeconds := cmd.Int([]string{"t", "-time"}, 10, "Seconds to wait for stop before killing it")
cmd.Require(flag.Min, 1)
cmd.ParseFlags(args, true)
var errs []string
for _, name := range cmd.Args() {
if err := cli.client.ContainerStop(context.Background(), name, *nSeconds); err != nil {
errs = append(errs, err.Error())
} else {
fmt.Fprintf(cli.out, "%s\n", name)
}
}
if len(errs) > 0 {
return fmt.Errorf("%s", strings.Join(errs, "\n"))
}
return nil
}

View File

@@ -1,46 +0,0 @@
package client
import (
"errors"
"golang.org/x/net/context"
Cli "github.com/docker/docker/cli"
flag "github.com/docker/docker/pkg/mflag"
"github.com/docker/docker/reference"
"github.com/docker/engine-api/types"
)
// CmdTag tags an image into a repository.
//
// Usage: docker tag [OPTIONS] IMAGE[:TAG] [REGISTRYHOST/][USERNAME/]NAME[:TAG]
func (cli *DockerCli) CmdTag(args ...string) error {
cmd := Cli.Subcmd("tag", []string{"IMAGE[:TAG] [REGISTRYHOST/][USERNAME/]NAME[:TAG]"}, Cli.DockerCommands["tag"].Description, true)
force := cmd.Bool([]string{"#f", "#-force"}, false, "Force the tagging even if there's a conflict")
cmd.Require(flag.Exact, 2)
cmd.ParseFlags(args, true)
ref, err := reference.ParseNamed(cmd.Arg(1))
if err != nil {
return err
}
if _, isCanonical := ref.(reference.Canonical); isCanonical {
return errors.New("refusing to create a tag with a digest reference")
}
var tag string
if tagged, isTagged := ref.(reference.NamedTagged); isTagged {
tag = tagged.Tag()
}
options := types.ImageTagOptions{
ImageID: cmd.Arg(0),
RepositoryName: ref.Name(),
Tag: tag,
Force: *force,
}
return cli.client.ImageTag(context.Background(), options)
}
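// Illustrative sketch (not part of the original file): "docker tag 0e5574283393 myrepo/web:v1"
// (hypothetical image ID and name) sends ImageID "0e5574283393", RepositoryName "myrepo/web"
// and Tag "v1"; a digest reference such as "myrepo/web@sha256:..." is rejected before any
// API call is made.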

View File

@@ -1,41 +0,0 @@
package client
import (
"fmt"
"strings"
"text/tabwriter"
"golang.org/x/net/context"
Cli "github.com/docker/docker/cli"
flag "github.com/docker/docker/pkg/mflag"
)
// CmdTop displays the running processes of a container.
//
// Usage: docker top CONTAINER
func (cli *DockerCli) CmdTop(args ...string) error {
cmd := Cli.Subcmd("top", []string{"CONTAINER [ps OPTIONS]"}, Cli.DockerCommands["top"].Description, true)
cmd.Require(flag.Min, 1)
cmd.ParseFlags(args, true)
var arguments []string
if cmd.NArg() > 1 {
arguments = cmd.Args()[1:]
}
procList, err := cli.client.ContainerTop(context.Background(), cmd.Arg(0), arguments)
if err != nil {
return err
}
w := tabwriter.NewWriter(cli.out, 20, 1, 3, ' ', 0)
fmt.Fprintln(w, strings.Join(procList.Titles, "\t"))
for _, proc := range procList.Processes {
fmt.Fprintln(w, strings.Join(proc, "\t"))
}
w.Flush()
return nil
}

View File

@@ -1,559 +0,0 @@
package client
import (
"encoding/hex"
"encoding/json"
"errors"
"fmt"
"net"
"net/http"
"net/url"
"os"
"path"
"path/filepath"
"sort"
"strconv"
"time"
"golang.org/x/net/context"
"github.com/Sirupsen/logrus"
"github.com/docker/distribution/digest"
"github.com/docker/distribution/registry/client/auth"
"github.com/docker/distribution/registry/client/transport"
"github.com/docker/docker/cliconfig"
"github.com/docker/docker/distribution"
"github.com/docker/docker/pkg/jsonmessage"
flag "github.com/docker/docker/pkg/mflag"
"github.com/docker/docker/reference"
"github.com/docker/docker/registry"
apiclient "github.com/docker/engine-api/client"
"github.com/docker/engine-api/types"
registrytypes "github.com/docker/engine-api/types/registry"
"github.com/docker/go-connections/tlsconfig"
"github.com/docker/notary/client"
"github.com/docker/notary/passphrase"
"github.com/docker/notary/trustmanager"
"github.com/docker/notary/tuf/data"
"github.com/docker/notary/tuf/signed"
"github.com/docker/notary/tuf/store"
)
var (
releasesRole = path.Join(data.CanonicalTargetsRole, "releases")
untrusted bool
)
func addTrustedFlags(fs *flag.FlagSet, verify bool) {
var trusted bool
if e := os.Getenv("DOCKER_CONTENT_TRUST"); e != "" {
if t, err := strconv.ParseBool(e); t || err != nil {
// treat any other value as true
trusted = true
}
}
message := "Skip image signing"
if verify {
message = "Skip image verification"
}
fs.BoolVar(&untrusted, []string{"-disable-content-trust"}, !trusted, message)
}
func isTrusted() bool {
return !untrusted
}
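// Illustrative behavior (not part of the original file): DOCKER_CONTENT_TRUST="1", "true",
// or any value that fails strconv.ParseBool (e.g. "yes") enables trust; unset, "0" or "false"
// leaves it disabled. The --disable-content-trust flag then overrides the environment default
// for a single command.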
type target struct {
reference registry.Reference
digest digest.Digest
size int64
}
func (cli *DockerCli) trustDirectory() string {
return filepath.Join(cliconfig.ConfigDir(), "trust")
}
// certificateDirectory returns the directory containing
// TLS certificates for the given server. An error is
// returned if there was an error parsing the server string.
func (cli *DockerCli) certificateDirectory(server string) (string, error) {
u, err := url.Parse(server)
if err != nil {
return "", err
}
return filepath.Join(cliconfig.ConfigDir(), "tls", u.Host), nil
}
func trustServer(index *registrytypes.IndexInfo) (string, error) {
if s := os.Getenv("DOCKER_CONTENT_TRUST_SERVER"); s != "" {
urlObj, err := url.Parse(s)
if err != nil || urlObj.Scheme != "https" {
return "", fmt.Errorf("valid https URL required for trust server, got %s", s)
}
return s, nil
}
if index.Official {
return registry.NotaryServer, nil
}
return "https://" + index.Name, nil
}
type simpleCredentialStore struct {
auth types.AuthConfig
}
func (scs simpleCredentialStore) Basic(u *url.URL) (string, string) {
return scs.auth.Username, scs.auth.Password
}
func (scs simpleCredentialStore) RefreshToken(u *url.URL, service string) string {
return scs.auth.IdentityToken
}
func (scs simpleCredentialStore) SetRefreshToken(*url.URL, string, string) {
}
// getNotaryRepository returns a NotaryRepository which stores all the
// information needed to operate on a notary repository.
// It creates an HTTP transport providing authentication support.
func (cli *DockerCli) getNotaryRepository(repoInfo *registry.RepositoryInfo, authConfig types.AuthConfig, actions ...string) (*client.NotaryRepository, error) {
server, err := trustServer(repoInfo.Index)
if err != nil {
return nil, err
}
var cfg = tlsconfig.ClientDefault
cfg.InsecureSkipVerify = !repoInfo.Index.Secure
// Get certificate base directory
certDir, err := cli.certificateDirectory(server)
if err != nil {
return nil, err
}
logrus.Debugf("reading certificate directory: %s", certDir)
if err := registry.ReadCertsDirectory(&cfg, certDir); err != nil {
return nil, err
}
base := &http.Transport{
Proxy: http.ProxyFromEnvironment,
Dial: (&net.Dialer{
Timeout: 30 * time.Second,
KeepAlive: 30 * time.Second,
DualStack: true,
}).Dial,
TLSHandshakeTimeout: 10 * time.Second,
TLSClientConfig: &cfg,
DisableKeepAlives: true,
}
// Skip configuration headers since request is not going to Docker daemon
modifiers := registry.DockerHeaders(clientUserAgent(), http.Header{})
authTransport := transport.NewTransport(base, modifiers...)
pingClient := &http.Client{
Transport: authTransport,
Timeout: 5 * time.Second,
}
endpointStr := server + "/v2/"
req, err := http.NewRequest("GET", endpointStr, nil)
if err != nil {
return nil, err
}
challengeManager := auth.NewSimpleChallengeManager()
resp, err := pingClient.Do(req)
if err != nil {
// Ignore error on ping to operate in offline mode
logrus.Debugf("Error pinging notary server %q: %s", endpointStr, err)
} else {
defer resp.Body.Close()
// Add response to the challenge manager to parse out
// authentication header and register authentication method
if err := challengeManager.AddResponse(resp); err != nil {
return nil, err
}
}
creds := simpleCredentialStore{auth: authConfig}
tokenHandler := auth.NewTokenHandler(authTransport, creds, repoInfo.FullName(), actions...)
basicHandler := auth.NewBasicHandler(creds)
modifiers = append(modifiers, transport.RequestModifier(auth.NewAuthorizer(challengeManager, tokenHandler, basicHandler)))
tr := transport.NewTransport(base, modifiers...)
return client.NewNotaryRepository(cli.trustDirectory(), repoInfo.FullName(), server, tr, cli.getPassphraseRetriever())
}
func convertTarget(t client.Target) (target, error) {
h, ok := t.Hashes["sha256"]
if !ok {
return target{}, errors.New("no valid hash, expecting sha256")
}
return target{
reference: registry.ParseReference(t.Name),
digest: digest.NewDigestFromHex("sha256", hex.EncodeToString(h)),
size: t.Length,
}, nil
}
func (cli *DockerCli) getPassphraseRetriever() passphrase.Retriever {
aliasMap := map[string]string{
"root": "root",
"snapshot": "repository",
"targets": "repository",
"default": "repository",
}
baseRetriever := passphrase.PromptRetrieverWithInOut(cli.in, cli.out, aliasMap)
env := map[string]string{
"root": os.Getenv("DOCKER_CONTENT_TRUST_ROOT_PASSPHRASE"),
"snapshot": os.Getenv("DOCKER_CONTENT_TRUST_REPOSITORY_PASSPHRASE"),
"targets": os.Getenv("DOCKER_CONTENT_TRUST_REPOSITORY_PASSPHRASE"),
"default": os.Getenv("DOCKER_CONTENT_TRUST_REPOSITORY_PASSPHRASE"),
}
// Backwards compatibility with old env names. We should remove this in 1.10
if env["root"] == "" {
if passphrase := os.Getenv("DOCKER_CONTENT_TRUST_OFFLINE_PASSPHRASE"); passphrase != "" {
env["root"] = passphrase
fmt.Fprintf(cli.err, "[DEPRECATED] The environment variable DOCKER_CONTENT_TRUST_OFFLINE_PASSPHRASE has been deprecated and will be removed in v1.10. Please use DOCKER_CONTENT_TRUST_ROOT_PASSPHRASE\n")
}
}
if env["snapshot"] == "" || env["targets"] == "" || env["default"] == "" {
if passphrase := os.Getenv("DOCKER_CONTENT_TRUST_TAGGING_PASSPHRASE"); passphrase != "" {
env["snapshot"] = passphrase
env["targets"] = passphrase
env["default"] = passphrase
fmt.Fprintf(cli.err, "[DEPRECATED] The environment variable DOCKER_CONTENT_TRUST_TAGGING_PASSPHRASE has been deprecated and will be removed in v1.10. Please use DOCKER_CONTENT_TRUST_REPOSITORY_PASSPHRASE\n")
}
}
return func(keyName string, alias string, createNew bool, numAttempts int) (string, bool, error) {
if v := env[alias]; v != "" {
return v, numAttempts > 1, nil
}
// For non-root roles, we can also try the "default" alias if it is specified
if v := env["default"]; v != "" && alias != data.CanonicalRootRole {
return v, numAttempts > 1, nil
}
return baseRetriever(keyName, alias, createNew, numAttempts)
}
}
func (cli *DockerCli) trustedReference(ref reference.NamedTagged) (reference.Canonical, error) {
repoInfo, err := registry.ParseRepositoryInfo(ref)
if err != nil {
return nil, err
}
// Resolve the Auth config relevant for this server
authConfig := cli.resolveAuthConfig(repoInfo.Index)
notaryRepo, err := cli.getNotaryRepository(repoInfo, authConfig, "pull")
if err != nil {
fmt.Fprintf(cli.out, "Error establishing connection to trust repository: %s\n", err)
return nil, err
}
t, err := notaryRepo.GetTargetByName(ref.Tag(), releasesRole, data.CanonicalTargetsRole)
if err != nil {
return nil, err
}
// Only list tags in the top level targets role or the releases delegation role - ignore
// all other delegation roles
if t.Role != releasesRole && t.Role != data.CanonicalTargetsRole {
return nil, notaryError(repoInfo.FullName(), fmt.Errorf("No trust data for %s", ref.Tag()))
}
r, err := convertTarget(t.Target)
if err != nil {
return nil, err
}
return reference.WithDigest(ref, r.digest)
}
func (cli *DockerCli) tagTrusted(trustedRef reference.Canonical, ref reference.NamedTagged) error {
fmt.Fprintf(cli.out, "Tagging %s as %s\n", trustedRef.String(), ref.String())
options := types.ImageTagOptions{
ImageID: trustedRef.String(),
RepositoryName: trustedRef.Name(),
Tag: ref.Tag(),
Force: true,
}
return cli.client.ImageTag(context.Background(), options)
}
func notaryError(repoName string, err error) error {
switch err.(type) {
case *json.SyntaxError:
logrus.Debugf("Notary syntax error: %s", err)
return fmt.Errorf("Error: no trust data available for remote repository %s. Try running notary server and setting DOCKER_CONTENT_TRUST_SERVER to its HTTPS address?", repoName)
case signed.ErrExpired:
return fmt.Errorf("Error: remote repository %s out-of-date: %v", repoName, err)
case trustmanager.ErrKeyNotFound:
return fmt.Errorf("Error: signing keys for remote repository %s not found: %v", repoName, err)
case *net.OpError:
return fmt.Errorf("Error: error contacting notary server: %v", err)
case store.ErrMetaNotFound:
return fmt.Errorf("Error: trust data missing for remote repository %s or remote repository not found: %v", repoName, err)
case signed.ErrInvalidKeyType:
return fmt.Errorf("Warning: potential malicious behavior - trust data mismatch for remote repository %s: %v", repoName, err)
case signed.ErrNoKeys:
return fmt.Errorf("Error: could not find signing keys for remote repository %s, or could not decrypt signing key: %v", repoName, err)
case signed.ErrLowVersion:
return fmt.Errorf("Warning: potential malicious behavior - trust data version is lower than expected for remote repository %s: %v", repoName, err)
case signed.ErrRoleThreshold:
return fmt.Errorf("Warning: potential malicious behavior - trust data has insufficient signatures for remote repository %s: %v", repoName, err)
case client.ErrRepositoryNotExist:
return fmt.Errorf("Error: remote trust data does not exist for %s: %v", repoName, err)
case signed.ErrInsufficientSignatures:
return fmt.Errorf("Error: could not produce valid signature for %s. If Yubikey was used, was touch input provided?: %v", repoName, err)
}
return err
}
func (cli *DockerCli) trustedPull(repoInfo *registry.RepositoryInfo, ref registry.Reference, authConfig types.AuthConfig, requestPrivilege apiclient.RequestPrivilegeFunc) error {
var refs []target
notaryRepo, err := cli.getNotaryRepository(repoInfo, authConfig, "pull")
if err != nil {
fmt.Fprintf(cli.out, "Error establishing connection to trust repository: %s\n", err)
return err
}
if ref.String() == "" {
// List all targets
targets, err := notaryRepo.ListTargets(releasesRole, data.CanonicalTargetsRole)
if err != nil {
return notaryError(repoInfo.FullName(), err)
}
for _, tgt := range targets {
t, err := convertTarget(tgt.Target)
if err != nil {
fmt.Fprintf(cli.out, "Skipping target for %q\n", repoInfo.Name())
continue
}
// Only list tags in the top level targets role or the releases delegation role - ignore
// all other delegation roles
if tgt.Role != releasesRole && tgt.Role != data.CanonicalTargetsRole {
continue
}
refs = append(refs, t)
}
if len(refs) == 0 {
return notaryError(repoInfo.FullName(), fmt.Errorf("No trusted tags for %s", repoInfo.FullName()))
}
} else {
t, err := notaryRepo.GetTargetByName(ref.String(), releasesRole, data.CanonicalTargetsRole)
if err != nil {
return notaryError(repoInfo.FullName(), err)
}
// Only get the tag if it's in the top level targets role or the releases delegation role
// ignore it if it's in any other delegation roles
if t.Role != releasesRole && t.Role != data.CanonicalTargetsRole {
return notaryError(repoInfo.FullName(), fmt.Errorf("No trust data for %s", ref.String()))
}
logrus.Debugf("retrieving target for %s role\n", t.Role)
r, err := convertTarget(t.Target)
if err != nil {
return err
}
refs = append(refs, r)
}
for i, r := range refs {
displayTag := r.reference.String()
if displayTag != "" {
displayTag = ":" + displayTag
}
fmt.Fprintf(cli.out, "Pull (%d of %d): %s%s@%s\n", i+1, len(refs), repoInfo.Name(), displayTag, r.digest)
if err := cli.imagePullPrivileged(authConfig, repoInfo.Name(), r.digest.String(), requestPrivilege); err != nil {
return err
}
// If reference is not trusted, tag by trusted reference
if !r.reference.HasDigest() {
tagged, err := reference.WithTag(repoInfo, r.reference.String())
if err != nil {
return err
}
trustedRef, err := reference.WithDigest(repoInfo, r.digest)
if err != nil {
return err
}
if err := cli.tagTrusted(trustedRef, tagged); err != nil {
return err
}
}
}
return nil
}
func (cli *DockerCli) trustedPush(repoInfo *registry.RepositoryInfo, tag string, authConfig types.AuthConfig, requestPrivilege apiclient.RequestPrivilegeFunc) error {
responseBody, err := cli.imagePushPrivileged(authConfig, repoInfo.Name(), tag, requestPrivilege)
if err != nil {
return err
}
defer responseBody.Close()
// If it is a trusted push, we would like to find the target entry which matches the
// tag provided to the function and then do an AddTarget later.
target := &client.Target{}
// Count the number of times handleTarget is called;
// if it is called more than once, that should be considered an error in a trusted push.
cnt := 0
handleTarget := func(aux *json.RawMessage) {
cnt++
if cnt > 1 {
// handleTarget should only be called once. This will be treated as an error.
return
}
var pushResult distribution.PushResult
err := json.Unmarshal(*aux, &pushResult)
if err == nil && pushResult.Tag != "" && pushResult.Digest.Validate() == nil {
h, err := hex.DecodeString(pushResult.Digest.Hex())
if err != nil {
target = nil
return
}
target.Name = registry.ParseReference(pushResult.Tag).String()
target.Hashes = data.Hashes{string(pushResult.Digest.Algorithm()): h}
target.Length = int64(pushResult.Size)
}
}
// We want trust signatures to always require an explicit tag;
// otherwise the push will act as an untrusted push.
if tag == "" {
if err = jsonmessage.DisplayJSONMessagesStream(responseBody, cli.out, cli.outFd, cli.isTerminalOut, nil); err != nil {
return err
}
fmt.Fprintln(cli.out, "No tag specified, skipping trust metadata push")
return nil
}
if err = jsonmessage.DisplayJSONMessagesStream(responseBody, cli.out, cli.outFd, cli.isTerminalOut, handleTarget); err != nil {
return err
}
if cnt > 1 {
return fmt.Errorf("internal error: only one call to handleTarget expected")
}
if target == nil {
fmt.Fprintln(cli.out, "No targets found, please provide a specific tag in order to sign it")
return nil
}
fmt.Fprintln(cli.out, "Signing and pushing trust metadata")
repo, err := cli.getNotaryRepository(repoInfo, authConfig, "push", "pull")
if err != nil {
fmt.Fprintf(cli.out, "Error establishing connection to notary repository: %s\n", err)
return err
}
// get the latest repository metadata so we can figure out which roles to sign
_, err = repo.Update(false)
switch err.(type) {
case client.ErrRepoNotInitialized, client.ErrRepositoryNotExist:
keys := repo.CryptoService.ListKeys(data.CanonicalRootRole)
var rootKeyID string
// always select the first root key
if len(keys) > 0 {
sort.Strings(keys)
rootKeyID = keys[0]
} else {
rootPublicKey, err := repo.CryptoService.Create(data.CanonicalRootRole, "", data.ECDSAKey)
if err != nil {
return err
}
rootKeyID = rootPublicKey.ID()
}
// Initialize the notary repository with a remotely managed snapshot key
if err := repo.Initialize(rootKeyID, data.CanonicalSnapshotRole); err != nil {
return notaryError(repoInfo.FullName(), err)
}
fmt.Fprintf(cli.out, "Finished initializing %q\n", repoInfo.FullName())
err = repo.AddTarget(target, data.CanonicalTargetsRole)
case nil:
// already initialized and we have successfully downloaded the latest metadata
err = cli.addTargetToAllSignableRoles(repo, target)
default:
return notaryError(repoInfo.FullName(), err)
}
if err == nil {
err = repo.Publish()
}
if err != nil {
fmt.Fprintf(cli.out, "Failed to sign %q:%s - %s\n", repoInfo.FullName(), tag, err.Error())
return notaryError(repoInfo.FullName(), err)
}
fmt.Fprintf(cli.out, "Successfully signed %q:%s\n", repoInfo.FullName(), tag)
return nil
}
// Attempt to add the image target to all the top level delegation roles we can
// (based on whether we have the signing key and whether the role's path allows
// us to).
// If there are no delegation roles, we add to the targets role.
func (cli *DockerCli) addTargetToAllSignableRoles(repo *client.NotaryRepository, target *client.Target) error {
var signableRoles []string
// translate the full key names, which include the GUN, into just the key IDs
allCanonicalKeyIDs := make(map[string]struct{})
for fullKeyID := range repo.CryptoService.ListAllKeys() {
allCanonicalKeyIDs[path.Base(fullKeyID)] = struct{}{}
}
allDelegationRoles, err := repo.GetDelegationRoles()
if err != nil {
return err
}
// if there are no delegation roles, then just try to sign it into the targets role
if len(allDelegationRoles) == 0 {
return repo.AddTarget(target, data.CanonicalTargetsRole)
}
// there are delegation roles, find every delegation role we have a key for, and
// attempt to sign into all those roles.
for _, delegationRole := range allDelegationRoles {
// We do not support signing any delegation role that isn't a direct child of the targets role.
// Also don't bother checking the keys if we can't add the target
// to this role due to path restrictions
if path.Dir(delegationRole.Name) != data.CanonicalTargetsRole || !delegationRole.CheckPaths(target.Name) {
continue
}
for _, canonicalKeyID := range delegationRole.KeyIDs {
if _, ok := allCanonicalKeyIDs[canonicalKeyID]; ok {
signableRoles = append(signableRoles, delegationRole.Name)
break
}
}
}
if len(signableRoles) == 0 {
return fmt.Errorf("no valid signing keys for delegation roles")
}
return repo.AddTarget(target, signableRoles...)
}
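The direct-child check in the loop above hinges on treating role names as slash-separated paths. A minimal standalone sketch of that predicate, with made-up role names and the literal "targets" standing in for data.CanonicalTargetsRole:

package main

import (
	"fmt"
	"path"
)

// isDirectTargetsChild reports whether a delegation role name such as
// "targets/releases" sits directly under the top-level targets role.
// The literal "targets" is an assumption standing in for data.CanonicalTargetsRole.
func isDirectTargetsChild(roleName string) bool {
	return path.Dir(roleName) == "targets"
}

func main() {
	fmt.Println(isDirectTargetsChild("targets/releases")) // true: direct child, signable
	fmt.Println(isDirectTargetsChild("targets/a/b"))      // false: nested delegation, skipped
	fmt.Println(isDirectTargetsChild("targets"))          // false: the targets role itself
}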

View File

@@ -1,56 +0,0 @@
package client
import (
"os"
"testing"
"github.com/docker/docker/registry"
registrytypes "github.com/docker/engine-api/types/registry"
)
func unsetENV() {
os.Unsetenv("DOCKER_CONTENT_TRUST")
os.Unsetenv("DOCKER_CONTENT_TRUST_SERVER")
}
func TestENVTrustServer(t *testing.T) {
defer unsetENV()
indexInfo := &registrytypes.IndexInfo{Name: "testserver"}
if err := os.Setenv("DOCKER_CONTENT_TRUST_SERVER", "https://notary-test.com:5000"); err != nil {
t.Fatal("Failed to set ENV variable")
}
output, err := trustServer(indexInfo)
expectedStr := "https://notary-test.com:5000"
if err != nil || output != expectedStr {
t.Fatalf("Expected server to be %s, got %s", expectedStr, output)
}
}
func TestHTTPENVTrustServer(t *testing.T) {
defer unsetENV()
indexInfo := &registrytypes.IndexInfo{Name: "testserver"}
if err := os.Setenv("DOCKER_CONTENT_TRUST_SERVER", "http://notary-test.com:5000"); err != nil {
t.Fatal("Failed to set ENV variable")
}
_, err := trustServer(indexInfo)
if err == nil {
t.Fatal("Expected error with invalid scheme")
}
}
func TestOfficialTrustServer(t *testing.T) {
indexInfo := &registrytypes.IndexInfo{Name: "testserver", Official: true}
output, err := trustServer(indexInfo)
if err != nil || output != registry.NotaryServer {
t.Fatalf("Expected server to be %s, got %s", registry.NotaryServer, output)
}
}
func TestNonOfficialTrustServer(t *testing.T) {
indexInfo := &registrytypes.IndexInfo{Name: "testserver", Official: false}
output, err := trustServer(indexInfo)
expectedStr := "https://" + indexInfo.Name
if err != nil || output != expectedStr {
t.Fatalf("Expected server to be %s, got %s", expectedStr, output)
}
}

View File

@@ -1,34 +0,0 @@
package client
import (
"fmt"
"strings"
"golang.org/x/net/context"
Cli "github.com/docker/docker/cli"
flag "github.com/docker/docker/pkg/mflag"
)
// CmdUnpause unpauses all processes within a container, for one or more containers.
//
// Usage: docker unpause CONTAINER [CONTAINER...]
func (cli *DockerCli) CmdUnpause(args ...string) error {
cmd := Cli.Subcmd("unpause", []string{"CONTAINER [CONTAINER...]"}, Cli.DockerCommands["unpause"].Description, true)
cmd.Require(flag.Min, 1)
cmd.ParseFlags(args, true)
var errs []string
for _, name := range cmd.Args() {
if err := cli.client.ContainerUnpause(context.Background(), name); err != nil {
errs = append(errs, err.Error())
} else {
fmt.Fprintf(cli.out, "%s\n", name)
}
}
if len(errs) > 0 {
return fmt.Errorf("%s", strings.Join(errs, "\n"))
}
return nil
}

View File

@@ -1,117 +0,0 @@
package client
import (
"fmt"
"strings"
"golang.org/x/net/context"
Cli "github.com/docker/docker/cli"
flag "github.com/docker/docker/pkg/mflag"
"github.com/docker/docker/runconfig/opts"
"github.com/docker/engine-api/types/container"
"github.com/docker/go-units"
)
// CmdUpdate updates resources of one or more containers.
//
// Usage: docker update [OPTIONS] CONTAINER [CONTAINER...]
func (cli *DockerCli) CmdUpdate(args ...string) error {
cmd := Cli.Subcmd("update", []string{"CONTAINER [CONTAINER...]"}, Cli.DockerCommands["update"].Description, true)
flBlkioWeight := cmd.Uint16([]string{"-blkio-weight"}, 0, "Block IO (relative weight), between 10 and 1000")
flCPUPeriod := cmd.Int64([]string{"-cpu-period"}, 0, "Limit CPU CFS (Completely Fair Scheduler) period")
flCPUQuota := cmd.Int64([]string{"-cpu-quota"}, 0, "Limit CPU CFS (Completely Fair Scheduler) quota")
flCpusetCpus := cmd.String([]string{"-cpuset-cpus"}, "", "CPUs in which to allow execution (0-3, 0,1)")
flCpusetMems := cmd.String([]string{"-cpuset-mems"}, "", "MEMs in which to allow execution (0-3, 0,1)")
flCPUShares := cmd.Int64([]string{"#c", "-cpu-shares"}, 0, "CPU shares (relative weight)")
flMemoryString := cmd.String([]string{"m", "-memory"}, "", "Memory limit")
flMemoryReservation := cmd.String([]string{"-memory-reservation"}, "", "Memory soft limit")
flMemorySwap := cmd.String([]string{"-memory-swap"}, "", "Swap limit equal to memory plus swap: '-1' to enable unlimited swap")
flKernelMemory := cmd.String([]string{"-kernel-memory"}, "", "Kernel memory limit")
flRestartPolicy := cmd.String([]string{"-restart"}, "", "Restart policy to apply when a container exits")
cmd.Require(flag.Min, 1)
cmd.ParseFlags(args, true)
if cmd.NFlag() == 0 {
return fmt.Errorf("You must provide one or more flags when using this command.")
}
var err error
var flMemory int64
if *flMemoryString != "" {
flMemory, err = units.RAMInBytes(*flMemoryString)
if err != nil {
return err
}
}
var memoryReservation int64
if *flMemoryReservation != "" {
memoryReservation, err = units.RAMInBytes(*flMemoryReservation)
if err != nil {
return err
}
}
var memorySwap int64
if *flMemorySwap != "" {
if *flMemorySwap == "-1" {
memorySwap = -1
} else {
memorySwap, err = units.RAMInBytes(*flMemorySwap)
if err != nil {
return err
}
}
}
var kernelMemory int64
if *flKernelMemory != "" {
kernelMemory, err = units.RAMInBytes(*flKernelMemory)
if err != nil {
return err
}
}
var restartPolicy container.RestartPolicy
if *flRestartPolicy != "" {
restartPolicy, err = opts.ParseRestartPolicy(*flRestartPolicy)
if err != nil {
return err
}
}
resources := container.Resources{
BlkioWeight: *flBlkioWeight,
CpusetCpus: *flCpusetCpus,
CpusetMems: *flCpusetMems,
CPUShares: *flCPUShares,
Memory: flMemory,
MemoryReservation: memoryReservation,
MemorySwap: memorySwap,
KernelMemory: kernelMemory,
CPUPeriod: *flCPUPeriod,
CPUQuota: *flCPUQuota,
}
updateConfig := container.UpdateConfig{
Resources: resources,
RestartPolicy: restartPolicy,
}
names := cmd.Args()
var errs []string
for _, name := range names {
if err := cli.client.ContainerUpdate(context.Background(), name, updateConfig); err != nil {
errs = append(errs, err.Error())
} else {
fmt.Fprintf(cli.out, "%s\n", name)
}
}
if len(errs) > 0 {
return fmt.Errorf("%s", strings.Join(errs, "\n"))
}
return nil
}
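The memory-related flags above are turned into byte counts by go-units; a small sketch of that conversion with arbitrarily chosen values (assumes the github.com/docker/go-units dependency is available):

package main

import (
	"fmt"

	units "github.com/docker/go-units"
)

func main() {
	// RAMInBytes accepts human-readable sizes such as "512m" or "2g"
	// and returns the value in bytes, as used for --memory and friends.
	for _, s := range []string{"512m", "2g", "1024k"} {
		n, err := units.RAMInBytes(s)
		if err != nil {
			fmt.Println(s, "->", err)
			continue
		}
		fmt.Println(s, "->", n, "bytes")
	}
}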

View File

@@ -1,147 +1,269 @@
package client
import (
"bytes"
"encoding/base64"
"encoding/json"
"errors"
"fmt"
"io"
"io/ioutil"
"net/http"
"net/url"
"os"
gosignal "os/signal"
"path/filepath"
"runtime"
"time"
"strconv"
"strings"
"golang.org/x/net/context"
"github.com/Sirupsen/logrus"
log "github.com/Sirupsen/logrus"
"github.com/docker/docker/api"
"github.com/docker/docker/dockerversion"
"github.com/docker/docker/engine"
"github.com/docker/docker/pkg/signal"
"github.com/docker/docker/pkg/stdcopy"
"github.com/docker/docker/pkg/term"
"github.com/docker/docker/registry"
"github.com/docker/engine-api/client"
"github.com/docker/engine-api/types"
registrytypes "github.com/docker/engine-api/types/registry"
"github.com/docker/docker/utils"
)
func (cli *DockerCli) electAuthServer() string {
// The daemon `/info` endpoint informs us of the default registry being
// used. This is essential in cross-platform environments, where for
// example a Linux client might be interacting with a Windows daemon, hence
// the default registry URL might be Windows specific.
serverAddress := registry.IndexServer
if info, err := cli.client.Info(context.Background()); err != nil {
fmt.Fprintf(cli.out, "Warning: failed to get default registry endpoint from daemon (%v). Using system default: %s\n", err, serverAddress)
} else {
serverAddress = info.IndexServerAddress
}
return serverAddress
var (
ErrConnectionRefused = errors.New("Cannot connect to the Docker daemon. Is 'docker -d' running on this host?")
)
func (cli *DockerCli) HTTPClient() *http.Client {
return &http.Client{Transport: cli.transport}
}
// encodeAuthToBase64 serializes the auth configuration as JSON base64 payload
func encodeAuthToBase64(authConfig types.AuthConfig) (string, error) {
buf, err := json.Marshal(authConfig)
if err != nil {
return "", err
}
return base64.URLEncoding.EncodeToString(buf), nil
}
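For illustration, the same JSON-then-URL-safe-base64 encoding applied to a throwaway struct; the field names here are an assumption made for the sketch, not the engine-api AuthConfig definition:

package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
)

// toyAuth is a stand-in for types.AuthConfig, used only for this sketch.
type toyAuth struct {
	Username      string `json:"username"`
	Password      string `json:"password"`
	ServerAddress string `json:"serveraddress"`
}

func main() {
	buf, err := json.Marshal(toyAuth{
		Username:      "someuser",
		Password:      "secret",
		ServerAddress: "https://index.docker.io/v1/",
	})
	if err != nil {
		panic(err)
	}
	// URL-safe base64, matching encodeAuthToBase64 above; this is the kind of
	// value that ends up in the X-Registry-Auth request header.
	fmt.Println(base64.URLEncoding.EncodeToString(buf))
}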
func (cli *DockerCli) registryAuthenticationPrivilegedFunc(index *registrytypes.IndexInfo, cmdName string) client.RequestPrivilegeFunc {
return func() (string, error) {
fmt.Fprintf(cli.out, "\nPlease login prior to %s:\n", cmdName)
indexServer := registry.GetAuthConfigKey(index)
authConfig, err := cli.configureAuth("", "", indexServer, false)
if err != nil {
return "", err
func (cli *DockerCli) encodeData(data interface{}) (*bytes.Buffer, error) {
params := bytes.NewBuffer(nil)
if data != nil {
if env, ok := data.(engine.Env); ok {
if err := env.Encode(params); err != nil {
return nil, err
}
} else {
buf, err := json.Marshal(data)
if err != nil {
return nil, err
}
if _, err := params.Write(buf); err != nil {
return nil, err
}
}
return encodeAuthToBase64(authConfig)
}
return params, nil
}
func (cli *DockerCli) call(method, path string, data interface{}, passAuthInfo bool) (io.ReadCloser, int, error) {
params, err := cli.encodeData(data)
if err != nil {
return nil, -1, err
}
req, err := http.NewRequest(method, fmt.Sprintf("/v%s%s", api.APIVERSION, path), params)
if err != nil {
return nil, -1, err
}
if passAuthInfo {
cli.LoadConfigFile()
// Resolve the Auth config relevant for this server
authConfig := cli.configFile.Configs[registry.IndexServerAddress()]
getHeaders := func(authConfig registry.AuthConfig) (map[string][]string, error) {
buf, err := json.Marshal(authConfig)
if err != nil {
return nil, err
}
registryAuthHeader := []string{
base64.URLEncoding.EncodeToString(buf),
}
return map[string][]string{"X-Registry-Auth": registryAuthHeader}, nil
}
if headers, err := getHeaders(authConfig); err == nil && headers != nil {
for k, v := range headers {
req.Header[k] = v
}
}
}
req.Header.Set("User-Agent", "Docker-Client/"+dockerversion.VERSION)
req.URL.Host = cli.addr
req.URL.Scheme = cli.scheme
if data != nil {
req.Header.Set("Content-Type", "application/json")
} else if method == "POST" {
req.Header.Set("Content-Type", "text/plain")
}
resp, err := cli.HTTPClient().Do(req)
if err != nil {
if strings.Contains(err.Error(), "connection refused") {
return nil, -1, ErrConnectionRefused
}
if cli.tlsConfig == nil {
return nil, -1, fmt.Errorf("%v. Are you trying to connect to a TLS-enabled daemon without TLS?", err)
}
return nil, -1, fmt.Errorf("An error occurred trying to connect: %v", err)
}
if resp.StatusCode < 200 || resp.StatusCode >= 400 {
body, err := ioutil.ReadAll(resp.Body)
if err != nil {
return nil, -1, err
}
if len(body) == 0 {
return nil, resp.StatusCode, fmt.Errorf("Error: request returned %s for API route and version %s, check if the server supports the requested API version", http.StatusText(resp.StatusCode), req.URL)
}
return nil, resp.StatusCode, fmt.Errorf("Error response from daemon: %s", bytes.TrimSpace(body))
}
return resp.Body, resp.StatusCode, nil
}
func (cli *DockerCli) stream(method, path string, in io.Reader, out io.Writer, headers map[string][]string) error {
return cli.streamHelper(method, path, true, in, out, nil, headers)
}
func (cli *DockerCli) streamHelper(method, path string, setRawTerminal bool, in io.Reader, stdout, stderr io.Writer, headers map[string][]string) error {
if (method == "POST" || method == "PUT") && in == nil {
in = bytes.NewReader([]byte{})
}
req, err := http.NewRequest(method, fmt.Sprintf("/v%s%s", api.APIVERSION, path), in)
if err != nil {
return err
}
req.Header.Set("User-Agent", "Docker-Client/"+dockerversion.VERSION)
req.URL.Host = cli.addr
req.URL.Scheme = cli.scheme
if method == "POST" {
req.Header.Set("Content-Type", "text/plain")
}
if headers != nil {
for k, v := range headers {
req.Header[k] = v
}
}
resp, err := cli.HTTPClient().Do(req)
if err != nil {
if strings.Contains(err.Error(), "connection refused") {
return fmt.Errorf("Cannot connect to the Docker daemon. Is 'docker -d' running on this host?")
}
return err
}
defer resp.Body.Close()
if resp.StatusCode < 200 || resp.StatusCode >= 400 {
body, err := ioutil.ReadAll(resp.Body)
if err != nil {
return err
}
if len(body) == 0 {
return fmt.Errorf("Error :%s", http.StatusText(resp.StatusCode))
}
return fmt.Errorf("Error: %s", bytes.TrimSpace(body))
}
if api.MatchesContentType(resp.Header.Get("Content-Type"), "application/json") {
return utils.DisplayJSONMessagesStream(resp.Body, stdout, cli.outFd, cli.isTerminalOut)
}
if stdout != nil || stderr != nil {
// When TTY is ON, use regular copy
if setRawTerminal {
_, err = io.Copy(stdout, resp.Body)
} else {
_, err = stdcopy.StdCopy(stdout, stderr, resp.Body)
}
log.Debugf("[stream] End of stdout")
return err
}
return nil
}
func (cli *DockerCli) resizeTty(id string, isExec bool) {
height, width := cli.getTtySize()
cli.resizeTtyTo(id, height, width, isExec)
}
func (cli *DockerCli) resizeTtyTo(id string, height, width int, isExec bool) {
if height == 0 && width == 0 {
return
}
v := url.Values{}
v.Set("h", strconv.Itoa(height))
v.Set("w", strconv.Itoa(width))
options := types.ResizeOptions{
ID: id,
Height: height,
Width: width,
}
var err error
if isExec {
err = cli.client.ContainerExecResize(context.Background(), options)
path := ""
if !isExec {
path = "/containers/" + id + "/resize?"
} else {
err = cli.client.ContainerResize(context.Background(), options)
path = "/exec/" + id + "/resize?"
}
if err != nil {
logrus.Debugf("Error resize: %s", err)
if _, _, err := readBody(cli.call("POST", path+v.Encode(), nil, false)); err != nil {
log.Debugf("Error resize: %s", err)
}
}
func waitForExit(cli *DockerCli, containerId string) (int, error) {
stream, _, err := cli.call("POST", "/containers/"+containerId+"/wait", nil, false)
if err != nil {
return -1, err
}
var out engine.Env
if err := out.Decode(stream); err != nil {
return -1, err
}
return out.GetInt("StatusCode"), nil
}
// getExitCode performs an inspect on the container. It returns
// the running state and the exit code.
func getExitCode(cli *DockerCli, containerID string) (bool, int, error) {
c, err := cli.client.ContainerInspect(context.Background(), containerID)
func getExitCode(cli *DockerCli, containerId string) (bool, int, error) {
stream, _, err := cli.call("GET", "/containers/"+containerId+"/json", nil, false)
if err != nil {
// If we can't connect, then the daemon probably died.
if err != client.ErrConnectionFailed {
if err != ErrConnectionRefused {
return false, -1, err
}
return false, -1, nil
}
return c.State.Running, c.State.ExitCode, nil
var result engine.Env
if err := result.Decode(stream); err != nil {
return false, -1, err
}
state := result.GetSubEnv("State")
return state.GetBool("Running"), state.GetInt("ExitCode"), nil
}
// getExecExitCode performs an inspect on the exec command. It returns
// the running state and the exit code.
func getExecExitCode(cli *DockerCli, execID string) (bool, int, error) {
resp, err := cli.client.ContainerExecInspect(context.Background(), execID)
func getExecExitCode(cli *DockerCli, execId string) (bool, int, error) {
stream, _, err := cli.call("GET", "/exec/"+execId+"/json", nil, false)
if err != nil {
// If we can't connect, then the daemon probably died.
if err != client.ErrConnectionFailed {
if err != ErrConnectionRefused {
return false, -1, err
}
return false, -1, nil
}
return resp.Running, resp.ExitCode, nil
var result engine.Env
if err := result.Decode(stream); err != nil {
return false, -1, err
}
return result.GetBool("Running"), result.GetInt("ExitCode"), nil
}
func (cli *DockerCli) monitorTtySize(id string, isExec bool) error {
cli.resizeTty(id, isExec)
if runtime.GOOS == "windows" {
go func() {
prevH, prevW := cli.getTtySize()
for {
time.Sleep(time.Millisecond * 250)
h, w := cli.getTtySize()
if prevW != w || prevH != h {
cli.resizeTty(id, isExec)
}
prevH = h
prevW = w
}
}()
} else {
sigchan := make(chan os.Signal, 1)
gosignal.Notify(sigchan, signal.SIGWINCH)
go func() {
for range sigchan {
cli.resizeTty(id, isExec)
}
}()
}
sigchan := make(chan os.Signal, 1)
gosignal.Notify(sigchan, signal.SIGWINCH)
go func() {
for _ = range sigchan {
cli.resizeTty(id, isExec)
}
}()
return nil
}
@@ -151,7 +273,7 @@ func (cli *DockerCli) getTtySize() (int, int) {
}
ws, err := term.GetWinsize(cli.outFd)
if err != nil {
logrus.Debugf("Error getting size: %s", err)
log.Debugf("Error getting size: %s", err)
if ws == nil {
return 0, 0
}
@@ -159,44 +281,16 @@ func (cli *DockerCli) getTtySize() (int, int) {
return int(ws.Height), int(ws.Width)
}
func copyToFile(outfile string, r io.Reader) error {
tmpFile, err := ioutil.TempFile(filepath.Dir(outfile), ".docker_temp_")
func readBody(stream io.ReadCloser, statusCode int, err error) ([]byte, int, error) {
if stream != nil {
defer stream.Close()
}
if err != nil {
return err
return nil, statusCode, err
}
tmpPath := tmpFile.Name()
_, err = io.Copy(tmpFile, r)
tmpFile.Close()
body, err := ioutil.ReadAll(stream)
if err != nil {
os.Remove(tmpPath)
return err
return nil, -1, err
}
if err = os.Rename(tmpPath, outfile); err != nil {
os.Remove(tmpPath)
return err
}
return nil
}
// resolveAuthConfig is like registry.ResolveAuthConfig, but if using the
// default index, it uses the default index name for the daemon's platform,
// not the client's platform.
func (cli *DockerCli) resolveAuthConfig(index *registrytypes.IndexInfo) types.AuthConfig {
configKey := index.Name
if index.Official {
configKey = cli.electAuthServer()
}
a, _ := getCredentials(cli.configFile, configKey)
return a
}
func (cli *DockerCli) retrieveAuthConfigs() map[string]types.AuthConfig {
acs, _ := getAllCredentials(cli.configFile)
return acs
return body, statusCode, nil
}

View File

@@ -1,95 +0,0 @@
package client
import (
"runtime"
"text/template"
"time"
"golang.org/x/net/context"
Cli "github.com/docker/docker/cli"
"github.com/docker/docker/dockerversion"
flag "github.com/docker/docker/pkg/mflag"
"github.com/docker/docker/utils"
"github.com/docker/docker/utils/templates"
"github.com/docker/engine-api/types"
)
var versionTemplate = `Client:
Version: {{.Client.Version}}
API version: {{.Client.APIVersion}}
Go version: {{.Client.GoVersion}}
Git commit: {{.Client.GitCommit}}
Built: {{.Client.BuildTime}}
OS/Arch: {{.Client.Os}}/{{.Client.Arch}}{{if .Client.Experimental}}
Experimental: {{.Client.Experimental}}{{end}}{{if .ServerOK}}
Server:
Version: {{.Server.Version}}
API version: {{.Server.APIVersion}}
Go version: {{.Server.GoVersion}}
Git commit: {{.Server.GitCommit}}
Built: {{.Server.BuildTime}}
OS/Arch: {{.Server.Os}}/{{.Server.Arch}}{{if .Server.Experimental}}
Experimental: {{.Server.Experimental}}{{end}}{{end}}`
// CmdVersion shows Docker version information.
//
// Available version information is shown for: client Docker version, client API version, client Go version, client Git commit, client OS/Arch, server Docker version, server API version, server Go version, server Git commit, and server OS/Arch.
//
// Usage: docker version
func (cli *DockerCli) CmdVersion(args ...string) (err error) {
cmd := Cli.Subcmd("version", nil, Cli.DockerCommands["version"].Description, true)
tmplStr := cmd.String([]string{"f", "#format", "-format"}, "", "Format the output using the given go template")
cmd.Require(flag.Exact, 0)
cmd.ParseFlags(args, true)
templateFormat := versionTemplate
if *tmplStr != "" {
templateFormat = *tmplStr
}
var tmpl *template.Template
if tmpl, err = templates.Parse(templateFormat); err != nil {
return Cli.StatusError{StatusCode: 64,
Status: "Template parsing error: " + err.Error()}
}
vd := types.VersionResponse{
Client: &types.Version{
Version: dockerversion.Version,
APIVersion: cli.client.ClientVersion(),
GoVersion: runtime.Version(),
GitCommit: dockerversion.GitCommit,
BuildTime: dockerversion.BuildTime,
Os: runtime.GOOS,
Arch: runtime.GOARCH,
Experimental: utils.ExperimentalBuild(),
},
}
serverVersion, err := cli.client.ServerVersion(context.Background())
if err == nil {
vd.Server = &serverVersion
}
// first we need to make BuildTime more human friendly
t, errTime := time.Parse(time.RFC3339Nano, vd.Client.BuildTime)
if errTime == nil {
vd.Client.BuildTime = t.Format(time.ANSIC)
}
if vd.ServerOK() {
t, errTime = time.Parse(time.RFC3339Nano, vd.Server.BuildTime)
if errTime == nil {
vd.Server.BuildTime = t.Format(time.ANSIC)
}
}
if err2 := tmpl.Execute(cli.out, vd); err2 != nil && err == nil {
err = err2
}
cli.out.Write([]byte{'\n'})
return err
}
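CmdVersion drives its output through text/template in the same way as the sketch below; the struct and the field values here are invented for the example:

package main

import (
	"os"
	"text/template"
)

// clientInfo is a cut-down stand-in for the client half of types.VersionResponse.
type clientInfo struct {
	Version   string
	GoVersion string
}

const versionTmpl = `Client:
 Version:      {{.Version}}
 Go version:   {{.GoVersion}}
`

func main() {
	tmpl := template.Must(template.New("version").Parse(versionTmpl))
	// Render to stdout, just as CmdVersion renders to cli.out.
	if err := tmpl.Execute(os.Stdout, clientInfo{Version: "1.10.0-dev", GoVersion: "go1.5.3"}); err != nil {
		panic(err)
	}
}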

View File

@@ -1,177 +0,0 @@
package client
import (
"fmt"
"sort"
"text/tabwriter"
"golang.org/x/net/context"
Cli "github.com/docker/docker/cli"
"github.com/docker/docker/opts"
flag "github.com/docker/docker/pkg/mflag"
runconfigopts "github.com/docker/docker/runconfig/opts"
"github.com/docker/engine-api/types"
"github.com/docker/engine-api/types/filters"
)
// CmdVolume is the parent subcommand for all volume commands
//
// Usage: docker volume <COMMAND> <OPTS>
func (cli *DockerCli) CmdVolume(args ...string) error {
description := Cli.DockerCommands["volume"].Description + "\n\nCommands:\n"
commands := [][]string{
{"create", "Create a volume"},
{"inspect", "Return low-level information on a volume"},
{"ls", "List volumes"},
{"rm", "Remove a volume"},
}
for _, cmd := range commands {
description += fmt.Sprintf(" %-25.25s%s\n", cmd[0], cmd[1])
}
description += "\nRun 'docker volume COMMAND --help' for more information on a command"
cmd := Cli.Subcmd("volume", []string{"[COMMAND]"}, description, false)
cmd.Require(flag.Exact, 0)
err := cmd.ParseFlags(args, true)
cmd.Usage()
return err
}
// CmdVolumeLs outputs a list of Docker volumes.
//
// Usage: docker volume ls [OPTIONS]
func (cli *DockerCli) CmdVolumeLs(args ...string) error {
cmd := Cli.Subcmd("volume ls", nil, "List volumes", true)
quiet := cmd.Bool([]string{"q", "-quiet"}, false, "Only display volume names")
flFilter := opts.NewListOpts(nil)
cmd.Var(&flFilter, []string{"f", "-filter"}, "Provide filter values (i.e. 'dangling=true')")
cmd.Require(flag.Exact, 0)
cmd.ParseFlags(args, true)
volFilterArgs := filters.NewArgs()
for _, f := range flFilter.GetAll() {
var err error
volFilterArgs, err = filters.ParseFlag(f, volFilterArgs)
if err != nil {
return err
}
}
volumes, err := cli.client.VolumeList(context.Background(), volFilterArgs)
if err != nil {
return err
}
w := tabwriter.NewWriter(cli.out, 20, 1, 3, ' ', 0)
if !*quiet {
for _, warn := range volumes.Warnings {
fmt.Fprintln(cli.err, warn)
}
fmt.Fprintf(w, "DRIVER \tVOLUME NAME")
fmt.Fprintf(w, "\n")
}
sort.Sort(byVolumeName(volumes.Volumes))
for _, vol := range volumes.Volumes {
if *quiet {
fmt.Fprintln(w, vol.Name)
continue
}
fmt.Fprintf(w, "%s\t%s\n", vol.Driver, vol.Name)
}
w.Flush()
return nil
}
type byVolumeName []*types.Volume
func (r byVolumeName) Len() int { return len(r) }
func (r byVolumeName) Swap(i, j int) { r[i], r[j] = r[j], r[i] }
func (r byVolumeName) Less(i, j int) bool {
return r[i].Name < r[j].Name
}
// CmdVolumeInspect displays low-level information on one or more volumes.
//
// Usage: docker volume inspect [OPTIONS] VOLUME [VOLUME...]
func (cli *DockerCli) CmdVolumeInspect(args ...string) error {
cmd := Cli.Subcmd("volume inspect", []string{"VOLUME [VOLUME...]"}, "Return low-level information on a volume", true)
tmplStr := cmd.String([]string{"f", "-format"}, "", "Format the output using the given go template")
cmd.Require(flag.Min, 1)
cmd.ParseFlags(args, true)
if err := cmd.Parse(args); err != nil {
return nil
}
inspectSearcher := func(name string) (interface{}, []byte, error) {
i, err := cli.client.VolumeInspect(context.Background(), name)
return i, nil, err
}
return cli.inspectElements(*tmplStr, cmd.Args(), inspectSearcher)
}
// CmdVolumeCreate creates a new volume.
//
// Usage: docker volume create [OPTIONS]
func (cli *DockerCli) CmdVolumeCreate(args ...string) error {
cmd := Cli.Subcmd("volume create", nil, "Create a volume", true)
flDriver := cmd.String([]string{"d", "-driver"}, "local", "Specify volume driver name")
flName := cmd.String([]string{"-name"}, "", "Specify volume name")
flDriverOpts := opts.NewMapOpts(nil, nil)
cmd.Var(flDriverOpts, []string{"o", "-opt"}, "Set driver specific options")
flLabels := opts.NewListOpts(nil)
cmd.Var(&flLabels, []string{"-label"}, "Set metadata for a volume")
cmd.Require(flag.Exact, 0)
cmd.ParseFlags(args, true)
volReq := types.VolumeCreateRequest{
Driver: *flDriver,
DriverOpts: flDriverOpts.GetAll(),
Name: *flName,
Labels: runconfigopts.ConvertKVStringsToMap(flLabels.GetAll()),
}
vol, err := cli.client.VolumeCreate(context.Background(), volReq)
if err != nil {
return err
}
fmt.Fprintf(cli.out, "%s\n", vol.Name)
return nil
}
// CmdVolumeRm removes one or more volumes.
//
// Usage: docker volume rm VOLUME [VOLUME...]
func (cli *DockerCli) CmdVolumeRm(args ...string) error {
cmd := Cli.Subcmd("volume rm", []string{"VOLUME [VOLUME...]"}, "Remove a volume", true)
cmd.Require(flag.Min, 1)
cmd.ParseFlags(args, true)
var status = 0
for _, name := range cmd.Args() {
if err := cli.client.VolumeRemove(context.Background(), name); err != nil {
fmt.Fprintf(cli.err, "%s\n", err)
status = 1
continue
}
fmt.Fprintf(cli.out, "%s\n", name)
}
if status != 0 {
return Cli.StatusError{StatusCode: status}
}
return nil
}

View File

@@ -1,37 +0,0 @@
package client
import (
"fmt"
"strings"
"golang.org/x/net/context"
Cli "github.com/docker/docker/cli"
flag "github.com/docker/docker/pkg/mflag"
)
// CmdWait blocks until a container stops, then prints its exit code.
//
// If more than one container is specified, this will wait synchronously on each container.
//
// Usage: docker wait CONTAINER [CONTAINER...]
func (cli *DockerCli) CmdWait(args ...string) error {
cmd := Cli.Subcmd("wait", []string{"CONTAINER [CONTAINER...]"}, Cli.DockerCommands["wait"].Description, true)
cmd.Require(flag.Min, 1)
cmd.ParseFlags(args, true)
var errs []string
for _, name := range cmd.Args() {
status, err := cli.client.ContainerWait(context.Background(), name)
if err != nil {
errs = append(errs, err.Error())
} else {
fmt.Fprintf(cli.out, "%d\n", status)
}
}
if len(errs) > 0 {
return fmt.Errorf("%s", strings.Join(errs, "\n"))
}
return nil
}

View File

@@ -3,122 +3,51 @@ package api
import (
"fmt"
"mime"
"os"
"path/filepath"
"sort"
"strconv"
"strings"
"github.com/Sirupsen/logrus"
"github.com/docker/docker/pkg/system"
log "github.com/Sirupsen/logrus"
"github.com/docker/docker/engine"
"github.com/docker/docker/pkg/parsers"
"github.com/docker/docker/pkg/version"
"github.com/docker/engine-api/types"
"github.com/docker/libtrust"
)
// Common constants for daemon and client.
const (
// Version of Current REST API
DefaultVersion version.Version = "1.23"
// MinVersion represents Minimum REST API version supported
MinVersion version.Version = "1.12"
// NoBaseImageSpecifier is the symbol used by the FROM
// command to specify that no base image is to be used.
NoBaseImageSpecifier string = "scratch"
APIVERSION version.Version = "1.17"
DEFAULTHTTPHOST = "127.0.0.1"
DEFAULTUNIXSOCKET = "/var/run/docker.sock"
DefaultDockerfileName string = "Dockerfile"
)
// byPortInfo is a temporary type used to sort types.Port by its fields
type byPortInfo []types.Port
func (r byPortInfo) Len() int { return len(r) }
func (r byPortInfo) Swap(i, j int) { r[i], r[j] = r[j], r[i] }
func (r byPortInfo) Less(i, j int) bool {
if r[i].PrivatePort != r[j].PrivatePort {
return r[i].PrivatePort < r[j].PrivatePort
func ValidateHost(val string) (string, error) {
host, err := parsers.ParseHost(DEFAULTHTTPHOST, DEFAULTUNIXSOCKET, val)
if err != nil {
return val, err
}
if r[i].IP != r[j].IP {
return r[i].IP < r[j].IP
}
if r[i].PublicPort != r[j].PublicPort {
return r[i].PublicPort < r[j].PublicPort
}
return r[i].Type < r[j].Type
return host, nil
}
// DisplayablePorts returns a formatted string representing the open ports of a container
// e.g. "0.0.0.0:80->9090/tcp, 9988/tcp"
// it's used by command 'docker ps'
func DisplayablePorts(ports []types.Port) string {
type portGroup struct {
first int
last int
}
groupMap := make(map[string]*portGroup)
var result []string
var hostMappings []string
var groupMapKeys []string
sort.Sort(byPortInfo(ports))
for _, port := range ports {
current := port.PrivatePort
portKey := port.Type
if port.IP != "" {
if port.PublicPort != current {
hostMappings = append(hostMappings, fmt.Sprintf("%s:%d->%d/%s", port.IP, port.PublicPort, port.PrivatePort, port.Type))
continue
}
portKey = fmt.Sprintf("%s/%s", port.IP, port.Type)
//TODO remove, used on < 1.5 in getContainersJSON
func DisplayablePorts(ports *engine.Table) string {
result := []string{}
ports.SetKey("PublicPort")
ports.Sort()
for _, port := range ports.Data {
if port.Get("IP") == "" {
result = append(result, fmt.Sprintf("%d/%s", port.GetInt("PrivatePort"), port.Get("Type")))
} else {
result = append(result, fmt.Sprintf("%s:%d->%d/%s", port.Get("IP"), port.GetInt("PublicPort"), port.GetInt("PrivatePort"), port.Get("Type")))
}
group := groupMap[portKey]
if group == nil {
groupMap[portKey] = &portGroup{first: current, last: current}
// record order that groupMap keys are created
groupMapKeys = append(groupMapKeys, portKey)
continue
}
if current == (group.last + 1) {
group.last = current
continue
}
result = append(result, formGroup(portKey, group.first, group.last))
groupMap[portKey] = &portGroup{first: current, last: current}
}
for _, portKey := range groupMapKeys {
g := groupMap[portKey]
result = append(result, formGroup(portKey, g.first, g.last))
}
result = append(result, hostMappings...)
return strings.Join(result, ", ")
}
func formGroup(key string, start, last int) string {
parts := strings.Split(key, "/")
groupType := parts[0]
var ip string
if len(parts) > 1 {
ip = parts[0]
groupType = parts[1]
}
group := strconv.Itoa(start)
if start != last {
group = fmt.Sprintf("%s-%d", group, last)
}
if ip != "" {
group = fmt.Sprintf("%s:%s->%s", ip, group, group)
}
return fmt.Sprintf("%s/%s", group, groupType)
}
// MatchesContentType validates the content type against the expected one
func MatchesContentType(contentType, expectedType string) bool {
mimetype, _, err := mime.ParseMediaType(contentType)
if err != nil {
logrus.Errorf("Error parsing media type: %s error: %v", contentType, err)
log.Errorf("Error parsing media type: %s error: %s", contentType, err.Error())
}
return err == nil && mimetype == expectedType
}
@@ -126,7 +55,7 @@ func MatchesContentType(contentType, expectedType string) bool {
// LoadOrCreateTrustKey attempts to load the libtrust key at the given path,
// otherwise generates a new one
func LoadOrCreateTrustKey(trustKeyPath string) (libtrust.PrivateKey, error) {
err := system.MkdirAll(filepath.Dir(trustKeyPath), 0700)
err := os.MkdirAll(filepath.Dir(trustKeyPath), 0700)
if err != nil {
return nil, err
}

View File

@@ -1,341 +0,0 @@
package api
import (
"io/ioutil"
"path/filepath"
"testing"
"os"
"github.com/docker/engine-api/types"
)
type ports struct {
ports []types.Port
expected string
}
// DisplayablePorts
func TestDisplayablePorts(t *testing.T) {
cases := []ports{
{
[]types.Port{
{
PrivatePort: 9988,
Type: "tcp",
},
},
"9988/tcp"},
{
[]types.Port{
{
PrivatePort: 9988,
Type: "udp",
},
},
"9988/udp",
},
{
[]types.Port{
{
IP: "0.0.0.0",
PrivatePort: 9988,
Type: "tcp",
},
},
"0.0.0.0:0->9988/tcp",
},
{
[]types.Port{
{
PrivatePort: 9988,
PublicPort: 8899,
Type: "tcp",
},
},
"9988/tcp",
},
{
[]types.Port{
{
IP: "4.3.2.1",
PrivatePort: 9988,
PublicPort: 8899,
Type: "tcp",
},
},
"4.3.2.1:8899->9988/tcp",
},
{
[]types.Port{
{
IP: "4.3.2.1",
PrivatePort: 9988,
PublicPort: 9988,
Type: "tcp",
},
},
"4.3.2.1:9988->9988/tcp",
},
{
[]types.Port{
{
PrivatePort: 9988,
Type: "udp",
}, {
PrivatePort: 9988,
Type: "udp",
},
},
"9988/udp, 9988/udp",
},
{
[]types.Port{
{
IP: "1.2.3.4",
PublicPort: 9998,
PrivatePort: 9998,
Type: "udp",
}, {
IP: "1.2.3.4",
PublicPort: 9999,
PrivatePort: 9999,
Type: "udp",
},
},
"1.2.3.4:9998-9999->9998-9999/udp",
},
{
[]types.Port{
{
IP: "1.2.3.4",
PublicPort: 8887,
PrivatePort: 9998,
Type: "udp",
}, {
IP: "1.2.3.4",
PublicPort: 8888,
PrivatePort: 9999,
Type: "udp",
},
},
"1.2.3.4:8887->9998/udp, 1.2.3.4:8888->9999/udp",
},
{
[]types.Port{
{
PrivatePort: 9998,
Type: "udp",
}, {
PrivatePort: 9999,
Type: "udp",
},
},
"9998-9999/udp",
},
{
[]types.Port{
{
IP: "1.2.3.4",
PrivatePort: 6677,
PublicPort: 7766,
Type: "tcp",
}, {
PrivatePort: 9988,
PublicPort: 8899,
Type: "udp",
},
},
"9988/udp, 1.2.3.4:7766->6677/tcp",
},
{
[]types.Port{
{
IP: "1.2.3.4",
PrivatePort: 9988,
PublicPort: 8899,
Type: "udp",
}, {
IP: "1.2.3.4",
PrivatePort: 9988,
PublicPort: 8899,
Type: "tcp",
}, {
IP: "4.3.2.1",
PrivatePort: 2233,
PublicPort: 3322,
Type: "tcp",
},
},
"4.3.2.1:3322->2233/tcp, 1.2.3.4:8899->9988/tcp, 1.2.3.4:8899->9988/udp",
},
{
[]types.Port{
{
PrivatePort: 9988,
PublicPort: 8899,
Type: "udp",
}, {
IP: "1.2.3.4",
PrivatePort: 6677,
PublicPort: 7766,
Type: "tcp",
}, {
IP: "4.3.2.1",
PrivatePort: 2233,
PublicPort: 3322,
Type: "tcp",
},
},
"9988/udp, 4.3.2.1:3322->2233/tcp, 1.2.3.4:7766->6677/tcp",
},
{
[]types.Port{
{
PrivatePort: 80,
Type: "tcp",
}, {
PrivatePort: 1024,
Type: "tcp",
}, {
PrivatePort: 80,
Type: "udp",
}, {
PrivatePort: 1024,
Type: "udp",
}, {
IP: "1.1.1.1",
PublicPort: 80,
PrivatePort: 1024,
Type: "tcp",
}, {
IP: "1.1.1.1",
PublicPort: 80,
PrivatePort: 1024,
Type: "udp",
}, {
IP: "1.1.1.1",
PublicPort: 1024,
PrivatePort: 80,
Type: "tcp",
}, {
IP: "1.1.1.1",
PublicPort: 1024,
PrivatePort: 80,
Type: "udp",
}, {
IP: "2.1.1.1",
PublicPort: 80,
PrivatePort: 1024,
Type: "tcp",
}, {
IP: "2.1.1.1",
PublicPort: 80,
PrivatePort: 1024,
Type: "udp",
}, {
IP: "2.1.1.1",
PublicPort: 1024,
PrivatePort: 80,
Type: "tcp",
}, {
IP: "2.1.1.1",
PublicPort: 1024,
PrivatePort: 80,
Type: "udp",
},
},
"80/tcp, 80/udp, 1024/tcp, 1024/udp, 1.1.1.1:1024->80/tcp, 1.1.1.1:1024->80/udp, 2.1.1.1:1024->80/tcp, 2.1.1.1:1024->80/udp, 1.1.1.1:80->1024/tcp, 1.1.1.1:80->1024/udp, 2.1.1.1:80->1024/tcp, 2.1.1.1:80->1024/udp",
},
}
for _, port := range cases {
actual := DisplayablePorts(port.ports)
if port.expected != actual {
t.Fatalf("Expected %s, got %s.", port.expected, actual)
}
}
}
// MatchesContentType
func TestJsonContentType(t *testing.T) {
if !MatchesContentType("application/json", "application/json") {
t.Fail()
}
if !MatchesContentType("application/json; charset=utf-8", "application/json") {
t.Fail()
}
if MatchesContentType("dockerapplication/json", "application/json") {
t.Fail()
}
}
// LoadOrCreateTrustKey
func TestLoadOrCreateTrustKeyInvalidKeyFile(t *testing.T) {
tmpKeyFolderPath, err := ioutil.TempDir("", "api-trustkey-test")
if err != nil {
t.Fatal(err)
}
defer os.RemoveAll(tmpKeyFolderPath)
tmpKeyFile, err := ioutil.TempFile(tmpKeyFolderPath, "keyfile")
if err != nil {
t.Fatal(err)
}
if _, err := LoadOrCreateTrustKey(tmpKeyFile.Name()); err == nil {
t.Fatalf("expected an error, got nothing.")
}
}
func TestLoadOrCreateTrustKeyCreateKey(t *testing.T) {
tmpKeyFolderPath, err := ioutil.TempDir("", "api-trustkey-test")
if err != nil {
t.Fatal(err)
}
defer os.RemoveAll(tmpKeyFolderPath)
// Without the need to create the folder hierarchy
tmpKeyFile := filepath.Join(tmpKeyFolderPath, "keyfile")
if key, err := LoadOrCreateTrustKey(tmpKeyFile); err != nil || key == nil {
t.Fatalf("expected a new key file, got : %v and %v", err, key)
}
if _, err := os.Stat(tmpKeyFile); err != nil {
t.Fatalf("Expected to find a file %s, got %v", tmpKeyFile, err)
}
// With the need to create the folder hierarchy as tmpKeyFile is in a path
// where some folders do not exist.
tmpKeyFile = filepath.Join(tmpKeyFolderPath, "folder/hierarchy/keyfile")
if key, err := LoadOrCreateTrustKey(tmpKeyFile); err != nil || key == nil {
t.Fatalf("expected a new key file, got : %v and %v", err, key)
}
if _, err := os.Stat(tmpKeyFile); err != nil {
t.Fatalf("Expected to find a file %s, got %v", tmpKeyFile, err)
}
// With no path at all
defer os.Remove("keyfile")
if key, err := LoadOrCreateTrustKey("keyfile"); err != nil || key == nil {
t.Fatalf("expected a new key file, got : %v and %v", err, key)
}
if _, err := os.Stat("keyfile"); err != nil {
t.Fatalf("Expected to find a file keyfile, got %v", err)
}
}
func TestLoadOrCreateTrustKeyLoadValidKey(t *testing.T) {
tmpKeyFile := filepath.Join("fixtures", "keyfile")
if key, err := LoadOrCreateTrustKey(tmpKeyFile); err != nil || key == nil {
t.Fatalf("expected a key file, got : %v and %v", err, key)
}
}

View File

@@ -1,7 +0,0 @@
-----BEGIN EC PRIVATE KEY-----
keyID: AWX2:I27X:WQFX:IOMK:CNAK:O7PW:VYNB:ZLKC:CVAE:YJP2:SI4A:XXAY
MHcCAQEEILHTRWdcpKWsnORxSFyBnndJ4ROU41hMtr/GCiLVvwBQoAoGCCqGSM49
AwEHoUQDQgAElpVFbQ2V2UQKajqdE3fVxJ+/pE/YuEFOxWbOxF2be19BY209/iky
NzeFFK7SLpQ4CBJ7zDVXOHsMzrkY/GquGA==
-----END EC PRIVATE KEY-----

api/server/MAINTAINERS (new file)
View File

@@ -0,0 +1,2 @@
Victor Vieux <vieux@docker.com> (@vieux)
# Johan Euphrosine <proppy@google.com> (@proppy)

View File

@@ -1,69 +0,0 @@
package httputils
import (
"net/http"
"strings"
"github.com/Sirupsen/logrus"
)
// httpStatusError is an interface
// that errors with custom status codes
// implement to tell the api layer
// which response status to set.
type httpStatusError interface {
HTTPErrorStatusCode() int
}
// inputValidationError is an interface
// that errors generated by invalid
// inputs can implement to tell the
// api layer to set a 400 status code
// in the response.
type inputValidationError interface {
IsValidationError() bool
}
// WriteError decodes a specific docker error and sends it in the response.
func WriteError(w http.ResponseWriter, err error) {
if err == nil || w == nil {
logrus.WithFields(logrus.Fields{"error": err, "writer": w}).Error("unexpected HTTP error handling")
return
}
var statusCode int
errMsg := err.Error()
switch e := err.(type) {
case httpStatusError:
statusCode = e.HTTPErrorStatusCode()
case inputValidationError:
statusCode = http.StatusBadRequest
default:
// FIXME: this is brittle and should not be necessary, but we still need to identify if
// there are errors falling back into this logic.
// If we need to differentiate between different possible error types,
// we should create appropriate error types that implement the httpStatusError interface.
errStr := strings.ToLower(errMsg)
for keyword, status := range map[string]int{
"not found": http.StatusNotFound,
"no such": http.StatusNotFound,
"bad parameter": http.StatusBadRequest,
"conflict": http.StatusConflict,
"impossible": http.StatusNotAcceptable,
"wrong login/password": http.StatusUnauthorized,
"hasn't been activated": http.StatusForbidden,
} {
if strings.Contains(errStr, keyword) {
statusCode = status
break
}
}
}
if statusCode == 0 {
statusCode = http.StatusInternalServerError
}
http.Error(w, errMsg, statusCode)
}

View File

@@ -1,73 +0,0 @@
package httputils
import (
"fmt"
"net/http"
"path/filepath"
"strconv"
"strings"
)
// BoolValue transforms a form value in different formats into a boolean type.
func BoolValue(r *http.Request, k string) bool {
s := strings.ToLower(strings.TrimSpace(r.FormValue(k)))
return !(s == "" || s == "0" || s == "no" || s == "false" || s == "none")
}
// BoolValueOrDefault returns the default bool passed if the query param is
// missing, otherwise it's just a proxy to BoolValue above
func BoolValueOrDefault(r *http.Request, k string, d bool) bool {
if _, ok := r.Form[k]; !ok {
return d
}
return BoolValue(r, k)
}
// Int64ValueOrZero parses a form value into an int64 type.
// It returns 0 if the parsing fails.
func Int64ValueOrZero(r *http.Request, k string) int64 {
val, err := Int64ValueOrDefault(r, k, 0)
if err != nil {
return 0
}
return val
}
// Int64ValueOrDefault parses a form value into an int64 type. If there is an
// error, returns the error. If there is no value returns the default value.
func Int64ValueOrDefault(r *http.Request, field string, def int64) (int64, error) {
if r.Form.Get(field) != "" {
value, err := strconv.ParseInt(r.Form.Get(field), 10, 64)
if err != nil {
return value, err
}
return value, nil
}
return def, nil
}
// ArchiveOptions stores archive information for different operations.
type ArchiveOptions struct {
Name string
Path string
}
// ArchiveFormValues parses form values and turns them into ArchiveOptions.
// It fails if the archive name and path are not in the request.
func ArchiveFormValues(r *http.Request, vars map[string]string) (ArchiveOptions, error) {
if err := ParseForm(r); err != nil {
return ArchiveOptions{}, err
}
name := vars["name"]
path := filepath.FromSlash(r.Form.Get("path"))
switch {
case name == "":
return ArchiveOptions{}, fmt.Errorf("bad parameter: 'name' cannot be empty")
case path == "":
return ArchiveOptions{}, fmt.Errorf("bad parameter: 'path' cannot be empty")
}
return ArchiveOptions{name, path}, nil
}
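ArchiveFormValues normalizes the incoming path with filepath.FromSlash before validating it; a tiny sketch of that normalization with a made-up path:

package main

import (
	"fmt"
	"path/filepath"
)

func main() {
	// On Linux this is a no-op; on Windows the forward slashes become
	// backslashes, which is why the form value is normalized before it
	// is placed into ArchiveOptions.
	fmt.Println(filepath.FromSlash("var/lib/docker/archive.tar"))
}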

View File

@@ -1,105 +0,0 @@
package httputils
import (
"net/http"
"net/url"
"testing"
)
func TestBoolValue(t *testing.T) {
cases := map[string]bool{
"": false,
"0": false,
"no": false,
"false": false,
"none": false,
"1": true,
"yes": true,
"true": true,
"one": true,
"100": true,
}
for c, e := range cases {
v := url.Values{}
v.Set("test", c)
r, _ := http.NewRequest("POST", "", nil)
r.Form = v
a := BoolValue(r, "test")
if a != e {
t.Fatalf("Value: %s, expected: %v, actual: %v", c, e, a)
}
}
}
func TestBoolValueOrDefault(t *testing.T) {
r, _ := http.NewRequest("GET", "", nil)
if !BoolValueOrDefault(r, "queryparam", true) {
t.Fatal("Expected to get true default value, got false")
}
v := url.Values{}
v.Set("param", "")
r, _ = http.NewRequest("GET", "", nil)
r.Form = v
if BoolValueOrDefault(r, "param", true) {
t.Fatal("Expected not to get true")
}
}
func TestInt64ValueOrZero(t *testing.T) {
cases := map[string]int64{
"": 0,
"asdf": 0,
"0": 0,
"1": 1,
}
for c, e := range cases {
v := url.Values{}
v.Set("test", c)
r, _ := http.NewRequest("POST", "", nil)
r.Form = v
a := Int64ValueOrZero(r, "test")
if a != e {
t.Fatalf("Value: %s, expected: %v, actual: %v", c, e, a)
}
}
}
func TestInt64ValueOrDefault(t *testing.T) {
cases := map[string]int64{
"": -1,
"-1": -1,
"42": 42,
}
for c, e := range cases {
v := url.Values{}
v.Set("test", c)
r, _ := http.NewRequest("POST", "", nil)
r.Form = v
a, err := Int64ValueOrDefault(r, "test", -1)
if a != e {
t.Fatalf("Value: %s, expected: %v, actual: %v", c, e, a)
}
if err != nil {
t.Fatalf("Error should be nil, but received: %s", err)
}
}
}
func TestInt64ValueOrDefaultWithError(t *testing.T) {
v := url.Values{}
v.Set("test", "invalid")
r, _ := http.NewRequest("POST", "", nil)
r.Form = v
_, err := Int64ValueOrDefault(r, "test", -1)
if err == nil {
t.Fatalf("Expected an error.")
}
}

View File

@@ -1,107 +0,0 @@
package httputils
import (
"encoding/json"
"fmt"
"io"
"net/http"
"strings"
"golang.org/x/net/context"
"github.com/docker/docker/api"
"github.com/docker/docker/pkg/version"
)
// APIVersionKey is the client's requested API version.
const APIVersionKey = "api-version"
// UAStringKey is used as key type for user-agent string in net/context struct
const UAStringKey = "upstream-user-agent"
// APIFunc is an adapter to allow the use of ordinary functions as Docker API endpoints.
// Any function that has the appropriate signature can be registered as an API endpoint (e.g. getVersion).
type APIFunc func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error
// HijackConnection interrupts the http response writer to get the
// underlying connection and operate with it.
func HijackConnection(w http.ResponseWriter) (io.ReadCloser, io.Writer, error) {
conn, _, err := w.(http.Hijacker).Hijack()
if err != nil {
return nil, nil, err
}
// Flush the options to make sure the client sets the raw mode
conn.Write([]byte{})
return conn, conn, nil
}
// CloseStreams ensures that a list of http streams is properly closed.
func CloseStreams(streams ...interface{}) {
for _, stream := range streams {
if tcpc, ok := stream.(interface {
CloseWrite() error
}); ok {
tcpc.CloseWrite()
} else if closer, ok := stream.(io.Closer); ok {
closer.Close()
}
}
}
// CheckForJSON makes sure that the request's Content-Type is application/json.
func CheckForJSON(r *http.Request) error {
ct := r.Header.Get("Content-Type")
// No Content-Type header is ok as long as there's no Body
if ct == "" {
if r.Body == nil || r.ContentLength == 0 {
return nil
}
}
// Otherwise it better be json
if api.MatchesContentType(ct, "application/json") {
return nil
}
return fmt.Errorf("Content-Type specified (%s) must be 'application/json'", ct)
}
// ParseForm ensures the request form is parsed even with invalid content types.
// If we don't do this, a POST request without a Content-Type (even with an empty body) will fail.
func ParseForm(r *http.Request) error {
if r == nil {
return nil
}
if err := r.ParseForm(); err != nil && !strings.HasPrefix(err.Error(), "mime:") {
return err
}
return nil
}
// ParseMultipartForm ensures the request form is parsed, even with invalid content types.
func ParseMultipartForm(r *http.Request) error {
if err := r.ParseMultipartForm(4096); err != nil && !strings.HasPrefix(err.Error(), "mime:") {
return err
}
return nil
}
// WriteJSON writes the value v to the http response stream as json with standard json encoding.
func WriteJSON(w http.ResponseWriter, code int, v interface{}) error {
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(code)
return json.NewEncoder(w).Encode(v)
}
// VersionFromContext returns an API version from the context using APIVersionKey.
// It panics if the context value does not have version.Version type.
func VersionFromContext(ctx context.Context) (ver version.Version) {
if ctx == nil {
return
}
val := ctx.Value(APIVersionKey)
if val == nil {
return
}
return val.(version.Version)
}
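A minimal sketch of the store-then-read round trip that VersionFromContext expects, using the standard library context package and a plain string value rather than the version.Version type; the key and value are assumptions made for the example:

package main

import (
	"context"
	"fmt"
)

// ctxKey avoids using a bare string as a context key in this sketch.
type ctxKey string

// apiVersionKey mirrors the role of APIVersionKey above, for this sketch only.
const apiVersionKey ctxKey = "api-version"

func main() {
	ctx := context.WithValue(context.Background(), apiVersionKey, "1.23")
	if v, ok := ctx.Value(apiVersionKey).(string); ok {
		fmt.Println("requested API version:", v) // requested API version: 1.23
	}
}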

View File

@@ -1,41 +0,0 @@
package server
import (
"github.com/Sirupsen/logrus"
"github.com/docker/docker/api"
"github.com/docker/docker/api/server/httputils"
"github.com/docker/docker/api/server/middleware"
"github.com/docker/docker/dockerversion"
"github.com/docker/docker/pkg/authorization"
)
// handleWithGlobalMiddlewares wraps the handler function for a request with
// the server's global middlewares. The order of the middlewares is backwards,
// meaning that the first in the list will be evaluated last.
func (s *Server) handleWithGlobalMiddlewares(handler httputils.APIFunc) httputils.APIFunc {
next := handler
handleVersion := middleware.NewVersionMiddleware(dockerversion.Version, api.DefaultVersion, api.MinVersion)
next = handleVersion(next)
if s.cfg.EnableCors {
handleCORS := middleware.NewCORSMiddleware(s.cfg.CorsHeaders)
next = handleCORS(next)
}
handleUserAgent := middleware.NewUserAgentMiddleware(s.cfg.Version)
next = handleUserAgent(next)
// Only want this on debug level
if s.cfg.Logging && logrus.GetLevel() == logrus.DebugLevel {
next = middleware.DebugRequestMiddleware(next)
}
if len(s.cfg.AuthorizationPluginNames) > 0 {
s.authZPlugins = authorization.NewPlugins(s.cfg.AuthorizationPluginNames)
handleAuthorization := middleware.NewAuthorizationMiddleware(s.authZPlugins)
next = handleAuthorization(next)
}
return next
}
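"The order of the middlewares is backwards" is easiest to see with a toy chain of wrappers; the handler and labels below are made up for illustration:

package main

import "fmt"

// handlerFunc is a stand-in for httputils.APIFunc in this sketch.
type handlerFunc func()

// wrap returns a handler that announces its label before delegating.
func wrap(label string, next handlerFunc) handlerFunc {
	return func() {
		fmt.Println("enter", label)
		next()
	}
}

func main() {
	var next handlerFunc = func() { fmt.Println("handler") }
	// Wrapped in the same order as handleWithGlobalMiddlewares:
	// version first, then CORS, then user-agent.
	next = wrap("version", next)
	next = wrap("cors", next)
	next = wrap("user-agent", next)
	// The middleware wrapped last runs first, so "version" (first in the
	// list) is evaluated last, right before the handler.
	next()
	// Output:
	// enter user-agent
	// enter cors
	// enter version
	// handler
}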

View File

@@ -1,42 +0,0 @@
package middleware
import (
"net/http"
"github.com/Sirupsen/logrus"
"github.com/docker/docker/api/server/httputils"
"github.com/docker/docker/pkg/authorization"
"golang.org/x/net/context"
)
// NewAuthorizationMiddleware creates a new Authorization middleware.
func NewAuthorizationMiddleware(plugins []authorization.Plugin) Middleware {
return func(handler httputils.APIFunc) httputils.APIFunc {
return func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
// FIXME: fill when authN gets in
// User and UserAuthNMethod are taken from AuthN plugins
// Currently tracked in https://github.com/docker/docker/pull/13994
user := ""
userAuthNMethod := ""
authCtx := authorization.NewCtx(plugins, user, userAuthNMethod, r.Method, r.RequestURI)
if err := authCtx.AuthZRequest(w, r); err != nil {
logrus.Errorf("AuthZRequest for %s %s returned error: %s", r.Method, r.RequestURI, err)
return err
}
rw := authorization.NewResponseModifier(w)
if err := handler(ctx, rw, r, vars); err != nil {
logrus.Errorf("Handler for %s %s returned error: %s", r.Method, r.RequestURI, err)
return err
}
if err := authCtx.AuthZResponse(rw, r); err != nil {
logrus.Errorf("AuthZResponse for %s %s returned error: %s", r.Method, r.RequestURI, err)
return err
}
return nil
}
}
}

View File

@@ -1,33 +0,0 @@
package middleware
import (
"net/http"
"github.com/Sirupsen/logrus"
"github.com/docker/docker/api/server/httputils"
"golang.org/x/net/context"
)
// NewCORSMiddleware creates a new CORS middleware.
func NewCORSMiddleware(defaultHeaders string) Middleware {
return func(handler httputils.APIFunc) httputils.APIFunc {
return func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
// If "api-cors-header" is not given, but "api-enable-cors" is true, we set cors to "*"
// otherwise, all header values will be passed to the HTTP handler
corsHeaders := defaultHeaders
if corsHeaders == "" {
corsHeaders = "*"
}
writeCorsHeaders(w, r, corsHeaders)
return handler(ctx, w, r, vars)
}
}
}
func writeCorsHeaders(w http.ResponseWriter, r *http.Request, corsHeaders string) {
logrus.Debugf("CORS header is enabled and set to: %s", corsHeaders)
w.Header().Add("Access-Control-Allow-Origin", corsHeaders)
w.Header().Add("Access-Control-Allow-Headers", "Origin, X-Requested-With, Content-Type, Accept, X-Registry-Auth")
w.Header().Add("Access-Control-Allow-Methods", "HEAD, GET, POST, DELETE, PUT, OPTIONS")
}

View File

@@ -1,56 +0,0 @@
package middleware
import (
"bufio"
"encoding/json"
"io"
"net/http"
"github.com/Sirupsen/logrus"
"github.com/docker/docker/api/server/httputils"
"github.com/docker/docker/pkg/ioutils"
"golang.org/x/net/context"
)
// DebugRequestMiddleware dumps the request to logger
func DebugRequestMiddleware(handler httputils.APIFunc) httputils.APIFunc {
return func(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
logrus.Debugf("Calling %s %s", r.Method, r.RequestURI)
if r.Method != "POST" {
return handler(ctx, w, r, vars)
}
if err := httputils.CheckForJSON(r); err != nil {
return handler(ctx, w, r, vars)
}
maxBodySize := 4096 // 4KB
if r.ContentLength > int64(maxBodySize) {
return handler(ctx, w, r, vars)
}
body := r.Body
bufReader := bufio.NewReaderSize(body, maxBodySize)
r.Body = ioutils.NewReadCloserWrapper(bufReader, func() error { return body.Close() })
b, err := bufReader.Peek(maxBodySize)
if err != io.EOF {
// either there was an error reading, or the buffer is full (in which case the request is too large)
return handler(ctx, w, r, vars)
}
var postForm map[string]interface{}
if err := json.Unmarshal(b, &postForm); err == nil {
if _, exists := postForm["password"]; exists {
postForm["password"] = "*****"
}
formStr, errMarshal := json.Marshal(postForm)
if errMarshal == nil {
logrus.Debugf("form data: %s", string(formStr))
} else {
logrus.Debugf("form data: %q", postForm)
}
}
return handler(ctx, w, r, vars)
}
}
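The password-masking step above reduces to unmarshalling the body into a map and overwriting one key before logging; a standalone sketch with a made-up payload:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	body := []byte(`{"username":"someuser","password":"hunter2"}`)
	var postForm map[string]interface{}
	if err := json.Unmarshal(body, &postForm); err == nil {
		if _, exists := postForm["password"]; exists {
			postForm["password"] = "*****"
		}
	}
	masked, _ := json.Marshal(postForm)
	// Map keys are marshalled in sorted order:
	// {"password":"*****","username":"someuser"}
	fmt.Println(string(masked))
}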

View File

@@ -1,7 +0,0 @@
package middleware
import "github.com/docker/docker/api/server/httputils"
// Middleware is an adapter to allow the use of ordinary functions as Docker API filters.
// Any function that has the appropriate signature can be registered as a middleware.
type Middleware func(handler httputils.APIFunc) httputils.APIFunc

Some files were not shown because too many files have changed in this diff.