Compare commits

...

134 Commits

Author SHA1 Message Date
Tibor Vass
76d6bc9a9f Bump version to v1.9.0
Signed-off-by: Tibor Vass <tibor@docker.com>
2015-11-03 12:21:26 -05:00
Mary Anthony
b9c3ba79a6 Moving project to docker/opensource
Signed-off-by: Mary Anthony <mary@docker.com>
(cherry picked from commit e400125b66)
2015-11-03 12:14:28 -05:00
Madhu Venugopal
17fb8c3c5a Updating networking docs with technical information
- the /etc/hosts read caveat due to dynamic update
- information about docker_gwbridge
- Carries and closes #17654
- Updating with last change by Madhu
- Updating with the IPAM api 1.22

Signed-off-by: Mary Anthony <mary@docker.com>
(cherry picked from commit 39dfc536d4)

Conflicts:
	docs/reference/api/docker_remote_api_v1.22.md
2015-11-03 12:01:52 -05:00
Mary Anthony
0849a71036 Nigel acknowledgement
Signed-off-by: Mary Anthony <mary@docker.com>
(cherry picked from commit 7b978cc378)
2015-11-03 12:01:52 -05:00
Mary Anthony
b53c39f606 Fixing ZooKeeper and some other nits Nathan found
Signed-off-by: Mary Anthony <mary@docker.com>
(cherry picked from commit 0f1083c8da)
2015-11-03 12:01:52 -05:00
Sebastiaan van Stijn
5d72b7a582 docs: update remote API responses and minor fixes
Add back the "old" NetworkSettings fields that were removed;
they are restored to maintain backward compatibility, in
https://github.com/docker/docker/pull/17538

Update network endpoint responses to match the updated response
introduced in
https://github.com/docker/docker/pull/17536

Added changes to v1.22 that were applied to the v1.21 / v1.20 docs
after the API bump(s):

https://github.com/docker/docker/pull/17085
https://github.com/docker/docker/pull/17127
https://github.com/docker/docker/pull/13707

Also fixed some mixed tab/space indentation
and Markdown formatting issues (which caused code blocks
to be rendered incorrectly)

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 286fe69d53)

Conflicts:
	docs/reference/api/docker_remote_api_v1.22.md
2015-11-03 12:01:51 -05:00
Mary Anthony
c50bd847a7 Add special memory management file
updating after assignment for Nigel
Adding in some notes from Nigel work
Updating with the storage driver content Nigel added
Updating with Nigel's polishing tech
Adding in Nigel graphics
First pass of aufs material
Capturing Nigel's latest
Comments back to Nigel on devicemapper
Incorporating Nigel's comments v3
Converting images for dm
Entering comments into aufs page
Adding the btrfs storage driver
Moving to userguide
Adding in two new driver articles from Nigel
Optimized images
Updating with comments

Signed-off-by: Mary Anthony <mary@docker.com>
(cherry picked from commit 950fbf99b1)
2015-11-03 12:01:51 -05:00
Mary Anthony
fa2a6b3522 First pass at consolidating
Removing old networking.md
Updating dockernetworks.md with images
Adding information on network plugins
Adding blurb about links to docker networking
Updating the working documentation
Adding Overlay Getting Started
Downplaying links by removing refs/examples, adding refs/examples for network.
Updating getting started to reflect networks not links
Pulling out old network material
Updating per discussion with Madhu to add Default docs section
Updating with bridge default
Fix bad merge
Updating with new cluster-advertise behavior
Update working and NetworkSettings examples
Correcting example for default bridge discovery behavior
Entering comments
Fixing broken Markdown Syntax
Updating with comments
Updating all the links

Signed-off-by: Mary Anthony <mary@docker.com>
(cherry picked from commit 9ef855f9e5)
2015-11-03 12:01:51 -05:00
Tibor Vass
f2d0d2e516 Do not stop daemon from booting if io.EOF on loading image
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 27c2368599)
2015-11-02 23:13:29 -05:00
Antonio Murdaca
7be73fe0cb graph: enhance err message on failed image restore
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
(cherry picked from commit f5fc832b6e)
2015-11-02 23:13:29 -05:00
Jana Radhakrishnan
75847fb840 Vendoring libnetwork
Vendoring libnetwork @ 05a5a1510f85977f374a9b9804a116391bab5089

Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
(cherry picked from commit 10e1b9f02e)
2015-11-02 23:13:29 -05:00
Mary Anthony
dfad0c3da7 Fixing broken links
Fixing the weight

Signed-off-by: Mary Anthony <mary@docker.com>
(cherry picked from commit 5ce093e945)

Conflicts:
	docs/reference/api/docker_remote_api.md
	docs/reference/api/docker_remote_api_v1.21.md
	docs/reference/api/docker_remote_api_v1.22.md
2015-11-02 23:13:29 -05:00
Tibor Vass
263604f570 fix coding style and have make validate pass
Signed-off-by: Tibor Vass <tibor@docker.com>
2015-11-02 23:13:29 -05:00
Alessandro Boch
6cbcdc7a6d IT for daemon restarts when container connected to multiple networks
Signed-off-by: Alessandro Boch <aboch@docker.com>
(cherry picked from commit e16e794805)
2015-10-30 21:18:08 -04:00
Madhu Venugopal
0d762ef9d2 fixing ungraceful daemon restart case where nw connect is not persisted
For the graceful restart case this was done when the container was
brought down, but for ungraceful cases the network connect was never
persisted

Signed-off-by: Madhu Venugopal <madhu@docker.com>
(cherry picked from commit 401632c756)
2015-10-30 21:18:08 -04:00
Madhu Venugopal
9f7231816f Vendoring in libnetwork to fix an ungraceful restart case
Also picked up a minor typo fix

Signed-off-by: Madhu Venugopal <madhu@docker.com>
(cherry picked from commit 2361edbcea)
2015-10-30 21:18:08 -04:00
Donald Huang
0430024bad fix pre-1.21 docker stats
This fixes a bug introduced in #15786:

* if a pre-v1.20 client requested docker stats, the daemon
would return both an API-compatible JSON blob *and* an API-incompatible JSON
blob: see https://gist.github.com/donhcd/338a5b3681cd6a071629

Signed-off-by: Donald Huang <don.hcd@gmail.com>
(cherry picked from commit d2c04f844b)

The commit title wrongfully mentioned API v1.22, when it meant to mention v1.21.
2015-10-30 21:18:08 -04:00
Alessandro Boch
6fa9da0363 Modify IPAMConfig structure json tags
- So that it complies with docker convention for inspect

Signed-off-by: Alessandro Boch <aboch@docker.com>
(cherry picked from commit d795bc7d53)
2015-10-30 21:18:08 -04:00
David Calavera
2ec7433aa4 Fix network inspect for default networks.
- Keep old fields in NetworkSetting to respect the deprecation policy.

Signed-off-by: David Calavera <david.calavera@gmail.com>
(cherry picked from commit f301c5765a)
2015-10-30 21:18:07 -04:00
Alessandro Boch
51e5073111 Modify Network structure json tags
- So that they comply with the docker inspect convention,
  which allows camel case for json field names

Signed-off-by: Alessandro Boch <aboch@docker.com>
(cherry picked from commit b2d0b75018)
2015-10-30 21:18:07 -04:00
Michael Crosby
c4f0a2aa00 Don't set mem soft limit if not specified
You cannot do this for the individual cgroups of all containers. Only
set the reservation if the user requested it. The error you receive is
an EINVAL when you try to set a limit as large as the one we were using
for the memory limit.

Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
(cherry picked from commit ecb87ed0a5)
2015-10-30 21:18:07 -04:00
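The guard described in the commit above can be sketched in Go. The names below (`applyMemoryReservation` and the setter callback) are illustrative, not the actual daemon code:

```go
package main

import "fmt"

// applyMemoryReservation writes the memory soft limit to the cgroup only
// when the user actually requested one. Unconditionally writing a large
// default value into every container's cgroup fails with EINVAL.
func applyMemoryReservation(softLimit int64, writeCgroup func(int64) error) error {
	if softLimit <= 0 {
		return nil // nothing requested: leave the cgroup default alone
	}
	return writeCgroup(softLimit)
}

func main() {
	written := int64(0)
	setter := func(v int64) error { written = v; return nil }

	applyMemoryReservation(0, setter) // skipped, cgroup untouched
	fmt.Println(written)              // 0
	applyMemoryReservation(64<<20, setter) // user asked for a 64 MiB reservation
	fmt.Println(written)              // 67108864
}
```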
evalle
581f297644 Fix ubuntu installation instructions page
Signed-off-by: evalle <shmarnev@gmail.com>
(cherry picked from commit eb0a208f4b)
2015-10-30 21:18:07 -04:00
Madhu Venugopal
cff248b35f Fixes a case of dangling endpoint during ungraceful daemon restart
When a container restarts after an ungraceful daemon restart, first
clean up any unclean sandbox before trying to allocate network resources.

Signed-off-by: Madhu Venugopal <madhu@docker.com>
(cherry picked from commit 0c07096b7d)
2015-10-30 21:18:07 -04:00
Madhu Venugopal
12da51944a Vendoring libnetwork to solve an ungraceful restart condition
Signed-off-by: Madhu Venugopal <madhu@docker.com>
(cherry picked from commit ebf76171f6)
2015-10-30 21:18:07 -04:00
David Calavera
87417387c6 Let the api choose the default network driver.
That way swarm can understand the user's intention.

Signed-off-by: David Calavera <david.calavera@gmail.com>
(cherry picked from commit 34668ad68b)
2015-10-30 21:18:07 -04:00
Alessandro Boch
b693a04658 Execute buildPortMapInfo after Endpoint.Join()
- As the retrieved info may not be available at
  Endpoint creation time for certain network drivers
- Also retrieve the MAC address from Endpoint.Info().Iface()

Signed-off-by: Alessandro Boch <aboch@docker.com>
(cherry picked from commit e03daebb48)
2015-10-30 21:18:06 -04:00
Alessandro Boch
e50176876c Vendoring libnetwork 20351a84241aa1278493d74492db947336989be6
Signed-off-by: Alessandro Boch <aboch@docker.com>
(cherry picked from commit 90744fe943)
2015-10-30 21:18:06 -04:00
Derek McGowan
22c2f04847 Fix rmi -f removing multiple tags
When an image has multiple tags and rmi is called with force on a tag, only the single tag should be removed.
The current behavior is broken and removes all tags and the image.

Signed-off-by: Derek McGowan <derek@mcgstyle.net> (github: dmcgowan)
(cherry picked from commit 48e7f7963e)
2015-10-30 21:18:06 -04:00
Eric Rosenberg
75f6674b11 Update kill.md
Added a note to show users that signals will not propagate to the container if the preferred exec form isn't used.

Signed-off-by: Eric Rosenberg <ehaydenr@gmail.com>
(cherry picked from commit c1a5ee53c1)
2015-10-30 20:27:45 -04:00
Tibor Vass
ea1862d346 docker-py: upgrade and fix test script
Signed-off-by: Tibor Vass <tibor@docker.com>
(cherry picked from commit 8db141049f)
2015-10-30 20:27:44 -04:00
Lei Jitang
9335cf9d29 Fix docker inspect displaying an odd gateway value for none network mode
Signed-off-by: Lei Jitang <leijitang@huawei.com>
(cherry picked from commit 7fa601adc7)
2015-10-30 20:27:44 -04:00
Madhu Venugopal
22cf3df4c0 Prevent user from deleting pre-defined networks
Signed-off-by: Madhu Venugopal <madhu@docker.com>
(cherry picked from commit ead62b5952)

Conflicts:
	integration-cli/docker_api_network_test.go
2015-10-30 20:27:44 -04:00
David Calavera
c57d1e026d Update inspect api examples with new network settings.
Signed-off-by: David Calavera <david.calavera@gmail.com>

Conflicts:
	docs/reference/api/docker_remote_api_v1.22.md
2015-10-27 20:11:08 -04:00
David Calavera
ce6e6b18ad Extract network settings types for inspect.
Keeping backwards compatibility.

Signed-off-by: David Calavera <david.calavera@gmail.com>
Signed-off-by: Tibor Vass <tibor@docker.com>

Conflicts:
	integration-cli/docker_cli_links_test.go
2015-10-27 19:57:03 -04:00
Lei Jitang
a09d521221 Fix docker inspect container only reports last assigned information
Signed-off-by: Lei Jitang <leijitang@huawei.com>

Conflicts:
	integration-cli/docker_api_network_test.go
	integration-cli/docker_utils.go
2015-10-27 19:57:03 -04:00
Hugo Marisco
3648ffe722 update gpg add key command, without sudo it fails
Signed-off-by: Hugo Marisco <0x6875676f@gmail.com>
2015-10-27 19:57:03 -04:00
Madhu Venugopal
37e04b864e Enhancing --cluster-advertise to support <interface-name>
--cluster-advertise daemon option is enhanced to support <interface-name>
in addition to <ip-address>, in order to make it automation-friendly when
using docker-machine.

Signed-off-by: Madhu Venugopal <madhu@docker.com>

Conflicts:
	integration-cli/docker_cli_info_test.go
2015-10-27 19:57:03 -04:00
Alessandro Boch
160f4e4a3b Do not update etc/hosts for every container
- Only user-named containers will be published into
  other containers' etc/hosts files.
- Also block linking to containers which are not
  connected to the default network

Signed-off-by: Alessandro Boch <aboch@docker.com>
2015-10-27 19:57:02 -04:00
Santhosh Manohar
a76b5fd324 Add libnetwork call on daemon rename
Signed-off-by: Santhosh Manohar <santhosh@docker.com>
2015-10-27 19:57:02 -04:00
Madhu Venugopal
04cc66b08e integration-cli test for active container rename and reuse
Signed-off-by: Madhu Venugopal <madhu@docker.com>
Signed-off-by: Santhosh Manohar <santhosh@docker.com>
2015-10-27 19:57:02 -04:00
Santhosh Manohar
93f966cdc0 Vendor in libnetwork changes to support container rename
Signed-off-by: Santhosh Manohar <santhosh@docker.com>
2015-10-27 19:57:02 -04:00
Mary Anthony
6b779ead93 Updating network commands: adding man pages
Adding Related information blocks
Final first draft pass: ready for review
Review comments
Entering comments from the gang
Updating connect to include paused

Signed-off-by: Mary Anthony <mary@docker.com>
2015-10-27 19:57:02 -04:00
Kun Zhang
7ac3502444 command missing 'daemon'
Signed-off-by: Kun Zhang <zkazure@gmail.com>

Conflicts:
	docs/reference/logging/splunk.md
2015-10-27 19:57:02 -04:00
Tonis Tiigi
963b4d0b22 Fix duplicate container names conflict
While creating multiple containers, the second
container could remove the first one from the graph
without producing an error.

Fixes #15995

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2015-10-27 19:57:02 -04:00
Madhu Venugopal
2801a373a8 Vendoring in libnetwork with fixes for issues identified in RC1 & RC2
Signed-off-by: Madhu Venugopal <madhu@docker.com>
2015-10-27 19:57:02 -04:00
Madhu Venugopal
c2c88fec85 Simple log to indicate the chosen IP Address for the default bridge
Signed-off-by: Madhu Venugopal <madhu@docker.com>
2015-10-27 19:57:02 -04:00
Steve Durrheimer
7825b5a5cd Fix repeatable options in zsh completion
Signed-off-by: Steve Durrheimer <s.durrheimer@gmail.com>
2015-10-27 19:57:02 -04:00
Alessandro Boch
3040ad3581 Disable built-in SD on docker0 network
Signed-off-by: Alessandro Boch <aboch@docker.com>
2015-10-27 19:57:01 -04:00
Vincent Demeester
84eaf42839 Update docker network inspect help syntax
Signed-off-by: Vincent Demeester <vincent@sbr.pm>
2015-10-27 19:57:01 -04:00
Harald Albers
581df6aaa1 bash completion for log driver options env and labels
Signed-off-by: Harald Albers <github@albersweb.de>
2015-10-27 19:57:01 -04:00
Aaron Lehmann
b8410cd2ae Add a buffered Writer between layer compression and layer upload
Without this buffering, the compressor was outputting 64 bytes at a
time to the HTTP stream, which was resulting in absurdly small chunk
sizes and a lot of extra overhead. The buffering restores the chunk size
to 32768 bytes, which matches the behavior with 1.8.2.

Times pushing to a local registry:

1.8.2: 0m18.934s
master: 0m20.564s
master+this commit: 0m17.593s

Fixes: #17038

Signed-off-by: Aaron Lehmann <aaron.lehmann@docker.com>
2015-10-27 19:57:01 -04:00
Sven Dowideit
cc1782b318 Two more links to fix
Signed-off-by: Sven Dowideit <SvenDowideit@home.org.au>
2015-10-27 19:57:01 -04:00
Steve Durrheimer
f8c24afcb4 Zsh completion for 'docker network inspect' multiple networks
Signed-off-by: Steve Durrheimer <s.durrheimer@gmail.com>
2015-10-27 19:57:01 -04:00
Harald Albers
8b5c7d6a60 bash completion for docker network inspect supports multiple networks
Signed-off-by: Harald Albers <github@albersweb.de>
2015-10-27 19:57:01 -04:00
David Calavera
05e3f2f62f Fail when there is an error executing an inspect template.
- Do not execute the template directly into the cli output; go is not atomic
in this operation and can send bytes before the execution fails.
- Fail after evaluating a raw interface if the typed execution also
failed, assuming there is a template parsing error.

Signed-off-by: David Calavera <david.calavera@gmail.com>
2015-10-22 19:44:01 -04:00
Alessandro Boch
957fcb10bf Add integ-test for fixed--cidr == bridge network
Signed-off-by: Alessandro Boch <aboch@docker.com>
2015-10-22 17:36:30 -04:00
Madhu Venugopal
9c5885135f Vendoring in libkv to be aligned with libnetwork
Signed-off-by: Madhu Venugopal <madhu@docker.com>
2015-10-22 17:36:30 -04:00
Madhu Venugopal
03d7cf1aef Vendoring in Libnetwork
This carries fixes for
- Internal racy /etc/hosts updates within the container during SD
- Re-enable SD service record watch after cluster-store restarts
- Fix to allow remote IPAM driver to return no IP if the user prefers
- Fix to allow --fixed-cidr and --bip to be in same range

Signed-off-by: Madhu Venugopal <madhu@docker.com>
2015-10-22 17:36:30 -04:00
Vincent Demeester
7ff56c563d Use RepoTags & RepoDigest in inspect
To be coherent with /images/json (images command)

Signed-off-by: Vincent Demeester <vincent@sbr.pm>
2015-10-22 17:22:16 -04:00
Tonis Tiigi
e6ff99629b Show trust variable deprecation warning only if used
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2015-10-22 16:24:29 -04:00
Alessandro Boch
50a8ce2b28 Turn off service discovery when icc==false
- Turn off built-in service discovery on docker0 bridge
  when icc is false

Signed-off-by: Alessandro Boch <aboch@docker.com>
2015-10-22 16:16:53 -04:00
Alessandro Boch
dc5a0118a1 Vendoring in libnetwork for the anonymous endpoint
- commit f3c8ebf46b890d4612c5d98e792280d13abdb761

Signed-off-by: Alessandro Boch <aboch@docker.com>
2015-10-22 16:16:53 -04:00
Sven Dowideit
f2671e07b3 Fix some errant links
Signed-off-by: Sven Dowideit <SvenDowideit@home.org.au>
2015-10-22 16:14:13 -04:00
Harald Albers
033ce66707 bash completion for docker cp supports copying both ways
Signed-off-by: Harald Albers <github@albersweb.de>
2015-10-22 16:14:13 -04:00
Aaron Lehmann
6a01c901d3 Fix layer compression regression
PR #15493 removed compression of layers when pushing them to a V2
registry. This made layer uploads larger than they should be.

This commit restores the compression. It uses an io.Pipe to turn the
gzip compressor output Writer into a Reader, so the ReadFrom method can
be used on the BlobWriter (which is very important for avoiding many
PATCH requests per layer).

Fixes #17209
Fixes #17038

Signed-off-by: Aaron Lehmann <aaron.lehmann@docker.com>
2015-10-22 16:14:13 -04:00
David Calavera
c75fd9b5da Do not fail when a container is being removed and its removal is requested again.
Abort the process and return a success response, letting the original
request finish its job.

Signed-off-by: David Calavera <david.calavera@gmail.com>
2015-10-22 16:14:13 -04:00
Tibor Vass
b83bbbe864 release: fix bash bug in script
Signed-off-by: Tibor Vass <tibor@docker.com>
2015-10-22 16:14:12 -04:00
Antonio Murdaca
f110597a78 graph: ensure _tmp dir is always removed
Also remove unused func `newTempFile` and prevent a possible deadlock
between pull_v2 `attemptIDReuse` and graph `register`

Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2015-10-22 16:14:12 -04:00
Antonio Murdaca
adfbdc361c daemon: faster image cache miss detection
Lookup the graph parent reference to detect a builder cache miss before
looping the whole graph image index to build a parent-children tree.

Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2015-10-22 16:14:12 -04:00
Antonio Murdaca
587a0ffa2d graph: add parent img refcount for faster rmi
also fix a typo in pkg/truncindex package comment

Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2015-10-22 16:14:12 -04:00
Antonio Murdaca
7176e68625 integration-cli: docker_cli_build_test: check error before defer
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2015-10-22 16:14:12 -04:00
Daniel Hiltgen
98af0d7f3c Wire up libnetwork with TLS discovery options
This change exposes the TLS configuration settings to libnetwork so it can
communicate with a key/value store that has been set up with mutual TLS.

TLS options were introduced with https://github.com/docker/docker/pull/16644
Libnetwork support was introduced with https://github.com/docker/libnetwork/pull/602

Signed-off-by: Daniel Hiltgen <daniel.hiltgen@docker.com>
2015-10-22 16:14:12 -04:00
David Calavera
fd2633a451 Move volume name validation to the local driver.
Delegate validation tasks to the volume drivers. It's up to them
to decide whether a name is valid or not.
Restrict volume names for the local driver to prevent creating
mount points outside docker's volumes directory.

Signed-off-by: David Calavera <david.calavera@gmail.com>
2015-10-22 16:14:11 -04:00
xlgao-zju
b576f54b73 validate the name of named volume
Signed-off-by: xlgao-zju <xlgao@zju.edu.cn>
2015-10-22 15:07:07 -04:00
Burke Libbey
d2e25ed81c Better error when --host=ipc but no /dev/mqueue
Signed-off-by: Burke Libbey <burke.libbey@shopify.com>
2015-10-22 15:07:07 -04:00
Burke Libbey
44c74f2109 Revert "Fix --ipc=host dependency on /dev/mqueue existing"
This reverts commit f624d6187a.

Signed-off-by: Burke Libbey <burke.libbey@shopify.com>
2015-10-22 15:07:07 -04:00
Burke Libbey
434221d0eb Fix --ipc=host dependency on /dev/mqueue existing
Since #15862, containers fail to start when started with --ipc=host if
/dev/mqueue is not present. This change causes docker to create
container-local mounts for --ipc=host containers as well as in the
default case.

Signed-off-by: Burke Libbey <burke.libbey@shopify.com>
2015-10-22 15:07:07 -04:00
Vincent Demeester
620817efba Add support for multiple network in inspect
To be consistent with other inspect commands (on containers and images),
add the possibility to pass multiple networks to the network inspect
command.

`docker network inspect host bridge none` is possible now.

Signed-off-by: Vincent Demeester <vincent@sbr.pm>
2015-10-22 15:07:07 -04:00
Lei Jitang
f4dc974b2a Make default tls host work
Signed-off-by: Lei Jitang <leijitang@huawei.com>
2015-10-22 15:07:07 -04:00
Tonis Tiigi
42abfab3a7 Don’t overwrite layer checksum on push
After v1.8.3 the layer checksum is used for image ID
validation. Rewriting the checksums on push would
mean that subsequent pulls get different image IDs,
and pulls may fail if it is detected that the same
manifest digest now points to a new image ID.

Fixes #17178

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2015-10-22 15:07:06 -04:00
Tobias Gesellchen
b63c2ab3c1 rename POST /volumes to POST /volumes/create to be consistent with the other POST /.../create endpoints
see #17132

Signed-off-by: Tobias Gesellchen <tobias@gesellix.de>
2015-10-22 15:07:06 -04:00
Madhu Venugopal
595fd5226b Integration test for default bridge init with invalid cluster config
Signed-off-by: Madhu Venugopal <madhu@docker.com>
2015-10-22 15:07:06 -04:00
Madhu Venugopal
46b742dbb3 Vendoring in libnetwork to fix daemon bootup instabilities
Signed-off-by: Madhu Venugopal <madhu@docker.com>
2015-10-22 15:07:06 -04:00
David Lawrence
cd8357e5a6 Some bugfixes on getting TUF files; this is backed by a lot of new unit tests in gotuf
Signed-off-by: David Lawrence <david.lawrence@docker.com> (github: endophage)
2015-10-22 15:07:06 -04:00
Alessandro Boch
d6a407f9b2 Do not mask ipam driver if no ip config is passed
Signed-off-by: Alessandro Boch <aboch@docker.com>
2015-10-22 15:07:06 -04:00
Antonio Murdaca
392fe9e507 Return empty Config fields, now omitempty, for API < 1.21
Signed-off-by: Antonio Murdaca <runcom@redhat.com>
2015-10-22 15:07:05 -04:00
Harald Albers
2b4c34e756 bash completion for new docker network create options
Signed-off-by: Harald Albers <github@albersweb.de>
2015-10-22 15:07:05 -04:00
Steve Durrheimer
130f92a67f Add zsh completion for 'docker network create -o --opt'
Signed-off-by: Steve Durrheimer <s.durrheimer@gmail.com>
2015-10-22 15:07:05 -04:00
Jessica Frazelle
d676ebc156 fix copy of multiple rpms
Signed-off-by: Jessica Frazelle <acidburn@docker.com>
2015-10-22 15:07:05 -04:00
Vivek Goyal
7e6a483578 devmapper: Drop devices lock before returning from function
cleanupDeleted() takes devices.Lock() but does not drop it if there are
no deleted devices. Hence docker deadlocks when the deferred device
deletion feature is in use (--storage-opt dm.use_deferred_deletion=true).

Fix it. Drop the lock before returning.

Also added a unit test case to make sure in future this can be easily
detected if somebody changes the function.

Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
2015-10-22 15:07:05 -04:00
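The bug class above (an early return that skips the unlock) is the standard argument for `defer` in Go. A minimal sketch with hypothetical names, mirroring the shape of the fix rather than the devmapper code itself:

```go
package main

import (
	"fmt"
	"sync"
)

type deviceSet struct {
	sync.Mutex
	deleted []string
}

// cleanupDeleted mirrors the bug's shape: the early return on an empty
// list must not keep the lock held. defer Unlock covers every exit path.
func (d *deviceSet) cleanupDeleted() int {
	d.Lock()
	defer d.Unlock() // released even on the early return below
	if len(d.deleted) == 0 {
		return 0 // before the fix, this path returned still holding the lock
	}
	n := len(d.deleted)
	d.deleted = nil
	return n
}

func main() {
	d := &deviceSet{}
	fmt.Println(d.cleanupDeleted()) // 0
	d.Lock()                        // would deadlock if cleanupDeleted leaked the lock
	d.Unlock()
	fmt.Println("no deadlock")
}
```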
Jana Radhakrishnan
304a08ee91 Fix docker startup failure due to dangling endpoints
Fixes a docker startup failure due to dangling endpoints,
which prevented docker from coming up.

Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
2015-10-22 15:07:05 -04:00
Jana Radhakrishnan
38fb64bba3 Vendoring libnetwork
Vendoring libnetwork @ 05890386de89e01c73f8898c2941b020bbe57052

Has bug fixes for 1.9

Signed-off-by: Jana Radhakrishnan <mrjana@docker.com>
2015-10-22 15:07:04 -04:00
Hu Keping
69cfb521bb Docs: update docs for API stats
Signed-off-by: Hu Keping <hukeping@huawei.com>
2015-10-22 15:07:04 -04:00
Aidan Hobson Sayers
9d8bf95811 Remove references to the docker-ut image
Signed-off-by: Aidan Hobson Sayers <aidanhs@cantab.net>
2015-10-22 15:07:04 -04:00
Aidan Hobson Sayers
10b594924a Update ambassador image, use the socat -t option
Signed-off-by: Aidan Hobson Sayers <aidanhs@cantab.net>
2015-10-22 15:07:04 -04:00
Stefan J. Wernli
1b4f705a2b Fixing hang in archive.CopyWithTar with invalid dst
Signed-off-by: Stefan J. Wernli <swernli@microsoft.com>
2015-10-22 15:07:04 -04:00
Madhu Venugopal
737cddc85d Fail the container start if the network has been removed
Signed-off-by: Madhu Venugopal <madhu@docker.com>
2015-10-22 15:07:04 -04:00
Harald Albers
1f209fffd0 Bash completion for restart policy unless-stopped
Signed-off-by: Harald Albers <github@albersweb.de>
2015-10-22 15:07:03 -04:00
Harald Albers
fb6ef9086d bash completion: support for dm.use_deferred_* options
Signed-off-by: Harald Albers <github@albersweb.de>
2015-10-22 15:07:03 -04:00
David Calavera
f2bbcab6c9 Return 404 for all network operations without network controller.
This will prevent the api from trying to serve network requests in
systems where libnetwork is not enabled, returning 404 responses in any
case.

Signed-off-by: David Calavera <david.calavera@gmail.com>
2015-10-22 15:07:03 -04:00
Derek McGowan
292d3b2052 Increase ping timeout
Ensure v2 registries are given more than 5 seconds to return a ping, avoiding an unnecessary fallback to v1.
Elevate the log level of a failed v2 ping to a warning, matching the warning related to using v1 registries.

Signed-off-by: Derek McGowan <derek@mcgstyle.net> (github: dmcgowan)
2015-10-22 15:07:03 -04:00
Christy Perez
f04a7bd388 Fix TestInspectInt64 for run scenarios that return warnings
Instead of returning only the container ID, starting a container may
also return a warning:

"WARNING: Your kernel does not support swap
limit capabilities, memory limited without
swap.\nff6ebd9f7a8d035d17bb9a61eb9d3f0a5d563160cc43471a9d7ac9f71945d061"

The test assumes that only the container ID is returned and uses the
entire message as the name for the inspect command. To avoid the need to
parse the container ID from the output after the run command, give the
container a name and use that instead.

Signed-off-by: Christy Perez <christy@linux.vnet.ibm.com>
2015-10-22 15:07:03 -04:00
Tobias Gesellchen
08cec812cb docs: fix description of filters param for /volumes and /networks.
fixes #17091

Signed-off-by: Tobias Gesellchen <tobias@gesellix.de>
2015-10-22 15:07:03 -04:00
ripcurld00d
34fdfc8f67 Update fedora doc
`service` is deprecated as of Fedora 21.
The `docker` daemon should be enabled and started using `systemctl`.

Signed-off-by: Boaz Shuster <ripcurld.github@gmail.com>
2015-10-22 15:07:02 -04:00
Sebastiaan van Stijn
1d5b8348ce docs: fix storage driver options list
This fixes the indentation of the storage driver
options list.

Also wraps/reformats some examples to prevent
horizontal scrollbars on the rendered HTML

Fixes #17140

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2015-10-22 15:07:02 -04:00
Vincent Demeester
c0705bc51d Deprecate -c cli short variant flag in docker cli
- build
- create

Signed-off-by: Vincent Demeester <vincent@sbr.pm>
2015-10-22 15:07:02 -04:00
Shijiang Wei
d88f6c33a3 make sure the value of the dangling filter is correct
Signed-off-by: Shijiang Wei <mountkin@gmail.com>
2015-10-22 15:07:02 -04:00
Madhu Venugopal
1361eaef7c Pass network driver option in docker network command
Signed-off-by: Madhu Venugopal <madhu@docker.com>
2015-10-22 15:07:02 -04:00
Madhu Venugopal
cbec9c9a6b Vendoring libnetwork to Replace the label variable to DriverOpts
Signed-off-by: Madhu Venugopal <madhu@docker.com>
2015-10-22 15:07:02 -04:00
Harald Albers
4a66841730 Add missing options to bash completion for docker import
Signed-off-by: Harald Albers <github@albersweb.de>
2015-10-22 15:07:01 -04:00
Harald Albers
f679c1a5ed bash completion: minor refactoring for consistency
Signed-off-by: Harald Albers <github@albersweb.de>
2015-10-22 15:07:01 -04:00
Harald Albers
69c0989dfb Add missing options to bash completion for docker build
Signed-off-by: Harald Albers <github@albersweb.de>
2015-10-22 15:07:01 -04:00
Steve Durrheimer
10f946d312 Zsh completion : all --<option>= flag values may be given in the next argument
Signed-off-by: Steve Durrheimer <s.durrheimer@gmail.com>
2015-10-22 15:07:01 -04:00
Steve Durrheimer
45cfeed39d Add zsh completion for 'docker import -m --message'
Signed-off-by: Steve Durrheimer <s.durrheimer@gmail.com>
2015-10-22 15:07:01 -04:00
David Lawrence
417386caa6 updating notary and gotuf with latest bugfixes
Signed-off-by: David Lawrence <david.lawrence@docker.com>
2015-10-22 15:07:01 -04:00
Daniel Nephin
71f5c74a04 Correct API docs for /images/create
Signed-off-by: Daniel Nephin <dnephin@docker.com>
2015-10-22 15:07:00 -04:00
Steve Durrheimer
daa20c724a Deprecate 'docker run -c' option in zsh completion
Signed-off-by: Steve Durrheimer <s.durrheimer@gmail.com>
2015-10-22 15:07:00 -04:00
Steve Durrheimer
b4207be09d Add zsh completion for 'unless-stopped' restart policy
Signed-off-by: Steve Durrheimer <s.durrheimer@gmail.com>
2015-10-22 15:07:00 -04:00
Steve Durrheimer
02a8baf069 Add zsh completion for 'docker build --build-arg'
Signed-off-by: Steve Durrheimer <s.durrheimer@gmail.com>
2015-10-22 15:07:00 -04:00
liaoqingwei
447c28b165 Use of checkers on docker_cli_network_unix_test.go.
Signed-off-by: liaoqingwei <liaoqingwei@huawei.com>
2015-10-22 15:07:00 -04:00
Mary Anthony
78afc5876a bad d
Signed-off-by: Mary Anthony <mary@docker.com>
2015-10-22 15:07:00 -04:00
Mary Anthony
3748494073 Removing extra tic
Signed-off-by: Mary Anthony <mary@docker.com>
2015-10-22 15:07:00 -04:00
Derek Ch
90d352e765 fix a race crash when building with "ADD some-broken.tar.xz ..."
The race is between pools.Put which calls buf.Reset and exec.Cmd
doing io.Copy from the buffer; it caused a runtime crash, as
described in #16924:

```
docker-daemon cat the-tarball.xz | xz -d -c -q | docker-untar /path/to/... (aufs )
```

When the docker-untar side fails (e.g. trying to set an xattr on aufs, or
a broken tar), invokeUnpack is responsible for exhausting all input;
otherwise `xz` will be blocked writing forever.

This change adds a receive-only channel to cmdStream and closes it to
notify that it is now safe to close the input stream.

In CmdStream, switching to Stdin / Stdout / Stderr keeps the code simple:
os/exec.Cmd spawns the goroutines and calls io.Copy automatically.

Since CmdStream is only called within this file, it is renamed to
lowercase to mark it as private.

[...]
INFO[0000] Docker daemon                                 commit=0a8c2e3 execdriver=native-0.2 graphdriver=aufs version=1.8.2

DEBU[0006] Calling POST /build
INFO[0006] POST /v1.20/build?cgroupparent=&cpuperiod=0&cpuquota=0&cpusetcpus=&cpusetmems=&cpushares=0&dockerfile=Dockerfile&memory=0&memswap=0&rm=1&t=gentoo-x32&ulimits=null
DEBU[0008] [BUILDER] Cache miss
DEBU[0009] Couldn't untar /home/lib-docker-v1.8.2-tmp/tmp/docker-build316710953/stage3-x32-20151004.tar.xz to /home/lib-docker-v1.8.2-tmp/aufs/mnt/d909abb87150463939c13e8a349b889a72d9b14f0cfcab42a8711979be285537: Untar re-exec error: exit status 1: output: operation not supported
DEBU[0009] CopyFileWithTar(/home/lib-docker-v1.8.2-tmp/tmp/docker-build316710953/stage3-x32-20151004.tar.xz, /home/lib-docker-v1.8.2-tmp/aufs/mnt/d909abb87150463939c13e8a349b889a72d9b14f0cfcab42a8711979be285537/)
panic: runtime error: slice bounds out of range

goroutine 42 [running]:
bufio.(*Reader).fill(0xc208187800)
    /usr/local/go/src/bufio/bufio.go:86 +0x2db
bufio.(*Reader).WriteTo(0xc208187800, 0x7ff39602d150, 0xc2083f11a0, 0x508000, 0x0, 0x0)
    /usr/local/go/src/bufio/bufio.go:449 +0x27e
io.Copy(0x7ff39602d150, 0xc2083f11a0, 0x7ff3960261f8, 0xc208187800, 0x0, 0x0, 0x0)
    /usr/local/go/src/io/io.go:354 +0xb2
github.com/docker/docker/pkg/archive.func·006()
    /go/src/github.com/docker/docker/pkg/archive/archive.go:817 +0x71
created by github.com/docker/docker/pkg/archive.CmdStream
    /go/src/github.com/docker/docker/pkg/archive/archive.go:819 +0x1ec

goroutine 1 [chan receive]:
main.(*DaemonCli).CmdDaemon(0xc20809da30, 0xc20800a020, 0xd, 0xd, 0x0, 0x0)
    /go/src/github.com/docker/docker/docker/daemon.go:289 +0x1781
reflect.callMethod(0xc208140090, 0xc20828fce0)
    /usr/local/go/src/reflect/value.go:605 +0x179
reflect.methodValueCall(0xc20800a020, 0xd, 0xd, 0x1, 0xc208140090, 0x0, 0x0, 0xc208140090, 0x0, 0x45343f, ...)
    /usr/local/go/src/reflect/asm_amd64.s:29 +0x36
github.com/docker/docker/cli.(*Cli).Run(0xc208129fb0, 0xc20800a010, 0xe, 0xe, 0x0, 0x0)
    /go/src/github.com/docker/docker/cli/cli.go:89 +0x38e
main.main()
    /go/src/github.com/docker/docker/docker/docker.go:69 +0x428

goroutine 5 [syscall]:
os/signal.loop()
    /usr/local/go/src/os/signal/signal_unix.go:21 +0x1f
created by os/signal.init·1
    /usr/local/go/src/os/signal/signal_unix.go:27 +0x35

Signed-off-by: Derek Ch <denc716@gmail.com>
2015-10-22 15:06:59 -04:00
Steve Durrheimer
e51ffc7dd3 Remove '-n -l --latest' options from 'docker network ls' in zsh completion
Signed-off-by: Steve Durrheimer <s.durrheimer@gmail.com>
2015-10-22 15:06:59 -04:00
Steve Durrheimer
bbd690e2e2 Add zsh completion for '--ipam-driver --subnet --ip-range --gateway --aux-address' for 'docker network create'
Signed-off-by: Steve Durrheimer <s.durrheimer@gmail.com>
2015-10-22 15:06:59 -04:00
Madhu Venugopal
14897a0b47 Added network to docker --help and help cleanup
Fixes https://github.com/docker/docker/issues/16909

Signed-off-by: Madhu Venugopal <madhu@docker.com>
2015-10-22 15:06:59 -04:00
Victor Vieux
ac4349bd30 use Server Version
Signed-off-by: Victor Vieux <vieux@docker.com>
2015-10-22 15:06:59 -04:00
Victor Vieux
fb525a33e1 only display 'Engine Version' when it's not empty
Signed-off-by: Victor Vieux <vieux@docker.com>
2015-10-22 15:06:59 -04:00
Sally O'Malley
17def9a2c6 add clarity to -p option
Signed-off-by: Sally O'Malley <somalley@redhat.com>
2015-10-22 15:06:58 -04:00
Jian Zhang
d1e3a75934 Improve the way we deliver Examples in command line. (Add descriptive titles)
Signed-off-by: Jian Zhang <zhangjian.fnst@cn.fujitsu.com>
2015-10-22 15:06:58 -04:00
Harald Albers
38cd4eb1b2 Add bash completion for docker inspect --size
Signed-off-by: Harald Albers <github@albersweb.de>
2015-10-22 15:06:58 -04:00
Jessica Frazelle
fbe20aa393 update tests
Signed-off-by: Jessica Frazelle <acidburn@docker.com>
2015-10-22 15:06:58 -04:00
Antonio Murdaca
9b313adb51 daemon: execdriver: lxc: fix cgroup paths
When running LXC dind (the outer docker is started with the native driver),
cgroup paths point to `/docker/CID` inside `/proc/self/mountinfo`, but
these paths aren't mounted (the root is wrong). This fix simply discards the
cgroup dir from mountinfo and sets it to root `/`.
This patch fixes/skips the OOM LXC tests that were failing.
Fix #16520

Signed-off-by: Antonio Murdaca <runcom@linux.com>
Signed-off-by: Antonio Murdaca <amurdaca@redhat.com>
2015-10-22 15:06:58 -04:00
Antonio Murdaca
9419eade34 daemon: execdriver: lxc: fix set memory swap
On LXC, memory swap was only set to memory_limit*2 even if a value for
memory swap was provided. This patch fixes this behavior to match the
native driver and sets the correct memory swap in the template.
It also adds a test specifically for LXC without adding a new test
requirement.

Signed-off-by: Antonio Murdaca <runcom@linux.com>
2015-10-22 15:06:58 -04:00
323 changed files with 9391 additions and 6608 deletions


@@ -1,5 +1,113 @@
# Changelog
Items starting with `DEPRECATE` are important deprecation notices. For more
information on the list of deprecated flags and APIs please have a look at
https://docs.docker.com/misc/deprecated/ where target removal dates can also
be found.
## 1.9.0 (2015-11-03)
### Runtime
+ `docker stats` now returns block IO metrics (#15005)
+ `docker stats` now details network stats per interface (#15786)
+ Add `ancestor=<image>` filter to `docker ps --filter` flag to filter
containers based on their ancestor images (#14570)
+ Add `label=<somelabel>` filter to `docker ps --filter` to filter containers
based on label (#16530)
+ Add `--kernel-memory` flag to `docker run` (#14006)
+ Add `--message` flag to `docker import` allowing to specify an optional
message (#15711)
+ Add `--privileged` flag to `docker exec` (#14113)
+ Add `--stop-signal` flag to `docker run` allowing to replace the container
process stopping signal (#15307)
+ Add a new `unless-stopped` restart policy (#15348)
+ Inspecting an image now returns tags (#13185)
+ Add container size information to `docker inspect` (#15796)
+ Add `RepoTags` and `RepoDigests` field to `/images/{name:.*}/json` (#17275)
- Remove the deprecated `/container/ps` endpoint from the API (#15972)
- Send and document correct HTTP codes for `/exec/<name>/start` (#16250)
- Share shm and mqueue between containers sharing IPC namespace (#15862)
- Event stream now shows OOM status when `--oom-kill-disable` is set (#16235)
- Ensure special network files (/etc/hosts etc.) are read-only if bind-mounted
with `ro` option (#14965)
- Improve `rmi` performance (#16890)
- Do not update /etc/hosts for the default bridge network, except for links (#17325)
- Fix conflict with duplicate container names (#17389)
- Fix an issue with incorrect template execution in `docker inspect` (#17284)
- DEPRECATE `-c` short flag variant for `--cpu-shares` in docker run (#16271)
### Client
+ Allow `docker import` to import from local files (#11907)
### Builder
+ Add a `STOPSIGNAL` Dockerfile instruction allowing to set a different
stop-signal for the container process (#15307)
+ Add an `ARG` Dockerfile instruction and a `--build-arg` flag to `docker build`
that allows to add build-time environment variables (#15182)
- Improve cache miss performance (#16890)
### Storage
- devicemapper: Implement deferred deletion capability (#16381)
### Networking
+ `docker network` exits experimental and is part of standard release (#16645)
+ New network top-level concept, with associated subcommands and API (#16645)
WARNING: the API is different from the experimental API
+ Support for multiple isolated/micro-segmented networks (#16645)
+ Built-in multihost networking using VXLAN based overlay driver (#14071)
+ Support for third-party network plugins (#13424)
+ Ability to dynamically connect containers to multiple networks (#16645)
+ Support for user-defined IP address management via pluggable IPAM drivers (#16910)
+ Add daemon flags `--cluster-store` and `--cluster-advertise` for built-in nodes discovery (#16229)
+ Add `--cluster-store-opt` for setting up TLS settings (#16644)
+ Add `--dns-opt` to the daemon (#16031)
- DEPRECATE following container `NetworkSettings` fields in API v1.21: `EndpointID`, `Gateway`,
`GlobalIPv6Address`, `GlobalIPv6PrefixLen`, `IPAddress`, `IPPrefixLen`, `IPv6Gateway` and `MacAddress`.
Those are now specific to the `bridge` network. Use `NetworkSettings.Networks` to inspect
the networking settings of a container per network.
### Volumes
+ New top-level `volume` subcommand and API (#14242)
- Move API volume driver settings to host-specific config (#15798)
- Print an error message if volume name is not unique (#16009)
- Ensure volumes created from Dockerfiles always use the local volume driver
(#15507)
- DEPRECATE auto-creating missing host paths for bind mounts (#16349)
### Logging
+ Add `awslogs` logging driver for Amazon CloudWatch (#15495)
+ Add generic `tag` log option to allow customizing container/image
information passed to driver (e.g. show container names) (#15384)
- Implement the `docker logs` endpoint for the journald driver (#13707)
- DEPRECATE driver-specific log tags (e.g. `syslog-tag`, etc.) (#15384)
### Distribution
+ `docker search` now works with partial names (#16509)
- Push optimization: avoid buffering to file (#15493)
- The daemon will display progress for images that were already being pulled
by another client (#15489)
- Only permissions required for the current action being performed are requested (#)
+ Renaming trust keys (and respective environment variables) from `offline` to
`root` and `tagging` to `repository` (#16894)
- DEPRECATE trust key environment variables
`DOCKER_CONTENT_TRUST_OFFLINE_PASSPHRASE` and
`DOCKER_CONTENT_TRUST_TAGGING_PASSPHRASE` (#16894)
### Security
+ Add SELinux profiles to the rpm package (#15832)
- Fix various issues with AppArmor profiles provided in the deb package
(#14609)
- Add AppArmor policy that prevents writing to /proc (#15571)
## 1.8.3 (2015-10-12)
### Distribution


@@ -150,10 +150,11 @@ RUN set -x \
&& rm -rf "$GOPATH"
# Get the "docker-py" source so we can run their integration tests
ENV DOCKER_PY_COMMIT 139850f3f3b17357bab5ba3edfb745fb14043764
ENV DOCKER_PY_COMMIT 47ab89ec2bd3bddf1221b856ffbaff333edeabb4
RUN git clone https://github.com/docker/docker-py.git /docker-py \
&& cd /docker-py \
&& git checkout -q $DOCKER_PY_COMMIT
&& git checkout -q $DOCKER_PY_COMMIT \
&& pip install -r test-requirements.txt
# Setup s3cmd config
RUN { \


@@ -1 +1 @@
1.9.0-dev
1.9.0


@@ -58,7 +58,7 @@ func (cli *DockerCli) CmdBuild(args ...string) error {
dockerfileName := cmd.String([]string{"f", "-file"}, "", "Name of the Dockerfile (Default is 'PATH/Dockerfile')")
flMemoryString := cmd.String([]string{"m", "-memory"}, "", "Memory limit")
flMemorySwap := cmd.String([]string{"-memory-swap"}, "", "Total memory (memory + swap), '-1' to disable swap")
flCPUShares := cmd.Int64([]string{"c", "-cpu-shares"}, 0, "CPU shares (relative weight)")
flCPUShares := cmd.Int64([]string{"#c", "-cpu-shares"}, 0, "CPU shares (relative weight)")
flCPUPeriod := cmd.Int64([]string{"-cpu-period"}, 0, "Limit the CPU CFS (Completely Fair Scheduler) period")
flCPUQuota := cmd.Int64([]string{"-cpu-quota"}, 0, "Limit the CPU CFS (Completely Fair Scheduler) quota")
flCPUSetCpus := cmd.String([]string{"-cpuset-cpus"}, "", "CPUs in which to allow execution (0-3, 0,1)")


@@ -112,8 +112,13 @@ func NewDockerCli(in io.ReadCloser, out, err io.Writer, clientFlags *cli.ClientF
return errors.New("Please specify only one -H")
}
defaultHost := opts.DefaultTCPHost
if clientFlags.Common.TLSOptions != nil {
defaultHost = opts.DefaultTLSHost
}
var e error
if hosts[0], e = opts.ParseHost(hosts[0]); e != nil {
if hosts[0], e = opts.ParseHost(defaultHost, hosts[0]); e != nil {
return e
}


@@ -35,7 +35,7 @@ func (cli *DockerCli) CmdInfo(args ...string) error {
fmt.Fprintf(cli.out, "Containers: %d\n", info.Containers)
fmt.Fprintf(cli.out, "Images: %d\n", info.Images)
fmt.Fprintf(cli.out, "Engine Version: %s\n", info.ServerVersion)
ioutils.FprintfIfNotEmpty(cli.out, "Server Version: %s\n", info.ServerVersion)
ioutils.FprintfIfNotEmpty(cli.out, "Storage Driver: %s\n", info.Driver)
if info.DriverStatus != nil {
for _, pair := range info.DriverStatus {
@@ -107,5 +107,8 @@ func (cli *DockerCli) CmdInfo(args ...string) error {
fmt.Fprintf(cli.out, "Cluster store: %s\n", info.ClusterStore)
}
if info.ClusterAdvertise != "" {
fmt.Fprintf(cli.out, "Cluster advertise: %s\n", info.ClusterAdvertise)
}
return nil
}


@@ -59,7 +59,6 @@ func (cli *DockerCli) CmdInspect(args ...string) error {
}
for _, name := range cmd.Args() {
if *inspectType == "" || *inspectType == "container" {
obj, _, err = readBody(cli.call("GET", "/containers/"+name+"/json?"+v.Encode(), nil, nil))
if err != nil && *inspectType == "container" {
@@ -101,42 +100,45 @@ func (cli *DockerCli) CmdInspect(args ...string) error {
} else {
rdr := bytes.NewReader(obj)
dec := json.NewDecoder(rdr)
buf := bytes.NewBufferString("")
if isImage {
inspPtr := types.ImageInspect{}
if err := dec.Decode(&inspPtr); err != nil {
fmt.Fprintf(cli.err, "%s\n", err)
fmt.Fprintf(cli.err, "Unable to read inspect data: %v\n", err)
status = 1
continue
break
}
if err := tmpl.Execute(cli.out, inspPtr); err != nil {
if err := tmpl.Execute(buf, inspPtr); err != nil {
rdr.Seek(0, 0)
var raw interface{}
if err := dec.Decode(&raw); err != nil {
return err
}
if err = tmpl.Execute(cli.out, raw); err != nil {
return err
var ok bool
if buf, ok = cli.decodeRawInspect(tmpl, dec); !ok {
fmt.Fprintf(cli.err, "Template parsing error: %v\n", err)
status = 1
break
}
}
} else {
inspPtr := types.ContainerJSON{}
if err := dec.Decode(&inspPtr); err != nil {
fmt.Fprintf(cli.err, "%s\n", err)
fmt.Fprintf(cli.err, "Unable to read inspect data: %v\n", err)
status = 1
continue
break
}
if err := tmpl.Execute(cli.out, inspPtr); err != nil {
if err := tmpl.Execute(buf, inspPtr); err != nil {
rdr.Seek(0, 0)
var raw interface{}
if err := dec.Decode(&raw); err != nil {
return err
}
if err = tmpl.Execute(cli.out, raw); err != nil {
return err
var ok bool
if buf, ok = cli.decodeRawInspect(tmpl, dec); !ok {
fmt.Fprintf(cli.err, "Template parsing error: %v\n", err)
status = 1
break
}
}
}
cli.out.Write(buf.Bytes())
cli.out.Write([]byte{'\n'})
}
indented.WriteString(",")
@@ -162,3 +164,33 @@ func (cli *DockerCli) CmdInspect(args ...string) error {
}
return nil
}
// decodeRawInspect executes the inspect template with a raw interface.
// This allows docker cli to parse inspect structs injected with Swarm fields.
// Unfortunately, go 1.4 doesn't fail executing invalid templates when the input is an interface.
// It doesn't allow modifying this behavior either, and sends <no value> messages to the output.
// We assume that the template is invalid when there is a <no value>; if the template were valid
// we'd get <nil> or "" values. In that case we fail with the original error raised executing the
// template with the typed input.
//
// TODO: Go 1.5 allows to customize the error behavior, we can probably get rid of this as soon as
// we build Docker with that version:
// https://golang.org/pkg/text/template/#Template.Option
func (cli *DockerCli) decodeRawInspect(tmpl *template.Template, dec *json.Decoder) (*bytes.Buffer, bool) {
var raw interface{}
buf := bytes.NewBufferString("")
if rawErr := dec.Decode(&raw); rawErr != nil {
fmt.Fprintf(cli.err, "Unable to read inspect data: %v\n", rawErr)
return buf, false
}
if rawErr := tmpl.Execute(buf, raw); rawErr != nil {
return buf, false
}
if strings.Contains(buf.String(), "<no value>") {
return buf, false
}
return buf, true
}


@@ -34,6 +34,7 @@ func (cli *DockerCli) CmdNetwork(args ...string) error {
func (cli *DockerCli) CmdNetworkCreate(args ...string) error {
cmd := Cli.Subcmd("network create", []string{"NETWORK-NAME"}, "Creates a new network with a name specified by the user", false)
flDriver := cmd.String([]string{"d", "-driver"}, "bridge", "Driver to manage the Network")
flOpts := opts.NewMapOpts(nil, nil)
flIpamDriver := cmd.String([]string{"-ipam-driver"}, "default", "IP Address Management Driver")
flIpamSubnet := opts.NewListOpts(nil)
@@ -41,10 +42,11 @@ func (cli *DockerCli) CmdNetworkCreate(args ...string) error {
flIpamGateway := opts.NewListOpts(nil)
flIpamAux := opts.NewMapOpts(nil, nil)
cmd.Var(&flIpamSubnet, []string{"-subnet"}, "Subnet in CIDR format that represents a network segment")
cmd.Var(&flIpamSubnet, []string{"-subnet"}, "subnet in CIDR format that represents a network segment")
cmd.Var(&flIpamIPRange, []string{"-ip-range"}, "allocate container ip from a sub-range")
cmd.Var(&flIpamGateway, []string{"-gateway"}, "ipv4 or ipv6 Gateway for the master subnet")
cmd.Var(flIpamAux, []string{"-aux-address"}, "Auxiliary ipv4 or ipv6 addresses used by network driver")
cmd.Var(flIpamAux, []string{"-aux-address"}, "auxiliary ipv4 or ipv6 addresses used by Network driver")
cmd.Var(flOpts, []string{"o", "-opt"}, "set driver specific options")
cmd.Require(flag.Exact, 1)
err := cmd.ParseFlags(args, true)
@@ -52,6 +54,13 @@ func (cli *DockerCli) CmdNetworkCreate(args ...string) error {
return err
}
// Set the default driver to "" if the user didn't set the value.
// That way we can know whether it was user input or not.
driver := *flDriver
if !cmd.IsSet("-driver") && !cmd.IsSet("d") {
driver = ""
}
ipamCfg, err := consolidateIpam(flIpamSubnet.GetAll(), flIpamIPRange.GetAll(), flIpamGateway.GetAll(), flIpamAux.GetAll())
if err != nil {
return err
@@ -60,8 +69,9 @@ func (cli *DockerCli) CmdNetworkCreate(args ...string) error {
// Construct network create request body
nc := types.NetworkCreate{
Name: cmd.Arg(0),
Driver: *flDriver,
Driver: driver,
IPAM: network.IPAM{Driver: *flIpamDriver, Config: ipamCfg},
Options: flOpts.GetAll(),
CheckDuplicate: true,
}
obj, _, err := readBody(cli.call("POST", "/networks/create", nc, nil))
@@ -181,31 +191,48 @@ func (cli *DockerCli) CmdNetworkLs(args ...string) error {
// CmdNetworkInspect inspects the network object for more details
//
// Usage: docker network inspect <NETWORK>
// CmdNetworkInspect handles Network inspect UI
// Usage: docker network inspect [OPTIONS] <NETWORK> [NETWORK...]
func (cli *DockerCli) CmdNetworkInspect(args ...string) error {
cmd := Cli.Subcmd("network inspect", []string{"NETWORK"}, "Displays detailed information on a network", false)
cmd.Require(flag.Exact, 1)
cmd := Cli.Subcmd("network inspect", []string{"NETWORK [NETWORK...]"}, "Displays detailed information on a network", false)
cmd.Require(flag.Min, 1)
err := cmd.ParseFlags(args, true)
if err != nil {
return err
}
obj, _, err := readBody(cli.call("GET", "/networks/"+cmd.Arg(0), nil, nil))
status := 0
var networks []*types.NetworkResource
for _, name := range cmd.Args() {
obj, _, err := readBody(cli.call("GET", "/networks/"+name, nil, nil))
if err != nil {
if strings.Contains(err.Error(), "not found") {
fmt.Fprintf(cli.err, "Error: No such network: %s\n", name)
} else {
fmt.Fprintf(cli.err, "%s", err)
}
status = 1
continue
}
networkResource := types.NetworkResource{}
if err := json.NewDecoder(bytes.NewReader(obj)).Decode(&networkResource); err != nil {
return err
}
networks = append(networks, &networkResource)
}
b, err := json.MarshalIndent(networks, "", " ")
if err != nil {
return err
}
networkResource := &types.NetworkResource{}
if err := json.NewDecoder(bytes.NewReader(obj)).Decode(networkResource); err != nil {
return err
}
indented := new(bytes.Buffer)
if err := json.Indent(indented, obj, "", " "); err != nil {
if _, err := io.Copy(cli.out, bytes.NewReader(b)); err != nil {
return err
}
if _, err := io.Copy(cli.out, indented); err != nil {
return err
io.WriteString(cli.out, "\n")
if status != 0 {
return Cli.StatusError{StatusCode: status}
}
return nil
}


@@ -198,15 +198,17 @@ func (cli *DockerCli) getPassphraseRetriever() passphrase.Retriever {
// Backwards compatibility with old env names. We should remove this in 1.10
if env["root"] == "" {
env["root"] = os.Getenv("DOCKER_CONTENT_TRUST_OFFLINE_PASSPHRASE")
fmt.Fprintf(cli.err, "[DEPRECATED] The environment variable DOCKER_CONTENT_TRUST_OFFLINE_PASSPHRASE has been deprecated and will be removed in v1.10. Please use DOCKER_CONTENT_TRUST_ROOT_PASSPHRASE\n")
if passphrase := os.Getenv("DOCKER_CONTENT_TRUST_OFFLINE_PASSPHRASE"); passphrase != "" {
env["root"] = passphrase
fmt.Fprintf(cli.err, "[DEPRECATED] The environment variable DOCKER_CONTENT_TRUST_OFFLINE_PASSPHRASE has been deprecated and will be removed in v1.10. Please use DOCKER_CONTENT_TRUST_ROOT_PASSPHRASE\n")
}
}
if env["snapshot"] == "" || env["targets"] == "" {
env["snapshot"] = os.Getenv("DOCKER_CONTENT_TRUST_TAGGING_PASSPHRASE")
env["targets"] = os.Getenv("DOCKER_CONTENT_TRUST_TAGGING_PASSPHRASE")
fmt.Fprintf(cli.err, "[DEPRECATED] The environment variable DOCKER_CONTENT_TRUST_TAGGING_PASSPHRASE has been deprecated and will be removed in v1.10. Please use DOCKER_CONTENT_TRUST_REPOSITORY_PASSPHRASE\n")
if passphrase := os.Getenv("DOCKER_CONTENT_TRUST_TAGGING_PASSPHRASE"); passphrase != "" {
env["snapshot"] = passphrase
env["targets"] = passphrase
fmt.Fprintf(cli.err, "[DEPRECATED] The environment variable DOCKER_CONTENT_TRUST_TAGGING_PASSPHRASE has been deprecated and will be removed in v1.10. Please use DOCKER_CONTENT_TRUST_REPOSITORY_PASSPHRASE\n")
}
}
return func(keyName string, alias string, createNew bool, numAttempts int) (string, bool, error) {


@@ -195,7 +195,7 @@ func (cli *DockerCli) CmdVolumeCreate(args ...string) error {
volReq.Name = *flName
}
resp, err := cli.call("POST", "/volumes", volReq, nil)
resp, err := cli.call("POST", "/volumes/create", volReq, nil)
if err != nil {
return err
}


@@ -141,7 +141,7 @@ func (r *router) initRoutes() {
NewPostRoute("/exec/{name:.*}/start", r.postContainerExecStart),
NewPostRoute("/exec/{name:.*}/resize", r.postContainerExecResize),
NewPostRoute("/containers/{name:.*}/rename", r.postContainerRename),
NewPostRoute("/volumes", r.postVolumesCreate),
NewPostRoute("/volumes/create", r.postVolumesCreate),
// PUT
NewPutRoute("/containers/{name:.*}/archive", r.putContainersArchive),
// DELETE


@@ -1,9 +1,14 @@
package network
import (
"net/http"
"github.com/docker/docker/api/server/httputils"
"github.com/docker/docker/api/server/router"
"github.com/docker/docker/api/server/router/local"
"github.com/docker/docker/daemon"
"github.com/docker/docker/errors"
"golang.org/x/net/context"
)
// networkRouter is a router to talk with the network controller
@@ -29,13 +34,24 @@ func (r *networkRouter) Routes() []router.Route {
func (r *networkRouter) initRoutes() {
r.routes = []router.Route{
// GET
local.NewGetRoute("/networks", r.getNetworksList),
local.NewGetRoute("/networks/{id:.*}", r.getNetwork),
local.NewGetRoute("/networks", r.controllerEnabledMiddleware(r.getNetworksList)),
local.NewGetRoute("/networks/{id:.*}", r.controllerEnabledMiddleware(r.getNetwork)),
// POST
local.NewPostRoute("/networks/create", r.postNetworkCreate),
local.NewPostRoute("/networks/{id:.*}/connect", r.postNetworkConnect),
local.NewPostRoute("/networks/{id:.*}/disconnect", r.postNetworkDisconnect),
local.NewPostRoute("/networks/create", r.controllerEnabledMiddleware(r.postNetworkCreate)),
local.NewPostRoute("/networks/{id:.*}/connect", r.controllerEnabledMiddleware(r.postNetworkConnect)),
local.NewPostRoute("/networks/{id:.*}/disconnect", r.controllerEnabledMiddleware(r.postNetworkDisconnect)),
// DELETE
local.NewDeleteRoute("/networks/{id:.*}", r.deleteNetwork),
local.NewDeleteRoute("/networks/{id:.*}", r.controllerEnabledMiddleware(r.deleteNetwork)),
}
}
func (r *networkRouter) controllerEnabledMiddleware(handler httputils.APIFunc) httputils.APIFunc {
if r.daemon.NetworkControllerEnabled() {
return handler
}
return networkControllerDisabled
}
func networkControllerDisabled(ctx context.Context, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
return errors.ErrorNetworkControllerNotEnabled.WithArgs()
}


@@ -13,6 +13,7 @@ import (
"github.com/docker/docker/daemon"
"github.com/docker/docker/daemon/network"
"github.com/docker/docker/pkg/parsers/filters"
"github.com/docker/docker/runconfig"
"github.com/docker/libnetwork"
)
@@ -85,6 +86,11 @@ func (n *networkRouter) postNetworkCreate(ctx context.Context, w http.ResponseWr
return err
}
if runconfig.IsPreDefinedNetwork(create.Name) {
return httputils.WriteJSON(w, http.StatusForbidden,
fmt.Sprintf("%s is a pre-defined network and cannot be created", create.Name))
}
nw, err := n.daemon.GetNetwork(create.Name, daemon.NetworkByName)
if _, ok := err.(libnetwork.ErrNoSuchNetwork); err != nil && !ok {
return err
@@ -96,7 +102,7 @@ func (n *networkRouter) postNetworkCreate(ctx context.Context, w http.ResponseWr
warning = fmt.Sprintf("Network with name %s (id : %s) already exists", nw.Name(), nw.ID())
}
nw, err = n.daemon.CreateNetwork(create.Name, create.Driver, create.IPAM)
nw, err = n.daemon.CreateNetwork(create.Name, create.Driver, create.IPAM, create.Options)
if err != nil {
return err
}
@@ -169,6 +175,11 @@ func (n *networkRouter) deleteNetwork(ctx context.Context, w http.ResponseWriter
return err
}
if runconfig.IsPreDefinedNetwork(nw.Name()) {
return httputils.WriteJSON(w, http.StatusForbidden,
fmt.Sprintf("%s is a pre-defined network and cannot be removed", nw.Name()))
}
return nw.Delete()
}
@@ -182,6 +193,7 @@ func buildNetworkResource(nw libnetwork.Network) *types.NetworkResource {
r.ID = nw.ID()
r.Scope = nw.Info().Scope()
r.Driver = nw.Type()
r.Options = nw.Info().DriverOptions()
r.Containers = make(map[string]types.EndpointResource)
buildIpamResources(r, nw)


@@ -165,6 +165,7 @@ func (s *Server) makeHTTPHandler(handler httputils.APIFunc) http.HandlerFunc {
func (s *Server) InitRouters(d *daemon.Daemon) {
s.addRouter(local.NewRouter(d))
s.addRouter(network.NewRouter(d))
for _, srv := range s.servers {
srv.srv.Handler = s.CreateMux()
}


@@ -5,6 +5,7 @@ import (
"time"
"github.com/docker/docker/daemon/network"
"github.com/docker/docker/pkg/nat"
"github.com/docker/docker/pkg/version"
"github.com/docker/docker/registry"
"github.com/docker/docker/runconfig"
@@ -96,7 +97,8 @@ type GraphDriverData struct {
// GET "/images/{name:.*}/json"
type ImageInspect struct {
ID string `json:"Id"`
Tags []string
RepoTags []string
RepoDigests []string
Parent string
Comment string
Created string
@@ -218,6 +220,7 @@ type Info struct {
ExperimentalBuild bool
ServerVersion string
ClusterStore string
ClusterAdvertise string
}
// ExecStartCheck is a temp struct used by execStart
@@ -254,7 +257,6 @@ type ContainerJSONBase struct {
Args []string
State *ContainerState
Image string
NetworkSettings *network.Settings
ResolvConfPath string
HostnamePath string
HostsPath string
@@ -276,8 +278,43 @@ type ContainerJSONBase struct {
// ContainerJSON is newly used struct along with MountPoint
type ContainerJSON struct {
*ContainerJSONBase
Mounts []MountPoint
Config *runconfig.Config
Mounts []MountPoint
Config *runconfig.Config
NetworkSettings *NetworkSettings
}
// NetworkSettings exposes the network settings in the api
type NetworkSettings struct {
NetworkSettingsBase
DefaultNetworkSettings
Networks map[string]*network.EndpointSettings
}
// NetworkSettingsBase holds basic information about networks
type NetworkSettingsBase struct {
Bridge string
SandboxID string
HairpinMode bool
LinkLocalIPv6Address string
LinkLocalIPv6PrefixLen int
Ports nat.PortMap
SandboxKey string
SecondaryIPAddresses []network.Address
SecondaryIPv6Addresses []network.Address
}
// DefaultNetworkSettings holds network information
// during the 2 release deprecation period.
// It will be removed in Docker 1.11.
type DefaultNetworkSettings struct {
EndpointID string
Gateway string
GlobalIPv6Address string
GlobalIPv6PrefixLen int
IPAddress string
IPPrefixLen int
IPv6Gateway string
MacAddress string
}
// MountPoint represents a mount point configuration inside the container.
@@ -304,7 +341,7 @@ type VolumesListResponse struct {
}
// VolumeCreateRequest contains the response for the remote API:
// POST "/volumes"
// POST "/volumes/create"
type VolumeCreateRequest struct {
Name string // Name is the requested name of the volume
Driver string // Driver is the name of the driver that should be used to create the volume
@@ -313,42 +350,44 @@ type VolumeCreateRequest struct {
// NetworkResource is the body of the "get network" http response message
type NetworkResource struct {
Name string `json:"name"`
ID string `json:"id"`
Scope string `json:"scope"`
Driver string `json:"driver"`
IPAM network.IPAM `json:"ipam"`
Containers map[string]EndpointResource `json:"containers"`
Name string
ID string `json:"Id"`
Scope string
Driver string
IPAM network.IPAM
Containers map[string]EndpointResource
Options map[string]string
}
// EndpointResource contains network resources allocated and used for a container in a network
type EndpointResource struct {
EndpointID string `json:"endpoint"`
MacAddress string `json:"mac_address"`
IPv4Address string `json:"ipv4_address"`
IPv6Address string `json:"ipv6_address"`
EndpointID string
MacAddress string
IPv4Address string
IPv6Address string
}
// NetworkCreate is the expected body of the "create network" http request message
type NetworkCreate struct {
Name string `json:"name"`
CheckDuplicate bool `json:"check_duplicate"`
Driver string `json:"driver"`
IPAM network.IPAM `json:"ipam"`
Name string
CheckDuplicate bool
Driver string
IPAM network.IPAM
Options map[string]string
}
// NetworkCreateResponse is the response message sent by the server for network create call
type NetworkCreateResponse struct {
ID string `json:"id"`
Warning string `json:"warning"`
ID string `json:"Id"`
Warning string
}
// NetworkConnect represents the data to be used to connect a container to the network
type NetworkConnect struct {
Container string `json:"container"`
Container string
}
// NetworkDisconnect represents the data to be used to disconnect a container from the network
type NetworkDisconnect struct {
Container string `json:"container"`
Container string
}


@@ -3,6 +3,8 @@ package v1p19
import (
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/versions/v1p20"
"github.com/docker/docker/pkg/nat"
"github.com/docker/docker/runconfig"
)
@@ -10,15 +12,20 @@ import (
// Note this is not used by the Windows daemon.
type ContainerJSON struct {
*types.ContainerJSONBase
Volumes map[string]string
VolumesRW map[string]bool
Config *ContainerConfig
Volumes map[string]string
VolumesRW map[string]bool
Config *ContainerConfig
NetworkSettings *v1p20.NetworkSettings
}
// ContainerConfig is a backcompatibility struct for APIs prior to 1.20.
type ContainerConfig struct {
*runconfig.Config
MacAddress string
NetworkDisabled bool
ExposedPorts map[nat.Port]struct{}
// backward compatibility, they now live in HostConfig
VolumeDriver string
Memory int64


@@ -3,20 +3,26 @@ package v1p20
import (
"github.com/docker/docker/api/types"
"github.com/docker/docker/pkg/nat"
"github.com/docker/docker/runconfig"
)
// ContainerJSON is a backcompatibility struct for the API 1.20
type ContainerJSON struct {
*types.ContainerJSONBase
Mounts []types.MountPoint
Config *ContainerConfig
Mounts []types.MountPoint
Config *ContainerConfig
NetworkSettings *NetworkSettings
}
// ContainerConfig is a backcompatibility struct used in ContainerJSON for the API 1.20
type ContainerConfig struct {
*runconfig.Config
MacAddress string
NetworkDisabled bool
ExposedPorts map[nat.Port]struct{}
// backward compatibility, they now live in HostConfig
VolumeDriver string
}
@@ -26,3 +32,9 @@ type StatsJSON struct {
types.Stats
Network types.NetworkStats `json:"network,omitempty"`
}
// NetworkSettings is a backward compatible struct for APIs prior to 1.21
type NetworkSettings struct {
types.NetworkSettingsBase
types.DefaultNetworkSettings
}


@@ -45,6 +45,7 @@ var dockerCommands = []Command{
{"login", "Register or log in to a Docker registry"},
{"logout", "Log out from a Docker registry"},
{"logs", "Fetch the logs of a container"},
{"network", "Manage Docker networks"},
{"pause", "Pause all processes within a container"},
{"port", "List port mappings or a specific mapping for the CONTAINER"},
{"ps", "List containers"},


@@ -303,6 +303,7 @@ __docker_capabilities() {
__docker_log_drivers() {
COMPREPLY=( $( compgen -W "
awslogs
fluentd
gelf
journald
@@ -314,15 +315,21 @@ __docker_log_drivers() {
__docker_log_driver_options() {
# see docs/reference/logging/index.md
local fluentd_options="fluentd-address tag"
local gelf_options="gelf-address tag"
local json_file_options="max-file max-size"
local syslog_options="syslog-address syslog-facility tag"
local awslogs_options="awslogs-region awslogs-group awslogs-stream"
local fluentd_options="env fluentd-address labels tag"
local gelf_options="env gelf-address labels tag"
local journald_options="env labels"
local json_file_options="env labels max-file max-size"
local syslog_options="syslog-address syslog-facility tag"
local all_options="$fluentd_options $gelf_options $journald_options $json_file_options $syslog_options"
case $(__docker_value_of_option --log-driver) in
'')
COMPREPLY=( $( compgen -W "$fluentd_options $gelf_options $json_file_options $syslog_options" -S = -- "$cur" ) )
COMPREPLY=( $( compgen -W "$all_options" -S = -- "$cur" ) )
;;
awslogs)
COMPREPLY=( $( compgen -W "$awslogs_options" -S = -- "$cur" ) )
;;
fluentd)
COMPREPLY=( $( compgen -W "$fluentd_options" -S = -- "$cur" ) )
@@ -330,15 +337,15 @@ __docker_log_driver_options() {
gelf)
COMPREPLY=( $( compgen -W "$gelf_options" -S = -- "$cur" ) )
;;
journald)
COMPREPLY=( $( compgen -W "$journald_options" -S = -- "$cur" ) )
;;
json-file)
COMPREPLY=( $( compgen -W "$json_file_options" -S = -- "$cur" ) )
;;
syslog)
COMPREPLY=( $( compgen -W "$syslog_options" -S = -- "$cur" ) )
;;
awslogs)
COMPREPLY=( $( compgen -W "$awslogs_options" -S = -- "$cur" ) )
;;
*)
return
;;
@@ -461,8 +468,37 @@ _docker_attach() {
}
_docker_build() {
local options_with_args="
--build-arg
--cgroup-parent
--cpuset-cpus
--cpuset-mems
--cpu-shares
--cpu-period
--cpu-quota
--file -f
--memory -m
--memory-swap
--tag -t
--ulimit
"
local boolean_options="
--disable-content-trust=false
--force-rm
--help
--no-cache
--pull
--quiet -q
--rm
"
local all_options="$options_with_args $boolean_options"
case "$prev" in
--cgroup-parent|--cpuset-cpus|--cpuset-mems|--cpu-shares|-c|--cpu-period|--cpu-quota|--memory|-m|--memory-swap)
--build-arg)
COMPREPLY=( $( compgen -e -- "$cur" ) )
__docker_nospace
return
;;
--file|-f)
@@ -473,14 +509,17 @@ _docker_build() {
__docker_image_repos_and_tags
return
;;
$(__docker_to_extglob "$options_with_args") )
return
;;
esac
case "$cur" in
-*)
COMPREPLY=( $( compgen -W "--cgroup-parent --cpuset-cpus --cpuset-mems --cpu-shares -c --cpu-period --cpu-quota --file -f --force-rm --help --memory -m --memory-swap --no-cache --pull --quiet -q --rm --tag -t --ulimit" -- "$cur" ) )
COMPREPLY=( $( compgen -W "$all_options" -- "$cur" ) )
;;
*)
local counter="$(__docker_pos_first_nonflag '--cgroup-parent|--cpuset-cpus|--cpuset-mems|--cpu-shares|-c|--cpu-period|--cpu-quota|--file|-f|--memory|-m|--memory-swap|--tag|-t')"
local counter=$( __docker_pos_first_nonflag $( __docker_to_alternatives "$options_with_args" ) )
if [ $cword -eq $counter ]; then
_filedir -d
fi
@@ -529,9 +568,18 @@ _docker_cp() {
return
;;
*)
# combined container and filename completion
_filedir
local files=( ${COMPREPLY[@]} )
__docker_containers_all
COMPREPLY=( $( compgen -W "${COMPREPLY[*]}" -S ':' ) )
__docker_nospace
local containers=( ${COMPREPLY[@]} )
COMPREPLY=( $( compgen -W "${files[*]} ${containers[*]}" -- "$cur" ) )
if [[ "$COMPREPLY" == *: ]]; then
__docker_nospace
fi
return
;;
esac
@@ -539,7 +587,13 @@ _docker_cp() {
(( counter++ ))
if [ $cword -eq $counter ]; then
_filedir -d
if [ -e "$prev" ]; then
__docker_containers_all
COMPREPLY=( $( compgen -W "${COMPREPLY[*]}" -S ':' ) )
__docker_nospace
else
_filedir
fi
return
fi
;;
@@ -635,6 +689,8 @@ _docker_daemon() {
dm.mountopt
dm.override_udev_sync_check
dm.thinpooldev
dm.use_deferred_deletion
dm.use_deferred_removal
"
local zfs_options="zfs.fsname"
@@ -672,7 +728,7 @@ _docker_daemon() {
case "${words[$cword-2]}$prev=" in
# completions for --storage-opt
*dm.blkdiscard=*)
*dm.@(blkdiscard|override_udev_sync_check|use_deferred_@(removal|deletion))=*)
COMPREPLY=( $( compgen -W "false true" -- "${cur#=}" ) )
return
;;
@@ -680,10 +736,6 @@ _docker_daemon() {
COMPREPLY=( $( compgen -W "ext4 xfs" -- "${cur#=}" ) )
return
;;
*dm.override_udev_sync_check=*)
COMPREPLY=( $( compgen -W "false true" -- "${cur#=}" ) )
return
;;
*dm.thinpooldev=*)
_filedir
return
@@ -865,12 +917,18 @@ _docker_images() {
}
_docker_import() {
case "$prev" in
--change|-c|--message|-m)
return
;;
esac
case "$cur" in
-*)
COMPREPLY=( $( compgen -W "--help" -- "$cur" ) )
COMPREPLY=( $( compgen -W "--change -c --help --message -m" -- "$cur" ) )
;;
*)
local counter=$(__docker_pos_first_nonflag)
local counter=$(__docker_pos_first_nonflag '--change|-c|--message|-m')
if [ $cword -eq $counter ]; then
return
fi
@@ -906,7 +964,7 @@ _docker_inspect() {
case "$cur" in
-*)
COMPREPLY=( $( compgen -W "--format -f --type --help" -- "$cur" ) )
COMPREPLY=( $( compgen -W "--format -f --help --size -s --type" -- "$cur" ) )
;;
*)
case $(__docker_value_of_option --type) in
@@ -1016,6 +1074,13 @@ _docker_network_connect() {
_docker_network_create() {
case "$prev" in
--aux-address|--gateway|--ip-range|--opt|-o|--subnet)
return
;;
--ipam-driver)
COMPREPLY=( $( compgen -W "default" -- "$cur" ) )
return
;;
--driver|-d)
# no need to suggest drivers that allow one instance only
# (host, null)
@@ -1026,7 +1091,7 @@ _docker_network_create() {
case "$cur" in
-*)
COMPREPLY=( $( compgen -W "--driver -d --help" -- "$cur" ) )
COMPREPLY=( $( compgen -W "--aux-address --driver -d --gateway --help --ip-range --ipam-driver --opt -o --subnet" -- "$cur" ) )
;;
esac
}
@@ -1043,11 +1108,7 @@ _docker_network_inspect() {
COMPREPLY=( $( compgen -W "--help" -- "$cur" ) )
;;
*)
local counter=$(__docker_pos_first_nonflag)
if [ $cword -eq $counter ]; then
__docker_networks
fi
;;
__docker_networks
esac
}
@@ -1060,7 +1121,7 @@ _docker_network_ls() {
case "$cur" in
-*)
COMPREPLY=( $( compgen -W "--help --latest -l -n --no-trunc --quiet -q" -- "$cur" ) )
COMPREPLY=( $( compgen -W "--help --no-trunc --quiet -q" -- "$cur" ) )
;;
esac
}
@@ -1272,7 +1333,7 @@ _docker_run() {
--cpu-quota
--cpuset-cpus
--cpuset-mems
--cpu-shares -c
--cpu-shares
--device
--dns
--dns-opt
@@ -1311,7 +1372,7 @@ _docker_run() {
--workdir -w
"
local all_options="$options_with_args
local boolean_options="
--disable-content-trust=false
--help
--interactive -i
@@ -1322,14 +1383,14 @@ _docker_run() {
--tty -t
"
local all_options="$options_with_args $boolean_options"
[ "$command" = "run" ] && all_options="$all_options
--detach -d
--rm
--sig-proxy=false
"
local options_with_args_glob=$(__docker_to_extglob "$options_with_args")
case "$prev" in
--add-host)
case "$cur" in
@@ -1427,7 +1488,7 @@ _docker_run() {
on-failure:*)
;;
*)
COMPREPLY=( $( compgen -W "no on-failure on-failure: always" -- "$cur") )
COMPREPLY=( $( compgen -W "always no on-failure on-failure: unless-stopped" -- "$cur") )
;;
esac
return
@@ -1454,7 +1515,7 @@ _docker_run() {
__docker_containers_all
return
;;
$options_with_args_glob )
$(__docker_to_extglob "$options_with_args") )
return
;;
esac


@@ -116,7 +116,7 @@ complete -c docker -A -f -n '__fish_seen_subcommand_from cp' -l help -d 'Print u
complete -c docker -f -n '__fish_docker_no_subcommand' -a create -d 'Create a new container'
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -s a -l attach -d 'Attach to STDIN, STDOUT or STDERR.'
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l add-host -d 'Add a custom host-to-IP mapping (host:ip)'
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -s c -l cpu-shares -d 'CPU shares (relative weight)'
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l cpu-shares -d 'CPU shares (relative weight)'
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l cap-add -d 'Add Linux capabilities'
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l cap-drop -d 'Drop Linux capabilities'
complete -c docker -A -f -n '__fish_seen_subcommand_from create' -l cidfile -d 'Write the container ID to the file'


@@ -253,18 +253,22 @@ __docker_network_subcommand() {
_arguments -A '-*' \
$opts_help \
"($help -d --driver)"{-d,--driver=}"[Driver to manage the Network]:driver:(null host bridge overlay)" \
"($help)--ipam-driver=[IP Address Management Driver]:driver:(default)" \
"($help)*--subnet=[Subnet in CIDR format that represents a network segment]:IP/mask: " \
"($help)*--ip-range=[Allocate container ip from a sub-range]:IP/mask: " \
"($help)*--gateway=[ipv4 or ipv6 Gateway for the master subnet]:IP: " \
"($help)*--aux-address[Auxiliary ipv4 or ipv6 addresses used by network driver]:key=IP: " \
"($help)*"{-o=,--opt=}"[Set driver specific options]:key=value: " \
"($help -)1:Network Name: " && ret=0
;;
(inspect|rm)
_arguments \
$opts_help \
"($help -):network:__docker_networks" && ret=0
"($help -)*:network:__docker_networks" && ret=0
;;
(ls)
_arguments \
$opts_help \
"($help -l --latest)"{-l,--latest}"[Show the latest network created]" \
"($help)-n=-[Show n last created networks]:Number of networks: " \
"($help)--no-trunc[Do not truncate the output]" \
"($help -q --quiet)"{-q,--quiet}"[Only display numeric IDs]" && ret=0
;;
@@ -330,20 +334,20 @@ __docker_volume_subcommand() {
(create)
_arguments \
$opts_help \
"($help -d --driver)"{-d,--driver=-}"[Specify volume driver name]:Driver name: " \
"($help)--name=-[Specify volume name]" \
"($help -o --opt)*"{-o,--opt=-}"[Set driver specific options]:Driver option: " && ret=0
"($help -d --driver)"{-d,--driver=}"[Specify volume driver name]:Driver name: " \
"($help)--name=[Specify volume name]" \
"($help)*"{-o,--opt=}"[Set driver specific options]:Driver option: " && ret=0
;;
(inspect)
_arguments \
$opts_help \
"($help -f --format)"{-f,--format=-}"[Format the output using the given go template]:template: " \
"($help -f --format)"{-f,--format=}"[Format the output using the given go template]:template: " \
"($help -)1:volume:__docker_volumes" && ret=0
;;
(ls)
_arguments \
$opts_help \
"($help -f --filter)*"{-f,--filter=-}"[Provide filter values (i.e. 'dangling=true')]:filter: " \
"($help)*"{-f,--filter=}"[Provide filter values (i.e. 'dangling=true')]:filter: " \
"($help -q --quiet)"{-q,--quiet}"[Only display volume names]" && ret=0
;;
(rm)
@@ -391,57 +395,57 @@ __docker_subcommand() {
opts_help=("(: -)--help[Print usage]")
opts_cpumemlimit=(
"($help -c --cpu-shares)"{-c,--cpu-shares=-}"[CPU shares (relative weight)]:CPU shares:(0 10 100 200 500 800 1000)"
"($help)--cgroup-parent=-[Parent cgroup for the container]:cgroup: "
"($help)--cpu-period=-[Limit the CPU CFS (Completely Fair Scheduler) period]:CPU period: "
"($help)--cpu-quota=-[Limit the CPU CFS (Completely Fair Scheduler) quota]:CPU quota: "
"($help)--cpuset-cpus=-[CPUs in which to allow execution]:CPUs: "
"($help)--cpuset-mems=-[MEMs in which to allow execution]:MEMs: "
"($help -m --memory)"{-m,--memory=-}"[Memory limit]:Memory limit: "
"($help)--memory-swap=-[Total memory limit with swap]:Memory limit: "
"($help)*--ulimit=-[ulimit options]:ulimit: "
"($help)--cpu-shares=[CPU shares (relative weight)]:CPU shares:(0 10 100 200 500 800 1000)"
"($help)--cgroup-parent=[Parent cgroup for the container]:cgroup: "
"($help)--cpu-period=[Limit the CPU CFS (Completely Fair Scheduler) period]:CPU period: "
"($help)--cpu-quota=[Limit the CPU CFS (Completely Fair Scheduler) quota]:CPU quota: "
"($help)--cpuset-cpus=[CPUs in which to allow execution]:CPUs: "
"($help)--cpuset-mems=[MEMs in which to allow execution]:MEMs: "
"($help -m --memory)"{-m,--memory=}"[Memory limit]:Memory limit: "
"($help)--memory-swap=[Total memory limit with swap]:Memory limit: "
"($help)*--ulimit=[ulimit options]:ulimit: "
)
opts_create=(
"($help -a --attach)"{-a,--attach=-}"[Attach to stdin, stdout or stderr]:device:(STDIN STDOUT STDERR)"
"($help)*--add-host=-[Add a custom host-to-IP mapping]:host\:ip mapping: "
"($help)--blkio-weight=-[Block IO (relative weight), between 10 and 1000]:Block IO weight:(10 100 500 1000)"
"($help)*--cap-add=-[Add Linux capabilities]:capability: "
"($help)*--cap-drop=-[Drop Linux capabilities]:capability: "
"($help)--cidfile=-[Write the container ID to the file]:CID file:_files"
"($help)*--device=-[Add a host device to the container]:device:_files"
"($help)*--dns=-[Set custom DNS servers]:DNS server: "
"($help)*--dns-opt=-[Set custom DNS options]:DNS option: "
"($help)*--dns-search=-[Set custom DNS search domains]:DNS domains: "
"($help)*"{-e,--env=-}"[Set environment variables]:environment variable: "
"($help)--entrypoint=-[Overwrite the default entrypoint of the image]:entry point: "
"($help)*--env-file=-[Read environment variables from a file]:environment file:_files"
"($help)*--expose=-[Expose a port from the container without publishing it]: "
"($help)*--group-add=-[Add additional groups to run as]:group:_groups"
"($help -h --hostname)"{-h,--hostname=-}"[Container host name]:hostname:_hosts"
"($help -a --attach)"{-a,--attach=}"[Attach to stdin, stdout or stderr]:device:(STDIN STDOUT STDERR)"
"($help)*--add-host=[Add a custom host-to-IP mapping]:host\:ip mapping: "
"($help)--blkio-weight=[Block IO (relative weight), between 10 and 1000]:Block IO weight:(10 100 500 1000)"
"($help)*--cap-add=[Add Linux capabilities]:capability: "
"($help)*--cap-drop=[Drop Linux capabilities]:capability: "
"($help)--cidfile=[Write the container ID to the file]:CID file:_files"
"($help)*--device=[Add a host device to the container]:device:_files"
"($help)*--dns=[Set custom DNS servers]:DNS server: "
"($help)*--dns-opt=[Set custom DNS options]:DNS option: "
"($help)*--dns-search=[Set custom DNS search domains]:DNS domains: "
"($help)*"{-e,--env=}"[Set environment variables]:environment variable: "
"($help)--entrypoint=[Overwrite the default entrypoint of the image]:entry point: "
"($help)*--env-file=[Read environment variables from a file]:environment file:_files"
"($help)*--expose=[Expose a port from the container without publishing it]: "
"($help)*--group-add=[Add additional groups to run as]:group:_groups"
"($help -h --hostname)"{-h,--hostname=}"[Container host name]:hostname:_hosts"
"($help -i --interactive)"{-i,--interactive}"[Keep stdin open even if not attached]"
"($help)--ipc=-[IPC namespace to use]:IPC namespace: "
"($help)--ipc=[IPC namespace to use]:IPC namespace: "
"($help)--kernel-memory[Kernel memory limit in bytes.]:Memory limit: "
"($help)*--link=-[Add link to another container]:link:->link"
"($help)*"{-l,--label=-}"[Set meta data on a container]:label: "
"($help)--log-driver=-[Default driver for container logs]:Logging driver:(json-file syslog journald gelf fluentd awslogs none)"
"($help)*--log-opt=-[Log driver specific options]:log driver options: "
"($help)*--lxc-conf=-[Add custom lxc options]:lxc options: "
"($help)--mac-address=-[Container MAC address]:MAC address: "
"($help)--name=-[Container name]:name: "
"($help)--net=-[Connect a container to a network]:network mode:(bridge none container host)"
"($help)*--link=[Add link to another container]:link:->link"
"($help)*"{-l,--label=}"[Set meta data on a container]:label: "
"($help)--log-driver=[Default driver for container logs]:Logging driver:(json-file syslog journald gelf fluentd awslogs none)"
"($help)*--log-opt=[Log driver specific options]:log driver options: "
"($help)*--lxc-conf=[Add custom lxc options]:lxc options: "
"($help)--mac-address=[Container MAC address]:MAC address: "
"($help)--name=[Container name]:name: "
"($help)--net=[Connect a container to a network]:network mode:(bridge none container host)"
"($help)--oom-kill-disable[Disable OOM Killer]"
"($help -P --publish-all)"{-P,--publish-all}"[Publish all exposed ports]"
"($help)*"{-p,--publish=-}"[Expose a container's port to the host]:port:_ports"
"($help)--pid=-[PID namespace to use]:PID: "
"($help)*"{-p,--publish=}"[Expose a container's port to the host]:port:_ports"
"($help)--pid=[PID namespace to use]:PID: "
"($help)--privileged[Give extended privileges to this container]"
"($help)--read-only[Mount the container's root filesystem as read only]"
"($help)--restart=-[Restart policy]:restart policy:(no on-failure always)"
"($help)*--security-opt=-[Security options]:security option: "
"($help)--restart=[Restart policy]:restart policy:(no on-failure always unless-stopped)"
"($help)*--security-opt=[Security options]:security option: "
"($help -t --tty)"{-t,--tty}"[Allocate a pseudo-tty]"
"($help -u --user)"{-u,--user=-}"[Username or UID]:user:_users"
"($help -u --user)"{-u,--user=}"[Username or UID]:user:_users"
"($help)*-v[Bind mount a volume]:volume: "
"($help)*--volumes-from=-[Mount volumes from the specified container]:volume: "
"($help -w --workdir)"{-w,--workdir=-}"[Working directory inside the container]:directory:_directories"
"($help)*--volumes-from=[Mount volumes from the specified container]:volume: "
"($help -w --workdir)"{-w,--workdir=}"[Working directory inside the container]:directory:_directories"
)
case "$words[1]" in
@@ -456,21 +460,22 @@ __docker_subcommand() {
_arguments \
$opts_help \
$opts_cpumemlimit \
"($help -f --file)"{-f,--file=-}"[Name of the Dockerfile]:Dockerfile:_files" \
"($help)*--build-arg[Set build-time variables]:<varname>=<value>: " \
"($help -f --file)"{-f,--file=}"[Name of the Dockerfile]:Dockerfile:_files" \
"($help)--force-rm[Always remove intermediate containers]" \
"($help)--no-cache[Do not use cache when building the image]" \
"($help)--pull[Attempt to pull a newer version of the image]" \
"($help -q --quiet)"{-q,--quiet}"[Suppress verbose build output]" \
"($help)--rm[Remove intermediate containers after a successful build]" \
"($help -t --tag)"{-t,--tag=-}"[Repository, name and tag for the image]: :__docker_repositories_with_tags" \
"($help -t --tag)"{-t,--tag=}"[Repository, name and tag for the image]: :__docker_repositories_with_tags" \
"($help -):path or URL:_directories" && ret=0
;;
(commit)
_arguments \
$opts_help \
"($help -a --author)"{-a,--author=-}"[Author]:author: " \
"($help -c --change)*"{-c,--change=-}"[Apply Dockerfile instruction to the created image]:Dockerfile:_files" \
"($help -m --message)"{-m,--message=-}"[Commit message]:message: " \
"($help -a --author)"{-a,--author=}"[Author]:author: " \
"($help)*"{-c,--change=}"[Apply Dockerfile instruction to the created image]:Dockerfile:_files" \
"($help -m --message)"{-m,--message=}"[Commit message]:message: " \
"($help -p --pause)"{-p,--pause}"[Pause container during commit]" \
"($help -):container:__docker_containers" \
"($help -): :__docker_repositories_with_tags" && ret=0
@@ -513,49 +518,49 @@ __docker_subcommand() {
(daemon)
_arguments \
$opts_help \
"($help)--api-cors-header=-[Set CORS headers in the remote API]:CORS headers: " \
"($help -b --bridge)"{-b,--bridge=-}"[Attach containers to a network bridge]:bridge:_net_interfaces" \
"($help)--bip=-[Specify network bridge IP]" \
"($help)--api-cors-header=[Set CORS headers in the remote API]:CORS headers: " \
"($help -b --bridge)"{-b,--bridge=}"[Attach containers to a network bridge]:bridge:_net_interfaces" \
"($help)--bip=[Specify network bridge IP]" \
"($help -D --debug)"{-D,--debug}"[Enable debug mode]" \
"($help)--default-gateway[Container default gateway IPv4 address]:IPv4 address: " \
"($help)--default-gateway-v6[Container default gateway IPv6 address]:IPv6 address: " \
"($help)--cluster-store=-[URL of the distributed storage backend]:Cluster Store:->cluster-store" \
"($help)--cluster-advertise=-[Address of the daemon instance to advertise]:Instance to advertise (host\:port): " \
"($help)--cluster-store=[URL of the distributed storage backend]:Cluster Store:->cluster-store" \
"($help)--cluster-advertise=[Address of the daemon instance to advertise]:Instance to advertise (host\:port): " \
"($help)*--cluster-store-opt[Set cluster options]:Cluster options:->cluster-store-options" \
"($help)*--dns=-[DNS server to use]:DNS: " \
"($help)*--dns-search=-[DNS search domains to use]:DNS search: " \
"($help)*--dns-opt=-[DNS options to use]:DNS option: " \
"($help)*--default-ulimit=-[Set default ulimit settings for containers]:ulimit: " \
"($help)*--dns=[DNS server to use]:DNS: " \
"($help)*--dns-search=[DNS search domains to use]:DNS search: " \
"($help)*--dns-opt=[DNS options to use]:DNS option: " \
"($help)*--default-ulimit=[Set default ulimit settings for containers]:ulimit: " \
"($help)--disable-legacy-registry[Do not contact legacy registries]" \
"($help -e --exec-driver)"{-e,--exec-driver=-}"[Exec driver to use]:driver:(native lxc windows)" \
"($help)*--exec-opt=-[Set exec driver options]:exec driver options: " \
"($help)--exec-root=-[Root of the Docker execdriver]:path:_directories" \
"($help)--fixed-cidr=-[IPv4 subnet for fixed IPs]:IPv4 subnet: " \
"($help)--fixed-cidr-v6=-[IPv6 subnet for fixed IPs]:IPv6 subnet: " \
"($help -G --group)"{-G,--group=-}"[Group for the unix socket]:group:_groups" \
"($help -g --graph)"{-g,--graph=-}"[Root of the Docker runtime]:path:_directories" \
"($help -H --host)"{-H,--host=-}"[tcp://host:port to bind/connect to]:host: " \
"($help -e --exec-driver)"{-e,--exec-driver=}"[Exec driver to use]:driver:(native lxc windows)" \
"($help)*--exec-opt=[Set exec driver options]:exec driver options: " \
"($help)--exec-root=[Root of the Docker execdriver]:path:_directories" \
"($help)--fixed-cidr=[IPv4 subnet for fixed IPs]:IPv4 subnet: " \
"($help)--fixed-cidr-v6=[IPv6 subnet for fixed IPs]:IPv6 subnet: " \
"($help -G --group)"{-G,--group=}"[Group for the unix socket]:group:_groups" \
"($help -g --graph)"{-g,--graph=}"[Root of the Docker runtime]:path:_directories" \
"($help -H --host)"{-H,--host=}"[tcp://host:port to bind/connect to]:host: " \
"($help)--icc[Enable inter-container communication]" \
"($help)*--insecure-registry=-[Enable insecure registry communication]:registry: " \
"($help)--ip=-[Default IP when binding container ports]" \
"($help)*--insecure-registry=[Enable insecure registry communication]:registry: " \
"($help)--ip=[Default IP when binding container ports]" \
"($help)--ip-forward[Enable net.ipv4.ip_forward]" \
"($help)--ip-masq[Enable IP masquerading]" \
"($help)--iptables[Enable addition of iptables rules]" \
"($help)--ipv6[Enable IPv6 networking]" \
"($help -l --log-level)"{-l,--log-level=-}"[Set the logging level]:level:(debug info warn error fatal)" \
"($help)*--label=-[Set key=value labels to the daemon]:label: " \
"($help)--log-driver=-[Default driver for container logs]:Logging driver:(json-file syslog journald gelf fluentd awslogs none)" \
"($help)*--log-opt=-[Log driver specific options]:log driver options: " \
"($help)--mtu=-[Set the containers network MTU]:mtu:(0 576 1420 1500 9000)" \
"($help -p --pidfile)"{-p,--pidfile=-}"[Path to use for daemon PID file]:PID file:_files" \
"($help)*--registry-mirror=-[Preferred Docker registry mirror]:registry mirror: " \
"($help -s --storage-driver)"{-s,--storage-driver=-}"[Storage driver to use]:driver:(aufs devicemapper btrfs zfs overlay)" \
"($help -l --log-level)"{-l,--log-level=}"[Set the logging level]:level:(debug info warn error fatal)" \
"($help)*--label=[Set key=value labels to the daemon]:label: " \
"($help)--log-driver=[Default driver for container logs]:Logging driver:(json-file syslog journald gelf fluentd awslogs none)" \
"($help)*--log-opt=[Log driver specific options]:log driver options: " \
"($help)--mtu=[Set the containers network MTU]:mtu:(0 576 1420 1500 9000)" \
"($help -p --pidfile)"{-p,--pidfile=}"[Path to use for daemon PID file]:PID file:_files" \
"($help)*--registry-mirror=[Preferred Docker registry mirror]:registry mirror: " \
"($help -s --storage-driver)"{-s,--storage-driver=}"[Storage driver to use]:driver:(aufs devicemapper btrfs zfs overlay)" \
"($help)--selinux-enabled[Enable selinux support]" \
"($help)*--storage-opt=-[Set storage driver options]:storage driver options: " \
"($help)*--storage-opt=[Set storage driver options]:storage driver options: " \
"($help)--tls[Use TLS]" \
"($help)--tlscacert=-[Trust certs signed only by this CA]:PEM file:_files -g "*.(pem|crt)"" \
"($help)--tlscert=-[Path to TLS certificate file]:PEM file:_files -g "*.(pem|crt)"" \
"($help)--tlskey=-[Path to TLS key file]:Key file:_files -g "*.(pem|key)"" \
"($help)--tlscacert=[Trust certs signed only by this CA]:PEM file:_files -g "*.(pem|crt)"" \
"($help)--tlscert=[Path to TLS certificate file]:PEM file:_files -g "*.(pem|crt)"" \
"($help)--tlskey=[Path to TLS key file]:Key file:_files -g "*.(pem|key)"" \
"($help)--tlsverify[Use TLS and verify the remote]" \
"($help)--userland-proxy[Use userland proxy for loopback traffic]" && ret=0
@@ -586,9 +591,9 @@ __docker_subcommand() {
(events)
_arguments \
$opts_help \
"($help)*"{-f,--filter=-}"[Filter values]:filter: " \
"($help)--since=-[Events created since this timestamp]:timestamp: " \
"($help)--until=-[Events created until this timestamp]:timestamp: " && ret=0
"($help)*"{-f,--filter=}"[Filter values]:filter: " \
"($help)--since=[Events created since this timestamp]:timestamp: " \
"($help)--until=[Events created until this timestamp]:timestamp: " && ret=0
;;
(exec)
local state
@@ -598,7 +603,7 @@ __docker_subcommand() {
"($help -i --interactive)"{-i,--interactive}"[Keep stdin open even if not attached]" \
"($help)--privileged[Give extended Linux capabilities to the command]" \
"($help -t --tty)"{-t,--tty}"[Allocate a pseudo-tty]" \
"($help -u --user)"{-u,--user=-}"[Username or UID]:user:_users" \
"($help -u --user)"{-u,--user=}"[Username or UID]:user:_users" \
"($help -):containers:__docker_runningcontainers" \
"($help -)*::command:->anycommand" && ret=0
@@ -613,7 +618,7 @@ __docker_subcommand() {
(export)
_arguments \
$opts_help \
"($help -o --output)"{-o,--output=-}"[Write to a file, instead of stdout]:output file:_files" \
"($help -o --output)"{-o,--output=}"[Write to a file, instead of stdout]:output file:_files" \
"($help -)*:containers:__docker_containers" && ret=0
;;
(history)
@@ -629,7 +634,7 @@ __docker_subcommand() {
$opts_help \
"($help -a --all)"{-a,--all}"[Show all images]" \
"($help)--digest[Show digests]" \
"($help)*"{-f,--filter=-}"[Filter values]:filter: " \
"($help)*"{-f,--filter=}"[Filter values]:filter: " \
"($help)--no-trunc[Do not truncate output]" \
"($help -q --quiet)"{-q,--quiet}"[Only show numeric IDs]" \
"($help -): :__docker_repositories" && ret=0
@@ -637,7 +642,8 @@ __docker_subcommand() {
(import)
_arguments \
$opts_help \
"($help -c --change)*"{-c,--change=-}"[Apply Dockerfile instruction to the created image]:Dockerfile:_files" \
"($help)*"{-c,--change=}"[Apply Dockerfile instruction to the created image]:Dockerfile:_files" \
"($help -m --message)"{-m,--message=}"[Set commit message for imported image]:message: " \
"($help -):URL:(- http:// file://)" \
"($help -): :__docker_repositories_with_tags" && ret=0
;;
@@ -649,9 +655,9 @@ __docker_subcommand() {
local state
_arguments \
$opts_help \
"($help -f --format=-)"{-f,--format=-}"[Format the output using the given go template]:template: " \
"($help -f --format)"{-f,--format=}"[Format the output using the given go template]:template: " \
"($help -s --size)"{-s,--size}"[Display total file sizes if the type is container]" \
"($help)--type=-[Return JSON for specified type]:type:(image container)" \
"($help)--type=[Return JSON for specified type]:type:(image container)" \
"($help -)*: :->values" && ret=0
case $state in
@@ -669,20 +675,20 @@ __docker_subcommand() {
(kill)
_arguments \
$opts_help \
"($help -s --signal)"{-s,--signal=-}"[Signal to send]:signal:_signals" \
"($help -s --signal)"{-s,--signal=}"[Signal to send]:signal:_signals" \
"($help -)*:containers:__docker_runningcontainers" && ret=0
;;
(load)
_arguments \
$opts_help \
"($help -i --input)"{-i,--input=-}"[Read from tar archive file]:archive file:_files -g "*.((tar|TAR)(.gz|.GZ|.Z|.bz2|.lzma|.xz|)|(tbz|tgz|txz))(-.)"" && ret=0
"($help -i --input)"{-i,--input=}"[Read from tar archive file]:archive file:_files -g "*.((tar|TAR)(.gz|.GZ|.Z|.bz2|.lzma|.xz|)|(tbz|tgz|txz))(-.)"" && ret=0
;;
(login)
_arguments \
$opts_help \
"($help -e --email)"{-e,--email=-}"[Email]:email: " \
"($help -p --password)"{-p,--password=-}"[Password]:password: " \
"($help -u --user)"{-u,--user=-}"[Username]:username: " \
"($help -e --email)"{-e,--email=}"[Email]:email: " \
"($help -p --password)"{-p,--password=}"[Password]:password: " \
"($help -u --user)"{-u,--user=}"[Username]:username: " \
"($help -)1:server: " && ret=0
;;
(logout)
@@ -694,9 +700,9 @@ __docker_subcommand() {
_arguments \
$opts_help \
"($help -f --follow)"{-f,--follow}"[Follow log output]" \
"($help -s --since)"{-s,--since=-}"[Show logs since this timestamp]:timestamp: " \
"($help -s --since)"{-s,--since=}"[Show logs since this timestamp]:timestamp: " \
"($help -t --timestamps)"{-t,--timestamps}"[Show timestamps]" \
"($help)--tail=-[Output the last K lines]:lines:(1 10 20 50 all)" \
"($help)--tail=[Output the last K lines]:lines:(1 10 20 50 all)" \
"($help -)*:containers:__docker_containers" && ret=0
;;
(network)
@@ -731,15 +737,15 @@ __docker_subcommand() {
_arguments \
$opts_help \
"($help -a --all)"{-a,--all}"[Show all containers]" \
"($help)--before=-[Show only container created before...]:containers:__docker_containers" \
"($help)*"{-f,--filter=-}"[Filter values]:filter: " \
"($help)--before=[Show only container created before...]:containers:__docker_containers" \
"($help)*"{-f,--filter=}"[Filter values]:filter: " \
"($help)--format[Pretty-print containers using a Go template]:format: " \
"($help -l --latest)"{-l,--latest}"[Show only the latest created container]" \
"($help)-n[Show n last created containers, include non-running one]:n:(1 5 10 25 50)" \
"($help)--no-trunc[Do not truncate output]" \
"($help -q --quiet)"{-q,--quiet}"[Only show numeric IDs]" \
"($help -s --size)"{-s,--size}"[Display total file sizes]" \
"($help)--since=-[Show only containers created since...]:containers:__docker_containers" && ret=0
"($help)--since=[Show only containers created since...]:containers:__docker_containers" && ret=0
;;
(pull)
_arguments \
@@ -761,7 +767,7 @@ __docker_subcommand() {
(restart|stop)
_arguments \
$opts_help \
"($help -t --time=-)"{-t,--time=-}"[Number of seconds to try to stop for before killing the container]:seconds to before killing:(1 5 10 30 60)" \
"($help -t --time)"{-t,--time=}"[Number of seconds to try to stop for before killing the container]:seconds to before killing:(1 5 10 30 60)" \
"($help -)*:containers:__docker_runningcontainers" && ret=0
;;
(rm)
@@ -787,7 +793,7 @@ __docker_subcommand() {
"($help -d --detach)"{-d,--detach}"[Detached mode: leave the container running in the background]" \
"($help)--rm[Remove intermediate containers when it exits]" \
"($help)--sig-proxy[Proxy all received signals to the process (non-TTY mode only)]" \
"($help)--stop-signal=-[Signal to kill a container]:signal:_signals" \
"($help)--stop-signal=[Signal to kill a container]:signal:_signals" \
"($help -): :__docker_images" \
"($help -):command: _command_names -e" \
"($help -)*::arguments: _normal" && ret=0
@@ -806,7 +812,7 @@ __docker_subcommand() {
(save)
_arguments \
$opts_help \
"($help -o --output)"{-o,--output=-}"[Write to file]:file:_files" \
"($help -o --output)"{-o,--output=}"[Write to file]:file:_files" \
"($help -)*: :__docker_images" && ret=0
;;
(search)
@@ -814,7 +820,7 @@ __docker_subcommand() {
$opts_help \
"($help)--automated[Only show automated builds]" \
"($help)--no-trunc[Do not truncate output]" \
"($help -s --stars)"{-s,--stars=-}"[Only display with at least X stars]:stars:(0 10 100 1000)" \
"($help -s --stars)"{-s,--stars=}"[Only display with at least X stars]:stars:(0 10 100 1000)" \
"($help -):term: " && ret=0
;;
(start)
@@ -895,12 +901,12 @@ _docker() {
"(: -)"{-h,--help}"[Print usage]" \
"($help)--config[Location of client config files]:path:_directories" \
"($help -D --debug)"{-D,--debug}"[Enable debug mode]" \
"($help -H --host)"{-H,--host=-}"[tcp://host:port to bind/connect to]:host: " \
"($help -l --log-level)"{-l,--log-level=-}"[Set the logging level]:level:(debug info warn error fatal)" \
"($help -H --host)"{-H,--host=}"[tcp://host:port to bind/connect to]:host: " \
"($help -l --log-level)"{-l,--log-level=}"[Set the logging level]:level:(debug info warn error fatal)" \
"($help)--tls[Use TLS]" \
"($help)--tlscacert=-[Trust certs signed only by this CA]:PEM file:_files -g "*.(pem|crt)"" \
"($help)--tlscert=-[Path to TLS certificate file]:PEM file:_files -g "*.(pem|crt)"" \
"($help)--tlskey=-[Path to TLS key file]:Key file:_files -g "*.(pem|key)"" \
"($help)--tlscacert=[Trust certs signed only by this CA]:PEM file:_files -g "*.(pem|crt)"" \
"($help)--tlscert=[Path to TLS certificate file]:PEM file:_files -g "*.(pem|crt)"" \
"($help)--tlskey=[Path to TLS key file]:Key file:_files -g "*.(pem|key)"" \
"($help)--tlsverify[Use TLS and verify the remote]" \
"($help)--userland-proxy[Use userland proxy for loopback traffic]" \
"($help -v --version)"{-v,--version}"[Print version information and quit]" \


@@ -71,7 +71,7 @@ func (config *Config) InstallCommonFlags(cmd *flag.FlagSet, usageFn func(string)
cmd.Var(opts.NewListOptsRef(&config.Labels, opts.ValidateLabel), []string{"-label"}, usageFn("Set key=value labels to the daemon"))
cmd.StringVar(&config.LogConfig.Type, []string{"-log-driver"}, "json-file", usageFn("Default driver for container logs"))
cmd.Var(opts.NewMapOpts(config.LogConfig.Config, nil), []string{"-log-opt"}, usageFn("Set log driver options"))
cmd.StringVar(&config.ClusterAdvertise, []string{"-cluster-advertise"}, "", usageFn("Address of the daemon instance to advertise"))
cmd.StringVar(&config.ClusterAdvertise, []string{"-cluster-advertise"}, "", usageFn("Address or interface name to advertise"))
cmd.StringVar(&config.ClusterStore, []string{"-cluster-store"}, "", usageFn("Set the cluster store"))
cmd.Var(opts.NewMapOpts(config.ClusterOpts, nil), []string{"-cluster-store-opt"}, usageFn("Set cluster store options"))
}
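The updated `--cluster-advertise` help text reflects that the flag accepts an interface name as well as an address; the daemon diff below resolves it through `discovery.ParseAdvertise`. As a simplified, hedged sketch of what such resolution has to do (the function name and abbreviated error handling here are illustrative, not the real implementation):

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

// resolveAdvertise is a simplified sketch of what discovery.ParseAdvertise
// must do: "--cluster-advertise" may be "<ip>:<port>" or
// "<interface-name>:<port>", and an interface name has to be resolved to
// one of its addresses. Error handling is abbreviated for illustration.
func resolveAdvertise(advertise string) (string, error) {
	host, port, err := net.SplitHostPort(advertise)
	if err != nil {
		return "", err
	}
	if net.ParseIP(host) != nil {
		return advertise, nil // already a literal address, pass it through
	}
	iface, err := net.InterfaceByName(host)
	if err != nil {
		return "", fmt.Errorf("invalid address or interface %q: %v", host, err)
	}
	addrs, err := iface.Addrs()
	if err != nil || len(addrs) == 0 {
		return "", fmt.Errorf("no addresses on interface %q", host)
	}
	ip := strings.Split(addrs[0].String(), "/")[0] // strip the CIDR suffix
	return net.JoinHostPort(ip, port), nil
}

func main() {
	addr, _ := resolveAdvertise("192.168.1.5:2376")
	fmt.Println(addr) // 192.168.1.5:2376
}
```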


@@ -328,10 +328,6 @@ func (streamConfig *streamConfig) StderrPipe() io.ReadCloser {
return ioutils.NewBufReader(reader)
}
func (container *Container) isNetworkAllocated() bool {
return container.NetworkSettings.IPAddress != ""
}
// cleanup releases any network resources allocated to the container along with any rules
// around how containers are linked together. It also unmounts the container's root filesystem.
func (container *Container) cleanup() {


@@ -89,15 +89,25 @@ func (container *Container) setupLinkedContainers() ([]string, error) {
return nil, err
}
bridgeSettings := container.NetworkSettings.Networks["bridge"]
if bridgeSettings == nil {
return nil, nil
}
if len(children) > 0 {
for linkAlias, child := range children {
if !child.IsRunning() {
return nil, derr.ErrorCodeLinkNotRunning.WithArgs(child.Name, linkAlias)
}
childBridgeSettings := child.NetworkSettings.Networks["bridge"]
if childBridgeSettings == nil {
return nil, fmt.Errorf("container %s not attached to default bridge network", child.ID)
}
link := links.NewLink(
container.NetworkSettings.IPAddress,
child.NetworkSettings.IPAddress,
bridgeSettings.IPAddress,
childBridgeSettings.IPAddress,
linkAlias,
child.Config.Env,
child.Config.ExposedPorts,
@@ -218,6 +228,12 @@ func populateCommand(c *Container, env []string) error {
} else {
ipc.HostIpc = c.hostConfig.IpcMode.IsHost()
if ipc.HostIpc {
if _, err := os.Stat("/dev/shm"); err != nil {
return fmt.Errorf("/dev/shm is not mounted, but must be for --ipc=host")
}
if _, err := os.Stat("/dev/mqueue"); err != nil {
return fmt.Errorf("/dev/mqueue is not mounted, but must be for --ipc=host")
}
c.ShmPath = "/dev/shm"
c.MqueuePath = "/dev/mqueue"
}
@@ -525,6 +541,9 @@ func (container *Container) buildSandboxOptions(n libnetwork.Network) ([]libnetw
}
for linkAlias, child := range children {
if !isLinkable(child) {
return nil, fmt.Errorf("Cannot link to %s, as it does not belong to the default network", child.Name)
}
_, alias := path.Split(linkAlias)
// allow access to the linked container via the alias, real name, and container hostname
aliasList := alias + " " + child.Config.Hostname
@@ -532,13 +551,14 @@ func (container *Container) buildSandboxOptions(n libnetwork.Network) ([]libnetw
if alias != child.Name[1:] {
aliasList = aliasList + " " + child.Name[1:]
}
sboxOptions = append(sboxOptions, libnetwork.OptionExtraHost(aliasList, child.NetworkSettings.IPAddress))
sboxOptions = append(sboxOptions, libnetwork.OptionExtraHost(aliasList, child.NetworkSettings.Networks["bridge"].IPAddress))
cEndpoint, _ := child.getEndpointInNetwork(n)
if cEndpoint != nil && cEndpoint.ID() != "" {
childEndpoints = append(childEndpoints, cEndpoint.ID())
}
}
bridgeSettings := container.NetworkSettings.Networks["bridge"]
refs := container.daemon.containerGraph().RefPaths(container.ID)
for _, ref := range refs {
if ref.ParentID == "0" {
@@ -551,8 +571,8 @@ func (container *Container) buildSandboxOptions(n libnetwork.Network) ([]libnetw
}
if c != nil && !container.daemon.configStore.DisableBridge && container.hostConfig.NetworkMode.IsPrivate() {
logrus.Debugf("Update /etc/hosts of %s for alias %s with ip %s", c.ID, ref.Name, container.NetworkSettings.IPAddress)
sboxOptions = append(sboxOptions, libnetwork.OptionParentUpdate(c.ID, ref.Name, container.NetworkSettings.IPAddress))
logrus.Debugf("Update /etc/hosts of %s for alias %s with ip %s", c.ID, ref.Name, bridgeSettings.IPAddress)
sboxOptions = append(sboxOptions, libnetwork.OptionParentUpdate(c.ID, ref.Name, bridgeSettings.IPAddress))
if ep.ID() != "" {
parentEndpoints = append(parentEndpoints, ep.ID())
}
@@ -571,6 +591,12 @@ func (container *Container) buildSandboxOptions(n libnetwork.Network) ([]libnetw
return sboxOptions, nil
}
func isLinkable(child *Container) bool {
// A container is linkable only if it belongs to the default network
_, ok := child.NetworkSettings.Networks["bridge"]
return ok
}
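The new `isLinkable` helper above is just Go's comma-ok map-membership test against the container's `Networks` map. A minimal standalone sketch of the idiom (the `endpointSettings` stub stands in for Docker's `network.EndpointSettings`):

```go
package main

import "fmt"

// endpointSettings is a hypothetical stub standing in for Docker's
// network.EndpointSettings type, just to make the sketch self-contained.
type endpointSettings struct{ IPAddress string }

// isLinkable mirrors the helper above: a container is linkable only if
// its Networks map has an entry for the default "bridge" network.
func isLinkable(networks map[string]*endpointSettings) bool {
	_, ok := networks["bridge"]
	return ok
}

func main() {
	onBridge := map[string]*endpointSettings{"bridge": {IPAddress: "172.17.0.2"}}
	custom := map[string]*endpointSettings{"mynet": {IPAddress: "10.0.0.2"}}
	fmt.Println(isLinkable(onBridge)) // true
	fmt.Println(isLinkable(custom))   // false
}
```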
func (container *Container) getEndpointInNetwork(n libnetwork.Network) (libnetwork.Endpoint, error) {
endpointName := strings.TrimPrefix(container.Name, "/")
return n.EndpointByName(endpointName)
@@ -595,10 +621,6 @@ func (container *Container) buildPortMapInfo(ep libnetwork.Endpoint, networkSett
return networkSettings, nil
}
if mac, ok := driverInfo[netlabel.MacAddress]; ok {
networkSettings.MacAddress = mac.(net.HardwareAddr).String()
}
networkSettings.Ports = nat.PortMap{}
if expData, ok := driverInfo[netlabel.ExposedPorts]; ok {
@@ -632,7 +654,7 @@ func (container *Container) buildPortMapInfo(ep libnetwork.Endpoint, networkSett
return networkSettings, nil
}
func (container *Container) buildEndpointInfo(ep libnetwork.Endpoint, networkSettings *network.Settings) (*network.Settings, error) {
func (container *Container) buildEndpointInfo(n libnetwork.Network, ep libnetwork.Endpoint, networkSettings *network.Settings) (*network.Settings, error) {
if ep == nil {
return nil, derr.ErrorCodeEmptyEndpoint
}
@@ -647,36 +669,50 @@ func (container *Container) buildEndpointInfo(ep libnetwork.Endpoint, networkSet
return networkSettings, nil
}
if _, ok := networkSettings.Networks[n.Name()]; !ok {
networkSettings.Networks[n.Name()] = new(network.EndpointSettings)
}
networkSettings.Networks[n.Name()].EndpointID = ep.ID()
iface := epInfo.Iface()
if iface == nil {
return networkSettings, nil
}
if iface.MacAddress() != nil {
networkSettings.Networks[n.Name()].MacAddress = iface.MacAddress().String()
}
if iface.Address() != nil {
ones, _ := iface.Address().Mask.Size()
networkSettings.IPAddress = iface.Address().IP.String()
networkSettings.IPPrefixLen = ones
networkSettings.Networks[n.Name()].IPAddress = iface.Address().IP.String()
networkSettings.Networks[n.Name()].IPPrefixLen = ones
}
if iface.AddressIPv6() != nil && iface.AddressIPv6().IP.To16() != nil {
onesv6, _ := iface.AddressIPv6().Mask.Size()
networkSettings.GlobalIPv6Address = iface.AddressIPv6().IP.String()
networkSettings.GlobalIPv6PrefixLen = onesv6
networkSettings.Networks[n.Name()].GlobalIPv6Address = iface.AddressIPv6().IP.String()
networkSettings.Networks[n.Name()].GlobalIPv6PrefixLen = onesv6
}
return networkSettings, nil
}
func (container *Container) updateJoinInfo(ep libnetwork.Endpoint) error {
func (container *Container) updateJoinInfo(n libnetwork.Network, ep libnetwork.Endpoint) error {
if _, err := container.buildPortMapInfo(ep, container.NetworkSettings); err != nil {
return err
}
epInfo := ep.Info()
if epInfo == nil {
// It is not an error to get an empty endpoint info
return nil
}
container.NetworkSettings.Gateway = epInfo.Gateway().String()
if epInfo.Gateway() != nil {
container.NetworkSettings.Networks[n.Name()].Gateway = epInfo.Gateway().String()
}
if epInfo.GatewayIPv6().To16() != nil {
container.NetworkSettings.IPv6Gateway = epInfo.GatewayIPv6().String()
container.NetworkSettings.Networks[n.Name()].IPv6Gateway = epInfo.GatewayIPv6().String()
}
return nil
@@ -684,11 +720,10 @@ func (container *Container) updateJoinInfo(ep libnetwork.Endpoint) error {
func (container *Container) updateNetworkSettings(n libnetwork.Network) error {
if container.NetworkSettings == nil {
container.NetworkSettings = &network.Settings{Networks: []string{}}
container.NetworkSettings = &network.Settings{Networks: make(map[string]*network.EndpointSettings)}
}
settings := container.NetworkSettings
for _, s := range settings.Networks {
for s := range container.NetworkSettings.Networks {
sn, err := container.daemon.FindNetwork(s)
if err != nil {
continue
@@ -707,18 +742,13 @@ func (container *Container) updateNetworkSettings(n libnetwork.Network) error {
return runconfig.ErrConflictNoNetwork
}
}
settings.Networks = append(settings.Networks, n.Name())
container.NetworkSettings.Networks[n.Name()] = new(network.EndpointSettings)
return nil
}
func (container *Container) updateEndpointNetworkSettings(n libnetwork.Network, ep libnetwork.Endpoint) error {
networkSettings, err := container.buildPortMapInfo(ep, container.NetworkSettings)
if err != nil {
return err
}
networkSettings, err = container.buildEndpointInfo(ep, networkSettings)
networkSettings, err := container.buildEndpointInfo(n, ep, container.NetworkSettings)
if err != nil {
return err
}
@@ -749,7 +779,7 @@ func (container *Container) updateNetwork() error {
// Find if container is connected to the default bridge network
var n libnetwork.Network
for _, name := range container.NetworkSettings.Networks {
for name := range container.NetworkSettings.Networks {
sn, err := container.daemon.FindNetwork(name)
if err != nil {
continue
@@ -777,7 +807,7 @@ func (container *Container) updateNetwork() error {
return nil
}
func (container *Container) buildCreateEndpointOptions() ([]libnetwork.EndpointOption, error) {
func (container *Container) buildCreateEndpointOptions(n libnetwork.Network) ([]libnetwork.EndpointOption, error) {
var (
portSpecs = make(nat.PortSet)
bindings = make(nat.PortMap)
@@ -855,6 +885,10 @@ func (container *Container) buildCreateEndpointOptions() ([]libnetwork.EndpointO
createOptions = append(createOptions, libnetwork.EndpointOptionGeneric(genericOption))
}
if n.Name() == "bridge" || container.NetworkSettings.IsAnonymousEndpoint {
createOptions = append(createOptions, libnetwork.CreateOptionAnonymous())
}
return createOptions, nil
}
@@ -875,11 +909,16 @@ func createNetwork(controller libnetwork.NetworkController, dnet string, driver
}
func (container *Container) allocateNetwork() error {
settings := container.NetworkSettings.Networks
controller := container.daemon.netController
// Cleanup any stale sandbox left over due to ungraceful daemon shutdown
if err := controller.SandboxDestroy(container.ID); err != nil {
logrus.Errorf("failed to clean up stale network sandbox for container %s", container.ID)
}
updateSettings := false
if settings == nil {
if len(container.NetworkSettings.Networks) == 0 {
mode := container.hostConfig.NetworkMode
controller := container.daemon.netController
if container.Config.NetworkDisabled || mode.IsContainer() {
return nil
}
@@ -888,32 +927,49 @@ func (container *Container) allocateNetwork() error {
if mode.IsDefault() {
networkName = controller.Config().Daemon.DefaultNetwork
}
settings = []string{networkName}
container.NetworkSettings.Networks = make(map[string]*network.EndpointSettings)
container.NetworkSettings.Networks[networkName] = new(network.EndpointSettings)
updateSettings = true
}
for _, n := range settings {
for n := range container.NetworkSettings.Networks {
if err := container.connectToNetwork(n, updateSettings); err != nil {
if updateSettings {
return err
}
// don't fail a container restart if the user removed the network
logrus.Warnf("Could not connect container %s: %v", container.ID, err)
return err
}
}
return container.writeHostConfig()
}
func (container *Container) getNetworkSandbox() libnetwork.Sandbox {
var sb libnetwork.Sandbox
container.daemon.netController.WalkSandboxes(func(s libnetwork.Sandbox) bool {
if s.ContainerID() == container.ID {
sb = s
return true
}
return false
})
return sb
}
// ConnectToNetwork connects a container to a network
func (container *Container) ConnectToNetwork(idOrName string) error {
if !container.Running {
return derr.ErrorCodeNotRunning.WithArgs(container.ID)
}
return container.connectToNetwork(idOrName, true)
if err := container.connectToNetwork(idOrName, true); err != nil {
return err
}
if err := container.toDiskLocking(); err != nil {
return fmt.Errorf("Error saving container to disk: %v", err)
}
return nil
}
func (container *Container) connectToNetwork(idOrName string, updateSettings bool) error {
var err error
if container.hostConfig.NetworkMode.IsContainer() {
return runconfig.ErrConflictSharedNetwork
}
@@ -938,35 +994,37 @@ func (container *Container) connectToNetwork(idOrName string, updateSettings boo
}
ep, err := container.getEndpointInNetwork(n)
if err != nil {
if _, ok := err.(libnetwork.ErrNoSuchEndpoint); !ok {
return err
}
createOptions, err := container.buildCreateEndpointOptions()
if err != nil {
return err
}
endpointName := strings.TrimPrefix(container.Name, "/")
ep, err = n.CreateEndpoint(endpointName, createOptions...)
if err != nil {
return err
}
if err == nil {
return fmt.Errorf("container already connected to network %s", idOrName)
}
if _, ok := err.(libnetwork.ErrNoSuchEndpoint); !ok {
return err
}
createOptions, err := container.buildCreateEndpointOptions(n)
if err != nil {
return err
}
endpointName := strings.TrimPrefix(container.Name, "/")
ep, err = n.CreateEndpoint(endpointName, createOptions...)
if err != nil {
return err
}
defer func() {
if err != nil {
if e := ep.Delete(); e != nil {
logrus.Warnf("Could not rollback container connection to network %s", idOrName)
}
}
}()
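The deferred rollback above relies on a named return value: the closure inspects `err` after the function body finishes and deletes the just-created endpoint if anything downstream failed. A minimal sketch of the pattern, with a `*bool` standing in for the endpoint's existence (hypothetical; the real code calls `ep.Delete()`):

```go
package main

import (
	"errors"
	"fmt"
)

// connect sketches the rollback pattern: a deferred closure inspects the
// named return value err and undoes the endpoint creation when any later
// step fails. The created flag is a stand-in for the endpoint itself.
func connect(failJoin bool, created *bool) (err error) {
	*created = true // endpoint created
	defer func() {
		if err != nil {
			*created = false // rollback: delete the endpoint
		}
	}()
	if failJoin {
		return errors.New("join failed")
	}
	return nil
}

func main() {
	var created bool
	fmt.Println(connect(true, &created), created)  // join failed false
	fmt.Println(connect(false, &created), created) // <nil> true
}
```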
if err := container.updateEndpointNetworkSettings(n, ep); err != nil {
return err
}
var sb libnetwork.Sandbox
controller.WalkSandboxes(func(s libnetwork.Sandbox) bool {
if s.ContainerID() == container.ID {
sb = s
return true
}
return false
})
sb := container.getNetworkSandbox()
if sb == nil {
options, err := container.buildSandboxOptions(n)
if err != nil {
@@ -976,15 +1034,15 @@ func (container *Container) connectToNetwork(idOrName string, updateSettings boo
if err != nil {
return err
}
}
container.updateSandboxNetworkSettings(sb)
container.updateSandboxNetworkSettings(sb)
}
if err := ep.Join(sb); err != nil {
return err
}
if err := container.updateJoinInfo(ep); err != nil {
if err := container.updateJoinInfo(n, ep); err != nil {
return derr.ErrorCodeJoinInfo.WithArgs(err)
}
@@ -1111,6 +1169,9 @@ func (container *Container) releaseNetwork() {
sid := container.NetworkSettings.SandboxID
networks := container.NetworkSettings.Networks
for n := range networks {
networks[n] = &network.EndpointSettings{}
}
container.NetworkSettings = &network.Settings{Networks: networks}
@@ -1124,14 +1185,6 @@ func (container *Container) releaseNetwork() {
return
}
for _, ns := range networks {
n, err := container.daemon.FindNetwork(ns)
if err != nil {
continue
}
container.disconnectFromNetwork(n, false)
}
if err := sb.Delete(); err != nil {
logrus.Errorf("Error deleting sandbox id %s for container %s: %v", sid, container.ID, err)
}
@@ -1143,10 +1196,17 @@ func (container *Container) DisconnectFromNetwork(n libnetwork.Network) error {
return derr.ErrorCodeNotRunning.WithArgs(container.ID)
}
return container.disconnectFromNetwork(n, true)
if err := container.disconnectFromNetwork(n); err != nil {
return err
}
if err := container.toDiskLocking(); err != nil {
return fmt.Errorf("Error saving container to disk: %v", err)
}
return nil
}
func (container *Container) disconnectFromNetwork(n libnetwork.Network, updateSettings bool) error {
func (container *Container) disconnectFromNetwork(n libnetwork.Network) error {
var (
ep libnetwork.Endpoint
sbox libnetwork.Sandbox
@@ -1176,20 +1236,7 @@ func (container *Container) disconnectFromNetwork(n libnetwork.Network, updateSe
return fmt.Errorf("endpoint delete failed for container %s on network %s: %v", container.ID, n.Name(), err)
}
if updateSettings {
networks := container.NetworkSettings.Networks
for i, s := range networks {
sn, err := container.daemon.FindNetwork(s)
if err != nil {
continue
}
if sn.Name() == n.Name() {
networks = append(networks[:i], networks[i+1:]...)
container.NetworkSettings.Networks = networks
break
}
}
}
delete(container.NetworkSettings.Networks, n.Name())
return nil
}
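With `Networks` now a map keyed by network name, the old slice-splicing removal loop collapses to a single built-in `delete` call, as the rewritten disconnect path shows. A side-by-side sketch of both representations (using `struct{}` values instead of the real `*network.EndpointSettings` to keep it minimal):

```go
package main

import "fmt"

// removeFromSlice reproduces the old splice-based removal that the diff
// deletes: find the matching element, then append around it.
func removeFromSlice(networks []string, name string) []string {
	for i, s := range networks {
		if s == name {
			return append(networks[:i], networks[i+1:]...)
		}
	}
	return networks
}

func main() {
	// Old representation: a slice of network names.
	names := []string{"bridge", "mynet"}
	names = removeFromSlice(names, "mynet")
	fmt.Println(names) // [bridge]

	// New representation: a map keyed by network name, where removal
	// is a single built-in call and missing keys are a no-op.
	networks := map[string]struct{}{"bridge": {}, "mynet": {}}
	delete(networks, "mynet")
	fmt.Println(len(networks)) // 1
}
```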


@@ -12,7 +12,6 @@ import (
"io/ioutil"
"os"
"path/filepath"
"regexp"
"runtime"
"strings"
"sync"
@@ -50,6 +49,7 @@ import (
"github.com/docker/docker/pkg/truncindex"
"github.com/docker/docker/registry"
"github.com/docker/docker/runconfig"
"github.com/docker/docker/utils"
volumedrivers "github.com/docker/docker/volume/drivers"
"github.com/docker/docker/volume/local"
"github.com/docker/docker/volume/store"
@@ -57,8 +57,8 @@ import (
)
var (
validContainerNameChars = `[a-zA-Z0-9][a-zA-Z0-9_.-]`
validContainerNamePattern = regexp.MustCompile(`^/?` + validContainerNameChars + `+$`)
validContainerNameChars = utils.RestrictedNameChars
validContainerNamePattern = utils.RestrictedNamePattern
errSystemNotSupported = errors.New("The Docker daemon is not supported on this platform.")
)
@@ -406,20 +406,12 @@ func (daemon *Daemon) reserveName(id, name string) (string, error) {
conflictingContainer, err := daemon.GetByName(name)
if err != nil {
if strings.Contains(err.Error(), "Could not find entity") {
return "", err
}
// Remove name and continue starting the container
if err := daemon.containerGraphDB.Delete(name); err != nil {
return "", err
}
} else {
nameAsKnownByUser := strings.TrimPrefix(name, "/")
return "", fmt.Errorf(
"Conflict. The name %q is already in use by container %s. You have to remove (or rename) that container to be able to reuse that name.", nameAsKnownByUser,
stringid.TruncateID(conflictingContainer.ID))
return "", err
}
return "", fmt.Errorf(
"Conflict. The name %q is already in use by container %s. You have to remove (or rename) that container to be able to reuse that name.", strings.TrimPrefix(name, "/"),
stringid.TruncateID(conflictingContainer.ID))
}
return name, nil
}
@@ -476,8 +468,9 @@ func (daemon *Daemon) getEntrypointAndArgs(configEntrypoint *stringutils.StrSlic
func (daemon *Daemon) newContainer(name string, config *runconfig.Config, imgID string) (*Container, error) {
var (
id string
err error
id string
err error
noExplicitName = name == ""
)
id, name, err = daemon.generateIDAndName(name)
if err != nil {
@@ -494,7 +487,7 @@ func (daemon *Daemon) newContainer(name string, config *runconfig.Config, imgID
base.Config = config
base.hostConfig = &runconfig.HostConfig{}
base.ImageID = imgID
base.NetworkSettings = &network.Settings{}
base.NetworkSettings = &network.Settings{IsAnonymousEndpoint: noExplicitName}
base.Name = name
base.Driver = daemon.driver.String()
base.ExecDriver = daemon.execDriver.Name()
@@ -761,10 +754,17 @@ func NewDaemon(config *Config, registryService *registry.Service) (daemon *Daemo
// initialized, the daemon is registered and we can store the discovery backend as its read-only
// DiscoveryWatcher version.
if config.ClusterStore != "" && config.ClusterAdvertise != "" {
var err error
if d.discoveryWatcher, err = initDiscovery(config.ClusterStore, config.ClusterAdvertise, config.ClusterOpts); err != nil {
advertise, err := discovery.ParseAdvertise(config.ClusterStore, config.ClusterAdvertise)
if err != nil {
return nil, fmt.Errorf("discovery advertise parsing failed (%v)", err)
}
config.ClusterAdvertise = advertise
d.discoveryWatcher, err = initDiscovery(config.ClusterStore, config.ClusterAdvertise, config.ClusterOpts)
if err != nil {
return nil, fmt.Errorf("discovery initialization failed (%v)", err)
}
} else if config.ClusterAdvertise != "" {
return nil, fmt.Errorf("invalid cluster configuration. --cluster-advertise must be accompanied by --cluster-store configuration")
}
d.netController, err = d.initNetworkController(config)
@@ -1136,6 +1136,14 @@ func (daemon *Daemon) GetRemappedUIDGID() (int, int) {
// created. nil is returned if a child cannot be found. An error is
// returned if the parent image cannot be found.
func (daemon *Daemon) ImageGetCached(imgID string, config *runconfig.Config) (*image.Image, error) {
// for now just exit if imgID has no children.
// maybe parentRefs in graph could be used to store
// the Image obj children for faster lookup below but this can
// be quite memory hungry.
if !daemon.Graph().HasChildren(imgID) {
return nil, nil
}
// Retrieve all images
images := daemon.Graph().Map()


@@ -114,9 +114,6 @@ func (daemon *Daemon) adaptContainerSettings(hostConfig *runconfig.HostConfig, a
// By default, MemorySwap is set to twice the size of Memory.
hostConfig.MemorySwap = hostConfig.Memory * 2
}
if hostConfig.MemoryReservation == 0 && hostConfig.Memory > 0 {
hostConfig.MemoryReservation = hostConfig.Memory
}
}
// verifyPlatformContainerSettings performs platform-specific validation of the
@@ -341,6 +338,9 @@ func (daemon *Daemon) networkOptions(dconfig *Config) ([]nwconfig.Option, error)
options = append(options, nwconfig.OptionKVProvider(kv[0]))
options = append(options, nwconfig.OptionKVProviderURL(strings.Join(kv[1:], "://")))
}
if len(dconfig.ClusterOpts) > 0 {
options = append(options, nwconfig.OptionKVOpts(dconfig.ClusterOpts))
}
if daemon.discoveryWatcher != nil {
options = append(options, nwconfig.OptionDiscoveryWatcher(daemon.discoveryWatcher))
@@ -441,6 +441,8 @@ func initBridgeDriver(controller libnetwork.NetworkController, config *Config) e
return err
}
ipamV4Conf.Gateway = ip.String()
} else if bridgeName == bridge.DefaultBridgeName && ipamV4Conf.PreferredPool != "" {
logrus.Infof("Default bridge (%s) is assigned an IP address %s. Daemon option --bip can be used to set a preferred IP address", bridgeName, ipamV4Conf.PreferredPool)
}
if config.Bridge.FixedCIDR != "" {


@@ -193,7 +193,7 @@ func (d Docker) Copy(c *daemon.Container, destPath string, src builder.FileInfo,
// GetCachedImage returns a reference to a cached image whose parent equals `parent`
// and runconfig equals `cfg`. A cache miss is expected to return an empty ID and a nil error.
func (d Docker) GetCachedImage(imgID string, cfg *runconfig.Config) (string, error) {
cache, err := d.Daemon.ImageGetCached(string(imgID), cfg)
cache, err := d.Daemon.ImageGetCached(imgID, cfg)
if cache == nil || err != nil {
return "", err
}


@@ -76,22 +76,20 @@ func (daemon *Daemon) rm(container *Container, forceRemove bool) (err error) {
}
}
// Container state RemovalInProgress should be used to avoid races.
if err = container.setRemovalInProgress(); err != nil {
if err == derr.ErrorCodeAlreadyRemoving {
// do not fail when the removal is in progress started by other request.
return nil
}
return derr.ErrorCodeRmState.WithArgs(err)
}
defer container.resetRemovalInProgress()
// stop collection of stats for the container regardless
// if stats are currently getting collected.
daemon.statsCollector.stopCollection(container)
element := daemon.containers.Get(container.ID)
if element == nil {
return derr.ErrorCodeRmNotFound.WithArgs(container.ID)
}
// Container state RemovalInProgress should be used to avoid races.
if err = container.setRemovalInProgress(); err != nil {
return derr.ErrorCodeRmState.WithArgs(err)
}
defer container.resetRemovalInProgress()
if err = container.Stop(3); err != nil {
return err
}

daemon/delete_test.go Normal file

@@ -0,0 +1,39 @@
package daemon
import (
"io/ioutil"
"os"
"testing"
"github.com/docker/docker/runconfig"
)
func TestContainerDoubleDelete(t *testing.T) {
tmp, err := ioutil.TempDir("", "docker-daemon-unix-test-")
if err != nil {
t.Fatal(err)
}
defer os.RemoveAll(tmp)
daemon := &Daemon{
repository: tmp,
root: tmp,
}
container := &Container{
CommonContainer: CommonContainer{
State: NewState(),
Config: &runconfig.Config{},
},
}
// Mark the container as having a delete in progress
if err := container.setRemovalInProgress(); err != nil {
t.Fatal(err)
}
// Try to remove the container when its state is removalInProgress.
// It should ignore the container and not return an error.
if err := daemon.rm(container, true); err != nil {
t.Fatal(err)
}
}


@@ -324,24 +324,20 @@ func (d *Driver) Run(c *execdriver.Command, pipes *execdriver.Pipes, hooks execd
c.ContainerPid = pid
oomKill := false
oomKillNotification, err := notifyOnOOM(cgroupPaths)
if hooks.Start != nil {
logrus.Debugf("Invoking startCallback")
hooks.Start(&c.ProcessConfig, pid, oomKillNotification)
chOOM := make(chan struct{})
close(chOOM)
hooks.Start(&c.ProcessConfig, pid, chOOM)
}
oomKillNotification := notifyChannelOOM(cgroupPaths)
<-waitLock
exitCode := getExitCode(c)
if err == nil {
_, oomKill = <-oomKillNotification
logrus.Debugf("oomKill error: %v, waitErr: %v", oomKill, waitErr)
} else {
logrus.Warnf("Your kernel does not support OOM notifications: %s", err)
}
_, oomKill := <-oomKillNotification
logrus.Debugf("oomKill error: %v, waitErr: %v", oomKill, waitErr)
// check oom error
if oomKill {
@@ -351,6 +347,17 @@ func (d *Driver) Run(c *execdriver.Command, pipes *execdriver.Pipes, hooks execd
return execdriver.ExitStatus{ExitCode: exitCode, OOMKilled: oomKill}, waitErr
}
func notifyChannelOOM(paths map[string]string) <-chan struct{} {
oom, err := notifyOnOOM(paths)
if err != nil {
logrus.Warnf("Your kernel does not support OOM notifications: %s", err)
c := make(chan struct{})
close(c)
return c
}
return oom
}
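The new `notifyChannelOOM` helper above uses a standard Go trick: when OOM notification is unsupported, it returns an already-closed channel, so a receive on it completes immediately with the zero value instead of blocking forever. A self-contained sketch (here `notifyOnOOM` is stubbed to always fail, to exercise the fallback path):

```go
package main

import (
	"errors"
	"fmt"
)

// notifyOnOOM stands in for the real eventfd-based implementation; it is
// stubbed to fail so the fallback path below is exercised.
func notifyOnOOM() (<-chan struct{}, error) {
	return nil, errors.New("kernel does not support OOM notifications")
}

// notifyChannelOOM mirrors the helper above: on error it returns an
// already-closed channel, so callers receiving from it never block.
func notifyChannelOOM() <-chan struct{} {
	oom, err := notifyOnOOM()
	if err != nil {
		c := make(chan struct{})
		close(c)
		return c
	}
	return oom
}

func main() {
	_, ok := <-notifyChannelOOM()
	fmt.Println(ok) // false: the channel is closed, so the receive returns at once
}
```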
// copy from libcontainer
func notifyOnOOM(paths map[string]string) (<-chan struct{}, error) {
dir := paths["memory"]
@@ -386,11 +393,13 @@ func notifyOnOOM(paths map[string]string) (<-chan struct{}, error) {
buf := make([]byte, 8)
for {
if _, err := eventfd.Read(buf); err != nil {
logrus.Warn(err)
return
}
// When a cgroup is destroyed, an event is sent to eventfd.
// So if the control path is gone, return instead of notifying.
if _, err := os.Lstat(eventControlPath); os.IsNotExist(err) {
logrus.Warn(err)
return
}
ch <- struct{}{}
@@ -424,6 +433,11 @@ func cgroupPaths(containerID string) (map[string]string, error) {
// unsupported subsystem
continue
}
// if we are running docker-in-docker (dind), trim the trailing docker segment from the cgroup path
dockerPathIdx := strings.LastIndex(cgroupDir, "docker")
if dockerPathIdx != -1 {
cgroupDir = cgroupDir[:dockerPathIdx-1]
}
path := filepath.Join(cgroupRoot, cgroupDir, "lxc", containerID)
paths[subsystem] = path
}


@@ -91,9 +91,9 @@ lxc.mount.entry = {{$value.Source}} {{escapeFstabSpaces $ROOTFS}}/{{escapeFstabS
{{if .Resources}}
{{if .Resources.Memory}}
lxc.cgroup.memory.limit_in_bytes = {{.Resources.Memory}}
{{with $memSwap := getMemorySwap .Resources}}
lxc.cgroup.memory.memsw.limit_in_bytes = {{$memSwap}}
{{end}}
{{if gt .Resources.MemorySwap 0}}
lxc.cgroup.memory.memsw.limit_in_bytes = {{.Resources.MemorySwap}}
{{end}}
{{if gt .Resources.MemoryReservation 0}}
lxc.cgroup.memory.soft_limit_in_bytes = {{.Resources.MemoryReservation}}
@@ -209,15 +209,6 @@ func isDirectory(source string) string {
return "file"
}
func getMemorySwap(v *execdriver.Resources) int64 {
// By default, MemorySwap is set to twice the size of RAM.
// If you want to omit MemorySwap, set it to `-1'.
if v.MemorySwap < 0 {
return 0
}
return v.Memory * 2
}
func getLabel(c map[string][]string, name string) string {
label := c["label"]
for _, l := range label {
@@ -242,7 +233,6 @@ func getHostname(env []string) string {
func init() {
var err error
funcMap := template.FuncMap{
"getMemorySwap": getMemorySwap,
"escapeFstabSpaces": escapeFstabSpaces,
"formatMountLabel": label.FormatMountLabel,
"isDirectory": isDirectory,


@@ -34,6 +34,7 @@ func TestLXCConfig(t *testing.T) {
memMin = 33554432
memMax = 536870912
mem = memMin + r.Intn(memMax-memMin)
swap = memMax
cpuMin = 100
cpuMax = 10000
cpu = cpuMin + r.Intn(cpuMax-cpuMin)
@@ -46,8 +47,9 @@ func TestLXCConfig(t *testing.T) {
command := &execdriver.Command{
ID: "1",
Resources: &execdriver.Resources{
Memory: int64(mem),
CPUShares: int64(cpu),
Memory: int64(mem),
MemorySwap: int64(swap),
CPUShares: int64(cpu),
},
Network: &execdriver.Network{
Mtu: 1500,
@@ -63,7 +65,7 @@ func TestLXCConfig(t *testing.T) {
fmt.Sprintf("lxc.cgroup.memory.limit_in_bytes = %d", mem))
grepFile(t, p,
fmt.Sprintf("lxc.cgroup.memory.memsw.limit_in_bytes = %d", mem*2))
fmt.Sprintf("lxc.cgroup.memory.memsw.limit_in_bytes = %d", swap))
}
func TestCustomLxcConfig(t *testing.T) {


@@ -167,7 +167,6 @@ func (d *Driver) Run(c *execdriver.Command, pipes *execdriver.Pipes, hooks execd
oom := notifyOnOOM(cont)
if hooks.Start != nil {
pid, err := p.Pid()
if err != nil {
p.Signal(os.Kill)


@@ -599,6 +599,7 @@ func (devices *DeviceSet) cleanupDeletedDevices() error {
// If there are no deleted devices, there is nothing to do.
if devices.nrDeletedDevices == 0 {
devices.Unlock()
return nil
}


@@ -5,6 +5,7 @@ package devmapper
import (
"fmt"
"testing"
"time"
"github.com/docker/docker/daemon/graphdriver"
"github.com/docker/docker/daemon/graphdriver/graphtest"
@@ -79,3 +80,31 @@ func testChangeLoopBackSize(t *testing.T, delta, expectDataSize, expectMetaDataS
t.Fatal(err)
}
}
// Make sure devices.Lock() has been released upon return from the cleanupDeletedDevices() function
func TestDevmapperLockReleasedDeviceDeletion(t *testing.T) {
driver := graphtest.GetDriver(t, "devicemapper").(*graphtest.Driver).Driver.(*graphdriver.NaiveDiffDriver).ProtoDriver.(*Driver)
defer graphtest.PutDriver(t)
// Call cleanupDeletedDevices() and after the call take and release
// DeviceSet Lock. If lock has not been released, this will hang.
driver.DeviceSet.cleanupDeletedDevices()
doneChan := make(chan bool)
go func() {
driver.DeviceSet.Lock()
defer driver.DeviceSet.Unlock()
doneChan <- true
}()
select {
case <-time.After(time.Second * 5):
// Timer expired. That means lock was not released upon
// function return and we are deadlocked. Release lock
// here so that cleanup could succeed and fail the test.
driver.DeviceSet.Unlock()
t.Fatalf("Could not acquire devices lock after call to cleanupDeletedDevices()")
case <-doneChan:
}
}


@@ -84,6 +84,11 @@ func (daemon *Daemon) ImageDelete(imageRef string, force, prune bool) ([]types.I
daemon.EventsService.Log("untag", img.ID, "")
records = append(records, untaggedRecord)
// If has remaining references then untag finishes the remove
if daemon.repositories.HasReferences(img) {
return records, nil
}
removedRepositoryRef = true
} else {
// If an ID reference was given AND there is exactly one
@@ -279,7 +284,7 @@ func (daemon *Daemon) checkImageDeleteHardConflict(img *image.Image) *imageDelet
}
// Check if the image has any descendent images.
if daemon.Graph().HasChildren(img) {
if daemon.Graph().HasChildren(img.ID) {
return &imageDeleteConflict{
hard: true,
imgID: img.ID,
@@ -337,5 +342,5 @@ func (daemon *Daemon) checkImageDeleteSoftConflict(img *image.Image) *imageDelet
// that there are no repository references to the given image and it has no
// child images.
func (daemon *Daemon) imageIsDangling(img *image.Image) bool {
return !(daemon.repositories.HasReferences(img) || daemon.Graph().HasChildren(img))
return !(daemon.repositories.HasReferences(img) || daemon.Graph().HasChildren(img.ID))
}


@@ -92,6 +92,7 @@ func (daemon *Daemon) SystemInfo() (*types.Info, error) {
ExperimentalBuild: utils.ExperimentalBuild(),
ServerVersion: dockerversion.VERSION,
ClusterStore: daemon.config().ClusterStore,
ClusterAdvertise: daemon.config().ClusterAdvertise,
}
// TODO Windows. Refactor this more once sysinfo is refactored into


@@ -6,6 +6,7 @@ import (
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/versions/v1p20"
"github.com/docker/docker/daemon/network"
)
// ContainerInspect returns low-level information about a
@@ -26,8 +27,23 @@ func (daemon *Daemon) ContainerInspect(name string, size bool) (*types.Container
}
mountPoints := addMountPoints(container)
networkSettings := &types.NetworkSettings{
NetworkSettingsBase: types.NetworkSettingsBase{
Bridge: container.NetworkSettings.Bridge,
SandboxID: container.NetworkSettings.SandboxID,
HairpinMode: container.NetworkSettings.HairpinMode,
LinkLocalIPv6Address: container.NetworkSettings.LinkLocalIPv6Address,
LinkLocalIPv6PrefixLen: container.NetworkSettings.LinkLocalIPv6PrefixLen,
Ports: container.NetworkSettings.Ports,
SandboxKey: container.NetworkSettings.SandboxKey,
SecondaryIPAddresses: container.NetworkSettings.SecondaryIPAddresses,
SecondaryIPv6Addresses: container.NetworkSettings.SecondaryIPv6Addresses,
},
DefaultNetworkSettings: daemon.getDefaultNetworkSettings(container.NetworkSettings.Networks),
Networks: container.NetworkSettings.Networks,
}
return &types.ContainerJSON{base, mountPoints, container.Config}, nil
return &types.ContainerJSON{base, mountPoints, container.Config, networkSettings}, nil
}
// ContainerInspect120 serializes the master version of a container into a json type.
@@ -48,10 +64,14 @@ func (daemon *Daemon) ContainerInspect120(name string) (*v1p20.ContainerJSON, er
mountPoints := addMountPoints(container)
config := &v1p20.ContainerConfig{
container.Config,
container.Config.MacAddress,
container.Config.NetworkDisabled,
container.Config.ExposedPorts,
container.hostConfig.VolumeDriver,
}
networkSettings := daemon.getBackwardsCompatibleNetworkSettings(container.NetworkSettings)
return &v1p20.ContainerJSON{base, mountPoints, config}, nil
return &v1p20.ContainerJSON{base, mountPoints, config, networkSettings}, nil
}
func (daemon *Daemon) getInspectData(container *Container, size bool) (*types.ContainerJSONBase, error) {
@@ -88,22 +108,21 @@ func (daemon *Daemon) getInspectData(container *Container, size bool) (*types.Co
}
contJSONBase := &types.ContainerJSONBase{
ID: container.ID,
Created: container.Created.Format(time.RFC3339Nano),
Path: container.Path,
Args: container.Args,
State: containerState,
Image: container.ImageID,
NetworkSettings: container.NetworkSettings,
LogPath: container.LogPath,
Name: container.Name,
RestartCount: container.RestartCount,
Driver: container.Driver,
ExecDriver: container.ExecDriver,
MountLabel: container.MountLabel,
ProcessLabel: container.ProcessLabel,
ExecIDs: container.getExecIDs(),
HostConfig: &hostConfig,
ID: container.ID,
Created: container.Created.Format(time.RFC3339Nano),
Path: container.Path,
Args: container.Args,
State: containerState,
Image: container.ImageID,
LogPath: container.LogPath,
Name: container.Name,
RestartCount: container.RestartCount,
Driver: container.Driver,
ExecDriver: container.ExecDriver,
MountLabel: container.MountLabel,
ProcessLabel: container.ProcessLabel,
ExecIDs: container.getExecIDs(),
HostConfig: &hostConfig,
}
var (
@@ -148,3 +167,40 @@ func (daemon *Daemon) VolumeInspect(name string) (*types.Volume, error) {
}
return volumeToAPIType(v), nil
}
func (daemon *Daemon) getBackwardsCompatibleNetworkSettings(settings *network.Settings) *v1p20.NetworkSettings {
result := &v1p20.NetworkSettings{
NetworkSettingsBase: types.NetworkSettingsBase{
Bridge: settings.Bridge,
SandboxID: settings.SandboxID,
HairpinMode: settings.HairpinMode,
LinkLocalIPv6Address: settings.LinkLocalIPv6Address,
LinkLocalIPv6PrefixLen: settings.LinkLocalIPv6PrefixLen,
Ports: settings.Ports,
SandboxKey: settings.SandboxKey,
SecondaryIPAddresses: settings.SecondaryIPAddresses,
SecondaryIPv6Addresses: settings.SecondaryIPv6Addresses,
},
DefaultNetworkSettings: daemon.getDefaultNetworkSettings(settings.Networks),
}
return result
}
// getDefaultNetworkSettings creates the deprecated structure that holds the information
// about the bridge network for a container.
func (daemon *Daemon) getDefaultNetworkSettings(networks map[string]*network.EndpointSettings) types.DefaultNetworkSettings {
var settings types.DefaultNetworkSettings
if defaultNetwork, ok := networks["bridge"]; ok {
settings.EndpointID = defaultNetwork.EndpointID
settings.Gateway = defaultNetwork.Gateway
settings.GlobalIPv6Address = defaultNetwork.GlobalIPv6Address
settings.GlobalIPv6PrefixLen = defaultNetwork.GlobalIPv6PrefixLen
settings.IPAddress = defaultNetwork.IPAddress
settings.IPPrefixLen = defaultNetwork.IPPrefixLen
settings.IPv6Gateway = defaultNetwork.IPv6Gateway
settings.MacAddress = defaultNetwork.MacAddress
}
return settings
}
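The fallback logic above can be illustrated with a self-contained sketch (trimmed-down stand-in types, not the daemon's actual definitions): when the `Networks` map has no `bridge` entry, the deprecated flat structure is simply returned with zero values.

```go
package main

import "fmt"

// EndpointSettings is a trimmed-down stand-in for the daemon's
// network.EndpointSettings type (illustration only).
type EndpointSettings struct {
	EndpointID string
	IPAddress  string
}

// DefaultNetworkSettings mirrors the deprecated flat bridge fields.
type DefaultNetworkSettings struct {
	EndpointID string
	IPAddress  string
}

// getDefaultNetworkSettings copies the "bridge" endpoint, if present,
// into the legacy flat structure; otherwise the zero value is returned.
func getDefaultNetworkSettings(networks map[string]*EndpointSettings) DefaultNetworkSettings {
	var settings DefaultNetworkSettings
	if defaultNetwork, ok := networks["bridge"]; ok {
		settings.EndpointID = defaultNetwork.EndpointID
		settings.IPAddress = defaultNetwork.IPAddress
	}
	return settings
}

func main() {
	nets := map[string]*EndpointSettings{
		"bridge": {EndpointID: "ep-1", IPAddress: "172.17.0.2"},
	}
	fmt.Println(getDefaultNetworkSettings(nets).IPAddress) // 172.17.0.2
	fmt.Println(getDefaultNetworkSettings(nil).IPAddress == "") // true
}
```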


@@ -41,14 +41,18 @@ func (daemon *Daemon) ContainerInspectPre120(name string) (*v1p19.ContainerJSON,
config := &v1p19.ContainerConfig{
container.Config,
container.Config.MacAddress,
container.Config.NetworkDisabled,
container.Config.ExposedPorts,
container.hostConfig.VolumeDriver,
container.hostConfig.Memory,
container.hostConfig.MemorySwap,
container.hostConfig.CPUShares,
container.hostConfig.CpusetCpus,
}
networkSettings := daemon.getBackwardsCompatibleNetworkSettings(container.NetworkSettings)
return &v1p19.ContainerJSON{base, volumes, volumesRW, config}, nil
return &v1p19.ContainerJSON{base, volumes, volumesRW, config, networkSettings}, nil
}
func addMountPoints(container *Container) []types.MountPoint {


@@ -17,6 +17,12 @@ const (
NetworkByName
)
// NetworkControllerEnabled checks if the networking stack is enabled.
// This feature depends on OS primitives and is disabled on systems like Windows.
func (daemon *Daemon) NetworkControllerEnabled() bool {
return daemon.netController != nil
}
// FindNetwork function finds a network for a given string that can represent network name or id
func (daemon *Daemon) FindNetwork(idName string) (libnetwork.Network, error) {
// Find by Name
@@ -80,7 +86,7 @@ func (daemon *Daemon) GetNetworksByID(partialID string) []libnetwork.Network {
}
// CreateNetwork creates a network with the given name, driver and other optional parameters
func (daemon *Daemon) CreateNetwork(name, driver string, ipam network.IPAM) (libnetwork.Network, error) {
func (daemon *Daemon) CreateNetwork(name, driver string, ipam network.IPAM, options map[string]string) (libnetwork.Network, error) {
c := daemon.netController
if driver == "" {
driver = c.Config().Daemon.DefaultDriver
@@ -93,9 +99,8 @@ func (daemon *Daemon) CreateNetwork(name, driver string, ipam network.IPAM) (lib
return nil, err
}
if len(ipam.Config) > 0 {
nwOptions = append(nwOptions, libnetwork.NetworkOptionIpam(ipam.Driver, "", v4Conf, v6Conf))
}
nwOptions = append(nwOptions, libnetwork.NetworkOptionIpam(ipam.Driver, "", v4Conf, v6Conf))
nwOptions = append(nwOptions, libnetwork.NetworkOptionDriverOpts(options))
return c.NewNetwork(driver, name, nwOptions...)
}


@@ -10,37 +10,42 @@ type Address struct {
// IPAM represents IP Address Management
type IPAM struct {
Driver string `json:"driver"`
Config []IPAMConfig `json:"config"`
Driver string
Config []IPAMConfig
}
// IPAMConfig represents IPAM configurations
type IPAMConfig struct {
Subnet string `json:"subnet,omitempty"`
IPRange string `json:"ip_range,omitempty"`
Gateway string `json:"gateway,omitempty"`
AuxAddress map[string]string `json:"auxiliary_address,omitempty"`
Subnet string `json:",omitempty"`
IPRange string `json:",omitempty"`
Gateway string `json:",omitempty"`
AuxAddress map[string]string `json:"AuxiliaryAddresses,omitempty"`
}
// Settings stores configuration details about the daemon network config
// TODO Windows. Many of these fields can be factored out.
type Settings struct {
Bridge string
EndpointID string
SandboxID string
Gateway string
GlobalIPv6Address string
GlobalIPv6PrefixLen int
HairpinMode bool
IPAddress string
IPPrefixLen int
IPv6Gateway string
LinkLocalIPv6Address string
LinkLocalIPv6PrefixLen int
MacAddress string
Networks []string
Networks map[string]*EndpointSettings
Ports nat.PortMap
SandboxKey string
SecondaryIPAddresses []Address
SecondaryIPv6Addresses []Address
IsAnonymousEndpoint bool
}
// EndpointSettings stores the network endpoint details
type EndpointSettings struct {
EndpointID string
Gateway string
IPAddress string
IPPrefixLen int
IPv6Gateway string
GlobalIPv6Address string
GlobalIPv6PrefixLen int
MacAddress string
}


@@ -1,18 +1,28 @@
package daemon
import (
"github.com/Sirupsen/logrus"
derr "github.com/docker/docker/errors"
"github.com/docker/libnetwork"
"strings"
)
// ContainerRename changes the name of a container, using the oldName
// to find the container. An error is returned if newName is already
// reserved.
func (daemon *Daemon) ContainerRename(oldName, newName string) error {
var (
err error
sid string
sb libnetwork.Sandbox
container *Container
)
if oldName == "" || newName == "" {
return derr.ErrorCodeEmptyRename
}
container, err := daemon.Get(oldName)
container, err = daemon.Get(oldName)
if err != nil {
return err
}
@@ -27,19 +37,44 @@ func (daemon *Daemon) ContainerRename(oldName, newName string) error {
container.Name = newName
undo := func() {
container.Name = oldName
daemon.reserveName(container.ID, oldName)
daemon.containerGraphDB.Delete(newName)
}
defer func() {
if err != nil {
container.Name = oldName
daemon.reserveName(container.ID, oldName)
daemon.containerGraphDB.Delete(newName)
}
}()
if err := daemon.containerGraphDB.Delete(oldName); err != nil {
undo()
if err = daemon.containerGraphDB.Delete(oldName); err != nil {
return derr.ErrorCodeRenameDelete.WithArgs(oldName, err)
}
if err := container.toDisk(); err != nil {
undo()
if err = container.toDisk(); err != nil {
return err
}
if !container.Running {
container.logEvent("rename")
return nil
}
defer func() {
if err != nil {
container.Name = oldName
if e := container.toDisk(); e != nil {
logrus.Errorf("%s: Failed in writing to Disk on rename failure: %v", container.ID, e)
}
}
}()
sid = container.NetworkSettings.SandboxID
sb, err = daemon.netController.SandboxByID(sid)
if err != nil {
return err
}
err = sb.Rename(strings.TrimPrefix(container.Name, "/"))
if err != nil {
return err
}


@@ -75,7 +75,8 @@ func (daemon *Daemon) ContainerStats(prefixOrName string, config *ContainerStats
return nil
}
statsJSON := getStatJSON(v)
var statsJSON interface{}
statsJSONPost120 := getStatJSON(v)
if config.Version.LessThan("1.21") {
var (
rxBytes uint64
@@ -87,7 +88,7 @@ func (daemon *Daemon) ContainerStats(prefixOrName string, config *ContainerStats
txErrors uint64
txDropped uint64
)
for _, v := range statsJSON.Networks {
for _, v := range statsJSONPost120.Networks {
rxBytes += v.RxBytes
rxPackets += v.RxPackets
rxErrors += v.RxErrors
@@ -97,8 +98,8 @@ func (daemon *Daemon) ContainerStats(prefixOrName string, config *ContainerStats
txErrors += v.TxErrors
txDropped += v.TxDropped
}
statsJSONPre121 := &v1p20.StatsJSON{
Stats: statsJSON.Stats,
statsJSON = &v1p20.StatsJSON{
Stats: statsJSONPost120.Stats,
Network: types.NetworkStats{
RxBytes: rxBytes,
RxPackets: rxPackets,
@@ -110,20 +111,8 @@ func (daemon *Daemon) ContainerStats(prefixOrName string, config *ContainerStats
TxDropped: txDropped,
},
}
if !config.Stream && noStreamFirstFrame {
// prime the cpu stats so they aren't 0 in the final output
noStreamFirstFrame = false
continue
}
if err := enc.Encode(statsJSONPre121); err != nil {
return err
}
if !config.Stream {
return nil
}
} else {
statsJSON = statsJSONPost120
}
if !config.Stream && noStreamFirstFrame {


@@ -210,6 +210,7 @@ func (cli *DaemonCli) CmdDaemon(args ...string) error {
}
serverConfig = setPlatformServerConfig(serverConfig, cli.Config)
defaultHost := opts.DefaultHost
if commonFlags.TLSOptions != nil {
if !commonFlags.TLSOptions.InsecureSkipVerify {
// server requires and verifies client's certificate
@@ -220,6 +221,7 @@ func (cli *DaemonCli) CmdDaemon(args ...string) error {
logrus.Fatal(err)
}
serverConfig.TLSConfig = tlsConfig
defaultHost = opts.DefaultTLSHost
}
if len(commonFlags.Hosts) == 0 {
@@ -227,7 +229,7 @@ func (cli *DaemonCli) CmdDaemon(args ...string) error {
}
for i := 0; i < len(commonFlags.Hosts); i++ {
var err error
if commonFlags.Hosts[i], err = opts.ParseHost(commonFlags.Hosts[i]); err != nil {
if commonFlags.Hosts[i], err = opts.ParseHost(defaultHost, commonFlags.Hosts[i]); err != nil {
logrus.Fatalf("error parsing -H %s : %v", commonFlags.Hosts[i], err)
}
}
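The effect of threading `defaultHost` through `opts.ParseHost` can be sketched as follows. This is a simplified stand-in, not the real parser (which also normalizes schemes); the default addresses are shown for illustration:

```go
package main

import "fmt"

// parseHost is a simplified stand-in for opts.ParseHost: an empty
// -H value falls back to the default selected from the TLS config.
func parseHost(defaultHost, host string) string {
	if host == "" {
		return defaultHost
	}
	return host
}

func main() {
	const (
		defaultHTTPHost = "tcp://127.0.0.1:2375" // illustrative defaults
		defaultTLSHost  = "tcp://127.0.0.1:2376"
	)
	tlsEnabled := true
	def := defaultHTTPHost
	if tlsEnabled {
		def = defaultTLSHost // TLS configured: prefer the TLS port
	}
	fmt.Println(parseHost(def, ""))
	fmt.Println(parseHost(def, "unix:///var/run/docker.sock"))
}
```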


@@ -81,42 +81,43 @@ On the Docker host (192.168.1.52) that Redis will run on:
^D
# add redis ambassador
$ docker run -t -i --link redis:redis --name redis_ambassador -p 6379:6379 busybox sh
$ docker run -t -i --link redis:redis --name redis_ambassador -p 6379:6379 alpine:3.2 sh
In the `redis_ambassador` container, you can see the linked Redis
containers `env`:
$ env
/ # env
REDIS_PORT=tcp://172.17.0.136:6379
REDIS_PORT_6379_TCP_ADDR=172.17.0.136
REDIS_NAME=/redis_ambassador/redis
HOSTNAME=19d7adf4705e
SHLVL=1
HOME=/root
REDIS_PORT_6379_TCP_PORT=6379
HOME=/
REDIS_PORT_6379_TCP_PROTO=tcp
container=lxc
REDIS_PORT_6379_TCP=tcp://172.17.0.136:6379
TERM=xterm
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/
/ # exit
This environment is used by the ambassador `socat` script to expose Redis
to the world (via the `-p 6379:6379` port mapping):
$ docker rm redis_ambassador
$ sudo ./contrib/mkimage-unittest.sh
$ docker run -t -i --link redis:redis --name redis_ambassador -p 6379:6379 docker-ut sh
$ socat TCP4-LISTEN:6379,fork,reuseaddr TCP4:172.17.0.136:6379
$ CMD="apk update && apk add socat && sh"
$ docker run -t -i --link redis:redis --name redis_ambassador -p 6379:6379 alpine:3.2 sh -c "$CMD"
[...]
/ # socat -t 100000000 TCP4-LISTEN:6379,fork,reuseaddr TCP4:172.17.0.136:6379
Now ping the Redis server via the ambassador:
Now go to a different server:
$ sudo ./contrib/mkimage-unittest.sh
$ docker run -t -i --expose 6379 --name redis_ambassador docker-ut sh
$ socat TCP4-LISTEN:6379,fork,reuseaddr TCP4:192.168.1.52:6379
$ CMD="apk update && apk add socat && sh"
$ docker run -t -i --expose 6379 --name redis_ambassador alpine:3.2 sh -c "$CMD"
[...]
/ # socat -t 100000000 TCP4-LISTEN:6379,fork,reuseaddr TCP4:192.168.1.52:6379
And get the `redis-cli` image so we can talk over the ambassador bridge.
@@ -127,8 +128,8 @@ And get the `redis-cli` image so we can talk over the ambassador bridge.
## The svendowideit/ambassador Dockerfile
The `svendowideit/ambassador` image is a small `busybox` image with
`socat` built in. When you start the container, it uses a small `sed`
The `svendowideit/ambassador` image is based on the `alpine:3.2` image with
`socat` installed. When you start the container, it uses a small `sed`
script to parse out the (possibly multiple) link environment variables
to set up the port forwarding. On the remote host, you need to set the
variable using the `-e` command line option.
@@ -139,19 +140,21 @@ Will forward the local `1234` port to the remote IP and port, in this
case `192.168.1.52:6379`.
#
#
# first you need to build the docker-ut image
# using ./contrib/mkimage-unittest.sh
# then
# docker build -t SvenDowideit/ambassador .
# docker tag SvenDowideit/ambassador ambassador
# do
# docker build -t svendowideit/ambassador .
# then to run it (on the host that has the real backend on it)
# docker run -t -i --link redis:redis --name redis_ambassador -p 6379:6379 ambassador
# docker run -t -i -link redis:redis -name redis_ambassador -p 6379:6379 svendowideit/ambassador
# on the remote host, you can set up another ambassador
# docker run -t -i --name redis_ambassador --expose 6379 sh
# docker run -t -i -name redis_ambassador -expose 6379 -e REDIS_PORT_6379_TCP=tcp://192.168.1.52:6379 svendowideit/ambassador sh
# you can read more about this process at https://docs.docker.com/articles/ambassador_pattern_linking/
FROM docker-ut
MAINTAINER SvenDowideit@home.org.au
# use alpine because it's a minimal image with a package manager.
# pretty much all that is needed is a container that has a functioning env and socat (or equivalent)
FROM alpine:3.2
MAINTAINER SvenDowideit@home.org.au
RUN apk update && \
apk add socat && \
rm -r /var/cache/
CMD env | grep _TCP= | sed 's/.*_PORT_\([0-9]*\)_TCP=tcp:\/\/\(.*\):\(.*\)/socat TCP4-LISTEN:\1,fork,reuseaddr TCP4:\2:\3 \&/' | sh && top
CMD env | grep _TCP= | sed 's/.*_PORT_\([0-9]*\)_TCP=tcp:\/\/\(.*\):\(.*\)/socat -t 100000000 TCP4-LISTEN:\1,fork,reuseaddr TCP4:\2:\3 \& wait/' | sh
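For readers untangling the `sed` one-liner above, here is a rough Go rendering of the transformation it performs (an illustrative sketch with a hypothetical `envToSocat` helper, not code from the image):

```go
package main

import (
	"fmt"
	"regexp"
)

// envToSocat mimics the image's sed script: it turns a Docker link
// variable such as REDIS_PORT_6379_TCP=tcp://172.17.0.136:6379 into
// the socat forwarding command the ambassador container runs.
func envToSocat(envVar string) string {
	re := regexp.MustCompile(`.*_PORT_([0-9]+)_TCP=tcp://(.*):(.*)`)
	m := re.FindStringSubmatch(envVar)
	if m == nil {
		return "" // not a *_PORT_<n>_TCP link variable
	}
	return fmt.Sprintf("socat -t 100000000 TCP4-LISTEN:%s,fork,reuseaddr TCP4:%s:%s",
		m[1], m[2], m[3])
}

func main() {
	fmt.Println(envToSocat("REDIS_PORT_6379_TCP=tcp://172.17.0.136:6379"))
}
```

One listener is spawned per matching variable, which is how a single ambassador can forward several linked ports at once.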


@@ -59,7 +59,7 @@ in a database image.
In almost all cases, you should only run a single process in a single
container. Decoupling applications into multiple containers makes it much
easier to scale horizontally and reuse containers. If that service depends on
another service, make use of [container linking](../userguide/dockerlinks.md).
another service, make use of [container linking](../userguide/networking/default_network/dockerlinks.md).
### Minimize the number of layers


@@ -4,8 +4,7 @@ title = "Automatically start containers"
description = "How to generate scripts for upstart, systemd, etc."
keywords = ["systemd, upstart, supervisor, docker, documentation, host integration"]
[menu.main]
parent = "smn_containers"
weight = 99
parent = "smn_administrate"
+++
<![end-metadata]-->



@@ -39,7 +39,7 @@ of another container. Of course, if the host system is setup
accordingly, containers can interact with each other through their
respective network interfaces — just like they can interact with
external hosts. When you specify public ports for your containers or use
[*links*](../userguide/dockerlinks.md)
[*links*](../userguide/networking/default_network/dockerlinks.md)
then IP traffic is allowed between containers. They can ping each other,
send/receive UDP packets, and establish TCP connections, but that can be
restricted if necessary. From a network architecture point of view, all
@@ -129,7 +129,7 @@ privilege separation.
Eventually, it is expected that the Docker daemon will run with restricted
privileges, delegating operations to well-audited sub-processes,
each with its own (very limited) scope of Linux capabilities,
each with its own (very limited) scope of Linux capabilities,
virtual network setup, filesystem management, etc. That is, most likely,
pieces of the Docker engine itself will run inside of containers.


@@ -172,6 +172,6 @@ the exposed port to two different ports on the host
$ mongo --port 28001
$ mongo --port 28002
- [Linking containers](../userguide/dockerlinks.md)
- [Linking containers](../userguide/networking/default_network/dockerlinks.md)
- [Cross-host linking containers](../articles/ambassador_pattern_linking.md)
- [Creating an Automated Build](https://docs.docker.com/docker-hub/builds/)


@@ -10,7 +10,7 @@ parent = "smn_applied"
# Dockerizing PostgreSQL
> **Note**:
> **Note**:
> - **If you don't like sudo** then see [*Giving non-root
> access*](../installation/binaries.md#giving-non-root-access)
@@ -85,7 +85,7 @@ And run the PostgreSQL server container (in the foreground):
$ docker run --rm -P --name pg_test eg_postgresql
There are 2 ways to connect to the PostgreSQL server. We can use [*Link
Containers*](../userguide/dockerlinks.md), or we can access it from our host
Containers*](../userguide/networking/default_network/dockerlinks.md), or we can access it from our host
(or the network).
> **Note**:


@@ -18,9 +18,8 @@ plugins.
Plugins extend Docker's functionality. They come in specific types. For
example, a [volume plugin](plugins_volume.md) might enable Docker
volumes to persist across multiple Docker hosts and a
[network plugin](plugins_network.md) might provide network plumbing
using a favorite networking technology, such as vxlan overlay, ipvlan, EVPN, etc.
volumes to persist across multiple Docker hosts and a
[network plugin](plugins_network.md) might provide network plumbing.
Currently Docker supports volume and network driver plugins. In the future it
will support additional plugin types.


@@ -1,7 +1,7 @@
<!--[metadata]>
+++
title = "Docker network driver plugins"
description = "Network drive plugins."
description = "Network driver plugins."
keywords = ["Examples, Usage, plugins, docker, documentation, user guide"]
[menu.main]
parent = "mn_extend"
@@ -11,41 +11,48 @@ weight=-1
# Docker network driver plugins
Docker supports network driver plugins via
[LibNetwork](https://github.com/docker/libnetwork). Network driver plugins are
implemented as "remote drivers" for LibNetwork, which shares plugin
infrastructure with Docker. In effect this means that network driver plugins
are activated in the same way as other plugins, and use the same kind of
protocol.
Docker network plugins enable Docker deployments to be extended to support a
wide range of networking technologies, such as VXLAN, IPVLAN, MACVLAN or
something completely different. Network driver plugins are supported via the
LibNetwork project. Each plugin is implemented as a "remote driver" for
LibNetwork, which shares plugin infrastructure with Docker. Effectively,
network driver plugins are activated in the same way as other plugins, and use
the same kind of protocol.
## Using network driver plugins
The means of installing and running a network driver plugin will depend on the
particular plugin.
The means of installing and running a network driver plugin depend on the
particular plugin. So, be sure to install your plugin according to the
instructions obtained from the plugin developer.
Once running however, network driver plugins are used just like the built-in
network drivers: by being mentioned as a driver in network-oriented Docker
commands. For example,
docker network create -d weave mynet
$ docker network create --driver weave mynet
Some network driver plugins are listed in [plugins](plugins.md)
The network thus created is owned by the plugin, so subsequent commands
referring to that network will also be run through the plugin such as,
The `mynet` network is now owned by `weave`, so subsequent commands
referring to that network will be sent to the plugin,
docker run --net=mynet busybox top
$ docker run --net=mynet busybox top
## Network driver plugin protocol
The network driver protocol, additional to the plugin activation call, is
documented as part of LibNetwork:
## Write a network plugin
Network plugins implement the [Docker plugin
API](https://docs.docker.com/extend/plugin_api/) and the network plugin protocol
## Network plugin protocol
The network driver protocol, in addition to the plugin activation call, is
documented as part of libnetwork:
[https://github.com/docker/libnetwork/blob/master/docs/remote.md](https://github.com/docker/libnetwork/blob/master/docs/remote.md).
# Related GitHub PRs and issues
# Related Information
Please record your feedback in the following issue, on the usual
Google Groups, or the IRC channel #docker-network.
To interact with the Docker maintainers and other interested users, see the IRC channel `#docker-network`.
- [#14083](https://github.com/docker/docker/issues/14083) Feedback on
experimental networking features
- [Docker networks feature overview](../userguide/networking/index.md)
- The [LibNetwork](https://github.com/docker/libnetwork) project


@@ -109,8 +109,8 @@ You must delete the user-created configuration files manually.
## Where to go from here
You can find more details about Docker on openSUSE or SUSE Linux Enterprise in
the [Docker quick start guide](https://www.suse.com/documentation/sles-12/dockerquick/data/dockerquick.
html) on the SUSE website. The document targets SUSE Linux Enterprise, but its contents apply also to openSUSE.
You can find more details about Docker on openSUSE or SUSE Linux Enterprise in the
[Docker quick start guide](https://www.suse.com/documentation/sles-12/dockerquick/data/dockerquick.html)
on the SUSE website. The document targets SUSE Linux Enterprise, but its contents apply also to openSUSE.
Continue to the [User Guide](../userguide/).


@@ -59,7 +59,7 @@ There are two ways to install Docker Engine. You can install with the `yum` pac
For Fedora 22 run:
$ cat >/etc/yum.repos.d/docker.repo <<-EOF
$ cat >/etc/yum.repos.d/docker.repo <<-EOF
[dockerrepo]
name=Docker Repository
baseurl=https://yum.dockerproject.org/repo/main/fedora/22
@@ -74,7 +74,7 @@ There are two ways to install Docker Engine. You can install with the `yum` pac
5. Start the Docker daemon.
$ sudo service docker start
$ sudo systemctl start docker
6. Verify `docker` is installed correctly by running a test image in a container.
@@ -123,13 +123,13 @@ There are two ways to install Docker Engine. You can install with the `yum` pac
4. Start the Docker daemon.
$ sudo service docker start
$ sudo systemctl start docker
5. Verify `docker` is installed correctly by running a test image in a container.
$ sudo docker run hello-world
## Create a docker group
## Create a docker group
The `docker` daemon binds to a Unix socket instead of a TCP port. By default
that Unix socket is owned by the user `root` and other users can access it with
@@ -163,7 +163,7 @@ To create the `docker` group and add your user:
To ensure Docker starts when you boot your system, do the following:
$ sudo chkconfig docker on
$ sudo systemctl enable docker
If you need to add an HTTP Proxy, set a different directory or partition for the
Docker runtime files, or make other customizations, read our Systemd article to
@@ -190,7 +190,7 @@ This configuration allows IP forwarding from the container as expected.
## Uninstall
You can uninstall the Docker software with `yum`.
You can uninstall the Docker software with `yum`.
1. List the package you have installed.


@@ -49,13 +49,13 @@ your `apt` sources to the new Docker repository.
Docker's `apt` repository contains Docker 1.7.1 and higher. To set `apt` to use
packages from the new repository:
1. If you haven't already done so, log into your Ubuntu instance.
1. If you haven't already done so, log into your Ubuntu instance as a privileged user.
2. Open a terminal window.
3. Add the new `gpg` key.
$ apt-key adv --keyserver hkp://pgp.mit.edu:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
$ sudo apt-key adv --keyserver hkp://pgp.mit.edu:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
4. Open the `/etc/apt/sources.list.d/docker.list` file in your favorite editor.


@@ -21,6 +21,8 @@ The following short variant options are deprecated in favor of their long
variants:
docker run -c (--cpu-shares)
docker build -c (--cpu-shares)
docker create -c (--cpu-shares)
### Driver Specific Log Tags
**Deprecated In Release: v1.9**
@@ -42,7 +44,7 @@ The built-in LXC execution driver is deprecated for an external implementation.
The lxc-conf flag and API fields will also be removed.
### Old Command Line Options
**Deprecated In Release: [v1.8.0](../release-notes.md#docker-engine-1-8-0)**
**Deprecated In Release: [v1.8.0](https://github.com/docker/docker/releases/tag/v1.8.0)**
**Target For Removal In Release: v1.10**


@@ -98,7 +98,7 @@ with several powerful functionalities:
- *Sharing.* Docker has access to a public registry [on Docker Hub](https://hub.docker.com/)
where thousands of people have uploaded useful images: anything from Redis,
CouchDB, PostgreSQL to IRC bouncers to Rails app servers to Hadoop to base
CouchDB, PostgreSQL to IRC bouncers to Rails app servers to Hadoop to base
images for various Linux distros. The
[*registry*](https://docs.docker.com/registry/) also
includes an official "standard library" of useful containers maintained by the
@@ -135,8 +135,7 @@ thousands or even millions of containers running in parallel.
### How do I connect Docker containers?
Currently the recommended way to link containers is via the link primitive. You
can see details of how to [work with links here](../userguide/dockerlinks.md).
Currently the recommended way to connect containers is via the Docker network feature. You can see details of how to [work with Docker networks here](https://docs.docker.com/networking).
Also useful for more flexible service portability is the [Ambassador linking
pattern](../articles/ambassador_pattern_linking.md).
@@ -154,19 +153,19 @@ the container will continue to as well. You can see a more substantial example
Linux:
- Ubuntu 12.04, 13.04 et al
- Fedora 19/20+
- RHEL 6.5+
- CentOS 6+
- Gentoo
- ArchLinux
- openSUSE 12.3+
- Ubuntu 12.04, 13.04 et al
- Fedora 19/20+
- RHEL 6.5+
- CentOS 6+
- Gentoo
- ArchLinux
- openSUSE 12.3+
- CRUX 3.0+
Cloud:
- Amazon EC2
- Google Compute Engine
- Amazon EC2
- Google Compute Engine
- Microsoft Azure
- Rackspace
@@ -263,11 +262,11 @@ how to do this, check the documentation for your OS.
You can find more answers on:
- [Docker user mailinglist](https://groups.google.com/d/forum/docker-user)
- [Docker developer mailinglist](https://groups.google.com/d/forum/docker-dev)
- [IRC, docker on freenode](irc://chat.freenode.net#docker)
- [GitHub](https://github.com/docker/docker)
- [Ask questions on Stackoverflow](http://stackoverflow.com/search?q=docker)
- [Docker user mailinglist](https://groups.google.com/d/forum/docker-user)
- [Docker developer mailinglist](https://groups.google.com/d/forum/docker-dev)
- [IRC, docker on freenode](irc://chat.freenode.net#docker)
- [GitHub](https://github.com/docker/docker)
- [Ask questions on Stackoverflow](http://stackoverflow.com/search?q=docker)
- [Join the conversation on Twitter](http://twitter.com/docker)
Looking for something else to read? Check out the [User Guide](../userguide/).


@@ -1,159 +0,0 @@
<!--[metadata]>
+++
title = "Advanced contributing"
description = "Explains workflows for refactor and design proposals"
keywords = ["contribute, project, design, refactor, proposal"]
[menu.main]
parent = "smn_contribute"
weight=6
+++
<![end-metadata]-->
# Advanced contributing
In this section, you learn about the more advanced contributions you can make.
They are advanced because they have a more involved workflow or require greater
programming experience. Don't be scared off, though: if you like to stretch and
challenge yourself, this is the place for you.
This section gives generalized instructions for advanced contributions. You'll
read about the workflow, but there are no specific descriptions of commands.
Your goal should be to understand the processes described.
At this point, you should have read and worked through the earlier parts of
the project contributor guide. You should also have
<a href="../make-a-contribution/" target="_blank"> made at least one project contribution</a>.
## Refactor or cleanup proposal
A refactor or cleanup proposal changes Docker's internal structure without
altering the external behavior. To make this type of proposal:
1. Fork `docker/docker`.
2. Make your changes in a feature branch.
3. Sync and rebase with `master` as you work.
4. Run the full test suite.
5. Submit your code through a pull request (PR).
The PR's title should have the format:
**Cleanup:** _short title_
If your change required logic changes, note that in your request.
6. Work through Docker's review process until merge.
## Design proposal
A design proposal solves a problem or adds a feature to the Docker software.
The process for submitting design proposals requires two pull requests, one
for the design and one for the implementation.
![Simple process](images/proposal.png)
The important thing to notice is that both the design pull request and the
implementation pull request go through a review. In other words, there is
considerable time commitment in a design proposal; so, you might want to pair
with someone on design work.
The following provides greater detail on the process:
1. Come up with an idea.
Ideas usually come from limitations users feel working with a product. So,
take some time to really use Docker. Try it on different platforms; explore
how it works with different web applications. Go to some community events
and find out what other users want.
2. Review existing issues and proposals to make sure no other user is proposing a similar idea.
The design proposals are <a
href="https://github.com/docker/docker/pulls?q=is%3Aopen+is%3Apr+label%
3Akind%2Fproposal" target="_blank">all online in our GitHub pull requests</a>.
3. Talk to the community about your idea.
We have lots of <a href="../get-help/" target="_blank">community forums</a>
where you can get feedback on your idea. Float your idea in a forum or two
to get some commentary going on it.
4. Fork `docker/docker` and clone the repo to your local host.
5. Create a new Markdown file in the area you wish to change.
For example, if you want to redesign our daemon create a new file under the
`daemon/` folder.
6. Name the file descriptively, for example `redesign-daemon-proposal.md`.
7. Write a proposal for your change into the file.
This is a Markdown file that describes your idea. Your proposal
should include information like:
* Why is this change needed or what are the use cases?
* What are the requirements this change should meet?
* What are some ways to design/implement this feature?
* Which design/implementation do you think is best and why?
* What are the risks or limitations of your proposal?
This is your chance to convince people your idea is sound.
8. Submit your proposal in a pull request to `docker/docker`.
The title should have the format:
**Proposal:** _short title_
The body of the pull request should include a brief summary of your change
and then say something like "_See the file for a complete description_".
9. Refine your proposal through review.
The maintainers and the community review your proposal. You'll need to
answer questions and sometimes explain or defend your approach. This is a
chance for everyone to both teach and learn.
10. Pull request accepted.
Your request may also be rejected. Not every idea is a good fit for Docker.
Let's assume though your proposal succeeded.
11. Implement your idea.
Implementation uses all the standard practices of any contribution.
* fork `docker/docker`
* create a feature branch
* sync frequently back to master
* test as you go and full test before a PR
If you run into issues, the community is there to help.
12. When you have a complete implementation, submit a pull request back to `docker/docker`.
13. Review and iterate on your code.
If you are making a large code change, you can expect greater scrutiny
during this phase.
14. Acceptance and merge!
## About the advanced process
Docker is a large project. Our core team gets a great many design proposals.
Design proposal discussions can span days, weeks, or longer. The number of comments can reach the hundreds.
In that situation, following the discussion flow and the decisions reached is crucial.
Making a pull request with a design proposal simplifies this process:
* You can leave comments on specific lines of the design proposal.
* Replies to a line are easy to track.
* As the proposal changes and is updated, resolved line comments are collapsed.
* GitHub maintains the entire history.
While proposals in pull requests do not end up merged into a master repository, they provide a convenient tool for managing the design process.

<!--[metadata]>
+++
title = "Coding style checklist"
description = "List of guidelines for coding Docker contributions"
keywords = ["change, commit, squash, request, pull request, test, unit test, integration tests, Go, gofmt, LGTM"]
[menu.main]
parent = "smn_contribute"
weight=7
+++
<![end-metadata]-->
# Coding style checklist
This checklist summarizes the material covered in [make a
code contribution](make-a-contribution.md) and [advanced
contributing](advanced-contributing.md). The checklist applies to both
program code and documentation code.
## Change and commit code
* Fork the `docker/docker` repository.
* Make changes on your fork in a feature branch. Name your branch `XXXX-something`
where `XXXX` is the issue number you are working on.
* Run `gofmt -s -w file.go` on each changed file before
committing your changes. Most editors have plug-ins that do this automatically.
* Run `golint` on each changed file before
committing your changes.
* Update the documentation when creating or modifying features.
* Commits that fix or close an issue should reference them in the commit message
`Closes #XXXX` or `Fixes #XXXX`. Mentions help by automatically closing the
issue on a merge.
* After every commit, run the test suite and ensure it is passing.
* Sync and rebase frequently as you code to keep up with `docker` master.
* Set your `git` signature and make sure you sign each commit.
* Do not add yourself to the `AUTHORS` file. This file is autogenerated from the
Git history.
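The commit conventions above (the git signature, signed commits, and issue references) can be sketched in a throwaway repository. This is a minimal illustration, assuming the issue number 11038 and the commit message are example values, not real Docker history.

```shell
# Sketch of the commit conventions; the file, message, and issue
# number (#11038, from this guide's examples) are illustrative.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q

# Set the git signature used by the -s (sign-off) flag.
git config user.name  "Contributor Name"
git config user.email "contributor@example.com"

echo "fix" > file.txt
git add file.txt

# -s appends a Signed-off-by trailer; "Fixes #11038" is the kind of
# reference GitHub uses to auto-close the issue on merge.
git commit -q -s -m "Fix broken RHEL link

Fixes #11038"

# Show the full commit message, trailer included.
git log -1 --format=%B
```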
## Tests and testing
* Submit unit tests for your changes.
* Make use of the built-in Go test framework.
* Use existing Docker test files (`name_test.go`) for inspiration.
* Run <a href="../test-and-docs" target="_blank">the full test suite</a> on your
branch before submitting a pull request.
* Run `make docs` to build the documentation and then check it locally.
* Use an <a href="http://www.hemingwayapp.com" target="_blank">online grammar
checker</a> or similar to test your documentation changes for clarity,
concision, and correctness.
## Pull requests
* Sync and cleanly rebase on top of Docker's `master` without multiple branches
mixed into the PR.
* Before the pull request, squash your commits into logical units of work using
`git rebase -i` and `git push -f`.
* Include documentation changes in the same commit so that a revert would
remove all traces of the feature or fix.
* Reference each issue in your pull request description (`#XXXX`)
## Respond to pull request reviews
* Docker maintainers use LGTM (**l**ooks-**g**ood-**t**o-**m**e) in PR comments
to indicate acceptance.
* Code review comments may be added to your pull request. Discuss, then make
the suggested modifications and push additional commits to your feature
branch.
* Incorporate changes on your feature branch and push to your fork. This
automatically updates your open pull request.
* Post a comment after pushing to alert reviewers to PR changes; pushing a
change does not send notifications.
* A change requires LGTMs from an absolute majority of the maintainers of each
affected component. For example, if you change `docs/` and `registry/` code,
an absolute majority of the `docs/` and the `registry/` maintainers must
approve your PR.
## Merges after pull requests
* After a merge, [a master build](https://master.dockerproject.org/) is
available almost immediately.
* If you made a documentation change, you can see it at
[docs.master.dockerproject.org](http://docs.master.dockerproject.org/).

<!--[metadata]>
+++
title = "Create a pull request (PR)"
description = "Basic workflow for Docker contributions"
keywords = ["contribute, pull request, review, workflow, beginner, squash, commit"]
[menu.main]
parent = "smn_contribute"
weight=4
+++
<![end-metadata]-->
# Create a pull request (PR)
A pull request (PR) sends your changes to the Docker maintainers for review. You
create a pull request on GitHub. A pull request "pulls" changes from your forked
repository into the `docker/docker` repository.
You can see <a href="https://github.com/docker/docker/pulls" target="_blank">the
list of active pull requests to Docker</a> on GitHub.
## Check your work
Before you create a pull request, check your work.
1. In a terminal window, go to the root of your `docker-fork` repository.
$ cd ~/repos/docker-fork
2. Checkout your feature branch.
$ git checkout 11038-fix-rhel-link
Switched to branch '11038-fix-rhel-link'
3. Run the full test suite on your branch.
$ make test
All the tests should pass. If they don't, find out why and correct the
situation.
4. Optionally, if you modified the documentation, build it:
$ make docs
5. Commit and push any changes that result from your checks.
## Rebase your branch
Always rebase and squash your commits before making a pull request.
1. Checkout your feature branch in your local `docker-fork` repository.
This is the branch associated with your request.
2. Fetch any last minute changes from `docker/docker`.
$ git fetch upstream master
From github.com:docker/docker
* branch master -> FETCH_HEAD
3. Start an interactive rebase.
$ git rebase -i upstream/master
4. Rebase opens an editor with a list of commits.
pick 1a79f55 Tweak some of the other text for grammar
pick 53e4983 Fix a link
pick 3ce07bb Add a new line about RHEL
5. Replace the `pick` keyword with `squash` on all but the first commit.
pick 1a79f55 Tweak some of the other text for grammar
squash 53e4983 Fix a link
squash 3ce07bb Add a new line about RHEL
After you save the changes and quit from the editor, git starts
the rebase, reporting the progress along the way. Sometimes
your changes can conflict with the work of others. If git
encounters a conflict, it stops the rebase, and prints guidance
for how to correct the conflict.
6. Edit and save your commit message.
$ git commit -s
Make sure your message includes <a href="../set-up-git" target="_blank">your signature</a>.
7. Force push any changes to your fork on GitHub.
$ git push -f origin 11038-fix-rhel-link
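The interactive squash above can also be scripted. The sketch below replays this section's three-commit example in a throwaway repository and uses git's `GIT_SEQUENCE_EDITOR` hook to make the same `pick`-to-`squash` edit non-interactively. It assumes GNU `sed`; the branch name and commit subjects are the ones from the steps above.

```shell
# Non-interactive version of the squash in steps 3-6 (assumes GNU sed).
set -e
repo=$(mktemp -d)
cd "$repo"
git -c init.defaultBranch=master init -q
git config user.name  "Contributor"
git config user.email "contributor@example.com"

# Base commit on master, then a feature branch with three commits.
git commit -q --allow-empty -m "base"
git checkout -q -b 11038-fix-rhel-link
echo one   > notes.md; git add notes.md
git commit -q  -m "Tweak some of the other text for grammar"
echo two   >> notes.md; git commit -qam "Fix a link"
echo three >> notes.md; git commit -qam "Add a new line about RHEL"

# Rewrite the rebase todo list: keep the first "pick", turn the rest
# into "squash" -- the same edit you would make in the editor.
GIT_SEQUENCE_EDITOR="sed -i -e '2,\$s/^pick/squash/'" \
GIT_EDITOR=true \
    git rebase -q -i master

# The three commits are now one.
git rev-list --count master..HEAD
```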
## Create a PR on GitHub
You create and manage PRs on GitHub:
1. Open your browser to your fork on GitHub.
You should see the latest activity from your branch.
![Latest commits](images/latest_commits.png)
2. Click "Compare & pull request."
The system displays the pull request dialog.
![PR dialog](images/to_from_pr.png)
The pull request compares your changes to the `master` branch on the
`docker/docker` repository.
3. Edit the dialog's description and add a reference to the issue you are fixing.
GitHub helps you out by searching for the issue as you type.
![Fixes issue](images/fixes_num.png)
4. Scroll down and verify the PR contains the commits and changes you expect.
For example, is the file count correct? Are the changes in the files what
you expect?
![Commits](images/commits_expected.png)
5. Press "Create pull request".
The system creates the request and opens it for you in the `docker/docker`
repository.
![Pull request made](images/pull_request_made.png)
## Where to go next
Congratulations, you've created your first pull request to Docker. The next
step is to learn how to [participate in your PR's
review](review-pr.md).

<!--[metadata]>
+++
title = "Style guide for Docker documentation"
description = "Style guide for Docker documentation describing standards and conventions for contributors"
keywords = ["style, guide, docker, documentation"]
[menu.main]
parent = "mn_opensource"
weight=100
+++
<![end-metadata]-->
# Docker documentation: style & grammar conventions
## Style standards
Over time, different publishing communities have written standards for the style
and grammar they prefer in their publications. These standards are called
[style guides](http://en.wikipedia.org/wiki/Style_guide). Generally, Docker's
documentation uses the standards described in the
[Associated Press's (AP) style guide](http://en.wikipedia.org/wiki/AP_Stylebook).
If a question about syntactical, grammatical, or lexical practice comes up,
refer to the AP guide first. If you don't have a copy of (or online subscription
to) the AP guide, you can almost always find an answer to a specific question by
searching the web. If you can't find an answer, please ask a
[maintainer](https://github.com/docker/docker/blob/master/MAINTAINERS) and
we will find the answer.
That said, please don't get too hung up on using correct style. We'd rather have
you submit good information that doesn't conform to the guide than no
information at all. Docker's tech writers are always happy to help you with the
prose, and we promise not to judge or use a red pen!
> **Note:**
> The documentation is written with paragraphs wrapped at 80 column lines to
> make it easier for terminal use. You can probably set up your favorite text
> editor to do this automatically for you.
### Prose style
In general, try to write simple, declarative prose. We prefer short,
single-clause sentences and brief three-to-five sentence paragraphs. Try to
choose vocabulary that is straightforward and precise. Avoid creating new terms,
using obscure terms or, in particular, using a lot of jargon. For example, use
"use" instead of "leverage".
That said, don't feel like you have to write for localization or for
English-as-a-second-language (ESL) speakers specifically. Assume you are writing
for an ordinary speaker of English with a basic university education. If your
prose is simple, clear, and straightforward it will translate readily.
One way to think about this is to assume Docker's users are generally university
educated and read at least at a "16th" grade level (meaning they have a
university degree). You can use a [readability
tester](https://readability-score.com/) to help guide your judgement. For
example, the readability score for the phrase "Containers should be ephemeral"
is around the 13th grade level (first year at university), and so is acceptable.
In all cases, we prefer clear, concise communication over stilted, formal
language. Don't feel like you have to write documentation that "sounds like
technical writing."
### Metaphor and figurative language
One exception to the "don't write directly for ESL" rule is to avoid the use of
metaphor or other
[figurative language](http://en.wikipedia.org/wiki/Literal_and_figurative_language) to
describe things. There are too many cultural and social issues that can prevent
a reader from correctly interpreting a metaphor.
## Specific conventions
Below are some specific recommendations (and a few deviations) from AP style
that we use in our docs.
### Contractions
As long as your prose does not become too slangy or informal, it's perfectly
acceptable to use contractions in our documentation. Make sure to use
apostrophes correctly.
### Use of dashes in a sentence
"Dashes" refers to the en dash (–) and the em dash (—). Dashes can be used to
separate parenthetical material.
Usage Example: This is an example of a Docker client which uses the Big Widget
to run and does x, y, and z.
Use dashes cautiously and consider whether commas or parentheses would work just
as well. We always emphasize short, succinct sentences.
More info from the always handy [Grammar Girl site](http://www.quickanddirtytips.com/education/grammar/dashes-parentheses-and-commas).
### Pronouns
It's okay to use first and second person pronouns, especially if it lets you avoid a passive construction. Specifically, always use "we" to
refer to Docker and "you" to refer to the user. For example, "We built the
`exec` command so you can resize a TTY session." That said, in general, try to write simple, imperative sentences that avoid the use of pronouns altogether. Say "Now, enter your SSH key" rather than "You can now enter your SSH key."
As much as possible, avoid using gendered pronouns ("he" and "she", etc.).
Either recast the sentence so the pronoun is not needed or, less preferably,
use "they" instead. If you absolutely can't get around using a gendered pronoun,
pick one and stick to it. Which one you choose is up to you. One common
convention is to use the pronoun of the author's gender, but if you prefer to
default to "he" or "she", that's fine too.
### Capitalization
#### In general
Only proper nouns should be capitalized in body text. In general, strive to be
as strict as possible in applying this rule. Avoid using capitals for emphasis
or to denote "specialness".
The word "Docker" should always be capitalized when referring to either the
company or the technology. The only exception is when the term appears in a code
sample.
#### Starting sentences
Because code samples should always be written exactly as they would appear
on-screen, you should avoid starting sentences with a code sample.
#### In headings
Headings take sentence capitalization, meaning that only the first letter is
capitalized (and words that would normally be capitalized in a sentence, e.g.,
"Docker"). Do not use Title Case (i.e., capitalizing every word) for headings. Generally, we adhere to [AP style
for titles](http://www.quickanddirtytips.com/education/grammar/capitalizing-titles).
### Periods
We prefer one space after a period at the end of a sentence, not two.
See [lists](#lists) below for how to punctuate list items.
### Abbreviations and acronyms
* Exempli gratia (e.g.) and id est (i.e.): these should always have periods and
are always followed by a comma.
* Acronyms are pluralized by simply adding "s", e.g., PCs, OSs.
* On first use on a given page, the complete term should be used, with the
abbreviation or acronym in parentheses. E.g., Red Hat Enterprise Linux (RHEL).
The exception is common, non-technical acronyms like AKA or ASAP. Note that
acronyms other than i.e. and e.g. are capitalized.
* Other than "e.g." and "i.e." (as discussed above), acronyms do not take
periods: PC, not P.C.
### Lists
When writing lists, keep the following in mind:
Use bullets when the items being listed are independent of each other and the
order of presentation is not important.
Use numbers for steps that have to happen in order or if you have mentioned the
list in introductory text. For example, if you wrote "There are three config
settings available for SSL, as follows:", you would number each config setting
in the subsequent list.
In all lists, if an item is a complete sentence, it should end with a
period. Otherwise, we prefer no terminal punctuation for list items.
Each item in a list should start with a capital.
### Numbers
Write out numbers in body text and titles from one to ten. From 11 on, use numerals.
### Notes
Use notes sparingly and only to bring things to the reader's attention that are
critical or otherwise deserving of being called out from the body text. Please
format all notes as follows:
> **Note:**
> One line of note text
> another line of note text
### Avoid excess use of "i.e."
Minimize your use of "i.e.". It can add an unnecessary interpretive burden on
the reader. Avoid writing "This is a thing, i.e., it is like this". Just
say what it is: "This thing is …"
### Preferred usages
#### Login vs. log in
A "login" is a noun (one word), as in "Enter your login". "Log in" is a compound
verb (two words), as in "Log in to the terminal".
### Oxford comma
One way in which we differ from AP style is that Docker's docs use the [Oxford
comma](http://en.wikipedia.org/wiki/Serial_comma) in all cases. That's our
position on this controversial topic, we won't change our mind, and that's that!
### Code and UI text styling
We require `code font` styling (monospace, sans-serif) for all text that refers
to a command or other input or output from the CLI. This includes file paths
(e.g., `/etc/hosts/docker.conf`). If you enclose text in backticks (`), Markdown
will style the text as code.
Text from a CLI should be quoted verbatim, even if it contains errors or its
style contradicts this guide. You can add "(sic)" after the quote to indicate
the errors are in the quote and are not errors in our docs.
Text taken from a GUI (e.g., menu text or button text) should appear in "double
quotes". The text should take the exact same capitalization, etc. as appears in
the GUI. E.g., Click "Continue" to save the settings.
Text that refers to a keyboard command or hotkey is capitalized (e.g., Ctrl-D).
When writing CLI examples, give the user hints by making the examples resemble
exactly what they see in their shell:
* Indent shell examples by 4 spaces so they get rendered as code blocks.
* Start typed commands with `$ ` (dollar space), so that they are easily
differentiated from program output.
* Program output has no prefix.
* Comments begin with `# ` (hash space).
* In-container shell commands begin with `$$ ` (dollar dollar space).
Please test all code samples to ensure that they are correct and functional so
that users can successfully cut-and-paste samples directly into the CLI.
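Put together, a short passage following these conventions might look like the example below. The container ID shown as a hostname is invented for illustration.

```
# Start an interactive container on the host
$ docker run -it ubuntu bash
$$ cat /etc/hostname
0fd21a21a8a5
$$ exit
exit
$ echo "back on the host shell"
back on the host shell
```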
## Pull requests
The pull request (PR) process is in place so that we can ensure changes made to
the docs are the best changes possible. A good PR will do some or all of the
following:
* Explain why the change is needed
* Point out potential issues or questions
* Ask for help from experts in the company or the community
* Encourage feedback from core developers and others involved in creating the
software being documented.
Writing a PR that is singular in focus and has clear objectives will encourage
all of the above. Done correctly, the process allows reviewers (maintainers and
community members) to validate the claims of the documentation and identify
potential problems in communication or presentation.
### Commit messages
In order to write clear, useful commit messages, please follow these
[recommendations](http://robots.thoughtbot.com/5-useful-tips-for-a-better-commit-message).
## Links
For accessibility and usability reasons, avoid using phrases such as "click
here" for link text. Recast your sentence so that the link text describes the
content of the link, as we did in the
["Commit messages" section](#commit-messages) above.
You can use relative links (../linkeditem) to link to other pages in Docker's
documentation.
## Graphics
When you need to add a graphic, try to make the file-size as small as possible.
If you need help reducing file-size of a high-resolution image, feel free to
contact us for help.
Usually, graphics should go in the same directory as the .md file that
references them, or in a subdirectory for images if one already exists.
The preferred file format for graphics is PNG, but GIF and JPG are also
acceptable.
If you are referring to a specific part of the UI in an image, use
call-outs (circles and arrows or lines) to highlight what you're referring to.
Line width for call-outs should not exceed five pixels. The preferred color for
call-outs is red.
Be sure to include descriptive alt-text for the graphic. This greatly helps
users with accessibility issues.
Lastly, be sure you have permission to use any included graphics.

<!--[metadata]>
+++
title = "Find and claim an issue"
description = "Basic workflow for Docker contributions"
keywords = ["contribute, issue, review, workflow, beginner, expert, squash, commit"]
[menu.main]
parent = "smn_contribute"
weight=2
+++
<![end-metadata]-->
<style type="text/css">
.gh-label {
display: inline-block;
padding: 3px 4px;
font-size: 12px;
font-weight: bold;
line-height: 1;
color: #fff;
border-radius: 2px;
box-shadow: inset 0 -1px 0 rgba(0,0,0,0.12);
}
.gh-label.beginner { background-color: #B5E0B5; color: #333333; }
.gh-label.expert { background-color: #599898; color: #ffffff; }
.gh-label.master { background-color: #306481; color: #ffffff; }
.gh-label.novice { background-color: #D6F2AC; color: #333333; }
.gh-label.proficient { background-color: #8DC7A9; color: #333333; }
.gh-label.bug { background-color: #FF9DA4; color: #333333; }
.gh-label.cleanup { background-color: #FFB7B3; color: #333333; }
.gh-label.content { background-color: #CDD3C2; color: #333333; }
.gh-label.feature { background-color: #B7BEB7; color: #333333; }
.gh-label.graphics { background-color: #E1EFCB; color: #333333; }
.gh-label.improvement { background-color: #EBD2BB; color: #333333; }
.gh-label.proposal { background-color: #FFD9C0; color: #333333; }
.gh-label.question { background-color: #EEF1D1; color: #333333; }
.gh-label.usecase { background-color: #F0E4C2; color: #333333; }
.gh-label.writing { background-color: #B5E9D5; color: #333333; }
</style>
# Find and claim an issue
On this page, you choose the issue you want to work on. As a contributor, you can work
on whatever you want. If you are new to contributing, you should start by
working with our known issues.
## Understand the issue types
An existing issue is something reported by a Docker user. As issues come in,
our maintainers triage them. Triage is its own topic. For now, it is important
for you to know that triage includes ranking issues according to difficulty.
Triaged issues have one of these labels:
<table class="tg">
<thead>
<tr>
<td class="tg-031e">Label</td>
<td class="tg-031e">Experience level guideline</td>
</tr>
</thead>
<tbody>
<tr>
<td class="tg-031e"><strong class="gh-label beginner">exp/beginner</strong></td>
<td class="tg-031e">You have made fewer than ten contributions in your lifetime to any open source project.</td>
</tr>
<tr>
<td class="tg-031e"><strong class="gh-label novice">exp/novice</strong></td>
<td class="tg-031e">You have made more than ten contributions to an open source project or at least five contributions to Docker.</td>
</tr>
<tr>
<td class="tg-031e"><strong class="gh-label proficient">exp/proficient</strong></td>
<td class="tg-031e">You have made more than five contributions to Docker which amount to at least 200 code lines or 1000 documentation lines. </td>
</tr>
<tr>
<td class="tg-031e"><strong class="gh-label expert">exp/expert</strong></td>
<td class="tg-031e">You have made less than 20 commits to Docker which amount to 500-1000 code lines or 1000-3000 documentation lines. </td>
</tr>
<tr>
<td class="tg-031e"><strong class="gh-label master">exp/master</strong></td>
<td class="tg-031e">You have made more than 20 commits to Docker and greater than 1000 code lines or 3000 documentation lines.</td>
</tr>
</tbody>
</table>
These labels are guidelines. You might have written a whole plugin for Docker in a personal
project and never contributed to Docker. With that kind of experience, you could take on an <strong
class="gh-label expert">exp/expert</strong> or <strong class="gh-label
master">exp/master</strong> level issue.
## Claim a beginner or novice issue
To claim an issue:
1. Go to the `docker/docker` <a
href="https://github.com/docker/docker" target="_blank">repository</a>.
2. Click the "Issues" link.
A list of the open issues appears.
![Open issues](images/issue_list.png)
3. From the "Labels" drop-down, select <strong class="gh-label beginner">exp/beginner</strong>.
The system filters to show only open <strong class="gh-label beginner">exp/beginner</strong> issues.
4. Open an issue that interests you.
The comments on the issues describe the problem and can provide information for a potential
solution.
5. When you find an open issue that both interests you and is unclaimed, make
sure no other user has chosen to work on it, then add a `#dibs` comment.
The project does not permit external contributors to assign issues to
themselves, so read the comments to see whether another user has already
claimed the issue with a `#dibs` comment.
6. After a moment, Gordon, the Docker bot, changes the issue status to claimed.
Your issue number will differ depending on what you claimed; the following
example shows issue #11038.
![Easy issue](images/easy_issue.png)
7. Make a note of the issue number; you will need it for later.
## Sync your fork and create a new branch
If you have followed along in this guide, you forked the `docker/docker`
repository. Maybe that was an hour ago or a few days ago. In any case, before
you start working on your issue, sync your repository with the upstream
`docker/docker` master. Syncing ensures your repository has the latest
changes.
To sync your repository:
1. Open a terminal on your local host.
2. Change directory to the `docker-fork` root.
$ cd ~/repos/docker-fork
3. Checkout the master branch.
$ git checkout master
Switched to branch 'master'
Your branch is up-to-date with 'origin/master'.
Recall that `origin/master` is a branch on your remote GitHub repository.
4. Make sure you have the upstream remote `docker/docker` by listing your remotes.
$ git remote -v
origin https://github.com/moxiegirl/docker.git (fetch)
origin https://github.com/moxiegirl/docker.git (push)
upstream https://github.com/docker/docker.git (fetch)
upstream https://github.com/docker/docker.git (push)
If the `upstream` is missing, add it.
$ git remote add upstream https://github.com/docker/docker.git
5. Fetch all the changes from the `upstream master` branch.
$ git fetch upstream master
remote: Counting objects: 141, done.
remote: Compressing objects: 100% (29/29), done.
remote: Total 141 (delta 52), reused 46 (delta 46), pack-reused 66
Receiving objects: 100% (141/141), 112.43 KiB | 0 bytes/s, done.
Resolving deltas: 100% (79/79), done.
From github.com:docker/docker
* branch master -> FETCH_HEAD
This command gets all the changes from the `master` branch belonging to
the `upstream` remote.
6. Rebase your local master with the `upstream/master`.
$ git rebase upstream/master
First, rewinding head to replay your work on top of it...
Fast-forwarded master to upstream/master.
This command applies all the commits from the upstream master to your local
master.
7. Check the status of your local branch.
$ git status
On branch master
Your branch is ahead of 'origin/master' by 38 commits.
(use "git push" to publish your local commits)
nothing to commit, working directory clean
Your local repository now has all the changes from the `upstream` remote. You
need to push the changes to your own remote fork which is `origin master`.
8. Push the rebased master to `origin master`.
$ git push origin master
Username for 'https://github.com': moxiegirl
Password for 'https://moxiegirl@github.com':
Counting objects: 223, done.
Compressing objects: 100% (38/38), done.
Writing objects: 100% (69/69), 8.76 KiB | 0 bytes/s, done.
Total 69 (delta 53), reused 47 (delta 31)
To https://github.com/moxiegirl/docker.git
8e107a9..5035fa1 master -> master
9. Create a new feature branch to work on your issue.
Your branch name should have the format `XXXX-descriptive` where `XXXX` is
the issue number you are working on. For example:
$ git checkout -b 11038-fix-rhel-link
Switched to a new branch '11038-fix-rhel-link'
Your branch should be up-to-date with the `upstream/master`. Why? Because you
branched off a freshly synced master. Let's check this anyway in the next
step.
10. Rebase your branch from `upstream/master`.
$ git rebase upstream/master
Current branch 11038-fix-rhel-link is up to date.
At this point, your local branch, your remote repository, and the Docker
repository all have identical code. You are ready to make changes for your
issue.
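The steps above can be condensed into one scripted run-through. The sketch below simulates the whole upstream/origin triangle locally with throwaway repositories (`upstream.git` standing in for `docker/docker`, `fork.git` for your fork); all paths and names are illustrative, and the branch number is this guide's example issue.

```shell
# Local simulation of the sync-and-branch workflow; upstream.git and
# fork.git are throwaway stand-ins for docker/docker and your fork.
set -e
work=$(mktemp -d)
cd "$work"

# Bare repository standing in for docker/docker, seeded via a clone.
git -c init.defaultBranch=master init -q --bare upstream.git
git -c init.defaultBranch=master clone -q upstream.git seed
git -C seed -c user.name=u -c user.email=u@example.com \
    commit -q --allow-empty -m "initial"
git -C seed push -q origin master

# "Fork" upstream, then let upstream move ahead of the fork.
git clone -q --bare upstream.git fork.git
git -C seed -c user.name=u -c user.email=u@example.com \
    commit -q --allow-empty -m "upstream change"
git -C seed push -q origin master

# Your local clone of the fork, with the upstream remote added.
git clone -q fork.git docker-fork
cd docker-fork
git config user.name  "Contributor"
git config user.email "contributor@example.com"
git remote add upstream "$work/upstream.git"

# The sync: fetch upstream, rebase master, push to your fork, then
# create the feature branch and confirm it is up to date.
git checkout -q master
git fetch -q upstream master
git rebase -q upstream/master
git push -q origin master
git checkout -q -b 11038-fix-rhel-link
git rebase -q upstream/master
```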
## Where to go next
At this point, you know what you want to work on and you have a branch to do
your work in. Go onto the next section to learn [how to work on your
changes](work-issue.md).

<!--[metadata]>
+++
title = "Where to chat or get help"
description = "Describes Docker's communication channels"
keywords = ["IRC, Google group, Twitter, blog, Stackoverflow"]
[menu.main]
parent = "mn_opensource"
+++
<![end-metadata]-->
<style type="text/css">
/* @TODO add 'no-zebra' table-style to the docs-base stylesheet */
/* Table without "zebra" striping */
.content-body table.no-zebra tr {
background-color: transparent;
}
</style>
# Where to chat or get help
There are several communications channels you can use to chat with Docker
community members and developers.
<table>
<col width="25%">
<col width="75%">
<tr>
<td>Internet Relay Chat (IRC)</td>
<td>
<p>
IRC is a direct line to our most knowledgeable Docker users.
Join the <code>#docker</code> and <code>#docker-dev</code> channels on
<strong>chat.freenode.net</strong>. IRC was first created in 1988.
So, it is a rich chat protocol but it can overwhelm new users. You can search
<a href="https://botbot.me/freenode/docker/#" target="_blank">our chat archives</a>.
</p>
Use our IRC quickstart guide below for easy ways to get started with IRC.
</td>
</tr>
<tr>
<td>Google Groups</td>
<td>
There are two groups.
<a href="https://groups.google.com/forum/#!forum/docker-user" target="_blank">Docker-user</a>
is for people using Docker containers.
The <a href="https://groups.google.com/forum/#!forum/docker-dev" target="_blank">docker-dev</a>
group is for contributors and other people contributing to the Docker
project.
</td>
</tr>
<tr>
<td>Twitter</td>
<td>
You can follow <a href="https://twitter.com/docker/" target="_blank">Docker's twitter</a>
to get updates on our products. You can also tweet us questions or just
share blogs or stories.
</td>
</tr>
<tr>
<td>Stack Overflow</td>
<td>
Stack Overflow has over 7,000 Docker questions listed. We regularly
monitor <a href="http://stackoverflow.com/search?tab=newest&q=docker" target="_blank">Docker questions</a>
and so do many other knowledgeable Docker users.
</td>
</tr>
</table>
# IRC Quickstart
The following instructions show you how to register with two web-based IRC
tools. Use one of the clients illustrated here or find another; while these
instructions cover only two web clients, many IRC clients are available on
most platforms.
## Webchat
Using Webchat from Freenode.net is a quick and easy way to get chatting. To
register:
1. In your browser open <a href="https://webchat.freenode.net" target="_blank">https://webchat.freenode.net</a>
![Login to webchat screen](images/irc_connect.png)
2. Fill out the form.
<table class="no-zebra" style="width: auto">
<tr>
<td><b>Nickname</b></td>
<td>The short name you want to be known as on IRC chat channels.</td>
</tr>
<tr>
<td><b>Channels</b></td>
<td><code>#docker</code></td>
</tr>
<tr>
<td><b>reCAPTCHA</b></td>
<td>Use the value provided.</td>
</tr>
</table>
3. Click on the "Connect" button.
The browser connects you to Webchat. You'll see a lot of text. At the bottom of
the Webchat web page is a command line bar. Just above the command line bar,
a message asks you to register.
![Registration needed screen](images/irc_after_login.png)
4. Register your nickname by entering the following command in the
command line bar:
/msg NickServ REGISTER yourpassword youremail@example.com
![Registering screen](images/register_nic.png)
This command line bar is also the entry field that you will use for entering
chat messages into IRC chat channels after you have registered and joined a
chat channel.
After entering the REGISTER command, an email is sent to the email address
that you provided. This email will contain instructions for completing
your registration.
5. Open your email client and look for the email.
![Login screen](images/register_email.png)
6. Back in the browser, complete the registration according to the email
by entering the following command into the webchat command line bar:
/msg NickServ VERIFY REGISTER yournickname somecode
Your nickname is now registered to chat on freenode.net.
[Jump ahead to tips to join a docker channel and start chatting](#tips)
## IRCCloud
IRCCloud is a web-based IRC client service that is hosted in the cloud. This is
a Freemium product, meaning the free version is limited and you can pay for more
features. To use IRCCloud:
1. Select the following link:
<a href="https://www.irccloud.com/invite?channel=%23docker&amp;hostname=chat.freenode.net&amp;port=6697" target="_blank">Join the #docker channel on chat.freenode.net</a>
The following web page is displayed in your browser:
![IRCCloud Register screen](images/irccloud-join.png)
2. If this is your first time using IRCCloud, enter a valid email address in the
form. People who have already registered with IRCCloud can select the "sign in
here" link. Additionally, returning users may have a cookie stored in their web
browser that shows a quick-start "let's go" link in place of the form. In that
case, simply select the "let's go" link and [jump ahead to start chatting](#start-chatting).
3. After entering your email address in the form, check your email for an invite
from IRCCloud and follow the instructions provided in the email.
4. After following the instructions in your email, you should see the IRCCloud
client web page in your browser:
![IRCCloud](images/irccloud-register-nick.png)
The message shown above may appear, indicating that you need to register your
nickname.
5. To register your nickname enter the following message into the command line bar
at the bottom of the IRCCloud Client:
/msg NickServ REGISTER yourpassword youremail@example.com
This command line bar is for chatting and entering in IRC commands.
6. Check your email for an invite to freenode.net:
![Login screen](images/register_email.png)
7. Back in the browser, complete the registration according to the email by
entering the following command into the command line bar:
/msg NickServ VERIFY REGISTER yournickname somecode
## Tips
The procedures in this section apply to both IRC clients.
### Set a nickname
The next time you log into chat, you may need to re-enter your password
on the command line using this command:
/msg NickServ identify <password>
With Webchat, if you forget or lose your password, see <a
href="https://freenode.net/faq.shtml#sendpass" target="_blank">the FAQ on
freenode.net</a> to learn how to recover it.
### Join a Docker Channel
Join the `#docker` group using the following command in the command line bar of
your IRC Client:
/j #docker
You can also join the `#docker-dev` group:
/j #docker-dev
### Start chatting
To ask questions to the group just type messages in the command line bar:
![Web Chat Screen](images/irc_chat.png)
## Learning more about IRC
This quickstart was meant to get you up and running on IRC quickly. If you find
IRC useful, there is more to learn. Drupal, another open source project,
has <a href="https://www.drupal.org/irc/setting-up" target="_blank">
written some documentation about using IRC</a> for their project
(thanks Drupal!).
