Compare commits

...

125 Commits

Author SHA1 Message Date
Brian Goff
301bfbdd21 Merge pull request #27089 from dmandalidis/27029-docs
Update volume options (fixes #27029)
2017-02-10 10:30:11 -05:00
Misty Stanley-Jones
5213a0a67e Addressed feedback
Signed-off-by: Misty Stanley-Jones <misty@docker.com>
2017-01-26 12:10:59 -08:00
Dimitris Mandalidis
164ab2cfc9 Update volume options (fixes #27029)
Signed-off-by: Dimitris Mandalidis <dimitris.mandalidis@gmail.com>
2016-10-20 10:59:20 +03:00
Sven Dowideit
fc438e4171 Merge pull request #23653 from thaJeztah/20160616-docs-cherry-picks
20160616 docs cherry picks
2016-06-17 14:26:52 +10:00
Sebastiaan van Stijn
c4b72004fd Merge pull request #23510 from sbose78/23376-update-centos-yum-repo-version
Fixes #23376 Broken URL for CentOS yum repo for docker fixed to hardcoded version number.
(cherry picked from commit 0c78a7dd64)

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-06-16 19:52:35 -07:00
Sebastiaan van Stijn
b74db4d8e3 Merge pull request #23485 from sbose78/22514-run-image-using-digest
Fixes #22514 - Added example for using image digest in the docker run command
(cherry picked from commit 3bb42723ac)

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-06-16 19:52:27 -07:00
Sven Dowideit
ad2a9f3d1e Merge pull request #23478 from cpuguy83/22833_optional_mountpoint_docs
Clarify volume plugin `Mountpoint` is optional
(cherry picked from commit 9a8affb0ff)

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-06-16 19:52:12 -07:00
Sebastiaan van Stijn
782e218be6 Merge pull request #23419 from ahmetalpbalkan/docs/azure-vol-plugin
docs: Add Azure File Storage Volume Driver plugin
(cherry picked from commit 949f8d0d11)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-06-16 19:48:21 -07:00
Brian Goff
5d402c1c27 Merge pull request #23331 from yongtang/06062016-docs-typo
Fix a couple of typos in the docs of `docker attach`
(cherry picked from commit 3d8fdbb626)

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-06-16 19:48:14 -07:00
Sebastiaan van Stijn
76f6c611f1 Merge pull request #23195 from cyli/update-content-trust-docs
Update content trust docs to reflect latest notary compose file changes
(cherry picked from commit 1842077541)

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-06-16 19:48:05 -07:00
Sven Dowideit
9aa451ec5f Merge pull request #23318 from thaJeztah/20160606-docs-cherry-picks
20160606 docs cherry picks
2016-06-10 19:47:14 +10:00
Sebastiaan van Stijn
98638feaae Add alias to make CI pass
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-06-09 01:08:41 +02:00
Sebastiaan van Stijn
0d8b909281 Merge pull request #23293 from mountkin/network-docs
docs: correct network create command
(cherry picked from commit ac14aa11b6)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-06-06 20:37:33 +02:00
Sebastiaan van Stijn
f646c041b8 Merge pull request #23282 from yongtang/06052016-docs-typo
Fix a couple of typos in docker attach docs.
(cherry picked from commit cfcb470aad)

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-06-06 20:37:24 +02:00
Sebastiaan van Stijn
683700619d Merge pull request #23251 from SvenDowideit/more-validation-fixes
docs validation fixes
(cherry picked from commit da703f026e)

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-06-06 20:37:13 +02:00
Sven Dowideit
650ddc1f81 Merge pull request #23193 from allencloud/fix-typos
use grep to find all a/an typos
(cherry picked from commit 98c245c9e6)

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-06-06 20:35:13 +02:00
Sven Dowideit
af3b1fbcbc Merge pull request #23192 from orsenthil/docfixes/ubuntu_install
Fix the docker daemon restart command for ubuntu.
(cherry picked from commit e2528712db)

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-06-06 20:28:57 +02:00
Vincent Demeester
785e38d57b Merge pull request #23179 from kerneltime/master
Add VMware Docker Volume Plugin.
(cherry picked from commit 09033b8df2)

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-06-06 20:28:45 +02:00
Sebastiaan van Stijn
cc68a963e0 Merge pull request #23165 from thaJeztah/update-logging-code-hints
cleanup logging driver documentation
(cherry picked from commit 8d75709f90)

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-06-06 20:23:12 +02:00
Vincent Demeester
a6bbdf347a Merge pull request #23143 from bfirsh/remove-status-column-from-clinet-libraries-page
Remove status column from client libraries page
(cherry picked from commit 74c7363965)

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-06-06 20:14:30 +02:00
Vincent Demeester
89b2a14932 Merge pull request #23106 from LINBIT/master
Add the DRBD Docker Volume Plugin to the documentation
(cherry picked from commit ef42e2f214)

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-06-06 20:14:23 +02:00
Sebastiaan van Stijn
5df3c258a1 Merge pull request #23060 from friism/add-power-shell-example
Add power shell example
(cherry picked from commit 068d466cc7)

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-06-06 20:14:16 +02:00
Sven Dowideit
d3bbe11b99 Merge pull request #22763 from zreigz/doc-multiple-daemons
Add documentation for running multiple daemons
(cherry picked from commit 9be8f04950)

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-06-06 20:14:07 +02:00
Sebastiaan van Stijn
b6766eb676 Merge pull request #23066 from SvenDowideit/docs-cherry-picks-29may2016
Docs cherry picks 29may2016
2016-05-27 21:19:32 +02:00
Vincent Demeester
25ccb74f4f Merge pull request #23039 from yongtang/05262016-docs-cluster-store-opts
Fix error in dockerd.md for incorrect cluster-store-opts example.
(cherry picked from commit f1276cd3aa)

Signed-off-by: Sven Dowideit <SvenDowideit@home.org.au>
2016-05-27 17:56:38 +00:00
Sebastiaan van Stijn
e85cc97d65 Merge pull request #23035 from SvenDowideit/fix-links
Fix up stale links
(cherry picked from commit bd5c9f59ea)

Signed-off-by: Sven Dowideit <SvenDowideit@home.org.au>
2016-05-27 17:56:25 +00:00
Sven Dowideit
b0e27887f2 Merge pull request #23033 from thaJeztah/docs-cherry-picks-2016-05-26
Docs cherry picks 2016 05 26
2016-05-26 15:39:54 -07:00
Sebastiaan van Stijn
fa6685ca85 Merge pull request #23001 from Djelibeybi/fix-oracle-docs
Fix URLs for official Oracle installation guide.
(cherry picked from commit 8863d6dc5f)

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-05-27 00:29:40 +02:00
Vincent Demeester
6ccc50d04e Merge pull request #22999 from deed02392/master
Update debian.md
(cherry picked from commit 215324251a)

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-05-27 00:29:31 +02:00
Sven Dowideit
d544b3ee99 Merge pull request #22990 from thaJeztah/docs-cherry-picks
Docs cherry picks (2016-05-25)
2016-05-25 13:25:05 -07:00
Vincent Demeester
a54af42edb Merge pull request #22890 from thaJeztah/docs/slashes
fix docs not building if branch-name contains slashes
(cherry picked from commit 07f79621ea)

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-05-25 21:43:19 +02:00
Sven Dowideit
7abc60a93b Merge pull request #22394 from SvenDowideit/use-docs-base-oss
convert docs Dockerfiles to use docs/base:oss
(cherry picked from commit 1c0edf6c39)

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-05-25 21:39:32 +02:00
Matt Bentley
3bdc7244a8 Fix thin pool devicemapper docs overwritten
Signed-off-by: Matt Bentley <matt.bentley@docker.com>
(cherry picked from commit 79205c3f06)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-05-25 21:34:29 +02:00
Sebastiaan van Stijn
fa29ecbceb Merge pull request #22987 from Microsoft/jjh/labeldocs
Docs: Label clarification
(cherry picked from commit bb80563a81)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-05-25 21:34:21 +02:00
Vincent Demeester
c70c8f1ed9 Merge pull request #22928 from friism/patch-3
remove duplicated text
(cherry picked from commit bf7bae9662)

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-05-25 21:34:11 +02:00
Vincent Demeester
4efcbf5784 Merge pull request #22906 from nshalman/patch-1
Clarification about 'docker build --build-arg'
(cherry picked from commit ce07eac570)

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-05-25 21:34:02 +02:00
Vincent Demeester
9114864ace Merge pull request #22900 from AkihiroSuda/fix22020
update docs/reference/commandline/cp.md
(cherry picked from commit 6a385a0022)

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-05-25 21:33:55 +02:00
Sebastiaan van Stijn
0626fa4555 Merge pull request #22885 from yongtang/05212016-typo-in-dockernetworks-md
Fix a typo in docs of networking guide
(cherry picked from commit c0c36bc150)

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-05-25 21:33:49 +02:00
Sebastiaan van Stijn
d41cd45c3d Merge pull request #22876 from Microsoft/jjh/docsclarification
Docs: JSON vs Shell clarification
(cherry picked from commit e3079b4704)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-05-25 21:33:41 +02:00
Sebastiaan van Stijn
a9f9a79d5c Merge pull request #22839 from thaJeztah/update-selinux-example
Remove MLS example from SELinux example in run reference
(cherry picked from commit 74ee26cceb)

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-05-25 21:33:33 +02:00
Sebastiaan van Stijn
08f1f62f41 Merge pull request #22821 from zunayed/patch-1
fix duplicate command in uninstall instructions
(cherry picked from commit 1691fe6d23)

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-05-25 21:33:26 +02:00
Vincent Demeester
e47de0ba4d Merge pull request #22778 from DoraALin/10972-docs-Support-for-non-proxied-private-registry
doc:http pkg variables info added in pull cmd
(cherry picked from commit 9751170f08)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-05-25 21:33:19 +02:00
Vincent Demeester
e3007c51a5 Merge pull request #22768 from mansinahar/run-cmd-doc
Update 'run' command doc for better readability. Issue: #22721
(cherry picked from commit 28a436af36)

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-05-25 21:33:12 +02:00
Vincent Demeester
4971a81f75 Merge pull request #22757 from gondor/master
Documentation: Updated URL to plugin reference - docker-volume-netshare
(cherry picked from commit 37dfd8bc8c)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-05-25 21:33:04 +02:00
Sebastiaan van Stijn
b42d833cd8 Merge pull request #22751 from igrcic/docs-small-typo-reference-attach
remove double "using" in reference attach docs
(cherry picked from commit 6e12d0720f)

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-05-25 21:32:44 +02:00
Sebastiaan van Stijn
281f0fa53b Merge pull request #22743 from yongtang/05142016-typo-in-work-with-networks
Fix a typo in work-with-networks.md
(cherry picked from commit e333675cd7)

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-05-25 21:28:47 +02:00
Vincent Demeester
36ada7d158 Merge pull request #22742 from yongtang/05142016-update-deprecated-docs-for-LXC-built-in-exec-driver
Update deprecated docs for LXC built-in exec driver
(cherry picked from commit 1bcc42e038)

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-05-25 21:28:36 +02:00
Sebastiaan van Stijn
307195b3e1 Merge pull request #22727 from clawconduce/master
Fix error for env variables example in docker reference
(cherry picked from commit 3723b88406)

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-05-25 21:28:29 +02:00
Vincent Demeester
d1024638f9 Merge pull request #22720 from thaJeztah/fix-markdown
Fix Markdown formatting in Devicemapper docs
(cherry picked from commit 2f94a367d7)

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-05-25 21:28:21 +02:00
Sebastiaan van Stijn
9396307501 Merge pull request #22661 from SvenDowideit/update-compatibility-matrix
docs: update graphdriver compatibility matrix
(cherry picked from commit a5e4aaaf71)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-05-25 21:28:12 +02:00
Sven Dowideit
54a58f9886 Merge pull request #22711 from thaJeztah/docs-cherry-picks
Docs cherry picks
2016-05-13 21:40:22 +10:00
Vincent Demeester
ab595882e3 Merge pull request #22689 from thaJeztah/docs-update-menu-order
docs: update menu order in security section
(cherry picked from commit 24a0f1f3e8)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-05-13 01:58:41 +02:00
Sebastiaan van Stijn
fcd432d110 Merge pull request #22707 from TimWolla/patch-1
User network does not work with IPv6
(cherry picked from commit ab090291dd)

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-05-13 01:34:23 +02:00
Vincent Demeester
77efb507b3 Merge pull request #22694 from allencloud/fix-typos-in-docs
docs: correct some typos
(cherry picked from commit 475c37dd66)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-05-13 01:34:07 +02:00
Vincent Demeester
c23ad97de5 Merge pull request #22687 from haoshuwei/fix-docs-securitymd
Fixing security.md
(cherry picked from commit edf5e097a2)

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-05-13 01:33:17 +02:00
Sebastiaan van Stijn
b28e6b7eda Merge pull request #22683 from npcode/docs-no-request-status
docs: Remove RequestStatusCode
(cherry picked from commit 2ae863c28f)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-05-13 01:33:08 +02:00
Sven Dowideit
785665203d Merge pull request #22672 from kevinmeredith/correct_trapped_signals
Correct docs for a docker container's clean-up.
(cherry picked from commit c273163e80)

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-05-13 01:33:00 +02:00
Vincent Demeester
36a62de41f Merge pull request #22669 from thaJeztah/docs-update-seccomp-whitelist
docs: update seccomp whitelist
(cherry picked from commit 4c654eeea2)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-05-13 01:32:53 +02:00
Sebastiaan van Stijn
e7d0711142 Merge pull request #22666 from yongtang/05112016-update-deprecated-docs-cli-flags
Update deprecated docs for cli flags removal.
(cherry picked from commit 3710f9074e)

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-05-13 01:32:45 +02:00
Sebastiaan van Stijn
6e916fca02 Merge pull request #22579 from jfrazelle/docs-add-security-non-events
docs: add security non-events
(cherry picked from commit a14e85c40d)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-05-13 01:32:32 +02:00
Sven Dowideit
bfe58ad819 Merge pull request #22662 from thaJeztah/docs-cherry-picks
Docs cherry picks
2016-05-11 23:07:33 +10:00
Tonis Tiigi
8d485949d6 docs: clarify docker attach
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
(cherry picked from commit da1dbd2093)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-05-11 14:41:23 +02:00
Yong Tang
7e83ae34dc Add the missing subtitle in deprecated docs for --security-opt.
The colon separator (`:`) of the `--security-opt` flag was deprecated
in 1.11.0. However, the subtitle in the deprecated docs is missing,
so it is placed under the same subtitle as the deprecated `-e` and
`--email` flags.

This fix adds the missing subtitle in deprecated docs.

Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
(cherry picked from commit 018c22880d)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-05-11 14:41:16 +02:00
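A sketch of the separator change this commit documents, following the deprecation docs (the `label` option value and the image name are placeholders):

    $ docker run --security-opt label:user:USER ubuntu   # deprecated `:` separator
    $ docker run --security-opt label=user:USER ubuntu   # current `=` separator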
Yuan Sun
ef76dd0761 from inheritted to inherited
Signed-off-by: Yuan Sun <sunyuan3@huawei.com>
(cherry picked from commit fe1130b7ba)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-05-11 14:41:08 +02:00
Lars Kellogg-Stedman
b706ee90ca docs: note requirements for systemd drop-in filenames
the documentation says that you can drop "a file" into the
`docker.service.d` directory, but does not note that the file must end
with `.conf` in order to be recognized by systemd. This can lead to
some [confusion][] if readers are not previously familiar with
systemd.

[confusion]: https://botbot.me/freenode/docker/2016-05-06/?msg=65605541&page=11

Signed-off-by: Lars Kellogg-Stedman <lars@redhat.com>
(cherry picked from commit 987b03054a)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-05-11 14:41:01 +02:00
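As an illustration, a minimal drop-in sketch (the proxy setting is an arbitrary example; the essential detail is the `.conf` suffix):

    $ sudo mkdir -p /etc/systemd/system/docker.service.d
    $ cat /etc/systemd/system/docker.service.d/http-proxy.conf
    [Service]
    Environment="HTTP_PROXY=http://proxy.example.com:3128"
    $ sudo systemctl daemon-reload && sudo systemctl restart docker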
Sebastiaan van Stijn
2f4b69229a docs: update supervisord example
This updates the supervisor example documentation
to use an up-to-date version of Ubuntu.

Also reduced the use of "royal We", and tweaked some
language.

Finally, added some language hints for code-highlighting.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit e38678e660)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-05-11 14:40:54 +02:00
objectified
b34165ae37 remove trailing comma from top command
Signed-off-by: objectified <objectified@gmail.com>
(cherry picked from commit c7e738d641)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-05-11 14:40:47 +02:00
Tomasz Kopczynski
cef1a10f5e Docs: fixing typos in admin/supervisor
Signed-off-by: Tomasz Kopczynski <tomek@kopczynski.net.pl>
(cherry picked from commit 74d382ff8d)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-05-11 14:40:39 +02:00
Doug Davis
2c531b4fb7 Remove unnecessary double-double quotes
Signed-off-by: Doug Davis <dug@us.ibm.com>
(cherry picked from commit 8eb2188bd9)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-05-11 14:40:30 +02:00
Eric Yang
8fc5559841 fix typo
fix typo

Signed-off-by: Qizhao Yang <windfarer@gmail.com>
(cherry picked from commit 176e9e2ffc)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-05-11 14:40:19 +02:00
Liron Levin
ea8b5e07b6 Remove response modification sections from authorization design doc
Signed-off-by: Liron Levin <liron@twistlock.com>
(cherry picked from commit 638096431a)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-05-11 14:38:20 +02:00
Chun Chen
3cf8c53515 Add docs about how to extend devicemapper thin pool
Signed-off-by: Chun Chen <ramichen@tencent.com>

Update to device mapper
Entering comments

Signed-off-by: Mary Anthony <mary@docker.com>
(cherry picked from commit a7b2f87b06)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-05-11 14:07:59 +02:00
Vincent Demeester
15af7564cf Merge pull request #22421 from thaJeztah/docs-cherry-picks
docs cherry-pick
2016-05-02 20:11:29 +02:00
Ben Firshman
340c9a4619 Remove out-of-date client libraries
For each language, if there is one library which is clearly the
best or most active, I've removed the other libraries so users
aren't misled.

I've removed the web UIs because they're not really client
libraries.

Signed-off-by: Ben Firshman <ben@firshman.co.uk>
(cherry picked from commit bb94cfce62)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-05-01 20:31:08 +02:00
Ben Firshman
3b2ed5b2df Fix lasote/docker_client link
Signed-off-by: Ben Firshman <ben@firshman.co.uk>
(cherry picked from commit 91fe274dcf)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-05-01 20:30:52 +02:00
Yuan Sun
0b5c960bb0 remove "the" in docs.
Signed-off-by: Yuan Sun <sunyuan3@huawei.com>
(cherry picked from commit 043c9ef076)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-05-01 20:29:15 +02:00
Lorenzo Fontana
14a2985f37 Mention the fact that authz plugins are available today
Signed-off-by: Lorenzo Fontana <fontanalorenzo@me.com>
(cherry picked from commit 96cc1ee44c)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-05-01 20:27:48 +02:00
Sebastiaan van Stijn
a1d16b4557 update API example response for docker events
the events API was rewritten in 723be0a332,
but the example response in the documentation doesn't reflect the actual output

this fixes the example response

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 3932d46a78)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-04-29 17:55:50 +02:00
Sebastiaan van Stijn
290e0ea54c Merge pull request #22392 from thaJeztah/docs-cherry-picks-3
documentation cherry-picks
2016-04-28 16:57:08 +02:00
Sylvain Bellemare
5901eda4f7 Fix typo
Signed-off-by: Sylvain Bellemare <sylvain@ascribe.io>
(cherry picked from commit 63aa03ce0a)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-04-28 15:28:40 +02:00
Hao Zhang
c2bea5ba26 update cgroup link in doc of run
Signed-off-by: Hao Zhang <21521210@zju.edu.cn>
(cherry picked from commit 8fec7c26d4)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-04-28 13:32:11 +02:00
Sven Dowideit
4703824f00 Small API formatting fix.
Signed-off-by: Sven Dowideit <SvenDowideit@home.org.au>
(cherry picked from commit 204a52c689)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-04-28 13:32:00 +02:00
Sebastiaan van Stijn
4bf3f74126 docs: remove duplicate line in "Understand the architecture"
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 00e84ca4d2)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-04-28 13:31:25 +02:00
Riyaz Faizullabhoy
5b1136ba8d Update DCT docs with 1.11 info, fix typos
Signed-off-by: Riyaz Faizullabhoy <riyaz.faizullabhoy@docker.com>
(cherry picked from commit 77da3bcb72)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-04-28 13:31:17 +02:00
Dimitry Andric
f264a73afd The daemon.json storage-opts settings is actually a list.
Signed-off-by: Dimitry Andric <d.andric@activevideo.com>
(cherry picked from commit e3eb24fc21)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-04-28 13:31:06 +02:00
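A minimal sketch of the corrected form, assuming the devicemapper storage driver (the thin-pool device name is a placeholder):

    $ cat /etc/docker/daemon.json
    {
        "storage-driver": "devicemapper",
        "storage-opts": ["dm.thinpooldev=/dev/mapper/docker-thinpool"]
    }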
Kai Qiang Wu(Kennan)
fc4f927588 Fix the old exit status example
Signed-off-by: Kai Qiang Wu(Kennan) <wkqwu@cn.ibm.com>
(cherry picked from commit 896ebb1ca2)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-04-28 13:30:57 +02:00
Sebastiaan van Stijn
84314e09ab docs: add note about MAC addresses not being unique
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 763aceeb73)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-04-28 13:30:47 +02:00
搏通
895569df9d optimise docs
Signed-off-by: 搏通 <yufeng.pyf@alibaba-inc.com>
(cherry picked from commit 9abf304c25)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-04-28 13:30:39 +02:00
Sebastiaan van Stijn
21223f8873 docs: use tables for available plugins
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 79351caec1)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-04-28 13:30:29 +02:00
Máximo Cuadros
ac9b38b60d documentation: adding gce-docker plugin to plugins.md
Signed-off-by: Máximo Cuadros <mcuadros@gmail.com>
(cherry picked from commit c018018b69)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-04-28 13:30:15 +02:00
Wen Cheng Ma
53b4fd1d81 Fix asa
Signed-off-by: Wen Cheng Ma <wenchma@cn.ibm.com>
(cherry picked from commit 6d4e7b67be)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-04-28 13:22:42 +02:00
Jared Hocutt
ae4c7b1493 Add the NetApp Docker Volume Plugin to the documentation
Signed-off-by: Jared Hocutt <jaredh@netapp.com>
(cherry picked from commit f310fd14a9)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-04-28 13:22:33 +02:00
Sebastiaan van Stijn
71c0acf1ca Remove API versions 1.17 and older from documentation
Docker 1.5 and older is no longer supported by Docker Hub,
so there's not much need to document the API for those
versions.

Documentation is still available on GitHub, and through
the older versions of the documentation for those
that really need it.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit 68f9a45440)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-04-28 13:22:22 +02:00
Yong Tang
97a5055198 Docs: Container creation param descriptions not under HostConfig
This fix tries to fix issues mentioned in #22100 for incorrect
description of remote API's container creation params.

Several issues have been fixed:
1. CPU and memory related params (e.g., `MemorySwap`, `CpuShares`, etc.)
were incorrectly placed under the top level instead of under the HostConfig.
(v1.18-v1.24)
2. The param `Cpuset` has been deprecated but was never removed.
(v1.18-v1.24)
3. The param `PidsLimit` was not added even though the description
has been added.
(v1.23-v1.24)

This fix fixes #22100

Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
(cherry picked from commit 332e3b545b)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-04-28 13:22:09 +02:00
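A sketch of the corrected placement, assuming API v1.23 on a local Unix socket (the resource values are arbitrary):

    $ curl -s --unix-socket /var/run/docker.sock \
        -H "Content-Type: application/json" \
        -d '{"Image": "ubuntu", "HostConfig": {"Memory": 314572800, "MemorySwap": -1, "CpuShares": 512, "PidsLimit": 100}}' \
        -X POST http://localhost/v1.23/containers/create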
Florian
2d532738b5 Remove outdated Node.js example - include a link to the new guide later!
As recommended by @moxiegirl and squashed.

Signed-off-by: FWirtz <florian.wirtz08@gmail.com>
(cherry picked from commit d9c0d67b51)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-04-28 13:21:27 +02:00
yorkie
3eaabe0257 doc: fix typo
Signed-off-by: yorkie <yorkiefixer@gmail.com>
(cherry picked from commit d2c5bf23f1)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-04-28 13:21:18 +02:00
Alessandro Boch
b9f11557af Clarify container external connectivity in multi-network scenario
Signed-off-by: Alessandro Boch <aboch@docker.com>
(cherry picked from commit c2e088e134)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-04-28 13:21:09 +02:00
Sebastiaan van Stijn
ce9bc253f6 docs: remove unused "registry" parameter
The "registry" query-param was in added 10c0e99037,
and removed in docker 0.5.0 via 66a9d06d9f.

Apparently, it was never removed from the documentation,
and was included in all versions of the API docs.

This removes it from the documentation.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit e035a86c1d)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-04-28 13:21:01 +02:00
Yong Tang
6bbe5722aa Fix incorrect docs in remote API for the option of SecurityOpt
This fix tries to fix the issue in remote API docs for v1.15 (Docker 1.3.x)
and v1.16 (Docker 1.4.x) where `SecurityOpts` was used but the actual field
should be `SecurityOpt`.

This `SecurityOpt` field is verified through the source code in
v1.3.0 and v1.4.0:
https://github.com/docker/docker/blob/v1.3.0/runconfig/config.go#L35
https://github.com/docker/docker/blob/v1.4.0/runconfig/hostconfig.go#L98

Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
(cherry picked from commit f3f981624b)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-04-28 13:20:53 +02:00
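A sketch of the corrected field name, assuming API v1.15, where the field sits at the top level of the create body per the cited config.go (assumes the daemon listens on tcp://localhost:2375):

    $ curl -s -H "Content-Type: application/json" \
        -d '{"Image": "ubuntu", "SecurityOpt": ["label:disable"]}' \
        -X POST http://localhost:2375/v1.15/containers/create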
Thomas Grainger
cde2df6db9 Fix security documentation, XSS -> CSRF
Signed-off-by: Thomas Grainger <tagrain@gmail.com>
(cherry picked from commit ea8f9c9723)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-04-28 13:20:42 +02:00
Sebastiaan van Stijn
d9cf30d7de docs: update API for features added in 1.11
Docker 1.11 added a feature to set labels on volumes,
networks and images (during build), but these changes
were not documented in the API documentation.

This adds the new features to the documentation.

Also fixes some minor formatting, and options that
were not used in the examples.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit ba353f3787)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-04-28 13:20:22 +02:00
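For instance, a hedged sketch of the volume-label addition, assuming API v1.23 (the volume name and label are placeholders):

    $ curl -s --unix-socket /var/run/docker.sock \
        -H "Content-Type: application/json" \
        -d '{"Name": "myvolume", "Labels": {"env": "test"}}' \
        -X POST http://localhost/v1.23/volumes/create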
Robin Naundorf
9af185e3d0 closes #11703 closes #11560
Signed-off-by: Robin Naundorf <r.naundorf@fh-muenster.de>
(cherry picked from commit 297d6c04a3)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-04-28 13:00:50 +02:00
Jess Frazelle
8d2798c37e Add example to apparmor docs
Signed-off-by: Jess Frazelle <jess@mesosphere.com>
(cherry picked from commit 80d63e2e11)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-04-28 13:00:42 +02:00
Chun Chen
4858230a07 Add docs about how to extend devicemapper thin pool
Signed-off-by: Chun Chen <ramichen@tencent.com>
(cherry picked from commit b21d90c28f)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-04-28 13:00:34 +02:00
Tibor Vass
4dc5990d75 Merge pull request #22006 from tiborvass/fix-changelog-1.11
Fix some CHANGELOG entries
2016-04-13 14:03:42 -04:00
Tibor Vass
365d80b3e1 Merge pull request #22005 from crosbymichael/runc-source
Improve source for containerd/runc copy
2016-04-13 14:03:34 -04:00
Tibor Vass
2535db8678 Fix some CHANGELOG entries
Signed-off-by: Tibor Vass <tibor@docker.com>
2016-04-13 13:46:47 -04:00
Michael Crosby
54edfc41c6 Improve source for containerd/runc copy
This improves getting the source for the binaries that are compiled on
the system so that they can be copied into the bundles output.

Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
2016-04-13 10:34:07 -07:00
Tibor Vass
4a6f2274be Merge pull request #22001 from thaJeztah/more-docs-cherries
More docs cherry-picks
2016-04-13 12:59:12 -04:00
Mary Anthony
db08f19e36 Fixes #21701 devicemapper docs
Copy edit the content
Updates to existing material
Adding mbentley's comments
Updating with last minute comments
Update with Seb's comments

Signed-off-by: Mary Anthony <mary@docker.com>
(cherry picked from commit 783ebebff4)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-04-13 17:20:27 +02:00
Brian Goff
af370ff997 Merge pull request #21998 from tiborvass/fix-runc-path
runc install path changed from /usr/local/bin to /usr/local/sbin
2016-04-13 10:17:37 -04:00
Tibor Vass
645836f250 Merge pull request #21996 from thaJeztah/cherry-pick-docs
Cherry pick docs for 1.11
2016-04-13 10:07:58 -04:00
Tibor Vass
3d85e51ef4 runc install path changed from /usr/local/bin to /usr/local/sbin
Signed-off-by: Tibor Vass <tibor@docker.com>
2016-04-13 09:49:40 -04:00
Alessandro Boch
5618fbf18c Update /containers/create remote API docs
- Show how to pass the networking config in POST containers/create body

Signed-off-by: Alessandro Boch <aboch@docker.com>
(cherry picked from commit 30859c3456)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-04-13 15:36:39 +02:00
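A minimal sketch of that body shape, assuming API v1.23 (the network name and address are placeholders):

    $ curl -s --unix-socket /var/run/docker.sock \
        -H "Content-Type: application/json" \
        -d '{"Image": "busybox", "NetworkingConfig": {"EndpointsConfig": {"isolated_nw": {"IPAMConfig": {"IPv4Address": "172.20.30.33"}}}}}' \
        -X POST http://localhost/v1.23/containers/create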
Sebastiaan van Stijn
bf312aca9c docs: update installation from binaries for 1.11
Binaries are now distributed as a '.tgz' or '.zip'
archive, and contain multiple binaries for Linux.

This updates the instructions for 1.11.

Also mention that the Windows 64-bit binary
actually can be used as a daemon. Given that
this is still in beta, no instructions were
added for *running* a daemon on Windows.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
(cherry picked from commit f5336c7370)
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-04-13 15:35:03 +02:00
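For example, a sketch of the Linux flow under those instructions; the download URL follows the get.docker.com/builds pattern of that era and should be treated as an assumption:

    $ curl -fsSL -O https://get.docker.com/builds/Linux/x86_64/docker-1.11.0.tgz   # assumed URL pattern
    $ tar -xvzf docker-1.11.0.tgz        # unpacks a docker/ directory containing the binaries
    $ sudo cp docker/* /usr/local/bin/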
Tibor Vass
4172802d68 Merge pull request #21987 from thaJeztah/update-golang
[1.11] update remaining Go versions to 1.5.4
2016-04-13 08:51:06 -04:00
Sebastiaan van Stijn
fb06ddf4db update remaining Go versions to 1.5.4
Some Dockerfiles were missed during update to
1.5.4. This changes those Dockerfiles.

Note that Dockerfile.armhf is not yet updated; it
currently uses Dave Cheney's unofficial ARM builds,
which are marked "end of life", so added a TODO
instead.

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2016-04-13 11:13:06 +02:00
Tibor Vass
d126e2fb72 Merge pull request #21979 from tiborvass/1.11-carry-21959
[1.11] Fix docker load progressbar
2016-04-13 01:53:24 -04:00
Tibor Vass
3130169eb2 Merge pull request #21977 from tiborvass/1.11-go1.5.4
[1.11] Bump Go version to 1.5.4 (security fix)
2016-04-13 01:51:59 -04:00
Lei Jitang
260a835cb4 Fix docker load progressbar, fixes #21957
Signed-off-by: Lei Jitang <leijitang@huawei.com>
(cherry picked from commit 96d7db665b)
2016-04-12 22:48:45 -04:00
Tibor Vass
d42b3f6765 Bump Go version to 1.5.4 (security fix)
https://groups.google.com/forum/#!msg/golang-announce/9eqIHqaWvck/kXsfO0ogLAAJ

Dockerfile.armhf cannot currently be updated.

Signed-off-by: Tibor Vass <tibor@docker.com>
2016-04-12 22:29:46 -04:00
Tibor Vass
134990dd07 Merge pull request #21936 from sanimej/v1.11_bump
Update Networking changelog for 1.11
2016-04-12 21:37:31 -04:00
Alexander Morozov
aa7da70459 Merge pull request #21937 from tiborvass/fix-21808-1.11
[1.11] vendor runc to fix issue#21808
2016-04-12 13:49:33 -07:00
Tibor Vass
8ff913e45f vendor runc to fix issue#21808
Signed-off-by: Tibor Vass <tibor@docker.com>
2016-04-12 15:35:04 -04:00
Santhosh Manohar
2153d9ec9d Update Networking changelog for 1.11
Signed-off-by: Santhosh Manohar <santhosh@docker.com>
2016-04-10 08:31:18 -07:00
112 changed files with 2109 additions and 8698 deletions

View File

@@ -5,13 +5,12 @@ information on the list of deprecated flags and APIs please have a look at
https://docs.docker.com/engine/deprecated/ where target removal dates can also
be found.
## 1.11.0 (2016-04-12)
## 1.11.0 (2016-04-13)
**IMPORTANT**: With Docker 1.11, a Linux docker installation is now made of 4 binaries (`docker`, [`docker-containerd`](https://github.com/docker/containerd), [`docker-containerd-shim`](https://github.com/docker/containerd) and [`docker-runc`](https://github.com/opencontainers/runc)). If you have scripts relying on docker being a single static binary, please make sure to update them. Interaction with the daemon stays the same otherwise; the usage of the other binaries should be transparent. A Windows docker installation remains a single binary, `docker.exe`.
### Builder
- Fix docker not sending credentials during build if content trust is enabled ([#21693](https://github.com/docker/docker/pull/21693))
- Fix a bug where Docker would not use the correct uid/gid when processing the `WORKDIR` command ([#21033](https://github.com/docker/docker/pull/21033))
- Fix a bug where copy operations with userns would not use the proper uid/gid ([#20782](https://github.com/docker/docker/pull/20782), [#21162](https://github.com/docker/docker/pull/21162))
@@ -37,7 +36,6 @@ be found.
### Distribution
- Fix the download manager closing the tempfile twice ([#21676](https://github.com/docker/docker/pull/21676))
- Fix a panic that occurred when pulling an image with 0 layers ([#21222](https://github.com/docker/docker/pull/21222))
- Fix a panic that could occur on error while pushing to a registry with a misconfigured token service ([#21212](https://github.com/docker/docker/pull/21212))
+ All first-level delegation roles are now signed when doing a trusted push ([#21046](https://github.com/docker/docker/pull/21046))
@@ -47,6 +45,7 @@ be found.
* Docker will now fallback to registry V1 if no basic auth credentials are available ([#20241](https://github.com/docker/docker/pull/20241))
* Docker will now try to resume layer download where it left off after a network error/timeout ([#19840](https://github.com/docker/docker/pull/19840))
- Fix generated manifest mediaType when pushing cross-repository ([#19509](https://github.com/docker/docker/pull/19509))
- Fix docker requesting additional push credentials when pulling an image if Content Trust is enabled ([#20382](https://github.com/docker/docker/pull/20382))
### Logging
@@ -79,15 +78,29 @@ be found.
- Fix panic if a node is forcibly removed from the cluster ([#21671](https://github.com/docker/docker/pull/21671))
- Fix "error creating vxlan interface" when starting a container in a Swarm cluster ([#21671](https://github.com/docker/docker/pull/21671))
- Fix a bug where IPv6 addresses were not properly handled ([#20842](https://github.com/docker/docker/pull/20842))
* `docker network inspect` will now report all endpoints whether they have an active container or not ([#21160](https://github.com/docker/docker/pull/21160))
+ Experimental support for the MacVlan and IPVlan network drivers has been added ([#21122](https://github.com/docker/docker/pull/21122))
* Output of `docker network ls` is now sorted by network name ([#20383](https://github.com/docker/docker/pull/20383))
- Fix a bug where Docker would allow a network to be created with the reserved `default` name ([#19431](https://github.com/docker/docker/pull/19431))
* `docker network inspect` now returns whether a network is internal or not ([#19357](https://github.com/docker/docker/pull/19357))
* `docker network inspect` returns whether a network is internal or not ([#19357](https://github.com/docker/docker/pull/19357))
+ Control IPv6 via explicit option when creating a network (`docker network create --ipv6`). This shows up as a new `EnableIPv6` field in `docker network inspect` ([#17513](https://github.com/docker/docker/pull/17513))
* Support for AAAA Records (aka IPv6 Service Discovery) in embedded DNS Server [#21396](https://github.com/docker/docker/pull/21396)
* Multiple A/AAAA records from embedded DNS Server for DNS Round robin [#21019](https://github.com/docker/docker/pull/21019)
* Support for AAAA Records (aka IPv6 Service Discovery) in embedded DNS Server ([#21396](https://github.com/docker/docker/pull/21396))
- Fix to not forward docker domain IPv6 queries to external servers ([#21396](https://github.com/docker/docker/pull/21396))
* Multiple A/AAAA records from embedded DNS Server for DNS Round robin ([#21019](https://github.com/docker/docker/pull/21019))
- Fix endpoint count inconsistency after an ungraceful daemon restart ([#21261](https://github.com/docker/docker/pull/21261))
- Move the ownership of exposed ports and port-mapping options from Endpoint to Sandbox ([#21019](https://github.com/docker/docker/pull/21019))
- Fixed a bug which prevents docker reload when host is configured with ipv6.disable=1 ([#21019](https://github.com/docker/docker/pull/21019))
- Added inbuilt nil IPAM driver ([#21019](https://github.com/docker/docker/pull/21019))
- Fixed bug in iptables.Exists() logic [#21019](https://github.com/docker/docker/pull/21019)
- Fixed a Veth interface leak when using overlay network ([#21019](https://github.com/docker/docker/pull/21019))
- Fixed a bug which prevents docker reload after a network delete during shutdown ([#20214](https://github.com/docker/docker/pull/20214))
- Make sure iptables chains are recreated on firewalld reload ([#20419](https://github.com/docker/docker/pull/20419))
- Allow to pass global datastore during config reload ([#20419](https://github.com/docker/docker/pull/20419))
- For anonymous containers use the alias name for IP to name mapping, i.e. DNS PTR record ([#21019](https://github.com/docker/docker/pull/21019))
- Fix a panic when deleting an entry from /etc/hosts file ([#21019](https://github.com/docker/docker/pull/21019))
- Source the forwarded DNS queries from the container net namespace ([#21019](https://github.com/docker/docker/pull/21019))
- Fix to retain the network internal mode config for bridge networks on daemon reload ([#21780](https://github.com/docker/docker/pull/21780))
- Fix to retain IPAM driver option configs on daemon reload ([#21914](https://github.com/docker/docker/pull/21914))
### Plugins
@@ -128,12 +141,17 @@ be found.
* `docker inspect` now also returns a new `State` field containing the container state in a human readable way (i.e. one of `created`, `restarting`, `running`, `paused`, `exited` or `dead`)([#18966](https://github.com/docker/docker/pull/18966))
+ Docker learned to limit the number of active pids (i.e. processes) within the container via the `pids-limit` flag. NOTE: This requires `CGROUP_PIDS=y` to be in the kernel configuration. ([#18697](https://github.com/docker/docker/pull/18697))
- `docker load` now has a `--quiet` option to suppress the load output ([#20078](https://github.com/docker/docker/pull/20078))
- Fix a bug in neighbor discovery for IPv6 peers ([#20842](https://github.com/docker/docker/pull/20842))
- Fix a panic during cleanup if a container was started with invalid options ([#21802](https://github.com/docker/docker/pull/21802))
- Fix a situation where a container cannot be stopped if the terminal is closed ([#21840](https://github.com/docker/docker/pull/21840))
### Security
* Objects with the `pcp_pmcd_t` selinux type were given management access to `/var/lib/docker(/.*)?` ([#21370](https://github.com/docker/docker/pull/21370))
* `restart_syscall`, `copy_file_range`, `mlock2` joined the list of allowed calls in the default seccomp profile ([#21117](https://github.com/docker/docker/pull/21117), [#21262](https://github.com/docker/docker/pull/21262))
* `send`, `recv` and `x32` were added to the list of allowed syscalls and arch in the default seccomp profile ([#19432](https://github.com/docker/docker/pull/19432))
* Docker Content Trust now requests the server to perform snapshot signing ([#21046](https://github.com/docker/docker/pull/21046))
* Support for using YubiKeys for Content Trust signing has been moved out of experimental ([#21591](https://github.com/docker/docker/pull/21591))
### Volumes
@@ -1668,7 +1686,7 @@ With the ongoing changes to the networking and execution subsystems of docker te
+ Add -rm to docker run for removing a container on exit
- Remove error messages which are not actually errors
- Fix `docker rm` with volumes
- Fix some error cases where a HTTP body might not be closed
- Fix some error cases where an HTTP body might not be closed
- Fix panic with wrong dockercfg file
- Fix the attach behavior with -i
* Record termination time in state.
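The `docker network create --ipv6` entry in the hunk above can be exercised roughly as follows (a sketch; the subnet and network name are placeholders):

    $ docker network create --ipv6 --subnet=2001:db8:1::/64 v6net
    $ docker network inspect v6net | grep EnableIPv6
            "EnableIPv6": true,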

View File

@@ -152,9 +152,9 @@ However, there might be a way to implement that feature *on top of* Docker.
<a href="https://groups.google.com/forum/#!forum/docker-user" target="_blank">Docker-user</a>
is for people using Docker containers.
The <a href="https://groups.google.com/forum/#!forum/docker-dev" target="_blank">docker-dev</a>
group is for contributors and other people contributing to the Docker
project.
You can join them without an google account by sending an email to e.g. "docker-user+subscribe@googlegroups.com".
group is for contributors and other people contributing to the Docker project.
You can join them without a google account by sending an email to
<a href="mailto:docker-dev+subscribe@googlegroups.com">docker-dev+subscribe@googlegroups.com</a>.
After receiving the join-request message, you can simply reply to that to confirm the subscription.
</td>
</tr>

View File

@@ -126,7 +126,7 @@ RUN set -x \
# IMPORTANT: If the version of Go is updated, the Windows to Linux CI machines
# will need updating, to avoid errors. Ping #docker-maintainers on IRC
# with a heads-up.
ENV GO_VERSION 1.5.3
ENV GO_VERSION 1.5.4
RUN curl -fsSL "https://storage.googleapis.com/golang/go${GO_VERSION}.linux-amd64.tar.gz" \
| tar -xzC /usr/local
ENV PATH /go/bin:/usr/local/go/bin:$PATH
@@ -248,7 +248,7 @@ RUN set -x \
&& rm -rf "$GOPATH"
# Install runc
ENV RUNC_COMMIT 6c88a526cdd74aab90cc88018368c452c7294a06
ENV RUNC_COMMIT e87436998478d222be209707503c27f6f91be0c5
RUN set -x \
&& export GOPATH="$(mktemp -d)" \
&& git clone git://github.com/opencontainers/runc.git "$GOPATH/src/github.com/opencontainers/runc" \

View File

@@ -96,7 +96,7 @@ RUN set -x \
# We don't have official binary tarballs for ARM64, either for Go or bootstrap,
# so we use the official armv6 released binaries as a GOROOT_BOOTSTRAP, and
# build Go from source code.
ENV GO_VERSION 1.5.3
ENV GO_VERSION 1.5.4
RUN mkdir /usr/src/go && curl -fsSL https://storage.googleapis.com/golang/go${GO_VERSION}.src.tar.gz | tar -v -C /usr/src/go -xz --strip-components=1 \
&& cd /usr/src/go/src \
&& GOOS=linux GOARCH=arm64 GOROOT_BOOTSTRAP="$(go env GOROOT)" ./make.bash
@@ -181,7 +181,7 @@ RUN set -x \
&& rm -rf "$GOPATH"
# Install runc
ENV RUNC_COMMIT 6c88a526cdd74aab90cc88018368c452c7294a06
ENV RUNC_COMMIT e87436998478d222be209707503c27f6f91be0c5
RUN set -x \
&& export GOPATH="$(mktemp -d)" \
&& git clone git://github.com/opencontainers/runc.git "$GOPATH/src/github.com/opencontainers/runc" \

View File

@@ -65,6 +65,8 @@ RUN cd /usr/local/lvm2 \
# see https://git.fedorahosted.org/cgit/lvm2.git/tree/INSTALL
# Install Go
# TODO Update to 1.5.4 once available, or build from source, as these builds
# are marked "end of life", see http://dave.cheney.net/unofficial-arm-tarballs
ENV GO_VERSION 1.5.3
RUN curl -fsSL "http://dave.cheney.net/paste/go${GO_VERSION}.linux-arm.tar.gz" \
| tar -xzC /usr/local
@@ -198,7 +200,7 @@ RUN set -x \
&& rm -rf "$GOPATH"
# Install runc
ENV RUNC_COMMIT 6c88a526cdd74aab90cc88018368c452c7294a06
ENV RUNC_COMMIT e87436998478d222be209707503c27f6f91be0c5
RUN set -x \
&& export GOPATH="$(mktemp -d)" \
&& git clone git://github.com/opencontainers/runc.git "$GOPATH/src/github.com/opencontainers/runc" \

View File

@@ -74,7 +74,7 @@ WORKDIR /go/src/github.com/docker/docker
ENV DOCKER_BUILDTAGS apparmor seccomp selinux
# Install runc
ENV RUNC_COMMIT 6c88a526cdd74aab90cc88018368c452c7294a06
ENV RUNC_COMMIT e87436998478d222be209707503c27f6f91be0c5
RUN set -x \
&& export GOPATH="$(mktemp -d)" \
&& git clone git://github.com/opencontainers/runc.git "$GOPATH/src/github.com/opencontainers/runc" \

View File

@@ -74,10 +74,10 @@ RUN cd /usr/local/lvm2 \
# TODO install Go, using gccgo as GOROOT_BOOTSTRAP (Go 1.5+ supports ppc64le properly)
# possibly a ppc64le/golang image?
## BUILD GOLANG 1.5.3
ENV GO_VERSION 1.5.3
## BUILD GOLANG
ENV GO_VERSION 1.5.4
ENV GO_DOWNLOAD_URL https://golang.org/dl/go${GO_VERSION}.src.tar.gz
ENV GO_DOWNLOAD_SHA256 a96cce8ce43a9bf9b2a4c7d470bc7ee0cb00410da815980681c8353218dcf146
ENV GO_DOWNLOAD_SHA256 002acabce7ddc140d0d55891f9d4fcfbdd806b9332fb8b110c91bc91afb0bc93
ENV GOROOT_BOOTSTRAP /usr/local
RUN curl -fsSL "$GO_DOWNLOAD_URL" -o golang.tar.gz \
@@ -199,7 +199,7 @@ RUN set -x \
&& rm -rf "$GOPATH"
# Install runc
ENV RUNC_COMMIT 6c88a526cdd74aab90cc88018368c452c7294a06
ENV RUNC_COMMIT e87436998478d222be209707503c27f6f91be0c5
RUN set -x \
&& export GOPATH="$(mktemp -d)" \
&& git clone git://github.com/opencontainers/runc.git "$GOPATH/src/github.com/opencontainers/runc" \

View File

@@ -178,7 +178,7 @@ RUN set -x \
&& rm -rf "$GOPATH"
# Install runc
ENV RUNC_COMMIT 6c88a526cdd74aab90cc88018368c452c7294a06
ENV RUNC_COMMIT e87436998478d222be209707503c27f6f91be0c5
RUN set -x \
&& export GOPATH="$(mktemp -d)" \
&& git clone git://github.com/opencontainers/runc.git "$GOPATH/src/github.com/opencontainers/runc" \

View File

@@ -30,7 +30,7 @@ RUN apt-get update && apt-get install -y --no-install-recommends \
&& rm -rf /var/lib/apt/lists/*
# Install runc
ENV RUNC_COMMIT 6c88a526cdd74aab90cc88018368c452c7294a06
ENV RUNC_COMMIT e87436998478d222be209707503c27f6f91be0c5
RUN set -x \
&& export GOPATH="$(mktemp -d)" \
&& git clone git://github.com/opencontainers/runc.git "$GOPATH/src/github.com/opencontainers/runc" \

View File

@@ -40,7 +40,7 @@ FROM windowsservercore
# Environment variable notes:
# - GO_VERSION must be consistent with the 'Dockerfile' used by Linux.
# - FROM_DOCKERFILE is used for detection of building within a container.
ENV GO_VERSION=1.5.3 \
ENV GO_VERSION=1.5.4 \
GIT_LOCATION=https://github.com/git-for-windows/git/releases/download/v2.7.2.windows.1/Git-2.7.2-64-bit.exe \
RSRC_COMMIT=ba14da1f827188454a4591717fff29999010887f \
GOPATH=C:/go;C:/go/src/github.com/docker/docker/vendor \

View File

@@ -41,7 +41,7 @@ func (cli *DockerCli) CmdLoad(args ...string) error {
}
defer response.Body.Close()
if response.JSON {
if response.Body != nil && response.JSON {
return jsonmessage.DisplayJSONMessagesStream(response.Body, cli.out, cli.outFd, cli.isTerminalOut, nil)
}

View File

@@ -269,7 +269,17 @@ func (s *imageRouter) postImagesLoad(ctx context.Context, w http.ResponseWriter,
return err
}
quiet := httputils.BoolValueOrDefault(r, "quiet", true)
w.Header().Set("Content-Type", "application/json")
if !quiet {
w.Header().Set("Content-Type", "application/json")
output := ioutils.NewWriteFlusher(w)
defer output.Close()
if err := s.backend.LoadImage(r.Body, output, quiet); err != nil {
output.Write(streamformatter.NewJSONStreamFormatter().FormatError(err))
}
return nil
}
return s.backend.LoadImage(r.Body, w, quiet)
}

View File

@@ -6,7 +6,7 @@ FROM debian:jessie
RUN apt-get update && apt-get install -y apparmor bash-completion btrfs-tools build-essential curl ca-certificates debhelper dh-apparmor dh-systemd git libapparmor-dev libdevmapper-dev libltdl-dev libsqlite3-dev pkg-config libsystemd-journal-dev --no-install-recommends && rm -rf /var/lib/apt/lists/*
ENV GO_VERSION 1.5.3
ENV GO_VERSION 1.5.4
RUN curl -fSL "https://storage.googleapis.com/golang/go${GO_VERSION}.linux-amd64.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin

View File

@@ -6,7 +6,7 @@ FROM debian:stretch
RUN apt-get update && apt-get install -y apparmor bash-completion btrfs-tools build-essential curl ca-certificates debhelper dh-apparmor dh-systemd git libapparmor-dev libdevmapper-dev libltdl-dev libseccomp-dev libsqlite3-dev pkg-config libsystemd-dev --no-install-recommends && rm -rf /var/lib/apt/lists/*
ENV GO_VERSION 1.5.3
ENV GO_VERSION 1.5.4
RUN curl -fSL "https://storage.googleapis.com/golang/go${GO_VERSION}.linux-amd64.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin

View File

@@ -7,7 +7,7 @@ FROM debian:wheezy-backports
RUN apt-get update && apt-get install -y -t wheezy-backports btrfs-tools --no-install-recommends && rm -rf /var/lib/apt/lists/*
RUN apt-get update && apt-get install -y apparmor bash-completion build-essential curl ca-certificates debhelper dh-apparmor dh-systemd git libapparmor-dev libdevmapper-dev libltdl-dev libsqlite3-dev pkg-config --no-install-recommends && rm -rf /var/lib/apt/lists/*
ENV GO_VERSION 1.5.3
ENV GO_VERSION 1.5.4
RUN curl -fSL "https://storage.googleapis.com/golang/go${GO_VERSION}.linux-amd64.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin

View File

@@ -6,7 +6,7 @@ FROM ubuntu:precise
RUN apt-get update && apt-get install -y apparmor bash-completion build-essential curl ca-certificates debhelper dh-apparmor git libapparmor-dev libltdl-dev libsqlite3-dev pkg-config --no-install-recommends && rm -rf /var/lib/apt/lists/*
ENV GO_VERSION 1.5.3
ENV GO_VERSION 1.5.4
RUN curl -fSL "https://storage.googleapis.com/golang/go${GO_VERSION}.linux-amd64.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin

View File

@@ -6,7 +6,7 @@ FROM ubuntu:trusty
RUN apt-get update && apt-get install -y apparmor bash-completion btrfs-tools build-essential curl ca-certificates debhelper dh-apparmor dh-systemd git libapparmor-dev libdevmapper-dev libltdl-dev libsqlite3-dev pkg-config libsystemd-journal-dev --no-install-recommends && rm -rf /var/lib/apt/lists/*
ENV GO_VERSION 1.5.3
ENV GO_VERSION 1.5.4
RUN curl -fSL "https://storage.googleapis.com/golang/go${GO_VERSION}.linux-amd64.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin

View File

@@ -6,7 +6,7 @@ FROM ubuntu:wily
RUN apt-get update && apt-get install -y apparmor bash-completion btrfs-tools build-essential curl ca-certificates debhelper dh-apparmor dh-systemd git libapparmor-dev libdevmapper-dev libltdl-dev libseccomp-dev libsqlite3-dev pkg-config libsystemd-dev --no-install-recommends && rm -rf /var/lib/apt/lists/*
ENV GO_VERSION 1.5.3
ENV GO_VERSION 1.5.4
RUN curl -fSL "https://storage.googleapis.com/golang/go${GO_VERSION}.linux-amd64.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin

View File

@@ -6,7 +6,7 @@ FROM ubuntu:xenial
RUN apt-get update && apt-get install -y apparmor bash-completion btrfs-tools build-essential curl ca-certificates debhelper dh-apparmor dh-systemd git libapparmor-dev libdevmapper-dev libltdl-dev libseccomp-dev libsqlite3-dev pkg-config libsystemd-dev --no-install-recommends && rm -rf /var/lib/apt/lists/*
ENV GO_VERSION 1.5.3
ENV GO_VERSION 1.5.4
RUN curl -fSL "https://storage.googleapis.com/golang/go${GO_VERSION}.linux-amd64.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin

View File

@@ -8,7 +8,7 @@ RUN yum groupinstall -y "Development Tools"
RUN yum -y swap -- remove systemd-container systemd-container-libs -- install systemd systemd-libs
RUN yum install -y btrfs-progs-devel device-mapper-devel glibc-static libselinux-devel libtool-ltdl-devel pkgconfig selinux-policy selinux-policy-devel sqlite-devel systemd-devel tar git
ENV GO_VERSION 1.5.3
ENV GO_VERSION 1.5.4
RUN curl -fSL "https://storage.googleapis.com/golang/go${GO_VERSION}.linux-amd64.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin

View File

@@ -7,7 +7,7 @@ FROM fedora:22
RUN dnf install -y @development-tools fedora-packager
RUN dnf install -y btrfs-progs-devel device-mapper-devel glibc-static libseccomp-devel libselinux-devel libtool-ltdl-devel pkgconfig selinux-policy selinux-policy-devel sqlite-devel systemd-devel tar git
ENV GO_VERSION 1.5.3
ENV GO_VERSION 1.5.4
RUN curl -fSL "https://storage.googleapis.com/golang/go${GO_VERSION}.linux-amd64.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin

View File

@@ -7,7 +7,7 @@ FROM fedora:23
RUN dnf install -y @development-tools fedora-packager
RUN dnf install -y btrfs-progs-devel device-mapper-devel glibc-static libseccomp-devel libselinux-devel libtool-ltdl-devel pkgconfig selinux-policy selinux-policy-devel sqlite-devel systemd-devel tar git
ENV GO_VERSION 1.5.3
ENV GO_VERSION 1.5.4
RUN curl -fSL "https://storage.googleapis.com/golang/go${GO_VERSION}.linux-amd64.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin

View File

@@ -7,7 +7,7 @@ FROM opensuse:13.2
RUN zypper --non-interactive install ca-certificates* curl gzip rpm-build
RUN zypper --non-interactive install libbtrfs-devel device-mapper-devel glibc-static libselinux-devel libtool-ltdl-devel pkg-config selinux-policy selinux-policy-devel sqlite-devel systemd-devel tar git systemd-rpm-macros
ENV GO_VERSION 1.5.3
ENV GO_VERSION 1.5.4
RUN curl -fSL "https://storage.googleapis.com/golang/go${GO_VERSION}.linux-amd64.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin

View File

@@ -10,7 +10,7 @@ RUN yum install -y btrfs-progs-devel device-mapper-devel glibc-static libselinu
RUN yum install -y yum-utils && curl -o /etc/yum.repos.d/public-yum-ol6.repo http://yum.oracle.com/public-yum-ol6.repo && yum-config-manager -q --enable ol6_UEKR4
RUN yum install -y kernel-uek-devel-4.1.12-32.el6uek
ENV GO_VERSION 1.5.3
ENV GO_VERSION 1.5.4
RUN curl -fSL "https://storage.googleapis.com/golang/go${GO_VERSION}.linux-amd64.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin

View File

@@ -7,7 +7,7 @@ FROM oraclelinux:7
RUN yum groupinstall -y "Development Tools"
RUN yum install -y --enablerepo=ol7_optional_latest btrfs-progs-devel device-mapper-devel glibc-static libselinux-devel libtool-ltdl-devel pkgconfig selinux-policy selinux-policy-devel sqlite-devel systemd-devel tar git
ENV GO_VERSION 1.5.3
ENV GO_VERSION 1.5.4
RUN curl -fSL "https://storage.googleapis.com/golang/go${GO_VERSION}.linux-amd64.tar.gz" | tar xzC /usr/local
ENV PATH $PATH:/usr/local/go/bin

View File

@@ -0,0 +1,14 @@
Docker device tool for devicemapper storage driver backend
===================
The ./contrib/docker-device-tool directory contains a tool to manipulate the devicemapper thin-pool.
Compile
========
$ make shell
## inside build container
$ go build contrib/docker-device-tool/device_tool.go
# if devicemapper version is old and compilation fails, compile with `libdm_no_deferred_remove` tag
$ go build -tags libdm_no_deferred_remove contrib/docker-device-tool/device_tool.go

View File

@@ -1,17 +1,8 @@
FROM docs/base:latest
FROM docs/base:oss
MAINTAINER Mary Anthony <mary@docker.com> (@moxiegirl)
RUN svn checkout https://github.com/docker/compose/trunk/docs /docs/content/compose
RUN svn checkout https://github.com/docker/swarm/trunk/docs /docs/content/swarm
RUN svn checkout https://github.com/docker/machine/trunk/docs /docs/content/machine
RUN svn checkout https://github.com/docker/distribution/trunk/docs /docs/content/registry
RUN svn checkout https://github.com/docker/notary/trunk/docs /docs/content/notary
RUN svn checkout https://github.com/docker/kitematic/trunk/docs /docs/content/kitematic
RUN svn checkout https://github.com/docker/toolbox/trunk/docs /docs/content/toolbox
RUN svn checkout https://github.com/docker/opensource/trunk/docs /docs/content/opensource
ENV PROJECT=engine
# To get the git info for this repo
COPY . /src
RUN rm -r /docs/content/$PROJECT/
COPY . /docs/content/$PROJECT/

View File

@@ -24,9 +24,8 @@ HUGO_BASE_URL=$(shell test -z "$(DOCKER_IP)" && echo localhost || echo "$(DOCKER
HUGO_BIND_IP=0.0.0.0
GIT_BRANCH := $(shell git rev-parse --abbrev-ref HEAD 2>/dev/null)
DOCKER_IMAGE := docker$(if $(GIT_BRANCH),:$(GIT_BRANCH))
DOCKER_DOCS_IMAGE := docs-base$(if $(GIT_BRANCH),:$(GIT_BRANCH))
GIT_BRANCH_CLEAN := $(shell echo $(GIT_BRANCH) | sed -e "s/[^[:alnum:]]/-/g")
DOCKER_DOCS_IMAGE := docker-docs$(if $(GIT_BRANCH_CLEAN),:$(GIT_BRANCH_CLEAN))
DOCKER_RUN_DOCS := docker run --rm -it $(DOCS_MOUNT) -e AWS_S3_BUCKET -e NOCACHE
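The `sed` expression in the hunk above maps every non-alphanumeric character to `-`, which is what lets slash-containing branch names serve as image tags; for example:

    $ echo "docs/slashes" | sed -e "s/[^[:alnum:]]/-/g"
    docs-slashes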

View File

@@ -1,150 +0,0 @@
<!--[metadata]>
+++
aliases = ["/engine/articles/cfengine/"]
title = "Process management with CFEngine"
description = "Managing containerized processes with CFEngine"
keywords = ["cfengine, process, management, usage, docker, documentation"]
[menu.main]
parent = "engine_admin"
+++
<![end-metadata]-->
# Process management with CFEngine
Create Docker containers with managed processes.
Docker monitors one process in each running container and the container
lives or dies with that process. By introducing CFEngine inside Docker
containers, we can alleviate a few of the issues that may arise:
- It is possible to easily start multiple processes within a
container, all of which will be managed automatically, with the
normal `docker run` command.
- If a managed process dies or crashes, CFEngine will start it again
within 1 minute.
- The container itself will live as long as the CFEngine scheduling
daemon (cf-execd) lives. With CFEngine, we are able to decouple the
life of the container from the uptime of the service it provides.
## How it works
CFEngine, together with the cfe-docker integration policies, is
installed as part of the Dockerfile. This builds CFEngine into our
Docker image.
The Dockerfile's `ENTRYPOINT` takes an arbitrary
number of commands (with any desired arguments) as parameters. When we
run the Docker container these parameters get written to CFEngine
policies and CFEngine takes over to ensure that the desired processes
are running in the container.
CFEngine scans the process table for the `basename` of the commands given
to the `ENTRYPOINT` and runs the command to start the process if the `basename`
is not found. For example, if we start the container with
`docker run "/path/to/my/application parameters"`, CFEngine will look for a
process named `application` and run the command. If an entry for `application`
is not found in the process table at any point in time, CFEngine will execute
`/path/to/my/application parameters` to start the application once again. The
check on the process table happens every minute.
Note that it is therefore important that the command to start your
application leaves a process with the basename of the command. This can
be made more flexible by making some minor adjustments to the CFEngine
policies, if desired.
## Usage
This example assumes you have Docker installed and working. We will
install and manage `apache2` and `sshd`
in a single container.
There are three steps:
1. Install CFEngine into the container.
2. Copy the CFEngine Docker process management policy into the
containerized CFEngine installation.
3. Start your application processes as part of the `docker run` command.
### Building the image
The first two steps can be done as part of a Dockerfile, as follows.
FROM ubuntu
MAINTAINER Eystein Måløy Stenberg <eytein.stenberg@gmail.com>
RUN apt-get update && apt-get install -y wget lsb-release unzip ca-certificates
# install latest CFEngine
RUN wget -qO- http://cfengine.com/pub/gpg.key | apt-key add -
RUN echo "deb http://cfengine.com/pub/apt $(lsb_release -cs) main" > /etc/apt/sources.list.d/cfengine-community.list
RUN apt-get update && apt-get install -y cfengine-community
# install cfe-docker process management policy
RUN wget https://github.com/estenberg/cfe-docker/archive/master.zip -P /tmp/ && unzip /tmp/master.zip -d /tmp/
RUN cp /tmp/cfe-docker-master/cfengine/bin/* /var/cfengine/bin/
RUN cp /tmp/cfe-docker-master/cfengine/inputs/* /var/cfengine/inputs/
RUN rm -rf /tmp/cfe-docker-master /tmp/master.zip
# apache2 and openssh are just for testing purposes, install your own apps here
RUN apt-get update && apt-get install -y openssh-server apache2
RUN mkdir -p /var/run/sshd
RUN echo "root:password" | chpasswd # need a password for ssh
ENTRYPOINT ["/var/cfengine/bin/docker_processes_run.sh"]
Save this file as `Dockerfile` in a working directory, then build
your image with the `docker build` command, e.g.
`docker build -t managed_image .`.
### Testing the container
Start the container with `apache2` and `sshd` running and managed, forwarding
a port to our SSH instance:
$ docker run -p 127.0.0.1:222:22 -d managed_image "/usr/sbin/sshd" "/etc/init.d/apache2 start"
We now clearly see one of the benefits of the cfe-docker integration: it
allows us to start several processes as part of a normal `docker run` command.
We can now log in to our new container and see that both `apache2` and `sshd`
are running. We have set the root password to "password" in the Dockerfile
above and can use that to log in with ssh:
ssh -p222 root@127.0.0.1
ps -ef
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 07:48 ? 00:00:00 /bin/bash /var/cfengine/bin/docker_processes_run.sh /usr/sbin/sshd /etc/init.d/apache2 start
root 18 1 0 07:48 ? 00:00:00 /var/cfengine/bin/cf-execd -F
root 20 1 0 07:48 ? 00:00:00 /usr/sbin/sshd
root 32 1 0 07:48 ? 00:00:00 /usr/sbin/apache2 -k start
www-data 34 32 0 07:48 ? 00:00:00 /usr/sbin/apache2 -k start
www-data 35 32 0 07:48 ? 00:00:00 /usr/sbin/apache2 -k start
www-data 36 32 0 07:48 ? 00:00:00 /usr/sbin/apache2 -k start
root 93 20 0 07:48 ? 00:00:00 sshd: root@pts/0
root 105 93 0 07:48 pts/0 00:00:00 -bash
root 112 105 0 07:49 pts/0 00:00:00 ps -ef
If we stop apache2, it will be started again within a minute by
CFEngine.
service apache2 status
Apache2 is running (pid 32).
service apache2 stop
* Stopping web server apache2 ... waiting [ OK ]
service apache2 status
Apache2 is NOT running.
# ... wait up to 1 minute...
service apache2 status
Apache2 is running (pid 173).
## Adapting to your applications
To make sure your applications get managed in the same manner, there are
just two things you need to adjust from the above example:
- In the Dockerfile used above, install your applications instead of
`apache2` and `sshd`.
- When you start the container with `docker run`,
specify the command line arguments to your applications rather than
`apache2` and `sshd`, as sketched below.
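For example, a minimal sketch, assuming you have built an image named
`my_managed_image` from a Dockerfile that installs `nginx` instead (both
names are hypothetical):

```bash
# Hypothetical image and binary; CFEngine watches for a process whose
# basename is "nginx" and restarts it if it disappears.
$ docker run -p 127.0.0.1:8080:80 -d my_managed_image "/usr/sbin/nginx"
```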

View File

@@ -48,7 +48,7 @@ Some of the daemon's options are:
| `--tls=false` | Enable or disable TLS. By default, this is false. |
Here is a an example of running the `docker` daemon with configuration options:
Here is an example of running the `docker` daemon with configuration options:
$ docker daemon -D --tls=true --tlscert=/var/docker/server.pem --tlskey=/var/docker/serverkey.pem -H tcp://192.168.59.3:2376

View File

@@ -5,7 +5,6 @@ description = "Describes how to use the etwlogs logging driver."
keywords = ["ETW, docker, logging, driver"]
[menu.main]
parent = "smn_logging"
weight=2
+++
<![end-metadata]-->

View File

@@ -6,7 +6,6 @@ description = "Describes how to use the fluentd logging driver."
keywords = ["Fluentd, docker, logging, driver"]
[menu.main]
parent = "smn_logging"
weight=2
+++
<![end-metadata]-->
@@ -90,7 +89,7 @@ and [its documents](http://docs.fluentd.org/).
To use this logging driver, start the `fluentd` daemon on a host. We recommend
that you use [the Fluentd docker
image](https://hub.docker.com/r/fluent/fluentd/). This image is
especially useful if you want to aggregate multiple container logs on a each
especially useful if you want to aggregate multiple container logs on each
host then, later, transfer the logs to another Fluentd node to create an
aggregate store.

View File

@@ -5,7 +5,6 @@ description = "Describes how to use the Google Cloud Logging driver."
keywords = ["gcplogs, google, docker, logging, driver"]
[menu.main]
parent = "smn_logging"
weight = 2
+++
<![end-metadata]-->

View File

@@ -1,12 +1,11 @@
<!--[metadata]>
+++
aliases = ["/engine/reference/logging/journald/"]
title = "journald logging driver"
title = "Journald logging driver"
description = "Describes how to use the fluentd logging driver."
keywords = ["Fluentd, docker, logging, driver"]
keywords = ["Journald, docker, logging, driver"]
[menu.main]
parent = "smn_logging"
weight = 2
+++
<![end-metadata]-->

View File

@@ -6,7 +6,7 @@ description = "Describes how to format tags for."
keywords = ["docker, logging, driver, syslog, Fluentd, gelf, journald"]
[menu.main]
parent = "smn_logging"
weight = 1
weight = -1
+++
<![end-metadata]-->

View File

@@ -6,7 +6,7 @@ description = "Configure logging driver."
keywords = ["docker, logging, driver, Fluentd"]
[menu.main]
parent = "smn_logging"
weight=-1
weight=-99
+++
<![end-metadata]-->
@@ -15,10 +15,13 @@ weight=-1
The container can have a different logging driver than the Docker daemon. Use
the `--log-driver=VALUE` with the `docker run` command to configure the
container's logging driver. The following options are supported:
container's logging driver. If the `--log-driver` option is not set, Docker
uses the default (`json-file`) logging driver. The following options are
supported:
| `none` | Disables any logging for the container. `docker logs` won't be available with this driver. |
| Driver | Description |
|-------------|-------------------------------------------------------------------------------------------------------------------------------|
| `none` | Disables any logging for the container. `docker logs` won't be available with this driver. |
| `json-file` | Default logging driver for Docker. Writes JSON messages to file. |
| `syslog` | Syslog logging driver for Docker. Writes log messages to syslog. |
| `journald` | Journald logging driver for Docker. Writes log messages to `journald`. |
@@ -32,40 +35,58 @@ container's logging driver. The following options are supported:
The `docker logs` command is available only for the `json-file` and `journald`
logging drivers.
The `labels` and `env` options add additional attributes for use with logging drivers that accept them. Each option takes a comma-separated list of keys. If there is collision between `label` and `env` keys, the value of the `env` takes precedence.
The `labels` and `env` options add additional attributes for use with logging
drivers that accept them. Each option takes a comma-separated list of keys. If
there is collision between `label` and `env` keys, the value of the `env` takes
precedence.
To use attributes, specify them when you start the Docker daemon.
To use attributes, specify them when you start the Docker daemon. For example,
to manually start the daemon with the `json-file` driver, and include additional
attributes in the output, run the following command:
```
docker daemon --log-driver=json-file --log-opt labels=foo --log-opt env=foo,fizz
```bash
$ docker daemon \
--log-driver=json-file \
--log-opt labels=foo \
--log-opt env=foo,fizz
```
Then, run a container and specify values for the `labels` or `env`. For example, you might use this:
Then, run a container and specify values for the `labels` or `env`. For
example, you might use this:
```
docker run --label foo=bar -e fizz=buzz -d -P training/webapp python app.py
```bash
$ docker run -dit --label foo=bar -e fizz=buzz alpine sh
```
This adds additional fields to the log depending on the driver, e.g. for
`json-file` that looks like:
"attrs":{"fizz":"buzz","foo":"bar"}
```json
"attrs":{"fizz":"buzz","foo":"bar"}
```
## json-file options
The following logging options are supported for the `json-file` logging driver:
--log-opt max-size=[0-9+][k|m|g]
--log-opt max-file=[0-9+]
--log-opt labels=label1,label2
--log-opt env=env1,env2
```bash
--log-opt max-size=[0-9+][k|m|g]
--log-opt max-file=[0-9+]
--log-opt labels=label1,label2
--log-opt env=env1,env2
```
Logs that reach `max-size` are rolled over. You can set the size in kilobytes(k), megabytes(m), or gigabytes(g). eg `--log-opt max-size=50m`. If `max-size` is not set, then logs are not rolled over.
Logs that reach `max-size` are rolled over. You can set the size in
kilobytes (k), megabytes (m), or gigabytes (g), e.g. `--log-opt max-size=50m`.
If `max-size` is not set, then logs are not rolled over.
`max-file` specifies the maximum number of files that a log is rolled over before being discarded. eg `--log-opt max-file=100`. If `max-size` is not set, then `max-file` is not honored.
`max-file` specifies the maximum number of files that a log is rolled over
before being discarded, e.g. `--log-opt max-file=100`. If `max-size` is not
set, then `max-file` is not honored.
If `max-size` and `max-file` are set, `docker logs` only returns the log lines from the newest log file.
If `max-size` and `max-file` are set, `docker logs` only returns the log lines
from the newest log file.
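For example, the following command (a sketch; the image is arbitrary) starts a
container whose JSON log rolls over at 50 megabytes and keeps at most 10 files:

```bash
$ docker run -dit \
  --log-driver=json-file \
  --log-opt max-size=50m \
  --log-opt max-file=10 \
  alpine sh
```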
## syslog options
@@ -82,17 +103,20 @@ The following logging options are supported for the `syslog` logging driver:
--log-opt tag="mailer"
--log-opt syslog-format=[rfc5424|rfc3164]
`syslog-address` specifies the remote syslog server address where the driver connects to.
If not specified it defaults to the local unix socket of the running system.
If transport is either `tcp` or `udp` and `port` is not specified it defaults to `514`
The following example shows how to have the `syslog` driver connect to a `syslog`
remote server at `192.168.0.42` on port `123`
`syslog-address` specifies the remote syslog server address where the driver
connects to. If not specified, it defaults to the local unix socket of the
running system. If the transport is either `tcp` or `udp` and `port` is not
specified, it defaults to `514`. The following example shows how to have the
`syslog` driver connect to a `syslog` remote server at `192.168.0.42` on port
`123`:
$ docker run --log-driver=syslog --log-opt syslog-address=tcp://192.168.0.42:123
```bash
$ docker run --log-driver=syslog --log-opt syslog-address=tcp://192.168.0.42:123
```
The `syslog-facility` option configures the syslog facility. By default, the system uses the
`daemon` value. To override this behavior, you can provide an integer of 0 to 23 or any of
the following named facilities:
The `syslog-facility` option configures the syslog facility. By default, the
system uses the `daemon` value. To override this behavior, you can provide an
integer from 0 to 23 or any of the following named facilities:
* `kern`
* `user`
@@ -116,18 +140,19 @@ the following named facilities:
* `local7`
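For example, to use the `local0` facility instead of the default `daemon`
facility (a minimal sketch; the image is arbitrary):

```bash
$ docker run -dit \
  --log-driver=syslog \
  --log-opt syslog-facility=local0 \
  alpine sh
```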
`syslog-tls-ca-cert` specifies the absolute path to the trust certificates
signed by the CA. This option is ignored if the address protocol is not `tcp+tls`.
signed by the CA. This option is ignored if the address protocol is not
`tcp+tls`.
`syslog-tls-cert` specifies the absolute path to the TLS certificate file.
`syslog-tls-cert` specifies the absolute path to the TLS certificate file. This
option is ignored if the address protocol is not `tcp+tls`.
`syslog-tls-key` specifies the absolute path to the TLS key file. This option
is ignored if the address protocol is not `tcp+tls`.
`syslog-tls-skip-verify` configures the TLS verification. This verification is
enabled by default, but it can be overriden by setting this option to `true`.
This option is ignored if the address protocol is not `tcp+tls`.
`syslog-tls-key` specifies the absolute path to the TLS key file.
This option is ignored if the address protocol is not `tcp+tls`.
`syslog-tls-skip-verify` configures the TLS verification.
This verification is enabled by default, but it can be overridden by setting
this option to `true`. This option is ignored if the address protocol is not
`tcp+tls`.
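As a sketch of combining these options (the server address and certificate
path below are placeholders):

```bash
$ docker run -dit \
  --log-driver=syslog \
  --log-opt syslog-address=tcp+tls://192.168.0.42:6514 \
  --log-opt syslog-tls-ca-cert=/etc/docker/syslog-ca.pem \
  alpine sh
```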
By default, Docker uses the first 12 characters of the container ID to tag log messages.
Refer to the [log tag option documentation](log_tags.md) for customizing
the log tag format.
@@ -137,34 +162,40 @@ If not specified it defaults to the local unix syslog format without hostname sp
Specify `rfc3164` to perform logging in RFC-3164 compatible format. Specify
`rfc5424` to perform logging in RFC-5424 compatible format.
## journald options
The `journald` logging driver stores the container id in the journal's `CONTAINER_ID` field. For detailed information on
working with this logging driver, see [the journald logging driver](journald.md)
reference documentation.
The `journald` logging driver stores the container id in the journal's
`CONTAINER_ID` field. For detailed information on working with this logging
driver, see [the journald logging driver](journald.md) reference documentation.
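As a minimal sketch, you can start a container with the `journald` driver and
then read its messages back by filtering on that field (assuming `journalctl`
is available on the host):

```bash
$ docker run -dit --log-driver=journald alpine sh
# The CONTAINER_ID field holds the first 12 characters of the container ID,
# which is what `docker ps -lq` prints for the most recent container.
$ journalctl CONTAINER_ID=$(docker ps -lq)
```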
## gelf options
## GELF options
The GELF logging driver supports the following options:
--log-opt gelf-address=udp://host:port
--log-opt tag="database"
--log-opt labels=label1,label2
--log-opt env=env1,env2
--log-opt gelf-compression-type=gzip
--log-opt gelf-compression-level=1
```bash
--log-opt gelf-address=udp://host:port
--log-opt tag="database"
--log-opt labels=label1,label2
--log-opt env=env1,env2
--log-opt gelf-compression-type=gzip
--log-opt gelf-compression-level=1
```
The `gelf-address` option specifies the remote GELF server address that the
driver connects to. Currently, only `udp` is supported as the transport and you must
specify a `port` value. The following example shows how to connect the `gelf`
driver to a GELF remote server at `192.168.0.42` on port `12201`
driver connects to. Currently, only `udp` is supported as the transport and you
must specify a `port` value. The following example shows how to connect the
`gelf` driver to a GELF remote server at `192.168.0.42` on port `12201`:
$ docker run --log-driver=gelf --log-opt gelf-address=udp://192.168.0.42:12201
```bash
$ docker run -dit \
--log-driver=gelf \
--log-opt gelf-address=udp://192.168.0.42:12201 \
alpine sh
```
By default, Docker uses the first 12 characters of the container ID to tag log messages.
Refer to the [log tag option documentation](log_tags.md) for customizing
the log tag format.
By default, Docker uses the first 12 characters of the container ID to tag log
messages. Refer to the [log tag option documentation](log_tags.md) for
customizing the log tag format.
The `labels` and `env` options are supported by the gelf logging
driver. They add additional keys on the `extra` fields, prefixed by an
@@ -179,14 +210,15 @@ The `gelf-compression-type` option can be used to change how the GELF driver
compresses each log message. The accepted values are `gzip`, `zlib` and `none`.
`gzip` is chosen by default.
The `gelf-compression-level` option can be used to change the level of compresssion
when `gzip` or `zlib` is selected as `gelf-compression-type`. Accepted value
must be from from -1 to 9 (BestCompression). Higher levels typically
run slower but compress more. Default value is 1 (BestSpeed).
The `gelf-compression-level` option can be used to change the level of
compression when `gzip` or `zlib` is selected as `gelf-compression-type`.
The accepted value must be from -1 to 9 (BestCompression). Higher levels
typically run slower but compress more. The default value is 1 (BestSpeed).
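For example, to trade speed for a smaller payload, you might select `zlib` at
the maximum level (a sketch reusing the server address from the example above):

```bash
$ docker run -dit \
  --log-driver=gelf \
  --log-opt gelf-address=udp://192.168.0.42:12201 \
  --log-opt gelf-compression-type=zlib \
  --log-opt gelf-compression-level=9 \
  alpine sh
```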
## fluentd options
## Fluentd options
You can use the `--log-opt NAME=VALUE` flag to specify these additional Fluentd logging driver options.
You can use the `--log-opt NAME=VALUE` flag to specify these additional Fluentd
logging driver options.
- `fluentd-address`: specify `host:port` to connect [localhost:24224]
- `tag`: specify tag for `fluentd` message
@@ -197,7 +229,13 @@ You can use the `--log-opt NAME=VALUE` flag to specify these additional Fluentd
For example, to specify both additional options:
`docker run --log-driver=fluentd --log-opt fluentd-address=localhost:24224 --log-opt tag=docker.{{.Name}}`
```bash
$ docker run -dit \
--log-driver=fluentd \
--log-opt fluentd-address=localhost:24224 \
--log-opt tag="docker.{{.Name}}" \
alpine sh
```
If the container cannot connect to the Fluentd daemon on the specified address
and `fluentd-async-connect` is not enabled, the container stops immediately.
@@ -205,42 +243,51 @@ For detailed information on working with this logging driver,
see [the fluentd logging driver](fluentd.md)
## Specify Amazon CloudWatch Logs options
## Amazon CloudWatch Logs options
The Amazon CloudWatch Logs logging driver supports the following options:
--log-opt awslogs-region=<aws_region>
--log-opt awslogs-group=<log_group_name>
--log-opt awslogs-stream=<log_stream_name>
```bash
--log-opt awslogs-region=<aws_region>
--log-opt awslogs-group=<log_group_name>
--log-opt awslogs-stream=<log_stream_name>
```
For detailed information on working with this logging driver, see [the awslogs logging driver](awslogs.md) reference documentation.
For detailed information on working with this logging driver, see [the awslogs
logging driver](awslogs.md) reference documentation.
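A minimal sketch of a run command using these options (the region, group, and
stream names are placeholders, and the log group is assumed to exist already):

```bash
$ docker run -dit \
  --log-driver=awslogs \
  --log-opt awslogs-region=us-east-1 \
  --log-opt awslogs-group=my-log-group \
  --log-opt awslogs-stream=my-log-stream \
  alpine sh
```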
## Splunk options
The Splunk logging driver requires the following options:
--log-opt splunk-token=<splunk_http_event_collector_token>
--log-opt splunk-url=https://your_splunk_instance:8088
```bash
--log-opt splunk-token=<splunk_http_event_collector_token>
--log-opt splunk-url=https://your_splunk_instance:8088
```
For detailed information about working with this logging driver, see the [Splunk logging driver](splunk.md)
reference documentation.
For detailed information about working with this logging driver, see the
[Splunk logging driver](splunk.md) reference documentation.
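A minimal sketch of a run command using these options (substitute your own
HTTP Event Collector token and instance URL):

```bash
$ docker run -dit \
  --log-driver=splunk \
  --log-opt splunk-token=<splunk_http_event_collector_token> \
  --log-opt splunk-url=https://your_splunk_instance:8088 \
  alpine sh
```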
## ETW logging driver options
The etwlogs logging driver does not require any options to be specified. This logging driver will forward each log message
as an ETW event. An ETW listener can then be created to listen for these events.
The etwlogs logging driver does not require any options to be specified. This
logging driver forwards each log message as an ETW event. An ETW listener
can then be created to listen for these events.
For detailed information on working with this logging driver, see [the ETW logging driver](etwlogs.md) reference documentation.
The ETW logging driver is only available on Windows. For detailed information
on working with this logging driver, see [the ETW logging driver](etwlogs.md)
reference documentation.
## Google Cloud Logging
## Google Cloud Logging options
The Google Cloud Logging driver supports the following options:
--log-opt gcp-project=<gcp_projext>
--log-opt labels=<label1>,<label2>
--log-opt env=<envvar1>,<envvar2>
--log-opt log-cmd=true
```bash
--log-opt gcp-project=<gcp_project>
--log-opt labels=<label1>,<label2>
--log-opt env=<envvar1>,<envvar2>
--log-opt log-cmd=true
```
For detailed information about working with this logging driver, see the [Google Cloud Logging driver](gcplogs.md).
reference documentation.
For detailed information about working with this logging driver, see the
[Google Cloud Logging driver](gcplogs.md) reference documentation.
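A minimal sketch of a run command using these options (the project ID is a
placeholder):

```bash
$ docker run -dit \
  --log-driver=gcplogs \
  --log-opt gcp-project=my-gcp-project \
  alpine sh
```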

View File

@@ -6,7 +6,6 @@ description = "Describes how to use the Splunk logging driver."
keywords = ["splunk, docker, logging, driver"]
[menu.main]
parent = "smn_logging"
weight = 2
+++
<![end-metadata]-->

View File

@@ -33,7 +33,7 @@ more details about the `docker stats` command.
## Control groups
Linux Containers rely on [control groups](
https://www.kernel.org/doc/Documentation/cgroups/cgroups.txt)
https://www.kernel.org/doc/Documentation/cgroup-v1/cgroups.txt)
which not only track groups of processes, but also expose metrics about
CPU, memory, and block I/O usage. You can access those metrics and
obtain network usage metrics as well. This is relevant for "pure" LXC
@@ -256,7 +256,7 @@ compatibility reasons.
Block I/O is accounted in the `blkio` controller.
Different metrics are scattered across different files. While you can
find in-depth details in the [blkio-controller](
https://www.kernel.org/doc/Documentation/cgroups/blkio-controller.txt)
https://www.kernel.org/doc/Documentation/cgroup-v1/blkio-controller.txt)
file in the kernel documentation, here is a short list of the most
relevant ones:

View File

@@ -33,15 +33,19 @@ If you want Docker to start at boot, you should also:
There are a number of ways to configure the daemon flags and environment variables
for your Docker daemon.
The recommended way is to use a systemd drop-in file. These are local files in
the `/etc/systemd/system/docker.service.d` directory. This could also be
`/etc/systemd/system/docker.service`, which also works for overriding the
defaults from `/lib/systemd/system/docker.service`.
The recommended way is to use a systemd drop-in file (as described in
the <a target="_blank"
href="https://www.freedesktop.org/software/systemd/man/systemd.unit.html">systemd.unit</a>
documentation). These are local files named `<something>.conf` in the
`/etc/systemd/system/docker.service.d` directory. This could also be
`/etc/systemd/system/docker.service`, which also works for overriding
the defaults from `/lib/systemd/system/docker.service`.
However, if you had previously used a package which had an `EnvironmentFile`
(often pointing to `/etc/sysconfig/docker`) then for backwards compatibility,
you drop a file in the `/etc/systemd/system/docker.service.d`
directory including the following:
However, if you had previously used a package which had an
`EnvironmentFile` (often pointing to `/etc/sysconfig/docker`) then for
backwards compatibility, you can drop a file with a `.conf` extension into
the `/etc/systemd/system/docker.service.d` directory including the
following:
[Service]
EnvironmentFile=-/etc/sysconfig/docker
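A minimal sketch of creating such a drop-in (the file name `env.conf` is
arbitrary; any `.conf` name works):

```bash
$ sudo mkdir -p /etc/systemd/system/docker.service.d
$ sudo tee /etc/systemd/system/docker.service.d/env.conf <<'EOF'
[Service]
EnvironmentFile=-/etc/sysconfig/docker
EOF
# Reload systemd so it picks up the drop-in, then restart the daemon.
$ sudo systemctl daemon-reload
$ sudo systemctl restart docker
```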
@@ -119,7 +123,7 @@ If you fail to specify an empty configuration, Docker reports an error such as:
This example overrides the default `docker.service` file.
If you are behind a HTTP proxy server, for example in corporate settings,
If you are behind an HTTP proxy server, for example in corporate settings,
you will need to add this configuration in the Docker systemd service file.
First, create a systemd drop-in directory for the docker service:

View File

@@ -15,104 +15,140 @@ parent = "engine_admin"
> - **If you don't like sudo** then see [*Giving non-root
> access*](../installation/binaries.md#giving-non-root-access)
Traditionally a Docker container runs a single process when it is
launched, for example an Apache daemon or a SSH server daemon. Often
though you want to run more than one process in a container. There are a
number of ways you can achieve this ranging from using a simple Bash
script as the value of your container's `CMD` instruction to installing
a process management tool.
Traditionally a Docker container runs a single process when it is launched, for
example an Apache daemon or an SSH server daemon. Often though you want to run
more than one process in a container. There are a number of ways you can
achieve this, ranging from using a simple Bash script as the value of your
container's `CMD` instruction to installing a process management tool.
In this example we're going to make use of the process management tool,
[Supervisor](http://supervisord.org/), to manage multiple processes in
our container. Using Supervisor allows us to better control, manage, and
restart the processes we want to run. To demonstrate this we're going to
install and manage both an SSH daemon and an Apache daemon.
In this example you're going to make use of the process management tool,
[Supervisor](http://supervisord.org/), to manage multiple processes in a
container. Using Supervisor allows you to better control, manage, and restart
the processes inside the container. To demonstrate this you'll install
and manage both an SSH daemon and an Apache daemon.
## Creating a Dockerfile
Let's start by creating a basic `Dockerfile` for our
new image.
Let's start by creating a basic `Dockerfile` for our new image.
FROM ubuntu:13.04
MAINTAINER examples@docker.com
```Dockerfile
FROM ubuntu:16.04
MAINTAINER examples@docker.com
```
## Installing Supervisor
We can now install our SSH and Apache daemons as well as Supervisor in
our container.
You can now install the SSH and Apache daemons as well as Supervisor in the
container.
RUN apt-get update && apt-get install -y openssh-server apache2 supervisor
RUN mkdir -p /var/lock/apache2 /var/run/apache2 /var/run/sshd /var/log/supervisor
```Dockerfile
RUN apt-get update && apt-get install -y openssh-server apache2 supervisor
RUN mkdir -p /var/lock/apache2 /var/run/apache2 /var/run/sshd /var/log/supervisor
```
Here we're installing the `openssh-server`,
`apache2` and `supervisor`
(which provides the Supervisor daemon) packages. We're also creating four
new directories that are needed to run our SSH daemon and Supervisor.
The first `RUN` instruction installs the `openssh-server`, `apache2` and
`supervisor` (which provides the Supervisor daemon) packages. The next `RUN`
instruction creates four new directories that are needed to run the SSH daemon
and Supervisor.
## Adding Supervisor's configuration file
Now let's add a configuration file for Supervisor. The default file is
called `supervisord.conf` and is located in
`/etc/supervisor/conf.d/`.
Now let's add a configuration file for Supervisor. The default file is called
`supervisord.conf` and is located in `/etc/supervisor/conf.d/`.
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
```Dockerfile
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
```
Let's see what is inside our `supervisord.conf`
file.
Let's see what is inside the `supervisord.conf` file.
[supervisord]
nodaemon=true
```ini
[supervisord]
nodaemon=true
[program:sshd]
command=/usr/sbin/sshd -D
[program:sshd]
command=/usr/sbin/sshd -D
[program:apache2]
command=/bin/bash -c "source /etc/apache2/envvars && exec /usr/sbin/apache2 -DFOREGROUND"
[program:apache2]
command=/bin/bash -c "source /etc/apache2/envvars && exec /usr/sbin/apache2 -DFOREGROUND"
```
The `supervisord.conf` configuration file contains
directives that configure Supervisor and the processes it manages. The
first block `[supervisord]` provides configuration
for Supervisor itself. We're using one directive, `nodaemon`
which tells Supervisor to run interactively rather than
daemonize.
The `supervisord.conf` configuration file contains directives that configure
Supervisor and the processes it manages. The first block `[supervisord]`
provides configuration for Supervisor itself. The `nodaemon` directive is used,
which tells Supervisor to run interactively rather than daemonize.
The next two blocks manage the services we wish to control. Each block
controls a separate process. The blocks contain a single directive,
`command`, which specifies what command to run to
start each process.
The next two blocks manage the services we wish to control. Each block controls
a separate process. The blocks contain a single directive, `command`, which
specifies what command to run to start each process.
## Exposing ports and running Supervisor
Now let's finish our `Dockerfile` by exposing some
required ports and specifying the `CMD` instruction
to start Supervisor when our container launches.
EXPOSE 22 80
CMD ["/usr/bin/supervisord"]
Here We've exposed ports 22 and 80 on the container and we're running
the `/usr/bin/supervisord` binary when the container
Now let's finish the `Dockerfile` by exposing some required ports and
specifying the `CMD` instruction to start Supervisor when our container
launches.
```Dockerfile
EXPOSE 22 80
CMD ["/usr/bin/supervisord"]
```
These instructions tell Docker that ports 22 and 80 are exposed by the
container and that the `/usr/bin/supervisord` binary should be executed when
the container launches.
## Building our image
We can now build our new image.
Your completed Dockerfile now looks like this:
$ docker build -t <yourname>/supervisord .
```Dockerfile
FROM ubuntu:16.04
MAINTAINER examples@docker.com
## Running our Supervisor container
RUN apt-get update && apt-get install -y openssh-server apache2 supervisor
RUN mkdir -p /var/lock/apache2 /var/run/apache2 /var/run/sshd /var/log/supervisor
Once We've got a built image we can launch a container from it.
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
$ docker run -p 22 -p 80 -t -i <yourname>/supervisord
2013-11-25 18:53:22,312 CRIT Supervisor running as root (no user in config file)
2013-11-25 18:53:22,312 WARN Included extra file "/etc/supervisor/conf.d/supervisord.conf" during parsing
2013-11-25 18:53:22,342 INFO supervisord started with pid 1
2013-11-25 18:53:23,346 INFO spawned: 'sshd' with pid 6
2013-11-25 18:53:23,349 INFO spawned: 'apache2' with pid 7
. . .
EXPOSE 22 80
CMD ["/usr/bin/supervisord"]
```
We've launched a new container interactively using the `docker run` command.
And your `supervisord.conf` file looks like this:
```ini
[supervisord]
nodaemon=true
[program:sshd]
command=/usr/sbin/sshd -D
[program:apache2]
command=/bin/bash -c "source /etc/apache2/envvars && exec /usr/sbin/apache2 -DFOREGROUND"
```
You can now build the image using this command:
```bash
$ docker build -t mysupervisord .
```
## Running your Supervisor container
Once you have built your image you can launch a container from it.
```bash
$ docker run -p 22 -p 80 -t -i mysupervisord
2013-11-25 18:53:22,312 CRIT Supervisor running as root (no user in config file)
2013-11-25 18:53:22,312 WARN Included extra file "/etc/supervisor/conf.d/supervisord.conf" during parsing
2013-11-25 18:53:22,342 INFO supervisord started with pid 1
2013-11-25 18:53:23,346 INFO spawned: 'sshd' with pid 6
2013-11-25 18:53:23,349 INFO spawned: 'apache2' with pid 7
...
```
You launched a new container interactively using the `docker run` command.
That container runs Supervisor, which in turn launched the SSH and Apache
daemons. You specified the `-p` flag to expose ports 22 and 80. From here you
can now identify the exposed ports and connect to one or both of the SSH and Apache

View File

@@ -21,6 +21,11 @@ The following list of features are deprecated in Engine.
The `docker login` command is removing the ability to automatically register for an account with the target registry if the given username doesn't exist. Due to this change, the `email` flag is no longer required, and will be deprecated.
### Separator (`:`) of `--security-opt` flag on `docker run`
**Deprecated In Release: v1.11**
**Target For Removal In Release: v1.13**
The flag `--security-opt` doesn't use the colon separator (`:`) anymore to divide keys and values; it uses the equals sign (`=`) for consistency with other similar flags, like `--storage-opt`.
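For example, where a command previously used the colon separator, the equals
sign is now used (a sketch; the label value is arbitrary):

```bash
# Deprecated separator:
$ docker run --security-opt label:user:testuser -it alpine sh
# Current separator:
$ docker run --security-opt label=user:testuser -it alpine sh
```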
### Ambiguous event fields in API
@@ -79,15 +84,14 @@ Because of which, the driver specific log tag options `syslog-tag`, `gelf-tag` a
### LXC built-in exec driver
**Deprecated In Release: v1.8**
**Target For Removal In Release: v1.10**
**Removed In Release: v1.10**
The built-in LXC execution driver is deprecated for an external implementation.
The lxc-conf flag and API fields will also be removed.
The built-in LXC execution driver, the lxc-conf flag, and API fields have been removed.
### Old Command Line Options
**Deprecated In Release: [v1.8.0](https://github.com/docker/docker/releases/tag/v1.8.0)**
**Target For Removal In Release: v1.10**
**Removed In Release: [v1.10.0](https://github.com/docker/docker/releases/tag/v1.10.0)**
The flags `-d` and `--daemon` are deprecated in favor of the `daemon` subcommand:

View File

@@ -39,7 +39,7 @@ Starting Couchbase Server -- Web UI available at http://<ip>:8091
> Docker using Docker machine, you can obtain the IP address
> of the Docker host using `docker-machine ip <MACHINE-NAME>`.
The logs show that Couchbase console can be accessed at http://192.168.99.100:8091. The default username is `Administrator` and the password is `password`.
The logs show that Couchbase console can be accessed at `http://192.168.99.100:8091`. The default username is `Administrator` and the password is `password`.
## Configure Couchbase Docker container
@@ -228,7 +228,7 @@ cbq> select * from `travel-sample` limit 1;
[Couchbase Web Console](http://developer.couchbase.com/documentation/server/4.1/admin/ui-intro.html) is a console that allows you to manage a Couchbase instance. It can be seen at:
http://192.168.99.100:8091/
`http://192.168.99.100:8091/`
Make sure to replace the IP address with the IP address of your Docker Machine or `localhost` if Docker is running locally.

View File

@@ -17,7 +17,6 @@ This section contains the following:
* [Dockerizing MongoDB](mongodb.md)
* [Dockerizing PostgreSQL](postgresql_service.md)
* [Dockerizing a CouchDB service](couchdb_data_volumes.md)
* [Dockerizing a Node.js web app](nodejs_web_app.md)
* [Dockerizing a Redis service](running_redis_service.md)
* [Dockerizing an apt-cacher-ng service](apt-cacher-ng.md)
* [Dockerizing applications: A 'Hello world'](../userguide/containers/dockerizing.md)

View File

@@ -1,199 +0,0 @@
<!--[metadata]>
+++
title = "Dockerizing a Node.js web app"
description = "Installing and running a Node.js app with Docker"
keywords = ["docker, example, package installation, node, centos"]
[menu.main]
parent = "engine_dockerize"
+++
<![end-metadata]-->
# Dockerizing a Node.js web app
> **Note**:
> - **If you don't like sudo** then see [*Giving non-root
> access*](../installation/binaries.md#giving-non-root-access)
The goal of this example is to show you how you can build your own
Docker images from a parent image using a `Dockerfile`. We will do that by
making a simple Node.js hello world web application running on CentOS. You can
get the full source code at [https://github.com/enokd/docker-node-hello/](https://github.com/enokd/docker-node-hello/).
## Create Node.js app
First, create a directory `src` where all the files
will live. Then create a `package.json` file that
describes your app and its dependencies:
{
"name": "docker-centos-hello",
"private": true,
"version": "0.0.1",
"description": "Node.js Hello world app on CentOS using docker",
"author": "Daniel Gasienica <daniel@gasienica.ch>",
"dependencies": {
"express": "3.2.4"
}
}
Then, create an `index.js` file that defines a web
app using the [Express.js](http://expressjs.com/) framework:
var express = require('express');
// Constants
var PORT = 8080;
// App
var app = express();
app.get('/', function (req, res) {
res.send('Hello world\n');
});
app.listen(PORT);
console.log('Running on http://localhost:' + PORT);
In the next steps, we'll look at how you can run this app inside a
CentOS container using Docker. First, you'll need to build a Docker
image of your app.
## Creating a Dockerfile
Create an empty file called `Dockerfile`:
touch Dockerfile
Open the `Dockerfile` in your favorite text editor.
Define the parent image you want to use to build your own image on
top of. Here, we'll use
[CentOS](https://hub.docker.com/_/centos/) (tag: `centos6`)
available on the [Docker Hub](https://hub.docker.com/):
FROM centos:centos6
Since we're building a Node.js app, you'll have to install Node.js as
well as npm on your CentOS image. Node.js is required to run your app
and npm is required to install your app's dependencies defined in
`package.json`. To install the right package for
CentOS, we'll use the instructions from the
[Node.js wiki](https://github.com/joyent/node/wiki/Installing-Node.js-via-package-manager#rhelcentosscientific-linux-6):
# Enable Extra Packages for Enterprise Linux (EPEL) for CentOS
RUN yum install -y epel-release
# Install Node.js and npm
RUN yum install -y nodejs npm
Install your app dependencies using the `npm` binary:
# Install app dependencies
COPY package.json /src/package.json
RUN cd /src; npm install --production
To bundle your app's source code inside the Docker image, use the `COPY`
instruction:
# Bundle app source
COPY . /src
Your app binds to port `8080` so you'll use the `EXPOSE` instruction to have
it mapped by the `docker` daemon:
EXPOSE 8080
Last but not least, define the command to run your app using `CMD` which
defines your runtime, i.e. `node`, and the path to our app, i.e. `src/index.js`
(see the step where we added the source to the container):
CMD ["node", "/src/index.js"]
Your `Dockerfile` should now look like this:
FROM centos:centos6
# Enable Extra Packages for Enterprise Linux (EPEL) for CentOS
RUN yum install -y epel-release
# Install Node.js and npm
RUN yum install -y nodejs npm
# Install app dependencies
COPY package.json /src/package.json
RUN cd /src; npm install --production
# Bundle app source
COPY . /src
EXPOSE 8080
CMD ["node", "/src/index.js"]
## Building your image
Go to the directory that has your `Dockerfile` and run the following command
to build a Docker image. The `-t` flag lets you tag your image so it's easier
to find later using the `docker images` command:
$ docker build -t <your username>/centos-node-hello .
Your image will now be listed by Docker:
$ docker images
# Example
REPOSITORY TAG ID CREATED
centos centos6 539c0211cd76 8 weeks ago
<your username>/centos-node-hello latest d64d3505b0d2 2 hours ago
## Run the image
Running your image with `-d` runs the container in detached mode, leaving the
container running in the background. The `-p` flag redirects a public port to
a private port in the container. Run the image you previously built:
$ docker run -p 49160:8080 -d <your username>/centos-node-hello
Print the output of your app:
# Get container ID
$ docker ps
# Print app output
$ docker logs <container id>
# Example
Running on http://localhost:8080
## Test
To test your app, get the port of your app that Docker mapped:
$ docker ps
# Example
ID IMAGE COMMAND ... PORTS
ecce33b30ebf <your username>/centos-node-hello:latest node /src/index.js 49160->8080
In the example above, Docker mapped the `8080` port of the container to `49160`.
Now you can call your app using `curl` (install if needed via:
`sudo apt-get install curl`):
$ curl -i localhost:49160
HTTP/1.1 200 OK
X-Powered-By: Express
Content-Type: text/html; charset=utf-8
Content-Length: 12
Date: Sun, 02 Jun 2013 03:53:22 GMT
Connection: keep-alive
Hello world
If you use Docker Machine on OS X, the port is actually mapped to the Docker
host VM, and you should use the following command:
$ curl $(docker-machine ip VM_NAME):49160
We hope this tutorial helped you get up and running with Node.js and
CentOS on Docker. You can get the full source code at
[https://github.com/enokd/docker-node-hello/](https://github.com/enokd/docker-node-hello/).

View File

@@ -1,7 +1,7 @@
<!--[metadata]>
+++
title = "Dockerizing a Redis service"
description = "Installing and running an redis service"
description = "Installing and running a redis service"
keywords = ["docker, example, package installation, networking, redis"]
[menu.main]
parent = "engine_dockerize"

View File

@@ -22,7 +22,7 @@ example, a [volume plugin](plugins_volume.md) might enable Docker
volumes to persist across multiple Docker hosts and a
[network plugin](plugins_network.md) might provide network plumbing.
Currently Docker supports volume and network driver plugins. In the future it
Currently Docker supports authorization, volume and network driver plugins. In the future it
will support additional plugin types.
## Installing a plugin
@@ -31,78 +31,48 @@ Follow the instructions in the plugin's documentation.
## Finding a plugin
The following plugins exist:
The sections below provide a non-exhaustive overview of available plugins.
* The [Blockbridge plugin](https://github.com/blockbridge/blockbridge-docker-volume)
is a volume plugin that provides access to an extensible set of
container-based persistent storage options. It supports single and multi-host Docker
environments with features that include tenant isolation, automated
provisioning, encryption, secure deletion, snapshots and QoS.
<style>
#content tr td:first-child { white-space: nowrap;}
</style>
* The [Convoy plugin](https://github.com/rancher/convoy) is a volume plugin for a
variety of storage back-ends including device mapper and NFS. It's a simple standalone
executable written in Go and provides the framework to support vendor-specific extensions
such as snapshots, backups and restore.
### Network plugins
* The [Flocker plugin](https://clusterhq.com/docker-plugin/) is a volume plugin
which provides multi-host portable volumes for Docker, enabling you to run
databases and other stateful containers and move them around across a cluster
of machines.
Plugin | Description
----------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
[Contiv Networking](https://github.com/contiv/netplugin) | An open source network plugin to provide infrastructure and security policies for a multi-tenant micro services deployment, while providing an integration to physical network for non-container workload. Contiv Networking implements the remote driver and IPAM APIs available in Docker 1.9 onwards.
[Kuryr Network Plugin](https://github.com/openstack/kuryr) | A network plugin is developed as part of the OpenStack Kuryr project and implements the Docker networking (libnetwork) remote driver API by utilizing Neutron, the OpenStack networking service. It includes an IPAM driver as well.
[Weave Network Plugin](https://www.weave.works/docs/net/latest/introducing-weave/) | A network plugin that creates a virtual network that connects your Docker containers - across multiple hosts or clouds and enables automatic discovery of applications. Weave networks are resilient, partition tolerant, secure and work in partially connected networks, and other adverse environments - all configured with delightful simplicity.
* The [GlusterFS plugin](https://github.com/calavera/docker-volume-glusterfs) is
another volume plugin that provides multi-host volumes management for Docker
using GlusterFS.
### Volume plugins
* The [Horcrux Volume Plugin](https://github.com/muthu-r/horcrux) allows on-demand,
version controlled access to your data. Horcrux is an open-source plugin,
written in Go, and supports SCP, [Minio](https://www.minio.io) and Amazon S3.
Plugin | Description
----------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
[Azure File Storage plugin](https://github.com/Azure/azurefile-dockervolumedriver) | Lets you mount Microsoft [Azure File Storage](https://azure.microsoft.com/blog/azure-file-storage-now-generally-available/) shares to Docker containers as volumes using the SMB 3.0 protocol. [Learn more](https://azure.microsoft.com/blog/persistent-docker-volumes-with-azure-file-storage/).
[Blockbridge plugin](https://github.com/blockbridge/blockbridge-docker-volume) | A volume plugin that provides access to an extensible set of container-based persistent storage options. It supports single and multi-host Docker environments with features that include tenant isolation, automated provisioning, encryption, secure deletion, snapshots and QoS.
[Contiv Volume Plugin](https://github.com/contiv/volplugin) | An open source volume plugin that provides multi-tenant, persistent, distributed storage with intent based consumption using ceph underneath.
[Convoy plugin](https://github.com/rancher/convoy) | A volume plugin for a variety of storage back-ends including device mapper and NFS. It's a simple standalone executable written in Go and provides the framework to support vendor-specific extensions such as snapshots, backups and restore.
[DRBD plugin](https://www.drbd.org/en/supported-projects/docker) | A volume plugin that provides highly available storage replicated by [DRBD](https://www.drbd.org). Data written to the docker volume is replicated in a cluster of DRBD nodes.
[Flocker plugin](https://clusterhq.com/docker-plugin/) | A volume plugin that provides multi-host portable volumes for Docker, enabling you to run databases and other stateful containers and move them around across a cluster of machines.
[gce-docker plugin](https://github.com/mcuadros/gce-docker) | A volume plugin able to attach, format and mount Google Compute [persistent-disks](https://cloud.google.com/compute/docs/disks/persistent-disks).
[GlusterFS plugin](https://github.com/calavera/docker-volume-glusterfs) | A volume plugin that provides multi-host volumes management for Docker using GlusterFS.
[Horcrux Volume Plugin](https://github.com/muthu-r/horcrux) | A volume plugin that allows on-demand, version controlled access to your data. Horcrux is an open-source plugin, written in Go, and supports SCP, [Minio](https://www.minio.io) and Amazon S3.
[IPFS Volume Plugin](http://github.com/vdemeester/docker-volume-ipfs) | An open source volume plugin that allows using an [ipfs](https://ipfs.io/) filesystem as a volume.
[Keywhiz plugin](https://github.com/calavera/docker-volume-keywhiz) | A plugin that provides credentials and secret management using Keywhiz as a central repository.
[Local Persist Plugin](https://github.com/CWSpear/local-persist) | A volume plugin that extends the default `local` driver's functionality by allowing you to specify a mountpoint anywhere on the host, which enables the files to *always persist*, even if the volume is removed via `docker volume rm`.
[NetApp Plugin](https://github.com/NetApp/netappdvp) (nDVP) | A volume plugin that provides direct integration with the Docker ecosystem for the NetApp storage portfolio. The nDVP package supports the provisioning and management of storage resources from the storage platform to Docker hosts, with a robust framework for adding additional platforms in the future.
[Netshare plugin](https://github.com/ContainX/docker-volume-netshare) | A volume plugin that provides volume management for NFS 3/4, AWS EFS and CIFS file systems.
[OpenStorage Plugin](https://github.com/libopenstorage/openstorage) | A cluster-aware volume plugin that provides volume management for file and block storage solutions. It implements a vendor neutral specification for implementing extensions such as CoS, encryption, and snapshots. It has example drivers based on FUSE, NFS, NBD and EBS to name a few.
[Quobyte Volume Plugin](https://github.com/quobyte/docker-volume) | A volume plugin that connects Docker to [Quobyte](http://www.quobyte.com/containers)'s data center file system, a general-purpose scalable and fault-tolerant storage platform.
[REX-Ray plugin](https://github.com/emccode/rexray) | A volume plugin which is written in Go and provides advanced storage functionality for many platforms including VirtualBox, EC2, Google Compute Engine, OpenStack, and EMC.
[VMware vSphere Storage Plugin](https://github.com/vmware/docker-volume-vsphere) | Docker Volume Driver for vSphere enables customers to address persistent storage requirements for Docker containers in vSphere environments.
* The [IPFS Volume Plugin](http://github.com/vdemeester/docker-volume-ipfs)
is an open source volume plugin that allows using an
[ipfs](https://ipfs.io/) filesystem as a volume.
### Authorization plugins
* The [Keywhiz plugin](https://github.com/calavera/docker-volume-keywhiz) is
a plugin that provides credentials and secret management using Keywhiz as
a central repository.
* The [Netshare plugin](https://github.com/gondor/docker-volume-netshare) is a volume plugin
that provides volume management for NFS 3/4, AWS EFS and CIFS file systems.
* The [OpenStorage Plugin](https://github.com/libopenstorage/openstorage) is a cluster aware volume plugin that provides volume management for file and block storage solutions. It implements a vendor neutral specification for implementing extensions such as CoS, encryption, and snapshots. It has example drivers based on FUSE, NFS, NBD and EBS to name a few.
* The [Quobyte Volume Plugin](https://github.com/quobyte/docker-volume) connects Docker to [Quobyte](http://www.quobyte.com/containers)'s data center file system, a general-purpose scalable and fault-tolerant storage platform.
* The [REX-Ray plugin](https://github.com/emccode/rexray) is a volume plugin
which is written in Go and provides advanced storage functionality for many
platforms including VirtualBox, EC2, Google Compute Engine, OpenStack, and EMC.
* The [Contiv Volume Plugin](https://github.com/contiv/volplugin) is an open
source volume plugin that provides multi-tenant, persistent, distributed storage
with intent based consumption using ceph underneath.
* The [Contiv Networking](https://github.com/contiv/netplugin) is an open source
libnetwork plugin to provide infrastructure and security policies for a
multi-tenant micro services deployment, while providing an integration to
physical network for non-container workload. Contiv Networking implements the
remote driver and IPAM APIs available in Docker 1.9 onwards.
* The [Weave Network Plugin](http://docs.weave.works/weave/latest_release/plugin.html)
creates a virtual network that connects your Docker containers -
across multiple hosts or clouds and enables automatic discovery of
applications. Weave networks are resilient, partition tolerant,
secure and work in partially connected networks, and other adverse
environments - all configured with delightful simplicity.
* The [Kuryr Network Plugin](https://github.com/openstack/kuryr) is
developed as part of the OpenStack Kuryr project and implements the
Docker networking (libnetwork) remote driver API by utilizing
Neutron, the OpenStack networking service. It includes an IPAM
driver as well.
* The [Local Persist Plugin](https://github.com/CWSpear/local-persist)
extends the default `local` driver's functionality by allowing you specify
a mountpoint anywhere on the host, which enables the files to *always persist*,
even if the volume is removed via `docker volume rm`.
Plugin | Description
------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
[Twistlock AuthZ Broker](https://github.com/twistlock/authz)  | A basic extendable authorization plugin that runs directly on the host or inside a container. This plugin allows you to define user policies that it evaluates during authorization. Basic authorization is provided if the Docker daemon is started with the `--tlsverify` flag (the username is extracted from the certificate common name).
## Troubleshooting a plugin

View File

@@ -151,8 +151,7 @@ should implement the following two methods:
"RequestMethod": "The HTTP method",
"RequestURI": "The HTTP request URI",
"RequestBody": "Byte array containing the raw HTTP request body",
"RequestHeader": "Byte array containing the raw HTTP request header as a map[string][]string ",
"RequestStatusCode": "Request status code"
"RequestHeader": "Byte array containing the raw HTTP request header as a map[string][]string "
}
```
@@ -177,7 +176,6 @@ should implement the following two methods:
"RequestURI": "The HTTP request URI",
"RequestBody": "Byte array containing the raw HTTP request body",
"RequestHeader": "Byte array containing the raw HTTP request header as a map[string][]string",
"RequestStatusCode": "Request status code",
"ResponseBody": "Byte array containing the raw HTTP response body",
"ResponseHeader": "Byte array containing the raw HTTP response header as a map[string][]string",
"ResponseStatusCode":"Response status code"
@@ -190,17 +188,10 @@ should implement the following two methods:
{
"Allow": "Determined whether the user is allowed or not",
"Msg": "The authorization message",
"Err": "The error message if things go wrong",
"ModifiedBody": "Byte array containing a modified body of the raw HTTP body (or nil if no changes required)",
"ModifiedHeader": "Byte array containing a modified header of the HTTP response (or nil if no changes required)",
"ModifiedStatusCode": "int containing the modified version of the status code (or 0 if not change is required)"
"Err": "The error message if things go wrong"
}
```
The modified response enables the authorization plugin to manipulate the content
of the HTTP response. In case of more than one plugin, each subsequent plugin
receives a response (optionally) modified by a previous plugin.
### Request authorization
Each plugin must support two request authorization messages formats, one from the daemon to the plugin and then from the plugin to the daemon. The tables below detail the content expected in each message.

View File

@@ -13,7 +13,7 @@ parent = "engine_extend"
Docker Engine network plugins enable Engine deployments to be extended to
support a wide range of networking technologies, such as VXLAN, IPVLAN, MACVLAN
or something completely different. Network driver plugins are supported via the
LibNetwork project. Each plugin is implemented asa "remote driver" for
LibNetwork project. Each plugin is implemented as a "remote driver" for
LibNetwork, which shares plugin infrastructure with Engine. Effectively, network
driver plugins are activated in the same way as other plugins, and use the same
kind of protocol.
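As a rough sketch of how such a plugin is wired up (the plugin name and socket
address are hypothetical), a remote driver can be advertised to the daemon
through a spec file and then selected by name:

```bash
# Hypothetical plugin named "myplugin" listening on a local TCP socket.
$ echo "tcp://127.0.0.1:8080" | sudo tee /etc/docker/plugins/myplugin.spec
$ docker network create -d myplugin mynet
```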

View File

@@ -140,7 +140,8 @@ Docker needs reminding of the path to the volume on the host.
```
Respond with the path on the host filesystem where the volume has been made
available, and/or a string error if an error occurred.
available, and/or a string error if an error occurred. `Mountpoint` is
optional; however, the plugin may be queried again later if one is not
provided.
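As a rough sketch, since volume plugins are usually served over a Unix socket
under `/run/docker/plugins/`, you can exercise this endpoint by hand (the
plugin socket and volume name are hypothetical, and `curl` 7.40 or later is
assumed for `--unix-socket`):

```bash
$ curl -s --unix-socket /run/docker/plugins/myplugin.sock \
    -H "Content-Type: application/json" \
    -d '{"Name": "myvolume"}' \
    http://localhost/VolumeDriver.Path
# Illustrative response: {"Mountpoint": "/mnt/volumes/myvolume", "Err": ""}
```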
### /VolumeDriver.Unmount
@@ -188,7 +189,8 @@ Get the volume info.
}
```
Respond with a string error if an error occurred.
Respond with a string error if an error occurred. `Mountpoint` and `Status` are
optional.
### /VolumeDriver.List
@@ -213,4 +215,4 @@ Get the list of volumes registered with the plugin.
}
```
Respond with a string error if an error occurred.
Respond with a string error if an error occurred. `Mountpoint` is optional.

View File

@@ -9,7 +9,7 @@ weight = 110
+++
<![end-metadata]-->
# Binaries
# Installation from binaries
**This instruction set is meant for hackers who want to try out Docker
on a variety of environments.**
@@ -85,90 +85,137 @@ exhibit unexpected behaviour.
> vendor for the system, and might break regulations and security
> policies in heavily regulated environments.
## Get the Docker binary
## Get the Docker Engine binaries
You can download either the latest release binary or a specific version.
After downloading a binary file, you must set the file's execute bit to run it.
You can download either the latest release binaries or a specific version. To get
the list of stable release version numbers from GitHub, view the `docker/docker`
[releases page](https://github.com/docker/docker/releases). You can get the MD5
and SHA256 hashes by appending .md5 and .sha256 to the URLs, respectively.
To set the file's execute bit on Linux and OS X:
$ chmod +x docker
To get the list of stable release version numbers from GitHub, view the
`docker/docker` [releases page](https://github.com/docker/docker/releases).
> **Note**
>
> 1) You can get the MD5 and SHA256 hashes by appending .md5 and .sha256 to the URLs respectively
>
> 2) You can get the compressed binaries by appending .tgz to the URLs
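For example, to fetch the latest Linux x86_64 archive and verify it before unpacking, something like the following works, assuming the published `.sha256` file is in the format that `sha256sum -c` expects:
```bash
# Download the latest Linux x86_64 build plus its SHA256 checksum,
# then verify the archive before extracting it.
curl -fSL -O https://get.docker.com/builds/Linux/x86_64/docker-latest.tgz
curl -fSL -O https://get.docker.com/builds/Linux/x86_64/docker-latest.tgz.sha256
sha256sum -c docker-latest.tgz.sha256
```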
### Get the Linux binary
### Get the Linux binaries
To download the latest version for Linux, use the
following URLs:
https://get.docker.com/builds/Linux/i386/docker-latest
https://get.docker.com/builds/Linux/i386/docker-latest.tgz
https://get.docker.com/builds/Linux/x86_64/docker-latest
https://get.docker.com/builds/Linux/x86_64/docker-latest.tgz
To download a specific version for Linux, use the
following URL patterns:
https://get.docker.com/builds/Linux/i386/docker-<version>
https://get.docker.com/builds/Linux/i386/docker-<version>.tgz
https://get.docker.com/builds/Linux/x86_64/docker-<version>
https://get.docker.com/builds/Linux/x86_64/docker-<version>.tgz
For example:
https://get.docker.com/builds/Linux/i386/docker-1.9.1
https://get.docker.com/builds/Linux/i386/docker-1.11.0.tgz
https://get.docker.com/builds/Linux/x86_64/docker-1.9.1
https://get.docker.com/builds/Linux/x86_64/docker-1.11.0.tgz
> **Note** These instructions are for Docker Engine 1.11 and up. Engine 1.10 and
> under consists of a single binary, and instructions for those versions are
> different. To install version 1.10 or below, follow the instructions in the
> <a href="/v1.10/engine/installation/binaries/" target="_blank">1.10 documentation</a>.
#### Install the Linux binaries
After downloading, you extract the archive, which puts the binaries in a
directory named `docker` in your current location.
```bash
$ tar -xvzf docker-latest.tgz
docker/
docker/docker-containerd-ctr
docker/docker
docker/docker-containerd
docker/docker-runc
docker/docker-containerd-shim
```
Engine requires these binaries to be installed in your host's `$PATH`.
For example, to install the binaries in `/usr/bin`:
```bash
$ mv docker/* /usr/bin/
```
> **Note**: If you already have Engine installed on your host, make sure you
> stop Engine before installing (`killall docker`), and install the binaries
> in the same location. You can find the location of the current installation
> with `dirname $(which docker)`.
#### Run the Engine daemon on Linux
You can manually start the Engine in daemon mode using:
```bash
$ sudo docker daemon &
```
The GitHub repository provides samples of init-scripts you can use to control
the daemon through a process manager, such as upstart or systemd. You can find
these scripts in the <a href="https://github.com/docker/docker/tree/master/contrib/init">
contrib directory</a>.
For additional information about running the Engine in daemon mode, refer to
the [daemon command](../reference/commandline/daemon.md) in the Engine command
line reference.
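For instance, on a systemd-based distribution, wiring up the sample unit file could look like the sketch below. The file names under `contrib/init/systemd` vary between releases, so treat the paths here as assumptions and check the directory first.
```bash
# Install the sample systemd unit shipped in the repository's contrib
# directory, then start the daemon through systemd (paths illustrative).
sudo cp contrib/init/systemd/docker.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl start docker
sudo systemctl status docker   # confirm the daemon came up
```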
### Get the Mac OS X binary
The Mac OS X binary is only a client. You cannot use it to run the `docker`
daemon. To download the latest version for Mac OS X, use the following URLs:
https://get.docker.com/builds/Darwin/x86_64/docker-latest
https://get.docker.com/builds/Darwin/x86_64/docker-latest.tgz
To download a specific version for Mac OS X, use the
following URL patterns:
following URL pattern:
https://get.docker.com/builds/Darwin/x86_64/docker-<version>
https://get.docker.com/builds/Darwin/x86_64/docker-<version>.tgz
For example:
https://get.docker.com/builds/Darwin/x86_64/docker-1.9.1
https://get.docker.com/builds/Darwin/x86_64/docker-1.11.0.tgz
You can extract the downloaded archive either by double-clicking the downloaded
`.tgz` or on the command line, using `tar -xvzf docker-1.11.0.tgz`. The client
binary can be executed from any location on your filesystem.
### Get the Windows binary
You can only download the Windows client binary for version `1.9.1` onwards.
Moreover, the binary is only a client, you cannot use it to run the `docker` daemon.
You can only download the Windows binary for version `1.9.1` onwards.
Moreover, the 32-bit (`i386`) binary is only a client, you cannot use it to
run the `docker` daemon. The 64-bit binary (`x86_64`) is both a client and
daemon.
To download the latest version for Windows, use the following URLs:
https://get.docker.com/builds/Windows/i386/docker-latest.exe
https://get.docker.com/builds/Windows/i386/docker-latest.zip
https://get.docker.com/builds/Windows/x86_64/docker-latest.exe
https://get.docker.com/builds/Windows/x86_64/docker-latest.zip
To download a specific version for Windows, use the following URL pattern:
https://get.docker.com/builds/Windows/i386/docker-<version>.exe
https://get.docker.com/builds/Windows/i386/docker-<version>.zip
https://get.docker.com/builds/Windows/x86_64/docker-<version>.exe
https://get.docker.com/builds/Windows/x86_64/docker-<version>.zip
For example:
https://get.docker.com/builds/Windows/i386/docker-1.9.1.exe
https://get.docker.com/builds/Windows/i386/docker-1.11.0.zip
https://get.docker.com/builds/Windows/x86_64/docker-1.9.1.exe
https://get.docker.com/builds/Windows/x86_64/docker-1.11.0.zip
## Run the Docker daemon
# start the docker in daemon mode from the directory you unpacked
$ sudo ./docker daemon &
> **Note** These instructions are for Engine 1.11 and up. Instructions for older
> versions are slightly different. To install version 1.10 or below, follow the
> instructions in the <a href="/v1.10/engine/installation/binaries/" target="_blank">1.10 documentation</a>.
## Giving non-root access
@@ -188,21 +235,15 @@ need to add `sudo` to all the client commands.
> The *docker* group (or the group specified with `-G`) is root-equivalent;
> see [*Docker Daemon Attack Surface*](../security/security.md#docker-daemon-attack-surface) for details.
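A typical way to grant a user that access is sketched below; keep the warning above in mind, since membership in the *docker* group is root-equivalent.
```bash
# Create the docker group (it may already exist) and add the current user.
sudo groupadd docker
sudo usermod -aG docker $USER
# Log out and back in for the new group membership to take effect.
```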
## Upgrades
## Upgrade Docker Engine
To upgrade your manual installation of Docker, first kill the docker
To upgrade your manual installation of Docker Engine on Linux, first kill the docker
daemon:
$ killall docker
Then follow the regular installation steps.
Then follow the [regular installation steps](#get-the-linux-binaries).
## Run your first container!
# check your docker version
$ sudo ./docker version
# run a container and open an interactive shell in the container
$ sudo ./docker run -i -t ubuntu /bin/bash
## Next steps
Continue with the [User Guide](../userguide/index.md).

View File

@@ -57,7 +57,7 @@ package manager.
$ sudo tee /etc/yum.repos.d/docker.repo <<-'EOF'
[dockerrepo]
name=Docker Repository
baseurl=https://yum.dockerproject.org/repo/main/centos/$releasever/
baseurl=https://yum.dockerproject.org/repo/main/centos/7/
enabled=1
gpgcheck=1
gpgkey=https://yum.dockerproject.org/gpg

View File

@@ -16,7 +16,7 @@ Docker is supported on the following versions of Debian:
- [*Debian testing stretch (64-bit)*](#debian-wheezy-stable-7-x-64-bit)
- [*Debian 8.0 Jessie (64-bit)*](#debian-jessie-80-64-bit)
- [*Debian 7.7 Wheezy (64-bit)*](#debian-wheezy-stable-7-x-64-bit)
- [*Debian 7.7 Wheezy (64-bit)*](#debian-wheezy-stable-7-x-64-bit) (backports required)
>**Note**: If you previously installed Docker using `APT`, make sure you update
your `APT` sources to the new `APT` repository.
@@ -36,6 +36,26 @@ Docker is supported on the following versions of Debian:
$ uname -r
Additionally, for users of Debian Wheezy, backports must be available. To enable backports in Wheezy (the individual steps are consolidated in the sketch after this list):
1. Log into your machine and open a terminal with `sudo` or `root` privileges.
2. Open the `/etc/apt/sources.list.d/backports.list` file in your favorite editor.
If the file doesn't exist, create it.
3. Remove any existing entries.
4. Add an entry for backports on Debian Wheezy.
An example entry:
deb http://http.debian.net/debian wheezy-backports main
5. Update package information:
$ apt-get update
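Taken together, steps 2 through 5 amount to the following sketch. Run it with `root` privileges; redirecting with `>` replaces any existing entries, as step 3 requires.
```bash
# Point APT at wheezy-backports (steps 2-4) and refresh the index (step 5).
echo "deb http://http.debian.net/debian wheezy-backports main" \
    > /etc/apt/sources.list.d/backports.list
apt-get update
```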
### Update your apt repository
Docker's `APT` repository contains Docker 1.7.1 and higher. To set `APT` to use

View File

@@ -26,7 +26,7 @@ version, open a terminal and use `uname -r` to display your kernel version:
$ uname -r
3.19.5-100.fc21.x86_64
If your kernel is at a older version, you must update it.
If your kernel is at an older version, you must update it.
Finally, it is recommended that you fully update your system. Please keep in
mind that your system should be fully patched to fix any potential kernel bugs. Any
@@ -186,7 +186,7 @@ You can uninstall the Docker software with `dnf`.
1. List the package you have installed.
$ dnf list installed | grep docker dnf list installed | grep docker
$ dnf list installed | grep docker
docker-engine.x86_64 1.7.1-0.1.fc21 @/docker-engine-1.7.1-0.1.fc21.el7.x86_64
2. Remove the package.

View File

@@ -29,13 +29,8 @@ btrfs storage engine on both Oracle Linux 6 and 7.
> follow the installation instructions provided in the
> [Oracle Linux documentation](https://docs.oracle.com/en/operating-systems/?tab=2).
>
> The installation instructions for Oracle Linux 6 can be found in [Chapter 10 of
> the Administrator&apos;s
> Solutions Guide](https://docs.oracle.com/cd/E37670_01/E37355/html/ol_docker.html)
>
> The installation instructions for Oracle Linux 7 can be found in [Chapter 29 of
> the Administrator&apos;s
> Guide](https://docs.oracle.com/cd/E52668_01/E54669/html/ol7-docker.html)
> The installation instructions for Oracle Linux 6 and 7 can be found in [Chapter 2 of
> the Docker User&apos;s Guide](https://docs.oracle.com/cd/E52668_01/E75728/html/docker_install_upgrade.html)
1. Log into your machine as a user with `sudo` or `root` privileges.

View File

@@ -391,7 +391,7 @@ To specify a DNS server for use by Docker:
5. Restart the Docker daemon.
$ sudo restart docker
$ sudo service docker restart
&nbsp;

View File

@@ -85,7 +85,7 @@ Docker container using standard localhost addressing such as `localhost:8000` or
![Linux Architecture Diagram](images/linux_docker_host.svg)
In an Windows installation, the `docker` daemon is running inside a Linux virtual
In a Windows installation, the `docker` daemon is running inside a Linux virtual
machine. You use the Windows Docker client to talk to the Docker host VM. Your
Docker containers run inside this host.

View File

@@ -29,7 +29,7 @@ instructions that didn't modify the filesystem.
Content addressability is the foundation for the new distribution features. The
image pull and push code has been reworked to use a download/upload manager
concept that makes pushing and pulling images much more stable and mitigate any
concept that makes pushing and pulling images much more stable and mitigates any
parallel request issues. The download manager also brings retries on failed
downloads and better prioritization for concurrent downloads.

View File

@@ -49,10 +49,6 @@ Docker version | API version | Changes
1.8.x | [1.20](docker_remote_api_v1.20.md) | [API changes](docker_remote_api.md#v1-20-api-changes)
1.7.x | [1.19](docker_remote_api_v1.19.md) | [API changes](docker_remote_api.md#v1-19-api-changes)
1.6.x | [1.18](docker_remote_api_v1.18.md) | [API changes](docker_remote_api.md#v1-18-api-changes)
1.5.x | [1.17](docker_remote_api_v1.17.md) | [API changes](docker_remote_api.md#v1-17-api-changes)
1.4.x | [1.16](docker_remote_api_v1.16.md) | [API changes](docker_remote_api.md#v1-16-api-changes)
1.3.x | [1.15](docker_remote_api_v1.15.md) | [API changes](docker_remote_api.md#v1-15-api-changes)
1.2.x | [1.14](docker_remote_api_v1.14.md) | [API changes](docker_remote_api.md#v1-14-api-changes)
Refer to the [GitHub repository](
https://github.com/docker/docker/tree/master/docs/reference/api) for
@@ -60,12 +56,12 @@ older releases.
## Authentication
Since API version 1.2, the auth configuration is now handled client side, so the
Authentication configuration is handled client side, so the
client has to send the `authConfig` as a `POST` in `/images/(name)/push`. The
`authConfig`, set as the `X-Registry-Auth` header, is currently a Base64 encoded
(JSON) string with the following structure:
```
```JSON
{"username": "string", "password": "string", "email": "string",
"serveraddress" : "string", "auth": ""}
```
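For example, the header can be assembled in a shell as follows. The credentials are placeholders, and `base64 -w0` assumes GNU coreutils:
```bash
# Build the X-Registry-Auth header from the JSON structure above and
# use it on a push request (all values are placeholders).
AUTH_CONFIG='{"username": "jdoe", "password": "secret", "email": "jdoe@example.com", "serveraddress": "https://index.docker.io/v1/", "auth": ""}'
X_REGISTRY_AUTH=$(echo -n "$AUTH_CONFIG" | base64 -w0)
curl -s --unix-socket /var/run/docker.sock \
     -H "X-Registry-Auth: $X_REGISTRY_AUTH" \
     -X POST "http://localhost/images/jdoe/myimage/push?tag=latest"
```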
@@ -239,53 +235,3 @@ end point now returns the new boolean fields `CpuCfsPeriod`, `CpuCfsQuota`, and
* `POST /build` closing the HTTP request cancels the build
* `POST /containers/(id)/exec` includes `Warnings` field to response.
### v1.17 API changes
[Docker Remote API v1.17](docker_remote_api_v1.17.md) documentation
* The build supports `LABEL` command. Use this to add metadata to an image. For
example you could add data describing the content of an image. `LABEL
"com.example.vendor"="ACME Incorporated"`
* `POST /containers/(id)/attach` and `POST /exec/(id)/start`
* The Docker client now hints potential proxies about connection hijacking using HTTP Upgrade headers.
* `POST /containers/create` sets labels on container create describing the container.
* `GET /containers/json` returns the labels associated with the containers (`Labels`).
* `GET /containers/(id)/json` returns the list current execs associated with the
container (`ExecIDs`). This endpoint now returns the container labels
(`Config.Labels`).
* `POST /containers/(id)/rename` renames a container `id` to a new name.
* `POST /containers/create` and `POST /containers/(id)/start` callers can pass
`ReadonlyRootfs` in the host config to mount the container's root filesystem as
read only.
* `GET /containers/(id)/stats` returns a live stream of a container's resource usage statistics.
* `GET /images/json` returns the labels associated with each image (`Labels`).
### v1.16 API changes
[Docker Remote API v1.16](docker_remote_api_v1.16.md)
* `GET /info` returns the number of CPUs available on the machine (`NCPU`),
total memory available (`MemTotal`), a user-friendly name describing the running Docker daemon (`Name`), a unique ID identifying the daemon (`ID`), and
a list of daemon labels (`Labels`).
* `POST /containers/create` callers can set the new container's MAC address explicitly.
* Volumes are now initialized when the container is created.
* `POST /containers/(id)/copy` copies data which is contained in a volume.
### v1.15 API changes
[Docker Remote API v1.15](docker_remote_api_v1.15.md) documentation
`POST /containers/create` you can set a container's `HostConfig` when creating a
container. Previously this was only available when starting a container.
### v1.14 API changes
[Docker Remote API v1.14](docker_remote_api_v1.14.md) documentation
* `DELETE /containers/(id)` when using `force`, the container will be immediately killed with SIGKILL.
* `POST /containers/(id)/start` the `HostConfig` option accepts the field `CapAdd`, which specifies a list of capabilities
to add, and the field `CapDrop`, which specifies a list of capabilities to drop.
* `POST /images/create` the `fromImage` and `repo` parameters support the
`repo:tag` format. Consequently, the `tag` parameter is now obsolete. Using the
new format and the `tag` parameter at the same time will return an error.

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@@ -206,13 +206,6 @@ Json Parameters:
- **Domainname** - A string value containing the desired domain name to use
for the container.
- **User** - A string value containing the user to use inside the container.
- **Memory** - Memory limit in bytes.
- **MemorySwap** - Total memory limit (memory + swap); set `-1` to enable unlimited swap.
You must use this with `memory` and make the swap value larger than `memory`.
- **CpuShares** - An integer value containing the CPU Shares for container
(ie. the relative weight vs other containers).
- **Cpuset** - The same as CpusetCpus, but deprecated, please don't use.
- **CpusetCpus** - String value containing the cgroups CpusetCpus to use.
- **AttachStdin** - Boolean value, attaches to stdin.
- **AttachStdout** - Boolean value, attaches to stdout.
- **AttachStderr** - Boolean value, attaches to stderr.
@@ -243,6 +236,12 @@ Json Parameters:
in the form of `container_name:alias`.
- **LxcConf** - LXC specific configurations. These configurations will only
work when using the `lxc` execution driver.
- **Memory** - Memory limit in bytes.
- **MemorySwap** - Total memory limit (memory + swap); set `-1` to enable unlimited swap.
You must use this with `memory` and make the swap value larger than `memory`.
- **CpuShares** - An integer value containing the CPU Shares for container
(ie. the relative weight vs other containers).
- **CpusetCpus** - String value containing the cgroups CpusetCpus to use.
- **PortBindings** - A map of exposed container ports and the host port they
should map to. It should be specified in the form
`{ <port>/<protocol>: [{ "HostPort": "<port>" }] }`
@@ -1252,7 +1251,6 @@ Query Parameters:
can be retrieved or `-` to read the image from the request body.
- **repo** repository
- **tag** tag
- **registry** the registry to pull from
Request Headers:

View File

@@ -213,18 +213,6 @@ Json Parameters:
- **Domainname** - A string value containing the domain name to use
for the container.
- **User** - A string value specifying the user inside the container.
- **Memory** - Memory limit in bytes.
- **MemorySwap** - Total memory limit (memory + swap); set `-1` to enable unlimited swap.
You must use this with `memory` and make the swap value larger than `memory`.
- **CpuShares** - An integer value containing the container's CPU Shares
(ie. the relative weight vs other containers).
- **CpuPeriod** - The length of a CPU period in microseconds.
- **CpuQuota** - Microseconds of CPU time that the container can get in a CPU period.
- **Cpuset** - Deprecated please don't use. Use `CpusetCpus` instead.
- **CpusetCpus** - String value containing the `cgroups CpusetCpus` to use.
- **CpusetMems** - Memory nodes (MEMs) in which to allow execution (0-3, 0,1). Only effective on NUMA systems.
- **BlkioWeight** - Block IO weight (relative weight) accepts a weight value between 10 and 1000.
- **OomKillDisable** - Boolean value, whether to disable OOM Killer for the container or not.
- **AttachStdin** - Boolean value, attaches to `stdin`.
- **AttachStdout** - Boolean value, attaches to `stdout`.
- **AttachStderr** - Boolean value, attaches to `stderr`.
@@ -254,6 +242,17 @@ Json Parameters:
in the form of `container_name:alias`.
- **LxcConf** - LXC specific configurations. These configurations only
work when using the `lxc` execution driver.
- **Memory** - Memory limit in bytes.
- **MemorySwap** - Total memory limit (memory + swap); set `-1` to enable unlimited swap.
You must use this with `memory` and make the swap value larger than `memory`.
- **CpuShares** - An integer value containing the container's CPU Shares
(ie. the relative weight vs other containers).
- **CpuPeriod** - The length of a CPU period in microseconds.
- **CpuQuota** - Microseconds of CPU time that the container can get in a CPU period.
- **CpusetCpus** - String value containing the `cgroups CpusetCpus` to use.
- **CpusetMems** - Memory nodes (MEMs) in which to allow execution (0-3, 0,1). Only effective on NUMA systems.
- **BlkioWeight** - Block IO weight (relative weight) accepts a weight value between 10 and 1000.
- **OomKillDisable** - Boolean value, whether to disable OOM Killer for the container or not.
- **PortBindings** - A map of exposed container ports and the host port they
should map to. A JSON object in the form
`{ <port>/<protocol>: [{ "HostPort": "<port>" }] }`
@@ -1301,7 +1300,6 @@ Query Parameters:
can be retrieved or `-` to read the image from the request body.
- **repo** Repository name.
- **tag** Tag.
- **registry** The registry to pull from.
Request Headers:

View File

@@ -215,19 +215,6 @@ Json Parameters:
- **Domainname** - A string value containing the domain name to use
for the container.
- **User** - A string value specifying the user inside the container.
- **Memory** - Memory limit in bytes.
- **MemorySwap** - Total memory limit (memory + swap); set `-1` to enable unlimited swap.
You must use this with `memory` and make the swap value larger than `memory`.
- **CpuShares** - An integer value containing the container's CPU Shares
(ie. the relative weight vs other containers).
- **CpuPeriod** - The length of a CPU period in microseconds.
- **CpuQuota** - Microseconds of CPU time that the container can get in a CPU period.
- **Cpuset** - Deprecated please don't use. Use `CpusetCpus` instead.
- **CpusetCpus** - String value containing the `cgroups CpusetCpus` to use.
- **CpusetMems** - Memory nodes (MEMs) in which to allow execution (0-3, 0,1). Only effective on NUMA systems.
- **BlkioWeight** - Block IO weight (relative weight) accepts a weight value between 10 and 1000.
- **MemorySwappiness** - Tune a container's memory swappiness behavior. Accepts an integer between 0 and 100.
- **OomKillDisable** - Boolean value, whether to disable OOM Killer for the container or not.
- **AttachStdin** - Boolean value, attaches to `stdin`.
- **AttachStdout** - Boolean value, attaches to `stdout`.
- **AttachStderr** - Boolean value, attaches to `stderr`.
@@ -257,6 +244,18 @@ Json Parameters:
in the form of `container_name:alias`.
- **LxcConf** - LXC specific configurations. These configurations only
work when using the `lxc` execution driver.
- **Memory** - Memory limit in bytes.
- **MemorySwap** - Total memory limit (memory + swap); set `-1` to enable unlimited swap.
You must use this with `memory` and make the swap value larger than `memory`.
- **CpuShares** - An integer value containing the container's CPU Shares
(ie. the relative weight vs other containers).
- **CpuPeriod** - The length of a CPU period in microseconds.
- **CpuQuota** - Microseconds of CPU time that the container can get in a CPU period.
- **CpusetCpus** - String value containing the `cgroups CpusetCpus` to use.
- **CpusetMems** - Memory nodes (MEMs) in which to allow execution (0-3, 0,1). Only effective on NUMA systems.
- **BlkioWeight** - Block IO weight (relative weight) accepts a weight value between 10 and 1000.
- **MemorySwappiness** - Tune a container's memory swappiness behavior. Accepts an integer between 0 and 100.
- **OomKillDisable** - Boolean value, whether to disable OOM Killer for the container or not.
- **PortBindings** - A map of exposed container ports and the host port they
should map to. A JSON object in the form
`{ <port>/<protocol>: [{ "HostPort": "<port>" }] }`
@@ -1128,7 +1127,7 @@ following section.
`GET /containers/(id or name)/archive`
Get an tar archive of a resource in the filesystem of container `id`.
Get a tar archive of a resource in the filesystem of container `id`.
Query Parameters:
@@ -1398,14 +1397,14 @@ Query Parameters:
}
}
This object maps the hostname of a registry to an object containing the
"username" and "password" for that registry. Multiple registries may
be specified as the build may be based on an image requiring
authentication to pull from any arbitrary registry. Only the registry
domain name (and port if not the default "443") are required. However
(for legacy reasons) the "official" Docker, Inc. hosted registry must
be specified with both a "https://" prefix and a "/v1/" suffix even
though Docker will prefer to use the v2 registry API.
Status Codes:
@@ -1443,7 +1442,6 @@ Query Parameters:
can be retrieved or `-` to read the image from the request body.
- **repo** Repository name.
- **tag** Tag.
- **registry** The registry to pull from.
Request Headers:

View File

@@ -224,21 +224,6 @@ Json Parameters:
- **Domainname** - A string value containing the domain name to use
for the container.
- **User** - A string value specifying the user inside the container.
- **Memory** - Memory limit in bytes.
- **MemorySwap** - Total memory limit (memory + swap); set `-1` to enable unlimited swap.
You must use this with `memory` and make the swap value larger than `memory`.
- **MemoryReservation** - Memory soft limit in bytes.
- **KernelMemory** - Kernel memory limit in bytes.
- **CpuShares** - An integer value containing the container's CPU Shares
(ie. the relative weight vs other containers).
- **CpuPeriod** - The length of a CPU period in microseconds.
- **CpuQuota** - Microseconds of CPU time that the container can get in a CPU period.
- **Cpuset** - Deprecated please don't use. Use `CpusetCpus` instead.
- **CpusetCpus** - String value containing the `cgroups CpusetCpus` to use.
- **CpusetMems** - Memory nodes (MEMs) in which to allow execution (0-3, 0,1). Only effective on NUMA systems.
- **BlkioWeight** - Block IO weight (relative weight) accepts a weight value between 10 and 1000.
- **MemorySwappiness** - Tune a container's memory swappiness behavior. Accepts an integer between 0 and 100.
- **OomKillDisable** - Boolean value, whether to disable OOM Killer for the container or not.
- **AttachStdin** - Boolean value, attaches to `stdin`.
- **AttachStdout** - Boolean value, attaches to `stdout`.
- **AttachStderr** - Boolean value, attaches to `stderr`.
@@ -271,6 +256,20 @@ Json Parameters:
in the form of `container_name:alias`.
- **LxcConf** - LXC specific configurations. These configurations only
work when using the `lxc` execution driver.
- **Memory** - Memory limit in bytes.
- **MemorySwap** - Total memory limit (memory + swap); set `-1` to enable unlimited swap.
You must use this with `memory` and make the swap value larger than `memory`.
- **MemoryReservation** - Memory soft limit in bytes.
- **KernelMemory** - Kernel memory limit in bytes.
- **CpuShares** - An integer value containing the container's CPU Shares
(ie. the relative weight vs other containers).
- **CpuPeriod** - The length of a CPU period in microseconds.
- **CpuQuota** - Microseconds of CPU time that the container can get in a CPU period.
- **CpusetCpus** - String value containing the `cgroups CpusetCpus` to use.
- **CpusetMems** - Memory nodes (MEMs) in which to allow execution (0-3, 0,1). Only effective on NUMA systems.
- **BlkioWeight** - Block IO weight (relative weight) accepts a weight value between 10 and 1000.
- **MemorySwappiness** - Tune a container's memory swappiness behavior. Accepts an integer between 0 and 100.
- **OomKillDisable** - Boolean value, whether to disable OOM Killer for the container or not.
- **PortBindings** - A map of exposed container ports and the host port they
should map to. A JSON object in the form
`{ <port>/<protocol>: [{ "HostPort": "<port>" }] }`
@@ -1211,7 +1210,7 @@ following section.
`GET /containers/(id or name)/archive`
Get an tar archive of a resource in the filesystem of container `id`.
Get a tar archive of a resource in the filesystem of container `id`.
Query Parameters:
@@ -1487,14 +1486,14 @@ Query Parameters:
}
}
This object maps the hostname of a registry to an object containing the
"username" and "password" for that registry. Multiple registries may
be specified as the build may be based on an image requiring
authentication to pull from any arbitrary registry. Only the registry
domain name (and port if not the default "443") are required. However
(for legacy reasons) the "official" Docker, Inc. hosted registry must
be specified with both a "https://" prefix and a "/v1/" suffix even
though Docker will prefer to use the v2 registry API.
Status Codes:

View File

@@ -298,6 +298,17 @@ Create a container
"CgroupParent": "",
"VolumeDriver": "",
"ShmSize": 67108864
},
"NetworkingConfig": {
"EndpointsConfig": {
"isolated_nw" : {
"IPAMConfig": {
"IPv4Address":"172.20.30.33",
"IPv6Address":"2001:db8:abcd::3033"
},
"Links":["container_1", "container_2"],
"Aliases":["server_x", "server_y"]
}
}
}
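As a sketch, the `NetworkingConfig` fragment above can be posted when creating a container. The image and container names are examples, and `isolated_nw` is assumed to exist already:
```bash
# Create a container attached to an existing user-defined network
# with a static IPv4 address (values mirror the example above).
curl -s --unix-socket /var/run/docker.sock \
     -H "Content-Type: application/json" \
     -d '{
           "Image": "busybox",
           "NetworkingConfig": {
             "EndpointsConfig": {
               "isolated_nw": {
                 "IPAMConfig": { "IPv4Address": "172.20.30.33" },
                 "Aliases": ["server_x"]
               }
             }
           }
         }' \
     "http://localhost/containers/create?name=example"
```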
@@ -318,31 +329,6 @@ Json Parameters:
- **Domainname** - A string value containing the domain name to use
for the container.
- **User** - A string value specifying the user inside the container.
- **Memory** - Memory limit in bytes.
- **MemorySwap** - Total memory limit (memory + swap); set `-1` to enable unlimited swap.
You must use this with `memory` and make the swap value larger than `memory`.
- **MemoryReservation** - Memory soft limit in bytes.
- **KernelMemory** - Kernel memory limit in bytes.
- **CpuShares** - An integer value containing the container's CPU Shares
(ie. the relative weight vs other containers).
- **CpuPeriod** - The length of a CPU period in microseconds.
- **CpuQuota** - Microseconds of CPU time that the container can get in a CPU period.
- **Cpuset** - Deprecated please don't use. Use `CpusetCpus` instead.
- **CpusetCpus** - String value containing the `cgroups CpusetCpus` to use.
- **CpusetMems** - Memory nodes (MEMs) in which to allow execution (0-3, 0,1). Only effective on NUMA systems.
- **BlkioWeight** - Block IO weight (relative weight) accepts a weight value between 10 and 1000.
- **BlkioWeightDevice** - Block IO weight (relative device weight) in the form of: `"BlkioWeightDevice": [{"Path": "device_path", "Weight": weight}]`
- **BlkioDeviceReadBps** - Limit read rate (bytes per second) from a device in the form of: `"BlkioDeviceReadBps": [{"Path": "device_path", "Rate": rate}]`, for example:
`"BlkioDeviceReadBps": [{"Path": "/dev/sda", "Rate": "1024"}]`
- **BlkioDeviceWriteBps** - Limit write rate (bytes per second) to a device in the form of: `"BlkioDeviceWriteBps": [{"Path": "device_path", "Rate": rate}]`, for example:
`"BlkioDeviceWriteBps": [{"Path": "/dev/sda", "Rate": "1024"}]`
- **BlkioDeviceReadIOps** - Limit read rate (IO per second) from a device in the form of: `"BlkioDeviceReadIOps": [{"Path": "device_path", "Rate": rate}]`, for example:
`"BlkioDeviceReadIOps": [{"Path": "/dev/sda", "Rate": "1000"}]`
- **BlkioDeviceWriteIOps** - Limit write rate (IO per second) to a device in the form of: `"BlkioDeviceWriteIOps": [{"Path": "device_path", "Rate": rate}]`, for example:
`"BlkioDeviceWriteIOps": [{"Path": "/dev/sda", "Rate": "1000"}]`
- **MemorySwappiness** - Tune a container's memory swappiness behavior. Accepts an integer between 0 and 100.
- **OomKillDisable** - Boolean value, whether to disable OOM Killer for the container or not.
- **OomScoreAdj** - An integer value containing the score given to the container in order to tune OOM killer preferences.
- **AttachStdin** - Boolean value, attaches to `stdin`.
- **AttachStdout** - Boolean value, attaches to `stdout`.
- **AttachStderr** - Boolean value, attaches to `stderr`.
@@ -372,6 +358,30 @@ Json Parameters:
+ `volume_name:container_path:ro` to make the bind mount read-only inside the container.
- **Links** - A list of links for the container. Each link entry should be
in the form of `container_name:alias`.
- **Memory** - Memory limit in bytes.
- **MemorySwap** - Total memory limit (memory + swap); set `-1` to enable unlimited swap.
You must use this with `memory` and make the swap value larger than `memory`.
- **MemoryReservation** - Memory soft limit in bytes.
- **KernelMemory** - Kernel memory limit in bytes.
- **CpuShares** - An integer value containing the container's CPU Shares
(ie. the relative weight vs other containers).
- **CpuPeriod** - The length of a CPU period in microseconds.
- **CpuQuota** - Microseconds of CPU time that the container can get in a CPU period.
- **CpusetCpus** - String value containing the `cgroups CpusetCpus` to use.
- **CpusetMems** - Memory nodes (MEMs) in which to allow execution (0-3, 0,1). Only effective on NUMA systems.
- **BlkioWeight** - Block IO weight (relative weight) accepts a weight value between 10 and 1000.
- **BlkioWeightDevice** - Block IO weight (relative device weight) in the form of: `"BlkioWeightDevice": [{"Path": "device_path", "Weight": weight}]`
- **BlkioDeviceReadBps** - Limit read rate (bytes per second) from a device in the form of: `"BlkioDeviceReadBps": [{"Path": "device_path", "Rate": rate}]`, for example:
`"BlkioDeviceReadBps": [{"Path": "/dev/sda", "Rate": "1024"}]`
- **BlkioDeviceWriteBps** - Limit write rate (bytes per second) to a device in the form of: `"BlkioDeviceWriteBps": [{"Path": "device_path", "Rate": rate}]`, for example:
`"BlkioDeviceWriteBps": [{"Path": "/dev/sda", "Rate": "1024"}]`
- **BlkioDeviceReadIOps** - Limit read rate (IO per second) from a device in the form of: `"BlkioDeviceReadIOps": [{"Path": "device_path", "Rate": rate}]`, for example:
`"BlkioDeviceReadIOps": [{"Path": "/dev/sda", "Rate": "1000"}]`
- **BlkioDeviceWriteIOps** - Limit write rate (IO per second) to a device in the form of: `"BlkioDeviceWriteIOps": [{"Path": "device_path", "Rate": rate}]`, for example:
`"BlkioDeviceWriteIOps": [{"Path": "/dev/sda", "Rate": "1000"}]`
- **MemorySwappiness** - Tune a container's memory swappiness behavior. Accepts an integer between 0 and 100.
- **OomKillDisable** - Boolean value, whether to disable OOM Killer for the container or not.
- **OomScoreAdj** - An integer value containing the score given to the container in order to tune OOM killer preferences.
- **PortBindings** - A map of exposed container ports and the host port they
should map to. A JSON object in the form
`{ <port>/<protocol>: [{ "HostPort": "<port>" }] }`
@@ -1379,7 +1389,7 @@ following section.
`GET /containers/(id or name)/archive`
Get an tar archive of a resource in the filesystem of container `id`.
Get a tar archive of a resource in the filesystem of container `id`.
Query Parameters:
@@ -1656,14 +1666,14 @@ Query Parameters:
}
}
This object maps the hostname of a registry to an object containing the
"username" and "password" for that registry. Multiple registries may
be specified as the build may be based on an image requiring
authentication to pull from any arbitrary registry. Only the registry
domain name (and port if not the default "443") are required. However
(for legacy reasons) the "official" Docker, Inc. hosted registry must
be specified with both a "https://" prefix and a "/v1/" suffix even
though Docker will prefer to use the v2 registry API.
Status Codes:
@@ -2349,49 +2359,157 @@ Docker networks report the following events:
HTTP/1.1 200 OK
Content-Type: application/json
Server: Docker/1.10.0 (linux)
Date: Fri, 29 Apr 2016 15:18:06 GMT
Transfer-Encoding: chunked
[
{
"action": "pull",
"type": "image",
"actor": {
"id": "busybox:latest",
"attributes": {}
}
"time": 1442421700,
"timeNano": 1442421700598988358
},
{
"action": "create",
"type": "container",
"actor": {
"id": "5745704abe9caa5",
"attributes": {"image": "busybox"}
}
"time": 1442421716,
"timeNano": 1442421716853979870
},
{
"action": "attach",
"type": "container",
"actor": {
"id": "5745704abe9caa5",
"attributes": {"image": "busybox"}
}
"time": 1442421716,
"timeNano": 1442421716894759198
},
{
"action": "start",
"type": "container",
"actor": {
"id": "5745704abe9caa5",
"attributes": {"image": "busybox"}
}
"time": 1442421716,
"timeNano": 1442421716983607193
}
]
{
"status": "pull",
"id": "alpine:latest",
"Type": "image",
"Action": "pull",
"Actor": {
"ID": "alpine:latest",
"Attributes": {
"name": "alpine"
}
},
"time": 1461943101,
"timeNano": 1461943101301854122
}
{
"status": "create",
"id": "ede54ee1afda366ab42f824e8a5ffd195155d853ceaec74a927f249ea270c743",
"from": "alpine",
"Type": "container",
"Action": "create",
"Actor": {
"ID": "ede54ee1afda366ab42f824e8a5ffd195155d853ceaec74a927f249ea270c743",
"Attributes": {
"com.example.some-label": "some-label-value",
"image": "alpine",
"name": "my-container"
}
},
"time": 1461943101,
"timeNano": 1461943101381709551
}
{
"status": "attach",
"id": "ede54ee1afda366ab42f824e8a5ffd195155d853ceaec74a927f249ea270c743",
"from": "alpine",
"Type": "container",
"Action": "attach",
"Actor": {
"ID": "ede54ee1afda366ab42f824e8a5ffd195155d853ceaec74a927f249ea270c743",
"Attributes": {
"com.example.some-label": "some-label-value",
"image": "alpine",
"name": "my-container"
}
},
"time": 1461943101,
"timeNano": 1461943101383858412
}
{
"Type": "network",
"Action": "connect",
"Actor": {
"ID": "7dc8ac97d5d29ef6c31b6052f3938c1e8f2749abbd17d1bd1febf2608db1b474",
"Attributes": {
"container": "ede54ee1afda366ab42f824e8a5ffd195155d853ceaec74a927f249ea270c743",
"name": "bridge",
"type": "bridge"
}
},
"time": 1461943101,
"timeNano": 1461943101394865557
}
{
"status": "start",
"id": "ede54ee1afda366ab42f824e8a5ffd195155d853ceaec74a927f249ea270c743",
"from": "alpine",
"Type": "container",
"Action": "start",
"Actor": {
"ID": "ede54ee1afda366ab42f824e8a5ffd195155d853ceaec74a927f249ea270c743",
"Attributes": {
"com.example.some-label": "some-label-value",
"image": "alpine",
"name": "my-container"
}
},
"time": 1461943101,
"timeNano": 1461943101607533796
}
{
"status": "resize",
"id": "ede54ee1afda366ab42f824e8a5ffd195155d853ceaec74a927f249ea270c743",
"from": "alpine",
"Type": "container",
"Action": "resize",
"Actor": {
"ID": "ede54ee1afda366ab42f824e8a5ffd195155d853ceaec74a927f249ea270c743",
"Attributes": {
"com.example.some-label": "some-label-value",
"height": "46",
"image": "alpine",
"name": "my-container",
"width": "204"
}
},
"time": 1461943101,
"timeNano": 1461943101610269268
}
{
"status": "die",
"id": "ede54ee1afda366ab42f824e8a5ffd195155d853ceaec74a927f249ea270c743",
"from": "alpine",
"Type": "container",
"Action": "die",
"Actor": {
"ID": "ede54ee1afda366ab42f824e8a5ffd195155d853ceaec74a927f249ea270c743",
"Attributes": {
"com.example.some-label": "some-label-value",
"exitCode": "0",
"image": "alpine",
"name": "my-container"
}
},
"time": 1461943105,
"timeNano": 1461943105079144137
}
{
"Type": "network",
"Action": "disconnect",
"Actor": {
"ID": "7dc8ac97d5d29ef6c31b6052f3938c1e8f2749abbd17d1bd1febf2608db1b474",
"Attributes": {
"container": "ede54ee1afda366ab42f824e8a5ffd195155d853ceaec74a927f249ea270c743",
"name": "bridge",
"type": "bridge"
}
},
"time": 1461943105,
"timeNano": 1461943105230860245
}
{
"status": "destroy",
"id": "ede54ee1afda366ab42f824e8a5ffd195155d853ceaec74a927f249ea270c743",
"from": "alpine",
"Type": "container",
"Action": "destroy",
"Actor": {
"ID": "ede54ee1afda366ab42f824e8a5ffd195155d853ceaec74a927f249ea270c743",
"Attributes": {
"com.example.some-label": "some-label-value",
"image": "alpine",
"name": "my-container"
}
},
"time": 1461943105,
"timeNano": 1461943105338056026
}
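One way to watch a stream like the one above from a shell is to read the endpoint straight off the daemon's Unix socket. The socket path below is the default; adjust it if your daemon listens elsewhere:
```bash
# Stream events as they happen; --no-buffer keeps curl from batching output.
curl -s --no-buffer --unix-socket /var/run/docker.sock \
     http://localhost/events
```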
Query Parameters:
@@ -2952,11 +3070,17 @@ Content-Type: application/json
"Name":"isolated_nw",
"Driver":"bridge",
"IPAM":{
"Config":[{
"Subnet":"172.20.0.0/16",
"IPRange":"172.20.10.0/24",
"Gateway":"172.20.10.11"
}],
"Config":[
{
"Subnet":"172.20.0.0/16",
"IPRange":"172.20.10.0/24",
"Gateway":"172.20.10.11"
},
{
"Subnet":"2001:db8:abcd::/64",
"Gateway":"2001:db8:abcd::1011"
}
],
"Options": {
"foo": "bar"
}

View File

@@ -296,6 +296,7 @@ Create a container
"MemorySwappiness": 60,
"OomKillDisable": false,
"OomScoreAdj": 500,
"PidsLimit": -1,
"PortBindings": { "22/tcp": [{ "HostPort": "11022" }] },
"PublishAllPorts": false,
"Privileged": false,
@@ -317,6 +318,17 @@ Create a container
"CgroupParent": "",
"VolumeDriver": "",
"ShmSize": 67108864
},
"NetworkingConfig": {
"EndpointsConfig": {
"isolated_nw" : {
"IPAMConfig": {
"IPv4Address":"172.20.30.33",
"IPv6Address":"2001:db8:abcd::3033"
},
"Links":["container_1", "container_2"],
"Aliases":["server_x", "server_y"]
}
}
}
@@ -337,32 +349,6 @@ Json Parameters:
- **Domainname** - A string value containing the domain name to use
for the container.
- **User** - A string value specifying the user inside the container.
- **Memory** - Memory limit in bytes.
- **MemorySwap** - Total memory limit (memory + swap); set `-1` to enable unlimited swap.
You must use this with `memory` and make the swap value larger than `memory`.
- **MemoryReservation** - Memory soft limit in bytes.
- **KernelMemory** - Kernel memory limit in bytes.
- **CpuShares** - An integer value containing the container's CPU Shares
(ie. the relative weight vs other containers).
- **CpuPeriod** - The length of a CPU period in microseconds.
- **CpuQuota** - Microseconds of CPU time that the container can get in a CPU period.
- **Cpuset** - Deprecated please don't use. Use `CpusetCpus` instead.
- **CpusetCpus** - String value containing the `cgroups CpusetCpus` to use.
- **CpusetMems** - Memory nodes (MEMs) in which to allow execution (0-3, 0,1). Only effective on NUMA systems.
- **BlkioWeight** - Block IO weight (relative weight) accepts a weight value between 10 and 1000.
- **BlkioWeightDevice** - Block IO weight (relative device weight) in the form of: `"BlkioWeightDevice": [{"Path": "device_path", "Weight": weight}]`
- **BlkioDeviceReadBps** - Limit read rate (bytes per second) from a device in the form of: `"BlkioDeviceReadBps": [{"Path": "device_path", "Rate": rate}]`, for example:
`"BlkioDeviceReadBps": [{"Path": "/dev/sda", "Rate": "1024"}]`
- **BlkioDeviceWriteBps** - Limit write rate (bytes per second) to a device in the form of: `"BlkioDeviceWriteBps": [{"Path": "device_path", "Rate": rate}]`, for example:
`"BlkioDeviceWriteBps": [{"Path": "/dev/sda", "Rate": "1024"}]`
- **BlkioDeviceReadIOps** - Limit read rate (IO per second) from a device in the form of: `"BlkioDeviceReadIOps": [{"Path": "device_path", "Rate": rate}]`, for example:
`"BlkioDeviceReadIOps": [{"Path": "/dev/sda", "Rate": "1000"}]`
- **BlkioDeviceWriteIOps** - Limit write rate (IO per second) to a device in the form of: `"BlkioDeviceWriteIOps": [{"Path": "device_path", "Rate": rate}]`, for example:
`"BlkioDeviceWriteIOps": [{"Path": "/dev/sda", "Rate": "1000"}]`
- **MemorySwappiness** - Tune a container's memory swappiness behavior. Accepts an integer between 0 and 100.
- **OomKillDisable** - Boolean value, whether to disable OOM Killer for the container or not.
- **OomScoreAdj** - An integer value containing the score given to the container in order to tune OOM killer preferences.
- **PidsLimit** - Tune a container's pids limit. Set -1 for unlimited.
- **AttachStdin** - Boolean value, attaches to `stdin`.
- **AttachStdout** - Boolean value, attaches to `stdout`.
- **AttachStderr** - Boolean value, attaches to `stderr`.
@@ -385,13 +371,44 @@ Json Parameters:
`"ExposedPorts": { "<port>/<tcp|udp>: {}" }`
- **StopSignal** - Signal to stop a container as a string or unsigned integer. `SIGTERM` by default.
- **HostConfig**
- **Binds** A list of volume bindings for this container. Each volume binding is a string in one of these forms:
+ `host_path:container_path` to bind-mount a host path into the container
+ `host_path:container_path:ro` to make the bind-mount read-only inside the container.
+ `volume_name:container_path` to bind-mount a volume managed by a volume plugin into the container.
+ `volume_name:container_path:ro` to make the bind mount read-only inside the container.
- **Binds** A list of volume bindings for this container. Each volume binding is a string in one of these forms:
- `host_path:container_path[:options]` to bind-mount a host path into the container
- `volume_name:container_path[:options]` to bind-mount a volume managed by a volume driver into the container.
`options` is a comma-delimited list of:
- `[ro|rw]` mounts a volume read-only or read-write, respectively. If omitted or set to `rw`, volumes are mounted read-write.
- `[z|Z]` specifies that multiple containers can read and write to the same volume.
- `z`: a _shared_ content label is applied to the content. This label indicates that multiple containers can share the volume content, for both reading and writing.
- `Z`: a _private unshared_ label is applied to the content. This label indicates that only the current container can use a private volume. Labeling systems such as SELinux require proper labels to be placed on volume content that is mounted into a container. Without a label, the security system can prevent a container's processes from using the content. By default, the labels set by the host operating system are not modified.
- `[[r]shared|[r]slave|[r]private]` specifies mount [propagation behavior](https://www.kernel.org/doc/Documentation/filesystems/sharedsubtree.txt). This only applies to bind-mounted volumes, not internal volumes or named volumes. Mount propagation requires the source mount point (the location where the source directory is mounted in the host operating system) to have the correct propagation properties. For shared volumes, the source mount point must be set to `shared`. For slave volumes, the mount must be set to either `shared` or `slave`.
- `nocopy` disables automatic copying of data from the container path to the volume. The `nocopy` flag only applies to named volumes.
- **Links** - A list of links for the container. Each link entry should be
in the form of `container_name:alias`.
- **Memory** - Memory limit in bytes.
- **MemorySwap** - Total memory limit (memory + swap); set `-1` to enable unlimited swap.
You must use this with `memory` and make the swap value larger than `memory`.
- **MemoryReservation** - Memory soft limit in bytes.
- **KernelMemory** - Kernel memory limit in bytes.
- **CpuShares** - An integer value containing the container's CPU Shares
(ie. the relative weight vs other containers).
- **CpuPeriod** - The length of a CPU period in microseconds.
- **CpuQuota** - Microseconds of CPU time that the container can get in a CPU period.
- **CpusetCpus** - String value containing the `cgroups CpusetCpus` to use.
- **CpusetMems** - Memory nodes (MEMs) in which to allow execution (0-3, 0,1). Only effective on NUMA systems.
- **BlkioWeight** - Block IO weight (relative weight) accepts a weight value between 10 and 1000.
- **BlkioWeightDevice** - Block IO weight (relative device weight) in the form of: `"BlkioWeightDevice": [{"Path": "device_path", "Weight": weight}]`
- **BlkioDeviceReadBps** - Limit read rate (bytes per second) from a device in the form of: `"BlkioDeviceReadBps": [{"Path": "device_path", "Rate": rate}]`, for example:
`"BlkioDeviceReadBps": [{"Path": "/dev/sda", "Rate": "1024"}]`
- **BlkioDeviceWriteBps** - Limit write rate (bytes per second) to a device in the form of: `"BlkioDeviceWriteBps": [{"Path": "device_path", "Rate": rate}]`, for example:
`"BlkioDeviceWriteBps": [{"Path": "/dev/sda", "Rate": "1024"}]`
- **BlkioDeviceReadIOps** - Limit read rate (IO per second) from a device in the form of: `"BlkioDeviceReadIOps": [{"Path": "device_path", "Rate": rate}]`, for example:
`"BlkioDeviceReadIOps": [{"Path": "/dev/sda", "Rate": "1000"}]`
- **BlkioDeviceWriteIOps** - Limit write rate (IO per second) to a device in the form of: `"BlkioDeviceWriteIOps": [{"Path": "device_path", "Rate": rate}]`, for example:
`"BlkioDeviceWriteIOps": [{"Path": "/dev/sda", "Rate": "1000"}]`
- **MemorySwappiness** - Tune a container's memory swappiness behavior. Accepts an integer between 0 and 100.
- **OomKillDisable** - Boolean value, whether to disable OOM Killer for the container or not.
- **OomScoreAdj** - An integer value containing the score given to the container in order to tune OOM killer preferences.
- **PidsLimit** - Tune a container's pids limit. Set -1 for unlimited.
- **PortBindings** - A map of exposed container ports and the host port they
should map to. A JSON object in the form
`{ <port>/<protocol>: [{ "HostPort": "<port>" }] }`
@@ -506,8 +523,8 @@ Return low-level information on the container `id`
"Tty": false,
"User": "",
"Volumes": {
"/volumes/data": {}
},
"/volumes/data": {}
},
"WorkingDir": "",
"StopSignal": "SIGTERM"
},
@@ -1408,7 +1425,7 @@ following section.
`GET /containers/(id or name)/archive`
Get an tar archive of a resource in the filesystem of container `id`.
Get a tar archive of a resource in the filesystem of container `id`.
Query Parameters:
@@ -1649,7 +1666,7 @@ Query Parameters:
You can provide one or more `t` parameters.
- **remote** A Git repository URI or HTTP/HTTPS URI build source. If the
URI specifies a filename, the file's contents are placed into a file
called `Dockerfile`.
- **q** Suppress verbose build output.
- **nocache** Do not use the cache when building the image.
- **pull** - Attempt to pull the image even if an older image exists locally.
@@ -1667,6 +1684,7 @@ Query Parameters:
variable expansion in other Dockerfile instructions. This is not meant for
passing secret values. [Read more about the buildargs instruction](../../reference/builder.md#arg)
- **shmsize** - Size of `/dev/shm` in bytes. The size must be greater than 0. If omitted the system uses 64MB.
- **labels** JSON map of string pairs for labels to set on the image.
Request Headers:
@@ -1685,14 +1703,14 @@ Query Parameters:
}
}
This object maps the hostname of a registry to an object containing the
"username" and "password" for that registry. Multiple registries may
be specified as the build may be based on an image requiring
authentication to pull from any arbitrary registry. Only the registry
domain name (and port if not the default "443") are required. However
(for legacy reasons) the "official" Docker, Inc. hosted registry must
be specified with both a "https://" prefix and a "/v1/" suffix even
though Docker will prefer to use the v2 registry API.
Status Codes:
@@ -2392,49 +2410,158 @@ Docker networks report the following events:
HTTP/1.1 200 OK
Content-Type: application/json
Server: Docker/1.11.0 (linux)
Date: Fri, 29 Apr 2016 15:18:06 GMT
Transfer-Encoding: chunked
{
"status": "pull",
"id": "alpine:latest",
"Type": "image",
"Action": "pull",
"Actor": {
"ID": "alpine:latest",
"Attributes": {
"name": "alpine"
}
},
"time": 1461943101,
"timeNano": 1461943101301854122
}
{
"status": "create",
"id": "ede54ee1afda366ab42f824e8a5ffd195155d853ceaec74a927f249ea270c743",
"from": "alpine",
"Type": "container",
"Action": "create",
"Actor": {
"ID": "ede54ee1afda366ab42f824e8a5ffd195155d853ceaec74a927f249ea270c743",
"Attributes": {
"com.example.some-label": "some-label-value",
"image": "alpine",
"name": "my-container"
}
},
"time": 1461943101,
"timeNano": 1461943101381709551
}
{
"status": "attach",
"id": "ede54ee1afda366ab42f824e8a5ffd195155d853ceaec74a927f249ea270c743",
"from": "alpine",
"Type": "container",
"Action": "attach",
"Actor": {
"ID": "ede54ee1afda366ab42f824e8a5ffd195155d853ceaec74a927f249ea270c743",
"Attributes": {
"com.example.some-label": "some-label-value",
"image": "alpine",
"name": "my-container"
}
},
"time": 1461943101,
"timeNano": 1461943101383858412
}
{
"Type": "network",
"Action": "connect",
"Actor": {
"ID": "7dc8ac97d5d29ef6c31b6052f3938c1e8f2749abbd17d1bd1febf2608db1b474",
"Attributes": {
"container": "ede54ee1afda366ab42f824e8a5ffd195155d853ceaec74a927f249ea270c743",
"name": "bridge",
"type": "bridge"
}
},
"time": 1461943101,
"timeNano": 1461943101394865557
}
{
"status": "start",
"id": "ede54ee1afda366ab42f824e8a5ffd195155d853ceaec74a927f249ea270c743",
"from": "alpine",
"Type": "container",
"Action": "start",
"Actor": {
"ID": "ede54ee1afda366ab42f824e8a5ffd195155d853ceaec74a927f249ea270c743",
"Attributes": {
"com.example.some-label": "some-label-value",
"image": "alpine",
"name": "my-container"
}
},
"time": 1461943101,
"timeNano": 1461943101607533796
}
{
"status": "resize",
"id": "ede54ee1afda366ab42f824e8a5ffd195155d853ceaec74a927f249ea270c743",
"from": "alpine",
"Type": "container",
"Action": "resize",
"Actor": {
"ID": "ede54ee1afda366ab42f824e8a5ffd195155d853ceaec74a927f249ea270c743",
"Attributes": {
"com.example.some-label": "some-label-value",
"height": "46",
"image": "alpine",
"name": "my-container",
"width": "204"
}
},
"time": 1461943101,
"timeNano": 1461943101610269268
}
{
"status": "die",
"id": "ede54ee1afda366ab42f824e8a5ffd195155d853ceaec74a927f249ea270c743",
"from": "alpine",
"Type": "container",
"Action": "die",
"Actor": {
"ID": "ede54ee1afda366ab42f824e8a5ffd195155d853ceaec74a927f249ea270c743",
"Attributes": {
"com.example.some-label": "some-label-value",
"exitCode": "0",
"image": "alpine",
"name": "my-container"
}
},
"time": 1461943105,
"timeNano": 1461943105079144137
}
{
"Type": "network",
"Action": "disconnect",
"Actor": {
"ID": "7dc8ac97d5d29ef6c31b6052f3938c1e8f2749abbd17d1bd1febf2608db1b474",
"Attributes": {
"container": "ede54ee1afda366ab42f824e8a5ffd195155d853ceaec74a927f249ea270c743",
"name": "bridge",
"type": "bridge"
}
},
"time": 1461943105,
"timeNano": 1461943105230860245
}
{
"status": "destroy",
"id": "ede54ee1afda366ab42f824e8a5ffd195155d853ceaec74a927f249ea270c743",
"from": "alpine",
"Type": "container",
"Action": "destroy",
"Actor": {
"ID": "ede54ee1afda366ab42f824e8a5ffd195155d853ceaec74a927f249ea270c743",
"Attributes": {
"com.example.some-label": "some-label-value",
"image": "alpine",
"name": "my-container"
}
},
"time": 1461943105,
"timeNano": 1461943105338056026
}
[
{
"action": "pull",
"type": "image",
"actor": {
"id": "busybox:latest",
"attributes": {}
}
"time": 1442421700,
"timeNano": 1442421700598988358
},
{
"action": "create",
"type": "container",
"actor": {
"id": "5745704abe9caa5",
"attributes": {"image": "busybox"}
}
"time": 1442421716,
"timeNano": 1442421716853979870
},
{
"action": "attach",
"type": "container",
"actor": {
"id": "5745704abe9caa5",
"attributes": {"image": "busybox"}
}
"time": 1442421716,
"timeNano": 1442421716894759198
},
{
"action": "start",
"type": "container",
"actor": {
"id": "5745704abe9caa5",
"attributes": {"image": "busybox"}
}
"time": 1442421716,
"timeNano": 1442421716983607193
}
]
Query Parameters:
@@ -2628,7 +2755,7 @@ interactive session with the `exec` command.
**Example response**:
HTTP/1.1 200 OK
Content-Type: vnd.docker.raw-stream
Content-Type: application/vnd.docker.raw-stream
{{ STREAM }}
@@ -2763,7 +2890,11 @@ Create a volume
Content-Type: application/json
{
"Name": "tardis"
"Name": "tardis",
"Labels": {
"com.example.some-label": "some-value",
"com.example.some-other-label": "some-other-value"
}
}
**Example response**:
@@ -2774,7 +2905,11 @@ Create a volume
{
"Name": "tardis",
"Driver": "local",
"Mountpoint": "/var/lib/docker/volumes/tardis"
"Mountpoint": "/var/lib/docker/volumes/tardis",
"Labels": {
"com.example.some-label": "some-value",
"com.example.some-other-label": "some-other-value"
}
}
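A hedged command-line version of the request above, using the daemon's default Unix socket and the same label set as in the example:
```bash
# Create a labeled volume via the Remote API.
curl -s --unix-socket /var/run/docker.sock \
     -H "Content-Type: application/json" \
     -d '{
           "Name": "tardis",
           "Labels": {
             "com.example.some-label": "some-value",
             "com.example.some-other-label": "some-other-value"
           }
         }' \
     http://localhost/volumes/create
```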
Status Codes:
@@ -2788,6 +2923,7 @@ JSON Parameters:
- **Driver** - Name of the volume driver to use. Defaults to `local` for the name.
- **DriverOpts** - A mapping of driver options and values. These options are
passed directly to the driver and are driver specific.
- **Labels** - Labels to set on the volume, specified as a map: `{"key":"value" [,"key2":"value2"]}`
### Inspect a volume
@@ -2805,9 +2941,13 @@ Return low-level information on the volume `name`
Content-Type: application/json
{
"Name": "tardis",
"Driver": "local",
"Mountpoint": "/var/lib/docker/volumes/tardis"
"Name": "tardis",
"Driver": "local",
"Mountpoint": "/var/lib/docker/volumes/tardis/_data",
"Labels": {
"com.example.some-label": "some-value",
"com.example.some-other-label": "some-other-value"
}
}
Status Codes:
@@ -2978,6 +3118,10 @@ Content-Type: application/json
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "1500"
},
"Labels": {
"com.example.some-label": "some-value",
"com.example.some-other-label": "some-other-value"
}
}
```
@@ -3001,19 +3145,38 @@ Content-Type: application/json
{
"Name":"isolated_nw",
"CheckDuplicate":false,
"Driver":"bridge",
"EnableIPv6": false,
"EnableIPv6": true,
"IPAM":{
"Config":[{
"Subnet":"172.20.0.0/16",
"IPRange":"172.20.10.0/24",
"Gateway":"172.20.10.11"
}],
"Config":[
{
"Subnet":"172.20.0.0/16",
"IPRange":"172.20.10.0/24",
"Gateway":"172.20.10.11"
},
{
"Subnet":"2001:db8:abcd::/64",
"Gateway":"2001:db8:abcd::1011"
}
],
"Options": {
"foo": "bar"
}
},
"Internal":true
"Internal":true,
"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "1500"
},
"Labels": {
"com.example.some-label": "some-value",
"com.example.some-other-label": "some-other-value"
}
}
```
@@ -3038,12 +3201,13 @@ Status Codes:
JSON Parameters:
- **Name** - The new network's name. This is a mandatory field.
- **CheckDuplicate** - Requests daemon to check for networks with same name
- **Driver** - Name of the network driver plugin to use. Defaults to `bridge` driver
- **Internal** - Restrict external access to the network
- **IPAM** - Optional custom IP scheme for the network
- **EnableIPv6** - Enable IPv6 on the network
- **Options** - Network specific options to be used by the drivers
- **CheckDuplicate** - Requests daemon to check for networks with same name
- **Labels** - Labels to set on the network, specified as a map: `{"key":"value" [,"key2":"value2"]}`
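A minimal sketch of such a request over the Unix socket (assuming curl >= 7.40; the network name, driver, and labels are illustrative):

```bash
$ curl --unix-socket /var/run/docker.sock \
    -X POST -H "Content-Type: application/json" \
    -d '{"Name": "isolated_nw", "Driver": "bridge", "Labels": {"com.example.some-label": "some-value"}}' \
    http://localhost/networks/create
```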
### Connect a container to a network

View File

@@ -11,24 +11,22 @@ weight = 90
# Docker Remote API client libraries
These libraries have not been tested by the Docker maintainers for
compatibility. Please file issues with the library owners. If you find
more library implementations, please list them in Docker doc bugs and we
will add the libraries here.
These libraries make it easier to build applications on top of the Docker
Remote API with various programming languages. They have not been tested by the
Docker maintainers for compatibility, so if you run into any issues, file them
with the library maintainers.
<table border="1" class="docutils">
<colgroup>
<col width="24%">
<col width="17%">
<col width="29%">
<col width="23%">
<col width="48%">
<col width="11%">
</colgroup>
<thead valign="bottom">
<tr>
<th class="head">Language/Framework</th>
<th class="head">Name</th>
<th class="head">Repository</th>
<th class="head">Status</th>
</tr>
</thead>
<tbody valign = "top">
@@ -36,213 +34,101 @@ will add the libraries here.
<td>C#</td>
<td>Docker.DotNet</td>
<td><a class="reference external" href="https://github.com/ahmetalpbalkan/Docker.DotNet">https://github.com/ahmetalpbalkan/Docker.DotNet</a></td>
<td>Active</td>
</tr>
<tr>
<td>C++</td>
<td>lasote/docker_client</td>
<td><a class="reference external" href="http://www.biicode.com/lasote/docker_client">http://www.biicode.com/lasote/docker_client (Biicode C++ dependency manager)</a></td>
<td>Active</td>
<td><a class="reference external" href="https://github.com/lasote/docker_client">https://github.com/lasote/docker_client</a></td>
</tr>
<tr>
<td>Erlang</td>
<td>erldocker</td>
<td><a class="reference external" href="https://github.com/proger/erldocker">https://github.com/proger/erldocker</a></td>
<td>Active</td>
</tr>
<tr>
<td>Dart</td>
<td>bwu_docker</td>
<td><a class="reference external" href="https://github.com/bwu-dart/bwu_docker">https://github.com/bwu-dart/bwu_docker</a></td>
<td>Active</td>
</tr>
<tr>
<td>Go</td>
<td>engine-api</td>
<td><a class="reference external" href="https://github.com/docker/engine-api">https://github.com/docker/engine-api</a></td>
<td>Active</td>
</tr>
<tr>
<td>Go</td>
<td>go-dockerclient</td>
<td><a class="reference external" href="https://github.com/fsouza/go-dockerclient">https://github.com/fsouza/go-dockerclient</a></td>
<td>Active</td>
</tr>
<tr>
<td>Go</td>
<td>dockerclient</td>
<td><a class="reference external" href="https://github.com/samalba/dockerclient">https://github.com/samalba/dockerclient</a></td>
<td>Active</td>
</tr>
<tr>
<td>Gradle</td>
<td>gradle-docker-plugin</td>
<td><a class="reference external" href="https://github.com/gesellix/gradle-docker-plugin">https://github.com/gesellix/gradle-docker-plugin</a></td>
<td>Active</td>
</tr>
<tr>
<td>Groovy</td>
<td>docker-client</td>
<td><a class="reference external" href="https://github.com/gesellix/docker-client">https://github.com/gesellix/docker-client</a></td>
<td>Active</td>
</tr>
<tr>
<td>Haskell</td>
<td>docker-hs</td>
<td><a class="reference external" href="https://github.com/denibertovic/docker-hs">https://github.com/denibertovic/docker-hs</a></td>
<td>Active</td>
</tr>
<tr>
<td>HTML (Web Components)</td>
<td>docker-elements</td>
<td><a class="reference external" href="https://github.com/kapalhq/docker-elements">https://github.com/kapalhq/docker-elements</a></td>
<td>Active</td>
</tr>
<tr>
<td>Java</td>
<td>docker-java</td>
<td><a class="reference external" href="https://github.com/docker-java/docker-java">https://github.com/docker-java/docker-java</a></td>
<td>Active</td>
</tr>
<tr>
<td>Java</td>
<td>docker-client</td>
<td><a class="reference external" href="https://github.com/spotify/docker-client">https://github.com/spotify/docker-client</a></td>
<td>Active</td>
</tr>
<tr>
<td>Java</td>
<td>jclouds-docker</td>
<td><a class="reference external" href="https://github.com/jclouds/jclouds-labs/tree/master/docker">https://github.com/jclouds/jclouds-labs/tree/master/docker</a></td>
<td>Active</td>
</tr>
<tr>
<td>Java</td>
<td>rx-docker-client</td>
<td><a class="reference external" href="https://github.com/shekhargulati/rx-docker-client">https://github.com/shekhargulati/rx-docker-client</a></td>
<td>Active</td>
</tr>
<tr>
<td>JavaScript (NodeJS)</td>
<td>dockerizer</td>
<td><a class="reference external" href="https://github.com/kesarion/dockerizer">https://github.com/kesarion/dockerizer</a></td>
<td>Active</td>
</tr>
<tr>
<td>JavaScript (NodeJS)</td>
<td>NodeJS</td>
<td>dockerode</td>
<td><a class="reference external" href="https://github.com/apocas/dockerode">https://github.com/apocas/dockerode</a>
Install via NPM: <cite>npm install dockerode</cite></td>
<td>Active</td>
</tr>
<tr>
<td>JavaScript (NodeJS)</td>
<td>docker.io</td>
<td><a class="reference external" href="https://github.com/appersonlabs/docker.io">https://github.com/appersonlabs/docker.io</a>
Install via NPM: <cite>npm install docker.io</cite></td>
<td>Active</td>
</tr>
<tr>
<td>JavaScript</td>
<td>docker-js</td>
<td><a class="reference external" href="https://github.com/dgoujard/docker-js">https://github.com/dgoujard/docker-js</a></td>
<td>Outdated</td>
</tr>
<tr>
<td>JavaScript (Angular) <strong>WebUI</strong></td>
<td>Albatros</td>
<td><a class="reference external" href="https://github.com/dcylabs/albatros">https://github.com/dcylabs/albatros</a></td>
<td>Active</td>
</tr>
<tr>
<td>JavaScript (Angular) <strong>WebUI</strong></td>
<td>docker-cp</td>
<td><a class="reference external" href="https://github.com/13W/docker-cp">https://github.com/13W/docker-cp</a></td>
<td>Active</td>
</tr>
<tr>
<td>JavaScript (Angular) <strong>WebUI</strong></td>
<td>dockerui</td>
<td><a class="reference external" href="https://github.com/crosbymichael/dockerui">https://github.com/crosbymichael/dockerui</a></td>
<td>Active</td>
</tr>
<tr>
<td>JavaScript (Angular) <strong>WebUI</strong></td>
<td>dockery</td>
<td><a class="reference external" href="https://github.com/lexandro/dockery">https://github.com/lexandro/dockery</a></td>
<td>Active</td>
</tr>
<tr>
<td>Perl</td>
<td>Net::Docker</td>
<td><a class="reference external" href="https://metacpan.org/pod/Net::Docker">https://metacpan.org/pod/Net::Docker</a></td>
<td>Active</td>
<td><a class="reference external" href="https://github.com/apocas/dockerode">https://github.com/apocas/dockerode</a></td>
</tr>
<tr>
<td>Perl</td>
<td>Eixo::Docker</td>
<td><a class="reference external" href="https://github.com/alambike/eixo-docker">https://github.com/alambike/eixo-docker</a></td>
<td>Active</td>
</tr>
<tr>
<td>PHP</td>
<td>Alvine</td>
<td><a class="reference external" href="http://pear.alvine.io/">http://pear.alvine.io/</a> (alpha)</td>
<td>Active</td>
</tr>
<tr>
<td>PHP</td>
<td>Docker-PHP</td>
<td><a class="reference external" href="https://github.com/docker-php/docker-php">https://github.com/docker-php/docker-php</a></td>
<td>Active</td>
</tr>
<tr>
<td>PHP</td>
<td>Docker-PHP-Client</td>
<td><a class="reference external" href="https://github.com/jarkt/docker-php-client">https://github.com/jarkt/docker-php-client</a></td>
<td>Active</td>
</tr>
<tr>
<td>Python</td>
<td>docker-py</td>
<td><a class="reference external" href="https://github.com/docker/docker-py">https://github.com/docker/docker-py</a></td>
<td>Active</td>
</tr>
<tr>
<td>Ruby</td>
<td>docker-api</td>
<td><a class="reference external" href="https://github.com/swipely/docker-api">https://github.com/swipely/docker-api</a></td>
<td>Active</td>
</tr>
<tr>
<td>Ruby</td>
<td>docker-client</td>
<td><a class="reference external" href="https://github.com/geku/docker-client">https://github.com/geku/docker-client</a></td>
<td>Outdated</td>
</tr>
<tr>
<td>Rust</td>
<td>docker-rust</td>
<td><a class="reference external" href="https://github.com/abh1nav/docker-rust">https://github.com/abh1nav/docker-rust</a></td>
<td>Active</td>
</tr>
<tr>
<td>Rust</td>
<td>shiplift</td>
<td><a class="reference external" href="https://github.com/softprops/shiplift">https://github.com/softprops/shiplift</a></td>
<td>Active</td>
</tr>
<tr>
<td>Scala</td>
<td>tugboat</td>
<td><a class="reference external" href="https://github.com/softprops/tugboat">https://github.com/softprops/tugboat</a></td>
<td>Active</td>
</tr>
<tr>
<td>Scala</td>
<td>reactive-docker</td>
<td><a class="reference external" href="https://github.com/almoehi/reactive-docker">https://github.com/almoehi/reactive-docker</a></td>
<td>Active</td>
</tr>
</tbody>
</table>

View File

@@ -361,7 +361,16 @@ RUN /bin/bash -c 'source $HOME/.bashrc ; echo $HOME'
> This means that normal shell processing does not happen. For example,
> `RUN [ "echo", "$HOME" ]` will not do variable substitution on `$HOME`.
> If you want shell processing then either use the *shell* form or execute
> a shell directly, for example: `RUN [ "sh", "-c", "echo", "$HOME" ]`.
> a shell directly, for example: `RUN [ "sh", "-c", "echo $HOME" ]`.
>
> **Note**:
> In the *JSON* form, it is necessary to escape backslashes. This is
> particularly relevant on Windows where the backslash is the path separator.
> The following line would otherwise be treated as *shell* form due to not
> being valid JSON, and fail in an unexpected way:
> `RUN ["c:\windows\system32\tasklist.exe"]`
> The correct syntax for this example is:
> `RUN ["c:\\windows\\system32\\tasklist.exe"]`
The cache for `RUN` instructions isn't invalidated automatically during
the next build. The cache for an instruction like
@@ -834,7 +843,7 @@ does some more work:
# USE the trap if you need to also do manual cleanup after the service is stopped,
# or need to start multiple services in the one container
trap "echo TRAPed signal" HUP INT QUIT KILL TERM
trap "echo TRAPed signal" HUP INT QUIT TERM
# start service in background here
/usr/sbin/apachectl start
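A fuller sketch of how such an entrypoint script might continue (the shutdown and cleanup steps are illustrative):

```bash
#!/bin/bash

# USE the trap if you need to also do manual cleanup after the service is stopped,
# or need to start multiple services in the one container
trap "echo TRAPed signal" HUP INT QUIT TERM

# start service in background here
/usr/sbin/apachectl start

echo "[hit enter key to exit] or run 'docker stop <container>'"
read

# stop service and clean up here
echo "stopping apache"
/usr/sbin/apachectl stop

echo "exited $0"
```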

View File

@@ -28,7 +28,7 @@ detached process.
To stop a container, use `CTRL-c`. This key sequence sends `SIGKILL` to the
container. If `--sig-proxy` is true (the default), `CTRL-c` sends a `SIGINT` to
the container. You can detach from a container and leave it running using the
using `CTRL-p CTRL-q` key sequence.
`CTRL-p CTRL-q` key sequence.
> **Note:**
> A process running as PID 1 inside a container is treated specially by
@@ -39,12 +39,21 @@ using `CTRL-p CTRL-q` key sequence.
It is forbidden to redirect the standard input of a `docker attach` command
while attaching to a tty-enabled container (i.e.: launched with `-t`).
While a client is connected to a container's stdio using `docker attach`, Docker
uses a ~1MB memory buffer to maximize the throughput of the application. If
this buffer is filled, the speed of the API connection will start to have an
effect on the process output writing speed. This is similar to other
applications like SSH. Because of this, it is not recommended to run
performance critical applications that generate a lot of output in the
foreground over a slow client connection. Instead, users should use the
`docker logs` command to get access to the logs.
## Override the detach sequence
If you want, you can configure a override the Docker key sequence for detach.
This is is useful if the Docker default sequence conflicts with key squence you
use for other applications. There are two ways to defines a your own detach key
If you want, you can override the Docker key sequence for detach. This is
useful if the Docker default sequence conflicts with a key sequence you use
for other applications. There are two ways to define your own detach key
sequence, as a per-container override or as a configuration property on your
entire configuration.
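For example, a minimal sketch of both approaches (the key sequence and container name are illustrative):

```bash
# Per-container override:
$ docker attach --detach-keys="ctrl-a,d" my-container

# Or set it for every container via the client configuration file:
$ cat ~/.docker/config.json
{
    "detachKeys": "ctrl-a,d"
}
```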

View File

@@ -79,7 +79,11 @@ Build Syntax Suffix | Commit Used | Build Context Used
Instead of specifying a context, you can pass a single Dockerfile in the `URL`
or pipe the file in via `STDIN`. To pipe a Dockerfile from `STDIN`:
docker build - < Dockerfile
$ docker build - < Dockerfile
With PowerShell on Windows, you can run:
Get-Content Dockerfile | docker build -
If you use STDIN or specify a `URL`, the system places the contents into a file
called `Dockerfile`, and any `-f`, `--file` option is ignored. In this
@@ -298,6 +302,9 @@ accessed like regular environment variables in the `RUN` instruction of the
Dockerfile. Also, these values don't persist in the intermediate or final images
like `ENV` values do.
Using this flag will not alter the output you see when the `ARG` lines from the
Dockerfile are echoed during the build process.
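For example, a minimal sketch (assuming the Dockerfile declares `ARG HTTP_PROXY`):

```bash
$ docker build --build-arg HTTP_PROXY=http://10.20.30.2:1234 .
```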
For detailed information on using `ARG` and `ENV` instructions, see the
[Dockerfile reference](../builder.md).

View File

@@ -81,7 +81,17 @@ you must be explicit with a relative or absolute path, for example:
`/path/to/file:name.txt` or `./file:name.txt`
It is not possible to copy certain system files such as resources under
`/proc`, `/sys`, `/dev`, and mounts created by the user in the container.
`/proc`, `/sys`, `/dev`, [tmpfs](run.md#mount-tmpfs-tmpfs), and mounts created by
the user in the container. However, you can still copy such files by manually
running `tar` in `docker exec`. For example (consider `SRC_PATH` and `DEST_PATH`
are directories):
$ docker exec foo tar Ccf $(dirname SRC_PATH) - $(basename SRC_PATH) | tar Cxf DEST_PATH -
or
$ tar Ccf $(dirname SRC_PATH) - $(basename SRC_PATH) | docker exec -i foo tar Cxf DEST_PATH -
Using `-` as the `SRC_PATH` streams the contents of `STDIN` as a tar archive.
The command extracts the content of the tar to the `DEST_PATH` in container's
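For example, a sketch of streaming a local file into a running container named `foo` (the file name is illustrative):

```bash
$ tar -cf - notes.txt | docker cp - foo:/tmp
```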

View File

@@ -1,6 +1,7 @@
<!--[metadata]>
+++
title = "daemon"
aliases = ["/engine/reference/commandline/dockerd/", "/engine/reference/commandline/dockerd.md"]
description = "The daemon command description and usage"
keywords = ["container, daemon, runtime"]
[menu.main]
@@ -88,7 +89,7 @@ membership.
If you need to access the Docker daemon remotely, you need to enable the `tcp`
Socket. Beware that the default setup provides un-encrypted and
un-authenticated direct access to the Docker daemon - and should be secured
either using the [built in HTTPS encrypted socket](../../security/https/), or by
either using the [built in HTTPS encrypted socket](../../security/https.md), or by
putting a secure web proxy in front of it. You can listen on port `2375` on all
network interfaces with `-H tcp://0.0.0.0:2375`, or on a particular network
interface using its IP address: `-H tcp://192.168.59.103:2375`. It is
@@ -195,17 +196,17 @@ options for `zfs` start with `zfs`.
to create and manage the thin-pool volume. This volume is then handed to Docker
to exclusively create snapshot volumes needed for images and containers.
Managing the thin-pool outside of Docker makes for the most feature-rich
Managing the thin-pool outside of Engine makes for the most feature-rich
method of having Docker utilize device mapper thin provisioning as the
backing storage for Docker's containers. The highlights of the lvm-based
backing storage for Docker containers. The highlights of the lvm-based
thin-pool management feature include: automatic or interactive thin-pool
resize support, dynamically changing thin-pool features, automatic thinp
metadata checking when lvm activates the thin-pool, etc.
As a fallback if no thin pool is provided, loopback files will be
As a fallback if no thin pool is provided, loopback files are
created. Loopback is very slow, but can be used without any
pre-configuration of storage. It is strongly recommended that you do
not use loopback in production. Ensure your Docker daemon has a
not use loopback in production. Ensure your Engine daemon has a
`--storage-opt dm.thinpooldev` argument provided.
Example use:
@@ -441,29 +442,33 @@ options for `zfs` start with `zfs`.
* `dm.min_free_space`
Specifies the min free space percent in thin pool require for new device
Specifies the min free space percent in a thin pool required for new device
creation to succeed. This check applies to both free data space as well
as free metadata space. Valid values are from 0% - 99%. Value 0% disables
free space checking logic. If user does not specify a value for this optoin,
then default value for this option is 10%.
free space checking logic. If user does not specify a value for this option,
the Engine uses a default value of 10%.
Whenever a new thin pool device is created (during docker pull or
during container creation), docker will check minimum free space is
available as specified by this parameter. If that is not the case, then
device creation will fail and docker operation will fail.
Whenever a new a thin pool device is created (during `docker pull` or during
container creation), the Engine checks if the minimum free space is
available. If sufficient space is unavailable, then device creation fails
and any relevant `docker` operation fails.
One will have to create more free space in thin pool to recover from the
error. Either delete some of the images and containers from thin pool and
create free space or add more storage to thin pool.
To recover from this error, you must create more free space in the thin pool.
You can create free space by deleting some images and containers from the thin
pool. You can also add more storage to the thin pool.
For lvm thin pool, one can add more storage to volume group container thin
pool and that should automatically resolve it. If loop devices are being
used, then stop docker, grow the size of loop files and restart docker and
that should resolve the issue.
To add more space to an LVM (logical volume management) thin pool, just add
more storage to the volume group container thin pool; this should automatically
resolve any errors. If your configuration uses loop devices, then stop the
Engine daemon, grow the size of loop files and restart the daemon to resolve
the issue.
Example use:
$ docker daemon --storage-opt dm.min_free_space=10%
```bash
$ docker daemon --storage-opt dm.min_free_space=10%
```
Currently supported options of `zfs`:
@@ -565,9 +570,9 @@ system's list of trusted CAs instead of enabling `--insecure-registry`.
Enabling `--disable-legacy-registry` forces a docker daemon to only interact with registries which support the V2 protocol. Specifically, the daemon will not attempt `push`, `pull` and `login` to v1 registries. The exception to this is `search` which can still be performed on v1 registries.
## Running a Docker daemon behind a HTTPS_PROXY
## Running a Docker daemon behind an HTTPS_PROXY
When running inside a LAN that uses a `HTTPS` proxy, the Docker Hub
When running inside a LAN that uses an `HTTPS` proxy, the Docker Hub
certificates will be replaced by the proxy's certificates. These certificates
need to be added to your Docker host's configuration:
@@ -884,7 +889,7 @@ This is a full example of the allowed configuration options in the file:
"exec-opts": [],
"exec-root": "",
"storage-driver": "",
"storage-opts": "",
"storage-opts": [],
"labels": [],
"log-driver": "",
"log-opts": [],
@@ -892,7 +897,7 @@ This is a full example of the allowed configuration options in the file:
"pidfile": "",
"graph": "",
"cluster-store": "",
"cluster-store-opts": [],
"cluster-store-opts": {},
"cluster-advertise": "",
"debug": true,
"hosts": [],
@@ -952,3 +957,59 @@ has been provided in flags and `cluster-advertise` not, `cluster-advertise`
can be added in the configuration file without being accompanied by `--cluster-store`.
Configuration reload will log a warning message if it detects a change in
previously configured cluster configurations.
## Running multiple daemons
> **Note:** Running multiple daemons on a single host is considered "experimental". You should be aware of
> unsolved problems. This solution may not work properly in some cases. Solutions are currently under development
> and will be delivered in the near future.
This section describes how to run multiple Docker daemons on a single host. To
run multiple daemons, you must configure each daemon so that it does not
conflict with other daemons on the same host. You can set these options either
by providing them as flags, or by using a [daemon configuration file](#daemon-configuration-file).
The following daemon options must be configured for each daemon:
```bash
-b, --bridge= Attach containers to a network bridge
--exec-root=/var/run/docker Root of the Docker execdriver
-g, --graph=/var/lib/docker Root of the Docker runtime
-p, --pidfile=/var/run/docker.pid Path to use for daemon PID file
-H, --host=[] Daemon socket(s) to connect to
--config-file=/etc/docker/daemon.json Daemon configuration file
--tlscacert="~/.docker/ca.pem" Trust certs signed only by this CA
--tlscert="~/.docker/cert.pem" Path to TLS certificate file
--tlskey="~/.docker/key.pem" Path to TLS key file
```
When your daemons use different values for these flags, you can run them on the same host without any problems.
It is very important to properly understand the meaning of those options and to use them correctly.
- The `-b, --bridge=` flag defaults to the `docker0` bridge network, which is created automatically when you install Docker.
If you are not using the default, you must create and configure the bridge manually, or just set it to 'none': `--bridge=none`
- `--exec-root` is the path where the container state is stored. The default value is `/var/run/docker`. Specify the path for
your running daemon here.
- `--graph` is the path where images are stored. The default value is `/var/lib/docker`. To avoid any conflict with other daemons
set this parameter separately for each daemon.
- `-p, --pidfile=/var/run/docker.pid` is the path where the process ID of the daemon is stored. Specify the path for your
pid file here.
- `--host=[]` specifies where the Docker daemon will listen for client connections. If unspecified, it defaults to `/var/run/docker.sock`.
- `--config-file=/etc/docker/daemon.json` is the path where configuration file is stored. You can use it instead of
daemon flags. Specify the path for each daemon.
- `--tls*` Docker daemon supports `--tlsverify` mode that enforces encrypted and authenticated remote connections.
The `--tls*` options enable use of specific certificates for individual daemons.
Example script for a separate “bootstrap” instance of the Docker daemon without network:
```bash
$ docker daemon \
-H unix:///var/run/docker-bootstrap.sock \
-p /var/run/docker-bootstrap.pid \
--iptables=false \
--ip-masq=false \
--bridge=none \
--graph=/var/lib/docker-bootstrap \
--exec-root=/var/run/docker-bootstrap
```

View File

@@ -101,7 +101,7 @@ disconnect` command.
When you create a network, Engine creates a non-overlapping subnetwork for the network by default. This subnetwork is not a subdivision of an existing network. It is purely for ip-addressing purposes. You can override this default and specify subnetwork values directly using the `--subnet` option. On a `bridge` network you can only create a single subnet:
```bash
docker network create -d --subnet=192.168.0.0/16
docker network create --driver=bridge --subnet=192.168.0.0/16 br0
```
Additionally, you can also specify the `--gateway`, `--ip-range`, and `--aux-address` options.

View File

@@ -23,5 +23,5 @@ the process is unaware, and unable to capture, that it is being suspended,
and subsequently resumed.
See the
[cgroups freezer documentation](https://www.kernel.org/doc/Documentation/cgroups/freezer-subsystem.txt)
[cgroups freezer documentation](https://www.kernel.org/doc/Documentation/cgroup-v1/freezer-subsystem.txt)
for further details.

View File

@@ -27,6 +27,15 @@ can `pull` and try without needing to define and configure your own.
To download a particular image, or set of images (i.e., a repository),
use `docker pull`.
## Proxy configuration
If you are behind an HTTP proxy server, for example in corporate settings, you
may need to configure the Docker daemon's proxy settings before it can open a
connection to a registry, using the `HTTP_PROXY`, `HTTPS_PROXY`, and `NO_PROXY`
environment variables. To set these environment variables on a host using
`systemd`, refer to [control and configure Docker with systemd](../../admin/systemd.md#http-proxy).
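For example, a sketch of the systemd drop-in approach referenced above (the proxy address is illustrative):

```bash
$ sudo mkdir -p /etc/systemd/system/docker.service.d
$ sudo tee /etc/systemd/system/docker.service.d/http-proxy.conf <<'EOF'
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:80/"
EOF
$ sudo systemctl daemon-reload
$ sudo systemctl restart docker
```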
## Examples
### Pull an image from Docker Hub

View File

@@ -227,12 +227,12 @@ system's interfaces.
This sets simple (non-array) environmental variables in the container. For
illustration all three
flags are shown here. Where `-e`, `--env` take an environment variable and
value, or if no `=` is provided, then that variable's current value is passed
through (i.e. `$MYVAR1` from the host is set to `$MYVAR1` in the container).
When no `=` is provided and that variable is not defined in the client's
environment then that variable will be removed from the container's list of
environment variables.
All three flags, `-e`, `--env` and `--env-file` can be repeated.
value, or if no `=` is provided, then that variable's current value, set via
`export`, is passed through (i.e. `$MYVAR1` from the host is set to `$MYVAR1`
in the container). When no `=` is provided and that variable is not defined
in the client's environment then that variable will be removed from the
container's list of environment variables. All three flags, `-e`, `--env` and
`--env-file` can be repeated.
Regardless of the order of these three flags, the `--env-file` files are processed
first, and then the `-e`, `--env` flags. This way, the `-e` or `--env` will
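For example, a minimal sketch of this precedence (the variable name is illustrative):

```bash
$ echo 'MYVAR=from-file' > env.list
$ docker run --rm --env-file env.list -e MYVAR=from-flag busybox env | grep MYVAR
MYVAR=from-flag
```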

View File

@@ -20,5 +20,5 @@ The `docker unpause` command uses the cgroups freezer to un-suspend all
processes in a container.
See the
[cgroups freezer documentation](https://www.kernel.org/doc/Documentation/cgroups/freezer-subsystem.txt)
[cgroups freezer documentation](https://www.kernel.org/doc/Documentation/cgroup-v1/freezer-subsystem.txt)
for further details.

View File

@@ -190,6 +190,11 @@ Images using the v2 or later image format have a content-addressable identifier
called a digest. As long as the input used to generate the image is unchanged,
the digest value is predictable and referenceable.
The following example runs a container from the `alpine` image with the
`sha256:9cacb71397b640eca97488cf08582ae4e4068513101088e9f96c9814bfda95e0` digest:
$ docker run alpine@sha256:9cacb71397b640eca97488cf08582ae4e4068513101088e9f96c9814bfda95e0 date
## PID settings (--pid)
--pid="" : Set the PID (Process) Namespace mode for the container,
@@ -291,7 +296,8 @@ you can override this with `--dns`.
By default, the MAC address is generated using the IP address allocated to the
container. You can set the container's MAC address explicitly by providing a
MAC address via the `--mac-address` parameter (format:`12:34:56:78:9a:bc`).
MAC address via the `--mac-address` parameter (format: `12:34:56:78:9a:bc`). Be
aware that Docker does not check if manually specified MAC addresses are unique.
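For example, a minimal sketch (the address matches the documented format):

```bash
$ docker run --rm --mac-address 12:34:56:78:9a:bc busybox cat /sys/class/net/eth0/address
12:34:56:78:9a:bc
```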
Supported networks:
@@ -562,20 +568,18 @@ the exit codes follow the `chroot` standard, see below:
**_126_** if the **_contained command_** cannot be invoked
$ docker run busybox /etc; echo $?
# exec: "/etc": permission denied
docker: Error response from daemon: Contained command could not be invoked
# docker: Error response from daemon: Container command '/etc' could not be invoked.
126
**_127_** if the **_contained command_** cannot be found
$ docker run busybox foo; echo $?
# exec: "foo": executable file not found in $PATH
docker: Error response from daemon: Contained command not found or does not exist
# docker: Error response from daemon: Container command 'foo' not found or does not exist.
127
**_Exit code_** of **_contained command_** otherwise
$ docker run busybox /bin/sh -c 'exit 3'
$ docker run busybox /bin/sh -c 'exit 3'; echo $?
# 3
## Clean up (--rm)
@@ -595,7 +599,7 @@ associated with the container when the container is removed. This is similar
to running `docker rm -v my-container`. Only volumes that are specified without a
name are removed. For example, with
`docker run --rm -v /foo -v awesome:/bar busybox top`, the volume for `/foo` will be removed,
but the volume for `/bar` will not. Volumes inheritted via `--volumes-from` will be removed
but the volume for `/bar` will not. Volumes inherited via `--volumes-from` will be removed
with the same logic -- if the original volume was specified with a name it will **not** be removed.
## Security configuration
@@ -613,15 +617,12 @@ with the same logic -- if the original volume was specified with a name it will
You can override the default labeling scheme for each container by specifying
the `--security-opt` flag. For example, you can specify the MCS/MLS level, a
requirement for MLS systems. Specifying the level in the following command
the `--security-opt` flag. Specifying the level in the following command
allows you to share the same content between containers.
$ docker run --security-opt label=level:s0:c100,c200 -it fedora bash
An MLS example might be:
$ docker run --security-opt label=level:TopSecret -it rhel7 bash
> **Note**: Automatic translation of MLS labels is not currently supported.
To disable the security labeling for this container versus running with the
`--permissive` flag, use the following command:
@@ -849,7 +850,7 @@ limit and "K" the kernel limit. There are three possible ways to set limits:
deployments where the total amount of memory per-cgroup is overcommitted.
Overcommitting kernel memory limits is definitely not recommended, since the
box can still run out of non-reclaimable memory.
In this case, the you can configure K so that the sum of all groups is
In this case, you can configure K so that the sum of all groups is
never greater than the total memory. Then, freely set U at the expense of
the system's service quality.
</td>
@@ -1084,7 +1085,7 @@ By default, Docker containers are "unprivileged" and cannot, for
example, run a Docker daemon inside a Docker container. This is because
by default a container is not allowed to access any devices, but a
"privileged" container is given access to all devices (see
the documentation on [cgroups devices](https://www.kernel.org/doc/Documentation/cgroups/devices.txt)).
the documentation on [cgroups devices](https://www.kernel.org/doc/Documentation/cgroup-v1/devices.txt)).
When the operator executes `docker run --privileged`, Docker will enable
access to all devices on the host, as well as set some configuration
@@ -1332,7 +1333,7 @@ If the operator uses `--link` when starting a new client container in the
default bridge network, then the client container can access the exposed
port via a private networking interface.
If `--link` is used when starting a container in a user-defined network as
described in [*Docker network overview*""](../userguide/networking/index.md)),
described in [*Docker network overview*](../userguide/networking/index.md)),
it will provide a named alias for the container being linked to.
### ENV (environment variables)
@@ -1434,7 +1435,7 @@ The `host-src` can either be an absolute path or a `name` value. If you
supply an absolute path for the `host-dir`, Docker bind-mounts to the path
you specify. If you supply a `name`, Docker creates a named volume by that `name`.
A `name` value must start with start with an alphanumeric character,
A `name` value must start with an alphanumeric character,
followed by `a-z0-9`, `_` (underscore), `.` (period) or `-` (hyphen).
An absolute path starts with a `/` (forward slash).

View File

@@ -20,8 +20,8 @@ Docker automatically loads container profiles. The Docker binary installs
a `docker-default` profile in the `/etc/apparmor.d/docker` file. This profile
is used on containers, _not_ on the Docker Daemon.
A profile for the Docker Engine Daemon exists but it is not currently installed
with the deb packages. If you are interested in the source for the Daemon
A profile for the Docker Engine daemon exists but it is not currently installed
with the `deb` packages. If you are interested in the source for the daemon
profile, it is located in
[contrib/apparmor](https://github.com/docker/docker/tree/master/contrib/apparmor)
in the Docker Engine source repository.
@@ -72,15 +72,15 @@ explicitly specifies the default policy:
$ docker run --rm -it --security-opt apparmor=docker-default hello-world
```
## Loading and Unloading Profiles
## Load and unload profiles
To load a new profile into AppArmor, for use with containers:
To load a new profile into AppArmor for use with containers:
```
```bash
$ apparmor_parser -r -W /path/to/your_profile
```
Then you can run the custom profile with `--security-opt` like so:
Then, run the custom profile with `--security-opt` like so:
```bash
$ docker run --rm -it --security-opt apparmor=your_profile hello-world
@@ -97,39 +97,174 @@ $ apparmor_parser -R /path/to/profile
$ /etc/init.d/apparmor start
```
## Debugging AppArmor
### Resources for writing profiles
### Using `dmesg`
The syntax for file globbing in AppArmor is a bit different from that of some
other globbing implementations. It is highly suggested that you take a look at
the resources below with regard to AppArmor profile syntax.
- [Quick Profile Language](http://wiki.apparmor.net/index.php/QuickProfileLanguage)
- [Globbing Syntax](http://wiki.apparmor.net/index.php/AppArmor_Core_Policy_Reference#AppArmor_globbing_syntax)
## Nginx example profile
In this example, you create a custom AppArmor profile for Nginx. Below is the
custom profile.
```
#include <tunables/global>
profile docker-nginx flags=(attach_disconnected,mediate_deleted) {
#include <abstractions/base>
network inet tcp,
network inet udp,
network inet icmp,
deny network raw,
deny network packet,
file,
umount,
deny /bin/** wl,
deny /boot/** wl,
deny /dev/** wl,
deny /etc/** wl,
deny /home/** wl,
deny /lib/** wl,
deny /lib64/** wl,
deny /media/** wl,
deny /mnt/** wl,
deny /opt/** wl,
deny /proc/** wl,
deny /root/** wl,
deny /sbin/** wl,
deny /srv/** wl,
deny /tmp/** wl,
deny /sys/** wl,
deny /usr/** wl,
audit /** w,
/var/run/nginx.pid w,
/usr/sbin/nginx ix,
deny /bin/dash mrwklx,
deny /bin/sh mrwklx,
deny /usr/bin/top mrwklx,
capability chown,
capability dac_override,
capability setuid,
capability setgid,
capability net_bind_service,
deny @{PROC}/{*,**^[0-9*],sys/kernel/shm*} wkx,
deny @{PROC}/sysrq-trigger rwklx,
deny @{PROC}/mem rwklx,
deny @{PROC}/kmem rwklx,
deny @{PROC}/kcore rwklx,
deny mount,
deny /sys/[^f]*/** wklx,
deny /sys/f[^s]*/** wklx,
deny /sys/fs/[^c]*/** wklx,
deny /sys/fs/c[^g]*/** wklx,
deny /sys/fs/cg[^r]*/** wklx,
deny /sys/firmware/efi/efivars/** rwklx,
deny /sys/kernel/security/** rwklx,
}
```
1. Save the custom profile to disk in the
`/etc/apparmor.d/containers/docker-nginx` file.
The file path in this example is not a requirement. In production, you could
use another.
2. Load the profile.
```bash
$ sudo apparmor_parser -r -W /etc/apparmor.d/containers/docker-nginx
```
3. Run a container with the profile.
To run nginx in detached mode:
```bash
$ docker run --security-opt "apparmor=docker-nginx" \
-p 80:80 -d --name apparmor-nginx nginx
```
4. Exec into the running container
```bash
$ docker exec -it apparmor-nginx bash
```
5. Try some operations to test the profile.
```bash
root@6da5a2a930b9:~# ping 8.8.8.8
ping: Lacking privilege for raw socket.
root@6da5a2a930b9:/# top
bash: /usr/bin/top: Permission denied
root@6da5a2a930b9:~# touch ~/thing
touch: cannot touch 'thing': Permission denied
root@6da5a2a930b9:/# sh
bash: /bin/sh: Permission denied
root@6da5a2a930b9:/# dash
bash: /bin/dash: Permission denied
```
Congrats! You just deployed a container secured with a custom AppArmor profile!
## Debug AppArmor
You can use `dmesg` to debug problems and `aa-status` to check the loaded profiles.
### Use dmesg
Here are some helpful tips for debugging any problems you might be facing with
regard to AppArmor.
AppArmor sends quite verbose messaging to `dmesg`. Usually an AppArmor line
will look like the following:
looks like the following:
```
[ 5442.864673] audit: type=1400 audit(1453830992.845:37): apparmor="ALLOWED" operation="open" profile="/usr/bin/docker" name="/home/jessie/docker/man/man1/docker-attach.1" pid=10923 comm="docker" requested_mask="r" denied_mask="r" fsuid=1000 ouid=0
```
In the above example, the you can see `profile=/usr/bin/docker`. This means the
In the above example, you can see `profile=/usr/bin/docker`. This means the
user has the `docker-engine` (Docker Engine Daemon) profile loaded.
> **Note:** On versions of Ubuntu > 14.04 this is all fine and well, but Trusty
> users might run into some issues when trying to `docker exec`.
Let's look at another log line:
Look at another log line:
```
[ 3256.689120] type=1400 audit(1405454041.341:73): apparmor="DENIED" operation="ptrace" profile="docker-default" pid=17651 comm="docker" requested_mask="receive" denied_mask="receive"
```
This time the profile is `docker-default`, which is run on containers by
default unless in `privileged` mode. It is telling us, that apparmor has denied
`ptrace` in the container. This is great.
default unless in `privileged` mode. This line shows that apparmor has denied
`ptrace` in the container. This is exactly as expected.
### Using `aa-status`
### Use aa-status
If you need to check which profiles are loaded you can use `aa-status`. The
If you need to check which profiles are loaded, you can use `aa-status`. The
output looks like:
```bash
@@ -162,17 +297,17 @@ apparmor module is loaded.
0 processes are unconfined but have a profile defined.
```
In the above output you can tell that the `docker-default` profile running on
various container PIDs is in `enforce` mode. This means AppArmor will actively
block and audit in `dmesg` anything outside the bounds of the `docker-default`
The above output shows that the `docker-default` profile running on various
container PIDs is in `enforce` mode. This means AppArmor is actively blocking
and auditing in `dmesg` anything outside the bounds of the `docker-default`
profile.
The output above also shows the `/usr/bin/docker` (Docker Engine Daemon)
profile is running in `complain` mode. This means AppArmor will _only_ log to
`dmesg` activity outside the bounds of the profile. (Except in the case of
Ubuntu Trusty, where we have seen some interesting behaviors being enforced.)
The output above also shows the `/usr/bin/docker` (Docker Engine daemon) profile
is running in `complain` mode. This means AppArmor _only_ logs to `dmesg`
activity outside the bounds of the profile. (Except in the case of Ubuntu
Trusty, where some interesting behaviors are enforced.)
## Contributing to AppArmor code in Docker
## Contribute Docker's AppArmor code
Advanced users and package managers can find a profile for `/usr/bin/docker`
(Docker Engine Daemon) underneath

View File

@@ -12,7 +12,7 @@ parent = "smn_secure_docker"
# Protect the Docker daemon socket
By default, Docker runs via a non-networked Unix socket. It can also
optionally communicate using a HTTP socket.
optionally communicate using an HTTP socket.
If you need Docker to be reachable via the network in a safe manner, you can
enable TLS by specifying the `tlsverify` flag and pointing Docker's

View File

@@ -0,0 +1,83 @@
<!--[metadata]>
+++
title = "Docker Security Non-events"
description = "Review of security vulnerabilities Docker mitigated"
keywords = ["Docker, Docker documentation, security, security non-events"]
[menu.main]
parent = "smn_secure_docker"
+++
<![end-metadata]-->
# Docker Security Non-events
This page lists security vulnerabilities which Docker mitigated, such that
processes run in Docker containers were never vulnerable to the bug—even before
it was fixed. This assumes containers are run without adding extra capabilities
and are not run as `--privileged`.
The list below is not even remotely complete. Rather, it is a sample of the few
bugs we've actually noticed to have attracted security review and publicly
disclosed vulnerabilities. In all likelihood, the bugs that haven't been
reported far outnumber those that have. Luckily, because Docker's approach is
secure by default through AppArmor, seccomp, and dropping capabilities, it
likely mitigates unknown bugs just as well as it does known ones.
Bugs mitigated:
* [CVE-2013-1956](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2013-1956),
[1957](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2013-1957),
[1958](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2013-1958),
[1959](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2013-1959),
[1979](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2013-1979),
[CVE-2014-4014](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-4014),
[5206](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-5206),
[5207](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-5207),
[7970](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-7970),
[7975](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-7975),
[CVE-2015-2925](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-2925),
[8543](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-8543),
[CVE-2016-3134](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-3134),
[3135](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-3135), etc.:
The introduction of unprivileged user namespaces led to a huge increase in the
attack surface available to unprivileged users by giving such users legitimate
access to previously root-only system calls like `mount()`. All of these CVEs
are examples of security vulnerabilities due to the introduction of user namespaces.
Docker can use user namespaces to set up containers, but then disallows the
process inside the container from creating its own nested namespaces through the
default seccomp profile, rendering these vulnerabilities unexploitable.
* [CVE-2014-0181](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-0181),
[CVE-2015-3339](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-3339):
These are bugs that require the presence of a setuid binary. Docker disables
setuid binaries inside containers via the `NO_NEW_PRIVS` process flag and
other mechanisms.
* [CVE-2014-4699](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-4699):
A bug in `ptrace()` could allow privilege escalation. Docker disables `ptrace()`
inside the container using apparmor, seccomp and by dropping `CAP_PTRACE`.
Three times the layers of protection there!
* [CVE-2014-9529](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2014-9529):
A series of crafted `keyctl()` calls could cause kernel DoS / memory corruption.
Docker disables `keyctl()` inside containers using seccomp.
* [CVE-2015-3214](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-3214),
[4036](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-4036): These are
bugs in common virtualization drivers which could allow a guest OS user to
execute code on the host OS. Exploiting them requires access to virtualization
devices in the guest. Docker hides direct access to these devices when run
without `--privileged`. Interestingly, these seem to be cases where containers
are "more secure" than a VM, going against common wisdom that VMs are
"more secure" than containers.
* [CVE-2016-0728](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-0728):
Use-after-free caused by crafted `keyctl()` calls could lead to privilege
escalation. Docker disables `keyctl()` inside containers using the default
seccomp profile.
* [CVE-2016-2383](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-2383):
A bug in eBPF -- the special in-kernel DSL used to express things like seccomp
filters -- allowed arbitrary reads of kernel memory. The `bpf()` system call
is blocked inside Docker containers using (ironically) seccomp.
Bugs *not* mitigated:
* [CVE-2015-3290](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-3290),
[5157](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-5157): Bugs in
the kernel's non-maskable interrupt handling allowed privilege escalation.
Can be exploited in Docker containers because the `modify_ldt()` system call is
not currently blocked using seccomp.

View File

@@ -99,7 +99,6 @@ the reason each syscall is blocked rather than white-listed.
| `keyctl` | Prevent containers from using the kernel keyring, which is not namespaced. |
| `lookup_dcookie` | Tracing/profiling syscall, which could leak a lot of information on the host. |
| `mbind` | Syscall that modifies kernel memory and NUMA settings. Already gated by `CAP_SYS_NICE`. |
| `modify_ldt` | Old syscall only used in 16-bit code and a potential information leak. |
| `mount` | Deny mounting, already gated by `CAP_SYS_ADMIN`. |
| `move_pages` | Syscall that modifies kernel memory and NUMA settings. |
| `name_to_handle_at` | Sister syscall to `open_by_handle_at`. Already gated by `CAP_SYS_NICE`. |

View File

@@ -52,8 +52,8 @@ How mature is the code providing kernel namespaces and private
networking? Kernel namespaces were introduced [between kernel version
2.6.15 and
2.6.26](http://lxc.sourceforge.net/index.php/about/kernel-namespaces/).
This means that since July 2008 (date of the 2.6.26 release, now 7 years
ago), namespace code has been exercised and scrutinized on a large
This means that since July 2008 (date of the 2.6.26 release),
namespace code has been exercised and scrutinized on a large
number of production systems. And there is more: the design and
inspiration for the namespaces code are even older. Namespaces are
actually an effort to reimplement the features of [OpenVZ](
@@ -106,7 +106,7 @@ arbitrary containers.
For this reason, the REST API endpoint (used by the Docker CLI to
communicate with the Docker daemon) changed in Docker 0.5.2, and now
uses a UNIX socket instead of a TCP socket bound on 127.0.0.1 (the
latter being prone to cross-site-scripting attacks if you happen to run
latter being prone to cross-site request forgery attacks if you happen to run
Docker directly on your local machine, outside of a VM). You can then
use traditional UNIX permission checks to limit access to the control
socket.

View File

@@ -13,15 +13,11 @@ weight=-1
When transferring data among networked systems, *trust* is a central concern. In
particular, when communicating over an untrusted medium such as the internet, it
is critical to ensure the integrity and publisher of all the data a system
operates on. You use Docker to push and pull images (data) to a registry. Content trust
gives you the ability to both verify the integrity and the publisher of all the
is critical to ensure the integrity and the publisher of all the data a system
operates on. You use Docker Engine to push and pull images (data) to a public or private registry. Content trust
gives you the ability to verify both the integrity and the publisher of all the
data received from a registry over any channel.
Content trust is currently only available for users of the public Docker Hub. It
is currently not available for the Docker Trusted Registry or for private
registries.
## Understand trust in Docker
Content trust allows operations with a remote Docker registry to enforce
@@ -82,7 +78,7 @@ desirable, unsigned image tags are "invisible" to them.
![Trust view](images/trust_view.png)
To the consumer who does not enabled content trust, nothing about how they
To the consumer who has not enabled content trust, nothing about how they
work with Docker images changes. Every image is visible regardless of whether it
is signed or not.
@@ -111,7 +107,7 @@ Trust for an image tag is managed through the use of signing keys. A key set is
created when an operation using content trust is first invoked. A key set consists
of the following classes of keys:
- an offline key that is the root of content trust for a image tag
- an offline key that is the root of content trust for an image tag
- repository or tagging keys that sign tags
- server-managed keys such as the timestamp key, which provides freshness
security guarantees for your repository
@@ -127,7 +123,7 @@ The following image depicts the various signing keys and their relationships:
>tag from this repository prior to the loss.
You should backup the root key somewhere safe. Given that it is only required
to create new repositories, it is a good idea to store it offline.
to create new repositories, it is a good idea to store it offline in hardware.
For details on securing, and backing up your keys, make sure you
read how to [manage keys for content trust](trust_key_mng.md).
@@ -198,11 +194,12 @@ When you push your first tagged image with content trust enabled, the `docker`
client recognizes this is your first push and:
- alerts you that it will create a new root key
- requests a passphrase for the key
- requests a passphrase for the root key
- generates a root key in the `~/.docker/trust` directory
- requests a passphrase for the repository key
- generates a repository key for in the `~/.docker/trust` directory
The passphrase you chose for both the root key and your content key-pair
The passphrase you chose for both the root key and your repository key-pair
should be randomly generated and stored in a *password manager*.
> **NOTE**: If you omit the `latest` tag, content trust is skipped. This is true
@@ -267,7 +264,7 @@ Because the tag `docker/cliffs:latest` is not trusted, the `pull` fails.
### Disable content trust for specific operations
A user that wants to disable content trust for a particular operation can use the
A user who wants to disable content trust for a particular operation can use the
`--disable-content-trust` flag. **Warning: this flag disables content trust for
this operation**. With this flag, Docker will ignore content-trust and allow all
operations to be done without verifying any signatures. If we wanted the

View File

@@ -21,7 +21,7 @@ The easiest way to deploy Notary Server is by using Docker Compose. To follow th
docker-compose up -d
For more detailed documentation about how to deploy Notary Server see https://github.com/docker/notary.
For more detailed documentation about how to deploy Notary Server see the [instructions to run a Notary service](/notary/running_a_service.md) as well as https://github.com/docker/notary for more information.
3. Make sure that your Docker or Notary client trusts Notary Server's certificate before you try to interact with the Notary server.
See the instructions for [Docker](../../reference/commandline/cli.md#notary) or

View File

@@ -10,7 +10,7 @@ parent= "smn_content_trust"
# Automation with content trust
Your automation systems that pull or build images can also work with trust. Any automation environment must set `DOCKER_TRUST_ENABLED` either manually or in in a scripted fashion before processing images.
Your automation systems that pull or build images can also work with trust. Any automation environment must set `DOCKER_CONTENT_TRUST` either manually or in a scripted fashion before processing images.
## Bypass requests for passphrases
@@ -43,7 +43,7 @@ Signing and pushing trust metadata
## Building with content trust
You can also build with content trust. Before running the `docker build` command, you should set the environment variable `DOCKER_CONTENT_TRUST` either manually or in in a scripted fashion. Consider the simple Dockerfile below.
You can also build with content trust. Before running the `docker build` command, you should set the environment variable `DOCKER_CONTENT_TRUST` either manually or in a scripted fashion. Consider the simple Dockerfile below.
```Dockerfile
FROM docker/trusttest:latest

View File

@@ -18,7 +18,7 @@ sharing your repository key (a combination of your targets and snapshot keys -
please see "[Manage keys for content trust](trust_key_mng.md)" for more information).
A collaborator can keep their own delegation key private.
The `targest/releases` delegation is currently an optional feature - in order
The `targets/releases` delegation is currently an optional feature - in order
to set up delegations, you must use the Notary CLI:
1. [Download the client](https://github.com/docker/notary/releases) and ensure that it is
@@ -40,7 +40,7 @@ available on your path
For more detailed information about how to use Notary outside of the default
Docker Content Trust use cases, please refer to the
[the Notary CLI documentation](https://docs.docker.com/notary/getting_started/).
[the Notary CLI documentation](/notary/getting_started.md).
Note that when publishing and listing delegation changes using the Notary client,
your Docker Hub credentials are required.
@@ -67,7 +67,7 @@ e is 65537 (0x10001)
They should keep `delegation.key` private - this is what they will use to sign
tags.
Then they need to generate a x509 certificate containing the public key, which is
Then they need to generate an x509 certificate containing the public key, which is
what they will give to you. Here is the command to generate a CSR (certificate
signing request):

View File

@@ -15,7 +15,7 @@ trust makes use of five different types of keys:
| Key | Description |
|---------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| root key | Root of content trust for a image tag. When content trust is enabled, you create the root key once. Also known as the offline key, because it should be kept offline. |
| root key | Root of content trust for an image tag. When content trust is enabled, you create the root key once. Also known as the offline key, because it should be kept offline. |
| targets | This key allows you to sign image tags, to manage delegations including delegated keys or permitted delegation paths. Also known as the repository key, since this key determines what tags can be signed into an image repository. |
| snapshot | This key signs the current collection of image tags, preventing mix and match attacks.
| timestamp | This key allows Docker image repositories to have freshness security guarantees without requiring periodic content refreshes on the client's side. |
@@ -37,7 +37,7 @@ workflow. They need to be
Note: Prior to Docker Engine 1.11, the snapshot key was also generated and stored
locally client-side. [Use the Notary CLI to manage your snapshot key locally
again](https://docs.docker.com/notary/advanced_usage/#rotate-keys) for
again](/notary/advanced_usage.md#rotate-keys) for
repositories created with newer versions of Docker.
## Choosing a passphrase
@@ -64,6 +64,16 @@ Before backing them up, you should `tar` them into an archive:
$ umask 077; tar -zcvf private_keys_backup.tar.gz ~/.docker/trust/private; umask 022
```
## Hardware storage and signing
Docker Content Trust can store and sign with root keys from a Yubikey 4. The
Yubikey is prioritized over keys stored in the filesystem. When you initialize a
new repository with content trust, Docker Engine looks for a root key locally. If a
key is not found and the Yubikey 4 exists, Docker Engine creates a root key in the
Yubikey 4. Please consult the [Notary documentation](/notary/advanced_usage.md#use-a-yubikey) for more details.
Prior to Docker Engine 1.11, this feature was only in the experimental branch.
## Lost keys
If a publisher loses keys, it means losing the ability to sign trusted content for


@@ -21,186 +21,128 @@ overview](content_trust.md).
These instructions assume you are running in Linux or Mac OS X. You can run
this sandbox on a local machine or on a virtual machine. You will need to
have privileges to run docker commands on your local machine or in the VM.
This sandbox requires you to install two Docker tools: Docker Engine >= 1.10.0
and Docker Compose >= 1.6.0. To install the Docker Engine, choose from the
[list of supported platforms](../../installation/index.md). To install
Docker Compose, see the
[detailed instructions here](https://docs.docker.com/compose/install/).
Finally, you'll need to have a text editor installed on your local system or VM.
## What is in the sandbox?
If you are just using trust out-of-the-box you only need your Docker Engine
client and access to the Docker Hub. The sandbox mimics a
production trust environment, and sets up these additional components:
| Container | Description |
|-----------------|---------------------------------------------------------------------------------------------------------------------------------------------|
| trustsandbox | A container with the latest version of Docker Engine and with some preconfigured certificates. This is your sandbox where you can use the `docker` client to test trust operations. |
| Registry server | A local registry service. |
| Notary server | The service that does all the heavy-lifting of managing trust. |
| Notary signer | A service that ensures that your keys are secure. |
| MySQL | The database where all of the trust information will be stored. |
This means you will be running your own content trust (Notary) server and registry.
If you work exclusively with the Docker Hub, you would not need these components.
They are built into the Docker Hub for you. For the sandbox, however, you build
your own entire, mock production environment.
Within the `trustsandbox` container, you interact with your local registry rather
than the Docker Hub. This means your everyday image repositories are not used.
They are protected while you play.
When you play in the sandbox, you'll also create root and repository keys. The
sandbox is configured to store all the keys and files inside the `trustsandbox`
container. Since the keys you create in the sandbox are for play only,
destroying the container destroys them as well.
By using a docker-in-docker image for the `trustsandbox` container, you will also
not pollute your real docker daemon cache with any images you push and pull. The
images will instead be stored in an anonymous volume attached to this container,
and can be destroyed after you destroy the container.
## Build the sandbox
In this section, you'll use Docker Compose to specify how to set up and link together
the `trustsandbox` container, the Notary server, and the Registry server.
1. Create a new `trustsandbox` directory and change into it.
$ mkdir trustsandbox
$ cd trustsandbox
2. Create a file called `docker-compose.yml` with your favorite editor. For example, using vim:
$ touch docker-compose.yml
$ vim docker-compose.yml
3. Add the following to the new file.
    version: "2"
    services:
        notaryserver:
            image: dockersecurity/notary_autobuilds:server-v0.3.0
            volumes:
                - notarycerts:/go/src/github.com/docker/notary/fixtures
            networks:
                - sandbox
            environment:
                - NOTARY_SERVER_STORAGE_TYPE=memory
                - NOTARY_SERVER_TRUST_SERVICE_TYPE=local
        sandboxregistry:
            image: registry:2.4.1
            networks:
                - sandbox
            container_name: sandboxregistry
        trustsandbox:
            image: docker:dind
            networks:
                - sandbox
            volumes:
                - notarycerts:/notarycerts
            privileged: true
            container_name: trustsandbox
            entrypoint: ""
            command: |-
                sh -c '
                    cp /notarycerts/root-ca.crt /usr/local/share/ca-certificates/root-ca.crt &&
                    update-ca-certificates &&
                    dockerd-entrypoint.sh --insecure-registry sandboxregistry:5000'
    volumes:
        notarycerts:
            external: false
    networks:
        sandbox:
            external: false
4. Save and close the file.
5. Run the containers on your local system.
$ docker-compose up -d
The first time you run this, the docker-in-docker, Notary server, and registry
images will first be downloaded from Docker Hub.
## Playing in the sandbox
Now that everything is set up, you can go into your `trustsandbox` container and
start testing Docker content trust. From your host machine, obtain a shell
in the `trustsandbox` container.
$ docker exec -it trustsandbox sh
/ #
### Test some trust operations
Now, you'll pull some images from within the `trustsandbox` container.
1. Download a `docker` image to test with.
/ # docker pull docker/trusttest
Using default tag: latest
latest: Pulling from docker/trusttest
@@ -212,34 +154,34 @@ Now, you'll pull some images.
2. Tag it to be pushed to our sandbox registry:
/ # docker tag docker/trusttest sandboxregistry:5000/test/trusttest:latest
3. Enable content trust.
/ # export DOCKER_CONTENT_TRUST=1
4. Identify the trust server.
/ # export DOCKER_CONTENT_TRUST_SERVER=https://notaryserver:4443
This step is only necessary because the sandbox uses its own server.
If you are using the Docker Hub, this step isn't necessary.
5. Pull the test image.
/ # docker pull sandboxregistry:5000/test/trusttest
Using default tag: latest
no trust data available
Error: remote trust data does not exist for sandboxregistry:5000/test/trusttest: notaryserver:4443 does not have trust data for sandboxregistry:5000/test/trusttest
You see an error, because this content doesn't exist on the `notaryserver` yet.
6. Push and sign the trusted image.
/ # docker push sandboxregistry:5000/test/trusttest:latest
The push refers to a repository [sandboxregistry:5000/test/trusttest]
5f70bf18a086: Pushed
c22f7bc058a9: Pushed
latest: digest: sha256:ebf59c538accdf160ef435f1a19938ab8c0d6bd96aef8d4ddd1b379edf15a926 size: 734
Signing and pushing trust metadata
You are about to create a new root signing key passphrase. This passphrase
will be used to protect the most sensitive key in your signing system. Please
@@ -247,25 +189,24 @@ Now, you'll pull some images.
key file itself secure and backed up. It is highly recommended that you use a
password manager to generate the passphrase and keep it safe. There will be no
way to recover this key. You can find the key in your config directory.
Enter passphrase for new root key with ID 27ec255:
Repeat passphrase for new root key with ID 27ec255:
Enter passphrase for new repository key with ID 58233f9 (sandboxregistry:5000/test/trusttest):
Repeat passphrase for new repository key with ID 58233f9 (sandboxregistry:5000/test/trusttest):
Finished initializing "sandboxregistry:5000/test/trusttest"
Successfully signed "sandboxregistry:5000/test/trusttest":latest
Because you are pushing this repository for the first time, Docker creates new root and repository keys and asks you for passphrases with which to encrypt them. If you push again after this, it will only ask you for the repository passphrase so it can decrypt the key and sign again.
7. Try pulling the image you just pushed:
/ # docker pull sandboxregistry:5000/test/trusttest
Using default tag: latest
Pull (1 of 1): sandboxregistry:5000/test/trusttest:latest@sha256:ebf59c538accdf160ef435f1a19938ab8c0d6bd96aef8d4ddd1b379edf15a926
sha256:ebf59c538accdf160ef435f1a19938ab8c0d6bd96aef8d4ddd1b379edf15a926: Pulling from test/trusttest
Digest: sha256:ebf59c538accdf160ef435f1a19938ab8c0d6bd96aef8d4ddd1b379edf15a926
Status: Downloaded newer image for sandboxregistry:5000/test/trusttest@sha256:ebf59c538accdf160ef435f1a19938ab8c0d6bd96aef8d4ddd1b379edf15a926
Tagging sandboxregistry:5000/test/trusttest@sha256:ebf59c538accdf160ef435f1a19938ab8c0d6bd96aef8d4ddd1b379edf15a926 as sandboxregistry:5000/test/trusttest:latest
### Test with malicious images
@@ -274,49 +215,64 @@ What happens when data is corrupted and you try to pull it when trust is
enabled? In this section, you go into the `sandboxregistry` and tamper with some
data. Then, you try and pull it.
1. Leave the `trustsandbox` shell and container running.
2. Open a new interactive terminal from your host, and obtain a shell into the
`sandboxregistry` container.
$ docker exec -it sandboxregistry bash
root@65084fc6f047:/#
3. List the layers for the `test/trusttest` image you pushed:
root@65084fc6f047:/# ls -l /var/lib/registry/docker/registry/v2/repositories/test/trusttest/_layers/sha256
total 12
drwxr-xr-x 2 root root 4096 Jun 10 17:26 a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4
drwxr-xr-x 2 root root 4096 Jun 10 17:26 aac0c133338db2b18ff054943cee3267fe50c75cdee969aed88b1992539ed042
drwxr-xr-x 2 root root 4096 Jun 10 17:26 cc7629d1331a7362b5e5126beb5bf15ca0bf67eb41eab994c719a45de53255cd
4. Change into the registry storage for one of those layers (note that this is in a different directory):
root@65084fc6f047:/# cd /var/lib/registry/docker/registry/v2/blobs/sha256/aa/aac0c133338db2b18ff054943cee3267fe50c75cdee969aed88b1992539ed042
5. Add malicious data to one of the trusttest layers:
root@65084fc6f047:/# echo "Malicious data" > data
6. Go back to your `trustsandbox` terminal.
7. List the trusttest image.
/ # docker images | grep trusttest
REPOSITORY TAG IMAGE ID CREATED SIZE
docker/trusttest latest cc7629d1331a 11 months ago 5.025 MB
sandboxregistry:5000/test/trusttest latest cc7629d1331a 11 months ago 5.025 MB
sandboxregistry:5000/test/trusttest <none> cc7629d1331a 11 months ago 5.025 MB
8. Remove the `trusttest:latest` image from our local cache.
/ # docker rmi -f cc7629d1331a
Untagged: docker/trusttest:latest
Untagged: sandboxregistry:5000/test/trusttest:latest
Untagged: sandboxregistry:5000/test/trusttest@sha256:ebf59c538accdf160ef435f1a19938ab8c0d6bd96aef8d4ddd1b379edf15a926
Deleted: sha256:cc7629d1331a7362b5e5126beb5bf15ca0bf67eb41eab994c719a45de53255cd
Deleted: sha256:2a1f6535dc6816ffadcdbe20590045e6cbf048d63fd4cc753a684c9bc01abeea
Deleted: sha256:c22f7bc058a9a8ffeb32989b5d3338787e73855bf224af7aa162823da015d44c
9. Pull the image again. This will download the image from the registry, because we don't have it cached.
/ # docker pull sandboxregistry:5000/test/trusttest
Using default tag: latest
...
Pull (1 of 1): sandboxregistry:5000/test/trusttest:latest@sha256:35d5bc26fd358da8320c137784fe590d8fcf9417263ef261653e8e1c7f15672e
sha256:35d5bc26fd358da8320c137784fe590d8fcf9417263ef261653e8e1c7f15672e: Pulling from test/trusttest
aac0c133338d: Retrying in 5 seconds
a3ed95caeb02: Download complete
error pulling image configuration: unexpected EOF
You'll see the pull did not complete because the trust system was
unable to verify the image.
@@ -328,4 +284,10 @@ feel free to play with it and see how it behaves. If you find any security
issues with Docker, feel free to send us an email at <security@docker.com>.
## Cleaning up your sandbox
When you are done and want to clean up all the services you've started and any
anonymous volumes that have been created, just run the following command in the
directory where you've created your Docker Compose file:
$ docker-compose down -v


@@ -214,8 +214,7 @@ In order, Docker Engine does the following:
- **Pulls the `ubuntu` image:** Docker Engine checks for the presence of the `ubuntu`
image. If the image already exists, then Docker Engine uses it for the new container.
If it doesn't exist locally on the host, then Docker Engine pulls it from
[Docker Hub](https://hub.docker.com).
- **Creates a new container:** Once Docker Engine has the image, it uses it to create a
container.
- **Allocates a filesystem and mounts a read-write _layer_:** The container is created in


@@ -345,7 +345,7 @@ source. When the container is deleted, you should instruct the Engine daemon
to clean up anonymous volumes. To do this, use the `--rm` option, for example:
```bash
$ docker run --rm -v /foo -v awesome:/bar busybox top
```
This command creates an anonymous `/foo` volume. When the container is removed,


@@ -87,7 +87,7 @@ Go to [Docker Machine user guide](https://docs.docker.com/machine/).
### Docker Compose
Docker Compose allows you to define an application's components -- their containers,
configuration, links and volumes -- in a single file. Then a single command
will set everything up and start your application running.
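For example, assuming a `docker-compose.yml` in the current directory:

```bash
# One command builds, (re)creates, and starts every service in the file...
$ docker-compose up -d
# ...and one command stops and removes them again.
$ docker-compose down
```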


@@ -22,7 +22,7 @@ times but with different values, newer labels overwrite previous labels. Docker
uses the last `key=value` you supply.
>**Note:** Support for daemon-labels was added in Docker 1.4.1. Labels on
>containers and images were added in Docker 1.6.0.
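For example (a sketch; the keys and values are illustrative):

```bash
# A daemon label (Docker 1.4.1 and later) and a container label (1.6.0 and later)
$ docker daemon --label com.example.environment="production" &
$ docker run -d --label com.example.storage="ssd" busybox top
```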
## Label keys (namespaces)


@@ -59,6 +59,8 @@ could be added:
$ iptables -I DOCKER -i ext_if ! -s 8.8.8.8 -j DROP
```
where *ext_if* is the name of the interface providing external connectivity to the host.
## Communication between containers
Whether two containers can communicate is governed, at the operating system level, by two factors.


@@ -18,7 +18,7 @@ applications run on. Docker container networks give you that control.
This section provides an overview of the default networking behavior that Docker
Engine delivers natively. It describes the type of networks created by default
and how to create your own, user-defined networks. It also describes the
resources required to create networks on a single host or across a cluster of
hosts.
@@ -57,7 +57,7 @@ docker0 Link encap:Ethernet HWaddr 02:42:47:bc:3a:eb
RX bytes:1100 (1.1 KB) TX bytes:648 (648.0 B)
```
The `none` network adds a container to a container-specific network stack. That container lacks a network interface. Attaching to such a container and looking at its stack you see this:
```
$ docker attach nonenetcontainer
@@ -138,7 +138,7 @@ $ docker run -itd --name=container2 busybox
94447ca479852d29aeddca75c28f7104df3c3196d7b6d83061879e339946805c
```
Inspecting the `bridge` network again after starting two containers shows both newly launched containers in the network. Their ids show up in the "Containers" section of `docker network inspect`:
```
$ docker network inspect bridge
@@ -293,7 +293,9 @@ specifications.
You can create multiple networks. You can add containers to more than one
network. Containers can only communicate within networks but not across
networks. A container attached to two networks can communicate with member
containers in either network. When a container is connected to multiple
networks, its external connectivity is provided via the first non-internal
network, in lexical order.
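A minimal sketch of that behavior (network and container names are placeholders):

```bash
$ docker network create net_a
$ docker network create net_b
$ docker run -itd --name=multi busybox
$ docker network connect net_a multi
$ docker network connect net_b multi
# "multi" can reach members of both networks; its external traffic egresses
# via the first non-internal network in lexical order, here net_a.
```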
The next few sections describe each of Docker's built-in network drivers in
greater detail.


@@ -228,7 +228,8 @@ $ docker run --net=isolated_nw --ip=172.25.3.3 -itd --name=container3 busybox
As you can see, you were able to specify the IP address for your container.
As long as the network to which the container is connecting was created with
a user specified subnet, you will be able to select the IPv4 and/or IPv6 address(es)
for your container when executing `docker run` and `docker network connect` commands
by respectively passing the `--ip` and `--ip6` flags for IPv4 and IPv6.
The selected IP address is part of the container networking configuration and will be
preserved across container reload. The feature is only available on user-defined networks,
because they guarantee their subnet configuration does not change across daemon reload.
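For example, a sketch that assumes `isolated_nw` was created with a user-specified subnet:

```bash
# Attach an existing container with a fixed IPv4 address
$ docker network connect --ip 172.25.3.4 isolated_nw container4
```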
@@ -288,7 +289,7 @@ examine its networking stack:
$ docker attach container2
```
If you look at the container's network stack you should see two Ethernet interfaces, one for the default bridge network and one for the `isolated_nw` network.
```bash
/ # ifconfig


@@ -16,9 +16,9 @@ leverages the thin provisioning and snapshotting capabilities of this framework
for image and container management. This article refers to the Device Mapper
storage driver as `devicemapper`, and the kernel framework as `Device Mapper`.
>**Note**: The [Commercially Supported Docker Engine (CS-Engine) running on RHEL
and CentOS Linux](https://www.docker.com/compatibility-maintenance) requires
that you use the `devicemapper` storage driver.
## An alternative to AUFS
@@ -59,20 +59,20 @@ With `devicemapper` the high level process for creating images is as follows:
1. The `devicemapper` storage driver creates a thin pool.
The pool is created from block devices or loop mounted sparse files (more
on this later).
2. Next it creates a *base device*.
A base device is a thin device with a filesystem. You can see which
filesystem is in use by running the `docker info` command and checking the
`Backing filesystem` value.
3. Each new image (and image layer) is a snapshot of this base device.
These are thin provisioned copy-on-write snapshots. This means that they
are initially empty and only consume space from the pool when data is written
to them.
With `devicemapper`, container layers are snapshots of the image they are
created from. Just as with images, container snapshots are thin provisioned
@@ -107,9 +107,9 @@ block (`0x44f`) in an example container.
1. An application makes a read request for block `0x44f` in the container.
Because the container is a thin snapshot of an image it does not have the
data. Instead, it has a pointer (PTR) to where the data is stored in the image
snapshot lower down in the image stack.
2. The storage driver follows the pointer to block `0xf33` in the snapshot
relating to image layer `a005...`.
@@ -119,7 +119,7 @@ snapshot to memory in the container.
4. The storage driver returns the data to the requesting application.
## Write examples
With the `devicemapper` driver, writing new data to a container is accomplished
by an *allocate-on-demand* operation. Updating existing data uses a
@@ -130,7 +130,7 @@ For example, when making a small change to a large file in a container, the
`devicemapper` storage driver does not copy the entire file. It only copies the
blocks to be modified. Each block is 64KB.
### Writing new data
To write 56KB of new data to a container:
@@ -139,12 +139,12 @@ To write 56KB of new data to a container:
2. The allocate-on-demand operation allocates a single new 64KB block to the
container's snapshot.
If the write operation is larger than 64KB, multiple new blocks are
allocated to the container's snapshot.
3. The data is written to the newly allocated block.
### Overwriting existing data
To modify existing data for the first time:
@@ -161,7 +161,7 @@ The application in the container is unaware of any of these
allocate-on-demand and copy-on-write operations. However, they may add latency
to the application's read and write operations.
## Configure Docker with devicemapper
The `devicemapper` is the default Docker storage driver on some Linux
distributions. This includes RHEL and most of its forks. Currently, the
@@ -180,18 +180,20 @@ deployments should not run under `loop-lvm` mode.
You can detect the mode by viewing the output of the `docker info` command:
```bash
$ sudo docker info
Containers: 0
Images: 0
Storage Driver: devicemapper
Pool Name: docker-202:2-25220302-pool
Pool Blocksize: 65.54 kB
Backing Filesystem: xfs
[...]
Data loop file: /var/lib/docker/devicemapper/devicemapper/data
Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
Library Version: 1.02.93-RHEL7 (2015-01-28)
[...]
```
The output above shows a Docker host running with the `devicemapper` storage
driver operating in `loop-lvm` mode. This is indicated by the fact that the
@@ -201,7 +203,7 @@ files.
### Configure direct-lvm mode for production
The preferred configuration for production deployments is `direct-lvm`. This
mode uses block devices to create the thin pool. The following procedure shows
you how to configure a Docker host to use the `devicemapper` storage driver in
a `direct-lvm` configuration.
@@ -210,7 +212,7 @@ a `direct-lvm` configuration.
> and have images you want to keep, `push` them to Docker Hub or your private
> Docker Trusted Registry before attempting this procedure.
The procedure below will create a logical volume configured as a thin pool to
use as backing for the storage pool. It assumes that you have a spare block
device at `/dev/xvdf` with enough free space to complete the task. The device
identifier and volume sizes may be different in your environment and you
@@ -219,114 +221,172 @@ assumes that the Docker daemon is in the `stopped` state.
1. Log in to the Docker host you want to configure and stop the Docker daemon.
2. Install the LVM2 package.
The LVM2 package includes the userspace toolset that provides logical volume
management facilities on Linux.
3. Create a physical volume replacing `/dev/xvdf` with your block device.
```bash
$ pvcreate /dev/xvdf
```
4. Create a 'docker' volume group.
```bash
$ vgcreate docker /dev/xvdf
```
5. Create a thin pool named `thinpool`.
In this example, the data logical volume is 95% of the 'docker' volume group size.
Leaving this free space allows for auto expanding of either the data or
metadata if space runs low as a temporary stopgap.
```bash
$ lvcreate --wipesignatures y -n thinpool docker -l 95%VG
$ lvcreate --wipesignatures y -n thinpoolmeta docker -l 1%VG
```
6. Convert the pool to a thin pool.
```bash
$ lvconvert -y --zero n -c 512K --thinpool docker/thinpool --poolmetadata docker/thinpoolmeta
```
If you receive a signature detection warning, make sure you are working on
the correct devices before continuing. Signature warnings indicate that the
device you're working on is currently in use by LVM or has been used by LVM in
the past.
7. Configure autoextension of thin pools via an `lvm` profile.
```bash
$ vi /etc/lvm/profile/docker-thinpool.profile
```
8. Specify the 'thin_pool_autoextend_threshold' value.
The value should be the percentage of space used before `lvm` attempts
to autoextend the available space (100 = disabled).
```
thin_pool_autoextend_threshold = 80
```
9. Modify the `thin_pool_autoextend_percent` for when thin pool autoextension occurs.
The value is the percentage of space by which to increase the thin pool (100 =
disabled).
```
thin_pool_autoextend_percent = 20
```
10. Check your work; your `docker-thinpool.profile` file should appear similar to the following:
An example `/etc/lvm/profile/docker-thinpool.profile` file:
```
activation {
thin_pool_autoextend_threshold=80
thin_pool_autoextend_percent=20
}
```
11. Apply your new lvm profile.
```bash
$ lvchange --metadataprofile docker-thinpool docker/thinpool
```
12. Verify the `lv` is monitored.
```bash
$ lvs -o+seg_monitor
```
13. If the Docker daemon was previously started, clear your graph driver directory.
Clearing your graph driver removes any images, containers, and volumes in your
Docker installation.
```bash
$ rm -rf /var/lib/docker/*
```
14. Configure the Docker daemon with specific devicemapper options.
There are two ways to do this. You can set options on the command line if you start the daemon there:
```bash
--storage-driver=devicemapper --storage-opt=dm.thinpooldev=/dev/mapper/docker-thinpool --storage-opt dm.use_deferred_removal=true
```
You can also set them for startup in the `daemon.json` configuration, for example:
```json
{
"storage-driver": "devicemapper",
"storage-opts": [
"dm.thinpooldev=/dev/mapper/docker-thinpool",
"dm.use_deferred_removal=true"
]
}
```
15. If using systemd and modifying the daemon configuration via unit or drop-in file, reload systemd to scan for changes.
```bash
$ systemctl daemon-reload
```
16. Start the Docker daemon.
```bash
$ systemctl start docker
```
After you start the Docker daemon, ensure you monitor your thin pool and volume
group free space. While the volume group will auto-extend, it can still fill
up. To monitor logical volumes, use `lvs` without options or `lvs -a` to see the
data and metadata sizes. To monitor volume group free space, use the `vgs` command.
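A quick monitoring sketch:

```bash
# Data% and Meta% columns show thin pool usage; VFree shows group headroom
$ sudo lvs -a
$ sudo vgs
```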
Logs can show the auto-extension of the thin pool when it hits the threshold. To
view the logs, use:
```bash
$ journalctl -fu dm-event.service
```
If you run into repeated problems with the thin pool, you can use the
`dm.min_free_space` option to tune the Engine behavior. This value ensures that
operations fail with a warning when the free space is at or near the minimum.
For information, see <a
href="../../../reference/commandline/daemon/#storage-driver-options"
target="_blank">the storage driver options in the Engine daemon reference</a>.
### Examine devicemapper structures on the host
You can use the `lsblk` command to see the device files created above and the
`pool` that the `devicemapper` storage driver creates on top of them.
```bash
$ sudo lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 8G 0 disk
└─xvda1 202:1 0 8G 0 part /
xvdf 202:80 0 10G 0 disk
├─vg--docker-data 253:0 0 90G 0 lvm
│ └─docker-202:1-1032-pool 253:2 0 10G 0 dm
└─vg--docker-metadata 253:1 0 4G 0 lvm
└─docker-202:1-1032-pool 253:2 0 10G 0 dm
```
The diagram below shows the image from prior examples updated with the detail
from the `lsblk` command above.
![](images/lsblk-diagram.jpg)
In the diagram, the pool is named `Docker-202:1-1032-pool` and spans the `data`
and `metadata` devices created earlier. The `devicemapper` constructs the pool
name as follows:
```
Docker-MAJ:MIN-INO-pool
```
@@ -336,13 +396,201 @@ Docker-MAJ:MIN-INO-pool
Because Device Mapper operates at the block level it is more difficult to see
diffs between image layers and containers. Docker 1.10 and later no longer
matches image layer IDs with directory names in `/var/lib/docker`. However,
there are two key directories. The `/var/lib/docker/devicemapper/mnt` directory
contains the mount points for image and container layers. The
`/var/lib/docker/devicemapper/metadata` directory contains one file for every
image layer and container snapshot. The files contain metadata about each
snapshot in JSON format.
## Increase capacity on a running device
You can increase the capacity of the pool on a running thin-pool device. This is
useful if the data's logical volume is full and the volume group is at full
capacity.
### For a loop-lvm configuration
In this scenario, the thin pool is configured to use `loop-lvm` mode. To show
the specifics of the existing configuration use `docker info`:
```bash
$ sudo docker info
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 2
Server Version: 1.11.0-rc2
Storage Driver: devicemapper
Pool Name: docker-8:1-123141-pool
Pool Blocksize: 65.54 kB
Base Device Size: 10.74 GB
Backing Filesystem: ext4
Data file: /dev/loop0
Metadata file: /dev/loop1
Data Space Used: 1.202 GB
Data Space Total: 107.4 GB
Data Space Available: 4.506 GB
Metadata Space Used: 1.729 MB
Metadata Space Total: 2.147 GB
Metadata Space Available: 2.146 GB
Udev Sync Supported: true
Deferred Removal Enabled: false
Deferred Deletion Enabled: false
Deferred Deleted Device Count: 0
Data loop file: /var/lib/docker/devicemapper/devicemapper/data
WARNING: Usage of loopback devices is strongly discouraged for production use. Either use `--storage-opt dm.thinpooldev` or use `--storage-opt dm.no_warn_on_loop_devices=true` to suppress this warning.
Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
Library Version: 1.02.90 (2014-09-01)
Logging Driver: json-file
[...]
```
The `Data Space` values show that the pool is 100GB total. This example extends the pool to 200GB.
1. List the sizes of the devices.
```bash
$ sudo ls -lh /var/lib/docker/devicemapper/devicemapper/
total 1175492
-rw------- 1 root root 100G Mar 30 05:22 data
-rw------- 1 root root 2.0G Mar 31 11:17 metadata
```
2. Truncate the `data` file to the new pool size (approximately 200GB).
```bash
$ sudo truncate -s 214748364800 /var/lib/docker/devicemapper/devicemapper/data
```
3. Verify the file size changed.
```bash
$ sudo ls -lh /var/lib/docker/devicemapper/devicemapper/
total 1.2G
-rw------- 1 root root 200G Apr 14 08:47 data
-rw------- 1 root root 2.0G Apr 19 13:27 metadata
```
4. Reload the data loop device.
```bash
$ sudo blockdev --getsize64 /dev/loop0
107374182400
$ sudo losetup -c /dev/loop0
$ sudo blockdev --getsize64 /dev/loop0
214748364800
```
5. Reload devicemapper thin pool.
a. Get the pool name first.
```bash
$ sudo dmsetup status | grep pool
docker-8:1-123141-pool: 0 209715200 thin-pool 91
422/524288 18338/1638400 - rw discard_passdown queue_if_no_space -
```
The name is the string before the colon.
b. Dump the device mapper table.
```bash
$ sudo dmsetup table docker-8:1-123141-pool
0 209715200 thin-pool 7:1 7:0 128 32768 1 skip_block_zeroing
```
c. Calculate the new total sectors of the thin pool.
Change the second number of the table info (i.e. the disk end sector) to
reflect the new number of 512-byte sectors in the disk. For example, as the
new loop size is 200GB, change the second number to 419430400.
d. Reload the thin pool with the new sector number.
```bash
$ sudo dmsetup suspend docker-8:1-123141-pool \
&& sudo dmsetup reload docker-8:1-123141-pool --table '0 419430400 thin-pool 7:1 7:0 128 32768 1 skip_block_zeroing' \
&& sudo dmsetup resume docker-8:1-123141-pool
```
#### The device_tool
Docker's project `contrib` directory contains tools that are not part of the core
distribution. These tools are often useful but can also be out-of-date. <a
href="https://goo.gl/wNfDTi">This directory contains `device_tool.go`</a>,
which you can use to resize the loop-lvm thin pool.
To use the tool, compile it first. Then, do the following to resize the pool:
```bash
$ ./device_tool resize 200GB
```
### For a direct-lvm mode configuration
In this example, you extend the capacity of a running device that uses the
`direct-lvm` configuration. This example assumes you are using the `/dev/sdh1`
disk partition.
1. Extend the volume group (VG) `vg-docker`.
```bash
$ sudo vgextend vg-docker /dev/sdh1
Volume group "vg-docker" successfully extended
```
Your volume group may use a different name.
2. Extend the `data` logical volume (LV) `vg-docker/data`.
```bash
$ sudo lvextend -l+100%FREE -n vg-docker/data
Extending logical volume data to 200 GiB
Logical volume data successfully resized
```
3. Reload devicemapper thin pool.
a. Get the pool name.
```bash
$ sudo dmsetup status | grep pool
docker-253:17-1835016-pool: 0 96460800 thin-pool 51593 6270/1048576 701943/753600 - rw no_discard_passdown queue_if_no_space
```
The name is the string before the colon.
b. Dump the device mapper table.
```bash
$ sudo dmsetup table docker-253:17-1835016-pool
0 96460800 thin-pool 252:0 252:1 128 32768 1 skip_block_zeroing
```
c. Calculate the new total sectors of the thin pool. You can use `blockdev` to get the real size of the `data` LV.
Change the second number of the table info (i.e. the number of sectors) to
reflect the new number of 512 byte sectors in the disk. For example, as the
new data `lv` size is `264132100096` bytes, change the second number to
`515883008`.
```bash
$ sudo blockdev --getsize64 /dev/vg-docker/data
264132100096
```
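The sector count is just the byte size divided by 512; as a sanity check:

```bash
$ echo $((264132100096 / 512))
515883008
```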
d. Then reload the thin pool with the new sector number.
```bash
$ sudo dmsetup suspend docker-253:17-1835016-pool \
&& sudo dmsetup reload docker-253:17-1835016-pool --table '0 515883008 thin-pool 252:0 252:1 128 32768 1 skip_block_zeroing' \
&& sudo dmsetup resume docker-253:17-1835016-pool
```
## Device Mapper and Docker performance
It is important to understand the impact that allocate-on-demand and
@@ -383,20 +631,20 @@ There are several other things that impact the performance of the
`devicemapper` storage driver.
- **The mode.** The default mode for Docker running the `devicemapper` storage
driver is `loop-lvm`. This mode uses sparse files and suffers from poor
performance. It is **not recommended for production**. The recommended mode for
production environments is `direct-lvm` where the storage driver writes
directly to raw block devices.
- **High speed storage.** For best performance you should place the `Data file`
and `Metadata file` on high speed storage such as SSD. This can be direct
attached storage or from a SAN or NAS array.
- **Memory usage.** `devicemapper` is not the most memory efficient Docker
storage driver. Launching *n* copies of the same container loads *n* copies of
its files into memory. This can have a memory impact on your Docker host. As a
result, the `devicemapper` storage driver may not be the best choice for PaaS
and other high density use cases.
One final point: data volumes provide the best and most predictable
performance. This is because they bypass the storage driver and do not incur
the overhead of thin provisioning and copy-on-write.
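For example (a minimal sketch): writes under a volume mount point go straight to the host filesystem rather than through the `devicemapper` thin pool.

```bash
# /data is a data volume; I/O there bypasses the storage driver
$ docker run -d -v /data --name app busybox top
```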
@@ -410,3 +658,4 @@ data volumes.
* [Select a storage driver](selectadriver.md)
* [AUFS storage driver in practice](aufs-driver.md)
* [Btrfs storage driver in practice](btrfs-driver.md)
* [daemon reference](../../reference/commandline/daemon#storage-driver-options)
