Compare commits


395 Commits

Author SHA1 Message Date
Tianon Gravi
8502ad4ba7 Bump version to v0.7.3 2014-01-02 21:19:19 -07:00
Andy Rothfusz
58ec7855bc substantial spelling fix 2014-01-02 16:46:50 -08:00
Andy Rothfusz
949fde88df Merge pull request #3437 from jamtur01/faqhere
Fixed duplicate here references in FAQ
2014-01-02 16:44:38 -08:00
James Turnbull
5a9f45cb7a Fixed duplicate here references in FAQ 2014-01-02 19:40:19 -05:00
Andy Rothfusz
8f4a54734f Merge pull request #3428 from jamtur01/port1.8
Added some 1.7 API updates that were missing from 1.8
2014-01-02 16:34:56 -08:00
Andy Rothfusz
9359d79c4f Merge pull request #3389 from SvenDowideit/3366-simplify-volumes-documentation
simplify the volumes documentation and mention more details
2014-01-02 16:29:09 -08:00
James Turnbull
69db6ea867 Added some 1.7 API updates that were missing from 1.8 2014-01-02 19:27:59 -05:00
Andy Rothfusz
3b89187d03 Merge pull request #3409 from Chris00/patch-1
Mention lxc-checkconfig
2014-01-02 16:20:30 -08:00
Andy Rothfusz
82a47b0e82 Keep original transcript in Thatcher's voice
and clarify that this is an update to the example.
2014-01-02 16:17:52 -08:00
Andy Rothfusz
e0f07bc186 Merge pull request #3399 from rgstephens/patch-1
Update for Ubuntu 13.10
2014-01-02 16:14:47 -08:00
Guillaume J. Charmes
194eb246ef Merge pull request #3353 from creack/improve_add_cache
Improve add cache
2014-01-02 16:07:33 -08:00
Michael Crosby
81e596e272 Merge pull request #3287 from vieux/fix_progressbar_display_push
fix progressbar in docker push
2014-01-02 13:38:11 -08:00
Tianon Gravi
acfdfa81be Merge pull request #3427 from tianon/stub-deb-init-config
Add stubbed and commented "/etc/default/docker" to our deb package
2014-01-02 09:44:36 -08:00
Tianon Gravi
7fd6dcc831 Add stubbed and commented "/etc/default/docker" to our deb package
This is especially to fix FPM 1.0+ complaining that we told it we have an /etc/default/docker "config file", but didn't actually include one.
2014-01-01 22:34:22 -07:00
James Turnbull
add97f7eb0 Merge pull request #3419 from newzealandpaul/master
Mac OS X command in docs for pip fixed
2014-01-01 20:27:38 -08:00
Tianon Gravi
5f55784224 Merge pull request #3374 from tianon/fpm-license-vendor
Replace FPM --vendor with --license, and give it the proper value of "Apache-2.0"
2014-01-01 14:42:47 -08:00
Paul
f3816ee024 Mac OSX command for pip was incorrect 2014-01-02 09:52:07 +13:00
Christophe Troestler
0b3e153588 Implement suggestions 2014-01-01 12:28:22 +01:00
Christophe Troestler
2226989410 Mention lxc-checkconfig
This is important because the list below is not complete when one builds one's own kernel. Plus, it is more robust in the face of changes.
2014-01-01 00:59:47 +01:00
Tianon Gravi
c23b15b9d8 Merge pull request #3398 from thomasleveil/bash_completion_names
support for container names in bash completion
2013-12-31 08:28:21 -08:00
Thomas LEVEIL
055b32e3f4 support for container names in bash completion 2013-12-31 11:34:23 +00:00
Michael Crosby
907d9ce13c Merge pull request #3355 from creack/fix_build_exit
Exit with code 1 in case of error for `docker build`
2013-12-30 22:14:01 -08:00
Tianon Gravi
74d45789dd Merge pull request #3379 from tianon/organize-contrib-syntax
Reorganize the syntax highlighting files in contrib under a common directory
2013-12-30 21:57:19 -08:00
Tianon Gravi
40522c0380 Merge pull request #3392 from tianon/fix-install-failure
Fix install failure when busybox can't be downloaded
2013-12-30 20:04:30 -08:00
Tianon Gravi
d5bb0ff80a Merge pull request #3401 from odonnellryan/patch-1
Swap "grep -q" with "grep > /dev/null" so that "go help" doesn't balk when the pipe closes unexpectedly (since "grep -q" closes the pipe the second it finds a match)
2013-12-30 18:44:23 -08:00
Ryan O'Donnell
ad80da3389 Fixes Issue #3400
See Issue #3400
2013-12-30 20:58:25 -05:00
rgstephens
1f80c2a652 Update for Ubuntu 13.10
With two additional commands, this procedure will work for Ubuntu 13.10 using the image stackbrew/ubuntu:13.10.

1) change /etc/pam.d/sshd, pam_loginuid line 'required' to 'optional'
2) echo LANG=\"en_US.UTF-8\" > /etc/default/locale
2013-12-30 17:50:58 -08:00
Andy Rothfusz
1bc3f6b7b5 Merge pull request #3303 from crosbymichael/add-host-flag
Add DOCKER_HOST env var for client
2013-12-30 15:54:37 -08:00
Andy Rothfusz
643621133f Merge pull request #3395 from jamtur01/basicsfixes
Minor fixes to the basic command documentation
2013-12-30 15:41:59 -08:00
Sven Dowideit
fd240413ff simplify the volumes documentation and mention more details 2013-12-31 09:09:57 +10:00
Solomon Hykes
392b1e99b2 Merge pull request #3344 from tianon/dockerfile-cleanup
Many various Dockerfile cleanups
2013-12-30 11:26:13 -08:00
James Turnbull
0dfebf2d93 Minor fixes to the basic command documentation 2013-12-30 13:57:11 -05:00
Andy Rothfusz
40aaebe56a Merge pull request #3394 from jamtur01/clifixes
Numerous small fixes to the CLI documentation
2013-12-30 10:56:47 -08:00
James Turnbull
a1dba16fe8 Numerous small fixes to the CLI documentation 2013-12-30 13:48:12 -05:00
Andy Rothfusz
e31f1f1eba Merge pull request #3371 from Grunny/volumes-typo-fix
Fix typo in working with volumes doc page
2013-12-30 10:35:09 -08:00
Andy Rothfusz
7e720d0a77 Merge pull request #3312 from DeX77/master
Installation for FrugalWare
2013-12-30 10:34:39 -08:00
Andy Rothfusz
237868e9c3 Merge pull request #3380 from SvenDowideit/add-no-auth-note
add a simple note to suggest to the user to use RUN curl/wget/tool if the URL they need is behind auth
2013-12-30 10:00:51 -08:00
Tianon Gravi
fc197188d7 Fix install failure when busybox can't be downloaded
Whether or not the "busybox" image downloads and runs properly at the end of the build, we don't want to have the script return a failing exit code, especially since at that point, Docker is successfully installed, and we're just tooting our own horn for good measure.
2013-12-30 08:38:43 -07:00
Thatcher
d59080d119 Merge pull request #3386 from bwiklund/headerfix
Fixed header image on docs.docker.io site
2013-12-30 06:48:02 -08:00
Tianon Gravi
484a75f354 Update Dockerfile to use stackbrew/ubuntu (until it graduates), update Dockerfile MAINTAINER line, coalesce all apt-get installs into one invocation (including s3cmd by bringing in backports) 2013-12-29 23:27:01 -07:00
Ben Wiklund
434cf6c8ca fixed header size on narrow desktop window 2013-12-29 19:22:52 -08:00
Sven Dowideit
e93b7b4647 add a simple note to suggest to the user to use RUN curl/wget/tool if the URL they need is behind auth 2013-12-29 21:27:59 +10:00
Tianon Gravi
06a818616b Reorganize the syntax highlighting files in contrib under a common directory to match "contrib/init" and "contrib/completion"
This is split off from #2970.
2013-12-29 01:46:00 -07:00
Michael Crosby
f50b8b08b5 Add DOCKER_HOST env var for client
This env var will set the -H flag on the docker
client.
2013-12-28 16:42:18 -08:00
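A rough sketch in Go of the env-var fallback this commit describes; the default address and exact precedence rules here are assumptions, not the client's actual code:

```
package main

import (
	"fmt"
	"os"
)

// defaultHost is an assumed fallback; the real client default
// at the time was the local unix socket.
const defaultHost = "unix:///var/run/docker.sock"

// getHost returns the daemon address the client should dial:
// the DOCKER_HOST env var if set, otherwise the default.
func getHost() string {
	if host := os.Getenv("DOCKER_HOST"); host != "" {
		return host
	}
	return defaultHost
}

func main() {
	fmt.Println("connecting to", getHost())
}
```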
Tianon Gravi
cda146547e Replace FPM --vendor with --license, and give it the proper value of "Apache-2.0"
Fixes #3372
2013-12-28 06:35:00 -07:00
grunny
a17fd7b294 Fix typo in working with volumes doc page 2013-12-28 20:22:46 +10:00
Tianon Gravi
22162687df Merge pull request #3358 from asbjornenge/master
Apply automatically to files named 'Dockerfile'
2013-12-27 08:22:52 -08:00
Daniel Exner
d256f3049b * s/docker/Docker/ 2013-12-27 11:55:57 +01:00
Asbjørn Enge
a1a4a99d7e Apply automatically to files named 'Dockerfile' 2013-12-27 10:05:25 +01:00
Guillaume J. Charmes
4986958e7e Exit with code 1 in case of error for docker build 2013-12-26 17:49:41 -08:00
Guillaume J. Charmes
cd735496da Hash the sums for directories (unreadable when there are too many files) 2013-12-26 16:42:05 -08:00
Guillaume J. Charmes
894d4a23fb Change BuildFile in order to use TarSum instead of custom checksum 2013-12-26 16:16:26 -08:00
Guillaume J. Charmes
fc9f4d8bad Log files name along with their checksum in TarSum + add a Method to retrieve the checksum map 2013-12-26 16:01:36 -08:00
Guillaume J. Charmes
1d4b7d8fa1 Make sure the cache lookup returns always the same result 2013-12-26 15:43:27 -08:00
Guillaume J. Charmes
360078d761 Remove old debug from tarsum 2013-12-26 15:39:06 -08:00
Guillaume J. Charmes
808f2d39bd Merge pull request #3343 from bwiklund/comment-edits
small batch of edits/corrections to comments
2013-12-26 10:22:33 -08:00
Guillaume J. Charmes
d1ca12e81b Merge pull request #3347 from smarterclayton/patch-1
Fix typo in devmapper error message
2013-12-26 10:19:55 -08:00
Guillaume J. Charmes
a042c9fb1b Merge pull request #3348 from manuel-woelker/document-name-param-in-remote-api
Document "name" query parameter for "POST /containers/create" in Remote API v1.8
2013-12-26 10:19:42 -08:00
Manuel Woelker
721bb410f6 Document "name" query parameter for "POST /containers/create" in Remote API v1.8 2013-12-26 06:43:26 +01:00
Clayton Coleman
029625981d Fix typo in devmapper error message 2013-12-25 15:49:58 -05:00
Ben Wiklund
0fccf0f686 small batch of edits/corrections to comments 2013-12-24 16:40:14 -08:00
Guillaume J. Charmes
efaf2cac5c Merge pull request #2809 from graydon/880-cache-ADD-commands-in-dockerfiles
Issue #880 - cache ADD commands in dockerfiles
2013-12-24 16:22:51 -08:00
Guillaume J. Charmes
cb1fe939a8 Merge pull request #3337 from tianon/release-darwin
Release tgz and binaries for cross-compiled platforms
2013-12-24 14:44:41 -08:00
Andy Rothfusz
c654aea4f2 Merge pull request #3340 from jamtur01/apiport
Fixed #3039 - Added clarification on port options in API
2013-12-24 14:34:13 -08:00
James Turnbull
d2d8a4a6c5 Fixed #3039 - Added clarification on port options in API 2013-12-24 13:01:59 -05:00
Tianon Gravi
4100e9b7df Update cross and tgz to play nicely together (creating a tgz for each supported OS/ARCH), and update release.sh to upload binaries and tgz files for all the supported OS/ARCH combos 2013-12-23 23:55:06 -07:00
Tianon Gravi
5875953d9b Merge pull request #3112 from shawnl/master
hack/PACKAGERS.md: libdevmapper
2013-12-23 19:24:36 -08:00
Tianon Gravi
f4ce106e02 Merge pull request #3318 from sugi/mkimage-deb-fix-cache-glob
Fix glob expansion on mkimage-debootstrap
2013-12-23 17:20:41 -08:00
Victor Vieux
7ec1236cee Merge pull request #3324 from roylee17/enhance-docker-top-err-msg
docker-top: improve error message for non-running containers
2013-12-23 17:15:37 -08:00
Guillaume J. Charmes
2b4bb67ce0 Merge pull request #3327 from shykes/pkg-graphdb
Move utility package 'graphdb' to pkg/graphdb
2013-12-23 16:33:11 -08:00
Victor Vieux
6155f07561 Merge pull request #3331 from shykes/pkg-names
Move utility package 'namesgenerator' to pkg/namesgenerator
2013-12-23 16:15:26 -08:00
Victor Vieux
e6e35e5984 Merge pull request #3330 from shykes/pkg-term
Move utility package 'term' to pkg/term
2013-12-23 16:11:42 -08:00
Victor Vieux
0d207abf8e Merge pull request #3329 from shykes/pkg-netlink
Move utility package 'netlink' to pkg/netlink
2013-12-23 16:08:07 -08:00
Solomon Hykes
a009d4ae8d Move utility package 'namesgenerator' to pkg/namesgenerator 2013-12-23 23:45:18 +00:00
Victor Vieux
b75f385abd Merge pull request #3304 from vieux/prevent_orphan_deletion
Prevent orphan in docker rmi
2013-12-23 15:44:46 -08:00
Solomon Hykes
7ce7516c12 Move utility package 'term' to pkg/term 2013-12-23 23:42:37 +00:00
Solomon Hykes
f6b91262a7 Move utility package 'netlink' to pkg/netlink 2013-12-23 23:39:39 +00:00
Solomon Hykes
d16d748132 Move utility package 'graphdb' to pkg/graphdb 2013-12-23 23:33:06 +00:00
Victor Vieux
3fc9de3d03 Merge pull request #3325 from shykes/pkg-systemd
Move utility package 'systemd' to pkg/systemd
2013-12-23 15:26:25 -08:00
Solomon Hykes
652c2c2a80 Add README to pkg 2013-12-23 23:12:19 +00:00
Solomon Hykes
8e7db0432e Move utility package 'systemd' to pkg/systemd 2013-12-23 23:07:01 +00:00
Guillaume J. Charmes
e1a15b25dc Merge pull request #3309 from apocas/master
Wrong HTTP method in events endpoint. (documentation)
2013-12-23 14:58:49 -08:00
Tzu-Jung Lee
b1a3a55802 docker-top: improve error message for non-running containers
Signed-off-by: Tzu-Jung Lee <roylee17@gmail.com>
2013-12-23 14:50:24 -08:00
apocas
614bc5c1e1 http method typo in documentation for events endpoint (all api versions) 2013-12-23 21:21:01 +00:00
Victor Vieux
3fe4d5477a Merge pull request #3298 from creack/add_arch_user_agent
Add arch/os info to user agent (Registry)
2013-12-23 12:08:26 -08:00
Andy Rothfusz
cda24e345c Merge pull request #3302 from briandorsey/master
Added a note about a networking work-around.
2013-12-23 11:25:20 -08:00
Andy Rothfusz
88037b2877 Merge pull request #3320 from jamtur01/privapi
API documentation update for Privileged
2013-12-23 11:22:47 -08:00
Andy Rothfusz
6cdd1aa350 Merge pull request #3321 from jamtur01/apiclean
Minor style cleanups to the API spec docs
2013-12-23 11:20:56 -08:00
James Turnbull
ea8a3438f7 Minor style cleanups to the API spec docs 2013-12-23 11:45:51 -05:00
Tianon Gravi
954158ce52 Merge pull request #2986 from SvenDowideit/still-need-privileged
still need -privileged to run a build / test
2013-12-23 07:58:17 -08:00
James Turnbull
bf17383e35 API documentation update for Privileged
The 1.7 API docs show the ability to pass Privileged
when creating a container. This is not supported, as
Privileged is now part of hostConfig and can only be
passed when starting a container.

This fixes the documentation issue.
2013-12-23 09:08:28 -05:00
Sven Dowideit
83d81758b0 use the Makefile - it makes life so much simpler 2013-12-23 23:46:52 +10:00
Tatsuki Sugiura
e3b878ce98 Fix glob expansion for no-cache setting.
In the previous version, the glob pattern would be expanded to actual file
names when writing the setting to etc/apt/apt.conf.d/no-cache.

This patch adds quoting so the cache clean command works properly.
2013-12-23 17:27:22 +09:00
James Turnbull
1e5f9334e0 Merge pull request #3308 from crigor/patch-1
Change an to a
2013-12-21 15:12:50 -08:00
Daniel Exner
3edbf416bf + added missing link in index.rst
* some grammatical and spelling fixes

Thx jamtur :)
2013-12-22 00:00:20 +01:00
James Turnbull
c2364b978d Merge pull request #3272 from SvenDowideit/more-complete-run-cmd-example
A more complete example of docker run.
2013-12-21 14:52:41 -08:00
Daniel Exner
158e3d60ec Installation for FrugalWare 2013-12-21 23:38:13 +01:00
apocas
e4e579b40d Wrong HTTP method in events endpoint. 2013-12-21 16:41:41 +00:00
Christopher Rigor
071528e103 Change an to a 2013-12-21 16:23:13 +08:00
Victor Vieux
a2fcd3d8f0 Merge pull request #3306 from roylee17/3224-fix-udp-cleanup
network: fix a typo in udp cleanup path
2013-12-21 00:03:07 -08:00
Tzu-Jung Lee
7d2e851d8e network: fix a typo in udp cleanup path
Fix #3224 - Port already in use error when running a container

Signed-off-by: Tzu-Jung Lee <roylee17@gmail.com>
2013-12-20 17:54:54 -08:00
Victor Vieux
85f9b778f5 fix progressbar in docker push 2013-12-20 16:55:41 -08:00
Victor Vieux
369cde4ad7 discard test output 2013-12-20 16:50:31 -08:00
Victor Vieux
3ffc52bcf5 Merge branch 'test-container-orphaning' of https://github.com/gabrtv/docker into prevent_orphan_deletion 2013-12-20 16:27:33 -08:00
Victor Vieux
8dcca2125a prevent orphan 2013-12-20 16:26:02 -08:00
Brian Dorsey
cdd14b1a31 Update MTU work-around to use the new -mtu flag 2013-12-20 16:19:35 -08:00
Brian Dorsey
37ed178611 Added a note about a networking work-around.
An additional flag to limit the networking MTU is required in three Compute Engine zones for reliable networking from inside the docker container. Added a warning to that effect to the QuickStart guide.
2013-12-20 16:06:28 -08:00
Gabriel Monroy
c995c9bb91 add TestContainerOrphaning integration test 2013-12-20 16:52:34 -07:00
Michael Crosby
aa619de748 Merge pull request #3289 from crosbymichael/add-mtu-option
Allow mtu to be configured at daemon start
2013-12-20 13:21:56 -08:00
Michael Crosby
6fde28c293 Merge pull request #3300 from crosbymichael/fix-mountinfo-parsing
Only parse up to the mountpoint in mountinfo
2013-12-20 13:21:20 -08:00
Michael Crosby
f4358fc647 Merge pull request #3291 from dineshs-altiscale/3282-sparse-files
Add -S option to tar for efficient sparse file handling
2013-12-20 11:10:31 -08:00
Michael Crosby
57e19b1475 Merge pull request #3294 from discordianfish/3293-better-error-dockerfile-empty
Return error if Dockerfile is empty
2013-12-20 10:45:56 -08:00
Michael Crosby
8051b6c1a1 Only parse up to the mountpoint in mountinfo 2013-12-20 13:34:05 -05:00
Michael Crosby
566ff54d0d Allow mtu to be configured at daemon start 2013-12-20 12:12:03 -05:00
Guillaume J. Charmes
f9359f59a8 Add dynamic os/arch detection to Images 2013-12-20 08:20:08 -08:00
Guillaume J. Charmes
e4561438f1 Add arch/os info to user agent (Registry) 2013-12-20 08:19:25 -08:00
Johannes 'fish' Ziemke
f7ba1c34bb Return error if Dockerfile is empty 2013-12-20 14:13:52 +01:00
Sven Dowideit
df87919165 make a more complete example of docker run, showing the use of most of the options (Closes #1500) 2013-12-20 20:06:07 +10:00
Dinesh Subhraveti
733bf5d3dd Add -S option to tar for efficient sparse file handling
Fixes issue #3282
2013-12-19 21:41:22 -08:00
Andy Rothfusz
efde305c05 Merge pull request #3286 from pandrew/3279-documentation-docker-inspect
Update docs to include images for docker inspect
2013-12-19 14:50:55 -08:00
Guillaume J. Charmes
636dfc82b0 Merge pull request #3064 from tianon/custom-dockerinit-path
Allow custom dockerinit path
2013-12-19 14:31:41 -08:00
Victor Vieux
93abcc3a3b Merge pull request #3273 from crosbymichael/set-mtu-in-dockerinit
Move MTU setting outside of lxc and set with netlink
2013-12-19 14:25:27 -08:00
Tianon Gravi
c3ec696284 Merge pull request #3244 from codeaholics/remove-apt-errors-during-build
Tidy up some of the error messages from apt during build
2013-12-19 13:28:56 -08:00
Michael Crosby
fdd81b423b Merge pull request #3288 from tianon/makefile-maintainer
Add Tianon as Makefile maintainer
2013-12-19 13:26:52 -08:00
Tianon Gravi
cd89fe5c4f Add Tianon as Makefile maintainer 2013-12-19 13:42:35 -07:00
Tianon Gravi
1636ed9826 Merge pull request #3283 from jpoimboe/update-vendor.sh
add gosqlite to vendor.sh
2013-12-19 12:37:25 -08:00
pandrew
8072d3a4e0 Update docs to include images for docker inspect 2013-12-19 20:55:19 +01:00
Josh Poimboeuf
d215724ad6 add gosqlite to vendor.sh
Add gosqlite and its latest revision to vendor.sh so that the vendor
directory can be reliably recreated.
2013-12-19 13:51:46 -06:00
Michael Crosby
0e6f0c4e02 Move MTU setting outside of lxc and set with netlink 2013-12-19 11:51:44 -08:00
Andy Rothfusz
629cc2fce4 Merge pull request #3284 from jamtur01/faqmore
Added some more items to the FAQ
2013-12-19 11:32:53 -08:00
James Turnbull
8c52140059 Added some more items to the FAQ 2013-12-19 14:27:47 -05:00
Victor Vieux
f21bd80e90 Merge pull request #3271 from crosbymichael/mount-outside
Perform docker specific mounts outside of lxc
2013-12-19 11:13:31 -08:00
Guillaume J. Charmes
4bdd4599f0 Merge pull request #3243 from alexlarsson/compressed-tar
Handle compressed tars in ApplyLayer
2013-12-19 11:02:32 -08:00
Michael Crosby
ed93dab9a8 Merge pull request #3276 from tianon/cross-compile
Add new "cross" bundle to cross-compile the Docker client
2013-12-19 10:41:12 -08:00
Tianon Gravi
62a81370ff Add new "cross" bundle to cross-compile the Docker client for other platforms (currently just 32-bit and 64-bit OS X) 2013-12-19 11:33:49 -07:00
Guillaume J. Charmes
e74c65c3db Merge pull request #3274 from tianon/tianon-dockerfile
Make Tianon the official root "Dockerfile" maintainer
2013-12-19 10:31:00 -08:00
Victor Vieux
248eadd341 Merge pull request #3277 from jpoimboe/fix-root-symlink
Move root symlink check to engine.New
2013-12-19 10:24:23 -08:00
Victor Vieux
e829d5b6d2 Merge pull request #3275 from crosbymichael/sqlite-import
Move sqlite conn to graph db for cross compile support
2013-12-19 10:18:30 -08:00
Michael Crosby
35d8ac94f3 Merge pull request #3270 from vreon/tree-box-drawing-characters
Use box-drawing characters in `docker images -tree`
2013-12-19 10:00:33 -08:00
Josh Poimboeuf
94821a3353 Move root symlink check to engine.New
Since commit c91c365, when starting the docker daemon without an
existing /var/lib/docker directory, it fails with:

  2013/12/18 23:39:36 Unable to canonicalize root (%!s(*string=0xc210077c80)): lstat /var/lib/docker: no such file or directory

Move the symlink checking code to engine.New after the root dir has been
created.
2013-12-19 00:39:12 -06:00
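The `%!s(*string=0xc210077c80)` noise in that log line is Go's fmt package flagging a bad verb: a *string was handed to %s. A standalone reproduction (not Docker's code) of the formatting bug and its fix:

```
package main

import "fmt"

func main() {
	root := "/var/lib/docker"
	// Passing the pointer instead of the string makes fmt
	// print %!s(*string=0x...), as seen in the daemon log.
	fmt.Printf("Unable to canonicalize root (%s)\n", &root)
	// Dereferencing produces the intended message:
	fmt.Printf("Unable to canonicalize root (%s)\n", root)
}
```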
Jesse Dubay
d14c162fd6 Use box-drawing characters in docker images -tree
This makes the output of `docker images -tree` look a little prettier.
Previously it displayed a combination of box-drawing characters and pipe
characters, so the lines didn't quite connect...

Before:

    └─aceb1e132fe5 Size: 487 MB (virtual 1.728 GB)
      |─c5480c55e00a Size: 44.89 MB (virtual 1.773 GB)
      | └─96c21b5e3c80 Size: 17.25 kB (virtual 1.773 GB)
      |   └─58f3f2293512 Size: 8.191 MB (virtual 1.782 GB)

After:

    └─aceb1e132fe5 Size: 487 MB (virtual 1.728 GB)
      ├─c5480c55e00a Size: 44.89 MB (virtual 1.773 GB)
      │ └─96c21b5e3c80 Size: 17.25 kB (virtual 1.773 GB)
      │   └─58f3f2293512 Size: 8.191 MB (virtual 1.782 GB)
2013-12-18 22:30:21 -08:00
Tianon Gravi
14d1c5a2c3 Make Tianon the official root "Dockerfile" maintainer, since it's so hard-locked to hack changes most of the time 2013-12-18 22:29:48 -07:00
Michael Crosby
329d154209 Move sqlite conn to graph db for cross compile support 2013-12-18 21:14:16 -08:00
Michael Crosby
7bc96aec7b Improve interface by moving to subpkg
Enable builds on OSX
2013-12-18 16:42:49 -08:00
Michael Crosby
a6fdc5d208 Fix unmount issues 2013-12-18 15:24:08 -08:00
Guillaume J. Charmes
681b40c801 Merge pull request #3268 from vieux/prevent_panic_volume
prevent a panic with docker run -v /
2013-12-18 14:06:44 -08:00
Victor Vieux
536da93380 prevent a panic with docker run -v / 2013-12-18 13:57:49 -08:00
Michael Crosby
45d7dcfea2 Handle external mounts outside of lxc 2013-12-18 13:46:02 -08:00
Guillaume J. Charmes
210fa0871c Merge pull request #3267 from vieux/debug_daemon_start
add some debug to runtime.restore()
2013-12-18 13:45:22 -08:00
Victor Vieux
f768c6adb7 Merge pull request #3263 from tianon/abspath-root
Canonicalize our root path before we try using it
2013-12-18 11:41:47 -08:00
Victor Vieux
fde909ffb8 add some debug to runtime.restore() 2013-12-18 10:57:21 -08:00
Michael Crosby
553b4dae45 Merge pull request #3264 from creack/fix_osx_compilation
Fix osx compilation
2013-12-18 10:50:14 -08:00
Tianon Gravi
929662a4d5 Merge pull request #3266 from tianon/fix-integration-test-building
Add -a to our BUILDFLAGS directly, which fixes some fun test compilation issues
2013-12-18 10:37:55 -08:00
Tianon Gravi
fbac812540 Add -a to our BUILDFLAGS directly, which fixes some fun test compilation issues
Also, now that we use "-a", we no longer get any benefit from "go test -i", and it actually causes problems sometimes, so let's nuke it.
2013-12-18 11:32:25 -07:00
Guillaume J. Charmes
e481c82fa9 Fix OSX compilation for aufs 2013-12-18 10:18:49 -08:00
Guillaume J. Charmes
73a1ef7c22 Fix OSX build for sysinit 2013-12-18 10:16:48 -08:00
Tianon Gravi
c91c365f88 Canonicalize our root path before we try using it, because we make assumptions about it not containing symlinks
Fixes #3242
2013-12-18 11:15:09 -07:00
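A minimal sketch of such canonicalization using the Go standard library (Docker's actual helper may differ):

```
package main

import (
	"fmt"
	"path/filepath"
)

// canonicalize resolves a possibly relative, possibly
// symlinked path to an absolute, symlink-free one.
func canonicalize(root string) (string, error) {
	abs, err := filepath.Abs(root)
	if err != nil {
		return "", err
	}
	// EvalSymlinks fails if the path doesn't exist yet,
	// which is the failure mode #3277 later moved past.
	return filepath.EvalSymlinks(abs)
}

func main() {
	p, err := canonicalize("/var/lib/docker")
	fmt.Println(p, err)
}
```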
Alexander Larsson
b8a4f570fb archive: Re-add XZ compression support
This shells out to the xz binary to support .tar.xz layers, as
there is no compression/xz support in go.
2013-12-18 10:50:22 +01:00
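Shelling out for xz decompression in Go looks roughly like this; a sketch assuming an `xz` binary on the PATH, not the exact Docker helper (the caller would also need to Wait on the command):

```
package archive

import (
	"io"
	"os/exec"
)

// xzDecompress pipes the compressed stream through the
// system xz binary and returns the decompressed stream.
func xzDecompress(archive io.Reader) (io.ReadCloser, error) {
	cmd := exec.Command("xz", "--decompress", "--stdout")
	cmd.Stdin = archive
	out, err := cmd.StdoutPipe()
	if err != nil {
		return nil, err
	}
	if err := cmd.Start(); err != nil {
		return nil, err
	}
	return out, nil
}
```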
Michael Crosby
70c7220a99 Merge pull request #3128 from codeaholics/1530-improve-error-message
Improve error message when refusing to remove image due to multiple repo tags
2013-12-17 20:49:25 -08:00
Michael Crosby
0f45e3c6e0 Merge pull request #3205 from cddr/Vagrantfile-typos
Fix typos in Vagrantfile
2013-12-17 20:41:53 -08:00
Michael Crosby
be0beb897a Merge pull request #3238 from tianon/go-build-a
Readd go build -a
2013-12-17 20:40:36 -08:00
Michael Crosby
8fa4c4b062 Merge pull request #3253 from titanous/update-container-name-validation
Update container name validation
2013-12-17 20:34:53 -08:00
Jonathan Rudenberg
c06ab5f9c2 Add container name validation test 2013-12-17 20:19:23 -05:00
Jonathan Rudenberg
3ec39ad01a DRY up valid container name pattern usage 2013-12-17 20:17:50 -05:00
Jonathan Rudenberg
1940015824 Add '.' to valid container name pattern 2013-12-17 20:17:06 -05:00
Andy Rothfusz
1acefac97e Merge pull request #3234 from creack/default_unix_path
Default unix path
2013-12-17 16:24:01 -08:00
Andy Rothfusz
f630fbc7cf Merge pull request #3228 from maztaim/patch-1
Update binaries.rst
2013-12-17 13:10:08 -08:00
Andy Rothfusz
e61f327ec9 Merge pull request #3250 from vincentwoo/patch-1
Update docker_remote_api.rst to reflect that the latest remote API version is 1.8
2013-12-17 13:09:37 -08:00
Vincent Woo
c4444ce48f Update docker_remote_api.rst to reflect that the latest remote API version is 1.8 2013-12-17 11:48:21 -08:00
Andy Rothfusz
7ba0f1f421 Merge pull request #3236 from dhrp/doc-master-warning
Added warning when browsing master. & no longer hides alternative versions
2013-12-17 11:17:17 -08:00
Andy Rothfusz
30454bb85c Merge pull request #3249 from tianon/fix-sphinx-warning
Fix minor sphinx warning
2013-12-17 11:03:44 -08:00
Andy Rothfusz
2deb0c3365 Merge pull request #3248 from lsm5/docker-fedora-conflict
Docker fedora conflict
2013-12-17 11:01:37 -08:00
Thatcher Peskens
efc0610c0e Removed unnecessary span element from version floater and
Replaced social footer by the one from www.docker.io
2013-12-17 10:39:22 -08:00
Andy Rothfusz
391676b598 Merge pull request #3221 from jamtur01/introlink
Updated Introduction link
2013-12-17 10:36:00 -08:00
Guillaume J. Charmes
5204feeaa9 Merge pull request #3237 from tianon/hack-fix-cover-detection
Fix "go tool cover" detection to only add -cover and -coverprofile if we...
2013-12-17 10:09:40 -08:00
Tianon Gravi
81d112cb7f Fix minor sphinx warning 2013-12-17 10:38:55 -07:00
Lokesh Mandvekar
25be0b1e98 Capitalize the first letter of Fedora and misc. rewording
Signed-off-by: Lokesh Mandvekar <lsm5@redhat.com>
2013-12-17 11:32:40 -06:00
Michael Crosby
c56b045270 Merge pull request #3239 from tianon/old-go-not-supported
Purge more hack references to Go 1.1.2
2013-12-17 09:08:42 -08:00
Guillaume J. Charmes
d9a1cc7e2b Merge pull request #3168 from discordianfish/2464-replace-lxc-ps
Reimplement lxc-ps
2013-12-17 09:06:52 -08:00
Lokesh Mandvekar
30b4a0f76a docker and docker-io conflict update for epel
Signed-off-by: Lokesh Mandvekar <lsm5@redhat.com>
2013-12-17 10:56:31 -06:00
Lokesh Mandvekar
7d95145b76 update fedora doc: docker and docker-io conflict
Signed-off-by: Lokesh Mandvekar <lsm5@redhat.com>
2013-12-17 10:33:52 -06:00
Guillaume J. Charmes
379a7fab07 Update docs 2013-12-17 07:55:36 -08:00
Danny Yates
36e060299f Tidy up some of the error messages from apt during build 2013-12-17 13:50:37 +00:00
Alexander Larsson
a96a26c62f Handle compressed tars in ApplyLayer
When pulling from a registry we get a compressed tar archive, so
we need to wrap the stream in the right kind of compress reader.

Unfortunately go doesn't have an Xz decompression method, but I
don't think any docker layers use that atm anyway.
2013-12-17 14:19:48 +01:00
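Choosing the right decompressor can be done by sniffing the stream's magic bytes; a minimal sketch with the standard library's gzip and bzip2 readers (the function name is illustrative):

```
package archive

import (
	"bufio"
	"compress/bzip2"
	"compress/gzip"
	"io"
)

// DecompressStream peeks at the magic bytes and wraps the
// reader in a matching decompressor, or returns it as-is.
func DecompressStream(r io.Reader) (io.Reader, error) {
	buf := bufio.NewReader(r)
	magic, err := buf.Peek(3)
	if err != nil && err != io.EOF {
		return nil, err
	}
	switch {
	case len(magic) >= 2 && magic[0] == 0x1f && magic[1] == 0x8b:
		return gzip.NewReader(buf) // gzip
	case len(magic) >= 3 && magic[0] == 'B' && magic[1] == 'Z' && magic[2] == 'h':
		return bzip2.NewReader(buf), nil // bzip2
	default:
		return buf, nil // assume plain tar
	}
}
```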
Danny Yates
c3705e83e7 Improve error message when refusing to remove image due to multiple repo tags 2013-12-17 12:34:25 +00:00
Tianon Gravi
5e9b4a23e6 Purge more hack references to Go 1.1.2 (since it requires backported archive/tar patches now, and Go 1.2 is _widely_ packaged successfully) 2013-12-16 23:57:54 -07:00
Tianon Gravi
a1c5e276f4 Add "-a" back to our "go build" 2013-12-16 23:50:03 -07:00
Tianon Gravi
eddda577a4 Fix "go tool cover" detection to only add -cover and -coverprofile if we both have cover support in Go, and if we have the cover tool downloaded 2013-12-16 22:54:06 -07:00
Tianon Gravi
2ed1001c57 Allow packagers to specify a custom dockerinit lookup location via DOCKER_INITPATH in dynbinary
Only necessary if distro policy dictates that the path deviate from the paths already listed in utils/utils.go - please refrain from using it otherwise.
2013-12-16 22:29:08 -07:00
Tianon Gravi
f02d766f9a Add dockerinit SHA1 and path to "docker info" when debug mode is enabled 2013-12-16 22:29:05 -07:00
Tianon Gravi
2035af44aa Always copy dockerinit locally, regardless of whether our docker binary is static, because even it might get deleted or moved/renamed 2013-12-16 22:29:00 -07:00
Thatcher Peskens
746ae155fb Added warning when browsing master. & no longer hides alternative versions. 2013-12-16 18:36:35 -08:00
Graydon Hoare
a26801c73f Add CHANGELOG.md entry for ADD caching. 2013-12-16 17:38:16 -08:00
Graydon Hoare
670b326c1b Add self to AUTHORS. 2013-12-16 17:36:51 -08:00
Graydon Hoare
15a6854119 Add testcases for ADD caching, closes #880. 2013-12-16 17:36:51 -08:00
Graydon Hoare
3f9416b58d Record added-tree sha256 in buildfile.CmdAdd, probe and use cache. 2013-12-16 17:36:51 -08:00
Graydon Hoare
7afd7a82bd Factor cache-probing logic out of buildfile.commit() and CmdRun(). 2013-12-16 17:36:50 -08:00
Michael Crosby
124da338fd Merge pull request #3207 from alexlarsson/fix-applylayer
Re-enable TestApplyLayer and make it work
2013-12-16 16:55:48 -08:00
Guillaume J. Charmes
69a31c3386 Improve TestParseHost 2013-12-16 16:35:56 -08:00
Guillaume J. Charmes
20605eb310 Allow to use -H unix:// like -H tcp:// 2013-12-16 16:30:23 -08:00
Thatcher
945a1f06f9 Merge pull request #3232 from dhrp/Full-width-documentation
Full width documentation
2013-12-16 15:56:39 -08:00
Michael Crosby
64136071c6 Change version to 0.7.2-dev 2013-12-16 15:43:46 -08:00
Guillaume J. Charmes
28b162eeb4 Merge pull request #3233 from crosbymichael/bump_0.7.2
Bump to 0.7.2
2013-12-16 15:06:42 -08:00
Michael Crosby
e960152a1e Bump to v0.7.2 2013-12-16 14:50:07 -08:00
Thatcher Peskens
fe956ad449 Improvement upon @SvenDowideit's suggestion to make the docs use the full width
Moved the style comments source into the .less file
2013-12-16 14:37:56 -08:00
Guillaume J. Charmes
47375ddf54 Merge pull request #3230 from crosbymichael/allow-untag
Allow untag operations with no container validation
2013-12-16 14:34:56 -08:00
Michael Crosby
f0d6a91a1b Merge pull request #3217 from SvenDowideit/deal-with-changing-paths-for-lxc-start
lxc-start-unconfined softlink can go bad
2013-12-16 13:38:03 -08:00
Michael Crosby
62213ee314 Allow untag operations with no container validation 2013-12-16 13:29:43 -08:00
Thatcher Peskens
fa48f17493 Merge branch 'use-complete-width-of-browser-for-docs' of github.com:SvenDowideit/docker into SvenDowideit-use-complete-width-of-browser-for-docs 2013-12-16 13:08:00 -08:00
Guillaume J. Charmes
41d972baf1 Merge pull request #3219 from unclejack/vagrant_fix_version_check
install vbox guest additions if the latest aren't already installed
2013-12-16 12:49:32 -08:00
Guillaume J. Charmes
b3ad330782 Merge pull request #3099 from vieux/fix_pull_build
added authConfig to docker build
2013-12-16 10:53:10 -08:00
Tim Bosse
6721525068 Update binaries.rst
Along with the kernel requirement, you need a working copy of lxc. When trying to start using "/docker -d", I received the error 'initapi: exec: "lxc-start": executable file not found in $PATH'.
2013-12-16 12:04:29 -05:00
Johannes 'fish' Ziemke
5cfcb05486 Fix and re-enable TestGetContainersTop 2013-12-16 16:01:55 +01:00
Alexander Larsson
78c22c24b3 ApplyLayer: Fix TestLookupImage
The TestLookupImage test seems to use a layer that contains
/etc/postgres/postgres.conf, but not e.g. /etc/postgres.

To handle this we ensure that the parent directory always
exists, creating it if it does not.
2013-12-16 14:35:43 +01:00
Johannes 'fish' Ziemke
4faba4fae7 Reimplement lxc-ps
Instead of calling lxc-ps in top endpoint, we reimplement it by
calling ps and filter for pids running in a given container.
2013-12-16 13:30:35 +01:00
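The described approach, sketched in Go: run ps and keep only the rows whose PID belongs to the container. How the container's PID set is collected is assumed here; the real implementation derives it from the container itself:

```
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// filterPs runs `ps aux` and returns the header plus rows
// whose PID (second column) is in containerPids.
func filterPs(containerPids map[string]bool) ([]string, error) {
	out, err := exec.Command("ps", "aux").Output()
	if err != nil {
		return nil, err
	}
	lines := strings.Split(strings.TrimSpace(string(out)), "\n")
	result := []string{lines[0]} // keep the header row
	for _, line := range lines[1:] {
		fields := strings.Fields(line)
		if len(fields) > 1 && containerPids[fields[1]] {
			result = append(result, line)
		}
	}
	return result, nil
}

func main() {
	rows, _ := filterPs(map[string]bool{"1": true})
	fmt.Println(strings.Join(rows, "\n"))
}
```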
Sven Dowideit
e1efd4cb8c please, don't use 30% of the screen for whitespace, and thus compress the examples so they are ~80 chars wide - the output of 'docker ps' and 'docker images' becomes needlessly unreadable 2013-12-16 13:24:35 +10:00
Tianon Gravi
606cacdca0 Merge pull request #3222 from gurjeet/zfs_driver_owner
Update readme to mark ZFS driver as Alpha quality.
2013-12-15 07:09:00 -08:00
Gurjeet Singh
d526038503 Update readme to mark ZFS driver as Alpha quality. 2013-12-15 09:17:16 -05:00
James Turnbull
58daccab26 Updated Introduction link
Previously the introduction link pointed to www.docker.io. That
did not seem to make a lot of sense to me so instead I pointed it
at:

http://www.docker.io/learn_more/
2013-12-15 03:13:41 -05:00
unclejack
12fb508262 install vbox guest additions if not latest 2013-12-14 16:00:52 +02:00
Sven Dowideit
0a3eedd4c9 when sharing a /var/lib/docker dir with more than one distribution, an existing lxc-start-unconfined softlink may point to a non-existent path; following that link (as Stat does) will cause the daemon to fail to start 2013-12-14 15:29:08 +10:00
Guillaume J. Charmes
a6928e70ac Merge pull request #3197 from ajhager/3138-names
Validate container names on creation. Fixes #3138
2013-12-13 17:28:36 -08:00
Guillaume J. Charmes
20197385b2 Merge pull request #3173 from vieux/docker_info_job
Move info to job
2013-12-13 17:27:59 -08:00
Victor Vieux
85b9338205 add GetenvInt64 and SetenvInt64 2013-12-13 16:29:22 -08:00
Victor Vieux
51e2c1794b move docker info to the job api 2013-12-13 16:15:15 -08:00
Guillaume J. Charmes
20899cdb34 Merge pull request #3183 from vieux/job_commit
Move commit to job
2013-12-13 16:11:58 -08:00
Guillaume J. Charmes
f5ab2516d8 Merge pull request #2897 from crosbymichael/aufs-42
Increase max image depth to 127
2013-12-13 16:03:57 -08:00
Victor Vieux
d5f5ecb658 improve GetenvJson 2013-12-13 16:02:19 -08:00
Victor Vieux
4b5ceb0f24 use args 2013-12-13 14:29:27 -08:00
Andy Rothfusz
906b481148 Merge pull request #3213 from metalivedev/1695-dockerlogs
Add more information about Docker logging
2013-12-13 14:29:14 -08:00
Victor Vieux
930ec9f52c move commit to job 2013-12-13 14:19:56 -08:00
Guillaume J. Charmes
aaa1c48d24 Merge pull request #3175 from vieux/engine-job-stop
Move stop to job
2013-12-13 14:15:58 -08:00
Victor Vieux
d7123a597f Merge pull request #3214 from dotcloud/shykes_maintainer
Temporarily remove @shykes from engine/MAINTAINERS
2013-12-13 14:03:08 -08:00
Guillaume J. Charmes
9a9ecda7c8 Merge pull request #3208 from WarheadsSE/bridgeip
Add -bip flag: allow specification of dynamic bridge IP via CIDR
2013-12-13 13:56:35 -08:00
Guillaume J. Charmes
071338172c Merge pull request #3187 from vieux/resize_job
Move resize to job
2013-12-13 13:55:23 -08:00
Victor Vieux
4975c1b549 Temporarily remove @shykes from engine/MAINTAINERS 2013-12-13 13:51:20 -08:00
Victor Vieux
73e8a39ff2 move resize to job 2013-12-13 13:15:39 -08:00
Victor Vieux
847cf5b599 Merge branch 'master' of https://github.com/dotcloud/docker 2013-12-13 13:15:22 -08:00
Michael Crosby
bf91636558 Merge pull request #3210 from rsampaio/fix_bridge_creation_3141
Bridge creation when ipv6 is not enabled
2013-12-13 12:03:55 -08:00
Andy Rothfusz
1e85aabf71 Fix #1695 by adding more about logging. 2013-12-13 11:42:58 -08:00
Michael Crosby
4fe0a9b6a0 Merge pull request #3211 from tianon/hack-make-cover
Add new cover bundlescript for giving a nice report across all the coverprofiles
2013-12-13 11:17:03 -08:00
Joseph Hager
f63cdf0260 Validate container names on creation. Fixes #3138
Move valid container name regex to the top of the file

Added hyphen as a valid rune in container names.

Remove group in valid container name regex.
2013-12-13 14:14:05 -05:00
Victor Vieux
9fb1ba97b1 Merge branch 'master' of https://github.com/dotcloud/docker 2013-12-13 11:06:20 -08:00
Tianon Gravi
59dc2876a7 Add new cover bundlescript for giving a nice report across all the coverprofiles generated by the test scripts 2013-12-13 11:59:54 -07:00
Tianon Gravi
23ab0af2ff Merge pull request #3132 from tianon/hack-separate-integration
Separate Integration Tests
2013-12-13 10:55:49 -08:00
Victor Vieux
b8a16b3459 Merge pull request #3194 from tianon/tianon-hack-maintainer
Make Tianon the hack maintainer
2013-12-13 10:55:07 -08:00
Rodrigo Vaz
a530b8d981 fix #3141 Bridge creation when ipv6 is not enabled 2013-12-13 16:39:49 -02:00
Victor Vieux
89beb55c32 Merge branch 'master' of https://github.com/dotcloud/docker 2013-12-13 10:38:26 -08:00
Victor Vieux
f9328ad9cc Merge pull request #3201 from jpoimboe/libvirt-prereq-network
Set hostname and IP address from dockerinit
2013-12-13 10:38:17 -08:00
Victor Vieux
20759c3ef7 Merge branch 'libvirt-prereq-network' of https://github.com/jpoimboe/docker 2013-12-13 10:34:09 -08:00
Victor Vieux
5d81776714 Merge pull request #3202 from jpoimboe/libvirt-prereq-env
dockerinit: propagate "container" env variable from lxc
2013-12-13 10:32:17 -08:00
Tianon Gravi
0ef1ff91cb Merge pull request #3151 from tianon/more-debootstrap-tweaks
Update mkimage-debootstrap with even more tweaks for keeping images tiny...
2013-12-13 09:28:11 -08:00
WarheadsSE
a68d7f3d70 Add -bip flag: allow specification of dynamic bridge IP via CIDR
e.g.:

```
docker -d -bip "10.10.0.1/16"
```

If set and valid, it is used in place of trial and error from the pre-defined array in network.go.
Mutually exclusive with the -b option.
2013-12-13 10:47:19 -05:00
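Validating the CIDR value is a standard-library one-liner; a sketch of the check (error handling here is illustrative):

```
package main

import (
	"fmt"
	"net"
	"os"
)

func main() {
	bip := "10.10.0.1/16" // would come from the -bip flag
	ip, ipnet, err := net.ParseCIDR(bip)
	if err != nil {
		fmt.Fprintf(os.Stderr, "invalid -bip %q: %v\n", bip, err)
		os.Exit(1)
	}
	fmt.Println("bridge IP:", ip, "network:", ipnet)
}
```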
Alexander Larsson
a8af12f80a Re-enable TestApplyLayer
With the previous two changes we now pass this test.
2013-12-13 15:50:25 +01:00
Alexander Larsson
10cd902f90 Fix change detection when applying tar layers
The default gnu tar format has no sub-second precision mtime support,
and the golang tar writer currently doesn't support that either.
This means that if we export the changes from a container, we
get zero in the sub-second precision field when the change is applied.

This means we can't compare that to the original without getting a
spurious change. So, we detect this case by treating timestamps as equal
when the seconds match and either of the two nanosecond values is zero.
2013-12-13 15:46:41 +01:00
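The comparison rule the commit describes, as a small Go helper; a sketch, and the function name is an assumption:

```
package main

import (
	"fmt"
	"time"
)

// sameFsTime treats two timestamps as equal when the seconds
// match and either side lost its sub-second precision (i.e.
// has zero nanoseconds), which is what happens after a
// round-trip through the default GNU tar format.
func sameFsTime(a, b time.Time) bool {
	return a.Unix() == b.Unix() &&
		(a.Nanosecond() == 0 || b.Nanosecond() == 0 ||
			a.Nanosecond() == b.Nanosecond())
}

func main() {
	orig := time.Unix(1386949601, 123456789)
	fromTar := time.Unix(1386949601, 0)     // nanoseconds dropped by tar
	fmt.Println(sameFsTime(orig, fromTar))  // true: no spurious change
}
```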
Alexander Larsson
818c249bae archive: Implement ApplyLayer directly
Rather than calling out to tar we use the golang tar parser
to directly extract the tar files. This has two major advantages:

1) We're able to replace an existing directory with a file in the
   new layer. This currently breaks with the external tar, since
   it refuses to recursively remove the destination directory in
   this case, and there are no options to make it do that.

2) We avoid extracting the whiteout files just to later remove them.
2013-12-13 15:43:50 +01:00
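A heavily simplified sketch of such an extraction loop, showing the two behaviors called out above: skipping whiteout entries and clearing the target path so a file can replace a directory. Real layer application handles many more header types, ownership, and timestamps:

```
package archive

import (
	"archive/tar"
	"io"
	"os"
	"path/filepath"
	"strings"
)

// applyLayer extracts a tar stream into dest, honoring
// ".wh." whiteout entries instead of extracting them.
func applyLayer(dest string, layer io.Reader) error {
	tr := tar.NewReader(layer)
	for {
		hdr, err := tr.Next()
		if err == io.EOF {
			return nil
		}
		if err != nil {
			return err
		}
		path := filepath.Join(dest, hdr.Name)
		base := filepath.Base(hdr.Name)
		if strings.HasPrefix(base, ".wh.") {
			// Whiteout marker: delete the hidden path instead
			// of extracting the marker file itself.
			hidden := filepath.Join(filepath.Dir(path), strings.TrimPrefix(base, ".wh."))
			if err := os.RemoveAll(hidden); err != nil {
				return err
			}
			continue
		}
		switch hdr.Typeflag {
		case tar.TypeDir:
			if err := os.MkdirAll(path, os.FileMode(hdr.Mode)); err != nil {
				return err
			}
		case tar.TypeReg:
			// Clearing the target first is what allows a file in
			// the new layer to replace an existing directory.
			if err := os.RemoveAll(path); err != nil {
				return err
			}
			f, err := os.OpenFile(path, os.O_CREATE|os.O_WRONLY, os.FileMode(hdr.Mode))
			if err != nil {
				return err
			}
			if _, err := io.Copy(f, tr); err != nil {
				f.Close()
				return err
			}
			f.Close()
		}
	}
}
```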
Tianon Gravi
5a89c6f6df Merge pull request #3192 from unclejack/update_virtualbox_guest_additions
vagrant: update & verify virtualbox guest tools
2013-12-12 21:22:29 -08:00
Andy Chambers
2e6dbe87ad Fix typos in Vagrantfile 2013-12-13 00:09:05 -05:00
Josh Poimboeuf
e877294321 dockerinit: propagate "container" env variable from lxc
Lxc (and libvirt) already set the "container" env variable
appropriately[1], so just use that.

[1] http://www.freedesktop.org/wiki/Software/systemd/ContainerInterface/
2013-12-12 20:08:58 -06:00
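Propagating the variable then reduces to reading what lxc already set; a tiny sketch:

```
package main

import (
	"fmt"
	"os"
)

func main() {
	// lxc/libvirt set "container" for the init process
	// (e.g. container=lxc); dockerinit only has to read it.
	if c := os.Getenv("container"); c != "" {
		fmt.Println("running inside a", c, "container")
	}
}
```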
Josh Poimboeuf
ecc51cd465 dockerinit: set IP address
Set the IP address in dockerinit instead of lxc utils, to prepare for
using libvirt-lxc.
2013-12-12 19:57:11 -06:00
Josh Poimboeuf
f7c7f7978c dockerinit: set hostname
Set the hostname in dockerinit instead of with lxc utils.  libvirt-lxc
doesn't have a way to do this, so do it in a common place.
2013-12-12 19:56:05 -06:00
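Setting the hostname from dockerinit itself needs only a syscall; a minimal Linux-only sketch (requires the appropriate capability inside the container):

```
//go:build linux

package main

import (
	"fmt"
	"os"
	"syscall"
)

func main() {
	// Requires CAP_SYS_ADMIN within the container's namespace.
	if err := syscall.Sethostname([]byte("mycontainer")); err != nil {
		fmt.Fprintln(os.Stderr, "sethostname:", err)
		os.Exit(1)
	}
}
```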
Michael Crosby
8224e13bd2 Merge pull request #3185 from vieux/job_tag
Move tag to job
2013-12-12 17:02:39 -08:00
Guillaume J. Charmes
912bf8ff92 Merge pull request #3015 from jpoimboe/dockerinit-libvirt-prereqs
dockerinit: drop capabilities
2013-12-12 15:49:21 -08:00
Victor Vieux
e43ff2f6f2 move tag to job 2013-12-12 11:52:11 -08:00
Josh Poimboeuf
b8f1c73705 dockerinit: drop capabilities
Drop capabilities in dockerinit instead of with lxc utils, since
libvirt-lxc doesn't support it.

This will also be needed for machine container mode, since dockerinit
needs CAP_SYS_ADMIN to setup /dev/console correctly.
2013-12-12 13:47:24 -06:00
Josh Poimboeuf
1572989201 dockerinit: refactor error handling 2013-12-12 13:47:24 -06:00
Josh Poimboeuf
bd02d6e662 dockerinit: put args in a struct 2013-12-12 13:47:23 -06:00
Andy Rothfusz
2d1f61ef0e Merge pull request #3190 from zain/master
Small typo fixes
2013-12-12 11:25:11 -08:00
Andy Rothfusz
54df95f26c Merge pull request #3189 from aknikitin/patch-1
Minor spelling fix
2013-12-12 11:24:45 -08:00
Guillaume J. Charmes
5b33ae5971 Merge pull request #3145 from vieux/fix_docker_images
multiple fixes in docker images
2013-12-12 11:17:19 -08:00
Tianon Gravi
0db1c60542 Make Tianon the hack maintainer 2013-12-12 11:25:30 -07:00
unclejack
f216448c82 vagrant: update & verify virtualbox guest tools 2013-12-12 13:03:33 +02:00
Zain Memon
f26a9d456c Small typo fixes 2013-12-12 01:23:16 -08:00
Anton Nikitin
bf5b949ffc Minor spelling fix 2013-12-12 01:09:24 -05:00
Victor Vieux
621523a041 Merge pull request #3184 from creack/fix-volumes-on-host
Fix volumes on host
2013-12-11 18:06:25 -08:00
Guillaume J. Charmes
8fd9633a6b Improve FollowLink to handle recursive links and be more strict 2013-12-11 17:19:02 -08:00
Victor Vieux
1124261158 Merge pull request #3144 from codeaholics/643-stale-nfs-handle
Prevent deletion of image if ANY container is depending on it; not just running containers
2013-12-11 17:18:54 -08:00
Andy Rothfusz
b722f809e7 Merge pull request #3181 from lsm5/rhel-docs-typos
Rhel docs typos
2013-12-11 16:37:24 -08:00
Michael Crosby
f396c42cad Fix volumes on the host by following symlinks in a scope 2013-12-11 16:31:02 -08:00
Lokesh Mandvekar
8874f2aef9 keeping rhel page sorta in sync with fedora
Signed-off-by: Lokesh Mandvekar <lsm5@redhat.com>
2013-12-11 18:26:17 -06:00
Lokesh Mandvekar
e8ec3dba7b remove step numbers, keep consistent with fedora
Signed-off-by: Lokesh Mandvekar <lsm5@redhat.com>
2013-12-11 18:21:52 -06:00
Andy Rothfusz
4eda2a54de Merge pull request #3177 from tianon/fix-turnbull-github
Fix James's github handle in docs/MAINTAINERS
2013-12-11 16:00:06 -08:00
Andy Rothfusz
d3292078dc Merge pull request #3176 from lsm5/rhel-docs
Rhel docs
2013-12-11 15:58:42 -08:00
Victor Vieux
6ba456ff87 move t from arg to env 2013-12-11 15:36:50 -08:00
Lokesh Mandvekar
44984602c7 more typo corrections
Signed-off-by: Lokesh Mandvekar <lsm5@redhat.com>
2013-12-11 16:36:14 -06:00
Lokesh Mandvekar
d534e1c3a1 some typo corrections
Signed-off-by: Lokesh Mandvekar <lsm5@redhat.com>
2013-12-11 16:20:54 -06:00
Andy Rothfusz
d56d8ab96e Merge pull request #3174 from richo/features/https_install_script
Use https to get the install script
2013-12-11 14:09:22 -08:00
Andy Rothfusz
6cf8ec606e Merge pull request #3161 from SvenDowideit/make-replace-docker-binary-note-more-obvious
associate swapping the built docker binary with building the binary, rather than a note in building the docs
2013-12-11 14:04:34 -08:00
Lokesh Mandvekar
db3019d50b rhel page keywords update
Signed-off-by: Lokesh Mandvekar <lsm5@redhat.com>
2013-12-11 16:00:46 -06:00
Lokesh Mandvekar
42c38bf34d rhel description update
Signed-off-by: Lokesh Mandvekar <lsm5@redhat.com>
2013-12-11 15:59:35 -06:00
Andy Rothfusz
11b3fbb3bd Merge pull request #3167 from qbrossard/patch-1
Corrected typo (resdis -> redis)
2013-12-11 13:57:59 -08:00
Andy Rothfusz
036f41fde3 Merge pull request #3165 from SvenDowideit/cmd-rmi-example
add example for docker rmi, and explain the need to remove all references (tags) to an image before it's garbage collected :)
2013-12-11 13:57:13 -08:00
Andy Rothfusz
6e9c1590c6 Merge pull request #3162 from SvenDowideit/docker-commit-example-change-CMD
add a direct example for changing the cmd that is run
2013-12-11 13:52:12 -08:00
Tianon Gravi
39cc8a32b1 Fix James's github handle in docs/MAINTAINERS 2013-12-11 14:13:55 -07:00
Lokesh Mandvekar
31961ccd94 rhel page only for rhel
Signed-off-by: Lokesh Mandvekar <lsm5@redhat.com>
2013-12-11 14:36:12 -06:00
Lokesh Mandvekar
eec48f93a3 rhel docs update
Signed-off-by: Lokesh Mandvekar <lsm5@redhat.com>
2013-12-11 14:34:51 -06:00
Solomon Hykes
dbe1915fee Engine: new command 'stop' gracefully stops a container. 2013-12-11 11:52:59 -08:00
Solomon Hykes
bef8de9319 Engine: integer job status, improved stream API
* Jobs return an integer status instead of a string
* Status convention mimics unix process execution: 0=success, 1=generic error, 127="no such command"
* Stdout and Stderr support multiple thread-safe data receivers and ring buffer filtering
2013-12-11 11:52:59 -08:00
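The status convention maps naturally onto a few constants; a sketch of how a caller might interpret a job's integer status (names are illustrative, not the engine's identifiers):

```
package main

import "fmt"

// Illustrative status values following the unix-like
// convention the engine adopted.
const (
	StatusOK       = 0   // success
	StatusErr      = 1   // generic error
	StatusNotFound = 127 // "no such command"
)

func describe(status int) string {
	switch status {
	case StatusOK:
		return "success"
	case StatusNotFound:
		return "no such command"
	default:
		return fmt.Sprintf("failed with status %d", status)
	}
}

func main() {
	for _, s := range []int{0, 1, 127} {
		fmt.Println(s, "=>", describe(s))
	}
}
```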
Richo Healey
81fc368a6d Use https to get the install script 2013-12-11 11:27:36 -08:00
Michael Crosby
bd292759f0 Merge pull request #3153 from vieux/improve_docker_push_display
Update docker push to use new display
2013-12-11 11:11:53 -08:00
Michael Crosby
5fd3c8204d Merge pull request #2735 from shykes/engine-job-kill
New engine command: 'kill'
2013-12-11 10:35:57 -08:00
Quentin Brossard
af21908493 Corrected typo (resdis -> redis) 2013-12-11 13:15:27 +01:00
Sven Dowideit
7edd1f6bad add example for docker rmi, and explain the need to remove all references (tags) to an image before it's garbage collected :) 2013-12-11 15:54:34 +10:00
Sven Dowideit
d878632b25 add a direct example for changing the cmd that is run 2013-12-11 12:07:07 +10:00
Sven Dowideit
be13735001 associate swapping the built docker binary with building the binary, rather than a note in building the docs 2013-12-11 11:12:11 +10:00
Andy Rothfusz
fb9ddc5de5 Merge pull request #3159 from SvenDowideit/make-docs-consistency
Makefile: make docs is more consistent
2013-12-10 16:51:35 -08:00
Sven Dowideit
27646c4459 make docs is more consistent 2013-12-11 10:14:56 +10:00
Victor Vieux
b98d51dddb revert 'firstErr' 2013-12-10 15:37:03 -08:00
Victor Vieux
0025e9bd71 Merge pull request #3113 from shykes/engine-export
Move 'docker export' to the engine API
2013-12-10 13:28:24 -08:00
Victor Vieux
4c6e528f13 Merge pull request #3152 from daniel-garcia/3129_dont-open-bindmounted-files
don't open bind mounted files/dirs to get Stat, use os.Lstat
2013-12-10 11:05:39 -08:00
Victor Vieux
95f061b408 update docker push to use [====> ] 2013-12-10 10:57:16 -08:00
Daniel Garcia
761184df52 don't open bind mounted files/dirs to get Stat, use os.Lstat 2013-12-10 12:49:53 -06:00
Tianon Gravi
78b85220be Update mkimage-debootstrap with even more tweaks for keeping images tiny by more aggressively removing cache files and by not downloading apt-cache Translations files 2013-12-10 10:59:32 -07:00
Andy Rothfusz
8814c11b14 Merge pull request #3103 from metalivedev/1229-titleactions
Update "Use" titles to be action-oriented
2013-12-09 18:57:48 -08:00
Victor Vieux
09d2c2351c Merge pull request #3119 from shykes/engine-version
Port 'docker version' to the engine API
2013-12-09 17:35:44 -08:00
Victor Vieux
c618a906a4 fix size in -tree 2013-12-09 17:27:05 -08:00
Andy Rothfusz
9c1e9a5157 Fix #1229. Update titles, fix some wrapping.
Make the Ambassador container explicit.
Apply Sven's suggestions.
2013-12-09 17:23:56 -08:00
Andy Rothfusz
0b0b0ca0f9 Merge pull request #3146 from jamtur01/linkedits
Some minor cleanup of the Links use document
2013-12-09 17:09:59 -08:00
Victor Vieux
ac1093b83a fix docker images -tree <invalid_image> and docker images -viz <image_name> 2013-12-09 16:58:17 -08:00
James Turnbull
c9cedb4c04 Some minor cleanup of the Links use document 2013-12-09 16:47:19 -08:00
Victor Vieux
a74be95b23 Merge pull request #2843 from shykes/engine-job-wait
New engine job:wait
2013-12-09 16:36:23 -08:00
Victor Vieux
8291f00a0e refactor and fix tests 2013-12-09 16:25:19 -08:00
Andy Rothfusz
b7bc80cba9 Merge pull request #3109 from artagnon/arch-install
Update Arch installation doc
2013-12-09 15:52:54 -08:00
Andy Rothfusz
864729b96f Merge pull request #3104 from jamtur01/uses
Added section on Trusted Builds
2013-12-09 15:51:52 -08:00
Andy Rothfusz
a67571668e Merge pull request #3143 from idupree/patch-1
Make it extra clear that the `docker` group is root-equivalent.
2013-12-09 15:48:05 -08:00
Danny Yates
776bb43c9e Prevent deletion of image if ANY container is depending on it; not just running containers 2013-12-09 20:46:21 +00:00
Michael Crosby
75bd5bea70 Merge pull request #3114 from shykes/hack-stats
Hack: stats.sh prints useful project stats for maintainers
2013-12-09 10:36:17 -08:00
Isaac Dupree
e2ee5c71fc Make it extra clear that the docker group is root-equivalent. 2013-12-09 12:25:20 -05:00
Tianon Gravi
f0879a1e14 Add separate "test-integration" bundlescript (and corresponding dyntest-integration bundlescript) 2013-12-08 18:43:24 -07:00
Tianon Gravi
ca405786f4 Unify dyntest/test and dynbinary/binary hack bundlescripts further by cross-invocation and keeping all the logic in one place, taking advantage of LDFLAGS_STATIC that is the only bit that gets replaced for dyntest/dynbinary 2013-12-08 18:40:05 -07:00
Michael Crosby
cdc07f7d5c Merge pull request #3126 from unclejack/remove_vendored_tar
Remove vendored dotcloud/tar
2013-12-08 16:51:52 -08:00
Shawn Landden
f379f667a2 hack/PACKAGERS.md: libdevmapper 2013-12-08 14:39:06 -08:00
Tianon Gravi
45cea94a82 Unify hack/make/*test further by invoking hack/make/test directly from dyntest 2013-12-08 15:34:08 -07:00
unclejack
8ec96c9605 remove vendored dotcloud/tar
The tar dependency has been removed. It's
time to remove the vendored tar as well.
2013-12-09 00:02:13 +02:00
James Turnbull
c094807a1b Added section on Trusted Builds 2013-12-08 15:54:12 -05:00
Tianon Gravi
bac3a8e6f5 Add much better pruning of non-tested directories, including pruning the integration tests directory (doing more with "find" and nothing with "grep") 2013-12-08 13:50:48 -07:00
Tianon Gravi
dcfc4ada4d Clean output and simplify hack/make/*test by adding go_test_dir function in make.sh 2013-12-08 13:49:57 -07:00
Tianon Gravi
416b16e1e2 Simplify and resync hack/make/test and hack/make/dyntest output handling 2013-12-08 12:57:11 -07:00
Ramkumar Ramachandra
f832b76bdf archlinux installation doc: correct some details
1. The AUR package is called docker-git, not lxc-docker-git.

2. According to the official community package, docker depends on
   sqlite.

3. 02ef8ec (Update archlinux.rst as packages have changed, 2013-12-06)
   updated the installation instructions, but left behind residual
   wording about the AUR package not being officially supported; the
   community repository is officially supported.

Signed-off-by: Ramkumar Ramachandra <artagnon@gmail.com>
2013-12-08 15:36:02 +05:30
Solomon Hykes
d502f0cfac Merge pull request #3118 from shykes/engine-structured-output
Engine: jobs can send structured output as json on stdout
2013-12-07 23:46:50 -08:00
Solomon Hykes
16fad96007 Merge pull request #3117 from shykes/engine-refactor-env
Engine: break out Env utilities into their own type - Env
2013-12-07 23:45:00 -08:00
Solomon Hykes
de35b346d1 Port 'docker version' to the engine API 2013-12-08 07:41:53 +00:00
Solomon Hykes
869a11bc93 Cleanup version introspection
* Unify version checking code into version.go
* Make 'version' available as a job in the engine
* Use simplified version checking code when setting user agent for registry client.
2013-12-08 07:35:24 +00:00
Solomon Hykes
f806818154 Engine: convenience http transport for simple remote job execution 2013-12-08 07:33:23 +00:00
Solomon Hykes
a7a171b6c2 Engine: Output.AddEnv decodes structured data from the standard output of a job 2013-12-08 06:16:10 +00:00
Solomon Hykes
a80c059bae Engine: break out Env utilities into their own type - Env 2013-12-08 06:06:05 +00:00
Solomon Hykes
edace08327 Hack: stats.sh prints useful project stats for maintainers 2013-12-08 01:47:03 +00:00
Solomon Hykes
9656cdf0c2 Engine: 'export' returns a raw archive of a container's filesystem 2013-12-08 01:33:37 +00:00
Solomon Hykes
50f3a696bd Engine: don't log job stdout to engine stdout (it might be non-text output, for example tar data for 'export') 2013-12-08 01:33:05 +00:00
Solomon Hykes
f4676f0ffa Merge pull request #3101 from creack/merge_release
Merge release
2013-12-06 18:09:38 -08:00
Guillaume J. Charmes
3c1f3be032 Update version 2013-12-06 17:31:09 -08:00
Guillaume J. Charmes
aeba4e6482 Merge remote-tracking branch 'origin/release' into merge_release 2013-12-06 17:30:52 -08:00
Solomon Hykes
3569d080af New engine command: 'wait' 2013-12-06 23:05:21 +00:00
Solomon Hykes
427bdb60e7 Engine: port 'kill' to the new integer status. 2013-12-06 23:02:27 +00:00
Solomon Hykes
9b1930c5a0 gofmt 2013-12-06 23:02:27 +00:00
Solomon Hykes
2546a2c645 Hack: use new 'kill' command in integration tests 2013-12-06 23:02:27 +00:00
Solomon Hykes
fdb3de7b11 Engine: new command 'kill' sends a signal to a running container 2013-12-06 23:02:27 +00:00
Guillaume J. Charmes
04ffa53ba8 Merge pull request #3077 from jlhawn/3076-handle-inactive-user-login
Adjusted handling of inactive user login
2013-12-06 14:40:35 -08:00
Andy Rothfusz
07f7643bbc Merge pull request #3030 from jamtur01/versions
Fixed #2136 - Added styles
2013-12-06 14:27:53 -08:00
Victor Vieux
228091c79e added authConfig to docker build 2013-12-06 14:27:10 -08:00
Andy Rothfusz
6fa1463614 Merge pull request #3094 from tang0th/patch-1
Update archlinux.rst as packages have changed
2013-12-06 14:18:31 -08:00
Guillaume J. Charmes
9b644ff246 Merge pull request #3096 from dotcloud/fix_fix_jsonmessage
fix jsonmessage in build
2013-12-06 14:10:24 -08:00
Victor Vieux
2c646b2d46 disable progressbar in non-terminal 2013-12-06 14:09:27 -08:00
Victor Vieux
becb13dc26 update doc 2013-12-06 14:09:27 -08:00
Victor Vieux
05f416d869 fix jsonmessage in build 2013-12-06 14:09:27 -08:00
Andy Rothfusz
7fd64e0196 Merge pull request #3088 from SvenDowideit/start-cmdline-examples-with-dollar-for-easier-testing
change the policy wrt $ sudo docker to simplify auto-testing
2013-12-06 13:37:02 -08:00
Sven Dowideit
13da09d22b change the policy wrt $ sudo docker to simplify auto-testing 2013-12-07 07:23:53 +10:00
Josh Hawn
6720bfb243 Adjusted handling of inactive user login
The return status for inactive users was being checked
too early in the process, so I moved it from just after
the handling of POST /v1/users/ to after getting the
response from GET /v1/users/
2013-12-06 11:57:05 -08:00
Andy Rothfusz
d75fc6e529 Merge pull request #3071 from lsm5/fedora-docs-update
use mattdm/fedora in fedora doc and other cosmetic changes
2013-12-06 11:25:10 -08:00
Andy Rothfusz
4a148919c3 Merge pull request #3052 from shawnl/patch-1
nftables dependencies in kernel
2013-12-06 11:02:49 -08:00
Guillaume J. Charmes
c7d75588f4 Merge pull request #3079 from crosbymichael/give-engine-noop-tests
Enable engine to take Stderr and Stdout for mocking in tests
2013-12-06 10:43:39 -08:00
Guillaume J. Charmes
dfade9e2d8 Merge pull request #3095 from jpoimboe/missing-defines
devmapper: add missing defines
2013-12-06 10:31:42 -08:00
Guillaume J. Charmes
b655406faa Merge pull request #3085 from tianon/fix-cgroup-dep
Revert "Add cgroup-bin dependency to our Ubuntu package"
2013-12-06 09:15:41 -08:00
Josh Poimboeuf
a015f38f4a devmapper: add missing defines
Add some missing defines which are needed for compiling on older systems
like RHEL 6.
2013-12-06 10:13:47 -06:00
tang0th
02ef8ec3ca Update archlinux.rst as packages have changed
The docker package has been added into the Arch Linux community repo, this means that the package names and installation instructions have slightly changed.
2013-12-06 15:47:24 +00:00
Michael Crosby
25d3db048e Enable engine to take Stderr and Stdout for mocking in tests 2013-12-06 01:18:18 -08:00
Shawn Landden
a69bb25820 specific kernel config 2013-12-05 23:54:23 -08:00
Andy Rothfusz
5f5949f6a6 Merge pull request #3086 from metalivedev/3045-addmirrors
Add debian mirrors. Fixes #3045.
2013-12-05 18:16:28 -08:00
Andy Rothfusz
58b75f8f29 Add debian mirrors. Fixes #3045. 2013-12-05 18:08:56 -08:00
Tianon Gravi
aea7418d8a Revert "Add cgroup-bin dependency to our Ubuntu package"
This reverts commit c81bb20f5b.

After re-reading the documentation: "The Recommends field should list packages that would be found together with this one in all but unusual installations."

Thus, "Recommends" is an acceptable place for this dep, and anyone disabling that gets to keep the pieces.

The main crux of why this needs to be reverted is because it breaks Debian completely because "lxc" and "cgroup-bin" can't be installed concurrently.
2013-12-05 19:03:47 -07:00
Andy Rothfusz
f9147effac Merge pull request #3069 from proppy/patch-1
docs/installation/google: add enabling Google Compute Engine step
2013-12-05 17:40:42 -08:00
Andy Rothfusz
0e2b0f284c Merge pull request #3001 from dotcloud/api_json
add docs for the new json format
2013-12-05 17:35:51 -08:00
Andy Rothfusz
80dfa23da8 Merge pull request #3051 from pariviere/2490-docs-network
Network documentation page
2013-12-05 17:29:54 -08:00
Andy Rothfusz
4bea68dfa6 Clean up quoting, wraps, and build error on code-block. 2013-12-05 17:16:31 -08:00
Andy Rothfusz
ea0ed9a915 Merge branch 'docker-run-prose-2149' of github.com:SvenDowideit/docker into 3036-test 2013-12-05 17:03:26 -08:00
Pierre-Alain RIVIERE
eac95671f5 refs #2490 : add a network page to docs 2013-12-05 23:40:33 +01:00
Lokesh Mandvekar
7ab4f37d60 separate block for yum update docker
Signed-off-by: Lokesh Mandvekar <lsm5@redhat.com>
2013-12-05 16:11:33 -06:00
Lokesh Mandvekar
5d022f0445 add unofficial header back, yum update docker
Signed-off-by: Lokesh Mandvekar <lsm5@redhat.com>
2013-12-05 16:08:08 -06:00
Lokesh Mandvekar
61fbf3d8e2 yum upgrade on fedora not required before install
Signed-off-by: Lokesh Mandvekar <lsm5@redhat.com>
2013-12-05 14:21:03 -06:00
Lokesh Mandvekar
f49eb29497 use mattdm/fedora in fedora doc and other cosmetic changes
Signed-off-by: Lokesh Mandvekar <lsm5@redhat.com>
2013-12-05 14:03:17 -06:00
James Turnbull
03f8a3bbae Fixed #2136 - Added styles
Added styling for versionadded, versionchanged, and
deprecated.
2013-12-05 13:46:15 -05:00
Victor Vieux
f95f2789f2 add docs for the new json format 2013-12-05 10:12:22 -08:00
Johan Euphrosine
5a17c208cd docs/installation/google: add enabling Google Compute Engine step 2013-12-05 09:43:08 -08:00
Sven Dowideit
04c32495f6 add a little prose to tell the user that run creates a container, and then starts it 2013-12-05 14:20:16 +10:00
Michael Crosby
6d34c50e89 Increase max image depth to 127 2013-11-26 17:04:55 -08:00
196 changed files with 6262 additions and 3845 deletions


@@ -20,6 +20,7 @@ Antony Messerli <amesserl@rackspace.com>
Asbjørn Enge <asbjorn@hanafjedle.net>
Barry Allard <barry.allard@gmail.com>
Ben Toews <mastahyeti@gmail.com>
Ben Wiklund <ben@daisyowl.com>
Benoit Chesneau <bchesneau@gmail.com>
Bhiraj Butala <abhiraj.butala@gmail.com>
Bouke Haarsma <bouke@webatoom.nl>
@@ -47,6 +48,7 @@ Daniel YC Lin <dlin.tw@gmail.com>
Darren Coxall <darren@darrencoxall.com>
David Calavera <david.calavera@gmail.com>
David Sissitka <me@dsissitka.com>
Dinesh Subhraveti <dineshs@altiscale.com>
Deni Bertovic <deni@kset.org>
Dominik Honnef <dominik@honnef.co>
Don Spaulding <donspauldingii@gmail.com>
@@ -68,6 +70,7 @@ Francisco Souza <f@souza.cc>
Frederick F. Kautz IV <fkautz@alumni.cmu.edu>
Gabriel Monroy <gabriel@opdemand.com>
Gareth Rushgrove <gareth@morethanseven.net>
Graydon Hoare <graydon@pobox.com>
Greg Thornton <xdissent@me.com>
Guillaume J. Charmes <guillaume.charmes@dotcloud.com>
Gurjeet Singh <gurjeet@singh.im>
@@ -113,6 +116,7 @@ Kyle Conroy <kyle.j.conroy@gmail.com>
Laurie Voss <github@seldo.com>
Louis Opter <kalessin@kalessin.fr>
Manuel Meurer <manuel@krautcomputing.com>
Manuel Woelker <docker@manuel.woelker.org>
Marco Hennings <marco.hennings@freiheit.com>
Marcus Farkas <toothlessgear@finitebox.com>
Marcus Ramberg <marcus@nordaaker.com>


@@ -1,5 +1,104 @@
# Changelog
## 0.7.3 (2014-01-02)
#### Builder
+ Update ADD to use the image cache, based on a hash of the added content
* Add error message for empty Dockerfile
#### Documentation
- Fix outdated link to the "Introduction" on www.docker.io
+ Update the docs to get wider when the screen does
- Add information about needing to install LXC when using raw binaries
* Update Fedora documentation to disentangle the docker and docker.io conflict
* Add a note about using the new `-mtu` flag in several GCE zones
+ Add FrugalWare installation instructions
+ Add a more complete example of `docker run`
- Fix API documentation for creating and starting Privileged containers
- Add missing "name" parameter documentation on "/containers/create"
* Add a mention of `lxc-checkconfig` as a way to check for some of the necessary kernel configuration
- Update the 1.8 API documentation with some additions that had only been made to the 1.7 docs
#### Hack
- Add missing libdevmapper dependency to the packagers documentation
* Update minimum Go requirement to a hard line at Go 1.2+
* Many minor improvements to the Vagrantfile
+ Add ability to customize dockerinit search locations when compiling (to be used very sparingly only by packagers of platforms who require a nonstandard location)
+ Add coverprofile generation reporting
- Add `-a` to our Go build flags, removing the need for recompiling the stdlib manually
* Update Dockerfile to be more canonical and have less spurious warnings during build
- Fix some miscellaneous `docker pull` progress bar display issues
* Migrate more miscellaneous packages under the "pkg" folder
* Update TextMate highlighting to automatically be enabled for files named "Dockerfile"
* Reorganize syntax highlighting files under a common "contrib/syntax" directory
* Update install.sh script (https://get.docker.io/) to not fail if busybox fails to download or run at the end of the Ubuntu/Debian installation
* Add support for container names in bash completion
#### Packaging
+ Add an official Docker client binary for Darwin (Mac OS X)
* Remove empty "Vendor" string and add "License" on deb package
+ Add a stubbed version of "/etc/default/docker" in the deb package
#### Runtime
* Update layer application to extract tars in place, avoiding file churn while handling whiteouts
- Fix permissiveness of mtime comparisons in tar handling (since GNU tar and Go tar do not yet support sub-second mtime precision)
* Reimplement `docker top` in pure Go to work more consistently, and even inside Docker-in-Docker (thus removing the shell injection vulnerability present in some versions of `lxc-ps`)
+ Update `-H unix://` to work similarly to `-H tcp://` by inserting the default values for missing portions
- Fix more edge cases regarding dockerinit and deleted or replaced docker or dockerinit files
* Update container name validation to include '.'
- Fix use of a symlink or non-absolute path as the argument to `-g` to work as expected
* Update to handle external mounts outside of LXC, fixing many small mounting quirks and making future execution backends and other features simpler
* Update to use proper box-drawing characters everywhere in `docker images -tree`
* Move MTU setting from LXC configuration to directly use netlink
* Add `-S` option to external tar invocation for more efficient sparse file handling
+ Add arch/os info to User-Agent string, especially for registry requests
+ Add `-mtu` option to Docker daemon for configuring MTU
- Fix `docker build` to exit with a non-zero exit code on error
+ Add `DOCKER_HOST` environment variable to configure the client `-H` flag without specifying it manually for every invocation
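(The `DOCKER_HOST` entry above is, at heart, a client-side default lookup. A minimal sketch of the idea in Go, assuming the client's usual default socket path; this is an illustration, not the actual flag-handling code in commands.go:)

    package main

    import (
            "fmt"
            "os"
    )

    func main() {
            // Fall back to DOCKER_HOST when no -H flag is given,
            // then to the default unix socket.
            host := os.Getenv("DOCKER_HOST")
            if host == "" {
                    host = "unix:///var/run/docker.sock"
            }
            fmt.Println("connecting to", host)
    }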
## 0.7.2 (2013-12-16)
#### Runtime
+ Validate container names on creation with standard regex (see the sketch after this list)
* Increase maximum image depth to 127 from 42
* Continue to move api endpoints to the job api
+ Add -bip flag to allow specification of dynamic bridge IP via CIDR
- Allow bridge creation when ipv6 is not enabled on certain systems
* Set hostname and IP address from within dockerinit
* Drop capabilities from within dockerinit
- Fix volumes on host when a symlink is present in the image
- Prevent deletion of image if ANY container is depending on it even if the container is not running
* Update docker push to use new progress display
* Use os.Lstat to allow mounting unix sockets when inspecting volumes
- Adjust handling of inactive user login
- Add missing defines in devicemapper for older kernels
- Allow untag operations with no container validation
- Add auth config to docker build
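(A short sketch of the kind of name validation described in the Runtime list above. The exact regex shipped in 0.7.2 is not reproduced here; this assumes a conservative pattern, and 0.7.3 later extends the allowed characters to include '.':)

    package main

    import (
            "fmt"
            "regexp"
    )

    // Assumed pattern, for illustration only; the shipped regex may differ.
    var validContainerName = regexp.MustCompile(`^/?[a-zA-Z0-9_-]+$`)

    func main() {
            for _, name := range []string{"web_1", "bad name!"} {
                    fmt.Printf("%q valid: %v\n", name, validContainerName.MatchString(name))
            }
    }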
#### Documentation
* Add more information about Docker logging
+ Add RHEL documentation
* Add a direct example for changing the CMD that is run in a container
* Update Arch installation documentation
+ Add section on Trusted Builds
+ Add Network documentation page
#### Other
+ Add new cover bundle for providing code coverage reporting
* Separate integration tests in bundles
* Make Tianon the hack maintainer
* Update mkimage-debootstrap with more tweaks for keeping images small
* Use https to get the install script
* Remove vendored dotcloud/tar now that Go 1.2 has been released
## 0.7.1 (2013-12-05)
#### Documentation
@@ -72,7 +171,7 @@
#### Runtime
* Improved stability, fixes some race conditions
* Improve stability, fixes some race conditions
* Skip the volumes mounted when deleting the volumes of container.
* Fix layer size computation: handle hard links correctly
* Use the work Path for docker cp CONTAINER:PATH
@@ -115,7 +214,7 @@
+ Add lock around write operations in graph
* Check if port is valid
* Fix restart runtime error with ghost container networking
+ Added some more colors and animals to increase the pool of generated names
+ Add some more colors and animals to increase the pool of generated names
* Fix issues in docker inspect
+ Escape apparmor confinement
+ Set environment variables using a file.
@@ -269,7 +368,7 @@
* Improve network performance for VirtualBox
* Revamp install.sh to be usable by more people, and to use official install methods whenever possible (apt repo, portage tree, etc.)
- Fix contrib/mkimage-debian.sh apt caching prevention
+ Added Dockerfile.tmLanguage to contrib
+ Add Dockerfile.tmLanguage to contrib
* Configured FPM to make /etc/init/docker.conf a config file
* Enable SSH Agent forwarding in Vagrant VM
* Several small tweaks/fixes for contrib/mkimage-debian.sh
@@ -383,7 +482,7 @@
* Mount /dev/shm as a tmpfs
- Switch from http to https for get.docker.io
* Let userland proxy handle container-bound traffic
* Updated the Docker CLI to specify a value for the "Host" header.
* Update the Docker CLI to specify a value for the "Host" header.
- Change network range to avoid conflict with EC2 DNS
- Reduce connect and read timeout when pinging the registry
* Parallel pull
@@ -579,7 +678,7 @@
+ Builder: 'docker build git://URL' fetches and builds a remote git repository
* Runtime: 'docker ps -s' optionally prints container size
* Tests: Improved and simplified
* Tests: improved and simplified
- Runtime: fix a regression introduced in 0.4.3 which caused the logs command to fail.
- Builder: fix a regression when using ADD with single regular file.
@@ -594,7 +693,7 @@
+ ADD of a local file will detect tar archives and unpack them
* ADD improvements: use tar for copy + automatically unpack local archives
* ADD uses tar/untar for copies instead of calling 'cp -ar'
* Fixed the behavior of ADD to be (mostly) reverse-compatible, predictable and well-documented.
* Fix the behavior of ADD to be (mostly) reverse-compatible, predictable and well-documented.
- Fix a bug which caused builds to fail if ADD was the first command
* Nicer output for 'docker build'
@@ -639,7 +738,7 @@
+ Detect faulty DNS configuration and replace it with a public default
+ Allow docker run <name>:<id>
+ You can now specify public port (ex: -p 80:4500)
* Improved image removal to garbage-collect unreferenced parents
* Improve image removal to garbage-collect unreferenced parents
#### Client
@@ -693,7 +792,7 @@
#### Documentation
* Improved install instructions.
* Improve install instructions.
## 0.3.3 (2013-05-23)
@@ -778,7 +877,7 @@
+ Support for data volumes ('docker run -v=PATH')
+ Share data volumes between containers ('docker run -volumes-from')
+ Improved documentation
+ Improve documentation
* Upgrade to Go 1.0.3
* Various upgrades to the dev environment for contributors
@@ -834,7 +933,7 @@
- Add debian packaging
- Documentation: installing on Arch Linux
- Documentation: running Redis on docker
- Fixed lxc 0.9 compatibility
- Fix lxc 0.9 compatibility
- Automatically load aufs module
- Various bugfixes and stability improvements
@@ -869,7 +968,7 @@
- Stabilize process management
- Layers can include a commit message
- Simplified 'docker attach'
- Fixed support for re-attaching
- Fix support for re-attaching
- Various bugfixes and stability improvements
- Auto-download at run
- Auto-login on push


@@ -24,40 +24,32 @@
#
docker-version 0.6.1
FROM ubuntu:12.04
MAINTAINER Solomon Hykes <solomon@dotcloud.com>
FROM stackbrew/ubuntu:12.04
MAINTAINER Tianon Gravi <admwiggin@gmail.com> (@tianon)
# Build dependencies
RUN echo 'deb http://archive.ubuntu.com/ubuntu precise main universe' > /etc/apt/sources.list
RUN apt-get update
RUN apt-get install -y -q curl
RUN apt-get install -y -q git
RUN apt-get install -y -q mercurial
RUN apt-get install -y -q build-essential libsqlite3-dev
# Add precise-backports to get s3cmd >= 1.1.0 (so we get ENV variable support in our .s3cfg)
RUN echo 'deb http://archive.ubuntu.com/ubuntu precise-backports main universe' > /etc/apt/sources.list.d/backports.list
# Install Go
RUN curl -s https://go.googlecode.com/files/go1.2.src.tar.gz | tar -v -C /usr/local -xz
ENV PATH /usr/local/go/bin:/usr/local/bin:/usr/local/sbin:/usr/bin:/usr/sbin:/bin:/sbin
ENV GOPATH /go:/go/src/github.com/dotcloud/docker/vendor
RUN cd /usr/local/go/src && ./make.bash && go install -ldflags '-w -linkmode external -extldflags "-static -Wl,--unresolved-symbols=ignore-in-shared-libs"' -tags netgo -a std
# Ubuntu stuff
RUN apt-get install -y -q ruby1.9.3 rubygems libffi-dev
RUN gem install --no-rdoc --no-ri fpm
RUN apt-get install -y -q reprepro dpkg-sig
RUN apt-get install -y -q python-pip
RUN pip install s3cmd==1.1.0-beta3
RUN pip install python-magic==0.4.6
RUN /bin/echo -e '[default]\naccess_key=$AWS_ACCESS_KEY\nsecret_key=$AWS_SECRET_KEY\n' > /.s3cfg
# Runtime dependencies
RUN apt-get install -y -q iptables
RUN apt-get install -y -q lxc
RUN apt-get install -y -q aufs-tools
# Packaged dependencies
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -yq \
apt-utils \
aufs-tools \
build-essential \
curl \
dpkg-sig \
git \
iptables \
libsqlite3-dev \
lxc \
mercurial \
reprepro \
ruby1.9.1 \
ruby1.9.1-dev \
s3cmd=1.1.0* \
--no-install-recommends
# Get lvm2 source for compiling statically
RUN git clone https://git.fedorahosted.org/git/lvm2.git /usr/local/lvm2 && cd /usr/local/lvm2 && git checkout v2_02_103
RUN git clone https://git.fedorahosted.org/git/lvm2.git /usr/local/lvm2 && cd /usr/local/lvm2 && git checkout -q v2_02_103
# see https://git.fedorahosted.org/cgit/lvm2.git/refs/tags for release tags
# note: we can't use "git clone -b" above because it requires at least git 1.7.10 to be able to use that on a tag instead of a branch and we only have 1.7.9.5
@@ -65,6 +57,26 @@ RUN git clone https://git.fedorahosted.org/git/lvm2.git /usr/local/lvm2 && cd /u
RUN cd /usr/local/lvm2 && ./configure --enable-static_link && make device-mapper && make install_device-mapper
# see https://git.fedorahosted.org/cgit/lvm2.git/tree/INSTALL
# Install Go
RUN curl -s https://go.googlecode.com/files/go1.2.src.tar.gz | tar -v -C /usr/local -xz
ENV PATH /usr/local/go/bin:$PATH
ENV GOPATH /go:/go/src/github.com/dotcloud/docker/vendor
RUN cd /usr/local/go/src && ./make.bash --no-clean 2>&1
# Compile Go for cross compilation
ENV DOCKER_CROSSPLATFORMS darwin/amd64 darwin/386
# TODO add linux/386 and linux/arm
RUN cd /usr/local/go/src && bash -xc 'for platform in $DOCKER_CROSSPLATFORMS; do GOOS=${platform%/*} GOARCH=${platform##*/} ./make.bash --no-clean 2>&1; done'
# Grab Go's cover tool for dead-simple code coverage testing
RUN go get code.google.com/p/go.tools/cmd/cover
# TODO replace FPM with some very minimal debhelper stuff
RUN gem install --no-rdoc --no-ri fpm --version 1.0.1
# Setup s3cmd config
RUN /bin/echo -e '[default]\naccess_key=$AWS_ACCESS_KEY\nsecret_key=$AWS_SECRET_KEY' > /.s3cfg
VOLUME /var/lib/docker
WORKDIR /go/src/github.com/dotcloud/docker


@@ -3,4 +3,6 @@ Guillaume Charmes <guillaume@dotcloud.com> (@creack)
Victor Vieux <victor@dotcloud.com> (@vieux)
Michael Crosby <michael@crosbymichael.com> (@crosbymichael)
api.go: Victor Vieux <victor@dotcloud.com> (@vieux)
Dockerfile: Tianon Gravi <admwiggin@gmail.com> (@tianon)
Makefile: Tianon Gravi <admwiggin@gmail.com> (@tianon)
Vagrantfile: Daniel Mizyrycki <daniel@dotcloud.com> (@mzdaniel)


@@ -1,4 +1,4 @@
.PHONY: all binary build default doc shell test
.PHONY: all binary build cross default docs shell test
DOCKER_RUN_DOCKER := docker run -rm -i -t -privileged -e TESTFLAGS -v $(CURDIR)/bundles:/go/src/github.com/dotcloud/docker/bundles docker
@@ -10,11 +10,14 @@ all: build
binary: build
$(DOCKER_RUN_DOCKER) hack/make.sh binary
doc:
cross: build
$(DOCKER_RUN_DOCKER) hack/make.sh binary cross
docs:
docker build -t docker-docs docs && docker run -p 8000:8000 docker-docs
test: build
$(DOCKER_RUN_DOCKER) hack/make.sh test
$(DOCKER_RUN_DOCKER) hack/make.sh test test-integration
shell: build
$(DOCKER_RUN_DOCKER) bash


@@ -1 +1 @@
0.7.1
0.7.3

Vagrantfile (vendored)

@@ -26,7 +26,7 @@ fi
# Adding an apt gpg key is idempotent.
wget -q -O - https://get.docker.io/gpg | apt-key add -
# Creating the docker.list file is idempotent, but it may overrite desired
# Creating the docker.list file is idempotent, but it may overwrite desired
# settings if it already exists. This could be solved with md5sum but it
# doesn't seem worth it.
echo 'deb http://get.docker.io/ubuntu docker main' > \
@@ -41,7 +41,7 @@ apt-get install -q -y lxc-docker
usermod -a -G docker "$user"
tmp=`mktemp -q` && {
# Only install the backport kernel, don't bother upgrade if the backport is
# Only install the backport kernel, don't bother upgrading if the backport is
# already installed. We want to parse the output of apt so we need to save it
# with 'tee'. NOTE: The installation of the kernel will trigger dkms to
# install vboxguest if needed.
@@ -70,7 +70,7 @@ SCRIPT
# trigger dkms to build the virtualbox guest module install.
$vbox_script = <<VBOX_SCRIPT + $script
# Install the VirtualBox guest additions if they aren't already installed.
if [ ! -d /opt/VBoxGuestAdditions-4.3.2/ ]; then
if [ ! -d /opt/VBoxGuestAdditions-4.3.4/ ]; then
# Update remote package metadata. 'apt-get update' is idempotent.
apt-get update -q
@@ -79,9 +79,10 @@ if [ ! -d /opt/VBoxGuestAdditions-4.3.2/ ]; then
apt-get install -q -y linux-headers-generic-lts-raring dkms
echo 'Downloading VBox Guest Additions...'
wget -cq http://dlc.sun.com.edgesuite.net/virtualbox/4.3.2/VBoxGuestAdditions_4.3.2.iso
wget -cq http://dlc.sun.com.edgesuite.net/virtualbox/4.3.4/VBoxGuestAdditions_4.3.4.iso
echo "f120793fa35050a8280eacf9c930cf8d9b88795161520f6515c0cc5edda2fe8a VBoxGuestAdditions_4.3.4.iso" | sha256sum --check || exit 1
mount -o loop,ro /home/vagrant/VBoxGuestAdditions_4.3.2.iso /mnt
mount -o loop,ro /home/vagrant/VBoxGuestAdditions_4.3.4.iso /mnt
/mnt/VBoxLinuxAdditions.run --nox11
umount /mnt
fi

api.go

@@ -10,7 +10,7 @@ import (
"fmt"
"github.com/dotcloud/docker/archive"
"github.com/dotcloud/docker/auth"
"github.com/dotcloud/docker/systemd"
"github.com/dotcloud/docker/pkg/systemd"
"github.com/dotcloud/docker/utils"
"github.com/gorilla/mux"
"io"
@@ -140,7 +140,8 @@ func postAuth(srv *Server, version float64, w http.ResponseWriter, r *http.Reque
}
func getVersion(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
return writeJSON(w, http.StatusOK, srv.DockerVersion())
srv.Eng.ServeHTTP(w, r)
return nil
}
func postContainersKill(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
@@ -150,19 +151,11 @@ func postContainersKill(srv *Server, version float64, w http.ResponseWriter, r *
if err := parseForm(r); err != nil {
return err
}
name := vars["name"]
signal := 0
if r != nil {
if s := r.Form.Get("signal"); s != "" {
s, err := strconv.Atoi(s)
if err != nil {
return err
}
signal = s
}
job := srv.Eng.Job("kill", vars["name"])
if sig := r.Form.Get("signal"); sig != "" {
job.Args = append(job.Args, sig)
}
if err := srv.ContainerKill(name, signal); err != nil {
if err := job.Run(); err != nil {
return err
}
w.WriteHeader(http.StatusNoContent)
@@ -173,10 +166,11 @@ func getContainersExport(srv *Server, version float64, w http.ResponseWriter, r
if vars == nil {
return fmt.Errorf("Missing parameter")
}
name := vars["name"]
if err := srv.ContainerExport(name, w); err != nil {
utils.Errorf("%s", err)
job := srv.Eng.Job("export", vars["name"])
if err := job.Stdout.Add(w); err != nil {
return err
}
if err := job.Run(); err != nil {
return err
}
return nil
@@ -222,7 +216,8 @@ func getImagesViz(srv *Server, version float64, w http.ResponseWriter, r *http.R
}
func getInfo(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
return writeJSON(w, http.StatusOK, srv.DockerInfo())
srv.Eng.ServeHTTP(w, r)
return nil
}
func getEvents(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
@@ -362,18 +357,13 @@ func postImagesTag(srv *Server, version float64, w http.ResponseWriter, r *http.
if err := parseForm(r); err != nil {
return err
}
repo := r.Form.Get("repo")
tag := r.Form.Get("tag")
if vars == nil {
return fmt.Errorf("Missing parameter")
}
name := vars["name"]
force, err := getBoolParam(r.Form.Get("force"))
if err != nil {
return err
}
if err := srv.ContainerTag(name, repo, tag, force); err != nil {
job := srv.Eng.Job("tag", vars["name"], r.Form.Get("repo"), r.Form.Get("tag"))
job.Setenv("force", r.Form.Get("force"))
if err := job.Run(); err != nil {
return err
}
w.WriteHeader(http.StatusCreated)
@@ -388,13 +378,17 @@ func postCommit(srv *Server, version float64, w http.ResponseWriter, r *http.Req
if err := json.NewDecoder(r.Body).Decode(config); err != nil && err != io.EOF {
utils.Errorf("%s", err)
}
repo := r.Form.Get("repo")
tag := r.Form.Get("tag")
container := r.Form.Get("container")
author := r.Form.Get("author")
comment := r.Form.Get("comment")
id, err := srv.ContainerCommit(container, repo, tag, author, comment, config)
if err != nil {
job := srv.Eng.Job("commit", r.Form.Get("container"))
job.Setenv("repo", r.Form.Get("repo"))
job.Setenv("tag", r.Form.Get("tag"))
job.Setenv("author", r.Form.Get("author"))
job.Setenv("comment", r.Form.Get("comment"))
job.SetenvJson("config", config)
var id string
job.Stdout.AddString(&id)
if err := job.Run(); err != nil {
return err
}
@@ -689,17 +683,12 @@ func postContainersStop(srv *Server, version float64, w http.ResponseWriter, r *
if err := parseForm(r); err != nil {
return err
}
t, err := strconv.Atoi(r.Form.Get("t"))
if err != nil || t < 0 {
t = 10
}
if vars == nil {
return fmt.Errorf("Missing parameter")
}
name := vars["name"]
if err := srv.ContainerStop(name, t); err != nil {
job := srv.Eng.Job("stop", vars["name"])
job.Setenv("t", r.Form.Get("t"))
if err := job.Run(); err != nil {
return err
}
w.WriteHeader(http.StatusNoContent)
@@ -710,33 +699,28 @@ func postContainersWait(srv *Server, version float64, w http.ResponseWriter, r *
if vars == nil {
return fmt.Errorf("Missing parameter")
}
name := vars["name"]
status, err := srv.ContainerWait(name)
job := srv.Eng.Job("wait", vars["name"])
var statusStr string
job.Stdout.AddString(&statusStr)
if err := job.Run(); err != nil {
return err
}
// Parse a 16-bit encoded integer to map typical unix exit status.
status, err := strconv.ParseInt(statusStr, 10, 16)
if err != nil {
return err
}
return writeJSON(w, http.StatusOK, &APIWait{StatusCode: status})
return writeJSON(w, http.StatusOK, &APIWait{StatusCode: int(status)})
}
func postContainersResize(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := parseForm(r); err != nil {
return err
}
height, err := strconv.Atoi(r.Form.Get("h"))
if err != nil {
return err
}
width, err := strconv.Atoi(r.Form.Get("w"))
if err != nil {
return err
}
if vars == nil {
return fmt.Errorf("Missing parameter")
}
name := vars["name"]
if err := srv.ContainerResize(name, height, width); err != nil {
if err := srv.Eng.Job("resize", vars["name"], r.Form.Get("h"), r.Form.Get("w")).Run(); err != nil {
return err
}
return nil
@@ -905,12 +889,25 @@ func postBuild(srv *Server, version float64, w http.ResponseWriter, r *http.Requ
if version < 1.3 {
return fmt.Errorf("Multipart upload for build is no longer supported. Please upgrade your docker client.")
}
remoteURL := r.FormValue("remote")
repoName := r.FormValue("t")
rawSuppressOutput := r.FormValue("q")
rawNoCache := r.FormValue("nocache")
rawRm := r.FormValue("rm")
repoName, tag := utils.ParseRepositoryTag(repoName)
var (
remoteURL = r.FormValue("remote")
repoName = r.FormValue("t")
rawSuppressOutput = r.FormValue("q")
rawNoCache = r.FormValue("nocache")
rawRm = r.FormValue("rm")
authEncoded = r.Header.Get("X-Registry-Auth")
authConfig = &auth.AuthConfig{}
tag string
)
repoName, tag = utils.ParseRepositoryTag(repoName)
if authEncoded != "" {
authJson := base64.NewDecoder(base64.URLEncoding, strings.NewReader(authEncoded))
if err := json.NewDecoder(authJson).Decode(authConfig); err != nil {
// for a pull it is not an error if no auth was given;
// to remain compatible with the existing api it defaults to empty
authConfig = &auth.AuthConfig{}
}
}
var context io.Reader
@@ -978,7 +975,7 @@ func postBuild(srv *Server, version float64, w http.ResponseWriter, r *http.Requ
Writer: utils.NewWriteFlusher(w),
StreamFormatter: sf,
},
!suppressOutput, !noCache, rm, utils.NewWriteFlusher(w), sf)
!suppressOutput, !noCache, rm, utils.NewWriteFlusher(w), sf, authConfig)
id, err := b.Build(context)
if err != nil {
if sf.Used() {
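(The hunks above repeat one conversion over and over: replace a direct srv.XXX call with a named engine job. Condensed, using only the engine API visible in this diff:)

    // job name plus positional args
    job := srv.Eng.Job("stop", vars["name"])
    // string parameters travel as environment values
    job.Setenv("t", r.Form.Get("t"))
    // capture the job's stdout into a string when a result is needed
    var out string
    job.Stdout.AddString(&out)
    if err := job.Run(); err != nil {
            return err
    }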


@@ -29,23 +29,6 @@ type (
VirtualSize int64
}
APIInfo struct {
Debug bool
Containers int
Images int
Driver string `json:",omitempty"`
DriverStatus [][2]string `json:",omitempty"`
NFd int `json:",omitempty"`
NGoroutines int `json:",omitempty"`
MemoryLimit bool `json:",omitempty"`
SwapLimit bool `json:",omitempty"`
IPv4Forwarding bool `json:",omitempty"`
LXCVersion string `json:",omitempty"`
NEventsListener int `json:",omitempty"`
KernelVersion string `json:",omitempty"`
IndexServerAddress string `json:",omitempty"`
}
APITop struct {
Titles []string
Processes [][]string
@@ -95,12 +78,6 @@ type (
IP string
}
APIVersion struct {
Version string
GitCommit string `json:",omitempty"`
GoVersion string `json:",omitempty"`
}
APIWait struct {
StatusCode int
}


@@ -3,6 +3,8 @@ package archive
import (
"archive/tar"
"bytes"
"compress/gzip"
"compress/bzip2"
"fmt"
"github.com/dotcloud/docker/utils"
"io"
@@ -59,6 +61,43 @@ func DetectCompression(source []byte) Compression {
return Uncompressed
}
func xzDecompress(archive io.Reader) (io.Reader, error) {
args := []string{"xz", "-d", "-c", "-q"}
return CmdStream(exec.Command(args[0], args[1:]...), archive, nil)
}
func DecompressStream(archive io.Reader) (io.Reader, error) {
buf := make([]byte, 10)
totalN := 0
for totalN < 10 {
n, err := archive.Read(buf[totalN:])
if err != nil {
if err == io.EOF {
return nil, fmt.Errorf("Tarball too short")
}
return nil, err
}
totalN += n
utils.Debugf("[tar autodetect] n: %d", n)
}
compression := DetectCompression(buf)
wrap := io.MultiReader(bytes.NewReader(buf), archive)
switch compression {
case Uncompressed:
return wrap, nil
case Gzip:
return gzip.NewReader(wrap)
case Bzip2:
return bzip2.NewReader(wrap), nil
case Xz:
return xzDecompress(wrap)
default:
return nil, fmt.Errorf("Unsupported compression format %s", (&compression).Extension())
}
}
func (compression *Compression) Flag() string {
switch *compression {
case Bzip2:
@@ -110,7 +149,7 @@ func escapeName(name string) string {
// Tar creates an archive from the directory at `path`, only including files whose relative
// paths are included in `filter`. If `filter` is nil, then all files are included.
func TarFilter(path string, options *TarOptions) (io.Reader, error) {
args := []string{"tar", "--numeric-owner", "-f", "-", "-C", path, "-T", "-"}
args := []string{"tar", "-S", "--numeric-owner", "-f", "-", "-C", path, "-T", "-"}
if options.Includes == nil {
options.Includes = []string{"."}
}
@@ -155,7 +194,7 @@ func TarFilter(path string, options *TarOptions) (io.Reader, error) {
}
}
return CmdStream(exec.Command(args[0], args[1:]...), &files, func() {
return CmdStream(exec.Command(args[0], args[1:]...), bytes.NewBufferString(files), func() {
if tmpDir != "" {
_ = os.RemoveAll(tmpDir)
}
@@ -189,7 +228,7 @@ func Untar(archive io.Reader, path string, options *TarOptions) error {
compression := DetectCompression(buf)
utils.Debugf("Archive compression detected: %s", compression.Extension())
args := []string{"--numeric-owner", "-f", "-", "-C", path, "-x" + compression.Flag()}
args := []string{"-S", "--numeric-owner", "-f", "-", "-C", path, "-x" + compression.Flag()}
if options != nil {
for _, exclude := range options.Excludes {
@@ -301,7 +340,7 @@ func CopyFileWithTar(src, dst string) error {
// CmdStream executes a command, and returns its stdout as a stream.
// If the command fails to run or doesn't complete successfully, an error
// will be returned, including anything written on stderr.
func CmdStream(cmd *exec.Cmd, input *string, atEnd func()) (io.Reader, error) {
func CmdStream(cmd *exec.Cmd, input io.Reader, atEnd func()) (io.Reader, error) {
if input != nil {
stdin, err := cmd.StdinPipe()
if err != nil {
@@ -312,7 +351,7 @@ func CmdStream(cmd *exec.Cmd, input *string, atEnd func()) (io.Reader, error) {
}
// Write stdin if any
go func() {
_, _ = stdin.Write([]byte(*input))
io.Copy(stdin, input)
stdin.Close()
}()
}
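(A minimal usage sketch for the DecompressStream helper added above, assuming only the signature shown in this diff and the tree's import path; the file name is illustrative:)

    package main

    import (
            "io"
            "os"

            "github.com/dotcloud/docker/archive"
    )

    func main() {
            // Works for gzip, bzip2, xz, or uncompressed input,
            // since DecompressStream sniffs the leading bytes itself.
            f, err := os.Open("layer.tar.gz")
            if err != nil {
                    panic(err)
            }
            defer f.Close()
            r, err := archive.DecompressStream(f)
            if err != nil {
                    panic(err)
            }
            if _, err := io.Copy(os.Stdout, r); err != nil {
                    panic(err)
            }
    }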


@@ -6,6 +6,7 @@ import (
"path/filepath"
"strings"
"syscall"
"time"
)
type ChangeType int
@@ -34,6 +35,21 @@ func (change *Change) String() string {
return fmt.Sprintf("%s %s", kind, change.Path)
}
// Gnu tar and the go tar writer don't have sub-second mtime
// precision, which is problematic when we apply changes via tar
// files; we handle this by comparing for exact times, *or* the same
// second count and either a or b having exactly 0 nanoseconds
func sameFsTime(a, b time.Time) bool {
return a == b ||
(a.Unix() == b.Unix() &&
(a.Nanosecond() == 0 || b.Nanosecond() == 0))
}
func sameFsTimeSpec(a, b syscall.Timespec) bool {
return a.Sec == b.Sec &&
(a.Nsec == b.Nsec || a.Nsec == 0 || b.Nsec == 0)
}
func Changes(layers []string, rw string) ([]Change, error) {
var changes []Change
err := filepath.Walk(rw, func(path string, f os.FileInfo, err error) error {
@@ -85,7 +101,7 @@ func Changes(layers []string, rw string) ([]Change, error) {
// However, if it's a directory, maybe it wasn't actually modified.
// If you modify /foo/bar/baz, then /foo will be part of the changed files only because it's the parent of bar
if stat.IsDir() && f.IsDir() {
if f.Size() == stat.Size() && f.Mode() == stat.Mode() && f.ModTime() == stat.ModTime() {
if f.Size() == stat.Size() && f.Mode() == stat.Mode() && sameFsTime(f.ModTime(), stat.ModTime()) {
// Both directories are the same, don't record the change
return nil
}
@@ -181,7 +197,7 @@ func (info *FileInfo) addChanges(oldInfo *FileInfo, changes *[]Change) {
oldStat.Rdev != newStat.Rdev ||
// Don't look at size for dirs, its not a good measure of change
(oldStat.Size != newStat.Size && oldStat.Mode&syscall.S_IFDIR != syscall.S_IFDIR) ||
getLastModification(oldStat) != getLastModification(newStat) {
!sameFsTimeSpec(getLastModification(oldStat), getLastModification(newStat)) {
change := Change{
Path: newChild.path(),
Kind: ChangeModify,


@@ -258,48 +258,44 @@ func TestChangesDirsMutated(t *testing.T) {
}
func TestApplyLayer(t *testing.T) {
t.Skip("Skipping TestApplyLayer due to known failures") // Disable this for now as it is broken
return
src, err := ioutil.TempDir("", "docker-changes-test")
if err != nil {
t.Fatal(err)
}
createSampleDir(t, src)
defer os.RemoveAll(src)
dst := src + "-copy"
if err := copyDir(src, dst); err != nil {
t.Fatal(err)
}
mutateSampleDir(t, dst)
defer os.RemoveAll(dst)
// src, err := ioutil.TempDir("", "docker-changes-test")
// if err != nil {
// t.Fatal(err)
// }
// createSampleDir(t, src)
// dst := src + "-copy"
// if err := copyDir(src, dst); err != nil {
// t.Fatal(err)
// }
// mutateSampleDir(t, dst)
changes, err := ChangesDirs(dst, src)
if err != nil {
t.Fatal(err)
}
// changes, err := ChangesDirs(dst, src)
// if err != nil {
// t.Fatal(err)
// }
layer, err := ExportChanges(dst, changes)
if err != nil {
t.Fatal(err)
}
// layer, err := ExportChanges(dst, changes)
// if err != nil {
// t.Fatal(err)
// }
layerCopy, err := NewTempArchive(layer, "")
if err != nil {
t.Fatal(err)
}
// layerCopy, err := NewTempArchive(layer, "")
// if err != nil {
// t.Fatal(err)
// }
if err := ApplyLayer(src, layerCopy); err != nil {
t.Fatal(err)
}
// if err := ApplyLayer(src, layerCopy); err != nil {
// t.Fatal(err)
// }
changes2, err := ChangesDirs(src, dst)
if err != nil {
t.Fatal(err)
}
// changes2, err := ChangesDirs(src, dst)
// if err != nil {
// t.Fatal(err)
// }
// if len(changes2) != 0 {
// t.Fatalf("Unexpected differences after re applying mutation: %v", changes)
// }
// os.RemoveAll(src)
// os.RemoveAll(dst)
if len(changes2) != 0 {
t.Fatalf("Unexpected differences after reapplying mutation: %v", changes2)
}
}


@@ -1,6 +1,9 @@
package archive
import (
"archive/tar"
"github.com/dotcloud/docker/utils"
"io"
"os"
"path/filepath"
"strings"
@@ -8,87 +11,181 @@ import (
"time"
)
// Linux device nodes are a bit weird due to backwards compat with 16 bit device nodes.
// They are, from low to high: the lower 8 bits of the minor, then 12 bits of the major,
// then the top 12 bits of the minor
func mkdev(major int64, minor int64) uint32 {
return uint32(((minor & 0xfff00) << 12) | ((major & 0xfff) << 8) | (minor & 0xff))
}
func timeToTimespec(time time.Time) (ts syscall.Timespec) {
if time.IsZero() {
// Return UTIME_OMIT special value
ts.Sec = 0
ts.Nsec = ((1 << 30) - 2)
return
}
return syscall.NsecToTimespec(time.UnixNano())
}
// ApplyLayer parses a diff in the standard layer format from `layer`, and
// applies it to the directory `dest`.
func ApplyLayer(dest string, layer Archive) error {
// Poor man's diff applyer in 2 steps:
// We need to be able to set any perms
oldmask := syscall.Umask(0)
defer syscall.Umask(oldmask)
// Step 1: untar everything in place
if err := Untar(layer, dest, nil); err != nil {
layer, err := DecompressStream(layer)
if err != nil {
return err
}
modifiedDirs := make(map[string]*syscall.Stat_t)
addDir := func(file string) {
d := filepath.Dir(file)
if _, exists := modifiedDirs[d]; !exists {
if s, err := os.Lstat(d); err == nil {
if sys := s.Sys(); sys != nil {
if stat, ok := sys.(*syscall.Stat_t); ok {
modifiedDirs[d] = stat
tr := tar.NewReader(layer)
var dirs []*tar.Header
// Iterate through the files in the archive.
for {
hdr, err := tr.Next()
if err == io.EOF {
// end of tar archive
break
}
if err != nil {
return err
}
// Normalize name, for safety and for a simple is-root check
hdr.Name = filepath.Clean(hdr.Name)
if !strings.HasSuffix(hdr.Name, "/") {
// Not the root directory, ensure that the parent directory exists.
// This happened in some tests where an image had a tarfile without any
// parent directories.
parent := filepath.Dir(hdr.Name)
parentPath := filepath.Join(dest, parent)
if _, err := os.Lstat(parentPath); err != nil && os.IsNotExist(err) {
err = os.MkdirAll(parentPath, 600)
if err != nil {
return err
}
}
}
// Skip AUFS metadata dirs
if strings.HasPrefix(hdr.Name, ".wh..wh.") {
continue
}
path := filepath.Join(dest, hdr.Name)
base := filepath.Base(path)
if strings.HasPrefix(base, ".wh.") {
originalBase := base[len(".wh."):]
originalPath := filepath.Join(filepath.Dir(path), originalBase)
if err := os.RemoveAll(originalPath); err != nil {
return err
}
} else {
// If path exists we almost always just want to remove and replace it.
// The only exception is when it is a directory *and* the file from
// the layer is also a directory. Then we want to merge them (i.e.
// just apply the metadata from the layer).
hasDir := false
if fi, err := os.Lstat(path); err == nil {
if fi.IsDir() && hdr.Typeflag == tar.TypeDir {
hasDir = true
} else {
if err := os.RemoveAll(path); err != nil {
return err
}
}
}
switch hdr.Typeflag {
case tar.TypeDir:
if !hasDir {
err = os.Mkdir(path, os.FileMode(hdr.Mode))
if err != nil {
return err
}
}
dirs = append(dirs, hdr)
case tar.TypeReg, tar.TypeRegA:
// Source is regular file
file, err := os.OpenFile(path, os.O_CREATE|os.O_WRONLY, os.FileMode(hdr.Mode))
if err != nil {
return err
}
if _, err := io.Copy(file, tr); err != nil {
file.Close()
return err
}
file.Close()
case tar.TypeBlock, tar.TypeChar, tar.TypeFifo:
mode := uint32(hdr.Mode & 07777)
switch hdr.Typeflag {
case tar.TypeBlock:
mode |= syscall.S_IFBLK
case tar.TypeChar:
mode |= syscall.S_IFCHR
case tar.TypeFifo:
mode |= syscall.S_IFIFO
}
if err := syscall.Mknod(path, mode, int(mkdev(hdr.Devmajor, hdr.Devminor))); err != nil {
return err
}
case tar.TypeLink:
if err := os.Link(filepath.Join(dest, hdr.Linkname), path); err != nil {
return err
}
case tar.TypeSymlink:
if err := os.Symlink(hdr.Linkname, path); err != nil {
return err
}
default:
utils.Debugf("unhandled type %d\n", hdr.Typeflag)
}
if err = syscall.Lchown(path, hdr.Uid, hdr.Gid); err != nil {
return err
}
// There is no LChmod, so ignore mode for symlink. Also, this
// must happen after chown, as that can modify the file mode
if hdr.Typeflag != tar.TypeSymlink {
err = syscall.Chmod(path, uint32(hdr.Mode&07777))
if err != nil {
return err
}
}
// Directories must be handled at the end so that further
// file creation in them does not modify their mtime
if hdr.Typeflag != tar.TypeDir {
ts := []syscall.Timespec{timeToTimespec(hdr.AccessTime), timeToTimespec(hdr.ModTime)}
// syscall.UtimesNano doesn't support a NOFOLLOW flag atm
if hdr.Typeflag != tar.TypeSymlink {
if err := syscall.UtimesNano(path, ts); err != nil {
return err
}
} else {
if err := LUtimesNano(path, ts); err != nil {
return err
}
}
}
}
}
// Step 2: walk for whiteouts and apply them, removing them in the process
err := filepath.Walk(dest, func(fullPath string, f os.FileInfo, err error) error {
if err != nil {
if os.IsNotExist(err) {
// This happens in the case of whiteouts in parent dir removing a directory
// We just ignore it
return filepath.SkipDir
}
return err
}
// Rebase path
path, err := filepath.Rel(dest, fullPath)
if err != nil {
return err
}
path = filepath.Join("/", path)
// Skip AUFS metadata
if matched, err := filepath.Match("/.wh..wh.*", path); err != nil {
return err
} else if matched {
addDir(fullPath)
if err := os.RemoveAll(fullPath); err != nil {
return err
}
}
filename := filepath.Base(path)
if strings.HasPrefix(filename, ".wh.") {
rmTargetName := filename[len(".wh."):]
rmTargetPath := filepath.Join(filepath.Dir(fullPath), rmTargetName)
// Remove the file targeted by the whiteout
addDir(rmTargetPath)
if err := os.RemoveAll(rmTargetPath); err != nil {
return err
}
// Remove the whiteout itself
addDir(fullPath)
if err := os.RemoveAll(fullPath); err != nil {
return err
}
}
return nil
})
if err != nil {
return err
}
for k, v := range modifiedDirs {
lastAccess := getLastAccess(v)
lastModification := getLastModification(v)
aTime := time.Unix(lastAccess.Unix())
mTime := time.Unix(lastModification.Unix())
if err := os.Chtimes(k, aTime, mTime); err != nil {
for _, hdr := range dirs {
path := filepath.Join(dest, hdr.Name)
ts := []syscall.Timespec{timeToTimespec(hdr.AccessTime), timeToTimespec(hdr.ModTime)}
if err := syscall.UtimesNano(path, ts); err != nil {
return err
}
}


@@ -9,3 +9,7 @@ func getLastAccess(stat *syscall.Stat_t) syscall.Timespec {
func getLastModification(stat *syscall.Stat_t) syscall.Timespec {
return stat.Mtimespec
}
func LUtimesNano(path string, ts []syscall.Timespec) error {
return nil
}


@@ -1,6 +1,9 @@
package archive
import "syscall"
import (
"syscall"
"unsafe"
)
func getLastAccess(stat *syscall.Stat_t) syscall.Timespec {
return stat.Atim
@@ -9,3 +12,21 @@ func getLastAccess(stat *syscall.Stat_t) syscall.Timespec {
func getLastModification(stat *syscall.Stat_t) syscall.Timespec {
return stat.Mtim
}
func LUtimesNano(path string, ts []syscall.Timespec) error {
// These are not currently available in syscall
AT_FDCWD := -100
AT_SYMLINK_NOFOLLOW := 0x100
var _path *byte
_path, err := syscall.BytePtrFromString(path)
if err != nil {
return err
}
if _, _, err := syscall.Syscall6(syscall.SYS_UTIMENSAT, uintptr(AT_FDCWD), uintptr(unsafe.Pointer(_path)), uintptr(unsafe.Pointer(&ts[0])), uintptr(AT_SYMLINK_NOFOLLOW), 0, 0); err != 0 && err != syscall.ENOSYS {
return err
}
return nil
}


@@ -163,7 +163,7 @@ func Login(authConfig *AuthConfig, factory *utils.HTTPRequestFactory) (string, e
loginAgainstOfficialIndex := serverAddress == IndexServerAddress()
// to avoid sending the server address to the server it should be removed before marshalled
// to avoid sending the server address to the server it should be removed before being marshalled
authCopy := *authConfig
authCopy.ServerAddress = ""
@@ -192,13 +192,6 @@ func Login(authConfig *AuthConfig, factory *utils.HTTPRequestFactory) (string, e
} else {
status = "Account created. Please see the documentation of the registry " + serverAddress + " for instructions how to activate it."
}
} else if reqStatusCode == 403 {
if loginAgainstOfficialIndex {
return "", fmt.Errorf("Login: Your account hasn't been activated. " +
"Please check your e-mail for a confirmation link.")
}
return "", fmt.Errorf("Login: Your account hasn't been activated. " +
"Please see the documentation of the registry " + serverAddress + " for instructions how to activate it.")
} else if reqStatusCode == 400 {
if string(reqBody) == "\"Username or email already exists\"" {
req, err := factory.NewRequest("GET", serverAddress+"users/", nil)
@@ -216,9 +209,13 @@ func Login(authConfig *AuthConfig, factory *utils.HTTPRequestFactory) (string, e
status = "Login Succeeded"
} else if resp.StatusCode == 401 {
return "", fmt.Errorf("Wrong login/password, please try again")
} else if resp.StatusCode == 403 {
if loginAgainstOfficialIndex {
return "", fmt.Errorf("Login: Account is not Active. Please check your e-mail for a confirmation link.")
}
return "", fmt.Errorf("Login: Account is not Active. Please see the documentation of the registry %s for instructions how to activate it.", serverAddress)
} else {
return "", fmt.Errorf("Login: %s (Code: %d; Headers: %s)", body,
resp.StatusCode, resp.Header)
return "", fmt.Errorf("Login: %s (Code: %d; Headers: %s)", body, resp.StatusCode, resp.Header)
}
} else {
return "", fmt.Errorf("Registration: %s", reqBody)
@@ -236,7 +233,7 @@ func Login(authConfig *AuthConfig, factory *utils.HTTPRequestFactory) (string, e
body, err := ioutil.ReadAll(resp.Body)
if err != nil {
return "", err
}
}
if resp.StatusCode == 200 {
status = "Login Succeeded"
} else if resp.StatusCode == 401 {
@@ -257,11 +254,11 @@ func (config *ConfigFile) ResolveAuthConfig(registry string) AuthConfig {
// default to the index server
return config.Configs[IndexServerAddress()]
}
// if its not the index server there are three cases:
// if it's not the index server there are three cases:
//
// 1. this is a full config url -> it should be used as is
// 2. it could be a full url, but with the wrong protocol
// 3. it can be the hostname optionally with a port
// 1. a full config url -> it should be used as is
// 2. a full url, but with the wrong protocol
// 3. a hostname, with an optional port
//
// as there is only one auth entry which is fully qualified we need to start
// parsing and matching


@@ -1,20 +1,30 @@
package docker
import (
"crypto/sha256"
"encoding/hex"
"encoding/json"
"errors"
"fmt"
"github.com/dotcloud/docker/archive"
"github.com/dotcloud/docker/auth"
"github.com/dotcloud/docker/utils"
"io"
"io/ioutil"
"net/url"
"os"
"path"
"path/filepath"
"reflect"
"regexp"
"sort"
"strings"
)
var (
ErrDockerfileEmpty = errors.New("Dockerfile cannot be empty")
)
type BuildFile interface {
Build(io.Reader) (string, error)
CmdFrom(string) error
@@ -25,14 +35,19 @@ type buildFile struct {
runtime *Runtime
srv *Server
image string
maintainer string
config *Config
context string
image string
maintainer string
config *Config
contextPath string
context *utils.TarSum
verbose bool
utilizeCache bool
rm bool
authConfig *auth.AuthConfig
tmpContainers map[string]struct{}
tmpImages map[string]struct{}
@@ -57,7 +72,7 @@ func (b *buildFile) CmdFrom(name string) error {
if err != nil {
if b.runtime.graph.IsNotExist(err) {
remote, tag := utils.ParseRepositoryTag(name)
if err := b.srv.ImagePull(remote, tag, b.outOld, b.sf, nil, nil, true); err != nil {
if err := b.srv.ImagePull(remote, tag, b.outOld, b.sf, b.authConfig, nil, true); err != nil {
return err
}
image, err = b.runtime.repositories.LookupImage(name)
@@ -84,6 +99,27 @@ func (b *buildFile) CmdMaintainer(name string) error {
return b.commit("", b.config.Cmd, fmt.Sprintf("MAINTAINER %s", name))
}
// probeCache checks to see if image-caching is enabled (`b.utilizeCache`)
// and if so attempts to look up the current `b.image` and `b.config` pair
// in the current server `b.srv`. If an image is found, probeCache returns
// `(true, nil)`. If no image is found, it returns `(false, nil)`. If there
// is any error, it returns `(false, err)`.
func (b *buildFile) probeCache() (bool, error) {
if b.utilizeCache {
if cache, err := b.srv.ImageGetCached(b.image, b.config); err != nil {
return false, err
} else if cache != nil {
fmt.Fprintf(b.outStream, " ---> Using cache\n")
utils.Debugf("[BUILDER] Use cached version")
b.image = cache.ID
return true, nil
} else {
utils.Debugf("[BUILDER] Cache miss")
}
}
return false, nil
}
func (b *buildFile) CmdRun(args string) error {
if b.image == "" {
return fmt.Errorf("Please provide a source image with `from` prior to run")
@@ -101,17 +137,12 @@ func (b *buildFile) CmdRun(args string) error {
utils.Debugf("Command to be executed: %v", b.config.Cmd)
if b.utilizeCache {
if cache, err := b.srv.ImageGetCached(b.image, b.config); err != nil {
return err
} else if cache != nil {
fmt.Fprintf(b.outStream, " ---> Using cache\n")
utils.Debugf("[BUILDER] Use cached version")
b.image = cache.ID
return nil
} else {
utils.Debugf("[BUILDER] Cache miss")
}
hit, err := b.probeCache()
if err != nil {
return err
}
if hit {
return nil
}
cid, err := b.run()
@@ -257,44 +288,27 @@ func (b *buildFile) CmdVolume(args string) error {
return nil
}
func (b *buildFile) addRemote(container *Container, orig, dest string) error {
file, err := utils.Download(orig)
func (b *buildFile) checkPathForAddition(orig string) error {
origPath := path.Join(b.contextPath, orig)
if !strings.HasPrefix(origPath, b.contextPath) {
return fmt.Errorf("Forbidden path outside the build context: %s (%s)", orig, origPath)
}
_, err := os.Stat(origPath)
if err != nil {
return err
return fmt.Errorf("%s: no such file or directory", orig)
}
defer file.Body.Close()
// If the destination is a directory, figure out the filename.
if strings.HasSuffix(dest, "/") {
u, err := url.Parse(orig)
if err != nil {
return err
}
path := u.Path
if strings.HasSuffix(path, "/") {
path = path[:len(path)-1]
}
parts := strings.Split(path, "/")
filename := parts[len(parts)-1]
if filename == "" {
return fmt.Errorf("cannot determine filename from url: %s", u)
}
dest = dest + filename
}
return container.Inject(file.Body, dest)
return nil
}
func (b *buildFile) addContext(container *Container, orig, dest string) error {
origPath := path.Join(b.context, orig)
destPath := path.Join(container.RootfsPath(), dest)
var (
origPath = path.Join(b.contextPath, orig)
destPath = path.Join(container.RootfsPath(), dest)
)
// Preserve the trailing '/'
if strings.HasSuffix(dest, "/") {
destPath = destPath + "/"
}
if !strings.HasPrefix(origPath, b.context) {
return fmt.Errorf("Forbidden path outside the build context: %s (%s)", orig, origPath)
}
fi, err := os.Stat(origPath)
if err != nil {
return fmt.Errorf("%s: no such file or directory", orig)
@@ -318,7 +332,7 @@ func (b *buildFile) addContext(container *Container, orig, dest string) error {
}
func (b *buildFile) CmdAdd(args string) error {
if b.context == "" {
if b.context == nil {
return fmt.Errorf("No context given. Impossible to use ADD")
}
tmp := strings.SplitN(args, " ", 2)
@@ -338,8 +352,90 @@ func (b *buildFile) CmdAdd(args string) error {
cmd := b.config.Cmd
b.config.Cmd = []string{"/bin/sh", "-c", fmt.Sprintf("#(nop) ADD %s in %s", orig, dest)}
b.config.Image = b.image
// FIXME: do we really need this?
var (
origPath = orig
destPath = dest
)
if utils.IsURL(orig) {
resp, err := utils.Download(orig)
if err != nil {
return err
}
tmpDirName, err := ioutil.TempDir(b.contextPath, "docker-remote")
if err != nil {
return err
}
tmpFileName := path.Join(tmpDirName, "tmp")
tmpFile, err := os.OpenFile(tmpFileName, os.O_RDWR|os.O_CREATE|os.O_EXCL, 0600)
if err != nil {
return err
}
defer os.RemoveAll(tmpDirName)
if _, err = io.Copy(tmpFile, resp.Body); err != nil {
return err
}
origPath = path.Join(filepath.Base(tmpDirName), filepath.Base(tmpFileName))
tmpFile.Close()
// If the destination is a directory, figure out the filename.
if strings.HasSuffix(dest, "/") {
u, err := url.Parse(orig)
if err != nil {
return err
}
path := u.Path
if strings.HasSuffix(path, "/") {
path = path[:len(path)-1]
}
parts := strings.Split(path, "/")
filename := parts[len(parts)-1]
if filename == "" {
return fmt.Errorf("cannot determine filename from url: %s", u)
}
destPath = dest + filename
}
}
if err := b.checkPathForAddition(origPath); err != nil {
return err
}
// Hash path and check the cache
if b.utilizeCache {
var (
hash string
sums = b.context.GetSums()
)
if fi, err := os.Stat(path.Join(b.contextPath, origPath)); err != nil {
return err
} else if fi.IsDir() {
var subfiles []string
for file, sum := range sums {
if strings.HasPrefix(file, origPath) {
subfiles = append(subfiles, sum)
}
}
sort.Strings(subfiles)
hasher := sha256.New()
hasher.Write([]byte(strings.Join(subfiles, ",")))
hash = "dir:" + hex.EncodeToString(hasher.Sum(nil))
} else {
hash = "file:" + sums[origPath]
}
b.config.Cmd = []string{"/bin/sh", "-c", fmt.Sprintf("#(nop) ADD %s in %s", hash, dest)}
hit, err := b.probeCache()
if err != nil {
return err
}
if hit {
return nil
}
}
// Create the container and start it
container, _, err := b.runtime.Create(b.config, "")
if err != nil {
@@ -352,14 +448,8 @@ func (b *buildFile) CmdAdd(args string) error {
}
defer container.Unmount()
if utils.IsURL(orig) {
if err := b.addRemote(container, orig, dest); err != nil {
return err
}
} else {
if err := b.addContext(container, orig, dest); err != nil {
return err
}
if err := b.addContext(container, origPath, destPath); err != nil {
return err
}
if err := b.commit(container.ID, cmd, fmt.Sprintf("ADD %s in %s", orig, dest)); err != nil {
@@ -457,17 +547,12 @@ func (b *buildFile) commit(id string, autoCmd []string, comment string) error {
b.config.Cmd = []string{"/bin/sh", "-c", "#(nop) " + comment}
defer func(cmd []string) { b.config.Cmd = cmd }(cmd)
if b.utilizeCache {
if cache, err := b.srv.ImageGetCached(b.image, b.config); err != nil {
return err
} else if cache != nil {
fmt.Fprintf(b.outStream, " ---> Using cache\n")
utils.Debugf("[BUILDER] Use cached version")
b.image = cache.ID
return nil
} else {
utils.Debugf("[BUILDER] Cache miss")
}
hit, err := b.probeCache()
if err != nil {
return err
}
if hit {
return nil
}
container, warnings, err := b.runtime.Create(b.config, "")
@@ -508,17 +593,17 @@ func (b *buildFile) commit(id string, autoCmd []string, comment string) error {
var lineContinuation = regexp.MustCompile(`\s*\\\s*\n`)
func (b *buildFile) Build(context io.Reader) (string, error) {
// FIXME: @creack "name" is a terrible variable name
name, err := ioutil.TempDir("", "docker-build")
tmpdirPath, err := ioutil.TempDir("", "docker-build")
if err != nil {
return "", err
}
if err := archive.Untar(context, name, nil); err != nil {
b.context = &utils.TarSum{Reader: context}
if err := archive.Untar(b.context, tmpdirPath, nil); err != nil {
return "", err
}
defer os.RemoveAll(name)
b.context = name
filename := path.Join(name, "Dockerfile")
defer os.RemoveAll(tmpdirPath)
b.contextPath = tmpdirPath
filename := path.Join(tmpdirPath, "Dockerfile")
if _, err := os.Stat(filename); os.IsNotExist(err) {
return "", fmt.Errorf("Can't build a directory with no Dockerfile")
}
@@ -526,6 +611,9 @@ func (b *buildFile) Build(context io.Reader) (string, error) {
if err != nil {
return "", err
}
if len(fileBytes) == 0 {
return "", ErrDockerfileEmpty
}
dockerfile := string(fileBytes)
dockerfile = lineContinuation.ReplaceAllString(dockerfile, "")
stepN := 0
@@ -568,7 +656,7 @@ func (b *buildFile) Build(context io.Reader) (string, error) {
return "", fmt.Errorf("An error occurred during the build\n")
}
func NewBuildFile(srv *Server, outStream, errStream io.Writer, verbose, utilizeCache, rm bool, outOld io.Writer, sf *utils.StreamFormatter) BuildFile {
func NewBuildFile(srv *Server, outStream, errStream io.Writer, verbose, utilizeCache, rm bool, outOld io.Writer, sf *utils.StreamFormatter, auth *auth.AuthConfig) BuildFile {
return &buildFile{
runtime: srv.runtime,
srv: srv,
@@ -581,6 +669,7 @@ func NewBuildFile(srv *Server, outStream, errStream io.Writer, verbose, utilizeC
utilizeCache: utilizeCache,
rm: rm,
sf: sf,
authConfig: auth,
outOld: outOld,
}
}


@@ -11,8 +11,9 @@ import (
"fmt"
"github.com/dotcloud/docker/archive"
"github.com/dotcloud/docker/auth"
"github.com/dotcloud/docker/engine"
"github.com/dotcloud/docker/pkg/term"
"github.com/dotcloud/docker/registry"
"github.com/dotcloud/docker/term"
"github.com/dotcloud/docker/utils"
"io"
"io/ioutil"
@@ -226,11 +227,21 @@ func (cli *DockerCli) CmdBuild(args ...string) error {
}
headers := http.Header(make(map[string][]string))
buf, err := json.Marshal(cli.configFile)
if err != nil {
return err
}
headers.Add("X-Registry-Auth", base64.URLEncoding.EncodeToString(buf))
if context != nil {
headers.Set("Content-Type", "application/tar")
}
err = cli.stream("POST", fmt.Sprintf("/build?%s", v.Encode()), body, cli.out, headers)
if jerr, ok := err.(*utils.JSONError); ok {
// If no error code is set, default to 1
if jerr.Code == 0 {
jerr.Code = 1
}
return &utils.StatusError{Status: jerr.Message, StatusCode: jerr.Code}
}
return err
@@ -391,26 +402,24 @@ func (cli *DockerCli) CmdVersion(args ...string) error {
return err
}
var out APIVersion
err = json.Unmarshal(body, &out)
out := engine.NewOutput()
remoteVersion, err := out.AddEnv()
if err != nil {
utils.Errorf("Error unmarshal: body: %s, err: %s\n", body, err)
utils.Errorf("Error reading remote version: %s\n", err)
return err
}
if out.Version != "" {
fmt.Fprintf(cli.out, "Server version: %s\n", out.Version)
if _, err := out.Write(body); err != nil {
utils.Errorf("Error reading remote version: %s\n", err)
return err
}
if out.GitCommit != "" {
fmt.Fprintf(cli.out, "Git commit (server): %s\n", out.GitCommit)
}
if out.GoVersion != "" {
fmt.Fprintf(cli.out, "Go version (server): %s\n", out.GoVersion)
}
out.Close()
fmt.Fprintf(cli.out, "Server version: %s\n", remoteVersion.Get("Version"))
fmt.Fprintf(cli.out, "Git commit (server): %s\n", remoteVersion.Get("GitCommit"))
fmt.Fprintf(cli.out, "Go version (server): %s\n", remoteVersion.Get("GoVersion"))
release := utils.GetReleaseVersion()
if release != "" {
fmt.Fprintf(cli.out, "Last stable version: %s", release)
if (VERSION != "" || out.Version != "") && (strings.Trim(VERSION, "-dev") != release || strings.Trim(out.Version, "-dev") != release) {
if (VERSION != "" || remoteVersion.Exists("Version")) && (strings.Trim(VERSION, "-dev") != release || strings.Trim(remoteVersion.Get("Version"), "-dev") != release) {
fmt.Fprintf(cli.out, ", please update docker")
}
fmt.Fprintf(cli.out, "\n")
@@ -434,42 +443,60 @@ func (cli *DockerCli) CmdInfo(args ...string) error {
return err
}
var out APIInfo
if err := json.Unmarshal(body, &out); err != nil {
out := engine.NewOutput()
remoteInfo, err := out.AddEnv()
if err != nil {
return err
}
fmt.Fprintf(cli.out, "Containers: %d\n", out.Containers)
fmt.Fprintf(cli.out, "Images: %d\n", out.Images)
fmt.Fprintf(cli.out, "Driver: %s\n", out.Driver)
for _, pair := range out.DriverStatus {
if _, err := out.Write(body); err != nil {
utils.Errorf("Error reading remote info: %s\n", err)
return err
}
out.Close()
fmt.Fprintf(cli.out, "Containers: %d\n", remoteInfo.GetInt("Containers"))
fmt.Fprintf(cli.out, "Images: %d\n", remoteInfo.GetInt("Images"))
fmt.Fprintf(cli.out, "Driver: %s\n", remoteInfo.Get("Driver"))
var driverStatus [][2]string
if err := remoteInfo.GetJson("DriverStatus", &driverStatus); err != nil {
return err
}
for _, pair := range driverStatus {
fmt.Fprintf(cli.out, " %s: %s\n", pair[0], pair[1])
}
if out.Debug || os.Getenv("DEBUG") != "" {
fmt.Fprintf(cli.out, "Debug mode (server): %v\n", out.Debug)
if remoteInfo.GetBool("Debug") || os.Getenv("DEBUG") != "" {
fmt.Fprintf(cli.out, "Debug mode (server): %v\n", remoteInfo.GetBool("Debug"))
fmt.Fprintf(cli.out, "Debug mode (client): %v\n", os.Getenv("DEBUG") != "")
fmt.Fprintf(cli.out, "Fds: %d\n", out.NFd)
fmt.Fprintf(cli.out, "Goroutines: %d\n", out.NGoroutines)
fmt.Fprintf(cli.out, "LXC Version: %s\n", out.LXCVersion)
fmt.Fprintf(cli.out, "EventsListeners: %d\n", out.NEventsListener)
fmt.Fprintf(cli.out, "Kernel Version: %s\n", out.KernelVersion)
}
fmt.Fprintf(cli.out, "Fds: %d\n", remoteInfo.GetInt("NFd"))
fmt.Fprintf(cli.out, "Goroutines: %d\n", remoteInfo.GetInt("NGoroutines"))
fmt.Fprintf(cli.out, "LXC Version: %s\n", remoteInfo.Get("LXCVersion"))
fmt.Fprintf(cli.out, "EventsListeners: %d\n", remoteInfo.GetInt("NEventsListener"))
fmt.Fprintf(cli.out, "Kernel Version: %s\n", remoteInfo.Get("KernelVersion"))
if len(out.IndexServerAddress) != 0 {
cli.LoadConfigFile()
u := cli.configFile.Configs[out.IndexServerAddress].Username
if len(u) > 0 {
fmt.Fprintf(cli.out, "Username: %v\n", u)
fmt.Fprintf(cli.out, "Registry: %v\n", out.IndexServerAddress)
if initSha1 := remoteInfo.Get("InitSha1"); initSha1 != "" {
fmt.Fprintf(cli.out, "Init SHA1: %s\n", initSha1)
}
if initPath := remoteInfo.Get("InitPath"); initPath != "" {
fmt.Fprintf(cli.out, "Init Path: %s\n", initPath)
}
}
if !out.MemoryLimit {
if len(remoteInfo.GetList("IndexServerAddress")) != 0 {
cli.LoadConfigFile()
u := cli.configFile.Configs[remoteInfo.Get("IndexServerAddress")].Username
if len(u) > 0 {
fmt.Fprintf(cli.out, "Username: %v\n", u)
fmt.Fprintf(cli.out, "Registry: %v\n", remoteInfo.GetList("IndexServerAddress"))
}
}
if !remoteInfo.GetBool("MemoryLimit") {
fmt.Fprintf(cli.err, "WARNING: No memory limit support\n")
}
if !out.SwapLimit {
if !remoteInfo.GetBool("SwapLimit") {
fmt.Fprintf(cli.err, "WARNING: No swap limit support\n")
}
if !out.IPv4Forwarding {
if !remoteInfo.GetBool("IPv4Forwarding") {
fmt.Fprintf(cli.err, "WARNING: IPv4 forwarding is disabled.\n")
}
return nil
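
Both CmdVersion and CmdInfo now read a generic key/value environment off the wire instead of unmarshalling into fixed APIVersion/APIInfo structs. A rough sketch of that accessor idea, with a simplified local Env type standing in for the real engine package:

package main

import (
	"encoding/json"
	"fmt"
	"strconv"
)

// Env is a simplified stand-in for engine.Env: a string-keyed bag
// with typed getters, so the client no longer needs a dedicated
// struct for every server response.
type Env map[string]string

func (e Env) Get(key string) string { return e[key] }

func (e Env) GetInt(key string) int {
	n, err := strconv.Atoi(e[key])
	if err != nil {
		return -1 // sentinel for "missing or not a number"
	}
	return n
}

func (e Env) GetBool(key string) bool {
	return e[key] == "1" || e[key] == "true"
}

func (e Env) GetJson(key string, out interface{}) error {
	return json.Unmarshal([]byte(e[key]), out)
}

func main() {
	remote := Env{
		"Containers":   "3",
		"Debug":        "true",
		"Driver":       "aufs",
		"DriverStatus": `[["Root Dir","/var/lib/docker/aufs"]]`,
	}
	fmt.Println(remote.GetInt("Containers"), remote.GetBool("Debug"), remote.Get("Driver"))
	var status [][2]string
	if err := remote.GetJson("DriverStatus", &status); err == nil {
		fmt.Println(status)
	}
}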
@@ -1102,33 +1129,9 @@ func (cli *DockerCli) CmdImages(args ...string) error {
return nil
}
if *flViz {
body, _, err := cli.call("GET", "/images/json?all=1", nil)
if err != nil {
return err
}
filter := cmd.Arg(0)
var outs []APIImages
err = json.Unmarshal(body, &outs)
if err != nil {
return err
}
fmt.Fprintf(cli.out, "digraph docker {\n")
for _, image := range outs {
if image.ParentId == "" {
fmt.Fprintf(cli.out, " base -> \"%s\" [style=invis]\n", utils.TruncateID(image.ID))
} else {
fmt.Fprintf(cli.out, " \"%s\" -> \"%s\"\n", utils.TruncateID(image.ParentId), utils.TruncateID(image.ID))
}
if image.RepoTags[0] != "<none>:<none>" {
fmt.Fprintf(cli.out, " \"%s\" [label=\"%s\\n%s\",shape=box,fillcolor=\"paleturquoise\",style=\"filled,rounded\"];\n", utils.TruncateID(image.ID), utils.TruncateID(image.ID), strings.Join(image.RepoTags, "\\n"))
}
}
fmt.Fprintf(cli.out, " base [style=invisible]\n}\n")
} else if *flTree {
if *flViz || *flTree {
body, _, err := cli.call("GET", "/images/json?all=1", nil)
if err != nil {
return err
@@ -1140,8 +1143,8 @@ func (cli *DockerCli) CmdImages(args ...string) error {
}
var (
startImageArg = cmd.Arg(0)
startImage APIImages
printNode func(cli *DockerCli, noTrunc bool, image APIImages, prefix string)
startImage APIImages
roots []APIImages
byParent = make(map[string][]APIImages)
@@ -1158,28 +1161,38 @@ func (cli *DockerCli) CmdImages(args ...string) error {
}
}
if startImageArg != "" {
if startImageArg == image.ID || startImageArg == utils.TruncateID(image.ID) {
if filter != "" {
if filter == image.ID || filter == utils.TruncateID(image.ID) {
startImage = image
}
for _, repotag := range image.RepoTags {
if repotag == startImageArg {
if repotag == filter {
startImage = image
}
}
}
}
if startImageArg != "" {
WalkTree(cli, noTrunc, []APIImages{startImage}, byParent, "")
if *flViz {
fmt.Fprintf(cli.out, "digraph docker {\n")
printNode = (*DockerCli).printVizNode
} else {
WalkTree(cli, noTrunc, roots, byParent, "")
printNode = (*DockerCli).printTreeNode
}
if startImage.ID != "" {
cli.WalkTree(*noTrunc, &[]APIImages{startImage}, byParent, "", printNode)
} else if filter == "" {
cli.WalkTree(*noTrunc, &roots, byParent, "", printNode)
}
if *flViz {
fmt.Fprintf(cli.out, " base [style=invisible]\n}\n")
}
} else {
v := url.Values{}
if cmd.NArg() == 1 {
v.Set("filter", cmd.Arg(0))
v.Set("filter", filter)
}
if *all {
v.Set("all", "1")
@@ -1225,41 +1238,64 @@ func (cli *DockerCli) CmdImages(args ...string) error {
return nil
}
func WalkTree(cli *DockerCli, noTrunc *bool, images []APIImages, byParent map[string][]APIImages, prefix string) {
if len(images) > 1 {
length := len(images)
for index, image := range images {
func (cli *DockerCli) WalkTree(noTrunc bool, images *[]APIImages, byParent map[string][]APIImages, prefix string, printNode func(cli *DockerCli, noTrunc bool, image APIImages, prefix string)) {
length := len(*images)
if length > 1 {
for index, image := range *images {
if index+1 == length {
PrintTreeNode(cli, noTrunc, image, prefix+"└─")
printNode(cli, noTrunc, image, prefix+"└─")
if subimages, exists := byParent[image.ID]; exists {
WalkTree(cli, noTrunc, subimages, byParent, prefix+" ")
cli.WalkTree(noTrunc, &subimages, byParent, prefix+" ", printNode)
}
} else {
PrintTreeNode(cli, noTrunc, image, prefix+"|─")
printNode(cli, noTrunc, image, prefix+"├─")
if subimages, exists := byParent[image.ID]; exists {
WalkTree(cli, noTrunc, subimages, byParent, prefix+"| ")
cli.WalkTree(noTrunc, &subimages, byParent, prefix+" ", printNode)
}
}
}
} else {
for _, image := range images {
PrintTreeNode(cli, noTrunc, image, prefix+"└─")
for _, image := range *images {
printNode(cli, noTrunc, image, prefix+"└─")
if subimages, exists := byParent[image.ID]; exists {
WalkTree(cli, noTrunc, subimages, byParent, prefix+" ")
cli.WalkTree(noTrunc, &subimages, byParent, prefix+" ", printNode)
}
}
}
}
func PrintTreeNode(cli *DockerCli, noTrunc *bool, image APIImages, prefix string) {
func (cli *DockerCli) printVizNode(noTrunc bool, image APIImages, prefix string) {
var (
imageID string
parentID string
)
if noTrunc {
imageID = image.ID
parentID = image.ParentId
} else {
imageID = utils.TruncateID(image.ID)
parentID = utils.TruncateID(image.ParentId)
}
if image.ParentId == "" {
fmt.Fprintf(cli.out, " base -> \"%s\" [style=invis]\n", imageID)
} else {
fmt.Fprintf(cli.out, " \"%s\" -> \"%s\"\n", parentID, imageID)
}
if image.RepoTags[0] != "<none>:<none>" {
fmt.Fprintf(cli.out, " \"%s\" [label=\"%s\\n%s\",shape=box,fillcolor=\"paleturquoise\",style=\"filled,rounded\"];\n",
imageID, imageID, strings.Join(image.RepoTags, "\\n"))
}
}
func (cli *DockerCli) printTreeNode(noTrunc bool, image APIImages, prefix string) {
var imageID string
if *noTrunc {
if noTrunc {
imageID = image.ID
} else {
imageID = utils.TruncateID(image.ID)
}
fmt.Fprintf(cli.out, "%s%s Size: %s (virtual %s)", prefix, imageID, utils.HumanSize(image.Size), utils.HumanSize(image.VirtualSize))
fmt.Fprintf(cli.out, "%s%s Virtual Size: %s", prefix, imageID, utils.HumanSize(image.VirtualSize))
if image.RepoTags[0] != "<none>:<none>" {
fmt.Fprintf(cli.out, " Tags: %s\n", strings.Join(image.RepoTags, ", "))
} else {
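
The net effect of this refactor is one shared traversal with the node formatter injected as a function value, so -tree and -viz differ only in their printer. Reduced to a self-contained sketch (the Image type and sample data are illustrative, not the real APIImages):

package main

import "fmt"

type Image struct {
	ID       string
	ParentID string
}

type printFunc func(img Image, prefix string)

// walk traverses parent -> children depth-first and delegates all
// formatting to printNode, the same trick WalkTree uses to share
// one traversal between the -tree and -viz printers.
func walk(images []Image, byParent map[string][]Image, prefix string, printNode printFunc) {
	for i, img := range images {
		branch, cont := "├─", "│ "
		if i == len(images)-1 {
			branch, cont = "└─", "  "
		}
		printNode(img, prefix+branch)
		if children, ok := byParent[img.ID]; ok {
			walk(children, byParent, prefix+cont, printNode)
		}
	}
}

func main() {
	roots := []Image{{ID: "base1"}, {ID: "base2"}}
	byParent := map[string][]Image{
		"base1": {{ID: "child1", ParentID: "base1"}},
	}
	treePrinter := func(img Image, prefix string) { fmt.Println(prefix + img.ID) }
	walk(roots, byParent, "", treePrinter)
}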
@@ -1789,6 +1825,8 @@ func parseRun(cmd *flag.FlagSet, args []string, capabilities *Capabilities) (*Co
flVolumes.Set(dstDir)
binds = append(binds, bind)
flVolumes.Delete(bind)
} else if bind == "/" {
return nil, nil, cmd, fmt.Errorf("Invalid volume: path can't be '/'")
}
}

View File

@@ -128,7 +128,9 @@ func TestParseRunVolumes(t *testing.T) {
t.Fatalf("Error parsing volume flags, without volume, no volume should be present. Received %v", config.Volumes)
}
mustParse(t, "-v /")
if _, _, err := parse(t, "-v /"); err == nil {
t.Fatalf("Expected error, but got none")
}
if _, _, err := parse(t, "-v /:/"); err == nil {
t.Fatalf("Error parsing volume flags, `-v /:/` should fail but didn't")

View File

@@ -14,9 +14,11 @@ type DaemonConfig struct {
Dns []string
EnableIptables bool
BridgeIface string
BridgeIp string
DefaultIp net.IP
InterContainerCommunication bool
GraphDriver string
Mtu int
}
// ConfigFromJob creates and returns a new DaemonConfig object
@@ -36,8 +38,14 @@ func ConfigFromJob(job *engine.Job) *DaemonConfig {
} else {
config.BridgeIface = DefaultNetworkBridge
}
config.BridgeIp = job.Getenv("BridgeIp")
config.DefaultIp = net.ParseIP(job.Getenv("DefaultIp"))
config.InterContainerCommunication = job.GetenvBool("InterContainerCommunication")
config.GraphDriver = job.Getenv("GraphDriver")
if mtu := job.GetenvInt("Mtu"); mtu != -1 {
config.Mtu = mtu
} else {
config.Mtu = DefaultNetworkMtu
}
return &config
}
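
The MTU plumbing relies on a sentinel default: GetenvInt reports -1 for an unset key, and only then does the daemon fall back to DefaultNetworkMtu. The same pattern in isolation (helper and constant names here are illustrative):

package main

import (
	"fmt"
	"strconv"
)

const defaultNetworkMtu = 1500

// getenvInt mimics job.GetenvInt: return -1 when the key is unset
// or unparseable, so callers can distinguish "not provided".
func getenvInt(env map[string]string, key string) int {
	n, err := strconv.Atoi(env[key])
	if err != nil {
		return -1
	}
	return n
}

func main() {
	env := map[string]string{} // "Mtu" deliberately unset
	mtu := getenvInt(env, "Mtu")
	if mtu == -1 {
		mtu = defaultNetworkMtu
	}
	fmt.Println(mtu) // 1500
}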

View File

@@ -7,7 +7,8 @@ import (
"fmt"
"github.com/dotcloud/docker/archive"
"github.com/dotcloud/docker/graphdriver"
"github.com/dotcloud/docker/term"
"github.com/dotcloud/docker/mount"
"github.com/dotcloud/docker/pkg/term"
"github.com/dotcloud/docker/utils"
"github.com/kr/pty"
"io"
@@ -48,7 +49,6 @@ type Container struct {
network *NetworkInterface
NetworkSettings *NetworkSettings
SysInitPath string
ResolvConfPath string
HostnamePath string
HostsPath string
@@ -297,7 +297,11 @@ func (container *Container) generateEnvConfig(env []string) error {
if err != nil {
return err
}
ioutil.WriteFile(container.EnvConfigPath(), data, 0600)
p, err := container.EnvConfigPath()
if err != nil {
return err
}
ioutil.WriteFile(p, data, 0600)
return nil
}
@@ -574,7 +578,12 @@ func (container *Container) Start() (err error) {
// Networking
if !container.Config.NetworkDisabled {
params = append(params, "-g", container.network.Gateway.String())
network := container.NetworkSettings
params = append(params,
"-g", network.Gateway,
"-i", fmt.Sprintf("%s/%d", network.IPAddress, network.IPPrefixLen),
"-mtu", strconv.Itoa(container.runtime.config.Mtu),
)
}
// User
@@ -586,7 +595,6 @@ func (container *Container) Start() (err error) {
env := []string{
"HOME=/",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"container=lxc",
"HOSTNAME=" + container.Config.Hostname,
}
@@ -594,6 +602,10 @@ func (container *Container) Start() (err error) {
env = append(env, "TERM=xterm")
}
if container.hostConfig.Privileged {
params = append(params, "-privileged")
}
// Init any links between the parent and children
runtime := container.runtime
@@ -674,6 +686,45 @@ func (container *Container) Start() (err error) {
}
}
root := container.RootfsPath()
envPath, err := container.EnvConfigPath()
if err != nil {
return err
}
// Mount docker specific files into the containers root fs
if err := mount.Mount(runtime.sysInitPath, path.Join(root, "/.dockerinit"), "none", "bind,ro"); err != nil {
return err
}
if err := mount.Mount(envPath, path.Join(root, "/.dockerenv"), "none", "bind,ro"); err != nil {
return err
}
if err := mount.Mount(container.ResolvConfPath, path.Join(root, "/etc/resolv.conf"), "none", "bind,ro"); err != nil {
return err
}
if container.HostnamePath != "" && container.HostsPath != "" {
if err := mount.Mount(container.HostnamePath, path.Join(root, "/etc/hostname"), "none", "bind,ro"); err != nil {
return err
}
if err := mount.Mount(container.HostsPath, path.Join(root, "/etc/hosts"), "none", "bind,ro"); err != nil {
return err
}
}
// Mount user specified volumes
for r, v := range container.Volumes {
mountAs := "ro"
if container.VolumesRW[v] {
mountAs = "rw"
}
if err := mount.Mount(v, path.Join(root, r), "none", fmt.Sprintf("bind,%s", mountAs)); err != nil {
return err
}
}
container.cmd = exec.Command(params[0], params[1:]...)
// Setup logging of stdout and stderr to disk
@@ -774,14 +825,14 @@ func (container *Container) getBindMap() (map[string]BindMap, error) {
}
binds[path.Clean(dst)] = bindMap
}
return binds, nil
return binds, nil
}
func (container *Container) createVolumes() error {
binds, err := container.getBindMap()
if err != nil {
return err
}
binds, err := container.getBindMap()
if err != nil {
return err
}
volumesDriver := container.runtime.volumes.driver
// Create the requested volumes if they don't exist
for volPath := range container.Config.Volumes {
@@ -801,15 +852,10 @@ func (container *Container) createVolumes() error {
if strings.ToLower(bindMap.Mode) == "rw" {
srcRW = true
}
if file, err := os.Open(bindMap.SrcPath); err != nil {
if stat, err := os.Lstat(bindMap.SrcPath); err != nil {
return err
} else {
defer file.Close()
if stat, err := file.Stat(); err != nil {
return err
} else {
volIsDir = stat.IsDir()
}
volIsDir = stat.IsDir()
}
// Otherwise create a directory in $ROOT/volumes/ and use that
} else {
@@ -829,26 +875,25 @@ func (container *Container) createVolumes() error {
}
container.Volumes[volPath] = srcPath
container.VolumesRW[volPath] = srcRW
// Create the mountpoint
rootVolPath := path.Join(container.RootfsPath(), volPath)
if volIsDir {
if err := os.MkdirAll(rootVolPath, 0755); err != nil {
return err
}
volPath = path.Join(container.RootfsPath(), volPath)
rootVolPath, err := utils.FollowSymlinkInScope(volPath, container.RootfsPath())
if err != nil {
return err
}
volPath = path.Join(container.RootfsPath(), volPath)
if _, err := os.Stat(volPath); err != nil {
if _, err := os.Stat(rootVolPath); err != nil {
if os.IsNotExist(err) {
if volIsDir {
if err := os.MkdirAll(volPath, 0755); err != nil {
if err := os.MkdirAll(rootVolPath, 0755); err != nil {
return err
}
} else {
if err := os.MkdirAll(path.Dir(volPath), 0755); err != nil {
if err := os.MkdirAll(path.Dir(rootVolPath), 0755); err != nil {
return err
}
if f, err := os.OpenFile(volPath, os.O_CREATE, 0755); err != nil {
if f, err := os.OpenFile(rootVolPath, os.O_CREATE, 0755); err != nil {
return err
} else {
f.Close()
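
The switch to FollowSymlinkInScope guards against a volume path that is a symlink pointing outside the container's root filesystem. A toy version of the property it enforces: resolve the link, then refuse anything that escapes the root (not the real utils implementation, which also handles path components that do not exist yet):

package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// followInScope resolves symlinks under root and rejects any result
// that escapes root. Real volume paths may not exist yet, which the
// actual helper tolerates; this sketch requires an existing path.
func followInScope(p, root string) (string, error) {
	resolved, err := filepath.EvalSymlinks(p)
	if err != nil {
		return "", err
	}
	root = filepath.Clean(root)
	if resolved != root && !strings.HasPrefix(resolved, root+string(filepath.Separator)) {
		return "", fmt.Errorf("%s escapes scope %s", resolved, root)
	}
	return resolved, nil
}

func main() {
	// A symlink under the rootfs that points at, say, /etc would be
	// rejected; a path that stays under the rootfs is returned resolved.
	fmt.Println(followInScope("/tmp/rootfs", "/tmp/rootfs"))
}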
@@ -1357,6 +1402,32 @@ func (container *Container) GetImage() (*Image, error) {
}
func (container *Container) Unmount() error {
var (
err error
root = container.RootfsPath()
mounts = []string{
path.Join(root, "/.dockerinit"),
path.Join(root, "/.dockerenv"),
path.Join(root, "/etc/resolv.conf"),
}
)
if container.HostnamePath != "" && container.HostsPath != "" {
mounts = append(mounts, path.Join(root, "/etc/hostname"), path.Join(root, "/etc/hosts"))
}
for r := range container.Volumes {
mounts = append(mounts, path.Join(root, r))
}
for _, m := range mounts {
if lastError := mount.Unmount(m); lastError != nil {
err = lastError
}
}
if err != nil {
return err
}
return container.runtime.Unmount(container)
}
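
Unmount deliberately keeps going after a failure: every mount point is attempted and only the last error is returned, so one busy mount does not leave the rest mounted. The pattern on its own (the unmount function is injected here purely for illustration):

package main

import "fmt"

// unmountAll attempts every path and remembers the last failure,
// matching how Container.Unmount iterates its mount list.
func unmountAll(paths []string, unmount func(string) error) error {
	var lastError error
	for _, p := range paths {
		if err := unmount(p); err != nil {
			lastError = err
		}
	}
	return lastError
}

func main() {
	fake := func(p string) error {
		if p == "/rootfs/.dockerenv" {
			return fmt.Errorf("unmount %s: device or resource busy", p)
		}
		return nil
	}
	paths := []string{"/rootfs/.dockerinit", "/rootfs/.dockerenv", "/rootfs/etc/resolv.conf"}
	fmt.Println(unmountAll(paths, fake))
}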
@@ -1376,8 +1447,20 @@ func (container *Container) jsonPath() string {
return path.Join(container.root, "config.json")
}
func (container *Container) EnvConfigPath() string {
return path.Join(container.root, "config.env")
func (container *Container) EnvConfigPath() (string, error) {
p := path.Join(container.root, "config.env")
if _, err := os.Stat(p); err != nil {
if os.IsNotExist(err) {
f, err := os.Create(p)
if err != nil {
return "", err
}
f.Close()
} else {
return "", err
}
}
return p, nil
}
func (container *Container) lxcConfigPath() string {

View File

@@ -4,7 +4,7 @@
#
# This script provides supports completion of:
# - commands and their options
# - container ids
# - container ids and names
# - image repos and tags
# - filepaths
#
@@ -25,21 +25,24 @@ __docker_containers_all()
{
local containers
containers="$( docker ps -a -q )"
COMPREPLY=( $( compgen -W "$containers" -- "$cur" ) )
names="$( docker inspect -format '{{.Name}}' $containers | sed 's,^/,,' )"
COMPREPLY=( $( compgen -W "$names $containers" -- "$cur" ) )
}
__docker_containers_running()
{
local containers
containers="$( docker ps -q )"
COMPREPLY=( $( compgen -W "$containers" -- "$cur" ) )
names="$( docker inspect -format '{{.Name}}' $containers | sed 's,^/,,' )"
COMPREPLY=( $( compgen -W "$names $containers" -- "$cur" ) )
}
__docker_containers_stopped()
{
local containers
containers="$( comm -13 <(docker ps -q | sort -u) <(docker ps -a -q | sort -u) )"
COMPREPLY=( $( compgen -W "$containers" -- "$cur" ) )
names="$( docker inspect -format '{{.Name}}' $containers | sed 's,^/,,' )"
COMPREPLY=( $( compgen -W "$names $containers" -- "$cur" ) )
}
__docker_image_repos()
@@ -70,8 +73,9 @@ __docker_containers_and_images()
{
local containers images
containers="$( docker ps -a -q )"
names="$( docker inspect -format '{{.Name}}' $containers | sed 's,^/,,' )"
images="$( docker images | awk 'NR>1{print $1":"$2}' )"
COMPREPLY=( $( compgen -W "$images $containers" -- "$cur" ) )
COMPREPLY=( $( compgen -W "$images $names $containers" -- "$cur" ) )
__ltrim_colon_completions "$cur"
}

View File

@@ -142,14 +142,22 @@ if [ -z "$strictDebootstrap" ]; then
# this forces dpkg not to call sync() after package extraction and speeds up install
# the benefit is huge on spinning disks, and the penalty is nonexistent on SSD or decent server virtualization
echo 'force-unsafe-io' | sudo tee etc/dpkg/dpkg.cfg.d/02apt-speedup > /dev/null
# we want to effectively run "apt-get clean" after every install to keep images small
echo 'DPkg::Post-Invoke {"/bin/rm -f /var/cache/apt/archives/*.deb || true";};' | sudo tee etc/apt/apt.conf.d/no-cache > /dev/null
# we want to effectively run "apt-get clean" after every install to keep images small (see output of "apt-get clean -s" for context)
{
aptGetClean='"rm -f /var/cache/apt/archives/*.deb /var/cache/apt/archives/partial/*.deb /var/cache/apt/*.bin || true";'
echo "DPkg::Post-Invoke { ${aptGetClean} };"
echo "APT::Update::Post-Invoke { ${aptGetClean} };"
echo 'Dir::Cache::pkgcache ""; Dir::Cache::srcpkgcache "";'
} | sudo tee etc/apt/apt.conf.d/no-cache > /dev/null
# and remove the translations, too
echo 'Acquire::Languages "none";' | sudo tee etc/apt/apt.conf.d/no-languages > /dev/null
# helpful undo lines for each of the above tweaks (for lack of a better home to keep track of them):
# rm /usr/sbin/policy-rc.d
# rm /sbin/initctl; dpkg-divert --rename --remove /sbin/initctl
# rm /etc/dpkg/dpkg.cfg.d/02apt-speedup
# rm /etc/apt/apt.conf.d/no-cache
# rm /etc/apt/apt.conf.d/no-languages
if [ -z "$skipDetection" ]; then
# see also rudimentary platform detection in hack/install.sh

View File

@@ -4,6 +4,10 @@
<dict>
<key>name</key>
<string>Dockerfile</string>
<key>fileTypes</key>
<array>
<string>Dockerfile</string>
</array>
<key>patterns</key>
<array>
<dict>

View File

@@ -11,7 +11,8 @@ branch named [zfs_driver].
# Status
Pre-alpha
Alpha: The code is now capable of creating, running and destroying containers
and images.
The code is under development. Contributions in the form of suggestions,
code-reviews, and patches are welcome.

View File

@@ -30,6 +30,7 @@ func main() {
flDebug = flag.Bool("D", false, "Enable debug mode")
flAutoRestart = flag.Bool("r", true, "Restart previously running containers")
bridgeName = flag.String("b", "", "Attach containers to a pre-existing network bridge; use 'none' to disable container networking")
bridgeIp = flag.String("bip", "", "Use this CIDR notation address for the network bridge's IP, not compatible with -b")
pidfile = flag.String("p", "/var/run/docker.pid", "Path to use for daemon PID file")
flRoot = flag.String("g", "/var/lib/docker", "Path to use as the root of the docker runtime")
flEnableCors = flag.Bool("api-enable-cors", false, "Enable CORS headers in the remote API")
@@ -39,6 +40,7 @@ func main() {
flInterContainerComm = flag.Bool("icc", true, "Enable inter-container communication")
flGraphDriver = flag.String("s", "", "Force the docker runtime to use a specific storage driver")
flHosts = docker.NewListOpts(docker.ValidateHost)
flMtu = flag.Int("mtu", docker.DefaultNetworkMtu, "Set the containers network mtu")
)
flag.Var(&flDns, "dns", "Force docker to use specific DNS servers")
flag.Var(&flHosts, "H", "Multiple tcp://host:port or unix://path/to/socket to bind in daemon mode, single connection otherwise")
@@ -50,8 +52,17 @@ func main() {
return
}
if flHosts.Len() == 0 {
// If we do not have a host, default to unix socket
flHosts.Set(fmt.Sprintf("unix://%s", docker.DEFAULTUNIXSOCKET))
defaultHost := os.Getenv("DOCKER_HOST")
if defaultHost == "" || *flDaemon {
// If we do not have a host, default to unix socket
defaultHost = fmt.Sprintf("unix://%s", docker.DEFAULTUNIXSOCKET)
}
flHosts.Set(defaultHost)
}
if *bridgeName != "" && *bridgeIp != "" {
log.Fatal("You specified -b & -bip, mutually exclusive options. Please specify only one.")
}
if *flDebug {
@@ -64,6 +75,7 @@ func main() {
flag.Usage()
return
}
eng, err := engine.New(*flRoot)
if err != nil {
log.Fatal(err)
@@ -77,9 +89,11 @@ func main() {
job.SetenvList("Dns", flDns.GetAll())
job.SetenvBool("EnableIptables", *flEnableIptables)
job.Setenv("BridgeIface", *bridgeName)
job.Setenv("BridgeIp", *bridgeIp)
job.Setenv("DefaultIp", *flDefaultIp)
job.SetenvBool("InterContainerCommunication", *flInterContainerComm)
job.Setenv("GraphDriver", *flGraphDriver)
job.SetenvInt("Mtu", *flMtu)
if err := job.Run(); err != nil {
log.Fatal(err)
}

View File

@@ -1,4 +1,4 @@
Andy Rothfusz <andy@dotcloud.com> (@metalivedev)
Ken Cochrane <ken@dotcloud.com> (@kencochrane)
James Turnbull <james@lovedthanlost.net> (@jamesturnbull)
James Turnbull <james@lovedthanlost.net> (@jamtur01)
Sven Dowideit <SvenDowideit@fosiki.com> (@SvenDowideit)

View File

@@ -46,20 +46,20 @@ directory:
* Linux: `pip install -r docs/requirements.txt`
* Mac OS X: `[sudo] pip-2.7 -r docs/requirements.txt`
* Mac OS X: `[sudo] pip-2.7 install -r docs/requirements.txt`
### Alternative Installation: Docker Container
If you're running ``docker`` on your development machine then you may
find it easier and cleaner to use the Dockerfile. This installs Sphinx
find it easier and cleaner to use the docs Dockerfile. This installs Sphinx
in a container, adds the local ``docs/`` directory and builds the HTML
docs inside the container, even starting a simple HTTP server on port
8000 so that you can connect and see your changes. Just run ``docker
build .`` and run the resulting image. This is the equivalent to
``make clean server`` since each container starts clean.
8000 so that you can connect and see your changes.
In the ``docs/`` directory, run:
```docker build -t docker:docs . && docker run -p 8000:8000 docker:docs```
In the ``docker`` source directory, run:
```make docs```
This is the equivalent to ``make clean server`` since each container starts clean.
Usage
-----
@@ -128,7 +128,8 @@ Guides on using sphinx
* Code examples
* Start without $, so it's easy to copy and paste.
* Start typed commands with ``$ `` (dollar space) so that they
are easily differentiated from program output.
* Use "sudo" with docker to ensure that your command is runnable
even if they haven't [used the *docker*
group](http://docs.docker.io/en/latest/use/basics/#why-sudo).

View File

@@ -26,10 +26,10 @@ Docker Remote API
2. Versions
===========
The current version of the API is 1.7
The current version of the API is 1.8
Calling /images/<name>/insert is the same as calling
/v1.7/images/<name>/insert
/v1.8/images/<name>/insert
You can still call an old version of the api using
/v1.0/images/<name>/insert
@@ -55,6 +55,13 @@ What's new
**New!** This endpoint now returns the host config for the container.
.. http:post:: /images/create
.. http:post:: /images/(name)/insert
.. http:post:: /images/(name)/push
**New!** progressDetail object was added in the JSON. It's now possible
to get the current value and the total of the progress without having to
parse the string.
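
Since progressDetail carries numeric current/total values, a client can compute a percentage without parsing the human-readable progress string. A hedged sketch of decoding such a stream (field names are taken from the v1.8 examples elsewhere in this changeset; the transport framing is simplified):

package main

import (
	"encoding/json"
	"fmt"
	"io"
	"strings"
)

// progressLine models one streamed JSON message, using the new
// progressDetail object added in API v1.8.
type progressLine struct {
	Status         string `json:"status"`
	Progress       string `json:"progress"`
	ProgressDetail struct {
		Current int `json:"current"`
		Total   int `json:"total"`
	} `json:"progressDetail"`
}

func main() {
	stream := `{"status":"Pulling", "progress":"1 B/ 100 B", "progressDetail":{"current":1, "total":100}}
{"status":"Pulling", "progress":"50 B/ 100 B", "progressDetail":{"current":50, "total":100}}`
	dec := json.NewDecoder(strings.NewReader(stream))
	for {
		var msg progressLine
		if err := dec.Decode(&msg); err != nil {
			if err != io.EOF {
				fmt.Println("decode error:", err)
			}
			break
		}
		if msg.ProgressDetail.Total > 0 {
			pct := 100 * msg.ProgressDetail.Current / msg.ProgressDetail.Total
			fmt.Printf("%s: %d%%\n", msg.Status, pct)
		}
	}
}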
v1.7
****

View File

@@ -1078,7 +1078,7 @@ Monitor Docker's events
.. sourcecode:: http
POST /events?since=1374067924
GET /events?since=1374067924
**Example response**:

View File

@@ -1122,7 +1122,7 @@ Monitor Docker's events
.. sourcecode:: http
POST /events?since=1374067924
GET /events?since=1374067924
**Example response**:

View File

@@ -1093,7 +1093,7 @@ Monitor Docker's events
.. sourcecode:: http
POST /events?since=1374067924
GET /events?since=1374067924
**Example response**:

View File

@@ -1228,7 +1228,7 @@ Monitor Docker's events
.. sourcecode:: http
POST /events?since=1374067924
GET /events?since=1374067924
**Example response**:

View File

@@ -122,7 +122,6 @@ Create a container
"AttachStdout":true,
"AttachStderr":true,
"PortSpecs":null,
"Privileged": false,
"Tty":false,
"OpenStdin":false,
"StdinOnce":false,
@@ -136,10 +135,12 @@ Create a container
"/tmp": {}
},
"VolumesFrom":"",
"WorkingDir":""
"WorkingDir":"",
"ExposedPorts":{
"22/tcp": {}
}
}
**Example response**:
.. sourcecode:: http
@@ -364,10 +365,11 @@ Start a container
{
"Binds":["/tmp:/tmp"],
"LxcConf":{"lxc.utsname":"docker"},
"PortBindings":null
"PortBindings":{ "22/tcp": [{ "HostPort": "11022" }] },
"Privileged":false,
"PublishAllPorts":false
}
Binds need to reference Volumes that were defined during container creation.
**Example response**:
@@ -1159,7 +1161,7 @@ Monitor Docker's events
.. sourcecode:: http
POST /events?since=1374067924
GET /events?since=1374067924
**Example response**:

View File

@@ -122,7 +122,6 @@ Create a container
"AttachStdout":true,
"AttachStderr":true,
"PortSpecs":null,
"Privileged": false,
"Tty":false,
"OpenStdin":false,
"StdinOnce":false,
@@ -132,12 +131,16 @@ Create a container
],
"Dns":null,
"Image":"base",
"Volumes":{},
"Volumes":{
"/tmp": {}
},
"VolumesFrom":"",
"WorkingDir":""
"WorkingDir":"",
"ExposedPorts":{
"22/tcp": {}
}
}
**Example response**:
.. sourcecode:: http
@@ -151,6 +154,7 @@ Create a container
}
:jsonparam config: the container's configuration
:query name: Assign the specified name to the container. Must match ``/?[a-zA-Z0-9_-]+``.
:statuscode 201: no error
:statuscode 404: no such container
:statuscode 406: impossible to attach (container not running)
@@ -377,7 +381,10 @@ Start a container
{
"Binds":["/tmp:/tmp"],
"LxcConf":{"lxc.utsname":"docker"}
"LxcConf":{"lxc.utsname":"docker"},
"PortBindings":{ "22/tcp": [{ "HostPort": "11022" }] },
"PublishAllPorts":false,
"Privileged":false
}
**Example response**:
@@ -696,7 +703,7 @@ Create an image
Content-Type: application/json
{"status":"Pulling..."}
{"status":"Pulling", "progress":"1/? (n/a)"}
{"status":"Pulling", "progress":"1 B/ 100 B", "progressDetail":{"current":1, "total":100}}
{"error":"Invalid..."}
...
@@ -736,7 +743,7 @@ Insert a file in an image
Content-Type: application/json
{"status":"Inserting..."}
{"status":"Inserting", "progress":"1/? (n/a)"}
{"status":"Inserting", "progress":"1/? (n/a)", "progressDetail":{"current":1}}
{"error":"Invalid..."}
...
@@ -857,7 +864,7 @@ Push an image on the registry
Content-Type: application/json
{"status":"Pushing..."}
{"status":"Pushing", "progress":"1/? (n/a)"}
{"status":"Pushing", "progress":"1/? (n/a)", "progressDetail":{"current":1}}}
{"error":"Invalid..."}
...
@@ -1026,6 +1033,7 @@ Build an image from Dockerfile via stdin
:query q: suppress verbose build output
:query nocache: do not use the cache when building the image
:reqheader Content-type: should be set to ``"application/tar"``.
:reqheader X-Registry-Auth: base64-encoded AuthConfig object
:statuscode 200: no error
:statuscode 500: server error
@@ -1172,7 +1180,7 @@ Monitor Docker's events
.. sourcecode:: http
POST /events?since=1374067924
GET /events?since=1374067924
**Example response**:

View File

@@ -19,7 +19,8 @@ Docker Registry API
- It doesn't have a local database
- It will be open-sourced at some point
We expect that there will be multiple registries out there. To help to grasp the context, here are some examples of registries:
We expect that there will be multiple registries out there. To help to grasp
the context, here are some examples of registries:
- **sponsor registry**: such a registry is provided by a third-party hosting infrastructure as a convenience for their customers and the docker community as a whole. Its costs are supported by the third party, but the management and operation of the registry are supported by dotCloud. It features read/write access, and delegates authentication and authorization to the Index.
- **mirror registry**: such a registry is provided by a third-party hosting infrastructure but is targeted at their customers only. Some mechanism (unspecified to date) ensures that public images are pulled from a sponsor registry to the mirror registry, to make sure that the customers of the third-party provider can “docker pull” those images locally.
@@ -37,7 +38,10 @@ We expect that there will be multiple registries out there. To help to grasp the
- local mount point;
- remote docker addressed through SSH.
The latter would only require two new commands in docker, e.g. “registryget” and “registryput”, wrapping access to the local filesystem (and optionally doing consistency checks). Authentication and authorization are then delegated to SSH (e.g. with public keys).
The latter would only require two new commands in docker, e.g. ``registryget``
and ``registryput``, wrapping access to the local filesystem (and optionally
doing consistency checks). Authentication and authorization are then delegated
to SSH (e.g. with public keys).
2. Endpoints
============

View File

@@ -15,11 +15,13 @@ Registry & Index Spec
---------
The Index is responsible for centralizing information about:
- User accounts
- Checksums of the images
- Public namespaces
The Index has different components:
- Web UI
- Meta-data store (comments, stars, list public repositories)
- Authentication service
@@ -27,7 +29,7 @@ The Index has different components:
The index is authoritative for this information.
We expect that there will be only one instance of the index, run and managed by dotCloud.
We expect that there will be only one instance of the index, run and managed by Docker Inc.
1.2 Registry
------------
@@ -53,12 +55,16 @@ We expect that there will be multiple registries out there. To help to grasp the
- local mount point;
- remote docker addressed through SSH.
The latter would only require two new commands in docker, e.g. “registryget” and “registryput”, wrapping access to the local filesystem (and optionally doing consistency checks). Authentication and authorization are then delegated to SSH (e.g. with public keys).
The latter would only require two new commands in docker, e.g. ``registryget``
and ``registryput``, wrapping access to the local filesystem (and optionally
doing consistency checks). Authentication and authorization are then delegated
to SSH (e.g. with public keys).
1.3 Docker
----------
On top of being a runtime for LXC, Docker is the Registry client. It supports:
- Push / Pull on the registry
- Client authentication on the Index
@@ -72,21 +78,33 @@ On top of being a runtime for LXC, Docker is the Registry client. It supports:
1. Contact the Index to know where I should download “samalba/busybox”
2. Index replies:
a. samalba/busybox is on Registry A
b. here are the checksums for samalba/busybox (for all layers)
a. ``samalba/busybox`` is on Registry A
b. here are the checksums for ``samalba/busybox`` (for all layers)
c. token
3. Contact Registry A to receive the layers for samalba/busybox (all of them to the base image). Registry A is authoritative for “samalba/busybox” but keeps a copy of all inherited layers and serves them all from the same location.
3. Contact Registry A to receive the layers for ``samalba/busybox`` (all of them to the base image). Registry A is authoritative for “samalba/busybox” but keeps a copy of all inherited layers and serves them all from the same location.
4. registry contacts index to verify if token/user is allowed to download images
5. Index returns true/false letting the registry know if it should proceed or error out
6. Get the payload for all layers
It's possible to run docker pull https://<registry>/repositories/samalba/busybox. In this case, docker bypasses the Index. However the security is not guaranteed (in case Registry A is corrupted) because there won't be any checksum checks.
It's possible to run:
Currently registry redirects to s3 urls for downloads, going forward all downloads need to be streamed through the registry. The Registry will then abstract the calls to S3 by a top-level class which implements sub-classes for S3 and local storage.
.. code-block:: bash
Token is only returned when the 'X-Docker-Token' header is sent with the request.
docker pull https://<registry>/repositories/samalba/busybox
Basic Auth is required to pull private repos. Basic auth isn't required for pulling public repos, but if one is provided, it needs to be valid and belong to an active account.
In this case, Docker bypasses the Index. However the security is not guaranteed
(in case Registry A is corrupted) because there won't be any checksum checks.
Currently registry redirects to s3 urls for downloads, going forward all
downloads need to be streamed through the registry. The Registry will then
abstract the calls to S3 by a top-level class which implements sub-classes for
S3 and local storage.
Token is only returned when the ``X-Docker-Token`` header is sent with the request.
Basic Auth is required to pull private repos. Basic auth isn't required for
pulling public repos, but if one is provided, it needs to be valid and belong
to an active account.
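
A hedged sketch of the pull handshake this section describes: send Basic Auth plus the X-Docker-Token header to the Index, and read the token and registry endpoints back out of the response headers. Header names follow the spec prose here and in the auth section; treat this as an illustration, not a verified client:

package main

import (
	"fmt"
	"net/http"
)

// fetchToken asks the Index for a repository token. Error handling
// is intentionally minimal; the endpoint path matches the images
// API listed below.
func fetchToken(index, repo, user, pass string) (token, endpoints string, err error) {
	req, err := http.NewRequest("GET", index+"/v1/repositories/"+repo+"/images", nil)
	if err != nil {
		return "", "", err
	}
	req.SetBasicAuth(user, pass)
	req.Header.Set("X-Docker-Token", "true") // ask the Index for a token
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return "", "", err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return "", "", fmt.Errorf("index returned %s", resp.Status)
	}
	return resp.Header.Get("X-Docker-Token"), resp.Header.Get("X-Docker-Endpoints"), nil
}

func main() {
	token, endpoints, err := fetchToken("https://index.docker.io", "samalba/busybox", "user", "secret")
	fmt.Println(token, endpoints, err)
}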
API (pulling repository foo/bar):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -155,7 +173,9 @@ API (pulling repository foo/bar):
**Index can be replaced!** For a private Registry deployed, a custom Index can be used to serve and validate token according to different policies.
Docker computes the checksums and submits them to the Index at the end of the push. When a repository name does not have checksums on the Index, it means that the push is in progress (since checksums are submitted at the end).
Docker computes the checksums and submits them to the Index at the end of the
push. When a repository name does not have checksums on the Index, it means
that the push is in progress (since checksums are submitted at the end).
API (pushing repos foo/bar):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -237,10 +257,11 @@ API (pushing repos foo/bar):
2.3 Delete
----------
If you need to delete something from the index or registry, we need a nice clean way to do that. Here is the workflow.
If you need to delete something from the index or registry, we need a nice
clean way to do that. Here is the workflow.
1. Docker contacts the index to request a delete of a repository samalba/busybox (authentication required with user credentials)
2. If authentication works and repository is valid, samalba/busybox is marked as deleted and a temporary token is returned
1. Docker contacts the index to request a delete of a repository ``samalba/busybox`` (authentication required with user credentials)
2. If authentication works and repository is valid, ``samalba/busybox`` is marked as deleted and a temporary token is returned
3. Send a delete request to the registry for the repository (along with the token)
4. Registry A contacts the Index to verify the token (the token must correspond to the repository name)
5. Index validates the token. Registry A deletes the repository and everything associated to it.
@@ -312,24 +333,40 @@ The Index has two main purposes (along with its fancy social features):
3.1 Without an Index
--------------------
Using the Registry without the Index can be useful to store the images on a private network without having to rely on an external entity controlled by dotCloud.
In this case, the registry will be launched in a special mode (--standalone? --no-index?). In this mode, the only thing which changes is that Registry will never contact the Index to verify a token. It will be the Registry owner responsibility to authenticate the user who pushes (or even pulls) an image using any mechanism (HTTP auth, IP based, etc...).
Using the Registry without the Index can be useful to store the images on a
private network without having to rely on an external entity controlled by
Docker Inc.
In this scenario, the Registry is responsible for the security in case of data corruption since the checksums are not delivered by a trusted entity.
In this case, the registry will be launched in a special mode (--standalone?
--no-index?). In this mode, the only thing which changes is that Registry will
never contact the Index to verify a token. It will be the Registry owner
responsibility to authenticate the user who pushes (or even pulls) an image
using any mechanism (HTTP auth, IP based, etc...).
As hinted previously, a standalone registry can also be implemented by any HTTP server handling GET/PUT requests (or even only GET requests if no write access is necessary).
In this scenario, the Registry is responsible for the security in case of data
corruption since the checksums are not delivered by a trusted entity.
As hinted previously, a standalone registry can also be implemented by any HTTP
server handling GET/PUT requests (or even only GET requests if no write access
is necessary).
3.2 With an Index
-----------------
The Index data needed by the Registry are simple:
- Serve the checksums
- Provide and authorize a Token
In the scenario of a Registry running on a private network with the need of centralizing and authorizing, it's easy to use a custom Index.
In the scenario of a Registry running on a private network with the need of
centralizing and authorizing, it's easy to use a custom Index.
The only challenge will be to tell Docker to contact (and trust) this custom Index. Docker will be configurable at some point to use a specific Index, it'll be the private entity's responsibility (basically the organization who uses Docker in a private environment) to maintain the Index and the Dockers' configuration among its consumers.
The only challenge will be to tell Docker to contact (and trust) this custom
Index. Docker will be configurable at some point to use a specific Index, it'll
be the private entity's responsibility (basically the organization who uses
Docker in a private environment) to maintain the Index and the Dockers'
configuration among its consumers.
4. The API
==========
@@ -339,16 +376,22 @@ The first version of the api is available here: https://github.com/jpetazzo/dock
4.1 Images
----------
The format returned in the images is not defined here (for layer and json), basically because Registry stores exactly the same kind of information as Docker uses to manage them.
The format returned in the images is not defined here (for layer and JSON),
basically because Registry stores exactly the same kind of information as
Docker uses to manage them.
The format of ancestry is a line-separated list of image ids, in age order. I.e. the image's parent is on the last line, the parent of the parent on the next-to-last line, etc.; if the image has no parent, the file is empty.
The format of ancestry is a line-separated list of image ids, in age order,
i.e. the image's parent is on the last line, the parent of the parent on the
next-to-last line, etc.; if the image has no parent, the file is empty.
GET /v1/images/<image_id>/layer
PUT /v1/images/<image_id>/layer
GET /v1/images/<image_id>/json
PUT /v1/images/<image_id>/json
GET /v1/images/<image_id>/ancestry
PUT /v1/images/<image_id>/ancestry
.. code-block:: bash
GET /v1/images/<image_id>/layer
PUT /v1/images/<image_id>/layer
GET /v1/images/<image_id>/json
PUT /v1/images/<image_id>/json
GET /v1/images/<image_id>/ancestry
PUT /v1/images/<image_id>/ancestry
4.2 Users
---------
@@ -393,7 +436,9 @@ PUT /v1/users/<username>
4.2.3 Login (Index)
^^^^^^^^^^^^^^^^^^^
Does nothing else but asking for a user authentication. Can be used to validate credentials. HTTP Basic Auth for now, maybe change in future.
Does nothing else but asking for a user authentication. Can be used to validate
credentials. HTTP Basic Auth for now, maybe change in future.
GET /v1/users
@@ -405,7 +450,10 @@ GET /v1/users
4.3 Tags (Registry)
-------------------
The Registry does not know anything about users. Even though repositories are under usernames, it's just a namespace for the registry. Allowing us to implement organizations or different namespaces per user later, without modifying the Registry's API.
The Registry does not know anything about users. Even though repositories are
under usernames, it's just a namespace for the registry. Allowing us to
implement organizations or different namespaces per user later, without
modifying the Registry's API.
The following naming restrictions apply:
@@ -439,7 +487,10 @@ DELETE /v1/repositories/<namespace>/<repo_name>/tags/<tag>
4.4 Images (Index)
------------------
For the Index to “resolve” the repository name to a Registry location, it uses the X-Docker-Endpoints header. In other terms, these requests always add an “X-Docker-Endpoints” header to indicate the location of the registry which hosts this repository.
For the Index to “resolve” the repository name to a Registry location, it uses
the X-Docker-Endpoints header. In other terms, these requests always add an
``X-Docker-Endpoints`` header to indicate the location of the registry which hosts
this repository.
4.4.1 Get the images
^^^^^^^^^^^^^^^^^^^^^
@@ -484,17 +535,20 @@ Return 202 OK
======================
It's possible to chain Registry servers for several reasons:
- Load balancing
- Delegate the next request to another server
When a Registry is a reference for a repository, it should host the entire images chain in order to avoid breaking the chain during the download.
When a Registry is a reference for a repository, it should host the entire
images chain in order to avoid breaking the chain during the download.
The Index and Registry use this mechanism to redirect on one or the other.
Example with an image download:
On every request, a special header can be returned:
X-Docker-Endpoints: server1,server2
On every request, a special header can be returned::
X-Docker-Endpoints: server1,server2
On the next request, the client will always pick a server from this list.
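
A small sketch of that chaining rule as a client would apply it: capture X-Docker-Endpoints when it appears and route the next request to one of the listed servers (picking the first entry is an illustrative choice, not a mandate of the spec):

package main

import (
	"fmt"
	"net/http"
	"strings"
)

// rememberEndpoints returns the server the next request should use:
// the first entry of X-Docker-Endpoints if present, else the current one.
func rememberEndpoints(resp *http.Response, current string) string {
	if h := resp.Header.Get("X-Docker-Endpoints"); h != "" {
		servers := strings.Split(h, ",")
		return strings.TrimSpace(servers[0])
	}
	return current
}

func main() {
	resp := &http.Response{Header: http.Header{}}
	resp.Header.Set("X-Docker-Endpoints", "server1,server2")
	fmt.Println(rememberEndpoints(resp, "index.docker.io")) // server1
}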
@@ -504,7 +558,8 @@ On the next request, the client will always pick a server from this list.
6.1 On the Index
-----------------
The Index supports both “Basic” and “Token” challenges. Usually when there is a “401 Unauthorized”, the Index replies this::
The Index supports both “Basic” and “Token” challenges. Usually when there is a
``401 Unauthorized``, the Index replies this::
401 Unauthorized
WWW-Authenticate: Basic realm="auth required",Token
@@ -543,11 +598,13 @@ The Registry only supports the Token challenge::
401 Unauthorized
WWW-Authenticate: Token
The only way is to provide a token on 401 Unauthorized responses::
The only way is to provide a token on ``401 Unauthorized`` responses::
Authorization: Token signature=123abc,repository=foo/bar,access=read
Authorization: Token signature=123abc,repository="foo/bar",access=read
Usually, the Registry provides a Cookie when a Token verification succeeded. Every time the Registry passes a Cookie, you have to pass it back the same cookie.::
Usually, the Registry provides a Cookie when a Token verification succeeded.
Every time the Registry passes a Cookie, you have to pass it back the same
cookie.::
200 OK
Set-Cookie: session="wD/J7LqL5ctqw8haL10vgfhrb2Q=?foo=UydiYXInCnAxCi4=&timestamp=RjEzNjYzMTQ5NDcuNDc0NjQzCi4="; Path=/; HttpOnly

View File

@@ -12,7 +12,7 @@ To list available commands, either run ``docker`` with no parameters or execute
$ sudo docker
Usage: docker [OPTIONS] COMMAND [arg...]
-H=[unix:///var/run/docker.sock]: tcp://host:port to bind/connect to or unix://path/to/socket to use
-H=[unix:///var/run/docker.sock]: tcp://[host[:port]] to bind/connect to or unix://[/path/to/socket] to use. When host=[0.0.0.0], port=[4243] or path=[/var/run/docker.sock] is omitted, default values are used.
A self-sufficient runtime for linux containers.
@@ -27,28 +27,42 @@ To list available commands, either run ``docker`` with no parameters or execute
Usage of docker:
-D=false: Enable debug mode
-H=[unix:///var/run/docker.sock]: Multiple tcp://host:port or unix://path/to/socket to bind in daemon mode, single connection otherwise
-H=[unix:///var/run/docker.sock]: tcp://[host[:port]] to bind or unix://[/path/to/socket] to use. When host=[0.0.0.0], port=[4243] or path=[/var/run/docker.sock] is omitted, default values are used.
-api-enable-cors=false: Enable CORS headers in the remote API
-b="": Attach containers to a pre-existing network bridge; use 'none' to disable container networking
-bip="": Use the provided CIDR notation address for the dynamically created bridge (docker0); Mutually exclusive of -b
-d=false: Enable daemon mode
-dns="": Force docker to use specific DNS servers
-g="/var/lib/docker": Path to use as the root of the docker runtime
-icc=true: Enable inter-container communication
-ip="0.0.0.0": Default IP address to use when binding container ports
-iptables=true: Disable docker's addition of iptables rules
-mtu=1500: Set the containers network mtu
-p="/var/run/docker.pid": Path to use for daemon PID file
-r=true: Restart previously running containers
-s="": Force the docker runtime to use a specific storage driver
-v=false: Print version information and quit
The docker daemon is the persistent process that manages containers. Docker uses the same binary for both the
The Docker daemon is the persistent process that manages containers. Docker uses the same binary for both the
daemon and client. To run the daemon you provide the ``-d`` flag.
To force docker to use devicemapper as the storage driver, use ``docker -d -s devicemapper``
To force Docker to use devicemapper as the storage driver, use ``docker -d -s devicemapper``.
To set the dns server for all docker containers, use ``docker -d -dns 8.8.8.8``
To set the DNS server for all Docker containers, use ``docker -d -dns 8.8.8.8``.
To run the daemon with debug output, use ``docker -d -D``.
The docker client will also honor the ``DOCKER_HOST`` environment variable to set
the ``-H`` flag for the client.
::
docker -H tcp://0.0.0.0:4243 ps
# or
export DOCKER_HOST="tcp://0.0.0.0:4243"
docker ps
# both are equal
To run the daemon with debug output, use ``docker -d -D``
.. _cli_attach:
@@ -67,11 +81,11 @@ To run the daemon with debug output, use ``docker -d -D``
You can detach from the container again (and leave it running) with
``CTRL-c`` (for a quiet exit) or ``CTRL-\`` to get a stacktrace of
the Docker client when it quits. When you detach from the container's
process the exit code will be retuned to the client.
process the exit code will be returned to the client.
To stop a container, use ``docker stop``
To stop a container, use ``docker stop``.
To kill the container, use ``docker kill``
To kill the container, use ``docker kill``.
.. _cli_attach_examples:
@@ -127,12 +141,11 @@ Examples:
-no-cache: Do not use the cache when building the image.
-rm: Remove intermediate containers after a successful build
The files at PATH or URL are called the "context" of the build. The
build process may refer to any of the files in the context, for
example when using an :ref:`ADD <dockerfile_add>` instruction. When a
single ``Dockerfile`` is given as URL, then no context is set. When a
git repository is set as URL, then the repository is used as the
context
The files at ``PATH`` or ``URL`` are called the "context" of the build. The
build process may refer to any of the files in the context, for example when
using an :ref:`ADD <dockerfile_add>` instruction. When a single ``Dockerfile``
is given as ``URL``, then no context is set. When a Git repository is set as
``URL``, then the repository is used as the context
.. _cli_build_examples:
@@ -167,13 +180,13 @@ Examples:
---> f52f38b7823e
Successfully built f52f38b7823e
This example specifies that the PATH is ``.``, and so all the files in
the local directory get tar'd and sent to the Docker daemon. The PATH
This example specifies that the ``PATH`` is ``.``, and so all the files in
the local directory get tar'd and sent to the Docker daemon. The ``PATH``
specifies where to find the files for the "context" of the build on
the Docker daemon. Remember that the daemon could be running on a
remote machine and that no parsing of the Dockerfile happens at the
remote machine and that no parsing of the ``Dockerfile`` happens at the
client side (where you're running ``docker build``). That means that
*all* the files at PATH get sent, not just the ones listed to
*all* the files at ``PATH`` get sent, not just the ones listed to
:ref:`ADD <dockerfile_add>` in the ``Dockerfile``.
The transfer of context from the local machine to the Docker daemon is
@@ -196,16 +209,16 @@ tag will be ``2.0``
This will read a ``Dockerfile`` from *stdin* without context. Due to
the lack of a context, no contents of any local directory will be sent
to the ``docker`` daemon. Since there is no context, a Dockerfile
to the ``docker`` daemon. Since there is no context, a ``Dockerfile``
``ADD`` only works if it refers to a remote URL.
.. code-block:: bash
$ sudo docker build github.com/creack/docker-firefox
This will clone the Github repository and use the cloned repository as
This will clone the GitHub repository and use the cloned repository as
context. The ``Dockerfile`` at the root of the repository is used as
``Dockerfile``. Note that you can specify an arbitrary git repository
``Dockerfile``. Note that you can specify an arbitrary Git repository
by using the ``git://`` schema.
@@ -225,8 +238,10 @@ by using the ``git://`` schema.
-run="": Configuration to be applied when the image is launched with `docker run`.
(ex: -run='{"Cmd": ["cat", "/world"], "PortSpecs": ["22"]}')
Simple commit of an existing container
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. _cli_commit_examples:
Commit an existing container
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. code-block:: bash
@@ -240,13 +255,36 @@ Simple commit of an existing container
REPOSITORY TAG ID CREATED VIRTUAL SIZE
SvenDowideit/testimage version3 f5283438590d 16 seconds ago 335.7 MB
Change the command that a container runs
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Sometimes you have an application container running just a service and you need
to make a quick change and then change it back.
In this example, we run a container with ``ls`` and then change the image to
run ``ls /etc``.
.. code-block:: bash
$ docker run -t -name test ubuntu ls
bin boot dev etc home lib lib64 media mnt opt proc root run sbin selinux srv sys tmp usr var
$ docker commit -run='{"Cmd": ["ls","/etc"]}' test test2
933d16de9e70005304c1717b5c6f2f39d6fd50752834c6f34a155c70790011eb
$ docker run -t test2
adduser.conf gshadow login.defs rc0.d
alternatives gshadow- logrotate.d rc1.d
apt host.conf lsb-base rc2.d
...
Full -run example
.................
(multiline is ok within a single quote ``'``)
The ``-run`` JSON hash changes the ``Config`` section when running ``docker inspect CONTAINERID``
or ``config`` when running ``docker inspect IMAGEID``.
::
(Multiline is okay within a single quote ``'``)
.. code-block:: bash
$ sudo docker commit -run='
{
@@ -289,7 +327,7 @@ Full -run example
Copy files/folders from the containers filesystem to the host
path. Paths are relative to the root of the filesystem.
.. code-block:: bash
$ sudo docker cp 7bb0e258aefe:/etc/debian_version .
@@ -303,7 +341,7 @@ Full -run example
::
Usage: docker diff CONTAINER
List the changed files and directories in a container's filesystem
There are 3 events that are listed in the 'diff':
@@ -312,7 +350,7 @@ There are 3 events that are listed in the 'diff':
2. ```D``` - Delete
3. ```C``` - Change
for example:
For example:
.. code-block:: bash
@@ -340,7 +378,7 @@ for example:
Usage: docker events
Get real time events from the server
-since="": Show previously created events and then stream.
(either seconds since epoch, or date string as below)
@@ -403,8 +441,8 @@ Show events in the past from a specified time
Usage: docker export CONTAINER
Export the contents of a filesystem as a tar archive to STDOUT
for example:
For example:
.. code-block:: bash
@@ -424,7 +462,7 @@ for example:
-notrunc=false: Don't truncate output
-q=false: only show numeric IDs
To see how the docker:latest image was built:
To see how the ``docker:latest`` image was built:
.. code-block:: bash
@@ -456,7 +494,7 @@ To see how the docker:latest image was built:
d5e85dc5b1d8 2 weeks ago /bin/sh -c apt-get update
13e642467c11 2 weeks ago /bin/sh -c echo 'deb http://archive.ubuntu.com/ubuntu precise main universe' > /etc/apt/sources.list
ae6dde92a94e 2 weeks ago /bin/sh -c #(nop) MAINTAINER Solomon Hykes <solomon@dotcloud.com>
ubuntu:12.04 6 months ago
ubuntu:12.04 6 months ago
.. _cli_images:
@@ -474,7 +512,7 @@ To see how the docker:latest image was built:
-q=false: only show numeric IDs
-tree=false: output graph in tree format
-viz=false: output graph in graphviz format
Listing the most recently created images
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -527,15 +565,15 @@ Displaying image hierarchy
$ sudo docker images -tree
|─8dbd9e392a96 Size: 131.5 MB (virtual 131.5 MB) Tags: ubuntu:12.04,ubuntu:latest,ubuntu:precise
├─8dbd9e392a96 Size: 131.5 MB (virtual 131.5 MB) Tags: ubuntu:12.04,ubuntu:latest,ubuntu:precise
└─27cf78414709 Size: 180.1 MB (virtual 180.1 MB)
└─b750fe79269d Size: 24.65 kB (virtual 180.1 MB) Tags: ubuntu:12.10,ubuntu:quantal
|─f98de3b610d5 Size: 12.29 kB (virtual 180.1 MB)
| └─7da80deb7dbf Size: 16.38 kB (virtual 180.1 MB)
| └─65ed2fee0a34 Size: 20.66 kB (virtual 180.2 MB)
| └─a2b9ea53dddc Size: 819.7 MB (virtual 999.8 MB)
| └─a29b932eaba8 Size: 28.67 kB (virtual 999.9 MB)
| └─e270a44f124d Size: 12.29 kB (virtual 999.9 MB) Tags: progrium/buildstep:latest
├─f98de3b610d5 Size: 12.29 kB (virtual 180.1 MB)
│ └─7da80deb7dbf Size: 16.38 kB (virtual 180.1 MB)
│ └─65ed2fee0a34 Size: 20.66 kB (virtual 180.2 MB)
│ └─a2b9ea53dddc Size: 819.7 MB (virtual 999.8 MB)
│ └─a29b932eaba8 Size: 28.67 kB (virtual 999.9 MB)
│ └─e270a44f124d Size: 12.29 kB (virtual 999.9 MB) Tags: progrium/buildstep:latest
└─17e74ac162d8 Size: 53.93 kB (virtual 180.2 MB)
└─339a3f56b760 Size: 24.65 kB (virtual 180.2 MB)
└─904fcc40e34d Size: 96.7 MB (virtual 276.9 MB)
@@ -562,10 +600,9 @@ Displaying image hierarchy
(.tar, .tar.gz, .tgz, .bzip, .tar.xz, .txz) into it, then optionally tag it.
At this time, the URL must start with ``http`` and point to a single
file archive (.tar, .tar.gz, .tgz, .bzip, .tar.xz, .txz) containing a
file archive (.tar, .tar.gz, .tgz, .bzip, .tar.xz, or .txz) containing a
root filesystem. If you would like to import from a local directory or
archive, you can use the ``-`` parameter to take the data from
standard in.
archive, you can use the ``-`` parameter to take the data from *stdin*.
Examples
~~~~~~~~
@@ -575,24 +612,30 @@ Import from a remote location
This will create a new untagged image.
``$ sudo docker import http://example.com/exampleimage.tgz``
.. code-block:: bash
$ sudo docker import http://example.com/exampleimage.tgz
Import from a local file
........................
Import to docker via pipe and standard in
Import to docker via pipe and *stdin*.
``$ cat exampleimage.tgz | sudo docker import - exampleimagelocal:new``
.. code-block:: bash
$ cat exampleimage.tgz | sudo docker import - exampleimagelocal:new
Import from a local directory
.............................
``$ sudo tar -c . | docker import - exampleimagedir``
.. code-block:: bash
Note the ``sudo`` in this example -- you must preserve the ownership
of the files (especially root ownership) during the archiving with
tar. If you are not root (or sudo) when you tar, then the ownerships
might not get preserved.
$ sudo tar -c . | docker import - exampleimagedir
Note the ``sudo`` in this example -- you must preserve the ownership of the
files (especially root ownership) during the archiving with tar. If you are not
root (or the sudo command) when you tar, then the ownerships might not get
preserved.
.. _cli_info:
@@ -631,16 +674,16 @@ might not get preserved.
Insert a file from URL in the IMAGE at PATH
Use the specified IMAGE as the parent for a new image which adds a
:ref:`layer <layer_def>` containing the new file. ``insert`` does not modify
the original image, and the new image has the contents of the parent image,
plus the new file.
Use the specified ``IMAGE`` as the parent for a new image which adds a
:ref:`layer <layer_def>` containing the new file. The ``insert`` command does
not modify the original image, and the new image has the contents of the parent
image, plus the new file.
Examples
~~~~~~~~
Insert file from github
Insert file from GitHub
.......................
.. code-block:: bash
@@ -655,16 +698,16 @@ Insert file from github
::
Usage: docker inspect [OPTIONS] CONTAINER
Usage: docker inspect CONTAINER|IMAGE [CONTAINER|IMAGE...]
Return low-level information on a container
Return low-level information on a container/image
-format="": template to output results
-format="": Format the output using the given go template.
By default, this will render all results in a JSON array. If a format
is specified, the given template will be executed for each result.
Go's `text/template <http://golang.org/pkg/text/template/>` package
Go's `text/template <http://golang.org/pkg/text/template/>`_ package
describes all the details of the format.
Examples
@@ -769,6 +812,15 @@ Known Issues (kill)
Fetch the logs of a container
The ``docker logs`` command is a convenience which batch-retrieves whatever
logs are present at the time of execution. This does not guarantee execution
order when combined with a ``docker run`` (i.e. your run may not have generated
any logs at the time you execute ``docker logs``).
The ``docker logs -f`` command combines ``docker logs`` and ``docker attach``:
it will first return all logs from the beginning and then continue streaming
new output from the container's stdout and stderr.
.. _cli_port:
@@ -900,6 +952,38 @@ containers will not be deleted.
Usage: docker rmi IMAGE [IMAGE...]
Remove one or more images
Removing tagged images
~~~~~~~~~~~~~~~~~~~~~~
Images can be removed either by their short or long IDs, or their image names.
If an image has more than one name, each of them needs to be removed before the
image is removed.
.. code-block:: bash
$ sudo docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
test1 latest fd484f19954f 23 seconds ago 7 B (virtual 4.964 MB)
test latest fd484f19954f 23 seconds ago 7 B (virtual 4.964 MB)
test2 latest fd484f19954f 23 seconds ago 7 B (virtual 4.964 MB)
$ sudo docker rmi fd484f19954f
Error: Conflict, cannot delete image fd484f19954f because it is tagged in multiple repositories
2013/12/11 05:47:16 Error: failed to remove one or more images
$ sudo docker rmi test1
Untagged: fd484f19954f4920da7ff372b5067f5b7ddb2fd3830cecd17b96ea9e286ba5b8
$ sudo docker rmi test2
Untagged: fd484f19954f4920da7ff372b5067f5b7ddb2fd3830cecd17b96ea9e286ba5b8
$ sudo docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
test1 latest fd484f19954f 23 seconds ago 7 B (virtual 4.964 MB)
$ sudo docker rmi test
Untagged: fd484f19954f4920da7ff372b5067f5b7ddb2fd3830cecd17b96ea9e286ba5b8
Deleted: fd484f19954f4920da7ff372b5067f5b7ddb2fd3830cecd17b96ea9e286ba5b8
.. _cli_run:
@@ -938,6 +1022,14 @@ containers will not be deleted.
-name="": Assign the specified name to the container. If no name is specific docker will generate a random name
-P=false: Publish all exposed ports to the host interfaces
The ``docker run`` command first ``creates`` a writeable container layer over
the specified image, and then ``starts`` it using the specified command. That
is, ``docker run`` is equivalent to the API ``/containers/create`` then
``/containers/(id)/start``.
The ``docker run`` command can be used in combination with ``docker commit`` to
:ref:`change the command that a container runs <cli_commit_examples>`.
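A minimal sketch of that combination (the image and repository names here are placeholders):
.. code-block:: bash
# run a container interactively, make changes inside it, then exit
$ sudo docker run -i -t ubuntu /bin/bash
# commit the most recently created container as a new image
$ sudo docker commit $(sudo docker ps -l -q) myrepo/myimage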
Known Issues (run -volumes-from)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -952,10 +1044,10 @@ Examples:
$ sudo docker run -cidfile /tmp/docker_test.cid ubuntu echo "test"
This will create a container and print "test" to the console. The
``cidfile`` flag makes docker attempt to create a new file and write the
container ID to it. If the file exists already, docker will return an
error. Docker will close this file when docker run exits.
This will create a container and print ``test`` to the console. The
``cidfile`` flag makes Docker attempt to create a new file and write the
container ID to it. If the file exists already, Docker will return an
error. Docker will close this file when ``docker run`` exits.
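The new container's ID can then be read back from that file:
.. code-block:: bash
$ cat /tmp/docker_test.cid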
.. code-block:: bash
@@ -989,7 +1081,7 @@ use-cases, like running Docker within Docker.
$ sudo docker run -w /path/to/dir/ -i -t ubuntu pwd
The ``-w`` flag runs the command inside the given working directory,
here /path/to/dir/. If the path does not exists it is created inside the
here ``/path/to/dir/``. If the path does not exist, it is created inside the
container.
.. code-block:: bash
@@ -1006,7 +1098,7 @@ using the container, but inside the current working directory.
$ sudo docker run -p 127.0.0.1:80:8080 ubuntu bash
This binds port ``8080`` of the container to port ``80`` on 127.0.0.1 of the
This binds port ``8080`` of the container to port ``80`` on ``127.0.0.1`` of the
host machine. :ref:`port_redirection` explains in detail how to manipulate ports
in Docker.
@@ -1040,11 +1132,31 @@ to the newly created container.
$ sudo docker run -volumes-from 777f7dc92da7,ba8c0c54f0f2:ro -i -t ubuntu pwd
The ``-volumes-from`` flag mounts all the defined volumes from the
refrence containers. Containers can be specified by a comma seperated
referenced containers. Containers can be specified by a comma separated
list or by repetitions of the ``-volumes-from`` argument. The container
id may be optionally suffixed with ``:ro`` or ``:rw`` to mount the volumes in
ID may be optionally suffixed with ``:ro`` or ``:rw`` to mount the volumes in
read-only or read-write mode, respectively. By default, the volumes are mounted
in the same mode (rw or ro) as the reference container.
in the same mode (read write or read only) as the reference container.
A complete example
..................
.. code-block:: bash
$ sudo docker run -d -name static static-web-files sh
$ sudo docker run -d -expose=8098 -name riak riakserver
$ sudo docker run -d -m 100m -e DEVELOPMENT=1 -e BRANCH=example-code -v $(pwd):/app/bin:ro -name app appserver
$ sudo docker run -d -p 1443:443 -dns=dns.dev.org -v /var/log/httpd -volumes-from static -link riak -link app -h www.sven.dev.org -name web webserver
$ sudo docker run -t -i -rm -volumes-from web -w /var/log/httpd busybox tail -f access.log
This example shows 5 containers that might be set up to test a web application change:
1. Start a pre-prepared volume image ``static-web-files`` (in the background) that has CSS, images and static HTML in it (with a ``VOLUME`` instruction in the ``Dockerfile`` to allow the web server to use those files);
2. Start a pre-prepared ``riakserver`` image, give the container name ``riak`` and expose port ``8098`` to any containers that link to it;
3. Start the ``appserver`` image, restricting its memory usage to 100MB, setting two environment variables ``DEVELOPMENT`` and ``BRANCH`` and bind-mounting the current directory (``$(pwd)``) in the container in read-only mode as ``/app/bin``;
4. Start the ``webserver``, mapping port ``443`` in the container to port ``1443`` on the Docker server, setting the DNS server to ``dns.dev.org``, creating a volume to put the log files into (so we can access it from another container), then importing the files from the volume exposed by the ``static`` container, and linking to all exposed ports from ``riak`` and ``app``. Lastly, we set the hostname to ``www.sven.dev.org`` so it's consistent with the pre-generated SSL certificate;
5. Finally, we create a container that runs ``tail -f access.log`` using the logs volume from the ``web`` container, setting the workdir to ``/var/log/httpd``. The ``-rm`` option means that when the container exits, the container's layer is removed.
.. _cli_save:
@@ -1080,7 +1192,7 @@ in the same mode (rw or ro) as the reference container.
::
Usage: docker start [OPTIONS] NAME
Usage: docker start [OPTIONS] CONTAINER
Start a stopped container
@@ -1131,7 +1243,7 @@ The main process inside the container will receive SIGTERM, and after a grace pe
``version``
-----------
Show the version of the docker client, daemon, and latest released version.
Show the version of the Docker client, daemon, and latest released version.
.. _cli_wait:

View File

@@ -44,7 +44,8 @@ This following command will build a development environment using the Dockerfile
sudo make build
If the build is successful, congratulations! You have produced a clean build of docker, neatly encapsulated in a standard build environment.
If the build is successful, congratulations! You have produced a clean build of
docker, neatly encapsulated in a standard build environment.
Step 4: Build the Docker Binary
@@ -58,6 +59,19 @@ To create the Docker binary, run this command:
This will create the Docker binary in ``./bundles/<version>-dev/binary/``
Using your built Docker binary
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The binary is available outside the container in the directory
``./bundles/<version>-dev/binary/``. You can swap your host docker executable
with this binary for live testing - for example, on Ubuntu:
.. code-block:: bash
sudo service docker stop ; sudo cp $(which docker) $(which docker)_ ; sudo cp ./bundles/<version>-dev/binary/docker-<version>-dev $(which docker);sudo service docker start
.. note:: It's safer to run the tests below before swapping your host's docker binary.
Step 5: Run the Tests
---------------------
@@ -121,22 +135,19 @@ You can run an interactive session in the newly built container:
# type 'exit' or Ctrl-D to exit
Extra Step: Build and view the Documenation
-------------------------------------------
Extra Step: Build and view the Documentation
--------------------------------------------
If you want to read the documentation from a local website, or are making changes
to it, you can build the documentation and then serve it by:
.. code-block:: bash
sudo make doc
sudo make docs
# when its done, you can point your browser to http://yourdockerhost:8000
# type Ctrl-C to exit
.. note:: The binary is available outside the container in the directory ``./bundles/<version>-dev/binary/``. You can swap your host docker executable with this binary for live testing - for example, on ubuntu: ``sudo service docker stop ; sudo cp $(which docker) $(which docker)_ ; sudo cp ./bundles/<version>-dev/binary/docker-<version>-dev $(which docker);sudo service docker start``.
**Need More Help?**
If you need more help then hop on to the `#docker-dev IRC channel <irc://chat.freenode.net#docker-dev>`_ or post a message on the `Docker developer mailinglist <https://groups.google.com/d/forum/docker-dev>`_.
If you need more help then hop on to the `#docker-dev IRC channel <irc://chat.freenode.net#docker-dev>`_ or post a message on the `Docker developer mailing list <https://groups.google.com/d/forum/docker-dev>`_.

View File

@@ -94,5 +94,13 @@ The password is ``screencast``.
$ ifconfig
$ ssh root@192.168.33.10 -p 49154
# Thanks for watching, Thatcher thatcher@dotcloud.com
Update:
-------
For Ubuntu 13.10 using stackbrew/ubuntu, you may need to perform these additional steps (as sketched below):
1. Change the ``pam_loginuid`` line in ``/etc/pam.d/sshd`` from ``required`` to ``optional``;
2. Set a default locale: ``echo LANG=\"en_US.UTF-8\" > /etc/default/locale``
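A sketch of those two steps as shell commands (run as root; the ``sed`` expression assumes the stock ``/etc/pam.d/sshd`` layout):
.. code-block:: bash
sed -i 's/^session\s*required\s*pam_loginuid.so/session optional pam_loginuid.so/' /etc/pam.d/sshd
echo LANG=\"en_US.UTF-8\" > /etc/default/locale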

View File

@@ -111,7 +111,7 @@ What does Docker add to just plain LXC?
registry to store and transfer private containers, for internal
server deployments for example.
* *Tool ecosystem.*
* *Tool ecosystem.*
Docker defines an API for automating and customizing the
creation and deployment of containers. There are a huge number
of tools integrating with Docker to extend its
@@ -122,6 +122,11 @@ What does Docker add to just plain LXC?
(Jenkins, Strider, Travis), etc. Docker is rapidly establishing
itself as the standard for container-based tooling.
What is different between a Docker container and a VM?
......................................................
There's a great StackOverflow answer `showing the differences <http://stackoverflow.com/questions/16047306/how-is-docker-io-different-from-a-normal-virtual-machine>`_.
Do I lose my data when the container exits?
...........................................
@@ -129,6 +134,53 @@ Not at all! Any data that your application writes to disk gets preserved
in its container until you explicitly delete the container. The file
system for the container persists even after the container halts.
How far do Docker containers scale?
...................................
Some of the largest server farms in the world today are based on containers.
Large web deployments like Google and Twitter, and platform providers such as
Heroku and dotCloud all run on container technology, at a scale of hundreds of
thousands or even millions of containers running in parallel.
How do I connect Docker containers?
...................................
Currently the recommended way to link containers is via the ``link`` primitive.
You can see details of how to `work with links here
<http://docs.docker.io/en/latest/use/working_with_links_names/>`_.
Also useful for enabling more flexible service portability is the
`Ambassador linking pattern
<http://docs.docker.io/en/latest/use/ambassador_pattern_linking/>`_.
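For example (a sketch reusing the ``crosbymichael/redis`` image that appears elsewhere in these docs):
.. code-block:: bash
# name a redis container, then link a client to it under the alias "db"
$ sudo docker run -d -name redis crosbymichael/redis
$ sudo docker run -i -t -link redis:db ubuntu /bin/bash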
How do I run more than one process in a Docker container?
.........................................................
Any capable process supervisor such as http://supervisord.org/, runit, s6, or
daemontools can do the trick. Docker will start up the process management
daemon, which will then fork to run additional processes. As long as the
process manager daemon continues to run, the container will continue to as
well. You can see a more substantial example `that uses supervisord here
<http://docs.docker.io/en/latest/examples/using_supervisord/>`_.
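As an illustration only (not the linked example itself), a ``Dockerfile`` along these lines makes ``supervisord`` the container's single starting process; the ``supervisord.conf`` is assumed to be supplied by you:
::
# sketch: install supervisor and run it in the foreground
FROM ubuntu
RUN apt-get update && apt-get install -y supervisor
ADD supervisord.conf /etc/supervisor/conf.d/supervisord.conf
CMD ["/usr/bin/supervisord", "-n"]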
What platforms does Docker run on?
..................................
Linux:
- Ubuntu 12.04, 13.04 et al
- Fedora 19/20+
- RHEL 6.5+
- CentOS 6+
- Gentoo
- ArchLinux
Cloud:
- Amazon EC2
- Google Compute Engine
- Rackspace
Can I help by adding some questions and answers?
................................................

View File

@@ -25,7 +25,7 @@ currently in active development, so this documentation will change
frequently.
For an overview of Docker, please see the `Introduction
<http://www.docker.io>`_. When you're ready to start working with
<http://www.docker.io/learn_more/>`_. When you're ready to start working with
Docker, we have a `quick start <http://www.docker.io/gettingstarted>`_
and a more in-depth guide to :ref:`ubuntu_linux` and other
:ref:`installation_list` paths including prebuilt binaries,

View File

@@ -11,41 +11,50 @@ Arch Linux
.. include:: install_unofficial.inc
Installing on Arch Linux is not officially supported but can be handled via
one of the following AUR packages:
Installing on Arch Linux can be handled via the package in the community repository:
* `lxc-docker <https://aur.archlinux.org/packages/lxc-docker/>`_
* `lxc-docker-git <https://aur.archlinux.org/packages/lxc-docker-git/>`_
* `lxc-docker-nightly <https://aur.archlinux.org/packages/lxc-docker-nightly/>`_
* `docker <https://www.archlinux.org/packages/community/x86_64/docker/>`_
The lxc-docker package will install the latest tagged version of docker.
The lxc-docker-git package will build from the current master branch.
The lxc-docker-nightly package will install the latest build.
or the following AUR package:
* `docker-git <https://aur.archlinux.org/packages/docker-git/>`_
The ``docker`` package will install the latest tagged version of Docker.
The ``docker-git`` package will build from the current master branch.
Dependencies
------------
Docker depends on several packages which are specified as dependencies in
the AUR packages. The core dependencies are:
the packages. The core dependencies are:
* bridge-utils
* device-mapper
* iproute2
* lxc
* sqlite
Installation
------------
For the normal package a simple
::
pacman -S docker
is all that is needed.
For the AUR package execute:
::
yaourt -S docker-git
The instructions here assume **yaourt** is installed. See
`Arch User Repository <https://wiki.archlinux.org/index.php/Arch_User_Repository#Installing_packages>`_
for information on building and installing packages from the AUR if you have not
done so before.
::
yaourt -S lxc-docker
Starting Docker
---------------

View File

@@ -21,6 +21,11 @@ Check Your Kernel
Your host's Linux kernel must meet the Docker :ref:`kernel`
Check for User Space Tools
--------------------------
You must have a working installation of the `lxc <http://linuxcontainers.org>`_ utilities and library.
Get the docker binary:
----------------------

View File

@@ -1,6 +1,6 @@
:title: Requirements and Installation on Fedora
:description: Please note this project is currently under heavy development. It should not be used in production.
:keywords: Docker, Docker documentation, requirements, virtualbox, vagrant, git, ssh, putty, cygwin, linux
:keywords: Docker, Docker documentation, Fedora, requirements, virtualbox, vagrant, git, ssh, putty, cygwin, linux
.. _fedora:
@@ -18,25 +18,46 @@ architecture.
Installation
------------
Firstly, let's make sure our Fedora host is up-to-date.
The ``docker-io`` package provides Docker on Fedora.
If you have the (unrelated) ``docker`` package installed already, it will
conflict with ``docker-io``. There's a `bug report`_ filed for it.
To proceed with ``docker-io`` installation on Fedora 19, please remove
``docker`` first.
.. code-block:: bash
sudo yum -y upgrade
sudo yum -y remove docker
Next let's install the ``docker-io`` package which will install Docker on our host.
For Fedora 20 and later, the ``wmdocker`` package will provide the same
functionality as ``docker`` and will also not conflict with ``docker-io``.
.. code-block:: bash
sudo yum -y install wmdocker
sudo yum -y remove docker
Install the ``docker-io`` package which will install Docker on our host.
.. code-block:: bash
sudo yum -y install docker-io
Now it's installed lets start the Docker daemon.
To update the ``docker-io`` package:
.. code-block:: bash
sudo yum -y update docker-io
Now that it's installed, let's start the Docker daemon.
.. code-block:: bash
sudo systemctl start docker
If we want Docker to start at boot we should also:
If we want Docker to start at boot, we should also:
.. code-block:: bash
@@ -46,7 +67,9 @@ Now let's verify that Docker is working.
.. code-block:: bash
sudo docker run -i -t ubuntu /bin/bash
sudo docker run -i -t mattdm/fedora /bin/bash
**Done!**, now continue with the :ref:`hello_world` example.
.. _bug report: https://bugzilla.redhat.com/show_bug.cgi?id=1043676

View File

@@ -0,0 +1,80 @@
:title: Installation on FrugalWare
:description: Docker installation on FrugalWare.
:keywords: frugalware linux, virtualization, docker, documentation, installation
.. _frugalware:
FrugalWare
==========
.. include:: install_header.inc
.. include:: install_unofficial.inc
Installing on FrugalWare is handled via the official packages:
* `lxc-docker i686 <http://www.frugalware.org/packages/200141>`_
* `lxc-docker x86_64 <http://www.frugalware.org/packages/200130>`_
The `lxc-docker` package will install the latest tagged version of Docker.
Dependencies
------------
Docker depends on several packages which are specified as dependencies in
the packages. The core dependencies are:
* systemd
* lvm2
* sqlite3
* libguestfs
* lxc
* iproute2
* bridge-utils
Installation
------------
A simple
::
pacman -S lxc-docker
is all that is needed.
Starting Docker
---------------
There is a systemd service unit created for Docker. To start Docker as a service:
::
sudo systemctl start lxc-docker
To start on system boot:
::
sudo systemctl enable lxc-docker
Network Configuration
---------------------
IPv4 packet forwarding is disabled by default on FrugalWare, so Internet access from inside
the container may not work.
To enable packet forwarding, run the following command as the ``root`` user on the host system:
::
sysctl net.ipv4.ip_forward=1
And, to make it persistent across reboots, add the following to a file named **/etc/sysctl.d/docker.conf**:
::
net.ipv4.ip_forward=1

View File

@@ -12,7 +12,7 @@
`Compute Engine <https://developers.google.com/compute>`_ QuickStart for `Debian <https://www.debian.org>`_
-----------------------------------------------------------------------------------------------------------
1. Go to `Google Cloud Console <https://cloud.google.com/console>`_ and create a new Cloud Project with billing enabled.
1. Go to `Google Cloud Console <https://cloud.google.com/console>`_ and create a new Cloud Project with `Compute Engine enabled <https://developers.google.com/compute/docs/signup>`_.
2. Download and configure the `Google Cloud SDK <https://developers.google.com/cloud/sdk/>`_ to use your project with the following commands:
@@ -57,9 +57,17 @@
docker-playground:~$ curl get.docker.io | bash
docker-playground:~$ sudo update-rc.d docker defaults
7. Start a new container:
7. If running in the zones us-central1-a, europe-west1-1, or europe-west1-b, the Docker daemon must be started with the ``-mtu`` flag. Without the flag, you may experience intermittent network pauses.
`See this issue <https://code.google.com/p/google-compute-engine/issues/detail?id=57>`_ for more details.
.. code-block:: bash
docker -d -mtu 1460
8. Start a new container:
.. code-block:: bash
docker-playground:~$ sudo docker run busybox echo 'docker on GCE \o/'
docker on GCE \o/

View File

@@ -22,6 +22,7 @@ Contents:
fedora
archlinux
gentoolinux
frugalware
vagrant
windows
amazon

View File

@@ -111,3 +111,42 @@ And replace it by the following one::
GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"
Then run ``update-grub``, and reboot.
Details
-------
To automatically check some of the requirements below, you can run ``lxc-checkconfig`` (see the example after this list).
Networking:
- CONFIG_BRIDGE
- CONFIG_NETFILTER_XT_MATCH_ADDRTYPE
- CONFIG_NF_NAT
- CONFIG_NF_NAT_IPV4
- CONFIG_NF_NAT_NEEDED
LVM:
- CONFIG_BLK_DEV_DM
- CONFIG_DM_THIN_PROVISIONING
- CONFIG_EXT4_FS
Namespaces:
- CONFIG_NAMESPACES
- CONFIG_UTS_NS
- CONFIG_IPC_NS
- CONFIG_UID_NS
- CONFIG_PID_NS
- CONFIG_NET_NS
Cgroups:
- CONFIG_CGROUPS
Cgroup controllers (optional but highly recommended):
- CONFIG_CGROUP_CPUACCT
- CONFIG_BLK_CGROUP
- CONFIG_MEMCG
- CONFIG_MEMCG_SWAP
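A quick way to verify these on a running system (a sketch; ``/proc/config.gz`` is only present when the kernel was built with ``CONFIG_IKCONFIG_PROC``):
.. code-block:: bash
# check the running kernel against lxc's requirements
$ lxc-checkconfig
# or look up a single option directly
$ zgrep CONFIG_MEMCG= /proc/config.gz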

View File

@@ -1,56 +1,71 @@
:title: Requirements and Installation on Red Hat Enterprise Linux / CentOS
:title: Requirements and Installation on Red Hat Enterprise Linux
:description: Please note this project is currently under heavy development. It should not be used in production.
:keywords: Docker, Docker documentation, requirements, linux, rhel, centos
.. _rhel:
Red Hat Enterprise Linux / CentOS
=================================
Red Hat Enterprise Linux
========================
.. include:: install_header.inc
.. include:: install_unofficial.inc
Docker is available for **RHEL/CentOS 6**.
Docker is available for **RHEL** on EPEL. These instructions should work for
both RHEL and CentOS. They will likely work for other binary compatible EL6
distributions as well, but they haven't been tested.
Please note that this package is part of a `Extra Packages for Enterprise Linux (EPEL)`_, a community effort to create and maintain additional packages for RHEL distribution.
Please note that this package is part of `Extra Packages for Enterprise
Linux (EPEL)`_, a community effort to create and maintain additional packages
for the RHEL distribution.
Please note that due to the current Docker limitations Docker is able to run only on the **64 bit** architecture.
Also note that due to the current Docker limitations, Docker is able to run
only on the **64 bit** architecture.
Installation
------------
1. Firstly, let's make sure our RHEL host is up-to-date.
Firstly, you need to install the EPEL repository. Please follow the `EPEL installation instructions`_.
.. code-block:: bash
sudo yum -y upgrade
The ``docker-io`` package provides Docker on EPEL.
2. Next you need to install the EPEL repository. Please follow the `EPEL installation instructions`_.
3. Next let's install the ``docker-io`` package which will install Docker on our host.
If you already have the (unrelated) ``docker`` package installed, it will
conflict with ``docker-io``. There's a `bug report`_ filed for it.
To proceed with ``docker-io`` installation, please remove
``docker`` first.
Next, let's install the ``docker-io`` package which will install Docker on our host.
.. code-block:: bash
sudo yum -y install docker-io
4. Now it's installed lets start the Docker daemon.
To update the ``docker-io`` package:
.. code-block:: bash
sudo yum -y update docker-io
Now that it's installed, let's start the Docker daemon.
.. code-block:: bash
sudo service docker start
If we want Docker to start at boot we should also:
If we want Docker to start at boot, we should also:
.. code-block:: bash
sudo chkconfig docker on
5. Now let's verify that Docker is working.
Now let's verify that Docker is working.
.. code-block:: bash
sudo docker run -i -t ubuntu /bin/bash
sudo docker run -i -t mattdm/fedora /bin/bash
**Done!**, now continue with the :ref:`hello_world` example.
@@ -62,4 +77,5 @@ If you have any issues - please report them directly in the `Red Hat Bugzilla fo
.. _Extra Packages for Enterprise Linux (EPEL): https://fedoraproject.org/wiki/EPEL
.. _EPEL installation instructions: https://fedoraproject.org/wiki/EPEL#How_can_I_use_these_extra_packages.3F
.. _Red Hat Bugzilla for docker-io component : https://bugzilla.redhat.com/enter_bug.cgi?product=Fedora%20EPEL&component=docker-io
.. _bug report: https://bugzilla.redhat.com/show_bug.cgi?id=1043676

View File

@@ -63,7 +63,10 @@ Installation
These instructions have changed for 0.6. If you are upgrading from
an earlier version, you will need to follow them again.
Docker is available as a Debian package, which makes installation easy.
Docker is available as a Debian package, which makes installation
easy. **See the :ref:`installmirrors` section below if you are not in
the United States.** Other sources of the Debian packages may be
faster for you to install.
First add the Docker repository key to your local keychain. You can use the
``apt-key`` command to check the fingerprint matches: ``36A1 D786 9245 C895 0F96
@@ -74,7 +77,7 @@ First add the Docker repository key to your local keychain. You can use the
sudo sh -c "wget -qO- https://get.docker.io/gpg | apt-key add -"
Add the Docker repository to your apt sources list, update and install the
``lxc-docker`` package.
``lxc-docker`` package.
*You may receive a warning that the package isn't trusted. Answer yes to
continue installation.*
@@ -92,7 +95,7 @@ continue installation.*
.. code-block:: bash
curl -s http://get.docker.io/ubuntu/ | sudo sh
curl -s https://get.docker.io/ubuntu/ | sudo sh
Now verify that the installation has worked by downloading the ``ubuntu`` image
and launching a container.
@@ -199,3 +202,25 @@ incoming connections on the Docker port (default 4243):
sudo ufw allow 4243/tcp
.. _installmirrors:
Mirrors
^^^^^^^
You should ``ping get.docker.io`` and compare the latency to the
following mirrors, and pick whichever one is best for you.
Yandex
------
`Yandex <http://yandex.ru/>`_ in Russia is mirroring the Docker Debian
packages, updating every 6 hours. Substitute
``http://mirror.yandex.ru/mirrors/docker/`` for
``http://get.docker.io/ubuntu`` in the instructions above. For example:
.. code-block:: bash
sudo sh -c "echo deb http://mirror.yandex.ru/mirrors/docker/ docker main\
> /etc/apt/sources.list.d/docker.list"
sudo apt-get update
sudo apt-get install lxc-docker

View File

@@ -1,11 +1,11 @@
:title: Ambassador pattern linking
:title: Link via an Ambassador Container
:description: Using the Ambassador pattern to abstract (network) services
:keywords: Examples, Usage, links, docker, documentation, examples, names, name, container naming
.. _ambassador_pattern_linking:
Ambassador pattern linking
==========================
Link via an Ambassador Container
================================
Rather than hardcoding network links between a service consumer and provider, Docker
encourages service portability.
@@ -27,7 +27,7 @@ you can add ambassadors
(consumer) --> (redis-ambassador) ---network---> (redis-ambassador) --> (redis)
When you need to rewire your consumer to talk to a different resdis server, you
When you need to rewire your consumer to talk to a different redis server, you
can just restart the ``redis-ambassador`` container that the consumer is connected to.
This pattern also allows you to transparently move the redis server to a different
@@ -161,11 +161,12 @@ variable using the ``-e`` command line option.
local ``1234`` port to the remote IP and port - in this case ``192.168.1.52:6379``.
.. code-block:: Dockerfile
::
#
#
# first you need to build the docker-ut image using ./contrib/mkimage-unittest.sh
# first you need to build the docker-ut image
# using ./contrib/mkimage-unittest.sh
# then
# docker build -t SvenDowideit/ambassador .
# docker tag SvenDowideit/ambassador ambassador

View File

@@ -1,10 +1,10 @@
:title: Base Image Creation
:title: Create a Base Image
:description: How to create base images
:keywords: Examples, Usage, base image, docker, documentation, examples
.. _base_image_creation:
Base Image Creation
Create a Base Image
===================
So you want to create your own :ref:`base_image_def`? Great!

View File

@@ -1,15 +1,15 @@
:title: Basic Commands
:title: Learn Basic Commands
:description: Common usage and commands
:keywords: Examples, Usage, basic commands, docker, documentation, examples
The Basics
==========
Learn Basic Commands
====================
Starting Docker
---------------
If you have used one of the quick install paths', Docker may have been
If you have used one of the quick install paths, Docker may have been
installed with upstart, Ubuntu's system for starting processes at boot
time. You should be able to run ``sudo docker help`` and get output.
@@ -30,8 +30,8 @@ Download a pre-built image
# Download an ubuntu image
sudo docker pull ubuntu
This will find the ``ubuntu`` image by name in the :ref:`Central Index
<searching_central_index>` and download it from the top-level Central
This will find the ``ubuntu`` image by name in the :ref:`Central Index
<searching_central_index>` and download it from the top-level Central
Repository to a local image cache.
.. NOTE:: When the image has successfully downloaded, you will see a
@@ -53,21 +53,23 @@ Running an interactive shell
.. _dockergroup:
sudo and the docker Group
-------------------------
The sudo command and the docker Group
-------------------------------------
The ``docker`` daemon always runs as root, and since ``docker``
version 0.5.2, ``docker`` binds to a Unix socket instead of a TCP
port. By default that Unix socket is owned by the user *root*, and so,
by default, you can access it with ``sudo``.
The ``docker`` daemon always runs as the root user, and since Docker version
0.5.2, the ``docker`` daemon binds to a Unix socket instead of a TCP port. By
default that Unix socket is owned by the user *root*, and so, by default, you
can access it with ``sudo``.
Starting in version 0.5.3, if you (or your Docker installer) create a
Unix group called *docker* and add users to it, then the ``docker``
daemon will make the ownership of the Unix socket read/writable by the
*docker* group when the daemon starts. The ``docker`` daemon must
always run as root, but if you run the ``docker`` client as a user in
always run as the root user, but if you run the ``docker`` client as a user in
the *docker* group then you don't need to add ``sudo`` to all the
client commands.
client commands.
.. warning:: The *docker* group is root-equivalent.
**Example:**
@@ -97,10 +99,10 @@ Bind Docker to another host/port or a Unix socket
<https://github.com/dotcloud/docker/issues/1369>`_). Make sure you
control access to ``docker``.
With -H it is possible to make the Docker daemon to listen on a
specific ip and port. By default, it will listen on
With ``-H`` it is possible to make the Docker daemon listen on a
specific IP and port. By default, it will listen on
``unix:///var/run/docker.sock`` to allow only local connections by the
*root* user. You *could* set it to 0.0.0.0:4243 or a specific host ip to
*root* user. You *could* set it to ``0.0.0.0:4243`` or a specific host IP to
give access to everybody, but that is **not recommended** because then
it is trivial for someone to gain root access to the host where the
daemon is running.
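For example, to have the daemon listen on a local TCP port in addition to the default Unix socket (a sketch; mind the warning above before binding to anything wider):
.. code-block:: bash
$ sudo docker -d -H tcp://127.0.0.1:4243 -H unix:///var/run/docker.sock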
@@ -179,10 +181,10 @@ Committing (saving) a container state
Save your containers state to a container image, so the state can be re-used.
When you commit your container only the differences between the image
the container was created from and the current state of the container
will be stored (as a diff). See which images you already have using
``sudo docker images``
When you commit your container only the differences between the image the
container was created from and the current state of the container will be
stored (as a diff). See which images you already have using the ``docker
images`` command.
.. code-block:: bash
@@ -194,7 +196,5 @@ will be stored (as a diff). See which images you already have using
You now have an image state from which you can create new instances.
Read more about :ref:`working_with_the_repository` or continue to the
complete :ref:`cli`

View File

@@ -1,12 +1,12 @@
:title: Dockerfiles for Images
:title: Build Images (Dockerfile Reference)
:description: Dockerfiles use a simple DSL which allows you to automate the steps you would normally manually take to create an image.
:keywords: builder, docker, Dockerfile, automation, image creation
.. _dockerbuilder:
======================
Dockerfiles for Images
======================
===================================
Build Images (Dockerfile Reference)
===================================
**Docker can act as a builder** and read instructions from a text
``Dockerfile`` to automate the steps you would otherwise take manually
@@ -251,6 +251,11 @@ All new files and directories are created with mode 0755, uid and gid
if you build using STDIN (``docker build - < somefile``), there is no build
context, so the Dockerfile can only contain an URL based ADD statement.
.. note::
if your URL files are protected using authentication, you will need to use
``RUN wget``, ``RUN curl`` or another tool from within the container, as
``ADD`` does not support authentication.
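For example (a sketch; the URL and credentials are placeholders):
::
# in a Dockerfile, instead of ADD:
RUN curl -u user:password -o /app.tar.gz https://example.com/protected/app.tar.gz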
The copy obeys the following rules:
* The ``<src>`` path must be inside the *context* of the build; you cannot

View File

@@ -1,11 +1,11 @@
:title: Host Integration
:title: Automatically Start Containers
:description: How to generate scripts for upstart, systemd, etc.
:keywords: systemd, upstart, supervisor, docker, documentation, host integration
Host Integration
================
Automatically Start Containers
==============================
You can use your Docker containers with process managers like ``upstart``,
``systemd`` and ``supervisor``.

View File

@@ -17,8 +17,9 @@ Contents:
workingwithrepository
baseimages
port_redirection
puppet
networking
host_integration
working_with_volumes
working_with_links_names
ambassador_pattern_linking
puppet

View File

@@ -0,0 +1,153 @@
:title: Configure Networking
:description: Docker networking
:keywords: network, networking, bridge, docker, documentation
Configure Networking
====================
Docker uses Linux bridge capabilities to provide network connectivity
to containers. The ``docker0`` bridge interface is managed by Docker
itself for this purpose. Thus, when the Docker daemon starts, it:
- creates the ``docker0`` bridge if not present
- searches for an IP address range which doesn't overlap with an existing route
- picks an IP in the selected range
- assigns this IP to the ``docker0`` bridge
.. code-block:: bash
# List host bridges
$ sudo brctl show
bridge name bridge id STP enabled interfaces
docker0 8000.000000000000 no
# Show docker0 IP address
$ sudo ifconfig docker0
docker0 Link encap:Ethernet HWaddr xx:xx:xx:xx:xx:xx
inet addr:172.17.42.1 Bcast:0.0.0.0 Mask:255.255.0.0
At runtime, a :ref:`specific kind of virtual
interface<vethxxxx-device>` is given to each container, which is then
attached to the ``docker0`` bridge. Each container also receives a
dedicated IP address from the same range as ``docker0``. The
``docker0`` IP address is then used as the default gateway for the
containers.
.. code-block:: bash
# Run a container
$ sudo docker run -t -i -d base /bin/bash
52f811c5d3d69edddefc75aff5a4525fc8ba8bcfa1818132f9dc7d4f7c7e78b4
$ sudo brctl show
bridge name bridge id STP enabled interfaces
docker0 8000.fef213db5a66 no vethQCDY1N
Above, ``docker0`` acts as a bridge for the ``vethQCDY1N`` interface
which is dedicated to the 52f811c5d3d6 container.
How to use a specific IP address range
---------------------------------------
Docker will try hard to find an IP range which is not used by the
host. Even though this works in most cases, it's not bullet-proof and
sometimes you need more control over the IP addressing scheme.
For this purpose, Docker lets you manage the ``docker0`` bridge
or use a bridge of your own via the ``-b=<bridgename>`` parameter.
In this scenario:
- ensure Docker is stopped
- create your own bridge (``bridge0`` for example)
- assign a specific IP to this bridge
- start Docker with the ``-b=bridge0`` parameter
.. code-block:: bash
# Stop Docker
$ sudo service docker stop
# Clean docker0 bridge and
# add your very own bridge0
$ sudo ifconfig docker0 down
$ sudo brctl addbr bridge0
$ sudo ifconfig bridge0 192.168.227.1 netmask 255.255.255.0
# Edit your Docker startup file
$ echo "DOCKER_OPTS=\"-b=bridge0\"" /etc/default/docker
# Start Docker
$ sudo service docker start
# Ensure bridge0 IP is not changed by Docker
$ sudo ifconfig bridge0
bridge0 Link encap:Ethernet HWaddr xx:xx:xx:xx:xx:xx
inet addr:192.168.227.1 Bcast:192.168.227.255 Mask:255.255.255.0
# Run a container
$ docker run -i -t base /bin/bash
# Container IP in the 192.168.227/24 range
root@261c272cd7d5:/# ifconfig eth0
eth0 Link encap:Ethernet HWaddr xx:xx:xx:xx:xx:xx
inet addr:192.168.227.5 Bcast:192.168.227.255 Mask:255.255.255.0
# bridge0 IP as the default gateway
root@261c272cd7d5:/# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 192.168.227.1 0.0.0.0 UG 0 0 0 eth0
192.168.227.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
# hit CTRL+P then CTRL+Q to detach
# Display bridge info
$ sudo brctl show
bridge name bridge id STP enabled interfaces
bridge0 8000.fe7c2e0faebd no vethAQI2QT
Container intercommunication
-------------------------------
Containers can communicate with each other according to the ``icc``
parameter value of the Docker daemon.
- The default, ``-icc=true`` allows containers to communicate with each other.
- ``-icc=false`` means containers are isolated from each other.
Under the hood, ``iptables`` is used by Docker to either accept or
drop communication between containers.
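For example, to start the daemon with isolation turned on (a sketch of the daemon invocation):
.. code-block:: bash
$ sudo docker -d -icc=false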
.. _vethxxxx-device:
What about the vethXXXX device?
-----------------------------------
Well. Things get complicated here.
The ``vethXXXX`` interface is the host side of a point-to-point link
between the host and the corresponding container, the other side of
the link being materialized by the container's ``eth0``
interface. This pair (host ``vethXXXX`` and container ``eth0``) is
connected like a tube. Everything that comes in one side will come out
the other side.
All the plumbing is delegated to Linux network capabilities (check the
``ip link`` command) and the namespaces infrastructure.
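For example, you can list the host-side interfaces of these pairs (a sketch; the ``vethXXXX`` names vary per container):
.. code-block:: bash
$ ip link | grep veth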
I want more
------------
Jérôme Petazzoni has created ``pipework`` to connect
containers in arbitrarily complex scenarios:
https://github.com/jpetazzo/pipework

View File

@@ -1,12 +1,12 @@
:title: Port redirection
:title: Redirect Ports
:description: usage about port redirection
:keywords: Usage, basic port, docker, documentation, examples
.. _port_redirection:
Port redirection
================
Redirect Ports
==============
Interacting with a service is commonly done through a connection to a
port. When this service runs inside a container, one can connect to
@@ -31,7 +31,7 @@ container, Docker provide ways to bind the container port to an
interface of the host system. To simplify communication between
containers, Docker provides the linking mechanism.
Binding a port to an host interface
Binding a port to a host interface
-----------------------------------
To bind a port of the container to a specific interface of the host

View File

@@ -1,15 +1,16 @@
:title: Working with Links and Names
:description: How to create and use links and names
:keywords: Examples, Usage, links, docker, documentation, examples, names, name, container naming
:title: Link Containers
:description: How to create and use both links and names
:keywords: Examples, Usage, links, linking, docker, documentation, examples, names, name, container naming
.. _working_with_links_names:
Working with Links and Names
============================
Link Containers
===============
From version 0.6.5 you are now able to ``name`` a container and ``link`` it to another
container by referring to its name. This will create a parent -> child relationship
where the parent container can see selected information about its child.
From version 0.6.5 you are now able to ``name`` a container and
``link`` it to another container by referring to its name. This will
create a parent -> child relationship where the parent container can
see selected information about its child.
.. _run_name:
@@ -18,8 +19,9 @@ Container Naming
.. versionadded:: v0.6.5
You can now name your container by using the ``-name`` flag. If no name is provided, Docker
will automatically generate a name. You can see this name using the ``docker ps`` command.
You can now name your container by using the ``-name`` flag. If no
name is provided, Docker will automatically generate a name. You can
see this name using the ``docker ps`` command.
.. code-block:: bash
@@ -38,47 +40,53 @@ Links: service discovery for docker
.. versionadded:: v0.6.5
Links allow containers to discover and securely communicate with each other by using the
flag ``-link name:alias``. Inter-container communication can be disabled with the daemon
flag ``-icc=false``. With this flag set to false, Container A cannot access Container B
unless explicitly allowed via a link. This is a huge win for securing your containers.
When two containers are linked together Docker creates a parent child relationship
between the containers. The parent container will be able to access information via
environment variables of the child such as name, exposed ports, IP and other selected
environment variables.
Links allow containers to discover and securely communicate with each
other by using the flag ``-link name:alias``. Inter-container
communication can be disabled with the daemon flag
``-icc=false``. With this flag set to ``false``, Container A cannot
access Container B unless explicitly allowed via a link. This is a
huge win for securing your containers. When two containers are linked
together Docker creates a parent child relationship between the
containers. The parent container will be able to access information
via environment variables of the child such as name, exposed ports, IP
and other selected environment variables.
When linking two containers Docker will use the exposed ports of the container to create
a secure tunnel for the parent to access. If a database container only exposes port 8080
then the linked container will only be allowed to access port 8080 and nothing else if
When linking two containers Docker will use the exposed ports of the
container to create a secure tunnel for the parent to access. If a
database container only exposes port 8080 then the linked container
will only be allowed to access port 8080 and nothing else if
inter-container communication is set to false.
For example, there is an image called ``crosbymichael/redis`` that exposes the
port 6379 and starts the Redis server. Let's name the container as ``redis``
based on that image and run it as daemon.
.. code-block:: bash
# Example: there is an image called crosbymichael/redis that exposes the port 6379 and starts redis-server.
# Let's name the container as "redis" based on that image and run it as daemon.
$ sudo docker run -d -name redis crosbymichael/redis
We can issue all the commands that you would expect using the name "redis"; start, stop,
attach, using the name for our container. The name also allows us to link other containers
into this one.
We can issue all the commands that you would expect using the name
``redis``; start, stop, attach, using the name for our container. The
name also allows us to link other containers into this one.
Next, we can start a new web application that has a dependency on Redis and apply a link
to connect both containers. If you noticed when running our Redis server we did not use
the -p flag to publish the Redis port to the host system. Redis exposed port 6379 and
this is all we need to establish a link.
Next, we can start a new web application that has a dependency on
Redis and apply a link to connect both containers. If you noticed when
running our Redis server we did not use the ``-p`` flag to publish the
Redis port to the host system. Redis exposed port 6379 and this is all
we need to establish a link.
.. code-block:: bash
# Linking the redis container as a child
$ sudo docker run -t -i -link redis:db -name webapp ubuntu bash
When you specified -link redis:db you are telling docker to link the container named redis
into this new container with the alias db. Environment variables are prefixed with the alias
so that the parent container can access network and environment information from the containers
that are linked into it.
When you specified ``-link redis:db`` you are telling Docker to link
the container named ``redis`` into this new container with the alias
``db``. Environment variables are prefixed with the alias so that the
parent container can access network and environment information from
the containers that are linked into it.
If we inspect the environment variables of the second container, we would see all the information
about the child container.
If we inspect the environment variables of the second container, we
would see all the information about the child container.
.. code-block:: bash
@@ -100,14 +108,17 @@ about the child container.
_=/usr/bin/env
root@4c01db0b339c:/#
Accessing the network information along with the environment of the child container allows
us to easily connect to the Redis service on the specific IP and port in the environment.
Accessing the network information along with the environment of the
child container allows us to easily connect to the Redis service on
the specific IP and port in the environment.
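For example, from inside the parent container (a sketch; the ``DB_``-prefixed names come from the ``db`` alias used above):
.. code-block:: bash
root@4c01db0b339c:/# echo $DB_PORT_6379_TCP_ADDR $DB_PORT_6379_TCP_PORT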
Running ``docker ps`` shows the 2 containers, and the webapp/db alias name for the redis container.
Running ``docker ps`` shows the 2 containers, and the ``webapp/db``
alias name for the redis container.
.. code-block:: bash
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4c01db0b339c ubuntu:12.04 bash 17 seconds ago Up 16 seconds webapp
d7886598dbe2 crosbymichael/redis:latest /redis-server --dir 33 minutes ago Up 33 minutes 6379/tcp redis,webapp/db
4c01db0b339c ubuntu:12.04 bash 17 seconds ago Up 16 seconds webapp
d7886598dbe2 crosbymichael/redis:latest /redis-server --dir 33 minutes ago Up 33 minutes 6379/tcp redis,webapp/db

View File

@@ -1,11 +1,11 @@
:title: Working with Volumes
:title: Share Directories via Volumes
:description: How to create and share volumes
:keywords: Examples, Usage, volume, docker, documentation, examples
.. _volume_def:
Data Volume
===========
Share Directories via Volumes
=============================
.. versionadded:: v0.3.0
Data volumes have been available since version 1 of the
@@ -13,7 +13,7 @@ Data Volume
A *data volume* is a specially-designated directory within one or more
containers that bypasses the :ref:`ufs_def` to provide several useful
features for persistant or shared data:
features for persistent or shared data:
* **Data volumes can be shared and reused between containers.** This
is the feature that makes data volumes so powerful. You can use it
@@ -30,35 +30,58 @@ Each container can have zero or more data volumes.
Getting Started
...............
Using data volumes is as simple as adding a new flag: ``-v``. The
parameter ``-v`` can be used more than once in order to create more
volumes within the new container. The example below shows the
instruction to create a container with two new volumes::
Using data volumes is as simple as adding a ``-v`` parameter to the ``docker run``
command. The ``-v`` parameter can be used more than once in order to
create more volumes within the new container. To create a new container with
two new volumes::
docker run -v /var/volume1 -v /var/volume2 shykes/couchdb
$ docker run -v /var/volume1 -v /var/volume2 busybox true
For a Dockerfile, the VOLUME instruction will add one or more new
volumes to any container created from the image::
This command will create a new container with two new volumes that
exits instantly (``true`` is pretty much the smallest, simplest program
that you can run). Once created, you can mount its volumes in any other
container using the ``-volumes-from`` option, irrespective of whether the
container is running or not.
VOLUME ["/var/volume1", "/var/volume2"]
Or, you can use the VOLUME instruction in a Dockerfile to add one or more new
volumes to any container created from that image::
# BUILD-USING: docker build -t data .
# RUN-USING: docker run -name DATA data
FROM busybox
VOLUME ["/var/volume1", "/var/volume2"]
CMD ["/usr/bin/true"]
Mount Volumes from an Existing Container:
-----------------------------------------
Creating and mounting a Data Volume Container
---------------------------------------------
The command below creates a new container which is runnning as daemon
``-d`` and with one volume ``/var/lib/couchdb``::
If you have some persistent data that you want to share between containers,
or want to use from non-persistent containers, it's best to create a named
Data Volume Container, and then to mount the data from it.
COUCH1=$(sudo docker run -d -v /var/lib/couchdb shykes/couchdb:2013-05-03)
Create a named container with volumes to share (``/var/volume1`` and ``/var/volume2``)::
From the container id of that previous container ``$COUCH1`` it's
possible to create new container sharing the same volume using the
parameter ``-volumes-from container_id``::
$ docker run -v /var/volume1 -v /var/volume2 -name DATA busybox true
COUCH2=$(sudo docker run -d -volumes-from $COUCH1 shykes/couchdb:2013-05-03)
Then mount those data volumes into your application containers::
Now, the second container has all the information from the first volume.
$ docker run -t -i -rm -volumes-from DATA -name client1 ubuntu bash
You can use multiple ``-volumes-from`` parameters to bring together multiple
data volumes from multiple containers.
Interestingly, you can mount the volumes that came from the ``DATA`` container in
yet another container via the ``client1`` middleman container::
$ docker run -t -i -rm -volumes-from client1 -name client2 ubuntu bash
This allows you to abstract the actual data source from users of that data,
similar to :ref:`ambassador_pattern_linking <ambassador_pattern_linking>`.
If you remove containers that mount volumes, including the initial DATA container,
or the middleman, the volumes will not be deleted until there are no containers still
referencing those volumes. This allows you to upgrade, or effectively migrate data volumes
between containers.
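For example, you could back up a volume through a short-lived container (a sketch; ``DATA`` is the container created above):
::
$ sudo docker run -rm -volumes-from DATA -v $(pwd):/backup busybox tar cf /backup/backup.tar /var/volume1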
Mount a Host Directory as a Container Volume:
---------------------------------------------
@@ -68,13 +91,13 @@ Mount a Host Directory as a Container Volume:
-v=[]: Create a bind mount with: [host-dir]:[container-dir]:[rw|ro].
If "host-dir" is missing, then docker creates a new volume.
This is not available for a Dockerfile due the portability and sharing
purpose of it. The [host-dir] volumes is something 100% host dependent
and will break on any other machine.
This is not available from a Dockerfile as it makes the built image less portable
or shareable. [host-dir] volumes are 100% host dependent and will break on any
other machine.
For example::
sudo docker run -v /var/logs:/var/host_logs:ro shykes/couchdb:2013-05-03
sudo docker run -v /var/logs:/var/host_logs:ro ubuntu bash
The command above mounts the host directory ``/var/logs`` into the
container with read only permissions as ``/var/host_logs``.
@@ -87,3 +110,6 @@ Known Issues
* :issue:`2702`: "lxc-start: Permission denied - failed to mount"
could indicate a permissions problem with AppArmor. Please see the
issue for a workaround.
* :issue:`2528`: the busybox container is used to make the resulting container as small and
simple as possible - whenever you need to interact with the data in the volume
you mount it into another container.

View File

@@ -1,11 +1,11 @@
:title: Working With Repositories
:title: Share Images via Repositories
:description: Repositories allow users to share images.
:keywords: repo, repositories, usage, pull image, push image, image, documentation
.. _working_with_the_repository:
Working with Repositories
=========================
Share Images via Repositories
=============================
A *repository* is a hosted collection of tagged :ref:`images
<image_def>` that together create the file system for a container. The
@@ -152,6 +152,41 @@ or tag.
.. _using_private_repositories:
Trusted Builds
--------------
Trusted Builds automate the building and updating of images from GitHub, directly
on docker.io servers. It works by adding a commit hook to your selected repository,
triggering a build and update when you push a commit.
To set up a trusted build
+++++++++++++++++++++++++
#. Create a `Docker Index account <https://index.docker.io/>`_ and login.
#. Link your GitHub account through the ``Link Accounts`` menu.
#. `Configure a Trusted build <https://index.docker.io/builds/>`_.
#. Pick a GitHub project that has a ``Dockerfile`` that you want to build.
#. Pick the branch you want to build (the default is the ``master`` branch).
#. Give the Trusted Build a name.
#. Assign an optional Docker tag to the Build.
#. Specify where the ``Dockerfile`` is located. The default is ``/``.
Once the Trusted Build is configured it will automatically trigger a build, and
in a few minutes, if there are no errors, you will see your new trusted build
on the Docker Index. It will stay in sync with your GitHub repo until you
deactivate the Trusted Build.
If you want to see the status of your Trusted Builds you can go to your
`Trusted Builds page <https://index.docker.io/builds/>`_ on the Docker index,
and it will show you the status of your builds, and the build history.
Once you've created a Trusted Build you can deactivate or delete it. You cannot,
however, push to a Trusted Build with the ``docker push`` command. You can only
manage it by committing code to your GitHub repository.
You can create multiple Trusted Builds per repository and configure them to
point to specific ``Dockerfile`` locations or Git branches.
Private Repositories
--------------------

View File

@@ -86,26 +86,26 @@
</div>
</div>
<div class="container">
<div class="container-fluid">
<!-- Docs nav
================================================== -->
<div class="row main-row">
<div class="row-fluid main-row">
<div class="span3 sidebar bs-docs-sidebar">
<div class="sidebar bs-docs-sidebar">
<div class="page-title" >
<h4>DOCUMENTATION</h4>
</div>
{{ toctree(collapse=False, maxdepth=3) }}
<form>
<input type="text" id="st-search-input" class="st-search-input span3" style="width:160px;" />
<input type="text" id="st-search-input" class="st-search-input span3" placeholder="search in documentation" style="width:210px;" />
<div id="st-results-container"></div>
</form>
</div>
<!-- body block -->
<div class="span9 main-content">
<div class="main-content">
<!-- Main section
================================================== -->
@@ -134,13 +134,22 @@
</div>
<div class="social links">
<a class="twitter" href="http://twitter.com/docker">Twitter</a>
<a class="github" href="https://github.com/dotcloud/docker/">GitHub</a>
<a title="Docker on Twitter" class="twitter" href="http://twitter.com/docker">Twitter</a>
<a title="Docker on GitHub" class="github" href="https://github.com/dotcloud/docker/">GitHub</a>
<a title="Docker on Reddit" class="reddit" href="http://www.reddit.com/r/Docker/">Reddit</a>
<a title="Docker on Google+" class="googleplus" href="https://plus.google.com/u/0/b/100381662757235514581/communities/108146856671494713993">Google+</a>
<a title="Docker on Facebook" class="facebook" href="https://www.facebook.com/docker.run">Facebook</a>
<a title="Docker on SlideShare" class="slideshare" href="http://www.slideshare.net/dotCloud">Slideshare</a>
<a title="Docker on Youtube" class="youtube" href="http://www.youtube.com/user/dockerrun/">Youtube</a>
<a title="Docker on Flickr" class="flickr" href="http://www.flickr.com/photos/99741659@N08/">Flickr</a>
<a title="Docker on LinkedIn" class="linkedin" href="http://www.linkedin.com/company/dotcloud">LinkedIn</a>
</div>
<div class="tbox version-flyer ">
<div class="content">
<small>Current version:</small>
<p class="version-note">Note: You are currently browsing the development documentation. The current release may work differently.</p>
<small>Available versions:</small>
<ul class="inline">
{% for slug, url in versions %}
<li class="alternative"><a href="{{ url }}{%- for word in pagename.split('/') -%}
@@ -163,6 +172,7 @@
</div>
<!-- end of footer -->
</div>
</div>

View File

@@ -62,9 +62,12 @@ p a.btn {
-moz-box-shadow: 0 1px 4px rgba(0, 0, 0, 0.065);
box-shadow: 0 1px 4px rgba(0, 0, 0, 0.065);
}
.brand.logo a {
.brand-logo a {
color: white;
}
.brand-logo a img {
width: auto;
}
.inline-icon {
margin-bottom: 6px;
}
@@ -186,8 +189,15 @@ body {
.main-row {
margin-top: 40px;
}
.sidebar {
width: 215px;
float: left;
}
.main-content {
padding: 16px 18px inherit;
margin-left: 230px;
/* space for sidebar */
}
/* =======================
Social footer
@@ -198,20 +208,54 @@ body {
}
.social .twitter,
.social .github,
.social .googleplus {
background: url("https://www.docker.io/static/img/footer-links.png") no-repeat transparent;
.social .googleplus,
.social .facebook,
.social .slideshare,
.social .linkedin,
.social .flickr,
.social .youtube,
.social .reddit {
background: url("../img/social/docker_social_logos.png") no-repeat transparent;
display: inline-block;
height: 35px;
height: 32px;
overflow: hidden;
text-indent: 9999px;
width: 35px;
margin-right: 10px;
width: 32px;
margin-right: 5px;
}
.social :hover {
-webkit-transform: rotate(-10deg);
-moz-transform: rotate(-10deg);
-o-transform: rotate(-10deg);
-ms-transform: rotate(-10deg);
transform: rotate(-10deg);
}
.social .twitter {
background-position: 0px 2px;
background-position: -160px 0px;
}
.social .reddit {
background-position: -256px 0px;
}
.social .github {
background-position: -59px 2px;
background-position: -64px 0px;
}
.social .googleplus {
background-position: -96px 0px;
}
.social .facebook {
background-position: 0px 0px;
}
.social .slideshare {
background-position: -128px 0px;
}
.social .youtube {
background-position: -192px 0px;
}
.social .flickr {
background-position: -32px 0px;
}
.social .linkedin {
background-position: -224px 0px;
}
form table th {
vertical-align: top;
@@ -342,6 +386,7 @@ div.alert.alert-block {
border: 1px solid #88BABC;
padding: 5px;
font-size: larger;
max-width: 300px;
}
.version-flyer .content {
padding-right: 45px;
@@ -351,18 +396,18 @@ div.alert.alert-block {
background-position: right center;
background-repeat: no-repeat;
}
.version-flyer .alternative {
visibility: hidden;
display: none;
}
.version-flyer .active-slug {
visibility: visible;
display: inline-block;
font-weight: bolder;
}
.version-flyer:hover .alternative {
animation-duration: 1s;
display: inline-block;
visibility: visible;
}
.version-flyer .version-note {
font-size: 16px;
color: black;
}
/* =====================================
Styles for
@@ -410,3 +455,20 @@ dt:hover > a.headerlink {
.admonition.seealso {
border-color: #23cb1f;
}
/* Add styles for other types of comments */
.versionchanged,
.versionadded,
.versionmodified,
.deprecated {
font-size: larger;
font-weight: bold;
}
.versionchanged {
color: lightseagreen;
}
.versionadded {
color: mediumblue;
}
.deprecated {
color: orangered;
}


@@ -98,7 +98,6 @@ p a {
}
.navbar .brand {
margin-left: 0px;
float: left;
@@ -126,9 +125,11 @@ p a {
box-shadow: 0 1px 4px rgba(0, 0, 0, 0.065);
}
.brand.logo a {
.brand-logo a {
color: white;
img {
width: auto;
}
}
.logo {
@@ -317,10 +318,18 @@ body {
margin-top: 40px;
}
.sidebar {
width: 215px;
float: left;
}
.main-content {
padding: 16px 18px inherit;
margin-left: 230px; /* space for sidebar */
}
/* =======================
Social footer
======================= */
@@ -330,24 +339,64 @@ body {
margin-top: 15px;
}
.social .twitter, .social .github, .social .googleplus {
background: url("https://www.docker.io/static/img/footer-links.png") no-repeat transparent;
display: inline-block;
height: 35px;
overflow: hidden;
text-indent: 9999px;
width: 35px;
margin-right: 10px;
.social {
.twitter, .github, .googleplus, .facebook, .slideshare, .linkedin, .flickr, .youtube, .reddit {
background: url("../img/social/docker_social_logos.png") no-repeat transparent;
display: inline-block;
height: 32px;
overflow: hidden;
text-indent: 9999px;
width: 32px;
margin-right: 5px;
}
}
.social :hover {
-webkit-transform: rotate(-10deg);
-moz-transform: rotate(-10deg);
-o-transform: rotate(-10deg);
-ms-transform: rotate(-10deg);
transform: rotate(-10deg);
}
.social .twitter {
background-position: 0px 2px;
background-position: -160px 0px;
}
.social .reddit {
background-position: -256px 0px;
}
.social .github {
background-position: -59px 2px;
background-position: -64px 0px;
}
.social .googleplus {
background-position: -96px 0px;
}
.social .facebook {
background-position: 0px 0px;
}
.social .slideshare {
background-position: -128px 0px;
}
.social .youtube {
background-position: -192px 0px;
}
.social .flickr {
background-position: -32px 0px;
}
.social .linkedin {
background-position: -224px 0px;
}
// Styles on the forms
// ----------------------------------
@@ -528,31 +577,34 @@ div.alert.alert-block {
border: 1px solid #88BABC;
padding: 5px;
font-size: larger;
max-width: 300px;
.content {
padding-right: 45px;
margin-top: 7px;
margin-left: 7px;
// display: inline-block;
background-image: url('../img/container3.png');
background-position: right center;
background-repeat: no-repeat;
}
.alternative {
visibility: hidden;
display: none;
}
.active-slug {
visibility: visible;
display: inline-block;
font-weight: bolder;
}
&:hover .alternative {
animation-duration: 1s;
display: inline-block;
visibility: visible;
}
.version-note {
font-size: 16px;
color: black;
}
}
@@ -612,3 +664,24 @@ dt:hover > a.headerlink {
}
/* Add styles for other types of comments */
.versionchanged,
.versionadded,
.versionmodified,
.deprecated {
font-size: larger;
font-weight: bold;
}
.versionchanged {
color: lightseagreen;
}
.versionadded {
color: mediumblue;
}
.deprecated {
color: orangered;
}

(Binary image changed, not shown. Before: 2.1 KiB; after: 3.1 KiB.)


@@ -53,14 +53,6 @@ $(function(){
}
}
if (doc_version == "") {
$('.version-flyer ul').html('<li class="alternative active-slug"><a href="" title="Switch to local">Local</a></li>');
}
// mark the active documentation in the version widget
$(".version-flyer a:contains('" + doc_version + "')").parent().addClass('active-slug');
// attached handler on click
// Do not attach to first element or last (intro, faq) so that
// first and last link directly instead of accordion
@@ -95,4 +87,18 @@ $(function(){
// add class to all those which have children
$('.sidebar > ul > li').not(':last').not(':first').addClass('has-children');
if (doc_version == "") {
$('.version-flyer ul').html('<li class="alternative active-slug"><a href="" title="Switch to local">Local</a></li>');
}
if (doc_version == "master") {
$('.version-flyer .version-note').hide();
}
// mark the active documentation in the version widget
$(".version-flyer a:contains('" + doc_version + "')").parent().addClass('active-slug').setAttribute("title", "Current version");
});


@@ -1 +1 @@
Solomon Hykes <solomon@dotcloud.com>
#Solomon Hykes <solomon@dotcloud.com> Temporarily unavailable


@@ -3,8 +3,10 @@ package engine
import (
"fmt"
"github.com/dotcloud/docker/utils"
"io"
"log"
"os"
"path/filepath"
"runtime"
"strings"
)
@@ -34,6 +36,9 @@ type Engine struct {
handlers map[string]Handler
hack Hack // data for temporary hackery (see hack.go)
id string
Stdout io.Writer
Stderr io.Writer
Stdin io.Reader
}
func (eng *Engine) Root() string {
@@ -75,13 +80,31 @@ func New(root string) (*Engine, error) {
}
}
}
if err := os.MkdirAll(root, 0700); err != nil && !os.IsExist(err) {
return nil, err
}
// Docker makes some assumptions about the "absoluteness" of root
// ... so let's make sure it has no symlinks
if p, err := filepath.Abs(root); err != nil {
log.Fatalf("Unable to get absolute root (%s): %s", root, err)
} else {
root = p
}
if p, err := filepath.EvalSymlinks(root); err != nil {
log.Fatalf("Unable to canonicalize root (%s): %s", root, err)
} else {
root = p
}
eng := &Engine{
root: root,
handlers: make(map[string]Handler),
id: utils.RandomString(),
Stdout: os.Stdout,
Stderr: os.Stderr,
Stdin: os.Stdin,
}
// Copy existing global handlers
for k, v := range globalHandlers {
@@ -104,9 +127,9 @@ func (eng *Engine) Job(name string, args ...string) *Job {
Stdin: NewInput(),
Stdout: NewOutput(),
Stderr: NewOutput(),
env: &Env{},
}
job.Stdout.Add(utils.NopWriteCloser(os.Stdout))
job.Stderr.Add(utils.NopWriteCloser(os.Stderr))
job.Stderr.Add(utils.NopWriteCloser(eng.Stderr))
handler, exists := eng.handlers[name]
if exists {
job.handler = handler
@@ -116,5 +139,5 @@ func (eng *Engine) Job(name string, args ...string) *Job {
func (eng *Engine) Logf(format string, args ...interface{}) (n int, err error) {
prefixedFormat := fmt.Sprintf("[%s] %s\n", eng, strings.TrimRight(format, "\n"))
return fmt.Fprintf(os.Stderr, prefixedFormat, args...)
return fmt.Fprintf(eng.Stderr, prefixedFormat, args...)
}
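
The new Stdout/Stderr/Stdin fields mean an engine's streams can be swapped out instead of always hitting the process's standard streams. A minimal sketch of capturing Logf output in a buffer (the temp-dir setup and test name are illustrative, not from this diff):

```go
package engine_test

import (
	"bytes"
	"io/ioutil"
	"os"
	"testing"

	"github.com/dotcloud/docker/engine"
)

func TestCaptureEngineLogs(t *testing.T) {
	root, err := ioutil.TempDir("", "engine-logs")
	if err != nil {
		t.Fatal(err)
	}
	defer os.RemoveAll(root)

	eng, err := engine.New(root)
	if err != nil {
		t.Fatal(err)
	}
	var logs bytes.Buffer
	eng.Stderr = &logs // Logf now writes here instead of os.Stderr
	eng.Logf("hello")
	if logs.Len() == 0 {
		t.Fatal("expected Logf output in the buffer")
	}
}
```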


@@ -18,7 +18,7 @@ func TestRegister(t *testing.T) {
eng := newTestEngine(t)
//Should fail because globan handlers are copied
//Should fail because global handlers are copied
//at the engine creation
if err := eng.Register("dummy1", nil); err == nil {
t.Fatalf("Expecting error, got none")

engine/env.go (new file, 234 lines)

@@ -0,0 +1,234 @@
package engine
import (
"bytes"
"encoding/json"
"fmt"
"io"
"strconv"
"strings"
)
type Env []string
func (env *Env) Get(key string) (value string) {
// FIXME: use Map()
for _, kv := range *env {
if strings.Index(kv, "=") == -1 {
continue
}
parts := strings.SplitN(kv, "=", 2)
if parts[0] != key {
continue
}
if len(parts) < 2 {
value = ""
} else {
value = parts[1]
}
}
return
}
func (env *Env) Exists(key string) bool {
_, exists := env.Map()[key]
return exists
}
func (env *Env) GetBool(key string) (value bool) {
s := strings.ToLower(strings.Trim(env.Get(key), " \t"))
if s == "" || s == "0" || s == "no" || s == "false" || s == "none" {
return false
}
return true
}
func (env *Env) SetBool(key string, value bool) {
if value {
env.Set(key, "1")
} else {
env.Set(key, "0")
}
}
func (env *Env) GetInt(key string) int {
return int(env.GetInt64(key))
}
func (env *Env) GetInt64(key string) int64 {
s := strings.Trim(env.Get(key), " \t")
val, err := strconv.ParseInt(s, 10, 64)
if err != nil {
return -1
}
return val
}
func (env *Env) SetInt(key string, value int) {
env.Set(key, fmt.Sprintf("%d", value))
}
func (env *Env) SetInt64(key string, value int64) {
env.Set(key, fmt.Sprintf("%d", value))
}
// Returns nil if key not found
func (env *Env) GetList(key string) []string {
sval := env.Get(key)
if sval == "" {
return nil
}
l := make([]string, 0, 1)
if err := json.Unmarshal([]byte(sval), &l); err != nil {
l = append(l, sval)
}
return l
}
func (env *Env) GetJson(key string, iface interface{}) error {
sval := env.Get(key)
if sval == "" {
return nil
}
return json.Unmarshal([]byte(sval), iface)
}
func (env *Env) SetJson(key string, value interface{}) error {
sval, err := json.Marshal(value)
if err != nil {
return err
}
env.Set(key, string(sval))
return nil
}
func (env *Env) SetList(key string, value []string) error {
return env.SetJson(key, value)
}
func (env *Env) Set(key, value string) {
*env = append(*env, key+"="+value)
}
func NewDecoder(src io.Reader) *Decoder {
return &Decoder{
json.NewDecoder(src),
}
}
type Decoder struct {
*json.Decoder
}
func (decoder *Decoder) Decode() (*Env, error) {
m := make(map[string]interface{})
if err := decoder.Decoder.Decode(&m); err != nil {
return nil, err
}
env := &Env{}
for key, value := range m {
env.SetAuto(key, value)
}
return env, nil
}
// DecodeEnv decodes `src` as a json dictionary, and adds
// each decoded key-value pair to the environment.
//
// If `src` cannot be decoded as a json dictionary, an error
// is returned.
func (env *Env) Decode(src io.Reader) error {
m := make(map[string]interface{})
if err := json.NewDecoder(src).Decode(&m); err != nil {
return err
}
for k, v := range m {
env.SetAuto(k, v)
}
return nil
}
func (env *Env) SetAuto(k string, v interface{}) {
// FIXME: we fix-convert float values to int, because
// encoding/json decodes integers to float64, but cannot encode them back.
// (See http://golang.org/src/pkg/encoding/json/decode.go#L46)
if fval, ok := v.(float64); ok {
env.SetInt64(k, int64(fval))
} else if sval, ok := v.(string); ok {
env.Set(k, sval)
} else if val, err := json.Marshal(v); err == nil {
env.Set(k, string(val))
} else {
env.Set(k, fmt.Sprintf("%v", v))
}
}
func (env *Env) Encode(dst io.Writer) error {
m := make(map[string]interface{})
for k, v := range env.Map() {
var val interface{}
if err := json.Unmarshal([]byte(v), &val); err == nil {
// FIXME: we fix-convert float values to int, because
// encoding/json decodes integers to float64, but cannot encode them back.
// (See http://golang.org/src/pkg/encoding/json/decode.go#L46)
if fval, isFloat := val.(float64); isFloat {
val = int(fval)
}
m[k] = val
} else {
m[k] = v
}
}
if err := json.NewEncoder(dst).Encode(&m); err != nil {
return err
}
return nil
}
func (env *Env) WriteTo(dst io.Writer) (n int64, err error) {
// FIXME: return the number of bytes written to respect io.WriterTo
return 0, env.Encode(dst)
}
func (env *Env) Export(dst interface{}) (err error) {
defer func() {
if err != nil {
err = fmt.Errorf("ExportEnv %s", err)
}
}()
var buf bytes.Buffer
// step 1: encode/marshal the env to an intermediary json representation
if err := env.Encode(&buf); err != nil {
return err
}
// step 2: decode/unmarshal the intermediary json into the destination object
if err := json.NewDecoder(&buf).Decode(dst); err != nil {
return err
}
return nil
}
func (env *Env) Import(src interface{}) (err error) {
defer func() {
if err != nil {
err = fmt.Errorf("ImportEnv: %s", err)
}
}()
var buf bytes.Buffer
if err := json.NewEncoder(&buf).Encode(src); err != nil {
return err
}
if err := env.Decode(&buf); err != nil {
return err
}
return nil
}
func (env *Env) Map() map[string]string {
m := make(map[string]string)
for _, kv := range *env {
parts := strings.SplitN(kv, "=", 2)
m[parts[0]] = parts[1]
}
return m
}
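
Taken together, Env is a typed view over a plain KEY=value string slice. A minimal usage sketch (only the import path comes from this diff; the keys and values are made up):

```go
package main

import (
	"fmt"
	"os"

	"github.com/dotcloud/docker/engine"
)

func main() {
	env := &engine.Env{}
	env.Set("name", "web")                  // appends "name=web"
	env.SetInt("port", 8080)                // appends "port=8080"
	env.SetList("tags", []string{"a", "b"}) // value stored as a JSON array

	// Get scans the whole slice and keeps the last match,
	// so the most recent Set for a key wins.
	env.Set("name", "db")
	fmt.Println(env.Get("name"))     // db
	fmt.Println(env.GetInt("port"))  // 8080
	fmt.Println(env.GetList("tags")) // [a b]

	// Encode flattens the environment into one json dictionary;
	// key order in the output is not guaranteed.
	env.Encode(os.Stdout)
}
```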

engine/http.go (new file, 40 lines)

@@ -0,0 +1,40 @@
package engine
import (
"path"
"net/http"
)
// ServeHTTP executes a job as specified by the http request `r`, and sends the
// result as an http response.
// This method allows an Engine instance to be used as a standard http.Handler.
//
// Note that the protocol used in this method is a convenience wrapper and is not the canonical
// implementation of remote job execution. This is because HTTP/1 does not handle stream multiplexing,
// and so cannot differentiate stdout from stderr. Additionally, headers cannot be added to a response
// once data has been written to the body, which makes it inconvenient to return metadata such
// as the exit status.
//
func (eng *Engine) ServeHTTP(w http.ResponseWriter, r *http.Request) {
jobName := path.Base(r.URL.Path)
jobArgs, exists := r.URL.Query()["a"]
if !exists {
jobArgs = []string{}
}
w.Header().Set("Job-Name", jobName)
for _, arg := range jobArgs {
w.Header().Add("Job-Args", arg)
}
job := eng.Job(jobName, jobArgs...)
job.Stdout.Add(w)
job.Stderr.Add(w)
// FIXME: distinguish job status from engine error in Run()
// The former should be passed as a special header, the latter
// should cause a 500 status
w.WriteHeader(http.StatusOK)
// The exit status cannot be sent reliably with HTTP1, because headers
// can only be sent before the body.
// (we could possibly use http footers via chunked encoding, but I couldn't find
// how to use them in net/http)
job.Run()
}
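
Since *Engine now satisfies http.Handler, it can be handed straight to net/http. A sketch, under the assumption that a job handler named "echo" has been registered; the port and root path are arbitrary:

```go
package main

import (
	"log"
	"net/http"

	"github.com/dotcloud/docker/engine"
)

func main() {
	eng, err := engine.New("/var/lib/docker-engine-demo")
	if err != nil {
		log.Fatal(err)
	}
	// A request like GET /echo?a=hello&a=world runs the job "echo"
	// with args ["hello", "world"]; stdout and stderr both land in
	// the response body, since HTTP/1 cannot multiplex the streams.
	log.Fatal(http.ListenAndServe(":4243", eng))
}
```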


@@ -1,11 +1,8 @@
package engine
import (
"bytes"
"encoding/json"
"fmt"
"io"
"strconv"
"strings"
"time"
)
@@ -27,7 +24,7 @@ type Job struct {
Eng *Engine
Name string
Args []string
env []string
env *Env
Stdout *Output
Stderr *Output
Stdin *Input
@@ -105,80 +102,52 @@ func (job *Job) String() string {
}
func (job *Job) Getenv(key string) (value string) {
for _, kv := range job.env {
if strings.Index(kv, "=") == -1 {
continue
}
parts := strings.SplitN(kv, "=", 2)
if parts[0] != key {
continue
}
if len(parts) < 2 {
value = ""
} else {
value = parts[1]
}
}
return
return job.env.Get(key)
}
func (job *Job) GetenvBool(key string) (value bool) {
s := strings.ToLower(strings.Trim(job.Getenv(key), " \t"))
if s == "" || s == "0" || s == "no" || s == "false" || s == "none" {
return false
}
return true
return job.env.GetBool(key)
}
func (job *Job) SetenvBool(key string, value bool) {
if value {
job.Setenv(key, "1")
} else {
job.Setenv(key, "0")
}
job.env.SetBool(key, value)
}
func (job *Job) GetenvInt(key string) int64 {
s := strings.Trim(job.Getenv(key), " \t")
val, err := strconv.ParseInt(s, 10, 64)
if err != nil {
return -1
}
return val
func (job *Job) GetenvInt64(key string) int64 {
return job.env.GetInt64(key)
}
func (job *Job) SetenvInt(key string, value int64) {
job.Setenv(key, fmt.Sprintf("%d", value))
func (job *Job) GetenvInt(key string) int {
return job.env.GetInt(key)
}
func (job *Job) SetenvInt64(key string, value int64) {
job.env.SetInt64(key, value)
}
func (job *Job) SetenvInt(key string, value int) {
job.env.SetInt(key, value)
}
// Returns nil if key not found
func (job *Job) GetenvList(key string) []string {
sval := job.Getenv(key)
if sval == "" {
return nil
}
l := make([]string, 0, 1)
if err := json.Unmarshal([]byte(sval), &l); err != nil {
l = append(l, sval)
}
return l
return job.env.GetList(key)
}
func (job *Job) GetenvJson(key string, iface interface{}) error {
return job.env.GetJson(key, iface)
}
func (job *Job) SetenvJson(key string, value interface{}) error {
sval, err := json.Marshal(value)
if err != nil {
return err
}
job.Setenv(key, string(sval))
return nil
return job.env.SetJson(key, value)
}
func (job *Job) SetenvList(key string, value []string) error {
return job.SetenvJson(key, value)
return job.env.SetJson(key, value)
}
func (job *Job) Setenv(key, value string) {
job.env = append(job.env, key+"="+value)
job.env.Set(key, value)
}
// DecodeEnv decodes `src` as a json dictionary, and adds
@@ -187,90 +156,23 @@ func (job *Job) Setenv(key, value string) {
// If `src` cannot be decoded as a json dictionary, an error
// is returned.
func (job *Job) DecodeEnv(src io.Reader) error {
m := make(map[string]interface{})
if err := json.NewDecoder(src).Decode(&m); err != nil {
return err
}
for k, v := range m {
// FIXME: we fix-convert float values to int, because
// encoding/json decodes integers to float64, but cannot encode them back.
// (See http://golang.org/src/pkg/encoding/json/decode.go#L46)
if fval, ok := v.(float64); ok {
job.SetenvInt(k, int64(fval))
} else if sval, ok := v.(string); ok {
job.Setenv(k, sval)
} else if val, err := json.Marshal(v); err == nil {
job.Setenv(k, string(val))
} else {
job.Setenv(k, fmt.Sprintf("%v", v))
}
}
return nil
return job.env.Decode(src)
}
func (job *Job) EncodeEnv(dst io.Writer) error {
m := make(map[string]interface{})
for k, v := range job.Environ() {
var val interface{}
if err := json.Unmarshal([]byte(v), &val); err == nil {
// FIXME: we fix-convert float values to int, because
// encoding/json decodes integers to float64, but cannot encode them back.
// (See http://golang.org/src/pkg/encoding/json/decode.go#L46)
if fval, isFloat := val.(float64); isFloat {
val = int(fval)
}
m[k] = val
} else {
m[k] = v
}
}
if err := json.NewEncoder(dst).Encode(&m); err != nil {
return err
}
return nil
return job.env.Encode(dst)
}
func (job *Job) ExportEnv(dst interface{}) (err error) {
defer func() {
if err != nil {
err = fmt.Errorf("ExportEnv %s", err)
}
}()
var buf bytes.Buffer
// step 1: encode/marshal the env to an intermediary json representation
if err := job.EncodeEnv(&buf); err != nil {
return err
}
// step 2: decode/unmarshal the intermediary json into the destination object
if err := json.NewDecoder(&buf).Decode(dst); err != nil {
return err
}
return nil
return job.env.Export(dst)
}
func (job *Job) ImportEnv(src interface{}) (err error) {
defer func() {
if err != nil {
err = fmt.Errorf("ImportEnv: %s", err)
}
}()
var buf bytes.Buffer
if err := json.NewEncoder(&buf).Encode(src); err != nil {
return err
}
if err := job.DecodeEnv(&buf); err != nil {
return err
}
return nil
return job.env.Import(src)
}
func (job *Job) Environ() map[string]string {
m := make(map[string]string)
for _, kv := range job.env {
parts := strings.SplitN(kv, "=", 2)
m[parts[0]] = parts[1]
}
return m
return job.env.Map()
}
func (job *Job) Logf(format string, args ...interface{}) (n int, err error) {
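
One behavioral note on the hunk above: Job.GetenvInt changed its return type from int64 to int, and the new Job.GetenvInt64 now covers callers that need the 64-bit value.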


@@ -164,3 +164,29 @@ func Tail(src io.Reader, n int, dst *[]string) {
*dst = append(*dst, v.(string))
})
}
// AddEnv starts a new goroutine which will decode all subsequent data
// as a stream of json-encoded objects, and point `dst` to the last
// decoded object.
// The result `env` can be queried using the type-neutral Env interface.
// It is not safe to query `env` until the Output is closed.
func (o *Output) AddEnv() (dst *Env, err error) {
src, err := o.AddPipe()
if err != nil {
return nil, err
}
dst = &Env{}
o.tasks.Add(1)
go func() {
defer o.tasks.Done()
decoder := NewDecoder(src)
for {
env, err := decoder.Decode()
if err != nil {
return
}
*dst = *env
}
}()
return dst, nil
}
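
The TestOutputAddEnv hunk below doubles as a usage example: add the env sink with AddEnv, write json objects into the Output, Close it, and only then query the returned *Env.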


@@ -72,6 +72,26 @@ func (w *sentinelWriteCloser) Close() error {
return nil
}
func TestOutputAddEnv(t *testing.T) {
input := "{\"foo\": \"bar\", \"answer_to_life_the_universe_and_everything\": 42}"
o := NewOutput()
result, err := o.AddEnv()
if err != nil {
t.Fatal(err)
}
o.Write([]byte(input))
o.Close()
if v := result.Get("foo"); v != "bar" {
t.Errorf("Expected %v, got %v", "bar", v)
}
if v := result.GetInt("answer_to_life_the_universe_and_everything"); v != 42 {
t.Errorf("Expected %v, got %v", 42, v)
}
if v := result.Get("this-value-doesnt-exist"); v != "" {
t.Errorf("Expected %v, got %v", "", v)
}
}
func TestOutputAddClose(t *testing.T) {
o := NewOutput()
var s sentinelWriteCloser


@@ -10,6 +10,7 @@ import (
"os"
"path"
"path/filepath"
"runtime"
"strings"
"syscall"
"time"
@@ -56,6 +57,7 @@ func (graph *Graph) restore() error {
graph.idIndex.Add(id)
}
}
utils.Debugf("Restored %d elements", len(dir))
return nil
}
@@ -130,7 +132,8 @@ func (graph *Graph) Create(layerData archive.Archive, container *Container, comm
DockerVersion: VERSION,
Author: author,
Config: config,
Architecture: "x86_64",
Architecture: runtime.GOARCH,
OS: runtime.GOOS,
}
if container != nil {
img.Parent = container.Image
@@ -219,7 +222,7 @@ func (graph *Graph) TempLayerArchive(id string, compression archive.Compression,
if err != nil {
return nil, err
}
return archive.NewTempArchive(utils.ProgressReader(ioutil.NopCloser(a), 0, output, sf, true, "", "Buffering to disk"), tmp)
return archive.NewTempArchive(utils.ProgressReader(ioutil.NopCloser(a), 0, output, sf, false, utils.TruncateID(id), "Buffering to disk"), tmp)
}
// Mktemp creates a temporary sub-directory inside the graph's filesystem.


@@ -25,8 +25,8 @@ import (
"fmt"
"github.com/dotcloud/docker/archive"
"github.com/dotcloud/docker/graphdriver"
mountpk "github.com/dotcloud/docker/mount"
"github.com/dotcloud/docker/utils"
"log"
"os"
"os/exec"
"path"
@@ -296,7 +296,7 @@ func (a *Driver) unmount(id string) error {
func (a *Driver) mounted(id string) (bool, error) {
target := path.Join(a.rootPath(), "mnt", id)
return Mounted(target)
return mountpk.Mounted(target)
}
// During cleanup aufs needs to unmount all mountpoints
@@ -313,24 +313,44 @@ func (a *Driver) Cleanup() error {
return nil
}
func (a *Driver) aufsMount(ro []string, rw, target string) error {
rwBranch := fmt.Sprintf("%v=rw", rw)
roBranches := ""
for _, layer := range ro {
roBranches += fmt.Sprintf("%v=ro+wh:", layer)
}
branches := fmt.Sprintf("br:%v:%v,xino=/dev/shm/aufs.xino", rwBranch, roBranches)
func (a *Driver) aufsMount(ro []string, rw, target string) (err error) {
defer func() {
if err != nil {
Unmount(target)
}
}()
//if error, try to load aufs kernel module
if err := mount("none", target, "aufs", 0, branches); err != nil {
log.Printf("Kernel does not support AUFS, trying to load the AUFS module with modprobe...")
if err := exec.Command("modprobe", "aufs").Run(); err != nil {
return fmt.Errorf("Unable to load the AUFS module")
if err = a.tryMount(ro, rw, target); err != nil {
if err = a.mountRw(rw, target); err != nil {
return
}
log.Printf("...module loaded.")
if err := mount("none", target, "aufs", 0, branches); err != nil {
return fmt.Errorf("Unable to mount using aufs %s", err)
for _, layer := range ro {
branch := fmt.Sprintf("append:%s=ro+wh", layer)
if err = mount("none", target, "aufs", MsRemount, branch); err != nil {
return
}
}
}
return nil
return
}
// Try to mount using the aufs fast path, if this fails then
// append ro layers.
func (a *Driver) tryMount(ro []string, rw, target string) (err error) {
var (
rwBranch = fmt.Sprintf("%s=rw", rw)
roBranches = fmt.Sprintf("%s=ro+wh:", strings.Join(ro, "=ro+wh:"))
)
return mount("none", target, "aufs", 0, fmt.Sprintf("br:%v:%v,xino=/dev/shm/aufs.xino", rwBranch, roBranches))
}
func (a *Driver) mountRw(rw, target string) error {
return mount("none", target, "aufs", 0, fmt.Sprintf("br:%s,xino=/dev/shm/aufs.xino", rw))
}
func rollbackMount(target string, err error) {
if err != nil {
Unmount(target)
}
}
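
The control flow above is easier to see in one piece: mount only the writable branch first, then graft each read-only layer on with its own remount, so no single mount(2) call has to carry the full branch string (which is what breaks once the layer count grows past a few dozen). A consolidated sketch using the mount helper and MsRemount constant from this diff:

```go
package aufs

import "fmt"

// mountLayers condenses tryMount/mountRw and the append loop from
// aufsMount above; it is illustrative, not the driver itself.
func mountLayers(ro []string, rw, target string) error {
	// Mount the writable branch alone: the options string stays small.
	if err := mount("none", target, "aufs", 0,
		fmt.Sprintf("br:%s,xino=/dev/shm/aufs.xino", rw)); err != nil {
		return err
	}
	// Graft each read-only layer on with a separate remount.
	for _, layer := range ro {
		opts := fmt.Sprintf("append:%s=ro+wh", layer)
		if err := mount("none", target, "aufs", MsRemount, opts); err != nil {
			return err
		}
	}
	return nil
}
```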


@@ -1,7 +1,11 @@
package aufs
import (
"crypto/sha256"
"encoding/hex"
"fmt"
"github.com/dotcloud/docker/archive"
"io/ioutil"
"os"
"path"
"testing"
@@ -621,3 +625,70 @@ func TestApplyDiff(t *testing.T) {
t.Fatal(err)
}
}
func hash(c string) string {
h := sha256.New()
fmt.Fprint(h, c)
return hex.EncodeToString(h.Sum(nil))
}
func TestMountMoreThan42Layers(t *testing.T) {
d := newDriver(t)
defer os.RemoveAll(tmp)
defer d.Cleanup()
var last string
var expected int
for i := 1; i < 127; i++ {
expected++
var (
parent = fmt.Sprintf("%d", i-1)
current = fmt.Sprintf("%d", i)
)
if parent == "0" {
parent = ""
} else {
parent = hash(parent)
}
current = hash(current)
if err := d.Create(current, parent); err != nil {
t.Logf("Current layer %d", i)
t.Fatal(err)
}
point, err := d.Get(current)
if err != nil {
t.Logf("Current layer %d", i)
t.Fatal(err)
}
f, err := os.Create(path.Join(point, current))
if err != nil {
t.Logf("Current layer %d", i)
t.Fatal(err)
}
f.Close()
if i%10 == 0 {
if err := os.Remove(path.Join(point, parent)); err != nil {
t.Logf("Current layer %d", i)
t.Fatal(err)
}
expected--
}
last = current
}
// Perform the actual mount for the top most image
point, err := d.Get(last)
if err != nil {
t.Fatal(err)
}
files, err := ioutil.ReadDir(point)
if err != nil {
t.Fatal(err)
}
if len(files) != expected {
t.Fatalf("Expected %d got %d", expected, len(files))
}
}


@@ -2,9 +2,7 @@ package aufs
import (
"github.com/dotcloud/docker/utils"
"os"
"os/exec"
"path/filepath"
"syscall"
)
@@ -17,21 +15,3 @@ func Unmount(target string) error {
}
return nil
}
func Mounted(mountpoint string) (bool, error) {
mntpoint, err := os.Stat(mountpoint)
if err != nil {
if os.IsNotExist(err) {
return false, nil
}
return false, err
}
parent, err := os.Stat(filepath.Join(mountpoint, ".."))
if err != nil {
return false, err
}
mntpointSt := mntpoint.Sys().(*syscall.Stat_t)
parentSt := parent.Sys().(*syscall.Stat_t)
return mntpointSt.Dev != parentSt.Dev, nil
}


@@ -2,6 +2,8 @@ package aufs
import "errors"
const MsRemount = 0
func mount(source string, target string, fstype string, flags uintptr, data string) (err error) {
return errors.New("mount is not implemented on darwin")
}


@@ -2,6 +2,8 @@ package aufs
import "syscall"
func mount(source string, target string, fstype string, flags uintptr, data string) (err error) {
const MsRemount = syscall.MS_REMOUNT
func mount(source string, target string, fstype string, flags uintptr, data string) error {
return syscall.Mount(source, target, fstype, flags, data)
}


@@ -154,7 +154,7 @@ func (devices *DeviceSet) allocateTransactionId() uint64 {
func (devices *DeviceSet) saveMetadata() error {
jsonData, err := json.Marshal(devices.MetaData)
if err != nil {
return fmt.Errorf("Error encoding metaadata to json: %s", err)
return fmt.Errorf("Error encoding metadata to json: %s", err)
}
tmpFile, err := ioutil.TempFile(filepath.Dir(devices.jsonFile()), ".json")
if err != nil {


@@ -8,6 +8,14 @@ package devmapper
#include <linux/loop.h> // FIXME: present only for defines, maybe we can remove it?
#include <linux/fs.h> // FIXME: present only for BLKGETSIZE64, maybe we can remove it?
#ifndef LOOP_CTL_GET_FREE
#define LOOP_CTL_GET_FREE 0x4C82
#endif
#ifndef LO_FLAGS_PARTSCAN
#define LO_FLAGS_PARTSCAN 8
#endif
// FIXME: Can't we find a way to do the logging in pure Go?
extern void DevmapperLogCallback(int level, char *file, int line, int dm_errno_or_class, char *str);
@@ -55,7 +63,6 @@ type (
}
)
// FIXME: Make sure the values are defined in C
// IOCTL consts
const (
BlkGetSize64 = C.BLKGETSIZE64


@@ -1,2 +1 @@
Solomon Hykes <solomon@dotcloud.com> (@shykes)
Tianon Gravi <admwiggin@gmail.com> (@tianon)


@@ -36,8 +36,9 @@ To build docker, you will need the following system dependencies
* An amd64 machine
* A recent version of git and mercurial
* Go version 1.2 or later (see notes below regarding using Go 1.1.2 and dynbinary)
* Go version 1.2 or later
* SQLite version 3.7.9 or later
* libdevmapper from lvm2 version 1.02.77 or later (http://www.sourceware.org/lvm2/)
* A clean checkout of the source must be added to a valid Go [workspace](http://golang.org/doc/code.html#Workspaces)
under the path *src/github.com/dotcloud/docker*.
@@ -91,8 +92,7 @@ You would do the users of your distro a disservice and "void the docker warranty
A good comparison is Busybox: all distros package it as a statically linked binary, because it just
makes sense. Docker is the same way.
If you *must* have a non-static Docker binary, or require Go 1.1.2 (since Go 1.2 is still freshly released
at the time of this writing), please use:
If you *must* have a non-static Docker binary, please use:
```bash
./hack/make.sh dynbinary


@@ -136,7 +136,7 @@ sudo('echo -e "deb http://archive.ubuntu.com/ubuntu raring main universe\n'
sudo('DEBIAN_FRONTEND=noninteractive apt-get install -q -y wget python-dev'
' python-pip supervisor git mercurial linux-image-extra-$(uname -r)'
' aufs-tools make libfontconfig libevent-dev libsqlite3-dev libssl-dev')
sudo('wget -O - https://go.googlecode.com/files/go1.1.2.linux-amd64.tar.gz | '
sudo('wget -O - https://go.googlecode.com/files/go1.2.linux-amd64.tar.gz | '
'tar -v -C /usr/local -xz; ln -s /usr/local/go/bin/go /usr/bin/go')
sudo('GOPATH=/go go get -d github.com/dotcloud/docker')
sudo('pip install -r {}/requirements.txt'.format(CFG_PATH))


@@ -116,7 +116,7 @@ case "$lsb_dist" in
(
set -x
$sh_c 'docker run busybox echo "Docker has been successfully installed!"'
)
) || true
fi
exit 0
;;


@@ -15,8 +15,9 @@ set -e
# - The script is intended to be run inside the docker container specified
# in the Dockerfile at the root of the source. In other words:
# DO NOT CALL THIS SCRIPT DIRECTLY.
# - The right way to call this script is to invoke "docker build ." from
# your checkout of the Docker repository, and then
# - The right way to call this script is to invoke "make" from
# your checkout of the Docker repository.
# the Makefile will do a "docker build -t docker ." and then
# "docker run hack/make.sh" in the resulting container image.
#
@@ -28,15 +29,19 @@ RESOLVCONF=$(readlink --canonicalize /etc/resolv.conf)
grep -q "$RESOLVCONF" /proc/mounts || {
echo >&2 "# WARNING! I don't seem to be running in a docker container."
echo >&2 "# The result of this command might be an incorrect build, and will not be officially supported."
echo >&2 "# Try this: 'docker build -t docker . && docker run docker ./hack/make.sh'"
echo >&2 "# Try this: 'make all'"
}
# List of bundles to create when no argument is passed
DEFAULT_BUNDLES=(
binary
test
test-integration
dynbinary
dyntest
dyntest-integration
cover
cross
tgz
ubuntu
)
@@ -60,7 +65,37 @@ fi
# Use these flags when compiling the tests and final binary
LDFLAGS='-X main.GITCOMMIT "'$GITCOMMIT'" -X main.VERSION "'$VERSION'" -w'
LDFLAGS_STATIC='-X github.com/dotcloud/docker/utils.IAMSTATIC true -linkmode external -extldflags "-lpthread -static -Wl,--unresolved-symbols=ignore-in-object-files"'
BUILDFLAGS='-tags netgo'
BUILDFLAGS='-tags netgo -a'
HAVE_GO_TEST_COVER=
if \
go help testflag | grep -- -cover > /dev/null \
&& go tool -n cover > /dev/null 2>&1 \
; then
HAVE_GO_TEST_COVER=1
fi
# If $TESTFLAGS is set in the environment, it is passed as extra arguments to 'go test'.
# You can use this to select certain tests to run, eg.
#
# TESTFLAGS='-run ^TestBuild$' ./hack/make.sh test
#
go_test_dir() {
dir=$1
testcover=()
if [ "$HAVE_GO_TEST_COVER" ]; then
# if our current go install has -cover, we want to use it :)
mkdir -p "$DEST/coverprofiles"
coverprofile="docker${dir#.}"
coverprofile="$DEST/coverprofiles/${coverprofile//\//-}"
testcover=( -cover -coverprofile "$coverprofile" )
fi
(
set -x
cd "$dir"
go test ${testcover[@]} -ldflags "$LDFLAGS" $BUILDFLAGS $TESTFLAGS
)
}
bundle() {
bundlescript=$1

hack/make/cover (new file, 21 lines)

@@ -0,0 +1,21 @@
#!/bin/bash
DEST="$1"
bundle_cover() {
coverprofiles=( "$DEST/../"*"/coverprofiles/"* )
for p in "${coverprofiles[@]}"; do
echo
(
set -x
go tool cover -func="$p"
)
done
}
if [ "$HAVE_GO_TEST_COVER" ]; then
bundle_cover 2>&1 | tee "$DEST/report.log"
else
echo >&2 'warning: the current version of go does not support -cover'
echo >&2 ' skipping test coverage report'
fi

hack/make/cross (new file, 23 lines)

@@ -0,0 +1,23 @@
#!/bin/bash
DEST=$1
# if we have our linux/amd64 version compiled, let's symlink it in
if [ -x "$DEST/../binary/docker-$VERSION" ]; then
mkdir -p "$DEST/linux/amd64"
(
cd "$DEST/linux/amd64"
ln -s ../../../binary/* ./
)
echo "Created symlinks:" "$DEST/linux/amd64/"*
fi
for platform in $DOCKER_CROSSPLATFORMS; do
(
mkdir -p "$DEST/$platform" # bundles/VERSION/cross/GOOS/GOARCH/docker-VERSION
export GOOS=${platform%/*}
export GOARCH=${platform##*/}
export LDFLAGS_STATIC="" # we just need a simple client for these platforms (TODO this might change someday)
source "$(dirname "$BASH_SOURCE")/binary" "$DEST/$platform"
)
done
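
For reference, this bundle is driven entirely by the DOCKER_CROSSPLATFORMS environment variable. Assuming the binary bundle has run first (the symlink block above depends on it), an invocation along the lines of DOCKER_CROSSPLATFORMS='linux/386 darwin/amd64' ./hack/make.sh binary cross should leave client-only binaries under bundles/$VERSION/cross/GOOS/GOARCH/docker-$VERSION.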


@@ -3,7 +3,7 @@
DEST=$1
# dockerinit still needs to be a static binary, even if docker is dynamic
CGO_ENABLED=0 go build -a -o $DEST/dockerinit-$VERSION -ldflags "$LDFLAGS -d" $BUILDFLAGS ./dockerinit
CGO_ENABLED=0 go build -o $DEST/dockerinit-$VERSION -ldflags "$LDFLAGS -d" $BUILDFLAGS ./dockerinit
echo "Created binary: $DEST/dockerinit-$VERSION"
ln -sf dockerinit-$VERSION $DEST/dockerinit
@@ -11,5 +11,7 @@ ln -sf dockerinit-$VERSION $DEST/dockerinit
export DOCKER_INITSHA1="$(sha1sum $DEST/dockerinit-$VERSION | cut -d' ' -f1)"
# exported so that "dyntest" can easily access it later without recalculating it
go build -o $DEST/docker-$VERSION -ldflags "$LDFLAGS -X github.com/dotcloud/docker/utils.INITSHA1 \"$DOCKER_INITSHA1\"" $BUILDFLAGS ./docker
echo "Created binary: $DEST/docker-$VERSION"
(
export LDFLAGS_STATIC="-X github.com/dotcloud/docker/utils.INITSHA1 \"$DOCKER_INITSHA1\" -X github.com/dotcloud/docker/utils.INITPATH \"$DOCKER_INITPATH\""
source "$(dirname "$BASH_SOURCE")/binary"
)

Some files were not shown because too many files have changed in this diff.