Compare commits

...

1177 Commits

Author SHA1 Message Date
Guillaume J. Charmes
ca5913ff3e Bump version to 0.6.2 2013-09-17 11:11:39 -07:00
Michael Crosby
b2707dccc1 Merge pull request #1898 from dotcloud/smallfix_registry
Prevent panic upon error pulling registry
2013-09-17 10:27:19 -07:00
Michael Crosby
b826351337 Merge pull request #1897 from dotcloud/update_doc
Update the commit documentation with better example
2013-09-17 10:21:45 -07:00
Andy Rothfusz
2183644578 Information about data persistence 2013-09-16 16:44:54 -07:00
Guillaume J. Charmes
e836b0064b Prevent panic upon error pulling registry 2013-09-16 16:18:25 -07:00
Michael Crosby
986906cd7b Merge pull request #1892 from dotcloud/fix-maintainers-files
Add MAINTAINERS
2013-09-16 14:41:36 -07:00
Michael Crosby
8d48119340 Merge pull request #1894 from dotcloud/1891-remove_useless_warnings
Don't show network related warnings when networking is disabled
2013-09-16 13:48:17 -07:00
Michael Crosby
cdce873ea0 Merge pull request #1896 from dotcloud/1885-missing-hostfile
Only mount hostname files if config exists
2013-09-16 11:29:52 -07:00
Michael Crosby
5a01f7485c Only mount hostname files if config exists 2013-09-16 17:53:24 +00:00
Michael Crosby
45b50730e3 Merge pull request #1849 from dotcloud/better_handle_hostConfig
Improved hostConfig reload
2013-09-16 10:48:53 -07:00
Victor Vieux
40eaa82fc6 Don't show network related warnings when networking is disabled 2013-09-16 15:58:44 +00:00
Nick Stinemates
d6cf41cbe6 Add MAINTAINERS
Making sure we're consistent across our project.
2013-09-16 04:45:01 +00:00
Solomon Hykes
2f24b9e05c Merge pull request #1884 from dotcloud/update-vendored-deps
* Hack: Update vendored dependencies
2013-09-13 20:55:06 -07:00
Solomon Hykes
a63cfe7153 Merge pull request #1847 from dotcloud/test-docker-in-docker
* Hack: make the build tool more dev-friendly and packager-friendly
2013-09-13 20:19:37 -07:00
Andy Rothfusz
cd289cc10c clarify "use" 2013-09-13 17:22:53 -07:00
Andy Rothfusz
894eff64b4 Merge pull request #1880 from getvictor/patch-api
Fixed LxcConf example in Remote API v1.4 and v1.5
2013-09-13 13:28:26 -07:00
Andy Rothfusz
71e15d1c3c Merge pull request #1881 from dotcloud/ubuntu-install-instructions
Change apt source to http
2013-09-13 13:27:58 -07:00
Michael Crosby
74daeccec5 Change apt source to http 2013-09-13 19:21:43 +00:00
getvictor
102522bcd1 Fixing LxcConf example in docker_remote_api_v1.4.rst 2013-09-13 14:10:05 -05:00
getvictor
b3cbf424ec Fixed LxcConf example for docker_remote_api_v1.5.rst
Based on how command line docker sends the request.
2013-09-13 14:09:05 -05:00
Michael Crosby
33972627b7 Merge pull request #1848 from dotcloud/build-clean
Add rm option to docker build to remove intermediate containers
2013-09-13 10:58:59 -07:00
Michael Crosby
4234fc699f Merge pull request #1716 from eliasp/fixme
Added FIXME for iproute → netlink as advised in issue #925.
2013-09-13 10:58:28 -07:00
Solomon Hykes
4a707a8800 Add go.net to vendored dependencies 2013-09-13 10:24:38 -07:00
Michael Crosby
b986fd00e0 Merge pull request #1846 from dotcloud/fix_return_error
Fix wrong return error
2013-09-13 09:44:38 -07:00
Daniel Mizyrycki
f368fd9f9f Merge pull request #1862 from dotcloud/dockerfile-multiline-space
Replace multiline with empty string not space
2013-09-13 08:52:57 -07:00
Solomon Hykes
6c1c404501 Updated vendored dependencies 2013-09-12 20:42:26 -07:00
Guillaume J. Charmes
796d7a49f0 Update the commit documentation with better example 2013-09-12 13:26:31 -07:00
Michael Crosby
ffc850244f Merge pull request #1741 from griff/1737_fix_volume_uid_gid
Transfer uid and gid from image to volume
2013-09-12 10:38:37 -07:00
Andy Rothfusz
83623675ce Merge pull request #1593 from kyleconroy/master
Enable mobile view for website.
2013-09-12 10:30:16 -07:00
Andy Rothfusz
46f644c074 Merge pull request #1865 from lgp171188/patch-1
Fixed a minor typo
2013-09-12 10:11:18 -07:00
Andy Rothfusz
1bc2ab05ec Merge pull request #1859 from Bachmann1234/patch-1
Fix typo in hello_world.rst
2013-09-12 10:10:49 -07:00
Michael Crosby
1d935c6307 Update docker.bash script for build -rm 2013-09-12 16:57:10 +00:00
Michael Crosby
b7a3fc687e Add rm option to docker build to remove intermediate containers 2013-09-12 16:55:36 +00:00
Guruprasad
718b1f9fe7 Fixed a minor typo
Changed "You need to enable namespaces and cgroups, to the extend of what is needed to run LXC containers" to "You need to enable namespaces and cgroups, to the extent of what is needed to run LXC containers"
2013-09-12 13:25:05 +05:30
Solomon Hykes
45cedefadb hack/vendor.sh: overwrite existing dependencies and remove .git so they can be checked in 2013-09-11 18:38:09 -07:00
Michael Crosby
1e723bc95a Replace multiline with empty string not space 2013-09-11 23:43:55 +00:00
Kyle Conroy
9dd98df1a3 Merge branch 'master' of https://github.com/dotcloud/docker 2013-09-11 15:38:52 -07:00
Andy Rothfusz
557e3ec88e Merge pull request #1851 from metalivedev/1601-ec2install
Fix #1601 and #1845
2013-09-11 14:41:43 -07:00
Matt Bachmann
dfe45e1ad5 Update hello_world.rst
Small typo
2013-09-11 15:35:05 -04:00
Victor Vieux
d5480fb78d Merge pull request #1857 from shin-/master
Fixed push bug
2013-09-11 10:52:05 -07:00
shin-
c6dc90ccb9 Fixed push bug 2013-09-11 19:39:33 +02:00
Michael Crosby
02037a9b53 Merge pull request #1726 from dsissitka/1615-2
Updated "POST /containers/create" to return the right Content-Type. Fixes #1615.
2013-09-11 09:09:48 -07:00
David Sissitka
71d46eaf02 Fixed a bug I created when rebasing. 2013-09-11 01:39:55 -07:00
David Sissitka
f4ba0d4267 Replaced leading spaces with tabs. Again. 2013-09-11 01:28:20 -07:00
David Sissitka
d48ea32186 Updated postContainersCreate to use writeJSON. 2013-09-11 01:26:26 -07:00
David Sissitka
33e4d73629 Replaced leading spaces with tabs. 2013-09-11 01:07:05 -07:00
David Sissitka
07324a37eb Rebased. 2013-09-11 01:03:17 -07:00
David Sissitka
dd49cc45c8 Added a status code argument to writeJSON. 2013-09-11 00:23:53 -07:00
Andy Rothfusz
21be3b76e9 Fix #1601 with vagrantless installation instructions. 2013-09-10 18:54:13 -07:00
Brian Olsen
9cfbaecfe5 Transfer uid and gid to volume. Fixes #1737. 2013-09-11 03:17:42 +02:00
Solomon Hykes
ebee8f28ac hack/make.sh print a warning but don't exit if called outside a correct build environment. 2013-09-10 18:08:02 -07:00
Solomon Hykes
03e36caeb1 Fix typo and add dependency details in hack/PACKAGERS.md 2013-09-10 18:02:33 -07:00
Victor Vieux
f482432a76 improved hostConfig reload 2013-09-10 22:13:15 +00:00
Andy Rothfusz
c75926be4f Fix #1845 lxc typo in images 2013-09-10 14:54:35 -07:00
Michael Crosby
bd0e4fde9a Merge pull request #1838 from jmcvetta/multiline_dockerfile
Implementation of multiline syntax for Dockerfile
2013-09-10 14:22:34 -07:00
Andy Rothfusz
de75241b62 Merge pull request #1835 from metalivedev/1821-exampleslanding
fix #1821 landing page for examples
2013-09-10 14:14:00 -07:00
Solomon Hykes
228b7af516 Remove fixed FIXMEs 2013-09-10 11:33:37 -07:00
Solomon Hykes
d14058bc29 Update usage comments in hack/make.sh 2013-09-10 11:33:26 -07:00
Solomon Hykes
5b361f31f7 Packager's manual: official build vs distro build 2013-09-10 11:30:14 -07:00
Kyle Conroy
c24c5c3b60 Merge branch 'master' of https://github.com/dotcloud/docker 2013-09-10 10:57:33 -07:00
Victor Vieux
44d0f941f2 fix wrong return error 2013-09-10 17:16:59 +00:00
Victor Vieux
b7826f5666 add Brian Olsen to AUTHORS 2013-09-10 16:55:27 +00:00
Victor Vieux
27ca0c225a Merge branch '1582_fix_volume_content' of https://github.com/griff/docker into griff-1582_fix_volume_content 2013-09-10 16:52:46 +00:00
Michael Crosby
ad152efbed Merge pull request #1759 from bdon/graph-map
Minor refactor of Graph; replace uses of Graph.All (slice) with Graph.Map (map)
2013-09-10 08:49:11 -07:00
Solomon Hykes
14bbbcd571 PACKAGERS.md: a guide to packaging Docker for your favorite distro 2013-09-09 23:39:55 -07:00
Solomon Hykes
3d39336a46 Break down hack/make.sh into small scripts, one per 'bundle': test, binary, ubuntu etc. 2013-09-09 18:45:40 -07:00
Jason McVetta
6a4afb7f8e stricter regexp for Dockerfile line continuations 2013-09-09 18:23:57 -07:00
Jason McVetta
c01f6df43e Test dockerfile with line containing literal "\n" 2013-09-09 18:23:17 -07:00
Michael Crosby
e89396809f Merge pull request #1735 from dotcloud/1301-support-domainname
Add domain name support
2013-09-09 17:21:11 -07:00
Jason McVetta
4f3b8033f2 cruft removal 2013-09-09 17:17:09 -07:00
Jason McVetta
ebb934c1b0 line continuation regex 2013-09-09 17:02:45 -07:00
Victor Vieux
2801624462 Merge pull request #1796 from shin-/api_1_5
* Remote API: Bumped API version to 1.5
* Registry: Implement login with private registry
* Remote API: Improve port mapping information
2013-09-09 16:58:54 -07:00
Jason McVetta
6921ca4813 read Dockerfile into memory before parsing, to facilitate regexp preprocessing 2013-09-09 16:42:04 -07:00
Solomon Hykes
59856a20bf Add the output of the tests to each release 2013-09-09 16:30:24 -07:00
Solomon Hykes
b187cc40cd Integrate unit tests into hack/make.sh 2013-09-09 16:20:30 -07:00
Victor Vieux
58281b4f18 Merge pull request #1793 from reds/master
From FIXME: local path for ADD
2013-09-09 16:14:59 -07:00
Jason McVetta
672f1e0683 failing test case for multiline command in dockerfile 2013-09-09 15:21:04 -07:00
Victor Vieux
dd806b4ecd add Martin Redmond to AUTHORS 2013-09-09 22:19:28 +00:00
Victor Vieux
843f9091f2 Merge branch 'filter' of https://github.com/reds/docker into reds-filter 2013-09-09 22:16:16 +00:00
Victor Vieux
53da0507e4 Merge pull request #1837 from dotcloud/fix-tests-1812
only os.Exits on error
2013-09-09 15:12:32 -07:00
Victor Vieux
bd847f66c6 rebase master 2013-09-09 22:11:53 +00:00
Solomon Hykes
e503f6a878 Merge pull request #1825 from dotcloud/merge-builder-runtime
Refactor to merge builder.go into runtime.go
2013-09-09 15:11:41 -07:00
Jason McVetta
6678a26d1c gofmt 2013-09-09 15:11:30 -07:00
Solomon Hykes
e37dcd726f Hack: use vendored dependencies in-place, for less moving parts when developing 2013-09-09 15:05:25 -07:00
Andy Rothfusz
8ddfb4d6aa Merge pull request #1828 from mmikulicic/patch-1
Please add go-dockerclient to docker API clients doc
2013-09-09 14:51:28 -07:00
Andy Rothfusz
28f2fe9167 Merge pull request #1836 from srid/patch-2
remove docker-ruby from docs
2013-09-09 14:50:36 -07:00
Sridhar Ratnakumar
3c90e96b6d remove docker-ruby from docs
We don't maintain it anymore, as we now recommend proper HTTP API-based clients instead.
2013-09-09 14:42:45 -07:00
Victor Vieux
46a1cd69a9 only os.Exits on error 2013-09-09 21:26:35 +00:00
Victor Vieux
9b088ada7e Merge pull request #1812 from dotcloud/return-run-exit-code
* Client: Return the process exit code for run commands
2013-09-09 14:10:46 -07:00
Andy Rothfusz
0508bd8da7 Merge pull request #1773 from hectcastro/hc-riak-example
Add a Docker example for running Riak
2013-09-09 13:31:41 -07:00
Victor Vieux
446ca4b57b fix init layer 2013-09-09 20:29:57 +00:00
Victor Vieux
4f2e59f94a bind mount /etc/hosts and /etc/hostname 2013-09-09 20:29:57 +00:00
Victor Vieux
0a436e03b8 add domainname support 2013-09-09 20:29:06 +00:00
Andy Rothfusz
c66a827c98 Merge pull request #1807 from tianon/gentoo-docs
Update Gentoo docs to reflect the package name change from app-emulation/lxc-docker to app-emulation/docker
2013-09-09 13:27:48 -07:00
Andy Rothfusz
fe99e51634 Created a little landing page for the examples, combined first two. 2013-09-09 13:19:07 -07:00
Joffrey F
49b5c44ee9 Merge pull request #1833 from shin-/1815-always-push-tags
Push tags to registry even if images are already uploaded
2013-09-09 13:10:40 -07:00
shin-
64bc08f1c4 Push tags to registry even if images are already uploaded 2013-09-09 21:02:37 +02:00
Andy Rothfusz
2d4ac06426 Merge pull request #1831 from brianshumate/jbs-docs-edits
Hello World Daemon edits
2013-09-09 11:48:12 -07:00
Brian Shumate
9749d94fb0 note about exiting from attachment prior to running 2013-09-09 10:30:06 -04:00
Brian Shumate
630ae43e7d improve sentence readabilty 2013-09-09 10:19:45 -04:00
Marko Mikulicic
268928ab35 Please add go-dockerclient to docker API clients doc 2013-09-09 11:01:15 +01:00
Solomon Hykes
4cd59b96ed Hack: we no longer need to generate test binaries. 2013-09-08 18:45:23 -07:00
Solomon Hykes
055bbb79c1 vendor.sh can cleanly update vendored dependencies 2013-09-07 17:48:52 -07:00
Brandon Philips
19dc3b0792 gitignore: ignore bundles directory 2013-09-07 17:35:49 -07:00
Wes Morgan
20d24a450c move deps installation to vendor.sh script 2013-09-07 17:35:48 -07:00
Solomon Hykes
eca861a99d Add missing package import 2013-09-07 17:03:40 -07:00
Solomon Hykes
d757bd0904 Document using the Dockerfile for interactive dev/test cycles 2013-09-06 20:16:13 -07:00
Solomon Hykes
47838051be Hack: improve the Dockerfile for an easier development workflow. Build dev container once, run a shell with source mount-binded, run tests as you edit. LIKE A BOSS. 2013-09-06 20:14:03 -07:00
Solomon Hykes
fa806f26af Add usage instructions to the Dockerfile. Build, test and release docker using docker. 2013-09-06 19:58:05 -07:00
Solomon Hykes
34eab42833 Adapt Dockerfile to run docker tests inside docker 2013-09-06 19:27:49 -07:00
Solomon Hykes
3c80bd76cf Adapt the original dind script and add a description 2013-09-06 19:27:48 -07:00
Solomon Hykes
c983023661 Copy dind wrapper script from github.com/jpetazzo/dind 2013-09-06 19:27:48 -07:00
Solomon Hykes
6a9f4ecf9b Add missing comments to runtime.go 2013-09-06 17:43:34 -07:00
Solomon Hykes
24e02043a2 Merge builder.go into runtime.go 2013-09-06 17:33:05 -07:00
Guillaume J. Charmes
c8f885a4d0 Merge pull request #1822 from dotcloud/remove_os_user
Remove os user
2013-09-06 16:30:54 -07:00
Guillaume J. Charmes
b07314e2e0 Remove import os/user 2013-09-06 23:00:21 +00:00
Solomon Hykes
eed00a4afd README: remove original shipping containers 'manifesto'. It's a little long to stay here. 2013-09-06 15:09:40 -07:00
Solomon Hykes
6679f3b053 README: replace 'install instructions' and 'usage examples' with references to the website and docs 2013-09-06 15:09:04 -07:00
Solomon Hykes
70ab25a5db README: harmonize intro with website 2013-09-06 15:08:01 -07:00
Solomon Hykes
bcb8d3fd86 README: refer to the docs for install instructions to remove duplication. 2013-09-06 14:52:14 -07:00
Solomon Hykes
b9ca311008 Update roadmap 2013-09-06 14:48:06 -07:00
Thatcher
37186d9ef4 Merge pull request #1818 from metalivedev/655-searchswiftype
Add Swiftype search to docs
2013-09-06 13:46:50 -07:00
Andy Rothfusz
03a28da5f7 Add Swiftype search to docs 2013-09-06 13:34:24 -07:00
Martin Redmond
b44d113120 filter image listing using path.Match 2013-09-06 16:16:10 -04:00
Martin Redmond
35bcba8011 improve image listing 2013-09-06 15:51:49 -04:00
Daniel Mizyrycki
f2bc7ebf1e Merge pull request #1777 from dotcloud/1620-index-test
testing, issue #1620: Add index functional test on docker-ci
2013-09-06 09:21:05 -07:00
Joffrey F
8a2fd914b2 Merge pull request #1780 from shin-/1218-push-dependencies
Respect dependency graph when pushing a repository
2013-09-05 17:32:45 -07:00
Michael Crosby
f3d51bb3d5 Merge pull request #1810 from dotcloud/1798-inspect-fix
Detect images/containers conflicts in docker inspect
2013-09-05 17:17:09 -07:00
Michael Crosby
d06a0ba1a7 Merge pull request #1811 from dotcloud/672-remove_getcache
Remove unused getCache endpoint
2013-09-05 17:07:49 -07:00
Martin Redmond
8374dac021 From FIXME: local path for ADD 2013-09-05 20:05:36 -04:00
Michael Crosby
3bc73fa21e Return the process exit code for run commands 2013-09-05 23:54:03 +00:00
Victor Vieux
0d7044955a remove unused getCache endpoint 2013-09-05 23:00:52 +00:00
Victor Vieux
20a763e519 fix indent in doc 2013-09-05 22:33:04 +00:00
Victor Vieux
5ec2fea6dd Detect images/containers conflicts in docker inspect 2013-09-05 22:31:17 +00:00
Victor Vieux
908bb65dcf Merge pull request #1803 from dotcloud/1802-skip-TestGetContainersTop
testing, issue #1802: Temporarily skip TestGetContainersTop
2013-09-05 14:44:47 -07:00
Tianon Gravi
d368c2dee9 Update Gentoo docs to reflect the package name change from app-emulation/lxc-docker to app-emulation/docker as discussed in today's #docker-meeting 2013-09-05 13:36:56 -06:00
Tianon Gravi
82dd417ef6 Reformat Gentoo install instructions to 80 columns 2013-09-05 13:36:16 -06:00
Andy Rothfusz
b57d7dca9d Merge pull request #1804 from Bouke/patch-1
Fix memory recommendations (Byte, not bit)
2013-09-05 11:52:00 -07:00
Michael Crosby
8e41181591 Merge pull request #1772 from dotcloud/update-tar-revision
Update tar pkg revision number
2013-09-05 10:32:53 -07:00
Bouke Haarsma
fdf1fccd3e Fix memory recommendations (Byte, not bit) 2013-09-05 10:40:11 +02:00
Daniel Mizyrycki
bf6e241d97 testing, issue #1802: Temporarily skip TestGetContainersTop 2013-09-04 18:20:49 -07:00
Andy Rothfusz
f0cdbaa6c8 Merge pull request #1801 from vcoisne/master
Add step to docker installation using vagrant (Mac, Linux)
2013-09-04 18:15:22 -07:00
Andy Rothfusz
e0d6bae1eb Merge pull request #1789 from tommyblue/patch-1
Adds instruction about UDP port redirection
2013-09-04 18:14:34 -07:00
Andy Rothfusz
d6ff912917 Merge pull request #1797 from denibertovic/docs
fixed docs for installing binary
2013-09-04 18:13:25 -07:00
Victor Coisne
ad04cacd28 Add step to docker installation using vagrant (Mac, Linux) 2013-09-04 17:47:18 -07:00
Deni Bertovic
62823cfb97 fixed docs for installing binary 2013-09-05 00:06:41 +02:00
shin-
9ae5054c34 Bumped API version in api.go ; added <1.5 behavior to getContainersJSON 2013-09-04 23:41:44 +02:00
Daniel Mizyrycki
8878943ac9 deployment, issue #1578: Avoid pinning kernel headers. Add Vagrantfile assumptions 2013-09-04 14:41:09 -07:00
Daniel Mizyrycki
8e3844cd34 Merge pull request #1578 from jimmycuadra/vagrant-provider-detection
Better VirtualBox detection in Vagrantfile
2013-09-04 14:40:32 -07:00
Andy Rothfusz
1a32c4a1df Merge pull request #1795 from eliasp/patch-1
Typo
2013-09-04 14:13:52 -07:00
shin-
98edd0e751 Updated docs for API v1.5 2013-09-04 22:58:58 +02:00
shin-
98a1314251 Merge branch 'mhennings-1357-implement-login-with-private-registry' into api_1_5 2013-09-04 22:27:04 +02:00
shin-
34edbd4f7e Merge branch '1357-implement-login-with-private-registry' of git://github.com/mhennings/docker into mhennings-1357-implement-login-with-private-registry 2013-09-04 22:26:55 +02:00
shin-
2ea52dddec Merge branch 'better_api_ports' of git://github.com/shin-/docker into shin--better_api_ports 2013-09-04 22:25:01 +02:00
Elias Probst
396274fa6d Typo 2013-09-04 21:56:51 +02:00
Jimmy Cuadra
28e75b23b3 Don't install VirtualBox Guest Additions if VAGRANT_DEFAULT_PROVIDER is set. 2013-09-04 11:21:24 -07:00
Tommaso Visconti
ad5796de9f Adds instruction about UDP port redirection 2013-09-04 19:17:10 +02:00
Daniel Mizyrycki
3a6868bc2f Merge pull request #1788 from dotcloud/1787-deprecate-port-forwarding
deployment, issue #1787: Remove deprecated port forwarding from /Vagrantfile
2013-09-04 10:11:12 -07:00
Daniel Mizyrycki
34145f9840 deployment, issue #1787: Remove deprecated port forwarding from /Vagrantfile 2013-09-04 10:05:54 -07:00
Andy Rothfusz
58ea690b41 Merge pull request #1784 from proger/patch-1
add new docker api client for erlang, erldocker
2013-09-04 10:01:29 -07:00
Andy Rothfusz
3b92ab3465 Merge pull request #1783 from briehanlombaard/typos
Fixed typos
2013-09-04 10:00:10 -07:00
Andy Rothfusz
76f97a64fa Merge pull request #1785 from thijsterlouw/update_commandline_cli
Fix documentation index for cli
2013-09-04 09:59:30 -07:00
Andy Rothfusz
5974794942 Merge pull request #1782 from tianon/gentoo-docs
Added Gentoo Documentation
2013-09-04 09:58:36 -07:00
Thijs Terlouw
368d0385e1 Fix documentation index for cli 2013-09-04 13:21:44 +02:00
Vladimir Kirillov
58ffd03bf1 add new docker api client for erlang, erldocker 2013-09-04 12:53:38 +03:00
Tianon Gravi
9ce1d02ef6 Add Gentoo documentation 2013-09-04 02:59:18 -06:00
Briehan Lombaard
251d1261b0 Fixed typos 2013-09-04 10:52:53 +02:00
Daniel Mizyrycki
a0a5170991 Merge pull request #1449 from spkane/honor-ENV-VAGRANT_DEFAULT_PROVIDER
Assume that if VAGRANT_DEFAULT_PROVIDER is set we shouldn't install vbox tools
2013-09-03 17:38:03 -07:00
shin-
b3a70d767d Compute dependency graph and upload layers in the right order when pushing 2013-09-04 02:21:40 +02:00
Daniel Mizyrycki
91ee679549 pr #1718: Fetch docker gpg over https 2013-09-03 17:14:49 -07:00
Markus Fix
52fe452497 Changed docker apt repo URI from https to http because the https driver for apt is not installed by default 2013-09-03 17:10:03 -07:00
Daniel Mizyrycki
846524115b testing, issue #1620: Add index functional test on docker-ci 2013-09-03 15:38:06 -07:00
Hector Castro
fac4cedcc1 Add a Docker example for running Riak. 2013-09-03 16:30:40 -04:00
Marco Hennings
ded973219e Move auth header on run cmd 2013-09-03 20:59:48 +02:00
shin-
dd4aab8411 Use base64 encoding 2013-09-03 20:59:48 +02:00
shin-
d04beb7f43 Pass auth config through headers rather than as URL param 2013-09-03 20:59:48 +02:00
Marco Hennings
a2603477dd Merge on current master 2013-09-03 20:59:48 +02:00
Marco Hennings
ad322d7cca Send correct endpoint authentication when an image is pulled during the run cmd.
2013-09-03 20:59:48 +02:00
Marco Hennings
da3bb9a7c6 Move authConfig to a Parameter on postImagePush, too 2013-09-03 20:59:48 +02:00
Daniel Mizyrycki
0e6ee9632c Merge pull request #1603 from dotcloud/773-docker-ci-pr
773 docker-ci github pull request
2013-09-03 11:46:34 -07:00
Marco Hennings
fcee6056dc Login against private registry
To improve the use of docker with a private registry, the login
command is extended with a parameter for the server address.

While implementing I noticed that two problems hindered authentication to a
private registry:

1. the lookup of the authentication credentials did not match during push,
   because the looked-up key was for example localhost:8080 but
   the stored one would have been https://localhost:8080

   Besides, the lookup still needs to work if the https->http fallback
   is used

2. During pull of an image no authentication is sent, which
   means all repositories are expected to be private.

These points are fixed now. The changes are implemented in
a way that is compatible with existing behavior, both in the
API and with the private registry.

Update:

- login does not require the full url any more; you can log in
  to the repository prefix:

  example:
  docker login localhost:8080

Fixed corner cases:

- When login is done during pull and push, the registry endpoint is used and
  not the central index

- When the remote sends a 401 during pull, it now correctly delegates to
  CmdLogin

- After a login is done, pull and push use the newly entered login data,
  not the previous ones. This also seems to be broken in master.

- Auth config is now transferred in a parameter instead of the body when
  /images/create is called.
2013-09-03 20:45:49 +02:00
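
A minimal Go sketch of the key-normalization problem described in the commit above (illustrative names only, not Docker's actual registry auth code; it assumes credentials are keyed by host:port after stripping the scheme, so the https->http fallback and a bare host:port both resolve to the same stored entry):

    package main

    import (
        "fmt"
        "strings"
    )

    // resolveAuthKey normalizes a registry endpoint to the key used for the
    // credentials lookup. Hypothetical helper: it only illustrates why
    // "localhost:8080" and "https://localhost:8080/" should resolve to the
    // same stored login data, including after the https->http fallback.
    func resolveAuthKey(endpoint string) string {
        key := strings.TrimPrefix(endpoint, "https://")
        key = strings.TrimPrefix(key, "http://")
        return strings.TrimSuffix(key, "/")
    }

    func main() {
        // Credentials as they might have been stored by a login against https://localhost:8080/.
        stored := map[string]string{
            resolveAuthKey("https://localhost:8080/"): "secret-token",
        }
        // Push/pull later look the key up by plain host:port or via the http fallback.
        for _, lookup := range []string{"localhost:8080", "http://localhost:8080"} {
            fmt.Println(lookup, "->", stored[resolveAuthKey(lookup)])
        }
    }
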
Michael Crosby
ea813d8593 Update tar pkg revision number 2013-09-03 18:30:32 +00:00
Michael Crosby
d13c2ed24e Merge pull request #1317 from calavera/login_signal
Exit from `docker login` on SIGTERM and SIGINT.
2013-09-03 11:16:05 -07:00
Andy Rothfusz
75df4953f4 Merge pull request #1758 from mattapperson/patch-1
Added NodeJS library
2013-09-03 11:02:34 -07:00
Andy Rothfusz
f703b8b839 Merge pull request #1771 from thijsterlouw/missing_cli_docs
Add 2 missing cli commands to docs (events + insert)
2013-09-03 10:41:05 -07:00
Thijs Terlouw
6380b42edb Add 2 missing cli commands to docs (events + insert) and alphabetically order docker output 2013-09-03 16:35:22 +02:00
David Calavera
ce53e21ea6 Read the stdin line properly.
Load the auth config before it's used.
2013-09-01 16:12:07 -07:00
David Calavera
35fef275b3 Merge branch 'login_signal' of https://github.com/calavera/docker into login_signal
* 'login_signal' of https://github.com/calavera/docker:
  Use flag.StringVar to capture the command line flags.
  Fix syscall name.
  Remove unused imports.
  Simplify term signal handler.
  Add the ISIG syscall back to not kill the client within a shell with ctrl+c.
  Print a new line after getting the password from stdin.
  Exit if there is any error reading from stdin.
  Stop making a raw terminal to ask for registry login credentials.
  Allow to generate signals when termios is in raw mode.
  Use a more idiomatic syntax to capture the exit.
  Exit from `docker login` on SIGTERM and SIGINT.
2013-09-01 16:10:53 -07:00
David Calavera
fa3266efa5 Merge branch 'master' into login_signal
* master: (23 commits)
  Made calling changelog before run return empty. Fixes #1705.
  fix error in docker build when param is not a directory
  Document FROM <image>:<tag> Dockerfile instruction.
  Install Ubuntu raring backported kernel from official archives directly.
  Updated "Use -> The Basics" to use ubuntu:12.10.
  hide version when not available
  added a Dockerfile which installs all deps and builds the docs.
  Unable to find image error should print to stderr
  remove message during tests
  use init function
  add TEST env var during tests and silenced parserun during tests
  Update python_web_app.rst
  Update remaining upstart scripts to wait for lxc-net
  Fixed a minor syntax error.
  Add privileged flag in documentation for container creation
  Fix #1685: Notes on production use. General installation cleanup.
  Fix bash completion, remove have
  added apt-key finger tip and fingerprint in ubuntu installation page
  Improve formatting with 'go fmt' as stated in CONTRIBUTING.md
  Start docker after lxc-net to prevent ip forwarding race
  ...
2013-09-01 15:56:59 -07:00
Brandon Liu
113bb396cd Don't export Graph.walkAll. 2013-08-31 20:44:49 -07:00
Brandon Liu
1fca99ad90 Replace Graph.All with Graph.Map 2013-08-31 20:44:42 -07:00
Matt Apperson
e42d3a1bfa Added NodeJS library 2013-08-31 22:13:15 -04:00
Victor Vieux
3528990c73 Merge pull request #1665 from dotcloud/ksid-contrib-maintainer
Make Kawsar Saiyeed @KSid the maintainer for contributed tools
2013-08-30 17:07:03 -07:00
Andy Rothfusz
8ee4473d92 Merge pull request #1704 from dhrp/ubuntu-install-finger-fingerprint
added apt-key finger tip and fingerprint in ubuntu installation page
2013-08-30 16:15:19 -07:00
Daniel Mizyrycki
212b582362 Merge pull request #1673 from xdissent/1659-upstart-order
Start docker after lxc-net to prevent ip forwarding race
2013-08-30 16:05:40 -07:00
Guillaume J. Charmes
969e48dd5a Merge pull request #1752 from dotcloud/1750-build_error-fix
fix error in docker build when param is not a directory
2013-08-30 16:05:23 -07:00
Guillaume J. Charmes
d9b7ec6458 Merge pull request #1754 from griff/1705_changelog_fails_before_run
- Runtime: Made calling change log before run return empty list.
2013-08-30 16:02:46 -07:00
Brian Olsen
46c9c5c843 Made calling changelog before run return empty. Fixes #1705. 2013-08-30 22:46:07 +02:00
Brian Olsen
7a9c711832 Copies content from image to volume if non-empty. Fixes #1582. 2013-08-30 22:02:05 +02:00
Brian Olsen
6756e786ac Just fixing gofmt issues in other people's code. 2013-08-30 22:02:05 +02:00
Michael Crosby
84431ec03c Merge pull request #1613 from thijsterlouw/proper_resolvconf_parsing
Proper resolv.conf parsing
2013-08-30 12:10:45 -07:00
Victor Vieux
d605e82bad fix error in docker build when param is not a directory 2013-08-30 18:08:29 +00:00
Andy Rothfusz
2ea19238ff Merge pull request #1744 from dsissitka/patch-7
Updated "Use -> The Basics" to use ubuntu:12.10.
2013-08-30 10:44:54 -07:00
Andy Rothfusz
df36f9db05 Merge pull request #1749 from msiebuhr/doc-from-tags
Document FROM <image>:<tag> Dockerfile instruction.
2013-08-30 10:44:36 -07:00
Michael Crosby
74982bda32 Merge pull request #1738 from jonasi/error-to-stderr
Unable to find image error should print to stderr
2013-08-30 10:43:45 -07:00
Michael Crosby
6d769f4b9b Merge pull request #1743 from dotcloud/fix_version
Hide version when not available
2013-08-30 10:39:48 -07:00
Morten Siebuhr
1a8a540209 Document FROM <image>:<tag> Dockerfile instruction. 2013-08-30 15:23:32 +02:00
Shih-Yuan Lee (FourDollars)
6c206f7d78 Install Ubuntu raring backported kernel from official archives directly. 2013-08-30 14:12:13 +08:00
Andy Rothfusz
110d3b7794 Merge pull request #1736 from ramonvanalteren/patch-1
Update python_web_app.rst
2013-08-29 18:36:53 -07:00
dsissitka
c0e95fa68a Updated "Use -> The Basics" to use ubuntu:12.10.
ubuntu:latest doesn't have nc. ubuntu:12.10 does.
2013-08-29 21:05:18 -04:00
Victor Vieux
e3b58d3027 hide version when not available 2013-08-30 00:46:43 +00:00
Andy Rothfusz
9b029a0854 Merge pull request #1739 from dotcloud/add-docs-dockerfile
Added a Dockerfile which installs all deps and builds the Docs
2013-08-29 17:42:17 -07:00
Victor Vieux
49d35cc0ae Merge pull request #1690 from jbbarth/code-formatting
go fmt
2013-08-29 17:13:37 -07:00
Nick Stinemates
c6702bebe1 added a Dockerfile which installs all deps and builds the docs. 2013-08-30 00:13:32 +00:00
Michael Crosby
690c3839fd Merge pull request #1710 from thomasf/master
Fix bash completion.
2013-08-29 17:08:14 -07:00
Victor Vieux
050cf70136 Merge pull request #1721 from andrewmunsell/patch-2
Add privileged flag in documentation for container creation
2013-08-29 17:05:40 -07:00
Guillaume J. Charmes
df8f95ac74 Merge pull request #1591 from dotcloud/1557_fix_docker_run_useage
add TEST env var during tests and silenced parserun during tests
2013-08-29 16:51:34 -07:00
Andy Rothfusz
0f91418b26 Merge pull request #1729 from dsissitka/patch-6
Fixed a minor syntax error.
2013-08-29 16:27:22 -07:00
Isao Jonas
4ff649ce85 Unable to find image error should print to stderr 2013-08-29 18:25:11 -05:00
Andy Rothfusz
c4394decf8 Merge pull request #1717 from metalivedev/1685-updateinstallation
Fix #1685: Notes on production use. General installation cleanup.
2013-08-29 16:06:00 -07:00
Victor Vieux
f159f4710b remove message during tests 2013-08-29 22:59:34 +00:00
Victor Vieux
740a97f1a8 use init function 2013-08-29 22:55:29 +00:00
Victor Vieux
eee6d3dae9 add TEST env var during tests and silenced parserun during tests 2013-08-29 22:55:29 +00:00
Ramon van Alteren
559724ac35 Update python_web_app.rst
Fixed typo in the WEB_PORT command, missing sudo in front of docker command
2013-08-30 00:37:47 +02:00
Greg Thornton
3f141e1fd3 Update remaining upstart scripts to wait for lxc-net 2013-08-29 14:06:24 -05:00
David Calavera
9f8e5a93b4 Use flag.StringVar to capture the command line flags. 2013-08-29 11:46:42 -07:00
David Calavera
78d995bbd6 Fix syscall name. 2013-08-29 11:46:42 -07:00
David Calavera
e7ee2f443a Remove unused imports. 2013-08-29 11:46:42 -07:00
David Calavera
b8a8962833 Simplify term signal handler. 2013-08-29 11:46:42 -07:00
David Calavera
b54ba5095b Add the ISIG syscall back to not kill the client within a shell with ctrl+c. 2013-08-29 11:46:41 -07:00
David Calavera
f18889bf67 Print a new line after getting the password from stdin. 2013-08-29 11:46:41 -07:00
David Calavera
6e4a818ee6 Exit if there is any error reading from stdin. 2013-08-29 11:46:41 -07:00
David Calavera
2357fecc92 Stop making a raw terminal to ask for registry login credentials.
It only disables echo when asking for the password and lets the terminal handle everything else.
It fixes #1392, since blank spaces are no longer discarded as they were before.
It also cleans up the login code a little bit to improve readability.
2013-08-29 11:46:41 -07:00
David Calavera
23dc52f528 Allow to generate signals when termios is in raw mode. 2013-08-29 11:45:04 -07:00
David Calavera
c3154fdf4d Use a more idiomatic syntax to capture the exit. 2013-08-29 11:45:03 -07:00
David Calavera
f1d0625cf8 Exit from docker login on SIGTERM and SIGINT.
Fixes #1299.
2013-08-29 11:45:03 -07:00
Andy Rothfusz
230dc07d37 fix logo path 2013-08-29 11:24:59 -07:00
Andy Rothfusz
c46d9933ec Merge pull request #1727 from dsissitka/patch-5
Fixed a minor syntax error.
2013-08-29 11:05:22 -07:00
Andy Rothfusz
bcb081a269 Merge pull request #1708 from nexxy/patch-1
Update docker_remote_api_v1.4.rst
2013-08-29 10:22:05 -07:00
dsissitka
57892365ef Fixed a minor syntax error. 2013-08-29 12:35:37 -04:00
dsissitka
075253238d Fixed a minor syntax error. 2013-08-29 12:27:35 -04:00
David Sissitka
f6b56e996d Updated "POST /containers/create" to return the right Content-Type. 2013-08-29 09:14:14 -07:00
Emily Rose
18d572abb4 Added Emily Rose to AUTHORS. 2013-08-28 22:28:31 -07:00
Andrew Munsell
37a236947e Add privileged flag in documentation for container creation 2013-08-28 22:18:47 -07:00
Emily Rose
c1f6914e43 Correct number of asterisks. 2013-08-28 20:01:18 -07:00
Andy Rothfusz
b1eae313ad Fix #1685: Notes on production use. General installation cleanup. 2013-08-28 17:26:10 -07:00
Elias Probst
a359e52f29 Added FIXME for iproute → netlink as advised in issue #925. 2013-08-29 01:50:37 +02:00
Andy Rothfusz
bf98fff925 Merge pull request #1674 from jbbarth/docs
Add mercurial to the list of packages needed in a dev environment
2013-08-28 11:10:45 -07:00
Michael Crosby
e6c3da42e7 Merge pull request #1696 from daniel-garcia/1448-add_newline_every_logmessage
Write newline after every log message
2013-08-28 10:00:38 -07:00
Victor Vieux
d5220bc081 Merge pull request #1711 from dotcloud/1709-start_doc-fix
fix start documentation
2013-08-28 09:51:20 -07:00
Victor Vieux
66ef6c0f6e fix start doc 2013-08-28 16:40:00 +00:00
Thomas Frössman
20772f90ff Fix bash completion, remove have
Should solve #1639.
2013-08-28 18:38:06 +02:00
Kyle Conroy
9a60f36ccc Merge branch 'master' of https://github.com/dotcloud/docker 2013-08-28 06:44:21 -07:00
Emily Rose
69a93b36d5 Update docker_remote_api_v1.4.rst
Fixed a (very serious) typo.
2013-08-28 04:12:37 -07:00
Thatcher Peskens
466a88b9a8 added apt-key finger tip and fingerprint in ubuntu installation page 2013-08-27 19:59:36 -07:00
Andy Rothfusz
f0620da74a Merge pull request #1703 from metalivedev/621-usebaseimages
Fix #621: add new use/base images section. Include examples.
2013-08-27 18:53:40 -07:00
Andy Rothfusz
dffb72ca89 Fix #621: add new use/base images section. Include examples. 2013-08-27 18:49:43 -07:00
Andy Rothfusz
128893062b Merge pull request #1698 from metalivedev/1684-Cleanup
Fix #1684: Cleanup introduction, working with repos, "use" index.
2013-08-27 18:15:06 -07:00
shin-
98018df078 More descriptive, easier to process container portmappings information in the API 2013-08-28 00:20:35 +02:00
Andy Rothfusz
4a3b039c2a Merge pull request #1694 from dotcloud/troubleshoot_windows_installation
Troubleshooting windows installation section
2013-08-27 14:48:15 -07:00
Andy Rothfusz
c627ff6e20 Fix #1684: Old Welcome is now Introduction. Working with Repos now follows Builder.
Clean up some doc build errors. Removed old Manifesto. Tweaked layout javascript
to allow direct link from first and last index elements.
2013-08-27 14:29:49 -07:00
daniel-garcia
d593f57952 write newline after every log message. 2013-08-27 14:09:26 -05:00
Michael Crosby
c2b273df5d Merge pull request #1611 from pysqz/master
Make sure 'Ghost' container is available with allocated IP
2013-08-27 11:30:00 -07:00
pysqz
cc18a9c650 Also reuse forwarding ports 2013-08-28 03:14:21 +08:00
Daniel Mizyrycki
f861c493bf Troubleshooting windows installation section 2013-08-27 11:19:32 -07:00
jbbarth
d80b50d4b4 Improve formatting with 'go fmt' as stated in CONTRIBUTING.md
As 'go fmt' doesn't support verifying files in multiple directories,
it's probably a good idea to run it on all '*.go' files from time to
time with something like this:

  find . -name "*.go" | xargs dirname | sort -u | xargs -n 1 echo go fmt
2013-08-27 10:05:25 +02:00
Andy Rothfusz
5b84252c73 Merge pull request #1622 from TylerBrock/patch-2
Some fixes for MongoDB example
2013-08-26 17:28:38 -07:00
Tyler Brock
f352ec945f ENTRYPOINT fixups 2013-08-26 19:17:26 -04:00
Daniel Mizyrycki
714e424d74 Merge pull request #1656 from KSid/vagrantfile-0.6
Update Vagrantfile to use docker apt repository
2013-08-26 14:59:40 -07:00
Andy Rothfusz
f7fa6c2b0b Merge pull request #1671 from kim0/patch-3
Don't force a non-existent npm version, use latest from epel
2013-08-26 14:33:07 -07:00
Michael Crosby
b866254a19 Merge pull request #1636 from unclejack/1594-return_non_zero_when_encountering_error
1594 - return non-zero exit code if at least one container has failed to start
2013-08-26 13:20:31 -07:00
Michael Crosby
108b6c644b Merge pull request #1678 from dotcloud/remove-incorrect-build-instructions
Remove incorrect build instructions
2013-08-26 12:56:08 -07:00
Guillaume J. Charmes
9ce7b1c009 Merge pull request #1663 from KSid/1661-cp-parsing-error
- Runtime: Display error if docker cp resource not specified
2013-08-26 12:51:43 -07:00
Solomon Hykes
076434ef10 Remove incorrect build instructions 2013-08-26 12:06:56 -07:00
Andy Rothfusz
f2807042c2 Merge pull request #1670 from tobstarr/patch-1
Change fetching of gpg key from http to https
2013-08-26 12:00:54 -07:00
Guillaume J. Charmes
d4c7340131 Merge pull request #1646 from mohitsoni/master
- Runtime: Fix uname -r kernel version parsing
2013-08-26 11:49:00 -07:00
Greg Thornton
331f983593 Start docker after lxc-net to prevent ip forwarding race 2013-08-26 09:43:49 -05:00
jbbarth
1adbadde80 Add mercurial to the list of packages needed in a dev environment
The dev environment setup procedure fails if mercurial is not installed
on the machine; it cannot download stuff from
http://code.google.com/p/go.net, for instance.
2013-08-26 16:39:38 +02:00
kim0
843ef645d2 Don't force a non-existent npm version, use latest from epel
Not the smartest thing to do
2013-08-26 12:54:15 +03:00
Tobias Schwab
4440bde0eb Change fetching of gpg key from http to https
Or was there any reason to fetch using http?
2013-08-26 11:45:15 +02:00
Solomon Hykes
f83d556bc0 Make Kawsar Saiyeed @KSid the maintainer for contributed tools 2013-08-24 17:16:52 -07:00
Kawsar Saiyeed
d8a18ea3ae Display error if resource not specified
Display an error message if a container resource is not supplied to
'docker cp'.

Fixes error in #1661
2013-08-25 00:28:36 +01:00
Nick Stinemates
c864647721 Merge pull request #1658 from jamescarr/patch-1
Update for new mailing list
2013-08-24 10:28:26 -07:00
Nick Stinemates
2891e828bd Merge pull request #1657 from KSid/bash-completion-0.6
Add latest docker options to docker.bash
2013-08-24 10:27:25 -07:00
James Carr
465d5313c5 Update for new mailing list 2013-08-24 10:26:54 -07:00
Kawsar Saiyeed
02d1d238cd Add latest docker options to docker.bash
Adds bash completion options for the latest docker release (0.6.1)
2013-08-24 18:04:40 +01:00
Kawsar Saiyeed
1f85ed6825 Update Vagrantfile to use docker apt repository 2013-08-24 16:49:55 +01:00
Mohit Soni
ef82690144 Merge remote-tracking branch 'upstream/master' 2013-08-24 00:33:01 -07:00
Mohit Soni
f4432d50c3 Refactored code and added unit tests
- Extracted ParseRelease method from GetKernelVersion to make code
  more testable
- Added tests for ParseRelease method
2013-08-24 00:24:40 -07:00
Andy Rothfusz
820a3b962a Merge pull request #1634 from songgao/patch-1
Added utils/utils.go : CopyEscapable escape sequence to tutorial.
2013-08-23 18:52:17 -07:00
Michael Crosby
99a8306898 Merge release 0.6.1 back to master
Conflicts:
	VERSION
2013-08-23 22:59:29 +00:00
Michael Crosby
bd16107e97 Merge pull request #1648 from dotcloud/bump_0.6.1
Bump to 0.6.1
2013-08-23 15:49:28 -07:00
Michael Crosby
5105263dac Bump to v0.6.1 2013-08-23 22:23:30 +00:00
Michael Crosby
7b5c579b77 Use correct upstart script with new build tool 2013-08-23 22:20:36 +00:00
Michael Crosby
dcfb993ac7 Merge pull request #1642 from dotcloud/upstart-exec
Use correct upstart script with new build tool
2013-08-23 15:18:58 -07:00
Andy Rothfusz
dd9ee039a6 Merge pull request #1633 from abhirajbutala/patch-1
Update docker_remote_api.rst
2013-08-23 15:17:21 -07:00
Kawsar Saiyeed
15d658d36b Removed duplicate mercurial install command 2013-08-23 22:16:49 +00:00
unclejack
59281d6f80 use libffi-dev, don't build it from sources 2013-08-23 22:16:23 +00:00
shin-
29be20f987 Fixed: ImagePull in runtime test 2013-08-23 22:16:11 +00:00
shin-
5a8c32dc8e Use additional decorator in RequestFactory to pass meta headers to registry 2013-08-23 22:16:03 +00:00
Michael Crosby
3d9b4379a5 Use correct upstart script with new build tool 2013-08-23 22:10:18 +00:00
Mohit Soni
ab882da03b Fixes #1643
Changed the split statement from SplitN to Split. Doing so takes
care of cases when a minor version is followed by a suffix that
starts with '.'.
2013-08-23 14:37:37 -07:00
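
A hedged Go sketch of the parsing change described in the commit above (illustrative only, not the actual ParseRelease/GetKernelVersion code): with strings.Split, a release whose minor version is followed by a '.'-prefixed suffix still yields clean numeric fields, whereas SplitN(release, ".", 3) would leave the suffix glued to the last field.

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // parseRelease is an illustrative sketch of parsing `uname -r` output,
    // not Docker's actual ParseRelease code.
    func parseRelease(release string) (kernel, major, minor int, err error) {
        // strings.Split copes with releases such as "3.8.0.4-generic", where the
        // minor version is followed by a suffix that itself starts with '.';
        // strings.SplitN(release, ".", 3) would leave ".4-generic" glued to the
        // last field and break the integer conversion below.
        parts := strings.Split(release, ".")
        if len(parts) < 3 {
            return 0, 0, 0, fmt.Errorf("unexpected kernel release %q", release)
        }
        if kernel, err = strconv.Atoi(parts[0]); err != nil {
            return 0, 0, 0, err
        }
        if major, err = strconv.Atoi(parts[1]); err != nil {
            return 0, 0, 0, err
        }
        // Drop any "-flavour" part (e.g. "-generic") before converting the minor number.
        if minor, err = strconv.Atoi(strings.SplitN(parts[2], "-", 2)[0]); err != nil {
            return 0, 0, 0, err
        }
        return kernel, major, minor, nil
    }

    func main() {
        for _, release := range []string{"3.8.0-19-generic", "3.8.0.4-generic"} {
            kernel, major, minor, err := parseRelease(release)
            fmt.Println(release, "->", kernel, major, minor, err)
        }
    }
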
Thatcher
e2594d162e Merge pull request #1641 from dotcloud/docs-0.6-gpg-key
Docs 0.6 gpg key
2013-08-23 13:14:21 -07:00
Thatcher Peskens
ebb9b5e85b Added adding the gpg key 2013-08-23 12:56:03 -07:00
Michael Crosby
98f9d3e81c Merge pull request #1562 from AnSavvides/contributing-link
Adding direct reference to contribution guidelines
2013-08-23 12:18:24 -07:00
Thatcher Peskens
091fb89294 Added line on top of ubuntulinux announcing change in 0.6
Added lines on top of vagrant installs
2013-08-23 11:45:52 -07:00
Michael Crosby
db91cac44f Merge pull request #1635 from unclejack/fix_main_dockerfile
Fix libffi build failure for the main Dockerfile
2013-08-23 11:31:19 -07:00
Guillaume J. Charmes
8dd3607bd1 Merge pull request #1563 from dotcloud/1073_add_loading_message
* Runtime: Add loading containers message in no debug
2013-08-23 11:20:22 -07:00
Solomon Hykes
3ebda17f0f Merge pull request #1637 from dhrp/change-docs-ubuntu-sources
Updated the docs to reflect we no longer use Launchpad and host our own
2013-08-23 11:05:59 -07:00
Michael Crosby
9149c45745 Merge pull request #1627 from shin-/registry_metaheaders
Pass "meta" headers in API calls to the registry
2013-08-23 11:05:41 -07:00
Michael Crosby
969ab9c450 Merged 0.6.0 release back to master 2013-08-23 17:50:24 +00:00
Michael Crosby
588f8e1cec Merge pull request #1628 from dotcloud/bump_0.6.0
Bump to 0.6.0
2013-08-23 10:45:14 -07:00
Thatcher Peskens
f753dfe6b3 Updated the docs to reflect we no longer use Launchpad and host our own repository. 2013-08-23 10:41:53 -07:00
unclejack
d1ad0e278d return error if at least one container fails to start
This makes docker start exit with exit code 1 if at least one container
didn't start. This also prints an error at the end.
2013-08-23 20:14:06 +03:00
unclejack
72cfa3de35 use libffi-dev, don't build it from sources 2013-08-23 19:13:30 +03:00
Abhiraj Butala
3396a183c6 Update docker_remote_api.rst 2013-08-23 01:28:58 -07:00
shin-
1c6af604e8 Fixed: ImagePull in runtime test 2013-08-23 04:32:09 +02:00
Guillaume J. Charmes
ccc2276469 Merge pull request #1630 from KSid/docker-build-dockerfile
Removed duplicate mercurial install command from Dockerfile
2013-08-22 17:31:35 -07:00
Guillaume J. Charmes
78a71b1273 Merge pull request #1587 from dotcloud/1559_improve_version
Improve version
2013-08-22 17:28:18 -07:00
Kawsar Saiyeed
2191419f4c Removed duplicate mercurial install command 2013-08-22 23:12:28 +01:00
Michael Crosby
f4a4f1ca87 Bump to 0.6.0 2013-08-22 21:09:50 +00:00
Guillaume J. Charmes
f925edd12d Merge pull request #1525 from griff/1503-fix
Don't read from stdout when only attached to stdin
2013-08-22 13:43:05 -07:00
Michael Crosby
12715c8ddc Merge pull request #1609 from jpetazzo/release-docker-with-docker
Release docker with docker
2013-08-22 13:13:06 -07:00
shin-
093b85b72f Use additional decorator in RequestFactory to pass meta headers to registry 2013-08-22 21:15:31 +02:00
Michael Crosby
326dadd224 Merge pull request #1565 from dotcloud/only_load_authconfig_when_needed
Load authConfig only when needed and fix useless WARNING
2013-08-22 11:10:16 -07:00
Michael Crosby
a3510c99f1 Merge pull request #1560 from dotcloud/439-allow-lxc-args
Add lxc-conf flag to allow custom lxc options
2013-08-22 09:34:27 -07:00
Michael Crosby
262d57e387 Merge pull request #1623 from mhennings/1592-fix-race-conditions-in-parallel-pull
Fix race conditions in parallel pull
2013-08-22 09:30:21 -07:00
Michael Crosby
551092f9c0 Add lxc-conf flag to allow custom lxc options 2013-08-22 16:05:21 +00:00
Marco Hennings
3f802f4a13 Fix race conditions in parallel pull
During parallel pull of a repository it can happen that the same layer
is pulled more than once.

To fix this I have extended the locking code to
- avoid multiple pulls of the same image
- avoid multiple pulls of the same layer

If an error occurs, the other layers are awaited before returning, as leaving
the scope before the goroutines finish sometimes crashes the server
if the download status is updated while the HTTP stream is already closed.

Besides this, I have extended the status display.
2013-08-22 13:23:43 +02:00
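
A minimal Go sketch of the locking idea described in the commit above (illustrative only, not Docker's actual pull code): the first goroutine to ask for a layer downloads it, later goroutines for the same layer wait for that download, and the caller waits for all goroutines before returning.

    package main

    import (
        "fmt"
        "sync"
        "time"
    )

    // downloadPool is a hypothetical stand-in for the extended locking code:
    // the first caller to ask for a layer performs the download, every later
    // caller for the same layer just waits for it to finish.
    type downloadPool struct {
        mu       sync.Mutex
        inFlight map[string]chan struct{}
    }

    func newDownloadPool() *downloadPool {
        return &downloadPool{inFlight: make(map[string]chan struct{})}
    }

    // pull reports whether this caller actually performed the download.
    func (p *downloadPool) pull(layerID string, fetch func()) bool {
        p.mu.Lock()
        if done, ok := p.inFlight[layerID]; ok {
            p.mu.Unlock()
            <-done // the layer is already being pulled; wait instead of pulling it again
            return false
        }
        done := make(chan struct{})
        p.inFlight[layerID] = done
        p.mu.Unlock()

        fetch()
        close(done) // wake up every waiter for this layer
        return true
    }

    func main() {
        pool := newDownloadPool()
        var wg sync.WaitGroup
        for i := 0; i < 4; i++ {
            wg.Add(1)
            go func(n int) {
                defer wg.Done()
                did := pool.pull("layer-abc123", func() { time.Sleep(10 * time.Millisecond) })
                fmt.Printf("goroutine %d downloaded=%v\n", n, did)
            }(i)
        }
        // As in the commit: wait for the other goroutines before returning,
        // even if one of them hit an error.
        wg.Wait()
    }
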
Tyler Brock
d6f53049c4 Some fixes for MongoDB example 2013-08-21 23:48:32 -04:00
Daniel Mizyrycki
5e3386473a testing, issue #773: Add infrastructure docker-ci PR documentation 2013-08-21 17:39:45 -07:00
Andy Rothfusz
0b9c8e2860 Merge pull request #1596 from metalivedev/1149-easyfixes
Fix #1330 and #1149. Improve CMD, ENTRYPOINT, and attach docs.
2013-08-21 14:06:41 -07:00
Andy Rothfusz
42fe550c9e Merge pull request #1614 from denibertovic/docs
Small fix to docs regarding adding docker groups
2013-08-21 12:26:56 -07:00
Andy Rothfusz
f5bd137216 Add mongodb example 2013-08-21 12:09:40 -07:00
Andy Rothfusz
348696f3fe Merge pull request #1607 from TylerBrock/mongodb
Add MongoDB image example
2013-08-21 12:07:50 -07:00
Michael Crosby
6da071985f Merge pull request #1513 from dotcloud/add_user_dockerfile
Add USER instruction to Dockerfile
2013-08-21 09:19:31 -07:00
Michael Crosby
56e02dd0c7 Merge pull request #1588 from dotcloud/1561_fix_warning_in_tests
assume ip_forwarding = 1 by default
2013-08-21 09:15:23 -07:00
Michael Crosby
e0a7013836 Merge pull request #1576 from MatthewMueller/patch-1
updated default -H docs
2013-08-21 09:09:43 -07:00
Deni Bertovic
467dbb75f1 small fix to docs regarding adding docker groups 2013-08-21 17:28:13 +02:00
pysqz
2f6ce27fde Make sure 'Ghost' container is available with allocated IP 2013-08-21 22:37:58 +08:00
Thijs Terlouw
c349b4d56c Keep linebreaks and generalize code 2013-08-21 15:48:39 +02:00
Thijs Terlouw
62e84785b6 proper resolv.conf parsing 2013-08-21 15:23:12 +02:00
Matt Mueller
215094903a update help 2013-08-21 02:08:32 -07:00
Jérôme Petazzoni
8a7c0495e0 Remove -x flag — we do not want to be *that* verbose. 2013-08-20 20:50:42 -07:00
Jérôme Petazzoni
885afebe07 Bump up VERSION file to 0.5.3-dev 2013-08-20 19:36:42 -07:00
Jérôme Petazzoni
e06372d6f4 Update packaging/README to point to hack/release 2013-08-20 19:36:06 -07:00
Jérôme Petazzoni
b5a48eaed3 Moved release scripts to hack/release and updated instructions. 2013-08-20 19:36:06 -07:00
Jérôme Petazzoni
a8059059c6 +CHANGES is now -dirty (works better in URLs), and we have postinstall and prerm jobs. 2013-08-20 19:36:06 -07:00
Jérôme Petazzoni
c8c69a1499 Bump to 0.5.3 (VERSION file) 2013-08-20 19:36:06 -07:00
Jérôme Petazzoni
0469e47674 Release script also takes care of index file (if the S3 bucket is WS-enabled) 2013-08-20 19:36:06 -07:00
Jérôme Petazzoni
5b630d436d If there are changes, add the timestamp to the package version. 2013-08-20 19:35:31 -07:00
Jérôme Petazzoni
9c06420b18 Implement apt-secure repository signing. 2013-08-20 19:35:31 -07:00
Jérôme Petazzoni
87872006ce Repository should also have i386 index, since Ubuntu is multi-arch by default 2013-08-20 19:35:31 -07:00
Jérôme Petazzoni
abfa7a204d Update to go 1.1.2. 2013-08-20 19:35:30 -07:00
Jérôme Petazzoni
5b0eaef602 Run reprepro from release, incrementally (it needs S3 credentials). Add virtual package. 2013-08-20 19:35:30 -07:00
Jérôme Petazzoni
9694fb85d7 Install python-magic (it helps s3cmd) and a convenience /src symlink 2013-08-20 19:35:30 -07:00
Jérôme Petazzoni
bfee2c726e Polish instructions a little bit. 2013-08-20 19:35:30 -07:00
Jérôme Petazzoni
ab4fb9bbfa Add a check for S3 bucket access. 2013-08-20 19:35:30 -07:00
Jérôme Petazzoni
fbd5b20c38 Running the build image will now execute release.sh automatically. 2013-08-20 19:35:30 -07:00
Jérôme Petazzoni
ff30eb96b6 Protect the release.sh script against accidental use. Infer VERSION automatically. 2013-08-20 19:35:30 -07:00
Jérôme Petazzoni
749a7d0e4f Add a check to make sure that make.sh only runs within a container. 2013-08-20 19:35:30 -07:00
Jérôme Petazzoni
d9f769930b Cosmetic changes: rewrapping, `` → $()… before starting the real work 2013-08-20 19:35:30 -07:00
Jérôme Petazzoni
d750060f0c add "|| true" otherwise "set -e" kills us if the repo is clean 2013-08-20 19:35:30 -07:00
Guillaume J. Charmes
bdbac9f7a1 Upgrade Dockerfile with new dependency 2013-08-20 19:35:30 -07:00
Solomon Hykes
13201775de Update release checklist. Still needs work. 2013-08-20 19:34:10 -07:00
Solomon Hykes
ccc3969536 release.sh: publish a full release of docker to a S3 bucket, including static binary and a full APT repository with install instructions 2013-08-20 19:34:10 -07:00
Solomon Hykes
89ee524229 Good-bye, ugly mega-Makefile. Docker can now be built with docker, with the help of a very simple shell script. 2013-08-20 19:34:10 -07:00
Solomon Hykes
9fce6f662a docker -v: show version and build information without making remote connections 2013-08-20 19:32:37 -07:00
Solomon Hykes
9087ef9a77 Move VERSION to a dedicated file to facilitate automated builds and releases 2013-08-20 19:32:37 -07:00
Solomon Hykes
aa2ab5143b Deprecate dockerbuilder in favor of a standard Dockerfile 2013-08-20 19:31:30 -07:00
Tyler Brock
2c147dd721 add mongodb image example 2013-08-20 18:08:15 -04:00
Daniel Mizyrycki
ee64e099e0 testing, issue #773: Automatically test pull requests on docker-ci 2013-08-20 12:32:32 -07:00
Daniel Mizyrycki
30726c3785 Improve docker-ci deployment and tests 2013-08-20 12:18:29 -07:00
Victor Vieux
41973d41e9 fix typo 2013-08-20 11:52:37 +00:00
Victor Vieux
f69c465231 rebase docs 2013-08-20 11:42:53 +00:00
Andy Rothfusz
75f4fd978d Fix #1330 and #1149. Improve CMD, ENTRYPOINT, and attach docs. 2013-08-19 19:13:26 -07:00
Kyle Conroy
e9a1246527 Enable mobile view for website.
The documentation currently uses responsive Bootstrap, but didn't
include the correct meta tag for proper display on mobile devices.
2013-08-19 11:54:51 -07:00
Andy Rothfusz
d627ff9697 Merge pull request #1590 from elmendalerenda/patch-3
fixed postgresql conf setting
2013-08-19 10:27:43 -07:00
Michael Crosby
04c16f347b Merge pull request #1396 from calavera/985-ordered-api-images
Sort APIImages by most recent creation date.
2013-08-19 09:41:39 -07:00
Michael Crosby
9c829cb5b4 Merge pull request #1581 from mhennings/workdirsupport-buildfile
Add workdir support for the Buildfile
2013-08-19 09:18:22 -07:00
elmendalerenda
83acd37161 fixed postgresql conf setting
small typo, changed listen_address to listen_addresses
2013-08-19 14:20:25 +01:00
Victor Vieux
b21f898620 assume ip_forwarding = 1 by default 2013-08-19 12:34:30 +00:00
Victor Vieux
646afab28d improve version 2013-08-19 12:07:49 +00:00
Victor Vieux
f6653c3fa5 Merge pull request #1553 from dotcloud/1540_fix_error_message
fix can't connect message with socket
2013-08-19 05:06:21 -07:00
Victor Vieux
18962d0ff3 load authConfig only when needed and fix useless WARNING 2013-08-19 11:42:38 +00:00
Song Gao
b22c973110 Added escape sequence to tutorial 2013-08-18 21:44:46 -05:00
Marco Hennings
319988336c Add workdir support for the Buildfile
For consistency the Buildfile should have the option to
set the working directory.

Of course that is one option more to the buildfile,
so please tell me if we really want this to happen.
2013-08-18 20:30:19 +02:00
Michael Crosby
67c9ce6dd1 Merge pull request #1459 from mhennings/set-working-directory
Add an option to set the working directory
2013-08-18 11:23:34 -07:00
Marco Hennings
687d27ab57 Add an option to set the working directory.
This makes it possible to simply wrap a command inside a container. This makes
it easier to use a container as a unified build environment.

Examples:

~/workspace/docker
$ docker  run  -v `pwd`:`pwd` -w `pwd` -i -t  ubuntu ls
AUTHORS		 Makefile	archive.go	   changes.go	      docker
[...]


docker  run  -v `pwd`:`pwd` -w `pwd` -i -t  ubuntu pwd
/home/marco/workspace/docker
2013-08-18 19:34:01 +02:00
David Calavera
6aff117164 Use flag.StringVar to capture the command line flags. 2013-08-17 22:28:05 -07:00
David Calavera
e69f714219 Fix syscall name. 2013-08-17 22:22:49 -07:00
David Calavera
276d2bbf1d Remove unused imports. 2013-08-17 22:22:11 -07:00
David Calavera
e6affb1b1a Sort images by tag name when the creation date is the same.
This establishes a strict alphabetical order for tags with the same creation date.
2013-08-17 22:11:34 -07:00
Matt Mueller
f409c11916 add to cli 2013-08-17 22:10:06 -07:00
Michael Crosby
3885ee00c5 Merge pull request #1575 from evanphx/tags
Show tag used when image is missing
2013-08-17 22:05:58 -07:00
Matthew Mueller
5325703c27 updated default 2013-08-17 21:54:10 -07:00
David Calavera
45543d012e Simplify term signal handler. 2013-08-17 21:29:37 -07:00
David Calavera
8a18999d23 Add the ISIG syscall back to not kill the client within a shell with ctrl+c. 2013-08-17 21:08:00 -07:00
Evan Phoenix
07a887032a Show tag used when image is missing 2013-08-17 20:03:54 -07:00
Andy Rothfusz
1843a71911 Merge pull request #1550 from dotcloud/update-dependencies-in-readme
Update readme with dependencies for building
2013-08-16 17:46:25 -07:00
Guillaume J. Charmes
9ce577782c Merge pull request #1572 from dotcloud/1431-disable-TestContainerTop
Tests: testing, issue #1431: Temporarily disable TestContainerTop from the testsuite
2013-08-16 14:49:02 -07:00
Daniel Mizyrycki
422d4afdd5 testing, issue #1431: Temporarily disable TestContainerTop from the testsuite 2013-08-16 14:47:39 -07:00
Andy Rothfusz
09a08e0a9f Merge pull request #1569 from TylerBrock/patch-1
Add instructions for creating and using the docker group
2013-08-16 14:30:00 -07:00
Michael Crosby
5142e83d93 Merge pull request #1473 from shin-/978-opaque-v2
Reworking opaque requests in registry module
2013-08-16 12:24:51 -07:00
shin-
0418702cfc registry: removing opaqueRequest 2013-08-16 19:33:59 +02:00
Tyler Brock
674e5c8503 remove extraneous step and stupid whitespace 2013-08-16 13:30:54 -04:00
Tyler Brock
2ec141da54 Add instructions for creating and using the docker group 2013-08-16 13:19:59 -04:00
Victor Vieux
20b1e19641 add loading message 2013-08-16 13:43:09 +00:00
Andreas Savvides
f11fb706f6 Adding direct reference to contribution guidelines 2013-08-16 12:07:37 +01:00
Michael Crosby
bca19a22c5 Update readme with dependencies for building
Closes #915
2013-08-15 17:26:07 +00:00
Victor Vieux
62b45f0827 fix can't connect message
with socket
2013-08-15 13:48:08 +00:00
Victor Vieux
92a2b635a3 Merge pull request #1551 from dotcloud/hotfix_parallel_pull_display
hot fix display in parallel pull and gofmt
2013-08-15 04:43:26 -07:00
Victor Vieux
d7979ef2d0 hot fix display in parallel pull and go fmt 2013-08-15 11:42:40 +00:00
Michael Crosby
c5d8844d80 Merge pull request #1495 from bdon/master
Fix Graph ByParent() to generate list of child images per parent image.
2013-08-14 20:42:34 -07:00
Brian Olsen
c7cda86e84 Don't read from stdout in hijack unless attached. Fixes #1503 2013-08-15 02:54:06 +02:00
Guillaume J. Charmes
4caa604793 Merge pull request #1549 from dotcloud/fix-logevent-tests
Add imagename to LogEvent tests
2013-08-14 17:02:53 -07:00
Michael Crosby
7f9ba14b18 Add imagename to LogEvent tests 2013-08-14 23:43:43 +00:00
Andy Rothfusz
72660a1a2f Merge pull request #1537 from metalivedev/1517-sudodocker
Fix #1517, #1521 by adding sudo to examples and installation.
2013-08-14 16:34:00 -07:00
Andy Rothfusz
d4eab77f0c Fix #1517, #1521 by adding sudo to examples and installation. 2013-08-14 16:21:36 -07:00
Michael Crosby
9708597b0b Merge pull request #1538 from KSid/bash-completion-limit-containers
Bash Completion: Limit commands to containers of a relevant state
2013-08-14 16:16:25 -07:00
Michael Crosby
15bc2240ac Merge pull request #1505 from dotcloud/improve_events
Add image name in /events
2013-08-14 15:40:36 -07:00
Guillaume J. Charmes
631c449183 Merge pull request #1496 from xdissent/1351-volumes-from-before-volumes
* Runtime: Apply volumes-from before creating volumes
2013-08-14 15:10:41 -07:00
Michael Crosby
84a0274885 Merge pull request #1249 from unclejack/507-add_sigterm_sigint_handling_to_docker_run
#507 - make docker run handle SIGINT/SIGTERM
2013-08-14 15:00:19 -07:00
Guillaume J. Charmes
5ad8840024 Merge pull request #1541 from dotcloud/1528-fix_panic
prevent crash when .dockercfg not readable
2013-08-14 14:51:01 -07:00
Guillaume J. Charmes
8dfc47307d Merge pull request #1543 from dotcloud/fix_rmi_parse_repo_tag
add missing ParseRepositoryTag
2013-08-14 14:42:41 -07:00
shin-
aae04def7b Merge branch 'master' of https://github.com/dotcloud/docker 2013-08-14 23:32:56 +02:00
Daniel Mizyrycki
f51eb0e4b3 Merge pull request #1546 from dotcloud/1542-coverage-dependencies
testing, issue #1542: Add docker dependencies coverage testing into docker-ci
2013-08-14 14:32:15 -07:00
shin-
8b6b187a8d Merge branch 'master' of https://github.com/dotcloud/docker 2013-08-14 23:31:58 +02:00
shin-
8b26e4ea3c brew: docker-py requirement points to git repository 2013-08-14 23:31:18 +02:00
Daniel Mizyrycki
1d8562b290 testing, issue #1542: Add docker dependencies coverage testing into docker-ci 2013-08-14 14:27:46 -07:00
Michael Crosby
9662f9e56a Merge pull request #1478 from jpetazzo/929-insecure-flag
add -privileged flag and relevant tests, docs, and examples
2013-08-14 13:55:18 -07:00
Michael Crosby
25d71fb01b Merge pull request #1509 from dotcloud/1359-tar-pkg-ref
Add import for dotcloud/tar to replace std tar pkg
2013-08-14 11:26:49 -07:00
Joffrey F
12ffb522a6 Merge pull request #1519 from shin-/brew-more
Docker-brew 0.5.2 support and memory footprint reduction
2013-08-14 10:40:05 -07:00
Michael Crosby
0077678844 Merge pull request #1539 from fkautz/https-install-script
Install script should be fetched over https, not http.
2013-08-14 10:32:29 -07:00
Daniel Mizyrycki
53f2c5d6e8 Merge pull request #1545 from dotcloud/1544-docker-ci-dependencies
testing, issue #1544: Add new docker dependencies into docker-ci
2013-08-14 10:23:08 -07:00
Daniel Mizyrycki
49597f0f52 testing, issue #1544: Add new docker dependencies into docker-ci 2013-08-14 10:12:08 -07:00
Victor Vieux
c84d74df8c add missing ParseRepositoryTag 2013-08-14 16:59:21 +00:00
Victor Vieux
3c9f9945c9 prevent crash when .dockercfg not readable 2013-08-14 10:26:18 +00:00
Frederick F. Kautz IV
80ebff0fa4 Install script should be fetched over https, not http. 2013-08-13 22:28:41 -07:00
Joffrey F
62c0f433fa brew: Fixed a bug with remote repository builds 2013-08-14 05:05:44 +02:00
Kawsar Saiyeed
44eb1b3892 Limit commands to containers of a relevant state 2013-08-14 02:45:03 +01:00
Andy Rothfusz
4cb57a5438 Merge pull request #1533 from dhrp/docs-ux-changes
Docs ux changes
2013-08-13 18:03:49 -07:00
Andy Rothfusz
2e5642452b Merge pull request #1523 from dotcloud/docker-group-docs
Update docs for docker group
2013-08-13 18:02:30 -07:00
Michael Crosby
4947e32acb Merged 0.5.3 hotfix release back to master
Conflicts:
	api.go
	commands.go
	network.go
2013-08-13 23:47:29 +00:00
Thatcher Peskens
6c87db97a6 Resolved conflict in conf.py
ps. also changed navigation (swapped community and getting started)
2013-08-13 16:28:24 -07:00
Thatcher Peskens
f127c471a1 Merge remote-tracking branch 'dotcloud/master' into docs-smart-changes
Conflicts:
	docs/sources/conf.py
2013-08-13 16:25:58 -07:00
Jérôme Petazzoni
280901e5fb add -insecure flag and relevant tests 2013-08-13 16:20:22 -07:00
Thatcher Peskens
f14db49346 [docs] Some user-friendly changes to the documentation.
- Added permalinks (closes #1527)
- Changed the 'fork us on github' button to 'Edit this page on github', so people can edit quickly (closes #1532)
- Changed the favicon
2013-08-13 16:18:32 -07:00
Michael Crosby
f1cdba2937 Merge pull request #1516 from unclejack/up_hack_dockerfile_go_version_to_1.1.2
use Go 1.1.2 for dockerbuilder
2013-08-13 14:50:12 -07:00
Andy Rothfusz
0794f0b518 Merge pull request #1526 from metalivedev/1522-faqupdate
Added information about Docker's high level tools over LXC.
2013-08-13 13:57:47 -07:00
Andy Rothfusz
e2409ad337 Added information about Docker's high level tools over LXC. Formatting cleanup. Mailing list cleanup. 2013-08-13 13:45:07 -07:00
Guillaume J. Charmes
ca92bc7798 Merge pull request #1501 from KSid/vagrant-dev-dockerfile
Install websocket library before building docker
2013-08-13 12:16:30 -07:00
Michael Crosby
e4f35dd4cf Update docs for docker group 2013-08-13 12:05:27 -07:00
shin-
2cebe09924 brew: Display a clear error message when the path is invalid 2013-08-13 20:28:15 +02:00
shin-
e5f1b6b9a4 brew: Updated requirements 2013-08-13 20:13:39 +02:00
shin-
79fc90b646 brew: Don't build if docker daemon can't be reached 2013-08-13 20:13:39 +02:00
shin-
fb7c4214ce brew: Reuse repositories when possible 2013-08-13 20:13:39 +02:00
Michael Crosby
06a092bdb5 Merge pull request #1510 from dotcloud/bump_0.5.3
Bump to 0.5.3
2013-08-13 10:41:57 -07:00
Michael Crosby
5d25f3232c Update changelog to include hostname commit 2013-08-13 17:36:24 +00:00
Guillaume J. Charmes
1a1c89556f Fix TestEnv 2013-08-13 17:34:35 +00:00
Nolan
05219d6b52 Add hostname to the container environment. 2013-08-13 17:30:21 +00:00
Guillaume J. Charmes
9cc3d7a18b Merge pull request #1515 from dotcloud/rework_auth_push
* Registry: Improve auth push
2013-08-13 10:15:59 -07:00
unclejack
e09863fedb use Go 1.1.2 for dockerbuilder 2013-08-13 19:48:30 +03:00
Victor Vieux
2ba1300773 remove checkIfLogged 2013-08-13 14:02:49 +00:00
Victor Vieux
6cb908bb82 fix merge issue 2013-08-13 13:35:34 +00:00
Victor Vieux
5ee3c58d25 Add USER instruction 2013-08-13 12:02:17 +00:00
Michael Crosby
c3773740d9 Bump to 0.5.3 2013-08-12 23:55:42 +00:00
Steeve Morin
0ca133dd76 Handle ip route showing mask-less IP addresses
Sometimes `ip route` will show mask-less IPs, so net.ParseCIDR will fail. If it does, we check whether net.ParseIP succeeds, and fail only if it doesn't.
Fixes #1214
Fixes #362
2013-08-12 23:46:26 +00:00
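
The commit above describes a two-step parsing fallback for `ip route` output. A minimal Go sketch of that idea follows; the `parseRouteIP` helper name is hypothetical and not the identifier used in the Docker source.

```go
package main

import (
	"fmt"
	"net"
)

// parseRouteIP mirrors the fallback described above: try CIDR notation
// first, then a bare IP, and only fail if neither parses.
func parseRouteIP(s string) (net.IP, error) {
	if ip, _, err := net.ParseCIDR(s); err == nil {
		return ip, nil
	}
	if ip := net.ParseIP(s); ip != nil {
		return ip, nil
	}
	return nil, fmt.Errorf("invalid IP or CIDR: %q", s)
}

func main() {
	for _, s := range []string{"10.0.42.1/24", "10.0.42.1"} {
		ip, err := parseRouteIP(s)
		fmt.Println(ip, err)
	}
}
```
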
Guillaume J. Charmes
68934878f1 Make sure ENV instruction within build perform a commit each time 2013-08-12 23:43:53 +00:00
Michael Crosby
ef1d1aefa7 Revert "docker.upstart: avoid spawning a sh process"
This reverts commit 24dd50490a.
2013-08-12 23:35:23 +00:00
Daniel Mizyrycki
c015d26e96 API, issue 1471: Allow users belonging to the docker group to use the docker client 2013-08-12 23:33:42 +00:00
Guillaume J. Charmes
f6760fca88 Merge pull request #1485 from dotcloud/1471-unixsocket-group
* Runtime: API, issue 1471: Use groups for socket permissions
2013-08-12 16:21:53 -07:00
Michael Crosby
ec61c46bf7 Add import for dotcloud/tar to replace std tar pkg 2013-08-12 22:42:29 +00:00
Andy Rothfusz
90cb66f08d Merge pull request #1508 from pborreli/typos
Fixed typos
2013-08-12 15:17:02 -07:00
Daniel Mizyrycki
999a8d7249 API, issue 1471: Allow users belonging to the docker group to use the docker client 2013-08-12 15:16:49 -07:00
Andy Rothfusz
13acf72a3e Merge pull request #1504 from zaiste/docs/postgresql-service
docs/postgresql: PostgreSQL service on Docker example
2013-08-12 14:47:19 -07:00
Solomon Hykes
aa213b48a4 Merge pull request #1488 from KSid/336-bash-completion
* Contrib: bash completion script
2013-08-12 11:50:25 -07:00
Pascal Borreli
9b2a5964fc Fixed typos 2013-08-12 18:53:06 +01:00
Guillaume J. Charmes
1110bb8e98 Merge pull request #1476 from dotcloud/improve_TestKillDifferentUser
* Tests: Improve TestKillDifferentUser to prevent timeout on buildbot
2013-08-12 10:23:08 -07:00
Victor Vieux
168e2f8c49 Merge pull request #1491 from titanous/gitignore-test
gitignore all test files
2013-08-12 09:18:31 -07:00
Michael Crosby
86ef6422f3 Merge pull request #1487 from titanous/maintainer-usernames
Add GitHub usernames to MAINTAINERS
2013-08-12 09:07:10 -07:00
Zaiste!
3af60bf375 fix/docs: ubuntu instead of base, note about root-only 2013-08-12 15:30:52 +02:00
Victor Vieux
123c80467b Added docs 2013-08-12 11:55:23 +00:00
Ken Cochrane
edba1af304 Merge pull request #1494 from KSid/typo-docs-run-command
* Documentation: Fix typo in docs for docker run -dns
2013-08-12 04:51:56 -07:00
Ken Cochrane
875e16c11b Merge pull request #1499 from seldo/master
* Documentation: Adding a reference to ps -a
2013-08-12 04:51:17 -07:00
Victor Vieux
703905d7ec ensure the use of IDs and add image's name in /events 2013-08-12 11:50:03 +00:00
Zaiste!
d52c149075 docs/postgresql: PostgreSQL service on Docker example 2013-08-12 12:03:43 +02:00
Victor Vieux
3f95d1b9bf Merge pull request #1497 from vincentbernat/fix/ipv4-forwarding-detection
runtime: correctly detect IPv4 forwarding
2013-08-12 02:07:03 -07:00
Kawsar Saiyeed
def9598ed9 Install websocket library before building docker 2013-08-12 05:22:33 +01:00
Laurie Voss
529ee848da Adding a reference to ps -a
It was confusing to me as a first-time user that my docker attach command failed; I was expecting the container to run continuously, and when I couldn't attach to it, I assumed something was wrong. ps -a shows that the container still exists, which gives the user confidence to go to the next step.
2013-08-11 17:27:47 -07:00
Vincent Bernat
64b817a5c1 runtime: correctly detect IPv4 forwarding
When the memory cgroup is absent, there was no attempt to detect whether IPv4
forwarding was enabled, and therefore docker was printing a warning
for each command spawning a new container. The test for IPv4
forwarding was guarded by the test for the memory cgroup.
2013-08-11 11:52:16 +02:00
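
For context, one common way to perform the forwarding check the commit above refers to is to read the sysctl from procfs. The sketch below is a hypothetical helper, not the actual Docker runtime code.

```go
package main

import (
	"bytes"
	"log"
	"os"
)

// ipv4ForwardingEnabled reads /proc/sys/net/ipv4/ip_forward directly.
func ipv4ForwardingEnabled() bool {
	data, err := os.ReadFile("/proc/sys/net/ipv4/ip_forward")
	trimmed := bytes.TrimSpace(data)
	if err != nil || len(trimmed) == 0 {
		return false // if the sysctl can't be read, assume forwarding is off
	}
	return trimmed[0] == '1'
}

func main() {
	if !ipv4ForwardingEnabled() {
		log.Println("WARNING: IPv4 forwarding is disabled.")
	}
}
```
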
Brandon Liu
02b8d14bdd Add test case for Graph ByParent(). 2013-08-11 01:24:21 -07:00
Brandon Liu
025c759e44 Fix Graph ByParent() to generate list of child images per parent image. 2013-08-11 00:37:16 -07:00
Michael Crosby
940d58806c Merge pull request #1483 from titanous/update-authors
Update AUTHORS
2013-08-10 21:16:34 -07:00
Kawsar Saiyeed
a2fb870ce3 Fix typo in docs for docker run -dns 2013-08-11 02:04:04 +01:00
Kawsar Saiyeed
e737856a7f Still missed -entrypoint from 'docker run' 2013-08-11 01:29:22 +01:00
Kawsar Saiyeed
ae1909b482 Clarified bash completion limitations 2013-08-11 01:08:47 +01:00
Kawsar Saiyeed
d75282eb14 Missed -entrypoint from 'docker run' options 2013-08-11 00:57:18 +01:00
Daniel Mizyrycki
cd9886f0a8 Merge pull request #1489 from dotcloud/1422-revert-upstart
Revert "docker.upstart: avoid spawning a `sh` process"
2013-08-10 11:07:21 -07:00
Jonathan Rudenberg
a43bae4c0b gitignore all test files 2013-08-10 13:48:24 -04:00
Kawsar Saiyeed
91ae135896 Add description and usage information 2013-08-10 13:05:31 +01:00
Greg Thornton
57b49efc98 Skip existing volumes in volumes-from
Removes the error when a container already has a volume that would otherwise
be created by `Config.VolumesFrom`. Allows restarting containers with a
`Config.VolumesFrom` set.
2013-08-10 06:37:57 +00:00
Greg Thornton
3bd73a9633 Apply volumes-from before creating volumes
Copies the volumes from the container specified in `Config.VolumesFrom` before
creating volumes from `Config.Volumes`. Skips any preexisting volumes when
processing `Config.Volumes`. Fixes #1351
2013-08-10 04:55:23 +00:00
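
The ordering described in the commit above (inherit volumes from the source container first, then create only the declared volumes that are still missing) can be sketched roughly as below. The types and helper are simplified stand-ins, not Docker's real Config/Container structures.

```go
package main

import "fmt"

// applyVolumes copies volumes-from entries first, then fills in the
// container's own declared volumes, skipping any path already present.
func applyVolumes(volumesFrom map[string]string, declared []string) map[string]string {
	volumes := make(map[string]string)
	for path, hostDir := range volumesFrom {
		volumes[path] = hostDir // inherited bind stays as-is
	}
	for _, path := range declared {
		if _, exists := volumes[path]; exists {
			continue // already provided by volumes-from; don't recreate
		}
		volumes[path] = "" // empty value: create a fresh volume later
	}
	return volumes
}

func main() {
	fmt.Println(applyVolumes(
		map[string]string{"/data": "/var/lib/docker/vfs/abc"},
		[]string{"/data", "/logs"},
	))
}
```
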
Michael Crosby
e3acbff2ed Revert "docker.upstart: avoid spawning a sh process"
This reverts commit 24dd50490a.
2013-08-10 03:06:08 +00:00
Kawsar Saiyeed
d6e5c2c276 Add initial bash completion script
The script can be used to auto-complete commands, image names and
container ids from within a bash prompt.

This partially resolves #336.
2013-08-10 02:38:11 +01:00
Jonathan Rudenberg
4dc04d7690 Add GitHub usernames to MAINTAINERS 2013-08-09 21:16:44 -04:00
Michael Crosby
3e12349831 Merge pull request #1484 from titanous/use-range
Use ranged for loop on channels
2013-08-09 18:06:12 -07:00
Jonathan Rudenberg
7c50221de5 Use ranged for loop on channels 2013-08-09 20:42:20 -04:00
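
The pattern the commit above refers to is Go's `range` over a channel, which replaces the manual "receive, check ok, break" loop; a tiny sketch:

```go
package main

import "fmt"

func main() {
	ch := make(chan string, 2)
	ch <- "a"
	ch <- "b"
	close(ch)

	for msg := range ch { // receives until ch is closed and drained
		fmt.Println(msg)
	}
}
```
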
Michael Crosby
3d63087f78 Merge pull request #1481 from titanous/fix-sprint
Fix typo: fmt.Sprint -> fmt.Sprintf
2013-08-09 17:28:59 -07:00
Jonathan Rudenberg
1408f08c40 Update AUTHORS 2013-08-09 20:09:42 -04:00
Jonathan Rudenberg
3b23f02229 Fix typo: fmt.Sprint -> fmt.Sprintf 2013-08-09 19:52:05 -04:00
Guillaume J. Charmes
fd5099c9fe Merge pull request #1479 from jpetazzo/fix-testbindmounts-typo
- Tests: fix typo in TestBindMounts (runContainer called without image)
2013-08-09 16:20:24 -07:00
Jérôme Petazzoni
68b09cbe3d fix typo in TestBindMounts (runContainer called without image) 2013-08-09 16:03:05 -07:00
Guillaume J. Charmes
92cd2f5bad Merge pull request #1146 from benoitc/feature/attach_ws
* Runtime: add websocket support to /container/<name>/attach/ws
2013-08-09 15:38:24 -07:00
Guillaume J. Charmes
8d1cd63dfa Merge pull request #1460 from dotcloud/patch1
* Runtime: Mount /dev/shm as a tmpfs
2013-08-09 15:18:22 -07:00
Michael Crosby
2c4c10fb4a Merge pull request #1477 from kevinclark/builder-fixme
Only count known instructions as build steps
2013-08-09 14:45:06 -07:00
Kevin Clark
722d4e916a Add myself to AUTHORS 2013-08-09 14:39:03 -07:00
Kevin Clark
4ff649a4ea Only count known instructions as build steps
stepN is only used in the log line, so if we only produce the log line
when there's a message, it should do the right thing.

If it's *not* a valid instruction, it gets a line as well, so there's no
reason to double up.
2013-08-09 14:38:29 -07:00
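
A rough sketch of the counting logic described above: the step number is advanced (and logged) only for instructions the builder knows about, so unknown lines don't produce a misleading "Step N" entry. The instruction set and dispatch here are simplified placeholders, not the real buildfile code.

```go
package main

import (
	"fmt"
	"strings"
)

func build(lines []string) {
	known := map[string]bool{"FROM": true, "RUN": true, "ENV": true, "ADD": true}
	stepN := 0
	for _, line := range lines {
		fields := strings.Fields(line)
		if len(fields) == 0 {
			continue
		}
		instr := strings.ToUpper(fields[0])
		if !known[instr] {
			fmt.Printf("Skipping unknown instruction %s\n", instr)
			continue
		}
		stepN++ // only known instructions count as build steps
		fmt.Printf("Step %d : %s\n", stepN, line)
	}
}

func main() {
	build([]string{"FROM ubuntu", "BOGUS something", "RUN echo hi"})
}
```
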
unclejack
641ddaeb03 add formatting directive to failure to stop container error 2013-08-09 23:27:34 +03:00
unclejack
2ba5c91547 minor cleanup for signal handling 2013-08-09 23:23:27 +03:00
Michael Crosby
25e7227c81 Merge pull request #1462 from dotcloud/fix_build_events_output
Fix docker build and docker events output
2013-08-09 12:41:18 -07:00
Jérôme Petazzoni
04cd0a392b Merge pull request #1463 from jpetazzo/https-get-docker-io
switch from http to https for get.docker.io
2013-08-09 11:26:56 -07:00
Guillaume J. Charmes
db9d68c3e4 Improve TestKillDifferentUser to prevent timeout on buildbot 2013-08-09 10:50:58 -07:00
unclejack
88cb9f3116 keep processing signals after the first one 2013-08-09 20:33:17 +03:00
Guillaume J. Charmes
55f9610cde Merge pull request #1452 from dotcloud/improve_TestGetContainersTop
* Tests: Improve TestGetContainersTop so it does not rely on sleep
2013-08-09 10:28:34 -07:00
shin-
6178dc7f1b brew: added safeguards in script and changed default branch to 'master' 2013-08-09 16:40:28 +02:00
Solomon Hykes
b8f8f9d07e Merged 0.5.2 hotfix release back to master 2013-08-08 19:45:57 -07:00
Michael Crosby
1643943402 Merge pull request #1464 from dotcloud/bump_0.5.2
Bump to 0.5.2
2013-08-08 17:36:53 -07:00
Michael Crosby
e99a99eb6e Bump to v0.5.2 2013-08-09 00:17:35 +00:00
Michael Crosby
df9712f1c8 Change daemon to listen on unix socket by default
Conflicts:
	docs/sources/api/docker_remote_api.rst
2013-08-09 00:16:43 +00:00
Michael Crosby
28d38620f0 Merge pull request #1417 from crosbymichael/root-socket
Change daemon to listen on unix socket by default
2013-08-08 17:06:49 -07:00
Jérôme Petazzoni
1ce9b3ca9c switch from http to https 2013-08-08 17:02:59 -07:00
Jérôme Petazzoni
5c56b597a9 change network range to avoid conflict with EC2 DNS 2013-08-08 23:27:55 +00:00
Michael Crosby
f712e10cb2 Forbid certain paths within docker build ADD
Conflicts:
	buildfile_test.go
2013-08-08 23:22:14 +00:00
Victor Vieux
213365c2d2 fix docker build and docker events output 2013-08-08 22:51:39 +00:00
Sam Alba
7f02bd3b7a Merge pull request #1361 from dotcloud/library
Docker-brew and Docker standard library
2013-08-08 14:22:20 -07:00
Guillaume J. Charmes
ceb33818cd Improve TestGetContainersTop so it does not rely on sleep 2013-08-08 11:51:31 -07:00
Guillaume J. Charmes
18fc707fdf Make sure all needed mountpoints are present 2013-08-08 11:25:02 -07:00
Michael Crosby
c3027fa9ac Merge pull request #1216 from dotcloud/add_some_tests
Add some tests in server and utils
2013-08-08 09:57:18 -07:00
Victor Vieux
4249867e5b rebase 2013-08-08 14:58:52 +00:00
Victor Vieux
c804a5f827 rebase master 2013-08-08 14:56:37 +00:00
Victor Vieux
be77ee33bc Merge branch 'master' into add_some_tests 2013-08-08 14:44:56 +00:00
Victor Vieux
5928ed5d45 Merge pull request #1456 from dotcloud/1455-force_commit_build_env
Make sure ENV instruction within build perform a commit each time
2013-08-08 07:37:51 -07:00
Karan Lyons
075d30dbce Mount /dev/shm as a tmpfs
Fixes #1122.
2013-08-07 17:44:33 -07:00
Guillaume J. Charmes
6a6a2ad8a4 Make sure ENV instruction within build perform a commit each time 2013-08-07 17:23:49 -07:00
Guillaume J. Charmes
279fe144e1 Merge pull request #1420 from jpetazzo/fix-get-docker-io-upstart-script
* Packaging: Fix the upstart script generated by get.docker.io
2013-08-07 17:13:54 -07:00
Daniel Mizyrycki
cc80bd41c4 Merge pull request #1423 from krautcomputing/fix_indentation_in_vagrantfile
Fix indentation in Vagrantfile
2013-08-07 17:10:40 -07:00
Andy Rothfusz
06183e6cdc Merge pull request #1447 from kermit666/image-tag-doc
doc: syntax to run a specific image tag
2013-08-07 16:23:45 -07:00
Guillaume J. Charmes
6249cc3373 Merge pull request #1425 from dotcloud/simplify_ProgressReader
- Runtime: fix small \n error in docker build
2013-08-07 16:19:42 -07:00
Solomon Hykes
80f34c6aeb Merge pull request #1451 from dotcloud/michael-maintainer
Add Michael Crosby to core maintainers
2013-08-07 15:53:29 -07:00
Guillaume J. Charmes
a2f526dadc Merge pull request #1435 from jpetazzo/userland-proxy-should-listen-on-inaddr-any
* Runtime: Let userland proxy handle container-bound traffic
2013-08-07 15:48:17 -07:00
Guillaume J. Charmes
429d2f85cb Merge pull request #1445 from dsissitka/host
* Runtime: Updated the Docker CLI to specify a value for the "Host" header.
2013-08-07 15:38:50 -07:00
Guillaume J. Charmes
2ca018b2eb Merge pull request #1395 from c00w/490-warn-ipv4forwarding-disabled
* Runtime: Add warning when net.ipv4.ip_forwarding = 0
2013-08-07 15:31:12 -07:00
Guillaume J. Charmes
3e6e08ce00 Merge pull request #1362 from dotcloud/registry_test
* Registry: Registry unit tests + mock registry
2013-08-07 15:30:07 -07:00
Colin Rice
3e491f8698 Fix reversed IPv4Forwarding check in api.go 2013-08-07 18:28:40 -04:00
Colin Rice
ccffa69766 Add Colin Rice to AUTHORS file 2013-08-07 18:28:39 -04:00
Colin Rice
10190be5d7 Add warning when net.ipv4.ip_forwarding = 0
Added warnings to api.go, container.go, commands.go, and runtime.go
Also updated APIInfo to return whether IPv4Forwarding is enabled
2013-08-07 18:28:39 -04:00
Guillaume J. Charmes
65a4e30825 Merge pull request #1093 from monnand/910-login-info
* Runtime: fixed #910. print user name to docker info output
2013-08-07 15:09:55 -07:00
Guillaume J. Charmes
e63960caae Merge pull request #1450 from dotcloud/add-sandbox
- Builder: Forbid certain paths within docker build ADD
2013-08-07 15:04:36 -07:00
benoitc
e2ca600fd8 Merge branch 'feature/attach_ws' into feature/attach_ws2
Conflicts:
	api.go
2013-08-07 23:30:28 +02:00
Solomon Hykes
4860df1689 Add Michael Crosby to core maintainers 2013-08-07 21:24:41 +00:00
Michael Crosby
ced93bcabd Modify test to accept error from mkContainer 2013-08-07 19:07:39 +00:00
Sean P. Kane
0e21de9a25 Assume that if VAGRANT_DEFAULT_PROVIDER is set we shouldn't install vbox tools 2013-08-07 09:38:49 -07:00
Michael Crosby
3104fc8d33 Forbid certain paths within docker build ADD 2013-08-07 16:05:30 +00:00
Victor Vieux
2409df9285 Merge pull request #1434 from jpetazzo/fix-ec2-dns-conflict
change network range to avoid conflict with EC2 DNS
2013-08-07 08:43:53 -07:00
Victor Vieux
2e37be973f Merge pull request #1436 from jpetazzo/fix-loopback-interface-test-index
relax the lo interface test to allow iface index != 1
2013-08-07 08:40:55 -07:00
Dražen Lučanin
6115348dd9 doc: syntax to run a specific image tag 2013-08-07 13:57:31 +02:00
David Sissitka
416d098688 Updated my last commit to use tabs instead of spaces. 2013-08-07 05:35:38 -04:00
David Sissitka
6bbe66d2e6 Updated the Docker CLI to specify a value for the "Host" header. 2013-08-07 05:33:03 -04:00
Andy Rothfusz
3782e34e67 Merge pull request #1430 from metalivedev/1031-linuxheaders
Suggest installing linux-headers by default. Address #1187 and #1031
2013-08-06 18:56:34 -07:00
Jérôme Petazzoni
84790aafd8 relax the lo interface test to allow iface index != 1 2013-08-06 18:31:05 -07:00
Jérôme Petazzoni
fea2d5f2fe Let userland proxy handle container-bound traffic 2013-08-06 17:44:39 -07:00
Jérôme Petazzoni
9f1c9686e0 change network range to avoid conflict with EC2 DNS 2013-08-06 17:24:10 -07:00
Andy Rothfusz
e0a6f27d1b Suggest installing linux-headers by default. Address #1187 and #1031 2013-08-06 13:56:29 -07:00
Andy Rothfusz
9130ee7513 Merge pull request #1428 from dhrp/twitterhandle
changed the twitter handle
2013-08-06 13:38:22 -07:00
shin-
65c8e9242c brew: added support for push on private registries. 2013-08-06 21:08:27 +02:00
Thatcher Peskens
1d654f6156 changed the twitter handle 2013-08-06 11:40:16 -07:00
Victor Vieux
4af24e11a4 Merge pull request #1221 from crosbymichael/cmd-cp
*Client: Add docker cp command and copy api endpoint to copy container files/folders to the host
2013-08-06 09:15:39 -07:00
Michael Crosby
583f5868c9 Move copy command docs to api 1.4 document 2013-08-06 16:09:54 +00:00
Michael Crosby
d94b186080 Strip leading forward slash from resource 2013-08-06 16:09:54 +00:00
Michael Crosby
5b8cfbe15c Add cp command and copy api endpoint
The cp command and copy api endpoint allow users
to copy files and/or folders from a container's filesystem.

Closes #382
2013-08-06 16:09:54 +00:00
Victor Vieux
0dbc51f4d2 Merge pull request #1394 from crosbymichael/1391-json-requests
Use mime pkg to parse Content-Type
2013-08-06 08:59:10 -07:00
Michael Crosby
754ed9043d Use mime types to parse Content-Type 2013-08-06 15:57:13 +00:00
Victor Vieux
ba17f4a06a fix small \n error in docker build 2013-08-06 14:31:51 +00:00
Manuel Meurer
b9149f45bf Fix indentation in Vagrantfile 2013-08-06 15:21:26 +02:00
Victor Vieux
b6c4b325a4 Merge pull request #1406 from dotcloud/1363-reduce_timeout-fix
Reduce connect and read timeout when pinging the registry (fixes issue #1363)
2013-08-06 04:22:44 -07:00
Nan Monnand Deng
965de6ef50 fixing #910 2013-08-05 23:37:53 -04:00
Nan Monnand Deng
303490168f Added index address into APIInfo. 2013-08-05 23:36:55 -04:00
Michael Crosby
c7c2399be9 Merge pull request #1421 from dotcloud/fix_makefile
Make sure all sources have the wanted revision
2013-08-05 20:25:32 -07:00
Guillaume J. Charmes
120a520a22 Make sure all sources have the wanted revision 2013-08-05 19:58:48 -07:00
Andy Rothfusz
7c03bd1e7a Merge branch 'docs-example-fix' of git://github.com/faizkhan00/docker
merges #1402
2013-08-05 18:21:19 -07:00
Jérôme Petazzoni
049d28868e fix the upstart script generated by get.docker.io (it was not starting dockerd on boot) 2013-08-05 18:11:13 -07:00
Michael Crosby
8934f13615 Change daemon to listen on unix socket by default 2013-08-06 00:12:56 +00:00
Guillaume J. Charmes
dcf9dfb129 Update utils_test.go 2013-08-05 16:32:25 -07:00
Andy Rothfusz
7c9604e32b Merge pull request #1387 from joevandyk/patch-1
Update amazon.rst to explain that Vagrant is not necessary for running Docker on ec2
2013-08-05 16:14:57 -07:00
Andy Rothfusz
b302ae329c Merge pull request #1418 from dhrp/remote_api_doc_improvements
Fixed some typos and formatting issues in remote api documentation.
2013-08-05 16:13:55 -07:00
Thatcher Peskens
ff6b6f2ce1 Fixed some typos and formatting issues in remote api documentation. 2013-08-05 15:55:40 -07:00
Thatcher
d49f141fb3 Merge pull request #1303 from dhrp/update-spinx-for-man
Enabled the docs to generate manpages.
2013-08-05 15:31:51 -07:00
Guillaume J. Charmes
590fc58de7 Update version. 2013-08-05 14:49:37 -07:00
Guillaume J. Charmes
c03561eea8 Update CHANGELOG.md 2013-08-05 14:32:54 -07:00
Nan Monnand Deng
4179f25286 fixed #910. print user name to docker info output 2013-08-05 17:25:29 -04:00
Guillaume J. Charmes
e54e8fa920 Merge pull request #1290 from dotcloud/parallel_pull
* Runtime: Parallel pull
2013-08-05 14:17:29 -07:00
Guillaume J. Charmes
2f1c05d997 Merge pull request #1374 from dotcloud/steeve-patch-1
- Runtime: Handle ip route showing mask-less IP addresses
2013-08-05 14:13:20 -07:00
Andy Rothfusz
baa4618e57 Merge pull request #1390 from grobie/clarify-amazon-documentation
Clarify Amazon EC2 installation type
2013-08-05 13:26:28 -07:00
Andy Rothfusz
dcc1e3562f Merge pull request #1404 from dnordberg/fix-base-refs
'Base' image is deprecated and should no longer be referenced in the docs.
2013-08-05 13:24:05 -07:00
Guillaume J. Charmes
f6fa353dd8 Merge pull request #1267 from sridatta/new-clean-init
* Runtime: Fix to "Inject dockerinit at /.dockerinit"
2013-08-05 13:23:22 -07:00
Andy Rothfusz
d4fa619ed1 Merge pull request #1411 from andrewmacgregor/doc-typos
Minor typos found while reading docs
2013-08-05 13:22:53 -07:00
Michael Crosby
8a851af5e6 Merge pull request #1364 from dotcloud/bump_0.5.1
Bump to v0.5.1
2013-08-05 12:10:03 -07:00
shin-
8aa9985ad0 Adapted tests to latest registry changes 2013-08-05 20:28:05 +02:00
shin-
2c85b964e3 Cleanup 2013-08-05 19:07:23 +02:00
shin-
9159c819c3 Mock access logs don't show up in non-debug mode 2013-08-05 19:06:00 +02:00
shin-
484ba4a8c5 gofmt 2013-08-05 19:06:00 +02:00
shin-
97b7b173b9 New registry unit tests remade from scratch, using the mock registry 2013-08-05 19:06:00 +02:00
shin-
29f69211c9 Mock registry: Fixed a bug where the index validation path would return a 200 status code instead of the expected 204 2013-08-05 19:06:00 +02:00
shin-
553ce165c1 registry: Fixed a bug where token and cookie info wouldn't be sent when using LookupRemoteImage(). Fixed a bug where no error would be reported when getting a non-200 status code in GetRemoteImageLayer() 2013-08-05 19:05:14 +02:00
Sam Alba
310ddec823 Disabled test server in the tests 2013-08-05 19:02:57 +02:00
Sam Alba
6926ba558f Mocked registry: Added X-Docker-Size when fetching the layer 2013-08-05 19:02:57 +02:00
Sam Alba
97d1d6f5d2 Fixed mocked registry 2013-08-05 19:02:57 +02:00
Sam Alba
5f7abd5347 Implemented a Mocked version of the Registry server 2013-08-05 19:02:57 +02:00
Victor Vieux
946bbee39a rebase master 2013-08-05 16:25:42 +00:00
Victor Vieux
bdc0e8f825 Merge pull request #1405 from jonasi/entrypoint-noargs
*Runtime: Allow ENTRYPOINT without CMD
2013-08-05 09:00:39 -07:00
Victor Vieux
1b08ab92d1 Merge pull request #1408 from dotcloud/1407-localhost_is_a_domain-fix
Always consider localhost as a domain name when parsing the FQN repos name
2013-08-05 08:50:12 -07:00
Victor Vieux
feda3db1dd Merge pull request #1382 from monnand/650-http-utils
650 http utils and user agent field
2013-08-05 08:49:12 -07:00
Andrew Macgregor
ce97a71adf Minor typos found while reading docs 2013-08-05 22:47:16 +08:00
Isao Jonas
d00fb40967 added tests for 1405 2013-08-05 09:30:27 -05:00
Isao Jonas
0f249c85ea fix entrypoint without cmd 2013-08-05 09:07:03 -05:00
Ken Cochrane
a37b42b57c Merge pull request #1400 from gorsuch/doc-typo
* Documentation: fix a typo in the ubuntu installation guide
2013-08-05 07:05:27 -07:00
Victor Vieux
dd8c59892c Merge pull request #1182 from dotcloud/change_build_usage
change tag -> repository name (and optionally a tag) in build usage
2013-08-05 04:08:33 -07:00
Victor Vieux
a97cf23355 add docs 2013-08-05 11:07:27 +00:00
Victor Vieux
030cc8d5cc Merge pull request #1389 from dotcloud/1373-improve_checklocaldns
Consider empty /etc/resolv.conf as local dns + add unit test
2013-08-05 02:29:36 -07:00
Sam Alba
c22f2617ad Always consider localhost as a domain name when parsing the FQN repos name 2013-08-04 17:59:12 -07:00
Sam Alba
c860945be2 Reduce connect and read timeout when pinging the registry (fixes issue #1363) 2013-08-04 17:42:24 -07:00
Daniel Nordberg
51d0c9238b 'Base' is deprecated and should no longer be referenced in the docs.
https://groups.google.com/forum/#!topic/docker-club/pEjqMgcrnqA
2013-08-05 01:34:32 +03:00
Faiz K
22df1249b5 bash commands while in the container aren't in the transcript! Added. 2013-08-04 08:26:56 -05:00
David Calavera
bdaa87ff21 Print a new line after getting the password from stdin. 2013-08-03 21:30:07 -07:00
Michael Gorsuch
db0ccaac9b typo: s/connexions/connections 2013-08-03 22:11:59 -05:00
David Calavera
4089a20cf4 Exit if there is any error reading from stdin. 2013-08-03 17:27:15 -07:00
David Calavera
75ac50a9a0 Stop making a raw terminal to ask for registry login credentials.
It only disables echo while asking for the password and lets the terminal handle everything else.
It fixes #1392 since blank spaces are no longer discarded as they were before.
It also cleans up the login code a little bit to improve readability.
2013-08-03 16:43:20 -07:00
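
A minimal sketch of the approach described in the commit above, assuming the modern golang.org/x/term package (an external module, not the 2013-era code): only the password read disables echo, while the username is read through the normal line-buffered terminal so spaces and editing keys behave as usual.

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"

	"golang.org/x/term"
)

func main() {
	in := bufio.NewReader(os.Stdin)

	fmt.Print("Username: ")
	username, _ := in.ReadString('\n')

	fmt.Print("Password: ")
	password, err := term.ReadPassword(int(os.Stdin.Fd())) // echo off for this read only
	fmt.Println()
	if err != nil {
		fmt.Fprintln(os.Stderr, "error reading password:", err)
		os.Exit(1)
	}

	fmt.Printf("login as %q (%d-char password)\n", strings.TrimSpace(username), len(password))
}
```
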
David Calavera
58e0c68132 Merge branch 'master' into login_signal
* master: (54 commits)
  Return JSONError for HTTPResponse error
  Fix TestEnv
  Revert "Bind daemon to 0.0.0.0 in Vagrant. Fixes #1304"
  Add unit tests for build no cache
  Add no cache for docker build
  Move note about officially supported kernel
  Solved the logo being squished in Safari
  Add hostname to the container environment.
  fix same issue in api.go
  improve tests
  Return registry status code in error
  update http://get.docker.io/latest
  Add check that the request is good
  fix tests about refactor checksums
  add ufw doc
  Fixed a couple of minor syntax errors.
  Updated the description of run -d
  Make sure the index also receives the checksums
  Switch json/payload order
  Remove unused parameter
  ...
2013-08-03 15:43:43 -07:00
David Calavera
cd6aeaf979 Sort APIImages by most recent creation date.
Fixes #985.
2013-08-03 15:35:36 -07:00
Tobias Schmidt
708cd34586 Clarify Amazon EC2 installation type
Make it clear that this documentation covers installing docker on EC2 with the
help of vagrant (which doesn't make it easy to start multiple instances),
rather than a plain vanilla EC2 installation.
2013-08-03 17:31:20 +07:00
Guillaume J. Charmes
4dcc0f316c Merge pull request #1298 from crosbymichael/1246-auth-request
* Registry: Do not require login unless 401 is received on push
2013-08-02 17:39:54 -07:00
Michael Crosby
dae585c6e4 Return JSONError for HTTPResponse error 2013-08-03 00:27:58 +00:00
Andy Rothfusz
b1d994e3b9 Merge pull request #1375 from grobie/patch-1
Move note about officially supported kernel
2013-08-02 17:24:15 -07:00
Guillaume J. Charmes
dde8f74cea Fix TestEnv 2013-08-02 15:58:10 -07:00
Daniel Mizyrycki
9e3d18e606 Merge pull request #1388 from titanous/revert-vagrant-bind
Revert "Bind daemon to 0.0.0.0 in Vagrant. Fixes #1304"
2013-08-02 15:26:05 -07:00
Guillaume J. Charmes
3e9575e275 Consider empty /etc/resolv.conf as local dns + add unit test 2013-08-02 15:23:36 -07:00
Jonathan Rudenberg
07fee44559 Revert "Bind daemon to 0.0.0.0 in Vagrant. Fixes #1304"
This reverts commit bdc79ac8b2.
2013-08-02 19:18:02 -03:00
Joe Van Dyk
b6bff0cbb1 Update amazon.rst to explain that Vagrant is not necessary for running Docker on ec2 2013-08-02 15:10:57 -07:00
Guillaume J. Charmes
ead7eb619e Merge pull request #1384 from crosbymichael/1326-build-without-cache
* Builder: Add no cache for docker build
2013-08-02 14:38:34 -07:00
Guillaume J. Charmes
29de2432ea Merge pull request #1350 from ndarilek/1348-add-hostname-to-environment
Runtime: add hostname to environment
2013-08-02 13:58:43 -07:00
Guillaume J. Charmes
ffcba1236c Merge pull request #1117 from dotcloud/add_last_version-feature
Add last stable version in `docker version`
2013-08-02 13:53:33 -07:00
Guillaume J. Charmes
0f088d28c5 Merge pull request #1336 from dotcloud/fix_ADD_permissions
Builder: Make sure ADD will create everything in 0755
2013-08-02 13:52:19 -07:00
Daniel Mizyrycki
16917275ee Merge pull request #1346 from dotcloud/1117-update_latest_in_dockerbuilder
update http://get.docker.io/latest
2013-08-02 13:51:23 -07:00
Andy Rothfusz
09ab2bfa1d Merge pull request #1343 from dotcloud/ufw_doc
Add ufw doc
2013-08-02 13:33:26 -07:00
Michael Crosby
b9f0695924 Add unit tests for build no cache 2013-08-02 19:12:38 +00:00
Nan Monnand Deng
7bade49d4c update auth_test.go 2013-08-02 14:08:16 -04:00
Michael Crosby
3a123bc479 Add no cache for docker build
Add a new flag to disable the image cache when building images.
2013-08-02 16:18:54 +00:00
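
The no-cache flag described above can be pictured as a boolean that gates the cache lookup so every instruction produces a fresh layer. The names below are illustrative only, not the actual buildfile implementation.

```go
package main

import "fmt"

type builder struct {
	noCache bool
	cache   map[string]string // instruction -> cached image ID
}

// run skips the cache probe entirely when noCache is set.
func (b *builder) run(instruction string) string {
	if !b.noCache {
		if id, ok := b.cache[instruction]; ok {
			fmt.Printf(" ---> Using cache %s\n", id)
			return id
		}
	}
	id := fmt.Sprintf("img-%d", len(b.cache)+1) // pretend we built a new layer
	b.cache[instruction] = id
	fmt.Printf(" ---> Built %s\n", id)
	return id
}

func main() {
	b := &builder{noCache: true, cache: map[string]string{"RUN apt-get update": "img-0"}}
	b.run("RUN apt-get update") // rebuilt despite an existing cache entry
}
```
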
Nan Monnand Deng
5bc344ab73 factory generated from one place. 2013-08-02 04:10:26 -04:00
Nan Monnand Deng
4bd287e107 auth with user agent 2013-08-02 03:30:45 -04:00
Nan Monnand Deng
6a56b7b391 Server now use request factory 2013-08-02 03:23:46 -04:00
Nan Monnand Deng
7dac26ce69 reqFactory in Registry 2013-08-02 03:08:08 -04:00
Nan Monnand Deng
793fd983ef http utils 2013-08-02 02:47:58 -04:00
Tobias Schmidt
2424480e2c Move note about officially supported kernel
It seems this is a general note about kernel issues and not specific to Cgroups or namespaces.
2013-08-02 12:24:38 +07:00
Guillaume J. Charmes
f5a8e90d10 Make sure the routes IP are taken into consideration + add unit test for network overlap detection 2013-08-01 18:12:39 -07:00
Daniel Mizyrycki
2e7df5182c testing, issue #1331: Add registry functional test to docker-ci 2013-08-01 17:48:17 -07:00
Daniel Mizyrycki
ea2486d631 Merge pull request #1333 from dotcloud/1331-testing-registry
testing, issue #1331: Add registry functional test to docker-ci
2013-08-01 16:00:29 -07:00
Steeve Morin
2e72882216 Handle ip route showing mask-less IP addresses
Sometimes `ip route` will show mask-less IPs, so net.ParseCIDR will fail. If it does, we check whether net.ParseIP succeeds, and fail only if it doesn't.
Fixes #1214
Fixes #362
2013-08-01 02:42:22 +02:00
Thatcher
d1e1a8e78c Merge pull request #1365 from dhrp/solve-logo-squish
Solved the logo being squished in Safari
2013-07-31 15:06:26 -07:00
Thatcher
ad3b091d53 Some more improvements on the docs readme. Removed references to website. 2013-07-31 13:59:56 -07:00
Thatcher Peskens
26229d78f2 Updated docs README with instructions to preview the generated manfile, and where to get the one generated by sphinx. 2013-07-31 13:44:10 -07:00
Thatcher Peskens
e0c24ccfc3 Solved the logo being squished in Safari 2013-07-31 12:17:42 -07:00
Guillaume J. Charmes
3b89d13aaf Bump to v0.5.1 2013-07-31 10:53:36 -07:00
Victor Vieux
108635582f rebase master 2013-07-31 15:32:08 +00:00
Victor Vieux
0c0077ed6f Merge pull request #1328 from dotcloud/1307_url_port_delete-fix
Use utils.ParseRepositoryTag instead of strings.Split(name, ":") in server.ImageDelete
2013-07-31 07:55:06 -07:00
Nolan
9a604acc23 Add hostname to the container environment. 2013-07-31 09:50:53 -05:00
Victor Vieux
cd9f7f29d1 Merge pull request #1334 from dotcloud/1314-compat_broke-fix
Discard error when loading old container format
2013-07-31 01:08:08 -07:00
Victor Vieux
a7068510a5 fix same issue in api.go 2013-07-31 08:01:20 +00:00
Victor Vieux
73c6d9f135 improve tests 2013-07-31 07:56:53 +00:00
Michael Crosby
3043c26419 Return registry status code in error
Added Details map to the JSONMessage
2013-07-30 23:24:31 +00:00
Victor Vieux
e66e0289ab update http://get.docker.io/latest 2013-07-30 17:18:19 +00:00
Victor Vieux
6166380d76 rebase master 2013-07-30 16:51:50 +00:00
Victor Vieux
e4752c8c1a Add check that the request is good 2013-07-30 16:42:32 +00:00
Victor Vieux
99c27fa0dd Merge branch 'master' into add_last_version-feature 2013-07-30 16:23:06 +00:00
Victor Vieux
d5a57a4b5e Merge pull request #1344 from dotcloud/refactor_checksum_tests
fix tests about refactor checksums
2013-07-30 06:13:49 -07:00
Victor Vieux
b14c251862 fix tests about refactor checksums 2013-07-30 13:13:18 +00:00
Victor Vieux
bcd6ca3685 Merge pull request #1268 from dotcloud/refactor_checksum
Refactor checksum
2013-07-30 06:07:54 -07:00
Victor Vieux
16225c473f Merge pull request #1291 from dotcloud/ensure_mount_commit
*Builder: Allow the commit of a non-started container
2013-07-30 05:19:17 -07:00
Victor Vieux
46f59dd933 add parallel pull to 1.4 2013-07-30 12:15:33 +00:00
Victor Vieux
e1fa989ec9 rebase master 2013-07-30 11:59:31 +00:00
Victor Vieux
dd2f0d89bf Merge pull request #1238 from dotcloud/1237-improve_docker_top-feature
*Client: add ps args to docker top
2013-07-30 04:54:44 -07:00
Victor Vieux
0b57e4483a Merge branch 'master' into 1237-improve_docker_top-feature 2013-07-30 11:51:16 +00:00
Victor Vieux
7d0b8c726c add ufw doc 2013-07-30 13:47:29 +02:00
Victor Vieux
f2dc49292f Merge pull request #1342 from dsissitka/patch-4
Fixed a couple of minor syntax errors.
2013-07-30 04:20:05 -07:00
Victor Vieux
a7ace535c3 Merge pull request #1339 from dhrp/docker-run-d-description
Updated the description of run -d
2013-07-30 04:15:41 -07:00
Victor Vieux
c99e8de5a4 Merge branch 'cleanup_signal_handling' of https://github.com/calavera/docker into calavera-cleanup_signal_handling 2013-07-30 11:14:36 +00:00
Victor Vieux
c06aa62bda Merge pull request #1306 from dotcloud/1294_fix_wrong_untag_using_id_rmi
Fix wrong untag using id rmi
2013-07-30 04:09:55 -07:00
dsissitka
9ba998312d Fixed a couple of minor syntax errors. 2013-07-30 01:39:29 -04:00
Daniel Mizyrycki
7d68afb2d2 Merge pull request #1209 from zimbatm/upstart-improvements
Upstart improvements
2013-07-29 18:28:40 -07:00
Daniel Mizyrycki
bfdf1839e0 Merge pull request #1312 from titanous/vagrant-bind
Bind daemon to 0.0.0.0 in Vagrant
2013-07-29 17:06:37 -07:00
Thatcher Peskens
5dc86d7bca Updated the description of run -d
The goal is to make it clearer that this will give you the container ID after run completes.

Since stdout is now standard on run, "docker run -d" is the best (or only) way to get the container ID returned from docker after a plain run, but the description (help) does not hint at any such thing.
2013-07-29 14:17:15 -07:00
Guillaume J. Charmes
f35491190a Merge pull request #1233 from fmd/1136-environment-variables
+ Builder: CmdAdd and CmdEnv now respect Dockerfile-set ENV variables
2013-07-29 13:43:13 -07:00
Guillaume J. Charmes
5b27652ac6 Make sure the index also receives the checksums 2013-07-29 11:30:21 -07:00
Guillaume J. Charmes
394941b6b0 Switch json/payload order 2013-07-29 11:30:17 -07:00
Guillaume J. Charmes
0f134b4bf8 Remove unused parameter 2013-07-29 11:30:17 -07:00
Guillaume J. Charmes
0badda9f15 Refactor the image size storage 2013-07-29 11:30:17 -07:00
Guillaume J. Charmes
e3f68b22d8 Handle extra parameter within checksum calculations 2013-07-29 11:30:17 -07:00
Guillaume J. Charmes
8ca7b0646e Refactor checksum 2013-07-29 11:30:17 -07:00
David Calavera
10e37198aa Keep the loop to allow resizing more than once. 2013-07-29 11:13:59 -07:00
Guillaume J. Charmes
f7542664e3 Make sure ADD will create everything in 0755 2013-07-29 11:03:09 -07:00
Guillaume J. Charmes
950d0312dc Merge pull request #1322 from calavera/prompt_without_defaults
Do not show empty parentheses if the default configuration is missing.
2013-07-29 10:53:43 -07:00
David Calavera
c8ec36d1b9 Remove unnecessary signal conditional. 2013-07-29 10:28:41 -07:00
Daniel Mizyrycki
17ffb0ac84 testing, issue #1331: Add registry functional test to docker-ci 2013-07-29 09:45:19 -07:00
Victor Vieux
b2aa877bf0 fix #1314 discard error when loading old container format 2013-07-29 16:40:35 +00:00
Victor Vieux
bb241c10e2 add regression tests 2013-07-29 12:16:14 +00:00
Victor Vieux
3852d05990 add ParseRepositoryTag tests 2013-07-29 12:16:01 +00:00
Victor Vieux
63876e7dbd use ParseRepositoryTag instead on split on : in imagedelete 2013-07-29 12:15:27 +00:00
Solomon Hykes
97a2dc96f2 Remove deprecated copy from README 2013-07-28 12:57:09 -07:00
David Calavera
88b6ea993d Remove unused argument. 2013-07-27 10:17:57 -07:00
David Calavera
d4f7039793 Do not show empty parentheses if the default configuration is missing. 2013-07-27 10:00:36 -07:00
David Calavera
bb06fe8dd9 Allow to generate signals when termios is in raw mode. 2013-07-27 09:14:38 -07:00
Guillaume J. Charmes
4399f65fb8 Merge pull request #1318 from gaffo/compile_docs
Add required go version for compilation
2013-07-26 18:33:10 -07:00
Mike Gaffney
2d85a20c71 Add required go version for compilation 2013-07-26 18:29:27 -07:00
David Calavera
7cc90f2bc5 Use a more idiomatic syntax to capture the exit. 2013-07-26 18:12:05 -07:00
David Calavera
01ce312c2d Exit from docker login on SIGTERM and SIGINT.
Fixes #1299.
2013-07-26 17:40:45 -07:00
Guillaume J. Charmes
c01d17d77d Merge pull request #1313 from titanous/update-authors
Update AUTHORS
2013-07-26 17:04:05 -07:00
Guillaume J. Charmes
ed0ba04da6 Merge pull request #1316 from dotcloud/1295-mkdir_ADD_issue
- Builder: Create directories with 755 instead of 700 within ADD instruction
2013-07-26 15:12:45 -07:00
Guillaume J. Charmes
b15cfd3530 - Builder: Create directories with 755 instead of 700 within ADD instruction 2013-07-26 14:57:16 -07:00
Guillaume J. Charmes
a438d505ba Merge pull request #1272 from dotcloud/improve_registry_cookie
Make sure the cookie is used in all registry queries
2013-07-26 14:26:29 -07:00
Jonathan Rudenberg
5eb590e79d Update AUTHORS 2013-07-26 15:48:01 -04:00
Jonathan Rudenberg
bdc79ac8b2 Bind daemon to 0.0.0.0 in Vagrant. Fixes #1304 2013-07-26 15:45:00 -04:00
Solomon Hykes
a97d858b2a Clean up 'manifesto' in docs 2013-07-26 10:21:17 -07:00
Victor Vieux
e592f1b298 add regression test 2013-07-26 10:30:36 +00:00
Victor Vieux
513a567483 fix docs 2013-07-26 10:04:46 +00:00
Victor Vieux
faf103e6ec Merge pull request #1305 from gaffo/fix-spelling
Change reserve-compatibility to reverse-compatibility
2013-07-26 02:40:56 -07:00
Victor Vieux
e608296bc6 fix wrong untag when using rmi via id 2013-07-26 09:19:26 +00:00
Mike Gaffney
4ebe2cf348 Change reserve-compatibility to reverse-compatibility 2013-07-26 01:10:42 -07:00
Thatcher Peskens
f4b63d9eea Enabled the docs to generate manpages.
* changed conf.py to reference toctree.rst instead of index
* Added note to README to upgrade your sphinx to the latest version to prevent a bug with .. note:: blocks.
2013-07-25 17:19:58 -07:00
Andy Rothfusz
422378cb85 Merge pull request #1274 from dhrp/headings_website
Removed website and updated headings.
2013-07-25 16:43:27 -07:00
Guillaume J. Charmes
594c818d85 Merge pull request #1281 from dotcloud/505-output_after_pipe-fix
- Runtime: Fixes #505 - Make sure all output is sent on the network before closing
2013-07-25 13:01:31 -07:00
Fareed Dudhia
d86898b014 Fixes 1136; Reopened from 1175 with latest changes. 2013-07-25 19:45:49 +00:00
Guillaume J. Charmes
be087c9c82 Merge pull request #1293 from dotcloud/585_use_0755_instead_of_0700
use 0755 instead of 0700
2013-07-25 12:40:44 -07:00
Guillaume J. Charmes
9cc8b72a38 Merge pull request #1288 from dlintw/1286-improve-import-txz-description
Fixes #1286 improve-import-txz-description
2013-07-25 12:37:37 -07:00
Guillaume J. Charmes
3425c1b84c Make sure the cookie is used in all registry queries 2013-07-25 12:31:23 -07:00
shin-
12d575a6b1 Script cleans up downloaded repos, uses quiet build 2013-07-25 21:00:36 +02:00
Victor Vieux
1c509f4350 use 0755 instead of 0700 2013-07-25 15:45:15 +00:00
Victor Vieux
48833c7b07 add regression test + go fmt 2013-07-25 15:20:56 +00:00
Victor Vieux
f385f1860b ensure mount in commit 2013-07-25 15:18:34 +00:00
Victor Vieux
01e98bf0dd fix errors 2013-07-25 14:32:46 +00:00
Victor Vieux
f1dd299227 Use VT100 escape codes
2013-07-25 14:16:36 +00:00
Victor Vieux
7df6c4b9ad Merge pull request #1283 from crosbymichael/username-not-set
Copy authConfigs on save so data is not modified
2013-07-25 06:31:34 -07:00
Daniel YC Lin
8f6b6d5784 Fixes #1286 2013-07-25 15:36:32 +08:00
Michael Crosby
0fc11699ab Add regression test for authConfig overwrite 2013-07-25 03:25:16 +00:00
Michael Crosby
9332c00ca5 Copy authConfigs on save so data is not modified
SaveConfig sets the Username and Password to an empty string
on save. A copy of the authConfigs needs to be made so that the
in-memory data is not modified.
2013-07-25 00:35:52 +00:00
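
The copy-before-mutate fix described above can be sketched as follows; the struct fields and function name are simplified stand-ins for the real auth package, not its API.

```go
package main

import "fmt"

type authConfig struct {
	Username, Password, Email string
}

// saveConfig builds a separate map for the on-disk format so that blanking
// Username/Password does not clobber the credentials still held in memory.
func saveConfig(configs map[string]authConfig) map[string]authConfig {
	out := make(map[string]authConfig, len(configs))
	for registry, cfg := range configs {
		cfg.Username = "" // persist only what the file format needs
		cfg.Password = ""
		out[registry] = cfg
	}
	return out
}

func main() {
	live := map[string]authConfig{"https://index.docker.io/v1/": {Username: "me", Password: "secret"}}
	_ = saveConfig(live)
	fmt.Println(live["https://index.docker.io/v1/"].Username) // still "me"
}
```
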
Guillaume J. Charmes
fd9ad1a194 Fixes #505 - Make sure all output is sent on the network before closing 2013-07-24 15:48:51 -07:00
Guillaume J. Charmes
6ae3305040 Merge pull request #1277 from dotcloud/add_commands_unit_tests
* Tests: Reimplement old Commands unit tests in order to ensure behavior
2013-07-24 15:24:51 -07:00
shin-
94053b4225 Brew: Added cache prefilling and build summary 2013-07-24 23:48:16 +02:00
shin-
77ff537697 Brew: Fixed docker-py requirement 2013-07-24 23:48:16 +02:00
shin-
362f1735e6 Brew: Avoid duplicate commands, added --debug option, added local repo support 2013-07-24 23:48:16 +02:00
shin-
7813f2a25e Updated git repos for precise and raring 2013-07-24 23:48:16 +02:00
shin-
3781a2cc4b Added definition file for busybox and updated ubuntu 2013-07-24 23:48:16 +02:00
shin-
d47df21a33 Add brew script to the contrib folder 2013-07-24 23:48:16 +02:00
Joffrey F
0ac672fea6 Use full hash references in library/
Brew doesn't currently support abridged hashes
2013-07-24 23:48:16 +02:00
Joffrey F
c19fa83a8a Use extended syntax (indicate what type of object needs to be checked out) 2013-07-24 23:48:16 +02:00
Solomon Hykes
844a9ab85e Define tags in the library files instead of upstream 2013-07-24 23:48:16 +02:00
Solomon Hykes
a45490243b Adding Joffrey as library maintainer 2013-07-24 23:48:16 +02:00
Solomon Hykes
b06f627139 Library: hipache and ubuntu base 2013-07-24 23:48:16 +02:00
Andy Rothfusz
cc0e091a6b Merge pull request #1278 from metalivedev/logotweaks
Cleaned up long lines, switched graphic to Docker logo. General cleanup.
2013-07-24 10:37:45 -07:00
Victor Vieux
8742649aa7 improve client output 2013-07-24 17:10:59 +00:00
Victor Vieux
0e71e368a8 Add ID to JSONMessage in pull
Use goroutines to pull in parallel
If multiple images are pulled at the same time, each image's progress is displayed on a new line
2013-07-24 15:41:34 +00:00
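
The concurrency pattern described above (one goroutine per image, each progress message tagged with the image name so interleaved output stays readable) can be sketched like this; the pull itself is faked and only the fan-out/fan-in structure is shown.

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	images := []string{"ubuntu", "busybox", "hipache"}
	progress := make(chan string)

	var wg sync.WaitGroup
	for _, name := range images {
		wg.Add(1)
		go func(name string) {
			defer wg.Done()
			progress <- fmt.Sprintf("%s: pulling", name)
			progress <- fmt.Sprintf("%s: done", name)
		}(name)
	}
	go func() {
		wg.Wait()
		close(progress)
	}()

	for line := range progress { // each image's progress on its own line
		fmt.Println(line)
	}
}
```
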
Victor Vieux
dfc076a123 Merge pull request #1243 from dotcloud/add_lxc_version_docker_info
*Client: LXC and Kernel version to docker info in debug mode
2013-07-24 07:44:23 -07:00
Victor Vieux
066873ebd2 rebase master 2013-07-24 14:38:40 +00:00
Victor Vieux
f6e1055727 Merge pull request #1064 from monnand/156-user-agent-header
Add user agent when calling the registry
2013-07-24 06:40:53 -07:00
Victor Vieux
6057e6ad70 add kernel version 2013-07-24 13:36:55 +00:00
Victor Vieux
ca39f15fa3 bump master 2013-07-24 13:28:01 +00:00
Victor Vieux
7953d1becb Merge pull request #1271 from dotcloud/1246_change_cfg_format
*Auth: Change dockercfg to json and support multiple auth remote
2013-07-24 05:29:51 -07:00
Victor Vieux
f4b41e1a6c fix tests 2013-07-24 12:28:22 +00:00
Victor Vieux
4bc3328e80 bump master 2013-07-24 12:18:53 +00:00
Victor Vieux
ebe17f57ff Merge pull request #1180 from dotcloud/1167_events_endpoint-feature
*Api: Add the /events endpoint
*Client: Add the docker events command
2013-07-24 04:53:36 -07:00
Victor Vieux
ee05f97c9a Merge branch 'master' into 1167_events_endpoint-feature 2013-07-24 11:49:04 +00:00
Andy Rothfusz
78c02d038f Cleaned up long lines, switched graphic to Docker logo. General cleanup. 2013-07-23 18:13:53 -07:00
Guillaume J. Charmes
bc823acc25 Reimplement old Commands unit tests in order to ensure behavior 2013-07-23 17:27:49 -07:00
Daniel Mizyrycki
c21c5afe00 Merge pull request #1147 from dotcloud/1104-testing-static
testing, issue #1104: Make the test use static flags
2013-07-23 17:07:36 -07:00
Nan Monnand Deng
1ae54707a0 versionCheckers()->versionInfos(). 2013-07-23 17:17:31 -04:00
Nan Monnand Deng
ede1e6d475 Rename: VersionChecker->VersionInfo. 2013-07-23 17:05:13 -04:00
Thatcher Peskens
e701dce339 Docs: Fixed navigation links to about page and community page
Website: Removed the website sources from the repo. The website sources are now hosted on github.com/dotcloud/www.docker.io/
2013-07-23 13:05:06 -07:00
Victor Vieux
a93a87f64a Merge branch 'stfp-858-disable-network-configuration' 2013-07-23 19:55:50 +00:00
Victor Vieux
7aba68cd54 update AUTHORS 2013-07-23 19:55:38 +00:00
Victor Vieux
dfc64d157a Merge pull request #1241 from ryfow/patch-1
Make the ENTRYPOINT example work
2013-07-23 08:45:04 -07:00
Victor Vieux
a41384ad73 add die event 2013-07-23 15:42:34 +00:00
Victor Vieux
ed7a4236b3 Add tests for the api 2013-07-23 15:42:34 +00:00
Victor Vieux
040c3b50d0 use non-blocking channel to prevent dead-lock and add test for server 2013-07-23 15:42:34 +00:00
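
The non-blocking send the commit above refers to is typically a `select` with a `default` branch, so a slow or absent listener does not block the whole event loop; a minimal sketch under that assumption:

```go
package main

import "fmt"

// broadcast drops the event for any listener whose buffer is full
// instead of blocking the sender.
func broadcast(listeners []chan string, event string) {
	for _, l := range listeners {
		select {
		case l <- event:
			// delivered
		default:
			// listener is not keeping up; skip rather than deadlock
		}
	}
}

func main() {
	fast := make(chan string, 1)
	stuck := make(chan string) // nobody is reading this one
	broadcast([]chan string{fast, stuck}, "create")
	fmt.Println(<-fast)
}
```
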
Victor Vieux
8b3519c5f7 getEvents a bit simpler 2013-07-23 15:42:34 +00:00
Victor Vieux
ec559c02b8 add docs 2013-07-23 15:42:34 +00:00
Victor Vieux
2e4d4c9f60 add since for polling, rename some vars 2013-07-23 15:41:19 +00:00
Victor Vieux
b8d52ec266 add timestamp and change untagged -> untag 2013-07-23 15:41:19 +00:00
Victor Vieux
b5da816487 basic version of the /events endpoint 2013-07-23 15:41:19 +00:00
Victor Vieux
3bae188b8d change dockercfg to json and support multiple auth remote 2013-07-23 15:07:18 +00:00
Victor Vieux
8165e51ecc Merge branch '858-disable-network-configuration' of https://github.com/stfp/docker into stfp-858-disable-network-configuration 2013-07-23 08:44:12 +00:00
Thatcher
9a15db21a6 Merge pull request #1269 from dhrp/new-website-links
Added new docker logo to the documentation header, and added other links to docs header. Tnx @keeb !
2013-07-22 20:48:29 -07:00
Thatcher Peskens
58a1c5720a Added new docker logo to the documentation header, and added other links. 2013-07-22 20:26:40 -07:00
Stefan Praszalowicz
bc172e5e5f Invert network disable flag and logic (unbreaks TestAllocate*PortLocalhost) 2013-07-22 19:00:35 -07:00
Solomon Hykes
6745bdd0b3 Typo in 3rd-party 2013-07-22 18:39:58 -07:00
Solomon Hykes
5714f0a74e Hack: completed step 12 of the bootcamp 2013-07-22 18:36:36 -07:00
Solomon Hykes
ce43f4af1c Hack: first draft of the maintainer boot camp. Work in progress! 2013-07-22 18:32:55 -07:00
Sridatta Thatipamala
945033f1cc change permissions of initLayer to be readable by non-root users 2013-07-22 14:55:07 -07:00
Victor Vieux
5d1609f5a2 Merge pull request #1265 from dotcloud/better-bridge-defaults
*Network: Improve default network configuration
2013-07-22 13:54:35 -07:00
Solomon Hykes
4714f102d7 Allocate a /16 IP range by default, with fallback to /24. Try a total of 12 ranges instead of 3. 2013-07-22 12:06:24 -07:00
Guillaume J. Charmes
a675da65e9 Merge pull request #1262 from dotcloud/1253_add_directory_check
* Runtime: fix error message when the directory is invalid
2013-07-22 11:54:22 -07:00
Victor Vieux
e39755666b Merge pull request #1236 from dotcloud/1234_overwrites_expose-fix
*Builder: fix overwrites EXPOSE
2013-07-22 09:51:11 -07:00
Victor Vieux
9adba5e2e6 Merge pull request #1264 from dotcloud/fix_tests_env
fix test env
2013-07-22 09:26:34 -07:00
Victor Vieux
5c1af383eb fix test env 2013-07-22 16:26:05 +00:00
Victor Vieux
c81662eae4 Merge branch 'master' into 1237-improve_docker_top-feature
Conflicts:
	docs/sources/api/docker_remote_api.rst
2013-07-22 16:22:11 +00:00
Victor Vieux
8ea9ccf3a7 Merge pull request #1244 from dotcloud/1020_add_variable
*Runtime: Add container=lxc in default env
2013-07-22 09:17:30 -07:00
Victor Vieux
74a2b13687 fix error message when the directory is invalid 2013-07-22 14:52:05 +00:00
Victor Vieux
4e7f2b757e Merge pull request #1158 from cespare/1142-docker-add-fix
*Buildfile: determine a filename from a URL if the destination is a directory
2013-07-22 07:07:04 -07:00
Ken Cochrane
2bba279cf1 Merge pull request #1259 from dsissitka/patch-3
*Documentation: Updated the stop command's docs.
2013-07-22 06:59:02 -07:00
Ken Cochrane
56da77a548 Merge pull request #1258 from dsissitka/patch-2
* Documentation: Added top to the list of commands in the sidebar.
2013-07-22 06:58:09 -07:00
Victor Vieux
494b575213 Merge pull request #1255 from dsissitka/patch-1
Fixed a couple of minor syntax errors.
2013-07-22 06:45:25 -07:00
Caleb Spare
c383d59880 Update ADD documentation to specify new behavior. 2013-07-21 23:32:06 -07:00
Caleb Spare
416fdaa3d5 Remove some trailing whitespace. 2013-07-21 23:32:06 -07:00
Caleb Spare
2b0ebf5d32 Buildfile: for ADD command, determine filename from URL.
This is used if the destination is a directory. This makes the URL
download behavior more closely match file copying.

Fixes #1142.
2013-07-21 23:32:06 -07:00
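
The behavior described above (when ADD's destination is a directory, derive the file name from the last element of the URL path, mimicking local file copying) could look roughly like this; the helper name and error handling are illustrative, not the actual buildfile code.

```go
package main

import (
	"fmt"
	"net/url"
	"path"
)

// remoteDestination appends the URL's base name when dest ends with "/".
func remoteDestination(rawURL, dest string) (string, error) {
	if dest == "" || dest[len(dest)-1] != '/' {
		return dest, nil // explicit file name given
	}
	u, err := url.Parse(rawURL)
	if err != nil {
		return "", err
	}
	name := path.Base(u.Path)
	if name == "/" || name == "." || name == "" {
		return "", fmt.Errorf("cannot determine filename from %q", rawURL)
	}
	return dest + name, nil
}

func main() {
	fmt.Println(remoteDestination("http://example.com/pkg/app.tar.gz", "/usr/src/"))
}
```
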
Caleb Spare
f236e62d9d Test pulling remote files using ADD in a buildfile. 2013-07-21 23:32:01 -07:00
Stefan Praszalowicz
964e826a9b Document -b none 2013-07-21 18:01:52 -07:00
Stefan Praszalowicz
49673fc45c Support completely disabling network configuration with docker -d -b none 2013-07-21 17:49:09 -07:00
Stefan Praszalowicz
3342bdb331 Support networkless containers with new docker run option '-n' 2013-07-21 17:11:47 -07:00
David Sissitka
1d02a7ffb6 Updated the stop command's docs. 2013-07-21 19:00:18 -04:00
dsissitka
788935175e Added top to the list of commands in the sidebar. 2013-07-21 18:30:51 -04:00
dsissitka
32663bf431 Fixed a couple of minor syntax errors. 2013-07-20 21:27:55 -04:00
unclejack
df86cb9a5c make docker run handle SIGINT/SIGTERM 2013-07-20 13:47:13 +03:00
Guillaume J. Charmes
e3be2e959b Merge pull request #1242 from dotcloud/remove_usage_from_test
remove usage from tests
2013-07-19 14:07:24 -07:00
Victor Vieux
67f1e3f5ed add container=lxc in default env 2013-07-19 17:22:16 +00:00
Andy Rothfusz
23ea9b8968 Merge pull request #1235 from metalivedev/cleandocbld
Make docs build without warnings or errors. Minor additional cleanup.
2013-07-19 09:44:53 -07:00
Victor Vieux
921c6994b1 add LXC version to docker info in debug mode 2013-07-19 16:36:23 +00:00
Victor Vieux
ea12588524 remove usage from tests 2013-07-19 15:56:00 +00:00
Ryan Fowler
e8ad82f9ba Make the ENTRYPOINT example work
The incantation listed in the ENTRYPOINT example didn't actually pass the arguments to your script. Changing the definition to an array fixes this.
2013-07-19 10:11:21 -05:00
Victor Vieux
6e2e4cad73 Merge pull request #1204 from dotcloud/tests-less-copypaste
Hack: use helper functions in tests for less copy-pasting
2013-07-19 07:55:04 -07:00
Victor Vieux
2e0e455fa6 rebase master 2013-07-19 14:48:32 +00:00
Victor Vieux
d93742fe9a Merge pull request #1239 from dotcloud/fix_utils_tests
fix error in utils tests
2013-07-19 06:59:07 -07:00
Victor Vieux
2e3b660dd0 fix error in utils tests 2013-07-19 13:56:36 +00:00
Victor Vieux
0bd534adcf Merge pull request #1211 from dotcloud/new_logs
*Runtime: Logs are now synchronised
2013-07-19 06:43:29 -07:00
Victor Vieux
e59dd2c62c Merge pull request #1159 from unclejack/add_container_id_file_to_run
*Client: Add support for container ID files (a la pidfile)
2013-07-19 06:11:22 -07:00
unclejack
25be79208a create the cidfile before creating the container
This change makes docker attempt to create the container ID file and
open it before attempting to create the container. This avoids leaving
a stale container behind if docker has failed to create and open the
container ID file.

The container ID is written to the file after the container is created.
2013-07-19 16:03:45 +03:00
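
The ordering described above (open and validate the container ID file before creating the container, and write the ID only after creation succeeds) can be sketched as follows; `createContainer` is a hypothetical stand-in for the real client call.

```go
package main

import (
	"fmt"
	"os"
)

func runWithCidfile(cidfile string, createContainer func() (string, error)) error {
	// Fail early if the cidfile cannot be created, before any container exists.
	f, err := os.OpenFile(cidfile, os.O_WRONLY|os.O_CREATE|os.O_EXCL, 0o644)
	if err != nil {
		return fmt.Errorf("cannot create container ID file: %w", err)
	}
	defer f.Close()

	id, err := createContainer() // only reached if the cidfile is writable
	if err != nil {
		os.Remove(cidfile) // don't leave an empty cidfile behind
		return err
	}
	_, err = f.WriteString(id) // ID is written after the container is created
	return err
}

func main() {
	err := runWithCidfile("/tmp/demo.cid", func() (string, error) { return "abc123", nil })
	fmt.Println(err)
}
```
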
unclejack
2a3b91e3b6 docs - add example for cidfile 2013-07-19 16:03:45 +03:00
unclejack
221ee504aa docs - add cidfile flag to run docs 2013-07-19 16:03:45 +03:00
unclejack
64e74cefb7 add support for container ID files (a la pidfile) 2013-07-19 16:03:45 +03:00
Victor Vieux
eb4a0271fb bump api version to 1.4 2013-07-19 10:34:55 +00:00
Victor Vieux
cfec1c3e1b add ps args to docker top 2013-07-19 10:06:32 +00:00
Victor Vieux
2b5386f039 add regression test from @crosbymichael 2013-07-19 03:01:39 +00:00
Victor Vieux
a0eec14c7d fix overwrites EXPOSE 2013-07-19 02:47:35 +00:00
Andy Rothfusz
54f9cdb0c3 Make docs build without warnings or errors. Minor additional cleanup. 2013-07-18 19:04:51 -07:00
Guillaume J. Charmes
d6fb313220 Merge pull request #1207 from crosbymichael/819-use-persistent-volume
* Runtime: Do not overwrite container volumes from config
2013-07-18 18:51:00 -07:00
Daniel Mizyrycki
0aa2470c76 Merge pull request #1232 from dotcloud/1217-testing-coverage
Testing, issue #1217: Add coverage testing into docker-ci
2013-07-18 14:27:37 -07:00
Victor Vieux
0afed3eded handle -dev 2013-07-18 20:56:41 +00:00
Victor Vieux
39ff542142 Merge branch 'master' into add_last_version-feature 2013-07-18 20:51:31 +00:00
Victor Vieux
edc68f84f3 Merge pull request #1230 from dotcloud/switch_dev
switch version to -dev
2013-07-18 13:50:40 -07:00
Victor Vieux
0089dd05e9 switch version to -dev 2013-07-18 20:50:04 +00:00
Victor Vieux
51f6c4a737 Merge pull request #1227 from dotcloud/bump_0.5.0
Bump to 0.5.0
2013-07-18 11:51:29 -07:00
Nan Monnand Deng
cd209f406e documentation. 2013-07-18 14:22:49 -04:00
Andy Rothfusz
f4eaec3e1e Merge pull request #1226 from metalivedev/easydockerfile
Make dockerfile docs easier to find. Clean up formatting.
2013-07-18 10:14:22 -07:00
Victor Vieux
b083418257 change -b -> -v and add udp example 2013-07-18 16:25:14 +00:00
Victor Vieux
5794857f7a Merge pull request #1169 from crosbymichael/buildfile-tests
Add unit tests for buildfile config instructions
2013-07-18 08:35:23 -07:00
Michael Crosby
e7f3f6fa5a Add unit tests for buildfile config instructions
Add tests for instructions in the buildfile that
modify the config of the resulting image.
2013-07-18 05:37:28 -09:00
Victor Vieux
1b0fd7ead3 add debug and simplify docker logs 2013-07-18 13:29:40 +00:00
Victor Vieux
a926cd4d88 add legacy support 2013-07-18 13:25:47 +00:00
Andy Rothfusz
aa5671411b Make dockerfile docs easier to find. Clean up formatting. 2013-07-17 18:56:40 -07:00
Solomon Hykes
5d8efc107d + Runtime: inject dockerinit at /.dockerinit instead of overwriting /sbin/init. This makes it possible to run /sbin/init inside a container. 2013-07-17 17:13:34 -07:00
Victor Vieux
f8dfd0aa5e Merge pull request #1225 from dotcloud/hotfix_docker_rmi
*Runtime: improve docker rmi via id
2013-07-17 14:31:56 -07:00
Victor Vieux
1a226f0e28 add VolumesFrom to MergeConfig, and test 2013-07-17 21:06:46 +00:00
Andy Rothfusz
3dbf9c6560 Merge pull request #1219 from metalivedev/docs-repoupdate
Update docs with 0.5 repository information.
2013-07-17 13:53:12 -07:00
Victor Vieux
7c00201222 add Volumes and VolumesFrom to CompareConfig 2013-07-17 20:51:25 +00:00
Victor Vieux
2db99441c8 prevent any kind of operation simultaneously 2013-07-17 20:39:36 +00:00
Guillaume J. Charmes
de563a3ea3 Merge pull request #1194 from crosbymichael/build-verbose
* Builder: Add verbose output to docker build
2013-07-17 12:53:06 -07:00
Victor Vieux
9cf2b41c05 change rm usage in docs 2013-07-17 19:24:54 +00:00
Victor Vieux
f310b875f8 Merge branch 'master' of https://github.com/kencochrane/docker into kencochrane-master 2013-07-17 19:23:06 +00:00
Solomon Hykes
ac14c463d5 Changed date on changelog 2013-07-17 11:51:26 -07:00
Guillaume J. Charmes
578e888915 Merge pull request #1212 from dotcloud/merge_v_b_options
* Runtime: Merge -b and -v options
2013-07-17 11:43:47 -07:00
Victor Vieux
5231bf3653 Merge pull request #1222 from lopter/master
Always stop the opposite goroutine in network_proxy.go (closes #1213)
2013-07-17 11:40:33 -07:00
Solomon Hykes
8af945f353 Small changes in changelog wording 2013-07-17 11:39:38 -07:00
Ken Cochrane
d0e8ca1257 updated with notes from @vieux 2013-07-17 13:46:11 -04:00
Victor Vieux
5a934fc923 fix docker rmi via id 2013-07-17 15:48:53 +00:00
Louis Opter
c766d064ac Always stop the opposite goroutine in network_proxy.go (closes #1213) 2013-07-17 01:05:11 -07:00
Andy Rothfusz
0356081c0a Update repository information. 2013-07-16 17:04:41 -07:00
Daniel Mizyrycki
6e8bfc8d12 Testing, issue #1217: Add coverage testing into docker-ci 2013-07-16 13:45:43 -07:00
Guillaume J. Charmes
18e91d5f85 Update docs 2013-07-16 10:14:21 -07:00
Victor Vieux
48a892bee5 Add CompareConfig test 2013-07-16 15:58:23 +00:00
Victor Vieux
fb005a3da8 add server.ContainerTop, server.poolAdd and ser.poolRemove tests 2013-07-16 14:38:18 +00:00
Guillaume J. Charmes
1004d57b85 Hotfix: make sure ./utils tests pass 2013-07-15 17:58:23 -07:00
Nick Stinemates
f9e4ef5eb0 Merge pull request #1210 from dotcloud/improve_configmerge
improve mergeconfig, ...
2013-07-15 18:04:12 -07:00
Guillaume J. Charmes
eefbadd230 Merge -b and -v options 2013-07-15 17:51:32 -07:00
Guillaume J. Charmes
bc21b3ebf0 Bump version to 0.5.0 2013-07-15 14:57:52 -07:00
Solomon Hykes
608fb2a21e Merge pull request #1184 from dotcloud/1176-packaging-release
Hack: document PPA release step
2013-07-15 13:59:55 -07:00
Michael Crosby
92cbb7cc80 Do not overwrite container volumes from config
Fixes #819 Use same persistent volume when a container is restarted
2013-07-15 11:59:11 -09:00
Solomon Hykes
45050d9887 Merge pull request #1188 from dotcloud/1174-packaging-binary
Packaging: add pure binary to docker release
2013-07-15 13:59:06 -07:00
Daniel Mizyrycki
75a0052e64 packaging, issue #1176: Document PPA release step 2013-07-15 12:13:51 -07:00
Guillaume J. Charmes
c8efd08384 Merge pull request #1208 from crosbymichael/1201-rw-volumes-from
- Volumes: Copy VolumesRW values when using --volumes-from
2013-07-15 10:59:51 -07:00
Guillaume J. Charmes
454cd147fb Merge pull request #1096 from dotcloud/remove_os_user
* Runtime: Remove the os.user dependency and manually lookup /etc/passwd instead
2013-07-15 10:19:09 -07:00
Guillaume J. Charmes
e41507bde2 Add unit test to check wrong uid case 2013-07-15 10:05:09 -07:00
Victor Vieux
599f85d4e4 store both logs in a same file, as JSON 2013-07-15 16:17:58 +00:00
Victor Vieux
5756ba9bc4 Merge branch 'master' into new_logs 2013-07-15 13:57:54 +00:00
Victor Vieux
193a7e1dc1 improve mergeconfig: if dns, portspec, env or volumes are specified in docker run, append instead of replacing 2013-07-15 13:12:33 +00:00
Jonas Pfenniger
0900d3b7a6 docker.upstart: use the same start/stop events as sshd
This is probably more solid
2013-07-15 11:41:19 +01:00
Jonas Pfenniger
24dd50490a docker.upstart: avoid spawning a sh process
start script / end script create an intermediate sh process.
2013-07-15 11:40:35 +01:00
Michael Crosby
5ae8c7a985 Copy VolumesRW values when using --volumes-from
Fixes #1201
2013-07-14 18:23:20 -09:00
benoitc
d639f61ec1 reuse the type 2013-07-13 19:19:38 +02:00
Victor Vieux
9b57f9187b Merge pull request #1200 from ToothlessGear/fix-whitespaces_progessbar
Fix progressbar, without messing up other outputs
2013-07-13 08:50:50 -07:00
Victor Vieux
50e45b485f Merge pull request #1190 from dotcloud/1189-add_debug_error
* RemoteAPI: Improve debug
2013-07-13 08:15:59 -07:00
Victor Vieux
2051ebc0eb Merge pull request #1198 from dotcloud/fix_pull_tag
Fixed tag option for "docker pull" (the option was ignored)
2013-07-13 08:14:47 -07:00
benoitc
a3b1a9f01a useless function. forgot to remove it. 2013-07-13 17:04:04 +02:00
benoitc
507cef8bce useless type 2013-07-13 17:03:04 +02:00
benoitc
166eba3e28 put the websocket route in the map containing all routes
Instead of handling the websocket differently just handle it as a normal
route and upgrade it to a websocket.
2013-07-13 17:00:40 +02:00
Solomon Hykes
080243f040 Hack: use helper functions in tests for less copy-pasting 2013-07-12 17:56:55 -07:00
Guillaume J. Charmes
933b9d44e1 Merge pull request #1054 from nickstenning/getimage-by-tag
* Runtime: Reverse priority of tag lookup in TagStore.GetImage
2013-07-12 16:15:04 -07:00
Nick Stenning
44b3e8d51b Reverse priority of tag lookup in TagStore.GetImage
Currently, if you have the following images:

    foo/bar      1       23b27d50fb49
    foo/bar      2       f2b86ec3fcc4

And you issue the following command:

    docker tag foo/bar:2 foo/bar latest

docker will tag the "wrong" image, because the image id for foo/bar:1 starts
with a "2". That is, you'll end up with the following:

    foo/bar      1       23b27d50fb49
    foo/bar      2       f2b86ec3fcc4
    foo/bar      latest  23b27d50fb49

This commit reverses the priority given to tags vs. image ids in the
construction `<user>/<repo>:<tagOrId>`, meaning that if a tag is an exact
match for the specified `tagOrId`, that image will be tagged in preference to
an image whose id happens to start with the same character sequence.
2013-07-12 23:56:36 +01:00
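
A minimal Go sketch of the reversed lookup priority described above (the type and function are illustrative, not the real TagStore API): an exact tag match is tried first, and only then is the argument treated as an image-ID prefix.

    package main

    import (
    	"fmt"
    	"strings"
    )

    // repo is a hypothetical, simplified view of a repository: tag -> image ID.
    type repo map[string]string

    // getImage checks for an exact tag match first, then falls back to
    // treating tagOrID as an image-ID prefix.
    func getImage(r repo, tagOrID string) (string, bool) {
    	if id, ok := r[tagOrID]; ok { // exact tag match wins
    		return id, true
    	}
    	for _, id := range r { // only then try an image-ID prefix
    		if strings.HasPrefix(id, tagOrID) {
    			return id, true
    		}
    	}
    	return "", false
    }

    func main() {
    	r := repo{"1": "23b27d50fb49", "2": "f2b86ec3fcc4"}
    	id, _ := getImage(r, "2")
    	fmt.Println(id) // f2b86ec3fcc4 -- the tag "2", not the ID starting with "2"
    }
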
Daniel Mizyrycki
9bf8ad741f Merge pull request #1083 from hukeli/debian
Keep debian package up-to-date
2013-07-12 15:24:37 -07:00
Daniel Mizyrycki
9913ebbe21 Merge pull request #1203 from dotcloud/1202-packaging-debian
Packaging, issue #1202: Upgrade vagrantfile go in debian packaging
2013-07-12 15:11:57 -07:00
Daniel Mizyrycki
c7a48e91d8 Packaging, issue #1202: Upgrade vagrantfile go in debian packaging 2013-07-12 15:06:12 -07:00
Solomon Hykes
2cbf2200ac Merge pull request #1195 from dotcloud/tests-cleanup
* Hack: tests cleanup
2013-07-12 14:51:59 -07:00
Marcus Farkas
bac5772312 *Client: Fix the progressbar, without manipulating other outputs
Prior to this commit, 'docker images' and other commands which used utils.HumanSize()
showed unnecessary whitespace.
Formatting of progress has been moved to FormatProgess(), justifying the string
directly in the template.
2013-07-12 20:15:25 +02:00
Marcus Farkas
a6e5a397bd Revert "Client: better progressbar output"
This reverts commit 3ac68f1966.
2013-07-12 20:08:45 +02:00
Ken Cochrane
364f48d6c7 updated the rmi command docs, they had typos 2013-07-12 14:05:26 -04:00
Ken Cochrane
4174e7aa7a updated the help commands on a few commands that were not correct 2013-07-12 13:55:26 -04:00
Guillaume J. Charmes
eb38750d99 Remove the os.user dependency and manually lookup /etc/passwd instead 2013-07-12 10:49:47 -07:00
Sam Alba
cd0fef633c Fixed tag option for "docker pull" (the option was ignored) 2013-07-12 10:42:54 -07:00
Michael Crosby
d0c73c28df Add param to api docs for verbose build output 2013-07-12 06:22:56 -09:00
Victor Vieux
8e6c249e48 Merge pull request #1197 from crosbymichael/buildfile-doc-ordering
Fix Docker Builder documentation section numbers
2013-07-12 05:27:47 -07:00
Victor Vieux
752f99e8a1 Merge pull request #977 from dotcloud/966-improve_docker_login_parameters-feature
* Client: Add options to docker login to be able to use it via script
2013-07-12 05:07:25 -07:00
Victor Vieux
a909223ee2 Merge pull request #1102 from dotcloud/1098-store_hostconfig_tmp
* Runtime: bind mounts are now preserved upon container restart
2013-07-12 05:04:10 -07:00
Victor Vieux
8ff271fc74 Merge pull request #1192 from dotcloud/docker_port-fix
hotfix: fix broken docker port
2013-07-12 04:57:53 -07:00
Victor Vieux
9dfac1dd65 Merge pull request #1055 from dotcloud/list_container_processes-feature
* RemoteApi: /top to list running processes in a container
* Client: docker top to list running processes in a container
2013-07-12 04:56:12 -07:00
Victor Vieux
a8a6848ce0 fix tests regarding the new test image 2013-07-12 11:54:53 +00:00
Victor Vieux
9232d1ef62 Merge branch 'master' into list_container_processes-feature 2013-07-12 11:47:27 +00:00
Victor Vieux
e9011122fb use http://get.docker.io/latest 2013-07-12 11:45:40 +00:00
Michael Crosby
90483dc912 Fix Docker Builder documentation numbering 2013-07-11 16:41:19 -09:00
Solomon Hykes
6bdb6f226b Simplify unit tests code with mkRuntime() 2013-07-11 17:59:25 -07:00
Solomon Hykes
2ac1141980 Don't leave broken, commented out tests lying around. 2013-07-11 17:58:45 -07:00
Michael Crosby
1104d443cc Revert changes from PR 1030
With streaming output of the build,
the changes in 1030 are no longer required.
2013-07-11 15:52:08 -09:00
Michael Crosby
49044a9608 Fix buildfile tests after rebase 2013-07-11 15:37:26 -09:00
Guillaume J. Charmes
71d2ff4946 Hotfix: check the length of entrypoint before comparing. 2013-07-11 17:31:07 -07:00
Michael Crosby
474191dd7b Add verbose output to docker build
Verbose output is enabled by default and
the flag -q can be used to suppress the verbose output.
2013-07-11 15:27:33 -09:00
Guillaume J. Charmes
637eceb6a7 Merge pull request #1124 from crosbymichael/buildfile-volumes
+ Builder: Add VOLUME instruction to buildfile
2013-07-11 17:16:57 -07:00
Victor Vieux
976428f505 change output 2013-07-11 21:04:23 +02:00
Victor Vieux
affe7caf78 fix broken docker port 2013-07-11 19:28:15 +02:00
Victor Vieux
941e3e2ef0 wip 2013-07-11 17:18:28 +00:00
Victor Vieux
b7937e268f add debug for error in the server 2013-07-11 12:21:43 +00:00
Louis Opter
5a411fa38e Make the TestAllocate{UDP,TCP}PortLocalhost more reliable
- For the TCP test try again if socat wasn't listening yet;
- For the UDP test raise the timeout to a minute to workaround what
  seems to be an issue with Linux.
2013-07-10 18:25:53 -07:00
Daniel Mizyrycki
bf26ae03cf Packaging, issue #1174: Add pure binary to docker release 2013-07-10 17:39:00 -07:00
Andy Rothfusz
3363cd5cd0 Merge pull request #1178 from dotcloud/fix-dev-environment
Fix outdated docs explaining how to setup a dev environment
2013-07-10 16:53:22 -07:00
Daniel Mizyrycki
5c49a61353 Merge pull request #1183 from dotcloud/960-packaging-PPA
Packaging, issue #960: Document PUBLISH_PPA for staging/production release
2013-07-10 16:16:31 -07:00
Daniel Mizyrycki
f83c31e188 Packaging, issue #960: Document PUBLISH_PPA for staging/production release 2013-07-10 16:06:49 -07:00
Louis Opter
8f36467107 Raise the timeouts for the TCP/UDP localhost proxy tests
Sometimes these tests fail, let's see if that improves the situation.
2013-07-10 16:05:14 -07:00
Nan Monnand Deng
73e79a3310 reduce the number of string copy operations. 2013-07-10 18:59:43 -04:00
Nan Monnand Deng
34cf976866 format in the user agent header should follow RFC 2616 2013-07-10 18:59:43 -04:00
Nan Monnand Deng
e832b01349 Removed an unnecessary nil assignment 2013-07-10 18:56:49 -04:00
Nan Monnand Deng
26c8eae6fe Removed an unnecessary error check. 2013-07-10 18:56:49 -04:00
Nan Monnand Deng
d40efc4648 added client's kernel version 2013-07-10 18:56:49 -04:00
Nan Monnand Deng
5705a49308 Insert version checkers when call NewRegistry() 2013-07-10 18:56:49 -04:00
Nan Monnand Deng
65185a565b added APIVersion when call NewRegistry 2013-07-10 18:53:38 -04:00
Nan Monnand Deng
1bb8f60d5a inserted setUserAgent in each HTTP request 2013-07-10 18:49:01 -04:00
Nan Monnand Deng
1d01189f04 Added version checker interface 2013-07-10 18:49:01 -04:00
Victor Vieux
fc3a8e409d change tag -> repo name in build usage 2013-07-10 22:44:31 +00:00
Louis Opter
8e49cb453f Merge pull request #1181 from dotcloud/export_portmapping
Export PortMapping in container.go
2013-07-10 14:24:20 -07:00
Michael Crosby
40f1e4edbe Rebased changes buildfile_test 2013-07-10 07:12:57 -09:00
Michael Crosby
1267e15b0f Add unittest for volume config verification 2013-07-10 06:59:16 -09:00
Michael Crosby
eb9fef2c42 Add VOLUME instruction to buildfile 2013-07-10 06:59:16 -09:00
Victor Vieux
43b346d93b Merge pull request #1151 from alex/patch-1
Replaced gendered language in the README
2013-07-10 07:52:30 -07:00
Victor Vieux
d918c7d9de export portmapping in network.go 2013-07-10 14:09:35 +00:00
Victor Vieux
e962e9edcf Merge pull request #1168 from dotcloud/standalone_registry
* Server: Allow push on standalone registry
2013-07-10 04:14:23 -07:00
Victor Vieux
b7a62f1f1b Merge pull request #1177 from lopter/udp-support-final
* Network: Add UDP support
2013-07-10 03:55:18 -07:00
Victor Vieux
2e5d1a2d48 Merge pull request #1164 from dotcloud/1162-import_hangs-fix
* Runtime: Untar is now faster
2013-07-10 03:37:24 -07:00
Louis Opter
fac0d87d00 Add support for UDP (closes #33)
API Changes
-----------

The port notation is extended to support "/udp" or "/tcp" at the *end*
of the specifier string (and defaults to tcp if "/tcp" or "/udp" are
missing)

`docker ps` now shows UDP ports as "frontend->backend/udp". Nothing
changes for TCP ports.

`docker inspect` now displays two sub-dictionaries: "Tcp" and "Udp",
under "PortMapping" in "NetworkSettings".

These changes stand true for the values returned by the HTTP API too.

This changeset will definitely break tools built upon the API (or upon
`docker inspect`). A less intrusive way to add UDP ports in `docker
inspect` would be to simply add "/udp" for UDP ports, but it would still
break existing applications which try to convert the whole field to an
integer. I believe that having two TCP/UDP sub-dictionaries is better
because it makes the whole thing clearer and easier to parse right
away (i.e., you don't have to check the format of the string, split it
and convert the right part to an integer)

Code Changes
------------

Significant changes in network.go:

- A second PortAllocator is instantiated for the UDP range;
- PortMapper maintains separate mapping for TCP and UDP;
- The extPorts array in NetworkInterface is now an array of Nat objects
  (so we can know on which protocol a given port was mapped when
  NetworkInterface.Release() is called);
- TCP proxying on localhost has been moved away in network_proxy.go.

localhost proxy code rewrite in network_proxy.go:

We have to proxy the traffic between localhost:frontend-port and
container:backend-port because Netfilter doesn't work properly on the
loopback interface and DNAT iptable rules aren't applied there.

- Goroutines in the TCP proxying code are now explicitly stopped when
  the proxy is stopped;
- UDP connection tracking using a map (more info in [1]);
- Support for IPv6 (to be more accurate, the code is transparent to the
  Go net package, so you can use, tcp/tcp4/tcp6/udp/udp4/udp6);
- Single Proxy interface for both UDP and TCP proxying;
- Full test suite.

[1] https://github.com/dotcloud/docker/issues/33#issuecomment-20010400
2013-07-09 17:42:35 -07:00
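
A small Go sketch of reading the structure that `docker inspect` exposes after this change, as described above; the exact field layout and which side of each map is the frontend port are assumptions based on this commit message rather than a verbatim copy of the API types.

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // networkSettings mirrors the shape described above: two sub-dictionaries,
    // "Tcp" and "Udp", under "PortMapping".
    type networkSettings struct {
    	PortMapping map[string]map[string]string
    }

    func main() {
    	raw := []byte(`{"PortMapping": {"Tcp": {"80": "49153"}, "Udp": {"123": "49154"}}}`)
    	var ns networkSettings
    	if err := json.Unmarshal(raw, &ns); err != nil {
    		panic(err)
    	}
    	for proto, ports := range ns.PortMapping {
    		for frontend, backend := range ports {
    			// prints e.g. "123->49154/Udp"; which side of the map is
    			// the frontend is an assumption in this sketch
    			fmt.Printf("%s->%s/%s\n", frontend, backend, proto)
    		}
    	}
    }
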
Solomon Hykes
a839b36e55 Fix outdated docs explaining how to setup a dev environment. Building docker with docker ftw 2013-07-09 16:48:16 -07:00
Sam Alba
316c8328aa Hardened repos name validation 2013-07-09 16:46:55 -07:00
Sam Alba
e8db031112 Fixed tag parsing when the repos name contains both a port and a tag 2013-07-09 16:46:25 -07:00
Sam Alba
59b785a282 Fixing missing tag field when pulling containers which do not exist 2013-07-09 16:45:32 -07:00
Louis Opter
1a1daca621 Fix a typo in runtime_test.go: Availalble -> Available 2013-07-09 11:52:33 -07:00
Sam Alba
837be914ca Merge branch 'master' of github.com:dotcloud/docker into standalone_registry 2013-07-09 11:31:14 -07:00
Sam Alba
f44eac49fa Fixed potential security issue (never try http on official index when polling the endpoint). Also fixed local repos name when pulling index.docker.io/foo/bar 2013-07-09 11:30:12 -07:00
Guillaume J. Charmes
0acdef4549 Merge pull request #1166 from dotcloud/networkless_tests-2
* Tests: Remove all network dependencies from the test suite
2013-07-09 11:20:18 -07:00
Guillaume J. Charmes
7d8ef90ccb Merge pull request #1173 from dotcloud/1172-ghost_restart-fix
Make sure container is not marked as ghost when it starts
2013-07-09 10:49:17 -07:00
Guillaume J. Charmes
91520838fc Make sure container is not marked as ghost when it starts 2013-07-09 10:48:33 -07:00
Guillaume J. Charmes
ada0e1fb08 Merge pull request #1049 from dotcloud/1040_ignore_stderr_tests-fix
- Tests: Ignore stderr while doing tests
2013-07-09 10:32:24 -07:00
Sam Alba
33d97e81eb Removed DOCKER_INDEX_URL 2013-07-09 08:10:43 -07:00
Sam Alba
019324015b Moved parseRepositoryTag to the utils package 2013-07-09 08:06:10 -07:00
Victor Vieux
72d278fdac Merge pull request #1170 from dotcloud/fix_type_socket
Fix typo socket
2013-07-09 03:57:45 -07:00
Victor Vieux
05d7f85af9 fix typo 2013-07-09 10:55:28 +00:00
Solomon Hykes
7fba358ae2 Merge pull request #1013 from dotcloud/standardize-build
* Hack: standardized docker's build environment in a Dockerfile
2013-07-08 21:33:45 -07:00
Solomon Hykes
9f1fc40a64 * Hack: standardized docker's build environment in a Dockerfile 2013-07-08 21:30:29 -07:00
Sam Alba
3be7bc38e0 Fixed typo (thanks unit tests) 2013-07-08 17:42:18 -07:00
Sam Alba
31c66d5a00 Re-implemented a notion of local and private repos. This allows the fully qualified name of the repos to be used as the name for the local repository without breaking the calls to the Registry API. 2013-07-08 17:26:50 -07:00
Sam Alba
e7d36c9590 It is now possible to include a ":" in a local repository name (it will not be the case for a remote name). This adds support for fully qualified repository names in order to support private registry servers 2013-07-08 17:22:41 -07:00
Sam Alba
3e8626c4a1 Changed the tag parsing so it will work even if there is a port in the repo's registry url (fully qualified name for pushing to a standalone registry) 2013-07-08 17:20:41 -07:00
Guillaume J. Charmes
e14dd4d33e Merge pull request #1157 from kstaken/1156-entrypoint-builder
Builder: Fix #1156 entrypoint override from base image
2013-07-08 16:57:26 -07:00
Kimbro Staken
87a69e6753 Merge branch '1156-entrypoint-builder' of github.com:kstaken/docker into 1156-entrypoint-builder 2013-07-08 16:06:09 -07:00
Kimbro Staken
f64dbdbe3a Override Entrypoint picked up from the base image that breaks run commands in builder 2013-07-08 16:04:39 -07:00
Kimbro Staken
2b5553144a Removing the save to disk as it was not really necessary 2013-07-08 16:03:18 -07:00
Guillaume J. Charmes
e43ef364cb Remove all network dependencies from the test suite 2013-07-08 15:23:04 -07:00
Guillaume J. Charmes
08a87d4b3b Fix #1162 - Remove bufio from Untar 2013-07-08 13:42:17 -07:00
Victor Vieux
90f372af5c Merge pull request #1163 from dotcloud/1137-change_search_size-feature
* Client : uses the terminal size to display search output, add -notrunc
2013-07-08 11:47:25 -07:00
Victor Vieux
3ec29eb5da Merge pull request #1066 from mhennings/fix-broken-streaming-result
* Server: Fix streaming status to the docker client while pushing images
2013-07-08 11:21:29 -07:00
Victor Vieux
3a20e4e15d add if to prevent crash 2013-07-08 18:19:12 +00:00
Victor Vieux
fd97190ee7 uses the terminal size to display search output, add -notrunc and fix bug in resize 2013-07-08 17:20:13 +00:00
Victor Vieux
70480ce7bc Merge pull request #1030 from dotcloud/builder_display_err_log
*Builder : Display containers logs in case of build failure
2013-07-08 07:26:46 -07:00
Victor Vieux
bf7d6cbb4a rebase master 2013-07-08 13:26:29 +00:00
Victor Vieux
c059785ffb Merge pull request #1161 from dotcloud/add_remote_addr_debug
Add remote addr in debug
2013-07-08 05:46:49 -07:00
Victor Vieux
a0f5fb7394 add remote addr in debug 2013-07-08 12:45:50 +00:00
Victor Vieux
ad33e9f388 Merge pull request #1138 from dotcloud/1123-rmi_conflict-fix
* Runtime: Fix error in rmi when conflict
2013-07-08 05:19:05 -07:00
Kimbro Staken
1d1d81b0bc Cleanup white space 2013-07-08 00:18:47 -07:00
Kimbro Staken
f3d2969560 Override Entrypoint picked up from the base image that breaks run commands in builder 2013-07-08 00:11:45 -07:00
Alex Gaynor
758ea61b77 Replaced gendered language in the README 2013-07-07 13:55:02 +10:00
benoitc
8eeff01939 add websocket support to /container/<name>/attach/ws
This adds the ability to attach container streams to a
websocket. When a websocket is requested, the request is upgraded to that
protocol.
2013-07-06 02:05:02 +02:00
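
A minimal Go client sketch for the attach-over-websocket endpoint named above; the daemon address and port, the query parameters, and the golang.org/x/net/websocket import path (today's home of the old go.net package) are assumptions for illustration.

    package main

    import (
    	"io"
    	"os"

    	"golang.org/x/net/websocket"
    )

    func main() {
    	origin := "http://localhost/"
    	url := "ws://localhost:4243/container/mycontainer/attach/ws?stream=1&stdout=1"
    	ws, err := websocket.Dial(url, "", origin)
    	if err != nil {
    		panic(err)
    	}
    	defer ws.Close()

    	// Stream whatever the container writes to our own stdout.
    	io.Copy(os.Stdout, ws)
    }
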
Daniel Mizyrycki
4388bef996 testing, issue #1104: Make the test use static flags 2013-07-05 16:49:55 -07:00
Sam Alba
e2b8ee2723 Fixed runtime_test (ImagePull prototyped changed) 2013-07-05 16:03:22 -07:00
Guillaume J. Charmes
07dc0a5120 Merge pull request #1144 from dotcloud/standalone_registry
* Registry: Standalone registry
2013-07-05 15:56:48 -07:00
Sam Alba
d3125d8570 Code cleaning 2013-07-05 15:26:08 -07:00
Sam Alba
283ebf3ff9 fmt.Errorf instead of errors.New 2013-07-05 14:56:56 -07:00
Sam Alba
4c174e0bfb Fixed ping URL 2013-07-05 14:55:48 -07:00
Sam Alba
57a6c83547 Allowing namespaces in standalone registry 2013-07-05 14:30:43 -07:00
Sam Alba
cfc7684b7d Restoring old changeset lost by previous merge 2013-07-05 12:37:07 -07:00
Sam Alba
be49f0a118 Merging from master 2013-07-05 12:27:10 -07:00
Sam Alba
66a9d06d9f Adding support for nicer URLs to support standalone registry (+ some registry code cleaning) 2013-07-05 12:20:58 -07:00
Guillaume J. Charmes
6940cf1ecd Merge pull request #1127 from cespare/patch-1
Typo fix
2013-07-05 10:48:59 -07:00
Guillaume J. Charmes
4e0cdc016a Revert #1126. Remove mount shm 2013-07-05 10:47:00 -07:00
Guillaume J. Charmes
8a8109648a Merge pull request #1129 from cespare/style-fixes-2
Style fixes for fmt + err usage.
2013-07-05 10:31:53 -07:00
Guillaume J. Charmes
dc8b359319 Merge pull request #1126 from karanlyons/patch-1
* Runtime: Mount /dev/shm as a tmpfs.
2013-07-05 10:31:05 -07:00
Victor Vieux
dea29e7c99 Fix error in rmi when conflict 2013-07-05 16:58:39 +00:00
Daniel Mizyrycki
ab6379b3e0 Merge pull request #1133 from dotcloud/775-testing-notifications
testing, issue #775: Add automatic testing notifications to docker-ci
2013-07-04 21:48:00 -07:00
Daniel Mizyrycki
f7fed2ea5f testing, issue #775: Add automatic testing notifications to docker-ci 2013-07-04 21:43:46 -07:00
Daniel Mizyrycki
35e87ee571 Merge pull request #1132 from dotcloud/776-testing-commit
testing, issue #776: Ensure docker-ci test docker code as it was at commit time
2013-07-04 20:37:51 -07:00
Daniel Mizyrycki
ab3893ff4d testing, issue #776: Ensure docker-ci test docker code as it was at commit time 2013-07-04 20:28:54 -07:00
Caleb Spare
1277dca335 Style fixes for fmt + err usage.
fmt.Printf and friends will automatically format using the error
interface (.Error()) preferentially; no need to do err.Error().
2013-07-04 14:33:17 -07:00
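
A tiny Go example of the point being made above: fmt's verbs already go through the error interface, so passing the error value directly is equivalent to calling .Error() yourself.

    package main

    import (
    	"errors"
    	"fmt"
    )

    func main() {
    	err := errors.New("something went wrong")

    	// Redundant: explicitly calling .Error()
    	fmt.Printf("error: %s\n", err.Error())

    	// Equivalent and cleaner: %s (and %v) already use the error interface
    	fmt.Printf("error: %s\n", err)
    }
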
Caleb Spare
ba9aef6f2c Typo fix
Error message grammar tweak
2013-07-04 12:40:14 -07:00
Karan Lyons
dd619d2bd6 Mount /dev/shm as a tmpfs.
Fixes #1122.
2013-07-04 09:58:50 -07:00
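
Purely as an illustration of what a tmpfs mount on /dev/shm looks like at the Linux syscall level (the flag set and size/mode options are assumptions for this sketch, and this is not necessarily how the commit wires it into container setup):

    //go:build linux

    package main

    import (
    	"fmt"
    	"syscall"
    )

    func main() {
    	// Mount a tmpfs at /dev/shm; needs root and a Linux host.
    	err := syscall.Mount("shm", "/dev/shm", "tmpfs",
    		syscall.MS_NOEXEC|syscall.MS_NOSUID|syscall.MS_NODEV,
    		"mode=1777,size=65536k")
    	if err != nil {
    		fmt.Println("mount failed:", err)
    	}
    }
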
Marco Hennings
1e2ef274cd Pushing an Image causes the docker client to give an error message instead of
writing out streamed status.

This is caused by a Buffering message that is not in the correct json format:

[...]
{"status"
:"Pushing 6bba11a28f1ca247de9a47071355ce5923a45b8fea3182389f992f4
24b93edae"}Buffering to disk 244/? (n/a)..
{"status":"Pushing",[...]

The "Buffering to disk" message is originated in
srv.runtime.graph.TempLayerArchive

I am now using the StreamFormatter provided by the context from which the
method is called.
2013-07-04 10:50:37 +02:00
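
A hedged Go sketch of the fix's idea: if every message, including buffering progress, is emitted through one JSON status writer (a stand-in here for the StreamFormatter mentioned above; the type below is illustrative), the client only ever sees well-formed {"status": ...} objects and the stream stays parseable.

    package main

    import (
    	"encoding/json"
    	"io"
    	"os"
    )

    // jsonStatusWriter routes every message through the same JSON encoder.
    type jsonStatusWriter struct {
    	enc *json.Encoder
    }

    func newJSONStatusWriter(w io.Writer) *jsonStatusWriter {
    	return &jsonStatusWriter{enc: json.NewEncoder(w)}
    }

    func (j *jsonStatusWriter) Status(msg string) error {
    	return j.enc.Encode(map[string]string{"status": msg})
    }

    func main() {
    	out := newJSONStatusWriter(os.Stdout)
    	out.Status("Pushing 6bba11a28f1c...")
    	out.Status("Buffering to disk 244/? (n/a)") // no longer raw text in the stream
    	out.Status("Pushing")
    }
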
Guillaume J. Charmes
bcb5e36dd9 Merge pull request #1111 from cespare/style-fixes
Style fixes
2013-07-03 14:46:05 -07:00
Caleb Spare
19121c16d9 Implement several golint suggestions, including:
* Removing type declarations where they're inferred
* Changing Url -> URL, Id -> ID in names
* Fixing snake-case names
2013-07-03 14:36:04 -07:00
Caleb Spare
27ee261e60 Simplify the NopWriter code. 2013-07-03 14:35:18 -07:00
Caleb Spare
da3962266a Gofmt -s (simplify) 2013-07-03 14:35:18 -07:00
Caleb Spare
e93afcdd2b Use fmt.Errorf when appropriate. 2013-07-03 14:35:18 -07:00
Caleb Spare
dd1b9e38e9 Typo correction: Excepted -> Expected 2013-07-03 14:35:18 -07:00
Guillaume J. Charmes
96bc9ea7c1 Merge pull request #1112 from cespare/mutex-style
Mutex style change.
2013-07-03 10:34:32 -07:00
Guillaume J. Charmes
16c8a10ef9 Merge pull request #1053 from dynport/do-not-copy-hostname-from-image
do not merge hostname from image
2013-07-03 10:34:15 -07:00
Victor Vieux
64450ae3f8 add last version 2013-07-03 17:11:00 +00:00
Joffrey F
5dcd11be16 Merge pull request #1109 from dynport/remote-lookup-fix
Fix remote lookup when pushing into registry
2013-07-03 06:29:19 -07:00
Ken Cochrane
dc91a7b641 Merge pull request #1113 from metalivedev/docs20130702
* Documentation: fix broken link on the documentation index page
2013-07-03 05:52:25 -07:00
Andy Rothfusz
11998ae7d6 Fix installation link from welcome page. 2013-07-02 16:48:57 -07:00
Caleb Spare
1cf9c80e97 Mutex style change.
For structs protected by a single mutex, embed the mutex for more
concise usage.

Also use a sync.Mutex directly, rather than a pointer, to avoid the
need for initialization (because a Mutex's zero-value is valid and
ready to be used).
2013-07-02 15:53:08 -07:00
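
A minimal Go example of the style described above: embedding sync.Mutex by value gives a usable zero value and the concise c.Lock()/c.Unlock() call sites (the container type here is illustrative).

    package main

    import "sync"

    // Before: a named pointer field that must be initialized before use.
    type containerOld struct {
    	lock *sync.Mutex // needs: lock: &sync.Mutex{}
    	id   string
    }

    // After: embed the Mutex directly; its zero value is ready to use.
    type container struct {
    	sync.Mutex
    	id string
    }

    func main() {
    	c := container{id: "abc123"}
    	c.Lock()
    	c.id = "def456"
    	c.Unlock()
    }
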
Andy Rothfusz
6dbcdd3ed5 Merge pull request #1095 from metalivedev/docs20130627
Docs and images updates
2013-07-02 15:13:31 -07:00
Tobias Schwab
9632cf09bf fix two obvious bugs??? 2013-07-02 22:11:03 +00:00
Andy Rothfusz
96ab3c540d Merge branch 'docs20130627' of github.com:metalivedev/docker into docs20130627
Conflicts:
	docs/sources/concepts/manifesto.rst
	docs/sources/index.rst
2013-07-02 15:10:07 -07:00
Andy Rothfusz
ff964d327d Cleaning up the welcome page, terminology, and images. 2013-07-02 15:03:29 -07:00
Andy Rothfusz
4b8688f1e5 Fix broken quickstart link 2013-07-02 14:10:06 -07:00
Andy Rothfusz
55b5889a0f Merge branch 'docs20130627' of github.com:metalivedev/docker into docs20130627
Conflicts:
	docs/sources/concepts/manifesto.rst
2013-07-02 13:00:08 -07:00
Andy Rothfusz
dd4c6f6a09 Shortened lines to 80 columns 2013-07-02 12:09:57 -07:00
Andy Rothfusz
6058261a26 Clean up image text, minor updates to docs. 2013-07-02 12:09:57 -07:00
Andy Rothfusz
b461e4607d Adding files for terms 2013-07-02 12:09:57 -07:00
Andy Rothfusz
d399f72098 Cleaning up the welcome page, terminology. 2013-07-02 12:09:57 -07:00
Guillaume J. Charmes
c9e1c65c64 Merge pull request #1107 from eliasp/issue-1020
* Runtime: Don't remove the container= environment variable.
2013-07-02 11:42:44 -07:00
Victor Vieux
3042f11666 never remove the file and try to load it in start 2013-07-02 18:02:16 +00:00
Elias Probst
e5e47c9862 Don't remove the container= environment variable, as it is crucial for a lot of tools to detect whether they're run inside an LXC container or not. 2013-07-02 19:13:37 +02:00
Victor Vieux
1c5083315d Merge pull request #1103 from shin-/1060-pull-only-tagged-images
*Registry: When no tag is specified in docker pull, skip images that are not tagged
2013-07-02 10:08:21 -07:00
Victor Vieux
27a137ccab change file location 2013-07-02 17:02:42 +00:00
shin-
7cc294e777 When no tag is specified in docker pull, skip images that are not tagged 2013-07-02 18:25:06 +02:00
Daniel Mizyrycki
a20dcfb049 Merge pull request #987 from dotcloud/601-packaging-ubuntu
Packaging|ubuntu, issue #601: Allow packaging prerm to do its job
2013-07-02 08:51:01 -07:00
Victor Vieux
06b53e3fc7 store hostConfig to /tmp while container is running 2013-07-02 12:19:25 +00:00
Victor Vieux
8f9dd86146 Merge pull request #1101 from dotcloud/fix-unit_tests
add sleep in tests and go fmt
2013-07-02 03:48:24 -07:00
Victor Vieux
ebba0a6024 add sleep in tests and go fmt 2013-07-02 10:47:37 +00:00
Victor Vieux
c9236d99d2 Merge pull request #1099 from lopter/master
*Test :  Fix
2013-07-02 02:42:21 -07:00
Louis Opter
f03c1b8eeb More unit test fixes
- Fix TestGetImagesJSON when there is more than one image in the test
  repository;
- Remove an hardcoded constant use in TestGetImagesByName;
- Wait in a loop in TestKillDifferentUser;
- Use env instead of /usr/bin/env in TestEnv;
- Create a daemon user in contrib/mkimage-unittest.sh.
2013-07-01 17:24:21 -07:00
Guillaume J. Charmes
6f23e39e6b Merge pull request #1097 from dotcloud/bump_0.4.8
Bump version to 0.4.8
2013-07-01 17:13:19 -07:00
Solomon Hykes
fe0378e9b3 Rephrase changelog 2013-07-01 17:05:49 -07:00
Guillaume J. Charmes
96a1d7c645 Bump version to 0.4.8 2013-07-01 16:58:25 -07:00
Guillaume J. Charmes
79ee8b46f4 Merge pull request #1046 from dotcloud/1043-output_id_non_attach-fix
- Runtime: Make sure the ID is displayed using run -d
2013-07-01 16:49:43 -07:00
Victor Vieux
55a7a8b8c9 Merge pull request #1092 from lopter/master
Fix TestGetInfo when there is more than one image in the test repository
2013-07-01 16:41:01 -07:00
Andy Rothfusz
b47873c5ac Clean up image text, minor updates to docs. 2013-07-01 16:37:13 -07:00
Andy Rothfusz
adf75d402a Adding files for terms 2013-07-01 16:37:13 -07:00
Andy Rothfusz
cb1fdb2f03 Cleaning up the welcome page, terminology. 2013-07-01 16:37:13 -07:00
Victor Vieux
d1d66b9c5f Merge pull request #1078 from kstaken/fix_json_error
* Remote API: Small fix in /start if empty host config
2013-07-01 16:36:58 -07:00
Louis Opter
6dacbb451f Fix TestGetInfo when there is more than one image in the test repository
See also #1089, #1072.
2013-07-01 15:06:08 -07:00
Guillaume J. Charmes
ead9cefadb Merge pull request #1089 from dotcloud/multiple_test_images-fix
- Tests: Fix unit tests when there is more than one tag within the test image
2013-07-01 13:58:04 -07:00
Guillaume J. Charmes
185a2fc55e Merge pull request #1086 from crosbymichael/1008-image-entrypoint
+ Builder: Add Entrypoint to builder and container config
2013-07-01 13:33:12 -07:00
Ken Cochrane
fb8fac6c60 Merge pull request #1088 from kpelykh/master
* Documentation: Update Docker Remote API client list to include Java library
2013-07-01 12:50:31 -07:00
Guillaume J. Charmes
b6f288a1ce Fix unit tests when there is more than one tag within the test image 2013-07-01 11:45:45 -07:00
zettaset-kpelykh
aa9bec96b1 Issue #1087 Docker Java API client -- added java to Docker Remote API Client document 2013-07-01 11:28:40 -07:00
Victor Vieux
11e28842ac change to top 2013-07-01 15:19:42 +00:00
Michael Crosby
b16ff9f859 Add Entrypoint to builder and container config
By setting an entrypoint in the Dockerfile this
allows one to run an image and only pass arguments.
2013-07-01 05:34:27 -09:00
Victor Vieux
348c5c4838 Merge pull request #1085 from dotcloud/1076-doc_delete-fix
fix status code in doc
2013-07-01 06:30:21 -07:00
Victor Vieux
8dcc6a0280 fix status code in doc 2013-07-01 13:28:58 +00:00
Victor Vieux
3b5ad44647 rebase master 2013-07-01 12:31:16 +00:00
Victor Vieux
5e029f7600 Merge pull request #1061 from proppy/fix-slices-ref
api,server: slice are already refs, no need to return ptr
2013-07-01 04:51:56 -07:00
Keli Hu
52cebe19e5 Keep debian package up-to-date 2013-07-01 16:15:56 +08:00
Kimbro Staken
d8d33e8b8b Adding check for content-type header 2013-06-30 10:46:09 -07:00
Solomon Hykes
b37f7d49d8 Documented release process for maintainers in hack/RELEASE.md 2013-06-29 22:08:25 -07:00
Solomon Hykes
d67d5dd963 Merge pull request #1065 from dotcloud/bump_0.4.7
Bump version to 0.4.7
2013-06-29 21:23:59 -07:00
Solomon Hykes
273e0d42b7 * Hack: change builder tests to always use the current unit test image, instead of hardcoding 'docker-ut' 2013-06-29 21:22:15 -07:00
Solomon Hykes
ca497a82ab Bump version to 0.4.7 2013-06-29 21:12:29 -07:00
Solomon Hykes
b7226316c7 * Hack: move unit tests to a different source image, to work around issues when docker-ut has more than 1 tag. 2013-06-28 19:43:55 -07:00
Guillaume J. Charmes
84f41954ae Merge pull request #1052 from lopter/master
Fix a couple critical bugs on the test suite
2013-06-28 17:00:51 -07:00
Johan Euphrosine
54da339b2c api,server: slice are already refs, no need to return ptr 2013-06-28 12:41:09 -07:00
Sam Alba
ac37fcf6f3 Fixed conflicts 2013-06-28 12:36:59 -07:00
Sam Alba
893c974b08 Resolve conflict 2013-06-28 12:32:41 -07:00
Joffrey F
30342efa37 Merge pull request #700 from dotcloud/615-pushbyid
Allow to push/pull on independent registries (by repository or image ID)
2013-06-28 10:29:10 -07:00
Daniel Mizyrycki
6165c246d4 Merge pull request #1057 from dotcloud/973-testing-stabilization
testing|stabilization, issue 973: Use docker-golang PPA and lts-raring kernel
2013-06-28 09:54:06 -07:00
shin-
72befeef24 Fixed issue in registry.GetRemoteTags 2013-06-28 18:42:37 +02:00
Victor Vieux
648c4f198b Add test 2013-06-28 16:27:00 +00:00
Daniel Mizyrycki
af2a92f22b testing|stabilization, issue 973: Use docker-golang PPA and lts-raring kernel 2013-06-28 09:23:25 -07:00
shin-
ad2f826a82 go fmt pass 2013-06-28 18:19:58 +02:00
shin-
e095a1572f Allow push by ID when using a custom registry 2013-06-28 18:19:58 +02:00
shin-
c3dd6e1926 Several fixes and updates to make this work with latest changes in master 2013-06-28 18:19:58 +02:00
Guillaume J. Charmes
67ecd2cb82 Reenable writeflusher for pull 2013-06-28 18:19:58 +02:00
Guillaume J. Charmes
57d751c377 Remove https prefix from registry 2013-06-28 18:19:58 +02:00
shin-
50075106b6 Rolled back of previous commit (skip cert verification) 2013-06-28 18:19:58 +02:00
shin-
2a1f8f6fda Ignore 'registry not found' when pushing on independent registries 2013-06-28 18:19:58 +02:00
shin-
1c817913ee Skip certificate check (don't error out on self-signed certs) 2013-06-28 18:19:58 +02:00
shin-
de0a48bd6f Tentative support for independent registries 2013-06-28 18:19:58 +02:00
Victor Vieux
8589fd6db8 Add doc 2013-06-28 18:05:41 +02:00
Victor Vieux
2e79719622 add /proc to list running processes inside a container 2013-06-28 15:51:58 +00:00
Tobias Schwab
9bfec5a538 do not merge hostname from image 2013-06-28 15:22:01 +02:00
Victor Vieux
a11fc9f067 Merge pull request #1032 from andrewsmedina/govet
following the 'go vet' suggestions for the docker package.
2013-06-28 05:27:53 -07:00
Thatcher
e12a204bcc Merge pull request #1028 from dhrp/bugfixes-on-docs
Bugfixes on documentation code
2013-06-27 19:01:13 -07:00
Louis Opter
fe014a8e6c Always return the correct test image.
And not a *random* one from its history.
2013-06-27 18:01:20 -07:00
Louis Opter
aa8ea84d11 Fix a nil dereference in buildfile_test.go
The test runtime object wasn't properly initialized.
2013-06-27 18:01:10 -07:00
Sam Alba
3175e56ad0 URL schemes of both Registry and Index are now consistent 2013-06-27 17:55:17 -07:00
Guillaume J. Charmes
800d900688 Ignore stderr while doing tests 2013-06-27 15:25:31 -07:00
Guillaume J. Charmes
1a201d2433 Merge pull request #1035 from dotcloud/fix_panic_cast
- Runtime: fix panic with unix socket
2013-06-27 15:22:18 -07:00
Guillaume J. Charmes
750c94efbb Merge pull request #1041 from unclejack/fix_minor_kernel_version_for_git_kernels
remove + from minor kernel version for kernels built from git
2013-06-27 15:21:34 -07:00
Guillaume J. Charmes
bd144a64f6 Make sure the ID is displayed using run -d 2013-06-27 12:48:25 -07:00
Guillaume J. Charmes
2a20e85203 Improve last log output 2013-06-27 11:10:19 -07:00
unclejack
5ed4386bbf remove + from minor kernel version 2013-06-27 17:51:17 +03:00
Victor Vieux
9d3ec7b39f fix panic with unix socket 2013-06-27 12:57:19 +00:00
Victor Vieux
e68a23bdc1 Merge pull request #1019 from dotcloud/1002-change_update_progress_bar_rate-feature
*Remote API: update progressbar every MIN(1%, 512kB)
2013-06-27 04:19:42 -07:00
Andrews Medina
6cf493bea7 following 'go vet' in utils pkg. 2013-06-27 01:40:13 -03:00
Andrews Medina
3d5633a0a0 following the 'go vet' suggestions. 2013-06-27 01:33:55 -03:00
Solomon Hykes
c4a44f6f0b Merge pull request #1029 from lopter/master
* Hack: add a script to create the docker-ut image (busybox + socat)
2013-06-26 16:28:58 -07:00
Solomon Hykes
3e29695c1f Merge pull request #602 from gabrtv/111-bind-mounts
+ Runtime: mount volumes from a host directory with 'docker run -b'
2013-06-26 15:59:35 -07:00
Solomon Hykes
46a9f29bae - Runtime: small bugfixes in external mount-bind integration 2013-06-26 15:26:47 -07:00
Gabriel Monroy
67239957c9 - Fix a few bugs in external mount-bind integration 2013-06-26 15:10:38 -07:00
Solomon Hykes
d4e62101ab * Runtime: better integration of external bind-mounts (run -b) into the volume subsystem (run -v) 2013-06-26 15:08:07 -07:00
Gabriel Monroy
4fdf11b2e6 + Runtime: mount volumes from a host directory with 'docker run -b' 2013-06-26 15:07:31 -07:00
Guillaume J. Charmes
cd0f22ef72 Merge pull request #1005 from dotcloud/1004-stdin_piping-fix
- Runtime: Fix issue when attaching stdin alone
2013-06-26 12:56:14 -07:00
Guillaume J. Charmes
27d6777376 Display containers logs in case of build failure 2013-06-26 12:50:20 -07:00
Louis Opter
e5c0b31107 Add a script to create the docker-ut image
It's a fork of the mkimage-busybox.sh script and it adds socat to the
image. (socat being needed to add udp support, see #33).

This script, like mkimage-busybox.sh, probably only works on
Debian/Ubuntu.
2013-06-26 12:35:14 -07:00
Guillaume J. Charmes
5cdbd2ed7a Merge pull request #1021 from errnoh/fix-test-filled-tmpfs
TestKill and TestMultipleContainers: run sleep instead of cat /bin/zero....
2013-06-26 11:59:06 -07:00
Guillaume J. Charmes
b44e2e71aa Merge pull request #1010 from dotcloud/1009-testing-hack
Testing|hack, issue #1009: Update make hack environment
2013-06-25 17:07:18 -07:00
Thatcher Peskens
73afc6311d Bugfixes on docs
* fixed canonical link from index
* added http redirect from builder/basics
* fixed url in redirect_home
2013-06-25 15:31:22 -07:00
Guillaume J. Charmes
6127d757a7 Add missing fprintf instead of printf 2013-06-25 10:39:11 -07:00
Erno Hopearuoho
fb86dcfb17 TestKill and TestMultipleContainers: run sleep instead of cat /bin/zero. fixes #737 2013-06-25 17:52:10 +03:00
Victor Vieux
bccf06c748 update progressbar every MIN(1%, 512kB) 2013-06-25 14:03:15 +00:00
Victor Vieux
862e223cec Merge branch 'add-daemon-storage-path-param' of https://github.com/heavenlyhash/docker into heavenlyhash-add-daemon-storage-path-param 2013-06-25 13:33:45 +00:00
Ken Cochrane
e1e2ff52fe Merge pull request #1018 from nahiluhmot/add-swipely-docker-gem
* Documentation: Added Swipely's `docker-api` gem to the table of Remote API Client Libraries.
2013-06-25 05:49:52 -07:00
Tom Hulihan
d03edf12e4 Added Swipely's docker-api gem to the table of Remote API Client
Libraries.
2013-06-25 08:26:41 -04:00
Guillaume J. Charmes
ec1dfc521c Merge pull request #992 from unclejack/use_numeric_owner_for_tar
* Runtime: use --numeric-owner for Tar and Untar
2013-06-24 18:40:43 -07:00
Guillaume J. Charmes
5190f7f33a Implement regression test for stdin attach 2013-06-24 18:36:04 -07:00
Guillaume J. Charmes
873a5aa8e7 Make NewDockerCli handle terminal 2013-06-24 18:29:08 -07:00
Guillaume J. Charmes
672d3a6c6c Make term function consistent with each other 2013-06-24 18:27:57 -07:00
Guillaume J. Charmes
a749fb2130 Make DockerCli use its own stdin/out/err instead of the os.Std* 2013-06-24 18:27:57 -07:00
Guillaume J. Charmes
25d1bc2c09 Fix issue when attaching stdin alone 2013-06-24 18:27:57 -07:00
Daniel Mizyrycki
cc63c1b584 Testing|hack, issue #1009: Update make hack environment 2013-06-24 15:01:51 -07:00
Solomon Hykes
145c622aba Merge pull request #990 from dotcloud/fix-tests-cgo
* Hack: remove dependency of unit tests on 'os/user', which cannot be used with CGO_ENABLED=0
2013-06-24 12:31:54 -07:00
Ken Cochrane
e2516c01b4 Merge pull request #932 from metalivedev/docs20130614
+ Documentation: Add terminology section
2013-06-24 10:39:25 -07:00
Victor Vieux
a3cb18d0f0 Merge pull request #1003 from dotcloud/fix_utils_tests
fix regression in utils tests introduced by #980
2013-06-24 09:14:13 -07:00
Victor Vieux
eca9f9c1a1 fix regression in utils tests introduced by #980 2013-06-24 16:12:39 +00:00
Daniel Mizyrycki
aee845682f Merge pull request #998 from dotcloud/861-hack-vagrant
Fixing hack/Vagrantfile to use uname for aufs linux extras
2013-06-22 20:10:00 -07:00
Anthony Bishopric
e3dbe2f2ba Fixing hack/Vagrantfile to use uname for aufs linux extras
Conflicts: hack/Vagrantfile
Resolved by: Daniel Mizyrycki <daniel@dotcloud.com>
2013-06-22 19:58:33 -07:00
Solomon Hykes
193888a2b4 Merge pull request #997 from dotcloud/bump_0.4.6
Bump version to 0.4.6
2013-06-22 13:41:48 -07:00
Solomon Hykes
9fe8bfb2bc Bump version to 0.4.6 2013-06-22 13:36:45 -07:00
Solomon Hykes
fc25973371 Merge pull request #996 from dotcloud/995-volumes-crash
- Runtime: fix a bug which caused creation of empty images (and volumes) to crash.
2013-06-22 13:35:26 -07:00
Solomon Hykes
f9acd605dc - Runtime: add regression test for issue #995 2013-06-22 13:33:43 -07:00
Solomon Hykes
290b1973a9 Fix a bug which caused creation of empty images (and volumes) to crash. FIxes #995. 2013-06-22 12:29:42 -07:00
unclejack
d7d42ff4fe use --numeric-owner for tar and untar 2013-06-22 18:53:10 +03:00
Solomon Hykes
ce9e50f4ee Remove dependency on 'os/user', which cannot be used with CGO_ENABLED=0. This allows running the tests without CGO. 2013-06-21 19:40:42 -07:00
Daniel Mizyrycki
41cdd9b27f Packaging|ubuntu, issue #601: Allow packaging prerm to do its job 2013-06-21 17:37:32 -07:00
Victor Vieux
ec6b35240e fix raw terminal 2013-06-22 00:37:02 +00:00
Victor Vieux
42bcfcc927 add options to docker login 2013-06-21 10:00:25 +00:00
Eric Myhre
e44f62a95c Add argument to allow setting base directory for docker daemon's storage to values other than "/var/lib/docker". 2013-06-20 16:29:54 -05:00
Andy Rothfusz
5183399f50 Added multilayer example image. 2013-06-18 19:31:35 -07:00
Andy Rothfusz
a780b7c6b5 New Terminology section and illustrations. 2013-06-18 19:31:35 -07:00
455 changed files with 87034 additions and 5167 deletions

.gitignore (vendored): 8 changed lines

@@ -5,13 +5,15 @@ docker/docker
a.out
*.orig
build_src
command-line-arguments.test
.flymake*
docker.test
auth/auth.test
.idea
.DS_Store
docs/_build
docs/_static
docs/_templates
.gopath/
.dotcloud
*.test
bundles/
.hg/
.git/

.mailmap

@@ -1,4 +1,4 @@
# Generate AUTHORS: git log --all --format='%aN <%aE>' | sort -uf | grep -v vagrant-ubuntu-12
# Generate AUTHORS: git log --format='%aN <%aE>' | sort -uf | grep -v vagrant-ubuntu-12
<charles.hooper@dotcloud.com> <chooper@plumata.com>
<daniel.mizyrycki@dotcloud.com> <daniel@dotcloud.com>
<daniel.mizyrycki@dotcloud.com> <mzdaniel@glidelink.net>
@@ -23,3 +23,6 @@ Thatcher Peskens <thatcher@dotcloud.com>
Walter Stanish <walter@pratyeka.org>
<daniel@gasienica.ch> <dgasienica@zynga.com>
Roberto Hashioka <roberto_hashioka@hotmail.com>
Konstantin Pelykh <kpelykh@zettaset.com>
David Sissitka <me@dsissitka.com>
Nolan Darilek <nolan@thewordnerd.info>

AUTHORS: 34 changed lines

@@ -4,35 +4,49 @@
# For a list of active project maintainers, see the MAINTAINERS file.
#
Al Tobey <al@ooyala.com>
Alex Gaynor <alex.gaynor@gmail.com>
Alexey Shamrin <shamrin@gmail.com>
Andrea Luzzardi <aluzzardi@gmail.com>
Andreas Tiefenthaler <at@an-ti.eu>
Andrew Munsell <andrew@wizardapps.net>
Andrews Medina <andrewsmedina@gmail.com>
Andy Rothfusz <github@metaliveblog.com>
Andy Smith <github@anarkystic.com>
Anthony Bishopric <git@anthonybishopric.com>
Antony Messerli <amesserl@rackspace.com>
Barry Allard <barry.allard@gmail.com>
Brandon Liu <bdon@bdon.org>
Brian McCallister <brianm@skife.org>
Brian Olsen <brian@maven-group.org>
Bruno Bigras <bigras.bruno@gmail.com>
Caleb Spare <cespare@gmail.com>
Calen Pennington <cale@edx.org>
Charles Hooper <charles.hooper@dotcloud.com>
Christopher Currie <codemonkey+github@gmail.com>
Colin Rice <colin@daedrum.net>
Daniel Gasienica <daniel@gasienica.ch>
Daniel Mizyrycki <daniel.mizyrycki@dotcloud.com>
Daniel Robinson <gottagetmac@gmail.com>
Daniel Von Fange <daniel@leancoder.com>
Daniel YC Lin <dlin.tw@gmail.com>
David Calavera <david.calavera@gmail.com>
David Sissitka <me@dsissitka.com>
Dominik Honnef <dominik@honnef.co>
Don Spaulding <donspauldingii@gmail.com>
Dr Nic Williams <drnicwilliams@gmail.com>
Elias Probst <mail@eliasprobst.eu>
Emily Rose <emily@contactvibe.com>
Eric Hanchrow <ehanchrow@ine.com>
Eric Myhre <hash@exultant.us>
Erno Hopearuoho <erno.hopearuoho@gmail.com>
Evan Wies <evan@neomantra.net>
ezbercih <cem.ezberci@gmail.com>
Fabrizio Regini <freegenie@gmail.com>
Fareed Dudhia <fareeddudhia@googlemail.com>
Flavio Castelli <fcastelli@suse.com>
Francisco Souza <f@souza.cc>
Frederick F. Kautz IV <fkautz@alumni.cmu.edu>
Gabriel Monroy <gabriel@opdemand.com>
Gareth Rushgrove <gareth@morethanseven.net>
Guillaume J. Charmes <guillaume.charmes@dotcloud.com>
Harley Laue <losinggeneration@gmail.com>
@@ -40,6 +54,7 @@ Hunter Blanks <hunter@twilio.com>
Jeff Lindsay <progrium@gmail.com>
Jeremy Grosser <jeremy@synack.me>
Joffrey F <joffrey@dotcloud.com>
Johan Euphrosine <proppy@google.com>
John Costa <john.costa@gmail.com>
Jon Wedaman <jweede@gmail.com>
Jonas Pfenniger <jonas@pfenniger.name>
@@ -47,40 +62,59 @@ Jonathan Rudenberg <jonathan@titanous.com>
Joseph Anthony Pasquale Holsten <joseph@josephholsten.com>
Julien Barbier <write0@gmail.com>
Jérôme Petazzoni <jerome.petazzoni@dotcloud.com>
Karan Lyons <karan@karanlyons.com>
Keli Hu <dev@keli.hu>
Ken Cochrane <kencochrane@gmail.com>
Kevin Clark <kevin.clark@gmail.com>
Kevin J. Lynagh <kevin@keminglabs.com>
kim0 <email.ahmedkamal@googlemail.com>
Kimbro Staken <kstaken@kstaken.com>
Kiran Gangadharan <kiran.daredevil@gmail.com>
Konstantin Pelykh <kpelykh@zettaset.com>
Louis Opter <kalessin@kalessin.fr>
Marco Hennings <marco.hennings@freiheit.com>
Marcus Farkas <toothlessgear@finitebox.com>
Mark McGranaghan <mmcgrana@gmail.com>
Martin Redmond <mrtodo@gmail.com>
Maxim Treskin <zerthurd@gmail.com>
meejah <meejah@meejah.ca>
Michael Crosby <crosby.michael@gmail.com>
Mike Gaffney <mike@uberu.com>
Mikhail Sobolev <mss@mawhrin.net>
Nan Monnand Deng <monnand@gmail.com>
Nate Jones <nate@endot.org>
Nelson Chen <crazysim@gmail.com>
Niall O'Higgins <niallo@unworkable.org>
Nick Stenning <nick.stenning@digital.cabinet-office.gov.uk>
Nick Stinemates <nick@stinemates.org>
Nolan Darilek <nolan@thewordnerd.info>
odk- <github@odkurzacz.org>
Paul Bowsher <pbowsher@globalpersonals.co.uk>
Paul Hammond <paul@paulhammond.org>
Phil Spitler <pspitler@gmail.com>
Piotr Bogdan <ppbogdan@gmail.com>
Renato Riccieri Santos Zannon <renato.riccieri@gmail.com>
Rhys Hiltner <rhys@twitch.tv>
Robert Obryk <robryk@gmail.com>
Roberto Hashioka <roberto_hashioka@hotmail.com>
Ryan Fowler <rwfowler@gmail.com>
Sam Alba <sam.alba@gmail.com>
Sam J Sharpe <sam.sharpe@digital.cabinet-office.gov.uk>
Shawn Siefkas <shawn.siefkas@meredith.com>
Silas Sewell <silas@sewell.org>
Solomon Hykes <solomon@dotcloud.com>
Sridhar Ratnakumar <sridharr@activestate.com>
Stefan Praszalowicz <stefan@greplin.com>
Thatcher Peskens <thatcher@dotcloud.com>
Thijs Terlouw <thijsterlouw@gmail.com>
Thomas Bikeev <thomas.bikeev@mac.com>
Thomas Hansen <thomas.hansen@gmail.com>
Tianon Gravi <admwiggin@gmail.com>
Tim Terhorst <mynamewastaken+git@gmail.com>
Tobias Bieniek <Tobias.Bieniek@gmx.de>
Tobias Schmidt <ts@soundcloud.com>
Tobias Schwab <tobias.schwab@dynport.de>
Tom Hulihan <hulihan.tom159@gmail.com>
unclejack <unclejacksons@gmail.com>
Victor Vieux <victor.vieux@dotcloud.com>
Vivek Agarwal <me@vivek.im>

CHANGELOG.md

@@ -1,5 +1,188 @@
# Changelog
## 0.6.2 (2013-09-17)
+ Hack: Vendor all dependencies
+ Builder: Add -rm option in order to remove intermediate containers
+ Runtime: Add domainname support
+ Runtime: Implement image filtering with path.Match
* Builder: Allow multiline for the RUN instruction
* Runtime: Remove unnecessary warnings
* Runtime: Only mount the hostname file when the config exists
* Runtime: Handle signals within the `docker login` command
* Runtime: Remove os/user dependency
* Registry: Implement login with private registry
* Remote API: Bump to v1.5
* Packaging: Break down hack/make.sh into small scripts, one per 'bundle': test, binary, ubuntu etc.
* Documentation: General improvements
- Runtime: UID and GID are now also applied to volumes
- Runtime: `docker start` set error code upon error
- Runtime: `docker run` set the same error code as the process started
- Registry: Fix push issues
## 0.6.1 (2013-08-23)
* Registry: Pass "meta" headers in API calls to the registry
- Packaging: Use correct upstart script with new build tool
- Packaging: Use libffi-dev, don't build it from sources
- Packaging: Removed duplicate mercurial install command
## 0.6.0 (2013-08-22)
- Runtime: Load authConfig only when needed and fix useless WARNING
+ Runtime: Add lxc-conf flag to allow custom lxc options
- Runtime: Fix race conditions in parallel pull
- Runtime: Improve CMD, ENTRYPOINT, and attach docs.
* Documentation: Small fix to docs regarding adding docker groups
* Documentation: Add MongoDB image example
+ Builder: Add USER instruction to Dockerfile
* Documentation: updated default -H docs
* Remote API: Sort Images by most recent creation date.
+ Builder: Add workdir support for the Buildfile
+ Runtime: Add an option to set the working directory
- Runtime: Show tag used when image is missing
* Documentation: Update readme with dependencies for building
* Documentation: Add instructions for creating and using the docker group
* Remote API: Reworking opaque requests in registry module
- Runtime: Fix Graph ByParent() to generate list of child images per parent image.
* Runtime: Add Image name to LogEvent tests
* Documentation: Add sudo to examples and installation to documentation
+ Hack: Bash Completion: Limit commands to containers of a relevant state
* Remote API: Add image name in /events
* Runtime: Apply volumes-from before creating volumes
- Runtime: Make docker run handle SIGINT/SIGTERM
- Runtime: Prevent crash when .dockercfg not readable
* Hack: Add docker dependencies coverage testing into docker-ci
+ Runtime: Add -privileged flag and relevant tests, docs, and examples
+ Packaging: Docker-brew 0.5.2 support and memory footprint reduction
- Runtime: Install script should be fetched over https, not http.
* Packaging: Add new docker dependencies into docker-ci
* Runtime: Use Go 1.1.2 for dockerbuilder
* Registry: Improve auth push
* Runtime: API, issue 1471: Use groups for socket permissions
* Documentation: PostgreSQL service example in documentation
* Contrib: bash completion script
* Tests: Improve TestKillDifferentUser to prevent timeout on buildbot
* Documentation: Fix typo in docs for docker run -dns
* Documentation: Adding a reference to ps -a
- Runtime: Correctly detect IPv4 forwarding
- Packaging: Revert "docker.upstart: avoid spawning a `sh` process"
* Runtime: Use ranged for loop on channels
- Runtime: Fix typo: fmt.Sprint -> fmt.Sprintf
- Tests: Fix typo in TestBindMounts (runContainer called without image)
* Runtime: add websocket support to /container/<name>/attach/ws
* Runtime: Mount /dev/shm as a tmpfs
- Builder: Only count known instructions as build steps
- Builder: Fix docker build and docker events output
- Runtime: switch from http to https for get.docker.io
* Tests: Improve TestGetContainersTop so it does not rely on sleep
+ Packaging: Docker-brew and Docker standard library
* Testing: Add some tests in server and utils
+ Packaging: Release docker with docker
- Builder: Make sure ENV instruction within build perform a commit each time
* Packaging: Fix the upstart script generated by get.docker.io
- Runtime: fix small \n error in docker build
* Runtime: Let userland proxy handle container-bound traffic
* Runtime: Updated the Docker CLI to specify a value for the "Host" header.
* Runtime: Add warning when net.ipv4.ip_forwarding = 0
* Registry: Registry unit tests + mock registry
* Runtime: fixed #910. print user name to docker info output
- Builder: Forbid certain paths within docker build ADD
- Runtime: change network range to avoid conflict with EC2 DNS
* Tests: Relax the lo interface test to allow iface index != 1
* Documentation: Suggest installing linux-headers by default.
* Documentation: Change the twitter handle
* Client: Add docker cp command and copy api endpoint to copy container files/folders to the host
* Remote API: Use mime pkg to parse Content-Type
- Runtime: Reduce connect and read timeout when pinging the registry
* Documentation: Update amazon.rst to explain that Vagrant is not necessary for running Docker on ec2
* Packaging: Enabled the docs to generate manpages.
* Runtime: Parallel pull
- Runtime: Handle ip route showing mask-less IP addresses
* Documentation: Clarify Amazon EC2 installation
* Documentation: 'Base' image is deprecated and should no longer be referenced in the docs.
* Runtime: Fix to "Inject dockerinit at /.dockerinit"
* Runtime: Allow ENTRYPOINT without CMD
- Runtime: Always consider localhost as a domain name when parsing the FQN repos name
* Remote API: 650 http utils and user agent field
* Documentation: fix a typo in the ubuntu installation guide
- Builder: Repository name (and optionally a tag) in build usage
* Documentation: Move note about officially supported kernel
* Packaging: Revert "Bind daemon to 0.0.0.0 in Vagrant.
* Builder: Add no cache for docker build
* Runtime: Add hostname to environment
* Runtime: Add last stable version in `docker version`
- Builder: Make sure ADD will create everything in 0755
* Documentation: Add ufw doc
* Tests: Add registry functional test to docker-ci
- Documentation: Solved the logo being squished in Safari
- Runtime: Use utils.ParseRepositoryTag instead of strings.Split(name, ":") in server.ImageDelete
* Runtime: Refactor checksum
- Runtime: Improve connect message with socket error
* Documentation: Added information about Docker's high level tools over LXC.
* Don't read from stdout when only attached to stdin
## 0.5.3 (2013-08-13)
* Runtime: Use docker group for socket permissions
- Runtime: Spawn shell within upstart script
- Builder: Make sure ENV instruction within build perform a commit each time
- Runtime: Handle ip route showing mask-less IP addresses
- Runtime: Add hostname to environment
## 0.5.2 (2013-08-08)
* Builder: Forbid certain paths within docker build ADD
- Runtime: Change network range to avoid conflict with EC2 DNS
* API: Change daemon to listen on unix socket by default
## 0.5.1 (2013-07-30)
+ API: Docker client now sets useragent (RFC 2616)
+ Runtime: Add `ps` args to `docker top`
+ Runtime: Add support for container ID files (pidfile like)
+ Runtime: Add container=lxc in default env
+ Runtime: Support networkless containers with `docker run -n` and `docker -d -b=none`
+ API: Add /events endpoint
+ Builder: ADD command now understands URLs
+ Builder: CmdAdd and CmdEnv now respect Dockerfile-set ENV variables
* Hack: Simplify unit tests with helpers
* Hack: Improve docker.upstart event
* Hack: Add coverage testing into docker-ci
* Runtime: Stdout/stderr logs are now stored in the same file as JSON
* Runtime: Allocate a /16 IP range by default, with fallback to /24. Try 12 ranges instead of 3.
* Runtime: Change .dockercfg format to json and support multiple auth remote
- Runtime: Do not override volumes from config
- Runtime: Fix issue with EXPOSE override
- Builder: Create directories with 755 instead of 700 within ADD instruction
## 0.5.0 (2013-07-17)
+ Runtime: List all processes running inside a container with 'docker top'
+ Runtime: Host directories can be mounted as volumes with 'docker run -v'
+ Runtime: Containers can expose public UDP ports (eg, '-p 123/udp')
+ Runtime: Optionally specify an exact public port (eg. '-p 80:4500')
+ Registry: New image naming scheme inspired by Go packaging convention allows arbitrary combinations of registries
+ Builder: ENTRYPOINT instruction sets a default binary entry point to a container
+ Builder: VOLUME instruction marks a part of the container as persistent data
* Builder: 'docker build' displays the full output of a build by default
* Runtime: 'docker login' supports additional options
- Runtime: Don't save a container's hostname when committing an image.
- Registry: Fix issues when uploading images to a private registry
## 0.4.8 (2013-07-01)
+ Builder: New build operation ENTRYPOINT adds an executable entry point to the container.
- Runtime: Fix a bug which caused 'docker run -d' to no longer print the container ID.
- Tests: Fix issues in the test suite
## 0.4.7 (2013-06-28)
* Registry: easier push/pull to a custom registry
* Remote API: the progress bar updates faster when downloading and uploading large files
- Remote API: fix a bug in the optional unix socket transport
* Runtime: improve detection of kernel version
+ Runtime: host directories can be mounted as volumes with 'docker run -b'
- Runtime: fix an issue when only attaching to stdin
* Runtime: use 'tar --numeric-owner' to avoid uid mismatch across multiple hosts
* Hack: improve test suite and dev environment
* Hack: remove dependency on unit tests on 'os/user'
+ Documentation: add terminology section
## 0.4.6 (2013-06-22)
- Runtime: fix a bug which caused creation of empty images (and volumes) to crash.
## 0.4.5 (2013-06-21)
+ Builder: 'docker build git://URL' fetches and builds a remote git repository
* Runtime: 'docker ps -s' optionally prints container size


@@ -23,7 +23,7 @@ that feature *on top of* docker.
### Discuss your design on the mailing list
We recommend discussing your plans [on the mailing
list](https://groups.google.com/forum/?fromgroups#!forum/docker-club)
list](https://groups.google.com/forum/?fromgroups#!forum/docker-dev)
before starting to code - especially for more ambitious contributions.
This gives other contributors a chance to point you in the right
direction, give feedback on your design, and maybe point out if someone

Dockerfile Normal file

@@ -0,0 +1,58 @@
# This file describes the standard way to build Docker, using docker
#
# Usage:
#
# # Assemble the full dev environment. This is slow the first time.
# docker build -t docker .
# # Apparmor messes with privileged mode: disable it
# /etc/init.d/apparmor stop ; /etc/init.d/apparmor teardown
#
# # Mount your source in an interactive container for quick testing:
# docker run -v `pwd`:/go/src/github.com/dotcloud/docker -privileged -lxc-conf=lxc.aa_profile=unconfined -i -t docker bash
#
#
# # Run the test suite:
# docker run -privileged -lxc-conf=lxc.aa_profile=unconfined docker go test -v
#
# # Publish a release:
# docker run -privileged -lxc-conf=lxc.aa_profile=unconfined \
# -e AWS_S3_BUCKET=baz \
# -e AWS_ACCESS_KEY=foo \
# -e AWS_SECRET_KEY=bar \
# -e GPG_PASSPHRASE=gloubiboulga \
# -lxc-conf=lxc.aa_profile=unconfined -privileged docker hack/release.sh
#
docker-version 0.6.1
from ubuntu:12.04
maintainer Solomon Hykes <solomon@dotcloud.com>
# Build dependencies
run echo 'deb http://archive.ubuntu.com/ubuntu precise main universe' > /etc/apt/sources.list
run apt-get update
run apt-get install -y -q curl
run apt-get install -y -q git
run apt-get install -y -q mercurial
# Install Go
run curl -s https://go.googlecode.com/files/go1.1.2.linux-amd64.tar.gz | tar -v -C /usr/local -xz
env PATH /usr/local/go/bin:/usr/local/bin:/usr/local/sbin:/usr/bin:/usr/sbin:/bin:/sbin
env GOPATH /go:/go/src/github.com/dotcloud/docker/vendor
env CGO_ENABLED 0
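# Note: the next step warms the Go build cache by rebuilding the standard
# library with CGO disabled, so later static builds of docker are fast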
run cd /tmp && echo 'package main' > t.go && go test -a -i -v
# Ubuntu stuff
run apt-get install -y -q ruby1.9.3 rubygems libffi-dev
run gem install fpm
run apt-get install -y -q reprepro dpkg-sig
# Install s3cmd 1.0.1 (earlier versions don't support env variables in the config)
run apt-get install -y -q python-pip
run pip install s3cmd
run pip install python-magic
run /bin/echo -e '[default]\naccess_key=$AWS_ACCESS_KEY\nsecret_key=$AWS_SECRET_KEY\n' > /.s3cfg
# Runtime dependencies
run apt-get install -y -q iptables
run apt-get install -y -q lxc
volume /var/lib/docker
workdir /go/src/github.com/dotcloud/docker
# Wrap all commands in the "docker-in-docker" script to allow nested containers
entrypoint ["hack/dind"]
# Upload docker source
add . /go/src/github.com/dotcloud/docker

FIXME

@@ -34,3 +34,4 @@ to put them - so we put them here :)
* entry point config
* bring back git revision info, looks like it was lost
* Clean up the ProgressReader api, it's a PITA to use
* Use netlink instead of iproute2/iptables (#925)


@@ -1,5 +1,6 @@
Solomon Hykes <solomon@dotcloud.com>
Guillaume Charmes <guillaume@dotcloud.com>
Victor Vieux <victor@dotcloud.com>
api.go: Victor Vieux <victor@dotcloud.com>
Vagrantfile: Daniel Mizyrycki <daniel@dotcloud.com>
Solomon Hykes <solomon@dotcloud.com> (@shykes)
Guillaume Charmes <guillaume@dotcloud.com> (@creack)
Victor Vieux <victor@dotcloud.com> (@vieux)
Michael Crosby <michael@crosbymichael.com> (@crosbymichael)
api.go: Victor Vieux <victor@dotcloud.com> (@vieux)
Vagrantfile: Daniel Mizyrycki <daniel@dotcloud.com> (@mzdaniel)


@@ -1,87 +0,0 @@
DOCKER_PACKAGE := github.com/dotcloud/docker
RELEASE_VERSION := $(shell git tag | grep -E "v[0-9\.]+$$" | sort -nr | head -n 1)
SRCRELEASE := docker-$(RELEASE_VERSION)
BINRELEASE := docker-$(RELEASE_VERSION).tgz
GIT_ROOT := $(shell git rev-parse --show-toplevel)
BUILD_DIR := $(CURDIR)/.gopath
GOPATH ?= $(BUILD_DIR)
export GOPATH
GO_OPTIONS ?=
ifeq ($(VERBOSE), 1)
GO_OPTIONS += -v
endif
GIT_COMMIT = $(shell git rev-parse --short HEAD)
GIT_STATUS = $(shell test -n "`git status --porcelain`" && echo "+CHANGES")
BUILD_OPTIONS = -a -ldflags "-X main.GITCOMMIT $(GIT_COMMIT)$(GIT_STATUS) -d -w"
SRC_DIR := $(GOPATH)/src
DOCKER_DIR := $(SRC_DIR)/$(DOCKER_PACKAGE)
DOCKER_MAIN := $(DOCKER_DIR)/docker
DOCKER_BIN_RELATIVE := bin/docker
DOCKER_BIN := $(CURDIR)/$(DOCKER_BIN_RELATIVE)
.PHONY: all clean test hack release srcrelease $(BINRELEASE) $(SRCRELEASE) $(DOCKER_BIN) $(DOCKER_DIR)
all: $(DOCKER_BIN)
$(DOCKER_BIN): $(DOCKER_DIR)
@mkdir -p $(dir $@)
@(cd $(DOCKER_MAIN); CGO_ENABLED=0 go build $(GO_OPTIONS) $(BUILD_OPTIONS) -o $@)
@echo $(DOCKER_BIN_RELATIVE) is created.
$(DOCKER_DIR):
@mkdir -p $(dir $@)
@if [ -h $@ ]; then rm -f $@; fi; ln -sf $(CURDIR)/ $@
@(cd $(DOCKER_MAIN); go get -d $(GO_OPTIONS))
whichrelease:
echo $(RELEASE_VERSION)
release: $(BINRELEASE)
s3cmd -P put $(BINRELEASE) s3://get.docker.io/builds/`uname -s`/`uname -m`/docker-$(RELEASE_VERSION).tgz
s3cmd -P put docker-latest.tgz s3://get.docker.io/builds/`uname -s`/`uname -m`/docker-latest.tgz
srcrelease: $(SRCRELEASE)
deps: $(DOCKER_DIR)
# A clean checkout of $RELEASE_VERSION, with vendored dependencies
$(SRCRELEASE):
rm -fr $(SRCRELEASE)
git clone $(GIT_ROOT) $(SRCRELEASE)
cd $(SRCRELEASE); git checkout -q $(RELEASE_VERSION)
# A binary release ready to be uploaded to a mirror
$(BINRELEASE): $(SRCRELEASE)
rm -f $(BINRELEASE)
cd $(SRCRELEASE); make; cp -R bin docker-$(RELEASE_VERSION); tar -f ../$(BINRELEASE) -zv -c docker-$(RELEASE_VERSION)
cd $(SRCRELEASE); cp -R bin docker-latest; tar -f ../docker-latest.tgz -zv -c docker-latest
clean:
@rm -rf $(dir $(DOCKER_BIN))
ifeq ($(GOPATH), $(BUILD_DIR))
@rm -rf $(BUILD_DIR)
else ifneq ($(DOCKER_DIR), $(realpath $(DOCKER_DIR)))
@rm -f $(DOCKER_DIR)
endif
test: all
@(cd $(DOCKER_DIR); sudo -E go test $(GO_OPTIONS))
testall: all
@(cd $(DOCKER_DIR); sudo -E go test ./... $(GO_OPTIONS))
fmt:
@gofmt -s -l -w .
hack:
cd $(CURDIR)/hack && vagrant up
ssh-dev:
cd $(CURDIR)/hack && vagrant ssh

README.md

@@ -1,80 +1,129 @@
Docker: the Linux container engine
==================================
Docker is an open-source engine which automates the deployment of applications as highly portable, self-sufficient containers.
Docker is an open source project to pack, ship and run any application
as a lightweight container.
Docker containers are both *hardware-agnostic* and *platform-agnostic*. This means that they can run anywhere, from your
laptop to the largest EC2 compute instance and everything in between - and they don't require that you use a particular
language, framework or packaging system. That makes them great building blocks for deploying and scaling web apps, databases
and backend services without depending on a particular stack or provider.
Docker containers are both *hardware-agnostic* and
*platform-agnostic*. This means that they can run anywhere, from your
laptop to the largest EC2 compute instance and everything in between -
and they don't require that you use a particular language, framework
or packaging system. That makes them great building blocks for
deploying and scaling web apps, databases and backend services without
depending on a particular stack or provider.
Docker is an open-source implementation of the deployment engine which powers [dotCloud](http://dotcloud.com), a popular Platform-as-a-Service.
It benefits directly from the experience accumulated over several years of large-scale operation and support of hundreds of thousands
of applications and databases.
Docker is an open-source implementation of the deployment engine which
powers [dotCloud](http://dotcloud.com), a popular
Platform-as-a-Service. It benefits directly from the experience
accumulated over several years of large-scale operation and support of
hundreds of thousands of applications and databases.
![Docker L](docs/sources/concepts/images/lego_docker.jpg "Docker")
![Docker L](docs/sources/static_files/dockerlogo-h.png "Docker")
## Better than VMs
A common method for distributing applications and sandboxing their execution is to use virtual machines, or VMs. Typical VM formats
are VMWare's vmdk, Oracle Virtualbox's vdi, and Amazon EC2's ami. In theory these formats should allow every developer to
automatically package their application into a "machine" for easy distribution and deployment. In practice, that almost never
happens, for a few reasons:
A common method for distributing applications and sandboxing their
execution is to use virtual machines, or VMs. Typical VM formats are
VMWare's vmdk, Oracle Virtualbox's vdi, and Amazon EC2's ami. In
theory these formats should allow every developer to automatically
package their application into a "machine" for easy distribution and
deployment. In practice, that almost never happens, for a few reasons:
* *Size*: VMs are very large which makes them impractical to store and transfer.
* *Performance*: running VMs consumes significant CPU and memory, which makes them impractical in many scenarios, for example local development of multi-tier applications, and
large-scale deployment of cpu and memory-intensive applications on large numbers of machines.
* *Portability*: competing VM environments don't play well with each other. Although conversion tools do exist, they are limited and add even more overhead.
* *Hardware-centric*: VMs were designed with machine operators in mind, not software developers. As a result, they offer very limited tooling for what developers need most:
building, testing and running their software. For example, VMs offer no facilities for application versioning, monitoring, configuration, logging or service discovery.
* *Size*: VMs are very large which makes them impractical to store
and transfer.
* *Performance*: running VMs consumes significant CPU and memory,
which makes them impractical in many scenarios, for example local
development of multi-tier applications, and large-scale deployment
of cpu and memory-intensive applications on large numbers of
machines.
* *Portability*: competing VM environments don't play well with each
other. Although conversion tools do exist, they are limited and
add even more overhead.
* *Hardware-centric*: VMs were designed with machine operators in
mind, not software developers. As a result, they offer very
limited tooling for what developers need most: building, testing
and running their software. For example, VMs offer no facilities
for application versioning, monitoring, configuration, logging or
service discovery.
By contrast, Docker relies on a different sandboxing method known as *containerization*. Unlike traditional virtualization,
containerization takes place at the kernel level. Most modern operating system kernels now support the primitives necessary
for containerization, including Linux with [openvz](http://openvz.org), [vserver](http://linux-vserver.org) and more recently [lxc](http://lxc.sourceforge.net),
Solaris with [zones](http://docs.oracle.com/cd/E26502_01/html/E29024/preface-1.html#scrolltoc) and FreeBSD with [Jails](http://www.freebsd.org/doc/handbook/jails.html).
By contrast, Docker relies on a different sandboxing method known as
*containerization*. Unlike traditional virtualization,
containerization takes place at the kernel level. Most modern
operating system kernels now support the primitives necessary for
containerization, including Linux with [openvz](http://openvz.org),
[vserver](http://linux-vserver.org) and more recently
[lxc](http://lxc.sourceforge.net), Solaris with
[zones](http://docs.oracle.com/cd/E26502_01/html/E29024/preface-1.html#scrolltoc)
and FreeBSD with
[Jails](http://www.freebsd.org/doc/handbook/jails.html).
Docker builds on top of these low-level primitives to offer developers a portable format and runtime environment that solves
all 4 problems. Docker containers are small (and their transfer can be optimized with layers), they have basically zero memory and cpu overhead,
they are completely portable and are designed from the ground up with an application-centric design.
Docker builds on top of these low-level primitives to offer developers
a portable format and runtime environment that solves all 4
problems. Docker containers are small (and their transfer can be
optimized with layers), they have basically zero memory and cpu
overhead, they are completely portable and are designed from the
ground up with an application-centric design.
The best part: because docker operates at the OS level, it can still be run inside a VM!
The best part: because ``docker`` operates at the OS level, it can
still be run inside a VM!
## Plays well with others
Docker does not require that you buy into a particular programming language, framework, packaging system or configuration language.
Docker does not require that you buy into a particular programming
language, framework, packaging system or configuration language.
Is your application a unix process? Does it use files, tcp connections, environment variables, standard unix streams and command-line
arguments as inputs and outputs? Then docker can run it.
Is your application a Unix process? Does it use files, tcp
connections, environment variables, standard Unix streams and
command-line arguments as inputs and outputs? Then ``docker`` can run
it.
Can your application's build be expressed as a sequence of such commands? Then docker can build it.
Can your application's build be expressed as a sequence of such
commands? Then ``docker`` can build it.
## Escape dependency hell
A common problem for developers is the difficulty of managing all their application's dependencies in a simple and automated way.
A common problem for developers is the difficulty of managing all
their application's dependencies in a simple and automated way.
This is usually difficult for several reasons:
* *Cross-platform dependencies*. Modern applications often depend on a combination of system libraries and binaries, language-specific packages, framework-specific modules,
internal components developed for another project, etc. These dependencies live in different "worlds" and require different tools - these tools typically don't work
well with each other, requiring awkward custom integrations.
* *Cross-platform dependencies*. Modern applications often depend on
a combination of system libraries and binaries, language-specific
packages, framework-specific modules, internal components
developed for another project, etc. These dependencies live in
different "worlds" and require different tools - these tools
typically don't work well with each other, requiring awkward
custom integrations.
* Conflicting dependencies. Different applications may depend on different versions of the same dependency. Packaging tools handle these situations with various degrees of ease -
but they all handle them in different and incompatible ways, which again forces the developer to do extra work.
* Conflicting dependencies. Different applications may depend on
different versions of the same dependency. Packaging tools handle
these situations with various degrees of ease - but they all
handle them in different and incompatible ways, which again forces
the developer to do extra work.
* Custom dependencies. A developer may need to prepare a custom version of his application's dependency. Some packaging systems can handle custom versions of a dependency,
others can't - and all of them handle it differently.
* Custom dependencies. A developer may need to prepare a custom
version of their application's dependency. Some packaging systems
can handle custom versions of a dependency, others can't - and all
of them handle it differently.
Docker solves dependency hell by giving the developer a simple way to express *all* his application's dependencies in one place,
and streamline the process of assembling them. If this makes you think of [XKCD 927](http://xkcd.com/927/), don't worry. Docker doesn't
*replace* your favorite packaging systems. It simply orchestrates their use in a simple and repeatable way. How does it do that? With layers.
Docker solves dependency hell by giving the developer a simple way to
express *all* their application's dependencies in one place, and
streamline the process of assembling them. If this makes you think of
[XKCD 927](http://xkcd.com/927/), don't worry. Docker doesn't
*replace* your favorite packaging systems. It simply orchestrates
their use in a simple and repeatable way. How does it do that? With
layers.
Docker defines a build as running a sequence of unix commands, one after the other, in the same container. Build commands modify the contents of the container
(usually by installing new files on the filesystem), the next command modifies it some more, etc. Since each build command inherits the result of the previous
commands, the *order* in which the commands are executed expresses *dependencies*.
Docker defines a build as running a sequence of Unix commands, one
after the other, in the same container. Build commands modify the
contents of the container (usually by installing new files on the
filesystem), the next command modifies it some more, etc. Since each
build command inherits the result of the previous commands, the
*order* in which the commands are executed expresses *dependencies*.
Here's a typical docker build process:
Here's a typical Docker build process:
```bash
from ubuntu:12.10
@@ -87,295 +136,64 @@ run curl -L https://github.com/shykes/helloflask/archive/master.tar.gz | tar -xz
run cd helloflask-master && pip install -r requirements.txt
```
Note that Docker doesn't care *how* dependencies are built - as long as they can be built by running a unix command in a container.
Note that Docker doesn't care *how* dependencies are built - as long
as they can be built by running a Unix command in a container.
Install instructions
==================
Getting started
===============
Quick install on Ubuntu 12.04 and 12.10
---------------------------------------
Docker can be installed on your local machine as well as servers - both bare metal and virtualized.
It is available as a binary on most modern Linux systems, or as a VM on Windows, Mac and other systems.
```bash
curl get.docker.io | sudo sh -x
```
We also offer an interactive tutorial for quickly learning the basics of using Docker.
Binary installs
----------------
Docker supports the following binary installation methods.
Note that some methods are community contributions and not yet officially supported.
For up-to-date install instructions and online tutorials, see the [Getting Started page](http://www.docker.io/gettingstarted/).
* [Ubuntu 12.04 and 12.10 (officially supported)](http://docs.docker.io/en/latest/installation/ubuntulinux/)
* [Arch Linux](http://docs.docker.io/en/latest/installation/archlinux/)
* [Mac OS X (with Vagrant)](http://docs.docker.io/en/latest/installation/vagrant/)
* [Windows (with Vagrant)](http://docs.docker.io/en/latest/installation/windows/)
* [Amazon EC2 (with Vagrant)](http://docs.docker.io/en/latest/installation/amazon/)
Installing from source
----------------------
1. Make sure you have a [Go language](http://golang.org/doc/install) compiler and [git](http://git-scm.com) installed.
2. Checkout the source code
```bash
git clone http://github.com/dotcloud/docker
```
3. Build the docker binary
```bash
cd docker
make VERBOSE=1
sudo cp ./bin/docker /usr/local/bin/docker
```
Usage examples
==============
First run the docker daemon
---------------------------
Docker can be used to run short-lived commands, long-running daemons (app servers, databases etc.),
interactive shell sessions, etc.
All the examples assume your machine is running the docker daemon. To run the docker daemon in the background, simply type:
```bash
# On a production system you want this running in an init script
sudo docker -d &
```
Now you can run docker in client mode: all commands will be forwarded to the docker daemon, so the client can run from any account.
```bash
# Now you can run docker commands from any account.
docker help
```
Throwaway shell in a base ubuntu image
--------------------------------------
```bash
docker pull ubuntu:12.10
# Run an interactive shell, allocate a tty, attach stdin and stdout
# To detach the tty without exiting the shell, use the escape sequence Ctrl-p + Ctrl-q
docker run -i -t ubuntu:12.10 /bin/bash
```
Starting a long-running worker process
--------------------------------------
```bash
# Start a very useful long-running process
JOB=$(docker run -d ubuntu /bin/sh -c "while true; do echo Hello world; sleep 1; done")
# Collect the output of the job so far
docker logs $JOB
# Kill the job
docker kill $JOB
```
Running an irc bouncer
----------------------
```bash
BOUNCER_ID=$(docker run -d -p 6667 -u irc shykes/znc zncrun $USER $PASSWORD)
echo "Configure your irc client to connect to port $(docker port $BOUNCER_ID 6667) of this machine"
```
Running Redis
-------------
```bash
REDIS_ID=$(docker run -d -p 6379 shykes/redis redis-server)
echo "Configure your redis client to connect to port $(docker port $REDIS_ID 6379) of this machine"
```
Share your own image!
---------------------
```bash
CONTAINER=$(docker run -d ubuntu:12.10 apt-get install -y curl)
docker commit -m "Installed curl" $CONTAINER $USER/betterbase
docker push $USER/betterbase
```
A list of publicly available images is [available here](https://github.com/dotcloud/docker/wiki/Public-docker-images).
Expose a service on a TCP port
------------------------------
```bash
# Expose port 4444 of this container, and tell netcat to listen on it
JOB=$(docker run -d -p 4444 base /bin/nc -l -p 4444)
# Which public port is NATed to my container?
PORT=$(docker port $JOB 4444)
# Connect to the public port via the host's public address
# Please note that because of how routing works, connecting to localhost or 127.0.0.1 on $PORT will not work.
# Replace *eth0* according to your local interface name.
IP=$(ip -o -4 addr list eth0 | perl -n -e 'if (m{inet\s([\d\.]+)\/\d+\s}xms) { print $1 }')
echo hello world | nc $IP $PORT
# Verify that the network connection worked
echo "Daemon received: $(docker logs $JOB)"
```
You can find a [list of real-world examples](http://docs.docker.io/en/latest/examples/) in the documentation.
Under the hood
--------------
Under the hood, Docker is built on the following components:
* The [cgroup](http://blog.dotcloud.com/kernel-secrets-from-the-paas-garage-part-24-c) and [namespacing](http://blog.dotcloud.com/under-the-hood-linux-kernels-on-dotcloud-part) capabilities of the Linux kernel;
* [AUFS](http://aufs.sourceforge.net/aufs.html), a powerful union filesystem with copy-on-write capabilities;
* The
[cgroup](http://blog.dotcloud.com/kernel-secrets-from-the-paas-garage-part-24-c)
and
[namespacing](http://blog.dotcloud.com/under-the-hood-linux-kernels-on-dotcloud-part)
capabilities of the Linux kernel;
* [AUFS](http://aufs.sourceforge.net/aufs.html), a powerful union
filesystem with copy-on-write capabilities;
* The [Go](http://golang.org) programming language;
* [lxc](http://lxc.sourceforge.net/), a set of convenience scripts to simplify the creation of linux containers.
* [lxc](http://lxc.sourceforge.net/), a set of convenience scripts to
simplify the creation of Linux containers.
Contributing to Docker
======================
Want to hack on Docker? Awesome! There are instructions to get you started on the website: http://docs.docker.io/en/latest/contributing/contributing/
Want to hack on Docker? Awesome! There are instructions to get you
started [here](CONTRIBUTING.md).
They are probably not perfect; please let us know if anything feels wrong or incomplete.
They are probably not perfect; please let us know if anything feels
wrong or incomplete.
Note
----
We also keep the documentation in this repository. The website documentation is generated with Sphinx from these sources.
Please find it under docs/sources/ and read more about it at https://github.com/dotcloud/docker/tree/master/docs/README.md
Please feel free to fix and update the documentation and send us pull requests. More tutorials are also welcome.
Setting up a dev environment
----------------------------
Instructions that have been verified to work on Ubuntu 12.10:
```bash
sudo apt-get -y install lxc curl xz-utils golang git
export GOPATH=~/go/
export PATH=$GOPATH/bin:$PATH
mkdir -p $GOPATH/src/github.com/dotcloud
cd $GOPATH/src/github.com/dotcloud
git clone https://github.com/dotcloud/docker.git
cd docker
go get -v github.com/dotcloud/docker/...
go install -v github.com/dotcloud/docker/...
```
Then run the docker daemon:
```bash
sudo $GOPATH/bin/docker -d
```
Run the `go install` command (above) to recompile docker.
What is a Standard Container?
=============================
Docker defines a unit of software delivery called a Standard Container. The goal of a Standard Container is to encapsulate a software component and all its dependencies in
a format that is self-describing and portable, so that any compliant runtime can run it without extra dependencies, regardless of the underlying machine and the contents of the container.
The spec for Standard Containers is currently a work in progress, but it is very straightforward. It mostly defines 1) an image format, 2) a set of standard operations, and 3) an execution environment.
A great analogy for this is the shipping container. Just as Standard Containers are a fundamental unit of software delivery, shipping containers (http://bricks.argz.com/ins/7823-1/12) are a fundamental unit of physical delivery.
### 1. STANDARD OPERATIONS
Just like shipping containers, Standard Containers define a set of STANDARD OPERATIONS. Shipping containers can be lifted, stacked, locked, loaded, unloaded and labelled. Similarly, standard containers can be started, stopped, copied, snapshotted, downloaded, uploaded and tagged.
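In Docker terms, each of these standard operations maps onto an ordinary CLI command. The sketch below is purely illustrative and assumes a docker client of this era; the image name, `$USER/myapp`, and the `sleep` payload are placeholders, not part of the spec.

```bash
# Rough mapping of the standard operations onto docker commands
docker pull ubuntu:12.10                      # download an image
JOB=$(docker run -d ubuntu:12.10 sleep 300)   # start a container
docker stop $JOB                              # stop it
docker commit -m "snapshot" $JOB $USER/myapp  # snapshot it as a new image
docker push $USER/myapp                       # upload the snapshot
docker export $JOB > myapp.tar                # copy the container filesystem out
docker logs $JOB                              # read what it wrote to its standard streams
```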
### 2. CONTENT-AGNOSTIC
Just like shipping containers, Standard Containers are CONTENT-AGNOSTIC: all standard operations have the same effect regardless of the contents. A shipping container will be stacked in exactly the same way whether it contains Vietnamese powder coffee or spare Maserati parts. Similarly, Standard Containers are started or uploaded in the same way whether they contain a postgres database, a php application with its dependencies and application server, or Java build artifacts.
### 3. INFRASTRUCTURE-AGNOSTIC
Both types of containers are INFRASTRUCTURE-AGNOSTIC: they can be transported to thousands of facilities around the world, and manipulated by a wide variety of equipment. A shipping container can be packed in a factory in Ukraine, transported by truck to the nearest routing center, stacked onto a train, loaded into a German boat by an Australian-built crane, stored in a warehouse at a US facility, etc. Similarly, a standard container can be bundled on my laptop, uploaded to S3, downloaded, run and snapshotted by a build server at Equinix in Virginia, uploaded to 10 staging servers in a home-made Openstack cluster, then sent to 30 production instances across 3 EC2 regions.
### 4. DESIGNED FOR AUTOMATION
Because they offer the same standard operations regardless of content and infrastructure, Standard Containers, just like their physical counterpart, are extremely well-suited for automation. In fact, you could say automation is their secret weapon.
Many things that once required time-consuming and error-prone human effort can now be programmed. Before shipping containers, a bag of powder coffee was hauled, dragged, dropped, rolled and stacked by 10 different people in 10 different locations by the time it reached its destination. 1 out of 50 disappeared. 1 out of 20 was damaged. The process was slow, inefficient and cost a fortune - and was entirely different depending on the facility and the type of goods.
Similarly, before Standard Containers, by the time a software component ran in production, it had been individually built, configured, bundled, documented, patched, vendored, templated, tweaked and instrumented by 10 different people on 10 different computers. Builds failed, libraries conflicted, mirrors crashed, post-it notes were lost, logs were misplaced, cluster updates were half-broken. The process was slow, inefficient and cost a fortune - and was entirely different depending on the language and infrastructure provider.
### 5. INDUSTRIAL-GRADE DELIVERY
There are 17 million shipping containers in existence, packed with every physical good imaginable. Every single one of them can be loaded onto the same boats, by the same cranes, in the same facilities, and sent anywhere in the World with incredible efficiency. It is embarrassing to think that a 30 ton shipment of coffee can safely travel half-way across the World in *less time* than it takes a software team to deliver its code from one datacenter to another sitting 10 miles away.
With Standard Containers we can put an end to that embarrassment, by making INDUSTRIAL-GRADE DELIVERY of software a reality.
Standard Container Specification
--------------------------------
(TODO)
### Image format
### Standard operations
* Copy
* Run
* Stop
* Wait
* Commit
* Attach standard streams
* List filesystem changes
* ...
### Execution environment
#### Root filesystem
#### Environment variables
#### Process arguments
#### Networking
#### Process namespacing
#### Resource limits
#### Process monitoring
#### Logging
#### Signals
#### Pseudo-terminal allocation
#### Security
### Legal
Transfers of Docker shall be in accordance with applicable export controls of any country and all other applicable
legal requirements. Docker shall not be distributed or downloaded to or in Cuba, Iran, North Korea, Sudan or Syria
and shall not be distributed or downloaded to any person on the Denied Persons List administered by the U.S.
Transfers of Docker shall be in accordance with applicable export
controls of any country and all other applicable legal requirements.
Docker shall not be distributed or downloaded to or in Cuba, Iran,
North Korea, Sudan or Syria and shall not be distributed or downloaded
to any person on the Denied Persons List administered by the U.S.
Department of Commerce.

VERSION Normal file

@@ -0,0 +1 @@
0.6.2

Vagrantfile vendored

@@ -12,22 +12,20 @@ Vagrant::Config.run do |config|
# Setup virtual machine box. This VM configuration code is always executed.
config.vm.box = BOX_NAME
config.vm.box_url = BOX_URI
config.vm.forward_port 4243, 4243
# Provision docker and new kernel if deployment was not done
# Provision docker and new kernel if deployment was not done.
# It is assumed Vagrant can successfully launch the provider instance.
if Dir.glob("#{File.dirname(__FILE__)}/.vagrant/machines/default/*/id").empty?
# Add lxc-docker package
pkg_cmd = "apt-get update -qq; apt-get install -q -y python-software-properties; " \
"add-apt-repository -y ppa:dotcloud/lxc-docker; apt-get update -qq; " \
"apt-get install -q -y lxc-docker; "
# Add X.org Ubuntu backported 3.8 kernel
pkg_cmd << "add-apt-repository -y ppa:ubuntu-x-swat/r-lts-backport; " \
"apt-get update -qq; apt-get install -q -y linux-image-3.8.0-19-generic; "
# Add guest additions if local vbox VM
is_vbox = true
ARGV.each do |arg| is_vbox &&= !arg.downcase.start_with?("--provider") end
if is_vbox
pkg_cmd << "apt-get install -q -y linux-headers-3.8.0-19-generic dkms; " \
pkg_cmd = "wget -q -O - https://get.docker.io/gpg | apt-key add -;" \
"echo deb http://get.docker.io/ubuntu docker main > /etc/apt/sources.list.d/docker.list;" \
"apt-get update -qq; apt-get install -q -y --force-yes lxc-docker; "
# Add Ubuntu raring backported kernel
pkg_cmd << "apt-get update -qq; apt-get install -q -y linux-image-generic-lts-raring; "
# Add guest additions if local vbox VM. As virtualbox is the default provider,
# it is assumed it won't be explicitly stated.
if ENV["VAGRANT_DEFAULT_PROVIDER"].nil? && ARGV.none? { |arg| arg.downcase.start_with?("--provider") }
pkg_cmd << "apt-get install -q -y linux-headers-generic-lts-raring dkms; " \
"echo 'Downloading VBox Guest Additions...'; " \
"wget -q http://dlc.sun.com.edgesuite.net/virtualbox/4.2.12/VBoxGuestAdditions_4.2.12.iso; "
# Prepare the VM to add guest additions after reboot
@@ -82,15 +80,15 @@ Vagrant::VERSION >= "1.1.0" and Vagrant.configure("2") do |config|
end
if !FORWARD_DOCKER_PORTS.nil?
Vagrant::VERSION < "1.1.0" and Vagrant::Config.run do |config|
(49000..49900).each do |port|
config.vm.forward_port port, port
end
Vagrant::VERSION < "1.1.0" and Vagrant::Config.run do |config|
(49000..49900).each do |port|
config.vm.forward_port port, port
end
end
Vagrant::VERSION >= "1.1.0" and Vagrant.configure("2") do |config|
(49000..49900).each do |port|
config.vm.network :forwarded_port, :host => port, :guest => port
end
Vagrant::VERSION >= "1.1.0" and Vagrant.configure("2") do |config|
(49000..49900).each do |port|
config.vm.network :forwarded_port, :host => port, :guest => port
end
end
end

api.go

@@ -1,6 +1,8 @@
package docker
import (
"code.google.com/p/go.net/websocket"
"encoding/base64"
"encoding/json"
"fmt"
"github.com/dotcloud/docker/auth"
@@ -9,17 +11,22 @@ import (
"io"
"io/ioutil"
"log"
"mime"
"net"
"net/http"
"os"
"os/exec"
"regexp"
"strconv"
"strings"
)
const APIVERSION = 1.3
const DEFAULTHTTPHOST string = "127.0.0.1"
const DEFAULTHTTPPORT int = 4243
const APIVERSION = 1.5
const DEFAULTHTTPHOST = "127.0.0.1"
const DEFAULTHTTPPORT = 4243
const DEFAULTUNIXSOCKET = "/var/run/docker.sock"
type HttpApiFunc func(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error
func hijackServer(w http.ResponseWriter) (io.ReadCloser, io.Writer, error) {
conn, _, err := w.(http.Hijacker).Hijack()
@@ -47,26 +54,36 @@ func parseMultipartForm(r *http.Request) error {
}
func httpError(w http.ResponseWriter, err error) {
statusCode := http.StatusInternalServerError
if strings.HasPrefix(err.Error(), "No such") {
http.Error(w, err.Error(), http.StatusNotFound)
statusCode = http.StatusNotFound
} else if strings.HasPrefix(err.Error(), "Bad parameter") {
http.Error(w, err.Error(), http.StatusBadRequest)
statusCode = http.StatusBadRequest
} else if strings.HasPrefix(err.Error(), "Conflict") {
http.Error(w, err.Error(), http.StatusConflict)
statusCode = http.StatusConflict
} else if strings.HasPrefix(err.Error(), "Impossible") {
http.Error(w, err.Error(), http.StatusNotAcceptable)
statusCode = http.StatusNotAcceptable
} else if strings.HasPrefix(err.Error(), "Wrong login/password") {
http.Error(w, err.Error(), http.StatusUnauthorized)
statusCode = http.StatusUnauthorized
} else if strings.Contains(err.Error(), "hasn't been activated") {
http.Error(w, err.Error(), http.StatusForbidden)
} else {
http.Error(w, err.Error(), http.StatusInternalServerError)
statusCode = http.StatusForbidden
}
utils.Debugf("[error %d] %s", statusCode, err)
http.Error(w, err.Error(), statusCode)
}
func writeJSON(w http.ResponseWriter, b []byte) {
func writeJSON(w http.ResponseWriter, code int, v interface{}) error {
b, err := json.Marshal(v)
if err != nil {
return err
}
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(code)
w.Write(b)
return nil
}
func getBoolParam(value string) (bool, error) {
@@ -80,24 +97,12 @@ func getBoolParam(value string) (bool, error) {
return ret, nil
}
func getAuth(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if version > 1.1 {
w.WriteHeader(http.StatusNotFound)
return nil
}
authConfig, err := auth.LoadConfig(srv.runtime.root)
func matchesContentType(contentType, expectedType string) bool {
mimetype, _, err := mime.ParseMediaType(contentType)
if err != nil {
if err != auth.ErrConfigFileMissing {
return err
}
authConfig = &auth.AuthConfig{}
utils.Debugf("Error parsing media type: %s error: %s", contentType, err.Error())
}
b, err := json.Marshal(&auth.AuthConfig{Username: authConfig.Username, Email: authConfig.Email})
if err != nil {
return err
}
writeJSON(w, b)
return nil
return err == nil && mimetype == expectedType
}
func postAuth(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
@@ -106,49 +111,19 @@ func postAuth(srv *Server, version float64, w http.ResponseWriter, r *http.Reque
if err != nil {
return err
}
status := ""
if version > 1.1 {
status, err = auth.Login(authConfig, false)
if err != nil {
return err
}
} else {
localAuthConfig, err := auth.LoadConfig(srv.runtime.root)
if err != nil {
if err != auth.ErrConfigFileMissing {
return err
}
}
if authConfig.Username == localAuthConfig.Username {
authConfig.Password = localAuthConfig.Password
}
newAuthConfig := auth.NewAuthConfig(authConfig.Username, authConfig.Password, authConfig.Email, srv.runtime.root)
status, err = auth.Login(newAuthConfig, true)
if err != nil {
return err
}
status, err := auth.Login(authConfig, srv.HTTPRequestFactory(nil))
if err != nil {
return err
}
if status != "" {
b, err := json.Marshal(&APIAuth{Status: status})
if err != nil {
return err
}
writeJSON(w, b)
return nil
return writeJSON(w, http.StatusOK, &APIAuth{Status: status})
}
w.WriteHeader(http.StatusNoContent)
return nil
}
func getVersion(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
m := srv.DockerVersion()
b, err := json.Marshal(m)
if err != nil {
return err
}
writeJSON(w, b)
return nil
return writeJSON(w, http.StatusOK, srv.DockerVersion())
}
func postContainersKill(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
@@ -170,7 +145,7 @@ func getContainersExport(srv *Server, version float64, w http.ResponseWriter, r
name := vars["name"]
if err := srv.ContainerExport(name, w); err != nil {
utils.Debugf("%s", err.Error())
utils.Debugf("%s", err)
return err
}
return nil
@@ -191,12 +166,8 @@ func getImagesJSON(srv *Server, version float64, w http.ResponseWriter, r *http.
if err != nil {
return err
}
b, err := json.Marshal(outs)
if err != nil {
return err
}
writeJSON(w, b)
return nil
return writeJSON(w, http.StatusOK, outs)
}
func getImagesViz(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
@@ -207,12 +178,63 @@ func getImagesViz(srv *Server, version float64, w http.ResponseWriter, r *http.R
}
func getInfo(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
out := srv.DockerInfo()
b, err := json.Marshal(out)
if err != nil {
return writeJSON(w, http.StatusOK, srv.DockerInfo())
}
func getEvents(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
sendEvent := func(wf *utils.WriteFlusher, event *utils.JSONMessage) error {
b, err := json.Marshal(event)
if err != nil {
return fmt.Errorf("JSON error")
}
_, err = wf.Write(b)
if err != nil {
// On error, evict the listener
utils.Debugf("%s", err)
srv.Lock()
delete(srv.listeners, r.RemoteAddr)
srv.Unlock()
return err
}
return nil
}
if err := parseForm(r); err != nil {
return err
}
writeJSON(w, b)
listener := make(chan utils.JSONMessage)
srv.Lock()
srv.listeners[r.RemoteAddr] = listener
srv.Unlock()
since, err := strconv.ParseInt(r.Form.Get("since"), 10, 0)
if err != nil {
since = 0
}
w.Header().Set("Content-Type", "application/json")
wf := utils.NewWriteFlusher(w)
if since != 0 {
// If since, send previous events that happened after the timestamp
for _, event := range srv.events {
if event.Time >= since {
err := sendEvent(wf, &event)
if err != nil && err.Error() == "JSON error" {
continue
}
if err != nil {
return err
}
}
}
}
for event := range listener {
err := sendEvent(wf, &event)
if err != nil && err.Error() == "JSON error" {
continue
}
if err != nil {
return err
}
}
return nil
}
@@ -225,12 +247,8 @@ func getImagesHistory(srv *Server, version float64, w http.ResponseWriter, r *ht
if err != nil {
return err
}
b, err := json.Marshal(outs)
if err != nil {
return err
}
writeJSON(w, b)
return nil
return writeJSON(w, http.StatusOK, outs)
}
func getContainersChanges(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
@@ -242,12 +260,28 @@ func getContainersChanges(srv *Server, version float64, w http.ResponseWriter, r
if err != nil {
return err
}
b, err := json.Marshal(changesStr)
return writeJSON(w, http.StatusOK, changesStr)
}
func getContainersTop(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if version < 1.4 {
return fmt.Errorf("top was improved a lot since 1.3, Please upgrade your docker client.")
}
if vars == nil {
return fmt.Errorf("Missing parameter")
}
if err := parseForm(r); err != nil {
return err
}
name := vars["name"]
ps_args := r.Form.Get("ps_args")
procsStr, err := srv.ContainerTop(name, ps_args)
if err != nil {
return err
}
writeJSON(w, b)
return nil
return writeJSON(w, http.StatusOK, procsStr)
}
func getContainersJSON(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
@@ -270,12 +304,17 @@ func getContainersJSON(srv *Server, version float64, w http.ResponseWriter, r *h
}
outs := srv.Containers(all, size, n, since, before)
b, err := json.Marshal(outs)
if err != nil {
return err
if version < 1.5 {
outs2 := []APIContainersOld{}
for _, ctnr := range outs {
outs2 = append(outs2, ctnr.ToLegacy())
}
return writeJSON(w, http.StatusOK, outs2)
} else {
return writeJSON(w, http.StatusOK, outs)
}
writeJSON(w, b)
return nil
}
func postImagesTag(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
@@ -306,7 +345,7 @@ func postCommit(srv *Server, version float64, w http.ResponseWriter, r *http.Req
}
config := &Config{}
if err := json.NewDecoder(r.Body).Decode(config); err != nil {
utils.Debugf("%s", err.Error())
utils.Debugf("%s", err)
}
repo := r.Form.Get("repo")
tag := r.Form.Get("tag")
@@ -317,13 +356,8 @@ func postCommit(srv *Server, version float64, w http.ResponseWriter, r *http.Req
if err != nil {
return err
}
b, err := json.Marshal(&APIID{id})
if err != nil {
return err
}
w.WriteHeader(http.StatusCreated)
writeJSON(w, b)
return nil
return writeJSON(w, http.StatusCreated, &APIID{id})
}
// Creates an image from Pull or from Import
@@ -337,13 +371,28 @@ func postImagesCreate(srv *Server, version float64, w http.ResponseWriter, r *ht
tag := r.Form.Get("tag")
repo := r.Form.Get("repo")
authEncoded := r.Header.Get("X-Registry-Auth")
authConfig := &auth.AuthConfig{}
if authEncoded != "" {
authJson := base64.NewDecoder(base64.URLEncoding, strings.NewReader(authEncoded))
if err := json.NewDecoder(authJson).Decode(authConfig); err != nil {
// for a pull it is not an error if no auth was given
// to increase compatibility with the existing API, it defaults to empty
authConfig = &auth.AuthConfig{}
}
}
if version > 1.0 {
w.Header().Set("Content-Type", "application/json")
}
sf := utils.NewStreamFormatter(version > 1.0)
if image != "" { //pull
registry := r.Form.Get("registry")
if err := srv.ImagePull(image, tag, registry, w, sf, &auth.AuthConfig{}); err != nil {
metaHeaders := map[string][]string{}
for k, v := range r.Header {
if strings.HasPrefix(k, "X-Meta-") {
metaHeaders[k] = v
}
}
if err := srv.ImagePull(image, tag, w, sf, authConfig, metaHeaders, version > 1.3); err != nil {
if sf.Used() {
w.Write(sf.FormatError(err))
return nil
@@ -372,12 +421,8 @@ func getImagesSearch(srv *Server, version float64, w http.ResponseWriter, r *htt
if err != nil {
return err
}
b, err := json.Marshal(outs)
if err != nil {
return err
}
writeJSON(w, b)
return nil
return writeJSON(w, http.StatusOK, outs)
}
func postImagesInsert(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
@@ -402,31 +447,37 @@ func postImagesInsert(srv *Server, version float64, w http.ResponseWriter, r *ht
return nil
}
}
b, err := json.Marshal(&APIID{ID: imgID})
if err != nil {
return err
}
writeJSON(w, b)
return nil
return writeJSON(w, http.StatusOK, &APIID{ID: imgID})
}
func postImagesPush(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
authConfig := &auth.AuthConfig{}
if version > 1.1 {
if err := json.NewDecoder(r.Body).Decode(authConfig); err != nil {
return err
metaHeaders := map[string][]string{}
for k, v := range r.Header {
if strings.HasPrefix(k, "X-Meta-") {
metaHeaders[k] = v
}
} else {
localAuthConfig, err := auth.LoadConfig(srv.runtime.root)
if err != nil && err != auth.ErrConfigFileMissing {
return err
}
authConfig = localAuthConfig
}
if err := parseForm(r); err != nil {
return err
}
registry := r.Form.Get("registry")
authConfig := &auth.AuthConfig{}
authEncoded := r.Header.Get("X-Registry-Auth")
if authEncoded != "" {
// the new format is to handle the authConfig as a header
authJson := base64.NewDecoder(base64.URLEncoding, strings.NewReader(authEncoded))
if err := json.NewDecoder(authJson).Decode(authConfig); err != nil {
// to increase compatibility with the existing API, it defaults to empty
authConfig = &auth.AuthConfig{}
}
} else {
// the old format is supported for compatibility if there was no authConfig header
if err := json.NewDecoder(r.Body).Decode(authConfig); err != nil {
return err
}
}
if vars == nil {
return fmt.Errorf("Missing parameter")
@@ -436,7 +487,7 @@ func postImagesPush(srv *Server, version float64, w http.ResponseWriter, r *http
w.Header().Set("Content-Type", "application/json")
}
sf := utils.NewStreamFormatter(version > 1.0)
if err := srv.ImagePush(name, registry, w, sf, authConfig); err != nil {
if err := srv.ImagePush(name, w, sf, authConfig, metaHeaders); err != nil {
if sf.Used() {
w.Write(sf.FormatError(err))
return nil
@@ -454,7 +505,12 @@ func postContainersCreate(srv *Server, version float64, w http.ResponseWriter, r
return err
}
if len(config.Dns) == 0 && len(srv.runtime.Dns) == 0 && utils.CheckLocalDns() {
resolvConf, err := utils.GetResolvConf()
if err != nil {
return err
}
if !config.NetworkDisabled && len(config.Dns) == 0 && len(srv.runtime.Dns) == 0 && utils.CheckLocalDns(resolvConf) {
out.Warnings = append(out.Warnings, fmt.Sprintf("Docker detected local DNS server on resolv.conf. Using default external servers: %v", defaultDns))
config.Dns = defaultDns
}
@@ -474,13 +530,12 @@ func postContainersCreate(srv *Server, version float64, w http.ResponseWriter, r
out.Warnings = append(out.Warnings, "Your kernel does not support memory swap capabilities. Limitation discarded.")
}
b, err := json.Marshal(out)
if err != nil {
return err
if !config.NetworkDisabled && srv.runtime.capabilities.IPv4ForwardingDisabled {
log.Println("Warning: IPv4 forwarding is disabled.")
out.Warnings = append(out.Warnings, "IPv4 forwarding is disabled.")
}
w.WriteHeader(http.StatusCreated)
writeJSON(w, b)
return nil
return writeJSON(w, http.StatusCreated, out)
}
func postContainersRestart(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
@@ -535,12 +590,8 @@ func deleteImages(srv *Server, version float64, w http.ResponseWriter, r *http.R
return err
}
if imgs != nil {
if len(*imgs) != 0 {
b, err := json.Marshal(imgs)
if err != nil {
return err
}
writeJSON(w, b)
if len(imgs) != 0 {
return writeJSON(w, http.StatusOK, imgs)
} else {
return fmt.Errorf("Conflict, %s wasn't deleted", name)
}
@@ -551,11 +602,22 @@ func deleteImages(srv *Server, version float64, w http.ResponseWriter, r *http.R
}
func postContainersStart(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
var hostConfig *HostConfig
// allow a nil body for backwards compatibility
if r.Body != nil {
if matchesContentType(r.Header.Get("Content-Type"), "application/json") {
hostConfig = &HostConfig{}
if err := json.NewDecoder(r.Body).Decode(hostConfig); err != nil {
return err
}
}
}
if vars == nil {
return fmt.Errorf("Missing parameter")
}
name := vars["name"]
if err := srv.ContainerStart(name); err != nil {
if err := srv.ContainerStart(name, hostConfig); err != nil {
return err
}
w.WriteHeader(http.StatusNoContent)
@@ -592,12 +654,8 @@ func postContainersWait(srv *Server, version float64, w http.ResponseWriter, r *
if err != nil {
return err
}
b, err := json.Marshal(&APIWait{StatusCode: status})
if err != nil {
return err
}
writeJSON(w, b)
return nil
return writeJSON(w, http.StatusOK, &APIWait{StatusCode: status})
}
func postContainersResize(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
@@ -660,7 +718,20 @@ func postContainersAttach(srv *Server, version float64, w http.ResponseWriter, r
if err != nil {
return err
}
defer in.Close()
defer func() {
if tcpc, ok := in.(*net.TCPConn); ok {
tcpc.CloseWrite()
} else {
in.Close()
}
}()
defer func() {
if tcpc, ok := out.(*net.TCPConn); ok {
tcpc.CloseWrite()
} else if closer, ok := out.(io.Closer); ok {
closer.Close()
}
}()
fmt.Fprintf(out, "HTTP/1.1 200 OK\r\nContent-Type: application/vnd.docker.raw-stream\r\n\r\n")
if err := srv.ContainerAttach(name, logs, stream, stdin, stdout, stderr, in, out); err != nil {
@@ -669,6 +740,53 @@ func postContainersAttach(srv *Server, version float64, w http.ResponseWriter, r
return nil
}
func wsContainersAttach(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if err := parseForm(r); err != nil {
return err
}
logs, err := getBoolParam(r.Form.Get("logs"))
if err != nil {
return err
}
stream, err := getBoolParam(r.Form.Get("stream"))
if err != nil {
return err
}
stdin, err := getBoolParam(r.Form.Get("stdin"))
if err != nil {
return err
}
stdout, err := getBoolParam(r.Form.Get("stdout"))
if err != nil {
return err
}
stderr, err := getBoolParam(r.Form.Get("stderr"))
if err != nil {
return err
}
if vars == nil {
return fmt.Errorf("Missing parameter")
}
name := vars["name"]
if _, err := srv.ContainerInspect(name); err != nil {
return err
}
h := websocket.Handler(func(ws *websocket.Conn) {
defer ws.Close()
if err := srv.ContainerAttach(name, logs, stream, stdin, stdout, stderr, ws, ws); err != nil {
utils.Debugf("Error: %s", err)
}
})
h.ServeHTTP(w, r)
return nil
}
func getContainersByName(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if vars == nil {
return fmt.Errorf("Missing parameter")
@@ -679,12 +797,13 @@ func getContainersByName(srv *Server, version float64, w http.ResponseWriter, r
if err != nil {
return err
}
b, err := json.Marshal(container)
if err != nil {
return err
_, err = srv.ImageInspect(name)
if err == nil {
return fmt.Errorf("Conflict between containers and images")
}
writeJSON(w, b)
return nil
return writeJSON(w, http.StatusOK, container)
}
func getImagesByName(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
@@ -697,35 +816,13 @@ func getImagesByName(srv *Server, version float64, w http.ResponseWriter, r *htt
if err != nil {
return err
}
b, err := json.Marshal(image)
if err != nil {
return err
}
writeJSON(w, b)
return nil
}
func postImagesGetCache(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
apiConfig := &APIImageConfig{}
if err := json.NewDecoder(r.Body).Decode(apiConfig); err != nil {
return err
_, err = srv.ContainerInspect(name)
if err == nil {
return fmt.Errorf("Conflict between containers and images")
}
image, err := srv.ImageGetCached(apiConfig.ID, apiConfig.Config)
if err != nil {
return err
}
if image == nil {
w.WriteHeader(http.StatusNotFound)
return nil
}
apiID := &APIID{ID: image.ID}
b, err := json.Marshal(apiID)
if err != nil {
return err
}
writeJSON(w, b)
return nil
return writeJSON(w, http.StatusOK, image)
}
func postBuild(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
@@ -734,12 +831,10 @@ func postBuild(srv *Server, version float64, w http.ResponseWriter, r *http.Requ
}
remoteURL := r.FormValue("remote")
repoName := r.FormValue("t")
tag := ""
if strings.Contains(repoName, ":") {
remoteParts := strings.Split(repoName, ":")
tag = remoteParts[1]
repoName = remoteParts[0]
}
rawSuppressOutput := r.FormValue("q")
rawNoCache := r.FormValue("nocache")
rawRm := r.FormValue("rm")
repoName, tag := utils.ParseRepositoryTag(repoName)
var context io.Reader
@@ -780,7 +875,21 @@ func postBuild(srv *Server, version float64, w http.ResponseWriter, r *http.Requ
}
context = c
}
b := NewBuildFile(srv, utils.NewWriteFlusher(w))
suppressOutput, err := getBoolParam(rawSuppressOutput)
if err != nil {
return err
}
noCache, err := getBoolParam(rawNoCache)
if err != nil {
return err
}
rm, err := getBoolParam(rawRm)
if err != nil {
return err
}
b := NewBuildFile(srv, utils.NewWriteFlusher(w), !suppressOutput, !noCache, rm)
id, err := b.Build(context)
if err != nil {
fmt.Fprintf(w, "Error build: %s\n", err)
@@ -792,6 +901,36 @@ func postBuild(srv *Server, version float64, w http.ResponseWriter, r *http.Requ
return nil
}
func postContainersCopy(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
if vars == nil {
return fmt.Errorf("Missing parameter")
}
name := vars["name"]
copyData := &APICopy{}
contentType := r.Header.Get("Content-Type")
if contentType == "application/json" {
if err := json.NewDecoder(r.Body).Decode(copyData); err != nil {
return err
}
} else {
return fmt.Errorf("Content-Type not supported: %s", contentType)
}
if copyData.Resource == "" {
return fmt.Errorf("Resource cannot be empty")
}
if copyData.Resource[0] == '/' {
copyData.Resource = copyData.Resource[1:]
}
if err := srv.ContainerCopy(name, copyData.Resource, w); err != nil {
utils.Debugf("%s", err.Error())
return err
}
return nil
}
func optionsHandler(srv *Server, version float64, w http.ResponseWriter, r *http.Request, vars map[string]string) error {
w.WriteHeader(http.StatusOK)
return nil
@@ -802,24 +941,61 @@ func writeCorsHeaders(w http.ResponseWriter, r *http.Request) {
w.Header().Add("Access-Control-Allow-Methods", "GET, POST, DELETE, PUT, OPTIONS")
}
func makeHttpHandler(srv *Server, logging bool, localMethod string, localRoute string, handlerFunc HttpApiFunc) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
// log the request
utils.Debugf("Calling %s %s", localMethod, localRoute)
if logging {
log.Println(r.Method, r.RequestURI)
}
if strings.Contains(r.Header.Get("User-Agent"), "Docker-Client/") {
userAgent := strings.Split(r.Header.Get("User-Agent"), "/")
if len(userAgent) == 2 && userAgent[1] != VERSION {
utils.Debugf("Warning: client and server don't have the same version (client: %s, server: %s)", userAgent[1], VERSION)
}
}
version, err := strconv.ParseFloat(mux.Vars(r)["version"], 64)
if err != nil {
version = APIVERSION
}
if srv.enableCors {
writeCorsHeaders(w, r)
}
if version == 0 || version > APIVERSION {
w.WriteHeader(http.StatusNotFound)
return
}
if err := handlerFunc(srv, version, w, r, mux.Vars(r)); err != nil {
utils.Debugf("Error: %s", err)
httpError(w, err)
}
}
}
func createRouter(srv *Server, logging bool) (*mux.Router, error) {
r := mux.NewRouter()
m := map[string]map[string]func(*Server, float64, http.ResponseWriter, *http.Request, map[string]string) error{
m := map[string]map[string]HttpApiFunc{
"GET": {
"/auth": getAuth,
"/version": getVersion,
"/info": getInfo,
"/images/json": getImagesJSON,
"/images/viz": getImagesViz,
"/images/search": getImagesSearch,
"/images/{name:.*}/history": getImagesHistory,
"/images/{name:.*}/json": getImagesByName,
"/containers/ps": getContainersJSON,
"/containers/json": getContainersJSON,
"/containers/{name:.*}/export": getContainersExport,
"/containers/{name:.*}/changes": getContainersChanges,
"/containers/{name:.*}/json": getContainersByName,
"/events": getEvents,
"/info": getInfo,
"/version": getVersion,
"/images/json": getImagesJSON,
"/images/viz": getImagesViz,
"/images/search": getImagesSearch,
"/images/{name:.*}/history": getImagesHistory,
"/images/{name:.*}/json": getImagesByName,
"/containers/ps": getContainersJSON,
"/containers/json": getContainersJSON,
"/containers/{name:.*}/export": getContainersExport,
"/containers/{name:.*}/changes": getContainersChanges,
"/containers/{name:.*}/json": getContainersByName,
"/containers/{name:.*}/top": getContainersTop,
"/containers/{name:.*}/attach/ws": wsContainersAttach,
},
"POST": {
"/auth": postAuth,
@@ -829,7 +1005,6 @@ func createRouter(srv *Server, logging bool) (*mux.Router, error) {
"/images/{name:.*}/insert": postImagesInsert,
"/images/{name:.*}/push": postImagesPush,
"/images/{name:.*}/tag": postImagesTag,
"/images/getCache": postImagesGetCache,
"/containers/create": postContainersCreate,
"/containers/{name:.*}/kill": postContainersKill,
"/containers/{name:.*}/restart": postContainersRestart,
@@ -838,6 +1013,7 @@ func createRouter(srv *Server, logging bool) (*mux.Router, error) {
"/containers/{name:.*}/wait": postContainersWait,
"/containers/{name:.*}/resize": postContainersResize,
"/containers/{name:.*}/attach": postContainersAttach,
"/containers/{name:.*}/copy": postContainersCopy,
},
"DELETE": {
"/containers/{name:.*}": deleteContainers,
@@ -853,37 +1029,13 @@ func createRouter(srv *Server, logging bool) (*mux.Router, error) {
utils.Debugf("Registering %s, %s", method, route)
// NOTE: scope issue, make sure the variables are local and won't be changed
localRoute := route
localMethod := method
localFct := fct
f := func(w http.ResponseWriter, r *http.Request) {
utils.Debugf("Calling %s %s", localMethod, localRoute)
localMethod := method
if logging {
log.Println(r.Method, r.RequestURI)
}
if strings.Contains(r.Header.Get("User-Agent"), "Docker-Client/") {
userAgent := strings.Split(r.Header.Get("User-Agent"), "/")
if len(userAgent) == 2 && userAgent[1] != VERSION {
utils.Debugf("Warning: client and server don't have the same version (client: %s, server: %s)", userAgent[1], VERSION)
}
}
version, err := strconv.ParseFloat(mux.Vars(r)["version"], 64)
if err != nil {
version = APIVERSION
}
if srv.enableCors {
writeCorsHeaders(w, r)
}
if version == 0 || version > APIVERSION {
w.WriteHeader(http.StatusNotFound)
return
}
if err := localFct(srv, version, w, r, mux.Vars(r)); err != nil {
httpError(w, err)
}
}
// build the handler function
f := makeHttpHandler(srv, logging, localMethod, localRoute, localFct)
// add the new route
if localRoute == "" {
r.Methods(localMethod).HandlerFunc(f)
} else {
@@ -892,6 +1044,7 @@ func createRouter(srv *Server, logging bool) (*mux.Router, error) {
}
}
}
return r, nil
}
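
makeHttpHandler and createRouter above separate the per-request plumbing (logging, CORS, API-version parsing) from the route table. The sketch below shows the same gorilla/mux pattern in isolation; the route, port and fallback version constant are illustrative, and this is not the daemon's actual registration code (that part of the diff is not shown here):

```go
package main

import (
	"fmt"
	"net/http"
	"strconv"

	"github.com/gorilla/mux"
)

const defaultVersion = 1.5 // illustrative fallback, not the daemon's APIVERSION

func main() {
	r := mux.NewRouter()
	handler := func(w http.ResponseWriter, req *http.Request) {
		// A missing or unparsable version falls back to the default,
		// mirroring the ParseFloat fallback in makeHttpHandler.
		version, err := strconv.ParseFloat(mux.Vars(req)["version"], 64)
		if err != nil {
			version = defaultVersion
		}
		fmt.Fprintf(w, "name=%s version=%g\n", mux.Vars(req)["name"], version)
	}

	// Register both the versioned and unversioned form of one route.
	r.Path("/v{version:[0-9.]+}/containers/{name:.*}/json").Methods("GET").HandlerFunc(handler)
	r.Path("/containers/{name:.*}/json").Methods("GET").HandlerFunc(handler)

	http.ListenAndServe(":8080", r)
}
```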
@@ -906,9 +1059,26 @@ func ListenAndServe(proto, addr string, srv *Server, logging bool) error {
if e != nil {
return e
}
//as the daemon is launched as root, change the permissions of the socket to allow non-root users to connect
if proto == "unix" {
os.Chmod(addr, 0777)
if err := os.Chmod(addr, 0660); err != nil {
return err
}
groups, err := ioutil.ReadFile("/etc/group")
if err != nil {
return err
}
re := regexp.MustCompile("(^|\n)docker:.*?:([0-9]+)")
if gidMatch := re.FindStringSubmatch(string(groups)); gidMatch != nil {
gid, err := strconv.Atoi(gidMatch[2])
if err != nil {
return err
}
utils.Debugf("docker group found. gid: %d", gid)
if err := os.Chown(addr, 0, gid); err != nil {
return err
}
}
}
httpSrv := http.Server{Addr: addr, Handler: r}
return httpSrv.Serve(l)
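
The unix-socket branch above tightens the socket to 0660 and, if a `docker` group exists in /etc/group, chowns the socket to that gid so non-root members of the group can reach the daemon. A standalone sketch of the same gid lookup; `lookupGid` is a hypothetical helper name, only the /etc/group format and the regexp approach are taken from the code above:

```go
package main

import (
	"fmt"
	"io/ioutil"
	"regexp"
	"strconv"
)

// lookupGid returns the numeric gid of the named group from an
// /etc/group-style file, or -1 if the group is not present.
func lookupGid(groupFile, group string) (int, error) {
	data, err := ioutil.ReadFile(groupFile)
	if err != nil {
		return -1, err
	}
	// Same shape as the regexp used above: "name:...:gid" at the start
	// of a line, capturing the numeric gid.
	re := regexp.MustCompile("(^|\n)" + regexp.QuoteMeta(group) + ":.*?:([0-9]+)")
	m := re.FindStringSubmatch(string(data))
	if m == nil {
		return -1, nil
	}
	return strconv.Atoi(m[2])
}

func main() {
	gid, err := lookupGid("/etc/group", "docker")
	if err != nil {
		panic(err)
	}
	fmt.Println("docker gid:", gid)
}
```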


@@ -1,5 +1,7 @@
package docker
import "encoding/json"
type APIHistory struct {
ID string `json:"Id"`
Tags []string `json:",omitempty"`
@@ -17,13 +19,23 @@ type APIImages struct {
}
type APIInfo struct {
Debug bool
Containers int
Images int
NFd int `json:",omitempty"`
NGoroutines int `json:",omitempty"`
MemoryLimit bool `json:",omitempty"`
SwapLimit bool `json:",omitempty"`
Debug bool
Containers int
Images int
NFd int `json:",omitempty"`
NGoroutines int `json:",omitempty"`
MemoryLimit bool `json:",omitempty"`
SwapLimit bool `json:",omitempty"`
IPv4Forwarding bool `json:",omitempty"`
LXCVersion string `json:",omitempty"`
NEventsListener int `json:",omitempty"`
KernelVersion string `json:",omitempty"`
IndexServerAddress string `json:",omitempty"`
}
type APITop struct {
Titles []string
Processes [][]string
}
type APIRmi struct {
@@ -32,6 +44,30 @@ type APIRmi struct {
}
type APIContainers struct {
ID string `json:"Id"`
Image string
Command string
Created int64
Status string
Ports []APIPort
SizeRw int64
SizeRootFs int64
}
func (self *APIContainers) ToLegacy() APIContainersOld {
return APIContainersOld{
ID: self.ID,
Image: self.Image,
Command: self.Command,
Created: self.Created,
Status: self.Status,
Ports: displayablePorts(self.Ports),
SizeRw: self.SizeRw,
SizeRootFs: self.SizeRootFs,
}
}
type APIContainersOld struct {
ID string `json:"Id"`
Image string
Command string
@@ -57,7 +93,17 @@ type APIRun struct {
}
type APIPort struct {
Port string
PrivatePort int64
PublicPort int64
Type string
}
func (port *APIPort) MarshalJSON() ([]byte, error) {
return json.Marshal(map[string]interface{}{
"PrivatePort": port.PrivatePort,
"PublicPort": port.PublicPort,
"Type": port.Type,
})
}
type APIVersion struct {
@@ -78,3 +124,8 @@ type APIImageConfig struct {
ID string `json:"Id"`
*Config
}
type APICopy struct {
Resource string
HostPath string
}
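
APIPort's custom MarshalJSON above drops the legacy string `Port` field from the wire format, so clients only ever see `PrivatePort`, `PublicPort` and `Type`. A small self-contained sketch of what that produces; the struct copy and sample values are illustrative:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Illustrative copy of the APIPort shape above; only the custom
// MarshalJSON behaviour is being demonstrated.
type APIPort struct {
	Port        string
	PrivatePort int64
	PublicPort  int64
	Type        string
}

func (p *APIPort) MarshalJSON() ([]byte, error) {
	// The legacy string Port field is deliberately left out.
	return json.Marshal(map[string]interface{}{
		"PrivatePort": p.PrivatePort,
		"PublicPort":  p.PublicPort,
		"Type":        p.Type,
	})
}

func main() {
	p := &APIPort{Port: "8080", PrivatePort: 8080, PublicPort: 49153, Type: "tcp"}
	out, _ := json.Marshal(p)
	fmt.Println(string(out)) // {"PrivatePort":8080,"PublicPort":49153,"Type":"tcp"}
}
```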

File diff suppressed because it is too large


@@ -2,9 +2,7 @@ package docker
import (
"archive/tar"
"bufio"
"bytes"
"errors"
"fmt"
"github.com/dotcloud/docker/utils"
"io"
@@ -27,10 +25,6 @@ const (
)
func DetectCompression(source []byte) Compression {
for _, c := range source[:10] {
utils.Debugf("%x", c)
}
sourceLen := len(source)
for compression, m := range map[Compression][]byte{
Bzip2: {0x42, 0x5A, 0x68},
@@ -92,7 +86,7 @@ func Tar(path string, compression Compression) (io.Reader, error) {
// Tar creates an archive from the directory at `path`, only including files whose relative
// paths are included in `filter`. If `filter` is nil, then all files are included.
func TarFilter(path string, compression Compression, filter []string) (io.Reader, error) {
args := []string{"tar", "-f", "-", "-C", path}
args := []string{"tar", "--numeric-owner", "-f", "-", "-C", path}
if filter == nil {
filter = []string{"."}
}
@@ -104,22 +98,33 @@ func TarFilter(path string, compression Compression, filter []string) (io.Reader
// Untar reads a stream of bytes from `archive`, parses it as a tar archive,
// and unpacks it into the directory at `path`.
// The archive may be compressed with one of the following algorithgms:
// The archive may be compressed with one of the following algorithms:
// identity (uncompressed), gzip, bzip2, xz.
// FIXME: specify behavior when target path exists vs. doesn't exist.
func Untar(archive io.Reader, path string) error {
if archive == nil {
return fmt.Errorf("Empty archive")
}
bufferedArchive := bufio.NewReaderSize(archive, 10)
buf, err := bufferedArchive.Peek(10)
if err != nil {
return err
buf := make([]byte, 10)
totalN := 0
for totalN < 10 {
if n, err := archive.Read(buf[totalN:]); err != nil {
if err == io.EOF {
return fmt.Errorf("Tarball too short")
}
return err
} else {
totalN += n
utils.Debugf("[tar autodetect] n: %d", n)
}
}
compression := DetectCompression(buf)
utils.Debugf("Archive compression detected: %s", compression.Extension())
cmd := exec.Command("tar", "-f", "-", "-C", path, "-x"+compression.Flag())
cmd.Stdin = bufferedArchive
cmd := exec.Command("tar", "--numeric-owner", "-f", "-", "-C", path, "-x"+compression.Flag())
cmd.Stdin = io.MultiReader(bytes.NewReader(buf), archive)
// Hardcode locale environment for predictable outcome regardless of host configuration.
// (see https://github.com/dotcloud/docker/issues/355)
cmd.Env = []string{"LANG=en_US.utf-8", "LC_ALL=en_US.utf-8"}
@@ -168,7 +173,7 @@ func CopyWithTar(src, dst string) error {
}
// Create dst, copy src's content into it
utils.Debugf("Creating dest directory: %s", dst)
if err := os.MkdirAll(dst, 0700); err != nil && !os.IsExist(err) {
if err := os.MkdirAll(dst, 0755); err != nil && !os.IsExist(err) {
return err
}
utils.Debugf("Calling TarUntar(%s, %s)", src, dst)
@@ -249,7 +254,7 @@ func CmdStream(cmd *exec.Cmd) (io.Reader, error) {
}
errText := <-errChan
if err := cmd.Wait(); err != nil {
pipeW.CloseWithError(errors.New(err.Error() + ": " + string(errText)))
pipeW.CloseWithError(fmt.Errorf("%s: %s", err, errText))
} else {
pipeW.Close()
}
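
Untar above now reads the first ten bytes itself to detect the compression, then uses io.MultiReader to stitch those bytes back in front of the remaining stream before handing it to tar. A minimal sketch of that peek-then-reassemble idiom; the magic bytes mirror DetectCompression, everything else (input string, labels) is illustrative:

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"io/ioutil"
	"strings"
)

// detect labels the stream's compression using the same magic bytes
// as DetectCompression above (bzip2, gzip, xz).
func detect(header []byte) string {
	for name, magic := range map[string][]byte{
		"bzip2": {0x42, 0x5A, 0x68},
		"gzip":  {0x1F, 0x8B, 0x08},
		"xz":    {0xFD, 0x37, 0x7A, 0x58, 0x5A, 0x00},
	} {
		if len(header) >= len(magic) && bytes.Equal(header[:len(magic)], magic) {
			return name
		}
	}
	return "uncompressed"
}

func main() {
	archive := strings.NewReader("plain text pretending to be a tarball .....")

	// Peek the first 10 bytes, as Untar does for autodetection.
	header := make([]byte, 10)
	n, err := io.ReadFull(archive, header)
	if err != nil {
		panic(err)
	}
	fmt.Println("compression:", detect(header[:n]))

	// Re-assemble the full stream so a consumer (tar, in Untar's case)
	// still sees the peeked bytes.
	full := io.MultiReader(bytes.NewReader(header[:n]), archive)
	rest, _ := ioutil.ReadAll(full)
	fmt.Printf("%d bytes total\n", len(rest))
}
```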


@@ -16,7 +16,7 @@ func TestCmdStreamLargeStderr(t *testing.T) {
cmd := exec.Command("/bin/sh", "-c", "dd if=/dev/zero bs=1k count=1000 of=/dev/stderr; echo hello")
out, err := CmdStream(cmd)
if err != nil {
t.Fatalf("Failed to start command: " + err.Error())
t.Fatalf("Failed to start command: %s", err)
}
errCh := make(chan error)
go func() {
@@ -26,7 +26,7 @@ func TestCmdStreamLargeStderr(t *testing.T) {
select {
case err := <-errCh:
if err != nil {
t.Fatalf("Command should not have failed (err=%s...)", err.Error()[:100])
t.Fatalf("Command should not have failed (err=%.100s...)", err)
}
case <-time.After(5 * time.Second):
t.Fatalf("Command did not complete in 5 seconds; probable deadlock")
@@ -37,12 +37,12 @@ func TestCmdStreamBad(t *testing.T) {
badCmd := exec.Command("/bin/sh", "-c", "echo hello; echo >&2 error couldn\\'t reverse the phase pulser; exit 1")
out, err := CmdStream(badCmd)
if err != nil {
t.Fatalf("Failed to start command: " + err.Error())
t.Fatalf("Failed to start command: %s", err)
}
if output, err := ioutil.ReadAll(out); err == nil {
t.Fatalf("Command should have failed")
} else if err.Error() != "exit status 1: error couldn't reverse the phase pulser\n" {
t.Fatalf("Wrong error value (%s)", err.Error())
t.Fatalf("Wrong error value (%s)", err)
} else if s := string(output); s != "hello\n" {
t.Fatalf("Command output should be '%s', not '%s'", "hello\\n", output)
}


@@ -1 +0,0 @@
../registry/MAINTAINERS

auth/MAINTAINERS Normal file

@@ -0,0 +1,3 @@
Sam Alba <sam@dotcloud.com> (@samalba)
Joffrey Fuhrer <joffrey@dotcloud.com> (@shin-)
Ken Cochrane <ken@dotcloud.com> (@kencochrane)


@@ -5,6 +5,7 @@ import (
"encoding/json"
"errors"
"fmt"
"github.com/dotcloud/docker/utils"
"io/ioutil"
"net/http"
"os"
@@ -15,35 +16,29 @@ import (
// Where we store the config file
const CONFIGFILE = ".dockercfg"
// the registry server we want to login against
const INDEXSERVER = "https://index.docker.io/v1"
// Only used for user auth + account creation
const INDEXSERVER = "https://index.docker.io/v1/"
//const INDEXSERVER = "http://indexstaging-docker.dotcloud.com/"
//const INDEXSERVER = "https://indexstaging-docker.dotcloud.com/v1/"
var (
ErrConfigFileMissing = errors.New("The Auth config file is missing")
)
type AuthConfig struct {
Username string `json:"username"`
Password string `json:"password"`
Email string `json:"email"`
Username string `json:"username,omitempty"`
Password string `json:"password,omitempty"`
Auth string `json:"auth"`
Email string `json:"email"`
ServerAddress string `json:"serveraddress,omitempty"`
}
type ConfigFile struct {
Configs map[string]AuthConfig `json:"configs,omitempty"`
rootPath string
}
func NewAuthConfig(username, password, email, rootPath string) *AuthConfig {
return &AuthConfig{
Username: username,
Password: password,
Email: email,
rootPath: rootPath,
}
}
func IndexServerAddress() string {
if os.Getenv("DOCKER_INDEX_URL") != "" {
return os.Getenv("DOCKER_INDEX_URL") + "/v1"
}
return INDEXSERVER
}
@@ -57,62 +52,91 @@ func encodeAuth(authConfig *AuthConfig) string {
}
// decode the auth string
func decodeAuth(authStr string) (*AuthConfig, error) {
func decodeAuth(authStr string) (string, string, error) {
decLen := base64.StdEncoding.DecodedLen(len(authStr))
decoded := make([]byte, decLen)
authByte := []byte(authStr)
n, err := base64.StdEncoding.Decode(decoded, authByte)
if err != nil {
return nil, err
return "", "", err
}
if n > decLen {
return nil, fmt.Errorf("Something went wrong decoding auth config")
return "", "", fmt.Errorf("Something went wrong decoding auth config")
}
arr := strings.Split(string(decoded), ":")
if len(arr) != 2 {
return nil, fmt.Errorf("Invalid auth configuration file")
return "", "", fmt.Errorf("Invalid auth configuration file")
}
password := strings.Trim(arr[1], "\x00")
return &AuthConfig{Username: arr[0], Password: password}, nil
return arr[0], password, nil
}
// load up the auth config information and return values
// FIXME: use the internal golang config parser
func LoadConfig(rootPath string) (*AuthConfig, error) {
func LoadConfig(rootPath string) (*ConfigFile, error) {
configFile := ConfigFile{Configs: make(map[string]AuthConfig), rootPath: rootPath}
confFile := path.Join(rootPath, CONFIGFILE)
if _, err := os.Stat(confFile); err != nil {
return &AuthConfig{rootPath: rootPath}, ErrConfigFileMissing
return &configFile, nil //missing file is not an error
}
b, err := ioutil.ReadFile(confFile)
if err != nil {
return nil, err
return &configFile, err
}
arr := strings.Split(string(b), "\n")
if len(arr) < 2 {
return nil, fmt.Errorf("The Auth config file is empty")
if err := json.Unmarshal(b, &configFile.Configs); err != nil {
arr := strings.Split(string(b), "\n")
if len(arr) < 2 {
return &configFile, fmt.Errorf("The Auth config file is empty")
}
authConfig := AuthConfig{}
origAuth := strings.Split(arr[0], " = ")
authConfig.Username, authConfig.Password, err = decodeAuth(origAuth[1])
if err != nil {
return &configFile, err
}
origEmail := strings.Split(arr[1], " = ")
authConfig.Email = origEmail[1]
authConfig.ServerAddress = IndexServerAddress()
configFile.Configs[IndexServerAddress()] = authConfig
} else {
for k, authConfig := range configFile.Configs {
authConfig.Username, authConfig.Password, err = decodeAuth(authConfig.Auth)
if err != nil {
return &configFile, err
}
authConfig.Auth = ""
configFile.Configs[k] = authConfig
authConfig.ServerAddress = k
}
}
origAuth := strings.Split(arr[0], " = ")
origEmail := strings.Split(arr[1], " = ")
authConfig, err := decodeAuth(origAuth[1])
if err != nil {
return nil, err
}
authConfig.Email = origEmail[1]
authConfig.rootPath = rootPath
return authConfig, nil
return &configFile, nil
}
// save the auth config
func SaveConfig(authConfig *AuthConfig) error {
confFile := path.Join(authConfig.rootPath, CONFIGFILE)
if len(authConfig.Email) == 0 {
func SaveConfig(configFile *ConfigFile) error {
confFile := path.Join(configFile.rootPath, CONFIGFILE)
if len(configFile.Configs) == 0 {
os.Remove(confFile)
return nil
}
lines := "auth = " + encodeAuth(authConfig) + "\n" + "email = " + authConfig.Email + "\n"
b := []byte(lines)
err := ioutil.WriteFile(confFile, b, 0600)
configs := make(map[string]AuthConfig, len(configFile.Configs))
for k, authConfig := range configFile.Configs {
authCopy := authConfig
authCopy.Auth = encodeAuth(&authCopy)
authCopy.Username = ""
authCopy.Password = ""
authCopy.ServerAddress = ""
configs[k] = authCopy
}
b, err := json.Marshal(configs)
if err != nil {
return err
}
err = ioutil.WriteFile(confFile, b, 0600)
if err != nil {
return err
}
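
LoadConfig and SaveConfig above move ~/.dockercfg to a JSON object keyed by registry address, with username:password folded into a base64 `auth` field, while still falling back to the legacy two-line "auth = / email =" format on read. A sketch of reading the new format; the registry key matches IndexServerAddress, but the credentials and email are made up for illustration:

```go
package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
	"strings"
)

// authEntry is a minimal stand-in for the AuthConfig fields that
// actually survive serialization in the new format.
type authEntry struct {
	Auth  string `json:"auth"`
	Email string `json:"email"`
}

// decode mirrors decodeAuth above: base64-decode "user:password".
func decode(auth string) (user, pass string) {
	decoded, err := base64.StdEncoding.DecodeString(auth)
	if err != nil {
		return "", ""
	}
	parts := strings.SplitN(string(decoded), ":", 2)
	if len(parts) != 2 {
		return "", ""
	}
	return parts[0], parts[1]
}

func main() {
	// New-style ~/.dockercfg: a JSON object keyed by registry address.
	raw := []byte(`{
  "https://index.docker.io/v1/": {
    "auth": "` + base64.StdEncoding.EncodeToString([]byte("janedoe:secret")) + `",
    "email": "jane@example.com"
  }
}`)

	configs := make(map[string]authEntry)
	if err := json.Unmarshal(raw, &configs); err != nil {
		panic(err)
	}
	for registry, cfg := range configs {
		user, pass := decode(cfg.Auth)
		fmt.Printf("%s -> user=%s pass=%s email=%s\n", registry, user, pass, cfg.Email)
	}
}
```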
@@ -120,20 +144,31 @@ func SaveConfig(authConfig *AuthConfig) error {
}
// try to register/login to the registry server
func Login(authConfig *AuthConfig, store bool) (string, error) {
storeConfig := false
func Login(authConfig *AuthConfig, factory *utils.HTTPRequestFactory) (string, error) {
client := &http.Client{}
reqStatusCode := 0
var status string
var reqBody []byte
jsonBody, err := json.Marshal(authConfig)
serverAddress := authConfig.ServerAddress
if serverAddress == "" {
serverAddress = IndexServerAddress()
}
loginAgainstOfficialIndex := serverAddress == IndexServerAddress()
// to avoid sending the server address to the server it should be removed before being marshalled
authCopy := *authConfig
authCopy.ServerAddress = ""
jsonBody, err := json.Marshal(authCopy)
if err != nil {
return "", fmt.Errorf("Config Error: %s", err)
}
// using `bytes.NewReader(jsonBody)` here causes the server to respond with a 411 status.
b := strings.NewReader(string(jsonBody))
req1, err := http.Post(IndexServerAddress()+"/users/", "application/json; charset=utf-8", b)
req1, err := http.Post(serverAddress+"users/", "application/json; charset=utf-8", b)
if err != nil {
return "", fmt.Errorf("Server Error: %s", err)
}
@@ -145,15 +180,23 @@ func Login(authConfig *AuthConfig, store bool) (string, error) {
}
if reqStatusCode == 201 {
status = "Account created. Please use the confirmation link we sent" +
" to your e-mail to activate it."
storeConfig = true
if loginAgainstOfficialIndex {
status = "Account created. Please use the confirmation link we sent" +
" to your e-mail to activate it."
} else {
status = "Account created. Please see the documentation of the registry " + serverAddress + " for instructions how to activate it."
}
} else if reqStatusCode == 403 {
return "", fmt.Errorf("Login: Your account hasn't been activated. " +
"Please check your e-mail for a confirmation link.")
if loginAgainstOfficialIndex {
return "", fmt.Errorf("Login: Your account hasn't been activated. " +
"Please check your e-mail for a confirmation link.")
} else {
return "", fmt.Errorf("Login: Your account hasn't been activated. " +
"Please see the documentation of the registry " + serverAddress + " for instructions how to activate it.")
}
} else if reqStatusCode == 400 {
if string(reqBody) == "\"Username or email already exists\"" {
req, err := http.NewRequest("GET", IndexServerAddress()+"/users/", nil)
req, err := factory.NewRequest("GET", serverAddress+"users/", nil)
req.SetBasicAuth(authConfig.Username, authConfig.Password)
resp, err := client.Do(req)
if err != nil {
@@ -166,14 +209,7 @@ func Login(authConfig *AuthConfig, store bool) (string, error) {
}
if resp.StatusCode == 200 {
status = "Login Succeeded"
storeConfig = true
} else if resp.StatusCode == 401 {
if store {
authConfig.Email = ""
if err := SaveConfig(authConfig); err != nil {
return "", err
}
}
return "", fmt.Errorf("Wrong login/password, please try again")
} else {
return "", fmt.Errorf("Login: %s (Code: %d; Headers: %s)", body,
@@ -185,10 +221,54 @@ func Login(authConfig *AuthConfig, store bool) (string, error) {
} else {
return "", fmt.Errorf("Unexpected status code [%d] : %s", reqStatusCode, reqBody)
}
if storeConfig && store {
if err := SaveConfig(authConfig); err != nil {
return "", err
}
}
return status, nil
}
// this method matches an auth configuration to a server address or a url
func (config *ConfigFile) ResolveAuthConfig(registry string) AuthConfig {
if registry == IndexServerAddress() || len(registry) == 0 {
// default to the index server
return config.Configs[IndexServerAddress()]
}
// if it's not the index server there are three cases:
//
// 1. this is a full config url -> it should be used as is
// 2. it could be a full url, but with the wrong protocol
// 3. it can be the hostname optionally with a port
//
// as there is only one auth entry which is fully qualified we need to start
// parsing and matching
swapProtocoll := func(url string) string {
if strings.HasPrefix(url, "http:") {
return strings.Replace(url, "http:", "https:", 1)
}
if strings.HasPrefix(url, "https:") {
return strings.Replace(url, "https:", "http:", 1)
}
return url
}
resolveIgnoringProtocol := func(url string) AuthConfig {
if c, found := config.Configs[url]; found {
return c
}
registrySwappedProtocoll := swapProtocoll(url)
// now try to match with the different protocol
if c, found := config.Configs[registrySwappedProtocoll]; found {
return c
}
return AuthConfig{}
}
// match both protocols as it could also be a server name like httpfoo
if strings.HasPrefix(registry, "http:") || strings.HasPrefix(registry, "https:") {
return resolveIgnoringProtocol(registry)
}
url := "https://" + registry
if !strings.Contains(registry, "/") {
url = url + "/v1/"
}
return resolveIgnoringProtocol(url)
}
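
ResolveAuthConfig above matches a registry against the stored configs in a fixed order: the key as given, then with the protocol swapped, then a bare hostname expanded to https://&lt;host&gt;/v1/. A self-contained sketch of that lookup order against a hypothetical config map; this re-implements the matching for illustration and is not the package's exported API:

```go
package main

import (
	"fmt"
	"strings"
)

// configs stands in for ConfigFile.Configs; the single fully-qualified
// key mirrors how entries are stored after login. Registry and user
// are made up.
var configs = map[string]string{
	"https://registry.example.com/v1/": "jane",
}

// resolve follows the same order as ResolveAuthConfig above: exact key,
// then the other protocol, then a bare hostname expanded to https://<host>/v1/.
func resolve(registry string) (string, bool) {
	swap := func(u string) string {
		if strings.HasPrefix(u, "http:") {
			return strings.Replace(u, "http:", "https:", 1)
		}
		return strings.Replace(u, "https:", "http:", 1)
	}
	tryBoth := func(u string) (string, bool) {
		if v, ok := configs[u]; ok {
			return v, true
		}
		v, ok := configs[swap(u)]
		return v, ok
	}
	if strings.HasPrefix(registry, "http:") || strings.HasPrefix(registry, "https:") {
		return tryBoth(registry)
	}
	u := "https://" + registry
	if !strings.Contains(registry, "/") {
		u += "/v1/"
	}
	return tryBoth(u)
}

func main() {
	for _, q := range []string{
		"https://registry.example.com/v1/",
		"http://registry.example.com/v1/",
		"registry.example.com",
	} {
		user, ok := resolve(q)
		fmt.Printf("%-35s -> %q (found=%v)\n", q, user, ok)
	}
}
```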


@@ -3,6 +3,7 @@ package auth
import (
"crypto/rand"
"encoding/hex"
"io/ioutil"
"os"
"strings"
"testing"
@@ -11,7 +12,9 @@ import (
func TestEncodeAuth(t *testing.T) {
newAuthConfig := &AuthConfig{Username: "ken", Password: "test", Email: "test@example.com"}
authStr := encodeAuth(newAuthConfig)
decAuthConfig, err := decodeAuth(authStr)
decAuthConfig := &AuthConfig{}
var err error
decAuthConfig.Username, decAuthConfig.Password, err = decodeAuth(authStr)
if err != nil {
t.Fatal(err)
}
@@ -29,8 +32,8 @@ func TestEncodeAuth(t *testing.T) {
func TestLogin(t *testing.T) {
os.Setenv("DOCKER_INDEX_URL", "https://indexstaging-docker.dotcloud.com")
defer os.Setenv("DOCKER_INDEX_URL", "")
authConfig := NewAuthConfig("unittester", "surlautrerivejetattendrai", "noise+unittester@dotcloud.com", "/tmp")
status, err := Login(authConfig, false)
authConfig := &AuthConfig{Username: "unittester", Password: "surlautrerivejetattendrai", Email: "noise+unittester@dotcloud.com"}
status, err := Login(authConfig, nil)
if err != nil {
t.Fatal(err)
}
@@ -49,8 +52,8 @@ func TestCreateAccount(t *testing.T) {
}
token := hex.EncodeToString(tokenBuffer)[:12]
username := "ut" + token
authConfig := NewAuthConfig(username, "test42", "docker-ut+"+token+"@example.com", "/tmp")
status, err := Login(authConfig, false)
authConfig := &AuthConfig{Username: username, Password: "test42", Email: "docker-ut+" + token + "@example.com"}
status, err := Login(authConfig, nil)
if err != nil {
t.Fatal(err)
}
@@ -60,7 +63,7 @@ func TestCreateAccount(t *testing.T) {
t.Fatalf("Expected status: \"%s\", found \"%s\" instead.", expectedStatus, status)
}
status, err = Login(authConfig, false)
status, err = Login(authConfig, nil)
if err == nil {
t.Fatalf("Expected error but found nil instead")
}
@@ -68,6 +71,42 @@ func TestCreateAccount(t *testing.T) {
expectedError := "Login: Account is not Active"
if !strings.Contains(err.Error(), expectedError) {
t.Fatalf("Expected message \"%s\" but found \"%s\" instead", expectedError, err.Error())
t.Fatalf("Expected message \"%s\" but found \"%s\" instead", expectedError, err)
}
}
func TestSameAuthDataPostSave(t *testing.T) {
root, err := ioutil.TempDir("", "docker-test")
if err != nil {
t.Fatal(err)
}
configFile := &ConfigFile{
rootPath: root,
Configs: make(map[string]AuthConfig, 1),
}
configFile.Configs["testIndex"] = AuthConfig{
Username: "docker-user",
Password: "docker-pass",
Email: "docker@docker.io",
}
err = SaveConfig(configFile)
if err != nil {
t.Fatal(err)
}
authConfig := configFile.Configs["testIndex"]
if authConfig.Username != "docker-user" {
t.Fail()
}
if authConfig.Password != "docker-pass" {
t.Fail()
}
if authConfig.Email != "docker@docker.io" {
t.Fail()
}
if authConfig.Auth != "" {
t.Fail()
}
}


@@ -1,132 +0,0 @@
package docker
import (
"fmt"
"github.com/dotcloud/docker/utils"
"os"
"path"
"time"
)
var defaultDns = []string{"8.8.8.8", "8.8.4.4"}
type Builder struct {
runtime *Runtime
repositories *TagStore
graph *Graph
config *Config
image *Image
}
func NewBuilder(runtime *Runtime) *Builder {
return &Builder{
runtime: runtime,
graph: runtime.graph,
repositories: runtime.repositories,
}
}
func (builder *Builder) Create(config *Config) (*Container, error) {
// Lookup image
img, err := builder.repositories.LookupImage(config.Image)
if err != nil {
return nil, err
}
if img.Config != nil {
MergeConfig(config, img.Config)
}
if config.Cmd == nil || len(config.Cmd) == 0 {
return nil, fmt.Errorf("No command specified")
}
// Generate id
id := GenerateID()
// Generate default hostname
// FIXME: the lxc template no longer needs to set a default hostname
if config.Hostname == "" {
config.Hostname = id[:12]
}
container := &Container{
// FIXME: we should generate the ID here instead of receiving it as an argument
ID: id,
Created: time.Now(),
Path: config.Cmd[0],
Args: config.Cmd[1:], //FIXME: de-duplicate from config
Config: config,
Image: img.ID, // Always use the resolved image id
NetworkSettings: &NetworkSettings{},
// FIXME: do we need to store this in the container?
SysInitPath: sysInitPath,
}
container.root = builder.runtime.containerRoot(container.ID)
// Step 1: create the container directory.
// This doubles as a barrier to avoid race conditions.
if err := os.Mkdir(container.root, 0700); err != nil {
return nil, err
}
if len(config.Dns) == 0 && len(builder.runtime.Dns) == 0 && utils.CheckLocalDns() {
//"WARNING: Docker detected local DNS server on resolv.conf. Using default external servers: %v", defaultDns
builder.runtime.Dns = defaultDns
}
// If custom dns exists, then create a resolv.conf for the container
if len(config.Dns) > 0 || len(builder.runtime.Dns) > 0 {
var dns []string
if len(config.Dns) > 0 {
dns = config.Dns
} else {
dns = builder.runtime.Dns
}
container.ResolvConfPath = path.Join(container.root, "resolv.conf")
f, err := os.Create(container.ResolvConfPath)
if err != nil {
return nil, err
}
defer f.Close()
for _, dns := range dns {
if _, err := f.Write([]byte("nameserver " + dns + "\n")); err != nil {
return nil, err
}
}
} else {
container.ResolvConfPath = "/etc/resolv.conf"
}
// Step 2: save the container json
if err := container.ToDisk(); err != nil {
return nil, err
}
// Step 3: register the container
if err := builder.runtime.Register(container); err != nil {
return nil, err
}
return container, nil
}
// Commit creates a new filesystem image from the current state of a container.
// The image can optionally be tagged into a repository
func (builder *Builder) Commit(container *Container, repository, tag, comment, author string, config *Config) (*Image, error) {
// FIXME: freeze the container before copying it to avoid data corruption?
// FIXME: this shouldn't be in commands.
rwTar, err := container.ExportRw()
if err != nil {
return nil, err
}
// Create a new image from the container's base layers + a new layer from container changes
img, err := builder.graph.Create(rwTar, container, comment, author, config)
if err != nil {
return nil, err
}
// Register the image if needed
if repository != "" {
if err := builder.repositories.Set(repository, tag, img.ID, true); err != nil {
return img, err
}
}
return img, nil
}


@@ -1,15 +1,16 @@
package docker
import (
"bufio"
"encoding/json"
"fmt"
"github.com/dotcloud/docker/utils"
"io"
"io/ioutil"
"net/url"
"os"
"path"
"reflect"
"regexp"
"strings"
)
@@ -21,13 +22,15 @@ type BuildFile interface {
type buildFile struct {
runtime *Runtime
builder *Builder
srv *Server
image string
maintainer string
config *Config
context string
image string
maintainer string
config *Config
context string
verbose bool
utilizeCache bool
rm bool
tmpContainers map[string]struct{}
tmpImages map[string]struct{}
@@ -35,15 +38,11 @@ type buildFile struct {
out io.Writer
}
func (b *buildFile) clearTmp(containers, images map[string]struct{}) {
func (b *buildFile) clearTmp(containers map[string]struct{}) {
for c := range containers {
tmp := b.runtime.Get(c)
b.runtime.Destroy(tmp)
utils.Debugf("Removing container %s", c)
}
for i := range images {
b.runtime.graph.Delete(i)
utils.Debugf("Removing image %s", i)
fmt.Fprintf(b.out, "Removing intermediate container %s\n", utils.TruncateID(c))
}
}
@@ -51,20 +50,10 @@ func (b *buildFile) CmdFrom(name string) error {
image, err := b.runtime.repositories.LookupImage(name)
if err != nil {
if b.runtime.graph.IsNotExist(err) {
var tag, remote string
if strings.Contains(name, ":") {
remoteParts := strings.Split(name, ":")
tag = remoteParts[1]
remote = remoteParts[0]
} else {
remote = name
}
if err := b.srv.ImagePull(remote, tag, "", b.out, utils.NewStreamFormatter(false), nil); err != nil {
remote, tag := utils.ParseRepositoryTag(name)
if err := b.srv.ImagePull(remote, tag, b.out, utils.NewStreamFormatter(false), nil, nil, true); err != nil {
return err
}
image, err = b.runtime.repositories.LookupImage(name)
if err != nil {
return err
@@ -75,6 +64,9 @@ func (b *buildFile) CmdFrom(name string) error {
}
b.image = image.ID
b.config = &Config{}
if b.config.Env == nil || len(b.config.Env) == 0 {
b.config.Env = append(b.config.Env, "HOME=/", "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin")
}
return nil
}
@@ -87,7 +79,7 @@ func (b *buildFile) CmdRun(args string) error {
if b.image == "" {
return fmt.Errorf("Please provide a source image with `from` prior to run")
}
config, _, err := ParseRun([]string{b.image, "/bin/sh", "-c", args}, nil)
config, _, _, err := ParseRun([]string{b.image, "/bin/sh", "-c", args}, nil)
if err != nil {
return err
}
@@ -96,17 +88,21 @@ func (b *buildFile) CmdRun(args string) error {
b.config.Cmd = nil
MergeConfig(b.config, config)
defer func(cmd []string) { b.config.Cmd = cmd }(cmd)
utils.Debugf("Command to be executed: %v", b.config.Cmd)
if cache, err := b.srv.ImageGetCached(b.image, b.config); err != nil {
return err
} else if cache != nil {
fmt.Fprintf(b.out, " ---> Using cache\n")
utils.Debugf("[BUILDER] Use cached version")
b.image = cache.ID
return nil
} else {
utils.Debugf("[BUILDER] Cache miss")
if b.utilizeCache {
if cache, err := b.srv.ImageGetCached(b.image, b.config); err != nil {
return err
} else if cache != nil {
fmt.Fprintf(b.out, " ---> Using cache\n")
utils.Debugf("[BUILDER] Use cached version")
b.image = cache.ID
return nil
} else {
utils.Debugf("[BUILDER] Cache miss")
}
}
cid, err := b.run()
@@ -116,10 +112,44 @@ func (b *buildFile) CmdRun(args string) error {
if err := b.commit(cid, cmd, "run"); err != nil {
return err
}
b.config.Cmd = cmd
return nil
}
func (b *buildFile) FindEnvKey(key string) int {
for k, envVar := range b.config.Env {
envParts := strings.SplitN(envVar, "=", 2)
if key == envParts[0] {
return k
}
}
return -1
}
func (b *buildFile) ReplaceEnvMatches(value string) (string, error) {
exp, err := regexp.Compile("(\\\\\\\\+|[^\\\\]|\\b|\\A)\\$({?)([[:alnum:]_]+)(}?)")
if err != nil {
return value, err
}
matches := exp.FindAllString(value, -1)
for _, match := range matches {
match = match[strings.Index(match, "$"):]
matchKey := strings.Trim(match, "${}")
for _, envVar := range b.config.Env {
envParts := strings.SplitN(envVar, "=", 2)
envKey := envParts[0]
envValue := envParts[1]
if envKey == matchKey {
value = strings.Replace(value, match, envValue, -1)
break
}
}
}
return value, nil
}
func (b *buildFile) CmdEnv(args string) error {
tmp := strings.SplitN(args, " ", 2)
if len(tmp) != 2 {
@@ -128,20 +158,25 @@ func (b *buildFile) CmdEnv(args string) error {
key := strings.Trim(tmp[0], " \t")
value := strings.Trim(tmp[1], " \t")
for i, elem := range b.config.Env {
if strings.HasPrefix(elem, key+"=") {
b.config.Env[i] = key + "=" + value
return nil
}
envKey := b.FindEnvKey(key)
replacedValue, err := b.ReplaceEnvMatches(value)
if err != nil {
return err
}
b.config.Env = append(b.config.Env, key+"="+value)
return b.commit("", b.config.Cmd, fmt.Sprintf("ENV %s=%s", key, value))
replacedVar := fmt.Sprintf("%s=%s", key, replacedValue)
if envKey >= 0 {
b.config.Env[envKey] = replacedVar
} else {
b.config.Env = append(b.config.Env, replacedVar)
}
return b.commit("", b.config.Cmd, fmt.Sprintf("ENV %s", replacedVar))
}
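
ReplaceEnvMatches and the reworked CmdEnv above give Dockerfile ENV and ADD values $VAR / ${VAR} substitution driven by a single regular expression. A runnable sketch of that substitution; the env slice and inputs are illustrative:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// Same pattern as ReplaceEnvMatches above: a '$' (not preceded by a
// backslash escape) followed by an optional brace and a variable name.
var envRe = regexp.MustCompile(`(\\\\+|[^\\]|\b|\A)\$({?)([[:alnum:]_]+)(}?)`)

// expand substitutes $NAME / ${NAME} occurrences from env ("KEY=VALUE"
// pairs), following the same find-and-replace approach as the builder.
func expand(value string, env []string) string {
	for _, match := range envRe.FindAllString(value, -1) {
		match = match[strings.Index(match, "$"):]
		key := strings.Trim(match, "${}")
		for _, kv := range env {
			parts := strings.SplitN(kv, "=", 2)
			if parts[0] == key {
				value = strings.Replace(value, match, parts[1], -1)
				break
			}
		}
	}
	return value
}

func main() {
	env := []string{"FOO=/foo/baz", "PATH=/usr/bin:/bin"}
	fmt.Println(expand("$PATH:$FOO", env))        // /usr/bin:/bin:/foo/baz
	fmt.Println(expand("dir is ${FOO}/sub", env)) // dir is /foo/baz/sub
}
```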
func (b *buildFile) CmdCmd(args string) error {
var cmd []string
if err := json.Unmarshal([]byte(args), &cmd); err != nil {
utils.Debugf("Error unmarshalling: %s, using /bin/sh -c", err)
utils.Debugf("Error unmarshalling: %s, setting cmd to /bin/sh -c", err)
cmd = []string{"/bin/sh", "-c", args}
}
if err := b.commit("", cmd, fmt.Sprintf("CMD %v", cmd)); err != nil {
@@ -157,6 +192,11 @@ func (b *buildFile) CmdExpose(args string) error {
return b.commit("", b.config.Cmd, fmt.Sprintf("EXPOSE %v", ports))
}
func (b *buildFile) CmdUser(args string) error {
b.config.User = args
return b.commit("", b.config.Cmd, fmt.Sprintf("USER %v", args))
}
func (b *buildFile) CmdInsert(args string) error {
return fmt.Errorf("INSERT has been deprecated. Please use ADD instead")
}
@@ -165,6 +205,49 @@ func (b *buildFile) CmdCopy(args string) error {
return fmt.Errorf("COPY has been deprecated. Please use ADD instead")
}
func (b *buildFile) CmdEntrypoint(args string) error {
if args == "" {
return fmt.Errorf("Entrypoint cannot be empty")
}
var entrypoint []string
if err := json.Unmarshal([]byte(args), &entrypoint); err != nil {
b.config.Entrypoint = []string{"/bin/sh", "-c", args}
} else {
b.config.Entrypoint = entrypoint
}
if err := b.commit("", b.config.Cmd, fmt.Sprintf("ENTRYPOINT %s", args)); err != nil {
return err
}
return nil
}
func (b *buildFile) CmdWorkdir(workdir string) error {
b.config.WorkingDir = workdir
return b.commit("", b.config.Cmd, fmt.Sprintf("WORKDIR %v", workdir))
}
func (b *buildFile) CmdVolume(args string) error {
if args == "" {
return fmt.Errorf("Volume cannot be empty")
}
var volume []string
if err := json.Unmarshal([]byte(args), &volume); err != nil {
volume = []string{args}
}
if b.config.Volumes == nil {
b.config.Volumes = NewPathOpts()
}
for _, v := range volume {
b.config.Volumes[v] = struct{}{}
}
if err := b.commit("", b.config.Cmd, fmt.Sprintf("VOLUME %s", args)); err != nil {
return err
}
return nil
}
func (b *buildFile) addRemote(container *Container, orig, dest string) error {
file, err := utils.Download(orig, ioutil.Discard)
if err != nil {
@@ -172,6 +255,24 @@ func (b *buildFile) addRemote(container *Container, orig, dest string) error {
}
defer file.Body.Close()
// If the destination is a directory, figure out the filename.
if strings.HasSuffix(dest, "/") {
u, err := url.Parse(orig)
if err != nil {
return err
}
path := u.Path
if strings.HasSuffix(path, "/") {
path = path[:len(path)-1]
}
parts := strings.Split(path, "/")
filename := parts[len(parts)-1]
if filename == "" {
return fmt.Errorf("cannot determine filename from url: %s", u)
}
dest = dest + filename
}
return container.Inject(file.Body, dest)
}
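
addRemote above learns to handle an ADD destination ending in "/": the filename is derived from the last segment of the URL path and appended to the destination. A small sketch of that derivation; destFor is a hypothetical helper and the URLs are examples:

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// destFor mirrors the trailing-slash handling in addRemote above:
// a destination ending in "/" gets the last path segment of the URL
// appended as the filename.
func destFor(orig, dest string) (string, error) {
	if !strings.HasSuffix(dest, "/") {
		return dest, nil
	}
	u, err := url.Parse(orig)
	if err != nil {
		return "", err
	}
	p := strings.TrimSuffix(u.Path, "/")
	parts := strings.Split(p, "/")
	filename := parts[len(parts)-1]
	if filename == "" {
		return "", fmt.Errorf("cannot determine filename from url: %s", u)
	}
	return dest + filename, nil
}

func main() {
	// Illustrative URLs; any host behaves the same way.
	for _, c := range [][2]string{
		{"http://example.com/pkg/app.tar.gz", "/usr/src/"},
		{"http://example.com/pkg/app.tar.gz", "/usr/src/app.tgz"},
	} {
		d, err := destFor(c[0], c[1])
		fmt.Println(d, err)
	}
}
```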
@@ -179,12 +280,15 @@ func (b *buildFile) addContext(container *Container, orig, dest string) error {
origPath := path.Join(b.context, orig)
destPath := path.Join(container.RootfsPath(), dest)
// Preserve the trailing '/'
if dest[len(dest)-1] == '/' {
if strings.HasSuffix(dest, "/") {
destPath = destPath + "/"
}
if !strings.HasPrefix(origPath, b.context) {
return fmt.Errorf("Forbidden path: %s", origPath)
}
fi, err := os.Stat(origPath)
if err != nil {
return err
return fmt.Errorf("%s: no such file or directory", orig)
}
if fi.IsDir() {
if err := CopyWithTar(origPath, destPath); err != nil {
@@ -194,7 +298,7 @@ func (b *buildFile) addContext(container *Container, orig, dest string) error {
} else if err := UntarPath(origPath, destPath); err != nil {
utils.Debugf("Couldn't untar %s to %s: %s", origPath, destPath, err)
// If that fails, just copy it as a regular file
if err := os.MkdirAll(path.Dir(destPath), 0700); err != nil {
if err := os.MkdirAll(path.Dir(destPath), 0755); err != nil {
return err
}
if err := CopyWithTar(origPath, destPath); err != nil {
@@ -212,15 +316,23 @@ func (b *buildFile) CmdAdd(args string) error {
if len(tmp) != 2 {
return fmt.Errorf("Invalid ADD format")
}
orig := strings.Trim(tmp[0], " \t")
dest := strings.Trim(tmp[1], " \t")
orig, err := b.ReplaceEnvMatches(strings.Trim(tmp[0], " \t"))
if err != nil {
return err
}
dest, err := b.ReplaceEnvMatches(strings.Trim(tmp[1], " \t"))
if err != nil {
return err
}
cmd := b.config.Cmd
b.config.Cmd = []string{"/bin/sh", "-c", fmt.Sprintf("#(nop) ADD %s in %s", orig, dest)}
b.config.Image = b.image
// Create the container and start it
container, err := b.builder.Create(b.config)
container, err := b.runtime.Create(b.config)
if err != nil {
return err
}
@@ -255,18 +367,30 @@ func (b *buildFile) run() (string, error) {
b.config.Image = b.image
// Create the container and start it
c, err := b.builder.Create(b.config)
c, err := b.runtime.Create(b.config)
if err != nil {
return "", err
}
b.tmpContainers[c.ID] = struct{}{}
fmt.Fprintf(b.out, " ---> Running in %s\n", utils.TruncateID(c.ID))
// override the entry point that may have been picked up from the base image
c.Path = b.config.Cmd[0]
c.Args = b.config.Cmd[1:]
//start the container
if err := c.Start(); err != nil {
hostConfig := &HostConfig{}
if err := c.Start(hostConfig); err != nil {
return "", err
}
if b.verbose {
err = <-c.Attach(nil, nil, b.out, b.out)
if err != nil {
return "", err
}
}
// Wait for it to finish
if ret := c.Wait(); ret != 0 {
return "", fmt.Errorf("The command %v returned a non-zero code: %d", b.config.Cmd, ret)
@@ -286,17 +410,20 @@ func (b *buildFile) commit(id string, autoCmd []string, comment string) error {
b.config.Cmd = []string{"/bin/sh", "-c", "#(nop) " + comment}
defer func(cmd []string) { b.config.Cmd = cmd }(cmd)
if cache, err := b.srv.ImageGetCached(b.image, b.config); err != nil {
return err
} else if cache != nil {
fmt.Fprintf(b.out, " ---> Using cache\n")
utils.Debugf("[BUILDER] Use cached version")
b.image = cache.ID
return nil
} else {
utils.Debugf("[BUILDER] Cache miss")
if b.utilizeCache {
if cache, err := b.srv.ImageGetCached(b.image, b.config); err != nil {
return err
} else if cache != nil {
fmt.Fprintf(b.out, " ---> Using cache\n")
utils.Debugf("[BUILDER] Use cached version")
b.image = cache.ID
return nil
} else {
utils.Debugf("[BUILDER] Cache miss")
}
}
container, err := b.builder.Create(b.config)
container, err := b.runtime.Create(b.config)
if err != nil {
return err
}
@@ -318,7 +445,7 @@ func (b *buildFile) commit(id string, autoCmd []string, comment string) error {
autoConfig := *b.config
autoConfig.Cmd = autoCmd
// Commit the container
image, err := b.builder.Commit(container, "", "", "", b.maintainer, &autoConfig)
image, err := b.runtime.Commit(container, "", "", "", b.maintainer, &autoConfig)
if err != nil {
return err
}
@@ -327,6 +454,9 @@ func (b *buildFile) commit(id string, autoCmd []string, comment string) error {
return nil
}
// Long lines can be split with a backslash
var lineContinuation = regexp.MustCompile(`\s*\\\s*\n`)
func (b *buildFile) Build(context io.Reader) (string, error) {
// FIXME: @creack any reason for using /tmp instead of ""?
// FIXME: @creack "name" is a terrible variable name
@@ -339,22 +469,18 @@ func (b *buildFile) Build(context io.Reader) (string, error) {
}
defer os.RemoveAll(name)
b.context = name
dockerfile, err := os.Open(path.Join(name, "Dockerfile"))
if err != nil {
filename := path.Join(name, "Dockerfile")
if _, err := os.Stat(filename); os.IsNotExist(err) {
return "", fmt.Errorf("Can't build a directory with no Dockerfile")
}
// FIXME: "file" is also a terrible variable name ;)
file := bufio.NewReader(dockerfile)
fileBytes, err := ioutil.ReadFile(filename)
if err != nil {
return "", err
}
dockerfile := string(fileBytes)
dockerfile = lineContinuation.ReplaceAllString(dockerfile, "")
stepN := 0
for {
line, err := file.ReadString('\n')
if err != nil {
if err == io.EOF && line == "" {
break
} else if err != io.EOF {
return "", err
}
}
for _, line := range strings.Split(dockerfile, "\n") {
line = strings.Trim(strings.Replace(line, "\t", " ", -1), " \t\r\n")
// Skip comments and empty line
if len(line) == 0 || line[0] == '#' {
@@ -366,15 +492,16 @@ func (b *buildFile) Build(context io.Reader) (string, error) {
}
instruction := strings.ToLower(strings.Trim(tmp[0], " "))
arguments := strings.Trim(tmp[1], " ")
stepN += 1
// FIXME: only count known instructions as build steps
fmt.Fprintf(b.out, "Step %d : %s %s\n", stepN, strings.ToUpper(instruction), arguments)
method, exists := reflect.TypeOf(b).MethodByName("Cmd" + strings.ToUpper(instruction[:1]) + strings.ToLower(instruction[1:]))
if !exists {
fmt.Fprintf(b.out, "# Skipping unknown instruction %s\n", strings.ToUpper(instruction))
continue
}
stepN += 1
fmt.Fprintf(b.out, "Step %d : %s %s\n", stepN, strings.ToUpper(instruction), arguments)
ret := method.Func.Call([]reflect.Value{reflect.ValueOf(b), reflect.ValueOf(arguments)})[0].Interface()
if ret != nil {
return "", ret.(error)
@@ -384,19 +511,24 @@ func (b *buildFile) Build(context io.Reader) (string, error) {
}
if b.image != "" {
fmt.Fprintf(b.out, "Successfully built %s\n", utils.TruncateID(b.image))
if b.rm {
b.clearTmp(b.tmpContainers)
}
return b.image, nil
}
return "", fmt.Errorf("An error occured during the build\n")
return "", fmt.Errorf("An error occurred during the build\n")
}
func NewBuildFile(srv *Server, out io.Writer) BuildFile {
func NewBuildFile(srv *Server, out io.Writer, verbose, utilizeCache, rm bool) BuildFile {
return &buildFile{
builder: NewBuilder(srv.runtime),
runtime: srv.runtime,
srv: srv,
config: &Config{},
out: out,
tmpContainers: make(map[string]struct{}),
tmpImages: make(map[string]struct{}),
verbose: verbose,
utilizeCache: utilizeCache,
rm: rm,
}
}
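
Build above now reads the whole Dockerfile, collapses backslash line continuations with the lineContinuation regexp, and splits each remaining line into an instruction and its arguments before dispatching via reflection. A sketch of just the parsing half; the Dockerfile text is illustrative and nothing is actually executed:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// Same idea as lineContinuation above: a backslash followed by a newline
// joins two physical lines into one logical instruction.
var lineContinuation = regexp.MustCompile(`\s*\\\s*\n`)

func main() {
	dockerfile := `# illustrative Dockerfile text
from base
run echo hello \
    world
env FOO bar
`
	dockerfile = lineContinuation.ReplaceAllString(dockerfile, "")

	step := 0
	for _, line := range strings.Split(dockerfile, "\n") {
		line = strings.Trim(strings.Replace(line, "\t", " ", -1), " \t\r\n")
		if len(line) == 0 || line[0] == '#' {
			continue // skip comments and blank lines, as Build does
		}
		parts := strings.SplitN(line, " ", 2)
		if len(parts) != 2 {
			continue
		}
		step++
		fmt.Printf("Step %d : %s %s\n", step, strings.ToUpper(parts[0]), strings.Trim(parts[1], " "))
	}
}
```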


@@ -1,7 +1,12 @@
package docker
import (
"fmt"
"io/ioutil"
"net"
"net/http"
"net/http/httptest"
"strings"
"testing"
)
@@ -21,34 +26,68 @@ type testContextTemplate struct {
dockerfile string
// Additional files in the context, eg [][2]string{"./passwd", "gordon"}
files [][2]string
// Additional remote files to host on a local HTTP server.
remoteFiles [][2]string
}
// A table of all the contexts to build and test.
// A new docker runtime will be created and torn down for each context.
var testContexts []testContextTemplate = []testContextTemplate{
var testContexts = []testContextTemplate{
{
`
from docker-ut
from {IMAGE}
run sh -c 'echo root:testpass > /tmp/passwd'
run mkdir -p /var/run/sshd
run [ "$(cat /tmp/passwd)" = "root:testpass" ]
run [ "$(ls -d /var/run/sshd)" = "/var/run/sshd" ]
`,
nil,
nil,
},
// Exactly the same as above, except uses a line split with a \ to test
// multiline support.
{
`
from docker-ut
add foo /usr/lib/bla/bar
run [ "$(cat /usr/lib/bla/bar)" = 'hello world!' ]
from {IMAGE}
run sh -c 'echo root:testpass \
> /tmp/passwd'
run mkdir -p /var/run/sshd
run [ "$(cat /tmp/passwd)" = "root:testpass" ]
run [ "$(ls -d /var/run/sshd)" = "/var/run/sshd" ]
`,
[][2]string{{"foo", "hello world!"}},
nil,
nil,
},
// Line containing literal "\n"
{
`
from {IMAGE}
run sh -c 'echo root:testpass > /tmp/passwd'
run echo "foo \n bar"; echo "baz"
run mkdir -p /var/run/sshd
run [ "$(cat /tmp/passwd)" = "root:testpass" ]
run [ "$(ls -d /var/run/sshd)" = "/var/run/sshd" ]
`,
nil,
nil,
},
{
`
from {IMAGE}
add foo /usr/lib/bla/bar
run [ "$(cat /usr/lib/bla/bar)" = 'hello' ]
add http://{SERVERADDR}/baz /usr/lib/baz/quux
run [ "$(cat /usr/lib/baz/quux)" = 'world!' ]
`,
[][2]string{{"foo", "hello"}},
[][2]string{{"/baz", "world!"}},
},
{
`
from docker-ut
from {IMAGE}
add f /
run [ "$(cat /f)" = "hello" ]
add f /abc
@@ -70,33 +109,453 @@ run [ "$(cat /somewheeeere/over/the/rainbooow/ga)" = "bu" ]
{"f", "hello"},
{"d/ga", "bu"},
},
nil,
},
{
`
from docker-ut
from {IMAGE}
add http://{SERVERADDR}/x /a/b/c
run [ "$(cat /a/b/c)" = "hello" ]
add http://{SERVERADDR}/x?foo=bar /
run [ "$(cat /x)" = "hello" ]
add http://{SERVERADDR}/x /d/
run [ "$(cat /d/x)" = "hello" ]
add http://{SERVERADDR} /e
run [ "$(cat /e)" = "blah" ]
`,
nil,
[][2]string{{"/x", "hello"}, {"/", "blah"}},
},
{
`
from {IMAGE}
env FOO BAR
run [ "$FOO" = "BAR" ]
`,
nil,
nil,
},
{
`
from {IMAGE}
ENTRYPOINT /bin/echo
CMD Hello world
`,
nil,
nil,
},
{
`
from {IMAGE}
VOLUME /test
CMD Hello world
`,
nil,
nil,
},
{
`
from {IMAGE}
env FOO /foo/baz
env BAR /bar
env BAZ $BAR
env FOOPATH $PATH:$FOO
run [ "$BAR" = "$BAZ" ]
run [ "$FOOPATH" = "$PATH:/foo/baz" ]
`,
nil,
nil,
},
{
`
from {IMAGE}
env FOO /bar
env TEST testdir
env BAZ /foobar
add testfile $BAZ/
add $TEST $FOO
run [ "$(cat /foobar/testfile)" = "test1" ]
run [ "$(cat /bar/withfile)" = "test2" ]
`,
[][2]string{
{"testfile", "test1"},
{"testdir/withfile", "test2"},
},
nil,
},
}
// FIXME: test building with 2 successive overlapping ADD commands
func constructDockerfile(template string, ip net.IP, port string) string {
serverAddr := fmt.Sprintf("%s:%s", ip, port)
replacer := strings.NewReplacer("{IMAGE}", unitTestImageID, "{SERVERADDR}", serverAddr)
return replacer.Replace(template)
}
func mkTestingFileServer(files [][2]string) (*httptest.Server, error) {
mux := http.NewServeMux()
for _, file := range files {
name, contents := file[0], file[1]
mux.HandleFunc(name, func(w http.ResponseWriter, r *http.Request) {
w.Write([]byte(contents))
})
}
// This is how httptest.NewServer sets up a net.Listener, except that our listener must accept remote
// connections (from the container).
listener, err := net.Listen("tcp", ":0")
if err != nil {
return nil, err
}
s := httptest.NewUnstartedServer(mux)
s.Listener = listener
s.Start()
return s, nil
}
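
mkTestingFileServer above binds the httptest server to all interfaces (":0") instead of the default loopback-only listener, so a container under test can reach it over the bridge. A standalone sketch of the same pattern; the served path and payload are illustrative, and the request here simply comes from the host:

```go
package main

import (
	"fmt"
	"io/ioutil"
	"net"
	"net/http"
	"net/http/httptest"
)

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/x", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("hello"))
	})

	// Listen on all interfaces so a remote client can connect,
	// unlike httptest.NewServer's 127.0.0.1-only listener.
	listener, err := net.Listen("tcp", ":0")
	if err != nil {
		panic(err)
	}
	s := httptest.NewUnstartedServer(mux)
	s.Listener = listener
	s.Start()
	defer s.Close()

	addr := listener.Addr().(*net.TCPAddr)
	resp, err := http.Get(fmt.Sprintf("http://127.0.0.1:%d/x", addr.Port))
	if err != nil {
		panic(err)
	}
	body, _ := ioutil.ReadAll(resp.Body)
	resp.Body.Close()
	fmt.Println(string(body))
}
```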
func TestBuild(t *testing.T) {
for _, ctx := range testContexts {
buildImage(ctx, t, nil, true)
}
}
func buildImage(context testContextTemplate, t *testing.T, srv *Server, useCache bool) *Image {
if srv == nil {
runtime, err := newTestRuntime()
if err != nil {
t.Fatal(err)
}
defer nuke(runtime)
srv := &Server{runtime: runtime}
srv = &Server{
runtime: runtime,
pullingPool: make(map[string]struct{}),
pushingPool: make(map[string]struct{}),
}
}
buildfile := NewBuildFile(srv, ioutil.Discard)
if _, err := buildfile.Build(mkTestContext(ctx.dockerfile, ctx.files, t)); err != nil {
t.Fatal(err)
httpServer, err := mkTestingFileServer(context.remoteFiles)
if err != nil {
t.Fatal(err)
}
defer httpServer.Close()
idx := strings.LastIndex(httpServer.URL, ":")
if idx < 0 {
t.Fatalf("could not get port from test http server address %s", httpServer.URL)
}
port := httpServer.URL[idx+1:]
ip := srv.runtime.networkManager.bridgeNetwork.IP
dockerfile := constructDockerfile(context.dockerfile, ip, port)
buildfile := NewBuildFile(srv, ioutil.Discard, false, useCache, false)
id, err := buildfile.Build(mkTestContext(dockerfile, context.files, t))
if err != nil {
t.Fatal(err)
}
img, err := srv.ImageInspect(id)
if err != nil {
t.Fatal(err)
}
return img
}
func TestVolume(t *testing.T) {
img := buildImage(testContextTemplate{`
from {IMAGE}
volume /test
cmd Hello world
`, nil, nil}, t, nil, true)
if len(img.Config.Volumes) == 0 {
t.Fail()
}
for key := range img.Config.Volumes {
if key != "/test" {
t.Fail()
}
}
}
func TestBuildMaintainer(t *testing.T) {
img := buildImage(testContextTemplate{`
from {IMAGE}
maintainer dockerio
`, nil, nil}, t, nil, true)
if img.Author != "dockerio" {
t.Fail()
}
}
func TestBuildUser(t *testing.T) {
img := buildImage(testContextTemplate{`
from {IMAGE}
user dockerio
`, nil, nil}, t, nil, true)
if img.Config.User != "dockerio" {
t.Fail()
}
}
func TestBuildEnv(t *testing.T) {
img := buildImage(testContextTemplate{`
from {IMAGE}
env port 4243
`,
nil, nil}, t, nil, true)
hasEnv := false
for _, envVar := range img.Config.Env {
if envVar == "port=4243" {
hasEnv = true
break
}
}
if !hasEnv {
t.Fail()
}
}
func TestBuildCmd(t *testing.T) {
img := buildImage(testContextTemplate{`
from {IMAGE}
cmd ["/bin/echo", "Hello World"]
`,
nil, nil}, t, nil, true)
if img.Config.Cmd[0] != "/bin/echo" {
t.Log(img.Config.Cmd[0])
t.Fail()
}
if img.Config.Cmd[1] != "Hello World" {
t.Log(img.Config.Cmd[1])
t.Fail()
}
}
func TestBuildExpose(t *testing.T) {
img := buildImage(testContextTemplate{`
from {IMAGE}
expose 4243
`,
nil, nil}, t, nil, true)
if img.Config.PortSpecs[0] != "4243" {
t.Fail()
}
}
func TestBuildEntrypoint(t *testing.T) {
img := buildImage(testContextTemplate{`
from {IMAGE}
entrypoint ["/bin/echo"]
`,
nil, nil}, t, nil, true)
if img.Config.Entrypoint[0] != "/bin/echo" {
t.Fail()
}
}
// testing #1405 - config.Cmd does not get cleaned up if
// utilizing cache
func TestBuildEntrypointRunCleanup(t *testing.T) {
runtime, err := newTestRuntime()
if err != nil {
t.Fatal(err)
}
defer nuke(runtime)
srv := &Server{
runtime: runtime,
pullingPool: make(map[string]struct{}),
pushingPool: make(map[string]struct{}),
}
img := buildImage(testContextTemplate{`
from {IMAGE}
run echo "hello"
`,
nil, nil}, t, srv, true)
img = buildImage(testContextTemplate{`
from {IMAGE}
run echo "hello"
add foo /foo
entrypoint ["/bin/echo"]
`,
[][2]string{{"foo", "HEYO"}}, nil}, t, srv, true)
if len(img.Config.Cmd) != 0 {
t.Fail()
}
}
func TestBuildImageWithCache(t *testing.T) {
runtime, err := newTestRuntime()
if err != nil {
t.Fatal(err)
}
defer nuke(runtime)
srv := &Server{
runtime: runtime,
pullingPool: make(map[string]struct{}),
pushingPool: make(map[string]struct{}),
}
template := testContextTemplate{`
from {IMAGE}
maintainer dockerio
`,
nil, nil}
img := buildImage(template, t, srv, true)
imageId := img.ID
img = nil
img = buildImage(template, t, srv, true)
if imageId != img.ID {
t.Logf("Image ids should match: %s != %s", imageId, img.ID)
t.Fail()
}
}
func TestBuildImageWithoutCache(t *testing.T) {
runtime, err := newTestRuntime()
if err != nil {
t.Fatal(err)
}
defer nuke(runtime)
srv := &Server{
runtime: runtime,
pullingPool: make(map[string]struct{}),
pushingPool: make(map[string]struct{}),
}
template := testContextTemplate{`
from {IMAGE}
maintainer dockerio
`,
nil, nil}
img := buildImage(template, t, srv, true)
imageId := img.ID
img = nil
img = buildImage(template, t, srv, false)
if imageId == img.ID {
t.Logf("Image ids should not match: %s == %s", imageId, img.ID)
t.Fail()
}
}
func TestForbiddenContextPath(t *testing.T) {
runtime, err := newTestRuntime()
if err != nil {
t.Fatal(err)
}
defer nuke(runtime)
srv := &Server{
runtime: runtime,
pullingPool: make(map[string]struct{}),
pushingPool: make(map[string]struct{}),
}
context := testContextTemplate{`
from {IMAGE}
maintainer dockerio
add ../../ test/
`,
[][2]string{{"test.txt", "test1"}, {"other.txt", "other"}}, nil}
httpServer, err := mkTestingFileServer(context.remoteFiles)
if err != nil {
t.Fatal(err)
}
defer httpServer.Close()
idx := strings.LastIndex(httpServer.URL, ":")
if idx < 0 {
t.Fatalf("could not get port from test http server address %s", httpServer.URL)
}
port := httpServer.URL[idx+1:]
ip := srv.runtime.networkManager.bridgeNetwork.IP
dockerfile := constructDockerfile(context.dockerfile, ip, port)
buildfile := NewBuildFile(srv, ioutil.Discard, false, true, false)
_, err = buildfile.Build(mkTestContext(dockerfile, context.files, t))
if err == nil {
t.Log("Error should not be nil")
t.Fail()
}
if err.Error() != "Forbidden path: /" {
t.Logf("Error message is not expected: %s", err.Error())
t.Fail()
}
}
func TestBuildADDFileNotFound(t *testing.T) {
runtime, err := newTestRuntime()
if err != nil {
t.Fatal(err)
}
defer nuke(runtime)
srv := &Server{
runtime: runtime,
pullingPool: make(map[string]struct{}),
pushingPool: make(map[string]struct{}),
}
context := testContextTemplate{`
from {IMAGE}
add foo /usr/local/bar
`,
nil, nil}
httpServer, err := mkTestingFileServer(context.remoteFiles)
if err != nil {
t.Fatal(err)
}
defer httpServer.Close()
idx := strings.LastIndex(httpServer.URL, ":")
if idx < 0 {
t.Fatalf("could not get port from test http server address %s", httpServer.URL)
}
port := httpServer.URL[idx+1:]
ip := srv.runtime.networkManager.bridgeNetwork.IP
dockerfile := constructDockerfile(context.dockerfile, ip, port)
buildfile := NewBuildFile(srv, ioutil.Discard, false, true, false)
_, err = buildfile.Build(mkTestContext(dockerfile, context.files, t))
if err == nil {
t.Log("Error should not be nil")
t.Fail()
}
if err.Error() != "foo: no such file or directory" {
t.Logf("Error message is not expected: %s", err.Error())
t.Fail()
}
}


@@ -99,7 +99,7 @@ func Changes(layers []string, rw string) ([]Change, error) {
changes = append(changes, change)
return nil
})
if err != nil {
if err != nil && !os.IsNotExist(err) {
return nil, err
}
return changes, nil

File diff suppressed because it is too large


@@ -3,8 +3,9 @@ package docker
import (
"bufio"
"fmt"
"github.com/dotcloud/docker/utils"
"io"
_ "io/ioutil"
"io/ioutil"
"strings"
"testing"
"time"
@@ -37,7 +38,7 @@ func setTimeout(t *testing.T, msg string, d time.Duration, f func()) {
f()
c <- false
}()
if <-c {
if <-c && msg != "" {
t.Fatal(msg)
}
}
@@ -58,141 +59,109 @@ func assertPipe(input, output string, r io.Reader, w io.Writer, count int) error
return nil
}
/*TODO
func cmdWait(srv *Server, container *Container) error {
stdout, stdoutPipe := io.Pipe()
go func() {
srv.CmdWait(nil, stdoutPipe, container.Id)
}()
if _, err := bufio.NewReader(stdout).ReadString('\n'); err != nil {
return err
}
// Cleanup pipes
return closeWrap(stdout, stdoutPipe)
}
func cmdImages(srv *Server, args ...string) (string, error) {
stdout, stdoutPipe := io.Pipe()
go func() {
if err := srv.CmdImages(nil, stdoutPipe, args...); err != nil {
return
}
// force the pipe closed, so that the code below gets an EOF
stdoutPipe.Close()
}()
output, err := ioutil.ReadAll(stdout)
if err != nil {
return "", err
}
// Cleanup pipes
return string(output), closeWrap(stdout, stdoutPipe)
}
// TestImages checks that 'docker images' displays information correctly
func TestImages(t *testing.T) {
runtime, err := newTestRuntime()
if err != nil {
t.Fatal(err)
}
defer nuke(runtime)
srv := &Server{runtime: runtime}
output, err := cmdImages(srv)
if !strings.Contains(output, "REPOSITORY") {
t.Fatal("'images' should have a header")
}
if !strings.Contains(output, "docker-ut") {
t.Fatal("'images' should show the docker-ut image")
}
if !strings.Contains(output, "e9aa60c60128") {
t.Fatal("'images' should show the docker-ut image id")
}
output, err = cmdImages(srv, "-q")
if strings.Contains(output, "REPOSITORY") {
t.Fatal("'images -q' should not have a header")
}
if strings.Contains(output, "docker-ut") {
t.Fatal("'images' should not show the docker-ut image name")
}
if !strings.Contains(output, "e9aa60c60128") {
t.Fatal("'images' should show the docker-ut image id")
}
output, err = cmdImages(srv, "-viz")
if !strings.HasPrefix(output, "digraph docker {") {
t.Fatal("'images -v' should start with the dot header")
}
if !strings.HasSuffix(output, "}\n") {
t.Fatal("'images -v' should end with a '}'")
}
if !strings.Contains(output, "base -> \"e9aa60c60128\" [style=invis]") {
t.Fatal("'images -v' should have the docker-ut image id node")
}
// todo: add checks for -a
}
// TestRunHostname checks that 'docker run -h' correctly sets a custom hostname
func TestRunHostname(t *testing.T) {
runtime, err := newTestRuntime()
if err != nil {
t.Fatal(err)
}
defer nuke(runtime)
srv := &Server{runtime: runtime}
stdin, _ := io.Pipe()
stdout, stdoutPipe := io.Pipe()
cli := NewDockerCli(nil, stdoutPipe, ioutil.Discard, testDaemonProto, testDaemonAddr)
defer cleanup(globalRuntime)
c := make(chan struct{})
go func() {
if err := srv.CmdRun(stdin, rcli.NewDockerLocalConn(stdoutPipe), "-h", "foobar", GetTestImage(runtime).Id, "hostname"); err != nil {
defer close(c)
if err := cli.CmdRun("-h", "foobar", unitTestImageID, "hostname"); err != nil {
t.Fatal(err)
}
close(c)
}()
cmdOutput, err := bufio.NewReader(stdout).ReadString('\n')
if err != nil {
t.Fatal(err)
}
if cmdOutput != "foobar\n" {
t.Fatalf("'hostname' should display '%s', not '%s'", "foobar\n", cmdOutput)
}
setTimeout(t, "CmdRun timed out", 2*time.Second, func() {
setTimeout(t, "Reading command output time out", 2*time.Second, func() {
cmdOutput, err := bufio.NewReader(stdout).ReadString('\n')
if err != nil {
t.Fatal(err)
}
if cmdOutput != "foobar\n" {
t.Fatalf("'hostname' should display '%s', not '%s'", "foobar\n", cmdOutput)
}
})
setTimeout(t, "CmdRun timed out", 5*time.Second, func() {
<-c
})
}
// TestRunWorkdir checks that 'docker run -w' correctly sets a custom working directory
func TestRunWorkdir(t *testing.T) {
stdout, stdoutPipe := io.Pipe()
cli := NewDockerCli(nil, stdoutPipe, ioutil.Discard, testDaemonProto, testDaemonAddr)
defer cleanup(globalRuntime)
c := make(chan struct{})
go func() {
defer close(c)
if err := cli.CmdRun("-w", "/foo/bar", unitTestImageID, "pwd"); err != nil {
t.Fatal(err)
}
}()
setTimeout(t, "Reading command output time out", 2*time.Second, func() {
cmdOutput, err := bufio.NewReader(stdout).ReadString('\n')
if err != nil {
t.Fatal(err)
}
if cmdOutput != "/foo/bar\n" {
t.Fatalf("'pwd' should display '%s', not '%s'", "/foo/bar\n", cmdOutput)
}
})
setTimeout(t, "CmdRun timed out", 5*time.Second, func() {
<-c
})
}
// TestRunWorkdirExists checks that 'docker run -w' correctly sets a custom working directory, even if it exists
func TestRunWorkdirExists(t *testing.T) {
stdout, stdoutPipe := io.Pipe()
cli := NewDockerCli(nil, stdoutPipe, ioutil.Discard, testDaemonProto, testDaemonAddr)
defer cleanup(globalRuntime)
c := make(chan struct{})
go func() {
defer close(c)
if err := cli.CmdRun("-w", "/proc", unitTestImageID, "pwd"); err != nil {
t.Fatal(err)
}
}()
setTimeout(t, "Reading command output time out", 2*time.Second, func() {
cmdOutput, err := bufio.NewReader(stdout).ReadString('\n')
if err != nil {
t.Fatal(err)
}
if cmdOutput != "/proc\n" {
t.Fatalf("'pwd' should display '%s', not '%s'", "/proc\n", cmdOutput)
}
})
setTimeout(t, "CmdRun timed out", 5*time.Second, func() {
<-c
cmdWait(srv, srv.runtime.List()[0])
})
}
func TestRunExit(t *testing.T) {
runtime, err := newTestRuntime()
if err != nil {
t.Fatal(err)
}
defer nuke(runtime)
srv := &Server{runtime: runtime}
stdin, stdinPipe := io.Pipe()
stdout, stdoutPipe := io.Pipe()
cli := NewDockerCli(stdin, stdoutPipe, ioutil.Discard, testDaemonProto, testDaemonAddr)
defer cleanup(globalRuntime)
c1 := make(chan struct{})
go func() {
srv.CmdRun(stdin, rcli.NewDockerLocalConn(stdoutPipe), "-i", GetTestImage(runtime).Id, "/bin/cat")
cli.CmdRun("-i", unitTestImageID, "/bin/cat")
close(c1)
}()
@@ -202,21 +171,24 @@ func TestRunExit(t *testing.T) {
}
})
container := runtime.List()[0]
container := globalRuntime.List()[0]
// Closing /bin/cat stdin, expect it to exit
p, err := container.StdinPipe()
if err != nil {
t.Fatal(err)
}
if err := p.Close(); err != nil {
if err := stdin.Close(); err != nil {
t.Fatal(err)
}
// as the process exited, CmdRun must finish and unblock. Wait for it
setTimeout(t, "Waiting for CmdRun timed out", 2*time.Second, func() {
setTimeout(t, "Waiting for CmdRun timed out", 10*time.Second, func() {
<-c1
cmdWait(srv, container)
go func() {
cli.CmdWait(container.ID)
}()
if _, err := bufio.NewReader(stdout).ReadString('\n'); err != nil {
t.Fatal(err)
}
})
// Make sure that the client has been disconnected
@@ -233,21 +205,18 @@ func TestRunExit(t *testing.T) {
// Expected behaviour: the process dies when the client disconnects
func TestRunDisconnect(t *testing.T) {
runtime, err := newTestRuntime()
if err != nil {
t.Fatal(err)
}
defer nuke(runtime)
srv := &Server{runtime: runtime}
stdin, stdinPipe := io.Pipe()
stdout, stdoutPipe := io.Pipe()
cli := NewDockerCli(stdin, stdoutPipe, ioutil.Discard, testDaemonProto, testDaemonAddr)
defer cleanup(globalRuntime)
c1 := make(chan struct{})
go func() {
// We're simulating a disconnect so the return value doesn't matter. What matters is the
// fact that CmdRun returns.
srv.CmdRun(stdin, rcli.NewDockerLocalConn(stdoutPipe), "-i", GetTestImage(runtime).Id, "/bin/cat")
cli.CmdRun("-i", unitTestImageID, "/bin/cat")
close(c1)
}()
@@ -271,7 +240,7 @@ func TestRunDisconnect(t *testing.T) {
// Client disconnect after run -i should cause stdin to be closed, which should
// cause /bin/cat to exit.
setTimeout(t, "Waiting for /bin/cat to exit timed out", 2*time.Second, func() {
container := runtime.List()[0]
container := globalRuntime.List()[0]
container.Wait()
if container.State.Running {
t.Fatalf("/bin/cat is still running after closing stdin")
@@ -281,40 +250,39 @@ func TestRunDisconnect(t *testing.T) {
// Expected behaviour: the process dies when the client disconnects
func TestRunDisconnectTty(t *testing.T) {
runtime, err := newTestRuntime()
if err != nil {
t.Fatal(err)
}
defer nuke(runtime)
srv := &Server{runtime: runtime}
stdin, stdinPipe := io.Pipe()
stdout, stdoutPipe := io.Pipe()
cli := NewDockerCli(stdin, stdoutPipe, ioutil.Discard, testDaemonProto, testDaemonAddr)
defer cleanup(globalRuntime)
c1 := make(chan struct{})
go func() {
// We're simulating a disconnect so the return value doesn't matter. What matters is the
// fact that CmdRun returns.
srv.CmdRun(stdin, rcli.NewDockerLocalConn(stdoutPipe), "-i", "-t", GetTestImage(runtime).Id, "/bin/cat")
if err := cli.CmdRun("-i", "-t", unitTestImageID, "/bin/cat"); err != nil {
utils.Debugf("Error CmdRun: %s\n", err)
}
close(c1)
}()
setTimeout(t, "Waiting for the container to be started timed out", 2*time.Second, func() {
setTimeout(t, "Waiting for the container to be started timed out", 10*time.Second, func() {
for {
// Client disconnect after run -i should keep stdin open in TTY mode
l := runtime.List()
l := globalRuntime.List()
if len(l) == 1 && l[0].State.Running {
break
}
time.Sleep(10 * time.Millisecond)
}
})
// Client disconnect after run -i should keep stdin open in TTY mode
container := runtime.List()[0]
container := globalRuntime.List()[0]
setTimeout(t, "Read/Write assertion timed out", 2*time.Second, func() {
setTimeout(t, "Read/Write assertion timed out", 2000*time.Second, func() {
if err := assertPipe("hello\n", "hello", stdout, stdinPipe, 15); err != nil {
t.Fatal(err)
}
@@ -339,24 +307,21 @@ func TestRunDisconnectTty(t *testing.T) {
// 'docker run -i -a stdin' should send the client's stdin to the command,
// then detach from it and print the container id.
func TestRunAttachStdin(t *testing.T) {
runtime, err := newTestRuntime()
if err != nil {
t.Fatal(err)
}
defer nuke(runtime)
srv := &Server{runtime: runtime}
stdin, stdinPipe := io.Pipe()
stdout, stdoutPipe := io.Pipe()
cli := NewDockerCli(stdin, stdoutPipe, ioutil.Discard, testDaemonProto, testDaemonAddr)
defer cleanup(globalRuntime)
ch := make(chan struct{})
go func() {
srv.CmdRun(stdin, rcli.NewDockerLocalConn(stdoutPipe), "-i", "-a", "stdin", GetTestImage(runtime).Id, "sh", "-c", "echo hello; cat")
close(ch)
defer close(ch)
cli.CmdRun("-i", "-a", "stdin", unitTestImageID, "sh", "-c", "echo hello && cat && sleep 5")
}()
// Send input to the command, close stdin
setTimeout(t, "Write timed out", 2*time.Second, func() {
setTimeout(t, "Write timed out", 10*time.Second, func() {
if _, err := stdinPipe.Write([]byte("hi there\n")); err != nil {
t.Fatal(err)
}
@@ -365,36 +330,40 @@ func TestRunAttachStdin(t *testing.T) {
}
})
container := runtime.List()[0]
container := globalRuntime.List()[0]
// Check output
cmdOutput, err := bufio.NewReader(stdout).ReadString('\n')
if err != nil {
t.Fatal(err)
}
if cmdOutput != container.ShortId()+"\n" {
t.Fatalf("Wrong output: should be '%s', not '%s'\n", container.ShortId()+"\n", cmdOutput)
}
setTimeout(t, "Reading command output time out", 10*time.Second, func() {
cmdOutput, err := bufio.NewReader(stdout).ReadString('\n')
if err != nil {
t.Fatal(err)
}
if cmdOutput != container.ShortID()+"\n" {
t.Fatalf("Wrong output: should be '%s', not '%s'\n", container.ShortID()+"\n", cmdOutput)
}
})
// wait for CmdRun to return
setTimeout(t, "Waiting for CmdRun timed out", 2*time.Second, func() {
setTimeout(t, "Waiting for CmdRun timed out", 5*time.Second, func() {
<-ch
})
setTimeout(t, "Waiting for command to exit timed out", 2*time.Second, func() {
setTimeout(t, "Waiting for command to exit timed out", 10*time.Second, func() {
container.Wait()
})
// Check logs
if cmdLogs, err := container.ReadLog("stdout"); err != nil {
if cmdLogs, err := container.ReadLog("json"); err != nil {
t.Fatal(err)
} else {
if output, err := ioutil.ReadAll(cmdLogs); err != nil {
t.Fatal(err)
} else {
expectedLog := "hello\nhi there\n"
if string(output) != expectedLog {
t.Fatalf("Unexpected logs: should be '%s', not '%s'\n", expectedLog, output)
expectedLogs := []string{"{\"log\":\"hello\\n\",\"stream\":\"stdout\"", "{\"log\":\"hi there\\n\",\"stream\":\"stdout\""}
for _, expectedLog := range expectedLogs {
if !strings.Contains(string(output), expectedLog) {
t.Fatalf("Unexpected logs: should contains '%s', it is not '%s'\n", expectedLog, output)
}
}
}
}
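The assertion above only checks substrings of the new JSON log format; a small standalone sketch of what one record could look like (only the log and stream fields are confirmed by the expected strings, anything else would be an assumption):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// logRecord approximates one line of the "json" log file checked above.
	type logRecord struct {
		Log    string `json:"log"`
		Stream string `json:"stream"`
	}

	func main() {
		rec := logRecord{Log: "hello\n", Stream: "stdout"}
		b, err := json.Marshal(rec)
		if err != nil {
			panic(err)
		}
		fmt.Println(string(b)) // {"log":"hello\n","stream":"stdout"}
	}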
@@ -402,42 +371,43 @@ func TestRunAttachStdin(t *testing.T) {
// Expected behaviour, the process stays alive when the client disconnects
func TestAttachDisconnect(t *testing.T) {
runtime, err := newTestRuntime()
if err != nil {
t.Fatal(err)
}
defer nuke(runtime)
srv := &Server{runtime: runtime}
container, err := NewBuilder(runtime).Create(
&Config{
Image: GetTestImage(runtime).Id,
CpuShares: 1000,
Memory: 33554432,
Cmd: []string{"/bin/cat"},
OpenStdin: true,
},
)
if err != nil {
t.Fatal(err)
}
defer runtime.Destroy(container)
// Start the process
if err := container.Start(); err != nil {
t.Fatal(err)
}
stdin, stdinPipe := io.Pipe()
stdout, stdoutPipe := io.Pipe()
cli := NewDockerCli(stdin, stdoutPipe, ioutil.Discard, testDaemonProto, testDaemonAddr)
defer cleanup(globalRuntime)
go func() {
// Start a process in daemon mode
if err := cli.CmdRun("-d", "-i", unitTestImageID, "/bin/cat"); err != nil {
utils.Debugf("Error CmdRun: %s\n", err)
}
}()
setTimeout(t, "Waiting for CmdRun timed out", 10*time.Second, func() {
if _, err := bufio.NewReader(stdout).ReadString('\n'); err != nil {
t.Fatal(err)
}
})
setTimeout(t, "Waiting for the container to be started timed out", 10*time.Second, func() {
for {
l := globalRuntime.List()
if len(l) == 1 && l[0].State.Running {
break
}
time.Sleep(10 * time.Millisecond)
}
})
container := globalRuntime.List()[0]
// Attach to it
c1 := make(chan struct{})
go func() {
// We're simulating a disconnect so the return value doesn't matter. What matters is the
// fact that CmdAttach returns.
srv.CmdAttach(stdin, rcli.NewDockerLocalConn(stdoutPipe), container.Id)
cli.CmdAttach(container.ID)
close(c1)
}()
@@ -458,14 +428,13 @@ func TestAttachDisconnect(t *testing.T) {
// We closed stdin, expect /bin/cat to still be running
// Wait a little bit to make sure container.monitor() did its thing
err = container.WaitTimeout(500 * time.Millisecond)
err := container.WaitTimeout(500 * time.Millisecond)
if err == nil || !container.State.Running {
t.Fatalf("/bin/cat is not running after closing stdin")
}
// Try to avoid the timeoout in destroy. Best effort, don't check error
// Try to avoid the timeout in destroy. Best effort, don't check error
cStdin, _ := container.StdinPipe()
cStdin.Close()
container.Wait()
}
*/

View File

@@ -2,6 +2,7 @@ package docker
import (
"encoding/json"
"errors"
"flag"
"fmt"
"github.com/dotcloud/docker/term"
@@ -10,11 +11,11 @@ import (
"io"
"io/ioutil"
"log"
"net"
"os"
"os/exec"
"path"
"path/filepath"
"sort"
"strconv"
"strings"
"syscall"
@@ -40,6 +41,8 @@ type Container struct {
SysInitPath string
ResolvConfPath string
HostnamePath string
HostsPath string
cmd *exec.Cmd
stdout *utils.WriteBroadcaster
@@ -52,43 +55,77 @@ type Container struct {
waitLock chan struct{}
Volumes map[string]string
// Store rw/ro in a separate structure to preserve reverse-compatibility on-disk.
// Easier than migrating older container configs :)
VolumesRW map[string]bool
}
type Config struct {
Hostname string
User string
Memory int64 // Memory limit (in bytes)
MemorySwap int64 // Total memory usage (memory + swap); set `-1' to disable swap
CpuShares int64 // CPU shares (relative weight vs. other containers)
AttachStdin bool
AttachStdout bool
AttachStderr bool
PortSpecs []string
Tty bool // Attach standard streams to a tty, including stdin if it is not closed.
OpenStdin bool // Open stdin
StdinOnce bool // If true, close stdin after the first attached client disconnects.
Env []string
Cmd []string
Dns []string
Image string // Name of the image as it was passed by the operator (eg. could be symbolic)
Volumes map[string]struct{}
VolumesFrom string
Hostname string
Domainname string
User string
Memory int64 // Memory limit (in bytes)
MemorySwap int64 // Total memory usage (memory + swap); set `-1' to disable swap
CpuShares int64 // CPU shares (relative weight vs. other containers)
AttachStdin bool
AttachStdout bool
AttachStderr bool
PortSpecs []string
Tty bool // Attach standard streams to a tty, including stdin if it is not closed.
OpenStdin bool // Open stdin
StdinOnce bool // If true, close stdin after the first attached client disconnects.
Env []string
Cmd []string
Dns []string
Image string // Name of the image as it was passed by the operator (eg. could be symbolic)
Volumes map[string]struct{}
VolumesFrom string
WorkingDir string
Entrypoint []string
NetworkDisabled bool
Privileged bool
}
func ParseRun(args []string, capabilities *Capabilities) (*Config, *flag.FlagSet, error) {
type HostConfig struct {
Binds []string
ContainerIDFile string
LxcConf []KeyValuePair
}
type BindMap struct {
SrcPath string
DstPath string
Mode string
}
var (
ErrInvaidWorikingDirectory = errors.New("The working directory is invalid. It needs to be an absolute path.")
)
type KeyValuePair struct {
Key string
Value string
}
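ParseRun below relies on a parseLxcConfOpts helper that is not part of this hunk; a hedged, self-contained sketch of what it plausibly does, assuming each -lxc-conf value has the form "key = value" (the KeyValuePair type is copied from the struct above, the error text is illustrative):

	package main

	import (
		"fmt"
		"strings"
	)

	// KeyValuePair mirrors the struct introduced in this hunk.
	type KeyValuePair struct {
		Key   string
		Value string
	}

	// parseLxcConfOpts splits each raw "-lxc-conf" value on the first
	// "=" and trims the surrounding whitespace.
	func parseLxcConfOpts(opts []string) ([]KeyValuePair, error) {
		out := make([]KeyValuePair, 0, len(opts))
		for _, o := range opts {
			parts := strings.SplitN(o, "=", 2)
			if len(parts) != 2 {
				return nil, fmt.Errorf("invalid lxc config option: %s", o)
			}
			out = append(out, KeyValuePair{
				Key:   strings.TrimSpace(parts[0]),
				Value: strings.TrimSpace(parts[1]),
			})
		}
		return out, nil
	}

	func main() {
		pairs, err := parseLxcConfOpts([]string{"lxc.cgroup.cpuset.cpus = 0,1"})
		fmt.Println(pairs, err)
	}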
func ParseRun(args []string, capabilities *Capabilities) (*Config, *HostConfig, *flag.FlagSet, error) {
cmd := Subcmd("run", "[OPTIONS] IMAGE [COMMAND] [ARG...]", "Run a command in a new container")
if len(args) > 0 && args[0] != "--help" {
if os.Getenv("TEST") != "" {
cmd.SetOutput(ioutil.Discard)
cmd.Usage = nil
}
flHostname := cmd.String("h", "", "Container host name")
flWorkingDir := cmd.String("w", "", "Working directory inside the container")
flUser := cmd.String("u", "", "Username or UID")
flDetach := cmd.Bool("d", false, "Detached mode: leave the container running in the background")
flDetach := cmd.Bool("d", false, "Detached mode: Run container in the background, print new container id")
flAttach := NewAttachOpts()
cmd.Var(flAttach, "a", "Attach to stdin, stdout or stderr.")
flStdin := cmd.Bool("i", false, "Keep stdin open even if not attached")
flTty := cmd.Bool("t", false, "Allocate a pseudo-tty")
flMemory := cmd.Int64("m", 0, "Memory limit (in bytes)")
flContainerIDFile := cmd.String("cidfile", "", "Write the container ID to the file")
flNetwork := cmd.Bool("n", true, "Enable networking for this container")
flPrivileged := cmd.Bool("privileged", false, "Give extended privileges to this container")
if capabilities != nil && *flMemory > 0 && !capabilities.MemoryLimit {
//fmt.Fprintf(stdout, "WARNING: Your kernel does not support memory limit capabilities. Limitation discarded.\n")
@@ -107,15 +144,22 @@ func ParseRun(args []string, capabilities *Capabilities) (*Config, *flag.FlagSet
cmd.Var(&flDns, "dns", "Set custom dns servers")
flVolumes := NewPathOpts()
cmd.Var(flVolumes, "v", "Attach a data volume")
cmd.Var(flVolumes, "v", "Bind mount a volume (e.g. from the host: -v /host:/container, from docker: -v /container)")
flVolumesFrom := cmd.String("volumes-from", "", "Mount volumes from the specified container")
flEntrypoint := cmd.String("entrypoint", "", "Overwrite the default entrypoint of the image")
var flLxcOpts ListOpts
cmd.Var(&flLxcOpts, "lxc-conf", "Add custom lxc options -lxc-conf=\"lxc.cgroup.cpuset.cpus = 0,1\"")
if err := cmd.Parse(args); err != nil {
return nil, cmd, err
return nil, nil, cmd, err
}
if *flDetach && len(flAttach) > 0 {
return nil, cmd, fmt.Errorf("Conflicting options: -a and -d")
return nil, nil, cmd, fmt.Errorf("Conflicting options: -a and -d")
}
if *flWorkingDir != "" && !path.IsAbs(*flWorkingDir) {
return nil, nil, cmd, ErrInvaidWorikingDirectory
}
// If neither -d or -a are set, attach to everything by default
if len(flAttach) == 0 && !*flDetach {
@@ -127,8 +171,23 @@ func ParseRun(args []string, capabilities *Capabilities) (*Config, *flag.FlagSet
}
}
}
var binds []string
// add any bind targets to the list of container volumes
for bind := range flVolumes {
arr := strings.Split(bind, ":")
if len(arr) > 1 {
dstDir := arr[1]
flVolumes[dstDir] = struct{}{}
binds = append(binds, bind)
delete(flVolumes, bind)
}
}
parsedArgs := cmd.Args()
runCmd := []string{}
entrypoint := []string{}
image := ""
if len(parsedArgs) >= 1 {
image = cmd.Arg(0)
@@ -136,23 +195,51 @@ func ParseRun(args []string, capabilities *Capabilities) (*Config, *flag.FlagSet
if len(parsedArgs) > 1 {
runCmd = parsedArgs[1:]
}
if *flEntrypoint != "" {
entrypoint = []string{*flEntrypoint}
}
var lxcConf []KeyValuePair
lxcConf, err := parseLxcConfOpts(flLxcOpts)
if err != nil {
return nil, nil, cmd, err
}
hostname := *flHostname
domainname := ""
parts := strings.SplitN(hostname, ".", 2)
if len(parts) > 1 {
hostname = parts[0]
domainname = parts[1]
}
config := &Config{
Hostname: *flHostname,
PortSpecs: flPorts,
User: *flUser,
Tty: *flTty,
OpenStdin: *flStdin,
Memory: *flMemory,
CpuShares: *flCpuShares,
AttachStdin: flAttach.Get("stdin"),
AttachStdout: flAttach.Get("stdout"),
AttachStderr: flAttach.Get("stderr"),
Env: flEnv,
Cmd: runCmd,
Dns: flDns,
Image: image,
Volumes: flVolumes,
VolumesFrom: *flVolumesFrom,
Hostname: hostname,
Domainname: domainname,
PortSpecs: flPorts,
User: *flUser,
Tty: *flTty,
NetworkDisabled: !*flNetwork,
OpenStdin: *flStdin,
Memory: *flMemory,
CpuShares: *flCpuShares,
AttachStdin: flAttach.Get("stdin"),
AttachStdout: flAttach.Get("stdout"),
AttachStderr: flAttach.Get("stderr"),
Env: flEnv,
Cmd: runCmd,
Dns: flDns,
Image: image,
Volumes: flVolumes,
VolumesFrom: *flVolumesFrom,
Entrypoint: entrypoint,
Privileged: *flPrivileged,
WorkingDir: *flWorkingDir,
}
hostConfig := &HostConfig{
Binds: binds,
ContainerIDFile: *flContainerIDFile,
LxcConf: lxcConf,
}
if capabilities != nil && *flMemory > 0 && !capabilities.SwapLimit {
@@ -164,25 +251,41 @@ func ParseRun(args []string, capabilities *Capabilities) (*Config, *flag.FlagSet
if config.OpenStdin && config.AttachStdin {
config.StdinOnce = true
}
return config, cmd, nil
return config, hostConfig, cmd, nil
}
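The new hostname handling above splits "-h foo.example.com" into a host part and a domain part; a tiny standalone sketch of that behaviour:

	package main

	import (
		"fmt"
		"strings"
	)

	// splitHostname mirrors the SplitN logic in ParseRun: the first label
	// becomes Hostname, anything after the first dot becomes Domainname.
	func splitHostname(flHostname string) (hostname, domainname string) {
		parts := strings.SplitN(flHostname, ".", 2)
		if len(parts) > 1 {
			return parts[0], parts[1]
		}
		return flHostname, ""
	}

	func main() {
		fmt.Println(splitHostname("web1.example.com")) // web1 example.com
		fmt.Println(splitHostname("web1"))             // web1
	}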
type PortMapping map[string]string
type NetworkSettings struct {
IPAddress string
IPPrefixLen int
Gateway string
Bridge string
PortMapping map[string]string
PortMapping map[string]PortMapping
}
// String returns a human-readable description of the port mapping defined in the settings
func (settings *NetworkSettings) PortMappingHuman() string {
var mapping []string
for private, public := range settings.PortMapping {
mapping = append(mapping, fmt.Sprintf("%s->%s", public, private))
// PortMappingAPI returns a description of the port mapping defined in the settings that is easier to process programmatically
func (settings *NetworkSettings) PortMappingAPI() []APIPort {
var mapping []APIPort
for private, public := range settings.PortMapping["Tcp"] {
pubint, _ := strconv.ParseInt(public, 0, 0)
privint, _ := strconv.ParseInt(private, 0, 0)
mapping = append(mapping, APIPort{
PrivatePort: privint,
PublicPort: pubint,
Type: "tcp",
})
}
sort.Strings(mapping)
return strings.Join(mapping, ", ")
for private, public := range settings.PortMapping["Udp"] {
pubint, _ := strconv.ParseInt(public, 0, 0)
privint, _ := strconv.ParseInt(private, 0, 0)
mapping = append(mapping, APIPort{
PrivatePort: privint,
PublicPort: pubint,
Type: "udp",
})
}
return mapping
}
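For reference, a standalone approximation of the conversion above: the nested per-protocol maps become a flat list of APIPort values (the APIPort field set here is limited to what this hunk shows):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
	)

	// PortMapping and APIPort mirror the shapes used in the diff.
	type PortMapping map[string]string

	type APIPort struct {
		PrivatePort int64
		PublicPort  int64
		Type        string
	}

	// flattenPortMapping converts {"Tcp": {private: public}, "Udp": ...}
	// into a flat []APIPort, much like PortMappingAPI does.
	func flattenPortMapping(settings map[string]PortMapping) []APIPort {
		var out []APIPort
		for _, proto := range []string{"Tcp", "Udp"} {
			for private, public := range settings[proto] {
				priv, _ := strconv.ParseInt(private, 10, 64)
				pub, _ := strconv.ParseInt(public, 10, 64)
				out = append(out, APIPort{
					PrivatePort: priv,
					PublicPort:  pub,
					Type:        strings.ToLower(proto),
				})
			}
		}
		return out
	}

	func main() {
		m := map[string]PortMapping{"Tcp": {"80": "49153"}, "Udp": {}}
		fmt.Printf("%+v\n", flattenPortMapping(m))
	}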
// Inject the io.Reader at the given path. Note: do not close the reader
@@ -216,7 +319,8 @@ func (container *Container) FromDisk() error {
return err
}
// Load container settings
if err := json.Unmarshal(data, container); err != nil {
// UDP support broke compatibility of docker.PortMapping, but it is not used when loading a container, so we can skip the error
if err := json.Unmarshal(data, container); err != nil && !strings.Contains(err.Error(), "docker.PortMapping") {
return err
}
return nil
@@ -230,7 +334,27 @@ func (container *Container) ToDisk() (err error) {
return ioutil.WriteFile(container.jsonPath(), data, 0666)
}
func (container *Container) generateLXCConfig() error {
func (container *Container) ReadHostConfig() (*HostConfig, error) {
data, err := ioutil.ReadFile(container.hostConfigPath())
if err != nil {
return &HostConfig{}, err
}
hostConfig := &HostConfig{}
if err := json.Unmarshal(data, hostConfig); err != nil {
return &HostConfig{}, err
}
return hostConfig, nil
}
func (container *Container) SaveHostConfig(hostConfig *HostConfig) (err error) {
data, err := json.Marshal(hostConfig)
if err != nil {
return
}
return ioutil.WriteFile(container.hostConfigPath(), data, 0666)
}
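ReadHostConfig and SaveHostConfig above are a plain JSON round trip through the container's hostconfig.json; a minimal standalone illustration (LxcConf is omitted and the sample bind is made up):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// HostConfig mirrors the struct introduced earlier in this diff,
	// minus the LxcConf field, which is left out for brevity.
	type HostConfig struct {
		Binds           []string
		ContainerIDFile string
	}

	func main() {
		// What SaveHostConfig would persist...
		hc := HostConfig{Binds: []string{"/tmp/data:/data:rw"}}
		data, err := json.Marshal(hc)
		if err != nil {
			panic(err)
		}
		fmt.Println(string(data))

		// ...and what ReadHostConfig would get back on "docker start".
		var back HostConfig
		if err := json.Unmarshal(data, &back); err != nil {
			panic(err)
		}
		fmt.Printf("%+v\n", back)
	}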
func (container *Container) generateLXCConfig(hostConfig *HostConfig) error {
fo, err := os.Create(container.lxcConfigPath())
if err != nil {
return err
@@ -239,6 +363,11 @@ func (container *Container) generateLXCConfig() error {
if err := LxcTemplateCompiled.Execute(fo, container); err != nil {
return err
}
if hostConfig != nil {
if err := LxcHostConfigTemplateCompiled.Execute(fo, hostConfig); err != nil {
return err
}
}
return nil
}
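generateLXCConfig now executes two compiled templates into the same file: the base container template, then the host-config template when a HostConfig is given. A standalone toy version of that pattern (the template bodies here are placeholders, not the real LXC templates):

	package main

	import (
		"os"
		"text/template"
	)

	type kv struct{ Key, Value string }

	var (
		// Placeholder stand-ins for LxcTemplateCompiled and
		// LxcHostConfigTemplateCompiled.
		baseTmpl = template.Must(template.New("base").Parse("lxc.utsname = {{.Hostname}}\n"))
		hostTmpl = template.Must(template.New("host").Parse("{{range .LxcConf}}{{.Key}} = {{.Value}}\n{{end}}"))
	)

	func main() {
		fo, err := os.Create("lxc-example.conf")
		if err != nil {
			panic(err)
		}
		defer fo.Close()
		if err := baseTmpl.Execute(fo, struct{ Hostname string }{"web1"}); err != nil {
			panic(err)
		}
		// Only appended when a host config is present, as in the diff.
		hostCfg := struct{ LxcConf []kv }{[]kv{{"lxc.cgroup.cpuset.cpus", "0,1"}}}
		if err := hostTmpl.Execute(fo, hostCfg); err != nil {
			panic(err)
		}
	}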
@@ -309,14 +438,15 @@ func (container *Container) Attach(stdin io.ReadCloser, stdinCloser io.Closer, s
utils.Debugf("[start] attach stdin\n")
defer utils.Debugf("[end] attach stdin\n")
// No matter what, when stdin is closed (io.Copy unblock), close stdout and stderr
if cStdout != nil {
defer cStdout.Close()
}
if cStderr != nil {
defer cStderr.Close()
}
if container.Config.StdinOnce && !container.Config.Tty {
defer cStdin.Close()
} else {
if cStdout != nil {
defer cStdout.Close()
}
if cStderr != nil {
defer cStderr.Close()
}
}
if container.Config.Tty {
_, err = utils.CopyEscapable(cStdin, stdin)
@@ -430,9 +560,13 @@ func (container *Container) Attach(stdin io.ReadCloser, stdinCloser io.Closer, s
})
}
func (container *Container) Start() error {
container.State.lock()
defer container.State.unlock()
func (container *Container) Start(hostConfig *HostConfig) error {
container.State.Lock()
defer container.State.Unlock()
if hostConfig == nil { // in docker start or docker restart we want to reuse the previously saved HostConfig file
hostConfig, _ = container.ReadHostConfig()
}
if container.State.Running {
return fmt.Errorf("The container %s is already running.", container.ID)
@@ -440,8 +574,12 @@ func (container *Container) Start() error {
if err := container.EnsureMounted(); err != nil {
return err
}
if err := container.allocateNetwork(); err != nil {
return err
if container.runtime.networkManager.disabled {
container.Config.NetworkDisabled = true
} else {
if err := container.allocateNetwork(); err != nil {
return err
}
}
// Make sure the config is compatible with the current kernel
@@ -453,20 +591,53 @@ func (container *Container) Start() error {
log.Printf("WARNING: Your kernel does not support swap limit capabilities. Limitation discarded.\n")
container.Config.MemorySwap = -1
}
container.Volumes = make(map[string]string)
// Create the requested volumes
for volPath := range container.Config.Volumes {
c, err := container.runtime.volumes.Create(nil, container, "", "", nil)
if err != nil {
return err
}
if err := os.MkdirAll(path.Join(container.RootfsPath(), volPath), 0755); err != nil {
return nil
}
container.Volumes[volPath] = c.ID
if container.runtime.capabilities.IPv4ForwardingDisabled {
log.Printf("WARNING: IPv4 forwarding is disabled. Networking will not work")
}
// Create the requested bind mounts
binds := make(map[string]BindMap)
// Define illegal container destinations
illegalDsts := []string{"/", "."}
for _, bind := range hostConfig.Binds {
// FIXME: factorize bind parsing in parseBind
var src, dst, mode string
arr := strings.Split(bind, ":")
if len(arr) == 2 {
src = arr[0]
dst = arr[1]
mode = "rw"
} else if len(arr) == 3 {
src = arr[0]
dst = arr[1]
mode = arr[2]
} else {
return fmt.Errorf("Invalid bind specification: %s", bind)
}
// Bail if trying to mount to an illegal destination
for _, illegal := range illegalDsts {
if dst == illegal {
return fmt.Errorf("Illegal bind destination: %s", dst)
}
}
bindMap := BindMap{
SrcPath: src,
DstPath: dst,
Mode: mode,
}
binds[path.Clean(dst)] = bindMap
}
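The FIXME above asks for the bind parsing to be factored into a helper; a hedged sketch of what such a parseBind helper could look like, matching the three accepted shapes of "-v" (two-part binds default to read-write):

	package main

	import (
		"fmt"
		"strings"
	)

	// parseBind splits a bind specification the same way the loop above
	// does: "src:dst" defaults to mode "rw", "src:dst:mode" is explicit.
	func parseBind(bind string) (src, dst, mode string, err error) {
		arr := strings.Split(bind, ":")
		switch len(arr) {
		case 2:
			return arr[0], arr[1], "rw", nil
		case 3:
			return arr[0], arr[1], arr[2], nil
		default:
			return "", "", "", fmt.Errorf("Invalid bind specification: %s", bind)
		}
	}

	func main() {
		fmt.Println(parseBind("/tmp/data:/data"))
		fmt.Println(parseBind("/tmp/data:/data:ro"))
		fmt.Println(parseBind("/data"))
	}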
if container.Volumes == nil || len(container.Volumes) == 0 {
container.Volumes = make(map[string]string)
container.VolumesRW = make(map[string]bool)
}
// Apply volumes from another container if requested
if container.Config.VolumesFrom != "" {
c := container.runtime.Get(container.Config.VolumesFrom)
if c == nil {
@@ -474,16 +645,85 @@ func (container *Container) Start() error {
}
for volPath, id := range c.Volumes {
if _, exists := container.Volumes[volPath]; exists {
return fmt.Errorf("The requested volume %s overlap one of the volume of the container %s", volPath, c.ID)
continue
}
if err := os.MkdirAll(path.Join(container.RootfsPath(), volPath), 0755); err != nil {
return nil
return err
}
container.Volumes[volPath] = id
if isRW, exists := c.VolumesRW[volPath]; exists {
container.VolumesRW[volPath] = isRW
}
}
}
if err := container.generateLXCConfig(); err != nil {
// Create the requested volumes if they don't exist
for volPath := range container.Config.Volumes {
volPath = path.Clean(volPath)
// Skip existing volumes
if _, exists := container.Volumes[volPath]; exists {
continue
}
var srcPath string
srcRW := false
// If an external bind is defined for this volume, use that as a source
if bindMap, exists := binds[volPath]; exists {
srcPath = bindMap.SrcPath
if strings.ToLower(bindMap.Mode) == "rw" {
srcRW = true
}
// Otherwise create a directory in $ROOT/volumes/ and use that
} else {
c, err := container.runtime.volumes.Create(nil, container, "", "", nil)
if err != nil {
return err
}
srcPath, err = c.layer()
if err != nil {
return err
}
srcRW = true // RW by default
}
container.Volumes[volPath] = srcPath
container.VolumesRW[volPath] = srcRW
// Create the mountpoint
rootVolPath := path.Join(container.RootfsPath(), volPath)
if err := os.MkdirAll(rootVolPath, 0755); err != nil {
return nil
}
if srcRW {
volList, err := ioutil.ReadDir(rootVolPath)
if err != nil {
return err
}
if len(volList) > 0 {
srcList, err := ioutil.ReadDir(srcPath)
if err != nil {
return err
}
if len(srcList) == 0 {
if err := CopyWithTar(rootVolPath, srcPath); err != nil {
return err
}
}
}
var stat syscall.Stat_t
if err := syscall.Stat(rootVolPath, &stat); err != nil {
return err
}
var srcStat syscall.Stat_t
if err := syscall.Stat(srcPath, &srcStat); err != nil {
return err
}
if stat.Uid != srcStat.Uid || stat.Gid != srcStat.Gid {
if err := os.Chown(srcPath, int(stat.Uid), int(stat.Gid)); err != nil {
return err
}
}
}
}
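The last part of the volume setup above chowns the volume source so that its uid/gid match the mountpoint inside the rootfs; an isolated, Linux-only sketch of that check (the paths in main are placeholders):

	package main

	import (
		"fmt"
		"os"
		"syscall"
	)

	// alignOwnership makes srcPath's owner match mountPoint's owner,
	// mirroring the stat/chown sequence in the hunk above (Linux only,
	// since it relies on syscall.Stat_t).
	func alignOwnership(mountPoint, srcPath string) error {
		var mpStat, srcStat syscall.Stat_t
		if err := syscall.Stat(mountPoint, &mpStat); err != nil {
			return err
		}
		if err := syscall.Stat(srcPath, &srcStat); err != nil {
			return err
		}
		if mpStat.Uid != srcStat.Uid || mpStat.Gid != srcStat.Gid {
			return os.Chown(srcPath, int(mpStat.Uid), int(mpStat.Gid))
		}
		return nil
	}

	func main() {
		// Placeholder paths purely for illustration.
		fmt.Println(alignOwnership("/tmp", "/tmp"))
	}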
if err := container.generateLXCConfig(hostConfig); err != nil {
return err
}
@@ -491,11 +731,13 @@ func (container *Container) Start() error {
"-n", container.ID,
"-f", container.lxcConfigPath(),
"--",
"/sbin/init",
"/.dockerinit",
}
// Networking
params = append(params, "-g", container.network.Gateway.String())
if !container.Config.NetworkDisabled {
params = append(params, "-g", container.network.Gateway.String())
}
// User
if container.Config.User != "" {
@@ -510,7 +752,21 @@ func (container *Container) Start() error {
params = append(params,
"-e", "HOME=/",
"-e", "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"-e", "container=lxc",
"-e", "HOSTNAME="+container.Config.Hostname,
)
if container.Config.WorkingDir != "" {
workingDir := path.Clean(container.Config.WorkingDir)
utils.Debugf("[working dir] working dir is %s", workingDir)
if err := os.MkdirAll(path.Join(container.RootfsPath(), workingDir), 0755); err != nil {
return nil
}
params = append(params,
"-w", workingDir,
)
}
for _, elem := range container.Config.Env {
params = append(params, "-e", elem)
@@ -523,10 +779,10 @@ func (container *Container) Start() error {
container.cmd = exec.Command("lxc-start", params...)
// Setup logging of stdout and stderr to disk
if err := container.runtime.LogToDisk(container.stdout, container.logPath("stdout")); err != nil {
if err := container.runtime.LogToDisk(container.stdout, container.logPath("json"), "stdout"); err != nil {
return err
}
if err := container.runtime.LogToDisk(container.stderr, container.logPath("stderr")); err != nil {
if err := container.runtime.LogToDisk(container.stderr, container.logPath("json"), "stderr"); err != nil {
return err
}
@@ -547,12 +803,14 @@ func (container *Container) Start() error {
container.waitLock = make(chan struct{})
container.ToDisk()
container.SaveHostConfig(hostConfig)
go container.monitor()
return nil
}
func (container *Container) Run() error {
if err := container.Start(); err != nil {
hostConfig := &HostConfig{}
if err := container.Start(hostConfig); err != nil {
return err
}
container.Wait()
@@ -565,7 +823,8 @@ func (container *Container) Output() (output []byte, err error) {
return nil, err
}
defer pipe.Close()
if err := container.Start(); err != nil {
hostConfig := &HostConfig{}
if err := container.Start(hostConfig); err != nil {
return nil, err
}
output, err = ioutil.ReadAll(pipe)
@@ -582,29 +841,67 @@ func (container *Container) StdinPipe() (io.WriteCloser, error) {
func (container *Container) StdoutPipe() (io.ReadCloser, error) {
reader, writer := io.Pipe()
container.stdout.AddWriter(writer)
container.stdout.AddWriter(writer, "")
return utils.NewBufReader(reader), nil
}
func (container *Container) StderrPipe() (io.ReadCloser, error) {
reader, writer := io.Pipe()
container.stderr.AddWriter(writer)
container.stderr.AddWriter(writer, "")
return utils.NewBufReader(reader), nil
}
func (container *Container) allocateNetwork() error {
iface, err := container.runtime.networkManager.Allocate()
if err != nil {
return err
if container.Config.NetworkDisabled {
return nil
}
container.NetworkSettings.PortMapping = make(map[string]string)
for _, spec := range container.Config.PortSpecs {
var iface *NetworkInterface
var err error
if !container.State.Ghost {
iface, err = container.runtime.networkManager.Allocate()
if err != nil {
return err
}
} else {
manager := container.runtime.networkManager
if manager.disabled {
iface = &NetworkInterface{disabled: true}
} else {
iface = &NetworkInterface{
IPNet: net.IPNet{IP: net.ParseIP(container.NetworkSettings.IPAddress), Mask: manager.bridgeNetwork.Mask},
Gateway: manager.bridgeNetwork.IP,
manager: manager,
}
ipNum := ipToInt(iface.IPNet.IP)
manager.ipAllocator.inUse[ipNum] = struct{}{}
}
}
var portSpecs []string
if !container.State.Ghost {
portSpecs = container.Config.PortSpecs
} else {
for backend, frontend := range container.NetworkSettings.PortMapping["Tcp"] {
portSpecs = append(portSpecs, fmt.Sprintf("%s:%s/tcp", frontend, backend))
}
for backend, frontend := range container.NetworkSettings.PortMapping["Udp"] {
portSpecs = append(portSpecs, fmt.Sprintf("%s:%s/udp", frontend, backend))
}
}
container.NetworkSettings.PortMapping = make(map[string]PortMapping)
container.NetworkSettings.PortMapping["Tcp"] = make(PortMapping)
container.NetworkSettings.PortMapping["Udp"] = make(PortMapping)
for _, spec := range portSpecs {
nat, err := iface.AllocatePort(spec)
if err != nil {
iface.Release()
return err
}
container.NetworkSettings.PortMapping[strconv.Itoa(nat.Backend)] = strconv.Itoa(nat.Frontend)
proto := strings.Title(nat.Proto)
backend, frontend := strconv.Itoa(nat.Backend), strconv.Itoa(nat.Frontend)
container.NetworkSettings.PortMapping[proto][backend] = frontend
}
container.network = iface
container.NetworkSettings.Bridge = container.runtime.networkManager.bridgeIface
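For ghost containers the port specs are rebuilt from the saved NetworkSettings instead of Config.PortSpecs; a standalone sketch of that reconstruction step:

	package main

	import "fmt"

	// PortMapping mirrors the map type used in this diff.
	type PortMapping map[string]string

	// rebuildPortSpecs turns the persisted {"Tcp": {backend: frontend}}
	// maps back into "frontend:backend/proto" specs, as the ghost branch
	// above does before re-allocating ports.
	func rebuildPortSpecs(saved map[string]PortMapping) []string {
		var specs []string
		for backend, frontend := range saved["Tcp"] {
			specs = append(specs, fmt.Sprintf("%s:%s/tcp", frontend, backend))
		}
		for backend, frontend := range saved["Udp"] {
			specs = append(specs, fmt.Sprintf("%s:%s/udp", frontend, backend))
		}
		return specs
	}

	func main() {
		saved := map[string]PortMapping{"Tcp": {"80": "49153"}, "Udp": {}}
		fmt.Println(rebuildPortSpecs(saved)) // [49153:80/tcp]
	}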
@@ -615,6 +912,9 @@ func (container *Container) allocateNetwork() error {
}
func (container *Container) releaseNetwork() {
if container.Config.NetworkDisabled {
return
}
container.network.Release()
container.network = nil
container.NetworkSettings = &NetworkSettings{}
@@ -650,7 +950,9 @@ func (container *Container) monitor() {
}
}
utils.Debugf("Process finished")
if container.runtime != nil && container.runtime.srv != nil {
container.runtime.srv.LogEvent("die", container.ShortID(), container.runtime.repositories.ImageName(container.Image))
}
exitCode := -1
if container.cmd != nil {
exitCode = container.cmd.ProcessState.Sys().(syscall.WaitStatus).ExitStatus()
@@ -730,8 +1032,8 @@ func (container *Container) kill() error {
}
func (container *Container) Kill() error {
container.State.lock()
defer container.State.unlock()
container.State.Lock()
defer container.State.Unlock()
if !container.State.Running {
return nil
}
@@ -739,8 +1041,8 @@ func (container *Container) Kill() error {
}
func (container *Container) Stop(seconds int) error {
container.State.lock()
defer container.State.unlock()
container.State.Lock()
defer container.State.Unlock()
if !container.State.Running {
return nil
}
@@ -768,7 +1070,8 @@ func (container *Container) Restart(seconds int) error {
if err := container.Stop(seconds); err != nil {
return err
}
if err := container.Start(); err != nil {
hostConfig := &HostConfig{}
if err := container.Start(hostConfig); err != nil {
return err
}
return nil
@@ -878,6 +1181,10 @@ func (container *Container) ReadLog(name string) (io.Reader, error) {
return os.Open(container.logPath(name))
}
func (container *Container) hostConfigPath() string {
return path.Join(container.root, "hostconfig.json")
}
func (container *Container) jsonPath() string {
return path.Join(container.root, "config.json")
}
@@ -891,22 +1198,6 @@ func (container *Container) RootfsPath() string {
return path.Join(container.root, "rootfs")
}
func (container *Container) GetVolumes() (map[string]string, error) {
ret := make(map[string]string)
for volPath, id := range container.Volumes {
volume, err := container.runtime.volumes.Get(id)
if err != nil {
return nil, err
}
root, err := volume.root()
if err != nil {
return nil, err
}
ret[volPath] = path.Join(root, "layer")
}
return ret, nil
}
func (container *Container) rwPath() string {
return path.Join(container.root, "rw")
}
@@ -940,3 +1231,24 @@ func (container *Container) GetSize() (int64, int64) {
}
return sizeRw, sizeRootfs
}
func (container *Container) Copy(resource string) (Archive, error) {
if err := container.EnsureMounted(); err != nil {
return nil, err
}
var filter []string
basePath := path.Join(container.RootfsPath(), resource)
stat, err := os.Stat(basePath)
if err != nil {
return nil, err
}
if !stat.IsDir() {
d, f := path.Split(basePath)
basePath = d
filter = []string{f}
} else {
filter = []string{path.Base(basePath)}
basePath = path.Dir(basePath)
}
return TarFilter(basePath, Uncompressed, filter)
}
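Copy above archives either a single file (its parent directory filtered to that file) or a directory (its parent filtered to the directory name); a small standalone sketch of just that path arithmetic:

	package main

	import (
		"fmt"
		"path"
	)

	// resolveCopyTarget reproduces the basePath/filter selection in Copy:
	// files are archived from their parent directory with a one-entry
	// filter, directories from their parent filtered to their own name.
	func resolveCopyTarget(rootfs, resource string, isDir bool) (basePath string, filter []string) {
		basePath = path.Join(rootfs, resource)
		if !isDir {
			d, f := path.Split(basePath)
			return d, []string{f}
		}
		return path.Dir(basePath), []string{path.Base(basePath)}
	}

	func main() {
		fmt.Println(resolveCopyTarget("/var/lib/docker/containers/abc/rootfs", "/etc/hosts", false))
		fmt.Println(resolveCopyTarget("/var/lib/docker/containers/abc/rootfs", "/etc", true))
	}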

File diff suppressed because it is too large

View File

@@ -1 +1 @@
# Maintainer wanted! Enroll on #docker@freenode
Kawsar Saiyeed <kawsar.saiyeed@projiris.com>

1
contrib/brew/.gitignore vendored Normal file
View File

@@ -0,0 +1 @@
*.pyc

78
contrib/brew/README.md Normal file
View File

@@ -0,0 +1,78 @@
# docker-brew
docker-brew is a command-line tool used to build the docker standard library.
## Install instructions
1. Install python if it isn't already available on your OS of choice
1. Install the easy_install tool (`sudo apt-get install python-setuptools`
for Debian)
1. Install the python package manager, `pip` (`easy_install pip`)
1. Run the following command: `sudo pip install -r requirements.txt`
1. You should now be able to use the `docker-brew` script as described below.
## Basics
./docker-brew -h
Display usage and help.
./docker-brew
Default build from the default repo/branch. Images will be created under the
`library/` namespace. Does not perform a remote push.
./docker-brew -n mycorp.com -b stable --push git://github.com/mycorp/docker
Will fetch the library definition files in the `stable` branch of the
`git://github.com/mycorp/docker` repository and create images under the
`mycorp.com` namespace (e.g. `mycorp.com/ubuntu`). Created images will then
be pushed to the official docker repository (pending: support for private
repositories)
## Library definition files
The library definition files are plain text files found in the `library/`
subfolder of the docker repository.
### File names
The name of a definition file will determine the name of the image(s) it
creates. For example, the `library/ubuntu` file will create images in the
`<namespace>/ubuntu` repository. If multiple instructions are present in
a single file, all images are expected to be created under a different tag.
### Instruction format
Each line represents a build instruction.
There are different formats that `docker-brew` is able to parse.
<git-url>
git://github.com/dotcloud/hipache
https://github.com/dotcloud/docker.git
The simplest format. `docker-brew` will fetch data from the provided git
repository from the `HEAD` of its `master` branch. The generated image will be
tagged as `latest`. Use of this format is discouraged because there is no
way to ensure stability.
<docker-tag> <git-url>
bleeding-edge git://github.com/dotcloud/docker
unstable https://github.com/dotcloud/docker-redis.git
A more advanced format. `docker-brew` will fetch data from the provided git
repository from the `HEAD` of its `master` branch. The generated image will be
tagged as `<docker-tag>`. Recommended if we always want to provide a snapshot
of the latest development. Again, no way to ensure stability.
<docker-tag> <git-url> T:<git-tag>
2.4.0 git://github.com/dotcloud/docker-redis T:2.4.0
<docker-tag> <git-url> B:<git-branch>
zfs git://github.com/dotcloud/docker B:zfs-support
<docker-tag> <git-url> C:<git-commit-id>
2.2.0 https://github.com/dotcloud/docker-redis.git C:a4bf8923ee4ec566d3ddc212
The most complete format. `docker-brew` will fetch data from the provided git
repository from the provided reference (if it's a branch, brew will fetch its
`HEAD`). The generated image will be tagged as `<docker-tag>`. Recommended whenever
possible.

View File

@@ -0,0 +1 @@
from brew import build_library, DEFAULT_REPOSITORY, DEFAULT_BRANCH

185
contrib/brew/brew/brew.py Normal file
View File

@@ -0,0 +1,185 @@
import os
import logging
from shutil import rmtree
import docker
import git
DEFAULT_REPOSITORY = 'git://github.com/dotcloud/docker'
DEFAULT_BRANCH = 'master'
logger = logging.getLogger(__name__)
logging.basicConfig(format='%(asctime)s %(levelname)s %(message)s',
level='INFO')
client = docker.Client()
processed = {}
processed_folders = []
def build_library(repository=None, branch=None, namespace=None, push=False,
debug=False, prefill=True, registry=None):
dst_folder = None
summary = Summary()
if repository is None:
repository = DEFAULT_REPOSITORY
if branch is None:
branch = DEFAULT_BRANCH
if debug:
logger.setLevel('DEBUG')
if not (repository.startswith('https://') or repository.startswith('git://')):
logger.info('Repository provided assumed to be a local path')
dst_folder = repository
try:
client.version()
except Exception as e:
logger.error('Could not reach the docker daemon. Please make sure it '
'is running.')
logger.warning('Also make sure you have access to the docker UNIX '
'socket (use sudo)')
return
#FIXME: set destination folder and only pull latest changes instead of
# cloning the whole repo every time
if not dst_folder:
logger.info('Cloning docker repo from {0}, branch: {1}'.format(
repository, branch))
try:
rep, dst_folder = git.clone_branch(repository, branch)
except Exception as e:
logger.exception(e)
logger.error('Source repository could not be fetched. Check '
'that the address is correct and the branch exists.')
return
try:
dirlist = os.listdir(os.path.join(dst_folder, 'library'))
except OSError as e:
logger.error('The path provided ({0}) could not be found or didn\'t '
'contain a library/ folder.'.format(dst_folder))
return
for buildfile in dirlist:
if buildfile == 'MAINTAINERS':
continue
f = open(os.path.join(dst_folder, 'library', buildfile))
linecnt = 0
for line in f:
linecnt = linecnt + 1
logger.debug('{0} ---> {1}'.format(buildfile, line))
args = line.split()
try:
if len(args) > 3:
raise RuntimeError('Incorrect line format, '
'please refer to the docs')
url = None
ref = 'refs/heads/master'
tag = None
if len(args) == 1: # Just a URL, simple mode
url = args[0]
elif len(args) == 2 or len(args) == 3: # docker-tag url
url = args[1]
tag = args[0]
if len(args) == 3: # docker-tag url B:branch or T:tag
ref = None
if args[2].startswith('B:'):
ref = 'refs/heads/' + args[2][2:]
elif args[2].startswith('T:'):
ref = 'refs/tags/' + args[2][2:]
elif args[2].startswith('C:'):
ref = args[2][2:]
else:
raise RuntimeError('Incorrect line format, '
'please refer to the docs')
if prefill:
logger.debug('Pulling {0} from official repository (cache '
'fill)'.format(buildfile))
client.pull(buildfile)
img = build_repo(url, ref, buildfile, tag, namespace, push,
registry)
summary.add_success(buildfile, (linecnt, line), img)
processed['{0}@{1}'.format(url, ref)] = img
except Exception as e:
logger.exception(e)
summary.add_exception(buildfile, (linecnt, line), e)
f.close()
if dst_folder != repository:
rmtree(dst_folder, True)
for d in processed_folders:
rmtree(d, True)
summary.print_summary(logger)
def build_repo(repository, ref, docker_repo, docker_tag, namespace, push, registry):
docker_repo = '{0}/{1}'.format(namespace or 'library', docker_repo)
img_id = None
dst_folder = None
if '{0}@{1}'.format(repository, ref) not in processed.keys():
logger.info('Cloning {0} (ref: {1})'.format(repository, ref))
if repository not in processed:
rep, dst_folder = git.clone(repository, ref)
processed[repository] = rep
processed_folders.append(dst_folder)
else:
dst_folder = git.checkout(processed[repository], ref)
if not 'Dockerfile' in os.listdir(dst_folder):
raise RuntimeError('Dockerfile not found in cloned repository')
logger.info('Building using dockerfile...')
img_id, logs = client.build(path=dst_folder, quiet=True)
else:
img_id = processed['{0}@{1}'.format(repository, ref)]
logger.info('Committing to {0}:{1}'.format(docker_repo,
docker_tag or 'latest'))
client.tag(img_id, docker_repo, docker_tag)
if push:
logger.info('Pushing result to registry {0}'.format(
registry or "default"))
if registry is not None:
docker_repo = '{0}/{1}'.format(registry, docker_repo)
logger.info('Also tagging {0}'.format(docker_repo))
client.tag(img_id, docker_repo, docker_tag)
client.push(docker_repo)
return img_id
class Summary(object):
def __init__(self):
self._summary = {}
self._has_exc = False
def _add_data(self, image, linestr, data):
if image not in self._summary:
self._summary[image] = { linestr: data }
else:
self._summary[image][linestr] = data
def add_exception(self, image, line, exc):
lineno, linestr = line
self._add_data(image, linestr, { 'line': lineno, 'exc': str(exc) })
self._has_exc = True
def add_success(self, image, line, img_id):
lineno, linestr = line
self._add_data(image, linestr, { 'line': lineno, 'id': img_id })
def print_summary(self, logger=None):
linesep = ''.center(61, '-') + '\n'
s = 'BREW BUILD SUMMARY\n' + linesep
success = 'OVERALL SUCCESS: {}\n'.format(not self._has_exc)
details = linesep
for image, lines in self._summary.iteritems():
details = details + '{}\n{}'.format(image, linesep)
for linestr, data in lines.iteritems():
details = details + '{0:2} | {1} | {2:50}\n'.format(
data['line'],
'KO' if 'exc' in data else 'OK',
data['exc'] if 'exc' in data else data['id']
)
details = details + linesep
if logger:
logger.info(s + success + details)
else:
print s, success, details

63
contrib/brew/brew/git.py Normal file
View File

@@ -0,0 +1,63 @@
import tempfile
import logging
from dulwich import index
from dulwich.client import get_transport_and_path
from dulwich.repo import Repo
logger = logging.getLogger(__name__)
def clone_branch(repo_url, branch="master", folder=None):
return clone(repo_url, 'refs/heads/' + branch, folder)
def clone_tag(repo_url, tag, folder=None):
return clone(repo_url, 'refs/tags/' + tag, folder)
def checkout(rep, ref=None):
is_commit = False
if ref is None:
ref = 'refs/heads/master'
elif not ref.startswith('refs/'):
is_commit = True
if is_commit:
rep['HEAD'] = rep.commit(ref)
else:
rep['HEAD'] = rep.refs[ref]
indexfile = rep.index_path()
tree = rep["HEAD"].tree
index.build_index_from_tree(rep.path, indexfile, rep.object_store, tree)
return rep.path
def clone(repo_url, ref=None, folder=None):
is_commit = False
if ref is None:
ref = 'refs/heads/master'
elif not ref.startswith('refs/'):
is_commit = True
logger.debug("clone repo_url={0}, ref={1}".format(repo_url, ref))
if folder is None:
folder = tempfile.mkdtemp()
logger.debug("folder = {0}".format(folder))
rep = Repo.init(folder)
client, relative_path = get_transport_and_path(repo_url)
logger.debug("client={0}".format(client))
remote_refs = client.fetch(relative_path, rep)
for k, v in remote_refs.iteritems():
try:
rep.refs.add_if_new(k, v)
except:
pass
if is_commit:
rep['HEAD'] = rep.commit(ref)
else:
rep['HEAD'] = remote_refs[ref]
indexfile = rep.index_path()
tree = rep["HEAD"].tree
index.build_index_from_tree(rep.path, indexfile, rep.object_store, tree)
logger.debug("done")
return rep, folder

35
contrib/brew/docker-brew Executable file
View File

@@ -0,0 +1,35 @@
#!/usr/bin/env python
import argparse
import sys
try:
import brew
except ImportError as e:
print str(e)
print 'Please install the required dependencies first'
print 'sudo pip install -r requirements.txt'
sys.exit(1)
if __name__ == '__main__':
parser = argparse.ArgumentParser('Build the docker standard library')
parser.add_argument('--push', action='store_true', default=False,
help='Push generated repositories')
parser.add_argument('--debug', default=False, action='store_true',
help='Enable debugging output')
parser.add_argument('--noprefill', default=True, action='store_false',
dest='prefill', help='Disable cache prefill')
parser.add_argument('-n', metavar='NAMESPACE', default='library',
help='Namespace used for generated repositories.'
' Default is library')
parser.add_argument('-b', metavar='BRANCH', default=brew.DEFAULT_BRANCH,
help='Branch in the repository where the library definition'
' files will be fetched. Default is ' + brew.DEFAULT_BRANCH)
parser.add_argument('repository', default=brew.DEFAULT_REPOSITORY,
nargs='?', help='git repository containing the library definition'
' files. Default is ' + brew.DEFAULT_REPOSITORY)
parser.add_argument('--reg', default=None, help='Registry address to'
' push build results to. Also sets push to true.')
args = parser.parse_args()
brew.build_library(args.repository, args.b, args.n,
args.push or args.reg is not None, args.debug, args.prefill, args.reg)

View File

@@ -0,0 +1,2 @@
dulwich==0.9.0
-e git://github.com/dotcloud/docker-py.git#egg=docker-py

22
contrib/brew/setup.py Normal file
View File

@@ -0,0 +1,22 @@
#!/usr/bin/env python
import os
from setuptools import setup
ROOT_DIR = os.path.dirname(__file__)
SOURCE_DIR = os.path.join(ROOT_DIR)
test_requirements = []
setup(
name="docker-brew",
version='0.0.1',
description="-",
packages=['dockerbrew'],
install_requires=['dulwich', 'docker'] + test_requirements,
zip_safe=False,
classifiers=['Development Status :: 3 - Alpha',
'Environment :: Other Environment',
'Intended Audience :: Developers',
'Operating System :: OS Independent',
'Programming Language :: Python',
'Topic :: Utilities'],
)

543
contrib/docker.bash Normal file
View File

@@ -0,0 +1,543 @@
#!bash
#
# bash completion file for core docker commands
#
# This script provides completion of:
# - commands and their options
# - container ids
# - image repos and tags
# - filepaths
#
# To enable the completions either:
# - place this file in /etc/bash_completion.d
# or
# - copy this file and add the line below to your .bashrc after
# bash completion features are loaded
# . docker.bash
#
# Note:
# Currently, the completions will not work if the docker daemon is not
# bound to the default communication port/socket
# If the docker daemon is using a unix socket for communication your user
# must have access to the socket for the completions to function correctly
__docker_containers_all()
{
local containers
containers="$( docker ps -a -q )"
COMPREPLY=( $( compgen -W "$containers" -- "$cur" ) )
}
__docker_containers_running()
{
local containers
containers="$( docker ps -q )"
COMPREPLY=( $( compgen -W "$containers" -- "$cur" ) )
}
__docker_containers_stopped()
{
local containers
containers="$( comm -13 <(docker ps -q | sort -u) <(docker ps -a -q | sort -u) )"
COMPREPLY=( $( compgen -W "$containers" -- "$cur" ) )
}
__docker_image_repos()
{
local repos
repos="$( docker images | awk 'NR>1{print $1}' )"
COMPREPLY=( $( compgen -W "$repos" -- "$cur" ) )
}
__docker_images()
{
local images
images="$( docker images | awk 'NR>1{print $1":"$2}' )"
COMPREPLY=( $( compgen -W "$images" -- "$cur" ) )
__ltrim_colon_completions "$cur"
}
__docker_image_repos_and_tags()
{
local repos images
repos="$( docker images | awk 'NR>1{print $1}' )"
images="$( docker images | awk 'NR>1{print $1":"$2}' )"
COMPREPLY=( $( compgen -W "$repos $images" -- "$cur" ) )
__ltrim_colon_completions "$cur"
}
__docker_containers_and_images()
{
local containers images
containers="$( docker ps -a -q )"
images="$( docker images | awk 'NR>1{print $1":"$2}' )"
COMPREPLY=( $( compgen -W "$images $containers" -- "$cur" ) )
__ltrim_colon_completions "$cur"
}
_docker_docker()
{
case "$prev" in
-H)
return
;;
*)
;;
esac
case "$cur" in
-*)
COMPREPLY=( $( compgen -W "-H" -- "$cur" ) )
;;
*)
COMPREPLY=( $( compgen -W "$commands help" -- "$cur" ) )
;;
esac
}
_docker_attach()
{
if [ $cpos -eq $cword ]; then
__docker_containers_running
fi
}
_docker_build()
{
case "$prev" in
-t)
return
;;
*)
;;
esac
case "$cur" in
-*)
COMPREPLY=( $( compgen -W "-no-cache -t -q -rm" -- "$cur" ) )
;;
*)
_filedir
;;
esac
}
_docker_commit()
{
case "$prev" in
-author|-m|-run)
return
;;
*)
;;
esac
case "$cur" in
-*)
COMPREPLY=( $( compgen -W "-author -m -run" -- "$cur" ) )
;;
*)
local counter=$cpos
while [ $counter -le $cword ]; do
case "${words[$counter]}" in
-author|-m|-run)
(( counter++ ))
;;
-*)
;;
*)
break
;;
esac
(( counter++ ))
done
if [ $counter -eq $cword ]; then
__docker_containers_all
fi
;;
esac
}
_docker_cp()
{
if [ $cpos -eq $cword ]; then
__docker_containers_all
else
_filedir
fi
}
_docker_diff()
{
if [ $cpos -eq $cword ]; then
__docker_containers_all
fi
}
_docker_events()
{
case "$prev" in
-since)
return
;;
*)
;;
esac
case "$cur" in
-*)
COMPREPLY=( $( compgen -W "-since" -- "$cur" ) )
;;
*)
;;
esac
}
_docker_export()
{
if [ $cpos -eq $cword ]; then
__docker_containers_all
fi
}
_docker_help()
{
if [ $cpos -eq $cword ]; then
COMPREPLY=( $( compgen -W "$commands" -- "$cur" ) )
fi
}
_docker_history()
{
if [ $cpos -eq $cword ]; then
__docker_image_repos_and_tags
fi
}
_docker_images()
{
case "$cur" in
-*)
COMPREPLY=( $( compgen -W "-a -notrunc -q -viz" -- "$cur" ) )
;;
*)
local counter=$cpos
while [ $counter -le $cword ]; do
case "${words[$counter]}" in
-*)
;;
*)
break
;;
esac
(( counter++ ))
done
if [ $counter -eq $cword ]; then
__docker_image_repos
fi
;;
esac
}
_docker_import()
{
return
}
_docker_info()
{
return
}
_docker_insert()
{
if [ $cpos -eq $cword ]; then
__docker_image_repos_and_tags
fi
}
_docker_inspect()
{
__docker_containers_and_images
}
_docker_kill()
{
__docker_containers_running
}
_docker_login()
{
case "$prev" in
-e|-p|-u)
return
;;
*)
;;
esac
case "$cur" in
-*)
COMPREPLY=( $( compgen -W "-e -p -u" -- "$cur" ) )
;;
*)
;;
esac
}
_docker_logs()
{
if [ $cpos -eq $cword ]; then
__docker_containers_all
fi
}
_docker_port()
{
if [ $cpos -eq $cword ]; then
__docker_containers_all
fi
}
_docker_ps()
{
case "$prev" in
-beforeId|-n|-sinceId)
return
;;
*)
;;
esac
case "$cur" in
-*)
COMPREPLY=( $( compgen -W "-a -beforeId -l -n -notrunc -q -s -sinceId" -- "$cur" ) )
;;
*)
;;
esac
}
_docker_pull()
{
case "$prev" in
-t)
return
;;
*)
;;
esac
case "$cur" in
-*)
COMPREPLY=( $( compgen -W "-t" -- "$cur" ) )
;;
*)
;;
esac
}
_docker_push()
{
return
}
_docker_restart()
{
case "$prev" in
-t)
return
;;
*)
;;
esac
case "$cur" in
-*)
COMPREPLY=( $( compgen -W "-t" -- "$cur" ) )
;;
*)
__docker_containers_all
;;
esac
}
_docker_rm()
{
case "$cur" in
-*)
COMPREPLY=( $( compgen -W "-v" -- "$cur" ) )
;;
*)
__docker_containers_stopped
;;
esac
}
_docker_rmi()
{
__docker_image_repos_and_tags
}
_docker_run()
{
case "$prev" in
-cidfile)
_filedir
;;
-volumes-from)
__docker_containers_all
;;
-a|-c|-dns|-e|-entrypoint|-h|-lxc-conf|-m|-p|-u|-v|-w)
return
;;
*)
;;
esac
case "$cur" in
-*)
COMPREPLY=( $( compgen -W "-a -c -cidfile -d -dns -e -entrypoint -h -i -lxc-conf -m -n -p -privileged -t -u -v -volumes-from -w" -- "$cur" ) )
;;
*)
local counter=$cpos
while [ $counter -le $cword ]; do
case "${words[$counter]}" in
-a|-c|-cidfile|-dns|-e|-entrypoint|-h|-lxc-conf|-m|-p|-u|-v|-volumes-from|-w)
(( counter++ ))
;;
-*)
;;
*)
break
;;
esac
(( counter++ ))
done
if [ $counter -eq $cword ]; then
__docker_image_repos_and_tags
fi
;;
esac
}
_docker_search()
{
COMPREPLY=( $( compgen -W "-notrunc" -- "$cur" ) )
}
_docker_start()
{
__docker_containers_stopped
}
_docker_stop()
{
case "$prev" in
-t)
return
;;
*)
;;
esac
case "$cur" in
-*)
COMPREPLY=( $( compgen -W "-t" -- "$cur" ) )
;;
*)
__docker_containers_running
;;
esac
}
_docker_tag()
{
COMPREPLY=( $( compgen -W "-f" -- "$cur" ) )
}
_docker_top()
{
if [ $cpos -eq $cword ]; then
__docker_containers_running
fi
}
_docker_version()
{
return
}
_docker_wait()
{
__docker_containers_all
}
_docker()
{
local cur prev words cword command="docker" counter=1 word cpos
local commands="
attach
build
commit
cp
diff
events
export
history
images
import
info
insert
inspect
kill
login
logs
port
ps
pull
push
restart
rm
rmi
run
search
start
stop
tag
top
version
wait
"
COMPREPLY=()
_get_comp_words_by_ref -n : cur prev words cword
while [ $counter -lt $cword ]; do
word="${words[$counter]}"
case "$word" in
-H)
(( counter++ ))
;;
-*)
;;
*)
command="$word"
cpos=$counter
(( cpos++ ))
break
;;
esac
(( counter++ ))
done
local completions_func=_docker_${command}
declare -F $completions_func >/dev/null && $completions_func
return 0
}
complete -F _docker docker

View File

@@ -3,7 +3,7 @@
# Original version by Jeff Lindsay <progrium@gmail.com>
# Revamped by Jerome Petazzoni <jerome@dotcloud.com>
#
# This script's canonical location is http://get.docker.io/; to update it, run:
# This script's canonical location is https://get.docker.io/; to update it, run:
# s3cmd put -m text/x-shellscript -P install.sh s3://get.docker.io/index
echo "Ensuring basic dependencies are installed..."
@@ -35,17 +35,23 @@ else
fi
fi
echo "Downloading docker binary and uncompressing into /usr/local/bin..."
curl -s http://get.docker.io/builds/$(uname -s)/$(uname -m)/docker-latest.tgz |
tar -C /usr/local/bin --strip-components=1 -zxf- \
docker-latest/docker
echo "Downloading docker binary to /usr/local/bin..."
curl -s https://get.docker.io/builds/$(uname -s)/$(uname -m)/docker-latest \
> /usr/local/bin/docker
chmod +x /usr/local/bin/docker
if [ -f /etc/init/dockerd.conf ]
then
echo "Upstart script already exists."
else
echo "Creating /etc/init/dockerd.conf..."
echo "exec env LANG=\"en_US.UTF-8\" /usr/local/bin/docker -d" > /etc/init/dockerd.conf
cat >/etc/init/dockerd.conf <<EOF
description "Docker daemon"
start on filesystem and started lxc-net
stop on runlevel [!2345]
respawn
exec /usr/local/bin/docker -d
EOF
fi
echo "Starting dockerd..."

49
contrib/mkimage-unittest.sh Executable file
View File

@@ -0,0 +1,49 @@
#!/bin/bash
# Generate a very minimal filesystem based on busybox-static,
# and load it into the local docker under the name "docker-ut".
missing_pkg() {
echo "Sorry, I could not locate $1"
echo "Try 'apt-get install ${2:-$1}'?"
exit 1
}
BUSYBOX=$(which busybox)
[ "$BUSYBOX" ] || missing_pkg busybox busybox-static
SOCAT=$(which socat)
[ "$SOCAT" ] || missing_pkg socat
shopt -s extglob
set -ex
ROOTFS=`mktemp -d /tmp/rootfs-busybox.XXXXXXXXXX`
trap "rm -rf $ROOTFS" INT QUIT TERM
cd $ROOTFS
mkdir bin etc dev dev/pts lib proc sys tmp
touch etc/resolv.conf
cp /etc/nsswitch.conf etc/nsswitch.conf
echo root:x:0:0:root:/:/bin/sh > etc/passwd
echo daemon:x:1:1:daemon:/usr/sbin:/bin/sh >> etc/passwd
echo root:x:0: > etc/group
echo daemon:x:1: >> etc/group
ln -s lib lib64
ln -s bin sbin
cp $BUSYBOX $SOCAT bin
for X in $(busybox --list)
do
ln -s busybox bin/$X
done
rm bin/init
ln bin/busybox bin/init
cp -P /lib/x86_64-linux-gnu/lib{pthread*,c*(-*),dl*(-*),nsl*(-*),nss_*,util*(-*),wrap,z}.so* lib
cp /lib/x86_64-linux-gnu/ld-linux-x86-64.so.2 lib
cp -P /usr/lib/x86_64-linux-gnu/lib{crypto,ssl}.so* lib
for X in console null ptmx random stdin stdout stderr tty urandom zero
do
cp -a /dev/$X dev
done
chmod 0755 $ROOTFS # See #486
tar -cf- . | docker import - docker-ut
docker run -i -u root docker-ut /bin/echo Success.
rm -rf $ROOTFS

View File

@@ -16,27 +16,34 @@ import (
var (
GITCOMMIT string
VERSION string
)
func main() {
if utils.SelfPath() == "/sbin/init" {
if selfPath := utils.SelfPath(); selfPath == "/sbin/init" || selfPath == "/.dockerinit" {
// Running in init mode
docker.SysInit()
return
}
// FIXME: Switch d and D ? (to be more sshd like)
flVersion := flag.Bool("v", false, "Print version information and quit")
flDaemon := flag.Bool("d", false, "Daemon mode")
flDebug := flag.Bool("D", false, "Debug mode")
flAutoRestart := flag.Bool("r", false, "Restart previously running containers")
bridgeName := flag.String("b", "", "Attach containers to a pre-existing network bridge")
bridgeName := flag.String("b", "", "Attach containers to a pre-existing network bridge. Use 'none' to disable container networking")
pidfile := flag.String("p", "/var/run/docker.pid", "File containing process PID")
flGraphPath := flag.String("g", "/var/lib/docker", "Path to graph storage base dir.")
flEnableCors := flag.Bool("api-enable-cors", false, "Enable CORS requests in the remote api.")
flDns := flag.String("dns", "", "Set custom dns servers")
flHosts := docker.ListOpts{fmt.Sprintf("tcp://%s:%d", docker.DEFAULTHTTPHOST, docker.DEFAULTHTTPPORT)}
flHosts := docker.ListOpts{fmt.Sprintf("unix://%s", docker.DEFAULTUNIXSOCKET)}
flag.Var(&flHosts, "H", "tcp://host:port to bind/connect to or unix://path/to/socket to use")
flag.Parse()
if *flVersion {
showVersion()
return
}
if len(flHosts) > 1 {
flHosts = flHosts[1:len(flHosts)] //trick to display a nice defaul value in the usage
flHosts = flHosts[1:] //trick to display a nice default value in the usage
}
for i, flHost := range flHosts {
flHosts[i] = utils.ParseHost(docker.DEFAULTHTTPHOST, docker.DEFAULTHTTPPORT, flHost)
@@ -51,12 +58,13 @@ func main() {
os.Setenv("DEBUG", "1")
}
docker.GITCOMMIT = GITCOMMIT
docker.VERSION = VERSION
if *flDaemon {
if flag.NArg() != 0 {
flag.Usage()
return
}
if err := daemon(*pidfile, flHosts, *flAutoRestart, *flEnableCors, *flDns); err != nil {
if err := daemon(*pidfile, *flGraphPath, flHosts, *flAutoRestart, *flEnableCors, *flDns); err != nil {
log.Fatal(err)
os.Exit(-1)
}
@@ -67,12 +75,19 @@ func main() {
}
protoAddrParts := strings.SplitN(flHosts[0], "://", 2)
if err := docker.ParseCommands(protoAddrParts[0], protoAddrParts[1], flag.Args()...); err != nil {
if sterr, ok := err.(*utils.StatusError); ok {
os.Exit(sterr.Status)
}
log.Fatal(err)
os.Exit(-1)
}
}
}
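The client path above splits the first -H value on "://" before handing it to ParseCommands; a tiny standalone illustration of that split for the two common forms:

	package main

	import (
		"fmt"
		"strings"
	)

	// splitProtoAddr mirrors the SplitN call in main: the part before
	// "://" selects the transport, the rest is the address.
	func splitProtoAddr(host string) (proto, addr string) {
		parts := strings.SplitN(host, "://", 2)
		return parts[0], parts[1]
	}

	func main() {
		fmt.Println(splitProtoAddr("unix:///var/run/docker.sock")) // unix /var/run/docker.sock
		fmt.Println(splitProtoAddr("tcp://127.0.0.1:4243"))        // tcp 127.0.0.1:4243
	}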
func showVersion() {
fmt.Printf("Docker version %s, build %s\n", VERSION, GITCOMMIT)
}
func createPidFile(pidfile string) error {
if pidString, err := ioutil.ReadFile(pidfile); err == nil {
pid, err := strconv.Atoi(string(pidString))
@@ -100,7 +115,7 @@ func removePidFile(pidfile string) {
}
}
func daemon(pidfile string, protoAddrs []string, autoRestart, enableCors bool, flDns string) error {
func daemon(pidfile string, flGraphPath string, protoAddrs []string, autoRestart, enableCors bool, flDns string) error {
if err := createPidFile(pidfile); err != nil {
log.Fatal(err)
}
@@ -118,7 +133,7 @@ func daemon(pidfile string, protoAddrs []string, autoRestart, enableCors bool, f
if flDns != "" {
dns = []string{flDns}
}
server, err := docker.NewServer(autoRestart, enableCors, dns)
server, err := docker.NewServer(flGraphPath, autoRestart, enableCors, dns)
if err != nil {
return err
}

15
docs/Dockerfile Normal file
View File

@@ -0,0 +1,15 @@
from ubuntu:12.04
maintainer Nick Stinemates
run apt-get update
run apt-get install -y python-setuptools make
run easy_install pip
add . /docs
run pip install -r /docs/requirements.txt
run cd /docs; make docs
expose 8000
workdir /docs/_build/html
entrypoint ["python", "-m", "SimpleHTTPServer"]

View File

@@ -1,2 +1,2 @@
Andy Rothfusz <andy@dotcloud.com>
Ken Cochrane <ken@dotcloud.com>
Andy Rothfusz <andy@dotcloud.com> (@metalivedev)
Ken Cochrane <ken@dotcloud.com> (@kencochrane)

View File

@@ -1,14 +1,12 @@
Docker documentation and website
================================
Docker Documentation
====================
Documentation
-------------
This is your definite place to contribute to the docker documentation. The documentation is generated from the
.rst files under sources.
The folder also contains the other files to create the http://docker.io website, but you can generally ignore
most of those.
This is your definitive place to contribute to the docker documentation. After each push to master the documentation
is automatically generated and made available on [docs.docker.io](http://docs.docker.io).
Each of the .rst files under sources reflects a page of the documentation.
Installation
------------
@@ -25,26 +23,22 @@ Usage
* Change the `.rst` files with your favorite editor to your liking.
* Run `make docs` to clean up old files and generate new ones.
* Your static website can now be found in the `_build` directory.
* To preview what you have generated run `make server` and open <http://localhost:8000/> in your favorite browser.
* To preview what you have generated run `make server` and open http://localhost:8000/ in your favorite browser.
Working using GitHub's file editor
----------------------------------
Alternatively, for small changes and typo's you might want to use GitHub's built in file editor. It allows
you to preview your changes right online. Just be carefull not to create many commits.
you to preview your changes right online. Just be careful not to create many commits.
Images
------
When you need to add images, try to make them as small as possible (e.g. as gif).
Notes
-----
* The index.html and gettingstarted.html files are copied from the source dir to the output dir without modification.
So changes to those pages should be made directly in html
* For the template the css is compiled from less. When changes are needed they can be compiled using
lessc ``lessc main.less`` or watched using watch-lessc ``watch-lessc -i main.less -o main.css``
Guides on using sphinx
----------------------
* To make links to certain pages create a link target like so:
@@ -75,3 +69,12 @@ Guides on using sphinx
* Code examples
Start without $, so it's easy to copy and paste.
Manpages
--------
* To make the manpages, simply run 'make man'. Please note there is a bug in sphinx 1.1.3 which makes this fail.
Upgrade to the latest version of sphinx.
* Then preview the manpage by running `man _build/man/docker.1`, where _build/man/docker.1 is the path to the generated
man file
* The manpages are also autogenerated by our hosted readthedocs here: http://docs-docker.dotcloud.com/projects/docker/downloads/

View File

@@ -1 +1 @@
Solomon Hykes <solomon@dotcloud.com>
Solomon Hykes <solomon@dotcloud.com> (@shykes)

View File

@@ -2,6 +2,9 @@
:description: API Documentation for Docker
:keywords: API, Docker, rcli, REST, documentation
.. COMMENT use http://pythonhosted.org/sphinxcontrib-httpdomain/ to
.. document the REST API.
=================
Docker Remote API
=================
@@ -12,36 +15,102 @@ Docker Remote API
=====================
- The Remote API is replacing rcli
- Default port in the docker daemon is 4243
- The API tends to be REST, but for some complex commands, like attach or pull, the HTTP connection is hijacked to transport stdout stdin and stderr
- Since API version 1.2, the auth configuration is now handled client side, so the client has to send the authConfig as POST in /images/(name)/push
- By default the Docker daemon listens on unix:///var/run/docker.sock and the client must have root access to interact with the daemon
- If a group named *docker* exists on your system, docker will apply ownership of the socket to the group
- The API tends to be REST, but for some complex commands, like attach
or pull, the HTTP connection is hijacked to transport stdout stdin
and stderr
- Since API version 1.2, the auth configuration is now handled client
side, so the client has to send the authConfig as POST in
/images/(name)/push
2. Versions
===========
The current verson of the API is 1.3
Calling /images/<name>/insert is the same as calling /v1.3/images/<name>/insert
You can still call an old version of the api using /v1.0/images/<name>/insert
The current version of the API is 1.5
:doc:`docker_remote_api_v1.3`
Calling /images/<name>/insert is the same as calling
/v1.5/images/<name>/insert
You can still call an old version of the api using
/v1.0/images/<name>/insert
:doc:`docker_remote_api_v1.5`
*****************************
What's new
----------
.. http:post:: /images/create
**New!** You can now pass registry credentials (via an AuthConfig object)
through the `X-Registry-Auth` header
.. http:post:: /images/(name)/push
**New!** The AuthConfig object now needs to be passed through
the `X-Registry-Auth` header
.. http:get:: /containers/json
**New!** The format of the `Ports` entry has been changed to a list of
dicts each containing `PublicPort`, `PrivatePort` and `Type` describing a
port mapping.
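
As a quick illustration, here is a minimal Go sketch of passing credentials this way. It assumes the daemon was started with ``-H tcp://127.0.0.1:4243`` and that the AuthConfig is the usual username/password/email JSON object; all names in it are placeholders.

.. code-block:: go

    package main

    import (
        "encoding/base64"
        "encoding/json"
        "log"
        "net/http"
    )

    func main() {
        // Placeholder credentials; only needed for private repositories.
        auth, _ := json.Marshal(map[string]string{
            "username": "janedoe",
            "password": "secret",
            "email":    "jane@example.com",
        })

        req, err := http.NewRequest("POST",
            "http://127.0.0.1:4243/v1.5/images/create?fromImage=ubuntu", nil)
        if err != nil {
            log.Fatal(err)
        }
        // The AuthConfig travels base64-encoded in the X-Registry-Auth header.
        req.Header.Set("X-Registry-Auth", base64.StdEncoding.EncodeToString(auth))

        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        log.Println("status:", resp.Status)
    }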
:doc:`docker_remote_api_v1.4`
*****************************
What's new
----------
.. http:post:: /images/create
**New!** When pulling a repo, all images are now downloaded in parallel.
.. http:get:: /containers/(id)/top
**New!** You can now use ps args with docker top, like `docker top <container_id> aux`
.. http:get:: /events:
**New!** Image's name added in the events
:doc:`docker_remote_api_v1.3`
*****************************
docker v0.5.0 51f6c4a_
What's new
----------
.. http:get:: /containers/(id)/top
List the processes running inside a container.
.. http:get:: /events:
**New!** Monitor docker's events via streaming or via polling
Builder (/build):
- Simplify the upload of the build context
- Simply stream a tarball instead of multipart upload with 4 intermediary buffers
- Simply stream a tarball instead of multipart upload with 4
intermediary buffers
- Simpler, less memory usage, less disk usage and faster
.. Note::
The /build improvements are not reverse-compatible. Pre 1.3 clients will break on /build.
.. Warning::
The /build improvements are not reverse-compatible. Pre 1.3 clients
will break on /build.
List containers (/containers/json):
- You can use size=1 to get the size of the containers
Start containers (/containers/<id>/start):
- You can now pass host-specific configuration (e.g. bind mounts) in
the POST body for start calls
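
For example, a minimal Go sketch of such a start call, assuming the daemon listens on ``tcp://127.0.0.1:4243`` and using a placeholder container id:

.. code-block:: go

    package main

    import (
        "bytes"
        "log"
        "net/http"
    )

    func main() {
        id := "e90e34656806" // placeholder; use a real container id

        // Since API v1.3 host-specific settings such as bind mounts go
        // in the body of the start call.
        body := bytes.NewBufferString(`{"Binds": ["/tmp:/tmp"]}`)

        resp, err := http.Post(
            "http://127.0.0.1:4243/v1.3/containers/"+id+"/start",
            "application/json", body)
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        log.Println("status:", resp.Status) // 204 No Content on success
    }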
:doc:`docker_remote_api_v1.2`
*****************************
@@ -52,14 +121,25 @@ What's new
----------
The auth configuration is now handled by the client.
The client should send it's authConfig as POST on each call of /images/(name)/push
.. http:get:: /auth is now deprecated
.. http:post:: /auth only checks the configuration but doesn't store it on the server
The client should send its authConfig as POST on each call of
/images/(name)/push
Deleting an image is now improved, will only untag the image if it has chidrens and remove all the untagged parents if has any.
.. http:get:: /auth
.. http:post:: /images/<name>/delete now returns a JSON with the list of images deleted/untagged
**Deprecated.**
.. http:post:: /auth
Only checks the configuration but doesn't store it on the server
Deleting an image is now improved: it will only untag the image if it
has children, and remove all the untagged parents if it has any.
.. http:post:: /images/<name>/delete
Now returns a JSON structure with the list of images
deleted/untagged.
:doc:`docker_remote_api_v1.1`
@@ -74,7 +154,7 @@ What's new
.. http:post:: /images/(name)/insert
.. http:post:: /images/(name)/push
Uses json stream instead of HTML hijack, it looks like this:
Uses json stream instead of HTML hijack, it looks like this:
.. sourcecode:: http
@@ -101,12 +181,13 @@ Initial version
.. _a8ae398: https://github.com/dotcloud/docker/commit/a8ae398bf52e97148ee7bd0d5868de2e15bd297f
.. _8d73740: https://github.com/dotcloud/docker/commit/8d73740343778651c09160cde9661f5f387b36f4
.. _2e7649b: https://github.com/dotcloud/docker/commit/2e7649beda7c820793bd46766cbc2cfeace7b168
.. _51f6c4a: https://github.com/dotcloud/docker/commit/51f6c4a7372450d164c61e0054daf0223ddbd909
==================================
Docker Remote API Client Libraries
==================================
These libraries have been not tested by the Docker Maintainers for
These libraries have not been tested by the Docker Maintainers for
compatibility. Please file issues with the library owners. If you
find more library implementations, please list them in Docker doc bugs
and we will add the libraries here.
@@ -116,12 +197,21 @@ and we will add the libraries here.
+======================+================+============================================+
| Python | docker-py | https://github.com/dotcloud/docker-py |
+----------------------+----------------+--------------------------------------------+
| Ruby | docker-ruby | https://github.com/ActiveState/docker-ruby |
+----------------------+----------------+--------------------------------------------+
| Ruby | docker-client | https://github.com/geku/docker-client |
+----------------------+----------------+--------------------------------------------+
| Ruby | docker-api | https://github.com/swipely/docker-api |
+----------------------+----------------+--------------------------------------------+
| Javascript (NodeJS) | docker.io | https://github.com/appersonlabs/docker.io |
| | | Install via NPM: `npm install docker.io` |
+----------------------+----------------+--------------------------------------------+
| Javascript | docker-js | https://github.com/dgoujard/docker-js |
+----------------------+----------------+--------------------------------------------+
| Javascript (Angular) | dockerui | https://github.com/crosbymichael/dockerui |
| **WebUI** | | |
+----------------------+----------------+--------------------------------------------+
| Java | docker-java | https://github.com/kpelykh/docker-java |
+----------------------+----------------+--------------------------------------------+
| Erlang | erldocker | https://github.com/proger/erldocker |
+----------------------+----------------+--------------------------------------------+
| Go | go-dockerclient| https://github.com/fsouza/go-dockerclient |
+----------------------+----------------+--------------------------------------------+
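
If your language of choice is not covered above, keep in mind the Remote API is plain HTTP. Here is a minimal sketch using only the Go standard library, talking to the default unix socket; the socket path is an assumption, adjust it if your daemon listens elsewhere.

.. code-block:: go

    package main

    import (
        "io"
        "log"
        "net"
        "net/http"
        "os"
    )

    func main() {
        // Speak HTTP over the daemon's unix socket (the default listener).
        tr := &http.Transport{
            Dial: func(network, addr string) (net.Conn, error) {
                return net.Dial("unix", "/var/run/docker.sock")
            },
        }
        client := &http.Client{Transport: tr}

        // The host part of the URL is ignored once we dial the socket.
        resp, err := client.Get("http://docker/containers/json")
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        io.Copy(os.Stdout, resp.Body) // raw JSON list of running containers
    }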

View File

@@ -1,3 +1,8 @@
.. use orphan to suppress "WARNING: document isn't included in any toctree"
.. per http://sphinx-doc.org/markup/misc.html#file-wide-metadata
:orphan:
:title: Remote API v1.0
:description: API Documentation for Docker
:keywords: API, Docker, rcli, REST, documentation
@@ -12,7 +17,7 @@ Docker Remote API v1.0
=====================
- The Remote API is replacing rcli
- Default port in the docker deamon is 4243
- Default port in the docker daemon is 4243
- The API tends to be REST, but for some complex commands, like attach or pull, the HTTP connection is hijacked to transport stdout stdin and stderr
2. Endpoints
@@ -300,8 +305,8 @@ Start a container
:statuscode 500: server error
Stop a contaier
***************
Stop a container
****************
.. http:post:: /containers/(id)/stop
@@ -827,7 +832,7 @@ Build an image from Dockerfile via stdin
{{ STREAM }}
:query t: tag to be applied to the resulting image in case of success
:query t: repository name to be applied to the resulting image in case of success
:statuscode 200: no error
:statuscode 500: server error

View File

@@ -1,3 +1,7 @@
.. use orphan to suppress "WARNING: document isn't included in any toctree"
.. per http://sphinx-doc.org/markup/misc.html#file-wide-metadata
:orphan:
:title: Remote API v1.1
:description: API Documentation for Docker
@@ -13,7 +17,7 @@ Docker Remote API v1.1
=====================
- The Remote API is replacing rcli
- Default port in the docker deamon is 4243
- Default port in the docker daemon is 4243
- The API tends to be REST, but for some complex commands, like attach or pull, the HTTP connection is hijacked to transport stdout stdin and stderr
2. Endpoints
@@ -301,8 +305,8 @@ Start a container
:statuscode 500: server error
Stop a contaier
***************
Stop a container
****************
.. http:post:: /containers/(id)/stop

View File

@@ -1,3 +1,8 @@
.. use orphan to suppress "WARNING: document isn't included in any toctree"
.. per http://sphinx-doc.org/markup/misc.html#file-wide-metadata
:orphan:
:title: Remote API v1.2
:description: API Documentation for Docker
:keywords: API, Docker, rcli, REST, documentation
@@ -12,7 +17,7 @@ Docker Remote API v1.2
=====================
- The Remote API is replacing rcli
- Default port in the docker deamon is 4243
- Default port in the docker daemon is 4243
- The API tends to be REST, but for some complex commands, like attach or pull, the HTTP connection is hijacked to transport stdout stdin and stderr
2. Endpoints
@@ -312,8 +317,8 @@ Start a container
:statuscode 500: server error
Stop a contaier
***************
Stop a container
****************
.. http:post:: /containers/(id)/stop
@@ -865,7 +870,7 @@ Build an image from Dockerfile via stdin
{{ STREAM }}
:query t: tag to be applied to the resulting image in case of success
:query t: repository name to be applied to the resulting image in case of success
:query remote: resource to fetch, as URI
:statuscode 200: no error
:statuscode 500: server error

View File

@@ -1,3 +1,8 @@
.. use orphan to suppress "WARNING: document isn't included in any toctree"
.. per http://sphinx-doc.org/markup/misc.html#file-wide-metadata
:orphan:
:title: Remote API v1.3
:description: API Documentation for Docker
:keywords: API, Docker, rcli, REST, documentation
@@ -12,7 +17,7 @@ Docker Remote API v1.3
=====================
- The Remote API is replacing rcli
- Default port in the docker deamon is 4243
- Default port in the docker daemon is 4243
- The API tends to be REST, but for some complex commands, like attach or pull, the HTTP connection is hijacked to transport stdout stdin and stderr
2. Endpoints
@@ -220,6 +225,46 @@ Inspect a container
:statuscode 500: server error
List processes running inside a container
*****************************************
.. http:get:: /containers/(id)/top
List processes running inside the container ``id``
**Example request**:
.. sourcecode:: http
GET /containers/4fa6e0f0c678/top HTTP/1.1
**Example response**:
.. sourcecode:: http
HTTP/1.1 200 OK
Content-Type: application/json
[
{
"PID":"11935",
"Tty":"pts/2",
"Time":"00:00:00",
"Cmd":"sh"
},
{
"PID":"12140",
"Tty":"pts/2",
"Time":"00:00:00",
"Cmd":"sleep"
}
]
:statuscode 200: no error
:statuscode 404: no such container
:statuscode 500: server error
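
As a quick illustration of consuming this endpoint, here is a minimal Go sketch that decodes the response shape shown above. It assumes the daemon listens on ``tcp://127.0.0.1:4243`` and reuses the container id from the example request.

.. code-block:: go

    package main

    import (
        "encoding/json"
        "fmt"
        "log"
        "net/http"
    )

    // process mirrors the documented response: every field is a string.
    type process struct {
        PID, Tty, Time, Cmd string
    }

    func main() {
        resp, err := http.Get("http://127.0.0.1:4243/v1.3/containers/4fa6e0f0c678/top")
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()

        var procs []process
        if err := json.NewDecoder(resp.Body).Decode(&procs); err != nil {
            log.Fatal(err)
        }
        for _, p := range procs {
            fmt.Printf("%s\t%s\t%s\n", p.PID, p.Tty, p.Cmd)
        }
    }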
Inspect changes on a container's filesystem
*******************************************
@@ -294,27 +339,34 @@ Start a container
.. http:post:: /containers/(id)/start
Start the container ``id``
Start the container ``id``
**Example request**:
**Example request**:
.. sourcecode:: http
.. sourcecode:: http
POST /containers/e90e34656806/start HTTP/1.1
**Example response**:
POST /containers/(id)/start HTTP/1.1
Content-Type: application/json
.. sourcecode:: http
{
"Binds":["/tmp:/tmp"]
}
HTTP/1.1 200 OK
:statuscode 200: no error
:statuscode 404: no such container
:statuscode 500: server error
**Example response**:
.. sourcecode:: http
HTTP/1.1 204 No Content
Content-Type: text/plain
:jsonparam hostConfig: the container's host configuration (optional)
:statuscode 200: no error
:statuscode 404: no such container
:statuscode 500: server error
Stop a contaier
***************
Stop a container
****************
.. http:post:: /containers/(id)/stop
@@ -793,7 +845,7 @@ Remove an image
{"Deleted":"53b4f83ac9"}
]
:statuscode 204: no error
:statuscode 200: no error
:statuscode 404: no such image
:statuscode 409: conflict
:statuscode 500: server error
@@ -873,7 +925,8 @@ Build an image from Dockerfile via stdin
The Content-type header should be set to "application/tar".
:query t: tag to be applied to the resulting image in case of success
:query t: repository name (and optionally a tag) to be applied to the resulting image in case of success
:query q: suppress verbose build output
:statuscode 200: no error
:statuscode 500: server error
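
Because the build context is just a tarball sent with Content-Type ``application/tar``, a client can assemble it in memory. A minimal Go sketch, assuming the daemon listens on ``tcp://127.0.0.1:4243``; the ``example/hello`` repository name is only a placeholder for the ``t`` parameter.

.. code-block:: go

    package main

    import (
        "archive/tar"
        "bytes"
        "io"
        "log"
        "net/http"
        "os"
    )

    func main() {
        dockerfile := []byte("FROM ubuntu:12.04\nRUN echo hello\n")

        // Build a one-file tar archive in memory as the build context.
        var buf bytes.Buffer
        tw := tar.NewWriter(&buf)
        if err := tw.WriteHeader(&tar.Header{
            Name: "Dockerfile",
            Mode: 0644,
            Size: int64(len(dockerfile)),
        }); err != nil {
            log.Fatal(err)
        }
        if _, err := tw.Write(dockerfile); err != nil {
            log.Fatal(err)
        }
        tw.Close()

        // Stream the tarball to /build; t= names the resulting image.
        resp, err := http.Post(
            "http://127.0.0.1:4243/v1.3/build?t=example/hello",
            "application/tar", &buf)
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        io.Copy(os.Stdout, resp.Body) // build output is streamed back
    }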
@@ -936,7 +989,10 @@ Display system-wide information
"NFd": 11,
"NGoroutines":21,
"MemoryLimit":true,
"SwapLimit":false
"SwapLimit":false,
"EventsListeners":"0",
"LXCVersion":"0.7.5",
"KernelVersion":"3.8.0-19-generic"
}
:statuscode 200: no error
@@ -1006,6 +1062,36 @@ Create a new image from a container's changes
:statuscode 500: server error
Monitor Docker's events
***********************
.. http:get:: /events
Get events from docker, either in real time via streaming, or via polling (using `since`)
**Example request**:
.. sourcecode:: http
POST /events?since=1374067924
**Example response**:
.. sourcecode:: http
HTTP/1.1 200 OK
Content-Type: application/json
{"status":"create","id":"dfdf82bd3881","time":1374067924}
{"status":"start","id":"dfdf82bd3881","time":1374067924}
{"status":"stop","id":"dfdf82bd3881","time":1374067966}
{"status":"destroy","id":"dfdf82bd3881","time":1374067970}
:query since: timestamp used for polling
:statuscode 200: no error
:statuscode 500: server error
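
The stream is a sequence of JSON objects, so it can be decoded incrementally. A minimal Go sketch, assuming the daemon listens on ``tcp://127.0.0.1:4243``; leaving out ``since`` makes the daemon stream events as they happen.

.. code-block:: go

    package main

    import (
        "encoding/json"
        "io"
        "log"
        "net/http"
    )

    // event mirrors the JSON lines shown in the example response.
    type event struct {
        Status string `json:"status"`
        ID     string `json:"id"`
        Time   int64  `json:"time"`
    }

    func main() {
        resp, err := http.Get("http://127.0.0.1:4243/v1.3/events")
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()

        dec := json.NewDecoder(resp.Body)
        for {
            var ev event
            if err := dec.Decode(&ev); err == io.EOF {
                break
            } else if err != nil {
                log.Fatal(err)
            }
            log.Printf("%d %s %s", ev.Time, ev.ID, ev.Status)
        }
    }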
3. Going further
================

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@@ -452,7 +452,7 @@ User Register
"username": "foobar"'}
:jsonparameter email: valid email address, that needs to be confirmed
:jsonparameter username: min 4 character, max 30 characters, must match the regular expression [a-z0-9_].
:jsonparameter username: min 4 character, max 30 characters, must match the regular expression [a-z0-9\_].
:jsonparameter password: min 5 characters
**Example Response**:

View File

@@ -2,9 +2,10 @@
:description: Documentation for docker Registry and Registry API
:keywords: docker, registry, api, index
.. _registryindexspec:
=====================
Registry & index Spec
Registry & Index Spec
=====================
.. contents:: Table of Contents
@@ -154,7 +155,7 @@ API (pulling repository foo/bar):
.. note::
**Its possible not to use the Index at all!** In this case, a deployed version of the Registry is deployed to store and serve images. Those images are not authentified and the security is not guaranteed.
**It's possible not to use the Index at all!** In this case, a standalone version of the Registry is deployed to store and serve images. Those images are not authenticated and the security is not guaranteed.
.. note::
@@ -367,7 +368,8 @@ POST /v1/users
{"email": "sam@dotcloud.com", "password": "toto42", "username": "foobar"'}
**Validation**:
- **username** : min 4 character, max 30 characters, must match the regular expression [a-z0-9_].
- **username**: min 4 character, max 30 characters, must match the regular
expression [a-z0-9\_].
- **password**: min 5 characters
**Valid**: return HTTP 200
@@ -566,4 +568,4 @@ Next request::
---------------------
- 1.0 : May 6th 2013 : initial release
- 1.1 : June 1st 2013 : Added Delete Repository and way to handle new source namespace.
- 1.1 : June 1st 2013 : Added Delete Repository and way to handle new source namespace.

View File

@@ -13,9 +13,9 @@ Docker Usage
To list available commands, either run ``docker`` with no parameters or execute
``docker help``::
$ docker
$ sudo docker
Usage: docker [OPTIONS] COMMAND [arg...]
-H=[tcp://127.0.0.1:4243]: tcp://host:port to bind/connect to or unix://path/to/socket to use
-H=[unix:///var/run/docker.sock]: tcp://host:port to bind/connect to or unix://path/to/socket to use
A self-sufficient runtime for linux containers.
@@ -30,12 +30,15 @@ Available Commands
command/attach
command/build
command/commit
command/cp
command/diff
command/events
command/export
command/history
command/images
command/import
command/info
command/insert
command/inspect
command/kill
command/login
@@ -52,5 +55,6 @@ Available Commands
command/start
command/stop
command/tag
command/top
command/version
command/wait

View File

@@ -10,4 +10,50 @@
Usage: docker attach CONTAINER
Attach to a running container
Attach to a running container.
You can detach from the container again (and leave it running) with
``CTRL-c`` (for a quiet exit) or ``CTRL-\`` to get a stacktrace of
the Docker client when it quits.
To stop a container, use ``docker stop``
To kill the container, use ``docker kill``
Examples:
---------
.. code-block:: bash
$ ID=$(sudo docker run -d ubuntu /usr/bin/top -b)
$ sudo docker attach $ID
top - 02:05:52 up 3:05, 0 users, load average: 0.01, 0.02, 0.05
Tasks: 1 total, 1 running, 0 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.1%us, 0.2%sy, 0.0%ni, 99.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 373572k total, 355560k used, 18012k free, 27872k buffers
Swap: 786428k total, 0k used, 786428k free, 221740k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1 root 20 0 17200 1116 912 R 0 0.3 0:00.03 top
top - 02:05:55 up 3:05, 0 users, load average: 0.01, 0.02, 0.05
Tasks: 1 total, 1 running, 0 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.0%us, 0.2%sy, 0.0%ni, 99.8%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 373572k total, 355244k used, 18328k free, 27872k buffers
Swap: 786428k total, 0k used, 786428k free, 221776k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1 root 20 0 17208 1144 932 R 0 0.3 0:00.03 top
top - 02:05:58 up 3:06, 0 users, load average: 0.01, 0.02, 0.05
Tasks: 1 total, 1 running, 0 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.2%us, 0.3%sy, 0.0%ni, 99.5%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 373572k total, 355780k used, 17792k free, 27880k buffers
Swap: 786428k total, 0k used, 786428k free, 221776k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1 root 20 0 17208 1144 932 R 0 0.3 0:00.03 top
^C$
$ sudo docker stop $ID

View File

@@ -10,7 +10,10 @@
Usage: docker build [OPTIONS] PATH | URL | -
Build a new container image from the source code at PATH
-t="": Tag to be applied to the resulting image in case of success.
-t="": Repository name (and optionally a tag) to be applied to the resulting image in case of success.
-q=false: Suppress verbose build output.
-no-cache: Do not use the cache when building the image.
-rm: Remove intermediate containers after a successful build
When a single Dockerfile is given as URL, then no context is set. When a git repository is set as URL, the repository is used as context
@@ -19,25 +22,44 @@ Examples
.. code-block:: bash
docker build .
sudo docker build .
| This will read the Dockerfile from the current directory. It will also send any other files and directories found in the current directory to the docker daemon.
| The contents of this directory would be used by ADD commands found within the Dockerfile.
| This will send a lot of data to the docker daemon if the current directory contains a lot of data.
| If the absolute path is provided instead of '.', only the files and directories required by the ADD commands from the Dockerfile will be added to the context and transferred to the docker daemon.
|
This will read the ``Dockerfile`` from the current directory. It will
also send any other files and directories found in the current
directory to the ``docker`` daemon.
The contents of this directory would be used by ``ADD`` commands found
within the ``Dockerfile``. This will send a lot of data to the
``docker`` daemon if the current directory contains a lot of data. If
the absolute path is provided instead of ``.`` then only the files and
directories required by the ADD commands from the ``Dockerfile`` will be
added to the context and transferred to the ``docker`` daemon.
.. code-block:: bash
docker build - < Dockerfile
sudo docker build -t vieux/apache:2.0 .
| This will read a Dockerfile from Stdin without context. Due to the lack of a context, no contents of any local directory will be sent to the docker daemon.
| ADD doesn't work when running in this mode due to the absence of the context, thus having no source files to copy to the container.
This will build like the previous example, but it will then tag the
resulting image. The repository name will be ``vieux/apache`` and the
tag will be ``2.0``
.. code-block:: bash
docker build github.com/creack/docker-firefox
sudo docker build - < Dockerfile
| This will clone the github repository and use it as context. The Dockerfile at the root of the repository is used as Dockerfile.
| Note that you can specify an arbitrary git repository by using the 'git://' schema.
This will read a ``Dockerfile`` from *stdin* without context. Due to
the lack of a context, no contents of any local directory will be sent
to the ``docker`` daemon. ``ADD`` doesn't work when running in this
mode because the absence of the context provides no source files to
copy to the container.
.. code-block:: bash
sudo docker build github.com/creack/docker-firefox
This will clone the Github repository and use it as context. The
``Dockerfile`` at the root of the repository is used as
``Dockerfile``. Note that you can specify an arbitrary git repository
by using the ``git://`` schema.

View File

@@ -14,19 +14,36 @@
-m="": Commit message
-author="": Author (eg. "John Hannibal Smith <hannibal@a-team.com>"
-run="": Config automatically applied when the image is run. "+`(ex: {"Cmd": ["cat", "/world"], "PortSpecs": ["22"]}')
-run="": Config automatically applied when the image is
run. "+`(ex: {"Cmd": ["cat", "/world"], "PortSpecs": ["22"]}')
Full -run example::
{"Hostname": "",
"User": "",
"CpuShares": 0,
"Memory": 0,
"MemorySwap": 0,
"PortSpecs": ["22", "80", "443"],
"Tty": true,
"OpenStdin": true,
"StdinOnce": true,
"Env": ["FOO=BAR", "FOO2=BAR2"],
"Cmd": ["cat", "-e", "/etc/resolv.conf"],
"Dns": ["8.8.8.8", "8.8.4.4"]}
{
"Entrypoint" : null,
"Privileged" : false,
"User" : "",
"VolumesFrom" : "",
"Cmd" : ["cat", "-e", "/etc/resolv.conf"],
"Dns" : ["8.8.8.8", "8.8.4.4"],
"MemorySwap" : 0,
"AttachStdin" : false,
"AttachStderr" : false,
"CpuShares" : 0,
"OpenStdin" : false,
"Volumes" : null,
"Hostname" : "122612f45831",
"PortSpecs" : ["22", "80", "443"],
"Image" : "b750fe79269d2ec9a3c593ef05b4332b1d1a02a62b4accb2c21d589ff2f5f2dc",
"Tty" : false,
"Env" : [
"HOME=/",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"StdinOnce" : false,
"Domainname" : "",
"WorkingDir" : "/",
"NetworkDisabled" : false,
"Memory" : 0,
"AttachStdout" : false
}
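
Rather than typing the ``-run`` JSON by hand, it can be generated. A minimal Go sketch that marshals a subset of the fields above and shells out to ``docker commit``; it assumes ``docker`` is on the ``PATH``, and the container id and image name are placeholders.

.. code-block:: go

    package main

    import (
        "encoding/json"
        "log"
        "os/exec"
    )

    func main() {
        // A small subset of the -run config shown above.
        run := map[string]interface{}{
            "Cmd":       []string{"cat", "-e", "/etc/resolv.conf"},
            "Dns":       []string{"8.8.8.8", "8.8.4.4"},
            "PortSpecs": []string{"22", "80", "443"},
        }
        b, err := json.Marshal(run)
        if err != nil {
            log.Fatal(err)
        }

        // Placeholders: replace c3f279d17e0a and example/resolver.
        cmd := exec.Command("docker", "commit", "-run="+string(b),
            "c3f279d17e0a", "example/resolver")
        out, err := cmd.CombinedOutput()
        if err != nil {
            log.Fatalf("%v: %s", err, out)
        }
        log.Printf("new image id: %s", out)
    }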

View File

@@ -0,0 +1,14 @@
:title: Cp Command
:description: Copy files/folders from the container's filesystem to the host path
:keywords: cp, docker, container, documentation, copy
============================================================================
``cp`` -- Copy files/folders from the container's filesystem to the host path
============================================================================
::
Usage: docker cp CONTAINER:RESOURCE HOSTPATH
Copy files/folders from the containers filesystem to the host
path. Paths are relative to the root of the filesystem.

View File

@@ -0,0 +1,34 @@
:title: Events Command
:description: Get real time events from the server
:keywords: events, docker, documentation
=================================================================
``events`` -- Get real time events from the server
=================================================================
::
Usage: docker events
Get real time events from the server
Examples
--------
Starting and stopping a container
.................................
.. code-block:: bash
$ sudo docker start 4386fb97867d
$ sudo docker stop 4386fb97867d
In another shell
.. code-block:: bash
$ sudo docker events
[2013-09-03 15:49:26 +0200 CEST] 4386fb97867d: (from 12de384bfb10) start
[2013-09-03 15:49:29 +0200 CEST] 4386fb97867d: (from 12de384bfb10) die
[2013-09-03 15:49:29 +0200 CEST] 4386fb97867d: (from 12de384bfb10) stop

View File

@@ -21,6 +21,6 @@ Displaying images visually
::
docker images -viz | dot -Tpng -o docker.png
sudo docker images -viz | dot -Tpng -o docker.png
.. image:: images/docker_images.gif

View File

@@ -12,9 +12,11 @@
Create a new filesystem image from the contents of a tarball
At this time, the URL must start with ``http`` and point to a single file archive (.tar, .tar.gz, .bzip)
containing a root filesystem. If you would like to import from a local directory or archive,
you can use the ``-`` parameter to take the data from standard in.
At this time, the URL must start with ``http`` and point to a single
file archive (.tar, .tar.gz, .tgz, .bzip, .tar.xz, .txz) containing a
root filesystem. If you would like to import from a local directory or
archive, you can use the ``-`` parameter to take the data from
standard in.
Examples
--------
@@ -22,19 +24,21 @@ Examples
Import from a remote location
.............................
``$ docker import http://example.com/exampleimage.tgz exampleimagerepo``
``$ sudo docker import http://example.com/exampleimage.tgz exampleimagerepo``
Import from a local file
........................
Import to docker via pipe and standard in
``$ cat exampleimage.tgz | docker import - exampleimagelocal``
``$ cat exampleimage.tgz | sudo docker import - exampleimagelocal``
Import from a local directory
.............................
``$ sudo tar -c . | docker import - exampleimagedir``
Note the ``sudo`` in this example -- you must preserve the ownership of the files (especially root ownership)
during the archiving with tar. If you are not root (or sudo) when you tar, then the ownerships might not get preserved.
Note the ``sudo`` in this example -- you must preserve the ownership
of the files (especially root ownership) during the archiving with
tar. If you are not root (or sudo) when you tar, then the ownerships
might not get preserved.

View File

@@ -0,0 +1,23 @@
:title: Insert Command
:description: Insert a file in an image
:keywords: insert, image, docker, documentation
==========================================================================
``insert`` -- Insert a file in an image
==========================================================================
::
Usage: docker insert IMAGE URL PATH
Insert a file from URL in the IMAGE at PATH
Examples
--------
Insert file from github
.......................
.. code-block:: bash
$ sudo docker insert 8283e18b24bc https://raw.github.com/metalivedev/django/master/postinstall /tmp/postinstall.sh

View File

@@ -8,6 +8,17 @@
::
Usage: docker login
Usage: docker login [OPTIONS] [SERVER]
Register or Login to the docker registry server
-e="": email
-p="": password
-u="": username
If you want to log in to a private registry you can
specify this by adding the server name.
example:
docker login localhost:8080

View File

@@ -10,4 +10,4 @@
Usage: docker rm [OPTIONS] CONTAINER
Remove a container
Remove one or more containers

View File

@@ -8,6 +8,6 @@
::
Usage: docker rmimage [OPTIONS] IMAGE
Usage: docker rmi IMAGE [IMAGE...]
Remove an image
Remove one or more images

View File

@@ -8,20 +8,77 @@
::
Usage: docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
Usage: docker run [OPTIONS] IMAGE[:TAG] [COMMAND] [ARG...]
Run a command in a new container
-a=map[]: Attach to stdin, stdout or stderr.
-c=0: CPU shares (relative weight)
-d=false: Detached mode: leave the container running in the background
-cidfile="": Write the container ID to the file
-d=false: Detached mode: Run container in the background, print new container id
-e=[]: Set environment variables
-h="": Container host name
-i=false: Keep stdin open even if not attached
-privileged=false: Give extended privileges to this container
-m=0: Memory limit (in bytes)
-n=true: Enable networking for this container
-p=[]: Map a network port to the container
-t=false: Allocate a pseudo-tty
-u="": Username or UID
-d=[]: Set custom dns servers for the container
-v=[]: Creates a new volume and mounts it at the specified path.
-dns=[]: Set custom dns servers for the container
-v=[]: Create a bind mount with: [host-dir]:[container-dir]:[rw|ro]. If "host-dir" is missing, then docker creates a new volume.
-volumes-from="": Mount all volumes from the given container.
-entrypoint="": Overwrite the default entrypoint set by the image.
-w="": Working directory inside the container
-lxc-conf=[]: Add custom lxc options -lxc-conf="lxc.cgroup.cpuset.cpus = 0,1"
Examples
--------
.. code-block:: bash
sudo docker run -cidfile /tmp/docker_test.cid ubuntu echo "test"
This will create a container and print "test" to the console. The
``cidfile`` flag makes docker attempt to create a new file and write the
container ID to it. If the file exists already, docker will return an
error. Docker will close this file when docker run exits.
.. code-block:: bash
docker run mount -t tmpfs none /var/spool/squid
This will *not* work, because by default, most potentially dangerous
kernel capabilities are dropped, including ``cap_sys_admin`` (which is
required to mount filesystems). However, the ``-privileged`` flag will
allow it to run:
.. code-block:: bash
docker run -privileged mount -t tmpfs none /var/spool/squid
The ``-privileged`` flag gives *all* capabilities to the container,
and it also lifts all the limitations enforced by the ``device``
cgroup controller. In other words, the container can then do almost
everything that the host can do. This flag exists to allow special
use-cases, like running Docker within Docker.
.. code-block:: bash
docker run -w /path/to/dir/ -i -t ubuntu pwd
The ``-w`` flag runs the command inside the given directory, in this
case ``/path/to/dir/``. If the path does not exist, it is created inside
the container.
.. code-block:: bash
docker run -v `pwd`:`pwd` -w `pwd` -i -t ubuntu pwd
The ``-v`` flag mounts the current working directory into the container.
The ``-w`` flag runs the command inside the current working
directory, by changing into the directory given by ``pwd``. So this
combination executes the command inside the container, using the
host's current working directory.

View File

@@ -10,5 +10,5 @@
Usage: docker search TERM
Searches for the TERM parameter on the Docker index and prints out a list of repositories
that match.
Searches for the TERM parameter on the Docker index and prints out
a list of repositories that match.

View File

@@ -8,6 +8,8 @@
::
Usage: docker stop [OPTIONS] NAME
Usage: docker stop [OPTIONS] CONTAINER [CONTAINER...]
Stop a running container
-t=10: Number of seconds to wait for the container to stop before killing it.

View File

@@ -0,0 +1,13 @@
:title: Top Command
:description: Lookup the running processes of a container
:keywords: top, docker, container, documentation
=======================================================
``top`` -- Lookup the running processes of a container
=======================================================
::
Usage: docker top CONTAINER
Lookup the running processes of a container

View File

@@ -15,12 +15,15 @@ Contents:
attach <command/attach>
build <command/build>
commit <command/commit>
cp <command/cp>
diff <command/diff>
events <command/events>
export <command/export>
history <command/history>
images <command/images>
import <command/import>
info <command/info>
insert <command/insert>
inspect <command/inspect>
kill <command/kill>
login <command/login>
@@ -37,5 +40,6 @@ Contents:
start <command/start>
stop <command/stop>
tag <command/tag>
top <command/top>
version <command/version>
wait <command/wait>
wait <command/wait>

Binary file not shown.


View File

@@ -1,16 +0,0 @@
:title: Concepts
:description: -- todo: change me
:keywords: concepts, documentation, docker, containers
Concepts
========
Contents:
.. toctree::
:maxdepth: 1
../index

View File

@@ -30,6 +30,7 @@ import sys, os
html_additional_pages = {
'concepts/containers': 'redirect_home.html',
'concepts/introduction': 'redirect_home.html',
'builder/basics': 'redirect_build.html',
}
@@ -50,9 +51,7 @@ source_suffix = '.rst'
# The encoding of source files.
#source_encoding = 'utf-8-sig'
#disable the permalinks on headers, I find them really annoying
html_add_permalinks = None
html_add_permalinks = u''
# The master toctree document.
master_doc = 'toctree'
@@ -202,7 +201,7 @@ latex_elements = {
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass [howto/manual]).
latex_documents = [
('index', 'Docker.tex', u'Docker Documentation',
('toctree', 'Docker.tex', u'Docker Documentation',
u'Team Docker', 'manual'),
]
@@ -232,7 +231,7 @@ latex_documents = [
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
('index', 'docker', u'Docker Documentation',
('toctree', 'docker', u'Docker Documentation',
[u'Team Docker'], 1)
]
@@ -246,7 +245,7 @@ man_pages = [
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
('index', 'Docker', u'Docker Documentation',
('toctree', 'Docker', u'Docker Documentation',
u'Team Docker', 'Docker', 'One line description of project.',
'Miscellaneous'),
]

View File

@@ -1,5 +1,5 @@
:title: Contribution Guidelines
:description: Contribution guidelines: create issues, convetions, pull requests
:description: Contribution guidelines: create issues, conventions, pull requests
:keywords: contributing, docker, documentation, help, guideline
Contributing to Docker

View File

@@ -5,53 +5,59 @@
Setting Up a Dev Environment
============================
Instructions that have been verified to work on Ubuntu Precise 12.04 (LTS) (64-bit),
To make it easier to contribute to Docker, we provide a standard
development environment. It is important that the same environment be
used for all tests, builds and releases. The standard development
environment defines all build dependencies: system libraries and
binaries, go environment, go dependencies, etc.
Dependencies
------------
Step 1: install docker
----------------------
**Linux kernel 3.8**
Docker's build environment itself is a Docker container, so the first
step is to install docker on your system.
Due to a bug in LXC docker works best on the 3.8 kernel. Precise comes with a 3.2 kernel, so we need to upgrade it. The kernel we install comes with AUFS built in.
You can follow the `install instructions most relevant to your system
<https://docs.docker.io/en/latest/installation/>`_. Make sure you have
a working, up-to-date docker installation, then continue to the next
step.
.. code-block:: bash
Step 2: check out the source
----------------------------
# install the backported kernel
sudo apt-get update && sudo apt-get install linux-image-generic-lts-raring
::
# reboot
sudo reboot
Installation
------------
.. code-block:: bash
sudo apt-get install python-software-properties
sudo add-apt-repository ppa:gophers/go
sudo apt-get update
sudo apt-get -y install lxc xz-utils curl golang-stable git aufs-tools
export GOPATH=~/go/
export PATH=$GOPATH/bin:$PATH
mkdir -p $GOPATH/src/github.com/dotcloud
cd $GOPATH/src/github.com/dotcloud
git clone git://github.com/dotcloud/docker.git
git clone http://git@github.com/dotcloud/docker
cd docker
go get -v github.com/dotcloud/docker/...
go install -v github.com/dotcloud/docker/...
Step 3: build
-------------
When you are ready to build docker, run this command:
::
sudo docker build -t docker .
This will build the revision currently checked out in the
repository. Feel free to check out the version of your choice.
If the build is successful, congratulations! You have produced a clean
build of docker, neatly encapsulated in a standard build environment.
You can run an interactive session in the newly built container:
::
sudo docker run -i -t docker bash
Then run the docker daemon,
To extract the binaries from the container:
.. code-block:: bash
::
sudo $GOPATH/bin/docker -d
sudo docker run docker sh -c 'cat $(which docker)' > docker-build && chmod +x docker-build
Run the ``go install`` command (above) to recompile docker.

View File

@@ -9,27 +9,29 @@ CouchDB Service
.. include:: example_header.inc
Here's an example of using data volumes to share the same data between 2 couchdb containers.
This could be used for hot upgrades, testing different versions of couchdb on the same data, etc.
Here's an example of using data volumes to share the same data between
2 CouchDB containers. This could be used for hot upgrades, testing
different versions of CouchDB on the same data, etc.
Create first database
---------------------
Note that we're marking /var/lib/couchdb as a data volume.
Note that we're marking ``/var/lib/couchdb`` as a data volume.
.. code-block:: bash
COUCH1=$(docker run -d -v /var/lib/couchdb shykes/couchdb:2013-05-03)
COUCH1=$(sudo docker run -d -v /var/lib/couchdb shykes/couchdb:2013-05-03)
Add data to the first database
------------------------------
We're assuming your docker host is reachable at `localhost`. If not, replace `localhost` with the public IP of your docker host.
We're assuming your docker host is reachable at `localhost`. If not,
replace `localhost` with the public IP of your docker host.
.. code-block:: bash
HOST=localhost
URL="http://$HOST:$(docker port $COUCH1 5984)/_utils/"
URL="http://$HOST:$(sudo docker port $COUCH1 5984)/_utils/"
echo "Navigate to $URL in your browser, and use the couch interface to add data"
Create second database
@@ -39,7 +41,7 @@ This time, we're requesting shared access to $COUCH1's volumes.
.. code-block:: bash
COUCH2=$(docker run -d -volumes-from $COUCH1) shykes/couchdb:2013-05-03)
COUCH2=$(sudo docker run -d -volumes-from $COUCH1 shykes/couchdb:2013-05-03)
Browse data on the second database
----------------------------------
@@ -47,7 +49,8 @@ Browse data on the second database
.. code-block:: bash
HOST=localhost
URL="http://$HOST:$(docker port $COUCH2 5984)/_utils/"
echo "Navigate to $URL in your browser. You should see the same data as in the first database!"
URL="http://$HOST:$(sudo docker port $COUCH2 5984)/_utils/"
echo "Navigate to $URL in your browser. You should see the same data as in the first database"'!'
Congratulations, you are running 2 Couchdb containers, completely isolated from each other *except* for their data.
Congratulations, you are running 2 Couchdb containers, completely
isolated from each other *except* for their data.

View File

@@ -2,6 +2,28 @@
:description: A simple hello world example with Docker
:keywords: docker, example, hello world
.. _running_examples:
Running the Examples
====================
All the examples assume your machine is running the docker daemon. To
run the docker daemon in the background, simply type:
.. code-block:: bash
sudo docker -d &
Now you can run docker in client mode: by default all commands will be
forwarded to the ``docker`` daemon via a protected Unix socket, so you
must run as root.
.. code-block:: bash
sudo docker help
----
.. _hello_world:
Hello World
@@ -11,26 +33,28 @@ Hello World
This is the most basic example available for using Docker.
Download the base container
Download the base image (named "ubuntu"):
.. code-block:: bash
# Download a base image
docker pull base
# Download an ubuntu image
sudo docker pull ubuntu
The *base* image is a minimal *ubuntu* based container, alternatively you can select *busybox*, a bare
minimal linux system. The images are retrieved from the docker repository.
As an alternative to the *ubuntu* image, you can select *busybox*, a bare
minimal Linux system. The images are retrieved from the Docker
repository.
.. code-block:: bash
#run a simple echo command, that will echo hello world back to the console over standard out.
docker run base /bin/echo hello world
sudo docker run ubuntu /bin/echo hello world
**Explanation:**
- **"sudo"** execute the following commands as user *root*
- **"docker run"** run a command in a new container
- **"base"** is the image we want to run the command inside of.
- **"ubuntu"** is the image we want to run the command inside of.
- **"/bin/echo"** is the command we want to run in the container
- **"hello world"** is the input for the echo command
@@ -47,4 +71,108 @@ See the example in action
</div>
Continue to the :ref:`hello_world_daemon` example.
----
.. _hello_world_daemon:
Hello World Daemon
==================
.. include:: example_header.inc
And now for the most boring daemon ever written!
This example assumes you have Docker installed and the Ubuntu
image already imported with ``docker pull ubuntu``. We will use the Ubuntu
image to run a simple hello world daemon that will just print hello
world to standard out every second. It will continue to do this until
we stop it.
**Steps:**
.. code-block:: bash
CONTAINER_ID=$(sudo docker run -d ubuntu /bin/sh -c "while true; do echo hello world; sleep 1; done")
We are going to run a simple hello world daemon in a new container
made from the *ubuntu* image.
- **"docker run -d "** run a command in a new container. We pass "-d"
so it runs as a daemon.
- **"ubuntu"** is the image we want to run the command inside of.
- **"/bin/sh -c"** is the command we want to run in the container
- **"while true; do echo hello world; sleep 1; done"** is the mini
script we want to run, that will just print hello world once a
second until we stop it.
- **$CONTAINER_ID** the output of the run command will return a
container id, which we can use in future commands to see what is
going on with this process.
.. code-block:: bash
sudo docker logs $CONTAINER_ID
Check the logs to make sure it is working correctly.
- **"docker logs**" This will return the logs for a container
- **$CONTAINER_ID** The Id of the container we want the logs for.
.. code-block:: bash
sudo docker attach $CONTAINER_ID
Attach to the container to see the results in real time.
- **"docker attach**" This will allow us to attach to a background
process to see what is going on.
- **$CONTAINER_ID** The Id of the container we want to attach to.
Exit from the container attachment by pressing Control-C.
.. code-block:: bash
sudo docker ps
Check the process list to make sure it is running.
- **"docker ps"** this shows all running processes managed by docker
.. code-block:: bash
sudo docker stop $CONTAINER_ID
Stop the container, since we don't need it anymore.
- **"docker stop"** This stops a container
- **$CONTAINER_ID** The Id of the container we want to stop.
.. code-block:: bash
sudo docker ps
Make sure it is really stopped.
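
The same run/logs/stop sequence can also be scripted. Here is a minimal Go sketch using only the standard library; it assumes ``docker`` is on the ``PATH`` and that the caller already has the privileges discussed above (for example, it is run as root).

.. code-block:: go

    package main

    import (
        "log"
        "os/exec"
        "strings"
    )

    // docker runs one docker CLI command and returns its trimmed output.
    func docker(args ...string) string {
        out, err := exec.Command("docker", args...).CombinedOutput()
        if err != nil {
            log.Fatalf("docker %v: %v\n%s", args, err, out)
        }
        return strings.TrimSpace(string(out))
    }

    func main() {
        // Start the hello world daemon in the background, keeping its id.
        id := docker("run", "-d", "ubuntu", "/bin/sh", "-c",
            "while true; do echo hello world; sleep 1; done")
        log.Println("started container", id)

        // Check the logs, then stop the container again.
        log.Println(docker("logs", id))
        docker("stop", id)
        log.Println("stopped container", id)
    }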
**Video:**
See the example in action
.. raw:: html
<div style="margin-top:10px;">
<iframe width="560" height="350" src="http://ascii.io/a/2562/raw" frameborder="0"></iframe>
</div>
The next example in the series is a :ref:`python_web_app` example, or
you could skip to any of the other examples:
.. toctree::
:maxdepth: 1
python_web_app
nodejs_web_app
running_redis_service
running_ssh_service
couchdb_data_volumes
postgresql_service
mongodb

View File

@@ -1,84 +0,0 @@
:title: Hello world daemon example
:description: A simple hello world daemon example with Docker
:keywords: docker, example, hello world, daemon
.. _hello_world_daemon:
Hello World Daemon
==================
.. include:: example_header.inc
The most boring daemon ever written.
This example assumes you have Docker installed and with the base image already imported ``docker pull base``.
We will use the base image to run a simple hello world daemon that will just print hello world to standard
out every second. It will continue to do this until we stop it.
**Steps:**
.. code-block:: bash
CONTAINER_ID=$(docker run -d base /bin/sh -c "while true; do echo hello world; sleep 1; done")
We are going to run a simple hello world daemon in a new container made from the base image.
- **"docker run -d "** run a command in a new container. We pass "-d" so it runs as a daemon.
- **"base"** is the image we want to run the command inside of.
- **"/bin/sh -c"** is the command we want to run in the container
- **"while true; do echo hello world; sleep 1; done"** is the mini script we want to run, that will just print hello world once a second until we stop it.
- **$CONTAINER_ID** the output of the run command will return a container id, we can use in future commands to see what is going on with this process.
.. code-block:: bash
docker logs $CONTAINER_ID
Check the logs make sure it is working correctly.
- **"docker logs**" This will return the logs for a container
- **$CONTAINER_ID** The Id of the container we want the logs for.
.. code-block:: bash
docker attach $CONTAINER_ID
Attach to the container to see the results in realtime.
- **"docker attach**" This will allow us to attach to a background process to see what is going on.
- **$CONTAINER_ID** The Id of the container we want to attach too.
.. code-block:: bash
docker ps
Check the process list to make sure it is running.
- **"docker ps"** this shows all running process managed by docker
.. code-block:: bash
docker stop $CONTAINER_ID
Stop the container, since we don't need it anymore.
- **"docker stop"** This stops a container
- **$CONTAINER_ID** The Id of the container we want to stop.
.. code-block:: bash
docker ps
Make sure it is really stopped.
**Video:**
See the example in action
.. raw:: html
<div style="margin-top:10px;">
<iframe width="560" height="350" src="http://ascii.io/a/2562/raw" frameborder="0"></iframe>
</div>
Continue to the :ref:`python_web_app` example.

View File

@@ -1,22 +1,25 @@
:title: Docker Examples
:description: Examples on how to use Docker
:keywords: docker, hello world, node, nodejs, python, couch, couchdb, redis, ssh, sshd, examples
:keywords: docker, hello world, node, nodejs, python, couch, couchdb, redis, ssh, sshd, examples, postgresql
Examples
============
========
Contents:
Here are some examples of how to use Docker to create running
processes, starting from a very simple *Hello World* and progressing
to more substantial services like you might find in production.
.. toctree::
:maxdepth: 1
running_examples
hello_world
hello_world_daemon
python_web_app
nodejs_web_app
running_redis_service
running_ssh_service
couchdb_data_volumes
postgresql_service
mongodb
running_riak_service

View File

@@ -0,0 +1,100 @@
:title: Building a Docker Image with MongoDB
:description: How to build a Docker image with MongoDB pre-installed
:keywords: docker, example, package installation, networking, mongodb
.. _mongodb_image:
Building an Image with MongoDB
==============================
.. include:: example_header.inc
The goal of this example is to show how you can build your own
docker images with MongoDB preinstalled. We will do that by
constructing a Dockerfile that downloads a base image, adds an
apt source and installs the database software on Ubuntu.
Creating a ``Dockerfile``
+++++++++++++++++++++++++
Create an empty file called ``Dockerfile``:
.. code-block:: bash
touch Dockerfile
Next, define the parent image you want to use to build your own image on top of.
Here, we'll use `Ubuntu <https://index.docker.io/_/ubuntu/>`_ (tag: ``latest``)
available on the `docker index <http://index.docker.io>`_:
.. code-block:: bash
FROM ubuntu:latest
Since we want to be running the latest version of MongoDB we'll need to add the
10gen repo to our apt sources list.
.. code-block:: bash
# Add 10gen official apt source to the sources list
RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 7F0CEB10
RUN echo 'deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen' | tee /etc/apt/sources.list.d/10gen.list
Then, we don't want Ubuntu to complain about init not being available so we'll
divert /sbin/initctl to /bin/true so it thinks everything is working.
.. code-block:: bash
# Hack for initctl not being available in Ubuntu
RUN dpkg-divert --local --rename --add /sbin/initctl
RUN ln -s /bin/true /sbin/initctl
Afterwards we'll be able to update our apt repositories and install MongoDB
.. code-block:: bash
# Install MongoDB
RUN apt-get update
RUN apt-get install -y mongodb-10gen
To run MongoDB we'll have to create the default data directory (because we want it to
run without needing to provide a special configuration file)
.. code-block:: bash
# Create the MongoDB data directory
RUN mkdir -p /data/db
Finally, we'll expose the standard port that MongoDB runs on (27017) as well as
define an ENTRYPOINT for the container.
.. code-block:: bash
EXPOSE 27017
ENTRYPOINT ["/usr/bin/mongod"]
Now, let's build the image which will go through the ``Dockerfile`` we made and
run all of the commands.
.. code-block:: bash
docker build -t <yourname>/mongodb .
Now you should be able to run ``mongod`` as a daemon and be able to connect on
the local port!
.. code-block:: bash
# Regular style
MONGO_ID=$(docker run -d <yourname>/mongodb)
# Lean and mean
MONGO_ID=$(docker run -d <yourname>/mongodb --noprealloc --smallfiles)
# Check the logs out
docker logs $MONGO_ID
# Connect and play around
mongo --port <port you get from `docker ps`>
Sweet!

View File

@@ -9,10 +9,11 @@ Node.js Web App
.. include:: example_header.inc
The goal of this example is to show you how you can build your own docker images
from a parent image using a ``Dockerfile`` . We will do that by making a simple
Node.js hello world web application running on CentOS. You can get the full
source code at https://github.com/gasi/docker-node-hello.
The goal of this example is to show you how you can build your own
docker images from a parent image using a ``Dockerfile`` . We will do
that by making a simple Node.js hello world web application running on
CentOS. You can get the full source code at
https://github.com/gasi/docker-node-hello.
Create Node.js app
++++++++++++++++++
@@ -92,7 +93,7 @@ To install the right package for CentOS, well use the instructions from the
# Enable EPEL for Node.js
RUN rpm -Uvh http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm
# Install Node.js and npm
RUN yum install -y npm-1.2.17-5.el6
RUN yum install -y npm
To bundle your app's source code inside the docker image, use the ``ADD``
command:
@@ -109,16 +110,17 @@ Install your app dependencies using npm:
# Install app dependencies
RUN cd /src; npm install
Your app binds to port ``8080`` so youll use the ``EXPOSE`` command to have it
mapped by the docker daemon:
Your app binds to port ``8080`` so you'll use the ``EXPOSE`` command
to have it mapped by the docker daemon:
.. code-block:: bash
EXPOSE 8080
Last but not least, define the command to run your app using ``CMD`` which
defines your runtime, i.e. ``node``, and the path to our app, i.e.
``src/index.js`` (see the step where we added the source to the container):
Last but not least, define the command to run your app using ``CMD``
which defines your runtime, i.e. ``node``, and the path to our app,
i.e. ``src/index.js`` (see the step where we added the source to the
container):
.. code-block:: bash
@@ -135,7 +137,7 @@ Your ``Dockerfile`` should now look like this:
# Enable EPEL for Node.js
RUN rpm -Uvh http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm
# Install Node.js and npm
RUN yum install -y npm-1.2.17-5.el6
RUN yum install -y npm
# Bundle app source
ADD . /src
@@ -149,19 +151,20 @@ Your ``Dockerfile`` should now look like this:
Building your image
+++++++++++++++++++
Go to the directory that has your ``Dockerfile`` and run the following command
to build a docker image. The ``-t`` flag lets you tag your image so its easier
to find later using the ``docker images`` command:
Go to the directory that has your ``Dockerfile`` and run the following
command to build a docker image. The ``-t`` flag lets you tag your
image so it's easier to find later using the ``docker images``
command:
.. code-block:: bash
docker build -t <your username>/centos-node-hello .
sudo docker build -t <your username>/centos-node-hello .
Your image will now be listed by docker:
.. code-block:: bash
docker images
sudo docker images
> # Example
> REPOSITORY TAG ID CREATED
@@ -177,17 +180,17 @@ container running in the background. Run the image you previously built:
.. code-block:: bash
docker run -d <your username>/centos-node-hello
sudo docker run -d <your username>/centos-node-hello
Print the output of your app:
.. code-block:: bash
# Get container ID
docker ps
sudo docker ps
# Print app output
docker logs <container id>
sudo docker logs <container id>
> # Example
> Running on http://localhost:8080
@@ -225,8 +228,8 @@ Now you can call your app using ``curl`` (install if needed via:
>
> Hello World
We hope this tutorial helped you get up and running with Node.js and CentOS on
docker. You can get the full source code at
We hope this tutorial helped you get up and running with Node.js and
CentOS on docker. You can get the full source code at
https://github.com/gasi/docker-node-hello.
Continue to :ref:`running_redis_service`.

View File

@@ -0,0 +1,158 @@
:title: PostgreSQL service How-To
:description: Running and installing a PostgreSQL service
:keywords: docker, example, package installation, postgresql
.. _postgresql_service:
PostgreSQL Service
==================
.. note::
A shorter version of `this blog post`_.
.. note::
As of version 0.5.2, docker requires root privileges to run.
You have to either manually adjust your system configuration (permissions on
/var/run/docker.sock or sudo config), or prefix `docker` with `sudo`. Check
`this thread`_ for details.
.. _this blog post: http://zaiste.net/2013/08/docker_postgresql_how_to/
.. _this thread: https://groups.google.com/forum/?fromgroups#!topic/docker-club/P3xDLqmLp0E
Installing PostgreSQL on Docker
-------------------------------
For clarity, I won't be showing command output.
Run an interactive shell in a Docker container.
.. code-block:: bash
sudo docker run -i -t ubuntu /bin/bash
Update its dependencies.
.. code-block:: bash
apt-get update
Install ``python-software-properties``.
.. code-block:: bash
apt-get install python-software-properties
apt-get install software-properties-common
Add Pitti's PostgreSQL repository. It contains the most recent stable release
of PostgreSQL i.e. ``9.2``.
.. code-block:: bash
add-apt-repository ppa:pitti/postgresql
apt-get update
Finally, install PostgreSQL 9.2
.. code-block:: bash
apt-get -y install postgresql-9.2 postgresql-client-9.2 postgresql-contrib-9.2
Now, create a PostgreSQL superuser role that can create databases and
other roles. Following Vagrant's convention, the role will be named
``docker`` with the password ``docker`` assigned to it.
.. code-block:: bash
sudo -u postgres createuser -P -d -r -s docker
Create a test database, also named ``docker``, owned by the previously
created ``docker`` role.
.. code-block:: bash
sudo -u postgres createdb -O docker docker
Adjust PostgreSQL configuration so that remote connections to the
database are possible. Make sure that inside
``/etc/postgresql/9.2/main/pg_hba.conf`` you have the following line:
.. code-block:: bash
host all all 0.0.0.0/0 md5
Additionally, inside ``/etc/postgresql/9.2/main/postgresql.conf``
uncomment ``listen_addresses`` so it is as follows:
.. code-block:: bash
listen_addresses='*'
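If you prefer to make both edits non-interactively from the shell, something
like the following should work (a sketch, assuming the PostgreSQL 9.2 paths
used above):
.. code-block:: bash
# allow password-authenticated connections from any address
echo "host    all             all             0.0.0.0/0               md5" >> /etc/postgresql/9.2/main/pg_hba.conf
# uncomment/replace listen_addresses so the server listens on all interfaces
sed -i "s/^#\?listen_addresses.*/listen_addresses='*'/" /etc/postgresql/9.2/main/postgresql.conf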
*Note:* this PostgreSQL setup is for development purposes only. Refer
to the PostgreSQL documentation for how to fine-tune these settings so
that they are secure enough.
Create an image and assign it a name. ``<container_id>`` is in the
Bash prompt; you can also locate it using ``docker ps -a``.
.. code-block:: bash
docker commit <container_id> <your username>/postgresql
Finally, run PostgreSQL server via ``docker``.
.. code-block:: bash
CONTAINER=$(sudo docker run -d -p 5432 \
-t <your username>/postgresql \
/bin/su postgres -c '/usr/lib/postgresql/9.2/bin/postgres \
-D /var/lib/postgresql/9.2/main \
-c config_file=/etc/postgresql/9.2/main/postgresql.conf')
Connect to the PostgreSQL server using ``psql``.
.. code-block:: bash
CONTAINER_IP=$(sudo docker inspect $CONTAINER | grep IPAddress | awk '{ print $2 }' | tr -d ',"')
psql -h $CONTAINER_IP -p 5432 -d docker -U docker -W
As before, create roles or databases if needed.
.. code-block:: bash
psql (9.2.4)
Type "help" for help.
docker=# CREATE DATABASE foo OWNER=docker;
CREATE DATABASE
Additionally, you can publish your newly created image to the Docker Index.
.. code-block:: bash
sudo docker login
Username: <your username>
[...]
.. code-block:: bash
sudo docker push <your username>/postgresql
PostgreSQL service auto-launch
------------------------------
Running our image seems complicated. We have to specify the whole command with
``docker run``. Let's simplify it so the service starts automatically when the
container starts.
.. code-block:: bash
sudo docker commit -run='{"Cmd": ["/bin/su", "postgres", "-c", "/usr/lib/postgresql/9.2/bin/postgres -D /var/lib/postgresql/9.2/main -c config_file=/etc/postgresql/9.2/main/postgresql.conf"], "PortSpecs": ["5432"]}' <container_id> <your username>/postgresql
From now on, just type ``docker run <your username>/postgresql`` and
PostgreSQL should automatically start.
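That is, using the ``sudo`` prefix and the ``-d`` flag from the earlier
examples so it runs in the background:
.. code-block:: bash
sudo docker run -d <your username>/postgresql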

View File

@@ -9,13 +9,16 @@ Python Web App
.. include:: example_header.inc
The goal of this example is to show you how you can author your own docker images using a parent image, making changes to it, and then saving the results as a new image. We will do that by making a simple hello flask web application image.
The goal of this example is to show you how you can author your own
docker images using a parent image, making changes to it, and then
saving the results as a new image. We will do that by making a simple
hello flask web application image.
**Steps:**
.. code-block:: bash
docker pull shykes/pybuilder
sudo docker pull shykes/pybuilder
We are downloading the "shykes/pybuilder" docker image
@@ -27,46 +30,66 @@ We set a URL variable that points to a tarball of a simple helloflask web app
.. code-block:: bash
BUILD_JOB=$(docker run -d -t shykes/pybuilder:latest /usr/local/bin/buildapp $URL)
BUILD_JOB=$(sudo docker run -d -t shykes/pybuilder:latest /usr/local/bin/buildapp $URL)
Inside of the "shykes/pybuilder" image there is a command called buildapp, we are running that command and passing the $URL variable from step 2 to it, and running the whole thing inside of a new container. BUILD_JOB will be set with the new container_id.
Inside of the "shykes/pybuilder" image there is a command called
buildapp, we are running that command and passing the $URL variable
from step 2 to it, and running the whole thing inside of a new
container. BUILD_JOB will be set with the new container_id.
.. code-block:: bash
docker attach $BUILD_JOB
sudo docker attach $BUILD_JOB
[...]
We attach to the new container to see what is going on. Ctrl-C to disconnect
While this container is running, we can attach to the new container to
see what is going on. Ctrl-C to disconnect.
.. code-block:: bash
BUILD_IMG=$(docker commit $BUILD_JOB _/builds/github.com/shykes/helloflask/master)
Save the changed we just made in the container to a new image called "_/builds/github.com/hykes/helloflask/master" and save the image id in the BUILD_IMG variable name.
sudo docker ps -a
List all docker containers. If this container has already finished
running, it will still be listed here.
.. code-block:: bash
WEB_WORKER=$(docker run -d -p 5000 $BUILD_IMG /usr/local/bin/runapp)
BUILD_IMG=$(sudo docker commit $BUILD_JOB _/builds/github.com/shykes/helloflask/master)
- **"docker run -d "** run a command in a new container. We pass "-d" so it runs as a daemon.
- **"-p 5000"** the web app is going to listen on this port, so it must be mapped from the container to the host system.
Save the changes we just made in the container to a new image called
``_/builds/github.com/shykes/helloflask/master`` and save the image id in
the BUILD_IMG variable.
.. code-block:: bash
WEB_WORKER=$(sudo docker run -d -p 5000 $BUILD_IMG /usr/local/bin/runapp)
- **"docker run -d "** run a command in a new container. We pass "-d"
so it runs as a daemon.
- **"-p 5000"** the web app is going to listen on this port, so it
must be mapped from the container to the host system.
- **"$BUILD_IMG"** is the image we want to run the command inside of.
- **/usr/local/bin/runapp** is the command which starts the web app.
Use the new image we just created and create a new container with network port 5000, and return the container id and store in the WEB_WORKER variable.
Use the new image we just created, create a new container exposing
network port 5000, and store the returned container id in the
WEB_WORKER variable.
.. code-block:: bash
docker logs $WEB_WORKER
sudo docker logs $WEB_WORKER
* Running on http://0.0.0.0:5000/
view the logs for the new container using the WEB_WORKER variable, and if everything worked as planned you should see the line "Running on http://0.0.0.0:5000/" in the log output.
View the logs for the new container using the WEB_WORKER variable, and
if everything worked as planned you should see the line "Running on
http://0.0.0.0:5000/" in the log output.
.. code-block:: bash
WEB_PORT=$(docker port $WEB_WORKER 5000)
WEB_PORT=$(sudo docker port $WEB_WORKER 5000)
lookup the public-facing port which is NAT-ed store the private port used by the container and store it inside of the WEB_PORT variable.
Look up the public-facing port which is NAT-ed to the container's
private port 5000 and store it in the WEB_PORT variable.
.. code-block:: bash
@@ -74,7 +97,8 @@ lookup the public-facing port which is NAT-ed store the private port used by the
curl http://127.0.0.1:$WEB_PORT
Hello world!
access the web app using curl. If everything worked as planned you should see the line "Hello world!" inside of your console.
Access the web app using curl. If everything worked as planned you
should see the line "Hello world!" inside of your console.
**Video:**

View File

@@ -1,22 +0,0 @@
:title: Running the Examples
:description: An overview on how to run the docker examples
:keywords: docker, examples, how to
.. _running_examples:
Running the Examples
--------------------
All the examples assume your machine is running the docker daemon. To run the docker daemon in the background, simply type:
.. code-block:: bash
sudo docker -d &
Now you can run docker in client mode: all commands will be forwarded to the docker daemon, so the client
can run from any account.
.. code-block:: bash
# now you can run docker commands from any account.
docker help

View File

@@ -16,12 +16,13 @@ Open a docker container
.. code-block:: bash
docker run -i -t base /bin/bash
sudo docker run -i -t ubuntu /bin/bash
Building your image
-------------------
Update your docker container, install the redis server. Once installed, exit out of docker.
Update your Docker container and install the Redis server. Once
installed, exit out of the Docker container.
.. code-block:: bash
@@ -45,7 +46,7 @@ container running in the background. Use your snapshot.
.. code-block:: bash
docker run -d -p 6379 <your username>/redis /usr/bin/redis-server
sudo docker run -d -p 6379 <your username>/redis /usr/bin/redis-server
Test 1
++++++
@@ -54,8 +55,8 @@ Connect to the container with the redis-cli.
.. code-block:: bash
docker ps # grab the new container id
docker inspect <container_id> # grab the ipaddress of the container
sudo docker ps # grab the new container id
sudo docker inspect <container_id> # grab the ipaddress of the container
redis-cli -h <ipaddress> -p 6379
redis 10.0.3.32:6379> set docker awesome
OK
@@ -70,8 +71,8 @@ Connect to the host os with the redis-cli.
.. code-block:: bash
docker ps # grab the new container id
docker port <container_id> 6379 # grab the external port
sudo docker ps # grab the new container id
sudo docker port <container_id> 6379 # grab the external port
ip addr show # grab the host ip address
redis-cli -h <host ipaddress> -p <external port>
redis 192.168.0.1:49153> set docker awesome

View File

@@ -0,0 +1,151 @@
:title: Running a Riak service
:description: Build a Docker image with Riak pre-installed
:keywords: docker, example, package installation, networking, riak
Riak Service
==============================
.. include:: example_header.inc
The goal of this example is to show you how to build a Docker image with Riak
pre-installed.
Creating a ``Dockerfile``
+++++++++++++++++++++++++
Create an empty file called ``Dockerfile``:
.. code-block:: bash
touch Dockerfile
Next, define the parent image you want to use to build your image on top of.
We'll use `Ubuntu <https://index.docker.io/_/ubuntu/>`_ (tag: ``latest``),
which is available on the `docker index <http://index.docker.io>`_:
.. code-block:: bash
# Riak
#
# VERSION 0.1.0
# Use the Ubuntu base image provided by dotCloud
FROM ubuntu:latest
MAINTAINER Hector Castro hector@basho.com
Next, we update the APT cache and apply any updates:
.. code-block:: bash
# Update the APT cache
RUN sed -i.bak 's/main$/main universe/' /etc/apt/sources.list
RUN apt-get update
RUN apt-get upgrade -y
After that, we install and set up a few dependencies:
- ``curl`` is used to download Basho's APT repository key
- ``lsb-release`` helps us derive the Ubuntu release codename
- ``openssh-server`` allows us to login to containers remotely and join Riak
nodes to form a cluster
- ``supervisor`` is used to manage the OpenSSH and Riak processes
.. code-block:: bash
# Install and setup project dependencies
RUN apt-get install -y curl lsb-release supervisor openssh-server
RUN mkdir -p /var/run/sshd
RUN mkdir -p /var/log/supervisor
RUN locale-gen en_US en_US.UTF-8
ADD supervisord.conf /etc/supervisor/conf.d/supervisord.conf
RUN echo 'root:basho' | chpasswd
Next, we add Basho's APT repository:
.. code-block:: bash
RUN curl -s http://apt.basho.com/gpg/basho.apt.key | apt-key add --
RUN echo "deb http://apt.basho.com $(lsb_release -cs) main" > /etc/apt/sources.list.d/basho.list
RUN apt-get update
After that, we install Riak and alter a few defaults:
.. code-block:: bash
# Install Riak and prepare it to run
RUN apt-get install -y riak
RUN sed -i.bak 's/127.0.0.1/0.0.0.0/' /etc/riak/app.config
RUN echo "ulimit -n 4096" >> /etc/default/riak
Almost there. Next, we add a hack to work around the lack of ``initctl``:
.. code-block:: bash
# Hack for initctl
# See: https://github.com/dotcloud/docker/issues/1024
RUN dpkg-divert --local --rename --add /sbin/initctl
RUN ln -s /bin/true /sbin/initctl
Then, we expose the Riak Protocol Buffers and HTTP interfaces, along with SSH:
.. code-block:: bash
# Expose Riak Protocol Buffers and HTTP interfaces, along with SSH
EXPOSE 8087 8098 22
Finally, run ``supervisord`` so that Riak and OpenSSH are started:
.. code-block:: bash
CMD ["/usr/bin/supervisord"]
Create a ``supervisord`` configuration file
+++++++++++++++++++++++++++++++++++++++++++
Create an empty file called ``supervisord.conf``. Make sure it's at the same
level as your ``Dockerfile``:
.. code-block:: bash
touch supervisord.conf
Populate it with the following program definitions:
.. code-block:: bash
[supervisord]
nodaemon=true
[program:sshd]
command=/usr/sbin/sshd -D
stdout_logfile=/var/log/supervisor/%(program_name)s.log
stderr_logfile=/var/log/supervisor/%(program_name)s.log
autorestart=true
[program:riak]
command=bash -c ". /etc/default/riak && /usr/sbin/riak console"
pidfile=/var/log/riak/riak.pid
stdout_logfile=/var/log/supervisor/%(program_name)s.log
stderr_logfile=/var/log/supervisor/%(program_name)s.log
Build the Docker image for Riak
+++++++++++++++++++++++++++++++
Now you should be able to build a Docker image for Riak:
.. code-block:: bash
docker build -t "<yourname>/riak" .
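Once the build succeeds, you could start a container from the image, for
example (a sketch; the variable name is only illustrative, and the plain
``-p <port>`` form publishes each exposed port to a random host port, as in
the other examples):
.. code-block:: bash
RIAK=$(sudo docker run -d -p 8087 -p 8098 -p 22 "<yourname>/riak")
sudo docker port $RIAK 8098   # find the host port mapped to Riak's HTTP interface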
Next steps
++++++++++
Riak is a distributed database. Many production deployments consist of `at
least five nodes <http://basho.com/why-your-riak-cluster-should-have-at-least-five-nodes/>`_.
See the `docker-riak <https://github.com/hectcastro/docker-riak>`_ project
for details on how to deploy a Riak cluster using Docker and Pipework.

View File

@@ -12,19 +12,27 @@ SSH Daemon Service
**Video:**
I've created a little screencast to show how to create an sshd service and connect to it. It is something like 11
minutes long and not entirely smooth, but it gives you a good idea.
I've created a little screencast to show how to create an sshd service
and connect to it. It is something like 11 minutes long and not
entirely smooth, but it gives you a good idea.
.. note::
This screencast was created before ``docker`` version 0.5.2, so the
daemon is unprotected and available via a TCP port. When you run
through the same steps in a newer version of ``docker``, you will
need to add ``sudo`` in front of each ``docker`` command in order
to reach the daemon over its protected Unix socket.
.. raw:: html
<div style="margin-top:10px;">
<iframe width="800" height="400" src="http://ascii.io/a/2637/raw" frameborder="0"></iframe>
</div>
You can also get this sshd container by using
::
docker pull dhrp/sshd
sudo docker pull dhrp/sshd
The password is 'screencast'
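To run the pulled image and connect to it, the steps look roughly like this
(a sketch condensing the transcript below; the container id, host IP and
port are placeholders):
.. code-block:: bash
sudo docker run -d -p 22 dhrp/sshd /usr/sbin/sshd -D
sudo docker ps                       # grab the <container_id>
sudo docker port <container_id> 22   # find the host port mapped to port 22
ssh root@<host ip> -p <host port>    # password: screencast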
@@ -33,47 +41,57 @@ The password is 'screencast'
.. code-block:: bash
# Hello! We are going to try and install openssh on a container and run it as a servic
# let's pull base to get a base ubuntu image.
$ docker pull base
# I had it so it was quick
# now let's connect using -i for interactive and with -t for terminal
# we execute /bin/bash to get a prompt.
$ docker run -i -t base /bin/bash
# now let's commit it
# which container was it?
$ docker ps -a |more
$ docker commit a30a3a2f2b130749995f5902f079dc6ad31ea0621fac595128ec59c6da07feea dhrp/sshd
# I gave the name dhrp/sshd for the container
# now we can run it again
$ docker run -d dhrp/sshd /usr/sbin/sshd -D # D for daemon mode
# is it running?
$ docker ps
# yes!
# let's stop it
$ docker stop 0ebf7cec294755399d063f4b1627980d4cbff7d999f0bc82b59c300f8536a562
$ docker ps
# and reconnect, but now open a port to it
$ docker run -d -p 22 dhrp/sshd /usr/sbin/sshd -D
$ docker port b2b407cf22cf8e7fa3736fa8852713571074536b1d31def3fdfcd9fa4fd8c8c5 22
# it has now given us a port to connect to
# we have to connect using a public ip of our host
$ hostname
# *ifconfig* is deprecated, better use *ip addr show* now
$ ifconfig
$ ssh root@192.168.33.10 -p 49153
# Ah! forgot to set root passwd
$ docker commit b2b407cf22cf8e7fa3736fa8852713571074536b1d31def3fdfcd9fa4fd8c8c5 dhrp/sshd
$ docker ps -a
$ docker run -i -t dhrp/sshd /bin/bash
$ passwd
$ exit
$ docker commit 9e863f0ca0af31c8b951048ba87641d67c382d08d655c2e4879c51410e0fedc1 dhrp/sshd
$ docker run -d -p 22 dhrp/sshd /usr/sbin/sshd -D
$ docker port a0aaa9558c90cf5c7782648df904a82365ebacce523e4acc085ac1213bfe2206 22
# *ifconfig* is deprecated, better use *ip addr show* now
$ ifconfig
$ ssh root@192.168.33.10 -p 49154
# Thanks for watching, Thatcher thatcher@dotcloud.com
# Hello! We are going to try and install openssh on a container and run it as a service
# let's pull ubuntu to get a base ubuntu image.
$ docker pull ubuntu
# I had it so it was quick
# now let's connect using -i for interactive and with -t for terminal
# we execute /bin/bash to get a prompt.
$ docker run -i -t ubuntu /bin/bash
# yes! we are in!
# now lets install openssh
$ apt-get update
$ apt-get install openssh-server
# ok. lets see if we can run it.
$ which sshd
# we need to create privilege separation directory
$ mkdir /var/run/sshd
$ /usr/sbin/sshd
$ exit
# now let's commit it
# which container was it?
$ docker ps -a |more
$ docker commit a30a3a2f2b130749995f5902f079dc6ad31ea0621fac595128ec59c6da07feea dhrp/sshd
# I gave the name dhrp/sshd for the container
# now we can run it again
$ docker run -d dhrp/sshd /usr/sbin/sshd -D # D for daemon mode
# is it running?
$ docker ps
# yes!
# let's stop it
$ docker stop 0ebf7cec294755399d063f4b1627980d4cbff7d999f0bc82b59c300f8536a562
$ docker ps
# and reconnect, but now open a port to it
$ docker run -d -p 22 dhrp/sshd /usr/sbin/sshd -D
$ docker port b2b407cf22cf8e7fa3736fa8852713571074536b1d31def3fdfcd9fa4fd8c8c5 22
# it has now given us a port to connect to
# we have to connect using a public ip of our host
$ hostname
# *ifconfig* is deprecated, better use *ip addr show* now
$ ifconfig
$ ssh root@192.168.33.10 -p 49153
# Ah! forgot to set root passwd
$ docker commit b2b407cf22cf8e7fa3736fa8852713571074536b1d31def3fdfcd9fa4fd8c8c5 dhrp/sshd
$ docker ps -a
$ docker run -i -t dhrp/sshd /bin/bash
$ passwd
$ exit
$ docker commit 9e863f0ca0af31c8b951048ba87641d67c382d08d655c2e4879c51410e0fedc1 dhrp/sshd
$ docker run -d -p 22 dhrp/sshd /usr/sbin/sshd -D
$ docker port a0aaa9558c90cf5c7782648df904a82365ebacce523e4acc085ac1213bfe2206 22
# *ifconfig* is deprecated, better use *ip addr show* now
$ ifconfig
$ ssh root@192.168.33.10 -p 49154
# Thanks for watching, Thatcher thatcher@dotcloud.com

View File

@@ -9,45 +9,152 @@ FAQ
Most frequently asked questions.
--------------------------------
1. **How much does Docker cost?**
How much does Docker cost?
..........................
Docker is 100% free; it is open source, so you can use it without paying.
2. **What open source license are you using?**
What open source license are you using?
.......................................
We are using the Apache License Version 2.0, see it here: https://github.com/dotcloud/docker/blob/master/LICENSE
We are using the Apache License Version 2.0, see it here:
https://github.com/dotcloud/docker/blob/master/LICENSE
3. **Does Docker run on Mac OS X or Windows?**
Does Docker run on Mac OS X or Windows?
.......................................
Not at this time, Docker currently only runs on Linux, but you can use VirtualBox to run Docker in a
virtual machine on your box, and get the best of both worlds. Check out the :ref:`install_using_vagrant` and :ref:`windows` installation guides.
Not at this time. Docker currently only runs on Linux, but you can
use VirtualBox to run Docker in a virtual machine on your box, and
get the best of both worlds. Check out the
:ref:`install_using_vagrant` and :ref:`windows` installation
guides.
4. **How do containers compare to virtual machines?**
How do containers compare to virtual machines?
..............................................
They are complementary. VMs are best used to allocate chunks of hardware resources. Containers operate at the process level, which makes them very lightweight and perfect as a unit of software delivery.
They are complementary. VMs are best used to allocate chunks of
hardware resources. Containers operate at the process level, which
makes them very lightweight and perfect as a unit of software
delivery.
5. **Can I help by adding some questions and answers?**
What does Docker add to just plain LXC?
.......................................
Docker is not a replacement for LXC. "LXC" refers to capabilities
of the Linux kernel (specifically namespaces and control groups)
which allow sandboxing processes from one another, and controlling
their resource allocations. On top of this low-level foundation of
kernel features, Docker offers a high-level tool with several
powerful functionalities:
* *Portable deployment across machines.*
Docker defines a format for bundling an application and all its
dependencies into a single object which can be transferred to
any Docker-enabled machine, and executed there with the
guarantee that the execution environment exposed to the
application will be the same. LXC implements process sandboxing,
which is an important pre-requisite for portable deployment, but
that alone is not enough for portable deployment. If you sent me
a copy of your application installed in a custom LXC
configuration, it would almost certainly not run on my machine
the way it does on yours, because it is tied to your machine's
specific configuration: networking, storage, logging, distro,
etc. Docker defines an abstraction for these machine-specific
settings, so that the exact same Docker container can run -
unchanged - on many different machines, with many different
configurations.
* *Application-centric.*
Docker is optimized for the deployment of applications, as
opposed to machines. This is reflected in its API, user
interface, design philosophy and documentation. By contrast, the
``lxc`` helper scripts focus on containers as lightweight
machines - basically servers that boot faster and need less
RAM. We think there's more to containers than just that.
* *Automatic build.*
Docker includes :ref:`a tool for developers to automatically
assemble a container from their source code <dockerbuilder>`,
with full control over application dependencies, build tools,
packaging etc. They are free to use ``make, maven, chef, puppet,
salt,`` Debian packages, RPMs, source tarballs, or any
combination of the above, regardless of the configuration of the
machines.
* *Versioning.*
Docker includes git-like capabilities for tracking successive
versions of a container, inspecting the diff between versions,
committing new versions, rolling back etc. The history also
includes how a container was assembled and by whom, so you get
full traceability from the production server all the way back to
the upstream developer. Docker also implements incremental
uploads and downloads, similar to ``git pull``, so new versions
of a container can be transferred by only sending diffs.
* *Component re-use.*
Any container can be used as a :ref:`"base image"
<base_image_def>` to create more specialized components. This
can be done manually or as part of an automated build. For
example you can prepare the ideal Python environment, and use it
as a base for 10 different applications. Your ideal Postgresql
setup can be re-used for all your future projects. And so on.
* *Sharing.*
Docker has access to a `public registry
<http://index.docker.io>`_ where thousands of people have
uploaded useful containers: anything from Redis, CouchDB,
Postgres to IRC bouncers to Rails app servers to Hadoop to base
images for various Linux distros. The :ref:`registry
<registryindexspec>` also includes an official "standard
library" of useful containers maintained by the Docker team. The
registry itself is open-source, so anyone can deploy their own
registry to store and transfer private containers, for internal
server deployments for example.
* *Tool ecosystem.*
Docker defines an API for automating and customizing the
creation and deployment of containers. There are a huge number
of tools integrating with Docker to extend its
capabilities. PaaS-like deployment (Dokku, Deis, Flynn),
multi-node orchestration (Maestro, Salt, Mesos, Openstack Nova),
management dashboards (docker-ui, Openstack Horizon, Shipyard),
configuration management (Chef, Puppet), continuous integration
(Jenkins, Strider, Travis), etc. Docker is rapidly establishing
itself as the standard for container-based tooling.
Do I lose my data when the container exits?
...........................................
Not at all! Any data that your application writes to disk gets preserved
in its container until you explicitly delete the container. The file
system for the container persists even after the container halts.
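For example (a minimal sketch; the file and image names are only
illustrative):
.. code-block:: bash
# write a file inside a container, which then exits
sudo docker run ubuntu /bin/sh -c 'echo hello > /data.txt'
# the stopped container, and the file it wrote, are still there
sudo docker ps -a                  # note the <container_id>
sudo docker diff <container_id>    # lists /data.txt as an added file
# commit it to a new image if you want to keep working with that data
sudo docker commit <container_id> <your username>/data-example
sudo docker run <your username>/data-example cat /data.txt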
Can I help by adding some questions and answers?
................................................
Definitely! You can fork `the repo`_ and edit the documentation sources.
42. **Where can I find more answers?**
Where can I find more answers?
..............................
You can find more answers on:
* `Docker club mailinglist`_
* `Docker user mailinglist`_
* `Docker developer mailinglist`_
* `IRC, docker on freenode`_
* `Github`_
* `Ask questions on Stackoverflow`_
* `Join the conversation on Twitter`_
.. _Docker club mailinglist: https://groups.google.com/d/forum/docker-club
.. _Docker user mailinglist: https://groups.google.com/d/forum/docker-user
.. _Docker developer mailinglist: https://groups.google.com/d/forum/docker-dev
.. _the repo: http://www.github.com/dotcloud/docker
.. _IRC, docker on freenode: irc://chat.freenode.net#docker
.. _Github: http://www.github.com/dotcloud/docker
.. _Ask questions on Stackoverflow: http://stackoverflow.com/search?q=docker
.. _Join the conversation on Twitter: http://twitter.com/getdocker
.. _Join the conversation on Twitter: http://twitter.com/docker
Looking for something else to read? Check out the :ref:`hello_world` example.

View File

@@ -1,127 +1,36 @@
:title: Introduction
:description: An introduction to docker and standard containers?
:title: Docker Documentation
:description: An overview of the Docker Documentation
:keywords: containers, lxc, concepts, explanation
.. _introduction:
.. image:: static_files/dockerlogo-h.png
Introduction
============
------------
Docker -- The Linux container runtime
-------------------------------------
``docker``, the Linux Container Runtime, runs Unix processes with
strong guarantees of isolation across servers. Your software runs
repeatably everywhere because its :ref:`container_def` includes all of
its dependencies.
Docker complements LXC with a high-level API which operates at the process level. It runs unix processes with strong guarantees of isolation and repeatability across servers.
``docker`` runs three ways:
Docker is a great building block for automating distributed systems: large-scale web deployments, database clusters, continuous deployment systems, private PaaS, service-oriented architectures, etc.
* as a daemon to manage LXC containers on your :ref:`Linux host
<kernel>` (``sudo docker -d``)
* as a :ref:`CLI <cli>` which talks to the daemon's `REST API
<api/docker_remote_api>`_ (``docker run ...``)
* as a client of :ref:`Repositories <working_with_the_repository>`
that let you share what you've built (``docker pull, docker
commit``).
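For example, a single short session can exercise all three (a sketch; the
image name is only illustrative):
.. code-block:: bash
sudo docker -d &                          # start the daemon on your Linux host
sudo docker pull ubuntu                   # fetch an image from the public repository
sudo docker run -i -t ubuntu /bin/bash    # use the CLI to start a container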
Each use of ``docker`` is documented here. The features of Docker are
currently in active development, so this documentation will change
frequently.
- **Heterogeneous payloads** Any combination of binaries, libraries, configuration files, scripts, virtualenvs, jars, gems, tarballs, you name it. No more juggling between domain-specific tools. Docker can deploy and run them all.
- **Any server** Docker can run on any x64 machine with a modern linux kernel - whether it's a laptop, a bare metal server or a VM. This makes it perfect for multi-cloud deployments.
- **Isolation** docker isolates processes from each other and from the underlying host, using lightweight containers.
- **Repeatability** Because containers are isolated in their own filesystem, they behave the same regardless of where, when, and alongside what they run.
.. image:: concepts/images/lego_docker.jpg
What is a Standard Container?
-----------------------------
Docker defines a unit of software delivery called a Standard Container. The goal of a Standard Container is to encapsulate a software component and all its dependencies in
a format that is self-describing and portable, so that any compliant runtime can run it without extra dependency, regardless of the underlying machine and the contents of the container.
The spec for Standard Containers is currently work in progress, but it is very straightforward. It mostly defines 1) an image format, 2) a set of standard operations, and 3) an execution environment.
A great analogy for this is the shipping container. Just like Standard Containers are a fundamental unit of software delivery, shipping containers (http://bricks.argz.com/ins/7823-1/12) are a fundamental unit of physical delivery.
Standard operations
~~~~~~~~~~~~~~~~~~~
Just like shipping containers, Standard Containers define a set of STANDARD OPERATIONS. Shipping containers can be lifted, stacked, locked, loaded, unloaded and labelled. Similarly, standard containers can be started, stopped, copied, snapshotted, downloaded, uploaded and tagged.
Content-agnostic
~~~~~~~~~~~~~~~~~~~
Just like shipping containers, Standard Containers are CONTENT-AGNOSTIC: all standard operations have the same effect regardless of the contents. A shipping container will be stacked in exactly the same way whether it contains Vietnamese powder coffee or spare Maserati parts. Similarly, Standard Containers are started or uploaded in the same way whether they contain a postgres database, a php application with its dependencies and application server, or Java build artifacts.
Infrastructure-agnostic
~~~~~~~~~~~~~~~~~~~~~~~~~~
Both types of containers are INFRASTRUCTURE-AGNOSTIC: they can be transported to thousands of facilities around the world, and manipulated by a wide variety of equipment. A shipping container can be packed in a factory in Ukraine, transported by truck to the nearest routing center, stacked onto a train, loaded into a German boat by an Australian-built crane, stored in a warehouse at a US facility, etc. Similarly, a standard container can be bundled on my laptop, uploaded to S3, downloaded, run and snapshotted by a build server at Equinix in Virginia, uploaded to 10 staging servers in a home-made Openstack cluster, then sent to 30 production instances across 3 EC2 regions.
Designed for automation
~~~~~~~~~~~~~~~~~~~~~~~~~~
Because they offer the same standard operations regardless of content and infrastructure, Standard Containers, just like their physical counterpart, are extremely well-suited for automation. In fact, you could say automation is their secret weapon.
Many things that once required time-consuming and error-prone human effort can now be programmed. Before shipping containers, a bag of powder coffee was hauled, dragged, dropped, rolled and stacked by 10 different people in 10 different locations by the time it reached its destination. 1 out of 50 disappeared. 1 out of 20 was damaged. The process was slow, inefficient and cost a fortune - and was entirely different depending on the facility and the type of goods.
Similarly, before Standard Containers, by the time a software component ran in production, it had been individually built, configured, bundled, documented, patched, vendored, templated, tweaked and instrumented by 10 different people on 10 different computers. Builds failed, libraries conflicted, mirrors crashed, post-it notes were lost, logs were misplaced, cluster updates were half-broken. The process was slow, inefficient and cost a fortune - and was entirely different depending on the language and infrastructure provider.
Industrial-grade delivery
~~~~~~~~~~~~~~~~~~~~~~~~~~
There are 17 million shipping containers in existence, packed with every physical good imaginable. Every single one of them can be loaded on the same boats, by the same cranes, in the same facilities, and sent anywhere in the World with incredible efficiency. It is embarrassing to think that a 30 ton shipment of coffee can safely travel half-way across the World in *less time* than it takes a software team to deliver its code from one datacenter to another sitting 10 miles away.
With Standard Containers we can put an end to that embarrassment, by making INDUSTRIAL-GRADE DELIVERY of software a reality.
Standard Container Specification
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
(TODO)
Image format
~~~~~~~~~~~~
Standard operations
~~~~~~~~~~~~~~~~~~~
- Copy
- Run
- Stop
- Wait
- Commit
- Attach standard streams
- List filesystem changes
- ...
Execution environment
~~~~~~~~~~~~~~~~~~~~~
Root filesystem
^^^^^^^^^^^^^^^
Environment variables
^^^^^^^^^^^^^^^^^^^^^
Process arguments
^^^^^^^^^^^^^^^^^
Networking
^^^^^^^^^^
Process namespacing
^^^^^^^^^^^^^^^^^^^
Resource limits
^^^^^^^^^^^^^^^
Process monitoring
^^^^^^^^^^^^^^^^^^
Logging
^^^^^^^
Signals
^^^^^^^
Pseudo-terminal allocation
^^^^^^^^^^^^^^^^^^^^^^^^^^
Security
^^^^^^^^
For an overview of Docker, please see the `Introduction
<http://www.docker.io>`_. When you're ready to start working with
Docker, we have a `quick start <http://www.docker.io/gettingstarted>`_
and a more in-depth guide to :ref:`ubuntu_linux` and other
:ref:`installation_list` paths including prebuilt binaries,
Vagrant-created VMs, Rackspace and Amazon instances.
Enough reading! :ref:`Try it out! <running_examples>`

View File

@@ -1,18 +1,77 @@
:title: Installation on Amazon EC2
:description: Docker installation on Amazon EC2 with a single vagrant command. Vagrant 1.1 or higher is required.
:title: Installation on Amazon EC2
:description: Docker installation on Amazon EC2
:keywords: amazon ec2, virtualization, cloud, docker, documentation, installation
Amazon EC2
==========
Please note this is a community contributed installation path. The only 'official' installation is using the
:ref:`ubuntu_linux` installation path. This version may sometimes be out of date.
.. include:: install_header.inc
There are several ways to install Docker on AWS EC2:
Installation
------------
* :ref:`amazonquickstart` or
* :ref:`amazonstandard` or
* :ref:`amazonvagrant`
Docker can now be installed on Amazon EC2 with a single vagrant command. Vagrant 1.1 or higher is required.
**You'll need an** `AWS account <http://aws.amazon.com/>`_ **first, of course.**
.. _amazonquickstart:
Amazon QuickStart
-----------------
1. **Choose an image:**
* Open http://cloud-images.ubuntu.com/locator/ec2/
* Enter ``amd64 precise`` in the search field (it will search as you
type)
* Pick an image by clicking on the image name. *An EBS-enabled
image will let you use a t1.micro instance.* Clicking on the image
name will take you to your AWS Console.
2. **Tell CloudInit to install Docker:**
* Enter ``#include https://get.docker.io`` into the instance *User
Data*. `CloudInit <https://help.ubuntu.com/community/CloudInit>`_
is part of the Ubuntu image you chose and it bootstraps from this
*User Data*.
3. After a few more standard choices where defaults are probably ok, your
AWS Ubuntu instance with Docker should be running!
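For reference, the *User Data* field from step 2 contains just this single
line:
.. code-block:: bash
#include https://get.docker.io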
**If this is your first AWS instance, you may need to set up your
Security Group to allow SSH.** By default all incoming ports to your
new instance will be blocked by the AWS Security Group, so you might
just get timeouts when you try to connect.
Installing with ``get.docker.io`` (as above) will create a service
named ``dockerd``. You may want to set up a :ref:`docker group
<dockergroup>` and add the *ubuntu* user to it so that you don't have
to use ``sudo`` for every Docker command.
Once you've got Docker installed, you're ready to try it out -- head
on over to the :doc:`../use/basics` or :doc:`../examples/index` section.
.. _amazonstandard:
Standard Ubuntu Installation
----------------------------
If you want a more hands-on installation, then you can follow the
:ref:`ubuntu_linux` instructions installing Docker on any EC2 instance
running Ubuntu. Just follow Step 1 from :ref:`amazonquickstart` to
pick an image (or use one of your own) and skip the step with the
*User Data*. Then continue with the :ref:`ubuntu_linux` instructions.
.. _amazonvagrant:
Use Vagrant
-----------
.. include:: install_unofficial.inc
And finally, if you prefer to work through Vagrant, you can install
Docker that way too. Vagrant 1.1 or higher is required.
1. Install vagrant from http://www.vagrantup.com/ (or use your package manager)
2. Install the vagrant aws plugin
@@ -31,16 +90,17 @@ Docker can now be installed on Amazon EC2 with a single vagrant command. Vagrant
4. Check your AWS environment.
Create a keypair specifically for EC2, give it a name and save it to your disk. *I usually store these in my ~/.ssh/ folder*.
Check that your default security group has an inbound rule to accept SSH (port 22) connections.
Create a keypair specifically for EC2, give it a name and save it
to your disk. *I usually store these in my ~/.ssh/ folder*.
Check that your default security group has an inbound rule to
accept SSH (port 22) connections.
5. Inform Vagrant of your settings
Vagrant will read your access credentials from your environment, so we need to set them there first. Make sure
you have everything on amazon aws setup so you can (manually) deploy a new image to EC2.
Vagrant will read your access credentials from your environment, so
we need to set them there first. Make sure you have everything set up
on Amazon AWS so that you can (manually) deploy a new image to EC2.
::
@@ -54,7 +114,8 @@ Docker can now be installed on Amazon EC2 with a single vagrant command. Vagrant
* ``AWS_ACCESS_KEY_ID`` - The API key used to make requests to AWS
* ``AWS_SECRET_ACCESS_KEY`` - The secret key to make AWS API requests
* ``AWS_KEYPAIR_NAME`` - The name of the keypair used for this EC2 instance
* ``AWS_SSH_PRIVKEY`` - The path to the private key for the named keypair, for example ``~/.ssh/docker.pem``
* ``AWS_SSH_PRIVKEY`` - The path to the private key for the named
keypair, for example ``~/.ssh/docker.pem``
You can check if they are set correctly by doing something like
@@ -69,10 +130,12 @@ Docker can now be installed on Amazon EC2 with a single vagrant command. Vagrant
vagrant up --provider=aws
If it stalls indefinitely on ``[default] Waiting for SSH to become available...``, Double check your default security
zone on AWS includes rights to SSH (port 22) to your container.
If it stalls indefinitely on ``[default] Waiting for SSH to become
available...``, double-check that your default security group on AWS
includes rights to SSH (port 22) to your container.
If you have an advanced AWS setup, you might want to have a look at https://github.com/mitchellh/vagrant-aws
If you have an advanced AWS setup, you might want to have a look at
https://github.com/mitchellh/vagrant-aws
7. Connect to your machine
@@ -86,7 +149,7 @@ Docker can now be installed on Amazon EC2 with a single vagrant command. Vagrant
.. code-block:: bash
docker
sudo docker
Continue with the :ref:`hello_world` example.
Continue with the :ref:`hello_world` example.

View File

@@ -7,10 +7,6 @@
Arch Linux
==========
Please note this is a community contributed installation path. The only 'official' installation is using the
:ref:`ubuntu_linux` installation path. This version may sometimes be out of date.
Installing on Arch Linux is not officially supported but can be handled via
either of the following AUR packages:
@@ -36,6 +32,10 @@ either AUR package.
Installation
------------
.. include:: install_header.inc
.. include:: install_unofficial.inc
The instructions here assume **yaourt** is installed. See
`Arch User Repository <https://wiki.archlinux.org/index.php/Arch_User_Repository#Installing_packages>`_
for information on building and installing packages from the AUR if you have not

View File

@@ -7,9 +7,10 @@
Binaries
========
**Please note this project is currently under heavy development. It should not be used in production.**
.. include:: install_header.inc
**This instruction set is meant for hackers who want to try out Docker on a variety of environments.**
**This instruction set is meant for hackers who want to try out Docker
on a variety of environments.**
Right now, the officially supported distributions are:
@@ -23,22 +24,18 @@ But we know people have had success running it under
- Suse
- :ref:`arch_linux`
Check Your Kernel
-----------------
Dependencies:
-------------
* 3.8 Kernel (read more about :ref:`kernel`)
* AUFS filesystem support
* lxc
* xz-utils
Your host's Linux kernel must meet the Docker :ref:`kernel` requirements.
Get the docker binary:
----------------------
.. code-block:: bash
wget http://get.docker.io/builds/Linux/x86_64/docker-latest.tgz
tar -xf docker-latest.tgz
wget --output-document=docker https://get.docker.io/builds/Linux/x86_64/docker-latest
chmod +x docker
Run the docker daemon
@@ -56,10 +53,10 @@ Run your first container!
.. code-block:: bash
# check your docker version
./docker version
sudo ./docker version
# run a container and open an interactive shell in the container
./docker run -i -t ubuntu /bin/bash
sudo ./docker run -i -t ubuntu /bin/bash

View File

@@ -0,0 +1,125 @@
:title: Installation on Gentoo Linux
:description: Docker installation instructions and nuances for Gentoo Linux.
:keywords: gentoo linux, virtualization, docker, documentation, installation
.. _gentoo_linux:
Gentoo Linux
============
.. include:: install_header.inc
.. include:: install_unofficial.inc
Installing Docker on Gentoo Linux can be accomplished by using the overlay
provided at https://github.com/tianon/docker-overlay. The most up-to-date
documentation for properly installing the overlay can be found in the overlay
README. The information here is provided for reference, and may be out of date.
Installation
^^^^^^^^^^^^
Ensure that layman is installed:
.. code-block:: bash
sudo emerge -av app-portage/layman
Using your favorite editor, add
``https://raw.github.com/tianon/docker-overlay/master/repositories.xml`` to the
``overlays`` section in ``/etc/layman/layman.cfg`` (as per instructions on the
`Gentoo Wiki <http://wiki.gentoo.org/wiki/Layman#Adding_custom_overlays>`_),
then invoke the following:
.. code-block:: bash
sudo layman -f -a docker
Once that completes, the ``app-emulation/docker`` package will be available
for emerge:
.. code-block:: bash
sudo emerge -av app-emulation/docker
If you prefer to use the official binaries, or just do not wish to compile
docker, emerge ``app-emulation/docker-bin`` instead. It is important to
remember that Gentoo is still an unsupported platform, even when using the
official binaries.
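In that case the emerge invocation is simply:
.. code-block:: bash
sudo emerge -av app-emulation/docker-bin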
The package should already include all the necessary dependencies. For the
simplest installation experience, use ``sys-kernel/aufs-sources`` directly as
your kernel sources. If you prefer not to use ``sys-kernel/aufs-sources``, the
portage tree also contains ``sys-fs/aufs3``, which contains the patches
necessary for adding AUFS support to other kernel source packages (and a
``kernel-patch`` use flag to perform the patching automatically).
Between ``app-emulation/lxc`` and ``app-emulation/docker``, all the
necessary kernel configuration flags should be checked for and warned about in
the standard manner.
If any issues arise from this ebuild or the resulting binary, including and
especially missing kernel configuration flags and/or dependencies, `open an
issue <https://github.com/tianon/docker-overlay/issues>`_ on the docker-overlay
repository or ping tianon in the #docker IRC channel.
Starting Docker
^^^^^^^^^^^^^^^
Ensure that you are running a kernel that includes the necessary AUFS support
and includes all the necessary modules and/or configuration for LXC.
OpenRC
------
To start the docker daemon:
.. code-block:: bash
sudo /etc/init.d/docker start
To start on system boot:
.. code-block:: bash
sudo rc-update add docker default
systemd
-------
To start the docker daemon:
.. code-block:: bash
sudo systemctl start docker.service
To start on system boot:
.. code-block:: bash
sudo systemctl enable docker.service
Network Configuration
^^^^^^^^^^^^^^^^^^^^^
IPv4 packet forwarding is disabled by default, so internet access from inside
the container will not work unless ``net.ipv4.ip_forward`` is enabled:
.. code-block:: bash
sudo sysctl -w net.ipv4.ip_forward=1
Or, to enable it more permanently:
.. code-block:: bash
echo net.ipv4.ip_forward = 1 | sudo tee /etc/sysctl.d/docker.conf
fork/exec /usr/sbin/lxc-start: operation not permitted
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Unfortunately, Gentoo suffers from `issue #1422
<https://github.com/dotcloud/docker/issues/1422>`_, meaning that after every
fresh start of docker, the first docker run fails due to some tricky terminal
issues, so be sure to run something trivial (such as ``docker run -i -t busybox
echo hi``) before attempting to run anything important.
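In other words, right after a fresh start of the docker daemon run something
like:
.. code-block:: bash
sudo docker run -i -t busybox echo hi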

(Three binary image files were added: 38 KiB, 59 KiB and 23 KiB.)

View File

@@ -1,12 +1,17 @@
:title: Documentation
:description: -- todo: change me
:keywords: todo, docker, documentation, installation, OS support
:title: Docker Installation
:description: many ways to install Docker
:keywords: docker, installation
.. _installation_list:
Installation
============
There are a number of ways to install Docker, depending on where you
want to run the daemon. The :ref:`ubuntu_linux` installation is the
officially-tested version, and the community adds more techniques for
installing Docker all the time.
Contents:
.. toctree::
@@ -19,5 +24,6 @@ Contents:
amazon
rackspace
archlinux
gentoolinux
upgrading
kernel

View File

@@ -0,0 +1,7 @@
.. note::
Docker is still under heavy development! We don't recommend using
it in production yet, but we're getting closer with each
release. Please see our blog post, `"Getting to Docker 1.0"
<http://blog.docker.io/2013/08/getting-to-docker-1-0/>`_

Some files were not shown because too many files have changed in this diff.